Virtualization and Cloud executives share their predictions for 2013. Read them in this VMblog.com series exclusive.
Contributed article by Dave Laurello, president and CEO of Stratus Technologies
The Cloud Is No Place for Mission-Critical Applications. Not in 2013 or the Foreseeable Future.
Cloud computing today is like a gifted kid with raw talent
and enormous potential in need of nurturing and development. Everywhere I look,
I see an adolescent industry with raging hormones.
Here are just a few indications. IDC's CloudTrack survey found the cloud software industry is poised to
grow five times faster than the software market as a whole through 2016, reaching $63
billion, propelled by a 24% CAGR. In another industry survey, 62% of CIOs and IT
managers in Australia said they will increase spending on cloud in the coming
year, with 78% saying they will replace physical servers with cloud services.
The technology maturity model from the Cloud Security Alliance and ISACA still places
cloud computing in the infancy stage, based on their IT end-user survey. The
industry is in an era of early adopters and "most businesses don't want to be
stuck changing the diapers of untested technology."
Another interesting forecast from the IDC CloudTrack survey goes to the heart
of my opening assertion. Even with this
blistering rush to cloud adoption, 80% of the Global 2000 companies will still
have 75% of IT resources running onsite by 2016. I think one big reason for
that is that the G2000 believe, as I do, that mission-critical applications should remain
under house control, subject to as little diaper changing as possible.
Unplanned outages and unpredictable recovery times will
continue to plague cloud computing. That's because today the industry is
focused on lowest-cost per compute cycle and market share. For serious players
to pursue any other business strategy at this stage of the game would be foolhardy.
The financial exposure and customer dissatisfaction potentially resulting from
downtime are completely acceptable costs of doing business, given the upside
suggested by the survey findings above and the abundance of corroborating
evidence. That's little comfort to the thousands of businesses shut down while
their cloud provider figures out its problem and resumes service.
For many applications, downtime is tolerable. Potentially, these
are great candidates for cloud computing. So are test and development projects.
Over 50% of my own company's applications are in the cloud. Unless you know
your cost of downtime and the value of individual applications, however, it is
virtually impossible to make sound technology choices and investment decisions.
Amazingly, fewer than half of businesses make the effort to figure this out. Companies
that do know their cost of downtime to the hour or the minute have the
knowledge necessary to make sound judgments about their uptime requirements.
How bad could it be? A number of reputable industry analyst firms
peg the cost of an hour of IT system downtime for the average company well
above $100,000. The biggest cost culprits, of course, are the applications your
company relies on most and would want up and running first after an outage. The
principal measure of an application's value is revenue impact. Lost sales,
diminished productivity, wages, compliance violations, reputation damage, contractual
penalties, waste and scrap, customer dissatisfaction and other factors can all
contribute to the revenue impact of a failed application. Unearthing all of the
contributors is the only way to arrive at your true cost and, by extension, the
degree of uptime protection you need.
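Unearthing those contributors amounts to a simple roll-up once the line items are known. The sketch below illustrates the idea; the categories mirror the list above, but every dollar figure is a placeholder assumption, not data from any real assessment.

```python
# Illustrative roll-up of per-hour downtime cost contributors.
# Categories follow the article's list; all dollar amounts are
# made-up placeholders standing in for a real internal assessment.
cost_contributors = {
    "lost sales": 60_000,
    "diminished productivity and wages": 25_000,
    "compliance violations": 10_000,
    "contractual penalties": 8_000,
    "waste and scrap": 5_000,
}

# True hourly cost is the sum of every contributor you can identify.
true_hourly_cost = sum(cost_contributors.values())
print(f"Estimated cost per hour of downtime: ${true_hourly_cost:,}")
```

The point of the exercise is completeness: each contributor you fail to unearth understates the total, and with it the level of uptime protection the application deserves.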
Here are a few suggestions to consider if moving workloads
to the cloud is in your 2013 plans:
Inventory all of your applications, including
those running remotely, as a first step to creating a hierarchy of application
criticality and value to your organization. Many IT managers are amazed at how
many they actually support. This can help you determine which applications are most ripe for cloud consideration (or the trash).
Be open to the possibility that moving
applications to the cloud may not really save money. My company's CIO
investigated putting email in the cloud. He found that cost savings were
negligible. Application SLAs also would have taken a hit. Without the promise
of big savings, there was no reason to assume the risk.
Unless your company is less than five years old,
you are likely to find legacy applications that simply cannot migrate to the
cloud without tremendous cost and re-coding, if at all. They can certainly be
virtualized, but that doesn't mean they are cloud-worthy. These applications
may also be the ones your company relies on most for business success. Pick a
few and dig deep into the impact of failure on your organization, factoring in
cost, disruption and unintended consequences (impact on other departments,
supply chain partners, operational systems resynchronization).
Look at how many single points of failure there
are in your supporting IT infrastructure for these applications. The average
company experiences 3 downtime instances per year, each lasting about 4.7 hours
for a combined total of 14.1 hours (The
Aberdeen Group). Do the math and decide if these applications deserve
higher levels of availability.
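"Do the math" here is straightforward. The sketch below combines the Aberdeen Group averages cited above with the analyst estimate of $100,000 per hour; both inputs are the article's survey figures, treated as assumptions rather than measurements of any particular company.

```python
# Rough annual downtime-cost estimate from the figures cited above.
# Inputs are the article's survey numbers, used here as assumptions.
HOURLY_DOWNTIME_COST = 100_000   # analyst estimate: > $100,000 per hour
OUTAGES_PER_YEAR = 3             # Aberdeen Group: average outage count
HOURS_PER_OUTAGE = 4.7           # Aberdeen Group: average outage length

def annual_downtime_cost(hourly_cost, outages, hours_each):
    """Return (total downtime hours per year, estimated annual cost)."""
    hours = outages * hours_each
    return hours, hours * hourly_cost

hours, cost = annual_downtime_cost(
    HOURLY_DOWNTIME_COST, OUTAGES_PER_YEAR, HOURS_PER_OUTAGE)
print(f"{hours:.1f} hours of downtime, roughly ${cost:,.0f} per year")
```

At the cited averages this works out to about 14.1 hours and $1.4 million a year, which is the kind of number that justifies paying for higher availability.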
The cloud continues to change how computing is done. As with
most tectonic shifts, this is a process, not an event. Not everything belongs
there, and most certainly not critical business applications.
About the Author
Dave Laurello is president and CEO of Stratus Technologies.
He rejoined Stratus in January 2000, coming from Lucent Technologies, where he
held the position of vice president and general manager of the CNS business
unit. At Lucent, Dave was responsible for engineering, product and business
management and marketing. Prior to this, he was vice president of engineering
of the carrier signaling and management business unit at Ascend Communications.
From 1995 to 1998, Dave was vice president of hardware engineering and product
planning at Stratus.