Industry executives and experts share their predictions for 2018. Read them in this 10th annual VMblog.com series exclusive.
Contributed by Dr. Jai Menon, Chief Scientist at Cloudistics
Why We Won't See More Than 50% of Workloads Running in the Public Cloud in 2018 or Ever
As the cloud and cloud technologies continue to mature, and we become able to see the woods through the hyperbole, it is becoming evident that a great many notions about the public cloud are way off base. One such belief is that everything will inevitably move to the public cloud. Frankly, this is a myth, and it inspires my prediction: we will not see more than 50% of workloads in the public cloud, not in 2018 and probably not ever. Here's why.
Future workloads will need more real-time processing than the cloud can deliver. Let's not beat around the bush: many emerging workloads need the kind of low-latency processing that only edge computing can deliver; not the cloud, and not (yet) the fog, but only the edge. Self-driving cars and drones are prime examples of applications that simply can't afford to send image data to the cloud and back for processing, because doing so would be too slow and would require too much core/backhaul bandwidth.
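To see why, consider a quick back-of-envelope calculation. The sketch below is a minimal illustration in Python; every figure in it (camera count, data rates, latencies, the reaction deadline) is a hypothetical assumption chosen for illustration, not a measurement of any real vehicle or cloud.

```python
# Back-of-envelope sketch: why round-tripping sensor data to the cloud
# can break a real-time budget. All numbers are illustrative assumptions.

NUM_CAMERAS = 6            # assumed cameras on a self-driving car
MBPS_PER_CAMERA = 30       # assumed compressed video rate, Mbit/s each
DEADLINE_MS = 100          # assumed end-to-end reaction budget, ms

CLOUD_RTT_MS = 60          # assumed WAN round trip to a cloud region, ms
CLOUD_PROCESSING_MS = 50   # assumed cloud-side image processing time, ms
EDGE_PROCESSING_MS = 25    # assumed on-vehicle (edge) processing time, ms

# Sustained uplink needed just to ship the camera feeds to the cloud.
uplink_mbps = NUM_CAMERAS * MBPS_PER_CAMERA
print(f"Sustained uplink per vehicle: {uplink_mbps} Mbit/s")

for name, total_ms in [("cloud", CLOUD_RTT_MS + CLOUD_PROCESSING_MS),
                       ("edge", EDGE_PROCESSING_MS)]:
    verdict = "meets" if total_ms <= DEADLINE_MS else "misses"
    print(f"{name} path: {total_ms} ms {verdict} the {DEADLINE_MS} ms deadline")
```

Under these assumptions the cloud path blows the reaction budget before a response even reaches the vehicle, while the edge path leaves comfortable headroom; multiply the uplink figure across a fleet of vehicles and the backhaul-bandwidth problem becomes just as clear.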
Public-cloud-like on-premises systems are an increasingly viable alternative to the public cloud. Indeed, it can no longer be argued that the agility and economics of the public cloud cannot be replicated on-premises. Hyperconverged and superconverged systems increasingly provide the infrastructure for new on-premises, app-centric, scale-out cloud platforms that support public-cloud-like usage-based pricing and higher-level application services. The result is a public cloud experience on-premises, and in many instances one that the customer doesn't even have to own or manage. It turns out that you can have your cake and eat it too. This makes it the ideal public-cloud alternative for the 75% of workloads that have reasonably predictable IT resource requirements. The other 25% of workloads are spiky and unpredictable, and therefore clearly ideal for the public cloud.
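The predictable/spiky split comes down to simple break-even arithmetic: rented capacity wins when average utilization is low, and owned capacity wins when it is steady and high. The Python sketch below illustrates the point; the dollar figures are purely hypothetical assumptions, not quotes from any vendor.

```python
# Break-even sketch: owned vs. rented capacity. Prices are hypothetical,
# chosen only to illustrate why predictable workloads favor on-premises.

ON_PREM_MONTHLY = 300.0   # assumed all-in monthly cost of one owned server, $
CLOUD_HOURLY = 1.0        # assumed on-demand price of a comparable instance, $/h
HOURS_PER_MONTH = 730

def cloud_monthly_cost(avg_utilization: float) -> float:
    """Usage-based cloud cost: pay only for the hours actually consumed."""
    return CLOUD_HOURLY * HOURS_PER_MONTH * avg_utilization

for util in (0.10, 0.40, 0.80):
    cloud = cloud_monthly_cost(util)
    winner = "cloud" if cloud < ON_PREM_MONTHLY else "on-premises"
    print(f"{util:4.0%} avg utilization: cloud ${cloud:6.2f} "
          f"vs on-prem ${ON_PREM_MONTHLY:.2f} -> {winner} wins")
```

Under these assumed prices the break-even point sits at roughly 41% average utilization: a steady, predictable workload running above that line is cheaper to own, while a spiky one that averages well below it is cheaper to rent.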
Cost, performance, and issues of governance/control are driving customers back on-premises. Granted, this is not so widespread a phenomenon that you could call it a trend, nor do I foresee it becoming one, but around 10-15% of customers who have tried the public cloud elect to "uncloud." Instagram is a case in point: according to Jay Parikh of parent company Facebook, the decision not only led to an 80% improvement in upload times, but also cut usage to one on-premises server for every three servers they had previously rented on AWS. Faced with such benefits, the decision to "uncloud" is what you'd call a no-brainer. Finally, as companies grow, so too do their requirements for governance and control, and this is often a leading cause of "unclouding" for such businesses.
Next, there are those customers that will never move to the cloud. This group includes firms that can't move to the cloud for a variety of reasons: they deal in personally sensitive data and/or massive volumes of data, have high-performance requirements, or have concerns about data sovereignty, regulation, or perceived loss of security, as well as those that run legacy applications or fear cloud lock-in.
Finally, we are also seeing steady growth in the popularity of alternatives such as co-location and cloud hosting among companies that don't want to deploy their own on-premises infrastructure.
These are among the many compelling reasons that I predict we will not see more than 50% of workloads running in the public cloud in 2018, or ever.
Finally, I have another prediction for 2018, one that I feel particularly strongly about and which I addressed at length in a recent byline on this esteemed platform. It is this: I believe without hesitation that when it comes to storage, the SAN is back! Read more here.
##
About the Author
Dr. Jai Menon, Chief Scientist, IBM Fellow Emeritus
Jai is the Chief Scientist at Cloudistics, which he joined after serving as CTO for multi-billion-dollar Systems businesses (Servers, Storage, Networking) at both IBM and Dell. Jai was an IBM Fellow, IBM's highest technical honor, and one of the early pioneers who helped create the technology behind what is now a $20B RAID industry. He impacted every significant IBM RAID product between 1990 and 2010, and he co-invented one of the earliest RAID-6 codes in the industry, called EVENODD. He also led the team that created the industry's first, and still the most successful, storage virtualization product. When he left IBM, Jai was Chief Technology Officer for the Systems Group, responsible for guiding 15,000 developers. In 2012, he joined Dell as VP and CTO for Dell Enterprise Solutions Group. In 2013, he became Head of Research and Chief Research Officer for Dell.
Jai holds 53 patents, has published 82 papers, and is a contributing author to three books on database and storage systems. He is an IEEE Fellow and an IBM Master Inventor, a Distinguished Alumnus of both Indian Institute of Technology, Madras and Ohio State University, and a recipient of the IEEE Wallace McDowell Award and the IEEE Reynold B. Johnson Information Systems Award. He serves on several university, customer and company advisory boards.