2019 Infrastructure Predictions: De-convergence, Maturation of FaaS, AI/ML in the Cloud, and Hybridization of Enterprise IT

Industry executives and experts share their predictions for 2019.  Read them in this 11th annual VMblog.com series exclusive.

Contributed by Nancy Wang, Founder & CEO of Advancing Women in Product (AWIP) and Pranava Adduri, Infrastructure Professional

De-convergence, Maturation of FaaS, AI/ML in the Cloud, and Hybridization of Enterprise IT

As 2018 draws to a close and we head into 2019, there are several interesting infrastructure trends to note:

(1) De-convergence as a theme

Recent infrastructure unicorns, such as Nutanix and Rubrik, have made their names via their trademark converged hardware/software appliances (Nutanix with its Hyperconverged Infrastructure (HCI) for Datacenter Operations, and Rubrik with its converged backup/recovery appliance for Cloud Data Management).

Compared to the software-only players in this space (e.g., Druva, Veeam, and Komprise, among many others), the converged providers have grown considerably faster. To that end, I've heard customers mention that software-only deployments are hard to consume off the bat, given that most of the industry is still on traditional SAN and secondary storage deployments.

Despite this, we are seeing a shift toward software-only beginning among the HCI players, led by Nutanix. Back in 2017, Nutanix announced a conscious move toward software-only sales, and I expect more players in the traditional appliance space to also offer a software-only solution. The question then becomes, "What percentage of these companies' revenues will come from software-only sales, and what percentage from their traditional converged appliances?"

One ramification could be increased enterprise valuations, given that software-only products have traditionally been sold as subscriptions rather than under the more traditional license-plus-support pricing model. It will also be interesting to see how traditional enterprise sales teams adapt to this new paradigm.

As these HCI providers start offering more software-only solutions, there will also be a market opportunity to provide standardized hardware qualification. Currently, the process of qualifying each additional hardware vendor (popular ones being Cisco UCS, Dell, and HP) to support the HCI software stack is painful. I see huge potential for a solution, whether from a third party or from these companies themselves, that speeds up that process.

(2) Maturation of Functions-as-a-Service (FaaS) offerings

Given the dominance of players such as AWS, Microsoft Azure, and to some extent Google Cloud and IBM Cloud, there is not a lot of opportunity left in competing as a general-purpose provider. Being competitive in FaaS requires the provider to have an ecosystem of resources (for example, AWS offers SQS, RDS including Aurora Serverless, and S3, all of which integrate seamlessly with Lambda). Such an ecosystem creates a high barrier to entry for most startups attempting to be FaaS providers.

Moreover, for companies to adopt FaaS at scale, a fundamental shift in their development style will have to occur.

  • Developers don't often architect their code to be fully event driven. The closest analogy would be a system whose components communicate via message passing. While FaaS can (and often does) interoperate with a messaging service, there is a subtler requirement that prevents mass adoption.
  • FaaS implementations as they stand today have bounded execution windows in terms of compute time and memory. This throws a wrench into certain otherwise ideal use cases.
  • ETL is one such use case. In ETL, data runs through a pipeline, which usually involves multiple steps of extracting data, transforming it, and storing it elsewhere. Ideally, FaaS would be great for this: each step of the pipeline can be a function, and pipeline state can be managed by message passing (see the sketch after this list).
  • In practice, the transforms in ETL can be memory intensive; this creates challenges in using FaaS for ETL transforms.
  • The problem can be further exacerbated in production: a developer may have written transforms against small development datasets, but in prod, larger datasets lead to higher memory requirements.
  • Providers such as Iron.io (with IronWorker) offer customizability of compute time and memory. However, companies might prefer a one-stop shop (i.e., AWS or Azure).
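
To make the ETL concern concrete, below is a minimal sketch of one pipeline stage written as an AWS Lambda handler that receives a batch from SQS, transforms it, and passes pipeline state downstream via another queue. The queue URL and the transform itself are hypothetical placeholders; the point is that each stage must fit within the function's bounded memory and execution window.

```python
import json
import boto3

sqs = boto3.client("sqs")

# Hypothetical queue feeding the next stage of the pipeline.
NEXT_STAGE_QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/load-stage"

def transform(records):
    # Hypothetical transform: normalize field names and drop empty rows.
    # In practice this step can be memory intensive, which is exactly
    # where FaaS memory limits start to bite.
    return [{k.lower(): v for k, v in r.items()} for r in records if r]

def handler(event, context):
    # Each SQS message carries one batch of extracted records.
    for message in event["Records"]:
        batch = json.loads(message["body"])
        transformed = transform(batch)
        # Pass pipeline state to the next stage via message passing.
        sqs.send_message(
            QueueUrl=NEXT_STAGE_QUEUE_URL,
            MessageBody=json.dumps(transformed),
        )
```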

Given these constraints, engineers have to start profiling and optimizing their functions (a minimal profiling sketch follows below). One could argue this is a good thing; the flip side is that it comes at the cost of development velocity. It could also turn people away from FaaS.
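
Even the Python standard library is enough to get a first read on whether a transform will fit within a FaaS memory limit. A minimal sketch using tracemalloc, with a placeholder transform and a synthetic production-sized batch:

```python
import tracemalloc

def transform(records):
    # Placeholder for a real ETL transform.
    return [str(r).upper() for r in records]

# Simulate a production-sized batch rather than a dev-sized one;
# memory usage often scales with input size.
records = list(range(1_000_000))

tracemalloc.start()
result = transform(records)
current, peak = tracemalloc.get_traced_memory()
tracemalloc.stop()

# Compare the peak against the memory you plan to configure for the function.
print(f"peak memory: {peak / 1024 / 1024:.1f} MiB")
```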

In 2019, we hope to see the following:

  • More flexibility around execution time and memory limits
  • Tooling that will help developers identify what can be migrated to FaaS from traditional architectures
  • Cost calculators that examine existing workloads and estimate how much a similar FaaS implementation would cost
  • Tooling that will help developers incrementally convert their existing codebases to FaaS. (This could start as a framework or library that helps developers annotate functions and incrementally migrate an app to be FaaS-like while still running it as a local app; a sketch of the idea follows this list.)
  • Abstracting/providing more value up the stack (e.g., Vandium for security, IOpipe for monitoring). We'd love to see comprehensive suites that understand not just your FaaS functions but also the shared ecosystem of resources they rely on (e.g., S3, RDS, SQS) for monitoring and insights.
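
On the incremental-migration idea above, one can imagine a small library along these lines. Everything here is hypothetical (the decorator name, the RUN_MODE toggle, the function names); the point is that annotated functions keep running in-process today and can be dispatched to a deployed FaaS function later.

```python
import json
import os
from functools import wraps

# Hypothetical toggle: flip to "faas" once the function has been deployed.
RUN_MODE = os.environ.get("RUN_MODE", "local")

def faas_candidate(remote_name):
    """Mark a function as a candidate for migration to FaaS (hypothetical)."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            if RUN_MODE == "local":
                # Today: run in-process, exactly as before.
                return fn(*args, **kwargs)
            # Later: dispatch to the deployed Lambda function instead.
            import boto3
            payload = json.dumps({"args": args, "kwargs": kwargs})
            response = boto3.client("lambda").invoke(
                FunctionName=remote_name, Payload=payload
            )
            return json.loads(response["Payload"].read())
        return wrapper
    return decorator

@faas_candidate(remote_name="resize-image")
def resize_image(url, width):
    # Existing application logic stays untouched.
    return {"url": url, "width": width}
```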

(3) AI/ML in the Public Cloud

The adoption of AI/ML in the cloud continues to grow, and we'll see this trend accelerate into 2019. There are many reasons to leverage public cloud resources for heavy-duty machine learning: above all, compute for AI is expensive and hard to operate. From Amazon's dedicated AI business unit to Google's stronghold with TensorFlow, ML is increasingly being delivered as fully managed services by the big cloud providers.

This is going to be the age of PaaS for AI.

  • Bring your own model
  • Bring your own data
  • Let the cloud provider handle the rest (a hypothetical sketch of this division of labor follows)
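
The client below is purely hypothetical (no real vendor SDK is implied; SageMaker, Google Cloud, and Azure ML each have their own APIs), but it sketches the division of labor this model implies: you supply the model and the data, and the platform supplies the compute, the training run, and the serving endpoint.

```python
# Hypothetical managed-ML client; real providers each have their own SDKs,
# but the shape of the workflow is similar.
class ManagedMLPlatform:
    def submit_training_job(self, model_definition, training_data_uri, instance_type):
        """Provider provisions GPUs, runs training, stores artifacts."""
        ...

    def deploy(self, model_artifact, endpoint_name):
        """Provider stands up an autoscaled inference endpoint."""
        ...

platform = ManagedMLPlatform()

# 1. Bring your own model: any serialized definition the platform accepts.
# 2. Bring your own data: typically a URI into the provider's object store.
job = platform.submit_training_job(
    model_definition="model.tar.gz",
    training_data_uri="s3://my-bucket/training-data/",
    instance_type="gpu.large",  # hypothetical instance name
)

# 3. Let the cloud provider handle the rest: training, artifacts, serving.
endpoint = platform.deploy(model_artifact=job, endpoint_name="churn-predictor")
```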

There still exist gaps in the data -> train -> deploy -> measure loop; tooling that closes that cycle end to end has yet to be established. Facebook's FBLearner Flow tool is a good start, but we'd still like to see more sophisticated tools emerge in this space.

(4) Hybridization of Enterprise IT

Starting with Azure Stack, this phenomenon has spread to other cloud players looking to supplement on-premises deployments, namely AWS Outposts and VMware Cloud on AWS, as well as the ability to run RDS on-premises.

The trend here is that the clouds are building to bridge edge, cloud, and datacenter together. Role and permission monitoring will become more important than ever as data and logic flow across these boundaries. Existing tooling and managed security service providers will have to update their offerings to ensure such hybrid deployments are well monitored and secured (a trivial example of the kind of check we mean follows).
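
As one minimal, assumption-laden sketch of such a check: the function below flags overly broad Allow statements in an IAM-style policy document. A real hybrid deployment would need equivalent checks across every boundary and every provider's policy format, not just this one.

```python
def find_overbroad_statements(policy):
    """Flag Allow statements that grant wildcard actions or resources."""
    findings = []
    for stmt in policy.get("Statement", []):
        if stmt.get("Effect") != "Allow":
            continue
        actions = stmt.get("Action", [])
        actions = [actions] if isinstance(actions, str) else actions
        resources = stmt.get("Resource", [])
        resources = [resources] if isinstance(resources, str) else resources
        if any(a == "*" or a.endswith(":*") for a in actions) or "*" in resources:
            findings.append(stmt)
    return findings

# Example policy that would be flagged: full access to everything.
policy = {
    "Version": "2012-10-17",
    "Statement": [{"Effect": "Allow", "Action": "*", "Resource": "*"}],
}

for stmt in find_overbroad_statements(policy):
    print("over-broad statement:", stmt)
```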

(5) Enabling Continuous Gratification

The distribution of applications across multiple clouds will be based on best-of-breed capabilities. There will also be a push to deliver value to the business quickly via a developer-centric model. Continuous integration (CI), and now continuous deployment (CD), is a big trend. But, like FaaS, CD is a methodology that can't be immediately embraced by everyone; think of companies whose business models are based on shipping physical appliances to customers, or on edge compute. In these cases, CD is clearly not directly feasible. It will therefore be interesting to see how companies bring CD to non-SaaS offerings, and what form this new way of development takes.

As the "everything in GIT" movement continues, we're seeing entire infrastructure definitions move to code and Yaml files that are version controlled. As such, developers have tremendous power (and ability to mess up prod) when they commit changes. While we have unit testing for application logic we write, we unfortunately don't have the same for infrastructure definitions we write. As such, rules need to be put in place to ensure best practices are followed and infrastructure issues can be caught before they go live. We love the idea behind tools like datree.io which are starting to fill this gap and look forward to seeing more development in this space.

##

About the Authors 

 

Nancy Wang is a Lead Product Manager at Rubrik, where she currently drives development and GTM strategies for the company's Windows, filesystems, and SaaS platform (currently in stealth) product lines. Prior to Rubrik, Nancy led product development for the Google Fiber network infrastructure platform team. She is also the founder & CEO of Advancing Women in Product (AWIP), an award-winning non-profit organization (featured in Fast Company, Forbes, and Bloomberg) that strives to empower more female tech leaders and professionals via highly rated, skills-based workshops and executive coaching. Nancy graduated from the University of Pennsylvania with a bachelor's in engineering. In her free time, she contributes to Forbes with her column in Women@Forbes and is an avid equestrian.


Pranava Adduri is a founding engineer at Rubrik, where he currently leads the Platform Engineering team. Prior to Rubrik, Pranava led the database team at Box (starting at Shard0). Pranava graduated from the University of California, Berkeley with a triple bachelor's (Economics, Computer Science, and Industrial Engineering) and a master's in Industrial Engineering with honors. A Bay Area native, he loves hiking and exploring small-batch whiskeys.

Published Thursday, December 20, 2018 7:29 AM by David Marshall