Virtualization and Cloud executives share their predictions for 2017. Read them in this 9th annual VMblog.com series exclusive.
Contributed by Deba Chatterjee, Director of Products, Robin Systems
Prediction: Underutilized resources will become a thing of the past as we become smart about containers
In the last few years, a huge demand for distributed applications has led to a proliferation of data-driven apps in data centers, and IT departments increasingly struggle to keep pace with changing technology. Although these new distributed apps share hardly any architectural similarities with legacy applications, deploying them has been subject to the same rules historically applied to legacy systems. The same strategy of buying servers big enough to handle peak workloads is chosen over and over. This works well at peak, but during periods of low demand those resources sit idle. What a waste of system resources! In fact, this strategy has resulted in huge underutilization. In my past life as an Oracle DBA, I saw servers running at less than 10 or 20 percent CPU utilization on average. Systems with high utilization were rare, the exception rather than the rule.
Proliferation of Environments
At the same time, the need to develop and test against the latest production data has led to a huge proliferation of development and test environments. For every production environment we now see at least 10 development or test environments. Developers expect their own stack so they can build and test in isolation. While this may be common practice, it multiplies the number of environments IT admins must spin up and maintain.
Thus, as one would expect, we see a fragmented pool of underutilized servers and a data-duplication problem stemming from the growing number of development and test environments. There is also a continuous need to clone production or refresh an existing environment. A huge production environment of several terabytes is often copied in full to create a dev or test environment that will probably not see even 1 percent of its data change. What a waste of storage!
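The alternative to copying terabytes in full is thin cloning: a clone shares the parent's data blocks and stores only the blocks it changes. The sketch below is purely illustrative (it is not any vendor's implementation, and the block names are made up), using Python's `ChainMap` to mimic copy-on-write semantics:

```python
from collections import ChainMap

# Illustrative sketch only: a thin clone layers its private writes over the
# parent's blocks. A dev/test copy that touches 1% of the data stores ~1%.

prod = {f"block{i}": f"data{i}" for i in range(100)}  # "production" volume

clone = ChainMap({}, prod)     # first map holds the clone's private writes
clone["block0"] = "changed"    # copy-on-write: only modified blocks are stored

print(clone["block0"])         # clone sees its own change
print(prod["block0"])          # production volume is untouched
print(len(clone.maps[0]))      # clone stores just 1 of the 100 blocks
```

Storage systems implement this at the block or snapshot layer, but the accounting is the same: the clone costs only its delta, not a full copy.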
All this work of creating, refreshing, and maintaining environments results in a maintenance nightmare for IT staff. The big question: what is the solution?
You probably guessed it: consolidation that not only ensures isolation but also guarantees predictable performance.
Today's data centers need an effective consolidation strategy in which workloads share a single set of infrastructure overhead. With effective resource management, we can greatly reduce competition between workloads, improving throughput and keeping response times predictable.
One other point critical to the success of consolidation is an intelligent, workload-aware placement strategy.
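To make the placement idea concrete, here is a minimal sketch of one classic approach, first-fit-decreasing bin packing of container CPU demands onto servers. The workload names and sizes are hypothetical, and real schedulers also weigh memory, I/O, and affinity:

```python
# Sketch of workload-aware placement: pack container CPU demands onto the
# fewest servers using first-fit-decreasing. Illustrative only.

def place(workloads, server_capacity):
    """Assign (name, cpu_demand) workloads to servers of fixed CPU capacity."""
    servers = []  # each server: {"free": remaining CPUs, "apps": [names]}
    for name, cpu in sorted(workloads, key=lambda w: -w[1]):  # biggest first
        for s in servers:
            if s["free"] >= cpu:       # first server with room wins
                s["free"] -= cpu
                s["apps"].append(name)
                break
        else:                          # no fit: provision a new server
            servers.append({"free": server_capacity - cpu, "apps": [name]})
    return servers

workloads = [("db", 8), ("web", 2), ("cache", 4), ("etl", 6), ("api", 3)]
servers = place(workloads, server_capacity=12)
print(len(servers))  # 23 CPUs of demand consolidate onto 2 twelve-CPU servers
```

Peak-sizing each of these five workloads on its own host would need five servers; packing by actual demand needs two, which is the utilization win consolidation is after.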
2017 will be the year IT and developers get smart about running both system and application containers to address this data center conundrum.
About the Author
Deba is an experienced product manager relied upon by executive management to solve complex business problems with effective technology solutions. At Robin, Deba works on product strategy, roadmap, positioning, and requirements gathering. He is also involved across the product life cycle, from inception to launch, and manages beta and customer reference programs.