Virtualization Technology News and Information
VMblog's Expert Interviews: Intel GM Jeff Klaus Provides Key Insights into the Recent Intel DCM Survey Results


After reviewing a recent Intel DCM survey, I reached out to Jeff Klaus, GM Data Center Solutions at Intel, to gain a better understanding of some of the results.

VMblog:  Intel DCM recently concluded a new survey.  Can you provide some background information about it for VMblog readers?

Jeff Klaus:  The survey was conducted in June 2016 and polled 204 IT managers, IT directors, software engineers, and DevOps professionals in the United States. We commissioned the survey with Morar Consulting to identify how enterprise IT teams are managing and operating their enterprise cloud environments, what challenges they face, and which tools they most commonly implement.

VMblog:  What surprised you the most about the findings and why?

Klaus:  We were surprised by how much time data center managers, DevOps and IT teams are spending tuning their infrastructure - over 25 hours per week on average. With the use of sophisticated tools, including Intel's Data Center Manager and other tools Intel continues to introduce, this number should decrease by at least 50 percent. 

VMblog:  Why do you believe that the majority (51%) of enterprises deploying a cloud solution are using Infrastructure as a Service versus Platform as a Service (PaaS) or Software as a Service (SaaS)?

Klaus:  Motivations vary behind a business's decision to shift to another platform, and each solution offers its own set of benefits. One way to view this is as a spectrum of migration, from wholly owned, on-premises equipment at one end to total SaaS at the other. It is worth noting that any change has costs associated with it, and a migration is typically not undertaken without specific justification.

IaaS can be seen as a mid-point on this spectrum. It provides a greater degree of control in terms of hardware selection and software environments, along with a better level of security and customization. It can also address regulatory constraints that prescribe a specific geographic location for data (German data privacy laws, for example) or particular data protection methods, such as those required by HIPAA.

VMblog:  Do you believe we'll continue to see the scale tip in 2017?

Klaus:  From a total cost of ownership perspective, I expect IT managers to continue to make the tradeoff calculations to address their specific business needs. In general, the cloud has been disruptive to the enterprise model and if TCO continues to favor Cloud providers, you could expect more businesses to move in that direction. I am reminded of the initial shift to virtualization of the enterprise, over several 'waves,' to the point where it has reached equilibrium. It will be interesting to see at what point the shift to the Cloud settles down to a steady state.

VMblog:  The findings revealed that DevOps teams are spending an average of 25 hours per week monitoring their complex IT environments - tuning infrastructure and performing remediation analysis, for example. What can be done to lessen that time-consuming burden and allow DevOps to refocus efforts on other programs or initiatives?

Klaus:  DevOps wants, and needs, an environment that is well instrumented at the software level, so that a well-running environment operates without the need to constantly check in on it. Ideally, teams would receive proactive notifications of issues rather than having to monitor the environment continuously, freeing DevOps to focus on more critical work. Proactive notifications cover a gamut of areas, from SLA performance to server health, and eliminate unneeded monitoring.
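The push-style alerting Klaus describes can be sketched in a few lines. This is a minimal, hypothetical illustration - the metric names, thresholds, and `check_and_notify` helper are assumptions for the example, not part of Intel DCM or any specific tool.

```python
# Hypothetical sketch of proactive, threshold-based alerting: a notification
# fires only when a metric crosses its threshold, so no one has to sit and
# watch a dashboard. All names and numbers here are illustrative.
from dataclasses import dataclass


@dataclass
class Metric:
    name: str
    value: float
    threshold: float


def check_and_notify(metrics, notify):
    """Fire a notification only for metrics that breach their threshold."""
    alerts = [
        f"ALERT {m.name}: {m.value} > {m.threshold}"
        for m in metrics
        if m.value > m.threshold
    ]
    for alert in alerts:
        notify(alert)
    return alerts


sample = [
    Metric("p99_latency_ms", 480.0, 250.0),  # SLA breach -> alert fires
    Metric("inlet_temp_c", 22.0, 27.0),      # healthy -> stays silent
]
fired = check_and_notify(sample, print)
```

In practice the `notify` callback would hand off to a paging or chat-ops system; the point is that healthy metrics generate no work for the team.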

VMblog:  What can IT do to account for and address issues like the noisy neighbor problem?

Klaus:  The "noisy neighbor" problem occurs when a VM consumes an excessive share of resources that are shared with, rather than allocated directly to, other VMs. An overly elastic VM, if not contained, can take away resources that other VMs need. First, measuring resource consumption in real time is necessary to understand where resources are going. Second, the ability to allocate resources to maintain a Service Level Agreement, such as a latency response threshold, is needed to assure that a certain performance level is maintained.
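The two steps Klaus outlines - measure consumption in real time, then reallocate to hold an SLA - can be sketched roughly as follows. The VM names, measured fractions, and the flat fair-share cap policy are all illustrative assumptions, not a description of any real hypervisor's mechanism.

```python
# Hypothetical noisy-neighbor mitigation sketch. Step 1: measure per-VM
# consumption. Step 2: if the latency SLA is breached, cap offenders back
# to their fair share. Names, numbers, and policy are illustrative only.

def find_noisy_neighbors(cpu_share_by_vm, fair_share):
    """Return the VMs consuming more than their fair share of the host CPU."""
    return {vm: used for vm, used in cpu_share_by_vm.items() if used > fair_share}


def enforce_sla(cpu_share_by_vm, p99_latency_ms, sla_latency_ms, fair_share):
    """If the latency SLA holds, do nothing; otherwise return new CPU caps
    that pull over-consuming VMs back to their fair share."""
    if p99_latency_ms <= sla_latency_ms:
        return {}  # SLA holds; no intervention needed
    offenders = find_noisy_neighbors(cpu_share_by_vm, fair_share)
    return {vm: fair_share for vm in offenders}


usage = {"vm-a": 0.70, "vm-b": 0.15, "vm-c": 0.10}  # fraction of host CPU
caps = enforce_sla(usage, p99_latency_ms=420, sla_latency_ms=200, fair_share=0.33)
# caps now names vm-a as the neighbor to throttle
```

A real implementation would apply the caps through the hypervisor or OS scheduler (e.g. CPU bandwidth limits) rather than returning a dictionary, but the measure-then-reallocate loop is the same.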

VMblog:  How do you expect these results to change in the future?

Klaus:  Over time, we hope to see improvements in overall power usage effectiveness (PUE) for data centers as operators improve on their operational metrics. At the next level of granularity, we would expect to see further gains for the IT equipment itself, through improved control enabled by dynamic, real-time access to metrics.

VMblog:  How can IT/DevOps use these results to improve their internal strategies and processes?

Klaus:  We have seen a number of ways in which deploying a DCIM solution can reduce operational and capital costs for data center operators, as these solutions offer more efficient control of the data center facility. The ability to automate some of this orchestration is likely the next frontier for IT equipment throughput. That becomes the next level of optimization at the application level, with algorithmic control of workload allocation and automated load balancing.
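The "algorithmic control of workload allocation" Klaus mentions can take many forms; one of the simplest is a greedy balancer that always places the next workload on the least-loaded server. The job names and costs below are made up for illustration, and this is one textbook strategy, not the approach of any particular DCIM product.

```python
# Illustrative greedy load balancer: place workloads (largest first) on the
# currently least-loaded server, tracked with a min-heap of (load, server_id).
# Workload names and costs are invented for the example.
import heapq


def balance(workloads, n_servers):
    """Return a mapping of workload name -> server index."""
    heap = [(0.0, sid) for sid in range(n_servers)]
    heapq.heapify(heap)
    placement = {}
    # Placing big jobs first keeps the final loads closer to even.
    for name, cost in sorted(workloads.items(), key=lambda kv: -kv[1]):
        load, sid = heapq.heappop(heap)
        placement[name] = sid
        heapq.heappush(heap, (load + cost, sid))
    return placement


jobs = {"db": 8.0, "web": 3.0, "batch": 5.0, "cache": 2.0}
where = balance(jobs, 2)  # spreads 18 units of work across two servers
```

Production schedulers layer on constraints (affinity, memory, failure domains), but the underlying idea of feeding real-time load metrics into an allocation algorithm is the same.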


Once again, a special thank you to Jeff Klaus, GM Data Center Solutions at Intel, for taking time out to speak with VMblog.

Published Monday, October 10, 2016 6:55 AM by David Marshall