As we enter a new year, I was fortunate to have the opportunity to connect with Jeff Klaus, GM of Data Center Solutions at Intel, to dig deeper into the findings of a recent Intel research report and get an "under the hood" perspective on DCIM and what we can expect in 2016.
VMblog: What do you believe is the single biggest data center trend to look out for in 2016?
Jeff Klaus: Compute demand is going to continue to increase, and thus various industry trends are likely to continue (e.g. virtualization, consolidation, "green" initiatives, efficiency). In addition, some new ones will likely start their ramp-up (e.g. Facebook Open Datacenter). I think that for all of these trends and initiatives, granular and accurate data, along with analysis of that data, is a key capability to drive and support them. End users will need accurate data, including power, temperature, airflow, utilization, etc., in order to understand their situation and take action accordingly.
VMblog: Where will we see the biggest consolidation in DCIM? And what will this impact?
Klaus: If you're asking about DCIM vendor consolidation, I believe that, as in any other domain, we started with a large number of vendors but will end up with a couple of major global players, some regional players, and some vendors with niche solutions for certain parts of DCIM. We already see this consolidation, e.g. CA dropping out of the DCIM space, and we expect this trend to continue.
VMblog: Recent Intel DCM survey data cited that over 40% of data center managers are still relying on manual processes (spreadsheets/physically walking the data center floor with a tape measure) in an attempt to "accurately" accomplish capacity planning/forecasting, requiring 40-60% of their time. In an era of automation, what needs to be done to move the industry away from these manual processes?
Klaus: Technologies that can provide real-time power, thermal, and other telemetry directly from the devices are already mature. The OEMs recognized this customer need 5-7 years ago, and since then we have seen more and more intelligent devices that can report accurate metrics on their power consumption, thermals, utilization, airflow, and more. Now we need the DCIM vendors to listen to their customers, integrate such technologies into their solutions, provide accurate data and analytics, and allow customers to automate their current manual processes. Customers are not happy using manual methods today; they're still doing it because there's a lack of good, simple solutions that can provide them accurate data and help them automate those manual processes.
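To make the contrast with spreadsheets concrete, here is a minimal, hypothetical sketch of the kind of automated analysis that device telemetry enables. The data, rack names, and power budget below are fabricated for illustration; in practice the readings would come from intelligent devices (e.g. via IPMI or vendor APIs) rather than a hard-coded list.

```python
# Hypothetical sketch: capacity planning from device telemetry instead of
# spreadsheets. All readings and the rack budget are fabricated examples.

RACK_POWER_BUDGET_W = 5000  # assumed per-rack power budget (illustrative)

readings = [
    # (rack, server, watts, inlet_temp_C) -- fabricated sample data
    ("rack-1", "srv-01", 310, 22.5),
    ("rack-1", "srv-02", 450, 23.1),
    ("rack-2", "srv-03", 275, 21.8),
    ("rack-2", "srv-04", 380, 24.0),
]

def rack_summary(readings, budget_w):
    """Aggregate per-rack power draw and flag racks above 80% of budget."""
    totals = {}
    for rack, _server, watts, _temp in readings:
        totals[rack] = totals.get(rack, 0) + watts
    return {rack: (total, total / budget_w > 0.8) for rack, total in totals.items()}

summary = rack_summary(readings, RACK_POWER_BUDGET_W)
for rack, (total_w, near_budget) in sorted(summary.items()):
    flag = " -- nearing budget" if near_budget else ""
    print(f"{rack}: {total_w} W{flag}")
```

With live telemetry feeding a loop like this, the 40-60% of time the survey says managers spend on manual forecasting becomes a continuously updated report.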
VMblog: Will we see lower costs for DCIM technologies in 2016? And if so, why?
Klaus: We saw lower costs for DCIM technologies in 2015, and I believe this trend will likely continue. We talk a lot with end users and still clearly hear that price is one of the key barriers to end users' adoption of DCIM solutions.
VMblog: What is the biggest change you'd personally like to see occur in the DCIM space in 2016?
Klaus: Obviously I'd like to see broader DCIM adoption by end users in 2016. Ideally, I'd really like to see this space move from "push," where DCIM vendors try to push their solutions, to "pull," where end users are asking for the solutions. For that to happen, DCIM vendors will need to listen to their customers, address their key concerns (e.g. power and thermal management, automating manual processes), and do a better job of articulating and proving DCIM solution value and ROI for end users. The vendors will also need to address general concerns about their solutions: lower their costs, simplify the solutions, and invest in ease of use.
##
Once again, a very special thank you to Jeff Klaus of Intel for taking time out to speak with VMblog.com.
