With what seems like everything new in the IT world being labeled software-defined this or that, I thought we would take a step back and find out more about the software-defined phenomenon. To do that, I spoke with Andrew Hillier, co-founder and CTO of Cirba, a company that has grown over the years into a provider of software-defined infrastructure control solutions. Andrew and Cirba have been around the cloud and virtualization markets for quite some time, and we've had many discussions about virtualization over the years. So I was fortunate to catch up with him once again to dig in and hear more about things like the importance of intelligent control and management of the software-defined data center.
VMblog: What exactly does Software-Defined mean?
Andrew Hillier: Software-defined,
in our view, describes IT hosting strategies and infrastructure whose
behavior and operational parameters can be altered programmatically
through software, reducing or eliminating the need for special-purpose
hardware and/or application-specific physical configurations. This isn't
to say that all apps need to converge on a vanilla set of hosting
requirements, but rather that the hosting infrastructure can
automatically be made fit-for-purpose for the specific application
demands placed on it. This increases agility and reduces manual effort
by enabling higher levels of policy-based management and automation.
VMblog: What is your view on the whole Software-Defined trend?
Hillier: There
is clearly a lot of hype around everything software-defined, and while
it makes complete sense for technology to progress in this way,
organizations need to be careful. There are immediate benefits when
adopting specific technologies to provide more flexibility and
programmability, but they also bring complexity that needs to be
managed. We saw this with virtualization, which is essentially
software-defined compute, and the entry point for many organizations
into the software-defined world. Many of the true benefits of
virtualization weren't immediately realized, and people discovered that
simply sticking apps in VMs only unlocked part of the value. Safely
achieving the promised levels of efficiency and automation required a
progression in the thinking, tooling and processes used to manage these
environments. Without this, it was nearly impossible to deal with all
the moving parts, and I think we can learn from this when moving on to
other software-defined technologies.
VMblog: What is a realistic goal today with respect to Software-Defined for most organizations?
Hillier: It
is useful to think of software-defined as a state that you reach, and
not just a technology you can buy. This is particularly true of the
software-defined data center, which can't simply be purchased off the
shelf, and is really more of a goal that you reach when you have put in
place all of the required pieces. And these pieces may not always be
what you think - it is actually possible to make existing virtualized
infrastructure operate in a more software-defined way purely through the
adoption of more advanced management software. If we think of complex
industrial processes, the control system is the starting point for
achieving efficiency and flexibility, and buying expensive robots may be
overkill, at least initially. The same is true of IT infrastructure,
and properly controlling VM placement and resource allocations, using
policy to define how this should be done, can get you quite far using
existing virtualization technologies. Policy allows the capabilities of
hosting environments to be defined through software, and then
scientifically aligned with application demands and requirements,
leading to a whole new level of efficiency and automation. Adopting a
policy-based control system is an important first step, and after you
gain more precise control over what you have, it then makes sense to
look into more advanced software-defined components, such as SDN and
SDS, based on their incremental business benefit.
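To make that idea concrete, here is a minimal sketch of how policy set points, rather than hardware changes, can determine what a hosting environment is allowed to do. The Policy and Host structures and the overcommit values below are hypothetical illustrations, not any vendor's actual model:

```python
from dataclasses import dataclass

@dataclass
class Policy:
    """Set points that define how a hosting environment may be used."""
    name: str
    cpu_overcommit: float   # vCPUs permitted per physical core
    mem_overcommit: float   # virtual GB permitted per physical GB

@dataclass
class Host:
    cores: int
    mem_gb: int

def effective_capacity(host: Host, policy: Policy) -> tuple[float, float]:
    """The same hardware yields different usable capacity under different policies."""
    return (host.cores * policy.cpu_overcommit,
            host.mem_gb * policy.mem_overcommit)

host = Host(cores=16, mem_gb=128)
prod = Policy("production", cpu_overcommit=2.0, mem_overcommit=1.0)
dev = Policy("dev/test", cpu_overcommit=6.0, mem_overcommit=1.5)

for p in (prod, dev):
    vcpus, mem = effective_capacity(host, p)
    print(f"{p.name}: {vcpus:.0f} vCPUs, {mem:.0f} GB of virtual memory")
```

The same 16-core host effectively becomes a different piece of infrastructure depending on which policy governs it, which is precisely the sense in which existing virtualized infrastructure can be made to operate in a more software-defined way through management software alone.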
VMblog: You mention policy a lot - is that a key component of the software-defined data center?
Hillier: Absolutely
- if we go back to industrial control systems, the entire process is
defined by control logic and "set points". Together these drive the
actuators to achieve the desired outcome, and changing the logic and set
points can cause the same machinery to behave differently. In the IT
world a control system contains the parameters and set points that
define how workloads should be hosted - everything from overcommit
levels to redundancy requirements, compliance, storage requirements,
etc. These form the policies that are effectively the contract between
supply and demand, and they codify precisely how application demands are
fulfilled by resource supply. Having the ability to establish
policy-based management should be the starting point for any
software-defined initiative, as the flexibility and numerous "degrees of
freedom" need to be pinned down to specifics in an scientific,
repeatable way. Unfortunately, because most software-defined
technologies focus on a narrow aspect of operation, such as storage or
network, none has a "big picture" policy of how the overall system
should behave, forcing organizations to resort to spreadsheets or best
guesses when trying to manage these complex environments. This in turn
causes people to be tied up making imprecise, manual decisions, wasting
their time and preventing organizations from reaching the required
levels of automation.
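As a rough illustration of that contract between supply and demand, the sketch below codifies the kinds of set points Hillier lists. All of the field names and values are assumptions made for illustration, not the schema of any particular product:

```python
from dataclasses import dataclass, field

@dataclass
class HostingPolicy:
    """The 'contract' between application demand and resource supply."""
    cpu_overcommit: float = 2.0      # set point: ceiling on vCPU-to-core ratio
    mem_overcommit: float = 1.25     # set point: ceiling on virtual-to-physical memory
    redundancy: str = "N+1"          # spare capacity required for failover
    storage_tier: str = "ssd"        # minimum storage class for the workload
    compliance_zones: set[str] = field(default_factory=set)  # e.g. {"pci", "eu-only"}

@dataclass
class HostingEnvironment:
    cpu_ratio: float
    mem_ratio: float
    redundancy: str
    storage_tier: str
    zones: set[str]

def satisfies(env: HostingEnvironment, policy: HostingPolicy) -> bool:
    """Check that the environment honours every set point in the policy."""
    return (env.cpu_ratio <= policy.cpu_overcommit
            and env.mem_ratio <= policy.mem_overcommit
            and env.redundancy == policy.redundancy
            and env.storage_tier == policy.storage_tier
            and policy.compliance_zones <= env.zones)

pci_policy = HostingPolicy(compliance_zones={"pci"})
cluster = HostingEnvironment(1.8, 1.1, "N+1", "ssd", {"pci", "internal"})
print(satisfies(cluster, pci_policy))  # True: supply currently honours the contract
```

Once the contract is explicit in code rather than in a spreadsheet, verifying it becomes a repeatable, automatable operation instead of a best guess.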
VMblog: You make it sound like automation is necessary - is it?
Hillier: Yes
it is, but it may not be the kind of automation most people think
of. Most organizations focus on automating the configuration and
operation of the environment, which is the goal of provisioning and
orchestration systems. On the surface, the act of doing this is simple
enough. If you know what you need the infrastructure to look like, then
it should happen as automatically as possible. But the problem is that
determining what is required is becoming increasingly complex, and this leads
to a new kind of automation that wasn't required in the past. This is
the automation of the decision making process, so people can determine
what needs to happen without being tied up using spreadsheets all
day. Think of it this way: if you can automatically program the infrastructure to do anything you want, this newfound freedom will quickly turn into a curse as you realize it is nearly impossible to figure out exactly what to make it do. The catch is that it is only by defining infrastructure and application requirements, and by establishing the policies that govern infrastructure operations, that you can automate at this level.
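A toy example of that decision-automation layer might look like the sketch below: given a VM's requirements, software rather than a spreadsheet decides where it should land. The best-fit ranking and the host data are illustrative assumptions; real placement analytics weigh far more factors:

```python
from dataclasses import dataclass

@dataclass
class VmRequest:
    vcpus: int
    mem_gb: int
    zone: str            # compliance zone the workload must land in

@dataclass
class Host:
    name: str
    free_vcpus: int      # headroom remaining under the policy's overcommit ceiling
    free_mem_gb: int
    zone: str

def place(vm: VmRequest, hosts: list[Host]) -> Host | None:
    """Automated decision: filter hosts that satisfy policy, then pick the best fit."""
    candidates = [h for h in hosts
                  if h.zone == vm.zone
                  and h.free_vcpus >= vm.vcpus
                  and h.free_mem_gb >= vm.mem_gb]
    # Rank by tightest remaining headroom (best fit) to preserve large free blocks.
    return min(candidates,
               key=lambda h: (h.free_vcpus - vm.vcpus) + (h.free_mem_gb - vm.mem_gb),
               default=None)

hosts = [Host("esx-01", 8, 32, "pci"),
         Host("esx-02", 24, 96, "pci"),
         Host("esx-03", 32, 128, "internal")]
vm = VmRequest(vcpus=6, mem_gb=24, zone="pci")
chosen = place(vm, hosts)
print(chosen.name if chosen else "no compliant host - escalate to a human")
```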
VMblog: How does this relate to internal and external cloud?
Hillier: It
is interesting that cloud and software-defined are often treated as two distinct
thought processes, and seem to be independent in many
organizations. This is mainly because one is effectively a supply-side
concept (software-defined infrastructure) and one is a demand-side
innovation (self-service access and freedom of choice). But thinking of
them separately is a mistake, and adopting software-defined technologies
also requires a new level of demand management in order to make it
work. This is because the whole premise is to be able to flex the
infrastructure to meet the specific needs of the applications, without
deploying specialized hardware that is only suited to a specific
function. To use another analogy, implementing siloed software-defined
technologies without factoring in application demands is like having an
infinitely configurable playground with no idea of which children will be using it - it can't possibly be configured correctly. A policy-based
control system for software-defined infrastructure enables supply and
demand to constantly be aligned using a combination of policy, control
analytics and automation.
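One way to picture that continuous alignment is as a simple control loop: observe demand against supply, compare the result to the policy's set point, and actuate only when utilization drifts outside a dead band. The sketch below reduces demand and supply to single numbers and stubs out the telemetry purely for illustration:

```python
import random

TARGET = 0.65   # policy set point: desired cluster utilization
BAND = 0.10     # dead band: drift tolerated before the loop actuates

def read_demand() -> float:
    """Stub telemetry source; a real controller would query hypervisor APIs."""
    return random.uniform(40, 90)   # aggregate workload demand

def read_supply() -> float:
    return 100.0                    # capacity available under the policy's overcommit ceiling

def rebalance(utilization: float) -> None:
    action = ("add capacity or migrate VMs away" if utilization > TARGET
              else "consolidate workloads and reclaim capacity")
    print(f"utilization {utilization:.2f} outside band -> {action}")

# A few passes of the observe -> decide -> act loop; a real controller runs continuously.
for _ in range(5):
    u = read_demand() / read_supply()
    if abs(u - TARGET) > BAND:
        rebalance(u)
    else:
        print(f"utilization {u:.2f} within policy band - no action needed")
```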
##
Andrew
Hillier has over 20 years of experience in the creation and
implementation of mission-critical software for the world's largest
financial institutions and utilities. A co-founder of Cirba, he leads
product strategy and defines the overall technology roadmap for the
company.
Prior
to Cirba, Hillier pioneered a state-of-the-art systems management
solution which was acquired by Sun Microsystems and served as the
foundation of their flagship systems management product, Sun Management
Center. Hillier has also led the development of solutions for major
financial institutions, including fixed income, equity, futures &
options and interest rate derivatives trading systems, as well as solutions in the
fields of covert military surveillance, advanced traffic and train
control, and the robotic inspection and repair of nuclear reactors.
Hillier holds a Bachelor of Science degree in computer engineering from The University of New Brunswick.