2010 Will Require Rethinking in Data Center Technology

What do Virtualization and Cloud executives think about 2010?  Find out in this VMblog.com series exclusive.

Contributed Article by Craig Thompson, VP Product Marketing, Aprius


The industry is at a key inflection point in data center technology, and in how vendors and enterprises respond to these changes. Server virtualization kick-started nothing less than a complete rethinking of how computing workloads are provisioned, managed and moved; in 2010, everything surrounding the CPU (memory, storage, networking, security services) will require the same rethinking.

At the core of this shift is the notion of provisioning workloads from 'pools' of resources. This started with a single CPU core being virtualized, was extended to multi-core CPUs, and then to several multi-core CPUs in a cluster, where workloads can be initiated, moved and torn down on any CPU within the cluster. In the near future, this will be extended to larger clusters, to an entire data center, and eventually across multiple data centers.
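To make the pooling model concrete, here is a minimal sketch in Python of pool-based placement: a workload asks the pool for cores and lands on whichever host has headroom, rather than on dedicated hardware. CpuPool and Host are invented names for illustration, not any real scheduler's API.

from dataclasses import dataclass

@dataclass
class Host:
    name: str
    cores_total: int
    cores_used: int = 0

    @property
    def cores_free(self) -> int:
        return self.cores_total - self.cores_used

class CpuPool:
    """Treats every core in the cluster as one schedulable pool."""
    def __init__(self, hosts):
        self.hosts = list(hosts)

    def place(self, workload: str, cores: int) -> Host:
        # The workload is bound to the pool, not to specific silicon:
        # any host with enough free cores will do.
        for host in sorted(self.hosts, key=lambda h: h.cores_free, reverse=True):
            if host.cores_free >= cores:
                host.cores_used += cores
                return host
        raise RuntimeError(f"pool exhausted for workload {workload!r}")

pool = CpuPool([Host("esx-01", 16), Host("esx-02", 16), Host("esx-03", 8)])
print(pool.place("web-tier", 4).name)  # lands on whichever host has headroom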

This fundamental change in how workloads are provisioned also impacts the rest of the physical infrastructure surrounding the servers: the networks, storage, memory and security services. Not all applications are equal, nor do they require access to the same amount or type of resources. So how do you account for this in a highly mobile, flexible and non-deterministic virtualized environment? One choice is to statically over-provision physical resources to account for all possible peaks from any workload. For networks, that would mean dedicating many individual connections and large amounts of bandwidth to each server so that it can handle any workload thrown at it. This is not feasible because of cost, complexity and, perhaps most importantly, the inability to predict what resources will be needed in the future. As VM density increases beyond 30-to-1 and traffic fluctuates over time, especially across resource-intensive applications such as database and messaging systems, it becomes increasingly difficult to account for all possible workload scenarios.
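The arithmetic behind that infeasibility is easy to sketch. The toy calculation below, using invented traffic figures, compares provisioning for the sum of every VM's individual peak against the peak of their combined demand; with bursty, largely uncorrelated workloads the static number comes out dramatically larger.

import random

random.seed(1)
VMS = 30  # density in the 30-to-1 range cited above

# Each VM's bandwidth (Gb/s) over 24 hourly samples: mostly idle, rare bursts.
traces = [[random.choice([0.1, 0.1, 0.1, 2.0]) for _ in range(24)]
          for _ in range(VMS)]

sum_of_peaks = sum(max(t) for t in traces)              # size for every VM's peak
peak_of_sums = max(sum(hour) for hour in zip(*traces))  # what the host really sees

print(f"static provisioning needs: {sum_of_peaks:.1f} Gb/s")
print(f"actual worst-case demand:  {peak_of_sums:.1f} Gb/s")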

So let's consider making the underlying physical infrastructure completely flexible, so that resources can be allocated from a 'pool' where and when necessary, much as CPU is treated today. The industry is approaching this idea in several ways, and vendors are working on pieces of the overall solution: for example, providing the ability to combine distributed local disk drives into a common pool of storage, or extending addressable memory in a given server or across clusters of servers. Networking companies are developing standards that allow distributed virtual switches to act as if they were part of one large, flat, layer 2 network. Others are working on the problem of providing I/O (network and storage connections) as virtual functions from a common resource pool, and on making this I/O, including its bandwidth, identity and policy, flexible and mobile along with the VMs it serves.
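A sketch of that last idea follows: a virtual I/O function carved from a shared pool, carrying its bandwidth, identity and policy with it when its VM migrates. IoPool and VirtualFunction are hypothetical names for illustration, not any vendor's product or API.

from dataclasses import dataclass

@dataclass
class VirtualFunction:
    vm: str
    bandwidth_gbps: float
    mac: str     # network identity travels with the VM...
    policy: str  # ...and so does its QoS/security policy

class IoPool:
    def __init__(self, capacity_gbps: float):
        self.capacity = capacity_gbps
        self.functions = {}  # vm name -> VirtualFunction

    def allocate(self, vm: str, bandwidth: float, mac: str, policy: str) -> VirtualFunction:
        used = sum(f.bandwidth_gbps for f in self.functions.values())
        if used + bandwidth > self.capacity:
            raise RuntimeError("I/O pool exhausted")
        self.functions[vm] = VirtualFunction(vm, bandwidth, mac, policy)
        return self.functions[vm]

    def follow_vm(self, vm: str, destination: "IoPool") -> None:
        # On live migration the function is released here and re-created
        # at the destination, so identity and policy move with the VM.
        vf = self.functions.pop(vm)
        destination.allocate(vf.vm, vf.bandwidth_gbps, vf.mac, vf.policy)

rack_a, rack_b = IoPool(40.0), IoPool(40.0)
rack_a.allocate("db-vm", 10.0, "00:1a:2b:3c:4d:5e", "gold")
rack_a.follow_vm("db-vm", rack_b)  # the VM's I/O follows it to the new rack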

Experts say this harks back to the mainframe days, and in many ways that is true. However, the game has changed in a fundamental way, because all of this is now achievable on a $2,000 server. The next step is to allow these servers to connect to any network or storage resource on demand, from any vendor, across an entire data center, through a management platform enhanced by downloadable plug-ins and applications that provide visibility, control and automation over the available pools of resources.
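One way to picture such a platform is as a thin control point that vendor plug-ins extend. The sketch below uses invented names (Platform, ResourcePlugin, StoragePlugin) purely to illustrate the plug-in pattern, not any shipping product.

class ResourcePlugin:
    kind = "generic"

    def visibility(self) -> dict:
        """Report what the plug-in's pool currently holds."""
        raise NotImplementedError

    def attach(self, vm: str, **request):
        """Carve a slice of the pool for a VM."""
        raise NotImplementedError

class StoragePlugin(ResourcePlugin):
    kind = "storage"

    def __init__(self):
        self.volumes = {}

    def visibility(self) -> dict:
        return {"volumes": len(self.volumes)}

    def attach(self, vm: str, gb: int = 10):
        self.volumes[vm] = gb
        return f"{gb} GB attached to {vm}"

class Platform:
    """One control point; plug-ins make each vendor's pool look uniform."""
    def __init__(self):
        self.plugins = {}

    def register(self, plugin: ResourcePlugin) -> None:
        self.plugins[plugin.kind] = plugin

    def provision(self, vm: str, kind: str, **request):
        return self.plugins[kind].attach(vm, **request)

dc = Platform()
dc.register(StoragePlugin())
print(dc.provision("web-01", "storage", gb=50))  # -> "50 GB attached to web-01"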

About the Author 

Craig Thompson, Vice President, Product Marketing
Craig brings diverse experience in corporate management, product marketing and engineering management to Aprius. He has held various senior marketing roles in data communications, telecommunications and broadcast video, most recently with Gennum Corporation, based in Toronto, Canada. Prior to that, Craig was Director of Marketing for Intel's Optical Platform Division, where he was responsible for the successful ramp of 10Gb/s MSAs into the telecommunications market. Craig holds a Bachelor of Engineering with Honours from the University of New South Wales in Sydney, Australia, and a Master of Business Administration from the Massachusetts Institute of Technology.

Published Wednesday, December 23, 2009 6:01 AM by David Marshall