Virtualization Technology News and Information
Out of Chaos, Opportunity

The cost of provisioning a server or desktop has collapsed thanks to virtualization, thin clients, multicore CPUs, and ubiquitous gigabit networking in the data centre. Indeed, in the last three years the price of virtualization software itself has tumbled from thousands of dollars per unit to free, bundled with many operating systems.

So what happens when server hardware reaches true commodity pricing levels? What happens when the necessity for new capital equipment expenditure goes away, and the power to spawn whole IT estates ends up in the hands of business units or end users? Virtual system instances surge to meet demand (no bad thing), but the CIO and his team are left responsible for the reliability, security, and compliance of an uncontrollable virtual estate. Not all organisations have a powerful CIO or IT function, and not everyone is able to effectively enforce central policy on such a fluid infrastructure.

Early adopters of virtualization have already found this out the hard way, and are now trying to cope with this uncontrolled growth in the number of virtual systems.

We have been here before. During the '90s the cost per megabyte of hard drive storage (remember when we used to think about storage in terms of megabytes?) plummeted. Storage proliferated in the data centre, on the desktop, and in a hundred types of portable devices. The struggle to manage this storage still rages today. Do you know where your confidential data is? Perhaps it's on the SAN, and on John's desktop PC, or his laptop, you know, the one he lost on the train. We waited more than 10 years for tools to help us manage this uncontrolled growth in storage. Even now, with data de-duplication, filesystem snapshots, disk encryption, and (somewhat) affordable SAN and NAS, managing storage is a daily struggle with huge associated costs.

Common sense tells us that it is better to fix problems now, before they become chronic. How much easier would it have been to manage today's terabytes of storage if all those powerful tools had been available to us in 1990?

How can we take the lessons learned from storage and apply them to today's problem of virtualization sprawl?

What if you could devolve the power to create, destroy, and hibernate virtual machines to your authorized users in a carefully controlled way? What if virtual machines were automatically decommissioned after a project's pre-determined end-date? What if you could report on, and produce billing records for, virtual machines according to their consumption of physical resources?
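The end-date policy described above is simple enough to sketch. The following is a minimal illustration, not any real hypervisor's API: the inventory records, field names, and `sweep` function are all hypothetical, and production code would call the platform's management interface to power off and archive each expired machine.

```python
from datetime import date

# Hypothetical inventory records: VM name, the owning project's
# pre-determined end-date, and whether the VM is still running.
vms = [
    {"name": "build-01", "end_date": date(2008, 8, 31), "running": True},
    {"name": "crm-test", "end_date": date(2008, 12, 31), "running": True},
]

def expired(vm, today):
    """A VM is due for decommissioning once its project end-date has passed."""
    return vm["end_date"] < today

def sweep(inventory, today):
    """Return names of VMs to decommission. A real implementation would
    power them off and archive their disks via the hypervisor's API."""
    return [vm["name"] for vm in inventory if expired(vm, today)]

print(sweep(vms, date(2008, 9, 6)))  # → ['build-01']
```

Run periodically (nightly, say), such a sweep keeps the estate bounded by policy rather than by whoever remembers to clean up.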

If you could do all these things today, how would that help you manage virtual server sprawl now and in the future? How much time would it save, and what would all that be worth to you?


Article contributed by Nick Hutton, Principal Consultant at 360is, a first-class independent provider of virtualization and information security professional services to the public and private sector. Read the 360is blog.

Published Saturday, September 06, 2008 5:36 PM by David Marshall