Virtualization Technology News and Information
Q&A: Microsoft's utility computing guru talks about his in-house support challenges

Quoting Computerworld

The profile of Microsoft Corp.'s in-house server farm looks very much like the ones in many other companies: one application per server, with less than 20% peak server utilization on average. Devin Murray, Microsoft's group manager of utility services, has been working to change that.

Murray is in charge of server and desktop hardware purchases for about 40,000 of Microsoft's end users. His group handles internal computer usage, helping to shepherd the company's 260,000 computers, which are spread throughout 550 buildings in 98 countries. Another IT unit within Microsoft runs the servers for MSN and other external, customer-facing applications.

In a recent interview with Computerworld, Murray explained some of the steps he's taking as part of his strategy to boost capacity utilization rates, which centers on a new server-purchasing notion he calls RightSizing as well as liberal use of virtualization technology. Excerpts from the interview follow:

What's RightSizing all about? It's a utility concept, where our users focus on their business needs and we worry about the underlying hardware platform. The model is to get them to buy into this utility solution, and they don't have to worry about the hardware refresh -- we worry about all that for them. They think about their business requirements; we think about speeds and feeds. And if they need more compute power, we get it to them. But we buy only what they need, not 600 times more capacity than they will use in two or three years.

To get users to buy in, we have to demonstrate that we can do this better, more effectively, than they can do it on their own. We charge them a one-time sign-up fee, and then a monthly amount for operational costs to keep the systems up and running, [plus] space, power and environmental charges. We don't charge end users for the actual hardware or software.

Why are you doing this now? Back in 2005, we started looking at our compute utility. We started doing internal benchmarks -- which business units are using what portions of their servers, how they rank among business units over time. It became apparent that we were buying machines that were way overpowered for our needs. So we started trying to change the conversation about purchasing -- to help business owners understand their real needs and the options for meeting those real needs. People buy hardware based on emotional factors and based on what they've done in the past. If I built my business on an HP four-processor system, I want to continue to use that irrespective of my current needs or the costs.

Like most customers, we don't want to pay for hardware we're not using. And it's only getting more pronounced -- the current growth projections of AMD and Intel show sixfold server processing growth from 2005 through 2008. So as the servers get more and more powerful, we're using less and less of them. We're helping end users understand what they absolutely have to have to get their business objectives met, based on today's capabilities and costs and not based on what they already have or what they think they want.

How are you changing the conversation internally? We're using the SPEC benchmark as our raw computing metric. So, say you want to buy a platform with 200 compute units, but you're replacing a system with 40 units that you're not really using fully. You're using 20% of your existing system, but you want to upgrade by 400%. So we try to shift the business thinking. The conversation changes to why you want to replace the system and what you're trying to do.
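The arithmetic behind that conversation can be sketched in a few lines. This is a hypothetical illustration, not Microsoft's actual tooling; the "compute unit" figures stand in for the SPEC-derived benchmark scores Murray describes, using the numbers from his example.

```python
# Sketch of the RightSizing conversation: compare what a business
# actually uses today against what it is asking to buy.
# "Compute units" here stand in for a SPEC-derived benchmark score.

def rightsizing_summary(current_units, utilization, requested_units):
    """Return the capacity consumed today and the requested upgrade factor."""
    used_units = current_units * utilization
    upgrade_pct = (requested_units / current_units - 1) * 100
    return used_units, upgrade_pct

# The example from the interview: a 40-unit system used at 20%,
# with a request for a 200-unit replacement.
used, upgrade = rightsizing_summary(40, 0.20, 200)
print(used)     # compute units actually consumed today: 8.0
print(upgrade)  # requested upgrade over the existing system: 400.0 (%)
```

Framed this way, the gap between 8 units consumed and 200 units requested is what shifts the discussion from "what do you want to buy" to "what are you trying to do."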

What has the response been from your users? Businesses are very interested in how they are using the systems, and now they want more information. We're starting to scorecard a lot of our different IT services to show utilization and growth potential, and we're comparing and contrasting businesses to present different scorecard views to any given [manager]. So they can share best practices at the general manager and CIO level. We also scorecard ourselves, from the perspective of an infrastructure and services provider: the percentage of servers moving to virtualization, how heavily the virtualized servers are being used, and so on.
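The scorecard idea above can be illustrated with a toy ranking. This is a sketch under invented data; the business-unit names and utilization figures are hypothetical, not from the interview.

```python
# Toy sketch of a utilization scorecard: rank business units by average
# server utilization so managers can compare and contrast across units.
# All names and numbers below are invented for illustration.

units = {
    "Finance": [0.12, 0.18, 0.15],
    "Sales":   [0.35, 0.40],
    "HR":      [0.08, 0.10, 0.09, 0.11],
}

# Sort ascending, so the least-utilized (and most over-provisioned)
# business units appear first.
scorecard = sorted(
    ((name, sum(u) / len(u)) for name, u in units.items()),
    key=lambda row: row[1],
)

for name, avg in scorecard:
    print(f"{name}: {avg:.0%} average utilization")
```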

How are you doing on the virtualization front -- what progress have you made? We've got about 1,000 production and 500 development applications running on around 70 hosts. The average is a ratio of 8 [virtual machines] to 1 [physical server] in the production space, and around 16 to 1 in test or dev environments. Our goal is for all new applications to go into the virtual environment; we're not necessarily going back to existing or older applications to virtualize them all.
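The consolidation math implied by those ratios can be sketched as a back-of-envelope host estimate. The VM counts and ratios come from the interview; since they are rounded averages, the totals will not reconcile exactly with the quoted host count.

```python
# Back-of-envelope estimate: given a VM count and a VMs-per-host
# consolidation ratio, how many physical hosts does the pool need?
import math

def hosts_needed(vm_count, vms_per_host):
    """Physical hosts required at a given consolidation ratio."""
    return math.ceil(vm_count / vms_per_host)

# Figures from the interview: ~1,000 production VMs at 8:1,
# ~500 dev/test VMs at 16:1 (rounded averages).
prod_hosts = hosts_needed(1000, 8)   # 125
dev_hosts = hosts_needed(500, 16)    # 32
```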

How do you sell the notion of virtual servers to your business customers? We don't discourage businesses from trying. So if you like the idea but are worried about performance, we offer you the opportunity to test it for free. And we have a money-back guarantee. If you jump on board the virtual machine and don't like it, we give you your one-time on-boarding fee and any monthly fees back. We've had one customer do that. But other than that, every virtual machine we said we could do from a performance-trending perspective has worked out.

Have you found applications that shouldn't or flat-out can't be virtualized -- large SQL Server databases, for instance? Yes, there are some, and we provide that guidance to the businesses. Some that require specialized cards -- telecom applications or security applications that need special I/O or NIC cards -- can't be virtualized today. For the others, it's all about a performance envelope. Some of our Exchange or larger workloads may not work in a low-end hosting space, but we don't exclude all Exchange workloads from being virtualization targets. We look at performance trends of the existing systems to understand whether an application candidate can live within the performance limits of a virtual machine. We want to use RightSizing and virtualization as ways to help rethink assumptions. One individual alone saved over $1 million, so we're really shifting people's thinking.

Are you using any VMware products for virtualization, and do you have any mainframes in your data centers? No, there aren't any mainframes or VMware in our shops. We have three main data centers -- in Redmond, Dublin and Japan. We've got a handful of regional hubs we're considering deploying our [utility] services in, and some midsize data center facilities that range from a handful of servers to larger environments.

Read the original here.

Published Thursday, May 24, 2007 5:56 AM by David Marshall