Virtualization Technology News and Information
Rackforce is Going Green to Save Energy

Quoting Globe and Mail

When it comes to saving energy, Rackforce is doing better than most of us. Its three data centres in Kelowna, British Columbia, run computer applications for thousands of clients worldwide. Over the last four years, Rackforce has made those data centres 40 to 50% more energy efficient.

Rackforce did this in two ways, explains Tim Dufour, the company's president. First, in 2003, the company adopted a concept called server virtualization. This allows one physical computer to operate as several virtual machines. Although they share hardware, each virtual machine runs its own operating system and cannot see--or interfere with--the others. Most current servers are underutilized because they run only one application, to avoid the risk of programs interfering with one another. Dividing larger systems into virtual machines avoids wasting capacity while isolating each application, so no conflicts are possible. The energy savings are substantial, reducing power consumption by about 30%.

Then, last year, Rackforce took the next step, installing 320 new System x servers from IBM Corp. These machines use new quad-core processors that deliver more processing power per watt of electricity than older chips. "We have substantially increased the amount of computing power with less actual electricity consumption," Dufour says.

Braden Harrison, national brand and marketing manager for System x servers at IBM Canada in Markham, Ontario, points out that his company isn't the only computer maker offering more energy-efficient hardware. Many of the improvements stem from the makers of the processor chips at the heart of the machines--Intel Corp. and its smaller rival, Advanced Micro Devices. It's all part of a new focus in the computer industry on what manufacturers call "performance per watt."

This is good news for the environment. But the motivation isn't the Kyoto Accord--it's money. For some data centres, Harrison says, power is a bigger cost than the hardware itself.

And the cost of the electricity is only half the story, because electricity generates heat. In densely packed data centres, it's essential to get rid of that heat or the equipment will crash. "For every watt of power used," Dufour says, "you have to use at least a watt of power for cooling." This becomes not just a money problem, but also a space issue. Data centres may have floor space available for more servers, but there's no room under the floor to run more power cables and no space to add more ventilation. Electric utilities may also lack the infrastructure to deliver more juice. "With fair regularity, we see people who just can't get any more power in," says Andrew Hillier, chief technical officer and co-founder of Cirba Inc., a firm in Richmond Hill, Ontario, whose software helps data centre managers plan virtualization projects.
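Dufour's rule of thumb lends itself to simple arithmetic: if every watt of IT load needs at least another watt of cooling, the facility's total draw is roughly double what the servers themselves consume. A minimal sketch, where the 200 kW load is an illustrative assumption rather than a figure from the article:

```python
# Dufour's rule of thumb: every watt of IT load needs at least
# a watt of cooling, so facility draw is roughly double the IT load.

def facility_power_kw(it_load_kw: float, cooling_ratio: float = 1.0) -> float:
    """Total data-centre draw: IT load plus cooling overhead."""
    return it_load_kw * (1.0 + cooling_ratio)

# A hypothetical 200 kW server room at a 1:1 cooling ratio:
print(facility_power_kw(200.0))          # 400.0 kW total draw
print(facility_power_kw(200.0) / 200.0)  # 2.0 -- every delivered watt costs two
```

The ratio of total draw to IT load is what the industry would later formalize as PUE; a 1:1 cooling overhead corresponds to a PUE of 2.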

Dividing computer servers into multiple virtual machines is popular not just because it saves power, but because it makes better use of costly hardware. Typically, says Bogomil Balkansky, director of product marketing at VMware Inc., a maker of virtualization software based in Palo Alto, California, a single processor can handle about four virtual servers. Many servers have two or four processors, so running eight or 16 virtual servers on one physical machine is common.

Getting one computer to do the work of 10 doesn't mean using just 10% of the power, Hillier cautions. Processors consume power as long as they are running, but the busier they are the more they use, he says. Often a new virtualized machine will be more powerful--and hence use more energy--than the several servers it replaces. But on average, Balkansky says, every application moved to a virtual server saves 3,000 kilowatt hours in annual server-power consumption, and as much again on cooling.
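Balkansky's figure makes the consolidation payoff easy to estimate: each application moved to a virtual server saves about 3,000 kWh a year in server power, and as much again in cooling. A rough sketch, where the 16-application fleet and the $0.08/kWh electricity rate are illustrative assumptions:

```python
# Per-application savings cited by Balkansky: ~3,000 kWh/year in server
# power, plus "as much again" in cooling energy.
SERVER_KWH_SAVED_PER_APP = 3000
COOLING_KWH_SAVED_PER_APP = 3000

def annual_savings_kwh(apps_virtualized: int) -> int:
    """Total annual energy saved by consolidating the given applications."""
    return apps_virtualized * (SERVER_KWH_SAVED_PER_APP + COOLING_KWH_SAVED_PER_APP)

# Consolidating 16 applications onto one four-processor host
# (four virtual servers per processor, per the article):
kwh = annual_savings_kwh(16)
print(kwh)         # 96000 kWh per year
print(kwh * 0.08)  # dollar saving at an assumed $0.08/kWh rate
```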

It isn't always necessary to divide a server into virtual machines to use it more efficiently, Hillier adds--sometimes it's enough to run several applications together without giving each one its own virtual server.

Intel and AMD have both been working on making their processors more efficient. "We've been providing more performance within the same power band," says Brent Kerby, product manager for AMD's Opteron line of chips. Both companies are making dual-core and multi-core chips--essentially two or more processors in one chip. Intel claims its Dual-Core Xeon 5100 processors deliver 135% more processing power than their predecessors, while reducing power consumption by 40%. It's not really the multiple cores that make the chips more energy-efficient, Kerby says, but increased miniaturization. But the smaller a chip gets, the less surface area it has and the harder it is to cool, so as circuitry shrinks, chipmakers have taken to packing two or more processors into one chip.
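Combining the two halves of Intel's claim shows why "performance per watt" is the metric that matters: 135% more processing power at 40% lower consumption compounds into nearly a fourfold gain per watt. A sketch using normalized values (the baseline of 1.0 is a normalization, not a real benchmark or wattage figure):

```python
# Performance-per-watt arithmetic behind the Xeon 5100 claim:
# +135% performance combined with -40% power consumption.

def perf_per_watt_gain(perf_increase: float, power_reduction: float) -> float:
    """Relative performance per watt versus the predecessor chip."""
    new_perf = 1.0 + perf_increase     # +135% -> 2.35x the performance
    new_power = 1.0 - power_reduction  # -40%  -> 0.60x the power
    return new_perf / new_power

gain = perf_per_watt_gain(1.35, 0.40)
print(round(gain, 2))  # 3.92 -- nearly four times the work per watt
```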

Another focus for Intel is the power its chips use when idle, says Doug Cooper, country manager at Intel Canada. Already, a processor at rest uses less than five watts--about as much as a Christmas tree bulb. Intel wants to get the figure down to one watt. AMD's PowerNow technology also reduces power consumption when a processor isn't busy.

Intel is also working to reduce the amount of electricity its chips waste as heat. That has dual benefits: reducing the computer's power consumption and making the chips easier to cool, Cooper says. And Kerby says integrating memory controllers into AMD's Opteron processors permits a more efficient memory architecture that better than halves the power consumption of memory chips.

Harrison says today's power supplies, which distribute power to all parts of a server, waste about 35% of the power they take in. IBM is working to make power supplies more efficient. So is Hewlett-Packard, which is attacking the cooling issue with Dynamic Smart Cooling, a data centre energy management system it claims will cut cooling energy costs by 20 to 45%.
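Harrison's 35% figure means a power supply passes along only about 65% of what it pulls from the wall, so every watt the components need costs noticeably more upstream. A quick sketch, where the 300 W component load and the 90%-efficient comparison supply are illustrative assumptions:

```python
# Power-supply waste per Harrison: ~35% of intake lost, so only ~65%
# of wall power reaches the server's components.

def wall_draw_watts(component_load_w: float, efficiency: float) -> float:
    """Power drawn at the wall to deliver component_load_w past the supply."""
    return component_load_w / efficiency

# A hypothetical 300 W component load:
print(round(wall_draw_watts(300, 0.65)))  # 462 W at 65% efficiency
print(round(wall_draw_watts(300, 0.90)))  # 333 W with a 90%-efficient supply
```

And because each wasted watt also becomes heat the cooling system must remove, a more efficient supply saves on both sides of the meter.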

Growing use of blade servers--compact machines that are essentially circuit boards that stack in racks--is another way to improve efficiency, Harrison says.

"Over the last three years we've really intensified our efforts," says Greg Davis, general manager of Dell Canada. One sign of the times: Dell now posts power consumption data about its products on its website. The interest in energy efficient servers may have more to do with economics than ecology, but it has benefits on both fronts.

"Getting one computer to do the work of 10 doesn't mean using just 10% of the power. Processors consume power as long as they are running, but the busier they are the more they use."


Virtualization: Divides a server into virtual machines so multiple applications can run without conflict and cuts power use by up to 30%.

Denser dual-core and multicore processors: As chipmakers pack two or more processor cores onto one chip, the chips deliver more processing power on as much as 40% less electricity.

Lower-wattage chips: Just like light bulbs, lower-wattage processors use less power and are adequate for many jobs. AMD offers 120-watt, 95-watt and 68-watt chips.

Power management: Making processors use less power when idle can save when servers aren't running full tilt. Intel is working to cut the power its chips use when idle from five watts to one watt.

Less heat: Reduce the heat that computer components give off and you save twice--the components waste less power, and less energy is used in cooling systems to get rid of the unwanted heat.

Efficient power supplies: The power supplies that distribute electricity within a server can waste about one-third of the system's total energy intake. IBM and HP are among the manufacturers working to make them more efficient.

Blade servers: Computers on densely packed circuit boards that fit in racks can be 20 to 30% more efficient than standalone boxes, while saving space.

Read the original here.

Published Thursday, June 21, 2007 5:35 AM by David Marshall