Hardware Today: The State of Grid Computing

Quoting from ServerWatch:

"Grid technologies have long been used for scientific and technical work, where dispersed computers are linked to create virtual supercomputers that rapidly process vast amounts of information," Sara Murphy, program manager for grid computing at Palo Alto, Calif.-based HP, said.

"Now the commercial enterprise is moving to an IT model based on a service-oriented architecture (SOA) where grids can be used as the technology infrastructure," Murphy said. "In some cases, IT is being recast as an internal utility for enterprise-wide use, and the deployment vehicle is grid."

Patrick Rogers, vice president of products and partners at Network Appliance of Sunnyvale, Calif., also ties grids into the SOA framework.

"The use of grid computing appears to be increasing in popularity, particularly in the context of large database applications and high performance computing (HPC)," he said. "The ability to create a network-based virtual computing resource is viewed as an enabler of service oriented architectures in the enterprise as well as compute intensive applications in the HPC market."

Three Types of Grid
Grids, however, should not be viewed as a single-faceted concept. There are, in fact, three primary methods of grid computing.

  1. Linking Data Centers
    This approach is used mainly by research institutions to share their facilities for high-end applications. For example, the National Science Foundation sponsors the TeraGrid, which uses high-speed networking to link 16 compute resources at universities and laboratories around the country. Through TeraGrid, users can access 102 Teraflops of processing power, more than 15 Petabytes of online and archival storage and more than 100 discipline-specific databases.

    This concept is finding favor in the enterprise. HP, for example, has developed technology to assist in the deployment of commercial grids.

    "In multinational companies with data centers in many locations, efficient IT utilization is a serious challenge," Murphy said. "Grid is not a packaged product, but rather a set of components, technologies and services pulled together."

    In addition to grid-enabled servers and grid management software, the company offers HP Flexible Computing Services. This, according to Murphy, makes it easier for customers to reap the benefits of a utility approach to enterprise-scale IT. Customers gain direct access to data center computing via a grid-type architecture. In addition, HP Grid Consulting Services provide a single point of accountability from planning through migration and transition to ongoing maintenance and optimization of the grid.

    "HP grid solutions allow customers to provision applications and allocate capacity across geographically and organizationally dispersed teams as business needs change," Murphy said. "This ability to handle peaks and troughs in demand enables organizations to take advantage of underutilized resources, rapidly deploy resources for new projects, and improve time-to-market for new products."

  2. Capturing Unused Cycles on PCs
    PC CPUs typically run at less than 10 percent utilization. Link thousands of them together and you assemble a supercomputing juggernaut (a rough sketch of this arithmetic follows this list). The largest such project is SETI@home (Search for Extraterrestrial Intelligence). Hosted by the University of California at Berkeley, SETI@home harnesses the combined power of hundreds of thousands of PCs to analyze radio signals for evidence of extraterrestrial life. It runs around the clock at an average of roughly 180 Teraflops.

    These types of grids can also be used inside the firewall to harness idle workstations. Platform Computing of Markham, Ontario, sells software for managing server clusters that can also include Windows PCs as part of the infrastructure. This allows companies to run large-scale simulations when the PCs aren't being used.

    Pratt & Whitney, a division of United Technologies Corp. of East Hartford, Conn., uses Platform's LSF software to model jet engines and gas turbines rather than relying on physical testing of the hardware. A single physical test could run a million dollars and take months to complete. By installing this software on 150 servers and 5,000 workstations at five locations, Pratt & Whitney runs such simulations overnight.

  3. Renting Processing Power
    Sun Microsystems of Santa Clara, Calif., for example, offers the Sun Grid Compute Utility, which allows customers to rent additional processing power on a per-hour basis. CDO2 Ltd, a London-based firm that produces software that allows banks, hedge funds and investment firms to run complex financial risk simulations on their portfolios, was one of the early adopters.

    "Before using Sun Grid, we had to deploy our software in various environments and geographical locations to meet customers' needs in-house," CDO2 director Gary Kendall said. "We now have customers around the world running on Sun Grid without having to do any local software installation ourselves."

    Offering the option of harnessing Sun's computers has expanded CDO2's potential customer base from the top 100 financial services companies to the top 1,000. Current clients still run the software in house, but any new customers are set up on the grid.

    "We suggest our customers start with a relatively small amount of compute power, say 10 CPUs per hour, and then buildup as their businesses grow," Kendall said. "They can always access increased power, to say 100 CPUs per hour, when they need it.

Read the rest of the article here.

Published Wednesday, August 23, 2006 9:54 AM by David Marshall