Article Contributed by Kalyana Krishna Chadalavada and Brandon Hammersley
Engineers at HPC Systems, Inc. have successfully installed and tested VMware ESX 3.5 on a 32-core AMD Opteron system. The platform under test is the A5808-32. The team successfully installed, configured and tested several major features of ESX Server, including iSCSI volumes, NFS mounts, VirtualCenter 2.5, VMotion and support for SATA storage.
The HPC Systems A5808-32 server platform offers a unique value proposition for users considering virtualization. Initial configurations start with two sockets populated (four or eight cores), can expand to eight fully populated sockets (32 cores), and support up to 256 GB of DDR2 memory. In a virtualized environment, dedicated network or storage interfaces are recommended for VMs running certain workloads, such as database and web servers. With two x16 and two x4 PCI Express expansion slots and three on-board gigabit Ethernet interfaces, the A5808-32 offers good expandability. With custom configurations, this platform can support a wide range of virtualization workloads while offering sufficient headroom for future expansion.
Engineers at HPC Systems, Inc. set out to test the platform's compatibility with VMware ESX Server. Here is an overview of the test setup:
A5808-32 server: This server is populated with eight (8) AMD quad-core Opteron processors for a total of 32 cores and is installed with 32 GB of memory. The system is configured with an Adaptec 31605 unified serial controller and an LSI Logic SCSI controller. The Adaptec 31605 is connected to three Hitachi SAS drives in a RAID 5 volume. A local VMFS data store will be created on this volume to perform sanity tests on the ESX Server installation. This system will also be referred to as the system under test (SUT) for the remainder of the article.
A1403 server: This server is populated with four (4) AMD quad-core Opteron processors for a total of 16 cores and is configured with 32 GB of memory. The system features an on-board LSI RAID controller. ESX Server will be installed on a volume on this controller. This system will not have any local VMFS data store. This server will be used to test VMware VMotion technology.
HiPerStor storage server: This storage server will export a 500 GB iSCSI volume to both ESX servers in the test setup. It will also export an NFS volume containing the ISO images for the Windows Server 2003 and SUSE Linux operating systems.
A1204 server: This server is populated with one dual-core AMD Opteron processor and 4 GB of memory. It is installed with Windows XP, VMware VirtualCenter Server and the VMware Virtual Infrastructure Client (VI Client). This server will also be referred to as the VI Client system for the rest of the article.
All the servers are interconnected with a standard gigabit Ethernet switch. Two virtual switches are configured on each ESX server: one for the Service Console and VMkernel ports, and another for the VM client network. Each switch is connected to one of the on-board gigabit Ethernet interfaces.
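As a rough sketch, a two-switch layout like the one described above could be built from the ESX 3.5 service console as follows. The vmnic names, port group labels and IP address here are illustrative assumptions, not values from the test setup.

```shell
# Sketch only: vmnic names, labels and the IP/netmask are assumed values.

# Switch 0: Service Console and VMkernel ports on the first on-board NIC
esxcfg-vswitch -a vSwitch0
esxcfg-vswitch -L vmnic0 vSwitch0
esxcfg-vswitch -A "Service Console" vSwitch0
esxcfg-vswitch -A "VMkernel" vSwitch0
esxcfg-vmknic  -a -i 192.168.1.10 -n 255.255.255.0 "VMkernel"

# Switch 1: VM client network on the second on-board NIC
esxcfg-vswitch -a vSwitch1
esxcfg-vswitch -L vmnic1 vSwitch1
esxcfg-vswitch -A "VM Network" vSwitch1
```

In practice the same configuration can be made entirely from the VI Client, which is how the test setup in this article was built.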
The test is divided into three phases: installing and configuring ESX Server, installing and testing VMs, and VMotion.
Install & Configure ESX Server:
On the first attempt, the A5808-32 was configured with only 8 cores and the LSI SCSI controller in order to minimize the chances of running into installation issues. Installation of ESX Server on the A5808-32 was a breeze. The Adaptec controller was added to the system after one successful boot of the ESX server. During the next boot, ESX Server recognized the new Adaptec card, automatically reconfigured the boot kernel and initiated a reboot of the server to load the required drivers. Once the ESX boot was complete, the VI Client successfully connected to the system. A quick look at the storage configuration page showed that the installation had successfully detected the new Adaptec card and the RAID 5 volume on the controller. A pleasant surprise was that the on-board SATA controller was also detected. To test this functionality, a new hard disk was connected to the on-board SATA controller, and ESX Server successfully detected the new drive.
A local VMFS data store was configured with the storage from the Adaptec controller. This data store was then extended with additional storage from the on-board SATA controller, so it now spans multiple storage controllers (Adaptec and on-board) and disk types (SAS and SATA). As a quick check, the team installed 64-bit CentOS 5 on this VMFS data store.
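For reference, the same spanned data store could be sketched from the service console with vmkfstools. The device paths below are hypothetical placeholders for the Adaptec RAID 5 partition and the SATA disk partition; the actual vmhba numbering depends on the system.

```shell
# Hypothetical device paths:
#   vmhba1:0:0:1 = partition on the Adaptec RAID 5 volume
#   vmhba0:0:0:1 = partition on the on-board SATA disk

# Create the VMFS3 data store on the Adaptec RAID 5 volume
vmkfstools -C vmfs3 -S local-store vmhba1:0:0:1

# Add the SATA partition as an extent so the data store
# spans both controllers and both disk types
vmkfstools -Z vmhba0:0:0:1 vmhba1:0:0:1
```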
This image shows the installation screen of CentOS in the foreground and VI Client in the background. The image also shows the system configuration in the summary page on the VI Client.
iSCSI & NFS configuration: The next step was to add external storage to the ESX server. Storage is one of the most important resources for an ESX server: it has to hold not only all the virtual disks for the VMs but also VM snapshots and VSWP files. Each VSWP file is as large as the amount of memory assigned to its VM, so the more VMs your platform can support, the more storage you need. Local storage is easy to manage, but expansion becomes a problem once you hit the space constraints of your system chassis. This is why iSCSI and NFS volumes play an important role in an ESX installation. External storage is also required in order to use VMotion.
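The sizing rule above is simple arithmetic. As a back-of-the-envelope estimate, assume a hypothetical fleet of ten VMs shaped like the one used later in this test (a 10 GB virtual disk and 512 MB of RAM each):

```shell
# Hypothetical sizing: 10 VMs, each with a 10 GB virtual disk and 512 MB RAM.
# Each powered-on VM needs a VSWP file equal to its configured memory.
vm_count=10
vdisk_mb=10240   # virtual disk per VM, in MB
vm_mem_mb=512    # VSWP file per VM, in MB

per_vm_mb=$(( vdisk_mb + vm_mem_mb ))
total_mb=$(( vm_count * per_vm_mb ))
echo "Minimum data store size: ${total_mb} MB"
```

That is roughly 105 GB before any snapshots, which consume additional space on top of this minimum.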
For the purpose of this test, the team set up a HiPerStor storage server with four SAS hard drives. The system was configured to provide one NFS volume and one iSCSI volume.
The VMkernel ports needed for NFS, iSCSI and VMotion were already created during the initial network configuration. Mounting the NFS volume from the VI Client was straightforward, as was iSCSI volume discovery. After allowing sufficient time for iSCSI discovery, the server was able to access the iSCSI volume. Here is a screen capture showing the storage configuration on the SUT:
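The equivalent service-console steps can be sketched as follows. The host name, discovery IP and share path are illustrative assumptions, and the software iSCSI adapter name (vmhba32 here) varies by system.

```shell
# Sketch only: host names, addresses and share paths are assumed values.

# Mount the NFS volume exported by the storage server
esxcfg-nas -a -o hiperstor.example.com -s /export/isos iso-store

# Enable the software iSCSI initiator and add a send-targets
# discovery address pointing at the storage server
esxcfg-swiscsi -e
vmkiscsi-tool -D -a 192.168.1.20 vmhba32

# Rescan the adapter so the newly discovered iSCSI LUN shows up
esxcfg-rescan vmhba32
```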
To test the volume, SLES 10 was installed on the iSCSI volume.
Note the data store information for this VM displayed in the VI Client in the background.
The last step in the installation phase was to test ESX Server on 32 cores. The team populated all the sockets of the server with quad-core processors, and on the next boot ESX Server successfully detected all 32 cores, as reported by the VI Client.
Install & Test Virtual Machines:
Having successfully finished all the sanity tests, it was time to stress the virtual machines a bit and see how they behave. For the stress tests, the team used the Dell DVD Store application, which can be downloaded from http://linux.dell.com/dvdstore/
This test stresses the network, disk and CPU on the target system. The team configured a Windows Server 2003 VM with SQL Server 2000 and IIS. IIS was configured with the DVD Store application (ASP version) and SQL Server was populated with the DVD Store database, using the small database size. The team also installed a SUSE virtual machine to act as a client for the DVD Store application. The DVD Store client files were also installed on the VI Client (A1204) system.
Each instance of the client was configured to start eight (8) threads. The SQL performance monitor utility was used to monitor transactions per second and CPU usage on the Windows VM. The VI Client was also used to monitor the VM's CPU and memory usage in addition to the physical CPU and memory usage. The test was run for a period of eight hours, and no abnormalities in the behavior of the clients, the VM or ESX were observed. The same test was repeated with the SUSE VM as the target system; DVD Store was installed on SUSE with the Apache web server (PHP version) and the MySQL database server. Again, no abnormalities were observed. Finally, the team repeated the test on the SUSE VM and the Windows 2003 VM simultaneously, with two external systems configured to act as clients to the two VMs. No abnormalities were observed in the behavior of the clients, the VMs or the ESX server.
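For readers who want to reproduce a similar load, an eight-thread client run might be launched roughly as below. The parameter names and target host are assumptions based on the DVD Store 2 driver and should be checked against the documentation shipped with the download; the run time shown (480 minutes, i.e. eight hours) matches the duration used in this test.

```shell
# Assumed invocation of the Dell DVD Store 2 web driver; parameter
# names and the target host name are illustrative, not verified here.
./ds2webdriver --target=windows-vm.example.com --n_threads=8 --run_time=480
```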
Test VMotion:
VMotion allows live migration of an active virtual machine from one physical platform to another. This is very useful for scheduling hardware maintenance and for load balancing resources across a group of servers. VMware DRS and HA also need VMotion to function successfully.
Here is an illustration of the test setup:
The team used the Dell DVD Store application again as the workload during the VMotion test, running in a single Windows 2003 VM with 512 MB of RAM, a 10 GB virtual hard drive and one virtual NIC connected to the virtual machine network.
The Windows VM is initially booted on the A5808-32. The two client machines for the DVD Store application then start exercising the web site, each with 8 threads accessing the DVD Store site on the Windows VM. An RDP session is opened to the Windows VM, and Windows Performance Monitor is started in this session to monitor the CPU usage inside the VM.
VMware VirtualCenter is used to initiate a live migration (VMotion) of this virtual machine from the A5808-32 to the A1403; both servers are registered with VirtualCenter. While the migration is in progress, the team continues to monitor CPU usage on the Windows VM as well as the clients accessing the DVD Store application. While VMotion is in progress, the DVD Store clients and the RDP client should not experience any problems, and the CPU usage on the VM itself should stay roughly the same except at two instants: the start of VMotion and its successful completion. There might be a slight change in the operations per minute (OPM) reported by the clients. The test was then repeated in the other direction, moving the VM back from the A1403 to the A5808-32. No abnormalities were observed in the process.
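As a sketch, host-side behavior around the migration window can also be captured with esxtop in batch mode on each ESX host (the test above used the VI Client charts and Windows Performance Monitor instead). The sample interval, count and output file name are illustrative.

```shell
# Record 10 minutes of host statistics in 5-second samples
# (120 iterations) while the VMotion is in flight.
esxtop -b -d 5 -n 120 > vmotion_samples.csv
```

The resulting CSV can then be inspected for CPU spikes at the start and completion of the migration.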
Conclusion:
The team of engineers at HPC Systems, Inc. successfully installed, configured and tested VMware ESX Server 3.5 on the 32-core AMD Opteron server, the A5808-32. Using HiPerStor, the team was also able to successfully demonstrate VMotion technology. With versatile configuration options and availability in multiple form factors, the A5808-32 AMD Opteron platform and the HiPerStor storage server provide a unique value proposition to customers considering virtualization with VMware. HPC Systems, Inc. has fully tested this platform with VMware and will support its customers on it.
Future Work:
The team plans to execute the VMmark 1.0 benchmark and capacity-planning exercises for the A5808-32 and A1403. A similar exercise will also be carried out on Intel Xeon platforms such as the E2406. The team also plans to test and validate Xen and Virtual Iron on HPC Systems, Inc. servers and storage.
About the Authors:
Kalyana Krishna Chadalavada is a director of the HPC Systems, Inc. engineering team and specializes in high-performance computing and storage systems. Previously, Kalyana worked with Dell's Enterprise Solutions team and CDAC's National PARAM Supercomputing Facility. He has a B.Tech in computer science & engineering from Nagarjuna University.
Brandon Hammersley is a Senior Systems Engineer with HPC Systems specializing in customized system design and implementation. Previously, Brandon was employed at Tyan Computer for six years where he worked as a Field Applications Engineer & Project Manager for Tyan's server and workstation motherboards.
About HPC Systems, Inc.
HPC Systems, Inc. has been a leading provider of commodity-based high-performance compute servers, workstations, storage and supercomputing cluster solutions since 2003. HPC Systems, Inc. specializes in end-to-end technology consulting, customized pre-packaged data center solutions, thermal and chassis designs for unconventional field deployment, high-density compute systems and high-end workstations. HPC Systems, Inc. is an AMD Platinum Solution Provider, a GSA schedule holder, and an SBA-certified SDB / 8(a) Small Business.
Links for more information are as follows:
http://www.hpcsystems.com/
www.hpcsystems.com/blog
http://hpcsystems.com/AMDQuadOpteron_A5808-32.htm
http://hpcsystems.com/hiperstor/