Virtualization Technology News and Information
Optimizing a Virtualized Application Stack, also known as IT Jenga

Welcome to Virtualization and Beyond

Contributed by Michael Thompson, Director, Systems Management Product Marketing, SolarWinds

In this era of rapid technological evolution, there are good reasons for specialization. Optimizing a large, complex virtual or storage environment, fine-tuning a database, or managing a critical application can easily be a full-time job in and of itself, each with its own set of required knowledge and skills. However, optimizing by infrastructure layer isn't always the same as optimizing across a virtualized application stack.

You see, it's harder to optimize a virtualized application than a non-virtualized one given how dynamic the infrastructure is: just when you think you have your solution optimized, someone unexpectedly pulls another block of resources out of your carefully stacked application infrastructure, sending application performance tumbling in a nice, electronic simulation of the game Jenga.

But even in today's environment of rapid change and specialized management, there is still a lot you can do.

Start Strong

If you're starting fresh, make sure you have a good understanding of application priority and operational characteristics when designing the initial system. If you know an application has high CPU, memory, or IOPS requirements, allocate resources accordingly. For example, consider a target pCPU:vCPU ratio of 1:1 for tier 1 applications versus 1:3 or higher for tier 2 and tier 3 applications. Also, watch the NUMA node size of your hosts and, when application performance is critical, make sure no VM is configured with more memory than a single NUMA node provides.
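The tiered ratio guidance above amounts to simple arithmetic, sketched below as a minimal example. The tier names and ratio values are illustrative assumptions drawn from the text, not fixed rules; adjust them for your own environment.

```python
# Illustrative sketch: how many vCPUs a host can reasonably serve per
# application tier, given target pCPU:vCPU ratios. The ratios here are
# the example values from the article, not universal recommendations.

TIER_RATIOS = {
    "tier1": 1,  # 1:1 -- no CPU overcommit for critical applications
    "tier2": 3,  # 1:3 -- moderate overcommit acceptable
    "tier3": 3,  # 1:3 or higher, depending on workload tolerance
}

def max_vcpus(physical_cores: int, tier: str) -> int:
    """Maximum vCPUs this host should serve for the given tier."""
    return physical_cores * TIER_RATIOS[tier]

print(max_vcpus(16, "tier1"))  # 16 -- a 16-core host at 1:1
print(max_vcpus(16, "tier2"))  # 48 -- the same host at 1:3
```

A host dedicated to tier 1 workloads fills up three times faster than one serving tier 2, which is worth factoring into capacity planning.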

At the same time you're working on the virtualization level, you need to align the application resources (hint: the database is one of the bigger factors here) as well as the storage layer. Without keeping each of the layers aligned, it will be hard to maintain overall application performance.

If you're working on an existing virtualized application, it's a good idea to baseline performance under load before you start making changes. A good baseline is critical in telling you the type and degree of impact your changes have, and it allows for at least a high-level cost-benefit analysis of changes or investments.
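As a minimal sketch of putting that baseline to work, the snippet below compares post-change metrics against stored baseline values and reports the percentage change per metric. The metric names and figures are hypothetical examples, not output from any particular tool.

```python
# Illustrative sketch: quantify the impact of a change by comparing
# current metrics to a stored baseline. Metric names and values are
# hypothetical.

def compare_to_baseline(baseline: dict, current: dict) -> dict:
    """Return the percent change of each metric relative to its baseline."""
    deltas = {}
    for metric, base in baseline.items():
        if base:  # skip zero baselines to avoid division by zero
            deltas[metric] = round(100 * (current[metric] - base) / base, 1)
    return deltas

baseline = {"avg_latency_ms": 12.0, "iops": 4000, "cpu_ready_pct": 2.0}
current  = {"avg_latency_ms": 9.0,  "iops": 4400, "cpu_ready_pct": 1.5}

print(compare_to_baseline(baseline, current))
# {'avg_latency_ms': -25.0, 'iops': 10.0, 'cpu_ready_pct': -25.0}
```

With deltas expressed as percentages, it becomes much easier to argue whether a tuning change or hardware investment actually paid off.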

Leverage the Hypervisor Capabilities

Things like affinity rules can help maintain some of the relationships you want for your applications, such as keeping two components on the same host to reduce IO latency. This can be especially important with multi-tier applications that have high I/O or performance requirements.
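Affinity rules are only useful if they actually hold, so a periodic check is worthwhile. The sketch below is a hedged, tool-agnostic example: it assumes you can export a VM-to-host mapping from your inventory, and flags keep-together rule pairs whose VMs have drifted onto different hosts. The VM and host names are hypothetical.

```python
# Illustrative sketch: detect keep-together affinity rules whose VMs
# no longer share a host. The vm_host mapping stands in for data from
# your inventory or monitoring tooling; all names are hypothetical.

def affinity_violations(rules, vm_host):
    """Return the rule pairs whose two VMs are not on the same host."""
    return [(a, b) for a, b in rules if vm_host[a] != vm_host[b]]

vm_host = {"app01": "esx1", "db01": "esx1", "web01": "esx2"}
rules = [("app01", "db01"), ("app01", "web01")]

print(affinity_violations(rules, vm_host))  # [('app01', 'web01')]
```

Running a check like this after maintenance windows or DRS rebalancing catches the cases where an I/O-sensitive pair quietly ends up separated.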

Another important practice is the proper use of resource pools to help manage performance. While it may be relatively straightforward to allocate resources initially, a resource pool is not a "set it and forget it" feature. A larger pool may be very active yet show a smaller percentage change in utilization than a smaller pool, so raw activity alone can be misleading. As a result, both the number of VMs in a pool and the size of the pool itself should be evaluated and adjusted regularly to ensure the desired performance levels are maintained.
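One simple way to operationalize that regular evaluation is a utilization check like the sketch below, which flags pools whose aggregate VM demand exceeds a review threshold. The threshold and the MHz figures are assumptions chosen for the example.

```python
# Illustrative sketch: flag resource pools whose aggregate VM demand
# exceeds a review threshold. The 80% threshold and the MHz figures
# are assumptions for the example.

def pool_needs_review(pool_mhz: int, demand_mhz: list,
                      threshold: float = 0.8) -> bool:
    """True when total VM demand exceeds the threshold share of the pool."""
    return sum(demand_mhz) / pool_mhz > threshold

print(pool_needs_review(20000, [6000, 7000, 5000]))  # True  (90% consumed)
print(pool_needs_review(20000, [4000, 5000]))        # False (45% consumed)
```

Whether the right response is resizing the pool or moving VMs out of it depends on the application priorities discussed earlier, but the trigger to look can be this mechanical.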

Grouping

How you group your VMs and applications can also be important, and there are two schools of thought here. One approach is to group high-demand applications on a single host or datastore and make sure plenty of resources are allocated to ensure performance. This has the advantage of reserving higher-cost hardware for your most important applications, and it makes management easier because the primary focus can stay on those resources.

The alternate approach, which I discussed previously, is to mix applications (e.g., high-demand production with development and test servers). With this approach, you're less likely to have a single resource slammed by a flood of CPU, memory, or I/O demands at the same time. In a worst-case scenario, the impact of shutting down a development VM to prevent a critical app from failing will likely be less than having to choose which production application to kill.
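That worst-case triage logic can be sketched in a few lines: when a host is starved, reclaim resources from the lowest-priority VM first. The priority labels and VM names below are hypothetical stand-ins for whatever classification your environment uses.

```python
# Illustrative sketch of the triage described above: when resources run
# out, pick the VM whose loss hurts least. Priority labels and VM names
# are hypothetical.

PRIORITY = {"prod": 3, "test": 2, "dev": 1}

def shutdown_candidate(vms):
    """Return the VM with the lowest priority, i.e. the safest to stop."""
    return min(vms, key=lambda vm: PRIORITY[vm["role"]])

vms = [
    {"name": "erp01",   "role": "prod"},
    {"name": "build01", "role": "dev"},
    {"name": "qa02",    "role": "test"},
]

print(shutdown_candidate(vms)["name"])  # build01
```

The point of mixing workloads is precisely that a cheap sacrifice like this exists; on a host running only production VMs, every option is painful.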

Visibility

We all know what happens to the best-laid plans. Visibility into what's happening in your environment lets you see problems as they develop and fix them proactively before they become critical. Since the goal is to align performance all the way from the application to the VM, host, and datastore, and then down to storage LUNs, RAID groups, and disks, this visibility needs to deliver all of that information with application context in real time. An integrated server, application, and virtualization monitoring tool can provide this type of visibility.
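The cross-layer alignment described above implies a mapping from each application down through its stack. As a minimal sketch, the snippet below walks a hypothetical topology dictionary (standing in for data an integrated monitoring tool would supply) so an alert can carry the full application context; every name in it is an assumption for illustration.

```python
# Illustrative sketch: resolve an application's full stack so alerts
# can carry application context. The STACK dict is a hypothetical
# stand-in for data from an integrated monitoring tool.

STACK = {
    "order-app": {"vm": "app01", "host": "esx1",
                  "datastore": "ds-gold", "lun": "lun-07"},
}

def stack_for(app: str) -> str:
    """Render the app-to-storage path as a single readable string."""
    layers = STACK[app]
    return " -> ".join([app, layers["vm"], layers["host"],
                        layers["datastore"], layers["lun"]])

print(stack_for("order-app"))
# order-app -> app01 -> esx1 -> ds-gold -> lun-07
```

With this mapping in hand, a latency spike on lun-07 immediately reads as an order-app problem rather than an anonymous storage alert.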

Given the speed at which the IT environment moves, it can be challenging to keep all the pieces aligned and maintain overall application performance, especially when different teams own each of the application and infrastructure layers involved. By creating a solid initial plan and defining how everyone will communicate, prioritize, and react to the inevitable changes, you can stay ahead of performance problems rather than landing in crisis mode after they break.

##

About the Author

Michael Thompson, Director, Systems Management Product Marketing, SolarWinds. 

Michael has worked in the IT management industry for more than 13 years, including leading product management teams and portfolios in the storage and virtualization/cloud spaces for IBM. He holds a master of business administration and a bachelor's degree in chemical engineering. 

Make sure to also read, "App-Centric and Admin-Centric -- Too Much to Ask?"

Published Tuesday, January 27, 2015 6:36 AM by David Marshall