How to Create a More Sophisticated Cloud Management Platform
3 Steps to Enhance Routing Decisions in VMware vRealize Automation

A Contributed Article by Andrew Hillier, Co-founder & CTO of Cirba

As more organizations plan to deploy cloud management platforms (CMPs) such as VMware vRealize Automation (vRA, recently rebranded from vCloud Automation Center) that span multiple hosting environments, they will need to examine how to route new workloads to the most appropriate environment. vRA is designed to automate the provisioning workflow surrounding new VMs; however, CMPs like this one offer only rudimentary routing logic, such as round-robin or random placement. Determining where to host a workload is actually quite complicated, because a number of factors need to be considered: the technical requirements of the workloads (software licensing, storage type, network connectivity, etc.), business and operational policies (service tiers, regulatory requirements, etc.), resource availability (CPU and memory requirements, operational patterns, peak times and seasons) and relative cost.

vRA generally performs three major functions:

  • Providing a self-service portal for capturing new workload placement requests
  • Selecting an environment in which to start the VMs, based on a round-robin algorithm
  • Working with vRealize Orchestrator to automate the provisioning process and start the VM

To make vRA suitable for enterprise-scale deployments and operations, organizations need to dig deeper into each of these functions and make sure vRA works with their other tools and processes and, ultimately, places workloads according to the goals and requirements of the organization. This invariably leads to the realization that round-robin workload routing is not sufficient. In practice, a typical deployment looks like the following diagram:

[Diagram: a typical vRA deployment, with a manual placement decision inserted between the self-service request and automated provisioning]

The big question that arises is: why, when an organization is pursuing automated self-service provisioning, would it insert a manual step right into the middle of that process? Here are three steps organizations can take to get the automation, efficiency and reduced risk they set out to achieve with a vRA deployment.

Step #1 - Turn to analytics to replace vRA's default environment selection function

Most organizations recognize that round-robin workload routing introduces too much risk: making the wrong choice can lead to issues with performance, cost and compliance. Inserting spreadsheets and manual decision-making allows an organization to factor in a few more criteria, but it still cannot account for the complex set of technical, business and utilization constraints that go into hosting decisions, and it makes the process impossible to automate. There is a better way.

In order to put workloads into the right environment, organizations need to understand their detailed requirements, including:

  • Utilization patterns and resource requirements
  • Storage requirements and tiers
  • Network connectivity
  • Software licensing requirements
  • Security and data protection
  • Availability and redundancy
  • Backup and replication
  • Operational policies
  • Business group affinities and constraints
  • Regulatory and compliance constraints
  • Proximity requirements

Analytics are key to matching these requirements against the capabilities of the available infrastructure. Only detailed analysis can determine which environments provide enough of the right types of resources, with the appropriate security, compliance and SLA characteristics. It is equally important that this analysis confirm the environment is not over-specified for the workload, so that needs are met at the lowest relative cost. And to make a truly accurate decision, the analysis must consider future changes in workload demand and resource supply.

Using analytics that look at these factors will ensure:

  • Workloads get access to the kinds of resources they require without incurring excessively high levels of service and cost
  • Workloads are placed with a forward-looking view of changes pending in an environment, avoiding moves just days or weeks down the road
  • Placements adhere to policy or regulatory requirements, ensuring compliance
  • Workloads are globally balanced across an organization's environments to ensure efficiency and avoid prematurely saturating any one environment by consuming one resource well beyond the others
  • The entire process can be automated, removing humans from the loop for cloud provisioning processes
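
To make this concrete, here is a minimal sketch in Python of constraint-based routing. All of the names (Environment, Workload, route, the capability tags) are hypothetical illustrations rather than Cirba's or vRA's actual APIs, and a real analytics engine would model many more dimensions, including utilization patterns over time.

    # Hypothetical sketch of constraint-based workload routing.
    from dataclasses import dataclass, field

    @dataclass
    class Environment:
        name: str
        cpu_free_ghz: float          # forecast free CPU, net of pending reservations
        mem_free_gb: float           # forecast free memory
        capabilities: set = field(default_factory=set)  # e.g. {"pci-dss", "ssd-tier"}
        relative_cost: float = 1.0   # normalized cost per unit of capacity

    @dataclass
    class Workload:
        cpu_ghz: float
        mem_gb: float
        required: set                # hard technical, policy and compliance constraints

    def route(workload, environments):
        """Pick the cheapest environment that satisfies every hard constraint."""
        candidates = [
            env for env in environments
            if workload.required <= env.capabilities   # policy and technical fit
            and env.cpu_free_ghz >= workload.cpu_ghz   # resource fit
            and env.mem_free_gb >= workload.mem_gb
        ]
        if not candidates:
            raise ValueError("no environment satisfies this workload's constraints")
        # Prefer the lowest-cost fit so the environment is not over-specified.
        return min(candidates, key=lambda env: env.relative_cost)

Even this toy version captures the core idea: hard constraints filter the candidates, and relative cost ranks them, so a workload never lands in an environment that is over-specified for its needs.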


Step #2 - Implement a reservation system that actually locks resources

vRA doesn't actually lock in requested capacity for a specific deployment. Although vRA has a feature called reservations, it simply draws on whatever capacity is available and assigns each reservation a priority rating that determines which requests get the resources when the time comes to deploy the workloads. With vRA it is possible to overbook resources, so your "reservation" may be meaningless.

A reservation system needs to be based on a continuously updated model of your environment. Within this model, the system must be able to book and hold the required resources for new workloads at the time when they will be required.

Consider how this management challenge relates to hotel operators, who constantly align guest demands with hotel resources and amenities. The analogy is instructive. You wouldn't dream of managing a major hotel (particularly one like the Ritz, which prides itself on tracking and understanding its customers' unique needs) without an extremely refined and detailed reservation system, and yet this is exactly how companies are currently managing their virtual and internal cloud environments. If the Ritz operated this way, the chain would need to build many more rooms than necessary to meet unpredictable guest demand, and its hotels would be plagued with complaints as room configurations turned out wrong, amenities were unavailable and families were split across different floors. This should sound familiar to anyone who has managed a production virtual environment. By abandoning simplistic spreadsheets and adopting these same principles for matching demand to the right capacity supply, IT organizations can significantly reduce risk and cost while ensuring service levels.

If done correctly, the ability to actually hold a reservation will:

  • Guarantee resource availability at the time of deployment
  • Enable support for all types of enterprise workload requests beyond immediate self-service requests, such as enterprise application deployments or upgrades that are planned long in advance
  • Create an accurate forward-looking model so you know when you will actually need to buy more resources
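
For illustration, here is a minimal sketch in Python of a reservation model that actually holds capacity against a forward-looking calendar. The class and method names, and the single-environment, CPU-and-memory-only model, are hypothetical simplifications; this is not how vRA implements its reservations feature.

    # Hypothetical sketch of a reservation system that locks capacity.
    from collections import defaultdict
    from datetime import date

    class CapacityModel:
        def __init__(self, total_cpu_ghz, total_mem_gb):
            self.total = {"cpu": total_cpu_ghz, "mem": total_mem_gb}
            # Bookings keyed by start date; a real model would track end dates too.
            self.booked = defaultdict(lambda: {"cpu": 0.0, "mem": 0.0})

        def committed(self, when):
            """Capacity already promised to reservations active by `when`."""
            out = {"cpu": 0.0, "mem": 0.0}
            for start, res in self.booked.items():
                if start <= when:
                    out["cpu"] += res["cpu"]
                    out["mem"] += res["mem"]
            return out

        def reserve(self, cpu, mem, start):
            """Book and hold capacity for `start`; refuse rather than overbook."""
            used = self.committed(start)
            if used["cpu"] + cpu > self.total["cpu"] or used["mem"] + mem > self.total["mem"]:
                return False  # denied: honoring it would overbook the environment
            self.booked[start]["cpu"] += cpu
            self.booked[start]["mem"] += mem
            return True

    # Usage: capacity booked today for a June deployment is held until then.
    model = CapacityModel(total_cpu_ghz=100.0, total_mem_gb=512.0)
    model.reserve(cpu=8.0, mem=32.0, start=date(2015, 6, 1))  # True: capacity held

The key design point is that reserve() either holds real capacity or refuses outright; there is no priority rating that lets a later request silently consume what was already "reserved".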

Step #3 - Leverage policy-based analytics to determine host-level placements

VMware relies on its Distributed Resource Scheduler (DRS) product to determine host-level placements. DRS is an effective real-time load balancer that moves VMs to relieve resource contention when host servers become overloaded. Used that way, it's a great operational safety net. But while this approach helps avoid hot spots, DRS does not combine workloads in a way that optimizes VM density.

VM placements are critical to achieving efficient infrastructure. By analyzing detailed workload patterns and personalities, and considering the applicable technical, business and policy constraints that dictate where workloads can go, it is possible to combine workloads so that their utilization patterns dovetail, safely increasing density in virtualized and cloud infrastructure.

Think of it like a game of Tetris. If you are smart about fitting the Tetris blocks together, you make better use of the playing area and can fit more blocks in. Virtual infrastructure is no different: smarter placements translate to a reduced risk of resource contention and increased density, which saves money on hardware and software.

This goes beyond initial placement; it is important to do on a continuous basis. As environments grow and workloads change, what was once optimal may no longer be. DRS can act as an effective safety net, but relying on it alone won't increase or maintain efficiency. You can only densify infrastructure by leveraging purpose-built analytics that calculate all combinations and permutations of workload placements to determine the optimal solution, as sketched below.
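
As a rough illustration of the Tetris idea, the sketch below packs workloads onto hosts with a first-fit-decreasing heuristic over hourly utilization profiles, so workloads whose peaks occur at different hours can share a host. This is a hypothetical toy, far simpler than analytics that evaluate all combinations and permutations, but it shows how dovetailing patterns increases density.

    # Hypothetical sketch: first-fit-decreasing packing over 24-hour CPU profiles.
    def place(workloads, host_capacity, hours=24):
        """workloads: list of (name, hourly CPU demand list); returns host -> names."""
        # Place the biggest blocks first, as in Tetris.
        ordered = sorted(workloads, key=lambda w: max(w[1]), reverse=True)
        hosts = []  # each host holds (list of workload names, combined hourly load)
        for name, profile in ordered:
            for names, load in hosts:
                # First fit: reuse a host if the combined profile stays within capacity.
                if all(load[h] + profile[h] <= host_capacity for h in range(hours)):
                    names.append(name)
                    for h in range(hours):
                        load[h] += profile[h]
                    break
            else:
                hosts.append(([name], list(profile)))  # no fit anywhere: open a new host
        return {"host-%d" % i: names for i, (names, _) in enumerate(hosts)}

Two workloads that each peak at 80% of a host's capacity, but at different hours, can share a host here, whereas placement based on peak demand alone would keep them on separate hosts.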


Conclusion

It's important to invest time in understanding how your chosen CMP handles VM routing. It's not uncommon for organizations to turn to spreadsheets, inserting a manual step between the user requesting capacity through a self-service portal and the automated provisioning that follows. Not only does this defeat the goal of automated self-service, it doesn't solve the problem at hand: humans working from spreadsheet lists of new requests cannot effectively match all the various requirements against the available infrastructure, accounting for utilization levels, current workload placements and the myriad other factors that affect the decision. The solution lies in applying purpose-built analytics that scientifically match the requirements of the demand against the capabilities of the infrastructure in the available environments. This approach not only provides a low-risk way of routing workloads, but also enables automated access to capacity, which is one of the key goals of deploying a CMP in the first place.

##

About the Author

Andrew Hillier has over 20 years of experience in the creation and implementation of mission-critical software for the world's largest financial institutions and utilities. A co-founder of Cirba, he leads product strategy and defines the overall technology roadmap for the company.

Prior to Cirba, Hillier pioneered a state-of-the-art systems management solution that was acquired by Sun Microsystems and served as the foundation of its flagship systems management product, Sun Management Center. Hillier has also led the development of solutions for major financial institutions, including fixed income, equity, futures & options and interest rate derivatives trading systems, as well as systems for covert military surveillance, advanced traffic and train control, and the robotic inspection and repair of nuclear reactors.

Hillier holds a Bachelor of Science degree in computer engineering from The University of New Brunswick.

Published Tuesday, March 24, 2015 6:40 AM by David Marshall