The changes introduced by Virtualization

Contributed by Paul Lopez from bodHOST.com

Here we are going to look at four major concerns related to application design and infrastructure development.

It is obvious that today all companies are moving toward virtualization technology to deploy their applications. Server consolidation has also led to the consolidation of storage and networks, which were previously reserved for a few resource-intensive applications. Many companies invested in this area quickly because these technologies are reliable and the return on investment (ROI) is usually fast. This point, however, is crucial. The question you might ask is: has the adoption of these technologies been faster than our ability to adapt to the change?

Indeed, if we agree that these technological changes have a major impact on the skills of IT staff, reduce the consequences of failure, improve the quality of service to users and offer many technical possibilities to increase the overall performance of the infrastructure, then why is the design of an application infrastructure today still so often approached as if it were a physical environment?

For years, applications have been sized with dedicated resources in mind. All the usual arguments (architecture, diagnostics, monitoring and much more) are still strongly attached to this logic. The rules are different now. For example, can I still use the usual application sizing tools with a set of cloud servers?

The evolution of processors

For the last 20 years, processor evolution has played an important role in this area, with the appearance of instructions that can process several data sets simultaneously, hyper-threading (two instruction streams executing at the same time in the same memory space), multi-core designs (several instruction streams running alongside each other, each in its own memory space), and higher frequencies (internal processor bus and memory). As we know, the frequency of a processor matters because many performance factors depend on it. In this context, what is a virtual processor? Typically, a hypervisor divides the physical processor into virtual processors, counting each hyper-threading thread as if it were a full core.

For example, a hypervisor with two quad-core processors with hyper-threading corresponds to 16 virtual processors (2 processors x 4 cores x 2 hyper-threading threads). As long as the hypervisor manages virtual servers requiring fewer virtual processors than it has, the allocation is static. This means that a virtual server with a single virtual processor running on that hypervisor can use the full power of one virtual processor (1/16th of the total in our example). When more virtual processors are needed than the hypervisor can provide, a scheduler becomes responsible for distributing the resources among the running applications. This principle has many consequences.
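
To make this arithmetic concrete, here is a minimal Python sketch of the virtual processor count and the overcommit ratio described above. The socket, core and hyper-threading figures match the example in the text; the fleet of ten 2-vCPU virtual servers at the end is purely hypothetical.

# Minimal sketch of the vCPU arithmetic described above. The hardware figures
# (2 sockets, 4 cores each, hyper-threading factor of 2) match the example in
# the article; the virtual server fleet is a hypothetical illustration.

def virtual_processors(sockets, cores_per_socket, threads_per_core=2):
    """Number of virtual processors a hypervisor typically exposes."""
    return sockets * cores_per_socket * threads_per_core

def overcommit_ratio(allocated_vcpus, sockets, cores_per_socket, threads_per_core=2):
    """Ratio of vCPUs promised to virtual servers versus vCPUs available.
    Above 1.0 the hypervisor scheduler has to time-share the physical resources."""
    return allocated_vcpus / virtual_processors(sockets, cores_per_socket, threads_per_core)

if __name__ == "__main__":
    total = virtual_processors(sockets=2, cores_per_socket=4)   # 16, as in the article
    print(f"Virtual processors available: {total}")
    # Hypothetical fleet: ten virtual servers with 2 vCPUs each.
    print(f"Overcommit ratio: {overcommit_ratio(10 * 2, 2, 4):.2f}")  # 1.25 -> scheduler arbitrates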

The memory allocation of virtual servers

The second major change concerns memory allocation. It is worth remembering that when a virtual server uses all the memory assigned to it, it starts using its paging file (swapping to virtual memory). Even if the hypervisor has more memory available, the virtual server's specification remains static. The hypervisor does not reserve the memory allocated to virtual servers; it distributes physical memory on demand, within the limits defined in each virtual server's design. If a virtual server does not use all the memory it is allowed, the hypervisor keeps the remainder available.

From this principle, it is possible to run on a single hypervisor a set of virtual servers whose combined memory allocation is greater than the physical capacity of the hypervisor.

What happens in this case?

As long as the virtual servers do not actually use all the memory they are allowed, there are no consequences. If they do, the hypervisor activates different memory reclamation mechanisms. The first is to collect unused memory from all the virtual servers, which VMware calls "ballooning" in vSphere. The second is the sharing of identical memory pages between virtual servers. The third is on-the-fly compression of virtual server memory. The last mechanism creates a swap file on the hypervisor (swapping to virtual memory): the memory of the virtual servers then lives in a disk file, which degrades performance significantly.
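
As a rough illustration of when these mechanisms come into play, here is a small Python sketch that compares what the virtual servers are promised with what they actually use. All figures are made up, and real hypervisors use far more sophisticated accounting than this.

# Rough sketch of the memory overcommit situation described above, with
# invented numbers. It only shows when a hypervisor would have to fall back on
# reclamation (ballooning, page sharing, compression, then swapping).

def memory_pressure(host_ram_gb, vm_allocated_gb, vm_active_gb):
    allocated = sum(vm_allocated_gb)   # what the virtual servers are promised
    active = sum(vm_active_gb)         # what they actually touch
    return {
        "overcommitted": allocated > host_ram_gb,
        "reclamation_needed": active > host_ram_gb,
        "allocation_ratio": round(allocated / host_ram_gb, 2),
    }

if __name__ == "__main__":
    # A 64 GB host running virtual servers promised 96 GB in total
    # but actively using only 48 GB.
    print(memory_pressure(64, [32, 32, 16, 16], [18, 14, 9, 7]))
    # -> overcommitted, but no reclamation needed yet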

Pooling disks

This is the third major change. We all know the usual rules of thumb: a 10,000 RPM disk delivers about 130 I/O operations per second, a 15,000 RPM disk delivers about 180 I/O operations per second, and the RAID type affects total disk performance. This remains valid to some extent, but other factors come into play. These disks are no longer local; they have moved to a storage array that embeds its own caching and data processing technologies to maximize the performance of the magnetic media. Each manufacturer offers different performance. Moreover, this storage is usually reached over a network (iSCSI, Fibre Channel, etc.), which forces us to consider the network architecture as well.
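
For reference, the traditional rule-of-thumb calculation looks something like the Python sketch below. The per-disk IOPS figures come from the paragraph above; the RAID write penalties (2 for RAID 10, 4 for RAID 5, 6 for RAID 6) are the commonly quoted values and should be treated as assumptions here.

# The traditional per-spindle sizing rule mentioned above, as a quick sketch.
# Per-disk IOPS figures are taken from the article; the RAID write penalties
# are the usual rule-of-thumb values (assumptions, not vendor data).

DISK_IOPS = {10000: 130, 15000: 180}
RAID_WRITE_PENALTY = {"RAID10": 2, "RAID5": 4, "RAID6": 6}

def usable_iops(disks, rpm, raid, read_ratio=0.7):
    """Effective front-end IOPS of a RAID group for a given read/write mix."""
    raw = disks * DISK_IOPS[rpm]
    write_ratio = 1.0 - read_ratio
    # Writes cost extra back-end I/O depending on the RAID level.
    return raw / (read_ratio + write_ratio * RAID_WRITE_PENALTY[raid])

if __name__ == "__main__":
    print(f"8 x 15k RPM in RAID5, 70% reads: {usable_iops(8, 15000, 'RAID5'):.0f} IOPS")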

Finally, this hardware is usually accessed by several virtual servers at the same time, so its performance is shared. In the case of a public cloud, IOPS performance is often not even published. Calculating how many disks are required, in what format and with what type of RAID in order to size a server is therefore no longer suitable. The concept of a storage class of service appears, and monitoring key indicators ensures that the infrastructure provides the expected level of performance.

The changes to the network

In line with the changes related to storage, several things change on the network side. Although the hypervisor pools its network interfaces for the virtual servers running on the physical node, the basic principles of the OSI network model still apply. Each frame carries a hardware (MAC) address, which feeds the forwarding tables of the network equipment, and the hypervisors themselves are physically connected. If a hypervisor has several network cards, the network card of a virtual server will always use one of these physical cards, or sometimes complex protocols are used to implement link aggregation.

The distribution of a virtual server's network flows across the hypervisor's physical network cards is usually static. Several virtual servers can therefore share a single physical card, and with it the card's bandwidth.
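
As a naive sketch of that bandwidth sharing, with purely illustrative numbers: when the combined demand of the virtual servers pinned to one physical card exceeds its capacity, each one gets scaled back.

# Naive sketch of several virtual servers sharing one physical network card.
# All capacities and demands are illustrative assumptions; a real virtual
# switch applies its own queuing and shaping policies.

def fair_share_mbps(nic_capacity_mbps, demands_mbps):
    """Proportional share when total demand exceeds the card's capacity."""
    total = sum(demands_mbps)
    if total <= nic_capacity_mbps:
        return demands_mbps
    return [round(d * nic_capacity_mbps / total) for d in demands_mbps]

if __name__ == "__main__":
    # Three virtual servers on one 1 Gb/s card asking for 600, 500 and 300 Mb/s.
    print(fair_share_mbps(1000, [600, 500, 300]))  # each is scaled back proportionally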

These four basic elements profoundly alter the approach to sizing an infrastructure and managing it day to day. The cloud, whose importance keeps growing, makes the hardware-centric approach somewhat outdated. In reality, service providers do not disclose the characteristics of the underlying hardware (type of processors, memory, disk performance, bandwidth, hypervisor, etc.). And besides, that is not what matters.

So how do we size infrastructure and applications? How can we guarantee a quality of service to users? Monitoring key indicators within the virtual servers, through standard measurements, can help us obtain all the benefits without depending on knowledge of the underlying hardware. The presentation of these indicators will be the subject of a future expert advice article. I am looking forward to hearing your responses to these questions.
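
As one example of such an indicator measured inside a virtual server, the Python sketch below reads CPU "steal" time on Linux from /proc/stat, i.e. the time the hypervisor scheduler spent serving other virtual servers instead of this one. It is only an illustration of the kind of standard measurement meant here, not the specific set of indicators the author has in mind.

# Example of a key indicator read from inside a virtual server: CPU steal time.
# Linux only; it assumes the standard /proc/stat field order
# (user, nice, system, idle, iowait, irq, softirq, steal, ...).

import time

def cpu_steal_percent(interval=1.0):
    def snapshot():
        with open("/proc/stat") as f:
            fields = [int(x) for x in f.readline().split()[1:]]
        return sum(fields), fields[7]      # total jiffies, steal jiffies
    total1, steal1 = snapshot()
    time.sleep(interval)
    total2, steal2 = snapshot()
    elapsed = total2 - total1
    return 100.0 * (steal2 - steal1) / elapsed if elapsed else 0.0

if __name__ == "__main__":
    print(f"CPU steal over the last second: {cpu_steal_percent():.1f}%")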

##

About the Author

Paul Lopez is a technology writer and sales & marketing executive at bodHOST.com, a cloud & dedicated server hosting company based in New Jersey.
Published Friday, January 03, 2014 7:09 AM by David Marshall