In this paper, we outline the architectural components and considerations for our Stratusphere FIT and Stratusphere UX products. This paper is intended for technical audiences who are already generally familiar with these solutions and the functionality they provide.
As more organizations prepare for and deploy hosted virtual desktops, it has become clear that there is a need to support two related but critical phases. The first is to inventory and assess the physical desktop environment in order to create a baseline for the performance and quality of the user experience of the virtual desktop counterparts. When planning and preparing, organizations want to know which desktops, users and applications are a good fit for desktop virtualization and which ones are not. The second phase is to track the virtual desktops in production in order to proactively identify when performance and user experience do not meet expectations, as well as to continue to refine and optimize the desktop image and infrastructure as changes are introduced into the environment.
Because virtual desktops run on shared systems, the additional layers of technology make it more complex to measure and classify fitness and user experience. But with increased industry knowledge, the emergence of best practices, and new purpose-built products such as Liquidware Labs’ Stratusphere FIT and Stratusphere UX, it is now possible to measure and classify both far more accurately. This white paper covers these best practices and provides an introduction to the VDI FIT and VDI UX classification capabilities in Stratusphere FIT and Stratusphere UX.
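To make the idea of a fitness classification concrete, below is a minimal Python sketch that buckets a physical desktop into VDI FIT tiers from its baseline metrics. The metric names, thresholds and tier labels are illustrative assumptions only; they are not Stratusphere FIT's actual scoring model.

# Illustrative only: hypothetical metrics and thresholds, not Liquidware's scoring model.
def classify_vdi_fit(avg_cpu_pct, avg_mem_mb, peak_iops):
    """Bucket a physical desktop into a VDI FIT tier from its measured baseline."""
    if avg_cpu_pct < 20 and avg_mem_mb < 2048 and peak_iops < 30:
        return "Good"   # light footprint, strong virtualization candidate
    if avg_cpu_pct < 50 and avg_mem_mb < 4096 and peak_iops < 80:
        return "Fair"   # candidate, but may need image or infrastructure tuning
    return "Poor"       # heavy workload, re-evaluate before virtualizing

print(classify_vdi_fit(avg_cpu_pct=15, avg_mem_mb=1500, peak_iops=22))  # -> Good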
Chapter 1: An Introduction to VMware Virtualization
Chapter 2: Backup and Recovery Methodologies
Chapter 3: Data Recovery in Virtual Environments
This is not a paper on disk defragmentation. Although the concept is similar, this paper describes an entirely new approach to server optimization, one that performs an analogous operation on the compute, memory and IO capacity of entire virtual and cloud environments.
Capacity defragmentation is a concept that is becoming increasingly important in the management of modern data centers. As virtualization penetrates further into production environments, and as public and private clouds move to the forefront of the IT mindset, the ability to leverage this newfound agility while at the same time driving high efficiency (and low risk) is a real game changer. This white paper outlines how managers of IT environments can make the transition from old-school capacity management to new-school efficiency management.
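As a rough illustration of what "fragmented" capacity means in this context (a sketch with assumed numbers, not any vendor's algorithm), the following Python snippet shows how free memory scattered across hosts can strand capacity: the cluster has ample aggregate headroom, yet no single host can place a large incoming VM until existing workloads are consolidated.

# Hypothetical cluster: free memory (GB) per host; all numbers are assumed for illustration.
free_per_host = [6, 5, 7, 4]   # 22 GB free in aggregate
new_vm_need = 12               # GB required by an incoming VM

total_free = sum(free_per_host)
largest_slot = max(free_per_host)

print(f"Aggregate free: {total_free} GB; largest single-host slot: {largest_slot} GB")
if largest_slot < new_vm_need <= total_free:
    print("Capacity is fragmented: the VM fits in aggregate but on no single host.")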
Modern applications, especially within virtualized environments, require high performance from storage to keep up with the rate of data acquisition and the unpredictable demands of enterprise workloads. In a world that requires near-instant response times and increasingly faster access to data, the needs of business-critical tier 1 enterprise applications, such as databases including SQL, Oracle and SAP, have been largely unmet.
The major bottleneck holding back the industry is I/O performance. Current systems still rely on device-level optimizations tied to specific disk and flash technologies; they lack software optimizations that can fully harness the latest advances in more powerful server system technologies, such as multicore architectures. As a result, they have not been able to keep up with the pace of Moore’s Law.
On closer examination, we find the root cause to be IO-starved virtual machines (VMs), especially for heavy online transaction processing (OLTP) apps, databases and mainstream IO-intensive workloads. Plenty of compute power is at their disposal, but servers have a tough time fielding inputs and outputs. This gives rise to an odd phenomenon: virtualized apps stall while many processor cores remain idle.
So how exactly do we crank up IOs to keep up with the computational appetite while shaving costs? This can best be achieved by parallel IO technology designed to process IO across many cores simultaneously, thereby putting those idle CPUs to work. Such technology has been developed by DataCore Software, a long-time master of parallelism in the field of storage virtualization.
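The general idea can be sketched in a few lines of Python: independent workers each drive their own stream of IO so that otherwise idle cores contribute to throughput instead of queuing behind a single IO path. This is a simplified illustration against scratch files, not DataCore's parallel IO engine.

# Simplified parallel-IO illustration; not DataCore's implementation.
import os, tempfile, threading

BLOCK = b"\0" * 4096               # 4 KiB writes, a common small-IO size
WRITES_PER_WORKER = 1000
NUM_WORKERS = os.cpu_count() or 4  # one IO stream per available core

def worker(path):
    """Each worker issues its own independent stream of writes."""
    with open(path, "wb", buffering=0) as f:
        for _ in range(WRITES_PER_WORKER):
            f.write(BLOCK)

tmpdir = tempfile.mkdtemp()
threads = [threading.Thread(target=worker, args=(os.path.join(tmpdir, f"w{i}.bin"),))
           for i in range(NUM_WORKERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"{NUM_WORKERS} workers each completed {WRITES_PER_WORKER} writes")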
In this paper, we will discuss DataCore’s underlying parallel architecture, how it evolved over the years, and how it results in a markedly different way to address the craving for IOPS (input/output operations per second) in a software-defined world.
Until now, the most common data replication technologies and methods essential to mission-critical BC/DR initiatives have been tied to the physical environment. Although they do work in the virtual environment, they aren’t optimized for it. With the introduction of hypervisor-based replication, Zerto elevates BC/DR up the infrastructure stack to where it belongs: the virtualization layer.
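Conceptually, hypervisor-based replication can be pictured as a write splitter in the virtualization layer: each VM write is committed locally and a copy is queued to a journal that is shipped asynchronously to the recovery site, independent of the underlying storage hardware. The Python sketch below is only an illustration of that concept; the class and function names are hypothetical and do not reflect Zerto's actual implementation or APIs.

# Conceptual write-splitter sketch; hypothetical names, not Zerto's implementation.
from collections import deque

class ReplicationJournal:
    """Holds copies of VM writes awaiting asynchronous shipment to the recovery site."""
    def __init__(self):
        self.pending = deque()
    def record(self, vm, offset, data):
        self.pending.append((vm, offset, data))
    def ship(self):
        while self.pending:
            vm, offset, data = self.pending.popleft()
            print(f"replicating {len(data)} bytes of {vm} @ offset {offset}")

journal = ReplicationJournal()

def vm_write(vm, offset, data, local_disk):
    """Split each write: commit it locally, then copy it to the replication journal."""
    local_disk[offset] = data           # local commit (disk simplified as a dict)
    journal.record(vm, offset, data)    # copy queued for asynchronous replication

disk = {}
vm_write("sql-vm01", 4096, b"payload", disk)
journal.ship()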
Benefits of Hypervisor-Based Replication:
Can mission-critical apps really be protected in the cloud?
Introducing: Cloud Disaster Recovery
Today, enterprises of all sizes are virtualizing their mission-critical applications, either within their own data centers or with an external cloud vendor. One key driver is to leverage the flexibility and agility virtualization offers to increase availability, business continuity and disaster recovery. With the cloud becoming more of an option, enterprises of all sizes are looking to the cloud, be it public, hybrid or private, to become part of their BC/DR solution. However, these options do not always exist.
Virtualization has created the opportunity, but there is still a significant technology gap. Mission-critical applications can be effectively virtualized and managed; however, the corresponding data cannot be effectively protected in a cloud environment.
Additional Challenges for Enterprises with the Cloud:
Solutions with Zerto Virtual Replication: