White Papers Search Results
HP, VMware & Liquidware Labs Simplify Desktop Transformation
This whitepaper provides an overview of the requirements and benefits of launching a virtual desktop project on a proven, enterprise-ready solution stack from HP, VMware, and Liquidware Labs. HP VirtualSystem CV2, with VMware View and Liquidware Labs ProfileUnity, offers a comprehensive virtual desktop solution stack with integrated User Virtualization Management and Dynamic Application Portability. By combining offerings from proven industry leaders in this end-to-end solution, customers can fast-track their desktop transformation.
Desktops and workspaces are moving to virtual and cloud technologies at a lightning-fast pace. The rapid adoption of Microsoft Windows 7 (and soon Windows 8), virtual desktop strategies, cloud storage, and virtual applications has created a perfect storm that is driving organizations to adopt client virtualization now.

You need a plan, one that is complete and capable of guiding you through this key phase of your desktop transformation project. HP and Liquidware Labs offer a comprehensive User Virtualization Management and Dynamic Application Portability (DAP) solution that addresses the key requirements of your desktop transformation to a virtual desktop infrastructure (VDI).

User Virtualization and Dynamic Application Portability from HP and Liquidware Labs are integral to your VDI project, providing the following:

  • Dramatic savings in storage, licensing, and management costs, using robust and flexible persona management to leverage non-persistent desktops.
  • Instant productivity within seconds of logon, with automatic context-aware configurations that enable flexible desktop environments where users can log on to any desktop, physical or virtual.
  • Fewer golden image builds without giving up personalization; user- and department-installed applications break down the barriers to user adoption and fast-track productivity in the new environment.
Stratusphere Architectural Overview
In this paper, we outline the architectural components and considerations for our Stratusphere FIT and Stratusphere UX products. It is intended for technical audiences who are already generally familiar with these solutions and the functionality they provide.

VDI FIT and VDI UX Key Metrics
As more organizations prepare for and deploy hosted virtual desktops, it has become clear that compatibility and performance must be measured for virtual desktop infrastructure, both in the planning process and when gauging user experience after the move to virtual desktops. This whitepaper covers best practices and introduces the composite metrics VDI FIT (in Stratusphere FIT) and VDI UX (in Stratusphere UX), which provide a framework for classifying desktops as Good, Fair, or Poor.

As more organizations prepare for and deploy hosted virtual desktops, it has become clear that there is a need to support two related but critical phases. The first is to inventory and assess the physical desktop environment in order to create a baseline for the performance and quality of the user experience of the virtual desktop counterparts. When planning and preparing, organizations would like to know which desktops, users and applications are a good fit for desktop virtualization and which are not. The second phase is to track the virtual desktops in production in order to proactively identify when performance and user experience do not meet expectations, as well as to continue to refine and optimize the desktop image and infrastructure as changes are introduced into the environment.

Because virtual desktops live on shared systems, the layers of technology make it more complex to measure and classify fitness and user experience. But with increased industry knowledge and the emergence of best practices, plus new purpose-built products such as Liquidware Labs’ Stratusphere FIT and Stratusphere UX, it is now possible to more accurately measure and classify both fitness and user experience. This white paper covers these best practices and provides an introduction to the VDI FIT and VDI UX classification capabilities in Stratusphere FIT and Stratusphere UX.
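
As a purely illustrative sketch of what a composite Good/Fair/Poor rating involves, the Python below rolls a few hypothetical per-desktop metrics into one score. The metric names, weights, and thresholds are assumptions invented for illustration only; they are not Stratusphere's actual VDI FIT or VDI UX formulas.

    # Hypothetical composite rating -- not Liquidware's actual scoring model.
    from dataclasses import dataclass

    @dataclass
    class DesktopSample:
        name: str
        cpu_pct: float        # average CPU utilization (percent)
        iops: float           # average disk I/O operations per second
        login_seconds: float  # time from logon to a usable desktop

    def composite_score(s: DesktopSample) -> float:
        # Normalize each metric to 0..1 (higher is worse), then weight.
        cpu = min(s.cpu_pct / 100.0, 1.0)
        io = min(s.iops / 500.0, 1.0)
        login = min(s.login_seconds / 60.0, 1.0)
        return 1.0 - (0.4 * cpu + 0.3 * io + 0.3 * login)

    def rate(score: float) -> str:
        return "Good" if score >= 0.75 else "Fair" if score >= 0.5 else "Poor"

    for s in [DesktopSample("desktop-01", 22, 40, 12),
              DesktopSample("desktop-02", 85, 420, 55)]:
        print(s.name, rate(composite_score(s)))

In a real assessment, the weights would be calibrated against the physical-desktop baseline gathered in the first phase described above.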

Liquidware Labs ProfileUnity and its Role as a Disaster Recovery Solution
As desktop operating systems become more and more complex, a proper disaster recovery methodology for the desktop is increasingly crucial, whether the desktop is physical or virtual. In addition, many enterprise customers are leveraging additional technologies, including local hard drive encryption, persistent virtual images and non‐persistent virtual images. This paper provides an overview of these issues and outlines how a disaster recovery (DR) plan, coupled with Liquidware Labs ProfileUnity, can address them.
Many corporations around the globe leverage Virtual Desktop Infrastructure (VDI) as a strategic, cost‐effective methodology to deliver business continuity for user applications and data. Virtualization renders a physical computer made of metal, plastic and silica as a portable file that can be moved through a network from a data center to a disaster recovery (DR) site. Although this may sound easy, transferring virtual machine files can challenge corporate networks in a number of ways. Moving large amounts of data is a time-consuming process that may take days to complete, and once the archival process is complete, the data is effectively out of date or out of context. In response, various strategies focus on transferring the bulk of the data once and then updating and replicating only the changes. Desktop infrastructure is particularly sensitive to synchronization: applications must stay in sync to run properly, yet desktops, applications and data change often. This has given birth to a whole new set of strategies and software unique to desktops to accomplish backups safely and effectively. Liquidware Labs’ ProfileUnity™ is a best-of-breed solution that provides end users with a seamless DR experience identical to the one at the home office.
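
The "seed once, then replicate only the changes" strategy described above can be illustrated with a short sketch. This is an assumption-laden toy, not ProfileUnity's replication mechanism; production tools (rsync, array-level replication) do the same job far more efficiently.

    # Toy change-only replication: copy a file only if it is new or changed.
    # Not ProfileUnity's mechanism -- illustrates the strategy only.
    import hashlib
    import shutil
    from pathlib import Path

    def digest(path: Path) -> str:
        h = hashlib.sha256()
        with path.open("rb") as f:
            for chunk in iter(lambda: f.read(1 << 20), b""):
                h.update(chunk)
        return h.hexdigest()

    def replicate_changes(source: Path, replica: Path) -> int:
        copied = 0
        for src in source.rglob("*"):
            if not src.is_file():
                continue
            dst = replica / src.relative_to(source)
            if not dst.exists() or digest(src) != digest(dst):
                dst.parent.mkdir(parents=True, exist_ok=True)
                shutil.copy2(src, dst)  # copy data, preserve timestamps
                copied += 1
        return copied

    # Hypothetical paths: seed the DR share once, then re-run to sync deltas.
    # replicate_changes(Path("/profiles/alice"), Path("/dr/profiles/alice"))
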
5 Fundamentals of Modern Data Protection
Some data protection software vendors will say that they are “agentless” because they can do an agentless backup. However, many of these vendors require agents for file-level restore, proper application backup, or to restore application data. My advice is to make sure that your data protection tool is able to address all backup and recovery scenarios without the need for an agent.
Legacy backup is costly, inefficient, and can force IT administrators to make risky compromises that impact critical business applications, data and resources. Read this NEW white paper to learn how Modern Data Protection capitalizes on the inherent benefits of virtualization to:
  • Increase your ability to meet RPOs and RTOs
  • Eliminate the need for complex and inefficient agents
  • Reduce operating costs and optimize resources
The Expert Guide to VMware Data Protection
Virtualization is a very general term for simulating a physical entity in software. There are many different forms of virtualization found in a data center, including server, network and storage virtualization. Server virtualization in particular introduces many unique terms and concepts that make up the technology.
Virtualization is the most disruptive technology of the decade. Virtualization-enabled data protection and disaster recovery is especially disruptive because it allows IT to do things dramatically better, at a fraction of what they would cost in a physical data center.

Chapter 1: An Introduction to VMware Virtualization

Chapter 2: Backup and Recovery Methodologies

Chapter 3: Data Recovery in Virtual Environments

Chapter 4: Choosing the Right Backup Solution for VMware
Application Response Time for Virtual Operations
For applications running in virtualized, distributed and shared environments, it no longer works to infer the performance of an application from resource utilization statistics. Rather, it is essential to define application performance by measuring the response time and throughput of every application in production. This paper makes the case for modernizing application performance management to suit virtualized and cloud-based environments.

Massive changes are occurring to how applications are built and how they are deployed and run. The benefits of these changes are dramatically increased responsiveness to the business (business agility), increased operational flexibility, and reduced operating costs.

The environments onto which these applications are deployed are also undergoing a fundamental change. Virtualized environments offer increased operational agility, which translates into a more responsive IT Operations organization. Cloud computing offers application owners a completely outsourced alternative to internal data center execution environments, and IT organizations are in turn responding to the public cloud with IT as a Service (ITaaS) initiatives.

For applications running in virtualized, distributed and shared environments, it will no longer work to infer the “performance” of an application from resource utilization statistics. Rather, it will become essential to define application performance as response time – and to directly measure the response time and throughput of every application in production. This paper makes the case for modernizing application performance management to suit virtualized and cloud-based environments.
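
As a minimal sketch of measuring response time directly rather than inferring it from utilization counters, the Python below times a synthetic probe and reports percentiles. The URL is a placeholder, and the thresholds are arbitrary; real APM tools instrument every production transaction rather than issuing test requests.

    # Measure response time directly -- a synthetic probe for illustration.
    import statistics
    import time
    import urllib.request

    def probe(url: str, attempts: int = 20) -> list:
        times = []  # per-request response times, in seconds
        for _ in range(attempts):
            start = time.perf_counter()
            with urllib.request.urlopen(url, timeout=10) as resp:
                resp.read()
            times.append(time.perf_counter() - start)
        return times

    times = sorted(probe("http://example.com/"))  # placeholder URL
    print(f"median response: {statistics.median(times) * 1000:.1f} ms")
    print(f"95th percentile: {times[int(len(times) * 0.95) - 1] * 1000:.1f} ms")
    print(f"throughput: {len(times) / sum(times):.1f} requests/s (serial)")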

CIO Guide to Virtual Server Data Protection
Server virtualization is changing the face of the modern data center. CIOs are looking for ways to virtualize more applications, faster, across the IT spectrum.
Server virtualization is changing the face of the modern data center. CIOs are looking for ways to virtualize more applications, faster across the IT spectrum. Selecting the right data protection solution that understands the new virtual environment is a critical success factor in the journey to cloud-based infrastructure. This guide looks at the key questions CIOs should be asking to ensure a successful virtual server data protection solution.
Five Fundamentals of Virtual Server Protection
The benefits of server virtualization are compelling and are driving the transition to large scale virtual server deployments.
The benefits of server virtualization are compelling and are driving the transition to large-scale virtual server deployments. From cost savings realized through server consolidation to the business flexibility and agility inherent in emerging private and public cloud architectures, virtualization technologies are rapidly becoming a cornerstone of the modern data center. With Commvault's software, you can take full advantage of the developments in virtualization technology and enable private and public cloud data centers while continuing to meet all your data management, protection and retention needs. This whitepaper outlines the top 5 challenges to overcome in order to take advantage of the benefits of virtualization for your organization.
EMA Reviews - Software-Defined Infrastructure Control: The Key to Empowering the SDDC
Topics: SDDC, cirba, EMA
As enterprises strive to achieve more effective, efficient, and agile operations, they are increasingly adopting processes that enable a Software-Defined Data Center (SDDC). However, in many cases, increased complexity in current IT environments, processes, and cultures has substantially impeded the organization's ability to complete this transition. Software-Defined Infrastructure Control (SDIC) provides the critical infrastructure visibility and intelligence necessary to optimally align data center capabilities and resources with application demands and their specific requirements.
Server Capacity Defrag
This is not a paper on disk defrag. Although conceptually similar, it describes an entirely new approach to server optimization that performs a similar operation on the compute, memory and IO capacity of entire virtual and cloud environments.

Capacity defragmentation is a concept that is becoming increasingly important in the management of modern data centers. As virtualization increases its penetration into production environments, and as public and private clouds move to the forefront of the IT mindset, the ability to leverage this newfound agility while at the same time driving high efficiency (and low risk) is a real game changer. This white paper outlines how managers of IT environments can make the transition from old-school capacity management to new-school efficiency management.
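To give one concrete (and deliberately simplified) picture of what "defragmenting" capacity means, the sketch below repacks VM demands onto as few hosts as possible with a first-fit-decreasing pass. The figures are invented, and real efficiency-management products weigh many more dimensions and constraints than CPU and memory.

    # Toy capacity defrag: consolidate VM demands onto fewer hosts.
    # Illustrative numbers only; real tools consider far more constraints.
    vms = {"vm1": (4, 16), "vm2": (2, 8), "vm3": (6, 32), "vm4": (1, 4)}
    host_capacity = (8, 64)  # per-host capacity: (CPU GHz, memory GB)

    def pack(vms, capacity):
        hosts = []  # each entry: [free_cpu, free_mem, [placed VM names]]
        for name, (cpu, mem) in sorted(vms.items(), key=lambda kv: -kv[1][0]):
            for h in hosts:  # first existing host with room wins
                if h[0] >= cpu and h[1] >= mem:
                    h[0] -= cpu; h[1] -= mem; h[2].append(name)
                    break
            else:  # no existing host fits: allocate another
                hosts.append([capacity[0] - cpu, capacity[1] - mem, [name]])
        return hosts

    for i, (cpu_free, mem_free, placed) in enumerate(pack(vms, host_capacity)):
        print(f"host{i}: {placed}  free: {cpu_free} GHz, {mem_free} GB")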

How Software-Defined Storage Enhances Hyper-converged Storage
This paper describes how to conquer the challenges of using SANs in a virtual environment and why organizations are looking into hyper-converged systems that take advantage of Software-Defined Storage as a solution to provide reliable application performance and a highly available infrastructure.
One of the fundamental requirements for virtualizing applications is shared storage. Applications can move around to different servers as long as those servers have access to the storage with the application and its data. Typically, shared storage takes place over a storage network known as a SAN. However, SANs typically run into issues in a virtual environment, so organizations are currently looking for new options. Hyper-converged infrastructure is a solution that seems well-suited to address these issues.
 
By downloading this paper you will:
 
  • Identify the issues with running SANs in virtualized environments
  • Learn why Hyper-converged systems are ideal for solving performance issues
  • Learn why Hyper-converged systems are ideal for remote offices
  • Discover real-world use cases where DataCore's Hyper-converged Virtual SAN addressed these issues
DataCore Virtual SAN – A Deep Dive into Converged Storage
Topics: DataCore, storage, SAN
This white paper describes how DataCore’s Virtual SAN software can help you deploy a converged, flexible architecture to address painful challenges that exist today such as single points of failure, poor application performance, low storage efficiency and utilization, and high infrastructure costs.

DataCore Virtual SAN introduces the next evolution in Software-defined Storage (SDS) by creating high-performance and highly-available shared storage pools using the disks and flash storage in your servers. It addresses the requirements for fast and reliable access to storage across a cluster of servers at remote sites as well as in high-performance applications.

Download this white paper to learn about:

•    The technical aspects of DataCore’s Virtual SAN solution - a deep dive into converged storage
•    How DataCore Virtual SAN addresses IT challenges such as single points of failure, poor application performance, low storage efficiency and utilization, and high infrastructure costs
•    Possible use cases and benefits of DataCore’s Virtual SAN

Building a Highly Available Data Infrastructure
Topics: DataCore, storage, SAN, HA
This white paper outlines best practices for improving overall business application availability by building a highly available data infrastructure.
Regardless of whether you use a direct-attached storage array, a network-attached storage (NAS) appliance, or a storage area network (SAN) to host your data, if that data infrastructure is not designed for high availability, then the data it stores is not highly available either. By extension, application availability is at risk, regardless of server clustering.
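
As a toy illustration of why the data layer needs its own redundancy, the sketch below mirrors every write synchronously to two copies, so losing one copy loses no data. Real software-defined storage performs this in the I/O path, not in application code; the dictionaries here merely stand in for storage nodes.

    # Toy synchronous mirroring: a write completes only when every replica
    # holds the block; any surviving replica can then serve reads.
    def mirrored_write(replicas, block_id, data):
        for r in replicas:
            r[block_id] = data  # real systems: network write + fsync + ack

    def read_with_failover(replicas, block_id):
        for r in replicas:
            if block_id in r:
                return r[block_id]
        raise KeyError(block_id)

    copy_a, copy_b = {}, {}
    mirrored_write([copy_a, copy_b], 0, b"payroll record")
    copy_a.clear()  # simulate losing one storage node
    print(read_with_failover([copy_a, copy_b], 0))  # b'payroll record'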

Download this paper to:
•    Learn how to develop a High Availability strategy for your applications
•    Identify the differences between hardware- and software-defined infrastructures in terms of availability
•    Learn how to build a Highly Available data infrastructure using Hyper-converged storage

Scale Computing’s hyperconverged system matches the needs of the SMB and mid-market
Scale Computing HC3 is cost-effective, scalable, and designed for installation and management by the IT generalist
Everyone has heard the buzz these days about hyper-converged systems – appliances with compute, storage and virtualization infrastructure built in. Hyper-converged infrastructure systems are an extension of infrastructure convergence – the combination of compute, storage and networking resources in one compact box – and promise simplification by consolidating resources onto a commodity x86 server platform.
2015 State of SMB IT Infrastructure Survey Results
Overall, companies of all sizes are moving faster to virtualize their servers, but very few are taking advantage of hyperconvergence and all that it offers.
Demands on IT in small and medium businesses (SMBs) continue to rise exponentially. Budget changes, increased application and customization demands, and more are stretching IT administrators to the limit. At the same time, new technologies like hyperconverged infrastructure bring light to the end of the strained-resources tunnel through improved efficiency, scaling, and management breakthroughs. More and more, IT groups at SMBs are being pushed to “do more with less,” as the unwelcome saying goes. So, in order to meet these challenges, some SMBs leverage new technology.
 
See how 1,227 technologists replied to a survey in early 2015 as a part of our State of SMB IT Infrastructure Survey. The responses to this very popular survey yielded some surprising results!