White Papers Search Results
Showing 1 - 16 of 50 white papers, page 1 of 4.
HP, VMware & Liquidware Labs Simplify Desktop Transformation
This whitepaper provides an overview of the requirements and benefits of launching a virtual desktop project on a proven, enterprise-ready solution stack from HP, VMware, and Liquidware Labs. HP VirtualSystem CV2, with VMware View and Liquidware Labs ProfileUnity, offers a comprehensive virtual desktop solution stack with integrated User Virtualization Management and Dynamic Application Portability. By combining offerings from proven industry leaders in this end-to-end solution, customers can fast-track their desktop transformation.
Desktops and workspaces are transforming to virtual and cloud technologies at a lightning-fast pace. With the rapid adoption of Microsoft Windows 7 (and soon Windows 8), virtual desktop strategies, cloud storage, and virtual applications, a perfect storm is brewing that is driving organizations to adopt client virtualization now.

You need a plan, one that is complete and capable of guiding you through this key phase of your desktop transformation project. HP and Liquidware Labs offer a comprehensive User Virtualization Management and Dynamic Application Portability (DAP) solution that takes care of the key requirements for your desktop transformation to a virtual desktop infrastructure (VDI).

User Virtualization and Dynamic Application Portability from HP and Liquidware Labs is integral to your VDI project by providing the following:

  • Dramatic savings in storage, licensing, and management costs with the use of robust and flexible persona management to leverage non-persistent desktops.
  • Instant productivity within seconds of logon, with automatic context-aware configurations that enable flexible desktop environments where users can log on to any desktop, physical or virtual.
  • Minimized golden image builds with the ultimate in personalization: user- and department-installed applications break down the barriers to user adoption and fast-track productivity in the new environment.
The Expert Guide to VMware Data Protection
Virtualization is a very general term for simulating a physical entity using software. There are many different forms of virtualization that may be found in a data center, including server, network and storage virtualization. Server virtualization in particular comes with many unique terms and concepts that you may encounter as part of the technology.
Virtualization is the most disruptive technology of the decade. Virtualization-enabled data protection and disaster recovery is especially disruptive because it allows IT to do things dramatically better at a fraction of what they would cost in a physical data center.

Chapter 1: An Introduction to VMware Virtualization

Chapter 2: Backup and Recovery Methodologies

Chapter 3: Data Recovery in Virtual Environments

Chapter 4: Choosing the Right Backup Solution for VMware
How to avoid VM sprawl and improve resource utilization in VMware and Veeam backup infrastructures
You're facing VM sprawl if you're experiencing an uncontrollable increase of unused and unneeded objects in your virtual VMware environment. VM sprawl often occurs in virtual infrastructures because they expand much faster than physical ones, which can make management a challenge. The growing number of virtualized workloads and applications generates “virtual junk” that causes VM sprawl, and eventually it can put you at risk of running out of resources.

Getting virtual sprawl under control will help you reallocate and better provision your existing storage, CPU and memory resources between critical production workloads and high-performance, virtualized applications. With proper resource management, you can save money on extra hardware.

This white paper examines how you can avoid potential VM sprawl risks and automate proactive monitoring by using Veeam ONE, a part of Veeam Availability Suite. Veeam ONE will arm you with a list of VM sprawl indicators and show you how to set up and configure a handy report kit to detect and eliminate VM sprawl threats in your VMware environment (a simplified sketch of such indicator checks follows the list below).

Read this FREE white paper and learn how to:

  • Identify “zombies”
  • Clean up garbage and orphaned snapshots
  • Establish a transparent system to get sprawl under control
  • And more!
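
To make those indicators concrete, here is a minimal sketch in Python of the kind of checks such a report might run. This is a generic illustration, not Veeam ONE's actual logic; the inventory records, field names, and thresholds are all hypothetical.

```python
from datetime import datetime, timedelta

# Hypothetical inventory records; real data would come from your
# monitoring or reporting tool of choice.
vms = [
    {"name": "web-01",  "powered_on": True,  "last_io_days": 1,   "snapshots": []},
    {"name": "test-db", "powered_on": False, "last_io_days": 120, "snapshots": []},
    {"name": "app-02",  "powered_on": True,  "last_io_days": 2,
     "snapshots": [datetime.now() - timedelta(days=90)]},
]

ZOMBIE_IDLE_DAYS = 30       # powered off with no I/O this long => "zombie"
SNAPSHOT_MAX_AGE_DAYS = 14  # snapshots older than this are sprawl candidates

def sprawl_report(vms):
    """Classify VMs by two simple sprawl indicators."""
    zombies, stale_snaps = [], []
    for vm in vms:
        if not vm["powered_on"] and vm["last_io_days"] > ZOMBIE_IDLE_DAYS:
            zombies.append(vm["name"])
        for snap in vm["snapshots"]:
            if (datetime.now() - snap).days > SNAPSHOT_MAX_AGE_DAYS:
                stale_snaps.append(vm["name"])
    return {"zombies": zombies, "stale_snapshots": stale_snaps}

print(sprawl_report(vms))
# => {'zombies': ['test-db'], 'stale_snapshots': ['app-02']}
```
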
How Software-Defined Storage Enhances Hyper-converged Storage
This paper describes how to conquer the challenges of using SANs in a virtual environment and why organizations are looking into hyper-converged systems that take advantage of Software-Defined Storage as a solution to provide reliable application performance and a highly available infrastructure.
One of the fundamental requirements for virtualizing applications is shared storage. Applications can move around to different servers as long as those servers have access to the storage with the application and its data. Typically, shared storage takes place over a storage network known as a SAN. However, SANs typically run into issues in a virtual environment, so organizations are currently looking for new options. Hyper-converged infrastructure is a solution that seems well-suited to address these issues.
 
By downloading this paper you will:
 
  • Identify the issues with running SANs in virtualized environments
  • Learn why Hyper-converged systems are ideal for solving performance issues
  • Learn why Hyper-converged systems are ideal for remote offices
  • Discover real-world use cases where DataCore's Hyper-converged Virtual SAN addressed these issues
Hyper-converged Infrastructure: No-Nonsense Selection Criteria
This white paper helps you identify the key selection criteria for building a business-savvy hyper-converged infrastructure model based on cost, availability, fitness to purpose and performance. It also includes a checklist you can use to evaluate hyper-converged storage options.
Hyper-converged storage is the latest buzz phrase in storage. The exact meaning of hyper-converged storage varies depending on the vendor that one consults, with solutions varying widely with respect to their support for multiple hypervisor and workload types and their flexibility in terms of hardware componentry and topology.
 
Regardless of the definition that vendors ascribe to the term, the truth is that building a business-savvy hyper-converged infrastructure still comes down to two key requirements: selecting a combination of infrastructure products and services that best fit workload requirements, and selecting a hyper-converged model that can adapt and scale with changing storage demands without breaking available budgets.
 
Download this paper to:
  • Learn about hyper-converged storage and virtual SANs
  • Identify key criteria for selecting the right hyper-converged infrastructure
  • Obtain a checklist for evaluating options (a toy weighted-scoring sketch follows this list)
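
To make the idea of a selection checklist concrete, here is a toy weighted-scoring sketch in Python built around the four criteria named above (cost, availability, fitness to purpose, performance). The weights, option names, and scores are placeholders for illustration, not recommendations from the paper.

```python
# Hypothetical evaluation: score each option 1-5 per criterion,
# weight the criteria, and rank the options by weighted total.
CRITERIA_WEIGHTS = {"cost": 0.3, "availability": 0.3,
                    "fitness_to_purpose": 0.2, "performance": 0.2}

options = {
    "vendor_a": {"cost": 4, "availability": 3, "fitness_to_purpose": 5, "performance": 3},
    "vendor_b": {"cost": 2, "availability": 5, "fitness_to_purpose": 4, "performance": 5},
}

def weighted_score(scores):
    """Sum of criterion scores, each scaled by its weight."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in scores.items())

for name, scores in sorted(options.items(), key=lambda kv: -weighted_score(kv[1])):
    print(f"{name}: {weighted_score(scores):.2f}")
# => vendor_b: 3.90
#    vendor_a: 3.70
```
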
DataCore Virtual SAN – A Deep Dive into Converged Storage
Topics: DataCore, storage, SAN
This white paper describes how DataCore’s Virtual SAN software can help you deploy a converged, flexible architecture to address painful challenges that exist today such as single points of failure, poor application performance, low storage efficiency and utilization, and high infrastructure costs.

DataCore Virtual SAN introduces the next evolution in Software-defined Storage (SDS) by creating high-performance and highly-available shared storage pools using the disks and flash storage in your servers. It addresses the requirements for fast and reliable access to storage across a cluster of servers at remote sites as well as in high-performance applications.

Download this white paper to learn about:

•    A technical deep dive into DataCore’s Virtual SAN converged storage solution
•    How DataCore Virtual SAN addresses IT challenges such as single points of failure, poor application performance, low storage efficiency and utilization, and high infrastructure costs
•    Possible use cases and benefits of DataCore’s Virtual SAN

Building a Highly Available Data Infrastructure
Topics: DataCore, storage, SAN, HA
This white paper outlines best practices for improving overall business application availability by building a highly available data infrastructure.
Regardless of whether you use a direct-attached storage array, a network-attached storage (NAS) appliance, or a storage area network (SAN) to host your data, if that data infrastructure is not designed for high availability, then the data it stores is not highly available either. By extension, application availability is at risk regardless of server clustering.

Download this paper to:
•    Learn how to develop a High Availability strategy for your applications
•    Identify the differences between Hardware and Software-defined infrastructures in terms of Availability
•    Learn how to build a Highly Available data infrastructure using Hyper-converged storage

The State of Software-Defined Storage (SDS) in 2015
Topics: DataCore, storage, SAN, SDS
For the fifth consecutive year, DataCore Software explored the impact of Software-Defined Storage (SDS) on organizations across the globe. The 2015 survey distills the expectations and experiences of 477 IT professionals who are currently using or evaluating SDS technology to solve critical data storage challenges. The results yield surprising insights from a cross-section of industries over a wide range of workloads. The survey was conducted in April 2015.

Scale Computing’s hyperconverged system matches the needs of the SMB and mid-market
Scale Computing HC3 is cost effective, scalable and designed for installation and management by the IT generalist
Everyone has heard the buzz these days about hyper-converged systems: appliances with compute, storage and virtualization infrastructure built in. Hyper-converged infrastructure systems are an extension of infrastructure convergence, the combination of compute, storage and networking resources in one compact box, and they promise simplification by consolidating resources onto a commodity x86 server platform.
Boone County Health Center Runs Faster with Infinio
Boone County Health Center’s IT team needed a solution to improve the response times of virtual desktops during their peak times of morning usage when most employees log on for the day.
Boone County Health Center’s IT team needed a solution to improve the response times of virtual desktops during their peak times of morning usage when most employees log on for the day. Employees access electronic medical records (EMR), business reports, financial data, email and other essential applications required to manage daily operations and provide optimum patient care. Some medical staff and administrators occasionally log in from their homes on personal devices such as laptops or iPads. The Health Center initially considered purchasing an add-on all-flash array for the VDI to help eliminate slow response periods during boot storms. However, before making this type of investment, the Center wanted to explore alternative solutions.
Masergy accelerates VDI and storage performance with Infinio
To support its global users, Masergy needed to accelerate its virtual desktop infrastructure (VDI) and was unconvinced that spending budget on solid-state drive (SSD) solutions would work.
To support its global users, Masergy needed to accelerate its virtual desktop infrastructure (VDI) and was unconvinced that spending budget on solid-state drive (SSD) solutions would work. The team was investigating SSD solutions and options from SanDisk, VMware and Dell, as well as all-flash arrays, when it discovered Infinio at VMworld 2014. Unlike the solutions Masergy considered previously, the simplicity of the Infinio Accelerator and low price point caught the Masergy team’s attention. Fewer than six months later, Masergy’s Infinio installation was under way. Infinio provides an alternative to expensive, hardware-based solutions to address VDI performance, which is what Masergy wanted to improve.
Why Parallel I/O & Moore's Law Enable Virtualization and SDDC to Achieve their Potential
Today’s demanding applications, especially within virtualized environments, require high performance from storage to keep up with the rate of data acquisition and unpredictable demands of enterprise workloads. In a world that requires near-instant response times and increasingly faster access to data, the needs of business-critical tier-1 enterprise applications, such as SQL Server, Oracle and SAP databases, have been largely unmet.

The major bottleneck holding back the industry is I/O performance. Current systems still rely on device-level optimizations tied to specific disk and flash technologies, and lack software optimizations that can fully harness the latest advances in server technology, such as multicore architectures. As a result, storage performance has not kept pace with Moore’s Law.

Waiting on IO: The Straw That Broke Virtualization’s Back
In this paper, we will discuss DataCore’s underlying parallel architecture, how it evolved over the years and how it results in a markedly different way to address the craving for IOPS (input/output operations per second) in a software-defined world.
Despite the increasing horsepower of modern multi-core processors and the promise of virtualization, we’re seeing relatively little progress in the amount of concurrent work they accomplish. That’s why we’re having to buy a lot more virtualized servers than we expected.

On closer examination, we find the root cause to be IO-starved virtual machines (VMs), especially for heavy online transactional processing (OLTP) apps, databases and mainstream IO-intensive workloads. Plenty of compute power is at their disposal, but servers have a tough time fielding inputs and outputs. This gives rise to an odd phenomenon of stalled virtualized apps while many processor cores remain idle.

So how exactly do we crank up IOs to keep up with the computational appetite while shaving costs? This can best be achieved by parallel IO technology designed to process IO across many cores simultaneously, thereby putting those idle CPUs to work. Such technology has been developed by DataCore Software, a long-time master of parallelism in the field of storage virtualization.
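
As a loose analogy for what parallel I/O buys you, the sketch below services simulated I/O requests serially and then with a pool of workers so that the waits overlap. It is a generic illustration of spreading I/O across otherwise-idle workers, not DataCore's implementation; the request count, worker count, and latency figure are arbitrary.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def handle_io(request_id):
    """Simulate one I/O request; real work would be a disk or network call."""
    time.sleep(0.05)  # pretend latency, during which a core could do other work
    return request_id

requests = range(32)

start = time.perf_counter()
serial = [handle_io(r) for r in requests]           # one request at a time
serial_s = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:     # many requests in flight
    parallel = list(pool.map(handle_io, requests))
parallel_s = time.perf_counter() - start

print(f"serial: {serial_s:.2f}s, parallel: {parallel_s:.2f}s")
# The parallel run finishes roughly 8x sooner because the workers overlap
# their waits, just as parallel I/O keeps otherwise-idle cores fielding requests.
```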

VMware Data Replication Done Right
Until now the most common data replication technologies and methods essential to mission-critical BC/DR initiatives have been tied to the physical environment. Although they do work in the virtual environment, they aren’t optimized for it.

Until now the most common data replication technologies and methods essential to mission-critical BC/DR initiatives have been tied to the physical environment. Although they do work in the virtual environment, they aren’t optimized for it. With the introduction of hypervisor-based replication, Zerto elevates BC/DR up the infrastructure stack where it belongs: in the virtualization layer.

Challenges:

  • If a data replication solution isn’t virtual-ready, management overhead could be more than doubled.
  • Customer data is always growing, so a company can find its information inventory expanding exponentially without a data replication solution that can keep pace.
  • Some replication methods remain firmly tied to a single vendor and hardware platform, limiting the organization’s ability to get the best solutions – and service – at the best price.

Benefits of Hypervisor-Based Replication:

  • Granularity - The ability to replicate any virtual entity at the correct level is critical. Zerto’s solution can replicate all virtual machines and all of their metadata as well (a toy sketch of the idea follows this list).
  • Scalability - Zerto’s hypervisor-based replication solution is software-based so it can be deployed and managed easily, no matter how fast the infrastructure expands.
  • Hardware-agnostic - Zerto’s data replication is hardware-agnostic, supporting all storage arrays, so organizations can replicate from anything to anything. This allows users to mix storage technologies such as SAN & NAS, and virtual disk types such as RDM & VMFS.
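
To illustrate the general shape of hypervisor-based replication, the toy sketch below intercepts each (simulated) VM write at the virtualization layer and ships a copy asynchronously to a replica target. It illustrates the concept only, not Zerto's actual mechanism; the function names and the in-memory "storage" are hypothetical.

```python
import queue
import threading

write_log = queue.Queue()   # writes captured at the (simulated) hypervisor layer
replica = {}                # stand-in for the remote replica's storage

def capture_write(vm, block, data):
    """Called on every VM write: commit locally, then enqueue for replication."""
    write_log.put((vm, block, data))

def replicate_forever():
    """Target-side worker: apply shipped writes in order, on any hardware."""
    while True:
        vm, block, data = write_log.get()
        replica[(vm, block)] = data   # any storage backend could sit here
        write_log.task_done()

threading.Thread(target=replicate_forever, daemon=True).start()

capture_write("app-vm", block=7, data=b"hello")
capture_write("app-vm", block=8, data=b"world")
write_log.join()   # wait until the replica has caught up
print(replica)     # {('app-vm', 7): b'hello', ('app-vm', 8): b'world'}
```
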
Zerto Offsite Cloud Backup & Data Protection
Zerto Offsite Backup in the Cloud

What is Offsite Backup?

Offsite Backup is a new paradigm in data protection that combines hypervisor-based replication with longer retention. This greatly simplifies data protection for IT organizations. The ability to leverage the data at the disaster recovery target site or in the cloud for VM backup eliminates the impact on production workloads.

Why Cloud Backup?

  • Offsite Backup combines replication and long retention in a new way
  • The repository can be located in public cloud storage, a private cloud, or as part of a hybrid cloud solution.
  • Copies are saved on a daily, weekly and monthly schedule (see the retention sketch after this list).
  • The data volumes and configuration information are included to allow VM backups to be restored on any compatible platform, cloud or otherwise.
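
As a minimal sketch of how a daily/weekly/monthly retention scheme might decide which copies to keep, the Python below retains the most recent dailies plus the newest copy from each recent week and month. The retention counts and selection rules are assumptions for illustration, not Zerto's defaults.

```python
from datetime import date, timedelta

KEEP_DAILY, KEEP_WEEKLY, KEEP_MONTHLY = 7, 4, 12

def copies_to_keep(copy_dates):
    """Return the backup copies retained under a daily/weekly/monthly scheme."""
    copy_dates = sorted(copy_dates, reverse=True)   # newest first
    daily = copy_dates[:KEEP_DAILY]                 # most recent dailies
    weekly, monthly, seen_weeks, seen_months = [], [], set(), set()
    for d in copy_dates:
        week = d.isocalendar()[:2]                  # (ISO year, week number)
        if week not in seen_weeks and len(weekly) < KEEP_WEEKLY:
            seen_weeks.add(week)
            weekly.append(d)                        # newest copy of that week
        month = (d.year, d.month)
        if month not in seen_months and len(monthly) < KEEP_MONTHLY:
            seen_months.add(month)
            monthly.append(d)                       # newest copy of that month
    return sorted(set(daily) | set(weekly) | set(monthly))

copies = [date(2016, 1, 1) + timedelta(days=i) for i in range(120)]
print(len(copies_to_keep(copies)))  # far fewer copies kept than the 120 taken
```
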
A Flash Storage Technical and Economic Primer
Topics: tegile, storage, flash
Flash technology is rapidly evolving. Chances are the game has changed since you last checked. With every step forward, flash storage is becoming faster, more reliable, and less expensive. And there’s more than one kind of flash technology out there. Some flash focuses on performance, while others balance performance with capacity. Read this white paper for a technical breakdown of the latest in flash storage. Learn how flash has changed in the last few years, and how the economics have shifted.
Although today’s NAND flash storage has its roots in 30-year-old technology, innovation has negated almost all of the challenges that are inherent in the media. Moreover, modern storage companies are taking even more software-based steps to further overcome such challenges. Given these advances, it’s clear that flash media use will continue to grow in the data center.