White Papers Search Results
The Expert Guide to VMware Data Protection
Virtualization is a very general term for simulating a physical entity by using software. Many different forms of virtualization may be found in a data center, including server, network and storage virtualization. Server virtualization in particular comes with many unique terms and concepts that make up the technology.
Virtualization is the most disruptive technology of the decade. Virtualization-enabled data protection and disaster recovery is especially disruptive because it allows IT to do things dramatically better, at a fraction of what they would cost in a physical data center.

Chapter 1: An Introduction to VMware Virtualization

Chapter 2: Backup and Recovery Methodologies

Chapter 3: Data Recovery in Virtual Environments

Chapter 4: Learn how to choose the right backup solution for VMware
The Hands-on Guide: Understanding Hyper-V in Windows Server 2012
Topics: Hyper-V, veeam
This chapter is designed to get you started quickly with Hyper-V 3.0. It starts with a discussion of the hardware requirements for Hyper-V 3.0, then explains a basic Hyper-V deployment, followed by an upgrade from Hyper-V 2.0 to Hyper-V 3.0. The chapter concludes with a demonstration of migrating virtual machines from Hyper-V 2.0 to Hyper-V 3.0.
The Hands-on Guide: Understanding Hyper-V in Windows Server 2012 gives you simple step-by-step instructions to help you perform Hyper-V-related tasks like a seasoned expert. You will learn how to:
  • Build a clustered Hyper-V deployment
  • Manage Hyper-V through PowerShell (a brief sketch follows this list)
  • Create virtual machine replicas
  • Transition from a legacy Hyper-V environment
  • And more
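As a rough illustration of the PowerShell-driven management the guide walks through, here is a minimal sketch (an assumption of this summary, not an excerpt from the guide) that drives standard Hyper-V cmdlets from Python on a Windows host; the VM name and replica server are placeholders.

# Minimal sketch: drive the Hyper-V PowerShell module from Python.
# Assumes a Windows host with the Hyper-V role and its PowerShell module installed;
# the VM name and replica server below are placeholders.
import subprocess

def run_ps(command: str) -> str:
    """Run a PowerShell command and return its standard output."""
    result = subprocess.run(
        ["powershell", "-NoProfile", "-Command", command],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# List the VMs on this Hyper-V host.
print(run_ps("Get-VM | Select-Object Name, State | Format-Table -AutoSize"))

# Enable Hyper-V Replica for one VM toward a replica server.
run_ps(
    "Enable-VMReplication -VMName 'web01' "
    "-ReplicaServerName 'replica-host.example.local' "
    "-ReplicaServerPort 80 -AuthenticationType Kerberos"
)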
Application Response Time for Virtual Operations

Massive changes are occurring to how applications are built and how they are deployed and run. The benefits of these changes are dramatically increased responsiveness to the business (business agility), increased operational flexibility, and reduced operating costs.

The environments onto which these applications are deployed are also undergoing a fundamental change. Virtualized environments offer increased operational agility, which translates into a more responsive IT Operations organization. Cloud computing offers application owners a completely outsourced alternative to internal data center execution environments. IT organizations are in turn responding to the public cloud with IT-as-a-Service (ITaaS) initiatives.

For applications running in virtualized, distributed and shared environments, it will no longer work to infer the "performance" of an application by looking at various resource utilization statistics. Rather, it will become essential to define application performance as response time, and to directly measure the response time and throughput of every application in production. This paper makes the case for how application performance management for virtualized and cloud-based environments needs to be modernized to suit these new environments.
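As a concrete illustration of measuring response time and throughput directly rather than inferring them from utilization statistics, here is a minimal sketch; the endpoint URL and sample count are placeholder assumptions, and production tooling would measure real user transactions rather than a synthetic loop.

# Minimal sketch: define performance as response time and measure it directly.
# The endpoint URL and sample count are placeholder assumptions.
import time
import statistics
import urllib.request

URL = "http://app.example.local/health"   # hypothetical application endpoint
SAMPLES = 50

latencies = []
start = time.perf_counter()
for _ in range(SAMPLES):
    t0 = time.perf_counter()
    with urllib.request.urlopen(URL) as resp:
        resp.read()
    latencies.append(time.perf_counter() - t0)
elapsed = time.perf_counter() - start

print(f"median response time: {statistics.median(latencies) * 1000:.1f} ms")
print(f"95th percentile:      {sorted(latencies)[int(SAMPLES * 0.95) - 1] * 1000:.1f} ms")
print(f"throughput:           {SAMPLES / elapsed:.1f} requests/sec")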

CIO Guide to Virtual Server Data Protection
Server virtualization is changing the face of the modern data center. CIOs are looking for ways to virtualize more applications, faster across the IT spectrum. Selecting the right data protection solution that understands the new virtual environment is a critical success factor in the journey to cloud-based infrastructure. This guide looks at the key questions CIOs should be asking to ensure a successful virtual server data protection solution.
Five Fundamentals of Virtual Server Protection
The benefits of server virtualization are compelling and are driving the transition to large-scale virtual server deployments. From the cost savings realized through server consolidation to the business flexibility and agility inherent in emerging private and public cloud architectures, virtualization technologies are rapidly becoming a cornerstone of the modern data center. With Commvault's software, you can take full advantage of the developments in virtualization technology and enable private and public cloud data centers while continuing to meet all your data management, protection and retention needs. This whitepaper outlines the top 5 challenges to overcome in order to take advantage of the benefits of virtualization for your organization.
Using VM Archiving to Solve VM Sprawl
Topics: commvault, sprawl
This Commvault whitepaper discusses how archiving virtual machines can mitigate VM sprawl with a comprehensive approach to VM lifecycle management.
Workload Routing & Reservation: 5 Reasons Why It Is Critical To Virtual & Cloud Operation
Topics: cirba
When observing the current generation virtual and internal cloud environments, it appears that the primary planning and management tasks have also made the transition to purpose-built software solutions. But when you dig in a little deeper, there is one area that is still shamefully behind: the mechanism to determine what infrastructure to host workloads on is still in the stone ages. The ability to understand the complete set of deployed infrastructure, quantify and qualify the hosting capabilities of each environment, and to make informed decisions regarding where to host new applications and workloads, is still the realm of spreadsheets and best guesses.

This paper identifies five reasons why the entire process of workload routing and capacity reservation must make the transition to become a core, automated component of IT planning and management.
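To make the argument tangible, here is a toy sketch of automated workload routing (not Cirba's algorithm, and all capacity figures are made up): each candidate environment is scored on the headroom left after accounting for existing reservations, and the new workload is routed to the best fit rather than placed from a spreadsheet.

# Toy illustration (not Cirba's algorithm): score candidate environments by the
# headroom remaining after placement, instead of picking a home from a spreadsheet.
# All capacity numbers below are made up.
from dataclasses import dataclass

@dataclass
class Environment:
    name: str
    cpu_free: float   # free vCPU capacity
    mem_free: float   # free memory, GB
    reserved: float   # fraction already promised to pending reservations

def placement_score(env: Environment, cpu_req: float, mem_req: float) -> float:
    """Higher is better; negative means the workload does not fit."""
    cpu_left = env.cpu_free * (1 - env.reserved) - cpu_req
    mem_left = env.mem_free * (1 - env.reserved) - mem_req
    if cpu_left < 0 or mem_left < 0:
        return -1.0
    return min(cpu_left / env.cpu_free, mem_left / env.mem_free)

environments = [
    Environment("prod-cluster-a", cpu_free=64, mem_free=512, reserved=0.20),
    Environment("prod-cluster-b", cpu_free=16, mem_free=128, reserved=0.05),
]
best = max(environments, key=lambda e: placement_score(e, cpu_req=8, mem_req=64))
print("route workload to:", best.name)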

Server Capacity Defrag
This is not a paper on disk defrag. Although conceptually similar, it describes an entirely new approach to server optimization that performs a similar operation on the compute, memory and IO capacity of entire virtual and cloud environments.

Capacity defragmentation is a concept that is becoming increasingly important in the management of modern data centers. As virtualization increases its penetration into production environments, and as public and private clouds move to the forefront of the IT mindset, the ability to leverage this newfound agility while at the same time driving high efficiency (and low risk) is a real game changer. This white paper outlines how managers of IT environments can make the transition from old-school capacity management to new-school efficiency management.
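As a rough, simplified illustration of the defrag idea (not the vendor's product), the sketch below repacks VM demands onto as few hosts as possible with a first-fit-decreasing pass, so leftover capacity ends up in large, usable blocks; all capacity numbers are made up.

# Toy sketch of the capacity-defrag idea (not the vendor's product): repack VM
# demands onto as few hosts as possible so remaining capacity is freed in large blocks.
# Host capacity and VM sizes below are made-up numbers (e.g. vCPUs).
HOST_CAPACITY = 32.0
vm_demands = [12, 10, 8, 6, 6, 4, 3, 2]

hosts = []  # each entry is the list of VM demands placed on that host
for demand in sorted(vm_demands, reverse=True):
    for host in hosts:
        if sum(host) + demand <= HOST_CAPACITY:
            host.append(demand)
            break
    else:
        hosts.append([demand])  # open another host only when nothing else fits

for i, host in enumerate(hosts, 1):
    print(f"host {i}: VMs {host}, used {sum(host)}/{HOST_CAPACITY}")
print(f"{len(hosts)} hosts in use; remaining capacity is freed in whole-host blocks")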

How to avoid VM sprawl and improve resource utilization in VMware and Veeam backup infrastructures
You're facing VM sprawl if you're experiencing an uncontrollable increase of unused and unneeded objects in your virtual VMware environment. VM sprawl often occurs in virtual infrastructures because they expand much faster than physical ones, which can make management a challenge. The growing number of virtualized workloads and applications generates "virtual junk," causing the VM sprawl issue. Eventually, it can put you at risk of running out of resources.

Getting virtual sprawl under control will help you reallocate and better provision your existing storage, CPU and memory resources between critical production workloads and high-performance, virtualized applications. With proper resource management, you can save money on extra hardware.

This white paper examines how you can avoid potential VM sprawl risks and automate proactive monitoring by using Veeam ONE, a part of Veeam Availability Suite. It arms you with a list of VM sprawl indicators and explains how you can set up and configure a handy report kit to detect and eliminate VM sprawl threats in your VMware environment. (A rough, unofficial sketch of one such check follows the list below.)

Read this FREE white paper and learn how to:

  • Identify “zombies”
  • Clean up garbage and orphaned snapshots
  • Establish a transparent system to get sprawl under control
  • And more!
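For illustration only (this is not Veeam ONE), here is a rough pyVmomi sketch of the kind of check such a report kit automates: flagging powered-off "zombie" candidates and VMs still carrying snapshots. The vCenter host and credentials are placeholders.

# Rough sketch (not Veeam ONE): use pyVmomi to flag "zombie" candidates --
# powered-off VMs and VMs still carrying snapshots.
# Requires pyvmomi (pip install pyvmomi); connection details are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

context = ssl._create_unverified_context()   # lab use only
si = SmartConnect(host="vcenter.example.local",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=context)
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

for vm in view.view:
    if vm.runtime.powerState == "poweredOff":
        print(f"powered-off VM (zombie candidate): {vm.name}")
    if vm.snapshot is not None:
        print(f"VM carrying snapshots (possible garbage): {vm.name}")

Disconnect(si)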
How Software-Defined Storage Enhances Hyper-converged Storage
This paper describes how to conquer the challenges of using SANs in a virtual environment and why organizations are looking into hyper-converged systems that take advantage of Software-Defined Storage as a solution to provide reliable application performance and a highly available infrastructure.
One of the fundamental requirements for virtualizing applications is shared storage. Applications can move around to different servers as long as those servers have access to the storage with the application and its data. Typically, shared storage takes place over a storage network known as a SAN. However, SANs typically run into issues in a virtual environment, so organizations are currently looking for new options. Hyper-converged infrastructure is a solution that seems well-suited to address these issues.

By downloading this paper you will:
 
  • Identify the issues with running SANs in virtualized environments
  • Learn why Hyper-converged systems are ideal for solving performance issues
  • Learn why Hyper-converged systems are ideal for remote offices
  • Discover real-world use cases where DataCore's Hyper-converged Virtual SAN faced these issues
Hyper-converged Infrastructure: No-Nonsense Selection Criteria
This white paper helps you identify the key selection criteria for building a business savvy hyper-converged infrastructure model for your business based on cost, availability, fitness to purpose and performance. Also, it includes a checklist you can use to evaluate hyper-converged storage options.
Hyper-converged storage is the latest buzz phrase in storage. The exact meaning of hyper-converged storage varies depending on the vendor that one consults, with solutions varying widely with respect to their support for multiple hypervisor and workload types and their flexibility in terms of hardware componentry and topology.

Regardless of the definition that vendors ascribe to the term, the truth is that building a business savvy hyper-converged infrastructure still comes down to two key requirements: selecting a combination of infrastructure products and services that best fit workload requirements, and selecting a hyper-converged model that can adapt and scale with changing storage demands without breaking available budgets.
 
Download this paper to:
  • Learn about hyper-converged storage and virtual SANs
  • Identify key criteria for selecting the right hyper-converged infrastructure
  • Obtain a checklist for evaluating options
DataCore Virtual SAN – A Deep Dive into Converged Storage
Topics: DataCore, storage, SAN
This white paper describes how DataCore’s Virtual SAN software can help you deploy a converged, flexible architecture to address painful challenges that exist today such as single points of failure, poor application performance, low storage efficiency and utilization, and high infrastructure costs.

DataCore Virtual SAN introduces the next evolution in Software-defined Storage (SDS) by creating high-performance and highly-available shared storage pools using the disks and flash storage in your servers. It addresses the requirements for fast and reliable access to storage across a cluster of servers at remote sites as well as in high-performance applications.

Download this white paper to learn about:

•    The technical aspects of DataCore's Virtual SAN solution - a deep dive into converged storage
•    How DataCore Virtual SAN addresses IT challenges such as single points of failure, poor application performance, low storage efficiency and utilization, and high infrastructure costs
•    Possible use cases and benefits of DataCore's Virtual SAN

Building a Highly Available Data Infrastructure
Topics: DataCore, storage, SAN, HA
This white paper outlines best practices for improving overall business application availability by building a highly available data infrastructure.
Regardless of whether you use a direct-attached storage array, a network-attached storage (NAS) appliance, or a storage area network (SAN) to host your data, if this data infrastructure is not designed for high availability, then the data it stores is not highly available either. By extension, application availability is at risk, regardless of server clustering.

Download this paper to:
•    Learn how to develop a High Availability strategy for your applications
•    Identify the differences between Hardware and Software-defined infrastructures in terms of Availability
•    Learn how to build a Highly Available data infrastructure using Hyper-converged storage

The State of Software-Defined Storage (SDS) in 2015
Topics: DataCore, storage, SAN, SDS
For the fifth consecutive year, DataCore Software explored the impact of Software-Defined Storage (SDS) on organizations across the globe. The 2015 survey distills the expectations and experiences of 477 IT professionals that are currently using or evaluating SDS technology to solve critical data storage challenges. The results yield surprising insights from a cross-section of industries over a wide range of workloads. The survey was conducted in April 2015.

Discover Cross-Hypervisor Replication with Zerto
Zerto Virtual Replication (ZVR) is the first hypervisor-based replication solution to offer enterprise-class cross-hypervisor replication, disaster recovery, data protection and workload mobility. With ZVR 4.0, IT departments can automatically convert Hyper-V VMs to VMware, convert VMware VMs to Hyper-V, and convert Hyper-V VMs to AWS for increased flexibility and cost savings.
Boone County Health Center Runs Faster with Infinio
Boone County Health Center’s IT team needed a solution to improve the response times of virtual desktops during their peak times of morning usage, when most employees log on for the day. Employees access electronic medical records (EMR), business reports, financial data, email and other essential applications required to manage daily operations and provide optimum patient care. Some medical staff and administrators occasionally log in from their homes on personal devices such as laptops or iPads. The Health Center initially considered purchasing an add-on all-flash array for the VDI to help eliminate slow response periods during boot storms. However, before making this type of investment, the Center wanted to explore alternative solutions.