Virtualization Technology News and Information
White Papers
White Papers Search Results
Showing 1 - 16 of 28 white papers, page 1 of 2.
5 Fundamentals of Modern Data Protection
Some data protection software vendors will say that they are “agentless” because they can do an agentless backup. However, many of these vendors require agents for file-level restore, proper application backup, or to restore application data. My advice is to make sure that your data protection tool is able to address all backup and recovery scenarios without the need for an agent.
Legacy backup is costly, inefficient, and can force IT administrators to make risky compromises that impact critical business applications, data and resources. Read this NEW white paper to learn how Modern Data Protection capitalizes on the inherent benefits of virtualization to:
  • Increase your ability to meet RPOs and RTOs
  • Eliminate the need for complex and inefficient agents
  • Reduce operating costs and optimize resources
The Expert Guide to VMware Data Protection
Virtualization is a very general term for simulating a physical entity using software. There are many different forms of virtualization found in a data center, including server, network and storage virtualization. Server virtualization in particular comes with many unique terms and concepts that are part of the technology behind it.
Virtualization is the most disruptive technology of the decade. Virtualization-enabled data protection and disaster recovery is especially disruptive because it allows IT to do things dramatically better at a fraction of the cost of what it would be in a physical data center.

Chapter 1: An Introduction to VMware Virtualization

Chapter 2: Backup and Recovery Methodologies

Chapter 3: Data Recovery in Virtual Environments

Chapter 4: Learn how to choose the right backup solution for VMware
The Hands-on Guide: Understanding Hyper-V in Windows Server 2012
Topics: Hyper-V, Veeam
This chapter is designed to get you started quickly with Hyper-V 3.0. It starts with a discussion of the hardware requirements for Hyper-V 3.0, then explains a basic Hyper-V deployment, followed by an upgrade from Hyper-V 2.0 to Hyper-V 3.0. The chapter concludes with a demonstration of migrating virtual machines from Hyper-V 2.0 to Hyper-V 3.0.
The Hands-on Guide: Understanding Hyper-V in Windows Server 2012 gives you simple step-by-step instructions to help you perform Hyper-V-related tasks like a seasoned expert. You will learn how to:
  • Build a clustered Hyper-V deployment
  • Manage Hyper-V through PowerShell
  • Create virtual machine replicas
  • Transition from a legacy Hyper-V environment
  • and more
CIO Guide to Virtual Server Data Protection
Server virtualization is changing the face of the modern data center. CIOs are looking for ways to virtualize more applications, and faster across the IT spectrum.
Server virtualization is changing the face of the modern data center. CIOs are looking for ways to virtualize more applications, faster across the IT spectrum. Selecting the right data protection solution that understands the new virtual environment is a critical success factor in the journey to cloud-based infrastructure. This guide looks at the key questions CIOs should be asking to ensure a successful virtual server data protection solution.
Five Fundamentals of Virtual Server Protection
The benefits of server virtualization are compelling and are driving the transition to large scale virtual server deployments.
The benefits of server virtualization are compelling and are driving the transition to large-scale virtual server deployments. From the cost savings realized through server consolidation to the business flexibility and agility inherent in emerging private and public cloud architectures, virtualization technologies are rapidly becoming a cornerstone of the modern data center. With Commvault's software, you can take full advantage of the developments in virtualization technology and enable private and public cloud data centers while continuing to meet all your data management, protection and retention needs. This whitepaper outlines the top 5 challenges to overcome in order to take advantage of the benefits of virtualization for your organization.
Server Capacity Defrag
This is not a paper on disk defrag. Although conceptually similar, it describes an entirely new approach to server optimization that performs a similar operation on the compute, memory and IO capacity of entire virtual and cloud environments.

Capacity defragmentation is a concept that is becoming increasingly important in the management of modern data centers. As virtualization increases its penetration into production environments, and as public and private clouds move to the forefront of the IT mindset, the ability to leverage this newfound agility while at the same time driving high efficiency (and low risk) is a real game changer. This white paper outlines how managers of IT environments make the transition from old-school capacity management to new-school efficiency management.

How to avoid VM sprawl and improve resource utilization in VMware and Veeam backup infrastructures
You're facing VM sprawl if you're experiencing an uncontrollable increase of unused and unneeded objects in your virtual VMware environment. VM sprawl often occurs in virtual infrastructures because they expand much faster than physical ones, which can make management a challenge. The growing number of virtualized workloads and applications generates “virtual junk,” causing the VM sprawl issue. Eventually it can put you at risk of running out of resources.

Getting virtual sprawl under control will help you reallocate and better provision your existing storage, CPU and memory resources between critical production workloads and high-performance, virtualized applications. With proper resource management, you can save money on extra hardware.

This white paper examines how you can avoid potential VM sprawl risks and automate proactive monitoring by using Veeam ONE, a part of Veeam Availability Suite. Veeam ONE will arm you with a list of VM sprawl indicators and explain how you can set up and configure a handy report kit to detect and eliminate VM sprawl threats in your VMware environment.

Read this FREE white paper and learn how to:

  • Identify “zombies”
  • Clean up garbage and orphaned snapshots
  • Establish a transparent system to get sprawl under control
  • And more!
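The kind of "zombie" detection the report kit automates can be sketched in plain Python. This is an illustrative sketch only: the thresholds and VM fields below are assumptions, not Veeam ONE's actual indicators or data model.

```python
from dataclasses import dataclass

# Illustrative sprawl thresholds; a real tool's indicators differ.
CPU_IDLE_PCT = 2.0       # avg CPU below this suggests a zombie
DAYS_POWERED_OFF = 30    # powered off this long suggests reclaimable storage

@dataclass
class VM:
    name: str
    avg_cpu_pct: float
    powered_on: bool
    days_since_poweroff: int
    orphaned_snapshots: int

def classify(vm: VM) -> str:
    """Bucket a VM into a sprawl-cleanup category."""
    if not vm.powered_on and vm.days_since_poweroff >= DAYS_POWERED_OFF:
        return "reclaim"           # long powered-off: candidate for deletion
    if vm.powered_on and vm.avg_cpu_pct < CPU_IDLE_PCT:
        return "zombie"            # running but doing no useful work
    if vm.orphaned_snapshots > 0:
        return "snapshot-cleanup"  # garbage consuming datastore space
    return "ok"

inventory = [
    VM("web01", 35.0, True, 0, 0),
    VM("test-old", 0.4, True, 0, 0),
    VM("decom-db", 0.0, False, 90, 2),
]
report = {vm.name: classify(vm) for vm in inventory}
print(report)  # {'web01': 'ok', 'test-old': 'zombie', 'decom-db': 'reclaim'}
```

In practice the inventory and counters would come from the monitoring platform's collected metrics rather than hand-built records.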
Hyper-V Replica in depth
Topics: Veeam, Hyper-V
When Windows Server 2012 hit the market in 2012, it shipped with a new feature called Hyper-V Replica. In 2013, when Windows Server 2012 R2 was released, the Hyper-V Replica feature was improved. This white paper gives you an in-depth look at Hyper-V Replica: what it is, how it works, what capabilities it offers and specific use cases.

By the end of this white paper, you’ll know:

  • If this feature is right for your environment
  • Steps for successful implementation
  • Best practices and much more!
Building a Highly Available Data Infrastructure
Topics: DataCore, storage, SAN, HA
This white paper outlines best practices for improving overall business application availability by building a highly available data infrastructure.
Regardless of whether you use a direct-attached storage array, a network-attached storage (NAS) appliance, or a storage area network (SAN) to host your data, if that data infrastructure is not designed for high availability, then the data it stores is not highly available either; by extension, application availability is at risk, regardless of server clustering.

Download this paper to:
  • Learn how to develop a high availability strategy for your applications
  • Identify the differences between hardware and software-defined infrastructures in terms of availability
  • Learn how to build a highly available data infrastructure using hyper-converged storage

Scale Computing’s hyperconverged system matches the needs of the SMB and mid-market
Scale Computing HC3 is cost effective, scalable and designed for installation and management by the IT generalist
Everyone has heard the buzz these days about hyper-converged systems – appliances with compute, storage and virtualization infrastructures built in. Hyper-converged infrastructure systems are an extension of infrastructure convergence – the combination of compute, storage and networking resources in one compact box – that promises simplification by consolidating resources onto a commodity x86 server platform.
Why Parallel I/O & Moore's Law Enable Virtualization and SDDC to Achieve their Potential
Today’s demanding applications, especially within virtualized environments, require high performance from storage to keep up with the rate of data acquisition and unpredictable demands of enterprise workloads. In a world that requires near instant response times and increasingly faster access to data, the needs of business-critical tier 1 enterprise applications, such as databases including SQL, Oracle and SAP, have been largely unmet.

The major bottleneck holding back the industry is I/O performance. Current systems still rely on device-level optimizations tied to specific disk and flash technologies because they lack software optimizations that can fully harness the latest advances in more powerful server system technologies, such as multicore architectures. As a result, they have not been able to keep up with the pace of Moore’s Law.
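The multicore argument can be illustrated with a toy sketch: servicing I/O requests one at a time leaves cores idle during each device wait, while spreading the requests across a thread pool overlaps those waits. The request count, worker count, and 50 ms latency below are invented for illustration and have nothing to do with any vendor's implementation.

```python
import time
from concurrent.futures import ThreadPoolExecutor

IO_LATENCY_S = 0.05  # pretend each device round-trip takes 50 ms

def service_io(request_id: int) -> int:
    time.sleep(IO_LATENCY_S)  # stand-in for a blocking device read
    return request_id

requests = list(range(16))

start = time.perf_counter()
serial = [service_io(r) for r in requests]       # one request at a time
serial_t = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=8) as pool:  # overlap the waits
    parallel = list(pool.map(service_io, requests))
parallel_t = time.perf_counter() - start

assert serial == parallel
print(f"serial {serial_t:.2f}s vs parallel {parallel_t:.2f}s")
```

With 16 requests and 8 workers, the serial pass takes roughly sixteen latencies while the pooled pass takes roughly two, which is the whole point of parallelizing I/O across cores.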

Zerto Offsite Cloud Backup & Data Protection
Zerto Offsite Backup in the Cloud

What is Offsite Backup?

Offsite Backup is a new paradigm in data protection that combines hypervisor-based replication with longer retention. This greatly simplifies data protection for IT organizations. The ability to leverage the data at the disaster recovery target site or in the cloud for VM backup eliminates the impact on production workloads.

Why Cloud Backup?

  • Offsite Backup combines replication and long retention in a new way
  • The repository can be located in public cloud storage, a private cloud, or as part of a hybrid cloud solution.
  • Copies are saved on a daily, weekly and monthly schedule.
  • The data volumes and configuration information are included to allow VM backups to be restored on any compatible platform, cloud or otherwise.
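The daily/weekly/monthly schedule above is a classic grandfather-father-son rotation. A minimal sketch of the pruning logic follows; the retention counts and the Sunday-as-weekly convention are assumptions for illustration, not Zerto's actual defaults.

```python
from datetime import date, timedelta

KEEP_DAILY, KEEP_WEEKLY, KEEP_MONTHLY = 7, 4, 12  # assumed retention counts

def retained(backups: list[date], today: date) -> set[date]:
    """Return the subset of backup dates a daily/weekly/monthly policy keeps."""
    keep = set(sorted(backups, reverse=True)[:KEEP_DAILY])   # newest dailies
    sundays = sorted((d for d in backups if d.isoweekday() == 7), reverse=True)
    keep.update(sundays[:KEEP_WEEKLY])                       # weekly copies
    newest_per_month: dict[tuple[int, int], date] = {}
    for d in sorted(backups, reverse=True):
        newest_per_month.setdefault((d.year, d.month), d)    # newest per month
    keep.update(sorted(newest_per_month.values(), reverse=True)[:KEEP_MONTHLY])
    return keep

today = date(2016, 6, 30)
backups = [today - timedelta(days=n) for n in range(120)]  # one copy per day
print(len(retained(backups, today)))  # 13 copies survive out of 120
```

Copies that qualify under more than one rule (a daily that is also the newest copy of its month, say) are only stored once, which is why the retained set is smaller than the sum of the three counts.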
Backup is Not Replication - Backup VMware
Why Rely on Backup Shipping as Your VMware DR Solution?

Some people think that if you want to protect data in your virtual VMware environment, your easiest solution is to back up VMware using snapshots or agents. However, solutions like this, such as Veeam, can slow down your production environment, and they are difficult to scale.

Yet many of us understand that backup is not disaster recovery. The right approach to a BC/DR solution is hypervisor-based replication.

With hypervisor-based replication you receive:

  • Continuous data replication
  • Recovery capabilities built specifically for VMware datacenters
  • Full consistency between and among all application components
  • Zerto’s Offsite Backup for VMware
Unlock the Full Performance of Your Servers
Unlock the full performance of your servers with DataCore Adaptive Parallel I/O Software

The Problem:

Current systems don't have software optimizations that can fully harness the latest advances in more powerful server system technologies.

As a result, I/O performance has been the major bottleneck holding back the industry.

ESG Lab Spotlight: SIOS iQ & FlashSoft: Analytics-driven Server Acceleration
Download this ESG Lab Spotlight and learn how easy accelerating performance in your VMware environment through host-based caching can be. ESG Lab looked at a new approach that uses the SIOS iQ machine learning analytics platform to identify candidate VMs that can be accelerated with FlashSoft host-based caching software using SanDisk Fusion ioMemory PCIe application accelerators and SanDisk SAS and SATA SSDs. They validated the benefits of this approach in this detailed Lab Spotlight.

This ESG Lab Spotlight evaluates how SIOS iQ machine learning and FlashSoft software enable companies to improve application performance through easy, cost-efficient host-based caching with solid-state storage devices (SSDs), reducing storage bottlenecks, speeding application performance, and minimizing latency. The challenge for many organizations, especially since many SSDs are still more expensive than HDDs, is knowing when and where to apply SSDs to both maximize performance and minimize costs. This lab report evaluates SIOS iQ IT analytics alongside SanDisk FlashSoft, ioMemory, and SSDs. SIOS iQ, a machine learning analytics platform for optimizing VMware environments, identifies which virtual machines will benefit most from host-based caching and recommends the configuration that will provide the best results. FlashSoft host-based caching software leverages SanDisk Fusion ioMemory PCIe application accelerators; SanDisk Lightning, Optimus, and CloudSpeed SSDs; or any other solid-state storage device to reduce latency and improve throughput in read-intensive virtual and physical server workloads.

ESG Lab used a simulated enterprise IT infrastructure to validate how organizations can use the SIOS iQ analytics platform to identify applications that could be accelerated with host-based caching software, recommend an optimal cache configuration, and predict the resulting storage performance if caching were configured as recommended. 

Vembu Changes the Dynamics of Data Protection for Business Applications in a vSphere Environment
This paper examines how to use Vembu BDR to implement distributed backup and disaster recovery (DR) operations in a centrally managed data protection environment with an ingenious twist. Rather than store image backups of VMs and block-level backups of physical and VM guest host systems as a collection of backup files, Vembu BDR utilizes a document-oriented database as a backup repository, dubbed VembuHIVE, which Vembu virtualizes as a file system.
In this analysis, openBench Labs assesses the performance and functionality of the Vembu Backup & Disaster Recovery (BDR) host-level (a.k.a. agentless) data protection solution in a VMware vSphere 5.5 HA Cluster. For this test they utilized a vSphere VM configured with three logical disks located on separate datastores to support an Exchange server with two mailbox databases. Each of the mailbox databases was configured to support 1,000 user accounts.

This paper provides technically savvy IT decision makers with the detailed performance and resource configuration information needed to analyze the trade-offs involved in setting up an optimal data protection and business continuity plan to support a service level agreement (SLA) with line of business (LoB) executives.

To test backup performance, they created 2,000 AD users and utilized LoadGen to generate email traffic. Each user received 120 messages and sent 20 messages over an 8-hour workday. Using this load level, they established performance baselines for data protection using direct SAN-based agentless VM backups.

In this scenario they were able to:

  • Finish crash-consistent incremental agentless backups in 18 minutes, while processing the base transaction load of 12 Outlook TPS.
  • Restore a fully functional VM in less than 5 minutes as a Hyper-V VM capable of sustaining an indefinite load of 4 Outlook TPS.
  • Recover all user mailboxes as .pst files from a host-level agentless VM backup with no need to schedule a Windows Client backup initiated within the VM’s guest Windows OS.
In a DR scenario, Vembu leverages the ability to restore a VM in any format to provide an instant-boot function. When Vembu Backup Server is installed on a server that is concurrently running Hyper-V, Vembu exports the datastores associated with a VM backup as Hyper-V disks and configures a VM to boot from those datastores.