White Papers Search Results
Showing 1 - 16 of 16 white papers, page 1 of 1.
The Expert Guide to VMware Data Protection
Virtualization is a general term for simulating a physical entity in software. Many different forms of virtualization may be found in a data center, including server, network, and storage virtualization, and server virtualization in particular comes with many unique terms and concepts of its own.
Virtualization is the most disruptive technology of the decade. Virtualization-enabled data protection and disaster recovery is especially disruptive because it allows IT to do things dramatically better, at a fraction of what they would cost in a physical data center.

Chapter 1: An Introduction to VMware Virtualization

Chapter 2: Backup and Recovery Methodologies

Chapter 3: Data Recovery in Virtual Environments

Chapter 4: Choosing the Right Backup Solution for VMware
The Hands-on Guide: Understanding Hyper-V in Windows Server 2012
Topics: Hyper-V, Veeam
This chapter is designed to get you started quickly with Hyper-V 3.0. It starts with a discussion of the hardware requirements for Hyper-V 3.0, then explains a basic Hyper-V deployment followed by an upgrade from Hyper-V 2.0 to Hyper-V 3.0. The chapter concludes with a demonstration of migrating virtual machines from Hyper-V 2.0 to Hyper-V 3.0.
The Hands-on Guide: Understanding Hyper-V in Windows Server 2012 gives you simple step-by-step instructions to help you perform Hyper-V-related tasks like a seasoned expert. You will learn how to:
  • Build a clustered Hyper-V deployment
  • Manage Hyper-V through PowerShell (see the sketch after this list)
  • Create virtual machine replicas
  • Transition from a legacy Hyper-V environment
  • and more
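As a taste of the PowerShell-driven management the guide covers, the short sketch below queries local VMs by shelling out to the standard Get-VM cmdlet. It is illustrative only, not taken from the guide, and assumes a Windows host with the Hyper-V PowerShell module installed; Python is used here purely as the scripting wrapper.

    # Illustrative only: querying Hyper-V from a script by shelling out to
    # PowerShell. Assumes a Windows host with the Hyper-V PowerShell module
    # and sufficient privileges; Get-VM is a standard Hyper-V cmdlet.
    import subprocess

    def list_vms():
        """Return the name/state listing of local Hyper-V VMs."""
        result = subprocess.run(
            ["powershell", "-NoProfile", "-Command",
             "Get-VM | Select-Object Name, State | Format-Table -AutoSize"],
            capture_output=True, text=True, check=True,
        )
        return result.stdout

    if __name__ == "__main__":
        print(list_vms())
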
CIO Guide to Virtual Server Data Protection
Server virtualization is changing the face of the modern data center. CIOs are looking for ways to virtualize more applications, faster, across the IT spectrum.
Server virtualization is changing the face of the modern data center. CIOs are looking for ways to virtualize more applications, faster, across the IT spectrum. Selecting the right data protection solution, one that understands the new virtual environment, is a critical success factor in the journey to cloud-based infrastructure. This guide looks at the key questions CIOs should be asking to ensure a successful virtual server data protection solution.
Five Fundamentals of Virtual Server Protection
The benefits of server virtualization are compelling and are driving the transition to large-scale virtual server deployments.
The benefits of server virtualization are compelling and are driving the transition to large-scale virtual server deployments. From the cost savings realized through server consolidation to the business flexibility and agility inherent in the emergent private and public cloud architectures, virtualization technologies are rapidly becoming a cornerstone of the modern data center. With Commvault's software, you can take full advantage of the developments in virtualization technology and enable private and public cloud data centers while continuing to meet all your data management, protection, and retention needs. This whitepaper outlines the top 5 challenges to overcome in order to take advantage of the benefits of virtualization for your organization.
Hyper-V Replica in depth
Topics: Veeam, Hyper-V
When Windows Server 2012 hit the market in 2012, it shipped with a new feature called Hyper-V Replica. In 2013, when Windows Server 2012 R2 was released, the Hyper-V Replica feature was improved. This white paper gives you an in-depth look at Hyper-V Replica: what it is, how it works, what capabilities it offers, and specific use cases.

By the end of this white paper, you’ll know:

  • Whether this feature is right for your environment
  • Steps for successful implementation
  • Best practices and much more!
Building a Highly Available Data Infrastructure
Topics: DataCore, storage, SAN, HA
This white paper outlines best practices for improving overall business application availability by building a highly available data infrastructure.
Regardless of whether you use a direct-attached storage array, a network-attached storage (NAS) appliance, or a storage area network (SAN) to host your data, if that data infrastructure is not designed for high availability, then the data it stores is not highly available either, and application availability is at risk regardless of server clustering (a minimal sketch of the mirroring idea follows the list below).

Download this paper to:
•    Learn how to develop a high availability strategy for your applications
•    Identify the differences between hardware- and software-defined infrastructures in terms of availability
•    Learn how to build a highly available data infrastructure using hyper-converged storage
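
To make the argument above concrete, here is a minimal sketch, in Python and entirely ours rather than DataCore's, of the core mechanism behind a highly available data layer: a write is acknowledged only after two independent storage nodes both hold it, so the loss of either node loses no data.

    # A minimal sketch (ours, not DataCore's) of a highly available data
    # layer: acknowledge a write only after two independent storage nodes
    # both hold it, so losing either node loses no data.
    class StorageNode:
        def __init__(self, name):
            self.name, self.blocks = name, []

        def write(self, block):
            self.blocks.append(block)
            return True

    def mirrored_write(block, primary, mirror):
        # Acknowledge only when both copies are durable; this is what
        # keeps data available independently of server clustering.
        return primary.write(block) and mirror.write(block)

    node_a, node_b = StorageNode("node-a"), StorageNode("node-b")
    assert mirrored_write(b"payload", node_a, node_b)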

Scale Computing’s hyperconverged system matches the needs of the SMB and mid-market
Scale Computing HC3 is cost-effective, scalable, and designed for installation and management by the IT generalist
Everyone has heard the buzz about hyper-converged systems (appliances with compute, storage, and virtualization infrastructures built in) these days. Hyper-converged infrastructure systems are an extension of infrastructure convergence, the combination of compute, storage, and networking resources in one compact box, and promise simplification by consolidating resources onto a commodity x86 server platform.
Why Parallel I/O & Moore's Law Enable Virtualization and SDDC to Achieve their Potential
Today’s demanding applications, especially within virtualized environments, require high performance from storage to keep up with the rate of data acquisition and unpredictable demands of enterprise workloads. In a world that requires near instant response times and increasingly faster access to data, the needs of business-critical tier 1 enterprise applications, such as databases including SQL, Oracle and SAP, have been largely unmet.

The major bottleneck holding back the industry is I/O performance. Current systems still rely on device-level optimizations tied to specific disk and flash technologies, because they lack software optimizations that can fully harness the latest advances in more powerful server system technologies such as multicore architectures. As a result, storage performance has not been able to keep up with the pace of Moore's Law.
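
As a toy illustration of the parallel I/O idea, and not DataCore's implementation, the sketch below dispatches independent read requests concurrently across a thread pool instead of serializing them behind a single stream, which is what lets multicore hardware hide individual device latencies.

    # A toy illustration of parallel I/O (ours, not DataCore's code):
    # independent reads are dispatched concurrently across a thread pool
    # instead of queuing behind a single serial I/O stream.
    import concurrent.futures
    import os

    CHUNK = 1024 * 1024  # 1 MiB per request

    def read_chunk(path, offset):
        # Each chunk is independent, so requests can overlap in flight.
        with open(path, "rb") as f:
            f.seek(offset)
            return len(f.read(CHUNK))

    def parallel_read(path):
        size = os.path.getsize(path)
        with concurrent.futures.ThreadPoolExecutor(
                max_workers=os.cpu_count()) as pool:
            futures = [pool.submit(read_chunk, path, off)
                       for off in range(0, size, CHUNK)]
            return sum(f.result() for f in futures)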

Unlock the Full Performance of Your Servers
Unlock the full performance of your servers with DataCore Adaptive Parallel I/O Software

The Problem:

Current systems don't have software optimizations that can fully harness the latest advances in more powerful server system technologies.

As a result, I/O performance has been the major bottleneck holding back the industry.

ESG Lab Spotlight: SIOS iQ & FlashSoft: Analytics-driven Server Acceleration
Download this ESG Lab Spotlight and learn how easy accelerating performance in your VMware environment through host-based caching can be. ESG Lab looked at a new approach that uses the SIOS iQ machine learning analytics platform to identify candidate VMs that can be accelerated with FlashSoft host-based caching software using SanDisk Fusion ioMemory PCIe application accelerators and SanDisk SAS and SATA SSDs. They validated the benefits of this approach in this detailed Lab Spotlight.

This ESG Lab Spotlight evaluates how the SIOS iQ machine learning platform and FlashSoft software enable companies to improve application performance through easy, cost-efficient host-based caching with solid-state storage devices (SSDs), reducing storage bottlenecks, speeding application performance, and minimizing latency. The challenge for many organizations, especially since many SSDs are still more expensive than HDDs, is knowing when and where to apply SSDs to maximize performance while minimizing cost. SIOS iQ, a machine learning analytics platform for optimizing VMware environments, identifies which virtual machines will benefit most from host-based caching and recommends the configuration that will provide the best results. FlashSoft host-based caching software leverages SanDisk Fusion ioMemory PCIe application accelerators; SanDisk Lightning, Optimus, and CloudSpeed SSDs; or any other solid-state storage device to reduce latency and improve throughput in read-intensive virtual and physical server workloads.

ESG Lab used a simulated enterprise IT infrastructure to validate how organizations can use the SIOS iQ analytics platform to identify applications that could be accelerated with host-based caching software, recommend an optimal cache configuration, and predict the resulting storage performance if caching were configured as recommended. 
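
The selection logic itself is proprietary machine learning, but a toy heuristic (ours, not the SIOS iQ model) conveys the shape of the problem: read-heavy VMs sitting behind high storage latency are the ones that stand to gain most from host-based read caching. The thresholds and field names below are illustrative assumptions.

    # A toy heuristic (ours, not the SIOS iQ model): read-heavy VMs with
    # high storage latency gain the most from host-based read caching.
    # Thresholds and field names are illustrative assumptions.
    def caching_candidates(vms, min_read_ratio=0.7, min_latency_ms=10.0):
        return [vm["name"] for vm in vms
                if vm["read_ratio"] >= min_read_ratio
                and vm["avg_latency_ms"] >= min_latency_ms]

    vms = [
        {"name": "sql-01", "read_ratio": 0.85, "avg_latency_ms": 22.0},
        {"name": "web-02", "read_ratio": 0.60, "avg_latency_ms": 5.0},
    ]
    print(caching_candidates(vms))   # ['sql-01']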

Vembu Changes the Dynamics of Data Protection for Business Applications in a vSphere Environment
This paper examines how to use Vembu BDR to implement distributed backup and disaster recovery (DR) operations in a centrally managed data protection environment with an ingenious twist. Rather than store image backups of VMs and block-level backups of physical and VM guest host systems as a collection of backup files, Vembu BDR utilizes a document-oriented database as a backup repository, dubbed VembuHIVE, which Vembu virtualizes as a file system.
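
To make that repository design easier to picture, here is a cartoon, ours rather than anything from Vembu, of the general idea behind a versioned backup store exposed as a file system: keep only the changed blocks per backup version, yet materialize any version as a complete image on demand. VembuHIVE's actual on-disk format is proprietary; nothing below reproduces it.

    # A cartoon (ours) of the general idea: store only changed blocks per
    # backup version, yet materialize any version as a complete image on
    # demand. VembuHIVE's actual format is proprietary; this is not it.
    class BackupRepo:
        def __init__(self):
            self.versions = []      # one {block_index: bytes} delta per backup

        def add_incremental(self, changed_blocks):
            self.versions.append(dict(changed_blocks))

        def materialize(self, version, num_blocks):
            """Reconstruct the full image as of a given version."""
            image = {}
            for delta in self.versions[:version + 1]:
                image.update(delta)  # later deltas overwrite older blocks
            return [image.get(i, b"\0") for i in range(num_blocks)]

    repo = BackupRepo()
    repo.add_incremental({0: b"A", 1: b"B"})   # initial full backup
    repo.add_incremental({1: b"b"})            # incremental: block 1 changed
    assert repo.materialize(1, 2) == [b"A", b"b"]
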
In this analysis, openBench Labs assesses the performance and functionality of the Vembu Backup & Disaster Recovery (BDR) host-level (a.k.a. agentless) data protection solution in a VMware vSphere 5.5 HA Cluster. For this test they utilized a vSphere VM configured with three logical disks located on separate datastores to support an Exchange server with two mailbox databases. Each of the mailbox databases was configured to support 1,000 user accounts.

This paper provides technically savvy IT decision makers with the detailed performance and resource configuration information needed to analyze the trade-offs involved in setting up an optimal data protection and business continuity plan to support a service level agreement (SLA) with line of business (LoB) executives.

To test backup performance, they created 2,000 AD users and utilized LoadGen to generate email traffic. Each user received 120 messages and sent 20 messages over an 8-hour workday. Using this load level, they established performance baselines for data protection using direct SAN-based agentless VM backups. A back-of-envelope check on that load level follows.
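
As a rough sanity check (our arithmetic, not the paper's), the stated message volume works out to roughly ten messages per second across the cluster, the same order of magnitude as the 12 Outlook TPS baseline cited below; LoadGen transactions also include mailbox operations beyond simple message delivery, so the two figures need not match exactly.

    # Back-of-envelope arithmetic (ours) on the stated LoadGen profile.
    users = 2000
    msgs_per_user = 120 + 20       # received + sent, per user, per workday
    workday_seconds = 8 * 3600

    rate = users * msgs_per_user / workday_seconds
    print(f"{rate:.1f} messages/second")    # ~9.7 messages/second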

In this scenario they were able to:

  • Finish crash-consistent incremental agentless backups in 18 minutes, while processing the base transaction load of 12 Outlook TPS.
  • Restore a fully functional VM in less than 5 minutes as a Hyper-V VM capable of sustaining an indefinite load of 4 Outlook TPS.
  • Recover all user mailboxes as .pst files from a host-level agentless VM backup with no need to schedule a Windows Client backup initiated within the VM’s guest Windows OS.
In a DR scenario, Vembu leverages the ability to restore a VM in any format to provide an instant-boot function. When Vembu Backup Server is installed on a server that is concurrently running Hyper-V, Vembu exports the datastores associated with a VM backup as Hyper-V disks and configures a VM to boot from those datastores.
True 15-Minute RTO for Mission-Critical VM Systems with Vembu VMBackup Replication
Vembu Backup & Disaster Recovery (BDR) provides IT with a Disaster Recovery Management (DRM) system capable of meeting even more aggressive RTO and RPO goals than the previous release. For highly active database-driven systems, Vembu VMBackup leverages VMtools and VMware Changed Block Tracking (CBT) to perform incremental backups at 15-minute intervals with minimal impact on query processing. As a result, IT can limit data loss to 15 minutes of processing on active mission-critical VMs.

The only way to recover a VM with full functionality and full performance without performing an explicit restore operation is through VM replication. Maintaining a replica VM, however, requires frequent and potentially expensive update processes that involve both explicit backup and implicit restore operations. To enable the extensive use of replication by IT, VMBackup adds critical optimizations to both restore and replication operations that reduce the overhead on ESXi hosts and production VMs to just VM snapshot processing. Specifically, a BDR Backup server running on a VM is able to leverage hot-add SCSI transfer mode to write logical disk and logical disk snapshot files directly to a vSphere datastore, without involving the ESXi host for anything more than creating a VM snapshot.

A key value proposition for Vembu VMBackup is its ability to read and write all backup and restore data directly to and from a datastore snapshot. As a result, Vembu VMBackup offloads all I/O overhead from production VMs and ESXi hosts, which is critical for maintaining an aggressive DRM strategy in a highly active virtual environment. What’s more, the performance of Vembu VMBackup in openBench Labs' test environment made it possible to enhance support for a mission-critical OLTP application running on a VM using a combination of incremental backups for backup and replication. As a result, they were able to comply with a 30-minute RPO, restore the VM to a production environment in 5 minutes, and return to full-production level processing of business transactions (850 cTPS) in under 15 minutes.
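
The RPO arithmetic behind those numbers is simple enough to sketch (our illustration, not a formula from the paper): with periodic incrementals, worst-case data loss is roughly the backup interval plus the time an incremental takes to complete and land safely.

    # Illustrative RPO arithmetic (ours, not a formula from the paper):
    # with periodic incrementals, worst-case data loss is roughly the
    # backup interval plus the time an incremental takes to complete.
    def worst_case_rpo(interval_min, backup_duration_min):
        return interval_min + backup_duration_min

    # 15-minute incrementals that each land within ~15 minutes stay
    # inside the 30-minute RPO cited above.
    assert worst_case_rpo(15, 15) <= 30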

VMBackup adds a new replica management module that enables an IT administrator to fully manage an initial failover and later finalize failover or failback with consolidation. In addition, BDR backup server simplifies all management functions by eliminating the need to run a separate client module on a BDR backup server, which becomes its own client within the BDR reporting hierarchy.

Vembu OffsiteDR Server: Optimize RPO & RTO While Enhancing DR Resilience
Vembu BDR’s data protection solution enhances DRM operations by eliminating all potential single points of failure for restore functions. Using Vembu BDR Suite, IT is able to replicate backup data from multiple BDR Backup servers to a system running OffsiteDR Server within their own data center. As a result, IT garners an alternate system from which to recover protected VMs and physical servers using the same procedures that IT administrators employ on a BDR Backup server.

The ability to configure and deploy high-performance VMs within a vSphere virtual environment continues to put CIOs under increasing pressure to deal with the rampant bête noire of IT: business continuity. What started with Line of Business (LoB) driven Service Level Agreements (SLAs) requiring IT to meet rigorous Recovery Time and Recovery Point Objectives (RTO and RPO) has grown into an auditable ISO standard (ISO 22301) and an emerging software niche for Disaster Recovery Management (DRM) systems.

For this analysis, openBench Labs assessed the performance and functionality of the Vembu OffsiteDR Server, a DRM device that increases the resilience of recovery processes. Their initial intent was to examine the ability to restore data in the event of a catastrophic failure in a vSphere environment, including:

  • a VM running BDR Backup server,
  • an ESXi host, and
  • a SAN device.

The full capabilities of Vembu OffsiteDR Server, however, quickly revealed that the device had a much broader operational impact. With the installation of OffsiteDR Server on an external physical server, they were free to configure end-to-end backup and restore operations in a way that optimized RTO and RPO for all business-critical application scenarios running in the vSphere test environment.

In openBench Labs' test environment, the combination of Vembu OffsiteDR Server deployed on a physical server with a Vembu BDR Backup server deployed on a VM provided a value proposition that extended far beyond the enhancement of DRM recovery resilience. With OffsiteDR Server installed on a physical server, they were able to optimally leverage VM and physical server platforms to easily implement all of the data protection functionality provided by Vembu BDR Suite, leverage all of the performance optimizations available to VMs in a vSphere environment, and do so in the most cost-effective system configuration.

Virtual Machine Migration Checklist
Preparing for a virtualized infrastructure migration can be daunting. Use the Zerto checklist to help you plan and execute your migration smoothly!
This checklist provides an overview to help plan a datacenter migration project and ensure accountability through each step, including:
  • Communication: Maintain clear and regular communication with everyone.
  • Scoping: Understand what makes up the application. Get rid of the unknowns to ensure nothing breaks.
  • Ownership and permissions: Who owns the server and application, and who will test and validate pre- and post-migration?
  • Priority: Are there other projects in your way?
  • Organization: Checklists and spreadsheets: How are you tracking all of this? (A lightweight tracking sketch follows this list.)
  • Execution: How is the migration happening? How will the servers and data be moved?
  • Contingency: If something goes wrong, how do you back out and reschedule if necessary?
  • Tracking: Prepare for change. Track what changes need to occur on machines, applications, and infrastructure.
  • Data Hygiene: Clean up the source environment.
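
One lightweight way to keep per-server status and ownership out of scattered spreadsheets is a small tracking structure like the hypothetical Python sketch below; the step names and fields are illustrative assumptions, not part of the Zerto checklist itself.

    # A hypothetical tracker for the checklist above; step names and
    # fields are illustrative, not part of the Zerto checklist itself.
    STEPS = ["scoping", "ownership", "execution", "validation", "data hygiene"]

    class MigrationItem:
        def __init__(self, server, owner):
            self.server = server
            self.owner = owner       # who validates pre- and post-migration
            self.status = {step: "pending" for step in STEPS}

        def complete(self, step):
            self.status[step] = "done"

        def remaining(self):
            return [s for s, state in self.status.items() if state != "done"]

    item = MigrationItem("app-db-01", "dba-team")
    item.complete("scoping")
    print(item.remaining())
    # ['ownership', 'execution', 'validation', 'data hygiene']
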
2017 State of Storage in Virtualization
For the second year, Tegile has partnered with ActualTech Media to research how storage and virtualization trends track with one another and how respondents view technologies such as VMware’s VVols. With responses from more than 700 IT pros and IT leaders, you will discover the applications and storage characteristics that are important today and how respondents’ thoughts have shifted over the past year.
Learn how storage has become the key enabler of bigger and faster virtualization. Key findings:
•    Virtualized instances of SQL Server continue to increase in popularity.
•    Hybrid flash storage (combination of flash and spinning disk) gains even more traction. Hybrid storage systems are now found in 55% of respondent environments.
•    Storage performance issues persist. The majority of respondents indicate that they experience storage performance challenges.
•    iSCSI now exceeds Fibre Channel in deployment popularity. 50% of respondents indicated that iSCSI is one of their protocols of choice.
•    VMware VVols remains poorly understood by customers. 66% of respondents feel that they have little to no knowledge of VVols.

Product Spotlight: Does Your Copy Data Solution Play Checkers or Chess?
This report by Storage Switzerland discusses the difference between Copy Data solutions that provide the basic minimum functionality (playing checkers) and those that are truly of strategic value to an organization (playing chess). In speaking of Catalogic, the report notes that "the value of the solution continues to increase." Covered are new feature additions for SAP HANA, Epic Electronic Health Record, InterSystems Caché, and SQL Server.
From the report: We have covered Catalogic since its first release. In that time, it has moved from a niche NetApp solution to a multi-vendor copy data management solution. Now, as it adds coverage of an increasing number of databases, the value of the solution continues to increase. Copy data management has both short-term benefits and long-term strategic advantages. Organizations should consider the technology a foundational component of an entire storage strategy, not just their data protection process.