Virtualization Technology News and Information
White Papers Search Results
Showing 1 - 16 of 36 white papers, page 1 of 3.
5 Fundamentals of Modern Data Protection
Some data protection software vendors will say that they are “agentless” because they can do an agentless backup. However, many of these vendors require agents for file-level restore, proper application backup, or to restore application data. My advice is to make sure that your data protection tool is able to address all backup and recovery scenarios without the need for an agent.
Legacy backup is costly, inefficient, and can force IT administrators to make risky compromises that impact critical business applications, data and resources. Read this NEW white paper to learn how Modern Data Protection capitalizes on the inherent benefits of virtualization to:
  • Increase your ability to meet RPOs and RTOs
  • Eliminate the need for complex and inefficient agents
  • Reduce operating costs and optimize resources
The Expert Guide to VMware Data Protection
Virtualization is a very general term for simulating a physical entity by using software. There are many different forms of virtualization that may be found in a data center, including server, network and storage virtualization. When talking about server virtualization specifically, you may hear many unique terms and concepts that are part of the technology behind it.
Virtualization is the most disruptive technology of the decade. Virtualization-enabled data protection and disaster recovery is especially disruptive because it allows IT to do things dramatically better at a fraction of the cost of what it would be in a physical data center.

Chapter 1: An Introduction to VMware Virtualization

Chapter 2: Backup and Recovery Methodologies

Chapter 3: Data Recovery in Virtual Environments

Chapter 4: Choosing the Right Backup Solution for VMware
The Hands-on Guide: Understanding Hyper-V in Windows Server 2012
Topics: Hyper-V, veeam
This chapter is designed to get you started quickly with Hyper-V 3.0. It starts with a discussion of the hardware requirements for Hyper-V 3.0, then explains a basic Hyper-V deployment followed by an upgrade from Hyper-V 2.0 to Hyper-V 3.0. The chapter concludes with a demonstration of migrating virtual machines from Hyper-V 2.0 to Hyper-V 3.0.
The Hands-on Guide: Understanding Hyper-V in Windows Server 2012 gives you simple step-by-step instructions to help you perform Hyper-V-related tasks like a seasoned expert. You will learn how to:
  • Build a clustered Hyper-V deployment
  • Manage Hyper-V through PowerShell
  • Create virtual machine replicas
  • Transition from a legacy Hyper-V environment
  • and more
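For a taste of the PowerShell-based management the guide covers, a minimal sketch might look like the following (the VM and switch names are illustrative assumptions, not taken from the guide):

```powershell
# Requires the Hyper-V role and its PowerShell module (Windows Server 2012)
Import-Module Hyper-V

# Create a VM with 2 GB of startup memory on an existing virtual switch
New-VM -Name "TestVM" -MemoryStartupBytes 2GB -SwitchName "External"

# List all VMs on the local host with their state and assigned memory
Get-VM | Select-Object Name, State, MemoryAssigned
```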
Blueprint for Delivering IT-as-a-Service - 9 Steps for Success
You’ve got the materials (your constantly changing IT infrastructure). You’ve got the work order (your boss made that perfectly clear). But now what? Delivering IT-as-a-service has never been more challenging than it is today...virtualization, private, public, and hybrid cloud computing are drastically changing how IT needs to provide service delivery and assurance. You know exactly what you need to do, the big question is HOW to do it. If only there was some kind of blueprint for this…

Based on our experience working with Zenoss customers who have built highly virtualized and cloud infrastructures, we know what it takes to operationalize IT-as-a-Service in today’s ever-changing technical environment. We’ve put together a guided list of questions in this eBook around the following topics to help you build your blueprint for getting the job done, and done right:
  • Unified Operations
  • Maximum Automation
  • Model Driven
  • Service Oriented
  • Multi-Tenant
  • Horizontal Scale
  • Open Extensibility
  • Subscription
  • Extreme Service
Five Fundamentals of Virtual Server Protection
The benefits of server virtualization are compelling and are driving the transition to large-scale virtual server deployments. From the cost savings recognized through server consolidation to the business flexibility and agility inherent in emerging private and public cloud architectures, virtualization technologies are rapidly becoming a cornerstone of the modern data center. With Commvault's software, you can take full advantage of the developments in virtualization technology and enable private and public cloud data centers while continuing to meet all your data management, protection and retention needs. This whitepaper outlines the top 5 challenges to overcome in order to take advantage of the benefits of virtualization for your organization.
PowerShell for newbies: Getting started with PowerShell 4.0
Topics: veeam, powershell

This white paper is a Windows PowerShell guide for beginners. If you are an IT Professional with little-to-no experience with PowerShell and want to learn more about this powerful scripting framework, this quick-start guide is for you. With the PowerShell engine, you can automate daily management of Windows-based servers, applications and platforms. This e-book provides the fundamentals every PowerShell administrator needs to know. The getting started guide will give you a crash course on PowerShell essential terms, concepts and commands and help you quickly understand PowerShell basics.

You will also learn about:

  • What is PowerShell?
  • Using PowerShell Help
  • PowerShell Terminology
  • The PowerShell Paradigm
  • And more!

This white paper focuses on PowerShell 4.0; however, you can be sure that all the basics provided are relevant to earlier versions as well. For those who are ready to take the next steps in learning PowerShell and looking for more information on the topic, this PDF contains a list of helpful resources.
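For a flavor of the basics the guide teaches, the discovery cmdlets it describes can be tried directly in any PowerShell 4.0 console:

```powershell
# Discover cmdlets by noun -- PowerShell's verb-noun naming convention
Get-Command -Noun Service

# Read the built-in help for a cmdlet, including usage examples
Get-Help Get-Service -Examples

# The pipeline passes objects, not text: list stopped services by name
Get-Service | Where-Object { $_.Status -eq 'Stopped' } | Sort-Object Name
```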

How to avoid VM sprawl and improve resource utilization in VMware and Veeam backup infrastructures
You're facing VM sprawl if you're experiencing an uncontrollable increase of unused and unneeded objects in your virtual VMware environment. VM sprawl often occurs in virtual infrastructures because they expand much faster than physical ones, which can make management a challenge. The growing number of virtualized workloads and applications generates “virtual junk,” causing the VM sprawl issue. Eventually it can put you at risk of running out of resources.

Getting virtual sprawl under control will help you reallocate and better provision your existing storage, CPU and memory resources between critical production workloads and high-performance, virtualized applications. With proper resource management, you can save money on extra hardware.

This white paper examines how you can avoid potential VM sprawl risks and automate proactive monitoring by using Veeam ONE, a part of Veeam Availability Suite. Veeam ONE will arm you with a list of VM sprawl indicators and explain how you can pick up and configure a handy report kit to detect and eliminate VM sprawl threats in your VMware environment.

Read this FREE white paper and learn how to:

  • Identify “zombies”
  • Clean up garbage and orphaned snapshots
  • Establish a transparent system to get sprawl under control
  • And more!
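The white paper's focus is Veeam ONE's report kit; purely as an illustration of the underlying idea, a similar sprawl check can be sketched with VMware PowerCLI (the server name and the 30-day threshold below are illustrative assumptions, not taken from the paper):

```powershell
# Illustrative sketch using VMware PowerCLI (not Veeam ONE itself)
Connect-VIServer -Server "vcenter.example.com"

# Find snapshots older than 30 days -- a common VM sprawl indicator
Get-VM | Get-Snapshot |
    Where-Object { $_.Created -lt (Get-Date).AddDays(-30) } |
    Select-Object VM, Name, Created, SizeGB

# Find powered-off VMs that may be "zombies" still holding storage
Get-VM | Where-Object { $_.PowerState -eq 'PoweredOff' }
```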
Active Directory basics: Under the hood of Active Directory
Topics: veeam
Microsoft’s Active Directory (AD) offers IT system administrators a central way to manage user accounts and devices in an IT infrastructure network. Active Directory authenticates and authorizes users when they log onto devices and into applications, and allows them to use the settings and files across all devices in the network. Active Directory services are involved in multiple aspects of networking environments and enable interplay with other directories. Considering the important role AD plays in user data-management and security, it’s important to deploy it properly and consistently follow best practices.

Active Directory Basics is a tutorial that will help you address many AD management challenges. You’ll learn what really goes on under the Active Directory hood, including its integration with network services and the features that enable its many great benefits. This white paper also explains how administrators can make changes in AD to provide consistency across an environment.

In addition, the Active Directory Basics tutorial explains how to:

  • Log onto devices and into applications with the same username and password combination (or other optional authentication methods)
  • Use settings and files across all devices that are AD members
  • Remain productive on secondary AD-managed devices if the primary device is lost, defective or stolen
  • Follow best practices, with references for further reading
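A brief sketch of the kind of AD queries these topics imply, using the ActiveDirectory PowerShell module (the user name and the 90-day threshold are illustrative assumptions):

```powershell
# Requires RSAT's ActiveDirectory module on a domain-joined machine
Import-Module ActiveDirectory

# Look up a user account and its group memberships
Get-ADUser -Identity "jsmith" -Properties MemberOf

# Find computer accounts that have not logged on in 90 days
Get-ADComputer -Filter * -Properties LastLogonDate |
    Where-Object { $_.LastLogonDate -lt (Get-Date).AddDays(-90) }
```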
Hyper-V Replica in depth
Topics: veeam, hyper-v
When Windows Server 2012 hit the market in 2012, it introduced a new feature called Hyper-V Replica. In 2013, when Windows Server 2012 R2 was released, the Hyper-V Replica feature was improved. This white paper gives you an in-depth look at Hyper-V Replica: what it is, how it works, what capabilities it offers and specific use cases.

By the end of this white paper, you’ll know:

  • If this feature is right for your environment
  • Steps for successful implementation
  • Best practices and much more!
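As a rough sketch of the implementation steps such a deployment involves (the VM name, replica server and port below are placeholder assumptions, not the paper's own example):

```powershell
# Enable replication of a VM to a replica server using Kerberos over port 80
Enable-VMReplication -VMName "AppVM" -ReplicaServerName "replica01.contoso.com" `
    -ReplicaServerPort 80 -AuthenticationType Kerberos

# Kick off the initial copy, then check replication health
Start-VMInitialReplication -VMName "AppVM"
Measure-VMReplication -VMName "AppVM"
```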
How Software-Defined Storage Enhances Hyper-converged Storage
This paper describes how to conquer the challenges of using SANs in a virtual environment and why organizations are looking into hyper-converged systems that take advantage of Software-Defined Storage as a solution to provide reliable application performance and a highly available infrastructure.
One of the fundamental requirements for virtualizing applications is shared storage. Applications can move around to different servers as long as those servers have access to the storage with the application and its data. Typically, shared storage takes place over a storage network known as a SAN. However, SANs typically run into issues in a virtual environment, so organizations are currently looking for new options. Hyper-converged infrastructure is a solution that seems well-suited to address these issues.
 
By downloading this paper you will:
 
  • Identify the issues with running SANs in virtualized environments
  • Learn why Hyper-converged systems are ideal for solving performance issues
  • Learn why Hyper-converged systems are ideal for remote offices
  • Discover real world use-cases where DataCore's Hyper-converged Virtual SAN faced these issues
Hyper-converged Infrastructure: No-Nonsense Selection Criteria
This white paper helps you identify the key selection criteria for building a business savvy hyper-converged infrastructure model for your business based on cost, availability, fitness to purpose and performance. Also, it includes a checklist you can use to evaluate hyper-converged storage options.
Hyper-converged storage is the latest buzz phrase in storage. The exact meaning of hyper-converged storage varies depending on the vendor that one consults, with solutions varying widely with respect to their support for multiple hypervisor and workload types and their flexibility in terms of hardware componentry and topology.

Regardless of the definition that vendors ascribe to the term, the truth is that building a business-savvy hyper-converged infrastructure still comes down to two key requirements: selecting a combination of infrastructure products and services that best fit workload requirements, and selecting a hyper-converged model that can adapt and scale with changing storage demands without breaking available budgets.
 
Download this paper to:
  • Learn about hyper-converged storage and virtual SANs
  • Identify key criteria for selecting the right hyper-converged infrastructure
  • Obtain a checklist for evaluating options
DataCore Virtual SAN – A Deep Dive into Converged Storage
Topics: DataCore, storage, SAN
This white paper describes how DataCore’s Virtual SAN software can help you deploy a converged, flexible architecture to address painful challenges that exist today such as single points of failure, poor application performance, low storage efficiency and utilization, and high infrastructure costs.

DataCore Virtual SAN introduces the next evolution in Software-defined Storage (SDS) by creating high-performance and highly-available shared storage pools using the disks and flash storage in your servers. It addresses the requirements for fast and reliable access to storage across a cluster of servers at remote sites as well as in high-performance applications.

Download this white paper to learn about:

  • The technical aspects of DataCore’s Virtual SAN solution - a deep dive into converged storage
  • How DataCore Virtual SAN addresses IT challenges such as single points of failure, poor application performance, low storage efficiency and utilization, and high infrastructure costs
  • Possible use cases and benefits of DataCore’s Virtual SAN

Building a Highly Available Data Infrastructure
Topics: DataCore, storage, SAN, HA
This white paper outlines best practices for improving overall business application availability by building a highly available data infrastructure.
Regardless of whether you use a direct-attached storage array, a network-attached storage (NAS) appliance, or a storage area network (SAN) to host your data, if this data infrastructure is not designed for high availability, then the data it stores is not highly available; by extension, application availability is at risk, regardless of server clustering.

Download this paper to:
  • Learn how to develop a High Availability strategy for your applications
  • Identify the differences between Hardware and Software-defined infrastructures in terms of Availability
  • Learn how to build a Highly Available data infrastructure using Hyper-converged storage

The State of Software-Defined Storage (SDS) in 2015
Topics: DataCore, storage, SAN, SDS
For the fifth consecutive year, DataCore Software explored the impact of Software-Defined Storage (SDS) on organizations across the globe. The 2015 survey distills the expectations and experiences of 477 IT professionals that are currently using or evaluating SDS technology to solve critical data storage challenges. The results yield surprising insights from a cross-section of industries over a wide range of workloads. The survey was conducted in April 2015.

Why Parallel I/O & Moore's Law Enable Virtualization and SDDC to Achieve their Potential
Today’s demanding applications, especially within virtualized environments, require high performance from storage to keep up with the rate of data acquisition and the unpredictable demands of enterprise workloads. In a world that requires near-instant response times and increasingly faster access to data, the needs of business-critical tier 1 enterprise applications, such as SQL Server, Oracle and SAP databases, have been largely unmet.

The major bottleneck holding back the industry is I/O performance. Current systems still rely on device-level optimizations tied to specific disk and flash technologies because they lack software optimizations that can fully harness the latest advances in more powerful server system technologies, such as multicore architectures. Therefore, they have not been able to keep up with the pace of Moore’s Law.

Waiting on IO: The Straw That Broke Virtualization’s Back
Despite the increasing horsepower of modern multi-core processors and the promise of virtualization, we’re seeing relatively little progress in the amount of concurrent work they accomplish. That’s why we’re having to buy a lot more virtualized servers than we expected.

On closer examination, we find the root cause to be IO-starved virtual machines (VMs), especially for heavy online transactional processing (OLTP) apps, databases and mainstream IO-intensive workloads. Plenty of compute power is at their disposal, but servers have a tough time fielding inputs and outputs. This gives rise to an odd phenomenon of stalled virtualized apps while many processor cores remain idle.

So how exactly do we crank up IOs to keep up with the computational appetite while shaving costs? This can best be achieved by parallel IO technology designed to process IO across many cores simultaneously, thereby putting those idle CPUs to work. Such technology has been developed by DataCore Software, a long-time master of parallelism in the field of storage virtualization.

In this paper, we will discuss DataCore’s underlying parallel architecture, how it evolved over the years and how it results in a markedly different way to address the craving for IOPS (input/output operations per second) in a software-defined world.