Virtualization Technology News and Information
White Papers Search Results
Showing 1 - 16 of 67 white papers, page 1 of 5.
5 Fundamentals of Modern Data Protection
Some data protection software vendors will say that they are “agentless” because they can do an agentless backup. However, many of these vendors require agents for file-level restore, proper application backup, or to restore application data. My advice is to make sure that your data protection tool is able to address all backup and recovery scenarios without the need for an agent.
Legacy backup is costly, inefficient, and can force IT administrators to make risky compromises that impact critical business applications, data and resources. Read this NEW white paper to learn how Modern Data Protection capitalizes on the inherent benefits of virtualization to:
  • Increase your ability to meet RPOs and RTOs
  • Eliminate the need for complex and inefficient agents
  • Reduce operating costs and optimize resources
The Expert Guide to VMware Data Protection
Virtualization is a general term for simulating a physical entity in software. Many forms of virtualization can be found in a data center, including server, network and storage virtualization. Server virtualization in particular introduces its own set of unique terms and concepts that you may encounter.
Virtualization is the most disruptive technology of the decade. Virtualization-enabled data protection and disaster recovery is especially disruptive because it allows IT to do things dramatically better at a fraction of the cost of what it would be in a physical data center.

Chapter 1: An Introduction to VMware Virtualization

Chapter 2: Backup and Recovery Methodologies

Chapter 3: Data Recovery in Virtual Environments

Chapter 4: Choosing the Right Backup Solution for VMware
The Hands-on Guide: Understanding Hyper-V in Windows Server 2012
Topics: Hyper-V, veeam
This chapter is designed to get you started quickly with Hyper-V 3.0. It starts with a discussion of the hardware requirements for Hyper-V 3.0, then explains a basic Hyper-V deployment, followed by an upgrade from Hyper-V 2.0 to Hyper-V 3.0. The chapter concludes with a demonstration of migrating virtual machines from Hyper-V 2.0 to Hyper-V 3.0.
The Hands-on Guide: Understanding Hyper-V in Windows Server 2012 gives you simple step-by-step instructions to help you perform Hyper-V-related tasks like a seasoned expert. You will learn how to:
  • Build a clustered Hyper-V deployment
  • Manage Hyper-V through PowerShell
  • Create virtual machine replicas
  • Transition from a legacy Hyper-V environment
  • and more
Blueprint for Delivering IT-as-a-Service - 9 Steps for Success
You’ve got the materials (your constantly changing IT infrastructure). You’ve got the work order (your boss made that perfectly clear). But now what? Delivering IT-as-a-service has never been more challenging than it is today...virtualization, private, public, and hybrid cloud computing are drastically changing how IT needs to provide service delivery and assurance. You know exactly what you need to do, the big question is HOW to do it. If only there was some kind of blueprint for this…

Based on our experience working with Zenoss customers who have built highly virtualized and cloud infrastructures, we know what it takes to operationalize IT-as-a-Service in today’s ever-changing technical environment. We’ve put together a guided list of questions in this eBook around the following topics to help you build your blueprint for getting the job done, and done right:
  • Unified Operations
  • Maximum Automation
  • Model Driven
  • Service Oriented
  • Multi-Tenant
  • Horizontal Scale
  • Open Extensibility
  • Subscription
  • Extreme Service
5 IT Questions Your CEO Could Ask You Tomorrow
Topics: Zenoss
5 IT questions your CEO could ask you tomorrow that could make or break your career!

Picture this:  You round the corner and your CEO or another executive ambushes you with a question about an IT issue that’s keeping him up at night…

Business leaders and executives often need quick answers from IT, especially when there’s an issue underway and the ripple effects are beginning to spread.  How you respond in these moments could make or break your career.  (Here’s a hint:  they won’t want technology-focused answers.)

This eBook will prepare you for those moments, showing you how to think about IT ‘as-a-Service’ and communicate in the language your leaders and executives understand, and the terms they care about.

You'll learn:

  • The 5 most common questions that executives could spring on you
  • How to provide the business-focused answers they want and need
  • Examples of the approach in practice across multiple industries
  • How Zenoss Service Dynamics can help with all of this

Don’t be caught off guard the next time you get put on the spot - download this eBook today!
Application Response Time for Virtual Operations
For applications running in virtualized, distributed and shared environments, it will no longer work to infer the performance of an application by looking at various resource utilization statistics. Rather, it is essential to define application performance by measuring the response time and throughput of every application in production. This paper makes the case for modernizing application performance management to suit virtualized and cloud-based environments.

Massive changes are occurring to how applications are built and how they are deployed and run. The benefits of these changes are dramatically increased responsiveness to the business (business agility), increased operational flexibility, and reduced operating costs.

The environments onto which these applications are deployed are also undergoing a fundamental change. Virtualized environments offer increased operational agility, which translates into a more responsive IT Operations organization. Cloud computing offers application owners a completely outsourced alternative to internal data center execution environments. IT organizations are in turn responding to the public cloud with IT-as-a-Service (ITaaS) initiatives.

For applications running in virtualized, distributed and shared environments, it will no longer work to infer the “performance” of an application by looking at various resource utilization statistics. Rather, it will become essential to define application performance as response time – and to directly measure the response time and throughput of every application in production. This paper makes the case for how application performance management needs to be modernized to suit these new virtualized and cloud-based environments.
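The shift the paper argues for, from inferring performance out of utilization statistics to directly measuring response time and throughput, can be sketched in a few lines. The class below is a hypothetical illustration invented for this example, not any vendor's product:

```python
import time
from statistics import median


class ResponseTimeMonitor:
    """Records per-request response times so performance is measured
    directly, rather than inferred from resource utilization."""

    def __init__(self):
        self.samples = []  # elapsed seconds, one entry per request

    def measure(self, func, *args, **kwargs):
        # Wrap a request handler and record how long it actually took.
        start = time.perf_counter()
        result = func(*args, **kwargs)
        self.samples.append(time.perf_counter() - start)
        return result

    def median_response_time(self):
        return median(self.samples)

    def throughput(self, window_seconds):
        # Requests completed per second over the observed window.
        return len(self.samples) / window_seconds
```

In production the same idea is usually applied at the request boundary (a middleware or proxy), so every transaction is timed without touching application code.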

CIO Guide to Virtual Server Data Protection
Server virtualization is changing the face of the modern data center. CIOs are looking for ways to virtualize more applications, faster, across the IT spectrum.
Server virtualization is changing the face of the modern data center. CIOs are looking for ways to virtualize more applications, faster across the IT spectrum. Selecting the right data protection solution that understands the new virtual environment is a critical success factor in the journey to cloud-based infrastructure. This guide looks at the key questions CIOs should be asking to ensure a successful virtual server data protection solution.
Making a Business Case for Unified IT Monitoring
Unified monitoring solutions like Zenoss offer a cost-effective alternative for those seeking to rein-in monitoring inefficiencies. By establishing a central nerve center to collect data from multiple tools and managed resources, IT groups can gain visibility into the end-to-end availability and performance of their infrastructure. This helps simplify operational processes and reduce the risk of service disruption for the enterprise.

In large IT organizations, monitoring tool sprawl has become so commonplace that it is not unusual for administrators to be monitoring 10 to 50 solutions across various departments.

This paper can help you make an effective business case for moving to a unified monitoring solution. Key considerations include:

•    The direct costs associated with moving to a unified monitoring tool
•    The savings potential of improved IT operations through productivity and efficiency
•    The business impact of monitoring tools in preventing and reducing both downtime and service degradation

Download the paper now!

EMA Reviews - Software-Defined Infrastructure Control:  The Key to Empowering the SDDC
Topics: SDDC, cirba, EMA
As enterprises strive to achieve more effective, efficient, and agile operations, they are increasingly adopting processes that enable a Software-Defined Data Center (SDDC). However, in many cases, increased complexity in current IT environments, processes, and cultures has substantially impeded the organization's ability to complete this transition. Software-Defined Infrastructure Control (SDIC) provides the critical infrastructure visibility and intelligence necessary to optimally align data center capabilities and resources with application demands and their specific requirements.
Workload Routing & Reservation:  5 Reasons Why It Is Critical To Virtual & Cloud Operation
Topics: cirba
When observing the current generation virtual and internal cloud environments, it appears that the primary planning and management tasks have also made the transition to purpose-built software solutions. But when you dig in a little deeper, there is one area that is still shamefully behind: the mechanism to determine what infrastructure to host workloads on is still in the stone ages. The ability to understand the complete set of deployed infrastructure, quantify and qualify the hosting capabilities of each environment, and to make informed decisions regarding where to host new applications and workloads, is still the realm of spreadsheets and best guesses.

This paper identifies five reasons why the entire process of workload routing and capacity reservation must make the transition to become a core, automated component of IT planning and management.

Optimizing Capacity Forecasting Processes with a Capacity Reservations System for IT
Virtually every area of human endeavour that involves the use of shared resources relies on a reservation system to manage the booking of these assets. Hotels, airlines, rental companies and even the smallest of restaurants rely on reservation systems to optimize the use of their assets and balance customer satisfaction with profitability. Or, as economists would say, strike a balance between supply and demand.

So how can a modern IT environment expect to operate effectively without having a functioning capacity reservation system? The simple answer is that it can't. With the rise of cloud computing, where resources are shared on a larger scale and capacity is commoditized, modeling future bookings and proper forecasting of demand is critical to the survival of IT. Not having proper systems in place leaves forecasting to trending and guesswork - a dangerous proposition that usually results in over-provisioning and excessive capacity.

Download this paper to learn how to manage the demand pipeline for new workload placements in order to improve the accuracy of capacity forecasting and increase agility in response to new workload placement requests.
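As a rough illustration of the booking analogy, a capacity reservation check reduces to comparing the committed demand pipeline against total capacity before accepting a new placement. The class below is a hypothetical sketch, not a description of any vendor's reservation system:

```python
class CapacityReservations:
    """Minimal sketch of a capacity reservation system: future workload
    placements are booked against finite capacity, so forecasts reflect
    the demand pipeline instead of trending and guesswork."""

    def __init__(self, total_capacity):
        self.total_capacity = total_capacity
        self.reservations = {}  # workload name -> reserved units

    def reserve(self, name, units):
        # Refuse a booking that would overcommit the environment.
        if self.committed() + units > self.total_capacity:
            return False
        self.reservations[name] = units
        return True

    def committed(self):
        return sum(self.reservations.values())

    def forecast_headroom(self):
        # Capacity still available once all booked demand lands.
        return self.total_capacity - self.committed()
```

The point of the sketch is the forecast: headroom is computed from booked reservations, not from extrapolating historical usage curves.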

Server Capacity Defrag
This is not a paper on disk defrag. Although conceptually similar, it describes an entirely new approach to server optimization that performs a similar operation on the compute, memory and IO capacity of entire virtual and cloud environments.

Capacity defragmentation is a concept that is becoming increasingly important in the management of modern data centers. As virtualization increases its penetration into production environments, and as public and private clouds move to the forefront of the IT mindset, the ability to leverage this newfound agility while at the same time driving high efficiency (and low risk) is a real game changer. This white paper outlines how managers of IT environments can make the transition from old-school capacity management to new-school efficiency management.
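The defrag analogy can be made concrete with a classic bin-packing heuristic: sort workload demands and repack them onto as few hosts as possible, leaving whole hosts' worth of contiguous free capacity. This is an illustrative sketch using first-fit decreasing, not the method the paper itself describes:

```python
def defragment(workloads, host_capacity):
    """Repack workload demands (e.g. GB of memory) onto as few uniform
    hosts as possible using first-fit decreasing. Illustrative only:
    real placement must also weigh CPU, IO, affinity and risk policies.

    Returns the remaining free capacity on each host used."""
    hosts = []  # each entry is the free capacity left on one host
    for demand in sorted(workloads, reverse=True):
        for i, free in enumerate(hosts):
            if demand <= free:
                hosts[i] -= demand  # fits in existing fragmentation
                break
        else:
            hosts.append(host_capacity - demand)  # bring a new host online
    return hosts
```

For example, six workloads demanding 28 units in total can be packed onto two hosts of 16 units each, instead of the three or four hosts they might occupy after organic growth.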

The Path to Hybrid Cloud: Intelligent Bursting To Amazon Web Services & Microsoft Azure
In this whitepaper you will learn: the challenges in implementing an effective hybrid cloud; how key vendors are addressing these challenges; and how to answer what, when and where to burst.

The hybrid cloud has been heralded as a promising IT operational model enabling enterprises to maintain security and control over the infrastructure on which their applications run. At the same time, it promises to maximize ROI from their local data center and leverage public cloud infrastructure for an occasional demand spike. However, these benefits don’t come without challenges.

In this whitepaper you will learn:
•    The challenges in implementing an effective hybrid cloud
•    How key vendors are addressing these challenges
•    How to answer what, when and where to burst
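The when and where questions can be caricatured as a simple policy: burst once sustained local utilization crosses a threshold, then route the spike to the cheapest reachable target. The function below is a hypothetical sketch; the field names, target names and threshold are invented for the example and are not vendor guidance:

```python
def burst_decision(local_utilization, burst_threshold, candidates):
    """Return the name of the public cloud target to burst to,
    or None if bursting is not (yet) warranted or possible."""
    # "When": only burst once the local data center is genuinely saturated.
    if local_utilization < burst_threshold:
        return None
    # "Where": among reachable targets, the cheapest one wins.
    eligible = [c for c in candidates if c["available"]]
    if not eligible:
        return None
    return min(eligible, key=lambda c: c["cost_per_hour"])["name"]
```

A real implementation would also answer "what" to burst, typically by preferring stateless, latency-tolerant workloads over data-gravity-bound ones.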

PowerShell for newbies: Getting started with PowerShell 4.0
Topics: veeam, powershell
This white paper is a Windows PowerShell guide for beginners. If you are an IT Professional with little-to-no experience with PowerShell and want to learn more about this powerful scripting framework, this quick-start guide is for you. With the PowerShell engine, you can automate daily management of Windows-based servers, applications and platforms.

This white paper is a Windows PowerShell guide for beginners. If you are an IT Professional with little-to-no experience with PowerShell and want to learn more about this powerful scripting framework, this quick-start guide is for you. With the PowerShell engine, you can automate daily management of Windows-based servers, applications and platforms. This e-book provides the fundamentals every PowerShell administrator needs to know. The getting started guide will give you a crash course on PowerShell essential terms, concepts and commands and help you quickly understand PowerShell basics.

You will also learn about:

  • What is PowerShell?
  • Using PowerShell Help
  • PowerShell Terminology
  • The PowerShell Paradigm
  • And more!

This white paper focuses on PowerShell 4.0; however, you can be sure that all the basics provided are relevant to earlier versions as well. For those who are ready to take the next steps in learning PowerShell and looking for more information on the topic, this PDF contains a list of helpful resources.

Active Directory basics: Under the hood of Active Directory
Topics: veeam
Microsoft’s Active Directory (AD) offers IT system administrators a central way to manage user accounts and devices in an IT infrastructure network. Active Directory authenticates and authorizes users when they log onto devices and into applications, and allows them to use the settings and files across all devices in the network. Active Directory services are involved in multiple aspects of networking environments and enable interplay with other directories. Considering the important role AD plays in user data-management and security, it’s important to deploy it properly and consistently follow best practices.

Active Directory Basics is a tutorial that will help you address many AD management challenges. You’ll learn what really goes on under the Active Directory hood, including its integration with network services and the features that enable its many great benefits. This white paper also explains how administrators can make changes in AD to provide consistency across an environment.

In addition, the Active Directory Basics tutorial explains how to:

  • Log onto devices and into applications with the same username and password combination (plus other optional authentication methods)
  • Use settings and files across all devices that are AD members
  • Remain productive on secondary AD-managed devices if the primary device is lost, defective or stolen
  • Follow best practices, with references for further reading
DataCore Virtual SAN – A Deep Dive into Converged Storage
Topics: DataCore, storage, SAN
This white paper describes how DataCore’s Virtual SAN software can help you deploy a converged, flexible architecture to address painful challenges that exist today such as single points of failure, poor application performance, low storage efficiency and utilization, and high infrastructure costs.

DataCore Virtual SAN introduces the next evolution in Software-defined Storage (SDS) by creating high-performance and highly-available shared storage pools using the disks and flash storage in your servers. It addresses the requirements for fast and reliable access to storage across a cluster of servers at remote sites as well as in high-performance applications.

Download this white paper to learn about:

•    The technical aspects of DataCore’s Virtual SAN solution - a deep dive into converged storage
•    How DataCore Virtual SAN addresses IT challenges such as single points of failure, poor application performance, low storage efficiency and utilization, and high infrastructure costs
•    Possible use cases and benefits of DataCore’s Virtual SAN