Virtualization Technology News and Information
Application Response Time for Virtual Operations

Massive changes are occurring to how applications are built and how they are deployed and run. The benefits of these changes are dramatically increased responsiveness to the business (business agility), increased operational flexibility, and reduced operating costs.

The environments onto which these applications are deployed are also undergoing a fundamental change. Virtualized environments offer increased operational agility, which translates into a more responsive IT Operations organization. Cloud computing offers application owners a completely outsourced alternative to internal data center execution environments. IT organizations are in turn responding to the public cloud with IT-as-a-Service (ITaaS) initiatives.

For applications running in virtualized, distributed and shared environments, it no longer works to infer the “performance” of an application from resource utilization statistics. Rather, it becomes essential to define application performance as response time – and to directly measure the response time and throughput of every application in production. This paper makes the case for modernizing application performance management to suit virtualized and cloud-based environments.
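The measurement-first approach the paper argues for can be sketched in a few lines. This is an illustrative stand-in, not anything from the paper: response time is sampled per call, and throughput is derived from elapsed wall-clock time.

```python
import time
import statistics

def measure(operation, samples=50):
    """Time repeated calls to an operation and summarize its response time."""
    latencies = []
    start = time.perf_counter()
    for _ in range(samples):
        t0 = time.perf_counter()
        operation()  # the application call being measured
        latencies.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start
    return {
        "throughput_per_s": samples / elapsed,
        "median_s": statistics.median(latencies),
        "p95_s": sorted(latencies)[int(samples * 0.95) - 1],
    }

# Stand-in for a real application request (an assumption for illustration).
stats = measure(lambda: time.sleep(0.002))
print(stats)
```

The point of the sketch is that the numbers reported are the application's own response times and throughput, not CPU or disk utilization of the host it happens to run on.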

Five Fundamentals of Virtual Server Protection
The benefits of server virtualization are compelling and are driving the transition to large-scale virtual server deployments. From cost savings realized through server consolidation to the business flexibility and agility inherent in emerging private and public cloud architectures, virtualization technologies are rapidly becoming a cornerstone of the modern data center. With Commvault's software, you can take full advantage of the developments in virtualization technology and enable private and public cloud data centers while continuing to meet all your data management, protection and retention needs. This whitepaper outlines the top 5 challenges to overcome in order to take advantage of the benefits of virtualization for your organization.
Using VM Archiving to Solve VM Sprawl
Topics: commvault, sprawl
This Commvault whitepaper discusses how archiving virtual machines can mitigate VM sprawl with a comprehensive approach to VM lifecycle management.
Making a Business Case for Unified IT Monitoring

In large IT organizations, monitoring tool sprawl has become so commonplace that it is not unusual for administrators to be monitoring 10 to 50 solutions across various departments.

Unified monitoring solutions like Zenoss offer a cost-effective alternative for those seeking to rein-in monitoring inefficiencies. By establishing a central nerve center to collect data from multiple tools and managed resources, IT groups can gain visibility into the end-to-end availability and performance of their infrastructure. This helps simplify operational processes and reduce the risk of service disruption for the enterprise.

This paper can help you make an effective business case for moving to a unified monitoring solution. Key considerations include:

•    The direct costs associated with moving to a unified monitoring tool
•    The savings potential of improved IT operations through productivity and efficiency
•    The business impact of monitoring tools in preventing and reducing both downtime and service degradation
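The structure of such a business case can be sketched as simple arithmetic. All figures below are hypothetical placeholders, not numbers from the paper; they only illustrate how direct costs and administration time combine.

```python
# Hypothetical inputs for a back-of-envelope comparison (illustration only).
tools = 12                    # separate monitoring tools currently in use
license_per_tool = 8_000      # annual license cost per tool
hours_per_tool_month = 10     # admin hours spent per tool per month
rate = 75                     # loaded hourly cost of an administrator

# Current state: per-tool licenses plus per-tool administration time.
current = tools * license_per_tool + tools * hours_per_tool_month * 12 * rate

# Unified state: one platform license plus consolidated administration time
# (assumed 40 hours/month across the whole environment).
unified = 60_000 + 40 * 12 * rate

print(f"current: ${current:,}/yr, unified: ${unified:,}/yr, "
      f"savings: ${current - unified:,}/yr")
```

Downtime and service-degradation costs, the third consideration above, would be added to each side in the same way once estimated.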

Download the paper now!

PowerShell for newbies: Getting started with PowerShell 4.0
Topics: veeam, powershell

This white paper is a Windows PowerShell guide for beginners. If you are an IT Professional with little-to-no experience with PowerShell and want to learn more about this powerful scripting framework, this quick-start guide is for you. With the PowerShell engine, you can automate daily management of Windows-based servers, applications and platforms. This e-book provides the fundamentals every PowerShell administrator needs to know. The getting started guide will give you a crash course on PowerShell essential terms, concepts and commands and help you quickly understand PowerShell basics.

You will also learn about:

  • What is PowerShell?
  • Using PowerShell Help
  • PowerShell Terminology
  • The PowerShell Paradigm
  • And more!

This white paper focuses on PowerShell 4.0; however, you can be sure that all the basics provided are relevant to earlier versions as well. For those who are ready to take the next steps in learning PowerShell and looking for more information on the topic, this PDF contains a list of helpful resources.

How to avoid VM sprawl and improve resource utilization in VMware and Veeam backup infrastructures

You're facing VM sprawl if you're experiencing an uncontrollable increase of unused and unneeded objects in your virtual VMware environment. VM sprawl often occurs in virtual infrastructures because they expand much faster than physical ones, which can make management a challenge. The growing number of virtualized workloads and applications generates “virtual junk,” causing the VM sprawl issue and eventually putting you at risk of running out of resources.

Getting virtual sprawl under control will help you reallocate and better provision your existing storage, CPU and memory resources between critical production workloads and high-performance, virtualized applications. With proper resource management, you can save money on extra hardware.

This white paper examines how you can avoid potential VM sprawl risks and automate proactive monitoring by using Veeam ONE, a part of Veeam Availability Suite. It arms you with a list of VM sprawl indicators and explains how you can pick and configure a handy report kit to detect and eliminate VM sprawl threats in your VMware environment.

Read this FREE white paper and learn how to:

  • Identify “zombies”
  • Clean up garbage and orphaned snapshots
  • Establish a transparent system to get sprawl under control
  • And more!
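The indicator idea behind "zombie" detection can be sketched roughly as follows. This is not Veeam ONE's actual logic; the inventory fields and thresholds are assumptions for illustration.

```python
from datetime import datetime, timedelta

def find_sprawl_candidates(vms, idle_days=30, cpu_threshold=2.0, now=None):
    """Flag likely 'zombie' VMs: powered off and long idle, or powered on
    but showing near-zero CPU activity."""
    now = now or datetime.utcnow()
    zombies = []
    for vm in vms:
        idle = (now - vm["last_activity"]) > timedelta(days=idle_days)
        if vm["power_state"] == "poweredOff" and idle:
            zombies.append((vm["name"], "powered off and idle"))
        elif (vm["power_state"] == "poweredOn"
              and vm["avg_cpu_pct"] < cpu_threshold and idle):
            zombies.append((vm["name"], "running but unused"))
    return zombies

inventory = [  # hypothetical inventory export, not a real API response
    {"name": "web-01", "power_state": "poweredOn", "avg_cpu_pct": 35.0,
     "last_activity": datetime(2016, 6, 1)},
    {"name": "test-old", "power_state": "poweredOff", "avg_cpu_pct": 0.0,
     "last_activity": datetime(2015, 1, 10)},
]
print(find_sprawl_candidates(inventory, now=datetime(2016, 7, 1)))
```

A real tool would draw the same signals (power state, activity, resource usage) from the hypervisor's inventory and performance data rather than a hand-built list.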
Active Directory basics: Under the hood of Active Directory
Topics: veeam

Microsoft’s Active Directory (AD) offers IT system administrators a central way to manage user accounts and devices in an IT infrastructure network. Active Directory authenticates and authorizes users when they log onto devices and into applications, and allows them to use the settings and files across all devices in the network. Active Directory services are involved in multiple aspects of networking environments and enable interplay with other directories. Considering the important role AD plays in user data-management and security, it’s important to deploy it properly and consistently follow best practices.

Active Directory Basics is a tutorial that will help you address many AD management challenges. You’ll learn what really goes on under the Active Directory hood, including its integration with network services and the features that enable its many great benefits. This white paper also explains how administrators can make changes in AD to provide consistency across an environment.

In addition, the Active Directory Basics tutorial explains how to:

  • Log onto devices and into applications with the same username and password combination (or other optional authentication methods)
  • Use settings and files across all devices that are AD members
  • Remain productive on secondary AD-managed devices if the primary device is lost, defective or stolen
  • Follow best practices, with references for further reading
How Software-Defined Storage Enhances Hyper-converged Storage
This paper describes how to conquer the challenges of using SANs in a virtual environment and why organizations are looking into hyper-converged systems that take advantage of Software-Defined Storage as a solution to provide reliable application performance and a highly available infrastructure.
One of the fundamental requirements for virtualizing applications is shared storage. Applications can move around to different servers as long as those servers have access to the storage with the application and its data. Typically, shared storage takes place over a storage network known as a SAN. However, SANs typically run into issues in a virtual environment, so organizations are currently looking for new options. Hyper-converged infrastructure is a solution that seems well-suited to address these issues.
 
By downloading this paper you will:
 
  • Identify the issues with running SANs in virtualized environments
  • Learn why Hyper-converged systems are ideal for solving performance issues
  • Learn why Hyper-converged systems are ideal for remote offices
  • Discover real world use-cases where DataCore's Hyper-converged Virtual SAN faced these issues
Hyper-converged Infrastructure: No-Nonsense Selection Criteria
This white paper helps you identify the key selection criteria for building a business savvy hyper-converged infrastructure model for your business based on cost, availability, fitness to purpose and performance. Also, it includes a checklist you can use to evaluate hyper-converged storage options.
Hyper-converged storage is the latest buzz phrase in storage. The exact meaning of hyper-converged storage varies depending on the vendor that one consults, with solutions varying widely with respect to their support for multiple hypervisor and workload types and their flexibility in terms of hardware componentry and topology.
 
Regardless of the definition that vendors ascribe to the term, the truth is that building a business-savvy hyper-converged infrastructure still comes down to two key requirements: selecting a combination of infrastructure products and services that best fit workload requirements, and selecting a hyper-converged model that can adapt and scale with changing storage demands without breaking available budgets.
 
Download this paper to:
  • Learn about hyper-converged storage and virtual SANs
  • Identify key criteria for selecting the right hyper-converged infrastructure
  • Obtain a checklist for evaluating options
DataCore Virtual SAN – A Deep Dive into Converged Storage
Topics: DataCore, storage, SAN
This white paper describes how DataCore’s Virtual SAN software can help you deploy a converged, flexible architecture to address painful challenges that exist today such as single points of failure, poor application performance, low storage efficiency and utilization, and high infrastructure costs.

DataCore Virtual SAN introduces the next evolution in Software-defined Storage (SDS) by creating high-performance and highly-available shared storage pools using the disks and flash storage in your servers. It addresses the requirements for fast and reliable access to storage across a cluster of servers at remote sites as well as in high-performance applications.

Download this white paper to learn about:

•    The technical aspects of DataCore’s Virtual SAN solution - a deep dive into converged storage
•    How DataCore Virtual SAN addresses IT challenges such as single points of failure, poor application performance, low storage efficiency and utilization, and high infrastructure costs
•    Possible use cases and benefits of DataCore’s Virtual SAN

Scale Computing’s hyperconverged system matches the needs of the SMB and mid-market
Scale Computing HC3 is cost effective, scalable and designed for installation and management by the IT generalist
Everyone has heard the buzz about hyper-converged systems – appliances with compute, storage and virtualization infrastructure built in. Hyper-converged infrastructure systems are an extension of infrastructure convergence – the combination of compute, storage and networking resources in one compact box – that promises simplification by consolidating resources onto a commodity x86 server platform.
Boone County Health Center Runs Faster with Infinio
Boone County Health Center’s IT team needed a solution to improve the response times of virtual desktops during their peak times of morning usage when most employees log on for the day. Employees access electronic medical records (EMR), business reports, financial data, email and other essential applications required to manage daily operations and provide optimum patient care. Some medical staff and administrators occasionally log in from their homes on personal devices such as laptops or iPads. The Health Center initially considered purchasing an add-on all-flash array for the VDI to help eliminate slow response periods during boot storms. However, before making this type of investment, the Center wanted to explore other alternative solutions.
Masergy accelerates VDI and storage performance with Infinio
To support its global users, Masergy needed to accelerate its virtual desktop infrastructure (VDI) and was unconvinced that spending budget on solid-state drive (SSD) solutions would work. The team was investigating SSD solutions and options from SanDisk, VMware and Dell, as well as all-flash arrays, when it discovered Infinio at VMworld 2014. Unlike the solutions Masergy considered previously, the simplicity of the Infinio Accelerator and low price point caught the Masergy team’s attention. Fewer than six months later, Masergy’s Infinio installation was under way. Infinio provides an alternative to expensive, hardware-based solutions to address VDI performance, which is what Masergy wanted to improve.
Why Parallel I/O & Moore's Law Enable Virtualization and SDDC to Achieve their Potential

Today’s demanding applications, especially within virtualized environments, require high performance from storage to keep up with the rate of data acquisition and unpredictable demands of enterprise workloads. In a world that requires near instant response times and increasingly faster access to data, the needs of business-critical tier 1 enterprise applications, such as databases including SQL, Oracle and SAP, have been largely unmet. 

The major bottleneck holding back the industry is I/O performance. Current systems still rely on device-level optimizations tied to specific disk and flash technologies, and lack software optimizations that can fully harness the latest advances in more powerful server system technologies such as multicore architectures. As a result, they have not been able to keep up with the pace of Moore’s Law.

Waiting on IO: The Straw That Broke Virtualization’s Back
Despite the increasing horsepower of modern multi-core processors and the promise of virtualization, we’re seeing relatively little progress in the amount of concurrent work they accomplish. That’s why we’re having to buy a lot more virtualized servers than we expected.

On closer examination, we find the root cause to be IO-starved virtual machines (VMs), especially for heavy online transactional processing (OLTP) apps, databases and mainstream IO-intensive workloads. Plenty of compute power is at their disposal, but servers have a tough time fielding inputs and outputs. This gives rise to an odd phenomenon of stalled virtualized apps while many processor cores remain idle.

So how exactly do we crank up IOs to keep up with the computational appetite while shaving costs? This can best be achieved by parallel IO technology designed to process IO across many cores simultaneously, thereby putting those idle CPUs to work. Such technology has been developed by DataCore Software, a long-time master of parallelism in the field of storage virtualization.
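The core idea can be illustrated with a loose analogy in user-space code. This is not DataCore's implementation, which operates inside the storage stack; it only shows why keeping many I/Os in flight across workers beats issuing them one at a time while cores sit idle.

```python
import time
from concurrent.futures import ThreadPoolExecutor

def fake_io(_):
    """Stand-in for one storage I/O: the thread blocks, the CPU is free."""
    time.sleep(0.01)

def run(n_requests, workers):
    """Issue n_requests blocking I/Os with the given number of workers."""
    start = time.perf_counter()
    with ThreadPoolExecutor(max_workers=workers) as pool:
        list(pool.map(fake_io, range(n_requests)))
    return time.perf_counter() - start

serial = run(40, workers=1)    # one I/O in flight at a time
parallel = run(40, workers=8)  # many I/Os overlapped across workers
print(f"serial: {serial:.2f}s, parallel: {parallel:.2f}s")
```

With 8 workers, total time approaches the duration of the slowest batch rather than the sum of all 40 waits, which is the same effect parallel I/O processing aims for at the storage layer.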

In this paper, we will discuss DataCore’s underlying parallel architecture, how it evolved over the years and how it results in a markedly different way to address the craving for IOPS (input/output operations per second) in a software-defined world.

Citrix AppDNA and FlexApp: Application Compatibility Solution Analysis
Desktop computing has rapidly evolved over the last 10 years. Once defined as physical PCs, Windows desktop environments now include everything from virtual to shared hosted (RDSH), to cloud based. With these changes, the enterprise application landscape has also changed drastically over the last few years.

This whitepaper provides an overview of Citrix AppDNA with Liquidware Labs FlexApp.