Virtualization Technology News and Information
White Papers Search Results
Showing 1 - 16 of 25 white papers, page 1 of 2.
5 Fundamentals of Modern Data Protection
Some data protection software vendors will say that they are “agentless” because they can do an agentless backup. However, many of these vendors require agents for file-level restore, proper application backup, or to restore application data. My advice is to make sure that your data protection tool is able to address all backup and recovery scenarios without the need for an agent.
Legacy backup is costly, inefficient, and can force IT administrators to make risky compromises that impact critical business applications, data and resources. Read this NEW white paper to learn how Modern Data Protection capitalizes on the inherent benefits of virtualization to:
  • Increase your ability to meet RPOs and RTOs
  • Eliminate the need for complex and inefficient agents
  • Reduce operating costs and optimize resources
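To make the first bullet concrete: RPO caps how much data you can lose (the age of the newest recovery point), while RTO caps how long restoring service may take. A minimal sketch of that check, using hypothetical schedule numbers, might look like:

```python
from datetime import timedelta

def meets_objectives(backup_interval: timedelta, restore_time: timedelta,
                     rpo: timedelta, rto: timedelta) -> bool:
    """Worst-case data loss equals the interval between recovery points;
    worst-case downtime is the time needed to restore from the last one."""
    return backup_interval <= rpo and restore_time <= rto

# Hypothetical example: nightly backups cannot meet a 1-hour RPO,
# but 15-minute snapshot intervals can.
nightly = meets_objectives(timedelta(hours=24), timedelta(hours=2),
                           rpo=timedelta(hours=1), rto=timedelta(hours=4))
snapshots = meets_objectives(timedelta(minutes=15), timedelta(hours=2),
                             rpo=timedelta(hours=1), rto=timedelta(hours=4))
print(nightly, snapshots)  # False True
```

The numbers here are invented for illustration; real objectives come from the business impact of each application.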
The Expert Guide to VMware Data Protection
Virtualization is a very general term for simulating a physical entity in software. Many different forms of virtualization may be found in a data center, including server, network and storage virtualization. Server virtualization in particular brings with it many unique terms and concepts, which this guide explains.
Virtualization is the most disruptive technology of the decade. Virtualization-enabled data protection and disaster recovery is especially disruptive because it allows IT to do things dramatically better at a fraction of the cost of what it would be in a physical data center.

Chapter 1: An Introduction to VMware Virtualization

Chapter 2: Backup and Recovery Methodologies

Chapter 3: Data Recovery in Virtual Environments

Chapter 4: Choosing the Right Backup Solution for VMware
CIO Guide to Virtual Server Data Protection
Server virtualization is changing the face of the modern data center. CIOs are looking for ways to virtualize more applications, faster across the IT spectrum. Selecting the right data protection solution that understands the new virtual environment is a critical success factor in the journey to cloud-based infrastructure. This guide looks at the key questions CIOs should be asking to ensure a successful virtual server data protection solution.
Five Fundamentals of Virtual Server Protection
The benefits of server virtualization are compelling and are driving the transition to large-scale virtual server deployments. From the cost savings of server consolidation to the business flexibility and agility inherent in emerging private and public cloud architectures, virtualization technologies are rapidly becoming a cornerstone of the modern data center. With Commvault's software, you can take full advantage of developments in virtualization technology and enable private and public cloud data centers while continuing to meet all your data management, protection and retention needs. This whitepaper outlines the top five challenges to overcome in order to take advantage of the benefits of virtualization for your organization.
Server Capacity Defrag
This is not a paper on disk defrag. Although conceptually similar, it describes an entirely new approach to server optimization that performs a similar operation on the compute, memory and IO capacity of entire virtual and cloud environments.

Capacity defragmentation is a concept that is becoming increasingly important in the management of modern data centers. As virtualization increases its penetration into production environments, and as public and private clouds move to the forefront of the IT mindset, the ability to leverage this newly-found agility while at the same driving high efficiency (and low risk) is a real game changer. This white paper outlines how managers of IT environments make the transition from old-school capacity management to new-school efficiency management.
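As an illustration of what "defragmenting" capacity means, consider stranded capacity: VMs scattered so that no single host has room for a new workload even though the cluster as a whole does. A toy first-fit-decreasing repack (purely illustrative; the paper does not disclose an actual placement algorithm) shows how consolidation recovers contiguous free capacity:

```python
def repack(vms, host_capacity, num_hosts):
    """Greedy first-fit-decreasing placement of VM demands onto hosts.
    Returns per-host load lists, or None if the demands do not fit."""
    hosts = [[] for _ in range(num_hosts)]
    for demand in sorted(vms, reverse=True):
        for host in hosts:
            if sum(host) + demand <= host_capacity:
                host.append(demand)
                break
        else:
            return None
    return hosts

# Hypothetical fragmented layout: loads 6, 5, 5, 4 spread across four
# hosts of capacity 10 leave no host with a large contiguous gap.
# Repacking concentrates them onto two hosts, freeing two whole hosts.
layout = repack([6, 5, 5, 4], host_capacity=10, num_hosts=4)
free = [10 - sum(h) for h in layout]
print(free)  # [0, 0, 10, 10]
```

Real capacity management must weigh memory, IO and headroom simultaneously, but the one-dimensional sketch captures the core idea of the defrag analogy.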

Scale Computing’s hyperconverged system matches the needs of the SMB and mid-market
Scale Computing HC3 is cost effective, scalable and designed for installation and management by the IT generalist
Everyone has heard the buzz about hyper-converged systems – appliances with compute, storage and virtualization infrastructure built in. Hyper-converged infrastructure systems are an extension of infrastructure convergence – the combination of compute, storage and networking resources in one compact box – that promise simplification by consolidating resources onto a commodity x86 server platform.
Why Parallel I/O & Moore's Law Enable Virtualization and SDDC to Achieve their Potential
Today’s demanding applications, especially within virtualized environments, require high performance from storage to keep up with the rate of data acquisition and unpredictable demands of enterprise workloads. In a world that requires near instant response times and increasingly faster access to data, the needs of business-critical tier 1 enterprise applications, such as databases including SQL, Oracle and SAP, have been largely unmet.

The major bottleneck holding back the industry is I/O performance. Current systems still rely on device-level optimizations tied to specific disk and flash technologies, and lack the software optimizations needed to fully harness advances in server technology such as multicore architectures. As a result, storage performance has not kept pace with Moore's Law.

Waiting on IO: The Straw That Broke Virtualization’s Back
In this paper, we will discuss DataCore’s underlying parallel architecture, how it evolved over the years and how it results in a markedly different way to address the craving for IOPS (input/output operations per second) in a software-defined world.
Despite the increasing horsepower of modern multi-core processors and the promise of virtualization, we’re seeing relatively little progress in the amount of concurrent work they accomplish. That’s why we’re having to buy a lot more virtualized servers than we expected.

On closer examination, we find the root cause to be IO-starved virtual machines (VMs), especially for heavy online transactional processing (OLTP) apps, databases and mainstream IO-intensive workloads. Plenty of compute power is at their disposal, but servers have a tough time fielding inputs and outputs. This gives rise to an odd phenomenon of stalled virtualized apps while many processor cores remain idle.

So how exactly do we crank up IOs to keep up with the computational appetite while shaving costs? This can best be achieved by parallel IO technology designed to process IO across many cores simultaneously, thereby putting those idle CPUs to work. Such technology has been developed by DataCore Software, a long-time master of parallelism in the field of storage virtualization.
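The core idea, fanning IO work out across many cores instead of funneling it through one serial path, can be sketched with a simple thread pool. This is an illustrative model only, not DataCore's architecture:

```python
from concurrent.futures import ThreadPoolExecutor
import threading

def serve_io(requests, workers):
    """Dispatch simulated IO requests across a pool of worker threads,
    rather than draining them through a single serial queue. For real
    IO-bound work the waits then overlap instead of serializing."""
    completed = []
    lock = threading.Lock()

    def handle(req):
        # A real handler would issue the actual read/write here.
        with lock:
            completed.append(req)

    with ThreadPoolExecutor(max_workers=workers) as pool:
        pool.map(handle, requests)  # shutdown at exit waits for all tasks
    return completed

done = serve_io(range(100), workers=8)
print(len(done))  # 100
```

The point of the sketch is structural: with one worker the requests complete strictly in sequence; with eight, idle cores absorb the IO wait time, which is the behavior the paper attributes to parallel IO.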

VMware Data Replication Done Right
Until now the most common data replication technologies and methods essential to mission-critical BC/DR initiatives have been tied to the physical environment. Although they do work in the virtual environment, they aren’t optimized for it. With the introduction of hypervisor-based replication, Zerto elevates BC/DR up the infrastructure stack where it belongs: in the virtualization layer.

Challenges:

  • If a data replication solution isn’t virtual-ready, management overhead could be more than doubled.
  • Customer data is always growing, so a company can find its information inventory expanding exponentially without a data replication solution that can keep pace.
  • Some replication methods remain firmly tied to a single vendor and hardware platform, limiting the organization’s ability to get the best solutions – and service – at the best price.

Benefits of Hypervisor-Based Replication:

  • Granularity - The ability to replicate at the level of the individual virtual entity is critical. Zerto’s solution can replicate all virtual machines along with all of their metadata.
  • Scalability - Zerto’s hypervisor-based replication solution is software-based so it can be deployed and managed easily, no matter how fast the infrastructure expands.
  • Hardware-agnostic - Zerto’s data replication is hardware-agnostic, supporting all storage arrays, so organizations can replicate from anything to anything. This allows users to mix storage technologies such as SAN & NAS, and virtual disk types such as RDM & VMFS.
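The hardware-agnostic point follows from where the interception happens: writes are captured at the hypervisor layer, above any particular array. A toy model of that idea (not Zerto's implementation; all names here are hypothetical) replicates between dissimilar stores by replaying a write journal:

```python
class HypervisorTap:
    """Toy model of hypervisor-level replication: every write a VM makes
    is mirrored into an ordered journal, which any target datastore can
    replay. Because interception happens above the storage layer, source
    and target need not share a vendor or format (hardware-agnostic)."""

    def __init__(self):
        self.journal = []  # ordered (vm, block, data) entries

    def write(self, primary, vm, block, data):
        primary[(vm, block)] = data             # normal write path
        self.journal.append((vm, block, data))  # mirrored for replication

    def replay(self, target):
        for vm, block, data in self.journal:
            target[(vm, block)] = data

tap = HypervisorTap()
san, nas = {}, {}  # dissimilar storage backends modeled as plain dicts
tap.write(san, "vm1", 0, b"hello")
tap.write(san, "vm1", 1, b"world")
tap.replay(nas)
print(nas == san)  # True
```

Because the journal speaks in VM-and-block terms rather than array-specific ones, the same replay works against any backend, which is the essence of replicating "from anything to anything."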
Introducing Cloud Disaster Recovery
Can mission-critical apps really be protected in the cloud?

Introducing: Cloud Disaster Recovery

Today, enterprises of all sizes are virtualizing their mission-critical applications, either within their own data center, or with an external cloud vendor. One key driver is to leverage the flexibility and agility virtualization offers to increase availability, business continuity and disaster recovery.

With the cloud becoming more of an option, enterprises of all sizes are looking to the cloud – public, hybrid or private – to become part of their BC/DR solution. However, these options do not always exist. Virtualization has created the opportunity, but there is still a significant technology gap: mission-critical applications can be effectively virtualized and managed, but the corresponding data cannot be effectively protected in a cloud environment.

Additional Challenges for Enterprises with the Cloud:

  • Multi-tenancy
  • Data protection & mobility
  • Lack of centralized management

Solutions with Zerto Virtual Replication:

  • Seamless integration with no environment change
  • Multi-site support
  • Hardware-agnostic replications
The Visionary’s Guide to VM-aware storage
The storage market is noisy. On the surface, storage providers tout all flash, more models and real-time analytics. But under the covers lies a dirty little secret—their operating systems (the foundation of storage) are all the same… built on LUNs and volumes.

But now a new category of storage has emerged – with operating systems built on virtual machines and specifically attuned to virtualization and cloud. It’s called VM-aware storage (VAS), and if you’ve got a large virtual footprint, it’s something you need to explore further. Fortunately, this guide offers you (the Visionary) a closer look at VAS and the chance to see storage differently.
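The LUN-versus-VM distinction is essentially one of accounting granularity, which a few lines with hypothetical latency numbers can illustrate:

```python
# Hypothetical latency samples (ms) for three VMs sharing one LUN.
samples = {"vm-web": [1, 1, 2], "vm-db": [25, 30, 28], "vm-log": [2, 1, 1]}

# LUN-level view: one blended number that hides the struggling VM.
lun_avg = (sum(sum(v) for v in samples.values())
           / sum(len(v) for v in samples.values()))

# VM-aware view: per-VM averages expose exactly where the latency lives.
per_vm = {vm: sum(v) / len(v) for vm, v in samples.items()}

print(round(lun_avg, 1))            # ~10.1 ms - looks tolerable
print(round(per_vm["vm-db"], 1))    # ~27.7 ms - the real problem
```

A storage OS that keeps its books per LUN can only ever report the first number; keeping them per VM is what makes the second visible, which is the premise behind VM-aware storage.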

The Six Factors that Determine Your Storage TCO
Over the past decade, capital expenses have grown 3x, but operating expenses have grown 8x. That’s a remarkable pace, and so it’s critical to understand the drivers. Read up on the six TCO factors you need to consider and suggestions for how to think through—and even measure—each one.

When you’re in a buying cycle, calculating the cost of different storage options can be overwhelming – it’s not as simple as comparing cost-per-gigabyte.

Citrix Ready Whitepaper: Top Eight Best Practices for Deploying Citrix XenApp and XenDesktop 7.7
Citrix XenApp and XenDesktop 7.7 are fast becoming the standard platforms for deploying application and desktop virtualization. The new FlexCast Management Architecture (FMA) provides a unified platform that makes application and desktop delivery fast and easy. Discover how to take advantage of all the new Citrix features and enhancements to improve the security, manageability and remote access of your virtual applications and desktops in this Citrix Ready white paper.

Citrix XenApp and XenDesktop 7.7 are fast becoming the standard platforms for deploying application and desktop virtualization in today’s expanding enterprise, providing a unified platform that makes application and desktop delivery fast and easy.

You can utilize the new FlexCast Management Architecture (FMA) as well as several new features and enhancements to improve the security, manageability and remote access of your virtual applications and desktops.

In this Citrix-Ready white paper, Top Eight Best Practices for Deploying Citrix XenApp and XenDesktop 7.7, you will discover real-world experiences as well as the top eight best practices in deploying Citrix XenApp/XenDesktop 7.x, including how to:

  • Be cloud ready by separating the management plane and the workload
  • Enhance scalability and performance for multi-media applications with GPUs
  • Take advantage of new XenApp/XenDesktop 7.x features including Connection Leasing, anonymous user accesses, session pre-launch and lingering, and application folders/grouping support
  • Improve the user experience on broadband wireless connections with the Framehawk technology (available in Feature Pack 2)
  • Benefit from Citrix Provisioning Services enhancements and understand the trade-offs between PVS and Machine Creation Services
  • Analyze virtualization platform choices and how to optimize the deployment to get more users per server
  • Leverage built-in Citrix tools for performance visibility
  • Implement end-to-end monitoring, diagnosis and reporting for complete visibility across all Citrix and non-Citrix tiers in a single unified view

Performance Monitoring for Your Citrix Infrastructure - Considerations & Checklist
Citrix environments incorporate numerous components as well as diverse back-end application elements and user-specific items – all complex variables that can affect the user experience. This white paper provides a checklist of monitoring-related criteria that should be considered as part of due diligence by enterprises and service providers to effectively manage the performance of their complex Citrix infrastructures.

Numerous components as well as diverse back-end application elements and user-specific items comprise today’s complex Citrix infrastructures, and all can affect the user experience. At any given time, one or more of these components may fail or experience an issue, and service organizations spend a significant amount of time and effort diving into each of the various components in order to properly address and resolve problems.

Download this white paper to get a comprehensive checklist of monitoring-related criteria that should be considered as part of due diligence by enterprises and service providers to effectively manage performance of their complex Citrix infrastructures. Jo Harder, Application and Desktop Virtualization Analyst, details a checklist for importance level and vendor consideration broken out into the following categories:

  • Monitoring of all Datacenter Components
  • Monitoring of User Experience
  • Monitoring of Citrix Key Performance Indicators
  • Administration and Reporting
  • Monitoring System Functionality
  • Vendor Support and Product Development
User Profile and Environment Management with ProfileUnity
This whitepaper has been authored by experts at Liquidware Labs in order to provide guidance to adopters of desktop virtualization technologies. In this paper, we outline how ProfileUnity was designed to address many of the shortcomings of Roaming Profiles, and basic profile management tools that are just a step away from roaming profiles, in managing user profiles and user-authored data over multiple desktop platforms, including physical upgrades and refreshes, Windows migrations and moves to
User profile management on Microsoft Windows desktops continues to present challenges. Most administrators find that Roaming Profiles and even Microsoft UE-V generally fall short due to several factors: profile corruption, lack of customization and lack of enterprise features are just some of the top shortcomings of Windows profile management with these options.

Basic tools such as roaming profiles do not support a mixed operating environment, so they do not allow users to move among desktops with mixed profile versions, e.g. Windows 7, Windows 10, Server 2008, Server 2012 R2, etc.

The lack of support for mixed OS versions makes Microsoft profile management methods a serious hindrance when upgrading or migrating operating systems. Microsoft profile management tools also support only very limited granular management, so admins cannot exclude bloated areas of a user profile or include files and registry keys outside of the profile. Profile bloat is one of the top reasons for long logon times on Windows desktops.
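The granular include/exclude capability described above boils down to pattern-filtering the paths captured in a profile. A minimal sketch with hypothetical rules (not ProfileUnity's actual configuration syntax):

```python
from fnmatch import fnmatch

def filter_profile(paths, excludes, extra_includes=()):
    """Keep profile paths not matching an exclude rule (trimming bloat
    such as caches), and pull in paths outside the profile that should
    roam with the user."""
    kept = [p for p in paths
            if not any(fnmatch(p, pat) for pat in excludes)]
    return kept + list(extra_includes)

# Hypothetical profile contents and rules, for illustration only.
profile = [
    "AppData/Roaming/App/settings.ini",
    "AppData/Local/Temp/scratch.tmp",
    "AppData/Local/Cache/big.bin",
]
captured = filter_profile(
    profile,
    excludes=["AppData/Local/Temp/*", "AppData/Local/Cache/*"],
    extra_includes=["C:/ProgramData/App/shared.cfg"],  # outside profile
)
print(captured)
# ['AppData/Roaming/App/settings.ini', 'C:/ProgramData/App/shared.cfg']
```

Dropping the cache and temp entries is exactly the bloat-trimming that shortens logons, while the extra include shows state outside the profile tree roaming along with it.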

Most organizations that upgrade from a previous Windows® OS, such as Windows 7, to Windows 10 will want the flexibility to move at their own pace and upgrade machines on a departmental or ‘as needed’ basis. As a result, managing and migrating Microsoft profiles becomes a huge challenge in these environments, because neither operation is seamlessly supported or functional between the two operating systems.

A user’s profile consists of nearly everything needed to provide a personalized user experience within Windows.  If one could separate out the user profile from Windows and enable dynamic profiles that can adapt to any Windows OS version, several advantages can be realized:
  • User state can be stored separately and delivered just-in-time to enable workers to roam from workspace to workspace
  • Users’ profiles can co-exist in mixed OS environments or automatically migrate from one OS to the next, making OS upgrades easy and essentially irrelevant during a point-in-time upgrade
  • Integral policies and self-managed settings, such as local and network printer management, as well as security policies, can be readily restored in the event of a PC failure or loss (disaster recovery)
Given the growing complexity and diversity of Windows desktops technologies, today’s desktop administrators are looking for better ways to manage user profiles across the ever-increasing spectrum of desktop platforms available. In this whitepaper, we will cover the issues inherent with Roaming Profiles and how ProfileUnity addresses these issues.
A Deep Dive into Tintri VM Scale-out
This paper illustrates how Tintri’s per-VM management tools work together with predictive analytics and VM Scale-out technology to make it possible to scale out storage while preserving the management simplicity designed into each Tintri VMstore.

LUNs, volumes, RAID, striping and more have nothing to do with the virtual machines and applications that run your business, and only result in storage performance issues and management complexity. Tintri lets you manage what matters: individual virtual machines. With the announcement of VM Scale-out and analytics, Tintri provides virtualization scale-out technology that makes it possible to focus on managing individual VMs instead of storage.
