Virtualization Technology News and Information
White Papers Search Results
Showing 1 - 16 of 48 white papers, page 1 of 3.
5 Fundamentals of Modern Data Protection
Some data protection software vendors will say that they are “agentless” because they can do an agentless backup. However, many of these vendors require agents for file-level restore, proper application backup, or application data recovery. My advice is to make sure that your data protection tool is able to address all backup and recovery scenarios without the need for an agent.
Legacy backup is costly, inefficient, and can force IT administrators to make risky compromises that impact critical business applications, data and resources. Read this NEW white paper to learn how Modern Data Protection capitalizes on the inherent benefits of virtualization to:
  • Increase your ability to meet RPOs and RTOs
  • Eliminate the need for complex and inefficient agents
  • Reduce operating costs and optimize resources
The Expert Guide to VMware Data Protection
Virtualization is a very general term for simulating a physical entity by using software. There are many different forms of virtualization that may be found in a data center, including server, network and storage virtualization. When discussing server virtualization, you may encounter many unique terms and concepts that are part of the technology behind it.
Virtualization is the most disruptive technology of the decade. Virtualization-enabled data protection and disaster recovery is especially disruptive because it allows IT to do things dramatically better at a fraction of the cost of a physical data center.

Chapter 1: An Introduction to VMware Virtualization

Chapter 2: Backup and Recovery Methodologies

Chapter 3: Data Recovery in Virtual Environments

Chapter 4: Choosing the Right Backup Solution for VMware
Application Response Time for Virtual Operations
For applications running in virtualized, distributed and shared environments, it will no longer work to infer the performance of an application by looking at various resource utilization statistics. Rather, it is essential to define application performance by measuring the response time and throughput of every application in production. This paper makes the case for how application performance management for virtualized and cloud based environments needs to be modernized to suit these new environments.

Massive changes are occurring to how applications are built and how they are deployed and run. The benefits of these changes are dramatically increased responsiveness to the business (business agility), increased operational flexibility, and reduced operating costs.

The environments onto which these applications are deployed are also undergoing a fundamental change. Virtualized environments offer increased operational agility, which translates into a more responsive IT Operations organization. Cloud Computing offers application owners a complete outsourced alternative to internal data center execution environments. IT organizations are in turn responding to public cloud with IT as a Service (ITaaS) initiatives.

For applications running in virtualized, distributed and shared environments, it will no longer work to infer the “performance” of an application by looking at various resource utilization statistics. Rather it will become essential to define application performance as response time – and to directly measure the response time and throughput of every application in production. This paper makes the case for how application performance management for virtualized and cloud based environments needs to be modernized to suit these new environments.
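
As a concrete illustration of measuring response time directly rather than inferring it from utilization counters, here is a minimal sketch in Python (not taken from the paper; the handler name and print-based reporting are assumptions for illustration):

    import time
    from functools import wraps

    def measure_response_time(fn):
        # Wrap a request handler so every call records its own response time.
        @wraps(fn)
        def wrapper(*args, **kwargs):
            start = time.perf_counter()
            try:
                return fn(*args, **kwargs)
            finally:
                elapsed_ms = (time.perf_counter() - start) * 1000
                # A real APM agent would ship this to a metrics backend;
                # printing keeps the sketch self-contained.
                print(f"{fn.__name__} responded in {elapsed_ms:.1f} ms")
        return wrapper

    @measure_response_time
    def lookup_order(order_id):
        time.sleep(0.05)  # stands in for real application work
        return {"order": order_id, "status": "shipped"}

    if __name__ == "__main__":
        lookup_order(42)

Aggregating these per-request measurements gives the response-time and throughput view of performance the paper argues for, independent of how the underlying virtual resources are utilized.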

CIO Guide to Virtual Server Data Protection
Server virtualization is changing the face of the modern data center. CIOs are looking for ways to virtualize more applications, and faster across the IT spectrum.
Server virtualization is changing the face of the modern data center. CIOs are looking for ways to virtualize more applications, faster across the IT spectrum. Selecting the right data protection solution that understands the new virtual environment is a critical success factor in the journey to cloud-based infrastructure. This guide looks at the key questions CIOs should be asking to ensure a successful virtual server data protection solution.
Five Fundamentals of Virtual Server Protection
The benefits of server virtualization are compelling and are driving the transition to large-scale virtual server deployments.
The benefits of server virtualization are compelling and are driving the transition to large-scale virtual server deployments. From the cost savings recognized through server consolidation to the business flexibility and agility inherent in emergent private and public cloud architectures, virtualization technologies are rapidly becoming a cornerstone of the modern data center. With Commvault's software, you can take full advantage of the developments in virtualization technology and enable private and public cloud data centers while continuing to meet all your data management, protection and retention needs. This whitepaper outlines the top 5 challenges to overcome in order to take advantage of the benefits of virtualization for your organization.
How Software-Defined Storage Enhances Hyper-converged Storage
This paper describes how to conquer the challenges of using SANs in a virtual environment and why organizations are looking into hyper-converged systems that take advantage of Software-Defined Storage as a solution to provide reliable application performance and a highly available infrastructure.
One of the fundamental requirements for virtualizing applications is shared storage. Applications can move around to different servers as long as those servers have access to the storage with the application and its data. Typically, shared storage takes place over a storage network known as a SAN. However, SANs typically run into issues in a virtual environment, so organizations are currently looking for new options. Hyper-converged infrastructure is a solution that seems well-suited to address these issues.
 
By downloading this paper you will:
 
  • Identify the issues with running SANs in virtualized environments
  • Learn why Hyper-converged systems are ideal for solving performance issues
  • Learn why Hyper-converged systems are ideal for remote offices
  • Discover real-world use cases where DataCore’s Hyper-converged Virtual SAN faced these issues
DataCore Virtual SAN – A Deep Dive into Converged Storage
Topics: DataCore, storage, SAN
This white paper describes how DataCore’s Virtual SAN software can help you deploy a converged, flexible architecture to address painful challenges that exist today such as single points of failure, poor application performance, low storage efficiency and utilization, and high infrastructure costs.

DataCore Virtual SAN introduces the next evolution in Software-defined Storage (SDS) by creating high-performance and highly-available shared storage pools using the disks and flash storage in your servers. It addresses the requirements for fast and reliable access to storage across a cluster of servers at remote sites as well as in high-performance applications.

Download this white paper to learn about:

•    The technical aspects of DataCore’s Virtual SAN solution - a deep dive into converged storage
•    How DataCore Virtual SAN addresses IT challenges such as single points of failure, poor application performance, low storage efficiency and utilization, and high infrastructure costs
•    Possible use cases and benefits of DataCore’s Virtual SAN

Building a Highly Available Data Infrastructure
Topics: DataCore, storage, SAN, HA
This white paper outlines best practices for improving overall business application availability by building a highly available data infrastructure.
Regardless of whether you use a direct-attached storage array, a network-attached storage (NAS) appliance, or a storage area network (SAN) to host your data, if that data infrastructure is not designed for high availability, then the data it stores is not highly available either; by extension, application availability is at risk – regardless of server clustering.

Download this paper to:
•    Learn how to develop a High Availability strategy for your applications
•    Identify the differences between Hardware and Software-defined infrastructures in terms of Availability
•    Learn how to build a Highly Available data infrastructure using Hyper-converged storage

Scale Computing’s hyperconverged system matches the needs of the SMB and mid-market
Scale Computing HC3 is cost effective, scalable and designed for installation and management by the IT generalist
Everyone has heard the buzz these days about hyper-converged systems – appliances with compute, storage and virtualization infrastructure built in. Hyper-converged infrastructure systems are an extension of infrastructure convergence – the combination of compute, storage and networking resources in one compact box – and promise simplification by consolidating resources onto a commodity x86 server platform.
2015 State of SMB IT Infrastructure Survey Results
Overall, companies of all sizes are moving faster to virtualize their servers but very few are taking advantage of hyperconvergence and all that it offers.
Demands on IT in small and medium businesses (SMBs) continue to rise exponentially. Budget changes, increased application and customization demands, and more are stretching IT administrators to the limit. At the same time, new technologies like hyperconverged infrastructure bring light to the end of the strained-resources tunnel through improved efficiency, scaling, and management breakthroughs. More and more, IT groups at SMBs are being pushed to “do more with less,” as the unwelcome saying goes. So, in order to meet these challenges, some SMBs leverage new technology.

See how 1,227 technologists replied to a survey in early 2015 as a part of our State of SMB IT Infrastructure Survey. The responses to this very popular survey yielded some surprising results!
Why Parallel I/O & Moore's Law Enable Virtualization and SDDC to Achieve their Potential
Today’s demanding applications, especially within virtualized environments, require high performance from storage to keep up with the rate of data acquisition and unpredictable demands of enterprise workloads. In a world that requires near instant response times and increasingly faster access to data, the needs of business-critical tier 1 enterprise applications, such as databases including SQL, Oracle and SAP, have been largely unmet.

The major bottleneck holding back the industry is I/O performance. This is because current systems still rely on device-level optimizations tied to specific disk and flash technologies since they don’t have software optimizations that can fully harness the latest advances in more powerful server system technologies such as multicore architectures. Therefore, they have not been able to keep up with the pace of Moore’s Law.

Waiting on IO: The Straw That Broke Virtualization’s Back
In this paper, we will discuss DataCore’s underlying parallel architecture, how it evolved over the years and how it results in a markedly different way to address the craving for IOPS (input/output operations per second) in a software-defined world.
Despite the increasing horsepower of modern multi-core processors and the promise of virtualization, we’re seeing relatively little progress in the amount of concurrent work they accomplish. That’s why we’re having to buy a lot more virtualized servers than we expected.

On closer examination, we find the root cause to be IO-starved virtual machines (VMs), especially for heavy online transactional processing (OLTP) apps, databases and mainstream IO-intensive workloads. Plenty of compute power is at their disposal, but servers have a tough time fielding inputs and outputs. This gives rise to an odd phenomenon of stalled virtualized apps while many processor cores remain idle.

So how exactly do we crank up IOs to keep up with the computational appetite while shaving costs? This can best be achieved by parallel IO technology designed to process IO across many cores simultaneously, thereby putting those idle CPUs to work. Such technology has been developed by DataCore Software, a long-time master of parallelism in the field of storage virtualization.
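
To make the idea concrete, here is a simplified sketch in Python (not DataCore's actual implementation) of fanning simulated I/O requests out across a pool of workers so that otherwise idle cores help service them; the block sizes and file-based "requests" are assumptions for illustration:

    import os
    import tempfile
    from concurrent.futures import ThreadPoolExecutor

    def write_block(path, block_id, size=1 << 20):
        # One simulated I/O request: write a 1 MiB block to disk.
        with open(path, "wb") as f:
            f.write(os.urandom(size))
        return block_id

    def parallel_io(num_blocks=16, workers=None):
        workers = workers or os.cpu_count()
        tmpdir = tempfile.mkdtemp()
        paths = [os.path.join(tmpdir, f"block_{i}.bin") for i in range(num_blocks)]
        # Threads release the GIL while blocked on file writes, so requests
        # are serviced concurrently instead of queuing behind one another.
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(write_block, paths, range(num_blocks)))

    if __name__ == "__main__":
        print("Completed blocks:", parallel_io())

A similar pattern, applied at the storage layer rather than in application code, is how parallel I/O technology puts otherwise idle processor cores to work servicing IO-starved virtual machines.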

Citrix AppDNA and FlexApp: Application Compatibility Solution Analysis
Desktop computing has rapidly evolved over the last 10 years. Once defined as physical PCs, Windows desktop environments now include everything from virtual to shared hosted (RDSH), to cloud based. With these changes, the enterprise application landscape has also changed drastically over the last few years.

This whitepaper provides an overview of Citrix AppDNA with Liquidware Labs FlexApp.

FlexApp: Application Layering Technology
FlexApp Application Layering in ProfileUnity enables applications to be virtualized in such an innate way that they look native to the Windows OS and other applications.

Application Layering leads to much higher rates of compatibility than previous technologies which used Application Isolation to virtualize applications. Once applications have been packaged for layering, they are containerized on virtual hard disks (VHDs) or VMDKs.  They can be centrally assigned to users on a machine-level or “context-aware” basis.
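
As a purely hypothetical sketch of what machine-level or “context-aware” assignment can look like (the layer names and rules below are invented for illustration and are not FlexApp's actual logic):

    from dataclasses import dataclass

    @dataclass
    class Session:
        user: str
        groups: set
        machine_type: str  # e.g. "physical", "vdi", "rdsh"

    # Each packaged layer (a VHD/VMDK container) is paired with a rule that
    # decides whether it should be attached for a given session.
    LAYER_RULES = {
        "office_suite.vhd": lambda s: True,                       # everyone
        "cad_tools.vhd":    lambda s: "engineering" in s.groups,  # group-based
        "branch_erp.vhd":   lambda s: s.machine_type == "rdsh",   # machine-level
    }

    def layers_for(session):
        # Return the layers that should be attached for this session.
        return [layer for layer, rule in LAYER_RULES.items() if rule(session)]

    if __name__ == "__main__":
        s = Session(user="asmith", groups={"engineering"}, machine_type="vdi")
        print(layers_for(s))  # ['office_suite.vhd', 'cad_tools.vhd']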

This whitepaper provides an overview of FlexApp concepts and ways in which FlexApp can serve as a cornerstone in an application delivery strategy.
VMware Data Replication Done Right
Until now the most common data replication technologies and methods essential to mission-critical BC/DR initiatives have been tied to the physical environment. Although they do work in the virtual environment, they aren’t optimized for it.

Until now the most common data replication technologies and methods essential to mission-critical BC/DR initiatives have been tied to the physical environment. Although they do work in the virtual environment, they aren’t optimized for it. With the introduction of hypervisor-based replication, Zerto elevates BC/DR up the infrastructure stack where it belongs: in the virtualization layer.

Challenges:

  • If a data replication solution isn’t virtual-ready, management overhead could be more than doubled.
  • Customer data is always growing, so a company can find its information inventory expanding exponentially without a data replication solution that can keep pace.
  • Some replication methods remain firmly tied to a single vendor and hardware platform, limiting the organization’s ability to get the best solutions – and service – at the best price.

Benefits of Hypervisor-Based Replication:

  • Granularity - The ability to replicate any virtual entity at the correct level is critical. Zerto’s solution can replicate all virtual machines and all of their metadata as well.
  • Scalability - Zerto’s hypervisor-based replication solution is software-based so it can be deployed and managed easily, no matter how fast the infrastructure expands.
  • Hardware-agnostic - Zerto’s data replication is hardware-agnostic, supporting all storage arrays, so organizations can replicate from anything to anything. This allows users to mix storage technologies such as SAN & NAS, and virtual disk types such as RDM & VMFS.
Introducing Cloud Disaster Recovery
Can mission-critical apps really be protected in the cloud? Today, enterprises of all sizes are virtualizing their mission-critical applications, either within their own data center or with an external cloud vendor.

Can mission-critical apps really be protected in the cloud?

Introducing: Cloud Disaster Recovery

Today, enterprises of all sizes are virtualizing their mission-critical applications, either within their own data center or with an external cloud vendor. One key driver is to leverage the flexibility and agility virtualization offers to improve availability, business continuity and disaster recovery.

With the cloud becoming more of an option, enterprises of all sizes are looking to the cloud – be it public, hybrid or private – to become part of their BC/DR solution. However, these options do not always exist. Virtualization has created the opportunity, but there is still a significant technology gap. Mission-critical applications can be effectively virtualized and managed; however, the corresponding data cannot be effectively protected in a cloud environment.

Additional Challenges for Enterprises with the Cloud:

  • Multi-tenancy
  • Data protection & mobility
  • Lack of centralized management

Solutions with Zerto Virtual Replication:

  • Seamless integration with no environment change
  • Multi-site support
  • Hardware-agnostic replication