White Papers Search Results
VDI FIT and VDI UX Key Metrics
As more organizations prepare for and deploy hosted virtual desktops, it has become clear that there is a need to measure compatibility and performance for virtual desktop infrastructure, both in the planning process and when measuring user experience after moving to virtual desktops. This white paper covers best practices and introduces the composite metrics VDI FIT in Stratusphere FIT and VDI UX in Stratusphere UX, which provide a framework for classifying desktops as Good, Fair, or Poor.

As more organizations prepare for and deploy hosted virtual desktops, it has become clear that there is a need to support two related but critical phases. The first is to inventory and assess the physical desktop environment in order to create a baseline for the performance and quality of the user experience for the virtual desktop counterparts. When planning and preparing, organizations would like to know which desktops, users and applications are a good fit for desktop virtualization and which ones are not. The second phase is to track the virtual desktops in production in order to proactively identify when performance and user experience do not meet expectations, as well as to continue to refine and optimize the desktop image and infrastructure as changes are introduced into the environment.

Because virtual desktops live on shared systems, the layers of technology make it more complex to measure and classify fitness and user experience. But with increased industry knowledge and the emergence of best practices, plus new purpose-built products such as Liquidware Labs’ Stratusphere FIT and Stratusphere UX, it is now possible to more accurately measure and classify both fitness and user experience. This white paper covers these best practices and provides an introduction to the VDI FIT and VDI UX classification capabilities in Stratusphere FIT and Stratusphere UX.
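
As a purely illustrative aside, a composite metric of this kind can be pictured as a weighted roll-up of individual measurements mapped onto Good/Fair/Poor bands. The sketch below is not the Stratusphere FIT or UX formula; the metric names, weights, worst-case values, and thresholds are assumptions invented for the example.

```python
# Hypothetical composite-score sketch; the metrics, weights, worst-case
# values, and band thresholds are illustrative assumptions, not the
# actual VDI FIT / VDI UX calculations.
WEIGHTS = {"cpu_pct": 0.3, "disk_iops": 0.3, "login_seconds": 0.2, "app_load_seconds": 0.2}
WORST   = {"cpu_pct": 100, "disk_iops": 200, "login_seconds": 120, "app_load_seconds": 30}

def normalize(value, worst):
    """Map a raw measurement onto 0..1, where 1.0 means no observed load."""
    return max(0.0, 1.0 - value / worst)

def classify(sample):
    """Roll the weighted, normalized metrics up into a Good/Fair/Poor band."""
    score = sum(w * normalize(sample[k], WORST[k]) for k, w in WEIGHTS.items())
    if score >= 0.75:
        return "Good"
    return "Fair" if score >= 0.50 else "Poor"

# Example: a desktop with moderate CPU, light I/O, and a fast login.
print(classify({"cpu_pct": 35, "disk_iops": 40, "login_seconds": 25, "app_load_seconds": 4}))
# -> Good
```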

Free VCP5-DCV and VCAP5-DCA study guides
One of the memory management techniques ESXi uses is Memory Compression. When a given ESXi host is under memory strain, ESXi will compress virtual pages and store them in memory. This memory management technique allows for better performance than accessing memory that has been swapped to disk. You can also set the size of the compression cache as a percentage of the memory assigned to a VM.
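
As a rough illustration of that last point, the sketch below uses pyVmomi (VMware's Python SDK for the vSphere API) to adjust the host advanced settings that govern memory compression. This is not an excerpt from the study guides; the host name, credentials, and the option names Mem.MemZipEnable and Mem.MemZipMaxPct are assumptions to verify against your own ESXi build.

```python
# A minimal sketch, assuming the advanced options Mem.MemZipEnable and
# Mem.MemZipMaxPct are present on the target host; verify the names and
# value types against your ESXi release before using this anywhere real.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()                 # lab use only
si = SmartConnect(host="esxi01.lab.local", user="root",
                  pwd="VMware1!", sslContext=ctx)
content = si.RetrieveContent()

# Grab the first ESXi host visible to this connection.
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.HostSystem], True)
host = view.view[0]

# Keep compression enabled and raise the per-VM compression-cache ceiling
# from the default 10% of configured VM memory to 15%. Depending on how
# the option is typed, the value may need to be passed as a typed integer.
host.configManager.advancedOption.UpdateOptions(changedValue=[
    vim.option.OptionValue(key="Mem.MemZipEnable", value=1),
    vim.option.OptionValue(key="Mem.MemZipMaxPct", value=15),
])

Disconnect(si)
```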
Free VCP5-DCV Study Guide

In this 136-page study guide, Jason and Josh cover all seven sections of the exam blueprint to help prepare you for the VCP exam.

Free VCAP5-DCA Study Guide

For those who currently hold their VCP certification and want to take it up a notch, Jason and Josh have you covered with the 248-page VCAP5-DCA study guide. Using this study guide along with hands-on lab time will help you get through the three-and-a-half-hour, lab-based VCAP5-DCA exam.
Application Response Time for Virtual Operations
For applications running in virtualized, distributed and shared environments it will no longer work to infer the performance of an application by looking at various resource utilization statistics. Rather it is essential to define application performance by measuring response and throughput of every application in production. This paper makes the case for how application performance management for virtualized and cloud based environments needs to be modernized to suit these new environments.

Massive changes are occurring to how applications are built and how they are deployed and run. The benefits of these changes are dramatically increased responsiveness to the business (business agility), increased operational flexibility, and reduced operating costs.

The environments onto which these applications are deployed are also undergoing a fundamental change. Virtualized environments offer increased operational agility, which translates into a more responsive IT Operations organization. Cloud computing offers application owners a fully outsourced alternative to internal data center execution environments. IT organizations are in turn responding to the public cloud with IT as a Service (ITaaS) initiatives.

For applications running in virtualized, distributed and shared environments, it will no longer work to infer the “performance” of an application by looking at various resource utilization statistics. Rather it will become essential to define application performance as response time – and to directly measure the response time and throughput of every application in production. This paper makes the case for how application performance management for virtualized and cloud based environments needs to be modernized to suit these new environments.
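
To make the "measure, don't infer" point concrete, here is a minimal sketch that is not tied to any product mentioned in this paper: it wraps a hypothetical request handler so that response time and throughput are recorded directly rather than deduced from resource counters. The handler name and the simulated work are assumptions for the example.

```python
# A minimal sketch: measure response time and throughput directly by
# wrapping the request handler, instead of inferring performance from
# CPU or memory utilization counters.
import time
from collections import deque

latencies = deque(maxlen=10_000)          # rolling window of response times

def measured(handler):
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return handler(*args, **kwargs)
        finally:
            latencies.append(time.perf_counter() - start)
    return wrapper

@measured
def handle_request(payload):
    time.sleep(0.01)                      # stand-in for real work
    return payload.upper()

window_start = time.perf_counter()
for _ in range(200):
    handle_request("order")
elapsed = time.perf_counter() - window_start

print(f"avg response time: {sum(latencies) / len(latencies) * 1000:.1f} ms, "
      f"throughput: {len(latencies) / elapsed:.0f} req/s")
```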

Making a Business Case for Unified IT Monitoring
Unified monitoring solutions like Zenoss offer a cost-effective alternative for those seeking to rein-in monitoring inefficiencies. By establishing a central nerve center to collect data from multiple tools and managed resources, IT groups can gain visibility into the end-to-end availability and performance of their infrastructure. This helps simplify operational processes and reduce the risk of service disruption for the enterprise.

In large IT organizations, monitoring tool sprawl has become so commonplace that it is not unusual for administrators to be monitoring 10 to 50 solutions across various departments.

Unified monitoring solutions like Zenoss offer a cost-effective alternative for those seeking to rein-in monitoring inefficiencies. By establishing a central nerve center to collect data from multiple tools and managed resources, IT groups can gain visibility into the end-to-end availability and performance of their infrastructure. This helps simplify operational processes and reduce the risk of service disruption for the enterprise.

This paper can help you make an effective business case for moving to a unified monitoring solution. Key considerations include:

•    The direct costs associated with moving to a unified monitoring tool
•    The savings potential of improved IT operations through productivity and efficiency
•    The business impact of monitoring tools in preventing and reducing both downtime and service degradation

Download the paper now!

How Software-Defined Storage Enhances Hyper-converged Storage
This paper describes how to conquer the challenges of using SANs in a virtual environment and why organizations are looking into hyper-converged systems that take advantage of Software-Defined Storage as a solution to provide reliable application performance and a highly available infrastructure.
One of the fundamental requirements for virtualizing applications is shared storage. Applications can move around to different servers as long as those servers have access to the storage with the application and its data. Typically, shared storage takes place over a storage network known as a SAN. However, SANs typically run into issues in a virtual environment, so organizations are currently looking for new options. Hyper-converged infrastructure is a solution that seems well-suited to address these issues.
 
By downloading thispaper you will:
 
  • Identify the issues with running SANs in virtualized environments
  • Learn why Hyper-converged systems are ideal for solving performance issues
  • Learn why Hyper-converged systems are ideal for remote offices
  • Discover real-world use cases where DataCore's Hyper-converged Virtual SAN addressed these issues
Hyper-converged Infrastructure: No-Nonsense Selection Criteria
This white paper helps you identify the key selection criteria for building a business-savvy hyper-converged infrastructure model based on cost, availability, fitness to purpose, and performance. It also includes a checklist you can use to evaluate hyper-converged storage options.
Hyper-converged storage is the latest buzz phrase in storage. The exact meaning of hyper-converged storage varies depending on the vendor that one consults, with solutions varying widely with respect to their support for multiple hypervisor and workload types and their flexibility in terms of hardware componentry and topology.
 
Regardless of the definition that vendors ascribe to the term, the truth is that building a business-savvy hyper-converged infrastructure still comes down to two key requirements: selecting a combination of infrastructure products and services that best fit workload requirements, and selecting a hyper-converged model that can adapt and scale with changing storage demands without breaking available budgets.
 
Download this paper to:
  • Learn about hyper-converged storage and virtual SANs
  • Identify key criteria for selecting the right hyper-converged infrastructure
  • Obtain a checklist for evaluating options
DataCore Virtual SAN – A Deep Dive into Converged Storage
Topics: DataCore, storage, SAN
This white paper describes how DataCore’s Virtual SAN software can help you deploy a converged, flexible architecture to address painful challenges that exist today such as single points of failure, poor application performance, low storage efficiency and utilization, and high infrastructure costs.

DataCore Virtual SAN introduces the next evolution in Software-defined Storage (SDS) by creating high-performance and highly-available shared storage pools using the disks and flash storage in your servers. It addresses the requirements for fast and reliable access to storage across a cluster of servers at remote sites as well as in high-performance applications.

Download this white paper to learn about:

•    The technical aspects of DataCore’s Virtual SAN solution - a deep dive into converged storage
•    How DataCore Virtual SAN addresses IT challenges such as single points of failure, poor application performance, low storage efficiency and utilization, and high infrastructure costs
•    Possible use cases and benefits of DataCore’s Virtual SAN

Boone County Health Center Runs Faster with Infinio
Boone County Health Center’s IT team needed a solution to improve the response times of virtual desktops during their peak times of morning usage when most employees log on for the day.
Boone County Health Center’s IT team needed a solution to improve the response times of virtual desktops during their peak times of morning usage when most employees log on for the day. Employees access electronic medical records (EMR), business reports, financial data, email and other essential applications required to manage daily operations and provide optimum patient care. Some medical staff and administrators occasionally log in from their homes on personal devices such as laptops or iPads. The Health Center initially considered purchasing an add-on all-flash array for the VDI to help eliminate slow response periods during boot storms. However, before making this type of investment, the Center wanted to explore alternative solutions.
Masergy accelerates VDI and storage performance with Infinio
To support its global users, Masergy needed to accelerate its virtual desktop infrastructure (VDI) and was unconvinced that spending budget on solid-state drive (SSD) solutions would work.
To support its global users, Masergy needed to accelerate its virtual desktop infrastructure (VDI) and was unconvinced that spending budget on solid-state drive (SSD) solutions would work. The team was investigating SSD solutions and options from SanDisk, VMware and Dell, as well as all-flash arrays, when it discovered Infinio at VMworld 2014. Unlike the solutions Masergy considered previously, the simplicity of the Infinio Accelerator and low price point caught the Masergy team’s attention. Fewer than six months later, Masergy’s Infinio installation was under way. Infinio provides an alternative to expensive, hardware-based solutions to address VDI performance, which is what Masergy wanted to improve.
IDC: Cirba Targets Software-Defined Infrastructure Control with Workload-Aware Predictive Analytics
This IDC vendor profile analyzes Cirba’s Software-Defined Infrastructure Control with workload-aware predictive analytics. “Customers interviewed by IDC credit Cirba with helping them substantially reduce infrastructure and software licensing costs by improving the density of their environments without compromising application and workload performance.”

This IDC vendor profile analyzes Cirba’s Software-Defined Infrastructure Control with workload-aware predictive analytics.

“Customers interviewed by IDC credit Cirba with helping them substantially reduce infrastructure and software licensing costs by improving the density of their environments without compromising application and workload performance.”

Why Parallel I/O & Moore's Law Enable Virtualization and SDDC to Achieve their Potential
Today’s demanding applications, especially within virtualized environments, require high performance from storage to keep up with the rate of data acquisition and unpredictable demands of enterprise workloads. In a world that requires near instant response times and increasingly faster access to data, the needs of business-critical tier 1 enterprise applications, such as databases including SQL, Oracle and SAP, have been largely unmet.

Today’s demanding applications, especially within virtualized environments, require high performance from storage to keep up with the rate of data acquisition and unpredictable demands of enterprise workloads. In a world that requires near instant response times and increasingly faster access to data, the needs of business-critical tier 1 enterprise applications, such as databases including SQL, Oracle and SAP, have been largely unmet. 

The major bottleneck holding back the industry is I/O performance. Current systems still rely on device-level optimizations tied to specific disk and flash technologies, because they lack software optimizations that can fully harness the latest advances in more powerful server system technologies such as multicore architectures. As a result, they have not been able to keep up with the pace of Moore’s Law.

Waiting on IO: The Straw That Broke Virtualization’s Back
In this paper, we will discuss DataCore’s underlying parallel architecture, how it evolved over the years and how it results in a markedly different way to address the craving for IOPS (input/output operations per second) in a software-defined world.
Despite the increasing horsepower of modern multi-core processors and the promise of virtualization, we’re seeing relatively little progress in the amount of concurrent work they accomplish. That’s why we’re having to buy a lot more virtualized servers than we expected.

On closer examination, we find the root cause to be IO-starved virtual machines (VMs), especially for heavy online transactional processing (OLTP) apps, databases and mainstream IO-intensive workloads. Plenty of compute power is at their disposal, but servers have a tough time fielding inputs and outputs. This gives rise to an odd phenomenon of stalled virtualized apps while many processor cores remain idle.

So how exactly do we crank up IOs to keep up with the computational appetite while shaving costs? This can best be achieved by parallel IO technology designed to process IO across many cores simultaneously, thereby putting those idle CPUs to work. Such technology has been developed by DataCore Software, a long-time master of parallelism in the field of storage virtualization.
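
Before getting into DataCore's specifics, a toy sketch can make the general idea concrete. The Python below is not DataCore's architecture; it simply contrasts pushing simulated I/O requests through a single serialized path with fanning them out across one worker per core, which is the behavior the parallel I/O argument rests on.

```python
# Illustrative sketch only: contrast a serialized I/O path with one that
# spreads requests across a pool of workers, one per available core.
import os
import time
from concurrent.futures import ThreadPoolExecutor

def io_request(_):
    time.sleep(0.005)                 # stand-in for a 5 ms storage operation

requests = range(400)

start = time.perf_counter()
for r in requests:                    # serialized path: one request at a time
    io_request(r)
serial = time.perf_counter() - start

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=os.cpu_count()) as pool:
    list(pool.map(io_request, requests))   # parallel path: one worker per core
parallel = time.perf_counter() - start

print(f"serialized: {serial:.2f}s  parallel: {parallel:.2f}s")
```

On a typical multi-core machine the parallel pass finishes several times faster even though no individual request is any quicker, which mirrors the claim that otherwise idle cores can be put to work fielding I/O.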

In this paper, we will discuss DataCore’s underlying parallel architecture, how it evolved over the years and how it results in a markedly different way to address the craving for IOPS (input/output operations per second) in a software-defined world.

A Flash Storage Technical and Economic Primer
Topics: tegile, storage, flash
Flash technology is rapidly evolving. Chances are the game has changed since you last checked. With every step forward, flash storage is becoming faster, more reliable, and less expensive. And there’s more than one kind of flash technology out there. Some flash technologies focus on performance, while others balance performance with capacity. Read this white paper for a technical breakdown of the latest in flash storage. Learn how flash has changed in the last few years, and how the economics have shifted.
Although today’s NAND flash storage has its roots in 30-year-old technology, innovation has negated almost all of the challenges that are inherent in the media. Moreover, modern storage companies are taking even more software-based steps to further overcome such challenges. Given these advances, it’s clear that flash media use will continue to grow in the data center.
Understanding the Economics of Flash
Flash is faster, it’s more reliable, and it can transform your business. However, you might be concerned about its high cost. Read this white paper to learn how to identify the hidden costs of flash and choose a storage solution that delivers the performance and capacity your organization needs, yet fits within your budget.
In today’s world, IT organizations want everything to be better, faster, and cheaper. As changes come to the industry, it’s important to understand how to measure improvement. Specific to flash storage, it’s important to understand how choices about flash versus disk impact the bottom line. When it comes to making this determination, how can you be sure you're getting the most out of every dollar?
The Guide to Selecting Flash for Virtual Environments
High-performance flash-based storage has dramatically improved the storage infrastructure’s ability to respond to the demands of servers and the applications that count on it. Nowhere does this improvement have more potential than in the virtualized-server environment. The performance benefits of flash are so great that it can be deployed indiscriminately and still deliver gains. But doing so may not allow the environment to take full advantage of flash performance.
Flash storage allows for a higher number of VMs per host. Increasing VM density reduces the number of physical servers required, eliminating one of the largest ongoing costs: buying additional physical servers, which are often configured with multiple processors and extra DRAM. At the same time, the high performance and low latency of flash allow more mission-critical applications to be virtualized. This report explains how storage and hypervisor vendors are getting smarter about leveraging this faster form of storage by including storage quality of service (QoS) intelligence in their systems or environments.
Unlock the Full Performance of Your Servers
Unlock the full performance of your servers with DataCore Adaptive Parallel I/O Software

Unlock the full performance of your servers with DataCore Adaptive Parallel I/O Software.

The Problem:

Current systems don't have software optimizations that can fully harness the latest advances in more powerful server system technologies.

As a result, I/O performance has been the major bottleneck holding back the industry.