White Papers Search Results
Showing 1 - 16 of 42 white papers, page 1 of 3.
Application Response Time for Virtual Operations
For applications running in virtualized, distributed and shared environments, it no longer works to infer the performance of an application from resource utilization statistics. Rather, it is essential to define application performance by measuring the response time and throughput of every application in production. This paper makes the case for how application performance management needs to be modernized to suit virtualized and cloud-based environments.

Massive changes are occurring to how applications are built and how they are deployed and run. The benefits of these changes are dramatically increased responsiveness to the business (business agility), increased operational flexibility, and reduced operating costs.

The environments onto which these applications are deployed are also undergoing a fundamental change. Virtualized environments offer increased operational agility, which translates into a more responsive IT Operations organization. Cloud computing offers application owners a completely outsourced alternative to internal data center execution environments. IT organizations are in turn responding to public cloud with IT as a Service (ITaaS) initiatives.

For applications running in virtualized, distributed and shared environments, it will no longer work to infer the “performance” of an application by looking at various resource utilization statistics. Rather, it will become essential to define application performance as response time – and to directly measure the response time and throughput of every application in production. This paper makes the case for how application performance management for virtualized and cloud-based environments needs to be modernized to suit these new environments.
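
As a rough, hypothetical sketch of what direct measurement can look like (it is not tooling described in the paper), the Python snippet below times individual requests and reports response-time percentiles and throughput; the target URL and the run_transaction helper are invented placeholders.

    # Minimal sketch: measure per-request response time and overall throughput
    # directly, instead of inferring performance from CPU or memory counters.
    # The target URL and the run_transaction helper are hypothetical placeholders.
    import statistics
    import time
    import urllib.request

    def run_transaction(url="http://app.example.internal/health"):
        # One application-level request; substitute a real business transaction.
        with urllib.request.urlopen(url, timeout=5) as resp:
            resp.read()

    def measure(samples=100):
        latencies = []
        start = time.perf_counter()
        for _ in range(samples):
            t0 = time.perf_counter()
            run_transaction()
            latencies.append(time.perf_counter() - t0)
        elapsed = time.perf_counter() - start
        latencies.sort()
        p95 = latencies[max(0, int(0.95 * len(latencies)) - 1)]
        print(f"p50={statistics.median(latencies) * 1000:.1f} ms  "
              f"p95={p95 * 1000:.1f} ms  "
              f"throughput={samples / elapsed:.1f} req/s")

    if __name__ == "__main__":
        measure()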

Making a Business Case for Unified IT Monitoring
Unified monitoring solutions like Zenoss offer a cost-effective alternative for those seeking to rein in monitoring inefficiencies. By establishing a central nerve center to collect data from multiple tools and managed resources, IT groups can gain visibility into the end-to-end availability and performance of their infrastructure. This helps simplify operational processes and reduce the risk of service disruption for the enterprise.

In large IT organizations, monitoring tool sprawl has become so commonplace that it is not unusual for administrators to be working with 10 to 50 separate monitoring solutions across various departments.


This paper can help you make an effective business case for moving to a unified monitoring solution. Key considerations include:

•    The direct costs associated with moving to a unified monitoring tool
•    The savings potential of improved IT operations through productivity and efficiency
•    The business impact of monitoring tools in preventing and reducing both downtime and service degradation (a simple worked example follows this list)
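
To put rough numbers on that downtime consideration, here is a back-of-the-envelope calculation; every figure in it (outage hours, hourly cost, assumed reduction) is an invented assumption to be replaced with your own data, not a result from the paper.

    # Hypothetical back-of-the-envelope estimate of monitoring's downtime impact.
    # All inputs are assumptions; substitute your organization's actual figures.
    unplanned_outage_hours_per_year = 20      # assumed current unplanned downtime
    cost_per_outage_hour = 50_000             # assumed revenue/productivity loss in dollars
    expected_downtime_reduction = 0.30        # assumed 30% fewer or shorter outages

    annual_downtime_cost = unplanned_outage_hours_per_year * cost_per_outage_hour
    estimated_annual_savings = annual_downtime_cost * expected_downtime_reduction

    print(f"Current annual downtime cost: ${annual_downtime_cost:,.0f}")      # $1,000,000
    print(f"Estimated annual savings:     ${estimated_annual_savings:,.0f}")  # $300,000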

Download the paper now!

How Software-Defined Storage Enhances Hyper-converged Storage
This paper describes how to conquer the challenges of using SANs in a virtual environment and why organizations are looking into hyper-converged systems that take advantage of Software-Defined Storage as a solution to provide reliable application performance and a highly available infrastructure.
One of the fundamental requirements for virtualizing applications is shared storage. Applications can move around to different servers as long as those servers have access to the storage with the application and its data. Typically, shared storage takes place over a storage network known as a SAN. However, SANs typically run into issues in a virtual environment, so organizations are currently looking for new options. Hyper-converged infrastructure is a solution that seems well-suited to address these issues.
 
By downloading this paper you will:
 
  • Identify the issues with running SANs in virtualized environments
  • Learn why Hyper-converged systems are ideal for solving performance issues
  • Learn why Hyper-converged systems are ideal for remote offices
  • Discover real-world use cases where DataCore’s Hyper-converged Virtual SAN faced these issues
Hyper-converged Infrastructure: No-Nonsense Selection Criteria
This white paper helps you identify the key selection criteria for building a business-savvy hyper-converged infrastructure model for your business based on cost, availability, fitness for purpose and performance. It also includes a checklist you can use to evaluate hyper-converged storage options.
Hyper-converged storage is the latest buzz phrase in storage. The exact meaning of hyper-converged storage varies depending on the vendor that one consults, with solutions varying widely with respect to their support for multiple hypervisor and workload types and their flexibility in terms of hardware componentry and topology.
 
Regardless of the definition that vendors ascribe to the term, the truth is that building a business-savvy hyper-converged infrastructure still comes down to two key requirements: selecting a combination of infrastructure products and services that best fit workload requirements, and selecting a hyper-converged model that can adapt and scale with changing storage demands without breaking available budgets.
 
Download this paper to:
  • Learn about hyper-converged storage and virtual SANs
  • Identify key criteria for selecting the right hyper-converged infrastructure
  • Obtain a checklist for evaluating options
DataCore Virtual SAN – A Deep Dive into Converged Storage
Topics: DataCore, storage, SAN
This white paper describes how DataCore’s Virtual SAN software can help you deploy a converged, flexible architecture to address painful challenges that exist today such as single points of failure, poor application performance, low storage efficiency and utilization, and high infrastructure costs.

DataCore Virtual SAN introduces the next evolution in Software-defined Storage (SDS) by creating high-performance and highly-available shared storage pools using the disks and flash storage in your servers. It addresses the requirements for fast and reliable access to storage across a cluster of servers at remote sites as well as in high-performance applications.
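
To make the idea of a mirrored, shared pool built from server disks concrete, here is a deliberately simplified sketch of synchronous mirroring across two nodes; the classes and node names are invented for illustration and do not represent DataCore's actual software.

    # Deliberately simplified sketch of synchronous mirroring across two server
    # nodes, so either copy can serve reads if the other is unavailable.
    # This is NOT DataCore's implementation; classes and node names are invented.
    class StorageNode:
        def __init__(self, name):
            self.name = name
            self.blocks = {}                     # block_id -> bytes, standing in for local disk/flash

        def write(self, block_id, data):
            self.blocks[block_id] = data

        def read(self, block_id):
            return self.blocks.get(block_id)

    class MirroredPool:
        def __init__(self, primary, secondary):
            self.nodes = (primary, secondary)

        def write(self, block_id, data):
            for node in self.nodes:              # a write completes only after both copies land
                node.write(block_id, data)

        def read(self, block_id):
            for node in self.nodes:              # the first node holding the block serves it
                data = node.read(block_id)
                if data is not None:
                    return data
            raise KeyError(block_id)

    pool = MirroredPool(StorageNode("server-a"), StorageNode("server-b"))
    pool.write("vm1-block-42", b"application data")
    print(pool.read("vm1-block-42"))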

Download this white paper to learn about:

•    The technical aspects of DataCore’s Virtual SAN solution - a deep dive into converged storage
•    How DataCore Virtual SAN addresses IT challenges such as single points of failure, poor application performance, low storage efficiency and utilization, and high infrastructure costs
•    Possible use cases and benefits of DataCore’s Virtual SAN

Boone County Health Center Runs Faster with Infinio
Boone County Health Center’s IT team needed a solution to improve the response times of virtual desktops during their peak times of morning usage when most employees log on for the day.
Boone County Health Center’s IT team needed a solution to improve the response times of virtual desktops during their peak times of morning usage when most employees log on for the day. Employees access electronic medical records (EMR), business reports, financial data, email and other essential applications required to manage daily operations and provide optimum patient care. Some medical staff and administrators occasionally log in from their homes on personal devices such as laptops or iPads. The Health Center initially considered purchasing an add-on all-flash array for the VDI to help eliminate slow response periods during boot storms. However, before making this type of investment, the Center wanted to explore alternative solutions.
Masergy accelerates VDI and storage performance with Infinio
To support its global users, Masergy needed to accelerate its virtual desktop infrastructure (VDI) and was unconvinced that spending budget on solid-state drive (SSD) solutions would work.
To support its global users, Masergy needed to accelerate its virtual desktop infrastructure (VDI) and was unconvinced that spending budget on solid-state drive (SSD) solutions would work. The team was investigating SSD solutions and options from SanDisk, VMware and Dell, as well as all-flash arrays, when it discovered Infinio at VMworld 2014. Unlike the solutions Masergy considered previously, the simplicity of the Infinio Accelerator and low price point caught the Masergy team’s attention. Fewer than six months later, Masergy’s Infinio installation was under way. Infinio provides an alternative to expensive, hardware-based solutions to address VDI performance, which is what Masergy wanted to improve.
Why Parallel I/O & Moore's Law Enable Virtualization and SDDC to Achieve their Potential
Today’s demanding applications, especially within virtualized environments, require high performance from storage to keep up with the rate of data acquisition and unpredictable demands of enterprise workloads. In a world that requires near instant response times and increasingly faster access to data, the needs of business-critical tier 1 enterprise applications, such as databases including SQL, Oracle and SAP, have been largely unmet.


The major bottleneck holding back the industry is I/O performance. This is because current systems still rely on device-level optimizations tied to specific disk and flash technologies; they lack software optimizations that can fully harness the latest advances in more powerful server system technologies, such as multicore architectures. As a result, they have not been able to keep up with the pace of Moore’s Law.
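
As a rough illustration of why more cores alone do not close this gap, the toy calculation below applies Amdahl's-law-style reasoning to a workload whose I/O path remains serialized; the fractions are invented, not figures from the paper.

    # Toy calculation (figures invented, not from the paper): if a fixed fraction
    # of the work is stuck behind a serialized I/O path, extra cores give rapidly
    # diminishing returns, so performance cannot track Moore's Law.
    def effective_speedup(cores, serial_fraction):
        # Amdahl's law: speedup is capped by the part that cannot be parallelized.
        return 1.0 / (serial_fraction + (1.0 - serial_fraction) / cores)

    io_serial_fraction = 0.40                    # assume 40% of the work waits on serialized I/O
    for cores in (2, 8, 32, 128):
        print(f"{cores:4d} cores -> {effective_speedup(cores, io_serial_fraction):.2f}x speedup")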

Waiting on IO: The Straw That Broke Virtualization’s Back
In this paper, we will discuss DataCore’s underlying parallel architecture, how it evolved over the years and how it results in a markedly different way to address the craving for IOPS (input/output operations per second) in a software-defined world.
Despite the increasing horsepower of modern multi-core processors and the promise of virtualization, we’re seeing relatively little progress in the amount of concurrent work they accomplish. That’s why we’re having to buy a lot more virtualized servers than we expected.

On closer examination, we find the root cause to be IO-starved virtual machines (VMs), especially for heavy online transactional processing (OLTP) apps, databases and mainstream IO-intensive workloads. Plenty of compute power is at their disposal, but servers have a tough time fielding inputs and outputs. This gives rise to an odd phenomenon of stalled virtualized apps while many processor cores remain idle.

So how exactly do we crank up IOs to keep up with the computational appetite while shaving costs? This can best be achieved by parallel IO technology designed to process IO across many cores simultaneously, thereby putting those idle CPUs to work. Such technology has been developed by DataCore Software, a long-time master of parallelism in the field of storage virtualization.
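
As a conceptual sketch only (not DataCore's implementation), the snippet below contrasts issuing reads one at a time with issuing them across a pool of worker threads, which is the basic idea behind processing I/O on many cores in parallel; the test file is a placeholder.

    # Conceptual sketch of the parallel-I/O idea: issue many independent reads
    # concurrently instead of serially, so waiting on one request does not leave
    # the rest of the machine idle. This only illustrates the principle; it is
    # not DataCore's Parallel I/O implementation, and the test file is a placeholder.
    import os
    import time
    from concurrent.futures import ThreadPoolExecutor

    def read_block(path, offset, size=4096):
        with open(path, "rb") as f:
            f.seek(offset)
            return f.read(size)

    def serial_reads(path, offsets):
        return [read_block(path, off) for off in offsets]

    def parallel_reads(path, offsets, workers=8):
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(lambda off: read_block(path, off), offsets))

    if __name__ == "__main__":
        path = "sample.dat"                      # placeholder test file
        with open(path, "wb") as f:
            f.write(os.urandom(4096 * 256))      # 1 MiB of dummy data
        offsets = list(range(0, 4096 * 256, 4096))
        for fn in (serial_reads, parallel_reads):
            t0 = time.perf_counter()
            fn(path, offsets)
            print(f"{fn.__name__}: {time.perf_counter() - t0:.4f} s")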


A Flash Storage Technical and Economic Primer
Topics: tegile, storage, flash
Flash technology is rapidly evolving. Chances are the game has changed since you last checked. With every step forward, flash storage is becoming faster, more reliable, and less expensive. And there’s more than one kind of flash technology out there. Some flash technologies focus on performance, while others balance performance with capacity. Read this white paper for a technical breakdown of the latest in flash storage. Learn how flash has changed in the last few years, and how the economics have shifted.
Although today’s NAND flash storage has its roots in 30-year-old technology, innovation has negated almost all of the challenges that are inherent in the media. Moreover, modern storage companies are taking even more software-based steps to further overcome such challenges. Given these advances, it’s clear that flash media use will continue to grow in the data center.
Understanding the Economics of Flash
Flash is faster, it’s more reliable, and it can transform your business. However, you might be concerned about its high cost. Read this white paper to learn how to identify the hidden costs of flash and choose a storage solution that delivers the performance and capacity your organization needs, yet fits within your budget.
In today’s world, IT organizations want everything to be better, faster, and cheaper. As changes come to the industry, it’s important to understand how to measure improvement. Specific to flash storage, it’s important to understand how choices about flash versus disk impact the bottom line. When it comes to making this determination, how can you be sure you're getting the most out of every dollar?
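
As a purely illustrative way to frame that bottom-line question, the snippet below compares cost per gigabyte with cost per IOPS using invented price points; none of the figures come from the paper.

    # Hypothetical comparison of $/GB versus $/IOPS for disk and flash.
    # Every price, capacity and IOPS figure is an invented illustration.
    options = {
        "10K HDD array":   {"price": 20_000, "capacity_gb": 20_000, "iops": 4_000},
        "All-flash array": {"price": 60_000, "capacity_gb": 20_000, "iops": 200_000},
    }

    for name, o in options.items():
        print(f"{name:16s}  ${o['price'] / o['capacity_gb']:6.2f}/GB   "
              f"${o['price'] / o['iops']:7.3f}/IOPS")
    # Flash costs more per gigabyte here, but far less per unit of performance.
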
The Guide to Selecting Flash for Virtual Environments
High-performance flash-based storage has dramatically improved the storage infrastructure’s ability to respond to the demands of servers and the applications that count on it. Nowhere does this improvement have more potential than in the virtualized-server environment. The performance benefits of flash are so great that it can be deployed indiscriminately and still deliver gains, but doing so may not allow the environment to take full advantage of flash performance.
Flash storage allows for a higher number of VMs per host. Increasing VM density reduces one of the largest ongoing costs: buying more physical servers, which are often configured with multiple processors and extra DRAM. At the same time, the high performance and low latency of flash allow more mission-critical applications to be virtualized. This report explains how storage and hypervisor vendors are getting smarter about leveraging this faster form of storage by including storage quality of service (QoS) intelligence in their systems or environments.
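
To illustrate what storage QoS intelligence can look like in principle, here is a generic token-bucket sketch that caps the IOPS a single VM may issue; the class and parameter names are invented and do not correspond to any particular vendor's feature.

    # Generic sketch of per-VM IOPS throttling with a token bucket, the kind of
    # mechanism storage QoS features typically rely on. Class and parameter names
    # are illustrative only and do not correspond to any particular vendor's API.
    import time

    class IopsLimiter:
        def __init__(self, iops_limit):
            self.rate = iops_limit               # I/Os allowed per second
            self.tokens = float(iops_limit)
            self.last = time.monotonic()

        def acquire(self):
            # Block until this VM is allowed to issue one more I/O.
            while True:
                now = time.monotonic()
                self.tokens = min(self.rate, self.tokens + (now - self.last) * self.rate)
                self.last = now
                if self.tokens >= 1.0:
                    self.tokens -= 1.0
                    return
                time.sleep((1.0 - self.tokens) / self.rate)

    limiter = IopsLimiter(iops_limit=500)        # cap a noisy VM at roughly 500 IOPS
    for _ in range(5):
        limiter.acquire()                        # call before issuing each read or write
        # ... issue the actual I/O here ...
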
Unlock the Full Performance of Your Servers
Unlock the full performance of your servers with DataCore Adaptive Parallel I/O Software


The Problem:

Current systems don't have software optimizations that can fully harness the latest advances in more powerful server system technologies.

As a result, I/O performance has been the major bottleneck holding back the industry.

2016 Citrix Performance Management Report
This 2nd-annual research report from DABCC and eG Innovations provides the results of a comprehensive survey of the Citrix user community that explored the current state of Citrix performance management and sought to better understand the current challenges, technology choices and best practices in the Citrix community. The survey results have been compiled into a data-rich, easily digestible report to provide you with benchmarks and new insights into the best practices for Citrix performance management.

Over the last decade, the Citrix portfolio of solutions has dramatically expanded to include Citrix XenApp, XenDesktop, XenServer, XenMobile, ShareFile and Workspace Cloud. And the use cases for Citrix technologies have also expanded with the needs of the market. Flexwork and telework, BYOD, mobile workspaces, PC refresh alternatives and remote partner access are now common user paradigms that are all supported by Citrix technologies.

To deliver the best possible user experience with all these Citrix technologies, Citrix environments must be well architected, but they must also be well monitored and managed so that problems are identified and diagnosed early, before they escalate and impact end users and business processes.

This 2nd-annual research report from DABCC and eG Innovations provides the results of a comprehensive survey of the Citrix user community with a goal of exploring the current state of Citrix performance management and helping Citrix users better understand current challenges, technology choices and best practices in the Citrix community.

The survey results have been compiled into a data-rich, easy-to-digest report to provide you with benchmarks and new insights into the best practices for effective Citrix performance management.

Citrix Ready Whitepaper: Top Eight Best Practices for Deploying Citrix XenApp and XenDesktop 7.7
Citrix XenApp and XenDesktop 7.7 are fast becoming the standard platforms for deploying application and desktop virtualization. The new FlexCast Management Architecture (FMA) provides a unified platform that makes application and desktop delivery fast and easy. Discover how to take advantage of all the new Citrix features and enhancements to improve the security, manageability and remote access of your virtual applications and desktops in this Citrix Ready white paper.

Citrix XenApp and XenDesktop 7.7 are fast becoming the standard platforms for deploying application and desktop virtualization in today’s expanding enterprise, providing a unified platform that makes application and desktop delivery fast and easy.

You can utilize the new FlexCast Management Architecture (FMA) as well as several new features and enhancements to improve the security, manageability and remote access of your virtual applications and desktops.

In this Citrix-Ready white paper, Top Eight Best Practices for Deploying Citrix XenApp and XenDesktop 7.7, you will discover real-world experiences as well as the top eight best practices in deploying Citrix XenApp/XenDesktop 7.x, including how to:

  • Be cloud ready by separating the management plane and the workload
  • Enhance scalability and performance for multi-media applications with GPUs
  • Take advantage of new XenApp/XenDesktop 7.x features including Connection Leasing, anonymous user access, session pre-launch and lingering, and application folders/grouping support
  • Improve the user experience on broadband wireless connections with the Framehawk technology (available in Feature Pack 2)
  • Benefit from Citrix Provisioning Services enhancements and understand the trade-offs between PVS and Machine Creation Services
  • Analyze virtualization platform choices and how to optimize the deployment to get more users per server
  • Leverage built-in Citrix tools for performance visibility
  • Implement end-to-end monitoring, diagnosis and reporting for complete visibility across all Citrix and non-Citrix tiers in a single unified view

Performance Monitoring for Your Citrix Infrastructure - Considerations & Checklist
Citrix environments incorporate numerous components as well as diverse back-end application elements and user-specific items – all complex variables that can affect the user experience. This white paper provides a checklist of monitoring-related criteria that should be considered as part of due diligence by enterprises and service providers to effectively manage the performance of their complex Citrix infrastructures

Numerous components, diverse back-end application elements and user-specific items make up today’s complex Citrix infrastructures, and all of them can affect the user experience. At any given time, one or more of these components may fail or experience an issue, and service organizations spend a significant amount of time and effort diving into each of the various components in order to properly address and resolve problems.
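
As a trivial illustration of checking every component rather than diving in blindly, the sketch below probes a list of service endpoints and flags any that do not answer; the host names and ports are hypothetical placeholders, not values from the paper.

    # Minimal availability probe across several infrastructure components.
    # Host names and ports are hypothetical placeholders.
    import socket

    components = {
        "StoreFront":          ("storefront.example.internal", 443),
        "Delivery Controller": ("ddc.example.internal", 80),
        "SQL Server":          ("sql.example.internal", 1433),
        "License Server":      ("license.example.internal", 27000),
    }

    for name, (host, port) in components.items():
        try:
            with socket.create_connection((host, port), timeout=3):
                status = "reachable"
        except OSError:
            status = "NOT reachable - investigate"
        print(f"{name:20s} {host}:{port:<5d} {status}")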

Download this white paper to get a comprehensive checklist of monitoring-related criteria that enterprises and service providers should consider as part of due diligence to effectively manage the performance of their complex Citrix infrastructures. Jo Harder, Application and Desktop Virtualization Analyst, presents the checklist, with importance levels and vendor considerations, broken out into the following categories:

  • Monitoring of all Datacenter Components
  • Monitoring of User Experience
  • Monitoring of Citrix Key Performance Indicators
  • Administration and Reporting
  • Monitoring System Functionality
  • Vendor Support and Product Development