Virtualization Technology News and Information
White Papers
White Papers Search Results
Showing 1 - 16 of 37 white papers, page 1 of 3.
Blueprint for Delivering IT-as-a-Service - 9 Steps for Success
You’ve got the materials (your constantly changing IT infrastructure). You’ve got the work order (your boss made that perfectly clear). But now what? Delivering IT-as-a-service has never been more challenging than it is today...virtualization, private, public, and hybrid cloud computing are drastically changing how IT needs to provide service delivery and assurance. You know exactly what you need to do, the big question is HOW to do it. If only there was some kind of blueprint for this…
You’ve got the materials (your constantly changing IT infrastructure). You’ve got the work order (your boss made that perfectly clear). But now what? Delivering IT-as-a-service has never been more challenging than it is today...virtualization, private, public, and hybrid cloud computing are drastically changing how IT needs to provide service delivery and assurance. You know exactly what you need to do, the big question is HOW to do it. If only there was some kind of blueprint for this…

Based on our experience working with Zenoss customers who have built highly virtualized and cloud infrastructures, we know what it takes to operationalize IT-as-a-Service in today’s ever-changing technical environment. We’ve put together a guided list of questions in this eBook around the following topics to help you build your blueprint for getting the job done, and done right:
  • Unified Operations
  • Maximum Automation
  • Model Driven
  • Service Oriented
  • Multi-Tenant
  • Horizontal Scale
  • Open Extensibility
  • Subscription
  • Extreme Service
Application Response Time for Virtual Operations
For applications running in virtualized, distributed and shared environments it will no longer work to infer the performance of an application by looking at various resource utilization statistics. Rather it is essential to define application performance by measuring response and throughput of every application in production. This paper makes the case for how application performance management for virtualized and cloud based environments needs to be modernized to suit these new environments.

Massive changes are occurring to how applications are built and how they are deployed and run. The benefits of these changes are dramatically increased responsiveness to the business (business agility), increased operational flexibility, and reduced operating costs.

The environments onto which these applications are deployed are also undergoing a fundamental change. Virtualized environments offer increased operational agility which translates into a more responsive IT Operations organization. Cloud Computing offers applications owners a complete out-sourced alternative to internal data center execution environments. IT organizations are in turn responding to public cloud with IT as a Service (IaaS) initiatives.

For applications running in virtualized, distributed and shared environments, it will no longer work to infer the “performance” of an application by looking at various resource utilization statistics. Rather it will become essential to define application performance as response time – and to directly measure the response time and throughput of every application in production. This paper makes the case for how application performance management for virtualized and cloud based environments needs to be modernized to suit these new environments.
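The shift the paper argues for, from inferring health out of utilization counters to directly measuring per-transaction response time and throughput, can be illustrated with a minimal sketch. This is a hypothetical illustration, not any vendor's implementation; the `ResponseTimeTracker` class and its method names are invented for the example.

```python
import time
from collections import defaultdict

class ResponseTimeTracker:
    """Hypothetical sketch: record each transaction's response time
    and derive throughput, instead of inferring performance from
    resource utilization statistics."""
    def __init__(self):
        self.samples = defaultdict(list)  # app name -> latencies (seconds)

    def record(self, app, started_at):
        # Call when a transaction completes; started_at is its start time.
        self.samples[app].append(time.monotonic() - started_at)

    def report(self, app, window_seconds):
        latencies = sorted(self.samples[app])
        count = len(latencies)
        p95 = latencies[int(0.95 * (count - 1))] if count else None
        return {
            "throughput_per_s": count / window_seconds,
            "p95_response_s": p95,
        }

tracker = ResponseTimeTracker()
for latency in (0.10, 0.12, 0.30, 0.11):
    start = time.monotonic() - latency  # simulate a completed request
    tracker.record("billing", start)

stats = tracker.report("billing", window_seconds=60)
```

The point of the sketch is that the numbers users actually feel (response time percentiles, requests per second) come straight from the application, and stay meaningful even when the VM beneath it shares CPU and I/O with dozens of neighbors.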

CIO Guide to Virtual Server Data Protection
Server virtualization is changing the face of the modern data center. CIOs are looking for ways to virtualize more applications, and faster across the IT spectrum.
Server virtualization is changing the face of the modern data center. CIOs are looking for ways to virtualize more applications, faster across the IT spectrum. Selecting the right data protection solution that understands the new virtual environment is a critical success factor in the journey to cloud-based infrastructure. This guide looks at the key questions CIOs should be asking to ensure a successful virtual server data protection solution.
Five Fundamentals of Virtual Server Protection
The benefits of server virtualization are compelling and are driving the transition to large scale virtual server deployments.
The benefits of server virtualization are compelling and are driving the transition to large scale virtual server deployments. From cost savings recognized through server consolidation or business flexibility and agility inherent in the emergent private and public cloud architectures, virtualization technologies are rapidly becoming a cornerstone of the modern data center. With Commvault's software, you can take full advantage of the developments in virtualization technology and enable private and public cloud data centers while continuing to meet all your data management, protection and retention needs. This whitepaper outlines the top 5 challenges to overcome in order to take advantage of the benefits of virtualization for your organization.
Workload Routing & Reservation:  5 Reasons Why It Is Critical To Virtual & Cloud Operation
Topics: cirba
When observing the current generation virtual and internal cloud environments, it appears that the primary planning and management tasks have also made the transition to purpose-built software solutions. But when you dig in a little deeper, there is one area that is still shamefully behind: the mechanism to determine what infrastructure to host workloads on is still in the stone ages. The ability to understand the complete set of deployed infrastructure, quantify and qualify the hosting capabilities of each environment, and to make informed decisions regarding where to host new applications and workloads, is still the realm of spreadsheets and best guesses.
When observing the current generation virtual and internal cloud environments, it appears that the primary planning and management tasks have also made the transition to purpose-built software solutions. But when you dig in a little deeper, there is one area that is still shamefully behind: the mechanism to determine what infrastructure to host workloads on is still in the stone ages. The ability to understand the complete set of deployed infrastructure, quantify and qualify the hosting capabilities of each environment, and to make informed decisions regarding where to host new applications and workloads, is still the realm of spreadsheets and best guesses.

This paper identifies five reasons why the entire process of workload routing and capacity reservation must make the transition to become a core, automated component of IT planning and management.

Optimizing Capacity Forecasting Processes with a Capacity Reservations System for IT
Virtually every area of human endeavour that involves the use of shared resources relies on a reservation system to manage the booking of these assets. Hotels, airlines, rental companies and even the smallest of restaurants rely on reservation systems to optimize the use of their assets and balance customer satisfaction with profitability. Or, as economists would say, strike a balance between supply and demand.

Virtually every area of human endeavour that involves the use of shared resources relies on a reservation system to manage the booking of these assets. Hotels, airlines, rental companies and even the smallest of restaurants rely on reservation systems to optimize the use of their assets and balance customer satisfaction with profitability. Or, as economists would say, strike a balance between supply and demand.

So how can a modern IT environment expect to operate effectively without having a functioning capacity reservation system? The simple answer is that it can't. With the rise of cloud computing, where resources are shared on a larger scale and capacity is commoditized, modeling future bookings and proper forecasting of demand is critical to the survival of IT. Not having proper systems in place leaves forecasting to trending and guesswork - a dangerous proposition that usually results in over-provisioning and excessive capacity.

Download this paper to learn how to manage the demand pipeline for new workload placements in order to improve the accuracy of capacity forecasting and increase agility in response to new workload placement requests.
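The reservation idea the paper describes, booking future demand against known capacity the way a hotel books rooms, can be sketched in a few lines. This is a hypothetical illustration of the concept; the `HostPool` class and its fields are invented for the example, not a real product's API.

```python
from dataclasses import dataclass, field

@dataclass
class HostPool:
    """Hypothetical sketch of a capacity reservation ledger: future
    bookings count against supply just as placed workloads do, so
    forecasts rest on committed demand rather than trending."""
    cpu_capacity: int                 # total vCPUs in the pool
    mem_capacity: int                 # total GB of RAM
    reservations: list = field(default_factory=list)

    def available(self):
        used_cpu = sum(r["cpu"] for r in self.reservations)
        used_mem = sum(r["mem"] for r in self.reservations)
        return self.cpu_capacity - used_cpu, self.mem_capacity - used_mem

    def reserve(self, name, cpu, mem):
        free_cpu, free_mem = self.available()
        if cpu > free_cpu or mem > free_mem:
            return False  # the demand pipeline would exceed supply
        self.reservations.append({"name": name, "cpu": cpu, "mem": mem})
        return True

pool = HostPool(cpu_capacity=64, mem_capacity=256)
assert pool.reserve("erp-upgrade", cpu=32, mem=128)
assert pool.reserve("analytics", cpu=24, mem=96)
assert not pool.reserve("big-data", cpu=16, mem=64)  # would overbook
```

Even a ledger this simple replaces guesswork with a concrete answer to "can this environment take the next workload?", which is the paper's core argument for moving reservations out of spreadsheets.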

Server Capacity Defrag
This is not a paper on disk defrag. Although conceptually similar, it describes an entirely new approach to server optimization that performs a similar operation on the compute, memory and IO capacity of entire virtual and cloud environments.

This is not a paper on disk defrag. Although conceptually similar, it describes an entirely new approach to server optimization that performs a similar operation on the compute, memory and IO capacity of entire virtual and cloud environments.

Capacity defragmentation is a concept that is becoming increasingly important in the management of modern data centers. As virtualization increases its penetration into production environments, and as public and private clouds move to the forefront of the IT mindset, the ability to leverage this newly-found agility while at the same time driving high efficiency (and low risk) is a real game changer. This white paper outlines how managers of IT environments make the transition from old-school capacity management to new-school efficiency management.

The Path to Hybrid Cloud: Intelligent Bursting To Amazon Web Services & Microsoft Azure
In this whitepaper you will learn: The challenges in implementing an effective hybrid cloud; How key vendors are addressing these challenges; How to answer what, when and where to burst.

The hybrid cloud has been heralded as a promising IT operational model enabling enterprises to maintain security and control over the infrastructure on which their applications run. At the same time, it promises to maximize ROI from their local data center and leverage public cloud infrastructure for an occasional demand spike. However, these benefits don’t come without challenges.

In this whitepaper you will learn:
•    The challenges in implementing an effective hybrid cloud
•    How key vendors are addressing these challenges
•    How to answer what, when and where to burst

IDC Technology Spotlight - Zerto Cloud Continuity Platform
This IDC Technology Spotlight highlights the challenges IT professionals face with aggressive service-level requirements and tight budgets. Learn how the Zerto Cloud Continuity Platform helps resolve these issues.

A recent IDC survey of small and medium-sized business (SMB) users revealed that 67% have a recovery time requirement of less than four hours, and 31% have a recovery time requirement of less than two hours. Additionally, IDC estimates that as many as half of all organizations have insufficient business continuity and disaster recovery plans to meet business requirements, or to even survive a disaster.

Although business continuity is perhaps the top use case for cloud computing, simply focusing on this one use limits the broad potential of cloud, especially in a hybrid cloud context.

Citrix AppDNA and FlexApp: Application Compatibility Solution Analysis
Desktop computing has rapidly evolved over the last 10 years. Once defined as physical PCs, Windows desktop environments now include everything from virtual to shared hosted (RDSH), to cloud based. With these changes, the enterprise application landscape has also changed drastically over the last few years.
Desktop computing has rapidly evolved over the last 10 years. Once defined as physical PCs, Windows desktop environments now include everything from virtual to shared hosted (RDSH), to cloud based. With these changes, the enterprise application landscape has also changed drastically over the last few years.

This whitepaper provides an overview of Citrix AppDNA with Liquidware Labs FlexApp.

Zerto Offsite Cloud Backup & Data Protection
Offsite Backup is a new paradigm in data protection that combines hypervisor-based replication with longer retention. This greatly simplifies data protection for IT organizations. The ability to leverage the data at the disaster recovery target site or in the cloud for VM backup eliminates the impact on production workloads.

Zerto Offsite Backup in the Cloud

What is Offsite Backup?

Offsite Backup is a new paradigm in data protection that combines hypervisor-based replication with longer retention. This greatly simplifies data protection for IT organizations. The ability to leverage the data at the disaster recovery target site or in the cloud for VM backup eliminates the impact on production workloads.

Why Cloud Backup?

  • Offsite Backup combines replication and long retention in a new way.
  • The repository can be located in public cloud storage, a private cloud, or as part of a hybrid cloud solution.
  • Copies are saved on a daily, weekly and monthly schedule.
  • The data volumes and configuration information are included to allow VM backups to be restored on any compatible platform, cloud or otherwise.
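The daily, weekly and monthly schedule above is the classic tiered-retention rotation. A minimal sketch of such a pruning policy (a hypothetical illustration, not Zerto's implementation; the `retained` function and its defaults are invented for the example):

```python
from datetime import date, timedelta

def retained(copies, daily=7, weekly=4, monthly=12):
    """Hypothetical sketch of a daily/weekly/monthly retention policy:
    keep the last `daily` days, the last `weekly` week-end (Sunday)
    copies, and the last `monthly` month-end copies; prune the rest."""
    copies = sorted(copies, reverse=True)          # newest first
    keep = set(copies[:daily])                     # most recent dailies
    weeklies = [d for d in copies if d.weekday() == 6]
    keep.update(weeklies[:weekly])
    monthlies = [d for d in copies if (d + timedelta(days=1)).day == 1]
    keep.update(monthlies[:monthly])
    return sorted(keep)

# 90 consecutive daily copies ending 2016-03-31
copies = [date(2016, 3, 31) - timedelta(days=i) for i in range(90)]
kept = retained(copies)
```

The effect is that recent history stays fine-grained while older history thins out to weekly and then monthly points, which is how long retention stays affordable in offsite or cloud storage.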
Introducing Cloud Disaster Recovery
Can mission-critical apps really be protected in the cloud? Introducing: Cloud Disaster Recovery. Today, enterprises of all sizes are virtualizing their mission-critical applications, either within their own data center, or with an external cloud vendor.

Can mission-critical apps really be protected in the cloud?

Introducing: Cloud Disaster Recovery

Today, enterprises of all sizes are virtualizing their mission-critical applications, either within their own data center, or with an external cloud vendor. One key driver is to leverage the flexibility and agility virtualization offers to increase availability, business continuity and disaster recovery.

With the cloud becoming more of an option, enterprises of all sizes are looking for the cloud, be it public, hybrid or private, to become part of their BC/DR solution. However, these options do not always exist. Virtualization has created the opportunity, but there is still a significant technology gap. Mission-critical applications can be effectively virtualized and managed; however, the corresponding data cannot be effectively protected in a cloud environment.

Additional Challenges for Enterprises with the Cloud:

  • Multi-tenancy
  • Data protection & mobility
  • Lack of centralized management

Solutions with Zerto Virtual Replication:

  • Seamless integration with no environment change
  • Multi-site support
  • Hardware-agnostic replications
A New Approach to Per User Application Management
Our premise is simple: existing methodologies for delivering and deploying Windows applications are based upon outmoded ideas and outdated technology. There remains a need for a product that makes it simple for each user to have their Windows applications individually tailored for their device. When a user logs on they should see only the applications that they are licensed to use regardless of whether they are using cloud, virtual or traditional desktops.

A simple truth: Current application delivery and deployment solutions for Windows®-based desktops are often not fast enough, and the methods employed introduce complexities and limitations that cost enterprises valuable time, money and productivity. There is a strong need for a solution that is faster to deploy, simpler to use, and improves productivity rather than degrades it. In fact, the best solution would seamlessly and instantaneously personalize the entire desktop, from profiles and printers to applications and extensions, while supporting license compliance and cost optimization. And, of course, it wouldn’t matter if the target desktops were physical, virtual, or cloud-based. FSLogix is delivering that solution today.

UNIQUE, CUTTING EDGE TECHNOLOGY

FSLogix has devised a revolutionary technique called Image Masking to create a single Unified Base Image that hides everything a logged in user shouldn’t see, providing predictable and real-time access to applications and profiles. This approach is driving unprecedented success in image reduction, with a side benefit of license cost optimization. Image masking functions identically across a wide range of Windows-based platforms, greatly simplifying the path from traditional to virtual environments, and dramatically reducing the management overhead required for enterprise desktops. This solution eliminates multiple layers of management infrastructure, creating a single, unified approach to image management, profile access, and application delivery.
The Visionary’s Guide to VM-aware storage
The storage market is noisy. On the surface, storage providers tout all flash, more models and real-time analytics. But now a new category of storage has emerged—with operating systems built on virtual machines, and specifically attuned to virtualization and cloud. It’s called VM-aware storage (VAS). Fortunately, this guide offers you (the Visionary) a closer look at VAS and the chance to see storage differently.

The storage market is noisy. On the surface, storage providers tout all flash, more models and real-time analytics. But under the covers lies a dirty little secret—their operating systems (the foundation of storage) are all the same… built on LUNs and volumes.

But now a new category of storage has emerged—with operating systems built on virtual machines, and specifically attuned to virtualization and cloud. It’s called VM-aware storage (VAS), and if you’ve got a large virtual footprint, it’s something you need to explore further. Fortunately, this guide offers you (the Visionary) a closer look at VAS and the chance to see storage differently.

Top 5 Reasons Cloud Service Providers Choose Tintri
Tintri storage is specifically designed for virtualized workloads and cloud. If you’re a cloud provider that has built your business on differentiated services, Tintri is built for you.

Tintri storage is specifically designed for virtualized workloads and cloud. If you’re a cloud provider that has built your business on differentiated services, Tintri is built for you.

Leaders across industries trust Tintri. 800+ customers store 500,000+ virtualized workloads with a 50+ PB footprint on Tintri. That includes more than 75 of the world’s fastest growing cloud service providers, who are bolstering their business on a Tintri storage backbone.

Why join their ranks? When it comes to the success of your business, storage is a make or break decision. Read the five reasons why Tintri will make it happen.

2016 Citrix Performance Management Report
This 2nd-annual research report from DABCC and eG Innovations provides the results of a comprehensive survey of the Citrix user community that explored the current state of Citrix performance management and sought to better understand the current challenges, technology choices and best practices in the Citrix community. The survey results have been compiled into a data-rich, easily-digestible report to provide you with benchmarks and new insights into the best practices for Citrix performance management.

Over the last decade, the Citrix portfolio of solutions has dramatically expanded to include Citrix XenApp, XenDesktop, XenServer, XenMobile, Sharefile and Workspace Cloud. And, the use cases for Citrix technologies have also expanded with the needs of the market. Flexwork and telework, BYOD, mobile workspaces, PC refresh alternatives and remote partner access are now common user paradigms that are all supported by Citrix technologies.

To deliver the best possible user experience with all these Citrix technologies, Citrix environments need to be well architected but also well monitored and managed to identify and diagnose problems early on and prevent issues from escalating and impacting end users and business processes.

This 2nd-annual research report from DABCC and eG Innovations provides the results of a comprehensive survey of the Citrix user community with a goal of exploring the current state of Citrix performance management and helping Citrix users better understand current challenges, technology choices and best practices in the Citrix community.

The survey results have been compiled into a data-rich, easy-to-digest report to provide you with benchmarks and new insights into the best practices for effective Citrix performance management.