Virtualization Technology News and Information
White Papers Search Results
Showing 1 - 16 of 28 white papers, page 1 of 2.
Blueprint for Delivering IT-as-a-Service - 9 Steps for Success
You’ve got the materials (your constantly changing IT infrastructure). You’ve got the work order (your boss made that perfectly clear). But now what? Delivering IT-as-a-service has never been more challenging than it is today...virtualization, private, public, and hybrid cloud computing are drastically changing how IT needs to provide service delivery and assurance. You know exactly what you need to do, the big question is HOW to do it. If only there was some kind of blueprint for this…
You’ve got the materials (your constantly changing IT infrastructure). You’ve got the work order (your boss made that perfectly clear). But now what? Delivering IT-as-a-service has never been more challenging than it is today...virtualization, private, public, and hybrid cloud computing are drastically changing how IT needs to provide service delivery and assurance. You know exactly what you need to do, the big question is HOW to do it. If only there was some kind of blueprint for this…

Based on our experience working with Zenoss customers who have built highly virtualized and cloud infrastructures, we know what it takes to operationalize IT-as-a-Service in today’s ever-changing technical environment. We’ve put together a guided list of questions in this eBook around the following topics to help you build your blueprint for getting the job done, and done right:
  • Unified Operations
  • Maximum Automation
  • Model Driven
  • Service Oriented
  • Multi-Tenant
  • Horizontal Scale
  • Open Extensibility
  • Subscription
  • Extreme Service
Application Response Time for Virtual Operations
For applications running in virtualized, distributed and shared environments it will no longer work to infer the performance of an application by looking at various resource utilization statistics. Rather it is essential to define application performance by measuring response and throughput of every application in production. This paper makes the case for how application performance management for virtualized and cloud based environments needs to be modernized to suit these new environments.

Massive changes are occurring to how applications are built and how they are deployed and run. The benefits of these changes are dramatically increased responsiveness to the business (business agility), increased operational flexibility, and reduced operating costs.

The environments onto which these applications are deployed are also undergoing a fundamental change. Virtualized environments offer increased operational agility, which translates into a more responsive IT Operations organization. Cloud Computing offers application owners a completely outsourced alternative to internal data center execution environments. IT organizations are in turn responding to public cloud with IT-as-a-Service (ITaaS) initiatives.

For applications running in virtualized, distributed and shared environments, it will no longer work to infer the “performance” of an application by looking at various resource utilization statistics. Rather it will become essential to define application performance as response time – and to directly measure the response time and throughput of every application in production. This paper makes the case for how application performance management for virtualized and cloud based environments needs to be modernized to suit these new environments.
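
To make that distinction concrete, here is a minimal Python sketch (illustrative only; the class and window size are assumptions, not taken from the paper) of measuring response time and throughput directly for every transaction instead of inferring them from resource utilization statistics:

    import time
    from collections import deque

    class ResponseTimeMonitor:
        """Sketch: measure per-request response time and rolling throughput
        directly, rather than inferring performance from CPU or memory stats."""

        def __init__(self, window_seconds=60):
            self.window = window_seconds
            self.samples = deque()  # (completed_at, elapsed_seconds)

        def record(self, started_at, completed_at):
            self.samples.append((completed_at, completed_at - started_at))
            cutoff = completed_at - self.window
            while self.samples and self.samples[0][0] < cutoff:
                self.samples.popleft()

        def throughput(self):
            # Requests completed per second over the rolling window.
            return len(self.samples) / self.window

        def avg_response_time(self):
            if not self.samples:
                return 0.0
            return sum(elapsed for _, elapsed in self.samples) / len(self.samples)

    # Usage: wrap each production transaction with a start/stop measurement.
    monitor = ResponseTimeMonitor(window_seconds=60)
    start = time.time()
    # ... handle one application request here ...
    monitor.record(start, time.time())
    print(monitor.avg_response_time(), monitor.throughput())

In practice an APM agent would collect these measurements automatically, but the principle is the same: performance is defined by what users experience, not by how busy the hardware is.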

CIO Guide to Virtual Server Data Protection
Server virtualization is changing the face of the modern data center. CIOs are looking for ways to virtualize more applications, faster across the IT spectrum.
Server virtualization is changing the face of the modern data center. CIOs are looking for ways to virtualize more applications, faster across the IT spectrum. Selecting the right data protection solution that understands the new virtual environment is a critical success factor in the journey to cloud-based infrastructure. This guide looks at the key questions CIOs should be asking to ensure a successful virtual server data protection solution.
Five Fundamentals of Virtual Server Protection
The benefits of server virtualization are compelling and are driving the transition to large scale virtual server deployments.
The benefits of server virtualization are compelling and are driving the transition to large-scale virtual server deployments. From the cost savings realized through server consolidation to the business flexibility and agility inherent in emergent private and public cloud architectures, virtualization technologies are rapidly becoming a cornerstone of the modern data center. With Commvault's software, you can take full advantage of the developments in virtualization technology and enable private and public cloud data centers while continuing to meet all your data management, protection and retention needs. This whitepaper outlines the top 5 challenges to overcome in order to take advantage of the benefits of virtualization for your organization.
Citrix AppDNA and FlexApp: Application Compatibility Solution Analysis
Desktop computing has rapidly evolved over the last 10 years. Once defined as physical PCs, Windows desktop environments now include everything from virtual to shared hosted (RDSH), to cloud based. With these changes, the enterprise application landscape has also changed drastically over the last few years.
Desktop computing has rapidly evolved over the last 10 years. Once defined as physical PCs, Windows desktop environments now include everything from virtual to shared hosted (RDSH), to cloud based. With these changes, the enterprise application landscape has also changed drastically over the last few years.

This whitepaper provides an overview of Citrix AppDNA with Liquidware Labs FlexApp.

Zerto Offsite Cloud Backup & Data Protection
Offsite Backup is a new paradigm in data protection that combines hypervisor-based replication with longer retention. This greatly simplifies data protection for IT organizations. The ability to leverage the data at the disaster recovery target site or in the cloud for VM backup eliminates the impact on production workloads.

Zerto Offsite Backup in the Cloud

What is Offsite Backup?

Offsite Backup is a new paradigm in data protection that combines hypervisor-based replication with longer retention. This greatly simplifies data protection for IT organizations. The ability to leverage the data at the disaster recovery target site or in the cloud for VM backup eliminates the impact on production workloads.

Why Cloud Backup?

  • Offsite Backup combines replication and long retention in a new way.
  • The repository can be located in public cloud storage, a private cloud, or as part of a hybrid cloud solution.
  • Copies are saved on a daily, weekly and monthly schedule (a simple retention sketch follows this list).
  • The data volumes and configuration information are included so that VM backups can be restored on any compatible platform, cloud or otherwise.
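
The daily/weekly/monthly schedule above resembles a classic grandfather-father-son retention policy. The Python sketch below is purely illustrative of how such a schedule might decide which copies to keep; the retention counts and dates are hypothetical and are not Zerto defaults:

    from datetime import date, timedelta

    def keep_copy(copy_date, today, daily=7, weekly=4, monthly=12):
        """Hypothetical GFS-style retention: keep daily copies for `daily` days,
        weekly copies (Sundays) for `weekly` weeks, and monthly copies
        (first of the month) for roughly `monthly` months."""
        age_days = (today - copy_date).days
        if age_days < daily:
            return True
        if copy_date.weekday() == 6 and age_days < weekly * 7:
            return True
        if copy_date.day == 1 and age_days < monthly * 31:
            return True
        return False

    today = date(2016, 6, 1)
    copies = [today - timedelta(days=n) for n in range(120)]
    retained = [c for c in copies if keep_copy(c, today)]
    print(f"{len(retained)} of {len(copies)} daily copies retained")
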
A New Approach to Per User Application Management
Our premise is simple: existing methodologies for delivering and deploying Windows applications are based upon outmoded ideas and outdated technology. There remains a need for a product that makes it simple for each user to have their Windows applications individually tailored for their device. When a user logs on, they should see only the applications they are licensed to use, regardless of whether they are using cloud, virtual or traditional desktops.

A simple truth: Current application delivery and deployment solutions for Windows®-based desktops are often not fast enough, and the methods employed introduce complexities and limitations that cost enterprises valuable time, money and productivity. There is a strong need for a solution that is faster to deploy and simpler to use, and that improves productivity rather than degrading it. In fact, the best solution would seamlessly and instantaneously personalize the entire desktop, from profiles and printers to applications and extensions, while supporting license compliance and cost optimization. And, of course, it wouldn’t matter if the target desktops were physical, virtual, or cloud-based. FSLogix is delivering that solution today.

UNIQUE, CUTTING EDGE TECHNOLOGY

FSLogix has devised a revolutionary technique called Image Masking to create a single Unified Base Image that hides everything a logged in user shouldn’t see, providing predictable and real-time access to applications and profiles. This approach is driving unprecedented success in image reduction, with a side benefit of license cost optimization. Image masking functions identically across a wide range of Windows-based platforms, greatly simplifying the path from traditional to virtual environments, and dramatically reducing the management overhead required for enterprise desktops. This solution eliminates multiple layers of management infrastructure, creating a single, unified approach to image management, profile access, and application delivery.
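
FSLogix does not detail the internals of Image Masking here, so the Python sketch below is only a conceptual illustration of the idea: install every application once in a single unified base image, then hide whatever the logged-in user is not entitled to see. All application names and entitlements are hypothetical:

    # Conceptual illustration only; not FSLogix's actual implementation.
    # One shared "unified base image" contains every application; at logon,
    # anything the user is not entitled to is masked from view.

    UNIFIED_BASE_IMAGE = {
        "Office":  r"C:\Program Files\Office",
        "Visio":   r"C:\Program Files\Visio",
        "Project": r"C:\Program Files\Project",
        "AcmeCAD": r"C:\Program Files\AcmeCAD",  # hypothetical application
    }

    ENTITLEMENTS = {
        "alice": {"Office", "Visio"},
        "bob":   {"Office", "Project", "AcmeCAD"},
    }

    def visible_applications(user):
        """Return only the applications this user is licensed to use;
        everything else in the shared image stays hidden (masked)."""
        allowed = ENTITLEMENTS.get(user, set())
        return {app: path for app, path in UNIFIED_BASE_IMAGE.items() if app in allowed}

    print(visible_applications("alice"))  # Office and Visio only
    print(visible_applications("bob"))    # Office, Project, AcmeCAD
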
The Visionary’s Guide to VM-aware storage
The storage market is noisy. On the surface, storage providers tout all-flash, more models and real-time analytics. But now a new category of storage has emerged—with operating systems built on virtual machines, and specifically attuned to virtualization and cloud. It’s called VM-aware storage (VAS). Fortunately, this guide offers you (the Visionary) a closer look at VAS and the chance to see storage differently.

The storage market is noisy. On the surface, storage providers tout all-flash, more models and real-time analytics. But under the covers lies a dirty little secret—their operating systems (the foundation of storage) are all the same… built on LUNs and volumes.

But now a new category of storage has emerged—with operating systems built on virtual machines, and specifically attuned to virtualization and cloud. It’s called VM-aware storage (VAS), and if you’ve got a large virtual footprint, it’s something you need to explore further. Fortunately, this guide offers you (the Visionary) a closer look at VAS and the chance to see storage differently.

Top 5 Reasons Cloud Service Providers Choose Tintri
Tintri storage is specifically designed for virtualized workloads and cloud. If you’re a cloud provider that has built your business on differentiated services, Tintri is built for you.

Tintri storage is specifically designed for virtualized workloads and cloud. If you’re a cloud provider that has built your business on differentiated services, Tintri is built for you.

Leaders across industries trust Tintri. 800+ customers store 500,000+ virtualized workloads with a 50+ PB footprint on Tintri. That includes more than 75 of the world’s fastest growing cloud service providers, who are bolstering their business on a Tintri storage backbone.

Why join their ranks? When it comes to the success of your business, storage is a make-or-break decision. Read the five reasons Tintri will make it happen.

Growing at 35% per year, Vembu branches out from its backup/recovery roots
Cloud-based backup/recovery is a cutthroat business with shrinking margins, commoditization and a surfeit of contenders trying to get a piece of the pie. The company's decision to push its resellers away from rebranding and into carrying Vembu's name on their services will give it much-needed name/brand recognition in a crowded arena.

Vembu has grown its revenue 35% annually over the past two years and is on track to meet that mark in 2014. Key product additions this year include a suite of CRM applications and the introduction of on-premises virtual appliances (with physical appliances to come in the near future). The latter move puts Vembu in more direct competition with relatively well-known players in the hybrid cloud backup battle.

Vembu is celebrating its 10-year anniversary by exceeding the 60,000-customer milestone, sold mainly through its 4,400 channel partners. That compares with 55,000 customers and 4,000 resellers in February 2014. The company has added 400 resellers so far this year, and has begun to emphasize VARs in addition to its traditional target market of MSPs. Notable service-provider partners include Verizon's Terremark subsidiary, mindSHIFT Technologies, HostPapa and Hitachi Data Systems. The profitable Vembu claims to have exceeded 35% revenue growth in each of the past two years, and is on track for similar gains this year.

The company expects to have 200 employees by the end of 2014 (up from 160 in February), and 300 by the end of 2015. Most of its employees are near its headquarters in Chennai, India (with 65% engaged in R&D), but Vembu has been steadily expanding internationally. It opened an office in London this year, and relocated its US headquarters to Addison, Texas, where it expects to grow its workforce from 15 employees this year to 50 next year. Vembu's worldwide distribution of partners roughly equates to its worldwide revenue distribution: 70% North America, 20% Europe and 10% Asia-Pacific – a distribution that has remained fairly steady over the past year. However, although about 30% of its revenue comes from outside North America today, Vembu hopes to increase that to 50% in 2015. Key target markets for 2015 include the EU-5 countries, Scandinavia, Brazil and China.

Does Backup Need a File System of its Own?
VembuHIVE™ is an efficient cloud file system designed for large-scale backup and disaster recovery (BDR™) applications, with support for advanced use cases. VembuHIVE™ can be thought of as a file system of file systems with built-in version control, deduplication (the elimination of redundant data to reduce storage consumption), encryption, and error correction.

Backup is not just about storage. It’s the intelligence on top of storage. Typically, when businesses think of backup, they see it as a simple data copy from one location to another. Traditional file systems would suffice if the need were just to copy the data. But backup is the intelligence applied on top of storage, where data can be put to actual use. Imagine the ability to use backup data for staging, testing, development and pre-production deployment. Traditional file systems are not designed to meet such complex requirements.

With the advent of information technology, more and more organizations are relying on IT for running their businesses. They cannot afford to have downtime on their critical applications and need instant access to data in the event of disaster. Hence, a new type of file system is necessary to satisfy this need. 

VembuHIVE™ manages metadata smartly through its patent-pending technology, in a way that is agnostic to the file system of the backup, which is why we call VembuHIVE™ a file system of file systems. This helps the backup application instantly associate the data in VembuHIVE™ with any file system metadata, thereby allowing on-demand file or image restores in many possible file formats. The underlying data and metadata storage harnesses a cluster file system for computing and storage.

This is a really powerful concept that will address some very interesting use cases not just in the backup and recovery domain but also in other domains, such as big-data analytics.

The key to the design of VembuHIVE™ is its novel mechanism for capturing and generating appropriate metadata and storing it intelligently in a cloud infrastructure. The incremental data (the changes with respect to a previous version of the same backup) is treated like versions in a version control system (such as CVS or Git). This revolutionary approach to data capture and metadata generation provides seamless support for a wide range of complex restore use cases.
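
VembuHIVE's mechanism is patent-pending and not spelled out in this summary, so the Python sketch below is a generic illustration of the two ideas the text describes: content-based deduplication of stored chunks, and treating each backup increment as a version that references its parent, much like commits in CVS or Git. All class and variable names are hypothetical:

    import hashlib

    class ChunkStore:
        """Generic content-addressed, deduplicated chunk storage."""
        def __init__(self):
            self.chunks = {}  # sha256 digest -> chunk bytes

        def put(self, data):
            digest = hashlib.sha256(data).hexdigest()
            self.chunks.setdefault(digest, data)  # identical chunks stored once
            return digest

    class BackupVersion:
        """Each increment references its parent, like a version control commit."""
        def __init__(self, store, parent=None):
            self.store = store
            self.index = dict(parent.index) if parent else {}  # block offset -> digest

        def write(self, offset, data):
            self.index[offset] = self.store.put(data)

        def read(self, offset):
            return self.store.chunks[self.index[offset]]

    store = ChunkStore()
    full = BackupVersion(store)
    full.write(0, b"block-A")
    full.write(1, b"block-B")
    incremental = BackupVersion(store, parent=full)  # only changed blocks are added
    incremental.write(1, b"block-B-modified")
    print(len(store.chunks))                         # 3 unique chunks stored
    print(incremental.read(0), incremental.read(1))  # full image reconstructed on demand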

FSLogix Lab Installation Guide for Azure
This document contains the steps required to install a Proof-of-Concept (POC) environment for RDSH Full Desktop with FSLogix Apps. POC environments created according to this install guide can be used to test applications controlled by FSLogix Apps in a reproducible way. We build the environment in the cloud in order to minimize costs associated with required hardware, and to streamline the build process. The testing environment is built entirely in Azure and contains an IaaS RDSH deployment including the supporting network and domain infrastructure.

This document contains the steps required to install a Proof-of-Concept (POC) environment for RDSH Full Desktop with FSLogix Apps.  POC environments created according to this install guide can be used to test applications controlled by FSLogix Apps in a reproducible way.  We build the environment in the cloud in order to minimize costs associated with required hardware, and to streamline the build process.

The testing environment is built entirely in Azure and contains an IaaS RDSH deployment including the supporting network and domain infrastructure.  A non-Azure based client component accesses the RDSH deployment.

Zerto's Cloud Continuity Platform: Enabling the Hybrid Cloud
Download this white paper and learn more about Zerto's Cloud Continuity Platform for Hybrid Cloud IT - empowering you to move, translate, and migrate virtual workloads between virtualized infrastructures with confidence and ease.
Hybrid Cloud is rapidly becoming the preferred model for IT. A missing piece for enabling true, production-grade Hybrid Cloud is the ability to mobilize and protect production workloads between different infrastructure types. The Cloud Continuity Platform is a new infrastructure concept that enables application mobility and protection across public, managed and private clouds, and across different hypervisors. With the Cloud Continuity Platform, the right infrastructure can be used to optimize for cost, SLA and performance with simple scalability and flexibility, without disruption to the business and while enabling full business continuity. The choice of a Hybrid Cloud is here.
The Essential Guide to VM-aware Storage
Storage providers tout all-flash, more models and real-time analytics. But under the covers lies a dirty little secret—their operating systems (the foundation of storage) are all the same… built on LUNs and volumes. But now a new category of storage has emerged—with operating systems built on virtual machines, and specifically attuned to virtualization and cloud. It’s called VM-aware storage (VAS), and if you’ve got a large virtual footprint, it’s something you need to explore further.
Storage providers tout all-flash, more models and real-time analytics. But under the covers lies a dirty little secret—their operating systems (the foundation of storage) are all the same… built on LUNs and volumes.

But now a new category of storage has emerged—with operating systems built on virtual machines, and specifically attuned to virtualization and cloud. It’s called VM-aware storage (VAS), and if you’ve got a large virtual footprint, it’s something you need to explore further.

The Definitive Guide to Monitoring Virtual Environments
The virtualization of physical computers has become the backbone of public and private cloud computing from desktops to data centers, enabling organizations to optimize hardware utilization, enhance security, support multi-tenancy and more. These environments are complex and ephemeral, creating requirements and challenges beyond the capability of traditional monitoring tools that were originally designed for static physical environments. But modern solutions exist, and can bring your virtual environment to new levels of efficiency, performance and scale.

OVERVIEW

The virtualization of physical computers has become the backbone of public and private cloud computing from desktops to data centers, enabling organizations to optimize hardware utilization, enhance security, support multi-tenancy and more. These environments are complex and ephemeral, creating requirements and challenges beyond the capability of traditional monitoring tools that were originally designed for static physical environments. But modern solutions exist, and can bring your virtual environment to new levels of efficiency, performance and scale.

This guide explains the pervasiveness of virtualized environments in modern data centers, the demand these environments create for more robust monitoring and analytics solutions, and the keys to getting the most out of virtualization deployments.

TABLE OF CONTENTS

  • History and Expansion of Virtualized Environments
  • Monitoring Virtual Environments
  • Approaches to Monitoring
  • Why Effective Virtualization Monitoring Matters
  • A Unified Approach to Monitoring Virtualized Environments
  • 5 Key Capabilities for Virtualization Monitoring
    o Real-Time Awareness
    o Rapid Root-Cause Analytics
    o End-to-End Visibility
    o Complete Flexibility
    o Hypervisor Agnosticism
  • Evaluating a Monitoring Solution
    o Unified View
    o Scalability
    o CMDB Support
    o Converged Infrastructure
    o Licensing
  • Zenoss for Virtualization Monitoring

2017 Strategic Roadmap for Storage
Gartner offers recommendations for IT leaders responsible for infrastructure modernization and agility. Emerging storage hardware and software enable IT leaders to lower acquisition costs per terabyte and improve manageability. In addition to focusing on agility, automation and cost reductions, IT leaders should address the cultural changes and skill set shortages caused by digital business projects.
Key Findings:

•    Vendor consolidation continues in the storage and hyperconverged integrated system market, causing reassessments of vendor relationships, cost impacts and potential solution switches.
•    New storage initiatives focus on the need for agility, automation and cost reduction, as evidenced by the high adoption of solid-state arrays and HCIS, along with increasing interest in software-defined storage and drastically simplified integrated backup appliances.
•    Cloud storage continues to be a polarizing practice, with business more optimistic and IT more cautious, resulting in clashes and conflicts between tactical decisions and strategic movements.
•    Digital business and other new business initiatives often require changes in the culture between business units and IT operations; this highlights the challenges of skill set shortages in such areas as the evaluation and management of IT service providers.