Virtualization Technology News and Information
White Papers
RSS
White Papers Search Results
Showing 1 - 16 of 21 white papers, page 1 of 2.
UNC Health Care Leverages IGEL in Virtual Desktop Infrastructure Deployment
UNC Health Care selected IGEL Universal Desktop Converter (UDC) and IGEL Universal Management Suite (UMS) for simplicity, cost-savings and security. This document outlines key findings on how IGEL helps organizations manage entire fleets of thin clients from a single console. In addition, you will see how IGEL Universal Desktop Converter provides IT organizations with the flexibility they need to convert any compatible thin client, desktop or laptop computer into an IGEL thin client solution, without having to make an upfront investment in new hardware.

UNC Health Care selects IGEL Universal Desktop Converter (UDC) and IGEL Universal Management Suite (UMS) for simplicity, cost-savings and security.

“The need to provide users with access to their desktops from any device anywhere, anytime is driving a growing number of IT organizations to migrate toward VDI environments,” said Simon Clephan, Vice President of Business Development and Strategic Alliances, IGEL. “One of the key advantages that IGEL brings to the table is the simplicity that comes from being able to manage an entire fleet of thin clients from a single console. Additionally, the IGEL Universal Desktop Converter provides IT organizations with the flexibility they need to convert any compatible thin client, desktop or laptop computer into an IGEL thin client solution, without having to make an upfront investment in new hardware to support their virtualized infrastructures.” 

UNC Health Care selected the IGEL UDC and UMS software for its Citrix VDI deployment following a “bake-off” between thin client solutions. “IGEL won hands down due to the simplicity and superiority of its management capabilities,” said James Cole, Technical Architect, UNC Health Care. “And, because the IGEL UDC software is designed to quickly and efficiently convert existing endpoint hardware into IGEL Linux OS-powered thin clients, we knew that by selecting the IGEL solution we would also realize a significant reduction in our capital expenditures.”

Since initiating the deployment of the IGEL UDC and UMS software, UNC Health Care has also experienced significant time savings. “Prior to deploying the IGEL UDC and UMS software, it took our team 25-30 minutes to create a virtual image on each system, not counting the personalization of the system for each use case. Now that process takes less than 10 minutes, and even less time when converting the system to VDI roaming,” added Cole.

Additionally, the ease of integration between the IGEL UDC and IGEL UMS with Citrix XenDesktop and other solutions offered by Citrix Ecosystem partners, including Imprivata, has enabled secure access to the health care network’s Epic Systems’ Electronic Medical Records (EMR) system.

Solution Guide for Sennheiser Headsets, IGEL Endpoints and Skype for Business on Citrix VDI
Topics: IGEL, Citrix, skype, VDI
Enabling voice and video with a bundled solution in an existing Citrix environment delivers clearer and crisper voice and video than legacy phone systems. This solution guide describes how Sennheiser headsets combine with Citrix infrastructure and IGEL endpoints to provide a better, more secure user experience. It also describes how to deploy the bundled Citrix-Sennheiser-IGEL solution.

Virtualizing Windows applications and desktops in the data center or cloud has compelling security, mobility and management benefits, but delivering real-time voice and video in a virtual environment is a challenge. A poorly optimized implementation can increase costs and compromise user experience. Server scalability and bandwidth efficiency may be less than optimal, and audio-video quality may be degraded.


PrinterLogic and IGEL Enable Healthcare Organizations to Deliver Better Patient Outcomes
Healthcare professionals need to print effortlessly and reliably to nearby or appropriate printers within virtual environments, and PrinterLogic and IGEL can help make that an easy, reliable process—all while efficiently maintaining the protection of confidential patient information.

Many organizations have turned to virtualizing user endpoints to help reduce capital and operational expenses while increasing security. This is especially true within healthcare, where hospitals, clinics, and urgent care centers seek to offer the best possible patient outcomes while adhering to a variety of mandated patient security and information privacy requirements.

With the movement of desktops and applications into the secure data center or cloud, the need for reliable printing of documents, some very sensitive in nature, remains a constant that can be challenging when desktops are virtual but the printing process remains physical. Directing print jobs to the correct printer with the correct physical access rights in the correct location while ensuring compliance with key healthcare mandates like the General Data Protection Regulation (GDPR) and the Healthcare Insurance Portability and Accountability Act (HIPAA) is critical.

Healthcare IT needs to keep pace with these requirements and the ongoing printing demands of healthcare. Medical professionals need to print effortlessly and reliably to nearby or appropriate printers within virtual environments, and PrinterLogic and IGEL can help make that an easy, reliable process—all while efficiently maintaining the protection of confidential patient information. By combining PrinterLogic’s enterprise print management software with centrally managed direct IP printing and IGEL’s software-defined thin client endpoint management, healthcare organizations can:

  • Reduce capital and operational costs
  • Support virtual desktop infrastructure (VDI) and electronic medical records (EMR) systems effectively
  • Centralize and simplify print management
  • Add an essential layer of security from the target printer all the way to the network edge
vSphere Troubleshooting Guide
Troubleshooting complex virtualization technology is something all VMware users will have to face at some point. It requires an understanding of how various components fit together, and finding a place to start is not easy. Thankfully, VMware vExpert Ryan Birk is here to help with this eBook, preparing you for any problems you may encounter along the way.

This eBook explains how to identify problems with vSphere and how to solve them. Before we begin, we need to start off with an introduction to a few things that will make life easier. We’ll start with a troubleshooting methodology and how to gather logs. After that, we’ll break this eBook into the following sections: Installation, Virtual Machines, Networking, Storage, vCenter/ESXi and Clustering.

ESXi and vSphere problems arise from many different places, but they generally fall into one of these categories: hardware issues, resource contention, network attacks, software bugs, and configuration problems.

A typical troubleshooting process contains several tasks: 1. Define the problem and gather information. 2. Identify what is causing the problem. 3. Implement a fix and verify that it resolves the problem.

When experiencing a problem with a host, one of the first things you should do is try to reproduce the issue. If you can find a way to reproduce it, you have a great way to validate that the issue is resolved once you apply a fix. It can also be helpful to take a benchmark of your systems before they are implemented into a production environment. If you know how they should be running, it is easier to pinpoint a problem.
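The benchmark-first advice above can be sketched as a simple baseline comparison. This is a hypothetical illustration, not part of the eBook: the metric names and the tolerance threshold are invented, and a real environment would pull these values from monitoring tools.

```python
# Hypothetical sketch: compare current host metrics against a pre-production
# baseline to highlight where a problem may lie. Metric names and the
# tolerance are illustrative, not from the eBook or any VMware API.

def find_deviations(baseline, current, tolerance=0.25):
    """Return metrics that deviate from the baseline by more than `tolerance` (a fraction)."""
    deviations = {}
    for metric, expected in baseline.items():
        observed = current.get(metric)
        if observed is None:
            continue
        # Skip zero baselines to avoid division by zero.
        if expected and abs(observed - expected) / expected > tolerance:
            deviations[metric] = (expected, observed)
    return deviations

# Baseline captured before the host entered production.
baseline = {"cpu_ready_ms": 40, "mem_ballooned_mb": 0, "datastore_latency_ms": 5}
# Metrics observed while the problem is occurring.
current = {"cpu_ready_ms": 900, "mem_ballooned_mb": 0, "datastore_latency_ms": 6}

for metric, (expected, observed) in find_deviations(baseline, current).items():
    print(f"{metric}: expected ~{expected}, observed {observed}")
```

Here the CPU ready time stands out immediately against the baseline, pointing the investigation toward resource contention rather than storage.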

You should decide whether a “top-down” or “bottom-up” approach is best for determining the root cause. Guest OS-level issues typically cause a large number of problems. Let’s face it: some of the applications we use are not perfect. They get the job done, but they use a lot of memory doing it.

In terms of virtual machine level issues, is it possible that you could have a limit or share value that’s misconfigured? At the ESXi Host Level, you could need additional resources. It’s hard to believe sometimes, but you might need another host to help with load!

Once you have identified the root cause, you should assess the impact of the problem on your day-to-day operations. What type of fix should you implement, and when? A short-term solution means implementing a quick workaround; a long-term solution means reconfiguring a virtual machine or host. Either way, assess the impact of your solution on daily operations before you proceed.

Now that the basics have been covered, download the eBook to discover how to put this theory into practice!

How to Get the Most Out of Windows Admin Center
Windows Admin Center is the future of Windows and Windows Server management. Are you using it to its full potential? In this free eBook, Microsoft Cloud and Datacenter Management MVP Eric Siron has put together a 70+ page guide on what Windows Admin Center brings to the table, how to get started, and how to squeeze as much value out of this incredible free management tool from Microsoft. This eBook covers: Installation, Getting Started, Full UI Analysis, Security, and Managing Extensions.

Each version of Windows and Windows Server showcases new technologies. The advent of PowerShell marked a substantial step forward in managing those features. However, the built-in graphical Windows management tools have largely stagnated - the same basic Microsoft Management Console (MMC) interfaces have remained since Windows 2000 Server. Over the years, Microsoft tried multiple overhauls of the built-in Server Manager console, but they gained little traction - until Windows Admin Center.

WHAT IS WINDOWS ADMIN CENTER?
Windows Admin Center (WAC) represents a modern turn in Windows and Windows Server system management. From its home page, you establish a list of the networked Windows and Windows Server computers to manage. From there, you can connect to an individual system to control components such as hardware drivers. You can also use it to manage Windows roles, such as Hyper-V.

On the front end, Windows Admin Center is presented through a sleek HTML5 web interface. On the back end, it leverages PowerShell extensively to control the systems within your network. The entire package runs on a single system, so you don’t need a complicated infrastructure to support it. In fact, you can run it locally on your Windows 10 workstation if you want. If you require more resiliency, you can run Windows Admin Center as a role on a Microsoft Failover Cluster.

WHY WOULD I USE WINDOWS ADMIN CENTER?
In the modern era of Windows management, we have shifted to a greater reliance on industrial-strength tools like PowerShell and Desired State Configuration. However, we still have servers that require individualized attention and infrequently utilized resources. WAC gives you a one-stop hub for dropping in on any system at any time and working with almost any of its facets.

ABOUT THIS EBOOK
This eBook has been written by Microsoft Cloud & Datacenter Management MVP Eric Siron. Eric has worked in IT since 1998, designing, deploying, and maintaining server, desktop, network, and storage systems. He has provided all levels of support for businesses ranging from single-user through enterprises with thousands of seats. He has achieved numerous Microsoft certifications and was a Microsoft Certified Trainer for four years. Eric is also a seasoned technology blogger and has amassed a significant following through his top-class work on the Altaro Hyper-V Dojo.

ESG Report: Verifying Network Intent with Forward Enterprise
This ESG Technical Review documents hands-on validation of Forward Enterprise, a solution developed by Forward Networks to help organizations save time and resources when verifying that their IT networks can deliver application traffic consistently in line with network and security policies. The review examines how Forward Enterprise can reduce network downtime, ensure compliance with policies, and minimize adverse impact of configuration changes on network behavior.
ESG research recently uncovered that 66% of organizations view their IT environments as more or significantly more complex than they were two years ago. That complexity will most likely increase, since 46% of organizations anticipate that their network infrastructure spending will exceed 2018 levels as they upgrade and expand their networks.

Large enterprise and service provider networks consist of multiple device types—routers, switches, firewalls, and load balancers—with proprietary operating systems (OS) and different configuration rules. As organizations support more applications and users, their networks will grow and become more complex, making it more difficult to verify and manage correctly implemented policies across the entire network. Organizations have also begun to integrate public cloud services with their on-premises networks, adding further network complexity to manage end-to-end policies.

With increasing network complexity, organizations cannot easily confirm that their networks are operating as intended when they implement network and security policies. Moreover, when considering a fix to a service-impacting issue or a network update, it becomes difficult to determine how the change may negatively impact other applications or introduce service-affecting issues. To assess adherence to policies or the impact of any network change, organizations have typically relied on disparate tools and material - network topology diagrams, device inventories, vendor-dependent management systems, command-line interface (CLI) commands, and utilities such as “ping” and “traceroute.” The combination of these tools cannot provide a reliable and holistic assessment of network behavior efficiently.

Organizations need a vendor-agnostic solution that enables network operations to automate the verification of network implementations against intended policies and requirements, regardless of the number and types of devices, operating systems, traffic rules, and policies that exist. The solution must represent the topology of the entire network or subsets of devices (e.g., in a region) quickly and efficiently. It should verify network implementations from prior points in time, as well as proposed network changes prior to implementation. Finally, the solution must also enable organizations to quickly detect issues that affect application delivery or violate compliance requirements.
How to seamlessly and securely transition to hybrid cloud
Find out how to optimize your hybrid cloud workload through system hardening, incident detection, active defense and mitigation, quarantining and more. Plus, learn how to ensure protection and performance in your environment through an ideal hybrid cloud workload protection solution.

With digital transformation a constantly evolving reality for the modern organization, businesses are called upon to manage complex workloads across multiple public and private clouds—in addition to their on-premises systems.

The upside of the hybrid cloud strategy is that businesses can benefit from both lowered costs and dramatically increased agility and flexibility. The problem, however, is maintaining a secure environment through challenges like data security, regulatory compliance, external threats to the service provider, rogue IT usage and issues related to lack of visibility into the provider’s infrastructure.

Find out how to optimize your hybrid cloud workload through system hardening, incident detection, active defense and mitigation, quarantining and more. Plus, learn how to ensure protection and performance in your environment through an ideal hybrid cloud workload protection solution that:


•    Provides the necessary level of protection for different workloads
•    Delivers an essential set of technologies
•    Is structured as a comprehensive, multi-layered solution
•    Avoids performance degradation for services or users
•    Supports compliance by satisfying a range of regulation requirements
•    Enforces consistent security policies through all parts of hybrid infrastructure
•    Enables ongoing audit by integrating state of security reports
•    Takes account of continuous infrastructure changes

Multi-cloud Data Protection-as-a-service: The HYCU Protégé Platform
Multi-cloud environments are here to stay and will keep on growing in diversity, use cases, and, of course, size. Data growth is not stopping anytime soon, only making the problem more acute. HYCU has taken a very different approach from many traditional vendors by selectively delivering deeply integrated solutions to the platforms they protect, and is now moving to the next challenge of unification and simplification with Protégé, calling it a data protection-as-a-service platform.

There are a number of limitations today keeping organizations from not only lifting and shifting from one cloud to another but also migrating across clouds. Organizations need the flexibility to leverage multiple clouds and move applications and workloads around freely, whether for data reuse or for disaster recovery. This is where the HYCU Protégé platform comes in. HYCU Protégé is positioned as a complete multi-cloud data protection and disaster recovery-as-a-service solution. It includes a number of capabilities that make it relevant and notable compared with other approaches in the market:

  • It was designed for multi-cloud environments, with a “built-for-purpose” approach to each workload and environment, leveraging APIs and platform expertise.
  • It is designed as a one-to-many cross-cloud disaster recovery topology rather than a one-to-one cloud or similarly limited topology.
  • It is designed for the IT generalist. It’s easy to use, it includes dynamic provisioning on-premises and in the cloud, and it can be deployed without impacting production systems. In other words, no need to manually install hypervisors or agents.
  • It is application-aware and will automatically discover and configure applications. Additionally, it supports distributed applications with shared storage. 
Data Protection as a Service - Simplify Your Backup and Disaster Recovery
Data protection is a catch-all term that encompasses a number of technologies, business practices and skill sets associated with preventing the loss, corruption or theft of data. The two primary data protection categories are backup and disaster recovery (DR) - each one providing a different type, level and data protection objective. While managing each of these categories occupies a significant percentage of the IT budget and systems administrator’s time, it doesn’t have to. Data protection can now be provided as a service.
Simplify Your Backup and Disaster Recovery

Today, there are an ever-growing number of threats to businesses and uptime is crucial. Data protection has never been a more important function of IT. As data center complexity and demand for new resources increases, the difficulty of providing effective and cost-efficient data protection increases as well.

Luckily, data protection can now be provided as a service.

Get this white paper to learn:
  • How data protection service providers enable IT teams to focus on business objectives
  • The difference, and importance, of cloud-based backup and disaster recovery
  • Why cloud-based backup and disaster recovery are required for complete protection
Mind The Gap: Understanding the threats to your Office 365 data
Download this whitepaper to learn more about how you can prevent, or mitigate, these common Office 365 data threats: External threats like ransomware, Malicious insiders, User-errors and accidental keystrokes.
From corporate contacts to sensitive messages and attachments, email systems at all companies contain some of the most important data needed to keep business running and successful. At the same time, your office productivity suite of documents, notes and spreadsheets created by your employees is equally vital. Unfortunately, in both cases, protecting that data is increasingly challenging. Microsoft provides what some describe as marginal efforts to protect and back up data; however, the majority of the burden is placed on the customer.

Download this whitepaper to learn more about how you can prevent, or mitigate, these common Office 365 data threats:
•    External threats like ransomware
•    Malicious insiders
•    User-errors and accidental keystrokes

ESG Showcase - DataCore vFilO: NAS Consolidation Means Freedom from Data Silos
File and object data are valuable tools that help organizations gain market insights, improve operations, and fuel revenue growth. However, success in utilizing all of that data depends on consolidating data silos. Replacing an existing infrastructure is often expensive and impractical, but DataCore vFilO software offers an intelligent, powerful option - an alternative, economically appealing way to consolidate and abstract existing storage into a single, efficient, capable ecosystem of readily-searchable resources.

Companies have NAS systems all over the place—hardware-centric devices that make data difficult to migrate and leverage to support the business. It’s natural that companies would desire to consolidate those systems, and vFilO is a technology that could prove to be quite useful as an assimilation tool. Best of all, there’s no need to replace everything. A business can modernize its IT environment and finally achieve a unified view, plus gain more control and efficiency via the new “data layer” sitting on top of the hardware. When those old silos finally disappear, employees will discover they can find whatever information they need by examining and searching what appears to be one big catalog for a large pool of resources.

And for IT, the capacity-balancing capability should have especially strong appeal. With it, file and object data can shuffle around and be balanced for efficiency without IT or anyone needing to deal with silos. Today, too many organizations still perform capacity balancing work manually—putting some files on a different NAS system because the first one started running out of room. It’s time for those days to end. DataCore, with its 20-year history offering SANsymphony, is a vendor in a great position to deliver this new type of solution, one that essentially virtualizes NAS and object systems and even includes keyword search capabilities to help companies use their data to become stronger, more competitive, and more profitable.

The Time is Now for File Virtualization
DataCore’s vFilO is a distributed files and object storage virtualization solution that can consume storage from a variety of providers, including NFS or SMB file servers, most NAS systems, and S3 Object Storage systems, including S3-based public cloud providers. Once vFilO integrates these various storage systems into its environment, it presents users with a logical file system and abstracts it from the actual physical location of data.

DataCore vFilO is a top-tier file virtualization solution. Not only can it serve as a global file system; IT can also add new NAS systems or file servers to the environment without having to remap users to the new hardware. vFilO supports live migration of data between the storage systems it has assimilated, and it leverages the capabilities of the global file system and the software’s policy-driven data management to move older data to less expensive storage automatically, whether high-capacity NAS or an object storage system. vFilO also transparently moves data from NFS/SMB to object storage. If the user needs access to this data in the future, they access it like they always have. To them, the data has not moved.
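The policy-driven tiering described above can be sketched as a simple age-based placement rule. The tier names and the 90-day cutoff are invented for illustration; they are not vFilO's actual policy engine or API.

```python
# Hypothetical sketch of policy-driven data placement: files untouched for
# longer than a cutoff are assigned to a cheaper tier. Tier names and the
# age policy are illustrative only.

from datetime import datetime, timedelta

def choose_tier(last_access: datetime, now: datetime,
                archive_after_days: int = 90) -> str:
    """Return the storage tier a file should live on under the age policy."""
    if now - last_access > timedelta(days=archive_after_days):
        return "object-archive"  # high-capacity, low-cost tier
    return "primary-nas"         # fast tier for actively used data

now = datetime(2020, 6, 1)
print(choose_tier(datetime(2020, 5, 20), now))  # recently used file
print(choose_tier(datetime(2019, 11, 1), now))  # cold data
```

Because the global file system abstracts the physical location, a rule like this can run continuously without users ever noticing where their files actually live.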

The ROI of file virtualization is powerful, but the technology has struggled to gain adoption in the data center. File virtualization needs to be explained, and explaining it takes time. vFilO more than meets the requirements to qualify as a top-tier file virtualization solution. DataCore has the advantage of over 10,000 customers that are much more likely to be receptive to the concept, since they have already embraced block storage virtualization with SANsymphony. Building on its customer base as a beachhead, DataCore can then expand file virtualization’s reach to new customers, who, because of the changing state of unstructured data, may finally be receptive to the concept. At the same time, these new file virtualization customers may be amenable to virtualizing block storage, which may open up new doors for SANsymphony.

IDC: SaaS Backup and Recovery: Simplified Data Protection Without Compromise
Although the majority of organizations have a "cloud first" strategy, most also continue to manage onsite applications and the backup infrastructure associated with them. However, many are moving away from backup specialists and instead are leaving the task to virtual infrastructure administrators or other IT generalists. Metallic represents Commvault's direct entry into one of the fastest-growing segments of the data protection market. Its hallmarks are simplicity and flexibility of deployment.

Metallic is a new SaaS backup and recovery solution based on Commvault's data protection software suite, proven in the marketplace for more than 20 years. It is designed specifically for the needs of medium-scale enterprises but is architected to grow with them based on data growth, user growth, or other requirements. Metallic initially offers either monthly or annual subscriptions through reseller partners; it will be available through cloud service providers and managed service providers over time. The initial workload use cases for Metallic include virtual machine (VM), SQL Server, file server, MS Office 365, and endpoint device recovery support; the company expects to add more use cases and supported workloads as the solution evolves.

Metallic is designed to offer flexibility as one of the service's hallmarks. Aspects of this include:

  • On-demand infrastructure: Metallic manages the cloud-based infrastructure components and software for the backup environment, though the customer will still manage any of its own on-premises infrastructure. This environment will support on-premises, cloud, and hybrid workloads. IT organizations are relieved of the daily task of managing the infrastructure components and do not have to worry about upgrades, OS or firmware updates, and the like for the cloud infrastructure, freeing people to repurpose that time toward other activities.
  • Preconfigured plans: Metallic offers preconfigured plans designed to have users up and running in approximately 15 minutes, eliminating the need for a proof-of-concept test. These preconfigured systems have Commvault best practices built into the design, or organizations can configure their own.
  • Partner-delivered services: Metallic plans to go to market with resellers that can offer a range of services on top of the basic solution's capabilities. These services will vary by provider and will give users a variety of choices when selecting a provider to match the services offered with the organization's needs.
  • "Bring your own storage": Among the flexible options of Metallic, including VM and file or SQL database use cases, users can deploy their own storage, either on-premises or in the cloud, while utilizing the backup/recovery services of Metallic. The company refers to this option as "SaaS Plus."
TechGenix Product Review: DataCore vFilO Software-Defined Storage
In its product review, TechGenix gave DataCore’s vFilO 4.7 stars - a gold-star rating. The review found that the interface is relatively intuitive so long as you have a basic understanding of file shares and enterprise storage. Its ability to assign objectives to shares, directories, and even individual files, together with its seamless blending of block, file, and object storage, delivers a new generation of storage system that is flexible and very powerful.
Managing an organization’s many distributed files and file storage systems has always been challenging, but this task has become far more complex in recent years. System admins commonly find themselves trying to manage several different types of cloud and data center storage, each with its own unique performance characteristics and costs. Bringing all of this storage together in a cohesive way while also keeping costs in check can be a monumental challenge. Not to mention how disruptive data migrations tend to be when space runs short. While there are a few products that use an abstraction layer to provide a consolidated view of an organization’s storage, it is important to keep in mind that all storage is not created equally.
Process Optimization with Stratusphere UX
This whitepaper explores the developments of the past decade that have prompted the need for Stratusphere UX Process Optimization. We also cover how this feature works and the advantages it provides, including specific capital and operating cost benefits.

Managing the performance of Windows-based workloads can be a challenge. Whether on physical PCs or virtual desktops, the effort required to maintain, tune and optimize workspaces is endless. Operating system and application revisions, user-installed applications, security and bug patches, BIOS and driver updates, spyware, and multi-user operating systems supply a continual flow of change that can disrupt expected performance. When you add in the complexities introduced by virtual desktops and cloud architectures, you have yet another infinite source of performance instability. Keeping up with this churn, as well as meeting users’ zero tolerance for failures, are chief worries for administrators.

To help address the need for uniform performance and optimization in light of constant change, Liquidware introduced the Process Optimization feature in its Stratusphere UX solution. This feature can be set to automatically optimize CPU and memory, even as system demands fluctuate. Process Optimization can keep “bad actor” applications or runaway processes from crippling the performance of users’ workspaces by prioritizing resources for processes that are being actively used over unused or background processes.

The Process Optimization feature requires no additional infrastructure. It is a simple, zero-impact feature that is included with Stratusphere UX, and it can be turned on for single machines, for groups, or globally. Launched with the check of a box, it lets you select from pre-built profiles that operate automatically, or administrators can manually specify the processes they need to raise, lower or terminate if that becomes necessary. This feature is a major benefit in hybrid multi-platform environments that include physical, pool- or image-based virtual, and cloud workspaces, which are much more complex than single-delivery systems.

The Process Optimization feature was designed with security and reliability in mind. By default, it employs a “do no harm” provision affecting normal and lower process priorities, and a relaxed policy. No processes are forced by default when access is denied by the system, ensuring that the system remains stable and in line with requirements.
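The prioritization idea described above can be sketched as a small set of rules: demote background work, favor actively used applications, and never touch system processes ("do no harm"). The process categories and niceness values here are invented for illustration and are not Liquidware's actual implementation.

```python
# Hypothetical sketch of priority adjustment with a "do no harm" rule.
# Uses Unix-style niceness, where a higher value means lower priority.

FOREGROUND, BACKGROUND, SYSTEM = "foreground", "background", "system"

def adjust_priority(kind: str, niceness: int) -> int:
    """Return the new niceness for a process of the given category."""
    if kind == SYSTEM:
        return niceness                 # do no harm: leave system processes alone
    if kind == BACKGROUND:
        return min(niceness + 10, 19)   # demote, clamped to the valid range
    return max(niceness - 5, -20)       # favor actively used applications

# Illustrative process list: (name, category, current niceness).
processes = [("word.exe", FOREGROUND, 0),
             ("indexer.exe", BACKGROUND, 0),
             ("smss.exe", SYSTEM, 0)]

for name, kind, nice in processes:
    print(f"{name}: {nice} -> {adjust_priority(kind, nice)}")
```

Clamping to the valid niceness range and exempting system processes mirrors the relaxed, safety-first policy the feature applies by default.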

Digital Workspace Disasters and How to Beat Them
This paper looks at risk management as it relates to the Windows desktops that are permanently connected to a campus, head office or branch network. In particular, we will look at how ‘digital workspace’ solutions designed to streamline desktop delivery and provide greater user flexibility can also be leveraged to enable a more effective and efficient approach to desktop disaster recovery (DR).
Desktop DR - the recovery of individual desktop systems from a disaster or system failure - has long been a challenge. Part of the problem is that there are so many desktops, storing so much valuable data and - unlike servers - with so many different end-user configurations and too little central control. Imaging everyone would be a huge task, generating huge amounts of backup data. And even if those problems could be overcome with the use of software agents, plus deduplication to take common files such as the operating system out of the backup window, restoring damaged systems could still mean days of software reinstallation and reconfiguration.

Yet at the same time, most organizations have a strategic need to deploy and provision new desktop systems, and to be able to migrate existing ones to new platforms. Again, these are tasks that benefit from reducing both duplication and the need to reconfigure the resulting installation. The parallels with desktop DR should be clear.

We often write about the importance of an integrated approach to investing in backup and recovery. By bringing together business needs that have a shared technical foundation, we can, for example, gain incremental benefits from backup, such as improved data visibility and governance, or we can gain DR capabilities from an investment in systems and data management. So it is with desktop DR and user workspace management (UWM). Both of these are growing in importance as organizations’ desktop estates grow more complex. Not only are we adding more ways to work online, such as virtual PCs, more applications, and more layers of middleware, but the resulting systems face more risks and threats and are subject to higher regulatory and legal requirements. Increasingly, then, both desktop DR and UWM will be not just valuable but essential.

Getting one as an incremental bonus from the other therefore not only strengthens the business case for that investment proposal; it is a win-win scenario in its own right.