White Papers Search Results
Showing 1 - 16 of 52 white papers, page 1 of 4.
UNC Health Care Leverages IGEL in Virtual Desktop Infrastructure Deployment
UNC Health Care selected IGEL Universal Desktop Converter (UDC) and IGEL Universal Management Suite (UMS) for simplicity, cost-savings and security. This document outlines key findings on how IGEL helps organizations manage entire fleets of thin clients from a single console. In addition, you will see how IGEL Universal Desktop Converter provides IT organizations with the flexibility they need to convert any compatible thin client, desktop or laptop computer into an IGEL thin client solution, without having to make an upfront investment in new hardware to support their virtualized infrastructures.

UNC Health Care selects IGEL Universal Desktop Converter (UDC) and IGEL Universal Management Suite (UMS) for simplicity, cost-savings and security.

“The need to provide users with access to their desktops from any device anywhere, anytime is driving a growing number of IT organizations to migrate toward VDI environments,” said Simon Clephan, Vice President of Business Development and Strategic Alliances, IGEL. “One of the key advantages that IGEL brings to the table is the simplicity that comes from being able to manage an entire fleet of thin clients from a single console. Additionally, the IGEL Universal Desktop Converter provides IT organizations with the flexibility they need to convert any compatible thin client, desktop or laptop computer into an IGEL thin client solution, without having to make an upfront investment in new hardware to support their virtualized infrastructures.” 

UNC Health Care selected the IGEL UDC and UMS software for its Citrix VDI deployment following a “bake-off” between thin client solutions. “IGEL won hands down due to the simplicity and superiority of its management capabilities,” said James Cole, Technical Architect, UNC Health Care. “And, because the IGEL UDC software is designed to quickly and efficiently convert existing endpoint hardware into IGEL Linux OS-powered thin clients, we knew that by selecting the IGEL solution we would also realize a significant reduction in our capital expenditures.”

Since initiating the deployment of the IGEL UDC and UMS software, UNC Health Care has also experienced significant time savings. “Prior to deploying the IGEL UDC and UMS software, it took our team 25-30 minutes to create a virtual image on each system, not counting the personalization of the system for each use case. Now that process takes less than 10 minutes, and even less time when converting the system to VDI roaming,” added Cole.

Additionally, the ease of integrating the IGEL UDC and IGEL UMS with Citrix XenDesktop and other solutions offered by Citrix Ecosystem partners, including Imprivata, has enabled secure access to the health care network’s Epic Systems’ Electronic Medical Records (EMR) system.

Ovum: Igel's Security Enhancements for Thin Clients
Thin client vendor Igel is enhancing the security capabilities of its products, both under its own steam and in collaboration with technology partners. Ovum sees these developments as important for the next wave of thin client computing, which will be software-based – particularly if the desktop-as-a-service (DaaS) market is to take off.

With hardware-based thin client shipments in the region of 4–5 million units annually, this market is still a drop in the ocean compared to the 270 million PCs shipping each year, though the latter figure has been declining since 2011. And within the thin client market, Igel is in fourth place behind Dell and HP (each at around 1.2 million units annually) and China’s Centerm, which only sells into its home market.

However, the future for thin clients looks bright, in that the software-based segment of the market (which some analyst houses refuse to acknowledge) is expanding, particularly for Igel. Virtual desktop infrastructure (VDI) technology has stimulated this growth, but the greatest promise is probably in the embryonic DaaS market, whereby enterprises will have standard images for their workforce hosted by service providers.

Optimizing Performance for Office 365 and Large Profiles with ProfileUnity ProfileDisk
Managing Windows user profiles can be a complex and challenging process. Better profile management is usually sought by organizations looking to reduce Windows login times, accommodate applications that do not adhere to best-practice application data storage, and give users the flexibility to log in to any Windows Operating System (OS) and have their profile follow them.

Managing Windows user profiles can be a complex and challenging process. Better profile management is usually sought by organizations looking to reduce Windows login times, accommodate applications that do not adhere to best-practice application data storage, and give users the flexibility to log in to any Windows Operating System (OS) and have their profile follow them. Note that additional profile challenges and solutions are covered in a related ProfileUnity whitepaper entitled “User Profile and Environment Management with ProfileUnity.” To efficiently manage the complex challenges of today’s diverse Windows profile environments, Liquidware ProfileUnity exclusively features two user profile technologies that can be used together or separately depending on the use case.

These include:

1. ProfileDisk, a virtual disk-based profile that delivers the entire profile as a layer from an attached user VHD or VMDK, and

2. Profile Portability, a file- and registry-based profile solution that restores files at login, post-login, or based on environment triggers.

Application & Desktop Delivery for Dummies
In this book, you learn how solutions, such as Parallels Remote Application Server (RAS), replace traditional application deployment with on-demand application delivery, and why it's right for your organization.
Applications are essential to businesses and organizations of all sizes and in all industries. End-users need continuous and reliable access to their applications whether working in the office or remotely, at any time of the day or night, and from any device. With the advent of cloud computing, office desktops with installed applications (that had to be constantly updated) have become a thing of the past — application streaming, virtual desktop infrastructure (VDI), and hosted applications are the future (and the present, for that matter).

Application virtualization is an easy way to manage, distribute, and maintain business applications. Virtualized applications run on a server, while end-users view and interact with their applications over a network via a remote display protocol. Remote applications can be completely integrated with the user’s desktop so that they appear and behave like local applications. Today, you can dynamically publish applications to remote users in several ways. The server-based operating system (OS) instances that run remote applications can be shared with other users (a terminal services desktop), or the application can run on its own OS instance on the server (a VDI desktop).
A Journey Through Hybrid IT and the Cloud
How to navigate between the trenches. Hybrid IT has moved from buzzword status to reality and organizations are realizing its potential impact. Some aspects of your infrastructure may remain in a traditional setting, while another part runs on cloud infrastructure—causing great complexity. So, what does this mean for you?

How to navigate between the trenches

Hybrid IT has moved from buzzword status to reality and organizations are realizing its potential impact. Some aspects of your infrastructure may remain in a traditional setting, while another part runs on cloud infrastructure—causing great complexity. So, what does this mean for you?
 
“A Journey Through Hybrid IT and the Cloud” provides insight on:

  • What Hybrid IT means for the network, storage, compute, monitoring, and your staff
  • Real-world examples that can occur along your journey (what did vs. didn’t work)
  • How to educate employees on Hybrid IT and the Cloud
  • Proactively searching out technical solutions to real business challenges
Top 10 Reasons to Adopt Software-Defined Storage
In this brief, learn about the top ten reasons why businesses are adopting software-defined storage to empower their existing and new storage investments with greater performance, availability and functionality.
DataCore delivers a software-defined architecture that empowers existing and new storage investments with greater performance, availability and functionality. But don’t take our word for it. We decided to poll our customers to learn what motivated them to adopt software-defined storage. As a result, we came up with the top 10 reasons our customers have adopted software-defined storage.
Download this white paper to learn about:
•    How software-defined storage protects investments, reduces costs, and enables greater buying power
•    How you can protect critical data, increase application performance, and ensure high-availability
•    Why 10,000 customers have chosen DataCore’s software-defined storage solution
How Parallels RAS Enhances Microsoft RDS
In 2001, Microsoft introduced the RDP protocol that allowed users to access an operating system’s desktop remotely. Since then, Microsoft has developed the Microsoft Remote Desktop Services (RDS) to facilitate remote desktop access. However, Microsoft RDS leaves a lot to be desired. This white paper highlights the pain points of RDS solutions, and how systems administrators can use Parallels® Remote Application Server (RAS) to enhance their Microsoft RDS infrastructure.

In 2001, Microsoft introduced the RDP protocol that allowed users to access an operating system’s desktop remotely. Since then, Microsoft has developed the Microsoft Remote Desktop Services (RDS) to facilitate remote desktop access.

However, Microsoft RDS leaves a lot to be desired. This white paper highlights the pain points of RDS solutions, and how systems administrators can use Parallels Remote Application Server (RAS) to enhance their Microsoft RDS infrastructure.

Microsoft RDS Pain Points:
•    Limited Load Balancing Functionality
•    Limited Client Device Support
•    Difficult to Install, Set Up, and Update

Parallels RAS is an application and virtual desktop delivery solution that allows systems administrators to create a private cloud from which they can centrally manage the delivery of applications, virtual desktops, and business-critical data. This comprehensive VDI solution is well known for its ease of use, low license costs, and feature list.

How Parallels RAS Enhances Your Microsoft RDS Infrastructure:
•    Easy to Install and Set Up
•    Centralized Configuration Console
•    Auto-Configuration of Remote Desktop Session Hosts
•    High Availability Load Balancing (HALB)
•    Superior user experience on mobile devices
•    Supports hypervisors from Citrix, VMware, Microsoft’s own Hyper-V, Nutanix Acropolis, and Kernel-based Virtual Machine (KVM)

As this white paper highlights, Parallels RAS allows you to enhance your Microsoft Remote Desktop Services infrastructure, enabling you to offer a superior application and virtual desktop delivery solution.

Built around Microsoft’s RDP protocol, Parallels RAS allows systems administrators to do more in less time with fewer resources. Since it is easier to implement and use, systems administrators can manage and easily scale up the Parallels RAS farm without requiring any specialized training. Because of its extensive feature list and multisite support, they can build solutions that meet the requirements of any enterprise, regardless of its size and scale.

Data Protection and File Sharing for the Mobile Workforce
Critical data is increasingly created, stored and shared outside the data center. It lives on laptops, tablets, mobile devices and cloud services. This data is subject to many threats: malware, ransomware, hacking, device failure, loss or theft, and human error. Catalogic KODO provides a unified solution to these challenges with easy, automated protection of endpoints (laptops, mobile devices) and cloud services (Office 365, Box), along with organizational file sharing and synchronization.
Critical data is increasingly created, stored and shared outside the data center. It lives on laptops, tablets, mobile devices and cloud services. This data is subject to many threats: malware, ransomware, hacking, device failure, loss or theft, and human error.

Catalogic KODO provides a unified solution to these challenges with easy, automated protection of endpoints (laptops, mobile devices) and cloud services (Office 365, Box), along with organizational file sharing and synchronization.
Catalogic Software-Defined Secondary Storage Appliance
The Catalogic software-defined secondary-storage appliance is architected and optimized to work seamlessly with Catalogic’s data protection product DPX, with Catalogic/Storware vProtect, and with future Catalogic products. Backup nodes are deployed on a bare metal server or as virtual appliances to create a cost-effective yet robust second-tier storage solution. The backup repository offers data reduction and replication. Backup data can be archived off to tape for long-term retention.
The Catalogic software-defined secondary-storage appliance is architected and optimized to work seamlessly with Catalogic’s data protection product DPX, with Catalogic/Storware vProtect, and with future Catalogic products.

Backup nodes are deployed on a bare metal server or as virtual appliances to create a cost-effective yet robust second-tier storage solution. The backup repository offers data reduction and replication. Backup data can be archived off to tape for long-term retention.
Microsoft Azure Cloud Cost Calculator
Move Workloads to the Cloud and Reduce Costs! Considering a move to Azure? Use this simple tool to find out how much you can save on storage costs by mobilizing your applications to the cloud with Zerto on Azure!

Move Workloads to the Cloud and Reduce Costs!

Considering a move to Azure? Use this simple tool to find out how much you can save on storage costs by mobilizing your applications to the cloud with Zerto on Azure!  

Gartner Market Guide for IT Infrastructure Monitoring Tools
With the onset of more modular and cloud-centric architectures, many organizations with disparate monitoring tools are reassessing their monitoring landscape. According to Gartner, hybrid IT (especially with IaaS subscription) enterprises must adopt more holistic IT infrastructure monitoring tools (ITIM) to gain visibility into their IT landscapes.

With the onset of more modular and cloud-centric architectures, many organizations with disparate monitoring tools are reassessing their monitoring landscape. According to Gartner, hybrid IT (especially with IaaS subscription) enterprises must adopt more holistic IT infrastructure monitoring tools (ITIM) to gain visibility into their IT landscapes.

The guide provides insight into the IT infrastructure monitoring tool market and providers as well as key findings and recommendations.

Get the 2018 Gartner Market Guide for IT Infrastructure Monitoring Tools to see:

  • The ITIM market definition, direction and analysis
  • A list of representative ITIM vendors
  • Recommendations for adoption of ITIM platforms

Key Findings Include:

  • ITIM tools are helping organizations simplify and unify monitoring across domains within a single tool, eliminating the problems of multitool integration.
  • ITIM tools are allowing infrastructure and operations (I&O) leaders to scale across hybrid infrastructures and emerging architectures (such as containers and microservices).
  • Metrics and data acquired by ITIM tools are being used to derive context enabling visibility for non-IT teams (for example, line of business [LOB] and app owners) to help achieve optimization targets.
vSphere Troubleshooting Guide
Troubleshooting complex virtualization technology is something all VMware users will have to face at some point. It requires an understanding of how various components fit together, and finding a place to start is not easy. Thankfully, VMware vExpert Ryan Birk is here to help with this eBook, preparing you for any problems you may encounter along the way.

This eBook explains how to identify problems with vSphere and how to solve them. Before we begin, we need to cover a few things that will make life easier. We’ll start with a troubleshooting methodology and how to gather logs. After that, we’ll break this eBook into the following sections: Installation, Virtual Machines, Networking, Storage, vCenter/ESXi and Clustering.

ESXi and vSphere problems arise from many different places, but they generally fall into one of these categories: Hardware issues, Resource contention, Network attacks, Software bugs, and Configuration problems.

A typical troubleshooting process contains several tasks: 1. Define the problem and gather information. 2. Identify what is causing the problem. 3. Implement a fix for the problem.

One of the first things you should do when experiencing a problem with a host is try to reproduce the issue. If you can find a way to reproduce it, you have a great way to validate that the issue is resolved when you do fix it. It can also be helpful to take a benchmark of your systems before they are implemented into a production environment. If you know how they should be running, it’s easier to pinpoint a problem.
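The eBook does not mandate a particular tool for taking that benchmark, but a simple baseline is easy to capture with a short script against the vSphere API. The following is a minimal sketch, assuming the pyVmomi library and a hypothetical vCenter host name and read-only credentials (vcenter.example.com, readonly@vsphere.local); substitute your own connection details.

# Minimal, hypothetical baseline capture using pyVmomi (pip install pyvmomi).
# Records per-host CPU and memory quick stats so later anomalies are easier to spot.
import csv
import ssl
from datetime import datetime, timezone

from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

def capture_baseline(vc_host="vcenter.example.com", user="readonly@vsphere.local",
                     pwd="changeme", out_file="host_baseline.csv"):
    ctx = ssl._create_unverified_context()  # lab convenience; use proper certificates in production
    si = SmartConnect(host=vc_host, user=user, pwd=pwd, sslContext=ctx)
    try:
        content = si.RetrieveContent()
        view = content.viewManager.CreateContainerView(
            content.rootFolder, [vim.HostSystem], True)
        stamp = datetime.now(timezone.utc).isoformat()
        with open(out_file, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["timestamp", "host", "cpu_used_mhz", "mem_used_mb"])
            for esxi in view.view:
                stats = esxi.summary.quickStats
                writer.writerow([stamp, esxi.name,
                                 stats.overallCpuUsage, stats.overallMemoryUsage])
        view.Destroy()
    finally:
        Disconnect(si)

if __name__ == "__main__":
    capture_baseline()

Running something like this against a known-good environment gives you per-host CPU and memory numbers to compare against when a system later feels "off".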

You should decide if it’s best to work from a “Top Down” or “Bottom Up” approach to determine the root cause. Guest OS Level issues typically cause a large number of problems. Let’s face it, some of the applications we use are not perfect. They get the job done, but they utilize a lot of memory doing it.

In terms of virtual machine level issues, is it possible that you could have a limit or share value that’s misconfigured? At the ESXi Host Level, you could need additional resources. It’s hard to believe sometimes, but you might need another host to help with load!
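As a purely illustrative companion to that question, the sketch below sweeps the inventory for VMs whose CPU or memory limits and shares deviate from the defaults. It is not from the eBook; it assumes the same pyVmomi setup as the baseline sketch above and reuses a connected service instance si.

# Hypothetical sweep for misconfigured CPU/memory limits or non-default shares.
# Reuses a connected pyVmomi service instance `si` (see the baseline sketch above).
from pyVmomi import vim

def report_suspect_allocations(si):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.config is None:  # skip VMs whose configuration is currently inaccessible
            continue
        cpu, mem = vm.config.cpuAllocation, vm.config.memoryAllocation
        findings = []
        if cpu.limit not in (None, -1):            # -1 means "unlimited"
            findings.append(f"CPU limited to {cpu.limit} MHz")
        if mem.limit not in (None, -1):
            findings.append(f"memory limited to {mem.limit} MB")
        if cpu.shares and cpu.shares.level != "normal":
            findings.append(f"CPU shares: {cpu.shares.level}")
        if mem.shares and mem.shares.level != "normal":
            findings.append(f"memory shares: {mem.shares.level}")
        if findings:
            print(f"{vm.name}: " + "; ".join(findings))
    view.Destroy()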

Once you have identified the root cause, assess the impact of the problem on your day-to-day operations. Then decide when and what type of fix to implement: a short-term solution (a quick workaround) or a long-term solution (reconfiguring a virtual machine or host), and assess the impact of that fix on daily operations as well.

Now that the basics have been covered, download the eBook to discover how to put this theory into practice!

Mastering vSphere – Best Practices, Optimizing Configurations & More
Do you regularly work with vSphere? If so, this free eBook is for you. Learn how to leverage best practices for the most popular features contained within the vSphere platform and boost your productivity using tips and tricks learnt directly from an experienced VMware trainer and highly qualified professional. In this eBook, vExpert Ryan Birk shows you how to master: Advanced Deployment Scenarios using Auto-Deploy, Shared Storage Performance Monitoring and Troubleshooting, and Host Network configuration.

If you’re here to gather some of the best practices surrounding vSphere, you’ve come to the right place! Mastering vSphere: Best Practices, Optimizing Configurations & More, the free eBook authored by me, Ryan Birk, is the product of many years working with vSphere as well as teaching others in a professional capacity. In my extensive career as a VMware consultant and teacher (I’m a VMware Certified Instructor) I have worked with people of all competence levels and been asked hundreds - if not thousands - of questions on vSphere. I was approached to write this eBook to put that experience to use to help people currently working with vSphere step up their game and reach that next level. As such, this eBook assumes readers already have a basic understanding of vSphere and will cover the best practices for four key aspects of any vSphere environment.

The best practices covered here will focus largely on management and configuration solutions, so they should remain relevant for quite some time. However, with that said, things are constantly changing in IT, so I would always recommend obtaining the most up-to-date information from VMware KBs and official documentation, especially regarding specific versions of tools and software updates. This eBook is divided into several sections, and although I would advise reading the whole eBook as most elements relate to others, you might want to just focus on a certain area you’re having trouble with. If so, jump to the section you want to read about.

Before we begin, I want to note that in a VMware environment, it’s always best to try to keep things simple. Far too often I have seen environments thrown off the tracks by trying to do too much at once. I try to live by the mentality of “keeping your environment boring” – in other words, keeping your host configurations the same, storage configurations the same and network configurations the same. I don’t mean duplicate IP addresses, but the hosts need identical port groups, access to the same storage networks, etc. Consistency is the name of the game and is key to solving unexpected problems down the line. Furthermore, it enables smooth scalability – when you move from a single host configuration to a cluster configuration, having the same configurations will make live migrations and high availability far easier to configure without having to significantly re-work the entire infrastructure. Now that the scene has been set, let’s get started!
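That kind of consistency is also easy to spot-check with a script. As a rough sketch (not taken from the eBook), again assuming the pyVmomi library and a connected service instance si as in the earlier examples, the standard-switch port groups defined on each host could be compared like this:

# Hypothetical consistency check: compare standard-switch port group names across hosts.
# Assumes a connected pyVmomi service instance `si`, as in the earlier sketches.
from pyVmomi import vim

def compare_portgroups(si):
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    per_host = {}
    for host in view.view:
        if host.config is None:  # skip disconnected or not-yet-configured hosts
            continue
        per_host[host.name] = {pg.spec.name for pg in host.config.network.portgroup}
    view.Destroy()
    if not per_host:
        return
    expected = set.union(*per_host.values())   # every port group seen anywhere
    for name, groups in sorted(per_host.items()):
        missing = expected - groups
        if missing:
            print(f"{name} is missing port groups: {', '.join(sorted(missing))}")
        else:
            print(f"{name}: OK ({len(groups)} port groups)")

A host that reports missing port groups is a likely candidate for failed live migrations or for VMs that lose connectivity after moving to it.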

Forrester: Monitoring Containerized Microservices - Elevate Your Metrics
As enterprises continue to rapidly adopt containerized microservices, infrastructure and operations (I&O) teams need to address the growing complexities of monitoring these highly dynamic and distributed applications. The scale of these environments can pose tremendous monitoring challenges. This report will guide I&O leaders in what to consider when developing their technology and metric strategies for monitoring microservices and container-based applications.
As enterprises continue to rapidly adopt containerized microservices, infrastructure and operations (I&O) teams need to address the growing complexities of monitoring these highly dynamic and distributed applications. The scale of these environments can pose tremendous monitoring challenges. This report will guide I&O leaders in what to consider when developing their technology and metric strategies for monitoring microservices and container-based applications.
How to Get the Most Out of Windows Admin Center
Windows Admin Center is the future of Windows and Windows Server management. Are you using it to its full potential? In this free eBook, Microsoft Cloud and Datacenter Management MVP, Eric Siron, has put together a 70+ page guide on what Windows Admin Center brings to the table, how to get started, and how to squeeze as much value out of this incredible free management tool from Microsoft. This eBook covers: - Installation - Getting Started - Full UI Analysis - Security - Managing Extensions

Each version of Windows and Windows Server showcases new technologies. The advent of PowerShell marked a substantial step forward in managing those features. However, the built-in graphical Windows management tools have largely stagnated - the same basic Microsoft Management Console (MMC) interfaces had remained since Windows Server 2000. Over the years, Microsoft tried multiple overhauls of the built-in Server Manager console, but they gained little traction. Until Windows Admin Center.

WHAT IS WINDOWS ADMIN CENTER?
Windows Admin Center (WAC) represents a modern turn in Windows and Windows Server system management. From its home page, you establish a list of the networked Windows and Windows Server computers to manage. From there, you can connect to an individual system to control components such as hardware drivers. You can also use it to manage Windows roles, such as Hyper-V.

On the front-end, Windows Admin Center is presented through a sleek HTML5 web interface. On the back-end, it leverages PowerShell extensively to control the systems within your network. The entire package runs on a single system, so you don’t need a complicated infrastructure to support it. In fact, you can run it locally on your Windows 10 workstation if you want. If you require more resiliency, you can run Windows Admin Center as a role on a Microsoft Failover Cluster.

WHY WOULD I USE WINDOWS ADMIN CENTER?
In the modern era of Windows management, we have shifted to a greater reliance on industrial-strength tools like PowerShell and Desired State Configuration. However, we still have servers that require individualized attention and infrequently utilized resources. WAC gives you a one-stop hub for dropping in on any system at any time and working with almost any of its facets.

ABOUT THIS EBOOK
This eBook has been written by Microsoft Cloud & Datacenter Management MVP Eric Siron. Eric has worked in IT since 1998, designing, deploying, and maintaining server, desktop, network, and storage systems. He has provided all levels of support for businesses ranging from single-user through enterprises with thousands of seats. He has achieved numerous Microsoft certifications and was a Microsoft Certified Trainer for four years. Eric is also a seasoned technology blogger and has amassed a significant following through his top-class work on the Altaro Hyper-V Dojo.

Digital Workspace Disasters and How to Beat Them
Desktop DR - the recovery of individual desktop systems from a disaster or system failure - has long been a challenge. Part of the problem is that there are so many desktops, storing so much valuable data and - unlike servers - with so many different end user configurations and too little central control. Imaging everyone would be a huge task, generating huge amounts of backup data.
Desktop DR - the recovery of individual desktop systems from a disaster or system failure - has long been a challenge. Part of the problem is that there are so many desktops, storing so much valuable data and - unlike servers - with so many different end-user configurations and too little central control. Imaging everyone would be a huge task, generating huge amounts of backup data. And even if those problems could be overcome with the use of software agents, plus deduplication to take common files such as the operating system out of the backup window, restoring damaged systems could still mean days of software reinstallation and reconfiguration.

Yet at the same time, most organizations have a strategic need to deploy and provision new desktop systems, and to be able to migrate existing ones to new platforms. Again, these are tasks that benefit from reducing both duplication and the need to reconfigure the resulting installation. The parallels with desktop DR should be clear.

We often write about the importance of an integrated approach to investing in backup and recovery. By bringing together business needs that have a shared technical foundation, we can, for example, gain incremental benefits from backup, such as improved data visibility and governance, or we can gain DR capabilities from an investment in systems and data management.

So it is with desktop DR and user workspace management (UWM). Both of these are growing in importance as organizations’ desktop estates grow more complex. Not only are we adding more ways to work online, such as virtual PCs, more applications, and more layers of middleware, but the resulting systems face more risks and threats and are subject to higher regulatory and legal requirements. Increasingly then, both desktop DR and UWM will be not just valuable, but essential. Getting one as an incremental bonus from the other therefore not only strengthens the business case for that investment proposal, it is a win-win scenario in its own right.
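To make the deduplication idea concrete, the toy sketch below backs up each unique file content only once by keying the backup repository on a content hash. It is a simplified illustration under assumed paths, not a description of any vendor's implementation; real products also handle metadata, block-level deltas, and restore mappings.

# Toy content-based deduplication: store each unique file content only once,
# keyed by its SHA-256 hash. Illustration only.
import hashlib
import shutil
from pathlib import Path

def backup_with_dedup(source_dir: str, repo_dir: str) -> None:
    repo = Path(repo_dir)
    repo.mkdir(parents=True, exist_ok=True)
    seen = {p.name for p in repo.iterdir()}        # hashes already in the repository
    for path in Path(source_dir).rglob("*"):
        if not path.is_file():
            continue
        digest = hashlib.sha256(path.read_bytes()).hexdigest()
        if digest in seen:
            continue                               # common file, already backed up once
        shutil.copy2(path, repo / digest)          # store the content under its hash
        seen.add(digest)

# Example (hypothetical paths): backup_with_dedup("C:/Users/alice", "D:/backup_repo")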