Virtualization Technology News and Information
Ease of Management and Flexibility Lead to Long-Term Relationship for IGEL at Texas Credit Union
Randolph-Brooks Federal Credit Union was looking for a more powerful endpoint computing solution to deliver e-mail and core financial applications through its Citrix-based infrastructure to its end-users, and IGEL’s Universal Desktop thin clients and Universal Management Suite (UMS) software fit the bill.

Randolph-Brooks Federal Credit Union is more than just a bank. It is a financial cooperative intent on helping its members save time, save money and earn money. Over the years, the credit union has grown from providing financial resources to military service members and their families to serving hundreds of thousands of members across Texas and around the world. RBFCU has a presence in three major market areas — Austin, Dallas and San Antonio — and has more than 55 branches dedicated to serving members and the community.

First and foremost, RBFCU is people. It’s the more than 1,800 employees who serve members’ needs each day. It’s the senior team and Board of Directors that guide the credit union’s growth. It’s the members who give their support and loyalty to the credit union each day.

To help its employees provide the credit union’s members with the highest levels of services and support, Randolph-Brooks Federal Credit Union relies on IGEL’s endpoint computing solutions.

vSphere Troubleshooting Guide
Troubleshooting complex virtualization technology is something all VMware users will have to face at some point. It requires an understanding of how various components fit together, and finding a place to start is not easy. Thankfully, VMware vExpert Ryan Birk is here to help with this eBook, which prepares you for any problems you may encounter along the way.

This eBook explains how to identify problems with vSphere and how to solve them. Before we begin, we need to start off with an introduction to a few things that will make life easier. We’ll start with a troubleshooting methodology and how to gather logs. After that, we’ll break this eBook into the following sections: Installation, Virtual Machines, Networking, Storage, vCenter/ESXi and Clustering.

ESXi and vSphere problems arise from many different places, but they generally fall into one of these categories: hardware issues, resource contention, network attacks, software bugs, and configuration problems.

A typical troubleshooting process contains several tasks: 1. Define the problem and gather information. 2. Identify what is causing the problem. 3. Formulate and implement a fix.

One of the first things you should do when experiencing a problem with a host is try to reproduce the issue. If you can find a way to reproduce it, you have a great way to validate that the issue is resolved once you apply a fix. It can also be helpful to take a benchmark of your systems before they are implemented into a production environment. If you know HOW they should be running, it’s easier to pinpoint a problem.
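The benchmark idea can be sketched in a few lines: record key metrics while the host is healthy, then flag current readings that deviate too far from that baseline. The metric names and the 25% tolerance below are illustrative assumptions, not values from the eBook:

```python
# Sketch: compare current host metrics against a recorded healthy baseline.
# Metric names and the 25% tolerance are illustrative assumptions.

def find_anomalies(baseline, current, tolerance=0.25):
    """Return metrics that deviate from baseline by more than `tolerance`."""
    anomalies = {}
    for metric, expected in baseline.items():
        observed = current.get(metric)
        if observed is None:
            continue  # metric not collected this time; skip it
        if abs(observed - expected) > tolerance * expected:
            anomalies[metric] = (expected, observed)
    return anomalies

baseline = {"cpu_ready_ms": 200, "datastore_latency_ms": 5, "net_drops": 0}
current  = {"cpu_ready_ms": 180, "datastore_latency_ms": 42, "net_drops": 0}

print(find_anomalies(baseline, current))  # datastore latency stands out
```

With a baseline on record, a single comparison like this points you straight at the subsystem that changed, instead of guessing where to start.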

You should decide whether it’s best to work from a “top down” or “bottom up” approach to determine the root cause. Guest OS-level issues typically cause a large number of problems. Let’s face it, some of the applications we use are not perfect. They get the job done, but they use a lot of memory doing it.

In terms of virtual machine level issues, is it possible that you could have a limit or share value that’s misconfigured? At the ESXi Host Level, you could need additional resources. It’s hard to believe sometimes, but you might need another host to help with load!

Once you have identified the root cause, assess the impact of the problem on your day-to-day operations and decide what type of fix to implement: a short-term solution, such as a quick workaround, or a long-term solution, such as reconfiguring a virtual machine or host. Either way, assess the impact of your solution on daily operations before applying it.

Now that the basics have been covered, download the eBook to discover how to put this theory into practice!

Why Network Verification Requires a Mathematical Model
Learn how verification can be used in key IT processes and workflows, why a mathematical model is required and how it works, as well as example use cases from the Forward Enterprise platform.
Network verification is a rapidly emerging technology that is a key part of Intent Based Networking (IBN). Verification can help avoid outages, facilitate compliance processes and accelerate change windows. Full-feature verification solutions require an underlying mathematical model of network behavior to analyze and reason about policy objectives and network designs. A mathematical model, as opposed to monitoring or testing live traffic, can perform exhaustive and definitive analysis of network implementations and behavior, including proving network isolation or security rules.
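As a toy illustration of why a model enables exhaustive analysis, a network can be represented as a graph of permitted flows, and isolation proven by showing that no path exists between two zones. This sketch is our own illustration of the idea, not how Forward Enterprise is implemented; the zone names are hypothetical:

```python
# Toy sketch of model-based verification: prove two zones are isolated by
# exhaustively searching a graph of permitted flows. An illustration of the
# concept only, not Forward Enterprise's actual model.
from collections import deque

def reachable(flows, src, dst):
    """Breadth-first search over permitted flows: can src ever reach dst?"""
    seen, queue = {src}, deque([src])
    while queue:
        node = queue.popleft()
        if node == dst:
            return True
        for nxt in flows.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Hypothetical policy model: the guest VLAN must never reach the cardholder zone.
flows = {
    "guest-vlan": ["web-dmz"],
    "web-dmz": ["app-tier"],
    "app-tier": ["db-tier"],
    "corp-lan": ["cardholder-zone"],
}

print(reachable(flows, "guest-vlan", "db-tier"))          # intended path exists
print(reachable(flows, "guest-vlan", "cardholder-zone"))  # isolation holds
```

Because the search covers every path in the model, a negative answer is a proof of isolation rather than a sample of observed traffic, which is the key difference from monitoring live packets.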

In this paper, we will describe how verification can be used in key IT processes and workflows, why a mathematical model is required and how it works, as well as example use cases from the Forward Enterprise platform. This will also clarify what requirements a mathematical model must meet and how to evaluate alternative products.
ESG Report: Verifying Network Intent with Forward Enterprise
This ESG Technical Review documents hands-on validation of Forward Enterprise, a solution developed by Forward Networks to help organizations save time and resources when verifying that their IT networks can deliver application traffic consistently in line with network and security policies. The review examines how Forward Enterprise can reduce network downtime, ensure compliance with policies, and minimize adverse impact of configuration changes on network behavior.
ESG research recently uncovered that 66% of organizations view their IT environments as more or significantly more complex than they were two years ago. The complexity will most likely increase, since 46% of organizations anticipate that their network infrastructure spending will exceed 2018 levels as they upgrade and expand their networks.

Large enterprise and service provider networks consist of multiple device types—routers, switches, firewalls, and load balancers—with proprietary operating systems (OS) and different configuration rules. As organizations support more applications and users, their networks will grow and become more complex, making it more difficult to verify and manage correctly implemented policies across the entire network. Organizations have also begun to integrate public cloud services with their on-premises networks, adding further network complexity to manage end-to-end policies.

With increasing network complexity, organizations cannot easily confirm that their networks are operating as intended when they implement network and security policies. Moreover, when considering a fix to a service-impacting issue or a network update, it becomes difficult to determine whether the change may negatively impact other applications or introduce service-affecting issues. To assess adherence to policies or the impact of any network change, organizations have typically relied on disparate tools and material—network topology diagrams, device inventories, vendor-dependent management systems, command line interface (CLI) commands, and utilities such as “ping” and “traceroute.” The combination of these tools cannot provide a reliable and holistic assessment of network behavior efficiently.

Organizations need a vendor-agnostic solution that enables network operations to automate the verification of network implementations against intended policies and requirements, regardless of the number and types of devices, operating systems, traffic rules, and policies that exist. The solution must represent the topology of the entire network or subsets of devices (e.g., in a region) quickly and efficiently. It should verify network implementations from prior points in time, as well as proposed network changes prior to implementation. Finally, the solution must also enable organizations to quickly detect issues that affect application delivery or violate compliance requirements.
Forward Networks ROI Case Study
See how a large financial services business uses Forward Enterprise to achieve significant ROI with process improvements in trouble ticket resolution, audit-related fixes and change windows.
Because Forward Enterprise automates the intelligent analysis of network designs, configurations and state, we provide an immediate and verifiable return on investment (ROI) by accelerating key IT processes and reducing the man-hours highly skilled engineers spend troubleshooting and testing the network.

In this paper, we will quantify the ROI of a large financial services firm and document the process improvements that led to IT cost savings and a more agile network. In this analysis, we will look at process improvements in trouble ticket resolution, audit-related fixes and acceleration of network updates and change windows. We will explore each of these areas in more detail, along with the input assumptions for the calculations; for this financial services customer, the process improvements resulted in an annualized net savings of over $3.5 million.
The SysAdmin Guide to Azure Infrastructure as a Service
If you're used to on-premises infrastructures, cloud platforms can seem daunting. But they don't need to be. This eBook, written by veteran IT consultant and trainer Paul Schnackenburg, covers all aspects of setting up and maintaining a high-performing Azure IaaS environment, including: • VM sizing and deployment • Migration • Storage and networking • Security and identity • Infrastructure as code and more!

The cloud computing era is well and truly upon us, and knowing how to take advantage of the benefits of this computing paradigm while maintaining security, manageability, and cost control is a vital skill for any IT professional in 2020 and beyond, and its importance is only growing.

In this eBook, we’re going to focus on Infrastructure as a Service (IaaS) on Microsoft’s Azure platform - learning how to create VMs, size them correctly, manage storage, networking, and security, along with backup best practices. You’ll also learn how to operate groups of VMs, deploy resources based on templates, manage security, and automate your infrastructure. If you currently have VMs in your own datacenter and are looking to migrate to Azure, we’ll cover that too.
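Sizing a VM correctly boils down to picking the smallest size that still meets the workload's requirements. The sketch below illustrates that selection logic; the size names mirror real Azure families, but the specs and prices here are illustrative placeholders, not authoritative Azure data:

```python
# Sketch: pick the cheapest VM size that meets a workload's needs.
# Size names mirror real Azure families, but the specs and hourly prices
# are illustrative placeholders, not authoritative Azure data.

SIZES = [
    # (name, vCPUs, memory_gb, approx_hourly_usd)
    ("Standard_B2s",    2,  4, 0.04),
    ("Standard_D2s_v3", 2,  8, 0.10),
    ("Standard_D4s_v3", 4, 16, 0.19),
]

def pick_size(vcpus_needed, memory_gb_needed):
    """Return the cheapest catalog entry satisfying both requirements."""
    candidates = [s for s in SIZES
                  if s[1] >= vcpus_needed and s[2] >= memory_gb_needed]
    if not candidates:
        raise ValueError("no size fits; extend the catalog")
    return min(candidates, key=lambda s: s[3])[0]

print(pick_size(2, 6))  # needs 2 vCPUs and 6 GB of RAM
```

In practice you would query the live size list for your region (for example via the Azure CLI or portal) rather than hard-coding a catalog, but the "smallest size that fits" principle is the same.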

If you’re new to the cloud (or have experience with AWS/GCP but not Azure), this book will cover the basics as well as more advanced skills. Given how fast things change in the cloud, we’ll cover the why (as well as the how) so that as features and interfaces are updated, you’ll have the theoretical knowledge to effectively adapt and know how to proceed.

You’ll benefit most from this book if you actively follow along with the tutorials. We will be going through terms and definitions as we go – learning by doing has always been my preferred way of education. If you don’t have access to an Azure subscription, you can sign up for a free trial with Microsoft. This will give you 30 days to use $200 USD worth of Azure resources, along with 12 months of free resources. Note that most of these “12 months” services aren’t related to IaaS VMs (apart from a few SSD-based virtual disks and a small VM that you can run for 750 hours a month), so be sure to get everything covered on the IaaS side before your trial expires. There are also another 25 services with free tiers “forever”.

Now you know what’s in store, let’s get started!

Evaluator Group Report on Liqid Composable Infrastructure
In this report from Eric Slack, Senior Analyst at the Evaluator Group, learn how Liqid’s software-defined platform delivers comprehensive, multi-fabric composable infrastructure for the industry’s widest array of data center resources.
Composable Infrastructures direct-connect compute and storage resources dynamically—using virtualized networking techniques controlled by software. Instead of physically constructing a server with specific internal devices (storage, NICs, GPUs or FPGAs), or cabling the appropriate device chassis to a server, composable infrastructure enables the virtual connection of these resources at the device level as needed, when needed.

Download this report from Eric Slack, Senior Analyst at the Evaluator Group to learn how Liqid’s software-defined platform delivers comprehensive, multi-fabric composable infrastructure for the industry’s widest array of data center resources.
IDC: SaaS Backup and Recovery: Simplified Data Protection Without Compromise
Although the majority of organizations have a "cloud first" strategy, most also continue to manage onsite applications and the backup infrastructure associated with them. However, many are moving away from backup specialists and instead are leaving the task to virtual infrastructure administrators or other IT generalists. Metallic represents Commvault's direct entry into one of the fastest-growing segments of the data protection market. Its hallmarks are simplicity and flexibility of deployment.

Metallic is a new SaaS backup and recovery solution based on Commvault's data protection software suite, proven in the marketplace for more than 20 years. It is designed specifically for the needs of medium-scale enterprises but is architected to grow with them based on data growth, user growth, or other requirements. Metallic initially offers either monthly or annual subscriptions through reseller partners; it will be available through cloud service providers and managed service providers over time. The initial workload use cases for Metallic include virtual machine (VM), SQL Server, file server, MS Office 365, and endpoint device recovery support; the company expects to add more use cases and supported workloads as the solution evolves.

Metallic is designed to offer flexibility as one of the service's hallmarks. Aspects of this include:

  • On-demand infrastructure: Metallic manages the cloud-based infrastructure components and software for the backup environment, though the customer still manages any of its own on-premises infrastructure. This environment supports on-premises, cloud, and hybrid workloads. IT organizations are relieved of the daily task of managing the infrastructure components and do not have to worry about upgrades, OS or firmware updates, and the like for the cloud infrastructure, so staff can repurpose the time saved toward other activities.
  • Preconfigured plans: Metallic offers preconfigured plans designed to have users up and running in approximately 15 minutes, eliminating the need for a proof-of-concept test. These preconfigured systems have Commvault best practices built into the design, or organizations can configure their own.
  • Partner-delivered services: Metallic plans to go to market with resellers that can offer a range of services on top of the basic solution's capabilities. These services will vary by provider and will give users a variety of choices when selecting a provider to match the services offered with the organization's needs.
  • "Bring your own storage": Among the flexible options of Metallic, including VM and file or SQL database use cases, users can deploy their own storage, either on-premises or in the cloud, while utilizing the backup/recovery services of Metallic. The company refers to this option as "SaaS Plus."
GigaOM Key Criteria for Software-Defined Storage – Vendor Profile: DataCore Software
DataCore SANsymphony is one of the most flexible solutions in the software-defined storage (SDS) market, enabling users to build modern storage infrastructures that combine software-defined storage functionality with storage virtualization and hyperconvergence. This results in a very smooth migration path from traditional infrastructures based on physical appliances and familiar data storage approaches, to a new paradigm built on flexibility and agility.
DataCore SANsymphony is a scale-out solution with a rich feature set and extensive functionality to improve resource optimization and overall system efficiency. Data services exposed to the user include snapshots with continuous data protection and remote data replication options, including a synchronous mirroring capability to build metro clusters and respond to demanding, high-availability scenarios. Encryption at rest can be configured as well, providing additional protection for data regardless of the physical device on which it is stored.

On top of the core block storage services provided in its SANsymphony products, DataCore recently released vFiLo to add file and object storage capabilities to its portfolio. vFiLo enables users to consolidate additional applications and workloads on its platform, and to further simplify storage infrastructure and its management. The DataCore platform has been adopted by cloud providers and enterprises of all sizes over the years, both at the core and at the edge.

SANsymphony combines superior flexibility and support for a diverse array of use cases with outstanding ease of use. The solution is mature and provides a very broad feature set. DataCore boasts a global partner network that provides both products and professional services, while its sales model supports perpetual licenses and subscription options typical of competitors in the sector. DataCore excels at providing tools to build balanced storage infrastructures that can serve multiple workloads and scale in different dimensions, while keeping complexity and cost at bay.

The Backup Bible – Complete Edition
In the modern workplace, your data is your lifeline. A significant data loss can cause irreparable damage. Every company must ask itself - is our data properly protected? Learn how to create a robust, effective backup and DR strategy and how to put that plan into action with the Backup Bible, a free eBook written by backup expert and Microsoft MVP Eric Siron. The Backup Bible Complete Edition features 200+ pages of actionable content divided into 3 core parts, including 11 customizable templates.

Part 1 explains the fundamentals of backup and how to determine your unique backup specifications. You'll learn how to:

  • Get started with backup and disaster recovery planning
  • Set recovery objectives and loss tolerances
  • Translate your business plan into a technically oriented outlook
  • Create a customized agenda for obtaining key stakeholder support
  • Set up a critical backup checklist

Part 2 shows you what exceptional backup looks like on a daily basis and the steps you need to get there, including:

  • Choosing the Right Backup and Recovery Software
  • Setting and Achieving Backup Storage Targets
  • Securing and Protecting Backup Data
  • Defining Backup Schedules
  • Monitoring, Testing, and Maintaining Systems

Part 3 guides you through the process of creating a reliable disaster recovery strategy based on your own business continuity requirements, covering:

  • Understanding key disaster recovery considerations
  • Mapping out your organizational composition
  • Replication
  • Cloud solutions
  • Testing the efficacy of your strategy

The Backup Bible is the complete guide to protecting your data and an essential reference book for all IT admins and professionals.


Introduction to Microsoft Windows Virtual Desktop
This whitepaper provides an overview of WVD and a historical perspective of the evolution of Windows desktops – especially multi-session Windows. This paper was authored by industry veterans with active involvement in multi-session Windows desktop computing since its inception in the early 1990s. Disclaimer: Professionals at Liquidware, a Microsoft WVD partner, authored this paper based on information available at the time of writing.
Microsoft announced the general availability of Windows Virtual Desktop (WVD) on September 30, 2019. The release came after an initial public preview evaluation program that lasted about six months. Information regarding WVD is evolving quickly; consequently, readers should understand that this whitepaper (v2.0) presents the most up-to-date information available at the time of writing. Any inaccuracies in this paper are unintentional. Research and buying decisions are ultimately the readers’ responsibility.
Optimising Performance for Office 365 and Large Profiles with ProfileUnity ProfileDisk
This whitepaper has been authored by experts at Liquidware Labs in order to provide guidance to adopters of desktop virtualization technologies. In this paper, two types of profile management with ProfileUnity are outlined: (1) ProfileDisk and (2) Profile Portability. This paper covers best practice recommendations for each technology and when they can be used together. ProfileUnity is the only full-featured UEM solution on the market to feature an embedded ProfileDisk technology and the advantages it brings.

Managing Windows user profiles can be a complex and challenging process. Better profile management is usually sought by organizations looking to reduce Windows login times, accommodate applications that do not adhere to best-practice application data storage, and give users the flexibility to log in to any Windows Operating System (OS) and have their profile follow them.

Note that additional profile challenges and solutions are covered in a related ProfileUnity whitepaper entitled “User Profile and Environment Management with ProfileUnity.” To efficiently manage the complex challenges of today’s diverse Windows profile environments, Liquidware ProfileUnity exclusively features two user profile technologies that can be used together or separately depending on the use case. These include:

1. ProfileDisk™, a virtual disk based profile that delivers the entire profile as a layer from an attached user VHD or VMDK, and

2. Profile Portability, a file and registry based profile solution that restores files at login, post login, or based on environment triggers.

CloudCasa - Kubernetes and Cloud Database Protection as a Service
CloudCasa™ was built to address data protection for Kubernetes and cloud native infrastructure, and to bridge the data management and protection gap between DevOps and IT Operations. CloudCasa is a simple, scalable and cloud-native BaaS solution built using Kubernetes for protecting Kubernetes and cloud databases. CloudCasa removes the complexity of managing traditional backup infrastructure, and it provides the same level of application-consistent data protection and disaster recovery that IT Operations teams expect.

CloudCasa supports all major Kubernetes managed cloud services and distributions, provided they are based on Kubernetes 1.13 or above. Supported cloud services include Amazon EKS, DigitalOcean, Google GKE, IBM Cloud Kubernetes Service, and Microsoft AKS. Supported Kubernetes distributions include Kubernetes.io, Red Hat OpenShift, SUSE Rancher, and VMware Tanzu Kubernetes Grid. Multiple worker node architectures are supported, including x86-64, ARM, and S390x.

With CloudCasa, managing data protection in complex hybrid cloud or multi-cloud environments is as easy as managing it for a single cluster. Just add your multiple clusters and cloud databases to CloudCasa, and you can manage backups across them using common policies, schedules, and retention times. And you can see and manage all your backups in a single easy-to-use GUI.
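The common-policy idea can be sketched as follows: one retention rule is shared by every cluster, and pruning decisions fall out of it uniformly. The field names, policy shape, and cluster names here are hypothetical illustrations, not CloudCasa's actual data model:

```python
# Sketch: apply one retention policy across backups from multiple clusters.
# Field names, the policy shape, and cluster names are hypothetical
# illustrations, not CloudCasa's actual data model.
from datetime import datetime, timedelta

RETENTION = timedelta(days=30)   # common policy shared by every cluster
NOW = datetime(2022, 6, 30)

backups = [
    {"cluster": "eks-prod",  "taken": datetime(2022, 6, 25)},
    {"cluster": "aks-dev",   "taken": datetime(2022, 5, 1)},
    {"cluster": "gke-stage", "taken": datetime(2022, 6, 10)},
]

def expired(backup, now=NOW, retention=RETENTION):
    """A backup expires once it is older than the shared retention window."""
    return now - backup["taken"] > retention

to_prune = [b["cluster"] for b in backups if expired(b)]
print(to_prune)  # only the 60-day-old aks-dev backup is past retention
```

Because the policy lives in one place rather than per cluster, adding a new cluster to the list immediately puts its backups under the same schedule and retention rules.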

Top 10 Reasons for Using CloudCasa:

  1. Backup as a service
  2. Intuitive UI
  3. Multi-Cluster Management
  4. Cloud database protection
  5. Free Backup Storage
  6. Secure Backups
  7. Account Compromise Protection
  8. Cloud Provider Outage Protection
  9. Centralized Catalog and Reporting
  10. Backups are Monitored

With CloudCasa, we have your back, drawing on Catalogic Software’s many years of experience in enterprise data protection and disaster recovery. Our goal is to do all the hard work for you to back up and protect your multi-cloud, multi-cluster, cloud native databases and applications so you can realize the operational efficiency and development speed advantages of containers and cloud native applications.

How Parallels RAS Helps Institutions Deliver a Better Virtual Education Experience for Students
The COVID-19 pandemic had a huge impact on education, with institutions and learners having to adapt to a mix of online and in-person teaching. Remote and hybrid learning can certainly be challenging, but with the right technology solutions, institutions can provide students with a better virtual education experience. This white paper discusses how virtual desktop infrastructure (VDI) facilitates remote learning and how Parallels® Remote Application Server (RAS) can help create an improved virtual education experience.

Hybrid and remote learning environments have been on the rise since the start of the pandemic and will likely continue until the health risk abates completely. The good news is that virtualization technology has helped make educational resources more readily accessible to staff and students regardless of device and/or location, allowing teaching and learning to continue despite the pandemic.

This whitepaper identifies how virtual desktop infrastructure (VDI) enables better virtual learning environments by providing greater accessibility, mobility and flexibility for users. In addition, VDI can further enhance virtual education because it creates a customized learning experience by:

  • Giving students access to applications on any device.
  • Providing tailored access to educational IT infrastructure.
  • Delivering relevant applications based on students’ individual requirements.

Parallels Remote Application Server (RAS) is an all-in-one, cost-efficient VDI solution that provides seamless, secure access to educational applications and desktops for all stakeholders: administrators, educators, and students.
Download this whitepaper today to find out why so many educational institutions choose Parallels RAS for VDI and application delivery to create more effective virtual learning environments.

IGEL and LG Team to Improve the Digital Experience for Kaleida Health
Bringing secure, easy-to-manage, and high-performance access to cloud workspaces for Kaleida Health’s clinical and back office support teams, IGEL OS and LG’s All-in-One Thin Clients standardize and simplify the on-site and remote desktop experience with Citrix VDI.

Kaleida Health was looking to modernize the digital experience for its clinicians and back office support staff. Aging and inconsistent desktop hardware and evolving Windows OS support requirements were taxing the organization’s internal IT resources. Further, the desire to standardize on Citrix VDI for both on-site and remote workers meant the healthcare organization needed to identify a new software and hardware solution that would support simple and secure access to cloud workspaces.

The healthcare organization began the process by evaluating all of the major thin client OS vendors and determined IGEL to be the leader for multiple reasons: it is hardware-agnostic, stable, and has a small Linux-based footprint, and it offers a great management platform, the IGEL UMS, for both on-site users and remote access.

Kaleida Health also selected LG thin client monitors early on because the All-in-One form factor supports both back office teams and, more importantly, clinical areas including WoW carts, letting medical professionals securely log in and access information and resources from one protected data center.