The Definitive Guide to Monitoring Virtual Environments

OVERVIEW

The virtualization of physical computers has become the backbone of public and private cloud computing from desktops to data centers, enabling organizations to optimize hardware utilization, enhance security, support multi-tenancy and more. These environments are complex and ephemeral, creating requirements and challenges beyond the capability of traditional monitoring tools that were originally designed for static physical environments. But modern solutions exist, and can bring your virtual environment to new levels of efficiency, performance and scale.

This guide explains the pervasiveness of virtualized environments in modern data centers, the demand these environments create for more robust monitoring and analytics solutions, and the keys to getting the most out of virtualization deployments.

TABLE OF CONTENTS

  • History and Expansion of Virtualized Environments
  • Monitoring Virtual Environments
  • Approaches to Monitoring
  • Why Effective Virtualization Monitoring Matters
  • A Unified Approach to Monitoring Virtualized Environments
  • 5 Key Capabilities for Virtualization Monitoring
      ◦ Real-Time Awareness
      ◦ Rapid Root-Cause Analytics
      ◦ End-to-End Visibility
      ◦ Complete Flexibility
      ◦ Hypervisor Agnosticism
  • Evaluating a Monitoring Solution
      ◦ Unified View
      ◦ Scalability
      ◦ CMDB Support
      ◦ Converged Infrastructure
      ◦ Licensing
  • Zenoss for Virtualization Monitoring

High Availability Clusters in VMware vSphere without Sacrificing Features or Flexibility
This paper explains the challenges of moving important applications from traditional physical servers to virtualized environments such as VMware vSphere, adopted to take advantage of key benefits like configuration flexibility, data and application mobility, and efficient use of IT resources. It also highlights six key facts you should know about HA protection in VMware vSphere environments that can save you money.

Many large enterprises are moving important applications from traditional physical servers to virtualized environments, such as VMware vSphere, in order to take advantage of key benefits such as configuration flexibility, data and application mobility, and efficient use of IT resources.

Realizing these benefits with business-critical applications, such as SQL Server or SAP, can pose several challenges. Because these applications need high availability and disaster recovery protection, the move to a virtual environment can mean adding cost and complexity and limiting the use of important VMware features. This paper explains these challenges and highlights six key facts you should know about HA protection in VMware vSphere environments that can save you money.

IGEL Delivers Manageability, Scalability and Security for The Auto Club Group
The Auto Club Group realizes cost savings, increased productivity and improved time-to-value with IGEL’s software-defined endpoint management solutions.
In 2016, The Auto Club Group was starting to implement a virtual desktop infrastructure (VDI) solution leveraging Citrix XenDesktop on both its static endpoints and laptop computers used in the field by its insurance agents, adjusters and other remote employees. “We were having a difficult time identifying a solution that would enable us to simplify the management of our laptop computers, in particular, while providing us with the flexibility, scalability and security we wanted from an endpoint management perspective,” said James McVicar, IT Architect, The Auto Club Group.

Some of the mobility management solutions The Auto Club Group had been evaluating relied on Windows CE, a platform that is nearing end-of-life. “We didn’t want to deal with the patches and other management headaches related to a Windows-based solution, so this was not an attractive option,” said McVicar.

In the search for a mobile endpoint management solution, McVicar and his team came across IGEL and were quickly impressed. McVicar said, “What first drew our attention to IGEL was the ability to leverage the IGEL UDC to quickly and easily convert our existing laptop computers into an IGEL OS-powered desktop computing solution that we could then manage via the IGEL UMS. Because IGEL is Linux-based, we found that it offered both the functionality and stability we needed within our enterprise.”

As The Auto Club Group continues to expand its operations, it will be rolling out additional IGEL OS-powered endpoints to its remote workers, and expects its deployment to exceed 400 endpoints once the project is complete.

The Auto Club Group is also looking at possibly leveraging the IGEL Cloud Gateway, which will help bring more performance and functionality to those working outside of the corporate WAN.
vSphere Troubleshooting Guide
Troubleshooting complex virtualization technology is something all VMware users will have to face at some point. It requires an understanding of how various components fit together, and finding a place to start is not easy. Thankfully, VMware vExpert Ryan Birk is here to help with this eBook, preparing you for any problems you may encounter along the way.

This eBook explains how to identify problems with vSphere and how to solve them. Before we begin, we need to cover a few things that will make life easier. We’ll start with a troubleshooting methodology and how to gather logs. After that, we’ll break this eBook into the following sections: Installation, Virtual Machines, Networking, Storage, vCenter/ESXi and Clustering.

ESXi and vSphere problems arise from many different places, but they generally fall into one of these categories: hardware issues, resource contention, network attacks, software bugs, and configuration problems.

A typical troubleshooting process contains several tasks: (1) define the problem and gather information, (2) identify what is causing the problem, and (3) implement a fix.
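As a rough illustration of step 1, gathering information, here is a minimal PowerCLI sketch (not part of the eBook); the host name and output path are hypothetical, and it assumes the VMware.PowerCLI module is already installed:

    # Connect directly to the affected host (hypothetical name); you will be prompted for credentials
    Connect-VIServer -Server esxi01.lab.local

    # Quick inventory check: connection state, version and build of the host
    Get-VMHost | Select-Object Name, ConnectionState, Version, Build

    # Generate a full diagnostic log bundle (similar to running vm-support on the host)
    Get-Log -Bundle -DestinationPath C:\Temp\Logs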

One of the first things you should do when experiencing a problem with a host is try to reproduce the issue. If you can find a way to reproduce it, you have a great way to validate that the issue is resolved once you fix it. It can also be helpful to benchmark your systems before they go into production: if you know how they should be running, it’s easier to pinpoint a problem.
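For example, a simple baseline could be captured with PowerCLI before go-live; this is only a sketch, and the VM name "app01" is made up:

    # Capture recent CPU and memory usage for a VM as a rough baseline
    $vm = Get-VM -Name "app01"
    Get-Stat -Entity $vm -Stat cpu.usage.average, mem.usage.average -Realtime -MaxSamples 12 |
        Select-Object Timestamp, MetricId, Value, Unit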

You should decide whether it’s best to work from a “Top Down” or “Bottom Up” approach to determine the root cause. Guest OS-level issues typically cause a large number of problems. Let’s face it, some of the applications we use are not perfect. They get the job done, but they use a lot of memory doing it.

In terms of virtual machine-level issues, is it possible that you have a limit or share value that’s misconfigured? At the ESXi host level, you could simply need additional resources. It’s hard to believe sometimes, but you might need another host to help with the load!
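One quick way to spot misconfigured limits or shares is a PowerCLI query along these lines (an illustrative sketch, not taken from the eBook):

    # List VMs that have a CPU or memory limit configured (-1 means unlimited)
    Get-VM | Get-VMResourceConfiguration |
        Where-Object { $_.CpuLimitMhz -ne -1 -or $_.MemLimitMB -ne -1 } |
        Select-Object VM, CpuLimitMhz, MemLimitMB, CpuSharesLevel, MemSharesLevel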

Once you have identified the root cause, assess the impact of the problem on your day-to-day operations and decide what type of fix to implement: a short-term solution (a quick workaround) or a long-term solution (for example, reconfiguring a virtual machine or host). Either way, weigh the impact of the fix itself on daily operations.

Now that the basics have been covered, download the eBook to discover how to put this theory into practice!

Mastering vSphere – Best Practices, Optimizing Configurations & More
Do you regularly work with vSphere? If so, this free eBook is for you. Learn how to leverage best practices for the most popular features contained within the vSphere platform and boost your productivity using tips and tricks learned directly from an experienced VMware trainer and highly qualified professional. In this eBook, vExpert Ryan Birk shows you how to master: Advanced Deployment Scenarios using Auto-Deploy, Shared Storage, Performance Monitoring and Troubleshooting, and Host Network Configuration.

If you’re here to gather some of the best practices surrounding vSphere, you’ve come to the right place! Mastering vSphere: Best Practices, Optimizing Configurations & More, the free eBook authored by me, Ryan Birk, is the product of many years working with vSphere as well as teaching others in a professional capacity. In my extensive career as a VMware consultant and teacher (I’m a VMware Certified Instructor), I have worked with people of all competence levels and been asked hundreds, if not thousands, of questions on vSphere. I was approached to write this eBook to put that experience to use to help people currently working with vSphere step up their game and reach that next level. As such, this eBook assumes readers already have a basic understanding of vSphere and will cover the best practices for four key aspects of any vSphere environment.

The best practices covered here focus largely on management and configuration solutions, so they should remain relevant for quite some time. That said, things are constantly changing in IT, so I would always recommend obtaining the most up-to-date information from VMware KBs and official documentation, especially regarding specific versions of tools and software updates. This eBook is divided into several sections, and although I would advise reading the whole eBook as most elements relate to others, you might want to just focus on a certain area you’re having trouble with. If so, jump to the section you want to read about.

Before we begin, I want to note that in a VMware environment, it’s always best to try to keep things simple. Far too often I have seen environments get thrown off the tracks by trying to do too much at once. I try to live by the mentality of “keeping your environment boring” – in other words, keeping your host configurations the same, storage configurations the same and network configurations the same. I don’t mean duplicate IP addresses, but the hosts need identical port groups, access to the same storage networks, etc. Consistency is the name of the game and is key to solving unexpected problems down the line. Furthermore, it enables smooth scalability: when you move from a single host configuration to a cluster configuration, having the same configurations will make live migrations and high availability far easier to configure without having to significantly re-work the entire infrastructure. Now that the scene has been set, let’s get started!
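One way to verify that kind of consistency is to compare standard-switch port groups and VLAN IDs across hosts; the PowerCLI sketch below is an illustrative example rather than something from the eBook:

    # List standard-switch port groups per host to spot inconsistencies
    Get-VMHost | ForEach-Object {
        $esx = $_
        Get-VirtualPortGroup -VMHost $esx -Standard |
            Select-Object @{ N = 'VMHost'; E = { $esx.Name } }, Name, VLanId
    } | Sort-Object Name, VMHost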

Implementing High Availability in a Linux Environment
This white paper explores how organizations can lower both CapEx and OpEx running high-availability applications in a Linux environment without sacrificing performance or security.
Using open source solutions can dramatically reduce capital expenditures, especially for software licensing fees. But most organizations also understand that open source software needs more “care and feeding” than commercial software, sometimes substantially more, potentially causing operating expenditures to increase well above any potential savings in CapEx. This white paper explores how organizations can lower both CapEx and OpEx running high-availability applications in a Linux environment without sacrificing performance or security.
Controlling Cloud Costs without Sacrificing Availability or Performance
This white paper will help you prevent cloud services sticker shock from ever occurring again and make your cloud investments more effective.
After signing up with a cloud service provider, you receive a bill that causes sticker shock. There are unexpected and seemingly excessive charges, and those responsible seem unable to explain how this could have happened. The situation is critical because the amount threatens to bust the budget unless cost-saving changes are made immediately. The objective of this white paper is to help prevent cloud services sticker shock from occurring ever again.
PowerCLI - The Aspiring Automator's Guide
Automation is awesome, but don't just settle for using other people's scripts. Learn how to create your own scripts and take your vSphere automation game to the next level! Written by VMware vExpert Xavier Avrillier, this free eBook presents a use-case approach to learning how to automate tasks in vSphere environments using PowerCLI. We start by covering the basics of installation, setup, and an overview of PowerCLI terms. From there we move into scripting logic and script building with step-by-step examples.

Scripting and PowerCLI are words that most people working with VMware products know pretty well and have used once or twice. Everyone knows that scripting and automation are great assets to have in your toolbox. The problem is that getting into scripting appears daunting to many people who feel the learning curve is just too steep and don't know where to start. The good thing is you don't need to learn everything straight away to start working with PowerShell and PowerCLI. Once you have the basics down and have your curiosity tickled, you’ll learn what you need as you go, a lot faster than you thought you would!

ABOUT POWERCLI

Let's get to know PowerCLI a little better before we start getting our hands dirty in the command prompt. If you are reading this, you probably already know what PowerCLI is about or have a vague idea of it, but it’s fine if you don’t. After a while working with it, it becomes second nature, and you won't be able to imagine life without it anymore! Thanks to VMware's drive to push automation, the product's integration with all of their components has significantly improved over the years, and it has now become a critical part of their ecosystem.

WHAT IS PowerCLI?

Contrary to what many believe, PowerCLI is not in fact stand-alone software but rather a command-line and scripting tool built on Windows PowerShell for managing and automating vSphere environments. It used to be distributed as an executable file to install on a workstation, which generated an icon that would essentially launch PowerShell and load the PowerCLI snap-ins into the session. This changed back in version 6.5.1, when the executable was removed and replaced by a suite of PowerShell modules installed from within the prompt itself. This new deployment method is preferred because these modules are now part of Microsoft’s official PowerShell Gallery. These modules provide the means to interact with the components of a VMware environment and offer more than 600 cmdlets! The command below returns a full list of VMware-associated cmdlets.
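The command itself is not reproduced in this excerpt; assuming the modules are installed from the PowerShell Gallery, it is presumably along these lines:

    # Install the PowerCLI module suite from the PowerShell Gallery (one-time step)
    Install-Module -Name VMware.PowerCLI -Scope CurrentUser

    # List every cmdlet provided by the VMware modules
    Get-Command -Module VMware*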

Why Should Enterprises Move to a True Composable Infrastructure Solution?

IT Infrastructure needs are constantly fluctuating in a world where powerful emerging software applications such as artificial intelligence can create, transform, and remodel markets in a few months or even weeks. While the public cloud is a flexible solution, it doesn’t solve every data center need—especially when businesses need to physically control their data on premises. This leads to overspend: purchasing servers and equipment to meet peak demand at all times. The result? Expensive equipment sitting idle during non-peak times.

For years, companies have wrestled with overspend and underutilization of equipment, but now businesses can reduce cap-ex and rein in operational expenditures for underused hardware with software-defined composable infrastructure. With a true composable infrastructure solution, businesses realize optimal performance of IT resources while improving business agility. In addition, composable infrastructure allows organizations to take better advantage of the most data-intensive applications on existing hardware while preparing for future, disaggregated growth.

Download this report to see how composable infrastructure helps you deploy faster, effectively utilize existing hardware, rein in capital expenses, and more.

LQD4500 Gen4x16 NVMe SSD Performance Report
The LQD4500 is the World’s Fastest SSD.

The Liqid Element LQD4500 PCIe Add-in-Card (AIC), aka the “Honey Badger” for its fierce, lightning-fast data speeds, delivers Gen-4 PCIe performance with up to 4 million IOPS, 24 GB/s of throughput and ultra-low transactional latency of just 20 µs, in capacities up to 32TB.

This document contains test results and performance measurements for the Liqid LQD4500 Gen4x16 NVMe SSD. The performance reports include sequential, random, and latency measurements on the LQD4500 high-performance storage device. The data was measured in a Linux OS environment, with results taken per the SNIA enterprise performance test specification standards. The results below represent steady state after sufficient device preconditioning.

Download the full LQD4500 performance report and learn how the ultra-high performance and density Gen-4 AIC can supercharge data for your most demanding applications.
Make the Move: Linux Desktops with Cloud Access Software
Gone are the days when hosting Linux desktops on-premises was the only way to ensure uncompromised customization, choice and control. You can host Linux desktops and applications remotely and visualize them for end users to improve security, flexibility and performance. Learn why IT teams are virtualizing Linux.

Make the Move: Linux Remote Desktops Made Easy

Securely run Linux applications and desktops from the cloud or your data center.

Download this guide and learn...

  • Why organizations are virtualizing Linux desktops & applications
  • How different industries are leveraging remote Linux desktops & applications
  • What your organization can do to begin this journey


Ten Topics to Discuss with Your Cloud Provider
Find the “just right” cloud for your business. For this paper, we will focus on existing applications (vs. new application services) that require high levels of performance and security, but that also enable customers to meet specific cost expectations.

Choosing the right cloud service for your organization, or for your target customer if you are a managed service provider, can be time-consuming and effort-intensive. For this paper, we will focus on existing applications (vs. new application services) that require high levels of performance and security, but that also enable customers to meet specific cost expectations.

Topics covered include:

  • Global access and availability
  • Cloud management
  • Application performance
  • Security and compliance
  • And more!
10 Best Practices for VMware vSphere Backups
In 2021, VMware is still the market leader in the virtualization sector and, for many IT pros, VMware vSphere is the virtualization platform of choice. But can you keep up with the ever-changing backup demands of your organization, reduce complexity and outperform legacy backup?

Read this whitepaper to learn critical best practices for VMware vSphere with Veeam Backup & Replication v11, such as:

  • Choose the right backup mode wisely
  • Plan how to restore
  • Integrate Continuous Data Protection into your disaster recovery concept
  • And much more!
Process Optimization with Stratusphere UX
This whitepaper explores the developments of the past decade that have prompted the need for Stratusphere UX Process Optimization. We also cover how this feature works and the advantages it provides, including specific capital and operating cost benefits.

Managing the performance of Windows-based workloads can be a challenge. Whether physical PCs or virtual desktops, the effort required to maintain, tune and optimize workspaces is endless. Operating system and application revisions, user-installed applications, security and bug patches, BIOS and driver updates, spyware, and multi-user operating systems supply a continual flow of change that can disrupt expected performance. When you add in the complexities introduced by virtual desktops and cloud architectures, you have yet another source of performance instability. Keeping up with this churn, as well as meeting users’ zero tolerance for failures, are chief worries for administrators.

To help address the need for uniform performance and optimization in light of constant change, Liquidware introduced the Process Optimization feature in its Stratusphere UX solution. This feature can be set to automatically optimize CPU and memory, even as system demands fluctuate. Process Optimization can keep “bad actor” applications or runaway processes from crippling the performance of users’ workspaces by prioritizing resources for processes being actively used over idle or background processes.

The Process Optimization feature requires no additional infrastructure. It is a simple, zero-impact feature that is included with Stratusphere UX and can be turned on for single machines, groups, or globally. Launched with the check of a box, it lets you select from pre-built profiles that operate automatically, or administrators can manually specify the processes they need to raise, lower or terminate if that becomes necessary. This feature is a major benefit in hybrid, multi-platform environments that include physical, pool- or image-based virtual, and cloud workspaces, which are much more complex than single-delivery systems.

The Process Optimization feature was designed with security and reliability in mind. By default, it employs a “do no harm” provision affecting normal and lower process priorities, and a relaxed policy. No processes are forced by default when access is denied by the system, ensuring that the system remains stable and in line with requirements.
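To make the underlying idea concrete, the PowerShell sketch below lowers the priority of a hypothetical background process; it illustrates the general concept of process prioritization only and is not Liquidware's implementation, which is built into Stratusphere UX:

    # Conceptual illustration only: deprioritize a background process so active
    # foreground workloads are scheduled first ("UpdaterService" is a made-up name)
    Get-Process -Name "UpdaterService" -ErrorAction SilentlyContinue |
        ForEach-Object { $_.PriorityClass = [System.Diagnostics.ProcessPriorityClass]::BelowNormal }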

Spotcheck Inspection with Stratusphere UX
This whitepaper defines an inspection technique, along with the necessary broad-stroke steps, to perform a limited health check of an existing platform or architecture. The paper defines and provides a practical-use example that will help you to execute a SpotCheck inspection using Liquidware’s Stratusphere UX.
The ability to meet user expectations and deliver the appropriate user experience in a shared host and storage infrastructure can be a complex and challenging task. Further, the variability in deployment (settings and overall supportive infrastructure) on platforms such as VMware View and Citrix XenApp and XenDesktop makes these architectures complex and difficult to troubleshoot and optimize. This whitepaper defines an inspection technique, along with the necessary broad-stroke steps, to perform a limited health check of an existing platform or architecture. The paper defines and provides a practical-use example that will help you to execute a SpotCheck inspection using Liquidware’s Stratusphere UX.
Why User Experience is Key to Your Desktop Transformation
This whitepaper has been authored by experts at Liquidware and draws upon its experience with customers, as well as the expertise of its Acceler8 channel partners, to provide guidance to adopters of desktop virtualization technologies. In this paper, we explain the importance of thorough planning, factoring in user experience and resource allocation, in delivering a scalable next-generation workspace that will produce both near- and long-term value.

There’s little doubt we’re in the midst of a change in the way we operationalize and manage our end users’ workspaces. On the one hand, IT leaders are looking to gain the same efficiencies and benefits realized with cloud and next-generation virtual-server workloads. And on the other hand, users are driving the requirements for anytime, anywhere and any device access to the applications needed to do their jobs. To provide the next-generation workspaces that users require, enterprises are adopting a variety of technologies such as virtual-desktop infrastructure (VDI), published applications and layered applications. At the same time, those technologies are creating new and challenging problems for those looking to gain the full benefits of next-generation end-user workspaces. 

Before racing into any particular desktop transformation delivery approach, it’s important to define appropriate goals and adopt a methodology for both near- and long-term success. One of the most common planning pitfalls we’ve seen in our history supporting the transformation of more than 6 million desktops is that organizations tend to put too much emphasis on the technical delivery and resource allocation aspects of the platform, and too little time considering the needs of users. How to meet user expectations and deliver a user experience that fosters success is often overlooked.

To prevent that problem and achieve near-term success as well as sustainable long-term value from a next-generation desktop transformation approach, planning must also define a methodology that includes the following three things:

•    Develop a baseline of “normal” performance for current end user computing delivery
•    Set goals for functionality and defined measurements supporting user experience
•    Continually monitor the environment to ensure users are satisfied and the environment is operating efficiently

This white paper will show why the user experience is difficult to predict, why it’s essential to planning, and why factoring in the user experience, along with resource allocation, is key to creating and delivering the promise of a next-generation workspace that is scalable and will produce both near- and long-term value.