OVERVIEW
The virtualization of physical computers has become the backbone of public and private cloud computing, from desktops to data centers, enabling organizations to optimize hardware utilization, enhance security, support multi-tenancy, and more. These environments are complex and ephemeral, creating requirements and challenges beyond the capabilities of traditional monitoring tools, which were originally designed for static physical environments. But modern solutions exist and can bring your virtual environment to new levels of efficiency, performance, and scale.
This guide explains the pervasiveness of virtualized environments in modern data centers, the demand these environments create for more robust monitoring and analytics solutions, and the keys to getting the most out of virtualization deployments.
TABLE OF CONTENTS
· History and Expansion of Virtualized Environments
· Monitoring Virtual Environments
· Approaches to Monitoring
· Why Effective Virtualization Monitoring Matters
· A Unified Approach to Monitoring Virtualized Environments
· 5 Key Capabilities for Virtualization Monitoring
o Real-Time Awareness
o Rapid Root-Cause Analytics
o End-to-End Visibility
o Complete Flexibility
o Hypervisor Agnosticism
· Evaluating a Monitoring Solution
o Unified View
o Scalability
o CMDB Support
o Converged Infrastructure
o Licensing
· Zenoss for Virtualization Monitoring
Many large enterprises are moving important applications from traditional physical servers to virtualized environments, such as VMware vSphere, in order to take advantage of key benefits such as configuration flexibility, data and application mobility, and efficient use of IT resources.
Realizing these benefits with business-critical applications such as SQL Server or SAP can pose several challenges. Because these applications need high availability (HA) and disaster recovery protection, the move to a virtual environment can mean adding cost and complexity and limiting the use of important VMware features. This paper explains these challenges and highlights six key facts you should know about HA protection in VMware vSphere environments that can save you money.
This eBook explains how to identify problems with vSphere and how to solve them. Before we begin, we'll cover a few fundamentals that will make life easier: a troubleshooting methodology and how to gather logs. After that, the eBook is broken into the following sections: Installation, Virtual Machines, Networking, Storage, vCenter/ESXi, and Clustering.
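To make the log-gathering step concrete, here is a minimal PowerCLI sketch (not from the eBook itself; the vCenter address and host name are placeholders, and it assumes the VMware.PowerCLI module is installed):

# Connect to vCenter (server name is a placeholder)
Connect-VIServer -Server vcenter.example.com

# See which log types a host exposes, then read the tail of the vmkernel log
$vmhost = Get-VMHost -Name "esxi01"
Get-LogType -VMHost $vmhost
(Get-Log -VMHost $vmhost -Key "vmkernel").Entries | Select-Object -Last 50

# Or export a full diagnostic bundle, e.g. to attach to a support request
Get-Log -VMHost $vmhost -Bundle -DestinationPath "C:\Logs"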
ESXi and vSphere problems arise from many different places, but they generally fall into one of these categories: hardware issues, resource contention, network attacks, software bugs, and configuration problems.
A typical troubleshooting process contains several tasks: 1. Define the problem and gather information. 2. Identify what is causing the problem. 3. Implement a fix.
One of the first things you should do when experiencing a problem with a host is try to reproduce the issue. If you can find a way to reproduce it, you have a great way to validate that the issue is resolved when you do fix it. It can also be helpful to benchmark your systems before they are put into a production environment. If you know HOW they should be running, it's easier to pinpoint a problem.
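For example, a baseline might be captured with PowerCLI's Get-Stat cmdlet; this is a sketch with placeholder names, and the intervals available depend on the statistics levels configured in vCenter:

# Pull a week of CPU and memory usage for a host at 30-minute granularity
$vmhost = Get-VMHost -Name "esxi01"
$baseline = Get-Stat -Entity $vmhost -Stat "cpu.usage.average","mem.usage.average" `
    -Start (Get-Date).AddDays(-7) -IntervalMins 30

# Save it so you can compare against the same query after a problem appears
$baseline | Export-Csv -Path "C:\Baselines\esxi01.csv" -NoTypeInformation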
You should decide whether it's best to work from a “Top Down” or “Bottom Up” approach to determine the root cause. Guest OS-level issues typically cause a large number of problems. Let's face it: some of the applications we use are not perfect. They get the job done, but they use a lot of memory doing it.
In terms of virtual machine-level issues, is it possible that a limit or share value is misconfigured? At the ESXi host level, you could need additional resources. It's hard to believe sometimes, but you might need another host to help with the load!
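Misconfigured limits and non-default shares can be surfaced quickly with PowerCLI; here is a hedged sketch (property names follow recent PowerCLI versions, where older releases use MemLimitMB instead of MemLimitGB):

# Flag VMs that have an explicit CPU/memory limit or non-Normal shares
# (an unset limit typically shows as $null)
Get-VM | Get-VMResourceConfiguration |
    Where-Object {
        $_.CpuLimitMhz -or $_.MemLimitGB -or
        $_.CpuSharesLevel -ne "Normal" -or $_.MemSharesLevel -ne "Normal"
    } |
    Select-Object VM, CpuLimitMhz, MemLimitGB, CpuSharesLevel, MemSharesLevel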
Once you have identified the root cause, assess the impact of the problem on your day-to-day operations, then decide when and what type of fix to implement. A short-term solution is a quick workaround; a long-term solution might mean reconfiguring a virtual machine or host. Either way, assess the impact of your fix on daily operations as well.
Now that the basics have been covered, download the eBook to discover how to put this theory into practice!
If you’re here to gather some of the best practices surrounding vSphere, you’ve come to the right place! Mastering vSphere: Best Practices, Optimizing Configurations & More, the free eBook authored by me, Ryan Birk, is the product of many years of working with vSphere as well as teaching others in a professional capacity. In my extensive career as a VMware consultant and teacher (I’m a VMware Certified Instructor), I have worked with people of all competence levels and been asked hundreds - if not thousands - of questions on vSphere. I was approached to write this eBook to put that experience to use to help people currently working with vSphere step up their game and reach that next level. As such, this eBook assumes readers already have a basic understanding of vSphere and will cover the best practices for four key aspects of any vSphere environment.
The best practices covered here will focus largely on management and configuration solutions, so they should remain relevant for quite some time. With that said, things are constantly changing in IT, so I would always recommend obtaining the most up-to-date information from VMware KBs and official documentation, especially regarding specific versions of tools and software updates. This eBook is divided into several sections, and although I would advise reading the whole eBook as most elements relate to others, you might want to focus on a certain area you’re having trouble with. If so, jump to the section you want to read about.
Before we begin, I want to note that in a VMware environment, it’s always best to keep things simple. Far too often I have seen environments thrown off track by trying to do too much at once. I try to live by the mentality of “keeping your environment boring” – in other words, keeping your host configurations the same, storage configurations the same, and network configurations the same. I don’t mean duplicate IP addresses, but the hosts need identical port groups, access to the same storage networks, etc. Consistency is the name of the game and is key to solving unexpected problems down the line. Furthermore, it enables smooth scalability: when you move from a single-host configuration to a cluster configuration, having the same configurations will make live migrations and high availability far easier to configure without having to significantly re-work the entire infrastructure. Now that the scene has been set, let’s get started!
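As a small illustration of the consistency point, a script along these lines can flag hosts in a cluster that are missing a standard port group the other hosts have (the cluster name "Prod" is a placeholder):

# Collect every standard-switch port group name seen in the cluster
$hosts = Get-Cluster -Name "Prod" | Get-VMHost
$allGroups = $hosts | Get-VirtualPortGroup -Standard |
    Select-Object -ExpandProperty Name -Unique

# Report any host that lacks one of those port groups
foreach ($h in $hosts) {
    $groups = ($h | Get-VirtualPortGroup -Standard).Name
    $missing = $allGroups | Where-Object { $groups -notcontains $_ }
    if ($missing) { "{0} is missing: {1}" -f $h.Name, ($missing -join ", ") }
}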
Scripting and PowerCLI are words that most people working with VMware products know pretty well and have used once or twice. Everyone knows that scripting and automation are great assets to have in your toolbox. The problem is that getting into scripting appears daunting to many people who feel the learning curve is just too steep and don't know where to start. The good thing is you don't need to learn everything straight away to start working with PowerShell and PowerCLI. Once you have the basics down and your curiosity tickled, you’ll learn what you need as you go, a lot faster than you thought you would!
ABOUT POWERCLI
Let's get to know PowerCLI a little better before we start getting our hands dirty at the command prompt. If you are reading this, you probably already know what PowerCLI is about or have a vague idea of it, but it’s fine if you don’t. After a while working with it, it becomes second nature, and you won't be able to imagine life without it anymore! Thanks to VMware's drive to push automation, the product's integration with all of VMware's components has significantly improved over the years, and it has now become a critical part of their ecosystem.
WHAT IS PowerCLI?
Contrary to what many believe, PowerCLI is not in fact stand-alone software but rather a command-line and scripting tool built on Windows PowerShell for managing and automating vSphere environments. It used to be distributed as an executable file to install on a workstation; an icon was generated that would essentially launch PowerShell and load the PowerCLI snap-ins into the session. This behavior changed back in version 6.5.1, when the executable file was removed and replaced by a suite of PowerShell modules to install from within the prompt itself. This new deployment method is preferred because these modules are now part of Microsoft’s official PowerShell Gallery. These modules provide the means to interact with the components of a VMware environment and offer more than 600 cmdlets! The command below returns a full list of VMware-associated cmdlets.
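One common way to do this (the exact command is assumed here), along with the Install-Module step that fetches PowerCLI from the Gallery in the first place:

# Install PowerCLI from the PowerShell Gallery (one-time, per user)
Install-Module -Name VMware.PowerCLI -Scope CurrentUser

# List every cmdlet the VMware modules provide
Get-Command -Module VMware*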
IT infrastructure needs are constantly fluctuating in a world where powerful emerging software applications such as artificial intelligence can create, transform, and remodel markets in a few months or even weeks. While the public cloud is a flexible solution, it doesn’t solve every data center need, especially when businesses need to physically control their data on premises. Provisioning on-premises capacity leads to overspend: purchasing servers and equipment to meet peak demand at all times. The result? Expensive equipment sitting idle during non-peak times. For years, companies have wrestled with overspend and underutilization of equipment, but now businesses can reduce capital expenditures and rein in operational expenditures for underused hardware with software-defined composable infrastructure. With a true composable infrastructure solution, businesses realize optimal performance of IT resources while improving business agility. In addition, composable infrastructure allows organizations to take better advantage of the most data-intensive applications on existing hardware while preparing for future, disaggregated growth.
Download this report to see how composable infrastructure helps you deploy faster, effectively utilize existing hardware, rein in capital expenses, and more.
Make the Move: Linux Remote Desktops Made Easy
Securely run Linux applications and desktops from the cloud or your data center.
Download this guide and learn...
Choosing the right cloud service for your organization, or for your target customer if you are a managed service provider, can be time-consuming and effort-intensive. For this paper, we will focus on existing applications (vs. new application services) that require high levels of performance and security, but that also enable customers to meet specific cost expectations.
Topics covered include:
Read this whitepaper to learn critical best practices for VMware vSphere with Veeam Backup & Replication v11, such as:
Managing the performance of Windows-based workloads can be a challenge. Whether for physical PCs or virtual desktops, the effort required to maintain, tune, and optimize workspaces is endless. Operating system and application revisions, user-installed applications, security and bug patches, BIOS and driver updates, spyware, and multi-user operating systems supply a continual flow of change that can disrupt expected performance. When you add in the complexities introduced by virtual desktops and cloud architectures, you have yet another endless source of performance instability. Keeping up with this churn, as well as meeting users’ zero tolerance for failures, is a chief worry for administrators.
To help address the need for uniform performance and optimization in light of constant change, Liquidware introduced the Process Optimization feature in its Stratusphere UX solution. This feature can be set to automatically optimize CPU and memory, even as system demands fluctuate. Process Optimization can keep “bad actor” applications or runaway processes from crippling the performance of users’ workspaces by prioritizing resources for processes being actively used over unused or background processes. The feature requires no additional infrastructure; it is a simple, zero-impact capability included with Stratusphere UX. It can be turned on for single machines, for groups, or globally. Launched with the check of a box, it offers pre-built profiles that operate automatically, or administrators can manually specify the processes they need to raise, lower, or terminate if that becomes required. This is a major benefit in hybrid multi-platform environments that include physical, pool- or image-based virtual, and cloud workspaces, which are much more complex than single-delivery systems.
The Process Optimization feature was designed with security and reliability in mind. By default, it employs a “do no harm” provision affecting only normal and lower process priorities, along with a relaxed policy: no processes are forced when access is denied by the system, ensuring that the system remains stable and in line with requirements.
There’s little doubt we’re in the midst of a change in the way we operationalize and manage our end users’ workspaces. On the one hand, IT leaders are looking to gain the same efficiencies and benefits realized with cloud and next-generation virtual-server workloads. And on the other hand, users are driving the requirements for anytime, anywhere and any device access to the applications needed to do their jobs. To provide the next-generation workspaces that users require, enterprises are adopting a variety of technologies such as virtual-desktop infrastructure (VDI), published applications and layered applications. At the same time, those technologies are creating new and challenging problems for those looking to gain the full benefits of next-generation end-user workspaces.
Before racing into any particular desktop transformation delivery approach it’s important to define appropriate goals and adopt a methodology for both near- and long-term success. One of the most common planning pitfalls we’ve seen in our history supporting the transformation of more than 6 million desktops is that organizations tend to put too much emphasis on the technical delivery and resource allocation aspects of the platform, and too little time considering the needs of users. How to meet user expectations and deliver a user experience that fosters success is often overlooked.
To prevent that problem and achieve near-term success as well as sustainable long-term value from a next-generation desktop transformation approach, planning must also include defining a methodology that should include the following three things:
• Develop a baseline of “normal” performance for current end-user computing delivery
• Set goals for functionality and defined measurements supporting user experience
• Continually monitor the environment to ensure users are satisfied and the environment is operating efficiently
This white paper will show why the user experience is difficult to predict, why it’s essential to planning, and why factoring in the user experience—along with resource allocation—is key to creating and delivering the promise of a next-generation workspace that is scalable and will produce both near- and long-term value.