OVERVIEW
The virtualization of physical computers has become the backbone of public and private cloud computing, from desktops to data centers, enabling organizations to optimize hardware utilization, enhance security, support multi-tenancy and more. These environments are complex and ephemeral, creating requirements and challenges beyond the capabilities of traditional monitoring tools, which were originally designed for static physical environments. But modern solutions exist, and they can bring your virtual environment to new levels of efficiency, performance and scale.
This guide explains the pervasiveness of virtualized environments in modern data centers, the demand these environments create for more robust monitoring and analytics solutions, and the keys to getting the most out of virtualization deployments.
TABLE OF CONTENTS
· History and Expansion of Virtualized Environments
· Monitoring Virtual Environments
· Approaches to Monitoring
· Why Effective Virtualization Monitoring Matters
· A Unified Approach to Monitoring Virtualized Environments
· 5 Key Capabilities for Virtualization Monitoring
o Real-Time Awareness
o Rapid Root-Cause Analytics
o End-to-End Visibility
o Complete Flexibility
o Hypervisor Agnosticism
· Evaluating a Monitoring Solution
o Unified View
o Scalability
o CMDB Support
o Converged Infrastructure
o Licensing
· Zenoss for Virtualization Monitoring
Fulton Financial Corporation has a long and storied history that began in 1882 in Lancaster, Pennsylvania, where local merchants and farmers organized Fulton National Bank. The bank’s name was chosen to honor Lancaster County native Robert Fulton, the inventor and artist best known for designing and building the Clermont, the first commercially successful steamboat.
To optimize employee productivity and give staff more time to focus on their customers, Fulton sought to upgrade the thin clients supporting its Citrix application virtualization infrastructure, with the help of its Citrix partner and IGEL Platinum Partner, Plan B Technologies.
In selecting a desktop computing solution to support its Citrix application virtualization infrastructure, Fulton had one unique business requirement: a solution that would mirror the experience provided by a Windows PC, without actually being a Windows PC.
During the evaluation process, Fulton looked at thin clients from IGEL and another leading manufacturer, conducting a “bake-off” of several models including the IGEL Universal Desktop (UD6). Fulton liked the fact that IGEL is forward-thinking in designing its desktop computing solutions. It began its IGEL roll-out by purchasing 2,300 IGEL UD6 thin clients in 2016 for its headquarters and branch offices, and plans to complete the roll-out to the remainder of its 3,700 employees in the coming months. The bank is also leveraging the IGEL Universal Management Suite (UMS) to manage its fleet of IGEL thin clients.
Headquartered in Austin, Texas, Trinsic Technologies is a technology solutions provider focused on delivering managed IT and cloud solutions to SMBs since 2005.
In 2014, Trinsic introduced Anytime Cloud, a Desktop-as-a-Service (DaaS) offering designed to help SMB clients improve the end user computing experience and streamline business operations. To support Anytime Cloud, the solution provider was looking for a desktop delivery and endpoint management solution that would fulfill a variety of different end user needs and requirements across the multiple industries it serves. Trinsic also wanted a solution that provided ease of management and robust security features for clients operating within regulated industries such as healthcare and financial services.
The solution provider selected the IGEL Universal Desktop (UD) thin clients, the IGEL Universal Desktop Converter (UDC), the IGEL OS and the IGEL Universal Management Suite. As a result, some of the key benefits Trinsic has experienced include ease of management and configuration, security and data protection, improved resource allocation and cost savings.
Print data is generally unencrypted and almost always contains personal, proprietary or sensitive information. Even a simple print request sent from an employee may potentially pose a high security risk for an organization if not adequately monitored and managed. To put it bluntly, the printing processes that are repeated countless times every day at many organizations are great ways for proprietary data to end up in the wrong hands.
Mitigating this risk, however, should not impact the workforce flexibility and productivity print-anywhere capabilities deliver. Organizations seek to adopt print solutions that satisfy government-mandated regulations for protecting end users and that protect proprietary organizational data — all while providing a first-class desktop and application experience for users.
This solution guide outlines some of the regulatory issues any business faces when it prints sensitive material. It discusses how a Citrix-IGEL-ThinPrint bundled solution meets regulation criteria such as HIPAA standards and the EU’s soon-to-be-enacted General Data Protection Regulation (GDPR) without diminishing user convenience and productivity.
Finally, this guide provides high-level directions and recommendations for the deployment of the bundled solution.
Many large enterprises are moving important applications from traditional physical servers to virtualized environments, such as VMware vSphere, in order to take advantage of key benefits such as configuration flexibility, data and application mobility, and efficient use of IT resources.
Realizing these benefits with business critical applications, such as SQL Server or SAP, can pose several challenges. Because these applications need high availability and disaster recovery protection, the move to a virtual environment can mean adding cost and complexity and limiting the use of important VMware features. This paper explains these challenges and highlights six key facts you should know about HA protection in VMware vSphere environments that can save you money.
Are you ready to achieve #monitoringglory?
After reading this e-book, "Monitoring 201", you will:
With the onset of more modular and cloud-centric architectures, many organizations with disparate monitoring tools are reassessing their monitoring landscape. According to Gartner, enterprises running hybrid IT (especially those with IaaS subscriptions) must adopt more holistic IT infrastructure monitoring (ITIM) tools to gain visibility into their IT landscapes.
The guide provides insight into the IT infrastructure monitoring tool market and providers as well as key findings and recommendations.
Get the 2018 Gartner Market Guide for IT Infrastructure Monitoring Tools to see:
Key Findings Include:
If you’re here to gather some of the best practices surrounding vSphere, you’ve come to the right place! Mastering vSphere: Best Practices, Optimizing Configurations & More, the free eBook authored by me, Ryan Birk, is the product of many years working with vSphere as well as teaching others in a professional capacity. In my extensive career as a VMware consultant and teacher (I’m a VMware Certified Instructor) I have worked with people of all competence levels and been asked hundreds - if not thousands - of questions on vSphere. I was approached to write this eBook to put that experience to use to help people currently working with vSphere step up their game and reach that next level. As such, this eBook assumes readers already have a basic understanding of vSphere and will cover the best practices for four key aspects of any vSphere environment.
The best practices covered here will focus largely on management and configuration solutions, so they should remain relevant for quite some time. With that said, things are constantly changing in IT, so I would always recommend obtaining the most up-to-date information from VMware KBs and official documentation, especially regarding specific versions of tools and software updates. This eBook is divided into several sections, and although I would advise reading the whole eBook as most elements relate to others, you might want to just focus on a certain area you’re having trouble with. If so, jump to the section you want to read about.
Before we begin, I want to note that in a VMware environment, it’s always best to try to keep things simple. Far too often I have seen environments thrown off the tracks by trying to do too much at once. I try to live by the mentality of “keeping your environment boring” – in other words, keeping your host configurations the same, storage configurations the same and network configurations the same. I don’t mean duplicate IP addresses, but the hosts need identical port groups, access to the same storage networks, etc. Consistency is the name of the game and is key to solving unexpected problems down the line. Furthermore, it enables smooth scalability: when you move from a single host configuration to a cluster configuration, having the same configurations will make live migrations and high availability far easier to configure without having to significantly re-work the entire infrastructure. Now that the scene has been set, let’s get started!
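The consistency principle above can even be checked programmatically. Here is a minimal sketch of a drift checker; the host names, port groups and datastores are hypothetical, and in a real environment you would pull this inventory from the vSphere API (for example via pyVmomi or PowerCLI) rather than hard-coding it:

```python
# Hypothetical host inventory; in practice this data would come from the
# vSphere API (e.g. pyVmomi). Names are illustrative only.
hosts = {
    "esx01": {"port_groups": {"Mgmt", "vMotion", "VM-Net"},
              "datastores": {"ds-gold", "ds-silver"}},
    "esx02": {"port_groups": {"Mgmt", "vMotion", "VM-Net"},
              "datastores": {"ds-gold", "ds-silver"}},
    "esx03": {"port_groups": {"Mgmt", "VM-Net"},      # missing vMotion
              "datastores": {"ds-gold"}},             # missing ds-silver
}

def find_drift(hosts):
    """Report items each host is missing relative to the union of all hosts."""
    drift = {}
    for key in ("port_groups", "datastores"):
        union = set().union(*(h[key] for h in hosts.values()))
        for name, cfg in hosts.items():
            missing = union - cfg[key]
            if missing:
                drift.setdefault(name, {})[key] = sorted(missing)
    return drift

# esx03 is flagged: it lacks the vMotion port group and the ds-silver datastore
print(find_drift(hosts))
```

A report like this makes it obvious before you build a cluster which host would break vMotion or HA, which is exactly the kind of boring consistency the text advocates.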
In The Forrester Wave: Intelligent Application and Service Monitoring, Q2 2019, Forrester identified the 13 most significant IASM providers in the market today, with Zenoss ranked among them as a Leader.

“As complexity grows, I&O teams struggle to obtain full visibility into their environments and do troubleshooting. To meet rising customer expectations, operations leaders need new monitoring technologies that can provide a unified view of all components of a service, from application code to infrastructure.”

Who Should Read This
Enterprise organizations looking for a solution to provide:
Our Takeaways
Trends impacting the infrastructure and operations (I&O) team include:
There are many new challenges, and many good reasons, to migrate workloads to the cloud.
For example, here are four of the most popular:
Whether it is for backup, disaster recovery, or production in the cloud, you should be able to leverage the cloud platform to solve your technology challenges. In this step-by-step guide, we outline how GCP is positioned to be one of the easiest cloud platforms for app development, and the critical role data protection as-a-service (DPaaS) can play.
The primary goal of a multi-cloud data management strategy is to supply data, by copying or moving it, to the various multi-cloud use cases. A key enabler of this movement is data management software. In theory, data protection applications can perform both the copy and the move functions. A key consideration is how the multi-cloud data management experience is unified: in most cases, data protection applications ignore the user experience of each cloud and use their own proprietary interface as the unifying entity, which increases complexity.
There are a variety of reasons organizations may want to leverage multiple clouds. The first use case is to use public cloud storage as a backup mirror to an on-premises data protection process. Using public cloud storage as a backup mirror enables the organization to automatically store data off-site. It also sets up many of the more advanced use cases.
Another use case is using the cloud for disaster recovery.
Another use case is “Lift and Shift,” which means the organization wants to run the application in the cloud natively. Initial steps in the “lift and shift” use case are similar to Dev/Test, but now the workload is storing unique data in the cloud.
Multi-cloud is a reality now for most organizations and managing the movement of data between these clouds is critical.
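The copy and move functions discussed above can be sketched in the abstract. In this minimal illustration, local directories stand in for cloud targets; a real implementation would use each provider's SDK (for example google-cloud-storage or boto3), and the function names are hypothetical:

```python
import shutil
from pathlib import Path

def copy_to_cloud(src: Path, dst: Path) -> None:
    """Copy function: source data stays in place (e.g. a backup mirror)."""
    dst.mkdir(parents=True, exist_ok=True)
    for f in src.iterdir():
        shutil.copy2(f, dst / f.name)   # preserve timestamps/metadata

def move_to_cloud(src: Path, dst: Path) -> None:
    """Move function: data is relocated to the cloud (e.g. lift and shift)."""
    copy_to_cloud(src, dst)
    for f in src.iterdir():
        f.unlink()                      # remove originals only after the copy
```

The design point mirrors the text: a copy leaves the on-premises data intact (backup and DR use cases), while a move makes the cloud the authoritative home of the data (lift and shift).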
Managing the performance of Windows-based workloads can be a challenge. Whether physical PCs or virtual desktops, the effort required to maintain, tune and optimize workspaces is endless. Operating system and application revisions, user-installed applications, security and bug patches, BIOS and driver updates, spyware, and multi-user operating systems all supply a continual flow of change that can disrupt expected performance. When you add in the complexities introduced by virtual desktops and cloud architectures, you have another infinite source of performance instability. Keeping up with this churn, as well as meeting users’ zero tolerance for failures, are chief worries for administrators.
To help address the need for uniform performance and optimization in light of constant change, Liquidware introduced the Process Optimization feature in its Stratusphere UX solution. The feature can be set to automatically optimize CPU and memory even as system demands fluctuate, and it can keep “bad actor” applications or runaway processes from crippling the performance of users’ workspaces by prioritizing resources for processes being actively used over idle or background processes.

The Process Optimization feature requires no additional infrastructure. It is a simple, zero-impact feature included with Stratusphere UX that can be turned on for single machines, for groups, or globally. Launched with the check of a box, it offers pre-built profiles that operate automatically, or administrators can manually specify the processes they need to raise, lower or terminate if that becomes required. This is a major benefit in hybrid multi-platform environments that include physical, pool- or image-based virtual, and cloud workspaces, which are much more complex than single-delivery systems.

The Process Optimization feature was designed with security and reliability in mind. By default, it employs a “do no harm” provision affecting normal and lower process priorities, and a relaxed policy. No processes are forced by default when access is denied by the system, ensuring that the system remains stable and in line with requirements.
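The prioritization logic described above can be illustrated in the abstract. This is a hypothetical sketch of a “do no harm” priority policy, not Liquidware's actual implementation; the process model, threshold and priority bounds are all assumptions made for the example:

```python
from dataclasses import dataclass

@dataclass
class Proc:
    name: str
    cpu_pct: float    # recent CPU usage in percent
    foreground: bool  # actively used by the user?
    priority: int     # 0 = normal; positive = raised, negative = lowered

def optimize(procs, cpu_threshold=50.0):
    """Raise active foreground work, rein in 'bad actor' background processes.

    'Do no harm' provision (an assumption modeled on the text): foreground
    processes are never lowered, and no process is dropped below -1.
    """
    for p in procs:
        if p.foreground:
            p.priority = max(p.priority, 1)       # prioritize active work
        elif p.cpu_pct > cpu_threshold:
            p.priority = max(p.priority - 1, -1)  # relaxed lower bound
    return procs

procs = [Proc("editor", 12.0, True, 0), Proc("indexer", 85.0, False, 0)]
optimize(procs)  # editor is raised; the runaway indexer is gently lowered
```

The conservative lower bound is the interesting design choice: a runaway background process is throttled just enough to protect interactive work, never starved outright.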
Managing Windows user profiles can be a complex and challenging process. Better profile management is usually sought by organizations looking to reduce Windows login times, accommodate applications that do not adhere to best-practice application data storage, and give users the flexibility to log in to any Windows Operating System (OS) and have their profile follow them.
Note that additional profile challenges and solutions are covered in a related ProfileUnity whitepaper entitled “User Profile and Environment Management with ProfileUnity.”

To efficiently manage the complex challenges of today’s diverse Windows profile environments, Liquidware ProfileUnity exclusively features two user profile technologies that can be used together or separately depending on the use case. These include:
1. ProfileDisk™, a virtual disk-based profile that delivers the entire profile as a layer from an attached user VHD or VMDK, and
2. Profile Portability, a file- and registry-based profile solution that restores files at login, post login, or based on environment triggers.