Although Lockton is the world's largest privately owned insurance brokerage firm, clients most often describe its people as team members who make their businesses better. Energy, innovation, and deep expertise fuel Lockton's focus on solving its clients' problems and achieving real results.
It is this spirit of innovation that recently led Lockton to deploy Citrix Virtual Desktop Infrastructure (VDI) at its headquarters in the Country Club Plaza area of Kansas City, Missouri, and at local offices across the United States and around the globe.
Lockton was about a quarter of the way through its Citrix VDI deployment when one of its thin client manufacturers, without warning, changed key features on the model the insurance broker was using. This presented a number of challenges that Lockton's IT team was unable to overcome during a critical stage of the VDI rollout.
Around the same time Lockton was experiencing issues with the thin client manufacturer, the company participated in a charity golf tournament in Kansas City hosted by IGEL Platinum Partner Choice Solutions and sponsored by IGEL. Following the golf tournament, Lockton tested the IGEL Universal Desktop (UD2-LX) thin client and the IGEL Universal Management Suite (UMS) management console. The insurer immediately realized that the superior design and secure infrastructure management capabilities would make it possible to easily manage its entire network of thin clients in the U.S. from the company's headquarters in Kansas City, and that became a key selling point.
To date, with the help of Choice Solutions, Lockton has deployed 1,200 IGEL UD2-LX thin clients and expects that number to increase to 1,800 by the end of 2017. Lockton also has a small number of licenses for the IGEL Universal Desktop Converter software, which it is leveraging to turn hardware from other thin client manufacturers into IGEL-powered endpoints. Additionally, the insurer is using the IGEL UD Pocket to deliver the IGEL desktop to employees using legacy thin client hardware.
How to navigate between the trenches
Hybrid IT has moved from buzzword status to reality, and organizations are realizing its potential impact. Some aspects of your infrastructure may remain in a traditional setting while others run on cloud infrastructure, which creates considerable complexity. So, what does this mean for you? "A Journey Through Hybrid IT and the Cloud" provides insight into navigating these challenges.
IT security has always been a major concern for businesses that accept online credit card payments. They hold sensitive information that malicious hackers are after: cardholder data. This is why such businesses are legally obliged to build IT systems and networks that are PCI DSS compliant.
What Is PCI DSS?
The Payment Card Industry Data Security Standard (PCI DSS) is a security standard developed by the PCI Security Standards Council. Designed for businesses that process online transactions and hold customers' payment records, it helps them build and maintain secure IT systems and networks, ensuring the privacy and security of their customers' credit-card details and cardholder data.
The set of standards defined in the PCI DSS represents the minimum level of computer systems security that must be in place when processing credit-card data. These standards apply to merchants, processors, financial institutions, service providers, and any other entity that stores, processes, or transmits credit-card and cardholder information.
Why Businesses Need to Be PCI DSS Compliant
The challenges of building and maintaining a PCI DSS-compliant network are many and depend on several factors, such as the type of software used, the network setup, and the procedures in place. If organizations that process credit-card payments and store cardholder details fail to build PCI DSS-compliant networks and computer systems, they risk being fined up to $500,000 per month or, even worse, having their trading license revoked.
This white paper explains how using Parallels Remote Application Server (RAS) can help organizations build scalable PCI DSS–compliant networks and also save on costs and administration overheads.
Many organizations have turned to virtualizing user endpoints to help reduce capital and operational expenses while increasing security. This is especially true within healthcare, where hospitals, clinics, and urgent care centers seek to offer the best possible patient outcomes while adhering to a variety of mandated patient security and information privacy requirements.
With the movement of desktops and applications into the secure data center or cloud, the need for reliable printing of documents, some very sensitive in nature, remains a constant that can be challenging when desktops are virtual but the printing process remains physical. Directing print jobs to the correct printer, with the correct physical access rights, in the correct location, while ensuring compliance with key healthcare mandates like the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA), is critical.
Healthcare IT needs to keep pace with these requirements and the ongoing printing demands of healthcare. Medical professionals need to print effortlessly and reliably to nearby or appropriate printers within virtual environments, and PrinterLogic and IGEL can help make that an easy, reliable process, all while efficiently maintaining the protection of confidential patient information. By combining PrinterLogic's enterprise print management software and centrally managed direct IP printing with IGEL's software-defined thin client endpoint management, healthcare organizations can meet both demands.
A2U, an IGEL Platinum Partner, recently experienced a situation where one of its large, regional healthcare clients was hit by a cyberattack. “Essentially, malware entered the client’s network via a computer and began replicating like wildfire,” recalls A2U Vice President of Sales, Robert Hammond.
During the cyberattack, a few hundred of the hospital’s PCs were affected. Among those were 30 endpoints within the finance department that the healthcare organization deemed mission critical due to the volume of daily transactions between patients, insurance companies, and state and county agencies for services rendered. “It was very painful from a business standpoint not to be able to conduct billing and receiving, not to mention payroll,” said Hammond.
Prior to this particular incident, A2U had received demo units of the IGEL UD Pocket, a revolutionary micro thin client that can transform x86-compatible PCs and laptops into IGEL OS-powered desktops.
“We had been having a discussion with this client about re-imaging their PCs, but their primary concern was maintaining the integrity of the data that was already on the hardware,” continued Hammond. “HIPAA and other regulations meant that they needed to preserve the data and keep it secure, and we thought that the IGEL UD Pocket could be the answer to this problem. We didn’t see why it wouldn’t work, but we needed to test our theory.”
When the malware attack hit, that opportunity came sooner, rather than later for A2U. “We plugged the UD Pocket into one of the affected machines and were able to bypass the local hard drive, installing the Linux-based IGEL OS on the system without impacting existing data,” said Hammond. “It was like we had created a ‘Linux bubble’ that protected the machine, yet created an environment that allowed end users to quickly return to productivity.”
Working with the hospital's IT team, A2U got the entire finance department back online in just a few hours. "They were able to start billing the very next day," added Hammond.
This eBook explains how to identify problems with vSphere and how to solve them. Before we begin, we'll cover a few things that will make life easier: a troubleshooting methodology and how to gather logs. After that, the eBook is broken into the following sections: Installation, Virtual Machines, Networking, Storage, vCenter/ESXi, and Clustering.
ESXi and vSphere problems arise from many different places, but they generally fall into one of these categories: hardware issues, resource contention, network attacks, software bugs, and configuration problems.
A typical troubleshooting process contains several tasks: 1. Define the problem and gather information. 2. Identify what is causing the problem. 3. Implement a fix and confirm that it resolves the problem.
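Since step 1 is largely about gathering information, a short PowerCLI session can speed that up. The snippet below is a minimal sketch rather than the eBook's own procedure: the server, host, and path names are hypothetical, and it assumes VMware PowerCLI is installed and you can reach vCenter.

# Hypothetical names and paths; assumes VMware PowerCLI is installed.
Connect-VIServer -Server "vcenter.example.local"

# Export a diagnostic (vm-support) bundle from the suspect host.
$vmhost = Get-VMHost -Name "esx01.example.local"
Get-Log -VMHost $vmhost -Bundle -DestinationPath "C:\Temp\Logs"

# Review the most recent vCenter events recorded for that host.
Get-VIEvent -Entity $vmhost -MaxSamples 50 | Select-Object CreatedTime, FullFormattedMessage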
One of the first things you should do when experiencing a problem with a host is try to reproduce the issue. If you can find a way to reproduce it, you have a great way to validate that the issue is resolved once you apply a fix. It can also be helpful to take a benchmark of your systems before they are put into a production environment. If you know HOW they should be running, it's easier to pinpoint a problem.
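To illustrate that baselining idea, a PowerCLI snippet along these lines (hypothetical host name, assuming an existing vCenter connection) captures a day's worth of CPU and memory averages that you can file away for later comparison.

# Hypothetical host name; assumes Connect-VIServer has already been run.
$vmhost = Get-VMHost -Name "esx01.example.local"

# Average CPU and memory usage over the last 24 hours, summarized per metric.
Get-Stat -Entity $vmhost -Stat "cpu.usage.average", "mem.usage.average" -Start (Get-Date).AddDays(-1) |
    Group-Object -Property MetricId |
    Select-Object Name, @{ Name = "AvgPercent"; Expression = { [math]::Round(($_.Group | Measure-Object -Property Value -Average).Average, 2) } }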
You should decide whether it's best to work from a "top down" or "bottom up" approach to determine the root cause. Guest OS-level issues typically cause a large number of problems. Let's face it: some of the applications we use are not perfect. They get the job done, but they utilize a lot of memory doing it.
In terms of virtual machine-level issues, is it possible that you have a limit or share value that's misconfigured? At the ESXi host level, you could need additional resources. It's hard to believe sometimes, but you might need another host to help with the load!
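One quick way to sweep for that kind of misconfiguration is to list every VM whose limits or shares deviate from the defaults. The PowerCLI sketch below is one illustrative approach, assuming a connected vCenter session (a limit of -1 means unlimited).

# Flag VMs with a CPU or memory limit set, or with non-default share levels.
Get-VM | Get-VMResourceConfiguration |
    Where-Object { $_.CpuLimitMhz -ne -1 -or $_.MemLimitGB -ne -1 -or
                   $_.CpuSharesLevel -ne "Normal" -or $_.MemSharesLevel -ne "Normal" } |
    Select-Object VM, CpuLimitMhz, MemLimitGB, CpuSharesLevel, MemSharesLevel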
Once you have identified the root cause, assess the impact of the problem on your day-to-day operations and decide when and what type of fix to implement. A short-term solution is a quick workaround; a long-term solution might mean reconfiguring a virtual machine or host. Either way, assess the impact of your fix on daily operations before applying it.
Now that the basics have been covered, download the eBook to discover how to put this theory into practice!
If you’re here to gather some of the best practices surrounding vSphere, you’ve come to the right place! Mastering vSphere: Best Practices, Optimizing Configurations & More, the free eBook authored by me, Ryan Birk, is the product of many years working with vSphere as well as teaching others in a professional capacity. In my extensive career as a VMware consultant and teacher (I’m a VMware Certified Instructor) I have worked with people of all competence levels and been asked hundreds - if not thousands - of questions on vSphere. I was approached to write this eBook to put that experience to use to help people currently working with vSphere step up their game and reach that next level. As such, this eBook assumes readers already have a basic understanding of vSphere and will cover the best practices for four key aspects of any vSphere environment.
The best practices covered here will focus largely on management and configuration solutions, so they should remain relevant for quite some time. However, with that said, things are constantly changing in IT, so I would always recommend obtaining the most up-to-date information from VMware KBs and official documentation, especially regarding specific versions of tools and software updates. This eBook is divided into several sections, and although I would advise reading the whole eBook as most elements relate to others, you might want to just focus on a certain area you're having trouble with. If so, jump to the section you want to read about.
Before we begin, I want to note that in a VMware environment it's always best to keep things simple. Far too often I have seen environments thrown off the tracks by trying to do too much at once. I try to live by the mentality of "keeping your environment boring": keeping your host configurations the same, your storage configurations the same, and your network configurations the same. I don't mean duplicate IP addresses, but the hosts need identical port groups, access to the same storage networks, and so on. Consistency is the name of the game and is key to solving unexpected problems down the line. Furthermore, it enables smooth scalability: when you move from a single-host configuration to a cluster configuration, having the same configurations makes live migrations and high availability far easier to configure without having to significantly rework the entire infrastructure. Now that the scene has been set, let's get started!
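As a small, concrete example of that consistency principle, a PowerCLI check along these lines (hypothetical cluster name, standard vSwitches assumed) lists each host's port groups so a missing or misspelled one stands out at a glance.

# List standard-vSwitch port groups per host in a cluster to spot configuration drift.
Get-Cluster -Name "Prod-Cluster" | Get-VMHost | ForEach-Object {
    [pscustomobject]@{
        Host       = $_.Name
        PortGroups = (Get-VirtualPortGroup -VMHost $_ -Standard | Select-Object -ExpandProperty Name | Sort-Object) -join ", "
    }
} | Format-Table -AutoSize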
Each version of Windows and Windows Server showcases new technologies. The advent of PowerShell marked a substantial step forward in managing those features. However, the built-in graphical Windows management tools have largely stagnated: the same basic Microsoft Management Console (MMC) interfaces have remained since Windows 2000 Server. Over the years, Microsoft tried multiple overhauls of the built-in Server Manager console, but none gained much traction. Until Windows Admin Center.
WHAT IS WINDOWS ADMIN CENTER?
Windows Admin Center (WAC) represents a modern turn in Windows and Windows Server system management. From its home page, you establish a list of the networked Windows and Windows Server computers to manage. From there, you can connect to an individual system to control components such as hardware drivers. You can also use it to manage Windows roles, such as Hyper-V.
On the front end, Windows Admin Center is presented through a sleek HTML5 web interface. On the back end, it leverages PowerShell extensively to control the systems within your network. The entire package runs on a single system, so you don't need a complicated infrastructure to support it. In fact, you can run it locally on your Windows 10 workstation if you want. If you require more resiliency, you can run Windows Admin Center as a role on a Microsoft Failover Cluster.
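For reference, getting a gateway up and running can be as simple as a one-line silent install on that single system. The command below is a sketch based on Microsoft's documented MSI options; the installer file name and log path are placeholders, and the exact property values may vary by version.

# Hypothetical file names; SME_PORT sets the gateway port, SSL_CERTIFICATE_OPTION=generate creates a self-signed certificate.
msiexec /i "WindowsAdminCenter.msi" /qn /L*v "wac-install.log" SME_PORT=443 SSL_CERTIFICATE_OPTION=generate

# Then browse to https://<gateway-host>:443 and add the servers you want to manage.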
WHY WOULD I USE WINDOWS ADMIN CENTER?
In the modern era of Windows management, we have shifted to a greater reliance on industrial-strength tools like PowerShell and Desired State Configuration. However, we still have servers that require individualized attention and infrequently utilized resources. WAC gives you a one-stop hub for dropping in on any system at any time and working with almost any of its facets.
ABOUT THIS EBOOK
This eBook has been written by Microsoft Cloud & Datacenter Management MVP Eric Siron. Eric has worked in IT since 1998, designing, deploying, and maintaining server, desktop, network, and storage systems. He has provided all levels of support for businesses ranging from single-user through enterprises with thousands of seats. He has achieved numerous Microsoft certifications and was a Microsoft Certified Trainer for four years. Eric is also a seasoned technology blogger and has amassed a significant following through his top-class work on the Altaro Hyper-V Dojo.
INTRODUCTION
Humans tend to think optimistically. We plan for the best outcomes because we strive to make them happen. As a result, many organizations implicitly design their computing and data storage systems around the idea that they will operate as expected. They employ front-line fault-tolerance technologies, such as RAID and multiple network adapters, that will carry the systems through common, simple failures. However, few design plans include comprehensive coverage of catastrophic failures. Without a carefully crafted approach to backup and a strategic plan to work through and recover from disasters, an organization runs substantial risks. It could experience data destruction or losses that cost excessive amounts of time and money. Business principals and managers might even find themselves facing personal liability for failing to take proper preparatory steps. At worst, an emergency could permanently end the enterprise. This book seeks to guide you through all stages of preparing for, responding to, and recovering from a substantial data loss event. In this first part, you will learn how to assess your situation and plan out a strategy that uniquely fits your needs.
WHO SHOULD READ THIS BOOK
This book was written for anyone with an interest in protecting organizational data, from system administrators to business owners. It explains the terms and technologies that it covers in simple, approachable language. As much as possible, it focuses on the business needs first. However, a reader with little experience in server and storage technologies may struggle with applying the content. To put it into action, use this material in conjunction with trained technical staff.