Fulton Financial Corporation has a long and storied history that began in 1882 in Lancaster, Pennsylvania, where local merchants and farmers organized Fulton National Bank. The bank’s name was chosen to honor Lancaster County native Robert Fulton, the inventor and artist best known for designing and building the Clermont, the first successful steamboat.
To improve employee productivity and give its staff more time to focus on customers, Fulton sought to upgrade the thin clients for its Citrix application virtualization infrastructure with the help of its Citrix partner and IGEL Platinum Partner, Plan B Technologies.
In selecting a desktop computing solution to support its Citrix application virtualization infrastructure, Fulton had one unique business requirement: it wanted a solution that would mirror the experience of a Windows PC without actually being a Windows PC.
During the evaluation process, Fulton looked at thin clients from IGEL and another leading manufacturer, conducting a “bake-off” of several models, including the IGEL Universal Desktop (UD6). Fulton liked the fact that IGEL is forward-thinking in designing its desktop computing solutions. The bank began its IGEL rollout by purchasing 2,300 IGEL UD6 thin clients in 2016 for its headquarters and branch offices, and it plans to extend the rollout to the remainder of its 3,700 employees in the coming months. The bank is also leveraging the IGEL Universal Management Suite (UMS) to manage its fleet of IGEL thin clients.
Virtualizing Windows applications and desktops in the data center or cloud has compelling security, mobility and management benefits, but delivering real-time voice and video in a virtual environment is a challenge. A poorly optimized implementation can increase costs and compromise user experience. Server scalability and bandwidth efficiency may be less than optimal, and audio-video quality may be degraded.
Enabling voice and video with a bundled solution in an existing Citrix environment delivers clearer, crisper audio and video than legacy phone systems. This solution guide describes how Sennheiser headsets combine with Citrix infrastructure and IGEL endpoints to provide a better, more secure user experience. It also describes how to deploy the bundled Citrix-Sennheiser-IGEL solution.
There are many new challenges, and many reasons, to migrate workloads to the cloud. Here are four of the most popular:
Whether it is for backup, disaster recovery, or production in the cloud, you should be able to leverage the cloud platform to solve your technology challenges. In this step-by-step guide, we outline how GCP is positioned to be one of the easiest cloud platforms for app development, and the critical role data protection as-a-service (DPaaS) can play.
The primary goal of a multi-cloud data management strategy is to supply data, either by copying or moving it, to the various multi-cloud use cases. A key enabler of this movement is the data management software application. In theory, data protection applications can perform both the copy and the move functions. A key consideration is how the multi-cloud data management experience is unified. In most cases, data protection applications ignore the user experience of each cloud and use their own proprietary interface as the unifying entity, which increases complexity.
There are a variety of reasons organizations may want to leverage multiple clouds. The first use case is using public cloud storage as a backup mirror to an on-premises data protection process. Using public cloud storage as a backup mirror enables the organization to automatically store data off-site. It also sets up many of the more advanced use cases.
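As a rough illustration of the backup-mirror pattern, the sketch below copies new files from a local backup directory to a cloud object store. It assumes the google-cloud-storage client library, default application credentials, and a hypothetical bucket name and file extension; it is a minimal example of the concept, not a depiction of any particular product's implementation.

```python
from pathlib import Path

from google.cloud import storage  # pip install google-cloud-storage


def mirror_backups(local_dir: str, bucket_name: str) -> None:
    """Upload any backup files not already present in the cloud bucket,
    giving the organization an automatic off-site copy."""
    client = storage.Client()          # uses default application credentials
    bucket = client.bucket(bucket_name)

    for path in Path(local_dir).glob("*.bak"):   # hypothetical backup file extension
        blob = bucket.blob(f"backup-mirror/{path.name}")
        if not blob.exists():                    # skip files already mirrored
            blob.upload_from_filename(str(path))
            print(f"Mirrored {path.name} to gs://{bucket_name}/backup-mirror/")


if __name__ == "__main__":
    # Hypothetical local path and bucket name, for illustration only.
    mirror_backups(r"D:\backups", "example-backup-mirror")
```

Run on a schedule, a job like this keeps the cloud copy current without any manual off-siting, which is what makes the more advanced use cases that follow possible.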
Another use case is using the cloud for disaster recovery.
Another use case is “lift and shift,” in which the organization wants to run the application natively in the cloud. The initial steps of a lift-and-shift are similar to Dev/Test, but now the workload is storing unique data in the cloud.
Multi-cloud is now a reality for most organizations, and managing the movement of data between these clouds is critical.
There are a number of limitations today that keep organizations not only from lifting and shifting from one cloud to another, but also from migrating across clouds. Organizations need the flexibility to leverage multiple clouds and move applications and workloads around freely, whether for data reuse or for disaster recovery. This is where the HYCU Protégé platform comes in. HYCU Protégé is positioned as a complete multi-cloud data protection and disaster recovery-as-a-service solution. It includes a number of capabilities that make it relevant and notable compared with other approaches in the market:
The future of compute is in the cloud
Flexible, efficient, and economical, the cloud is no longer a question - it's the answer.
IT professionals who once considered if or when to migrate to the cloud are now talking about how. Earlier this year, we reached out to thousands of IT professionals to learn more about how they are making that move.
Private Cloud, On-Prem, Public Cloud, Hybrid, Multicloud - each of these deployment models offers unique advantages and challenges. We asked IT decision-makers how they are currently leveraging the cloud and how they plan to grow.
Survey respondents overwhelmingly believed in the importance of a hybrid or multicloud strategy, regardless of whether they had actually implemented one themselves.
The top reasons for moving workloads between clouds
Managing the performance of Windows-based workloads can be a challenge. Whether on physical PCs or virtual desktops, the effort required to maintain, tune and optimize workspaces is endless. Operating system and application revisions, user-installed applications, security and bug patches, BIOS and driver updates, spyware, and multi-user operating systems supply a continual flow of change that can disrupt expected performance. Add in the complexities introduced by virtual desktops and cloud architectures, and you have yet another endless source of performance instability. Keeping up with this churn, while meeting users’ zero tolerance for failures, is a chief worry for administrators.
To help address the need for uniform performance and optimization in the face of constant change, Liquidware introduced the Process Optimization feature in its Stratusphere UX solution. The feature can be set to automatically optimize CPU and memory use, even as system demands fluctuate. It keeps “bad actor” applications or runaway processes from crippling the performance of users’ workspaces by prioritizing resources for processes in active use over idle or background processes. Process Optimization requires no additional infrastructure; it is a simple, zero-impact feature included with Stratusphere UX that can be turned on for single machines, for groups, or globally.

Enabled with the check of a box, the feature can run automatically from pre-built profiles, or administrators can manually specify the processes they need to raise, lower or terminate when required. It is a major benefit in hybrid, multi-platform environments that include physical, pool- or image-based virtual, and cloud workspaces, which are far more complex than single-delivery systems. Process Optimization was also designed with security and reliability in mind: by default it employs a “do no harm” provision that affects only normal and lower process priorities and applies a relaxed policy, and no processes are forced when the system denies access, ensuring the system remains stable and in line with requirements.
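To make the general idea concrete, the following sketch shows one common way to demote background processes on a workspace using Python's psutil library. The process names and priority values are hypothetical, and this is only a simplified illustration of the priority-adjustment technique described above, not Liquidware's implementation.

```python
import psutil  # pip install psutil

# Hypothetical examples of background / "bad actor" process names.
BACKGROUND_PROCESSES = {"indexer.exe", "updater.exe"}


def deprioritize_background(dry_run: bool = True) -> None:
    """Lower the CPU priority of known background processes so that
    interactive applications keep the resources they need."""
    for proc in psutil.process_iter(["name"]):
        try:
            name = (proc.info["name"] or "").lower()
            if name in BACKGROUND_PROCESSES:
                if dry_run:
                    print(f"Would lower priority of {name} (pid {proc.pid})")
                else:
                    # BELOW_NORMAL_PRIORITY_CLASS on Windows; a positive nice value elsewhere.
                    proc.nice(psutil.BELOW_NORMAL_PRIORITY_CLASS if psutil.WINDOWS else 10)
        except (psutil.NoSuchProcess, psutil.AccessDenied):
            # "Do no harm": skip any process the system refuses to change.
            continue


if __name__ == "__main__":
    deprioritize_background()
```

A production feature does far more than this, of course; the point is simply that demoting, rather than killing, background work is a low-risk way to protect the interactive experience.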
There’s little doubt we’re in the midst of a change in the way we operationalize and manage our end users’ workspaces. On the one hand, IT leaders are looking to gain the same efficiencies and benefits realized with cloud and next-generation virtual-server workloads. And on the other hand, users are driving the requirements for anytime, anywhere and any device access to the applications needed to do their jobs. To provide the next-generation workspaces that users require, enterprises are adopting a variety of technologies such as virtual-desktop infrastructure (VDI), published applications and layered applications. At the same time, those technologies are creating new and challenging problems for those looking to gain the full benefits of next-generation end-user workspaces.
Before racing into any particular desktop transformation delivery approach, it’s important to define appropriate goals and adopt a methodology for both near- and long-term success. One of the most common planning pitfalls we’ve seen in our history supporting the transformation of more than 6 million desktops is that organizations tend to put too much emphasis on the technical delivery and resource allocation aspects of the platform, and too little time considering the needs of users. How to meet user expectations and deliver a user experience that fosters success is often overlooked.
To prevent that problem and achieve near-term success as well as sustainable long-term value from a next-generation desktop transformation approach, planning must also define a methodology that includes the following three things:
• Develop a baseline of “normal” performance for current end-user computing delivery
• Set goals for functionality and defined measurements supporting user experience
• Continually monitor the environment to ensure users are satisfied and the environment is operating efficiently
This white paper will show why the user experience is difficult to predict, why it’s essential to planning, and why factoring in the user experience, along with resource allocation, is key to creating and delivering the promise of a next-generation workspace that is scalable and will produce both near- and long-term value.
Managing Windows user profiles can be a complex and challenging process. Organizations usually seek better profile management when they want to reduce Windows login times, accommodate applications that do not follow best practices for application data storage, and give users the flexibility to log in to any Windows operating system (OS) and have their profile follow them.
Note that additional profile challenges and solutions are covered in a related ProfileUnity whitepaper entitled “User Profile and Environment Management with ProfileUnity.” To efficiently manage the complex challenges of today’s diverse Windows profile environments, Liquidware ProfileUnity exclusively features two user profile technologies that can be used together or separately depending on the use case. These include:
1. ProfileDisk™, a virtual disk-based profile that delivers the entire profile as a layer from an attached user VHD or VMDK, and
2. Profile Portability, a file- and registry-based profile solution that restores files at login, post-login, or based on environment triggers.
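For readers unfamiliar with the second pattern, the sketch below illustrates in broad strokes what a file- and registry-based restore at login looks like. The share path, application key, and setting are all hypothetical, and the code is only a conceptual example of the general technique, not ProfileUnity's mechanism.

```python
import shutil
import winreg  # Windows-only: used for the registry half of the profile
from pathlib import Path

# Hypothetical locations used for illustration only.
PROFILE_SHARE = Path(r"\\fileserver\profiles")
LOCAL_APPDATA = Path.home() / "AppData" / "Roaming"


def restore_profile(username: str) -> None:
    """Copy captured profile files back into place and replay a saved
    registry value at login."""
    source = PROFILE_SHARE / username / "AppData"
    if source.exists():
        # Merge the captured files into the local roaming profile folder.
        shutil.copytree(source, LOCAL_APPDATA, dirs_exist_ok=True)

    # Replay one captured user setting into HKEY_CURRENT_USER.
    with winreg.CreateKey(winreg.HKEY_CURRENT_USER, r"Software\ExampleApp") as key:
        winreg.SetValueEx(key, "Theme", 0, winreg.REG_SZ, "dark")


if __name__ == "__main__":
    restore_profile("jdoe")
```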
Kaleida Health was looking to modernize the digital experience for its clinicians and back office support staff. Aging and inconsistent desktop hardware and evolving Windows OS support requirements were taxing the organization’s internal IT resources. Further, the desire to standardize on Citrix VDI for both on-site and remote workers meant the healthcare organization needed to identify a new software and hardware solution that would support simple and secure access to cloud workspaces.
The healthcare organization began the process by evaluating all of the major thin client OS vendors and determined IGEL to be the leader for multiple reasons: it is hardware agnostic, its Linux-based OS is stable and has a small footprint, and it offers a strong management platform, the IGEL UMS, for both on-site users and remote access.
Kaleida Health also selected LG thin client monitors early on because the All-in-One form factor supports both back office teams and, more importantly, clinical areas, including workstation-on-wheels (WoW) carts, letting medical professionals securely log in and access information and resources from a single, protected data center.