Virtualization Technology News and Information
White Papers Search Results
Showing 1 - 16 of 16 white papers, page 1 of 1.
Secure Printing Using ThinPrint, Citrix and IGEL: Solution Guide
This solution guide outlines some of the regulatory issues any business faces when it prints sensitive material. It discusses how a Citrix-IGEL-ThinPrint bundled solution meets regulation criteria such as HIPAA standards and the EU’s soon-to-be-enacted General Data Protection Regulations without diminishing user convenience and productivity.

Print data is generally unencrypted and almost always contains personal, proprietary or sensitive information. Even a simple print request sent from an employee may potentially pose a high security risk for an organization if not adequately monitored and managed. To put it bluntly, the printing processes that are repeated countless times every day at many organizations are great ways for proprietary data to end up in the wrong hands.

Mitigating this risk, however, should not impact the workforce flexibility and productivity print-anywhere capabilities deliver. Organizations seek to adopt print solutions that satisfy government-mandated regulations for protecting end users and that protect proprietary organizational data — all while providing a first-class desktop and application experience for users.

Finally, this guide provides high-level directions and recommendations for the deployment of the bundled solution.

High Availability Clusters in VMware vSphere without Sacrificing Features or Flexibility
This paper explains the challenges of moving important applications from traditional physical servers to virtualized environments such as VMware vSphere, which offer key benefits including configuration flexibility, data and application mobility, and efficient use of IT resources. It also highlights six key facts about HA protection in VMware vSphere environments that can save you money.

Many large enterprises are moving important applications from traditional physical servers to virtualized environments, such as VMware vSphere, in order to take advantage of key benefits such as configuration flexibility, data and application mobility, and efficient use of IT resources.

Realizing these benefits with business-critical applications, such as SQL Server or SAP, can pose several challenges. Because these applications need high availability and disaster recovery protection, the move to a virtual environment can mean adding cost and complexity and limiting the use of important VMware features. This paper explains these challenges and highlights six key facts you should know about HA protection in VMware vSphere environments that can save you money.

Understanding Windows Server Cluster Quorum Options
This white paper discusses the key concepts you need to configure a failover clustering environment to protect SQL Server in the cloud.
This white paper discusses the key concepts you need to configure a failover clustering environment to protect SQL Server in the cloud. Understand the options for configuring the cluster Quorum to meet your specific needs. Learn the benefits and key takeaways for providing high availability for SQL Server in a public cloud (AWS, Azure, Google) environment.
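The quorum concept this paper builds on can be illustrated with a simple majority-vote calculation (a generic sketch, not taken from the paper; Windows Server failover clustering counts node votes plus an optional disk, file-share, or cloud witness):

```python
def has_quorum(votes_online: int, total_votes: int) -> bool:
    """A cluster retains quorum while more than half of all votes are reachable."""
    return votes_online > total_votes // 2

# Two nodes, no witness (2 votes): losing either node loses quorum.
assert not has_quorum(1, 2)

# Two nodes plus a cloud/file-share witness (3 votes): one node can fail.
assert has_quorum(2, 3)

# Four nodes plus a witness (5 votes): up to two nodes can fail.
assert has_quorum(3, 5)
assert not has_quorum(2, 5)
```

This vote math is why the quorum options matter for two-node SQL Server clusters in a public cloud: without a witness vote, the cluster cannot survive the failure of even a single node.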
Top 10 Reasons to Adopt Software-Defined Storage
In this brief, learn about the top ten reasons why businesses are adopting software-defined storage to empower their existing and new storage investments with greater performance, availability and functionality.
DataCore delivers a software-defined architecture that empowers existing and new storage investments with greater performance, availability and functionality. But don’t take our word for it. We decided to poll our customers to learn what motivated them to adopt software-defined storage. As a result, we came up with the top 10 reasons our customers have adopted software-defined storage.
Download this white paper to learn about:
•    How software-defined storage protects investments, reduces costs, and enables greater buying power
•    How you can protect critical data, increase application performance, and ensure high availability
•    Why 10,000 customers have chosen DataCore’s software-defined storage solution
How Parallels RAS Enhances Microsoft RDS
In 2001, Microsoft introduced the RDP protocol that allowed users to access an operating system’s desktop remotely. Since then, Microsoft has developed the Microsoft Remote Desktop Services (RDS) to facilitate remote desktop access. However, Microsoft RDS leaves a lot to be desired. This white paper highlights the pain points of RDS solutions, and how systems administrators can use Parallels® Remote Application Server (RAS) to enhance their Microsoft RDS infrastructure.

Microsoft RDS Pain Points:
•    Limited Load Balancing Functionality
•    Limited Client Device Support
•    Difficult to Install, Set Up, and Update

Parallels RAS is an application and virtual desktop delivery solution that allows systems administrators to create a private cloud from which they can centrally manage the delivery of applications, virtual desktops, and business-critical data. This comprehensive VDI solution is well known for its ease of use, low licensing costs, and extensive feature set.

How Parallels RAS Enhances Your Microsoft RDS Infrastructure:
•    Easy to Install and Set Up
•    Centralized Configuration Console
•    Auto-Configuration of Remote Desktop Session Hosts
•    High Availability Load Balancing (HALB)
•    Superior user experience on mobile devices
•    Supports hypervisors from Citrix, VMware, Microsoft’s own Hyper-V, Nutanix Acropolis, and Kernel-based Virtual Machine (KVM)

As this white paper highlights, Parallels RAS allows you to enhance your Microsoft Remote Desktop Services infrastructure, enabling you to offer a superior application and virtual desktop delivery solution.

Built around Microsoft’s RDP protocol, Parallels RAS allows systems administrators to do more in less time with fewer resources. Since it is easier to implement and use, systems administrators can manage and easily scale up the Parallels RAS farm without requiring any specialized training. Because of its extensive feature list and multisite support, they can build solutions that meet the requirements of any enterprise, regardless of its size and scale.

Overcome the Data Protection Dilemma - Vembu
Choosing between a high-priced legacy backup application that protects an entire IT environment and a new-age solution that focuses on protecting only part of it is a dilemma for every IT professional. Read this whitepaper to overcome the data protection dilemma with Vembu.
IT professionals face a dilemma when selecting a backup solution for their environment. Selecting a legacy application that protects their entire environment means they have to tolerate high pricing and live with software that does not fully exploit the capabilities of modern IT environments.

On the other hand, they can adopt solutions that focus on a particular area of the IT environment but are limited to just that area. These solutions have a relatively small customer base, which means they have not been vetted as thoroughly as the legacy applications. Vembu is a next-generation company that provides the capabilities of the new class of backup solutions while at the same time providing platform coverage as complete as that of legacy applications.
Understanding Windows Server Hyper-V Cluster Configuration, Performance and Security
Windows Server Hyper-V clusters are an important option when implementing high availability for critical business workloads. Guidelines on getting started with deployment and network configuration, along with industry best practices for performance, security, and storage management, are something no IT admin would want to miss. This white paper discusses these topics through production-field scenarios and helps you get started.
How do you increase the uptime of your critical workloads? How do you start setting up a Hyper-V Cluster in your organization? What are the Hyper-V design and networking configuration best practices? These are some of the questions you may have when you have large environments with many Hyper-V deployments. It is essential for IT administrators to build disaster-ready Hyper-V Clusters rather than troubleshooting them in their production workloads. This whitepaper will help you deploy a Hyper-V Cluster in your infrastructure by providing step-by-step configuration and consideration guides focusing on optimizing the performance and security of your setup.
Mastering vSphere – Best Practices, Optimizing Configurations & More
Do you regularly work with vSphere? If so, this free eBook is for you. Learn how to leverage best practices for the most popular features contained within the vSphere platform and boost your productivity using tips and tricks learned directly from an experienced VMware trainer and highly qualified professional. In this eBook, vExpert Ryan Birk shows you how to master: Advanced Deployment Scenarios using Auto-Deploy, Shared Storage, Performance Monitoring and Troubleshooting, and Host Network configuration.

If you’re here to gather some of the best practices surrounding vSphere, you’ve come to the right place! Mastering vSphere: Best Practices, Optimizing Configurations & More, the free eBook authored by me, Ryan Birk, is the product of many years working with vSphere as well as teaching others in a professional capacity. In my extensive career as a VMware consultant and teacher (I’m a VMware Certified Instructor) I have worked with people of all competence levels and been asked hundreds - if not thousands - of questions on vSphere. I was approached to write this eBook to put that experience to use to help people currently working with vSphere step up their game and reach that next level. As such, this eBook assumes readers already have a basic understanding of vSphere and will cover the best practices for four key aspects of any vSphere environment.

The best practices covered here will focus largely on management and configuration solutions, so they should remain relevant for quite some time. However, with that said, things are constantly changing in IT, so I would always recommend obtaining the most up-to-date information from VMware KBs and official documentation, especially regarding specific versions of tools and software updates. This eBook is divided into several sections, and although I would advise reading the whole eBook as most elements relate to others, you might want to just focus on a certain area you’re having trouble with. If so, jump to the section you want to read about.

Before we begin, I want to note that in a VMware environment, it’s always best to try to keep things simple. Far too often I have seen environments thrown off track by trying to do too much at once. I try to live by the mentality of “keeping your environment boring” – in other words, keeping your host configurations the same, storage configurations the same and network configurations the same. I don’t mean duplicate IP addresses, but the hosts need identical port groups, access to the same storage networks, etc. Consistency is the name of the game and is key to solving unexpected problems down the line. Furthermore, it enables smooth scalability: when you move from a single host configuration to a cluster configuration, having the same configurations will make live migrations and high availability far easier to configure without having to significantly re-work the entire infrastructure. Now that the scene has been set, let’s get started!
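The "keep your environment boring" advice above can even be checked mechanically. A minimal sketch (a hypothetical helper, not from the eBook) that flags settings whose values differ across hosts:

```python
def find_drift(hosts: dict) -> dict:
    """Return the settings whose values differ across hosts, with per-host values."""
    all_keys = set().union(*(cfg.keys() for cfg in hosts.values()))
    drift = {}
    for key in sorted(all_keys):
        values = {name: cfg.get(key) for name, cfg in hosts.items()}
        if len(set(values.values())) > 1:  # more than one distinct value => drift
            drift[key] = values
    return drift

# Example: two hosts that should be identical but differ in vSwitch MTU.
hosts = {
    "esx01": {"mtu": 9000, "port_groups": ("vlan10", "vlan20"), "ntp": "pool.ntp.org"},
    "esx02": {"mtu": 1500, "port_groups": ("vlan10", "vlan20"), "ntp": "pool.ntp.org"},
}
print(find_drift(hosts))  # only "mtu" is reported
```

In practice the per-host dictionaries would be pulled from your inventory tooling; the point is that identical port groups, storage access, and network settings can be asserted, not just hoped for.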

Implementing High Availability in a Linux Environment
This white paper explores how organizations can lower both CapEx and OpEx running high-availability applications in a Linux environment without sacrificing performance or security.
Using open source solutions can dramatically reduce capital expenditures, especially for software licensing fees. But most organizations also understand that open source software needs more “care and feeding” than commercial software, sometimes substantially more, potentially causing operating expenditures to increase well above any potential savings in CapEx. This white paper explores how organizations can lower both CapEx and OpEx running high-availability applications in a Linux environment without sacrificing performance or security.
Controlling Cloud Costs without Sacrificing Availability or Performance
The objective of this white paper is to help prevent cloud services sticker shock from ever occurring again and to make your cloud investments more effective.
After signing up with a cloud service provider, you receive a bill that causes sticker shock. There are unexpected and seemingly excessive charges, and those responsible seem unable to explain how this could have happened. The situation is critical because the amount threatens to bust the budget unless cost-saving changes are made immediately. The objective of this white paper is to help prevent cloud services sticker shock from occurring ever again.
Lift and Shift Backup and Disaster Recovery Scenario for Google Cloud: Step by Step Guide
There are many new challenges, and reasons, to migrate workloads to the cloud, especially to a public cloud like Google Cloud Platform. Whether it is for backup, disaster recovery, or production in the cloud, you should be able to leverage the cloud platform to solve your technology challenges. In this step-by-step guide, we outline how GCP is positioned to be one of the easiest cloud platforms for app development, and the critical role data protection as-a-service (DPaaS) can play.

There are many new challenges, and reasons, to migrate workloads to the cloud.

For example, here are four of the most popular:

  • Analytics and machine learning (ML) are everywhere. Once your data is in a cloud platform like Google Cloud Platform, you can leverage its APIs to run analytics and ML on everything.
  • Kubernetes is powerful and scalable, but transitioning legacy apps to Kubernetes can be daunting.
  • SAP HANA is a secret weapon. With high-memory instances offering double-digit terabytes of RAM, migrating SAP to a cloud platform is easier than ever.
  • Serverless is the future of application development. With Cloud SQL, BigQuery, and all the other serverless solutions, cloud platforms like GCP are well positioned to be the easiest platform for app development.

Whether it is for backup, disaster recovery, or production in the cloud, you should be able to leverage the cloud platform to solve your technology challenges. In this step-by-step guide, we outline how GCP is positioned to be one of the easiest cloud platforms for app development, and the critical role data protection as-a-service (DPaaS) can play.

LQD4500 Gen4x16 NVMe SSD Performance Report
The LQD4500 is the World’s Fastest SSD. The Liqid Element LQD4500 PCIe Add-in-Card (AIC), aka the “Honey Badger” for its fierce, lightning-fast data speeds, delivers Gen-4 PCIe performance with up to 4M IOPS, 24 GB/s throughput, and ultra-low transactional latency of just 20 µs, in capacities up to 32TB. What does all that mean in real-world testing? Download the full LQD4500 performance report and learn how the ultra-high-performance and -density Gen-4 AIC can supercharge data for your most demanding applications.
The LQD4500 is the World’s Fastest SSD.

The Liqid Element LQD4500 PCIe Add-in-Card (AIC), aka the “Honey Badger” for its fierce, lightning-fast data speeds, delivers Gen-4 PCIe performance with up to 4M IOPS, 24 GB/s throughput, and ultra-low transactional latency of just 20 µs, in capacities up to 32TB.

This document contains test results and performance measurements for the Liqid LQD4500 Gen4x16 NVMe SSD, including sequential, random, and latency measurements on the LQD4500 high-performance storage device. The data was measured in a Linux OS environment per the SNIA enterprise performance test specification standards, and the results reflect steady state after sufficient device preconditioning.
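To put the headline numbers in context, IOPS and throughput are linked by the I/O block size. The arithmetic below is purely illustrative; the block sizes are common test assumptions, not figures from the report:

```python
def throughput_gbs(iops: float, block_bytes: int) -> float:
    """Throughput in GB/s implied by an IOPS figure at a given block size."""
    return iops * block_bytes / 1e9

# 4M IOPS at a typical 4 KiB random-I/O block size implies about 16.4 GB/s ...
random_gbs = throughput_gbs(4_000_000, 4096)

# ... so the 24 GB/s headline throughput comes from larger sequential transfers,
# e.g. at 128 KiB blocks roughly 183k IOPS would already saturate 24 GB/s.
seq_iops_needed = 24e9 / (128 * 1024)
```

This is why performance reports quote sequential, random, and latency numbers separately: each is measured at a different block size and queue depth.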

Download the full LQD4500 performance report and learn how the ultra-high performance and density Gen-4 AIC can supercharge data for your most demanding applications.
eBook – Backup & DR Planning Guide for Small & Medium Businesses
The Backup & Disaster Recovery for SMBs - Concepts, Best Practices, and Design Decisions ebook covers in-depth the considerations that need to be made while designing your backup and disaster recovery strategy along with the concepts involved in doing so.
  • What RPO and RTO are, and how they differ
  • Why high availability alone is not enough to protect your data
  • The difference between high availability and disaster recovery
  • The 3-2-1 backup strategy to ensure data protection, and much more.
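The RPO and RTO concepts in the first bullets can be made concrete with a small sketch (a generic illustration, not taken from the ebook): RPO bounds how much data you can afford to lose, RTO bounds how long you can afford to be down.

```python
from datetime import datetime, timedelta

def data_loss_window(last_backup: datetime, failure: datetime) -> timedelta:
    """Worst-case data loss: everything written since the last good backup.
    Your backup interval must keep this at or below the RPO."""
    return failure - last_backup

def outage_duration(failure: datetime, service_restored: datetime) -> timedelta:
    """Downtime the business experiences; recovery must finish within the RTO."""
    return service_restored - failure

# Hourly backups, a failure 45 minutes after the last one, service back 30 minutes later:
loss = data_loss_window(datetime(2024, 1, 1, 3, 0), datetime(2024, 1, 1, 3, 45))
down = outage_duration(datetime(2024, 1, 1, 3, 45), datetime(2024, 1, 1, 4, 15))
assert loss <= timedelta(hours=1)      # meets a 1-hour RPO
assert down <= timedelta(minutes=45)   # meets a 45-minute RTO
```

Note the asymmetry the ebook's bullets point at: high availability shrinks the outage (RTO) but does nothing for the data-loss window (RPO); only backups bound that.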
The Time is Now for File Virtualization
DataCore’s vFilO is a distributed files and object storage virtualization solution that can consume storage from a variety of providers, including NFS or SMB file servers, most NAS systems, and S3 Object Storage systems, including S3-based public cloud providers. Once vFilO integrates these various storage systems into its environment, it presents users with a logical file system and abstracts it from the actual physical location of data.

DataCore vFilO is a top-tier file virtualization solution. Not only can it serve as a global file system, but IT can also add new NAS systems or file servers to the environment without having to remap users to the new hardware. vFilO supports live migration of data between the storage systems it has assimilated, and it leverages the global file system and the software’s policy-driven data management to move older data automatically to less expensive storage, either high-capacity NAS or an object storage system. vFilO also transparently moves data from NFS/SMB to object storage. If users need access to this data in the future, they access it like they always have; to them, the data has not moved.
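The age-based tiering described above can be sketched as a simple policy rule. This is a hypothetical policy illustrating the idea, not DataCore's implementation:

```python
def tier_for(days_since_access: float, threshold_days: float = 90) -> str:
    """Policy: files untouched longer than the threshold move to object storage;
    recently used files stay on primary NAS."""
    return "object" if days_since_access > threshold_days else "primary"

# Placement decisions the global file system would apply, transparently to users:
files = {"q4_report.xlsx": 12, "2019_archive.tar": 400}  # name -> days since access
placement = {name: tier_for(age) for name, age in files.items()}
# placement == {"q4_report.xlsx": "primary", "2019_archive.tar": "object"}
```

The key property the abstraction provides is that the placement decision changes where the bytes live without changing the path users see.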

The ROI of file virtualization is compelling, but the technology has struggled to gain adoption in the data center: file virtualization needs to be explained, and explaining it takes time. vFilO more than meets the requirements to qualify as a top-tier file virtualization solution. DataCore has the advantage of over 10,000 customers who are much more likely to be receptive to the concept, since they have already embraced block storage virtualization with SANSymphony. Building on its customer base as a beachhead, DataCore can then expand file virtualization’s reach to new customers who, because of the changing state of unstructured data, may finally be receptive to the concept. At the same time, these new file virtualization customers may be amenable to virtualizing block storage, which may open new doors for SANSymphony.

Ten Topics to Discuss with Your Cloud Provider
Find the “just right” cloud for your business. For this paper, we will focus on existing applications (vs. new application services) that require high levels of performance and security, but that also enable customers to meet specific cost expectations.

Choosing the right cloud service for your organization, or for your target customer if you are a managed service provider, can be time-consuming and effort-intensive. For this paper, we will focus on existing applications (vs. new application services) that require high levels of performance and security, but that also enable customers to meet specific cost expectations.

Topics covered include:

  • Global access and availability
  • Cloud management
  • Application performance
  • Security and compliance
  • And more!
Top 10 Best Practices for VMware Backups
Topics: vSphere, backup, Veeam
Backup is the foundation for restores, so it is essential to have backups always available with the required speed. The “Top 10 Best Practices for vSphere Backups” white paper discusses best practices with Veeam Backup & Replication and VMware vSphere.

More and more companies have come to understand that server virtualization is the way to modern data safety. In 2019, VMware is still the market leader, and many Veeam customers use VMware vSphere as their preferred virtualization platform. But backup of virtual machines on vSphere is only one part of service Availability. Backup is the foundation for restores, so it is essential to have backups always available at the required speed. The “Top 10 Best Practices for vSphere Backups” white paper discusses best practices with Veeam Backup & Replication and VMware vSphere, such as:

•    Planning your data restore in advance
•    Keeping track of backup software updates and keeping your backup tools up to date
•    Integrating storage based snapshots into your Availability concept
•    And much more!
