White Papers Search Results
Showing 33 - 48 of 71 white papers, page 3 of 5.
PCI DSS Compliance
IT security has always been a major concern for businesses that accept online credit card payments. They hold sensitive information that malicious hackers are after: cardholder data and customer information. This is why businesses are legally obliged to build PCI DSS compliant IT infrastructures.

IT security has always been a major concern for businesses that accept online credit card payments. They hold sensitive information that malicious hackers are after: cardholder data. This is why such businesses are legally obliged to build IT systems and networks that are PCI DSS compliant.

What Is PCI DSS?
PCI DSS is a security standard developed by the PCI Security Standards Council. Designed for businesses that do online transactions and hold customers’ payment records, it helps them build and maintain secure IT systems and networks, ensuring the privacy and security of their customers’ credit-card details and cardholder data.

The set of standards defined in the PCI DSS is the minimum required level of computer-systems security that must be in place when processing credit-card data. These standards apply to merchants, processors, financial institutions, service providers, and any other entity that stores, processes, or transmits credit-card and cardholder information.

Why Businesses Need to Be PCI DSS Compliant
The challenges of building and maintaining a PCI DSS–compliant network are many and depend on several factors—for example, the type of software used, the network setup, and the procedures in place. If organizations that process credit-card payments and store cardholder details fail to build PCI DSS–compliant networks and computer systems, they risk being fined up to $500,000 per month—or even worse, having their trading license revoked.

This white paper explains how using Parallels Remote Application Server (RAS) can help organizations build scalable PCI DSS–compliant networks and also save on costs and administration overheads.

From... to cloud ready in less than one day with Parallels and ThinPrint
Mobility, security and compliance, automation, and the demand for “the workspace of the future” are just some of the challenges that businesses face today. The cloud is best positioned to support these challenges, but it can be hard to pick the right kind of cloud and find the right balance between cost and benefits. Together, Parallels and ThinPrint allow an organization to become a cloud-ready business on its own terms, with unprecedented ease and cost-effectiveness.

Mobility, security and compliance, automation, and the demand for “the workspace of the future” are just some of the challenges that businesses face today.

The cloud is best positioned to support these challenges, but it can be hard to pick the right kind of cloud and find the right balance between cost and benefits.

Parallels Introduction
Parallels is a global leader in cross-platform technologies and is renowned for its award-winning software solutions that cut complexity and lower costs for a wide range of industries, including healthcare, education, banking and finance, manufacturing, the public sector, and many others.

Parallels Remote Application Server (RAS) provides easy-to-use, comprehensive application and desktop delivery that enables business and public-sector organizations to seamlessly integrate virtual Windows applications and desktops on nearly any device or operating system.

ThinPrint Introduction
ThinPrint is a global leader in solutions that support an organization’s digital transformation, helping ensure users can draw on highly reliable and innovative print solutions that support today’s and tomorrow’s requirements.

Joint Value Statement

Together, Parallels and ThinPrint allow an organization to become a cloud-ready business on its own terms, with unprecedented ease and cost-effectiveness.

We support any endpoint device from a desktop PC to a smartphone or tablet, can deploy on-premises or in the cloud, and follow your business as it completes its digital transformation.

You may decide to start digitally transforming your business by delivering applications or desktops from an existing server in your datacenter and move to Amazon Web Services (AWS) or Microsoft Azure later. You can also replace user workstations with newer, more mobile devices, or expand from an initial pilot group to new use cases for the entire company.

Whatever your plans are, Parallels and ThinPrint will help you implement them with easy, cost-effective solutions and the ability to adapt to future challenges.

Is Your Citrix Monitoring Ready for Virtual Apps and Desktops 7.x?
Organizations across the world are migrating from XenApp 6.5 to the latest version of Citrix Virtual Apps and Desktops 7.x. This upgrade comes with radical changes to Citrix architecture, configuration, policy settings, protocols and deployment models. Many components of the Citrix architecture have been replaced with new ones, and many others have been newly introduced. With such a strategic change in the Citrix environment, traditional methods of monitoring will no longer suffice.

With the EOL of XenApp 6.5, organizations across the world are migrating to the latest version of Citrix Virtual Apps and Desktops 7. This upgrade comes with radical changes to Citrix architecture, configuration, policies, protocols and deployment models.

With so many changes in effect, traditional methods of monitoring will no longer suffice. Your monitoring solution should be ready to adapt to the new enhancements in the 7.x product line.

Read this white paper by George Spiers, Citrix CTP and EUC Architect, where he explains in detail:

  • What’s changed from XenApp 6.5 to Virtual Apps and Desktops 7
  • What monitoring best practices to adopt to ensure top performance of your virtualized environment and deliver an outstanding user experience
The Case for Converged Application & Infrastructure Performance Monitoring
Read this white paper and learn how you can combine and correlate performance insights from the application (code, SQL, logs) and the underlying hardware infrastructure (server, network, virtualization, storage, etc.).

One of the toughest problems facing enterprise IT teams today is troubleshooting slow applications. When a user complains of slowness in application access, all hell breaks loose, and the blame game begins: app owners, developers and IT ops teams enter into endless war room sessions to figure out what went wrong and where. Have you been in this situation before?

Read this white paper by Larry Dragich, and learn how you can combine and correlate performance insights from the application (code, SQL, logs) and the underlying hardware infrastructure (server, network, virtualization, storage, etc.) in order to:

  • Proactively detect user experience issues before your customers are impacted
  • Trace business transactions and isolate the cause of application slowness
  • Get code-level visibility to identify inefficient application code and slow database queries
  • Automatically map application dependencies within the infrastructure to pinpoint the root cause of the problem
Achieve centralized visibility of all your applications and infrastructure and easily diagnose the root cause of performance slowdowns.
PrinterLogic and IGEL Enable Healthcare Organizations to Deliver Better Patient Outcomes
Healthcare professionals need to print effortlessly and reliably to nearby or appropriate printers within virtual environments, and PrinterLogic and IGEL can help make that an easy, reliable process—all while efficiently maintaining the protection of confidential patient information.

Many organizations have turned to virtualizing user endpoints to help reduce capital and operational expenses while increasing security. This is especially true within healthcare, where hospitals, clinics, and urgent care centers seek to offer the best possible patient outcomes while adhering to a variety of mandated patient security and information privacy requirements.

With the movement of desktops and applications into the secure data center or cloud, the need for reliable printing of documents, some very sensitive in nature, remains a constant that can be challenging when desktops are virtual but the printing process remains physical. Directing print jobs to the correct printer with the correct physical access rights in the correct location while ensuring compliance with key healthcare mandates like the General Data Protection Regulation (GDPR) and the Health Insurance Portability and Accountability Act (HIPAA) is critical.

Healthcare IT needs to keep pace with these requirements and the ongoing printing demands of healthcare. Medical professionals need to print effortlessly and reliably to nearby or appropriate printers within virtual environments, and PrinterLogic and IGEL can help make that an easy, reliable process—all while efficiently maintaining the protection of confidential patient information. By combining PrinterLogic’s enterprise print management software with centrally managed direct IP printing and IGEL’s software-defined thin client endpoint management, healthcare organizations can:

  • Reduce capital and operational costs
  • Support virtual desktop infrastructure (VDI) and electronic medical records (EMR) systems effectively
  • Centralize and simplify print management
  • Add an essential layer of security from the target printer all the way to the network edge
Overcome the Data Protection Dilemma - Vembu
Selecting a high-priced legacy backup application that protects an entire IT environment or adopting a new age solution that focuses on protecting a particular area of an environment is a dilemma for every IT professional. Read this whitepaper to overcome the data protection dilemma with Vembu.
IT professionals face a dilemma when selecting a backup solution for their environment. Selecting a legacy application that protects their entire environment means that they have to tolerate high pricing and live with software that does not fully exploit the capabilities of modern IT environments.

On the other hand, they can adopt solutions that focus on a particular area of an IT environment and are limited to just that area. These solutions have a relatively small customer base, which means they have not been vetted as thoroughly as the legacy applications. Vembu is a next-generation company that provides the capabilities of the new class of backup solutions while at the same time providing completeness of platform coverage similar to legacy applications.
Understanding Windows Server Hyper-V Cluster Configuration, Performance and Security
Windows Server Hyper-V Clusters are an important option when implementing high availability for the critical workloads of a business. Guidelines on getting started with deployment and network configuration, along with industry best practices for performance, security, and storage management, are something no IT admin would want to miss. Get started by reading this white paper, which discusses these topics through production-field scenarios and helps you deploy a disaster-ready Hyper-V Cluster in your own infrastructure.
How do you increase the uptime of your critical workloads? How do you start setting up a Hyper-V Cluster in your organization? What are the Hyper-V design and networking configuration best practices? These are some of the questions you may have when you run large environments with many Hyper-V deployments. It is essential for IT administrators to build disaster-ready Hyper-V Clusters rather than think about troubleshooting them in their production workloads. This whitepaper will help you deploy a Hyper-V Cluster in your infrastructure by providing step-by-step configuration and consideration guides focusing on optimizing the performance and security of your setup.
Mastering vSphere – Best Practices, Optimizing Configurations & More
Do you regularly work with vSphere? If so, this free eBook is for you. Learn how to leverage best practices for the most popular features contained within the vSphere platform and boost your productivity using tips and tricks learned directly from an experienced VMware trainer and highly qualified professional. In this eBook, vExpert Ryan Birk shows you how to master: Advanced Deployment Scenarios using Auto-Deploy, Shared Storage, Performance Monitoring and Troubleshooting, and Host Network configuration.

If you’re here to gather some of the best practices surrounding vSphere, you’ve come to the right place! Mastering vSphere: Best Practices, Optimizing Configurations & More, the free eBook authored by me, Ryan Birk, is the product of many years working with vSphere as well as teaching others in a professional capacity. In my extensive career as a VMware consultant and teacher (I’m a VMware Certified Instructor) I have worked with people of all competence levels and been asked hundreds - if not thousands - of questions on vSphere. I was approached to write this eBook to put that experience to use to help people currently working with vSphere step up their game and reach that next level. As such, this eBook assumes readers already have a basic understanding of vSphere and will cover the best practices for four key aspects of any vSphere environment.

The best practices covered here will focus largely on management and configuration solutions, so they should remain relevant for quite some time. However, with that said, things are constantly changing in IT, so I would always recommend obtaining the most up-to-date information from VMware KBs and official documentation, especially regarding specific versions of tools and software updates. This eBook is divided into several sections, and although I would advise reading the whole eBook as most elements relate to others, you might want to just focus on a certain area you’re having trouble with. If so, jump to the section you want to read about.

Before we begin, I want to note that in a VMware environment, it’s always best to try to keep things simple. Far too often I have seen environments thrown off the tracks by trying to do too much at once. I try to live by the mentality of “keeping your environment boring” – in other words, keeping your host configurations the same, storage configurations the same and network configurations the same. I don’t mean duplicate IP addresses, but the hosts need identical port groups, access to the same storage networks, etc. Consistency is the name of the game and is key to solving unexpected problems down the line. Furthermore, it enables smooth scalability - when you move from a single host configuration to a cluster configuration, having the same configurations will make live migrations and high availability far easier to configure without having to significantly re-work the entire infrastructure. Now that the scene has been set, let’s get started!
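To make the consistency idea concrete, here is a minimal sketch (not from the eBook) that uses the open-source pyVmomi SDK to compare the standard-switch port group names configured on each ESXi host against the first host found. The vCenter address and credentials are placeholders you would replace, and disconnected hosts are simply skipped.

```python
# Minimal consistency check (illustrative only): compare standard-switch port
# group names across all ESXi hosts visible in vCenter. Requires pyVmomi;
# the vCenter address and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()  # lab use only; trust real certs in production
si = SmartConnect(host="vcenter.example.com",
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    baseline = None
    for host in view.view:
        if host.config is None:  # skip disconnected or unresponsive hosts
            continue
        portgroups = {pg.spec.name for pg in host.config.network.portgroup}
        if baseline is None:
            baseline = portgroups
        elif portgroups != baseline:
            print(f"{host.name} differs from baseline: {sorted(portgroups ^ baseline)}")
finally:
    Disconnect(si)
```

A check like this, run before adding a host to a cluster, is one simple way to keep configurations "boring" and avoid surprises when enabling live migration or high availability.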

Implementing High Availability in a Linux Environment
This white paper explores how organizations can lower both CapEx and OpEx running high-availability applications in a Linux environment without sacrificing performance or security.
Using open source solutions can dramatically reduce capital expenditures, especially for software licensing fees. But most organizations also understand that open source software needs more “care and feeding” than commercial software—sometimes substantially more—potentially causing operating expenditures to increase well above any potential savings in CapEx. This white paper explores how organizations can lower both CapEx and OpEx running high-availability applications in a Linux environment without sacrificing performance or security.
Controlling Cloud Costs without Sacrificing Availability or Performance
This white paper aims to help prevent cloud services sticker shock from ever occurring again and to make your cloud investments more effective.
After signing up with a cloud service provider, you receive a bill that causes sticker shock. There are unexpected and seemingly excessive charges, and those responsible seem unable to explain how this could have happened. The situation is critical because the amount threatens to bust the budget unless cost-saving changes are made immediately. The objective of this white paper is to help prevent cloud services sticker shock from occurring ever again.
How to Get the Most Out of Windows Admin Center
Windows Admin Center is the future of Windows and Windows Server management. Are you using it to its full potential? In this free eBook, Microsoft Cloud and Datacenter Management MVP, Eric Siron, has put together a 70+ page guide on what Windows Admin Center brings to the table, how to get started, and how to squeeze as much value out of this incredible free management tool from Microsoft. This eBook covers: - Installation - Getting Started - Full UI Analysis - Security - Managing Extensions

Each version of Windows and Windows Server showcases new technologies. The advent of PowerShell marked a substantial step forward in managing those features. However, the built-in graphical Windows management tools have largely stagnated - the same basic Microsoft Management Console (MMC) interfaces have remained since Windows 2000. Over the years, Microsoft tried multiple overhauls of the built-in Server Manager console, but they gained little traction. Until Windows Admin Center.

WHAT IS WINDOWS ADMIN CENTER?
Windows Admin Center (WAC) represents a modern turn in Windows and Windows Server system management. From its home page, you establish a list of the networked Windows and Windows Server computers to manage. From there, you can connect to an individual system to control components such as hardware drivers. You can also use it to manage Windows roles, such as Hyper-V.

On the front end, Windows Admin Center is presented through a sleek HTML5 web interface. On the back end, it leverages PowerShell extensively to control the systems within your network. The entire package runs on a single system, so you don’t need a complicated infrastructure to support it. In fact, you can run it locally on your Windows 10 workstation if you want. If you require more resiliency, you can run Windows Admin Center as a role on a Microsoft Failover Cluster.
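Windows Admin Center is driven from its web UI, but the mechanism it relies on - executing PowerShell against remote machines - can be illustrated in code. The sketch below uses the third-party pywinrm library purely as an analogy for that pattern; it is not part of WAC, the host name and credentials are placeholders, and WinRM remoting must already be enabled on the target.

```python
# Illustration of the remote-PowerShell pattern that WAC builds on, using the
# third-party pywinrm library (not part of Windows Admin Center). The target
# host, user name, and password are placeholders.
import winrm

session = winrm.Session("server01.example.com",
                        auth=("administrator", "changeme"),
                        transport="ntlm")

# Run a PowerShell command on the remote host and print its output.
result = session.run_ps("Get-Service WinRM | Select-Object Name, Status | Format-List")
if result.status_code == 0:
    print(result.std_out.decode())
else:
    print("Remote command failed:", result.std_err.decode())
```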

WHY WOULD I USE WINDOWS ADMIN CENTER?
In the modern era of Windows management, we have shifted to a greater reliance on industrial-strength tools like PowerShell and Desired State Configuration. However, we still have servers that require individualized attention and infrequently utilized resources. WAC gives you a one-stop hub for dropping in on any system at any time and working with almost any of its facets.

ABOUT THIS EBOOK
This eBook has been written by Microsoft Cloud & Datacenter Management MVP Eric Siron. Eric has worked in IT since 1998, designing, deploying, and maintaining server, desktop, network, and storage systems. He has provided all levels of support for businesses ranging from single-user through enterprises with thousands of seats. He has achieved numerous Microsoft certifications and was a Microsoft Certified Trainer for four years. Eric is also a seasoned technology blogger and has amassed a significant following through his top-class work on the Altaro Hyper-V Dojo.

Digital Workspace Disasters and How to Beat Them
Desktop DR - the recovery of individual desktop systems from a disaster or system failure - has long been a challenge. Part of the problem is that there are so many desktops, storing so much valuable data and - unlike servers - with so many different end user configurations and too little central control. Imaging everyone would be a huge task, generating huge amounts of backup data.
Desktop DR - the recovery of individual desktop systems from a disaster or system failure - has long been a challenge. Part of the problem is that there are so many desktops, storing so much valuable data and - unlike servers - with so many different end-user configurations and too little central control. Imaging every one of them would be a huge task, generating huge amounts of backup data. And even if those problems could be overcome with the use of software agents, plus de-duplication to take common files such as the operating system out of the backup window, restoring damaged systems could still mean days of software reinstallation and reconfiguration.

Yet at the same time, most organizations have a strategic need to deploy and provision new desktop systems, and to be able to migrate existing ones to new platforms. Again, these are tasks that benefit from reducing both duplication and the need to reconfigure the resulting installation. The parallels with desktop DR should be clear.

We often write about the importance of an integrated approach to investing in backup and recovery. By bringing together business needs that have a shared technical foundation, we can, for example, gain incremental benefits from backup, such as improved data visibility and governance, or we can gain DR capabilities from an investment in systems and data management.

So it is with desktop DR and user workspace management (UWM). Both of these are growing in importance as organizations’ desktop estates grow more complex. Not only are we adding more ways to work online, such as virtual PCs, more applications, and more layers of middleware, but the resulting systems face more risks and threats and are subject to higher regulatory and legal requirements.

Increasingly then, both desktop DR and UWM will be not just valuable, but essential. Getting one as an incremental bonus from the other therefore not only strengthens the business case for that investment proposal, it is a win-win scenario in its own right.
The Forrester Wave: Intelligent Application and Service Monitoring, Q2 2019
Thirteen of the most significant IASM providers identified, researched, analyzed, and scored by Forrester Research against criteria in three categories: current offering, market presence, and strategy. Leaders, strong performers, and contenders emerge — and you may be surprised where each provider lands in this Forrester Wave.

In The Forrester Wave: Intelligent Application and Service Monitoring, Q2 2019, Forrester identified the 13 most significant IASM providers in the market today, with Zenoss ranked amongst them as a Leader.

“As complexity grows, I&O teams struggle to obtain full visibility into their environments and do troubleshooting. To meet rising customer expectations, operations leaders need new monitoring technologies that can provide a unified view of all components of a service, from application code to infrastructure.”

Who Should Read This

Enterprise organizations looking for a solution to provide:

  • Strong root-cause analysis and remediation
  • Digital customer experience measurement capabilities
  • Ease of deployment across the customer’s whole environment, positioning them to deliver intelligent application and service monitoring successfully

Our Takeaways

Trends impacting the infrastructure and operations (I&O) team include:

  • Operations leaders favor a unified view
  • AI/machine learning adoption reaches 72% within the next 12 months
  • Intelligent root-cause analysis soon to become table stakes
  • Monitoring the digital customer experience becomes a priority
  • Ease and speed of deployment are differentiators

ESG Report: Verifying Network Intent with Forward Enterprise
This ESG Technical Review documents hands-on validation of Forward Enterprise, a solution developed by Forward Networks to help organizations save time and resources when verifying that their IT networks can deliver application traffic consistently in line with network and security policies. The review examines how Forward Enterprise can reduce network downtime, ensure compliance with policies, and minimize adverse impact of configuration changes on network behavior.
ESG research recently uncovered that 66% of organizations view their IT environments as more or significantly more complex than they were two years ago. That complexity will most likely increase, since 46% of organizations anticipate that their network infrastructure spending will exceed 2018 levels as they upgrade and expand their networks.

Large enterprise and service provider networks consist of multiple device types—routers, switches, firewalls, and load balancers—with proprietary operating systems (OS) and different configuration rules. As organizations support more applications and users, their networks will grow and become more complex, making it more difficult to verify and manage correctly implemented policies across the entire network. Organizations have also begun to integrate public cloud services with their on-premises networks, adding further network complexity to manage end-to-end policies.

With increasing network complexity, organizations cannot easily confirm that their networks are operating as intended when they implement network and security policies. Moreover, when considering a fix to a service-impact issue or a network update, determining how it may impact other applications negatively or introduce service-affecting issues becomes difficult. To assess adherence to policies or the impact of any network change, organizations have typically relied on disparate tools and material—network topology diagrams, device inventories, vendor-dependent management systems, command line (CLI) commands, and utilities such as “ping” and “traceroute.” The combination of these tools cannot provide a reliable and holistic assessment of network behavior efficiently.

Organizations need a vendor-agnostic solution that enables network operations to automate the verification of network implementations against intended policies and requirements, regardless of the number and types of devices, operating systems, traffic rules, and policies that exist. The solution must represent the topology of the entire network or subsets of devices (e.g., in a region) quickly and efficiently. It should verify network implementations from prior points in time, as well as proposed network changes prior to implementation. Finally, the solution must also enable organizations to quickly detect issues that affect application delivery or violate compliance requirements.
Lift and Shift Backup and Disaster Recovery Scenario for Google Cloud: Step by Step Guide
There are many new challenges, and reasons, to migrate workloads to the cloud, especially to a public cloud like Google Cloud Platform. Whether it is for backup, disaster recovery, or production in the cloud, you should be able to leverage the cloud platform to solve your technology challenges. In this step-by-step guide, we outline how GCP is positioned to be one of the easiest cloud platforms for app development, and the critical role data protection as-a-service (DPaaS) can play.

There are many new challenges, and reasons, to migrate workloads to the cloud.

For example, here are four of the most popular:

  • Analytics and machine learning (ML) are everywhere. Once you have your data in a cloud platform like Google Cloud Platform, you can leverage its APIs to run analytics and ML on everything (see the sketch after this list).
  • Kubernetes is powerful and scalable, but transitioning legacy apps to Kubernetes can be daunting.
  • SAP HANA is a secret weapon. With high-memory instances offering double-digit terabytes of RAM, migrating SAP to a cloud platform is easier than ever.
  • Serverless is the future of application development. With Cloud SQL, BigQuery, and all the other serverless solutions, cloud platforms like GCP are well positioned to be the easiest platform for app development.
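As a small, hypothetical illustration of the analytics point in the first bullet above, the sketch below uses the google-cloud-bigquery client library to run a query against one of BigQuery's public datasets. The dataset and query are examples only, and the client picks up whatever project and credentials your environment is configured with.

```python
# Hypothetical example: run an analytics query against a BigQuery public dataset.
# Assumes `pip install google-cloud-bigquery` and that Application Default
# Credentials are configured (e.g., `gcloud auth application-default login`).
from google.cloud import bigquery

client = bigquery.Client()  # uses your default GCP project and credentials

query = """
    SELECT name, SUM(number) AS total
    FROM `bigquery-public-data.usa_names.usa_1910_2013`
    GROUP BY name
    ORDER BY total DESC
    LIMIT 5
"""

# Submit the query and print the five most common names in the dataset.
for row in client.query(query).result():
    print(row.name, row.total)
```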

Whether it is for backup, disaster recovery, or production in the cloud, you should be able to leverage the cloud platform to solve your technology challenges. In this step-by-step guide, we outline how GCP is positioned to be one of the easiest cloud platforms for app development, and the critical role data protection as-a-service (DPaaS) can play.

Data Protection Overview and Best Practices
This white paper works through data protection processes and best practices using the Tintri VMstore. Tintri technology is differentiated by its level of abstraction—the ability to take every action on individual virtual machines. Hypervisor administrators and staff members associated with architecting, deploying and administering a data protection and disaster recovery solution will want to dig into this document to understand how Tintri can save them a great deal of management effort and greatly reduce operating expense.

This white paper works through data protection processes and best practices using the Tintri VMstore. Tintri technology is differentiated by its level of abstraction—the ability to take every action on individual virtual machines.  In this paper, you’ll:

  • Learn how that greatly increases the precision and efficiency of snapshots for data protection
  • Explore the ability to move between recovery points
  • Analyze the behavior of individual virtual machines
  • Predict the need for additional capacity and performance for data protection

If you’re focused on building a successful data protection solution, this document targets key best practices and known challenges. Hypervisor administrators and staff members associated with architecting, deploying and administering a data protection and disaster recovery solution will want to dig into this document to understand how Tintri can save them a great deal of their management effort and greatly reduce operating expense.
