Virtualization Technology News and Information
White Papers Search Results
Showing 1 - 16 of 23 white papers, page 1 of 2.
The Expert Guide to VMware Data Protection
Virtualization is a very general term for simulating a physical entity using software. Many different forms of virtualization may be found in a data center, including server, network, and storage virtualization. Server virtualization in particular comes with many unique terms and concepts that you will hear as part of the technology.
Virtualization is the most disruptive technology of the decade. Virtualization-enabled data protection and disaster recovery is especially disruptive because it allows IT to do things dramatically better at a fraction of the cost of what it would be in a physical data center.

Chapter 1: An Introduction to VMware Virtualization

Chapter 2: Backup and Recovery Methodologies

Chapter 3: Data Recovery in Virtual Environments

Chapter 4: Choosing the Right Backup Solution for VMware
How to avoid VM sprawl and improve resource utilization in VMware and Veeam backup infrastructures
You're facing VM sprawl if you're experiencing an uncontrollable increase in unused and unneeded objects in your virtual VMware environment. VM sprawl often occurs in virtual infrastructures because they expand much faster than physical ones, which can make management a challenge. The growing number of virtualized workloads and applications generates “virtual junk,” causing the VM sprawl issue. Eventually it can put you at risk of running out of resources.

Getting virtual sprawl under control will help you reallocate and better provision your existing storage, CPU and memory resources between critical production workloads and high-performance, virtualized applications. With proper resource management, you can save money on extra hardware.

This white paper examines how you can avoid potential VM sprawl risks and automate proactive monitoring by using Veeam ONE, a part of Veeam Availability Suite. It will arm you with a list of VM sprawl indicators and explain how to pick and configure a handy report kit to detect and eliminate VM sprawl threats in your VMware environment.

Read this FREE white paper and learn how to:

  • Identify “zombies” (see the sketch after this list)
  • Clean up garbage and orphaned snapshots
  • Establish a transparent system to get sprawl under control
  • And more!
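As a rough illustration of the first two bullets, here is a minimal sketch, not Veeam ONE itself, that flags powered-off “zombie” candidates and stale snapshots using pyvmomi, VMware's open-source Python SDK. The vCenter host and credentials are placeholders.

import ssl
from datetime import datetime, timedelta, timezone
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- substitute your own vCenter and credentials.
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

cutoff = datetime.now(timezone.utc) - timedelta(days=30)

def walk(snapshots):
    # Flatten the snapshot tree into a single iterable.
    for snap in snapshots:
        yield snap
        yield from walk(snap.childSnapshotList)

for vm in view.view:
    # Powered-off VMs are candidates for "zombie" review, not automatic deletion.
    if vm.summary.runtime.powerState == vim.VirtualMachinePowerState.poweredOff:
        print(f"zombie candidate (powered off): {vm.name}")
    # Snapshots older than 30 days are a common sprawl indicator.
    if vm.snapshot:
        for snap in walk(vm.snapshot.rootSnapshotList):
            if snap.createTime < cutoff:
                print(f"stale snapshot on {vm.name}: {snap.name} ({snap.createTime:%Y-%m-%d})")

Disconnect(si)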
How VembuHIVE, a backup repository as a file system, is changing the dynamics of data protection
Microsoft applications are a critical segment of the core systems managed and run by the IT departments of most organizations. Backing them up is not enough: the effect of the backup process on your systems and storage determines its efficiency. This whitepaper explains how VembuHIVE transforms the way backups are performed to achieve disaster readiness.
Microsoft applications like SQL Server, Exchange, and Active Directory are instrumental in running some of the mission-critical processes of an IT setup. While there are many solutions that address their data protection concerns, efficient recovery from a storage medium has always been a pivotal issue. Read this white paper, which includes performance and resource utilization reports, to see how Vembu BDR Suite, with its in-house proprietary file system VembuHIVE, reduces the backup footprint on storage repositories, enabling quick recovery with minimal RTOs.
Storage Playbook: Essential Enterprise Storage Concepts
Storage can seem like a confusing topic for the uninitiated, but a little bit of knowledge can go a long way. It is important to understand the basic concepts of storage technologies, performance, and configuration before diving into more advanced practices.

In this e-book, we’ll cover storage basics, storage performance and capacity, forecasting and usage and storage best practices.
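Since the e-book covers capacity forecasting, here is a toy Python sketch of the underlying arithmetic: given current usage and an assumed steady monthly growth rate (all numbers below are made up), project usage and estimate when the array fills.

import math

total_tb = 100.0   # assumed usable capacity
used_tb = 62.0     # assumed current consumption
growth = 0.04      # assumed 4% month-over-month growth

# Months until full, from used * (1 + growth)^m = total.
months = math.log(total_tb / used_tb) / math.log(1 + growth)
print(f"At {growth:.0%}/month, capacity is exhausted in ~{months:.1f} months")

# Short-range usage projection.
for m in range(1, 7):
    print(f"month {m}: {used_tb * (1 + growth) ** m:.1f} TB used")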

Overcoming IT Monitoring Tool Sprawl with a Single-Pane-of-Glass Solution
For years, IT managers have been seeking a single-pane-of-glass tool that can help them monitor and manage all aspects of their IT infrastructure – from desktops to servers, hardware to application code, and network to storage. Read this white paper to understand how to consolidate IT performance monitoring and implement a single-pane-of-glass monitoring solution.

For years, IT managers have been seeking a single-pane-of-glass tool that can help them monitor and manage all aspects of their IT infrastructure – from desktops to servers, hardware to application code, and network to storage. But many fail to achieve this because they do not know how to implement a single-pane-of-glass solution.

Read this eG Innovations white paper, and understand:

  • How organizations end up with more tools than they need
  • The challenges of dealing with multiple tools
  • Myths and popular misconceptions about a single-pane-of-glass monitoring tool
  • Best practices for achieving unified IT monitoring
  • Benefits of consolidating monitoring into a single-pane-of-glass monitoring solution
Optimizing Performance for Office 365 and Large Profiles with ProfileUnity ProfileDisk
Managing Windows user profiles can be a complex and challenging process. Better profile management is usually sought by organizations looking to reduce Windows login times, accommodate applications that do not adhere to best practice application data storage, and to give users the flexibility to login to any Windows Operating System (OS) and have their profile follow them.

Note that additional profile challenges and solutions are covered in a related ProfileUnity whitepaper entitled “User Profile and Environment Management with ProfileUnity.” To efficiently manage the complex challenges of today’s diverse Windows profile environments, Liquidware ProfileUnity exclusively features two user profile technologies that can be used together or separately depending on the use case.

These include:

1. ProfileDisk, a virtual disk-based profile that delivers the entire profile as a layer from an attached user VHD or VMDK, and

2. Profile Portability, a file- and registry-based profile solution that restores files at login, post-login, or based on environment triggers.

HyperCore-Direct: NVMe Optimized Hyperconvergence
Scale Computing’s award winning HC3 solution has long been a leader in the hyperconverged infrastructure space. Now targeting even higher performing workloads, Scale Computing is announcing HyperCore-Direct, the first hyperconverged solution to provide software defined block storage utilizing NVMe over fabrics at near bare-metal performance.
In this whitepaper, we showcase the performance of a Scale HyperCore-Direct cluster equipped with Intel P3700 NVMe drives, as well as a single-node HyperCore-Direct system with Intel Optane P4800X NVMe drives. Various workloads were tested using off-the-shelf Linux and Windows virtual machine instances. The results show that HyperCore-Direct’s new NVMe-optimized version of SCRIBE, the same software-defined storage powering every HC3 cluster in production today, is able to offer the lowest latency per IO delivered to virtual machines.
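For readers who want a feel for the latency numbers being discussed, here is a rough per-IO read-latency probe in Python. It is only a sketch: it reads through the page cache, so results will flatter the device; serious storage benchmarking uses a dedicated tool such as fio with direct IO. The file path is a placeholder.

import os
import random
import time

PATH = "/data/testfile.bin"   # placeholder: a large file on the device under test
IO_SIZE = 4096                # 4 KiB reads, a common benchmark size
SAMPLES = 10_000

fd = os.open(PATH, os.O_RDONLY)
size = os.lseek(fd, 0, os.SEEK_END)

latencies = []
for _ in range(SAMPLES):
    # Pick a 4 KiB-aligned random offset and time a single read.
    offset = random.randrange(size // IO_SIZE) * IO_SIZE
    t0 = time.perf_counter()
    os.pread(fd, IO_SIZE, offset)
    latencies.append(time.perf_counter() - t0)
os.close(fd)

latencies.sort()
print(f"avg {sum(latencies)/SAMPLES*1e6:.1f} us  "
      f"p99 {latencies[int(SAMPLES*0.99)]*1e6:.1f} us")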
HC3, SCRIBE and HyperCore Theory of Operations
This document is intended to describe the technology, concepts and operating theory behind the Scale Computing HC3 System (Hyper-converged Compute Cluster) and the HyperCore OS that powers it, including the SCRIBE (Scale Computing Reliable Independent Block Engine) storage layer.
A Journey Through Hybrid IT and the Cloud
How to navigate between the trenches

Hybrid IT has moved from buzzword status to reality and organizations are realizing its potential impact. Some aspects of your infrastructure may remain in a traditional setting, while another part runs on cloud infrastructure—causing great complexity. So, what does this mean for you?
 
“A Journey Through Hybrid IT and the Cloud” provides insight on:

  • What Hybrid IT means for the network, storage, compute, monitoring, and your staff
  • Real-world examples that can occur along your journey (what did vs. didn’t work)
  • How to educate employees on Hybrid IT and the Cloud
  • Proactively searching out technical solutions to real business challenges
Top 10 Reasons to Adopt Software-Defined Storage
In this brief, learn about the top ten reasons why businesses are adopting software-defined storage to empower their existing and new storage investments with greater performance, availability and functionality.
DataCore delivers a software-defined architecture that empowers existing and new storage investments with greater performance, availability and functionality. But don’t take our word for it. We decided to poll our customers to learn what motivated them to adopt software-defined storage. As a result, we came up with the top 10 reasons our customers have adopted software-defined storage.
Download this white paper to learn about:
  • How software-defined storage protects investments, reduces costs, and enables greater buying power
  • How you can protect critical data, increase application performance, and ensure high availability
  • Why 10,000 customers have chosen DataCore’s software-defined storage solution
Cloud Migration Planning Guide
Effective migration planning needs to start with evaluating the current footprint to determine how the move will affect all functional and non-functional areas of the organization. Having a framework for assessment will streamline migration efforts, whether an enterprise plans to undertake this project on its own or with the help of a cloud service provider. HyperCloud helps enterprises navigate the complex cloud ecosystem and build an assessment that is precise and accurate.

Most enterprises underestimate the planning process; they do not spend sufficient time understanding the cloud landscape and the different options available. While there are tools at hand to assist the implementation and the validation phases of the migration, planning is where all the crucial decisions need to be made.

Bad planning will lead to failed migrations. Challenges that enterprises often grapple with include:

  • Visibility and the ability to compile an inventory of their existing on-premises VMware resources
  • Cherry-picking workloads and applications that are cloud-ready
  • Right-sizing for the public cloud
  • A financial assessment of what the end state will look like

HyperCloud Analytics provides intelligence backed by 400+ million benchmarked data points to enable enterprises to make the right choices for the organization. HyperCloud’s cloud planning framework provides automation for four key stages that enterprises should consider as they plan their migration projects.

Enterprises get automated instance recommendations and accurate cost forecasts made with careful consideration of their application requirements (bandwidth, storage, security, etc.). Multiple assessments can be run across the different cloud providers to understand application costs post-migration.
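To make the right-sizing and cost-forecast steps concrete, here is a hypothetical sketch, not HyperCloud's actual engine: the instance names and hourly rates below are invented for illustration, and the logic simply picks the cheapest instance that fits each workload's CPU and memory needs.

catalog = [
    # (name, vcpus, ram_gb, $/hour) -- made-up numbers for illustration
    ("small-2x8",   2,  8, 0.096),
    ("medium-4x16", 4, 16, 0.192),
    ("large-8x32",  8, 32, 0.384),
]

workloads = [
    {"name": "web-frontend", "vcpus": 2, "ram_gb": 6},
    {"name": "sql-backend",  "vcpus": 6, "ram_gb": 28},
]

HOURS_PER_MONTH = 730

for wl in workloads:
    # Keep only instances that satisfy both the CPU and memory requirement.
    fits = [i for i in catalog if i[1] >= wl["vcpus"] and i[2] >= wl["ram_gb"]]
    if not fits:
        print(f"{wl['name']}: no instance fits -- re-evaluate or split the workload")
        continue
    name, _, _, rate = min(fits, key=lambda i: i[3])
    print(f"{wl['name']}: {name} at ${rate * HOURS_PER_MONTH:,.2f}/month")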

Download our whitepaper to learn more about how you can build high-confidence, accurate plans with detailed cloud bills and cost forecasts, while expediting your cloud migrations.

The Case for Converged Application & Infrastructure Performance Monitoring
Read this white paper and learn how you can combine and correlate performance insights from the application (code, SQL, logs) and the underlying hardware infrastructure (server, network, virtualization, storage, etc.).

One of the toughest problems facing enterprise IT teams today is troubleshooting slow applications. When a user complains of slowness in application access, all hell breaks loose, and the blame game begins: app owners, developers and IT ops teams enter into endless war room sessions to figure out what went wrong and where. Have you also been in this situation before?

Read this white paper by Larry Dragich, and learn how you can combine and correlate performance insights from the application (code, SQL, logs) and the underlying hardware infrastructure (server, network, virtualization, storage, etc.) in order to:

  • Proactively detect user experience issues before your customers are impacted
  • Trace business transactions and isolate the cause of application slowness
  • Get code-level visibility to identify inefficient application code and slow database queries
  • Automatically map application dependencies within the infrastructure to pinpoint the root cause of the problem
Achieve centralized visibility of all your applications and infrastructure and easily diagnose the root cause of performance slowdowns.
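As a small illustration of the correlation idea (not eG Innovations' actual method), the sketch below ranks fabricated infrastructure metrics by how strongly they track application response time; it requires Python 3.10+ for statistics.correlation.

from statistics import correlation  # Python 3.10+

# Fabricated, time-aligned samples: app response time plus infra metrics.
response_ms = [120, 135, 180, 410, 390, 150, 125, 460, 440, 130]

infra = {
    "cpu_ready_pct":   [2, 3, 4, 18, 17, 3, 2, 21, 19, 3],
    "disk_latency_ms": [5, 6, 7,  8,  7, 6, 5,  8,  7, 5],
    "net_util_pct":    [30, 32, 35, 33, 31, 30, 29, 34, 33, 31],
}

# Rank infra metrics by absolute correlation with application slowness.
ranked = sorted(infra.items(),
                key=lambda kv: abs(correlation(response_ms, kv[1])),
                reverse=True)
for metric, series in ranked:
    print(f"{metric}: r = {correlation(response_ms, series):+.2f}")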
Catalogic Software-Defined Secondary Storage Appliance
The Catalogic software-defined secondary-storage appliance is architected and optimized to work seamlessly with Catalogic’s data protection product DPX, with Catalogic/Storware vProtect, and with future Catalogic products.

Backup nodes are deployed on a bare metal server or as virtual appliances to create a cost-effective yet robust second-tier storage solution. The backup repository offers data reduction and replication. Backup data can be archived off to tape for long-term retention.
Microsoft Azure Cloud Cost Calculator
Move Workloads to the Cloud and Reduce Costs!

Considering a move to Azure? Use this simple tool to find out how much you can save on storage costs by mobilizing your applications to the cloud with Zerto on Azure!
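The underlying arithmetic of such a calculator is simple; here is a back-of-the-envelope sketch in which every rate is a placeholder assumption, not an actual Azure or on-premises price.

capacity_tb = 50
onprem_per_tb_month = 85.0   # assumed fully loaded on-prem cost ($/TB/month)
azure_per_tb_month = 35.0    # assumed blended Azure storage cost ($/TB/month)
months = 36                  # comparison horizon

onprem = capacity_tb * onprem_per_tb_month * months
azure = capacity_tb * azure_per_tb_month * months
print(f"on-prem: ${onprem:,.0f}  azure: ${azure:,.0f}  savings: ${onprem - azure:,.0f}")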

Understanding Windows Server Hyper-V Cluster Configuration, Performance and Security
Windows Server Hyper-V Clusters are an important option when implementing high availability for critical business workloads. Guidelines on getting started with deployment and network configuration, along with industry best practices for performance, security, and storage management, are something no IT admin would want to miss. Get started by reading this white paper, which works through real production scenarios.
How do you increase the uptime of your critical workloads? How do you start setting up a Hyper-V Cluster in your organization? What are the Hyper-V design and networking configuration best practices? These are some of the questions you may have when you have large environments with many Hyper-V deployments. It is essential for IT administrators to build disaster-ready Hyper-V Clusters rather than thinking about troubleshooting them in production. This whitepaper will help you deploy a Hyper-V Cluster in your infrastructure by providing step-by-step configuration and consideration guides, focusing on optimizing the performance and security of your setup.
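As a taste of the deployment steps the paper walks through, here is a minimal sketch that drives the standard Failover Clustering PowerShell cmdlets from Python. The node names and cluster address are placeholders, and the Failover Clustering feature must already be installed on each node.

import subprocess

NODES = "HV-NODE1,HV-NODE2"
CLUSTER_NAME = "HVCLUSTER01"
CLUSTER_IP = "10.0.0.50"

def ps(command: str) -> None:
    # Run one PowerShell command and fail loudly on error.
    subprocess.run(["powershell", "-NoProfile", "-Command", command], check=True)

# 1. Validate the candidate nodes before clustering them.
ps(f"Test-Cluster -Node {NODES}")

# 2. Create the cluster with a static management address.
ps(f"New-Cluster -Name {CLUSTER_NAME} -Node {NODES} -StaticAddress {CLUSTER_IP}")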
vSphere Troubleshooting Guide
Troubleshooting complex virtualization technology is something all VMware users will have to face at some point. It requires an understanding of how the various components fit together, and finding a place to start is not easy. Thankfully, VMware vExpert Ryan Birk is here to help with this eBook, preparing you for any problems you may encounter along the way.

This eBook explains how to identify problems with vSphere and how to solve them. Before we begin, we need to start off with an introduction to a few things that will make life easier. We’ll start with a troubleshooting methodology and how to gather logs. After that, we’ll break this eBook into the following sections: Installation, Virtual Machines, Networking, Storage, vCenter/ESXi and Clustering.

ESXi and vSphere problems arise from many different places, but they generally fall into one of these categories: Hardware issues, Resource contention, Network attacks, Software bugs, and Configuration problems.

A typical troubleshooting process contains several tasks:

1. Define the problem and gather information.
2. Identify what is causing the problem.
3. Fix the problem, whether with a quick workaround or a long-term solution.

One of the first things you should do when experiencing a problem with a host is to try to reproduce the issue. If you can find a way to reproduce it, you have a great way to validate that the issue is resolved when you do fix it. It can also be helpful to benchmark your systems before they are implemented into a production environment: if you know how they should be running, it's easier to pinpoint a problem.

You should decide whether it's best to work from a “top down” or “bottom up” approach to determine the root cause. Guest OS-level issues typically cause a large share of problems. Let's face it: some of the applications we use are not perfect. They get the job done, but they use a lot of memory doing it.

In terms of virtual machine level issues, is it possible that you have a limit or share value that's misconfigured? At the ESXi host level, you could need additional resources. It's hard to believe sometimes, but you might need another host to help with the load!
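To make that limit check concrete, here is a small pyvmomi sketch (connection details are placeholders) that lists VMs with a CPU or memory limit set, since a forgotten limit is a classic cause of unexplained slowness; in vSphere, a limit of -1 means "unlimited".

import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

# Placeholder connection details -- substitute your own vCenter and credentials.
si = SmartConnect(host="vcenter.example.com", user="admin", pwd="secret",
                  sslContext=ssl._create_unverified_context())
content = si.RetrieveContent()
view = content.viewManager.CreateContainerView(
    content.rootFolder, [vim.VirtualMachine], True)

for vm in view.view:
    cfg = vm.config
    if cfg is None:
        continue  # skip inaccessible or orphaned VMs
    cpu, mem = cfg.cpuAllocation.limit, cfg.memoryAllocation.limit
    if cpu != -1 or mem != -1:  # -1 means no cap is configured
        print(f"{vm.name}: cpu limit={cpu} MHz, mem limit={mem} MB")

Disconnect(si)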

Once you have identified the root cause, assess the impact of the problem on your day-to-day operations and decide what type of fix to implement: a short-term solution, meaning a quick workaround, or a long-term solution, such as reconfiguring a virtual machine or host. Either way, assess the impact of your solution on daily operations before applying it.

Now that the basics have been covered, download the eBook to discover how to put this theory into practice!
