Virtualization Technology News and Information
White Papers Search Results
Showing 1 - 16 of 24 white papers, page 1 of 2.
The Expert Guide to VMware Data Protection
Virtualization is a general term for simulating a physical entity in software. Many forms of virtualization may be found in a data center, including server, network, and storage virtualization. Server virtualization in particular comes with its own vocabulary of terms and concepts that you may encounter.
Virtualization is the most disruptive technology of the decade. Virtualization-enabled data protection and disaster recovery is especially disruptive because it allows IT to do things dramatically better, at a fraction of what they would cost in a physical data center.

Chapter 1: An Introduction to VMware Virtualization

Chapter 2: Backup and Recovery Methodologies

Chapter 3: Data Recovery in Virtual Environments

Chapter 4: Choosing the Right Backup Solution for VMware
How to avoid VM sprawl and improve resource utilization in VMware and Veeam backup infrastructures
You're facing VM sprawl if you're experiencing an uncontrollable increase of unused and unneeded objects in your virtual VMware environment. VM sprawl often occurs in virtual infrastructures because they expand much faster than physical ones, which can make management a challenge. The growing number of virtualized workloads and applications generates “virtual junk,” causing VM sprawl. Eventually it can put you at risk of running out of resources.

Getting VM sprawl under control will help you reallocate and better provision your existing storage, CPU, and memory resources between critical production workloads and high-performance virtualized applications. With proper resource management, you can save money on extra hardware.

This white paper examines how you can avoid potential VM sprawl risks and automate proactive monitoring by using Veeam ONE, a part of Veeam Availability Suite. It arms you with a list of VM sprawl indicators and explains how to set up and configure a handy report kit to detect and eliminate VM sprawl threats in your VMware environment.

Read this FREE white paper and learn how to:

  • Identify “zombies”
  • Clean up garbage and orphaned snapshots
  • Establish a transparent system to get sprawl under control
  • And more!
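As a rough illustration of the kind of sprawl indicator such a report looks for, the sketch below flags “zombie” VMs from hypothetical inventory data. The VM names, record fields, and 30-day idle threshold are assumptions for the example, not Veeam ONE's actual detection logic:

```python
from datetime import datetime, timedelta

# Hypothetical inventory records; in practice this data would come from
# your monitoring tool's API or an exported report.
inventory = [
    {"name": "web-01",   "powered_on": True,  "last_activity": datetime.now()},
    {"name": "test-old", "powered_on": False, "last_activity": datetime.now() - timedelta(days=90)},
    {"name": "db-02",    "powered_on": True,  "last_activity": datetime.now() - timedelta(days=2)},
]

def find_zombies(vms, idle_days=30):
    """Flag powered-off VMs with no activity for at least `idle_days` days."""
    cutoff = datetime.now() - timedelta(days=idle_days)
    return [vm["name"] for vm in vms
            if not vm["powered_on"] and vm["last_activity"] < cutoff]

print(find_zombies(inventory))  # only "test-old" qualifies
```

The same pattern extends to other sprawl indicators, such as orphaned snapshots older than a retention window.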
How VembuHIVE, a backup repository as a file system, is changing the dynamics of Data Protection
Microsoft applications are a critical segment of the core systems managed and run by most IT organizations. Backing them up is not enough; the effect of the backup process on your systems and storage determines its efficiency. From this whitepaper, learn how VembuHIVE transforms the way backups are performed to achieve disaster readiness.
Microsoft applications such as SQL Server, Exchange, and Active Directory are instrumental in running some of the mission-critical processes of an IT setup. While there are many solutions that address their data protection concerns, efficient recovery from a storage medium has always been a pivotal issue. Read this white paper, which includes performance and resource utilization reports, to learn how Vembu BDR Suite, with its in-house proprietary file system VembuHIVE, reduces the backup footprint on storage repositories, enabling quick recovery with minimal RTOs.
Storage Playbook: Essential Enterprise Storage Concepts
Storage can seem like a confusing topic for the uninitiated, but a little bit of knowledge can go a long way. It is important to understand the basic concepts of storage technologies, performance, and configuration before diving into more advanced practices.

In this e-book, we’ll cover storage basics, storage performance and capacity, forecasting and usage, and storage best practices.

The 4th Era of IT Infrastructure: Composable Systems
Learn the benefits and limitations of the 3 generations of IT infrastructure – siloed, converged and hyperconverged – and discover how the 4th generation of IT infrastructure – composable – can transform your business.

The composable infrastructure enables you to:

  • Tightly integrate server, storage, networking, virtualization, VM/container-centric management and an application marketplace to create a high-speed infrastructure platform that includes everything needed to run applications out of the box – all in a single platform
  • Easily and independently scale network, storage, and compute resources on-demand, making it possible to run a wide diversity of workloads
  • Manage your infrastructure from anywhere through a single SaaS management portal

Composable Systems – The 4th Infrastructure Era - Executive Summary
Understand the difference between Hyperconverged solutions and Cloudistics Composable platform. This executive summary discusses the evolution of infrastructures and how the emergence of superconverged systems will transform the future of IT.
Learn how Ignite goes beyond hyperconverged platforms by delivering a virtualized network and network switch, along with converged all-flash storage, compute, virtualization, and centralized SaaS management into one plug-and-play platform.

The composable infrastructure enables you to:
  • Deliver 5x faster applications than traditional hyperconverged solutions
  • Scale independent network, storage and compute resources for 4x lower costs
  • Eliminate expensive hypervisor costs with our KVM-based hypervisor built-in
The Next Generation Clusterless Federation Design in the Cloudistics Cloud Platform
Is there another way to manage your VMs? Yes: the Cloudistics Cloud Platform introduces an innovative approach, a non-clustered (or clusterless) federated design. The clusterless federation of the Cloudistics Cloud Platform uses categories and tags to characterize compute nodes, migration zones, and storage groups (or blocks). With these benefits:

  • Node limits are a thing of the past
  • Locking limitations are removed
  • Flexibility is enhanced
  • Ladders of latency are removed

This paper is written in the context of modern virtualized infrastructures, such as VMware or Nutanix. In such systems, a hypervisor runs on each compute node creating multiple virtual machines (VMs) per compute node. A guest OS runs inside each VM.

Data associated with each VM is stored in one or more virtual disks (vDisks). A virtual disk appears to the guest like a local disk, but it can be mapped to physical storage in many ways, as we will discuss.

Virtualized infrastructures use clustering to provide for non-disruptive VM migration between compute nodes, for load balancing across the nodes, for sharing storage, and for high availability and failover. Clustering is well known and has been used to build computer systems for a long time. However, in the context of virtualized infrastructures, clustering has a number of significant limitations. Specifically, as we explain below, clusters limit scalability, decrease resource efficiency, hurt performance, reduce flexibility, and impair manageability.

In this paper, we will present an innovative alternative architecture that does not have these limitations of clustering. We call our new approach clusterless federation and it is the approach used in the Cloudistics platform.

The rest of this paper is organized as follows. In Section 2, we describe the limitations of clustering and in Section 3, we drive the point home by using the specific example of VMware; other virtualized systems are similar. In Section 4, we present the clusterless federated approach and show how it avoids the limitations of clustering. We summarize in Section 5.
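To make the categories-and-tags idea concrete, here is a minimal sketch under illustrative assumptions (the node names, tags, and subset-matching rule are invented for the example, not the Cloudistics implementation): a VM is eligible to run on any compute node whose tags cover the VM's requirements, with no fixed cluster membership involved.

```python
# Illustrative clusterless placement: nodes carry tags, and a VM can land
# on any node whose tag set includes all of the VM's required tags.
nodes = {
    "node-a": {"ssd", "gpu", "zone-east"},
    "node-b": {"ssd", "zone-east"},
    "node-c": {"hdd", "zone-west"},
}

def eligible_nodes(required_tags, nodes):
    """Return names of nodes whose tags cover the VM's requirements."""
    return sorted(name for name, tags in nodes.items()
                  if required_tags <= tags)  # subset test

# A VM needing fast storage in the east zone can migrate freely between
# node-a and node-b; no static cluster boundary constrains it.
print(eligible_nodes({"ssd", "zone-east"}, nodes))  # ['node-a', 'node-b']
```

Adding a node is just a new dictionary entry; nothing analogous to a cluster-membership limit applies.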

Overcoming IT Monitoring Tool Sprawl with a Single-Pane-of-Glass Solution
For years, IT managers have been seeking a single-pane-of-glass tool that can help them monitor and manage all aspects of their IT infrastructure – from desktops to servers, hardware to application code, and network to storage. Read this white paper to understand how to consolidate IT performance monitoring and implement a single-pane-of-glass monitoring solution.

For years, IT managers have been seeking a single-pane-of-glass tool that can help them monitor and manage all aspects of their IT infrastructure – from desktops to servers, hardware to application code, and network to storage. But many fail to achieve this because they do not know how to implement a single-pane-of-glass solution.

Read this eG Innovations white paper, and understand:

  • How an organization ends up with more tools than it needs
  • The challenges of dealing with multiple tools
  • Myths and popular misconceptions about a single-pane-of-glass monitoring tool
  • Best practices for achieving unified IT monitoring
  • Benefits of consolidating monitoring into a single-pane-of-glass monitoring solution
Optimizing Performance for Office 365 and Large Profiles with ProfileUnity ProfileDisk
Managing Windows user profiles can be a complex and challenging process. Better profile management is usually sought by organizations looking to reduce Windows login times, accommodate applications that do not adhere to best practice application data storage, and to give users the flexibility to login to any Windows Operating System (OS) and have their profile follow them.

Note that additional profile challenges and solutions are covered in a related ProfileUnity whitepaper entitled “User Profile and Environment Management with ProfileUnity.” To efficiently manage the complex challenges of today’s diverse Windows profile environments, Liquidware ProfileUnity exclusively features two user profile technologies that can be used together or separately depending on the use case.

These include:

1. ProfileDisk, a virtual disk-based profile that delivers the entire profile as a layer from an attached user VHD or VMDK, and

2. Profile Portability, a file- and registry-based profile solution that restores files at login, post-login, or based on environment triggers.

HyperCore-Direct: NVMe Optimized Hyperconvergence
Scale Computing’s award-winning HC3 solution has long been a leader in the hyperconverged infrastructure space. Now targeting even higher-performing workloads, Scale Computing is announcing HyperCore-Direct, the first hyperconverged solution to provide software-defined block storage utilizing NVMe over fabrics at near bare-metal performance.
In this whitepaper, we will showcase the performance of a Scale HyperCore-Direct cluster equipped with Intel P3700 NVMe drives, as well as a single-node HyperCore-Direct system with Intel Optane P4800X NVMe drives. Various workloads have been tested using off-the-shelf Linux and Windows virtual machine instances. The results show that HyperCore-Direct’s new NVMe-optimized version of SCRIBE, the same software-defined storage powering every HC3 cluster in production today, is able to offer the lowest latency per IO delivered to virtual machines.
HC3, SCRIBE and HyperCore Theory of Operations
This document is intended to describe the technology, concepts and operating theory behind the Scale Computing HC3 System (Hyper-converged Compute Cluster) and the HyperCore OS that powers it, including the SCRIBE (Scale Computing Reliable Independent Block Engine) storage layer.
A Journey Through Hybrid IT and the Cloud
How to navigate between the trenches. Hybrid IT has moved from buzzword status to reality and organizations are realizing its potential impact. Some aspects of your infrastructure may remain in a traditional setting, while another part runs on cloud infrastructure—causing great complexity. So, what does this mean for you?

 
“A Journey Through Hybrid IT and the Cloud” provides insight on:

  • What Hybrid IT means for the network, storage, compute, monitoring, and your staff
  • Real-world examples that can occur along your journey (what did vs. didn’t work)
  • How to educate employees on Hybrid IT and the Cloud
  • Proactively searching out technical solutions to real business challenges
Smarter Storage Management Series
Organizations are creating huge amounts of data, but how and where are they going to store it all? Answers can be found in this eBook, which includes a series of thought-provoking articles covering critical issues including storage silos, data protection, application performance bottlenecks, data fabrics, and the role of hyperconverged infrastructure.

In addition to the thought-provoking articles about storage silos, data protection, application performance bottlenecks, data fabrics, and the role of hyperconverged infrastructure, the eBook concludes with an original thought leadership section, A Blueprint for Smarter Storage Management: Addressing the Top 3 Challenges Impacting Your Data with Software-Defined Storage, including:
•    Maintaining data accessibility even in the event of a catastrophic failure
•    Capacity utilization and scale
•    Extreme performance to keep up with the rate of data acquisition

eBook topics:
•    Storage Silos 101: What They Are and How To Manage Them
•    RPO/RTO and the Impact on Business Continuity
•    Data Storage 101: How To Determine Business-Critical App Performance Requirements
•    How Best To Protect and Access Your Company’s Data
•    Data Fabrics and Their Role in the Storage Hardware Refresh
•    The Rise of Hyper-Converged Infrastructure
•    A Blueprint for Smarter Storage Management – Addressing the Top 3 Challenges Impacting Your Data with Software-Defined Storage

Solving Healthcare IT Challenges with DataCore Software-Defined Storage
In this solution brief, learn how software-defined storage can help solve the most pressing challenges in Healthcare IT: (1) consolidating and managing data from disparate systems, (2) safeguarding data and applications from cyberattacks, system outages, natural disaster, and human error, (3) ensuring fast application response times and real-time data availability for life-critical applications, and (4) scaling storage as needed — easily, instantaneously, inexpensively, and non-disruptively.

Today’s hospitals are benefitting from an explosion of information technologies that is ushering in a new era of healthcare. With these advanced technologies, significantly more data is being generated, from a much wider variety of sources, and at a more frequent pace. All of this data must be stored, shared, and protected.

With data the lifeblood of healthcare, IT departments are challenged to adopt new storage and management strategies to handle the deluge of data. The skyrocketing costs to achieve continuous data availability, cope with exponential data growth, and provide timely data access rank among the most pressing challenges facing healthcare IT organizations today.

That is why a growing number of healthcare institutions are deploying DataCore’s software-defined storage platform. Only DataCore enables today’s hospitals and health systems to address mission-critical healthcare IT challenges while maximizing the availability, performance, and utilization of IT resources – allowing them to enhance patient outcomes while keeping costs low. Learn more in this solution brief.

Leveraging Hyperconvergence for Databases and Tier 1 Enterprise Applications
In this white paper, vExpert Scott Lowe discusses: (1) why organizations are implementing hyperconverged infrastructures, (2) how to move forward with hyperconverged infrastructure plans while ensuring your database applications continue to grow, (3) how to boost hyperconvergence performance, (4) the performance metric that rules them all, and (5) the business benefits of leveraging hyperconvergence.

We live in a data-driven world. The quantity of and need for increasing amounts of data will skyrocket in the coming years, and behind it all are databases intended to help organizations manage the madness. At the same time, the infrastructure that powers these databases is undergoing a massive transformation as organizations seek to simplify complex IT systems to reduce the cost of such systems and to improve the speed of the business.

Hyperconverged infrastructure has emerged in recent years as an incredibly powerful way for organizations to rein in data center madness. Adoption of this technology has been explosive—in a good way!—and organizations are enjoying significant operational efficiency and cost benefits from such adoption. How do you forge ahead? Learn more from vExpert Scott Lowe in this white paper.

Top 10 Reasons to Adopt Software-Defined Storage
In this brief, learn about the top ten reasons why businesses are adopting software-defined storage to empower their existing and new storage investments with greater performance, availability and functionality.
DataCore delivers a software-defined architecture that empowers existing and new storage investments with greater performance, availability and functionality. But don’t take our word for it. We decided to poll our customers to learn what motivated them to adopt software-defined storage. As a result, we came up with the top 10 reasons our customers have adopted software-defined storage.
Download this white paper to learn about:
•    How software-defined storage protects investments, reduces costs, and enables greater buying power
  • How you can protect critical data, increase application performance, and ensure high availability
•    Why 10,000 customers have chosen DataCore’s software-defined storage solution