White Papers Search Results
Showing 1 - 16 of 21 white papers, page 1 of 2.
The Expert Guide to VMware Data Protection
Virtualization is a very general term for simulating a physical entity in software. Many different forms of virtualization may be found in a data center, including server, network, and storage virtualization. Server virtualization in particular comes with many unique terms and concepts that you may hear as part of the technology.
Virtualization is the most disruptive technology of the decade. Virtualization-enabled data protection and disaster recovery is especially disruptive because it allows IT to do things dramatically better, at a fraction of what they would cost in a physical data center.

Chapter 1: An Introduction to VMware Virtualization

Chapter 2: Backup and Recovery Methodologies

Chapter 3: Data Recovery in Virtual Environments

Chapter 4: Choosing the Right Backup Solution for VMware
How to avoid VM sprawl and improve resource utilization in VMware and Veeam backup infrastructures
You're facing VM sprawl if you're experiencing an uncontrollable increase of unused and unneeded objects in your virtual VMware environment. VM sprawl often occurs in virtual infrastructures because they expand much faster than physical ones, which can make management a challenge. The growing number of virtualized workloads and applications generates “virtual junk” that causes VM sprawl and can eventually put you at risk of running out of resources.

Getting virtual sprawl under control will help you reallocate and better provision your existing storage, CPU and memory resources between critical production workloads and high-performance, virtualized applications. With proper resource management, you can save money on extra hardware.

This white paper examines how you can avoid potential VM sprawl risks and automate proactive monitoring by using Veeam ONE, a part of Veeam Availability Suite. It arms you with a list of VM sprawl indicators and explains how to pick and configure a handy report kit to detect and eliminate VM sprawl threats in your VMware environment; a rough scripted version of these checks is also sketched below.

Read this FREE white paper and learn how to:

  • Identify “zombies”
  • Clean up garbage and orphaned snapshots
  • Establish a transparent system to get sprawl under control
  • And more!
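
Checks like these can also be scripted directly against vCenter. As a rough, illustrative sketch only (not Veeam ONE's method), the following Python snippet uses the open-source pyVmomi library to flag powered-off VMs and snapshots older than 30 days; the vCenter hostname and credentials are placeholders.

    import ssl
    from datetime import datetime, timedelta, timezone

    from pyVim.connect import SmartConnect, Disconnect
    from pyVmomi import vim

    # Placeholder connection details -- replace with your own vCenter values.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    si = SmartConnect(host="vcenter.example.com",
                      user="administrator@vsphere.local",
                      pwd="secret", sslContext=ctx)
    content = si.RetrieveContent()

    # Walk every VM in the inventory.
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    cutoff = datetime.now(timezone.utc) - timedelta(days=30)

    def old_snapshots(snap_list, found):
        """Recursively collect snapshot names older than the cutoff."""
        for snap in snap_list:
            if snap.createTime < cutoff:
                found.append(snap.name)
            old_snapshots(snap.childSnapshotList, found)

    for vm in view.view:
        # Powered-off VMs are the usual "zombie" candidates.
        if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOff:
            print(f"Possible zombie VM: {vm.name}")
        if vm.snapshot:  # None when the VM has no snapshots
            stale = []
            old_snapshots(vm.snapshot.rootSnapshotList, stale)
            for name in stale:
                print(f"Stale snapshot on {vm.name}: {name}")

    view.Destroy()
    Disconnect(si)

A real sprawl report layers business context on top of raw checks like these, which is what the white paper walks through.
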
How VembuHIVE, a backup repository as a file system, is changing the dynamics of Data Protection
Microsoft applications are a critical segment of the core systems managed and run by IT in most organizations. Backing them up is not enough: the effect of the backup process on your systems and storage determines its efficiency. In this whitepaper, learn how VembuHIVE transforms the way backups are performed to achieve disaster-readiness.
Microsoft applications like SQL Server, Exchange, Active Directory and many others are instrumental in running some of the mission-critical processes of an IT setup. While there are many solutions that address their data protection concerns, efficient recovery from a storage medium has always been a pivotal issue. Read this white paper, which includes performance and resource utilization reports, to see how Vembu BDR Suite with its in-house proprietary file system VembuHIVE reduces the backup footprint on storage repositories, enabling quick recovery with minimal RTOs.
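
VembuHIVE's internals are proprietary, but the general space-saving idea behind such repositories, storing each unique block only once and describing every backup as a recipe of block references, can be sketched in a few lines of Python. This toy content-addressed store is purely illustrative and is not Vembu's implementation.

    import hashlib

    class ToyBlockStore:
        """Stores each unique fixed-size block once, keyed by its SHA-256 hash."""

        def __init__(self, block_size=4096):
            self.block_size = block_size
            self.blocks = {}  # hash -> block bytes, stored only once

        def write_backup(self, data):
            """Split data into blocks, store new ones, return the block recipe."""
            recipe = []
            for i in range(0, len(data), self.block_size):
                block = data[i:i + self.block_size]
                digest = hashlib.sha256(block).hexdigest()
                self.blocks.setdefault(digest, block)  # no-op if already stored
                recipe.append(digest)
            return recipe

        def read_backup(self, recipe):
            """Reassemble a backup from its recipe of block hashes."""
            return b"".join(self.blocks[d] for d in recipe)

    store = ToyBlockStore()
    full_backup = bytes(64 * 4096)                  # 256 KB of zero blocks
    incremental = bytes(63 * 4096) + b"x" * 4096    # one changed block
    r1 = store.write_backup(full_backup)
    r2 = store.write_backup(incremental)
    print(f"Two 256 KB backups, {len(store.blocks)} unique blocks stored")
    assert store.read_backup(r2) == incremental

Because the incremental backup shares all but one block with the full backup, the store keeps only two unique blocks for two 256 KB backups, which is the footprint reduction the whitepaper measures at scale.
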
Storage Playbook: Essential Enterprise Storage Concepts
Storage can seem like a confusing topic for the uninitiated, but a little bit of knowledge can go a long way. It is important to understand the basic concepts of storage technologies, performance, and configuration before diving into more advanced practices.

In this e-book, we’ll cover storage basics, storage performance and capacity, forecasting and usage, and storage best practices.
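
As a toy illustration of the kind of capacity forecasting such a playbook covers (the numbers below are invented, not taken from the e-book), a simple linear fit over historical usage samples can project when an array will fill:

    import numpy as np

    # Hypothetical monthly usage samples for a 100 TB array, in TB.
    months = np.arange(12)
    used_tb = np.array([40, 42, 45, 46, 49, 52, 54, 57, 59, 62, 64, 67])
    capacity_tb = 100.0

    # Least-squares linear fit: usage ~= slope * month + intercept.
    slope, intercept = np.polyfit(months, used_tb, 1)

    # Solve slope * m + intercept = capacity for the month the array fills.
    full_month = (capacity_tb - intercept) / slope
    print(f"Growth rate: {slope:.2f} TB/month")
    print(f"Array projected full around month {full_month:.0f}, "
          f"i.e. ~{full_month - months[-1]:.0f} months after the last sample")

Real forecasting would account for seasonality and growth acceleration, but even a linear projection beats guessing.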

Overcoming IT Monitoring Tool Sprawl with a Single-Pane-of-Glass Solution
For years, IT managers have been seeking a single-pane-of-glass tool that can help them monitor and manage all aspects of their IT infrastructure – from desktops to servers, hardware to application code, and network to storage. Read this white paper to understand how to consolidate IT performance monitoring and implement a single-pane-of-glass monitoring solution.

Many IT organizations fail to achieve this goal, however, because they do not know how to implement a single-pane-of-glass solution.

Read this eG Innovations white paper, and understand:

  • How organizations end up with more tools than they need
  • The challenges of dealing with multiple tools
  • Myths and popular misconceptions about a single-pane-of-glass monitoring tool
  • Best practices for achieving unified IT monitoring
  • Benefits of consolidating monitoring into a single-pane-of-glass monitoring solution
Optimizing Performance for Office 365 and Large Profiles with ProfileUnity ProfileDisk
Managing Windows user profiles can be a complex and challenging process. Better profile management is usually sought by organizations looking to reduce Windows login times, accommodate applications that do not adhere to best practice application data storage, and to give users the flexibility to login to any Windows Operating System (OS) and have their profile follow them.

Note that additional profile challenges and solutions are covered in a related ProfileUnity whitepaper entitled “User Profile and Environment Management with ProfileUnity.” To efficiently manage the complex challenges of today’s diverse Windows profile environments, Liquidware ProfileUnity exclusively features two user profile technologies that can be used together or separately depending on the use case.

These include:

1. ProfileDisk, a virtual disk-based profile that delivers the entire profile as a layer from an attached user VHD or VMDK, and

2. Profile Portability, a file- and registry-based profile solution that restores files at login, post-login, or based on environment triggers.

HyperCore-Direct: NVMe Optimized Hyperconvergence
Scale Computing’s award-winning HC3 solution has long been a leader in the hyperconverged infrastructure space. Now targeting even higher-performing workloads, Scale Computing is announcing HyperCore-Direct, the first hyperconverged solution to provide software-defined block storage utilizing NVMe over Fabrics at near bare-metal performance.
In this whitepaper, we showcase the performance of a Scale HyperCore-Direct cluster equipped with Intel P3700 NVMe drives, as well as a single-node HyperCore-Direct system with Intel Optane P4800X NVMe drives. Various workloads were tested using off-the-shelf Linux and Windows virtual machine instances. The results show that HyperCore-Direct’s new NVMe-optimized version of SCRIBE, the same software-defined storage powering every HC3 cluster in production today, is able to offer the lowest latency per IO delivered to virtual machines.
HC3, SCRIBE and HyperCore Theory of Operations
This document is intended to describe the technology, concepts and operating theory behind the Scale Computing HC3 System (Hyper-converged Compute Cluster) and the HyperCore OS that powers it, including the SCRIBE (Scale Computing Reliable Independent Block Engine) storage layer.
A Journey Through Hybrid IT and the Cloud
How to navigate between the trenches: Hybrid IT has moved from buzzword status to reality, and organizations are realizing its potential impact. Some aspects of your infrastructure may remain in a traditional setting while another part runs on cloud infrastructure, causing great complexity. So, what does this mean for you?

“A Journey Through Hybrid IT and the Cloud” provides insight on:

  • What Hybrid IT means for the network, storage, compute, monitoring, and your staff
  • Real world examples that can occur along your journey (what did vs. didn’t work)
  • How to educate employees on Hybrid IT and the Cloud
  • Proactively searching out technical solutions to real business challenges
Smarter Storage Management Series
Organizations are creating huge amounts of data, but how and where are they going to store it all? Answers can be found in this eBook, which includes a series of thought-provoking articles covering critical issues including storage silos, data protection, application performance bottlenecks, data fabrics, and the role of hyperconverged infrastructure.

In addition to the thought-provoking articles about storage silos, data protection, application performance bottlenecks, data fabrics, and the role of hyperconverged infrastructure, the eBook concludes with an original thought-leadership section, A Blueprint for Smarter Storage Management: Addressing the Top 3 Challenges Impacting Your Data with Software-Defined Storage, including:
•    Maintaining data accessibility even in the event of a catastrophic failure
•    Capacity utilization and scale
•    Extreme performance to keep up with the rate of data acquisition

eBook topics:
•    Storage Silos 101: What They Are and How To Manage Them
•    RPO/RTO and the Impact on Business Continuity
•    Data Storage 101: How To Determine Business-Critical App Performance Requirements
•    How Best To Protect and Access Your Company’s Data
•    Data Fabrics and Their Role in the Storage Hardware Refresh
•    The Rise of Hyper-Converged Infrastructure
•    A Blueprint for Smarter Storage Management – Addressing the Top 3 Challenges Impacting Your Data with Software-Defined Storage

Solving Healthcare IT Challenges with DataCore Software-Defined Storage
In this solution brief, learn how software-defined storage can help solve the most pressing challenges in Healthcare IT: (1) consolidating and managing data from disparate systems, (2) safeguarding data and applications from cyberattacks, system outages, natural disaster, and human error, (3) ensuring fast application response times and real-time data availability for life-critical applications, and (4) scaling storage as needed — easily, instantaneously, inexpensively, and non-disruptively.

Today’s hospitals are benefitting from an explosion of information technologies that is ushering in a new era of healthcare. With these advanced technologies, significantly more data is being generated, from a much wider variety of sources, and at a more frequent pace. All of this data must be stored, shared, and protected.

With data the lifeblood of healthcare, IT departments are challenged to adopt new storage and management strategies to handle the deluge of data. The skyrocketing costs to achieve continuous data availability, cope with exponential data growth, and provide timely data access rank among the most pressing challenges facing healthcare IT organizations today.

That is why a growing number of healthcare institutions are deploying DataCore’s software-defined storage platform. Only DataCore enables today’s hospitals and health systems to address mission-critical healthcare IT challenges while maximizing the availability, performance, and utilization of IT resources – allowing them to enhance patient outcomes while keeping costs low. Learn more in this solution brief.

Leveraging Hyperconvergence for Databases and Tier 1 Enterprise Applications
In this white paper, vExpert Scott Lowe discusses: (1) why organizations are implementing hyperconverged infrastructures, (2) how to move forward with hyperconverged infrastructure plans while ensuring your database applications continue to grow, (3) how to boost hyperconvergence performance, (4) what is the performance metric that rules them all, and (5) what are the business benefits of leveraging hyperconvergence.

We live in a data-driven world. The quantity of data, and the need for it, will skyrocket in the coming years, and behind it all are databases intended to help organizations manage the madness. At the same time, the infrastructure that powers these databases is undergoing a massive transformation as organizations seek to simplify complex IT systems to reduce their cost and to improve the speed of the business.

Hyperconverged infrastructure has emerged in recent years as an incredibly powerful way for organizations to rein in data center madness. Adoption of this technology has been explosive (in a good way!), and organizations are enjoying significant operational efficiency and cost benefits from such adoption. How do you forge ahead? Learn more from vExpert Scott Lowe in this white paper.

Top 10 Reasons to Adopt Software-Defined Storage
In this brief, learn about the top ten reasons why businesses are adopting software-defined storage to empower their existing and new storage investments with greater performance, availability and functionality.
DataCore delivers a software-defined architecture that empowers existing and new storage investments with greater performance, availability and functionality. But don’t take our word for it. We decided to poll our customers to learn what motivated them to adopt software-defined storage. As a result, we came up with the top 10 reasons our customers have adopted software-defined storage.
Download this white paper to learn about:
•    How software-defined storage protects investments, reduces costs, and enables greater buying power
•    How you can protect critical data, increase application performance, and ensure high-availability
•    Why 10,000 customers have chosen DataCore’s software-defined storage solution
Cloud Migration Planning Guide
Effective migration planning needs to start with evaluating the current footprint to determine how the move will affect all functional and non-functional areas of the organization. Having a framework for assessment will streamline migration efforts, whether an enterprise plans to undertake this project on its own or with the help of a cloud service provider. HyperCloud helps enterprises navigate the complex cloud ecosystem and build an assessment that is precise and accurate.

Most enterprises underestimate the planning process; they do not spend sufficient time understanding the cloud landscape and the different options available. While there are tools at hand to assist the implementation and the validation phases of the migration, planning is where all the crucial decisions need to be made.

Bad planning will lead to failed migrations. Challenges that enterprises often grapple with include:

  • Visibility and the ability to compile an inventory of their existing on-premises VMware resources
  • Cherry-picking workloads and applications that are cloud-ready
  • Right-sizing for the public cloud
  • A financial assessment of what the end state will look like

HyperCloud Analytics provides intelligence backed by 400+ million benchmarked data points to enable enterprises to make the right choices for their organization. HyperCloud’s cloud planning framework provides automation for four key stages that enterprises should consider as they plan their migration projects.

They get automated instance recommendations and accurate cost forecasts made with careful consideration of their application requirements (bandwidth, storage, security, etc.). Multiple assessments can be run across the different cloud providers to understand application costs post-migration.
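
As a simplified, hypothetical illustration of the right-sizing arithmetic such a planning platform automates (the provider names, instance names, and hourly prices below are invented, not HyperCloud data), this Python sketch picks the cheapest instance per provider that fits a workload:

    # Hypothetical on-demand prices; real planning tools use live, benchmarked
    # pricing data across providers.
    CATALOG = {  # provider -> [(instance name, vCPUs, RAM GB, $ per hour)]
        "cloud-a": [("small", 2, 8, 0.096), ("large", 8, 32, 0.384)],
        "cloud-b": [("s2", 2, 8, 0.085), ("s8", 8, 30, 0.352)],
    }

    def right_size(vcpus_needed, ram_gb_needed):
        """Print the cheapest instance per provider that fits the workload."""
        for provider, instances in CATALOG.items():
            fits = [i for i in instances
                    if i[1] >= vcpus_needed and i[2] >= ram_gb_needed]
            if not fits:
                print(f"{provider}: no suitable instance")
                continue
            name, cpu, ram, rate = min(fits, key=lambda i: i[3])
            print(f"{provider}: {name} ({cpu} vCPU / {ram} GB) "
                  f"~ ${rate * 730:.2f}/month")

    # A VMware VM sized at 4 vCPUs / 16 GB maps to the larger instances here.
    right_size(4, 16)

Repeating this across every discovered VM, with real pricing and benchmark data, is what turns an inventory into a migration cost forecast.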

Download our whitepaper to learn more about how you can build high-confidence, accurate plans with detailed cloud bills and cost forecasts, while expediting your cloud migrations.

The Case for Converged Application & Infrastructure Performance Monitoring
Read this white paper and learn how you can combine and correlate performance insights from the application (code, SQL, logs) and the underlying hardware infrastructure (server, network, virtualization, storage, etc.)

One of the toughest problems facing enterprise IT teams today is troubleshooting slow applications. When a user complains of slowness in application access, all hell breaks loose and the blame game begins: app owners, developers, and IT ops teams enter into endless war room sessions to figure out what went wrong and where. Have you been in this situation before?

Read this white paper by Larry Dragich, and learn how you can combine and correlate performance insights from the application (code, SQL, logs) and the underlying hardware infrastructure (server, network, virtualization, storage, etc.) in order to:

  • Proactively detect user experience issues before your customers are impacted
  • Trace business transactions and isolate the cause of application slowness
  • Get code-level visibility to identify inefficient application code and slow database queries
  • Automatically map application dependencies within the infrastructure to pinpoint the root cause of the problem
Achieve centralized visibility of all your applications and infrastructure and easily diagnose the root cause of performance slowdowns.
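
As a toy example of the correlation idea (illustrative only, and not eG Innovations' implementation), the Python sketch below checks which infrastructure metric tracks application response time most closely across time-aligned samples:

    import numpy as np

    # Hypothetical time-aligned samples from app and infrastructure monitors.
    response_ms = np.array([110, 115, 250, 380, 120, 118, 410, 105])
    host_cpu_pct = np.array([35, 38, 85, 93, 40, 37, 96, 33])
    storage_lat_ms = np.array([2.1, 2.0, 2.2, 2.4, 2.1, 2.3, 2.2, 2.0])

    # Pearson correlation of each infrastructure metric with response time:
    # the strongest correlate is the first place to look for a root cause.
    for name, series in [("host CPU %", host_cpu_pct),
                         ("storage latency", storage_lat_ms)]:
        r = np.corrcoef(response_ms, series)[0, 1]
        print(f"{name}: r = {r:+.2f}")

A strong correlation (here, host CPU) points the war room at the right layer first, instead of leaving each team to defend its own silo.
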
Catalogic Software-Defined Secondary Storage Appliance
The Catalogic software-defined secondary-storage appliance is architected and optimized to work seamlessly with Catalogic’s data protection product DPX, with Catalogic/Storware vProtect, and with future Catalogic products. Backup nodes are deployed on a bare metal server or as virtual appliances to create a cost-effective yet robust second-tier storage solution. The backup repository offers data reduction and replication. Backup data can be archived off to tape for long-term retention.