Virtualization Technology News and Information
White Papers Search Results
The Expert Guide to VMware Data Protection
Virtualization is a general term for simulating a physical entity in software. Many different forms of virtualization may be found in a data center, including server, network and storage virtualization. Server virtualization in particular comes with many unique terms and concepts that make up the technology.
Virtualization is the most disruptive technology of the decade. Virtualization-enabled data protection and disaster recovery is especially disruptive because it allows IT to do things dramatically better at a fraction of what they would cost in a physical data center.

Chapter 1: An Introduction to VMware Virtualization

Chapter 2: Backup and Recovery Methodologies

Chapter 3: Data Recovery in Virtual Environments

Chapter 4: Choosing the Right Backup Solution for VMware
How to avoid VM sprawl and improve resource utilization in VMware and Veeam backup infrastructures
You're facing VM sprawl if you're experiencing an uncontrollable increase in unused and unneeded objects in your virtual VMware environment. VM sprawl is common in virtual infrastructures because they expand much faster than physical ones, which can make management a challenge. The growing number of virtualized workloads and applications generates “virtual junk” that causes VM sprawl. Eventually, it can put you at risk of running out of resources.


Getting virtual sprawl under control will help you reallocate and better provision your existing storage, CPU and memory resources between critical production workloads and high-performance, virtualized applications. With proper resource management, you can save money on extra hardware.

This white paper examines how you can avoid potential VM sprawl risks and automate proactive monitoring by using Veeam ONE, a part of Veeam Availability Suite. It arms you with a list of VM sprawl indicators and explains how you can set up and configure a handy report kit to detect and eliminate VM sprawl threats in your VMware environment.

Read this FREE white paper and learn how to:

  • Identify “zombies”
  • Clean up garbage and orphaned snapshots
  • Establish a transparent system to get sprawl under control
  • And more!
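As a rough sketch of the kind of checks such a report kit automates, the snippet below flags likely "zombie" VMs and stale snapshots from inventory data. The thresholds, field names, and sample inventory are illustrative assumptions, not Veeam ONE's actual data model; in practice the inventory would come from the vSphere API or a monitoring tool's reports.

```python
from datetime import datetime, timedelta

# Hypothetical inventory snapshot; real data would come from an API such
# as vSphere's (e.g. via pyVmomi) or from a monitoring tool's reports.
VMS = [
    {"name": "web01", "powered_on": True, "avg_cpu_pct": 35.0,
     "last_powered_off": None, "snapshots": []},
    {"name": "test-old", "powered_on": False, "avg_cpu_pct": 0.0,
     "last_powered_off": datetime(2017, 1, 10),
     "snapshots": [{"name": "pre-upgrade", "created": datetime(2016, 12, 1)}]},
    {"name": "idle-db", "powered_on": True, "avg_cpu_pct": 0.4,
     "last_powered_off": None,
     "snapshots": [{"name": "weekly", "created": datetime(2018, 2, 1)}]},
]

def find_sprawl(vms, now, idle_cpu_pct=1.0, max_off_days=30, max_snap_days=14):
    """Flag likely zombie VMs and stale (potentially orphaned) snapshots."""
    zombies, stale_snaps = [], []
    for vm in vms:
        # Powered off for a long time -> probably forgotten.
        off_too_long = (not vm["powered_on"]
                        and vm["last_powered_off"] is not None
                        and now - vm["last_powered_off"] > timedelta(days=max_off_days))
        # Powered on but essentially no CPU activity -> probably unused.
        idle = vm["powered_on"] and vm["avg_cpu_pct"] < idle_cpu_pct
        if off_too_long or idle:
            zombies.append(vm["name"])
        for snap in vm["snapshots"]:
            if now - snap["created"] > timedelta(days=max_snap_days):
                stale_snaps.append((vm["name"], snap["name"]))
    return zombies, stale_snaps

zombies, stale = find_sprawl(VMS, now=datetime(2018, 3, 1))
print(zombies)  # ['test-old', 'idle-db']
print(stale)    # [('test-old', 'pre-upgrade'), ('idle-db', 'weekly')]
```

The same thresholds (idle CPU percentage, days powered off, snapshot age) are exactly the knobs a sprawl report in a monitoring product lets you tune.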
How VembuHIVE, a Backup Repository as a File System, Is Changing the Dynamics of Data Protection
Microsoft applications are a critical segment of the core systems managed and run by most IT organizations. Backing them up is not enough: the effect of the backup process on your systems and storage determines its efficiency. This whitepaper explains how VembuHIVE transforms the way backups are performed to achieve disaster-readiness.
Microsoft applications such as SQL Server, Exchange and Active Directory are instrumental in running some of the most mission-critical processes of an IT setup. While many solutions address their data protection concerns, efficient recovery from a storage medium has always been a pivotal issue. Read this white paper, which includes performance and resource utilization reports, to learn how Vembu BDR Suite with its in-house proprietary file system, VembuHIVE, reduces the backup footprint on storage repositories, enabling quick recovery with minimal RTOs.
Storage Playbook: Essential Enterprise Storage Concepts
Storage can seem like a confusing topic for the uninitiated, but a little bit of knowledge can go a long way. It is important to understand the basic concepts of storage technologies, performance, and configuration before diving into more advanced practices.

In this e-book, we’ll cover storage basics, storage performance and capacity, forecasting and usage and storage best practices.
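Capacity forecasting, one of the e-book's topics, often starts with a simple linear projection of growth against remaining headroom. The sketch below is a minimal illustration of that idea, with made-up numbers; real forecasts would use trended measurements from the storage array.

```python
def days_until_full(capacity_tb, used_tb, daily_growth_tb):
    """Naive linear forecast: days of headroom left at the current growth rate."""
    if daily_growth_tb <= 0:
        return float("inf")  # flat or shrinking usage never fills the array
    return (capacity_tb - used_tb) / daily_growth_tb

# Example: a 100 TB array, 68 TB used, growing ~0.2 TB/day.
print(days_until_full(100, 68, 0.2))  # 160.0 days of headroom
```

Even this crude model answers the key planning question (when do we buy more?); more sophisticated forecasts simply replace the constant growth rate with a fitted trend.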

Overcoming IT Monitoring Tool Sprawl with a Single-Pane-of-Glass Solution
For years, IT managers have been seeking a single-pane-of-glass tool that can help them monitor and manage all aspects of their IT infrastructure – from desktops to servers, hardware to application code, and network to storage. Read this white paper to understand how to consolidate IT performance monitoring and implement a single-pane-of-glass monitoring solution.

Many organizations fail to achieve this unified view because they do not know how to implement a single-pane-of-glass solution.

Read this eG Innovations white paper, and understand:

  • How an organization ends up with more tools than it needs
  • The challenges of dealing with multiple tools
  • Myths and popular misconceptions about a single-pane-of-glass monitoring tool
  • Best practices for achieving unified IT monitoring
  • Benefits of consolidating monitoring into a single-pane-of-glass monitoring solution
Optimizing Performance for Office 365 and Large Profiles with ProfileUnity ProfileDisk
Managing Windows user profiles can be a complex and challenging process. Better profile management is usually sought by organizations looking to reduce Windows login times, accommodate applications that do not adhere to best practice application data storage, and to give users the flexibility to login to any Windows Operating System (OS) and have their profile follow them.

Additional profile challenges and solutions are covered in a related ProfileUnity whitepaper entitled “User Profile and Environment Management with ProfileUnity.” To efficiently manage the complex challenges of today’s diverse Windows profile environments, Liquidware ProfileUnity exclusively features two user profile technologies that can be used together or separately depending on the use case.

These include:

1. ProfileDisk, a virtual disk based profile that delivers the entire profile as a layer from an attached user VHD or VMDK, and

2. Profile Portability, a file and registry based profile solution that restores files at login, post login, or based on environment triggers.

HyperCore-Direct: NVMe Optimized Hyperconvergence
Scale Computing’s award winning HC3 solution has long been a leader in the hyperconverged infrastructure space. Now targeting even higher performing workloads, Scale Computing is announcing HyperCore-Direct, the first hyperconverged solution to provide software defined block storage utilizing NVMe over fabrics at near bare-metal performance.
In this whitepaper, we showcase the performance of a Scale HyperCore-Direct cluster equipped with Intel P3700 NVMe drives, as well as a single-node HyperCore-Direct system with Intel Optane P4800X NVMe drives. Various workloads were tested using off-the-shelf Linux and Windows virtual machine instances. The results show that HyperCore-Direct’s new NVMe-optimized version of SCRIBE, the same software-defined storage powering every HC3 cluster in production today, offers the lowest latency per IO delivered to virtual machines.
HC3, SCRIBE and HyperCore Theory of Operations
This document is intended to describe the technology, concepts and operating theory behind the Scale Computing HC3 System (Hyper-converged Compute Cluster) and the HyperCore OS that powers it, including the SCRIBE (Scale Computing Reliable Independent Block Engine) storage layer.
A Journey Through Hybrid IT and the Cloud
How to navigate between the trenches. Hybrid IT has moved from buzzword status to reality and organizations are realizing its potential impact. Some aspects of your infrastructure may remain in a traditional setting, while another part runs on cloud infrastructure—causing great complexity. So, what does this mean for you?

“A Journey Through Hybrid IT and the Cloud” provides insight on:

  • What Hybrid IT means for the network, storage, compute, monitoring and your staff
  • Real world examples that can occur along your journey (what did vs. didn’t work)
  • How to educate employees on Hybrid IT and the Cloud
  • Proactively searching out technical solutions to real business challenges
Salem State University Teams with IGEL, Citrix and Nutanix to Deliver Digital Workspaces
Limited IT resources drive the need for IGEL’s robust management features; the maturity of Citrix virtual desktop infrastructure and the simplicity and time-to-value of Nutanix’s hyperconverged infrastructure offering make the combined solution a no-brainer for the university.
When Jake Snyder joined Salem State University’s IT department, the public university located just outside of Boston, Mass., was using only traditional PCs. “95% of the PCs were still on Windows 7 and there was no clear migration path in sight to Windows 10,” recalls Snyder. “Additionally, all updates to these aging desktop computers were being done locally in the university’s computer labs. Management was difficult and time consuming.”

The university realized something had to change, and that was one of the reasons why they brought Snyder on board – to upgrade its end-user computing environment to VDI. Salem State was looking for the security and manageability that a VDI solution could provide. “One of the biggest challenges that the university had been experiencing was managing desktop imaging and applications,” said Snyder. “They wanted to be able to keep their student, faculty and staff end-points up to date and secure, while at the same time easing the troubleshooting process. They weren’t able to do any of this with their current set-up.”

Snyder first saw a demo of the IGEL solution at the final BriForum event in Boston in 2016. “It was great to see IGEL at that event as I had heard a lot of good buzz around their products and solutions, especially from other colleagues in the industry,” said Snyder. “After BriForum, I went back and ordered some evaluation units to test out within our EUC environment.”

What Snyder quickly discovered during the evaluation period was that the IGEL Universal Management Suite (UMS) was not just plug-and-play, like he had expected. “The IGEL UMS was a very customizable solution, and I liked the robust interface,” continued Snyder. “Despite competitive solutions, it was clear from the start that the IGEL devices were going to be easier to use and cheaper in the long run. IGEL really was a ‘no-brainer’ when you consider the management capabilities and five-year warranty they offer on their hardware.”

Salem State University currently has 400 IGEL Universal Desktop software-defined thin clients deployed on its campus including 360 UD3 thin clients, which are the workhorse of the IGEL portfolio, and 40 UD6 thin clients, which support high-end graphics capabilities for multimedia users. Salem State has also purchased IGEL UD Pocket micro thin clients which they are now testing.
IGEL Delivers Manageability, Scalability and Security for The Auto Club Group
The Auto Club Group realizes cost-savings; increased productivity; and improved time-to-value with IGEL’s software-defined endpoint management solutions.
In 2016, The Auto Club Group was starting to implement a virtual desktop infrastructure (VDI) solution leveraging Citrix XenDesktop on both its static endpoints and laptop computers used in the field by its insurance agents, adjusters and other remote employees. “We were having a difficult time identifying a solution that would enable us to simplify the management of our laptop computers, in particular, while providing us with the flexibility, scalability and security we wanted from an endpoint management perspective,” said James McVicar, IT Architect, The Auto Club Group.

Some of the mobility management solutions The Auto Club Group had been evaluating relied on Windows CE, a platform that is nearing end-of-life. “We didn’t want to deal with the patches and other management headaches related to a Windows-based solution, so this was not an attractive option,” said McVicar.

In the search for a mobile endpoint management solution, McVicar and his team came across IGEL and were quickly impressed. McVicar said, “What first drew our attention to IGEL was the ability to leverage the IGEL UDC to quickly and easily convert our existing laptop computers into an IGEL OS-powered desktop computing solution, that we could then manage via the IGEL UMS. Because IGEL is Linux-based, we found that it offered both the functionality and stability we needed within our enterprise.”

As The Auto Club Group continues to expand its operations, it will be rolling out additional IGEL OS-powered endpoints to its remote workers, and expects its deployment to exceed 400 endpoints once the project is complete.

The Auto Club Group is also looking at possibly leveraging the IGEL Cloud Gateway, which will help bring more performance and functionality to those working outside of the corporate WAN.
Smarter Storage Management Series
Organizations are creating huge amounts of data, but how and where are they going to store it all? Answers can be found in this eBook, which includes a series of thought-provoking articles covering critical issues including storage silos, data protection, application performance bottlenecks, data fabrics, and the role of hyperconverged infrastructure.

In addition to the thought-provoking articles about storage silos, data protection, application performance bottlenecks, data fabrics, and the role of hyperconverged infrastructure, the eBook concludes with an original thought leadership section, A Blueprint for Smarter Storage Management: Addressing the Top 3 Challenges Impacting Your Data with Software-Defined Storage, including:
•    Maintaining data accessibility even in the event of a catastrophic failure
•    Capacity utilization and scale
•    Extreme performance to keep up with the rate of data acquisition

eBook topics:
•    Storage Silos 101: What They Are and How To Manage Them
•    RPO/RTO and the Impact on Business Continuity
•    Data Storage 101: How To Determine Business-Critical App Performance Requirements
•    How Best To Protect and Access Your Company’s Data
•    Data Fabrics and Their Role in the Storage Hardware Refresh
•    The Rise of Hyper-Converged Infrastructure
•    A Blueprint for Smarter Storage Management – Addressing the Top 3 Challenges Impacting Your Data with Software-Defined Storage

Solving Healthcare IT Challenges with DataCore Software-Defined Storage
In this solution brief, learn how software-defined storage can help solve the most pressing challenges in Healthcare IT: (1) consolidating and managing data from disparate systems, (2) safeguarding data and applications from cyberattacks, system outages, natural disaster, and human error, (3) ensuring fast application response times and real-time data availability for life-critical applications, and (4) scaling storage as needed — easily, instantaneously, inexpensively, and non-disruptively.

Today’s hospitals are benefitting from an explosion of information technologies that is ushering in a new era of healthcare. With these advanced technologies, significantly more data is being generated, from a much wider variety of sources, and at a more frequent pace. All of this data must be stored, shared, and protected.

With data the lifeblood of healthcare, IT departments are challenged to adopt new storage and management strategies to handle the deluge of data. The skyrocketing costs to achieve continuous data availability, cope with exponential data growth, and provide timely data access rank among the most pressing challenges facing healthcare IT organizations today.

That is why a growing number of healthcare institutions are deploying DataCore’s software-defined storage platform. Only DataCore enables today’s hospitals and health systems to address mission-critical healthcare IT challenges while maximizing the availability, performance, and utilization of IT resources – allowing them to enhance patient outcomes while keeping costs low. Learn more in this solution brief.

Leveraging Hyperconvergence for Databases and Tier 1 Enterprise Applications
In this white paper, vExpert Scott Lowe, discusses: (1) why organizations are implementing hyperconverged infrastructures, (2) how to move forward with hyperconverged infrastructure plans while ensuring your database applications continue to grow, (3) how to boost hyperconvergence performance, (4) what is the performance metric that rules them all, and (5) what are the business benefits of leveraging hyperconvergence.

We live in a data-driven world. The quantity of and need for increasing amounts of data will skyrocket in the coming years, and behind it all are databases intended to help organizations manage the madness. At the same time, the infrastructure that powers these databases is undergoing a massive transformation as organizations seek to simplify complex IT systems to reduce the cost of such systems and to improve the speed of the business.

Hyperconverged infrastructure has emerged in recent years as an incredibly powerful way for organizations to rein in data center madness. Adoption of this technology has been explosive—in a good way!—and organizations are enjoying significant operational efficiency and cost benefits from such adoption. How do you forge ahead? Learn more from vExpert Scott Lowe in this white paper.

Top 10 Reasons to Adopt Software-Defined Storage
In this brief, learn about the top ten reasons why businesses are adopting software-defined storage to empower their existing and new storage investments with greater performance, availability and functionality.
DataCore delivers a software-defined architecture that empowers existing and new storage investments with greater performance, availability and functionality. But don’t take our word for it. We decided to poll our customers to learn what motivated them to adopt software-defined storage. As a result, we came up with the top 10 reasons our customers have adopted software-defined storage.
Download this white paper to learn about:
•    How software-defined storage protects investments, reduces costs, and enables greater buying power
•    How you can protect critical data, increase application performance, and ensure high-availability
•    Why 10,000 customers have chosen DataCore’s software-defined storage solution
Cloud Migration Planning Guide
Effective migration planning needs to start with evaluating the current footprint to determine how the move will affect all functional and non-functional areas of the organization. Having a framework for assessment will streamline migration efforts, whether an enterprise plans to undertake this project on its own or with the help of a cloud service provider. HyperCloud helps enterprises navigate the complex cloud ecosystem and build an assessment that is precise and accurate.

Most enterprises underestimate the planning process; they do not spend sufficient time understanding the cloud landscape and the different options available. While there are tools at hand to assist the implementation and the validation phases of the migration, planning is where all the crucial decisions need to be made.

Bad planning will lead to failed migrations. Challenges that enterprises often grapple with include:

  • Visibility and the ability to compile an inventory of their existing on-premises VMware resources
  • Cherry-picking workloads and applications that are cloud-ready
  • Right-sizing for the public cloud
  • A financial assessment of what the end state will look like
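The right-sizing step above boils down to matching observed peak usage (plus headroom) against a provider's instance catalog and picking the cheapest fit. The sketch below illustrates that logic; the catalog names and prices are invented for the example, not any provider's real offerings.

```python
# Hypothetical instance catalog with on-demand monthly prices; real numbers
# would come from each cloud provider's pricing data.
CATALOG = [
    {"name": "small",  "vcpus": 2,  "mem_gb": 8,  "usd_month": 70},
    {"name": "medium", "vcpus": 4,  "mem_gb": 16, "usd_month": 140},
    {"name": "large",  "vcpus": 8,  "mem_gb": 32, "usd_month": 280},
    {"name": "xlarge", "vcpus": 16, "mem_gb": 64, "usd_month": 560},
]

def right_size(peak_vcpus, peak_mem_gb, headroom=1.3):
    """Pick the cheapest instance covering observed peak usage plus headroom."""
    need_cpu = peak_vcpus * headroom
    need_mem = peak_mem_gb * headroom
    candidates = [i for i in CATALOG
                  if i["vcpus"] >= need_cpu and i["mem_gb"] >= need_mem]
    # None means no catalog entry fits; the workload may need re-architecting.
    return min(candidates, key=lambda i: i["usd_month"]) if candidates else None

# A VM peaking at 3 vCPUs / 10 GB RAM lands on the "medium" shape.
print(right_size(3, 10)["name"])  # medium
```

Running this per workload, per provider, is essentially what an automated assessment does at scale, with the added complexity of bandwidth, storage, and security requirements factored in.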

HyperCloud Analytics provides intelligence backed by 400+ million benchmarked data points to enable enterprises to make the right choices for their organization. HyperCloud’s cloud planning framework provides automation for four key stages that enterprises should consider as they plan their migration projects.

They get automated instance recommendations and accurate cost forecasts made with careful consideration of their application requirements (bandwidth, storage, security, etc.). Multiple assessments can be run across different cloud providers to understand application costs post-migration.

Download our whitepaper to learn more about how you can build high-confidence, accurate plans with detailed cloud bills and cost forecasts, while expediting your cloud migrations.