White Papers Search Results
Showing 17 - 30 of 30 white papers, page 2 of 2.
Remove complexity in protecting your virtual infrastructure with IBM Spectrum Protect Plus
This white paper focuses on the deployment and basic setup of IBM Spectrum Protect Plus for protecting VMware. Readers will be taken through a step-by-step explanation of what is required to install and configure IBM Spectrum Protect Plus for basic backup and recovery of VMware virtual machines (VMs). Integration with Spectrum Protect for long-term data retention is also discussed.
IBM Spectrum Protect™ Plus is a new data protection and availability solution for virtual environments that can unlock your valuable data for emerging use cases. You can deploy it in minutes and have your environment fully protected within an hour. IBM Spectrum Protect Plus can be implemented as a stand-alone solution or can integrate easily with your IBM Spectrum Protect environment to off-load copies for long-term storage and governance with scale and efficiency.

A Hybrid Approach to Big Data
We consider a scenario where a data science team needs to analyze a dataset that resides within an object storage service on a public cloud. This is a common scenario when an externally exposed service is deployed on a public cloud and telemetry data is stored for future analysis. In this scenario, we make use of a specific set of technologies that are representative of those typically adopted in big data and cloud computing deployments. We demonstrate how you can use these technologies, in a hybrid cloud deployment, to implement the scenario in practice.

All IT managers understand the critical role they play in ensuring that the enterprise computing infrastructure under their control is effective in meeting the business goals of the organization. Indeed, staying competitive in today's rapidly evolving market requires that businesses adopt a proactive mindset toward leveraging their IT investments to deliver key insights through big data initiatives.

The adoption of this mentality is often reflected in the creation of dedicated data science teams capable of driving these projects. These groups apply advanced analytical algorithms and artificial intelligence to large datasets with the goal of deriving competitive advantages for the organization.

The specific implementations vary and may impact a variety of business processes, such as product development, marketing initiatives, and sales management. However, as the adoption of these practices continues to increase, IT teams need to be prepared to support them effectively. While on the surface it may seem that big data workloads are simply another application to deploy and manage, in practice they entail multiple challenges.

In this paper, we demonstrate the ability to implement a big data use case in practice using a hybrid cloud deployment strategy. We provide an overview of our sample scenario, followed by a detailed deployment walkthrough that enables users to easily replicate the scenario in their own environments. By the end of this paper, readers should have a clear understanding of how big data use cases can be implemented using the underlying technologies covered and how the adoption of hybrid cloud computing environments enables IT leaders to successfully meet the needs of these initiatives within their organizations.
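To make the scenario concrete, here is a minimal Python sketch of the first step such a walkthrough entails: pulling a telemetry dataset out of an object storage service for analysis. It assumes an S3-compatible service accessed through boto3, with credentials in the environment; the bucket and key names are placeholders, not from the paper.

    import io

    import boto3
    import pandas as pd

    # Connect to the object storage service; for a non-AWS,
    # S3-compatible endpoint, pass endpoint_url explicitly.
    s3 = boto3.client("s3")

    # Placeholder bucket/key for wherever the telemetry lands.
    obj = s3.get_object(Bucket="telemetry-archive", Key="events/2018-01.csv")

    # Load the object into a DataFrame for exploratory analysis.
    df = pd.read_csv(io.BytesIO(obj["Body"].read()))
    print(df.describe())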

The Next Generation Clusterless Federation Design in the Cloudistics Cloud Platform
Is there another way to manage your VMs? Yes. The Cloudistics Cloud Platform introduces an innovative approach: a non-clustered (or clusterless) federated design. The clusterless federation of the Cloudistics Cloud Platform uses categories and tags to characterize compute nodes, migration zones, and storage groups (or blocks). Among the benefits: node limits are a thing of the past, locking limitations are removed, flexibility is enhanced, and ladders of latency are removed.

This paper is written in the context of modern virtualized infrastructures, such as VMware or Nutanix. In such systems, a hypervisor runs on each compute node creating multiple virtual machines (VMs) per compute node. A guest OS runs inside each VM.

Data associated with each VM is stored in one or more virtual disks (vDisks). A virtual disk appears like a local disk, but can be mapped to physical storage in many ways as we will discuss.
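As a toy illustration of one such mapping (not any particular vendor's format), consider a vDisk carved into fixed-size extents, each pointing at a physical device and offset:

    # Toy model: fixed-size vDisk extents mapped to (device, offset).
    # Purely illustrative; real systems add replication, snapshots, etc.
    EXTENT = 1 << 20  # 1 MiB extents (an arbitrary choice)

    # extent index -> (physical device, byte offset on that device)
    block_map = {
        0: ("ssd0", 0),
        1: ("ssd1", 5 * EXTENT),
        2: ("hdd3", 17 * EXTENT),
    }

    def resolve(vdisk_offset):
        """Translate a guest vDisk offset to a physical location."""
        extent, within = divmod(vdisk_offset, EXTENT)
        device, base = block_map[extent]
        return device, base + within

    print(resolve(EXTENT + 4096))  # -> ('ssd1', 5 * EXTENT + 4096)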

Virtualized infrastructures use clustering to provide for non-disruptive VM migration between compute nodes, for load balancing across the nodes, for sharing storage, and for high availability and failover. Clustering is well known and has been used to build computer systems for a long time. However, in the context of virtualized infrastructures, clustering has a number of significant limitations. Specifically, as we explain below, clusters limit scalability, decrease resource efficiency, hurt performance, reduce flexibility, and impair manageability.

In this paper, we will present an innovative alternative architecture that does not have these limitations of clustering. We call our new approach clusterless federation, and it is the approach used in the Cloudistics platform.
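To give a feel for how categories and tags can replace cluster membership, here is a hypothetical Python sketch of tag-based placement; the node names and tags are invented, and this illustrates the general idea rather than Cloudistics' implementation:

    # Each compute node advertises attributes as tags.
    nodes = {
        "node-a": {"ssd", "gpu", "zone-east"},
        "node-b": {"ssd", "zone-west"},
        "node-c": {"hdd", "zone-east"},
    }

    def eligible_nodes(vm_tags):
        """A VM may run on any node whose tags cover the VM's
        requirements -- attribute matching, not cluster membership."""
        return [n for n, tags in nodes.items() if vm_tags <= tags]

    print(eligible_nodes({"ssd", "zone-east"}))  # -> ['node-a']

Because eligibility is recomputed from tags, adding a node enlarges the pool immediately, with no fixed cluster boundary to reconfigure.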

The rest of this paper is organized as follows. In Section 2, we describe the limitations of clustering and in Section 3, we drive the point home by using the specific example of VMware; other virtualized systems are similar. In Section 4, we present the clusterless federated approach and show how it avoids the limitations of clustering. We summarize in Section 5.

Overcoming IT Monitoring Tool Sprawl with a Single-Pane-of-Glass Solution
For years, IT managers have been seeking a single-pane-of-glass tool that can help them monitor and manage all aspects of their IT infrastructure – from desktops to servers, hardware to application code, and network to storage. Read this white paper to understand how to consolidate IT performance monitoring and implement a single-pane-of-glass monitoring solution.

For years, IT managers have been seeking a single-pane-of-glass tool that can help them monitor and manage all aspects of their IT infrastructure – from desktops to servers, hardware to application code, and network to storage. But many fail to achieve this because they do not know how to implement a single-pane-of-glass solution.

Read this eG Innovations white paper to understand:

  • How organizations end up with more tools than they need
  • The challenges of dealing with multiple tools
  • Myths and popular misconceptions about a single-pane-of-glass monitoring tool
  • Best practices for achieving unified IT monitoring
  • Benefits of consolidating IT monitoring into a single-pane-of-glass solution
Converged Application and Infrastructure Performance Monitoring
In today’s distributed, heterogeneous environments, the siloed monitoring of applications and infrastructure tiers (network, storage, virtualization, database, etc.) is no longer sufficient. Read this white paper to find out how eG Innovations provides unified visibility of application performance, end-user experience, and infrastructure health—all from a single pane of glass.

As detecting and troubleshooting application performance issues increases in complexity in today’s distributed, heterogeneous environments, the siloed monitoring of applications and infrastructure tiers (network, storage, virtualization, database, etc.) is no longer sufficient. eG Enterprise delivers the first converged application and infrastructure performance monitoring solution, providing unified visibility of application performance, end-user experience, and infrastructure health—all from a single pane of glass.

Read this white paper to find out how eG Enterprise’s converged application and infrastructure monitoring capabilities help you:

  • Proactively detect user experience issues before your customers are impacted
  • Trace business transactions and isolate the cause of application slowness
  • Get code-level visibility to identify inefficient application code and slow database queries
  • Automatically map application dependencies within the infrastructure to pinpoint the root cause of the problem
Optimizing Performance for Office 365 and Large Profiles with ProfileUnity ProfileDisk
Managing Windows user profiles can be a complex and challenging process. Better profile management is usually sought by organizations looking to reduce Windows login times, accommodate applications that do not adhere to best-practice application data storage, and give users the flexibility to log in to any Windows operating system (OS) and have their profile follow them.

Managing Windows user profiles can be a complex and challenging process. Better profile management is usually sought by organizations looking to reduce Windows login times, accommodate applications that do not adhere to best-practice application data storage, and give users the flexibility to log in to any Windows operating system (OS) and have their profile follow them. Note that additional profile challenges and solutions are covered in a related white paper titled “User Profile and Environment Management with ProfileUnity.” To efficiently manage the complex challenges of today’s diverse Windows profile environments, Liquidware ProfileUnity exclusively features two user profile technologies that can be used together or separately depending on the use case.

These include:

1. ProfileDisk, a virtual disk-based profile that delivers the entire profile as a layer from an attached user VHD or VMDK, and

2. Profile Portability, a file- and registry-based profile solution that restores files at login, post-login, or based on environment triggers (a generic sketch of this pattern follows below).
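The following Python sketch shows the general shape of a file-based restore at login. It is a generic illustration of the pattern, not Liquidware's implementation; the share path and folder names are invented.

    import shutil
    from pathlib import Path

    SHARE = Path(r"\\fileserver\profiles\jdoe")  # hypothetical share
    LOCAL = Path.home()

    def restore_files(subfolder):
        """Copy one profile subtree from the share into the local profile."""
        src, dst = SHARE / subfolder, LOCAL / subfolder
        if src.exists():
            shutil.copytree(src, dst, dirs_exist_ok=True)

    # Restore high-value folders at login; less urgent ones could be
    # deferred until post-login or gated on environment triggers.
    for folder in ("AppData/Roaming", "Documents"):
        restore_files(folder)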

Essential Guide to Storage for Virtualization
If your organization is highly virtualized, or if you’re planning a virtual-first strategy for your organization, you cannot meet your objectives with conventional LUN and volume-based storage. Conventional storage architectures that were built for physical workloads decades ago are still used by both legacy providers and storage newcomers today. Instead, you need storage specifically built for virtualization and cloud.

Key Takeaways:
1) Understand the constraints of conventional, legacy storage solutions.
2) Learn how modern storage systems eliminate the disconnect between virtualized applications and physical-era storage.
3) Recognize the business benefits, including time and cost savings, that can be realized with storage that is optimized for virtualization and cloud.

Essential Guide to Storage for DevOps
Many companies today are adopting a DevOps model to accelerate development efforts and deliver new applications and services. Choosing the right enterprise cloud storage provides the foundation to support your growing DevOps practice.
Gain insight into new storage trends and innovations for DevOps


This essentials guide helps you understand the storage features that are most beneficial to your DevOps practice and provides specific guidelines on what to look for.

Key Takeaways:
1) How to create a successful DevOps strategy that considers functionality, cost, and ease of use
2) How to best manage your storage needs for DevOps, QA, and developers
3) How automation and copy data management make routine tasks simpler and faster, saving time (see the sketch below)
4) How to accelerate release cycles
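As an example of the automation in takeaway 3, the sketch below clones a production snapshot into a disposable developer volume over a storage REST API. The endpoint, payload, and field names are hypothetical stand-ins, not a specific vendor's API.

    import requests

    API = "https://storage.example.com/api/v1"    # hypothetical endpoint
    HEADERS = {"Authorization": "Bearer <token>"}

    def clone_for_dev(snapshot_id, dev_name):
        """Request a space-efficient clone and return the new volume id."""
        resp = requests.post(
            f"{API}/snapshots/{snapshot_id}/clones",
            json={"name": dev_name},
            headers=HEADERS,
            timeout=30,
        )
        resp.raise_for_status()
        return resp.json()["volume_id"]

    # Each developer or CI run gets an isolated, instantly provisioned copy.
    vol = clone_for_dev("snap-nightly", "dev-jane-feature-x")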

3 Potential Risks of an HCI Architecture
The appeal of HCI as a concept is that by bringing compute, network, and storage together in a fully tested, controlled environment, infrastructure administrators would be freed from the challenges of integrating point solutions and would have scalable, guaranteed performance at lower risk. For specific workloads such as VDI and ROBO, the theory is that customers can allocate resources quickly, scale easily, and reduce costs significantly because of the platform's integration and features. The reality is more complicated: it's difficult for HCI to deliver scalability, simplicity, and cost advantages without sacrificing performance.
Essential Guide to Storage for Virtual Desktops
Topics: Tintri, VDI
"What are the 7 criteria you need to weigh when choosing storage for VDI? What 3 lessons can you learn from VDI failures? The Essential Guide to Storage for VDI has those answers and more. We designed this practical guide to get you thinking. We peppered it with anecdotes from your peers—sharing their failures and learnings, so you’ll be prepared to succeed."
What are the 7 criteria you need to weigh when choosing storage for VDI? What 3 lessons can you learn from VDI failures? The Essential Guide to Storage for VDI has those answers and more.

We designed this practical guide to get you thinking. We peppered it with anecdotes from your peers—sharing their failures and learnings, so you’ll be prepared to succeed.
Cisco UCS B-Series Best Practice & Deployment Guide
Topics: Tintri, Cisco UCS
This guide describes the Tintri best practices for a UCS environment with VMware. Tintri recommends cabling the VMstore such that one port on each controller is configured on each UCS fabric. Fabric A is configured to preferentially carry storage traffic under normal operating conditions.
Cisco UCS B-Series Blade Servers are a popular server choice. A typical UCS configuration includes a Fabric Interconnect (FI) with two separate fabrics, and there are some important considerations for configuring Tintri VMstore storage systems in the UCS environment.


Distributed virtual switching (dvSwitch) is described as the best practice for switching in the VMware environment. However, a separate appendix describes the configuration of vSphere standard switching.

Additional appendices describe design considerations for LACP, native VLAN use, and jumbo frames as well as configuration for the Cisco Nexus 5K switch.
HyperCore-Direct: NVMe Optimized Hyperconvergence
Scale Computing’s award-winning HC3 solution has long been a leader in the hyperconverged infrastructure space. Now targeting even higher-performing workloads, Scale Computing is announcing HyperCore-Direct, the first hyperconverged solution to provide software-defined block storage utilizing NVMe over fabrics at near bare-metal performance.
Scale Computing’s award-winning HC3 solution has long been a leader in the hyperconverged infrastructure space. Now targeting even higher-performing workloads, Scale Computing is announcing HyperCore-Direct, the first hyperconverged solution to provide software-defined block storage utilizing NVMe over fabrics at near bare-metal performance. In this white paper, we showcase the performance of a Scale HyperCore-Direct cluster equipped with Intel P3700 NVMe drives, as well as a single-node HyperCore-Direct system with Intel Optane P4800X NVMe drives. Various workloads were tested using off-the-shelf Linux and Windows virtual machine instances. The results show that HyperCore-Direct’s new NVMe-optimized version of SCRIBE, the same software-defined storage powering every HC3 cluster in production today, is able to offer the lowest latency per IO delivered to virtual machines.
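For context on what "latency per IO" measurements involve, here is a minimal Python sketch of a 4 KiB random-read latency probe using direct I/O (Linux-only), so the page cache does not mask the storage path. The device path, block size, and IO count are assumptions for illustration; published results like those above typically come from purpose-built tools such as fio.

    import mmap
    import os
    import random
    import statistics
    import time

    PATH, BLOCK, IOS = "/dev/nvme0n1", 4096, 1000  # assumed test target

    fd = os.open(PATH, os.O_RDONLY | os.O_DIRECT)
    buf = mmap.mmap(-1, BLOCK)  # page-aligned buffer, required by O_DIRECT
    blocks = os.lseek(fd, 0, os.SEEK_END) // BLOCK

    lat = []
    for _ in range(IOS):
        off = random.randrange(blocks) * BLOCK  # block-aligned offset
        t0 = time.perf_counter()
        os.preadv(fd, [buf], off)               # one synchronous 4 KiB read
        lat.append((time.perf_counter() - t0) * 1e6)

    print(f"median {statistics.median(lat):.1f} us, "
          f"p99 {sorted(lat)[int(IOS * 0.99)]:.1f} us")
    os.close(fd)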
HC3, SCRIBE and HyperCore Theory of Operations
This document is intended to describe the technology, concepts and operating theory behind the Scale Computing HC3 System (Hyper-converged Compute Cluster) and the HyperCore OS that powers it, including the SCRIBE (Scale Computing Reliable Independent Block Engine) storage layer.
3 Steps to Dockerize and Migrate your Java and .NET Applications
Migrating existing applications to the cloud can be achieved by completing 3 simple steps. Download this guide for detailed instructions on containerizing and migrating your Java & .NET applications to any cloud.
For many IT organizations that inherit legacy applications without any documentation or much knowledge about the application dependencies, the task of modernizing these applications while avoiding application code changes becomes overwhelming. HyperForm drives business innovation by modernizing existing legacy applications without making a single code change, using the existing skill sets within an organization.

The on-the-fly containerization capabilities allow users to “lift and shift” existing Java and .NET applications to containers while taking care of the complex application dependencies, automatic service discovery, auto-scaling, and integration with any external service (e.g. storage, networking, logging, etc.). HyperForm transforms non-cloud-native legacy applications into completely portable applications that can take advantage of auto-scaling and deployment agility on any cloud.
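To make “lift and shift” concrete, here is a minimal Python sketch that wraps an existing Java web application in a container with no code changes. The WAR file name and image tag are assumptions, and this bare-bones wrapper illustrates the idea only; HyperForm's containerization additionally handles the dependencies, service discovery, and scaling described above.

    import subprocess
    from pathlib import Path

    # Wrap a legacy WAR in a stock servlet-container image.
    dockerfile = """\
    FROM tomcat:9
    COPY legacy-app.war /usr/local/tomcat/webapps/ROOT.war
    EXPOSE 8080
    """

    Path("Dockerfile").write_text(dockerfile)
    subprocess.run(
        ["docker", "build", "-t", "legacy-app:containerized", "."],
        check=True,
    )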
