Virtualization Technology News and Information
White Papers Search Results
Showing 1 - 11 of 11 white papers, page 1 of 1.
A Journey Through Hybrid IT and the Cloud
How to navigate between the trenches. Hybrid IT has moved from buzzword status to reality and organizations are realizing its potential impact. Some aspects of your infrastructure may remain in a traditional setting, while another part runs on cloud infrastructure—causing great complexity. So, what does this mean for you?

“A Journey Through Hybrid IT and the Cloud” provides insight on:

  • What Hybrid IT means for the network, storage, compute, monitoring, and your staff
  • Real world examples that can occur along your journey (what did vs. didn’t work)
  • How to educate employees on Hybrid IT and the Cloud
  • Proactively searching out technical solutions to real business challenges
How to Develop a Multi-cloud Management Strategy
Increasingly, organizations are looking to move workloads into the cloud. The goal may be to leverage cloud resources for Dev/Test, or they may want to “lift and shift” an application to the cloud and run it natively. In order to enable these various cloud options, it is critical that organizations develop a multi-cloud data management strategy.

The primary goal of a multi-cloud data management strategy is to supply data, by copying or moving it, to the various multi-cloud use cases. A key enabler of this movement is data management software. In theory, data protection applications can perform both the copy and move functions. A key consideration is how the multi-cloud data management experience is unified. In most cases, data protection applications ignore the user experience of each cloud and use their own proprietary interface as the unifying entity, which increases complexity.

There are a variety of reasons organizations may want to leverage multiple clouds. The first use case is to use public cloud storage as a backup mirror to an on-premises data protection process. Using public cloud storage as a backup mirror enables the organization to automatically move data off-site. It also sets up many of the more advanced use cases.

Another use case is using the cloud for disaster recovery.

Another use case is “Lift and Shift,” which means the organization wants to run the application in the cloud natively. Initial steps in the “lift and shift” use case are similar to Dev/Test, but now the workload is storing unique data in the cloud.

Multi-cloud is a reality now for most organizations and managing the movement of data between these clouds is critical.

Evaluator Group Report on Liqid Composable Infrastructure
In this report from Eric Slack, Senior Analyst at the Evaluator Group, learn how Liqid’s software-defined platform delivers comprehensive, multi-fabric composable infrastructure for the industry’s widest array of data center resources.
Composable infrastructures direct-connect compute and storage resources dynamically, using virtualized networking techniques controlled by software. Instead of physically constructing a server with specific internal devices (storage, NICs, GPUs, or FPGAs), or cabling the appropriate device chassis to a server, composable infrastructure enables the virtual connection of these resources at the device level as needed, when needed.

Download this report from Eric Slack, Senior Analyst at the Evaluator Group to learn how Liqid’s software-defined platform delivers comprehensive, multi-fabric composable infrastructure for the industry’s widest array of data center resources.
LQD4500 Gen4x16 NVMe SSD Performance Report
The LQD4500 is the World’s Fastest SSD.

The Liqid Element LQD4500 PCIe Add-in-Card (AIC), aka the “Honey Badger” for its fierce, lightning-fast data speeds, delivers Gen-4 PCIe performance with up to 4M IOPS, 24 GB/s throughput, and ultra-low transactional latency of just 20 µs, in capacities up to 32TB.

This document contains test results and performance measurements for the Liqid LQD4500 Gen4x16 NVMe SSD. The performance report includes Sequential, Random, and Latency measurements on the LQD4500 high-performance storage device. The data was measured in a Linux OS environment, with results taken per the SNIA enterprise performance test specification standards, and reflects steady state after sufficient device preconditioning.
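As a sketch of what “steady state” means here: the SNIA Performance Test Specification (PTS) generally treats a device as at steady state when, over a measurement window, the tracked metric's range stays within 20% of its average and the drift of a linear best fit stays within 10% of that average. A minimal illustration (the window handling is simplified; this is not Liqid's actual test harness):

```python
def is_steady_state(window, range_tol=0.20, slope_tol=0.10):
    """Check a measurement window (e.g. IOPS per test round) against
    simplified SNIA PTS-style steady-state criteria."""
    avg = sum(window) / len(window)
    # Criterion 1: data excursion -- max minus min within range_tol of the average.
    if max(window) - min(window) > range_tol * avg:
        return False
    # Criterion 2: slope excursion -- total drift of the least-squares
    # best-fit line across the window within slope_tol of the average.
    n = len(window)
    x_bar = (n - 1) / 2
    slope = sum((x - x_bar) * (y - avg) for x, y in enumerate(window)) \
        / sum((x - x_bar) ** 2 for x in range(n))
    return abs(slope) * (n - 1) <= slope_tol * avg

# A flat window qualifies; a steadily climbing one (still preconditioning) does not.
print(is_steady_state([100, 101, 99, 100, 100]))  # True
print(is_steady_state([50, 70, 90, 110, 130]))    # False
```

Preconditioning matters because a fresh SSD reports inflated numbers until garbage collection and wear leveling settle into this steady regime.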

Download the full LQD4500 performance report and learn how the ultra-high performance and density Gen-4 AIC can supercharge data for your most demanding applications.
Spotcheck Inspection with Stratusphere UX
This whitepaper defines an inspection technique, and the necessary broad-stroke steps, to perform a limited health check of an existing platform or architecture. The paper provides a practical-use example that will help you execute a SpotCheck inspection using Liquidware’s Stratusphere UX.
The ability to meet user expectations and deliver the appropriate user experience in a shared host and storage infrastructure can be a complex and challenging task. Further, variability in deployment (settings and the overall supporting infrastructure) on platforms such as VMware View and Citrix XenApp and XenDesktop makes these architectures complex and difficult to troubleshoot and optimize. This whitepaper defines an inspection technique, and the necessary broad-stroke steps, to perform a limited health check of an existing platform or architecture, with a practical-use example to help you execute a SpotCheck inspection using Liquidware’s Stratusphere UX.
Optimising Performance for Office 365 and Large Profiles with ProfileUnity ProfileDisk
This whitepaper has been authored by experts at Liquidware Labs to provide guidance to adopters of desktop virtualization technologies. In this paper, two types of profile management with ProfileUnity are outlined: (1) ProfileDisk and (2) Profile Portability. The paper covers best-practice recommendations for each technology and when they can be used together. ProfileUnity is the only full-featured UEM solution on the market to feature an embedded ProfileDisk technology.

Managing Windows user profiles can be a complex and challenging process. Better profile management is usually sought by organizations looking to reduce Windows login times, accommodate applications that do not adhere to best-practice application data storage, and give users the flexibility to log in to any Windows Operating System (OS) and have their profile follow them.

Note that additional profile challenges and solutions are covered in a related ProfileUnity whitepaper entitled “User Profile and Environment Management with ProfileUnity.” To efficiently manage the complex challenges of today’s diverse Windows profile environments, Liquidware ProfileUnity exclusively features two user profile technologies that can be used together or separately depending on the use case. These include:

1. ProfileDisk™, a virtual disk-based profile that delivers the entire profile as a layer from an attached user VHD or VMDK, and

2. Profile Portability, a file- and registry-based profile solution that restores files at login, post-login, or based on environment triggers.

Liqid Launches the Industry’s Fastest, No-Compromise, One-Socket Servers, powered by Dell
Liqid has collaborated with Dell Technologies OEM | Embedded & Edge Solutions to design the fastest single-socket storage solution on the market today. Liqid’s composable Gen4 fabric technology, deployed with the AMD EPYC 7002 Series and designed on the Dell EMC PowerEdge R7515 Rack Server, delivers the ideal architecture for next-generation, AI-driven, and HPC application environments.
RANSOMWARE – How to Protect and Recover Your Data From this Growing Threat
Ransomware is a growing threat to every organization on the planet; it seems we cannot go a day without seeing another high-profile ransomware attack being detailed in mainstream media. Cyber-criminals are innovating at a phenomenal pace in this growing ‘industry’ because they have the funds to do so. In fact, many cyber-criminal groups have more funds than most enterprises.

The disruption these attacks cause to businesses is huge, with billions of dollars’ worth of revenue lost to system outages caused by ransomware attacks.

Research has shown a 41% increase in attacks since the beginning of 2021, and a staggering 93% increase year over year.

Companies are getting hit by ransomware every day, but how does it get in? Some of the most common entry methods are the following:

1. Phishing emails that launch ransomware attacks via inline links, links in attachments, or fake attachments.
2. Browsing unknown links and websites.
3. Downloading and accidentally running infected software.
4. Inserting or connecting an infected disk, disc, or drive.
5. Operating system vulnerabilities, if the OS is not patched to the latest levels.
6. Plugin vulnerabilities, if plugins are not patched to the latest levels.
7. Infrastructure vulnerabilities (network, storage, etc.), if components are not patched to the latest levels.

Why backup is breaking hyper-converged infrastructure and how to fix it
The goal of a hyperconverged infrastructure (HCI) is to simplify how to apply compute, network and storage resources to applications. Ideally, the data center’s IT needs are consolidated down to a single architecture that automatically scales as the organization needs to deploy more applications or expand existing ones. The problem is that the backup process often breaks the consolidation effort by requiring additional independent architectures to create a complete solution.

How Backup Breaks Hyperconvergence

Backup creates several separate architectures outside of the HCI architecture, each of which needs independent management. First, the backup process often requires a dedicated backup server, which runs on a stand-alone system and connects to the HCI solution to perform backups. Second, the dedicated backup server almost always has its own storage system to store data backed up from the HCI. Third, some features, like instant recovery and off-site replication, require production-quality storage to function effectively.

The answer for IT is to find a backup solution that fully integrates with the HCI solution, eliminating the need to create these additional silos.

VMware performance and storage monitoring
Scaling up your VMware monitoring is a challenging game. IT teams have to constantly diagnose performance issues, rationalize their VM sprawl, and visualize their dynamic deployments to troubleshoot, manage, and cut costs. Read this whitepaper to understand why IT teams need an observability platform like Site24x7 to monitor their VMware deployments, and learn how Site24x7's detailed graphs and dashboards go beyond the default vSphere metrics to help observe all their VMware deployments.
VMware virtualizes servers and consolidates resource allocation and usage. However, to reap the full benefits of this technology, it's important to understand the challenges surrounding VMware so you can overcome them and fully optimize resource use. For this, it is necessary to understand the architecture of VMware resources, their usage, and their performance at every step. To build a successful virtual environment, you have to first observe, understand, and evaluate the current trends and performance of your IT infrastructure, and then plan resource allocation for the future based on this analysis. This whitepaper illustrates the need to monitor different CPU, memory, disk, and network metrics and explains how these metrics provide visibility into your virtual environment. It also discusses how each of them is important for eliminating over-commitment and under-provisioning.
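As a concrete example of the metric interpretation the paper argues for: vSphere exposes CPU ready time (the cpu.ready.summation counter) as milliseconds of scheduling wait per sample interval, which only becomes meaningful once converted to a percentage of that interval (20 seconds for real-time stats). Below is a sketch of the standard conversion; how Site24x7 computes this internally is not documented here:

```python
def cpu_ready_percent(ready_ms, interval_s=20):
    """Convert a vSphere cpu.ready.summation sample (milliseconds of
    CPU-ready wait per sampling interval) to a percentage of that interval.
    interval_s defaults to the 20 s real-time sampling interval."""
    return ready_ms / (interval_s * 1000) * 100

# 1,000 ms of ready time in a 20 s interval means the vCPU spent
# 5% of the interval waiting to be scheduled.
print(cpu_ready_percent(1000))  # 5.0
```

Raw summation values can look alarmingly large on their own; normalizing them like this is what makes over-commitment visible, since sustained high ready percentages indicate more vCPUs contending than the hosts can schedule.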
GigaOm Radar for Kubernetes Data Protection
Read this report to learn how CloudCasa is positioned as a Leader and an Outperformer in the GigaOm Radar, and how it stacks up against other vendor solutions in the key criteria comparison. The Vendor Insights section summarizes the strengths of CloudCasa as a SaaS service that enables you to back up, restore, migrate, and secure Kubernetes-based applications.
Kubernetes is the industry standard for container orchestration, and it’s being used by born-in-the-cloud startups and cloud-native enterprises alike. It’s found in production on-premises, in the cloud, and at the edge for many different types of applications, including some that Kubernetes wasn’t initially built for.

Kubernetes was never really meant for stateful applications, and by default, it lacks many data management and protection features. However, many organizations are building and running their stateful applications on top of Kubernetes, indicating there’s a gap in functionality between what Kubernetes offers and what the (enterprise) market wants.

Unfortunately, existing data protection tools, mostly built for legacy technologies such as virtual machines (VMs), do not fit well into the container paradigm. However, vendors are adapting existing solutions or creating new products from scratch that are better aligned with the cloud-native and container worlds.

Many of these solutions include data protection and other data management features, such as data integrity and security, disaster recovery, and heterogeneous data migration capabilities. There’s some overlap among data storage solutions, data protection solutions, and data management solutions in the cloud-native space, with each solution offering some adjacency in terms of features.

We have seen a particular focus on ransomware and other data integrity and security features in the last year, with vendors developing protective measures against different kinds of attacks, including ransomware, abuse of misconfigured cloud resources, and more. The companion Key Criteria report dives into the capabilities we expect to see in this space—namely, cloud-native data storage, protection, security, and migration.