Virtualization Technology News and Information
White Papers Search Results
Showing 17 - 24 of 24 white papers, page 2 of 2.
Choosing the Best Approach for Monitoring Citrix User Experience
This white paper analyzes the different approaches to Citrix user experience monitoring: network-based, server-based, client-based, and simulation-based. You will understand the benefits and shortcomings of each approach and be well equipped to choose the one that best suits your requirements.

A great user experience is key to the success of any Citrix/VDI initiative. To ensure user satisfaction and productivity, Citrix administrators should monitor the user experience proactively: detect when users are likely to be experiencing slowness, pinpoint the cause of such issues, and initiate corrective actions to resolve them quickly.
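As a toy illustration of the proactive detection the paper describes, the sketch below flags sessions whose logon time exceeds a threshold. The session records and the 30-second cutoff are invented for the example; a real deployment would pull these figures from a monitoring tool.

```python
# Hypothetical sketch: flag user sessions whose logon duration exceeds a
# threshold so an administrator can investigate before users complain.
# The data and the 30-second threshold are illustrative assumptions.

def find_slow_sessions(sessions, threshold_seconds=30.0):
    """Return the user names whose logon duration exceeds the threshold."""
    return [s["user"] for s in sessions if s["logon_seconds"] > threshold_seconds]

sessions = [
    {"user": "alice", "logon_seconds": 12.4},
    {"user": "bob", "logon_seconds": 47.9},   # likely seeing slowness
    {"user": "carol", "logon_seconds": 28.1},
]
print(find_slow_sessions(sessions))  # -> ['bob']
```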

This white paper analyzes the different approaches to Citrix user experience monitoring: network-based, server-based, client-based, and simulation-based. You will understand the benefits and shortcomings of each approach and be well equipped to choose the one that best suits your requirements.
Top 10 VMware Performance Metrics That Every VMware Admin Must Monitor
How does one track the resource usage metrics for VMs and which ones are important? VMware vSphere comprises many different resource components. Knowing what these components are and how each component influences resource management decisions is key to efficiently managing VM performance. In this blog, we will discuss the top 10 metrics that every VMware administrator must continuously track.

Virtualization technology is being widely adopted thanks to the flexibility, agility, reliability and ease of administration it offers. At the same time, any IT technology – hardware or software – is only as good as its maintenance and upkeep, and VMware virtualization is no different. With physical machines, failure or poor performance of a machine affects the applications running on that machine. With virtualization, multiple virtual machines (VMs) run on the same physical host and a slowdown of the host will affect applications running on all of the VMs. Hence, performance monitoring is even more important in a virtualized infrastructure than it is in a physical infrastructure.

How does one determine what would be the right amount of resources to allocate to a VM? The answer to that question lies in tracking the resource usage of VMs over time, determining the norms of usage and then right-sizing the VMs accordingly.
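The right-sizing approach described above can be sketched in a few lines: take a high percentile of historical usage as the norm and add headroom. The 95th percentile and 20% headroom below are arbitrary example policy choices, not VMware recommendations, and the sample values are invented.

```python
# Illustrative right-sizing sketch: track CPU usage samples over time,
# take a high percentile as the usage norm, and add headroom on top.
import math

def right_size(usage_samples_mhz, percentile=0.95, headroom=0.20):
    """Suggest a CPU allocation (MHz) from historical usage samples."""
    ordered = sorted(usage_samples_mhz)
    idx = min(len(ordered) - 1, math.ceil(percentile * len(ordered)) - 1)
    norm = ordered[idx]  # usage level that covers `percentile` of samples
    return round(norm * (1.0 + headroom))

samples = [800, 950, 1010, 870, 990, 1200, 940, 1020, 980, 1100]
print(right_size(samples))  # -> 1440
```

The same pattern applies to memory, disk, and network metrics; only the sampled counter changes.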

But how does one track the resource usage metrics for VMs and which ones are important? VMware vSphere comprises many different resource components. Knowing what these components are and how each component influences resource management decisions is key to efficiently managing VM performance. In this blog, we will discuss the top 10 metrics that every VMware administrator must continuously track.

The Top 10 Metrics a Citrix Administrator Must Monitor in Their Environment
We have invited George Spiers, who is a Citrix architect with rich experience in consulting and implementing Citrix technologies for organizations in various sectors, to write this guest blog and enlighten us on the topic of Citrix monitoring. George is also a Citrix Technology Professional and has contributed immensely to the Citrix community. Read on to see what George thinks are the top 10 most important metrics that Citrix administrators must monitor.

Citrix application and desktop virtualization technologies are widely used by organizations that are embarking on digital transformation initiatives. The success of these initiatives is closely tied to ensuring a great user experience for end users as they access their virtual apps and desktops. Given the multitude of components and services that make up the Citrix delivery architecture, administrators constantly face an uphill challenge in measuring performance and knowing what key performance indicators (KPIs) to monitor.

We have invited George Spiers (https://www.jgspiers.com/), who is a Citrix architect with rich experience in consulting and implementing Citrix technologies for organizations in various sectors, to write this guest blog and enlighten us on the topic of Citrix monitoring. George is also a Citrix Technology Professional and has contributed immensely to the Citrix community. Read on to see what George thinks are the top 10 most important metrics that Citrix administrators must monitor.

The Monitoring ELI5 Guide
The goal of this book is to describe complex IT ideas simply. Very simply. So simply, in fact, a five-year-old could understand it. This book is also written in a way we hope is funny, and maybe a little irreverent—just the right mix of snark and humor and insubordination.

Not too long ago, a copy of Randall Munroe’s “Thing Explainer” made its way around the SolarWinds office—passing from engineering to marketing to development to the Head Geeks™ (yes, that’s actually a job at SolarWinds. It’s pretty cool.), and even to management.

Amid chuckles of appreciation, we recognized Munroe had struck upon a deeper truth: as IT practitioners, we’re often asked to describe complex technical ideas or solutions. However, often it’s for folks who need a simplified version. These may be people who consider themselves non-technical, but just as easily it could be for people who are technical in a different discipline. Amid frustrated eye-rolling we’re asked to “explain it to me like I’m five years old” (a phrase shortened to just “Explain Like I’m Five,” or ELI5, in forums across the internet).

There, amid the blueprints and stick figures, were explanations of the most complex concepts in hyper-simplified language that had achieved the impossible alchemy of being amusing, engaging, and accurate.

We were inspired. What you hold in your hands (or read on your screen) is the result of this inspiration.

In this book, we hope to do for IT what Randall Munroe did for rockets, microwaves, and cell phones: explain what they are, what they do, and how they work in terms anyone can understand, and in a way that may even inspire a laugh or two.

ESG Showcase - DataCore vFilO: NAS Consolidation Means Freedom from Data Silos
File and object data are valuable tools that help organizations gain market insights, improve operations, and fuel revenue growth. However, success in utilizing all of that data depends on consolidating data silos. Replacing an existing infrastructure is often expensive and impractical, but DataCore vFilO software offers an intelligent, powerful alternative: an economically appealing way to consolidate and abstract existing storage into a single, efficient, capable ecosystem.

Companies have NAS systems all over the place—hardware-centric devices that make data difficult to migrate and leverage to support the business. It’s natural that companies would desire to consolidate those systems, and vFilO is a technology that could prove to be quite useful as an assimilation tool. Best of all, there’s no need to replace everything. A business can modernize its IT environment and finally achieve a unified view, plus gain more control and efficiency via the new “data layer” sitting on top of the hardware. When those old silos finally disappear, employees will discover they can find whatever information they need by examining and searching what appears to be one big catalog for a large pool of resources.

And for IT, the capacity-balancing capability should have especially strong appeal. With it, file and object data can shuffle around and be balanced for efficiency without IT or anyone needing to deal with silos. Today, too many organizations still perform capacity balancing work manually—putting some files on a different NAS system because the first one started running out of room. It’s time for those days to end. DataCore, with its 20-year history offering SANsymphony, is a vendor in a great position to deliver this new type of solution, one that essentially virtualizes NAS and object systems and even includes keyword search capabilities to help companies use their data to become stronger, more competitive, and more profitable.
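The manual balancing chore described above (moving files because one NAS is running out of room) amounts to a placement decision that software can make automatically. The sketch below is a deliberately simple illustration of that idea, not DataCore's algorithm; the volume names and sizes are invented.

```python
# Hypothetical illustration of automated capacity balancing: place new data
# on whichever volume currently has the most free capacity, the decision
# administrators otherwise make by hand.

def pick_volume(volumes):
    """Choose the volume with the most free space (capacity - used, in GB)."""
    return max(volumes, key=lambda v: v["capacity_gb"] - v["used_gb"])["name"]

volumes = [
    {"name": "nas-01", "capacity_gb": 1000, "used_gb": 950},   # 50 GB free
    {"name": "nas-02", "capacity_gb": 2000, "used_gb": 1200},  # 800 GB free
    {"name": "nas-03", "capacity_gb": 500, "used_gb": 100},    # 400 GB free
]
print(pick_volume(volumes))  # -> nas-02
```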

ESG Showcase - Time to Better Leverage Your File and Object Data
The need to handle unstructured data according to business relevance is becoming urgent. Modern data-related demands have begun surpassing what traditional file and object storage architectures can achieve. Businesses today need an unstructured data storage environment that lets end users easily and flexibly access the benefits of the file- and object-based information they need to do their jobs.

It’s time for a “Nirvana” between file and object: the marketplace has long debated which of the two unstructured data type models is the “right one.” But data is data. If you remove the traditional constraints and challenges associated with those two technologies, it becomes possible to imagine a type of data service that supports both.

That kind of solution would give an organization the benefits of a global multi-site namespace (i.e., object style), including the ability to address a piece of data regardless of where it sits in a folder hierarchy or what device it is stored on. It would offer the rich metadata that object storage is known for, and it would deliver the performance and ease of use that file systems are known for.

DataCore says it is providing such a solution, called vFilO. To vFilO, data is just data: metadata is decoupled from but tightly aligned with the data, and data’s location is independent of data access needs or metadata location. It is a next-generation system built on a different paradigm, promising all the benefits of file and object without the limitations. It just might be the answer IT departments need. No longer required to pick between two separate systems, IT organizations can find great value in a consolidated, global, unified storage system that delivers on the business’s needs for unstructured data.
IDC: DataCore SDS: Enabling Speed of Business and Innovation with Next-Gen Building Blocks
DataCore solutions include and combine block, file, object, and HCI software offerings that enable the creation of a unified storage system, integrating functionality such as data protection, replication, and storage/device management to eliminate complexity. They also converge primary and secondary storage environments to give a unified view, predictive analytics, and actionable insights. DataCore’s newly engineered SDS architecture makes it a key player in the modern SDS solutions space.
The enterprise IT infrastructure market is undergoing a once-in-a-generation change due to ongoing digital transformation initiatives and the onslaught of applications and data. The need for speed, agility, and efficiency is pushing demand for modern datacenter technologies that can lower costs while providing new levels of scale, quality, and operational efficiency. This has driven strong demand for next-generation solutions such as software-defined storage/networking/compute, public cloud infrastructure as a service, flash-based storage systems, and hyperconverged infrastructure.

Each of these solutions offers enterprise IT departments a way to rethink how they deploy, manage, consume, and refresh IT infrastructure. These solutions represent modern infrastructure that can deliver the performance and agility required for both existing virtualized workloads and next-generation applications — applications that are cloud-native, highly dynamic, and built using containers and microservices architectures. As we enter the next phase of datacenter modernization, businesses need to leverage newer capabilities enabled by software-defined storage that help them eliminate management complexities, overcome data fragmentation and growth challenges, and become a data-driven organization to propel innovation.

As enterprises embark on their core datacenter modernization initiatives with compelling technologies, they should evaluate enterprise-grade solutions that redefine storage and data architectures designed for the demands of the digital-native economy. Digital transformation is a technology-based business strategy that is becoming increasingly imperative for success. However, unless infrastructure provisioning evolves to suit new application requirements, IT will not be viewed as a business enabler. IDC believes that those organizations that do not leverage proven technologies such as SDS to evolve their datacenters truly risk losing their competitive edge.
Systems Monitoring for Dummies
To build an effective systems monitoring solution, the true starting point is understanding the fundamental concepts. You must know what monitoring is before you can set up what monitoring does. For that reason, this book introduces you to the underpinnings of monitoring techniques, theory, and philosophy, as well as the ways in which systems monitoring is accomplished.
Systems crash unexpectedly, users make bizarre claims about how the Internet is slow, and managers request statistics that leave you scratching your head wondering how to collect them in a way that’s meaningful and doesn’t consign you to the headache of hitting Refresh and spending half the day writing down numbers on a piece of scratch paper just to get a baseline for a report. The answer to all these challenges (and many, many more) lies in systems monitoring — effectively monitoring the servers and applications in your environment by collecting statistics and/or checking for error conditions so you can act or report effectively when needed.
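The baseline idea above (collect samples automatically, then judge new readings against the norm) can be shown in a toy sketch. The sample values and the 1.5x deviation factor are arbitrary examples, not monitoring standards.

```python
# Toy sketch of baselining: average the collected samples to get a baseline,
# then report whether the latest reading deviates beyond an allowed factor.

def check_against_baseline(history, latest, factor=1.5):
    """Return (baseline, alert); alert is True if latest > baseline * factor."""
    baseline = sum(history) / len(history)
    return baseline, latest > baseline * factor

history = [40, 42, 38, 41, 39]  # e.g., CPU % samples gathered automatically
baseline, alert = check_against_baseline(history, latest=75)
print(baseline, alert)  # -> 40.0 True
```

This replaces the hit-Refresh-and-scribble routine: the samples accumulate on their own, and the report is a one-line computation.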