White Papers Search Results
Showing 33 - 48 of 50 white papers, page 3 of 4.
How to Get the Most Out of Windows Admin Center
Windows Admin Center is the future of Windows and Windows Server management. Are you using it to its full potential? In this free eBook, Microsoft Cloud and Datacenter Management MVP, Eric Siron, has put together a 70+ page guide on what Windows Admin Center brings to the table, how to get started, and how to squeeze as much value out of this incredible free management tool from Microsoft. This eBook covers:

•    Installation
•    Getting Started
•    Full UI Analysis
•    Security
•    Managing Extensions

Each version of Windows and Windows Server showcases new technologies. The advent of PowerShell marked a substantial step forward in managing those features. However, the built-in graphical Windows management tools have largely stagnated; the same basic Microsoft Management Console (MMC) interfaces have remained since Windows Server 2000. Over the years, Microsoft tried multiple overhauls of the built-in Server Manager console, but none gained much traction. Until Windows Admin Center.

WHAT IS WINDOWS ADMIN CENTER?
Windows Admin Center (WAC) represents a modern turn in Windows and Windows Server system management. From its home page, you establish a list of the networked Windows and Windows Server computers to manage. From there, you can connect to an individual system to control components such as hardware drivers. You can also use it to manage Windows roles, such as Hyper-V.

On the front end, Windows Admin Center is presented through a sleek HTML5 web interface. On the back end, it leverages PowerShell extensively to control the systems within your network. The entire package runs on a single system, so you don’t need a complicated infrastructure to support it. In fact, you can run it locally on your Windows 10 workstation if you want. If you require more resiliency, you can run Windows Admin Center as a role on a Microsoft Failover Cluster.

WHY WOULD I USE WINDOWS ADMIN CENTER?
In the modern era of Windows management, we have shifted to a greater reliance on industrial-strength tools like PowerShell and Desired State Configuration. However, we still have servers that require individualized attention and resources that are used only infrequently. WAC gives you a one-stop hub for dropping in on any system at any time and working with almost any of its facets.

ABOUT THIS EBOOK
This eBook has been written by Microsoft Cloud & Datacenter Management MVP Eric Siron. Eric has worked in IT since 1998, designing, deploying, and maintaining server, desktop, network, and storage systems. He has provided all levels of support for businesses ranging from single-user operations to enterprises with thousands of seats. He has achieved numerous Microsoft certifications and was a Microsoft Certified Trainer for four years. Eric is also a seasoned technology blogger and has amassed a significant following through his top-class work on the Altaro Hyper-V Dojo.

Digital Workspace Disasters and How to Beat Them
Desktop DR - the recovery of individual desktop systems from a disaster or system failure - has long been a challenge. Part of the problem is that there are so many desktops, storing so much valuable data and - unlike servers - with so many different end-user configurations and too little central control. Imaging every desktop would be a huge task, generating huge amounts of backup data. And even if those problems could be overcome with the use of software agents, plus deduplication to take common files such as the operating system out of the backup window, restoring damaged systems could still mean days of software reinstallation and reconfiguration.

Yet at the same time, most organizations have a strategic need to deploy and provision new desktop systems, and to be able to migrate existing ones to new platforms. Again, these are tasks that benefit from reducing both duplication and the need to reconfigure the resulting installation. The parallels with desktop DR should be clear.

We often write about the importance of an integrated approach to investing in backup and recovery. By bringing together business needs that have a shared technical foundation, we can, for example, gain incremental benefits from backup, such as improved data visibility and governance, or we can gain DR capabilities from an investment in systems and data management. So it is with desktop DR and user workspace management (UWM). Both of these are growing in importance as organizations’ desktop estates grow more complex. Not only are we adding more ways to work online, such as virtual PCs, more applications, and more layers of middleware, but the resulting systems face more risks and threats and are subject to higher regulatory and legal requirements.

Increasingly then, both desktop DR and UWM will be not just valuable, but essential. Getting one as an incremental bonus from the other therefore not only strengthens the business case for that investment proposal, it is a win-win scenario in its own right.
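
To make the deduplication idea concrete, here is a minimal Python sketch (ours, not the paper's) of content-hash deduplication: a file whose hash has already been seen, such as a common operating-system binary, is skipped rather than backed up again. All paths and names are illustrative.

    import hashlib
    import os

    def file_digest(path, chunk_size=1 << 20):
        """Return the SHA-256 digest of a file, read in 1 MB chunks."""
        h = hashlib.sha256()
        with open(path, "rb") as f:
            for chunk in iter(lambda: f.read(chunk_size), b""):
                h.update(chunk)
        return h.hexdigest()

    def plan_backup(root, seen_hashes):
        """Yield only files whose content has not been backed up before."""
        for dirpath, _, filenames in os.walk(root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                digest = file_digest(path)
                if digest not in seen_hashes:    # unique content: back it up
                    seen_hashes.add(digest)
                    yield path
                # duplicate content (e.g., shared OS files) is skipped

    seen = set()   # in practice, persisted and shared across all desktops
    for path in plan_backup("/home/alice", seen):
        print("back up:", path)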
ESG Report: Verifying Network Intent with Forward Enterprise
This ESG Technical Review documents hands-on validation of Forward Enterprise, a solution developed by Forward Networks to help organizations save time and resources when verifying that their IT networks can deliver application traffic consistently in line with network and security policies. The review examines how Forward Enterprise can reduce network downtime, ensure compliance with policies, and minimize the adverse impact of configuration changes on network behavior.
ESG research recently found that 66% of organizations view their IT environments as more complex or significantly more complex than they were two years ago. That complexity will most likely increase, since 46% of organizations anticipate that their network infrastructure spending will exceed 2018 levels as they upgrade and expand their networks.

Large enterprise and service provider networks consist of multiple device types—routers, switches, firewalls, and load balancers—with proprietary operating systems (OS) and different configuration rules. As organizations support more applications and users, their networks will grow and become more complex, making it more difficult to verify and manage correctly implemented policies across the entire network. Organizations have also begun to integrate public cloud services with their on-premises networks, adding further network complexity to manage end-to-end policies.

With increasing network complexity, organizations cannot easily confirm that their networks are operating as intended when they implement network and security policies. Moreover, when considering a fix to a service-impacting issue or a network update, determining how it may negatively impact other applications or introduce service-affecting issues becomes difficult. To assess adherence to policies or the impact of any network change, organizations have typically relied on disparate tools and materials—network topology diagrams, device inventories, vendor-dependent management systems, command-line interface (CLI) commands, and utilities such as “ping” and “traceroute.” In combination, these tools cannot efficiently provide a reliable and holistic assessment of network behavior.

Organizations need a vendor-agnostic solution that enables network operations to automate the verification of network implementations against intended policies and requirements, regardless of the number and types of devices, operating systems, traffic rules, and policies that exist. The solution must represent the topology of the entire network or subsets of devices (e.g., in a region) quickly and efficiently. It should verify network implementations from prior points in time, as well as proposed network changes prior to implementation. Finally, the solution must also enable organizations to quickly detect issues that affect application delivery or violate compliance requirements.
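
As a rough illustration of intent verification (a simplified stand-in, not Forward Enterprise's actual engine), policies can be expressed as reachability assertions over a model of the topology. A minimal Python sketch using the networkx library, with invented device names:

    import networkx as nx

    # Model the network as a directed graph of devices and links.
    net = nx.DiGraph()
    net.add_edges_from([
        ("web-lb", "fw-dmz"), ("fw-dmz", "app-sw"),
        ("app-sw", "db-sw"), ("guest-vlan", "internet-gw"),
    ])

    # Intent 1: web traffic must be able to reach the database tier.
    assert nx.has_path(net, "web-lb", "db-sw"), "web tier cannot reach DB"

    # Intent 2: the guest VLAN must never reach the database tier.
    assert not nx.has_path(net, "guest-vlan", "db-sw"), "guest VLAN reaches DB"

    print("all intent checks passed")

Running such checks after every proposed configuration change, against both current and prior snapshots of the topology, is the essence of the verification workflow described above.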
Data Protection Overview and Best Practices

This white paper works through data protection processes and best practices using the Tintri VMstore. Tintri technology is differentiated by its level of abstraction—the ability to take every action on individual virtual machines. In this paper, you’ll:

  • Learn how per-VM abstraction greatly increases the precision and efficiency of snapshots for data protection
  • Explore the ability to move between recovery points
  • Analyze the behavior of individual virtual machines
  • Predict the need for additional capacity and performance for data protection

If you’re focused on building a successful data protection solution, this document targets key best practices and known challenges. Hypervisor administrators and staff members associated with architecting, deploying and administering a data protection and disaster recovery solution will want to dig into this document to understand how Tintri can save them a great deal of their management effort and greatly reduce operating expense.

Frost & Sullivan Best Practices in Storage Management 2019
This analyst report examines the differentiation of Tintri Global Center in storage management, which earned Tintri this award for product leadership in 2019. Businesses need planning tools to help them handle data growth, new workloads, decommissioning old hardware, and more. The Tintri platform provides both the daily management tasks required to streamline storage management and the forward-looking insights to help businesses plan accordingly. Today, Tintri technology is differentiated by its level of abstraction—the ability to take every action on individual virtual machines. Hypervisor administrators and staff members associated with architecting, deploying, and managing virtual machines will want to dig into this document to understand how Tintri can save them the majority of their management effort and greatly reduce operating expense.
Office 365 / Microsoft 365: The Essential Companion Guide
Office 365 and Microsoft 365 contain truly powerful applications that can significantly boost productivity in the workplace. However, there’s a lot on offer, so we’ve put together a comprehensive companion guide to ensure you get the most out of your investment! This free 85-page eBook, written by Microsoft Certified Trainer Paul Schnackenburg, covers everything from basic descriptions to installation, migration, use cases, and best practices for all features within the Office/Microsoft 365 suite.

Welcome to this free eBook on Office 365 and Microsoft 365, brought to you by Altaro Software. We’re going to show you how to get the most out of these powerful cloud packages and improve your business. This book follows an informal reference format, providing an overview of the most powerful applications in each platform’s feature set, along with links to supporting information and further reading if you want to dig deeper into a specific topic. The intended audience for this book is administrators and IT staff who are either preparing to migrate to Office/Microsoft 365 or who have already migrated and need to get the lay of the land. If you’re a developer looking to create applications and services on top of the Microsoft 365 platform, this book is not for you. If you’re a business decision-maker rather than a technical implementer, this book will give you a good introduction to what you can expect when your organization has been migrated to the cloud and to ways you can adopt various services in Microsoft 365 to improve the efficiency of your business.

THE BASICS

We’ll cover the differences (and why one might be more appropriate for you than the other) in more detail later, but to start off, let’s just clarify what each software package encompasses in a nutshell. Office 365 (from now on referred to as O365) is email, collaboration, and a host of other services provided as Software as a Service (SaaS), whereas Microsoft 365 (M365) is Office 365 plus Azure Active Directory Premium, Intune (cloud-based management of devices and security), and Windows 10 Enterprise. Both are per-user subscription services that require little or no on-premises infrastructure.

How to Develop a Multi-cloud Management Strategy
Increasingly, organizations are looking to move workloads into the cloud. The goal may be to leverage cloud resources for Dev/Test, or they may want to “lift and shift” an application to the cloud and run it natively. In order to enable these various cloud options, it is critical that organizations develop a multi-cloud data management strategy.

The primary goal of a multi-cloud data management strategy is to supply data, by either copying or moving it, to the various multi-cloud use cases. A key enabler of this movement is data management software. In theory, data protection applications can perform both the copy and the move functions. A key consideration is how the multi-cloud data management experience is unified. In most cases, data protection applications ignore the native user experience of each cloud and use their own proprietary interface as the unifying entity, which increases complexity.

There are a variety of reasons organizations may want to leverage multiple clouds. The first use case is to use public cloud storage as a backup mirror to an on-premises data protection process. Using public cloud storage as a backup mirror enables the organization to get data off-site automatically. It also sets up many of the more advanced use cases.
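
As a minimal sketch of the backup-mirror idea, assuming an S3-compatible target and the boto3 library (the bucket name and paths are hypothetical):

    from pathlib import Path

    import boto3

    s3 = boto3.client("s3")              # credentials come from the environment
    BUCKET = "dr-backup-mirror"          # hypothetical off-site bucket

    def mirror_backups(backup_dir):
        """Copy every local backup file to cloud storage as an off-site mirror."""
        root = Path(backup_dir)
        for path in root.rglob("*"):
            if path.is_file():
                key = path.relative_to(root).as_posix()
                s3.upload_file(str(path), BUCKET, key)
                print(f"mirrored {path} -> s3://{BUCKET}/{key}")

    mirror_backups("/var/backups/nightly")   # e.g., run after each backup job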

Another use case is using the cloud for disaster recovery.

Another use case is “Lift and Shift,” which means the organization wants to run the application in the cloud natively. Initial steps in the “lift and shift” use case are similar to Dev/Test, but now the workload is storing unique data in the cloud.

Multi-cloud is a reality now for most organizations and managing the movement of data between these clouds is critical.

How to Make Citrix Logons 75% Faster

Slow logon is one of the most common user complaints faced by Citrix admins. When logon is slow, it affects the end-user experience and business productivity. Because Citrix XenApp and XenDesktop logon comprises many steps and depends on various parts of the infrastructure, it is often difficult to know what is causing logon slowness. The biggest question every Citrix admin has is, “How do I make Citrix logons faster?”

Optimize Citrix Logon Every Step of the Way and Reduce Logon Times by Up to 75%.

Watch this on-demand webinar where Citrix expert George Spiers will share best practices based on his real-world experience to optimize your Citrix infrastructure to make logons up to 75% faster.

•    Understand which factors are involved in Citrix logon processing
•    Learn optimization techniques to make logon faster, including profile management and image optimization
•    Learn how to improve logon times using new Citrix technologies such as App Layering and WEM
•    Pick up tips, tricks, and tools to proactively detect logon slowdowns

View this webinar and become an expert at managing Citrix logon performance end to end.

Top 10 VMware Performance Metrics That Every VMware Admin Must Monitor

Virtualization technology is being widely adopted thanks to the flexibility, agility, reliability and ease of administration it offers. At the same time, any IT technology – hardware or software – is only as good as its maintenance and upkeep, and VMware virtualization is no different. With physical machines, failure or poor performance of a machine affects the applications running on that machine. With virtualization, multiple virtual machines (VMs) run on the same physical host and a slowdown of the host will affect applications running on all of the VMs. Hence, performance monitoring is even more important in a virtualized infrastructure than it is in a physical infrastructure.

How does one determine the right amount of resources to allocate to a VM? The answer lies in tracking the resource usage of VMs over time, determining the norms of usage, and then right-sizing the VMs accordingly.
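
As a simplified illustration of that right-sizing logic (ours, not the blog's), one might size a VM's vCPU count from a high percentile of observed CPU demand rather than its peak; the sample data and thresholds below are invented:

    import numpy as np

    # Hypothetical 30 days of 5-minute CPU demand samples for one VM, in MHz.
    rng = np.random.default_rng(0)
    cpu_demand_mhz = rng.gamma(shape=2.0, scale=400.0, size=30 * 288)

    core_mhz = 2600                          # assumed host core clock speed
    p95 = np.percentile(cpu_demand_mhz, 95)  # the usage "norm", not the peak
    headroom = 1.2                           # 20% safety margin on the norm

    recommended_vcpus = max(1, int(np.ceil(p95 * headroom / core_mhz)))
    print(f"95th percentile demand: {p95:.0f} MHz")
    print(f"recommended vCPUs: {recommended_vcpus}")

Sizing to a percentile plus headroom, rather than to the observed peak, avoids permanently reserving capacity for rare spikes while still covering normal load.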

But how does one track the resource usage metrics for VMs and which ones are important? VMware vSphere comprises many different resource components. Knowing what these components are and how each component influences resource management decisions is key to efficiently managing VM performance. In this blog, we will discuss the top 10 metrics that every VMware administrator must continuously track.

Evaluator Group Report on Liqid Composable Infrastructure
Composable infrastructure direct-connects compute and storage resources dynamically, using virtualized networking techniques controlled by software. Instead of physically constructing a server with specific internal devices (storage, NICs, GPUs, or FPGAs), or cabling the appropriate device chassis to a server, composable infrastructure enables the virtual connection of these resources at the device level as needed, when needed.
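
A toy Python model of the concept (purely conceptual, not Liqid's actual API): devices live in shared pools and are attached to, and released from, a logical server in software rather than by physical cabling:

    # Free device pools, keyed by resource type.
    pools = {"gpu": ["gpu0", "gpu1", "gpu2"], "ssd": ["ssd0", "ssd1"], "nic": ["nic0"]}

    def compose(server, wants):
        """Attach devices from the shared pools to a logical server."""
        granted = {}
        for kind, count in wants.items():
            if len(pools[kind]) < count:
                raise RuntimeError(f"not enough free {kind}s for {server}")
            granted[kind] = [pools[kind].pop() for _ in range(count)]
        print(f"{server} composed with {granted}")
        return granted

    def decompose(granted):
        """Return devices to the pools for reuse by other servers."""
        for kind, devices in granted.items():
            pools[kind].extend(devices)

    lease = compose("ml-node-1", {"gpu": 2, "ssd": 1})
    decompose(lease)   # devices go back to the pool when no longer needed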

Download this report from Eric Slack, Senior Analyst at the Evaluator Group to learn how Liqid’s software-defined platform delivers comprehensive, multi-fabric composable infrastructure for the industry’s widest array of data center resources.
LQD4500 Gen4x16 NVMe SSD Performance Report
The LQD4500 is the World’s Fastest SSD.

The Liqid Element LQD4500 PCIe Add-in-Card (AIC), aka the “Honey Badger” for its fierce, lightning-fast data speeds, delivers Gen-4 PCIe performance with up to 4 million IOPS, 24 GB/s throughput, and ultra-low transactional latency of just 20 µs, in capacities up to 32TB.

This document contains test results and performance measurements for the Liqid LQD4500 Gen4x16 NVMe SSD. The performance test reports include sequential, random, and latency measurements on the LQD4500 high-performance storage device. The data was measured in a Linux OS environment, with results taken per the SNIA enterprise performance test specification standards. The results below reflect steady state after sufficient device preconditioning.
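
For context, measurements of this kind are typically gathered with a benchmarking tool such as fio. Below is a hedged sketch of driving a single 4K random-read test from Python; the device path, queue depth, and job count are illustrative, not the report's actual parameters:

    import json
    import subprocess

    # 4K random-read test at queue depth 32, a common setup for IOPS figures.
    cmd = [
        "fio", "--name=randread",
        "--filename=/dev/nvme0n1",          # test device (example path)
        "--rw=randread", "--bs=4k", "--iodepth=32", "--numjobs=8",
        "--ioengine=libaio", "--direct=1",  # bypass the page cache
        "--runtime=60", "--time_based", "--group_reporting",
        "--output-format=json",
    ]
    report = json.loads(subprocess.run(cmd, capture_output=True, check=True).stdout)

    read = report["jobs"][0]["read"]
    print(f"IOPS: {read['iops']:.0f}")
    print(f"mean completion latency: {read['clat_ns']['mean'] / 1000:.1f} us")

Direct I/O against the raw device, a time-based run, and prior preconditioning are what allow a result like this to be reported as steady state rather than a short-lived burst.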

Download the full LQD4500 performance report and learn how the ultra-high performance and density Gen-4 AIC can supercharge data for your most demanding applications.
Exploring AIOps: Cluster Analysis for Events
AIOps, i.e., artificial intelligence for IT operations, has become the latest strategy du jour in the IT operations management space to help address and better manage the growing complexity and extreme scale of modern IT environments. AIOps enables some unique and new capabilities on this front, though it is quite a bit more complicated than the panacea that it is made out to be. However, the underlying AI and machine learning (ML) concepts do help complement, supplement and, in particular cases, even supplant more traditional approaches to handling typical IT Ops scenarios at scale.

An AIOps platform has to ingest and deal with multiple types of data to develop a comprehensive understanding of the state of the managed domain(s) and to better discern the push and pull of diverse trends in the environment, both overt and subtle, that may destabilize critical business outcomes. In this white paper, we will take a look at an AIOps approach to handling one of the fundamental data types: events.
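
As a toy illustration of event cluster analysis (one common approach, not necessarily the one this paper takes), similar event messages can be grouped by vectorizing their text and clustering the vectors; the sample events below are invented:

    from sklearn.cluster import KMeans
    from sklearn.feature_extraction.text import TfidfVectorizer

    # Invented sample events; real platforms ingest these from monitoring feeds.
    events = [
        "disk /dev/sda1 90% full on host web01",
        "disk /dev/sdb1 95% full on host db02",
        "link down on interface eth0 switch tor3",
        "link down on interface eth2 switch tor1",
        "disk /dev/sda2 88% full on host app04",
    ]

    # Turn each raw event message into a TF-IDF vector.
    vectors = TfidfVectorizer().fit_transform(events)

    # Group the vectors into two clusters (disk-space vs. link-down events).
    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(vectors)

    for label, event in sorted(zip(labels, events)):
        print(label, event)

Grouping a flood of raw events into a handful of clusters like this is what lets an operator reason about "one disk-space problem and one link problem" instead of five separate alerts.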
The Monitoring ELI5 Guide
The goal of this book is to describe complex IT ideas simply. Very simply. So simply, in fact, a five-year-old could understand it. This book is also written in a way we hope is funny, and maybe a little irreverent—just the right mix of snark and humor and insubordination.

Not too long ago, a copy of Randall Munroe’s “Thing Explainer” made its way around the SolarWinds office—passing from engineering to marketing to development to the Head Geeks™ (yes, that’s actually a job at SolarWinds. It’s pretty cool.), and even to management.

Amid chuckles of appreciation, we recognized Munroe had struck upon a deeper truth: as IT practitioners, we’re often asked to describe complex technical ideas or solutions. However, it’s often for folks who need a simplified version. These may be people who consider themselves non-technical, but it could just as easily be people who are technical in a different discipline. Amid frustrated eye-rolling, we’re asked to “explain it to me like I’m five years old” (a phrase shortened to just “Explain Like I’m Five,” or ELI5, in forums across the internet).

There, amid the blueprints and stick figures, were explanations of the most complex concepts in hyper-simplified language that had achieved the impossible alchemy of being amusing, engaging, and accurate.

We were inspired. What you hold in your hands (or read on your screen) is the result of this inspiration.

In this book, we hope to do for IT what Randall Munroe did for rockets, microwaves, and cell phones: explain what they are, what they do, and how they work in terms anyone can understand, and in a way that may even inspire a laugh or two.

ESG Showcase - Time to Better Leverage Your File and Object Data
The need to handle unstructured data according to business relevance is becoming urgent. Modern data-related demands have begun surpassing what traditional file and object storage architectures can achieve. Businesses today need an unstructured data storage environment that lets end users easily and flexibly access the benefits of the file- and object-based information they need to do their jobs.
It’s time for a “Nirvana” between file and object: the marketplace has long been debating which of the two unstructured data type models is the “right one.” But data is data. If you remove the traditional constraints and challenges associated with those two particular technologies, it becomes possible to imagine a type of data service that supports both. That kind of solution would give an organization the benefits of a global multi-site namespace (i.e., object style), including the ability to address a piece of data regardless of where it sits in a folder hierarchy or what device it’s stored on. It would offer the rich metadata that object storage is known for, and it would also deliver the performance and ease of use that file systems are known for.

DataCore says it is providing such a solution, called vFilO. To vFilO, data is just data: metadata is decoupled from but tightly aligned with the data, and data’s location is independent of data access needs or metadata location. It is a next-generation system built on a different paradigm, promising all the benefits of file and object without the limitations of either.

It just might be the answer IT departments need. No longer required to pick between two separate systems, IT organizations can find great value in a consolidated, global, unified storage system that can deliver on the business’s needs for unstructured data.
The Time is Now for File Virtualization
DataCore’s vFilO is a distributed file and object storage virtualization solution that can consume storage from a variety of providers, including NFS or SMB file servers, most NAS systems, and S3 object storage systems, including S3-based public cloud providers. Once vFilO integrates these various storage systems into its environment, it presents users with a logical file system, abstracted from the actual physical location of data.

DataCore vFilO is a top-tier file virtualization solution. Not only can it serve as a global file system, but IT can also add new NAS systems or file servers to the environment without having to remap users to the new hardware. vFilO supports live migration of data between the storage systems it has assimilated, and it leverages the capabilities of the global file system and the software’s policy-driven data management to move older data to less expensive storage automatically, whether high-capacity NAS or an object storage system. vFilO also transparently moves data from NFS/SMB to object storage. If users need access to this data in the future, they access it exactly as they always have; to them, the data has not moved.
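
A greatly simplified sketch of that policy idea (illustrative only; vFilO's real data mover keeps access transparent rather than deleting files): files untouched for longer than a policy-defined age are copied from a file share to object storage. The share path and bucket are hypothetical:

    import os
    import time

    import boto3

    s3 = boto3.client("s3")
    BUCKET = "cold-tier"                 # hypothetical object-storage bucket
    MAX_AGE_DAYS = 180                   # policy: demote data idle for 6 months

    def demote_cold_files(share_root):
        """Copy files that exceed the age policy to the object tier."""
        cutoff = time.time() - MAX_AGE_DAYS * 86400
        for dirpath, _, filenames in os.walk(share_root):
            for name in filenames:
                path = os.path.join(dirpath, name)
                if os.stat(path).st_atime < cutoff:   # not accessed recently
                    key = os.path.relpath(path, share_root)
                    s3.upload_file(path, BUCKET, key)
                    os.remove(path)  # a real tiering system leaves a stub behind
                    print(f"demoted {key}")

    demote_cold_files("/mnt/smb-share")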

The ROI of file virtualization is powerful, but the technology has struggled to gain adoption in the data center. File virtualization needs to be explained, and explaining it takes time. vFilO more than meets the requirements to qualify as a top-tier file virtualization solution. DataCore has the advantage of more than 10,000 customers that are much more likely to be receptive to the concept, since they have already embraced block storage virtualization with SANSymphony. Building on its customer base as a beachhead, DataCore can then expand file virtualization’s reach to new customers who, because of the changing state of unstructured data, may finally be receptive to the concept. At the same time, these new file virtualization customers may be amenable to virtualizing block storage, which may open new doors for SANSymphony.

IDC: DataCore SDS: Enabling Speed of Business and Innovation with Next-Gen Building Blocks
DataCore solutions include and combine block, file, object, and HCI software offerings that enable the creation of a unified storage system, integrating additional functionality such as data protection, replication, and storage/device management to eliminate complexity. They also converge primary and secondary storage environments to give a unified view, predictive analytics, and actionable insights. DataCore’s newly engineered SDS architecture makes it a key player in the modern SDS solutions space.
The enterprise IT infrastructure market is undergoing a once-in-a-generation change due to ongoing digital transformation initiatives and the onslaught of applications and data. The need for speed, agility, and efficiency is pushing demand for modern datacenter technologies that can lower costs while providing new levels of scale, quality, and operational efficiency. This has driven strong demand for next-generation solutions such as software-defined storage/networking/compute, public cloud infrastructure as a service, flash-based storage systems, and hyperconverged infrastructure. Each of these solutions offers enterprise IT departments a way to rethink how they deploy, manage, consume, and refresh IT infrastructure.

These solutions represent modern infrastructure that can deliver the performance and agility required for both existing virtualized workloads and next-generation applications — applications that are cloud-native, highly dynamic, and built using containers and microservices architectures. As we enter the next phase of datacenter modernization, businesses need to leverage newer capabilities enabled by software-defined storage that help them eliminate management complexities, overcome data fragmentation and growth challenges, and become a data-driven organization to propel innovation. As enterprises embark on their core datacenter modernization initiatives with compelling technologies, they should evaluate enterprise-grade solutions that redefine storage and data architectures designed for the demands of the digital-native economy.

Digital transformation is a technology-based business strategy that is becoming increasingly imperative for success. However, unless infrastructure provisioning evolves to suit new application requirements, IT will not be viewed as a business enabler. IDC believes that those organizations that do not leverage proven technologies such as SDS to evolve their datacenters truly risk losing their competitive edge.