Virtualization Technology News and Information
White Papers
White Papers Search Results
Showing 33 - 46 of 46 white papers, page 3 of 3.
Modernized Backup for Open VMs
Catalogic vProtect is an agentless enterprise backup solution for Open VM environments such as RedHat Virtualization, Nutanix Acropolis, Citrix XenServer, KVM, Oracle VM, PowerKVM, KVM for IBM z, oVirt, Proxmox and Xen. vProtect enables VM-level protection and can function as a standalone solution or integrate with enterprise backup software such as IBM Spectrum Protect, Veritas NetBackup or Dell-EMC Networker. It is easy to use and affordable.
Modernized Backup for Nutanix Acropolis Hypervisor
Catalogic vProtect is an agentless enterprise backup solution for Nutanix Acropolis. vProtect enables VM-level protection with incremental backups, and can function as a standalone solution or integrate with enterprise backup software such as IBM Spectrum Protect, Veritas NetBackup or Dell-EMC Networker. It is easy to use and affordable. It also supports Open VM environments such as RedHat Virtualization, Citrix XenServer, KVM, Oracle VM, and Proxmox.
Mastering vSphere – Best Practices, Optimizing Configurations & More
Do you regularly work with vSphere? If so, this free eBook is for you. Learn how to leverage best practices for the most popular features of the vSphere platform and boost your productivity using tips and tricks learned directly from an experienced VMware trainer and highly qualified professional. In this eBook, vExpert Ryan Birk shows you how to master advanced deployment scenarios using Auto-Deploy, shared storage, performance monitoring and troubleshooting, and host network configuration.

If you’re here to gather some of the best practices surrounding vSphere, you’ve come to the right place! Mastering vSphere: Best Practices, Optimizing Configurations & More, the free eBook authored by me, Ryan Birk, is the product of many years working with vSphere as well as teaching others in a professional capacity. In my extensive career as a VMware consultant and teacher (I’m a VMware Certified Instructor) I have worked with people of all competence levels and been asked hundreds - if not thousands - of questions on vSphere. I was approached to write this eBook to put that experience to use to help people currently working with vSphere step up their game and reach that next level. As such, this eBook assumes readers already have a basic understanding of vSphere and will cover the best practices for four key aspects of any vSphere environment.

The best practices covered here focus largely on management and configuration solutions, so they should remain relevant for quite some time. That said, things are constantly changing in IT, so I would always recommend obtaining the most up-to-date information from VMware KBs and official documentation, especially regarding specific versions of tools and software updates. This eBook is divided into several sections, and although I would advise reading the whole eBook as most elements relate to others, you might want to focus on a certain area you’re having trouble with. If so, jump to the section you want to read about.

Before we begin, I want to note that in a VMware environment, it’s always best to keep things simple. Far too often I have seen environments thrown off the tracks by trying to do too much at once. I try to live by the mentality of “keeping your environment boring” – in other words, keeping your host configurations the same, your storage configurations the same, and your network configurations the same. I don’t mean duplicate IP addresses, but the hosts need identical port groups, access to the same storage networks, and so on. Consistency is the name of the game and is key to solving unexpected problems down the line. Furthermore, it enables smooth scalability: when you move from a single-host configuration to a cluster configuration, having the same configurations makes live migration and high availability far easier to configure without significant re-work of the entire infrastructure. Now that the scene has been set, let’s get started!
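The consistency principle above can even be checked programmatically. Below is a minimal sketch in plain Python – the host names, dictionary shapes, and config keys are illustrative assumptions, not a real vSphere API – that flags hosts whose port groups or storage networks diverge from the first host's baseline:

```python
def find_config_drift(hosts):
    """Compare each host's port groups and storage networks against
    the first host in the inventory, returning any differences."""
    baseline = next(iter(hosts.values()))
    drift = {}
    for name, cfg in hosts.items():
        diffs = {}
        for key in ("port_groups", "storage_networks"):
            missing = set(baseline[key]) - set(cfg[key])
            extra = set(cfg[key]) - set(baseline[key])
            if missing or extra:
                diffs[key] = {"missing": missing, "extra": extra}
        if diffs:
            drift[name] = diffs
    return drift

# Illustrative inventory: esx02 is missing the VM-Net port group
hosts = {
    "esx01": {"port_groups": ["Mgmt", "vMotion", "VM-Net"],
              "storage_networks": ["iSCSI-A", "iSCSI-B"]},
    "esx02": {"port_groups": ["Mgmt", "vMotion"],
              "storage_networks": ["iSCSI-A", "iSCSI-B"]},
}
print(find_config_drift(hosts))
```

In a real environment the inventory would come from PowerCLI or a vSphere SDK rather than a hand-written dictionary, but the drift check itself is the same idea: diff every host against the baseline before you rely on live migration or HA.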

Digital Workspace Disasters and How to Beat Them
Desktop DR - the recovery of individual desktop systems from a disaster or system failure - has long been a challenge. Part of the problem is that there are so many desktops, storing so much valuable data and - unlike servers - with so many different end user configurations and too little central control. Imaging everyone would be a huge task, generating huge amounts of backup data.
Desktop DR - the recovery of individual desktop systems from a disaster or system failure - has long been a challenge. Part of the problem is that there are so many desktops, storing so much valuable data and - unlike servers - with so many different end-user configurations and too little central control. Imaging everyone would be a huge task, generating huge amounts of backup data. And even if those problems could be overcome with software agents, plus deduplication to take common files such as the operating system out of the backup window, restoring damaged systems could still mean days of software reinstallation and reconfiguration.

Yet at the same time, most organizations have a strategic need to deploy and provision new desktop systems, and to be able to migrate existing ones to new platforms. Again, these are tasks that benefit from reducing both duplication and the need to reconfigure the resulting installation. The parallels with desktop DR should be clear.

We often write about the importance of an integrated approach to investing in backup and recovery. By bringing together business needs that have a shared technical foundation, we can, for example, gain incremental benefits from backup, such as improved data visibility and governance, or we can gain DR capabilities from an investment in systems and data management. So it is with desktop DR and user workspace management (UWM). Both are growing in importance as organizations’ desktop estates grow more complex. Not only are we adding more ways to work online, such as virtual PCs, more applications, and more layers of middleware, but the resulting systems face more risks and threats and are subject to higher regulatory and legal requirements. Increasingly, then, both desktop DR and UWM will be not just valuable, but essential.
Getting one as an incremental bonus from the other therefore not only strengthens the business case for that investment proposal, it is a win-win scenario in its own right.
The Forrester Wave: Intelligent Application and Service Monitoring, Q2 2019
Forrester Research identified, researched, analyzed, and scored the thirteen most significant IASM providers against criteria in three categories: current offering, market presence, and strategy. Leaders, strong performers, and contenders emerge – and you may be surprised where each provider lands in this Forrester Wave.

In The Forrester Wave: Intelligent Application and Service Monitoring, Q2 2019, Forrester identified the 13 most significant IASM providers in the market today, with Zenoss ranked amongst them as a Leader.

“As complexity grows, I&O teams struggle to obtain full visibility into their environments and do troubleshooting. To meet rising customer expectations, operations leaders need new monitoring technologies that can provide a unified view of all components of a service, from application code to infrastructure.”

Who Should Read This

Enterprise organizations looking for a solution to provide:

  • Strong root-cause analysis and remediation
  • Digital customer experience measurement capabilities
  • Ease of deployment across the customer’s whole environment, positioning the provider to successfully deliver intelligent application and service monitoring

Our Takeaways

Trends impacting the infrastructure and operations (I&O) team include:

  • Operations leaders favor a unified view
  • AI/machine learning adoption reaches 72% within the next 12 months
  • Intelligent root-cause analysis soon to become table stakes
  • Monitoring the digital customer experience becomes a priority
  • Ease and speed of deployment are differentiators

How to Develop a Multi-cloud Management Strategy
Increasingly, organizations are looking to move workloads into the cloud. The goal may be to leverage cloud resources for Dev/Test, or they may want to “lift and shift” an application to the cloud and run it natively. In order to enable these various cloud options, it is critical that organizations develop a multi-cloud data management strategy.

The primary goal of a multi-cloud data management strategy is to supply data – by copying or moving it – to the various multi-cloud use cases. A key enabler of this movement is data management software. In theory, data protection applications can perform both the copy and move functions. A key consideration is how the multi-cloud data management experience is unified. In most cases, data protection applications ignore the user experience of each cloud and use their own proprietary interface as the unifying entity, which increases complexity.

There are a variety of reasons organizations may want to leverage multiple clouds. The first use case is to use public cloud storage as a backup mirror for an on-premises data protection process. Using public cloud storage as a backup mirror enables the organization to automatically store data off-site. It also sets up many of the more advanced use cases.

Another use case is using the cloud for disaster recovery.

Another use case is “Lift and Shift,” which means the organization wants to run the application in the cloud natively. Initial steps in the “lift and shift” use case are similar to Dev/Test, but now the workload is storing unique data in the cloud.

Multi-cloud is a reality now for most organizations and managing the movement of data between these clouds is critical.
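The backup-mirror use case described above boils down to keeping the off-site copy in sync with the on-premises catalog. A minimal sketch – the catalog record shapes and backup IDs are hypothetical, not any vendor's API – that computes which local backups still need to be copied off-site:

```python
def pending_offsite_copies(local_catalog, cloud_catalog):
    """Return backup records present in the local catalog but not
    yet mirrored to cloud storage, oldest first."""
    mirrored = {b["id"] for b in cloud_catalog}
    pending = [b for b in local_catalog if b["id"] not in mirrored]
    return sorted(pending, key=lambda b: b["created"])

# Illustrative catalogs: one full backup is already off-site,
# two incrementals are not.
local = [
    {"id": "vm42-full-001", "created": "2020-03-01"},
    {"id": "vm42-incr-002", "created": "2020-03-02"},
    {"id": "vm42-incr-003", "created": "2020-03-03"},
]
cloud = [{"id": "vm42-full-001", "created": "2020-03-01"}]

print([b["id"] for b in pending_offsite_copies(local, cloud)])
# → ['vm42-incr-002', 'vm42-incr-003']
```

A data management application runs this kind of reconciliation continuously; the "unified experience" question in the text is essentially about where this catalog lives and which interface exposes it.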

Add Zero-Cost, Proactive Monitoring to Your Citrix Services with FREE Citrix Logon Simulator
Performance is central to any Citrix project, whether it’s a new deployment, upgrading from XenApp 6.5 to XenApp 7.x, or scaling and optimization. Watch this on-demand webinar and learn how you can leverage eG Enterprise Express, the free Citrix logon monitoring solution from eG Innovations, to deliver added value to your customers and help them proactively fix logon slowdowns and improve the user experience.

Performance is central to any Citrix project, whether it’s a new deployment, upgrading from XenApp 6.5 to XenApp 7.x, or scaling and optimization. Rather than focusing solely on system resource usage metrics (CPU, memory, disk usage, etc.), Citrix administrators need to monitor all aspects of user experience – and Citrix logon performance is the most important of them all.

Watch this on-demand webinar and learn how you can leverage eG Enterprise Express, the free Citrix logon monitoring solution from eG Innovations, to deliver added value to your customers and help them proactively fix logon slowdowns and improve the user experience. In this webinar, you will learn:

•    What the free Citrix logon simulator does, how it works, and its benefits
•    How you can set it up for your clients in just minutes
•    Different ways to use logon monitoring to improve your client projects
•    Upsell opportunities for your service offerings

Choosing the Best Approach for Monitoring Citrix User Experience
This white paper provides an analysis of the different approaches to Citrix user experience monitoring – from the network, server, client, and simulation. You will understand the benefits and shortcomings of these approaches and become well-informed to choose the best approach that suits your requirements.

A great user experience is key for the success of any Citrix/VDI initiative. To ensure user satisfaction and productivity, Citrix administrators should monitor the user experience proactively, detect times when users are likely to be seeing slowness, pinpoint the cause of such issues and initiate corrective actions to quickly resolve issues.

How to Make Citrix Logons 75% Faster
Slow logon is one of the most common user complaints faced by Citrix admins. When logon is slow, it affects the end-user experience and business productivity. Because Citrix XenApp and XenDesktop logon comprises many steps and depends on various parts of the infrastructure, it is often difficult to know what is causing logon slowness. Watch this on-demand webinar where Citrix expert George Spiers will share best practices to optimize your Citrix infrastructure to make logons up to 75% faster.

Slow logon is one of the most common user complaints faced by Citrix admins. When logon is slow, it affects the end-user experience and business productivity. Because Citrix XenApp and XenDesktop logon comprises many steps and depends on various parts of the infrastructure, it is often difficult to know what is causing logon slowness. The biggest question every Citrix admin has is, “How do I make Citrix logons faster?”

Optimize Citrix Logon Every Step of the Way and Reduce Logon Times Up To 75%.

Watch this on-demand webinar where Citrix expert George Spiers will share best practices based on his real-world experience to optimize your Citrix infrastructure to make logons up to 75% faster.

•    Understand what factors are involved in Citrix logon processing
•    Learn optimization techniques to make logon faster including profile management and image optimization
•    Learn how to improve logon times using new Citrix technologies such as App Layering and WEM
•    Pick up tips, tricks and tools to proactively detect logon slowdowns

View this webinar and become an expert at managing Citrix logon performance end to end.
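Because a Citrix logon is a chain of discrete steps (brokering, GPO processing, profile load, logon scripts, and so on), finding the bottleneck means timing each phase and ranking its contribution. A minimal sketch – the phase names and timings below are illustrative assumptions, not measurements from any real tool – of that breakdown:

```python
def logon_breakdown(phases):
    """Rank logon phases by duration and report each phase's
    percentage share of total logon time."""
    total = sum(phases.values())
    ranked = sorted(phases.items(), key=lambda kv: kv[1], reverse=True)
    return [(name, secs, round(100 * secs / total, 1))
            for name, secs in ranked]

# Hypothetical per-phase timings in seconds
timings = {
    "Brokering": 1.2,
    "GPO processing": 8.4,
    "Profile load": 12.6,
    "Logon script": 2.8,
}
for name, secs, pct in logon_breakdown(timings):
    print(f"{name}: {secs}s ({pct}%)")
```

In this hypothetical trace, profile load dominates – which is exactly why the webinar's optimization techniques lead with profile management.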

The Top 10 Metrics a Citrix Administrator Must Monitor in Their Environment
We have invited George Spiers, who is a Citrix architect with rich experience in consulting and implementing Citrix technologies for organizations in various sectors, to write this guest blog and enlighten us on the topic of Citrix monitoring. George is also a Citrix Technology Professional and has contributed immensely to the Citrix community. Read on to see what George thinks are the top 10 most important metrics that Citrix administrators must monitor.

Citrix application and desktop virtualization technologies are widely used by organizations that are embarking on digital transformation initiatives. The success of these initiatives is closely tied to ensuring a great user experience for end users as they access their virtual apps and desktops. Given the multitude of components and services that make up the Citrix delivery architecture, administrators constantly face an uphill challenge in measuring performance and knowing what key performance indicators (KPIs) to monitor.

We have invited George Spiers (https://www.jgspiers.com/), who is a Citrix architect with rich experience in consulting and implementing Citrix technologies for organizations in various sectors, to write this guest blog and enlighten us on the topic of Citrix monitoring. George is also a Citrix Technology Professional and has contributed immensely to the Citrix community. Read on to see what George thinks are the top 10 most important metrics that Citrix administrators must monitor.

The SysAdmin Guide to Azure Infrastructure as a Service
If you're used to on-premises infrastructures, cloud platforms can seem daunting. But it doesn't need to be. This eBook written by the veteran IT consultant and trainer Paul Schnackenburg, covers all aspects of setting up and maintaining a high-performing Azure IaaS environment, including: • VM sizing and deployment • Migration • Storage and networking • Security and identity • Infrastructure as code and more!

The cloud computing era is well and truly upon us, and knowing how to take advantage of this computing paradigm while maintaining security, manageability, and cost control is a vital skill for any IT professional in 2020 and beyond. And its importance is only growing.

In this eBook, we’re going to focus on Infrastructure as a Service (IaaS) on Microsoft’s Azure platform – learning how to create VMs, size them correctly, and manage storage, networking, and security, along with backup best practices. You’ll also learn how to operate groups of VMs, deploy resources based on templates, manage security, and automate your infrastructure. If you currently have VMs in your own datacenter and are looking to migrate to Azure, we’ll cover that too.

If you’re new to the cloud (or have experience with AWS/GCP but not Azure), this book will cover the basics as well as more advanced skills. Given how fast things change in the cloud, we’ll cover the why (as well as the how) so that as features and interfaces are updated, you’ll have the theoretical knowledge to effectively adapt and know how to proceed.

You’ll benefit most from this book if you actively follow along with the tutorials. We will go through terms and definitions as we go – learning by doing has always been my preferred way of education. If you don’t have access to an Azure subscription, you can sign up for a free trial with Microsoft. This gives you 30 days to use $200 USD worth of Azure resources, along with 12 months of free resources. Note that most of these “12 months” services aren’t related to IaaS VMs (apart from a few SSD-based virtual disks and a small VM that you can run for 750 hours a month), so be sure to get everything covered on the IaaS side before your trial expires. There are also another 25 services with free tiers “forever”.

Now you know what’s in store, let’s get started!

The Time is Now for File Virtualization
DataCore’s vFilO is a distributed file and object storage virtualization solution that can consume storage from a variety of providers, including NFS or SMB file servers, most NAS systems, and S3 object storage systems, including S3-based public cloud providers. Once vFilO integrates these various storage systems into its environment, it presents users with a logical file system, abstracted from the actual physical location of data.

DataCore vFilO is a top-tier file virtualization solution. Not only can it serve as a global file system, IT can also add new NAS systems or file servers to the environment without having to remap users to the new hardware. vFilO supports live migration of data between the storage systems it has assimilated, and it leverages the capabilities of the global file system and the software’s policy-driven data management to move older data to less expensive storage automatically – either high-capacity NAS or an object storage system. vFilO also transparently moves data from NFS/SMB to object storage. If users need access to this data in the future, they access it as they always have; to them, the data has not moved.
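The core idea of file virtualization – users address a stable logical path while the physical location changes underneath – can be illustrated with a toy model. This sketch is a conceptual illustration only (the class, backend names, and paths are invented, not vFilO's actual design):

```python
class GlobalNamespace:
    """Toy model of file virtualization: users see one logical
    path; the physical backend can change underneath them."""

    def __init__(self):
        # logical path -> (backend, physical key)
        self._location = {}

    def place(self, logical, backend, key):
        self._location[logical] = (backend, key)

    def migrate(self, logical, new_backend, new_key):
        # Data moves (e.g. NAS -> object storage), but the logical
        # path the user accesses never changes.
        self._location[logical] = (new_backend, new_key)

    def resolve(self, logical):
        return self._location[logical]

ns = GlobalNamespace()
ns.place("/projects/report.docx", "nas-01", "/vol1/report.docx")
# Policy-driven tiering moves the cold file to object storage:
ns.migrate("/projects/report.docx", "s3", "archive/report.docx")
print(ns.resolve("/projects/report.docx"))
# → ('s3', 'archive/report.docx')
```

The user keeps opening "/projects/report.docx"; only the resolution behind the namespace changed. A real implementation adds locking, metadata replication, and protocol translation, but the indirection layer is the essence of the technique.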

The ROI of file virtualization is powerful, but the technology has struggled to gain adoption in the data center. File virtualization needs to be explained, and explaining it takes time. vFilO more than meets the requirements to qualify as a top-tier file virtualization solution. DataCore has the advantage of over 10,000 customers that are much more likely to be receptive to the concept, since they have already embraced block storage virtualization with SANSymphony. Building on its customer base as a beachhead, DataCore can then expand file virtualization’s reach to new customers who, because of the changing state of unstructured data, may finally be receptive to the concept. At the same time, these new file virtualization customers may be amenable to virtualizing block storage, which may open up new doors for SANSymphony.

IDC: SaaS Backup and Recovery: Simplified Data Protection Without Compromise
Although the majority of organizations have a "cloud first" strategy, most also continue to manage onsite applications and the backup infrastructure associated with them. However, many are moving away from backup specialists and instead are leaving the task to virtual infrastructure administrators or other IT generalists. Metallic represents Commvault's direct entry into one of the fastest-growing segments of the data protection market. Its hallmarks are simplicity and flexibility of deployment.

Metallic is a new SaaS backup and recovery solution based on Commvault's data protection software suite, proven in the marketplace for more than 20 years. It is designed specifically for the needs of medium-scale enterprises but is architected to grow with them based on data growth, user growth, or other requirements. Metallic initially offers either monthly or annual subscriptions through reseller partners; it will be available through cloud service providers and managed service providers over time. The initial workload use cases for Metallic include virtual machine (VM), SQL Server, file server, MS Office 365, and endpoint device recovery support; the company expects to add more use cases and supported workloads as the solution evolves.

Metallic is designed to offer flexibility as one of the service's hallmarks. Aspects of this include:

  • On-demand infrastructure: Metallic manages the cloud-based infrastructure components and software for the backup environment, though the customer still manages any of its own on-premises infrastructure. This environment supports on-premises, cloud, and hybrid workloads. IT organizations are relieved of the daily task of managing the infrastructure components and do not have to worry about upgrades, OS or firmware updates, and the like for the cloud infrastructure, freeing that time for other activities.
  • Preconfigured plans: Metallic offers preconfigured plans designed to have users up and running in approximately 15 minutes, eliminating the need for a proof-of-concept test. These preconfigured systems have Commvault best practices built into the design, or organizations can configure their own.
  • Partner-delivered services: Metallic plans to go to market with resellers that can offer a range of services on top of the basic solution's capabilities. These services will vary by provider and will give users a variety of choices when selecting a provider to match the services offered with the organization's needs.
  • "Bring your own storage": Among the flexible options of Metallic, including the VM and file or SQL database use cases, users can deploy their own storage, either on-premises or in the cloud, while utilizing the backup/recovery services of Metallic. The company refers to this option as "SaaS Plus."
7 Tips to Safeguard Your Company's Data
Anyone who works in IT will tell you, losing data is no joke. Ransomware and malware attacks are on the rise, but that’s not the only risk. Far too often, a company thinks data is backed up – when it’s really not. The good news? There are simple ways to safeguard your organization. To help you protect your company (and get a good night’s sleep), our experts share seven common reasons companies lose data – often because it was never really protected in the first place – plus tips to help you avoid the same.

Anyone who works in IT will tell you, losing data is no joke. Ransomware and malware attacks are on the rise, but that’s not the only risk. Far too often, a company thinks data is backed up – when it’s really not. The good news? There are simple ways to safeguard your organization. To help you protect your company (and get a good night’s sleep), our experts share seven common reasons companies lose data – often because it was never really protected in the first place – plus tips to help you avoid the same.

Metallic’s engineers and product team have decades of combined experience protecting customer data. When it comes to backup and recovery, we’ve seen it all – the good, the bad and the ugly.

We understand backup is not something you want to worry about – which is why we’ve designed Metallic™ enterprise-grade backup and recovery with the simplicity of SaaS. Our cloud-based data protection solution comes with underlying technology from industry leader Commvault and best practices baked in. Metallic offerings help you ensure your backups run fast and reliably, and your data is there when you need it. Any company can be up and running with simple, powerful backup and recovery in as little as 15 minutes.
