Chapter 1: An Introduction to VMware Virtualization
Chapter 2: Backup and Recovery Methodologies
Chapter 3: Data Recovery in Virtual Environments
You're facing VM sprawl if you're experiencing an uncontrollable increase in unused and unneeded objects in your VMware virtual environment. VM sprawl is common in virtual infrastructures because they expand much faster than physical ones, which can make management a challenge. The growing number of virtualized workloads and applications generates “virtual junk” that causes VM sprawl and can eventually put you at risk of running out of resources.
Getting virtual sprawl under control will help you reallocate and better provision your existing storage, CPU and memory resources between critical production workloads and high-performance, virtualized applications. With proper resource management, you can save money on extra hardware.
This white paper examines how you can avoid potential VM sprawl risks and automate proactive monitoring with Veeam ONE, part of Veeam Availability Suite. It arms you with a list of VM sprawl indicators and explains how to set up and configure a handy report kit to detect and eliminate VM sprawl threats in your VMware environment.
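As a simple illustration of one such sprawl indicator (and not Veeam ONE's reporting itself), the minimal sketch below uses the open-source pyVmomi library to flag powered-off VMs, a common sign of virtual junk. The vCenter address and credentials are placeholders you would replace with your own.

```python
# Minimal sketch, not Veeam ONE: flag powered-off VMs as one possible sprawl indicator.
# Assumes pyVmomi is installed; the vCenter address and credentials below are placeholders.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab use only; validate certificates in production
si = SmartConnect(host="vcenter.example.com",     # placeholder vCenter address
                  user="administrator@vsphere.local",
                  pwd="changeme",
                  sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        if vm.runtime.powerState == vim.VirtualMachinePowerState.poweredOff:
            print(f"Possible sprawl candidate (powered off): {vm.name}")
    view.DestroyView()
finally:
    Disconnect(si)
```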
Read this FREE white paper and learn how to:
In this e-book, we’ll cover storage basics, storage performance and capacity, forecasting and usage, and storage best practices.
For years, IT managers have been seeking a single-pane-of-glass tool that can help them monitor and manage all aspects of their IT infrastructure – from desktops to servers, hardware to application code, and network to storage. But many fail to achieve this because they do not know how to implement a single-pane-of-glass solution.
Read this eG Innovations white paper, and understand:
Managing Windows user profiles can be a complex and challenging process. Better profile management is usually sought by organizations looking to reduce Windows login times, accommodate applications that do not adhere to best practices for application data storage, and give users the flexibility to log in to any Windows operating system (OS) and have their profile follow them. Note that additional profile challenges and solutions are covered in a related ProfileUnity whitepaper entitled “User Profile and Environment Management with ProfileUnity.” To efficiently manage the complex challenges of today’s diverse Windows profile environments, Liquidware ProfileUnity exclusively features two user profile technologies that can be used together or separately depending on the use case.
These include:
1. ProfileDisk, a virtual disk based profile that delivers the entire profile as a layer from an attached user VHD or VMDK, and
2. Profile Portability, a file and registry based profile solution that restores files at login, post login, or based on environment triggers.
How to navigate between the trenches
Hybrid IT has moved from buzzword status to reality and organizations are realizing its potential impact. Some aspects of your infrastructure may remain in a traditional setting, while another part runs on cloud infrastructure—causing great complexity. So, what does this mean for you? “A Journey Through Hybrid IT and the Cloud” provides insight on:
Most enterprises underestimate the planning process; they do not spend sufficient time understanding the cloud landscape and the different options available. While there are tools at hand to assist with the implementation and validation phases of the migration, planning is where all the crucial decisions need to be made.
Bad planning will lead to failed migrations. Challenges that enterprises often grapple with include:
HyperCloud Analytics provides intelligence backed by more than 400 million benchmarked data points to enable enterprises to make the right choices for the organization. HyperCloud’s cloud planning framework provides automation for four key stages that enterprises should consider as they plan their migration projects. They get automated instance recommendations and accurate cost forecasts made with careful consideration of their application requirements (bandwidth, storage, security, etc.). Multiple assessments can be run across the different cloud providers to understand application costs post-migration. Download our whitepaper to learn more about how you can build high-confidence, accurate plans with detailed cloud bills and cost forecasts, while expediting your cloud migrations.
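To make the idea of a per-provider cost forecast concrete, here is a minimal back-of-the-envelope sketch in Python. It is not HyperCloud's model: the instance names, hourly rates and storage price are entirely hypothetical placeholders, and a real assessment would also factor in bandwidth, security and discount structures.

```python
# Illustrative only: the instance names and rates below are made up, not real provider pricing.
HOURS_PER_MONTH = 730

# Hypothetical candidate instances that satisfy an app's 4 vCPU / 16 GB requirement.
candidates = {
    "provider_a.m_medium": {"vcpu": 4, "ram_gb": 16, "usd_per_hour": 0.20},
    "provider_b.std_4x16": {"vcpu": 4, "ram_gb": 16, "usd_per_hour": 0.23},
    "provider_c.gp_4_16":  {"vcpu": 4, "ram_gb": 16, "usd_per_hour": 0.19},
}
storage_gb = 500                 # hypothetical application storage requirement
usd_per_gb_month = 0.10          # hypothetical block-storage rate

for name, spec in sorted(candidates.items(), key=lambda kv: kv[1]["usd_per_hour"]):
    compute = spec["usd_per_hour"] * HOURS_PER_MONTH
    storage = storage_gb * usd_per_gb_month
    print(f"{name}: ~${compute + storage:,.2f}/month "
          f"(compute ${compute:,.2f} + storage ${storage:,.2f})")
```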
One of the toughest problems facing enterprise IT teams today is troubleshooting slow applications. When a user complains of slowness in application access, all hell breaks loose and the blame game begins: app owners, developers and IT ops teams enter endless war room sessions to figure out what went wrong and where. Have you been in this situation before?
Read this white paper by Larry Dragich, and learn how you can combine and correlate performance insights from the application (code, SQL, logs) and the underlying hardware infrastructure (server, network, virtualization, storage, etc.) in order to:
Move Workloads to the Cloud and Reduce Costs!
Considering a move to Azure? Use this simple tool to find out how much you can save on storage costs by mobilizing your applications to the cloud with Zerto on Azure!
This eBook explains how to identify problems with vSphere and how to solve them. Before we begin, we’ll introduce a few things that will make life easier: a troubleshooting methodology and how to gather logs. After that, the eBook is broken into the following sections: Installation, Virtual Machines, Networking, Storage, vCenter/ESXi and Clustering.
ESXi and vSphere problems arise from many different places, but they generally fall into one of these categories: hardware issues, resource contention, network attacks, software bugs, and configuration problems.
A typical troubleshooting process involves several tasks: 1. Define the problem and gather information. 2. Identify what is causing the problem. 3. Fix the problem by implementing an appropriate solution.
One of the first things you should do when experiencing a problem with a host is try to reproduce the issue. If you can find a way to reproduce it, you have a great way to validate that the issue is resolved once you apply a fix. It can also be helpful to benchmark your systems before they are put into production: if you know how they should be running, it’s easier to pinpoint a problem.
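As a minimal sketch of that baseline idea (assuming pyVmomi and a reachable vCenter; the address and credentials are placeholders, and a real baseline would capture far more counters), the following snapshots per-host CPU and memory QuickStats to a JSON file you can compare against later:

```python
# Capture a simple per-host resource baseline with pyVmomi QuickStats.
import json
import ssl
import time
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    baseline = {}
    for host in view.view:
        qs = host.summary.quickStats
        baseline[host.name] = {
            "cpu_mhz_used": qs.overallCpuUsage,    # MHz currently consumed
            "mem_mb_used": qs.overallMemoryUsage,  # MB currently consumed
        }
    view.DestroyView()
    # Write a timestamped snapshot to diff against when trouble strikes.
    with open(f"baseline-{int(time.time())}.json", "w") as f:
        json.dump(baseline, f, indent=2)
    print(f"Captured baseline for {len(baseline)} host(s).")
finally:
    Disconnect(si)
```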
You should decide whether it’s best to take a “top-down” or “bottom-up” approach to determine the root cause. Guest OS-level issues typically cause a large number of problems. Let’s face it, some of the applications we use are not perfect. They get the job done, but they use a lot of memory doing it.
In terms of virtual machine-level issues, is it possible that you have a limit or share value that’s misconfigured? At the ESXi host level, you may need additional resources. It’s hard to believe sometimes, but you might need another host to help carry the load!
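One quick way to check for misconfigured limits is to list every VM whose CPU or memory limit is not the default “unlimited” value of -1. The sketch below is an illustration, not the eBook’s own tooling; it assumes pyVmomi and uses placeholder credentials.

```python
# List VMs that have a non-default CPU or memory limit set (-1 means unlimited).
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.VirtualMachine], True)
    for vm in view.view:
        cfg = vm.config
        if cfg is None:           # skip orphaned or inaccessible VMs
            continue
        cpu_limit = cfg.cpuAllocation.limit
        mem_limit = cfg.memoryAllocation.limit
        if cpu_limit != -1 or mem_limit != -1:
            print(f"{vm.name}: CPU limit={cpu_limit} MHz, memory limit={mem_limit} MB")
    view.DestroyView()
finally:
    Disconnect(si)
```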
Once you have identified the root cause, you should assess the impact of the problem on your day-to-day operations. When, and what type of, fix should you implement: a short-term workaround or a long-term solution? Assess the impact of your solution on daily operations as well. A short-term solution implements a quick workaround; a long-term solution might mean reconfiguring a virtual machine or host.
Now that the basics have been covered, download the eBook to discover how to put this theory into practice!
If you’re here to gather some of the best practices surrounding vSphere, you’ve come to the right place! Mastering vSphere: Best Practices, Optimizing Configurations & More, the free eBook authored by me, Ryan Birk, is the product of many years working with vSphere as well as teaching others in a professional capacity. In my extensive career as a VMware consultant and teacher (I’m a VMware Certified Instructor) I have worked with people of all competence levels and been asked hundreds - if not thousands - of questions on vSphere. I was approached to write this eBook to put that experience to use to help people currently working with vSphere step up their game and reach that next level. As such, this eBook assumes readers already have a basic understanding of vSphere and will cover the best practices for four key aspects of any vSphere environment.
The best practices covered here will focus largely on management and configuration solutions, so they should remain relevant for quite some time. However, with that said, things are constantly changing in IT, so I would always recommend obtaining the most up-to-date information from VMware KBs and official documentation, especially regarding specific versions of tools and software updates. This eBook is divided into several sections, and although I would advise reading the whole eBook as most elements relate to others, you might want to just focus on a certain area you’re having trouble with. If so, jump to the section you want to read about.
Before we begin, I want to note that in a VMware environment, it’s always best to try to keep things simple. Far too often I have seen environments thrown off the tracks by trying to do too much at once. I try to live by the mentality of “keeping your environment boring” – in other words, keeping your host configurations the same, storage configurations the same and network configurations the same. I don’t mean duplicate IP addresses, but the hosts need identical port groups, access to the same storage networks, etc. Consistency is the name of the game and is key to solving unexpected problems down the line. Furthermore, it enables smooth scalability: when you move from a single-host configuration to a cluster configuration, having the same configurations will make live migrations and high availability far easier to configure without having to significantly re-work the entire infrastructure. Now that the scene has been set, let’s get started!
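As an aside on that consistency point, here is a minimal sketch (my illustration, not part of the eBook; it assumes pyVmomi and placeholder vCenter credentials) that compares standard-switch port group names across hosts and reports any host that is missing one:

```python
# Compare standard-switch port group names across hosts to spot configuration drift.
import ssl
from pyVim.connect import SmartConnect, Disconnect
from pyVmomi import vim

ctx = ssl._create_unverified_context()            # lab use only
si = SmartConnect(host="vcenter.example.com", user="administrator@vsphere.local",
                  pwd="changeme", sslContext=ctx)
try:
    content = si.RetrieveContent()
    view = content.viewManager.CreateContainerView(
        content.rootFolder, [vim.HostSystem], True)
    # Map each connected host to the set of port group names it exposes.
    portgroups = {h.name: {pg.spec.name for pg in h.config.network.portgroup}
                  for h in view.view if h.config}
    view.DestroyView()
    all_pgs = set().union(*portgroups.values()) if portgroups else set()
    for host, pgs in portgroups.items():
        missing = all_pgs - pgs
        if missing:
            print(f"{host} is missing port groups: {', '.join(sorted(missing))}")
finally:
    Disconnect(si)
```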
Each version of Windows and Windows Server showcases new technologies. The advent of PowerShell marked a substantial step forward in managing those features. However, the built-in graphical Windows management tools have largely stagnated - the same basic Microsoft Management Console (MMC) interfaces have remained since Windows Server 2000. Microsoft tried multiple overhauls of the built-in Server Manager console over the years, but they gained little traction. Until Windows Admin Center.
WHAT IS WINDOWS ADMIN CENTER?
Windows Admin Center (WAC) represents a modern turn in Windows and Windows Server system management. From its home page, you establish a list of the networked Windows and Windows Server computers to manage. From there, you can connect to an individual system to control components such as hardware drivers. You can also use it to manage Windows roles, such as Hyper-V.
On the front end, Windows Admin Center is presented through a sleek HTML5 web interface. On the back end, it leverages PowerShell extensively to control the systems within your network. The entire package runs on a single system, so you don’t need a complicated infrastructure to support it. In fact, you can run it locally on your Windows 10 workstation if you want. If you require more resiliency, you can run Windows Admin Center as a role on a Microsoft Failover Cluster.
WHY WOULD I USE WINDOWS ADMIN CENTER?
In the modern era of Windows management, we have shifted to a greater reliance on industrial-strength tools like PowerShell and Desired State Configuration. However, we still have servers that require individualized attention and infrequently utilized resources. WAC gives you a one-stop hub for dropping in on any system at any time and working with almost any of its facets.
ABOUT THIS EBOOK
This eBook has been written by Microsoft Cloud & Datacenter Management MVP Eric Siron. Eric has worked in IT since 1998, designing, deploying, and maintaining server, desktop, network, and storage systems. He has provided all levels of support for businesses ranging from single-user through enterprises with thousands of seats. He has achieved numerous Microsoft certifications and was a Microsoft Certified Trainer for four years. Eric is also a seasoned technology blogger and has amassed a significant following through his top-class work on the Altaro Hyper-V Dojo.
With a software-based approach, IT organizations see a better return on their storage investment. DataCore’s software-defined storage provides improved resource utilization, seamless integration of new technologies, and reduced administrative time - all resulting in lower CAPEX and OPEX, yielding a superior TCO.
A survey of 363 DataCore customers found that over half of them (55%) achieved positive ROI within the first year of deployment, and 21% were able to reach positive ROI in less than 6 months.
Download this white paper to learn how software-defined storage can help reduce data center infrastructure costs, including guidelines to help you structure your TCO analysis comparison.
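To help structure that comparison, here is a worked arithmetic sketch in Python. Every figure is a hypothetical placeholder rather than DataCore data; substitute your own hardware quotes, licensing and operating costs.

```python
# Worked arithmetic only: all figures below are hypothetical placeholders, not vendor data.
YEARS = 5

def tco(capex, opex_per_year, years=YEARS):
    """Simple TCO model: up-front spend plus recurring costs over the period."""
    return capex + opex_per_year * years

traditional = tco(capex=250_000, opex_per_year=60_000)        # hypothetical array refresh
software_defined = tco(capex=180_000, opex_per_year=45_000)   # hypothetical SDS on commodity hardware

print(f"Traditional storage {YEARS}-year TCO:    ${traditional:,.0f}")
print(f"Software-defined {YEARS}-year TCO:       ${software_defined:,.0f}")
print(f"Difference over {YEARS} years:           ${traditional - software_defined:,.0f}")
```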