White Papers Search Results
Showing 1 - 7 of 7 white papers, page 1 of 1.
Storage Playbook: Essential Enterprise Storage Concepts
Storage can seem like a confusing topic for the uninitiated, but a little bit of knowledge can go a long way. It is important to understand the basic concepts of storage technologies, performance, and configuration before diving into more advanced practices.

In this e-book, we’ll cover storage basics, storage performance and capacity, forecasting and usage, and storage best practices.
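
To make the forecasting idea concrete, here is a minimal sketch (not from the e-book) that estimates how many months of headroom remain on an array, assuming linear growth at an observed monthly rate; the figures are hypothetical.

    # Hypothetical illustration: forecasting storage capacity runway.
    # The numbers and the linear-growth model are assumptions for the
    # sketch, not figures from the e-book.

    def months_until_full(capacity_tb: float, used_tb: float,
                          monthly_growth_tb: float) -> float:
        """Estimate months remaining before an array fills, assuming
        linear growth at the observed monthly rate."""
        if monthly_growth_tb <= 0:
            return float("inf")
        return (capacity_tb - used_tb) / monthly_growth_tb

    # Example: a 100 TB array at 62 TB used, growing ~2.5 TB/month.
    print(f"{months_until_full(100, 62, 2.5):.1f} months of headroom")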

Cloud Migration Planning Guide
Effective migration planning needs to start with evaluating the current footprint to determine how the move will affect all functional and non-functional areas of the organization. Having a framework for assessment will streamline migration efforts, whether an enterprise plans to undertake this project on its own or with the help of a cloud service provider. HyperCloud Analytics provides intelligence backed by 400+ million benchmarked data points to enable enterprises to make the right choices for the organization.

Most enterprises underestimate the planning process; they do not spend sufficient time understanding the cloud landscape and the different options available. While there are tools at hand to assist the implementation and the validation phases of the migration, planning is where all the crucial decisions need to be made.

Bad planning will lead to failed migrations. Challenges that enterprises often grapple with include:

  • Visibility and the ability to compile an inventory of their existing on-premises VMware resources
  • Cherry-picking workloads and applications that are cloud-ready
  • Right-sizing for the public cloud
  • A financial assessment of what the end state will look like

HyperCloud Analytics provides intelligence backed by 400+ million benchmarked data points to enable enterprises to make the right choices for the organization. HyperCloud’s cloud planning framework provides automation for four key stages that enterprises should consider as they plan their migration projects.

Enterprises get automated instance recommendations and accurate cost forecasts that take careful account of their application requirements (bandwidth, storage, security, etc.). Multiple assessments can be run across the different cloud providers to understand application costs post-migration.
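
As a rough illustration of that kind of multi-cloud assessment, the sketch below compares the monthly cost of one workload across providers. The instance names and hourly prices are assumptions for the example; HyperCloud’s actual benchmark data and recommendation logic are not shown here.

    # Hypothetical sketch of a multi-cloud cost assessment. Instance
    # names and hourly prices are made up for illustration.

    HOURS_PER_MONTH = 730

    # (provider, recommended instance, assumed $/hour) for one workload
    candidates = [
        ("aws",   "m5.xlarge",     0.192),
        ("azure", "D4s_v3",        0.192),
        ("gcp",   "n2-standard-4", 0.194),
    ]

    for provider, instance, hourly in candidates:
        monthly = hourly * HOURS_PER_MONTH
        print(f"{provider:6} {instance:16} ~${monthly:,.2f}/month")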

Download our whitepaper to learn more about how you can build high-confidence, accurate plans with detailed cloud bills and cost forecasts, while expediting your cloud migrations.

Top 10 strategies to manage cost and continuously optimize AWS
The great cloud migration has upended decades of established architecture patterns, operating principles, and governance models. Without any controls in place, cloud spend inevitably rises faster than anticipated and often gets overlooked until it gets out of control. With its granular and short-term billing cycles, the cloud requires a degree of financial discipline that is unfamiliar to most traditional IT departments. Faced with having to provide short-term forecasts and justify them against actual spend, IT departments need to evolve their governance models to support these new patterns.

The public cloud has unleashed an unprecedented wave of creativity and agility for the modern enterprise. The great cloud migration has upended decades of established architecture patterns, operating principles, and governance models. However, without any replacement for these traditional controls, cloud spend inevitably rises faster than anticipated; if it is not addressed early in the cycle, it is often overlooked until it gets out of control.

Over the course of a few decades, we created a well-established model of IT spending: to achieve economies of scale, procurement is centralized and typically happens at three- to five-year intervals, with all internal customers forecasting and pooling their needs. This creates a natural tendency for individual project owners to overprovision resources as insurance against unexpected demand. As a result, the corporate data center today is where the two famous laws of technology meet:

  • Moore’s Law ensures that capacity increases to meet demand
  • Parkinson’s Law ensures that demand rises to meet capacity

With its granular and short-term billing cycles, the cloud requires a degree of financial discipline that is unfamiliar to most traditional IT departments. Faced with having to provide short-term forecasts and justify them against actual spend, they need to evolve their governance models to support these new patterns.

Having a well-thought-out AWS strategy is crucial to your long-term cloud gains. Taking the time to understand and pick the right instances for your apps is well worth the effort, as it will directly impact your AWS pricing and bill.
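
As a toy illustration of right-sizing, the sketch below picks the cheapest instance type that satisfies a workload’s vCPU and memory needs. The catalog is a small hypothetical subset with assumed on-demand prices, not live AWS data.

    # A minimal right-sizing sketch: pick the cheapest instance type
    # that satisfies a workload's vCPU and memory needs. The catalog
    # and prices are assumptions for illustration.

    catalog = [
        # (instance type, vCPUs, memory GiB, assumed $/hour)
        ("t3.medium",  2,  4,  0.0416),
        ("m5.large",   2,  8,  0.096),
        ("m5.xlarge",  4, 16,  0.192),
        ("m5.2xlarge", 8, 32,  0.384),
    ]

    def right_size(need_vcpu: int, need_mem_gib: int):
        fits = [row for row in catalog
                if row[1] >= need_vcpu and row[2] >= need_mem_gib]
        return min(fits, key=lambda row: row[3]) if fits else None

    # A workload peaking at 3 vCPUs / 10 GiB lands on m5.xlarge, not
    # the 2xlarge someone might provision "to be safe".
    print(right_size(3, 10))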

We hope these ten strategies help inform and support you as you navigate the sometimes-turbulent waters of cloud transition. They are here for you to consult and rely on as best practices and cost-saving opportunities.

Given the virtually uncountable number of combinations, we have tried to identify the most practical and reliable ways to optimize your deployment at all stages and empower your end users, while insulating them from the temptations, assumptions, and habits that can lead to unpleasant surprises when the bill arrives.

Mastering vSphere – Best Practices, Optimizing Configurations & More
Do you regularly work with vSphere? If so, this free eBook is for you. Learn how to leverage best practices for the most popular features contained within the vSphere platform and boost your productivity using tips and tricks learned directly from an experienced VMware trainer and highly qualified professional. In this eBook, vExpert Ryan Birk shows you how to master: Advanced Deployment Scenarios using Auto-Deploy, Shared Storage, Performance Monitoring and Troubleshooting, Host Network configuration, and more.

If you’re here to gather some of the best practices surrounding vSphere, you’ve come to the right place! Mastering vSphere: Best Practices, Optimizing Configurations & More, the free eBook authored by me, Ryan Birk, is the product of many years working with vSphere as well as teaching others in a professional capacity. In my extensive career as a VMware consultant and teacher (I’m a VMware Certified Instructor) I have worked with people of all competence levels and been asked hundreds - if not thousands - of questions on vSphere. I was approached to write this eBook to put that experience to use to help people currently working with vSphere step up their game and reach that next level. As such, this eBook assumes readers already have a basic understanding of vSphere and will cover the best practices for four key aspects of any vSphere environment.

The best practices covered here focus largely on management and configuration solutions, so they should remain relevant for quite some time. That said, things are constantly changing in IT, so I would always recommend obtaining the most up-to-date information from VMware KBs and official documentation, especially regarding specific versions of tools and software updates. This eBook is divided into several sections, and although I would advise reading the whole eBook as most elements relate to others, you might want to focus on a certain area you’re having trouble with. If so, jump to the section you want to read about.

Before we begin, I want to note that in a VMware environment, it’s always best to try to keep things simple. Far too often I have seen environments thrown off track by trying to do too much at once. I try to live by the mentality of “keeping your environment boring” – in other words, keeping your host configurations the same, storage configurations the same, and network configurations the same. I don’t mean duplicate IP addresses, but the hosts need identical port groups, access to the same storage networks, etc. Consistency is the name of the game and is key to solving unexpected problems down the line. Furthermore, it enables smooth scalability: when you move from a single-host configuration to a cluster configuration, having the same configurations will make live migrations and high availability far easier to configure without having to significantly rework the entire infrastructure. Now that the scene has been set, let’s get started!
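
As a simple illustration of that consistency check, the sketch below compares each host’s port groups and storage networks against a reference host and reports drift. The inventory is hard-coded and hypothetical; in practice you would pull it from vCenter (for example, with PowerCLI or pyVmomi).

    # A minimal sketch of the "keep it boring" consistency check:
    # compare each host's port groups and storage networks against a
    # reference host and report drift. Host names and values are
    # hypothetical.

    hosts = {
        "esx01": {"portgroups": {"Mgmt", "vMotion", "VM-Prod"},
                  "storage":    {"iSCSI-A", "iSCSI-B"}},
        "esx02": {"portgroups": {"Mgmt", "vMotion", "VM-Prod"},
                  "storage":    {"iSCSI-A", "iSCSI-B"}},
        "esx03": {"portgroups": {"Mgmt", "VM-Prod"},   # missing vMotion
                  "storage":    {"iSCSI-A"}},          # missing iSCSI-B
    }

    reference = hosts["esx01"]
    for name, cfg in hosts.items():
        for key in ("portgroups", "storage"):
            missing = reference[key] - cfg[key]
            if missing:
                print(f"{name}: missing {key}: {sorted(missing)}")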

How Data Temperature Drives Data Placement Decisions and What to Do About It
In this white paper, learn (1) how the relative proportion of hot, warm, and cooler data changes over time, (2) new machine learning (ML) techniques that sense the cooling temperature of data throughout its half-life, and (3) the role of artificial intelligence (AI) in migrating data to the most cost-effective tier.

The emphasis on fast flash technology concentrates much attention on hot, frequently accessed data. However, budget pressures preclude consuming such premium-priced capacity when the access frequency diminishes. Yet many organizations do just that, unable to migrate effectively to lower cost secondary storage on a regular basis.
In this white paper, explore:

  • How the relative proportion of hot, warm, and cooler data changes over time
  • New machine learning (ML) techniques that sense the cooling temperature of data throughout its half-life
  • The role of artificial intelligence (AI) in migrating data to the most cost-effective tier
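
As a back-of-the-envelope illustration of the half-life idea, the sketch below scores data temperature with a simple exponential-decay model and maps the score to a tier. The half-life, thresholds, and tier names are assumptions for the example; the ML techniques in the paper are more sophisticated.

    # A minimal half-life temperature sketch. The decay model,
    # half-life, and tier thresholds are illustrative assumptions.

    import math

    def temperature(accesses_per_day: float, days_since_access: float,
                    half_life_days: float = 30.0) -> float:
        """Access frequency decayed by time since last access."""
        decay = math.exp(-math.log(2) * days_since_access / half_life_days)
        return accesses_per_day * decay

    def tier_for(score: float) -> str:
        if score > 10:
            return "flash (hot)"
        if score > 1:
            return "hybrid (warm)"
        return "object/secondary (cold)"

    for freq, idle in [(50, 1), (20, 60), (5, 90)]:
        score = temperature(freq, idle)
        print(f"freq={freq}/day idle={idle}d -> {score:.2f} -> {tier_for(score)}")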

ESG Report: Verifying Network Intent with Forward Enterprise
This ESG Technical Review documents hands-on validation of Forward Enterprise, a solution developed by Forward Networks to help organizations save time and resources when verifying that their IT networks can deliver application traffic consistently in line with network and security policies. The review examines how Forward Enterprise can reduce network downtime, ensure compliance with policies, and minimize adverse impact of configuration changes on network behavior.
ESG research recently uncovered that 66% of organizations view their IT environments as more or significantly more complex than they were two years ago. The complexity will most likely increase, since 46% of organizations anticipate that their network infrastructure spending will exceed that of 2018 as they upgrade and expand their networks.

Large enterprise and service provider networks consist of multiple device types—routers, switches, firewalls, and load balancers—with proprietary operating systems (OS) and different configuration rules. As organizations support more applications and users, their networks will grow and become more complex, making it more difficult to verify and manage correctly implemented policies across the entire network. Organizations have also begun to integrate public cloud services with their on-premises networks, adding further network complexity to manage end-to-end policies.

With increasing network complexity, organizations cannot easily confirm that their networks are operating as intended when they implement network and security policies. Moreover, when considering a fix to a service-impacting issue or a network update, determining how it may impact other applications negatively or introduce service-affecting issues becomes difficult. To assess adherence to policies or the impact of any network change, organizations have typically relied on disparate tools and material: network topology diagrams, device inventories, vendor-dependent management systems, command-line interface (CLI) commands, and utilities such as “ping” and “traceroute.” The combination of these tools cannot provide a reliable and holistic assessment of network behavior efficiently.

Organizations need a vendor-agnostic solution that enables network operations to automate the verification of network implementations against intended policies and requirements, regardless of the number and types of devices, operating systems, traffic rules, and policies that exist. The solution must represent the topology of the entire network or subsets of devices (e.g., in a region) quickly and efficiently. It should verify network implementations from prior points in time, as well as proposed network changes prior to implementation. Finally, the solution must also enable organizations to quickly detect issues that affect application delivery or violate compliance requirements.
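
As a toy illustration of verifying intent against a network model, the sketch below checks reachability intents against a small topology graph. Forward Enterprise builds a far richer model, down to device forwarding rules; the topology, device names, and intents here are hypothetical.

    # A toy intent-verification sketch: model the network as a directed
    # graph and check reachability intents against it. A dropped path
    # (e.g., a firewall deny) is represented simply as a missing edge.

    from collections import deque

    topology = {
        "edge-rtr": ["core-sw"],
        "core-sw":  ["fw-1", "lb-1"],
        "fw-1":     ["app-sw"],
        "lb-1":     ["app-sw"],
        "app-sw":   [],
    }

    def reachable(src: str, dst: str) -> bool:
        seen, queue = {src}, deque([src])
        while queue:
            node = queue.popleft()
            if node == dst:
                return True
            for nxt in topology.get(node, []):
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
        return False

    intents = [
        ("edge-rtr", "app-sw", True),    # app traffic must get through
        ("app-sw", "edge-rtr", False),   # no path bypassing the firewall
    ]
    for src, dst, expected in intents:
        ok = reachable(src, dst) == expected
        print(f"{src} -> {dst}: {'PASS' if ok else 'FAIL'}")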

Data Protection Overview and Best Practices
This white paper works through data protection processes and best practices using the Tintri VMstore. Tintri technology is differentiated by its level of abstraction: the ability to take every action on individual virtual machines. Hypervisor administrators and staff members associated with architecting, deploying and administering a data protection and disaster recovery solution will want to dig into this document to understand how Tintri can save them a great deal of their management effort and greatly reduce operating expense.

This white paper works through data protection processes and best practices using the Tintri VMstore. Tintri technology is differentiated by its level of abstraction: the ability to take every action on individual virtual machines. In this paper, you’ll:

  • Learn how that level of abstraction greatly increases the precision and efficiency of snapshots for data protection
  • Explore the ability to move between recovery points
  • Analyze the behavior of individual virtual machines
  • Predict the need for additional capacity and performance for data protection

If you’re focused on building a successful data protection solution, this document targets key best practices and known challenges. Hypervisor administrators and staff members associated with architecting, deploying and administering a data protection and disaster recovery solution will want to dig into this document to understand how Tintri can save them a great deal of their management effort and greatly reduce operating expense.
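
As a simple illustration of that per-VM granularity, the sketch below assigns each virtual machine its own protection policy rather than one policy per datastore or LUN. The policy fields, names, and values are assumptions for the example, not Tintri’s actual configuration model.

    # A minimal per-VM protection-policy sketch. Policy fields and
    # values are hypothetical, for illustration only.

    from dataclasses import dataclass

    @dataclass
    class ProtectionPolicy:
        snapshot_every_hours: int
        keep_local: int          # recovery points retained locally
        replicate: bool          # replicate snapshots to a DR system

    policies = {
        "sql-prod-01": ProtectionPolicy(1, 24, replicate=True),
        "web-prod-02": ProtectionPolicy(4, 12, replicate=True),
        "dev-test-17": ProtectionPolicy(24, 3, replicate=False),
    }

    for vm, p in policies.items():
        window = p.snapshot_every_hours * p.keep_local
        print(f"{vm}: RPO {p.snapshot_every_hours}h, "
              f"~{window}h of local recovery points, "
              f"{'replicated' if p.replicate else 'local only'}")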
