HiveIO 2019 Predictions: Setting the Foundation for an Intelligent Data Center

Industry executives and experts share their predictions for 2019.  Read them in this 11th annual VMblog.com series exclusive.

Contributed by Toby Coleridge, Vice President of Product at HiveIO

Setting the Foundation for an Intelligent Data Center

Throughout the past year, businesses have gained further confidence in hyperconverged technology and industry sales have skyrocketed. Mergers and acquisitions continued to evolve the industry dynamic, while some fresh players like HiveIO have emerged to turn legacy IT on its head. In 2019 and beyond, we're expecting to see increasing innovations around cloud orchestration, containers, and resource utilization, driving further value to data center operations.

Orchestrating between clouds

Organizations have woken up to the fact that 'cloud' isn't all or nothing; it forms part of an overall IT strategy, and its importance varies widely from one organization to another depending on their business and IT requirements. It is not as binary as saying, "we're moving to the cloud." There are a variety of management factors that affect the movement and performance of data as digital operations become more prevalent. We see this starting to unfold in two ways:

1.  Numerous vendors are building multi-cloud platform management services.

2.  Many organizations want the softer benefits the cloud provides, such as scalable resource management and consumption-based pricing, but under their own control in their own data centers.

Moving into the next year, we'll see the focus shift toward applications and workloads that move seamlessly between clouds. This will intensify the conversation around compatibility and require more IT leaders to evaluate their current data center solutions to ensure they have the flexibility to adapt to larger changes within our industry. In 2019, there will be a heavier focus on combining on-premises, hyperconverged, and cloud technologies, giving organizations the ability to scale more easily and streamline processes.

Compute building block continues to shrink

Throughout the past 10 years, the industry has moved from physical to virtual servers, continuously breaking compute power down into smaller building blocks. As scale requirements change and DevOps becomes mainstream, the next step in this progression, containers, has gone mainstream over the last 24 months. The compute block can be broken down further still, going beyond containers into the Serverless world. It is early days for Serverless, but enterprise application architectures are rapidly evolving, and we are starting to see the next generation of these architectures take a cue from some of the largest content providers, such as Netflix and Facebook.

Looking ahead to 2019, enterprises will adopt containers as the go-to platform for delivering applications and moving data between private and public clouds. The benefits of this approach, sketched in code below, include:

  • Reducing complexity through automation
  • Better resource scheduling and increased flexibility of computing capabilities
  • Policy-based optimization
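
To make the resource scheduling and policy points concrete, here is a minimal, hypothetical sketch in Python (not HiveIO or any vendor's code) of a policy-based placement step: each container's CPU and memory request is placed on the least-loaded host that can satisfy it. The host names, capacities, and the policy itself are illustrative assumptions.

    from dataclasses import dataclass, field
    from typing import List, Optional

    @dataclass
    class Container:
        name: str
        cpu: float      # requested CPU cores
        mem_gb: float   # requested memory in GB

    @dataclass
    class Host:
        name: str
        cpu_free: float
        mem_free_gb: float
        placed: List[str] = field(default_factory=list)

    def place(container: Container, hosts: List[Host]) -> Optional[Host]:
        """Policy: choose the host with the most free CPU that still fits the request."""
        candidates = [h for h in hosts
                      if h.cpu_free >= container.cpu and h.mem_free_gb >= container.mem_gb]
        if not candidates:
            return None  # no capacity; a real platform might power on another node
        best = max(candidates, key=lambda h: h.cpu_free)
        best.cpu_free -= container.cpu
        best.mem_free_gb -= container.mem_gb
        best.placed.append(container.name)
        return best

    hosts = [Host("node-1", cpu_free=8, mem_free_gb=32),
             Host("node-2", cpu_free=4, mem_free_gb=16)]
    for c in [Container("web", 2, 4), Container("db", 4, 16), Container("cache", 1, 2)]:
        target = place(c, hosts)
        print(f"{c.name} -> {target.name if target else 'unschedulable'}")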

Hyperconverged technology has already incorporated some aspects of containers and even some components of Serverless. Looking beyond 2019, applications will continue to evolve, and Serverless will become the new standard as IT leaders look to further increase resource utilization: make the building blocks smaller, fully stateless, and less costly.
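
As a rough illustration of what "fully stateless" means for these smaller building blocks, the sketch below shows a generic serverless-style handler in Python: all state arrives in the event and leaves in the response, so a platform can create and destroy instances freely. The event shape and function name are assumptions for illustration only, not tied to any specific serverless platform.

    import json

    # A hypothetical serverless-style handler: it receives an event, computes a
    # result, and returns it. It keeps no state between invocations, so any number
    # of copies can run in parallel and be created or destroyed on demand.
    def handle_event(event: dict) -> dict:
        order = event.get("order", {})
        total = sum(item["price"] * item["qty"] for item in order.get("items", []))
        return {"order_id": order.get("id"), "total": round(total, 2)}

    if __name__ == "__main__":
        sample = {"order": {"id": "A-1001",
                            "items": [{"price": 9.99, "qty": 2}, {"price": 4.50, "qty": 1}]}}
        print(json.dumps(handle_event(sample)))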

Intelligent Data Center

Machine learning (ML) is advancing rapidly, and enterprises are starting to adopt it. As a result, we expect more applications and data center management tools to begin integrating ML features, reducing the technical expertise required to manage and maintain the data center. Operating a traditional data center is still a very manual process. For example, IT admins still typically decide which servers to carry out maintenance on, the order in which they are updated, how they are updated, and when to schedule the work. For a large estate, this takes weeks if not months, and it is down to luck whether every server is treated in the same way. In the future, ML will predict and manage resource utilization in real time. It will automatically update servers when they become free and deal with dependencies, removing a huge burden from the IT team and freeing it up to drive innovation for the business.
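
As a minimal sketch of that idea (illustrative Python only, not a HiveIO feature or API), the example below forecasts each server's near-term load with a simple moving average of recent utilization samples and queues for maintenance only the servers predicted to be idle. The hosts, samples, and threshold are made up; a production system would use a trained model and real telemetry.

    from statistics import mean

    # Hypothetical recent CPU-utilization samples (%) per server.
    recent_utilization = {
        "host-a": [12, 9, 15, 10, 8],
        "host-b": [78, 82, 75, 80, 85],
        "host-c": [35, 30, 28, 33, 31],
    }

    IDLE_THRESHOLD = 20.0  # forecast % CPU below which maintenance is considered safe

    def forecast(samples):
        """Naive forecast: moving average of recent samples.
        A real system would substitute a trained ML model here."""
        return mean(samples)

    # Queue the servers predicted to be idle, least-loaded first, so updates roll
    # through the estate automatically instead of being scheduled by hand.
    maintenance_queue = sorted(
        (host for host, samples in recent_utilization.items()
         if forecast(samples) < IDLE_THRESHOLD),
        key=lambda host: forecast(recent_utilization[host]),
    )
    print("Maintenance order:", maintenance_queue)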

The key to optimizing data center utilization is to assign workloads to infrastructure in an agile way, based on usage patterns and the behavior of applications. When ML begins enhancing operations, a data center with a large number of servers will automatically determine which virtual servers, desktops, or containers need to be turned on to meet the organization's expected demand at any given time. This intelligent resource scheduling saves an organization considerable OPEX through optimized power and cooling across data centers, whilst freeing up resources for other applications and workloads to use.
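
A hedged sketch of that scheduling step, with made-up numbers: given an hourly demand forecast and an assumed per-node capacity, the snippet below computes how many nodes need to be powered on for each hour. The capacity figure and forecast values are purely illustrative assumptions, not measurements from any real deployment.

    import math

    # Hypothetical hourly forecast of concurrent virtual desktops (e.g., produced
    # by a model trained on historical usage patterns).
    predicted_desktops_by_hour = {8: 120, 9: 450, 12: 380, 18: 90, 22: 25}

    DESKTOPS_PER_HOST = 60   # assumed capacity of one hyperconverged node
    MIN_HOSTS = 1            # always keep at least one node online

    def hosts_needed(demand: int) -> int:
        """Translate predicted demand into the number of nodes to power on."""
        return max(MIN_HOSTS, math.ceil(demand / DESKTOPS_PER_HOST))

    for hour, demand in sorted(predicted_desktops_by_hour.items()):
        print(f"{hour:02d}:00 -> power on {hosts_needed(demand)} node(s) for ~{demand} desktops")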

HiveIO has set the foundation for the intelligent data center by building its Hive Fabric solution around an AI- and ML-ready Message Bus. Hive Fabric is capable of hosting a variety of workloads across the data center, including VMs, containers, and Serverless platforms, and is poised to be the cornerstone of tomorrow's data center.

To learn more about HiveIO, visit www.hiveio.com.

##

About the Author

Toby Coleridge is a technology executive and product leader with more than 15 years of experience building industry-leading cloud and virtualization products. He has a wide variety of experience in both global organizations and startups. He is currently the VP of Product at HiveIO.

Published Monday, December 03, 2018 7:22 AM by David Marshall