DDN 2019 Predictions: Tech Evolution Will Continue to Push the Boundaries

Industry executives and experts share their predictions for 2019.  Read them in this 11th annual VMblog.com series exclusive.

Contributed by Kurt Kuckein, Sr. Director of Marketing, DDN Storage

Tech Evolution Will Continue to Push the Boundaries

2019 looks to be an exciting year. With the emergence of large-scale AI and machine learning environments, the advancement of granular data management capabilities, the move toward cloud-like data management models and autonomous storage, and the accelerated adoption of at-scale flash deployments and NVMe, the existing market is poised for an evolution that will continue to push the boundaries. As the team at DDN looks to 2019, we are focused on the trends and technologies that will handle these scenarios successfully and efficiently. Here are our top five predictions for 2019:

1.     The emergence of large-scale AI and Machine Learning deployments.  The past couple of years have seen many organizations trialing machine learning algorithms on small data sets, implementing small but growing deployments, and planning at-scale infrastructures. We have already seen successful projects emerge in predictive analytics for chronic disease management, workflow enhancement in radiology, and administrative and financial use cases that bring operational efficiency to these industries.  2019 will be the year that large-scale AI and machine learning environments emerge en masse, with organizations moving from deployments of 4, 8 or 16 GPUs to deployments that range from hundreds to thousands of GPUs. At-scale AI and machine learning environments pose unique challenges, including analytical workloads that are both read-intensive and random, and the fact that the caching techniques that work in testing do not scale to hundreds of terabytes of data in the flash cache or petabytes of data on the backend. Successful at-scale AI and machine learning infrastructures will require storage systems that scale massively and transform the I/O with flash cache layers that sit between the application and the file system (a simplified sketch of this caching idea appears after this list).

2.     The advancement of granular data management capabilities for at-scale data systems and private clouds. At-scale data systems and private clouds are increasingly supporting diverse types of data, such as AI and deep learning workflows, that require advanced, granular data management capabilities. These data management solutions will need to deliver simplicity, allow for more sophisticated tagging and searching of the data itself, and provide insight into the types of data that customers have within their systems and within their clouds (a toy example of such tagging and searching follows this list). Given the sheer amount of data required for deep learning projects, especially as businesses begin to focus on deploying AI models that work for real-world problems, having data from disparate sources clearly defined, labeled and discoverable will allow nimble companies to move quickly.  Equally important is the efficient storage of long-term data - information that might not appear to be of value today, but in six months or two years may provide a unique differentiation.

3.     The move toward cloud-like data management models for on-premise deployments with transparent mobility to public cloud. The need for better, more granular data management capabilities for at-scale data systems and private clouds will drive the adoption of "cloud" models for on-premise environments. Areas such as security and multi-tenancy are becoming increasingly important as organizations want to allocate specific parts of data collections to users, groups or business units - with or without collaborative access - and to provide services such as quality of service and guaranteed performance, especially for latency-critical applications (a small tenant-provisioning sketch follows this list). Multi-tenant capabilities have always underpinned cloud and cloud-native models, but they are new, and needed, for data-intensive, on-premise workloads.  Equally important will be the portability of workloads - as public cloud systems become more capable of delivering high-performance compute and storage, on-premise systems need to provide on-ramps that can optimize data placement for the most efficient operations.

4.     Major steps towards delivering autonomous storage. Storage systems themselves will also benefit from the refinement of algorithms and analytics, starting to implement machine learning so that AI plays an increasing role in supporting and even automating decisions (a toy rebalancing example follows this list).  Storage vendors that are already gathering data about the data on their systems will be better placed to take advantage of this.  Additionally, storage systems that already integrate tightly with VMs, containers and the applications found in them will be able to rebalance automatically to ensure those applications run optimally and remain fully protected against failures.  The key is for systems to go beyond capacity and performance and delve into the needs and characteristics of the applications run on those systems.  This combination of capabilities will allow IT organizations to concentrate on the value found in their data and spend less (or no) time struggling to optimize cost, lower risk, or deliver performance manually.

5.     Accelerated adoption of at-scale flash deployments and the ascendancy of NVMe.  The adoption of scale-out storage architectures that manage flash on demand and scale performance as needed will accelerate, as flash provides an optimal means of balancing performance and cost in long-term storage and as the price of flash storage drops significantly in the first half of 2019.  NVMe will become the default media for tier-1 applications (low latency, high IOPS and density - what's not to like?), but NVMe-oF will continue to lag as a networking standard as more established RDMA fabrics such as InfiniBand and RoCE continue to thrive and meet performance demands.
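
To make the flash cache idea in prediction 1 a little more concrete, here is a minimal, purely illustrative read-through cache in Python. It is only a sketch, not DDN's implementation; the class name, the block-level interface and the simple LRU policy are all assumptions chosen to show how a cache layer sitting between an application and a backend file system absorbs repeated random reads.

import collections

class FlashCache:
    """Illustrative read-through cache: recently read blocks are kept on fast media."""

    def __init__(self, capacity_blocks, backend_read):
        self.capacity = capacity_blocks
        self.backend_read = backend_read          # callable: block_id -> bytes (the slow backend)
        self.blocks = collections.OrderedDict()   # LRU order, oldest entry first

    def read(self, block_id):
        if block_id in self.blocks:               # cache hit: serve from the flash layer
            self.blocks.move_to_end(block_id)
            return self.blocks[block_id]
        data = self.backend_read(block_id)        # cache miss: fetch from the backend store
        self.blocks[block_id] = data
        if len(self.blocks) > self.capacity:      # evict the least recently used block
            self.blocks.popitem(last=False)
        return data

# Usage: wrap a pretend backend and replay a random-read pattern.
backend = {i: bytes(4096) for i in range(1000)}
cache = FlashCache(capacity_blocks=128, backend_read=lambda b: backend[b])
for b in [3, 17, 3, 999, 17]:
    cache.read(b)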
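
For prediction 2, a toy Python example of the kind of tagging and searching described above. The paths, tag names and values are hypothetical; a real data management platform would index far richer metadata at much larger scale, but the principle - label data from disparate sources so it is discoverable - is the same.

datasets = [
    {"path": "/lake/radiology/ct_2018",  "tags": {"modality": "CT",  "year": 2018, "labeled": True}},
    {"path": "/lake/claims/q3_2018",     "tags": {"domain": "finance", "year": 2018, "labeled": False}},
    {"path": "/lake/radiology/mri_2017", "tags": {"modality": "MRI", "year": 2017, "labeled": True}},
]

def search(index, **criteria):
    """Return the datasets whose tags match every key/value pair given."""
    return [d for d in index if all(d["tags"].get(k) == v for k, v in criteria.items())]

# Find labeled 2018 data suitable for a training run.
print(search(datasets, labeled=True, year=2018))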
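
For prediction 3, a rough sketch of per-tenant capacity and performance guarantees expressed as a small Python check. Tenant names, quotas and IOPS floors are invented for illustration; an actual multi-tenant storage system would enforce such limits in the data path rather than in a helper function.

tenants = {
    "genomics-lab": {"quota_tb": 500, "min_iops": 200_000, "shared_with": ["bioinformatics"]},
    "finance-bi":   {"quota_tb": 80,  "min_iops": 50_000,  "shared_with": []},
}

def can_provision(tenants, total_capacity_tb, total_iops):
    """Check that the guaranteed quotas and IOPS floors fit within the system."""
    used_tb = sum(t["quota_tb"] for t in tenants.values())
    used_iops = sum(t["min_iops"] for t in tenants.values())
    return used_tb <= total_capacity_tb and used_iops <= total_iops

print(can_provision(tenants, total_capacity_tb=1000, total_iops=400_000))  # True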
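
For prediction 4, a deliberately simple Python sketch of an automated rebalancing decision. A real autonomous storage system would drive such decisions with learned models over much richer telemetry; the single hard-coded rule, the volume names and the latency numbers here are all hypothetical.

def plan_rebalance(volumes, latency_slo_ms=2.0):
    """Suggest promoting any volume that is missing its latency SLO to a faster tier."""
    moves = []
    for vol in volumes:
        if vol["p99_latency_ms"] > latency_slo_ms and vol["tier"] != "nvme":
            moves.append({"volume": vol["name"], "from": vol["tier"], "to": "nvme"})
    return moves

telemetry = [
    {"name": "vmfs-prod-01", "tier": "sas-ssd", "p99_latency_ms": 4.8},
    {"name": "vmfs-dev-02",  "tier": "sas-ssd", "p99_latency_ms": 1.1},
]
print(plan_rebalance(telemetry))  # [{'volume': 'vmfs-prod-01', 'from': 'sas-ssd', 'to': 'nvme'}]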

##

About the Author

 

Kurt Kuckein is the Director of Marketing for DDN Storage and is responsible for linking DDN's innovative storage solutions with a customer focused message to create greater awareness and advocacy. In this role, Kurt oversees all marketing aspects including brand development, digital marketing, product marketing, customer relations, and media and analyst communications. Prior to this role, Kurt served as Product Manager for a number of DDN solutions since joining the company in 2015. Previous roles include Product Management and Product Marketing positions at EMC and SGI. Kurt earned an MBA from Santa Clara University and a BA in Political Science and Philosophy from University of San Diego.

Published Wednesday, January 09, 2019 7:34 AM by David Marshall