Scality 2017 Predictions: Multi-Cloud, SDS and Flash

VMblog Predictions 2017

Virtualization and Cloud executives share their predictions for 2017.  Read them in this 9th annual VMblog.com series exclusive.

Contributed by Giorgio Regni, co-founder and CTO, Scality

Storage Strategy 2017: Multi-Cloud, SDS and Flash

As winter approaches and we look ahead to 2017, there's no time for hibernation. Those of us working in storage technology know that data doesn't take a vacation. It continues to accumulate, every second of every day, at petabyte scale. An advanced, comprehensive data storage strategy is a fundamental enabler of every digital business. As the need for faster, bigger, cheaper, more reliable storage intensifies alongside the explosive growth of Big Data, we see large-capacity object and cloud storage solutions becoming a mainstream key to success in 2017.

It's time to educate storage teams and personnel now on the mainstream emergence of software defined storage (SDS), scalable object storage, and specifically the growing, de facto AWS S3 API for object storage, which will soon be as critical for storage-centric applications as NFS was for the NAS generation. Get your teams to start thinking about transformative approaches to storage, identifying business problems and use cases for software defined storage before you hit scale problems: unpredictable costs, burdensome migrations, storage holding back business growth, and downtime and latency issues.
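For teams that haven't touched the S3 API yet, here is a minimal sketch in Python with boto3. The endpoint URL, credentials, and bucket name are placeholders for whatever S3-compatible object store you run, not a specific product's configuration; the point is that the same handful of calls works against AWS S3 and against on-premises S3-compatible systems.

```python
# Minimal S3 API sketch with boto3; endpoint, keys, and bucket are hypothetical.
import boto3

s3 = boto3.client(
    "s3",
    endpoint_url="https://s3.example.internal",  # placeholder on-premises S3-compatible endpoint
    aws_access_key_id="ACCESS_KEY",
    aws_secret_access_key="SECRET_KEY",
)

# Store and retrieve an object with the same calls an application would use against AWS S3.
s3.put_object(Bucket="media-archive", Key="2017/report.pdf", Body=b"example payload")
obj = s3.get_object(Bucket="media-archive", Key="2017/report.pdf")
print(obj["ContentLength"])
```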

Optimizing for Multi-Cloud

I'm not the first to say it's a multi-cloud world. In the blink of an eye, we've gone from wondering if "the cloud" is reliable and secure to being overwhelmed by the wealth of cloud infrastructure options at our fingertips. The coming year will be about finding solutions to properly utilize multiple storage clouds for different use cases. Most companies will rely on a mixture of private and public clouds: private for sensitive data and key applications, public for sharing content between users and for taking advantage of public cloud infrastructure features like content delivery networks and automated archiving. In other words, we foresee a strong workflow and lifecycle collaboration between companies' local "on premises" storage and the public cloud services businesses need to leverage. This will be managed via policies and rules, centered on intelligent metadata management.
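As one concrete illustration of that kind of policy-driven lifecycle management, the sketch below uses boto3 to attach an archiving rule to a bucket so the public cloud tiers cold objects automatically. The bucket name, prefix, and 90-day threshold are assumptions chosen for the example, not recommendations.

```python
# Sketch of a lifecycle rule: archive objects under "assets/" after 90 days.
import boto3

s3 = boto3.client("s3")
s3.put_bucket_lifecycle_configuration(
    Bucket="shared-content",  # hypothetical bucket
    LifecycleConfiguration={
        "Rules": [
            {
                "ID": "archive-cold-assets",
                "Filter": {"Prefix": "assets/"},
                "Status": "Enabled",
                # Move objects untouched for 90 days to a cheaper archive tier.
                "Transitions": [{"Days": 90, "StorageClass": "GLACIER"}],
            }
        ]
    },
)
```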

The multiplicity of cloud infrastructure options creates boundless opportunities, and the coming year will certainly see companies taking advantage of hybrid cloud capabilities in creative and unexpected ways. But the mere fact that a company uses multiple public and private clouds does not mean it is reaping the benefits; getting the most performance, scalability, and efficiency out of multi-cloud infrastructure requires innovative ways to handle user authentication and access control, workload provisioning, and automation of routine system management. After all, it's not getting any easier or cheaper to hire qualified IT staff to manage multiple services and vendors.

SDS Automation and Management

Management complexity can be eased by selecting private and public cloud storage systems that can be linked to your central IT services. For example, user identity management can be integrated with corporate directory services such as Microsoft Active Directory through SAML 2.0 compliant systems (e.g., Google for Enterprise or AD FS) to help IT manage end user access to various clouds, delete users as needed, and allow companies to continue using their regular sysadmin tools from a secure central location. Tight, granular control over access to data and cloud services is paramount to cybersecurity, data privacy, and the protection of intellectual property. Insider threats, whether arising from accidental or malicious activity, figure prominently in the risk factors that lead to data breaches and exploitation. When employees circumvent or weaken controls, act against access policies, or leave the company, administrators must be able to act swiftly to terminate access to company data and systems. Regular review of adherence to carefully implemented access policies and controls will prevent "access sprawl", wherein too many individuals have access to valuable data without proper justification. The number of data collections, devices, applications, and services in use by a typical company will continue to mount. Without a proper access management system, risks and vulnerabilities will grow apace. A strong model such as the AWS Identity and Access Management (IAM) service, which provides this type of secure multi-tenancy and user and group management along with granular access control policies, will address the required capabilities.
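To make that model concrete, here is a minimal Python sketch of IAM-style granular access control with boto3: a policy granting one group read-only access to a single bucket prefix, created and then attached to the group. The group, bucket, and policy names are hypothetical; the same policy language is what IAM-compatible object stores implement.

```python
# Sketch: grant the "analysts" group read-only access to one bucket prefix, nothing else.
import json
import boto3

iam = boto3.client("iam")

policy_document = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": ["s3:GetObject", "s3:ListBucket"],
            "Resource": [
                "arn:aws:s3:::finance-archive",            # bucket, for ListBucket
                "arn:aws:s3:::finance-archive/reports/*",  # objects, for GetObject
            ],
        }
    ],
}

policy = iam.create_policy(
    PolicyName="finance-archive-read-only",
    PolicyDocument=json.dumps(policy_document),
)
iam.attach_group_policy(GroupName="analysts", PolicyArn=policy["Policy"]["Arn"])
```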

The need for centralized management does not stop at user access control. Software defined storage (SDS) controllers federate multi-cloud infrastructure, provide a single entry point for all cloud storage repositories, both public and private, and give you full control of your metadata, all while being fully integrated with your enterprise user directory system. SDS controllers will enable enterprises to automate peak demand capacity management, for example keeping storage workloads on-premises capped at one petabyte with the option to deploy additional capacity as needed in a public cloud like AWS. Storage admins spend the vast majority of their time moving storage from one place to another to accommodate ever-changing performance and capacity needs.
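A real SDS controller would enforce such a rule internally, but the sketch below illustrates the idea: keep writes on premises until a one-petabyte cap is reached, then spill new data to a public cloud bucket. The endpoints, bucket names, and the used_bytes helper are hypothetical and deliberately naive; at real scale the controller would track capacity itself rather than listing objects.

```python
# Hedged sketch of a capacity-spillover policy; endpoints and buckets are placeholders.
import boto3

ON_PREM_CAP_BYTES = 10**15  # 1 PB cap for the on-premises tier

on_prem = boto3.client("s3", endpoint_url="https://s3.example.internal")  # hypothetical endpoint
public = boto3.client("s3")  # AWS S3


def used_bytes(client, bucket):
    """Sum object sizes in a bucket (illustrative only; too slow at petabyte scale)."""
    total = 0
    for page in client.get_paginator("list_objects_v2").paginate(Bucket=bucket):
        total += sum(obj["Size"] for obj in page.get("Contents", []))
    return total


def put_with_spillover(key, body):
    # Keep data on premises until the cap is reached, then burst to the public cloud.
    if used_bytes(on_prem, "primary") < ON_PREM_CAP_BYTES:
        on_prem.put_object(Bucket="primary", Key=key, Body=body)
    else:
        public.put_object(Bucket="overflow-archive", Key=key, Body=body)
```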

SDS controllers will automate and optimize the availability, speed, and integrity of data for a wider variety of business models and use cases. These cases include the use of cloud compute for resource-intensive operations such as transcoding for adaptive video streaming and more efficient processing of huge data clusters with MapReduce. With SDS controller technology, for example, enterprises will be better able to take advantage of compelling cloud features like content delivery networks and massive compute capacity bursts.

As this next-generation intelligent SDS controller technology emerges over the next year, we will finally realize many of the promised yet elusive benefits of multi-cloud deployments: increased hardware and vendor independence; flexibility in customization; optimized performance and minimized latency; and sustainable reductions in TCO.

Flash Forward

The heady buzz around virtualization and cloud technology naturally leads us to focus on the ability of software to advance computing into new realms. Yet hardware still plays an essential role in infrastructure, especially when it comes to costs related to power, cooling, and real estate. Flash storage, so essential to the global success of personal digital devices, is now set to make a more widespread impact in high-capacity storage arrays. As the price of flash storage technology decreases, its advantages of low latency, high throughput, and very low power consumption will be welcomed in enterprise storage systems and data centers, where reducing overall equipment cost, physical footprint, maintenance requirements, and energy use are paramount. In the last year alone, we've seen remarkable increases in flash storage density, with options including 500 terabytes in a single 3U chassis, enabling multiple petabytes in a single rack. I think the next year will see wider adoption as competing offerings come to market and IT teams recognize and begin to act on the lowered costs (below $1/GB, by many claims) and proven reliability. As they identify vendors and solutions with demonstrated performance, the compelling benefits of all-flash arrays will give them the confidence to make the leap.
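As a rough back-of-the-envelope check of that density figure, assuming a standard 42U rack (the rack height is an assumption for the example, not a figure from this article):

```python
# 500 TB per 3U enclosure in a 42U rack.
TB_PER_ENCLOSURE = 500
RACK_UNITS = 42
ENCLOSURE_UNITS = 3

enclosures_per_rack = RACK_UNITS // ENCLOSURE_UNITS            # 14 enclosures
petabytes_per_rack = enclosures_per_rack * TB_PER_ENCLOSURE / 1000
print(petabytes_per_rack)  # 7.0 PB, i.e. multiple petabytes in a single rack
```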

A New Era

As enterprises build storage infrastructure to answer the challenges of big data analytics, online entertainment, and instantaneous, universal access to the "memory of mankind," we are quickly abandoning traditional storage solutions in favor of storage technology that can quickly and efficiently scale out, ensure hardware independence and flexibility, avoid vendor lock-in, and meet mission critical reliability requirements. The advances in object and cloud storage, SDS controllers, and flash arrays, just to name a few, will bring us into a whole new way of deploying and managing storage over the next few years. If your business runs on data, it depends on storage. The data explosion waits for no one: if you aren't formulating and building on a next-generation enterprise storage strategy in 2017, you won't be ready for the petabyte era.

##

About the Author

As a Scality co-founder and our Chief Technology Officer, Giorgio Regni oversees the company's development, research, and product management. He is a recognized expert in distributed infrastructure software at web scale, and has authored multiple US patents for distributed systems. Prior to Scality, Giorgio was a co-founder and VP of Engineering at Bizanga, where he developed anti-abuse software that still protects hundreds of millions of mailboxes across the world.

Giorgio holds an engineering degree in computer science from INSA (Institut National des Sciences Appliquées) in Toulouse, France. He is also an accomplished hacker and developer. In his spare time, Giorgio has created mobile phone applications that are currently in use by an installed base of more than 2 million people. On an artistic note, Giorgio is a skilled electric guitar player, drawing his inspiration from guitar legends like Joe Satriani and Steve Vai.


Published Friday, November 18, 2016 8:02 AM by David Marshall