Cloud Storage: Where Private and Public Diverge
By Nir Peleg, CTO, Reduxio Systems 

Background

The technological drivers behind private and public clouds overlap to a great degree: efficient use of resources, resource availability, elasticity, and quick, efficient provisioning.

However, the overlap between private and public clouds ends when it comes to storage. Storage differs from compute resources in the sense that there is a state to preserve (storage contents) in addition to the temporal, provisioned state.

In a public cloud environment, there is little sense in offering storage as a single service, given the diversity of customer and application profiles and requirements. This, along with the motivation to better monetize resources, drives public cloud providers to break storage into different pools with different characteristics that satisfy different requirements, and to charge separately for each service. Block storage, file storage, object stores and other services are made available as separate pools with little or no synergy between them. It is rarely the case that a single customer consumes all service types; hence, each customer is encouraged to consume (and pay for) only the particular services that meet their requirements.

Private clouds face a different set of challenges and drivers: an organization that owns, operates and consumes the entire infrastructure is mostly concerned with overall efficiency and with global infrastructure optimization. Manual decisions as to which particular service to consume for an application are redundant overhead, and breaking the storage infrastructure into individually managed pools hurts efficiency and adds management complexity. Infrastructure and application management need to be based on QoS and API requirements, while everything else is better automated and globally optimized.

While public cloud vendors have been successful in deploying a host of services that are in line with their requirements and goals, little has been done in private cloud environments. To date, there is no cloud-native system that fulfills these requirements and answers the challenges private cloud deployments face. Private cloud implementations resort to deploying legacy storage silos that are alien to the cloud or, in some cases, to mimicking public cloud services, which results in suboptimal overall efficiency, poor resource utilization and increased management overhead.

Storage in the Private Cloud

The optimal storage system for the private cloud should be capable of aggregating resources that live in the cloud (SSD, SCM and other locally attached media) together with resources that may live in the datacenter beside the cloud (object store/archive), and transforming them into a software-defined storage and data management system.
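
As an illustration, such aggregation amounts to putting heterogeneous media behind a single placement layer. The Go sketch below outlines the idea; all type and method names here are hypothetical, not an existing API:

    package main

    import "fmt"

    // Tier abstracts any backing medium: local SSD, SCM, or a remote object store.
    type Tier interface {
        Name() string
        Write(key string, data []byte) error
        Read(key string) ([]byte, error)
    }

    // memTier is a toy in-memory stand-in for real media.
    type memTier struct {
        name  string
        store map[string][]byte
    }

    func newMemTier(name string) *memTier {
        return &memTier{name: name, store: make(map[string][]byte)}
    }

    func (t *memTier) Name() string { return t.name }

    func (t *memTier) Write(key string, data []byte) error {
        t.store[key] = data
        return nil
    }

    func (t *memTier) Read(key string) ([]byte, error) {
        d, ok := t.store[key]
        if !ok {
            return nil, fmt.Errorf("%s: key %q not found", t.name, key)
        }
        return d, nil
    }

    // Aggregate presents many tiers as one software-defined storage system.
    type Aggregate struct {
        tiers []Tier // ordered by expected latency: SCM, SSD, object store
    }

    func (a *Aggregate) Read(key string) ([]byte, error) {
        for _, t := range a.tiers {
            if d, err := t.Read(key); err == nil {
                return d, nil
            }
        }
        return nil, fmt.Errorf("key %q not found in any tier", key)
    }

    func main() {
        agg := &Aggregate{tiers: []Tier{
            newMemTier("scm"), newMemTier("ssd"), newMemTier("object-store"),
        }}
        agg.tiers[1].Write("vol1/block42", []byte("payload"))
        d, _ := agg.Read("vol1/block42")
        fmt.Printf("read %q\n", d)
    }

The point of the abstraction is that consumers see one namespace; which medium actually holds a block is the system's decision, not the application's.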

Such a system, if correctly implemented, could provide:

  • High availability, with no single point of failure
  • Advanced data management services, such as fine-grained point-in-time snapshots and clones
  • Smart, fine-grained, perpetual tiering based on real-time heat statistics
  • Data reduction: inline, always-on dedupe and compression (a minimal sketch follows this list)
  • Virtually infinite scalability of capacity, bandwidth and IOPS, while maintaining elasticity
  • Enterprise-grade performance, with latency equivalent to or better than monolithic hardware arrays
  • Support for containerized and legacy (virtualized) workloads
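
To make the data reduction bullet concrete, the sketch below shows the classic inline pattern: fingerprint each block by its content hash, compress it, and store each unique payload only once, so duplicate writes map to an existing fingerprint. This is a generic Go illustration of the technique, not any particular vendor's implementation:

    package main

    import (
        "bytes"
        "compress/zlib"
        "crypto/sha256"
        "encoding/hex"
        "fmt"
    )

    // dedupeStore maps content fingerprints to compressed payloads,
    // writing each unique block at most once.
    type dedupeStore struct {
        blocks map[string][]byte // content hash -> zlib-compressed block
    }

    func newDedupeStore() *dedupeStore {
        return &dedupeStore{blocks: make(map[string][]byte)}
    }

    // Write returns the block's fingerprint; duplicates cost no extra space.
    func (s *dedupeStore) Write(block []byte) string {
        sum := sha256.Sum256(block)
        fp := hex.EncodeToString(sum[:])
        if _, seen := s.blocks[fp]; !seen {
            var buf bytes.Buffer
            w := zlib.NewWriter(&buf)
            w.Write(block)
            w.Close()
            s.blocks[fp] = buf.Bytes()
        }
        return fp
    }

    func main() {
        s := newDedupeStore()
        a := s.Write([]byte("same block"))
        b := s.Write([]byte("same block")) // dedupes to the same fingerprint
        fmt.Println(a == b, "unique blocks:", len(s.blocks))
    }

Because the fingerprint is computed before the block is persisted, deduplication happens inline rather than as a background scrub.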

With the increasing adoption of Kubernetes, it becomes possible to take advantage of many of its features to implement functions that have traditionally been proprietary to the storage system. These include, among others:

  • Co-locating helper processes
  • Distributing secrets
  • Health checking of software components
  • Replicating software component instances
  • Horizontal auto-scaling
  • Naming and discovery
  • Load balancing
  • Rolling updates
  • Resource monitoring
  • Log access and ingestion
  • Support for introspection and debugging
  • Identity and authorization
  • Flexibility in data representation

Relying on Kubernetes to provide the above functionality to a storage system requires the storage stack to be implemented as a set of containerized services.
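
For instance, health checking comes almost for free once the stack is containerized: each service exposes liveness and readiness endpoints, and the pod spec points its probes at them. A minimal Go sketch, with conventional (but not mandated) endpoint paths:

    package main

    import (
        "log"
        "net/http"
        "sync/atomic"
    )

    var ready atomic.Bool // flipped to true once the storage stack has initialized

    func main() {
        // Liveness: the process is up and able to serve HTTP.
        http.HandleFunc("/healthz", func(w http.ResponseWriter, r *http.Request) {
            w.WriteHeader(http.StatusOK)
        })
        // Readiness: the service has finished recovery and can accept I/O.
        http.HandleFunc("/readyz", func(w http.ResponseWriter, r *http.Request) {
            if ready.Load() {
                w.WriteHeader(http.StatusOK)
                return
            }
            w.WriteHeader(http.StatusServiceUnavailable)
        })
        go func() {
            // ... mount media, replay logs, join the cluster ...
            ready.Store(true)
        }()
        log.Fatal(http.ListenAndServe(":8080", nil))
    }

Kubernetes then restarts a pod whose liveness probe fails and withholds traffic until readiness is reported, logic a traditional array would have to implement itself.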

Data resiliency could be handled by tiers of locally attached media protected by distributed erasure coding. Tiering between local and remote media could be fully automatic, based on real-time statistics and cloud-wide (or at least cluster-wide) heuristics, eliminating the need for manual decisions about data placement.
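
The heat statistics driving such tiering can be as simple as an exponentially decayed access counter per extent; placement then falls out of the score. The Go sketch below illustrates the idea, where the decay half-life and tier thresholds are illustrative assumptions:

    package main

    import (
        "fmt"
        "math"
        "time"
    )

    // extentHeat tracks an exponentially decayed access rate for one extent.
    type extentHeat struct {
        score    float64
        lastSeen time.Time
    }

    const halfLife = 10 * time.Minute // assumed decay half-life

    // Touch records an access, decaying the old score first.
    func (e *extentHeat) Touch(now time.Time) {
        age := now.Sub(e.lastSeen).Seconds()
        e.score = e.score*math.Exp2(-age/halfLife.Seconds()) + 1
        e.lastSeen = now
    }

    // Placement maps a heat score to a tier; thresholds are illustrative.
    func Placement(score float64) string {
        switch {
        case score > 50:
            return "scm"
        case score > 5:
            return "local-ssd"
        default:
            return "object-store"
        }
    }

    func main() {
        var e extentHeat
        now := time.Now()
        for i := 0; i < 20; i++ {
            e.Touch(now.Add(time.Duration(i) * time.Second))
        }
        fmt.Println("score:", e.score, "tier:", Placement(e.score))
    }

Because the score decays continuously, extents cool off and migrate outward on their own, which is what makes the tiering perpetual rather than a scheduled batch job.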

Challenges and Innovation Focus

While much of the required control functionality could be based on mechanisms native to Kubernetes, and much of the data path functionality can be implemented using well-known methods and mechanisms, some innovation is still required to make such a system a reality.

First, the system needs to be broken into components that enable horizontal scaling without adding much network traffic, while allowing independent scaling along different axes (IOPS, bandwidth, capacity). The protocols used to communicate between these components need to be carefully designed to support these goals.
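
One way to reason about such a decomposition is to give each scaling axis its own service boundary. The hypothetical Go interfaces below sketch the boundaries only; no real wire protocol is implied:

    package storage

    // MetadataService scales with IOPS: it resolves logical addresses to
    // content fingerprints without ever touching bulk data.
    type MetadataService interface {
        Resolve(volume string, offset int64) (fingerprint string, err error)
        Map(volume string, offset int64, fingerprint string) error
    }

    // DataService scales with bandwidth: it moves bulk blocks by fingerprint,
    // so a request can be routed to any replica that holds the block.
    type DataService interface {
        Get(fingerprint string) ([]byte, error)
        Put(fingerprint string, data []byte) error
    }

    // CapacityService scales with raw space: cold blocks migrate here.
    type CapacityService interface {
        Archive(fingerprint string, data []byte) error
        Restore(fingerprint string) ([]byte, error)
    }

Because metadata traffic carries only fingerprints, adding metadata replicas for IOPS does not multiply bulk data traffic on the network, which is exactly the property the inter-component protocols must preserve.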

Another challenge is the fine granularity of nodes in the cloud. While traditional storage arrays rely on hardware resources such as shared, battery-backed RAM caches that support large, monolithic systems, it is impractical to include such hardware in each cloud node. Instead, a cloud-native storage system must effectively take advantage of resources that are practical to include in smaller form factors, such as Storage Class Memory. This requires a total rethinking of metadata architecture.

Finally, and most importantly: Innovation should focus on data structures; algorithms can be derived from the structures.

  • Smart data structures can abstract data and present it in an access-method-independent manner.
  • Lock-free and, to the extent possible, synchronization-free data structures enable massively parallel, highly concurrent, scalable operation.
  • To enable application access to data regardless of location, data should be defined by what it is rather than where it is (see the sketch after this list).
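
A minimal Go sketch combining the last two points: a concurrent index keyed by content hash, so a block is identified by what it is rather than where it is, and readers and writers proceed without caller-side locking (sync.Map stands in here for a purpose-built lock-free structure):

    package main

    import (
        "crypto/sha256"
        "fmt"
        "sync"
    )

    // contentIndex maps the SHA-256 of a block to the block itself, so the
    // "address" of data is derived from its content, not its location.
    type contentIndex struct {
        m sync.Map // [32]byte -> []byte
    }

    // Put stores a block and returns its content-derived identity.
    func (c *contentIndex) Put(block []byte) [32]byte {
        id := sha256.Sum256(block)
        c.m.LoadOrStore(id, block) // concurrent writers of the same block agree
        return id
    }

    func (c *contentIndex) Get(id [32]byte) ([]byte, bool) {
        v, ok := c.m.Load(id)
        if !ok {
            return nil, false
        }
        return v.([]byte), true
    }

    func main() {
        var idx contentIndex
        var wg sync.WaitGroup
        for i := 0; i < 8; i++ {
            wg.Add(1)
            go func() { // many writers, no locks in caller code
                defer wg.Done()
                idx.Put([]byte("hot block"))
            }()
        }
        wg.Wait()
        id := sha256.Sum256([]byte("hot block"))
        b, ok := idx.Get(id)
        fmt.Println(ok, string(b))
    }

A content-derived identity is also what lets a block move between tiers, nodes, or sites without invalidating any reference to it.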

The End Result

Fulfilling the above requirements and meeting these challenges would yield a storage system that can scale virtually without limit and offer a complete set of data management services, encompassing both primary and secondary storage functionality.

The entire set of local and remote resources is unified into a single system that optimizes resource use cloud-wide.

Back to Public Clouds

A system such as the one described here could also be applicable to public cloud deployments, in the context of a single cloud consumer. While we have argued that public cloud vendors have different motivations and a different set of requirements when architecting their storage stacks, a more holistic system of this kind also appeals to the public cloud consumer.

The ideal scenario is that of a consumer who leases from the cloud vendor the storage and compute resources that meet their requirements, and uses the private cloud storage stack to govern and manage the set of storage resources within their account. Without compromising the cloud vendor's interests (the consumer still leases all resources and is charged according to consumption), such a consumer takes better advantage of the resources available to them.

##

About the Author

Nir Peleg 

Nir Peleg is the founder and CTO of Reduxio, and architected its groundbreaking core technology. He is responsible for the company's strategic roadmap and its intellectual property management. Nir is an accomplished high-technology industry executive and visionary with over 30 years of experience, and joined Reduxio after a series of ventures. Nir co-founded and served as the CTO of Montilio, where he designed an innovative file server acceleration product. Prior to Montilio, Nir founded Exanet and served as its Executive Vice President of R&D and Chief Technology Officer. In this role he led the development of one of the first clustered storage solutions ever created, innovating in the areas of grid storage and distributed cache. Prior to Exanet, Nir was the first employee and chief architect of Digital Appliance, Larry Ellison's massively parallel computing venture that eventually became Pillar Data Systems, a tiered storage vendor later acquired by Oracle Corporation. He also worked at Digital Equipment Corporation (DEC), where he led a European pre-sales and special projects team focusing on UNIX-based Symmetrical Multi-Processor (SMP) systems. Nir started his career at Tel Aviv University's School of Mathematical Sciences, where he managed the computer science lab, one of the first UNIX installations in Israel. Nir holds over 20 U.S. patents and patents pending in the areas of computer systems, distributed storage, data deduplication and encryption.
Published Thursday, November 07, 2019 7:37 AM by David Marshall