StorPool 2023 Predictions: What 2023 Might Look Like for Storage and the Cloud

Industry executives and experts share their predictions for 2023.  Read them in this 15th annual VMblog.com series exclusive.

What 2023 Might Look Like for Storage and the Cloud

By Boyan Ivanov, CEO of StorPool

New technologies and standards in storage and server hardware

Compute Express Link (CXL), a CPU-to-device interconnect standard built on PCI Express, is finally becoming available in standard x86 servers. In the first generation of servers that support it, CXL will enable memory expansion modules, making servers with several terabytes of RAM economical. Previously, RAM could only be added as Registered DIMMs (e.g., DDR4), which are limited in number and capacity per server.

CXL persistent memory modules will supersede Optane NVDIMMs as the most popular persistent memory technology, thanks to their wide compatibility and easier integration.

And finally, CXL-connected "memory-semantic" SSDs will take their first small steps in the market, over time becoming a mainstream product, as NVMe SSDs did in the previous decade.

Flexible Data Placement (FDP), an SSD standard for reducing write amplification, will become more widely available in datacenter SSDs. By letting the host steer where data lands on the media, FDP extends the life of flash so it can support heavier workloads for longer periods. FDP requires support from the storage software, so it will be interesting to follow how adoption pans out.
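To see why placement matters, recall that write amplification (WA) is the ratio of bytes physically written to flash to bytes the host actually wrote. Here is a minimal back-of-the-envelope sketch in Python; the workload numbers are purely illustrative, not measurements from any specific drive:

# When data with different lifetimes shares erase blocks, garbage
# collection must relocate still-valid pages before erasing, which
# inflates WA. FDP-style placement hints let the host separate
# short-lived from long-lived data, so blocks are mostly invalid
# when erased and little relocation is needed.

def write_amplification(host_bytes, gc_relocated_bytes):
    """WA = total flash writes / host writes."""
    return (host_bytes + gc_relocated_bytes) / host_bytes

print(write_amplification(1.0, 1.5))  # mixed placement: WA = 2.5
print(write_amplification(1.0, 0.2))  # lifetime-aware placement: WA = 1.2

Because flash endurance is a fixed budget of media writes, cutting WA from 2.5 to 1.2 roughly doubles the host writes the same drive can absorb over its life.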

QLC SSDs will grow their market share in colder storage use cases

QLC SSDs continue to gain popularity in storage products that serve capacity-oriented applications - e.g., data lakes, analytics, backups, disaster recovery, and dev/test environments.

For primary workloads like heavily loaded databases, virtual desktops, and web servers, the 15%-20% lower cost per TB compared to TLC SSDs does not justify the lower drive endurance, lower write performance, and higher read and write latency. QLC SSDs simply do not yet provide sufficient value for money in primary storage systems. They will become attractive for primary workloads once hardware vendors improve their endurance to at least 1 DWPD (drive write per day) or bring technologies like Flexible Data Placement to next-generation QLC SSDs.
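A quick back-of-the-envelope comparison shows why the acquisition-price gap alone is not enough. All prices and endurance ratings below are hypothetical placeholders, not vendor figures:

# Effective cost per TB of writes over a drive's warranty, given its
# price per TB of capacity and its rated endurance in DWPD
# (drive writes per day).

def cost_per_tb_written(price_per_tb, dwpd, warranty_years=5):
    # TB written per TB of capacity over the warranty period.
    total_writes_per_tb = dwpd * 365 * warranty_years
    return price_per_tb / total_writes_per_tb

tlc = cost_per_tb_written(price_per_tb=100.0, dwpd=1.0)
qlc = cost_per_tb_written(price_per_tb=82.0, dwpd=0.3)
print(f"TLC: ${tlc:.3f}/TB written  QLC: ${qlc:.3f}/TB written")

With these placeholder numbers, the QLC drive is ~18% cheaper to buy but nearly three times more expensive per terabyte actually written - and cost per write is the metric that matters for write-heavy primary workloads.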

Legacy storage architectures continue facing headwinds everywhere

The dual-controller, shared-disk primary storage array design and the software-defined implementations of that architecture make sense for simple deployments with predictable workload requirements. However, they are losing ground both at the edge and in core data centers, because today's environments are neither simple nor predictable.

At the edge, small HCI solutions are pushing out standalone storage systems thanks to their flexibility, ease of deployment and management, and simple approach to high availability.

In core data centers, modern applications demand massive I/O parallelization and low latency at scale. Fleets of standard servers are preferred for medium- and large-scale infrastructure deployments, and new clouds use these building blocks to deliver structured and unstructured data storage services. Customers need solutions that meet their performance requirements, enable end-to-end automation, and offer a wide range of hardware choices, with deployments, upgrades, and refreshes happening on their own timeframes. Constant manual workload monitoring, rebalancing of workloads between data silos, and performance degradation as usable capacity fills up are all becoming things of the past.

Storage services priced for performance and size independently

Public clouds are starting to price capacity and performance independently. Previously, block storage pricing was based on performance tiers; it is now moving to pay-per-use pricing with separate charges for size (GiB provisioned) and performance (IOPS, MB/s). Example services are AWS EBS gp3 and Google Cloud Hyperdisk. We expect similar pricing of storage services to be adopted by smaller service providers and managed storage vendors.
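As a sketch of how such a bill is computed, here is a small Python model of gp3-style pricing. The rates and free baselines are illustrative placeholders patterned on this style of pricing, not current list prices:

def monthly_block_storage_cost(gib, iops, mbps,
                               gib_rate=0.08,     # $/GiB-month (placeholder)
                               iops_rate=0.005,   # $/IOPS-month above baseline
                               mbps_rate=0.04,    # $/(MB/s)-month above baseline
                               baseline_iops=3000,
                               baseline_mbps=125):
    # Capacity and performance are billed independently: a baseline of
    # IOPS and throughput is included, and only the excess is charged.
    cost = gib * gib_rate
    cost += max(0, iops - baseline_iops) * iops_rate
    cost += max(0, mbps - baseline_mbps) * mbps_rate
    return cost

# 1 TiB volume provisioned at 6,000 IOPS and 250 MB/s:
print(f"${monthly_block_storage_cost(1024, 6000, 250):.2f}/month")

The point is that a customer who needs more IOPS no longer has to buy a bigger (or higher-tier) volume to get them, and one who needs only capacity does not pay for unused performance.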

Latency-optimized vs throughput-optimized CPUs

In public and private clouds, the split between latency-optimized server configurations (millisecond query response times) and throughput-optimized ones (lower $ per unit of work, lower power per core) is widening. For throughput-optimized workloads, Arm servers are gaining market share, led by Amazon's Graviton CPUs and Ampere Altra CPUs. x86 CPU vendors are also splitting their portfolios, with some products targeting 2-3W per core for throughput-optimized workloads, while other server CPUs spend 10W+ per core for latency-optimized workloads.
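The watts-per-core gap matters because data center capacity is ultimately a power budget. A toy calculation with hypothetical numbers (a 15 kW rack, with ~30% of power going to non-CPU components):

def cores_per_rack(rack_kw, watts_per_core, non_core_overhead=0.30):
    # Power left for CPU cores after RAM, NICs, drives, fans, losses.
    usable_watts = rack_kw * 1000 * (1 - non_core_overhead)
    return int(usable_watts / watts_per_core)

print(cores_per_rack(15, 3))   # throughput-optimized: 3500 cores
print(cores_per_rack(15, 10))  # latency-optimized: 1050 cores

The same rack hosts over three times as many throughput-optimized cores, while the latency-optimized cores win on per-query response time - hence the split portfolios.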

In 2023, many public and private clouds will offer two compute options - one for batch processing and throughput-optimized workloads, and a second for online transaction processing and latency-sensitive workloads.

##

ABOUT THE AUTHOR

Boyan Ivanov is CEO of StorPool Storage, a leading global storage software provider.

Published Tuesday, December 06, 2022 7:53 AM by David Marshall