Virtualization Technology News and Information
Astute Networks Top Three Predictions for 2014

VMblog 2014 Prediction Series

Virtualization and Cloud executives share their predictions for 2014.  Read them in this series exclusive.

Contributed article by Keith Klarer, Senior Vice President of Engineering and Founder, Astute Networks

#1: Dynamic Random-Access Memory (DRAM) and Flash Team-Up

Most professionals recognize that when you power off a server, whatever is stored in the server's DRAM is lost.  Needless to say, this can result in data loss unless the contents of the DRAM have been committed to storage before "lights out."

In 2014, we will see the deployment of server DRAM DIMMs that also hold flash memory, across physical, virtual and cloud environments.  On the detection of an imminent power loss event, each DIMM will transfer the contents of its DRAM to flash.  When the system gets powered back up, the contents of the DRAM will be restored from flash.  Voila, the server can continue on from the point of the power failure.

While this is an interesting way to reduce the risk of data loss from power failure, it also offers significant benefits in reducing overall data center power consumption.  If the processing load on the data center decreases, servers can be powered off almost instantly.  When the load on the data center increases, the dark servers can be powered back up very quickly, with no boot time delays.  This can save a lot more power than the current techniques of putting individual CPUs into lower-power states.
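The save-and-restore behavior described above can be illustrated with a toy model: on an imminent-power-loss signal the DIMM copies DRAM to flash, and on power-up it restores it. The class and method names here are purely illustrative, not a real NVDIMM API.

```python
# Toy model of a hybrid DRAM+flash DIMM.  Names are illustrative only.

class HybridDimm:
    def __init__(self, size):
        self.dram = bytearray(size)   # volatile working memory
        self.flash = bytes(size)      # non-volatile backup area

    def write(self, offset, data):
        self.dram[offset:offset + len(data)] = data

    def on_power_loss(self):
        # Triggered by the imminent-power-loss signal: persist DRAM to flash.
        self.flash = bytes(self.dram)

    def on_power_up(self):
        # Restore DRAM from flash so the server resumes where it left off.
        self.dram = bytearray(self.flash)


dimm = HybridDimm(64)
dimm.write(0, b"in-flight state")
dimm.on_power_loss()          # power fails: DRAM contents saved to flash
dimm.dram = bytearray(64)     # DRAM is lost while the server is dark
dimm.on_power_up()            # next boot: restore and continue
print(bytes(dimm.dram[:15]))  # b'in-flight state'
```

The key point is that the restore path replaces a full OS boot, which is what makes rapid power-down/power-up of whole servers practical.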

Bottom Line: Given that power consumption is one of the biggest costs in large data centers, designers will find ways to take advantage of this DRAM+Flash union.

Although this is a great technology for providing compute resources "on demand," the need for fast SAN-connected storage will increase with data center density.  Virtualized applications will need to be moved off the server that is to be powered down and consolidated onto another server.  SAN storage provides the repository through which this transfer is effected, and the speed of this storage determines how quickly the transfer can take place.
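The consolidation step is essentially a packing problem: place the running VMs onto as few servers as possible so the rest can go dark. A minimal sketch using first-fit-decreasing by CPU load (the loads and capacity figures are made up for illustration):

```python
# Sketch of VM consolidation before powering servers down:
# first-fit-decreasing packing of VM loads onto servers.

def consolidate(vm_loads, server_capacity):
    """Pack VM loads onto as few servers as possible."""
    servers = []      # remaining capacity of each server in use
    placement = {}    # vm name -> server index
    for vm, load in sorted(vm_loads.items(), key=lambda kv: -kv[1]):
        for i, free in enumerate(servers):
            if load <= free:
                servers[i] -= load
                placement[vm] = i
                break
        else:
            servers.append(server_capacity - load)  # start a new server
            placement[vm] = len(servers) - 1
    return placement, len(servers)


placement, used = consolidate(
    {"vm1": 0.5, "vm2": 0.4, "vm3": 0.3, "vm4": 0.2}, server_capacity=1.0)
print(used)  # 2 -- four VMs fit on two servers; the others can power off
```

In practice the limiting factor is not the packing computation but how fast VM state can move over the SAN, which is the point made above.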

#2: Mainstream SSDs Transition to 3D NAND Flash

Until now, the flash memory devices used in SSDs have been built as a single layer of storage cells arranged on a silicon base.  A typical flash device would have 16 billion or more of these storage cells squeezed onto a piece of silicon less than a fingernail in area.  Each storage cell can hold two or three bits of information, resulting in single device capacities of up to 128 Gbits.

To fit all of these storage cells into such a small area clearly requires that each cell be very small, with the dimensions of the latest cells approaching 10 nanometers.  Due to the physics of electron storage in cells this small, cell lifetimes (endurance) are typically limited to a few thousand writes.  Once the endurance limit is reached, you can no longer write data.  To compensate, SSD manufacturers have been applying more sophisticated data coding, write balancing, cell charge modulation techniques, and even write throttling to mitigate physical endurance thresholds - but there is a limit to what can be done.
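Write balancing (wear leveling) is the simplest of these mitigations to picture: spread writes across physical blocks so no single block exhausts its endurance early. A bare sketch, nothing like a real flash translation layer:

```python
# Minimal wear-leveling illustration: every write goes to the
# least-worn block, so wear stays even across the device.

class WearLeveler:
    def __init__(self, num_blocks, endurance):
        self.writes = [0] * num_blocks   # per-block write counts
        self.endurance = endurance       # writes each block can absorb

    def write(self):
        target = self.writes.index(min(self.writes))  # least-worn block
        if self.writes[target] >= self.endurance:
            raise RuntimeError("device worn out")
        self.writes[target] += 1


wl = WearLeveler(num_blocks=4, endurance=3000)
for _ in range(8000):
    wl.write()
print(max(wl.writes) - min(wl.writes))  # 0 -- wear is perfectly even here
```

Real devices must also handle static data and garbage collection, which is why leveling alone cannot remove the endurance ceiling - only raise utilization of it.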

The limited write endurance is certainly a concern to enterprise users whose business practices require the collection or update of a great deal of data each day.  Heavy write loads can preclude the use of the highest density flash devices, and instead require the use of previous generation (less dense/more costly) flash devices.

The development and introduction of 3D NAND flash devices dramatically changes things.  Rather than having to shrink the storage cells to increase flash device capacity, 3D NAND allows the cells to be stacked, one on top of another.  The result is a 3D "tower" of storage cells in the same footprint as what was previously a single cell.  The first 3D devices to market reportedly are stacked 24 layers deep and have a 128 Gbit capacity.

What is the big benefit of 3D NAND?  Storage cell geometries can be scaled back up to the 40-50 nanometer range without compromising capacity.  This increases endurance by a factor of 10x, makes writes about 20 percent faster, and reduces power consumption by around 40 percent.

Bottom Line: 3D NAND will enable larger capacity and lower cost SSDs, while keeping write endurance at a level that meets enterprise requirements.

Regardless of what type of NAND is used in a SSD, make sure that you understand the Petabytes Written (PBW) guaranteed by the SSD manufacturer.  This number allows you to determine how durable the SSD is, regardless of its internal design.  And beware of vague endurance improvement claims by various flash system manufacturers - use the guaranteed PBW numbers.
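Turning a PBW guarantee into a concrete lifetime is simple arithmetic; the drive rating and daily write load below are illustrative numbers, not any vendor's figures:

```python
# Translate a guaranteed PBW (petabytes written) rating into drive
# lifetime under a given daily write load.  Figures are illustrative.

def years_of_life(pbw_guarantee, tb_written_per_day):
    """Years until the endurance guarantee is consumed."""
    days = (pbw_guarantee * 1000) / tb_written_per_day  # 1 PB = 1000 TB
    return days / 365


# e.g. a drive rated for 5 PBW under a 2 TB/day write load:
print(round(years_of_life(5, 2), 1))  # 6.8 (years)
```

Running this calculation against your actual daily write volume is a more reliable comparison across SSDs than any vendor's qualitative endurance claims.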

#3: Software Defined Storage (SDS) Is The Next Logical Follow-On In The Marketing Hype Cycle

The acquisition of the "software defined networking" startup Nicira by VMware for over a billion dollars in 2012 has gotten a lot of marketing folks salivating over other technologies that can be tagged as "software defined."  Of course, software defined storage (SDS) is a logical follow-on in the marketing hype cycle.

Emulating the model set by the networking innovators, it would seem that software defined storage (SDS) would have the following features: 

1. Runs on commodity hardware.  The implication is that software developers can capture most of the sales margin by running on mass-produced hardware platforms, with no technology tie-in to a particular manufacturer.

2. Abstracts away the underlying physical storage implementation, i.e., the user does not know about the implementation.

3. Pools together the resources provided by the hardware in order to scale performance and capacity.

4. Uses standardized interfaces to provision the storage resources and service levels.

The first two points would put SDS at odds with the incumbent storage manufacturers.  Because of the lock the big storage players have on market share, widespread adoption of SDS will likely require the support of some major, or many minor, providers that don't have a vested interest in their existing storage lines.

The third point implies a pooling methodology that should be independent of the underlying hardware, since it's unlikely that independent hardware vendors are in a position to define a set of common interfaces to support this functionality.

The fourth point either requires a significant standardization effort or the evangelization and general adoption of a compelling open interface.
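What points 2 through 4 imply can be sketched as a pool that aggregates heterogeneous backends behind one provisioning interface. All class and method names here are hypothetical; no real SDS product is being described:

```python
# Hypothetical sketch of SDS features 2-4: heterogeneous backends
# pooled behind a single provisioning interface.

class Backend:
    def __init__(self, name, capacity_tb):
        self.name, self.free_tb = name, capacity_tb


class StoragePool:
    """Pools capacity from any backend; callers never see which one serves them."""
    def __init__(self, backends):
        self.backends = backends

    def capacity_tb(self):
        return sum(b.free_tb for b in self.backends)

    def provision(self, size_tb):
        # Abstraction: the caller gets an opaque volume handle and never
        # learns which physical backend holds the data.
        for b in self.backends:
            if b.free_tb >= size_tb:
                b.free_tb -= size_tb
                return object()
        raise RuntimeError("pool exhausted")


pool = StoragePool([Backend("jbod-1", 10), Backend("array-2", 40)])
vol = pool.provision(12)   # lands on array-2, invisibly to the caller
print(pool.capacity_tb())  # 38
```

The standardized interface in point 4 is exactly the part this sketch hand-waves: agreeing on what `provision` and its service-level parameters look like across vendors is the hard standardization problem.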

This is a lot to undertake in a single year, and while bits and pieces of these features will appear in 2014, it remains to be seen if a complete SDS implementation will appear.


About the Author

Keith Klarer has 30 years of hardware design and management experience, including fifteen years of technical management experience and four years as a founder and CEO of an engineering services company. Before co-founding Astute Networks in 2000, Keith was the CEO of Axym Design, which provided system design services to such companies as Philips Semiconductor, IBM and Gateway 2000. He has held technical and management positions at Logic Innovations, a consulting firm that provided product design services to clients such as General Instruments, Brooktree, and Symbol Technologies. He has also held engineering positions at Scientific Computer Systems and Control Data Corporation. Keith holds a BSEE Degree from the University of Waterloo, Canada.
Published Thursday, December 05, 2013 9:19 PM by David Marshall