VMblog's Expert Interviews: NGD Systems Talks Computational Storage Technology and Data Movement

Intelligent edge computing applications have gained major momentum recently, while hyperscale systems have continued their rapid growth. Each offers tremendous possibilities to transform technology, yet both still rely on fundamentally old computing architectures. VMblog recently spoke with industry expert Scott Shadley, Principal Technologist at NGD Systems, to learn more about this area of the industry.

VMblog:  There's been some buzz about computational storage from a few different perspectives.  How does NGD define computational storage and why is your approach different?

Scott Shadley:  NGD Systems' approach to computational storage centers on In-Situ processing, which means the processing is done right where the data resides. NGD Systems brings computational resources directly to the storage device (in our case, an SSD).
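
To make the idea concrete, here is a minimal sketch (in Python, against a hypothetical device API rather than NGD Systems' actual SDK) of the difference between hauling data to the host and pushing the computation down to the drive:

```python
# Conceptual sketch only. "device" is a hypothetical computational-storage
# handle, not a real NGD Systems API; the point is the data flow.

def count_matches_on_host(device, pattern):
    """Conventional path: every block crosses the bus into host memory."""
    matches = 0
    for block in device.read_all_blocks():   # full data set moves over PCIe
        matches += block.count(pattern)      # host CPU does the scanning
    return matches

def count_matches_in_situ(device, pattern):
    """In-Situ path: the drive's on-board processor scans its own media."""
    # Only the small result travels back to the host, not the data.
    return device.offload(lambda block: block.count(pattern), reduce=sum)
```

In the second function, only a handful of bytes cross the wire, which is the whole premise of In-Situ processing.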

NGD Systems unites compute and storage in the highest capacity and most power efficient NVMe SSD available, in the industry's smallest form factor. This patented technology can save enterprises up to 45 percent annually in costs associated with physical footprint, server expense and energy. The Newport Platform of NVMe computational storage solutions provides increased efficiency by running applications like AI, via In-Situ processing, on mass data sets wherever the data is generated and stored. This radically reduces, by as much as a factor of five, the network bandwidth required to analyze the mass data sets produced for AI and other data analytic applications.

NGD Systems' CEO, Nader Salessi, was the pioneer behind In-Situ processing and NGD Systems is credited as the first company to deliver In-Situ processing technology in a commercially available platform. 

VMblog:  Why is data movement so cumbersome?

Shadley:  Data naturally has gravity and requires resources (host memory and CPUs) and energy to move. As deployments grow, data is moved over increasingly long distances between nodes and local compute/memory complexes, increasing resource and energy usage, and thus costs.

Until recently, the size of typical data sets made data movement only moderately costly. However, as data sets grow and data-intensive applications such as Big Data analytics, artificial intelligence (AI), machine learning (ML), genomics, and IoT gain in use, the cost and time of data movement are becoming a critical challenge. Moving massive amounts of data from storage to host CPU memory to process a query is costly in terms of both power consumption and time.
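
As a back-of-envelope illustration (with assumed, round numbers, not vendor figures), consider what it costs just to move a large data set before a single byte of it is processed:

```python
# Back-of-envelope sketch with assumed round numbers (not vendor figures):
# the cost of merely MOVING a data set before any query runs on it.

DATASET_TB = 100        # assumed data set size
LINK_GBPS = 32          # assumed host link, roughly PCIe Gen3 x4
PJ_PER_BIT = 100        # assumed ~100 picojoules to move one bit off-device

bits = DATASET_TB * 1e12 * 8
seconds = bits / (LINK_GBPS * 1e9)
kilojoules = bits * PJ_PER_BIT * 1e-12 / 1e3

print(f"Transfer time:   {seconds / 3600:.1f} hours")   # ~6.9 hours
print(f"Transfer energy: {kilojoules:.0f} kJ")          # ~80 kJ
```

Scanning the data in place, across many drives in parallel, removes both of those terms for everything except the final, tiny result.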

The impact of data movement is being felt in nearly all compute applications. Even in consumer devices such as smartphones, tablets, mobile PCs, and wearable devices, where cloud services are becoming necessary, it has been shown that data movement between the main memory system and computation units accounts for, on average, 62.7% of the total system energy. For Big Data, AI and machine learning applications with large data stores and significant search, indexing, or pattern matching workloads, the cost of data movement is even greater.

VMblog:  What other ways have vendors tried to address this problem?

Shadley:  Vendors have attempted to address the challenge of data movement by delivering disaggregated solutions, such as NVMe over Fabrics (NVMe-oF), composable architectures, and GPU and FPGA accelerators. While these can speed up the process to some degree, they don't eliminate the data movement challenge; they only minimize some of its effects. All of these solutions carry space and power requirements that may not be available, and none of them rethinks how the stored data itself is moved and managed.

VMblog:  What sort of companies are looking to deploy computational storage technology?

Shadley:  Computational storage is ideal for any organization employing hyperscale environments, edge computing, or content delivery networks (CDNs). For example, this would include web companies like Facebook, cloud providers like AWS, telcos like AT&T and CDNs like Akamai. Ultimately, though, computational storage is useful for any company relying heavily on AI and data analytic applications.

VMblog:  What are some of the real-world applications that computational storage supports - now or in the future?

Shadley:  Computational storage enables a wide variety of compelling use cases. This is especially apparent in edge computing and IoT. Imagine a commercial jet that uses sensor technology to determine, in seconds rather than hours, its maintenance needs as it sits at the gate before its next takeoff. Computational storage is what makes that kind of efficiency possible in a small, low-power form factor.

Another great example of an edge implementation is object tracking in surveillance. Consider a remote camera platform that can analyze and track a single person in a stadium in real time by running an AI-based search algorithm on the data as it is stored on the cameras themselves. There's no need to "look back" over the data afterward.
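
A hedged sketch of that data flow is below; run_model is a hypothetical stand-in for whatever inference hook a given computational SSD exposes, not a real NGD Systems call:

```python
# Hypothetical sketch of the surveillance example. "run_model" stands in
# for whatever inference hook a given computational SSD exposes; the raw
# video never leaves the cameras, only tiny (timestamp, score) results do.

def find_person(camera_drives, target_embedding, threshold=0.9):
    """Fan a face-match query out to every camera's computational SSD."""
    hits = []
    for drive in camera_drives:
        # Each drive scores its locally stored frames against the target.
        for timestamp, score in drive.run_model("face-match", target_embedding):
            if score >= threshold:
                hits.append((drive.camera_id, timestamp, score))
    return hits
```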

VMblog:  And finally, are there any key partners or industry groups helping advance computational storage?

Shadley:  The Storage Networking Industry Association (SNIA) recently launched the Computational Storage Technical Working Group. This SNIA body is focused on developing standards to promote the interoperability of computational storage devices, and on defining interface standards for system deployment, provisioning, management, and security.  These efforts will enable storage architectures and software to be integrated with computation in its many forms.

##

Published Friday, May 31, 2019 7:35 AM by David Marshall