Virtualization Technology News and Information
In-Memory Computing: A Foundation for Real-Time IoT

Written by Nikita Ivanov, CTO and Co-founder of GridGain Systems

As part of digital transformation and omnichannel customer experience initiatives, enterprises often deploy IoT applications that require the collection and analysis of massive amounts of data from a variety of sources. According to the "Cisco Global Cloud Index: Forecast and Methodology, 2015-2020," the total amount of data created by devices, driven by IoT, will reach 600 ZB per year by 2020, up from 145 ZB per year in 2015. For these new IoT applications to deliver the anticipated benefits, data analysis must often happen simultaneously with data collection. Consider the following:
  • Smart cities - For smart cities to successfully reduce traffic congestion, they must be able to analyze real-time data from vehicle-based routing applications, traffic cams, weather stations, police reports, event calendars and more, then immediately update traffic flow models and suggest optimal route changes to all vehicles - self-driving cars, buses, delivery trucks, etc. - that are connected to the system.
  • Mobile payments - For banks to reduce fraud associated with mobile access, they must be able to combine historical data with real-time data collected from their payment terminals and credit card agencies in order to detect and limit emerging fraud strategies before they spread.
  • Patient monitoring - For hospitals to provide optimal care for at-home patients, the data collected from each sensor that is monitoring each patient must be analyzed and acted upon in real-time.

Simultaneously collecting and analyzing data is a challenge for organizations that rely on an extract, transform and load (ETL) process to move data from an online transactional processing (OLTP) database into an online analytical processing (OLAP) database. The time delay introduced by the ETL process can mean the data is out of date before the analysis even begins.
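The staleness introduced by a batch ETL step can be made concrete with a toy sketch. The numbers and function names below are purely illustrative: with an ETL job that runs on a fixed interval, a record that arrives just after one run waits nearly the full interval before analytics can see it, while ingest-time (in-memory) processing makes that wait effectively zero.

```python
# Illustrative sketch (no vendor API): how much a periodic batch ETL
# step delays a record's visibility to analytics.

BATCH_INTERVAL = 60  # seconds between ETL runs (hypothetical)

def etl_staleness(event_time, next_batch_start):
    """Seconds an event waits before it is visible to analytics."""
    return max(0, next_batch_start - event_time)

# Events arriving within one 60-second window; ETL runs at t = 60.
events = [1, 15, 42, 59]
for t in events:
    print(f"event at t={t}s waits {etl_staleness(t, BATCH_INTERVAL)}s")
# An event at t = 1 is ~59 s old before analysis even begins; processing
# on ingest would bring the same figure to roughly 0.
```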

Simultaneous Data Collection and Analysis with In-Memory Computing

In-memory computing can eliminate the time delay caused by the ETL process. In-memory computing platforms typically include the following technologies:

  • In-memory data grids and databases deployed on a cluster of on-premises or cloud servers. In-memory data grids can be easily inserted between the data and application layers of existing applications, while in-memory databases are typically used for new applications or when launching a major re-architecting of an existing application. In both cases, the entire available memory and CPU power of the in-memory computing cluster is available for processing, and the cluster can be scaled out simply by adding new nodes.
  • A streaming analytics engine that manages dataflow and event processing. By taking advantage of in-memory computing speed, the streaming analytics engine lets users query data streams as they arrive without degrading performance.
  • A memory-centric architecture that allows organizations to balance infrastructure costs and application performance by keeping the full operational data set on disk and keeping only a subset of user-defined data in memory. This architecture can be built using a distributed ACID and ANSI-99 SQL-compliant disk store deployed on spinning disks, solid state drives (SSDs), Flash, 3D XPoint and other storage-class memory technologies.
  • A continuous learning framework based on integrated, fully distributed machine learning (ML) and deep learning (DL) libraries that have been optimized for massively parallel processing. This enables each ML or DL algorithm to run locally against the data residing in-memory on each node of the in-memory computing cluster. This allows for the continuous updating of data without impacting performance, even at petabyte scale.
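The first bullet's scale-out behavior comes from partitioning: a data grid hashes each key to a partition, and partitions are spread across cluster nodes, so adding nodes adds both memory and CPU. The following minimal Python sketch is hypothetical (no real grid product's API) and omits replication, rebalancing, persistence and networking, but shows the key-to-node mapping idea:

```python
# Hypothetical sketch of hash partitioning in an in-memory data grid:
# keys map to partitions, partitions map to cluster nodes.
import hashlib

class MiniGrid:
    def __init__(self, nodes, partitions=64):
        self.partitions = partitions
        self.nodes = {n: {} for n in nodes}   # node -> local in-memory store
        self.node_list = list(nodes)

    def _node_for(self, key):
        # Stable hash so every client routes a key to the same node.
        part = int(hashlib.md5(str(key).encode()).hexdigest(), 16) % self.partitions
        return self.node_list[part % len(self.node_list)]

    def put(self, key, value):
        self.nodes[self._node_for(key)][key] = value

    def get(self, key):
        return self.nodes[self._node_for(key)].get(key)

grid = MiniGrid(nodes=["node-a", "node-b", "node-c"])
for i in range(1000):
    grid.put(f"sensor-{i}", i)

print(grid.get("sensor-42"))  # -> 42
# The 1,000 entries are spread across all three nodes' memory:
print({n: len(store) for n, store in grid.nodes.items()})
```

A production grid uses a smarter affinity function and rebalances partitions when nodes join or leave, but the scaling story is the same: more nodes, more aggregate memory and compute.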

IoT and HTAP

In-memory computing is a mature technology that can deliver a 1,000X increase in speed and massive scalability compared to applications relying on disk-based databases. Gartner predicts that by 2019, 75 percent of cloud-native application development will use in-memory computing or services built on in-memory computing, enabling mainstream developers to implement high-performance, massively scalable applications.

By eliminating the need for ETL processes and supporting in-memory speeds at petabyte scale, in-memory computing platforms enable what Gartner calls in-process HTAP (hybrid transactional/analytical processing) using a single database for transactions and analytics. In-process HTAP can also significantly reduce the cost and complexity of the data layer architecture while allowing real-time machine or deep learning to drive end user interactions.
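The essence of in-process HTAP is that transactional writes and analytical queries hit the same live store, with no ETL copy in between. In this sketch, Python's in-memory SQLite database stands in for a distributed, ANSI-SQL in-memory platform; the payments schema and data are invented for illustration:

```python
# Sketch of HTAP on a single store: transactional inserts and an
# analytical aggregate run against the same in-memory SQL database.
# SQLite is a stand-in here, not the platform the article describes.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE payments (terminal TEXT, amount REAL, flagged INT)")

# OLTP side: individual payment transactions arrive and are committed.
rows = [("t-1", 25.0, 0), ("t-2", 980.0, 1), ("t-1", 40.0, 0)]
db.executemany("INSERT INTO payments VALUES (?, ?, ?)", rows)
db.commit()

# OLAP side: analytics query the same live data -- no ETL hop.
cur = db.execute(
    "SELECT terminal, SUM(amount), SUM(flagged) FROM payments "
    "GROUP BY terminal ORDER BY terminal"
)
print(cur.fetchall())  # -> [('t-1', 65.0, 0), ('t-2', 980.0, 1)]
```

Because both workloads see one copy of the data, a fraud model can score a payment against aggregates that already include the transaction committed a moment earlier.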

With HTAP, enterprises can more cost-effectively implement a variety of IoT and Industrial IoT use cases. For example, the infrastructure of a major bank with 135 million customers was overwhelmed when it began offering 24/7 online and mobile banking. With in-memory computing, the bank is developing a web-scale architecture using a 2,000-node in-memory data grid that can handle up to 1.5 PB of data, along with the required transaction volume.

The Internet of Things can be a primary driver of business innovation and competitiveness. However, deploying a successful IoT project that delivers on its promise can require finding a cost-effective path to a massive increase in computing performance and scalability. For many organizations, in-memory computing will be the solution, and the sooner IT decision makers begin laying out their in-memory computing strategies, the sooner they will be able to deliver on the IoT promise.


About the Author


Nikita Ivanov is founder and CTO of GridGain Systems, founded in 2007 and funded by RTP Ventures and Almaz Capital. Nikita has led GridGain to develop advanced and distributed in-memory data processing technologies - a leading in-memory data fabric used around the world. Nikita has over 20 years of experience in software application development, building HPC and middleware platforms, and contributing to the efforts of other startups and notable companies including Adaptec, Visa and BEA Systems. Nikita was one of the pioneers in using Java technology for server-side middleware development while working for one of Europe's largest system integrators in 1996.

Published Tuesday, July 24, 2018 7:39 AM by David Marshall