Virtualization Technology News and Information
Redis and Tecton Partner to Deliver High-Performance Feature Serving for Real-Time Machine Learning

Redis and Tecton announced a partnership and a product integration that enables low-latency, highly scalable and cost-effective serving of features to support operational Machine Learning (ML) applications.

"Tecton and Redis are partnering in order to reduce the time to action for enterprises. Many machine learning use cases require the ability to transform streaming data, serve features to the machine learning model and calculate feature values, all on a real-time basis. Tecton helps transform incoming data and calculate feature values, and Redis helps retrieve feature values at ultra-low latency for model serving," said Kevin Petrie, Vice President of Research at Eckerson Group.

The Tecton feature store is a central hub for ML features, the real-time data signals that power ML models. Tecton allows data teams to define features as code using Python and SQL. Tecton then automates ML data pipelines, generates accurate training datasets and serves features online for real-time inference. With Tecton, data teams can build features collaboratively using DevOps engineering best practices and share features across models and use cases. New features can be delivered in minutes without the need to build bespoke data pipelines.
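The "features as code" idea can be sketched in plain Python. The decorator, registry, and feature names below are hypothetical illustrations of the pattern, not Tecton's actual SDK, which provides its own declarative objects and decorators:

```python
from datetime import datetime, timedelta

# Hypothetical catalog standing in for a feature store's registry.
FEATURE_REGISTRY = {}

def feature_view(name):
    """Register a Python function as a named, versionable feature definition."""
    def wrapper(fn):
        FEATURE_REGISTRY[name] = fn
        return fn
    return wrapper

@feature_view("user_7d_purchase_count")
def user_7d_purchase_count(purchases, user_id, now):
    """Count a user's purchases in the trailing 7-day window."""
    cutoff = now - timedelta(days=7)
    return sum(
        1 for p in purchases
        if p["user_id"] == user_id and p["ts"] >= cutoff
    )

now = datetime(2022, 3, 10)
purchases = [
    {"user_id": "u1", "ts": datetime(2022, 3, 8)},
    {"user_id": "u1", "ts": datetime(2022, 2, 1)},  # outside the window
    {"user_id": "u2", "ts": datetime(2022, 3, 9)},
]
count = FEATURE_REGISTRY["user_7d_purchase_count"](purchases, "u1", now)
print(count)  # 1
```

Because the feature is an ordinary, registered function, the same definition can be shared across models and reused by both training pipelines and online serving, which is the collaboration benefit the paragraph above describes.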

Feature stores support two data access patterns for ML: retrieving millions of rows of historical data for model training, and retrieving individual rows online at millisecond latencies for real-time predictions. Feature stores typically use key-value databases as online storage for low-latency serving.
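The two access patterns can be sketched with a toy example, using a plain dict as a stand-in for a key-value online store such as Redis (all names here are illustrative):

```python
# Offline pattern: scan many historical rows to assemble a training set.
historical_rows = [
    {"user_id": "u1", "ts": 1, "clicks_1h": 3, "label": 0},
    {"user_id": "u1", "ts": 2, "clicks_1h": 7, "label": 1},
    {"user_id": "u2", "ts": 1, "clicks_1h": 1, "label": 0},
]
training_set = [(r["clicks_1h"], r["label"]) for r in historical_rows]

# Online pattern: keep only the latest feature row per entity key in a
# key-value store (a dict here, standing in for Redis) so a single
# prediction request needs just one low-latency lookup.
online_store = {r["user_id"]: r for r in historical_rows}  # last write wins

def get_online_features(user_id):
    row = online_store[user_id]
    return {"clicks_1h": row["clicks_1h"]}

print(get_online_features("u1"))  # {'clicks_1h': 7}
```

The offline path is throughput-bound (millions of rows per query), while the online path is latency-bound (one row per key in milliseconds), which is why feature stores pair a data warehouse with a key-value database rather than using one system for both.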

Redis Enterprise Cloud is a cost-effective, fully managed Database-as-a-Service (DBaaS) available as a hybrid and multi-cloud solution. Built on a serverless concept, Redis Enterprise Cloud simplifies and automates database provisioning on the leading cloud service providers: AWS, Microsoft Azure and Google Cloud. Designed for modern distributed applications, Redis Enterprise Cloud delivers sub-millisecond performance at virtually infinite scale. This allows developers and operations teams to build intelligent, high-performance, scalable and resilient applications faster, using Redis native data structures and modern data models with the low-latency retrieval that online stores require.
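As a rough illustration of how a feature row might be laid out in one of Redis's native data structures, a hash per entity key is a common choice. With the real redis-py client this would be `r.hset(...)` / `r.hgetall(...)` against a live server; a minimal in-memory stand-in is used here so the sketch runs on its own, and the key naming is an assumption, not a Tecton or Redis convention:

```python
class FakeRedisHashes:
    """In-memory stand-in for Redis hash commands (HSET / HGETALL),
    used only so this sketch runs without a live Redis server."""
    def __init__(self):
        self._data = {}

    def hset(self, key, mapping):
        self._data.setdefault(key, {}).update(mapping)

    def hgetall(self, key):
        return dict(self._data.get(key, {}))

r = FakeRedisHashes()

# One hash per entity key: field names are feature names, values are
# feature values. With redis-py this would be
# r.hset("features:user:u1", mapping={...}).
r.hset("features:user:u1", mapping={"clicks_1h": "7", "purchases_7d": "1"})

features = r.hgetall("features:user:u1")
print(features)  # {'clicks_1h': '7', 'purchases_7d': '1'}
```

Storing all of an entity's features in a single hash means one round trip fetches the full feature vector for a prediction request, which is what makes the sub-millisecond serving path practical.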

With the new integration, Tecton customers now have the option to use Redis Enterprise Cloud as the online store for their ML features. Redis Enterprise Cloud delivers serving latencies 3x lower than Amazon DynamoDB's, while reducing the cost per transaction by up to 14x. This enables organizations to support more demanding ML use cases, such as recommendations and search ranking.

"The Tecton feature store is designed to support a broad range of ML use cases, each with unique serving latency and volume requirements," said Mike Del Balso, co-founder and CEO of Tecton. "Customers with latency-sensitive and high-volume use cases have been asking for the option to use Redis Enterprise Cloud for their online store. With today's announcement, we're happy to be providing that option and continuing to make the Tecton feature store more flexible and modular."

"As more organizations operationalize machine learning for real-time, performance becomes especially important for customer-facing applications and experiences," said Taimur Rashid, Chief Business Development Officer at Redis. "Feature stores are at the center of modern data architecture, and there is increasing adoption of Redis to store online features for low-latency serving. With Tecton's capabilities for data orchestration combined with Redis Enterprise Cloud's low operational overhead and submillisecond performance, organizations can deliver online predictions and perform complex operations in milliseconds."

Published Thursday, March 10, 2022 1:49 PM by David Marshall