Virtualization Technology News and Information
Tecton Releases Low-latency Streaming Pipelines for Machine Learning, Allowing Data Teams to Build and Deploy Real-Time Models in Hours Instead of Months
Tecton, the enterprise feature store company, announced that it has added low-latency streaming pipelines to its feature store so that organizations can quickly and reliably build real-time ML models.

"Enterprises are increasingly deploying real-time ML to support new customer-facing applications and to automate business processes," said Kevin Stumpf, co-founder and CTO of Tecton. "The addition of low-latency streaming pipelines to the Tecton feature store enables our customers to build real-time ML applications faster, and with more accurate predictions."

Real-time ML means that predictions are generated online, at low latency, using an organization's real-time data; any update in the data sources is reflected immediately in the model's predictions. Real-time ML is valuable for any use case that is sensitive to the freshness of predictions, such as fraud detection, product recommendations, and dynamic pricing.

For example, fraud detection models need to generate predictions based not just on what a user was doing yesterday but on what they have been doing for the past few seconds. Similarly, real-time pricing models need to incorporate the supply and demand of a product at the current time, not just from a few hours ago.

The data is the hardest part of building real-time ML models. It requires operational data pipelines that can process features at sub-second freshness, serve features at millisecond latency, and deliver production-grade SLAs. Building these pipelines is very hard without proper tooling and can add weeks or months to the deployment time of ML projects.

With Tecton, data teams can build and deploy features from streaming data sources such as Kafka or Kinesis in hours. Users only need to provide the data transformation logic using Tecton's primitives, and Tecton executes that logic in fully managed operational data pipelines that process and serve features in real time. Tecton also processes historical data to create training datasets and backfills that are consistent with the online data, eliminating training/serving skew. Time-window aggregations, by far the most common feature type in real-time ML applications, are supported out of the box with an optimized implementation.
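Tecton's managed pipelines handle this internally, but the core idea behind a trailing time-window aggregation can be sketched in plain Python. The class and field names below are illustrative only, not Tecton APIs, and assume a single entity's event stream with monotonically increasing timestamps:

```python
from collections import deque

class SlidingWindowAggregate:
    """Maintains count, sum, and mean of events within a trailing time window."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.events = deque()  # (timestamp, value) pairs, oldest first
        self.total = 0.0

    def add(self, timestamp, value):
        # Ingest one streaming event, then age out anything outside the window.
        self.events.append((timestamp, value))
        self.total += value
        self._evict(timestamp)

    def _evict(self, now):
        # Drop events older than the trailing window.
        while self.events and self.events[0][0] <= now - self.window:
            _, old_value = self.events.popleft()
            self.total -= old_value

    def features(self, now):
        # Serve feature values as of `now`, e.g. for an online prediction.
        self._evict(now)
        count = len(self.events)
        return {
            "count": count,
            "sum": self.total,
            "mean": self.total / count if count else 0.0,
        }

# Example: transaction amounts for one user, 600-second window
agg = SlidingWindowAggregate(window_seconds=600)
agg.add(0, 25.0)
agg.add(100, 40.0)
agg.add(650, 10.0)          # the event at t=0 has aged out by now
print(agg.features(650))    # → {'count': 2, 'sum': 50.0, 'mean': 25.0}
```

A production system additionally has to shard this state by entity key, persist it, handle late and out-of-order events, and backfill the same logic over historical data for training, which is where most of the engineering effort described above goes.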

Data teams that are already using real-time ML can now build and deploy models faster, increase prediction accuracy, and reduce the load on engineering teams. Data teams that are new to streaming can build a new class of real-time ML applications requiring ultra-fresh feature values. Tecton simplifies the most difficult step in the transition to real-time ML: building and operating the streaming ML pipelines.
Published Tuesday, August 10, 2021 12:47 PM by David Marshall