JFrog Ltd. ("JFrog") announced a new integration with
Amazon SageMaker,
which helps companies build, train, and deploy machine learning (ML)
models for any use case with fully managed infrastructure, tools, and
workflows. By pairing
JFrog Artifactory
with Amazon SageMaker, ML models can be delivered alongside all other
software development components in a modern DevSecOps workflow, making
each model immutable, traceable, secure, and validated as it matures for
release. JFrog also unveiled new versioning capabilities for its ML Model Management solution, which help ensure compliance and security are incorporated at every step of ML model development.
"As more companies begin managing big data in the cloud, DevOps team
leaders are asking how they can scale data science and ML capabilities
to accelerate software delivery without introducing risk and
complexity," said Kelly Hartman, SVP, Global Channels and Alliances,
JFrog. "The combination of Artifactory and Amazon SageMaker creates a
single source of truth that indoctrinates DevSecOps best practices to ML
model development in the cloud - delivering flexibility, speed,
security, and peace of mind - breaking into a new frontier of MLSecOps."
According to a recent Forrester survey,
50 percent of data decision-makers cited applying governance policies
within AI/ML as the biggest challenge to widespread usage, while 45
percent cited data and model security as the gating factor. JFrog's
Amazon SageMaker integration applies DevSecOps best practices to ML
model management, allowing developers and data scientists to expand,
accelerate, and secure the development of ML projects in an
enterprise-grade manner that complies with regulatory and
organizational requirements.
JFrog's new Amazon SageMaker integration allows organizations to:
- Maintain a single source of truth for data scientists and developers, ensuring all models are readily accessible, traceable, and tamper-proof.
- Bring ML closer to the software development and production lifecycle workflows, protecting models from deletion or modification.
- Develop, train, secure, and deploy ML models.
- Detect and block the use of malicious ML models across the organization.
- Scan ML model licenses to ensure compliance with company policies and regulatory requirements.
- Store home-grown or internally augmented ML models with robust access controls and versioning history for greater transparency (a minimal illustration follows this list).
- Bundle and distribute ML models as part of any software release.
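To make the storage and versioning point concrete, here is a minimal sketch of publishing a trained model archive to a JFrog Artifactory repository using Artifactory's REST deploy API. The host, repository name ("ml-models-local"), model name, and token are illustrative assumptions, not details from JFrog's announcement.

```python
# Minimal sketch: publish a trained model archive to a JFrog Artifactory
# repository so it is versioned, traceable, and access-controlled.
# Host, repository, model name, and token below are hypothetical placeholders.
import hashlib
import os

import requests

ARTIFACTORY_URL = "https://mycompany.jfrog.io/artifactory"  # placeholder host
REPO = "ml-models-local"                                     # placeholder repository
MODEL_FILE = "model.tar.gz"
MODEL_NAME = "churn-classifier"                              # placeholder model
MODEL_VERSION = "1.4.0"

def sha256(path: str) -> str:
    """Checksum sent with the upload so Artifactory can verify integrity."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()

# Properties are attached via matrix parameters on the deploy URL,
# making the stored model searchable by stage, framework, etc.
target = (
    f"{ARTIFACTORY_URL}/{REPO}/{MODEL_NAME}/{MODEL_VERSION}/{MODEL_FILE}"
    f";model.stage=training;model.framework=sklearn"
)

with open(MODEL_FILE, "rb") as payload:
    resp = requests.put(
        target,
        data=payload,
        headers={
            "Authorization": f"Bearer {os.environ['ARTIFACTORY_TOKEN']}",
            "X-Checksum-Sha256": sha256(MODEL_FILE),
        },
        timeout=300,
    )
resp.raise_for_status()
print("Stored at:", resp.json().get("downloadUri"))
```

Pinning the version in the repository path and attaching descriptive properties is what keeps each model traceable and searchable as it moves through the release workflow.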
"Traditional software development processes and machine learning stand
apart, lacking integration with existing tools," said Larry Carvalho,
Principal and founder of RobustCloud.
"Together, JFrog Artifactory and Amazon SageMaker provide an integrated
end-to-end, governed environment for machine learning. Bringing these
worlds together represents significant progress towards harmonizing
machine learning pipelines with established software development
lifecycles and best practices."
Along with its Amazon SageMaker integration, JFrog unveiled new versioning capabilities for its ML Model Management solution
that incorporate model development into an organization's DevSecOps
workflow, increasing transparency around each model version so
developers, DevOps teams, and data scientists can ensure the correct,
secure version of a model is used.
The JFrog integration with Amazon SageMaker, available now for JFrog
customers and Amazon SageMaker users, ensures all artifacts consumed by
data scientists or used to develop ML applications are pulled from and
saved in JFrog Artifactory.
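As an illustration of that flow, the sketch below resolves an approved model version from Artifactory and stages it in Amazon S3, where SageMaker jobs read their artifacts. The repository, bucket, and credential names are assumptions made for the example rather than details from the announcement.

```python
# Minimal sketch of the consumption side: resolve an approved, versioned
# model from JFrog Artifactory and stage it in S3 so an Amazon SageMaker
# training or inference job can reference it.
# Repository, bucket, and credential names are illustrative placeholders.
import os

import boto3
import requests

ARTIFACTORY_URL = "https://mycompany.jfrog.io/artifactory"  # placeholder host
REPO = "ml-models-local"                                     # placeholder repository
MODEL_PATH = "churn-classifier/1.4.0/model.tar.gz"           # placeholder model path
S3_BUCKET = "my-sagemaker-artifacts"                         # placeholder bucket
S3_KEY = f"models/{MODEL_PATH}"

# 1. Pull the versioned model from Artifactory, the single source of truth.
resp = requests.get(
    f"{ARTIFACTORY_URL}/{REPO}/{MODEL_PATH}",
    headers={"Authorization": f"Bearer {os.environ['ARTIFACTORY_TOKEN']}"},
    stream=True,
    timeout=300,
)
resp.raise_for_status()
with open("model.tar.gz", "wb") as f:
    for chunk in resp.iter_content(chunk_size=8192):
        f.write(chunk)

# 2. Stage the artifact in S3, where SageMaker expects model data to live.
boto3.client("s3").upload_file("model.tar.gz", S3_BUCKET, S3_KEY)
print("SageMaker can now reference:", f"s3://{S3_BUCKET}/{S3_KEY}")
```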