There are a lot of cloud services out there, and the number is exploding. There is a great deal of innovation that enterprises can take advantage of, but they also depend on legacy systems. Enter TriggerMesh. The company's current offerings include integrations for a variety of multi-cloud and legacy applications in an effort to respond to this challenge. To learn more, VMblog spoke with Mark Hinkle, Co-Founder & CEO at TriggerMesh.
VMblog: What problem is TriggerMesh focused on solving?
Mark Hinkle: Most businesses these days rely on multiple clouds. Increasingly,
developers want to build applications that integrate functionality and
data from apps running on these different clouds. It's not dissimilar to
the way developers pick and choose from open source libraries to build
apps. Except now, many of the "libraries" are cloud services from Twilio,
Salesforce, and the like, or the data they need to power their app is spread
across any number of cloud and on-premises databases and storage environments.
TriggerMesh enables developers to weave their polyglot cloud and on-prem
environment together using cloud native approaches.
VMblog: Who has this problem? And what types of
businesses/users?
Hinkle: At a high level, customers are bringing us two types of challenges. One is
getting real-time event data out of different SaaS applications and clouds
into other applications. For example, we helped one business consolidate
their multi-cloud Application Performance Management (APM) data into
Datadog in real time. Their multi-cloud footprint was pretty broad,
including AWS, Azure, and Oracle Cloud. In this case, the problem wasn't
managing multiple cloud services, but rather integrating them.
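To give a flavor of what that looks like on our side, here is a rough sketch of such a bridge, assuming an AWS CloudWatch-style source and a Datadog target from the TriggerMesh catalog. The kind names reflect the catalog, but the spec fields shown are illustrative rather than the exact CRD schemas:

# Illustrative sketch only: field names are approximate, check the CRDs for exact schemas.
apiVersion: sources.triggermesh.io/v1alpha1
kind: AWSCloudWatchSource
metadata:
  name: aws-apm-metrics
spec:
  region: us-east-1                     # placeholder region
  credentials:
    secretName: aws-creds               # illustrative auth wiring
  sink:
    ref:
      apiVersion: targets.triggermesh.io/v1alpha1
      kind: DatadogTarget
      name: datadog
---
apiVersion: targets.triggermesh.io/v1alpha1
kind: DatadogTarget
metadata:
  name: datadog
spec:
  apiKey:
    secretKeyRef:                       # illustrative secret reference
      name: datadog
      key: apiKey

Each cloud in the footprint gets its own source object pointed at the same target, and the metric events flow in as they happen.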
On the other side, we work with some customers that are mature DevOps
shops that have a cloud native Kubernetes environment. Generally, these
customers are looking to modernize established applications and processes
to make them more efficient, more event-driven, and more responsive. These
types of customers tend to be large and established, and often operate in
regulated industries. Our Kubernetes-based declarative API, which lets them
define integrations as code and manage those integrations in their CI/CD
pipeline, appeals to these customers. Integrations with these customers tend
to be more complex, involving multiple event sources, sophisticated brokers
that act as an event store, splitter, and/or transformer, and often
intermediary steps before the events arrive at their target application. For
these integrations, the fact that TriggerMesh uses Knative serverless
eventing to connect applications is a critical advantage. The loosely
coupled, scale-to-zero, and event-driven characteristics of TriggerMesh
integrations match their architectural direction.
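Underneath, that routing is standard Knative eventing. As a minimal sketch, with placeholder names and a placeholder CloudEvent type, a broker plus a trigger that filters which events reach a given target looks roughly like this:

apiVersion: eventing.knative.dev/v1
kind: Broker
metadata:
  name: apm-events
---
apiVersion: eventing.knative.dev/v1
kind: Trigger
metadata:
  name: metrics-to-datadog
spec:
  broker: apm-events
  filter:
    attributes:
      type: com.example.metric          # placeholder event type
  subscriber:
    ref:
      apiVersion: targets.triggermesh.io/v1alpha1
      kind: DatadogTarget
      name: datadog

Because the pieces are loosely coupled through the broker, sources, transformations, and targets can be added or swapped without touching each other.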
In terms of industries, we see a lot of interest from financial services,
technology, manufacturing, and media.
VMblog: What is different about today's integration
challenges versus those of the past?
Hinkle: We talk about TriggerMesh as a cloud native integration platform. If
we look at the last part of that, integration platform, this is no different
from what we've been doing in the enterprise for quite a while. Namely,
integrating different applications, usually through some type of messaging
system, point A to point B, through a messaging channel. It goes back to some
of the basic principles of service-oriented architecture (SOA), the
enterprise service bus (ESB), and so on. The enterprise integration patterns
are well known.
In many ways, enterprises today are doing exactly the same thing: building
and integrating applications. The big difference is that now we need to
start doing it in a cloud native manner and leverage our cloud migration and
adoption initiatives.
This means that the application building blocks are containerized and they
frequently run on a platform like Kubernetes. Plus, being cloud native means
that a lot of the endpoints we're trying to integrate come from the cloud.
They come from infrastructure clouds, or generic clouds like Google, Azure,
and AWS. They come from ad hoc clouds like Salesforce and Twilio. So very
much, the integrations we are now doing are multi-cloud.
That's what we mean by cloud native. And this new mode of building apps
creates a unique set of integration challenges to bring multiple clouds
together with on-premises applications. So, TriggerMesh focuses on
integrating this multi-cloud and cloud native environment and, critically,
we help automate all these integrations. Taken together, integration and
automation accelerate our customers' time to market when building a new
app and bringing it to their customers.
VMblog: You talk about automation - what do you mean by
that? Is it similar to the DevOps definition exemplified by vendors like
Ansible, Chef, HashiCorp, etc.?
Hinkle: Yes, this is exactly the idea. Before DevOps practices were defined
and associated automation systems like Chef, Ansible, and Terraform started
being used widely, developers would submit system requests to the
infrastructure team and wait for the servers or VMs to be provisioned.
DevOps practices, in conjunction with containers and microservices, brought
self-service and automation, which sped up innovation. A fully automated
flow today looks like this: your source code is stored in a version control
repository; when you push a change, it triggers cloud build jobs, which
generate images that get stored in a container registry.
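As a sketch of that flow, and purely as an illustration (the workflow below assumes GitHub Actions; the image name, registry, and secrets are placeholders):

name: build-and-push
on:
  push:
    branches: [main]
jobs:
  image:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image
        run: docker build -t registry.example.com/orders-api:${{ github.sha }} .
      - name: Push image
        run: |
          # Log in to the placeholder registry and push the freshly built image
          echo "${{ secrets.REGISTRY_PASSWORD }}" | docker login registry.example.com \
            -u "${{ secrets.REGISTRY_USER }}" --password-stdin
          docker push registry.example.com/orders-api:${{ github.sha }}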
With Kubernetes, your application is now defined with a declarative API
and the Kubernetes controllers do all the work to reach the desired state
of your application. Similar to infrastructure definitions like a Terraform
plan, the deployment manifests can also be stored in your version control
system. The same infrastructure automation principles are now being used for
applications, and this fostered the advent of the GitOps mindset.
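For example, the application itself becomes a manifest you keep in Git next to your Terraform. A minimal sketch, with a hypothetical service name and the image produced by the build above:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: orders-api                      # hypothetical application
spec:
  replicas: 3
  selector:
    matchLabels:
      app: orders-api
  template:
    metadata:
      labels:
        app: orders-api
    spec:
      containers:
        - name: orders-api
          # Image built and pushed by the CI job; a GitOps tool or the pipeline
          # updates this tag and Kubernetes reconciles to the new desired state.
          image: registry.example.com/orders-api:1.4.2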
We do exactly the same thing at TriggerMesh for your integrations. All
components of an integration (sources, sinks, event splitters,
transformations, etc.) are defined with a declarative API, and the complete
integration is described "as code" and stored in a version control
repository. And this all happens continuously.
VMblog: Why do enterprises need to think about automating
their integrations?
Hinkle: As enterprises embrace DevOps and cloud native, they are automating their
entire software development lifecycle (SDLC), and so it is essential that
they can automate their integrations as well. Otherwise, the integrations
that developers need to bring apps to market become a bottleneck.
TriggerMesh helps by providing a way to automate using declarative
programming for integration; we call this integration as code. We want
integrations to be defined through a set of API objects, following the
OpenAPI specification, and very friendly to the entire Kubernetes ecosystem.
So think of it as using Kubernetes API objects to
declare integrations. For every source, every target, every
transformation, and every bridge (our term for an integration) that you
build, TriggerMesh has an API object. So we have a Bridge kind, we have a
Salesforce source kind, we have a Splunk target kind, and so on. If you need
to transform the actual event, delete keys, add keys, you name it, we have a
Transformation kind.
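Strung together, a simple Salesforce-to-Splunk flow reads roughly like this; the kinds mirror what I just described, but the spec fields are illustrative rather than the exact CRD schemas:

apiVersion: sources.triggermesh.io/v1alpha1
kind: SalesforceSource
metadata:
  name: crm-changes
spec:
  auth:
    credentialsFromSecret:              # illustrative auth wiring
      name: salesforce-oauth
  sink:
    ref:
      apiVersion: flow.triggermesh.io/v1alpha1
      kind: Transformation
      name: scrub-keys
---
apiVersion: flow.triggermesh.io/v1alpha1
kind: Transformation
metadata:
  name: scrub-keys
spec:
  data:
    - operation: delete                 # drop keys we don't want downstream
      paths:
        - key: ContactEmail
  sink:
    ref:
      apiVersion: targets.triggermesh.io/v1alpha1
      kind: SplunkTarget
      name: splunk
---
apiVersion: targets.triggermesh.io/v1alpha1
kind: SplunkTarget
metadata:
  name: splunk
spec:
  endpoint: https://splunk.example.com:8088   # placeholder HEC endpoint
  token:
    secretKeyRef:
      name: splunk-hec
      key: token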
And of course, when we talk about integration as code, this means that the
integrations can be driven by your CI/CD system and continuously delivered
to your setup. Managing and deploying integrations with CI/CD, whether that's
Jenkins, CircleCI, Tekton, or something else, means all your integrations
benefit from this powerful automation. Your integrations inherit your CI/CD
security, review process, testing, and so on. This is necessary to match the
expectations enterprises have based on their experience with ESB-based
integration tools. When the necessary controls are built into your CI/CD
pipeline, adding TriggerMesh integration as code makes getting the
integrations you need faster and easier without sacrificing enterprise
security.
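In practice, the pipeline stage that delivers the integrations can be as simple as applying the manifests from the repository. A hypothetical Tekton task, with a placeholder image and directory:

apiVersion: tekton.dev/v1beta1
kind: Task
metadata:
  name: deploy-integrations
spec:
  workspaces:
    - name: source                      # the cloned repository
  steps:
    - name: apply-manifests
      image: bitnami/kubectl:latest     # placeholder kubectl image
      workingDir: $(workspaces.source.path)
      script: |
        #!/bin/sh
        # Apply every integration definition stored alongside the application code
        kubectl apply -f integrations/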
VMblog: It sounds like TriggerMesh could replace cloud
migration/integration consultants. Fair statement? If so, please explain.
Hinkle: There are many great consultants out there, and I don't know if
TriggerMesh replaces them. But what we do believe is that the automated
multi-cloud integration we provide means they can spend less time fiddling
with field-mapping between different clouds, and less time building
brittle, one-off integrations. Instead, they get a replicable, scalable, and
automated way to do the cross-cloud and cloud-to-on-premises
plumbing. With less time spent on the mechanics of integration,
consultants can spend more time helping more enterprises implement their integration
strategy.
VMblog: This raises the question - who within enterprises is
responsible for integrations? Historically, this has been something an
Enterprise Architect looked after. But lately I see progressive companies
with Cloud Native Architects. What are you seeing?
Hinkle: This is a great question, and I think the answer is partly "it depends"
and partly it is TBD. The emergence of DevOps and infrastructure as code
certainly shifted some traditional infrastructure responsibilities away
from SysAdmins to Developers. But it did not eliminate the need for Ops
experts. It was a cultural shift that helped both types of professionals
understand and share information about where and how infrastructure was
deployed and what each workload needed to maximize performance, etc. We
think a similar shift is occurring with integrations. As developers seek
more autonomy to build the integrations they need to bring their apps to
market, we do think some of the responsibility for integrations will shift
to development teams and to cloud and cloud native Architects. We're also
seeing Product Owners (in the Scrum sense) assuming responsibility for
integrations in some enterprises.
VMblog: What's coming next from TriggerMesh? What new
features are on the horizon and why are they important?
Hinkle: We have some pretty exciting partnership news
coming down the pike that we can't wait to share with everyone soon. On
the product side, we are expanding our integration as code capability to
make it even easier to define integrations (think HashiCorp Configuration
Language, but for integrations). And we are working to enhance our UI and
make that more self-service and intuitive. What's cool is that some of our
customers start out with a DevOps team using our declarative API to build
integration as code. Then they expose these integrations to their internal
users-who are often less technical-with the TriggerMesh UI. One such
customer made the comment that it's like creating their own internal
"Zapier." Other features we're working on will support this usage model,
such as what we call Bring Your Own, which will basically be an
integration wrapper that customers can put around any internal app or
service so it can be an integration source or target in TriggerMesh. So,
lots to talk about in our next Q&A!
##