In this new era of cloud
computing, how organizations build, deploy, and manage applications is changing
radically. In the world of VMs, we understand how to scale applications to meet
SLAs for security, reliability, performance, and elasticity. With the introduction
of containers and microservices, the rules change. The software stack is being
re-imagined.
One of the catalysts driving this change in Silicon Valley is the
venture firm Benchmark Capital, backers of Docker, Hortonworks, the Apache
Kafka company Confluent, and now a new startup. They're leading a $10.5 million
Series A round for San Francisco startup Buoyant, which has created a new layer
in the cloud software infrastructure stack that it calls a "service mesh." I
recently spoke with former Twitter engineer and Buoyant co-founder and CEO
William Morgan to learn more.
VMblog: You seem like you're trying to be to the new cloud-native
networking stack what Cisco was to the TCP/IP stack. Is that a fair
comparison?
William Morgan: It's a flattering comparison and I'll happily take it. Cisco was
at the forefront of a massive industry transformation onto TCP/IP. They provided
some real value to their customers as part of that transition. With Buoyant, I
believe we're seeing a similar, industry-wide transformation with the move to
cloud native architectures. The details are different, of course, but there are
some real analogies between the concept of the service mesh and TCP/IP itself
-- just up a few layers of abstraction.
VMblog: How do the breakdowns between services look different in the
cloud-native world vs. the old virtual machine world? And what sorts of outages
are most common in cloud-native / microservices stacks?
Morgan: The biggest difference between the VM world and the cloud native
world is the application architecture that the cloud native environment enables --
specifically, microservices. Why? Because with containers and container
orchestrators, the cost of moving to microservices is now dramatically reduced.
It's always been a good idea; it's just been painfully expensive. Now it's
cheap and people are doing it, but of course a whole new set of failures
gets introduced by the fact that you're now running a big
distributed system. You have new failure modes where one small issue can easily
cascade and take down the entire application, because of the way that
cross-service communication happens. Linkerd helps with that.
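To make the cascade Morgan describes concrete, here is a minimal Go sketch (my illustration, not Buoyant's code): a caller with no deadline on a cross-service call stacks up blocked workers when its downstream slows, while a bounded timeout -- the kind of policy a service mesh can apply uniformly, rather than per-application -- lets the caller fail fast. The service URL is a hypothetical placeholder.

    // Sketch: a cross-service call with and without a deadline.
    package main

    import (
        "context"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Without a timeout, a slow downstream holds this connection
        // (and the caller's worker) indefinitely; under load, those
        // stuck callers are what turn one slow service into an outage.
        unbounded := &http.Client{} // no Timeout set

        // With a bounded deadline, the caller fails fast and can shed
        // load or retry elsewhere.
        bounded := &http.Client{Timeout: 500 * time.Millisecond}

        ctx, cancel := context.WithTimeout(context.Background(), time.Second)
        defer cancel()
        req, _ := http.NewRequestWithContext(ctx, "GET",
            "http://users.internal/profile", nil) // hypothetical service URL

        if _, err := bounded.Do(req); err != nil {
            fmt.Println("failed fast:", err)
        }
        _ = unbounded // shown only for contrast
    }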
VMblog: Describe who is using the Linkerd service mesh
- what do their requirements look like, and how were they tackling the problem
before they discovered they needed a service mesh?
Morgan: That's a fun one, because it's actually quite different from what
we expected when we started out. We thought it would be the real high-scale
companies with tons of traffic. Instead, it's been companies of all sizes and
all traffic scales, and the thread tying them together is the move to a
cloud native architecture. From startups like Monzo to big companies like
PayPal, the unifying theme is that they're adopting things like Kubernetes and
Docker and finding that Linkerd solves a class of significant challenges for
them.
VMblog: Do you see the service mesh ever being used in VM
environments, or is this strictly for cloud-native / new applications?
Morgan: Absolutely. One of our favorite use cases for Linkerd is
introducing something like Kubernetes into an existing environment. You never
adopt Kubernetes in a vacuum; you add it to your existing stack. So we've
focused on making Linkerd work in every possible environment, including VMs and
physical hardware and everything else we can get our hands on. One of the
biggest values of the service mesh is having a uniform, consistent layer for
cross-service traffic across your entire environment. Then you can decouple
application code from the underlying infrastructure and migrate things back and
forth at your own pace. So Linkerd allows you to be poly-environment in the
same way that Docker allows you to be polyglot.
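For a sense of what that "uniform layer" pattern looks like from the application's side, here is a hedged Go sketch: the app addresses a service by its logical name and hands the request to a local mesh proxy, which decides whether the call lands on a VM, a Kubernetes pod, or bare metal. localhost:4140 is Linkerd 1.x's conventional default HTTP port, and the service name "users" is a hypothetical example; the routing rules themselves live in the mesh's configuration, not in this code.

    // Sketch of the "uniform layer" pattern: the app names the logical
    // service and routes through a local mesh proxy; migrating the
    // backend between environments requires no application change.
    package main

    import (
        "fmt"
        "net/http"
        "net/url"
    )

    func main() {
        // Assumed: a linkerd-style proxy listening locally on port 4140.
        proxy, _ := url.Parse("http://localhost:4140")
        client := &http.Client{
            Transport: &http.Transport{Proxy: http.ProxyURL(proxy)},
        }

        // The app only knows the logical name; routing, retries, and
        // the VM-vs-Kubernetes decision live in the mesh's config.
        resp, err := client.Get("http://users/profile") // hypothetical service
        if err != nil {
            fmt.Println("request failed:", err)
            return
        }
        defer resp.Body.Close()
        fmt.Println("status:", resp.Status)
    }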