Virtualization Technology News and Information
#DockerCon 2019 Q&A: Opsani Will Showcase AI Driven Continuous Optimization at Booth C1


Are you attending DockerCon 2019?  If so, I invite you to add Opsani to your MUST SEE list of vendors.

DockerCon 2019 is right around the corner, taking place April 29th - May 2nd at the Moscone Center in San Francisco, CA.  This is one of the leading container industry conferences, covering all things container-related, including Kubernetes, microservices, and DevOps.

One of the vendors exhibiting this year is Opsani.  If you are attending, make sure to get them on your busy schedule and visit their booth to learn more.  They are a leading provider of machine learning-enabled autonomous operations for DevOps teams, and they plan to showcase an AI offering that continuously optimizes performance and reduces cloud application costs.

Read this exclusive pre-show interview with Opsani to learn what they have planned ahead of the start of the show.  Be in the know!


VMblog:  Attendees are going to want to speak with you.  How can they find you at DockerCon 2019?  Where will you be located?  And how can they follow you?

Opsani:  We will be at booth #C1 at the Moscone Center. We invite attendees to schedule a convenient time to meet with us there. You can also follow us on Twitter @Opsani_ and on our blog.

VMblog:  We've told attendees to come by and visit you.  But can you better articulate WHY they need to add you to their MUST SEE list?

Opsani:  Cost reduction is a major impetus for moving to the cloud, and with Opsani, enterprises can quickly reduce current cloud spend while maintaining or even improving performance. Our customers have realized a positive ROI in less than 30 days. Customers sign up to improve performance and/or save money, but they tell us that nothing has improved their delivery pipeline and release predictability as much as the Opsani AI optimization service.

VMblog:  If an attendee likes what they see and hear at your booth, what message about your product can you send them back with to sell their boss on your technology?

Opsani:  No other solution at DockerCon19 translates as quickly into reduced cloud spend while maintaining or even improving performance. We have shown a positive ROI in less than 30 days. With Opsani, enterprises routinely achieve a 2x improvement in performance and a 60+ percent cost reduction while serving more users with faster response times and keeping cloud costs under control. That translates into millions of dollars of savings a year, along with better performance, an improved delivery pipeline, and release predictability.

VMblog:  Your company talks about Continuous Optimization.  Can you describe what that means, and can you talk about who uses it?

Opsani:  Continuous Optimization (CO) is an advanced form of application performance tuning that measures, predicts and implements parameter changes in cloud-native applications to optimize for cost and performance automatically.

In the past, if you were pushing out code twice a year or even every month, you had enough time to tune application performance. Today, with weekly and daily releases, it becomes virtually impossible for an engineering team to choose the right resources and parameter settings for its microservices, containers, middleware, and cloud instances. Evaluating a few hundred combinations of settings and making the best choice takes time and expertise.  Doing it with 256 billion combinations and a daily release cycle defies human scale. As a result, DevOps teams do the next best thing: they guess, then overprovision to make sure they have enough resources to maintain uptime, deliver adequate performance, and still ship new releases.
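To see how a configuration space reaches that scale, consider that the number of combinations grows multiplicatively with each tunable parameter. The parameter names and option counts below are hypothetical examples, not Opsani's actual tuning dimensions:

```python
# Illustrative only: each parameter multiplies the size of the search space.
from math import prod

options_per_parameter = {
    "cpu_limit": 8,       # e.g. 8 candidate CPU limits per container
    "memory_limit": 8,    # 8 candidate memory limits
    "replica_count": 10,  # 10 possible replica counts
    "jvm_heap": 10,       # 10 middleware heap sizes
    "gc_policy": 4,       # 4 garbage-collector policies
    "instance_type": 20,  # 20 cloud instance types
}

# Combinations for just one service:
per_service = prod(options_per_parameter.values())
print(per_service)  # 512000
```

With several interacting microservices, the joint space is the product of each service's space, which is why hand-tuning on a daily release cycle is infeasible.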

We see CO as the next stage of your Continuous Integration (CI) /Continuous Delivery (CD) pipeline. CI and CD have increased the velocity of new code releases 100X over the last decade, making tremendous strides in software delivery. When you turn CI/CD into CI/CD/CO and use AI to optimize the configurations for your app and infrastructure, the results are really spectacular.

First of all, our optimization service is built into your production environment and continuously searches for an even better result. This means optimization is no longer a one-off project, and you don't have to wait for each release to go through performance testing.  You can validate the performance of new releases with live testing in canaries and then automatically update the stack as they pass. This eliminates a huge barrier to achieving continuous deployment and can increase release velocity 4x.
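The canary gate can be pictured as a simple comparison of live metrics against the current baseline. This is a minimal sketch under assumed metric names and a hypothetical tolerance, not Opsani's actual API:

```python
# Hypothetical pass/fail check for a canary release, comparing its live
# metrics against the currently deployed baseline.
def canary_passes(baseline, canary, tolerance=0.05):
    """Return True if the canary's metrics are within tolerance.

    `baseline` and `canary` are dicts like
    {"latency": seconds, "throughput": requests_per_second}.
    """
    latency_ok = canary["latency"] <= baseline["latency"] * (1 + tolerance)
    throughput_ok = canary["throughput"] >= baseline["throughput"] * (1 - tolerance)
    return latency_ok and throughput_ok

# A canary with slightly better latency and equal throughput passes,
# so the pipeline would automatically roll it out to the full stack.
baseline = {"latency": 0.120, "throughput": 1000}
canary = {"latency": 0.110, "throughput": 1000}
print(canary_passes(baseline, canary))  # True
```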

The real benefits of CO go beyond the impact on your pipeline.  Defensive overprovisioning of bound resources means that every level of your stack carries substantial operational inefficiencies. Our AI attacks this in stages.  First, it looks for the set of parameters that maximizes performance and throughput while minimizing latency. Then it tries to achieve the same result while minimizing costs.  Finally, it looks for the combination of performance and cost that yields the greatest efficiency.
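The staged search can be sketched as follows. Here `measure` is a hypothetical hook returning observed metrics for a candidate configuration, and the exhaustive scoring is illustrative; a real optimizer would use ML-guided search over a space far too large to enumerate:

```python
# Sketch of a staged optimization: best performance first, then lowest cost
# near that performance level, then the best overall efficiency trade-off.
def staged_optimize(candidates, measure, slack=0.95):
    scored = [(c, measure(c)) for c in candidates]
    perf = lambda m: m["throughput"] / m["latency"]

    # Stage 1: find the best raw performance achievable.
    best_perf = max(perf(m) for _, m in scored)

    # Stage 2: among configs within `slack` of that performance, pick the cheapest.
    near_best = [(c, m) for c, m in scored if perf(m) >= slack * best_perf]
    cheapest = min(near_best, key=lambda cm: cm[1]["cost"])[0]

    # Stage 3: identify the most efficient trade-off overall
    # (performance per unit cost) across all candidates.
    most_efficient = max(scored, key=lambda cm: perf(cm[1]) / cm[1]["cost"])[0]

    return cheapest, most_efficient

# Hypothetical measurements for three configurations:
metrics = {
    "a": {"throughput": 100, "latency": 10, "cost": 50},
    "b": {"throughput": 95, "latency": 10, "cost": 20},
    "c": {"throughput": 50, "latency": 10, "cost": 5},
}
print(staged_optimize(metrics, metrics.__getitem__))  # ('b', 'c')
```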

VMblog:  How are you different from other AI Ops players today?

Opsani:  The term AIOps was initially coined by Gartner, who defined it as a platform to provide greater insight and smart alerts for better analysis to human engineers.

The challenges we see DevOps teams face in optimizing modern applications are too complex to solve with alerts. At Opsani, we believe that AIOps systems must learn from your data and adapt to how your app works -- meaning they won't do the same thing every time.  AIOps systems should be able to make and implement decisions without human intervention -- although you can specify constraints and keep a human in the loop until you trust the system. Most importantly, AIOps systems should become a standard part of your delivery pipeline and run continuously, at every new deployment. They reduce your work rather than creating additional tickets your team needs to look at.

VMblog:  And how is Continuous Optimization different from Application Performance Monitoring (APM)?

Opsani:  APM is a white-box approach that focuses on code and on human understanding of complex systems. It lets developers deeply understand how their code works and then devise new approaches or optimize it. APM has been with us for the last two decades and has gained adoption despite the complexity and commitment required to use it.

Continuous Optimization is a new, cloud-native approach, enabled by the latest advances in cloud technologies and continuous integration/continuous delivery. It goes beyond just the code and affects the runtime systems, application deployment, and operations. Unlike APM, which requires deep, focused work by engineers to comprehend behaviors and make code changes in human-scale projects, Continuous Optimization works autonomously on every release, substantially reducing the amount of repetitive work engineers must do to ensure smooth, reliable performance at the most efficient price point.

VMblog:  If you would, please explain or give readers a few reasons why your product or service is considered unique?

Opsani:  Opsani measures performance at the system level and optimizes against the business objectives set by the customer. Rather than trying to find the right settings for a single component in isolation, we measure the overall output against the multiple inputs.  The configuration the AI arrives at frequently defies everyone's expectations, because it is not limited by human preconceptions.

This holistic approach simultaneously simplifies the human decision-making process and gives our customers a lot of flexibility.  If your business goals suddenly shift from growth and maximizing performance to cutting costs, Opsani can make that adjustment on the fly; you do not have to change anything in your process or toolchain.

VMblog:  In your expert opinion, what, if anything, is holding containers back?  Or are you seeing a change in the technology's growth pattern?

Opsani:  Adoption of containers has practically exploded since the first DockerCon in 2014. Containers provide several strategic benefits that enable enterprises to scale their software development linearly and to streamline deployment and operations. In particular, container technologies have accelerated the adoption of CI/CD and the ability to meld Dev and Ops, resulting in much faster releases, better quality, and easier architecture changes.

For all their benefits, containers bring additional complexity in the form of an increased number of moving parts. Instead of a single monolithic application server, applications are now built out of tens and sometimes hundreds of interconnected containers. Professional developer tools have had to race to keep pace; debuggers, for example, have evolved into sophisticated distributed tracing tools.

Understanding the relationships among the myriad containers, and correctly configuring and sizing them and their interactions, is increasingly causing friction and slowing the pace of adoption, sometimes even resulting in the anti-pattern of "monolith containers". Being able to observe and manage the group of containers that comprise an application as one entity will remove that roadblock.

VMblog:  Give us a quick overview of how one of your customers is using your product.

Opsani:  Our service provides various optimization scenarios, and it is up to the customer to choose which results make the most business sense. The impact Opsani has made for organizations has been significant: we have seen up to a 30% increase in performance at the same cost, and up to an 80% decrease in costs without reducing performance. Between those extremes, clients select the optimal scenario for their business goals. That translates into millions of dollars a year in savings while improving performance and velocity.

VMblog:  Finally, attendees like trade show giveaways.  Are you giving away anything at your booth this year?

Opsani:  This is Opsani's first time at DockerCon, and aside from the usual tchotchkes, t-shirts, and stickers, we want our visitors to have something tangible they can use when they go back to the office. So we are conducting an online survey to capture the current state of cloud application performance optimization, which we will share with all participants, and with VMblog, of course. This is just another venue to highlight the needs and benefits of controlling runtime costs and improving efficiency through AI-enabled continuous optimization of performance.


Published Thursday, April 25, 2019 7:35 AM by David Marshall