Lightrun Introduces Runtime Autonomous AI Debugger: Revolutionizing Developer Observability in the GenAI Era


As Generative AI becomes ubiquitous, engineering teams are rushing to leverage genAI-based tools to boost developer productivity. At the same time, adopting such tools poses risks including code hallucinations and bugs. To learn how genAI-based runtime debugging ensures software quality in the AI era, I spoke with Ilan Peleg, co-founder and CEO of Lightrun, a company focused on providing tools that ensure Developer Observability. Today, Lightrun announced the industry's first Runtime Autonomous AI Debugger to automate end-to-end production debugging and free developers from endless troubleshooting cycles.

VMblog: It's been a while since we last spoke. Can you give us an update on what Lightrun has been up to, and what "Developer Observability" is all about?

Ilan Peleg:  We are continuing to add significant capabilities to our core platform, focused on connecting developers to their live applications and reducing the time they spend troubleshooting issues. Developers often lack visibility into their applications at runtime, so "Developer Observability" is all about uniting dev and ops workflows and giving developers more ownership. With Lightrun, developers can securely inject logs, metrics, and traces in real time while applications are running, rather than relying on the typical monitoring and observability approach, which keeps each layer of the SDLC siloed.

We also work hard to support users at any scale, no matter where they run applications - on-premises, serverless, or in the public cloud. Further, Lightrun gives developers visibility into any code they interact with: proprietary, legacy, and third-party. Our mission is to make it as easy as possible for developers to understand everything that happens in their live applications, quickly and regardless of where the application is deployed.
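To make the idea of runtime log injection concrete, here is a minimal, purely illustrative Python sketch of the concept - a toy agent that attaches a log message to a file and line number in a running process, without a code change or redeploy. The DebugAgent class and its methods are hypothetical and are not Lightrun's actual API.

```python
import sys


class DebugAgent:
    """Toy illustration of dynamic log injection (not Lightrun's API)."""

    def __init__(self):
        # (filename suffix, line number) -> message template
        self.actions = {}

    def add_log_action(self, filename, lineno, template):
        # In a real product this request would come from the IDE and be
        # pushed to an agent running inside the live process.
        self.actions[(filename, lineno)] = template

    def _trace(self, frame, event, arg):
        if event == "line":
            for (filename, lineno), template in self.actions.items():
                if frame.f_lineno == lineno and frame.f_code.co_filename.endswith(filename):
                    # Read-only: format the message from live local variables.
                    print("[dynamic log]", template.format(**frame.f_locals))
        return self._trace

    def start(self):
        # Trace the current thread so injected logs fire as lines execute.
        sys.settrace(self._trace)


# Usage sketch: log the value of `total` whenever line 42 of app.py runs.
agent = DebugAgent()
agent.add_log_action("app.py", 42, "total is now {total}")
agent.start()
```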

VMblog: We have seen the rise of genAI-based developer productivity tools like Copilot, but they introduce a lot of risks. How does Lightrun's "shift left observability" approach combat this?

Peleg:  The race to adopt genAI-based programming tools to maximize development velocity is only accelerating, and these tools are incredibly helpful to the development process. They can make developers more productive and let them focus on the parts of their code that are most complex and valuable. However, they can also introduce bugs and code "hallucinations" because they rely on external genAI frameworks that lack enterprise context. When developers use code they did not write, it becomes much harder for them to see and understand what's happening at runtime.

Essentially, shifting observability left allows developers to be proactive about debugging, because it happens continuously and in real time. When developers can insert logs, metrics, and traces during runtime, they identify issues much earlier in the pipeline, which ultimately reduces Mean Time to Resolution (MTTR) and logging costs.

This approach is also cost-effective - instead of "log everything, analyze later," we give developers the ability to log only and exactly what they need, right when they need it. Shift left observability goes hand in hand with the overall rise of the cloud-native production mindset. Given that so many teams are implementing CI/CD, progressive delivery, and testing in production, dev-native observability has never been more critical.

VMblog: Today you announced the launch of Lightrun's Runtime Autonomous AI Debugger. How does this tool move beyond existing solutions to enable teams to safely integrate genAI into development workflows?

Peleg:  It is the industry's first Runtime Autonomous AI Debugger, and we are thrilled to launch it in private beta. It is a safety net for developers as they integrate genAI into the development lifecycle. The tool is based on our proprietary runtime debugging genAI model and allows developers to troubleshoot applications by automating live production debugging, ultimately cutting MTTR down to just minutes. The AI debugger works across the entire debugging journey, from receiving the ticket to identifying the specific line of code responsible. The best part is that this all happens within the developer's IDE of choice. We think it is critical to remove change management obstacles, so Lightrun integrates with developers' existing IDEs, tools, pipelines, workflows, and cloud platforms - and supports the JVM, Python, Node.js, and .NET.

More specifically, the debugger mimics and automates the workflow developers follow today to troubleshoot issues at runtime. The process is iterative and begins by using observability and ITOps signals to determine a potential root cause. Next, the AI debugger adds dynamic logs and snapshots to specific lines of code using our observability SDK, which unlocks ultra-granular runtime debugging. Finally, our runtime debugging genAI models suggest likely root causes and validate them against production data gathered by the SDK. The cycle repeats until the root cause of the issue is identified.
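The loop Peleg describes can be summarized in pseudocode. The sketch below is only an illustration of that described workflow, not Lightrun's implementation; every helper passed in (fetch_signals, propose_hypotheses, place_dynamic_actions, collect, validate) is a hypothetical stand-in.

```python
def autonomous_debug(ticket, fetch_signals, propose_hypotheses,
                     place_dynamic_actions, collect, validate,
                     max_iterations=5, confidence_threshold=0.9):
    """Illustrative loop: hypothesize, instrument, validate, repeat."""
    evidence = fetch_signals(ticket)  # observability and ITOps signals tied to the ticket
    for _ in range(max_iterations):
        # A genAI model proposes suspect code paths from the evidence gathered so far.
        for hypothesis in propose_hypotheses(ticket, evidence):
            # Attach read-only dynamic logs/snapshots to the suspect lines.
            actions = place_dynamic_actions(hypothesis["file"], hypothesis["lines"])
            runtime_data = collect(actions)             # evidence from the live application
            score = validate(hypothesis, runtime_data)  # does production data support it?
            if score >= confidence_threshold:
                return hypothesis                       # root cause narrowed to specific lines
            evidence = evidence + runtime_data          # feed findings into the next pass
    return None  # hand off to a developer with everything gathered so far
```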

In typical observability workflows, developers are quite removed from their live applications. The process has been largely reactive, static, and non-agile, requiring a restart or redeploy of the app just to add telemetry. Worse, the mindset of logging everything to look at later adds a tremendous financial burden to organizations, because most of that log data is never queried. We are advancing a completely new approach that gives developers more responsibility for the performance of their applications, yet makes it easy to take ownership by automating the process, increasing visibility, and integrating with their preferred toolsets.

VMblog: Can you say more about the three tiers of your platform, which now include the new genAI capabilities?

Peleg:  The Runtime Autonomous AI Debugger is part of our three-tier platform, which includes the Lightrun Client, the Lightrun SDK, and the Lightrun Management Server.

The Client is our IDE plugin, which allows developers to add new logs, snapshots, and metrics directly from the IDE. It is currently available for IntelliJ, PyCharm, WebStorm, VS Code, Visual Studio, certain web IDEs, and more. Lightrun's plugins are totally self-contained, so developers can write code and observe the application at the same time. The Client never transmits source code over the wire - only file names and line numbers, which are used to determine where to insert Lightrun's actions in the user's code.
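As an illustration of what a metadata-only request like this could look like, here is a short, hypothetical Python sketch; the field names are invented for the example and do not describe Lightrun's wire format.

```python
from dataclasses import dataclass


@dataclass
class LogActionRequest:
    """Hypothetical metadata-only request: no source code, just where and what to log."""
    file_name: str     # e.g. "OrderService.java" - identifies the file to instrument
    line_number: int   # the exact line at which the action should fire
    template: str      # the log message to evaluate, e.g. "order id: {orderId}"
    agent_id: str      # which running instance should receive the action


request = LogActionRequest(
    file_name="OrderService.java",
    line_number=142,
    template="order id: {orderId}",
    agent_id="payments-instance-7",
)
```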

The Lightrun SDK is the core of the platform and ensures the running application's state is never disturbed. The SDK handles adding, removing, and monitoring the state of Lightrun Actions, and is easy to deploy on containers, Kubernetes, serverless, and bare metal. We also support various runtimes and CPU architectures. Most importantly, our patented Lightrun Sandbox is included in every SDK to ensure each action is read-only and performant.
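The read-only guarantee is the interesting part. The snippet below is a toy Python illustration of that general idea - rejecting expressions that could mutate state and muting an action once it exceeds a small time budget - and is not a description of the patented Lightrun Sandbox itself.

```python
import ast
import time


def is_read_only(expression: str) -> bool:
    """Reject expression forms that can obviously mutate state (calls, walrus assignments)."""
    try:
        tree = ast.parse(expression, mode="eval")
    except SyntaxError:
        return False
    return not any(isinstance(node, (ast.Call, ast.NamedExpr)) for node in ast.walk(tree))


class BudgetedAction:
    """Evaluate an expression against live locals, but go quiet once a time budget is spent."""

    def __init__(self, expression: str, budget_seconds: float = 0.010):
        if not is_read_only(expression):
            raise ValueError("expression may have side effects; refusing to attach it")
        self.expression = expression
        self.budget = budget_seconds
        self.spent = 0.0

    def evaluate(self, local_vars: dict):
        if self.spent >= self.budget:
            return None  # over budget: stop evaluating rather than slow the application
        start = time.perf_counter()
        result = eval(self.expression, {"__builtins__": {}}, dict(local_vars))
        self.spent += time.perf_counter() - start
        return result


# Usage sketch
action = BudgetedAction("price * quantity")
print(action.evaluate({"price": 3, "quantity": 4}))  # -> 12
```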

The Lightrun Management Server contains the state of the system and gives administrators a user-friendly interface to configure privacy and security. It acts as a mediator between active developers and Lightrun SDKs to coordinate Lightrun Actions. Again, the server does not ingest code, only metadata, so we will never process, store, or send source code to any external party.

And now, we've added these powerful automated genAI debugging capabilities into the platform to make it even faster, easier, and more secure for organizations to implement Developer Observability.

VMblog: There is already a lot of talk about shifting left in the industry, and some observability vendors are transitioning to AIOps techniques. What makes Lightrun's platform different?

Peleg:  The take-off of the shift left movement is at the heart of everything we do. However, shifting observability left is lagging behind, as tooling remains mostly separate from the development workflow. As vendors move towards AIOps, their tools are still operational in nature and not integrated into the code development workflow or the IDE. Because of that, they struggle with code-level issues. This is where Lightrun's Developer Observability framework, and the new Runtime Autonomous AI Debugger, really shines. By letting developers dynamically instrument live applications with logs, metrics, and traces from inside the IDE, we provide the most granular visibility possible - down to the individual line. This puts developers in concert with their live applications, capturing real-time context to ensure code quality, resilience, and secure genAI transformations.

It is quite transformative for developers to see the runtime state of their code, line by line, right in the IDE - it's not typically how the lifecycle works. And when adopting genAI developer productivity tools, this method gives developers confidence that they can rely on the machine-generated code, because they can see its behavior in full context, continuously.

VMblog: Is there anything else our readers should know? What can they expect from Lightrun moving forward?

Peleg:  A couple of things. For people who want to learn more, we have what we call our playground, where you can just enter an email address to play around with Lightrun and a real, live application - no configuration required.

And moving forward, on the product side, we will continue to push updates that bring Developer Observability to organizations of all types and sizes. We also have very exciting additions to our genAI capabilities in the works, so stay tuned.

##

Published Friday, August 09, 2024 7:30 AM by David Marshall