Vianai Systems announced the release of veryLLM, an open-source toolkit that enables reliable,
transparent and transformative AI systems for enterprises. The veryLLM toolkit
enables developers and data scientists to build a much-needed transparency
layer into Large Language Models (LLMs) and to evaluate the accuracy and
authenticity of AI-generated responses, addressing a critical challenge that
has kept many enterprises from deploying LLMs because of the risk of false
responses.
AI hallucinations
AI hallucinations, in which LLMs generate false, offensive, or otherwise
inaccurate or unethical responses, raise particularly challenging issues for
enterprises, as the risks of financial, reputational, legal and/or ethical
consequences are extremely high. The AI hallucination problem, left unaddressed
by LLM providers, has continued to plague the industry and hinder adoption, with
many enterprises simply unwilling to bring the risks of hallucinations into
their mission-critical systems. Vianai is releasing the veryLLM
toolkit (under the Apache 2.0 open-source license) to make this capability
available for anyone to use, to build trust and to drive adoption of AI
systems.
How veryLLM works
The veryLLM toolkit introduces a foundational ability to understand the
basis of every sentence generated by an LLM via several built-in functions.
These functions are designed to classify statements into distinct categories
using context pools that the LLMs are trained on (e.g., Wikipedia, Common
Crawl, Books3 and others), with the introductory release of veryLLM based on a
subset of Wikipedia articles. Given that most publicly disclosed LLM training
datasets include Wikipedia, this approach provides a robust foundation for the
veryLLM verification process. Developers can use veryLLM in any application
that leverages LLMs to provide transparency on AI-generated responses. The
veryLLM functions are designed to be modular and extensible, and to work
alongside any LLM, supporting both existing and future language models.
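To make the idea concrete, the following is a minimal, hypothetical sketch in Python of the kind of sentence-level check described above. It is not the actual veryLLM API; every function name, threshold, and label in it is an assumption. It splits an LLM response into sentences, scores each one against a toy stand-in for a Wikipedia context pool, and assigns a support category.

# Illustrative sketch only: NOT the veryLLM API. All names (classify_sentence,
# support_score, the category labels) and thresholds are hypothetical.
import re
from difflib import SequenceMatcher

# Toy "context pool"; in practice this would be a subset of Wikipedia articles.
CONTEXT_POOL = [
    "The Eiffel Tower is a wrought-iron lattice tower in Paris, France.",
    "Paris is the capital and most populous city of France.",
]

def split_sentences(text: str) -> list[str]:
    # Naive sentence splitter; a real toolkit would use a proper tokenizer.
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def support_score(sentence: str, pool: list[str]) -> float:
    # Best string-similarity match between the sentence and the context pool.
    return max(SequenceMatcher(None, sentence.lower(), p.lower()).ratio() for p in pool)

def classify_sentence(sentence: str, pool: list[str]) -> str:
    # Bucket a sentence by how well the context pool appears to support it.
    score = support_score(sentence, pool)
    if score >= 0.75:
        return "supported"            # closely matches reference material
    if score >= 0.40:
        return "partially supported"  # some overlap, worth a closer look
    return "unverified"               # no basis found in the pool

if __name__ == "__main__":
    response = "The Eiffel Tower is a lattice tower in Paris. It was painted green in 2020."
    for sentence in split_sentences(response):
        print(f"[{classify_sentence(sentence, CONTEXT_POOL)}] {sentence}")

A production system would presumably replace the string-similarity score with retrieval over a full context pool and a trained classifier, but the shape of the interface, a sentence in and a support label out, matches the behavior the toolkit describes.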
"AI hallucinations pose serious risks for enterprises, holding back
their adoption of AI. As a student of AI for many years, it is also just
well-known that we cannot allow these powerful systems to be opaque about the
basis of their outputs, and we need to urgently solve this. Our veryLLM library
is a small first step to bring transparency and confidence to the outputs of
any LLM - transparency that any developer, data scientist or LLM provider can
use in their AI applications," said Dr. Vishal Sikka, Founder and CEO of Vianai Systems and
advisor to Stanford University's Center for Human-Centered Artificial Intelligence. "We
are excited to bring these capabilities, and many other anti-hallucination
techniques, to enterprises worldwide, and I believe this is why we are seeing
unprecedented adoption of our solutions."
Try veryLLM here.
Access the code here.