Elastic announced LLM Safety Assessment: The
Definitive Guide on Avoiding Risk and Abuses, the latest research
issued by Elastic Security Labs.
The LLM Safety Assessment explores large language model (LLM) safety and
provides attack mitigation best practices and suggested countermeasures for LLM
abuses.
Generative AI and LLM implementations have been widely adopted over the past 18 months, with some companies pushing to deploy them as quickly as possible. This has expanded the attack surface and left developers and security teams without clear guidance on how to adopt emerging LLM technology safely.
"For all their potential,
broad LLM adoption has been met with unease by enterprise leaders, seen as yet
another doorway for malicious actors to gain access to private information or a
foothold in their IT ecosystems," said Jake King, head of threat and
security intelligence at Elastic. "Publishing open detection engineering
content is in Elastic's DNA. Security knowledge should be for everyone; safety
is in numbers. We hope that all organizations, whether Elastic customers or
not, can take advantage of these new rules and guidance."
The LLM Safety Assessment builds on and expands the Open Web Application Security Project (OWASP) research focused on the most common LLM attack techniques. The research includes crucial information that security teams can use to protect their LLM implementations: in-depth explanations of risks, best practices, and suggested countermeasures to mitigate attacks. The countermeasures explored in the research span different areas of the enterprise architecture, primarily the in-product controls that developers should adopt when building LLM-enabled applications and the information security measures that security operations centers (SOCs) must add to verify and validate secure LLM usage.
In addition to the 1,000+ detection rules already published and maintained on GitHub, Elastic Security Labs has added an initial set of detections specifically for LLM abuses. These new rules are now included out of the box.
"Normalizing and
standardizing how data is ingested and analyzed makes the industry safer for
everyone, which is exactly what this research intends to do," said
King. "Our detection rule repository helps customers monitor threats
with confidence, as quickly as possible, and now includes LLM implementations.
The rules are built and maintained publicly in alignment with Elastic's
dedication to transparency."