Ahead of the launch of its annual Hacker-Powered Security Report, HackerOne
revealed early findings from a survey of 500 security professionals: 48%
believe AI is the most significant security risk to their organization. When
it comes to AI, respondents were most concerned about the leaking of training
data (35%), unauthorized use of AI within their organizations (33%), and the
hacking of AI models by outsiders (32%).
When asked how to handle the challenges that AI safety and
security issues present, 68% said that an external, unbiased review of AI
implementations is the most effective way to identify them. AI red teaming
offers this type of external review through the global security researcher
community, which helps safeguard AI models against risks, biases, malicious
exploits, and harmful outputs.
"While we're still reaching industry consensus around AI
security and safety best practices, there are some clear tactics where
organizations have found success," said Michiel Prins, co-founder at HackerOne.
"Anthropic, Adobe, Snap, and other leading organizations all trust the global
security researcher community to give expert third-party perspective on their
AI deployments."
Further research from a HackerOne-sponsored
SANS Institute report explored the impact of AI on cybersecurity and found
that over half of respondents (58%) predict AI will contribute to an "arms
race" between the tactics and techniques of security teams and cybercriminals.
The research also found optimism around the use of AI for security team
productivity, with 71% reporting satisfaction from implementing AI to automate
tedious tasks. However, respondents also believe AI's productivity gains have
benefited adversaries, and were most concerned about AI-powered phishing
campaigns (79%) and automated vulnerability exploitation (74%).
"Security teams must find the best applications for AI to
keep up with adversaries while also considering its existing limitations - or
risk creating more work for themselves," said Matt Bromiley, analyst at the
SANS Institute. "Our research suggests AI should be viewed as an enabler,
rather than a threat to jobs. Automating routine tasks empowers security teams
to focus on more strategic activities."
HackerOne's AI-powered co-pilot Hai continues to free up
time for security teams by automating tasks and offering deeper vulnerability
insights. These benefits drive Hai's adoption, which has grown 150% since
launch and saves security teams an average of five hours of work per week.
AI-focused products also continue to drive HackerOne's business, with AI
Red Teaming growing 200% quarter over quarter in Q2 and a 171% increase in
security programs adding AI assets into scope.
The full Hacker-Powered Security Report will be
released this fall.