According to a newly released report from
Swimlane,
a concerning 74% of cybersecurity decision-makers are aware of
sensitive data being input into public AI models despite having
established protocols in place. The report,
"Reality Check: Is AI Living Up to Its Cybersecurity Promises?",
reveals that the rush to embrace AI, especially generative AI and large
language models, has outpaced most organizations' ability to keep their
data safe and effectively enforce security protocols.
As AI becomes more integral to organizational operations, companies are
grappling with both its benefits and the associated risks. To better
understand this landscape, Swimlane surveyed 500 cybersecurity
decision-makers in the United States and the United Kingdom to uncover
how AI is influencing data security and governance, workforce
strategies, and cybersecurity budgets.
"There's no doubt that AI is reshaping cybersecurity as we know it, and
its impact reaches far beyond the digital sphere," said Cody Cornell,
co-founder and chief strategy officer of Swimlane. "The fact that 74% of
respondents view AI-generated misinformation as a significant threat to
the U.S., particularly with the 2024 elections approaching, underscores
the complex challenges ahead. While AI offers tremendous benefits in
improving security and efficiency, it's crucial that we approach its use
responsibly, balancing innovation with the potential risks to both
organizations and society."
Key Takeaways
- Is AI Making It Impossible to Balance Innovation and Confidentiality? While 70% of organizations have specific protocols governing what data can be shared with a public LLM, 74% of respondents said they were aware of individuals at their organization inputting sensitive data into a public LLM.
- Who Should Govern AI? Only 28% of respondents believe the government should bear the primary responsibility for setting and enforcing guidelines. At the same time, almost half (46%) of respondents said the company that developed the AI should be held primarily responsible for the consequences when AI systems cause harm.
- AI Hype or Growth Engine? Seventy-six percent of respondents believe the current market is saturated with AI hype. This overload of AI-centric messaging is taking its toll, with 55% of respondents saying they are starting to feel fatigued by the constant focus on AI.
- Are AI Skills Essential? A notable 86% of organizations report that experience with AI and machine learning (ML) technologies significantly influences hiring decisions.
- Will AI Adoption Fuel Efficiency Gains and Increased Budgets? The majority of organizations (89%) report that the use of GenAI and LLMs has improved productivity and efficiency for their cybersecurity teams. As a result, a third (33%) of organizations plan to allocate more than 30% of their 2025 cybersecurity budgets to AI-powered or AI-enhanced solutions.
"Effective use of AI is no longer a luxury-it's a necessity," said
Michael Lyborg, CISO at Swimlane. "By automating routine tasks and
boosting threat detection, AI enables cybersecurity professionals to
tackle more complex challenges head-on. Organizations that embrace AI
strengthen their defenses and regain time for proactive threat hunting.
As we navigate these turbulent waters, it's vital that we implement AI
thoughtfully to enhance security and uphold public trust."
Download the report:
Reality Check: Is AI Living Up to Its Cybersecurity Promises?