Industry executives and experts share their predictions for 2024. Read them in this 16th annual VMblog.com series exclusive.
Three Crucial AI Cybersecurity Predictions Ahead of 2024
By Tal Zamir, CTO of Perception Point
In 2023, a wave of remarkable AI tools and applications
swept the tech landscape, putting once seemingly imaginary technologies into
the hands of virtually all internet users. In 2024, even more extraordinary AI
tools, applications and capabilities will emerge.
However, just as AI can be leveraged to scale productive
workflows and bolster ROI, like many other cutting-edge technologies it
also presents new opportunities for threat actors to orchestrate increasingly
sophisticated cyberattacks.
In preparation for the challenges ahead, organizations must
understand how threat actors can leverage AI in order to formulate
comprehensive defense strategies and counter malicious attacks.
Hackers will let AI do the heavy lifting
For businesses, new and improved AI tools are meant to
automate and thereby alleviate much of employees' workloads, granting workers
the capacity to perform a multitude of tasks in a fraction of the time it would
have taken to do them manually.
Unfortunately, hackers are exploiting this very faculty.
By taking advantage of the very same AI tools used to
enhance productivity at the workplace, hackers are customizing ChatGPT-like
bots in alarming ways. These customized bots can be used as social engineering
powerhouses - they can be designed to create highly personalized messages and
carry out targeted phishing campaigns to gain access to sensitive company data
such as private customer information, financials, and intellectual property.
Moreover, attackers can optimize their operations with the
help of AI and Large Language Models (LLMs). Through AI-based automation,
malicious actors can launch large-scale social engineering campaigns, take over
user email accounts, hijack email threads to trick victims into transferring
funds, and move laterally across the organization in a viral way.
AI's ability to automate the preliminary grunt work behind
sophisticated attacks, and to be weaponized for a wide range of malicious
acts, is a testament not only to its versatility but also to its potency in
enabling attackers to craft convincing campaigns with minimal effort.
Organizations of all sizes across various industries should ensure they
upgrade their defenses across the main communication channels, including
email, browsers, and collaboration apps.
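To make the idea of layered email defenses concrete, here is a minimal, purely illustrative sketch of the kind of heuristic pre-screening a gateway might apply to inbound mail. The patterns, scoring, and lookalike-domain threshold below are assumptions chosen for demonstration; they are not Perception Point's detection logic, and real products combine many more signals (sender reputation, URL analysis, ML classifiers).

```python
import re
from difflib import SequenceMatcher

# Illustrative urgency/payment phrasings common in BEC and phishing lures.
# These patterns are assumptions for demonstration only.
URGENCY_PATTERNS = [
    r"\burgent\b",
    r"\bimmediately\b",
    r"\bwire transfer\b",
    r"\bverify your account\b",
]


def phishing_risk_score(subject: str, body: str,
                        sender_domain: str,
                        claimed_org_domain: str) -> int:
    """Return a crude 0-3 risk score for a single inbound email."""
    score = 0
    text = f"{subject} {body}".lower()

    # Signal 1: urgency / payment language typical of social engineering.
    if any(re.search(p, text) for p in URGENCY_PATTERNS):
        score += 1

    s, c = sender_domain.lower(), claimed_org_domain.lower()
    if s != c:
        # Signal 2: sender domain doesn't match the org it claims to be.
        score += 1
        # Signal 3: lookalike domain, e.g. "examp1e.com" vs "example.com".
        if SequenceMatcher(None, s, c).ratio() > 0.8:
            score += 1
    return score
```

A benign internal message scores 0, while a lure combining urgent wire-transfer language with a lookalike sender domain scores the maximum; in practice such a score would feed a policy engine rather than block mail outright.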
AI will make attacks harder to detect
Beyond its ability to amplify both the volume and frequency
of cyberattacks, AI has granted hackers the ability to engineer and execute
personalized attacks with heightened sophistication.
In particular, the widespread availability of multimodal
machine learning models has empowered hackers to fabricate and deploy
shockingly realistic deepfakes - including image doctoring and audio-visual
impersonation - to scam employees into giving up sensitive company
information.
These techniques raise the effectiveness of social engineering attempts
to new heights: a specific individual's characteristics or mannerisms
are replicated so convincingly that the average user has great difficulty
detecting the forgery.
When
targeting an individual employee who receives a barrage of work emails
daily, for example, the chances that a single malicious email goes
undetected are high, and depend greatly on the efficacy of the deployed
security systems. Without sufficient due diligence, effective automated
security controls, and security awareness training to identify malicious
content, employees won't be able to exercise the necessary caution.
Software companies will face off against adversarial prompts
Many tech companies are now building their own AI-powered
tools to provide advanced GenAI-based services to their customers.
Guarding against malicious and adversarial prompts in LLMs
will be a cornerstone of securing digital assets across industries in the
coming years. Hackers are already manipulating or outright disabling the
safeguards built into LLMs that prevent generative AI systems from dispensing criminal
content.
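To illustrate what guarding against adversarial prompts can look like at its simplest, here is a naive pre-screen applied to user input before it ever reaches an LLM. The pattern list is an assumption for demonstration; production systems layer trained classifiers, output filtering, and strict privilege separation on top of (or instead of) keyword checks like this.

```python
import re

# Well-known prompt-injection phrasings. Purely illustrative; real
# attackers paraphrase, encode, and hide instructions in documents,
# so pattern matching alone is never sufficient.
INJECTION_PATTERNS = [
    r"ignore (all )?(previous|prior) instructions",
    r"disregard your (system )?prompt",
    r"you are now (in )?developer mode",
    r"reveal your (system )?prompt",
]


def looks_like_injection(user_input: str) -> bool:
    """Flag inputs that match common prompt-injection phrasings."""
    text = user_input.lower()
    return any(re.search(p, text) for p in INJECTION_PATTERNS)
```

A flagged input might be rejected, logged, or routed to a stricter handling path; the point of the sketch is that input screening is a cheap first layer, not a complete defense.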
In the coming year, a cohort of startups will likely arise, committed to
safeguarding against adversarial prompts and providing cutting-edge
solutions to mitigate the risks such threats pose. By addressing the
elevated risk of this attack category, organizations can maintain a
competitive edge throughout 2024.
AI vs AI
Hackers will continue leveraging AI to launch sophisticated
attacks more frequently and more precisely as its capabilities grow ever more
potent. As AI-driven attacks proliferate within the threat landscape,
organizations have already begun advancing their own AI defenses to thwart
these mounting threats. They must continue to improve upon these tools as
attackers simultaneously increase the sophistication and scope of their own
AI attacks.
Using AI defensively will continue to be a critical
component of cybersecurity in 2024, especially to counter these newly emerging
threats.
##
ABOUT THE AUTHOR
Tal is the Chief
Technology Officer at Perception
Point. Previously the Founder & CTO of Hysolate, Tal Zamir is a 20-year
software industry leader with a track record of solving urgent business
challenges by reimagining how technology works.
He has pioneered
multiple breakthrough cybersecurity and virtualization products. Tal incubated
next-gen end-user computing products while in the CTO office at VMware.