Industry executives and experts share their predictions for 2024. Read them in this 16th annual VMblog.com series exclusive.
Thanks to AI, Cybercriminals Are Embracing Creativity
By Candid Wüest, Vice President of Product Management, Acronis
This past year, we've seen a major spike in AI capabilities across the board, with generative AI and large language models (LLMs) becoming mainstream. Organizations are investing in AI to keep pace with these developments, a trend that will continue into the new year. Most businesses are incorporating AI into their existing technology stacks, using it to automate routine processes and hunt cyberthreats, among other tasks. But not all of these organizations are thinking critically about the impact this has on their security.
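As a concrete illustration of what AI-assisted threat hunting often looks like in practice, here is a minimal Python sketch using unsupervised anomaly detection over log-derived features. The features, data, and contamination rate are synthetic and hypothetical, chosen purely for illustration; this is not any particular vendor's implementation.

```python
# Minimal sketch: flag unusual sessions in log-derived features with an
# isolation forest, so an analyst knows where to start hunting.
# All features and data below are synthetic, for illustration only.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-session features: [logins_per_hour, bytes_out_mb, failed_auths]
normal_sessions = rng.normal(loc=[5, 20, 1], scale=[2, 8, 1], size=(500, 3))
suspicious = np.array([[40, 900, 25]])  # e.g., a bulk exfiltration attempt
sessions = np.vstack([normal_sessions, suspicious])

# Fit the model and mark outliers for human review.
model = IsolationForest(contamination=0.01, random_state=0).fit(sessions)
flags = model.predict(sessions)  # -1 = anomalous, 1 = normal

print(np.where(flags == -1)[0])  # indices of sessions worth investigating
```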
The newfound prevalence of AI means novel security risks are increasing. Not only can cybercriminals take advantage of vulnerabilities within AI technology to attack an organization's infrastructure, but they can also use it as a tool to generate more creative and sophisticated attacks, such as deepfake campaigns and AI-assisted phishing.
Beware of deepfakes
Deepfakes use AI technology to convincingly manipulate multimedia content like photos and videos. Deepfake technology can also generate entirely original material, such as a video depicting a person saying or doing something that never actually happened.
The NSA and other government agencies have warned of the cybersecurity threats associated with deepfakes, and of the ease and scale with which threat actors can now manipulate multimedia content. These attacks continue to rise, with the FBI reporting numerous cases and campaigns in which deepfakes are used to harm users and organizations through disinformation.
Threat actors exploit deepfakes to cause serious consequences such as public crises, extortion of families, the spread of misinformation, or severe stock disruptions, often for financial gain. We will likely see this occur more frequently in 2024 as deepfake technology becomes better understood and more widely available, especially as the success of this attack vector is proven and the financial incentives grow. We also expect deepfakes to become more prevalent around the upcoming 2024 election.
Phishing is the "golden child" of generative AI
Generative AI's biggest strength is in language-based tasks, such as creating original text or predicting the next word in a document. With LLMs like ChatGPT becoming widely accessible, cybercriminals will continue to use this technology as a tool to creatively extract sensitive information from users and organizations.
Specifically, we expect an uptick in the number of threat actors using LLMs to help with phishing attacks in 2024. Generative AI models like ChatGPT can help cybercriminals draft convincing phishing content that lacks the typical warning signs of such attacks, such as spelling, stylistic, and grammatical errors, which increases the efficacy of these campaigns.
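To make that point concrete, here is a minimal, hypothetical Python sketch of the kind of naive filter that keys on misspellings. The word list and scoring are invented for illustration; the takeaway is that LLM-polished text carrying the same lure sails straight past it.

```python
# Illustrative only: a naive phishing heuristic that scores text by its
# rate of misspelled words. The vocabulary and examples are hypothetical,
# not drawn from any real product.
COMMON_WORDS = {
    "please", "verify", "your", "account", "immediately", "or", "it",
    "will", "be", "suspended", "click", "link", "urgent", "password",
}

def misspelling_score(text: str, vocabulary: set[str]) -> float:
    """Return the fraction of alphabetic words not found in the vocabulary."""
    words = [w.lower().strip(".,!?") for w in text.split()]
    words = [w for w in words if w.isalpha()]
    if not words:
        return 0.0
    unknown = sum(1 for w in words if w not in vocabulary)
    return unknown / len(words)

clumsy = "Plese verfy yuor acount imediately or it wil be suspnded"
polished = "Please verify your account immediately or it will be suspended"

print(misspelling_score(clumsy, COMMON_WORDS))    # high score -> flagged
print(misspelling_score(polished, COMMON_WORDS))  # zero -> slips through
```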
LLMs can also make phishing more efficient by decreasing the time a threat actor needs to spend generating phishing materials. The ability to launch more, and more believable, phishing campaigns in a given period means more data can be stolen overall. And as Phishing-as-a-Service (PhaaS) gains popularity, LLMs could be pivotal in turning phishing from a time-consuming, individualized attack method into a streamlined and effective business model.
AI regulation should be prioritized in 2024
Undoubtedly, continued advancements in AI technology will
pose a security threat until appropriate and comprehensive regulation is
passed. Between the Biden Administration's executive
order on AI and Senate hearings with technology leaders to
discuss regulation and future threats, it's clear this is top of mind for
both security leaders and the government.
The conversation and controversy surrounding AI laws are
sure to persist in 2024. Regulators must heed security leaders' concerns and
work to ensure that AI innovations are monitored and used responsibly to limit
harm to organizations and users. Otherwise, AI's security risks will continue
to grow as threat actors become more creative with their tactics.
##
ABOUT THE AUTHOR
Candid Wüest is the VP of Product Management at Acronis, the Swiss-Singaporean cyber protection company, where he researches new threat trends and comprehensive protection methods. He previously spent more than 16 years as the tech lead for Symantec's global security response team. Wüest is a frequent conference speaker and holds a Master of Computer Science from ETH Zurich, as well as various certifications and patents.