Virtualization Technology News and Information
Trellix 2024 Predictions: The Cyber Threat of Artificial Intelligence

Industry executives and experts share their predictions for 2024.  Read them in this 16th annual VMblog.com series exclusive.

The Cyber Threat of Artificial Intelligence

By John Fokker, Head of Threat Intelligence and Principal Engineer, Trellix

Artificial Intelligence (AI) has changed the way we interact with the world, impacting industries for better and for worse. The cybersecurity industry is no different. AI complicates the threat landscape, allowing cybercriminals to increase the sophistication and scale of their attacks at little cost and with less technical skill.

With the AI genie out of the bottle, there is little we can do to force it back in. With a significantly lower barrier to entry for cybercriminals, the questions for defenders are: how will threat actors leverage this technology, and how do we protect against these new threats? Analyzing the data and trends from 2023, Trellix researchers at the Advanced Research Center identified key areas where AI will shift attack vectors in 2024:

Cybercriminal Generative AI

The progress we've seen throughout this past year in generative AI and large language models (LLMs) exhibits remarkable potential for positive applications. Current LLMs like ChatGPT have shown unparalleled results in mimicking human reasoning and language, exemplified in answering queries, creating art, problem-solving, coding, etc. More importantly, these models are incredibly easy to use and widely accessible.

But what makes these tools attractive to most users equally makes them valuable to cybercriminals. Threat actors are looking to these technologies to develop new tools that reduce the need for extensive expertise and expensive resources to launch large-scale cyberattacks. Tools like FraudGPT and WormGPT already circulate in underground networks and are heavily leveraged by threat actors. With these, conducting large-scale phishing campaigns has never been easier. We expect the development of these tools to accelerate in 2024 as the technology evolves and more threat actors realize the value of leveraging them.

A New Generation of Script Kiddies

Building on the misuse of these AI tools, even amateur threat actors can now operate at a high level. Often dismissively called "script kiddies," this demographic of threat actors now poses a significant threat due to the democratization of advanced AI tools.

Though most widely available tools on the internet come with mechanisms that prevent malicious usage, cybercriminals are finding ways to get around those safeguards, if not directly leveraging malicious LLMs. As hackers with more technical expertise continue to develop advanced AI models on the dark web, the potential for any individual to leverage tools that can write malicious code, create deepfakes, and assist with social engineering schemes is growing at breakneck speed. Moreover, the widespread use of such tools will make analysis of these attacks even more challenging for defenders.

AI for Social Engineering Scams

This year has seen a huge surge in cybercriminals executing advanced social engineering scams that involve voice calls, and this trend is set to grow in the coming year. These schemes play on victims' emotions and manipulate them into taking actions such as sharing sensitive personal information. Now that the ability to deepfake voices with AI is becoming widely available, AI-generated voice scams will be a major concern, as people are inherently more trusting of voice and video than of text and images.

These applications have advanced to the point where it has become nearly impossible for individuals to differentiate between real and fake voices. Additionally, these tools are now able to overcome barriers such as language, allowing scammers to target a wider pool of victims with personalized messages.

Thus, we expect that cybercriminals will start to leverage this technology in live calls, which will greatly increase the effectiveness of their phishing ploys.

Looking at the Future

These three areas are only the tip of the iceberg for how AI may continue to shift the cyber landscape in 2024. The duality of AI means that in the wrong hands, these tools can be used to overcome barriers of language (both spoken and programming), technical expertise, cost, and more. In the right hands, cybersecurity professionals can use the same technology to strengthen defenses against emerging threats and improve an organization's cyber resiliency.

##

ABOUT THE AUTHOR

John Fokker 

John Fokker is a Principal Engineer at Trellix. John leads the Threat Intelligence Group (TIG) that empowers Trellix customers, industry partners, and global law enforcement efforts with 24/7 mission-critical insights on the ever-evolving threat landscape. Prior to joining Trellix, he worked at the Dutch National High-Tech Crime Unit (NHTCU), the Dutch National Police unit dedicated to investigating advanced forms of cybercrime. During his career, he has supervised numerous large-scale cybercrime investigations and takedowns. Fokker is also one of the co-founders of the NoMoreRansom Project.

Published Thursday, January 11, 2024 7:34 AM by David Marshall