Enzoic 2024 Predictions: Navigating the Cyber Threat Landscape


Industry executives and experts share their predictions for 2024.  Read them in this 16th annual VMblog.com series exclusive.

Navigating the Cyber Threat Landscape

By Mike Wilson, Founder & CTO, Enzoic

The cybersecurity landscape will keep evolving in 2024 and will remain a critical concern as high-profile breaches and attacks continue. From deepfakes to malware to breaches, hackers will keep finding relentless and innovative ways to turn any and every security vulnerability to their advantage. Additionally, as intelligent technologies are increasingly adopted, a rigorous security posture has never been more critical to identify vulnerabilities before they can be exploited and to prevent cyberattacks.

Here's a quick rundown of my predictions for cybersecurity in the year ahead. 

1.     AI's Ugly Side is Further Revealed

The 2024 Presidential Election is one example of how the coming year will reveal more of AI's nefarious capabilities. Expect to see deepfakes and other AI-generated disinformation designed to influence the election emerge at an alarming rate. If used by savvy threat actors, it's possible these images could become compelling propaganda, creating a veritable wilderness of mirrors for voters, who will have trouble discerning reality from carefully crafted disinformation. This will be a growing focus area as the candidates' campaigns kick into high gear. 

Perhaps no better example of the technology's ugly side exists than AI-generated abuse imagery, which has been increasing in recent months. We'll see more attention focused on preventing this in 2024, with a cluster of new solutions released to address the issue.

Of course, we can also expect hackers to increasingly leverage AI for their bread-and-butter campaigns: attacking organizations and employees to exfiltrate sensitive data. Think threat actors leveraging the technology to improve their malware code or relying on generative AI to craft more legitimate-looking phishing emails. As this happens, organizations will need to adjust their training. For example, poor grammar, once a hallmark of phishing campaigns, will no longer serve as a red flag, thanks to generative AI.

2.     Cloud API Attack Traffic Will Soar 

Cloud APIs are an increasingly popular threat vector for cybercriminals because, if breached, they expose sensitive data. Part of the appeal is that they are often the easiest way for hackers to access a company's network. The increasing popularity of API attacks will accelerate the number of organizations deploying security test automation solutions in 2024 and beyond to combat the problem.

With more utilization of cloud-based APIs, it's imperative that companies shore up their defenses and secure both the APIs themselves and the surrounding IT infrastructure. Otherwise, they run the risk of falling victim to a data breach. The number of cloud-based API attacks will surge in 2024, and GPU farming, where pooled servers allocate resources to complete calculations as quickly as possible, will become another popular target of cloud-based attacks. Cybersecurity incumbents must ensure their solutions address these issues, or new entrants will seize market share.
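
The article doesn't name a specific tool, but as a rough illustration of what API security test automation can look like, the minimal Python sketch below probes a set of hypothetical cloud API endpoints without credentials and flags any that fail to reject the anonymous request. The endpoint URLs and the pass/fail criteria are assumptions chosen for illustration only.

import requests  # third-party HTTP library, assumed to be installed

# Hypothetical cloud API endpoints to audit; replace with a real inventory.
ENDPOINTS = [
    "https://api.example.com/v1/users",
    "https://api.example.com/v1/billing/invoices",
    "https://api.example.com/v1/admin/config",
]

def rejects_anonymous_access(url: str) -> bool:
    """Return True if the endpoint refuses a request carrying no credentials."""
    resp = requests.get(url, timeout=10)  # deliberately no Authorization header
    # 401/403 means authentication is enforced; any other response needs review.
    return resp.status_code in (401, 403)

if __name__ == "__main__":
    for endpoint in ENDPOINTS:
        try:
            if rejects_anonymous_access(endpoint):
                print(f"OK: {endpoint} rejected the unauthenticated request")
            else:
                print(f"WARNING: {endpoint} responded without requiring auth")
        except requests.RequestException as exc:
            print(f"ERROR: could not reach {endpoint}: {exc}")

In practice a check like this would run in a CI pipeline against a full API inventory and cover authorization, rate limiting, and input handling as well, not just missing authentication.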

3.     SaaS Attacks: Subscription Model Set to Fuel More Cyberattacks

Cybercriminals are increasingly turning to subscription models to access a range of tools and tactics. Various malware, including ransomware and infostealers, is now available only via a "Malware as a Service" (MaaS) subscription, making it easy for a bad actor with limited experience to launch sophisticated, targeted attacks at scale. By 2030, the vast majority of software-based cyber threats will be readily available via a subscription.

4.     AI Gets Some Overdue Positive Press

AI's negative implications have dominated headlines, but the technology isn't all doom and gloom. In 2024, we'll see AI harnessed to combat cyberattacks by helping organizations adopt a more proactive security posture. For example, large language models (LLMs) will increasingly be deployed to sort through large quantities of data quickly, enabling companies to leverage security analysis resources more effectively. Threat data can now be analyzed at a previously impossible scale and can unlock new insights, especially when combined with other big data techniques. Will this be enough to counter the pervasive narrative that AI is inherently bad?
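
As a hedged example of what this could look like in practice (not something prescribed in the article), the short Python sketch below sends a handful of authentication log lines to an LLM for triage using the OpenAI Python SDK. The model name, prompt, and sample events are illustrative assumptions.

from openai import OpenAI  # assumes the OpenAI Python SDK and an OPENAI_API_KEY env var

client = OpenAI()

# Illustrative log lines; in a real deployment these would come from a SIEM or log pipeline.
SAMPLE_EVENTS = [
    "2024-01-15T03:12:44Z login success user=admin ip=203.0.113.7 geo=unexpected",
    "2024-01-15T03:12:45Z password change user=admin ip=203.0.113.7",
    "2024-01-15T09:01:02Z login success user=jsmith ip=198.51.100.23 geo=usual",
]

def triage_events(events: list[str]) -> str:
    """Ask the model to flag which log lines look like credential abuse."""
    prompt = (
        "You are a security analyst. For each log line below, reply "
        "'suspicious' or 'benign' with a one-sentence reason.\n\n"
        + "\n".join(events)
    )
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[{"role": "user", "content": prompt}],
    )
    return response.choices[0].message.content

if __name__ == "__main__":
    print(triage_events(SAMPLE_EVENTS))

The model's output would feed an analyst review queue rather than being acted on automatically; the value is in triaging volume, not replacing human judgment.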

##

ABOUT THE AUTHOR

Mike Wilson 

Mike is a co-founder and CTO of Enzoic, a cybersecurity company committed to preventing account takeover, identity theft and fraud through actionable Dark Web research. Mike has spent 20 years in software development, with 12 years specifically in the information security space at companies like Webroot and LogicNow. Mike started his career in the high-security environment at NASA, working on the mission control center redevelopment project. Apart from his security experience, Mike also founded several successful startups over the years. 

Published Wednesday, November 29, 2023 7:32 AM by David Marshall