Industry executives and experts share their predictions for 2025. Read them in this 17th annual VMblog.com series exclusive.

By Willy Leichter, CMO, AppSOC
The new year begins with several notable issues and trends in cybersecurity:
- Ransomware continues to dominate: This is both predictable and frustrating. Given that ransomware has been a continuous security focus for years, you would expect more progress. The problem is that ransomware is not a specific technique - it's a proven way to monetize most types of breaches. Extortion has always been effective, and there's little indication that this will change.
- AI goes mainstream, along with security concerns: In the last year, AI usage and projects went from mostly experimental to widely deployed and adopted. While it is still early days, security experts are beginning to reckon with a much larger attack surface, extending beyond code vulnerabilities (which are still alive and well) to MLOps pipelines, LLMs and other models, datasets, notebooks, and more.
- Software supply chains remain a weak link: Over the last few years there has been an increasing number of high-profile supply chain attacks, such as SolarWinds and Log4j. While awareness of the problem has grown, we're not close to solving it because dependence on third-party code continues to expand. Addressing supply chain security requires considerably better communication and cooperation with suppliers, and far greater scrutiny of third-party code than most security teams can handle.
Here are some predictions for security issues in 2025:
- AI offense will have an edge over AI defense: AI will increasingly be used on both sides of the cyber war, but attackers will remain less constrained because they worry less about AI accuracy, ethics, or unintended consequences. Techniques such as highly personalized phishing and scouring networks for legacy weaknesses will benefit from AI. While AI has huge defensive potential, legal and practical constraints will slow its adoption.
- AI systems will become targets: AI technology greatly expands the attack surface, with rapidly emerging threats to models, datasets, and MLOps systems. And when AI applications are rushed from the lab to production, their full security impact won't be understood until the inevitable breaches occur.
- Security teams will have to take charge of AI security: This sounds obvious, but in many organizations initial AI projects have been driven by data scientists and business specialists who often bypass conventional application security processes. Security teams will fight a losing battle if they try to block or slow down AI initiatives, but they will have to bring rogue AI projects under the security and compliance umbrella.
- Supply chain exposure will expand: We've already seen supply chains become a major attack vector, as complex software stacks rely heavily on third-party and open-source code. The explosion of AI adoption makes this target larger, with new and complex vectors of attack on datasets and models. Understanding the lineage of models and maintaining the integrity of changing datasets is a hard problem, and currently there is no viable way for an AI model to "unlearn" poisoned data.
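The dataset-integrity point above can be made concrete. A common first step is recording content hashes for every file in a dataset and checking them before each training run, so silent tampering or drift is at least detectable. Below is a minimal sketch of that idea (the helper names `dataset_manifest` and `verify` are illustrative, not part of any product):

```python
import hashlib
import json
from pathlib import Path

def dataset_manifest(root: str) -> dict:
    """Record a SHA-256 digest for every file under `root`."""
    manifest = {}
    for path in sorted(Path(root).rglob("*")):
        if path.is_file():
            digest = hashlib.sha256(path.read_bytes()).hexdigest()
            manifest[str(path.relative_to(root))] = digest
    return manifest

def verify(root: str, recorded: dict) -> list:
    """Return files whose content no longer matches the recorded manifest."""
    current = dataset_manifest(root)
    return sorted(name for name in recorded if current.get(name) != recorded[name])

# The manifest is plain JSON, so it can be versioned alongside the model:
# Path("manifest.json").write_text(json.dumps(dataset_manifest("data/")))
```

This catches modified or deleted files, but it is only a baseline: it does not attest to where the data came from in the first place, which is the harder lineage problem the prediction refers to.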
- Lessons learned - AI brings its own risks: 2024 saw the beginnings of the broad realization of AI's promise and potential, evidenced by major breakthroughs, growing experimentation with use cases, and in many cases the launch of public-facing applications. It was also the year when many realized that AI and LLM systems introduce new risks, and - equally problematic - that security teams are often only vaguely aware of fast-moving AI projects.
The Road Ahead
Blocking important AI projects will not work and will put
companies at a competitive disadvantage. Instead, organizations must enable AI
innovation by ensuring adequate visibility, guardrails, application security,
and governance to prevent costly and damaging security incidents.
How can organizations address these threats and begin to more fully realize the innovation and growth potential that AI puts within reach?
Meeting these threats head-on and adopting AI with confidence will demand trusted security. As AI moves into mainstream production applications, organizations increasingly understand that it is no longer sufficient to merely detect new, isolated anomalies. It is imperative to protect AI systems end-to-end with a complete AI and application security platform - one that can discover, correlate, prioritize, remediate, and govern AI systems. This is why AppSOC's focus is on covering the new AI attack surface while providing robust security management for all AI stacks, applications, infrastructure, and innovation.
##