Companies have only
begun to feel the transformative impact of generative AI. As it becomes embedded
in more enterprise systems and applications, it is destined to drive big leaps
in digital journeys, making it possible to reimagine everything from contact
center experiences to legacy modernization.
But like any breakthrough technology, generative AI can be used for
nefarious purposes. The same remarkable processing power that can produce
thoughtful responses to complex questions can also be deployed to create
so-called "deepfakes," in which videos are manipulated to look and sound
authentic when in reality they are pure fabrications. Similarly, generative AI
can be exploited to create phishing emails that can trick even sophisticated
individuals into clicking malicious links or revealing private information.
The immediate challenge for companies is working to prevent the abuses
of generative AI without sacrificing its many benefits. That challenge will be
front and center for cybersecurity professionals, whose work will be all the
more taxing because they will be navigating largely uncharted territory.
Generative AI raises several complex issues. One such issue is highlighted by
Stanford University: "Who owns the legal rights to the
content generated by AI tools and who owns the IP of the data they were trained
on?" Similarly, there are privacy risks connected to code generated by AI
tools. Stanford professor Arvind Karunakaran notes that some companies
"have cut off access to these chatbots like ChatGPT. They don't want to expose
their own code or use unverified code generated by these technologies."
Those are just some of the generative AI issues companies need to be
grappling with now. And it's clear there are no simple security solutions. But one
of the first steps must be educating employees about the ins and outs of
generative AI: precisely how it works, how it can be beneficial, and how it
can be abused.
Training courses should be mandatory, with videos that illustrate the tools of
deception and scenarios in which people are deceived. Employees should also be
taught what can be lost when they are tricked by generative AI, including
financial capital, highly confidential intellectual property, and personal
information.
One of the cybersecurity lessons learned from the past two decades is that
threats are ever-present and should be dealt with pre-emptively. That means
companies need to invest the time and resources to develop a 360-degree
profile of their information environment, including all the networks and all
the devices within that environment and how they are (or are not) connected.
This can be the foundation of an analysis that identifies not only where the
known vulnerabilities exist but also where the unknowns may lie.
We also know that 24/7 monitoring is essential. Cyber attacks are often
preceded by activity that monitoring can flag as suspicious, the cyber
equivalent of hostile troops beginning to stockpile weapons. Rigorous
monitoring can detect that activity and allow it to be addressed before a
full-fledged attack is mounted.
The other lesson is that companies need to be prepared not only to stop
threats but also to respond if cyber criminals succeed in penetrating their
systems. What's the plan for
identifying precisely what information has been compromised? How soon will the
board of directors be told? What about shareholders? And law enforcement?
It seems likely that governments throughout the world will enact
regulations to govern generative AI. While regulation provides a framework for
ethical practices, information security and consumer rights, it can also slow
the technology's progress and innovation. Regulation can also create a false
sense of security: it will do little, if anything, to ward off cyber
predators.
Generative AI is exciting and presents extraordinary new opportunities
for companies. But the lesson of the new technologies of the past 25 years is
that there is always a small subset of predators looking for ways to use them
for illicit purposes. Sadly, that will be the case with generative AI as well,
and deception could become even easier for those predators, given the
sophistication of the technology.
That underscores the importance of companies preparing comprehensive
plans, not just to prevent the deception unleashed by generative AI but to
know how to respond when it succeeds.
ABOUT THE AUTHOR
Anant Adya is EVP, Infosys Cobalt. He and his team are responsible for
designing solutions that help customers in their digital and cloud journeys,
using AI-led solution sets combined with capabilities from the partner and
startup ecosystem to design the best solutions for customers. The Cloud and
Infrastructure service line includes infrastructure operations, security, data
center and network transformation, cloud (public, private and hybrid),
workload migration and service experience.