By Steve Shwartz
Imagine the damage an intelligent virus could inflict. It would arrive at the network perimeter. Or worse, it would penetrate your firewalls via a phishing attack. It would take stock of the system's defenses, make real-time intelligent decisions, and start attacking. For example, it could conceivably turn its virus characteristics on and off when necessary to evade antivirus software. It would be almost like letting an unescorted human cybercriminal inside your datacenter.

Fortunately, truly intelligent malware, computers, and robots exist only in science fiction. See this article for an explanation of why we will almost certainly not see intelligent malware or intelligent computers of any type in our lifetimes. So, we do not have to worry about viruses that can think and reason like people.
AI as a Friend
We do need to worry about cyberattacks that execute conventionally programmed (non-AI) patterns of attack. Many vendors offer conventional software, such as intrusion detection systems and anti-virus software, that recognizes the signatures (patterns) of these programs. These conventional software programs access threat intelligence databases of malicious signatures and do a great job identifying and stopping known cybersecurity threats.
Where conventional software falls short is in identifying newly released malware whose signatures are not yet available. Here, AI becomes a friend. Machine learning can be used to analyze network traffic patterns, using port mirroring or netflow, to determine what constitutes "normal" traffic. These network detection and response systems can then raise alerts when suspicious traffic is observed. They can be used to raise alerts for both north-south traffic (traffic coming in from the internet) and east-west traffic (behind the firewall).
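The article does not prescribe a particular algorithm for learning "normal" traffic. As one minimal sketch of the idea, a simple statistical baseline can be built from observed traffic volumes, with alerts raised for flows far outside that profile (all numbers below are synthetic, purely for illustration):

```python
import statistics

# Hypothetical bytes-per-minute observations from a training window of
# "normal" traffic (invented numbers for illustration).
baseline = [480, 510, 495, 530, 470, 505, 520, 490, 515, 500]

mean = statistics.mean(baseline)
stdev = statistics.stdev(baseline)

def is_anomalous(bytes_per_min, threshold=3.0):
    """Flag traffic more than `threshold` standard deviations from normal."""
    return abs(bytes_per_min - mean) / stdev > threshold

print(is_anomalous(505))     # False: within the learned profile
print(is_anomalous(50_000))  # True: far outside the baseline
```

Production network detection and response systems use far richer features and models than a single z-score, but the core pattern is the same: learn a profile of normal behavior, then alert on deviations.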
Machine learning can also be used in other ways to identify security issues. For example, log data can be analyzed to determine normal patterns and raise alerts when anomalies are detected. Similarly, machine learning can be used to analyze patterns of user behavior and flag suspicious behavior.
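One simple way to sketch the user-behavior idea above is a frequency baseline over (user, action) events: behavior never (or rarely) seen during normal operation gets flagged. The users, actions, and threshold below are invented for illustration:

```python
from collections import Counter

# Synthetic log events from a "normal" training window.
history = [("alice", "login"), ("alice", "read"), ("bob", "login"),
           ("bob", "read"), ("alice", "read"), ("bob", "login")] * 50

counts = Counter(history)
total = sum(counts.values())

def is_suspicious(event, min_freq=0.01):
    """Flag events whose relative frequency in the baseline is below min_freq."""
    return counts[event] / total < min_freq

print(is_suspicious(("alice", "read")))             # False: routine behavior
print(is_suspicious(("alice", "delete_all_logs")))  # True: never seen before
```

Real user and entity behavior analytics products model sequences, times of day, and peer groups rather than raw counts, but the detection principle is the same.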
AI as a Foe
There are many ways machine learning can be used by attackers to design more effective malware:
- Machine learning can be used to generate new strains of malware that are harder to detect. However, once their signatures make it into the security databases, they will be detectable by conventional anti-malware software.
- An attacker could acquire commercially available threat detection systems and use AI to learn the types of traffic that will bypass the system defenses.
- An attacker could use machine learning to monitor the behavior of the target network and create malware that resembles "normal" traffic.
AI can also be used to generate convincing emails for spear phishing attacks, and audio deepfakes can be used to send those same targeted individuals voice messages that sound like someone they know.
AI Applications Broaden the Attack Surface
Conventional software has many well-known vulnerabilities, such as SQL injection and cross-site scripting, that attackers can exploit. AI software adds new types of vulnerabilities.
AI software can be analyzed to create adversarial examples that cause the software to respond incorrectly. For example, researchers showed that small alterations to lane markers that would not fool humans caused a self-driving car to drive in the wrong lane. Baidu researchers published a set of tools that can be used by other researchers to fool virtually any deep learning system. Their goal, of course, was not to encourage attackers, but rather to help researchers create defense mechanisms in their deep learning systems.
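The core adversarial-example trick can be illustrated on a toy linear classifier (this does not reproduce Baidu's toolkit or any specific attack on a real system; the model and numbers below are invented). For a linear model, the gradient with respect to the input is just the weight vector, so nudging each feature a small step against the sign of its weight can flip the prediction:

```python
# Toy "trained" linear classifier (hypothetical weights for illustration).
w = [1.0, -2.0, 0.5]
b = 0.1

def score(x):
    return sum(wi * xi for wi, xi in zip(w, x)) + b

def predict(x):
    return 1 if score(x) > 0 else 0

x = [0.5, 0.1, 0.2]
print(predict(x))  # 1: the original input is classified as class 1

# FGSM-style perturbation: move each feature a small step against the
# gradient, which for a linear model is the sign of each weight.
eps = 0.3
sign = lambda v: (v > 0) - (v < 0)
x_adv = [xi - eps * sign(wi) for xi, wi in zip(x, w)]
print(predict(x_adv))  # 0: a small, targeted perturbation flips the class
```

Deep networks are attacked the same way in principle, except the input gradient is computed by backpropagation; the perturbations that fool image classifiers are often too small for humans to notice.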
Also, because machine learning systems are trained on large datasets, the behavior of these systems can be altered by changing the training data. This happened to Microsoft when it released its Tay chatbot, which learned racist behavior from user interactions.
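To make the data-poisoning idea concrete, here is a toy sketch (entirely synthetic, not a model of Tay): a nearest-centroid classifier whose decision flips once an attacker floods one class with mislabeled training points:

```python
# Toy one-dimensional "sentiment" scores with invented values.
def centroid(points):
    return sum(points) / len(points)

def classify(x, pos, neg):
    """Label x by whichever class centroid it is closer to."""
    if abs(x - centroid(pos)) < abs(x - centroid(neg)):
        return "positive"
    return "negative"

pos = [0.8, 0.9, 1.0]   # clean training examples labeled positive
neg = [0.0, 0.1, 0.2]   # clean training examples labeled negative

print(classify(0.85, pos, neg))  # "positive" on clean training data

# Attacker injects mislabeled low-scoring examples into the positive class,
# dragging its centroid toward the negative region.
poisoned_pos = pos + [0.0] * 50
print(classify(0.85, poisoned_pos, neg))  # now misclassified as "negative"
```

The same dynamic applies to any system that retrains on user-supplied data: whoever controls enough of the training distribution controls the model's behavior.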
Conclusion: Friend or Foe?
AI can be both a cybersecurity friend and a foe. The ability to use AI to defend against novel malware is essential to defenders. This is balanced against the ability of attackers to create novel malware using AI and the larger attack surface created by AI applications.
Cybersecurity has always been a cat-and-mouse game. AI adds to the toolkits of both the attackers and defenders but does not give either a winning advantage.
About the Author
STEVE SHWARTZ is a successful serial software entrepreneur and investor. After he received his PhD from Johns Hopkins University in 1979, AI luminary Roger Schank invited him to join the Yale University faculty as a postdoctoral researcher in Computer Science. Since then, he has been a founder and investor in many successful startup companies. His new book, Evil Robots, Killer Computers, and Other Myths: The Truth About AI and the Future of Humanity, will be available on Amazon and other major booksellers February 9, 2021.
Learn more about Steve Shwartz at www.AIPerspectives.com and connect with him on Twitter, Facebook and LinkedIn.