According to the Information Security Forum (ISF), a trusted resource for executives and board members on cyber security and risk management, Artificial Intelligence (AI) inspires intrigue, fear and confusion in equal measure. To thrive in the new era, organizations need to reduce the risks posed by AI and make the most of the opportunities it offers. That means securing their own intelligent systems and deploying their own intelligent defenses.
ISF research has found that AI already poses risks to information assets, as well as the potential to significantly improve cyber defenses. To support global organizations, the ISF today announced the release of Demystifying Artificial Intelligence in Information Security, the organization's latest digest, which helps security professionals cut through the confusion surrounding AI in information security. The report enables business and security leaders to better understand what AI is, identify the information risks posed by AI and how to mitigate them, and explore opportunities for using AI in defense.
"AI
is creating a new frontier in information security. Systems that independently
learn, reason and act will increasingly replicate human behavior - and like
humans they will be flawed, but also capable of achieving great things. AI
poses new information risks and makes some existing ones more dangerous," said Steve Durbin, Managing
Director, ISF. "However, it can also be used for good and should become
a key part of every organization's defensive arsenal. Business and information
security leaders alike must understand both the risks and opportunities before
embracing technologies that will soon become a critically important part of
everyday business."
As AI systems are adopted by organizations, they will become
increasingly critical to day-to-day business operations. No matter the function
for which an organization uses AI, such systems, and the information that
supports them, have inherent vulnerabilities and are at risk from both
accidental and adversarial threats. Compromised AI systems make poor decisions
and produce unexpected outcomes. Simultaneously, organizations are beginning to
face sophisticated AI-enabled attacks - which have the potential to compromise
information and cause severe business impact at a greater speed and scale than
ever before. Taking steps both to secure internal AI systems and defend
against external AI-enabled threats will become vitally important in reducing
information risk. Organizations must be ready to adapt their defenses in order
to cope with the scale and sophistication of AI-enabled cyber attacks.
Security practitioners are always fighting to keep up with
the methods used by attackers, and AI systems can provide at least a short-term
boost by significantly enhancing a variety of defensive mechanisms. AI can
automate numerous tasks, helping understaffed security departments to bridge
the specialist skills gap and improve the efficiency of their human
practitioners. By protecting against many existing threats, AI can put defenders a
step ahead. However, adversaries are not standing still - as AI-enabled threats
become more sophisticated, security practitioners will need to use AI-supported
defenses simply to keep up.
"As early adopters of defensive AI get to grips with a new
way of working, they are seeing the benefits in terms of the ability to more
easily counter existing threats. However, an arms race is developing. AI tools
and techniques that can be used in defense are also available to malicious
actors including criminals, hacktivists and state-sponsored groups," continued Durbin. "Sooner
rather than later these adversaries will find ways to use AI to create
completely new threats such as intelligent malware - and at that point,
defensive AI will not just be a 'nice to have'. It will be a necessity.
Security practitioners using traditional controls will not be able to cope with
the speed, volume and sophistication of attacks."
Demystifying Artificial Intelligence in Information Security is available now to ISF Member companies via the ISF website.