The risk of deepfakes is rising, with almost half of organizations (47%) having encountered a deepfake and 70% believing that deepfake attacks, which are created using generative AI tools, will have a high impact on their organizations. Yet perceptions of AI are hopeful: while two-thirds of organizations (68%) believe it is impactful at creating cybersecurity threats, more (84%) find it instrumental in protecting against them. This is according to a new global survey of technology decision-makers from iProov, a leading provider of science-based biometric identity solutions, which also found that three-quarters (75%) of the solutions being implemented to address the deepfake threat are biometric.
The Good, The Bad, and The Ugly is a global survey commissioned by iProov that gathered the opinions of 500 technology decision-makers from the UK, US, Brazil, Australia, New Zealand, and Singapore on the threat of generative AI and deepfakes.
While organizations recognize the increased efficiencies that AI can bring, these benefits are also enjoyed by threat technology developers and bad actors. Almost three-quarters (73%) of organizations are implementing solutions to address the deepfake threat, but confidence is low: the study identified an overriding concern that not enough is being done to combat them. Almost two-thirds (62%) worry their organization isn't taking the threat of deepfakes seriously enough.
The survey shows that organizations recognize deepfakes as a real and present threat. Deepfakes can be used against people in numerous harmful ways, including defamation and reputational damage, but perhaps the most quantifiable risk is financial fraud. Here they can be used to commit large-scale identity fraud by impersonating individuals in order to gain unauthorized access to systems or data, initiate financial transactions, or deceive others into sending money, as in the recent Hong Kong deepfake scam.
The stark reality is that deepfakes pose a threat to any situation where an individual needs to verify their identity remotely, yet those surveyed worry that organizations aren't taking the threat seriously enough.
"We've been observing deepfakes for years but what's changed in the past
six to twelve months is the quality and ease with which they can be
created and cause large scale destruction to organizations and
individuals alike," said Andrew Bud, founder and CEO, iProov. "Perhaps
the most overlooked use of deepfakes is the creation of synthetic
identities which because they're not real and have no owner to report
their theft go largely undetected while wreaking havoc and defrauding
organizations and governments of millions of dollars."
"And despite what some might believe, it's now impossible for the naked
eye to detect quality deepfakes. Even though our research reports that
half of organizations surveyed have encountered a deepfake, the
likelihood is that this figure is a lot higher because most
organizations are not properly equipped to identify deepfakes. With the
rapid pace at which the threat landscape is innovating, organizations
can't afford to ignore the resulting attack methodologies and how facial
biometrics have distinguished themselves as the most resilient solution
for remote identity verification," adds Andrew Bud.
Regional nuances
The study also reveals nuanced perceptions of deepfakes on the global stage. APAC (51%), European (53%), and LATAM (53%)
organizations are significantly more likely than North American (34%)
organizations to say they have encountered a deepfake. APAC (81%),
European (72%), and North American (71%) organizations are significantly
more likely than LATAM organizations (54%) to believe deepfake attacks
will have an impact on their organization.
Amidst the ever-shifting threat landscape, the tactics employed to breach organizations often mirror those used in identity fraud. Unsurprisingly, deepfakes are now tied for third place among the most prevalent concerns for survey respondents, in the following order: password breaches (64%), ransomware (63%), phishing/social engineering attacks (61%), and deepfakes (61%).
AI's not all bad
There are many different types of deepfakes, but they all have one common denominator: they are created using generative AI tools. Organizations recognize that generative AI is innovative, secure, and reliable, and that it helps them solve problems. They view it as more ethical than unethical and believe it will have a positive impact on the future. And they're taking action: just 17% have failed to increase their budgets for programs that encompass the risk of AI. Additionally, most have introduced policies on the use of new AI tools.
Biometrics leads the charge against deepfakes
Biometrics have emerged as the solution of choice for organizations to address the threat of deepfakes. Organizations stated that they are most likely to use facial and fingerprint biometrics; however, the type of biometric can vary based on the task. For example, the study found organizations consider facial biometrics to be the most appropriate additional mode of authentication to protect against deepfakes for account access/log-in, changes to personal account details, and typical transactions.
Software is not enough
It's clear from the study that organizations view biometrics as a specialist area of expertise, with nearly all (94%) agreeing that a biometric security partner should be more than just a software product. Organizations surveyed stated that they are looking for a solution provider that evolves and keeps pace with the threat landscape, with continuous monitoring (80%), multi-modal biometrics (79%), and liveness detection (77%) all featuring highly among their requirements for adequately protecting biometric solutions against deepfakes.