Security researchers have warned of a sophisticated new
Trojan designed to steal facial biometric data and use it to produce
deepfakes of victims that can bypass banking logins.
According to a February 15 report by Group-IB, the malware captures victims' facial images via video and still photos and uploads them, along with other personally identifiable information (PII), to the attackers' command-and-control (C2) servers. The threat actors have been using a “multi-staged social engineering scheme” to persuade victims to install a Mobile Device Management (MDM) profile that gives the attackers full control of the user’s device.
Group-IB said the GoldPickaxe malware is available for both
Android and iOS and was developed by a suspected Chinese cybercrime
actor dubbed "GoldFactory". The infection chain begins with threat actors
impersonating government officials, who convince the victim to communicate
via the messaging app Line and trick them into downloading a Trojan-laden app
disguised as a "digital pension" application, or one providing other government
services.
The Android app is downloaded either from a fake Google Play
page or from a spoofed corporate website. The iOS version may be delivered via
the TestFlight developer platform, or the threat actors may trick the victim
into installing a mobile device management (MDM) profile, which gives them
control over the device. To increase their chances of success, the threat
actors cite personal information they have already obtained about the victim,
according to Group-IB.
Here's what industry experts are saying about this news:
Jason Soroko, Senior Vice
President of Product at Sectigo
Biometric authentication should rarely be used as a sole
form of authentication. It is a very handy PIN code replacement in most
cases. Why isn't it more secure? It's because your fingerprints,
your face and your voice are not secrets. What is novel in the case of the
GoldPickaxe malware is the recording of video to create deepfakes of the
victim, which are then used for further social engineering.
This is a scary development, but it is not surprising. Deepfakes
are very effective in social engineering. It should be noted that the
trojan mobile application installed by the victim has been made
available via a fake Google Play store page, while iOS victims must use
unusual installation methods. I suspect Android users are targeted more
than iOS users for this reason, but everyone should be wary of being
persuaded to install fake applications.
++
Krishna Vishnubhotla, Vice
President of Product Strategy at Zimperium
Facial recognition data on smartphones is encrypted and
stored in a secure area of the processor, such as a Secure Enclave or Trusted
Execution Environment, which isolates it from the device's main operating
system and applications to prevent unauthorized access. This data is
anonymized, converting facial features into a mathematical model rather than
storing actual images, and is kept locally on the device to minimize the risk
of external breaches. Despite these security measures, risks remain, particularly
if the device is physically compromised or if vulnerabilities within the
device's security hardware or software are exploited by sophisticated
attackers. Furthermore, the potential for unauthorized access by malicious apps
due to permissions mismanagement or software flaws poses a continuous threat,
emphasizing the need for ongoing vigilance and regular security updates to
mitigate these risks.
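The "mathematical model" described above is typically a fixed-length feature embedding: matching compares the enrolled template to a fresh capture by vector similarity rather than by comparing images. As a simplified, hedged sketch (the vectors, dimensions, and threshold here are purely illustrative, not any platform's actual representation):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Compare two face-embedding vectors; 1.0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def is_match(stored_template: list[float], probe: list[float],
             threshold: float = 0.8) -> bool:
    """Accept the probe only if it is close enough to the enrolled template."""
    return cosine_similarity(stored_template, probe) >= threshold
```

Because only the template is stored, and it never leaves the secure hardware, malware that compromises an app cannot read a face image back out of it; GoldPickaxe instead tricks the user into supplying fresh images and video.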
Deepfakes are a type of digital manipulation that alters or
synthesizes someone's appearance in videos or photos convincingly. These
manipulations often use artificial intelligence and machine learning
technologies. It's important to note that the risk of deepfakes doesn't only
come from facial recognition data stored on smartphones; it arises
from the broader ecosystem of digital facial data.
++
Callie Guenther, Senior
Manager, Cyber Threat Research at Critical
Start
While the use of Trojans and malware to steal personal and
financial information is not new, the integration of deepfake technology to
circumvent biometric security measures adds a novel layer of complexity and
danger. Similar tactics have been observed in the past, where attackers have
used various forms of social engineering, phishing, and malware to compromise
devices and steal sensitive data. However, the use of deepfakes to bypass
biometric security systems is a relatively newer and less common tactic,
highlighting the continuous innovation among cybercriminals.
To guard against sophisticated cyber threats, security teams
and individuals should take a multifaceted approach. This includes educating
users on the dangers of downloading apps from non-official sources and the need
to verify communications from supposed authoritative entities. It's important
to use official app stores, be cautious with app permissions, and employ
multi-factor authentication to add a layer of security beyond biometric
measures. Implementing security solutions that can identify and thwart advanced
malware is crucial, as is keeping operating systems and applications updated
with the latest security patches. Encouraging the use of encrypted
communication and secure networks can help protect data, and for organizations,
mobile device management policies can enable monitoring and control over
corporate devices to mitigate risks effectively.
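The multi-factor recommendation above often takes the form of a time-based one-time password (TOTP, RFC 6238) used alongside the biometric. A minimal sketch using only the Python standard library (the key in the comment is the RFC's published test key, not a real credential):

```python
import hashlib
import hmac
import struct

def totp(secret: bytes, unix_time: int, step: int = 30, digits: int = 6) -> str:
    """RFC 6238 time-based one-time password (HMAC-SHA1 variant)."""
    counter = unix_time // step                      # number of 30-second steps
    mac = hmac.new(secret, struct.pack(">Q", counter), hashlib.sha1).digest()
    offset = mac[-1] & 0x0F                          # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", mac[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: key b"12345678901234567890", T=59 -> "94287082" (8 digits)
```

Even if a deepfake defeats the face check, the attacker still needs the shared secret that generates these codes, which is the point of layering factors.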
++
John Gallagher, Vice
President of Viakoo Labs at Viakoo
Just like personal data such as Social Security numbers and
dates of birth, biometrics are increasingly being scraped, stored, and analyzed
by threat actors. In the same way, biometrics alone as a method of
authentication will fade away and be replaced with multi-factor
authentication. A biometrics database with more than 27 million records,
including fingerprint and facial recognition data, was stolen in 2019, adding
to other breaches exposing the biometrics of millions of people. https://theconversation.com/stolen-fingerprints-could-spell-the-end-of-biometric-security-heres-how-to-save-it-122001
IoT security is known to be weak, with IP cameras in
particular vulnerable to exploitation. It is not hard to
imagine video databases being mined for iris, fingerprint, and facial
recognition data; think of a typical office environment where the subject of
interest may pass a high-resolution camera multiple times a day for several
months. A bit of the iris here, a partial fingerprint there... with enough
repetition, compute power, and time, threat actors can likely "crack" a person's
full biometrics. Not to mention capturing their passwords if the cameras can be
tilted to see the keyboard being typed on.
This has already happened significantly with voice, and as
Steve highlighted with fingerprints. Other biometrics like facial
recognition will of course be equally compromised. A mother receives a late-night
ransom call, with her 15-year-old daughter pleading and screaming at the
other end of the line. It wasn't her daughter; it was an AI-generated
call based on her daughter's voice print, so accurate that even her
mother couldn't tell the difference. https://www.cnn.com/2023/04/29/us/ai-scam-calls-kidnapping-cec/index.html
Threats are growing at AI speed, and AI solutions are needed
to address them. The speed of AI and the potential of quantum computing will
soon be able to break biometrics, strong encryption, and passwords. The
answer to the erosion of biometrics and strong encryption will be found through
more extensive use of AI by defenders at all levels, and more specifically in
using AI to drive the rapid expansion of zero-trust approaches, threat
detection mechanisms, very early eradication of bots and malware, and multi-factor
use of digital authentication methods such as certificates.
++
Ted Miracco, CEO, Approov Mobile Security
While the social engineering piece of this attack is common, and stealing facial data isn't entirely new, the focus on deepfake creation for financial fraud is a concerning and very recent development that wouldn’t have been possible a couple of years ago. It is part of a rapidly evolving threat landscape that is fully enabled by AI technologies.
At this time, the GoldPickaxe malware can trick users into generating images and videos from their iOS and Android phones. This is not the same as stealing the biometric data stored in the device’s secure enclave, which is encrypted and remains secure. The malware is not breaching Face ID functionality or the security features of either mobile OS, so for now there is no reason to fear widespread attacks, and no reason to disable biometric support in the apps and phones that offer it.
There are several things that can be done to prevent these kinds of attacks. Endpoint detection and response (EDR) and runtime application self-protection (RASP) are solutions designed to detect and respond to malicious activity on mobile devices in real time.
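As an illustrative toy of what a RASP-style runtime check does (written in Python for brevity; real RASP SDKs run natively on-device and evaluate many more signals), one classic signal is the presence of root/jailbreak artifacts such as a `su` binary. The paths below are common Android examples, not an exhaustive or authoritative list:

```python
import os

# Common Android root-artifact paths (illustrative, not exhaustive).
SU_PATHS = ["/system/bin/su", "/system/xbin/su", "/sbin/su"]

def device_shows_root_indicators(paths=SU_PATHS, exists=os.path.exists):
    """One classic RASP runtime signal: presence of a `su` binary."""
    return any(exists(p) for p in paths)
```

An app (or its RASP agent) that sees such indicators can refuse to run sensitive flows, such as biometric enrollment, on a device that may already be compromised.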
It's extremely unlikely that "GoldPickaxe" will slow facial recognition development; however, it serves as a wake-up call for the responsible development and implementation of security mechanisms to detect deepfakes and other fraud.
##