Virtualization Technology News and Information
For over a decade, AI has been on the front line against fraud

You would have to have been living in a cave for the past two years to have not heard about the current and potential impact of generative artificial intelligence (AI). There has been much debate on the topic: just looking at a single day of coverage on Forbes brings up 30 different articles, ranging from simple explainers to talk of how it is ‘reshaping drug discovery' and academia. Much like shovel salesmen during a gold rush, companies who supply the power behind AI applications, like the chip-maker NVIDIA, have seen their valuations skyrocket.

To us in the finance sector, and particularly in chargeback remediation, AI is hardly new. With cybercrime as a whole expected to cost the world $10.5 trillion annually by 2025, there is simply no way to counter it ‘by hand,' so our industry deployed machine learning solutions many years ago to aggregate and segment large sets of transaction data to help guide policies and decision making.

We would like to take this opportunity to set the record straight on what AI is, what it can do and what it has been doing for many years to keep you and your business safe.

The many faces of artificial intelligence

A key stumbling block for many when it comes to understanding why, for example, ChatGPT isn't ‘talking' to them is the distinction between artificial intelligence and artificial general intelligence. Artificial general intelligence is what most people picture when they think of ‘AI,' and it is what the world's artificial intelligence companies are building toward: a virtual being with intelligence comparable to a human's, one that could genuinely be conversed with. These are the SkyNet-like entities that AI alarmists believe will destroy humanity. This is not what OpenAI, Google and others have created, or are anywhere near creating. Like any digital tool, including the web browser you may have used to find this very article, tools like ChatGPT can perform particular tasks in particular ways, but unlike human brains or hypothetical AGIs, they cannot learn new tasks on their own, nor do they have distinct perspectives, opinions or personalities.

A Large Language Model, like the one behind ChatGPT, trawls as many pieces of written content as possible to build a model of which words tend to follow other words, much in the same way that the autocorrect in your phone might learn that the word ‘keys' often follows ‘I can't find my'. It also learns what kinds of words follow certain questions, giving it the ability to answer questions in a realistic way, as if it were a thinking being. It doesn't understand the meaning or context of any of these words, but given a large enough dataset and enough tweaking by its human programmers, a Large Language Model can be very convincing. The recent ‘scandal' involving Google's AI image generator shows that these platforms have no real understanding of context: when asked to generate images of the Founding Fathers or Nazi soldiers, all they ‘understand' is that these were human beings, so they will include people of races and sexes who could never have held those positions.
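The autocorrect comparison above can be made concrete with a toy next-word predictor. This is a deliberately tiny sketch of the underlying idea (counting which words follow which), not how any production LLM is actually built; the corpus and function names are invented for illustration.

```python
from collections import Counter, defaultdict

# A toy bigram model: count which word follows each word in a corpus,
# then predict the most frequently observed follower. The corpus here
# is invented purely for the example.
corpus = (
    "i can't find my keys . i can't find my phone . "
    "i can't find my keys anywhere ."
).split()

# For each word, tally the words that immediately follow it.
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def predict_next(word):
    """Return the most frequently observed word after `word`."""
    return following[word].most_common(1)[0][0]

print(predict_next("my"))  # 'keys' follows 'my' more often than 'phone'
```

Real models replace these raw counts with learned statistical weights over vastly larger corpora, but the principle is the same: no meaning, just patterns of co-occurrence.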

AI in finance

If AI is so prone to error, then shouldn't it be restricted from use in finance? After all, people could be accused of fraud or have legitimate chargebacks denied if a fallible AI system were used.

Over many years, AI (or more accurately, machine learning) in anti-fraud applications has become so adept at finding fraud and representing chargebacks that these worries are largely unfounded. This is because instead of using computing for a new purpose it is badly suited to, such as creating original text or images, the anti-fraud industry uses it for something computers are uniquely good at: spotting irregularities in patterns. For example, if every field in an order form is filled in instantly, instead of taking a little time as a human being types, this could indicate that the form is being completed automatically rather than by a person, a telltale sign of fraudulent activity.
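The form-filling signal described above can be sketched in a few lines. This is a minimal illustration of the idea, not any vendor's actual rule: the field names, structure and the two-second threshold are assumptions invented for the example.

```python
from dataclasses import dataclass

# Illustrative threshold: a human needs at least a couple of seconds
# to fill in an order form; a bot can submit near-instantly.
MIN_HUMAN_FILL_SECONDS = 2.0

@dataclass
class FormSubmission:
    form_opened_at: float  # epoch seconds when the form was displayed
    submitted_at: float    # epoch seconds when it was submitted

def looks_automated(sub: FormSubmission) -> bool:
    """Flag submissions completed faster than a human plausibly could."""
    return (sub.submitted_at - sub.form_opened_at) < MIN_HUMAN_FILL_SECONDS

bot = FormSubmission(form_opened_at=100.0, submitted_at=100.3)
human = FormSubmission(form_opened_at=100.0, submitted_at=127.5)
print(looks_automated(bot), looks_automated(human))  # True False
```

In practice such a timing check would be one signal among many, weighted against dozens of other behavioural features rather than used as a verdict on its own.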

In chargeback management, machine learning can also look for patterns, which can be as basic as whether a person has repeatedly issued chargeback claims, or deeply complex. More importantly, this can be done on a per-business basis, so the machine-learning algorithm learns the specific nuances of how fraudulent chargebacks affect your particular industry without being polluted by data from companies with vastly different services. Because it is far faster than a human operator at learning these signs of chargebacks, both valid and invalid, and can make connections that people simply couldn't make as quickly, it contributes to customer satisfaction by reducing false positives and letting genuine transactions through efficiently.
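The most basic pattern mentioned above, a customer who repeatedly files chargeback claims, can be sketched as a simple frequency check. Real systems combine many such signals per merchant and learn thresholds from data; the threshold and sample records here are invented for illustration.

```python
from collections import Counter

# Illustrative rule: flag any customer who has filed at least this
# many chargeback claims against the same merchant.
REPEAT_THRESHOLD = 3

# Hypothetical chargeback claims, one customer ID per filed claim.
claims = ["cust_a", "cust_b", "cust_a", "cust_c", "cust_a", "cust_b"]

counts = Counter(claims)
flagged = {cust for cust, n in counts.items() if n >= REPEAT_THRESHOLD}
print(flagged)  # {'cust_a'}
```

A per-business deployment, as the article describes, would train or tune such rules on each merchant's own transaction history, so the same customer behaviour might be normal in one industry and anomalous in another.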

Being realistic about the capabilities of AI is going to be crucial for many companies over the coming years, as more businesses bring various forms of AI into their workflows. Although there are going to be some trials, errors and learning opportunities along the way, the use of AI and machine learning to prevent fraud and chargebacks is a mature technology that businesses around the world can trust if in experienced hands.



Monica Eaton 

Monica Eaton is the Founder and CEO of Chargebacks911 and Fi911, as well as Chief Information Officer of Global Risk Technologies. Monica has worked tirelessly to educate merchants and financial institutions about hidden threats in the rapidly changing payment fraud landscape. Chargebacks911 was founded in Tampa Bay, Florida, and has since expanded internationally, becoming Europe's first chargeback remediation specialist to tackle the chargeback fraud problem. In ten years, Chargebacks911 has successfully protected more than 10 billion online transactions and has recovered over $1 billion in chargeback fraud.

Recognizing that the impact of chargebacks goes beyond merchants, Fi911 provides unrivaled support to financial institutions with innovative back-office management technologies. Fi911's pioneering DisputeLab™ tool streamlines chargeback management for acquirers, automating legacy processes and standardizing methods that simplify and speed the end-to-end workflow, improving the customer experience and accountability for all stakeholders.

Monica is a passionate diversity advocate committed to developing and sharing innovative solutions that empower the global fintech space. She has earned numerous awards, distinctions and special recognitions, including the Retail Systems Awards, where she received the ‘Outstanding Individual Achievement Award' and was named ‘Global Leader of the Year' at the Women in IT Awards.

Published Thursday, April 11, 2024 7:33 AM by David Marshall