August 28, 2023

How AI is Transforming Fraud Prevention

Fraud is one of the most critical issues financial services companies are facing right now. According to Alloy’s State of Fraud Benchmark Report, 91% of banks and fintechs reported an increase in fraud year-over-year in 2022, with 70% of respondents losing over half a million dollars to fraud that year.

Numerous factors are contributing to the widespread rise in fraud. One of the most significant is that fraudsters have become more organized, tech-savvy, and scrappy than ever before.

Their most notable new (not-so) secret weapon is artificial intelligence (AI), which they leverage to execute more sophisticated fraud attacks. Meanwhile, banks and fintechs are also racing to figure out how they can use AI to fight back.

If you’re like us and you were equally enthralled/terrified by the 2023 horror film M3GAN, which tells the story of an AI-powered children’s doll turned evil killer, you might be a little (a lot) nervous about how AI could potentially impact life as we know it. We’ll save the existential dread for another time, but instead, take a look at how AI might help and hurt the financial services industry.

AI Technology: A Double-Edged Sword for Fighting Fraud

Something that was just a buzzword a few years ago is becoming an invaluable tool for both sides of the fence. Banks and fintechs use it to identify fraud. Meanwhile, bad actors are using it to create more sophisticated fraud attacks.
How AI Can Be Used by Bad Actors to Commit Fraud
AI is a boundless technology, and bad actors are using it in a multitude of ways for their financial benefit. Let’s take a look at a few key methods below.
Synthetic Identities
There’s a lot of buzz around synthetic identity fraud, which occurs when a bad actor uses a combination of phony (and sometimes legitimate) personally identifiable information (PII) to create a new fake identity to commit fraud. Generative adversarial networks (GANs) can be used to create fake documents, such as passports and driver’s licenses, that can then be used to commit fraud. Generative AI can also generate completely fake faces to pair with these synthetic identities.
Deepfakes
Deepfakes are videos or audio recordings that have been manipulated to make it appear as if someone said or did something they never actually said or did. Despite being a relatively new concept, deepfakes are already drawing increased regulatory attention as they become more prevalent.
Biometric Spoofing
AI can be employed to bypass biometric authentication systems used in IDV processes. Fraudsters can use sophisticated AI algorithms to create 3D facial masks that mimic the biometric features of an individual, allowing them to impersonate someone else during identity verification.
Social Engineering Scams
Fraudsters are using AI to make their phishing, SMS phishing (smishing), and voice phishing (vishing) attempts more realistic than ever before. In these scams, fraudsters use AI to better emulate the people they are impersonating to perform second-party fraud on unsuspecting victims. GANs can also be used to create completely fake websites that look like legitimate websites, which can be used to steal personal information from victims.
Data Manipulation and Augmentation
Fraudsters also use AI algorithms to alter or augment data within an ID document itself. They can modify text, numbers, or other features on an ID using AI tools, making it difficult for manual or automated verification systems to detect the alterations.
How AI Can Help Banks and Fintechs Identify Fraud
Identity Document Verification (IDV)
Even with how rapidly the fraud landscape has been changing, document verification remains the most popular tool to prevent fraud. IDV uses AI to verify an applicant’s identity by comparing their facial biometrics to the image on their government-issued ID, ensuring the person claiming the identity aligns with the presented evidence.
Document authenticity checks can involve analyzing the captured image to determine its authenticity, including the placement of text fields and images. Banks and fintechs should also compare the submitted ID against a database of thousands of templates, examining physical features and detecting signs of tampering, forgery, or bogus evidence such as electronic screens, photoprint IDs, and photoshopped IDs. AI can also power facial/selfie verification to compare a selfie image with the facial image provided on the ID and confirm a match.
Driver's License Verification stands as an integral part of IDV. As fraudsters become more sophisticated, this traditional method of establishing a person's identity maintains its relevance, enhancing the overall trustworthiness of IDV processes.
Advanced AI technologies can scrutinize these documents, examining their microprint, holographic overlays, and other security features for signs of tampering or forgery. Cross-verifying ID data across multiple third-party sources provides a comprehensive check that enhances the authenticity of the IDV process. By validating identification against third-party sources like the state DMV registry and other state government databases, banks and fintechs are able to not only verify the visual ID, but verify the legitimacy of the information presented on the ID, helping to weed out falsified, AI-generated documents.
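To make the idea concrete, the checks described above can be composed into a simple decision pipeline. This is a minimal sketch under assumed names and thresholds: the `DocumentChecks` structure, the score fields, and the cutoff values are all illustrative, not any vendor's actual API.

```python
# Illustrative sketch of combining independent IDV checks into one decision.
# All field names and thresholds are assumptions for demonstration.
from dataclasses import dataclass

@dataclass
class DocumentChecks:
    template_match_score: float  # similarity vs. known ID templates (0-1)
    tamper_score: float          # estimated likelihood of tampering (0-1)
    selfie_face_match: float     # selfie vs. ID-photo similarity (0-1)
    dmv_record_match: bool       # data confirmed by a third-party registry

def verify_document(checks: DocumentChecks) -> tuple[bool, list[str]]:
    """Return (approved, reasons): approve only if every check passes."""
    reasons = []
    if checks.template_match_score < 0.8:
        reasons.append("document does not match known templates")
    if checks.tamper_score > 0.3:
        reasons.append("possible tampering detected")
    if checks.selfie_face_match < 0.85:
        reasons.append("selfie does not match ID photo")
    if not checks.dmv_record_match:
        reasons.append("ID data not confirmed by third-party source")
    return (not reasons, reasons)

ok, why = verify_document(DocumentChecks(0.95, 0.05, 0.90, True))
```

The design point is that no single check is decisive; a forged document that fools the template match can still fail the cross-verification against third-party data.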
Predictive Risk Ratings
Traditionally, banks and fintechs used rules-based fraud detection systems that identify fraud based on a customer’s PII and their historical fraud data. While those systems can be effective at identifying obvious signs of fraud types you’ve seen in the past, they don’t future-proof your fraud prevention against the new and more sophisticated ways bad actors will attempt to defraud you in the future.
AI and machine learning models can leverage customer PII and your historical data alongside anonymized fraud trends from across the financial services industry to identify patterns and anomalies in data or ID documents, and to better predict the likelihood that an entity will commit fraud in the future. Over time, the algorithms get smarter on their own and can detect new fraud methods before you’ve encountered them.
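A risk rating of this kind boils down to combining signals into a probability-like score. Here is a toy sketch using a logistic function; the feature names and weights are invented for illustration, where a real system would learn them from labeled historical fraud data rather than hard-code them.

```python
# Toy risk-scoring sketch. WEIGHTS and features are illustrative assumptions;
# production models learn weights from historical fraud outcomes.
import math

WEIGHTS = {
    "email_age_days": -0.002,          # older email address -> lower risk
    "num_recent_applications": 0.6,    # application velocity -> higher risk
    "ssn_seen_with_other_names": 1.5,  # possible synthetic identity signal
    "document_tamper_score": 2.0,      # from the IDV checks
}
BIAS = -1.0

def risk_score(features: dict) -> float:
    """Logistic-style score in [0, 1]; higher means more likely fraud."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0.0) for k in WEIGHTS)
    return 1.0 / (1.0 + math.exp(-z))

low = risk_score({"email_age_days": 2000, "num_recent_applications": 0,
                  "ssn_seen_with_other_names": 0, "document_tamper_score": 0.0})
high = risk_score({"email_age_days": 3, "num_recent_applications": 4,
                   "ssn_seen_with_other_names": 1, "document_tamper_score": 0.8})
```

The score can then feed a policy: approve low-risk applicants automatically, send mid-risk ones to manual review, and deny the highest-risk ones.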
Behavioral Analytics
Sometimes, suspicious behaviors can be hard to catch when you’re only looking at single transactions. The individual transactions themselves may not be reason enough to flag, but when you look at them across all of a customer’s financial activity they can start to become more suspicious.
For example, a transaction at a nail salon may not be cause for concern. However, a transaction at a nail salon at 2:00 in the morning is a bit more suspicious. Now, what if you add in that the same customer is buying gas up and down a known route for human trafficking in the middle of the night too? AI and machine learning models can identify patterns like these to better detect activity associated with human trafficking, money laundering, and other financial crimes.
Similarly, AI models can be built out to identify account takeover or credit card fraud by learning about each of your customers and flagging behavior that doesn’t fit their specific profile.
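One simple way to flag behavior that doesn't fit a customer's profile is to compare each transaction against that customer's own history. The sketch below uses a plain z-score over two assumed features (hour of day and amount); real behavioral models use far richer features and learned thresholds, but the 2:00 a.m. example above maps directly to the "hour" feature here.

```python
# Sketch of per-customer behavioral flagging via z-scores.
# Features (hour, amount) and the threshold are illustrative assumptions.
from statistics import mean, stdev

def build_profile(history):
    """history: list of (hour_of_day, amount) tuples from past activity."""
    hours = [h for h, _ in history]
    amounts = [a for _, a in history]
    return {"hour": (mean(hours), stdev(hours)),
            "amount": (mean(amounts), stdev(amounts))}

def is_anomalous(txn, profile, threshold=3.0):
    """Flag if any feature lies more than `threshold` std devs from the mean."""
    hour, amount = txn
    for value, (mu, sigma) in ((hour, profile["hour"]),
                               (amount, profile["amount"])):
        if sigma > 0 and abs(value - mu) / sigma > threshold:
            return True
    return False

# A customer who normally shops in the early afternoon for ~$40
history = [(13, 40.0), (14, 35.0), (12, 45.0), (15, 38.0), (13, 42.0)]
profile = build_profile(history)
```

A 2:00 a.m. transaction would be flagged against this profile while a typical afternoon purchase would not, which is the core idea behind account-takeover detection: the fraudster's behavior rarely matches the legitimate customer's.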

Fraud Strategies are Constantly Evolving, but Identity is Key

Technology will continue to evolve, so organizations have to keep evolving their strategies to stay ahead of new fraud risks. But whether the tactic is AI and machine learning or something else, at the end of the day, these AI-generated identities, AI-enabled fraud tactics, and bots are not the ones cashing in on the fraudulent activity. There will always be someone behind the computer screen with a real identity who benefits from fraud.
Leveraging technology like AI can be a part of a cohesive fraud prevention strategy, but it shouldn’t be your only tool. The key to understanding the real identity of a user and fighting fraud is to use a combination of different methods, tools, and data sources together to confidently understand the identity of your customers.
Note: The original post appeared on Alloy's blog on 7/18/23 and is republished here with permission. This article is co-authored by Lilach Shenker. Lilach leads Product, Partner, and Client Marketing at the identity risk management company Alloy. We're grateful for her expertise and contribution to this piece.

Asher Furnad

Asher Furnad is a GTM Lead at Vouched and excels in the field of financial services, leveraging experiences from his tenure at NASDAQ and Gartner to help drive Vouched's market positioning and growth.
