How to Deploy AI for Fraud Detection in Financial Services

Daniel Faggella

Daniel Faggella is Head of Research at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and leading enterprises, Daniel is a globally sought-after expert on the competitive strategy implications of AI for business and government leaders.


Fraud, money laundering, and other cyber crimes often increase in times of economic strife, and the pandemic is no different. In light of the coronavirus crisis, we believe that fraud detection applications are among the AI use-cases that are most likely to be adopted and deployed even when funds dry up for other kinds of more long-term, strategic innovation investments.

Emerj’s AI Opportunity Landscape research in financial services shows that fraud detection applications are often among the best bets for a measurable, near-term ROI after a relatively simple integration. Applying artificial intelligence in the enterprise is never easy. However, many AI-enabled fraud detection applications require only a few key data sources to yield tangible results. This makes them good first projects for building AI-related skills in the enterprise.

In this article we discuss a strategy for deploying an AI-enabled fraud detection application in financial services. First, we run through four steps for bringing one such application to life in the enterprise, and we end with a discussion of how to measure its success.

Steps for Deployment

There is a lot that goes into deploying an artificial intelligence solution, and we’ve put together an entire AI strategy report on this topic: the AI Deployment Roadmap. There are four steps that leaders should understand if they want to deploy AI in the enterprise. Leaders in financial services need to:

  1. Determine Cross-Functional AI Team Members
  2. Audit the Data
  3. Think Long-Term
  4. Train the Model

1. Determine Cross-Functional AI Team Members

The first step is determining cross-functional AI team members. In this section, we use the example of a credit card company that is trying to detect payment fraud.

A credit card company’s cross-functional AI team will need a business leader with experience in fraud detection processes: someone who is intimately familiar with the company’s strategic goals and high-level financial objectives.

The company will need several people involved who are subject-matter experts explicitly in fraud detection, who understand what to look for, who understand the origins of fraud, and who understand in robust detail the processes in place to combat it.

The company will need in-house data scientists, preferably ones with prior experience working with fraud data.

Finally, the company will need a stakeholder from IT who understands how to access relevant payment fraud data and customer data that would be necessary to train algorithms. 

It is these team members together that will be able to assess the state of the data, evaluate the solution as it’s being developed, and have the confidence to actually deploy it.

2. Audit the Data

Many credit card companies are already training algorithms on payment fraud data, but companies will need to look at their data and get a sense of whether the data is accessible.

A credit card company needs to know if the data is harmonized so that it’s easy to train an algorithm on it. It also needs to know whether that data actually has predictive value.

It is often assumed that data scientists can do this job by themselves, but in order to make sense of this kind of data, a data scientist needs subject-matter experts to explain its aspects and features.
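
As a minimal sketch of what such an audit might look like in practice, assuming the transaction history can be exported to a flat file, the checks below cover accessibility, harmonization, and label availability. The file name, column names, and the is_fraud label are all illustrative assumptions, not a prescribed schema:

```python
import pandas as pd

# Hypothetical transaction export; column names are illustrative only.
df = pd.read_csv("transactions.csv")

print(df.shape)    # how much data is there?
print(df.dtypes)   # are amounts numeric and timestamps parsed correctly?
print(df.isna().mean().sort_values(ascending=False))  # sparsely populated fields
print(df.duplicated().sum())  # duplicate rows suggest ingestion problems

# Is there a usable label? A missing or severely imbalanced label column
# changes the shape of the whole project.
if "is_fraud" in df.columns:
    print(df["is_fraud"].value_counts(normalize=True))
```

Even a simple pass like this surfaces the questions that subject-matter experts need to answer before any modeling begins.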

3. Think Long-Term

A credit card company needs to think about a realistic time horizon for getting a project like this done. It needs to think about how the project will contribute to long-term strategic objectives and what reasonable near-term goals for the project would be.

It’s critical that the company involves all of these stakeholders in this planning, because business leadership will often set unrealistic goals given what AI is capable of.

Data scientists are likewise unable to set these goals by themselves; they need input from IT and subject-matter experts to ground their expectations and understanding.

We’ll go into more depth on understanding the ROI of these applications in the next section of the article. 

4. Train the Model

Normally, a model is trained by working with a historical data set. A credit card company can look at a number of payments that were non-fraudulent and a number of payments that were fraudulent. 

Subject-matter experts might even label different kinds of fraud and determine which features and factors about the fraudulent and non-fraudulent transactions were indicative of the fraud.

Labeling and supervision may go beyond the simple fraud-versus-legitimate dimension. Subject-matter experts may also label transactions with certain geolocations and certain variances in purchase activity that they believe indicate fraud.
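
As a hedged illustration of this step, the sketch below trains a scikit-learn classifier on a hypothetical labeled history. The file name, the is_fraud label, and feature names like km_from_home_location are assumptions standing in for whatever features the subject-matter experts actually define:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical labeled history: each row is a transaction and "is_fraud"
# is the label provided by subject-matter experts.
df = pd.read_csv("labeled_transactions.csv")

# Illustrative features of the kind SMEs might flag: transaction size,
# distance from the customer's home area, and deviation from typical spend.
features = ["amount", "km_from_home_location", "purchase_amount_zscore"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["is_fraud"],
    test_size=0.2, stratify=df["is_fraud"], random_state=42,
)

model = GradientBoostingClassifier()
model.fit(X_train, y_train)
```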

The company would then test the model on a small stream of incoming data. It wouldn’t actually act on the model’s outputs at this stage, however; the company shouldn’t interrupt its fraud team’s existing workflows yet. It simply wants to see if the system can detect fraud more effectively than its current methods do.

Often, this is as simple as measuring the company’s false positive and false negative rates and having a benchmark estimate of whether those rates are better or worse than its current systems’.
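
Continuing the sketch above, both rates can be computed from a confusion matrix on the held-out data; the benchmark figures for the current system would come from the fraud team’s existing records:

```python
from sklearn.metrics import confusion_matrix

# Compare the model's predictions on held-out data against confirmed labels.
y_pred = model.predict(X_test)
tn, fp, fn, tp = confusion_matrix(y_test, y_pred).ravel()

false_positive_rate = fp / (fp + tn)  # legitimate payments wrongly flagged
false_negative_rate = fn / (fn + tp)  # fraud the model missed
print(f"FPR: {false_positive_rate:.3%}  FNR: {false_negative_rate:.3%}")
```

These rates only become meaningful next to the numbers the fraud team already tracks for its current process.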

Again, this requires input from multiple stakeholders. These tests often fail, and teams have to go back to the drawing board to determine how they want to label and organize the data and train the algorithm in order to produce better results the next time around.

After all of these steps, companies can begin down the road towards deployment. Deployment normally involves an incubation period with very limited use of the application operating live within the business before it is actually scaled to the entire organization. 

Measuring ROI 

Determining how a company wants to measure the ROI of an AI application is often not self-evident. It sometimes takes AI startups two to three years to determine the core metrics and the core proxies of ROI that they want to use in order to prove the value of their product to their clients.

Finding an enterprise fit for artificial intelligence is never easy, and thinking through ROI ahead of time is crucial for having the best chance of realistic expectations and of actually delivering value in the first place. 

Improving the Customer Experience

Customer experience is one way to measure the ROI of AI. For example, an insurance company may want to leverage AI for faster payouts. AI may only be able to automate routine, low-risk claims, but even this may greatly improve the customer experience.

Lemonade is an insurtech company that claims to be able to pay out customers for claims they make on their damaged or stolen property in as little as three seconds. Customers can file claims via chatbot and find out in the same window whether their claim was approved.

The company claims its software uses AI to detect whether a claim is fraudulent as soon as the customer files it in the chat window. In some cases, this allows the customer’s claim to be approved almost instantly.

Similarly, the customer of a credit card company might normally be based in the New York City area, but they might be traveling in Germany and make a purchase while there. Traditional fraud detection methods might put a hold on their card and block the purchase.

But AI could help quickly determine that the transaction is not fraudulent, allowing the customer to make their purchase without hassle. This could go a long way toward increasing the customer’s brand loyalty.

Detecting and Reducing Fraud

Of course, financial institutions could also measure their fraud detection software by the reduction in fraudulent transactions going through their digital environments. If a company can determine when a credit card transaction is actually fraudulent, it can protect its customers from money being taken out of their bank accounts and from negative impacts to their credit scores.

Anomaly detection, a machine learning technique, is very good at finding deviations from the norm within structured data, including payments. The financial services industry has been using anomaly detection for payment fraud since the early 2010s.
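
As one hedged sketch of the idea, scikit-learn’s IsolationForest can flag payments that deviate from the norm without needing any fraud labels at all. The file and column names here are illustrative assumptions:

```python
import pandas as pd
from sklearn.ensemble import IsolationForest

# Hypothetical structured payment data; no fraud labels are needed here.
payments = pd.read_csv("payments.csv")[["amount", "hour_of_day", "merchant_risk_score"]]

# IsolationForest scores how easily each observation can be isolated from
# the rest; "contamination" encodes an assumed share of anomalous payments.
detector = IsolationForest(contamination=0.01, random_state=42)
flags = detector.fit_predict(payments)  # -1 = anomaly, 1 = normal

print("Payments flagged for review:", (flags == -1).sum())
```

The flagged payments would then go to human fraud specialists for review rather than being blocked automatically.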

Ayasdi was among the first well-funded AI companies to offer anomaly detection-based payment fraud detection software in financial services, having been founded by researchers at Stanford in 2008.

Fraud metrics may be among the easiest to measure since a company can simply measure the number of fraud instances its fraud specialists detect without the software—a number it likely already tracks—and compare it against the number of fraud instances the software detects after implementation.
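
In its simplest form, that comparison is just arithmetic; the counts below are hypothetical placeholders for the figures a fraud team would pull from its own records:

```python
# Hypothetical counts over the same review period; both figures are
# placeholders for numbers a fraud team would pull from its own records.
detected_without_software = 120  # instances specialists caught on their own
detected_with_software = 150     # instances flagged after implementation

lift = (detected_with_software - detected_without_software) / detected_without_software
print(f"Relative improvement in detected fraud: {lift:.0%}")  # prints 25%
```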

Reducing Regulatory Risk

Regulatory risk may also be reduced as AI software catches more instances of fraud. Banks and other large financial institutions can be liable for facilitating fraud and money laundering within their digital environments.

Anomaly detection software in particular could help detect sophisticated money laundering schemes by finding patterns within millions of transactions that may indicate nefarious behavior.

In doing so, it could allow financial institutions to block suspicious payments, inform authorities, and save themselves from incurring large fines from the government.

Improving Process Efficiency

If fraud detection can be done in a way that requires less human investigation of individual transactions or claims, the company doesn’t have to hire as many people for those jobs. It could also have its human staff do more hands-on customer work instead of manually scanning transactions. This ultimately improves efficiency within the fraud department.

The financial institution saves money on overhead all while reducing fraud and, in insurance, expediting the customer’s payout.
