COVID gave the digitalization of modern business such a tremendous push forward that it advanced a decade in just a few years. Companies and sectors that had never imagined a remote workforce were forced to connect entirely online.
As companies embraced remote life for their employees and customers, cybercriminals shifted their targets from individuals and small businesses to major corporations, governments, and critical infrastructure. While fraud increased in every sector, financial services proved especially vulnerable to the online crime spree.
While the post-COVID increase in fraud has brought new challenges for financial institutions (FIs), the ensuing arms race has given rise to new and more advanced forms of fraud detection that will continue to streamline compliance and cybersecurity workflows.
Emerj CEO and Head of Research Daniel Faggella recently spoke with Mark Gazit, CEO & Founder of ThetaRay, on the AI in Business podcast over the course of two episodes about these trends. In this analysis of their conversation, we examine two key insights Mark offers finance leaders fighting the post-COVID wave of fraud and cybercrime:
- How the historically unprecedented nature of post-COVID online activity during lockdown is behind the major cyber financial crime wave driving AI adoption in financial services.
- The increase in data on fraud is giving rise to “intuitive” AI technologies that move away from rules-based systems and encourage customer engagement to fuel continuous learning that finds deeper patterns in indicative behavior.
Listen to the full episodes below:
Guest: Mark Gazit, CEO & Founder of ThetaRay
Expertise: Driving Global Market Expansion & Domination, Creating Partnership Ecosystems, and Establishing Government Relations
Brief Recognition: Mark has been the CEO, President & Board Director of ThetaRay for ten years. Before this, he held several leadership positions at the Israel Export Institute as Chairman of the Public Management Board for Cyber & FinTech and as a board member of the Israel-America Chamber of Commerce.
AI Capabilities Are Front-Line Defense for FIs in Post-COVID Surge in Cybercrime
COVID changed the world of crime into one where robbers and criminals ditched physical, dangerous bank robberies in favor of virtual ones.
A report by the Financial Stability Institute in cooperation with the Bank for International Settlements (BIS) describes how criminals exploited the vulnerabilities opened up by COVID-19, which increased the risks of cyberattacks, money laundering, and terrorist financing.
However, with the post-pandemic increase in low- and no-interest loans came a commensurate wave of fraud – as unprecedented as the growing wave of cybercrime, though driven by distinctly different aspects of the pandemic.
Among these forms of fraud was a spike in high-value insurance claims filed from home via digital channels after governments imposed restrictions on movement. Without associates reviewing claims in person, many insurers saw an increase in suspicious claims for events like automobile accidents and flooded basements.
According to PwC, even legitimate customers exploit loopholes in the virtual process to file higher-value claims – incentivizing insurance companies to make their fraud detection investigations and technology multi-layered in order to retain customers.
The pandemic and the surge in digital services also made the cloud the centerpiece of digital experiences, with banks investing in cloud-enabled startups to improve the customer experience.
Organizations were rapidly deploying remote systems to connect to the remote workforce, which increased the opportunities for cybercriminals to take advantage of the security vulnerabilities to steal data and cause disruption.
According to Interpol reporting, and corroborated by Mark’s appearance on the podcast, three factors drove this unprecedented increase in cybercrime post-pandemic:
- Employees connecting with the banking systems outside of the bank’s perimeter (remote)
- Employees using VPNs to get connected online
- Growth in online customer transactions
As Mark describes the trend – in an episode that premiered just as lockdowns were sweeping the world – COVID converged the previously distinct spheres of cybersecurity and financial crime, resulting in a novel global wave of distinctly “cyber-financial crime.”
As he further explains, working from home also increased the online exposure of banking customers overall, creating new and, in some cases, naive targets for online fraudsters. Indeed, the ability to access corporate networks from remote locations significantly expanded cybercriminals’ attack surface.
In exploiting these vulnerabilities during the pandemic, cybercriminals made the very ubiquity of post-pandemic online life the underlying driver of increasing cyberattack risk – and, according to Mark, the driver for AI solutions in this space.
While AI capabilities are now commonplace among cybercriminals, especially those targeting legacy systems and institutions, Mark argues that by understanding how post-pandemic life online is driving the current wave of cybercrime, business leaders can turn the overwhelming volume of data to their advantage.
Continuous Learning Based on Customer Inputs for the “Next Level” of Fraud Detection
In light of available AI capabilities, there are many applications for fraud detection in the financial services industry. To accurately detect fraud, businesses must understand what type of customer activity indicates genuinely fraudulent behavior – a process typically riddled with false signals.
In his second appearance on the podcast, Mark discusses how machine learning is uniquely capable among AI technologies for detecting deviant behavior in digital environments. It is also an AI capability suited to help legacy enterprises sort vast amounts of information into customer profiles based on past transactions.
Once those customer profiles are in place, the system can automatically detect whether a particular customer profile matches the pattern of a fraudulent one and, if so, raise a flag.
From there, all transactions can be assigned a fraud score based on the risk parameters set by the company. These scores typically consider the transaction time, IP address, and amount to assess the fraud risk involved. Using these scores, FIs can train models to approve transactions, flag them for review, or have a human reject them based on greater context.
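As a rough illustration of the scoring workflow described above, the sketch below combines transaction time, amount, and IP novelty into a single risk score, then routes each transaction to approve, review, or reject. The field names, weights, and thresholds are all illustrative assumptions for this sketch, not ThetaRay’s actual method:

```python
# Hypothetical sketch of a score-then-route fraud check.
# Weights and thresholds are illustrative assumptions only.

from dataclasses import dataclass

@dataclass
class Transaction:
    amount: float     # transaction amount in the account currency
    hour: int         # local hour of day (0-23)
    ip_is_new: bool   # IP address not previously seen for this customer

def fraud_score(tx: Transaction) -> float:
    """Combine simple risk signals into a 0-1 score."""
    score = 0.0
    if tx.amount > 5_000:            # unusually large transfer
        score += 0.4
    if tx.hour < 6 or tx.hour > 22:  # outside typical activity hours
        score += 0.2
    if tx.ip_is_new:                 # unrecognized network origin
        score += 0.3
    return min(score, 1.0)

def route(tx: Transaction) -> str:
    """Approve, flag for human review, or reject based on thresholds."""
    s = fraud_score(tx)
    if s < 0.3:
        return "approve"
    if s < 0.7:
        return "review"
    return "reject"
```

A large, late-night transfer from an unknown IP (`route(Transaction(amount=12_000, hour=3, ip_is_new=True))`) accumulates a score of 0.9 and is rejected, while a small daytime payment from a known IP is approved. Real deployments would learn these weights from labeled data rather than hand-setting them.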
For business leaders building these systems with machine learning, Mark points to an emerging capability he calls “intuitive AI” that looks for deeper patterns in fraudulent behavior with the help of an interface that welcomes customer input. That interface then clusters similar events and fraud-indicative behaviors – even minute, disparate transactions designed to avoid radical departures in spending patterns.
Training models on continued customer input brings greater context to basic machine learning capabilities, better predicting which human activity is truly indicative of fraudulent behavior.
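The feedback loop described above can be sketched as follows: transactions the system flags become labeled training examples once the customer confirms or denies them. The function and field names here are hypothetical illustrations of the idea, not ThetaRay’s actual system:

```python
# Hypothetical sketch of a customer-feedback loop for continuous learning.
# Flagged transactions that customers confirm or deny become labeled
# training data for the next model refresh. All names are assumptions.

labeled_data: list[tuple[dict, bool]] = []   # (features, is_fraud)

def record_feedback(tx_features: dict, customer_says_fraud: bool) -> None:
    """Store the customer's verdict as a label for the next training run."""
    labeled_data.append((tx_features, customer_says_fraud))

# A flagged cluster of tiny recurring payments the customer later
# confirms were innocent:
record_feedback({"amount": 4.00, "weekly_repeats": 6}, customer_says_fraud=False)
record_feedback({"amount": 7.50, "weekly_repeats": 5}, customer_says_fraud=False)

# Periodically, the model retrains on labeled_data so that similar
# benign patterns stop triggering false positives.
```

The design point is that the model is never frozen: every customer response, including “this was innocent,” becomes a data point that sharpens future predictions, which is the distinction Mark draws against static rules-based systems.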
Mark brings up the example of flagging potential fraud in single-digit dollar payments made over several weeks that a customer later proved were an innocent set of transactions.
The experience still helped their model learn to find more specific instances of fraud in the future. “Now what the system will do… the algorithms will use this input that customers give us as another data set. Basically after literally a week or two, our system will propose a color,” Mark tells Emerj.
He emphasizes that what gives “intuitive AI” technologies a distinct advantage over typical AI-enhanced fraud detection systems is their emphasis on customer input feeding constant learning for fraud models, rather than strict rules-based systems.
Citing another instance in which tens of thousands of customers at a banking client were each defrauded of 10-13 francs across a range of transactions over months, Mark describes using their systems to automatically find “a relationship between many parameters.”
One parameter was customers who had been with the bank for less than one year. Unfortunately, the client bank had over 100 million customers, and it is impossible to investigate even ten percent of that number for fraud. The model then pared down this sample using the following criteria:
- How many currencies these customers were using.
- How many of their transactions were with only one counterparty.
- Industry misalignment and other KYC criteria.
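The pare-down above can be sketched as successive filters over customer records. The schema here – tenure, currency count, counterparty concentration, and KYC industry match – is a hypothetical illustration of narrowing a large population to a small suspicious subset, not ThetaRay’s actual pipeline:

```python
# Hypothetical sketch of narrowing a large customer population using
# the criteria Mark mentions. All field names and thresholds are
# illustrative assumptions.

customers = [
    {"id": 1, "tenure_days": 120, "currencies": 4,
     "single_counterparty_ratio": 0.95, "kyc_industry_match": False},
    {"id": 2, "tenure_days": 2400, "currencies": 1,
     "single_counterparty_ratio": 0.10, "kyc_industry_match": True},
    {"id": 3, "tenure_days": 200, "currencies": 3,
     "single_counterparty_ratio": 0.90, "kyc_industry_match": False},
]

def suspicious(c: dict) -> bool:
    """Combined criteria: new customer, many currencies, transactions
    concentrated on one counterparty, and a KYC industry mismatch."""
    return (
        c["tenure_days"] < 365
        and c["currencies"] >= 3
        and c["single_counterparty_ratio"] > 0.8
        and not c["kyc_industry_match"]
    )

flagged = [c["id"] for c in customers if suspicious(c)]
print(flagged)  # -> [1, 3]
```

Each filter alone is far too broad to investigate at the scale of 100 million customers; it is the intersection of all the criteria that shrinks the candidate pool to something investigators can actually review.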
In doing so, the model was able to automatically highlight the very patterns cybercriminals use to hide their activities in small actions spread across large and increasingly untraceable patterns.
The level of detail in noticing minute observations shows that this “intuitive” AI has enormous potential across fraud detection in financial services. It represents a larger vision for AI adoption in compliance, requiring a wider organizational effort to deploy. Still, Mark notes that the ROI is well worth the effort to provide customers with such comprehensive protection.