AI plays a more significant role in our lives every day, and cybersecurity is hardly an exception – for both the “cat” and “mouse” sides of the table. According to a study from the Ponemon Institute in partnership with Accenture, cybercrime alone cost financial services firms $18.5 million on average in 2018.
More broadly, cybercrime costs the United States $242 billion a year, according to the Cybersecurity and Infrastructure Security Agency (CISA). Yet in nearly every conflict waged over these rules-based systems, AI has a role to play in enterprise defensive operations that help organizations avoid penalties and lower their risk exposure.
In this article, we’ll explore some of the key definitions and terms at the intersection of AI and cybersecurity, focusing on three major AI trends impacting financial services organizations today:
- Privacy-driven regulatory compliance: AI capabilities are uniquely suited to help FIs meet regulatory burdens that safeguard user data privacy across differing geographic standards.
- Threat detection and the AI arms race in cyberattacks: Cybercriminals and financial services firms alike are leveraging AI in a veritable arms race.
- ‘Front-door’ verification: The growing importance of verifying system users at their first interaction with a system to streamline cybersecurity and compliance workflows.
We will begin by giving definitions for the terms and regulations involved.
Working Definitions for Cybersecurity and Cybercrime
By various international standards, cybercrime comprises two related forms of criminal activity:
- Cyber-dependent crimes: Crimes that can only be committed using an ICT (information and communications technology) device; most forms of hacking fall into this category.
- Cyber-enabled crimes: Crimes where an ICT device is used to scale the criminality involved.
While the term is sometimes reserved for attacks on government and public-sector systems, we take a broader view of cybercrime here: the intentional, organized effort of individuals to use personal data outside widespread international and industry standards and regulatory regimes.
For the purposes of the present article, we will use cybersecurity to mean both efforts to address the aftermath of cybercrime, as defined above, and preventative measures to stop cyber criminality from occurring in the first place.
We will also further define cyberattacks as organized waves of hacking methods that subvert the tech stacks and legacy systems of financial institutions, turning them against their clients in order to carry out large-scale cyber criminality.
Trend 1: Privacy-Driven Regulatory Compliance
AI use cases are prominent in the fields of both communications surveillance and fraud detection. Whether monitoring chat rooms, messaging apps, eCommerce environments, or personal credit card transactions, AI capabilities – machine learning and data analytics in particular – perform the core work of ensuring illegal activity is not taking place in these enterprise domains.
AI-driven cybersecurity solutions operate in much the same way: by leveraging machine learning and data analytics to detect suspicious behavior occurring in online environments, between devices and across networks.
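To make the mechanics concrete, below is a minimal sketch of the kind of anomaly detection underlying such solutions, assuming Python with scikit-learn. The feature names and numbers are illustrative, not drawn from any vendor's product:

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Hypothetical per-session features: requests/minute, KB transferred,
# distinct endpoints touched, failed-auth count.
normal_sessions = rng.normal(loc=[10, 500, 3, 0.2],
                             scale=[3, 150, 1, 0.4],
                             size=(1000, 4))

# Train only on ordinary traffic; the model learns what "normal" looks like.
model = IsolationForest(contamination=0.01, random_state=0)
model.fit(normal_sessions)

# A new session with a burst of activity: -1 means "outlier, route to an analyst".
suspicious = np.array([[480.0, 90_000.0, 60.0, 12.0]])
print(model.predict(suspicious))  # [-1]
```

The design choice here – learning a baseline of normal behavior rather than enumerating attack signatures – is what lets these systems flag novel threats that rules alone would miss.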
Often these solutions focus on providing AI-driven compliance with existing regulations and other rules-based systems specific to the dynamics of cybercrime. The various regulations associated with cybercrime concern three areas:
- How consumer data and other sensitive financial information are handled at points of sale, both in-person and online
- International government-based and industry standards for privacy and security of consumer information
- Anywhere else personal data is exposed for the sake of verification in a financial transaction
Each of these regulatory regimes involves some degree of fine or penalty for noncompliance, and all are commonplace for FIs operating globally. Those costs make for strong interest and investment in proven AI solutions that help enterprises avoid penalties for noncompliance.
BSA-Related Compliance Solutions
The most robust market for AI compliance solutions is in detecting fraud that falls between AML and KYC compliance, both associated with the Bank Secrecy Act of 1970 – among the first laws in the US to address money laundering directly. Money laundering detection and KYC compliance go hand-in-hand, as detecting money laundering activity essentially boils down to how well an FI knows its customer.
Given the current challenges facing financial institutions carrying out transactions in a global context, knowing your customer can take on equally global proportions in terms of potential consequences. Thomas Mangine, Director of AML and Risk Reliance for the Bank of Montreal, recently appeared on Emerj’s AI in Business podcast to discuss how AML and KYC compliance take on new dimensions where sanctions and international conflict are involved:
“And almost any given day, during the week, you will see some sort of discussion in the West by either the United States, Canada, Australia or the European Union for increased sanctions against Russia for the fighting in southeastern Europe. So you’re required to do more and more. And one of the challenges becomes, well, where do we get that vital information that provides the indicators? Or, as the industry calls them, the ‘red flags’ that give you signs that there might be something illegal or unusual going on here, and you need to look at it more closely.”
Director of AML and Risk Reliance for the Bank of Montreal, Thomas Mangine
Among the most promising use cases in BSA-related compliance is the ability of AI technologies to dramatically reduce false positives in AML/KYC threat detection. Emerging AI technologies are also detecting deeper patterns of behavior that extend enterprise KYC compliance capacities to KYCC (or ‘knowing your customer’s customer’) levels – and regulatory expectations will not be far behind in raising the bar.
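One common pattern behind such false-positive reduction is re-scoring rule-generated alerts with a supervised model trained on past analyst dispositions. Below is a minimal sketch assuming scikit-learn, with synthetic data and an illustrative review threshold – not any vendor's actual pipeline:

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
n = 5000
# Hypothetical alert features: amount z-score, counterparty risk,
# account age, prior alerts on this customer.
X = rng.normal(size=(n, 4))
# Imbalanced labels: most rule-based alerts are false positives.
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n) > 2.0).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_tr, y_tr)

# Keep analyst review for higher-probability alerts; auto-close the rest
# with sampling-based QA rather than discarding them outright.
probs = clf.predict_proba(X_te)[:, 1]
for_review = probs >= 0.10  # threshold tuned to the FI's risk appetite
print(f"alerts routed to analysts: {for_review.mean():.0%}")
```

The threshold is the key policy lever: set conservatively, the model trims the analyst queue without silently suppressing genuine red flags.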
All-In-One Compliance Solutions
Many currently available AI solutions market their ability to provide FIs with enterprise-level AML/KYC compliance. However, for the remaining regulations listed above, there are no AI solutions marketed outright for, say, international privacy compliance alone or PCI DSS certification in the way they are for AML/KYC compliance and fraud detection writ large.
The primary reason for this trend, as Thomas Mangine explains in his appearance on the AI in Business podcast, is that the very function of AML/KYC compliance is an information-gathering exercise – and the associated solutions specialize in highlighting the “vital information” that signals a red flag for human compliance professionals.
Because there is always more information to be gathered in a KYC context, it takes the power of an entirely digital platform to focus on such an open-ended task.
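One way to picture that open-endedness: a KYC profile is an accumulating record whose required elements may never all be filled. The sketch below illustrates the framing in Python; the sources, fields, and `KycProfile` class are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class KycProfile:
    customer_id: str
    attributes: dict = field(default_factory=dict)

    # Illustrative minimum; real regimes demand far more.
    REQUIRED = ("identity_verified", "beneficial_owners",
                "sanctions_screen", "source_of_funds")

    def ingest(self, source: str, data: dict) -> None:
        # Later sources enrich but never silently overwrite earlier facts.
        for key, value in data.items():
            self.attributes.setdefault(key, (value, source))

    def gaps(self) -> list[str]:
        return [k for k in self.REQUIRED if k not in self.attributes]

profile = KycProfile("cust-001")
profile.ingest("onboarding_form", {"identity_verified": True})
profile.ingest("registry_lookup", {"beneficial_owners": ["ACME Holdings"]})
print(profile.gaps())  # ['sanctions_screen', 'source_of_funds']
```

A digital platform can keep cycling through those gaps against new data sources indefinitely – exactly the open-ended task human teams struggle to staff.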
A second reason for the separation among AI cybersecurity solutions is that compliance with almost every other form of cybersecurity regulation is often so straightforward for a digital platform that there is little point in offering it as a standalone function. Compliance with these regulations instead tends to be rolled into larger platforms that promise all-in-one automated compliance based on an enterprise’s industry and the regulations associated with it.
AI solutions for banking and financial services tend to offer compliance with the cybersecurity regulations listed above alongside HIPAA and other regulations – as an example, Nightfall.AI offers both PCI and HIPAA compliance in its solution. Other solutions, like UpGuard, provide automated compliance for all the regulations listed above.
You can read more Emerj research about PCI compliance and AI solutions here.
Given the range of options, enterprise leaders considering AI solutions to guarantee their cybersecurity compliance should strategize compliance in two distinct camps:
- One solution for their BSA-related or AML/KYC compliance
- One customizable solution tailored to their specific regulatory requirements outside of AML/KYC
Given that the applicability of individual regulations varies from industry to industry and frequently from business to business, AI leaders should consider starting there when weighing potential in-house solutions and even early AI projects, depending on the size of their enterprise and market circumstances.
Trend 2: Threat Detection and the AI Arms Race in Cyberattacks
Even outside of strict regulatory compliance, the ability of machine learning to detect behavior typical of hackers is well documented. In financial services overall, machine learning-enhanced threat detection – particularly within the context of transaction surveillance – represents the frontline of cybersecurity defenses for FIs.
Yet cybercriminals are not without their own AI weapons of choice for undermining FIs, and machine learning features prominently in their arsenal – especially in carrying out large-scale cyberattacks against global organizations and their commensurately large legacy systems.
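As a concrete illustration of the "behavior typical of hackers" that transaction surveillance watches for, the sketch below implements a simple velocity check – the kind of frontline heuristic that often feeds learned models. The thresholds and the `VelocityMonitor` class are illustrative assumptions:

```python
from collections import deque

WINDOW_SECONDS = 60
MAX_EVENTS = 20  # more than this per account per window looks scripted

class VelocityMonitor:
    def __init__(self):
        self.events: dict[str, deque] = {}

    def observe(self, account: str, ts: float) -> bool:
        """Record an event; return True if the account should be flagged."""
        q = self.events.setdefault(account, deque())
        q.append(ts)
        # Drop events that have aged out of the sliding window.
        while q and ts - q[0] > WINDOW_SECONDS:
            q.popleft()
        return len(q) > MAX_EVENTS

monitor = VelocityMonitor()
# Simulate an automated attack: 30 transfers within a few seconds.
flags = [monitor.observe("acct-42", t * 0.1) for t in range(30)]
print(any(flags))  # True -> escalate for ML-based scoring and human review
```

Machines transact at a tempo humans cannot, which is why even a crude rate check like this catches a surprising share of automated abuse before heavier models weigh in.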
Below is a brief 8-minute interview with ThetaRay CEO Mark Gazit in which he explains how cybercriminals are leveraging artificial intelligence themselves to conduct ever more complicated cyberattacks:
Notably, Mark points out that larger institutions in particular are preferred targets among cybercriminals leveraging AI, because it is easier to hide small-scale cybercrime that can compound into substantial theft and fraud.
After making a comparison to literal bank robberies depicted in pop culture, Gazit points out:
“Today, it’s much more convenient to establish a service somewhere outside the United States and to make an automatic AI-based program run on the server that will hack into bank accounts and steal half a dollar from a bank account. Nobody will complain, especially if you call it I-tune transaction or stock transaction, and then you use it automatically, you do it 20 or 30 million times in a row, so in one month, 20 to 30 million dollars goes to somebody else’s bank account, and then it just disconnect the link, disappear, and a bank will maybe discover it after one year.”
ThetaRay CEO Mark Gazit
While that interview dates back to 2018, Mark’s appearance on the AI in Business podcast in April 2020 revealed a financial services landscape where little had changed.
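The attack Gazit describes – millions of sub-dollar thefts, each invisible on its own – is detectable in aggregate. Below is a minimal sketch of that aggregation in Python with pandas; the data, column names, and thresholds are illustrative:

```python
import pandas as pd

tx = pd.DataFrame({
    "account": ["a1", "a2", "a3", "a4", "a5", "a6"],
    "destination": ["mule-x"] * 5 + ["vendor-y"],
    "amount": [0.50, 0.49, 0.51, 0.50, 0.50, 120.00],
})

# Each tiny debit is innocuous alone, so aggregate by destination.
small = tx[tx["amount"] < 1.00]
by_dest = small.groupby("destination").agg(
    victims=("account", "nunique"),
    total=("amount", "sum"),
)
# Many distinct accounts draining tiny sums to one destination is a red flag.
print(by_dest[by_dest["victims"] >= 5])
```

The shift in unit of analysis – from the individual transaction to the destination's fan-in across victims – is what turns a sub-dollar theft into a visible pattern.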
Unfortunately, banking and financial institutions have limited options in light of the arms-race dynamics of the problem beyond the ‘eternal vigilance’ of staying ahead of cybercriminal AI technologies. The endeavor inevitably involves either continuous education and strategizing across the entire organization, from leadership on down, or comprehensive AI vendor solutions that stay ahead of cybercriminal AI technologies on the institution’s behalf.
However, there are use cases emerging in financial services that show promise in giving institutions a distinct advantage in the very fundamentals of arms race dynamics – at least for the time being.
In a recent interview for Emerj’s AI in Financial Services podcast, AI Leader for PwC Middle East, Scott Nowson, describes leading efforts to implement technologies that examine fraud trends and adjust accordingly to ‘new normals’ as they become apparent in data collection:
Theoretically, solutions could emerge that are capable of examining cybersecurity reporting – both internal data and various external media – in search of new ways criminals are leveraging AI to carry out cyberattacks. These potential solutions could then notify the proper institutional leaders on how to adjust their processes accordingly.
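As a rough illustration of what such a solution might look like, the sketch below counts watchlist terms across threat reporting and flags month-over-month spikes; the watchlist, corpus, and spike threshold are all hypothetical stand-ins for real feeds:

```python
from collections import Counter
import re

WATCHLIST = {"deepfake", "credential stuffing", "prompt injection"}

def term_counts(documents: list[str]) -> Counter:
    """Count watchlist-term mentions across a batch of report texts."""
    counts = Counter()
    for doc in documents:
        text = doc.lower()
        for term in WATCHLIST:
            counts[term] += len(re.findall(re.escape(term), text))
    return counts

last_month = ["routine phishing report...", "one deepfake mention"]
this_month = ["deepfake fraud at a bank", "deepfake voice cloning calls",
              "credential stuffing wave hits fintechs"]

baseline, current = term_counts(last_month), term_counts(this_month)
# Flag terms whose mentions at least doubled against the baseline.
spikes = {t for t in WATCHLIST if current[t] >= 2 * max(baseline[t], 1)}
print(spikes)  # e.g. {'deepfake'} -> notify security leadership
```

A production version would lean on proper NLP over curated threat-intelligence feeds, but the core loop – baseline, compare, alert – would look much the same.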
Trend 3: How ‘Front Door’ Verification Streamlines Cybersecurity Workflows
In the defensive organizational mindset that typifies banking leadership, the entry point – or ‘front door’ – of systems, whether call centers, HR onboarding, or data access, is of particular importance to security professionals.
Traditionally, the security importance of system ‘front doors’ tends to be minimized in large organizations, where departments and data are siloed – leading to multiple checkpoints within a system to verify a participant’s identity and intent.
Throughout financial services – from banking to real estate – being able to verify the participants (or ‘know your customer’ in a strict banking or financial context) from the system ‘front door’ is becoming an increasingly valuable enterprise capability.
These ‘front doors’ are becoming more critical to enterprise leaders, especially in cybersecurity, because AI capabilities like machine learning and data analytics tend to eliminate silos in personnel and data. Compliance and security professionals are better able to act on centralized data sources that put entire customer or employee profiles and relevant information at their fingertips.
To make a nightclub analogy: there’s far less need for bouncers to “work the floor” (or, in the case of banking, for security professionals to spend so much time verifying customers and system participants) if they can vet everyone by checking IDs at the door.
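In software terms, the pattern amounts to verifying once at the entry point and issuing a credential that downstream services trust instead of re-checking identity. Below is a minimal sketch using Python's standard library; the checks, risk threshold, and token format are illustrative assumptions:

```python
import hashlib
import hmac
import time

SECRET = b"rotate-me"  # in practice, a managed key, never a literal

def front_door_verify(user_id: str, id_document_ok: bool,
                      risk_score: float) -> str | None:
    """Run all verification at the entry point; return a token or deny."""
    if not id_document_ok or risk_score > 0.7:
        return None  # denied at the door; no downstream access
    payload = f"{user_id}:{int(time.time())}"
    sig = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return f"{payload}:{sig}"

def downstream_accepts(token: str) -> bool:
    """Downstream services check the signature, not the identity again."""
    payload, _, sig = token.rpartition(":")
    expected = hmac.new(SECRET, payload.encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(sig, expected)

token = front_door_verify("cust-001", id_document_ok=True, risk_score=0.2)
print(token is not None and downstream_accepts(token))  # True
```

The payoff is the same as the bouncer's: internal checkpoints shrink to a cheap signature check, so security effort concentrates where it does the most good.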
The strategy extends well beyond customer interactions and is generally applicable to any process that operates as a funnel and involves outside parties becoming integrated based on trust. Another example in an HR-related context would be verifying potential employees from the top of a recruitment funnel.
Many verification processes across financial services are fundamentally data-reliant – whether in an insurance process, as in claims and underwriting, or in banking, for the sake of avoiding sanctions and reducing false positives in AML compliance. Thus they are also susceptible to cyber-based fraud and other criminal activity.
Christian van Leeuwen, Chief Strategy Officer of FRISS, recently appeared on Emerj’s AI in Business podcast to describe one such insurance workflow for what his company calls “trust automation,” which similarly relies on a front-door verification mindset. He talks about the importance of the model in streamlining workflows below:
These workflows are also rules-based systems that stand to be sanitized by similar built-in ‘front door’ verification – a kind only achievable through an organization-wide culture that promotes the transformational capacities of AI technology.
As a policy backed by AI-enhanced procedures that guarantee trust in a system’s participants, front-door verification is slowly becoming the gateway to ensuring security in many financial services contexts while minimizing human workflows – not least in cybersecurity, especially in online portals.
Given the promise of these approaches, it would behoove financial services leaders to view cybersecurity funnels and workflows from the ‘front door’ – whether in pursuing AI solutions on the market or in developing their own internal processes and early AI projects in cybersecurity.