AI Regulation and Risk Management in 2024 – with Michael Berger of Munich Re

Sharon Moran

Sharon was previously a Functional and Industry Analytics Senior Analyst at Accenture. She also has prior experience as a machine learning engineer customizing OCR models for a learning platform in the EdTech space. Currently, she focuses on the data pre-processing stage of the ML pipeline for large language models.


As AI adoption grows, so do the associated risks, including errors, biases, and unexpected outcomes. As AI becomes more integral to core business operations, managing these operational and financial risks becomes paramount. One way to offset that risk is to insure AI models, though doing so is a complex process.

Despite the complexity involved in insuring AI models, the demand is driven by real-world needs. According to a report by the World Economic Forum, in a survey of nearly 1,500 professionals, respondents identified AI as their organization’s most significant technology risk.

AI models are subject to numerous risks, including unfair bias and hallucinations. The insurance industry’s reliance on AI models in underwriting and claims processing can lead to discriminatory practices. Hallucinations occur when AI models generate statements that aren’t backed by their training data. Inaccurate and false information undermines customer trust and leads to financial losses for insurance companies if an AI model overestimates payments for claims. For this reason, an AI system used in insurance is an example of a high-risk AI system.

Munich Re, founded in 1890, is one of the world’s leading reinsurers. They provide primary insurance, insurance-related risk solutions, and solutions to artificial intelligence providers.

Emerj Senior Editor Matthew DeMello recently sat down with Michael Berger, Head of Insure AI at Munich Re, on the ‘AI in Business’ Podcast to discuss AI risk management and governance. This discussion serves as an extension of prior conversations with Berger in 2022 and expands on the insights he provided at that time.

The following article will focus on two key takeaways from their conversation:

  • Managing uncertainties and risk in AI: Using AI governance as an operational risk management framework for managing hallucinations, probabilistic errors, and discrimination by recognizing the role of insurance as a tool for risk transfer and mitigation.
  • Frameworks for understanding AI insurance premiums: Considering the influence of tolerance, stability, and severity on insurance premiums for AI systems and products.

Listen to the full episode below:

Guest: Michael Berger, Head of Insure AI at Munich Re

Expertise: insurance, technology, data management, technology-based risk assessment

Brief Recognition: Michael has spent the last 16 years at Munich Re, helping to mold its Insure AI operations. He holds a Master’s degree in Information and Data Science from UC Berkeley, another in Business Administration from the Bundeswehr University Munich, and a PhD in Finance.

Managing Uncertainties in AI

Berger opens the podcast by welcoming OpenAI’s decision to make ChatGPT available to the general public: doing so, he says, lets everyone experience the technology’s potential and limitations, not just data scientists. He identifies risks inherent to LLMs, including hallucinations, defamation, and discrimination.

According to Berger, it’s important to note that both LLMs like ChatGPT and more traditional AI models have their uses. Still, they also share similar risks, including the risks of error, hallucination, and discrimination.

Berger thinks the public’s broad exposure to ChatGPT and related AI tools has led to more grounded conversations about the use cases where AI models, and more specifically, generative AI (GenAI) models, can add value in an organization, as well as downside risks and how to manage them. He says this gave rise to the next phase of the discussion around AI governance and believes that AI governance is essentially the operational risk management framework for AI.

Berger then mentions that companies, legislators, and even the public have recognized the need to identify risks associated with GenAI. He explains that putting guardrails in place is one of the ways to address these risks. According to Berger, guardrails are other models that filter out hallucinations, misleading statements, or defamatory statements that an LLM produces.
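
Berger describes guardrails only at a conceptual level. As a rough illustration of that pattern, the sketch below passes an LLM’s draft output through screening models before it reaches users; all function names, scores, and the threshold are invented placeholders, not any vendor’s actual API.

```python
# Illustrative guardrail pattern: secondary models screen an LLM's output
# before it is returned to the user. All names, scores, and thresholds
# below are hypothetical placeholders for the concept Berger describes.

def generate_draft(prompt: str) -> str:
    """Stand-in for a call to an LLM; returns a draft answer."""
    return "Draft answer for: " + prompt

def hallucination_score(text: str) -> float:
    """Stand-in for a guardrail model that scores factual risk (0 = safe, 1 = risky)."""
    return 0.2  # placeholder score

def defamation_score(text: str) -> float:
    """Stand-in for a guardrail model that scores defamation risk."""
    return 0.1  # placeholder score

def guarded_answer(prompt: str, max_risk: float = 0.3) -> str:
    draft = generate_draft(prompt)
    # Filter the draft through the guardrail models; block it if any
    # risk score exceeds the tolerance the business has defined.
    if max(hallucination_score(draft), defamation_score(draft)) > max_risk:
        return "I'm not confident in that answer; escalating to a human reviewer."
    return draft

print(guarded_answer("Summarize the policy terms for claim #1234."))
```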

He acknowledges the need to manage these risks and to implement tools that address them, a development he considers welcome.

Even with robust AI governance and powerful guardrails in place, it’s impossible to eliminate the possibility of errors, discrimination, or new risks emerging in an AI model. As a result, Berger sees the resulting operational discussion centering on:

  • Where is AI genuinely adding value in these cases?
  • Do we, as a company, want to automate specific tasks and processes that rely on GenAI models after considering the potential downside?

Companies need to weigh whether they want to accept the risk of a particular use case against its potential ROI. Berger notes that companies should also consider other forms of risk management and risk transfer, such as insurance for AI, to reduce or eliminate downside risk while still capturing some of the upside potential of GenAI models.

When asked how business leaders should look at these different risks, and how to develop criteria for assessing how the risks of GenAI models differ from those of deterministic models, Berger offers solid guidance, instructing leaders to recognize that any AI model, including a GenAI model, is a probabilistic model.

When relying on or utilizing the output of a probabilistic model for automatic decision-making, leaders have to trust a probabilistic system to make those decisions. Berger notes two considerations to keep in mind when trusting probabilistic systems:

  • There is always a probability of error
  • Changes in incoming data might not be captured correctly in retraining cycles (a simple drift check is sketched after this list)
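
The second consideration, incoming data changing faster than retraining cycles can capture, is often monitored with simple drift checks. Below is a minimal sketch of one such check; the feature, threshold, and figures are invented for illustration and are not something Berger specifies.

```python
# Minimal data-drift check: compare the mean of an incoming feature
# against the training-time mean. Threshold and data are illustrative.

import statistics

def drifted(training_values: list[float], live_values: list[float],
            max_shift: float = 0.1) -> bool:
    """Flag drift if the live mean moves more than max_shift (relative) from the training mean."""
    train_mean = statistics.mean(training_values)
    live_mean = statistics.mean(live_values)
    return abs(live_mean - train_mean) > max_shift * abs(train_mean)

training_claims = [1200.0, 950.0, 1100.0, 1300.0]  # claim amounts seen at training time
live_claims = [1800.0, 2100.0, 1950.0, 2200.0]     # amounts arriving in production

print(drifted(training_claims, live_claims))  # True: the model may need retraining
```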

To hone their perspective on the risks inherent in probabilistic models, Berger believes business leaders need to think about tolerance bands and define acceptable tolerance levels for things like hallucination rates or, in the case of computer vision, mistakes in object detection.

He also provides an example of tolerance rates related to quality control in the manufacturing space. In that case, he says, business leaders need to consider how much waste the business can absorb in terms of non-defective parts being incorrectly labeled as defective.

Having clarity about tolerance bands and safety is part of the rational risk-taking discussion, according to Berger. That clarity helps companies determine whether to pursue the AI use cases in question, or to earmark additional capital for further data science work if the model doesn’t perform as expected within the defined safety parameters.
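
To make the tolerance-band idea concrete with Berger’s manufacturing example, the sketch below checks whether an observed false-reject rate stays inside a band the business has chosen. The 2% tolerance and the counts are invented numbers, not figures from the episode.

```python
# Illustrative tolerance-band check for a quality-control model.
# The tolerance level and the observed counts are made-up numbers.

ACCEPTABLE_FALSE_REJECT_RATE = 0.02  # business decides it can absorb 2% wasted good parts

def within_tolerance(false_rejects: int, total_good_parts: int,
                     tolerance: float = ACCEPTABLE_FALSE_REJECT_RATE) -> bool:
    """Return True if the share of good parts mislabeled as defective stays inside the band."""
    observed_rate = false_rejects / total_good_parts
    return observed_rate <= tolerance

# Example: 180 good parts flagged as defective out of 10,000 inspected good parts.
print(within_tolerance(false_rejects=180, total_good_parts=10_000))  # True (1.8% <= 2%)
```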

When asked whether tolerance rates are purely a matter of business goals, independent of the technology, Berger elaborates: “I think it’s a combination between the technology and the business goals.”

According to Berger, companies should consider investing more in data gathering if data limitations are what keep model performance from staying within the established tolerance and safety bands.

Understanding AI Insurance Premiums

Next, Berger provides insight into what a checklist would look like for business leaders and other stakeholders when assessing model risks. Berger explains that some good risk management standards are emerging when it comes to setting up a proper AI governance process and risk management process for AI adoption, including GenAI adoption. He identifies the National Institute of Standards and Technology (NIST) standards as one example.

Berger says that risk transfer and risk mitigation now play a much more critical role in the early stages of AI development, a trend he welcomes. He mentions that companies have approached Munich Re, as a provider of AI insurance, to ask what techniques they should employ in their data science process to build insurable models.

Berger offers insight when asked what the conversation around premiums looks like for AI models that are leveraged for business goals rather than life-or-death scenarios. He identifies three elements that influence insurance premiums, illustrated in the rough sketch after the list:

  • Tolerance bands: The wider the tolerance band, the less likely the model is to fall outside it, and the lower the resulting risk.
  • Reliability: The more stable the model, the more reliable it is, which leads to lower premiums.
  • Severity: What does the financial loss look like when models fall outside the tolerance bands?
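
As a back-of-the-envelope illustration only, and not Munich Re’s actual pricing approach, these three factors can be thought of as combining into an expected-loss figure: roughly, the probability of breaching the tolerance band times the expected severity of a breach, scaled by a loading that reflects uncertainty about the model’s stability. All numbers below are invented.

```python
# Toy expected-loss view of the three premium drivers Berger lists.
# The figures and the loading formula are illustrative assumptions,
# not an actual actuarial model.

def indicative_premium(breach_probability: float,
                       expected_severity: float,
                       stability_loading: float) -> float:
    """
    breach_probability: chance per policy period that the model falls outside its tolerance band
    expected_severity:  expected financial loss when a breach happens
    stability_loading:  multiplier > 1 reflecting uncertainty about how stable the model is
    """
    expected_loss = breach_probability * expected_severity
    return expected_loss * stability_loading

# Example: a 5% breach chance, $200,000 expected loss per breach,
# and a 1.3x loading for an unproven, less stable model.
print(indicative_premium(0.05, 200_000, 1.3))  # 13000.0
```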

Berger offers the example of a model used to detect fraud in banking transactions, where each wrong prediction has an associated cost. The question for businesses to consider is whether those associated costs are lower than litigation costs.

Berger explains that insurance premiums serve as a signal of the cost of risk, which helps companies:

  • Bake the cost of risk into return-on-investment considerations (see the sketch after this list).
  • Decide whether to pursue alternative data science use cases, especially where an alternative model with a different architecture and data science setup offers similar value with a lower cost of risk and lower insurance premiums.
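
As a simple illustration of the first point, the sketch below treats the premium as part of the cost base when comparing two hypothetical use cases, echoing the second point about alternative models with lower costs of risk. All figures are invented.

```python
# Toy comparison of two candidate AI use cases once the insurance
# premium (as a proxy for the cost of risk) is included. The numbers
# are invented for illustration only.

def risk_adjusted_roi(annual_benefit: float, operating_cost: float,
                      annual_premium: float, investment: float) -> float:
    """Simple ROI with the insurance premium treated as part of the cost base."""
    return (annual_benefit - operating_cost - annual_premium) / investment

# Model A: higher benefit, but riskier and therefore a higher premium.
roi_a = risk_adjusted_roi(annual_benefit=500_000, operating_cost=150_000,
                          annual_premium=80_000, investment=1_000_000)
# Model B: slightly lower benefit, but a more stable architecture and lower premium.
roi_b = risk_adjusted_roi(annual_benefit=460_000, operating_cost=150_000,
                          annual_premium=20_000, investment=1_000_000)

print(f"Model A ROI: {roi_a:.1%}, Model B ROI: {roi_b:.1%}")  # Model B comes out ahead
```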

For the companies Munich Re insures, responsibility for a model’s risk is shared between the enterprise and its developers. Berger highlights the need for both parties to work together to deepen their understanding of model risk and to manage it effectively, bringing it down to a level acceptable to all affected stakeholders.
