Quantifying AI Risk – with Michael Berger, Head of Insure AI at Munich Re


As AI continues to reshape industries, calculating the ROI for AI initiatives remains a complex challenge. According to a study by MIT Sloan Management Review, only 10% of organizations report significant financial benefits from their AI investments despite the widespread adoption of AI tools and platforms.

The statistic underscores the uncertainty many businesses face when quantifying the value of AI, which promises increased efficiency, innovation, and competitive advantage but also presents considerable risks and unknowns.

In a recent episode of the ‘AI in Business’ podcast, Emerj CEO and Head of Research Daniel Faggella interviewed Michael Berger, the Head of AI Insurance at Munich Re, a global leader in insurance and risk management. With his unique expertise, Berger delves into the critical questions business leaders should ask when evaluating AI investments, with a focus on understanding and quantifying risk.

The discussion highlights that achieving positive ROI in AI is not just about choosing the right technology or having a sound data strategy—it’s equally about understanding the inherent risks that could undermine the value proposition.

The following subsections summarize two critical insights for insurance and other financial leaders:

  • Quantifying AI risks with predictive performance metrics: Successfully evaluating an AI system’s accuracy involves not only assessing the reliability of outputs but also understanding the variability of those predictions.
  • Incorporating uncertainty into AI models: AI models should provide a range of potential outcomes, not just single-point predictions.

Guest: Michael Berger, Head of Insure AI at Munich Re

Expertise: Insurance, technology, data management, technology-based risk assessment.

Brief Recognition: Michael has spent the last 15 years at Munich Re, helping to build its Insure AI operations. He holds a Master’s degree in Information and Data Science from UC Berkeley, a Master’s in Business Administration from Bundeswehr University Munich, and a PhD in Finance.

Quantifying AI Risks with Predictive Performance Metrics

As businesses increasingly integrate AI into their operations, assessing both the potential gains and risks becomes paramount. Michael Berger, with his extensive background in finance, data science, and AI insurance, offers valuable insights into how enterprises can measure and manage these risks effectively.

AI projects, as Berger points out, should be treated like any other significant investment. Companies invest in data acquisition, model development, cloud infrastructure, and ongoing maintenance. 

However, unlike traditional investments, AI projects bring unique risks that can dramatically impact ROI. According to Berger, these risks can be broadly categorized into predictive performance, fairness concerns, operational volatility, and security threats.

Berger cites the following principal risk considerations for AI projects:

Predictive Performance:

  • The first and most fundamental risk is whether the AI model performs as expected. Evaluating this means assessing both the accuracy of the model’s predictions and the variability in those predictions.
  • For instance, a model deployed to predict equipment failures in a manufacturing plant needs to be highly reliable. If the model’s predictions are consistently off, it could lead to costly downtimes or defective products, negatively impacting ROI. Berger advises businesses to go beyond simple performance metrics like accuracy and adopt more robust testing regimes that account for the model’s uncertainty and potential error margins, as in the sketch after this list.
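
As a rough illustration of testing beyond a single accuracy number, here is a minimal Python sketch that bootstraps a holdout set to report accuracy together with an interval around it. The labels, predictions, and resampling settings are hypothetical assumptions for the sketch, not figures from Berger's discussion.

```python
import numpy as np
from sklearn.metrics import accuracy_score

def bootstrap_accuracy(y_true, y_pred, n_resamples=2000, seed=0):
    """Estimate accuracy and its variability via bootstrap resampling."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    scores = []
    for _ in range(n_resamples):
        idx = rng.integers(0, n, size=n)  # resample the holdout set with replacement
        scores.append(accuracy_score(y_true[idx], y_pred[idx]))
    scores = np.asarray(scores)
    # Report the point estimate alongside a 90% interval, not accuracy alone.
    return scores.mean(), np.percentile(scores, [5, 95])

# Hypothetical holdout labels and model predictions.
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 1])
y_pred = np.array([1, 0, 1, 0, 0, 1, 0, 1, 1, 1])
mean_acc, (lo, hi) = bootstrap_accuracy(y_true, y_pred)
print(f"accuracy ~ {mean_acc:.2f}, 90% bootstrap interval [{lo:.2f}, {hi:.2f}]")
```

A wide bootstrap interval is itself a risk signal: it says the measured accuracy may not hold up in production, which is exactly the variability Berger cautions against ignoring.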

Fairness and Bias:

  • While fairness may not be relevant in every use case, it is critical in consumer-facing applications, particularly those involving credit scoring, hiring, or insurance. AI models, if unchecked, can perpetuate or even exacerbate existing biases, leading to discriminatory outcomes.
  • Unchecked AI modeling not only poses ethical challenges but can also result in legal liabilities and reputational damage. Berger suggests that companies incorporate fairness considerations early in the development process to mitigate such risks; a simple screening sketch follows this list.
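
One simple way to make fairness checkable early in development is to screen model decisions with a metric such as the demographic parity gap. This is a hedged illustration, not Berger's own method; the decisions, groups, and choice of metric below are assumptions for the sketch.

```python
import numpy as np

def demographic_parity_difference(decisions, group):
    """Gap in favorable-outcome rates between two groups (0 = parity)."""
    decisions, group = np.asarray(decisions), np.asarray(group)
    rate_a = decisions[group == 0].mean()  # approval rate for group 0
    rate_b = decisions[group == 1].mean()  # approval rate for group 1
    return abs(rate_a - rate_b)

# Hypothetical credit-approval decisions and a binary protected attribute.
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
group     = [0, 0, 0, 0, 1, 1, 1, 1]
gap = demographic_parity_difference(decisions, group)
print(f"demographic parity gap: {gap:.2f}")  # large gaps warrant investigation
```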

Operational and Environmental Considerations:

  • In environments where conditions change frequently, models need regular retraining to stay relevant. Frequent retraining raises operational costs and may introduce sustainability challenges, especially when large-scale AI models consume significant computational resources.
  • For organizations with environmental targets, the carbon footprint associated with continuous model retraining could be a hidden cost that undermines ROI.

Cybersecurity Risks:

  • Once an AI system is in production, it becomes a potential target for cyberattacks. A compromised system could not only lead to operational disruptions but also expose sensitive data, leading to financial and reputational harm. Berger underscores the importance of integrating strong cybersecurity measures into AI deployments to protect against such threats.

Incorporating Uncertainty into AI Models

For business leaders, simply identifying risks is not enough—they must also quantify these risks to determine whether the potential benefits outweigh them. Berger outlines a systematic approach to achieve this:

Scenario Analysis and Simulation:

  • Companies should treat AI projects like any other capital investment, using tools like net present value (NPV) analysis. By simulating different scenarios—such as the model’s performance under varying conditions or the likelihood of legal liabilities—businesses can better understand the potential range of outcomes.
  • Collaborating with data science teams to estimate probabilities for different failure points allows for a more nuanced risk assessment, as in the simulation sketch after this list.
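
To make the NPV-plus-simulation idea concrete, here is a minimal Monte Carlo sketch. The upfront cost, benefit range, liability probability, and discount rate are all illustrative assumptions, not figures from the episode.

```python
import numpy as np

def simulate_ai_npv(n_scenarios=10_000, discount_rate=0.08, horizon=5, seed=1):
    """Monte Carlo NPV for an AI project under uncertain performance and liability."""
    rng = np.random.default_rng(seed)
    upfront_cost = 2.0  # $M: data, model development, infrastructure (assumed)
    npvs = np.empty(n_scenarios)
    for i in range(n_scenarios):
        # Annual benefit depends on realized model performance (assumed range).
        annual_benefit = rng.uniform(0.3, 1.2)  # $M per year
        # Small assumed chance each year of a model-failure liability event.
        liabilities = (rng.random(horizon) < 0.05) * rng.uniform(0.5, 3.0, size=horizon)
        cash_flows = annual_benefit - liabilities
        years = np.arange(1, horizon + 1)
        npvs[i] = np.sum(cash_flows / (1 + discount_rate) ** years) - upfront_cost
    return npvs

npvs = simulate_ai_npv()
print(f"median NPV ${np.median(npvs):.2f}M, P(NPV < 0) = {(npvs < 0).mean():.1%}")
```

The output of such a simulation, a distribution of NPVs rather than a single figure, gives a risk team something actionable: the probability that the project destroys value, not just its expected return.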

Embracing Uncertainty in AI Predictions:

  • One of the most promising developments in AI risk management is the ability to quantify the uncertainty in model predictions. Berger notes that traditional AI models often provide a single-point estimate (e.g., a house worth $1 million), which can be misleading. New approaches, however, allow models to output prediction intervals (e.g., the house is worth between $800,000 and $1.2 million with 90% confidence).
  • These intervals provide decision-makers with a clearer picture of how reliable the model’s output is, enabling more informed choices about whether to rely on the AI system or seek human judgment; a minimal interval-construction sketch follows this list.
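
Below is a minimal sketch of one standard way to produce such intervals, split conformal prediction: fit a point-prediction model, then use held-out calibration residuals to set the interval width. The data, model choice, and coverage level are assumptions for illustration.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

# Hypothetical house-price data: one feature (square footage), prices in $k.
rng = np.random.default_rng(2)
X = rng.uniform(50, 400, size=(600, 1))
y = 3.0 * X[:, 0] + rng.normal(0, 40, size=600)

# Split: train the point-prediction model, then calibrate on held-out data.
X_train, y_train = X[:400], y[:400]
X_cal, y_cal = X[400:], y[400:]
model = GradientBoostingRegressor().fit(X_train, y_train)

# Split conformal: the 90th-percentile absolute residual on calibration data
# becomes the half-width of an approximately 90% prediction interval.
residuals = np.abs(y_cal - model.predict(X_cal))
half_width = np.quantile(residuals, 0.9)

x_new = np.array([[250.0]])
point = model.predict(x_new)[0]
print(f"point estimate ${point:.0f}k, "
      f"~90% interval [${point - half_width:.0f}k, ${point + half_width:.0f}k]")
```

The appeal of this family of methods is that the interval wraps around an existing point-prediction model, so the single-point estimate Berger describes can be upgraded without rebuilding the model itself.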

Building Trust in AI Systems:

  • For AI to be widely adopted in critical applications—such as healthcare, finance, or autonomous vehicles—stakeholders need confidence in the technology. The shift towards richer, more transparent outputs, like prediction intervals, can help build that trust.
  • When businesses understand the limitations and confidence levels of AI systems, they are better equipped to make decisions that balance risk and reward.

As AI becomes embedded in more business processes, Berger advocates for a paradigm shift in how organizations approach AI risk. The future, he suggests, lies in adopting methods that assess not only whether an AI model is accurate but also how confident we can be in its predictions.

Such a shift could lead to more responsible AI adoption, where decisions are based on a comprehensive understanding of both the potential rewards and the risks involved.

AI applications, particularly in sensitive sectors like healthcare, must incorporate this new standard of risk-aware decision-making. For instance, in diagnosing medical conditions from imaging scans, AI systems should present their predictions along with a confidence interval. If the interval suggests high uncertainty, the decision could be deferred to human experts, ensuring that critical decisions are not solely dependent on AI.
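
As a hedged sketch of that deferral logic, the routine below takes a model's prediction interval and escalates to a human expert when the interval is too wide to act on. The data structure, threshold, and example values are hypothetical.

```python
from dataclasses import dataclass

@dataclass
class IntervalPrediction:
    point: float   # model's point estimate (e.g., probability of a condition)
    lower: float   # lower bound of the prediction interval
    upper: float   # upper bound of the prediction interval

def route_decision(pred: IntervalPrediction, max_width: float = 0.2) -> str:
    """Defer to a human expert when the interval is too wide to act on."""
    width = pred.upper - pred.lower
    if width > max_width:
        return "defer to human expert"  # uncertainty too high for automation
    return "accept model prediction"

# Hypothetical outputs from an imaging model with calibrated intervals.
print(route_decision(IntervalPrediction(0.92, 0.88, 0.95)))  # narrow -> accept
print(route_decision(IntervalPrediction(0.55, 0.30, 0.80)))  # wide -> defer
```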

As businesses continue to scale their AI initiatives, the need for robust risk management frameworks will only grow. Leaders who integrate these considerations early in their AI journey are more likely to succeed in unlocking the true value of AI—without falling victim to the pitfalls of unanticipated risks.
