As new generative AI use cases continue to reshape the enterprise landscape across the global economy, many sectors are taking on significant risk to keep pace with the adoption hype. Deloitte’s “State of Generative AI in the Enterprise 2024” report found that 79% of respondents expect generative AI to transform their organizations within three years. However, companies must approach these investments cautiously, with a focus on responsible implementation.
There is no shortage of cautionary tales about the consequences of rushing AI projects to market in ways that fundamentally undermine the undertaking. Recent examples include Google being fined a record $270 million by French regulators over its handling of news publishers’ content, and investment firms Delphia Inc. and Global Predictions Inc. being fined a combined $400,000 by the SEC for making misleading claims about their AI capabilities.
The risk is felt acutely in the insurance sector, especially after a lawsuit claiming that State Farm’s data systems discriminated against its African American customers won class-action status last year.
To avoid such penalties and mitigate the risks of slapdash AI adoption, insurance leaders must proactively develop and implement robust frameworks for responsible AI governance while experimenting with new and riskier generative AI (GenAI)-based systems.
Emerj CEO and Head of Research Daniel Faggella recently spoke with John Almasan of TIAA on the ‘AI in Business’ podcast to discuss insurance-specific challenges in the responsible adoption of GenAI capabilities.
In the following analysis of their conversation, we examine two key insights:
- Strategic integration of GenAI across insurance operations: A survey of use cases for leveraging GenAI as a versatile solution across multiple facets of insurance operations, from client services to cybersecurity.
- Strategic foundation for tangible ROI: Strategies for formulating responsible AI policies that build a nuanced understanding of the business implications of these technologies, then launching small, strategic projects to showcase generative AI’s efficacy.
Listen to the full episode below:
Guest: Dr. John Rares Almasan, Senior Managing Director, Head of ClienTech and AI, TIAA.
Expertise: Extensive multi-cloud data engineering, machine learning, and data science.
Brief Recognition: John is Senior Managing Director at TIAA. He is also a member of Arizona State University’s Board of Advisors. He holds a PhD in Information Systems from Columbia Southern University and a master’s degree in Computer Science from the Technical University of Cluj-Napoca.
Strategic Integration of Generative AI Across Insurance Operations
John begins his podcast appearance by discussing various applications of GenAI in operations typical of insurance firms. He focuses on four specific use cases:
Data analysis and client services:
- Functionality: GenAI can scan and correlate large amounts of data.
- Benefits: It can summarize data quickly, extracting detailed information from unstructured, semi-structured, or structured data.
- Application: Facilitating better client relationships by providing timely information, reducing waiting times, and enhancing client satisfaction.
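For a concrete sense of how such summarization can work, the following is a minimal sketch of calling a hosted LLM to condense an unstructured client document for a service representative. It is an illustration only, not a description of TIAA’s systems: the OpenAI Python client, the model name, and the prompt are all assumptions chosen for the example.

```python
# Illustrative only: condensing an unstructured client document with an LLM.
# The OpenAI client, model name, and prompt below are assumptions for this
# sketch, not details of any insurer's production system.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def summarize_client_document(text: str) -> str:
    """Return a short, service-rep-friendly summary of one document."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model choice
        messages=[
            {
                "role": "system",
                "content": (
                    "Summarize the client's document in three bullet points, "
                    "flagging any open requests, dates, or deadlines."
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    return response.choices[0].message.content


# Hypothetical claim email used only to exercise the sketch.
claim_email = (
    "Hi, I submitted a water-damage claim (#48213) two weeks ago and have "
    "not heard back. I need the adjuster's report before my contractor "
    "starts repairs on June 3rd. Please call me at the number on file."
)
print(summarize_client_document(claim_email))
```

In a real deployment, a pattern like this would sit behind retrieval and access controls so that representatives only see summaries of documents they are authorized to view.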
Analyzing patterns:
- Functionality: Analyzing patterns and generating recommendations for employees in client services.
- Benefits: Employees can use this information to customize and personalize answers, providing more appealing and valuable information to clients.
- Outcome: A hyper-personalized solution that helps the company stay closer to clients’ needs and delight them.
Personalized content marketing:
- Functionality: Generating content specific to different target audiences, considering the preferences of older and younger generations.
- Use case: Producing content that shows older members of the labor force how to plan for retirement and content that educates younger generations on the importance of planning for retirement early.
- Outcome: Generating content tailored to the sensibilities of each user, regardless of age or background.
Fraud detection and cybersecurity:
- Application: Generating synthetic data, or data used for simulating real-world events without real-world consequences.
- Use case: In fraud detection and cybersecurity, AI-generated synthetic data can simulate worst-case scenarios and help security professionals better understand the potential behavior and motivations of ‘bad actors.’
- Outcome: Identify anomalies and prevent activities that could harm the company and its clients.
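To make the synthetic-data idea concrete, here is a minimal sketch of one way simulated transactions could be used to stress-test an anomaly detector. It is a toy illustration under stated assumptions: NumPy sampling stands in for a generative model, and scikit-learn’s IsolationForest stands in for whatever detection tooling a security team actually runs.

```python
# Minimal sketch: using synthetic transaction data to exercise an anomaly
# detector. In practice a generative model would produce far richer synthetic
# records; simple NumPy sampling is used here purely for illustration.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Synthetic "normal" transactions: amount (USD) and hour of day.
normal = np.column_stack([
    rng.lognormal(mean=4.0, sigma=0.5, size=1000),  # typical amounts
    rng.normal(loc=14, scale=3, size=1000),         # daytime activity
])

# Injected worst-case scenarios: very large amounts at unusual hours.
fraudulent = np.column_stack([
    rng.lognormal(mean=8.0, sigma=0.3, size=20),
    rng.normal(loc=3, scale=1, size=20),
])

detector = IsolationForest(contamination=0.02, random_state=0)
detector.fit(normal)

# predict() returns -1 for anomalies and 1 for inliers.
flags = detector.predict(np.vstack([normal[:5], fraudulent[:5]]))
print(flags)  # expect mostly 1s for normal rows, -1s for the injected fraud
```

The value of the synthetic records in a sketch like this is that the team can rehearse worst-case behavior and tune detection thresholds without exposing real client data or waiting for real incidents.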
“These are just a few of the examples that we can leverage in engaging with the client, reshaping the design, the way we interact and engage with our clients to chatbots directly or indirectly, but always under the human support and human validation; also on protecting the company, protecting our clients, and protecting overall our business.”
— Dr. John Rares Almasan, Senior Managing Director, Head of ClienTech and AI, TIAA
Though GenAI is still early in its development, John indicates an awareness of the ongoing learning process around both the potential benefits and the inherent risks of the technology. By expressing a commitment to approaching GenAI with caution, he signals a responsible and measured approach to adoption.
Above all, John emphasizes keeping a human in the loop: decisions and outputs from GenAI systems should remain subject to validation and guidance from human experts.
He also discusses the versatility of GenAI in content creation, emphasizing that it can produce a wide array of content types, from crafting social media posts on platforms like Twitter and Facebook to tailoring content for specific events and client locations.
John concludes by emphasizing that the technology allows for a personalized and audience-specific communication strategy, ensuring the generated content is relevant and resonant. Whether addressing students in a hackathon event or employees on the brink of retirement, the content can be adapted to suit the characteristics and needs of the target audience.
Strategic Foundation for Tangible ROI
John further addresses the role of chatbots and virtual assistants in establishing connections and providing immediate information to clients.
He emphasizes that the younger generation is keen on connecting through virtual environments, and that TIAA is developing different LLM-driven experiences to capitalize on this interest. He highlights virtual assistants and chatbots within the Metaverse as a way to enhance connectivity and engagement:
“For example, we just developed a TI – our Metaverse. And it’s our IT ours. Which is, I would say, the easiest way to interact with the younger generation. They wanted to become closer to this topic by leveraging virtual environments and virtual assistants that can actually use gamification and generative AI to generate scenarios and synthetic data that enable better engagement. [These systems] also provide generative AI with appropriate content as well as conversations to maintain the focus of the audience.”
– Dr. John Rares Almasan, Senior Managing Director, Head of ClienTech and AI, TIAA
The conversation then turns to the considerable challenges surrounding the implementation of GenAI.
John highlights a crucial shift in the technology industry, particularly among its more prominent players. He points out that these companies have recognized the need to move beyond traditional AI standards, standard operating procedures (SOPs), and controls embedded in policies covering data governance, IT risk, and privacy.
Instead, Dr. Almasan notes, the focus is now on developing responsible AI policies. The shift is crucial in the new era of GenAI, where traditional approaches are becoming increasingly challenging to maintain from both a public relations and an ethical perspective:
“That’s why these large companies like Google and Microsoft all developed responsible policies to reference the audit policies while keeping the core with AI specialists. That’s, I think, the most important thing to do first, because the responsible policy is going to enable them to understand from a business perspective what is AI and what is not AI. And also in the AI space: what is traditional AI, which has more well-defined solutions – and what is generative AI, which is in a very green space, very undefined, and requires significantly more effort [to maintain safely].”
– Dr. John Rares Almasan, Senior Managing Director, Head of ClienTech and AI, TIAA
Once responsible AI policies are established, John recommends that business leaders adopt a strategic approach by starting small with their first projects. These projects should emerge from collaborative brainstorming sessions with leaders from all departments and should touch business-critical functions. The approach Dr. Almasan recommends helps demonstrate the power of GenAI and identify ‘low-hanging fruit’: projects that can be delivered quickly with a significant impact on return on investment.
He stresses the importance of having a foundation that provides guardrails and principles for delivering GenAI products. To that end, collaboration among departments including legal, compliance, risk, audit, cyber, and cloud teams is crucial for orchestrating efforts toward a common goal: delivering a reliable generative AI product that empowers associates and employees while driving a tangible return on investment.