Racing to adopt AI, organizations face a critical challenge: how to drive growth and efficiency without compromising compliance or exposing sensitive data. From retail customer acquisition to high-stakes compliance operations, businesses must carefully balance ambition with oversight.
The National Institute of Standards and Technology (NIST), part of the US Department of Commerce, developed the AI Risk Management Framework (AI RMF), an official U.S. government framework for managing the risks AI poses to individuals, organizations, and society. The framework stresses that AI systems should be valid, reliable, resilient, and trustworthy, and that organizations must implement governance, continuous monitoring, and controls to manage AI risks throughout the lifecycle.
Governance, as NIST emphasizes, should be proactive, integrated across the AI lifecycle, and tailored to the domain and the organization's risk tolerance. The framework also calls for clear roles, continuous monitoring, and human-in-the-loop controls to ensure AI delivers value safely.
Emerj Editorial Director Matthew DeMello sat down with Naveen Kumar, Head of Insider Risk, Analytics, and Detection at TD Bank, to examine how organizations can effectively deploy AI tools, balance innovation with governance, and measure real business impact.
This article analyzes two core insights for successful AI adoption:
- Match AI risk appetite to the business domain: Deploy AI aggressively in retail use cases focused on growth, but conservatively in compliance contexts where accuracy and oversight matter most.
- Implement stepwise data classification to reduce AI risk: Label data as safe, sensitive, or critical, and keep critical data out of initial AI iterations to manage risk while building usefulness.
Listen to the full episode below:
Guest: Naveen Kumar, Head of Insider Risk, Analytics, and Detection, TD Bank
Expertise: Regulatory Compliance, Fraud and Threat Detection
Brief Recognition: Naveen has over 16 years of experience in AML, insider risk, fraud, and sanctions. He previously worked with PwC and Stellaris Health Network. He holds a Master of Science in data modeling from the Rochester Institute of Technology.
Match AI Risk Appetite to the Business Domain
Naveen opens the conversation with a pointed observation about hallucinations in AI models. He says, “hallucinations could go away if you could provide real context in your prompt.”
He is clear that the goal is not artificial general intelligence, but purpose-built AI designed for specific use cases within an organization. In his view, this should be role-based. When someone prompts the system, access is limited by function: HR sees HR information, investigators see flagged employees, and finance data stays off-limits to anyone who has no business with it.
From the model perspective, Naveen explains that hallucinations happen because AI systems are under pressure to provide an answer: the model is expected to respond rather than simply say it does not know. That expectation, he suggests, is a core reason hallucinations occur.
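Naveen's prescription maps onto retrieval-grounded prompting. The sketch below is a minimal illustration of the two levers he describes, supplying real context in the prompt and explicitly permitting the model to say it does not know. The helper and retriever names here are hypothetical, not any specific vendor's API:

```python
# Minimal sketch of prompt grounding, assuming a hypothetical retriever.
def build_grounded_prompt(question: str, context_passages: list[str]) -> str:
    """Assemble a prompt that pins the model to supplied context and
    explicitly permits an 'I don't know' answer."""
    context = "\n\n".join(context_passages)
    return (
        "Answer using ONLY the context below. If the context does not "
        "contain the answer, reply exactly: 'I don't know.'\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}"
    )

# Hypothetical usage -- retrieve_policy_excerpts is a placeholder, not a real API:
# passages = retrieve_policy_excerpts("expense approval limits")
# print(build_grounded_prompt("What is the approval limit for travel?", passages))
```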
Naveen frames data visibility as a related critical issue. He argues that organizations must be able to trace every internal dataset used, understand who has access to it, and see precisely how AI systems interact with it. For him, knowing “who accesses it and how AI touches it” is foundational.
“I think role-based AI is like a polite bouncer. It only provides information based on role—if there’s an insider investigation going on, finance has nothing to know about it. Putting it into the AI shouldn’t return anything. Guardrails are an invisible force, period. These are rules AI simply cannot break, no matter what prompt it receives. That stops people from gathering information by asking a series of questions and revealing things an attacker shouldn’t know.”
–– Naveen Kumar, Head of Insider Risk, Analytics, and Detection, TD Bank
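The “polite bouncer” pattern Naveen describes can be sketched as a hard access check that runs before any model output is returned. The roles, topics, and denial message below are illustrative assumptions, not TD Bank's actual access model:

```python
# Role-based guardrail: each role may only query topics mapped to its function.
ROLE_ALLOWED_TOPICS = {
    "hr": {"hr_records", "benefits"},
    "investigator": {"hr_records", "flagged_employees"},
    "finance": {"ledger", "budgets"},
}

def guarded_answer(role: str, topic: str, answer_fn) -> str:
    """Hard guardrail: out-of-role topics return nothing, regardless of how
    the prompt is phrased -- not even whether the topic exists."""
    if topic not in ROLE_ALLOWED_TOPICS.get(role, set()):
        return "No results."
    return answer_fn(topic)

# Finance asking about an insider investigation reveals nothing:
print(guarded_answer("finance", "flagged_employees", lambda t: f"Details on {t}"))
# -> No results.
print(guarded_answer("investigator", "flagged_employees", lambda t: f"Details on {t}"))
# -> Details on flagged_employees
```

Returning a uniform “No results.” is deliberate: it avoids leaking whether an investigation exists, which is what blocks the question-by-question information gathering Naveen warns about.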
He also emphasizes that how AI is used depends heavily on the domain. He draws a clear distinction between compliance and retail use cases. On the retail side, where the goal is acquiring customers, a more aggressive use of AI may make sense. In compliance, however, he argues the opposite approach is required — organizations need to be far more conservative.
Naveen then introduces a shift in how organizations think about AI agents. Increasingly, he says, agents are viewed as “quasi-human” or like employees. The implication is that they should be de-risked the same way people are: what data they use, what they touch, what they impact, who reviews their work, and who approves it. He frames AI as a “mini version” of an employee that requires equivalent oversight.
To illustrate how far this thinking has progressed, he shares an example from a corporate environment where bots are named after employees — such as “Naveen_AI_bot” — and appear in chats and learn from user activity. For Naveen, this underscores the moment organizations are in: the boundaries between what a human can do and what AI can do are blurring, and the same guardrails should apply to both.
Implement Stepwise Data Classification to Reduce AI Risk
He portrays balancing innovation against customer obligations and regulatory and security constraints as a process that demands significant time and deliberate trade-offs. He explains that the answer lies in a phased approach, starting with AI systems that are narrowly defined and tied to very specific use cases. In these early stages, data availability and data points are intentionally limited to only what is needed to produce a usable output.
He contrasts this with the alternative: making AI solutions comprehensive by giving them access to all key data sources available to the model. This broader approach, he suggests, comes later. The discipline in the early phases comes from setting clear policies about what data can and cannot be used in developing AI models.
Classification also plays a central role in this process. Naveen talks about realistically labeling data as safe, sensitive, or critical, and being explicit that specific categories, especially critical data, should not be used in the first iteration. In his view, this structured, step-by-step approach is what helps organizations navigate the tension between usefulness and risk.
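One way to make that policy enforceable is to encode the labels as an ordered scale and gate each rollout phase on a maximum allowed level. This is a minimal sketch of that idea; the dataset names and tier cutoffs are illustrative assumptions:

```python
# Stepwise data classification: critical data is excluded from early iterations.
from enum import IntEnum

class Sensitivity(IntEnum):
    SAFE = 1
    SENSITIVE = 2
    CRITICAL = 3

# Hypothetical dataset inventory with classification labels.
DATASETS = {
    "public_product_docs": Sensitivity.SAFE,
    "support_tickets": Sensitivity.SENSITIVE,
    "customer_pii": Sensitivity.CRITICAL,
}

def datasets_for_iteration(max_level: Sensitivity) -> list[str]:
    """Return only the datasets permitted at this phase of the rollout."""
    return [name for name, level in DATASETS.items() if level <= max_level]

# First iteration: critical data stays out entirely.
print(datasets_for_iteration(Sensitivity.SENSITIVE))
# -> ['public_product_docs', 'support_tickets']
```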
Using the example of Suspicious Activity Reports (SARs), he explains that while AI can support the process, it should not be allowed to run end-to-end on its own. Moving directly from data collection to alert generation, to review, and to submission to FinCEN without a human in the loop, he says, is not desirable. The challenge is balancing automation with oversight.
To manage this, Naveen suggests thinking in terms of speed versus precision. Lower-risk cases, such as tier-one alerts below a certain threshold, could be handled and resolved by AI agents. But once alerts exceed specific thresholds, they should be routed to a human for review.
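The speed-versus-precision split he suggests reduces to a simple routing rule. In this sketch, the risk threshold and tier labels are illustrative assumptions, not an actual compliance policy:

```python
# Tiered alert routing: agents auto-resolve low-risk alerts, humans review the rest.
RISK_THRESHOLD = 0.4  # hypothetical tier-one cutoff

def route_alert(alert_id: str, risk_score: float) -> str:
    """Route an alert by risk score: below the cutoff, an AI agent resolves it;
    at or above the cutoff, it is escalated to a human review queue."""
    if risk_score < RISK_THRESHOLD:
        return f"{alert_id}: auto-resolved by AI agent (tier one)"
    return f"{alert_id}: escalated for human review"

print(route_alert("ALERT-001", 0.25))  # -> auto-resolved by AI agent (tier one)
print(route_alert("ALERT-002", 0.80))  # -> escalated for human review
```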
Ultimately, he says, the right balance depends on the domain and the use case. In some situations, AI should be positioned as an efficiency layer or a first draft, rather than a fully autonomous, end-to-end solution.