This interview analysis is sponsored by BigID and was written, edited, and published in alignment with our Emerj sponsored content guidelines. Learn more about our thought leadership and content creation services on our Emerj Media Services page.
Find out more about how BigID can help your organization adopt AI safely and responsibly here.
Uncontrolled AI adoption, often called “Shadow AI,” is a growing problem in which teams independently build AI models using sensitive company data without proper oversight.
According to a 2024 MIT CSAIL review, 68% of analyzed documents on AI risks highlighted privacy and security as key concerns, with frequent mentions of compromised privacy through data leaks and vulnerabilities in AI systems.
This lack of oversight can lead to significant exposure of sensitive information and increased opportunities for cyberattacks and misuse, as also noted in peer-reviewed research discussing the rising security and privacy issues associated with AI deployment in organizations.
A 2023 open-access study in Humanities and Social Sciences Communications highlights that, as AI becomes increasingly integrated into decision-making, the risk of security breaches and privacy violations increases, particularly when organizations lack the necessary technical expertise to manage these systems securely.
The MIT CSAIL report further warns that overreliance and unsafe use of AI, combined with insufficient visibility into internal AI activities, can lead to regulatory blind spots and unmanaged risks. The risk will be especially acute as AI agents begin to require their own digital identities, introducing new layers of access and authentication vulnerabilities.
Emerj Editorial Director Matthew DeMello recently sat down with Dimitri Sirota, Co-founder and CEO of BigID, to discuss how companies must responsibly build and secure AI systems, especially as they transition to agentic AI models.
Their conversation underscores the pressing need for visibility, risk management, and zero-trust security frameworks to manage AI’s increasing autonomy and complexity.
This article examines two critical insights from their conversation for CX and data security leaders across industries:
- Managing AI risks early: Building responsible AI starts with uncovering all AI activities, systematically assessing their risks, and continuously monitoring operations to stay within secure guardrails.
- Building zero-trust for agentic AI: Implementing zero-trust security means setting strict guardrails and managing risks across data, models, and identities to prevent unauthorized actions, protect sensitive information, and maintain trust.
Listen to the full episode below:
Guest: Dimitri Sirota, Co-founder & CEO, BigID
Expertise: Entrepreneurship, Business Development
Brief Recognition: Dimitri is Co-founder and CEO at BigID. He is an established serial entrepreneur, investor, mentor, and strategist. He previously founded two enterprise software companies focused on security (eTunnels) and API management (Layer 7 Technologies), which were sold to CA Technologies in 2013. He holds a Master’s Degree in Engineering Physics from the University of British Columbia.
Managing AI Risks Early
Dimitri opens the conversation by discussing the responsible development of AI. He says that when companies embark on AI programs, the first challenge is simply getting control over what’s happening.
He describes managing AI responsibly as a three-legged stool and breaks down the process into these steps:
Step 1: Uncover and map your AI and data landscape
From the start, Dimitri insists that the first task for any company is to uncover all AI activities happening across the organization, both sanctioned and unsanctioned. He warns about the rise of “Shadow AI,” where employees independently use company data to train models without going through formal approval processes. These informal workflows can involve pulling data from cloud storage such as S3 or from internal folders with no apparent oversight.
He stresses that organizations need to understand not only where their AI models live — whether in the cloud, on local laptops, or elsewhere — but also what data is feeding those models. Sensitive information, such as personal data, intellectual property, health records, payroll data, or non-public financial data, can introduce serious risks.
Hence, step one is about fully mapping out the AI and data landscape to identify what he calls the company’s “crown jewel” data and to understand the relationships between models and the data they consume.
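To make this discovery step concrete, the sketch below shows roughly what a first, very simplified inventory pass might look like in Python. It is an illustrative assumption, not BigID’s approach: the bucket name, file extensions, and keyword patterns are hypothetical, and a real discovery platform would classify content across many data stores rather than matching object names in S3.

```python
# Illustrative sketch only: a simple inventory pass over an S3 bucket, flagging
# objects that look like model artifacts or potentially sensitive training data.
# Bucket names, patterns, and heuristics here are hypothetical.
import re
import boto3

SENSITIVE_PATTERNS = [r"payroll", r"ssn", r"patient", r"financials"]     # crown-jewel hints
MODEL_EXTENSIONS = (".pt", ".onnx", ".safetensors", ".pkl")               # common model formats

def inventory_bucket(bucket: str) -> dict:
    """Return a rough map of model artifacts and potentially sensitive objects."""
    s3 = boto3.client("s3")
    findings = {"models": [], "sensitive": []}
    paginator = s3.get_paginator("list_objects_v2")
    for page in paginator.paginate(Bucket=bucket):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            if key.endswith(MODEL_EXTENSIONS):
                findings["models"].append(key)
            if any(re.search(p, key, re.IGNORECASE) for p in SENSITIVE_PATTERNS):
                findings["sensitive"].append(key)
    return findings

if __name__ == "__main__":
    # Hypothetical bucket; real discovery would span clouds, laptops, and internal shares.
    print(inventory_bucket("example-data-lake"))
```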
Step 2: Assess and prioritize AI programs
Assessing the risk and value of each AI initiative enables companies to determine where to allocate their limited resources. Key factors for evaluation, Dimitri says, include what the model does, the sensitivity of the data it uses, the intended outputs, and who the model serves.
He also highlights that without prioritization, companies can waste time and money or expose themselves to greater regulatory and security risk. The idea is to have a transparent, systematic approach to deciding which programs should move forward and under what controls.
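As a rough illustration of how such prioritization might be structured, the sketch below scores hypothetical AI initiatives along the factors Dimitri mentions: data sensitivity, intended outputs, and who the model serves. The weights, scales, and initiative names are assumptions made for illustration, not a prescribed scoring model.

```python
# Illustrative sketch: scoring AI initiatives to decide where limited resources go.
# Weights and 1-5 scales are hypothetical, for illustration only.
from dataclasses import dataclass

@dataclass
class AIInitiative:
    name: str
    data_sensitivity: int   # 1 (public data) .. 5 (regulated / crown-jewel data)
    output_exposure: int    # 1 (internal report) .. 5 (customer-facing decisions)
    audience_reach: int     # 1 (small team) .. 5 (all customers)
    business_value: int     # 1 (low) .. 5 (high)

    @property
    def risk_score(self) -> float:
        # Weight data sensitivity most heavily, since data is where much of the risk emerges.
        return 0.5 * self.data_sensitivity + 0.3 * self.output_exposure + 0.2 * self.audience_reach

initiatives = [
    AIInitiative("support-copilot", 3, 4, 5, 4),
    AIInitiative("payroll-forecasting", 5, 2, 2, 3),
    AIInitiative("marketing-summaries", 1, 3, 3, 2),
]

# Review the highest-risk, highest-value programs first.
for item in sorted(initiatives, key=lambda x: (x.risk_score, x.business_value), reverse=True):
    print(f"{item.name:22s} risk={item.risk_score:.1f} value={item.business_value}")
```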
Step 3: Build continuous AI security and compliance oversight
Finally, Dimitri stresses that once AI programs are operational, companies must continuously monitor them to ensure they stay secure and compliant. This monitoring includes observing how models are trained, how employees interact with AI tools like copilots, and how new agent technologies behave.
Real-time monitoring allows organizations to detect policy violations, flag security issues, and control AI operations. Without this layer of active oversight, companies risk unauthorized data use, model drift, or breaches of internal and external compliance standards.
He emphasizes that building an infrastructure to track AI activities against defined guardrails is critical for maintaining trust, safety, and regulatory adherence over time.
“This final action builds on the first two steps: gaining visibility into your data landscape to understand risk, and then assessing that risk through a structured, legally informed process. Monitoring ensures that AI systems operate within defined guardrails, adhere to internal policies, and maintain compliance as they scale.”
-Dimitri Sirota, Co-founder and CEO of BigID
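One way to picture this kind of oversight is as a continuous policy check over AI activity events. The simplified Python sketch below evaluates a single hypothetical event against a guardrail policy; the event fields, tools, and data classes named here are assumptions for illustration, and a production system would pair such rules with data classification, lineage, and alerting.

```python
# Illustrative sketch: checking AI activity events against defined guardrails.
# Policies, tools, and data classes are hypothetical placeholders.
from dataclasses import dataclass, field

@dataclass
class GuardrailPolicy:
    allowed_data_classes: set = field(default_factory=lambda: {"public", "internal"})
    allowed_tools: set = field(default_factory=lambda: {"copilot", "search"})

@dataclass
class AIActivityEvent:
    actor: str          # employee or service account
    tool: str           # e.g., a copilot or a training pipeline
    data_class: str     # classification of the data being touched

def evaluate(event: AIActivityEvent, policy: GuardrailPolicy) -> list:
    """Return a list of violations; an empty list means the event stays within guardrails."""
    violations = []
    if event.tool not in policy.allowed_tools:
        violations.append(f"tool '{event.tool}' is not an approved AI tool")
    if event.data_class not in policy.allowed_data_classes:
        violations.append(f"data class '{event.data_class}' may not feed AI tools")
    return violations

policy = GuardrailPolicy()
event = AIActivityEvent(actor="analyst-42", tool="fine-tune-job", data_class="payroll")
for violation in evaluate(event, policy):
    print("ALERT:", violation)   # in practice this would flag, block, or open a review
```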
Building Zero-Trust for Agentic AI
Dimitri also asserts that many companies are transitioning into an agentic AI model. Instead of just having a standard API layer where users manually request information, they are building a system where autonomous AI agents can perform tasks, access data, and even interact with external tools or other agents.
He explains that these agents will have capabilities similar to virtual staff; they might search for data, provide risk reports, or perform other specialized functions automatically.
However, Dimitri also believes this shift brings serious new challenges. As these AI agents gain increasing autonomy and power, companies must treat them with care, realizing that even their own internal agents cannot be fully trusted, familiar as they may seem. He emphasizes the need for a zero-trust security model for agents, where authentication is strict, access is limited, and activities are constantly monitored.
Zero trust becomes even more critical when external agents, such as those operated by partners, customers, or other companies, interact with a company’s internal systems and AI representatives.
Dimitri goes on to explain that, in the future, companies across industries will need robust security frameworks for agent behavior. These will inevitably include the following (a simplified code sketch follows the list):
- Establishing clear guardrails for what functions agents can perform
- Monitoring activities outside the allowed parameters
- Implementing systems to block, alert, or intervene if agents begin acting unpredictably
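A minimal sketch of what such agent guardrails could look like appears below. The agent identities, permissions, and token check are hypothetical placeholders; the point is simply that every agent, internal or external, is authenticated and restricted to an explicit allow-list, with anything outside it logged and blocked.

```python
# Illustrative sketch of a zero-trust check for agent actions: authenticate the
# agent's identity, allow only explicitly permitted functions, and log everything.
# The identity store, permissions, and token scheme here are hypothetical.
import logging

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("agent-gateway")

# Explicit allow-list per agent identity; anything not listed is denied by default.
AGENT_PERMISSIONS = {
    "risk-report-agent": {"read:risk_reports"},
    "partner-order-agent": {"read:order_status"},   # external agent, tightly scoped
}

def verify_credential(agent_id: str, credential: str) -> bool:
    # Placeholder: a real system would validate a signed token or workload identity.
    return credential == f"token-for-{agent_id}"

def authorize(agent_id: str, credential: str, action: str) -> bool:
    """Zero-trust decision: verify identity, then check the action against the allow-list."""
    if not verify_credential(agent_id, credential):              # never trust, always verify
        log.warning("authentication failure for %s", agent_id)
        return False
    allowed = action in AGENT_PERMISSIONS.get(agent_id, set())
    log.info("agent=%s action=%s allowed=%s", agent_id, action, allowed)
    if not allowed:
        log.warning("blocking %s: action %s is outside its guardrails", agent_id, action)
    return allowed

# Internal agents get the same scrutiny as external ones.
authorize("risk-report-agent", "token-for-risk-report-agent", "read:risk_reports")        # allowed
authorize("partner-order-agent", "token-for-partner-order-agent", "delete:customer_data")  # blocked
```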
He believes companies must also have a clear plan while exploring agentic AI. This plan should focus on gaining visibility into the AI programs being launched, understanding the associated risks, and effectively managing the risks related to the data feeding into AI models. He stresses that data is the fulcrum, the central point where much of the risk emerges.
Taking it even further, Dimitri introduces the idea of a holy trinity of risks: data, AI, and identity.
He believes that while companies mainly think about data and AI risks together today, identity will soon become the third major factor. Identity can refer to employees using AI, consumers interacting with AI, or even AI agents themselves, which may need unique identities.
It is sobering food for thought for the Emerj executive podcast audience, as the global economy is poised to experience another hype cycle surrounding new and highly advanced AI-based systems.