This interview analysis is sponsored by Productive Edge and was written, edited, and published in alignment with our Emerj sponsored content guidelines. Learn more about our thought leadership and content creation services on our Emerj Media Services page.
Burnout among hospital staff, particularly nurses and physicians, has reached critical levels. A report by the Center for Health Outcomes and Policy Research at the University of Pennsylvania highlights that nearly one-third of physicians and almost half of nurses in hospital settings report experiencing high burnout.
The widespread exhaustion is linked to excessive workloads, insufficient staffing, administrative burdens, and unfavorable work environments, all of which contribute to job dissatisfaction, high turnover rates, and even compromised patient safety.
Agentic AI is slowly being introduced in hospitals to help address the documentation and administrative burdens that are major contributors to clinician burnout. According to a 2023 New York Times article, the most immediate and practical application of even advanced generative AI in healthcare is easing the heavy documentation burden that consumes hours of clinicians’ time daily, addressing one of the leading causes of burnout.
Similarly, Harvard Business School highlights that AI-driven automation of clinical documentation and administrative tasks can significantly reduce the time physicians spend on paperwork, freeing them to focus more on patient care and improving their well-being.
Emerj Editorial Director Matthew DeMello recently spoke with Raheel Retiwalla, Chief Strategy Officer at Productive Edge, on Emerj’s ‘AI in Business’ podcast about how agentic AI can transform healthcare workflows by moving beyond insights to autonomous, orchestrated actions. Their conversation highlights the need for strong AI governance, workflow transformation, and strategic adoption to unlock the full potential of responsible agentic AI.
This article examines two critical insights for healthcare and innovation leaders from their conversation:
- Designing persona-centric workflows: Mapping user-specific tasks to identify high-friction points in roles like care managers, where AI agents can take over routine data gathering and task prep.
- Building Trust and Control into Agentic AI: Implementing governance for agentic AI by logging actions, monitoring in real time, and auditing for bias to ensure transparency and accountability.
Listen to the full episode below:
Guest: Raheel Retiwalla, Chief Strategy Officer, Productive Edge
Expertise: Healthcare, Business Intelligence, Product Strategy
Brief Recognition: Retiwalla has worked at the intersection of enterprise IT and digital transformation for over two decades, leading strategic initiatives at Pure Storage and advising Fortune 500 companies on cloud infrastructure and data strategy.
Designing Persona-Centric Workflows
Raheel opens the conversation by explaining that traditional AI in healthcare — like dashboards, alerts, and risk models — is proven in workflows involving identifying issues, such as flagging a high-risk patient. He summarizes the current state of innovation across available deterministic and generative applications of AI, but points out that many use cases stop there.
Raheel emphasizes that the next step will be taking coordinated action based on those insights. Currently, making the most of data-driven insights is usually left to humans, who struggle due to siloed systems and manual workflows.
New agentic AI systems stand to change the paradigm by identifying the problem and orchestrating the response, moving information between departments, systems, or even organizations to ensure timely and intelligent action. In essence, agentic AI turns static workflows into dynamic, intelligent ones that require far less human coordination.
Raheel emphasizes that the step change agentic systems are anticipated to provide in these areas will be crucial for value-based care, where outcomes depend not only on detection but also on how well and how quickly the system can respond across organizational boundaries.
He explains with a healthcare-specific industry example: Today, a care manager must manually sift through reports and notes to prioritize their tasks. With agentic AI, the system could proactively greet the care manager, summarize their top five tasks, provide supporting notes, suggest next steps (like a call script), and even ask if it should schedule appointments.
The goal is not just automating steps, but also understanding the user’s workflow, anticipating their needs, and reducing the manual burden.
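The care manager workflow Raheel describes can be illustrated with a minimal sketch. Everything below is hypothetical for illustration: the `Task` fields, the risk scores, and the `morning_briefing` helper are assumptions, not Productive Edge's actual implementation; a real agent would pull tasks and notes from live clinical systems rather than in-memory data.

```python
from dataclasses import dataclass

@dataclass
class Task:
    member_id: str
    description: str
    risk_score: float  # higher = more urgent

def morning_briefing(tasks, notes_by_member, top_n=5):
    """Rank a care manager's open tasks and attach supporting notes,
    approximating the proactive greeting described above."""
    ranked = sorted(tasks, key=lambda t: t.risk_score, reverse=True)[:top_n]
    briefing = []
    for task in ranked:
        briefing.append({
            "member": task.member_id,
            "task": task.description,
            "notes": notes_by_member.get(task.member_id, "No recent notes"),
            "suggested_next_step": "Call member; offer to schedule follow-up",
        })
    return briefing

# Sample data standing in for feeds from care-management systems.
tasks = [
    Task("M-102", "Post-discharge check-in", 0.91),
    Task("M-441", "Medication reconciliation", 0.64),
    Task("M-007", "Annual wellness outreach", 0.22),
]
notes = {"M-102": "Discharged 6/3; CHF exacerbation."}

for item in morning_briefing(tasks, notes, top_n=2):
    print(item["member"], "->", item["task"])
```

The point of the sketch is the shape of the interaction: the agent, not the care manager, does the sorting, note retrieval, and next-step drafting before the workday begins.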
Yet for healthcare organizations to build agentic AI on top of already convoluted tech stacks, Raheel says, they need three readiness layers:
- A foundational layer that encompasses cloud and data systems, machine learning operations (MLOps), APIs, security, and governance, ensuring the AI is both explainable and observable. Without this strong base, you cannot build effective AI agents.
- An agentic AI platform layer offering AI capabilities, including memory to recall past interactions, orchestration to utilize tools effectively, and modularity to integrate new capabilities easily. Raheel emphasizes that agents must operate under policies, such as knowing when human intervention is needed or respecting data access rules.
- A healthcare tools layer composed of existing AI models that organizations already have, such as risk stratification models or next-best-action clinical models. Traditionally, these tools are siloed within departments. Agentic AI changes that by democratizing access. Agents can tap into these models dynamically, pulling insights and triggering actions such as scheduling and CRM follow-ups across departments and workflows.
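One way to picture the tools layer is a shared model registry that any agent can call, rather than each department owning its own model behind a private interface. The sketch below is a simplified assumption of that pattern; the registry, the `risk_stratification` and `next_best_action` stand-ins, and the `agent_step` logic are all hypothetical.

```python
# Hypothetical shared registry: departmental models are registered once,
# then any agent can invoke them instead of the model living in a silo.
MODEL_REGISTRY = {}

def register_model(name):
    def decorator(fn):
        MODEL_REGISTRY[name] = fn
        return fn
    return decorator

@register_model("risk_stratification")
def risk_stratification(member):
    # Stand-in for a real clinical risk model.
    return 0.9 if member.get("recent_admission") else 0.2

@register_model("next_best_action")
def next_best_action(member):
    return ("schedule_follow_up" if member.get("recent_admission")
            else "send_wellness_reminder")

def agent_step(member):
    """An agent dynamically taps registered models and decides an action,
    escalating to a human when risk is high."""
    risk = MODEL_REGISTRY["risk_stratification"](member)
    action = MODEL_REGISTRY["next_best_action"](member)
    return {"risk": risk, "action": action, "escalate": risk > 0.8}

print(agent_step({"id": "M-102", "recent_admission": True}))
```

The escalation flag reflects the policy point above: agents act within rules that define when a human must step in.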
Building Trust and Control into Agentic AI
When moving from generative AI digital transformations into agentic AI systems, Raheel explains that healthcare organizations will need to build an additional layer of data governance, which he refers to as an ‘AI governance layer,’ that extends beyond traditional approaches. Since AI agents have autonomy, organizations must govern their behavior carefully to maintain control, compliance, and transparency.
He outlines a few essential features that healthcare organizations must provide for effective governance of agentic systems:
- Auditability: AI agents must log all their actions, allowing teams to audit later and verify how they made recommendations or decisions.
- Instrumentation: AI agents must be designed to record specific activities, making it easier to verify if their actions meet transparency and regulatory requirements.
- Observability: Organizations need tools to trace AI agents’ paths when making decisions, ensuring their autonomy is monitored and controlled.
- Bias-checking: By appropriately documenting AI agent behavior, healthcare organizations can better detect and correct biases in their actions or outputs.
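The auditability and instrumentation requirements above can be sketched as a logging wrapper around agent actions. This is a minimal illustration under stated assumptions: the decorator, the `AUDIT_LOG` list, and the `recommend_outreach` action are hypothetical, and a production system would write to durable, append-only storage rather than an in-memory list.

```python
import json
import time

AUDIT_LOG = []  # in production: durable, append-only audit storage

def audited(agent_name):
    """Wrap an agent action so every call is recorded with its inputs
    and output, enabling later audit and bias review."""
    def decorator(fn):
        def wrapper(*args, **kwargs):
            result = fn(*args, **kwargs)
            AUDIT_LOG.append({
                "agent": agent_name,
                "action": fn.__name__,
                "inputs": json.dumps({"args": args, "kwargs": kwargs},
                                     default=str),
                "output": json.dumps(result, default=str),
                "timestamp": time.time(),
            })
            return result
        return wrapper
    return decorator

@audited("care-gap-agent")
def recommend_outreach(member_id, days_since_refill):
    # Hypothetical agent action: flag members overdue for a refill.
    return {"member": member_id, "recommend_call": days_since_refill > 10}

recommend_outreach("M-102", days_since_refill=12)
print(AUDIT_LOG[-1]["action"], AUDIT_LOG[-1]["output"])
```

Because inputs and outputs are captured together, teams can later trace how a recommendation was made (observability) and scan logged decisions for systematic bias.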
He says that when identifying use cases for agentic AI, he first studies workflows closely, particularly moments in a staff member’s journey in areas such as claims management, customer service, care management, or clinical functions, where a significant amount of time is wasted aggregating, synthesizing, and understanding data.
Traditionally, human agents gather information, make sense of it, draw insights, and act. By contrast, AI agents can handle much of the same heavy, behind-the-scenes data work, allowing humans to start directly at the recommendation stage of healthcare workflows instead of slogging through several steps of data preparation.
In the process, Raheel stresses that AI agents are not just about automation. They are workflow transformers, rethinking how work gets done. To explain what he means, Raheel shares a few real-world examples:
- In care management, an AI agent can prepare service plans for high-risk members by reading intake notes, pulling patient history, checking eligibility, and drafting plans, reducing a task that took 45 minutes to just 2-5 minutes, cutting burnout, and doubling throughput.
- In claims triage, an AI agent can sift through claims data across four or five systems, flagging issues and summarizing them intelligently for human adjusters, utilizing memory and reasoning rather than rigid rules.
Another example Raheel cites, which Productive Edge is currently piloting, is a longitudinal agent designed to support behavioral health follow-up. It monitors key patient behavior metrics over several weeks, tracking missed appointments, medication gaps, and unclosed referrals. When signs of risk emerge, the agent proactively nudges care managers, not with generic alerts but with contextualized insights.
“For instance, it might flag that a member hasn’t refilled their prescription in 12 days, note a documented mood decline in their last visit, and recommend a timely outreach. The agent consolidates data from claims, EHRs, and even patient messaging platforms, enabling care managers to prioritize effectively and act with the full picture in view. And in many ways, that’s only the beginning of what these systems can do.”
– Raheel Retiwalla, Chief Strategy Officer at Productive Edge
He believes the best agentic AI use cases emerge with executive-driven direction. As leaders increasingly ask, “What are we doing about AI?” operational teams are being pushed to identify areas where AI can reduce manual work and inefficiencies.
Many high-ROI opportunities exist even in non-clinical workflows without protected health information (PHI), making them easy starting points. Teams can build on existing tools and models to unlock more value with AI agents.