Enhancing Customer Engagement with AI-Driven Solutions – with Robert Rose of Adobe and Phil Gray of Interactions

Emily Smith

Emily Smith has been writing online for nearly two decades, always on the cutting edge of technology and innovation. At Emerj, she uses her expertise to highlight the latest advancements in AI and their practical applications.

This interview analysis is sponsored by Interactions and was written, edited, and published in alignment with our Emerj sponsored content guidelines. Learn more about our thought leadership and content creation services on our Emerj Media Services page.

As customer service increasingly becomes a battleground for competitive advantage, companies are leaning on AI to transform their workflows and enhance user experiences. However, adopting AI is not without its challenges. 

According to the 2023 AI Index Report by the Stanford Institute for Human-Centered Artificial Intelligence, organizations often struggle with overconfidence in AI systems, poor data management, and a disconnect between technological advancements and consumer trust, creating hurdles for meaningful implementation.

Additionally, the rapid pace of AI innovation often outpaces organizational readiness and user understanding, creating a complex landscape for businesses to navigate. The National Institute of Standards and Technology (NIST) emphasizes the importance of trustworthy and responsible AI, highlighting the need for greater transparency in AI-driven interactions to meet rising consumer expectations.

In an upcoming episode of Emerj’s ‘AI in Business’ podcast, Robert Rose, Senior AI Strategist at Adobe, shared his perspective on these challenges. Later, he and Emerj Senior Editor Matthew DeMello were joined by Phil Gray, Chief Product Officer at Interactions.

Together in a wide-ranging conversation, both customer service leaders emphasized that the key to AI’s success lies not just in technological advancements but in how these systems are integrated with human oversight and trust-building strategies.

By focusing on reducing customer effort, enhancing transparency, and implementing robust governance frameworks, companies can unlock AI’s transformative potential while mitigating its risks. This article unpacks three key insights from the conversation that provide actionable advice for business and technology leaders:

  • Integrating AI with human-in-the-loop (HITL) strategies: Combining AI automation with real-time human oversight to ensure accuracy, resolve edge cases, and improve customer interactions dynamically.
  • Designing AI to minimize customer effort: Leveraging AI to reduce friction in customer interactions while maintaining transparency and trust.
  • Building trust in AI through transparency: Educating users on AI’s capabilities and limitations to foster confidence and align expectations.

Guest: Robert Rose, Senior AI Strategist, Adobe

Brief Recognition: Robert Rose has been a leading advocate for AI innovation in customer service, bringing years of experience in designing AI-driven workflows that prioritize efficiency and user satisfaction. At Adobe, he focuses on leveraging AI to bridge the gap between advanced technology and human-centered design.

Guest: Phil Gray, Chief Product Officer, Interactions

Brief Recognition: Phil Gray leads innovation at Interactions, specializing in conversational AI solutions that enhance customer engagement and streamline support operations. With extensive expertise in natural language processing and customer experience design, Phil has played a pivotal role in advancing hybrid AI models that integrate human oversight.

Expertise: Human-AI interaction, customer service optimization, predictive systems, trust-building in AI applications, governance frameworks

Integrating AI with Human-in-the-Loop Strategies 

AI’s potential to transform customer service often falters when systems operate without adequate human oversight. Robert Rose highlighted the concept of human-in-the-loop (HITL) workflows as a foundational strategy for reducing errors and building trust in AI systems. HITL workflows ensure that AI operates as an assistive tool, with humans stepping in when the system encounters uncertainty or complex scenarios.

Rose explains the challenge he sees at Adobe: “We’re trying to create workflows where AI knows when to step back and let a human take over, especially in situations where trust and accuracy are critical.” He describes a three-stage process (sketched in code after the list below) that helps refine AI interactions and detect biases introduced by user behavior:

  • Observation: AI systems initially process user interactions without intervention, allowing organizations to assess patterns and common points of failure.
  • Intervention: Human reviewers step in when AI-generated responses display uncertainty, providing corrections and guiding the AI’s learning process.
  • Refinement: Insights from human interventions feed back into AI training, improving the system’s ability to handle ambiguous or biased interactions over time.
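The minimal Python sketch below illustrates how these three stages might fit together in a routing loop. Every name in it (HITLRouter, the 0.75 confidence threshold, the stand-in model and escalation calls) is a hypothetical assumption for illustration, not Adobe’s or Interactions’ actual system.

```python
from dataclasses import dataclass, field

# All names below are hypothetical illustrations of a HITL routing loop.

@dataclass
class HITLRouter:
    """Routes customer messages between an AI model and a human agent."""
    confidence_threshold: float = 0.75               # assumed cutoff; tune per domain
    corrections: list = field(default_factory=list)  # fuel for the refinement stage

    def handle(self, message: str) -> str:
        # Observation: the model answers and reports its own confidence.
        reply, confidence = self._model_respond(message)

        # Intervention: uncertain answers are escalated to a human reviewer,
        # and the correction is logged for later training.
        if confidence < self.confidence_threshold:
            corrected = self._escalate_to_human(message, reply)
            self.corrections.append(
                {"input": message, "ai_reply": reply, "human_reply": corrected}
            )
            return corrected
        return reply

    def _model_respond(self, message: str) -> tuple[str, float]:
        # Stand-in for a real model call that returns (text, confidence).
        return ("Your refund was issued on May 2.", 0.62)

    def _escalate_to_human(self, message: str, draft: str) -> str:
        # Stand-in for an agent desktop where a human reviews the AI draft.
        return "Your refund was issued on May 3 and should post within 5 days."

# Refinement: router.corrections becomes labeled training data for the
# next model update, closing the loop Rose describes.
router = HITLRouter()
print(router.handle("Where is my refund?"))
```

The design choice to log every human correction is what turns the intervention stage into training data for the refinement stage.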

Phil Gray adds, “Human-in-the-loop workflows allow AI to complement human expertise rather than replace it, creating a seamless support system that adapts to complex customer needs.”

This approach not only mitigates the risk of AI hallucination, where systems generate confident but incorrect responses, but also provides a critical feedback loop for improving AI performance over time. Human reviewers also play a key role in detecting ways that biased user interactions may influence AI decision-making, ensuring that customer experience (CX) systems remain fair and effective.

Research highlights how HITL designs can significantly enhance decision accuracy by leveraging human intervention during ambiguous scenarios, particularly in fields like healthcare and finance.

HITL strategies also address the challenge of balancing automation with empathy. While AI can efficiently handle repetitive tasks, human involvement ensures nuanced issues are addressed with care and understanding, leading to higher customer satisfaction and retention rates.

Additionally, continuous training of AI systems based on human feedback can refine models and improve their accuracy and reliability. A hybrid approach addresses a key pain point in customer service: the frustration of navigating ineffective bots. By integrating human expertise, companies can ensure their AI systems align with customer needs and expectations, ultimately fostering trust and enhancing user satisfaction.

Designing AI to Minimize Customer Effort 

Customer service systems often fail when they demand excessive effort from users. Rose emphasizes that minimizing customer effort is critical for creating positive and lasting impressions. Traditional workflows, often referred to as “containment models,” are designed to limit interactions with human agents due to cost considerations.

These models frequently result in frustrating loops of vague responses and repeated information requests, leaving customers dissatisfied. Rose notes, “Customers don’t care if it’s a bot or a human; they just want their problem solved quickly and efficiently.”

Gray elaborates on Rose’s point, noting that “moving beyond containment models to resolution-focused designs allows AI to simplify the customer journey. The goal is to reduce effort, not create barriers.”

Rose highlights the opportunity AI presents to disrupt these outdated workflows. By seamlessly transitioning between AI and human support, companies can eliminate the need for customers to repeat themselves, reducing frustration. Research has shown that AI can streamline workflows by predicting user needs and optimizing task completion.
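As a concrete illustration of a context-preserving handoff, here is a minimal sketch assuming a hypothetical HandoffPacket structure; the fields and the hand_off_to_agent function are illustrative assumptions, not either company’s product API.

```python
from dataclasses import dataclass

@dataclass
class HandoffPacket:
    """Everything the human agent needs so the customer never repeats themselves."""
    customer_id: str
    detected_intent: str              # e.g., "billing_dispute"
    transcript: list[str]             # the AI conversation so far
    ai_summary: str                   # short recap generated by the model
    attempted_resolutions: list[str]  # steps the AI already tried

def hand_off_to_agent(packet: HandoffPacket) -> None:
    # A real system would push this to an agent desktop or CRM queue.
    print(f"[Agent queue] intent={packet.detected_intent}")
    print(f"Summary: {packet.ai_summary}")
    print(f"Already tried: {', '.join(packet.attempted_resolutions)}")

hand_off_to_agent(HandoffPacket(
    customer_id="C-1042",
    detected_intent="billing_dispute",
    transcript=["Customer: I was charged twice.", "Bot: Let me check that."],
    ai_summary="Customer reports a duplicate charge on the May invoice.",
    attempted_resolutions=["verified identity", "located both transactions"],
))
```

Because the agent receives the transcript, the detected intent, and the steps already attempted, the customer is never asked to start over.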

Advanced AI systems can proactively identify potential customer pain points before they arise, offering preemptive solutions or guidance. Rose stresses that such a predictive approach minimizes disruptions and enhances the overall service experience, building long-term loyalty.

Leveraging predictive capabilities allows AI to analyze customer intent and behavior, providing personalized solutions that further reduce effort. Rose is quick to point out that transparency is another critical component: clearly communicating the capabilities and limitations of AI systems fosters trust and manages user expectations.

By prioritizing ease of use and transparency, companies can create customer experiences that are both efficient and trustworthy. This balance is essential for building long-term relationships and driving customer loyalty.

Building Trust in AI Through Transparency

As AI technologies become more advanced, fostering user trust is essential for widespread adoption. Rose emphasizes that trust begins with transparency: educating users about AI’s capabilities and limitations to manage expectations effectively.

One of the challenges in building trust is the tendency of AI systems to deliver “confidently wrong” outputs. Users often misinterpret AI’s recommendations as infallible, which can lead to frustration or poor decision-making when errors occur. Rose explains, “AI is essentially a probability engine. Users need to understand that it can make mistakes, just like any other tool. The challenge is that AI often provides outputs with such confidence that people assume they must be correct.”

Studies have shown that transparency in AI design, including displaying confidence scores, can help users better understand and evaluate AI recommendations. Phil Gray emphasizes the importance of clear communication, stating, “Transparency isn’t optional—it’s a requirement. Users who understand how AI works are more likely to trust it, especially when its limitations are clearly communicated.”
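To make the “probability engine” point concrete, here is a minimal sketch of surfacing a confidence score alongside an answer; the thresholds and labels are illustrative assumptions, not a published standard or either company’s implementation.

```python
def present_recommendation(answer: str, confidence: float) -> str:
    """Attach a plain-language confidence label so users can calibrate trust."""
    # Illustrative bands only; a real product would tune and test these.
    if confidence >= 0.9:
        label = "High confidence"
    elif confidence >= 0.7:
        label = "Moderate confidence: please double-check key details"
    else:
        label = "Low confidence: a human agent will verify this answer"
    return f"{answer}\n({label}; score {confidence:.0%})"

print(present_recommendation("Your plan renews on June 1.", 0.64))
# Output:
# Your plan renews on June 1.
# (Low confidence: a human agent will verify this answer; score 64%)
```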

Governance structures further enhance transparency by establishing accountability mechanisms. Rose advocates for oversight boards that review AI decisions, assess alignment with organizational goals, and address potential risks such as biases or inaccuracies.

“We need governance frameworks that ensure AI is being used responsibly. That means having oversight boards that assess AI decisions, review training data for biases, and ensure models remain aligned with business objectives. Without these safeguards, we risk deploying systems that reinforce errors instead of correcting them.”

– Robert Rose, Senior AI Strategist at Adobe

Moreover, transparency initiatives can include user-friendly dashboards or reports that detail how AI systems arrive at their conclusions. Such tools empower users with insights into AI processes, fostering a sense of control and reducing skepticism.

Gray adds, “Combining oversight with structured user education creates a comprehensive approach to managing AI risks, ensuring systems remain reliable and aligned with business objectives.” By integrating governance and education, organizations can effectively mitigate risks and maximize the benefits of AI systems.
