What Chatbots Can Do, and Cannot Do

Daniel Faggella

Daniel Faggella is Head of Research at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and leading enterprises, Daniel is a globally sought-after expert on the competitive strategy implications of AI for business and government leaders.


Episode Summary: In this episode, discover how chatbots and conversational agents can give you an advantage in customer support, product support, lead engagement, and more, and learn the theory behind creating useful chatbots you can use in your own business.

Right now, if we intend to find a piece of information or purchase something on the Internet, we might use a search engine that provides a list of sites we can browse in order to resolve that intent. This week’s guest, Dr. Sid J Reddy, Chief Scientist at Conversica, talks about how AI and ML can usher in a new era of search software, one that will bring you a faster, more accurate resolution to your intent.

Most importantly, Dr. Reddy discusses how chatbot technology can be integrated into areas such as customer service, product support, and lead engagement. By the end of the episode, listeners will have a better idea of the importance of collecting data and how they can use that data to build chatbot templates they can apply in multiple domains and applications.

Expertise: Natural Language Processing, Computational Linguistics, Biomedical Informatics, and Machine Learning

Brief Recognition: Dr. Reddy is Chief Scientist at Conversica, which claims to be the only provider of AI-driven lead engagement software for marketing and sales organizations. He is an expert in natural language processing, using this background to better understand artificial intelligence and machine learning. Before Conversica, Dr. Reddy was a Principal Applied Scientist at Microsoft and founded the natural language processing lab at Northwestern University. He is an industry speaker and published author with research featured in over 60 peer-reviewed journal publications and technical conferences.

Current Affiliations: Chief Scientist at Conversica, Assistant Professor (Adjunct) at Northwestern University

(Readers with a more broad interest in chatbot applications may be interested in our recent article comparing the chatbot efforts of Google, Facebook, Microsoft, and Amazon.)

Big Ideas:

There is no catch-all chatbot, and there are no out-of-the-box chatbots that are of any real value in business. Current chatbots can provide useful responses to users within defined and narrow use-cases, but only if the system has enough data to train on – and the relevant human expertise to do the training and tweaking. The best chatbots make use of the most data (from web visits, to purchase histories, to online conversations, and more).

Understanding a visitor, lead, or customer’s intent is key to developing chatbots that can best help find a resolution, whether it’s getting a refund, purchasing a product, or finding information. The bot first has to understand the question and the intent behind it; only then can it extract entities from that intent and search for a result.

For example, you may ask the chatbot, “Where can I buy a refurbished MacBook Pro?” Realistically, you’d probably leave out the question entirely and type “buy refurbished MacBook Pro.” The chatbot would first have to discern that you both intend to buy a product and intend to find a place to buy it. Afterwards, the chatbot extracts entities, or specificities, from your question; in this case: refurbished, the implied “computer,” and MacBook Pro. Finally, referring to your intents, the chatbot can provide you with an accurate and useful resolution: it will find you a store where you can purchase a refurbished MacBook Pro.
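To make that flow concrete, here is a minimal, rule-based sketch in Python of the intent-then-entities-then-resolution steps described above. The intent names, keyword lists, and placeholder store lookup are illustrative assumptions, not a description of any vendor’s actual system.

```python
# Illustrative sketch: detect intents, extract entities, then resolve.
# All intent names, keywords, and entity lists are hypothetical.

INTENT_KEYWORDS = {
    "purchase": {"buy", "purchase", "order"},
    "locate_seller": {"where", "store", "shop"},
}

KNOWN_ENTITIES = {
    "condition": {"refurbished", "new", "used"},
    "product": {"macbook pro", "macbook air", "ipad"},
}

def detect_intents(message: str) -> set[str]:
    """Return every intent whose keywords appear in the message."""
    tokens = set(message.lower().split())
    return {intent for intent, kws in INTENT_KEYWORDS.items() if tokens & kws}

def extract_entities(message: str) -> dict[str, str]:
    """Pull out known entity values (the 'specificities') from the message."""
    text = message.lower()
    return {
        slot: value
        for slot, values in KNOWN_ENTITIES.items()
        for value in values
        if value in text
    }

def resolve(message: str) -> str:
    """Combine intents and entities to produce a resolution."""
    intents = detect_intents(message)
    entities = extract_entities(message)
    if "purchase" in intents and "product" in entities:
        condition = entities.get("condition", "any condition")
        return f"Stores selling a {condition} {entities['product']}: ..."
    return "Sorry, I couldn't work out what you need."

print(resolve("buy refurbished MacBook Pro"))
```

In a real system the keyword matching would be replaced by trained classifiers and entity extractors, but the shape of the pipeline is the same.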

As such, collecting data on visitors, leads, and customers is imperative to figuring out their intents, and vice versa. When someone interacts with a model, the system integrates that interaction and uses it to inform future interactions; in this way, the chatbot is continually collecting data on intents. You can then translate these more robust models into domains other than the one for which you originally built them; however, you’ll still need humans to inform and tweak the models so that they’re more applicable to the new domain.
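As a rough illustration of that feedback loop, the sketch below simply logs each handled message and its predicted intent as a new training example. The file name and record fields are hypothetical placeholders.

```python
import json

LOG_PATH = "interaction_log.jsonl"  # hypothetical location for collected examples

def log_interaction(message: str, predicted_intent: str) -> None:
    """Store one conversation turn so it can inform future training runs."""
    record = {"text": message, "intent": predicted_intent}
    with open(LOG_PATH, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")

def load_examples() -> list[dict]:
    """Read back the collected examples to retrain or adapt a model."""
    with open(LOG_PATH, encoding="utf-8") as f:
        return [json.loads(line) for line in f]
```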

Companies like Google and Amazon employ hundreds of people to build out thousands of chatbot models that attempt to discern intent and follow through on finding resolutions. That’s behind the scenes. What we end up with are applications like Google Home and Alexa. Most people think their AI companion is one entity, one intelligence: just Alexa, Siri, or Cortana. In actuality, they’re a network of thousands of these chatbot models pulling from over a decade of customer data.

Turning Insight Into Action

Think about the questions your visitors, leads, and customers may ask. In the realm of customer service, this may be easy: Can I get a refund? Can you cancel my subscription? With regard to informational questions, what might your customer search for that your product or service could answer? The more intents you can discern, the more pathways and models you can create for your chatbot software to run and collect data on your visitors, leads, and customers.
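One hedged way to act on this is a simple intent-to-handler routing table: enumerate the intents you expect and attach a resolution pathway to each. The intent names and handler functions below are hypothetical placeholders.

```python
def handle_refund(message: str) -> str:
    return "I can start a refund for you. Which order is it for?"

def handle_cancellation(message: str) -> str:
    return "I can cancel your subscription. Can you confirm your account email?"

def handle_product_question(message: str) -> str:
    return "Here's how our product addresses that..."

# Each anticipated intent gets its own pathway (and its own stream of data).
INTENT_HANDLERS = {
    "request_refund": handle_refund,
    "cancel_subscription": handle_cancellation,
    "product_question": handle_product_question,
}

def route(message: str, classify) -> str:
    """classify(message) returns an intent label, e.g. from a trained classifier."""
    intent = classify(message)
    handler = INTENT_HANDLERS.get(intent)
    return handler(message) if handler else "Let me connect you with a human."
```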

Interview Highlights – What Chatbots Can and Cannot Do

(2:42) Dan: So first things first, what I wanted to get your perspective on is: what is possible with conversational interfaces for business today?

SR: There are two primary requirements for a system to be successful in conversational understanding. It needs to be able to understand the question first, and the way NLP researchers have solved that problem is to build classifiers to understand what the intent expressed in that question means. And by intent, that could mean whether this person is trying to buy a movie ticket, or she is trying to set a reminder, or she is trying to get a job interview, or she is trying to buy a car. First, we have to understand the intent expressed in a particular message that the user exchanges with the bot.

The second thing is to extract the entities mentioned in a question. So if it’s a car, the bot has to understand what kind of a car it is. Is it a Toyota Corolla or is it a Tesla Model X? Depending on the entity, you will frame your response in a different way. So the first thing is clearly understanding…every time you have a conversation, the bot understands the intents and entities that are present in the message, and that requires natural language processing.

The idea of machine learning is that, based on previous annotations or previous examples of a given text and what its intent is, the system automatically learns, through a process called training, to be able to predict the intents and entities for new examples that it has not yet seen. For a company to be successful in building conversational systems, they need to be really good at doing natural language processing and machine learning.
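Here is a minimal sketch of that training step, assuming a small set of hand-annotated (message, intent) pairs and an off-the-shelf scikit-learn text classifier. The example messages and intent labels are invented for illustration.

```python
# Fit an intent classifier on annotated examples, then predict on unseen text.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Previously annotated examples (the "annotations" Dr. Reddy mentions).
texts = [
    "I want to buy a movie ticket for tonight",
    "book two tickets for the 8pm show",
    "remind me to call the dentist tomorrow",
    "set a reminder for my meeting at 3pm",
    "I'd like to schedule a job interview",
    "can we set up an interview next week",
    "looking to buy a used car",
    "what's the price of a new Tesla Model X",
]
intents = [
    "buy_movie_ticket", "buy_movie_ticket",
    "set_reminder", "set_reminder",
    "schedule_interview", "schedule_interview",
    "buy_car", "buy_car",
]

# The "training" step: learn to map text to intent from the annotated pairs.
model = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
model.fit(texts, intents)

# Predict the intent of a message the system has never seen.
print(model.predict(["looking to buy a Tesla"]))
```

A production system would use far more data and richer models, but the principle is the same: annotated examples in, intent predictions for unseen messages out.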

(8:50) Dan: You talked about bots as kind of the Internet 3.0 [in which there will be] the ease of attaining your aim and…getting further to your intent faster…you see bots as that kind of intent-reaching evolution of the Internet itself.

SR: Exactly! So the way that I’d summarize it is that in 3.0, the bots should be able to complete tasks for you, and they should be able to summarize the information you need, as opposed to giving you lengthy documents which you’re supposed to read to decide what to do to complete the task or to summarize the information for yourself. So task completion and knowledge summarization are the essential elements of 3.0.

(11:09) Dan: Is it important to have a distinct agent that fulfills a specific intent…or [is it] the same machine system that can tackle a variety of those circumstances?

SR: Actually, it’s neither. The fundamental reason we have different agents doing different things is because natural language processing and machine learning are unsolved, and it is very difficult to understand natural language the way humans write it.

We don’t have a model that understands everything for every context. So what researchers and companies do is collect annotation data for a given task or for a given knowledge summarization objective, and they build a model for that particular annotation task, for a given intent and corresponding set of entities. That particular model can only do well based on the examples it has. So, for that reason, different companies focus on different intents and entities.

(18:20) Dan: If you work a bunch in a particular retail space answering a certain kind of question…that same intent of wanting to purchase something online…might work just as well for buying…anything else because the intents are similar.

SR: Yes. That’s a very good summary. [But] you can’t just expect to have a system trained on food to work on real estate without continuously monitoring and correcting the annotations. The system should be able to have an accuracy of 80% out of the box, but to get that extra 20% you need humans to provide feedback to the system on what it is doing incorrectly and move from that 80% to 100%.
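That human-feedback loop could look something like the hedged sketch below: predictions the model is confident about are accepted, low-confidence ones are routed to a person for correction, and the corrected examples are folded back into training. The confidence threshold and the `ask_human` callback are assumptions; `model` is any classifier with `predict_proba`, such as the pipeline sketched earlier.

```python
def review_and_retrain(model, new_domain_texts, ask_human, texts, intents,
                       threshold=0.7):
    """Flag low-confidence predictions for human correction, then retrain."""
    for text in new_domain_texts:
        probs = model.predict_proba([text])[0]
        best = probs.argmax()
        if probs[best] >= threshold:
            label = model.classes_[best]   # trust the model's prediction
        else:
            label = ask_human(text)        # a person supplies the correct intent
        texts.append(text)
        intents.append(label)
    model.fit(texts, intents)              # retrain on the corrected data
    return model
```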

Related Chatbot and Conversational Interface Interviews

At Emerj, our mission is to become the go-to source for AI business insight. Below is a selection of some of our other related interviews about chatbots and conversational agents.

 

Header image credit: CELI Language Technology
