Pseudo-AI – When “AI” is Really a Human, but That Might be Okay

Daniel Faggella

Daniel Faggella is Head of Research at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and leading enterprises, Daniel is a globally sought-after expert on the competitive strategy implications of AI for business and government leaders.

"Psuedo-AI" and When It's Okay for Humans to be Behind the AI

One of the biggest problems facing business executives when it comes to adopting AI is determining whether a company is truly leveraging AI or simply using the term as a marketing strategy. In previous articles, we have discussed rules of thumb for assessing the authenticity of AI companies, based on insights from hundreds of interviews with industry experts and AI researchers.

In general, one might conclude that any company that does not satisfy these guidelines is fraudulently claiming to leverage artificial intelligence, or at the very least is being duplicitous. However, fraud and duplicity are not always black and white.

A company might plaster “AI,” “machine learning,” and “automate” all over its website and LinkedIn description while not, in fact, using machine learning in its product. This might indeed be a kind of false advertising when it comes to explaining the means by which the company garners results for its clients. That said, the company might actually garner results for its clients, regardless of whether or not it’s leveraging artificial intelligence to do it.

In this article, I’ll explore some of the dynamics of “pseudo-AI” and “human training wheels”, and why it’s often necessary for AI vendor companies to use human effort instead of machine learning to help deliver on the value of their product.

Some AI Vendors Use “Human Training Wheels” for Algorithms

In some cases, the line between “automated” and “manual” can be blurry. How much artificial intelligence needs to be involved in the process before we’re comfortable calling the product or service artificial intelligence? In many cases, AI vendors claim to offer AI software that is actually part AI, part traditional computing (and PhDs still debate what constitutes AI, because its definition will continue to evolve as the technology advances). When is this so-called “pseudo-AI” ethical?

Although it may seem like semantics, these questions can matter when executives need to choose a vendor for a particular service. A relatively infamous example of this gray area is a company called x.ai. According to its website, the company offers a “ridiculously efficient AI software [that] solves the hassle of meeting scheduling.”

On the front end, the process seems completely autonomous. The user only needs to send an email to another person requesting a meeting, CC the AI assistant, and the AI assistant will take over. However, a former x.ai employee recounts that the company actually employs people to manually receive and reply to emails as the “AI assistant,” and that those people are the ones arranging the requested meeting.

If this is true, the service is clearly not AI, as it requires the active participation of humans to perform key functions. However, this does not automatically mean x.ai is a fraudulent company, at least in terms of delivering on the results it claims to provide to its clients. It also doesn’t mean that artificial intelligence is entirely absent from the process.

It seems rather likely that there is a machine learning system in place and “in training” at x.ai, with humans helping to cover edge cases and helping to train the algorithms. AI almost always requires humans somewhere in the process of getting it to “make the decisions” it is intended to make. Machine vision, natural language processing, and fraud detection software, for example, all require people to label data before it’s fed into a machine learning algorithm.

For instance, a chatbot built for customer support needs to “understand” when a customer is asking for a refund and when they are asking to cancel their subscription. The only way it comes to understand the difference is if people label thousands of customer support tickets as “refund request” and thousands of others as “cancellation request,” and then run those labeled tickets through the machine learning algorithm behind the chatbot. At the risk of personifying the chatbot more than I already have, it needs to “see” thousands of examples of various types of support tickets before it can tell them apart.
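To make that labeling-and-training step concrete, here is a minimal sketch using a handful of made-up tickets and a simple scikit-learn pipeline. It is purely illustrative, not x.ai’s or any vendor’s actual implementation; in practice, thousands of labeled examples per category would be needed:

```python
# Illustrative only: a tiny intent classifier trained on human-labeled tickets.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# In reality there would be thousands of human-labeled tickets per category.
tickets = [
    "I'd like my money back for last month's charge",
    "Please refund my most recent order",
    "I want to cancel my subscription effective today",
    "How do I stop my plan from renewing?",
]
labels = [
    "refund request", "refund request",
    "cancellation request", "cancellation request",
]

# TF-IDF features plus logistic regression: a common baseline for intent classification.
model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(tickets, labels)

# A new, unlabeled ticket; the model should lean toward "refund request".
print(model.predict(["Can I get a refund on this invoice?"]))
```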

x.ai may be training its AI-based scheduling software with this method, collecting large volumes of data in the form of emails. When the human “AI assistant” gets involved in the process, the email back-and-forth they have with the client’s prospect, lead, or podcast guest is likely recorded in x.ai’s system and fed into the software’s machine learning algorithm. It’s very likely that the people at x.ai label the responses they give and the replies they get with concepts such as “missed appointment follow-up” or “reschedule request.”

Eventually, x.ai’s machine learning software would be able to handle the scheduling autonomously, at least to some degree. It’s likely that when the software is unable to properly reply to a client’s prospect, lead, or podcast guest, it routes the email chain to a human employee at x.ai.

Training a machine learning model is a required step. As a result, the question becomes: is it ethical to advertise AI before the model is fully trained? If at some point the AI will route the email chain to a human employee, and there will inevitably be some interactions it can’t handle on its own, then again, how much artificial intelligence needs to be involved in the scheduling process before x.ai can ethically say it’s doing AI?

Examples of How Humans Gradually Train Algorithms

“Human training wheels” isn’t limited to booking appointments over email. Here are a variety of examples of how humans might be used to make an AI system more capable over time:

  • A medical diagnostics company might use feedback from doctors on a decision (for example, whether a tumor is likely to be malignant or benign), gradually training the algorithm to refine its decision-making with those human inputs.
  • Uber might use the driving data (acceleration, speed, turning, camera data, lidar data, etc.) of its safest drivers to train its self-driving cars to navigate those same roads (this would require Uber to instrument some of its drivers’ cars with such sensors and equipment).
  • A customer service chatbot application might send unusual chat requests to human customer service agents, allowing those human labels (e.g. identifying the unusual message as a “refund request”) and the human responses (e.g. the email template used) to train the algorithm to make better decisions on its own over time (see the sketch after this list).
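As a rough illustration of that last example, here is a hypothetical sketch of the escalation path. The function names and the convention that `classify` returns `None` for unrecognized requests are my own assumptions, not any vendor’s actual design; the point is simply that the human agent’s label and reply get captured as training data:

```python
# Hypothetical human-in-the-loop escalation for a support chatbot (illustrative only).
training_examples = []  # (message, human_label, human_reply) tuples saved for retraining

def handle_message(message, classify, ask_human):
    result = classify(message)          # returns None when the bot doesn't recognize the request
    if result is not None:
        label, reply = result
        return reply                    # routine request: the bot answers on its own
    human_label, human_reply = ask_human(message)                  # unusual request: escalate
    training_examples.append((message, human_label, human_reply))  # the agent's work becomes training data
    return human_reply

# Toy stand-ins for the model and the human agent:
classify = lambda msg: None  # pretend the bot has never seen this kind of request
ask_human = lambda msg: ("refund request", "We've issued a refund to your original payment method.")

print(handle_message("My order never arrived and I was still charged", classify, ask_human))
print(training_examples)  # one new labeled example, ready for the next training run
```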

Even big companies recognize the necessity of this step. Facebook and Google do the same thing for some aspects of their operations. For example, Facebook uses people in the Philippines to manually screen out sexual, violent, or otherwise inappropriate content from live stream videos. Of course, the work doesn’t stop there. These people are also training machine learning software to screen for this content autonomously and more accurately.

Copious amounts of data are required to properly train ML models to proxy human behavior, and even Facebook and Google, with their billions of users, still do not have enough data for a fully autonomous AI.

And so a company like x.ai is likely legitimately leveraging artificial intelligence at least in part, and it intends to involve AI in its scheduling service more and more as it continues to train on the data it gets from human employees interacting with people via email. At Emerj, we think this is acceptable, and it doesn’t make the company fraudulent.

Taking off the “Human Training Wheels” – Expanding the Role of AI

When a new machine learning solution needs to be trained from scratch, essentially all of the decisions and judgements must be made by a human being. For example:

  • A machine learning system cannot diagnose bone fractures without thousands (or tens of thousands) of labeled images of broken and unbroken bones. All of this initial work is done by humans.
  • A machine learning system cannot reply to customer support tickets without tens of thousands (or hundreds of thousands) of labeled support tickets. All of this initial work is done by humans.

Hence, the “bootstrapping” of an AI system often involves entirely human labor.

Over time, a machine learning system may begin making some initial judgements on its own, monitored by humans.

We can use the example of an AI system for replying to customer support email. After a certain amount of training data is used to train the algorithm, the system might begin labeling incoming support tickets as “refund request” or “product-related question” or “delivery issue”, based on past human-labeled instances of similar emails and replies.

In this early phase, it is unlikely that the algorithm would be deployed in a customer-facing application. Instead, it would simply be making judgments in a test environment where humans could determine its ability to label or reply to email tickets properly. At this stage, the machine will likely make plenty of poor judgements and will have to be corrected by more human input (e.g. informing the machine that a specific customer email was not a “refund request” but should instead have been labeled a “billing problem”).

Over time, and with continued training, a system might be trained to handle a certain subset of all email support tickets, possibly within a certain range of confidence. For example, if the machine is 90%+ confident that an email ticket is about a certain issue, it will reply on its own. Below 90% confidence, it will only suggest a label (what the algorithm “thinks” is the right category), and a human will either confirm or correct that label before a reply is sent back to the customer – training the algorithm even more in the process.
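As a rough illustration of this confidence rule, here is a sketch that reuses the `model` pipeline from the earlier classifier example. The 90% threshold, the review queue, and `send_auto_reply` are illustrative assumptions, not any particular system’s design:

```python
# Illustrative confidence-gated triage, reusing `model` from the earlier classifier sketch.
import numpy as np

CONFIDENCE_THRESHOLD = 0.90
human_review_queue = []  # (ticket, suggested_label, confidence) awaiting human confirmation

def send_auto_reply(ticket, label):
    print(f"Auto-replying to {label!r} ticket: {ticket!r}")

def triage(ticket):
    probs = model.predict_proba([ticket])[0]       # class probabilities from the pipeline
    best = int(np.argmax(probs))
    label, confidence = model.classes_[best], float(probs[best])
    if confidence >= CONFIDENCE_THRESHOLD:
        send_auto_reply(ticket, label)             # confident enough: the machine replies on its own
    else:
        # Not confident: the machine only suggests a label; a human confirms or corrects it
        # before anything is sent, and the correction is kept for the next retraining run.
        human_review_queue.append((ticket, label, confidence))

triage("Please refund my most recent order")
triage("My package arrived damaged and I also want to change my plan")
print(human_review_queue)  # low-confidence tickets end up here for human review
```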

At this time, the “human training wheels” almost come off – the machine can deliver a result without ongoing human monitoring. That being said, the wheels never completely come off (at least given the state of the technology today).

Even a well-trained AI system will need some degree of ongoing human guidance. We’ll keep with the example of a customer service email application:

  • New edge-cases will continuously arise, and humans will have to train AI systems to adapt to them. An eCommerce store that used to sell clothes and now begins to also sell furniture will have many new kinds of customer support messages, and humans will have to train a customer support system on these new instances.
  • Humans will need to regularly spot-check the machine’s responses, possibly by looking into the kinds of issues where customer satisfaction is low and determining whether or not the machine is making new kinds of mistakes or miscategorizations (see the sketch after this list). Humans would then need to retrain the system to avoid those mistakes in the future, and monitor its progress.
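To make the spot-checking idea concrete, here is a minimal, purely illustrative sketch: audit a sample of the machine’s replies, compute an error rate per predicted category, and flag categories that look like candidates for retraining. The audit records and the 40% threshold are assumptions for illustration, not recommended values:

```python
# Illustrative spot-check: flag categories where the machine's replies are often wrong.
from collections import defaultdict

# Each audit record: (predicted_label, was_the_machine_correct) as judged by a human reviewer.
audits = [
    ("refund request", True), ("refund request", True), ("refund request", False),
    ("delivery issue", False), ("delivery issue", False), ("delivery issue", True),
]

ERROR_THRESHOLD = 0.40  # flag a category if more than 40% of sampled replies were wrong

counts = defaultdict(lambda: [0, 0])  # label -> [errors, total]
for label, correct in audits:
    counts[label][1] += 1
    if not correct:
        counts[label][0] += 1

for label, (errors, total) in counts.items():
    error_rate = errors / total
    if error_rate > ERROR_THRESHOLD:
        print(f"Flag {label!r} for retraining: {error_rate:.0%} of sampled replies were wrong")
```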

So, an application like x.ai will always need some kind of human oversight and intervention to ensure that the application works and to keep it up to date.

Initially, the division of labor in a machine learning solution will often be 100% human, 0% machine. Over time, and with lots of training, machines might handle 95% or more of the work, with humans playing the role of corrective instructor and troubleshooter.

A company that lies about using AI isn’t even trying to move the human efforts into the realm of what a machine can handle. A legitimate AI vendor is constantly aiming to improve quality and efficiency by turning that human effort into a training mechanism, into an asset, to make their product or service better.

Some AI Vendors Are Just Lying

On the other hand, it is equally possible that a company will claim its service involves AI while it actually just has people do the work behind the scenes ad infinitum. These companies use the term “AI” to seem more credible and to justify a high price for their service, but they are not leveraging artificial intelligence at all. They’re lying, and that should not be encouraged.

We’ve said it a million times: a vendor company doesn’t need to be using artificial intelligence to add value to its clients. This means there’s rarely a need to look only for AI-related vendors, unless your desired solution strictly requires AI (machine vision, for example). That being said, a company that overtly lies in its marketing messaging (by pretending to use AI when it doesn’t) is not building much trust. What else could it be lying about?

What Business Leaders Should Keep In Mind

That said, the fact that a company may overstate its AI abilities is not as relevant as its ability to deliver results.

Take the x.ai example above. The service promises to schedule meetings efficiently. If using the x.ai service actually does cut down on the time and effort required to schedule meetings and at the same time frees up personnel to do other things, the company is delivering on its promise and providing a positive return on investment to its clients.

If it gets the job done, it doesn’t really matter if the company is leveraging AI. On the other hand, if the company lies about the results it can deliver, it is fraud, even if it does leverage AI.

Until machines can extrapolate with the level of understanding of the world that humans have, they will need continuous training with more data. Machines do not yet understand context, and companies that sell ML systems often walk a fine line between overstatement and fraud to get the business they want.

While executives do need to be cautious about assessing AI companies, it is important not to lose sight of what’s most important: ROI. The ultimate goal of adopting AI in business is to reach goals faster, better, and cheaper. If an AI company delivers results as promised and as expected, it doesn’t really matter whether they used AI to do it or not.

 

Header Image Credit: Mario Einaudi Center for International Studies – Cornell University
