
Most of the time when we have requests for speaking engagements here at Emerj, they’re from business leaders. At the time this article was published, I had just come back from a presentation at National Defense University in Washington, DC. Presenting there was unique in many regards. Obviously, the use cases for tanks and submarines are quite different from those for drug development or selling more products off retail shelves.
What surprised me most were the commonalities. As it turns out, leaders in the military ask similar questions, and I’ve seen the same from leaders in government when we’ve done work with the World Bank and spoken at the United Nations. The most common questions are often some permutation of, “Can I apply AI to ‘X’ problem,” whatever the problem may be, or, “Can AI do ‘X’?”
It’s a general question around, “I know what my priorities are. Can AI help me with that? Can machine learning be applied in a way that would help me achieve that goal or complete that task?” It’s difficult for people who may not have technical training in AI to understand what could be applicable and what might not.
As it turns out, most business leaders, and certainly most leaders in government, are not looking to go back to school and get a PhD in artificial intelligence or computer science. Instead, they’re going to have to educate themselves enough to understand where AI fits into their strategy.
We’ve looked at applications of machine vision in healthcare, applications of predictive analytics in finance, and all sorts of other permutations of use cases. These use cases are often a good first step for people to figure out how to apply AI, but the second area is a fundamental understanding of AI and machine learning itself: how it works and when it can be applied.
The purpose of this article is to give leaders an intuition as to whether or not AI can be applied to a specific problem. When leaders look at a problem and say, “Could machine learning help us do this better? Could artificial intelligence help us achieve this goal or objective,” my goal is to give them a better intuition as to whether the answer is a definite yes, a definite no, or somewhere in between. By the end of this article, readers should have a much healthier intuition about these points than they had before.
How to Determine Where AI Can be Used in Business
The process for discerning whether or not artificial intelligence can be used to solve a specific problem breaks down as follows: four required steps and a fifth, optional one. Of course, this is simplified. Here are the steps in order:
1. Bring Together Subject Matter Experts and Data Scientists
Step one is to get subject matter experts, people who really understand the business problem or the functional problem that a business is looking to solve, and data scientists in the same room with a whiteboard. This is not something leaders can avoid; it is a required step. I’ll describe why both parties are really critical for making AI happen later in the article.
2. Determine How Humans Make the Decision the AI Needs to Make
For example, if I want an off-road vehicle to drive itself, what are the decisions that it needs to make? It needs to assess its surroundings, and then it needs to accelerate, turn the wheel, and go somewhere. That’s the decision: “What is going on,” and, “Where do I go? What do I do?” In medical diagnostics, the decision or judgment is, “Is this image of a tumor cancerous or not cancerous? Is it malignant or benign? One or the other.” That’s a decision.
The second point here, which is really the first question to ask both the data scientists and the subject matter experts, is: what information do humans use to make this decision? I’ll also answer that later in the article.
3. Determine How the Decision-Making Process Can be Turned Into Data
How can those bits of information that humans use to make a decision be turned into data? Again, I will describe this later in the article.
4. Determine How Much of the Decision-Making Process Can be Tracked or Stored
The next step is to go from “What information do humans need to make this decision?” to “How do we transform that into something quantitative and trackable?” and then to ask which of those quantifiable bits of information the business can realistically store, track, and collect in a reliable format.
5. Optional: Determine the Non-Human Data That a Machine Can Use to Make the Decision
This is not critical to leaders’ understanding of the basics, but we will discuss it further in the article.
Example 1: Lead Scoring
Rather than staying in the abstract, let’s get into some actual examples within a potential business. The first example is lead scoring. In any given business that sells things, leaders need to prioritize the activities of their salespeople; otherwise, those salespeople might not make the most of their time. Lead scoring is a feature within a CRM or some kind of email marketing software that businesses can use to prioritize leads based on how likely someone is to buy and what that someone is worth to the company.
Determine How Humans Make the Decision the AI Needs to Make
First, subject matter experts and data scientists must determine how humans make this decision and determine the worth of a lead. If a business leader sat their salespeople down and asked, “When you look at 20 leads in your inbox kicking around in the CRM, how do you decide who you should call first?” the salespeople are going to provide a fistful of information. Really good salespeople will probably have a very healthy intuition around which points of data matter most for lead scoring, at least in how they do it mentally. These salespeople, the subject matter experts, will relay this information to the data scientists.
They might say, “I really look at the size of the company. Really big companies tend to close really big deals, so I’m always going to call the big folks first, generally speaking, unless some of the other factors are really skewed.” They might also ask what industry the customer is from. They might know that insurance companies and banks are very, very likely to close quickly and to buy very big products, so they might prioritize financial services and insurance folks. They might say that eCommerce and retail people, while they’re often interested in the company product, tend to historically have a much lower likelihood of closing for whatever reason.
They might also look at the customer lifetime value of similar companies. A salesperson might look at a company and say, “This is a furniture business that does $400 million a year in revenue. Have we had any mid-sized retailers of this kind in the past? What other kinds of brick and mortar retailers have we sold, and generally what was their customer lifetime value? How easy or hard was it to sell them?”
They might also use the recency of the lead. “How recent was it that this person entered our leads system?” The answer could be six months ago, and that lead has been sitting around in the CRM doing nothing. The answer could be last week; they just entered their information and the company has had a couple points of contact with them.
Determine How the Decision-Making Process Can be Turned Into Data
These are all bits of information that are used to inform how a human might score a lead. Now, leaders have to ask which of these might be quantifiable. In this example, a few of them are.
The size of the business one is potentially trying to sell to is something that can be proxied by how many employees it lists on LinkedIn. Alternatively, leaders might purchase data from a data vendor that can tell them the revenue of these companies.
There are many different ways of categorizing a company by industry, but hypothetically leaders could simply pick one and see where the company they’re trying to sell falls.
The customer lifetime value of similar companies to the one the business is trying to sell to might be quantifiable, but the business would have to have a historical record of companies it sold to, organized by size and industry. That record would also have to include what percentage of those companies closed and what their customer lifetime value was. Hopefully, the business has that information on hand.
Finally, the recency of the lead is most certainly quantifiable, but there are other factors that humans might use that may not be quantifiable. When asked how they prioritize leads, a salesperson might say, “I’ve got a pretty good instinct as to how excited this person is about buying our product,” or, “I’ve got a pretty good instinct about how capable this person is going to be to sell their boss on being able to carve out the budget to buy our product,” or, “I’m pretty optimistic,” or, “I’m pretty pessimistic about this department having a strong need for our product.”
Determine How Much of the Decision-Making Process Can be Tracked or Stored
Salespeople will often use their intuition. Sometimes it may be right, but it is often not quantifiable. This kind of feel in a salesperson’s brain is often not going to be something that is really data-trackable. This becomes one bit of information that might be hard to put into a machine.
However, it seems the bulk of how salespeople prioritize leads can be tracked, quantified, and fed into a machine. That bodes very well for the proposition of leveraging AI for that function. Essentially, if most of the information that humans use to make the decision can be easily tracked and fed into a machine, then there is a greater likelihood of being able to use AI for the function.
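To make this concrete, below is a minimal sketch of how the quantifiable signals above (company size, industry, the lifetime value of similar past customers, and lead recency) could feed a simple lead-scoring model. The field names, sample values, and choice of a logistic regression are illustrative assumptions rather than a prescription; the point is simply that each signal the salespeople named becomes a column a model can learn from.

```python
# Minimal sketch: scoring leads with the quantifiable signals described above.
# All field names, sample rows, and the model choice are illustrative assumptions.
import pandas as pd
from sklearn.compose import ColumnTransformer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import OneHotEncoder, StandardScaler

# Hypothetical historical leads with known outcomes (closed or not).
history = pd.DataFrame({
    "employee_count":       [12000, 300, 45, 8000, 150, 2500],   # proxy for company size
    "industry":             ["insurance", "retail", "ecommerce",
                             "banking", "retail", "insurance"],
    "similar_customer_ltv": [250_000, 18_000, 9_000, 190_000, 15_000, 120_000],
    "days_since_lead":      [7, 180, 95, 14, 60, 21],            # recency of the lead
    "closed":               [1, 0, 0, 1, 0, 1],                  # what we want to predict
})

features = ["employee_count", "industry", "similar_customer_ltv", "days_since_lead"]
preprocess = ColumnTransformer([
    ("industry", OneHotEncoder(handle_unknown="ignore"), ["industry"]),
    ("numeric", StandardScaler(), ["employee_count", "similar_customer_ltv", "days_since_lead"]),
])

model = Pipeline([("prep", preprocess), ("clf", LogisticRegression())])
model.fit(history[features], history["closed"])

# Score a new lead: the predicted probability of closing becomes the lead score.
new_lead = pd.DataFrame([{
    "employee_count": 5000, "industry": "banking",
    "similar_customer_ltv": 200_000, "days_since_lead": 3,
}])
print("Lead score:", model.predict_proba(new_lead[features])[0, 1])
```

In practice, a data scientist would train on the business’s own historical CRM records rather than a handful of made-up rows, but the shape of the exercise is the same.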
Example 2: Military Vehicles – Driving Off-Road
The next example deals with an off-road military vehicle that’s going to be used to deliver supplies in Afghanistan. Of course, leaders must first ask, “What information do humans use to get this job done?” Specifically, the decisions and judgments the driver is looking to make are how and when to turn, to accelerate, to back up, to go forward, to go backward, and whatever else they need to do to drive the vehicle from point A to point B.
Determine How Humans Make the Decision the AI Needs to Make
How do human drivers do that? One might say, “Well, they just look around with their eyeballs and then they put their foot on the pedal.” There’s a little bit more to it than that.
In this situation, there might be satellite maps that help drivers navigate around obstacles to make sure that they can get things to where they need to get them. Of course, drivers also need some kind of knowledge of the ending destination. That might involve GPS.
Determine How the Decision-Making Process Can be Turned Into Data
Leaders could then ask, “Which of this information can be collected in a reliable format and turned into data?” Vision, or camera data, is relatively easy to track, although there are many challenges with it. In this situation, one could mount 3D cameras around the outside of a vehicle to collect a full view of what is around it.
In addition, satellite map data can be fed into a machine. That said, reading that map is actually somewhat challenging for the machine on its own. Data scientists need to find a way to quantify that map, and quantifying how a human distinguishes a cliff from a river, or one kind of terrain from another, is not easy. This is a complicated machine vision task, but it’s hypothetically possible. By contrast, the GPS information about the destination is not difficult to quantify.
Determine How Much of the Decision-Making Process Can be Tracked or Stored
This situation is feasible for a machine learning system, but it is extremely complicated, particularly when it comes to the visuals. The machine vision system isn’t being trained to identify when a dog enters the field of a camera. Tasks like that are somewhat simple, particularly if there is a uniform background. But training a machine vision system to distinguish between a cliff, a boulder, and a plateau of some kind is much harder.
Current machine vision systems are fed data about paved roads that have stop signs and traffic lights, not cliffs one could drive off of or giant boulders lying in the middle of the sand. These systems are not trained for vehicles that are going to be driving over sand dunes. While that data is quantifiable, it is difficult to train a machine vision system to detect the myriad things that a driver would hypothetically run into in the desert.
This is because one isn’t just programming a machine to count how many times a red ball enters the field of a camera, but to identify all the obstacles in a desert and then send the commands to the vehicle to drive around them. Training a machine vision system to do this would take a tremendous amount of time and data.
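As a small illustration of that gap, the sketch below checks whether a widely used pretrained object-detection model even has labels for these desert obstacles. It assumes torchvision 0.13 or later and a model pretrained on the COCO dataset; it is meant only to show that off-the-shelf, road-scene vocabularies do not cover this terrain, not to suggest this particular model for the task.

```python
# Minimal sketch: does an off-the-shelf detection model know about desert obstacles?
# The model choice (Faster R-CNN pretrained on COCO) is an assumption for illustration.
from torchvision.models.detection import FasterRCNN_ResNet50_FPN_Weights

# The pretrained weights ship with the list of categories the model was trained on.
categories = FasterRCNN_ResNet50_FPN_Weights.DEFAULT.meta["categories"]

for label in ["car", "stop sign", "traffic light", "cliff", "boulder", "sand dune"]:
    print(f"{label!r:15} in pretrained label set: {label in categories}")

# Road-scene labels are present; cliffs, boulders, and dunes are not. A system like
# the one described above would need a large, newly labeled off-road dataset first.
```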
Examples of When AI Can’t be Used
Complicated Diagnostics
AI isn’t applicable to every situation. One example of this might be the diagnosis of some particular brain disorder. If a child shows up to the hospital with a runny nose and a fever but nothing else wrong with them, there are certain proxy cues from which a doctor might say they have a cold or the flu. The doctor might also say that the child has a particular infection based on their white blood cell count. There are some diagnostic tasks that really only require a certain number of data points in order for them to be fully understood and diagnosed, at least under most circumstances.
Determine How Humans Make the Decision the AI Needs to Make
There are other cases where diagnostics is a much more intuitive task. Suppose there is a particular bacterial infection that is difficult to diagnose. The doctor needs to collect many data points, but nothing in terms of temperature or blood sample data is really conclusive about this particular infection.
Perhaps to diagnose this infection, the doctor also has to ask the parents, “Has your child been experiencing drowsiness? Are they more irritable than they usually are? Have they complained about tasting foods differently? Have they done this, have they done that?”
The doctor might ask a wide variety of questions to feel out whether the child has this infection or another one. If they are less familiar with the infection, they might need to consult the medical literature for possible diagnostic criteria.
Determine How the Decision-Making Process Can be Turned Into Data
A diagnosis that involves a lot of questions, consulting, and intuition is unlikely to be done by a machine with any degree of confidence. Data scientists will struggle to quantify the answer to a question such as “Has your child been more irritable now as compared to earlier?” The parents could say, “He’s always irritable,” which is an answer that is extremely difficult to quantify and doesn’t tell the doctor one way or another if the child is experiencing symptoms of the infection.
In addition, the doctor has to read the emotions of the parents to get a sense of how truthful they’re being about some of these very personal questions. Tasks such as these are exceedingly intuitive, and the data points the doctor is using to make their diagnosis, their decision, involve so much context that it’s highly unlikely a machine learning system would be able to make that decision.
Determining Missile Threats
When I was at the United Nations, we spoke about the security applications of AI, and someone had asked, “Right now we have analysts and folks who are combing the skies and figuring out whether or not certain actions being made by a certain military are likely to result in a military attack or are only likely to be innocent movements that won’t necessarily lead to an attack. Can AI be applied to help discern what is an attack versus what is not an attack?”
This is an example where there may be some application of artificial intelligence and machine learning, but there are many barriers to entry because the information needed to make the distinction between a threat and a non-threat is not easily quantifiable. In addition, the frequency of missile threats is very low.
Determine How Humans Make the Decision the AI Needs to Make
A human making the decision might ask, “How has this military historically engaged with its enemies?” That said, missile threats are a relatively recent phenomenon, and so again, the frequency of this data is extremely low. In order to develop the AI, one might need to proxy that data by looking at how many boats one particular navy sends when they go on their patrol missions. An admiral might be able to provide that number.
The human decision-maker would also need some context on politics. They could ask all sorts of questions: “Is the country having an issue with another country? Is there a trade war going on? Is there some kind of oil price conflict between the two countries? Is there some kind of tension between the leadership of these two countries for some reason? Is one of them trying to uphold human rights that maybe the other one is violating in some way?” These political factors are really difficult to pin down, making them difficult to quantify.
Determine How the Decision-Making Process Can be Turned Into Data
It’s hypothetically possible to build a model that would estimate that decision-making process, but it would be extremely difficult. Some countries are not going to act in a predictable way. There’s no historical record of every country attacking every other country; no country has ever attacked every other country. Similarly, when it comes to proxying political factors, while there might be some general score from social media and other sources that could gauge the animosity or friendship between countries at a broad political level, that data doesn’t necessarily transfer to naval conflicts in particular areas.
This kind of detection of a military threat or not is better done by humans reading dashboards and using their wider context to make the decision because the data just isn’t quantifiable.
Collecting Non-Human Data
Sometimes the way humans make a decision is very different from the way a machine would make it. Human beings do not use radar and lidar to drive their cars, for example. However, self-driving cars leverage lidar as an additional sensor input to help them gauge distance and proximity to different kinds of objects and to clarify their visual information a little better. Lidar is a layer of information added on top of what we think of as visual data. Self-driving cars use these other data sources to get a sense of their physical environment beyond the senses that humans have.
It will often take data scientists and subject matter experts coming together to figure out the non-human data bits that could help inform the decision-making process for the machine. In another example, a human therapist might relay to data scientists the proxies for determining whether someone is getting more depressed or not. An application that aims to determine how depressed someone is might consider where they are at any given time. It might consider how frequently they are at different physical locations, based on their phone. It might consider the frequency of their text messages and to whom they’re sent. It might also consider the general emotional sentiment of those text messages.
No therapist is going to say they use the general sentiment of a client’s text messages as a proxy for how depressed they are, however. That said, a data scientist and a subject matter expert sitting together in the same room might find these unique pockets of data that a business could collect. This data could give them an additional data point to help a machine make this decision.
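As one hedged illustration of what such a non-human data point could look like, the sketch below scores the average sentiment of a handful of made-up text messages using NLTK’s off-the-shelf VADER analyzer. The messages, the choice of VADER, and the idea of treating the average as a single feature are all assumptions for illustration; any real use would obviously require consent and careful handling of such data.

```python
# Minimal sketch of the "non-human data" idea: scoring the overall sentiment of
# (hypothetical, consented) text messages as one extra signal for a model.
import nltk
from nltk.sentiment.vader import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # lexicon VADER needs to score text
scorer = SentimentIntensityAnalyzer()

# Made-up messages standing in for a real message history.
messages = [
    "Sounds great, see you Saturday!",
    "I don't really feel like going out anymore.",
    "Sorry, cancelling again. Too tired.",
]

# Compound score per message runs from -1 (very negative) to +1 (very positive).
scores = [scorer.polarity_scores(m)["compound"] for m in messages]
avg_sentiment = sum(scores) / len(scores)
print(f"Average message sentiment: {avg_sentiment:+.2f}")

# Combined with location variety, message frequency, and similar signals, this
# average would be one column in the feature set, not a diagnosis on its own.
```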
Sometimes this involves thinking outside the box of how subject matter experts do it now and involves pulling in new data sources that would help a machine understand and solve the problem in a way that maybe humans don’t even use right now.
Reiterating the Process
Get subject matter experts and data scientists in the same room with a whiteboard. If a business leader decides they want to apply AI in a specific area and is committed to it, they should bring in a data scientist. When a business gets serious about AI adoption, it’s very important to go through this first step of getting the subject matter experts together with a data scientist in the same room.
Then, ask, “What is the information that humans use to make this decision or judgment now?” Once that is figured out, determine how that information can be turned into quantifiable data. After that, data scientists can help to figure out how much of that data can be tracked and stored reliably in a way that will allow it to feed into a machine learning system. It then might also help to think about if there is any non-human data that the business can collect to better inform the system.
If a business leader goes through these steps and finds that the human decision-making process is not easily quantifiable and turned into data, then they are looking at a situation where they probably have a low likelihood of applying machine learning.
Simply having this sense of whether or not AI might work for a particular business problem lets a leader escape the trap of thinking that either everything or nothing could use AI.
Header Image Credit: monitorulcj