AI Integration Challenges – Pitfalls to AI Adoption in the Enterprise (Part 2 of 3)

Daniel Faggella

Daniel Faggella is Head of Research at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and leading enterprises, Daniel is a globally sought-after expert on the competitive strategy implications of AI for business and government leaders.

In this second installment of the “Pitfalls to AI Adoption in the Enterprise” series, we’re going to talk about underestimating the integration needs of artificial intelligence and machine learning.

Our first article in this series covered Pitfall #1: Avoiding AI Novelty.

This second installment covers what many of our PhD podcast guests consider to be the biggest hurdle to AI adoption in the enterprise: integration challenges. Bringing AI into an existing business is a challenging task, and it raises a very specific set of concerns.

So that you don’t have to learn these lessons the hard way, I’ve broken down the critical integration challenges of AI:

Pitfall 2: Integration Challenges

We’ll begin our conversation here by talking about how a run-of-the-mill IT integration works. There’s no better example of this than something that recently came up at an event where I was presenting: at the time of writing this article, I am just getting back from speaking at United Nations headquarters.

The UN is very interested in the security implications of artificial intelligence, and they actually had us do a deep fake video of one of the United Nations directors.

After the demo, I had a number of conversations with diplomats and law enforcement leadership. I had two conversations in particular with people who essentially asked how AI could be plugged into a video.

Now, anyone who is familiar with deep fakes will know that there’s a lot more going on than plugging artificial intelligence into a video, but it’s often presumed that anything involving AI is just like any other tech: you install it, you integrate it, and then it does what it needs to do.

In order to make a deep fake work we have to train an algorithm on video footage of the person that we’re aiming to model.

We also need a person to be the model, someone who is going to “wear” this AI mask of another person’s face, and we need to train the algorithm on their face as well. We need to match up image by image for different expressions and different parts of the face.

It’s a reasonably complicated process. It’s not a snap of the fingers.

Integrating Traditional IT Systems

General, run-of-the-mill IT concerns might start with hooking up all of our systems. We need to set up the interface and the elements to suit the needs of our business. If we’re building a CRM system, we might want it to have the right fields and the right steps for our particular company’s sales process.

We’re hard-coding some rules. There’s potentially some complexity there, but in general, we’re doing what is often a one-off process to make the software useful to our business.

We also need to train people on the software. The people who need to use the software on a regular basis have to understand what to do to get the kind of results they want.

Then we need people to get used to the workflows. If this is a CRM system, then how are our salespeople and our sales managers going to continuously update this system with sales and lead information?

If it’s an email marketing system, how is our marketing team going to work this into our campaigns?

The Challenges of Integrating AI

Challenge 1: Data Infrastructure Needs

Now, with artificial intelligence, we have an entire repertoire of brand-new considerations and concerns.

The first concern is data infrastructure: how to make sure that the software we are working with has the data it needs, on an ongoing and often real-time basis, to really be able to help the business.

For example, if we are building a system intended to detect fraud in bank account activity, there might be some hard rules we can use. We might block IP addresses from parts of the world where we don’t have any customers but know there is criminal activity, or require a phone call any time someone wants to transfer more than a certain amount of money.
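
To make the contrast with machine learning concrete, here is a minimal sketch of what such hard-coded rules might look like. The field names, country codes, and threshold are hypothetical, chosen purely for illustration:

```python
# A minimal sketch of hand-written fraud rules. All field names, country
# codes, and thresholds here are hypothetical, for illustration only.

BLOCKED_COUNTRIES = {"XX", "YY"}   # regions with no customers but known criminal activity
PHONE_VERIFY_THRESHOLD = 10_000    # transfers above this require a phone call

def apply_hard_rules(transfer: dict) -> str:
    """Return an action for a transfer based on fixed, hand-written rules."""
    if transfer["ip_country"] in BLOCKED_COUNTRIES:
        return "block"                       # no legitimate customers here
    if transfer["amount"] > PHONE_VERIFY_THRESHOLD:
        return "require_phone_verification"  # large transfers get a manual check
    return "allow"
```

Rules like these are the hard-coded part; everything subtler than this is what the machine learning system has to learn from data.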

In order to set this system up, we would need to track not just the things banks already track, such as money transfers, but also how frequently a user logs into their account and which parts of the user interface they interact with. Maybe a given user really only looks at their savings and checking balances and almost never explores other features. For that individual user, “normal” looks a certain way.

We’d also need to somehow feed the system with continuous data if we want to detect fraud. In this case, that data is detected instances of fraud. We would need to find out when there was fraudulent activity on an account, and we would need to somehow register that as fraud with the system. We would need to tell the machine learning system, “This pattern of behavior and interactions and transfers,” or whatever the case may be, “was a fraudulent instance for this particular user.”

To do this, we need to be able to go back and label individual instances (transactions) as fraudulent or not. This data needs to be fed back into the machine learning system consistently. The questions here are, “Where is all of this tracked? Where is all of this stored?”
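
As a rough sketch, a labeled record might look something like the following. Every field name here is hypothetical; the point is simply that a confirmed fraud case has to be written back somewhere as a labeled example the model can later be trained on:

```python
from dataclasses import dataclass
from datetime import datetime

# Hypothetical shape of a labeled training record: once an analyst (or a
# chargeback report) confirms fraud, the matching transaction is stored
# with a label so it can be fed back into the model.

@dataclass
class LabeledTransaction:
    transaction_id: str
    user_id: str
    amount: float
    ip_country: str
    timestamp: datetime
    is_fraud: bool  # the label supplied after the fact

def label_transaction(txn: dict, is_fraud: bool) -> LabeledTransaction:
    """Attach a fraud label to a raw transaction record."""
    return LabeledTransaction(
        transaction_id=txn["id"],
        user_id=txn["user_id"],
        amount=txn["amount"],
        ip_country=txn["ip_country"],
        timestamp=txn["timestamp"],
        is_fraud=is_fraud,
    )
```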

Also, how is the data stored? When we figure out where instances of fraud are kept, do those records represent a user’s first and last name in exactly the same sequence and format as the bank account records we consult when we look at account activity or transfers?

We need to make sure that the data is harmonized: that the fields of data are consistent across systems so that we can match records between them.
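
Here is a toy illustration of that harmonization problem, assuming two hypothetical systems that store customer names in different formats:

```python
# A toy illustration of harmonizing one customer across two systems that
# store names differently. All field names and formats are hypothetical.

def normalize_name(first: str, last: str) -> str:
    """Produce one canonical join key: lowercase, trimmed, 'last,first'."""
    return f"{last.strip().lower()},{first.strip().lower()}"

# System A (account records) stores separate fields:
account = {"first_name": "Jane ", "last_name": "Doe"}

# System B (the fraud case log) stores one field, formatted "LAST, First":
fraud_case = {"customer": "DOE, Jane"}

last, first = [part.strip() for part in fraud_case["customer"].split(",")]
key_a = normalize_name(account["first_name"], account["last_name"])
key_b = normalize_name(first, last)

assert key_a == key_b == "doe,jane"  # the two records can now be joined
```

Real harmonization is messier than this (middle names, typos, duplicate accounts), but the principle is the same: without a shared key, the systems can’t be matched.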

We then need to feed all of that data into the system. Machine learning systems are constantly being trained. Again, if we’re detecting fraud at a bank, we need to feed in every new instance of fraud to make sure we can pick up on fresh criminal behaviors and stop other criminals from using the same tactics.

How can we consistently ensure that the data from all these different parts of the bank are able to come together in a way where we can train a machine learning system on it?
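
One common way to handle this ongoing feeding is incremental retraining. The sketch below uses scikit-learn’s SGDClassifier, which supports updating a model batch by batch via partial_fit; the feature layout and batch cadence are assumptions for illustration, not a prescription:

```python
import numpy as np
from sklearn.linear_model import SGDClassifier

# A sketch of incremental retraining: as newly confirmed fraud cases
# arrive, fold them into the model rather than rebuilding from scratch.
# The 10-column feature layout is a stand-in; a real system would use
# the engineered features agreed on with the fraud team.

model = SGDClassifier(loss="log_loss")  # online logistic regression (scikit-learn 1.1+)
classes = np.array([0, 1])              # 0 = legitimate, 1 = fraud

def update_model(new_features: np.ndarray, new_labels: np.ndarray) -> None:
    """Fold the latest labeled instances into the existing model."""
    model.partial_fit(new_features, new_labels, classes=classes)

# Each week, say, pull the latest labeled batch from the warehouse:
batch_X = np.random.rand(64, 10)       # stand-in for 64 labeled instances
batch_y = np.random.randint(0, 2, 64)  # stand-in labels
update_model(batch_X, batch_y)
```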

Challenge 2: Feature Engineering

This brings us to the second big challenge, one that goes hand in hand with data infrastructure and that we often have to tackle before we even set up our streams of data within the business: feature engineering.

When we want to train a machine learning system to do something, we need to determine what data would help it do that. Sticking with the fraud detection example, we might want to look at the behavior of a logged-in user. What are they doing? What are they clicking on? What are their mouse movements?

We also need to know their transfers, their balances, the payments they make from those accounts to their credit cards or other vendors, and so on. We need to track that kind of data, and we need to identify historical instances of fraud.

These are just a few types of data that we might use to train a fraud detection system, but feature engineering involves a deep strategic process for determining what those types of data are in the first place.
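
In code, feature engineering is the step that turns raw events into the inputs a model actually sees. This toy sketch is illustrative only; deciding which signals belong in the vector is exactly the strategic conversation described next:

```python
from statistics import mean

# A toy feature-engineering step: turn a user's raw session events and
# transfer history into a fixed-length feature vector. Which signals
# matter is a judgment call for data scientists and fraud experts
# together; these four are purely hypothetical.

def build_features(sessions: list[dict], transfers: list[float]) -> list[float]:
    avg_transfer = mean(transfers) if transfers else 0.0
    return [
        float(len(sessions)),                                      # login frequency
        mean(s["clicks"] for s in sessions) if sessions else 0.0,  # activity level
        max(transfers, default=0.0) / (avg_transfer or 1.0),       # largest vs. typical transfer
        float(sum(s["used_new_feature"] for s in sessions)),       # unusual UI exploration
    ]

features = build_features(
    sessions=[{"clicks": 12, "used_new_feature": 0},
              {"clicks": 85, "used_new_feature": 1}],
    transfers=[50.0, 45.0, 9_000.0],
)
```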

That feature engineering conversation is almost always going to involve two different parties. It will involve people who understand data science: people who know how to train algorithms, who understand the formats and types of data, and who know the reasonable capabilities of machine learning. They know what’s realistic and what’s unrealistic, and which kinds of data we might be able to feed a system versus which we can’t. We need people who understand the science.

We also need people who understand the business. For a fraud detection system, we need subject matter experts in fraud. These people don’t have to be AI experts, but they have to talk to the AI experts. They need to have an open-minded, interdisciplinary conversation about what types of data a machine learning system would need in order to detect fraud.

Someone who works in fraud might be able to tell the data science talent what kinds of data are most important here. They might have some insights. For example, they might put a lot of emphasis on the IP address and location of the user; that might be weighted very highly among the signals that correlate with fraud.

They might also be able to point the data science folks to another category of data. For example, when it comes to behavior within a mobile app, they might know which in-app behaviors have historically correlated with fraud over the last two or three years. That information would again be helpful to the folks on the data science side.

Sometimes the feature engineering phase ends with the conclusion that we just don’t have the data to train a machine learning system, or that we do have the data but rebuilding the infrastructure to make it accessible right now would be completely cost-prohibitive given the potential ROI of the product or project.

That’s a perfectly valid result for feature engineering. Some people would say it’s disappointing, but what’s more disappointing is wasting millions of dollars. There are certainly banks that have bought machine learning applications without the requisite data, and without the requisite ability to access that data, and wasted tremendous amounts of money and time.

Challenge 3: Testing the Effectiveness of a Machine Learning Application

The last consideration is testing the effectiveness of a machine learning application.

When you talk to folks who work for established, mature artificial intelligence vendors, or to very high-level consultants who are schooled in the science of artificial intelligence and have worked in the enterprise, they will all tell you that it is often harder to test the effectiveness of a machine learning application than of a generic IT application.

  • How do we know that this is working?
  • How do we know that this is worth our money?

These questions seem self-evident, but often they’re not.

If we want to understand how to measure the effectiveness of a fraud detection application, what will be the means by which we do that? Is it simply being able to detect more instances of fraud? In other words, before the system, we might have been able to catch 80% of fraud cases. Would we consider the AI system a success if we caught 90% with it?

Let’s throw in some other criteria here. Do we also want to reduce the false positives? If we go from catching 80% to catching 90% but we have twice as many false positives, maybe that’s not really worthwhile.
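
The trade-off described here is usually framed as catch rate (recall) versus false positives. Here is a minimal sketch of how those two numbers might be computed from labeled historical outcomes, assuming simple boolean lists:

```python
# A minimal sketch of the catch-rate vs. false-positive trade-off,
# computed from labeled historical outcomes. `predictions` is what the
# system flagged; `actual` is what turned out to be fraud.

def evaluate(predictions: list[bool], actual: list[bool]) -> dict:
    tp = sum(p and a for p, a in zip(predictions, actual))          # fraud caught
    fp = sum(p and not a for p, a in zip(predictions, actual))      # false alarms
    fn = sum(not p and a for p, a in zip(predictions, actual))      # fraud missed
    tn = sum(not p and not a for p, a in zip(predictions, actual))  # correctly cleared
    return {
        "catch_rate": tp / (tp + fn) if (tp + fn) else 0.0,           # did 80% become 90%?
        "false_positive_rate": fp / (fp + tn) if (fp + tn) else 0.0,  # did false alarms double?
    }
```

Going from an 80% to a 90% catch rate only counts as a win if the false positive rate, and the analyst hours it consumes, doesn’t rise faster than the fraud losses fall.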

If this system is able to free up the time of large swaths of our cybersecurity team in some way, shape, or form, maybe that’s an ROI that we want to measure here.

This is a conversation for both subject matter experts and data scientists together. We need both parties in the room to determine how we want to measure the ROI of the application.

Data science teams may say very frankly, “That is impossible.” What we as business leaders assume will be possible may simply not be possible from the perspective of data science, in terms of what the technology is actually capable of doing.

Sometimes, just as in feature engineering, we run into cases where we realize we don’t have a fit here, where this is not something we can realistically do.

For example, business leaders might say, “I’d like to catch 98% of all fraud instances that come through,” and a data scientist may know that this simply isn’t going to be possible.

Similarly, the business experts might want some massive amount of time reduction from the security team, and the data science person might say, “Well, we might be able to reduce their time a little bit, but we will definitely need at least one or two more full-time people to manage the data infrastructure, the algorithmic creep, and the other factors related to this machine learning system.”

The data scientists and the business experts will have to come together and say, “Realistically, what kind of time efficiencies can we create?”

That is not a conversation business people should have by themselves because, frankly, they probably don’t have a realistic expectation of what the AI system can do.

Similarly, it’s not a conversation the machine learning experts should have by themselves because they don’t understand the P&Ls and the balance sheet. They don’t understand ultimately what’s going to hit the bottom line, what kind of difference we need to make from a monetary perspective, from a business process perspective in order to make the business run.

We need both parties in the room to determine what testing effectiveness is going to look like and to ask the question of whether or not it’s realistic to expect what we’re expecting. The fact of the matter is we may learn from this brainstorm that we don’t have a fit, that this is not the right application to work with, and you know what? That’s a win. That’s a lot of money saved if we can keep ourselves from stumbling into a big mistake.

How to Overcome These Challenges

We’ve talked about the data infrastructure needs, the feature engineering needs, and the testing and effectiveness needs of a machine learning application. These are new considerations. They require additional brainstorms and additional expertise that a lot of other IT applications simply do not involve. It’s important to bear them in mind, but how do we get past them?

Know What You’re Up Against

Most people presume that machine learning is much easier than it is. They drastically underestimate the problems articulated in this article.

Simply knowing what you’re up against is already to your advantage, because you’re less likely to make rash decisions and hop into AI with enthusiasm when in fact you should be bracing yourself for a real heavy grind.

Talk to Companies That Have Already Adopted AI

If there is a singular bit of advice that I would give to any business leader who’s moving toward integrating an actual AI system into their business, it’s this: talk to companies who have done it.

In other words, talk to buyers like yourself who have bought or built AI applications like the one that you’re trying to build.

When it comes to setting up run-of-the-mill IT projects, your IT team is probably prepared for what that looks like.

Rather than listening to the vendor about how long the integration will take, what factors are involved, and what challenges you’re going to face, talk to someone who has bought a similar solution and ask them: “What kinds of new talent did you have to hire? How long did it take to hook up these systems? What kind of data integration and data infrastructure did you have to set up to make this work in the first place? How were you able to measure success? Did you find success? Did you find an ROI, or are you really losing money on this project?”

Ask the hard questions of people like you who have built or bought something like what you’re trying to build or buy that involves machine learning.

You will hear them very quickly bring up almost everything we’ve talked about in this article. They will certainly bring up data infrastructure. They will certainly bring up feature engineering in some way, shape, or form. And when you ask them how they’re measuring success, to be honest, they might not even have a great answer. If they can’t measure ROI, that might make you a little more cautious about your own enthusiasm for the application.

The fact of the matter is that a lot of these applications are pilots, and a lot of them were done for the sake of AI novelty. You should get a good gauge of how other people have experienced the transition so that you’re ready for it yourself.

Hire AI Experts In-House Who Have Implemented AI in Business Before

Maybe you’ve got to poach these people from other companies. Maybe you’ve got to hire people who were great AI consultants and are open to working for your firm. In an ideal universe, you want people who have worked in your industry, who have applied AI in the enterprise, and who have very strong technical data science skills. You’re going to need these people on your team, and increasingly you’re going to need more and more of them as AI becomes a bigger and bigger part of all things IT.

Now, the fact of the matter is this: that talent is hard to find. Often, what you’ll get is the next best situation.

The next best situation is to have in-house data science experts who can learn with you and who can quickly get up to speed on what it looks like to bring AI into an enterprise and what it looks like to solve problems within your sector.

You want your data science experts there because a lot of these folks, when they’re hired, are not going to have the ideal expertise I mentioned. They’re not going to have the industry expertise.

They’re going to know the science, but they might not yet have the applied expertise you’re ultimately going to want them to have as you grow your company and integrate AI more and more.

They need to be part of the osmosis of learning. They need to be able to work with the business folks, and they need to be there when we go to events. When we head off to events and talk to other buyers who have gone through similar experiences, or when we look at other use cases, the data scientists should be studying those business use cases and getting a rounded perspective on what this takes. These need to be the folks who are the glue between the hard science and the business problem.

Don’t Rely on Vendors and Consultants

There’s a chapter in Machiavelli’s The Prince titled “How Many Kinds of Soldiery There Are, and Concerning Mercenaries.”

The idea is this: if you are relying on mercenaries to win your wars, you’re going to ultimately harm yourself because at the end of the day, whether you’re victorious or whether you lose, those people don’t have the same aligned interests as you do.

Here’s what I mean by that. A business cannot fully grow by bringing on AI consultants and by bringing on AI vendors. You need to be cultivating a core of data science talent in-house, people that are aligned to your incentives, people who are not going to lie to you, people who are not going to tell you it’s easier than it is, people who might lose their job if the implementation goes poorly.

You need people on your side in this process. If you’re going to bring on AI consultants and vendors, and if you need to trust them for outside expertise on how to integrate this stuff into your actual business, you had better have folks on your team, hard data science experts, who sit on your side of the table and who can question, poke, and prod the claims made by consultants and vendors from the science perspective, not just the business perspective.

The science can be murky, even smoke and mirrors, when a consultant or a vendor is trying to sell a business on a concept. They’re selling to a business leader; they can pretend it’s as easy as they want it to be. But when you have data science folks sitting on your side of the table, you won’t be taken advantage of by mercenaries.

Any firm out there that expects to hire consultants to build out hardcore, real-deal AI capability within their company is absolutely fooling itself and setting itself up to lose. If that sounds daunting, it’s because it is.

Our third installment in this series is going to be about underestimating time to value. In other words, why is it so much harder than with run-of-the-mill IT to estimate when an AI system will start paying for itself?

 

Header Image Credit: Alrasmyat Saudi Arabia
