
Feature Engineering for Applying AI in Business – An Executive Guide

Daniel Faggella

Daniel Faggella is the founder and CEO at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and many global enterprises, Daniel is a sought-after expert on the competitive strategy implications of AI for business and government leaders.


We talk a lot about the concept of connective tissue here at Emerj: a company that wants to apply AI not only needs to have access to data, not only needs to hire normally very expensive artificial intelligence talent, but also has to have the connective tissue of related subject-matter experts who can work with that talent.

Feature engineering is critical when it comes to the enterprise adoption of AI and for bringing AI projects to life. In layman’s terms, feature engineering is picking the sources and types of data that we’re going to use to train the machine. This sounds very simple, but in fact, it’s quite challenging. In this article, we’re going to go through some exact examples of what this looks like and talk about applying it to the real world.

Andrew Ng is one of the towering figures in the world of machine learning. He famously taught at Stanford for quite some time and was with Baidu at one point. Now he runs his own company. He has a quote that I really like:

Coming up with features is difficult, time-consuming, and requires expert knowledge. ‘Applied machine learning’ is basically feature engineering.

What he’s saying is if we want to apply machine learning, we need to engineer features. As such, we’re going to walk through two different examples of AI solutions that a company might want to build and what it might look like for subject-matter experts and business leaders to speak with AI experts within the same company to ultimately come together, work together, iterate together, and find the features that are winners, the features that have predictive ability and can drive value in a company by making an algorithm capable of doing what a business needs it to do.

How Feature Engineering Works

We’ll talk first about how feature engineering works. If we want to build a lead scoring model to predict the value of a lead and whether following up with that lead will be a profitable interaction where the company can make money, what are the features of a lead that make it worthwhile?

That’s not self-evident. Different salespeople are going to have different answers to that question. Companies need their teams to work together in order to determine those features and in order to learn from their experiments and adjust those features to get closer and closer to the kind of features that are going to deliver results.

First and foremost, if a team is going to bring an AI solution to life, they’re going to train an algorithm and make it capable of something that will be valuable to a business. The first thing they’re going to do is get together both their data scientists and their subject-matter experts in that area of the business, the people who know that aspect of the business best. They’re going to combine the brains of the subject-matter experts and AI experts and come together with all the different options of what could be features.

The data scientists and subject-matter experts pool all the various options, and the team together would eventually come up with the selected features that they think have the most promise. They’re going to prioritize those. Data scientists are going to use those initial features to train the algorithm. Once they get results from that algorithm, they’re going to figure out how well those features tend to work in the real world.

Then, they’re probably going to get together with the subject-matter experts and talk about the results: the cases where it worked well and the cases where maybe it didn’t work well. They’re going to talk about the aspects that seemed to correlate to getting the result right, and they’re going to talk about the features that they suspect are throwing things off.
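As a loose sketch of that quantitative first pass, the snippet below scores a few candidate features against historical outcomes using a simple correlation, the kind of quick check a data scientist might bring to the review meeting. All feature names and data here are invented for illustration:

```python
from statistics import mean

# Each historical lead: candidate feature values plus the known outcome (1 = won).
leads = [
    {"recency_days": 2,  "num_calls": 5, "won": 1},
    {"recency_days": 30, "num_calls": 1, "won": 0},
    {"recency_days": 5,  "num_calls": 4, "won": 1},
    {"recency_days": 45, "num_calls": 0, "won": 0},
]

def correlation(xs, ys):
    """Pearson correlation between two equal-length lists of numbers."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Rank each candidate feature by how strongly it tracks the outcome,
# so the team can discuss which features to keep and which to rethink.
outcomes = [lead["won"] for lead in leads]
for feature in ("recency_days", "num_calls"):
    values = [lead[feature] for lead in leads]
    print(feature, round(correlation(values, outcomes), 2))
```

A check like this is only a starting point for the conversation; correlation on a handful of historical records says nothing about causation or about factors missing from the data entirely.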

Alice Zheng, PhD, Senior Manager of Applied Science at Amazon, and Amanda Casari, Engineering Manager at Google Cloud, published a flow chart showing where feature engineering falls in the process of building machine learning models for business problems:

Zheng and Casari’s machine learning flowchart

Feature Engineering for a Lead Scoring Application

Say there’s a company that sells B2B software, and this company wants its salespeople to focus on high-value leads. Maybe a lot of sales managers are noticing that a disproportionate percentage of their salespeople really don’t have a good method for determining which leads to follow up with over others.

The first thing you’ll notice here is that this is a business that has accurately assessed its own needs or is at least attempting to do so. This isn’t a company that says, “We should use machine learning. How could we do that?” This is a company that says, “We’ve determined an area where we could be driving more value in the business, where we can be making more productive use of our team members.”

Now what the company wants to do is bring together the team and come up with the features. If the company lets a bunch of data scientists sit in a room by themselves and come up with features that they think correlate to lead scoring, they might be lost. That said, data scientists can do something that salespeople probably never get the chance to do: look at all of the historical sales and the activity of those leads and figure out what factors might tie them together and make certain leads more valuable than others.

Data scientists can bring the quantitative approach to the table, but they can’t do everything. They don’t have the same sense of the real work that the salespeople do. Salespeople might know for a fact that companies that are also in the software space are almost always an easier sell. Salespeople might also know that if the person they’re connected to is a VP of Operations, for example, they’re almost guaranteed to have a fruitful next conversation.

Bear in mind that some of those factors are not in the data. A data scientist might not be able to go into the CRM and find the size of the company they’re selling to. Salespeople might tell data scientists that the size of the business matters. Data scientists combing through all the information can find things that already exist in the data that might correlate to closing a deal, but they might not know about these other factors. The subject-matter experts, the sales managers and salespeople, need to communicate their ideas to the data scientists.

They’re going to have to shuffle these ideas around on a whiteboard. They’re going to have to think about which of them might matter most. They might come together and say:

Okay, we think that we’re going to prioritize these industries as higher priority for lead scoring. We’re going to prioritize these industries as middle priority, and these as low priority. And so from now on, the CRM needs to include industries. It doesn’t include them right now.

Things that are in the CRM might be the recency of the lead, the title or role of the person that they’re selling to, and the frequency of recent communications. There might be all these various factors that the data scientists and the salespeople come together and say, “These could work.” Sometimes, these are just simply ideas. They’re already in the data and all we have to do is organize them, train an algorithm on them, and run an algorithm prioritizing for these particular features.
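A hypothetical sketch of what that step might look like, turning the brainstormed CRM fields (recency, role, contact frequency, industry tier) into a numeric feature vector a model could be trained on. The field names, tiers, and encodings below are all invented for illustration, not any real CRM schema:

```python
# Brainstormed priority tiers the salespeople might propose (illustrative).
INDUSTRY_PRIORITY = {"software": 2, "finance": 1}
ROLE_PRIORITY = {"VP of Operations": 2, "Manager": 1}

def featurize(lead):
    """Map a raw CRM record to the feature vector the team agreed to try."""
    return [
        1.0 / (1 + lead["days_since_contact"]),       # recency: fresher is higher
        ROLE_PRIORITY.get(lead["title"], 0),          # seniority of the contact
        lead["emails_last_30_days"],                  # frequency of communication
        INDUSTRY_PRIORITY.get(lead["industry"], 0),   # industry tier
    ]

lead = {"days_since_contact": 4, "title": "VP of Operations",
        "emails_last_30_days": 3, "industry": "software"}
print(featurize(lead))  # [0.2, 2, 3, 2]
```

The point of a sketch like this is that every encoding choice, how to tier industries, how to weight recency, is itself a feature-engineering decision the data scientists and salespeople have to make together.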

In other instances, the team is going to have to do a lot of work. Some of these features are going to be very challenging. It may require the team to go back into the last three years of sales data to find relevant features. This is time-consuming work.

The data scientists and the salespeople are going to have to find features that their intuition and the hard data both suggest will correlate to success, and they’re going to have to decide which of the complicated new features are worth the effort to add.

One can see how the team would need to have a back-and-forth. The data scientists would say:

No, we can’t add these six new things, we’re going to have to hire a huge team overseas to label and to frame these things. We’re going to have to train an algorithm to automatically put it into the system, and that by itself would take a long time. We can’t add all these new sources of data retroactively, that’s going to be too crazy. So we’re going to have to pick the number that we think is reasonable.

Then the data science team is going to construct an algorithm that looks out for the features that predict a higher-value lead based on industry, size of the company, or frequency of communication. Then the team is going to take all the new incoming leads and score them. They may or may not feed these scores to real salespeople; they may just feed them to the team responsible for building the product.

It’s very dangerous to take an experimental machine learning product and feed its results to one’s whole company. We don’t know if this thing works; we don’t know if this thing is doing a good job at all. Data scientists would instead build the algorithm and then just show the results to the same people who helped them build the features in the first place. They bring these people back into the room, the same sales leaders, maybe a couple of their top performing sales folks, bring those same people back in and show them the results.

The subject-matter experts might look through the results and find a bunch of examples that worked really well and a bunch that didn’t. They might even set up an experiment: run 500 leads through the machine to score them, and also have a group of salespeople look at the same accounts and manually score each lead on a scale of one to 10 as to how hot it is.

Then the data scientists and subject-matter experts would come together and see how well the salespeople scored the leads compared to the machine. They might find big discrepancies. Subject-matter experts could then describe the areas where the machine is doing well and those where it’s not. For example, the algorithm might be doing a great job at scoring really big accounts, but might not be doing a good job at finding the low-hanging fruit among smaller accounts.
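As a rough illustration of that comparison, the sketch below lines up hypothetical machine scores against the salespeople's manual scores and surfaces the leads where they disagree most, the natural agenda for the next review meeting. All lead names and scores are invented:

```python
# Machine-assigned scores vs. manual salesperson scores for the same leads.
machine = {"lead_a": 9, "lead_b": 3, "lead_c": 7, "lead_d": 2}
manual  = {"lead_a": 8, "lead_b": 9, "lead_c": 6, "lead_d": 3}

# Sort leads by the size of the disagreement, biggest first.
discrepancies = sorted(
    ((lead, abs(machine[lead] - manual[lead])) for lead in machine),
    key=lambda pair: pair[1],
    reverse=True,
)

# The leads where the model and the salespeople disagree most come first.
print(discrepancies[0])  # ('lead_b', 6)
```

In practice the team would then ask *why* the model scored lead_b so differently, which often points straight at a missing or badly encoded feature.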

This would bring the team back to the drawing board. The team might realize that they not only need to prioritize deal size, but also the ease of winning the sale, the relative ROI of pursuing this sale. It might be important to devote some attention to closing $50,000 deals instead of just the $500,000 ones. All of a sudden, the team needs to optimize the system differently. They have to figure out the factors that make a deal require more time and money before it’s closed and what data points tend to correlate to less of that time and money while still yielding a worthwhile ROI.

All of a sudden this splinters off into a very robust machine learning problem. There are new problems to solve, new ways to slice and dice the data, new features to experiment with for each. Feature engineering is hard work, and it spins into more hard work, more experimentation.

It’s not that there’s no light at the end of the tunnel, but sometimes teams really do spin their wheels. They spend six months, nine months, 12 months trying to experiment with a model, and they can never get something that’s better than what the team is doing at the moment. They go to conferences and try to learn from other people; they learn what other people have done for lead scoring, but it still doesn’t work for them. So they go back to the drawing board and find they need to allocate their talent somewhere else. They abandon the project.

There are no guarantees here. It takes a certain company with patience, with an understanding of data science, and with subject-matter experts who are willing to take time away from their normal jobs to work with the data scientists. That culture of innovation and willingness to risk for something that doesn’t necessarily have a return are difficult to conjure and mold in an existing business that already has its processes set. As a result, feature engineering, and thus machine learning, is hard for many companies.

This is why it’s challenging for data scientists themselves, let alone whole companies. When a salesperson or sales manager wants to be doing their core work, driving results, they might get tired of showing up to these meetings every two weeks. They might all of a sudden put up a lot of walls, a lot of resistance, because they’re not earning a bonus off of this “AI meeting thing.” They’re earning a bonus based on results, and they’re getting tired of spending their time on these meetings when they could be bringing home more money to their family.

Feature Engineering for a Home Pricing Application

When creating a home pricing application, the process is the same as for lead scoring. The company would bring together people who evaluate homes, the subject-matter experts, and the data scientists. The data science people would suss out all the details that can come forth from the data, and the realtors would come to the table with real experience pricing homes in the real world.

Experienced realtors that have a track record of results would explain the critical factors for valuing a home. Just like with lead scoring, some of these factors are not in the data. The data scientists might find that the number of acres a home is on is a big deal or that square footage is a big deal. They could pull this out of the data. But the realtors might come up with things like the presentation at the front of the home that aren’t in the data and that the data scientists haven’t thought of before.

Some of those features and factors are going to be irrelevant for the sake of this algorithm. The algorithm might only need satellite data and home listing prices to make its predictions when all is said and done. We can’t give a three-dimensional tour with a camera of every room in the house and have somebody manually score how stylish it is. That might be totally unreasonable for this particular solution. The company might not even be able to afford something like that.

So the team might decide they can’t do that, and instead, they have to find something else to prioritize. They might come up with the quality of the lawn. In Florida, perhaps, a dead lawn might indicate a lawn that’s unkempt because the weather would permit a green lawn year-round. The team might find through satellite data that having some degree of hedges or trees around the perimeter of the home tends to in some way correlate to better home prices. That might be something that a realtor could tune the data scientists into. They might say:

When people feel like they have their own space, we find that in upper middle-class neighborhoods, that tends to be something that people are willing to pay more for.

So the realtors are going to chip in these ideas. Some of them are going to have to get thrown out because they’re too hard. Some are going to have to get thrown out because they can never be involved in the algorithm; it’s not possible to realistically pull that data in for all future deals. But some of the ideas are going to be good, and the realtors and data scientists are going to come together with all the possible features. Then they’re going to sort out the ones that might work well, come up with an iteration of a machine learning algorithm, and test it, figuring out where it works and where it doesn’t.
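A hypothetical sketch of encoding those ideas, listing data plus satellite-derived signals like lawn health and perimeter tree cover, as candidate features for a home-pricing model. All field names and thresholds are illustrative, not from any real pricing system:

```python
def featurize(home):
    """Map a home record to the candidate feature vector the team agreed to try."""
    return [
        home["square_feet"],                              # from the listing data
        home["acres"],                                    # from the listing data
        1 if home["lawn_greenness"] > 0.6 else 0,         # healthy lawn: proxy for upkeep
        1 if home["perimeter_tree_cover"] > 0.3 else 0,   # hedges/trees: sense of privacy
    ]

home = {"square_feet": 1800, "acres": 0.4,
        "lawn_greenness": 0.8, "perimeter_tree_cover": 0.5}
print(featurize(home))  # [1800, 0.4, 1, 1]
```

Even the thresholds here (what counts as a "green" lawn, how much tree cover matters) are judgment calls the realtors and data scientists would have to iterate on together.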

They might realize that in middle-class and lower-class neighborhoods, their algorithm is doing a great job of predicting what price a home sells for. But in the really high-priced neighborhoods, it’s doing a much worse job. The team might then come together again, and the data scientists might dig into the data and figure out if there are different patterns to find in high-priced neighborhoods.
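The segment-level check that reveals this kind of gap can be sketched as follows: mean absolute percentage error of predicted versus actual sale price, split by neighborhood tier. All prices and segment labels are invented for illustration:

```python
# (segment, predicted_price, actual_price) for a handful of closed sales.
sales = [
    ("mid",  210_000,   200_000),
    ("mid",  190_000,   195_000),
    ("high", 900_000,   1_200_000),
    ("high", 1_100_000, 1_500_000),
]

# Collect the absolute percentage error of each prediction, per segment.
errors = {}
for segment, predicted, actual in sales:
    errors.setdefault(segment, []).append(abs(predicted - actual) / actual)

# Report the mean error per segment to show where the model breaks down.
for segment, errs in errors.items():
    print(segment, f"{sum(errs) / len(errs):.0%}")
```

A split like this turns a vague "it does worse on expensive homes" into a number the team can track across iterations.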

The realtors might speak to their experience of what works for the higher price neighborhoods, and then they might have to actually map out the high price neighborhoods. They might have to hardcode those so that when the algorithm makes its calculations it can use the version of itself that is tailored to those high price neighborhoods. It might turn out that there are only so many rules that can be hardcoded into the system before the team resigns themselves to the fact that their algorithm can almost never do better than an experienced realtor when it comes to expensive home prices. They might also find that if they keep iterating, eventually they can get there, but it might take a year or more.

The Reality of Bringing AI Projects to Life

Feature engineering is a critical facet of the connective tissue between AI experts and subject-matter experts. It’s a critical factor that teams have to be able to come together with. It’s not a onetime brainstorm; it’s an ongoing set of discussions around what forces and factors really move the needle to get an algorithm to be able to make good predictions and do what is needed to drive business value.

We need those subject-matter experts in the room, and understanding this dynamic, how hard and how time-consuming it can be, is very important. I talk to AI consultants every week. One of my longtime business advisors here at Emerj is a consultant for really large companies when it comes to enterprise AI adoption.

Companies will tell him that they want a recommendation engine for their eCommerce website in a month, and they’ll think that the feature engineering, multi-team collaboration, change in company culture, and results can all be garnered within that timeframe. He tells them it’s not that easy. Understanding how difficult it can be is critical for business leaders so they don’t jump into these projects thinking they can solve them with a small budget or within a short timeframe.

These are iterative projects that take a long time to work through. Business leaders that understand this are not necessarily going to be scared away from AI, but when they do jump into a business problem they want to solve, they’re able to buckle up for what it takes and prepare their team for the kinds of meetings and brainstorms that it takes to drive real value, to actually make AI come to life.

I’m not trying to be a pessimist, but I am trying to prepare business leaders. Andrew Ng is correct when he says that feature engineering is as important as it gets for breathing life into AI, for applying machine learning. As business leaders, it’s important to understand that.


Header Image Credit: Fiverr
