The Adoption Journey for AI in Service Operations – with Chris MacDonald of PTC

Megan Jarrell

Megan serves as Publishing Operations Manager at Emerj, and is currently attending The American University in Paris, where she is pursuing degrees in global communications and international business administration.


While we’ve covered a variety of use-cases in heavy industry over the years on the AI in Business podcast, today’s episode takes a closer look at what it is like to apply AI on the manufacturing floor.

In our conversation with PTC Head of AI and Analytics Chris MacDonald, we dive into what it looks like to leverage AI in servicing products once they are out in the world and in the hands of consumers.

Chris expands on a wide range of similar business applications across industries, some that we were only able to touch on briefly during our recent interview with GE’s Peter Tu.

PTC is a multibillion dollar computer software firm focused on service operations with headquarters in Boston, MA. The company aims to “unleash industrial innovation with award-winning, market-proven solutions that enable companies to differentiate their products and services, improve operational excellence, and increase workforce productivity.”

With more than 6,000 employees across 30 countries, PTC generated $1.807 billion in revenue for the year ending March 31, 2022.

We cover three topics in this 40-minute interview:

  • The AI adoption journey for service operations firms
  • What the product service lifecycle looks like and its unique considerations 
  • Why it’s essential to ensure data quality and work together as a team

Listen to the full episode below, or skim our interview takeaways along with the full interview transcript:

Subscribe to the AI in Business podcast on Apple Podcasts today. 

Guest: Chris MacDonald, Head of AI & Analytics at PTC

Expertise: AI, AI analytics, business management

Brief Recognition: Before becoming PTC’s Head of AI and Analytics in October 2019, Chris had served as Global Lead of the Analytics Center of Excellence and Vice President of Sales and Development of ColdLight, a PTC business, since joining the company in 2015. Earlier in his career, he was a business analytics sales manager at Oracle and a large enterprise executive consultant at Xerox.

Key Insights

  • How to align data strategy to business strategy: The importance of data availability and quality in constructing a proactive maintenance strategy.
  • Executive advice on maintaining key KPIs: Letting KPIs be malleable enough to fit changing situations while still maintaining their integrity as the project moves from early to transformational phases.

Full Interview Transcript

Daniel Faggella: We’re speaking at a bit of a high level about a sector and space that you’re quite familiar with when it comes to getting AI projects going. Strategy often sort of has to play a role. And I know that you have some particularly firm opinions around what it looks like to align data strategy to business strategy. What do you think is most important for executives to understand about that process?

Chris MacDonald: I think it really comes back to, if you're familiar with practical data science, a data scientist spends a lot of time sort of repeating this notion of framing the business question or framing the analytics problem.

Really, that comes down to an overall strategy of [answering the questions] “What are you trying to accomplish? How do I model data to represent that business problem or circumstance?” 

And then how do you want to leverage logic or different types of analytics to get insights that actually drive that action. So it’s very much [about] what does it mean to align data strategy to business strategy.

So in the case of a service executive, we know that the most expensive thing in running a service business — and why a lot of executives want to have a connected product strategy — is this notion that you sell a customer a product and a service contract, or even sell that as a product as a service for that matter. 

[After] that product goes down, and it's critical to the customer's operation, it's a reactive service lifecycle.

So you have to send someone out there to get that customer back up and running; they have downtime, right, their operations aren't running.

And the most expensive possible thing you could do is that the machine is down, you send someone there, and they have the wrong parts. They have to then come back, right, leading to lower customer satisfaction and an even longer window to get the repair done and get the machine back to a level of uptime that is satisfactory under the contract.

So that’s the worst possible scenario. And of course, it’s easy for an executive to say, “I want to have a proactive strategy. I want to proactively service my equipment so  I don’t have to experience that. I want to make sure that I’m providing proactive maintenance and service aligned with my customer’s operational schedule.” 

Ideally, when they're scheduling downtime: How can I have a service strategy that aligns with that operational window? How can I allow them to have self-service? How can I give them insight into their products and what they need to do, so they can even do it themselves?

All those factors come into it, but the question remains: How does the data that you have support that strategy? How do you have your assets connected? Are they giving telemetry?

That helps us understand the behavior, or the voice of the product, so that we can infer a level of well-being for that equipment and how we can intervene with that equipment. All of that is part of aligning a data strategy.

So if I take even, say, predictive maintenance as a use case in general, whether that's predictive maintenance for a maintenance team in a manufacturing plant or from a service provider of a piece of equipment, or an OEM.

There's this concept of data availability that we run into, right? It's often one of the biggest concerns in our firsthand engagements working with customers on predictive maintenance: you need to have a sufficient amount of data from monitoring a piece of equipment over a sufficient historical period of time.

But you also need a history of some sort of adverse action, or an event that usually comes from a maintenance system or a service system. And you need to be able to bring those two things together to create a sufficient data model.

That's not actually a technical challenge; it has to be driven through business sponsorship. There needs to be someone saying that our data is a critical asset to understand our service or operations strategy, and that we need to be able to align our monitoring data to the systems where events or outcomes are tracked.

Because ultimately, it's not just that availability; you need to have data quality as well.

In the case of a service technician, sometimes he might be in there tracking something that he's doing. And the old sort of, you know, fat-fingering of it, right?

He could be inputting something into the system, some sort of event, that leads to poor data quality. So how do we have a level of governance or assurance of data quality, or executive sponsorship to drive quality data collection there?

Because ultimately, you either collect raw data that is fundamentally of quality, or you have to find ways to pre-process it with outlier detection and a number of those other means. But at the end of the day, you need a quality, available data structure, and it needs to be sufficient. You need to be able to find meaningful patterns in that.

So there need to be enough adverse events, and, it's not a hard and fast number per se, but there needs to be enough coverage of the expected types of events that result in some sort of failure or outcome variable you're trying to prevent.
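As a rough illustration of the data model Chris describes, bringing monitoring data together with recorded failure events, here is a minimal sketch in Python with pandas. The schema (asset_id, timestamp, failure_time) and the one-week label window are hypothetical assumptions for the example, not PTC's implementation.

```python
# Minimal sketch (hypothetical schema, not PTC's implementation): join asset
# telemetry with maintenance/failure records to build a labeled table for a
# predictive maintenance model.
import pandas as pd

def build_training_table(telemetry: pd.DataFrame,
                         failures: pd.DataFrame,
                         horizon_hours: int = 168) -> pd.DataFrame:
    """Label each telemetry reading 1 if the same asset has a recorded
    failure within the next `horizon_hours` (one week by default)."""
    telemetry = telemetry.sort_values("timestamp")
    failures = (failures.rename(columns={"failure_time": "next_failure"})
                        .sort_values("next_failure"))

    # For each reading, find the next recorded failure for that asset.
    merged = pd.merge_asof(
        telemetry, failures,
        left_on="timestamp", right_on="next_failure",
        by="asset_id", direction="forward",
    )

    hours_to_failure = (
        merged["next_failure"] - merged["timestamp"]
    ).dt.total_seconds() / 3600.0
    merged["will_fail"] = (hours_to_failure <= horizon_hours).astype(int)
    return merged
```

From there, feature engineering and model training would proceed with `will_fail` playing the role of the outcome variable Chris refers to.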

Daniel Faggella: There’s a number of factors here that you’re bringing up that I think will immediately make sense for someone who is a services business leader. 

They're talking about preventing that unplanned downtime, of course, and what a horrendous cost that is; what a horrendous loss of face to the customer that could be under different circumstances. That's going to click for them a lot.

Discerning and detecting which of our data sources are clean out of the gate, which are going to need pre-processing, how to identify outliers: that's probably not going to be happening in the C-suite.

So when it comes to aligning data strategy and business strategy, like you said, executive sponsorship, absolutely. 

In our experience, a lot of this really has to start at the top. Executive AI fluency is the name of the game; you might argue that's why we're in business, actually. Because getting started up there is a big deal to make the stuff on the ground happen.

But talk a bit about how much the exec needs to know about the technicals of AI versus what they need to understand conceptually to get the right people in the room.

Because it sounds to me like aligning these strategies is going to involve getting those folks that know the machines and those folks that know the data streams under the same tent with that executive. How do those folks come together?

Chris MacDonald: So I think on the executive level, there's enough fluency needed where you have to understand that it's not just an overall, overarching strategy of "I want service optimization."

But I think you have to get down to a certain level, and this is not a technical level, in my opinion.

It's the basic question of: I want an executive to be able to articulate in a program what they want to predict. They don't necessarily have to know the quality of the data; they have to know that data is important and business outcomes are important.

And things need to come together to a set of questions that we want to answer proactively. So I think there’s a level of detail [where executives can ask questions] like, “Wouldn’t it be great if my service managers could identify the top 10 pieces of equipment that are likely to fail in this given week-long window, or based upon a geography?…”

Or a metric where, you know, service technicians are available in this area, and they can be alerted to failures. 

But do they need to know exactly how they’re going to get there, exactly how the data is going to come together? Of course not, but they need to know: I have a business strategy. 

There are things in the case of predictive analytics that I’d like to predict. 

And here’s how the ability to do so would drive, you know, certain KPIs or business implications that would help my strategy. 

Yeah, that’s a level that’s not technical. No, but it certainly shows a level of understanding and that there is going to be technical work and sponsorship needed to drive those.

Daniel Faggella: We think about it more as a conceptual grasp for the most part. You don’t need to build a support vector machine on the weekend for fun in order to serve that executive role. 

And there’s kind of two elements of this. I’d love to get your sense of the balance, we really consider them to be different. You might have kind of a different take on this: 

There’s sort of the Hey, I’m an exec. I’ve got a Twitter feed. I get emails every now and again, I go to events, I’ve seen individual use cases. I can say, “Hmm, we really should be doing more of this, predictive, this and such.” 

“We really should be doing more of this cool stuff over here, these both seem viable and valuable.” 

That sort of like sniper style, immediate use case identification.

And then there is... we'll refer to it as a kind of transformation vision, where what we want to become is actually AI-informed. So those use cases are not just band-aids for today because AI is great today; it's sort of a building of capabilities.

What do you encourage execs to do to maybe have a bit of both? Or how do you see them as the same or different?

Chris MacDonald: In an ideal world, like, start with the latter, right? [In vendor marketing, the company says,] "Here's this overall, overarching strategy: I want to become AI-driven, whether you call it data-driven or AI… I want to have a data-driven understanding, I want to be able to leverage AI to my business benefit."

At PTC, we have this phrase, our whole notion, our company motto: "digital transforms physical." But our customers are not in the business of being digital; we are digital-first because we're a software company at the end of the day.

In the case of a plane, we want to help our customers create a digital footprint of that plane. We don't expect them to play software company; we want them to build the best planes in the world. We want to help them do that.

So being an AI-driven company [means] understanding, "I want to have AI-driven insights to be able to optimize my business," and then taking that approach. Now that I know that, here are some top business priorities I want to go shoot after with the sniper method.

I know this is really important, I know this has a big business implication for me, so let me put some fuel on the fire, even as an executive. Let me get some vendors in here, and maybe get some other executives to go after this and create some inertia around that top priority.

Ideally, it's got to be under that umbrella of some sort, maybe involving strategy-driven insights.

That sort of "I have an overarching strategy, I have a drive for my business to be more data-driven, to have proactive insights with my AI strategy" [on one hand]. And then, "I have a sniper approach where these different use cases fall into it" [on the other].

Because I think with that, you can start to get to a point of, "Okay, I'm sniping again on, say, a predictive maintenance type use case or predictive service. But I also want that executive to be able to go to the next level, just a little bit."

So say an executive tells me, "I want to do predictive maintenance." Well, the next question is, "Okay, but if I can predict that something's going to fail, does it help to know if it's going to fail within an hour? Is that enough time for you to do anything about it?"

So you want to get to that next level where that executive is willing to bring in, say, a service manager, or even an experienced service technician to start having the conversation. 

So maybe I [as the executive] need to understand the remaining useful life; maybe I need to understand, you know, when something's gonna fail in the next two days, or three days.

Maybe we need to gather together to have some of the more technical conversations with better business sponsorship framing, to say, "Hey, I want to be able to understand whether our equipment's gonna fail within a week. Go try to model the data, try to get it that way, and come back to me to see what we can do."

That's a really powerful "I have an overall strategy," sniping for those use cases. And [it shows that] I'm willing to go to that next level to help frame the analytics question that gets my technical teams moving, and shows that I'm invested in knowing a little bit about what they have to do in the realities of the world.

That’s a powerful AI-driven executive. Big time.

Daniel Faggella: And again, I think some execs think, “Oh, I gotta go back and get one of those certifications from MIT!” 

Or something. Well, not really. 

Ideally, you get enough of a big-level vision. And frankly, the time in that room, like you just articulated, with the folks that are technicians, with the data scientists, to really hash out what we are measuring and how: that's the school of hard knocks right there.

And that's much better than taking a swing at a project, closing your eyes, hoping it turns out, and coming back to it three months later. Number one, that's not going to succeed. But number two, you're not going to learn nearly as much as the way that you just articulated, Chris.

So I think that’s something that leaders should be jotting down. 

Chris MacDonald: And if you go to the next level. If you say to yourself, “Okay, an executive can do those sorts of things.” 

Well then you put together say, a prototype, right? Maybe it’s before it goes into production, you’re not using it, but you’re just early on this journey, you have the strategy — say I want to go after predictive service, predictive maintenance. 

You get your team together and a vendor, and they come up with something. You're sitting down in the room. Now suddenly, you want to measure ROI, right? And you can start being ready for a conversation where there are two components to that.

Yeah, there's model performance; that's going to be technical. That's things like the confusion matrix, right, area under the curve, etc. But there are also costs and revenue associated with various aspects of model performance.

So a false positive in predictive maintenance or quality translates into cost to perform an additional inspection or reprocess a part or replace a part earlier than necessary. A false negative may result in a failure in the field, which impacts something like warranty costs or customer satisfaction, penalties, etc. 

So maximizing performance of a model sounds technical, but when you put that model into production, it has very real business implications and, literally, numbers associated with it.

So oftentimes, maximizing performance of a model is important, but over-optimizing can be just as costly.
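A small, hedged sketch of what that translation can look like, with made-up costs for a hypothetical fleet; the point is that the "better" model on a purely technical metric is not automatically the cheaper one to operate:

```python
# Minimal sketch (all costs hypothetical): translate a model's mistakes over an
# evaluation period into dollars, as Chris describes. A false positive means an
# unnecessary inspection or early part replacement; a false negative means a
# failure in the field with warranty, downtime, or satisfaction costs.

def expected_mistake_cost(false_positives: int,
                          false_negatives: int,
                          inspection_cost: float = 400.0,
                          field_failure_cost: float = 12_000.0) -> float:
    return false_positives * inspection_cost + false_negatives * field_failure_cost

# Two alert thresholds on the same prototype model:
chatty = expected_mistake_cost(false_positives=60, false_negatives=2)   # $48,000
quiet = expected_mistake_cost(false_positives=15, false_negatives=9)    # $114,000

print(f"Lower threshold (more alerts):   ${chatty:,.0f}")
print(f"Higher threshold (fewer alerts): ${quiet:,.0f}")
```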

This business-technical translation carries forward into the right identification of the use case, the right sponsorship at the right levels, and bringing the right vendors and the right people in the company together.

And also into evaluating the ROI of a prototype that you've invested in, and the real-life, practical use of it; that level of understanding can start to really hit the right spot there.

When you start to have the willingness to say, "Okay, I don't understand what an area under the curve or a confusion matrix is."

But if I can have someone explain to me the real-life costs of what these terms mean, then I have a meaningful understanding of the ROI, and I can start to make business decisions to say, "Don't over-optimize this. I understand the costs and benefits of a good-enough model. Let's put that into production."

Daniel Faggella:  So when can you crawl, walk, or run, versus when do you need something that’s astronomically higher performing before you even get started?

We’re gonna get actually directly into the business KPI side of things, because I know for you today, we’re really walking people through the journey of what you’ve seen other services leaders go through when it’s done well. 

And number two, I know, is going to be the business KPIs. I know you had some topics we were talking about off-mic around data quality and other considerations for getting started with strategy.

Is there anything else that you want to chip in there before we fly into KPIs?

Chris MacDonald: Yeah, so I think it's 100 percent. So again, just to reiterate data availability: it's sort of having a historical period where you have the monitoring data, think sensor data, along with the tracking of the adverse events, and you have to bring those together to create a data model that ultimately will be used to learn, to create a predictive model.

So ultimately, what is your predictive model? You’re learning from historical data, right? You’re building a basis, and then when you operationalize that model, you’re saying, “Given a set of events, presently, what are the results going to be in the future?”

So you have some sort of way of understanding that, given this set of events in a time window, you're going to be able to pull a meaningful, probabilistic understanding of the future.

And then in terms of data quality, I think the notion is, again, that garbage-in garbage-out principle. 

So, I use the example of the service technician: being aware of the additional cleanup that is necessary, and understanding that there are these data quality needs, in my opinion, needs to be a first-order consideration for the enterprise.

So there is never enough emphasis put across an organization on data quality and the importance of it and what it means for different areas.

So a good CIO, a good chief data officer, a good chief analytics officer, in conjunction with business leaders, need to be having conversations on a daily or weekly basis about what data quality means to their parts of the organization, whether it's a service technician, whatever it may be.

And then, again, sufficiency: I think it's really important, when you're that executive sniping and picking those use cases, [to recognize] there's a level of sufficiency when it comes to predictive analytics.

So people always tend to pick the asset that is most important, and therefore has been over-engineered and over-serviced not to fail. There's this fallacy that the thing that never fails has gotta be the best thing to predict failure for. And that may or may not be true.

But in order to derive a pattern, one, two or three events is not a pattern. There’s no hard and fast number, but there has to be a sufficient amount of the adverse events or the positive events, the outcome or dependent variable that you’re trying to predict.

And you need to model data to bring together enough to derive a pattern. There are certain strategies like over-sampling and more, but you need a sufficiency of the problem in the historical past to sufficiently model a predictive analytics model to predict the future.

Daniel Faggella: There’s not much to worry about for sufficiency if, quality-wise, we’re not even in the right ballpark in the first place. 

So can we check the quality box, and then we can ask the question: Which of these could we even stand on? Which of these might even have value inside of them? So that's a nice way of thinking about order.

Chris MacDonald: ...Or, to summarize sufficiency, I use this sort of phrasing:

Ideally, adverse events should have enough coverage of the expected types of situations that result in failure, so five samples per type of situation should be enough.

Now again, there's no hard and fast number; the problem is that oftentimes these situations are never known upfront, right? So in practice, we might be looking for 100-plus failures over a fleet of similar assets.

Then another consideration is: if your assets, like I said, are quite good and don't fail, then it's gonna take a really long time to collect enough failures. That's why big data storage and data lakes are really important to enabling a true enterprise predictive maintenance strategy: you want to store a lot of data for those assets that rarely fail.
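One way to make that sufficiency check concrete before any modeling starts is a simple count of recorded failures per asset model and failure mode. The thresholds below echo the rules of thumb Chris mentions (roughly five samples per type of situation, 100-plus failures across a fleet of similar assets); the column names are hypothetical:

```python
# Minimal sketch (hypothetical schema and thresholds): check whether a failure
# history is dense enough to support a predictive model at all.
import pandas as pd

def sufficiency_report(failures: pd.DataFrame,
                       min_per_mode: int = 5,
                       min_per_fleet: int = 100) -> pd.DataFrame:
    """Count recorded failures per asset model and failure mode, and flag
    anything too sparse to learn a pattern from."""
    counts = (failures.groupby(["asset_model", "failure_mode"])
                      .size()
                      .rename("events")
                      .reset_index())
    counts["enough_per_mode"] = counts["events"] >= min_per_mode

    fleet_totals = counts.groupby("asset_model")["events"].transform("sum")
    counts["enough_per_fleet"] = fleet_totals >= min_per_fleet
    return counts
```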

Daniel Faggella: Yes. It’s not like Amazon, where we sell 2 million pairs of rain boots every day. 

[We usually say,] "The volume is not the problem." But in this case, maybe it is. And unless we've got it going far enough back, there's nothing to predict in the first place.

So this takes us into sort of the KPIs we want to measure — you touched on this lightly [before]. You brought up a point that we very much advocate for here: we sometimes refer to "the needles to move."

So what are the needles we’re gonna move? 

We talked to vendors, Chris — one year, three years later, two years after that. You’ve been at this for a long time. The way that they measure success for customer experience in banking, it’s different every time you talk to them because you get a more nuanced perspective on it.

You get to understand which of those we can actually measure and influence while we're going after this, and which we can't really influence. So this is a real art and science.

You had talked about how leaders should have the subject matter experts and the data scientists in the room with the business stakeholders to grasp that. But there’s so much to consider here when it comes to deciding on those KPIs. What’s the executive advice that you would give folks?

Chris MacDonald: I think to your point, there is sometimes a danger in ever-evolving KPIs. So there's a notion that general business analytics are riddled with bias to start with, with people and executives [making goals that only justify their own stake in the project or jobs]. Let's be clear, I might have been guilty of it too at times in my life.

We're gonna pick, you know, analyses and numbers and things to measure by that make us and our people look good. And when you're really becoming and evolving into a data-oriented organization, you may come up with, or hone in on, a KPI that is a subset of or related to an initial KPI.

But you try not to get rid of it, you always want to remind yourself of the lineage of how we came to this KPI and what it meant. 

So I would say: KPIs can evolve, but they should never be erased. 

So say I'm trying to improve profitability in my service business, right? Okay, well, what does that actually mean? You're not going to build a model to predict profitability; you're going to build a model to say, "I want to reduce the cost of service. I want to reduce reactive dispatches. I want to make sure that the right people are in the right place with the right parts ahead of time."

So I might have a service and parts strategy, I might have a service technician deployment and dispatch strategy based upon actions and insights. And you might start measuring things to align with that. [For instance,] how many times did my service technician get called out and have the right parts with him the first time?

So first-time fix rate, right? That might be a KPI, but you have to be able to marry that back up to profitability. There is a way to do that, so long as that original KPI didn't somehow disappear.

I'm trying to improve profitability. One of the ways I'm trying to do that is to improve my first-time fix rate. My first-time fix rate is now 90% versus 70%. What's the cost associated with that? How did that affect my profitability?

It might just be a secondary or tertiary calculation or remind yourself of the initial KPI. So just a little legwork upfront on the ops side of things. 

But all of that being said, there's technical execution and operations: finding the KPIs that drive a service manager and a service business, that are actionable, that speak to consumption: "Hey, we've gone and leveraged this AI-driven umbrella of these use cases. What does it mean that your people actually took advantage of the insights from this work?"

And that, by deriving these insights, they're able to make better service decisions, even better on-site deployment decisions, by being able to look at telemetry data and understand it in a contextualized application?

How did that fix or reduce the amount of time they had to be on site? Or how many times was our customer able to fix the issues themselves with a remote call from us, with us never having to be on site?

So first-time fix rates of a remote visit, first-time fix rates of an on-site visit: all of those things become a subset of metrics and inform that larger KPI, profitability.
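As a quick illustration of the "secondary calculation" Chris describes, connecting an operational KPI like first-time fix rate back to the original profitability KPI, here is a minimal sketch; the fleet size and repeat-visit cost are made-up numbers, not PTC figures:

```python
# Minimal sketch (hypothetical numbers): connect an improvement in first-time
# fix rate back to the profitability KPI it is meant to serve.

DISPATCHES_PER_YEAR = 5_000
REPEAT_VISIT_COST = 1_200.0  # hypothetical cost of a second truck roll

def annual_repeat_visit_cost(first_time_fix_rate: float) -> float:
    """Cost of the repeat visits implied by a given first-time fix rate."""
    return DISPATCHES_PER_YEAR * (1.0 - first_time_fix_rate) * REPEAT_VISIT_COST

baseline = annual_repeat_visit_cost(0.70)   # $1,800,000
improved = annual_repeat_visit_cost(0.90)   # $600,000

print(f"Savings from 70% -> 90% first-time fix: ${baseline - improved:,.0f}")
# -> Savings from 70% -> 90% first-time fix: $1,200,000
```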

Daniel Faggella: So I’m picking up a little bit of what you’re putting down. I’m going to try to nutshell and then have you maybe add some nuance to what I’m finding here.

So like you said, sometimes we have an overall objective we're trying to optimize. Profitability is outlandishly vague, but okay, let's just go ahead and say that's where we're going.

What nests underneath that is sort of what you're leading with, and there are a few things that pop to mind.

Number one, you started rattling off these different potential metrics; that actually comes from a lot of experience with customers.

So some of the “What we could measure?” comes from, “Well, I’ve seen this before, and I kind of have some ideas about what to measure.”

The other parts of what to measure might come from someone with a perspective on the data, maybe a data scientist, or someone familiar with, close to, the process, who might already know: what metrics am I already paying attention to?

Hey, I’m the one managing how well this performs already. What do I look at? Maybe I already have some ideas. 

It almost seems like, underneath our meta-goal, there are these kinds of nested things we could look at: some come from experience, some come from someone who's touching the process, touching the data. Is the goal to just kind of collect a number of those potential KPI ideas?

And then talk with business leadership about, “Hey, boss, which one do we want to go with?” 

Or, what's the process to distill them? Because we probably won't land on 20 we can track well; we might have three or four. What are your thoughts?

Chris MacDonald: You start by gathering the 20 possible things out there.

And then, before it even goes to an executive, I always say start with the experienced service manager in the case of service. The service manager or the experienced service technician.

So why I say that is: an experienced service technician that's been doing it a long time has loyalty to the company. They've been there, they have some sort of pride. They probably get a great deal of satisfaction from solving a customer's problem.

That kind of person usually knows what they have to do to not interfere with their customers. They have experience; they can work with the machine in a way that drives customer satisfaction. And that's especially [true] when we're talking about heavy industrial equipment or medical devices.

These service technicians are, frankly, very highly paid, valuable assets in many cases. The guy fixing a blood urinalysis machine is a very smart engineer.

He's so specialized, he knows more than we do at the end of the day. So always start with: what would you measure? What are the types of information? How would that affect your ability to do what you've done so well for so long? How does that affect the customer's satisfaction or ability to relate to our brand in a better way?

Bring [those questions up] with the service manager, saying, "Okay, I have experienced Tom over here. I have a junior technician over here. How do I want to replicate that?"

Or, how do I provide insights that in some way take Tom's experience and upskill the junior technician faster, so that he can make decisions the way Tom does?

And then collectively, how can I run the business that I'm responsible for, the area, geography, or types of equipment that I'm servicing?

What are my KPIs? Combine those together to get the list of 20 down to 10, and then drive it with the service leader to say, "Are these the levers that are really going to make a huge difference across the business?"

Daniel Faggella: Do you advocate that sort of pairing of the data person and the subject matter expert together? If you had it your way in any given project, would you say that would be part of the way we build these every time?

Chris MacDonald: If I had my way, I would say the biggest mistake, actually, across the board, whether it's executives or even a data science company, or your own internal data science team, is

their unwillingness to spend time with the tactical, most experienced person, whether that's a continuous improvement expert or a maintenance engineer on the plant floor, or, in the case of service, a service technician.

I can tell you why: One of the earlier times that I fell in love with applying data science in the industrial space was working with a very experienced data scientist that I’ve known for many years. 

And sitting on a factory floor — this company had just spent tens of millions of dollars centralizing this huge manufacturing plant, and they wanted to be able to predict a certain type of operational event that led to less-than-ideal quality.

So they sensored-up everything. They tried to model everything. They spent days explaining to us all the different types of data they have.

The data scientist's first question, working with us, was, "Can I talk to the guy who's your longest-running operations manager, who has been here for 20 years?"

He goes around and follows him on a maintenance event. And he says, "Okay, what did you just notice? You just replaced something that wasn't part of what you were doing."

And the guy said, "Well, I heard something. I heard something that indicated something." And all they needed, to have a sensor for failure for this event, was a microphone.

So they implemented a microphone, and they had the dependent variable they'd been looking for. They didn't have to spend tens of millions of dollars. We're talking a couple hundred bucks, and they had an answer to a multimillion-dollar problem, right?

So don't underestimate the power of listening to a domain or subject expert. In fact, at the end of the day, in all of these advanced analytics initiatives, the tighter the collaboration you have between software engineers, data scientists, and subject matter experts, that's always going to be the home run.

Daniel Faggella: We call it connective tissue, and certainly in your world it's no less important than it is in every other industry that we cover over here, and you're putting a nail in that coffin big time for the listeners, which I certainly appreciate.

So this rolls us into the last part of the journey. So when you watch people do this, they get their data strategy aligned, and there’s some fluency there, they come together and discern KPIs.

Ultimately, this has to enable action. And part of thinking through strategy is, alright, what are the actions this is going to enable? So we're in the C-suite, the boardroom. We're mapping this stuff out; we've got a strategy that feels strong, we've got analytics that feel strong.

And then we’ve got to think about boots on the ground. What’s the impact? How do you guide people through that part of the process?

Chris MacDonald: Think about consumption right off the bat. 

You build this strategy, you bring this data together, you build a model so you can operationalize the model. Now, all these things I'm saying are not easy. Let's be clear that everything we're talking about makes it easier, because you get some of the business hurdles and some of the politics out of the way, and you make sure that you're aligned to ROI and business sense.

But at the end of the day, the rubber hits the road. Let's say you can do these things, right? Well, then you're in a really good spot; now's not the time to waste it, right? Now's the time to make sure that you're understanding who is the person in the manufacturing plant,

who is the operator, that's going to be looking at this? Or who's the service manager that's going to be looking at these insights? Because at the end of the day, there's a difference between showing a value from a sensor ["the temperature is this"] versus ["this is going to fail because of these factors"].

Well, to many people [that] just looks like another number on the screen. So they have to be able to understand the results of an underlying predictive model, and that means it needs to be summarized. 

That insight needs to be summarized in a meaningful and actionable fashion for the end users that we can safely assume are not analytics experts. 

I mean, in the industrial space, you can almost be guaranteed they're not. So in the space that we serve, it really has to be meaningful to that domain expert. And that's where just a little bit of thought, about asking that person, goes a long way. And maybe that's not even the expert guy, right?

Asking anyone who's going to be using these, doing a little bit of UI testing (it doesn't have to be full-fledged software-company UI testing), but saying, "Hey, if I show you this dashboard that you're looking at, and I show you this prediction, what do you do with that? How do you interpret it? How can I train you? What can I show you to make that clear?"

And by the way, what actions can you take or not take based on this? Doing that stuff as part of the prototyping, thinking about consumption, goes a long way, so you don't roll something out only to realize that no one understands what they're looking at. Because that's the most self-defeating thing you can do.

Daniel Faggella: When you've sunk all the dollars in. So let's talk a little bit about preventing that circumstance. We're talking about a kind of executive strategy and transformation at a high level today: strategy, KPIs, thinking about consumption right off the get-go.

All this is happening on a strategic, executive kind of level. So when we do start picking those projects that are going to fit under this umbrella (which, as you said, ideally is how this operates), we don't just go sniping.

First, we kind of think about our transformation. It almost feels like part of the consideration on maybe even which projects we pick is: which of these do we think will actually be able to get any adoption at all?

Maybe for some of them, any kind of output would be like a whole new interface, a whole new workflow?

And it just feels like, let’s not start there, you know? So does this factor into where we might want to go first?

Chris MacDonald: I think a lot of times, at least in my experience, I've found that honestly, the best place to pilot something is somewhere the executives tend to know about.

Even at what people might think of as some of the oldest, been-around-for-centuries industrial companies that have made things forever, they tend to have these, what they call "pilot plants," or pilot lines, or pilot service areas, where these innovative thinkers are aligned to this product line or this fleet.

And they've always been the people that take on new technologies with some level of success; somehow, they tend to operate in that more organically data-driven way, so if you do it here, this is where you prove it out. They're always ahead of everyone.

And there's something to that. It means that this group has a way of operating that the company, in some ways, might wish the rest of the company had. But they're willing to at least work with these new technologies, to give you a chance to iterate a bit, to be a little more agile, and to be a proving ground, because they want to be the first to win.

They want to be the operator, they want to be the service manager who’s on the cutting edge. Yeah, that’s not a bad thing. 

That’s someone who is willing to work with you who’s willing to take some risks, and you can sort of figure it out, iron out the details in a more innovative environment. 

Daniel Faggella: Well, we had the Global Head of AI at IBM on the program not that long ago, talking about where it honestly makes sense to drive innovation. We've got to consider data and a number of those factors.

But do we have a place where enthusiasm really lives and breathes, and where we think someone's going to be willing to push this through? Like you said, maybe it's a bit of a selfish motive around their career, but so long as it's within the bounds of bettering the business as well, we kind of need somebody with that motive. Imposing it top down ("Steve hates change, but he's in this function that we think is going to transform, so let's have him spend his next 18 months doing this"), maybe that's not the best way to do it. Sounds like you're doubling down on some of that as well.

Chris MacDonald: Maybe I'll demystify a classic sales tactic: any one of our good salespeople is going to find that person, and they're going to try to make that person's career better by bringing us in.

It is absolutely selfish on both sides. But I gotta tell you, more often than not, it works.

Daniel Faggella: Yeah. I mean, I appreciate the frankness; I really appreciate [how] we talk to vendors. And ultimately, the way we see it, Chris, we're in the middle of buyers and sellers all the time; change isn't going to happen unless dollars change hands.

And ultimately, dollars change hands best when everybody’s informed and the value is clear, and they’re excited about it. And being able to find somebody that can actually be excited is very critical.

Chris MacDonald: The good news is that now, in modern software times, we're in a subscription business. And if it doesn't work out, none of us are going to benefit. So it's not like we can really pull one over on anyone.

Daniel Faggella: Incentives are aligned in many regards. Right. 

So there's a lot to be said for that. To put a finer point here on enabling action: you mentioned the idea of having these pilot lines, or what have you, often being a good petri dish to begin with.

When it comes to thinking about actions and consumption up front, is there any guidance that you have around this, kind of mock-ups, or how we envision that actual end result?

Like, what does consumption look like when this gets thought through in a proper way, where it actually hits reality and it does work the way we thought? What steps happen? Because you've seen this go right and wrong, I'm sure.

Chris MacDonald: So I think of it as an application. Whether the application is an iPad, a dashboard in a factory, a laptop, an AR experience, whatever it may be, think in terms of an application: if I bring this application with these insights live, who uses it? Start there.

So, who is using the application? Then ask those people: if I could tell you this, in this amount of time, what would you do with it? Or what would you be able to do with it? And how much time or leeway would you need to be able to do it in a practical manner that is officially proactive instead of reactive? Right?

So that interviewing process, and yes, I think a mock-up helps, to be honest. It doesn't have to be a full-blown "here's a visual of an application." You can probably guess, let's be honest,

what I would look at, as a software engineer, from a UI perspective, when you go into a factory and you're wondering, "How is this the most-used application?" But it is!

It's not up to me to judge it. If that works for them, if it looks like an Excel spreadsheet on a bigger screen but they know how to use it and consume it, I'm all in. That's a successful UI, in my opinion, if they are using it to drive change in their business.

So yeah, whatever that is, mock that up in the way that it's realistically going to be, so you have an understanding of how you're putting information in front of someone, and what the boundaries are to change and make that more consumable. A UI designer, anyone, can do it. Yeah, draw it out, get it in front of someone. They will get it.

Daniel Faggella: Overthinking is common here, like we're building a new startup from scratch, when ideally, you actually just want to be augmenting and leveling up an existing process. Because, especially if it's an early project, the less change the better; the more it looks exactly like what they're already doing, the better.

Chris MacDonald: And when you start introducing entirely new software, an entirely new visualization, that's just not it. The best thing you can do with analytics is another number that means something different, in a context that already exists.
