A Framework for Long-Term and Near-Term AI ROI

Daniel Faggella

Daniel Faggella is Head of Research at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and leading enterprises, Daniel is a globally sought-after expert on the competitive strategy implications of AI for business and government leaders.

We spoke with David Carmona, General Manager of Artificial Intelligence at Microsoft, about how he approaches AI ROI with the enterprise clients he works with. The biggest takeaway from this episode comes right at the beginning, where David talks about how to think about artificial intelligence ROI in both the long term and the near term.

That is to say: how can we see a relatively near-term return from AI that improves our position today, while keeping in mind the longer-term disruption in our industry?

Guest: David Carmona, General Manager of AI – Microsoft

Expertise: AI and machine learning

Brief Recognition: Carmona has worked at Microsoft for 18 years in various managerial roles, becoming its General Manager of Artificial Intelligence in 2017.

Interview Highlights

(01:30) When the C-suite is really thinking through an AI project, it's irresponsible for them to ignore return on investment, but it's very hard to pin AI to a specific monetary goal for any given project within a limited time horizon. How do you get the best of both worlds when it comes to AI ROI for a specific project?

David Carmona: When we say ROI, we usually mean a pure, short-term revenue play. And that is tricky, right? We're talking about a technology like AI that can change your business entirely.

So instead of that, we usually prefer to talk about value, which is broader than just revenue. Yes, it could be lower cost or more revenue, but it could also be improving my customer experience, increasing employee productivity, or many other things that are broader than a short-term revenue play. But as you said, you have to balance both.

Many times the project has to pay for itself as you work on it, right? So leaders are also looking for that immediate return on investment. What we do at Microsoft is use a very, very simple framework to help customers strike that balance between the two.

We call that framework agile value modeling, and it's super simple. Let me give you a couple of hints about it; it's really more of a conversation with customers. We ask them to position all the opportunities they have for AI, all the opportunities they can think of to bring AI into their business.

By the way, the one thing we always say here is: before doing that, know the capabilities of AI. Remember from the previous podcast that you need to know what AI can and cannot do in order to have that conversation, right?

Once you have that, you ask them to position all those opportunities. You can think of it in a very simple way: tactical projects on one side, and strategic programs or initiatives on the other.

That's it. That's the only thing you need to do. Each opportunity is then just a bubble, sized by the size of the opportunity. That could be more revenue, as you said, or a better experience for customers.

And then you have that conversation. That's why we call it agile: it's an iterative conversation. The point is that you may have a big bubble that is longer term, and then you need to map it to the smaller, shorter-term bubbles on the tactical side, right?

Sometimes you need to think big but start small, and you need to consider both things. In that framework, we try to have that conversation of thinking big but getting there with smaller projects that you can measure. That's the second thing we ask for in this exercise: always think about how you are going to measure success for each of those bubbles.

Those measures could be, as you said before, revenue, but they could be anything. It could be improving the insight I get from my customer service.

It could be reducing the workload in my call center, improving satisfaction with the product, or growing the revenue of a particular segment. Whatever it is, you always need to put a label on each of those bubbles.

Maybe the best project for reaching the big bubble, that big value you're looking for in the long term, is not going to be the most profitable one in the short term. So you need to have that balance. You need to think, "Hey, I'm going to target this big strategy in my company," and then, yes, I need to get there through smaller projects that can return my investment immediately. But you can have that discussion openly.
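To make the exercise concrete, here is a minimal sketch of how such a portfolio of "bubbles" might be captured in code. Everything here, the `Opportunity` structure, the field names, and the sample projects, is our own illustrative assumption, not an actual Microsoft tool:

```python
from dataclasses import dataclass

@dataclass
class Opportunity:
    """One 'bubble' in the agile value modeling exercise."""
    name: str
    horizon: str         # "tactical" (short-term project) or "strategic" (long-term program)
    value_size: float    # estimated size of the opportunity, in relative units
    success_metric: str  # the measurable label Carmona says every bubble needs

# Illustrative portfolio: one big strategic bubble plus the smaller
# tactical projects that can pay for the journey toward it
portfolio = [
    Opportunity("Personalized product recommendations", "strategic", 9.0,
                "revenue of the targeted customer segment"),
    Opportunity("Call center triage assistant", "tactical", 3.0,
                "call center workload (agent hours per week)"),
    Opportunity("Customer service insight dashboard", "tactical", 2.0,
                "customer satisfaction score"),
]

# "Think big, start small": review the strategic bubbles first, then the
# tactical projects mapped to them, each with its success metric attached.
for opp in sorted(portfolio, key=lambda o: (o.horizon, -o.value_size)):
    print(f"[{opp.horizon:>9}] {opp.name} "
          f"(size {opp.value_size}) -> measure: {opp.success_metric}")
```

The point of the `success_metric` field is Carmona's second ask: no bubble goes on the chart without a stated way to measure its success.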

(08:30) What are some of the best practices for bringing a corporate AI project to life?

DC: So imagine you did an amazing job in that first phase, right? You are now at a point where you've identified the right projects, that balance between short term and long term, and the business metrics you want to drive with each of those projects. In an ideal world, it would be as simple as throwing those projects over the wall, right?

And just waiting for them to be completed. That would be the ideal world. Now, the problem is that AI is not that simple. It's not like building a house or a bridge, where you just need to follow the blueprint.

A lot of things can go wrong with an AI project. And I think the interesting thing here is that we have had this experience already.

We really solved this for software development: the idea that when you start a project the business is connected to it, and then, as you implement the project, the business stays connected to its execution.

So it's not something you throw over the wall while the business stays disconnected until the end. We have already solved that for software development, and it's called DevOps. In a sense, DevOps is something bigger: it's a philosophy, technologies, practices.

But at its essence, DevOps is making sure that while you are executing the project there is a continuous connection to the business so it doesn't fork, so the initial idea you had at the beginning with this framework ends in business value at the end.

There are many names for this. As an industry, we are still trying to align on how DevOps applies to AI, because it's not the same. I'm oversimplifying here; in reality, AI has many differences from traditional software development.

You have data, you have the concept of models, and in the end the model can go wrong because the data changes, right? So it's very, very different, and you have a new role, the data scientist, that you don't have in traditional software development.

From what we've seen, the market is starting to settle on the name MLOps, which is the same concept as DevOps but applied to machine learning.
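One concrete example of "the model can go wrong because the data changes" is distribution drift between training data and live traffic. Below is a minimal sketch of a drift check using the Population Stability Index; the 0.2 alert threshold is a common rule of thumb, an assumption on our part rather than anything from the interview:

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a training-time feature
    distribution ('expected') and the live one ('actual')."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip empty bins to avoid division by zero and log(0)
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

rng = np.random.default_rng(0)
training_feature = rng.normal(0.0, 1.0, 10_000)  # what the model was trained on
live_feature = rng.normal(0.4, 1.2, 10_000)      # what production sees today

score = psi(training_feature, live_feature)
print(f"PSI = {score:.3f}; retrain alert: {score > 0.2}")
```

A check like this is one reason the data scientist role exists in the loop: someone has to decide when the live data has moved far enough that the model needs retraining.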

(11:30) When you think about a good, balanced team that ultimately lands on business value and gets to see some of that value come to life, what does it look like?

DC: There are two things I see when this is working. The first is that, of course, the business cannot get involved in, I don't know, the day-to-day development, right? It's not that you can send a piece of code or, I don't know, your TensorFlow model to the business person for him or her to provide feedback. That is not going to work.

You have to be able to get the business involved in your daily process, which means you need to deliver daily. That's the first thing about MLOps: the ability to deliver not at the end of the project, not every week, but every day or even several times a day, so the business can be involved continuously in that process.

For that to happen, the best practice is, of course, automating your life cycle: making sure you have a technology and a process in place from the moment a data scientist creates a new model to the moment that model is packaged and deployed into production. All of that has to be automated so it can happen.

We usually call the time that takes the mean time to resolution, or MTTR. You want that time to be as short as possible, ideally minutes.

You don't want to be manually sending something to a developer, having the developer package it with the documentation, spending three days understanding the model, and then moving it into production through a different process. You want all of that automated. That would be the first thing.
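As an illustration of what an automated life cycle could look like end to end, here is a hedged Python sketch. Every step is a placeholder standing in for real training, gating, packaging, and deployment infrastructure; none of this reflects a specific Microsoft pipeline:

```python
import time

def train_model():
    """Data scientist's step; placeholder for real training code."""
    return {"name": "churn-model", "version": "2024.1"}

def validate(model) -> bool:
    """Automated quality gate: accuracy, fairness, smoke tests (placeholder)."""
    return True

def package(model):
    """Serialize or containerize the model with its dependencies (placeholder)."""
    return model

def deploy(artifact) -> None:
    """Push to the production or pre-production endpoint (placeholder)."""
    print(f"deployed {artifact['name']} v{artifact['version']}")

def release_pipeline() -> None:
    # One automated pass from new model to production, with no manual
    # hand-offs, so the business can see changes daily or several times a day.
    start = time.time()
    model = train_model()
    if not validate(model):
        raise RuntimeError("Validation gate failed; nothing was deployed.")
    deploy(package(model))
    # Carmona's target: keep this end-to-end time in minutes, not days.
    print(f"Release took {(time.time() - start) / 60:.2f} minutes")

release_pipeline()
```

In practice, each placeholder would delegate to a CI/CD system and a model registry; the point of the sketch is only the shape of the loop: no step waits on a person.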

(14:00) In terms of the subject matter expert's role in that daily delivery, is there often some degree of feedback?

DC: That's a great question, because it connects very well with the second point I was going to make, which is critical: that automated pipeline, that automated life cycle, should be a closed loop.

When you put something in production, you have to put things in place like usage monitoring and health monitoring, so you can understand the usage that model is getting in production.

It's not that you have a gate with the business, where you send a new version, the business takes a look at it, and the business sits in your pipeline as a gate, a blocker that stops your delivery. No. What you do is continuously move to production.

So the system is always up to date in your production or pre-production environment for the business to start using it. And the business should start using it in the context of the project.

You should put it into real usage as soon as possible, and as the business uses it, development can monitor what is happening.

They can monitor not only the usage but also the metrics, the outcomes, and the business can provide feedback as they use it. So look at it as "use it and learn from it," not the other way around, with the business acting as a gate, a blocker, all the time.

Then, I think, the most difficult part is that as you do all this, you shouldn't lose the measurements you decided on when you built your framework at the very beginning.

That bubble you put a value on: as you develop, you need to make sure you don't lose track of it, because the other thing we see all the time is that you have this amazing vision, but once you move that vision to the technical teams, all they care about is the model accuracy or the reliability of the system. Of course those are things we have to care about, but that was not the point.

We don't only want a reliable system; we want a system that is connected to the business value we defined at the beginning. So that should be part of the entire loop all the time.
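As a sketch of what keeping the business metric in the loop might look like, here is a minimal monitor that records a technical signal and the business signal side by side. The metric name and values are illustrative assumptions, not part of any described Microsoft system:

```python
from dataclasses import dataclass, field
from statistics import mean

@dataclass
class ClosedLoopMonitor:
    """Tracks model health and the business metric together, so the team
    never optimizes accuracy while losing sight of the bubble's label."""
    business_metric_name: str
    correct: list = field(default_factory=list)
    business_values: list = field(default_factory=list)

    def record(self, prediction_correct: bool, business_value: float) -> None:
        # business_value: e.g., minutes of call-handling time saved per case
        self.correct.append(1.0 if prediction_correct else 0.0)
        self.business_values.append(business_value)

    def report(self) -> str:
        return (f"accuracy={mean(self.correct):.0%}, "
                f"{self.business_metric_name}={mean(self.business_values):.2f}")

monitor = ClosedLoopMonitor("avg_minutes_saved_per_call")
monitor.record(prediction_correct=True, business_value=4.5)
monitor.record(prediction_correct=False, business_value=0.0)
print(monitor.report())  # both signals travel through the loop together
```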

(11:30) What are some other ways that people steer the wrong way and get this wrong?

DC: I think you hit on a little bit of that in your previous comment. The test I always do when I talk with an enterprise is this: imagine you go to a data scientist or a developer in your organization. Now imagine you ask that person what they are paid to do.

It's a simple question: what exactly are you paid to do? And 99% of the time, the common answer will be, "Hey, I'm paid to deliver high-quality code," or, if that person is a data scientist, "I'm paid to deliver accurate models."

That is a symptom of not embracing MLOps. As I just said, that person should say, "Hey, I'm paid to increase customer satisfaction by X points," or "reduce cost by X dollars," or whatever the metric is for the project.

That is a simple test you can use to understand whether your company is really connecting business with technology throughout the whole process.

I can actually share how we took that philosophy to the extreme at Microsoft, internally, in our own development.

This was a while ago, but we had a very functional organization at Microsoft: a team of data scientists, a team of developers, product owners, et cetera, right?

The philosophy we use now is very different. Instead of that organization, what we have now are cross-discipline teams.

You have a team with several data scientists, several developers, several production administrators, product owners, et cetera. And we actually took it even further, because we brought all those people together in the same room.

For many teams, we changed the configuration of the office to have bigger rooms where all those teams can work together. So when you are a data scientist and you are, I don't know, handing your model to the developer, the developer is by your side. We went that far with this model. And the reporting lines are not what matters here.

You can still have a reporting line, maybe to the data science discipline. But having people together, connected by that business outcome, made a huge difference for us.

Because then, when you ask anybody on that team, "Hey, what is your goal?" the developer is not going to say code quality. The developer, or any other person on the team, will say whatever the business outcome is for the project they are part of. That had a big impact, and that is part of the DevOps culture, and now the MLOps culture: making sure everybody is aligned to the same goal and working together.

(23:00) Are there other aspects of the change that people should make note of?

DC: There are three things we make part of this culture transformation, which we cover in the AI Business School. One is the concept of being data-driven. The team we were mentioning before has to have data as part of its DNA: every decision they make, everything they do, should be based on data.

That is the data-centric culture you need to foster, because it's the foundation for AI. When you do that, you are going to create the data, share the data, and end up with the high-quality data you can then use to create relevant AI.

That's the first thing I would say. The second has more to do with empowerment: unless leadership fully encourages business and technology to take ownership, to be empowered to achieve that business outcome, it's very difficult to make this work.

It's not just about getting the team together; it's making sure the leadership team empowers that team to make the decisions that drive that business impact. It's the whole concept of empowering everybody in your organization toward those business goals.

And the third one, which I think is very specific to AI, although it also has some impact on software development, is responsibility. AI is very different from software development in the sense that it comes with challenges like fairness, transparency, privacy, and many others that you have to keep in mind in that continuous iteration we described as part of MLOps.

Every task and every phase there should also take into account those specific aspects that AI brings to the business.
