Proving the economic value of AI projects remains paramount to the success and continuation of any machine learning initiative. As more companies adopt AI technologies, measuring the success of these projects becomes increasingly important – unfortunately, demonstrating that value is anything but straightforward.
While other significant projects at the enterprise level have a longstanding set of KPIs and metrics to measure success, the technical complexity and constantly evolving nature of AI and data make it difficult for enterprises to determine what constitutes “success.”
If your enterprise AI team is struggling to implement and maintain AI projects that add quantifiable value to your company, establishing a set of agreed-upon KPIs and metrics is imperative before deploying AI models.
To find out what those metrics should be, Emerj CEO Daniel Faggella sat down on the ‘AI in Business’ podcast with Morgan Stanley Assistant Vice President Supreet Kaur to discuss the most important metrics to use when measuring the success of an enterprise-grade AI project.
This article takes a closer look at three critical insights from Kaur that business leaders must consider when embarking on enterprise-scale AI projects:
- Define success metrics first, then continuously measure in real time: Have KPIs and success metrics in place before starting an AI project, along with real-time monitoring frameworks to measure the model’s performance over time.
- Have the right team assembled: Find the right balance between diversity and cohesiveness among team members when developing a minimum viable product (MVP).
- Foster a fail-fast culture of acceptance: Accept that AI is probabilistic, not deterministic, and build an organization that innately learns from mistakes rather than avoiding risk.
Listen to the full episode below:
Guest: Supreet Kaur, Assistant Vice President at Morgan Stanley
Expertise: data science, machine learning, artificial intelligence, mentorship
Brief Recognition: Supreet is the founder of DataBuzz, a thriving community of tech enthusiasts, and is a leading, passionate voice in the field of data science, serving as a member of the advisory board for Rutgers University’s MBS Analytics program, as well as several other organizations. Supreet is deeply committed to mentoring the next generation of data science leaders and applying her expertise to essential advocacy work.
Define Success Metrics First, Then Monitor the Model Over Time
To ensure alignment with a company’s strategic goals and objectives, defining a comprehensive measurement strategy is imperative before initiating an AI project. Failure to do so can make it challenging to evaluate the effectiveness of the project or optimize the model to achieve the desired results.
KPIs and metrics provide quantifiable goals and benchmarks that enable ongoing model performance evaluation throughout the project lifecycle. This real-time monitoring of the model’s performance ensures timely adjustments can be made as necessary to achieve the desired outcomes.
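To make the idea of real-time monitoring concrete, the sketch below shows one way such a check might look in Python. It is a minimal illustration, not something from the episode: the rolling-accuracy metric, the 85% KPI threshold, and the print-based alert are all assumptions standing in for whatever success metrics and alerting mechanism a team agrees on before deployment.

```python
from collections import deque


class ModelPerformanceMonitor:
    """Minimal sketch: track a rolling accuracy KPI for a deployed model.

    The window size and KPI threshold are illustrative assumptions; in
    practice they would come from the success metrics agreed upon before
    the project starts.
    """

    def __init__(self, kpi_threshold: float = 0.85, window_size: int = 500):
        self.kpi_threshold = kpi_threshold
        self.outcomes = deque(maxlen=window_size)  # 1 = correct, 0 = incorrect

    def record(self, prediction, actual) -> None:
        """Log one production prediction against its eventual ground truth."""
        self.outcomes.append(1 if prediction == actual else 0)

    def rolling_accuracy(self) -> float:
        """Accuracy over the most recent window of predictions."""
        if not self.outcomes:
            return float("nan")
        return sum(self.outcomes) / len(self.outcomes)

    def check(self) -> bool:
        """Return True if the model still meets its KPI; alert otherwise."""
        accuracy = self.rolling_accuracy()
        if accuracy < self.kpi_threshold:
            # In a real pipeline this might page the MLOps team or open a ticket.
            print(f"ALERT: rolling accuracy {accuracy:.2%} is below "
                  f"the agreed KPI of {self.kpi_threshold:.2%}")
            return False
        return True


# Example usage with made-up predictions and ground truth:
monitor = ModelPerformanceMonitor(kpi_threshold=0.85, window_size=100)
for pred, actual in [(1, 1), (0, 1), (1, 1), (0, 0), (1, 0)]:
    monitor.record(pred, actual)
monitor.check()
```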
One crucial metric that should be measured early in the project is the time it takes to deliver a minimum viable product (MVP) – or the time it takes to move from development into production. This measurement enables business leaders to determine whether the project is on track and identify potential areas for improvement to ensure timely delivery of the desired product.
When that milestone slips, there are several possible reasons, and questions need to be asked to get to the root cause of the project’s failure. Kaur describes what to start asking when the model does not make it into production:
“What is the data you’re using in your products – is it the accuracy, or is it the compliance?”
Kaur says that when a product fails to make it to production, there’s “definitely an assessment that’s needed for you to get out of your Jupyter notebooks and get into the production realization of the module.”
Assemble the Right Team
Building a diverse team with the necessary expertise and collaboration skills to develop an MVP efficiently and effectively is critical for any AI project’s success. “AI is a very collaborative process, and for an AI product to succeed, you need different heads in the game – you need all the different perspectives,” Kaur says.
“You need data strategists, you need data scientists, you need [machine learning ops] engineers to monitor your model, and then you also need the business SMEs to provide you some insights so that you can take that feedback, tweak your model, come back and improve it,” she continues.
Ensuring the correct composition of roles within a team is just one part of building the “right team” for an AI project. Additionally, it is crucial to assess the team’s capabilities to deliver a successful MVP from development through release.
This requires assembling a diverse, proficient team with specialized skills in data science, machine learning, and software engineering, whose members can collaborate productively toward the project’s goals.
Foster a Fail-fast Culture of Acceptance
It is imperative to promote a company culture that encourages experimentation and learning from mistakes rather than fearing failure and, in turn, avoiding the risk-taking required for AI teams to succeed.
In fact, a recent Oxford study published in the Journal of Business Research found organizational culture to be the second most significant barrier to AI adoption in the enterprise, with the inability to change culture cited as the primary barrier to adoption in 64% of the organizations surveyed.
For an AI discipline to grow and thrive at the enterprise level, leadership must be ready for a paradigmatic shift in a company’s culture and model: AI is probabilistic, not deterministic.
That’s starkly distinct from traditional business models of setting a budget, expecting a specific percentage increase, and then moving on to the next project.
Ultimately, however, Kaur says the most vital factor to observe is how someone is treated when they make a mistake. The success of an AI initiative within an enterprise, she says, hinges on the willingness of senior leadership to cultivate such a culture.
Kaur adds that this isn’t something that can happen at the practitioner- or data-scientist level; it must come from senior leadership – leadership that says to members of an AI team, “we are providing you that space, and we are OK if this product doesn’t work, but it’s worth trying.”
She continues that the leadership team has to be comfortable with the possibility that the product might not work.
“This will empower your team members to innovate, but at the same time, learn from that and then adapt to the newer technologies based on what they learned from the failures of past projects.”