Tracing AI and Natural Language Processing’s Journey to the Mainstream – with Matt Berseth of NLP Logix

Riya Pahuja

Riya covers B2B applications of machine learning for Emerj across North America and the EU. She previously worked with the Times of India Group and as a journalist covering data analytics and AI. She resides in Toronto.


This article is sponsored by NLP Logix and was written, edited, and published in alignment with our Emerj sponsored content guidelines. Learn more about our thought leadership and content creation services on our Emerj Media Services page.

The rise of Large Language Models (LLMs) between 2021 and 2023 caught many by surprise, if not the entire global economy. Yet for industry insiders like NLP Logix – a Florida-based, AI-driven software developer whose name evokes the natural language processing (NLP) technology that makes LLMs possible – it marked decades-long investments finally paying off.

Historically, NLP stands as the intellectual foundation of not only LLMs but generative AI (GenAI) more broadly, and represents the core methodology that enables machines to understand, process, and generate human-like text and audio. To this day, LLM development – even for the recently released GPT-4o – leverages NLP techniques to comprehend and produce coherent natural language outputs. LLMs have, in turn, revolutionized the NLP field in recent years by scaling up the statistical language properties learned from extensive text collections, building on the foundational intuition of traditional language models.

Emerj Senior Editor Matthew DeMello recently spoke with Matt Berseth, Co-founder and CIO of NLP Logix, on the ‘AI in Business’ podcast about the impact of NLP on the evolution of AI over the last decade, as well as best practices for integrating language models into software platforms across the enterprise, drawing on his wide-ranging experience in the field during that same period.

In the following analysis, we examine three critical insights from their conversation meant for business leaders across industrial sectors to deploy in their organizations:

  • Monitoring technology investments for value and focus: Monitoring investments in technology development to ensure incremental value for end users, not just for the sake of advanced technology, and knowing when to transition focus to new areas or use cases.
  • Launching small-scale projects for gradual expansion: Starting with small-scale projects focused on delivering continuous, tangible value within 90 days to establish initial success, then expanding services gradually as momentum builds.
  • Integrating LLMs into the software development process: Best practices for embracing the integration of language models into software platforms, focused on not treating them as separate features but as an integral part of software development.

Listen to the full episode below:

Guest: Matt Berseth, Co-Founder & CIO, NLP Logix

Expertise: AI, Data Science, Software Engineering

Brief recognition: Matt is the Co-founder and CIO of NLP Logix, where he leads a team of data scientists and engineers who deliver AI solutions. He holds a master’s degree in software engineering from North Dakota State University. He has taught as an adjunct at Jacksonville University and the University of North Florida since 2017, covering topics such as data science, database management, and software engineering.

Monitoring Technology Investments for Value and Focus

Matt begins his podcast appearance by reflecting on his career journey, particularly his role as a software design engineer in test (SDET) at Microsoft. He acknowledges the uniqueness of his career path and recounts a significant experience from his time there.

Matt’s team at Microsoft had spent several years diligently developing a crucial product component: an object-relational mapper, or a sophisticated tool to simplify database operations for developers without requiring extensive SQL coding. As they were nearing the completion of their work, they discovered another team within Microsoft working on a similar project. Given Microsoft’s size and the potential confusion of having two separate but similar products, they were instructed to integrate their technologies.
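For readers less familiar with the pattern, the sketch below illustrates the general idea of an object-relational mapper – plain classes in application code standing in for database tables, with the SQL generated behind the scenes. It uses Python and SQLAlchemy purely as an example; it is not the Microsoft component Matt’s team built.

```python
# Minimal ORM illustration (SQLAlchemy used only as an example; this is
# not the Microsoft tool discussed above). Developers work with ordinary
# classes while the mapper generates the SQL behind the scenes.
from sqlalchemy import Column, Integer, String, create_engine
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Customer(Base):
    __tablename__ = "customers"
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine("sqlite:///:memory:")
Base.metadata.create_all(engine)  # emits CREATE TABLE for us

with Session(engine) as session:
    session.add(Customer(name="Acme Corp"))  # INSERT without hand-written SQL
    session.commit()
    first = session.query(Customer).filter_by(name="Acme Corp").first()
    print(first.id, first.name)
```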

When the teams merged their technologies into a cohesive solution, one key component retained from his team’s work was a comprehensive unit test suite, crucial for verifying the functionality and features the product required. He compares the process to what we see in the current landscape of AI and machine learning, observing that while developing models and software systems that can make predictions or generate output is becoming increasingly feasible, understanding how well they actually perform remains challenging.

Based on his experience at Microsoft, Matt emphasizes the importance of having a reliable metric for assessing the quality of AI systems, particularly concerning their business value. He underscores the necessity of designing these systems with adaptable mechanisms that can accommodate new models and algorithms while continuously improving quality standards.
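One common way teams operationalize that idea – shown here as a hypothetical sketch, not NLP Logix’s internal tooling – is to encode the quality metric as an automated test that any new model or algorithm must clear before it replaces the current one. The model, dataset, and threshold below are illustrative assumptions.

```python
# Hypothetical example of treating model quality as an automated test
# (run with pytest). The model, dataset, and 0.90 threshold are
# illustrative assumptions, not details from the podcast.
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

ACCURACY_FLOOR = 0.90  # business-agreed quality bar

def build_model(X, y):
    # Swap in any new model or algorithm here; the quality gate stays the same.
    return LogisticRegression(max_iter=1000).fit(X, y)

def test_model_meets_quality_floor():
    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.3, random_state=42
    )
    model = build_model(X_train, y_train)
    accuracy = accuracy_score(y_test, model.predict(X_test))
    assert accuracy >= ACCURACY_FLOOR, f"accuracy {accuracy:.2f} below floor"
```

The design point is that the quality bar lives in the test, not in the model code, so the system can accommodate new models over time while the standard only ratchets upward.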

Matt then emphasizes the importance of using technology to solve real-world problems rather than simply adopting technology for its own sake — a focus that echoes the ‘product mindset’ that NLP Logix’s Jennifer Bradshaw and Arash Kamiar described in a previous podcast appearance. He stresses aligning technological solutions with business problems and creating practical solutions that address specific customer needs. He suggests that there should be a deliberate focus on applying an engineering mindset to problem-solving, where the input typically involves data, and the goal is to build software systems that continuously iterate and make incremental improvements toward solving the problem more effectively.

Matt also acknowledges the challenges in the field, particularly in balancing the pursuit of incremental improvements with the associated costs. He points to the concept of diminishing returns, where each additional unit of value created may require increasing investment, and emphasizes the importance of monitoring these investments to ensure they yield incremental value – and of knowing when to transition to new use cases or areas of focus.

Launching Small-Scale Projects for Gradual Expansion

Matt further reflects on the evolution of NLP Logix alongside natural language processing as a discipline over the past 13 years and the strategic insights that have shaped its trajectory.

He starts by discussing the transition in software development from desktop applications to web-enabled systems focused on data management, noting the exponential growth in structured data sources during the same period.

He then recounts pivotal moments in the emergence of machine learning challenges, such as Netflix’s million-dollar competition to improve its recommendation engine. He describes how the now-famous strategy spurred innovation and led to the rise of platforms like Kaggle, where individuals and organizations could compete to solve complex machine learning problems.

Matt then turns to the difficulty of building machine learning models, deploying them in production, and extracting value from them over time.

Recognizing the challenges and opportunities presented by machine learning, Matt’s company – NLP Logix – decided to focus on bridging the gap between model development and production deployment. Just as he advised the Emerj executive podcast audience earlier in the episode, NLP Logix adopted an approach centered on extreme customer focus. In the process, they sought to demonstrate the value of machine learning solutions to businesses, especially those with valuable data and manual processes that could be improved.

To engage customers, the company started with small-scale projects, typically completed within 90 days, to deliver tangible value continuously. The approach allowed them to establish initial success and expand their services gradually as they gained momentum.

Taking a larger view of these developments, Matt once again returns to the transformational impact NLP has had on data collection and utilization. He starts by referencing the period from 2000 to 2010, characterized by the development of systems primarily focused on collecting data. Back then, Berseth notes, many systems included unstructured data fields, such as free-text comments, that often went underutilized.

Matt also emphasizes the value derived from such unstructured and messy data fields, highlighting the importance of knowing how to work with and extract insights from them. He suggests that even 13 years ago, there was potential to extract valuable information from these data sources if one knew how to model and structure the data effectively.
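As a simple illustration of what structuring a free-text field can look like – a toy sketch with invented comments and categories, not an NLP Logix pipeline – even basic keyword rules can turn unstructured comments into a field that can be counted, filtered, and reported on:

```python
# Toy sketch: turning free-text comments into a structured category with
# simple keyword rules. The categories and comments are invented for
# illustration; production systems would use trained NLP models instead.
CATEGORY_KEYWORDS = {
    "billing": ["invoice", "charge", "refund"],
    "delivery": ["late", "shipping", "delayed"],
    "support": ["rude", "helpful", "agent"],
}

def categorize(comment: str) -> str:
    text = comment.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(keyword in text for keyword in keywords):
            return category
    return "other"

comments = [
    "The invoice was wrong and I need a refund.",
    "Shipping was delayed by two weeks.",
    "The agent was very helpful, thanks!",
]
for comment in comments:
    print(categorize(comment), "->", comment)
```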

The exponential growth of NLP can be observed, he argues, in how quickly ‘chatbot’ went from a pejorative to a technology nearly everyone across business culture is comfortable using in their everyday tasks:

“The space is just totally evolved on what we think is possible. I always say – back 15 months ago, or the day before ChatGPT became a thing – if you said the word ‘chatbot,’ then everybody would roll their eyes and say, ‘No, I’m not talking to a chatbot. Are you kidding me?’ And here we are 15 months later, and I think everybody’s got a browser tab open right now to a chatbot.”

– Matt Berseth, Co-Founder & CIO, NLP Logix

He goes on to note that almost everyone has directly interacted with these language models, perhaps by using them to create a recipe, generate ideas, or rewrite a resume. As a result, individuals are more inclined to understand how these models can be applied to solve business problems or explore new possibilities.

Previously, Matt explained that his company’s strategy involved showcasing complex problems they had solved, such as processing MRI scans for abnormalities in partnership with the Mayo Clinic. By demonstrating their capabilities in handling such challenging tasks, they could convince prospects in various industries, even those unrelated to healthcare, of their ability to address their automation needs. Of course, the conversation has become even easier with the widespread adoption of language models today, particularly in direct user interactions. 

Integrating LLMs into the Software Development Process

Towards the conclusion of the episode, Matt suggests that a significant shift is happening in enterprise software development. He predicts that the enterprise software created over the past 30 years, primarily focused on automating back-office systems, will rapidly transform in the next five to 10 years. Instead of adding AI as an afterthought or as an isolated component, he believes that language models will become integrated into the fabric of software platforms.

In this new landscape, AI will not be viewed as a separate feature but as an integral part of software development. Matt ends his podcast appearance by offering listeners a vision of a future where AI expertise becomes a fundamental skill set for software developers and business leaders across industrial sectors.
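To make that distinction concrete, the sketch below embeds a language model call inside an ordinary back-office function rather than exposing it as a standalone chatbot feature. The OpenAI Python client, model name, and ticket-routing use case are illustrative assumptions rather than anything discussed on the podcast.

```python
# Minimal sketch of an LLM embedded inside ordinary application logic,
# rather than bolted on as a separate chatbot feature. The OpenAI client,
# model name, and ticket-routing use case are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def route_support_ticket(ticket_text: str) -> str:
    """Classify a ticket into a queue as one step in a back-office workflow."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[
            {"role": "system",
             "content": "Classify the ticket as one of: billing, delivery, support. "
                        "Reply with the single word only."},
            {"role": "user", "content": ticket_text},
        ],
    )
    return response.choices[0].message.content.strip().lower()

# The caller never sees a 'chatbot'; the model is just one step in the workflow.
queue = route_support_ticket("My invoice was charged twice last month.")
print(queue)
```

The point of the pattern is that callers of route_support_ticket never interact with a chatbot at all; the language model simply does its work inside the software, in line with the shift Matt describes.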
