Strategic Adoption of Large Language Models and Generative AI – with Asif Hasan of Quantiphi

Riya Pahuja

Riya covers B2B applications of machine learning for Emerj - across North America and the EU. She has previously worked with the Times of India Group, and as a journalist covering data analytics and AI. She resides in Toronto.


As nearly every American household has realized over the last year, large language models and generative AI capabilities pose tremendous challenges and opportunities for enterprises of every shape and size. Just ask anyone who has heard of ChatGPT.

Yet awareness and popularity of the platforms built on these technologies far outstrip understanding of how they fundamentally work, increasing the risk of ‘hallucinations’ and other ways they can propagate misinformation.

Data privacy is another critical concern intertwined with large language models like ChatGPT. The vast amount of data generated and processed during interactions raises questions about the security and confidentiality of personal information. As society embraces the benefits of AI-powered systems, striking a balance between innovation and protecting individuals’ privacy becomes paramount. 

Alex Babin, co-founder and CEO of ZERO Systems, described the problem in a recent interview with Emerj: “It’s an interesting dilemma for enterprises right now – on the one hand, they want to use external LLMs, but at the same time, they are really concerned about data privacy.”

“That’s why they are particularly careful about choosing vendors who can be the bridge between LLMs and their environment,” Babin continues. “Those with a track record in deploying LLMs and using data privacy techniques in the enterprise environment have a good chance at winning the gen AI race.”

To address these challenges, companies must be acutely mindful of their adoption strategies and how they pursue return on investment. 

Emerj CEO and Head of Research Daniel Faggella recently spoke with Asif Hasan, Co-founder of Quantiphi, on the AI in Business podcast to discuss how business and financial leaders should think about what these technologies will come to mean for the global economy over the next half-decade.

The following analysis examines two key insights from their conversation: 

  • Disrupting systems to achieve ROI: Generative AI used in tandem with language models will disrupt many industries, sectors, and workflows – but it will drive ROI specifically where it can disrupt entire systems rather than individual tasks. 
  • Key considerations framework for adopting generative AI: Six criteria covering the metrics and provisions necessary to adopt generative AI responsibly while supporting the core of enterprise operations.

Listen to the full episode below:

Guest: Asif Hasan, Co-founder, Quantiphi

Expertise: Machine learning, Big Data and Risk Modeling

Brief Recognition: Asif co-founded Quantiphi 10 years ago as a deep learning and artificial intelligence solutions startup. He has broad experience in machine learning, including computer vision, speech recognition, natural language understanding, risk modeling, churn prevention, supply-chain optimization, predictive maintenance, customer segmentation, and sentiment analysis.

Disrupting Systems to Achieve ROI

Asif discusses with Emerj how generative AI works, as well as its potential impact on economic metrics and human productivity. He begins by noting that he thinks about the topic across two time horizons: one to two years and three to five years. He draws this distinction because some changes resulting from generative AI will take longer to manifest than others.

To address the impact of generative AI on return on investment (ROI), Hasan suggests focusing on the most fundamental economic metric that generative AI will influence. He believes this metric is the marginal cost of performing a cognitive task, which he considers the basic unit of human productivity. 

In simpler terms, he suggests that generative AI will steadily reduce the cost of completing cognitive tasks in the coming years, much as Moore’s Law halved the cost of computing roughly every 18 months.

Hasan explains that in the past five-to-seven years, machines have become capable of tasks like seeing, hearing, understanding language, and recognizing patterns. Yet in the previous paradigm of AI, a model would have to be custom-built for each specific task using supervised learning, which was both complex and expensive.

Now with generative AI, it is possible to pre-train a large language model to perform fundamental tasks, such as predicting the next word in a sequence or the next pixel in an image. This pre-trained model can then serve as a foundation for an AI system, which can be fine-tuned at a lower cost to perform a wide range of specific tasks.
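
As a rough illustration of this pattern, the sketch below fine-tunes a small, publicly available pre-trained model (GPT-2 via the Hugging Face transformers library) on a toy corpus. The model choice, data, and settings are illustrative assumptions, not Quantiphi’s stack:

```python
# Minimal sketch: reuse a pre-trained foundation model and fine-tune it on a
# small domain-specific corpus, instead of training a model from scratch.
from transformers import (AutoTokenizer, AutoModelForCausalLM,
                          DataCollatorForLanguageModeling, Trainer, TrainingArguments)
from datasets import Dataset

tokenizer = AutoTokenizer.from_pretrained("gpt2")   # small pre-trained foundation model
model = AutoModelForCausalLM.from_pretrained("gpt2")
tokenizer.pad_token = tokenizer.eos_token           # GPT-2 ships without a pad token

# Toy corpus standing in for enterprise-specific text (illustrative only).
texts = ["Customer: my card was declined. Agent: I can help with that.",
         "Customer: how do I reset my password? Agent: use the account portal."]

dataset = Dataset.from_dict({"text": texts}).map(
    lambda batch: tokenizer(batch["text"], truncation=True, max_length=64),
    batched=True, remove_columns=["text"])

# The collator builds next-token-prediction labels, the same objective used in pre-training.
collator = DataCollatorForLanguageModeling(tokenizer, mlm=False)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetuned-model", num_train_epochs=1,
                           per_device_train_batch_size=2),
    train_dataset=dataset,
    data_collator=collator,
)
trainer.train()   # fine-tuning: far cheaper than repeating pre-training
```

The point of the sketch is structural: the expensive pre-training step is reused rather than repeated, and only the comparatively cheap fine-tuning step is specific to the task at hand.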

Hasan suggests there will be many visible examples of task-level disruptions in the next two years. Human-AI combinations can perform the same tasks currently done by human agents but with better, faster, and cheaper results. He provides examples such as customer service agents following call scripts, paralegals reviewing commercial contracts, or developers writing code. 

However, Hasan believes a much more significant impact on return on investment (ROI) will occur when there is a system-level disruption through combining AI capabilities in novel approaches to fulfill customer needs. This level of disruption fundamentally affects business models and may take three to five years or longer to materialize fully.

Asif provides further clarification by using examples to differentiate between task- and system-level disruptions.

Generative AI can summarize a film’s critical moments into a three-minute video, automating the task of creating a trailer or sizzle reel. However, its impact on the movie industry goes beyond this task. Hasan highlights a system-level disruption where generative AI redefines the business model of movies, affecting players, distribution channels, and audience consumption. True to their name, system-level disruptions can lead to significant industry-wide changes.

Hasan shares his approach to helping customers identify impactful use cases for AI and establish a strong return on investment (ROI) framework. He outlines three key factors to consider when uncovering these use cases.

  • Human-parity tasks: Areas where AI matches or exceeds human performance, such as replying to emails or creating presentations.
  • Repetitive and costly tasks like customer service: Compared with outsourcing, the idea is to identify functions that can be performed better, faster, and cheaper with generative AI.
  • Tasks with clear written instructions: Work that can be automated from well-documented procedures, like drug discovery in the life sciences domain.

Explaining how this approach works in the drug discovery context mentioned above, Asif specifies:

“When you get through the process of first discovering a target, then screening the molecule, then getting to a hit, and then getting to the most promising compound followed by tests to check the safety of the compound – all of this happening in a matter of months, not years. So across many industries there are similar examples of long, expensive steps in the value chain that I feel are ripe for some sort of acceleration, with generative AI.”

– Quantiphi Co-founder Asif Hasan

He believes any industry value chain involving robust scientific research and development would also be a good candidate for applying generative AI. He provides an example from the life sciences domain, where language models are trained not just on the English language but also on gene sequences, nucleotides, and protein shapes. 

He suggests that generative pre-trained architectures have the potential to impact any data that can be organized as a sequence. As one example, he proposes training a language model on a user’s journey to predict the next action the user will take. He believes that exploring such areas and applying generative pre-trained models could sharpen the ability to predict future actions and when they will occur.
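
As a rough, invented illustration of that framing, a user journey can be encoded as a sequence of discrete action tokens and converted into next-action training pairs, mirroring the next-word objective used to pre-train language models (all action names below are hypothetical):

```python
# Hypothetical sketch: treat a user journey as a sequence of "action tokens"
# so a generative, next-token model can be trained to predict the next action.
journeys = [
    ["visit_home", "view_product", "add_to_cart", "checkout"],
    ["visit_home", "search", "view_product", "exit"],
]

# Build a vocabulary over actions, exactly as a language model does over words.
vocab = {action: idx for idx, action in enumerate(sorted({a for j in journeys for a in j}))}

# Turn each journey into (context, next_action) training pairs for a
# next-token objective -- the same objective used to pre-train LLMs.
pairs = []
for journey in journeys:
    ids = [vocab[a] for a in journey]
    for t in range(1, len(ids)):
        pairs.append((ids[:t], ids[t]))   # predict action t from actions 0..t-1

print(pairs[:3])
```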

Key Considerations for Adopting Generative AI

Hasan later discusses six key factors that enterprise leaders must consider when adopting large language models (LLMs) or generative AI. The following is a breakdown of each factor:

  • Understanding the new generative paradigm: Leaders should understand the difference between generative AI and the older paradigm of task-specific models. Generative AI allows for pre-trained foundation models that can be fine-tuned to perform multiple tasks, making it more accessible and cost-effective.
  • Identifying business implications: Leaders need a clear perspective on how generative AI can impact their value chain and business model. They should explore how it can improve customer interactions, reduce operating costs, enhance productivity, uncover hidden risks, and enable the development of new products and services.
  • Building, deploying, and managing AI systems: Leaders should consider the necessary investments, balance external expertise with in-house talent, choose technology partners, determine the mix of proprietary and open-source technology, and foster a culture of curiosity and experimentation within their teams.
  • Data requirements: Generative AI differs from task-specific models in its data needs. Leaders should ensure their organization’s data is accessible for AI systems to use as context (a minimal sketch of this pattern follows this list). They should also curate and manage fine-tuning data, which is crucial to improving the quality of AI responses.
  • Role of champions: Identifying individuals at the departmental or enterprise level who believe in the power of AI and are willing to champion its adoption. These champions should support experimentation, learning, and evolution while understanding the business context without relying solely on ROI models.
  • Ethical considerations: Leaders must prioritize transparency, fairness, and accountability in deploying generative AI. Doing so includes preventing misuse, establishing governance frameworks, and ensuring the responsible and appropriate use of AI. It’s essential to view generative AI as a tool to augment human productivity rather than replace it.
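
As noted in the data-requirements point above, the following minimal sketch shows one common way to make enterprise data accessible to an AI system as context: retrieve relevant internal text and place it in the prompt ahead of the question. The search_internal_docs function, documents, and prompt wording are hypothetical placeholders, not Quantiphi’s implementation:

```python
# Hypothetical sketch: retrieve relevant internal documents and supply them
# to a generative model as context alongside the user's question.

def search_internal_docs(query: str) -> list[str]:
    # Stand-in for a real retrieval layer (keyword or vector search over
    # the organization's documents).
    docs = {
        "refund policy": "Refunds are issued within 14 days of a valid return.",
        "escalation": "Tier-2 support handles escalations within 4 business hours.",
    }
    return [text for key, text in docs.items() if key in query.lower()]

def build_prompt(question: str) -> str:
    # Place retrieved context ahead of the question so the model answers
    # from enterprise data rather than from its general training alone.
    context = "\n".join(search_internal_docs(question))
    return (f"Use only the context below to answer.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}\nAnswer:")

print(build_prompt("What is our refund policy?"))
```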

Asif suggests that the development of enormous language models with billions or even trillions of parameters will likely be handled by specialized organizations rather than every large enterprise attempting to train its own models from scratch. These foundation models will be multimodal, processing various data types such as images, videos, text, and data grids.

He mentions two types of fine-tuning for large language models: supervised fine-tuning and instruction fine-tuning.

Supervised fine-tuning involves teaching the model the specific language and terminology used within an enterprise. For example, different organizations may have unique three-letter acronyms or industry-specific terms that need to be understood by the model to generate accurate results.

Instruction fine-tuning refers to training the model to perform specific tasks beyond its general-purpose capabilities. Hasan explains by providing an example:

“But let’s say you’re a BioPharm organization. After the drug discovery process, you need to navigate the FDA process, which involves generating a unique clinical trial protocol. Typically, it requires collaboration among two dozen people over six months to create a protocol specifying the trial size, control arm, dosage, and how to handle adverse reactions. Training a model to generate a clinical trial protocol within this context is a prime example of sophisticated instruction fine-tuning.”

– Quantiphi Co-founder Asif Hasan
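
To make the contrast concrete, below is a minimal, hypothetical sketch of the two data shapes involved; the acronyms, examples, and field names are invented for illustration and are not drawn from Quantiphi’s work:

```python
# Supervised fine-tuning data: raw enterprise text, so the model absorbs
# in-house terminology (e.g., internal three-letter acronyms) through the
# same next-token objective used in pre-training.
supervised_examples = [
    "The CTP submission must be reviewed by the QRA team before filing.",
    "All SOPs reference the MDM record for the compound identifier.",
]

# Instruction fine-tuning data: (instruction, response) pairs that teach the
# model to perform a specific task, such as drafting a section of a clinical
# trial protocol.
instruction_examples = [
    {
        "instruction": "Draft the dosage section of a Phase I protocol for "
                       "compound X with 10 mg and 25 mg arms.",
        "response": "Participants will be randomized to a 10 mg arm, a 25 mg "
                    "arm, or placebo, with dose escalation contingent on "
                    "safety review.",
    },
]
```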
