Developing AI Solutions for Work Marketplaces – with Andrew Rabinovich of Upwork and Tsavo Knott of Pieces

Riya Pahuja

Riya covers B2B applications of machine learning for Emerj across North America and the EU. She previously worked with the Times of India Group and as a journalist covering data analytics and AI. She resides in Toronto.


This interview analysis is sponsored by Pieces and was written, edited, and published in alignment with our Emerj sponsored content guidelines. Learn more about our thought leadership and content creation services on our Emerj Media Services page.

As excited as business leaders are about integrating AI, many are quickly finding that their teams struggle to achieve reliable results. For the time being, crafting the perfect input prompt feels like navigating a maze, with so many rules that even well-intentioned users can’t find their way.

There are many challenges in prompt engineering surrounding complexity, semantic gaps, and consistency issues, as laid out in recent research from Stanford University and the Indian Institute of Technology. The same study’s conclusions emphasize fundamentals for leaders driving growth at their organizations with such models: more intuitive interfaces, improved model understanding, and addressing bias in generated content.

Emerj Senior Editor Matthew DeMello recently sat down with Andrew Rabinovich, VP and Head of AI and Machine Learning at Upwork, and Tsavo Knott, co-founder and CEO of Pieces, to discuss the future of software development, particularly for workforce marketplaces and expediting hiring workflows. 

Following many of the themes of Andrew’s recent appearance on the ‘AI in Business’ podcast, their conversation touched on nuances surrounding the evolving role of prompt engineering and its current significance in AI interactions. They speak in depth about why they feel it won’t last particularly long as a workplace discipline, at least not in its current form, and what the step-level changes therein will mean for how development teams scale their work going forward.

The following analysis examines two critical insights from their conversation:

  • Evolving AI beyond prompt engineering: Avoiding over-investment in precise prompt engineering as models become able to interpret and respond accurately to any phrasing.
  • Focusing on designing efficient, task-specific models: Shifting the emphasis from developing large, generalized models to creating smaller, more efficient models that can handle specific tasks and contexts effectively.

Guest: Andrew Rabinovich, VP and Head of AI and Machine Learning at Upwork

Expertise: Generative MultiModal AI, Computer Vision, Deep Learning

Brief Recognition: Andrew earned his PhD in Computer Science from UC San Diego and spent years in R&D leadership positions at Google. In 2020, he co-founded Headroom, an AI-powered video conferencing platform that Upwork later acquired.

Guest: Tsavo Knott, Technical Co-founder & CEO of Pieces

Expertise: Coding, Software Development, Entrepreneurship, Interactive Media, Computer Science

Brief Recognition: Tsavo graduated from Miami University in 2018 with bachelor’s degrees in Game and Interactive Media Design as well as Computer Science. Before co-founding Pieces in 2020, he was a vice president and co-founder of Accent.ai, a language learning platform. 

Evolving AI Beyond Prompt Engineering

Andrew opens the conversation by discussing the evolving role of prompt engineering and how AI models, like GPT, interact with users based on the way questions are asked. He explains that currently, giving clear instructions is crucial, which is where the concept of “prompt engineering” comes into play. 

However, he dislikes the term, predicting that it will soon become obsolete. In the future, AI models will be sophisticated enough that users won’t need to craft specific prompts to get the desired results — similar to how Google has evolved to deliver relevant search results regardless of how the question is phrased.

He likens the current state of AI models to early Google searches, where only users who knew how to query the system effectively got the best results. Today, users of GPT models may get different answers based on how they ask questions. Andrew insists that, as AI evolves, it will be able to interpret any phrasing and deliver accurate results consistently, and recommends leaders strategize accordingly. 
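To make that phrasing sensitivity concrete, below is a minimal sketch that asks the same underlying question two ways and prints both answers for comparison. It assumes the OpenAI Python SDK with an API key in the OPENAI_API_KEY environment variable; the model name and prompts are illustrative placeholders, not anything used by the guests.

```python
# Minimal sketch of prompt-phrasing sensitivity: two wordings of the same
# question can yield materially different answers from the same model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PHRASINGS = [
    "Is a GPU or an abacus better for deep learning? Answer in one sentence.",
    "In one sentence, what hardware is best suited to training deep networks?",
]

for prompt in PHRASINGS:
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model name
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # suppress sampling noise so differences reflect phrasing
    )
    print(f"PROMPT: {prompt}\nANSWER: {response.choices[0].message.content}\n")
```

Diffing the two answers is a quick way for a team to gauge how much their current model’s output still hinges on wording, which is exactly the dependency Andrew expects to disappear.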

Andrew also points out that it’s about more than just asking questions properly; there’s a need for expertise in interpreting AI’s results. He gives an example where GPT-3 provided an eloquent but completely wrong answer about why an abacus is better than a GPU for deep learning. 

In such cases, a human expert is required to identify errors (i.e. when the model “hallucinates”) or modify the AI’s output, especially if it’s something like code, to make it more accurate, efficient, and scalable. Therefore, human involvement is still critical, either in providing precise instructions, interpreting the results, or refining AI outputs.

He further emphasizes the difference between human reasoning and how AI models like GPT operate. He notes that GPT models derive their vast knowledge from memorizing enormous amounts of data, whereas humans reason from a few core principles. Human minds aren’t designed to memorize facts the way GPT models do; they excel at applying logical reasoning to make sense of the world with limited information:

“If we look at GPT models, they [apparently] need billions of parameters to just memorize the patterns. So, the data that we feed into these systems is not in the form that models nature but just provides you with observations. These observations can be sometimes erroneous, sometimes redundant, and sometimes contradicting. The more data that you have, and the better you can make sense of it, then the more accurate these models become. Because they’re extremely redundant, they hallucinate. They do this because, rather than making sense of the data that comes in, they just memorize it. And if you ask it slightly in the wrong way, it’ll pull up the evidence that contradicts the one that may be the right one.”

– Andrew Rabinovich, VP and Head of AI and Machine Learning at Upwork

Focusing on Designing Efficient, Task-specific Models

In response to Andrew, Pieces Technical Co-founder and CEO Tsavo highlights key concerns about AI’s future limitations, such as the finite supply of data left to aggregate and the growing role of synthetic data.

Tsavo starts by noting that data is rapidly increasing, raising questions about the “golden ratio” of model size, cost-effectiveness, and where human involvement is most efficient. Tsavo also stresses the importance of providing relevant context when interacting with AI models like GPT, as it significantly affects output quality and computing costs. 

Andrew explains that progress in machine learning has been driven by the sheer scale of data and compute power applied to straightforward models, like convolutional neural networks (ConvNets) and transformers. 

These models, trained with techniques like gradient descent and backpropagation, have existed for decades, but large datasets have amplified their effectiveness. However, with public data sources like Wikipedia now nearly exhausted, synthetic data is increasingly seen as a sufficient alternative.
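For readers unfamiliar with that machinery, the toy sketch below shows gradient descent with a hand-derived backpropagated gradient fitting a single linear unit in plain NumPy. The data is synthetic and hypothetical; the point is that the algorithm itself is decades old, and scale is what changed.

```python
# Gradient descent + backpropagation on the simplest possible model:
# fit y = 2x + 1 from noisy observations by following the loss gradient.
import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.05, size=100)  # noisy observations

w, b, lr = 0.0, 0.0, 0.1
for _ in range(500):
    y_hat = w * x + b              # forward pass
    err = y_hat - y
    grad_w = 2 * np.mean(err * x)  # chain rule on mean squared error
    grad_b = 2 * np.mean(err)
    w -= lr * grad_w               # gradient descent update
    b -= lr * grad_b

print(f"learned w={w:.2f}, b={b:.2f}")  # approximately 2.00 and 1.00
```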

The challenge therein, Andrew explains, lies in generating synthetic data that is significantly different from existing data, making it valuable for training.

Andrew also points out that simply adding variations to existing data (e.g., different ways of asking a question) only tweaks the same knowledge base. To advance, models need more diverse data sources, which are difficult to come by.

He suggests that instead of making models bigger, they should become smaller and more efficient, learning to handle multiple types of inputs with shared representations across modalities, much like humans do. 

Andrew also emphasizes that it’s not the amount of data but how it’s processed and reasoned about that matters most, using the example of human learning to drive versus how machines handle vast amounts of sensor data. He insists the future focus should be on reasoning, not just data size.

Tsavo asks Andrew whether the future focus of AI development will shift from building massive, generalized models trained on vast amounts of data to creating more specific, distilled models with smaller “context windows.” His question focuses on whether the emphasis will move toward more specialized models that handle specific tasks or contexts rather than continuing to build large models designed to handle everything. 
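As a rough illustration of the “distilled model” idea in Tsavo’s question, the PyTorch sketch below trains a small student network to match a larger teacher’s softened output distribution, in the style of Hinton et al.’s knowledge distillation. The architectures, temperature, and random stand-in data are hypothetical placeholders, not details from the conversation.

```python
# Knowledge distillation sketch: a compact student learns to mimic a large
# teacher's soft predictions, trading a little accuracy for a far smaller model.
import torch
import torch.nn.functional as F

teacher = torch.nn.Sequential(torch.nn.Linear(128, 512), torch.nn.ReLU(),
                              torch.nn.Linear(512, 10)).eval()  # frozen "big" model
student = torch.nn.Sequential(torch.nn.Linear(128, 32), torch.nn.ReLU(),
                              torch.nn.Linear(32, 10))          # compact model
opt = torch.optim.Adam(student.parameters(), lr=1e-3)
T = 4.0  # temperature: softens distributions so the student sees richer signal

for _ in range(100):          # stand-in for a real training loop
    x = torch.randn(64, 128)  # stand-in for real task data
    with torch.no_grad():
        teacher_logits = teacher(x)
    student_logits = student(x)
    # KL divergence between softened distributions, scaled by T^2 as is customary
    loss = F.kl_div(F.log_softmax(student_logits / T, dim=-1),
                    F.softmax(teacher_logits / T, dim=-1),
                    reduction="batchmean") * T * T
    opt.zero_grad()
    loss.backward()
    opt.step()
```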

In response, Andrew explains that these context windows are a way for models to focus on specific types of data. However, he is quick to caveat that once a model has learned something, like the difference between Edgar Allan Poe and Shakespeare, it shouldn’t need to relearn it each time; it should retain that knowledge.

Unlike humans, AI models don’t forget, so there’s no need to provide background information repeatedly:

“If you think about it, context windows have a very clear cost. And if we want these technologies to become ubiquitous, the questions that we ask should not depend on the way we ask them. If you say, ‘If you have access to more money, then you can get a more right answer than if you don’t.’ That doesn’t sound like democratizing AI or building scalable solutions. Context windows are essentially a trick to get these systems to focus and pay attention to specific nuances that you care about, but it’s absolutely not scalable. Now, we have these million[-token] context windows. But this, again, is a very temporary thing to let us know what these things are capable of, but this is not the way it’s going to evolve.”

– Andrew Rabinovich, VP and Head of AI and Machine Learning at Upwork
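Andrew’s cost point lends itself to back-of-the-envelope arithmetic. The sketch below uses a hypothetical per-token price (real rates vary by provider and model) to show how re-sending a large context with every call scales spend linearly with window size.

```python
# Rough cost model for context windows: every call pays for the full context.
PRICE_PER_1K_INPUT_TOKENS = 0.01  # hypothetical USD rate, for illustration only

def daily_prompt_cost(context_tokens: int, calls_per_day: int) -> float:
    """Daily spend when each call re-sends the entire context."""
    return context_tokens / 1000 * PRICE_PER_1K_INPUT_TOKENS * calls_per_day

for ctx in (4_000, 128_000, 1_000_000):
    cost = daily_prompt_cost(ctx, calls_per_day=1_000)
    print(f"{ctx:>9,}-token context at 1,000 calls/day: ${cost:,.0f}/day")
# 4k tokens -> $40/day; 128k -> $1,280/day; 1M -> $10,000/day. Cost grows
# linearly with context size, which is the scalability concern Andrew raises.
```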
