The Market and Tech Forces Shaping the Future of Software Development – with Tsavo Knott of Pieces

Sharon Moran

Sharon is a former Senior Functional Analyst at a major global consulting firm. She now focuses on the data pre-processing stage of the machine learning pipeline for LLMs. She also has prior experience as a machine learning engineer customizing OCR models for a learning platform in the EdTech space.


This article is sponsored by Pieces and was written, edited, and published in alignment with our Emerj sponsored content guidelines. Learn more about our thought leadership and content creation services on our Emerj Media Services page.

For businesses seeking to remain competitive, many use cases in generative AI (GenAI) across industrial sectors promise to radically transform the way enterprises approach productivity.

Forecasts anticipate that AI investments will reach $64 billion by 2025. The exponential growth in GenAI since the end of COVID-19 has affected almost every occupation touched by the digital age, and software developers, at the heart of that transition, are hardly immune. Having weathered the twin booms of the personal computer's debut and the dawn of the internet just under two decades later, software developers are still predicted to fare well – with demand anticipated to increase in the coming years, according to the Bureau of Labor Statistics.

However, many players in the software development space see the future differently. Founded in 2020, Pieces offers an AI-enhanced productivity tool that helps developers work more efficiently by providing personalized workflow assistance. Pieces Technical Co-founder and CEO Tsavo Knott recently sat down for a special two-part episode of the ‘AI in Business’ podcast with Emerj CEO and Head of Research Daniel Faggella.

Throughout their conversation, Knott divides software development into two camps: one that believes specific developer workflows will be fully automated without human supervision and another that believes humans are integral to the foreseeable future of the field.

This article analyzes the second part of the discussion, focusing on the rapid progression in machine learning capabilities and how they impact developer labor markets and futures. In the process, two critical insights from their conversation are examined:

  • Recognizing the limitations of GenAI tools: They produce output more quickly, but often at the cost of quality.
  • Acknowledging the limitless use case potential of GenAI: Overcoming short-term limitations in fundamentals such as GPU supply and compute capacity in order to leverage legacy data.


Recognizing the Limitations of GenAI Tools

Tsavo begins by explaining that he’s excited about the evolution of machine learning: what it means for the products Pieces is building, for how the company augments developer workstreams, for developer productivity overall, and for how quickly unique experiences can be built.

Tsavo defines a context window as the amount of background knowledge a model has, at processing time, about whatever it is trying to process. He then explains that context windows have traditionally shaped model development by giving the model a specific direction.

He then fills the audience in on some relevant machine learning history: context windows used to be very small, around 2,000 tokens, which amounts to a handful of files at most. He compares that to GPT-4, whose context window holds about 32,000 tokens, roughly the equivalent of 300 pages of text. He succinctly summarizes that “in theory, more context is better.”
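To make that scale concrete, here is a minimal Python sketch of the packing problem Knott describes. It uses the common rough heuristic of about four characters per token (an approximation, not a real tokenizer) and a hypothetical project of ten files to show how many files fit in a 2,000-token window versus a 32,000-token one:

```python
# Rough token estimate: ~4 characters per token. This is a common
# heuristic only; exact counts require a model-specific tokenizer.
def estimate_tokens(text: str) -> int:
    return max(1, len(text) // 4)

def files_that_fit(files: dict[str, str], window: int) -> list[str]:
    """Greedily pack files into a context window of `window` tokens."""
    fitted, used = [], 0
    for name, content in files.items():
        cost = estimate_tokens(content)
        if used + cost <= window:
            fitted.append(name)
            used += cost
    return fitted

# Hypothetical project: ten files of ~6,000 characters (~1,500 tokens) each.
project = {f"module_{i}.py": "x" * 6000 for i in range(10)}

small = files_that_fit(project, window=2_000)    # GPT-3-era window
large = files_that_fit(project, window=32_000)   # GPT-4-class window

print(len(small), len(large))  # prints "1 10": one file fits vs. the whole project
```

Under these assumptions, the small window holds a single file while the larger one takes the entire project, which is the practical difference Knott is pointing at.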

Tsavo then contrasts this with the somewhat old-fashioned method of retrieval-augmented generation. Traditionally in that process, developers would retrieve context before each request and cram it into a small context window. Now that context windows can hold a million tokens, retrieval-augmented generation is less necessary.
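That trade-off can be sketched in a few lines of Python. This is a toy illustration only: it substitutes a simple word-overlap score for the embedding-based retriever a real RAG system would use, and the documents and window sizes are hypothetical. With a small window you must rank and select chunks; with a million-token window, everything simply fits:

```python
def overlap_score(query: str, chunk: str) -> int:
    """Toy relevance score: count of query words appearing in the chunk.
    A real RAG system would use embeddings and a vector index instead."""
    q = set(query.lower().split())
    return sum(1 for w in chunk.lower().split() if w in q)

def build_context(query: str, chunks: list[str], window_tokens: int) -> list[str]:
    """Pick the highest-scoring chunks that fit the window (~4 chars/token)."""
    ranked = sorted(chunks, key=lambda c: overlap_score(query, c), reverse=True)
    picked, used = [], 0
    for chunk in ranked:
        cost = max(1, len(chunk) // 4)
        if used + cost <= window_tokens:
            picked.append(chunk)
            used += cost
    return picked

docs = [
    "invoice totals for the last fiscal year",
    "notes on context windows and token limits",
    "retrieval augmented generation pipeline design",
]

# Small window: retrieval crams in only the single most relevant chunk.
rag_context = build_context("retrieval augmented generation", docs, window_tokens=12)
# Million-token window: effectively everything fits, so no retrieval step is needed.
full_context = build_context("retrieval augmented generation", docs, window_tokens=1_000_000)
```

The retrieval step exists purely to work around a tight token budget; once the budget dwarfs the corpus, the ranking becomes a no-op, which is Knott's point.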

Yet with that backdrop, Tsavo returns to his larger theme: the limitations of context windows favor the trend in software development, including at Pieces, toward augmenting the workflows of existing human roles. However, he acknowledges that a segment of the industry believes AI will replace many roles, and the limitations of context windows tend to fuel their argument.

Knott concedes that larger context windows have significant implications for a variety of industries, including finance. He describes how an accountant conducting an audit might input the past five years of financial data for context, rather than just the last year, and derive insights from the model much faster as a result.

When asked about what business leaders and developers need to understand about constraints machine learning tools and generative systems have related to electricity, GPU, and CPU limitations, Tsavo explains how GenAI is used both as a search mechanism and an authorship mechanism and then provides examples of authorship including:

  • Writing new code
  • Authoring new content for marketing and related purposes
  • Generating new financial projections

Tsavo also explains how GenAI systems enable companies to create code or content much faster, significantly increasing how quickly digital assets can be produced. However, he explains, the underlying issue with these developments is that the output inherits the biases and quality of the previous work it is based on.

“I think what’s important to know is that these tools are a double-edged sword,” he succinctly sums up the issue. “You have to be careful because you can create a lot of very average quality content or average quality code very, very quickly.”

GenAI tools can allow developers to create millions of lines of code very quickly, but the code might not be high quality, according to Tsavo. He further underscores that a context window with a million tokens is not only a way to prepare for the future but also a way to keep up.

He explains how Google Code and GitHub Copilot were well positioned to surprise the market when they were introduced. Still, their effect on the future won’t necessarily be the same.

Tsavo cautions that the increased output of code produced by GenAI models will require more energy, more compute, and more processing power. These systems must also stream all of that data back and forth to servers. As a result, he feels strongly that models must improve in the following areas:

  • Context windows
  • Tokens per second
  • Output

Acknowledging the Limitless Use Case Potential of GenAI

In light of all the challenges in driving efficiency in GenAI models, Tsavo insists that these influences, taken together, are contributing to an evolution in on-device models and the supporting consumer hardware. He anticipates massive growth in server farms that strictly power AI systems.

Tsavo foresees that several fundamentals will limit the ability to ship products with all the features and functionality desired:

  • Supply
  • GPU
  • Compute
  • Energy

While significant limitations in model development and capabilities were noted throughout the conversation, Tsavo’s faith that humans will continue to find creative ways to apply these models is undiminished. “If you look at AI, the implications are limitless,” he exclaims. “We can apply them to biology, pharmacology, and energy.”

Part of the reason for that optimism is that, as Tsavo observes, the business world holds an abundance of data that has yet to be touched, including hundreds of years of information from Fortune 100 companies. “If you think about the power requirements needed, we’re only scratching the surface,” he tells the Emerj executive podcast audience.

Tsavo also anticipates that systems augmenting human work will shift the role of individual workers: less about doing the work itself and more about operating cross-functionally across their organizations.

Developers will connect with other closely related teams and coordinate the overall motion of work, from the initial idea through production to the product’s ultimate release. “I think that cross-functional behavior will naturally change the structure of organizations,” he says.

Tsavo anticipates that many organizational layers will be ‘flattened’ in the process but hopes it will make companies more efficient. He believes that doing so will especially enable the US to take its existing development workforce and “10x it,” referring to developers upskilling their talent to the highest level, or ’10x’, as it’s referred to in the field. In the long term, these investments in talent will help the US keep up with countries that have larger workforces:

“I think that what business leaders should look at is not AI systems that aim to process everything autonomously and on their own. They should look at systems that their existing workforce can rapidly adopt to augment them and help them move faster, extend their personal memory, and become a 2x employee. Then take them from 2x to 10x.”

– Tsavo Knott, Co-founder and CEO of Pieces
