Driving Responsible Approaches to AI Through Operations and Development Workflows – with Ranjan Sinha of IBM and Tsavo Knott of Pieces

Riya Pahuja

Riya covers B2B applications of machine learning for Emerj across North America and the EU. She previously worked with the Times of India Group and as a journalist covering data analytics and AI. She resides in Toronto.


This interview analysis is sponsored by Pieces and was written, edited, and published in alignment with our Emerj sponsored content guidelines. Learn more about our thought leadership and content creation services on our Emerj Media Services page.

AI copilots leverage natural language processing to understand code and provide real-time suggestions, empowering developers to work more efficiently. As these intelligent assistants continue evolving, they’re becoming indispensable allies in the coding process. An article by MIT Technology Review mentions that a team of GitHub and Microsoft researchers tested the impact of Copilot on programmers in a small study, finding that those using Copilot completed the coding task 55% faster than those without. 

At the recent IBM Think event, the company showcased its watsonx Code Assistant powered by IBM’s Granite Code Models, which provides AI-powered capabilities like code generation, explanation, and test case creation to boost developer productivity. IBM® Granite™ is a family of enterprise-grade models developed by IBM Research® with rigorous data governance and regulatory compliance. The models match state-of-the-art performance on code generation, translation between programming languages, bug fixing, and code explanation and documentation.

Emerj Senior Editor Matthew DeMello recently sat down with Ranjan Sinha, IBM Fellow, Vice President and Chief Technology Officer for watsonx and IBM Research AI, and Tsavo Knott, Technical Co-founder and CEO of Pieces, to talk about how dynamics are changing with the adoption of generative AI (GenAI) in developer workflows.

In the following analysis of their conversation, we examine three key insights:

  • Increasing developer productivity with AI copilots: Developing AI copilots with broad horizontal integration across workflow tools, including browsers, Slack, IDEs, and documentation systems, to provide comprehensive support and enhance developer productivity across diverse environments.
  • Streamlining software development with advanced models: Integrating advanced models into development environments to automate tasks such as code generation, bug fixing, documentation, and translation, streamlining the development process and reducing complexity for developers along the way.
  • Implementing hands-on learning initiatives: Fostering the practical application of new skills among developers through small projects, such as hackathons, that provide hands-on insight and experience.

Guest: Ranjan Sinha, IBM Fellow, Vice President and Chief Technology Officer for watsonx, IBM Research AI. 

Expertise: Enterprise AI and Data, Data and AI Platforms, AI Strategy, Responsible AI, Software Development

Brief Recognition: Ranjan Sinha is an IBM Fellow, Vice President, and Chief Technology Officer for watsonx in IBM Research AI. He works at the intersection of Technology, Research, Product, and Enterprise-scale use cases in AI. He is a seasoned technical leader building and operating data and AI platforms and solutions from inception to production. He holds a Ph.D. in computer science from RMIT University and was awarded federal and university research grants at The University of Melbourne.

Guest: Tsavo Knott, Technical Co-founder & CEO of Pieces

Expertise: Coding, Software Development, Entrepreneurship, Interactive Media, Computer Science

Brief Recognition: Tsavo graduated from Miami University in 2018 with bachelor’s degrees in Game and Interactive Media Design and Computer Science. Before co-founding Pieces in 2020, he was a vice president and co-founder of Accent.ai, a language learning platform.

Increasing Developer Productivity with AI Copilots

When asked about driving developer productivity responsibly, Ranjan begins by giving the executive audience essential context on the dual approach IBM used to roll out watsonx Code Assistant internally. He explains that doing so involved both top-down and bottom-up engagement, ensuring that leadership and individual contributors alike were involved. The strategy was necessary, he says, because of the widespread interest in how the tool could enhance productivity for programmers and developers.

He then turns to IBM’s emphasis on improving productivity and efficiency across the company, particularly in development, with the goal of simplifying and enhancing the lives of IBM’s developers by leveraging watsonx Code Assistant.

Ranjan outlines the initial rollout process, which targeted a select group of early adopters within IBM chosen to test the technology and provide feedback. The selection process, Ranjan insists, helps mature the tool before it’s deployed more broadly to IBM’s vast developer community, numbering in the tens of thousands.

In turn, watsonx Code Assistant, powered by IBM’s Granite code models, has shown promising productivity improvements, estimated at 30-40% across a range of development activities, he says. In response, Tsavo Knott of Pieces notes how impressive a 40% gain is and considers its implications: heightened productivity means more code written at a quicker rate, affecting both the volume and velocity of development tasks.

Tsavo also questions whether such an increase in productivity will put additional pressure on other parts of the development workflow. Specifically, he mentions the potential need for more pull requests and greater interaction with existing documentation. As more code is produced more quickly, these supporting processes may need to scale up accordingly to keep pace.

Tsavo then suggests that developers might no longer be confined to working in a single programming language. With improved productivity tools, they could become more versatile, akin to what he calls a “Swiss army knife,” and be able to work on various projects and cross-functional teams more freely than before. Doing so could also enhance their ability to contribute to different areas of development. 

Despite these potential benefits, Tsavo still wonders how developers will manage the increased productivity and versatility:

“I think it starts at capturing that work in progress journey, because if you don’t capture it, it’s very hard to ground a large language model (LLM) or to resurface it later. And I think that, to be honest, 90% of what a developer does day-in-day-out is not captured: the sites they visited, the people they talked to, the work that they did – those micro-iterations. I think right now, what gets committed to the final codebase is a lot more formal. We actually want that work in progress journey that happens on the end user’s device. Once you’ve captured it, now you can surface it back in the context of a copilot LLM grounding. You can give it to someone who is onboarding or anyone in the organization trying to solve the same problem.”

–Tsavo Knott, Technical Co-founder & CEO of Pieces
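To make the grounding idea concrete, the sketch below shows one minimal way a work-in-progress journal could be captured and stitched into a copilot prompt. It is purely illustrative: the in-memory journal, the keyword-overlap retrieval, and the `ask_llm` placeholder are assumptions made for the example, not APIs from Pieces or IBM.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ContextEvent:
    """One captured work-in-progress moment: a visited page, a chat, a code diff."""
    source: str   # e.g. "browser", "slack", "ide"
    content: str  # the snippet worth remembering
    captured_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

journal: list[ContextEvent] = []

def capture(source: str, content: str) -> None:
    """Record a micro-iteration as it happens, before it is lost."""
    journal.append(ContextEvent(source, content))

def ground_prompt(question: str, limit: int = 5) -> str:
    """Surface the most relevant captured events as grounding for a copilot query.
    Naive keyword overlap here; a production system would likely use embeddings."""
    terms = set(question.lower().split())
    ranked = sorted(
        journal,
        key=lambda e: len(terms & set(e.content.lower().split())),
        reverse=True,
    )
    context = "\n".join(
        f"[{e.source} @ {e.captured_at:%Y-%m-%d %H:%M}] {e.content}" for e in ranked[:limit]
    )
    return f"Context from my recent work:\n{context}\n\nQuestion: {question}"

# Example usage:
# capture("browser", "Stack Overflow thread on retrying gRPC deadline errors")
# capture("ide", "refactored OrderService.retry() to use exponential backoff")
# prompt = ground_prompt("why does OrderService still time out under load?")
# answer = ask_llm(prompt)  # ask_llm is a placeholder for any LLM call
```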

Ranjan responds by acknowledging that, while productivity improvements will benefit developers, there is also exciting research into increasing the coverage of typical tasks performed by a software engineer, including multi-agent systems such as OpenDevin. Such GenAI technologies will also significantly influence other roles, such as customer support:

“… How do we make the best use of the subject matter expert? Well, a rockstar SME using such technologies could now engage with three customers simultaneously for application support, in three different languages, because existing technologies can now translate in real time. So, you’re literally enabling the support SME to be a lot more productive and simultaneously ensuring that more of your valued customers have access to the rockstar SME, but in personalized languages. You have a lot of these use cases across enterprise workflows that will blend into the enterprise of tomorrow. We live in a time where you have these technologies that are continually improving in accuracy and becoming more robust. The question then becomes: How do we reimagine and reinvent our processes and make the best use of our talents? It is not about replacement but augmenting developers and enhancing development productivity.”

— Ranjan Sinha, IBM Fellow, Vice President and Chief Technology Officer for watsonx, IBM Research AI

Tsavo then emphasizes the need for AI copilots to have broad, horizontal integration across various workflow tools, such as internet browsers, Slack, IDEs, and documentation platforms. He points out that current copilots like GitHub’s and Microsoft’s are limited to their own ecosystems, and argues that truly valuable copilots should integrate seamlessly across platforms, providing comprehensive support and avoiding the silos created by ecosystem-specific copilots.

Streamlining Software Development with IBM Granite Models

Ranjan shares a perspective from his work on enabling developers to use IBM Granite models for code. Granite comprises a family of foundation LLMs focused on enhancing software development productivity by simplifying the coding process for developers. IBM’s release of the Granite code models – under the Apache 2.0 license, trained on 116 programming languages – aims to make coding accessible and straightforward for all developers. These enterprise-grade models, which compete with the best open-source code models, can automate a range of tasks such as code generation, bug fixing, translation, unit test generation, documentation, and vulnerability testing for the enterprise.

The Granite models themselves are designed to streamline many aspects of the software development process, freeing developers from time-consuming tasks. Integrated directly into developer workflows, the models can automate code writing, debugging, and explanations of why code isn’t functioning correctly, as well as generate unit tests, write documentation, and run vulnerability tests.

By integrating these models into development environments, IBM seeks to reduce the complexity of writing and maintaining software. Granite models are also valuable for modernizing mission-critical applications by translating legacy codebases, such as COBOL, into modern languages like Java. With these capabilities in place, developers can enhance productivity and focus on more critical aspects of their projects.
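As a rough illustration of what calling a Granite code model could look like outside of watsonx Code Assistant, the sketch below prompts an openly released Granite checkpoint through the Hugging Face transformers library for a COBOL-to-Java translation. The checkpoint name, prompt, and COBOL fragment are assumptions made for the example, not IBM’s documented workflow; the watsonx products expose these capabilities through their own interfaces.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Assumed Hugging Face checkpoint name for an instruction-tuned Granite code model;
# substitute whichever Granite code variant is available in your environment.
MODEL_ID = "ibm-granite/granite-8b-code-instruct"

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID, device_map="auto")

# Illustrative legacy fragment; a real modernization job would feed full programs.
legacy_cobol = """\
       IDENTIFICATION DIVISION.
       PROGRAM-ID. ADD-TOTALS.
       PROCEDURE DIVISION.
           ADD ORDER-AMOUNT TO RUNNING-TOTAL.
           DISPLAY RUNNING-TOTAL.
           STOP RUN.
"""

prompt = (
    "Translate the following COBOL program into an equivalent Java method, "
    "then briefly explain what it does:\n\n" + legacy_cobol
)

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=512)

# Print only the newly generated tokens (the model's answer), not the echoed prompt.
print(tokenizer.decode(outputs[0][inputs["input_ids"].shape[-1]:], skip_special_tokens=True))
```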

“LLMs trained on code are revolutionizing the software development process and boosting programmer productivity. This is an opportunity to reimagine and reinvent collaboration, workflows, processes, and insights generation in an enterprise to make it AI-first. These are relatively early days in GenAI; we are all in the trenches learning from each other’s experiences. However, it is necessary for replicable results that a strong data and AI foundation is established. It is also crucial that we all champion responsible Data and AI management and reskilling of the workforce. We should adopt a scientific and experimental mindset: be critical, try new things, adapt to change, be open to new ideas and possibilities, and see failure as an opportunity to learn and grow.”

– Ranjan Sinha, IBM Fellow, Vice President and Chief Technology Officer for watsonx, IBM Research AI

Implementing Hands-On Learning Initiatives

Ranjan then elaborates on the importance of the onboarding process for adopting new technologies, emphasizing several critical components of the process for executives:

  • Reskilling and Education: Developers must have access to appropriate learning materials and courses, including a progression through certification levels for software developers and architects.
  • Application of Learning: Developers should apply their new knowledge through small projects, hackathons, or “art of the possible” projects. These kinds of initiatives help developers and leaders understand how different technologies can be used to achieve business objectives, providing practical insight and hands-on experience with their direct application.
  • Supportive Environment: A supportive work environment allows developers to voice their opinions and concerns about new technologies, in particular, their potential impact on processes. Open communication between leaders and developers fosters a better understanding of the broader influence of the systems they are developing.
  • Privacy and Security: Developers need to be clear on what they can and cannot do with the technology to prevent issues with privacy and security. For instance, IBM stopped using some tools internally due to concerns about proprietary data being uploaded.

Tsavo discusses the evolving role of developers with AI tools, emphasizing their increased versatility, akin to a Swiss army knife. These tools enable developers to handle a wide range of tasks, from generating to documenting code, but they also introduce challenges in managing extensive cross-functional work and documentation.

He notes the importance of real-time, horizontal systems that track and share information fluidly enough to ensure efficient onboarding and subsequent knowledge transfer. As developers shift from specialized tasks to broader responsibilities, Tsavo emphasizes, automating routine work like unit testing becomes crucial, and he highlights the need for systems that support this dynamic workflow.
