Might AI Need Standards to Scale? – with Konstantinos Karachalios of the IEEE

Daniel Faggella

Daniel Faggella is Head of Research at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and leading enterprises, Daniel is a globally sought-after expert on the competitive strategy implications of AI for business and government leaders.

Though we don’t think about it on a daily basis, the technologies around us often “work” because of the underlying standards they depend on: Wi-Fi, Ethernet, fax, and much of the internet itself. Do certain AI applications need their own set of standards in order to scale?

Imagine if you needed a new type of cable or port every time you wanted to plug your computer into the wall. Imagine if you needed different hardware to pick up Wi-Fi in every location you visited. Imagine if every website used a completely different protocol for how it was loaded and served to your computer. If this were the case, it would be extremely challenging for a robust “ecosystem” of internet companies and technologies to emerge, because the technology wouldn’t scale or work well at all.
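To make the “different protocols for every website” scenario concrete, consider the opposite, which is what we actually have: because every web server speaks the same HTTP standard, one generic client can talk to all of them. The short Python sketch below sends a minimal, standards-compliant HTTP request over a raw socket; the hostname used is just an illustrative placeholder.

```python
# A loose illustration of why shared protocols matter: the same request
# bytes work against any web server, because they all follow the HTTP
# standard. "example.com" is only a placeholder host for demonstration.
import socket

def fetch_headers(host: str, port: int = 80) -> str:
    """Send a minimal HTTP/1.1 request and return the response headers."""
    request = (
        f"GET / HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        f"Connection: close\r\n\r\n"
    )
    with socket.create_connection((host, port)) as sock:
        sock.sendall(request.encode("ascii"))
        response = sock.recv(4096).decode("latin-1", errors="replace")
    # Everything before the first blank line is the standardized header block.
    return response.split("\r\n\r\n", 1)[0]

print(fetch_headers("example.com"))
```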

This week we interview Konstantinos Karachalios, Managing Director of the Standards Association at the Institute of Electrical and Electronics Engineers (IEEE). Konstantinos also speaks with us about some of the AI standards that the IEEE is currently developing, and the implications they might have for businesses everywhere.

Guest: Konstantinos Karachalios, Managing Director of the IEEE Standards Association

Expertise: Technology patents and standards

Brief recognition: Konstantinos holds a PhD and previously worked for 25 years at the European Patent Office. He speaks with us this week about the kinds of AI standards that may need to arise in order for AI to be safe and trusted enough to support a business ecosystem.

Big Idea

Standards are about trust. In order to adopt a technology (from Wi-Fi to fax machines or the internet itself), the public must feel that there is accountability for the technology before it can become part of their daily lives, personally or professionally. For AI, that raises open questions such as:

  • How will recommendation engines be permitted to store, share, or sell the preferences of individual users?
  • How will autonomous cars communicate with their riders and with other vehicles?
  • Where will facial recognition systems be permitted to be deployed, and how will these systems hold onto and use data about the presence, emotions, and location of people?

The answers to all of the questions above are twofold:

  1. First, we don’t know the answers yet; it’s too early to tell.
  2. Second, it’s likely that some open agreement around these technologies will form; otherwise, their popularity and use may be hampered by a lack of trust and understanding among users.

Konstantinos puts it this way:

There are cases where no company can do it alone – this has to do with the social acceptance of a technology. Can we trust it? How will individuals and societies trust these new applications? I believe that only through open platforms – that are transparent and participatory – can we create a foundation on top of which the companies can develop proprietary systems… otherwise the market may be delayed or stalled.

The internet is really a collective development – many standards organizations (the IEEE being only one of them) create and update the standards of the internet. These standards have been created openly, publicly, and with transparency, and this has allowed the internet to propagate all over the world. Konstantinos admits that the internet protocols have their challenges, but that overall they are robust and useful because they were developed openly with input from experts from all over the world.

In other words, today’s internet standards serve the global ecosystem well because they were not created by and for a single group, but through a transparent discourse that takes into account the needs and concerns of a global technology society.

If Uber, GM, and Ford all have drastically different standards for autonomous cars to communicate with each other, ensuring safety on the roads will be more challenging for all citizens, and protocols seem somewhat inevitable in applications with physical risk. Interoperability and protocols create a “harmonization” upon which businesses can innovate.
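As a purely hypothetical sketch (not any actual IEEE or automotive standard), the snippet below shows what a shared vehicle-to-vehicle status message might look like: if every manufacturer encoded and decoded the same agreed-upon fields, a message produced by one vendor’s vehicle could be understood by another’s without translation. All field names and values here are illustrative assumptions.

```python
# A hypothetical sketch of a shared vehicle-to-vehicle message format.
# Field names and values are illustrative only and do not correspond to
# any actual IEEE or automotive standard.
from dataclasses import dataclass, asdict
import json

@dataclass
class V2VStatusMessage:
    vehicle_id: str        # anonymized identifier
    latitude: float        # degrees
    longitude: float       # degrees
    speed_mps: float       # meters per second
    heading_deg: float     # compass heading, 0-360
    braking: bool          # hard-braking flag other vehicles can react to

def encode(message: V2VStatusMessage) -> str:
    """Serialize to a common wire format every vendor could parse."""
    return json.dumps(asdict(message))

def decode(payload: str) -> V2VStatusMessage:
    """Any manufacturer's stack can reconstruct the same message."""
    return V2VStatusMessage(**json.loads(payload))

# A message produced by one vendor's vehicle...
sent = V2VStatusMessage("veh-042", 40.7128, -74.0060, 12.5, 87.0, True)
# ...can be understood by another vendor's vehicle without translation.
received = decode(encode(sent))
assert received == sent
```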

Exactly where this underlying harmony and trust will be needed for AI is still to be determined. Autonomous vehicles and medical AI applications – because of their direct link to human safety in life-or-death circumstances – may be the easiest first targets for standards development. In our previous article on “timelines for the autonomous vehicle,” we explore the potential hurdles of legality and standardization that may stifle early adoption of self-driving cars.

Interview Highlights with the IEEE’s Konstantinos Karachalios

Listed below are some of the main questions that were posed to Konstantinos throughout the interview. Listeners can use the embedded podcast player (at the top of this post) to jump ahead to sections that they might be interested in:

  • (2:50) In this relatively early stage, why are AI standards important to develop / determine now?
  • (4:50) What are good examples of technology standards of the past that helped to enable innovation and new companies and business models?
  • (7:45) There must be a set of basic standards that companies agree on – but there must also be ways to protect the proprietary and hard-won innovations of companies. How can standards allow for both?
  • (15:15) How far along is IEEE in terms of developing ground-up standards for AI (on ethics, robotics, other topics)?
  • (20:10) Why and how could businesses play a role in forming industry standards – and what would be the benefits of being involved?

Related Material on Emerj

Our goal at Emerj is to be the #1 source for business and government leaders to learn about the practical applications of AI in their organization. We call it “staying on the right side of disruption,” making the right strategic plans and being prepared to use (or deal with) the technologies that will disrupt important industries and business functions.

Interviews with top executives and experts are part of what lets us help our readers “see around corners” and get a sense of the trends that matter. Each week, our AI in Industry Podcast (all episodes here) brings on a new AI executive or researcher to explain the applications and implications of AI in their domain of expertise.

If you’ve enjoyed today’s episode, you might enjoy some of our other work.
