The Unique Requirements and Considerations for AI in Robotics

Dylan Azulay

Dylan is Senior Analyst of Financial Services at Emerj, conducting research on AI use-cases across banking, insurance, and wealth management.


Applying AI in the real world is much more difficult than applying it in digital ecosystems; this is what makes robotics use-cases in business so much more challenging than applications such as AI-enabled fraud detection.

To elaborate on these differences, Emerj spoke with Dileep George, co-founder of AI company Vicarious, which has raised over $100 million in venture funding, for Kisaco Research’s Brain Inspired Computing Congress 2020, which takes place April 21 – 22 in Milpitas, California. We spoke with Dileep about the unique requirements and considerations for adopting AI in robotics use-cases, as well as where AI-enabled robotics will play a role in business in the new decade.

Listen to the full interview from our AI in Industry podcast, or read the interview highlights below.


Guest: Dileep George, Co-founder – Vicarious

Expertise: Machine learning, neuroscience

Brief Recognition: Prior to co-founding Vicarious, Dileep was co-founder and CTO of Numenta, a machine learning company, from 2005 to 2010. He holds a PhD in Electrical Engineering from Stanford. He is also a visiting fellow at the Redwood Center for Theoretical Neuroscience at the University of California, Berkeley.

Interview Highlights

(1:30) What are the unique considerations for robotics?

Dileep George: That’s a very good question. AI applied to robotics is very, very different from fraud detection or classifying your photos or anything like that. The difference is that robots act in the world, and so to act in the world, your perception of the world has to be fairly accurate.

For the classification problem, it's very different, because when you search for a cat, if it shows you pictures of cats, you are happy. You don't miss the photographs of cats it did not detect. This is the recall versus precision problem.

Whereas in robotics, you need to have high recall and high precision. If you want to grasp an object, you need to be able to grasp that particular object. You don’t get points for grasping some other object.
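To make the recall-versus-precision distinction concrete, here is a minimal Python sketch; the counts are illustrative assumptions, not figures from the interview:

```python
def precision_recall(true_pos, false_pos, false_neg):
    """Precision: of the things flagged, how many were right.
    Recall: of the things that were there, how many were found."""
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    return precision, recall

# Photo search: 90 correct cat photos shown, 10 wrong ones, 300 cats
# missed entirely. The user is still happy: high precision with low
# recall is acceptable here.
print(precision_recall(90, 10, 300))   # approx. (0.90, 0.23)

# Grasping: the robot must pick the right object (precision) and must
# not fail on the object it was asked for (recall). Both must be high.
print(precision_recall(999, 1, 1))     # (0.999, 0.999)
```

The asymmetry is the point: a search engine can trade recall for precision, while a grasping robot cannot sacrifice either.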

If you need to insert that object, you need to know how to hold it. You need to sense the world around you fairly accurately, you need to be able to control your actions so that the insertion is correct, and you need to incorporate feedback from the world into how you perform that action.

All of those pieces are missing in a static classifier or fraud detection. Those are static classification algorithms, which is not what's needed for robotics. What is needed is accurate perception of the environment and the use of feedback to correct your actions. All of this, planning, picking, placing, all those things come into the picture, and they are fundamentally not tackled in those other applications.
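As a rough illustration of the sense-act-feedback loop Dileep describes, here is a toy closed-loop insertion task in Python; the one-dimensional world and noise model are invented simplifications for illustration, not Vicarious's method:

```python
import random

def sense(true_offset, noise=0.2):
    """Noisy perception of the part's offset from the socket (toy model)."""
    return true_offset + random.uniform(-noise, noise)

def insert_with_feedback(true_offset, tolerance=0.05, max_attempts=20):
    """Closed loop: perceive, act, then let the world's residual error
    feed back into the next attempt. A static classifier has no such loop."""
    for attempt in range(1, max_attempts + 1):
        perceived = sense(true_offset)    # perception is only fairly accurate
        true_offset -= perceived          # act: move by the perceived offset
        if abs(true_offset) < tolerance:  # feedback: did the insert seat?
            return attempt
    return None

random.seed(0)
print(insert_with_feedback(true_offset=1.0))  # converges in a few attempts
```

Even with imperfect perception, the loop converges because each attempt's error feeds the next correction, which is exactly what a one-shot classifier cannot do.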

Of course, you have to get the machine learning piece of it right. You have to get the AI piece of it right. But if you mess up any other piece, you will not get the value out of it. So you have to get everything right, from all the different hardware pieces to the cameras and cabling in your robot.

The number of things that can go wrong when you try to deploy robots in the field is just enormous. So yes, AI and machine learning are important, but so is the basic discipline of hardware changes, configuration changes, all those things, and you have to get all of those right.

(05:00) How do the limitations of the real world affect potential robotics use-cases?

DG: One thing is that we are not going after problems where, if you make a mistake, something really bad happens. I would put self-driving cars in that category, where you need to get to 99.99999% accuracy in real time.

A large number of problems you can still solve with 99.5% or 99.9% accuracy, where making an error is not fatal. Those are the kinds of problems we are going after. To be specific, it is assembly-line operations, which are mostly done manually now except for things like car manufacturing, which is very, very structured.
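For perspective on those accuracy figures, here is a quick back-of-the-envelope calculation; the workload of 10,000 operations per day is an assumed number for illustration, not one from the interview:

```python
# Expected errors per day at different accuracy levels, for a hypothetical
# station performing 10,000 operations a day (illustrative workload).
operations_per_day = 10_000

for accuracy in (0.995, 0.999, 0.9999999):
    errors_per_day = operations_per_day * (1 - accuracy)
    print(f"{accuracy:.5%} accuracy -> ~{errors_per_day:g} expected errors/day")

# At 99.5%, roughly 50 recoverable errors a day, tolerable where a retry
# or a human can fix a bad grasp. Seven nines (99.99999%) means about one
# error per thousand days, the regime a safety-critical system must reach.
```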

Most other assembly-line operations, the ones that are unstructured and high-mix, where the product changes every few months, are all done manually now.

There is a lot of opportunity for automation in that space, a lot of opportunity for AI. The reason it is hard for robots is that the environment changes quite often, and you cannot use traditional robotics and reprogram the robot for every new setting. That is too expensive and too time-consuming.

So you need to have AI that can sense the world, have some amount of common sense, and be adaptive, so that when things change you can quickly repurpose it for the new setting, or it can learn the new setting quickly.

(07:30) Is robotics similar to other AI products in that companies are hesitant to adopt it for high-risk use-cases like healthcare diagnostics?

DG: Well, the commonalities are at the very top level, but once you dig a little bit deeper, the commonality vanishes. Think of medical imaging or a trading system. Those are cases where the world is inherently noisy, so your decisions are expected to be noisy. When you interpret a medical image, there is some amount of confusion: experts will disagree, and people expect that amount of noise in the system. So you expect the output to be probabilistic.

That is not the case when it comes to seeing, hey, is that trashcan right there? Is that object right there? People don't disagree, and people don't make mistakes. Even if people do make mistakes, they can recover from them very quickly. People are so good at interacting with objects in the world. So the expectations are really different, and assembly lines are constructed for people.

They are constructed for people to be very, very effective. You can attack these problems without having 100% accuracy, but the expectation is not the same as in analyzing a medical image or making a trade, which people expect to be inherently noisy. Here, people don't expect the system to be noisy, and when errors are made, people notice very quickly: wow, that thing did not make sense. So it's different in that sense.

(10:30) Where do you see robotics making the biggest difference in the next five years?

DG: Yeah, so I would say the fully structured environments are already automated. It is the semi-structured environments. It is not as structured as automobile manufacturing, but the environment still has some structure, and it is not completely wild like a home environment where clothes are thrown around, or an arbitrary kitchen where you don't know where things are.

It's not like that. It is still semi-structured, but there is a good amount of perception to be done, a good amount of assembly to be done. Those are the areas where I think robotics combined with AI, AI-powered robotics, will start making a difference.

It will be challenging initially. You have to find the right opportunities for scaling. And you also have to keep in mind the upstream and downstream operations from the robots. So you won’t be replacing the whole line with robots.

You’ll only be replacing some operations with robots. So you have to keep in mind the upstream and downstream effects of those changes on the human line. So it will be in assembly lines and in warehouses where these robots will get deployed. But you have to recognize the right opportunity and scale it the right way.

You cannot expect a company to completely change a process or build a completely new warehouse. That's not how things are going to go in these settings. So you have to make the solution adapt to their needs. You have to work within the footprint that makes sense.

And you have to make your systems smart enough and flexible enough that you are adapting to their needs rather than making too many asks of them. They will be open to some amount of change, because ultimately it is improving the process and making a big change in how many human operators are required and how many operations can be handled. So it is definitely making a huge difference.


This article was sponsored by Kisaco Research and was written, edited and published in alignment with our transparent Emerj sponsored content guidelines. Learn more about reaching our AI-focused executive audience on our Emerj advertising page.

Header Image Credit: Pioneering Minds
