While search interest in “machine learning” has surged, interest in “robotics” (as measured by Google Trends) has changed little over the last three years. So how much of a place is there for machine learning in robotics?
While only a portion of recent developments in robotics can be credited to developments and uses of machine learning, I’ve aimed to collect some of the more prominent applications together in this article, along with links and references.
Before I delve into machine learning in robotics, it’s worth defining “robot”. Though at first this might seem simple, it’s no easy task to come to an agreement on just what a robot is and what it is not, even amongst roboticists. For the sake of this article, I’ll borrow an abbreviated definition of “robot” from this article on the Carnegie Mellon CS Department website:
“Force through intelligence.”
“Where AI meets the real world.”
Some researchers might even argue against a set definition for robot, or debate whether a definition should be relative to the context of a situation, much as with the concept of “privacy”; this may be the better approach as more and more rules and regulations are created around robots’ use in varying contexts. There’s also some debate as to whether the term robot includes innovations such as autonomous vehicles, drones, and other similar machines. For the purposes of this article, and considering the definition above, I argue that these types of machines are a class of mobile robot.
Most robots are not humanoid, and most will likely not be 10 years from now; as robots are designed for a range of behaviors in a plethora of environments, their bodies and physical abilities will reflect a best fit for those conditions. Exceptions will likely be robots that provide medical or other care or companionship for humans, and perhaps service robots that are meant to establish a more personal and ‘humanized’ relationship.
Like many innovative technological fields today, robotics has been, and continues to be, influenced and in some directions steered by machine learning technologies. According to a recent survey published by the Evans Data Corporation Global Development, machine learning and robotics are at the top of developers’ priorities for 2016, with 56.4 percent of participants stating that they’re building robotics apps and 24.7 percent of all developers indicating the use of machine learning in their projects.
The following overview of machine learning applications in robotics highlights five key areas where machine learning has had a significant impact on robotic technologies, both at present and in the development stages for future uses. Though by no means exhaustive, the purpose of the summary is to give readers a taste for the types of machine learning applications that exist in robotics and stimulate the desire for further research in these and other areas.
5 Current Machine Learning Applications in Robotics
* Terms in italics and bold are defined further in the glossary at the bottom of this post.
1 – Computer Vision
Though related, some would argue the correct term is machine vision or robot vision rather than computer vision, because “robots seeing” involves more than just computer algorithms; engineers and roboticists also have to account for the camera hardware that allows robots to process physical data. Robot vision is very closely linked to machine vision, which can be given credit for the emergence of robot guidance and automatic inspection systems. The slight difference between the two may be in the kinematics applied to robot vision, which encompasses reference frame calibration and a robot’s ability to physically affect its environment.
An influx of big data, i.e. visual information available on the web (including annotated/labeled photos and videos), has propelled advances in computer vision, which in turn has helped further machine learning-based structured prediction techniques at universities like Carnegie Mellon and elsewhere, leading to robot vision applications like the identification and sorting of objects. One offshoot example is anomaly detection via unsupervised learning, such as systems capable of finding and assessing faults in silicon wafers using convolutional neural networks, as engineered by researchers at the Biomimetic Robotics and Machine Learning Lab, part of the nonprofit Assistenzrobotik e.V. in Munich.
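To make the idea concrete, here is a minimal sketch of unsupervised anomaly detection by reconstruction error, using a simple PCA model in NumPy rather than a convolutional network; the wafer-inspection system above is far more sophisticated, and the data and threshold here are invented for illustration:

```python
import numpy as np

def fit_pca(X, k):
    """Fit a k-component PCA model on normal-only data (unsupervised)."""
    mean = X.mean(axis=0)
    # Principal directions come from the SVD of the centered data
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def reconstruction_error(X, mean, components):
    """Anomaly score: distance between a sample and its low-rank reconstruction."""
    Z = (X - mean) @ components.T      # project into the learned subspace
    X_hat = Z @ components + mean      # map back to input space
    return np.linalg.norm(X - X_hat, axis=1)

# Train only on "normal" samples, which lie near a low-dimensional subspace
rng = np.random.default_rng(0)
normal = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 10))
mean, comps = fit_pca(normal, k=2)

# Anything that reconstructs much worse than the training data is flagged
threshold = reconstruction_error(normal, mean, comps).max() * 1.5
anomaly = rng.normal(size=(1, 10)) * 5.0   # off-subspace sample, e.g. a defect
print(reconstruction_error(anomaly, mean, comps)[0] > threshold)  # True
```

The same reconstruction-error recipe underlies autoencoder-based approaches; only the model fitting step changes.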
Additional sensing technologies like radar, lidar, and ultrasound, like those from Nvidia, are also driving the development of 360-degree vision-based systems for autonomous vehicles and drones.
2 – Imitation Learning
Imitation learning is closely related to observational learning, a behavior exhibited by infants and toddlers. It also overlaps with reinforcement learning, the challenge of getting an agent to act in the world so as to maximize its rewards. Bayesian or probabilistic models are a common feature of this machine learning approach. The question of whether imitation learning could be used for humanoid-like robots was posed as far back as 1999.
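To ground the reinforcement learning idea, here is a minimal tabular Q-learning sketch: an agent in a toy five-state corridor learns, by trial and error, a policy that maximizes its reward. The environment and parameters are invented for illustration and bear no relation to any system discussed here:

```python
import numpy as np

# Tabular Q-learning on a tiny 5-state corridor: reward waits at the far end.
n_states, n_actions = 5, 2          # actions: 0 = left, 1 = right
goal = n_states - 1
Q = np.zeros((n_states, n_actions))
alpha, gamma, eps = 0.5, 0.9, 0.1   # learning rate, discount, exploration
rng = np.random.default_rng(0)

for _ in range(500):                # episodes
    s = 0
    while s != goal:
        # Epsilon-greedy: mostly exploit, occasionally explore
        a = rng.integers(n_actions) if rng.random() < eps else int(Q[s].argmax())
        s2 = max(0, s - 1) if a == 0 else min(goal, s + 1)
        r = 1.0 if s2 == goal else 0.0
        # Bellman update: nudge Q toward reward plus discounted future value
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2

policy = Q.argmax(axis=1)
print(policy)  # states 0-3 learn to move right; the goal state is terminal
```

Imitation learning methods sidestep the slow trial-and-error phase by seeding the agent with expert demonstrations instead of random exploration.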
Imitation learning has become an integral part of field robotics, in which the demands of mobility outside a factory setting, in domains like construction, agriculture, search and rescue, the military, and others, make it challenging to manually program robotic solutions. Examples include inverse optimal control methods, or “programming by demonstration,” which has been applied by CMU and other organizations in the areas of humanoid robotics, legged locomotion, and off-road rough-terrain mobile navigation. Researchers from Arizona State published this video two years ago showing a humanoid robot using imitation learning to acquire different grasping techniques:
Bayesian belief networks have also been applied to forward learning models, in which a robot learns without a priori knowledge of its motor system or the external environment. An example of this is “motor babbling,” as demonstrated by the Language Acquisition and Robotics Group at the University of Illinois at Urbana-Champaign (UIUC) with Bert, the “iCub” humanoid robot.
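The spirit of motor babbling can be sketched in a few lines: the learner issues random motor commands, records the outcomes, and fits a forward model from scratch with no a priori knowledge of the plant. The linear “arm” below is an invented stand-in, not UIUC’s actual setup:

```python
import numpy as np

rng = np.random.default_rng(1)

def arm(command):
    """Unknown-to-the-robot plant: 2 joint commands -> 2D hand displacement.
    (A stand-in for the real motor system; the learner never sees this code.)"""
    true_map = np.array([[0.8, -0.3], [0.2, 0.9]])
    return command @ true_map.T

# Motor babbling: issue random commands and record what happened
commands = rng.uniform(-1, 1, size=(100, 2))
outcomes = arm(commands)

# Fit a linear forward model (command -> predicted outcome) by least squares
W, *_ = np.linalg.lstsq(commands, outcomes, rcond=None)

# The learned model now predicts the effect of an unseen command
test_cmd = np.array([[0.5, -0.5]])
print(np.allclose(test_cmd @ W, arm(test_cmd)))  # True: linear plant recovered
```

Real motor systems are nonlinear and noisy, which is where the Bayesian machinery mentioned above earns its keep, but the learn-by-babbling loop is the same.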
3 – Self-Supervised Learning
Self-supervised learning approaches enable robots to generate their own training examples in order to improve performance; this includes using a priori training and data captured at close range to interpret “long-range ambiguous sensor data.” It’s been incorporated into robots and optical devices that can detect and reject objects (dust and snow, for example); identify vegetables and obstacles in rough terrain; and perform 3D-scene analysis and vehicle dynamics modeling.
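Here is a toy sketch of that self-supervision pattern, with invented numbers: reliable close-range readings are labeled automatically by the robot itself, and a simple classifier trained on them is then used to interpret ambiguous long-range readings:

```python
import numpy as np

rng = np.random.default_rng(2)

# Close-range readings are unambiguous, so the robot labels them itself:
# e.g. a short-range/bumper sensor confirms "clear" (0) vs "obstacle" (1).
clear    = rng.normal(loc=0.2, scale=0.05, size=(50, 3))  # low-return features
obstacle = rng.normal(loc=0.8, scale=0.05, size=(50, 3))  # high-return features
X_train = np.vstack([clear, obstacle])
y_train = np.array([0] * 50 + [1] * 50)   # labels generated by the robot, not a human

# Nearest-centroid classifier fit on the self-labeled data
centroids = np.array([X_train[y_train == c].mean(axis=0) for c in (0, 1)])

def classify(x):
    """Interpret an ambiguous long-range reading via the self-trained model."""
    return int(np.argmin(np.linalg.norm(centroids - x, axis=1)))

print(classify(np.array([0.75, 0.8, 0.7])))  # 1: likely an obstacle ahead
```

The key point is that no human ever labeled anything; the short-range sensor bootstraps the long-range model.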
Watch-Bot, created by researchers from Cornell and Stanford, is a concrete example: it uses a 3D sensor (a Kinect), a camera, a laptop, and a laser pointer to detect patterns of ‘normal human activity,’ which it learns through probabilistic methods. When it spots a deviation, Watch-Bot uses the laser pointer to target the relevant object as a reminder (for example, the milk that was left out of the fridge). In initial tests, the bot was able to successfully remind humans 60 percent of the time (it has no conception of what it’s doing or why), and the researchers expanded trials by allowing the robot to learn from online videos (a project called RoboWatch).
Other examples of self-supervised learning methods applied in robotics include a road detection algorithm in a front-view monocular camera with a road probabilistic distribution model (RPDM) and fuzzy support vector machines (FSVMs), designed at MIT for autonomous vehicles and other mobile on-road robots.
Autonomous learning, which is a variant of self-supervised learning involving deep learning and unsupervised methods, has also been applied to robot and control tasks. A team at Imperial College in London, collaborating with researchers from University of Cambridge and University of Washington, has created a new method for speeding up learning that incorporates model uncertainty (a probabilistic model) into long-term planning and controller learning, reducing the effect of model errors when learning a new skill. This statistical machine learning approach is put into action by the team’s manipulator in the video below:
4 – Assistive and Medical Technologies
An assistive robot (according to Stanford’s David L. Jaffe) is a device that can sense, process sensory information, and perform actions that benefit people with disabilities and seniors (though smart assistive technologies also exist for the general population, such as driver assistance tools). Movement therapy robots provide a diagnostic or therapeutic benefit. Both of these are technologies that are largely (and unfortunately) still confined to the lab, as they’re still cost-prohibitive for most hospitals in the U.S. and abroad.
Early examples of assistive technologies include DeVAR, the desktop vocational assistant robot developed in the early 1990s by Stanford and the Palo Alto Veterans Affairs Rehabilitation Research and Development center. More recent machine learning-based robotic assistive technologies combine assistive machines with greater autonomy, such as the MICO robotic arm (developed at Northwestern University), which observes the world through a Kinect sensor. The implication is more complex yet smarter assistive robots that adapt more readily to user needs but also require partial autonomy, i.e. a sharing of control between the robot and the human.
In the medical world, machine learning methodologies applied to robotics are advancing rapidly, even though they are not readily available in many medical facilities. Collaborations through Cal-MR (the Center for Automation and Learning for Medical Robotics) between researchers at multiple universities and a network of physicians led to the creation of the Smart Tissue Autonomous Robot (STAR), piloted through the Children’s National Health System in DC. Using innovations in autonomous learning and 3D sensing, STAR is able to stitch together pig intestines (used in lieu of human tissue) with better precision and reliability than the best human surgeons. Researchers and physicians state that STAR is not a replacement for surgeons, who for the foreseeable future would remain nearby to handle emergencies, but that it offers major benefits in performing similar types of delicate surgeries.
5 – Multi-Agent Learning
Coordination and negotiation are key components of multi-agent learning, which involves machine learning-based robots (or agents – this technique has been widely applied to games) that are able to adapt to a shifting landscape of other robots/agents and find “equilibrium strategies.” Examples of multi-agent learning approaches include no-regret learning tools, which involve weighted algorithms that “boost” learning outcomes in multi-agent planning, and learning in market-based, distributed control systems.
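As an illustration of the no-regret idea, here is the classic multiplicative-weights (Hedge) update in NumPy: an agent repeatedly re-weights its actions against arbitrary losses, and its average regret relative to the best fixed action in hindsight shrinks over time. The loss matrix here is synthetic:

```python
import numpy as np

def multiplicative_weights(losses, eta=0.3):
    """Hedge / multiplicative weights: a classic no-regret learning rule.
    losses[t, a] is the loss of action a at round t."""
    n_rounds, n_actions = losses.shape
    w = np.ones(n_actions)
    total_loss = 0.0
    for t in range(n_rounds):
        p = w / w.sum()                    # play a distribution over actions
        total_loss += p @ losses[t]        # expected loss this round
        w *= np.exp(-eta * losses[t])      # down-weight actions that did badly
    best_fixed = losses.sum(axis=0).min()  # best single action in hindsight
    regret = total_loss - best_fixed
    return regret, w / w.sum()

# In an environment where action 2 tends to incur lower loss,
# play concentrates on it and per-round regret becomes small.
rng = np.random.default_rng(3)
losses = rng.random((500, 3))
losses[:, 2] *= 0.5
regret, p = multiplicative_weights(losses)
print(p.argmax())            # 2: play concentrates on the best action
print(regret / 500 < 0.05)   # True: average regret per round is small
```

In the multi-agent setting, every agent runs a rule like this simultaneously; the “equilibrium strategies” mentioned above emerge from their joint play.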
A more concrete example is an algorithm for distributed agents or robots, created by researchers from MIT’s Laboratory for Information and Decision Systems in late 2014. Based on the concept of exploring a building and autonomously learning its room layouts, the robots collaborated to build a knowledge base that was better and more inclusive than one robot could produce alone, with smaller chunks of information processed separately and then combined.
Each robot built its own catalog, and combined with other robots’ data sets, the distributed algorithm outperformed the standard algorithm in creating this knowledge base. While not a perfect system, this type of machine learning approach allows robots to compare catalogs or data sets, reinforce mutual observations and correct omissions or over-generalizations, and will undoubtedly play a near-future role in several robotic applications, including multiple autonomous land and airborne vehicles.
Machine Learning in Robotics: Future Outlook – A Long Term Priority
The above brief outline of machine learning-based approaches in robotics, combined with contracts and challenges put out by powerful military sponsors (e.g. DARPA, ARL); innovations by major robotics manufacturers (e.g. Silicon Valley Robotics) and start-up manufacturers (Mayfield Robotics); and increased investments by a host of auto manufacturers (from Toyota to BMW) in a next generation of autonomous vehicles (to name a few influential domains), all point to machine learning as a long-term priority.
Glossary of Robotics-Related Machine Learning Concepts
Kinematics – Branch of classical mechanics which describes the motion of points (alternatively “particles”), bodies (objects), and systems of bodies without consideration of the masses of those objects or the forces that may have caused the motion; often referred to as the “geometry of motion”.
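A minimal kinematics example, assuming a planar two-link arm with invented link lengths: the end-effector position follows purely from geometry, with no masses or forces involved:

```python
import math

def forward_kinematics(theta1, theta2, l1=1.0, l2=0.8):
    """Planar 2-link arm: joint angles (radians) -> end-effector (x, y).
    Pure geometry of motion; masses and forces play no role."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

# Both joints at 0: the arm lies stretched along the x-axis
print(forward_kinematics(0.0, 0.0))        # (1.8, 0.0)
# Elbow bent 90 degrees: the second link points straight up
print(forward_kinematics(0.0, math.pi/2))  # approximately (1.0, 0.8)
```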
Bayesian models – Method of statistical inference that casts statistical problems in the framework of decision making. It entails formulating subjective prior probabilities to express pre-existing information, carefully modeling the data structure, checking and allowing for uncertainty in model assumptions, and formulating a set of possible decisions and a utility function that expresses how the value of each alternative decision is affected by the unknown model parameters.
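A small worked example of Bayesian updating, using a conjugate Beta-binomial model; the grasping scenario and counts are invented for illustration:

```python
# Bayesian updating with a Beta prior over an unknown success probability,
# e.g. a robot estimating how often a particular grasp succeeds.
def beta_update(alpha, beta, successes, failures):
    """Beta(alpha, beta) prior + binomial data -> Beta posterior (conjugacy)."""
    return alpha + successes, beta + failures

def posterior_mean(alpha, beta):
    return alpha / (alpha + beta)

# Start with a uniform prior Beta(1, 1): no opinion about the grasp yet
a, b = 1, 1
# Observe 8 successful grasps and 2 failures, then update the belief
a, b = beta_update(a, b, successes=8, failures=2)
print(posterior_mean(a, b))  # 0.75: pulled toward the data, hedged by the prior
```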
Inverse optimal control – Also known as inverse reinforcement learning, it’s the problem of recovering an unknown reward function in a Markov decision process from expert demonstrations of the optimal policy.
Support vector machines – Also called support vector networks, SVMs are supervised learning models with associated learning algorithms that analyze data for classification and regression analysis.
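A bare-bones sketch of a linear SVM trained by subgradient descent on the hinge loss, with a tiny invented dataset; real applications would use an optimized library such as scikit-learn:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Linear SVM via subgradient descent on the hinge loss; y in {-1, +1}."""
    w = np.zeros(X.shape[1])
    b = 0.0
    for _ in range(epochs):
        for xi, yi in zip(X, y):
            if yi * (xi @ w + b) < 1:   # inside the margin: hinge is active
                w += lr * (yi * xi - lam * w)
                b += lr * yi
            else:                       # correctly classified with margin
                w -= lr * lam * w       # only the regularizer acts
    return w, b

# Two linearly separable clusters
X = np.array([[2.0, 2.0], [2.5, 1.8], [-2.0, -2.0], [-1.8, -2.4]])
y = np.array([1, 1, -1, -1])
w, b = train_linear_svm(X, y)
print(np.sign(X @ w + b))  # all four training points classified correctly
```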
Related Emerj Interviews
The following Emerj researcher interviews may be relevant for readers with a greater interest in machine learning in robotics:
- Machine Vision Developments and Technology – with Dr. Irfan Essa of Georgia Tech
- Navigating the “Uncanny Valley” of Robotics – with Derek Scherer of Golem Group, LLC
- An Intro to Swarm Robots – with Dr. James McLurkin of Rice University
Image credit: 33rd Square