New Deep Learning Technologies Unveiled and Advances in Robotics Underway – This Week in Artificial Intelligence 06-04-16

Daniel Faggella

Daniel Faggella is the founder and CEO at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and many global enterprises, Daniel is a sought-after expert on the competitive strategy implications of AI for business and government leaders.

1 – Welcome to Magenta!

On Wednesday, the Google Brain Team introduced Magenta, a research project that will use machine learning to create art and compose music. The team will build on TensorFlow and release its models on GitHub, posting demos, tutorials, and technical papers along the way. Though no date was given, the team said it would soon accept code contributions from the larger community. Beyond exploring the creative capabilities of deep learning, the team hopes Magenta will foster a community of machine learning researchers, coders, and artists who together pioneer new tools for art and music creation. The Magenta team will also work closely with the Artists and Machine Intelligence Project (AMI) and the Google Cultural Institute to connect artists with technology.

(Read the full article on

2 – Snails’ Efficient Thinking Could Help Design Robots of the Future

Roboticists may have a new model on which to base the design of a robot ‘brain’, and it looks and eats like a snail. A research team at the University of Sussex found evidence that snails use just two neuronal cells when looking for food. In goal-directed behaviors like feeding, an animal must combine environmental information with its internal state, all while using as little energy as possible. The snails in the study used one cell to determine whether they were hungry, and the other to detect when they had found food. Knowing this structure is useful because, as lead researcher George Kemenes said,

“This will eventually help us design the ‘brains’ of robots based on the principle of using the fewest possible components necessary to perform complex tasks.”

Study findings also shed light on how the snails manage energy use once they’ve made a decision.
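The two-cell division of labor described above can be sketched as a toy decision circuit. This is purely illustrative: the function names, the energy threshold, and the AND-gating of the two cells are assumptions for the sketch, not details taken from the Sussex study.

```python
# Toy model of a two-cell feeding circuit: one cell tracks internal state
# (hunger), the other tracks the environment (food), and feeding occurs
# only when both fire. Names and thresholds are illustrative assumptions.

def hunger_cell(energy_level, threshold=0.5):
    """Fires (True) when internal energy drops below a threshold."""
    return energy_level < threshold

def food_cell(food_sensed):
    """Fires (True) when food is detected in the environment."""
    return food_sensed

def decide_to_feed(energy_level, food_sensed):
    """Feed only when the hunger cell AND the food cell both fire."""
    return hunger_cell(energy_level) and food_cell(food_sensed)

# A sated snail ignores food; a hungry one feeds only once food is found.
print(decide_to_feed(0.9, True))   # False: not hungry
print(decide_to_feed(0.2, False))  # False: hungry, but no food
print(decide_to_feed(0.2, True))   # True: hungry and food found
```

The point of the sketch is the energy-economy principle Kemenes describes: a complex goal-directed behavior gated by the minimum number of components.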

(Read the full article on The Telegraph)

3 – Introducing DeepText: Facebook’s Text Understanding Engine

On Wednesday, Facebook officially unveiled its deep learning-based text engine DeepText. Facebook describes it as understanding thousands of text-based posts per second across 20 languages with “near-human” accuracy. Trained using FbLearner Flow and Torch, the system uses multiple deep neural networks (including convolutional and recurrent nets) to perform word- and character-level learning. Deep learning is a key component in overcoming challenging language and contextual nuances, such as slang and word-sense disambiguation. Facebook is already testing the DeepText system on some of its platforms, like Messenger, where the AML Conversation Understanding team is helping the system develop intent detection (such as when someone does or doesn’t need a ride). The team will work across Facebook Research to help advance understanding of people’s interests, combine text and visual content, and develop new deep neural network architectures.
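To give a flavor of the character-level learning mentioned above: instead of splitting text into words, the model slides learned filters over windows of characters, which makes it robust to slang and misspellings. DeepText itself is built on Torch and FbLearner Flow; the snippet below is only a generic NumPy sketch of character-level convolution with max-pooling, with an illustrative alphabet, filter width, and random (untrained) weights.

```python
import numpy as np

# Generic sketch of character-level convolution: one-hot encode characters,
# slide filters over fixed-width character windows, max-pool over positions.
# Alphabet, filter width, and weights are illustrative, not DeepText's.

ALPHABET = "abcdefghijklmnopqrstuvwxyz "

def one_hot(text):
    """Encode a string as a (len(text), len(ALPHABET)) one-hot matrix."""
    mat = np.zeros((len(text), len(ALPHABET)))
    for i, ch in enumerate(text.lower()):
        if ch in ALPHABET:
            mat[i, ALPHABET.index(ch)] = 1.0
    return mat

def char_conv(text, filters, width=3):
    """Apply each filter to every character window, then max-pool."""
    x = one_hot(text)
    windows = [x[i:i + width].ravel() for i in range(len(text) - width + 1)]
    responses = np.array(windows) @ filters.T  # (n_windows, n_filters)
    return responses.max(axis=0)               # one feature per filter

rng = np.random.default_rng(0)
filters = rng.standard_normal((4, 3 * len(ALPHABET)))  # 4 filters, width 3

features = char_conv("need a ride", filters)
print(features.shape)  # (4,) — a fixed-size feature vector for a classifier
```

In a real system these pooled features would feed a downstream classifier (for instance, the ride-request intent detector mentioned above), and the filters would be learned by gradient descent rather than drawn at random.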

(Read the full article on

4 – Stanford’s Social Robot ‘Jackrabbot’ Seeks to Understand Pedestrian Behavior

Stanford researchers are intent on developing a new breed of robot, one that better understands pedestrian social conventions as it navigates outdoor environments with and around human beings. ‘Jackrabbot’ is a self-navigating machine that can already move freely indoors, and scientists are currently fine-tuning its ability to stay mobile outdoors. The next step is implementing the social aspects of navigation, such as knowing the right-of-way rules on sidewalks. By observing how Jackrabbot moves around pedestrians on Stanford’s campus and, through machine learning, gradually learns the rules of social conduct, Stanford researchers hope to gain valuable insight into how to build the next generation of social robots that can move alongside humans in crowded spaces like malls and airports.

(Read the full article on Stanford News)

5 – Toyota in Talks to Buy Two Robotics Units from Google’s Parent, Alphabet

According to Tokyo’s Nikkei Asian Review, Toyota is in talks to buy two robotics companies, including Boston Dynamics, from Google’s parent company Alphabet. While Alphabet is looking to shed its robotics ties, Toyota is building out its robotics operations, currently developing nursing-care and medical robots in addition to self-driving cars. No information has been released on potential acquisition prices, though it seems likely that Toyota would purchase the companies through its Toyota Research Institute. The move could add up to 300 personnel to Toyota’s robotics divisions.

(Read the full article on Silicon Valley Business Journal)

Stay Ahead of the AI Curve

Discover the critical AI trends and applications that separate winners from losers in the future of business.

Sign up for the ‘AI Advantage’ newsletter:
