Deep Learning Labels Live Video, AI Dances with Humans, and Robot Monk Gives Advice – This Week in Artificial Intelligence 04-30-16

Daniel Faggella

Daniel Faggella is Head of Research at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and leading enterprises, Daniel is a globally sought-after expert on the competitive strategy implications of AI for business and government leaders.


1 – A Robot Monk Captivates China, Mixing Spirituality With Artificial Intelligence

The Longquan (Dragon Spring) Temple in the mountains northwest of Beijing is home to Xian’er, the world’s first robotic monk. At two feet tall, the robot is cartoon- and child-like in appearance, wearing an orange Buddhist robe and bearing a touch pad on his chest that allows him to respond to visitors’ questions and statements. Given the official title ‘Worthy Stupid Robot Monk’ (stupid is a term of affection in the Beijing dialect), the robot was created by the temple’s Comic Center – in collaboration with about a dozen Chinese technology, culture, and investment companies – strictly for the development of “public welfare” and to communicate Buddhist values. While visitors have been skeptical about the robot’s genuine value in helping to solve human problems, many have expressed the sentiment that technology is likely to advance quickly enough that future generations may be able to fully converse with a form of AI-powered monk. A second Xian’er is already under development.

(Read the full article on The New York Times)

2 – Twitter’s Artificial Intelligence Knows What’s Happening in Live Video Clips

AI researchers at Twitter have developed a sophisticated deep learning system, called Cortex, that can recognize and label moving images in live streaming video. This is a particularly impressive feat that requires not only deep learning but also massive amounts of computing power. Live streaming has recently become popular on apps like Periscope (from Twitter), Meerkat, and Facebook Live. Historically, recommendation engines have suggested videos and other content based on what people with similar tastes have watched, a “crude” approach known as collaborative filtering. With Cortex, Twitter plans to develop a much more sophisticated filtering system that can curate the variety of content shared through Twitter based on a user’s past activity. Twitter is currently testing the Cortex technology through its smartphone app Periscope. In addition to better-matched content, the technology will serve a number of uses, including filtering out copyrighted content or undesirable content like pornography or violence.

(Read the full article on MIT Technology Review)

3 – You’ll Never Dance Alone With This Artificial Intelligence Project

Visit the Georgia Institute of Technology, and you might just run into your next dance partner – an AI-powered avatar that uses deep learning to learn moves from humans and then improvises its own in sync. The “LuminAI” project is an example of a creative and collaborative partnership with AI. Mikhail Jacob, a PhD student in computer science and a co-developer of the project, said,

“LuminAI forces a person to create something new — potentially something better — with their partner because they’re forced to take their (virtual) partner’s actions into consideration.”

Vai, the virtual dance avatar, uses Kinect devices to capture a person’s moves, which are projected on the walls of a 15-foot geodesic dome designed and constructed by Georgia Tech digital media master’s student Jessica Anderson. The AI then analyzes these movements and uses episodic memory to choose its next move, drawing on a repertoire that becomes more sophisticated over time. In a co-creative relationship, the human dancer’s moves can also be shaped by the AI’s, creating a truly collaborative experience.

(Read the full article on Georgia Tech News)

4 – OpenAI Gym Beta

Elon Musk’s OpenAI has announced a public beta version of OpenAI Gym, a toolkit for developing and comparing reinforcement learning (RL) algorithms. The toolkit is compatible with any numerical computation framework, including TensorFlow and Theano. At present, the growing set of environments (from simulated robots to games) is written only in Python, though OpenAI has stated near-term plans to make it accessible from any language. In releasing the toolkit, OpenAI follows a trend of open-sourcing machine learning tools set over the past year by companies like Facebook and Microsoft. RL is a branch of machine learning concerned with artificial decision making and motor control. Its applications are wide-ranging but generally involve sequences of decisions, such as a robot learning to run or jump, or a program making business decisions on pricing and inventory. The objective behind OpenAI Gym is to help solve two primary problems in the space: the need for better benchmarks and the lack of standardization across publications.
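The core of what Gym standardizes is the agent–environment loop: the environment exposes reset() and step(action), and step returns an observation, a reward, a done flag, and an info dict. The sketch below illustrates that loop with a made-up toy environment (“CoinFlipEnv”) written in the Gym style, so it runs without installing Gym; real environments such as CartPole follow the same interface.

```python
# Sketch of the Gym-style agent-environment loop.
# CoinFlipEnv is an invented toy environment, not part of OpenAI Gym.
import random

class CoinFlipEnv:
    """Toy task: guess the hidden coin (0 or 1) within 10 steps."""

    def reset(self):
        self.coin = random.randint(0, 1)
        self.steps = 0
        return 0  # dummy observation

    def step(self, action):
        self.steps += 1
        reward = 1.0 if action == self.coin else 0.0
        done = reward == 1.0 or self.steps >= 10
        return 0, reward, done, {}  # observation, reward, done, info

env = CoinFlipEnv()
total_reward = 0.0
for episode in range(100):
    obs = env.reset()
    done = False
    while not done:
        action = random.choice([0, 1])  # a random (untrained) policy
        obs, reward, done, info = env.step(action)
        total_reward += reward
print(total_reward)
```

Because every environment presents this same interface, the random policy above could be swapped for a learned one, and the environment for any other Gym task, without changing the loop – which is exactly the kind of standardization across benchmarks that OpenAI says the toolkit is meant to provide.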

(Read the full article on the OpenAI Blog)

5 – YouTube Gets Better at Watching You

YouTube announced that it’s ‘smartening up’ its recommendation engine on its iOS and Android apps. The system, which is based on the same deep neural network technology that Google uses for its search engine, will better discern a user’s preferences from viewing activity. One of the more challenging obstacles for the system is “freshness” – finding and surfacing videos uploaded less than an hour earlier (some 400 hours of video are uploaded to YouTube every minute). While the improved recommendation engine is meant to better serve users’ needs, improved matches will also keep more users watching, which means increased ad revenue for YouTube.

(Read the full article on CNET)
