Social and Soft Robotics, Super-Human Speech Recognition, More – This Week in Artificial Intelligence 08-26-16

Daniel Faggella

Daniel Faggella is Head of Research at Emerj. Called upon by the United Nations, World Bank, INTERPOL, and leading enterprises, Daniel is a globally sought-after expert on the competitive strategy implications of AI for business and government leaders.


1 – People Favour Expressive, Communicative Robots Over Efficient and Effective Ones

A recently published study out of the University of Bristol and University College London provides evidence that people may prefer robots that show human-like emotions over those that are quicker and more efficient at completing tasks. Researchers studied how humans interacted with Bert2, a humanoid robot assistant, while making an omelette. When the robot made a mistake and showed a sad expression, users responded well to its apology. When Bert2 asked if it could have a job as a kitchen assistant, most participants hesitated or showed discomfort before responding, which researchers interpreted as a reluctance to cause the robot distress. Graduate student and researcher Adriana Hamacher said:

“Human-like attributes, such as regret, can be powerful tools in negating dissatisfaction but we must identify with care which specific traits we want to focus on and replicate. If there are no ground rules then we may end up with robots with different personalities, just like the people designing them.”

The research is being presented at the IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN) from August 26 to 31 in New York City.

(Read the full article on University of Bristol News)

2 – The First Autonomous, Entirely Soft Robot

A Harvard team of researchers has created the first autonomous, entirely soft robot, nicknamed the ‘Octobot’, using 3D printing, mechanical engineering, and microfluidics technologies. While the robot is a proof of concept with limited functionality, its production paves the way for more complex robotic designs that are compliant in both body and power source, components that until now have presented a challenge in soft robotics. In place of rigid batteries and circuit boards, the scientists used a hydrogen peroxide-fueled microfluidic logic circuit to inflate and power the bot’s octopus-like arms. A next-generation Octobot would be able to crawl, swim, and interact with its environment. The design is quick to manufacture, and the team hopes it will inspire other groups working on advanced robotics manufacturing.

(Read the full article on the Harvard Gazette and the research paper at Nature)

3 – Microsoft Acquisition of Genee to Accelerate Intelligent Experiences in Office 365

Microsoft announced on Monday its intent to purchase the AI-powered scheduling service Genee, founded in 2014 by co-founders Ben Cheung and Charles Lee. Cheung and Lee, who plan to join the Microsoft team, announced on their own blog (which appears to have been taken down as of September 2017) that the Genee service will shut down on September 1, 2016, but that they look forward to continuing to “build amazing next generation intelligent experiences” at Microsoft. Genee was designed as a scheduling service that uses natural language processing and optimized decision-making algorithms to provide the experience of interacting with a human-like personal assistant.

(Read the full press release on the Microsoft Blog)

4 – nuTonomy Launches World’s First Public Trial of Self-Driving Car Service and Ride-Hailing App

nuTonomy, a Singapore-based tech company developing cutting-edge software for self-driving vehicles, conducted the first-ever public trial of its self-driving vehicles in Singapore’s one-north business district on Thursday. Trials will continue on an ongoing basis, with nuTonomy engineers riding in the self-driving taxis to observe system performance and assume control if necessary. nuTonomy, which was founded by MIT graduates Karl Iagnemma, PhD, and Emilio Frazzoli, PhD, has been testing its autonomous vehicles since April of this year in Singapore, the UK, and Michigan, in partnership with various automotive manufacturers (Jaguar Land Rover, for example). The company’s goal is to launch a self-driving fleet by 2018.

(The full press release from nuTonomy is no longer available as of our September 2017 update of this article)

5 – Smartphone Speech Recognition Can Write Text Messages Three Times Faster than Human Typing

A new study out of Stanford University, done in collaboration with Baidu Inc. and the University of Washington, shows that speech recognition technologies can now transcribe text an average of three times faster than humans can type on a mobile device, and more accurately. Speech recognition has made significant progress in the past few years due to increased use of deep learning and the training of neural networks on large volumes of data. Baidu’s Deep Speech 2 cloud-based speech recognition software was used for the study, though the team noted that other comparable speech recognition algorithms are likely to perform at similar levels. James Landay, a Stanford University professor and co-author of the study, commented:

“You could imagine an interface where you use speech to start and then it switches to a graphical interface that you can touch and control with your finger.”

“Speech Is 3x Faster than Typing for English and Mandarin Text Entry on Mobile Devices” is published online at arxiv.org.

(Read the full article on Stanford University News)
