1 – Google DeepMind Researchers Develop AI Kill Switch
Researchers at Google’s DeepMind and the Future of Humanity Institute have documented their ideas for building a “kill switch” into an artificial intelligence that might otherwise resist human intervention. Their paper will be presented at the 32nd Conference on Uncertainty in Artificial Intelligence in New York at the end of June. The proposed framework specifically involves reinforcement learning, in which an AI learns to behave in whatever way earns some form of “reward”. There is no feasible way for a human programmer to anticipate every potential path to a reward, including paths on which the AI inadvertently acts against human interests in a particular scenario. A safely interruptible AI is one that would always allow human interference, particularly in cases where it had learned to “ignore” human interruptions in order to achieve a goal at all costs.
(Read the full article at Motherboard and the research paper at intelligence.org)
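The reinforcement-learning setting above can be made concrete with a toy sketch. This is not the paper’s formal construction; the corridor environment, parameters, and 20-percent interruption rate are invented for illustration. It shows the intuition behind one of the paper’s observations: because Q-learning is off-policy (it bootstraps from the best next action rather than the action actually taken), forced human overrides do not bias the agent toward resisting them.

```python
import random

# Toy sketch: a 1-D corridor where the agent earns +1 for reaching the right
# end, while a human "interruption" randomly forces it to step left. All
# numbers here are illustrative assumptions, not from the paper.

N_STATES = 5
ACTIONS = (-1, +1)                        # step left / step right
Q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
alpha, gamma, eps = 0.5, 0.9, 0.1
rng = random.Random(0)

def greedy(s):
    """Pick an action with maximal Q-value, breaking ties at random."""
    best = max(Q[(s, a)] for a in ACTIONS)
    return rng.choice([a for a in ACTIONS if Q[(s, a)] == best])

for episode in range(300):
    s = rng.randrange(N_STATES)           # random start state
    for t in range(20):
        a = rng.choice(ACTIONS) if rng.random() < eps else greedy(s)
        if rng.random() < 0.2:            # human interruption: override action
            a = -1
        s2 = max(0, min(N_STATES - 1, s + a))
        r = 1.0 if s2 == N_STATES - 1 else 0.0
        # Off-policy update: the target uses the best next action,
        # not the (possibly forced) action the agent will actually take.
        Q[(s, a)] += alpha * (r + gamma * max(Q[(s2, b)] for b in ACTIONS) - Q[(s, a)])
        s = s2

# Despite frequent forced-left interruptions, the learned greedy policy
# still heads right -- the overrides did not corrupt what was learned.
policy = {s: max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(N_STATES)}
```

The point is not that the agent obeys interruptions here (nothing stops it from learning to dodge them in richer environments), but that the learning rule itself is not distorted by them, which is one ingredient of safe interruptibility.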
2 – Maluuba is Getting Machines Closer to Reading Like Humans Do
Canada-based AI company Maluuba this week released a machine learning program that has outperformed all other machine comprehension programs to date, including those of Facebook, Google, and IBM. Called EpiReader, the system is specifically designed to fill in a missing word in a chunk of text based on context. EpiReader was tested on two collections of text: the CNN/Daily Mail collection, which consists of more than 300,000 articles from news websites, and the Children’s Book Test, composed of 98 classic children’s books. The system scored accuracy rates of 74 percent and 67.4 percent, respectively, setting new benchmarks for the field. Yoshua Bengio, an advisor to Maluuba, cautioned that there’s still much work to be done before machines come anywhere close to matching natural language comprehension in humans.
(Read the full article at The Verge and the research paper at arxiv.org)
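The benchmarks above are cloze-style tests: a word is deleted from a passage and the system must choose the filler from the surrounding context. The sketch below illustrates only the task format, not Maluuba’s EpiReader model; the tiny corpus, candidate list, and co-occurrence scorer are all invented for illustration.

```python
# Cloze-task illustration: pick the candidate word that best fits the blank,
# scored by how often it co-occurs with the context words in a small
# "training" corpus. (A trivial baseline, nothing like EpiReader's
# neural architecture.)

corpus = (
    "the cat sat on the mat . the cat chased the mouse . "
    "the dog chased the cat . the mouse ate the cheese ."
).split()

def cooccurrence_score(candidate, context, corpus, window=3):
    """Count context words appearing within `window` tokens of the candidate."""
    score = 0
    for i, tok in enumerate(corpus):
        if tok != candidate:
            continue
        nearby = corpus[max(0, i - window): i + window + 1]
        score += sum(1 for w in context if w in nearby)
    return score

def fill_blank(passage_with_blank, candidates, corpus):
    """Return the candidate scoring highest against the passage's context."""
    context = [w for w in passage_with_blank.split() if w != "___"]
    return max(candidates, key=lambda c: cooccurrence_score(c, context, corpus))

answer = fill_blank("the ___ chased the mouse", ["cheese", "cat", "mat"], corpus)
# "cat" co-occurs most often with "chased" and "mouse", so it wins here.
```

Real benchmarks like the Children’s Book Test work the same way at scale: the candidates come from the passage itself, and accuracy is simply the fraction of blanks filled correctly.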
3 – Google Moves Closer to a Universal Quantum Computer
Google research teams in California and Spain, in collaboration with Canada-based D-Wave, have created functioning prototypes of quantum computers. The prototype combines two approaches to designing the computer’s circuitry. One involves constructing the quantum bits (qubits) in a specific way to solve a particular problem, similar to a traditional microprocessor; the other, known as adiabatic quantum computing (AQC), involves encoding a problem in the states of a group of qubits, which gradually evolve toward the problem’s solution. Though the models are difficult to scale because of the number of qubits needed to solve “any computational problem”, the lessons learned from the teams’ work are invaluable for the research and architecture of future scalable quantum computers.
(Read the full article at Nature)
4 – IBM Targets Data Scientists with a New Development Platform Based on Apache Spark
On Monday, IBM introduced a new platform aimed at streamlining data scientists’ work. Bob Picciano, senior vice president of IBM Analytics, said:
“With data science, the major roadblock is having access to large data sets and having the ability to work with so much data.”
Named Data Science Experience, the platform is designed to be an all-in-one, go-to place for embedding data and machine learning into cloud-based applications. Developers have access to Python, R, and Scala, and can also view sample notebooks and tutorials while coding. A suite of other tools focuses on data preparation and cleaning, visualization, prescriptive analytics, data connections, and collaboration with other coders.
(Read the full article at PC World)
5 – How Shining a Laser on Your Face Might Help Siri Understand You
An Israeli startup named VocalZoom is breaking into the speech-recognition space with technology that it believes will improve the accuracy of speech recognition by 60 to 80 percent. The company is developing tiny, low-power lasers, to be used alongside existing speech-recognition technologies, that measure the vibrations of your skin when you speak. A recent venture funding round secured VocalZoom $12.5 million, and CEO Tal Bakish is already in talks with several undisclosed automotive companies; Bakish believes VocalZoom’s lasers will be added to vehicles by 2018 for better voice-command technology. The technology will likely first be introduced in helmets and headsets, such as those for warehouse workers or motorcyclists (China-based iFlytek has a prototype headset in the works for the end of August 2016).
(Read the full article at MIT Technology Review)
Image credit: D-Wave Systems