One of the world’s oldest and most prestigious universities is adding a new research focus that reflects its progressive approach to academia. With a grant from the Leverhulme Trust, a non-profit foundation, academics at England’s University of Cambridge will be able to study the ethics of artificial intelligence over the next ten years.
The research will be housed at the Leverhulme Centre for the Future of Intelligence, which will be established thanks to the trust’s $15 million grant.
Working alongside Cambridge’s already influential Centre for the Study of Existential Risk (CSER), the new AI ethics center will pursue a common purpose: to foster responsible innovation and refresh contemporary perspectives on the opportunities and threats of AI, according to Professor Huw Price, the university’s Bertrand Russell Professor of Philosophy, who will direct the Centre for the Future of Intelligence as well as CSER.
“We’re still using memes from science fiction movies made decades ago – the classic case being 2001: A Space Odyssey, and that was 50 years ago,” Professor Price told the Wall Street Journal. “Stanley Kubrick was a brilliant film director, but we can do better than that now.”
Cambridge isn’t alone in recognizing the need for an ethical study of AI and taking action to establish one. Thanks to a $7 million contribution from Elon Musk to “keep AI robust and beneficial,” Cambridge, Massachusetts saw the birth of the Future of Life Institute last July. Meanwhile, Musk and other well-to-do and concerned investors committed $1 billion to OpenAI, a non-profit research initiative intended “to advance digital intelligence in the way that is most likely to benefit humanity as a whole, unconstrained by a need to generate financial return.”
Corporate acquisition of AI companies has also been a trend in recent years. Google, Facebook, Microsoft, and Apple have all recently invested in AI software that shows promise in tasks ranging from facial recognition to speech recognition and beyond.
Price and his Cambridge colleagues will partner with researchers at the Oxford Martin School and the University of California, Berkeley to combine the insights of software programmers and philosophers, and to develop code that would govern the behavior of AI systems.
“As a species, we need a successful transition to an era in which we share the planet with high-level, non-biological intelligence,” Price told the Wall Street Journal. “We don’t know how far away that is, but we can be pretty confident that it’s in our future. Our challenge is to make sure that goes well.”
All this interest in artificial intelligence seems sudden, but it isn’t whimsical. 2015 was regarded as a “breakthrough year” for the development of AI. Cloud computing infrastructure has provided cheaper and more powerful means of running programs. Self-driving cars toured the world while an AI program learned to play old Atari games all by itself. Likewise, a motorcycle-riding robot vowed to surpass us, while a robotics company asked customers to refrain from having sex with its product. Though AI systems toyed with both the silly and the serious in 2015, the Cambridge initiative lends academic weight to the importance of weighing the rights and wrongs of artificial intelligence.