This week we kick off the first episode in our new AI Futures series. This 12-part series will focus on the near-term and long-term governance of artificial intelligence. Our intention with this series is to take our grounding in the near-term applications of artificial intelligence and extend the conversation forward to the long-term implications of AI.
Our first guest in the series is a renowned AI researcher and UC Berkeley Professor, Stuart Russell.
Stuart is co-author, along with Peter Norvig, of Artificial Intelligence: A Modern Approach, which is used as a textbook in over 1,400 universities around the world. Stuart’s most recent book, Human Compatible: Artificial Intelligence and the Problem of Control, covers the misuses of AI, the means of controlling strong AI, and potential approaches to effective governance. In 2016, Stuart led the founding of the UC Berkeley Center for Human-Compatible AI, with a $5.5 million grant from the Open Philanthropy Project. The center’s stated mission is “to develop the conceptual and technical wherewithal to reorient the general thrust of AI research towards provably beneficial systems.”
There is no academic consensus on whether strong AI should be seen as a real risk to humanity in the next 100 – or even 500 – years (see our own poll on AI risks, and the largest such poll I’m aware of, conducted by Bostrom et al.). Still, Stuart is arguably the most respected academic to emphasize the importance of avoiding a dangerous general AI takeoff, and his writing and presentations on this topic have been influential for the entire field of AI safety. We’re honored to have Stuart as the first interviewee for this new series.
In this episode, Stuart explores some of the parallels between AI governance and the governance and control of other technologies – including electricity and nuclear materials. He also shares his thoughts on international AI transparency and governance.
Guest: Stuart Russell, Professor of Computer Science, University of California, Berkeley.
Expertise: Artificial intelligence, AI safety, and AI governance.
Brief Recognition: Stuart earned his PhD in Computer Science from Stanford University in 1986 and has been a Professor of Computer Science at UC Berkeley since 1996. He is Vice Chair of the World Economic Forum’s Council on AI and Robotics and a fellow of the American Association for Artificial Intelligence. His awards include the National Science Foundation’s Presidential Young Investigator Award, the World Technology Award, the Mitchell Prize, and the AAAI/EAAI Outstanding Educator Award.
Interview Questions
(3:15) You believe that at some point we’ll need global governance of AI – can you summarize your position on this topic?
(8:15) Are there existing standards, procedures, or laws from other technologies that might apply well to AI?
(13:00) Do you think that early AI governance progress will begin with individual, discrete governance instances (say, around using AI to market to children)?
(19:10) Do you think that AI principles can be the right starting point for drilling down to more concrete governance?
(22:10) How can we develop an international consensus on AI values with nations that don’t share values such as individualism, privacy, and freedom of speech?
(27:00) What does near-term international AI convening look like? How could nations begin to come together?
(33:30) Do you believe that the shared prosperity from beneficial strong AI would ease the tensions that drive violent international competition?
(38:00) What is the role for the private sector in helping to bring us closer to a better AI future?
(41:15) Do you believe that the “new normal” for AI governance and ethics in industries (like insurance or banking) can set a precedent for the bigger picture of AI ethics and governance?
Every Saturday from June 27th to September 12th, we’ll release a new AI Futures episode about AI governance.
Do you have any thoughts you’d like to share about this AI Futures series? Share your thoughts in our one-question podcast survey.