Event Title: The GPAI 2024 Fall Plenary
Event Host: OECD
Location: Paris, France
Date: November 12-13, 2024
Team Member: Daniel Faggella, Founder, CEO, and Head of Research, Emerj Technology Research
What Happened
In an effort to consolidate global AI governance initiatives and resources, the Global Partnership on AI (GPAI) and the OECD AI Policy Observatory have merged into a single policy body. The merger was set in motion earlier this year and formally coalesced at the GPAI 2024 Fall Plenary.
The recent OECD AI Futures meeting brought together academic, industry, and intergovernmental leaders to discuss key priorities for this newly unified group. Chaired by Francesca Rossi, IBM AI Ethics Global Leader and Distinguished Research Staff Member at the IBM T.J. Watson Research Center, and Stuart Russell of the Center for Human-Compatible Artificial Intelligence (CHAI) at UC Berkeley, the meeting focused on defining the future areas of research and policy for the OECD AI Futures Group.
Yoshua Bengio presenting remotely before the GPAI 2024 Plenary at the OECD in November 2024. (Source: Daniel Faggella)
Participants explored a broad spectrum of AI impacts, from immediate technical risks to long-term societal implications. Emerj CEO and Head of Research Daniel Faggella attended the meeting as part of his ongoing role in contributing to international AI policy discussions, a commitment he has upheld since joining the OECD's AI expert group in 2018.
Emerj CEO and Head of Research Daniel Faggella participating in a roundtable during the GPAI 2024 Plenary at the OECD in November 2024. (Source: Daniel Faggella)
What We Learned
The meeting yielded the following critical insights for business leaders and policymakers:
- Divergence on AGI Risks: A notable divide remains among policymakers and researchers regarding the likelihood and urgency of risks from Artificial General Intelligence (AGI). While some participants expressed concern over the loss of human control over advanced AI systems, the majority were hesitant to prioritize this as a near-term focus.
- Yoshua Bengio’s Risk Framework: Yoshua Bengio presented findings from his forthcoming AI Risk Report, categorizing risks into three areas: malicious use (e.g., scams, disinformation), cyber malfunctions (e.g., self-driving car failures), and systemic risks (e.g., environmental and legal issues).
- Forecasting and Scenario Planning: Despite disagreements on AGI, there was broad consensus on the value of scenario planning. Discussions emphasized evaluating AI's impacts on specific industries, such as banking and life sciences, and on societal domains like education and communication, under various future scenarios.
- Evolving Consensus: This meeting marked a rare, candid discussion of AGI within the OECD framework. Though opinions differed, the group advanced efforts to bridge perspectives through structured forecasting exercises.
Prominent voices in the AI community, including Sam Altman and Elon Musk, have recently suggested that AGI could arrive on relatively short timelines. Even skeptics like Yann LeCun have acknowledged the possibility of reaching human-level AI within the next five to ten years, hinting at growing recognition of AGI as a topic of serious debate.
Emerj will continue to monitor developments from the OECD AI group and share updates as new reports and findings become available.