Jumpstarting International AGI Governance – a Snapshot from the Millennium Project’s Recent Expert Survey

Matthew DeMello

Matthew is Senior Editor at Emerj, focused on enterprise AI use-cases and trends. He previously served as podcast producer with CrossBorder Solutions, a venture-backed AI-enabled tax solutions firm. Before that, he served three years at the World Policy Institute as a news editor and podcast producer.


Riya Pahuja also contributed reporting to this article.

As AI systems take on increasingly capable cognitive functions across enterprises and public institutions, concern about how these technologies might impact humanity, and how those impacts are covered by the media, is becoming a central talking point across global governing bodies.

At the World Economic Forum Annual Meeting in January 2024 (best known by its host location, Davos, Switzerland), United Nations Secretary-General António Guterres gave a special address warning that the rapid development of current AI initiatives poses an "existential threat" to humanity.

The Millennium Project was co-founded by futurist and sitting executive director Jerome C. Glenn in 1996 to evaluate 15 global challenges the organization sees as central to the future of humanity. The project plays a significant role in facilitating international AI governance discussions with partners ranging from foreign governments to the Red Cross, and is a particularly prominent voice in futures forecasting and AI policy guidance.

The Millennium Project’s report, “International Governance Issues of the Transition from Artificial Narrow Intelligence to Artificial General Intelligence”, focuses on the governance challenges that Artificial General Intelligence (AGI) poses and the potential emergence of Artificial Super Intelligence (ASI). AGI is defined as a general-purpose AI capable of autonomous learning, code editing, and problem-solving strategies comparable to or better than human abilities.

The report highlights the urgency of addressing AGI governance issues internationally, emphasizing the need for a regulatory system before AGI’s potential arrival within the next decade to prevent undesirable outcomes with ASI. 

The study involves 22 questions crafted by the AGI Steering Committee, directed towards 55 AGI experts and thought leaders worldwide, with their responses forming the basis for an international assessment. The report helps inform discussions on global governance scenarios for AGI, aiming to engage authorities in law, regulation, international organizations, and AGI leaders. 

Daniel Faggella is CEO and Head of Research of Emerj Technology Research. He recently contributed to the Millennium Project’s report and first interviewed Jerome in 2020.

The following article assembles a short set of quotes from these experts in response to two critical questions in the series, one involving the risks of artificial general intelligence and the other involving modes of AI governance.

Some of the following quotes were taken directly from experts who contributed to the report. Others were pulled from various interviews and publications. Together, the collective testimony provides business and public policy leaders with a rich tapestry of ideas on where governance will need to grapple with the questions AGI will ask of humanity over the coming years.

Question: What are the most important, serious outcomes if these trajectories are not governed or are poorly governed?

“Unsupervised learning from mostly unadulterated data develops a mind that is completely in black box territory. We don’t know how it works. It has already convinced a man to commit suicide. People have already coaxed GPT-4 into giving advice about bioweapons. With modifications of a GPT-5 or so, it could give dangerously bad advice and actions. Here’s an important sign regarding public opinion.

In a recent YouGov survey, around 60% of people aged under 30 were either very concerned or somewhat concerned that AI will cause the end of the human race. So, we are calling for a six-month pause of any work beyond GPT-4: One really important thing that a six-month pause would give us is empirical knowledge of whether a pause is possible in the first place. If not — if AI Labs will not cooperate with humanity — then we will know stronger measures will be needed.”

– Jaan Tallinn, the Centre for the Study of Existential Risk at Cambridge University and Future of Life Institute


“50% of AI researchers believe there’s a 10% or greater chance that humans go extinct from our inability to control AI. That would be like if you’re about to get on a plane and 50% of the engineers who make the plane say, ‘Well, if you get on this plane, there’s a 10% chance that everybody goes down. Would you get on that plane?’ But we are rapidly onboarding people onto this plane.”

– Tristan Harris, Center for Humane Technology (CHT)


“I share the concern that AGI could be massively dangerous to humanity because we just don’t know what a system that’s so much smarter than us will do. I mean, obviously, what we need to do is make this synergistic, have it so it helps people. And I think one of the main issues is the political systems we have. I’m not confident that President Putin is going to use AI in ways that help people. 

Regarding autonomous lethal weapons, we need something like the Geneva Conventions: people decided that chemical weapons were so nasty they weren’t going to use them. People would love to get a similar treaty for autonomous lethal weapons, but I don’t think there’s any way they’re going to bring that. I think if Putin had autonomous lethal weapons, he would use them right away. [Autonomous lethal drones sold by Turkey have already been used in Syria, Libya, and in the Azerbaijan-Armenia war.]”

– Geoffrey Hinton, Canada Research Chair in Machine Learning and former Google Brain computer scientist 

Question: How do we manage the international cooperation necessary to build international agreements and a global governance system while nations and corporations are in an intellectual “arms race” for global leadership? 

(Question 7 from the “International Governance Issues of the Transition from Artificial Narrow Intelligence to Artificial General Intelligence” report from the Millennium Project.) 

“Plan A is regulation with the external audits of independent researchers and consultants. We need also international coordination when we do that, because we want to make sure all the countries follow some minimal standards. 

We need that because it’s not enough for the US to have the rules because computer viruses or biological viruses don’t care about borders. Of course, it has to start with the US and probably the next thing has to be [both] US and China agreeing on some standards. But at some point, it has to be international, and it has to be a treaty that’s enforced quite strongly because there’s so much at stake.”

– Yoshua Bengio, AI pioneer, Quebec AI Institute and the University of Montréal


“We need global governance for AI; we have a lot of patchwork right now, almost balkanized. The worst case from the company’s perspective and the world’s perspective is if there are 193 jurisdictions, each deciding its own rules for the training of these models, each run by governments that don’t have much specific expertise in AI.

We need to have a global system modeled on something like the International Atomic Energy Agency to manage these new threats and conduct research to counter cybercrimes, cyber warfare, and disinformation, a kind of standards organization. This global organization should be well-financed to try to build tools to mitigate those threats.”

– Gary Marcus, NYU Professor Emeritus and author


“So the kind of governance that is needed varies enormously from technology to technology. Think about nuclear power, whether it’s weapons or energy. Every country in the world understands that it would be a bad idea for nuclear weapons and technology to fall into the hands of criminals or other sorts of non-state, non-regulated entities.

And so we have a rigorous and globally fairly uniform set of agreements, in the form of treaties and national legislation, about how nuclear materials are to be stored and transported securely around nuclear plants. And, of course, there are treaties controlling or banning nuclear weapons.”

– Stuart Russell, UC Berkeley

Emerj previously hosted Stuart on the AI in Business podcast back in 2020, talking about the subject of AI governance.

“Governance systems should include mechanisms for information sharing, coordination, and dispute resolution. Pursue multi-stakeholder agreements among nations and corporations to establish norms, standards, and regulations for the development and use of AGI. This could involve creating international bodies to oversee and enforce these agreements, such as the IAEA for nuclear technology. Encourage transparent and open development of AGI, in which researchers and developers share their work and collaborate across borders. 

This could help build trust and foster cooperation among nations and corporations. Governments should partner with corporations to create joint research initiatives and funding mechanisms to support the development of AGI while ensuring that the technology is developed in ways that align with societal values and goals. Bridge the AGI divide; developed countries may have an edge in AGI development, but emerging economies may also contribute their own unique resources and perspectives.”

– Yudong Yang, Alibaba Research Institute


You can learn more by accessing the full report here. We look forward to supporting the Millennium Project’s future reports and work. Emerj believes that rational and meaningful collaboration around AI requires policy and business leaders to understand the realpolitik of artificial general intelligence on an international scale.
