24th World Conference

28th October, 2021 (Thursday)
19:00-19:20 (CET)

Why We Have to Start Working on AGI Governance Now

Event Speaker
Co-founder and Executive Director, The Millennium Project
Event Description

An international assessment of how to govern the potential transition from Artificial Narrow Intelligence (ANI) to Artificial General Intelligence (AGI) is needed. If the initial conditions of AGI are not “right,” it could evolve into the kind of Artificial Super Intelligence (ASI) that Stephen Hawking, Elon Musk, and Bill Gates have warned could threaten the future of humanity.

There are many excellent centers studying the values and ethical issues of ANI, but not potential global governance models for the transition to AGI. The distinctions among ANI, AGI, and ASI are usually missing in these studies. Even the most comprehensive and detailed report of the U.S. National Security Commission on Artificial Intelligence makes little mention of these distinctions.

Current work on AI governance is designed to catch up with the artificial narrow intelligence proliferating worldwide today. Meanwhile, investment in AGI development is forecast to reach $50 billion by 2023. Expert judgments about when AGI will be possible vary; some of those working to develop AGI believe it could arrive in as little as ten years.
It is likely to take ten years to: 1) negotiate international or global agreements on the transition from ANI to AGI; 2) design the governance system; and 3) begin implementation. Hence, it would be wise to begin exploring potential governance approaches and assessing their likely effectiveness now. We need to jump ahead to anticipate governance requirements for what AGI could become. Beginning now to explore and assess rules for the governance of AGI will not stifle its development, since such rules would not be in place for at least ten years. (Consider how long it is taking to create a global governance system for climate change.)

“The governance of AI is the most important issue facing humanity today and especially in the coming decades.”
— Allan Dafoe, Future of Humanity Institute, University of Oxford