  • Publisher: The Millennium Project
  • Publishing date: 2023
  • Language: English

Transition from Artificial Narrow (ANI) to Artificial General Intelligence (AGI) Governance

Phase 1 of the AGI study collected the views of 55 AGI leaders in the US, China, the UK, the European Union, Canada, and Russia on the 22 questions below (the list of leaders follows the questions). Phase 1 research was financially supported by the Dubai Future Foundation and the Future of Life Institute.

Updates:

  • February 2023, Publication: “Artificial General Intelligence Issues and Opportunities” by Jerome C. Glenn, contracted by the EC as input to the Foresight for the 2nd Strategic Plan of Horizon Europe (2025-27).
  • December 2022, Podcast: “Global governance of the transition from Artificial Narrow Intelligence (ANI) to Artificial General Intelligence (AGI)”, with Jerome C. Glenn on London Futurist.
  • December 2022, Launch: first steps of the Artificial General Intelligence Governance Study.

Available as a downloadable PDF in English, Spanish, or Chinese.

Phase 1 Questions:

Origin or Self-Emergence

  1. How do you envision the possible trajectories ahead, from today’s AI, to much more capable AGI in the future?
  2. What are the most important serious outcomes if these trajectories are not governed, or are governed badly?
  3. What are some key initial conditions for AGI so that an artificial super intelligence does not emerge later that is not to humanity’s liking?

Value alignment, morality, values

  1. Drawing on the work of the Global Partnership on Artificial Intelligence (GPAI) and others that have already identified norms, principles, and values, what additional or unique values should be considered for AGI?
  2. If a hierarchy of values becomes necessary for international treaties and a governance system, what should be the top priorities?
  3. How can alignment be achieved? If you think it is not possible, then what is the best way to manage this situation?

Governance and Regulations

  1. How to manage the international cooperation necessary to build international agreements and a global governance system while nations and corporations are in an intellectual “arms race” for global leadership?
  2. What options or models are there for global governance of AGI?
  3. What risks arise from attempts to govern the emergence of AGI? (Might some measures be counterproductive?)
  4. Should future AGIs be assigned rights?
  5. How can governance be flexible enough to respond to new issues previously unknown at the time of creating that governance system?
  6. What international governance trials, tests, or experiments can be constructed to inform the text of an international AGI treaty?
  7. How can international treaties and a governance system prevent increased centralization of power crowding out others?
  8. Where is the most important or insightful work today being conducted on global governance of AGI?

Control

  1. What enforcement powers will be needed to make an international AGI treaty effective?
  2. How can the use of AGI by organized crime and terrorism be reduced or prevented? (Please consider new types of crimes and terrorism which might be enabled by AGI.)
  3. Assuming AGI audits would have to be continuous rather than one-time certifications, how would audit values be addressed?
  4. What disruptions could complicate the task of enforcing AGI governance?
  5. How can a governance model correct undesirable action unanticipated in utility functions?
  6. How will quantum computing affect AGI control?
  7. How can international agreements and a governance system prevent an AGI “arms race” and escalation from going faster than expected, getting out of control and leading to war, be it kinetic, algorithmic, cyber, or information warfare?

And last: 22. What additional issues and/or questions need to be addressed to have a positive AGI outcome?

Initial sample of potential governance models for AGI*

  1. IAEA-like or WTO-like model with enforcement powers. These are the easiest to understand, but likely too static to manage AGI.
  2. IPCC-like model in concert with international treaties. This approach has not led to a governance system for climate change.
  3. Online real-time global collective intelligence system with audit and licensing status; governance by information power. This would be useful for selecting and using an AGI system, but there is no proof that information power would be sufficient to govern the evolution of AGI.
  4. GGCC (Global Governance Coordinating Committees), flexible and enforced by national sanctions, ad hoc legal rulings in different countries, and insurance premiums. This leaves too many ways for AGI developers to avoid meeting standards.
  5. UN, ISO, and/or IEEE standards used for auditing and licensing. Licensing would affect purchases and have impact, but it requires an international agreement or treaty ratified by all countries.
  6. Put different parts of AGI governance under different bodies such as the ITU, WTO, and WIPO. Some of this is likely to happen, but it would not be sufficient to govern all instances of AGI systems.
  7. Decentralized Semi-Autonomous TransInstitution. This could be the most effective, but also the most difficult to establish, since both Decentralized Semi-Autonomous Organizations and TransInstitutions are new concepts.

*Drawn from “Artificial General Intelligence Issues and Opportunities,” by Jerome C. Glenn, contracted by the EC as input to Horizon Europe (2025-27) planning.

AGI Experts and Thought Leaders in Phase 1

  1. Sam Altman, CEO of OpenAI (via YouTube and the OpenAI blog)
  2. Anonymous, AGI Existential Risk, OECD (ret.)
  3. Yoshua Bengio, AI pioneer, Quebec AI Institute and the University of Montréal
  4. Irakli Beridze, Centre for AI and Robotics, UN Interregional Crime and Justice Research Institute
  5. Nick Bostrom, Future of Humanity Institute at Oxford University
  6. Greg Brockman, OpenAI co-founder
  7. Vint Cerf, Chief Internet Evangelist and VP, Google
  8. Shaoqun CHEN, CEO of Shenzhen Zhongnong Net Company
  9. Anonymous, Jing Dong AI Research Institute, China
  10. Pedro Domingos, University of Washington
  11. Dan Faggella, Emerj Artificial Intelligence Research
  12. Lex Fridman, MIT and podcast host
  13. Bill Gates
  14. Ben Goertzel, CEO SingularityNET
  15. Yuval Noah Harari, Hebrew University, Israel
  16. Tristan Harris, Center for Humane Technology
  17. Demis Hassabis, CEO and co-founder of DeepMind
  18. Geoffrey Hinton, AI pioneer, Google (ret.)
  19. Lambert Hogenhout, Chief Data, Analytics and Emerging Technologies, UN Secretariat
  20. Eric Horvitz, Chief Scientific Officer, Microsoft
  21. Anonymous, Information Technology Hundred People Association, China
  22. Anonymous, China Institute of Contemporary International Relations
  23. Andrej Karpathy, OpenAI, former Director of AI at Tesla
  24. David Kelley, AGI Lab
  25. Daphne Koller, Stanford University, Coursera
  26. Ray Kurzweil, Director of Engineering, Machine Learning, Google
  27. Connor Leahy, CEO Conjecture
  28. Yann LeCun, Professor New York University, Chief Scientist for Meta
  29. Shane Legg, co-founder of DeepMind
  30. Fei-Fei Li, Stanford University Institute for Human-Centered AI
  31. Erwu Liu, Tongji University AI and Blockchain Intelligence Laboratory
  32. Gary Marcus, NYU professor emeritus
  33. Dale Moore, US Dept of Defense AI consultant
  34. Emad Mostaque, CEO of Stability AI
  35. Elon Musk
  36. Gabriel Mukobi, PhD student, Stanford University
  37. Anonymous, National Research University Higher School of Economics
  38. Judea Pearl, Professor UCLA
  39. Sundar Pichai, Google CEO
  40. Francesca Rossi, President of AAAI, IBM Fellow and IBM’s AI Ethics Global Leader
  41. Anonymous, Russian Academy of Science
  42. Stuart Russell, UC Berkeley
  43. Karl Schroeder, Science Fiction Author
  44. Bart Selman, Cornell University
  45. Javier Del Ser, Tecnalia, Spain
  46. David Shapiro, AGI Alignment Consultant
  47. Yesha Sivan, Founder and CEO of i8 Ventures
  48. Ilya Sutskever, OpenAI co-founder
  49. Jaan Tallinn, Centre for the Study of Existential Risk at Cambridge University, and Future of Life Institute
  50. Max Tegmark, Future of Life Institute and MIT
  51. Peter Voss, CEO and Chief Scientist at Aigo.ai
  52. Paul Werbos, National Science Foundation (ret.)
  53. Stephen Wolfram, Wolfram Alpha, Wolfram Language
  54. Yudong Yang, Alibaba’s DAMO Research Institute
  55. Eliezer Yudkowsky, Machine Intelligence Research Institute

There are many excellent centers studying the values and ethical issues of ANI, but not potential global governance models for the transition to AGI. The distinctions among ANI, AGI, and ASI are usually missing in these studies; even the most comprehensive and detailed study, the US National Security Commission on Artificial Intelligence report, gives them little mention.[iv] Current work on AI governance is trying to catch up with the artificial narrow intelligence proliferating worldwide today; we also need to jump ahead and anticipate the governance needs of what AGI could become.

It is sometimes argued that creating rules for the governance of AGI too soon will stifle its development. Expert judgments vary on when AGI will be possible; however, some working to develop AGI believe it could arrive within ten years. Since it is likely to take ten years to 1) develop international or global agreements on the transition from ANI to AGI, 2) design the governance system, and 3) begin implementation, it would be wise to begin exploring potential governance approaches and their potential effectiveness now.