AI Safety: Rethinking Our Approach to AGI

France, June 2024
Written by: 
Einar, AJ, Mikolaj

In the race towards Artificial General Intelligence (AGI), we often frame our choices as binary: either we sprint ahead to beat competitors like China, or we risk falling behind. However, discussions in the AI safety community propose a third option that deserves serious consideration: choosing not to build AGI at all.

The AGI Trilemma

When contemplating the future of AGI, we face three potential outcomes:

  1. Your country creates AGI first: This scenario allows you to imbue the system with your own values, potentially averting catastrophe by constraining other actors. However, racing to be first means diverting resources away from developing techniques for making AGI safe, making it more likely that adequate safety techniques will not exist by the time AGI is built.
  2. Another country creates AGI first: This outcome has the disadvantage that others decide which values are embedded in the system. Even in this scenario, however, your investment in AI safety research could prove invaluable: other developers have an incentive to incorporate proven safety measures into their systems, resulting in a safer AGI. And even if another country's values differ from yours, they are likely preferable to an AGI with arbitrary values.
  3. AGI development is postponed or halted indefinitely: This option avoids the problem of AI misalignment entirely but requires global coordination. Because we believe this to be the most desirable outcome, the remainder of this post expands on it.

The Case for Non-Development

Is it really possible not to build an economically advantageous technology? History provides numerous examples of potentially beneficial technologies that we've chosen not to develop or delay due to ethical, safety, or environmental concerns:

  • Nuclear power development has been slowed due to safety concerns
  • Fracking faced delays over environmental issues
  • Vaccine development is deliberately slowed to ensure safety
  • Human cloning is not pursued despite potential economic benefits
  • Geoengineering remains largely theoretical due to various concerns
  • Enhancement drugs for human performance are rarely researched
  • GMOs face significant resistance in many parts of the world

We believe there is a strong argument that, although AI technologies show great promise, their development can and should be slowed or even stopped in the short term to allow further research into how we can develop them safely.

Public Opinion: A Powerful Force

Public opinion plays a crucial role in shaping technological development. Recent Pew Research Center polls indicate growing concern about AI among the American public: the share of Americans who are more concerned than excited about AI has risen steadily.

This shift in public sentiment could be a powerful tool in slowing or halting AGI development. 

The Value of Delay

Even if we fail to prevent AGI development, there's value in delaying its creation. This would give the world time for:

  1. More comprehensive safety research
  2. Potential breakthroughs in alignment techniques
  3. Development of global governance frameworks

A Call to Action

Policymakers have a unique opportunity to shape the future of AI development. Here are some actionable steps to consider:

  1. Prioritize AI safety research: Allocate significant funding to AI safety initiatives. This research is valuable regardless of who ultimately develops AGI.
  2. Foster international cooperation: Work towards global agreements on AI development, similar to nuclear non-proliferation treaties.
  3. Engage with the public: Educate constituents about the potential risks and benefits of AGI, and listen to their concerns.
  4. Explore regulatory frameworks: Develop policies that prioritize safety in AI research and development.
  5. Support narrow AI paradigms: Encourage development of narrow AI systems that don't lead to AGI but still provide significant benefits.

The development of Artificial General Intelligence stands as one of the most consequential challenges of our time, and policymakers are at the helm. The choices before us are not simply about technological supremacy or economic gain, but about the future of humanity itself. We are not locked into an inevitable path towards AGI; we have the power to shape its trajectory. By prioritizing safety over speed, cooperation over competition, and foresight over haste, we can steer towards a future where AI enhances rather than endangers human flourishing. The decisions we make today will echo through generations. Let us choose wisdom, embrace responsible innovation, and work collectively to ensure that the dawn of AGI, if it comes, brings light rather than darkness.

Acknowledgements:

This blog post was written with the financial support of Erasmus+.