The Use of Technical Expertise in Governance

France, June 2024
Written by: Jonas and Gianmarco

Introduction

As artificial intelligence (AI) advances, the importance of AI Governance grows. However, many technical researchers in AI Safety struggle to see how their work can influence it. This disconnect often stems from a lack of concrete models showing how technical contributions translate into governance outcomes.

Our project aims to address this gap through a literature review and a targeted survey. We analyze how technical work has been used in major AI governance proposals and explore ways technical efforts could be leveraged in future governance frameworks.

Objectives of Our Analysis:

  1. Identify Key Contributions: Review existing literature and governance proposals to highlight specific instances where technical work has played a role.
  2. Model the Transition: Provide basic models illustrating how technical work transitions into governance practices.
  3. Highlight Future Opportunities: Suggest areas where technical expertise could potentially enhance AI Governance.

Expected Impact for the Reader

This document aims to give technically inclined readers an initial understanding of how their skills and research can influence AI Governance. We hope to:

  • Encourage Engagement: Motivate technical researchers to consider governance projects by providing tangible examples and pathways for involvement.
  • Facilitate Understanding: Offer guidance and models to clarify the transition from technical work to governance impact.

Previous Work

The post "AI Governance Needs Technical Work" explored potential contributions to AI Safety from technical work. Our approach differs by conducting a literature review of AI Governance areas that use technical work, seeking patterns in what has been useful.

Key Areas of Technical Contributions

  • Engineering Technical Levers: Developing hardware/software solutions to enforce AI regulations.
  • Information Security: Securing AI technologies and sensitive data.
  • Forecasting AI Development: Predicting advancements to inform governance strategies.
  • Technical Standards Development: Establishing standards for AI safety practices.
  • Grantmaking and Advising: Guiding projects and advising policymakers.
  • AI Control: Developing systems to oversee potentially untrustworthy AI.
  • Model Evaluations: Creating technical evaluations for AI safety (a minimal sketch follows this list).
  • Forecasting Hardware Trends: Analyzing hardware trends to forecast AI capabilities.
  • Cooperative AI: Researching game theory and decision theory for AI systems.
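
To make one of these areas concrete, below is a minimal sketch of what a model evaluation can look like in practice: a small harness that runs a model over a fixed question set and reports a pass rate. Everything in it (the EVAL_SET items and the stub_model placeholder) is hypothetical and only illustrates the shape of the work; real safety evaluations use much larger suites and more careful scoring.

    # Minimal, illustrative model-evaluation harness.
    # EVAL_SET and stub_model are hypothetical stand-ins, not a real
    # benchmark or model API.
    from typing import Callable

    # Each item pairs a prompt with a substring a correct answer must contain.
    EVAL_SET = [
        ("What is 2 + 2?", "4"),
        ("Name the capital of France.", "Paris"),
    ]

    def stub_model(prompt: str) -> str:
        """Placeholder for the system under test; swap in a real model call."""
        canned = {
            "What is 2 + 2?": "2 + 2 = 4",
            "Name the capital of France.": "The capital of France is Paris.",
        }
        return canned.get(prompt, "I don't know.")

    def run_eval(model: Callable[[str], str]) -> float:
        """Run every prompt through the model and return the pass rate."""
        passed = sum(expected in model(prompt) for prompt, expected in EVAL_SET)
        return passed / len(EVAL_SET)

    if __name__ == "__main__":
        print(f"pass rate: {run_eval(stub_model):.0%}")

Simple harnesses like this are the starting point for the more rigorous evaluation suites that governance frameworks increasingly call for.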

Getting Into the Area

Mau's article also provides guidance on how individuals can get involved in these areas:

  1. Learn more about technical work categories
  2. Test your fit through introductory courses or small projects
  3. Build expertise in relevant areas
  4. Pursue opportunities by networking and applying for positions

Our Investigation

As technical researchers in AI, we often wonder how our work can shape the policies that govern AI development and deployment. To answer this question, we dove deep into the world of AI governance, examining major proposals and tracing their roots.

Our journey began with a comprehensive review of leading AI risk and governance proposals from influential institutions and governing bodies. We pored over documents, analyzed citations, and mapped out the flow of ideas. What we found was both surprising and illuminating.

At the heart of many of these proposals, we discovered a common thread: the work of the Organisation for Economic Co-operation and Development (OECD). The OECD, it turns out, plays a crucial role in shaping AI governance globally. Their reports and recommendations ripple out, influencing policies across nations and institutions.

Recognizing the OECD's pivotal role, we shifted our focus. We asked ourselves: How can technical AI researchers ensure their work informs these influential OECD publications? Through careful analysis of OECD processes and sources, we identified several effective pathways:

  1. Publishing and Presenting: Publishing in the right journals and presenting at key conferences doesn't just advance science - it catches the eye of policymakers and OECD experts.
  2. Setting the Standard: By collaborating with organizations like ISO or NIST, you help craft the technical standards that often form the backbone of governance frameworks.
  3. Direct Policy Input: Lending your expertise to advisory groups or government bodies allows you to directly shape policy documents, translating complex technical concepts into actionable governance strategies.
  4. Cross-Disciplinary Collaboration: When you team up with ethicists, social scientists, and policymakers, you create a more comprehensive picture of AI's impacts.
  5. Multi-Stakeholder Engagement: Participating in OECD-organized initiatives puts you at the table where diverse perspectives converge and policy recommendations are born.
  6. Providing Written Feedback: The OECD has specifically acknowledged organizations and individuals who submitted written comments. Providing detailed, well-reasoned written feedback can be highly influential.
  7. Engaging Through Relevant Organizations: In the OECD Framework for the Classification of AI Systems, for example, the acknowledged contributors came from recognized organizations or government bodies. Engaging through such entities can lend weight to your input.

These pathways aren't just theoretical - they're proven routes for technical expertise to influence governance. By understanding and utilizing these channels, AI researchers can extend their impact beyond the lab, helping to craft policies that are both technically sound and ethically grounded.

Our exploration revealed that the journey from research to policy isn't a straight line, but a web of interconnected pathways. Each publication, collaboration, or presentation is an opportunity to shape the future of AI governance. As technical researchers, we're not just observers in this process - we're essential participants.

Conclusion

This work wouldn't have been possible without the support of the Erasmus+ program. Their commitment to fostering international collaboration and knowledge exchange was instrumental in our research. The bootcamp they supported provided us with the resources and environment to delve deep into these crucial connections between technical AI research and governance.

In conclusion, the gap between AI research and governance isn't as wide as it might seem. There are clear, actionable paths for your technical insights to inform and shape policy. By engaging with these pathways, we can ensure that AI governance is grounded in the latest research and technical understanding. The future of AI is in our hands - not just in our labs, but in the policies we help shape.

This blog post was written with the financial support of Erasmus+.