Navigating the Future

France, June 2024
Written by: 
Ouafae, Pierina, Michał, M'hamed

As artificial intelligence (AI) continues to evolve at a rapid pace, the need for comprehensive safety strategies and regulatory frameworks grows increasingly urgent. The European Union (EU) has responded with the EU AI Act, a landmark legislative framework governing the development and deployment of AI technologies. The Act sets a baseline for AI regulation across member states, addressing the key risks and ethical considerations that AI deployment raises.

Despite these efforts, there is broad agreement that further regulation will be needed to keep pace with technological advances and the emerging challenges they bring.

This article examines the details of the EU AI Act and explores how the UK, France, and Germany are adapting their AI safety frameworks to specific national needs. By comparing these approaches, it aims to offer a bird's-eye view of the relevant frameworks and legislation, and hopefully to encourage the reader to engage in further research.

The European Union and the EU AI Act

The AI Act employs a risk-based regulatory framework, categorizing AI systems into four levels of risk to prevent misuse (a schematic sketch follows the list):

  • Unacceptable Risks: Bans harmful practices such as social scoring and the untargeted scraping of facial images, protecting societal values and individual rights.
  • High Risks: Subjects AI systems in sensitive areas like recruitment and healthcare to rigorous regulations and pre-deployment assessments, ensuring compliance with safety and ethical standards.
  • Transparency Risks: Permits technologies such as impersonation tools and deepfakes, but requires clear disclosure so that users are not deceived.
  • Minimal or No Risks: Covers the vast majority of AI systems (80 to 85%), which pose negligible risk and face minimal regulation; adherence to voluntary codes of conduct is encouraged.
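
To make the tiering concrete, below is a minimal Python sketch of how a compliance team might encode the four tiers and their headline obligations. The tier names mirror the list above; the `triage` helper and the one-line obligation summaries are our own illustrative paraphrases, not text from the Act.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"  # banned practices, e.g. social scoring
    HIGH = "high"                  # sensitive domains, e.g. recruitment, healthcare
    TRANSPARENCY = "transparency"  # permitted with disclosure, e.g. deepfakes
    MINIMAL = "minimal"            # the ~80-85% of systems with negligible risk

# Headline obligations per tier, paraphrased from the categories above.
OBLIGATIONS = {
    RiskTier.UNACCEPTABLE: "prohibited: may not be placed on the EU market",
    RiskTier.HIGH: "pre-deployment assessment and ongoing compliance checks",
    RiskTier.TRANSPARENCY: "must clearly disclose AI-generated or AI-driven output",
    RiskTier.MINIMAL: "no mandatory obligations; voluntary codes of conduct",
}

def triage(tier: RiskTier) -> str:
    """Return the headline obligation for a system's assigned risk tier."""
    return OBLIGATIONS[tier]

print(triage(RiskTier.HIGH))
```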

This approach aims to protect the public interest while fostering innovation in lower-risk applications. However, it overlooks the potential risks of capabilities such as self-replication and does not specify requirements for alignment testing. While the Act supports innovation, matching innovation in evaluations and benchmarks is needed to ensure that humans retain control over AI models and systems throughout their development.

EU AI Act Application Details

The EU AI Act stipulates that any developer or company engaging in AI within the EU is subject to its regulations, regardless of their primary location. These regulations apply to:

  • AI Development: Technologies developed in the EU but used outside are still governed by EU rules during the development phase.
  • AI Deployment: Technologies developed outside the EU must comply with EU rules when deployed within its borders (see the scoping sketch below).
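
In code terms, this scoping rule is a simple disjunction. The sketch below is a hypothetical illustration of the development-or-deployment test described above; the function name and boolean flags are our own assumptions for readability, not terminology from the Act.

```python
def eu_ai_act_applies(developed_in_eu: bool, deployed_in_eu: bool) -> bool:
    """Illustrative scoping test: the Act reaches an AI system if it is
    developed in the EU (development-phase rules apply) or deployed there
    (market-access rules apply), regardless of where the provider is based.
    """
    return developed_in_eu or deployed_in_eu

# A system built outside the EU but offered to EU users falls within scope:
print(eu_ai_act_applies(developed_in_eu=False, deployed_in_eu=True))  # True
```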

This extraterritorial reach protects consumers and users by shaping how AI products are designed and what information must accompany them, enhancing both safety and transparency.

Compliance is structured around the same risk-based system, with AI systems categorized as unacceptable, high, transparency (limited), or minimal risk. Each category carries specific compliance requirements:

  • Data Protection: Strict compliance with GDPR is mandatory, particularly concerning personal data.
  • Transparency and Accountability: High-risk AI systems must meet higher standards of transparency and have robust accountability measures in place.

EU member states retain considerable control under the Act, with autonomy in the following areas:

  • Enforcement: Implement and enforce EU regulations through local regulatory bodies.
  • Legislation: Enact stricter laws or additional measures that enhance and do not conflict with EU regulations.
  • National Interests: Preserve national interests in AI development and deployment in areas not covered by EU regulations.
  • International Relations: Engage in international AI discussions and agreements, provided they align with EU policies.

Systemic Risk in General-Purpose AI Models

Systemic risk in general-purpose AI models is assessed through a combination of technical evaluations and regulatory decisions:

  • High-Impact Capabilities: The risk classification of these models relies on evaluating their capabilities using specific technical tools, indicators, and benchmarks.
  • Commission's Decision: Additionally, the European Commission has the authority to designate a model as presenting systemic risk, either proactively or in response to a recommendation from a scientific panel.

A general presumption of systemic risk applies to an AI model if:

  • Computation Threshold: The cumulative computation used to train the model exceeds 10^25 floating-point operations (FLOPs). A FLOP is a single arithmetic operation on floating-point numbers, and the total FLOPs consumed during training is a widely used proxy for a model's scale and capability; a rough estimate of what this threshold means in practice is sketched below.
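
To give a feel for the scale of the threshold, here is a back-of-the-envelope Python sketch. It relies on the common community rule of thumb that training a dense transformer costs roughly 6 × N × D FLOPs (N parameters, D training tokens); the heuristic and the example model size are our own assumptions and are not prescribed by the Act.

```python
# Back-of-the-envelope check against the Act's 10^25 FLOP presumption.
THRESHOLD_FLOPS = 1e25

def estimated_training_flops(params: float, tokens: float) -> float:
    """Rough training-compute estimate for a dense transformer, using the
    common ~6 * N * D heuristic (an assumption, not the Act's method)."""
    return 6 * params * tokens

def presumed_systemic_risk(params: float, tokens: float) -> bool:
    """True if the rough estimate crosses the Act's presumption threshold."""
    return estimated_training_flops(params, tokens) > THRESHOLD_FLOPS

# Hypothetical example: a 70B-parameter model trained on 15T tokens.
flops = estimated_training_flops(70e9, 15e12)
print(f"{flops:.2e} FLOPs -> presumption triggered: {flops > THRESHOLD_FLOPS}")
# ~6.30e24 FLOPs: below 1e25, so no automatic presumption under this estimate.
```

Under this heuristic, a model would need roughly 100 billion parameters trained on about 17 trillion tokens before the presumption kicks in, which illustrates why only the largest frontier models are expected to cross it.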

To ensure the regulations keep pace with technological progress, the Commission can adjust these thresholds:

  • Delegated Acts: These acts allow the Commission to modify the benchmarks and thresholds as necessary, reflecting advances in technology or hardware efficiency, thus maintaining the relevance and effectiveness of the regulations.

UK

The United Kingdom, despite not being part of the European Union, is a leading country in AI and AI safety. In 2021, the UK published a comprehensive national AI strategy that notably emphasizes AI safety. The strategy is built around three main pillars:

  1. Infrastructure and Skills Development: This section addresses the need for enhancing skills among people, investing in microchips, and improving the overall infrastructure required for AI development.
  2. Economic Integration: This point focuses on how AI can be leveraged to enhance public services and boost the economy, both domestically and internationally.
  3. AI Safety: This section is particularly detailed, outlining the UK’s plans for AI safety in the short, medium, and long term. The UK demonstrates a deep understanding of the risks associated with AI and the potential solutions. The government emphasizes governance as a key area where it can exert influence, citing solutions such as algorithmic transparency and ethical data usage. An interesting aspect of the UK's approach is the "assurance roadmap," which involves various stakeholders, including third-party independent entities, to ensure the ethical use of AI.

Additionally, the UK hosts numerous organizations dedicated to AI safety research, reinforcing its position at the forefront of the field. The assurance roadmap and the involvement of independent entities highlight the UK's proactive stance on AI safety.

France

France aims to establish itself as a leader in AI and released a comprehensive report in March 2024 outlining its strategy. The strategy is divided, by budget allocation, into three main areas:

  1. Fostering AI Development: This is the largest category, focusing on advancing AI technology and research.
  2. Deploying AI for Public Benefit: This category emphasizes making AI more common and accessible to the public, ensuring that its benefits are widely distributed.
  3. AI Safety Measures: This category, although the smallest at a 10% allocation, focuses on establishing robust governance and safety measures. It includes global governance frameworks, national evaluation systems, risk management, and efforts to secure market dominance.

France's AI safety strategy, while not as clearly defined as the UK's, aims to establish a comprehensive governance framework. France is also set to host the third edition of the AI Safety Summit, following editions held in the UK and South Korea, highlighting its commitment to AI safety.

Germany

Germany's AI strategy is comprehensive and includes substantial investments and initiatives between 2023 and 2025. Although Germany does not have a dedicated law regulating AI, the government has committed 1.6 billion euros to support AI research, development, and application. Among the crucial components of Germany's strategy are:

  1. Gauss Centre for Supercomputing: Germany plans to further develop the Gauss Centre for Supercomputing to enhance its AI capabilities.
  2. Medical Care and Epidemiological Prediction: Investments in AI-based assistance systems for medical care and epidemiological prediction are prioritized to improve public health.
  3. Civic Innovation Platform Project: This project aims to leverage AI for public good, focusing on civic innovation.

Germany's strategy also emphasizes AI safety, although the country has no dedicated AI safety institute. The Federal Office for Information Security (BSI) is one of the key entities working on AI safety, producing reports and guidelines, and safety considerations are integrated into the broader national strategy rather than delegated to a standalone body.

Conclusion

While Europe has made significant strides in addressing AI safety through various national and EU-wide initiatives, critical gaps remain. The EU AI Act provides a foundation for regulation, but may not fully address the rapidly evolving landscape of AI capabilities. The UK stands out with its comprehensive approach to AI safety, while France and Germany are developing strategies that balance innovation with safety concerns.

However, these efforts may fall short in addressing crucial issues such as the potential risks of advanced AI systems and maintaining human control over increasingly sophisticated models. As AI continues to advance, European policymakers must critically reassess and strengthen their approaches to ensure they are truly prepared for the challenges that lie ahead. The race to harness AI's potential must be matched with an equally ambitious commitment to robust, forward-thinking safety measures.

This blog post was written with the financial support of Erasmus+.