A Rough Overview of Chinese AI Governance

France, June 2024
Written by: 
Vera, Cristian, Alec

Disclaimer

None of the authors has expertise on China, AI governance, or legislation more generally. We nonetheless think that a rough understanding of the topic, with examples of articles from recently passed laws, could be beneficial to the AI safety community in the West. Our main sources are Matt Sheehan’s extensive report on the topic from the Carnegie Endowment for International Peace, as well as translations of the original Chinese laws (see links below). Only around four hours of research went into this blog post, so take it with a pinch of salt.

Introduction

In recent years, China has implemented significant legislation aimed at governing AI. These regulations are structured around three binding laws passed in 2021, 2022, and 2023, with a more comprehensive draft law for 2024 in progress. The overarching strategy behind these laws is the establishment of an algorithm registry system, which serves as a foundation for future legislation. Each law targets a single type of algorithm, in contrast to comprehensive frameworks like the EU AI Act; this narrower scope allows for a faster feedback loop and the incremental build-up of government infrastructure and bureaucratic know-how.

The Algorithm Registry System

All three laws mandate that developers file detailed information about their algorithms into a novel government repository called the algorithm filing system. This registry collects data on how algorithms are trained and deployed, the datasets used, and a security self-assessment report. As noted by Matt Sheehan in his extensive report on the subject, “Some companies have been forced to complete over five separate filings for the same app, each covering a different algorithm [...] The structure of the registry and the required disclosures reveal a belief that effective regulation entails an understanding of, and potentially an intervention into, individual algorithms”.
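
To make these disclosures concrete, below is a purely illustrative sketch, in Python, of what a single filing record might contain. The field names are our own guesses based on the disclosures described above, not the actual filing form.

```python
from dataclasses import dataclass, field

@dataclass
class AlgorithmFiling:
    """A hypothetical entry in the algorithm registry.

    Field names are assumptions based on the disclosures the laws
    describe; the real filing form is more detailed (and in Chinese).
    """
    provider: str                       # company operating the service
    algorithm_name: str                 # one filing per algorithm, even within a single app
    algorithm_type: str                 # e.g. "recommendation", "deep synthesis", "generative"
    deployment_context: str             # the product or app where the algorithm runs
    training_datasets: list[str] = field(default_factory=list)  # datasets used for training
    security_self_assessment: str = ""  # summary of the required self-assessment report
```

The sketch’s main point is structural: filings are per-algorithm rather than per-product, which is what forces a company to file several times for a single app.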

Key Legislation Highlights

2021 Law – Recommendation Systems

The 2021 law, the Provisions on the Management of Algorithmic Recommendations in Internet Information Services, targets recommendation systems and includes a number of interesting and seemingly strong provisions. Some of the main ones are listed below:

  • “Providers must not set up algorithmic models that [...] go against ethics and morals, such as inducing users to become addicted or spend too much” (Art. 8)
  • “Recommendation services shall provide users with functions for selecting or deleting user tags [...] that target their personal traits.” (Art. 17)
  • “[Providers] must not push information to minors that might impact minors' physical and psychological health such as possibly leading them to imitate unsafe behaviors.” (Art. 18)
  • “[Providers] must not use algorithms to carry out unreasonable differentiation in treatment in terms of transaction prices [...] based on consumers’ preferences, transaction habits, or other traits.” (Art. 21)

Unfortunately, little is known regarding the enforcement of these regulations. Helen Toner, director of strategy at Georgetown’s Center for Security and Emerging Technology and ex-board member of OpenAI, highlights their practical ambiguity: “In many cases, it’s unclear how these will apply in practice. [...] Like its international counterpart, TikTok, Douyin is famous for its uniquely powerful recommendation algorithm, which serves video after video to the user, optimizing to keep them on the app. How does this fit with Article 8 of the new draft, which prohibits apps from engrossing or addicting their users?”

It is also worth noting that some articles seem to be tailored towards government control of information rather than protecting users or promoting AI safety per se. For instance, Article 6 states that recommendation service providers shall be “oriented towards mainstream values” and “actively transmit positive energy”. One could nonetheless imagine adapting this kind of article so that providers are instead required to be oriented towards free speech (within constitutional limits) and to actively avoid amplifying outrage and hate speech.

2022 Law – Deep Synthesis Technology

The 2022 law addresses deep synthesis technology, which includes algorithms capable of generating data, text, images, and code. We highlight some of its articles below:

  • “[Providers and users] must not [...] produce, reproduce, publish, or transmit fake news.” (Art. 6): the aim is to mitigate the spread of misinformation and ensure the integrity of information shared on platforms utilizing deep synthesis technology.
  • “[Providers] shall have safe and controllable technical safeguard measures.” (Art. 7): this article is vague and does not give deep synthesis services explicit instructions or constraints.
  • “[Providers] shall verify real user identity [...] and employ technical measures or manual methods to conduct reviews of the data inputted by users and synthesis outcomes.” (Art. 9): this aims to prevent misuse of the technology and safeguard against the creation of harmful or misleading content.
  • “For editing biometric information such as faces and voices, they shall prompt the users of the deep synthesis service to notify the individuals.” (Art. 14): this ensures that individuals are aware of and can consent to the use of their personal biometric information.
  • “[Providers shall] employ technical measures to attach symbols to information content produced or edited by their services’ users that do not impact users’ usage.” (Art. 16): this article mandates that providers label synthetically generated content (a minimal sketch of such labeling follows this list). Providers are not required to technically guarantee that the labels persist; instead, removing them is prohibited outright, as stated in the next article.
  • “Technical measures must not be employed by any organization or individual to delete, alter, or conceal the deep synthesis labels” (Art. 18).
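
As an illustration of what Article 16’s “technical measures to attach symbols” could look like in the simplest case, the sketch below tags a generated image with a provenance label stored in PNG metadata, using the Pillow library. This is our own minimal example under assumed label names (`ai_generated`, `provenance`), not an implementation of any officially specified labeling standard.

```python
from PIL import Image
from PIL.PngImagePlugin import PngInfo

# Stand-in for an image produced by a deep synthesis service.
img = Image.new("RGB", (256, 256), color="gray")

# Attach provenance labels as PNG text chunks; they leave the pixels
# untouched, so they "do not impact users' usage" (Art. 16).
meta = PngInfo()
meta.add_text("ai_generated", "true")                           # hypothetical label name
meta.add_text("provenance", "synthesized by example-provider")  # hypothetical label name
img.save("labeled_output.png", pnginfo=meta)

# Anyone inspecting the file can read the label back.
with Image.open("labeled_output.png") as im:
    print(im.text)  # {'ai_generated': 'true', 'provenance': 'synthesized by example-provider'}
```

Note that metadata labels like these are trivially stripped by re-encoding the file, which is presumably why Article 18 prohibits deleting or concealing the labels outright rather than requiring that they be technically irremovable.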

2023 Law – Generative AI

The 2023 law focuses on generative AI, which largely overlaps with the deep synthesis technology covered by the 2022 law, and mainly tightens some of that law’s requirements. The law is ambitious but does not specify standards for how its requirements should be met. Here we highlight two main additions:

  • “[Providers] should be able to ensure the [training] data’s veracity, accuracy, objectivity, and diversity.” (Art. 7)
  • “Content generated through the use of generative AI shall be true and accurate, and measures are to be adopted to prevent the generation of false information.” (Art. 4)

The goal of both articles is essentially the same: providers must implement measures to prevent the creation and dissemination of false information, so as to ensure the integrity and reliability of AI-generated content and reduce the risk of spreading misinformation.

2024 AI Law (Draft)

The draft AI Law is mainly oriented towards AI capability development, with only a small part devoted to safety. The draft includes provisions for the reasonable use of copyrighted data in AI model training: if the use of copyrighted data for model training differs from the data’s original purpose or function and does not adversely affect its normal use or the legitimate rights of its owner, the use is considered reasonable (Art. 24). This seems to broadly legitimize the use of copyrighted data for training.

Conclusion

China’s AI governance framework is impressive in its scope, but highly ambiguous in its implementation. The potentially selective enforcement of these laws reflects the lack of an independent judicial system, making direct comparisons with Western regulatory approaches difficult. Nevertheless, the detailed algorithm registry and ambitious legislative goals may offer valuable lessons for other nations seeking to regulate AI effectively.

Sources

China’s AI Regulations and How They Get Made

2021 Law

2022 Law

2023 Law

2024 AI Law Draft