In recent years, China has implemented significant legislation aimed at governing AI. This blog post explores this legislation.
July 23, 2024
None of the authors have any expertise on China, AI governance, or legislation more generally. We nonetheless think that a rough understanding of the topic, with examples of articles passed in recent laws, could be beneficial to the AI safety community in the West. Our main sources are Matt Sheehan’s extensive report on the topic, from the Carnegie Endowment for International Peace, as well as translations of the original Chinese laws (see links below). Only ~4 hours were spent researching this blog post, so take it with a pinch of salt.
China’s AI regulations are structured around three binding laws passed in 2021, 2022, and 2023, with a more comprehensive draft AI Law in progress for 2024. The overarching strategy behind these laws is the establishment of an algorithm registry system, serving as a foundation for future legislation. Each law targets a specific type of algorithm, in contrast to comprehensive regulations like the EU AI Act; this allows for a faster feedback loop and the incremental build-up of government infrastructure and bureaucratic know-how.
All three laws mandate that developers file detailed information about their algorithms into a novel government repository called the algorithm filing system. This registry collects data on how algorithms are trained and deployed, the datasets used, and a security self-assessment report. As noted by Matt Sheehan in his extensive report on the subject, “Some companies have been forced to complete over five separate filings for the same app, each covering a different algorithm [...] The structure of the registry and the required disclosures reveal a belief that effective regulation entails an understanding of, and potentially an intervention into, individual algorithms”.
The 2021 law, the Provisions on the Management of Algorithmic Recommendations in Internet Information Services, targets recommendation systems and includes a number of interesting and seemingly strong provisions. Some of the main ones are listed below:
Unfortunately, little is known regarding the enforcement of these regulations. Helen Toner, director of strategy at Georgetown’s Center for Security and Emerging Technology and ex-board member of OpenAI, highlights their practical ambiguity: “In many cases, it’s unclear how these will apply in practice. [...] Like its international counterpart, TikTok, Douyin is famous for its uniquely powerful recommendation algorithm, which serves video after video to the user, optimizing to keep them on the app. How does this fit with Article 8 of the new draft, which prohibits apps from engrossing or addicting their users?”
It is also worth noting that some articles seem tailored towards government control of information rather than protecting users or promoting AI safety per se. For instance, Article 6 states that recommendation service providers shall be “oriented towards mainstream values” and “actively transmit positive energy”. One could nonetheless imagine adapting such a provision so that recommendation service providers are instead oriented towards free speech (within constitutional limits) and actively avoid promoting outrage and hate speech.
The 2022 law addresses deep synthesis technology, which covers algorithms capable of generating data, text, images, and code. Below, we highlight some of its articles:
The 2023 law focuses on generative AI, which largely overlaps with the deep synthesis technology covered by the 2022 law, and tightens some of its provisions. The law is ambitious but does not specify standards for how its requirements are to be met. Here we highlight two main additions:
Both articles pursue essentially the same goal: providers must implement measures to prevent the creation and dissemination of false information, in order to ensure the integrity and reliability of AI-generated content and reduce the risk of spreading misinformation.
The draft AI Law is mainly oriented towards AI capability development, with only a small part devoted to safety. The draft includes provisions for the reasonable use of copyrighted data in AI model training: if the use of copyrighted data differs from the data’s original purpose or function and does not adversely affect its normal use or the legitimate rights of its owner, the use is considered reasonable (Art. 24). This appears to broadly legitimize freedom of data usage.
China’s AI governance framework is impressive in its scope, but highly ambiguous in its implementation. The potentially selective enforcement of these laws reflects the lack of an independent judicial system, making direct comparisons with Western regulatory approaches difficult. Nevertheless, the detailed algorithm registry and ambitious legislative goals may offer valuable lessons for other nations seeking to regulate AI effectively.
China’s AI Regulations and How They Get Made