Worth $23.196 billion USD in 2021, China’s Artificial Intelligence (AI) market is expected to triple to $61.855 billion by 2025, and the Chinese government expects AI to create $154.638 billion USD in annual revenue by 2030. China, however, is not just focused on the proliferation of AI and its innovative use cases; the country has also been quietly leading the pack and making its mark on the AI regulatory landscape. In 2022, China passed and enforced three distinct regulatory measures at the national, regional, and local levels. This momentum carried into 2023, when, in January alone, China cracked down on deepfake and generative technology through national-level legislation.
Looking for guidance on AI regulations in China as well as what we can learn about trends in global AI regulation? That’s what we’ll cover in this post.
Key Takeaways
Since 2021, China has been steadily introducing laws to regulate AI technologies, and in 2023 alone, China enforced multiple pieces of national regulation. Focusing on digital platforms and AI-generated content such as deepfakes, the Chinese government is paving the way towards strong protections against many of the most widespread potential AI harms.
While there is concern about the implications of China’s far-reaching regulations and their potential to hamper free speech, it would be a mistake to dismiss the important precedent and best practices these laws are setting. It is also worth noting the impact of these regulations on international firms that employ these technologies in China, as they are already expected to meet compliance requirements.
Below, we’ll take a deeper dive into China’s national AI legislation.
On January 10, 2023, China’s Deep Synthesis Provisions came into effect as part of the Chinese government’s efforts to strengthen its supervision over deep synthesis technologies and services.
The provisions apply to both ‘deep synthesis service providers’ – companies that offer deep synthesis services and those that provide them with technical support – and ‘deep synthesis service users’ – organizations and people that utilize deep synthesis to create, duplicate, publish, or transfer information.
The provisions define deep synthesis as “technology utilising generative and/or synthetic algorithms, such as deep learning and virtual reality, to produce text, graphics, audio, video, or virtual scenes.”
The provisions are centred on four key verticals:
These provisions will significantly change the way that AI-generated content is produced for 1.4 billion people due to their comprehensive scope. While the UK is also intending to ban the creation and dissemination of deepfake videos without consent, China’s law goes beyond this. The regulation creates rules for every stage of the process involved in the use of deepfakes, from creation, to labelling, to dissemination. Additionally, the law leaves room for the potential suppression of organically captured content as well.
There is speculation as to whether China will use this law as a means of policing freedom of expression too broadly. However, as one of the first countries to enforce a deepfake regulation, China has re-ignited conversations about what can be done to address the harms posed by this technology. Regardless of where you stand on the argument, the law does set a precedent we may see elements of in other jurisdictions. This year, we will see more details on how exactly these provisions are enforced.
The Internet Information Service Algorithmic Recommendation Management Provisions went into effect on March 1, 2022. This law is similar to the EU’s Digital Markets Act (DMA) and Digital Services Act (DSA). Drafted by the Cyberspace Administration of China, the provisions require that providers of AI-based personalized recommendations in mobile applications uphold user rights, including protecting minors from harm and allowing users to select or delete tags about their personal characteristics.
Companies are banned from offering different users different prices based on collected personal characteristics, and must notify users when a recommendation is made by an algorithm while giving them the option to opt out. The aim is to address monopolistic behaviour by platforms (similar to the DMA) and issues of dynamic pricing that contribute to precarious working conditions for delivery workers.
The regulation’s provisions are grouped into three main categories: general provisions, information service norms and user rights protection. The provisions affect US and international companies that use algorithms and/or machine learning in their applications or websites which operate in China, as they are already expected to comply.
Key provisions:
Among many things, the regulation prohibits:
Other provisions are less straightforward and are presumed to reflect China’s approach to AI ethics in practice; they order companies to:
Like the DSA, China’s recommender law also mandates increased transparency and audits of recommendation algorithms. To learn how algorithms work, and ensure that they do so within acceptable parameters, China has created an algorithm registry as part of this regulation. The registry includes a security assessment of registered algorithms, however, the extent to which this registry will be able to provide meaningful insight into black box technologies is yet to be determined. In the interim, such efforts for documentation and understanding are similar to that of the DSA and other EU legislation such as the EU AI Act.
More recently, on May 23, 2023, China adopted interim measures on generative AI, which went into effect on August 15, 2023. The rules seek to balance innovation with legal governance and are based on five key principles:
To support this, the measures require providers of generative AI to carry out data processing activities in a way that uses legal data sources, respects intellectual property rights, obtains consent for the use of personal information, and maximizes the authenticity, accuracy, objectivity and diversity of training data.
In addition to these laws specifically targeting AI, China’s Personal Information Protection Law (PIPL) - a national data privacy law aimed at protecting personal information and addressing the problem of personal data leakage - has implications for automated decision-making technologies. Adopted on August 20, 2021 and entering into force on November 1, 2021, the PIPL is designed to protect the privacy and personal information of Chinese citizens and imposes obligations on Chinese organisations and foreign companies operating in China.
The law defines the term “personal information” (PI) as any kind of information, electronically or otherwise recorded, related to an identified or identifiable natural person within the People’s Republic of China. Like the EU’s GDPR, PI excludes anonymised information that cannot be used to identify a specific natural person and is not reversible after anonymisation. The main contributions of the PIPL are as follows, with specific requirements in relation to automated decision-making and impact assessments:
These requirements are applicable to organizations and individuals involved in the processing of personal information in China, or outside of China if any of the following conditions are fulfilled:
Exemptions from the law include natural persons’ processing of personal information for personal or family affairs, as well as emergency circumstances where processing is needed to protect natural persons’ lives, health, or security, or that of their property. Outside of these exemptions, personal information handlers that fail to comply with the requirements of the PIPL face penalties of up to 50 million RMB, revenue confiscation (up to 5% of annual revenue), and business cessation.
In the context of AI regulation, the PIPL is significant because it regulates data, and data is central to AI. Just as recent cases are highlighting how the GDPR applies to AI in the EU, the PIPL applies similarly in China. This is seen clearly in China’s deepfake regulation, whose provisions state that entities which use deepfakes must comply with the PIPL.
In addition to these laws, on September 21, 2021, China’s Ministry of Science and Technology published a New Generation Artificial Intelligence Code of Ethics. Released by the National New Generation Artificial Intelligence Governance Professional Committee - which was established by China’s Ministry of Science and Technology to research policy recommendations for AI governance - the Ethics Code covers the entire life cycle of AI and provides guidance for natural and legal persons, as well as other relevant institutions.
The main contributions of the general provisions of the Specification are:
As seen above, the general provisions of the Specification are centred around the verticals of safety, privacy, and fairness, with management standards being encouraged to focus on the appropriate governance and exercises of power to prevent AI risks. Additionally, the Specification outlines R&D specifications concerning data storage and use, with a focus on security provisions and fairness, and supply specifications that focus on following market regulations and ensuring there are emergency provisions in place. Further, the organisation and implementation provisions encourage organisational management to build on the Ethics Code and develop guidelines that are in line with the specifications of the systems they are using.
Efforts towards AI regulation are not concentrated solely in the central government; provincial and local governments are active as well.
Regional regulations in China have struck more of a balance between support for innovation and regulation than the more stringent national initiatives. Regional regulations appear to support best practices for promoting the development of AI in industry and government.
This section looks at provincial and local AI regulations in Shanghai and the Shenzhen Special Economic Zone, respectively.
The Shanghai Regulations on Promoting the Development of the AI Industry are a provincial-level regulation passed in September 2022 and in effect since October 1, 2022. The regulation is considered a piece of industry-promotion legislation with respect to the innovative development of AI. However, keeping future implications of AI in mind, the regulation introduces a graded management system and enforces sandbox supervision, where companies are given a designated space to test and explore technologies.
Uniquely, the Shanghai AI Regulation stipulates a certain degree of flexibility regarding minor infractions. This continues to encourage the development of AI without burdening companies or developers with the fear of stringent regulation, and shows a deeper commitment to fostering innovation. It is achieved through a disclaimer clause: relevant municipal departments will oversee creating a list of infraction behaviours and make clear that there will be no administrative penalty for minor infractions. To create checks and balances on this innovation-centric approach, the regulation also establishes an Ethics Council to increase ethical awareness in the field.
Similar to the Shanghai Regulations, the Shenzhen AI Regulation to promote the AI industry was passed in September 2022 and went into effect on November 1, 2022. The regulation aims to encourage governmental organisations in China, specifically in the Shenzhen Special Economic Zone, to be at the forefront of AI adoption and development, by increasing financial support for these endeavours.
The regulation adopts a risk-management approach towards AI to foster this growth, allowing Shenzhen-based AI services and products that have been assessed as “low-risk” to continue trials and testing even without local norms, provided international standards are complied with.
Article 72 of the regulation emphasises the importance of AI ethics and encourages risk assessments to identify adverse effects of products and systems. The Shenzhen government will be responsible for the development and management of the risk classification system.
Despite being a local-level regulation, this is a significant development as Shenzhen is home to many AI and tech-related businesses, where an estimated $108 billion USD will be invested into this space from 2021 to 2025.
There is contention as to whether China’s approach to AI regulation is rooted in a power play or a genuine effort to curb the harms associated with the development and deployment of AI systems. One view is that China has taken note of how regulations are becoming a way to set global norms and standards. Wanting to set that precedent itself, China has been involved in some of the earliest enforcement of AI regulation in the world.
However, such a black-and-white view of China’s motivations in the AI regulatory space would be misguided. There is no doubt that China’s efforts are motivated by a desire to set global standards, but this is integrated with a multi-pronged approach that seeks to regulate AI harms and to understand, rather than just document, “high-risk” algorithms. For example, where focus has been placed on bias and transparency in other parts of the world, similar to the aims of the DSA, China is also focusing on the technical implications of digital services. It does so by attempting to delve into the complexity of recommender systems and black-box technology through its algorithm registry, giving it a head start.
With China seemingly ahead of the curve, it will be interesting to see how others may borrow from its precedent, how East-West relations on AI continue to converge, and who will set the gold standard for AI in the East.
Regardless of whether your organization operates within China, the country’s global reach signals that the forms of legislation it has introduced could be embraced elsewhere. Additionally, one fact is becoming abundantly clear: organizations must be able to track the evolving AI regulatory landscape.
Want to explore how you and your team can gain global visibility of the AI landscape to guide your organization? Schedule a call with one of our team to learn more about our AI Tracker today.
DISCLAIMER: This blog article is for informational purposes only. This blog article is not intended to, and does not, provide legal advice or a legal opinion. It is not a do-it-yourself guide to resolving legal issues or handling litigation. This blog article is not a substitute for experienced legal counsel and does not provide legal advice regarding any situation or employer.