In October 2022, the Biden administration issued major export controls restricting U.S. exports of advanced semiconductors and related manufacturing equipment to China and certain Chinese-linked entities. One year later, the package was updated, significantly expanding the restrictions. The Chinese government hit back by banning the export of technologies used in the mining and refining of the rare earth elements required for advanced semiconductor manufacturing, data infrastructure, and advanced military industry. The AI dispute between the U.S. and China, however, was hardly confined to exchanges over microchips and rare earth elements; it extended into competition over models of AI regulation.
Following the updated microchip restrictions in 2023, President Joe Biden issued an extensive executive order aimed at establishing a government-wide policy to harness AI’s potential while managing its risks. The policy drew upon eight guiding principles, including making AI safe and secure; protecting privacy, civil rights, and civil liberties; promoting responsible innovation, competition, and equity; and safeguarding workers, consumers, and the public interest. The order called for content-provenance mechanisms and labeling to help users know when content is AI-generated. Beyond safety, it sought to build a competitive AI ecosystem by supporting innovation, small developers, and equitable access to opportunities, while protecting workers and preventing undue concentration of power among dominant firms. The federal government was encouraged to lead by example through a pledge to build internal AI capacity, hire AI-literate public-service professionals, and ensure that government use of AI adheres to the same safety, equity, and civil-rights standards.
Here, China was proactive rather than reactive. Weeks before Biden’s AI policy framework, Chinese President Xi Jinping announced the Global AI Governance Initiative, which presented a comprehensive vision for how the international community should collectively govern artificial intelligence. Central to the Initiative is a call for deeper global cooperation to enhance information exchange, technical collaboration, and the joint development of governance frameworks, standards, and norms, with the aim of making AI “secure, reliable, controllable, and equitable.”
The Initiative proposes a series of core commitments that should shape global AI governance. First, AI development must be people-centered: advancing human well-being, supporting sustainable development, and addressing major global challenges such as climate change and biodiversity loss. Second, the Initiative emphasizes respect for national sovereignty: countries should abide by local laws when providing AI products and services abroad, and must not use AI to manipulate public opinion, spread disinformation, or interfere in other states’ internal affairs, social systems, or social stability.
A major theme is fairness and equality. All countries, regardless of size, economic strength, or social system, should have equal rights to develop and use AI. The Initiative opposes ideological blocs, exclusive technological alliances, and export controls that may undermine other countries’ development, a direct allusion to the U.S. trade restrictions and the Western alliances against China. It also advocates open-source sharing of AI knowledge and protection of the global AI supply chain from monopolistic or coercive disruptions.
The document further advocates wide participation and consensus-based decision-making in global AI governance. It calls for elevating the role and representation of developing countries and supports UN-centered discussions to establish an international AI governance institution capable of coordinating major issues related to AI development, security, and global regulation.
The AI Safety Governance Framework, issued in September 2024 by China’s National Technical Committee 260 on Cybersecurity, lays out a comprehensive, whole-of-process approach to AI governance. Structured as a standard-style blueprint, the document organizes AI safety into principles, a framework, risk classification, technical countermeasures, governance mechanisms, and guidelines for developers, providers, and users.
The framework begins by grounding AI governance in a set of overarching principles. It adopts a people-centered approach and the notion of “AI for good,” emphasizing that development and security must be balanced. It calls for a vision of common, comprehensive, cooperative, and sustainable security, and stresses that preventing and mitigating AI safety risks should be both the starting point and the ultimate goal of governance. The framework encourages innovation while demanding prudence: risks must be addressed promptly, especially when national security, the public interest, or individual rights are threatened.
The framework section outlines how risk management underpins the entire governance architecture. It highlights four major components:
First, safety and security risks must be identified based on the characteristics of AI technology and application scenarios.
Second, technical countermeasures should target risks across models, algorithms, training data, computing infrastructures, and services, including measures to improve fairness, robustness, and reliability.
Third, comprehensive governance measures must involve all stakeholders—technology researchers, service providers, users, regulators, and social organizations—in coordinated risk identification, prevention, and response.
Fourth, safety guidelines for varied groups (developers, providers, key-sector users, and general users) offer practical instructions for developing and deploying AI responsibly.
AI Safety Risks
A large portion of the document classifies AI risks comprehensively. It divides them into inherent risks stemming from models, data, and systems, and application risks arising across cyberspace, real-world contexts, cognitive domains, and ethics.
Governance Controls, Measures, and Guidelines
To mitigate both inherent and application risks, the framework proposes a series of technical controls across models, algorithms, training data, computing infrastructure, and services.
Beyond technical measures, it outlines governance mechanisms that coordinate risk identification, prevention, and response among regulators, researchers, service providers, and social organizations.
Finally, safety guidelines for developers, providers, and users ensure a multistakeholder approach to AI governance.
Conclusion
China’s governance framework makes clear that AI regulation has become a strategic instrument in its foreign policy toolkit. By promoting a consensus-based international architecture and advocating norms centered on sovereignty, equality, and multilateral cooperation, Beijing seeks to reshape the global rule-making environment in a way that dilutes U.S. influence, particularly Washington’s use of export controls and supply-chain restrictions. At the same time, China positions itself as a representative of the Global South, presenting its governance vision as an alternative to Western, industry-dominated approaches. This strategic posture is reinforced by a perceived decline in U.S. regulatory leadership, where powerful corporations exert disproportionate sway and successive administrations emphasize accelerating AI innovation over establishing robust safeguards. As a result, a vacuum in global AI governance has emerged, one that China is increasingly attempting to fill through standards, frameworks, and diplomatic initiatives.
For advocates of AI governance in the Global South, this landscape presents both opportunities and risks. China’s risk-management approach, which is comprehensive, whole-of-process, and attentive to systemic vulnerabilities, offers valuable insights that can inform responsible regulation in developing contexts. Yet these insights must be adopted critically and independently, without becoming instruments of geopolitical competition. The task for the Global South is to draw from the technical strengths of China’s framework while avoiding entanglement in major-power rivalries, asserting instead our own priorities: equitable development, digital sovereignty, and the protection of our societies from the harms and asymmetries of emerging technologies. By engaging selectively and strategically, we can leverage the best available governance ideas while preserving policy autonomy and advancing a more inclusive global AI order.