
Learning from Risks: Insights from China’s Whole-Process AI Governance Model

In October 2022, the Biden administration issued major export controls restricting U.S. exports of advanced semiconductors and related manufacturing equipment to China and certain Chinese-linked entities. One year later, the package was updated, significantly expanding the restrictions. The Chinese government hit back with a ban on the export of technologies used to mine and refine the rare earth elements required for advanced semiconductor manufacturing, data infrastructure, and the advanced military industry. The AI squabble between the U.S. and China, however, was hardly confined to exchanges over microchips and rare earths; it extended into a competition over models of AI regulation.

Following the updated microchip restrictions in 2023, President Joe Biden issued an extensive executive order aimed at a government-wide policy to harness AI’s potential while managing its risks. The policy drew on eight guiding principles, including making AI safe and secure; protecting privacy, civil rights, and civil liberties; promoting responsible innovation, competition, and equity; and safeguarding workers, consumers, and the public interest. The order called for content-provenance mechanisms and labeling to help users know when content is AI-generated. Beyond safety, it sought to build a competitive AI ecosystem by supporting innovation, small developers, and equitable access to opportunities, while protecting workers and preventing undue concentration of power among dominant firms. The federal government was encouraged to lead by example through a pledge to build internal AI capacity, hire AI-literate public-service professionals, and ensure that government use of AI adheres to the same safety, equity, and civil-rights standards.
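
For illustration, a content-provenance label of the kind the order envisions can be as simple as a manifest attached to each generated artifact. The Python sketch below is a minimal, hypothetical example (the field names and the make_provenance_manifest function are our own; real standards such as C2PA additionally bind the manifest to the content with a cryptographic signature):

```python
import hashlib
import json
from datetime import datetime, timezone

def make_provenance_manifest(content: bytes, generator: str) -> dict:
    """Attach a minimal provenance record to a piece of generated content.

    Illustrative only: production provenance systems also sign the manifest
    so that tampering with either the content or the label is detectable.
    """
    return {
        "content_sha256": hashlib.sha256(content).hexdigest(),
        "ai_generated": True,                    # the user-facing label
        "generator": generator,                  # e.g. model or service name
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

if __name__ == "__main__":
    manifest = make_provenance_manifest(b"example AI-generated text", "demo-model-v1")
    print(json.dumps(manifest, indent=2))
```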

Here, China was proactive rather than reactive. Weeks before Biden’s AI policy framework, Chinese President Xi Jinping announced the Global AI Governance Initiative, which presented a comprehensive vision for how the international community should collectively govern artificial intelligence. Central to the Initiative is a call for deeper global cooperation to enhance information exchange, technical collaboration, and joint development of governance frameworks, standards, and norms, with the aim of making AI “secure, reliable, controllable, and equitable.”

The Initiative proposed a series of core commitments that should shape global AI governance. First, AI development must be people-centered: advancing human well-being, supporting sustainable development, and addressing major global challenges such as climate change and biodiversity loss. Second, the Initiative emphasizes respect for national sovereignty: countries should abide by local laws when providing AI products and services abroad and must not use AI to manipulate public opinion, spread disinformation, or interfere in other states’ internal affairs, social systems, or social stability.

A major theme is fairness and equality. All countries, regardless of size, economic strength, or social system, should have equal rights to develop and use AI. The Initiative opposes ideological blocs, exclusive technological alliances, and export controls that undermine other countries’ development, a direct allusion to U.S. trade restrictions and Western alliances against China. It also advocates open-source sharing of AI knowledge and protection of the global AI supply chain from monopolistic or coercive disruptions.

The document further advocates wide participation and consensus-based decision-making in global AI governance. It calls for elevating the role and representation of developing countries and supports UN-centered discussions to establish an international AI governance institution capable of coordinating major issues related to AI development, security, and global regulation.

The AI Safety Governance Framework, issued in September 2024 by China’s National Technical Committee 260 on Cybersecurity, lays out a comprehensive, whole-of-process approach to AI governance. Structured as a standard-style blueprint, the document organizes AI safety into principles, a framework, risk classification, technical countermeasures, governance mechanisms, and guidelines for developers, providers, and users.

The framework begins by grounding AI governance in a set of overarching principles. It adopts a people-centered approach and the notion of “AI for good,” emphasizing that development and security must be balanced. It calls for a vision of common, comprehensive, cooperative, and sustainable security, and stresses that preventing and mitigating AI safety risks should be both the starting point and the ultimate goal of governance. The framework encourages innovation while demanding prudence: risks must be addressed promptly, especially when national security, public interest, or individual rights are threatened.

The framework section outlines how risk management underpins the entire governance architecture. It highlights four major components:

First, safety and security risks must be identified based on the characteristics of AI technology and application scenarios.

Second, technical countermeasures should target risks across models, algorithms, training data, computing infrastructures, and services, including measures to improve fairness, robustness, and reliability.

Third, comprehensive governance measures must involve all stakeholders—technology researchers, service providers, users, regulators, and social organizations—in coordinated risk identification, prevention, and response.

Fourth, safety guidelines for varied groups (developers, providers, key-sector users, and general users) offer practical instructions for developing and deploying AI responsibly.


AI Safety Risks

A large portion of the document classifies AI risks comprehensively. It divides them into inherent risks, stemming from models, data, and systems, and risks arising in AI applications across cyberspace, real-world contexts, cognitive domains, and ethics:

  1. Inherent Risks

  Model-related risks:
  • Low explainability due to the black-box nature of deep learning.
  • Bias and discrimination arising from poor-quality or unrepresentative data.
  • Fragility and poor robustness under changing environments.
  • Malicious interference, including model stealing, tampering (modifying the model), inversion attacks (reconstructing sensitive training data), and backdoor insertion (planting hidden malicious behavior in the model).
  • Hallucinated outputs that misrepresent facts.

  Data-related risks:
  • Illegal or non-consensual data collection and misuse during training or service interaction.
  • Biased or poisoned content that propagates errors into outputs.
  • Unregulated or low-quality annotation that reduces model accuracy and generalization while introducing bias.
  • Data leakage through improper processing, unauthorized access, or malicious attacks.

  System-level risks:
  • Defects and backdoors in APIs, toolkits, libraries, and execution platforms used at various stages of AI development.
  • Computing infrastructure vulnerable to resource-exhaustion attacks or cross-boundary transmission of threats.
  • Supply-chain risks arising from the geopolitical vulnerabilities of AI components, such as chips, software, and data resources, and from unilateral export restrictions that disrupt access to critical technologies.

  2. Risks in AI Applications

  Cyberspace risks:
  • AI-generated content can spread misinformation, biased narratives, or illegal content, threatening individual rights, national security, and public order.
  • Users may be misled if AI outputs are not clearly labeled, or if synthetic media bypass authentication systems such as facial or voice recognition.
  • Improper use can expose sensitive government or corporate data. AI can facilitate cyberattacks by automating vulnerability discovery, malware generation, or phishing. Moreover, security flaws in upstream foundation models can propagate to downstream models through fine-tuning.

  Real-world risks:
  • AI systems used in finance, energy, transportation, or healthcare may produce hallucinations or incorrect decisions that threaten personal safety and social stability.
  • AI can enable illegal or criminal activities, including terrorism, violence, gambling, and drug trafficking, by generating tools or instructional content.
  • Dual-use technologies may lower the barrier to designing chemical, biological, or nuclear weapons, or to creating cyber weapons.

  Cognitive risks:
  • AI-driven content personalization may intensify “information cocoons” (filter bubbles), shaping user behavior and public consciousness.
  • AI can be used to generate and disseminate fake news or extremist content, interfere in other countries’ internal affairs, manipulate public opinion, or support “cognitive warfare” operations.
  • Social media bots may gain agenda-setting power, influencing collective perception.

  Ethical risks:
  • AI can exacerbate discrimination by profiling individuals based on behavior, socioeconomic status, demographics, or other traits, potentially deepening structural inequalities and widening the intelligence divide among regions.
  • AI’s role in reshaping industries may disrupt traditional social norms and employment patterns.
  • The framework even acknowledges long-term risks of AI pursuing autonomous goals, self-replicating, or gaining self-awareness, potentially challenging human control (agentic misalignment).
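
To make the shape of this classification easier to scan, the same taxonomy can be restated as a nested data structure. The Python sketch below is our own compact summary of the document’s categories, not an official schema; the identifiers are illustrative:

```python
# A compact restatement of the framework's risk taxonomy. The labels follow
# the document; the dictionary structure itself is an illustrative choice.
AI_SAFETY_RISKS = {
    "inherent": {
        "model": ["low explainability", "bias and discrimination",
                  "poor robustness", "malicious interference", "hallucination"],
        "data": ["illegal/non-consensual collection", "poisoned or biased content",
                 "low-quality annotation", "data leakage"],
        "system": ["defects and backdoors in tooling", "infrastructure attacks",
                   "supply-chain disruption"],
    },
    "application": {
        "cyberspace": ["misinformation", "unlabeled synthetic media",
                       "data exposure", "AI-assisted cyberattacks"],
        "real_world": ["unsafe decisions in critical sectors",
                       "enabling criminal activity", "dual-use weapons risks"],
        "cognitive": ["information cocoons", "opinion manipulation",
                      "bot-driven agenda setting"],
        "ethical": ["discrimination and profiling", "labor disruption",
                    "loss of human control"],
    },
}

def list_risks(domain: str, category: str) -> list[str]:
    """Look up the named risks for one branch of the taxonomy."""
    return AI_SAFETY_RISKS[domain][category]

print(list_risks("application", "cognitive"))
```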


Governance controls, measures and guidelines

To mitigate inherent and application risks, the framework proposes several technical controls:

  • For model-related risks, developers should improve explainability and predictability, adopt secure development processes, eliminate security flaws and discriminatory tendencies, and strengthen robustness testing.
  • For data risks, developers must comply with rules on personal information, data security, intellectual property rights, and cross-border data transfer; ensure diversity, legitimacy, and accuracy in training data; and filter sensitive or harmful content (a toy filtering sketch follows this list).
  • System-level measures include disclosing model capabilities and risks, labeling outputs, enhancing platform-level risk identification, ensuring operational continuity of computing infrastructures, and tracking vulnerabilities across supply chains.
  • Measures for application risks cover protecting models from interference, ensuring compliance when handling sensitive data, restricting AI functionality in high-risk scenarios, ensuring traceability of end use, detecting harmful or inaccurate outputs, and developing AIGC (AI-generated content) detection technologies.
  • Ethical safeguards require filtering training data and outputs to prevent discrimination and equipping key-sector AI with emergency management capabilities.
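
As a toy illustration of the data-filtering measure above, the sketch below drops training records that match a denylist pattern before they reach a model. It is deliberately minimal and hypothetical: production pipelines rely on trained classifiers and policy-specific taxonomies rather than hand-written patterns.

```python
import re

# Placeholder denylist (here, a pattern for leaked U.S.-style ID numbers).
# Real systems would use trained classifiers, not a hand-written list.
BLOCKED_PATTERNS = [
    re.compile(r"\bssn:\s*\d{3}-\d{2}-\d{4}\b", re.IGNORECASE),
]

def filter_training_records(records: list[str]) -> list[str]:
    """Drop any record that matches a blocked pattern before training."""
    return [r for r in records
            if not any(p.search(r) for p in BLOCKED_PATTERNS)]

sample = ["a normal sentence", "leaked ssn: 123-45-6789"]
print(filter_training_records(sample))  # -> ['a normal sentence']
```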

Beyond technical measures, the framework outlines governance structures. These include:

  • Tiered, category-based management of AI systems (illustrated in the sketch after this list)
  • Testing and registration requirements for systems exceeding certain thresholds
  • Traceability mechanisms through digital certificates and labeling standards
  • Improved data security and personal information protection regulations
  • A responsible AI R&D and application system aligned with ethical norms
  • Measures to enhance supply-chain security and open-source collaboration
  • Research into explainability and error-correction mechanisms
  • Information-sharing and emergency response systems for AI security incidents
  • Expanded training of AI safety talent
  • Development of mechanisms for AI safety education, industry self-regulation, and social supervision
  • Promotion of international cooperation through the UN, APEC, G20, BRICS, and Belt and Road partnerships.
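
To suggest what tiered, category-based management could look like operationally, the sketch below assigns a governance tier from sector criticality and training scale. It is entirely illustrative: the framework publishes no numeric thresholds, so the compute threshold, tier names, and sector list here are invented for the example.

```python
from dataclasses import dataclass

# Hypothetical values for illustration only; the framework itself does not
# specify concrete sectors, thresholds, or tier names.
KEY_SECTORS = {"finance", "energy", "transportation", "healthcare"}
REGISTRATION_THRESHOLD_FLOPS = 1e25  # invented training-compute cutoff

@dataclass
class AISystem:
    name: str
    sector: str
    training_flops: float

def governance_tier(system: AISystem) -> str:
    """Assign a management tier from scale and sector criticality."""
    if system.training_flops >= REGISTRATION_THRESHOLD_FLOPS:
        return "registration-and-testing"  # exceeds the reporting threshold
    if system.sector in KEY_SECTORS:
        return "key-sector oversight"
    return "general"

print(governance_tier(AISystem("credit-scoring", "finance", 1e22)))
# -> 'key-sector oversight'
```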

Safety guidelines are provided for developers, providers and users to ensure a multistakeholder approach to AI governance, including:

  • Developers are instructed to follow ethical principles, strengthen data security and IPR protection, secure training environments, assess biases, manage product versions, conduct comprehensive testing, and produce detailed reports.
  • Providers must disclose system capabilities and limitations, inform users of risks, support informed decision-making, monitor real-time risks, report incidents, and enforce rules against misuse.
  • Key-sector users must perform risk assessments, maintain human oversight, use secure authentication, and ensure confidentiality and data protection.
  • General users are encouraged to understand product limitations, protect personal information, avoid unnecessary disclosure, and be aware of cybersecurity and addiction risks.


Conclusion

China’s governance framework makes clear that AI regulation has become a strategic instrument in its foreign policy toolkit. By promoting a consensus-based international architecture and advocating norms centered on sovereignty, equality, and multilateral cooperation, Beijing seeks to reshape the global rule-making environment in a way that dilutes U.S. influence, particularly Washington’s use of export controls and supply-chain restrictions. At the same time, China positions itself as a representative of the Global South, presenting its governance vision as an alternative to Western, industry-dominated approaches. This strategic posture is reinforced by a perceived decline in U.S. regulatory leadership, where powerful corporations exert disproportionate sway and successive administrations emphasize accelerating AI innovation over establishing robust safeguards. As a result, a vacuum in global AI governance has emerged, one that China is increasingly attempting to fill through standards, frameworks, and diplomatic initiatives.

For advocates of AI governance in the Global South, this landscape presents both opportunities and risks. China’s risk-management approach (comprehensive, whole-of-process, and attentive to systemic vulnerabilities) offers valuable insights that can inform responsible regulation in developing contexts. Yet these insights must be adopted critically and independently, without becoming instruments of geopolitical competition. The task for the Global South is to draw on the technical strengths of China’s framework while avoiding entanglement in major-power rivalries, asserting instead our own priorities: equitable development, digital sovereignty, and the protection of our societies from the harms and asymmetries of emerging technologies. By engaging selectively and strategically, we can leverage the best available governance ideas while preserving policy autonomy and advancing a more inclusive global AI order.