AI-tocracy: How Automated Censorship Works in China
DOI: https://doi.org/10.51685/jqd.2025.015

Keywords: Censorship, Information Control, Text Analysis, Artificial Intelligence, Social Media, Authoritarian Resilience

Abstract
State supervision of ideas and information circulated among the public has a long history. While a substantial body of literature examines the government’s motives for censorship, scholarly assessments of evolving censorship strategies in the new era of artificial intelligence (AI) remain relatively scarce. This paper analyzes an automated censorship system, developed and commercialized by a leading Chinese internet company, to study its content review logic. Using multiple real-world datasets, we assess: (1) the concordance between conventional human-led and automated censorship decisions; (2) the disruptive effect of keyword evasion on the system’s efficacy; and (3) the system’s varied responses to collective action and other political threats. Despite a notable gap between human-led and automated censorship decisions, we demonstrate that the system’s primary capability lies not in perfectly mimicking human censors, but in conducting large-scale user profiling and information categorization, which complements other information control tactics in China.
License
Copyright (c) 2025 Yuchen Cao, Zhaozhi Li, Jiahua Yue

This work is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License.


