China's Digital Authoritarianism: A Realist Approach
Can an Orwellian state genuinely take hold in a modern world where technology has reshaped the very fabric of society? Through a Realist lens, this piece argues that China's digital authoritarianism exemplifies how technology has become a sharpened instrument for consolidating state power.

Introduction
There are 5.52 billion internet users worldwide as of 2024, and the average person spends over six hours online each day. Global investment in artificial intelligence surpassed $184 billion in 2024 and is projected to reach $632 billion by 2028. Digital platforms have become an indispensable part of human life.
A 2022 Pew Research Center study found that digital technologies can empower individuals by breaking down information barriers, and that people in many nations see them as broadly good for democracy. Yet one-third of the world's countries are now governed by authoritarian regimes. While traditional authoritarian regimes often maintain legitimacy through patron-client networks and suppress dissent by physical means, modern autocracies have extended their control into cyberspace.
In 2023, the White House introduced the term "authoritarian counterrevolution" to describe a growing phenomenon: digital authoritarianism. It is defined as the pervasive use of technologies to surveil, repress, and manipulate populations. The Chinese Communist Party leads in developing advanced authoritarian tools, including surveillance systems, censorship architectures, facial recognition, and data-driven policing. With a staggering $1 trillion projected investment in generative AI in the coming years, it is increasingly urgent to recognize how states use AI to maintain social control and suppress dissent. Using the Realist framework, this essay argues that China's digital authoritarianism exemplifies how technology has become a sharpened instrument for consolidating state power.
Realism framework
Realism is the oldest theory in international relations, offering a pragmatic lens through which to analyze state behavior in an anarchic global system. Grounded in principles articulated by Hans Morgenthau in 1948 and later expanded by Robert Jervis, Realism emphasizes power, security, and the pursuit of national interest as the core driving forces of state action.
What drew me to applying this lens to digital authoritarianism is that it reframes what might look like domestic surveillance as a fundamentally strategic act. AI governance in authoritarian states is not primarily about technology. It is about power. The Realist framework makes that visible.
Morgenthau and the pursuit of power
In Politics among Nations, Morgenthau frames international politics as a competition for power. States act rationally to pursue their national interest, which is intrinsically tied to the accumulation and preservation of power. This explains why China's AI governance prioritizes state control over technological liberalization. AI in China is designed to serve political interests rather than market-driven growth.
China's approach reflects a deeply ingrained philosophy of centralized control. The Chinese term for artificial intelligence, 人工智能 (rén gōng zhì néng), literally "human-made intelligence," frames AI as a human-made instrument, one the CCP believes must be shaped and controlled before it matures in ways that escape governance. China's deployment of surveillance systems and censorship tools exemplifies this mindset, consolidating domestic authority in line with Morgenthau's principle that national interest is defined in terms of power.
Because Morgenthau insists that policy must arise from political analysis rather than moral abstraction, national interest must come first. China's strict regulation of AI reflects exactly this Realist calculation: the state maintains power while mitigating the perceived peril of advanced technology falling into ungoverned hands.
Jervis and the anarchic system
Jervis extends classical Realism by highlighting competition in an anarchic international system, where states prioritize sovereignty over global cooperation. Through this lens, the geopolitical implications of China's digital authoritarianism are alarming: many autocratic regimes have adopted China's AI tactics, actively challenging the liberal democratic values of transparency and accountability.
Viewed this way, China's digital authoritarianism is not merely a domestic control mechanism. It is part of a broader strategy to assert China's position in the international order: technology allows the state to reinforce its sovereignty, mitigate external threats, and project global influence.
Algorithms, censorship, and the thought police
Algorithms, alongside data and computing power, form the critical triad of AI. China's handling of recommendation algorithms demonstrates that innovation is not the core value of its governance framework. Citing national security concerns about Western influence, China's AI governance integrates censorship and surveillance, blocking over 800 websites and automating content suppression on social media to keep public discourse aligned with state ideology.
On social media platforms, AI algorithms function as thought police, monitoring and removing posts or accounts deemed to violate state-defined content rules. Sensitive topics include Tibetan independence, the Tiananmen massacre, and anything the CCP considers a threat to its legitimacy. China has also fostered domestic equivalents of major Western platforms, including Baidu, WeChat, Alipay, Weibo, and Taobao, keeping control over data that would otherwise flow through foreign-owned systems.
Algorithmic control first drew regulatory attention in 2017, when the addictive feeds of ByteDance apps such as Douyin (TikTok's domestic counterpart) were perceived as a threat to public discourse. In 2022, the Cyberspace Administration of China (CAC) introduced an algorithm registry requiring developers to explain the rationale behind their systems and submit them for approval; only a limited version of each filing is then published publicly. This is transparency as control, not as accountability.
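The registry's disclosure asymmetry can be illustrated with a minimal sketch. The field names and values below are hypothetical assumptions for illustration, not the CAC's actual filing schema:

```python
# Hypothetical sketch of an algorithm-registry filing. Field names are
# illustrative only. The point is the asymmetry: the regulator receives
# the full filing, while the public sees a limited subset.

filing = {
    "algorithm_name": "ExampleFeedRanker",
    "algorithm_type": "personalized recommendation",
    "public_rationale": "Ranks posts by predicted user interest.",
    "security_self_assessment": "regulator-only detail",
    "training_data_summary": "regulator-only detail",
}

# Only these fields are published; everything else stays with the regulator.
PUBLIC_FIELDS = {"algorithm_name", "algorithm_type", "public_rationale"}

public_record = {k: v for k, v in filing.items() if k in PUBLIC_FIELDS}
```

The gap between `filing` and `public_record` is the "transparency as control" dynamic: disclosure flows upward to the state, not outward to users.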
Generative AI and deep synthesis regulation
In 2022, China introduced the term "deep synthesis" as an alternative to deepfake and simultaneously announced a regulatory framework to govern its use, covering text, image, voice, and video generation. The 2023 generative AI regulation went further, requiring that training data be accurate, objective, and diverse and that generated content embody core socialist values. Content that discriminates on the basis of sex or race is prohibited, and providers must respect intellectual property rights in their training data.
This framework places significant pressure on tech companies, which must mount extensive compliance efforts to stay competitive in a fast-moving AI market. It also means that large language models operating in China must be ideologically aligned before they are commercially viable.
Data, surveillance, and the CCP crackdown
Data is the foundation of the AI triad. Feeding massive surveillance datasets into training data shapes what algorithms learn, and those who control the models can nudge individuals toward choices that serve their ends. The CCP grew concerned that giant tech companies like Alibaba and ByteDance hold more data than the government itself, creating a direct tension between private capital and state sovereignty.
In response, the CCP cracked down on prominent domestic companies, including Alibaba and Ant Group. The CAC ordered major tech companies to fix recommendation algorithms that create echo chambers, required platforms to let users decline personalization, and mandated that recommended content reflect core socialist values. These measures illustrate the CCP's effort to centralize data control and keep AI technologies aligned with its ideological and political objectives.
Geopolitical implications
China's digital authoritarianism has significant global implications. More than half of the world's roughly one billion surveillance cameras, approximately 700 million, are located in China. Chinese surveillance start-ups export their technology at roughly twice the rate of their US counterparts. Studies reveal that facial recognition exports from China show a strong autocratic bias: China disproportionately exports to autocratic regimes and weak democracies, particularly those experiencing domestic unrest.
China exports its digital authoritarian model to many like-minded regimes, including Zimbabwe, Iran, Saudi Arabia, Sudan, Syria, and Egypt. Developing countries are more likely to import surveillance technologies under the facade of security and public-order maintenance. Some tools, including facial recognition, AI, and IoT systems, have even been reported in hybrid regimes such as Turkey. At least 47 governments are reported to be actively adopting China's tactics.
For instance, Vietnam mandated that, as of December 25, 2024, only accounts verified by phone number can post or share content on social media. Zimbabwean citizens have experienced repeated internet shutdowns, with Facebook and WhatsApp blocked to suppress protests. Such measures reflect the diffusion of China's pattern of digital authoritarianism into the broader world.
Countering China's digital authoritarianism
De-risking China's authoritarian AI is challenging. Various punitive measures have been implemented to address human rights violations and curb the spread of authoritarianism. In 2022, the US Department of Commerce introduced new restrictions on China's access to advanced AI chips to curb its ability to develop AI-powered weapons of mass destruction and surveillance technology. In response, China invested heavily in domestic AI chip development to reduce long-term dependence on foreign supply chains.
In 2023, Ireland's Data Protection Commission fined TikTok €345 million (about $368 million) for violating children's privacy, and the UK's Information Commissioner's Office separately fined it £12.7 million (about $15.7 million) for misusing children's data. Many countries have since banned TikTok from government devices or moved toward broader bans, though these measures do not directly curb China's export of surveillance AI to authoritarian states.
International bodies have made significant efforts to limit the risk of digital authoritarianism. The Australian Strategic Policy Institute introduced a three-step framework of auditing, red teaming, and regulation to mitigate AI-enabled harm. President Biden's 2023 Executive Order established new standards for AI safety, citizen privacy, and equity. Global frameworks like the GDPR and OECD AI Principles exemplify multilateral cooperation to uphold democratic values including transparency, accountability, fairness, and privacy.
Three leading AI labs, Google DeepMind, OpenAI, and Anthropic, have also published if-then frameworks for anticipating AI risks: if a defined red line is crossed, a corresponding safeguard is triggered. The same logic can extend to governments; if a surveillance tool proves biased by disproportionately targeting specific groups, the deploying agency should stop using it. Countering China's digital authoritarianism ultimately requires coordinated effort from international policymakers, companies, and individuals.
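The if-then pattern described above can be sketched as a simple policy rule. The thresholds, group names, and audit numbers below are hypothetical assumptions for illustration, not any lab's or government's actual red lines:

```python
# Illustrative sketch of an if-then commitment for a surveillance tool:
# "if" an audit shows a bias red line is crossed, "then" a predefined
# safeguard is triggered. All numbers here are hypothetical.

def should_suspend(audit: dict[str, float], overall_fpr: float,
                   tolerance: float = 0.05) -> bool:
    """True if any group's false-positive rate exceeds the overall
    rate by more than the tolerance (the "if" clause / red line)."""
    return any(fpr > overall_fpr + tolerance for fpr in audit.values())

# Hypothetical audit results: per-group false-positive rates.
audit = {"group_a": 0.04, "group_b": 0.12, "group_c": 0.05}

if should_suspend(audit, overall_fpr=0.05):
    action = "suspend deployment pending review"  # the "then" clause
else:
    action = "continue monitoring"
print(action)  # group_b's 0.12 crosses the 0.10 red line
```

The value of the pattern is that the trigger and the response are committed to in advance, so enforcement does not depend on ad hoc judgment after harm has occurred.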
Conclusion
Realism depicts a world in which states prioritize sovereignty and pursue relative power. China's digital authoritarianism exemplifies this perspective: by exporting its governance model, China reshapes the landscape of international influence. Its strategic use of AI demonstrates the competitive nature of states in a world where technological capabilities increasingly dictate power dynamics.
AI entrenched in authoritarian regimes creates a resilient infrastructure that is harder to dismantle than traditional authoritarian structures, bringing the Orwellian state closer to reality. International bodies must coalesce around the transparent, accountable development of explainable and responsible AI to blunt this trajectory. As AI-powered authoritarianism grows, the international community must rethink how to counter not just China's AI policies but the broader global spread of digital autocracy.
A closing thought
This essay was written in November 2024, early in my thinking about AI governance and digital authoritarianism. It was also where I first realized that international relations theory could give me a much sharper vocabulary for understanding why states behave the way they do around technology. Realism was the starting point. It will not be the ending one.
References
- Statista. (2024). Number of internet users worldwide from 2010 to 2024. https://www.statista.com
- IDC. (2024). Worldwide spending on artificial intelligence forecast. https://www.idc.com/getdoc.jsp?containerId=prUS52530724
- Pew Research Center. (2022, December 6). Social media seen as mostly good for democracy across many nations, but U.S. is a major outlier. https://www.pewresearch.org/global/2022/12/06/social-media-seen-as-mostly-good-for-democracy-across-many-nations-but-u-s-is-a-major-outlier/
- World Population Review. (n.d.). What countries have authoritarian governments? https://worldpopulationreview.com/country-rankings/what-countries-have-authoritarian-government
- The White House. (2023, March 29). Fact sheet: Advancing technology for democracy at home and abroad. https://www.whitehouse.gov/briefing-room/statements-releases/2023/03/29/fact-sheet-advancing-technology-for-democracy-at-home-and-abroad/
- Polyakova, A., & Meserole, C. (2019). Exporting digital authoritarianism: The Russian and Chinese models. Brookings Institution. https://www.brookings.edu
- Scharre, P. (2023, May 4). The dangers of the global spread of China's digital authoritarianism. Center for a New American Security (CNAS). https://www.cnas.org/publications/congressional-testimony/the-dangers-of-the-global-spread-of-chinas-digital-authoritarianism
- Goldman Sachs. (2024, August 5). Will the $1 trillion of generative AI investment pay off? https://www.goldmansachs.com/insights/articles/will-the-1-trillion-of-generative-ai-investment-pay-off
- Morgenthau, H. J. (1948). Politics among nations: The struggle for power and peace. Alfred A. Knopf.
- Jervis, R. (1998). Realism in the study of world politics. International Organization, 52(4), 971–991. https://doi.org/10.1162/002081898550725
- Sheehan, M. (2023, July 10). China's AI regulations and how they get made. Carnegie Endowment for International Peace. https://carnegieendowment.org/research/2023/07/chinas-ai-regulations-and-how-they-get-made?lang=en
- Cyberspace Administration of China (CAC). (2022, January 4). Algorithms. https://www.cac.gov.cn/2022-01/04/c_1642894606364259.htm
- Cyberspace Administration of China (CAC). (2022, December 11). Deep synthesis. https://www.cac.gov.cn/2022-12/11/c_1672221949354811.htm
- Cyberspace Administration of China (CAC). (2023, April 11). Overview of draft measures on generative AI. https://www.cac.gov.cn/2023-04/11/c_1682854275475410.htm
- Nover, S. (2024, August 27). China spends big on AI. GZERO Media. https://www.gzeromedia.com/gzero-ai/china-spends-big-on-ai
- Wu, Y. (2023, July 27). How to interpret China's first effort to regulate generative AI measures. China Briefing. https://www.china-briefing.com/news/how-to-interpret-chinas-first-effort-to-regulate-generative-ai-measures/
- Binns, R. (2023, November 6). Websites banned in China. The Independent. https://www.independent.co.uk/advisor/vpn/websites-banned-in-china
- People's Daily. (2017, September 18). Strengthening ideological work in the new era. http://opinion.people.com.cn/n1/2017/0918/c1003-29540709.html
- Cyberspace Administration of China (CAC). (n.d.). National internet information service platform. https://beian.cac.gov.cn/#/index
- Sheehan, M., & Du, S. (2022, December 9). What China's algorithm registry reveals about AI governance. Carnegie Endowment for International Peace. https://carnegieendowment.org/posts/2022/12/what-chinas-algorithm-registry-reveals-about-ai-governance?lang=en
- Toner, H., et al. (2023, April 19). How will China's generative AI regulations shape the future? DigiChina, Stanford University. https://digichina.stanford.edu/work/how-will-chinas-generative-ai-regulations-shape-the-future-a-digichina-forum/
- Liang, A. (2022, August 16). China's tech giants rush to show AI can align with Communist Party values. BBC News. https://www.bbc.com/news/business-62544950
- Zhang, L. (2023, July 15). Timeline: China's 32-month Big Tech crackdown. South China Morning Post. https://www.scmp.com/tech/big-tech/article/3227753/timeline-chinas-32-month-big-tech-crackdown-killed-worlds-largest-ipo-and-wiped-out-trillions-value
- Wong, H. (2024, November 24). China sets deadline for Big Tech to clear algorithm issues. South China Morning Post. https://www.scmp.com/news/china/politics/article/3287929/china-sets-deadline-big-tech-clear-algorithm-issues-close-echo-chambers
- Qu, T. (2022, March 1). China's algorithm law takes effect to curb Big Tech's sway. South China Morning Post. https://www.scmp.com/tech/policy/article/3168816/chinas-algorithm-law-takes-effect-curb-big-techs-sway-public-opinion
- Huang, S., et al. (2023, April 12). Translation: Measures for the management of generative AI services. DigiChina. https://digichina.stanford.edu/work/translation-measures-for-the-management-of-generative-artificial-intelligence-services-draft-for-comment-april-2023/
- Statista. (n.d.). Number of surveillance cameras in major cities in China in 2022. https://www.statista.com/statistics/1456936/china-number-of-surveillance-cameras-by-city/
- Beraja, M., Yang, D. Y., & Yuchtman, N. (2024, July 24). China's AI surveillance technology and autocratization. Project Syndicate. https://www.project-syndicate.org/commentary/china-exports-ai-surveillance-technology-associated-with-autocratization-by-martin-beraja-et-al-2024-07
- Beraja, M., Kao, A., Yang, D., & Yuchtman, N. (2023, September 10). How the surveillance state is exported. VoxDev. https://voxdev.org/topic/trade/how-surveillance-state-exported-through-trade-ai
- Beraja, M., Kao, A., Yang, D. Y., & Yuchtman, N. (2023). AI-tocracy. The Quarterly Journal of Economics, 138(3), 1349–1402. https://doi.org/10.1093/qje/qjad012
- Feldstein, S. (2022, October 27). China's high-tech surveillance drives oppression of Uyghurs. Bulletin of the Atomic Scientists. https://thebulletin.org/2022/10/chinas-high-tech-surveillance-drives-oppression-of-uyghurs/
- Gallagher, R. (2019). Export laws: China is selling surveillance technology to the rest of the world. Index on Censorship, 48(3), 35–37. https://doi.org/10.1177/0306422019876445
- Yayboke, E., & Brannen, S. (2020, October 15). Promote and build: A strategic approach to digital authoritarianism. CSIS. https://www.csis.org/analysis/promote-and-build-strategic-approach-digital-authoritarianism
- Yücel, A. (2024). Hybrid digital authoritarianism in Turkey: the 'Censorship Law' and AI-generated disinformation strategy. Turkish Studies, 1–27. https://doi.org/10.1080/14683849.2024.2392816
- Buchholz, K. (2020, August 18). Origin of AI surveillance technology by country. Statista. https://www.statista.com/chart/20221/origin-of-ai-surveillance-technology-by-country/
- China Law Translate. (2017, August 29). Provisions on the management of internet post comments services. https://www.chinalawtranslate.com/en/provisions-on-the-management-of-internet-post-comments-services/
- RFA Uyghur. (2023, June 8). Uyghur student sentenced for sharing religious content on WeChat. Radio Free Asia. https://www.rfa.org/english/news/uyghur/student-sentenced-06082023154805.html
- Reuters. (2010, May 14). China allows internet access in Xinjiang 10 months after riots. https://www.reuters.com/article/technology/china-allows-internet-access-in-xinjiang-10-months-after-riots-idUSTRE64D0MB/
- Funk, A., Shahbaz, A., & Vesteinsson, K. (2023). Repressive power of artificial intelligence. Freedom House. https://freedomhouse.org/report/freedom-net/2023/repressive-power-artificial-intelligence
- Chinhphu.vn. (2024, November 25). Từ 25/12 xác thực tài khoản mạng xã hội bằng số điện thoại di động [From December 25, social media accounts to be verified by mobile phone number]. https://xaydungchinhsach.chinhphu.vn/tu-25-12-xac-thuc-tai-khoan-mang-xa-hoi-bang-so-dien-thoai-di-dong-119241112163033918.htm
- BBC News. (2019, January 18). Zimbabwe's Mnangagwa promises investigation into crackdown. https://www.bbc.co.uk/news/world-africa-46917259
- Lorenz-Spreen, P., et al. (2022). A systematic review of evidence on digital media and democracy. Nature Human Behaviour. https://doi.org/10.1038/s41562-022-01460-1
- U.S. News & World Report. (n.d.). Best countries for cheap manufacturing costs. https://www.usnews.com/news/best-countries/rankings/cheap-manufacturing-costs
- Knight, W., & Matsakis, L. (2024, November 27). Memory restrictions in China: The fight for advanced chips. Wired. https://www.wired.com/story/memory-restrictions-china-advanced-chips/
- Chan, K. (2023, September 15). TikTok faces fine over data privacy concerns in Europe. Associated Press. https://apnews.com/article/tiktok-data-privacy-europe-regulation-fine-8ebacba7646ef872fb8e85a1bcb93876
- Information Commissioner's Office (ICO). (2023, April 4). ICO fines TikTok £12.7 million for misusing children's data. https://ico.org.uk/about-the-ico/media-centre/news-and-blogs/2023/04/ico-fines-tiktok-127-million-for-misusing-children-s-data/
- Walter, J. D. (2024, April 26). Which countries have banned TikTok? DW. https://www.dw.com/en/which-countries-have-banned-tiktok/a-68930678
- Gilding, S. (2023, July). De-risking authoritarian AI. Australian Strategic Policy Institute (ASPI). https://ad-aspi.s3.ap-southeast-2.amazonaws.com/2023-07/De-risking%20authoritarian%20AI.pdf?VersionId=zJYnuXNnkbViSO1YCflscWYLaOlXMrSg
- The White House. (2023, October 30). Fact sheet: President Biden issues executive order on safe, secure, and trustworthy AI. https://www.whitehouse.gov/briefing-room/statements-releases/2023/10/30/fact-sheet-president-biden-issues-executive-order-on-safe-secure-and-trustworthy-artificial-intelligence/
- OECD. (2019, amended 2024). OECD principles on artificial intelligence (OECD/LEGAL/0449). https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0449
- Dragan, A., King, H., & Dafoe, A. (2024, May 7). Introducing the frontier safety framework. DeepMind. https://deepmind.google/discover/blog/introducing-the-frontier-safety-framework/
- OpenAI. (n.d.). Safety and alignment at OpenAI. https://openai.com/safety/
- Anthropic. (2023, September 19). Anthropic's responsible scaling policy. https://www.anthropic.com/news/anthropics-responsible-scaling-policy
- Karnofsky, H. (2024, September 13). If-then commitments for AI risk reduction. Carnegie Endowment for International Peace. https://carnegieendowment.org/research/2024/09/if-then-commitments-for-ai-risk-reduction?lang=en