Key Points

China is deploying advanced artificial intelligence tools to spread state propaganda across global digital platforms. The techniques involve creating fake websites, generating multilingual content, and simulating realistic social media interactions. Researchers have uncovered systematic attempts to influence public opinion through AI-generated personas and narratives. The strategy poses significant challenges for democratic governments and social media platforms seeking to combat digital misinformation.


  • China uses generative AI to craft multi-language propaganda content
  • AI generates fake social media personas with sophisticated profiles
  • Disinformation campaigns target youth in developing regions
  • Advanced techniques simulate organic online engagement and discussions

China's use of AI in propaganda war triggers serious concerns

Exclusive report reveals China's sophisticated AI-driven disinformation tactics targeting global audiences through fake websites and social media manipulation

"AI tools are being leveraged to create entire fake news websites distributing Beijing-aligned narratives - The Diplomat"

New Delhi, Sep 10

China-linked information operations are increasingly using generative AI tools to refine content laundering, covertly spread state propaganda and smear campaigns, and build fake social media personas, a trend that is raising serious concern worldwide.

According to a report in The Diplomat, the use of generative AI tools to tailor content to local languages and cultural contexts, combined with a focus on youth, could leverage social media's popularity to deceptively build trust in pro-Beijing sources and influence future leaders in developing regions.

The report also highlights that in early August, two professors from Vanderbilt University in the US published an essay outlining a trove of Chinese documents linked to the private firm GoLaxy. The documents revealed that artificial intelligence (AI) was being used to generate misleading content for target audiences, such as in Hong Kong and Taiwan, and to extract information about US lawmakers, creating profiles for possible use in future espionage operations or influence campaigns.

The report states that OpenAI, Meta, and Graphika have also published several disclosures on the latest uses of AI by China-linked actors focused on foreign propaganda and disinformation. This calls for urgent attention from social media platforms, software developers, and democratic governments, it added.

According to the report, while prior China-linked disinformation campaigns had deployed AI tools to generate false personas or deepfakes, these latest disclosures point to a more concerted effort to leverage these tools for creating entire fake news websites that distribute Beijing-aligned narratives simultaneously in multiple languages. Graphika's "Falsos Amigos" report, published last month, identified a network of 11 fake websites, established between late December 2024 and March 2025, using AI-generated pictures as logos or cover images to enhance credibility.

OpenAI's threat report published in June cited the use of similar tactics, noting that now-banned ChatGPT accounts had used prompts (often in Chinese) to generate names and profile pictures for two pages posing as news outlets, as well as for individual persona accounts of US veterans critical of the Trump administration in a campaign the firm dubbed "Uncle Spam". These efforts aimed to fuel political polarisation in the United States, with AI-crafted logos and profiles amplifying the illusion of authenticity.

Another key strategy involved simulating organic engagement. OpenAI detected China-linked accounts bulk-generating social media posts, with a "main" account posting a comment followed by replies from others to mimic a discussion. The "Uncle Spam" operation generated comments from supposed American users both supporting and criticising US tariffs.

The report highlighted the case of Pakistani activist Mahrang Baloch, who has criticised China's investments in the restive region of Balochistan. Meta documented a TikTok account and Facebook page posting a false video accusing her of appearing in pornography, followed by hundreds of apparently AI-generated comments in English and Urdu to simulate responses.

- IANS

Reader Comments

Priya S
Very concerning development. AI-generated propaganda can be so convincing that ordinary people won't be able to distinguish truth from fiction. Social media platforms need to step up their detection game immediately.
Arjun K
The Mahrang Baloch case mentioned here shows how low they can go. Using AI to create fake pornographic allegations against activists? This is digital warfare and we need to treat it as such.
Sarah B
As someone working in tech, this is terrifying. The scale at which they can generate fake content in multiple languages simultaneously is unprecedented. Democratic nations need to collaborate on countermeasures.
Vikram M
We've seen similar tactics used in our region too. Remember how fake news spreads during border tensions? Now imagine that amplified by AI. Time for digital literacy to become part of our education system.
Michael C
While I agree this is concerning, let's not forget that other countries including Western nations also use propaganda tools. The difference is scale and the authoritarian nature of the Chinese government's control over information.
Ananya R
This is why we need to support and invest in Indian AI companies and technologies. Atmanirbhar Bharat is not just about economics but about national security in the digital age! 🚀
