China is likely to rely on artificial intelligence-generated disinformation content, such as deep fake video and deep voice audio, as part of its psychological and public opinion warfare across the world, a new study by the United States-based think tank Atlantic Council says.
The Atlantic Council’s Digital Forensic Research Lab (DFRLab) has published a new study analysing Chinese disinformation campaigns and recent trends, which suggest that, despite considerable success with domestic audiences, the Chinese Communist Party (CCP) struggles to drive its message home abroad.
The study notes that so far, Chinese disinformation operations on Western social media platforms have been ineffective due to flawed implementation, including the outsourcing of operations to third parties.
Platforms such as Twitter, Facebook and Google have been able to identify Chinese campaigns and take timely action in the past. But now, experts at DFRLab assess that “AI will be used to employ effective, large-scale disinformation campaigns and to covertly run authentic Western social media accounts”.
The study comes amid a tense standoff between India and China along the Line of Actual Control. Even US Secretary of State Mike Pompeo recently addressed the “threat” posed by China in the region.
Discourse power and information warfare
The Atlantic Council study observes a continuing shift in Beijing’s foreign policy away from its earlier stance of “non-intervention” in the affairs of other countries. China now looks to exercise “discourse power” – the idea that a country can gain geopolitical power by setting agendas internationally and influencing political order and values, both at home and in other countries, in order to project its “peaceful” rise as a global superpower.
The CCP has been using the information space, both domestically and internationally, to project the “China Story”. The study’s author, Alicia Fawcett, describes this as projecting a positive image of China through storytelling in the media landscape, both at home and abroad.
Information-perception tactics – the removal, suppression and downplaying of negative information, as well as the gamification of certain hashtags – are tools with which China intends to convince foreign audiences that it is a “responsible world leader” and a leading power in reforming the international political system. Today’s Internet-driven global information space offers Beijing an effective way to spread the “China Story” across the globe.
Chinese discourse power architecture. (Credit: Alicia Fawcett, DFRLab)
The Chinese government apparatus has been busy with large-scale operations producing and reproducing false or misleading information with the intent to deceive. The content typically exploits psychological biases, provoking ethnic, racial or cultural affiliations within its target audience, and aims to implant “paranoia and cognitive blind spots”.
“China sees disinformation operations as an effective strategy for its government to achieve foreign policy objectives,” Fawcett says, pointing out that the People’s Liberation Army (PLA), the State Council and the CCP’s Central Committee all take part in organised information operations on domestic and international platforms.
Deep Fakes: Weapon of mass division
Deep fakes are synthetic media in which a person’s voice or appearance is manipulated to attribute to them words they never said or acts they never performed. AI-driven tools make such manipulated media nearly flawless and easy to create, deceiving viewers into believing events that never took place.
Deep voice media, on the other hand, rely on cloning a person’s voice using machine learning; the clone can then be used to generate entirely new speech in that person’s voice without their consent or knowledge. The use of deep fakes and deep voice content on Chinese social media is not unprecedented, and these tools figure prominently in Beijing’s cyber-warfare toolkit.
The study observes the extensive groundwork laid by popular Chinese apps and big tech firms such as TikTok, Baidu and Zao. It also mentions Baidu’s recent Deep Voice project, which can clone a voice in seconds. China could use these tools to create AI-driven deep fakes at scale and deploy them as part of the CCP’s information operations.
Several governments, including many US states, have sensed the threat and enacted laws against possible misuse of deep fakes. In India, there are no specific laws governing deep fakes as yet.
Beijing’s appetite for big data as a cyber and psychological warfare tool is no secret. The PLA’s own academic journal, Military Correspondent, had earlier published a commentary suggesting the military make use of AI-driven bot networks.
The CCP’s interest in big data appears to be heading towards the analysis, detection, determination and handling of mass public sentiment. An article by China’s Strategic Support Force (SSF) Base 311, the unit in charge of the CCP’s psychological warfare, had earlier stressed the need for a “voice information synthesis technology” meant to identify a user’s emotional sentiment and then conduct subliminal messaging.
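To make the concept concrete: the sentiment-detection step described above can be illustrated with a deliberately simple sketch. The word lists, function names and scoring rule below are hypothetical teaching aids, not anything drawn from the SSF article or the DFRLab study; real systems of this kind would rely on trained machine-learning models rather than keyword counts.

```python
# Toy illustration of keyword-based sentiment scoring.
# All word lists and thresholds here are invented for demonstration;
# this is NOT a reconstruction of any actual system.

POSITIVE = {"great", "hope", "proud", "win", "strong"}
NEGATIVE = {"fear", "angry", "lose", "weak", "crisis"}

def sentiment_score(text: str) -> int:
    """Return a crude score: positive word hits minus negative word hits."""
    words = {w.strip(".,!?").lower() for w in text.split()}
    return len(words & POSITIVE) - len(words & NEGATIVE)

def classify(text: str) -> str:
    """Label a message as positive, negative or neutral by its score."""
    score = sentiment_score(text)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"
```

Even this crude approach shows why sentiment detection scales so easily: once messages can be labelled automatically, audiences can be sorted by emotional tone and targeted accordingly.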