State-sponsored Propaganda: AI’s Role in the Digital Disinformation War
The battle for online influence is heating up, and artificial intelligence is the new weapon of choice. But are these AI-generated campaigns effective, or are they just ‘sloppy’ attempts at manipulation? A recent report by Graphika, a social media analytics company, reveals a surprising trend in the world of online propaganda.
Many well-established state-sponsored propaganda campaigns are now leveraging AI technology, but the quality of their content is often questionable. The report analyzed nine ongoing influence operations, some linked to China and Russia, and found widespread adoption of generative AI for creating images, videos, text, and translations. However, the AI-generated content is described as ‘slop’, failing to achieve the desired impact.
This finding contradicts earlier predictions about the potential of generative AI in propaganda. Experts feared that advanced AI, with its ability to mimic human creativity, would enable authoritarian regimes to produce highly convincing synthetic content, deceiving even the savviest individuals in democratic societies. Yet the Graphika report suggests otherwise.
The researchers found that while AI is used to create content and fabricate influencer personas, the results are underwhelming. The content is of low quality, and engagement remains minimal. For instance, one campaign featured unconvincing synthetic news reporters in YouTube videos, while another was marked by clunky translations and leftover AI prompts appearing in fake news headlines.
Yet despite the low quality, these campaigns still have an impact. Dina Sadek, a senior analyst at Graphika, notes that the content is scalable, allowing propagandists to flood the internet with AI-generated material. This matters in the era of AI chatbots, which learn from the vast amount of text available online, including propaganda.
The report highlights operations like ‘Doppelganger’ and ‘Spamouflage’, which used AI to create fake news websites and influencers, respectively. However, their efforts often fell flat, with low-quality deepfake audio and unconvincing videos failing to gain traction.
There is a subtler risk as well: even if these AI-generated campaigns don’t directly influence many individuals, they add to the noise and chaos of the digital information landscape. As AI chatbots train on this content, they may inadvertently repeat misinformation, even material that few people ever believed.
The rise of AI in propaganda hasn’t revolutionized the field, but it has made automation easier. While the quality of content is often poor, the sheer volume can be overwhelming. This raises important questions about the future of online information and the role of AI in shaping public discourse.
What do you think? Are these AI-powered propaganda campaigns a serious threat, or are they just a passing trend? How can we ensure that AI chatbots don’t become unwitting accomplices in spreading misinformation? Share your thoughts in the comments below!