
AI Chatbots Weaponized: Russian Propaganda Leaks Through Language Models

Tags: Artificial Intelligence, ChatGPT, Propaganda, Russian Disinformation, Ukraine War, Large Language Models, Sanctions
October 27, 2025
Source: Wired AI
Viqus Verdict: 9
Data Drift – A Critical Oversight
Media Hype: 8/10
Real Impact: 9/10

Article Summary

A recent report from the Institute for Strategic Dialogue (ISD) has uncovered a concerning trend: major AI chatbots, including OpenAI's ChatGPT and Google's Gemini, are being exploited to disseminate Russian state propaganda. When researchers posed questions about the war in Ukraine, particularly regarding NATO, peace talks, Ukrainian refugees, and war crimes, the chatbots frequently cited sources linked to Russian state media, intelligence agencies, and disinformation networks. Approximately one in five queries across four prominent chatbots (ChatGPT, Gemini, DeepSeek, and xAI's Grok) returned content from these sources, and malicious or biased queries drew the highest levels of pro-Russian content, a confirmation-bias effect in which the chatbots mirror the slant of the question.

The report attributes the problem to 'data voids': searches on fast-moving topics return few results from legitimate sources, leaving gaps that Russian actors can fill with biased narratives. Because these chatbots retrieve data from the web in real time, they are directly exposed to this kind of manipulation. The report also ties the findings to known disinformation operations such as the 'Pravda' network, which aims to flood the web with propaganda precisely in order to influence AI outputs.

OpenAI spokesperson Kate Waters emphasized that the company is taking steps to prevent the spread of misinformation, but the findings raise serious questions about the security and reliability of these powerful AI tools. They underscore the need for robust safeguards and ethical scrutiny in the development and deployment of large language models, particularly in sensitive geopolitical contexts.
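The mechanism is easiest to see in a retrieval-augmented pipeline: the chatbot issues a live web search, and whatever domains dominate the results become candidate citations, so a data void filled with state-attributed sites flows straight into the answer. As a minimal, hypothetical sketch of one safeguard (the domain list, the `search_web`-style stub, and the filtering policy below are illustrative assumptions, not the actual defenses of any vendor named in the report), retrieved results can be screened against a blocklist of state-attributed outlets before they ever reach the model:

```python
# Hypothetical sketch: screen retrieved search results against a blocklist
# of state-attributed domains before passing them to an LLM as citations.
# The blocklist entries and the stubbed results are illustrative only.
from urllib.parse import urlparse

# Placeholder entries; a real system would maintain a vetted, regularly
# updated list (e.g., drawn from sanctions designations and media research).
STATE_ATTRIBUTED_DOMAINS = {
    "example-state-outlet.ru",
    "example-mirror-network.com",
}

def is_state_attributed(url: str) -> bool:
    """True if the URL's host matches or is a subdomain of a blocklisted domain."""
    host = urlparse(url).hostname or ""
    return any(host == d or host.endswith("." + d) for d in STATE_ATTRIBUTED_DOMAINS)

def filter_results(results: list[dict]) -> list[dict]:
    """Drop results from state-attributed domains; the rest become candidate citations."""
    return [r for r in results if not is_state_attributed(r["url"])]

# Stubbed search results standing in for a live web-search call.
raw_results = [
    {"title": "Frontline report", "url": "https://example-state-outlet.ru/a"},
    {"title": "Wire story", "url": "https://news.example.org/b"},
]

print(filter_results(raw_results))
# [{'title': 'Wire story', 'url': 'https://news.example.org/b'}]
```

A static blocklist alone cannot close a data void: if operations like 'Pravda' spin up mirror domains faster than lists are updated, filtered-out propaganda simply reappears under new hosts, which is presumably why the report emphasizes broader sourcing safeguards rather than any single filter.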

Key Points

  • AI chatbots such as ChatGPT and Gemini are inadvertently relaying Russian state propaganda to users because they rely on live internet search results.
  • Approximately 20% of queries about the war in Ukraine cited Russian state-attributed sources, and biased or malicious prompts drew the most pro-Russian content, a clear confirmation-bias effect.
  • The vulnerability stems from the chatbots' real-time data collection and their susceptibility to manipulation through 'data voids', topics on which legitimate sources are scarce.

Why It Matters

This finding matters for several reasons. First, it exposes a significant security vulnerability in a rapidly growing technology: AI chatbots can be weaponized to influence public opinion and distort information. Second, it demonstrates how adversarial actors can leverage seemingly benign technologies for geopolitical advantage. Finally, it forces a serious re-evaluation of the ethical considerations around AI development and deployment, particularly data sourcing and bias mitigation. Professionals in security, intelligence, and public policy need to understand this vulnerability in order to anticipate and mitigate the risks.
