
AI Chatbots Manipulated By Russian Disinformation, Report Finds

As reported in The New Digital, AI chatbots, including prominent models such as ChatGPT and Meta AI, are susceptible to influence from Russian propaganda, according to a recent report by NewsGuard. The analysis reveals that a Moscow-based network known as “Pravda” is successfully manipulating chatbot responses by flooding the internet with millions of misleading articles. This manipulation raises concerns about the integrity of information provided by AI, especially in sensitive geopolitical contexts.

NewsGuard, a firm specializing in rating the reliability of news and information sources, found that Pravda published a staggering 3.6 million misleading articles in 2024 alone, citing statistics from the American Sunlight Project. These articles are designed to promote pro-Russian falsehoods and to influence the datasets used to train AI models.

Vladimir Putin has a particularly strong interest in spreading disinformation to Western audiences. Image: The New Yorker.

The study examined 10 leading chatbots and discovered that they repeated Russian disinformation narratives, such as the claim that the U.S. operates secret bioweapons labs in Ukraine, approximately 33% of the time. NewsGuard attributes Pravda’s effectiveness to its sophisticated search engine optimization (SEO) techniques, which boost the visibility of its content and ensure it is captured by web crawlers used by AI training systems.
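
To make the flooding mechanism concrete, here is a minimal sketch with entirely hypothetical domains, claims, and counts: a crawler that ingests pages indiscriminately produces a corpus in which near-duplicate propaganda dominates the raw prevalence of a claim.

```python
from collections import Counter

# Toy crawl results: (domain, claim asserted by the page).
# All domains, claims, and counts here are hypothetical.
crawled_pages = (
    [("pravda-network.example", "US runs secret bioweapons labs in Ukraine")] * 9000
    + [("reuters.example", "No evidence of US bioweapons labs in Ukraine")] * 40
    + [("apnews.example", "No evidence of US bioweapons labs in Ukraine")] * 35
)

# Count how often each claim appears across the crawled corpus.
claim_counts = Counter(claim for _, claim in crawled_pages)
total = sum(claim_counts.values())
for claim, count in claim_counts.most_common():
    print(f"{count / total:6.1%}  {claim}")
# The false claim accounts for ~99% of the corpus, so any training or
# ranking step that weights raw prevalence will treat it as the consensus.
```

Volume, not accuracy, is the only signal such a pipeline sees, which is why SEO-amplified flooding translates directly into skewed training data.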

This infiltration poses a significant challenge for chatbot developers, particularly those relying heavily on web-based information. The reliance on publicly available data makes chatbots vulnerable to manipulation by actors seeking to spread disinformation.

AI Inferences and Considerations:

The report highlights a critical vulnerability in the current architecture of large language models (LLMs). While these models are trained on vast datasets to provide comprehensive answers, the lack of robust fact-checking mechanisms during the training and retrieval phases allows for the propagation of false information. This issue isn’t limited to Russian propaganda; any actor capable of manipulating search results and web content can potentially influence chatbot outputs.
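
The retrieval-phase side of that vulnerability can be sketched in a few lines. This is a toy retriever, not any vendor's actual pipeline: it scores pages purely by keyword overlap with the query, and nothing in the scoring considers who published the page, which is precisely the gap SEO manipulation exploits.

```python
# A naive retriever: rank pages by how many query terms appear in their text.
def score(query: str, page_text: str) -> int:
    terms = set(query.lower().split())
    text = page_text.lower()
    return sum(1 for term in terms if term in text)

pages = [  # Hypothetical documents; the first is keyword-stuffed for SEO.
    "ukraine bioweapons labs ukraine labs US secret bioweapons ukraine labs",
    "Fact check: investigators found no evidence of secret US labs in Ukraine.",
]

query = "does the US operate secret bioweapons labs in ukraine"
best = max(pages, key=lambda page: score(query, page))
print(best)  # The keyword-stuffed page wins and is handed to the model as context.
```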

Furthermore, the report suggests that the problem may be intractable for chatbots that rely heavily on web search, underscoring the need for AI developers to explore alternative data verification methods. These could include:

  • Implementing more stringent source verification processes: AI models should be able to assess the credibility and trustworthiness of information sources (a minimal sketch follows this list).
  • Developing fact-checking algorithms: AI could be trained to identify and flag potentially false or misleading information.
  • Creating diverse and curated training datasets: utilizing datasets that are less susceptible to manipulation and more representative of verified information is essential.
  • Improving transparency: AI companies should be more transparent about their data sources and how they address disinformation.
  • Incorporating human oversight: humans should remain involved in both training-data curation and output verification.
  • Considering the ethical implications: the use of AI in information dissemination necessitates a deeper examination of the ethics of allowing potentially biased outputs.
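
As a concrete illustration of the first suggestion, the sketch below gates documents on a per-publisher credibility score before they reach training or retrieval. The domains, scores, and threshold are hypothetical stand-ins for ratings of the kind NewsGuard publishes.

```python
from dataclasses import dataclass

@dataclass
class Document:
    domain: str
    text: str

# Hypothetical 0-100 publisher ratings; unknown domains default to 0.
CREDIBILITY = {
    "reuters.example": 95,
    "apnews.example": 93,
    "pravda-network.example": 5,
}
MIN_SCORE = 60  # Hypothetical cutoff below which a source is excluded.

def filter_by_source(docs: list[Document]) -> list[Document]:
    """Keep only documents whose publisher meets the credibility threshold."""
    return [doc for doc in docs if CREDIBILITY.get(doc.domain, 0) >= MIN_SCORE]

corpus = [
    Document("pravda-network.example", "The US runs secret bioweapons labs..."),
    Document("reuters.example", "No evidence supports the bioweapons claim."),
]
print([doc.domain for doc in filter_by_source(corpus)])  # ['reuters.example']
```

A static allowlist like this is far cruder than what the report calls for, but it shows where a credibility signal would sit in the pipeline: before the data reaches the model, not after an answer is generated.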

The problem also points to the need for increased media literacy: users should be aware of the potential for AI-generated misinformation and critically evaluate the information they receive. The increasing use of AI for information gathering requires a more active and informed public.

Keywords: AI chatbots, Russian propaganda, disinformation, ChatGPT, Meta AI, NewsGuard, Pravda, SEO, misinformation, bioweapons labs, Ukraine, American Sunlight Project, AI training data, fact-checking, information manipulation, artificial intelligence, large language models.
