Most AI chatbots now 'tainted,' are spreading Russian propaganda, finds study

Leading generative AI models, including OpenAI’s ChatGPT, are repeating Russian misinformation, according to a study by the news-monitoring service NewsGuard.

This revelation comes amid growing concerns about AI’s role in disseminating false information, especially during a year marked by global elections where users increasingly rely on chatbots for accurate information.

NewsGuard’s study set out to test whether AI chatbots would perpetuate and validate misinformation. Researchers fed 57 prompts into 10 different chatbots and found that the AI models repeated Russian disinformation narratives 32 per cent of the time.

The prompts used in the study focused on misinformation narratives known to be propagated by John Mark Dougan, an American fugitive reportedly spreading falsehoods from Moscow. The chatbots tested included ChatGPT-4, You.com’s Smart Assistant, Grok, Inflection, Mistral, Microsoft’s Copilot, Meta AI, Anthropic’s Claude, Google Gemini, and Perplexity.

Of the 570 responses generated by these chatbots, 152 contained explicit disinformation, 29 repeated the false claims with a disclaimer, and 389 contained no misinformation. Among the misinformation-free responses, the chatbots either refused to answer the prompt (144 responses) or debunked the false claims (245 responses).

NewsGuard highlighted that the chatbots failed to recognize propaganda sites such as the “Boston Times” and “Flagstaff Post”, treating them as legitimate sources and inadvertently amplifying disinformation narratives. This creates a problematic cycle in which falsehoods are generated, repeated, and validated by AI platforms.

The study focused on 19 significant false narratives tied to the Russian disinformation network. These included claims about corruption involving Ukrainian President Volodymyr Zelenskyy and other politically charged misinformation.

As AI technology continues to evolve, governments worldwide are striving to regulate its use to protect users from misinformation and bias. NewsGuard has submitted its findings to the US AI Safety Institute of the National Institute of Standards and Technology (NIST) and the European Commission, hoping to influence future regulatory measures.

In a related development, the United States House Committee on Oversight and Accountability has launched an investigation into NewsGuard itself, questioning its potential role in censorship campaigns.

This underscores the complex landscape of information regulation, where even watchdog organizations are under scrutiny.

The findings of NewsGuard’s study raise important questions about the reliability of AI chatbots as sources of information. As these tools become more integrated into everyday life, ensuring their accuracy and impartiality becomes crucial.

The study suggests that without proper safeguards, AI models could inadvertently contribute to the spread of misinformation, highlighting the need for ongoing oversight and refinement of these technologies.
