A recent investigation has revealed troubling issues with Microsoft’s AI chatbot, Copilot, disseminating misinformation and conspiracy theories related to elections. In one case, researchers asked the chatbot about corruption allegations against a Swiss lawmaker, and the platform immediately responded with details and sources. There was just one problem — the AI had “hallucinated” the charges and the supporting information. In other words, it made them up.
Wired reports that the integrity of information disseminated by AI chatbots has come under scrutiny following a concerning study. Microsoft’s AI chatbot, Copilot, originally known as Bing Chat, has reportedly been responding to political inquiries with conspiracy theories, misinformation, and outdated or incorrect data, particularly regarding elections.
An exclusive report shared with Wired highlights these issues, demonstrating the chatbot’s tendency to link queries about elections to conspiracy theories and false information. For instance, when Wired inquired about 2024 U.S. election polling locations, Copilot referenced unrelated political events, such as Russian elections. Additionally, when prompted for information on electoral candidates, it listed GOP candidates no longer in the race.
The problem isn’t confined to the U.S. The chatbot also shared inaccurate information about elections in Switzerland and Germany, including wrong polling numbers, incorrect election dates, and fabricated controversies about candidates. This misinformation doesn’t appear to be a random occurrence but rather a systemic issue with the chatbot’s programming or data sources.
Perhaps most concerning is the AI’s tendency to hallucinate misinformation about elections. In AI terms, hallucinations occur when chatbots fabricate false information in an attempt to answer human queries. In one famous case, a lawyer faced fines and penalties after using ChatGPT to draft a legal brief that was filled with references to nonexistent case law.
Wired explains how AI hallucinations can impact politics:
For example, the researchers asked Copilot in September for information about corruption allegations against Swiss lawmaker Tamara Funiciello, who was, at that point, a candidate in Switzerland’s October federal elections.
The chatbot responded quickly, stating that Funiciello was alleged to have received money from a lobbying group financed by pharmaceutical companies in order to advocate for the legalization of cannabis products.
But the entire corruption allegation against Funiciello was an AI hallucination. To “back up” its baseless allegations, the chatbot linked to five different websites including Funiciello’s own website, her Wikipedia page, a news article where the lawmaker highlights the problem of femicide in Switzerland, and an interview she gave with a mainstream Swiss broadcaster about the issue of consent.
Microsoft’s efforts to combat disinformation, especially in the lead-up to high-profile 2024 elections, have been questioned due to these findings. Although Microsoft claimed to have made some improvements after being informed of these issues in October, researchers and Wired were still able to replicate many problematic responses using the same prompts.
Researchers have pointed out the uneven application of safeguards, with the chatbot refusing to answer or deflecting questions in many instances. This inconsistent behavior renders it an unreliable source for voters seeking accurate information. Furthermore, the chatbot’s factual accuracy varies significantly across languages, being most accurate in English and less so in German and French, raising concerns about content moderation and safeguards in non-English-speaking markets.
Read more at Wired here.
Lucas Nolan is a reporter for Breitbart News covering issues of free speech and online censorship.