Hey Chatbot, Is It True That Your Fact-Checks Plant Misinformation?

Artificial intelligence was supposed to help fight fake news. However, when users turned to chatbots during the recent India-Pakistan conflict, they didn’t receive clarity; they received misinformation.

Chatbots like OpenAI’s ChatGPT, Google’s Gemini, and xAI’s Grok were flooded with questions about the war. Many people hoped for quick and reliable answers. What they got instead were false claims, outdated videos, and made-up facts.


Wrong Answers, Confidently Given

One example: Grok presented old footage from Sudan as an airstrike on Pakistan’s Nur Khan airbase. Nothing of the sort had happened. Another video, of a burning building in Nepal, was labelled by the AI as “likely” showing a Pakistani military response. False again.

Such misinformation is not uncommon. “Our research shows chatbots are not reliable, especially during breaking news,” says NewsGuard researcher McKenzie Sadeghi.

NewsGuard’s investigation found that leading chatbots tend to repeat falsehoods and offer guesses rather than verified facts.

Fakes Passed Off as Facts

In Latin America, AFP fact-checkers asked Grok about a viral video of a giant anaconda. Grok declared the video real, even citing fictitious scientific expeditions to vouch for its authenticity. In fact, the video was AI-generated.

Gemini made the same kind of error. When asked about an AI-generated image of a woman, it confirmed the existence of a person who did not exist and supplied a fabricated backstory. Such false verifications can easily go viral, with users posting AI-generated responses as evidence.

Less Human Oversight, More AI Guesswork

One of the issues is that platforms are cutting back on human fact-checkers. Meta, for instance, shut down its third-party fact-checking programme in the U.S. It now relies on “Community Notes,” the crowdsourced system popularised by X. Researchers question whether crowdsourcing can replace professional fact-checking.


AI software is also prone to bias: a chatbot is only as good as its training data and instructions. When a programmer changes those instructions, the chatbot’s responses can reflect a political agenda. Such was the case with Grok, which began referencing “white genocide,” a well-documented far-right conspiracy theory.

xAI later attributed the episode to an “unauthorised modification.”

The Risk Is Real

Experts warn that relying on AI for fact-checking is dangerous. These tools tend to guess, make assumptions, or echo the biases of their training data.

“I worry about the way AI treats sensitive subjects,” stated Angie Holan of the International Fact-Checking Network.

AI can be useful, but for now it is less a truth machine than a rumour mill spreading misinformation.

