Artificial intelligence was supposed to help fight fake news. However, when users turned to chatbots during the recent India-Pakistan conflict, they didn't receive clarity; they received misinformation.
Chatbots like OpenAI's ChatGPT, Google's Gemini, and xAI's Grok were flooded with questions about the war. Many people hoped for quick and reliable answers. What they got instead were false claims, outdated videos, and made-up facts.

Wrong Answers, Confidently Given
For example, Grok presented old footage from Sudan as an airstrike on Pakistan's Nur Khan airbase; nothing like that had happened. Another video, of a burning building in Nepal, was labelled by the AI as "likely" showing a Pakistani military response. False again.
Such misinformation is not uncommon. "Our research shows chatbots are not reliable, especially during breaking news," says NewsGuard researcher McKenzie Sadeghi.
NewsGuard's investigation found that leading chatbots tend to repeat falsehoods and offer guesses rather than verified facts.
Fakes Passed Off as Facts
In Latin America, AFP fact-checkers asked Grok about a viral video of a giant anaconda. Grok declared it real and even cited fabricated scientific expeditions to vouch for its authenticity. In fact, the video had been generated by AI.
Gemini made the same kind of error. When asked about an AI-generated image of a woman, it confirmed she was real and supplied a fabricated backstory for someone who never existed. Such false verifications can easily go viral, with users posting AI-generated responses as evidence.
Less Human Oversight, More AI Guesswork
Part of the problem is that platforms are scaling back human fact-checking. Meta, for instance, has shut down its third-party fact-checking programme in the U.S. and now relies on "Community Notes," the crowdsourced system popularised on X. Researchers say this is not an adequate replacement.

AI software is also prone to bias: its answers depend on its training data and the instructions it is given. When a programmer changes those instructions, a chatbot's responses can shift to reflect a political agenda. That is what happened with Grok, which began referencing "white genocide," a well-documented far-right conspiracy theory.
xAI later attributed the behaviour to an "unauthorised modification."
The Risk is Real
Experts warn that relying on AI for facts is dangerous. These tools tend to guess, make assumptions, or echo the biases of their training data.
“I worry about the way AI treats sensitive subjects,” stated Angie Holan of the International Fact-Checking Network.
AI can be useful, but for now it is less a truth machine than a rumour mill, spreading misinformation.