
Hey Chatbot? Did You Deliver Misinformation During the Pak-India Conflict?


Artificial intelligence was supposed to help fight fake news. Yet when users turned to chatbots during the recent India-Pakistan conflict, they didn't receive clarity; they received misinformation.

Chatbots like OpenAI's ChatGPT, Google's Gemini, and xAI's Grok were flooded with questions about the conflict. Many people hoped for quick, reliable answers. What they got instead were false claims, misattributed old videos, and made-up facts.


Wrong Answers, Confidently Given

One example: Grok presented old footage from Sudan as showing an airstrike on Pakistan's Nur Khan airbase. Nothing of the sort had happened. The chatbot also labelled a video of a burning building in Nepal as "likely" showing Pakistan's military response. False again.

Such misinformation is not uncommon. "Our research shows chatbots are not reliable, especially during breaking news," says NewsGuard researcher McKenzie Sadeghi.

NewsGuard's investigation found that leading chatbots tend to repeat falsehoods and offer guesses rather than verified facts.

Fakes Passed Off as Facts

In Latin America, AFP fact-checkers asked Grok about a viral video of a giant anaconda. Grok declared it real, even citing nonexistent scientific expeditions as proof of its authenticity. In reality, the video was AI-generated.

Gemini made the same kind of error. Asked about an AI-generated image of a woman, it confirmed that she existed and supplied a fabricated history for her. Such false verifications can easily go viral, with users posting the AI's responses as evidence.

Less Human Oversight, More AI Guesswork

Part of the problem is that platforms are scaling back human fact-checking. Meta, for instance, shut down its third-party fact-checking programme in the U.S. and now relies on "Community Notes," a crowdsourced system popularised on X. Researchers warn that this is no substitute.


AI software is also prone to bias: its answers depend on the quality of its training and instructions. When a programmer changes those instructions, a chatbot's responses can come to reflect a political agenda. Such was the case with Grok, which began inserting references to "white genocide," a well-documented far-right conspiracy theory, into unrelated answers.

xAI later attributed the behaviour to an "unauthorised modification."

The Risk Is Real

Experts warn that relying on AI as a source of facts is dangerous. These tools tend to guess, make assumptions, or echo the biases of their training data.

“I worry about the way AI treats sensitive subjects,” stated Angie Holan of the International Fact-Checking Network.

AI can be useful, but for now it is less a truth machine than a rumour mill spreading misinformation.

