Fake Court Cases And Hallucination: Don’t Believe Everything AI Tells You

New Delhi:

If you have used ChatGPT or Gemini, you know the feeling. You ask a question, and it fires back an answer so fast and so confident that you don’t even think to double-check it.

But there is a growing problem that is landing people in serious trouble, in India and across the world. It is called “hallucination.”

In simple terms, AI is like that one friend who has an answer for everything but isn’t always accurate. It isn’t lying to you on purpose; it just doesn’t know how to say, “I don’t know.”

Fake Cases Being Cited In Courts 

Take the recent drama in the Indian courts. Judges are starting to notice a weird trend in the petitions landing on their desks.

In one instance, a lawyer presented a case titled Mercy vs Mankind to back up their argument. It sounded like a landmark judgment. The problem? The case does not exist. The AI simply looked at the context of the argument and “invented” a name that sounded legal enough to pass.

In fact, the Supreme Court earlier this week raised serious concerns over the growing use of AI tools in court proceedings. The top court noted that such conduct goes beyond a simple judicial error and may amount to misconduct carrying legal consequences. It has asked the Bar Council of India to constitute a committee of experts to examine the use of fake citations and AI-generated case law, in a matter where the impugned judgement apparently relied on made-up precedents.

“The foundational principle of litigation is to be thorough, whether it involves drafting, preparation of arguments, or research on points of law,” Delhi-based lawyer Ritwik Saha told NDTV. “It is truly worrying that AI-generated information is finding its way not only into arguments and pleadings, but also judgements of trial courts.”

This isn’t just happening here. Mata v. Avianca in the US remains a prime example of this disaster. In that case, a New York legal team filed a brief citing six non-existent cases and was eventually slapped with a $5,000 fine. Indian benches have flagged similar lapses, pulling up counsel for relying on AI-generated authorities that simply do not exist.

Apar Gupta, Advocate and Founder Director of the Internet Freedom Foundation, points out that the risk isn’t AI itself but “automation bias.” This stems from the human tendency to treat verbose output as authoritative. He notes that generative models like ChatGPT and Gemini are probabilistic text engines, not legal databases. They invent case names, fabricate citations, and misstate ratios with the same confidence as accurate ones.

“A court of law is an institution in which a litigant reposes an enormous amount of faith, and this faith should not be casually trifled with by lawyers or the judiciary by relying on a shortcut which seems attractive but can lead to terrible consequences,” Saha added.

Why Does This Happen?

To understand why a genius machine makes such basic mistakes, we must understand how it works.

AI does not ‘search’ for facts like Google does. Instead, it predicts the next word in a sentence based on patterns. If you ask for a legal case about airlines, the AI knows that legal cases usually look like Name A vs Name B. It then pulls names out of its ‘memory’ and stitches them together.

It cares more about sounding plausible than being accurate. It is essentially auto-complete on steroids.
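
To see why, here is a minimal sketch in Python of a toy “next-word predictor”. The tiny corpus and the words in it are invented purely for illustration; real models learn from billions of documents, but the core behaviour, producing whatever fits the pattern with no notion of truth, is the same.

```python
from collections import Counter, defaultdict

# Toy "language model": learn which word tends to follow which,
# then always predict the statistically most likely next word.
# The corpus is an invented placeholder, not real training data.
corpus = ("the petitioner vs the respondent "
          "the state vs the accused "
          "the union vs the company").split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1  # count word-pair patterns

def predict_next(word):
    # Return the most frequent follower: plausible, never verified.
    options = follows.get(word)
    return options.most_common(1)[0][0] if options else None

print(predict_next("vs"))   # "the": a fluent pattern, zero fact-checking
print(predict_next("the"))  # one of the nouns the model happened to see
```

Nothing in that loop ever asks whether a case, a name, or a date is real. Scale the idea up a few billion times, and you have an engine that can write “Mercy vs Mankind” as confidently as a genuine citation.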

The worrying part is that the fake cases mentioned above did not look fake. They had proper legal formatting, believable names, and detailed summaries. To someone skimming through them quickly, they appeared legitimate. That is because generative AI is designed to produce responses that sound natural and convincing.

Fluency vs Truth 

The fact of the matter remains that, as of today, AI is optimised for fluency, not truth. And most users still do not fully understand this distinction.

As Apar Gupta explains, the professional duty under the Advocates Act and the Bar Council Rules is non-delegable. A lawyer who files an AI-generated submission without verification is breaching that duty, thereby exposing the client to sanctions and adverse inference.

Many people now use AI as a replacement for search engines, researchers, or assistants. But unlike a traditional search engine, AI often does not clearly separate verified information from generated text, at least not yet. That creates a dangerous illusion of accuracy.

The issue becomes harder to spot because modern AI systems are exceptionally good at tone. They write smoothly. They structure arguments well. They even mimic authority. So when they make mistakes, users tend to assume the problem is theirs, not the machine’s.

A wider concern exists because courts themselves are experimenting with tools like SUPACE and SUVAS. As Gupta notes, this demands transparency about what these systems can and cannot reliably do. He says he has seen drafts circulate first-hand where section numbers are off, judgments are misattributed, or a citation belongs to an entirely different case.

That does not mean AI is useless. These tools are genuinely powerful when used correctly, such as for brainstorming ideas, simplifying complex concepts, or summarising long documents.

The problem starts when users stop verifying.

If an AI tool gives you statistics, dates, legal citations, medical advice, or breaking news, cross-check it. Look for original sources. Open the links. Verify the names. Treat AI-generated information as a starting point, not the final answer.
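
Part of that checking can even be scripted. The sketch below, in Python, shows the idea: treat AI-suggested citations as claims to verify against a trusted source before relying on them. The hardcoded “database” here is an invented stand-in for a real legal-research lookup, which this sketch does not perform.

```python
# "Verify before you rely": treat AI output as claims to check.
# The trusted set below is an invented stand-in for a real legal
# database lookup.
ai_suggested = [
    "Mata v. Avianca",   # a real case discussed above
    "Mercy vs Mankind",  # the hallucinated citation discussed above
]

verified_database = {"Mata v. Avianca"}

for citation in ai_suggested:
    if citation in verified_database:
        print(f"{citation}: found in trusted source")
    else:
        print(f"{citation}: UNVERIFIED -- do not cite")
```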

The biggest risk these bots pose is that they don't sound robotic; they sound extremely believable.

