Nearly three years after ChatGPT’s debut in November 2022, hallucinations are still one of the biggest adoption worries across large language models (LLMs), from ChatGPT to Gemini to Claude. A recently released study by OpenAI researchers, Why Language Models Hallucinate, shows why they’re still a “thing” and what can be done to minimize them.

Here’s what they are, why they happen, and how to work around them.

TL;DR

  • Hallucinations = AI outputs that sound right but aren’t true.
  • They happen because of training/test incentives and messy data, not because AI wants to lie to you.
  • Nearly three years after ChatGPT’s launch, surveys still list them as one of the top adoption barriers.
  • A 2025 study (Why Language Models Hallucinate) highlights reasons hallucinations persist.
  • Manage hallucinations with smart prompting, giving the AI good sources to pull from, and always double-checking important answers.

My first AI hallucination had nothing to do with Tequila

I still remember mine. I asked an AI chatbot to summarize a rival’s products and link to their pricing pages. The summary? Flawless. The URLs? Every single one was fake. Perfectly formatted. Totally plausible. Completely wrong. I felt betrayed… and unfortunately stone-cold sober.

That’s the essence of an AI hallucination: confident-sounding bullshit.

So, what is an AI hallucination?

In research terms, it’s when an AI generates something that sounds true but isn’t. Unlike lying, there’s no intent. It’s a side effect of how these systems are trained, and can leave you cursing at your computer screen.

Think of a student under pressure at exam time: better to bluff than leave the answer blank. That’s how models are scored, too: good-sounding guesses are rewarded, while an “I don’t know” is punished. As the authors of Why Language Models Hallucinate point out, today’s benchmarks push models to sound confident even when they’re not, instead of simply choosing not to give an answer.

Haven’t we been complaining about AI hallucinations for years?

Yep. Large language models went mainstream with ChatGPT’s launch on November 30, 2022. Nearly three years on, hallucinations remain a top adoption concern across models, from ChatGPT to Gemini to Claude.

A 2025 survey from Gallagher (one of the world’s largest risk management firms) found that AI errors or “hallucinations” were the most commonly cited risk of AI adoption, ahead of data protection and legal concerns.

Recent studies and industry surveys continue to list hallucinations (and privacy) as central barriers to everyday use. And every major tool warns you up front, right in its interface: always double-check.

  • “ChatGPT can make mistakes. Check important info.”
  • “Gemini can make mistakes, so double-check it”
  • “Claude can make mistakes. Please double-check responses”

Why do models hallucinate?

According to Why Language Models Hallucinate, it comes down to two things: the data and the incentives.

First, the data. Models learn to predict plausible text from enormous, imperfect datasets. Facts that show up rarely (or only once) in that data are statistically hard to get right, so some errors are baked in before the model ever answers your question.

Second, and more fixable: the scoring. Most benchmarks grade answers as simply right or wrong, with no credit for abstaining. Do the math: on a right-or-wrong benchmark, a guess with even a 20% chance of being correct earns 0.2 points on average, while “I don’t know” earns exactly 0. Like the bluffing student above, a model that guesses confidently outscores one that admits uncertainty, so confident guessing is exactly the behavior that gets reinforced.

In short: hallucinations aren’t malice or a mysterious glitch. They’re the predictable result of how we train and evaluate these systems.

3 Things You Can Do To Get AI Output You Can Trust

1. Prompt like a pro (allow “I don’t know” or ask the AI for proof)

  • Set guardrails: “If unsure, say ‘I don’t know.’ Cite sources.”
  • Ask for verification: “List assumptions and unresolved questions.”
  • Use self-check patterns (like Chain-of-Verification) so the AI drafts, then fact-checks itself.

Prompt example:
“Answer only if you’re more than 90% confident. If not, say ‘I don’t know.’ Cite sources with links for each claim. After the answer, list assumptions and what would change your conclusion.”
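If you’re scripting against a model rather than typing into a chat window, the same guardrails can be wired into an API call. Here’s a minimal sketch using the OpenAI Python SDK; the model name and prompt wording are my placeholders, and the two-pass self-check is a simplified take on the Chain-of-Verification idea, not the full published recipe.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

GUARDRAILS = (
    "Answer only if you're more than 90% confident. If not, say 'I don't know.' "
    "Cite sources with links for each claim. After the answer, list your "
    "assumptions and what would change your conclusion."
)

def ask_with_guardrails(question: str) -> str:
    """One-shot answer with the guardrail instructions as a system message."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder; use whichever model you have access to
        messages=[
            {"role": "system", "content": GUARDRAILS},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

def ask_with_self_check(question: str) -> str:
    """Draft first, then have the model fact-check and rewrite its own draft."""
    draft = ask_with_guardrails(question)
    verify_prompt = (
        f"Question: {question}\n\nDraft answer:\n{draft}\n\n"
        "Fact-check the draft: list each factual claim, label it supported, "
        "uncertain, or likely wrong, then rewrite the answer keeping only "
        "the claims you can stand behind."
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": verify_prompt}],
    )
    return resp.choices[0].message.content

print(ask_with_self_check("Who founded the company that makes Claude?"))
```

The second pass costs an extra API call, but making the model grade its own draft before you see it catches a surprising share of confident nonsense.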


2. Give the AI trusted sources to work with

One of the simplest ways to cut down on hallucinations is to feed the AI the right material up front. You don’t need advanced retrieval systems to do this — just:

  • Attach documents to a Project folder in ChatGPT or a CustomGPT so it always has access to them.
  • Paste in trusted links or upload reference PDFs you’re comfortable sharing (never sensitive or private data).
  • Tell the AI explicitly to only use those sources when answering.

This mirrors the principle of retrieval-augmented generation (RAG), but in a way anyone can use.


Prompt example:
“Use only the attached documents and the following links as your sources: [insert links]. Do not use outside knowledge. If the answer isn’t covered in these materials, say ‘I don’t know.’ For each claim, cite the source you pulled it from.”
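The same grounding works in a script: stuff the trusted text directly into the prompt and tell the model to stay inside it. Below is a minimal sketch assuming the OpenAI Python SDK and plain-text source files (PDFs would need text extraction first); the file name and model name are placeholders, and very large documents would need chunking to fit the context window.

```python
from pathlib import Path
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def grounded_answer(question: str, source_paths: list[str]) -> str:
    """Answer a question using only the supplied documents as sources."""
    sources = []
    for path in source_paths:
        text = Path(path).read_text(encoding="utf-8")
        sources.append(f"--- SOURCE: {path} ---\n{text}")

    prompt = (
        "Use only the sources below. Do not use outside knowledge. "
        "If the answer isn't covered in these materials, say 'I don't know.' "
        "For each claim, cite the source you pulled it from.\n\n"
        + "\n\n".join(sources)
        + f"\n\nQuestion: {question}"
    )
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

# Hypothetical usage: policy.txt stands in for your own trusted document.
print(grounded_answer("What does our refund policy cover?", ["policy.txt"]))
```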

3. Double-check the important stuff (and use built-ins)

  • Use built-in checkers like Gemini’s “double-check.” (I didn’t even know this existed, but it’s very useful)
  • Re-ask your question in different ways; inconsistency in the responses is a red flag.
  • Always cross-verify answers that are high-stakes with trusted sources. (It’s your job to lose if you don’t double-check)

Prompt example:
“Provide your answer, then give me a section titled ‘Verification’ where you flag any claims that may be uncertain or need fact-checking. Suggest two authoritative sources I can use to confirm or reject each key claim.”
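The re-asking trick is easy to automate, too. Here’s a minimal sketch (my own, not from the study) that asks the same question several times with some sampling randomness and prints the answers side by side, so you can eyeball the spread.

```python
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in your environment

def consistency_check(question: str, runs: int = 3) -> list[str]:
    """Ask the same question several times; divergent answers are a red flag."""
    answers = []
    for _ in range(runs):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model name
            temperature=1.0,      # leave some randomness so reruns can diverge
            messages=[{"role": "user", "content": question}],
        )
        answers.append(resp.choices[0].message.content)
    return answers

for i, answer in enumerate(consistency_check("When did ChatGPT launch?"), start=1):
    print(f"--- Run {i} ---\n{answer}\n")
# Agreement isn't proof of truth (the model can be consistently wrong),
# but disagreement is a cheap signal that you need a trusted source.
```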

Don't blame AI if you don't verify (and it rhymes)

Three years in, hallucinations aren’t going away (yet). But they don’t have to be a dealbreaker between you and AI. If you prompt with care, ground your AI in good data, and double-check the important stuff, you can cut through the noise and make AI genuinely useful.

In other words: treat the model like a brilliant but unreliable intern (book smart, but it always gets the coffee order wrong). Fantastic output, as long as you check its work.


Randy Matheson

Randy Matheson is an innovation strategist with a 25+ year proven track record of turning ideas into digital products. He specializes in working with Generative AI for content creation and using cutting-edge AI tools to create and interact with virtual audiences. He operates out of Hamilton, Ontario, where he resides with his partner and two large dogs.

Connect with Randy on LinkedIn