
A hallucination happens when an AI makes up something that sounds real—but isn’t true. It’s confidently telling you a fact that’s actually wrong.
This is based on OpenAI research. As OpenAI explains it: we're trying to make AI smarter and more trustworthy, but one big problem remains. Sometimes AI gives wrong answers and acts as if they are definitely true. This is called a "hallucination." The new research says this happens because the way we train and test AI encourages it to guess, even when it's not sure, rather than admit it doesn't know.
ChatGPT sometimes makes things up and gives wrong answers. The newer version, GPT‑5, makes fewer mistakes, especially when it's solving problems, but it can still slip up. All big AI models have this issue, and the team is working hard to make it happen less often.
What are hallucinations?
AI sometimes makes up answers that sound real but aren't true. These made-up answers are called hallucinations. They can happen even with simple questions. For example, when someone asked a popular chatbot about the PhD thesis title of Adam Tauman Kalai, it gave three different answers, and all of them were wrong. When asked about his birthday, it gave three different dates, and none were correct either.
Teaching to the test
AI models keep making things up partly because of how we test them. The tests don't cause the problem directly, but they reward guessing over saying "I don't know." So the AI learns to guess even when it is not sure, because guessing helps it score better.
Imagine a multiple-choice test. If you guess, you might get it right. If you leave it blank, you get zero. AI models work the same way—guessing helps them score better, even if the guess is wrong. So they learn to guess instead of saying “I don’t know.”
In another example, let's say an AI is asked for someone's birthday but doesn't know it. If it guesses "September 10," there's a small chance it's right. If it says "I don't know," it gets no credit. So, over many questions, the guessing AI looks better in scores, even though it's often wrong. That's why AI learns to guess instead of being honest.
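To make that arithmetic concrete, here is a minimal Python sketch. The 1-in-365 birthday odds and the one-point-per-correct-answer grading are illustrative assumptions, not details from OpenAI's benchmarks. Under right-or-wrong scoring, a model that always guesses earns a small positive expected score, while one that honestly abstains earns zero.

```python
# A minimal sketch (illustrative numbers only): why a model that always guesses
# outscores one that abstains under the usual right-or-wrong grading.

P_CORRECT_GUESS = 1 / 365   # chance a random birthday guess happens to be right


def expected_score(strategy: str) -> float:
    """Expected points per question under binary grading:
    1 point for a correct answer, 0 for a wrong answer or "I don't know"."""
    if strategy == "guess":
        return 1 * P_CORRECT_GUESS + 0 * (1 - P_CORRECT_GUESS)
    if strategy == "abstain":
        return 0.0  # "I don't know" never earns credit under this grading
    raise ValueError(strategy)


print(f"always guess:       {expected_score('guess'):.4f}")    # ~0.0027, slightly above zero
print(f"say 'I don't know': {expected_score('abstain'):.4f}")  # 0.0000
```

Averaged over thousands of benchmark questions, that tiny edge is enough for the always-guessing model to look better on the leaderboard.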
There's an easy way to help stop AI from making things up: punish wrong answers more than honest "I don't know" replies, and give some credit when the AI admits it's unsure. This isn't a new idea; some school tests already do this to stop students from guessing blindly.
But here's the key point: just adding a few special tests isn't enough. The main way we score AI today still rewards lucky guesses, so the AI keeps learning to guess instead of being careful. If we fix how we score AI, rewarding honesty and punishing confident mistakes, it will help reduce hallucinations and make AI more trustworthy.
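As a rough sketch of what such a scoring change could look like, the snippet below uses made-up penalty and credit values: a -1 penalty for a confident mistake and 0.25 credit for abstaining are assumptions for illustration, not OpenAI's actual numbers. With the same 1-in-365 birthday odds as before, admitting uncertainty now beats guessing.

```python
# A minimal sketch, with assumed penalty/credit values, of a scoring rule that
# rewards honesty: wrong answers cost more than "I don't know", which earns
# partial credit.

CORRECT_REWARD = 1.0
WRONG_PENALTY = -1.0    # confident mistake (assumed value for illustration)
ABSTAIN_CREDIT = 0.25   # honest "I don't know" (assumed value for illustration)


def expected_score(p_correct: float, abstain: bool) -> float:
    """Expected points when the model's answer is right with probability p_correct."""
    if abstain:
        return ABSTAIN_CREDIT
    return p_correct * CORRECT_REWARD + (1 - p_correct) * WRONG_PENALTY


p = 1 / 365  # odds of a lucky birthday guess
print(round(expected_score(p, abstain=False), 3))  # -0.995: guessing is now punished
print(round(expected_score(p, abstain=True), 3))   # 0.25: honesty earns partial credit
```

The exact numbers matter less than the shape of the rule: as long as a confident mistake costs more than an honest "I don't know" earns, guessing only pays off when the model is actually likely to be right.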
Why AI Makes Stuff Up (Hallucinates)
AI (Artificial Intelligence) models are trained to guess the next word in a sentence. They don’t really know facts. They just try to sound right. So even when they don’t have the right answer, they still guess something that fits. That guess might sound smart, but it can be totally wrong.
This is how hallucinations happen:
The AI is guessing words, not checking facts.
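A toy sketch of that idea, with completely made-up words and probabilities: the model simply picks the most likely next word given the text so far, and nothing in that step checks whether the finished sentence is true.

```python
# A toy sketch (hypothetical probabilities) of next-word prediction: the model
# picks whatever continuation looks most likely after the words so far.
# Nothing here verifies the resulting sentence against real facts.

next_word_probs = {
    # assumed distribution for a prompt like "His PhD thesis was titled ..."
    "Algorithms": 0.34,
    "Learning": 0.31,
    "Boosting": 0.28,
    "unknown": 0.07,   # "I don't know"-style continuations are rarely the top pick
}

best_word = max(next_word_probs, key=next_word_probs.get)
print(best_word)  # "Algorithms": a fluent, confident guess, not a verified fact
```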
Source: OpenAI
Conclusion:
AI models often make up answers that sound real but aren’t true—this is called a hallucination. It happens because AI is trained to guess the next word, not to check if the answer is correct. Even when not sure, it guesses confidently, which can lead to wrong information.
This problem is made worse by how we test and score AI. Current methods reward guessing over honesty. So, AI learns to guess instead of saying “I don’t know.”
To fix this, we need to change how AI is scored: punish wrong answers more than honest uncertainty, and give partial credit when AI admits it doesn't know.
Doing this will encourage AI to be more careful and trustworthy—and help reduce hallucinations in future models.