Essential background on how AI works: why AI produces confident errors, how often that matters, and when verification is essential.
Summary
AIs hallucinate, and that of course makes what they generate suspect. A key problem is that much other content is now created with AI as a partner. So even if you personally avoid AI with your students, they still need to learn about it so they can accurately evaluate web-based content, much of which is AI-created.
Last updated December 19, 2025
Key Findings
- LLMs make things up (called AI hallucinations) because they are rewarded for confidence over uncertainty. That flaw is baked into how they are trained; telling them to emphasize factuality won't overcome it.
- Most articles on the internet are now written by AI rather than by humans. The implication is that, as your students conduct research, they may be reading AI hallucinations instead of valid information.
- AIs struggle with reasoning and "what if?" thinking. They almost always mirror what they've seen before rather than truly creating something new.
