Episode notes
We’ve all done it: uploaded a massive 100-page PDF and assumed the AI "read" every word. 📄 But in early 2026, a viral experiment using the Harry Potter series revealed a terrifying truth: AI models often "hallucinate" their reading process, pulling from their training data instead of your actual file. We break down the Harry Potter Experiment and the phenomenon of Context Rot that causes professionals to miss critical details buried in the middle of their documents.
We also unpack the 2026 Stanford & Yale Study that found models can reproduce 96% of famous books from memory, and what this "Memorization Crisis" means for your legal, medical, and financial workflows.
We’ll talk about:
- The Fumbus & Driplo Tra ...
Keywords
RAG, AI hallucination, Large Language Models, Context Rot