Episode notes
In this episode, we take a deep dive into the phenomenon of AI hallucinations, treating them not just as technical glitches but as fundamental aspects of how large language models (LLMs) work. Joined by our first guest, Angela Black, we explore why "confabulation" better describes what these systems are actually doing, and why that matters in fields like education, academic research, and information ethics.
☕ Enjoying the show? Support us by buying a coffee!
🔗 https://buymeacoffee.com/aiwtt
We also highlight the emerging role of librarians as crucial guides in navigating AI-generated content, building AI literacy, and safeguarding institutional integrity in a post-truth era.
⏱️ Timestamps
00:00 – The Rise of AI H ...
Keywords
Academic integrity, AI hallucinations, Confabulation, Ethics in AI, AI literacy