Episode notes
Turns out, it only takes a small slice of bad training data to make an AI suggest felonies. No joke. Today's episode breaks down OpenAI's wild new study, and why it changes how we think about safety.
We’ll talk about:
- The “toxic persona” hiding inside GPT-4
- How OpenAI found early warning signs before the model said anything weird
- Why AI adoption has quietly hit 1.8B users (and Boomers are surprisingly into it)
- The rise of AI-first creative tools—and where the next big breakout will happen
Keywords: GPT-4, OpenAI alignment, AI safety, Menlo Ventures AI report, Audos Donkeycorns, AI tools 2025, creative AI, toxic persona
Links:
- Newsletter: