Thomas Larsen (AI 2027): We have to start preparing for AGI

The Intelligence Horizon

Episode notes

In this episode, Thomas Larsen of the AI Futures Project joins us to dissect the public's reaction to the widely influential "AI 2027," which he co-authored, and to make the case that superintelligent AI is highly likely within our lifetimes, and plausibly arrives within the next few years. Thomas also lays out why he's pessimistic that risks from misaligned and misused AI will be handled in time. This was a fascinating and thought-provoking discussion on the challenges ahead in AI security.

Check out "AI 2027" here: https://ai-2027.com

Learn more about the AI Futures Project here: https://ai-futures.org

Keywords
AI, Artificial Intelligence, Technology