Episode notes
“The Future of Life Institute has been working on AI governance-related issues for the last decade. We're already over 10 years old, and our mission is to steer very powerful technology away from large-scale harm and toward very beneficial outcomes. On the risk side, you could think about any kind of extreme risk from AI, all the way up to existential or extinction risk, the worst kinds of risks. On the benefits side, you can think about any kind of large benefit that humanity could achieve from technology, all the way through to utopia, right? Utopia is the biggest benefit you can get from technology. Historically, that has meant we have focused, for example, on climate change and its impacts. We have also focused on bio-related risks, pandemics, and nuclear security issues. If things go well, we will be able to avoid these really bad downsides in terms of ...