Future of Life Institute Podcast
A podcast by Future of Life Institute
![](https://is1-ssl.mzstatic.com/image/thumb/Podcasts221/v4/53/cb/3c/53cb3c79-cc62-0afe-ce3e-6e16c826fa38/mza_1136687704219132387.png/300x300bb-75.jpg)
222 Episodes
- Neel Nanda on Math, Tech Progress, Aging, Living up to Our Values, and Generative AI (Published: 2023-02-23)
- Neel Nanda on Avoiding an AI Catastrophe with Mechanistic Interpretability (Published: 2023-02-16)
- Neel Nanda on What is Going on Inside Neural Networks (Published: 2023-02-09)
- Connor Leahy on Aliens, Ethics, Economics, Memetics, and Education (Published: 2023-02-02)
- Connor Leahy on AI Safety and Why the World is Fragile (Published: 2023-01-26)
- Connor Leahy on AI Progress, Chimps, Memes, and Markets (Published: 2023-01-19)
- Sean Ekins on Regulating AI Drug Discovery (Published: 2023-01-12)
- Sean Ekins on the Dangers of AI Drug Discovery (Published: 2023-01-05)
- Anders Sandberg on the Value of the Future (Published: 2022-12-29)
- Anders Sandberg on Grand Futures and the Limits of Physics (Published: 2022-12-22)
- Anders Sandberg on ChatGPT and the Future of AI (Published: 2022-12-15)
- Vincent Boulanin on Military Use of Artificial Intelligence (Published: 2022-12-08)
- Vincent Boulanin on the Dangers of AI in Nuclear Weapons Systems (Published: 2022-12-01)
- Robin Hanson on Predicting the Future of Artificial Intelligence (Published: 2022-11-24)
- Robin Hanson on Grabby Aliens and When Humanity Will Meet Them (Published: 2022-11-17)
- Ajeya Cotra on Thinking Clearly in a Rapidly Changing World (Published: 2022-11-10)
- Ajeya Cotra on How Artificial Intelligence Could Cause Catastrophe (Published: 2022-11-03)
- Ajeya Cotra on Forecasting Transformative Artificial Intelligence (Published: 2022-10-27)
- Alan Robock on Nuclear Winter, Famine, and Geoengineering (Published: 2022-10-20)
- Brian Toon on Nuclear Winter, Asteroids, Volcanoes, and the Future of Humanity (Published: 2022-10-13)
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.