Future of Life Institute Podcast
A podcast by Future of Life Institute
![](https://is1-ssl.mzstatic.com/image/thumb/Podcasts221/v4/53/cb/3c/53cb3c79-cc62-0afe-ce3e-6e16c826fa38/mza_1136687704219132387.png/300x300bb-75.jpg)
222 Episodes
- AIAP: Cooperative Inverse Reinforcement Learning with Dylan Hadfield-Menell (Beneficial AGI 2019)
  Published: 2019-01-17
- Existential Hope in 2019 and Beyond
  Published: 2018-12-21
- AIAP: Inverse Reinforcement Learning and the State of AI Alignment with Rohin Shah
  Published: 2018-12-18
- Governing Biotechnology: From Avian Flu to Genetically-Modified Babies With Catherine Rhodes
  Published: 2018-11-30
- Avoiding the Worst of Climate Change with Alexander Verbeek and John Moorhead
  Published: 2018-10-31
- AIAP: On Becoming a Moral Realist with Peter Singer
  Published: 2018-10-18
- On the Future: An Interview with Martin Rees
  Published: 2018-10-11
- AI and Nuclear Weapons - Trust, Accidents, and New Risks with Paul Scharre and Mike Horowitz
  Published: 2018-09-28
- AIAP: Moral Uncertainty and the Path to AI Alignment with William MacAskill
  Published: 2018-09-18
- AI: Global Governance, National Policy, and Public Trust with Allan Dafoe and Jessica Cussins
  Published: 2018-08-31
- The Metaethics of Joy, Suffering, and Artificial Intelligence with Brian Tomasik and David Pearce
  Published: 2018-08-16
- Six Experts Explain the Killer Robots Debate
  Published: 2018-07-31
- AIAP: AI Safety, Possible Minds, and Simulated Worlds with Roman Yampolskiy
  Published: 2018-07-16
- Mission AI - Giving a Global Voice to the AI Discussion With Charlie Oliver and Randi Williams
  Published: 2018-06-29
- AIAP: Astronomical Future Suffering and Superintelligence with Kaj Sotala
  Published: 2018-06-14
- Nuclear Dilemmas, From North Korea to Iran with Melissa Hanham and Dave Schmerler
  Published: 2018-05-31
- What are the odds of nuclear war? A conversation with Seth Baum and Robert de Neufville
  Published: 2018-04-30
- AIAP: Inverse Reinforcement Learning and Inferring Human Preferences with Dylan Hadfield-Menell
  Published: 2018-04-25
- Navigating AI Safety -- From Malicious Use to Accidents
  Published: 2018-03-30
- AI, Ethics And The Value Alignment Problem With Meia Chita-Tegmark And Lucas Perry
  Published: 2018-02-28
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.