Future of Life Institute Podcast
A podcast by the Future of Life Institute
222 Episodes
- Imagine A World: What if we designed and built AI in an inclusive way? (Published: 2023-09-05)
- Imagine A World: What if new governance mechanisms helped us coordinate? (Published: 2023-09-05)
- New: Imagine A World Podcast [TRAILER] (Published: 2023-08-29)
- Robert Trager on International AI Governance and Cybersecurity at AI Companies (Published: 2023-08-20)
- Jason Crawford on Progress and Risks from AI (Published: 2023-07-21)
- Special: Jaan Tallinn on Pausing Giant AI Experiments (Published: 2023-07-06)
- Joe Carlsmith on How We Change Our Minds About AI Risk (Published: 2023-06-22)
- Dan Hendrycks on Why Evolution Favors AIs over Humans (Published: 2023-06-08)
- Roman Yampolskiy on Objections to AI Safety (Published: 2023-05-26)
- Nathan Labenz on How AI Will Transform the Economy (Published: 2023-05-11)
- Nathan Labenz on the Cognitive Revolution, Red Teaming GPT-4, and Potential Dangers of AI (Published: 2023-05-04)
- Maryanna Saenko on Venture Capital, Philanthropy, and Ethical Technology (Published: 2023-04-27)
- Connor Leahy on the State of AI and Alignment Research (Published: 2023-04-20)
- Connor Leahy on AGI and Cognitive Emulation (Published: 2023-04-13)
- Lennart Heim on Compute Governance (Published: 2023-04-06)
- Lennart Heim on the AI Triad: Compute, Data, and Algorithms (Published: 2023-03-30)
- Liv Boeree on Poker, GPT-4, and the Future of AI (Published: 2023-03-23)
- Liv Boeree on Moloch, Beauty Filters, Game Theory, Institutions, and AI (Published: 2023-03-16)
- Tobias Baumann on Space Colonization and Cooperative Artificial Intelligence (Published: 2023-03-09)
- Tobias Baumann on Artificial Sentience and Reducing the Risk of Astronomical Suffering (Published: 2023-03-02)
The Future of Life Institute (FLI) is a nonprofit working to reduce global catastrophic and existential risk from powerful technologies. In particular, FLI focuses on risks from artificial intelligence (AI), biotechnology, nuclear weapons, and climate change. The Institute's work comprises three main strands: grantmaking for risk reduction, educational outreach, and advocacy within the United Nations, the US government, and European Union institutions. FLI has become one of the world's leading voices on the governance of AI, having created one of the earliest and most influential sets of governance principles: the Asilomar AI Principles.