[Linkpost] “My P(doom) is 2.76%. Here’s Why.” by Liam Robins

EA Forum Podcast (All audio) - A podcast by the EA Forum Team

This is a link post.

Disclaimer: all beliefs here are mine alone, and do not necessarily reflect the beliefs of my employer or any organization I work with.

Summary: This is my best attempt to explain my beliefs about AI existential risk, leaving none of my assumptions unstated or unchallenged. I'm writing this not because I'm confident that I arrived at the right answer, nor because I think I've stumbled on an especially clever insight. I'm writing this because laying out my thought process in full detail can be a helpful tool for thinking clearly and having productive dialogues. If you disagree with any of the points I've made, then I encourage you to write your own P(doom) explanation in a similar style.

Introduction: I just came back from Manifest, where I had the privilege of participating in a live AI doom debate with Liron Shapira of the AI Doom [...]

---

Outline:
(00:56) Introduction
(03:06) The 11 Claims
(03:10) 1. AGI isn't coming soon (15%)
(04:36) 2. Artificial intelligence can't go far beyond human intelligence (1%)
(06:10) 3. AI won't be a physical threat (1%)
(07:43) 4. Intelligence yields moral goodness (30%)
(08:18) 5. We have a safe AI development process (20%)
(10:25) 6. AI capabilities will rise at a manageable pace (N/A)
(10:37) 7. AI won't try to conquer the universe (50%)
(11:33) 7A: By default, AGI will not be truly agentic. (2%)
(12:41) 7B: By default, AGI will be aligned to human interests. (40%)
(13:52) 7C: By default, AGI will be misaligned but non-expansionary. (8%)
(17:00) 8. Superalignment is a tractable problem (20%)
(18:40) 9. Once we solve superalignment, we'll enjoy peace (80%)
(23:29) 10. Unaligned ASI will spare us (1%)
(24:58) 11. AI doomerism is bad epistemology (true, but mostly irrelevant)
(26:27) Multiplying the Numbers Together
(26:58) Other Considerations
(27:21) Humanity Won't Be Doomed is a Good Historical Heuristic
(28:37) Superforecasters Think We'll Probably Be Fine
(31:00) AI Developers and Top Government Officials Seem to Think We'll Be Fine
(34:06) Multiplying the Numbers Together (Again)
(34:33) Conclusion

The original text contained 2 footnotes, which were omitted from this narration.

---

First published: June 12th, 2025
Source: https://forum.effectivealtruism.org/posts/hrMBpqmgxpLZPLeBG/my-p-doom-is-2-76-here-s-why
Linkpost URL: https://thelimestack.substack.com/p/my-pdoom-is-276-heres-why

---

Narrated by TYPE III AUDIO.
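The outline's "Multiplying the Numbers Together" step can be sketched numerically. This is a minimal reconstruction, not the author's own calculation: it assumes each numbered claim is an independent way doom could be averted, so P(doom) is the product of the complements of the outline's percentages. The `claims` dictionary and the combination rule (claim 6 is N/A and excluded; claim 7's 50% is the sum of 7A-7C) are my assumptions read off the outline.

```python
# Hedged sketch: reconstructing the "Multiplying the Numbers Together" step.
# Each value is the outline's probability that a doom-averting claim is TRUE;
# under the independence assumption, doom requires every claim to fail,
# so we multiply the complements (1 - p).
claims = {
    "1. AGI isn't coming soon": 0.15,
    "2. AI can't go far beyond human intelligence": 0.01,
    "3. AI won't be a physical threat": 0.01,
    "4. Intelligence yields moral goodness": 0.30,
    "5. We have a safe AI development process": 0.20,
    "7. AI won't try to conquer the universe": 0.02 + 0.40 + 0.08,  # 7A + 7B + 7C
    "8. Superalignment is a tractable problem": 0.20,
    "9. Once we solve superalignment, we'll enjoy peace": 0.80,
    "10. Unaligned ASI will spare us": 0.01,
}

p_doom = 1.0
for prob_true in claims.values():
    p_doom *= 1.0 - prob_true  # doom only if this escape route fails

print(f"P(doom) from the raw product: {p_doom:.2%}")
```

Under these assumptions the raw product comes out near 3.7%, above the headline 2.76%; presumably the "Other Considerations" sections (historical heuristics, superforecaster and expert views) discount the figure further before the final "Multiplying the Numbers Together (Again)" step.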
