Planned Obsolescence
Science and speculation
Will we agree about AI risks in time?
May 1 • Ajeya Cotra
Six milestones for AI automation
What can AI do on its own, and how well?
Apr 3 • Ajeya Cotra
I underestimated AI capabilities (again)
Revisiting a prediction ten months early
Mar 5 • Ajeya Cotra
Takeoff speeds rule everything around me
Not all short timelines are created equal
Feb 12
AI predictions for 2026
But first, scoring my predictions for 2025
Jan 14 • Ajeya Cotra
Self-sufficient AI
No, we don't "have AGI already." But in any case, we should articulate clearer milestones.
Jan 6 • Ajeya Cotra
OpenAI's CBRN tests seem unclear
OpenAI says o1-preview can't meaningfully help novices make chemical and biological weapons. Their test results don’t clearly establish this.
Nov 21, 2024 • Luca Righetti
Dangerous capability tests should be harder
We should spend less time proving that today’s AIs are safe and more time figuring out how to tell if tomorrow’s AIs are dangerous.
Aug 20, 2024 • Luca Righetti
Thinking ahead to a future where AI decides everything
Recommendations
Redwood Research blog (Buck Shlegeris)
Strange Cities (Owen Cotton-Barratt)
Understanding AI (Timothy B. Lee)
Dwarkesh Podcast (Dwarkesh Patel)
Epoch AI