Sarah’s Substack
My personal Substack
I read every major AI lab’s safety plan so you don’t have to
AI labs acknowledge that they are taking some very big risks. What do they plan to do about them?
Nov 29, 2024 • Sarah
Why do people disagree about when powerful AI will arrive?
My best attempt to distill the cases for short and long(ish) AGI timelines.
Jun 2, 2025 • Sarah
A defence of slowness at the end of the world
Since learning of the coming AI revolution, I’ve lived in two worlds.
Jan 29, 2025 • Sarah
Don’t sell yourself short
And other advice for the newly AI-concerned.
Jan 16, 2025 • Sarah
Are AI safetyists crying wolf?
Fear of AI is not just another tech-panic.
Jan 8, 2025 • Sarah
On futile rage against the chatbots
When AI says it better.
Dec 13, 2024 • Sarah
#17 Fun Theory with Noah Topper
The Fun Theory Sequence is one of Eliezer Yudkowsky's cheerier works, and considers questions such as 'how much fun is there in the universe?', 'are we…
Nov 8, 2024 • Sarah
#16 John Sherman on the psychological experience of learning about x-risk and AI safety messaging strategies
John Sherman is the host of the For Humanity Podcast, which (much like this one!) aims to explain AI safety to a non-expert audience.
Oct 30, 2024 • Sarah