Sarah’s Substack
My personal Substack

I read every major AI lab’s safety plan so you don’t have to
AI labs acknowledge that they are taking some very big risks. What do they plan to do about them?
Nov 29, 2024 • Sarah

Why do people disagree about when powerful AI will arrive?
My best attempt to distill the cases for short and long(ish) AGI timelines.
Jun 2, 2025 • Sarah

A defence of slowness at the end of the world
Since learning of the coming AI revolution, I’ve lived in two worlds.
Jan 29, 2025 • Sarah

Don’t sell yourself short
And other advice for the newly AI-concerned.
Jan 16, 2025 • Sarah

Are AI safetyists crying wolf?
Fear of AI is not just another tech-panic.
Jan 8, 2025 • Sarah

On futile rage against the chatbots
When AI says it better.
Dec 13, 2024 • Sarah

#17 Fun Theory with Noah Topper
The Fun Theory Sequence is one of Eliezer Yudkowsky's cheerier works, and considers questions such as 'how much fun is there in the universe?', 'are we…
Nov 8, 2024 • Sarah

#16 John Sherman on the psychological experience of learning about x-risk and AI safety messaging strategies
John Sherman is the host of the For Humanity Podcast, which (much like this one!) aims to explain AI safety to a non-expert audience.
Oct 30, 2024 • Sarah