Facilis descensus ad minimum

A place to discuss the gory details of the current state of machine learning research: what works, what doesn’t, and why we don’t have AGI yet.

Moravec's paradox applies to genetics, too
And other implications of adaptivity
Aug 31, 2025 • Clare Lyle

The state of plasticity in 2025
A survey
Aug 30, 2025 • Clare Lyle

A visual guide to proximal methods
Extra gradients, hold the momentum
Aug 23, 2025 • Clare Lyle

Does parameter norm growth cause networks to lose plasticity?
A brief clarification
Jun 30, 2025 • Clare Lyle

Paper review: Auditing language models for hidden objectives
What's in an objective?
Mar 30, 2025 • Clare Lyle

Can you train a neural network forever?
Understanding trainability in neural networks
Dec 27, 2023 • Clare Lyle

Neural networks live life on the edge
(of stability)
Oct 22, 2023 • Clare Lyle

Do we know why deep learning generalizes yet?
It depends on your epistemology
Oct 22, 2023 • Clare Lyle