Discussion about this post

Chris

I appreciate the optimism here, in that content-surfacing algorithms pre-date mass use of LLMs. So we’re moving from being force-fed a curated diet to consuming a meal that we get to choose.

Personally, I really enjoy LLMs because instead of pondering a random thought (unrelated to what I’m probably supposed to be doing at a given moment) and subsequently losing it, I can pursue it with an enthusiastic conversational partner and see what questions and conclusions it will then throw up. So they’ve made me more inclined to dig deeper on things.

Rebecca Mbaya

This is such a thoughtful essay, Daria!

I really appreciate how you drew the contrast between recommendation systems and LLMs in relation to epistemic autonomy. You’re absolutely right that the way AI is integrated into platforms matters more than the mere presence of the tools themselves.

One angle I’d add, though, is that while LLMs may seem to preserve user choice more than recommendation systems do, they’re still built on predictive architectures. At their core, they generate responses by calculating the most probable next word from massive training datasets. That training data is scraped, de-contextualized, and flattened into statistical patterns, which means it’s already stripped of the epistemic autonomy of the original sources. So even though a user chooses when to query an LLM, the “knowledge” it returns has already passed through layers of prediction and loss of context.
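To make the "most probable next word" point concrete, here's a toy sketch (the vocabulary and probabilities are invented for illustration; real LLMs work over learned distributions across tens of thousands of tokens, often with sampling rather than a strict argmax):

```python
# Hypothetical model output: probabilities for the next token
# given the context "The sky is ...". These numbers are made up.
next_token_probs = {"blue": 0.90, "sky": 0.05, "loud": 0.03, "sea": 0.02}

# Greedy decoding: pick the single most probable next token.
next_token = max(next_token_probs, key=next_token_probs.get)
print(next_token)  # prints "blue"
```

The point the sketch illustrates: whatever nuance or context was in the original sources, what the model ultimately works with at generation time is a probability distribution like this one.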

This doesn’t diminish your point about intentional use; I think you’re right that recommendation systems subtly erode autonomy in ways LLMs don’t. But the two systems pose different epistemic risks: recommendation systems by shaping our exposure without consent, and LLMs by presenting knowledge that has already been de-autonomized at the data level.

I’d love to hear your thoughts on that distinction.

7 more comments...
