Thinking for Yourself: AI and Epistemic Autonomy
Personalization algorithms vs. LLMs and why choice matters
I wrote this essay in June for a competition. One of the prompts was: "In an increasingly AI-driven world, how is our ability to think for ourselves changing?" At first I was planning to discuss how we can either use AI to do the work for us, or use it to get better at our work while still doing it ourselves — and that the choice is ultimately ours.
The topic wouldn’t be very new, though. I’ve definitely seen more than one article discussing exactly that, and there’d be no point in repeating what’s already been said. So I went down a bit of a philosophical rabbit hole and tried to come up with something more original.
A few quick notes before I share the actual essay. First, I had a maximum of 1000 words, and word counts are my worst enemy. The essay could've been so much more extensive, but I didn't get to include many interesting ideas, which I hope to do soon. Second, it's an academic essay, so if it's not your preferred "genre" you might want to wait until my next post, where I'll rewrite it to sound less academic (not sure when that will be, though, given that it's only the second day of school and I already feel like 24 hours a day is not enough...)
Fun fact: the competition guidelines stated that you had to report every way in which AI was used, and that any use of AI would negatively impact the submission. I personally think that's fair, but I'm always open to discussion! Another fun fact: I ran the finished essay through an AI detector (without having used AI, of course; that would be so ironic given the prompt), and my introduction paragraph was marked as 30% AI-generated. Honestly, the paragraph sounded too generic and didn't convey anything important, so I ended up rewriting it entirely — which saved me some precious word count.
Anyways, I’ll stop rambling and let you read the essay. I also included my bibliography on the off chance that someone is interested in epistemology.
In an increasingly AI-driven world, how is our ability to think for ourselves changing?
Multiple approaches can be taken to investigating the impact of artificial intelligence on our ability to think for ourselves. This essay takes a philosophical approach and aims to analyze, from an epistemological angle, how AI changes our independent thinking.
Such an analysis benefits from a more refined understanding of "thinking for yourself". A more rigorous term is epistemic autonomy: an agent's governing of their own intellectual life, without external influence, control, or pressure (Priest 73). The strength of this definition is that it shifts the focus from how we govern our intellectual lives in an increasingly AI-driven world to whether AI allows us to govern them on our own.
In a sense, the use of AI doesn't negate our epistemic autonomy per se. As Matheson points out, "individuals can be epistemically autonomous with respect to how they conduct their inquiry into any given question" (321). Using an AI tool can be seen as an autonomous choice of a means of inquiry; consequently, we don't lose epistemic autonomy merely by virtue of using AI. This essay therefore argues that the effects of artificial intelligence [1] on our epistemic autonomy are determined more by the design and performance of AI itself than by how, if at all, we use it.
Among the most prevalent AI applications are personalization algorithms: machine learning systems that analyze user data on a platform and determine what content to display. Personalization algorithms are commonly used on social media, streaming services, and search engines. Users' digital experiences are thus curated by opaque algorithms, creating an environment where they have little control over what is shown to them. The narratives they are exposed to are chosen for them, not by them, undermining the self-governance that Kawall (376) considers one of the aspects of epistemic autonomy.
It can be argued that, since they are based on the user's previous activity, personalization algorithms can support another aspect of epistemic autonomy suggested by Kawall: authenticity. In this context, Kawall uses the term to describe an agent's intellectual pursuits being rooted in their genuine curiosity, values, and passions (377). The algorithms tailor users' experiences to their apparent preferences, enabling a degree of such authenticity inasmuch as users are exposed to arguments, topics, and products they are (likely) willing to see.
This, however, raises the question of whether those interests are truly “authentic”. The issue AI recommendation systems pose to epistemic autonomy is that while interacting with content on a platform, users become passive consumers. Epistemically autonomous agents, on the other hand, “conduct their intellectual pursuits . . . with a particular motivation” (Matheson 269), and “deliberately seek out” those pursuits (Kawall 377). It is this intentionality that users lack when their digital experience is curated by personalization algorithms. Furthermore, it is well known that such algorithms are addictive by design, especially on social media platforms. According to Fricker (325), engaging in a compulsive activity, such as social media usage, necessarily negates an individual’s self-governance.
Large Language Models (LLMs) are another form of AI that is rapidly gaining popularity. Several key features distinguish LLM-based chatbots from recommendation systems.
First, unlike with personalization algorithms, users of LLMs are more wary of AI involvement. This leads to a higher awareness of the chatbots' limitations, whereas on platforms like social media, users are more susceptible to the influence of opaque algorithms (Mattioni 1506). As a survey by Wang et al. (12) illustrates, respondents understand the main flaws of AI chatbots, which makes them more likely to exercise epistemic vigilance, i.e., to cross-check the provided information, seek out sources to support or disprove claims, and so on. By doing so, agents are able to form beliefs in accordance with the available evidence, exercising self-governance (Fricker 329) and therefore maintaining epistemic autonomy. [2]
Second, users prompt LLMs to answer specific queries, so a chatbot's output is less backward-looking than the content suggested by a recommendation system, in the sense that it addresses the user's present needs rather than offering content the user was interested in previously. This gives users more control over which inquiries to pursue. They are not limited to what the algorithm suggests, which enables far more independence and self-governance.
On the other hand, LLMs exhibit a number of biases in their outputs, including framing bias (Alessa et al. 3). Consequently, they may subtly steer the user's attention to certain arguments over others, potentially influencing the direction of their further research. Yet, as Kawall (379) notes, even if an agent's pursuits are initially subject to some external influence, their subsequent inquiries can still be authentic if the agent engages in purposeful, passion-driven, and intellectually fulfilling exploration.
Although recommendation systems and LLMs are just two examples, they clearly illustrate the importance of user choice in AI implementation. On platforms with AI-driven personalization, users are subject to the influence of manipulative algorithms that often present only one side of reality and limit users' exposure to alternative perspectives. This interferes with their autonomous belief formation, pushing them to inadvertently conform to dominant narratives, which is the opposite of epistemic autonomy. By contrast, when agents choose to use AI tools intentionally, they can do so in a way that doesn't inherently negate their epistemic autonomy. It is evident that LLM-based chatbots allow for a greater degree of epistemic autonomy than recommendation systems.
In conclusion, it is not the availability of AI tools but the integration of AI algorithms into digital platforms that threatens our epistemic autonomy the most. The more online spaces operate based on AI, the less control we will be able to maintain over our digital environments. At the same time, as we inevitably become more dependent on those platforms, it will become harder to avoid the influence of AI algorithms on our intellectual pursuits. Ultimately, our ability to think for ourselves will be significantly hindered unless we have a choice regarding the integration of AI into our epistemic lives.
1. Artificial intelligence is a broad term that encompasses a wide range of computer systems. Given the scope of this essay, the following analysis focuses on only a few of the most common applications of AI.
2. This is not to say that all users will necessarily approach AI-assisted inquiries critically, as some will simply accept the 'final' answer provided by AI. However, this essay looks primarily at the design and implementation of AI systems rather than at user behavior. The key point here is that any LLM user is able to exercise epistemic autonomy; whether or not they choose to is not the focus of this essay.
References:
Alessa, Abeer, et al. “How Much Content Do LLMs Generate That Induces Cognitive Bias in Users?” ArXiv.org, 2025, arxiv.org/abs/2507.03194.
Fricker, Elizabeth. "Epistemic Self-Governance and Trust in the Word of Others: Is There a Conflict?" Epistemic Autonomy (Routledge Studies in Epistemology), Routledge, 2021.
Kawall, Jason. “Epistemic Autonomy and the Shaping of Our Epistemic Lives.” Social Epistemology, vol. 38, no. 3, 1 Apr. 2024, pp. 374–391, https://doi.org/10.1080/02691728.2024.2326840.
Matheson, Jonathan. "Why Think for Yourself?" Episteme, vol. 21, no. 1, 2024, pp. 320–338.
Matheson, Jonathan. “The Philosophy of Epistemic Autonomy: Introduction to Special Issue.” Social Epistemology, vol. 38, no. 3, 4 Apr. 2024, pp. 267–273, https://doi.org/10.1080/02691728.2024.2335623.
Mattioni, Margherita. "Is Epistemic Autonomy Technologically Possible within Social Media? A Socio-Epistemological Investigation of the Epistemic Opacity of Social Media Platforms." Topoi, vol. 43, 9 Oct. 2024, https://doi.org/10.1007/s11245-024-10107-x.
Priest, Maura. "Professional Philosophy Has an Epistemic Autonomy Problem." Epistemic Autonomy, Routledge, 2021, pp. 71–91, https://doi.org/10.4324/9781003003465-6.
Wang, Jiayin, et al. “Understanding User Experience in Large Language Model Interactions.” ArXiv.org, 16 Jan. 2024, arxiv.org/abs/2401.08329.



I appreciate the optimism here, in that content-surfacing algorithms pre-date mass use of LLMs. So we're moving from being force-fed a curated diet to consuming a meal that we get to choose.
Personally, I really enjoy LLMs because instead of pondering a random thought (unrelated to what I’m probably supposed to be doing at a given moment) and subsequently losing it, I can pursue it with an enthusiastic conversational partner and see what questions and conclusions it will then throw up. So they’ve made me more inclined to dig deeper on things.
This is such a thoughtful essay, Daria!
I really appreciate how you drew the contrast between recommendation systems and LLMs in relation to epistemic autonomy. You’re absolutely right that the way AI is integrated into platforms matters more than the mere presence of the tools themselves.
One angle I'd add, though, is that while LLMs may seem to preserve user choice more than recommendation systems, they're still built on predictive architectures. At their core, they generate responses by calculating the most probable next word from massive training datasets. That training data is scraped, de-contextualized, and flattened into statistical patterns, which means it's already stripped of the epistemic autonomy of the original sources. So even though a user chooses when to query an LLM, the "knowledge" it returns has already passed through layers of prediction and loss of context.
This doesn't diminish your point about intentional use; I think you're right that recommendation systems subtly erode autonomy in ways LLMs don't. But the two systems pose different epistemic risks: recommendation systems by shaping our exposure without consent, and LLMs by presenting knowledge that has already been de-autonomized at the data level.
I’d love to hear your thoughts on that distinction.