LLMs Are Antithetical to Writing and Humanity
The big brains gone wild have come bearing gifts again. Should we accept them?

1.
In a 1985 essay on the Greek tragedian Euripides, classicist Bernard Knox writes that, in his plays,
[Euripides] must have intended to produce [an] unsettling effect, which disturbed his contemporaries as it disturbs us: to leave us with a sense of uncertainty, painfully conscious now, if not before, of the treacherous instability of the world in which we live, its utter unpredictability, its intractability. It might be said of him what the Corinthians in Thucydides say of the Athenians, that he was born never to live in peace with himself and to prevent the rest of mankind from doing so.
One could read those words and, quite fairly, apply the same intent to modern media, with its blatant negativity bias and incentives to keep its users engaged, enraged, afraid, dazed, and confused. That is in no small part why The Progress Network (TPN) exists: to counter that intent. Somewhat paradoxically, though, I have over the past five years come to see my role, and indeed my intent, as an associate editor with TPN to be more like that of Euripides.
As I sift through the news for hours on end each day, seeking out the buried treasures of progress in it, I often find that my peace is disturbed more by the way I see progress presented and discussed than by my exposure to the doom cycle itself. There are deep complexities and moral ambiguities and unintended consequences inherent in much of what we, as humans, reductively call progress. And it is for this reason that I don’t personally find much peace or use in the word. And while I am certainly no Greek tragedian, it has become apparent to me that my most valuable (and readily available) purpose in my workday lies in complicating the progress narratives that dominate the broader “progress community.” Because when individuals and organizations are incentivized to push one narrative over the other, even if it’s only by 1%—good versus bad, progress versus doom, and so on—then they are also incentivized to dismiss one narrative over the other, briefly acknowledging and gesturing toward it and its valid points, sure, but always—necessarily to the cause—presenting those points as secondary.
I am of course an individual, too, and as you read on you might feel that I am guilty of slipping into this same predicament. Fair enough. I never said it was easy to evade. But, to be clear, my purpose here is not to dismiss the progress narrative; it is, once again, to complicate it, to disturb the peace, to reject the notion that anything is, simply, progress or not. If I’m pushing any underlying narrative of my own here, it is that almost everything is almost always far more complicated than it seems, and that we would do well as progress-oriented individuals to poke around openly, dutifully, and passionately in fundamental questions like, what is progress?
Which brings me to generative AI. My concerns about it and criticisms of it are broad and myriad. But what fuels my psychic pain on this front, above all else, is the incipient use of large language models (LLMs) as tools for writing and engaging with the human condition and experience. I view this as damage inflicted on humanity, and I think we moderns would do well to opt out of it. If, while reading on, you find yourself wondering why a progress organization is challenging what is being bandied about among progress types as perhaps the most techno-optimistic of all tales and prophecies, I will ask you to remember Euripides. I will ask you as well to consider and accept the following propositions: Criticism and critical thought are valuable in and of themselves. There is no progress without them. LLMs are antithetical to writing and humanity, in no small part because they diminish the human capacity for things like criticism and critical thought, and they therefore also diminish the human capacity for progress.
2.
Is it just my algorithm, or are you being inundated with them, too? Those incessant Grammarly ads. There’s one in particular that does a great job of highlighting some of what I think is clearly sacrificed when using LLMs to “write.” The ad opens with a statement from a student that makes my case for me: “Grammarly helps me get the best grades possible while depleting the least amount of hours of my week.” That’s the hook. That’s what is meant to, and what I’m sure does, appeal to people—students and content producers and various others clogging the drains of the internet to meet their content quotas and drive growth. But for those of us who don’t view learning merely as something to prove on paper so that it might one day lead to one’s choice of economic servitude, the grades are not really the point, are they? No. Thinking and learning are. Effort is. Struggle is. Suffering a little bit is. Overcoming that suffering to grow is. Transcending your limitations is. Finding clarity amid the confusion inherent in your thoughts is. Writing is nothing without thinking because writing is thinking. The grades in this scenario are exactly the type of measure run amok described in Goodhart’s law, which posits that a measure ceases to be a good measure the moment it becomes the target.
“If you can’t write something in plain and clear language,” Hudson Institute senior fellow Aaron MacLean wrote last year in criticism of LLMs, then “you have not thought it, not really.” He continues:
The unforgiving accountability of the sentence on the page, which either makes sense or doesn’t, requires one to think in order to compose it. . . . Having a machine “write” for a student is like sending a robot to the gym to lift an athlete’s weights for him—and the results will be comparable.
In a counterargument to MacLean’s criticism, TPN member James Pethokoukis gives his attention not to countering the insidious consequences of introducing generative AI into the population, but rather to championing the AI-induced progress he foresees in the economy and innovation. And look, I concede that the (precarious) potential for such progress exists. The potential for great consequences would not be a problem, after all, were it not also for the potential for great progress. In the absence of the latter, we could all just walk away. I concede as well that, even when it comes to the more pernicious emergence of LLMs as general-purpose tools, there are numerous (mostly) harmless uses: as lesson-planning assistants for time-strapped teachers, as AI tutors in areas where human ones are in short supply, as tax-preparation aides for those who’d rather read the classics than bludgeon the mind with Byzantine tax forms, and so on.
Likewise, there are certainly exceptions to the rule when it comes to using assistive technologies to write and learn. For example, if you are dyslexic, and your clarity of thought and cognitive abilities are sharp when unbounded by the tyrannies of your disability—spelling, punctuation, writing rhythm, and so on—then, well, have I got the LLM for you.
The larger takeaway here, though, is that a learner’s reasons (as with any individual’s reasons) for using an LLM matter a lot. Do they need assistive technology to write, as in the case of a dyslexic student, or are they just being lazy and trying to deplete “the least amount of hours of [their] week,” as in the case of many other students? If it’s the latter, then they are shortchanging themselves, not to mention whatever communities and forums they bring their malnourished reasoning to.
Using an LLM to secure a higher score in, say, an English course also raises second-order questions, questions about, say, what the point of an English course is, if it is not, in fact, to be able to read and write clearly for oneself in English. Likewise, what is then the purpose of the English instructor?
Sisyphean efforts and exceptions to the rule aside, I remain unmoved in my position that LLMs are a mostly unwelcome intrusion into writing and humanity (I mean that as undramatically as possible, but I do mean it). This applies to ChatGPT, Claude, et al. If you’re dyslexic and just trying to communicate more clearly in writing, or you’ve got a bullshit job and you just want to get your bullshit job’s bullshit tasks out of the way so you can move on to more meaningful endeavors, or at least move past the day-to-day slog that permeates your workday and serves no real purpose other than to pay the bills, then I concede; I cannot fault you.
But if, say, you’re a “writer” and you’re using an LLM to “help you” “write” or “think” because it’s easier and takes less time and thought, then I stand my ground; I can and do fault you. If you’re selling ebooks on Amazon this way, or “writing” “your” deep thoughts and hot takes on Substack this way, or “collaborating” with LLMs so much that you start to sound exactly like them (vapid, vacant; vaguely and insufferably off), and, strangely, a growing share of the writing that gets posted online (and elsewhere) suddenly starts to sound a lot like you and them, too, then I cry foul. You are further ruining the already fairly apocalyptic internet. And personally, I would in turn prefer to exit its (now mostly AI-generated) realm altogether, and read things written prior to your and the LLMs’ existence, along with whatever current things are still being written online by other strange and faithful holdouts. And I am hopeful that this desire—still a common one, for now, I think—will help to guide our way forward. Lest we become a culture of human copies copying copies of LLM copies copying copies of human copies copying copies, ad infinitum. It wouldn’t be the end of the world, but it would be a fine way to populate it with a dull and degenerative citizenry.
3.
To put this all more simply: One of my main charges against LLMs is their fraudulent and injurious use in human cognition, human creative expression, and indeed in the whole of the humanities. These are hallowed human endeavors, in my view. Meaningful reasons for living, in that they imbue us with a deeply mystical connection to each other and our ancestors and ourselves; they imbue life with depth. LLMs, in relation, are Mephistopheles. And the thing about a Faustian bargain is that the things that we gain from it don’t amount to much after we’ve sacrificed the things that are sacred.
This may all sound incredibly pessimistic and cynical, and in many ways it is, but I honestly don’t believe it’s any less optimistic than pointing to problems related to, say, climate change or women’s rights or authoritarian threats or whatever and saying, we can fix this, but first we have to acknowledge that it’s a problem and be willing to criticize it and stop assuming that any possible gains we can imagine will necessarily be net gains.
When all is said and done, I retain a general, long-term optimism about humanity maintaining its innate revulsion to these dehumanizing tools and environments that we seem unable to stop constructing for ourselves. For now, the odds appear stacked against us, despite that popular revulsion, but there’s still hope as long as there’s still resistance. Why anyone would want to write, and nearly 40,000 people would willingly read, an AI-generated Dostoevsky blog is beyond me, but I also still struggle to articulate why it is that I refuse to even read a sentence of it to see how good or bad it is. I will try anyway, though, because that’s what writing’s for: I would hate it even if it were excellent, because there’s nothing and no one to connect with in it, and there’s nothing meaningful about writing that emerges from nothing and no one. The idea that I might unknowingly connect with writing that in fact came from such an abyss is, well, let’s just say it presents as a feeling that I’m not fond of, an innate revulsion, if you will. And long may it last.
It is, of course, incumbent on me in my position with The Progress Network to shine a light on a constructive path forward other than innate revulsion. But I will admit that I am reluctant to do so. As I said at the beginning, criticism is valuable in and of itself. It doesn’t require a positive or comforting denouement, nor does it mandate a practical directive, to be constructive. It is constructive simply because it is, because it exists. And that is true even if it only ever exists at the individual level, just as a discontentment with one’s thirst is enough to direct one’s efforts toward securing some water.
But I will say this: Ten years ago, in 2016, I attended a show by the spoken-word artist, writer, and former Black Flag frontman Henry Rollins, and he said something that night that I still think about all the time. He said, in effect, that he’d stopped directing his energy toward the amorphous concept of a collective “we,” and that he’d instead begun directing it toward the agency, power, and potential inherent in the many individual “you”s. It’s semantics, at the end of the day. Even so, it resonated with me then and still does now. It’s semantics, but you know what he means. It’s semantics, but it makes perfect sense. We are not all in this together. We are just all in this. And some of us are together in some of it some of the time.
That may sound harsh, but it is at least an actionable position to take. Perhaps that’s why it still rings small-t true and invigorating to me, ten years and several perceived eons later, in these strange days wherein data centers grow in the desert and drink our milkshakes.
What I’m getting at is this: If you want a constructive path forward, with the emergence of LLMs into writing and humanity or anything else, then go find one. That is my proposed resolution. Find a constructive path forward that works for you. I don’t mean find an entirely self-interested one, I mean find one that serves a greater good and that is accessible to you. Once you find it, start walking on it. If it starts to feel like it’s the wrong one, then go looking for the right one again, or at least for a better one. Just don’t think that anyone is going to bring it to you. No one is because no one can, including me.
If you are an instructor of any affected discipline, then make the case for (re)acquainting your students with pen and paper and orality. If you are a student with dyslexia (and not merely “a student with dyslexia”), then make the case for your use of assistive technologies or alternatives to writing. If you are a student without a learning disability, then remember that writing’s real value lies in thinking and struggling and learning, and not in getting better grades or saving time. If you are a writer and thinker, then write and think without LLMs and argue persuasively for others to join you. If you are a reader or a publisher, then deny all known LLM-“writers” your attention and argue persuasively for others to join you. If you are engaged in the pursuit of human progress, then disrupt the peace in the AI progress narrative and argue persuasively for others to join you.
It doesn’t matter if they don’t. It matters that you did. This is one of those rare things in life that I don’t actually think is all that complicated. If you want people to behave a certain way, or you want certain things to get done a certain way, then be one of those people, do one of those things. You don’t need permission or external acceptance or a movement. A movement will happen on its own if you simply act and exist according to your own moral principles and convictions. If they are as sound as you believe them to be, then others will join you, and if others join you on a large enough scale, then maybe things will change in your favor, or stop changing in ways you disapprove of. If not, at least you did what you could do, and what precisely only you could do.
We do not have to accept every deal that is offered to us. We do not.