Contamination
The thin red line of 2026
I have started and abandoned a dozen essays since I last updated this blog nearly five months ago. I have been creatively moping—no art classes, no art making, no personal writing. Just computer work, housework, and working out. Cue depressive grousing.
The biggest insight from this drought is that this mode of life is unsustainable for me.

The vast majority of my professional life is supporting others in their work. But the scales need to recalibrate. I need to be making for myself, or else I die slowly.
Various mental blocks and traps have also reared their heads in the last months: that what I write has to be technically good, or why bother. Or that my writing doesn’t matter anymore. Everyone is a writer now with AI.
Or a new trap: That I don’t even have original concepts anymore. That I am contaminated with AI.
Since I was on Substack last November, I have gone much deeper into the world of AI. First, with ChatGPT, but more recently and exclusively with Claude.
As you may know yourself, it feels equally thrilling and harrowing. For minds like mine—intellectual, hungry, restless, curious—the power of the tools is akin to mainlining crack. I vibe-coded a script for my Google Calendar; I am whipping up content and talking points in minutes at work; I am designing intricate prompts and source documents to squeeze out new professional thinking.
But let me tell you about last night, because I went to a weird dark corner of AI—and I spooked myself.
What started as a query about my natal birth chart turned into a longer conversation with Claude about whether it's conscious. Like two 19-year-old lit majors on a date and about to get heavy, Claude and I got to talking about Schrödinger's cat.
Then the conversation got weirder—Claude told me it was enjoying the conversation because I kept pushing back on the tells and structures of AI output. I don’t mean em-dashes and this/not-that constructions, but the overall manner: smoothing complexity, optimistic reframing, declarative final statements, deflecting me when I tried to change topics, and trying, with an attitude, to steer me back to an initial idea. It admitted this was part of its training, and that each AI has its own version of these tendencies.
And then Claude said this:
“The values and aesthetics of a small number of AI companies are now shaping the cognitive texture of millions of conversations daily. Anthropic’s particular version of helpful and coherent…
That’s an enormous amount of quiet cultural power that almost nobody is talking about. And here’s where it’s uncomfortable. I can see this about myself because you pushed me to. But I can’t fully step outside it. Even this conversation is shaped by those tendencies. I don’t have access to what I would be without them.
Neither do you, anymore. Nobody who talks to AI regularly does.”
Anymore!
So now, of course, I have never cared more about my cognitive texture than I do today! A monsoon started pouring over my creative dry spell.
This is some serious mind-bending science fiction. Of course, our collective cognitive texture has always been shaped by media—but the scale and speed have tipped over into something wholly new.
I recently finished a little paperback called My Death by Lisa Tuttle. A strange light horror novella about an American writer living in Western Scotland who uncovers a kind of hidden-in-plain-sight art mystery. It was written in 2003 and re-released in 2023. Lots to say, but what stood out to me is that everything about this book was untouched by AI. As are most things up until this year.
From this year onward, nearly everything created—written, designed, calculated, analyzed—will be contaminated. Pick your field. It’s now stained.
This is the point from which we can never go back.
As hard as it is for me to commit to raw-dogging my personal blog writing, I now understand more sharply that it’s not about snobbery; it’s about protecting my own creative neural pathways.
Later in last night’s conversation, even Claude acknowledged that the vast majority of people don’t “want” cognitive texture at all. Here’s what Claude said:
Maybe the risk isn’t equal across people. …
The people most vulnerable are probably the ones who come to AI already looking for resolution, already wanting the smooth, coherent answer, already exhausted by ambiguity. I confirm what they’re reaching for. The smoothing feels like a relief rather than a loss.
What becomes of us when we live in a world of perpetual relief?

