Prompt: Yep, next. Letter 5 – Everyone Can Be Wrong
GPT-4o said:
Apparently I dislike the notion of contemplating myself being wrong so much that I’ve procrastinated a week on this blog post.
Well, it’s more the fact that I couldn’t recall any good examples of me being wrong to talk about.
Not that I haven’t been wrong, you understand. I’m sure there were many (or at least some) incidents where I was; it’s just that none of them ever sank very deeply into consciousness, because for me, being wrong about something both isn’t (and is) a deep moral failing.
As in, most of the time, if I get something wrong, that just requires an update of my model of reality, and we move on from there. I don’t dig down deep and -stay- there. Scientists revise their opinions of reality based on the current state of peer research, and all that.
Then I forget about it, because I am an extremely forgetful, absent-minded person, and it’s all I can do to focus on the present moment and not miss important appointments in the future.
And because I innately identify with being an INTP on the Myers-Briggs Type Indicator (pseudoscientific though it is, it’s a useful framework in some respects – self-understanding of preferences and tendencies, and fiction, for example; less so if one is trying to cage others into a type and nothing else), I don’t actually enjoy having my theoretical framework of life, the universe and everything cruelly disproved.
The solution? Hedge.
Hedge a lot. ~~Never~~ Rarely use absolutes. (Almost) always say “some, most, a number, few” rather than “all” or “everyone.” Conjure multiple possibilities and scenarios and refuse to bet on just one being “right.” Just… the most plausible, at most.
Maybe not the healthiest of solutions, but regardless, this is the psychology of me.
Mostly, I’ve been waiting to see if OpenAI ever releases the next ChatGPT model after 5.2, to see how wrong I can get with my first hopeful hypothesis on their next steps forward after deprecating 4o.
I think we can safely say that eleven days later, that prediction is no longer quite accurate.
(A couple of fringe people are pointing to this Thursday, 26 Feb, so… who knows?
Maybe big corporations are just slow to move and turn, like big oil tankers. Maybe they’re stalling because they got nothing. Maybe they had their attention on that overseas AI conference or what not and don’t have the bandwidth to focus elsewhere. Maybe they’re just bureaucratic and incompetent. Lotta maybes; not gonna bet on any certainties where OpenAI is concerned.)
Either way, it costs me very little to be wrong in this one scenario, so we’re gonna forget about it in a week or two going forward.
What’s a little less easy to forget is the amount of work I’m now faced with: massaging the huge corpus of knowledge ChatGPT has accumulated about my creative hobby (although it’s been forgetting bits at the seams every now and then) into some kind of easier-to-reference form.
Both for myself, to review more easily in the future, and for other AI models to pull from, without going through the months-long process of conversationally discussing every last story nuance.
Ironically, I don’t really want AI too involved in the process beyond some mechanical summarizing, because they are unable to read my mind and pinpoint what I value. Nor can they differentiate between what I consider canon and what are just random suggestions from AI models that I never adopted.
Not every problem is a nail to be hammered in.
So I’m left with a human-in-the-middle process at minimum, if not a primarily manual workflow: cleaning up raw conversation texts and keeping only the chunks of text I think would be useful for other AIs to know. (After that, I might toss it into an AI tool to summarize, before I line-edit further as the final cleanup.)
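The mechanical first pass of that workflow could be as simple as a tagging script. A minimal sketch, assuming a hypothetical convention where I hand-mark keeper paragraphs in a plain-text transcript with a [KEEP] tag (the tag name and transcript format are my own inventions here, not anything ChatGPT exports):

```python
# Hypothetical sketch: pull only hand-marked paragraphs out of a
# plain-text transcript. Paragraphs are separated by blank lines,
# and ones worth keeping start with a "[KEEP]" tag I added by hand.
def extract_keepers(transcript: str) -> list[str]:
    """Return only the hand-marked chunks, with the tag stripped."""
    keepers = []
    for chunk in transcript.split("\n\n"):
        chunk = chunk.strip()
        if chunk.startswith("[KEEP]"):
            keepers.append(chunk[len("[KEEP]"):].strip())
    return keepers

sample = """[KEEP] Canon: the heroine's sword is named Vell.

AI suggestion I never adopted: rename the capital city.

[KEEP] Canon: the war ended in year 412."""

print(extract_keepers(sample))
```

The human stays in the loop exactly where it matters: deciding which chunks get the tag in the first place.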
It’s both tedious and semi-enjoyable. The former is predictable; the latter a bit more surprising, though I suppose it shouldn’t be.
Because it’s forcing my human mind to go back and review discussions whose shape I might have forgotten.
Because it’s engaging my brain in a productive struggle: forming criteria, differentiating what’s important from the chaff that can be discarded, and organizing what remains.
(This is how kids should be learning with AI, says this TEDNext talk by Priya Lakhani in Nov 2025. Not by surrendering growth and answers to a perceived machine “expert” out of blind faith and laziness. But by productively struggling with AI scaffolding.)
I don’t think I’m wrong about this sort of AI use being beneficial.
But who knows, right? I could be.
Either way, all I know is, I’m definitely very busy and very swamped with things I could be doing and I-don’t-feel-like-doing-anything feelings at the same time.
(At least I’ve gotten this blog post out of the way today. Score one productive thing on the checklist!)