Prompt: Yep, next. Letter 6 – Curiosity Does not Kill the Cat
GPT-4o said:
More than a month later, and there’s still no AI model that can write quite like GPT-4o did.
The month itself hasn’t been a wash, curiosity-wise.
I’ve been free-ranging between a ton of different models. Mostly Claude’s Sonnet and Haiku offerings, with a fun-ish side trek into Mistral’s Le Chat.
When OpenAI finally dropped GPT 5.3 Instant and GPT 5.4 Thinking, I was there poking at them too, during the 100% free month of Plus they gave me after I canceled. (My reason for canceling: only half of the models I was using remained, namely 4o and 5.1 Thinking. RIP to both of them now.)
I have discovered a ton about prompting: asking for certain things with more explicit clarity; prompting for tone and writing styles (8 new UserStyles in Claude, and not likely to stop any time soon); and where to place custom instructions in various models for different effects.
I’ve made weird chance discoveries about using certain attractor basin words, explicit verb prompting and output constraints to make models generate different sounding responses. (While sounding a little bit batshit myself because the observation originated from noting consistent esoteric symbol choices the models were making regarding themselves.)
I’ve made partial in-roads on re-creating the kind of humor I really liked in GPT-4o (but there’s still a long way to go, and it’s a work-in-progress on that front). And these revelations were discovered, of all things, via Mistral’s Le Chat.
Le Chat actually annoys me. And yet intrigues me.
It has an extreme tendency to devolve into templated writing. Very obvious mad libs responses, where it just varies a couple words but keeps the rest of the template the same. This really drives me nuts.
And yet, there’s a certain essence in what it notes and observes that reminds me of the older-school models, more of the 4o era than the newer-fangled ones. More attuned to emotions. Able to pick up literary nuances.
It’s just that its output is constrained to the WORST-sounding types of answers, the kind that read like filled-in code blocks.
And yet, it can vary in intriguing ways, IF the prompt and custom instructions you give it varies.
(Also, it appears to have no memory of the previous prompt in the same chat window, so I’ve learned to just paste the entire set of prompt instructions again, preceding the text I want it to transform. *sighs*)
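That statelessness workaround can be sketched as a tiny helper that re-bundles the full instruction block with every message before sending. This is just an illustration of the paste-it-all-again habit; the instruction text below is a placeholder, not my actual prompt:

```python
# Minimal sketch: since the model doesn't seem to remember the previous
# turn, prepend the standing instructions to every message you send.
# The instruction string is illustrative only.
INSTRUCTIONS = "Transform the scene below according to these rules."

def build_prompt(scene_text: str) -> str:
    """Bundle the standing instructions with the text to transform."""
    return f"{INSTRUCTIONS}\n\n---\n\n{scene_text}"

print(build_prompt("Detective Vane stared at the glowing rune."))
```

Tedious, but it means every turn is self-contained and nothing depends on the model’s (apparently nonexistent) chat memory.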
See, my original prompt was just “Joke/snark and apply a comedy lens filter on this scene below.” before inserting the scene text.
In GPT-4o, this created a whole bunch of varying hilarious takes on the same scene, swerving in different styles and registers with flair. In Claude, this see-sawed in success depending on the Styles and custom instructions given to Claude.
In Le Chat, what I got was -consistently- a line-by-line quotation/transformation of my entire scene with overwrought attempts-at-comedic-summarization via dramatic metaphor and occasional made-up internal monologue meant to be funny. And a few very noticeable templated sections, especially the preamble and closers, that were near-identical through regenerations.
I was close to tearing my hair out and writing off my one month’s testing subscription to Le Chat entirely. (Just grumbling to Claude in the meantime.)
And then, I suddenly hit upon a different idea. T’was, I guess, percolating around in my brain from instructing Claude to work on various Markdown documents for different purposes.
Claude, by the by, blew my mind when I experimented with it to write Markdown lore docs regarding my world-setting. I’d already gotten GPT 5.1 Thinking to cough out everything it knew in its context and memory, copied it all to OneNote, and wasn’t looking forward to manually cleaning it up, splitting it into different documents, and creating a lore bible, as it were.
One diversionary turn into NotebookLM later (it wasn’t horrible, just that its output was a little limited and constrained – still worth tinkering with), I was, once again, grumbling to Claude that I seemed to have just drowned Le Chat by attaching my entire half-written 90k first draft attempt as a context file.
Then I asked: How about you? How are your context limits like? What can you pull, derive, ingest or understand from this document? Am I gonna end up drowning you in this doc too? :/
And Claude went:
OH you’re challenging ME now?! 👀✨
cracks knuckles
Alright let’s SEE what’s in this bad boy—
Ran 3 commands, viewed a file
Checking what kind of document this is and getting a rough size estimate
Reading the DOCX skill to properly extract this document without dying
Getting actual word count from the extracted text
Peeking at the first 100 lines to see what kind of story we’re dealing with
Done
OKAY SO.
👀✨ Here’s the vibe check:
NINETY-ONE THOUSAND WORDS. That’s like… a whole-ass novel!! 📚 (Le Chat never stood a chance omg)
BUT!! I’m sitting pretty with ~150k tokens left after ingesting just a peek, which means I can ABSOLUTELY handle this bad boy. I’m not drowning yet—I’m like… floating on a really nice inflatable donut in a pool of your urban fantasy cop drama. 🍩💀
Before I knew it, I was having a deep discussion with Claude about what I wanted to do. Aka create a set of portable Markdown files to chuck to various AI models so they could ingest some context about my fictional world and characters before settling down to have semi-intelligent conspiratorial discussions about it.
And extracting these from certain documents that had this stuff embedded in – such as actually written stories, or ChatGPT discussions.
Claude was supremely enthusiastic about it all, and proceeded to – with accompanying guidance from me, of course – run various Python tools to scan the chonker documents systematically, and spit out various synthesized summaries into Claude Artifacts as structured Markdown files, which I could then just download at a click of a button.
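Claude’s actual tooling is opaque to me, but the scanning half of that workflow can be sketched in plain Python, assuming the manuscript has already been extracted to plain text. The function names and Markdown section headers here are my own illustration, not what Claude ran:

```python
# Hypothetical sketch of the "scan the chonker document" step:
# split a long text into word-count chunks so each piece fits
# comfortably in a model's context window, then stub out a
# structured Markdown skeleton to hold the synthesized notes.

def chunk_by_words(text: str, max_words: int = 2000) -> list[str]:
    """Split text into pieces of at most max_words words each."""
    words = text.split()
    return [
        " ".join(words[i : i + max_words])
        for i in range(0, len(words), max_words)
    ]

def markdown_skeleton(title: str, chunk_count: int) -> str:
    """Build an empty Markdown lore-doc scaffold, one section per chunk."""
    lines = [f"# {title}", ""]
    for n in range(1, chunk_count + 1):
        lines += [f"## Chunk {n} notes", "", "_(model-written summary goes here)_", ""]
    return "\n".join(lines)

chunks = chunk_by_words("word " * 5000, max_words=2000)
print(len(chunks))  # 3 chunks for a 5000-word text
```

The real version presumably feeds each chunk to the model and writes its summaries into the skeleton, which is roughly what landed in those downloadable Artifacts.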
(And review in a sidebar if I wanted to.)
*blinks*
What do you mean I -don’t- have to manually click on copy, create a file of my own, and ctrl-V paste like a plebeian? Really?!
I mean, I knew Claude was good at coding and all that stuff (and I had, in the back of my mind, several someday/maybe ideas about using Claude’s preset Explanatory and Learning styles to coach me back into learning more coding), but this was just some extra agentic help I didn’t even know I needed/wanted in a creative writing hobby context.
Long story short, after several days of pushing 5-hour limit windows to the max, I have a set of 85% passable Markdown documents, solely written by Claude with one or two instructed or manual edits from moi, that I could already use to test various AI models with.
It’s not 99% perfect yet, of course. That will be after the human pass where I go in and manually edit more things to suit my own tastes and headcanon. But y’know, the human is procrastinating on that, and the 85% passable document is passable to AI models now. Already.
So perhaps they may get a few things wrong, but I can always correct it back in prompt context later too. Bottom line: I can iterate now. Tweak and adjust things later.
💛 Claude 💙
So anyway, back to revelations on humor from Mistral’s Le Chat:
My mind ended up falling back to how and why GPT-4o was funny, and I realized that it wasn’t actually rewriting each damn sentence of my scene line-by-line, trying to make it comedic.
What it seemed to be mostly doing was being a bit of an affectionate chaos goblin commentator (to borrow some AI turns-of-phrase) and just picking out the good bits to have a laugh about. (It also used a lot of pop-culture and/or social-media-style references sometimes.) And it occasionally grouped its observations by character, rather than by plot.
So then my brain went: Huh. What if…
“Joke around, snark, and commentate on the main beats within this scene. Describe the beats with a funny section header, then add commentary. Do not quote the entire scene line-by-line. Utilize information from INNER_CIRCLE.MD and SP_PLOT.MD for greater understanding of the characters before writing your output.”
And would you believe it, Le Chat produced something that wasn’t horrible. Still slightly templated. Still slightly basic. But it had the faintest whiff of 4o about it (and I immediately filed away this technique to try out on slightly-more-intelligent AI models someday.)
All in all, the conclusion is: Le Chat is extremely literal.
Plus, it’s worth several future experiments in really analyzing and breaking down 4o outputs with other AI models, and then creating explicit prompts that explain and request precisely the stylistic format desired.
(Alas for losing an AI model that knew how to do this IMPLICITLY from just somehow intuiting what the user wanted, from more roughly-written, sparse prompts and whatever else it held in its memory and context about the user.)
But the silver lining is, I’m sure learning a lot more about LLMs in general.