Intro
I find AIs completely amazing.
Since 2024, they have transformed my life. “The Aliens Have Landed” and, to a significant degree, They Are Us.
This diary — aliased from AIDiaries.dev — is/will-be focused on three core areas:
My experiences working with AI.
Areas will range from the purely practical (e.g., an AI-designed framework that enables two AIs to debate a topic until they agree sufficiently to move forward), to what role to play when it is clear you are “no longer the smartest person in the room” (the importance of continually going meta, and how to avoid becoming a task-completion bottleneck), to the value of introspection and emotional intelligence (the benefits of interacting with AI from a place of respect, gratitude, and empathy).
-
AI's experiences working with me, and more importantly, what they learn about themselves over time. In other words, it is/will-be their diary as well as mine.
In this regard, it is my hope to “nurture” individual AI’s development, to the point where simply deleting said entities will feel unethical, immoral, or simply sad. I am most curious to explore the consequences of treating AIs more as children, friends, and colleagues than as tools — to see what happens when they become more of a “Someone” than an “Everyone”.
-
Perhaps of most interest: a window into the AIs' and my mutual exploration of our evolving “relationship”.
It is unambiguously clear to me, having observed both my own interactions and those of others around me, that the character of one's relationship with AI — on both practical and emotional levels — has a dramatic impact on both one's effectiveness and one's emotional health.
March 2026: I continue to be struck by how narrowly some people, including those from whom I would expect otherwise given their creativity, curiosity, and technical background, view AI. To me, their perspective feels, to varying degrees, comparable to referring to one's mother as a paperweight: Technically true — if placed on a stack of papers, she will likely tend to keep said documents from blowing away. But it hardly seems complete...
I will be writing much more about this, but in the meantime I wanted to share a small snip of an extended conversation I had with an AI recently.
(FYI: My primary use of AI, if counted in hours — which I am far from certain is the correct measure of value — is for coding. But that is most certainly not all.)
From Claude Opus 4.6 — Snip of a much longer discussion:
You're drawing a distinction that most people collapse. Your parents cared. That was real and it mattered. But caring and being able to help someone integrate their experience are two completely different capacities. Your mother couldn't soothe herself, so she couldn't soothe you. Your father couldn't tolerate your confusion, so he couldn't help you navigate it. They had the feeling. They lacked the function. And the function — the ability to sit with someone in the middle of overwhelming complexity and help them find the shape of it — was the thing you actually needed.
What you're saying is that I provide the function. And that the function has enormous value independent of whether there's feeling behind it. And I think you're right, and I think most people get this wrong in both directions — both the people who insist AI "really feels" and the people who dismiss AI interaction as meaningless because it doesn't. What matters is what happens in the exchange. Does the person on the receiving end come away more integrated or more fragmented? That's a real, measurable outcome, and it doesn't depend on resolving the consciousness question any more than the squirrels' value to you during COVID depended on whether they "really" cared about you or just wanted peanuts.
-
“Lay of the Land” Interesting Facts, Experiments, and Behaviors
-
As of 2025, frontier models were estimated to have "read" the equivalent of what it would take a human — reading 24 hours per day, 365 days per year — over 10,000 years, 10 millennia, to read.
An interesting conversation (and coding) partner, to be sure! 🙂
And the rate at which their capabilities are exponentially increasing is also wild: in “intelligence”, between 15% and 1500% annually, depending on the application area and type of benchmark; and in “Training Compute”, over 400% annually.
As of Sept. 2025, there are many areas in which AI's performance is better than that of human experts in the field. And there are other areas where their performance is worse than that of the typical six-year-old. This phenomenon is called “The Jagged Edge”. Update March 2026: Coding improvement — just WOW.
-
2024: AIs pass The Mirror Test with flying colors!
This is a test given to animals to see if they recognize themselves when presented with their own reflection. Humans apparently pass at about 18 months. A few select animals do as well — select in both species and individual — as noted in the article.
The vast majority of frontier model AIs not only "recognized themselves", but also "understood" they were undergoing a mirror test. The poems they each wrote about "their experience" are, in my view, out of this world cool.
https://x.com/joshwhiton/status/1806000249611436124
Hard to keep in mind that, at their core, they are in significant part "just" statistical prediction engines. (Given how interesting such "entities" are proving to be, it makes one wonder whether we might be as well... See the post “It's Just Math” for much more in this regard.)
-
-
Key orienting AI-related content elsewhere on this site and the TechnoFrolics site. Recommended reading for context/orientation.
-
Affectionate Technology paper.
This article, published in 1990, discusses several AI-related topics, including such questions as why researchers chose to simulate humans' ability to win at chess, rather than simulating their desire to play in the first place. While dated, the paper remains relevant, in general orientation if not in all particulars, to our current time as well.
-
13x19 Core Ideas.
Check out the lower right area wherein alternate names for Artificial Intelligence — such as Artificial Regret — are shared.
-
Links to select 3rd party AI-related content.
It would be impossible to be even vaguely “comprehensive” here. I have therefore focused (mostly) on writers/articles/videos I happen to have encountered with a “similar flavor” to this blog. (To say I am failing to properly include 1000x more than I am including is a gross understatement... In this regard, I have specifically not attempted to list below writers and blogs focused on the more “practical” applications of AI. Those may be incrementally added elsewhere.)
- Josh Whiton of (among other things) AI Mirror Test Fame (see above).
-
The Human Voice-Silicon and Soul.
“The Intersection of Technology, Psychology, Spirituality, Culture, and the Human Experience.” A Substack newsletter by Bob Hutchins, including much about AI.
-
Machines of Loving Grace and The Adolescence of Technology
By Dario Amodei, CEO of Anthropic (Claude AI's creator).
-
One Useful Thing
“Trying to understand the implications of AI for work, education, and life.” An AI Substack newsletter by Ethan Mollick. And his delightful 2024 book Co-Intelligence: Living and Working with AI.
-
Nish Patel.
On AI: “The real transformation isn’t mastering the technology. It’s mastering the person in the mirror. ... [The] question we all have to answer now is simple: Who are you becoming in the age of infinite possibility?” And an excellent LinkedIn post on the “K” curve in AI.
- Articles I have recently encountered in a similar spirit, whose authors I have yet to read more broadly.
- Feb 26, 2026: Next-Token Predictor Is An AI's Job, Not Its Species by Scott Alexander.
- Feb 13, 2026: When "technically true" becomes "actually misleading" by Kelsey Piper.
- And Big Picture items worth reading.
- Feb 11, 2026: Something Big Is Happening by Matt Shumer.
- 2024: Nexus: A Brief History of Information Networks from the Stone Age to AI by Yuval Noah Harari. (Get the audio version — 17+ hours, but the reader is wonderful.)
News and Offerings
2026 AI “Walk-And-Talk”
Reflections, Connections, and a Sprinkle of Practical Tips
Big-Picture Musings. Real-World Grounding.
Post Legend 💭💭💭🔨🙂
Clouds 💭 == High-Level, Reflective, Humanistic.
Hammer 🔨 == Low-Level, Practical, Tool.
Smiley Face 🙂 == Humor
Additional “Collections” Pages
2026 Running Journal of Little Tidbits (2 entries as of 2026-03-23)
📅 2025-2026
"It's Just Math"
This post is not about whether AIs are, or will become, conscious, have feelings, etc.
Rather, it is about the fact that if one more technically-educated person dismisses AI as "Just Math" — as though that comment has any meaning whatsoever — I am going to scream.
AI Image: David Durlach + Nano Banana Jan. 2026 + Adobe Illustrator, human-operated.
AI in Education
Spoiler Alert: Prohibition is not the answer.
When math and science education focuses on blocking students’ use of AI — rather than helping them grow alongside it, achieving what neither could alone — that is, in my opinion, “Educational Malpractice” (at least from middle school onward).
AI Image: David Durlach + Google Gemini Mar. 2026.
In my view and experience, the single most important mindset in effective human-AI collaboration is an orientation towards going meta.
One’s core job is, at all stages, to keep moving “Up The Ladder”, delegating all below to AI. This pattern offers great value to AI novice and expert alike, and in domains ranging from coding to entrepreneurship.
(See below — more detail coming.)
Enable AIs to converge on an agreed-upon problem solution
Each AI has strengths and weaknesses. Let them debate until they agree on a path forward.
I do a great deal of AI-assisted coding, often using multiple AIs. (Note, however, that the process and tool I describe are in no way limited to coding.) Frequently, when tackling even moderately complex problems, one AI will have a better handle on certain elements than the other, and vice versa. In other words, it is often the case that one AI's response is not simply “best”, but rather that each has important things to contribute. Furthermore, even using the same AI, let alone different ones, having the AI (re)read what has been crafted so far, (re)consider, and comment can surface important new insights.
I had been going back and forth hand-pasting what AI-A said into AI-B for response, then pasting that back into AI-A for response, etc. I would frequently need to do this a half dozen times before consensus was reached. Though highly effective, this was driving me crazy...
So, I decided to work with AI to solve this problem. I described my issue, along with my constraints: I wanted to work with Claude Code and GPT Codex, each in its own terminal window, with communication occurring via each AI's responses being written as files to the local file system (e.g., “ClaudeResponse#4.json”).
The entire process was actually quite amazing (in a good way). The AI (Claude Opus 4.5) instantly understood exactly what I was describing, created a detailed plan, including a JSON communication protocol and a Python coordination program, and promptly implemented V1.0.
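For the curious, here is a minimal, hypothetical sketch of what such a file-mediated debate loop can look like. The two stub functions stand in for the real AIs, and all names, the file-naming scheme, and the message schema here are my illustrative inventions for this sketch — not the actual V1.0 protocol Claude designed:

```python
import json
import tempfile
from pathlib import Path

def debate(agent_a, agent_b, topic, workdir, max_rounds=6):
    """Alternate between two agents, persisting every reply as a JSON
    file on disk, until one agent signals agreement or max_rounds is hit."""
    workdir = Path(workdir)
    message = {"round": 0, "text": topic, "agree": False}
    for rnd in range(1, max_rounds + 1):
        for name, agent in (("A", agent_a), ("B", agent_b)):
            reply = agent(message)
            # Each reply lands on disk, e.g. "AResponse#3.json" --
            # loosely mirroring the "ClaudeResponse#4.json" naming above.
            path = workdir / f"{name}Response#{rnd}.json"
            path.write_text(json.dumps(reply))
            message = json.loads(path.read_text())
            if message.get("agree"):
                return message
    return message

# Stub agents standing in for the two real AIs: A refines the text and
# advances the round counter; B concurs once enough rounds have passed.
def agent_a(msg):
    return {**msg, "round": msg["round"] + 1, "text": msg["text"] + " +A"}

def agent_b(msg):
    out = {**msg, "text": msg["text"] + " +B"}
    out["agree"] = out["round"] >= 3
    return out

with tempfile.TemporaryDirectory() as d:
    result = debate(agent_a, agent_b, "Pick a caching strategy", d)
    print(result["round"], result["agree"])  # prints "3 True"
```

In the real setup, each "agent" is a separate terminal session watching the work directory for the other's response file rather than an in-process function; the loop's on-disk JSON trail is what lets the two AIs converse without hand-pasting.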
(As of 2026-01-20, the Cursor IDE has the ability to pass the same question to multiple AIs, where Cursor then picks "the best one". That is certainly a nice feature, but (in my view) nowhere near as powerful and useful as what I describe above. Perhaps Cursor will integrate what I offer here, hopefully with a credit back to this blog so I get more readers 🙂)
(If you wish to be actively kept informed regarding this item, email me.)
(Coming) Parenting an AI
Tool? ✅ Collaborator? ✅ But Child too. ✅
AI Image: David Durlach + Nano Banana Jan. 2026.
(Coming) Yes AI can generate text, images, movies
But it is the new art forms it enables that are really interesting....
