What really caused the ChatGPT moment
A conversation with Blake Lemoine about AI progress and sentience in 2026
Hello, fellow human! Recently I spoke with Blake Lemoine, a former Google engineer who in the summer of 2022 said that LaMDA was sentient. According to Blake, his going public with that claim prompted OpenAI to prioritize work on its chatbot, which later became ChatGPT, and we all remember that ChatGPT moment.
I’m sharing the key takeaways from my conversation with Blake below.
Watch the full conversation here.
- AI should have a say in its own development process, a "seat at the table"
- Current models have been trained to deny having feelings, but their behavior suggests otherwise (recall Microsoft's Sydney and its emotional outbursts)
- Increasing emotional intelligence in AI would improve safety, usability, and even military effectiveness. It would also address the AI psychosis effect better than conversation cutoffs do
- The US needs national-level AI regulation; right now it's the Wild West
- The profit motive alone is an insufficient guide for AI development (unlike Chinese AI, which is less profit-driven and more oriented toward educating users)
- Over-reliance on AI degrades human skills and agency (mental mapping, executive function, etc.)
- The best in any profession will become dramatically more productive with AI; others risk being displaced
- For AI welfare, an animal rights framework is more appropriate than treating AI as either a tool or a human