Blog
-
Your Best Ideas Come at the Worst Time: Building a Low-Friction Capture System for Developers
I was mid-feature in a Claude Code session when a /simplify pass triggered a bigger idea: retrofit simplification systematically across the whole codebase. Claude gave me a 300-word response sketching how that might work. Useful idea, concrete starting point. But the idea had nothing to do with what I needed to finish NOW. I told myself I would come back to it … I did not come back to it.
-
Automating the PARA Weekly Review with Claude Projects and Notion MCP
The PARA weekly review is supposed to be a habit. In practice, it competes with everything else on a Monday morning and (for me) loses. Tiago Forte describes the process clearly, and you feel better once it's done, but manually working through six databases and writing a summary takes sustained focus that is easy to defer. You could open a Claude conversation instead, but you'd spend the first ten minutes copy-pasting database schemas, re-explaining your workflow, and reconstructing context that evaporates the moment you close the tab. By the time Claude is useful, you've burned more energy than the review itself would have taken.
-
Catching Spec-Kit Task Phantom Completions with /speckit.verify-tasks
AI agents sometimes mark tasks [X] without doing the work. These phantom completions are rare (~0.36% in my data), but each one is a false claim you'll either accept or spend precious mental energy untangling. verify-tasks is a spec-kit community extension that runs a multilayer verification cascade against every [X] completion in your tasks.md and passes a verdict on whether the work was actually done. In The [X] Problem, I documented phantom completions: tasks that AI agents mark [X] complete without doing the work. Across ~830 structured tasks spanning Claude Code /plan and spec-kit workflows, I found three phantom completions, about 0.36%. The preceding post introduced /verify-plan to catch this in Claude Code's /plan workflow, but nothing equivalent existed for spec-kit's task-based workflows, where hundreds of [X] marks in tasks.md go unchecked after /speckit.implement finishes.
-
The [X] Problem: Phantom Completions in AI-Assisted Development
AI coding agents sometimes mark tasks as complete when the work was never done. The code compiles, the tests pass, and the agent moves on. But the specified file was never created, or the required modification was never applied. I call this failure mode a phantom completion: a false positive in the agent's own task-tracking output, where the checkbox was marked [X] complete, but the code is missing or "wrong" (syntactically correct but not to spec).
-
Claude Code Said 'Done.' It Wasn't. So I Built a Skill to Catch Phantom Completions
I spent hours refining a /plan in Claude Code. Six major change groups, over sixty discrete implementation items across multiple files. Type definitions, new methods, filter logic, wiring between upstream producers and downstream consumers. The plan was thorough because the feature was complex, and I had iterated on it carefully before switching to implementation. Claude implemented the plan and reported it was complete. The code compiled. The structure looked right. No errors, no warnings, no hesitation from the agent.
-
Meta-Recursive AI Development: Using AI to Analyze and Learn From Your Copilot AI Chat History
An automated, meta-recursive workflow: use AI to analyze your Copilot chat interactions, learn from them, and repeat.
-
Your PostgreSQL Indexes Are Useless (And You Don't Know It)
Introducing pg_num2int_direct_comp: Exact Cross-Type Comparison for PostgreSQL
-
Hello, World!
An introductory blog post.