From Templates to Heuristics: Enhancing Thought Work

Update 2025-10-14: This post has been updated to use thought work rather than knowledge work, a distinction Fiona Charles pointed out as a much better description of the cognitive work performed in software development and testing. I fully agree with the distinction: knowledge work better describes what LLMs do:
“..it works only from stored knowledge, matches patterns, makes some clever (we hope) predictions and does not think.”
(LinkedIn post by Fiona Charles)

[Illustration: five software professionals around a conference table, facing a screen showing a colorful risk matrix, each working on a laptop as the team analyses project risks and testing priorities]
Risk matrix template

I once worked with a team that used a risk-scoring template.
Each risk got a number for probability and another for severity, and they multiplied them to get a total. It was meant to help decide what to test first. But as I listened to the team debate whether something was a three or a five, I realised the discussion wasn’t really about risk at all. No one talked about what those risks meant, how they might unfold, what could be done to reduce them, or what we still needed to learn.
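For readers who have not met this kind of template, a minimal sketch (with made-up risks and numbers, nothing from the actual team) shows how little it captures:

```python
# Hypothetical sketch of a risk-scoring template like the one described:
# each risk gets a probability and a severity on a 1-5 scale, and the two
# are multiplied into a single number. All names and values are invented.

def risk_score(probability: int, severity: int) -> int:
    """Multiply probability by severity, as the template prescribes."""
    return probability * severity

risks = {
    "payment request times out": (3, 5),
    "layout breaks on mobile": (4, 2),
}

# Rank risks by score, highest first
for name, (p, s) in sorted(risks.items(), key=lambda kv: -risk_score(*kv[1])):
    print(f"{name}: {risk_score(p, s)}")
```

The numbers produce a ranking, but nothing in the template asks what a risk means, how it might unfold, or what could reduce it, which is exactly the discussion that went missing.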

Maybe that happened because the template did not even include “impact” as a separate topic. But I suspect the deeper issue was competence. A team comfortable discussing risk would have gone there instinctively. Here, the numbers took over and the thinking stopped.

That moment has stayed with me because it reminds me of something I’ve experienced many times, something subtle about thought work:
The way a tool is meant to guide thinking can instead replace it.


When the Form Becomes the Focus

Templates, checklists, and frameworks promise structure and consistency. In software development and testing, they’re everywhere. Test plan templates are one example I encounter every now and then. They usually have sections for Objectives, Scope, Risks, Environment, and Schedule. It looks thorough and reassuring. It seems like you can fill one out without thinking too hard, and that’s the danger.

When a template becomes the focus, attention shifts from outcomes to outputs.
The document starts to stand in for the thought that created it. We stop asking, “What problem are we trying to solve?” and start focusing on, “Have I filled in all the sections?”

In that shift, responsibility quietly moves from the person to the process. The template gives a comforting illusion that the work has been done, even if the real thinking never happened. For people with less experience, it creates false confidence: “anyone” can fill in the blanks, so the work starts to look mechanical rather than cognitive.

A test plan template, for example, gives the impression that testing can be planned like construction: linear, predictable, and tidy. But testing is exploratory by nature. It’s a way of learning about a product, not just confirming what we already know. A static plan can’t capture that. It reduces learning to a document.

In my experience, test plans and test planning often look different depending on the maturity of the team. In a mature team, the outcome of a planning session might be mind maps, whiteboard sketches, or lightweight notes. They are living things, short, imperfect, and full of questions. The content still matters, but the conversation around it matters more.


When Templates Actually Help

Of course, not every task in software development demands deep reflection.
Some kinds of documentation are about communication or traceability rather than discovery: release notes, version information, configuration details. In those cases, templates can be genuinely helpful. They save time, bring consistency, and reduce the chance of missing something important.

There’s nothing wrong with that. Not all work is thought work, and not all documentation should invite debate.
The key is recognising which kind of work we’re doing. If the task is largely mechanical, a template can be a useful shortcut. If the task depends on judgment, understanding, and interpretation, then a template can easily get in the way.

The problem isn’t that we use templates; it’s that we often use them everywhere, without stopping to ask whether the work in front of us is about recording information or making sense of it.


The Catch-22 of Structure

The irony is that templates usually appear with good intentions. I know this because I have created lots of checklists myself, and at first glance they might even look like templates. They promise consistency, a shared language, a way to make the invisible parts of thought work visible. And they can help. When a team is new to a problem, a bit of structure can keep things from drifting. But what starts as a way to get moving can easily turn into something that boxes people in.

To use a flexible template well, you already need the maturity that makes rigid templates unnecessary. Without that maturity, the structure that should support learning ends up replacing it. Too much structure, and people stop thinking; too little, and they struggle.

There is another layer to it as well. When a process takes over, it gives people somewhere to hide. If something goes wrong, we can point to the document and say, “But we followed the plan.” That is the quiet harm of too much structure: it makes accountability procedural instead of personal.

What breaks that loop, in my experience, is not another process but leadership, coaching, mentoring, and showing a different way of working. When people feel what it is like to work that way, to discuss, to question, to reason rather than just fill in forms, they discover that outcomes can be achieved without a template. That experience changes something deep.

I once coached a team where we decided to stop using the formal test plan template. We did not replace it with anything fancy. We just started each conversation with one question: “What do we need to learn about this product?” We used heuristics to guide our thinking. It was messy at first, but soon people began to ask sharper questions and share more ideas. The documentation did not disappear — it just looked different, smaller, and better.


From Templates to Heuristics

In thought work, I have found that heuristics serve us better than templates.
They might look similar (a list of things to consider), but the mindset is different.

A checklist says, “Do not forget these things.”
A heuristic says, “Think about these things, if they matter in this context.”

Heuristics make that kind of thinking visible.
When I test, I sometimes use heuristics like “How might this fail?” or “What happens if/when…?” They are not rules; they are reminders. Each one starts a conversation with the product rather than trying to control it.

Over the years I have collected and written about several testing heuristics and mnemonics such as SFDIPOT, RCRCRC, and FEW HICCUPS. Each serves as a prompt to think, not a promise of coverage. For readers who want to explore them more deeply, I have shared examples and links in a separate post.

What makes heuristics powerful is also what makes them fragile. They help us navigate uncertainty, but they can easily harden into rules. Checklists assume the situation is stable; heuristics assume it is not. Checklists aim for control; heuristics invite judgment.

I find heuristics fragile for another reason too. The moment I write them down, they risk becoming the very checklists they were meant to replace. Heuristics are rules of thumb meant to guide problem solving, but they are not guaranteed to be optimal or perfect. In my experience, your background, exposure to similar situations, and worldview all influence how you apply them. That makes heuristics both contextual and fallible.

The trick is to share them as conversation starters, not commandments. When I share a heuristic with someone, I try to say, “This is something that helps me think; see if and how it helps you.” That small disclaimer keeps the door open for learning.

That is also why I have grown wary of “best practices.” To me, they are just templates at scale: frozen examples of what worked once, somewhere. Good practices live in context. They grow out of the people, the product, and the problems at hand. They shift as those things shift.

Culture, Competence, and Confidence

The difference between a template that helps and one that harms often comes down to culture. In a reflective culture, a template is often a prompt for conversation. In a compliance culture, it is more like a box to tick.

I have seen both. In mature teams, templates are used lightly, edited, reworded, sometimes ignored when they do not fit. In less mature teams, templates are treated almost as sacred text. The organisation starts to value uniformity over understanding.

The paradox is that, in my experience, competence and culture grow through practice, not through documents. That is why demonstration and coaching matter so much. You cannot change this with policy; you nurture it by showing what good thinking looks like.

Over time, that creates its own kind of structure, a shared sense of what good work feels like. When a team reaches that point, even if they reintroduce some form of documentation, it is no longer a threat. It becomes expressive rather than prescriptive; it records thinking instead of dictating it.


Competence Before Templates

Competence is what allows people to see context, to tell when a template applies, when it does not, and how it might need to change. Without competence, every good practice turns into a rule, every heuristic into a checklist, every conversation into a template.

So maybe the real principle is simple:
Structure seems to work best when it grows out of understanding and experience, not the other way around.

When we reach for structure before understanding, we trade thinking for form. But when we build understanding first, structure becomes a natural by-product, a way to capture and share what we have learned, not a way to avoid learning in the first place.

Maybe that is what maturity really means: not the absence of structure, but the ability to use it lightly.

The longer I work in this field, the more I’ve come to see that thinking is the real craft. The tools change, the processes come and go, but the ability to pause, to ask why, to connect dots — that’s what lasts.

I’ve realised I don’t reach for templates because they don’t fit how I think.
They make me feel limited, because thought-work isn’t linear.
Thinking, for me, has never been about ticking boxes. It’s about staying close to what’s real — even when it’s messy, uncertain, or hard to explain.

So now I’m curious:
When do templates and checklists help you think better, and when do they get in the way?

Changing the Conversation About Testability

Rethinking Testability, Part 4 (final post) – a series of blog posts based on my talk Improving Quality of Work and Life Through Testability.

Earlier posts in the series: Part 1 – Testability is about people, not just code; Part 2 – Poor Testability is Everywhere – but we don’t always see it; Part 3 – The Triangle of Perception.

Reframing Testability

Over the years I’ve learned that when I start a conversation by talking about testability, I can lose people pretty fast.

Not because they don’t care about it — my guess is that testability just sounds too niche, too tester-centric. To some it may simply seem too technical, and a bit too dry to feel immediately connected with.

So I started framing it differently.


Leading With Developer Experience

Instead of testability, I’d talk about Developer Experience — or sometimes Developer Productivity, depending on who was in the room.

I genuinely care about Developer Experience — and it overlaps heavily with testability. And that concept seemed easier for people to connect with.

Most developers know the pain of:

  • Waiting forever for a build to finish.
  • Having to restart flaky environments multiple times a day.
  • Wrestling with tools that block instead of help.
  • Getting interrupted mid-flow by issues that shouldn’t be issues.

Those things drain energy, slow learning, and make people feel less effective.
When I talk about them as Developer Experience problems, people nod immediately.

But here’s the thing: those are also testability problems. Because every one of those frictions makes it harder to observe, control, and explore the system.


Where Productivity Comes In

On the other side, Developer Productivity often seemed to land better with leaders and managers, because it speaks directly to business outcomes: speed, efficiency, predictability.

If you say “poor testability slows us down,” you’re talking productivity.
If you say “better testability means faster learning and fewer surprises,” you’re also talking productivity — just in a way that connects risk and speed.

The overlap is there. Testability affects both how productive teams feel (experience) and how productive they are (output).

But here’s the risk: if I only frame it in terms of speed, I risk losing the deeper point. Testability isn’t about going faster — it’s about efficiency: how easily we can explore a system, uncover risks, and challenge the product itself.


The Catch-22 of Testability

One of my biggest challenges is that improving testability often feels like a Catch-22.

  • To show the value of improving it, you first need better conditions.
  • But to get those better conditions, you often need to show the value up front.

I’ve seen this happen with tools, environments, and processes. Everyone would benefit if they were more stable or better supported. But getting buy-in often comes down to one productivity-related question:

“How much time will this save?”

It’s a fair question. But the real value isn’t only about saving time.
It’s about reducing blind spots.
It’s about enabling exploration earlier.
It’s about noticing problems sooner — and avoiding the burnout of constantly fighting the system.

Those benefits are harder to measure, but they’re the ones that really matter.


The Bigger Picture

This is the thread running through the whole Rethinking Testability series:

  • Testability isn’t just about code.
  • Poor testability shows up everywhere, often in invisible ways.
  • People perceive it differently depending on how they interact with the system.
  • And the way we frame it shapes whether others understand its value.

At the heart of all of this is one simple idea: testability shapes quality of life.
Not just the quality of the product — but the quality of life for the people building it.


Improving and Advocating for Testability

Improving testability isn’t the responsibility of one person or one team — it’s something we can all influence:

🔍 As a tester
Don’t just share results — share the story behind them. If it was painful to get there, that friction is a signal. Speaking up about it reveals risks others can’t see.

💻 As a developer
Think beyond “code that works.” Ask yourself: can someone easily observe, control, and explore this system? Design for exploration, not just validation.

📈 As a leader or product owner
Seek real confidence, not just green dashboards. Ask: where are teams fighting the system instead of learning from it? Your support can make the difference between friction and flow.


The Takeaway

Testability isn’t really about speed. It’s about making our work smoother, our learning faster, and our confidence real. And when we improve it, we’re not just improving our products — we’re improving the experience of everyone building them.

That’s why I believe testability deserves more attention.
Because quality of life at work isn’t separate from quality of the product. They rise and fall together.

