What’s in a Title? Nothing. And Everything.

This article was sparked by a talk I gave at Lean Agile Scotland 2020. Revisiting those ideas has allowed me to reflect on how my thinking about job titles has evolved through experience. All pictures in this post are from that presentation.

I’ve never been very attached to job titles. Over the course of my career, there have been many moments that made me pause — conversations, decisions, small organizational mechanics that revealed something larger underneath. Again and again, I found myself reflecting on how a single word attached to a role could shape expectations, influence credibility, and often determine compensation.

When we meet someone for the first time, the script is almost automatic: name, then title.

Two people greeting each other

It happens so effortlessly that we rarely question it. Yet job titles don’t just describe what we do; they shape how others see us, what they expect from us, and sometimes even what we’re paid. They also shape how we see ourselves. Our own expectations and those of others rarely align completely.

In my experience, titles promise clarity. I have fallen into that trap myself, even deliberately choosing titles that created ambiguity and sparked curiosity. Yet job titles rarely capture the full substance of someone’s contribution. Instead, they often act as shortcuts that invite interpretation — and with it, assumptions. Sometimes those assumptions help; other times, they mislead.

If I say I’m a project manager, you probably already have a picture in your head of what I do. But that picture is yours, shaped by your experiences, values, and assumptions.

I’ve seen this play out more than once — two people holding the same title: Scrum Master. One operated as a facilitator and coach, enabling collaboration and continuous improvement. The other functioned more like a traditional project lead, directing work and controlling delivery. Same title, but shaped by different experiences, expectations, and organizational contexts.

People sharing the same title Scrum Master but the setup looks different

When a Title Becomes Identity

Because many of us spend a large portion of our lives at work, it is only natural that our roles begin to intertwine with our sense of self. Over time, a title can stop feeling like a description of what we do and start feeling like a part of who we are. That is why, I have noticed, changes to titles during reorganizations, promotions, or transformations can land more heavily than leaders anticipate. It isn’t just a role being adjusted; it is a story about identity being rewritten.

There was a time when many companies grouped everyone working in software development under a single title: Software Engineer. Testers were given the same label regardless of their specific expertise. While the intention may have been alignment, it often led to confusion around expectations, and even a slight identity crisis, including for me. My skills and experience as a tester were something I was proud of. Suddenly, that distinct contribution seemed less visible or valuable.

I also experienced the positive side of shared identity when managing a group of software testers. Although they were distributed across different development teams, we built a strong professional community grounded in trust, learning, and mutual support. We reviewed each other’s work, ran workshops, attended conferences, and invested deeply in our craft.

“We are testers” became more than a contractual label; it became a shared identity. Looking back, I don’t believe the pride necessarily came from the title itself, but from the culture and meaning we built around it. In my experience, identity grows from belonging, when values are shared and experienced as genuine, not from wording alone.

Titles as Organizational Infrastructure

Another dimension of titles became clear to me over time: titles are not just descriptive; they are a tool for organizations to create structure. They allow companies to group roles into salary bands, plan ahead, and maintain a sense of fairness as the organization grows. From a leadership perspective, this structure is both rational and necessary. Without it, alignment becomes harder to sustain.

But structure also has side effects. A title doesn’t just describe your role; it quietly defines the boundaries around it: how far compensation is expected to grow, which opportunities appear appropriate, and sometimes how contribution is evaluated.

I was reminded of this when leading a group of software testers whose compensation lagged behind comparable engineering roles. Their formal title was Software Tester, and the salary band attached to it reflected a hierarchy many organizations historically maintained.

So I did an experiment and changed their title to Software Test Engineer. Nothing about their daily work changed. Their competence did not suddenly increase. Their responsibilities remained the same. But the new title aligned them with a different compensation structure, one that made it possible to adjust salaries accordingly.

That experience revealed something fundamental to me:

Organizations use titles to create order. People often experience them more personally, sometimes as part of their professional identity. Depending on the context, that structure can be either supportive or limiting.

Invisible Responsibility

My mother who spent her career within dental care

My mother helped me see this from an entirely different perspective, outside the software domain. She graduated as a dental nurse in the early 1960s and remained in dental care for more than forty years. During a visit long ago, I asked her about a role she once held — the one where she was no longer assisting dentists. She brushed it off as nothing special, describing it simply as “mostly administration.”

But as our conversation unfolded, a different picture emerged. She managed staffing and inventory. Handled vendors and suppliers. Led hiring conversations. Resolved interpersonal conflicts. Introduced daily staff meetings. Navigated organizational change when the clinic transitioned from public to private care. Implemented competence-based salaries despite resistance. Ensured ongoing education for staff. Restructured teams around shared goals.

Much of what she described sounded remarkably like people management. So I asked what title she had held. Dental Nurse.

At the time, becoming a clinic manager required being a dentist. No amount of demonstrated leadership could override that structural boundary. Her title neither reflected her responsibilities nor her impact, and because salary followed title, it did not reflect her contribution either.

The title reflected the formal structures of the time, even if they did not fully capture the scope of the work she was doing.

It made me wonder how many people quietly carry responsibilities their titles never acknowledge. While much has changed since then, similar dynamics can still be found today. In my experience, this is not uncommon within software development either, and a new title does not necessarily translate into better compensation.

When Titles Grant, or Withhold, Visibility

Titles influence more than compensation; they also signal status. In some cultures, and for some individuals, the status attached to a title can carry as much weight as salary, sometimes more. I have seen organizations use titles as a form of compensation when financial adjustments were not possible. While it may address an immediate constraint, it often creates new ambiguity. A title raises expectations, and when those expectations are not matched by responsibility or authority, misalignment follows.

They can also shape who is seen. Early in my career, I attended a customer meeting for a large telecom project. As introductions began, business cards were exchanged around the room. When the customers reached me, they walked straight past without acknowledgment. At the time, I was both young and the only woman in the room.

I have often wondered what informed that moment — cultural norms, assumptions about seniority, expectations attached to titles, or factors I may never fully see.

It was only when a colleague introduced me as the Test Manager that the dynamic shifted. Suddenly I had their full attention. That moment has stayed with me because of what it revealed.

A title can change how quickly others recognize your authority, your relevance, even your presence.

The Expectations Trap

Later in my career, I experienced another side of titles, how they can prescribe behavior. During a consulting assignment, my profile did not fit neatly into the organization’s framework agreement. To align with their structure, I was contracted under the title Test Architect. On paper, it solved a problem. In practice, it created several new ones.

Within that organization, test architects were expected to act as gatekeepers, separating testing from development and approving releases. That model conflicted with the way of building software I was there to support, one focused on small, frequent improvements and shared ownership of quality. The title didn’t just misrepresent my work; it pulled me toward activities and expectations that were fundamentally misaligned with my values.

Test Architects working in different paradigms within the same organisation

Yet understanding how titles function inside systems can also create leverage. At another point, when I needed experienced practitioners who fell outside the framework, I introduced a new title that allowed the organization to justify the hire within its existing structure.

Experiences like these have influenced how I view titles — not as fixed definitions, but as a tool that can sometimes help us work more effectively within the structures that exist.

Looking Beyond the Label

Over time, these experiences have reshaped how I see the role titles play. I am not suggesting we abandon them. Organizations need structure to function, and titles can provide clarity, direction, and sometimes a sense of pride and belonging. Yet they rarely capture the full scope of a person’s contribution. A title’s meaning is never fixed; it is filtered through experience, worldview, and bias.

This realization has gradually changed how I approach titles. I find myself more inclined to ask what sits behind them, what a person actually spends their time on, what they care about, and where their strengths lie. Just as often, it has challenged me to describe my own work with greater nuance, beyond the shorthand of a label.

Perhaps what matters most is not the title itself, but our willingness to look beyond it, to remain curious about the person behind the label and the work that may not be immediately visible. Because more often than we realize, people carry responsibilities, strengths, and influence that no title fully reflects.

You might be surprised by what you discover.

From Templates to Heuristics: Enhancing Thought Work

Update 2025-10-14: This post has been updated to say thought work rather than knowledge work, a distinction pointed out by Fiona Charles as a much better description of the cognitive work performed in software development and testing. I fully agree with the distinction between knowledge work and thought work, where knowledge work rather describes what LLMs do:
“..it works only from stored knowledge, matches patterns, makes some clever (we hope) predictions and does not think.” (LinkedIn post by Fiona Charles)

Anime-style illustration of five software professionals sitting around a conference table, all facing a large screen displaying a colorful risk matrix grid. Each person is working on a laptop and focused on the discussion, suggesting a team analyzing project risks or testing priorities.
Risk matrix template

I once worked with a team that used a risk-scoring template.
Each risk got a number for probability and another for severity, and they multiplied them to get a total. It was meant to help decide what to test first. But as I listened to the team debate whether something was a three or a five, I realised the discussion wasn’t really about risk at all. No one talked about what those risks meant, how they might unfold, what could be done to reduce them, or what we still needed to learn.
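The mechanics of that template can be sketched in a few lines. This is a minimal illustration of a probability-times-severity score, not the team’s actual template; the 1–5 scales and the example risks are my assumptions.

```python
# A minimal sketch of a probability-times-severity risk template.
# The 1-5 scales and the example risks are illustrative assumptions.

def risk_score(probability: int, severity: int) -> int:
    """Combine two 1-5 ratings into a single score, as the template did."""
    for rating in (probability, severity):
        if not 1 <= rating <= 5:
            raise ValueError("ratings must be between 1 and 5")
    return probability * severity

risks = {
    "payment times out under load": risk_score(4, 5),  # 20
    "typo on the help page": risk_score(3, 1),         # 3
}

# The template's promise: test the highest score first.
test_order = sorted(risks, key=risks.get, reverse=True)
```

Notice what the arithmetic leaves out: nothing in the score says what a risk means, how it might unfold, or what could reduce it, which is exactly where the team’s conversation stopped.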

Maybe that happened because the template did not even include “impact” as a separate topic. But I suspect the deeper issue was competence. A team comfortable discussing risk would have gone there instinctively. Here, the numbers took over and the thinking stopped.

That moment has stayed with me because it reminds me of something I’ve experienced many times, something subtle about thought work:
The way a tool is meant to guide thinking can instead replace it.


When the Form Becomes the Focus

Templates, checklists, and frameworks promise structure and consistency. In software development and testing, they’re everywhere. Test plan templates are one example I encounter every now and then. They usually have sections for Objectives, Scope, Risks, Environment, and Schedule. It looks thorough and reassuring. It seems like you can fill one out without thinking too hard, and that’s the danger.

When a template becomes the focus, attention shifts from outcomes to outputs.
The document starts to stand in for the thought that created it. We stop asking, “What problem are we trying to solve?” and start focusing on, “Have I filled in all the sections?”

In that shift, responsibility quietly moves from the person to the process. The template gives a comforting illusion that the work has been done, even if the real thinking never happened. For people with less experience, it creates false confidence: “anyone” can fill in the blanks, so the work starts to look mechanical rather than cognitive.

A test plan template, for example, gives the impression that testing can be planned like construction: linear, predictable, and tidy. But testing is exploratory by nature. It’s a way of learning about a product, not just confirming what we already know. A static plan can’t capture that. It reduces learning to a document.

In my experience, test plans and test planning often look different depending on the maturity of the team. In a mature team, the outcome of a planning session might be mind maps, whiteboard sketches, or lightweight notes. They are living things: short, imperfect, and full of questions. The content still matters, but the conversation around it matters more.


When Templates Actually Help

Of course, not every task in software development demands deep reflection.
Some kinds of documentation are about communication or traceability rather than discovery: release notes, version information, configuration details. In those cases, templates can be genuinely helpful. They save time, bring consistency, and reduce the chance of missing something important.

There’s nothing wrong with that. Not all work is thought work, and not all documentation should invite debate.
The key is recognising which kind of work we’re doing. If the task is largely mechanical, a template can be a useful shortcut. If the task depends on judgment, understanding, and interpretation, then a template can easily get in the way.

The problem isn’t that we use templates; it’s that we often use them everywhere, without stopping to ask whether the work in front of us is about recording information or making sense of it.


The Catch-22 of Structure

The irony is that templates usually appear with good intentions. I know this because I have created lots of checklists myself, and at first glance they might even look like templates. They promise consistency, a shared language, a way to make the invisible parts of thought work visible. And they can help. When a team is new to a problem, a bit of structure can keep things from drifting. But what starts as a way to get moving can easily turn into something that boxes people in.

To use a flexible template well, you already need the maturity that makes rigid templates unnecessary. Without that maturity, the structure that should support learning ends up replacing it. Too much structure, and people stop thinking; too little, and they struggle.

There is another layer to it as well. When a process takes over, it gives people somewhere to hide. If something goes wrong, we can point to the document and say, “But we followed the plan.” That is the quiet harm of too much structure: it makes accountability procedural instead of personal.

What breaks that loop, in my experience, is not another process but leadership, coaching, mentoring, and showing a different way of working. When people feel what it is like to work that way, to discuss, to question, to reason rather than just fill in forms, they discover that outcomes can be achieved without a template. That experience changes something deep.

I once coached a team where we decided to stop using the formal test plan template. We did not replace it with anything fancy. We just started each conversation with one question: “What do we need to learn about this product?” We used heuristics to guide our thinking. It was messy at first, but soon people began to ask sharper questions and share more ideas. The documentation did not disappear — it just looked different, smaller, and better.


From Templates to Heuristics

In thought work, I have found that heuristics serve us better than templates.
They might look similar, a list of things to consider, but the mindset is different.

A checklist says, “Do not forget these things.”
A heuristic says, “Think about these things, if they matter in this context.”

Heuristics make that kind of thinking visible.
When I test, I sometimes use heuristics like “How might this fail?” or “What happens if/when…?” They are not rules; they are reminders. Each one starts a conversation with the product rather than trying to control it.

Over the years I have collected and written about several testing heuristics and mnemonics such as SFDIPOT, RCRCRC, and FEW HICCUPS. Each serves as a prompt to think, not a promise of coverage. For readers who want to explore them more deeply, I have shared examples and links in a separate post.

What makes heuristics powerful is also what makes them fragile. They help us navigate uncertainty, but they can easily harden into rules. Checklists assume the situation is stable; heuristics assume it is not. Checklists aim for control; heuristics invite judgment.

I find heuristics fragile for another reason too. The moment I write them down, they risk becoming the very checklists they were meant to replace. Heuristics are rules of thumb meant to guide problem solving, but they are not guaranteed to be optimal or perfect. In my experience, your background, exposure to similar situations, and worldview all influence how you apply them. That makes heuristics both contextual and fallible.

The trick is to share them as conversation starters, not commandments. When I share a heuristic with someone, I try to say, “This is something that helps me think; see if and how it helps you.” That small disclaimer keeps the door open for learning.

That is also why I have grown wary of “best practices.” To me, they are just templates at scale: frozen examples of what worked once, somewhere. Good practices live in context. They grow out of the people, the product, and the problems at hand. They shift as those things shift.

Culture, Competence, and Confidence

The difference between a template that helps and one that harms often comes down to culture. In a reflective culture, a template is often a prompt for conversation. In a compliance culture, it is more like a box to tick.

I have seen both. In mature teams, templates are used lightly, edited, reworded, sometimes ignored when they do not fit. In less mature teams, templates are treated almost as sacred text. The organisation starts to value uniformity over understanding.

The paradox is that, in my experience, competence and culture grow through practice, not through documents. That is why demonstration and coaching matter so much. You cannot change this with policy; you nurture it by showing what good thinking looks like.

Over time, that creates its own kind of structure, a shared sense of what good work feels like. When a team reaches that point, even if they reintroduce some form of documentation, it is no longer a threat. It becomes expressive rather than prescriptive; it records thinking instead of dictating it.


Competence Before Templates

Competence is what allows people to see context, to tell when a template applies, when it does not, and how it might need to change. Without competence, every good practice turns into a rule, every heuristic into a checklist, every conversation into a template.

So maybe the real principle is simple:
Structure seems to work best when it grows out of understanding and experience, not the other way around.

When we reach for structure before understanding, we trade thinking for form. But when we build understanding first, structure becomes a natural by-product, a way to capture and share what we have learned, not a way to avoid learning in the first place.

Maybe that is what maturity really means: not the absence of structure, but the ability to use it lightly.

The longer I work in this field, the more I’ve come to see that thinking is the real craft. The tools change, the processes come and go, but the ability to pause, to ask why, to connect dots — that’s what lasts.

I’ve realised I don’t reach for templates because they don’t fit how I think.
They make me feel limited, because thought-work isn’t linear.
Thinking, for me, has never been about ticking boxes. It’s about staying close to what’s real — even when it’s messy, uncertain, or hard to explain.

So now I’m curious:
When do templates and checklists help you think better, and when do they get in the way?

Learning Faster: Deadlifts, Software Testing and Feedback Loops

Reflections emerged from learning to deadlift

Many years ago, I decided I wanted to get really good at deadlifting. I can’t quite remember why, but at some point I thought: women who lift heavy are pretty badass. And I wanted to be badass too.

At first, I thought the deadlift would be simple. You just pick up a barbell from the floor, right? But like many things that look simple on the outside, the deeper I went, the more complex it became. Hip hinge, grip, bracing, bar path, leverages — all of it mattered. And because I tend to get nerdy when I learn something, I didn’t just practice in the gym. I was simultaneously watching endless tutorials, reading articles and forum threads, and even rehearsing the hip hinge and the feeling of a proper lift without a barbell. Yes, you would find me pretending to deadlift everywhere: at work, at home, in the grocery shop.

The more I dug in, the more I realized how much my progress depended on the feedback I was getting. Sometimes it came instantly, sometimes much later — but the faster and more diverse the feedback, the quicker I learned. I started to see parallels with my profession: it reminded me of the feedback loops in software development and testing.


Reflection 1: Not All Feedback Is Useful

One of the first “feedback tools” I tried in the gym was the mirror. It gave me an instant reflection of my movement, which sounded useful in theory. In practice, though, it wasn’t reliable at all. To check myself, I had to turn my head or shift my focus — and that immediately changed my form. The feedback was there, but the very act of observing interfered with the movement.

Software has its own “mirrors”. Sometimes we interact with a system and it looks fine — the page loads, the button clicks, the response comes back — but that doesn’t mean it’s really working the way we expect.

Feedback through mirror

The feedback can be shallow, or even misleading. Other times we add log statements or quick checks that give us a sense of what’s happening, but only from a narrow angle. Just like the mirror in the gym, these signals can create an illusion of confidence while hiding what’s really going on. The real value comes when we go deeper and investigate beyond what’s immediately visible.
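That “mirror” effect can be sketched as the difference between a shallow check and a deeper one. The response shape and the checks below are invented for illustration, not taken from any particular tool.

```python
# Invented example: a shallow check vs. a deeper one.

def looks_fine(response: dict) -> bool:
    # The "mirror": the page loads, so everything seems okay.
    return response["status"] == 200

def really_works(response: dict) -> bool:
    # Looking deeper: also inspect what actually came back.
    return response["status"] == 200 and "error" not in response["body"].lower()

# A response that passes the shallow check but hides a problem.
response = {"status": 200, "body": "<h1>Internal error</h1>"}

shallow = looks_fine(response)   # the illusion of confidence
deeper = really_works(response)  # the deeper look catches the problem
```

The shallow check passes while the deeper one fails, which is the gym mirror all over again: a signal that exists, but from too narrow an angle.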


Reflection 2: Fast Feedback Accelerates Learning — Especially with Multiple Inputs

Feedback from recording

Recording myself in training sessions became a turning point, even if it felt really awkward at first. With video, I could almost immediately see what had happened and adjust in the very next set. That kind of instant loop accelerated my learning curve enormously.

But the video wasn’t the only input. Sometimes I could feel something was off — maybe my balance shifted, or the bar drifted away from me. That sensation alone didn’t always tell me why it happened, but the video often did. And the best feedback of all? A coach standing right beside me, shouting cues in the middle of the lift — “brace more!” or “push the floor away!” That was immediate, specific, and actionable.

Testing is similar. We learn fastest when feedback is both fast and comes from multiple angles:

  • The system itself giving you signals (logs, responses, performance “feel”).
  • Tools that capture and replay what happened (recordings, traces, automated checks).
  • A colleague or peer review pointing out what you might have missed.
  • Pairing with a colleague, which gives a richer exchange of ideas and feedback on your own thoughts.

One perspective rarely tells the full story. It’s the mix of inputs that accelerates learning.
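As a toy illustration of mixing inputs, here is one action checked through two feedback channels at once: the direct response and a captured log trace. The `place_order` function and its return shape are hypothetical stand-ins.

```python
# Hypothetical sketch: checking one action through two feedback channels.

def place_order(order_id: int):
    # Stand-in for a real system call; returns the direct response
    # plus the log lines the system produced while handling it.
    response = {"status": "ok", "order_id": order_id}
    log_lines = [f"order {order_id} validated", f"order {order_id} persisted"]
    return response, log_lines

response, logs = place_order(42)

# Channel 1: the direct response says the call succeeded.
channel_one_ok = response["status"] == "ok"

# Channel 2: the logs confirm the side effect actually happened.
channel_two_ok = any("persisted" in line for line in logs)
```

Either channel alone can mislead; together they tell a fuller story, just like combining felt sensation with video and a coach’s cue.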


Reflection 3: Interpretation Unlocks the Value of Fast Feedback

Here’s an interesting note: when I first started lifting, I wouldn’t have known exactly what to look for in a video. A rounded back or hips rising too fast didn’t mean anything to me until I had learned what good looked like. Fast feedback was only useful once I had the knowledge to interpret it.

It was similar to when my testing team was asked to explore the product for security risks. They were skilled testers, but security testing was not our area of deep expertise. We could follow guidelines, try common attack patterns, and note down the responses we got — but we didn’t know whether what we were seeing was truly a vulnerability or just expected system behavior. Even when we followed recommendations from checklists, we were left wondering: Is this a real threat, or just noise?

What we really needed was someone who could interpret the signals with expertise — a security specialist who could look at the same output and say, “Yes, this is dangerous,” or “No, this is fine.” Without that, the fast feedback we were generating didn’t translate into learning. It reminded me of the feedback I got from my coach, an expert on deadlifting: once I had learned what to look for, I could make sense of my videos on my own.

Speed matters enormously — but it only accelerates learning if you can make sense of what’s coming back.


Reflection 4: We Can Shape the Loops

As a lifter, I learned to adjust my loops. Filming myself gave me near-instant replays. Writing a training journal and reviewing previous recordings helped me see trends across months. Without those adjustments, my progress would probably have been slower.

Sometimes, I even shaped the lift itself to get more feedback. Slowing down the movement — adding pauses at the knees, or deliberately descending very slowly — gave me more time to feel what was happening and notice where my position was breaking down. It wasn’t about moving more weight, but about creating a training scenario where I could learn more from each rep.

In software, we also have the power to shape our feedback loops. We can choose what to observe, how to surface information, and how quickly we get it. Sometimes that means speeding things up — shortening build times or adding logging — but sometimes it means slowing down on purpose. Taking time to explore step by step, to add more observability, or to walk through a workflow carefully can reveal details we’d miss at full speed.
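One way to shape a loop like that can be sketched as a walkthrough helper that logs each step and, when wanted, deliberately slows down between steps so a human can observe intermediate state. The step names and the pause parameter are illustrative assumptions, not from any specific tool.

```python
import logging
import time

# Sketch of deliberately shaping a feedback loop: surface information
# via logging, and optionally slow the walkthrough down on purpose.

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("walkthrough")

def walk_through(steps, pause_seconds: float = 0.0):
    """Run each (name, action) pair, logging before and after each step."""
    results = []
    for name, action in steps:
        log.info("starting %s", name)
        result = action()
        log.info("finished %s -> %r", name, result)
        if pause_seconds:
            time.sleep(pause_seconds)  # slow down on purpose to observe
        results.append((name, result))
    return results

# Fast loop for routine runs, slow loop for careful exploration:
results = walk_through([("load cart", lambda: "2 items"),
                        ("checkout", lambda: "order placed")])
```

The same workflow can then run at full speed in a pipeline or with a pause for step-by-step exploration, which mirrors adding pauses to a lift to feel where the position breaks down.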

The goal isn’t just to get feedback faster — it’s to design feedback that accelerates learning.


Closing Reflection

Software testing, like lifting, is a practice that can look easy from the outside. To someone watching, it may seem like a tester is just “randomly pressing buttons.” But underneath, there’s intention: forming hypotheses, observing carefully, connecting signals, and adjusting based on feedback. Sometimes that means repeating a scenario to learn more, sometimes it means trying a completely new approach.

Of course, there are huge limits to the analogy. Deadlifting is largely a physical skill where I train my body to move well and stay strong. Testing is a cognitive skill where I train my brain to form hypotheses, notice patterns, challenge assumptions, and explore risk. But the small parallels circle around the need for feedback: both require listening carefully — to your body or to the system — and using that information to adjust.

When feedback is fast, you accelerate not only your progress but also your ability to adapt. Whether it’s correcting a mistake, fine-tuning a movement, or exploring a new path, quick feedback shortens the time between action and adjustment. It gives me the ability to spot patterns faster.

And that’s the real carry-over. Under the barbell or inside a product, progress comes from designing and using feedback loops that are fast enough to guide the next step, diverse enough to reveal different perspectives, and deep enough to provide value.

Deadlifting and software testing look completely different on the surface, but at their core they are both ongoing practices of learning — ways to continuously explore, learn, adjust, and improve.

On a side note, I actually don’t do conventional deadlifts any longer.

Changing the Conversation About Testability

Rethinking Testability Part 4 (last post) – a series of blog posts based on my talk Improving Quality of Work and Life Through Testability

Earlier posts in the series:
Part 1 – Testability is about people, not just code
Part 2 – Poor Testability is Everywhere – but we don’t always see it
Part 3 – The Triangle of Perception

Reframing Testability

Over the years I’ve learned that starting a conversation by talking about testability can lose people pretty fast.

Not because they don’t care about it — my guess is that testability just sounds too niche, too tester-centric. And to some it might seem too technical, and maybe a bit too dry, to immediately feel connected with.

So I started framing it differently.


Leading With Developer Experience

Instead of testability, I’d talk about Developer Experience — or sometimes Developer Productivity, depending on who was in the room.

I genuinely care about Developer Experience — and it overlaps heavily with testability. That concept also seemed easier for people to connect with.

Most developers know the pain of:

  • Waiting forever for a build to finish.
  • Having to restart flaky environments multiple times a day.
  • Wrestling with tools that block instead of help.
  • Getting interrupted mid-flow by issues that shouldn’t be issues.

Those things drain energy, slow learning, and make people feel less effective.
When I talk about them as Developer Experience problems, people nod immediately.

But here’s the thing: those are also testability problems. Because every one of those frictions makes it harder to observe, control, and explore the system.


Where Productivity Comes In

On the other side, Developer Productivity is a term that seemed to land better with leaders and managers, because it speaks directly to business outcomes: speed, efficiency, predictability.

If you say “poor testability slows us down,” you’re talking productivity.
If you say “better testability means faster learning and fewer surprises,” you’re also talking productivity — just in a way that connects risk and speed.

The overlap is there. Testability affects both how productive teams feel (experience) and how productive they are (output).

But here’s the risk: if I only frame it in terms of speed, I risk losing the deeper point. Testability isn’t about going faster — it’s about how easily we can explore a system, uncover risks, and challenge the product itself.


The Catch-22 of Testability

One of my biggest challenges is that improving testability often feels like a Catch-22.

  • To show the value of improving it, you first need better conditions.
  • But to get those better conditions, you often need to show the value up front.

I’ve seen this happen with tools, environments, and processes. Everyone would benefit if they were more stable or better supported. But getting buy-in often comes down to one productivity-related question:

“How much time will this save?”

It’s a fair question. But the real value isn’t only about saving time.
It’s about reducing blind spots.
It’s about enabling exploration earlier.
It’s about noticing problems sooner — and avoiding the burnout of constantly fighting the system.

Those benefits are harder to measure, but they’re the ones that really matter.


The Bigger Picture

This is the thread running through the whole Rethinking Testability series:

  • Testability isn’t just about code.
  • Poor testability shows up everywhere, often in invisible ways.
  • People perceive it differently depending on how they interact with the system.
  • And the way we frame it shapes whether others understand its value.

At the heart of all of this is one simple idea: testability shapes quality of life.
Not just the quality of the product — but the quality of life for the people building it.


Improving and Advocating for Testability

Improving testability isn’t the responsibility of one person or one team — it’s something we can all influence:

🔍 As a tester
Don’t just share results — share the story behind them. If it was painful to get there, that friction is a signal. Speaking up about it reveals risks others can’t see.

💻 As a developer
Think beyond “code that works.” Ask yourself: can someone easily observe, control, and explore this system? Design for exploration, not just validation.

📈 As a leader or product owner
Seek real confidence, not just green dashboards. Ask: where are teams fighting the system instead of learning from it? Your support can make the difference between friction and flow.


The Takeaway

Testability isn’t really about speed. It’s about making our work smoother, our learning faster, and our confidence real. And when we improve it, we’re not just improving our products — we’re improving the experience of everyone building them.

That’s why I believe testability deserves more attention.
Because quality of life at work isn’t separate from quality of the product. They rise and fall together.


Rethinking Testability Part 1 – Testability is about people, not just code, Part 2 – Poor Testability is Everywhere – but we don’t always see it, Part 3 – The Triangle of Perception: Why we see testability differently

The Triangle of Perception: Why We See The Need for Testability Differently

Rethinking Testability Part 3 – A series of blog posts based on my talk Improving Quality of Work and Life Through Testability


Rethinking Testability Part 1 – Testability is about people, not just code, Part 2 – Poor Testability is Everywhere – but we don’t always see it

An anime-style illustration of two people in dialogue on either side of a triangle.
Triangle of Perception

Same same but different

Two people can work on the exact same system, and on what seems to be the same problem — and yet live in completely different worlds.

I learned this many years ago. I was working with a developer, asking him to improve the logs to help us catch subtle problems. But we saw logs very differently: for me, they were essential; for him, they were occasional — which made him question the investment and the time needed to improve them.

As a tester, logs were really important to me. I relied on them not just when something was obviously broken — I needed that observability before anything failed. It helped me spot anything weird: things that might not be visible through the UI.

For the developer, logs were something he dug into after a failure—part of troubleshooting a known issue. Logs were helpful, but only needed now and then.

We weren’t disagreeing on whether logs were useful.
But how often we needed logs, how we used them, and what we used them for shaped how each of us saw the need for investing in better testability.


Three Factors Shaping The Perception of The Need For Testability

A black-and-white illustration of a triangle flanked by two silhouetted people. The top of the triangle is labeled “view of testing”, the bottom corners “usage of the system” and “frequency of interaction”, and the center reads “Perception of testability”.
Perception of Need for Testability Triangle

Over time, I started noticing a certain pattern.
It seems like different people’s perceptions of the need for testability are shaped by three main factors:

  1. Frequency of interaction — How often do you work with the product? Daily? Occasionally? Rarely?
  2. Usage of the system — How do you interact with the product? Whether you are building it, testing it, or observing it — when you do work with it, are you going deep into the system or just skimming the surface?
  3. View of testing — Do you see testing mainly as confirming known behaviors, or as exploring the unknown?

When your answers to those questions differ, your sense of what’s “good enough” for testability will differ too.


Confirmation vs. Exploration


I’ve noticed that the third factor — how you see testing — is the one that changes the conversation the most. Note – I am deliberately polarizing and exaggerating the views to make the distinction clearer.

When someone sees testing as confirming expected outcomes, they’ll judge testability by how easily they can check the known. In my experience, a common symptom of this is a heavy focus on testability for automation.

But if we see testing as exploration — about learning, discovering, and questioning — then what we need from testability will be different. We need to support serendipitous exploration: being able to notice something interesting and then quickly dig deeper without friction.

Unfortunately, most organizations I’ve worked with lean heavily toward optimizing for confirmation and verification, maybe because it’s easier to measure. Exploration often gets left behind and when that happens we risk missing the bugs that really matter. For more on this topic see my post on Testing Beyond Requirements.


Why This Matters

When someone nods along as you talk about improving testability, it’s worth checking:
Are they picturing the same thing you are?
Or are they imagining something completely different?

That shallow agreement can be dangerous — because it hides the fact that you might be solving for entirely different problems.

Rethinking Testability Part 1 – Testability is about people, not just code, Part 2 – Poor Testability is Everywhere – but we don’t always see it

Poor Testability Is Everywhere — But We Don’t Always See It

Rethinking Testability Part 2 – A series of blog posts based on my talk Improving Quality of Work and Life Through Testability

Part 1 – Testability is about people, not just code

Poor testability slows everything down, adds risk, increases cost, causes delays, and results in poor quality.
Poor Testability

Symptoms of Poor Testability

I’ve worked with a lot of different teams and organizations over the years, and I’ve seen the same problems repeat themselves in places you’d think had nothing in common.

Sometimes the symptoms are obvious, but very often they are not understood as testability problems:

  • A test environment that is not available.
  • Logs that are unavailable or hard to read.
  • A new team that is not yet familiar with the product.


The Patterns I Keep Seeing

When testability is poor, I usually see some mix of these five problems:

  1. Late discovery of critical bugs — the issue was there, but poor observability or unstable environments kept it hidden until too late.
  2. Intermittent issues that slip through — the right conditions to trigger them are too hard to create on demand.
  3. False confidence — the green checks hide how much effort it took to get there.
  4. Missed learning opportunities — we stop exploring and only do the bare minimum to get through.
  5. Burnout — constant friction turns the work into a grind.

Comic strip: a distressed man in front of a computer screen.

I once worked at a place where a memory leak took down the website in production. We’d seen symptoms of the issue during testing — but because our environment was flaky, we had a habit of restarting the server. The warning signs were there. But because of the distraction caused by poor testability, the bug stayed hidden until it was too late.

The first four mainly hurt the product – at least at first.
The last one hurts people.


The Burnout Nobody Talks About

One of the hardest moments I’ve witnessed came from the fifth problem.

A tester I worked with broke down in tears. Not because of bad feedback from a manager, or a bug escaping to production — but because every single day was a fight just to do the basics.

He couldn’t get the system into the right state.
He couldn’t trust the tools.
He felt blocked at every turn.

That’s not “just part of the job.” That’s the personal cost of low testability – and a loss for the organization where this person works.


The Invisible Friction

Sometimes poor testability hides in plain sight — we’ve just gotten used to it.

On one project, a tester had to go through the entire customer journey before they could even start the actual test:
Simulate a purchase → Step through the install flow → Confirm the configuration
Every day. Several times a day.
Nobody questioned it. It was just the way things worked.

Until one day, a developer sat down, watched the whole process unfold, and said:

“Wait… you do this every time? I have a script that does all of that.”

That moment said it all.
Sometimes the biggest testability problems aren’t hidden in the system — they’re hidden in our habits.
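A daily ritual like the one above can often be collapsed into one small script. The sketch below is purely illustrative — the function names (simulate_purchase, run_install_flow, verify_configuration) are invented stand-ins; in a real project each would call your product’s API or CLI:

```python
# Hypothetical sketch: collapsing a repeated manual setup into one script.
# All three functions are invented stand-ins for illustration only.

def simulate_purchase(order_id: str) -> dict:
    """Stand-in for simulating a purchase; returns a fake order record."""
    return {"order_id": order_id, "status": "paid"}

def run_install_flow(order: dict) -> dict:
    """Stand-in for stepping through the install flow."""
    order["installed"] = True
    return order

def verify_configuration(order: dict) -> bool:
    """Stand-in for confirming the resulting configuration."""
    return order["status"] == "paid" and order.get("installed", False)

def reach_testable_state(order_id: str = "demo-001") -> bool:
    """Run all three setup steps so actual testing can start immediately."""
    order = simulate_purchase(order_id)
    order = run_install_flow(order)
    return verify_configuration(order)

if __name__ == "__main__":
    print("ready to test:", reach_testable_state())
```

The value is not the script itself but making the setup visible as one named, shareable step instead of a private habit.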


How to Spot It

The power of pairing

If your team is struggling with testability — whether you realize it or not — there are a few ways to surface the pain:

  • Pair up — have someone from another role watch you set up and run a test. Fresh eyes see friction you’ve stopped noticing.
  • Map the setup — document the steps needed just to get into a testable state, then investigate which of those could be simplified, automated, or scripted.
  • Ask “how” and “why” more often — tell the story of how it was tested and why; the story about the testing itself may reveal interesting information about testability.
  • Run a testability workshop with your team — I often run these workshops with teams, and they start with a simple question: “What is making it hard for you to test?”

The Real Point

Poor testability isn’t always loud.
Sometimes it creeps in slowly, hidden behind workarounds and “just the way we do things.”

But whether it’s obvious or invisible, it is costing us.
It costs us time, it costs us learning, and — over the long run — it costs us the energy and motivation we need to do our best work.

Rethinking Testability Part 3 – The Triangle of Perception: Why we see testability differently

Testability Is About People, Not Just Code

Rethinking Testability Part 1

Poor Testability

I’ve lost count of the times I’ve seen similar scenarios play out:
A tester — or sometimes a developer — spends hours just getting the system into a testable state. By the time everything is finally configured, they’ve got maybe twenty minutes left to actually do the testing.

They don’t complain.
Nobody on the team does.
It’s just how things are.

But to me, that’s not just a scheduling hiccup or a minor annoyance.
It’s a symptom of something deeper: poor testability.


The Narrow View That Holds Us Back

In my experience, when “testability” comes up in technical discussions, it’s almost always framed in narrow, code-focused terms.

The ISO 25010 standard, for example, defines it as:

“The degree of effectiveness and efficiency with which test criteria can be established for a system, and tests performed to determine if they’re met.”

It’s not completely wrong — but it’s incomplete.
This definition treats testability as something the system has, as if the only point of testing is to check that known expectations are met.

But testing is so much more than that. It’s about learning. It’s about discovering things you didn’t expect. It’s about questioning assumptions and exploring risks before they turn into real problems.

When we define testability too narrowly, we risk building systems that are easy to check but hard to learn from. And that’s where the real damage happens!


A More Human-Centric Definition

Dimensions of Testability

After 25 years in software development, here’s how I see it:

Testability is how easy it is for a specific person to test a specific product in a specific context.

That single sentence changes the conversation.
It forces us to look beyond the code and think about:

  • Who is doing the testing, and what skills and knowledge they bring.
  • What tools they have, and how easy those tools are to use.
  • The culture of the team, the pressures of deadlines, and the development practices in play.
  • The architecture and purpose of the product itself.
  • …and the list continues. For a deep dive into the dimensions that affect testability, have a look at my previous work on testability.

These aspects aren’t fixed. They shift over time — even within the same team. What feels smooth and straightforward to one person might feel painfully slow to another.

That’s why I don’t think testability is about speed. It’s about effort — how much effort it takes for this person, in this moment, to make real progress in testing.


Why This Matters More Than You Think

When testability is low, it doesn’t just slow down releases or make bug-hunting harder.
It drains energy. It discourages curiosity. It not only undermines confidence in the product but may also create a dangerous illusion of reliability.

In my experience, many people look at a green test suite and assume everything’s fine. But they don’t talk about what it took to get there.

Tests passed—but only after multiple retries.
Or the environment was unstable, so corners were cut.
Or the system was too painful to set up properly, so we didn’t test very deeply.

That struggle — that story — rarely shows up in the report.
It’s all green.
It’s an illusion based on data with no context.

I’ve seen skilled testers spend most of their day wrestling with flaky environments instead of exploring the product.
I’ve seen teams skip entire categories of tests — not because they didn’t care, but because the setup was too painful.
I’ve even seen burnout happen not from impossible deadlines, but from the constant grind of fighting the system just to do the basics.

The hardest part is that burnout doesn’t stay at the office. It follows people home. It affects evenings, weekends, families, and mental health.

Poor testability might look like a technical issue on the surface, but its impact runs much deeper.

So improving testability isn’t just a technical win. It’s a human one.
It changes how smoothly we work, how quickly we learn, and how confident we feel about the results we’re getting.


Where to Start

If you want to improve testability in your team, start by looking beyond the code.

  • Talk about people, not just systems.
    Ask: Who’s testing this, and what do they need to succeed?
  • Look beyond speed.
    Faster isn’t always better. Less friction is better.
  • Measure the effort, not just the output.
    Track how long it takes to get into a testable state, how easy it is to observe and control the system — not just how many tests pass.
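The “measure the effort” advice above can be made concrete with a tiny timing wrapper. This is a minimal sketch, not a prescribed tool — setup_environment is a hypothetical stand-in for whatever your team does to reach a testable state:

```python
# Hypothetical sketch: measuring time-to-testable-state, not just test counts.
import time

def measure_setup_effort(setup_fn, *args, **kwargs):
    """Time a setup routine and report how long reaching a testable
    state took, alongside whether the setup actually succeeded."""
    start = time.perf_counter()
    try:
        setup_fn(*args, **kwargs)
        succeeded = True
    except Exception:
        succeeded = False
    elapsed = time.perf_counter() - start
    return {"seconds": round(elapsed, 3), "succeeded": succeeded}

def setup_environment():
    """Stand-in setup step; imagine environment provisioning here."""
    time.sleep(0.01)

if __name__ == "__main__":
    print(measure_setup_effort(setup_environment))
```

Trending this number over time makes the invisible friction visible: if reaching a testable state keeps getting slower or flakier, that is a testability signal long before any test fails.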

Testability is a reflection of how we work.
When we improve it, we’re not just improving the code — we’re improving the whole experience of building and testing.

Rethinking Testability Part 2 – Poor Testability is Everywhere – but we don’t always see it

Rethinking Testability


Before summer, I had the chance to share my new talk:
Improving Quality of Life Through Testability at GreaTest Quality Convention
It’s a topic that still doesn’t get enough attention — which is why I’m bringing it here, in a 4-part blog series.
Over the years, I’ve collected lessons, stories, and patterns from my own work and from teams I’ve worked with. My goal is to show a different way of thinking about testability — one that’s built for people, not just systems.
When most people hear “testability,” they think about code.
But in my 25 years in software, I’ve learned it’s about much more than that.
Poor testability shows up as slow feedback, missed bugs, fragile automation, and even burnout.
And it’s everywhere — sometimes in ways we don’t notice, because we’ve accepted them as “just how things are.”

Here’s what’s coming up in the series:
1️⃣ Testability Is About People, Not Just Code
→ A more human-centric definition and why it matters.
2️⃣ Poor Testability Is Everywhere — But We Don’t Always See It
→ The recurring patterns and the invisible friction that holds teams back.
3️⃣ The Triangle of Perception
→ Why different roles see the same system’s testability in completely different ways.
4️⃣ Changing the Conversation About Testability
→ How reframing gets people to listen — and the risks that come with it.

Heuristics applied when opening a safe lock

When teaching testing I often talk about heuristics. Everyone uses heuristics, whether you are aware of them or not. Simply explained, a heuristic is a rule of thumb — an approach or a method for solving a problem. It’s important to remember that all heuristics are fallible.

In software development we use heuristics every day. Your heuristics are built upon your experience. This is why I find it extra interesting to compare my heuristics with those of my children, who have different experiences than I have.

A few years ago when my kids still played with toys, I was in the bathroom getting myself ready for the day when I heard my youngest daughter crying from the other side of the wall. I opened the door and asked her what was going on.

“I can’t open the safe lock and I have all my money in there. We have forgotten the code and now we can’t open it.” Of course I went to see how I could help them.

My oldest daughter was sitting on the floor, furiously shaking the safe. She mumbled something about how she had changed the code a while ago. They had tried several different codes, but without any luck. So I sat down on the floor with my daughters and asked them to show me what happened when they entered the code.

They entered a code. “Error” was shown on the display and the safe made a kind of buzzing sound, indicating that something was wrong.

I asked them what different things they had tried to open it. They had actually tried several things, like:

  • Trying to remove the hinges. Kind of clever, I think (but it might have broken the toy).
  • Searching the internet for how to open the safe.
  • Trying different codes.

Suddenly my youngest shouted, with pride in her voice: “We can use a saw and cut it open!” We chose to proceed without bringing out the saw since they really liked their safe lock.

Opening the safe box


To give you an idea of how the safe is opened, I have provided a short gif and a brief explanation. There are actual sounds for each action you perform, but since I can’t upload videos without upgrading my account to Premium, this will have to do. You just have to imagine any toy with sounds, and I bet you will understand how quickly this can drive you crazy…

  1. First insert a plastic “key” card (sound played).
  2. Press the button “Withdraw” twice. Not sure why it needs to be pressed twice (sound played).
  3. Then enter your code (sound!). Another sound is played when the code is correct. You can then open the safe with the black handle below the “Withdraw” button.

Heuristics applied

Before moving on to some ideas that came to my mind, we tried a few codes again, but without any luck.

Since the safe box actually had some software I used the “Long press heuristic” hoping I could reset the software. I even tried combining different buttons hoping to trigger some error which would reset the safe. Yes, this has actually happened before!

I then went with the “Google search heuristic”. It had already been tried by my children, but I had a deeper knowledge of how to search the internet. This unfortunately didn’t help either, and to be frank, I was more eager to interact with the safe lock than to read.

When this didn’t help, I wanted to use my “Remove the batteries heuristic”. It was a long shot, but it was a toy, so maybe removing the batteries would reset the code. While doing so, I discovered a tiny hole next to where the batteries go.

You might wonder why I didn’t look for it when considering how to reset the safe code – but it just didn’t cross my mind. As a side note, there is a cognitive bias called the Hindsight bias, which is very common and also known as the “I knew it all along” phenomenon. This bias causes us to overestimate our ability to predict events. It might also be the reason you are thinking, “I would have looked for the reset option first thing.” Maybe you would have – maybe you wouldn’t.

With excitement in my voice, I asked the children to go get me a toothpick. Again I was using my heuristics – the “Press reset with a toothpick heuristic”. I pushed the toothpick into the tiny hole. The safe made a new type of sound and seemed to be reset. “Wohoo! Could this be it?”

We inserted the plastic card, pressed the withdraw button twice, and then entered the code several times, without any luck. The fourth time, the safe made a new sound (described as a sad sound by the children), a light went red, and then the display went blank. This was a different behavior! Intriguing!

I noticed that the text on the display was very hard to read. It was not very bright. My youngest used her little hand to shadow the display to make it easier for me to read. I suspected that the batteries might be running low so we changed the batteries. And voilà we could finally open the safe.

My heuristics were built upon my experiences with toys, software, and software and hardware working together — experiences which my children did not yet have. Some were consciously applied, but I am certain there were several unconsciously used heuristics as well. By interacting with the toy and exploring different alternatives, serendipity might have played a part here too. I still don’t know why the safe wasn’t reset properly after pressing the reset “button”. In hindsight, it could have been waiting for a new code to be entered, but that hypothesis has yet to be tested.

A toy safe is however different from a real safe. The context of where the heuristic is applied is of great significance.

To read more about heuristics in software development and testing, the article Software Testing Heuristics: Mind The Gap! by Richard Bradshaw and Sarah Deery is a great starting point, containing a long list of references.

Are you really a Test Coach?

For many years we have seen different kinds of coaches appearing within software development. As more and more companies strive to become agile, various types of roles are becoming obsolete or transformed into something different. The most prominent one seems to be Agile Coach. In the software testing domain it is the Test Coach or the Quality Coach.

The transformation and changes in expectations of a role have, in my experience, caused some identity crisis within the testing profession. Even though there is a need for testing, many companies choose to remove the tester as a role. (This post is, however, not about testers, so I will not continue down that road.)

As for myself, I’ve been struggling to put a label on the work that I do. For those who know me, I am not a big fan of titles and labels, although they can be helpful in some contexts. My work for the last few years has focused on transformation and how testing needs to be interlaced with development. Many of the companies I’ve worked with do not even have testers.

Coaching

Last year I went through a nine-day training to become an ICF Coach. During that course I had many great insights. One of them relates to the Coach in the context of software development. A few times I’ve labeled myself Test Coach or Quality Coach, but I struggled a bit with those titles as well. I just felt they didn’t really do justice to my work. In the context of coaching, a coach is an expert on the process of coaching and someone who facilitates learning.

“A coach is an expert on the process of coaching and is someone who facilitates learning.”

The client is the expert but the coach helps the client to unlock their potential to maximize their own performance. A skilled coach knows that the individual has the answer to their own problems.

A coaching approach

Why I struggled with titles like Test Coach became very obvious during my coach training. I was presented with the following model, “The flower”, created by Polhage & Lundberg, who also run the training (the model is originally described in Swedish; this version has been visually modified by me). They differentiate between the coach as a profession and having a coaching approach. We can always apply a coaching approach, whether in our daily life or at work.

The flower petals represent several roles which we might step into during our daily life or at work. The Coach is one of these roles (and the one I was in training for).
You can move between these roles and decide who to be in different situations. As an example, sometimes you need to make decisions based on your responsibilities, which makes you the Decision Maker. The empty petal is left for you to decide what to put in there. Your flower might have many more petals.

No matter what profession or role you have you can always apply a coaching approach. This means how you act and relate to the values of coaching.

The root system represents eight characteristics to consider for constructive communication, which are used in a coaching approach.

I quickly realized why I have never been very fond of the title Test Coach. It doesn’t fully reflect what I do or who I am. I am a subject matter expert in testing trying to help an organization, a team or an individual to improve their testing by guiding them and showing what to do, how they can do it and why.

“I am a subject matter expert in testing trying to help an organization, a team or an individual to improve their testing by guiding them and showing what to do, how they can do it and why.”

But I am also a Decision Maker, a Teacher, an Inspirer and a Mentor (for those who choose me to mentor them). I shift a lot between all of these. One role I have never taken on at work is the Coach. However, I often apply a coaching approach. This is an approach where I ask questions, where I use my curiosity to understand where the team or individual is right now, and where I display the courage to challenge and ask “uncomfortable” questions. Focusing on what works and what moves us forward is also part of what I apply in my daily job, whether my title is Project Manager, Test Coach or Scrum Master.

The only time I have been the Coach and only a coach is when I am a professional Coach in an agreement with a client.

Coaching in software testing

I recently had a short assignment where I was asked to coach a tester. She needed someone to talk to regarding her own journey, as she was leading a change in her organization. In the beginning I found myself struggling with who to be. Biased by my recent experience as a Professional Coach, I started off in that role but quickly understood that my client needed something different. The focus was more on guidance around the change she was implementing at work than on her own journey. Sometimes it was hard to separate her own growth from the approach to testing that she was implementing.

Something that is very important is the agreement that you come to before starting the sessions. The purpose of that agreement is to build trust and set expectations. Since this situation was a bit new for both me and my client, we decided to keep an open dialogue along the way to make sure she got value from our sessions.

My learning experience here is that it is not black and white. What title you carry is not as important as the approach you choose.

“What title you carry is not as important as the approach you choose.”

During these sessions I used a coaching approach – actively listening, asking questions, driving my client to find her own solutions based on where she and her team are right now. In the cases where she wanted me to share my experience and thoughts, I did that as well.

What are your thoughts regarding coaches in Software Development/Testing?

References

Polhage & Lundberg

International Coaching Federation