Learning Faster: Deadlifts, Software Testing and Feedback Loops

Reflections that emerged from learning to deadlift

Many years ago, I decided I wanted to get really good at deadlifting. I can’t quite remember why, but at some point I thought: women who lift heavy are pretty badass. And I wanted to be badass too.

At first, I thought the deadlift would be simple. You just pick up a barbell from the floor, right? But like many things that look simple on the outside, the deeper I went, the more complex it became. Hip hinge, grip, bracing, bar path, leverages — all of it mattered. And because I tend to get nerdy when I learn something, I didn’t just practice in the gym. I was simultaneously watching endless tutorials, reading articles and forum threads, and even rehearsing the hip hinge and the feeling of a proper lift without a barbell. Yes, you would find me pretending to deadlift everywhere: at work, at home, in the grocery store.

The more I dug in, the more I realized how much my progress depended on the feedback I was getting. Sometimes it came instantly, sometimes much later — but the faster and more diverse the feedback, the quicker I learned. I started to see parallels with my profession: it reminded me of the feedback loops in software development and testing.


Reflection 1: Not All Feedback Is Useful

One of the first “feedback tools” I tried in the gym was the mirror. It gave me an instant reflection of my movement, which sounded useful in theory. In practice, though, it wasn’t reliable at all. To check myself, I had to turn my head or shift my focus — and that immediately changed my form. The feedback was there, but the very act of observing interfered with the movement.

Software has its own “mirrors”. Sometimes we interact with a system and it looks fine — the page loads, the button clicks, the response comes back — but that doesn’t mean it’s really working the way we expect.

Feedback through mirror

The feedback can be shallow, or even misleading. Other times we add log statements or quick checks that give us a sense of what’s happening, but only from a narrow angle. Just like the mirror in the gym, these signals can create an illusion of confidence while hiding what’s really going on. The real value comes when we go deeper and investigate beyond what’s immediately visible.
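
To make the “software mirror” concrete, here is a tiny Python sketch (the order lookup and its payload are invented for illustration): the shallow check passes while the deeper one reveals the problem.

```python
def fetch_order(order_id):
    # Stand-in for a real API call; returns a response-like dict.
    return {"status": 200, "body": {"order_id": order_id, "items": []}}

response = fetch_order(42)

# Shallow feedback: the "mirror" view. The call succeeded, so it looks fine.
looks_fine = response["status"] == 200

# Deeper feedback: investigate beyond what's immediately visible.
body = response["body"]
actually_fine = looks_fine and body["order_id"] == 42 and len(body["items"]) > 0

print("shallow check:", looks_fine)    # an illusion of confidence
print("deeper check:", actually_fine)  # the empty order was hiding behind the 200
```

The status code alone is the mirror: real, but only one narrow angle on what happened.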


Reflection 2: Fast Feedback Accelerates Learning — Especially with Multiple Inputs

Feedback from recording

Recording myself in training sessions became a turning point, even if it felt really awkward at first. With video, I could almost immediately see what had happened and adjust in the very next set. That kind of instant loop accelerated my learning curve enormously.

But the video wasn’t the only input. Sometimes I could feel something was off — maybe my balance shifted, or the bar drifted away from me. That sensation alone didn’t always tell me why it happened, but the video often did. And the best feedback of all? A coach standing right beside me, shouting cues in the middle of the lift — “brace more!” or “push the floor away!” That was immediate, specific, and actionable.

Testing is similar. We learn fastest when feedback is both fast and comes from multiple angles:

  • The system itself giving you signals (logs, responses, performance “feel”).
  • Tools that capture and replay what happened (recordings, traces, automated checks).
  • A colleague or peer review pointing out what you might have missed.
  • Pairing with a colleague, which gives a richer flow of ideas and feedback on your own thinking.

One perspective rarely tells the full story. It’s the mix of inputs that accelerates learning.
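
As a rough sketch of mixing those inputs in code, here is a minimal Python example (the workflow steps are invented): each step emits a live log signal and is also captured in a replayable trace, two feedback channels from the same run.

```python
import logging

logging.basicConfig(level=logging.INFO, format="%(levelname)s %(message)s")
log = logging.getLogger("checkout")

trace = []  # a lightweight "recording" of what happened, reviewable afterwards

def step(name, action):
    """Run one step, logging it live and capturing the result in the trace."""
    log.info("step: %s", name)
    result = action()
    trace.append((name, result))
    return result

# Hypothetical workflow, invented for illustration.
step("add item", lambda: {"cart_size": 1})
step("apply discount", lambda: {"discount": 0.1})
step("checkout", lambda: {"ok": True})

# Later, the trace is a second feedback channel alongside the live logs:
for name, result in trace:
    print(name, "->", result)
```

The log is the coach shouting mid-lift; the trace is the video you review between sets.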


Reflection 3: Interpretation Unlocks the Value of Fast Feedback

Here’s an interesting note: when I first started lifting, I wouldn’t have known exactly what to look for in a video. A rounded back or hips rising too fast didn’t mean anything to me until I had learned what good looked like. Fast feedback was only useful once I had the knowledge to interpret it.

It was similar to when my testing team was asked to explore the product for security risks. We were skilled testers, but security testing was not our area of deep expertise. We could follow guidelines, try common attack patterns, and note down the responses we got — but we didn’t know whether what we were seeing was truly a vulnerability or just expected system behavior. Even when we followed recommendations from checklists, we were left wondering: Is this a real threat, or just noise?

What we really needed was someone who could interpret the signals with expertise — a security specialist who could look at the same output and say, “Yes, this is dangerous,” or “No, this is fine.” Without that, the fast feedback we were generating didn’t translate into learning. It reminded me of the feedback I got from my coach, an expert on deadlifting: once I had learned what to look for, I could make sense of my videos.

Speed matters enormously — but it only accelerates learning if you can make sense of what’s coming back.


Reflection 4: We Can Shape the Loops

As a lifter, I learned to adjust my loops. Filming myself gave me near-instant replays. Writing a training journal and reviewing previous recordings helped me see trends across months. Without those adjustments, my progress would probably have been slower.

Sometimes, I even shaped the lift itself to get more feedback. Slowing down the movement — adding pauses at the knees, or deliberately descending very slowly — gave me more time to feel what was happening and notice where my position was breaking down. It wasn’t about moving more weight, but about creating a training scenario where I could learn more from each rep.

In software, we also have the power to shape our feedback loops. We can choose what to observe, how to surface information, and how quickly we get it. Sometimes that means speeding things up — shortening build times or adding logging — but sometimes it means slowing down on purpose. Taking time to explore step by step, to add more observability, or to walk through a workflow carefully can reveal details we’d miss at full speed.
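
A minimal Python sketch of this idea, with invented workflow steps: a “slow mode” that pauses after each step and dumps state, trading speed for observability, much like pausing at the knees.

```python
import time

def run_workflow(steps, slow=False, observe=print):
    """Run steps in order; in slow mode, dump state after each one
    so a human can watch where the position starts breaking down."""
    state = {}
    for name, step in steps:
        state = step(state)
        if slow:
            observe(f"after {name}: {state}")
            time.sleep(0.1)  # deliberate pause: time to notice, not to go fast
    return state

# Hypothetical steps, invented for illustration.
steps = [
    ("login",    lambda s: {**s, "user": "anna"}),
    ("add item", lambda s: {**s, "cart": ["book"]}),
    ("checkout", lambda s: {**s, "order": "placed"}),
]

final = run_workflow(steps, slow=True)
```

The point isn’t the pause itself but that the loop is shaped on purpose: we chose what to observe and when.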

The goal isn’t just to get feedback faster — it’s to design feedback that accelerates learning.


Closing Reflection

Software testing, like lifting, is a practice that can look easy from the outside. To someone watching, it may seem like a tester is just “randomly pressing buttons.” But underneath, there’s intention: forming hypotheses, observing carefully, connecting signals, and adjusting based on feedback. Sometimes that means repeating a scenario to learn more, sometimes it means trying a completely new approach.

Of course, there are huge limits to the analogy. Deadlifting is mostly a physical skill where I train my body to move well and stay strong. Testing is a cognitive skill where I train my brain to form hypotheses, notice patterns, challenge assumptions, and explore risk. But the parallels all circle around the need for feedback: both require listening carefully — to your body or to the system — and using that information to adjust.

When feedback is fast, you accelerate not only your progress but also your ability to adapt. Whether it’s correcting a mistake, fine-tuning a movement, or exploring a new path, quick feedback shortens the time between action and adjustment. It gives me the ability to spot patterns faster.

And that’s the real carry-over. Under the barbell or inside a product, progress comes from designing and using feedback loops that are fast enough to guide the next step, diverse enough to reveal different perspectives, and deep enough to provide value.

Deadlifting and software testing look completely different on the surface, but at their core they are both ongoing practices of learning — ways to continuously explore, learn, adjust, and improve.

On a side note, I actually don’t do conventional deadlifts any longer.

The Triangle of Perception: Why We See The Need for Testability Differently

Rethinking Testability Part 3 – A series of blog posts based on my talk Improving Quality of Work and Life Through Testability


Rethinking Testability Part 1 – Testability is about people, not just code; Part 2 – Poor Testability is Everywhere – but we don’t always see it

Triangle of Perception

Same same but different

Two people can work on the exact same system and what seems to be the same problem — and yet live in completely different worlds.

I learned this many years ago when I was working with a developer, asking him to improve the logs to help us catch subtle problems. But we saw logs very differently: for me, they were essential; for him, they were something he needed only occasionally — which made him question the investment and the time needed to improve them.

As a tester, logs were really important to me. I relied on them not just when something was obviously broken; I needed that observability before anything failed. They helped me spot anything weird — things that might not be visible through the UI.

For the developer, logs were something he dug into after a failure—part of troubleshooting a known issue. Logs were helpful, but only needed now and then.

We weren’t disagreeing on whether logs were useful.
But how often we needed them, how we used them, and what we used them for shaped how each of us saw the need for investing in better testability.


Three Factors Shaping The Perception of The Need For Testability

Perception of Need for Testability Triangle

Over time, I started noticing a certain pattern.
It seems like different people’s perceptions of the need for testability are shaped by three main factors:

  1. Frequency of interaction — How often do you work with the product? Daily? Occasionally? Rarely?
  2. Usage of the system — How do you interact with the product? Whether you are building it, testing it, or observing it: when you do work with it, are you going deep into the system or just skimming the surface?
  3. View of testing — Do you see testing mainly as confirming known behaviors, or as exploring the unknown?

When your answers to those questions differ, your sense of what’s “good enough” for testability will differ too.


Confirmation vs. Exploration

Perception of need for Testability

I’ve noticed that the third factor — how you see testing — is the one that changes the conversation the most. Note: I am deliberately polarizing and exaggerating the views to make the distinction clearer.

When someone sees testing as confirming expected outcomes, they’ll judge testability by how easily they can check the known. In my experience, the symptom of this is a heavy focus on testability for automation.

But if we see testing as exploration — about learning, discovering, and questioning — then what we need from testability will be different. We need to support serendipitous exploration: being able to notice something interesting and then quickly dig deeper without friction.

Unfortunately, most organizations I’ve worked with lean heavily toward optimizing for confirmation and verification, maybe because it’s easier to measure. Exploration often gets left behind and when that happens we risk missing the bugs that really matter. For more on this topic see my post on Testing Beyond Requirements.


Why This Matters

When someone nods along as you talk about improving testability, it’s worth checking:
Are they picturing the same thing you are?
Or are they imagining something completely different?

That shallow agreement can be dangerous — because it hides the fact that you might be solving for entirely different problems.


Testability Is About People, Not Just Code

Rethinking Testability Part 1

Poor Testability

I’ve lost count of the times I’ve seen similar scenarios play out:
A tester — or sometimes a developer — spends hours just getting the system into a testable state. By the time everything is finally configured, they’ve got maybe twenty minutes left to actually do the testing.

They don’t complain.
Nobody on the team does.
It’s just how things are.

But to me, that’s not just a scheduling hiccup or a minor annoyance.
It’s a symptom of something deeper: poor testability.


The Narrow View That Holds Us Back

In my experience, when “testability” comes up in technical discussions, it’s almost always framed in narrow, code-focused terms.

The ISO 25010 standard, for example, defines it as:

“The degree of effectiveness and efficiency with which test criteria can be established for a system, and tests performed to determine if they’re met.”

It’s not completely wrong — but it’s incomplete.
This definition treats testability as something the system has, as if the only point of testing is to check that known expectations are met.

But testing is so much more than that. It’s about learning. It’s about discovering things you didn’t expect. It’s about questioning assumptions and exploring risks before they turn into real problems.

When we define testability too narrowly, we risk building systems that are easy to check but hard to learn from. And that’s where the real damage happens!


A More Human-Centric Definition

Dimensions of Testability

After 25 years in software development, here’s how I see it:

Testability is how easy it is for a specific person to test a specific product in a specific context.

That single sentence changes the conversation.
It forces us to look beyond the code and think about:

  • Who is doing the testing, and what skills and knowledge they bring.
  • What tools they have, and how easy those tools are to use.
  • The culture of the team, the pressures of deadlines, and the development practices in play.
  • The architecture and purpose of the product itself.
  • …and the list continues. For a deep dive into the dimensions that affect testability, have a look at my previous work on testability.

These aspects aren’t fixed. They shift over time — even within the same team. What feels smooth and straightforward to one person might feel painfully slow to another.

That’s why I don’t think testability is about speed. It’s about effort — how much effort it takes for this person, in this moment, to make real progress in testing.


Why This Matters More Than You Think

When testability is low, it doesn’t just slow down releases or make bug-hunting harder.
It drains energy. It discourages curiosity. It not only undermines confidence in the product but may also create a dangerous illusion of reliability.

In my experience, many people look at a green test suite and assume everything’s fine. But they don’t talk about what it took to get there.

Tests passed—but only after multiple retries.
Or the environment was unstable, so corners were cut.
Or the system was too painful to set up properly, so we didn’t test very deeply.

That struggle — that story — rarely shows up in the report.
It’s all green.
It’s an illusion based on data with no context.
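
One way to keep that story from disappearing, sketched here in Python with an invented flaky check: record the number of attempts next to the green result instead of hiding it.

```python
def run_with_retries(check, max_attempts=3):
    """Run a check, retrying on failure -- but record how many attempts
    it took, so the struggle shows up next to the green result."""
    for attempt in range(1, max_attempts + 1):
        try:
            check(attempt)
            return {"passed": True, "attempts": attempt}
        except AssertionError:
            continue
    return {"passed": False, "attempts": max_attempts}

# A flaky check, invented for illustration: it fails on the first two attempts.
def flaky_check(attempt):
    assert attempt >= 3, "environment not ready yet"

result = run_with_retries(flaky_check)
print(result)  # still green, but the attempt count tells the hidden story
```

The tick is still green, but “attempts: 3” is context that a plain pass/fail report throws away.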

I’ve seen skilled testers spend most of their day wrestling with flaky environments instead of exploring the product.
I’ve seen teams skip entire categories of tests — not because they didn’t care, but because the setup was too painful.
I’ve even seen burnout happen not from impossible deadlines, but from the constant grind of fighting the system just to do the basics.

The hardest part is that burnout doesn’t stay at the office. It follows people home. It affects evenings, weekends, families, and mental health.

Poor testability might look like a technical issue on the surface, but its impact runs much deeper.

So, improving testability isn’t just a technical win. It’s a human one.
It changes how smoothly we work, how quickly we learn, and how confident we feel about the results we’re getting.


Where to Start

If you want to improve testability in your team, start by looking beyond the code.

  • Talk about people, not just systems.
    Ask: Who’s testing this, and what do they need to succeed?
  • Look beyond speed.
    Faster isn’t always better. Less friction is better.
  • Measure the effort, not just the output.
    Track how long it takes to get into a testable state, how easy it is to observe and control the system — not just how many tests pass.
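
As a rough illustration of measuring effort rather than output, here is a small Python sketch (the phases and sleep calls are stand-ins for real work): time how long it takes to reach a testable state versus how long is left for actual testing.

```python
import time
from contextlib import contextmanager

timings = {}

@contextmanager
def measure(phase):
    """Record how long a phase takes, so setup effort becomes visible."""
    start = time.perf_counter()
    yield
    timings[phase] = time.perf_counter() - start

# Hypothetical session, invented for illustration.
with measure("getting into a testable state"):
    time.sleep(0.2)   # stands in for config, data seeding, environment wrangling
with measure("actual testing"):
    time.sleep(0.05)  # stands in for the testing itself

for phase, seconds in timings.items():
    print(f"{phase}: {seconds:.2f}s")
```

If the first number dwarfs the second session after session, that ratio is the testability problem, no matter how many tests pass.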

Testability is a reflection of how we work.
When we improve it, we’re not just improving the code — we’re improving the whole experience of building and testing.

Rethinking Testability Part 2 Poor Testability is Everywhere – but we don’t always see it

Rethinking Testability


Before summer, I had the chance to share my new talk:
Improving Quality of Life Through Testability at GreaTest Quality Convention
It’s a topic that still doesn’t get enough attention — which is why I’m bringing it here, in a 4-part blog series.
Over the years, I’ve collected lessons, stories, and patterns from my own work and from teams I’ve worked with. My goal is to show a different way of thinking about testability — one that’s built for people, not just systems.
When most people hear “testability,” they think about code.
But in my 25 years in software, I’ve learned it’s about much more than that.
Poor testability shows up as slow feedback, missed bugs, fragile automation, and even burnout.
And it’s everywhere — sometimes in ways we don’t notice, because we’ve accepted them as “just how things are.”

Here’s what’s coming up in the series:
1️⃣ Testability Is About People, Not Just Code
→ A more human-centric definition and why it matters.
2️⃣ Poor Testability Is Everywhere — But We Don’t Always See It
→ The recurring patterns and the invisible friction that holds teams back.
3️⃣ The Triangle of Perception
→ Why different roles see the same system’s testability in completely different ways.
4️⃣ Changing the Conversation About Testability
→ How reframing gets people to listen — and the risks that come with it.

Are you really a Test Coach?

For many years we have seen different kinds of coaches appearing within software development. As more and more companies strive to become agile, various types of roles are becoming obsolete or transformed into something different. The most prominent one seems to be Agile Coach. In the software testing domain it is the Test Coach or the Quality Coach.

The transformation and changed expectations of a role have, in my experience, caused something of an identity crisis within the testing profession. Even though there is a need for testing, many companies choose to remove the tester as a role. (This post is however not about testers, so I will not continue down that road.)

As for myself, I’ve been struggling to put a label on the work that I do. For those who know me, I am not a big fan of titles and labels, although they can be helpful in some contexts. My work for the last few years has focused on transformation and how testing needs to be interlaced with development. Many of the companies I’ve worked with do not even have testers.

Coaching

Last year I went through a nine-day training to become an ICF Coach. During that course I had many great insights. One of them relates to the Coach in the context of software development. A few times I’ve labeled myself Test Coach or Quality Coach, but I struggled a bit with those titles as well. I just felt they didn’t really do justice to my work. In the context of coaching, a coach is an expert on the process of coaching and is someone who facilitates learning.

“A coach is an expert on the process of coaching and is someone who facilitates learning.”

The client is the expert but the coach helps the client to unlock their potential to maximize their own performance. A skilled coach knows that the individual has the answer to their own problems.

A coaching approach

Why I struggled with titles like Test Coach became very obvious during my coach training. I was presented with the following model, “The flower”, created by Polhage & Lundberg, who also run the training (the model is originally described in Swedish and this one has been visually modified by me). They differentiate between the coach as a profession and having a coaching approach. We can always apply a coaching approach whether it’s in our daily life or at work.

The flower petals represent several roles which we might step into during our daily life or at work. The Coach is one of these roles (and the one I was in training for).
You can move between these roles and decide who to be in different situations. As an example, sometimes you need to make decisions based on your responsibilities, which makes you the Decision Maker. The empty petal is left for you to decide what to put in there. Your flower might have many more petals.

No matter what profession or role you have, you can always apply a coaching approach: it is about how you act and how you relate to the values of coaching.

The root system represents eight characteristics to consider for constructive communication, which are used in a coaching approach.

I quickly realized why I have never been very fond of the title Test Coach. It doesn’t fully reflect what I do or who I am. I am a subject matter expert in testing trying to help an organization, a team or an individual to improve their testing by guiding them and showing what to do, how they can do it and why.

“I am a subject matter expert in testing trying to help an organization, a team or an individual to improve their testing by guiding them and showing what to do, how they can do it and why.”

But I am also a Decision Maker, a Teacher, an Inspirer and a Mentor (for those who choose me to mentor them). I shift a lot between all of these. One role I have never used at work is the Coach. However, I often apply a coaching approach. This is an approach where I ask questions, where I use my curiosity to understand where the team or individual is right now, and where I display the courage to challenge and ask “uncomfortable” questions. Focusing on what works and what moves us forward is also part of what I apply in my daily job, whether my title is Project Manager, Test Coach or Scrum Master.

The only time I have been the Coach and only a coach is when I am a professional Coach in an agreement with a client.

Coaching in software testing

I recently had a short assignment where I was asked to coach a tester. She needed someone to talk to regarding her own journey as she was leading a change in her organization. In the beginning, I found myself struggling with who to be. Biased by my recent experiences as a Professional Coach, I started off in that role but quickly understood that my client needed something different. The focus was more on guidance around the change she was implementing at work than on her own journey. Sometimes it was hard to separate her own growth from the approach to testing that she was implementing.

Something that is very important is the agreement that you set up before starting the sessions. The purpose of that agreement is to build trust and set expectations. Since this situation was a bit new for both myself and my client, we decided to keep an open dialogue along the way to make sure she got value from our sessions.

My learning here is that it is not black and white. What title you carry is not as important as the approach you choose.

“What title you carry is not as important as the approach you choose.”

During these sessions I used a coaching approach: actively listening, asking questions, and guiding my client to find her own solutions based on where she and her team were right now. In the cases where she wanted me to share my experience and thoughts, I did that as well.

What are your thoughts regarding coaches in Software Development/Testing?

References

Polhage & Lundberg

International Coaching Federation

Testing beyond requirements

This post was originally posted on Täckblog

I often hear people talking about testing as validating and verifying requirements. But testing is so much more than that.

When we focus on verifying and validating, we will most likely look for, and find, only the things we seek. It’s embedded in the meaning of the words themselves.

According to the Oxford Dictionaries:

Verifying is to: Make sure or demonstrate that (something) is true, accurate, or justified.

And validating means to: Check or prove the validity or accuracy of.

With this mindset we tend to focus on confirming whether the function or product works as stated. But we forget to observe what else is happening. It’s like reading your horoscope that says you will meet a blond stranger — and suddenly you are only paying attention to blond strangers.

Testing is learning about the product through exploration and experimentation, but when we focus on verification it turns into a demonstration of the product:

“Yes, I can login to the website.” “Yes, I can add some products to my shopping cart.” Does this seem familiar? However, a demonstration only shows that the product can work, once, under some kind of controlled circumstance. It’s the process of verifying which leads us to think this way.

When our brain is targeted towards verification we tend to forget to ask questions like:

What if…? What else? What happens when…? Who is this for? Why is it designed like this? When can this be used? How can this function be used differently than expected? What value does it provide? To whom?

By challenging the product or the system you are testing, you will discover a lot of new things. When you move away from your confirmation bias, you increase the opportunity to find the bugs and defects you don’t want your users to find: in the worst case, the bugs that will cost your company a lot of money.

But we need to make sure we meet the requirements!

The most common argument I get when advocating for testing over validation and verification is “but we must make sure the requirements are fulfilled”.

Testing requires multiple information sources. The written requirements are only one input to the test analysis that decides what needs to be tested. If we want better products and software, we must go beyond checking. Since we can’t test everything, we must know what is most important to test. There are many parameters which affect that decision, and weighing them is yet another skill you need when testing. This is where I see many take a too narrow approach, choosing verification over testing. What you can do instead is to focus on risk. What is the worst thing that can happen? What if we can’t proceed to checkout? Instead of checking a few specific flows, be creative and explore those flows from different perspectives.

We have both explicit and implicit requirements, and we can’t write down every aspect of a requirement. Some things must be tried out before you understand them. Some things are considered so obvious that they are simply not documented. There is always a possibility for interpretation and ambiguity in requirements, whether they are written or spoken.

So why don’t we spend more time making sure the requirements are perfect? Because perfect requirements are an illusion. Even if it were possible to create them, we would be out of business before we got our product to our customers. The quicker we can challenge the product by interacting with the software or design, the faster our feedback loop becomes.

By challenging the product from different perspectives (different quality characteristics, different users, different contexts) and applying different test techniques, we are more likely to find those critical bugs which, in the worst case, can put us out of business.

So let’s step out of the verification mode and start testing!

If you are to dive into testing beyond the requirements here is a bit of reading which I can recommend to start with:

Rickard Edgren http://www.thetesteye.com/papers/TheLittleBlackBookOnTestDesign.pdf

James Bach
Testing vs Checking http://www.satisfice.com/blog/archives/856

Michael Bolton
http://www.developsense.com/articles/2005-01-TestingWithoutAMap.pdf

Anne-Marie Charrett
https://www.mavericktester.com/blog/2018/3/20/how-to-avoid-being-fooled-in-software-testing

Boosting your creative thinking

PART 1 – of a short series around creative thinking and testing.

Yesterday I facilitated a workshop on creativity at a local conference, Testit, in Malmö. The purpose was to share some ideas on how to boost your creative thinking and some tools for improving your thinking around test ideas.

Our testing is only as good as our thinking, and many times we are held back by deadlines, limited time, poor communication, gaps in knowledge, and pre-defined roles and responsibilities.

I have a certain process which I go through whenever I create a workshop or an exercise. Usually I start with an idea and a purpose of what I want the participants to learn. I trust my process to come up with exercises that fit the purpose. This is the part I get the most excited about, where I let my creativity flow. I play with different ideas, tools and media. I might experiment and try my ideas on colleagues and friends. I want my workshops to be interactive, fun and experiential, where my work is much more about facilitating learning and creating opportunities than traditional teaching. The most difficult part is balancing the fun and the learning: even though I aim for fun and play, that can’t take away the focus from the learning.

Like many times before, I trusted my own process of putting my workshop together in time for the conference. I wasn’t worried; I had some ideas brewing but nothing really tangible. I knew I would come up with something. Surprisingly, that didn’t happen. I kept procrastinating on my preparation while the day of the conference crept closer and closer. I could not even force myself to start designing my workshop.

My creativity was gone! What happened?!

To be continued…

Part 2 – Where did my creativity go?