Friction-Maxxing, or The Case for Elbow Grease

My friend Maaike Brinkhof recently wrote a post called “They will not break me.” Her point, in a nutshell: the more they push her to offload her work to an LLM, the more she grabs pen and paper, takes her time to think, and does her work slowly and thoroughly. And it finally kicked me into writing this piece that’s been bouncing around in my head for a while.

The best developers I know are lazy. Not the “I’ll do it tomorrow” kind of lazy. The “I’ll spend four hours writing a script so I never have to do this ten-minute task again” kind of lazy. That’s productive laziness. Being that kind of lazy can be extremely arduous. Still, that’s beautiful.

But there’s another kind of laziness creeping in, and it’s not beautiful at all.

You know those cleaning products on TV? The ones that promise you can just spritz a surface, give it a swift wipe, and the stain vanishes. No scrubbing, no effort, just spray and walk away. Sounds amazing, right? Except you actually try it, and the stain is still there. Grinning at you. Because it turns out, most stains need friction. They need you to bring the brushes and scouring pads, get on your knees and start scrubbing. The product helps, sure; there's an ingredient in there for every kind of stain. But the elbow grease does the actual work.

Nobody wants to hear that. We want the spritz. We want the magic. We want the easy route.

And now, enter AI.

AI is the ultimate “just spritz it” promise. Need to understand a complex codebase? Ask the AI. Need to write a test strategy? Let the AI draft it. Need to debug something? Paste it in and get your answer. And honestly, it works. You get results. Sometimes surprisingly good results.

But here’s the thing. When the cleaning product does the work for you, you never learn how to actually clean. You don’t understand why some stains need acid and others need alkaline. You don’t build the intuition for which surface can handle abrasion and which can’t. You just spritz and hope. You fully rely on a tool, and when it stops helping you don’t know what to do anymore.

The same is happening with AI and thinking. People are offloading the hard parts, the painful parts, the parts where you sit with a problem and struggle until something clicks. And that struggle? That friction? That’s where the learning happens.

I’m not saying AI is bad. I use it myself, regularly. It’s a good tool. A very versatile tool in my tool box. Like a really good cleaning product. But a tool works best in the hands of someone who understands the craft underneath. If you’ve spent years debugging, writing tests, understanding systems, then AI amplifies your ability. It takes away the tedious parts so you can focus on the interesting ones. That’s the good kind of lazy. That’s automation.

But if you skip the learning, if you let AI do the thinking from the start, you end up in a strange place. You can get things done. Your output looks fine. Maybe even impressive. And sometimes, getting it done is what counts. I won’t deny that. Ship the feature, fix the bug, move on.

But in the bigger picture? It shouldn't be enough.

Because sooner or later, you’ll hit a situation where the AI gives you something that looks right but isn’t. And you won’t know. You won’t have the instinct to say “wait, that doesn’t smell right.” You won’t have the scar tissue from the times you tried the wrong approach and learned from it. You’ll trust the spritz because you never learned to scrub.

Let me put it differently. Friction is not a bug. Friction is the mechanism by which we build understanding. Every shortcut you take through a problem is a piece of understanding you didn’t build. And understanding compounds. The person who struggled through debugging for years sees patterns that the person who always asked AI simply cannot see.

My friend Stu Crocker once said something that stuck with me deeply: “Quality is the absence of unnecessary friction.” I love that and made it my mantra. But the key word is “unnecessary.” Not all friction is bad. Some friction is the price of admission. It’s on us to learn to tell the difference between the friction we should eliminate and the friction we need to push through. No pain, no gain, as they say. And right now, too many people are removing all the friction, including the kind that makes them grow.

So here’s my take. Use AI. Be lazy in the smart way. Automate the repetitive garbage. But when it comes to understanding how things work, to building your mental models, to developing the instinct that separates someone who gets things done from someone who truly knows what they’re doing, embrace the friction. Get on your knees and scrub.

The stain won’t remove itself. And neither will ignorance.

Context is the System – What AI Prompting and Systems Thinking Have in Common

There is a quote I want to come back to, from Elisabeth Hendrickson:

“The word ‘context’ is shorthand for the cumulative effect of all the past decisions that we cannot change now.”

Lately I’ve been thinking about it from a completely different angle. Not in the retrospective sense of “how did we get here”, but in a prospective sense. What context do I need to provide, so that someone, or something, can make a good decision going forward?

That someone, in this case, is an AI.

Vision – Mission – Capacity – Learning

If you interact with a human colleague, the VMCL model describes the layers of context and information a human already has, which you don't need to include in your communication.

I mentioned the VMCL model from Drs. Cabrera in earlier posts: Vision, Mission, Capacity, Learning. On the largest scale, the context is the vision. One scale smaller is the mission: the next step towards reaching the vision. Capacity is the next smaller scale: what do I need to be able to do now to fulfill the mission? And Learning is the continuous feedback loop: do we have the right capacity to fulfill the mission? Is the mission on track, and is it still the right mission? And where are we on the way to our vision?

What information do I need to provide for someone, or something, to fit my V, M and C?

When providing instructions to a human colleague, they have some level of V, M and C already and can decide based on this context. An AI has none of that shared context. Which is exactly the problem.

Throwing the Problem Over the Fence

When people start using AI tools, the typical first instinct is to just type the problem. Short, direct. “How should I structure this service?” or “What’s wrong with this test?”

And the AI responds. It always responds. And it's not wrong. It's just… generic. Because it has to be. It knows absolutely nothing about you, your system, your team, your constraints, your past decisions. All it has is the few lines you gave it. In the case of coding agents, it has access to the codebase and can construct more context out of that.

You threw the problem over the fence, without providing much or any context. And now you expect wonders.

Enter Systems Thinking

In Systems Thinking, one of the first exercises when analyzing a situation is a Context Scan. You put your situation in the center and ask: what is around it? What is part of the picture even if it doesn’t seem obvious at first glance? Zoom out and see relations on a larger scale.

The idea is that you cannot understand an element without understanding the system it lives in. A microservice is not just a microservice. It is a piece of software with a team behind it, a deployment pipeline, a set of consumers, a data contract, a history of decisions, and constraints coming from outside. Take it out of that context and it becomes something generic and probably useless. Put it back in, and suddenly it becomes specific, with all its quirks, trade-offs, and necessities.

An AI has no idea about any of that. Unless you tell it.

Context is the System Prompt of Reality

When you provide context to an AI, you are essentially describing the system the solution has to live in. You are not just giving more information, you are drawing the boundaries. You are saying: here is what is relevant, here is what is not. Here are the actors, the constraints, the goals. Here is what we tried before and why it did not work.

Compare these two prompts:

“How should I structure my Auth module?”

vs.

“We run a multi-tenant SaaS platform, currently Auth is part of a Django monolith. The team is four engineers, experienced in Python and some Go. We want to move towards service isolation but have no dedicated platform team and limited ops capacity. What are realistic options for extracting Auth?”

Same question. Completely different answer. Not because the AI knows more, but because you have described the system and its constraints.

What Context Actually Means Here

Not every detail matters equally. The useful context dimensions for technical prompts are similar to what you would map in a Systems Thinking exercise:

Boundaries/Distinctions – What is in scope, what is not. What constraints are non-negotiable? Elements – What does your current system look like? Tech stack, team size, existing components? Relations – What depends on what? What would break if you changed this? Perspectives – What is the goal here? MVP, production-critical, proof of concept? Who is the user of this solution?

Sound familiar? It should. It is basically a lightweight systems analysis, just expressed as a prompt.
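The four dimensions can even be turned into a reusable prompt template. Here is a minimal sketch in Python; the function and field names are my own invention, not an established tool:

```python
# Sketch of a context-first prompt builder. All names here are
# illustrative; the point is that the template forces you to describe
# the system before you state the question.

def build_prompt(boundaries, elements, relations, perspectives, question):
    """Assemble a prompt that front-loads the system description."""
    sections = [
        ("Boundaries / constraints", boundaries),
        ("Elements / current system", elements),
        ("Relations / dependencies", relations),
        ("Perspectives / goals", perspectives),
    ]
    body = "\n".join(f"{title}: {text}" for title, text in sections)
    return f"{body}\n\nQuestion: {question}"

prompt = build_prompt(
    boundaries="No dedicated platform team, limited ops capacity.",
    elements="Django monolith, multi-tenant SaaS, team of four engineers.",
    relations="Auth is called by every other module; billing depends on tenant IDs.",
    perspectives="Goal: gradual service isolation, production-critical.",
    question="What are realistic options for extracting Auth?",
)
print(prompt)
```

Filling in the Auth example from above this way makes it hard to accidentally throw a bare question over the fence.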

The Cost of Missing Context

Without context, the AI optimizes locally. It gives you a solution that looks fine in isolation, but might be completely incompatible with your situation. You implement it. Something doesn’t fit. You prompt again. The AI corrects. Something else doesn’t fit. Rinse, repeat.

This is the fixes that fail pattern from Systems Thinking. Local interventions that create new problems elsewhere, because the broader system was never part of the picture. You end up spending more time in a feedback loop than if you had taken five minutes to describe the situation upfront. Even then, some iteration is most probably inevitable.

This is Not New. You’ve Done It Before.

Any experienced engineer does this instinctively before making a decision. They ask about the context. What is the load? What is the team’s experience? What is the timeline? What have you already tried?

They don’t jump to a solution. They build the mental model first. That is Systems Thinking in practice, even if nobody calls it that.

When working with AI, the same principle applies. The model is not going to ask you follow-up questions unless you set it up to do so. You need to front-load the system description.

Set up your AI agents to ask for more context before they start. Like with every tool, the context makes it more useful.
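One way to set that up, sketched as a plain system instruction in the common {role, content} chat-message format. The wording is my own, and the exact format your tool expects may differ:

```python
# A sketch of a system instruction that makes an assistant gather
# context before answering. Adapt the wording to your agent framework.

SYSTEM_INSTRUCTION = """\
Before proposing any solution, check whether you know:
1. Boundaries: which constraints are non-negotiable?
2. Elements: what do the current stack and team look like?
3. Relations: what depends on the part being changed?
4. Perspectives: who is this for, and what is the goal?
If any of these are missing, ask for them instead of answering."""

def messages_for(user_question: str) -> list[dict]:
    """Wrap a question in the common {role, content} chat format."""
    return [
        {"role": "system", "content": SYSTEM_INSTRUCTION},
        {"role": "user", "content": user_question},
    ]
```

With this in place, a bare "How should I structure my Auth module?" should come back as a request for the missing context rather than a generic answer.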

You are Part of the System

Today, we go meta. While listening to a podcast this morning, the sentence above stood out to me. But it was not really discussed further. “You are part of the system.” I had to stop the podcast to get my head around that sentence, and to understand just how much valuable information is in this statement.

When drawing a picture of a system, putting a multi-dimensional mental model into a 2D representation, one often ignores an important aspect: people forget to put themselves in the picture. Well, not literally, but the context they bring to the model. Your box of context holds a lot of stuff: biases, experiences, limitations, preferences, and implicit knowledge. You could see an analogy to Heisenberg's uncertainty principle: by measuring a thing, you influence that thing, so there is a limit to the precision with which you can measure. In the case of your model, it's the degree of reality you can achieve.

As an example, take a consultant analyzing a company’s processes. How does not only their presence, but their “measuring” influence people? When a manager analyzes their team, how much credit do they give to their own presence in the dynamics? Do you really understand what’s going on, when you are in the room?

A favorite answer in IT, often from testers but really from everyone involved, is: It depends! Of course it depends! That's the context. A big box. You might bring your team's or your company's context into the picture. But you forget just how much your own context influences your model.

Next time you think about a system, think about the “I” in the system. What is your personal relationship with each and every element? How might your context influence relationships and shape your mental model? In a way that makes it different from, and partially incompatible with, somebody else's model?

A more frequent example of this is using company inside language. You use a specific term and imply that the person you speak with, using the same term, means the same thing. This brings us down to the old topic of semantics, which is so important.
When person A tells person B, “Can you test this feature, please?” Simple sentence, so much context. What is the feature? Do both have the same picture in mind of what the feature is and is not? Is there documentation available? Do both interpret the documentation the same way? And what the heck does “test” even mean in this case?

Performing a P-circle of a system, which means looking at the system from many different perspectives, you influence this move as well. You think you know the view of another person or element of the system. Again, you bring your whole context to the party. So even by trying to view the system through somebody or something else's eyes, your context influences it.

A few years ago, at my current workplace, I adjusted and extended a process diagram. The process was lacking details – from my point of view. I was also interviewing other stakeholders to get their input on steps they are involved in.
Others were looking at the process from a different angle, from a different role, from a different background. They often did not understand the model, or at least parts of it. When I talked them through it, they understood a bit more. Implicit knowledge became explicit knowledge. Relations between steps and people and inputs and outputs became clearer. Not entirely, but a bit more. I explained my context to them. A box that was mentioned nowhere, and yet it was there, forming and shaping the whole model.

Keep the “I” in mind when you describe your next model.

When you outsource testing to the AI, you lose the ability to understand your system

When we reduce the creation of code to a prompt, we lose the ability to actually understand the system. Humans need to stay closer in the loop. Testing becomes more relevant with AI, not less.

Engineers use AI to generate code these days, and it will only become more every day. There are coding platforms that create whole apps based on a prompt, without you ever seeing any code. AI agents, Ralph loops, and other AI-powered coding tools and approaches make it easier to create and adapt larger portions of your system in much shorter time. It's only a small prompt away.

This post is not about missing guardrails or the danger of AIs doing something bad. Let’s assume the AI behaves within its boundaries. It doesn’t wipe your hard disk, and doesn’t send all your beachwear holiday pictures “home”.

M = I x O

Knowledge about a system, also called the mental model, is in essence organized information. M = I x O. Mental Model = Information x Organization.
Systems thinking and systems seeing are all about the organization of information. There are many approaches available, like DSRP by Drs. Cabrera or the one Donella Meadows described in “Thinking in Systems: A Primer”. Ruth Malan, in her systems seeing adventures, presents different approaches to organize information about a system.

Where does information come from? In today's world, there is too much information. All the time. An endless mixture of signal and noise is sent towards us. Especially noise. Endless noise, fishing for your attention.
When it comes to information about an IT system, the answer is easier. Or more difficult, because it doesn't come by itself. The answer is Testing! In ALL its facets. From reading code to observing production.

With testing, we trigger the system to generate information, which we can then organize into our mental model, our understanding of how the system works. This can be code reviews, executing the code, or observing output like monitoring data. The variety of ways to test is, as most readers might know, huge. I won't go into details here; otherwise this would become a book, and not a short post.

What happens if many levels of testing are not performed by humans, but by the AI? Or, as in probably even more cases, not performed at all? When the AI overlord is done, we click around and see what has been produced. And what we see looks good. And the most common cases are working, so it should be fine.
When your system is small and in early stages, this amount of testing is probably sufficient.
But what about more complicated or complex systems? Systems with many components not exposed through a nice-looking frontend? When AI writes its own tests, and they pass, does the system do what we think it should do? Well, it works. The tests are passing. Right?!

One might think that by knowing the prompts used to generate the code, they know what the system is doing. To be honest, we have had requirements and specifications for a long time. And they were flawed, outdated, and incomplete. And systems were always behaving slightly differently. There is always a lot of implicit knowledge in a system that you don't find in writing anywhere.
And do you actually store all your prompts in a structured way that humans can read and learn from?

All the little bits of valuable information on how a system works, people used to collect meticulously while reading code, writing unit and integration tests, or during exploratory testing. Now you hand nearly all of this to an AI. Because writing tests is so boring. Several months ago, those AI-written tests were, to be honest, crap. They have become better. But it doesn't matter. Because the human in the loop distances themselves from the code, and the mental model of the humans behind the system is missing a lot of information.

And still, we run and use these applications. We trust them enough to do the right thing. This is a growing risk.

The human needs to stay in the loop. So far, engineers who do a thorough job don't see a 10x improvement in productivity. They see a change in their tasks: they spend more time reading code rather than writing code. And the risk is that people grow weary of that, and sooner or later the code reading gets less and less. Your system becomes a black box. From the outside, looking at input and output on a higher level, it might be okay. But you have lost all control of what's happening inside. You cannot distinguish between risks anymore; the whole system grows into one big risk. What happens when you need to debug something, and you don't know where to start without asking the AI?! When you need to change a color or the position of a UI element, and you don't change those few characters yourself, but have to hand it to the AI?!

So at the end of the day, the human needs to stay in the loop, to prevent a rising risk. Complex systems tend to be operated at the rim of failure. By adding more and more AI code without knowing what it does, this rim becomes narrower and narrower, and systems will fail more often. If it's a leisure app for you and your friends, or something you put together to support your presentation, that is one level of system. Do whatever you want.
If it is a piece of software running on critical infrastructure, in finance, in a medical device, or in a car, then it makes a huge difference.

Use AI wisely. Don’t trust it all the way. Don’t trust it at all. Beware the human factor in testing. Continue to collect as much information as possible. Organize the information in your mental model. Understand the risks of changes. Keep the changes rather smaller and more often to make them more digestible. Keep monitoring your applications closely.
Keep your mental models up to date with reality. Stay in the loop!

My expectation is that people who are good at testing will be better at working with AI and will create more reliable and robust results. Keep testing!

Systems Thinking seems to be important, but how does it work?

Systems Thinking is basically Thinking. Systems Thinking approaches or frameworks have been invented to bring structure to thinking and enable modeling and sharing mental models. This also helps to practice thinking, when you have some more tangible ideas to talk about. Because thinking about thinking is quite on the meta level.

M = I x O
A Mental Model is Information combined with Organization

Drs. Derek and Laura Cabrera – Cabrera Labs Podcast (from memory, so might not be 100% accurate)

Systems Thinking provides the tools to organize the information we get about the system we analyze.

We are all thinking, all the time. Yes, even the moron that first comes to mind. We create mental models of the world around us. The goal is to understand reality, and our mental models try to reflect reality as much as possible, and as much as is useful for us. It doesn't matter how much of an expert we are in certain areas. We don't need to understand everything around us down to the last level of detail, if there actually is one. That would cost too much time and doesn't make any sense. We can accept many things as is, put them in a box, label them as “works”, and ignore them. While many will cry out loud, thinking that “milk comes from the supermarket” can be a sufficient mental model for some, at least for a while.

When asked “How does the Heisenberg compensator work?” Star Trek technical adviser Michael Okuda responded: “It works very well, thank you.”

I’m not an expert in systems thinking. I personally believe that I actually started thinking properly at some point in the last 10-15 years. Which leaves more than 30 years unaccounted for. Of course I was thinking before as well, and my thinking worked well enough to make it through life. But my thinking was more improvised, applying what used to be common sense and a quick grasp of how things work on a shallow level. When I read my first book on Systems Thinking, more structure appeared in the way I looked at things. Donella Meadows’ book “Thinking in Systems: A Primer” gave me tools to organize my mental models. I didn’t need to re-invent the wheel for every system I wanted to explain. Several of the insights I gained were happily mind-blowing for me.

Systems Thinking is about having a set of methods and approaches to analyze a system and organize the collected information. Whether one of the many frameworks and approaches helps you with that, or you have your own way, doesn’t matter. When I started listening to the Cabrera Labs Podcast last year, I learned about their way of organizing information in mental models. I found it rather appealing as well.
Then my friend Vernon brought Ruth Malan’s Systems Seeing Adventure to my attention. This includes a bunch of tools I had not heard about before. They were interesting to use, especially as these tools produce output to share with other people. And it reminds me to continue with my own Systems Seeing Adventure as well.

In 2013 I started using Mind Maps to model the system under test. I called them Virtual Test Models. They helped with identifying the different elements of a system and dissecting them. That was relatively helpful until I started putting relationships between nodes into them. This created a huge visual mess, and the whole thing was not usable anymore. It was a clear sign that the system I had to test was too complex for simple approaches like a mind map. But it was a starting point to help identify issues with the system, or to talk people through it. So: goal achieved. It was useful to some degree.

Doing Systems Thinking

There are many frameworks out there for you to discover. I want to give you a generic overview of the things you might want to look for in any given system. And like with everything else, practice is key. Whatever approach you choose, when you want to get better, practice.

Just recently I did a systems thinking exercise with my daughter. I was trying to teach her the DSRP approach from Drs. Cabrera by looking at the kitchen door. I will add examples from that exercise to the categories below.


Remember, these categories are there to help organize the information you gain about the system and describe your mental model.

Boundaries / Distinctions

What is and is not my system under evaluation? How do I distinguish between certain elements? Where does one end and the next start?

Example: What do we take into account as kitchen door? We decided to include the door frame as relevant for the model. We decided that mounting the frame in the wall itself is not relevant for now, as we want to focus on the moving door part.

Elements / Parts / Whole

What does the system I look at consist of? What is it part of? Can I take one element and break it further down? Can I zoom out and see what it’s part of? Can I zoom in, if there is something of relevance to learn more about?

Example: We established the door leaf itself and the frame it is attached to. In the door leaf we found the lock and the handle. We virtually dissected the handle and all its screws and elements. On the other side of the door leaf we found the hinge joints. To look at it as a kitchen door, we established that the frame, as part of the kitchen wall, is relevant. So we looked at the parts of the door frame as well.

Connections / Relations / Actions

Things are connected. They are not hanging loosely in the air. How are they related? Are they related? Are there actions or flows from one element that influence another? Inflows, outflows, actions, reactions. Feedback cycles! There can be a lot going on.

Example: Obviously, the hinges of the door leaf are connected to the hinge parts of the frame. But there is also a relation between pushing the handle down and retracting the latch bolt. The handle, however, is not connected to the other bolt underneath it. That one is connected to the lock itself and is moved in and out with the key. These bolts are related to the slots in the frame where they can be placed.

Actors / Motivations / Perspectives

From which angle can we look at the system? And these don’t have to be people. They can be elements of the system or even relations. What is the goal of individual actors? What are the driving forces? What is their motivation? What we learned from crime shows and movies: when it comes to people, you want to follow either money or emotions (love or greed).

Example: We mostly took the perspectives of people involved with the door. The person mounting it, and what is relevant for them. The person buying it, and why. The person using it. We also quickly took the perspective of the door leaf or the handle to see how the view of the model would change. From the handle’s point of view, certain things are irrelevant at first sight, while others become more obvious.

And these elements can be applied to any system. You can look at whole countries, an animal, a piece of software, a bicycle, a company, the house you live in, the game you want to play. Just try it.
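As a sketch of how the information from such an exercise can be organized, here is the kitchen-door model as a small Python data structure. The structure and names are my own illustration, not part of DSRP itself:

```python
from dataclasses import dataclass, field

# A minimal, hypothetical container for DSRP-style notes about a system.
@dataclass
class SystemModel:
    name: str
    in_scope: list[str] = field(default_factory=list)       # Distinctions: is
    out_of_scope: list[str] = field(default_factory=list)   # Distinctions: is not
    parts: dict[str, list[str]] = field(default_factory=dict)  # Systems: part / whole
    relations: list[tuple[str, str, str]] = field(default_factory=list)  # Relationships
    perspectives: list[str] = field(default_factory=list)   # Perspectives

door = SystemModel(
    name="kitchen door",
    in_scope=["door leaf", "frame"],
    out_of_scope=["mounting of the frame in the wall"],
    parts={
        "door leaf": ["lock", "handle", "hinge joints"],
        "frame": ["hinge parts", "bolt slots"],
    },
    relations=[
        ("handle", "retracts", "latch bolt"),
        ("key", "moves", "lock bolt"),
    ],
    perspectives=["installer", "buyer", "user", "the handle itself"],
)
```

Writing the model down this way is not the point in itself; the point is that each field forces you to ask one of the four questions.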

The System of a Button

Buttons are all over the screen I’m looking at. There will be buttons on the screen you’re reading this on. There are buttons in your house, in your car, everywhere. And every button is more than just a button. A button is a complex system. “Come on, now you exaggerate!”, you might say. Let me explain.

A button has a purpose. When you press or click the button, something should happen. From the action of pressing the button follows a reaction. Or rather, more than one reaction. The button should trigger some expected behavior of the system that it is a part of. And for the person pressing the button, the action triggers an expectation of something happening.

How quickly does the reaction happen? Is there some sort of feedback expected? When pressing a button that turns on a device like a light bulb, the reaction should follow instantly. Sometimes the reaction of the attached system is delayed. Does the user know about this? Does it trigger the user to press the button again? And what happens when the button is pressed again?

The existence of the button alone creates this expectation: when I press this button, something happens. Either we know exactly what, or we don’t. A button clearly marked as “on/off” sets the expectation that pressing the button (action) turns the attached device on or off (reaction).

The button also has a reaction to being pressed: a feedback, some haptic feeling to it. Even virtual buttons are often designed to give some sort of feedback. The user interacting with the button expects not only a reaction to the button being pressed or clicked, but also some kind of feedback that the pressing of the button has happened successfully. If you have ever touched a screen with a flat, static button behind it, where there is no feedback at all, you don’t know whether you have touched it or not.

A button also has a distinction. You can usually tell what the button is and what it is not. The big red thingy is the button; everything around it is not. Does it fit its vicinity? Do the color, size, and form fit the system? Is the button clearly marked with what it does?

In IT, designers and developers can influence the border of the button. Sometimes the button is a visual representation in a larger layer, and not only a click on the button, but also a click in the vicinity of the button counts as interaction with it. The is / is not border has been moved. Whether that is expected or not is a matter of design and experience.

A button consists of many parts. With a physical button, of course it does: the housing, the wires, the mechanism to register the button being clicked, the movable part, the parts that push the button back. But virtual buttons also have many parts. There are properties of color and size. Does it show a border, some bevel effect? What action is linked with it? The reaction needs to be defined, e.g. changing the color, playing a “click” sound.
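These parts of a virtual button can be sketched in code. This is a toy model, not any real UI framework, and all names are illustrative:

```python
# Toy model of a virtual button: visual properties, a linked action,
# and the feedback reactions the user perceives.

class Button:
    def __init__(self, label, color, on_press):
        self.label = label
        self.color = color          # visual property
        self.on_press = on_press    # the linked action
        self.feedback = []          # reactions the user perceives

    def press(self):
        # Reaction of the button itself: feedback that the press happened.
        self.feedback.append("color flash")
        self.feedback.append("click sound")
        # Reaction of the attached system: the actual behavior.
        return self.on_press()

light_on = {"value": False}

def toggle_light():
    light_on["value"] = not light_on["value"]
    return light_on["value"]

button = Button("on/off", "red", toggle_light)
result = button.press()   # the attached system reacts: the light is now on
```

Even this stripped-down sketch already separates the two reactions discussed above: the feedback of the button itself and the behavior of the attached system.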

And then there are the perspectives. There is the person installing the button. Where do they put it? In real-world examples, that depends on where the system the button interacts with resides. You need cables, a place to mount it, etc. There is the view of the person using the button. Is it clearly marked what the button is doing? Can they reach the button? What is their expectation when pushing it? The list goes on.

You can even take the perspective of the button. Where does it sit? Does it have wires attached properly? Is it visible? Does every part move as expected?
Or the perspective of the system the button should interact with. Is the button properly wired? Does it provide the right type of signal to trigger what it should trigger? Does the button expect some feedback?

Pressing a button is an experience. The rule of industrial design is that great design is invisible: the user can flawlessly use the button, gets the reaction they expect, and even finds the button where they expect it. If something is off, the experience is impacted.
Sure, there are buttons that are highly visible on purpose. Even then, the design is intentional.

The next time you press a button, place a button, or test a button: think about what is attached to it.

Empathy – Understanding Motivation in Systems

In the Systems Seeing Adventure we used an Empathy Map on Day 9. This is a nice tool to look at a system from other perspectives, in this case from other humans. My current favorite approach to systems thinking is the DSRP method. The P stands for Perspective. The idea is to use the perspective of any element of the system to improve its model. Empathy can be used for human perspectives.
Donella Meadows used actors and their motivation to describe the empathy part: why is an actor in a system, and what is their goal and motivation for participating in it?

Empathy helps to understand other participants in the system. What is their job, what is their goal? This can also go to a more personal level. What is their story? Where do they come from? What experience do they have? How does that influence their behavior as part of the system?

Important note: You don’t have to share the opinion, motivation, or goal of the other person. There is a distinction between emotional and cognitive empathy.
Cognitive Empathy: Understanding someone’s thoughts and perspective, like seeing their point of view.
Emotional Empathy: Sharing and feeling another person’s emotions, almost as if they were your own.
For Systems Thinking, the cognitive empathy aspect is the most relevant one. You can take the point of view of an a$$hole to understand why they act the way they do, but you don’t have to share their feelings. That’s a huge difference.

When you apply this lens it is also important to understand: what part of the system do they see? What part of the system is relevant for them? What other elements might they see that you don’t? Does this information have an impact on the system? These are a lot of unknowns, and speculation might be necessary. Sometimes it’s already helpful to know that there is an unknown component to some element of the system that drives its behavior.

In my example in the exercise the other week, I filled out the empathy map for an administrator at my energy net provider. They need to check a circuit diagram, the completeness of the information, and the certificates of the electrical components used.
This is all they see of my specific problem. They are not aware of all my other context, so I need to treat this person as such. What is their motivation? Their job is to ensure that home solar power plants installed in people’s houses do not endanger the safety of the energy network. Well, that is their job description. Their motivation to do the job is probably to earn money. Where do they come from? What is their background?
They see dozens of these documents a day. What is different about my document? How could this influence the way I communicate? In my case, I tried to explain to them the specific context and why, in this case, I can provide the information rather than the electrician, as would be usual.

This way I discovered an impact on somebody else’s system, by understanding how my context might be an issue for them. I could also not care at all and just fire off angry emails. Does that help me? No! Does it help the other person? They probably already get angry emails and calls from people who don’t understand. It would only make them less willing to solve my situation. And I can relate to that.

Looking at my professional background, especially the first 10 years, where I worked on one huge project: as a Release Test Manager I worked together with so many different roles and people. What helped me back then was to understand what each role’s motivation was. This helped me a lot in understanding the whole project. And that way I could better “bend” the system to my needs when necessary.

My advice is to learn more about the other roles you work with in a company. Understanding what they need to do their job helps you to provide the relevant information. That way they are not blocked, coming back to you, disturbing you during another task, or playing ping-pong with tickets. And maybe you find a better way to provide the information they need.

For the emotional empathy aspect: Put your ego aside for a moment and think about the others. Maybe they had a rough day and are in a bad mood. They are only people, just like you. And they also just want to do their job. You don’t need to become their counselor. Just have some empathy.

Systems Seeing Adventure – Day 10 and 11 – Telling it backwards

In case you wonder what this Systems Seeing Adventure is all about, there is this post from Ruth Malan explaining it. It’s a 31-day systems seeing adventure, and I decided to take you with me on the journey.
Welcome to the next block of applying different lenses to situations, to analyse the system better.

Day 10: Circle of Cares and Concerns

Explore the concerns and orientations of various people or groups (“stakeholders”) who impact, and are impacted by, this situation you’re exploring.

The template gives some structure to this exploration.

  1. Put the situation at the center.
  2. Identify stakeholders who are directly involved, and why and how they care about the situation, and impact its unfolding. Identify their cares and concerns as they relate to the situation, and how they impact it.
  3. Move out a level to who else cares, and consider how they are related to (and influence) directly involved stakeholders. What are their cares and concerns (as relevant to the situation)?

I produced another piece of beauty here. My drawing and writing skills are splendid. Anyway, I’m digressing.

This tool is a nice way to focus on the empathy and motivation of the people involved in the process. Looking at what these roles or people care about and what they are concerned about helps to understand the situation even better. Why is someone focusing on certain aspects? Either they especially care about it, or they are concerned about it and try to mitigate it.

In my example I have a sales person in the loop. Nothing against sales people; they are important for companies to make money and pay salaries. In my case, they want to sell their product to me. Their motivation, from what I sensed, is to earn the most they can from me, whether the solution makes sense or not. At least in most cases. In turn, I can add a concern to myself in the picture: that I get offered a very expensive non-solution.

Looking at the motivations of actors in a system through the lens of cares and concerns is a helpful way to understand them more distinctly. It can help to better explain certain actions or reactions.

Adding the second level of cares and concerns is also helpful, because these are drivers for the people on the first level. This can explain even better why they care or are concerned. And in the analysis stage it helped me find more motivations and reasons for their behavior.

Taking this additional level of actors into account broadens the view. It looks at influences on the system that might not be directly part of the system in the first place, like adding a new dimension to it. It collects relevant information for your model without necessarily adding more complexity to it.

Intermezzo: Sensemaking and Framing

There is a bunch of quotes around sensemaking on page 18 of the slide deck. The key message I took from some of them is that detecting the problem is key to understanding the situation. When you come into a new situation, the first step is to find the core problem, the problem at the center of the situation.

Day 11: Narrative History: Unwinding Threads

Pick up a thread in the situation (in your verbal or visual narrative, or in your experience of it), and write a few paragraphs exploring how it came to be.

What were the various paths of influence and unfolding? What are some stories of the history that you were there for, or have heard others tell? While you have time (in the 15-20 minute window), pick up other threads and explore those and notice interconnections.

I don’t want to bore you with the details of my interaction with the administrator at the energy net provider. Looking at the history of this single thread and going further back, to how this interaction even came about, is like tracing a specific path through the situation.

I may not have done the exercise as intended. I told the story backwards in time, like in one of my favorite chapters of the Kangaroo Chronicles. The author Marc-Uwe Kling tells the story backwards, starting with the end of an evening. Step by step he goes back in time through the evening, and one weird situation after another is explained by how it began.
So I went back from the status quo step by step. That way I focused on where I came from in this specific situation. Knowing myself, if I start at the beginning, I will lose focus. Telling the story backwards helped me.

I picked up one other thread. And while the second thread was not directly connected to the situation as it evolved, there were some interconnections. If some decisions had been made differently earlier in the situation, these threads would probably have changed places, and everything would have been different. To stop speaking in riddles: if I had decided that the “professional” solutions were worth the money, the company would have taken care of registering the installation with the net provider. They would have had all certificates at hand, as the installed components would have been of a different – let’s call it – category, where these certificates come as a matter of course. Then I would never have been in the situation of communicating with this person at all. That person would then have been on the second level of the circle of cares and concerns from Day 10.

There was a quote from the fantastic Elisabeth Hendrickson on that page, from which I picked out this sentence:

The word “context” is shorthand for the cumulative effect of all the past decisions that we cannot change now.

Elisabeth Hendrickson

Putting Lipstick on the Pig: The Downside of Automation

One issue that I see every now and then is that we tend to automate things because we can. I’m guilty of that myself. Ever since I started writing scripts and tests and tools and whatnot, I have been automating things. Because I can.

Don’t get me wrong, automating things is useful and a fantastic opportunity to reduce stupid work. But all too often we automate the given process as-is and are happy. We miss the chance to optimize the process. We automate processes that are too tedious, too error-prone, and bind too much capacity. Instead of asking why the process is the way it is, we start drafting what we need to automate.

First, why does the process look the way it does? What does the process achieve? Where does it come from? Are there any relations that are not obvious?

Second, are there steps that we can optimize easily? And I mean optimizing the process, not automating it yet. Can we simplify a form? Can we remove unnecessary steps? Are all steps still relevant? Too often we deal with legacy in our processes. “We have always done it that way.” One of my favorite quotes, and I’m guilty of using it myself too often.

Third, should we automate every step of the process? Are there restrictions that require human interaction? I’m currently working in a medical device context, and I have some experience in other regulated environments as well. Sometimes there are steps that need a human in the process. Sometimes it’s a step where some human evaluation is not the worst thing to have.
In general this step goes back to the Jeff Goldblum line in Jurassic Park.

“Your scientists were so preoccupied with whether or not they could, they didn’t stop to think if they should.”

Fourth, continuing on the “think if you should” statement, which parts of the process need automation the most? Why are these steps so awful for a human to perform? Do they have to be so awful? Isn’t there something we could do about it? Yes, this is a repetition of “second”, but we – at least I – tend to settle too quickly for “let’s automate”.

Fifth, is the effort of automation really worth it? There is a rather old chart from xkcd that helps to quickly evaluate how much time you save by automating. On the other hand, sometimes it’s worth automating even if the immediate ROI is not positive.
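
The arithmetic behind that xkcd chart is simple: compare the time the task costs over some horizon with the one-off effort of automating it. A minimal sketch, with made-up numbers for illustration:

```python
def worth_automating(minutes_per_run: float, runs_per_week: float,
                     automation_hours: float, horizon_weeks: float = 5 * 52) -> bool:
    """Rough break-even check: is the time spent on the task over the
    horizon (xkcd uses 5 years) larger than the effort to automate it?"""
    hours_spent = minutes_per_run * runs_per_week * horizon_weeks / 60
    return hours_spent > automation_hours

# A 10-minute task done 5 times a week costs about 217 hours over 5 years.
print(worth_automating(10, 5, automation_hours=40))   # 40 hours of scripting pay off
print(worth_automating(10, 5, automation_hours=300))  # 300 hours do not
```

The second call is exactly the “is it really worth it” case: the effort exceeds what the automation ever saves within the horizon.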

Sixth, use the automation to spot more potential for process automation. When I automate a test case, I’m also doing exploratory testing. The perspective of automation is a wonderful opportunity to evaluate a system. Use the chance to spot issues in the process. You don’t need to automate everything; maybe there are details that can be adjusted in the process without really changing it.
An example from some time ago was automating documentation updates. The automation was supposed to work across two or three documents. The only issue was that the part of the document to update looked slightly different in each one. One chat with the colleague in charge of the documents and three edits later, that part had the same structure everywhere, and I could use the same script in all three places.

Seventh, implement the automation. Finally. We could have saved so much time by just automating it from the start. But then we would have lost great potential to optimize the process. Processes need refactoring as well.
I’m currently working on a huge refactoring of a central process. The process has reached a state where every step is defined in detail. That is the same point as in programming: when a function does what it should, you can start refactoring.

When you define a new process, think about automation from the beginning, and design the steps so they are easier to automate.

Happy Automation!

Systems Seeing Adventure – Day 6-9: Looking at a situation through different lenses

I have decided to bundle a few days together. The reason is that I now need to come up with a situation that the rest of the adventure will use for its exercises. I need to find something that I can share publicly, but on the other hand I also want to look at a work-related situation. So I will have a simpler situation for the blog and a more complicated one for myself. In the posts I will share examples from the public situation, but also share insights from the work situation.

Day 6: Describe Focal Situation

Think of a situation you’d like to explore with a systems lens as we practice various systems approaches and views. It’s good (since these 15-20 minutes of daily journaling add up) if it’s something that matters to you to explore and understand, and begin to shape responses to.
Write a few paragraphs describing the situation.

As the public scenario I will use my endeavor to get solar panels for our house. The project has finally come to some point of closure, so it should provide enough material for the exercises.

  • Terraced house
  • Roof facing WSW
  • Three large roof windows – not much roof area left
  • Old fuse box – no space for necessary electrics
  • Got three calculations from companies with unreasonable offers
  • The fourth company suggested placing as many balcony power plants as possible and having an electrician register it as a full solution

The work example I chose is our Software Development Life Cycle. As this is company-internal, I cannot share any explicit information. But I can share general insights.

Day 7: Context Scan

Consider the broader context of the situation you identified (in the previous activity). One way to make this visual: Put your situation in a circle (named abstraction) at the center of your page. What’s around it? What is the “context” for this situation?

Not everything might be readable, sorry.

Insights

One thing I realized again is that visualizing helps with thinking of more and more topics for the situation. Putting it on a flat map somewhat reduces the ability to visualize all connections. While drawing, I remembered how interconnected and intertwined certain topics are, even though you can treat them as somewhat individual streams. Looking at individual streams would be a change of perspective. It would keep the rest of the system but change the importance of certain connections.

For example, looking at the new product from a manufacturer of balcony power plant solutions keeps almost all other topics relevant. This product needs to provide advantages over the existing offers and needs a solution for the fuse box situation. But as I know now, it also has an impact on the energy network provider, as a specific certificate needs to be provided.

The lesson I learned from this small situation applies many times over to the work situation I chose. In the solar power example I have about a handful of actors involved (net provider, electrician, sales people, myself). With my company’s SDLC, the number of actors alone is many times higher, each with their individual motivation. The context is so much larger, as the SDLC is a rather central part of the company, which is a system by itself. It gets complex (intertwined) enough by its central nature alone. The interfaces with other processes, the number of rules that apply, and the number of people more or less involved in the process are nearly endless.

Day 8: Sketch the Situation

Continue exploring the situation you described. Draw the situation, using words and images, but keep it informal and sketchy. The situation sets the general frame (so we aren’t bringing the whole world into our picture).

I gave it my best to draw something. As I said the other day, drawing or sketchnoting is not my strength. Here you can see why.

My insight for today is mostly that when sketching the situation, you reduce certain things to graspable elements of it. Virtual elements are too hard to draw, so you focus on the more or less real-world things.

I’m not sure yet if this is a good or a bad thing. Maybe it’s a way to put a filter on the situation. The instructions went on to cover how elements of the system interact, what they care about, and their concerns. I tried not to over-clutter the image. This takes away a lot of information that I had written down or put in the other diagram, and it is by far not everything I have in mind for the situation. There are so many relations and interconnections that drawing them all would leave the picture in a state where you could not see, understand, or read anything. So I would say a bit of focus is good. I definitely did not focus on all the most relevant bits and pieces, but then again: most relevant from which perspective?

For my SDLC problem I re-focused a lot, from individuals to groups, or representing complicated relations by only a single element. But there were too many parts and parties involved. I tried my best, but I was not happy.

Day 9: Practice Empathy

Pick one of the people in your situation sketch (Rich Picture, if you did that in the previous activity), and explore their experience of this situation, using an Empathy Map (see instructions on the image.)

I would say I am reasonably good at empathy and at understanding the motivations of people in systems. Doing it with this Empathy Map helped to organize my thoughts a bit better.

Insights

Mostly it was a confirmation that I’m on the right track with what I’m doing already. Splitting another person into all these different perspectives is really helpful. As I’m talking about a situation that is now resolved, I can look back at the interaction with this person. Filling out the empathy map helped me re-evaluate the communication that happened. I think I felt empathy for the person from the first email that reached me. I tried to explain the situation from my point of view as well as possible, as I was aware that this was most probably not a standard case. When I found out at one point that I had no idea what I actually needed to provide, I politely asked them for an example and thanked them, explaining my next steps. Had I not felt empathy for the person and understood their situation, I might have reacted differently.

I believe that I have empathy for different roles, mostly at work but also in many other areas. From a systems thinking point of view, taking different perspectives is essential. Empathy is the key to taking the perspectives of other people. The empathy map helps with that in a more formal way, which will come more naturally with practice.

In Systems Thinking you can also take the perspective of any other element, relation, or distinction. For non-human elements of a system, empathy is limited. But I guess it will still help to a good degree to evaluate non-humans this way. I will think more about this and come back to you.

That’s it so far with the exercises. I found it very interesting to use different lenses for a change. I will practice more with those when looking at other systems.
