Notes from Jonathan Bach’s “Session-Based Test Management”

Tracking exploratory testing work is difficult for test managers. We don’t want to micro-manage testers; we want them to explore to their heart’s content when they test, but we won’t know how much progress has been made if we don’t track the work. We also won’t know what sort of problems testers encounter during testing unless they have the nerve to tell us immediately. Jonathan Bach’s “Session-Based Test Management” article has one suggestion: use sessions, uninterrupted blocks of reviewable and chartered test effort.

Here are a few favorite notes from the article:

  • Unlike traditional scripted testing, exploratory testing is an ad hoc process. Everything we do is optimized to find bugs fast, so we continually adjust our plans to re-focus on the most promising risk areas; we follow hunches; we minimize the time spent on documentation. That leaves us with some problems. For one thing, keeping track of each tester’s progress can be like herding snakes into a burlap bag.
  • The first thing we realized in our effort to reinvent exploratory test management was that testers do a lot of things during the day that aren’t testing. If we wanted to track testing, we needed a way to distinguish testing from everything else. Thus, “sessions” were born. In our practice of exploratory testing, a session, not a test case or bug report, is the basic testing work unit. What we call a session is an uninterrupted block of reviewable, chartered test effort. By “chartered,” we mean that each session is associated with a mission—what we are testing or what problems we are looking for. By “uninterrupted,” we mean no significant interruptions: no email, meetings, chatting, or telephone calls. By “reviewable,” we mean a report, called a session sheet, is produced that can be examined by a third party, such as the test manager, and that provides information about what happened.
  • From a distance, exploratory testing can look like one big amorphous task. But it’s actually an aggregate of sub-tasks that appear and disappear like bubbles in a Jacuzzi. We’d like to know what tasks happen during a test session, but we don’t want the reporting to be too much of a burden. Collecting data about testing takes energy away from doing testing.
  • We separate test sessions into three kinds of tasks: test design and execution, bug investigation and reporting, and session setup. We call these the “TBS” metrics. We then ask the testers to estimate the relative proportion of time they spent on each kind of task. Test design and execution means scanning the product and looking for problems. Bug investigation and reporting is what happens once the tester stumbles into behavior that looks like it might be a problem. Session setup is anything else testers do that makes the first two tasks possible, including tasks such as configuring equipment, locating materials, reading manuals, or writing a session report.
  • We also ask testers to report the portion of their time they spend “on charter” versus “on opportunity”. Opportunity testing is any testing that doesn’t fit the charter of the session. Since we’re doing exploratory testing, we remind and encourage testers that it’s okay to divert from their charter if they stumble into an off-charter problem that looks important.
  • Although these metrics can provide better visibility and insight about what we’re doing in our test process, it’s important to realize that the session-based testing process and associated metrics could easily be distorted by a confused or biased test manager. A silver-tongued tester could bias the sheets and manipulate the debriefing in such a way as to fool the test manager about the work being done. Even if everyone is completely sober and honest, the numbers may be distorted by confusion over the reporting protocol, or the fact that some testers may be far more productive than other testers. Effective use of the session sheets and metrics requires continual awareness about the potential for these problems.
  • One colleague of mine, upon hearing me talk about this approach, expressed the concern that senior testers would balk at all the paperwork associated with the session sheets. All that structure, she felt, would just get in the way of what senior testers already know how to do. Although my first instinct was to argue with her, on second thought, she was giving me an important reality check. This approach does impose a structure that is not strictly necessary in order to achieve the mission of good testing. Segmenting complex and interwoven test tasks into distinct little sessions is not always easy or natural. Session-based test management is simply one way to bring more accountability to exploratory testing, for those situations where accountability is especially important.
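
The session-sheet bookkeeping described above—TBS proportions plus the on-charter/opportunity split—can be sketched as a simple record. This is only an illustrative sketch; the field names and the example charter are my own, not from Bach’s article, and real session sheets are plain text files with more detail.

```python
from dataclasses import dataclass

@dataclass
class SessionSheet:
    """One session's reviewable record, loosely following the article."""
    charter: str           # the mission: what we test or what problems we look for
    tester: str
    duration_minutes: int
    # TBS metrics: relative proportions of session time (should sum to 100)
    test_pct: int          # T: test design and execution
    bug_pct: int           # B: bug investigation and reporting
    setup_pct: int         # S: session setup
    charter_pct: int       # time spent on-charter; the rest is opportunity
    notes: str = ""

    def opportunity_pct(self) -> int:
        return 100 - self.charter_pct

# Example: a 90-minute session, mostly on-charter
sheet = SessionSheet(
    charter="Explore file import with malformed CSV inputs",
    tester="alice",
    duration_minutes=90,
    test_pct=60, bug_pct=25, setup_pct=15,
    charter_pct=80,
)
assert sheet.test_pct + sheet.bug_pct + sheet.setup_pct == 100
print(sheet.opportunity_pct())  # → 20
```

Even this toy version shows why the metrics are cheap to collect: the tester only estimates a handful of proportions per session, and the manager can aggregate them across sessions at debriefing time.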

Lessons from James Bach and Michael Bolton’s “A Context-Driven Approach To Automation In Testing”

Automation has gained traction in the testing community in recent years because automating tests has been sold as a solution to many of the common problems of traditional testing. And the advertising worked: many software businesses now rely heavily on automated checks to watch for known system bugs, and even to run performance and security tests. It is no longer surprising that automation is widely accepted as one of the cornerstones of effective testing, especially now that releases to production have become more frequent and fast-paced for most businesses. I use such tools at work myself, because they complement the exploratory testing that I do.

James Bach and Michael Bolton, two of the finest software testers known to many in the industry, however, remind us in their whitepaper “A Context-Driven Approach to Automation in Testing” to be vigilant about the use of these tools, and about the terms ‘automation’ and (especially) ‘test automation’ when speaking about the testing work that we do, because those terms carry dangerous connotations about what testing really is. They remind us that testing is so much more than automation, much more than using tools, and that to be a remarkable software tester we need to keep thinking and improving our craft, including the ways we perform and explain testing to others.

Some takeaway quotes from the whitepaper:

  • If you need good testing, then good tool support will be part of the picture, and that means you must learn how and why we can go wrong with tools.
  • The word ‘automation’ is misleading. We cannot automate users. We automate some actions they perform, but users do so much more than that. Output checking is interesting and can be automated, but testers and tools do so much more than that. Although certain user and tester actions can be simulated, users and testers themselves cannot be replicated in software. Failure to understand this simple truth will trivialize testing, and will allow many bugs to escape our notice.
  • To test is to seek the true status of a product, which in complex products is hidden from casual view. Testers do this to discover trouble. A tester, working with limited resources, must sniff out the trouble before it’s too late. This requires careful attention to subtle clues in the behavior of a product within a rapid and ongoing learning process. Testers engage in sensemaking, critical thinking, and experimentation, none of which can be done by mechanical means.
  • Much of what informs a human tester’s behavior is tacit knowledge.
  • Everyone knows programming cannot be automated. Although many early programming languages were called ‘autocodes’ and early computers were called ‘autocoders,’ that way of speaking peaked around 1965. The term ‘compiler’ became far more popular. In other words, when software started writing code, the name of that activity was changed to compiling, assembling, or interpreting. That way, the programmer is someone who always sits on top of all the technology, and no manager is asking ‘when can we automate all this programming?’
  • The common terms ‘manual testers’ and ‘automated testers’ to distinguish testers are misleading, because all competent testers use tools.
  • Some testers also make tools – writing code and creating utilities and instruments that aid in testing. We suggest calling such technical testers ‘toolsmiths.’
  • We emphasize experimentation because good tests are literally experiments in the scientific sense of the word. What scientists mean by experiment is precisely what we mean by test. Testing is necessarily a process of incremental, speculative, self-directed search.
  • Although routine output-checking is part of our work, we continually re-focus on non-routine, novel observations. Our attitude must be one of seeking to find trouble, not verifying the absence of trouble—otherwise we will test in shallow ways and blind ourselves to the true nature of the product.
  • Read the situation around you, discover the factors that matter, generate options, weigh your options and build a defensible case for choosing a particular option over all others. Then put that option to practice and take responsibility for what happens next. Learn and get better.
  • Testing is necessarily a human process. Only humans can learn. Only humans can determine value. Value is a social judgment, and different people value things differently.
  • Call them test tools, not ‘test automation.’
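
Bach and Bolton’s distinction—output checking can be mechanized, but the surrounding judgment cannot—is easy to see in a small sketch. Everything here is hypothetical: the function under check and the expected values are toy examples of mine, not anything from the whitepaper.

```python
# A minimal automated output check: the mechanizable slice of testing.

def normalize_price(text: str) -> float:
    """Toy function under check: parse a price string like '$1,234.50'."""
    return float(text.replace("$", "").replace(",", ""))

# The tool can verify known, anticipated input/output pairs...
checks = {
    "$0.99": 0.99,
    "$1,234.50": 1234.50,
    "$10": 10.0,
}
for given, expected in checks.items():
    actual = normalize_price(given)
    assert actual == expected, f"{given!r}: expected {expected}, got {actual}"

print("all checks passed")
# ...but only a human can judge whether "$-5" or "€10" *should* be accepted,
# whether floating-point rounding matters to anyone who matters, or what
# unanticipated inputs are worth trying next.
```

A passing run here tells us only that three anticipated outputs still match; deciding which checks are worth writing, and noticing what the checks cannot see, remains the tester’s job—which is exactly why “test tools,” not “test automation,” is the more honest name.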

More Lessons from James Bach and Michael Bolton’s “Rapid Software Testing”

I’ve mentioned James Bach and Michael Bolton’s ‘Rapid Software Testing’ ebook before. I’m citing their resource once more today because it’s worth revisiting again and again for software testers. Many of the things I’ve come to believe about software testing are here, explained well in interesting lists, charts, and descriptions. These are the things that junior software testers need to fully understand, and that senior testers should thoroughly teach.

Quoting some lines from the slides:

  • All good testers think for themselves. We look at things differently than everyone else so that we can find problems no one else will find.
  • In testing, a lot of the methodology or how-tos are tacit (unspoken). To learn to test you must experience testing and struggle to solve testing problems.
  • Asking questions is a fundamental testing skill.
  • Any “expert” can be wrong, and “expert advice” is always wrong in some context.
  • The first question about quality is always “whose opinions matter?” If someone who doesn’t matter thinks your product is terrible, you don’t necessarily change anything. So, an important bug is something about a product that really bugs someone important. One implication of this is that, if we see something that we think is a bug and we’re overruled, we don’t matter.
  • It’s not your job to decide if something is a bug. You do need to form a justified belief that it might be a threat to product value in the opinion of someone who matters, and you must be able to say why you think so.
  • In exploratory testing, the first oracle to trigger is usually a personal feeling. Then you move from that to something more explicit and defensible, because testing is social. We can’t just go with our gut and leave it at that, because we need to convince other people that it’s really a bug.
  • Find problems that matter, rather than merely checking whether the product satisfies all relevant standards. Instead of pass vs. fail, think problem vs. no problem.
  • How do you think like a tester? Think like a scientist, think like a detective. Learn to love solving mysteries.

Five People and Their Thoughts (Part 2)

It’s amazing how much we can learn just from reading other people’s writings, even if they’re only as short as blog posts or official statements. And testing is learning. Today, I’d like to share the following interesting articles from some of the curious minds in the software testing industry: