Disconnected

Standard
In the Netherlands, testing practice is largely dominated by the TMap test management approach and by other methodologies such as TestFrame and ISTQB. Throughout the years I’ve spent as a software tester, I’ve become concerned that the testing methodologies we value and apply are moving us further away from reality.

It could be argued that the Dutch have a natural tendency towards methodologies, standards, rules and processes to govern complexity. Although I cannot offer scientific proof for this statement, it feels like we think that by putting a tremendous amount of intellectual effort into a complex problem we can abstract from that problem a generic framework, a mechanized pattern, to govern and master the problem’s domain. I think our Delta Works – the effort to defend our country against the onslaught of the North Sea – counts as such a framework.

In software testing the seven hundred plus pages of TMap Next outline in very specific and careful detail the steps we should follow to tackle software reality. While I believe that this work is the result of a huge intellectual effort concerning the amalgamation and sublimation of an extensive body of historic knowledge in software development, I also believe that it has little to do with software testing. I would even go so far as to say that software testing, as defined in our methodologies, does not exist. There! I officially upped Alberto Savoia.

The disconnect is the gap between the finely tuned parts of the mechanism described in a methodology and the supreme insight that is a prerequisite for establishing the quality of a solution specified in software. The parts are the scripted test cases, the test plans, the risk matrices, the product risk assessments, the requirements analysis, the test environment intake checklist, the test case prioritization and the test effort estimations. The insight comes from a myriad of sources that are often specific scientific fields in themselves, such as mathematics, systems theory, epistemology, experimentation, analysis, design, language, critical thinking, decision making, communication and learning. While competence in some of these areas is acknowledged to be a prerequisite for testing even in the TMap approach, the methodologies are quick to assume that these elements, which are at the core of dealing with complex problems, are inherently present in the software tester.

As soon as we define testing solely by the machine we use to effect it – if we set out by thinking about testing in terms of constituents driving a process – we willingly refuse to notice any of the relevant aspects of software such as complexity, communication, knowledge, change, uncertainty, design, analysis and reasoning at large. From this principal exclusion, which either supposes that the tester inherently possesses all competences to deal with the relevant aspects of software or supposes that such competences are not needed, there is no turning back.

If, for example, we use a test case to test something, then ideally we use the right test case, at the right time, under the right conditions, executed by the right person. If we define testing by specifying what a test case is but omit all competences that are needed to adequately assess what is ‘right’ then what remains is an empty shell. And it is not safe to assume that the competences needed for the adequate assessment (the “judgment and skill” as mentioned in the principles of context-driven testing) grow on trees bountifully in La La Land.

If we want to define testing let’s start by explicating the competences that are required for making an adequate assessment.

My four favourite articles

Standard

I would like to mention my four favourite articles of the moment. Most of the time when I read something on software testing or software development it is interesting or worth finding out more about. Sometimes the subject matter is difficult and it takes a while to figure out the value of what’s written down.

The four articles I’d like to mention are nothing like that. They convey a message so powerful that reading them has, almost instantly, changed the way I think about software and therefore about software testing.

Richard Feynman – Cargo Cult Science [click here for a pdf of the article]

Richard Feynman


This is not truly an article; it is Richard Feynman‘s 1974 commencement speech for the California Institute of Technology. Here Feynman, perhaps one of the greatest scientists of the 20th century, speaks about scientific integrity in relation to experimentation. When I apply this to software testing, which is – after all – a form of experimentation (though not always a very scientific one), I feel that we need to be true to ourselves and to others about the experiments we conduct. That, in software testing, we need to be unrelenting in our experiments and in our quest for knowledge.

Here’s how Mr. Feynman concludes his speech.

“So I have just one wish for you – the good luck to be somewhere where you are free to maintain the kind of integrity I have described, and where you do not feel forced by a special need to maintain your position in the organisation, or financial support, or so on, to lose your integrity. May you have that freedom.”

Tom DeMarco – Software Engineering: An Idea Whose Time Has Come and Gone [click here for a pdf of the article]

Tom DeMarco


Tom DeMarco is one of the great thinkers in software. Through the book Peopleware he became a personal hero of mine. In this short article he retracts his early work on software metrics, and that alone, in my opinion, shows the man’s true spirit and bravery. It is a change of heart that should have a significant impact on the way we view our craft. In essence DeMarco criticizes the way we scrupulously, and against better judgment, hold on to predictability in software development.

DeMarco’s closing words shed a good deal of light on the true nature of software.

“Consistency and predictability are still desirable, but they haven’t ever been the most important things. For the past 40 years, for example, we’ve tortured ourselves over our inability to finish a software project on time and on budget. But as I hinted earlier, this never should have been the supreme goal. The more important goal is transformation, creating software that changes the world or that transforms a company or how it does business.”

Phillip Glen Armour – The Five Orders of Ignorance [click here for a pdf of the article]

Phillip Glen Armour


The most important step I think we still have to take in software testing is to learn what software really is and use that knowledge for better testing. One way of looking at software is presented by Phillip Armour and I sincerely like how his definition radically throws the mind off the ‘software is a verified and validated factory product’ track and opens up a landscape of possibilities.

Phillip Armour states in his article that software is not a product but a medium for the storage of knowledge. Developing software is an attempt to reduce ignorance, to gain knowledge about what the software should do. In my opinion testing is a craft that can be used to gain knowledge about the software. Armour mentions that in software development we need tools that generate questions, in order to find out that there are things we do not know about the software. What I have learned from my experience is that testing throws up these immensely valuable questions frequently. They are, at the very least, a byproduct of testing. As long as we think of testing as a crude ‘pass or fail’ tool we are unable to extend our competences into such areas.

Jack W. Reeves – What is Software Design? [click here for a pdf of the article]

Jack W. Reeves


Jack Reeves’ 1992 article What is Software Design? is perhaps the keenest article I’ve read that refutes the waterfall way of creating software. Reeves neatly points out why we should not compare the making of software to, for example, the creation of hardware or any other factory product. His main point is that it costs next to nothing to produce software; it is the design (the code) that is the focal point of creating software.

I look at it this way: by chopping up the development of software into phases, placing testing at the end of the manufacturing line, we suggest that testing is meant to verify and validate a finished end-product. Often testing finds errors that send us back to the drawing board. From this perspective testing leads to costly rework on a product that was supposed to be finished. It is like sending a car back to the factory because we found that the engine stalls when a speed of 30 km/h is reached, the seats are mounted backwards, there are only three wheels and the steering wheel is triangular. Yet such a car would never be built, because the design would never go to production.

Testing, in Reeves’ perception, is a part of design; its purpose is to validate and refine the design. I think this distinction places testing where it belongs. If we look at testing as a quality assurance (policing) activity – a process control methodology – we willingly impair the ability of testers to aid software design by constructing multiple layers of abstraction on top of the software ‘manufacturing’ process. The overhead involved takes the testers’ time and mental focus away from the actual testing and from the design.

Some words on design and testing by Jack Reeves.

Am I crazy? I don’t think so! Compiling and linking 50,000 lines of C++ code on your 486 may seem to take forever, but how would you like to assemble a circuit card with 50,000 discrete components, or build a bridge with 50,000 structural elements? We don’t construct mathematical proofs of software correctness or run our code through symbolic executors because it takes less time and effort to just build it and test it. We probably would get better software if we did more of the former, but we don’t. Why not?

There are probably lots of reasons, but I would like to suggest that many of them derive from our failure to consider testing and debugging as part of the software design process. We would like for it to go away completely. Since it will not, we try to treat it as some sort of “quality assurance” function and spend as little time, effort, and money on it as we can get away with. We consider it a shame of the software industry that testing and debugging take up half the typical software development lifecycle. Is it really so bad? I suspect that most engineers in other disciplines haven’t a clue about what percentage of their time is spent actually creating a design and what is spent on testing and debugging the result. Some industries are probably better than software. I am pretty sure that other industries are actually much worse. Consider what it must take to “design” a new airliner.