The ‘Gartner bias’ in software testing

Yesterday, I stumbled upon the call for papers for the Dutch Testing Day 2013. The programme committee has decided that the theme of this year’s conference should be the future of testing. As the future is always a source of copious speculation and therefore discussion, it probably serves well as a conference theme. It may also prove to be interesting.

However, in the case of the Dutch Testing Day 2013, it is likely that the programme committee did not do much more than skim some leaflets and pick some terms from the trend reports by Gartner, Ovum and Forrester. Whether the committee’s effort actually stretched beyond this simple reading exercise does not become clear from their call for papers, which acutely suffers from the ‘Gartner bias’.

Below is the list of themes that will be important in the future of software testing, according to the programme committee. I know this list is suffocatingly boring as it is piped through every possible software development marketing channel worldwide. As I am not out to torture you, I would have left it out, but there is a point to be made.

  • Continuous delivery
  • Mobile
  • Crowd
  • Big data
  • Security
  • Privacy
  • Lean testing
  • Social media

Now compare this list to a 2012 Gartner report entitled ‘Top 10 strategic technology trends for 2013’. Among its ten topics, Gartner mentions the trends listed below.

  • Mobile
  • Cloud
  • Big data

Sounds familiar, right? If you want to add security, privacy and whatever else you like, go see the Ovum 2013 Trends to Watch and copy and paste. Plenty of stuff to discuss, and you are done creating a programme in less than a minute. The only slightly annoying problem that remains is that you are doing the craft of software testing a huge disservice: a ‘harmful act’, as the Merriam-Webster dictionary correctly defines the word. In other words, the list of topics presented by the programme committee was not created by software testers, because apparently the first question in software testing was never asked: “Is that really so?”.

The first reason why software testing should not be equated with the latest marketing fads in software development is that the trends are exactly that: moving targets and fleeting infatuations. Even Gartner and Ovum make their predictions just for the year ahead. They know (and they probably earn a nice amount of money from the fact) that next year things could be different. Wanting to guide the craft of software testing into the future by fixating solely on trends is like trying to cross the Atlantic while just being tossed around by the currents and the winds, without using instruments to turn the forces of nature in your favor. Sure, there may be a very slight chance that you reach the other side of the ocean… alive, hopefully.

Time and again, when we link software testing to infatuations, we take away focus from the essentials of the craft. Furthermore, with this kind of thinking, we do not encourage software testers to look for anything beyond trends. We just tell them to learn whatever technology is in vogue and to learn the next big thing a couple of years later, without ever thinking about how software testing is done and why it is done that way. This is a way to move backward, not forward.

The second reason is that software testing is not technology-driven. Software testing is, and always has been, about the investigation of software. How and to what end the software is investigated depends on what is to be tested and what questions we try to answer. However, the instruments of reasoning that we use in software testing – the fundamental paradigms driving software testing – are not going to change because the application is written in Java or C++, or because of whatever means is used to store data.

The instruments of reasoning are essential to software testing, and when there is a discussion about the advancement of software testing, I expect a discussion of developments in, for example, anthropology, the philosophy of science or linguistics. Anyone coming up with the next technological infatuation just isn’t taking the craft seriously.

The third reason is that software testing is not going to be driven by the next software development or management trend. As said above, software testing is an investigation into software. This investigation is bounded by many factors, such as the software development paradigm du jour, but the paradigms driving this investigation are not bounded by the trend. If they were, it would be like saying that in test-driven development we test the software only and exclusively using boolean algebra, while in lean software development we are only and exclusively going to use deductive reasoning. This, clearly, is nonsense.

My question to the programme committee is whether they truly thought about the goal, as stated in the call for papers,

Our goal is to create a test event that will inspire testers and prepare them for the future.

and whether they can explain why they think their current approach is the best way to reach that goal.

Facing the behemoth: ISTQB Advanced

Last week I started the course for the ISTQB Advanced Test Analyst certification. I had been planning to do this course for some two years. The price is steep (some 2,500 euros) and it is a time-consuming business. I know some of you reading this must think I have taken leave of my senses, taking an ISTQB course instead of the Black Box Software Testing course, the Bug Advocacy course or Rapid Software Testing. It suffices to say that it was a somewhat rational decision.

I intended to blog live from the course but did not pursue that option. I do, however, wish to spend some posts on the subject.

My first impression is that ISTQB is a behemoth. I do not say this because I am unable to take in the material; I am very well capable of that. Still, if you look at the mountain (card house, for the critics among you!) of constructs and definitions, the course seems to stagger under its own weight. For example, the famous glossary alone has some forty pages of definitions. And that’s just words about words. The two binders holding the printed slides of the course are massive, of truly biblical proportions.

It is words about words. ISTQB is a framework for defining testing. It is construct placed upon construct. It is a sum of abstractions that seems to live in a world entirely of its own. It hardly ever touches the human world we live in. It is not easy at all to get to know ISTQB, because the only frame of reference for ISTQB is ISTQB itself. Like mathematics, it is an artificial universe in which constructs interact. Not easy at all to learn.

Unfortunately, the day before I started the course I opened a book by Alistair Cockburn entitled Agile Software Development. Cockburn is all about the human side of creating software and his theory of building and sharing knowledge. This book, in some way, surprised me too. Considering the book is about agile software development, I expected a tale about processes (Scrum and iterative things in all kinds of forms, unit tests, customer involvement, burn-down charts and other hollowed-out concepts). But Cockburn, in his introductory pages, refuses to notice even that such concepts exist. Which is a very, very refreshing take on software development. And perhaps it is also the only viable take on software development.

But now, back to the glittering wonders of the processes of software testing. I left the first day of the ISTQB course with one prevailing feeling. ISTQB is a set of constructs that excludes other ways of looking at testing. If I compare it to my History of Software Testing (which is also a huge amount of constructs), my history is not a system. For me it is a path of discovery: use whatever you like, leave out whatever you like. The ISTQB framework is a system; it is a limited, restrictive set of constructs, a framework inside which the software tester thinks. It reduces options, it takes away the possibility of looking at testing in any other way. The course forces a single way of looking at software testing upon the tester.