The Cheeseburger Standard

Last evening I picked up Peopleware by Tom DeMarco and Timothy Lister. It has been, for quite some time now, a book that I open when I want an entertaining view on software development that delivers some home truths about the game. I opened chapter 2, read a couple of pages and was struck by how much this text relates to the ISO 29119 debate.

Chapter 2 – entitled ‘Make a cheeseburger, sell a cheeseburger’ – starts off as follows (in all quotations, emphasis mine).

Development is inherently different from production.

I am still, and always will be, greatly in awe of the fact that quite a number of people in the software industry are blissfully unaware of this and go full Taylor on everything that moves.

But managers of development and allied efforts often allow their thinking to be shaped by a management philosophy derived entirely from a production environment. Imagine for the moment that you’re the manager of the local fast food franchise. It makes perfect sense for you to take any or all of the following efficient production measures:

  • Squeeze out error. Make the machine (the human machine) run as smoothly as possible.
  • Take a hard line about people goofing off on the job.
  • Treat workers as interchangeable pieces of the machine.
  • Optimize the steady state. (Don’t even think about how the operation got up to speed, or what it would take to close it down.)
  • Standardize procedure. Do everything by the book.
  • Eliminate experimentation – that’s what the folks at the headquarters are paid for.

These would be reasonable approaches if you were in the fast food business (or any production environment), but you’re not. The “make a cheeseburger, sell a cheeseburger” mentality can be fatal in your development area. It can only serve to damp your people’s spirit and focus their attention away from the real problems at hand. This style of management will be directly at odds with the work.

To manage thinking workers effectively, you need to take measures nearly opposite those listed above.

And further on, on the subject of making errors.

Fostering an atmosphere that doesn’t allow for error simply makes people defensive. They don’t try things that may turn out badly. You encourage this defensiveness when you try to systematize the process, when you impose rigid methodologies so that staff members are not allowed to make any of the key strategic decisions lest they make them incorrectly. The average level of technology may be modestly improved by any steps you take to inhibit error. The team sociology, however, can suffer grievously.

Further on, on the subject of the steady state.

Steady-state production thinking is particularly ill-suited to project work. We tend to forget that a project’s entire purpose in life is to put itself out of business. The only steady state in the life of a project is rigor mortis. Unless you’re riding herd on a canceled or about-to-be-canceled project, the entire focus of project management ought to be the dynamics of the development effort. Yet the way we assess people’s value to a new project is often based on their steady-state characteristics: how much code they can write or how much documentation they can produce. We pay far too little attention to how well each of them fits into the effort as a whole.

And lastly, on the subject of doing without thinking.

If you are charged with getting a task done, what proportion of your time ought to be dedicated to actually doing the task? Not one hundred percent. There ought to be some provision for brainstorming, investigation, new methods, figuring out how to avoid doing some of the subtasks, reading, training and just goofing off.

The steady-state cheeseburger mentality barely even pays lip service to the idea of thinking on the job. Its every inclination is to push the effort into one hundred percent do-mode.

The dilemma in software testing that is characterized by ISO 29119 is whether we regard software testing as a factory process or as an act of human investigation. As the quotations from Peopleware show, this dilemma is far from new. The ISO 29119 people may doubt the experiences written down by Tom DeMarco and Timothy Lister as strongly as we (context-driven) testers doubt the way of working that is imposed by the Central Committee Working Group 26. I choose to believe that software testing is an act of investigation, because from what I have experienced so far, the reality of software development looks much like DeMarco and Lister describe it. If, however, the reality of software development is the exact opposite of what they describe, and the factory approach does indeed lead to better software each and every time, then the backers of the ISO 29119 standard should come forward, refute the evidence of software development as a human act, and convince us by showing us the reality of software development as they experience it.

References

Tom DeMarco and Timothy Lister (1999). Peopleware: Productive Projects and Teams (2nd ed.). Dorset House Publishing.

Who should write the test code?

I will be discussing this and other FitNesse related topics in my presentation Moving from Ad Hoc Testing to Continuous Test Data with FitNesse at the Agile Testing Days 2014. If you are interested, please visit my session in Potsdam.

Not so long ago I found myself in a debate in which my motives for doing the work that I currently do, and the way that I work, were questioned by a team member, a senior developer. As far as I was able to ascertain, the questions were earnest and came from someone who cares deeply about the software product we are making. Therefore I thought I should describe my approach, the questions that arose, and how I dealt with them.

Who should write the test code?

I am currently using FitNesse as a test automation tool. Writing tests in FitNesse involves writing so-called fixtures in Java. The fixtures arrange the interactions with the system under test, such as the execution of database queries, calls to stored procedures, REST interface calls, the kicking off of Unix scripts, FTP calls or the reading of log files. As the test set grows, so does the repository of Java code. The code is maintained by two testers who have some experience with Java, but no background as Java developers. We have been able to get most of the code working correctly, and the code base is still small enough to accommodate the many refactorings our growing insight calls for. So there is room to learn from our coding mistakes.
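
To give an idea of what such a fixture looks like, here is a minimal sketch of a Slim decision-table fixture that checks whether a row exists in the database. All names – the class, the table, the connection string – are invented for this example; our actual fixtures are more elaborate.

```java
// Hypothetical Slim decision-table fixture. FitNesse calls the setter
// for each input column and the method for each output column ending in '?'.
package fixtures;

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class CustomerExists {
    private String customerId;

    public void setCustomerId(String customerId) {
        this.customerId = customerId;
    }

    public boolean found() throws Exception {
        // Connection details are invented for the example.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/TESTDB", "test", "secret");
             PreparedStatement stmt = conn.prepareStatement(
                 "SELECT 1 FROM customers WHERE customer_id = ?")) {
            stmt.setString(1, customerId);
            try (ResultSet rs = stmt.executeQuery()) {
                return rs.next();
            }
        }
    }
}
```

In the wiki page this fixture is driven by a decision table with a ‘customer id’ input column and a ‘found?’ output column.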

Now the source code of the system under test is in PL/SQL and all of the developers in the team are proficient in this language. None of them have any experience in programming in Java. So in our team there are two testers maintaining the Java test code base and a number of PL/SQL developers maintaining the code base of the system under test. The senior developer now put forward the following line of reasoning (this is not a literal quote).

The test code should be maintained in the same fashion as the production code. When a developer is doing rework on the production code he should also be handling the related test code. It is the only way that we can ensure that the test code remains consistent. Therefore the test code should be written by a developer and in PL/SQL.

This may sound like a very decent proposal. After all, perhaps testers should not be focusing on writing code, which is usually not their expertise. When developers write the test code, the testers can focus their energy on testing the software. This way the effort of creating test code and testing is shared in the most efficient way among the skill sets present in the team.

Also, treating the test code as production code is a very laudable sentiment. It means that test code and production code are equally important, which in my opinion they are. Delivering a piece of production code without the proper means to test that code does not make a lot of sense if you want the production code to be properly tested.

While all of these statements reflect sentiments that may lead to better cooperation and easier maintenance of the system and the test automation effort, I have a number of objections that, in my opinion, carry some weight against this approach. These objections are also the reason that I am putting off the transition from Java to PL/SQL for test code.

1. Deployment of test code

We, as testers, are able to deliver changes in the Java code to the test environment almost instantaneously. I created a build script in Ant and this build script runs in Jenkins every day. So at the very least, all of the code that is used for test automation is compiled and deployed daily to the environment where the tests are run. If the code contains errors, we know this immediately. Otherwise the code is compiled and deployed without us ever having to think about it. However, deployment of PL/SQL packages to the test and acceptance environments cannot be done in an instant and certainly isn’t automated. Test packages would follow the same deployment procedures as production code, and for some reason this is a cumbersome process. If I want to add some functionality in the Java code, I write it and deploy it instantaneously. If I wanted to do the same thing in PL/SQL, I would have to wait for the functionality to arrive, at best within a couple of hours.

2. Testing the test code

Over the last couple of months I have grown very fond of JUnit. I still do not write unit tests on a structural basis, and the packages that contain the unit tests are still a mess that I should sort out. But I am using JUnit to at least run a couple of tests against my code and gain some confidence in the solutions that I build. So the question is: would the developers do the same and test the code they write, at least to a certain degree? I asked this question a couple of times and got mixed responses.
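
As an illustration, this is the kind of JUnit test I write against the fixture code. The helper class is invented for this sketch; the point is simply that the test code gets exercised before we trust it.

```java
import static org.junit.Assert.assertEquals;

import org.junit.Test;

// Hypothetical helper used by the fixtures: pads account numbers to the
// fixed width the system under test expects.
class AccountFormatter {
    static String pad(String accountNumber) {
        StringBuilder sb = new StringBuilder(accountNumber);
        while (sb.length() < 10) {
            sb.insert(0, '0');
        }
        return sb.toString();
    }
}

public class AccountFormatterTest {

    @Test
    public void shortNumberIsLeftPaddedWithZeros() {
        assertEquals("0000000123", AccountFormatter.pad("123"));
    }

    @Test
    public void fullWidthNumberIsUnchanged() {
        assertEquals("1234567890", AccountFormatter.pad("1234567890"));
    }
}
```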

3. The framework is linked to certain languages

If PL/SQL packages were really used as test code, we would still have to call them through a Java layer. This layer would probably not be very complicated, but FitNesse would still require us to use Java. So there would then be two bases of test code: the PL/SQL packages and the Java code calling the packages.
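
A sketch of what that bridging layer might look like is shown below; the package and procedure names are invented. Every PL/SQL test package would need a small Java counterpart like this.

```java
// Hypothetical Java bridge that delegates a test step to a PL/SQL
// package in the database, using the JDBC escape syntax for calls.
import java.sql.CallableStatement;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Types;

public class PlSqlTestBridge {

    public String runOrderCheck(String orderId) throws Exception {
        // Connection details are invented for the example.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/TESTDB", "test", "secret");
             CallableStatement call = conn.prepareCall(
                 "{ call test_pkg.check_order(?, ?) }")) {
            call.setString(1, orderId);
            call.registerOutParameter(2, Types.VARCHAR);
            call.execute();
            return call.getString(2);
        }
    }
}
```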

4. Prioritization

If I am not writing the test code, someone else ought to be doing it (for me?). This other person may or may not like having to write test code (instead of working on production code) and may attach different priorities to it. Also, if we tried to write user stories for building test code, these user stories might be deprioritized against user stories for pressing bug fixes or rewarding new functionality. Creating test code as production code requires discipline, a relentless commitment to the definition of done and a product owner who deeply feels our need for automated tests. The current situation is too far away from that ideal for me to start this experiment with the code that I depend on to generate test data.

5. It is not at all bad for testers to know the test code

I think it can be argued that testers should know intimately the code that they use to automate their tests. Automated tests generate results. If a tester does not know where the results are coming from, or how they are fabricated, how can he or she trust them?

6. Software developers have a different take on testing than testers

Test code has to have the capability to execute a number of scenarios, it requires flexibility and it needs to be designed for testing, not for production. I believe that testers have a better feeling for how they want to use the test code. I believe, for example, that they more easily think in terms of scenarios and more easily come up with variations than developers do. And so testers may be better at indicating what the test code should be able to handle.

Exploring FitNesse – Using Database Queries

In this series of blog posts I want to explore my choice to use the Java Persistence API in a test automation effort against a legacy (Oracle) database. For test automation I currently use FitNesse. FitNesse offers many ways to access information from the database, and for those who do not like to get their hands dirty writing Java (or other) fixtures, the Java Persistence API is definitely not the way to go.

Like I said, FitNesse offers many ways to access the database from the FitNesse page, or from somewhere just below the FitNesse page, with an absolute minimum of (Java) programming. Below I list a couple of options. I have never actively worked with any of these options, but I am familiar with them from reading through the documentation.

  • DbFit by Gojko Adzic. This seems to be a viable alternative to placing SQL queries in the Java layer. Adzic’s fixture set seems to be pretty complete and I recently learned that it is actually used in a data warehousing project in the Netherlands.
  • JdbcFixtures. This set of fixtures is limited and can probably only be used for basic tasks.
  • A generic FitNesse fixture that is ready to use. The fixture mentioned was written by Anubhava Srivastava. It is perfectly fine to use code that someone else offers, but you’d have to know programming to check the code, adjust it to your needs or develop it further.
  • FitNesse QueryTable. This requires you to write the actual query in the Java fixture, so (a bit of) programming is involved. The table itself offers a nice way to return more than one row of data and check for certain outcomes. A sketch of this option follows the list.
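
To sketch that last option: the query lives in the Java fixture, while the expected rows are written in the FitNesse page. Below is a minimal Slim query-table fixture; the table and column names are invented.

```java
// Hypothetical Slim query-table fixture. Slim calls query() and matches
// the returned rows against the rows written in the wiki table; each row
// is a list of (column name, value) pairs.
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

public class OpenOrders {

    public List<Object> query() throws Exception {
        List<Object> rows = new ArrayList<>();
        // Connection details are invented for the example.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/TESTDB", "test", "secret");
             Statement stmt = conn.createStatement();
             ResultSet rs = stmt.executeQuery(
                 "SELECT order_id, status FROM orders WHERE status = 'OPEN'")) {
            while (rs.next()) {
                rows.add(Arrays.asList(
                    Arrays.asList("order id", rs.getString("order_id")),
                    Arrays.asList("status", rs.getString("status"))));
            }
        }
        return rows;
    }
}
```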

Three of the options listed above allow the tester to write the actual SQL statements in the FitNesse page. The tester does not have to know a lot about drivers and connections to write tests that fetch data from the database or even manipulate data that is in the database. The statements are executed behind the scenes and the result is displayed in the FitNesse page. It is a nice way to quickly start using SQL queries and to keep all the logic in the same place, namely the FitNesse page.

I dismissed the use of SQL statements in FitNesse pages rather quickly after I started on the project. We had some pages in which SQL statements were used. In my opinion SQL statements are technical (implementation) details, and I wanted to place all technical details in the layer where, in my view, they belong – the programming (in my case Java) layer.

Besides that, I thought that scattering many SQL statements across many FitNesse pages would seriously impact the maintainability of the statements, should the developers ever decide to change some of the tables. Actually, there is a solution for the maintainability problem: build modular tests (building blocks) in FitNesse. If a query is used by many pages, build a separate FitNesse page for this query only, parametrize the page using variables and include (FitNesse: !include) it in the pages that use the query. I use the modular approach in FitNesse a lot and it works brilliantly. I will write more about this later.

A second objection to writing SQL statements in FitNesse pages is that it would probably take a while to find out which table is queried in which page, and in what way. FitNesse has reasonable search capabilities, but having to search for queries all the time is not ideal. The modular approach outlined above may partly fix this issue.

All in all I still had five major objections to placing SQL statements in FitNesse pages.

  1. Maintainability. This I explained above.
  2. Testability. I like to test my queries 🙂 Typos are quickly made and sometimes not that easy to find. Complex joins also invite mistakes.
  3. Handling of exceptions. Some queries may return nothing (no rows or null) or errors. FitNesse may not be the proper tool to handle these exceptions.
  4. Managing connections. Again, there are better ways to manage connections than in FitNesse pages.
  5. Handling of types. I am not sure if this is actually a problem. Fields in the database can be of many types (VARCHAR, FLOAT, DATE, TIMESTAMP, LONG, CHAR, BLOB, etc.). I think FitNesse ‘represents’ everything as text. I think this works out OK for non-text types, but I haven’t tested it. In Java, when fetching different types of data from the database, the type is something you have to pay careful attention to; the sketch after this list illustrates this.
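
To illustrate points 3 and 5: in the Java layer both the ‘no rows’ case and the column types can be handled explicitly. Below is a minimal sketch; all names are invented for the example.

```java
// Hypothetical query embedded in the Java layer, with explicit handling
// of the 'no rows' case and of column types (BigDecimal, Timestamp).
import java.math.BigDecimal;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.Timestamp;

public class PaymentQueries {

    public String lastPaymentSummary(String customerId) throws Exception {
        // Connection details are invented for the example.
        try (Connection conn = DriverManager.getConnection(
                "jdbc:oracle:thin:@//dbhost:1521/TESTDB", "test", "secret");
             PreparedStatement stmt = conn.prepareStatement(
                 "SELECT amount, paid_at FROM payments "
                 + "WHERE customer_id = ? ORDER BY paid_at DESC")) {
            stmt.setString(1, customerId);
            try (ResultSet rs = stmt.executeQuery()) {
                if (!rs.next()) {
                    return "no payments";   // explicit handling of 'no rows'
                }
                // Fetch each column with its proper type instead of as text.
                BigDecimal amount = rs.getBigDecimal("amount");
                Timestamp paidAt = rs.getTimestamp("paid_at");
                return amount.toPlainString() + " at " + paidAt;
            }
        }
    }
}
```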

So I decided to embed SQL statements in the Java layer (the Java fixtures) that sits beneath FitNesse. However, here the maintainability problem remains. Sure, the testing of queries is made much easier by using JUnit. Connections can definitely be managed better in the Java layer. And Java has tools to handle exceptions programmatically. But you still run the risk of scattering SQL queries in all shapes and sizes all across the code.

This last observation eventually led me on the path to the Java Persistence API, about which I will write in a follow-up blog post.

Communication Between the Hominids

How do we build the theories that describe what we think testing is? How do we evaluate them?

Five minutes into a presentation I attended at the Dutch TestNet Spring Event, the speaker recklessly confronted the audience with the following phrase.

communication between the disciplines

For me that was a clear call to run for the exit. The title of the talk was Test Improvement is Something You Do in the Workplace and I attended it hoping that I would learn a thing or two from hearing another tester’s perspective on how to improve testing. The phrase ‘communication between the disciplines’, however, ignited my fear that this talk was not going to be about humans. When the speaker announced that we would do an exercise and checklists were subsequently handed out, I was dead sure.

Later in the evening I reflected on my moment of frustration and on why the word ‘discipline’ startled me. If you quickly substitute ‘the people on the project’ for ‘the disciplines’, which is probably what you did already without even noticing it, then there is nothing wrong with that phrase. But we should notice that ‘communication between the disciplines’ actually means something different.

According to my Oxford Paperback Dictionary & Thesaurus a discipline is a branch of academic study. A discipline has a field of study, is likely to have a paradigm and will have ways of doing research. Here is a taxonomy of academic disciplines (PDF).

The concept ‘discipline’ is an abstraction, and the use of the word to indicate people doing different tasks on a software project is indicative of a particular point of view. It shows how a theory of software testing chooses to identify and classify entities in its realm. In this case it is a theory that uses ‘discipline’ as a classification mechanism. ‘Discipline’, in this theory, serves as a mechanism that abstracts from the realm of software testing exactly those aspects that serve a purpose to the theory. The elements that form the concept of a discipline are, exclusively or most preferably, those that lend the strongest support to this theory of software testing.

This means that this particular theory of software testing chooses to regard the humans doing particular tasks in a software project not from the perspective of them being human, but from the perspective of them working in a profession that originates from an academic field of study. The theory states that the latter perspective is by far the more useful one: it supposedly accounts for the phenomena that occur in testing in a vastly superior way.

I was inclined to dismiss this point of view right away. But I think further investigation is warranted. If this theory speaks of ‘disciplines’ rather than ‘people’ then there should be in the literature relating to this theory an examination of the disciplines that interact with software testing, and for each of these disciplines a clarification of how aspects of the discipline are relevant to the theory and how other perspectives are not. I’m assuming there are case studies or field studies too.

As yet, however, I have been unable to find solid evidence that the ‘disciplines’ perspective trumps the ‘human’ perspective when it comes to communicating with other people on the project. Since conclusive evidence is lacking, the speaker in the presentation mentioned above should at least have added a disclaimer to his ‘disciplines’ perspective and informed his audience that he was using a highly contestable abstraction. As you can guess, he did not say a word about it, and I reacted too slowly to question his reasoning. Frankly, I was too infuriated.

In my current project I have five software developers. In theory their work is the subject of investigation of the following academic field of study.

Physical Sciences and Mathematics: Computer Sciences: Software Engineering

When this team creates software there are discussions on almost every aspect of software engineering. There are different points of view on what should be in the definition of done, how we should write our unit tests, how far refactoring should go, what should be documented where, what should be in code comments, what should be in scope for the acceptance tests, what tooling we should use, how we set up test automation, what the level of detail of our use cases should be, how we set up test environments and what purpose they should serve, how we set up data, and how we should deal with incidents and interruptions. Behind each of these considerations there is a wealth of rationales, most of them probably not based on mathematical calculations, but on human emotions.

According to the ‘disciplines’ perspective I should be communicating with each of the developers alike, as members of an academic field of study. In practice this will probably get me blank stares across the board. The thing that will help me in my communication with my fellow units is to know that they have very valid human reasons or sentiments to act in a certain kind of way. To make progress (to improve) is to appeal to these sentiments.

From this experience and a couple of others, I would say a typical software development workplace contains mostly hominids of the genus Homo. If we are looking to improve our testing, perhaps we should therefore start ‘communicating between the humans’ and concentrate our precious resources and intellect on the study of aspects of human behavior in software development, as did Gerald Weinberg, Tom DeMarco and Timothy Lister, and Alistair Cockburn.

Taking the History of Software Testing beyond 2009

I have had some requests to take the History of Software Testing beyond the year 2009. After all, the craft of software testing has not stood still in recent years.

In the table below is a first draft of some of the events that I would like to add. This list is by no means complete. Also, a certain amount (some will say a huge amount) of selection bias on my part is involved. As can be concluded from the list, I lean toward context-driven testing, Agile and test automation. These are the things that I am familiar with.

This brings us to the topic of how to select which events should be in the history and which should not. I have some selection criteria, but I think I violate each of them somewhere across the whole of the history. Perhaps this should then be the starting point of a discussion on what we deem important enough to inscribe in our collective memory. Sometimes, though, what we deem important is a highly individual opinion. A study of history is therefore unlikely to be 100% objective.

Also, it may be a fool’s errand to appraise at this moment what was important last year, or the year before that. It is a given that the importance of events will be seen in a completely different light 50 years from now.

In short, I am aware that my selection is not the ‘right’ selection, nor perhaps even a satisfactory selection.

I will write about some of the selection criteria that I employ. For now, aware of the fact that it could start a hellish debate, I sort of encourage you to post the most obvious things I forgot in the comments. I do not guarantee inclusion of anything, for the obvious reason that mediating discussions on what’s important and what’s not is likely to be a full-time job. If you feel that a discussion is necessary, do feel free to use social media to start one.

[table id=1 /]

Slides of my presentation for the Belgium Testing Days 2014

On the 18th of March I spoke on the topic of testing as skillful investigation at the Belgium Testing Days conference. It was an honor to be invited to speak at this lively conference. Also, it was great (and a little unnerving) to find the room filled to capacity with listeners, as Gil Zilberfeld mentions in his tweet.

Gil Zilberfeld tweet

Me presenting at the Belgium Testing Days

The slides of my presentation are displayed below.

On the Testing of Normative Theories

While I was writing a piece on a newly created Dutch testing approach, I took a closer look at a couple of models in testing. In particular I tried to assess the Tester Freedom Scale by Jonathan Bach and the Heuristic Test Strategy Model by James Bach. To me, both of these models are descriptive theories, which means that they try to capture and explain some of the phenomena in software testing.

In the case of the newly created Dutch testing approach, Jonathan Bach’s descriptive theory was modified into a normative theory. A normative theory is a theory that states: “If the situation is such and such, THOU SHALT do this and that (in order to achieve a satisfactory result).” I do not know if it is possible to turn a descriptive theory into a normative theory, but I think it is a dramatic switch, which, in case it happens, should probably be accompanied by a thorough scientific investigation. Such an investigation will probably take into account the data on which the original theory was based, and it will also explain why the proposed directives are the best solution in all the situations that are captured in the descriptive theory.

The fact that a normative theory – perhaps even more than a descriptive theory – should be tested thoroughly in order to produce valid instructions is probably the reason why most of the normative theories we know in testing are watered down to ‘best practices’ in real life. If we take, for example, the theory of TMap – the most prominent normative theory of testing in the Netherlands – we see that in practice its instructions are weakened to a toolbox of practices that are applied at the best judgement of the tester. We have come to learn that the instructions that are written in the book had better be interpreted, adapted and reshaped in order for them to make some sense.

While this ‘watering down’ of the normative theory is hailed as an innovation by the proponents of the methodology, it signifies, in fact, that the proposed theory of testing has been invalidated. It means that “Things fall apart; / the centre cannot hold”. At the core of the theory there is a definition of testing that will probably not withstand thorough scientific investigation, if ever we should come to test the theories that form our craft. An example of such a definition can be found in the normative theory created by the International Software Testing Qualifications Board, as displayed below.

The process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.

Two things are striking enough to mention. The first is that we do not consciously and willingly test our theories, nor do we invite rigorous testing. We are unable to provide evidence for, or challenge, the fundamental theoretical assumptions by which one right way to approach software testing is proposed.

The second observation that intrigues me is that software testing produces normative theories at all. Yet here we are, wrestling with overarching methodologies instead of focusing on the practice and on ways to describe what actually happens. If we feel the need for theories of testing, we should be focusing on descriptive theories. In an article on his weblog, Markus Gaertner expresses exactly this, in the following words.

Anyway, I think we should stop teaching testers practices that they might or might not need. Instead we should focus more on teaching testers how to evaluate their situation, and make useful improvements to their work where it’s appropriate.

I wholeheartedly welcome Gaertner’s sentiments. They parallel Philip Johnson-Laird’s handling of theories on mental modelling, to which he devotes a number of chapters of his book Mental Models. Johnson-Laird tests the theories one by one on a couple of criteria, the most important of which is that the theories have to account for the phenomena that are observed. Which brings me back to the Bach brothers and their descriptive models. I hope they will be tested rigorously.

The ‘Gartner bias’ in software testing

Yesterday, I stumbled upon the call for papers for the Dutch Testing Day 2013. The programme committee has decided that the theme of this year’s conference should be the future of testing. As the future is always a source of copious speculation and therefore discussion, it probably serves well as a conference theme. It may also prove to be interesting.

However, in the case of the Dutch Testing Day 2013, it is likely that the programme committee did not do much more than skim some leaflets and pick some terms from the trend reports by Gartner, Ovum and Forrester. Whether the committee’s effort actually stretched beyond this simple reading exercise does not become clear from their call for papers, which acutely suffers from the ‘Gartner bias’.

Below is the list of themes that will be important in the future of software testing, according to the programme committee. I know this list is suffocatingly boring as it is piped through every possible software development marketing channel worldwide. As I am not out to torture you, I would have left it out, but there is a point to be made.

  • Continuous delivery
  • Mobile
  • Crowd
  • Big data
  • Security
  • Privacy
  • Lean testing
  • Social media

Now compare this list to a 2012 Gartner report entitled ‘Top 10 strategic technology trends for 2013’. Gartner mentions the main trends listed below and segments them into 10 topics.

  • Mobile
  • Cloud
  • Big data

Sounds familiar, right? If you want to add security, privacy and whatever else you like, go see the Ovum 2013 Trends to Watch and copy, paste. Plenty of stuff to discuss, and you’re done creating a programme in less than a minute. The only slightly annoying problem that remains is that you’re doing the craft of software testing a huge disservice. This way of discussing software testing should be considered – the Merriam-Webster dictionary states it correctly – a harmful act. In other words, the list of topics presented by the programme committee was not created by software testers, because apparently the first question in software testing was never asked: “Is that really so?”

The first reason why software testing should not be equated with the latest marketing fads in software development is that the trends are exactly that: moving targets and fleeting infatuations. Even Gartner and Ovum make their predictions just for the year ahead. They know (and probably earn a nice amount of money from the fact) that next year things could be different. Wanting to guide the craft of software testing into the future by fixating solely on trends is like trying to cross the Atlantic while just being tossed around by the currents and the winds, without using instruments to manipulate the forces of nature in your favor. Sure, there may be a very slight chance that you reach the other end of the ocean… alive, hopefully.

Time and again, when we link software testing to infatuations, we take the focus away from the essentials of the craft. Furthermore, with this kind of thinking, we do not encourage software testers to look for anything beyond trends. We just tell them to learn whatever technology is in vogue and to learn the next big thing a couple of years later, without ever thinking about how software testing is done and why it is done that way. This is a way to move backward, not forward.

The second reason is that software testing is not technology-driven. Software testing is, and always has been, about the investigation of software. How and to what end the software is investigated depends on what is to be tested and what questions we try to answer. However, the instruments of reasoning that we use in software testing – the fundamental paradigms driving software testing – are not going to change because the application is written in Java or C++, or because of whatever means is used to store data.

The instruments of reasoning are essential to software testing, and when there is a discussion about the advancement of software testing, I expect a discussion of developments in, for example, anthropology, the philosophy of science or linguistics. Anyone coming up with the next technological infatuation just isn’t taking the craft seriously.

The third reason is that software testing is not going to be driven by the next software development or management trend. As said above, software testing is an investigation into software. This investigation is bounded by many factors, such as the software development paradigm du jour, but the paradigms driving the investigation are not bounded by the trend. If they were, it would be like saying that in test-driven development we test the software only and exclusively using Boolean algebra, while in lean software development we are only and exclusively going to use deductive reasoning. This, clearly, is nonsense.

My question to the programme committee is whether they truly thought about the goal, as stated in the call for papers,

Our goal is to create a test event that will inspire testers and prepare them for the future.

and if they can explain why they think their current approach is the best way to reach that goal.

Slides of my presentation on the history of software testing

Last Monday (27 May) I presented on the history of software testing for the Community of Practice Testing of Capgemini in the Netherlands. It was a pleasant evening and the auditorium was filled with a good and eager crowd. Among those present were Eric Roeland and Eddy Bruin. The presentation is entitled History Class – For software testers. Its aim is to make testers a bit more aware of the background of our craft.

To summarize the 37 slides: everything you know about testing right now was invented in the 1970s (or earlier). And now a bit more seriously: the history of software testing is all about the vantage point. Not so long ago I argued that history is mainly used as a sales argument. My point of view on the history should also be regarded as such. History is a difficult field.

One remark I got on the history and my take on the history of context-driven testing was that (traditional) software testing and context-driven testing may have totally different starting points. From a philosophical point of view, there is a certain amount of truth in that. Context-driven testing is based on a different set of (scientific) concepts. I am going to develop this idea.

Until then, these are the slides of my presentation. Hope you can make sense of them.