A Systems Perspective on Testing

Lately I’ve been looking at Systems theory (Bertalanffy, 1968) and wondering how much of it has been applied to testing and how much more could still be learned from it.

Three blind spots

I started with the assumption that few, if any, of the concepts in Systems theory had been applied in testing. This assumption proved to be false. There were a number of blind spots in my assumption; I will list the most prominent of them.

1) Little explicit mention of Systems theory in testing material

The first blind spot arose from the observation that Systems theory is hardly ever explicitly mentioned or referenced in day-to-day testing. By day-to-day testing I mean the insights provided by the textbooks used for certification (ISTQB, TMap), the testing standards, blogs on testing, articles published in the popular testing magazines and the mindset and knowledge of software testers. Although software testing deals, almost exclusively, with systems, there seems to be no explicit connection with Systems theory, or with concepts within this field. To account for this, we should consider the possibility that software testing is in itself a subset of computation, which is a subset of Systems theory. As such, software testing may look inward for more detail, instead of outward for a position in the general picture.

2) Little knowledge of Systems theory among testers

The second blind spot is the knowledge, or common perception, among testers of the field of Systems theory. Within the context-driven school of testing, there are testers who look beyond the traditional views offered by certification institutes or testing standards. So there is a willingness to learn about, among other things, Systems theory. Yet the book most often mentioned as a starting point for Systems theory (or Systems thinking) is Thinking in Systems (Meadows, 2008). Meadows wrote about System dynamics (Forrester, 1961), which certainly is a part of Systems theory, but it would be a gross overstatement to say that Meadows covers the landscape of Systems theory in total. A personal investigation of Systems theory revealed – to me it was a revelation – that there are at least 18 fields of scientific investigation related to Systems theory. System dynamics is only a fragment.

3) The academic-vocational chasm in software testing

The third thing that obscured my perception of Systems theory and testing is that software testers in the field are largely unaware of the theoretical (scientific, academic) foundations of software testing. I consider this a problem that affects the industry as a whole. There is a curious separation between the work that (academic) researchers in software testing do and what actually filters down to the work floor. This may be a common phenomenon that is not at all particular to software testing, but it frequently frustrates me that what is taught in testing textbooks and testing courses is a filtered, abstracted and sometimes disfigured version of the original theories on software testing. Of the 1970’s in particular, the period in which software testing theory was formed, the tester in the field generally knows very little, perhaps only a few quotes cut out of their context.

Systems theory in software testing

So if we look at how software testing theory evolved in the 1970’s and 1980’s, it becomes clear that Systems theory – certain (mathematical) system perspectives applied to, or originating from(!), computing and software – has been the main driver for that effort.

Automata theory

One of the branches of Systems theory is Automata theory. Automata theory by and large started with Alan Turing, who conceived the Turing machine in 1936. From this theory arose theories on computational complexity, which is itself a branch of Complexity theory, which is a branch of Systems theory. In computing, Automata theory gave rise to finite state machines, for which we have a test technique called state transition testing (Chow, 1978).
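As a minimal sketch of my own (the door machine below is a hypothetical example, not taken from Chow’s paper), a finite state machine can yield state transition tests directly: one test per valid transition.

```python
# Hypothetical finite state machine: a door that can be opened, closed and locked.
# Transitions map (current state, event) -> next state.
transitions = {
    ("closed", "open"):   "opened",
    ("opened", "close"):  "closed",
    ("closed", "lock"):   "locked",
    ("locked", "unlock"): "closed",
}

def step(state, event):
    """Fire one event; an invalid event leaves the state unchanged."""
    return transitions.get((state, event), state)

def all_transition_tests():
    """Derive one test case per valid transition (0-switch coverage)."""
    return [(src, event, dst) for (src, event), dst in transitions.items()]

# Execute the derived tests against the machine.
for src, event, expected in all_transition_tests():
    assert step(src, event) == expected
```

Chow’s method goes further (sequences of transitions, distinguishing states), but even this simple level makes the link between the test technique and Automata theory concrete.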

Graph theory and partition analysis

Other ways of looking at systems, closely related to system design, developed in software testing in the 1970’s. Theories formed about the flow of information through software code and the selection of test data for testing the program. Regarding the flow of information, Graph theory (a branch of Network theory, itself a branch of Systems theory) was frequently used. The cause-effect graph (Elmendorf, 1973) is an example of that knowledge applied in software testing. Graph theory also underpins several path-based techniques, such as basis path testing (McCabe, 1976) and decision-to-decision path testing (Miller, 1977). Another (domain testing) technique, partition analysis (or equivalence partitioning), was used particularly for selecting the data with which to test the paths (Goodenough, 1975). Partition analysis can be thought of as an application of Set theory, which is a branch of Systems theory.
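A short sketch may make the Set theory connection tangible. Assuming a hypothetical input of my own invention (an age field accepting 0 to 120), equivalence partitioning divides the input domain into classes the program is presumed to treat uniformly and picks one representative per class:

```python
# Hypothetical input domain: an age field that should accept 0..120.
# The domain is partitioned into disjoint equivalence classes (sets).
partitions = {
    "below_range": range(-10, 0),    # invalid: negative ages
    "in_range":    range(0, 121),    # valid ages
    "above_range": range(121, 200),  # invalid: too large
}

def representatives(parts):
    """Select one representative test value per equivalence class."""
    return {name: next(iter(values)) for name, values in parts.items()}

def accepts_age(age):
    """The (trivial) function under test: valid iff 0 <= age <= 120."""
    return 0 <= age <= 120

reps = representatives(partitions)
results = {name: accepts_age(value) for name, value in reps.items()}
# One value per class suffices, on the assumption that all members of a
# class exercise the same behavior of the program.
```

The partition boundaries here are illustrative; in practice they come from the specification, which is exactly where the technique meets the beliefs we test.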

Hierarchy and modularity

Another way of looking at systems is to see them as hierarchical structures; this is called Hierarchy theory. The concept of hierarchical structures has had a huge influence on software development. In 1972 David Lorge Parnas introduced decomposition and modularity into programming (Parnas, 1972). In testing, functional decomposition is a way to reduce the complexity of a system.
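As a small sketch (the webshop and its functions are hypothetical examples of mine), a functional decomposition can be represented as a tree whose leaves become the units against which tests are planned, so a tester can address one subtree at a time instead of the whole system at once:

```python
# Hypothetical functional decomposition of a system into a hierarchy.
system = {
    "webshop": {
        "ordering": {"browse catalogue": {}, "place order": {}},
        "payment":  {"pay by card": {}, "refund": {}},
    }
}

def leaves(tree, path=()):
    """Flatten the hierarchy into leaf functions, each with its full path."""
    if not tree:
        return [" / ".join(path)]
    found = []
    for name, subtree in tree.items():
        found.extend(leaves(subtree, path + (name,)))
    return found

# Each leaf is a candidate unit for focused testing.
print(leaves(system))
```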

Applying Systems theory

The problem of applying insights from Systems theory to software testing is manifold. The first question that needs to be tackled is whether the making of software has changed in such a way that a new technique is warranted. In short: do we need other test techniques to test current software systems? I ask this question because a number of our ‘black box’ testing techniques (particularly path testing and partition testing) definitely come from a ‘white box’ background. They are closely related to the way software (a program) is made. It should therefore be no surprise that software testing started out as ‘program testing’ (Hetzel, 1973). Other techniques, such as CRUD analysis (Martin, 1981), are equally based on code. Has the coding (programming) of our systems changed in such a way that the old techniques are no longer adequate to cover the system sufficiently? Has a change in programming forced upon us the usage of other aspects of Systems theory in order to create new techniques? Looking at our testing text books, I still notice a certain prominence of the old techniques. Perhaps the answer to this question should be ‘no’.

There are two other questions that may be more interesting if we want to use the Systems theory perspective in software testing: has the behavior of software changed, and has the usage of software changed? Perhaps there are fundamentally no radical changes in code, but there may be radical changes in the behavior or usage of software systems that could eventually lead to modes of failure other than those investigated using the classic test techniques. Perhaps there are patterns in systems that cannot be investigated using partition or path analysis.

I think it is safe to state that software usage has changed radically and that, accordingly, behavioral patterns have emerged in the functioning of software that require us to rethink the tools we use to test the behavior or functioning of systems. It is likely that Systems theory can help us find these tools. In subsequent posts I will try to investigate them.

References

Bertalanffy, Ludwig von  – General System Theory (1968)

Chow, Tsun S. – Testing Software Design Modeled by Finite-State Machines (1978)

Elmendorf, William – Cause-Effect Graphs in Functional Testing (1973)

Forrester, Jay – Industrial Dynamics (1961)

Goodenough, John B. – Toward a Theory of Test Data Selection (1975)

Hetzel, William – Program Test Methods (1973)

Martin, James – Information Engineering (1981)

McCabe, Thomas J. – A Complexity Measure (1976)

Meadows, Donella H. – Thinking in Systems (2008)

Miller, Edward F. – Program Testing Techniques (1977)

Parnas, David Lorge – On the Criteria To Be Used in Decomposing Systems into Modules (1972)

Facing the behemoth: ISTQB Advanced

Last week I started the course for the ISTQB Advanced Test Analyst certification. I had been planning to do this course for some two years. The price is steep, some 2,500 euros, and it is a time-consuming business. I know some of you reading this must think I have lost control of my senses, taking an ISTQB course instead of the Black Box Software Testing course, the Bug Advocacy course or Rapid Software Testing. Suffice it to say that it was a somewhat rational decision.

I intended to blog live from the course but did not pursue that option. I do, however, wish to spend some posts on the subject.

My first impression is that ISTQB is a behemoth. I do not say this because I am unable to take in the material; I am very well capable of that. Still, if you look at the mountain (card house, for the critics among you!) of constructs and definitions, the course seems to stagger under its own weight. For example, the famous glossary alone has some forty pages of definitions. And that’s just words about words. The two binders holding the printed slides of the course are massive, of biblical proportions befitting a behemoth.

It is words about words. ISTQB is a framework for defining testing, construct placed upon construct, a sum of abstractions that seems to live in a world entirely of its own. It hardly ever touches the human world we live in. Getting to know ISTQB is not easy at all, because the only frame of reference for ISTQB is ISTQB itself. Like mathematics, it is an artificial universe in which constructs interact.

Unfortunately, the day before I started the course I opened a book by Alistair Cockburn entitled Agile Software Development. Cockburn is all about the human side of creating software and his theory of building and sharing knowledge. This book, in some way, surprised me too. Considering the book is about agile software development, I expected a tale about processes (Scrum and iterative things in all kinds of forms, unit tests, customer involvement, burn-down charts and other hollowed-out concepts). But Cockburn, in his introductory pages, refuses even to acknowledge that such concepts exist. This is a very, very refreshing take on software development, and perhaps also the only viable one.

But now, back to the glittering wonders of the processes of software testing. I left the first day of the ISTQB course with one prevailing feeling: ISTQB is a set of constructs that excludes other ways of looking at testing. If I compare it to my History of Software Testing (which is also a huge amount of constructs), my history is not a system. For me it is a path of discovery; use whatever you like, leave out whatever you like. The ISTQB framework is a system: a limited, restrictive set of constructs, a framework inside which the software tester thinks. It reduces options; it takes away the possibility of looking at testing in any other way. The course forces upon the tester a single way of looking at software testing.

A Software Tester’s Bookstore

I think it can be said that I am known for keeping lists. There is a simple reason: I like to collect resources related to testing. Mostly I do this in the fine spirit of Umberto Eco’s Anti-Library (see Nassim Nicholas Taleb’s The Black Swan for reference), to be aware of what’s there and what’s not. And, in the fine spirit of Rumsfeldian epistemology, to be aware of the fact that I’m unaware of what’s there and what’s not.

Anyway, for a while I used Google Books to track suggestions for and references to books that came across my path. I reached a total of 100 books (which is nothing, compared to the 50,000 Eco is said to have in his library) and thought it might be nice to categorize this list and show it on my website. I believe most of the books in the list contain at least bits (as in portions) of valuable information for modern-day testing.

I placed the books in an Amazon bookstore and I link to a flat list below. The categorization contains faults and could have been done in many other ways. The category ‘Management’ could, for practical purposes, be split up into ‘People management’, ‘Project management’, ‘Risk management’, ‘Leadership’ etc… These categories in their turn open up a wide landscape of sub categorizations. For now, I use a relatively flat categorization, to prevent anyone from getting truly lost. And, as categorizations go, they can never be false.

Yes, books and categories (such as coaching) are missing. As always, additions are welcome.

Here is a link to the store.

And a link to the list (PDF, opens in new window).

The oddness of two familiar words

Today, finally, the oddness of two words dawned on me. The words are ‘software tester’ and I used them, in a conversation, to describe my role in the project I am currently involved in.

The first reason why I felt strangely alienated from the description ‘software tester’ was that, for the last couple of months, I have not tested software. Not a single piece of software has been submitted to me with the question “Can you please test this?”. Instead, I’ve been reviewing (mostly functional) design. All I’ve seen is complex thoughts spelled out in writing.

Reviewing – from a testing perspective – is hard work. I have sat down with other reviewers and none of them ever described reviewing as even relatively easy. It can be tedious and mind-boggling at the same time. It involves reasoning from many perspectives. It involves recognizing the traps of your own reasoning and that of others. And it takes a lot of effort to find out which set of perspectives will lead to a manageable conclusion.

This brings me to the second word, which is ‘tester’. Today, through a couple of discussions, I realized that the tester – the independent assessor of quality – is as much bound to the context, and as much a modifier of the context, as he is an assessor of the design. Though we would like to see ourselves as the evaluating party, it is this evaluation, this test, that is part of the design, part of the meaning of a system in its context. It reminded me that the design is bound to the tester just as it is bound to the designer. That means that we should be excruciatingly careful in selecting our approach to testing, and that we should look not only at the result of the test but also at the effect of the test on the design. The latter may turn out to be a much bigger issue than the former. Which is why I think ‘tester’ is just a very odd word.

A benignly malicious list of books

In line with the good old context-driven tradition I’m presenting, in this blog post, a list of books. I decided to use the adjective ‘malicious’ to characterize this list. As far as I know there are no books in this list that have actually done harm. But I think the knowledge in these books may be used in a benignly malicious way. If software testing is about evaluating beliefs in software systems, and if good software testing is relentless in exposing the value of those beliefs, then we, as testers, had better be armed with the knowledge to dissect those beliefs, ruthlessly. And since we are not paid to expose only those things that make everybody happy, there is a certain malicious aspect to our craft.

So what I want is to be as ruthless as possible in my analysis. And in order to achieve that, a certain amount of knowledge is necessary. Also, to survive as a software tester, it may be necessary to be five or twenty steps ahead when it comes to reasoning. This list is one of the ways to get closer to that goal.

I did not read all of these books. In fact, most of them are on my list of books that I want to read. Most of the titles are taken from philosophy and psychology. I also decided to leave out, as much as possible, books that are said to be life- or game-changing. In other words, I want the straight stuff, from the source, not from the hype.

I think Nelson Goodman’s Languages of Art will be a very promising read. Furthermore, Feyerabend’s Against Method seems to be one of the founding documents of context-driven testing. More on that book in a future blog post.

A satisfying testing experience on a rainy Sunday

The problem statement: an annoying pause

The issue was really quite simple: on all devices in a family home (three laptops, two tablets and one smartphone) the playback of videos over the internet (whether through YouTube or another channel) was very slow. I was shown a video on the iPad. The beginning of the video was loaded, playback paused after a few seconds because more of the video had to be loaded, the video ran on for a couple of seconds and then paused again to load more. This went on and on, so viewing a twenty-minute video was a very annoying, gruelling experience. The lagging video playback had been frustrating the family for quite some time.

I was visiting the family. It was a Sunday afternoon. It was pouring.

The system setup

There is a home network to which the three family laptops, all running a Windows operating system, are connected through a wireless router. Also using the wireless router are two tablets: one iPad 2 and another, unidentified tablet running some version of the Android operating system. One (Android) smartphone probably uses the wireless router for internet access as well. Not all devices are switched on at the same time. The wireless router is linked to a DSL modem. Both modem and router are of the Cisco brand; the exact models were not investigated.

First inference

Since all devices suffered from the same lags in loading and displaying videos I concluded that neither the operating systems nor the devices were the first candidates for investigation. In fact the speed of the internet connection would be a good starting point for the investigation.

The requirements

At that point two things needed to be established. One, the expected speed of the connection and two, the actual speed of the connection. The expected speed of the connection was established by looking up the service contract with the internet provider. The download speed should be 50 Mbps.

The first test: how fast is the internet connection?

With this in mind it was time to investigate the actual speed. This could be done with an online tool on the website of the provider. But since I wanted a second tool to verify the results of the tool offered by the provider I found a nifty little iPad app that also measured the speed of the internet connection.

Using these two applications, one on one of the laptops and one on the iPad, it was established that the download speed of the internet connection was at least fifteen times lower than the speed promised by the provider. According to the tool offered by the provider the speed was about 3 Mbps (several measurements in a time frame of a couple of minutes); according to the iPad app it was even less: 0.5 Mbps (again several measurements in a time frame of a couple of minutes).

Notice that there is quite a difference between the measurements of the respective tools. I decided not to bother with the difference since a) it would probably be impossible to find its cause and b) both measurements clearly indicated that the connection speed was (very) low. The classification high, medium, low, very low contained enough granularity for me at that time.

Another thing that struck me was that the upload speed was actually higher than the download speed. Normally, in DSL lines, the download speed is significantly higher than the upload speed. I did not investigate this further, but it may have been a pointer towards the cause of the problems.

I was told that the wireless router had fallen at least once. The owner of the router thought that that may have caused a defect in the router.

The second test: is the wireless router the cause of the problem?

To eliminate the router as a cause I reset it a couple of times, without any effect on the download speed. Then I decided to take the router out of the equation. I connected the laptop directly to the modem. Should the router be the cause of the trouble, then with this new setup I should have a good download speed. It was possible to connect the laptop to the modem, but there was no internet connection. This was the point at which I was a little baffled. I thought connecting the laptop to the modem would be a plug-and-play affair. But apparently this was not the case.

To see if I could get the internet connection working through the modem I rebooted the laptop. Without success. Then I started paying attention to the suggestions Windows offered for fixing the connection. One of them was to adjust the DHCP settings. Since I am not a wizard at network settings, I decided to leave that alone at first. Another suggestion was to reset the modem.

The solution: beaten by Windows

Why hadn’t I thought of that before? I was beaten to the chase by Windows! In the early days of DSL I had tinkered with modems and routers for the better part of an afternoon to get the network going. I remembered having tried all sorts of boot and reset sequences. The one I still use occasionally is to shut down the router, shut down the modem, boot the modem, wait till it’s fully functional, then boot the router, wait till the router is online and then start the pc or laptop.

I skipped the ‘internet connection through the modem’ test and applied the somewhat familiar boot sequence. Then I grabbed the iPad and hit the speed test application. It now showed a very nice and satisfying download speed of about 20 Mbps. I was happy to report that the internet connection speed had just increased by a factor of 40.
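The arithmetic behind that claim can be checked in a few lines, using the measurements reported above:

```python
# Speeds as measured with the iPad app (Mbps).
before_mbps = 0.5   # before the reboot sequence
after_mbps = 20.0   # after the reboot sequence

factor = after_mbps / before_mbps
print(factor)  # 40.0, hence "increased by a factor of 40"

# Against the contractual 50 Mbps, the link still runs at only a fraction:
fraction_of_contract = after_mbps / 50.0  # 0.4, i.e. 40% of the promised speed
```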

The deliverables: happiness despite the rain

But was that 20 Mbps speed really satisfying? Our requirement said that the speed in fact ought to be 50 Mbps. But then I remembered that DSL connections seldom reach the specified speed. It has something to do with the distance from the switch, the number of houses using that connection, possibly the quality of the ethernet cables and interference from other devices. Interference could not be ruled out, because the modem and the router were close to a phone, a television, a dvd player and such.

I could have done many other tests in search of the cause of the issue. I could have finished the ‘internet connection through the modem’ test. I could have upgraded the firmware of the router and the modem and tested the effect of that. I could have tried a different ethernet cable. But the result of the session was reasonable and spending more time on testing seemed like a waste. I handed the iPad to the owner and he conjured up the video – a cooking instruction video – that had been plaguing him for quite some time. This time it ran without a flaw. As an extra test, on the laptop I loaded a lengthy video in Youtube (Goranka Bjedov – Using open source tools for performance testing). Goranka came through clear and without a hitch.

Though the actual cause of the error had not been found, I was happy to hand the family two testing products that may help them in the future: 1) a nice little iPad tool to measure the internet connection speed and 2) a procedure to reboot the internet connection, just in case.

I like to think I delivered exactly what was needed.

A primitive attempt at analysis

Jari’s challenge

A couple of days ago I browsed through the puzzles (testing challenges) posted by Jari Laakso on his weblog. I picked one that I found particularly interesting and tried to come up with an answer. I sent an e-mail to Jari and he replied, thanking me for my input. I did not provide the explanation that was on his mind, but this was mainly because I should have tried to solve the challenge in a direct discussion with him, rather than by e-mail. I intend to do this later on, but in the meantime Jari’s challenge nevertheless got me thinking about analysis: the way I tried to tackle his challenge. So thanks to Jari for starting this train of thought!

The challenge is Testing Challenge – Puzzle #3 and goes as follows.

There is a 15 year old boy studying in a high school. He loves ice hockey and is the best of the team from his year. The team has been excellent in the high school championships. Recently, the dean and the teacher’s council had a meeting where they decided he is so good they must dismiss him from the team. Explain why.

Now clearly we need some form of reasoning to find the possibilities that are hidden in this text and to account for those findings. We need our old school analytical skills; the skills that have been honed to perfection through years of intensive study, training and practice.

What the holy book has to say

One of the cornerstones of our craft is the 752-page testing bible TMap Next. The book has a lot to say on analytical skills, particularly in chapter 4, Foundations of Software Testing – Applying the Analytical Mind. While this chapter dives deep into the history of analytical thinking, scientific reasoning and experimentation, it also provides what I think is the best argument for the all-pervading importance of analytical skills in our craft. Chapter 4.2.1, right down the middle of page 154, goes as follows.

To fully understand the testing experience a thorough grasp of analytics and reasoning is imperative. In the previous chapter we concluded that within any software development effort the tester is confronted with systems, models of systems, descriptions of systems or opinions on the functioning of systems that are incomplete, ambiguous, inconsistent, ill structured, contradictory, distorted or just plainly incorrect. And yet this information is seldom presented accompanied with a list of footnotes explicating in full detail its inconsistencies. Why? Because the tester is presented with information that most other people believe, at the very least partly, to be correct, justified and validated. The task of the tester is to confront people with their beliefs, to show not the ways in which the system will succeed, but the situations in which the beliefs we hold may differ from what we encounter in real life. This is the test; and the tools we supply to the tester are the ways to reveal what is hidden. Within this set of instruments we find logic and reasoning and at the same time instruments to disrupt logic and reasoning. In the remaining parts of this chapter the most important of these instruments are discussed.

Armed with these instruments we move toward the challenge.

The challenge

There is a 15 year old boy studying in a high school. He loves ice hockey and is the best of the team from his year. The team has been excellent in the high school championships. Recently, the dean and the teacher’s council had a meeting where they decided he is so good they must dismiss him from the team. Explain why.

First there is a psychological hurdle we must overcome. The reason for the dismissal of the boy does not seem logical; we live in a society in which being good at something is valued, praised and rewarded. Being dismissed from a team at first glance hardly seems to be a reward. So the challenge is to explain this ‘illogical’ dismissal. If the dismissal were ‘logical’ it would hardly require explanation. Since I decide that the dismissal is illogical, I have a number of assumptions I must validate. For example, I assume that the boy lives in modern times and not a couple of hundred years ago. The best clue in the text as to the time in which this challenge must be placed is the fact that ice hockey is mentioned. If we take Wikipedia as a valid source, we learn that modern ice hockey started in 1875 in Montreal.

The second assumption is that the boy lives in a modern Western culture not unlike mine. Other cultures may have totally different views of what is ‘good’ and what ‘dismissal’ means. In fact I think this challenge can only be ‘solved’ by someone from a Western culture. Luckily, clues like ‘ice hockey’ and ‘high school’ point to our cosy modern Western world.

As a side note I think the information systems we create highly depend on the society in which they are implemented and on the time they are implemented. If you take this one step further you may as well state that the information systems we create are mirror images of (aspects of) the societies we live in. If you take it yet one step further you can state that information systems are elements of social systems and information systems are largely social, not technical.

Back to the psychological hurdle. The boy is dismissed because he is so good. If this is true, then the boy’s excellence may yet lead to a reward, for example being promoted to a better team. On the other hand, if no reward follows, then we must look at the motivations behind the dismissal. The only motivation offered by the text is that the boy is so good. But other, hidden motivations may be behind this statement. We have to look into psychology. Being ‘so good’ can be seen as a compliment, and maybe this compliment is given to cheer up the boy. He may have been dismissed for other reasons (such as being a bad team player; in my mail to Jari I used the football player Cristiano Ronaldo as an example), but the dean and the teachers chose to be kind to the boy and stress the fact that he is good. Maybe other players on the team felt ill at ease with so much talent around and the team, rather than the excellent player, had to be saved.

There is a world of possibilities behind the inner workings of the meeting of the dean and the teachers’ council. There are many things that the text does not reveal; we only see the curtailed conclusion. Within the council there may have been a teacher who felt that the boy’s grades suffered from playing ice hockey so much and pleaded for his dismissal. To Jari I even offered the suggestion that bets were placed on the ice hockey games (unlikely, but interesting for the sake of argument) and that the dean or the teachers had been bribed to make the team fail. The boy may or may not have parents who play a part in the decision. No parents are mentioned in the text, which is odd in some way, because parents are usually involved in school issues.

Analysis

If we want to apply analysis we must learn how to reason. We must learn how to recognize and analyze conclusions, lines of reasoning, facts and assumptions. How do I judge the conclusion in the challenge? I know it is short and illogical in my world view. There are numerous gaps in this conclusion and I will have to apply reasoning to locate them. How do I judge the entire challenge in the light of my limited modern Western mindset? In the book The Order of Things the French philosopher Michel Foucault tells us that reading the classification of animals stated in the Celestial Emporium of Benevolent Knowledge (by the Argentine writer Jorge Luis Borges) broke “up all the ordered surfaces and all the planes with which we are accustomed to tame the wild profusion of existing things”. He further argues that a categorization changes when you change the background (context) of that classification. The concept, of course, is simple: if you place a black square on a black background you will not even notice it is there. If you place it on a white background you will see a black square. Has the square changed?

Some other friends that accompany us during analysis are quantification, time and place (setting). From movies, for example, we learn that time is not absolute, that sequences of actions do not have to be consistent in time, that time may be sped up, slowed down, reversed etc… Similarly any quantification, or lack of quantification, may hide or reveal gaps. In the challenge the boy is 15 years old and studies in a (any?) high school; there is a “teacher’s council”, but how many teachers are in that council, and do any of them actually teach the classes in which the boy participates?

Lately I’ve been thinking a lot about ‘what is missing‘ and what reasoning we can use to find it. There is a nice book by the Austrian philosopher Paul Feyerabend with the rallying title Against Method. I cannot think of a reason why a software tester should not read this book. Feyerabend is all about context and he does nice things with Galileo Galilei. He uses Galilei’s reasoning in favour of the movement of the earth (around the sun). What Galilei did, according to Feyerabend, was to show by reasoning that something was missing. In Galilei’s case the movement of the earth was rejected because a rock falling from a tower falls straight down; if the rock falls straight down, the argument went, the earth does not move. Galilei argued that there are movements we cannot perceive, and thereby demonstrated that we must at least use other means of perception to draw conclusions about the movement of the earth.

Within the history of science, philosophy, sociology, biology and so on there is a wealth of reasoning. I think it can be demonstrated that the tester who can analyze from a broad base of reasoning adds immense value to any project. So if we must learn anything to add value to our craft, it is this. I hope this is a falsifiable theory.

Specialization

Standard

Specialization in Information Technology is fairly common. Because the field of IT is vast and covers numerous intricacies, the IT specialist will inevitably come to know more about certain technologies, approaches or domains than others. The question that has kept me busy for the last couple of weeks is whether the software tester must specialize in order to survive.

I consider myself to be a context-driven tester, mainly because I feel that context-driven testing facilitates the use of the brain. That’s cynical and yet also true. On a day-to-day basis in our line of work we frequently meet our worst enemy: complexity. There are numerous ways to battle complexity, and context-driven testing is at the very least a software testing approach that identifies complexity as part of the craft and stimulates the tester to think freely about it. That single fact is a major win over other schools of thought.

On the other hand I also feel that context-driven testing is a generic approach; there are generic principles that govern the way we master the problems we face. We need, for example, creativity in our line of work. Having recognized this we can study the way creativity works and use that knowledge to our advantage. I find myself more eager to focus on the generic aspects of the work than on the specializations that are often the focus of the context in which we test. I do not think that we can easily say that once we know all about banking we know all about testing banking applications. The same goes for mobile applications, cloud applications, specific architectures, frameworks, programming languages, tools or quality characteristics. The temptations of becoming very good at testing in a specific context are numerous, but should we yield to those temptations?

I found myself wondering about this topic because currently business rules and business logic are a serious part of my context. The field of business rules is an emerging one; I believe it started to develop in the 1990s. And there is not a lot of focus on business rules from a testing perspective. It is a niche market, but one that has a chance of becoming quite popular. If we are to believe one of the founding fathers of business rules, Larry Goldberg, it is an elegant and simple way of reducing complexity in systems because, for the first time, it strictly and consistently separates business logic from technical implementation. Larry has a convincing story.
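To make that separation concrete, here is a toy sketch of my own (illustrative names throughout, not Goldberg’s actual notation or any real rules engine): the business rules live as plain data, while the code that applies them is generic and knows nothing about the business.

```python
# Illustrative sketch: business rules as data, separated from the
# generic "engine" code that evaluates them.
DISCOUNT_RULES = [
    # (condition on the order, discount percentage)
    (lambda order: order["total"] >= 1000, 10),
    (lambda order: order["loyal_customer"], 5),
]

def discount_for(order, rules=DISCOUNT_RULES):
    """Return the highest discount whose rule condition holds."""
    applicable = [pct for cond, pct in rules if cond(order)]
    return max(applicable, default=0)

print(discount_for({"total": 1200, "loyal_customer": True}))   # -> 10
print(discount_for({"total": 200, "loyal_customer": False}))   # -> 0
```

For the tester the interesting consequence is that the rules can be reviewed, varied and tested on their own, without touching the engine.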

Now if job security, financial rewards and recognition of being one of the ‘pioneers’ in business rules testing are the rewards that can be had from specializing in this particular field, is it then okay to specialize? This time I’m not so sure about rejecting specialization without a closer look. After all, it may be a relatively easy way to survive and thrive…

Functional testing heuristics – a systems perspective

Standard

In my previous post I promised to get back to the topic of functional testing heuristics. There are several reasons why I get back to it now. The main reason is that I was able to expand the list to 41 heuristics. Another reason is that I learned from making this list that most of the items are probably aspects, or characteristics, of systems. Therefore I added the phrase ‘a systems perspective’ to the title of the list. I do not claim to be an expert in systems thinking but I do believe that software systems have common characteristics that we often fail to highlight in our approaches to functional testing. As you go through the list you will notice that most of the aspects are pretty obvious.

Keep in mind that the list is not a checklist. It is not part of a methodology or an approach. It is not a definitive set of ‘functional testing principles’, and if you have other names by which to identify characteristics of systems, by all means go ahead and use those. Or, even better, share them with the testing community. Additions are warmly welcomed as well.

Another reason I get back to it now is that I learned from making this list that we, as testers in the field, may want to reflect a little bit more on the work we do and the things we encounter. I believe that a research methodology such as grounded theory suits that purpose reasonably well.

The list is published on my website with some commentary. If you’re only interested in the PDF file, it can be found here.

A closer look at functional testing heuristics

Standard

I consider myself to be a tester of the functions of software. It means my expertise is in looking at how (or why, or when etc…) systems and applications function. It also means that I am less experienced in non-functional aspects of systems such as usability, performance or security. I believe that those aspects can be considered areas of expertise in themselves.

I think most testers start out as functional testers. In some introductory course on software testing they encounter a number of software testing techniques that can be used to evaluate the functions contained in an application or a system. Most commonly these techniques aim at paths and decisions and come from a more or less mathematical examination of the function. Combinations and coverage are important aspects of these techniques.

Now proficiency in functional testing is not a common trait. For a number of reasons the functional tester is thrown off the path of developing skills in functional testing. One reason is the testing career path: gaining competence in functional testing is not rewarded as much as climbing the ladder and becoming a test coordinator, a test manager, a test coach, a quality consultant and so on. Furthermore, if we define functional testing by a restrictive set of testing techniques, then the only way forward in functional testing seems to be gaining experience with those techniques, which is a dead-end street. Another reason could be that functional testing is not regarded as a valuable area of expertise in the way that performance testing or security testing is. It is remarkable that the latter areas of software testing have gurus expanding the craft. I find it hard to come up with the name of a ‘functional testing guru’. Well, actually there are some in the context-driven school of testing.

Most of the time you see functional testers going off on a quest based on a mixture of techniques, instincts and past experiences. The latter two usually provide valuable insight into the functions of a system. They seem to find the better bugs because the functional tester experiments with the application in ways that had not occurred to the programmer or the designer. So it seems that there is a lot more to functional testing than just techniques, but there is little explicit knowledge of what more there is.

I stumbled across our common lack of proficiency in functional testing quite a number of times. In functional testing there is often that nagging feeling that you’re missing something important, but it usually takes a lot of effort to find out how to get to that important thing. For example: I may have covered the paths of a function using some technique and found no bugs, yet the function could still cause the application to go wrong in a fascinating way. In one project I developed a test strategy based on scenarios (see, for example, Soap Opera Testing [PDF]), which was quite a valuable and very useful approach in itself. Yet I had the feeling that this approach covered only a minor part of the ways the functionality of the system could be used. I was looking for other perspectives I could use to research the system and experiment with it.

Most of the time I use James Bach’s SFDPOT heuristic to change my perspective. As a side note, I find that many of the functional testers I meet are awkwardly ignorant of this heuristic. On the particular project I mentioned, the heuristic did not include some of the perspectives I was looking for, so through the analysis of a couple of risk assessment sessions with users I came up with a few more. Yesterday I sat down and extended my list to what you see below. This list should probably be turned into a heuristic, and each of the items could use some clarification. I will leave that to later posts. Additions, of course, are welcome!

  • Patterns
  • Sequence
  • States
  • Concurrency
  • Confluence
  • Synchronization
  • Sharing
  • Interactions
  • Repetition
  • Hierarchy
  • Dependencies
  • Parameters
  • Rules
  • Configuration
  • Constraints
  • Resources

A final note to close this topic. Yesterday I was reading up on qualitative research (Qualitative Research: An Introduction to Methods and Designs). I got to the part on grounded theory and coding. Grounded theory, in short, is a research method in which a theory is formed from observation. Coding is the abstraction of theory in parallel with the observation (for example, the text of an interview with a person). If we, as testers, consider our bug repositories as raw data and are able to abstract a theory from them by coding, we should be able to come up with numerous functional testing heuristics (such as the list above). One fact is that there is probably no shortage of bug descriptions in repositories worldwide. The other fact is that we hardly ever use them to do valuable research. Such failure to make scientific use of the data we generate must be one of the reasons for the immaturity of our craft.
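To give a flavour of what such coding might look like in its crudest form, here is a hedged sketch: tag each raw bug description with one or more codes and count which codes recur. The keyword-to-code mapping is a toy example I made up, not a real coding scheme, and real grounded-theory coding is done by a human reading the data, not by keyword matching.

```python
# Toy 'open coding' of bug reports: map keywords to candidate codes,
# tag each description, and tally which codes recur across the repository.
from collections import Counter

CODE_KEYWORDS = {
    "sequence":    ["order", "before", "after", "sequence"],
    "concurrency": ["simultaneous", "race", "lock", "parallel"],
    "state":       ["state", "stale", "status"],
    "boundary":    ["maximum", "minimum", "limit", "overflow"],
}

def code_report(description):
    """Return the set of codes whose keywords appear in a bug description."""
    text = description.lower()
    return {code for code, words in CODE_KEYWORDS.items()
            if any(w in text for w in words)}

def code_frequencies(reports):
    """Tally codes across a collection of bug descriptions."""
    counts = Counter()
    for report in reports:
        counts.update(code_report(report))
    return counts

reports = [
    "Crash when two users save simultaneously",
    "Totals wrong if discount applied before tax",
    "Stale status shown after cancellation",
]
print(code_frequencies(reports))
```

Frequently recurring codes would then be candidates for new entries in a functional testing heuristics list such as the one above.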