Mind maps as a testing tool bug me a lot

I use Mindmup.com for my mindmaps

There is one thing about mind maps as a testing tool that really bugs me.

Don’t get me wrong, I find mind maps a great collaboration tool for getting ideas out of people’s heads and onto paper. They are great for designing a test plan: a few testers can sit down, brainstorm and create a map that can be used efficiently as a complete plan.

So what bothers me so much?

This: “Why don’t we do the same exercise together with the developers before the code is written?”

Are we so obsessed with finding bugs that we cannot share our thought process and help PREVENT (yes, prevent) defects, instead of catching developers out and wasting the customer’s money and time?

I really hope that somebody out there will give me a valid reason why we shouldn’t shift mind map creation left and prevent defects instead of detecting them.

Otherwise we should start doing it now, no excuses.

Do agile organisations need a Test Manager?

I have had this discussion a few times in the last few years; the question of whether or not agile organisations need Test Managers keeps coming up. Now that the agile transformation wind is blowing stronger over traditional software testing departments, more and more Test Managers have their role challenged.

This blog post is not about the need for a Test Manager during a transition to agile; it is about the need for a Test Manager within a mature agile organisation.

As software development professionals, the first thing we should do when somebody asks us to build a software product is to ask: “what problem are we trying to solve with this product?”.

Software products are financed and built for the same reason people are hired and paid, that is “to resolve a problem”. Software products are built to resolve a problem for users and people are employed to resolve a problem for organisations.

If we look at a traditional software development organisation with phases, gates, and siloed development and test departments, a Test Manager resolves the following problems:

1) Communication/Negotiation with other silos
2) Schedule/Resource allocation
3) Process improvement
4) Quality signoffs
5) Test strategy
6) Skill and Resource Development
7) People leadership

Now let’s think about a mature agile software development organisation.

Problem 1 disappears with the existence of a cross-functional team where people with all skills are sitting together and don’t need an intermediary to communicate.

Problem 2 ceases to exist because testing, in an agile team, is a continuous activity; there is no need to schedule anything. On resource allocation, testing is a shared activity and everybody in the team will help; the team itself will know if one or more testers are needed. Through retrospection the team will identify skill shortages. There is no need for somebody to call the shots from outside.

Problem 3 evaporates. Continuous improvement is a team activity; again, retrospectives will trigger changes, not an external entity.

Problem 4 gets sucked into a black hole and implodes: agile teams don’t need quality signoffs. Quality is owned by the team, the team is accountable for it, and a high five is all they need.

Problem 5 is not relevant. The test strategy is part of the development strategy and is defined by the team. Let’s remember that we are talking about a mature agile organisation where teams have the necessary skills.

Problem 6 can be resolved by a Test Guild or Test Community of Practice, which doesn’t need a Test Manager to function; it simply needs people passionate about testing and software quality.

Problem 7 is still a problem we need to resolve. We need a people leader, so let’s solve the problem and hire a people leader then!

I was a Test Manager, and I had a choice: fight the system and create problems that didn’t exist anymore so I could justify my role, or embrace the new challenge, learn new skills and start resolving real problems for my organisation. I chose the latter and never looked back.

What team do you play for?

One team One goal

“If you had one wish for software testing this year, what would it be?” I found this interesting question in a forum on LinkedIn a few days back and it made me think.

Many ideas came to mind, around better automation strategies, better understanding of one’s context, better collaboration and knowledge sharing, et cetera, and then it hit me.

For 2015 I wish that testers would understand what team they play for. I wish it were the year when testers realised there is no “us and them” with developers and business analysts. I wish testers focused on delivering value over acting like the quality police. I wish that testers stopped cracking jokes about developers. I wish that testers started looking at the whole and stopped (sub)optimising their process. I wish that testers stopped trying to resolve problems that don’t exist and embraced collaboration with developers and BAs to prevent defects, saving their companies a lot of money in the process.

You might say “hey! that’s a lot of wishes, not one!”. No, it’s only one.

It’s this: I wish testers chose to play for the right team.


Agile tester part 2, questions and answers

Warning: The opinions expressed in this post are mine only, please do not use them against any other group of people, but only against me, that is Augusto Evangelisti a.k.a. Gus.

After writing my most recent blog post “The Agile Tester, a Curious and Empathetic Animal” I received quite a lot of feedback for which I am very grateful. Feedback in the form of a conversation is the true fuel for learning and improvement.

To all of you that have mailed me, tweeted about my post, responded using the comments in my blog and talked to me face to face about it, thank you so much for helping me learn!

When I looked at the feedback I received, I saw two quite different trends. Looking closer, I saw that the different feedback types came from different schools of thought.

Let me give you two examples for some context.

This one from Twitter:

This one from a comment on my blog post:

Michael Bolton: Something about this post troubles me, Gus. In order to test what was troubling me, I took the content and replaced each instance of “tester” with “programmer”. The result made perfect sense, of course, but it left me wondering: if I can replace “tester” with “programmer”, what distinguishes the tester from anyone else on the project? What is the special role, the particular set of activities, that the tester performs? Is there a difference between the programmer’s mindset and the tester’s mindset? What is the mission, the distinctive reason for having a tester on the team—whether that person has the title “tester” or something else? What is it that distinguishes testing work from all the other work? What testing-specific skills do testers bring to the table? What is testing?

I have answers of my own, of course. But I’m wondering what your answers might be.

I am using Lisa’s and Michael’s as examples of the feedback received because the former is a recognisable exponent of the agile testing community and the latter is a recognisable exponent of the context-driven testing community. The other feedback that I received from people close to each of the two communities is extremely similar to theirs.

If a group of people finds troubling what the other group finds great, I smell something interesting and an opportunity for learning.

I’ll try answering Michael’s questions and look forward to his very own answers, hoping to learn something:

Question#1:  Something about this post troubles me, Gus. In order to test what was troubling me, I took the content and replaced each instance of “tester” with “programmer”. The result made perfect sense, of course, but it left me wondering: if I can replace “tester” with “programmer”, what distinguishes the tester from anyone else on the project? What is the special role, the particular set of activities, that the tester performs?

Michael, the distinction between different roles in agile teams is becoming more and more blurry. Agile teams value competencies more than roles. As an agile tester in my team I have core competencies that I use to support my team. These include, but are not limited to, the ability to evaluate a product by learning about it through experimentation. I also use such competencies to coach and lead other members of my team who are not that strong in that area and help them grow towards an ideal form of generalising specialist. Finally, I also perform tasks outside my main competency to support the team. In some cases I will need guidance from another member of the team whose core competency includes the ability to perform such tasks. I believe that this blurring of the roles increases agile team members’ accountability; in fact, nobody in an agile team (developer, tester, business analyst, operations specialist, UX expert, et cetera) should ever say “I’m X, doing Y is not my job”; they should instead ask their colleagues “how can I help you?”. Shared activities are key to learning and gaining competencies.

The main goal for any agile team member, regardless of their role, is delivering customer value.

Question#2: Is there a difference between the programmer’s mindset and the tester’s mindset?

Yes, there is a difference, but I don’t think it constitutes an insurmountable obstacle. One of the teams I have worked with, knowing that I wouldn’t be available for a period of time, suggested they needed the tester’s mindset for certain activities and decided that, to make sure they were focusing on the right things, they would wear a big red hat. This simple change helped them keep their focus and their mindset in the right place. It might not be perfect but it worked. It also works because I speak to them about the tester’s mindset when we work together; I give them examples of what I am thinking at specific times and why it is important to think about such things. Developers are very smart people; once they understand a practice has value and receive a sufficient amount of coaching, they can learn to do almost anything.

Just to avoid misunderstandings, our developers wear the tester’s hat when performing testing activities with other developers, i.e. they don’t use the hat for their own code, as maybe this would be asking too much.

 Question#3: What is the mission, the distinctive reason for having a tester on the team—whether that person has the title “tester” or something else?

The mission is to provide the team with the testing competencies it needs so that customer value can be delivered. The secondary purpose is to train and coach the team so that they can gain some of the competencies to support test activities. Coaching and training can also be formal, but it is mainly delivered by working in pairs.

 Question#4: What is it that distinguishes testing work from all the other work?

Every activity the team engages in is performed to deliver customer value, including test activities, but I am not sure I understood your question completely; could you please rephrase?

Question#5: What testing-specific skills do testers bring to the table?

See answer to question#1 re. competencies. My main competencies are exploration and learning.

Question#6: What is testing?

Testing to me is exploration and learning.

I test ideas to prevent defects, software to detect them and product owners assumptions to reduce waste.


I hope I have clarified some of your doubts and I am curious to hear your answers to your own questions.


Testing, the big agile misunderstanding

Navigating social media I bumped into the Capgemini World Quality Report 2014-15. After sharing my personal data with Capgemini, I downloaded it and started reading. First of all, it is a very well written document; second, the findings are interesting. I will talk about some of its puzzling conclusions some other time.

What I am going to comment on here is one small part of the chapter “Agile Testing: Growing in Acceptance, Still to Fully Mature”, in particular the finding that the biggest challenge in agile testing, according to the report, is:

“Lack of a good testing approach that fits with the agile development method”

According to the report, 61% of the 1,432 respondents (among 1,543 CIOs and IT testing leaders) claim this is an issue for their organization, and among the issues this is the most widespread.

Can you see the real problem?

The problem is that 61% of those interviewed don’t know what agile testing is about, and that’s the real issue.

Agile testing is an inseparable part of agile software development; how can it not fit with itself?

Do you want to know when it will not fit? It will not fit when you try to shoehorn traditional, centralized, independent testing approaches into an agile development team. Yes, in that case it won’t fit at all; in fact, forget it, if you do that you will fail.

Do you really want to be agile? Really? Then forget about Test departments and change the culture in your organization. Software quality is everybody’s responsibility in an agile organization, embrace the change and YOU WILL FIT.


Be lean, ask “why?”


Your best piece of code is the one you won’t have to write.

Yes, I am serious, and no, I am not intoxicated. Let me tell you a story.

It was a sunny summer day in Dublin and “the three amigos” (Tester, Developer and Product Owner) were discussing the next user story to deliver. It all started as usual, with the Product Owner explaining what we were meant to do, and as the story was quite small and easy, we were soon starting to write the examples that would become acceptance tests and eventually code.

Everything was going smoothly, and we had already agreed that we would add a field to our payload with the new information, until, out of the blue, the tester said: “Why? I mean, why do we need to do this?”

The Product Owner said: “Because we need to send extra information to customer A!”

The tester insisted: “But why do we need to send it to customer A?”

The Product Owner said: “Because we always send it to system X so that it can create product Y”

Then the tester said: “But customer A doesn’t use system X and doesn’t sell product Y, so why do we have to send it?”

The Product Owner said: “You might be right, but Mr. SuperBoss said he wants to send it!”

The tester said: “Well, we need to ask Mr. SuperBoss WHY he wants to send it”

The Product Owner stood up, went straight to Mr. SuperBoss and asked him.

The fact is, there was no reason; we were wrongly assuming that customer A would need it.

Would you agree with me that the code we didn’t write is super clean, extremely cheap to develop, test, deploy, maintain and sunset?

It only ever cost a conversation and a few whys; I bet you can’t do it much cheaper!

If you don’t see the business value of what you are delivering, ask for it; you might get a nice surprise and find out you don’t need it at all.

What’s with the tools obsession?

You can’t avoid them; they are in every discussion: tools, tools, tools and more tools.

There is no question on bugs that doesn’t get an answer like “open a bug in Jira and bla bla bla…”, every topic on performance has a LoadRunner or JMeter here and there, and no functional testing discussion seems to be worth its while if somebody doesn’t mention Selenium or QTP.

If you suggest to somebody that they talk about and discuss the issues they have, somebody will jump in with the tool that will solve the issue straight away.

The CVs I review are infested with tools; some people with 3 years’ work experience claim they can use more tools than I have ever used in almost 20 years.

For some strange reason, people believe that if you want to be agile you must use tools, even though the Agile Manifesto clearly says “Individuals and interactions over processes and tools”.

Do not let the tools replace the conversations!

The wrath of the mighty metric

Reasons why some software delivery teams don’t give a damn about their customers

It feels like a century ago, but once upon a time, less than a century ago, I was leading a traditional test team in an organization where 3 separate teams of Business Analysts, Developers and Testers were delivering software in an incremental, iterative, death-march style. Each of the 3 teams had its own leads and managers, and each of the 3 teams was measured by specifically tailored metrics. My team’s efficiency was measured by the mighty DDI (Defect Detection Index), calculated as DDI = (Number of Defects detected during testing / Total number of Defects detected, including production defects) * 100. The DDI had to be greater than 90%, otherwise our team would have been deemed inefficient, bonuses dropped and the test team itself branded as a bunch of losers.
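As a worked example, the DDI arithmetic above can be sketched in a few lines of Python; the function name and the sample figures are mine, purely illustrative:

```python
def defect_detection_index(found_in_testing: int, found_in_production: int) -> float:
    """DDI = (defects detected during testing / total defects detected) * 100."""
    total = found_in_testing + found_in_production
    return found_in_testing / total * 100

# Example: 92 defects caught in testing and 10 escaping to production
# gives a DDI of about 90.2%, barely clearing the 90% bar.
print(round(defect_detection_index(92, 10), 1))
```

Notice how a single escaped production defect can swing the number, which is exactly what made the metric so easy to argue over.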

Yes, you guessed right, the other 2 teams were measured in a similar way; their efficiency was also based on the number of defects, the lower the better.

God, I am glad this is only in the past. Even remembering it makes me sick in the stomach. Sick like every time a production defect was detected, sick like every time a defect that our team detected was rejected, sick like every time I had to go to the triage meetings and inevitably have an argument with either the BA lead or the DEV one, because the defect we found was not seen as a possible improvement for the product but as a threat to some team’s metric. I’m not even going to describe the awful discussions that followed the acceptance of a defect as valid, when a decision had to be made on whether the defect was due to bad requirements or bad code.

The funny thing was that no matter which was the efficient team and which were the inefficient ones, the software delivered was the same, no change whatsoever, and the customers were constantly quite unhappy. The real value that the metrics gave to the department was the ability to point fingers based on numbers. They say numbers never lie; maybe numbers don’t lie, but how many lies can we tell to fabricate numbers?

Since then many things have changed in my professional life and today I don’t have to fight stupid battles to fabricate numbers in order to define efficiency so I can, funnily enough, use my time more efficiently.

Why doesn’t calculating confrontational metrics work? The problem lies in the fact that we are humans; if you attach prestige and monetary value to a metric, the metric becomes the goal of the team and the battle can begin. The test team doesn’t care how useful the delivered product is; all they care about is opening as many defects as possible so that the mighty DDI doesn’t go under 90%, and if this means opening defects that do absolutely no harm to the customer but only to the development team and the schedule, it doesn’t matter. The same logic applies to the development and BA teams, which will spend their time obfuscating their requirements and defending their code from the stupid defects opened by the test team. All this creates a climate of tension, distrust and hostility. Nobody really cares whether the customers are happy, as long as the individual teams’ metrics solemnly declare their efficiency and fingers can be rightly pointed :-(.


The funny thing is that it is very easy to resolve this problem and put the focus back on the customer. Create a cross-functional, self-organising team able to analyse, develop, test and deliver a complex software project, and judge the team on how well they satisfy the customer’s needs. The team lives as one, produces quality as one, delivers customer value as one, succeeds as one or fails as one. The goal of the team matches the goal of the company, and failure or success of the team determines failure or success of the company. It’s called an agile team; try it out!

How to transform bad Acceptance tests into Awesome ones

So you want to learn how to write good acceptance tests? There’s only one way, let’s write some.

This is a practical example that is designed to help beginners write clear and easily maintainable acceptance tests.

Our System

We are BOG, the “Bank Of Gus”, and we have a Loan Approval Processing System that takes as input some data regarding the applying customer and his loan requirements; as output it returns either Accept (the customer will be given the loan) or Reject (the customer will not be given the loan).

The marketing manager wants to start selling a new Holiday Loan and produces the following user story:

As a Customer
I want to borrow money from the bank
So that I can go on Holiday and enjoy myself

Acceptance Criteria:
In order to get the Holiday Loan approved
1) The Customer must be 18 or older
2) The Customer’s salary must be > €20,000.00
3) The Customer’s time in employment must be >= 6 months
4) The Loan amount must be < (Customer’s salary)/5
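If it helps to see the criteria as executable logic, the four rules boil down to a single predicate. Here is a hedged Python sketch; the function and parameter names are my own invention, not part of the story:

```python
def approve_holiday_loan(age: int, salary: float,
                         months_employed: int, loan_amount: float) -> bool:
    """Encode the four acceptance criteria above as one boolean decision."""
    return (age >= 18                      # criterion 1
            and salary > 20_000.00         # criterion 2
            and months_employed >= 6       # criterion 3
            and loan_amount < salary / 5)  # criterion 4
```

Every acceptance test we write below is ultimately exercising a decision of this shape.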

The Loan Application Form (UI) already exists and calls a REST service, which is what we are now updating to allow for this new product. The UI is also ready and able to display our outcome: a big green Approved or red Rejected string, based on what our service returns.

The Loan Application Form looks something like this:

I am eager to start writing acceptance tests, so I start writing down the first one without thinking much (please don’t get bored by the first test, I promise it gets better; this is the worst one you’ll see).

I’m going to use a rich celebrity for my first test, let’s try to make things interesting.

[image: first_test]

Ouch… 16 steps for ONLY ONE TEST. By the time I do all the necessary scenarios with boundary analysis, I am going to have a document the size of the iTunes licence agreement, and this is only the start!

HINT #1: Focus on “what” you are testing and not on “how”

First of all, do I really need to say that I go to a page, that I fill in each field and that I push a button? That’s “how” I use the app; it is not necessarily “what” I do with it. The “what” is “a customer is applying for a loan”.

Could I generalize and get to the concept of a customer applying for a loan? YES
Do I really need to fill in the web form to exercise the code I am writing/testing? NO
Can I abstract it, use a test double and call my code directly? YES

Focus on what you are testing, you are not testing the UI, you are testing the loan approval logic! You don’t need to exercise it through the UI. You would exercise it through the UI only if you were testing the UI.

OK, let’s use a test double. I create a mock with the data as per the example above and will use it for testing, but it’s not making writing the test any easier. :-(

I could do something like

[image: second_test]

Besides the fact that I abstracted the how (the customer entering strings and clicking buttons) with the what (the customer applying for a loan), I still have a very messy test, full of detail and quite difficult to read and maintain.

It looks slightly better but not good enough. I couldn’t even fit all the data on one line, so I took the lazy option of adding ellipses, but in the real world ellipses don’t work; they can’t be automated. Imagine repeating this for all the scenarios I need to cover; it’s a disaster. What am I going to do?

HINT #2: Eliminate irrelevant Detail

Do I really need to know the name of the customer to decide if I want to approve his loan? NO
Do I need to know his sex? NO
Shall I continue asking rhetorical questions? NO

The only important variables for designing the logic of my application are the ones described in the acceptance criteria, look back: Age, Salary, Time in employment, Loan amount

OK this looks promising, let me try to write the original test using only those.

[image: third_test]

This definitely looks better: it exposes only the parameters that have an impact on the loan approval logic, it is more readable, and while reading it I get some idea of how the system will work. That’s better, isn’t it?

OK let’s write all the scenarios to comply with the acceptance criteria using boundary analysis, equivalence partitioning and other test techniques.

[image: fourth_test]

Ouch again… I haven’t even started looking at the cases where the loan will be rejected and I already have 4 very similar tests that will bore the Product Owner to tears, so much so that he won’t speak to me for a month. What can I do?

HINT #3: Consolidate similar tests with readable tables

I know of a very useful way of writing tests that are very similar without repeating myself over and over and making the readers fall asleep. It’s called a scenario outline, and I’m not going to explain in words what it does; I’m just going to show it to you, because I know that once you look at it you won’t require any explanation.

[image: sixth_test]

Wow, this looks much better! One test of 3 lines, with examples that cover all the possible scenarios! Do you remember when you needed 16 steps of unnecessary detail to describe only the first line in the examples above? This is certainly an improvement: more readable, more maintainable and all around 100 times better than the original.
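The same table-driven idea exists outside Gherkin too. As a rough sketch (the decision function and the example rows are mine, assuming the original acceptance criteria), a plain Python loop over an examples table gives an equivalent structure:

```python
def decide(age, salary, months_employed, loan_amount):
    """Hypothetical loan decision mirroring the acceptance criteria."""
    approved = (age >= 18 and salary > 20_000
                and months_employed >= 6 and loan_amount < salary / 5)
    return "Approved" if approved else "Rejected"

# Examples table: each row is one scenario, like a line under "Examples:".
EXAMPLES = [
    # age, salary,    months, amount,   expected
    (18,  25_000.00,  6,      4_000.00, "Approved"),
    (17,  25_000.00,  6,      4_000.00, "Rejected"),  # under age
    (18,  19_999.99,  6,      3_000.00, "Rejected"),  # salary too low
    (18,  25_000.00,  5,      4_000.00, "Rejected"),  # employed < 6 months
    (18,  25_000.00,  6,      5_000.00, "Rejected"),  # amount not < salary/5
]

for age, salary, months, amount, expected in EXAMPLES:
    assert decide(age, salary, months, amount) == expected
```

Adding a scenario is adding a row; the test logic itself never grows.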

Also, look at it closely; it gives the business the amazing power of using this test in the future to make changes! Imagine that we end up in a credit crunch (again) and the banks want to tighten the way they lend money, so they decide to increase the minimum salary to 30,000 and the minimum time in employment to 12 months, for example.

A quick copy and paste + small refactor and we get:

[image: seventh_test]

That’s quite powerful isn’t it?

Now, if I were a tester and I wanted to be picky, I would tell you that there are plenty of scenarios that have not been tested and that a full decision table should be created to give good coverage.

Yes, you guessed it: I am a picky tester. Let’s build the decision table for the Credit Crunch scenario.

HINT #4: Use decision tables and boundary analysis to get high coverage (DON’T DO THIS! It is an anti pattern and I am leaving it here as an example of something I learned to avoid along the way)

How do I build a decision table?
First you need to know what your variables are and what “interesting values” need to be considered.

What’s an “interesting” value? Interesting values are all the values a variable can take that might make the logic fail. Generally they are boundary values.

Ok back to the Credit crunch requirements:

2) The Customer’s salary must be > €30,000.00
3) The Customer’s time in employment must be >= 12 months

The salary variable, for example, has 3 interesting values: 29,999.99, 30,000.00 and 30,000.01, respectively the left boundary, the boundary and the right boundary (some observers might say that 0 and -1 could be interesting values as well; I agree, but for the purpose of this exercise we won’t consider them).

How about time in employment? The interesting values are: 11, 12, 13.

OK I have 2 variables each with 3 “interesting” values (or dimensions)

I can immediately calculate the amount of tests I need to get 100% coverage with all possible combinations of “interesting” values.

NumberOfTests = dim(salary)*dim(time_in_employment) = 3*3=9

9 test cases will cover all possible paths using all combinations of “interesting” values.
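The 3 * 3 multiplication can even be mechanised. Here is a small Python sketch (the variable and function names are mine) that generates all nine combinations and the expected outcome for each, using only the two credit-crunch variables, with age and loan amount assumed to be held at valid values:

```python
from itertools import product

salaries = [29_999.99, 30_000.00, 30_000.01]  # left boundary, boundary, right boundary
months_employed = [11, 12, 13]

def credit_crunch_outcome(salary: float, months: int) -> str:
    # Credit-crunch rules: salary > 30,000.00 and time in employment >= 12 months.
    return "Approved" if salary > 30_000.00 and months >= 12 else "Rejected"

# Cartesian product of the "interesting" values: the full decision table.
decision_table = [(s, m, credit_crunch_outcome(s, m))
                  for s, m in product(salaries, months_employed)]

assert len(decision_table) == 3 * 3  # dim(salary) * dim(time_in_employment)
```

Only the two combinations with salary 30,000.01 and 12 or more months come out Approved; the other seven rows are Rejected.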

Let’s build the decision table, and guess what? It can be expressed as a test!

[image: eight_test]

1 test, 3 steps, 9 examples, 100% boundary analysis coverage, in English, readable, maintainable, clearly expressing the business value delivered. What more do you want?

One last thing: you might be in a situation where decision tables with many variables of large dimension require hundreds or even thousands of test cases. If these tests are run at the unit level, I wouldn’t worry too much about the run time, but if, for instance, you are testing some JavaScript logic and are able to do so only through the UI, this will take a long time to execute and will not represent a good Return On Investment.

What can you do? There are many ways of reducing the number of tests to run while still maintaining relevant coverage. One technique is called pairwise testing; it is very straightforward and uses tools to quickly identify the highest-risk tests that should be included and eliminate the ones with less risk associated. Pairwise testing is outside the scope of this document; if you are interested in knowing more about it, check this out! http://bit.ly/T8OXjZ
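To give a flavour of the idea without endorsing any particular tool, here is a naive greedy all-pairs sketch of my own (real pairwise tools are far more sophisticated): it keeps picking the combination that covers the most not-yet-covered pairs of values until every pair is covered.

```python
from itertools import combinations, product

def pairwise_tests(params: dict) -> list:
    """Greedy all-pairs generation: cover every pair of values across
    every pair of variables, usually with far fewer tests than the full product."""
    values = list(params.values())

    def pairs(combo):
        # The set of (variable, value) pairs this combination covers.
        return {((i, combo[i]), (j, combo[j]))
                for i, j in combinations(range(len(values)), 2)}

    # Every pair that must eventually be covered.
    uncovered = set()
    for combo in product(*values):
        uncovered |= pairs(combo)

    tests = []
    while uncovered:
        # Greedily take the combination covering the most uncovered pairs.
        best = max(product(*values), key=lambda c: len(pairs(c) & uncovered))
        tests.append(best)
        uncovered -= pairs(best)
    return tests

# Three variables with three "interesting" values each: 27 full combinations,
# but every pair of values is covered by a much smaller suite.
suite = pairwise_tests({"salary": [29_999.99, 30_000.00, 30_000.01],
                        "months": [11, 12, 13],
                        "amount": [4_999.99, 5_000.00, 5_000.01]})
assert len(suite) < 27
```

The trade-off is explicit: pairwise guarantees every pair of values is exercised together, at the cost of not exercising every triple.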

Did we get it all wrong?

On Success Measure Vs Bug count and a brand new approach to building Successful products

Back from Potsdam (Germany), where I attended “Agile Testing Days”, I have now had 48 hours to reflect on what I saw and heard.

Gojko Adzic presented a concept that I believe could represent a paradigm shift not only in testing but in the whole software delivery approach.

[image: Agile testing quadrants]

He says that we all got it wrong when applying one of the quadrants of agile testing, because in quadrant Q3 we have been focusing on criticizing the product based on our internal understanding of how to build a successful product, paying little or no attention to the final customers’ opinion on whether the product is useful and successful or not.

To visualize this, Gojko came up with a model for software quality that mirrors Maslow’s hierarchy of needs, where the highest level in Maslow’s model (Self Actualization) corresponds to Successful in Gojko’s Software Quality Model. In this model the lower levels are a necessity for the upper ones to be relevant, i.e. if a product is not Deployable and Functionally OK, we should not care whether it is performant and secure, or whether it is useful, because obviously if we cannot deploy it, it won’t get the chance to perform and be useful; you get the idea.

Looking at the pyramid, we immediately realize that as a software delivery team we can only assure the 3 bottom levels; to assure our product is Useful and Successful we need feedback from the final customer. We must involve our final customers in the feedback loop on our products; only they will really know if our product is useful and only they can make it successful or not. Gojko goes one step further and says that when measuring the levels we can apply a different level of focus: maybe the bottom 2 levels should be delivered to be “good enough”, while moving up the pyramid we need to aim for “the more the better” as we get closer to Successful.

The most impressive part is yet to come, and it is basically Gojko’s approach to measuring the Successful bit of the pyramid. He introduced a strategic planning technique based on 4 questions that he named Impact Mapping. Gojko says: “An impact map is a visualisation of scope and underlying assumptions, created collaboratively by senior technical and business people“. In my opinion, the most revolutionary side of Gojko’s thinking is his focus on behaviour change. In the third question he asks: “How should our actors’ behaviour change?”. By focusing on this aspect we are able to visualize the impacts that we want to see as a result of our product/idea.

Using Impact Mapping we are able to visualize and test our assumptions on our path to success. By allowing assumption testing, Impact Mapping helps find the shortest and cheapest path to product success; not bad at all…

Impact Mapping is a brand new approach, and Gojko says he doesn’t know yet if it will apply to every area of software delivery; it is up to the community now to test it, define applicability boundaries, if any, and improve it. You can count on me, Gojko, I am up for it!

BTW, before you ask, yes I live in the real world.