What Are We Doing With AI and Machine Learning? February 12, 2016

Posted by Peter Varhol in Software development, Uncategorized.

When I was in graduate school, I studied artificial intelligence (AI) as a means of enabling computers to make decisions and identify images using symbolic computation and functional languages. It turned out that there were a number of things wrong with this approach, especially twenty-five years ago. Computers weren’t fast enough, and we were attacking the wrong problems.

But necessity is the mother of invention. Today, AI and machine learning are being used in what is being called predictive analytics.  In a nutshell, it’s not enough to react to an application failure.  Applications are complex to diagnose and repair, and any downtime on a critical application costs money and could harm people.  Simply, we are no longer in a position to allow applications to fail.

Today we have the data and analysis available to measure baseline characteristics of an application, and to look for trends in a continual, real-time analysis of that data.  We want to be able to predict whether an application is beginning to fail.  And we can use the data to diagnose just what is failing, so the team can work on fixing it before something goes wrong.

What kind of data am I talking about?  Have you ever looked at Perfmon on your computer?  In a console window, simply type perfmon at the command prompt.  You will find a tool that lets you collect and plot an amazing number of different system and application characteristics.  Common ones are CPU utilization, network traffic, disk transfers, and page faults, but there are literally hundreds more.

This is a Big Data sort of thing; a server farm can generate terabytes of log and other health data every day.  It is also a DevOps initiative.  We need tools that can aggregate and analyze the data, and present it in a format understandable by humans (at the top level, usually a dashboard of some sort).
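The baseline-and-deviation idea behind this kind of predictive analytics can be sketched in a few lines. This is only an illustrative sketch, not any particular monitoring product: it assumes metric samples (say, CPU-utilization percentages from Perfmon) have already been collected, builds a baseline from a known-healthy period, and flags readings that drift too far from it.

```python
import statistics

def build_baseline(samples):
    """Summarize a healthy period as a mean and standard deviation."""
    return statistics.mean(samples), statistics.stdev(samples)

def is_anomalous(value, baseline, sigmas=3.0):
    """Flag a reading more than `sigmas` standard deviations from the baseline mean."""
    mean, stdev = baseline
    return abs(value - mean) > sigmas * stdev

# Hypothetical CPU-utilization readings (percent) gathered during normal operation.
healthy = [22, 25, 24, 23, 26, 25, 24, 23, 25, 24]
baseline = build_baseline(healthy)

print(is_anomalous(24, baseline))  # a typical reading
print(is_anomalous(95, baseline))  # a spike worth investigating before it becomes an outage
```

A real system would do this continuously over a rolling window and across hundreds of counters, but the principle is the same: the baseline defines "normal," and sustained deviation is the early warning.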

How does testing fit in?  Well, we’ve typically been very siloed – dev, test, ops, network, security, etc.  A key facet of DevOps is to get these silos working together as one team.  And that may mean that testing has responsibilities after deployment as well as before.  Testers may establish the health baseline during the testing process, and may also be the ones to monitor that health during production.

Should Coders Learn to Test? July 8, 2015

Posted by Peter Varhol in Software development.

This topic occurred to me in response to another posting that asked if testers should learn to code, and is a follow-on to my previous post on the unique skill set that software testers possess. If we seriously question whether testers should learn to code (and most opinions fall on the ‘yes’ side of this question), then it is relevant to ask the corresponding question.

Wait a minute, I hear you thinking. That’s a stupid question. Coders already know how to test; they are coders, after all. Testing is simply a subset of coding, the act of making sure that the code that they have written works properly.

It’s nice to believe that coders know how to test because they know how to code, but that’s fallacious reasoning. First, testing is a different skill set entirely. Coding is highly detail-oriented and focused on making sure the code is syntactically and logically correct. Testing asks whether that code covers all requirements, is usable, and is fit for its intended purpose.

Second, while it’s a cliché that coders can’t test their own code, that doesn’t make it any less true. I’ve tested my own code, and I shy away from edge cases or unusual inputs. Testers bravely go where coders don’t.

Third, testers do much more than checking code for logic bugs. Because of their broad mandate encompassing correctness as well as domain expertise and information from end users, testers must be both detail-oriented and focused on the end result, yet able to be flexible in terms of their goals.

So should coders learn the skills of testers? The answer is, of course, it depends. We should all be learning additional skills, but we have only a finite amount of time, so there is a cost/benefit tradeoff for any professional in learning (and presumably practicing) a collateral skill. By learning how to test, coders necessarily reduce the time available to learn a new language, or learn continuous integration, for example.

But there are technical advantages for coders to learn testing. At the very least, it will make them more thoughtful coders. At best, it can help them write better code, as they become experienced with knowing what testers look for.

If coders do decide to learn to test, they have to give up certain biases to effectively learn the skill set. Here are just a few of those biases.

  1. Users would never do that! That’s simply not true. Users will do everything you can imagine and many things you can’t.
  2. It’s not a bug, it’s a feature. Stop arguing, and start working together to determine if it’s a feature needed by the users.
  3. Testers just slow us down. That may be true in some cases, but most of the time testers speed up the application delivery process. If they seem to slow you down, perhaps it’s because developers didn’t do their jobs right to begin with.

Probably the best way for coders to learn how to test is to do pair-testing with an experienced tester. Participating in testing with someone who already has the skill set will help a developer learn how to spot weaknesses in their code, how end users approach their tasks with software, and how to assess risks and determine testing strategies.

The goal isn’t to turn coders into testers, but to make them better coders. For many coders, that’s a skill worth having.

Software Testing is a State of Mind July 2, 2015

Posted by Peter Varhol in Software development.

I was prompted to write this by yet another post on whether or not testers should learn to code. While it gave cogent arguments on both sides, it (prematurely, I believe) concluded that coding is a fundamental skill for testers, and discussed how testers could develop their coding skills.

The reality is much more nuanced. There are different types of testers. An automation engineer is likely to be coding, or scripting, on a daily basis. A functional tester using an automated testing tool (commonly Selenium, or perhaps a commercial one) will write or modify scripts generated by the tool.

And in general, we try to automate repeatable processes. Often this can be done with customizable workflows in an ALM product, but there might be some amount of scripting required.

But while coding knowledge can improve a tester’s skill set, it’s not required for all roles. And sometimes it can detract from other, more important skills. That got me to thinking about the unique skill sets of testers. There are unique mental and experiential skills that testers bring to their job. The best testers intuitively recognize the skills needed, and work hard to develop them.

  • Curiosity. Good testers do more than execute test cases. They look for inconsistencies or incongruities, question why, and don’t stop looking for improvements until the software is deployed.
  • Logic and Discipline. Testers do very detailed work, and approach that work in a step-by-step logical fashion. Their thought processes have to be clear and sharp, and they have to move methodically from one step to the next.
  • Imagination. Testers understand the user personas and put themselves in their role. The best can work through the application as both a new and experienced user, and find things no one ever considered.
  • Confidence. Testers often have to present unpopular points of view. If they can do so while believing in their own skills and conclusions, while also taking into account differing points of view, they can be a successful voice for both the user and application quality.
  • Dealing with Ambiguity. It’s rarely clear what a requirement says, whether the test case really addresses it, whether an issue is really an issue, and what priority it is. Testers have to be ready to create a hypothesis, and provide evidence to support or reject that hypothesis.

These tend to be different skill sets than those possessed by coders. In particular, many coders tend to focus very narrowly on their particular assignments, because of the level of detail required to understand and successfully implement their part of the application. Coders also dislike ambiguity; code either works or it doesn’t, it either satisfies a requirement or it doesn’t. Computers aren’t ambiguous, so code has to produce a specific, well-defined result.

Coders may argue that they produce a tangible result: source code that makes an application concept a reality. The work product of testers is usually a bit more amorphous. Ideally, a tester would like to say that software meets requirements and is of high quality, in which case few defects will be found. If testers file many defects, it’s interpreted as negative news rather than a desired result.

But organizations can’t look at testers as second class participants because of that. Testers have a unique skill set that remains essential in getting good and useful software into the hands of users. I don’t think that skill set has been very well documented to date, however. And it may not be appreciated because of that.

What is the Deal with Self-Driving Cars? June 23, 2014

Posted by Peter Varhol in Software development, Technology and Culture.

Google, the media, and other interested parties are portraying self-driving cars as a panacea for drivers, traffic congestion, accidents, and other undesirable driving outcomes. I simply don’t get it, on multiple levels. I like the concept, but can’t connect it to any reasonable reality anytime in the future.

I’ve suspected that there would be issues with self-driving cars since they became a popular meme over the past year. At one level, there is the question of how you would test the technology. In normal system testing, you attempt to run tests that simulate actual use. But there are far too many possible scenarios for self-driving cars to reasonably test. Under other circumstances, it may be possible to test the most likely cases, but on a safety-critical system like a car, that’s simply not possible.
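To get a feel for that scenario explosion, multiply out even a handful of coarse driving factors. The factor names and counts below are hypothetical, chosen only to illustrate how quickly combinations grow:

```python
from math import prod

# Hypothetical coarse factors a driving test plan might vary;
# real-world driving varies far more dimensions, and continuously.
factors = {
    "speed_band": 6,
    "weather": 5,
    "road_type": 7,
    "traffic_density": 4,
    "pedestrian_activity": 4,
    "lighting": 3,
}

total = prod(factors.values())
print(total)  # over ten thousand combinations from just six coarse factors
```

And each combination is itself a scenario family that would need many concrete test cases, which is why exhaustive testing of a safety-critical driving system is so daunting.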

I’m reminded of my skepticism by this article on the utility of aircraft autopilot systems and their role in the operation and in some cases mis-operation of planes. One conclusion seems to be that autopilots actually make flying more complex, rather than simpler. That counterintuitive conclusion is based on the idea that the assumptions made by the autopilot are unexpected by the operators.

As a software guy, I’m okay with the idea that assumptions made by software can take people by surprise on occasion. It’s a difficult problem even for safety-critical systems, where people can die if the software makes an incorrect assumption. You can argue, probably successfully, that pilots shouldn’t be surprised by whatever a plane under their command does.

Drivers, not so much. As we look at aircraft autopilots, it is reasonable to draw a parallel between commercial aircraft and automobiles. Now, granted, aircraft operate in three dimensions. But automobiles have a greater range of operating options, in terms of speed, traffic, road types, road conditions, and so on. Commercial aircraft are already under positive control from the ground.

It’s not clear who will control driverless automobiles. Drivers are certainly unlikely to be as attentive as pilots, yet they will at times become at least as confused, as they change where they want to go and how they want to get there. And they won’t be observing the driving process anywhere near as attentively as (I hope) pilots do.

Sigh. I’m not a Luddite. I’m excited about technology in general, and am an early adopter of many technologies (and, to be honest, a not-so-early adopter of others). But I simply don’t see self-driving automobiles taking off (pun intended) anytime in my lifetime.

Mindsets and Software Testing May 18, 2014

Posted by Peter Varhol in Software development, Strategy.

All of us approach life with certain attitudes and expectations. These attitudes and expectations can affect how we perform and what we learn, both now and in the future.

According to researcher Carol Dweck, there are two fundamental mindsets – fixed and growth. A person with a fixed mindset feels the need to justify their abilities at every possible opportunity. If they succeed, it reaffirms their status as an intelligent and capable person. If they fail, it is a reflection upon their abilities, and there is nothing to be learned from it.

A person with a growth mindset recognizes that chance, opportunity, and skill all play a role in any activity in which they engage. Success can be due to a number of factors, of which our intelligence and abilities play only a part.   More important, we can improve our abilities through failure, by not taking it personally and by delving into the lessons learned.

It’s important to understand that mindset is a continuum; few if any of us are entirely one or the other, and in some circumstances we may lean more one way than in others. I can personally attest to this through my own experiences.

This has a couple of implications for software development and testing. First, it means that we will almost certainly make mistakes. But how we respond to those mistakes is key to our futures. We can be defensive and protective of our self-perception, or we can learn and move on.

Second, and perhaps more important, is that failing at a project, creating buggy code, or failing to find bugs isn’t a reflection on our intelligence or abilities. At the very least, it’s not something that can’t be corrected. If we are willing to grow from it, we might recognize that our work is a marathon, rather than a sprint.

It also has implications for technical careers in general. I’ve failed more times than I would like to count. I’ve also succeeded many times. With a fixed mindset, I’m not sure where that leaves me. Charitably, I’m almost certainly a screw-up. With a growth mindset, I’m right where I want to be. I’ll leave the answer to that question as an exercise for the reader.

Cognitive Bias and Regression to the Mean April 29, 2014

Posted by Peter Varhol in Software development.

We prefer to assign causality to events in our own lives, and in the world in general. If something positive happens, we tend to credit our intelligence, or dedication, or some other quality. If negative, we often blame others, or perhaps blame our own failings. Every day when the stock market closes, we read about how stocks have gone up or down for some perfectly understandable reason.

Bull. Most things in our lives and in the world don’t happen for a reason, or at least any reason we can readily identify. Our good fortune may be only peripherally related to our intelligence or abilities, and our bad fortune may simply arise from being in the wrong place at the wrong time.

Regression to the mean is simply one example of our need for causality, and how it results in bias. If we perform exceptionally well, we come to believe our own press releases, and behave as though we are high achievers. We might well be, but achievement is a slippery thing; it might disappear in a heartbeat.

Regression to the mean is a statistical concept. It simply notes that an exceptional result is, by definition, unusual; subsequent results are more likely to be closer to the average. It’s a concept often found in the natural sciences. For example, I am taller than either of my parents, so it is likely that my children (if I had any) would be shorter than me, since my height is an exceptional result within my family line.

Applied to our lives, regression to the mean refers to the fact that what we do is a combination of skill and luck. We have little idea how much is skill, and how much luck. When we do exceptionally well at a task, we tend to attribute that to skill. When we do poorly, we often blame bad luck. In reality, exceptional performances usually owe a great deal to random (and rare) chance.

You can certainly argue that such a statistical concept doesn’t really apply to individual efforts, but I think the general principle holds. Sometimes we simply do better than other times, and it’s not clear that it reflects skill any more than (good or bad) luck.
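The statistical claim is easy to demonstrate with a small simulation. In this sketch (assumed numbers, not data from any study), every individual has identical skill and performances differ only by luck; the top performers on the first trial land much closer to the population average when measured again:

```python
import random

random.seed(42)

def performance(skill):
    """One observed performance: fixed skill plus random luck."""
    return skill + random.gauss(0, 10)

# A population with identical skill; only luck separates their results.
people = [50.0] * 1000
first = [performance(s) for s in people]

# Take the top 10% on the first trial and measure them a second time.
cutoff = sorted(first, reverse=True)[len(first) // 10]
top_indices = [i for i, p in enumerate(first) if p > cutoff]

first_avg = sum(first[i] for i in top_indices) / len(top_indices)
second_avg = sum(performance(people[i]) for i in top_indices) / len(top_indices)

print(round(first_avg, 1))   # well above 50: skill plus good luck
print(round(second_avg, 1))  # close to 50: the luck doesn't repeat
```

The "high achievers" didn’t get worse on the second trial; their first result simply included good luck that chance rarely delivers twice.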

Applied to software development and testing, regression to the mean gives us preconceived notions of the performance of the software based on who works on it. Our past experiences with the team members, and their perceived superiority or inferiority, lead us to believe certain things about the software before we ever examine it.

The Role of Heuristics in Bias April 24, 2014

Posted by Peter Varhol in Software development.

A heuristic is what we often refer to as a “rule of thumb”. We’ve experienced a particular situation on several occasions, and have come up with a step-by-step process for dealing with it. It’s purely System 1 thinking in action, as we assess the situation and blindly follow rules that have worked for us in the past.

And heuristics are great. They help us make decisions quickly in situations we’ve experienced before. But when a situation only appears similar, and is really different, applying our heuristic can have a very bad effect.

Here’s a real-life example. Years ago, I took flying lessons and obtained my pilot’s license. One of the lessons involved going “under the hood”. The hood is a plastic device that goes over your head. When the hood is down, you can’t see anything. When the hood is raised, you can see the instrument panel, but not outside of the plane.

While the hood was down, the instructor pilot in the right seat put the plane into an unusual situation. That might be a bank, or a stall, or something that was unsustainable. When he raised the hood, I was required to use the instrument panel to analyze and diagnose the situation, and recover from it.

After several of these situations, I had developed a heuristic. I looked first at the turn and bank indicator; if we were turning or banking, I would get us back on course in straight flight. Then I would look at the airspeed indicator. If we were going too slow, I could lower the nose or advance power to get us back to a cruise speed.

This heuristic worked great, and four or five times I was able to recover the aircraft exceptionally quickly. I was quite proud of myself.

But my instructor figured out what I was doing, and the next time I applied my heuristic, it seemed to work. But I was fighting the controls! It wasn’t straight and level flight. I started scanning other instruments, and discovered that we were losing over a thousand feet a minute.

At that point, my heuristic had failed. But I wasn’t able to go back and analyze the situation from scratch. My mind froze, and if it weren’t for the instructor pilot, we might well have crashed.

The lesson is that when your heuristic fails, you may be worse off than if you had started the analysis from the beginning. You may simply not be able to go back and start over.

Applying Cognitive Bias to Software Development and Testing April 21, 2014

Posted by Peter Varhol in Software development.

Through his psychology research, Daniel Kahneman (with his collaborator Amos Tversky) demonstrated that we are not rational decision-makers. We make irrational decisions all the time, decisions that most definitely don’t optimize our expected utility. He demonstrated this well enough that he was awarded the Nobel Prize in Economics.

Beyond economics, we exhibit the same behavior in other aspects of our lives, including our professional lives. Let’s take software testing as an example. We may have preconceived notions of how buggy a particular application is, and that will likely affect how we test it. We may have gotten that notion from previous experience with the development team, or from an initial use of a previous version of the software.

As a result of those preconceived notions, or biases, we are likely to plan and execute our work, and evaluate the results, differently than if they didn’t exist. If our prior experiences with the team or the software were negative, we may be overly harsh in our assessment of the software and its perceived flaws. If our experiences are positive, we may be willing to give questionable characteristics a free pass.

Lest it sound like this is a conscious decision on our part, let me say right now that it almost always is not. It never occurs to us to think that we are biased. If we think of it at all, we believe the bias is a good thing, because it puts us on alert for possible problems, or gives us a warm fuzzy feeling about the quality or fitness of the application.

Bias can be a good shortcut to the correct or optimal decision. More often, it leads us to analyze a situation poorly and make an incorrect or less-than-ideal decision. Even if it might result in a good outcome, it’s incumbent on each of us to realize when we are being influenced by our own beliefs, and to question those beliefs.

We tend to think of software development and testing as highly analytical and scientific endeavors, but the fact is that they are both highly subjective and social. We work in close-knit teams, and the decisions are highly situational based on the exact circumstances of the problem. We tend to overestimate our individual and group abilities, and underestimate the complexity of the problems to be solved.

Further, we tend not to learn relevant lessons from past experiences, instead remaining overly optimistic, often in the face of a great deal of evidence to the contrary.

In subsequent postings, let’s take a look at some of the specific biases, how they affect our work, and how we can recognize and compensate for them.
