Cognitive Bias in Software Testing

A cognitive bias is a pattern of deviation in judgment that occurs in particular situations, leading to perceptual distortion, inaccurate judgment, illogical interpretation, or what is broadly called irrationality. Implicit in the concept of a “pattern of deviation” is a standard of comparison with what is ‘normally’ expected; this may be the judgment of people outside those particular situations, or a set of independently verifiable facts. In well-run software development projects, the mission of the test team is not merely to perform testing, but to help minimize the risk of product failure. Testers look for manifest problems in the product, potential problems, and the absence of problems. They explore, assess, track, and report product quality, so that others in the project can make informed decisions about product development. It is important to recognize that testers are not out to “break the code,” to embarrass, or to complain, but simply to inform, acting as meters of product quality. The ANSI/IEEE 1059 standard defines testing as the process of analyzing a software item to detect the differences between existing and required conditions (that is, defects/errors/bugs) and to evaluate the features of the software item. The purpose of testing is verification, validation, and error detection in order to find problems – and the purpose of finding those problems is to get them fixed.

In the traditional software development model – often called the waterfall model – everything flows through a process, and each step starts after the previous one ends. Testing sits at the end of that process and is often regarded as less than essential. In software development companies, therefore, the testing that happens in-house is more often than not seen more as a formality than as a part of the project flow. It is billable, and some development organizations committed to systems and processes try to do a good job of it in the name of QA, aiming to deliver a defect-free product in the first release. But it is the nature of the beast that there can never be only one version of a product. Things change, and some change for the better: the operating system undergoes upgrades in the way it works, and the application needs to be updated to incorporate what is newly built, calling for a new round of development and a fresh layer of testing – and the process often repeats itself.

But can people in the same organization, who are organized in a construct towards getting the release out quickly and fairly defect-free, be the best judge of quality? In a seminal study on human–computer interaction, “Positive test bias in software testing among professionals: A review” by Laura Marie Leventhal, Barbee M. Teasley, Diane S. Rohlman and Keith Instone of the Computer Science Department, Bowling Green State University, Ohio, the researchers found ample evidence that testers have positive test bias. This bias is manifest as a tendency to execute about four times as many positive tests, designed to show that “the program works,” as negative tests, which challenge the program. The studies they cited found that the expertise of the subjects, the completeness of the software specifications, and the presence or absence of program errors may reduce positive test bias. Skilled computer scientists invent specifications to test against in the absence of actual specifications, but still exhibit positive test bias.
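The four-to-one skew that the study describes is easiest to see in concrete test code. The sketch below is purely illustrative – the `parse_age` function and its limits are invented for this example, not taken from the study – but it shows the difference between positive tests, which confirm that “the program works,” and the negative tests that a positively biased suite tends to leave out.

```python
def parse_age(text):
    """Parse a non-negative integer age (0-150) from a string.
    A hypothetical function invented purely to illustrate test design."""
    value = int(text.strip())
    if not 0 <= value <= 150:
        raise ValueError("age out of range")
    return value

# Positive tests: confirm expected behaviour on valid input.
# A positively biased tester writes mostly cases like these.
assert parse_age("42") == 42
assert parse_age(" 7 ") == 7

# Negative tests: deliberately challenge the program with inputs
# that should be rejected - the cases a biased suite omits.
for bad in ["-1", "200", "forty-two", ""]:
    try:
        parse_age(bad)
    except ValueError:
        pass  # expected: the program rejects the invalid input
    else:
        raise AssertionError(f"parse_age accepted invalid input {bad!r}")

print("all positive and negative tests passed")
```

Note that each negative test probes a different way the hypothesis “the program works” could be false: out-of-range values, non-numeric text, and empty input.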

Another study, “Confirmation Bias in Software Development and Testing: An Analysis of the Effects of Company Size, Experience and Reasoning Skills” by Gul Calikli, Berna Arslan and Ayse Bener of the Department of Computer Engineering, Software Research Laboratory, Bogazici University, Turkey, reached a similar conclusion. Results showed that regardless of experience and company size, abilities such as logical reasoning and strategic hypothesis testing are the differentiating factors in low confirmation bias levels, and that education and/or training programs emphasizing mathematical reasoning techniques are useful for producing high-quality software. To investigate the relationship between code defect density and the confirmation bias of software developers, the researchers performed an experiment among developers involved in a software project at a large-scale telecommunications company and analyzed the effect of confirmation bias during the software testing phase. Their results showed a direct correlation between confirmation bias and the defect proneness of the code. Their concluding summary shows that there is no significant relationship between software development or testing experience and hypothesis testing skills. Experience did not play a role even in familiar situations, such as problems about the software domain. The most striking difference was found between the group of graduate students and the software developers and testers of the companies in terms of abstract reasoning skills. The fact that the students scored better on software-domain questions, although most of them had less software development and testing experience, indicates that abstract reasoning plays an important role in solving everyday problems. It is highly probable that theoretical computer science courses strengthened their reasoning skills and helped them acquire an analytical and critical point of view.

Hence, we can conclude that confirmation bias is most probably affected by continuous use of abstract reasoning and critical thinking. Company size was not a differentiating factor in abstract reasoning, but differences in hypothesis-testing behavior were observed between two groups of companies grouped according to their sizes. The large company performed better in the interactive test, yet the group of students outperformed it on both tests.

This led to the conclusion that hypothesis-testing skills were better in the group of students, and that there is a relationship between confirmation bias and continuous use of, and training in, logical reasoning and critical thinking. Herein lies the relevance of current-day trends like crowd-sourced testing: they are structured attempts at bringing such independent judgment to bear in real time over larger and wider deployments.

There are several kinds of biases that the average human is exposed to and commits, but in the business of software testing each one poses its own challenges, and the astute tester must watch for it and compensate for it in the test design. Making testing an independent function, outsourced separately from application development, is therefore strategically very important. A brief listing of some of these biases follows, with observations from some experienced thought leaders in the independent software testing community.

(1.) Observational Bias happens when one looks only where one expects to find positive results, or where it is easy to record observations – a little like searching for something lost only under the streetlight! Darren McMillan, an independent software testing consultant from Glasgow, in his Requirements Analysis & Testing Traps, rightly points out the danger of having visual references (wireframes) at a very early stage in the project lifecycle: they can draw your attention away from something more fundamental in the text of the requirements themselves.

(2.) Reporting Bias – a tendency to under-report unexpected or undesirable experimental results, attributing them to sampling or measurement error, while being more trusting of expected or desirable results, though these may be subject to the same sources of error. Over time, reporting bias can lead to a status quo where multiple investigators discover and discard the same results, and later experimenters justify their own reporting bias by observing that previous experimenters reported different results. A valuable piece of information can be skewed to make a problem seem less severe (e.g. “<1% of our customer base uses *that* browser, so we can’t do XYZ”).

(3.) Survivorship Bias is a type of selection bias: the logical error of concentrating on the people or things that “survived” some process and inadvertently overlooking those that didn’t because of their lack of visibility. This can lead to false conclusions in several different ways. The survivors may literally be people, as in a medical study, or could be companies, research subjects, or applicants for a job – anything that must make it past some selection process to be considered further. Survivorship bias can lead to overly optimistic beliefs because failures are ignored, such as when companies that no longer exist are excluded from analyses of financial performance. It can also lead to the false belief that the successes in a group have some special property, rather than being just lucky. For example, if three of the five students with the best college grades went to the same high school, that can lead one to believe that the high school must offer an excellent education.

(4.) Confirmation Bias is the tendency of people to favor information that confirms their beliefs or hypotheses; people display this bias when they gather or remember information selectively, or when they interpret it in a biased way. People also tend to interpret ambiguous evidence as supporting their existing position. Biased search, interpretation, and memory have been invoked to explain attitude polarization, the irrational primacy effect (a greater reliance on information encountered early in a series), and illusory correlation (when people falsely perceive an association between two events or situations). This particular bias is the big daddy of all biases, since it has so many variations. Michael Bolton, an independent software testing consultant from Toronto, Canada, Principal of DevelopSense, and co-author (with James Bach) of Rapid Software Testing, provides some really useful tips for escaping confirmation bias in his book.

(5.) Anchoring Bias, or focalism, is a term used in psychology to describe the common human tendency to rely too heavily, or “anchor,” on one trait or piece of information when making decisions. During normal decision making, individuals anchor, or overly rely, on specific information or a specific value and then adjust from that value to account for other elements of the circumstance. Usually, once the anchor is set, there is a bias toward that value. Michael D. Kelly, a testing veteran from Indiana in the US, talks about simply sketching out a schematic of sorts and talking through his ideas (not necessarily solutions). It could simply be a “talk it through with your mates” heuristic.

(6.) Congruence Bias occurs due to people’s over-reliance on direct testing of a given hypothesis and their neglect of indirect testing; it is a kind of confirmation bias, as mentioned earlier. Pete Houghton, a contract tester at the Financial Times, London, opines on the arrogance of regression testing: “We stop looking for problems that we don’t think are caused by the new changes,” claims Pete. And there are many others, such as Automation Bias, Assimilation Bias, etc. There are quite a lot of cognitive biases out there, and you may wonder how testers even get out of the starting blocks with so many possible ways for their judgment and work to be skewed.
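Congruence bias has a classic laboratory illustration, Wason’s 2-4-6 task, which maps neatly onto testing. The sketch below is an invented illustration (both rules are hypothetical, not from the article): every direct test chosen to fit the tester’s hypothesis “confirms” it, while a single indirect test – one the hypothesis predicts should fail – exposes the hypothesis as too narrow.

```python
def hidden_rule(triple):
    """The true (hidden) rule: any strictly ascending triple."""
    a, b, c = triple
    return a < b < c

def hypothesis(triple):
    """The tester's narrower hypothesis: ascending EVEN numbers."""
    a, b, c = triple
    return a < b < c and all(x % 2 == 0 for x in triple)

# Direct tests: triples chosen because they fit the hypothesis.
# They all agree with the hidden rule, so a congruence-biased
# tester wrongly treats the hypothesis as confirmed.
direct = [(2, 4, 6), (10, 20, 30)]
assert all(hidden_rule(t) == hypothesis(t) for t in direct)

# Indirect test: a triple the hypothesis predicts should FAIL.
# It passes the hidden rule, falsifying the narrow hypothesis.
indirect = (1, 3, 5)
assert hypothesis(indirect) is False
assert hidden_rule(indirect) is True
print("indirect test falsified the hypothesis")
```

The design point is that only tests expected to fail under the current hypothesis can distinguish it from a broader rule – which is exactly the kind of test a biased regression suite stops writing.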

About Soumya
A technology enthusiast, forever enamored by all that it hath wrought and of course here is an attempt at making sense of it all and perhaps simplifying it!
