This is number two in my list of commandments for testers.
It probably goes without saying that actually performing testing will be your main job function. Talking about testing, thinking about testing, planning your test process – these are all very important and very useful things to do, but the primary reason for your continuing pay packet is raising bugs and highlighting issues. In other words, go forth and break software.
Thou shalt seek the better bug
So, you’ve found a problem. It’s a minor thing – perhaps a screen widget fails to reject an invalid entry. But how deep does the rabbit hole go? Does the application blindly store the invalid value? Is that same dodgy value then retrieved by another process or business function and used somewhere it really shouldn’t be? Is there an external symptom, like a somewhat alarming error in the system log? Go the whole hog – can you make the application crash?
You are not Peter Sinclair; never take an application’s first answer. Find out how much more value you can get out of the bug before you raise it. If nothing else, it provides you with a lot of supporting information and scenarios that can be provided to the development team.
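To make the idea concrete, here is a minimal sketch in Python. All the names (`save_discount`, `monthly_report`) are hypothetical stand-ins, not any real application: the point is how a single missing validation check can be chased from the widget, into storage, and out through a downstream consumer.

```python
# Hypothetical application code: one small validation bug, chased deeper.

def save_discount(db: dict, value: str) -> None:
    # The bug under test: no validation, so "-50" or "abc" is stored blindly.
    db["discount"] = value

def monthly_report(db: dict) -> float:
    # A downstream consumer that assumes the stored value is a sane percentage.
    return 100.0 * (1 - int(db["discount"]) / 100)

db = {}
save_discount(db, "-50")        # invalid entry accepted: a minor bug
stored = db["discount"]         # ...and stored verbatim: worse
try:
    total = monthly_report(db)  # ...and consumed downstream: worse still
    print(f"report total: {total}")  # a -50% "discount" inflates the bill
except ValueError:
    print("report crashed")     # non-numeric junk would crash it outright
```

Each step down that chain turns the same root cause into a more persuasive report: "widget accepts -50" is easy to ignore; "customers can be overbilled" is not.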
Thou shalt make thy bug a prime target
Assuming that the organisation you work for is a rational one, a small bug ain’t gonna get fixed unless someone cares about it. It is your job to find out why someone should care, and to demonstrate the possible knock-on impacts; it’s almost always better to be able to raise a serious bug than a minor one, and a serious bug warrants more attention by far.
Of course, you have a similar responsibility to reality. If your discovered issue doesn’t actually affect anything, this should be writ large in your report. Again, assuming that your organisation is rational, you aren’t paid on the severity of the issues you raise, so be sensible.
Thou shalt apply diverse half-measures
There is another important heuristic to be followed when you’re testing software: the rule of diverse half-measures. Essentially, this says that it’s better to do many different kinds of testing, each to a pretty good level, than it is to do one or two kinds of testing perfectly.
"This strategic principle derives from the structured complexity of software products. When you test, you are sampling a complex space. No single test technique will sample this space in a way that finds all important problems quickly. Any given test technique may find a lot of bugs at first, but the find-rate curve will flatten out. If you switch to a technique that is sensitive to a different kind of problem, your find rate may well climb again. In terms of overall bug-finding productivity, perform each technique to the point of sufficiently-diminished returns and switch to a new technique.
Diversification has another purpose that is rooted in a puzzle: How is it possible to test a product for months and ship it, only for your users to discover, on the very next day, big problems that you didn’t know about? A few things could cause this situation. A major cause is tunnel vision. It wasn’t that you didn’t test enough; it was that you didn’t perform the right kind of test. We’ve seen cases where a company ran hundreds of thousands of test cases and still missed simple obvious problems, because they ran an insufficient variety of tests." – the Bible
Even though this heuristic is obviously a deep and considered approach to scalable and effective test management, you may find it useful to glibly sum it up in a way that is more familiar to programmers: "we didn’t expect the user to do that."
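The find-rate argument above can be illustrated with a small Python sketch. The function under test (`average`) and both test techniques are hypothetical examples of mine, not from the quoted text: a handful of hand-picked example tests all pass, while a cheap random-input sweep – a different kind of testing aimed at the same code – samples a corner the examples never reach.

```python
# A sketch of the diverse half-measures idea: two techniques, same target.
import random

def average(xs):
    # Hypothetical code under test: breaks on the empty list.
    return sum(xs) / len(xs)

# Technique 1: a few hand-picked example-based tests. All pass.
assert average([1, 2, 3]) == 2
assert average([-5, 5]) == 0

# Technique 2: a random-input sweep that includes zero-length inputs.
random.seed(0)
failures = []
for _ in range(200):
    xs = [random.randint(-10, 10) for _ in range(random.randint(0, 5))]
    try:
        average(xs)
    except ZeroDivisionError:
        failures.append(xs)

# The second technique finds what the first never sampled: the empty list.
print(f"random sweep found {len(failures)} failing inputs")
```

Neither technique is exhaustive, and neither needs to be; together they cover more of the input space than either one pushed to perfection.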