WeTest presentation: test planning in the real world

Keep NZ Green! Or something.

I recently hosted the third WeTest presentation, where we had a really jolly good discussion on test planning, the real world, and bending processes to your whims. Or, as the blurb put it:

This session will cover test planning in non-lean environments, and planning for the real world instead of planning for a process checkbox. Damian will speak about his experience with trying to perform meaningful test planning within the constraints of a rigid testing process, test plan template design and dissemination, and finding out "why you’re testing" before you get to the "what you’re testing".

I’ll put the body of my experience report below, since I was told I talk too much and therefore cut most of it out on the night. If the workshop was anything to go by, I figure there’s at least another few hours of thoughtful discourse to go on this subject (hot topics: specificity of test planning, semantics of the word strategy, and the dreaded spectre of assumptions).

Surprisingly, the usual suspects don’t have anything additional to add on their own sites, but the post-match discussion at the WeTest forum and elsewhere was robust and lively. Thanks to all the attendees and participants for making this such an entertaining and thought-provoking evening.

The WeTest workshops are only possible due to our generous sponsors, Assurity, who provide the premises, pizza, and beer. Thanks! I’ve written a report of the event for them, and you can find it over at their site.


Introduction

Over the last fifteen years, I’ve done a lot of test planning. I’ve produced test plans that were a few bullet points in an email, and I’ve – regretfully – produced test plans that were 50 pages long when they were just a template. So, some thoughts I have come to after planning approximately a million projects’ testing…

Working within rigid frameworks

There are a lot of people who do not want to think in their test planning process, and fortunately for them there are also a number of managers who believe that thought is unnecessary and wasteful. A test planning process that is rigid, locked-down, and consists largely of boilerplate is seen as desirable and efficient by these folk; it is easily controlled and looks repeatable on paper. Naturally, I could not disagree more strongly with this, but I genuinely don’t believe that these people are at all interested in change. So, when your hands are tied, a lot of what you’re able to achieve will be minor victories won by bending the rules, or snuck in under the radar while still fulfilling the capital-P Process.

When producing test strategies under such rigid frameworks, there are two ways of achieving a useful outcome. One option is to bootstrap the test plan: put the bare minimum of information and scoped testing in, and don’t mention anything at all outside that defined scope. This puts the onus of discovery squarely on other people – most of whom won’t read the document anyway – freeing you to develop and explore your tests around the areas you need to focus on. However, this can lead to situations in which you genuinely haven’t considered something important, and this can be bad in highly regulated or mandated environments.

So, my preferred approach is to take an explicit and exhaustive scope of work – say, gleaned from previous projects or other reference information – and include absolutely everything in the test planning process, marking each item as in or out of the scope of testing. This allows you to specifically examine and dismiss the chaff while keeping a record that it has at least been considered. Naturally, this approach can still miss things, and it makes documents tend towards being large with a low signal-to-noise ratio, but it is an effective way to manipulate a rigid process into achieving a thoughtful outcome.
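
As a concrete illustration, here’s a minimal sketch of such an explicit scope register – the areas, decisions, and reasons are all invented for the example, and in practice this would be a table in your test plan rather than code:

# A minimal sketch of an explicit scope register: every known area is
# listed, marked in or out of scope, and given a reason, so the document
# records what was considered and dismissed. All items here are invented.

from dataclasses import dataclass

@dataclass
class ScopeItem:
    area: str
    in_scope: bool
    reason: str

scope = [
    ScopeItem("Login and session handling", True, "changed in this release"),
    ScopeItem("Report rendering", True, "new output engine"),
    ScopeItem("Legacy CSV export", False, "unchanged; covered by existing regression suite"),
    ScopeItem("Third-party payment gateway", False, "vendor-certified; excluded by contract"),
]

# Render the register so reviewers can see every decision at a glance.
for item in scope:
    status = "IN " if item.in_scope else "OUT"
    print(f"[{status}] {item.area} – {item.reason}")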

It’s probably important to note here that I don’t personally believe you have to write actual documentation, formal or otherwise, in order to do good test planning. Let me read you a quote from Kaner and friends, from a book I can’t recommend strongly enough:

There’s a lot of testing literature that says, in essence, “you can’t do testing well without a written test plan.” In our experience, the main positive effect of that advice has been better job security for the paper and toner manufacturers. There are too many badly written plans. And we’ve seen a lot of good testing done without following written plans. It’s time for better advice.

Kaner et al, Lessons Learned

They are, perhaps unsurprisingly, correct. However, for the purposes of this discussion, I’m going to assume that you, like me, are operating within rigid frameworks that don’t comprehend not having documentation; if you’re not, I’d love to hear from you later! One reasonable argument for at least using a common template as a starting point – even if you snip most of the sections out and change the others so there’s only one page of actual content – is that one objective of a rigid framework is not to frighten anyone, and one of the best ways to avoid frightening people is to provide consistent-looking documents time after time.

I also find that prescribed processes tend towards conflating the length of a document with its apparent usefulness. I’m sure at some point we’ve all produced 50-page test plans or 400-page test case documents, and received the customary gasps over their girth. However, if we’re trying to achieve excellence in our planning, we need to make sure the documents are useful and are actually going to be read. When you have master test plans and detailed test plans or fifty-page templates, you are heading down the path of creating documentation because you have been told to create documentation; no-one will read or care about this document until the project goes poorly, when everything you wrote will be pulled out and discussed at length. And one of the reasons the project went poorly is because no-one reviewed the test planning, apart from the poor bastard doing it.

Let’s have a William Morris line that I try to live by:

Have nothing in your houses that you do not know to be useful or believe to be beautiful.

A smaller document is a fundamentally better, more readable, and more easily reviewable document. Extra words add nothing, and pull focus from the actual meaning. Another advantage of smaller documents is that I think test planning is an inverted pyramid of importance versus detail: the important stuff, light on detail, holds everything else up, and detail can be added on top to strengthen it. Most details are unimportant during test planning, but there will always be a few that change from project to project, and those are the ones worth capturing.

In a similar vein, don’t repeat yourself, and don’t repeat other people. Reference other documents for unchanging information, such as defect management processes and acceptance criteria. Most templates I have worked with were particularly bad in this regard, which led to redundancy at best and outright conflicts of information at worst. Copy and paste can work between projects with careful consideration, but if you’re copying and pasting information within documents of the same project with the same audience, there is a terrible problem somewhere. Similarly, blindly parroting a list of items that someone else came up with is not a useful way of thinking about your own test strategy.

One technique that I forced upon my poor testers was to include a project overview section in the test strategy template, in which I would ask them to write their understanding of the project and its aims and constraints. Although this made a lot of people dislike me, it was an extremely useful tool for driving out misunderstandings and information gaps early in the test process, particularly when the tester wrote in plain English and produced their own diagrams. However, this sort of activity costs a day or more on every project, and is hard to justify if you’re in an environment that just expects testers to check boxes and move on.

Also, when you’ve come up with a test strategy that is different from what people may be expecting, it’s always best to fling it out against the wall as soon as possible to see what sticks. The other interested parties in your test effort will very likely have hidden expectations of what you should be including, and you either need to add that information early on or start preparing a reasonable explanation of why you won’t, which brings us neatly to…

Find out why you’re testing before deciding what you’re testing

I recently saw a testing group on LinkedIn – don’t judge me – discussing what their first step would be upon starting a new project. It’s important to note that there was no other context given – it was simply what your first action would be in some hypothetical project. The responses were varied, to say the least. One tester said, straight off the bat, that the first thing to do would be to write the test cases. Then they thought about it for a few more minutes – a good start – and came back with this list:

Bear in mind that they knew not a single thing about the project and its audience, its platform, the software interaction, or even its purpose, yet they assumed that font fallback was going to be so important that it should be one of their five key testing objectives. "Big deal," I hear you think, "they just assumed it was a web app, and in the real world they’d have more knowledge." That’s fair enough. The second person to chime in, then, was specific straight away, asking "who is the primary stakeholder?" On the face of it, that’s a perfectly sensible question. Except again, it’s not; it’s charging straight at the problem with blinkers on. Who says there’s only one primary stakeholder? I bet every stakeholder would have a different opinion on that matter!

James Whittaker made this very point:

Assumptions are a very bad thing for software testers. Assumptions can reduce productivity and undermine an otherwise good project. Assumptions can even undermine a career. Good testers can never assume anything. In fact, the reason we are called testers is that we test assumptions for a living. No assumption is true until we test and verify that it is true. No assumption is false until we test that it is false.

Any tester who assumes anything about anything should consider taking up development for a career. After all, what tester hasn’t heard a developer say "Well, we assumed the user would never do that!" Assumptions must always be tested. I once heard a test consultant give the advice: "Expect the unexpected." With this I disagree; instead, expect nothing, only then will you find what you seek.

Whittaker, stickyminds.com

This is true throughout the entire test planning process. You can’t assume you know anything for certain about internal or external expectations or outcomes unless you confirm them. In most cases, this will mean validating your thinking and plans against documents, but there is generally no substitute for opening the communication lines and going directly to the likely people. This is particularly important if the customer is external to your company or you don’t have experience in the problem domain.

A word of warning, though: it’s been my experience that by actively seeking out and engaging stakeholders, you increase two major political risks:

Cautions aside, collecting test requirements can help you to answer a couple of fundamental questions very early on in your test planning process. You will have to forgive me for plagiarism in the next few minutes, because frankly Kaner et al nailed it in Lessons Learned; I absorbed these lessons years ago and they have become a tremendous resource in effective planning, and gilding a lily is just senseless. So anything idiotic I say from here on is entirely my fault, and anything brilliant you can just assume comes straight from the minds of Bach, Kaner, and Pettichord.

So, fundamental questions:

Because of all of this, I have found that a semi-formal Test Requirements layer is immensely useful. On the face of it, this can be seen as more needless bureaucracy and just another bloody document to create and get agreed, and you can certainly approach it that way and write something like "we need to verify the quality of the release". But unless you’re working at the most cookie-cutter, CMM-5 organisation, the test requirements you uncover might surprise you, on every single project.
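
To make that less abstract, here’s an invented, minimal sketch of what a test requirements layer can capture – the stakeholders, concerns, and implications below are made up for illustration:

# A deliberately lightweight "test requirements" layer: each entry records
# who cares, what they are worried about, and what that implies for the
# test strategy. None of these entries come from a real project.

test_requirements = [
    {
        "stakeholder": "Operations manager",
        "concern": "The nightly batch window must not grow",
        "implication": "Performance-test the batch jobs against production-sized data",
    },
    {
        "stakeholder": "Support team lead",
        "concern": "Most support calls are about confusing error messages",
        "implication": "Explicitly review error paths and message wording",
    },
    {
        "stakeholder": "Legal",
        "concern": "An audit trail is contractually mandated",
        "implication": "Verify audit records for every state-changing operation",
    },
]

# Print the requirements as a simple traceable list for review.
for req in test_requirements:
    print(f"{req['stakeholder']}: {req['concern']}")
    print(f"  -> {req['implication']}")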

Designing and disseminating test plans

So, with the testing requirements identified, you probably now have a rather better idea of where to begin your actual test planning.

The test strategy serves as the link between your test requirements and the actual testing. It’s a description of the decisions you have made around the motivations, thoughts, plans, focus areas, reasons, and – yes – strategies being used for a particular testing effort. Note the present tense there – I find that a strategy serves as a map of the testing process: initially we have the rough outlines of what we’re expecting, the approximate edges of what needs our attention, and the finer detail is added as we go along. Naturally, the map can be completely redrawn when required. Test strategies are living documents in the truest sense of the phrase: they are not complete until the software is taken from your hands and you are no longer testing any part of that delivery. Up to and including that point, anything and everything within that document is open for discussion, debate, and – most importantly – change.

A lot of process-heavy environments will mandate discrete, immutable milestones that have to be achieved and signed off before later work can begin, and one of these milestones is inevitably the signoff of the ‘final’ test strategy. This is obviously a problem, since change is your friend – expect and embrace it. A test planning process that doesn’t allow for change in just about every area is almost certainly not going to work out very well. This is because, to paraphrase the three gentlemen previously mentioned, you cannot possibly know everything at the start; if you could, you’d be a world-famous psychic and not stuck in a room with me. It is always going to be better for your test process to have a strategy that reflects reality, rather than outdated or over-optimistic outlooks. If you’re lucky enough to be able to issue updated versions without setting off alarm bells, do so. Otherwise, change it when no-one’s looking; there is no argument that can be made against this without making the rigidity of the process look foolish.

A reasonable starting point for a test strategy’s purpose is to:

You should tell the audience your priorities, what you’d like to do, any areas you’ve considered but dismissed, and anything you’re not sure about. Your test strategy should explain both your decisions and the reasons behind them.

Really, there is only one reason that any of us test: something important might go wrong. Your entire test process exists to identify, investigate, and report on the risk that the product may fail.

In general, there are five areas to consider when planning your strategy (this is the Satisfice Heuristic Test Strategy Model):

- Project Environment
- Product Elements
- Quality Criteria
- Test Techniques
- Perceived Quality

Altogether, you’ll be making choices in three rough areas:

- what you will cover
- which techniques you will use to test it
- how you will evaluate the results

You will be making choices about all of these things, either explicitly in your test planning or implicitly via some other means; there is no option to simply not choose. These decisions will form your strategy, and there are many possible strategies. You’ll recall earlier that I made my testers write about the project and the product in their own terms, rather than copying what someone else thought. A similar approach is excellent for communicating your chosen strategy: essentially, tell a compelling story in your own words that explains and justifies the testing that is to be done. If I can quickly read the extremely generic and simplified examples from Lessons Learned:

“We will release the product to friendly users after a brief internal review to find any truly glaring problems. The friendly users will put the product into service and tell us about any changes they’d like us to make.”

“We will define use cases in the form of sequences of user interactions with the product that represent, altogether, all the ways we expect normal people to use the product. We will augment that with stress testing and abnormal use testing (invalid data and error conditions). Our top priority is finding fundamental deviations from specified behavior, but we’re also concerned with ways in which this program might violate user expectations. Reliability is a concern, but we haven’t yet decided how best to evaluate that.”

“We will perform parallel exploratory testing and automated regression test development and execution. The exploratory testing will be risk-based, and allocated to coverage areas as needed. We’ll revisit the allocation each week. The automated regression testing will focus on validating basic functions (capability testing) to provide an early warning system about major functional failures. We will also be alert to opportunities for high-volume random testing.”

Stories like these not only spell out the high-level goals and the strategies to achieve them clearly, but can also be absorbed more readily by other audiences, especially non-technical ones. This drives discussion and acceptance, and is far more likely to find variances in expectations – in the examples I just read, I can see things that would upset most rigid-process project managers or technical leads, which is a good outcome: you uncover the pain points in planning rather than in execution. Using this approach to paint a broad picture before hitting specifics makes it completely clear what you and your test team intend to do, by clearly communicating your emphasis. Again, your ideas are your test plan.

Speaking of your audience: your strategy, its decisions, and the language in which you communicate them will all be shaped by how well you know that audience. Sometimes you may need to direct an internal test team, and sometimes you may need to communicate with stakeholders outside the company. There may be other audiences later on – for example, your support organisation once the project goes live. Your test strategy will certainly be read the very next time this software is changed, and one day it will be you that has to pick up someone else’s strategy and make sense of it, so be nice.

Finally, I’m going to steal from Lessons Learned again where they discuss what a good test strategy is:

As a side note – I don’t think I should have to tell you that test strategies should also be honest. If you have pressure to lie or obfuscate in your testing or planning, I think you should find a new job.

Things I still don’t know how to solve

To finish off, I’d like to throw a few things on the table that I still don’t know how to solve adequately, and also request some experiences from people who have the great luxury of not working within rigid, prescribed test processes:

Thanks for your time.

