(Guest blog) Interviewing the Product: Making Testing More Flexible with a Heuristic Test Strategy Model

This guest blog post is part of an Atlassian blog series raising awareness about testing innovation within the QA community. You can find the other posts in this series under the QA Innovation tag.

This post was written by Michael Larsen, a “Lone Tester” at SideReel.com, a division of Rovi Corporation. His experience covers networking equipment software, SNMP applications, virtual machines, capacitive touch devices, video games, and distributed database applications/web services.

Processes getting in the way

Over the past several years, I have encountered very different ideas and expectations about how software testing should be performed in different organizations. Some were very rigid, with steps that had to be followed in a specific way. In the 90s, when I worked for an internetworking manufacturer, this was mandated by ISO 9001 requirements and our need to prove that we “followed the process”. This made for voluminous test plans, lots of specific detail, and an impressive checklist.

Later, I worked for a video game publisher based in Japan, with testing provided by a group stationed in the U.S. Again, we had very specific requirements, this time driven not by a standard but by translation: every bug report, test plan, and status update had to be translated into Japanese and sent to the parent company, where it was reviewed and commented on in Japanese, sent back to us, and then re-translated into English. This required a very specific process, and very specific wording for what we did.

In both of these instances, it would be safe to say that the process was “necessary”, by some definition of that word. However, I would also have to say that it led to less effective testing. Why? Because in both cases (and several other examples I could provide), the process itself limited our ability to ask questions of the product. My favorite metaphor for software testing is the “beat reporter”: we’re out to get “the story”, to land “the interview”, to get “the scoop”. To do that, we have to be able to ferret out a story wherever it can be found. When a process helps us do that, it’s a benefit, something we can use to help us get the story. When the process actively discourages such things, we are hindered in our ability to get the full story.

The Solution: A New Test Model

What if, instead of constantly focusing on “following the process” or “having a thorough plan”, we focused on having a large list of questions we could ask of any product? It sounds far-fetched, doesn’t it? Actually, it’s not. There’s a way to do this rather effectively, and the good news is that we don’t have to re-invent the wheel. The approach has already been described and laid out for us to consider and use as we see fit. It’s called the Heuristic Test Strategy Model (HTSM).

HTSM was designed by James Bach (http://www.satisfice.com/tools/satisfice-tsm-4p.pdf) and works as a template for asking a product questions. The model supports questioning at any level, from a single focused unit to an entire suite of integrated applications. The main point of the HTSM is the ability to look at a broad range of categories and understand which questions are relevant to each. Rather than creating a series of canned scripts that walk a tester through a checklist of expected states, this approach focuses on the relevant questions we want answered. Those questions may well lead us down numerous paths we would never have considered while focused on the “must test this” checklist. By actively questioning the product, we can create a much more comprehensive and adaptable testing strategy. What’s more, we actually learn about the product, and that learning helps us ask additional questions.

The key areas that the HTSM covers are listed below; after the list, I’ll sketch one small way to put them to work:

  • Project Environment: What resources do we have? What are our constraints? What will prevent us from doing certain things in this environment? What if we were to scale various aspects of our environment (RAM, CPU, storage, network access, etc.) up or down?
  • Product Elements: What are the areas you intend to test (and don’t say “everything”, because that’s impossible)? How will you determine that you have covered the necessary steps to test that element (or set of elements)? How do you know which interactions make sense, and which ones are unlikely or needless overhead?
  • Quality Criteria: What rules help you determine whether the product has a problem? Do these rules harmonize with each other, or do they clash at times? Are there times when the criteria make sense? Are there instances where they don’t? Penny Wyatt goes into this in more detail in her post on testing intuition.
  • Test Techniques: What methods will you use to create tests? What informs your decision to use one technique over another? Does a boundary condition alone make for a successful test, or will you need to incorporate other elements, such as load or negative testing, to determine whether there is an issue?
  • Perceived Quality: What are our results? How have we assessed the quality of a system? What information have we derived to help us come to that conclusion?

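To make this concrete, here is a minimal sketch in Python. It is my own illustration, not anything prescribed by the HTSM: it keeps the categories as a living question bank and draws a few entries from each to seed a testing session. The category names follow Bach’s model; the example questions are paraphrased from the list above, and the QuestionBank class is a hypothetical helper.

```python
# A toy "question bank" for HTSM-style testing. The categories come from
# Bach's model; everything else here is illustrative scaffolding.
from dataclasses import dataclass, field
import random

@dataclass
class QuestionBank:
    categories: dict[str, list[str]] = field(default_factory=dict)

    def add(self, category: str, question: str) -> None:
        # Grow the bank as you learn; answered questions should spawn new ones.
        self.categories.setdefault(category, []).append(question)

    def charter(self, per_category: int = 1) -> list[str]:
        # Draw a few questions from each category to seed a test session.
        picks = []
        for name, questions in self.categories.items():
            for q in random.sample(questions, min(per_category, len(questions))):
                picks.append(f"[{name}] {q}")
        return picks

bank = QuestionBank()
bank.add("Project Environment", "What resources do we have, and what are our constraints?")
bank.add("Product Elements", "Which areas do we intend to test, and how will we know we have covered them?")
bank.add("Quality Criteria", "What rules help us determine whether the product has a problem?")
bank.add("Test Techniques", "What informs the choice of one technique over another?")
bank.add("Perceived Quality", "What information supports our assessment of the product's quality?")

for prompt in bank.charter():
    print(prompt)
```

Each run prints a short, category-tagged interview agenda for your next session. The point isn’t the code; it’s that the questions, not a canned script, drive the work, and that the bank is expected to grow as answers suggest new questions.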
Performing a complete breakdown of the HTSM is well beyond the scope of a single blog post, but suffice it to say that it is an approach that can help guide and develop testing efforts in ways that static, formalized structures cannot. Unlike structured methods, which presuppose a correct answer, the HTSM is a process that encourages questions, and more questions after the initial ones are answered.

Challenge your product testing

I encourage everyone to take a look at this model and practice using it on your next testing project. Instead of following the old script, sit down with your product and conduct a thorough interview. Ask it the tough questions. Make it squirm and sweat; delve deep into its background so that you can publish the “Scoop of the Season” in your next report.

Michael Larsen is the producer of “This Week in Software Testing”, a podcast hosted by SoftwareTestPro.com. He is a black belt in the Miagi-Do School of Software Testing, a member of the Board of Directors (and an active instructor) with the Association for Software Testing, and the principal facilitator for the Americas chapter of Weekend Testing. Find him on Twitter at @mkltesthead, or read his software testing blog, TESTHEAD.
