Enter the Defect: When In Doubt, Report the Bug

This guest blog post is part of an Atlassian blog series raising awareness about testing innovation within the QA community. You can find the other posts in this series under the QA Innovation tag.

It’s written by Richard Hale, a Senior Consultant & Director of Quality Assurance with Go2Group. As an Atlassian Platinum Expert, Go2Group provides its clients with world-class ALM and testing services, with expertise in integration solutions for Atlassian, HP, Perforce, MuleSoft, IBM, and Salesforce.com.

As a Director of Quality Assurance, I’ve been involved in many different software development environments and cultures. Many have been good and productive, and some have been, for lack of a better word, dysfunctional. In her post, Penny Wyatt also touches briefly on the importance of fostering a team culture that encourages testers to investigate and report bugs. In my experience, I’ve seen teams, or certain sub-groups within a team, reluctant to report too many defects, or certain types of defects, because the topic is sensitive or a hot point in the project; I’ve also seen teams “manage down” the defect count because management is sensitive to the total.

This is done for various reasons – usually as a perception-management tactic, so that upper managers, who may not understand the value of a found defect, do not get a bad impression of the project’s progress. The practice, as I’ve observed it, is never a formal policy but always a covert attempt at re-working the total defect count (and sometimes other numbers) as the project proceeds. At times this is at the behest of a middle manager who wants to manage the perceptions of his or her superiors; sometimes it’s at the request of a developer.

What I’d like to accomplish in this blog entry is to explore some of the examples I’ve seen, how they came about, and what they really mean for the project. These are case studies we can hopefully learn a little from.

Typical scenarios I’ve observed are:

  • “The Lame Bug”
  • “It’s Not a Requirement”
  • “The Developer is Already Fixing That” – i.e., the dev “knows about it”

“The Lame Bug”

Sometimes QA is faced with complex environment, data, or configuration issues that cause what seem to be defects. In this case, the ‘defect’ may not be in the code or functionality itself, yet these issues still cause test cases to fail.

For example, say a tester expects certain data samples to be part of the test environment, including pricing for a certain region, but the test cases fail because the proper pricing data is not present. It may be the responsibility of the data or development team to ensure environments are set up properly, but the failing test cases still impact the testing timeline and waste QA time until the situation is resolved.

When testers encounter this situation and enter defects, development teams may say, “Don’t enter that as a defect because it’s not a bug” – that it’s a ‘lame bug’. However, if the issue is not entered and tracked, the team misses an opportunity to correct a problem in the configuration management (CM) process, the environment and data setup process, the application deployment methodology, and so on. The defect also captures the project effort and resources used to resolve the issue and can provide a valuable benchmark to ensure it does not arise again.

QA teams should be diligent about entering these types of defects, properly categorizing them, and working with the teams involved to pinpoint the root cause. Once the cause is identified, it may also change the way the defect is described and characterized with attributes, so that it becomes an appropriate reference artifact. This helps the project’s key processes – those which actually support the life cycle – continually improve throughout the software development life cycle (SDLC). Again, we capture the work involved for all teams in resolving these issues.
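To make this concrete, here is a minimal sketch of entering such a defect through Jira’s REST API and categorizing it with labels. The instance URL, credentials, project key, and label names are all hypothetical placeholders – substitute your own conventions.

    import requests

    # Hypothetical Jira instance and credentials – replace with your own.
    JIRA_URL = "https://jira.example.com"
    AUTH = ("qa.analyst", "api-token")

    # Categorize the environment/data problem explicitly so it can be
    # reported on later, instead of going untracked as a "lame bug".
    defect = {
        "fields": {
            "project": {"key": "QA"},  # hypothetical project key
            "summary": "Regional pricing data missing from test environment",
            "description": (
                "Test cases fail because the expected regional pricing "
                "samples were not loaded during environment setup."
            ),
            "issuetype": {"name": "Bug"},
            "labels": ["environment", "test-data", "config-management"],
        }
    }

    resp = requests.post(f"{JIRA_URL}/rest/api/2/issue", json=defect, auth=AUTH)
    resp.raise_for_status()
    print("Created", resp.json()["key"])

Labels are only one option; a custom field or a dedicated issue type works just as well, so long as the categorization can be queried later.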

“It’s Not a Requirement”

This is a case where the requirement artifact(s) should have a defect entered against them.

The tester finds what seems to be a defect, although it’s not necessarily a test case failure; sometimes it’s found through ad-hoc-style testing. When this happens, the tester must check with the requirements team to confirm whether the issue is a gap in the requirements. What normally happens here is that the requirement is quickly updated to include the intended functionality, and the development and test teams move on to addressing the new requirement.

However, a better way of handling this situation would be to enter a defect against the requirement artifact that is missing the requirement, and assign it to the appropriate requirements team resource. With proper issue types, this practice helps track scenarios where testing uncovers necessary functionality that was not in the original requirements documentation, and it captures the QA effort involved when this occurs. Otherwise, these incidents become a hidden cost of the project’s development.
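Where both the defect and the requirement artifact live in the same tracker, the two can also be linked so the gap is traceable from either side. A sketch using Jira’s issue-link endpoint, with hypothetical issue keys and a link type (“Relates”) that your instance may name differently:

    import requests

    JIRA_URL = "https://jira.example.com"  # hypothetical instance
    AUTH = ("qa.analyst", "api-token")

    # Link the defect QA entered to the requirement artifact it exposed,
    # so the requirements gap is visible from both issues.
    link = {
        "type": {"name": "Relates"},        # link type must exist in your instance
        "inwardIssue": {"key": "REQ-42"},   # the requirement artifact (hypothetical)
        "outwardIssue": {"key": "QA-123"},  # the defect entered against it
    }

    resp = requests.post(f"{JIRA_URL}/rest/api/2/issueLink", json=link, auth=AUTH)
    resp.raise_for_status()  # Jira returns 201 Created on success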

“The Developer is Already Fixing That”

Say QA is testing per the requirements docs or test cases, and the tester finds an issue in functionality: either it doesn’t exactly line up with the requirements, or the requirements surrounding a particular piece of functionality are unclear, exposing a gap between the documentation and the actual behavior of the product. The tester is fairly certain about the issue but tries to confirm it with the developer. The developer responds, “Oh yes, I found this; it’s already fixed and will be in your next build.”

The result is that the tester never enters the bug and the requirements documentation never gets corrected. From there, a test case may never be written specifically for this feature, and that’s the end of the conversation.

The problem here is that if QA does not document the defect properly and this type of scenario begins to happen with greater frequency, the project metrics and measures of progress will be skewed. QA often spends effort interacting with the requirements team, business teams, and developers, and the true cost of that effort goes unaccounted for. It’s always best to just enter the defect and make sure the new requirement is accounted for. In this case, it may be necessary to enter a defect against the requirements artifact as well, so that any effort required in that activity can be tracked. This can help management isolate the problem of incomplete requirements if it becomes more frequent.
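Once these defects are entered and categorized consistently, “greater frequency” becomes something you can measure rather than argue about. A sketch, assuming a team convention of labeling such defects requirements-gap (any label or issue type of your choosing works), of a JQL query that counts them:

    import requests

    JIRA_URL = "https://jira.example.com"  # hypothetical instance
    AUTH = ("qa.analyst", "api-token")

    # Count defects traced back to missing or unclear requirements
    # over the last 30 days.
    jql = 'project = QA AND labels = "requirements-gap" AND created >= -30d'

    resp = requests.get(
        f"{JIRA_URL}/rest/api/2/search",
        params={"jql": jql, "maxResults": 0},  # maxResults=0: total count only
        auth=AUTH,
    )
    resp.raise_for_status()
    print("Requirements-gap defects, last 30 days:", resp.json()["total"])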

Enter the Defect . . . with conviction!

In summary, entering the defect is not a ‘bad’ thing. Defects or issues – with proper typing across the various sorts of incidents – are a necessity for the entire project in:

  1. Helping improve processes and fostering continual improvement in team and individual disciplines
  2. Capturing and quantifying the real cost of the project (e.g. identifying potential hidden costs)
  3. Helping to isolate the root cause of certain project support issues, and resolving them (e.g. CM, environment, data)
  4. Supporting more helpful project metrics

The extremely important tasks of quantifying our work, tracking our progress, and continuously improving all depend on the good, consistent practice of entering issues.

Rich Hale has been involved in the software development industry for over 15 years. He’s worked as a Developer, QA Analyst, Test Automation Architect, and SDLC Tools & Process Specialist.
