Consider a test case with several steps, one of which fails because of a bug in the product. There is a workaround for the issue, however, and the decision has been made not to fix the bug. The test case in which the bug is encountered is still valid.

Does the test case then need to be tailored to note the bug and ignore the failure in subsequent runs? That is, should we mark the test as passed, with a note indicating a known issue?

Potential Solutions

I am personally a big believer in paying down debt as quickly as possible. For a legacy bug that has been around for a long time, I would either (a) fix it, or (b) punt it outright and permanently. If it is even moderately important to customer scenarios, then fix it. As it is now, it sounds like it is in the middle ground just taking up space, and a decision needs to be made either way.

Making a note in a failing test that there is a legacy issue feels wrong. Tests should pass, or tests should fail – nothing in between. If an issue has not been addressed, then the test should continue to fail until something is done about it. Otherwise, we risk losing track of it and painting an incomplete picture.

What can we do about it?

  1. Fix the bug. An obvious one.
  2. Demote the test case to a lower priority. Release criteria normally revolve around the highest-priority test cases. If this scenario truly isn’t core, then the test case should be lower priority anyway, and your release criteria can stomach some failures in that category at ship time. Note that chunking the test case (per below) can also help here, by creating some high-priority and some low-priority test cases.
  3. Test case contains the workaround. Modify the test case so that it contains the workaround for the legacy bug (if applicable) instead of the core path that triggers the bug. Make a note in the test case that an alternate path exists but doesn’t quite work right. Note that I don’t really like this option, as it still hides the problem.
  4. Remove the test case. If no longer relevant to the product, get rid of it and close off the legacy bug.
  5. Chunk the test case. It may be that the test case is too large, contains far too many steps, and is not cohesive. That makes it hard to test and prioritize a single scenario, and may make the situation you describe more likely. You’ll still end up with a failing test in this case.
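To make options 2 and 3 concrete, here is a minimal sketch assuming a Python/pytest suite. Everything here is invented for illustration: the product function `export_report`, the bug ID BUG-1234, and the `low_priority` marker (which your suite would register in its pytest configuration).

```python
import pytest

def export_report(fmt):
    # Simulated product code: the direct PDF path hits hypothetical
    # legacy bug BUG-1234, which has been punted.
    if fmt == "pdf":
        raise RuntimeError("BUG-1234: direct PDF export fails")
    return f"report.{fmt}"

@pytest.mark.low_priority  # option 2: demoted, so release criteria can tolerate it
def test_export_report_via_workaround():
    # Option 3: exercise the workaround path (CSV export) instead of the
    # direct PDF path that still fails because of BUG-1234.
    assert export_report("csv") == "report.csv"
```

The marker keeps the demotion queryable (e.g. `pytest -m "not low_priority"` for the release-criteria run), while the comment records that the core path is still broken.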

The key thing for me is to not play any games. If there is a failing test case due to a bug that has not been fixed, then that’s the reality, and it should be reported until a decision is made. If the bug is punted permanently, then the test case can be updated to reflect the now-expected behavior, since we have just made a design choice for the product.

In all cases, the bug should be linked to the test case for full traceability.
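One way to carry that link in an automated suite, again assuming pytest (the function and bug ID below are invented): pytest’s `xfail` marker reports the run as XFAIL rather than PASS, so the failure stays visible in every report while the bug ID travels with the test.

```python
import pytest

def export_report_pdf():
    # Simulated product call that still hits hypothetical legacy bug BUG-1234.
    raise RuntimeError("BUG-1234: direct PDF export fails")

@pytest.mark.xfail(reason="BUG-1234 (punted permanently): see bug tracker",
                   strict=True)
def test_export_report_pdf():
    # Reported as XFAIL, not PASS, so nothing is silently hidden; strict=True
    # makes the test fail loudly if the bug is ever fixed, prompting cleanup.
    export_report_pdf()
```

`strict=True` is the part that keeps this honest: if the product starts passing, the suite flags it, forcing the decision the post argues for instead of letting the annotation rot.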

Anyone have other advice that you would pass along?