Help! My QA Team Can’t Keep Up With Development!

We hear a lot of variations on this cry for help. ‘QA is always falling behind’. ‘QA doesn’t have enough time for the full test passes’. ‘There isn’t enough time for automation or performance work’. Any of this sound familiar?

There are many ways to improve your QA productivity. Let’s look at some of the best practices and anti-patterns. It’s beyond the scope of this post to show every bad practice, but we’ll try to hit the most common ones we see and describe what to do about them.

Best Practices

  1. Test Automation
  2. Architecture Changes to Make Testing Easier
  3. Write Better Manual Tests
  4. Put Development and QA Close Together
  5. Make Quality Everyone’s Job

1. Test Automation

Automation is usually the first thing people think of to improve QA productivity. Yes, it can really help, when done right, but it may not be the most important thing you can do.

[box] The decision to automate is a simple return on investment (ROI) decision.

How much time do I spend on running this test now?

Compare that to: How much time will it take to automate?

And, how much time will it take to maintain, run, and debug the automation?[/box]

The following are good rules to live by when considering automation:

  • Tests that you are going to run at least 6 more times. Your cutoff may vary, but this rule has worked for us.
  • Tests where you would fix a bug if the automation found it. Start automating the tests that would find your worst bugs with the least automation time and work downward. At some point, you get to diminishing returns.
  • Don’t automate a UI-only feature like a picture or the color of a dialog. When developers change a UI, these changes should be manually tested. A human will find usability issues much better than a machine. In one product measured over the course of a year, over 50% of the UI bugs would not have been found by UI automation; only a human would have caught them, while the reverse was rarely true.
  • The time spent automating isn’t 5 or more times the time spent running an average test. Your first few tests may take a lot more time to automate, but the more tests you automate, the more shared code you have (and a nice test framework!) and automation times should drop. However, you should drop tests to the bottom of your list that are going to take several days to automate.
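The ROI questions in the box above can be turned into a quick break-even sketch. The numbers below are made-up illustrations, not benchmarks; plug in your own estimates.

```python
# Hypothetical numbers: substitute your own estimates.
manual_minutes_per_run = 10       # time to run the test by hand
runs_remaining = 6                # the "at least 6 more runs" rule of thumb
automation_minutes = 40           # time to write the automated test
upkeep_minutes_per_run = 1        # maintain/run/debug cost per automated run

manual_cost = manual_minutes_per_run * runs_remaining
automated_cost = automation_minutes + upkeep_minutes_per_run * runs_remaining

# Automation wins when its total cost comes in under the manual cost.
worth_automating = automated_cost < manual_cost
print(manual_cost, automated_cost, worth_automating)  # 60 46 True
```

If the test will run only twice more, the same arithmetic flips: 20 manual minutes against 42 automated, and automation loses.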

Best Practices

  • Test one thing. Most of your automated tests should check one important behavior. You will have a few end-to-end tests that check a lot of things along a key business scenario, but most automated tests will be much simpler. A simple test is easier to maintain and debug.
  • Log test failures and automation failures differently. A test failing because the product is broken is not the same as one that fails because it couldn’t run properly. Track different failures for these cases. You’ll save debugging and reporting time later.
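A minimal sketch of both practices in plain Python. The product call and the error class are hypothetical stand-ins; most test runners let you distinguish errors from failures in a similar way.

```python
class AutomationError(Exception):
    """Harness problem (environment, setup, timing), not a product bug.
    Raising a distinct type keeps the two failure kinds separate in reports."""

def fetch_config_entry(name):
    # Hypothetical stand-in for the product API under test.
    return {"retries": 3}.get(name)

def test_retry_default_is_three():
    # Test one thing: the default retry count.
    value = fetch_config_entry("retries")
    if value is None:
        # We never reached the behavior under test, so this is an
        # automation failure, logged differently from a product failure.
        raise AutomationError("config store unreachable or entry missing")
    assert value == 3   # a plain assert failure means a product bug

test_retry_default_is_three()   # passes silently
```

Because the test checks exactly one behavior, a red result points straight at the retry default, and the exception type tells you whether to debug the product or the harness.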

Anti-patterns

  • Excessive UI automation. UI automation ROI tends to be great for the first 10 tests. It’s OK up to 100 tests, and degrades severely as you approach 1000 tests. This is the nature of the beast. UI automation tends to fail due to timing issues and frequent UI changes, and those failures cost time to debug. The best UI automation has about 99.5% reliability, which means 5 false failures every 1000 test runs. How many dev and test hours will that eat up? We’ve seen roughly 0.6 hours per test failure. UI tests also can’t easily run in parallel and take longer to run. As you accumulate automation, going through the UI is not the way you’ll want to go. Think about a test strategy that pushes automation lower down the stack (e.g. at the API level).
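The arithmetic above works out like this (treat 0.6 hours per false failure as an observed average, not a constant):

```python
runs = 1000
reliability = 0.995               # best-case UI automation reliability
hours_per_false_failure = 0.6     # observed average debugging cost

false_failures = runs * (1 - reliability)
hours_lost = false_failures * hours_per_false_failure
print(round(false_failures), round(hours_lost, 1))  # 5 3.0
```

Three dev/test hours burned per full suite run, with zero product bugs found, and that is the best case for UI automation reliability.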

2. Architecture Changes to Make Testing Easier


Want the biggest ROI for test? Write easily maintainable code. The companies with the lowest incoming bug rates are the ones that have the best architectures – irrespective of any other development practice. They have a core set of code that almost never changes, but is configurable and extensible. Components are modular and easily isolated. When your components can be easily tested in isolation or mocked, you can have lots of simple tests that don’t depend on each other or a lot of underlying functionality to work. As a developer, if you want to help your test team, make your code more maintainable.

Best Practices

  • Use well-defined interfaces using a machine-readable specification like Open API. Have a limited number of endpoints for any component and specify them fully. A clear contract describes exactly what to test.
  • Separate your presentation from your business functionality using a pattern like MVVM. That way you can run tests against your business logic without the UI and vice versa.
  • Measure cyclomatic complexity and number of dependencies for your functions. Complexity describes the number of unit tests you are going to need. Dependencies describe the number of stubs or mocks that you’ll need to write those unit tests. Make your life easy and keep these small. We like complexity to be less than 10 and dependencies to be less than 7 for any function. If you are using Visual Studio, check out Tools/Analyze; if not, use a tool like SonarQube.
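As a sketch of the presentation/business split, a view model like the one below can be tested with no UI running at all. The class and method names are invented for illustration.

```python
class EventListViewModel:
    """Business logic with zero UI dependencies, so tests need no window,
    browser, or UI driver."""

    def __init__(self):
        self._events = []

    def add_event(self, title, date):
        if not title:
            raise ValueError("title is required")
        self._events.append((date, title))

    def display_rows(self):
        # Presentation-ready strings the view layer can bind to.
        return [f"{date}: {title}" for date, title in sorted(self._events)]

# The test drives the logic directly; the real UI stays out of the loop.
vm = EventListViewModel()
vm.add_event("Retro", "2024-03-02")
vm.add_event("Standup", "2024-03-01")
print(vm.display_rows())  # ['2024-03-01: Standup', '2024-03-02: Retro']
```

Tests like this are fast, deterministic, and immune to the timing issues that plague UI automation, which is exactly why the separation pays off for QA.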

Anti-patterns

  • The dreaded monolithic architecture. If you change one thing in your code, how many tests do you have to run? In a simple modular architecture, the answer is one. If you have a large inter-connected code base, it’s “all of them, every time”. That’s what’s slowing down your test team.

  • Large stored procedures (SPs). Back in the 90s I worked on the Microsoft SQL team. Like other database vendors, we told developers to put business logic in stored procedures to improve performance. I’m sorry! Yes, if you need a stored procedure for performance, use one, but look everywhere else first. If you must, keep them small and use ANSI SQL. SPs are hard to test, maintain, upgrade, or port to another DB product. It can be done, but it’s not easy.

3. Write Better Manual Tests

What does the classic test look like in a tool like Zephyr or TFS? It’s a set of steps and checks. Press this button, look for that window. These kinds of tests take a long time to write. They also duplicate a lot of steps, and if the navigation changes, you have lots of tests to update. Want to go faster? Write tests with less text that still make clear what is being tested.

All those steps are not that important if you are optimizing for the obvious–the stuff that a new user would figure out quickly. Instead, you want to describe the important things like:

  • Prerequisites of the test. What needs to be setup in advance? What are the setup steps at the beginning of the test trying to accomplish?
  • What is the test trying to do? Instead of steps, describe the intent. What are you trying to accomplish?
  • What is the test measuring? What in the software is being checked? For example: “check that the new config entry is created”. If it’s obvious how to do this, don’t bother spelling it out. When we automate, we could implement the check in the UI, the API, or as a call to the DB.

  • What is the customer impact if it fails? Imagine you see a test report. If it says 90% pass, what does that mean? Not very useful. What if it says: “major breaking issues: users can’t change dates for their calendar events”. That’s useful.


[box] The test should have a priority based on how bad the result could be and how likely it is to happen.[/box]


When you automate, make sure it logs that priority and the scenario that is impacted so you know how to report the severity of the failure.

Best Practices

  • Use “functions”. Just like with good code, you want to avoid repeating steps in your tests. It’s helpful to identify repeated or complex setup and verification operations like “create event” or “create user”. Write the steps for these in a separate document and link to them in your test.
  • Use data-driven tests for complex logic so that one test can do the work of many (e.g. a test to add two numbers may have one function to perform the test action, and sets of data covering the equivalence classes).
  • Use a chain of responsibility pattern as an oracle when there are many possible outputs from your inputs. This pattern is easy to use, code, and maintain. A chain of responsibility pattern is just a series of IF/THEN statements, where each statement calls out to a result and stops the series. The first statements should be the worst cases, like “if (A or B) then error #1”. Statements get progressively less negative till you have “else success”. This way you guarantee that you cover the negative cases and new cases can be added easily without affecting existing ones.
  • Use pairwise testing to reduce the number of tests when two or more inputs depend on each other. Also use it to reduce the number of tests when you have to run under multiple configurations like “Spanish, android, large data set”. By combining tests that cover pairs of inputs or configurations, you can reduce a test matrix of thousands of tests into tens. See pairwise.org for reference.
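The data-driven and chain-of-responsibility ideas combine naturally. In this sketch the function under test and the error strings are invented for illustration:

```python
def add_numbers(a, b):
    # Feature under test: tolerant addition that reports bad input.
    try:
        return float(a) + float(b)
    except (TypeError, ValueError):
        return "error: non-numeric input"

def oracle(a, b):
    # Chain of responsibility: IF/THEN statements, worst cases first,
    # each one returning a verdict and stopping the chain.
    if a is None or b is None:
        return "error: non-numeric input"
    if not isinstance(a, (int, float)):
        return "error: non-numeric input"
    if not isinstance(b, (int, float)):
        return "error: non-numeric input"
    return float(a) + float(b)

# One data-driven test covers many equivalence classes in a single loop.
cases = [(2, 3), (-1, 1), (0.5, 0.5), (None, 5), ("x", 5)]
for a, b in cases:
    assert add_numbers(a, b) == oracle(a, b), (a, b)
```

A new negative case is just one more IF near the top of the oracle and one more row of data; existing cases are untouched.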
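To show the pairwise reduction concretely, here is a small greedy all-pairs generator. It is a sketch, not production code; the tools listed on pairwise.org do this job properly.

```python
from itertools import combinations, product

def pairwise_suite(params):
    """Greedy all-pairs reduction: keep picking the configuration that
    covers the most still-uncovered value pairs until none remain."""
    names = list(params)
    uncovered = set()
    for (i, a), (j, b) in combinations(enumerate(names), 2):
        for va, vb in product(params[a], params[b]):
            uncovered.add((i, va, j, vb))
    suite = []
    while uncovered:
        best = max(
            product(*(params[n] for n in names)),
            key=lambda cfg: sum(
                (i, cfg[i], j, cfg[j]) in uncovered
                for i, j in combinations(range(len(names)), 2)
            ),
        )
        suite.append(dict(zip(names, best)))
        uncovered -= {
            (i, best[i], j, best[j])
            for i, j in combinations(range(len(names)), 2)
        }
    return suite

configs = pairwise_suite({
    "language": ["Spanish", "English"],
    "platform": ["android", "ios", "web"],
    "data": ["small", "large"],
})
# Every value pair is covered in far fewer than the 12 full-matrix runs.
print(len(configs))
```

The full matrix here is 2 × 3 × 2 = 12 configurations; the greedy suite covers every pair of values in roughly half that, and the savings grow dramatically as parameters are added.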

Anti-Patterns

As we see from the Best Practices, the anti-patterns are repeated tests–lots of tests for the same function, and a very complex test oracle.

4. Put Development and QA Close Together

Development and QA engineers should share the same sprint process, work in the same code branch, and sit in the same office, ideally next to each other.

Best Practices

  • Test code in same place as development code
  • Test coding practices are the same as development
  • QA and development sit near each other or are in constant communication
  • QA and Dev in the same sprint: QA and Dev work together as a team on the same work and complete it together

We strongly recommend putting dev and test in the same sprint and keeping sprints short — two weeks or less works best. Separating the QA work from the dev work lowers development productivity and hurts quality.  It may seem like dev can do more if they don’t wait for QA, but it’s not really the case if QA is involved early with defining the requirements. Here’s a typical sprint schedule:

  • Pre-sprint: QA and dev work with the Product Owner or Business Analyst to define acceptance criteria for stories. You should be having story refinement meetings at least weekly. These break down larger stories and add acceptance criteria. Sometimes stories in-flight need to be split further so that one piece can be shipped while another part requires additional work.
  • Sprint week 1: QA writes tests based on acceptance criteria, runs a part of the on-going regression and performance test matrix that is not affected by the work in the sprint, writes library automation functions, and does exploratory testing. Dev writes code and shows pieces to QA as they go. Dev writes unit tests as code is written.
  • Sprint week 2: QA does final tests on new code, and automates tests that make sense to repeat. Dev completes code, fixes issues, adds integration tests, and works on ongoing refactoring work. Dev and QA can add logging/monitoring calls to the code to help track it in production.
  • End of sprint: Team demos working code. Demo is typically run by QA to prove it can be run by someone other than the dev that wrote it.


We all know that QA must wait a little for dev to have something to share, so we pad the beginning of the sprint with ongoing QA work like test passes and perf testing that must get done but can happen anytime. Dev providing API stubs, for example, can help counter this. Dev will be done before QA has finished automation, so we pad the end of the sprint with ongoing dev work like refactoring and integration tests.

Anti-Patterns

  • Testing after dev has moved on. If you find a bug when the developer is working on a feature, then it is still fresh in their mind and will likely be fixed correctly. If you find it two weeks later, you’ll interrupt their current work and the dev may not remember what they were thinking and may fix it wrong. It’s not just expensive to find bugs later, it adds to your technical debt and slows productivity. We see a drop of 20% to 40% productivity for the whole team when QA lags dev by one sprint.

5. Make Quality Everyone’s Job


Is QA responsible for quality? What about development? The product owner? Each is responsible for their part in the quality of the product.

The product manager or owner (PM/PO) typically owns the vision for the story – how is it going to help the customer? They should be checking that features match that vision before the final demo. The developer should be able to prove their code works as they intended, for example with unit and integration tests. QA owns measurement of the quality – are we done? To what extent is it working? And sometimes, is it working for the customer?

Measurement of quality is important because it gives the organization the data needed to make decisions, but it’s not the same as owning all the quality. QA falls behind when the job becomes bigger than they can accomplish.

It’s best that dev and PM/PO take on quality tasks that they can do most efficiently: unit testing, integration tests, acceptance review.

Best Practices

  • A good RACI (roles/responsibilities) model. Make clear what tasks each role is responsible for doing.
  • Multiple team members should contribute to your epics and stories. Everyone brings their own skills to the process and if they write the part of it they are responsible for, they will understand it. Co-authorship eliminates a lot of the story review process. This works especially well for epics, where you might have more detail, but also makes sense for stories.
    • Typically, the Product Owner (PO) or Product Manager writes the “why” part of the story: who is the customer, what’s the problem from their point of view, and what are the outcomes. They DON’T describe how the technology will work.
    • The Developer writes the “how” part of the story. These are notes for the developers, so they only need enough detail for those working on it to understand how to build it.
    • QA writes the acceptance criteria. How will you know when the outcomes are achieved and you can measure done?
    • The operations engineer (OPS or DevOps) writes how success is measured in production (if needed).
    • The user experience (UX) designer comes up with any UI and UX guidelines.

Anti-Patterns

  • Hand-offs. Development writes code and then hands it off to QA without checking the requirements or existing tests. Instead, development should be proving their code works as they intended. QA can put it in a bigger context – does it integrate well? Does the experience make sense? What’s performance like? We also see hand-off problems when development is done in one location and QA in another. That rarely works well. QA must be in constant communication with the developer, and ideally, sit side-by-side.
  • The attitude that QA is responsible for quality. QA is a contributor to quality, not responsible for it – at least not all of it. All roles have quality responsibilities. At the end of an epic, everyone signs off that they did their part, and all of those parts add up to quality. Think of the role of QA as “measuring product and customer behavior to give the team the data necessary to make decisions”. If you think of yourself as a manual tester or an automator (i.e. back-end loaded), you aren’t providing a lot of value to the company. If you are helping the company make decisions and assure quality as early as possible, that’s a different story. Those people get paid more.
  • Product owner (PO) writes all the stories alone in a room. Product owners are great at understanding the vision behind the story, not so much with all the details. POs can fall way behind on story creation if they have to research and write everything, especially if you need enough detail in the story for an offshore development team.

Summary

Quality assurance is everyone’s job in today’s fast-moving agile environment. While there are a number of engineering best practices teams can implement, we’ve had tremendous success with those described above. Implementing these best practices will help organizations improve QA productivity while maintaining a healthy return on their overall investment in testing.