This post is part of a series on functional testing using the test pyramid.

API testing checks the external interface of one or more components, while component testing verifies the functionality of a single component.

From the point of view of the consumer, these are the same, so we group them together in the same test layer. These tests are sometimes called “acceptance” tests because they verify the acceptance criteria of the story or interface. They are quick to run because they mock everything external to the API or component under test.

Whether you are building API tests or component tests is really a matter of mindset. Are you building components or interfaces to implement features? If you are doing “API first” development, and we recommend that you do, then your stories will lead to acceptance criteria that define and validate an API instead of a component. That API can be written and validated before the code behind the API is written, allowing teams on either side to work in parallel.

Component Testing

Purposes

  • Verifies that the component meets the primary business and technical requirements
  • Often includes some dependent functionality like logging and storage that might be mocked in unit tests.

Who defines the test?

  • Tests are derived from the acceptance criteria in initiatives, epics, and stories that are used to define component features. Acceptance criteria become acceptance tests.
  • Tests can be written by a Product Owner (PO), Quality Engineer (QE), or developer.

Who codes the test automation?

  • The development team that writes the component also automates its tests. Some companies have a QE person on the team do this; others have engineers write their own test automation.
  • It’s not a good idea to have a team separate from the development team automate the component tests: communicating what should be automated takes too long and loses details. Don’t farm out this testing to a separate offshore team; you won’t save money.

What to test?

  • Component requirements analysis identifies the code that should be created, rather than just testing whatever was created.
  • This is basically traditional testing and there are lots of good sources on how to turn requirements into tests.
  • Often, a requirement can be tested as a unit test and, if so, it should be. Tests should be as low on the pyramid and as close to the code as they can be.
  • The component test covers functional testing bigger than the unit test. It spans the component and sometimes the immediate dependencies.
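
To make that scope concrete, here is a minimal sketch of a component-level test in Python. All the names (OrderService, InMemoryStore, the notifier) are invented for illustration; the point is that the component is driven through its public interface, storage behavior is included in the test, and only the truly external dependency is replaced with a test double.

```python
# Hypothetical component-level test sketch (names are illustrative, not from this post).
from unittest.mock import Mock


class InMemoryStore:
    """Simple stand-in for the component's storage dependency."""
    def __init__(self):
        self.orders = {}

    def save(self, order_id, order):
        self.orders[order_id] = order


class OrderService:
    """The component under test: places orders, persists them, notifies a downstream service."""
    def __init__(self, store, notifier):
        self.store = store
        self.notifier = notifier

    def place_order(self, order_id, items):
        if not items:
            raise ValueError("an order needs at least one item")
        order = {"id": order_id, "items": items, "status": "placed"}
        self.store.save(order_id, order)
        self.notifier.send(order_id)
        return order


def test_placing_an_order_persists_it_and_notifies_downstream():
    store = InMemoryStore()
    notifier = Mock()                        # the external dependency is mocked
    service = OrderService(store, notifier)

    order = service.place_order("o-1", ["widget"])

    assert order["status"] == "placed"
    assert store.orders["o-1"] == order      # storage behavior is part of the component test
    notifier.send.assert_called_once_with("o-1")
```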

Best Practices

  • Use ATDD/BDD at the component level – this shows when the work is done and which customer experiences or requirements are working (or not) when a test passes or fails.
  • Use pairwise testing to reduce the number of tests (see the sketch below).
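
To show how pairwise testing cuts down the number of combinations, here is a small self-contained sketch using a naive greedy pair-covering algorithm. The parameters are invented for illustration, and real projects typically use a dedicated pairwise tool rather than rolling their own.

```python
# Naive greedy pairwise (all-pairs) test selection sketch.
# Instead of running every combination, pick a small set of combinations
# that still covers every pair of parameter values at least once.
from itertools import combinations, product

parameters = {                       # illustrative parameters for a hypothetical component
    "format": ["json", "xml"],
    "auth": ["none", "basic", "token"],
    "page_size": [1, 50, 1000],
}

names = list(parameters)
all_cases = list(product(*parameters.values()))          # 2 * 3 * 3 = 18 combinations


def pairs(case):
    """All (parameter, value) pairs covered by one combination."""
    labeled = list(zip(names, case))
    return set(combinations(labeled, 2))


uncovered = set().union(*(pairs(c) for c in all_cases))   # every pair that must be covered
selected = []
while uncovered:
    best = max(all_cases, key=lambda c: len(pairs(c) & uncovered))
    selected.append(best)
    uncovered -= pairs(best)

print(f"{len(all_cases)} exhaustive cases reduced to {len(selected)} pairwise cases:")
for case in selected:
    print(dict(zip(names, case)))
```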

Measures of success

  • % of requirements automated vs. intended to automate
  • % of requirements passing by initiative, epic, or story
  • % of requirements passing by priority
  • # of bugs found after merge to master that could have been caught by a component test

Maturity Levels

  • None: Component testing is not done. The architecture may be largely monolithic without separate layers or components.
  • Initial: Basic testing of components is done. Often this stresses the positive cases but may miss negative or boundary testing.
  • Definition: Epics and stories regularly have acceptance criteria that lead to tests. Usually, changes to components lead to automated component tests.
  • Integration: Functional changes from epics and stories are regularly validated by tests of the changed components. Major parts of the product that are not part of the test are mocked or stubbed.
  • Management and measurement: All components have interfaces defined separately in an interface definition language, like Swagger, and versioning identifies interface compatibility (not component functionality) using Semver.
  • Optimization: Compatibility tests and mocks are published to allow customers to use them to validate their own development. Contributions back from customers are handled.

API Testing

Purposes

  1. Proves an API producer change hasn’t broken any API consumers
  2. Proves a consumer implementation will work with different producers
  3. Allows teams to work in parallel
  4. Tests the primary functionality of a service or sub-system, especially with microservices

Who defines the test?

  • Tests can be written by the developer of the API or a senior dev or architect who’s defining the behavior.

Who codes the test automation?

  • Depending on how you organize, this can be done by the development team producing the API, or by dedicated quality engineers who are part of a dev team or in a separate team.
  • Mocks are developed by the producer team (to start)
  • Tests are automated by the producer team but can be extended by any consumer team

What to test?

Test API definition changes (linting): any change in a component where the behavior of an external interface changes. Examples: renaming functions or parameters, adding or removing parameters, changing the return value type.
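
One lightweight way to catch that class of break in CI is to diff the current API definition against the previously published one. A minimal sketch, assuming OpenAPI-flavored definitions reduced to inline Python dicts (real pipelines normally use a dedicated spec-diff or linting tool):

```python
# Minimal sketch: detect breaking changes between two versions of an API definition.
# The specs are illustrative dicts; a real pipeline would load them from the
# repository and from the previously released version.

old_spec = {
    "/orders": {"get": {"params": {"status", "limit"}, "returns": "array"}},
}
new_spec = {
    "/orders": {"get": {"params": {"state", "limit"}, "returns": "array"}},  # 'status' renamed
}


def breaking_changes(old, new):
    problems = []
    for path, ops in old.items():
        if path not in new:
            problems.append(f"removed path {path}")
            continue
        for op, contract in ops.items():
            if op not in new[path]:
                problems.append(f"removed operation {op} {path}")
                continue
            missing = contract["params"] - new[path][op]["params"]
            if missing:
                problems.append(f"{op} {path}: removed/renamed params {sorted(missing)}")
            if contract["returns"] != new[path][op]["returns"]:
                problems.append(f"{op} {path}: return type changed")
    return problems


for problem in breaking_changes(old_spec, new_spec):
    print("BREAKING:", problem)   # would fail the build in a real linting step
```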

Test serialization (on-the-wire) compatibility: inserting a new function into the middle of an API (rather than adding it at the end) breaks serialization, as does changing the data types of parameters.

Test special parameters (semantics): the semantic rules of an API contract should be validated wherever your service depends on the behavior.

Example: your service calls a function on another service that returns a string of key-value pairs. Write tests that cause the other API to return one to many key-value pairs so you can be sure you will get the information your service expects. Similarly, if you are passing key-value pairs to another service, try passing a large number of them in a string to validate that the receiving service accepts what you are sending.
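
As a hedged sketch, that key-value scenario could look like the following parametrized test. The wire format, parse_tags, and fake_producer are all invented stand-ins for the real consumer and producer:

```python
# Sketch of a semantic contract test for the key-value example above.
# parse_tags is the consumer's parsing logic; fake_producer stands in for the
# other service and is driven to return one to many key-value pairs.
import pytest


def parse_tags(raw):
    """Consumer-side parsing of a 'k1=v1;k2=v2' style string (illustrative format)."""
    return dict(pair.split("=", 1) for pair in raw.split(";") if pair)


def fake_producer(count):
    """Stand-in for the other service: returns `count` key-value pairs on the wire."""
    return ";".join(f"key{i}=value{i}" for i in range(count))


@pytest.mark.parametrize("count", [1, 2, 50, 5000])
def test_consumer_handles_one_to_many_pairs(count):
    raw = fake_producer(count)
    tags = parse_tags(raw)
    assert len(tags) == count
    assert tags["key0"] == "value0"
```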

Other examples: Changing the range of a parameter or the conditions under which an error is returned.

Best Practices

  • TDD: Create the API definition and tests before the code behind the API.
  • Use an API standard like Swagger/OpenAPI, gRPC proto, or RAML for the API definition.
  • Every time you create an API definition, create a mock for the API and a series of tests that can be used against the mock as well as against the producer of the API (sketched below).
  • Use Semver and version immutability.
  • Put API definitions, mocks, and tests together in one place.
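
One way to keep the "same tests against the mock and the producer" practice honest is to validate both against the same schema taken from the API definition. A minimal sketch using the jsonschema library; the endpoint, schema, and mock are invented for illustration:

```python
# Sketch: the same contract test runs against both the mock and the real producer.
# The schema would normally come from the published API definition (OpenAPI, etc.);
# here it is inlined and the producer URL is a placeholder.
import pytest
from jsonschema import validate  # pip install jsonschema

ORDER_SCHEMA = {
    "type": "object",
    "required": ["id", "status"],
    "properties": {
        "id": {"type": "string"},
        "status": {"type": "string", "enum": ["placed", "shipped", "cancelled"]},
    },
}


def mock_get_order(order_id):
    """Shared mock published alongside the API definition (illustrative)."""
    return {"id": order_id, "status": "placed"}


def real_get_order(order_id):
    """Call the real producer; placeholder implementation for this sketch."""
    import requests
    return requests.get(f"https://example.invalid/orders/{order_id}").json()


@pytest.mark.parametrize("get_order", [mock_get_order], ids=["mock"])
def test_order_response_matches_contract(get_order):
    # Consumer teams can add real_get_order to the parametrize list once the
    # producer is deployed; the same assertions then validate both.
    response = get_order("o-1")
    validate(instance=response, schema=ORDER_SCHEMA)
```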

Measures of success

  • Endpoint coverage can be measured, but if it is, you may need to count unit and API tests together, since many of the APIs, parameter variations, and return codes are exercised by unit tests.
  • % end-to-end failures – Good API and component tests should minimize the number of failures that make it to end-to-end testing.
  • Average time to get a change accepted to master. API changes can be tested early on development branches, both on developer machines and in the CI/CD pipeline, and these tests should run in under an hour.
  • Reported internal and external bugs – Customers can test the API with their CI tools to verify the compatibility of the API with their own code.

Maturity Levels

  • None: No interfaces are defined except UI and possibly connections to partner APIs.
  • Initial: At least one key interface is defined and has some automated testing.
  • Definition: Interfaces are versioned for compatibility and tested with multiple parameters and multiple return values. Major services that are not tested may be mocked.
  • Integration: Major services all have interfaces defined separately in an interface definition language, like Swagger, and versioning identifies interface compatibility (not component functionality) using Semver. Each service has a shared mock that allows teams to work separately from it, as well as a set of compatibility tests. Negative return values (e.g., variations on 4xx returns) are tested explicitly.
  • Management and measurement: All components have interfaces defined separately, along with mocks and tests, as at the Integration level above.
  • Optimization: Interface definitions, mocks, and tests are published to allow customers to use them to validate their own development. Contributions back from customers are handled.

Experiences

If you have one or more legacy components with little to no automated component testing, it’s good to start by implementing some API tests around the components to define what they do before you make changes to them.

One company carefully designed their internal APIs first before defining the components that would implement them. It was then straightforward for multiple teams to develop in parallel so long as an API separated them. Initially, they needed a week or two of integration every six weeks to make the system work, until they got a full set of API tests and mocks in place. Once good API-level tests were in place, the integration costs were much smaller.

Another company developed components first and didn’t define internal APIs, only customer-facing ones. This made for constant issues that weren’t found in their unit or integration tests and only appeared in the end-to-end tests. Teams often broke each other and, because a passing end-to-end test was needed for a deployable build, an updated build was often delayed for days. Internal APIs and component tests were needed to make parallel development less painful.

I’ve seen two companies where the component testing was done through the UI instead of an API. This is possible, but the resulting tests are almost always slow, unreliable, and expensive to maintain. See my blog on Less UI Automation, More Quality. I recommend automating component tests against an API, instead.

Copyright © 2019, All Rights Reserved by Bill Hodghead, shared under creative commons license 4.0