The Testing Pyramid is a representation of the types of software testing and how they relate to each other. The further up the pyramid you go, the more certain you are that the thing under test is performing as expected; in exchange, your tests are also more expensive to design, run and maintain.
This representation is particularly useful because:
It captures the fact that it generally isn’t worth running the higher level tests if a lower level test has failed, since you can usually conclude that one or more higher level tests will fail.
It captures the fact that it generally isn’t possible to build, or economically run, the higher level tests without first having built the lower level tests.
Unit Testing
A Unit Test
tests that a single function has the desired behavior in a given scenario.
Unit Tests are cheap to write, cheap to run, and quickly point out what is and isn’t working at a purely technical level. Unfortunately, they’re also incredibly granular and so it can be hard to determine, based on a failing unit test alone, what exactly is broken from a feature or product perspective.
Unit Tests, on a per-test basis, typically take seconds to minutes to write, run in microseconds and often have no meaningful maintenance costs.
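As a minimal sketch, a Unit Test written with pytest might look like the following. The pricing module and its calculate_discount function are assumptions made for illustration, not part of any particular codebase:

```python
# Minimal Unit Test sketch (pytest), assuming a hypothetical
# calculate_discount(price, percent) function in a pricing module.
import pytest

from pricing import calculate_discount


def test_discount_applies_percentage():
    # One function, one scenario, one assertion.
    assert calculate_discount(price=100.0, percent=10) == 90.0


def test_discount_rejects_negative_percent():
    # The same function under a different scenario: invalid input.
    with pytest.raises(ValueError):
        calculate_discount(price=100.0, percent=-5)
```

Each test names the scenario it covers, so a failure points directly at the function and input combination that broke.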
Component Testing
A Component Test
tests multiple functions together and proves that they have the desired behavior in a given scenario.
Component Tests add breadth on top of Unit Tests and are often written and run with the same tooling. Typically Component Tests are only slightly more expensive to write, run and debug than Unit Tests. While they’re less granular than Unit Tests, they still do not generally explain what is or isn’t working at a feature or product level.
Component Tests, on a per-test basis, typically take minutes to write, run in microseconds to seconds and often have trivial maintenance costs.
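A Component Test sketch in the same pytest style, assuming hypothetical build_cart, add_item and checkout_total helpers from a cart module that internally composes functions like the pricing one above:

```python
# Component Test sketch (pytest): exercises several functions together.
# build_cart, add_item and checkout_total are assumed helpers from a
# hypothetical cart module; the names are illustrative only.
import pytest

from cart import build_cart, add_item, checkout_total


def test_checkout_total_combines_items_discount_and_tax():
    cart = build_cart(discount_percent=10, tax_rate=0.08)
    add_item(cart, sku="WIDGET", unit_price=50.0, quantity=2)

    # The assertion covers the combined behavior of several functions
    # (item totalling, discounting, tax), not any single one in isolation.
    assert checkout_total(cart) == pytest.approx(97.20)
```

The test still runs entirely in-process, which is why its cost profile stays close to that of a Unit Test.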
System Testing (aka Integration Testing)
A System Test
takes entire components, orchestrates them together under a given scenario, and proves that, together, they have the desired behavior.
Typically a System Test will exercise not just the business logic, but integrate multiple discrete pieces of infrastructure such as the application, its database, and its server. This allows a System Test to prove not only that the business logic behaves as expected, but that all the pieces required to actually run that business logic in production work together to accomplish the desired behavior.
System Tests generally cannot explain what is or isn’t working at a feature or product level, but can be thought of as the last “Technical Testing” in the Testing Pyramid.
System Tests are typically much more complex to write than Component Tests, as they require multiple tools working together to set up and tear down.
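A System Test sketch, assuming a hypothetical create_app factory for a Flask-style application backed by a throwaway SQLite database. The specifics are assumptions; the point is that the application, its routing and a real database are exercised together rather than mocked:

```python
# System Test sketch (pytest): wires the application to a real, temporary
# database instead of mocks. create_app and the /orders endpoints are
# assumed for illustration; your stack's equivalents will differ.
import pytest

from myapp import create_app


@pytest.fixture()
def client(tmp_path):
    # Point the app at a throwaway SQLite file so the test owns its state.
    app = create_app(database_url=f"sqlite:///{tmp_path / 'test.db'}")
    with app.test_client() as client:
        yield client


def test_created_order_can_be_read_back(client):
    created = client.post("/orders", json={"sku": "WIDGET", "quantity": 2})
    assert created.status_code == 201

    order_id = created.get_json()["id"]
    fetched = client.get(f"/orders/{order_id}")

    # Proves the application, its routing and its database work together.
    assert fetched.status_code == 200
    assert fetched.get_json()["quantity"] == 2
```

The extra cost shows up in the fixture: something has to create, configure and dispose of the real infrastructure around the business logic.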
End to End Testing
An End to End Test
tests a single user journey from start to finish (end to end) across an entire, fully integrated system.
End to End Tests often require that entire environments be created specifically for testing and as such cost substantially more than System Tests in both dollars and clock time to design, build and maintain.
End to End Tests are capable of answering feature and product level questions such as:
Does Login work?
Can new users belonging to client Omega successfully register from the email we send?
To make End to End Tests economical, your team will need to build out the infrastructure required to spin up and provision an entire new environment. You can expect this to take dozens or hundreds of hours before you can build the first scalable test. Once this infrastructure is in place, you can expect each individual End to End Test to take minutes to hours to create and seconds to minutes to execute. The reliability and maintainability of your tests will be determined by the developer experience of the infrastructure used to build and execute them.
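An End to End Test sketch for the “Does Login work?” question, using Playwright’s Python API against a dedicated test environment. The URL, selectors and credentials are assumptions that a real suite would load from fixtures or configuration:

```python
# End to End Test sketch (Playwright, sync API): drives a real browser
# through a full user journey against a dedicated test environment.
# BASE_URL, the selectors and the credentials below are illustrative.
from playwright.sync_api import sync_playwright

BASE_URL = "https://staging.example.com"  # hypothetical test environment


def test_existing_user_can_log_in():
    with sync_playwright() as p:
        browser = p.chromium.launch()
        page = browser.new_page()

        # Walk the journey exactly as a user would: load, fill, submit.
        page.goto(f"{BASE_URL}/login")
        page.fill("#email", "test-user@example.com")
        page.fill("#password", "a-known-test-password")
        page.click("button[type=submit]")

        # Landing on the dashboard answers "Does Login work?" directly.
        page.wait_for_url(f"{BASE_URL}/dashboard")
        assert page.locator("h1").inner_text() == "Dashboard"

        browser.close()
```

Everything the test depends on, from the environment behind BASE_URL to the seeded test user, is infrastructure your team has to provision and keep healthy, which is where most of the cost of End to End Testing lives.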
Acceptance Testing
An Acceptance Test
is exactly the same as an End to End Test, except it isn’t automatable. The entire point of an Acceptance Test is to have a human experience and quality-audit the user journey as if they were the user themselves.
When the Product Manager responsible for a given Product runs an Acceptance Test against a user journey in their Product, it is the only time you can truly say whether that user journey is “Correct”. As such, having your PM write and execute a handful of Acceptance Tests per feature is incredibly valuable.
Given the reliance on a human, you can think of Acceptance Testing as the single most expensive type of testing. Typically Acceptance Tests can leverage non-production environments and be executed manually by the PM. You can expect each Acceptance Test to take minutes to hours to write and minutes to hours to execute. Depending on the maturity of your systems, a PM may require support from an Engineer to provision or otherwise execute the test, which will roughly double the execution cost.
Note: Developers can and should run acceptance tests on their work as part of contributing it to the code base.