This playbook covers our best practices and gives you insight into how we deliver successful long-term software projects.
We write a lot of automated tests and maintain very high test coverage, because it is one of the most important factors in the success of large long-term projects. Such projects are simply too large to test manually in a reasonable time. Developers need fast feedback on their code changes, not feedback that arrives after the days or weeks it takes a manual tester to verify a new build. That’s why we automatically build every commit to provide fast feedback.
Backend unit tests make up the largest portion of the test suite because they are the best way to test isolated units thoroughly. These tests also run faster than higher-level tests, so we can afford to have many unit tests while still getting fast feedback on code changes. We also test extensively at other levels, including automated acceptance testing. Whenever a bug is found, we write a new test that reproduces it, which helps us avoid regressions.
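As an illustration of a bug-reproducing regression test, here is a minimal Python sketch. The `round_price` helper and the bug it guards against are hypothetical, not taken from our code base:

```python
# Hypothetical scenario: a bug report said a price like 2.675 was
# rounded down to 2.67 instead of up to 2.68 (a binary-float artifact
# of naive round()). The fix uses Decimal with explicit half-up rounding.
from decimal import Decimal, ROUND_HALF_UP

def round_price(value: str) -> Decimal:
    """Round a price to two decimal places, rounding halves up."""
    return Decimal(value).quantize(Decimal("0.01"), rounding=ROUND_HALF_UP)

def test_regression_price_rounding():
    # Reproduces the reported bug; it stays in the suite permanently
    # so this regression can never silently return.
    assert round_price("2.675") == Decimal("2.68")
```

The test is named after the behavior, not the ticket, so it keeps documenting the expected result even after the bug report is forgotten.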
Our test suites contain thousands of automated tests. Running them all on a developer machine would be too slow, so we have a shared cluster. Everyone on the team can run the complete test suite on this cluster and get results for their changes in a few minutes. It’s a very convenient way to test local changes before committing them.
All tests must pass both on the testing cluster and on the continuous integration server before a branch is merged to the main branch. The continuous integration server builds feature branches, so all build issues are resolved before the merge.
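As a sketch of this setup, a minimal GitLab CI pipeline that builds and tests every branch could look like the fragment below. The job names and script paths are assumptions for illustration, not our actual configuration:

```yaml
# Hypothetical .gitlab-ci.yml fragment: every feature branch is built
# and tested; a branch can only be merged when the pipeline is green.
stages:
  - build
  - test

build:
  stage: build
  script:
    - ./scripts/build.sh        # assumed build entry point

test:
  stage: test
  script:
    - ./scripts/run-tests.sh    # assumed test-suite runner
```

Combined with a merge-request setting that requires a passing pipeline, this guarantees no red build ever reaches the main branch.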
High code coverage is not enough. A test suite may verify only a small part of a program’s behavior even when coverage tools report full coverage. One of the best ways to verify test quality is Mutation Testing. This technique introduces small modifications to the program, producing a large number of slightly mutated code bases, and runs the test suite against each of them. If a mutant survives, that is, the tests still pass, it indicates low-quality tests.
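The idea can be shown in a few lines of Python. The function, the mutant, and the tests below are hypothetical; in practice a mutation-testing tool generates the mutants automatically:

```python
# Original unit under test.
def is_adult(age: int) -> bool:
    return age >= 18

# A typical mutant: the tool flips ">=" to ">".
def is_adult_mutant(age: int) -> bool:
    return age > 18

# A weak test never exercises the boundary, so it passes for BOTH
# versions: the mutant survives, revealing a low-quality test.
def weak_test(fn) -> bool:
    return fn(30) is True and fn(5) is False

# A strong test checks the boundary value 18 and kills the mutant.
def strong_test(fn) -> bool:
    return fn(18) is True

assert weak_test(is_adult) and weak_test(is_adult_mutant)
assert strong_test(is_adult) and not strong_test(is_adult_mutant)
```

Note that both tests yield 100% line coverage of `is_adult`; only mutation analysis exposes the difference in their quality.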
Mutation Testing is very resource-intensive; fully automated Mutation Testing consumes too many resources to run on every Merge Request. To get the most out of it, we use a semi-automated process: our manual tester creates an optimal Mutation Testing scenario for each Merge Request, executes it, and reports feedback to the Merge Request authors via GitLab. This lets us improve the quality of our test suites at a reasonable cost.
Automated testing alone does not guarantee the level of quality that our high-profile clients expect. Details such as element alignment are hard to catch with automated tests, and a human spots such imperfections much more easily than a computer: a tester quickly scans the UI and can tell at a glance whether everything looks right. Specifying an entire UI in automated tests, and then maintaining those tests for a fast-changing system, would be very hard.
The original developer is not the best person to find edge cases in the code they implemented, because developers tend to use the software the way they built it, and it’s hard to hit edge cases that way. A manual tester didn’t write the code and will most likely exercise it differently than its original developer, approaching it as a regular user rather than as the person who built it.
We have found that a combination of automated and manual testing is needed to deliver the high-quality software our clients demand, and we will continue to use both until something better is invented.