In the last couple of years the practice of testing has undergone more than superficial changes. We have turned our art into engineering, introduced process-models, come up with best-practices, and developed tools to support our daily work and make each test engineer more productive. Some tools target test execution. They aim to automate the repetitive steps that a tester would take to exercise functions through the user interface of a system in order to verify its functionality. I am sure you have all seen tools like Selenium, WebDriver, Eggplant or other proprietary solutions, and that you learned to love them.
On the downside, we observe problems when we employ these tools:
- Scripting your manual tests this way takes far longer than just executing them manually.
- The UI is one of the least stable interfaces of any system, so we can only start automating quite late in the development phase.
- Maintenance of the tests takes a significant amount of time.
- Execution is slow, and sometimes cumbersome.
- Tests become flaky.
- Tests break for the wrong reasons.
To each of these problems there is a standard rebuttal:
- It takes long to automate a test? Well, let's automate only the important tests, those that will be executed again and again in regression testing.
- Execution might be slow, but it is still faster than manual testing.
- Tests cannot break for the wrong reason: when they break, we have found a bug.
Most of these problems are rooted in the fact that we are just automating manual tests. By doing so we are not taking into account whether the added computational power, access to different interfaces, and faster execution speed should make us change the way we test systems.
Considering that a system exposes different interfaces to its environment—e.g., the user-interface, an interface between front-end and back-end, an interface to a data-store, and interfaces to other systems—it is obvious that we need to look at each and every interface and test it. More than that, we should not only take each interface into account but also avoid testing the same functionality in too many different places.
Let me introduce the example of a store-administration system which allows you to add items to the store, see the current inventory, and remove items. One straightforward manual test case for adding an item would be to go to the 'Add' dialogue, enter a new item with quantity 1, and then go to the 'Display' dialogue to check that it is there. To automate this test case you would script exactly these steps through the user-interface.
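Translated one-to-one, the automated test mirrors the manual steps exactly. Here is a minimal sketch in Python; the `StoreUI` class is a hypothetical stand-in for whatever UI-automation driver (Selenium, WebDriver, etc.) you would actually use, so the step-by-step structure of the test stays visible:

```python
# StoreUI is a fake, hypothetical stand-in for a real UI driver.
# It models the two dialogues of the store-administration system
# over shared in-memory state.

class StoreUI:
    """Fake store-administration UI: two 'dialogues' over shared state."""

    def __init__(self):
        self._inventory = {}
        self._dialogue = None

    # --- 'Add' dialogue ---
    def open_add_dialogue(self):
        self._dialogue = "add"

    def enter_item(self, name, quantity):
        assert self._dialogue == "add", "must be in the 'Add' dialogue"
        self._inventory[name] = self._inventory.get(name, 0) + quantity

    # --- 'Display' dialogue ---
    def open_display_dialogue(self):
        self._dialogue = "display"

    def shown_quantity(self, name):
        assert self._dialogue == "display", "must be in the 'Display' dialogue"
        return self._inventory.get(name, 0)


def test_add_item_via_ui():
    """The manual test case, automated step by step through the UI."""
    ui = StoreUI()
    ui.open_add_dialogue()
    ui.enter_item("widget", 1)              # enter a new item, quantity 1
    ui.open_display_dialogue()
    assert ui.shown_quantity("widget") == 1  # check that it is there


test_add_item_via_ui()
```

Every step of this test goes through the UI, which is exactly where the problems listed above come from.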
Probably most of the problems I listed above will apply. One way to avoid them in the first place would have been to figure out how this system looks inside:
- Is there a database? If so, the verification should probably not be performed against the UI but against the database.
- Do we need to interface with a supplier? If so, how should this interaction look?
- Is the same functionality available via an API? If so, it should be tested through the API, and the UI should just be checked to interact with the API correctly.
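To make the database question concrete: if the store keeps its inventory in a relational database, the test can add an item through the API layer and verify directly against the data store, never touching the UI. A minimal sketch using SQLite; the `StoreAPI` class and the `items` schema are assumptions for illustration, not the system's real API:

```python
import sqlite3


class StoreAPI:
    """Hypothetical API layer of the store-administration system."""

    def __init__(self, conn):
        self.conn = conn
        conn.execute(
            "CREATE TABLE IF NOT EXISTS items "
            "(name TEXT PRIMARY KEY, quantity INTEGER)"
        )

    def add_item(self, name, quantity):
        if quantity < 1:
            raise ValueError("quantity must be at least 1")
        # Insert a new row, or add to the quantity of an existing item.
        self.conn.execute(
            "INSERT INTO items (name, quantity) VALUES (?, ?) "
            "ON CONFLICT(name) DO UPDATE SET quantity = quantity + excluded.quantity",
            (name, quantity),
        )


def test_add_item_persists():
    conn = sqlite3.connect(":memory:")
    api = StoreAPI(conn)
    api.add_item("widget", 1)
    # Verification against the data store, not against the UI:
    row = conn.execute(
        "SELECT quantity FROM items WHERE name = ?", ("widget",)
    ).fetchone()
    assert row == (1,)


test_add_item_persists()
```

The verification is now a single query against the most stable interface the system has, instead of a walk through two dialogues.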
Answering these questions and testing below the UI lets us:
- write many more tests through the API, e.g., to cover many boundary conditions,
- execute multiple threads of tests on the same machine, giving us a chance to spot race-conditions,
- start earlier with testing the system, as we can test each interface when it becomes 'quasi-stable',
- make maintenance of tests and debugging easier, as the tests break closer to the source of the problem,
- require fewer machine resources, and still execute in reasonable time.
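The boundary-condition point is worth making concrete: once the functionality is reachable through an API, each boundary case costs one line in a table instead of a full walk through the UI. A sketch, with a hypothetical in-memory `add_item` standing in for the real API:

```python
# add_item is a hypothetical in-memory version of the store API,
# used here only to show table-driven boundary testing.

def add_item(inventory, name, quantity):
    if not name:
        raise ValueError("name must not be empty")
    if quantity < 1:
        raise ValueError("quantity must be at least 1")
    inventory[name] = inventory.get(name, 0) + quantity


# Boundary cases, table-driven: (name, quantity, should_succeed).
CASES = [
    ("widget", 1, True),       # smallest valid quantity
    ("widget", 10**6, True),   # very large quantity
    ("widget", 0, False),      # just below the minimum
    ("widget", -1, False),     # negative quantity
    ("", 1, False),            # empty item name
]


def run_boundary_cases():
    """Run every case against a fresh inventory; True means 'behaved as expected'."""
    results = []
    for name, qty, should_succeed in CASES:
        inventory = {}
        try:
            add_item(inventory, name, qty)
            results.append(should_succeed and inventory[name] == qty)
        except ValueError:
            results.append(not should_succeed)
    return results


assert all(run_boundary_cases())
```

Adding a sixth boundary case is one more line in `CASES`; adding it to a UI-level suite would be another slow, fragile script.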
None of this means we should abandon our end-to-end tests. They are valuable, and no system can be considered tested without them. Again, the question we need to ask ourselves is the right ratio between full end-to-end tests and smaller integration tests.
Unfortunately, there is no free lunch. In order to change the style of test-automation we will also need to change our approach to testing. Successful test-automation needs to:
- start early in the development cycle,
- take the internal structure of the system into account,
- have a feedback loop to developers to influence the system-design.
In previous projects we were able to achieve this by
- removing any spatial separation between the test engineers and the development engineers. Sitting at the next desk is probably the best way to promote information exchange,
- using the same tools and methods as the developers,
- getting involved in daily stand-ups and design-discussions.
To summarize, I have found that a successful automation project needs:
- to take the internal details and exposed interfaces of the system under test into account,
- to have many fast tests for each interface (including the UI),
- to verify the functionality at the lowest possible level,
- to have a set of end-to-end tests,
- to start at the same time as development,
- to overcome traditional boundaries between development and testing (spatial, organizational and process boundaries), and
- to use the same tools as the development team.