To be clear, except for the "API Infrastructure Service", every piece in the final diagram is part of the testing framework being described here? That seems like an impressive amount of frameworking, but you are solving, rather elegantly, a fairly complicated problem set.

In terms of the framework itself, would you be able to estimate how the work breaks down to build and maintain it? Is it 50% Test API, 25% Test Library, etc.?

Did you have to build the whole thing before you started writing tests, or were you able to write some tests with only some pieces in place, and then iterate toward completion, evolving the tests along the way?

Sorry to badger you, but given the generally vast resources, both in machines and people, available at Google, I'm very curious about the process through which something like this would evolve. Thanks!
Hi Alec, I just updated the final diagram with improved color coding and labels. The boundaries of the SUT, test case, and test infrastructure components should be clearer now. The work broke down approximately as: 80% Test API, 15% Abstraction Library, and 5% Adapter Server. We were able to iterate on the development, which is always a good thing. The first iteration had a few basic API features working, an adapter for one language and one platform, the client abstraction library, and a single test. This became the proof of concept. We were happy with the initial results, so we decided to proceed with the design.
This was an excellent read. Such an elegant solution to a complex problem. Thank you Anthony!
How are the test cases organized: in code or in some other format? And what about the output/logs?
Hi Li, The tests are organized in code with clear comments. At Google, we have an internal service that stores, queries, and provides a UI for looking over the results of all tests, including the log output.
Really cool. Thanks for sharing.
Do you simulate the APIs, for purposes of either functional or performance testing? In other words, are you able to make requests from the client library to a simulator, without accessing the live system? I'm curious to learn how you do that, if you do. Thanks.
In this case, it was a large end-to-end test, so nothing was simulated. Smaller tests (unit and integration) should use mocks and fakes.
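To illustrate the mock-based approach for smaller tests: the sketch below is a hypothetical example (not from the post; the `fetch_user` function and `api_client` names are invented) showing how a unit test can stand in a mock for a real API client, so no live system is needed.

```python
from unittest import mock

# Hypothetical client code under test: fetch_user calls an API client's
# get() method and extracts the user's name from the canned reply.
def fetch_user(api_client, user_id):
    response = api_client.get(f"/users/{user_id}")
    return response["name"]

# Mock: stands in for the real API client, returning a canned response
# and recording how it was called.
api_client = mock.Mock()
api_client.get.return_value = {"name": "Ada"}

assert fetch_user(api_client, 42) == "Ada"
api_client.get.assert_called_once_with("/users/42")
```

The mock both supplies the reply and lets the test verify the exact request the client code made, which is usually what a small unit test cares about.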
Thanks Anthony. We're looking to simulate APIs in order to test, for example, client code before the real API is ready. The simulator would examine the request and send an appropriate reply back to the client. This provides functional testing. We could also do performance testing on the client side by firing off huge numbers of requests, again without affecting the real server. It gets complicated when you consider the fact that the requests can be HTTP, REST, EJB, etc, and there are multiple ways of creating the simulators themselves (request/response, WSDL, ...). There are a number of vendor products that will do this, in a variety of ways, but I'm interested in learning how large corporations perform simulations. Google is about the best example I could think of, given their size and client API library. Do you know where I can get more information on these best practices? Thank you.
It really depends on the focus of your testing. Just client functional testing: fake the server. Full system functional/performance testing: real server. There is rarely good reason to load test a client in a client-server system, as clients represent a single node instance of the system.
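As a concrete sketch of "fake the server" for client functional testing: the example below (illustrative only, not Google's implementation) spins up a local HTTP server that examines the request and sends back a canned JSON reply, so client code can be exercised without touching the real system.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

# Fake server: inspects the request path and returns a canned JSON reply.
class FakeApiHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        body = json.dumps({"path": self.path, "status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

# Port 0 asks the OS for any free port; run the server on a daemon thread.
server = HTTPServer(("127.0.0.1", 0), FakeApiHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# "Client" request against the fake instead of the live system.
url = f"http://127.0.0.1:{server.server_port}/users/42"
with urllib.request.urlopen(url) as resp:
    reply = json.loads(resp.read())

server.shutdown()
```

After the request, `reply` holds the fake's canned response; the client code under test never knows it wasn't talking to the real service.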