Monday, August 20, 2012

Testing Google's New API Infrastructure

By Anthony Vallone

If you haven’t noticed, Google has been launching many public APIs recently. These APIs empower mobile app, desktop app, web service, and website developers by providing easy access to Google tools, services, and data. Over the past couple of years, we have invested heavily in building new infrastructure for our APIs. Before this infrastructure existed, our teams had to solve numerous technical challenges when releasing an API: scalability, authorization, quota, caching, billing, client libraries, translation from external REST requests to internal RPCs, and so on. The new infrastructure solves these problems generically and allows our teams to focus on their service. Automating the testing of these new APIs turned out to be quite a large problem. Our solution is somewhat unique within Google, and we hope you find it interesting.

System Under Test (SUT)

Let’s start with a simplified view of the SUT design:

A developer’s application uses a Google-supplied API Client Library to call Google API methods. The library connects to the API Infrastructure Service and sends the request, part of which identifies the particular API and version the client is using. The service knows about every Google API, because each API is defined by an API Configuration file created by the team that provides it. Configuration files declare API versions, methods, method parameters, and other API settings. Given an API request and this information about the API, the API Infrastructure Service can translate the request into Google’s internal RPC format and pass it to the correct API Provider Service. That service satisfies the request and returns the response to the developer’s app via the API Infrastructure Service and the API Client Library.
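The request path above can be sketched in a few lines of Python. Everything here is hypothetical (the config format, the API names, the function names are illustrative stand-ins, not Google's actual internals); the point is the translation step: look up the API's configuration, validate the request against it, and forward it as an internal RPC.

```python
# Hypothetical sketch of the request path. API_CONFIGS stands in for the
# API Configuration files that each API-providing team creates.
API_CONFIGS = {
    ("translate", "v2"): {
        "methods": {"translate": {"params": ["q", "target"]}},
        "backend": "translate-backend",
    },
}

def handle_rest_request(api, version, method, params):
    """The infrastructure service looks up the API config, validates the
    request against it, and forwards it to the right provider service."""
    config = API_CONFIGS.get((api, version))
    if config is None:
        raise KeyError(f"Unknown API: {api} {version}")
    allowed = config["methods"][method]["params"]
    # Keep only parameters the configuration declares for this method.
    rpc = {k: v for k, v in params.items() if k in allowed}
    return call_backend(config["backend"], method, rpc)

def call_backend(backend, method, rpc):
    # Stand-in for the internal RPC to the API Provider Service.
    return {"backend": backend, "method": method, "request": rpc}
```

In the real system the configuration also drives quota, auth, caching, and billing; this sketch only shows the routing and translation described in the paragraph above.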

Now, the Fun Part

As of this writing, we have released 10 language-specific client libraries and 35 public APIs built on this infrastructure, and each library needs to work on multiple platforms. Our test space therefore has three dimensions: API (35), language (10), and platform (varies by library). How are we going to test all the libraries on all the platforms against all the APIs when only two engineers on the team are dedicated to test automation?

Step 1: Create a Comprehensive API

Each API uses different features of the infrastructure, and we want to ensure that every feature works. Rather than use the APIs to test our infrastructure, we create a Test API that uses every feature. In some cases where API configuration options are mutually exclusive, we have to create API versions that are feature-specific. Of course, each API team still needs to do basic integration testing with the infrastructure, but they can assume that the infrastructure features that their API depends on are well tested by the infrastructure team.
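One way to picture the Test API idea: give each version of the Test API a declared feature set, and check that the union of those sets covers every infrastructure feature. The feature names and the two-version split below are invented for illustration; the real Test API and its feature list are not public.

```python
# Hypothetical Test API configuration. Mutually exclusive options (here,
# "oauth2" vs. "api_key" auth, an assumed example) get feature-specific
# versions of the Test API.
TEST_API_VERSIONS = {
    "v_default": {"features": {"quota", "caching", "oauth2"}},
    "v_api_key": {"features": {"quota", "caching", "api_key"}},
}

# The full (assumed) feature list of the infrastructure.
ALL_FEATURES = {"quota", "caching", "oauth2", "api_key"}

def uncovered_features():
    """Return infrastructure features not exercised by any Test API version."""
    covered = set().union(*(v["features"] for v in TEST_API_VERSIONS.values()))
    return ALL_FEATURES - covered
```

A check like `uncovered_features()` makes the coverage goal mechanical: if a new infrastructure feature is added without extending the Test API, the gap is immediately visible.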

Step 2: Client Abstraction Layer in the Tests

We want to avoid creating library-specific tests, because this would lead to mass duplication of test logic. The obvious solution is to create a test library to be used by all tests as an abstraction layer hiding the various libraries and platforms. This allows us to define tests that don’t care about library or platform.

Step 3: Adapter Servers

When a test library makes an API call, it should be able to use any language and platform. We can solve this by setting up servers on each of our target platforms. For each target language, create a language-specific server. These servers receive requests from test clients. The servers need only translate test client requests into actual library calls and return the response to the caller. The code for these servers is quite simple to create and maintain.
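The core of such an adapter server is a small translation function. The wire format (JSON over some transport) and the library call below are assumptions for the sake of a runnable sketch; each real adapter would decode the test client's request and invoke its language's actual client library.

```python
# Hypothetical adapter-server core: decode a test-client request,
# invoke the (here, faked) client library, and return the response.
import json

def fake_library_call(method, params):
    # Stand-in for a real language-specific client library call.
    return {"echoed": params, "method": method}

def handle_adapter_request(raw: str) -> str:
    """Translate a serialized test-client request into a library call."""
    req = json.loads(raw)
    resp = fake_library_call(req["method"], req["params"])
    return json.dumps(resp)
```

Because the adapter only does this mechanical translation, porting it to a new language or platform stays cheap, which matches the post's claim that these servers are simple to create and maintain.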

Step 4: Iterate

Now, we have all the pieces in place. When we run our tests, they are configured to run over all supported languages and platforms against the Test API.
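The iteration over the matrix might look like the sketch below. The languages and platforms listed are illustrative examples, not the actual supported set; the shape is the point: each test runs once per (language, platform) pair, with the abstraction layer and adapter servers hiding the differences.

```python
# Hypothetical language/platform matrix. Platforms vary per language,
# as the post notes ("varies by lib").
LANGUAGES = ["java", "python", "javascript"]
PLATFORMS = {
    "java": ["linux", "android"],
    "python": ["linux", "mac"],
    "javascript": ["chrome", "firefox"],
}

def test_matrix():
    """Yield every (language, platform) pair a test should run against."""
    for lang in LANGUAGES:
        for platform in PLATFORMS[lang]:
            yield (lang, platform)
```

With this in place, adding a language or platform is a one-line change to the matrix, and every existing test automatically picks it up.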

Test Nirvana Achieved

We have a suite of straightforward tests that focus on infrastructure features. The tests run quickly and reliably, and they cover all of our supported features, platforms, and libraries. When a feature is added to the API infrastructure, we only need to create one new test, update each adapter server to handle the new call type, and add the feature to the Test API.


  1. To be clear, except for the "API Infrastructure Service", every piece in the final diagram is part of the testing framework being described here?
    That seems like an impressive amount of frameworking, but you are solving, rather elegantly, a fairly complicated problem set.
    In terms of the framework itself, would you be able to estimate how the work breaks down to build and maintain it? Is it 50% Test API, 25% Test Library, etc?
    Did you have to build the whole thing before you started writing tests, or were you able to write some tests with only some pieces in place, and then iterate towards completion, evolving the tests along the way?
    Sorry to badger you, but given the generally vast resources, both in machines and people, available at Google, I'm very curious about the process through which something like this would evolve.

    1. Hi Alec, I just updated the final diagram with improved color coding and labels. The boundaries of the SUT, test case, and test infrastructure components should be more clear now. The amount of work was approximately: 80% Test API, 15% Abstraction Library, and 5% Adapter Server. We were able to iterate the development - always a good thing. The first iteration had a few basic API features working, an adapter for one language and one platform, the client abstraction library, and a single test. This became the proof of concept. We were happy with the initial results, so we decided to proceed with the design.

  2. This was an excellent read. Such an elegant solution to a complex problem. Thank you Anthony!

  3. How are the test cases organized, in code or some other format? And what about the output/logs?

    1. Hi Li, The tests are organized in code with clear comments. At Google, we have an internal service that stores, queries, and provides a UI for looking over the results of all tests, including the log output.

