Wednesday, April 01, 2015

Quantum Quality

UPDATE: This was an April Fools' joke, but we genuinely wish we could have realized this idea, and we look forward to the day it is worked out and becomes a reality.

by Kevin Graney

Here at Google we have a long history of capitalizing on the latest research and technology to improve the quality of our software. Over our past 16+ years as a company, what started with some humble unit tests has grown into a massive operation. As our software complexity increased, ever larger and more complex tests were dreamed up by our Software Engineers in Test (SETs).

What we have come to realize is that our love of testing is a double-edged sword. On the one hand, large-scale testing keeps us honest and gives us confidence. It ensures our products remain reliable, our users' data is kept safe, and our engineers are able to work productively without fear of breaking things. On the other hand, it's expensive in both engineer and machine time. Our SETs have been working tirelessly to reduce the expense and latency of software tests at Google, while continuing to increase their quality.

Today, we're excited to reveal how Google is tackling this challenge. In collaboration with the Quantum AI Lab, SETs at Google have been busy revolutionizing how software is tested. The theory is relatively simple: bits in a traditional computer are either zero or one, but bits in a quantum computer can be both one and zero at the same time. This is known as superposition, and the classic example is Schrödinger's cat. Through some clever math and cutting-edge electrical engineering, researchers at Google have figured out how to utilize superposition to vastly improve the quality of our software testing and the speed at which our tests run.


Figure 1: Some qubits inside a Google quantum device.

With superposition, tests at Google are now able to simultaneously model every possible state of the application under test. The state of the application can be thought of as an n-bit sequential memory buffer, consistent with the traditional von Neumann architecture of computing. Because each bit under superposition is simultaneously a 0 and a 1, these tests can simulate 2^n different application states at any given instant in time in O(n) space. Each of these application states can be mutated by application logic to another state in constant time using quantum algorithms developed by Google researchers. These two properties together allow us to build a state transition graph of the application under test that shows every possible application state and all possible transitions to other application states. Using traditional computing methods this problem has intractable time complexity, but after leveraging superposition and our quantum algorithms it becomes relatively fast and cheap.

Figure 2: The application state graph for a demonstrative 3-bit application. If the start state is 001, then 000, 110, 111, and 011 are all unreachable states. States 010 and 100 both result in deadlock.

Once we have the state transition graph for the application under test, testing it becomes almost trivial. Given the initial startup state of the application, i.e. the executable bits of the application stored on disk, we can find from the application's state transition graph all reachable states. Assertions that ensure proper behavior are then written against the reachable subset of the transition graph. This paradigm of test writing allows both Google's security engineers and software engineers to work more productively. A security engineer can write a test, for example, that asserts "no executable memory regions become mutated in any reachable state". This one test effectively eliminates the potential for security flaws that result from memory safety violations. A test engineer can write higher level assertions using graph traversal methods that ensure data integrity is maintained across a subset of application state transitions. Tests of this nature can detect data corruption bugs.
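Hardware aside, the paradigm described here (enumerate every reachable state, then assert invariants over all of them) can be sketched classically for toy state spaces. All names below are illustrative, not from any Google tool, and on a classical machine this enumeration of course blows up exponentially in the number of state bits:

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.List;
import java.util.Set;
import java.util.function.Function;
import java.util.function.Predicate;

public class ReachabilitySketch {
  // Breadth-first search over the application's state transition graph,
  // starting from the initial startup state.
  static Set<Integer> reachableStates(int start, Function<Integer, List<Integer>> transitions) {
    Set<Integer> seen = new HashSet<>();
    Deque<Integer> frontier = new ArrayDeque<>();
    frontier.add(start);
    while (!frontier.isEmpty()) {
      int state = frontier.remove();
      if (seen.add(state)) {
        frontier.addAll(transitions.apply(state));
      }
    }
    return seen;
  }

  // Assert an invariant over every reachable state, e.g. a memory-safety property.
  static void assertOnAllReachable(Set<Integer> states, Predicate<Integer> invariant) {
    for (int s : states) {
      if (!invariant.test(s)) {
        throw new AssertionError("Invariant violated in state " + s);
      }
    }
  }
}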

We're excited about the work our team has done so far to push the envelope in the field of quantum software quality. We're just getting started, but based on early dogfood results among Googlers we believe the potential of this work is huge. Stay tuned!


Friday, March 20, 2015

Android UI Automated Testing

by Mona El Mahdy

Overview

This post reviews four strategies for Android UI testing with the goal of creating UI tests that are fast, reliable, and easy to debug.

Before we begin, let’s not forget an important rule: whatever can be unit tested should be unit tested. Robolectric and Gradle’s unit test support are great examples of unit testing tools for Android. UI tests, on the other hand, are used to verify that your application returns the correct UI output in response to a sequence of user actions on a device. Espresso is a great framework for running UI actions and verifications in the same process. For more details on the Espresso and UI Automator tools, please see: test support libraries.
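For instance, a minimal Espresso interaction might look like the sketch below; the view ID and text are placeholders, not from an actual app:

import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.action.ViewActions.click;
import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.matcher.ViewMatchers.isDisplayed;
import static android.support.test.espresso.matcher.ViewMatchers.withId;
import static android.support.test.espresso.matcher.ViewMatchers.withText;

public void testGreetingIsShownAfterClick() {
  onView(withId(R.id.say_hello_button)).perform(click());   // user action
  onView(withText("Hello!")).check(matches(isDisplayed())); // UI verification
}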

The Google+ team has performed many iterations of UI testing. Below we discuss the lessons learned from each UI testing strategy. Stay tuned for more posts with further details and code samples.

Strategy 1: Using an End-To-End Test as a UI Test

Let’s start with some definitions. A UI test ensures that your application returns the correct UI output in response to a sequence of user actions on a device. An end-to-end (E2E) test brings up the full system of your app, including all backend servers and the client app. E2E tests guarantee that data is sent to the client app and that the entire system functions correctly.

Usually, in order to make the application UI functional, you need data from backend servers, so UI tests need to simulate that data, though not necessarily the backend servers themselves. In many cases UI tests are confused with E2E tests, because an E2E test is very similar to a manual test scenario. However, debugging and stabilizing E2E tests is very difficult due to many variables, like network flakiness, authentication against real servers, the size of your system, etc.


When you use UI tests as E2E tests, you face the following problems:
  • Very large and slow tests. 
  • High flakiness rate due to timeouts and memory issues. 
  • Hard to debug/investigate failures. 
  • Authentication issues (e.g., authenticating from automated tests is very tricky).

Let’s see how these problems can be fixed using the following strategies.

Strategy 2: Hermetic UI Testing using Fake Servers

In this strategy, you avoid network calls and external dependencies, but you need to provide your application with data that drives the UI. Update your application to communicate with a local server rather than an external one, and create a fake local server that provides data to your application. You then need a mechanism to generate the data needed by your application. This can be done using various approaches depending on your system design. One approach is to record server responses and replay them in your fake server.
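One hedged way to implement the record-and-replay approach is with a fake HTTP server such as OkHttp’s MockWebServer; the endpoint path and response body below are made up for illustration:

import okhttp3.HttpUrl;
import okhttp3.mockwebserver.MockResponse;
import okhttp3.mockwebserver.MockWebServer;

public class FakeServerSetup {
  public static HttpUrl startFakeServer() throws Exception {
    MockWebServer server = new MockWebServer();
    // Replay a previously recorded backend response for the profile endpoint.
    server.enqueue(new MockResponse()
        .setResponseCode(200)
        .setBody("{\"name\": \"Test User\"}"));
    server.start();
    // The app under test is then pointed at this local URL instead of the real backend.
    return server.url("/api/profile");
  }
}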

Once you have hermetic UI tests talking to a local fake server, you should also have server hermetic tests. This way you split your E2E test into a server side test, a client side test, and an integration test to verify that the server and client are in sync (for more details on integration tests, see the backend testing section of this blog).

Now the client test flow looks like this: the UI test drives the app, and the app talks only to the local fake server.


While this approach drastically reduces test size and flakiness, you still need to maintain a separate fake server as well as your test. Debugging is still not easy, as you have two moving parts: the test and the local server. And although this approach largely improves test stability, the local server will still cause some flakes.

Let’s see how this could be improved...

Strategy 3: Dependency Injection Design for Apps

To remove the additional dependency of a fake server running on Android, you should use dependency injection in your application to swap real module implementations with fake ones. One example is Dagger, or you can create your own dependency injection mechanism if needed.

This will improve the testability of your app for both unit testing and UI testing, providing your tests with the ability to mock dependencies. In instrumentation testing, the test apk and the app under test are loaded in the same process, so the test code has runtime access to the app code. Not only that, but you can also use classpath override (the test classpath takes priority over that of the app under test) to override certain classes and inject test fakes there. For example, to make your tests hermetic, your app should support injection of the networking implementation. During testing, the test injects a fake networking implementation into your app, and this fake implementation provides seeded data instead of communicating with backend servers.
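A minimal sketch of such an injection seam, with all interface and class names assumed for illustration:

// The app depends on a small networking interface rather than a concrete stack.
interface UserFetcher {
  String fetchUserName(String userId);
}

// The production implementation would talk to real backend servers.
class HttpUserFetcher implements UserFetcher {
  @Override public String fetchUserName(String userId) {
    throw new UnsupportedOperationException("Real HTTP call omitted in this sketch");
  }
}

// The fake implementation injected during tests: seeded data, no network access.
class FakeUserFetcher implements UserFetcher {
  @Override public String fetchUserName(String userId) {
    return "Seeded User " + userId;
  }
}

With a framework like Dagger, the test binds FakeUserFetcher where production binds HttpUserFetcher; with the classpath override described above, the test apk can supply the overriding class directly.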


Strategy 4: Building Apps into Smaller Libraries

If you want to scale your app into many modules and views, and plan to add more features while maintaining stable and fast builds/tests, then you should build your app out of small components/libraries. Each library should have its own UI resources and dependency management. This strategy not only enables mocking dependencies of your libraries for hermetic testing, but also serves as an experimentation platform for various components of your application.

Once you have small components with dependency injection support, you can build a test app for each component.

The test apps bring up the actual UI of your libraries, provide the fake data needed, and mock dependencies. Espresso tests then run against these test apps. This enables testing smaller libraries in isolation.
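As a sketch, a test app for a hypothetical settings library could be a thin Application subclass that installs fakes at startup. Every name below is an assumption, not something from the original post:

import android.app.Application;

// Hypothetical injection seam exposed by the settings library.
interface SettingsStore {
  boolean notificationsEnabled();
}

// Seeded fake used in place of the real, backend-driven store.
class FakeSettingsStore implements SettingsStore {
  @Override public boolean notificationsEnabled() { return true; }
}

// Stand-in for the library's static injection point.
final class SettingsLibrary {
  private static SettingsStore store;
  static void setStore(SettingsStore s) { store = s; }
  static SettingsStore store() { return store; }
}

// The test app brings up the settings library's real UI with the fake store.
public class SettingsTestApplication extends Application {
  @Override public void onCreate() {
    super.onCreate();
    SettingsLibrary.setStore(new FakeSettingsStore());
  }
}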

For example, let’s consider building smaller libraries for the login and settings components of your app.


The settings component test now looks like:
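(A hedged sketch: SettingsActivity, the view IDs, and the expected text are assumptions layered on the hypothetical test app above.)

import static android.support.test.espresso.Espresso.onView;
import static android.support.test.espresso.action.ViewActions.click;
import static android.support.test.espresso.assertion.ViewAssertions.matches;
import static android.support.test.espresso.matcher.ViewMatchers.withId;
import static android.support.test.espresso.matcher.ViewMatchers.withText;

import android.support.test.rule.ActivityTestRule;
import android.support.test.runner.AndroidJUnit4;
import org.junit.Rule;
import org.junit.Test;
import org.junit.runner.RunWith;

@RunWith(AndroidJUnit4.class)
public class SettingsComponentTest {
  @Rule
  public ActivityTestRule<SettingsActivity> activityRule =
      new ActivityTestRule<>(SettingsActivity.class);

  @Test
  public void togglingNotificationsUpdatesSummary() {
    // Drives the settings library's real UI, backed by the seeded fake store.
    onView(withId(R.id.notifications_toggle)).perform(click());
    onView(withId(R.id.notifications_summary))
        .check(matches(withText("Notifications off")));
  }
}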


Conclusion

UI testing can be very challenging for rich apps on Android. Here are some UI testing lessons learned on the Google+ team:
  1. Don’t write E2E tests instead of UI tests. Instead, write unit tests and integration tests alongside the UI tests. 
  2. Hermetic tests are the way to go. 
  3. Use dependency injection while designing your app. 
  4. Build your application into small libraries/modules, and test each one in isolation. You can then have a few integration tests to verify that integration between components is correct. 
  5. Componentized UI tests have proven to be much faster than E2E tests and 99%+ stable. Fast and stable tests have proven to drastically improve developer productivity.

Tuesday, February 03, 2015

The First Annual Testing on the Toilet Awards

By Andrew Trenk

The Testing on the Toilet (TotT) series was created in 2006 as a way to spread unit-testing knowledge across Google by posting flyers in bathroom stalls. It quickly became a part of Google culture and is still going strong today, with new episodes published every week and read in hundreds of bathrooms by thousands of engineers in Google offices across the world. Initially focused on content related to testing, TotT now covers a variety of technical topics, such as tips on writing cleaner code and ways to prevent security bugs.

While TotT episodes often have a big impact on many engineers across Google, until now we never did anything to formally thank authors for their contributions. To fix that, we decided to honor the most popular TotT episodes of 2014 by establishing the Testing on the Toilet Awards. The winners were chosen through a vote that was open to all Google engineers. The Google Testing Blog is proud to present the winners that appeared on this blog (there were two additional winners not shown here, since we only post testing-related TotT episodes).

And the winners are ...

Erik Kuefler: Test Behaviors, Not Methods and Don't Put Logic in Tests 
Alex Eagle: Change-Detector Tests Considered Harmful

The authors of these episodes received their very own Flushy trophy, which they can proudly display on their desks.



(The logo on the trophy is the same one we put on the printed version of each TotT episode, which you can see by looking for the “printer-friendly version” link in the TotT blog posts).

Congratulations to the winners!

Tuesday, January 27, 2015

Testing on the Toilet: Change-Detector Tests Considered Harmful

by Alex Eagle

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.


You have just finished refactoring some code without modifying its behavior. Then you run the tests before committing and… a bunch of unit tests are failing. While fixing the tests, you get a sense that you are wasting time by mechanically applying the same transformation to many tests. Maybe you introduced a parameter in a method, and now must update 100 callers of that method in tests to pass an empty string.

What does it look like to write tests mechanically? Here is an absurd but obvious way:
// Production code:
static int abs(int i) {
  return (i < 0) ? i * -1 : i;
}

// Test code: compares the production source, line by line, to an exact copy.
List<String> lines = Files.readAllLines(Paths.get(PROD_SOURCE));
assertEquals("static int abs(int i) {", lines.get(0));
assertEquals("  return (i < 0) ? i * -1 : i;", lines.get(1));

That test is clearly not useful: it contains an exact copy of the code under test and acts like a checksum. A correct or incorrect program is equally likely to pass a test that is a derivative of the code under test. No one is really writing tests like that, but how different is it from this next example?
// Production code:
class Processor {
  private final FirstPart firstPart;
  private final SecondPart secondPart;

  Processor(FirstPart firstPart, SecondPart secondPart) {
    this.firstPart = firstPart;
    this.secondPart = secondPart;
  }

  void process(Work w) {
    firstPart.process(w);
    secondPart.process(w);
  }
}

// Test code (using Mockito's mocks and in-order verification):
FirstPart part1 = mock(FirstPart.class);
SecondPart part2 = mock(SecondPart.class);
Work w = new Work();
new Processor(part1, part2).process(w);
InOrder inOrder = inOrder(part1, part2);
inOrder.verify(part1).process(w);
inOrder.verify(part2).process(w);

It is tempting to write a test like this because it requires little thought and will run quickly. This is a change-detector test—it is a transformation of the same information in the code under test—and it breaks in response to any change to the production code, without verifying correct behavior of either the original or modified production code.

Change detectors provide negative value, since the tests do not catch any defects, and the added maintenance cost slows down development. These tests should be rewritten or deleted.
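For contrast, a rewrite could verify observable behavior instead of interactions. The sketch below assumes Work exposes a result to assert on, which the example above does not specify:

import static org.junit.Assert.assertTrue;
import org.junit.Test;

public class ProcessorTest {
  @Test
  public void processCompletesTheWork() {
    // Use real (or simple fake) parts and assert on the outcome of processing,
    // not on which collaborators were called in which order.
    Processor processor = new Processor(new FirstPart(), new SecondPart());
    Work w = new Work();
    processor.process(w);
    assertTrue(w.isProcessed()); // hypothetical observable result on Work
  }
}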

Wednesday, January 14, 2015

Testing on the Toilet: Prefer Testing Public APIs Over Implementation-Detail Classes

by Andrew Trenk

This article was adapted from a Google Testing on the Toilet (TotT) episode. You can download a printer-friendly version of this TotT episode and post it in your office.


Does this class need to have tests?
class UserInfoValidator {
  public void validate(UserInfo info) {
    if (info.getDateOfBirth().isInFuture()) { throw new ValidationException(); }
  }
}
Its method has some logic, so it may be a good idea to test it. But what if its only user looks like this?
public class UserInfoService {
  private UserInfoValidator validator;
  public void save(UserInfo info) {
    validator.validate(info); // Throw an exception if the value is invalid.
    writeToDatabase(info);   
  }
}
The answer is: it probably doesn’t need tests, since all paths can be tested through UserInfoService. The key distinction is that the class is an implementation detail, not a public API.
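For example, the validator's behavior can be covered through the public API. A sketch assuming JUnit and a hypothetical helper for building UserInfo:

import org.junit.Test;

public class UserInfoServiceTest {
  @Test(expected = ValidationException.class)
  public void saveRejectsDateOfBirthInFuture() {
    UserInfoService service = new UserInfoService();
    // newUserInfoWithFutureDateOfBirth() is a hypothetical test helper.
    service.save(newUserInfoWithFutureDateOfBirth());
  }
}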

A public API can be called by any number of users, who can pass in any possible combination of inputs to its methods. You want to make sure these are well-tested, which ensures users won’t see issues when they use the API. Examples of public APIs include classes that are used in a different part of a codebase (e.g., a server-side class that’s used by the client-side) and common utility classes that are used throughout a codebase.

An implementation-detail class exists only to support public APIs and is called by a very limited number of users (often only one). These classes can sometimes be tested indirectly by testing the public APIs that use them.

Testing implementation-detail classes is still useful in many cases, such as if the class is complex or if the tests would be difficult to write for the public API class. When you do test them, they often don’t need to be tested in as much depth as a public API, since some inputs may never be passed into their methods (in the above code sample, if UserInfoService ensured that UserInfo were never null, then it wouldn’t be useful to test what happens when null is passed as an argument to UserInfoValidator.validate, since it would never happen).

Implementation-detail classes can sometimes be thought of as private methods that happen to be in a separate class, since you typically don’t want to test private methods directly either. You should also try to restrict the visibility of implementation-detail classes, such as by making them package-private in Java.

Testing implementation-detail classes too often leads to a couple of problems:

- Code is harder to maintain since you need to update tests more often, such as when changing a method signature of an implementation-detail class or even when doing a refactoring. If testing is done only through public APIs, these changes wouldn’t affect the tests at all.

- If you test a behavior only through an implementation-detail class, you may get false confidence in your code, since the same code path may not work properly when exercised through the public API. You also have to be more careful when refactoring, since it can be harder to ensure that all the behavior of the public API will be preserved if not all paths are tested through the public API.