How Google Tests Software - Part Four
Wednesday, March 02, 2011
By James Whittaker
Crawl, walk, run.
One of the key ways Google achieves good results with fewer testers than many companies is that we rarely attempt to ship a large set of features at once. In fact, the exact opposite is often the goal: build the core of a product and release it the moment it is useful to as large a crowd as feasible, then get their feedback and iterate. This is what we did with Gmail, a product that kept its beta tag for four years. That tag was our warning to users that it was still being perfected. We removed the beta tag only when we reached our goal of 99.99% uptime for a real user’s email data. Obviously, quality is a work in progress!
It’s not as cowboy a process as I make it out to be. In fact, in order to make it to what we call the beta channel release, a product must go through a number of other channels and prove its worth. For Chrome, a product I spent my first two years at Google working on, multiple channels were used depending on our confidence in the product’s quality and the extent of feedback we were looking for. The sequence looked something like this:
Canary Channel is used for code we suspect isn’t fit for release. Like a canary in a coal mine, if it fails to survive then we know we have work to do. Canary channel builds are only for ultra-tolerant users running experiments and not depending on the application to get real work done.
Dev Channel is what developers use in their day-to-day work. All engineers on a product are expected to pick this build and use it for real work.
Test Channel is the build used for internal dog food and represents a candidate beta channel build given good sustained performance.
The Beta Channel or Release Channel builds are the first ones that get external exposure. A build only gets to the release channel after spending enough time in the prior channels that it gets a chance to prove itself against a barrage of both tests and real usage.
This crawl, walk, run approach gives us the chance to run tests and experiment on our applications early and obtain feedback from real human beings in addition to all the automation we run in each of these channels every day.
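To make that crawl, walk, run progression concrete, here is a minimal sketch of how a build might move from one channel to the next. The channel order comes from the list above; the soak-time threshold, the release-blocker count, and the BuildRecord structure are purely illustrative assumptions, not Chrome's actual release tooling.

from dataclasses import dataclass

# Channel order described above: canary -> dev -> test -> beta/release.
CHANNELS = ["canary", "dev", "test", "beta"]

@dataclass
class BuildRecord:
    version: str
    channel: str = "canary"
    days_in_channel: int = 0
    open_release_blockers: int = 0

    def ready_to_promote(self, min_days: int = 7) -> bool:
        # Hypothetical gate: enough soak time in the current channel and
        # no open release blockers against this build.
        return self.days_in_channel >= min_days and self.open_release_blockers == 0

    def promote(self) -> None:
        # Move the build one channel closer to release if it has proved itself.
        idx = CHANNELS.index(self.channel)
        if idx + 1 < len(CHANNELS) and self.ready_to_promote():
            self.channel = CHANNELS[idx + 1]
            self.days_in_channel = 0

build = BuildRecord(version="example-42", days_in_channel=10)
build.promote()
print(build.channel)  # -> "dev"

The point of the gate is simply that promotion is earned by sustained use, not by the calendar; real criteria would of course include the automation and dogfood signals described above.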
There are analytical benefits to this process as well. If a bug is found in the field, a tester can create a test that reproduces it and run it against builds in each channel to determine if a fix has already been implemented.
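A rough sketch of that workflow, assuming the repro has been captured as a standalone script (repro_issue_12345.py is a hypothetical name) and that each channel's build is installed at a known path; none of these paths or names are real Chrome tooling.

import subprocess

# Hypothetical install locations for the build from each channel.
CHANNEL_BUILDS = {
    "canary": "/opt/myapp-canary/myapp",
    "dev": "/opt/myapp-dev/myapp",
    "test": "/opt/myapp-test/myapp",
    "beta": "/opt/myapp-beta/myapp",
}

def fix_present(binary: str) -> bool:
    # Run the repro script against one build; a zero exit code means
    # the bug no longer reproduces in that build.
    result = subprocess.run(
        ["python", "repro_issue_12345.py", "--binary", binary],
        capture_output=True,
    )
    return result.returncode == 0

if __name__ == "__main__":
    # Check every channel so the tester can see exactly where the fix has landed.
    for channel, binary in CHANNEL_BUILDS.items():
        status = "fixed" if fix_present(binary) else "still reproduces"
        print(f"{channel:>7}: {status}")

Running the same repro from canary down to beta shows how far a fix has propagated toward the release channel.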
This is a really interesting insight. Just curious, why do you do your Test Channel phase after the devs use it for everyday work?
Engineers using our applications is something I'm trying to encourage at my company currently. I was hoping to have it as a later phase though, after QA has been part of writing acceptance tests, ironing out the functionality and getting the application in good working order. I guess my thought would be that having the devs use it afterwards would help flesh out any remaining bugs after QA has torn it apart, instead of having to deal with some of the low-hanging issues that QA can clear up right away.
James,
Fantastic observation - quality is a work in progress!
It would be interesting to understand how Google tests cross-browser compatibility of Chrome. Do you guys use any tools?
Thanks
Sri
Canary Channel - I feel this is a great point. But one question that comes to my mind: how do you decide code is not fit for release?
So I assume that Google has acceptance criteria for moving a product into testing.
Also, the way Google does beta has always fascinated me; Gmail, if I am right, had a long innings in beta. Then I believe GTalk is still in beta.
The question I have here is: which is more profitable, releasing the product to the beta stage, or having a team of testers fixing all those bugs and leaving little room for risk in a production fix?
Also, if I am right, Google might be the only company to earn revenue via beta releases. They did display a lot of ads in the Gmail beta.
- Kiran Badi
Hi James,
Can you share your thoughts about the difference between how Google tests and how Microsoft tests? Pros/cons, or just your general impressions?
thanks!
Ted
Just wondering, with small iterations of development and testing, what test documentation is used at Google for manual testing? As time is a factor, as with most agile projects, creating a long test script as found in V-model projects is obviously not feasible...
Tremendous insight and technical wisdom! Thanks!
Any updates on Testify aka Google Test Analytics? Can't wait to get my HUD going!
Answers are coming, I promise. Not sure about the Google vs. MS question, though. That one might be better left for a conference, where we could have a two-way discussion with some Microsoft engineers in attendance for point/counterpoint.
Hello sir,
I have used LoadRunner version 9. The problem is that I can record the image file, but at run time I cannot see that image. What can I do?
Hi James,
Out of curiosity: as you said, Gmail went as a beta for around 4 years and went live only once you reached 99.99% uptime, so with this you are showing that quality is an ongoing process and fewer testers are needed. However, one thing I would like to point out is that Gmail could go as a beta for four years, but what do you suggest when there are strict deadlines on a project (not talking about product-based companies) and they have to ship it, not as a beta? What are your suggestions in those circumstances? Customers can't wait for 4 years when there is a strict requirement from the end user.