An Ingredients List for Testing - Part Three
Friday, September 03, 2010
By James Whittaker
Possessing a bill of materials means that we understand the overall size of the testing problem. Unfortunately, the size of most testing problems far outstrips any reasonable level of effort to solve them. And not all of the testing surface is equally important. There are certain features that simply require more testing than others. Some prioritization must take place. What components must get tested? What features simply cannot fail? What features make up the user scenarios that simply must work?
In our experience it is the unfortunate case that no one really agrees on the answers to these questions. Talk to product planners and you may get a different assessment than if you talk to developers, sales people or executive visionaries. Even users may differ among themselves. It falls to testers to act as user advocates and work out how to weigh all these concerns when deciding how testing resources will be distributed across the entire testing surface.
The term commonly used for this practice is risk analysis, and at Google we take information from all the project's stakeholders to come up with overall numerical risk scores for each feature. How do we get all the stakeholders involved? That's actually the easy part. All you need to do is assign numbers and then step back and have everyone tell you how wrong you are. We've found that being visibly wrong is the best way to get people involved, in the hopes they can influence getting the numbers right! Right now we are collecting this information in spreadsheets. By the time GTAC rolls around, the tool we are using for this should be in a demonstrable form.
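The post doesn't describe the scoring formula, but the idea of turning per-stakeholder opinions into a single numerical risk score per feature can be sketched roughly as below. This is a hypothetical illustration, not the spreadsheet or tool described above; the feature names, the 1-10 rating scale, and the impact-times-likelihood formula are all assumptions.

```python
from statistics import mean

# Hypothetical data: each stakeholder rates every feature on a 1-10 scale
# as (failure impact, failure likelihood). Names are illustrative only.
stakeholder_ratings = {
    "product_planner": {"checkout": (9, 4), "search": (7, 3), "themes": (3, 2)},
    "developer":       {"checkout": (8, 6), "search": (8, 4), "themes": (2, 3)},
    "sales":           {"checkout": (10, 5), "search": (6, 2), "themes": (5, 2)},
}

def risk_scores(ratings):
    """Average impact * likelihood across stakeholders for each feature."""
    features = next(iter(ratings.values())).keys()
    return {
        f: round(mean(imp * like for imp, like in
                      (r[f] for r in ratings.values())), 1)
        for f in features
    }

scores = risk_scores(stakeholder_ratings)
# Highest-risk features first: these would get testing resources first,
# and publishing the numbers is what draws stakeholders into the debate.
for feature, score in sorted(scores.items(), key=lambda kv: -kv[1]):
    print(feature, score)
```

Publishing a ranked list like this, even with rough numbers, is exactly the "be visibly wrong" move the post describes: stakeholders who disagree with a score now have something concrete to argue against.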
But don't you think "too many chefs spoil the soup"? Giving numbers to all the people involved in the project could hamper progress. Won't it be very hard to satisfy everyone's numbers within the time constraints?
Hmmmm.....
This seems a wee bit counterintuitive to me. What happens when people violently agree and something that becomes the hot topic absorbs the effort that is required to "round off" the product? Another issue would be the distraction involved in absolutely defining the argument for an issue to be tackled when several things work against the "real" issues. Seniority, power, personal preference, and the ability to craft an argument can all combine to distract. I cannot disagree completely with the proposal, but it leaves me feeling a little bit uneasy. At some point the decision maker has got to be the one to say, "Enough, this is the direction, this is the priority, now JFDI!" based on their assessment and vision. That is where the real strength of a Test Lead/Manager shows. Help people make informed decisions and don't let a feeding frenzy develop around the priority list.
Agreement always takes work to forge, but once you have it -- bang! that's a solid test plan (and a STRONG test lead/manager)... When the inevitable bug crops up in production (in an area which was de-prioritized), no one wastes time blaming QA; they just focus on the priorities. I think it's more work up front but pays dividends over time! Plus, by getting agreement up front, you've already raised awareness of the most critical issues among the people who actually control quality.