By Michael Kelly
Deciding what to test really involves two different questions. The first is a question of scope: "Out of everything that I could possibly test, which features are the right ones to test?" There will always be more to test than you will have time to test. The second is a question of technique and coverage: "For each feature I am testing, how do I want to test that feature?" Different quality criteria will lead you to cover different product elements and to use different testing techniques.
In this two-minute crash course, I'll provide some details on how I answer those questions and how I structure my test execution to ensure I'm testing for the right risks at the right time.
2:00: Figure out the scope of your testing
For the question about scope -- what features should we test -- I like using Scott Barber's FIBLOTS mnemonic (which he presents in his Performance Testing Software Systems class). Each letter of the mnemonic helps us think about a different aspect of risk. Here's a summary of how I apply FIBLOTS when thinking about scope (a rough scoring sketch follows the list):
- Frequent: What features are most frequently used (e.g., features the user interacts with, background processes, etc.)?
- Intensive: What features are the most intensive (searches, features operating with large sets of data, features with intensive GUI interactions)?
- Business-critical: What features support processes that need to work (month-end processing, creation of new accounts)?
- Legal: What features support processes that are required to work by contract?
- Obvious: What features support processes that will earn us bad press if they don't work?
- Technically risky: What features are supported by or interact with technically risky aspects of the system (new or old technologies, places where we've seen failures before, etc.)?
- Stakeholder-mandated: What have we been asked/told to make sure we test?
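To make this concrete, here's a minimal sketch of one way to turn FIBLOTS into a rough priority score per feature. This is purely illustrative and not part of Scott Barber's material; the features, weights, and scale are invented:

```python
# Score each feature 0-3 against each FIBLOTS criterion
# (0 = doesn't apply, 3 = applies strongly), then rank by total.
FIBLOTS = ("frequent", "intensive", "business_critical", "legal",
           "obvious", "technically_risky", "stakeholder_mandated")

features = {
    "month-end processing": dict(frequent=1, intensive=2, business_critical=3,
                                 legal=2, obvious=1, technically_risky=1,
                                 stakeholder_mandated=3),
    "profile photo upload": dict(frequent=2, intensive=1, business_critical=0,
                                 legal=0, obvious=2, technically_risky=1,
                                 stakeholder_mandated=0),
}

# Print features from highest total risk score to lowest.
for name, scores in sorted(features.items(),
                           key=lambda kv: -sum(kv[1].values())):
    print(f"{sum(scores.values()):2d}  {name}")
```

The numbers themselves matter less than the conversation they force: if a feature scores zero everywhere, that's a hint it may not belong in scope.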
1:33: Understand the details of each feature you're testing
Once I understand what I want to test, I move on to understanding what aspects of each feature I'd like to cover. For that, I pull out the Satisfice Heuristic Test Strategy Model and use its product elements list to determine which aspects of the feature I need to focus on. At a high level, I think of coverage in these terms (with a small checklist sketch after the list):
- Structure: This is everything that comprises the physical product or the specific feature I'm looking at (code, hardware, etc.).
- Functions: Everything that the product or feature does (user interface, calculations, error handling, etc.).
- Data: Everything that the product or feature processes (input, output, lifecycle).
- Platform: Everything on which the product or feature depends (and that is outside your project).
- Operations: How the product or feature will be used (common use, disfavored use, extreme use, etc.).
- Time: Any relationship between the product and time (concurrency, race conditions, etc.).
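As a concrete illustration, here's a minimal sketch of a per-feature checklist built on those six dimensions. The feature name and test ideas are invented examples:

```python
# One checklist entry per product-element dimension; fill in test
# ideas for each dimension as they come up during analysis.
COVERAGE_DIMENSIONS = ("structure", "functions", "data",
                       "platform", "operations", "time")

def new_coverage_record(feature):
    return {"feature": feature,
            "ideas": {dim: [] for dim in COVERAGE_DIMENSIONS}}

record = new_coverage_record("account search")
record["ideas"]["data"].append("empty result set; maximum-length query")
record["ideas"]["time"].append("two users searching concurrently")

# An empty list against a dimension is a visible gap in coverage.
for dim, ideas in record["ideas"].items():
    print(f"{dim:>10}: {ideas or 'NO IDEAS YET'}")
```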
1:03: Structure your work in a way that makes sense to you
I typically start by structuring my work in lists or spreadsheets. Then, once I know what I'm going to test, I start to think about how I'm going to test it. It's not real to me until I can visualize the testing taking place. Do I need specialized software to help (like runtime analysis tools)? Will I need to write code or coordinate some activity (like a network failure)? Even visualizing something as simple as the data that I'll need can sometimes surface a new idea or an obstacle I'll need to tackle. As I think about each test, I'll start to group my tests into charters.
Once I have my charters figured out, I'll start to tackle whatever obstacles or setup tasks need to be done before I can run them. Some charters won't have any, and others might require a joint effort across teams (see the sketch after this list). Generally, I'm ready to start testing once two conditions are satisfied:
- There is software somewhere that's ready for some level of testing.
- I have at least one charter that's ready to be executed (setup is completed or wasn't required).
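Here's a minimal sketch of how those two conditions might look if you track charters in code rather than a spreadsheet; the charter missions and setup tasks are invented examples:

```python
from dataclasses import dataclass, field

@dataclass
class Charter:
    mission: str
    setup_tasks: list = field(default_factory=list)  # unfinished obstacles

    def ready(self, software_available: bool) -> bool:
        # Ready once software exists and setup is done (or was never needed).
        return software_available and not self.setup_tasks

charters = [
    Charter("Explore month-end processing with boundary dates"),
    Charter("Probe search under large data sets",
            setup_tasks=["load 1M-record data set"]),
]

ready_now = [c.mission for c in charters if c.ready(software_available=True)]
print(ready_now)  # only the first charter is executable today
```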
0:35: Get your hands on the software you're testing
You'll notice I don't have a lot of entry criteria for my testing. That's because I'm always interested in seeing the software as soon as possible. I don't care how buggy it might be; once I see what I'm going to be testing, my test ideas often change. So the sooner I see it, the sooner I can provide feedback to the developer and start refactoring my tests.
While this philosophy won't work for all of my testing (in general, I need something that's functionally sound before I can really start performance testing), it reflects a value I hold: being an asset to the rest of the team. Of course I always want the most bug-free code I can get (well-designed, unit-tested, peer-reviewed), but I'm a realist. Sometimes my feedback is more valuable to the team if I can get eyes on the product sooner rather than later.
0:21: Start with the components and build your way out from there
That said, I do have some general timing heuristics I use when thinking about when to test what. In general, I won't start doing any sort of end-to-end testing (following data through multiple parts of a system or subsystems) until I'm fairly confident each piece of the system is working to some degree (basic functionality has been confirmed, it's relatively stable, and so on).
I typically don't try to do much automation or performance testing until I have at least one "stable" interface. The interface could be a Web service, a user interface, or even a method call, but I want it to have been through at least one or two rounds of preliminary testing, and I want some indication from the programming team that they don't plan to make major changes to it any time soon. I'm not looking for a promise that it won't change -- things change all the time. I just want us to agree that, right now, we don't expect it to change.
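As an illustration, here's the kind of small automated check I might point at a newly stabilized interface. This is a sketch that assumes a hypothetical REST endpoint and uses the third-party requests library; the URL and response shape are invented:

```python
import requests

BASE_URL = "https://example.test/api"  # hypothetical service under test

def test_search_returns_results():
    # A basic confidence check against the agreed-upon interface:
    # the endpoint responds, and a common query returns something.
    response = requests.get(f"{BASE_URL}/search",
                            params={"q": "smith"}, timeout=10)
    assert response.status_code == 200
    assert response.json()["results"], "expected at least one result"
```

Checks this small are cheap to throw away, which matters when the interface is only provisionally stable.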
0:05: Don't forget to regression test
Finally, I typically won't start regression testing until I've completed my first round of chartered test execution. Schedule constraints can of course override that, but I like the idea of regression testing being the last thing I do. It makes me more comfortable with the changes made as a result of my testing, and it gives me one last (often more relaxed) look at the product.
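When I do get to regression, a lightweight way to keep those checks separate from chartered work is a test marker. A minimal sketch, assuming pytest; the function under test and its values are hypothetical:

```python
import pytest

def add_line_items(items):
    """Hypothetical stand-in for a business calculation under test."""
    return sum(items)

@pytest.mark.regression
def test_month_end_totals_unchanged():
    # Re-check behavior that earlier chartered testing already exercised.
    assert add_line_items([100, 250]) == 350
```

Running `pytest -m regression` at the end of the cycle then executes only these checks; registering the marker in pytest.ini keeps pytest from warning about it.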