
How to Define a Test Strategy

Q: I want to define one test strategy that is suitable for all the teams in my organization. What questions do I need to ask the developers in order to define a test strategy?

Expert’s response: This is a broad question with several possible meanings, but I'll take a stab at it. It sounds to me like the question is how you build a system test strategy: once each component of a project has been component-tested, how do you test the system as a whole, and what do you need to know from the developers to build that strategy?

When it comes to system testing, to be frank, I want less information from the developers than I do from the customers. I want to approach my system testing from a scenario basis. That said, there are some important things to know about the components -- specifically, how they interact with each other. What are the outbound and inbound dependencies (i.e., what data is transferred between components)? The key here is to ask questions that don't offload the burden of testing from you to the developers. You can't ask a developer, "How do I test this?" because, if he can answer you, he might as well do the testing himself. What you need to ask are questions such as "How does this component interact with that one?" or (better yet) "I've been reading the technical specification for your component, and I have a couple of questions." Then ask your specific questions.

To put it into a real-world analogy, let's say you're testing a procurement and inventory control application for a small gas business. The application may consist of a procurement piece (code that automates ordering gas deliveries), a projection piece (code that projects short- and mid-term inventory needs), and a delivery tracking piece (code that verifies the gas ordered is delivered, even if it's split up among several deliveries). At this point in your test planning, your interviews with developers will focus on the data being shared -- which portions of the database are common and which are specific to a given component. You'll also ask how components modify shared data. While this data modification may have been tested to specification during initial testing, it's very possible that the original specification overlooked some element of interaction and the spec is deficient.
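
To make the shared-data questions concrete, here is a minimal sketch in Python. The in-memory `shared_db` and the `place_order` and `record_delivery` functions are hypothetical stand-ins for the real components and their shared database tables; the point is the kind of interaction check you'd want, not the implementation:

```python
# Minimal sketch of a shared-data interaction check. The components and
# the in-memory "database" are hypothetical stand-ins for the real
# procurement and delivery tracking pieces and their shared tables.

shared_db = {"orders": {}}

def place_order(order_id: str, gallons: int) -> None:
    """Procurement piece: record a new gas order in the shared data."""
    shared_db["orders"][order_id] = {"gallons": gallons, "delivered": 0}

def record_delivery(order_id: str, gallons: int) -> None:
    """Delivery tracking piece: credit a (possibly partial) delivery."""
    shared_db["orders"][order_id]["delivered"] += gallons

def test_components_agree_on_shared_order_record():
    """Both components must modify the same order record consistently."""
    place_order("PO-1", gallons=500)
    record_delivery("PO-1", gallons=200)
    record_delivery("PO-1", gallons=300)
    order = shared_db["orders"]["PO-1"]
    assert order["delivered"] == order["gallons"]
```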

Another key step is to examine the test strategy for each individual component. Here you are looking for the overlap: Which cases do two or more components have in common? Often, the team developing the integration test strategy will have spent time identifying system-level test cases, and you can leverage them here.

In our real-world example, interviewing the test leads from each team should result in them sharing with you the cases they felt were "system test cases" by nature -- cases that cover interaction, cases that cover dependencies, and so on. The test lead for the procurement component, for instance, might have identified cases that cover order size and delivery date, and will want the delivery tracking team to verify that split orders are handled appropriately. Through these interviews, you should build a list of these common, overlapping cases.
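
As a rough illustration (the case names here are invented), the overlap between two component-level case lists can be found mechanically, and the intersection becomes a candidate list for the system-level suite:

```python
# Hypothetical case lists gathered from the component test leads.
procurement_cases = {"order size limits", "delivery date windows", "split orders"}
delivery_tracking_cases = {"split orders", "partial delivery", "delivery confirmation"}

# Cases both teams flagged are natural system-test candidates.
system_candidates = procurement_cases & delivery_tracking_cases
print(sorted(system_candidates))  # ['split orders']
```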

Finally, as I mentioned, you want to spend a lot of time in system-level testing thinking about scenarios. You want to define how a user will interact with your product and follow that interaction as it flows from component to component. You definitely want to speak with the customer, in two phases. First, sit down with the customer and ask them to work with you to identify key customer scenarios. Document everything! Then, from that meeting, develop any other scenarios or obvious variations on them. Prioritize these scenarios and write up the steps that comprise each one. Finally, return to the customer and validate your final scenarios and the steps that cover them.
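
One lightweight way to document and prioritize scenarios before that validation meeting is a simple structure like the following sketch; the scenario names and steps are illustrative, not prescriptive:

```python
from dataclasses import dataclass, field

@dataclass
class Scenario:
    name: str
    priority: int              # 1 = review with the customer first
    steps: list[str] = field(default_factory=list)

scenarios = [
    Scenario(
        name="Order fulfilled in a single delivery",
        priority=1,
        steps=[
            "Projection flags low inventory and sizes a new order",
            "Procurement places the order",
            "Delivery tracking confirms one full delivery",
        ],
    ),
    Scenario(
        name="Order fulfilled across split deliveries",
        priority=1,
        steps=[
            "Projection flags low inventory and sizes a new order",
            "Procurement places the order",
            "Delivery tracking reconciles two partial deliveries",
        ],
    ),
]

# Walk highest-priority scenarios first when validating with the customer.
for scenario in sorted(scenarios, key=lambda s: s.priority):
    print(scenario.name)
    for step in scenario.steps:
        print("  -", step)
```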

In our real-world example, you'd walk through the lifetime of an order -- from the moment the projection component identifies a new order is needed, through order placement and then fulfillment.

You definitely want to script scenarios that cover full order delivery as well as split deliveries. You want to run a scenario that probes how the projection component deals with fluctuating demand, and so on. Once you've identified a set of scenarios, script high-level steps for them. Circle back, refine your scenarios and steps, and then sit with the customer and have them validate your planning -- taking feedback and modifying appropriately.
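
A scripted scenario can then become an executable end-to-end check. The sketch below, with hypothetical stand-in functions for the three components, walks the lifetime of one split order from projection through fulfillment:

```python
# End-to-end sketch of one order's lifetime. The three functions are
# hypothetical stand-ins for the projection, procurement, and delivery
# tracking components.

def project_demand(current_stock: int, forecast_use: int) -> int:
    """Projection piece: gallons to order, zero if stock covers demand."""
    return max(forecast_use - current_stock, 0)

def place_order(gallons: int) -> dict:
    """Procurement piece: create an order record."""
    return {"gallons": gallons, "delivered": 0}

def record_delivery(order: dict, gallons: int) -> None:
    """Delivery tracking piece: credit a partial or full delivery."""
    order["delivered"] += gallons

def test_order_lifetime_with_split_delivery():
    """Scenario: projection triggers an order fulfilled in two parts."""
    needed = project_demand(current_stock=200, forecast_use=700)
    assert needed == 500
    order = place_order(needed)
    record_delivery(order, 300)  # first truck
    record_delivery(order, 200)  # second truck completes the split order
    assert order["delivered"] == order["gallons"]
```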

Through this research, you can begin to identify a system-level test approach that gives you the best chance of verifying customer-facing functionality. You'll minimize test case overlap with your integration-level testing as well. The key to good system-level work is focusing on higher-level, scenario-based testing and minimizing your component-level testing (assuming component-level testing has been carried out successfully). If you involve developers in this stage, be sure to do so to augment your planning. Don't ask them to define your strategy for you. Bring them in as valued experts, but be sure to minimize the questions you ask them. You want them on your side when it comes time to advocate for fixes.



