How to do integration testing
Q: How do testers do integration testing? What are the top-down and bottom-up approaches in integration testing?
Expert’s response: Ironically, integration testing means completely different things at different companies. At Microsoft, we typically referred to integration testing as the testing that occurs at the end of a milestone and that "stabilizes" a product. Features from the new milestone are integration-tested with features from previous milestones. At Circuit City, however, we referred to integration testing as the testing done just after a developer checks in -- the stabilization testing that occurs when two developers' check-ins come together. I would call this feature testing, frankly…
But to answer your question, top-down vs. bottom-up testing is simply a matter of perspective. Bottom-up testing could almost be considered an extension of unit testing. It focuses on the feature being implemented and that feature's outbound dependencies -- that is, how the feature impacts other areas of the product or project.
Top-down, on the other hand, is testing from a more systemic point of view. It's testing the overall product after a new feature is introduced, verifying that the features it interacts with remain stable and that it "plays well" with other features.
The key here is that you are moving beyond the component level and testing as a system. Frankly, neither approach alone is sufficient: you need to test the parts with the perspective of the whole. One part of this testing is seeing how the system as a whole responds to the data (or states) generated by the new component. You want to verify not only that the data the component pushes out are well-formed (what you tested during component testing) but also that other components expect and can handle that data. You also need to validate that data originating within the existing system are handled properly by the new component.
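To make that distinction concrete, here is a minimal Python sketch of the two checks -- format versus behavior. StockUpdate, Ledger, and the test are hypothetical stand-ins invented for illustration, not any real system's API:

    # Hypothetical sketch: outbound data must be both well-formed
    # and consumable by an existing downstream component.
    from dataclasses import dataclass

    @dataclass
    class StockUpdate:
        sku: str
        quantity: int

    def is_well_formed(update: StockUpdate) -> bool:
        # Format check -- this much was already covered by component testing.
        return bool(update.sku) and update.quantity >= 0

    class Ledger:
        """Stand-in for an existing downstream component."""
        def __init__(self):
            self.counts = {}

        def apply(self, update: StockUpdate) -> None:
            self.counts[update.sku] = update.quantity

    def test_outbound_data_is_consumable():
        update = StockUpdate(sku="SKU-123", quantity=12)
        assert is_well_formed(update)           # well-formed (format)
        ledger = Ledger()
        ledger.apply(update)                    # accepted by the consumer (behavior)
        assert ledger.counts["SKU-123"] == 12

The same shape works in reverse for inbound data: feed the new component records produced by the existing system and assert on its state afterward.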
Real-world examples? Well, let's assume you are developing a large retail management system, and an inventory control component is ready for integration. Bottom-up testing means setting up a fair amount of equivalence-classed data in the new component and introducing that data into the system as a whole. How does the system respond? Are the inventory amounts updated correctly? If you have inventory-level triggers (e.g., if the total count of pink iPod Nanos falls below a certain threshold, generate an electronic order for more), does the order management system respond accordingly? This is bottom-up testing.
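Here is what that trigger scenario might look like as an automated check. InventoryComponent, OrderSystem, and the threshold value are invented for illustration; treat this as a sketch, not the real system's interfaces:

    # Hypothetical bottom-up check: a low inventory count in the new
    # component should ripple out to the order management system.
    REORDER_THRESHOLD = 5

    class OrderSystem:
        def __init__(self):
            self.purchase_orders = []

        def place_order(self, sku: str, quantity: int) -> None:
            self.purchase_orders.append((sku, quantity))

    class InventoryComponent:
        def __init__(self, order_system: OrderSystem):
            self.order_system = order_system
            self.stock = {}

        def record_sale(self, sku: str, units: int) -> None:
            self.stock[sku] = self.stock.get(sku, 0) - units
            # Outbound dependency: falling below the threshold should
            # generate an electronic reorder.
            if self.stock[sku] < REORDER_THRESHOLD:
                self.order_system.place_order(sku, quantity=50)

    def test_low_inventory_triggers_reorder():
        orders = OrderSystem()
        inventory = InventoryComponent(orders)
        inventory.stock["IPOD-NANO-PINK"] = 6
        inventory.record_sale("IPOD-NANO-PINK", units=3)  # count drops to 3
        assert ("IPOD-NANO-PINK", 50) in orders.purchase_orders

In practice you would parameterize a test like this over each equivalence class of the data -- counts well above the threshold, at the boundary, and below it.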
At the same time, you want to track how well the component consumes data from the rest of the system. Is it handling inventory changes coming in from the Web site? Does it integrate properly with the returns system? When an item's status is updated by the warehouse system, is it reflected in the new component?
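A matching sketch for this inbound direction, again with hypothetical names (InventoryView, on_warehouse_event) rather than any real system's API:

    # Hypothetical inbound check: an event originating in the existing
    # warehouse system must be handled correctly by the new component.
    class InventoryView:
        def __init__(self):
            self.stock = {}

        def on_warehouse_event(self, event: dict) -> None:
            # A shipment receipt raises the on-hand count.
            if event["status"] == "received":
                sku = event["sku"]
                self.stock[sku] = self.stock.get(sku, 0) + event["units"]

    def test_warehouse_receipt_updates_inventory():
        view = InventoryView()
        view.on_warehouse_event(
            {"sku": "IPOD-NANO-PINK", "status": "received", "units": 40}
        )
        assert view.stock["IPOD-NANO-PINK"] == 40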
We see constant change in the testing profession, with new methodologies being proposed all the time. This is good -- it's all part of moving from art to craft to science. But as with anything else, we can't commit all of our testing to a single methodology, because one size doesn't fit all. Bottom-up and top-down testing are both critical components of an integration testing plan, and both need considerable focus if the QA organization wants to maximize software quality.