Test coverage: Finding all the defects in your application

Q: If the trace matrix does not meet the requirement for test coverage, what would you suggest instead? And as a team leader, how can I assure that a team member has covered all functionalities?

Expert Response: The trace matrix is a well-established test coverage tool. Let me offer a quick definition: the purpose of the trace matrix is to map one or more test cases to each system requirement, and it is usually formatted as a table. The fundamental premise is that if one or more test cases have been mapped to each requirement, then every requirement of the system must have been tested, and therefore the trace matrix proves testing is complete.
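That premise can be sketched in a few lines of Python. The requirement IDs and test case names below are invented for illustration; the point is that the matrix only shows whether each requirement has at least one mapped test case:

```python
# A minimal sketch of a trace matrix: each system requirement mapped to
# the test cases that cover it. All IDs here are hypothetical examples.
trace_matrix = {
    "REQ-001": ["TC-101", "TC-102"],  # login requirement
    "REQ-002": ["TC-201"],            # password-reset requirement
    "REQ-003": [],                    # reporting requirement -- no tests yet
}

# The matrix "proves" coverage only in the shallow sense that every
# requirement has at least one mapped test case.
uncovered = [req for req, cases in trace_matrix.items() if not cases]
print(uncovered)  # ['REQ-003']
```

Note that this check says nothing about whether the mapped test cases are any good, or whether the requirements themselves are complete, which is exactly where the reasoning breaks down.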

I see flaws in this line of reasoning; here are my primary reservations about over-reliance on the trace matrix:

  1. A completed trace matrix is only as valuable as its contents. If the requirements are not complete or clear, then the test cases designed and executed might satisfy the requirements on paper, but the testing won't have provided what was needed. Conversely, if the requirements are clear but the test cases are insufficient, then a completed trace matrix still doesn't deliver the testing coverage and confidence being sought from a checked-off table.
  2. The trace matrix, by design, relies too stringently on system requirements -- ensuring all system requirements have been tested is its primary purpose. But all sorts of defects relevant to the application, and to whether it actually solves the customer's problem, can be found outside the system requirements. By looking only at the system requirements, and potentially not considering the customers' needs and real-life product usage, essential testing could be overlooked. Testing only to specified requirements may be too narrowly focused to be effective in real-life usage -- unless the requirements are exceptionally robust.

Overall, I feel the trace matrix might provide a clean, high-level view of testing, but a checked-off list doesn't prove an application is ready to ship. Some people value the trace matrix because it attempts to offer an orderly view of testing; in my experience, testing is rarely such a tidy task.

So how do you decide when testing is done? And how can you assure test coverage?

  1. To be able to assure coverage at the end, I'd start by reviewing the beginning -- look at the test planning. Did your test planning include a risk analysis? A risk analysis at the start of a project can provide solid information for your test plan. Host a risk analysis, either formally or informally, and gather ideas by talking with multiple people. Get different points of view: talk to your project stakeholders, your DBAs, your developers, your network staff, and your business analysts. Plan testing based on your risk analysis.
  2. As the project continues, shift testing based on the defects found and on the product and project as they evolve. Focus on high-risk areas. Adapt testing based on your and your testing team's experience with the product. Be willing to adjust your test plan throughout the project.
  3. Throughout testing, watch the defects reported. Keep having conversations and debriefs with hands-on testers to understand not just what they've tested but how they feel about the application. Do they have defects they've seen but haven't been able to reproduce? What is their perception of the current state of the application?
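One common informal way to run the risk analysis in step 1 is to score each product area by likelihood of failure and impact of failure, then plan the most testing effort for the highest-risk areas. A minimal sketch, with invented area names and scores:

```python
# Informal risk-based test planning: score each area on 1-5 scales for
# likelihood and impact, multiply to get a risk score, and test the
# highest-risk areas first. All names and numbers are hypothetical.
areas = [
    {"area": "payment processing", "likelihood": 4, "impact": 5},
    {"area": "report export",      "likelihood": 2, "impact": 2},
    {"area": "user login",         "likelihood": 3, "impact": 5},
]

for a in areas:
    a["risk"] = a["likelihood"] * a["impact"]

# Plan testing effort in descending risk order.
for a in sorted(areas, key=lambda a: a["risk"], reverse=True):
    print(f'{a["area"]}: risk {a["risk"]}')
```

The scores come from the conversations described above -- stakeholders, DBAs, developers, network staff, and business analysts will each weight likelihood and impact differently, which is exactly why multiple viewpoints matter.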

In my view, there is no single tool, the trace matrix included, that signals testing is complete. But the combination of knowing how testing was planned and adapted throughout the project, a thorough review of the defects reported and remaining, and the current state of the application according to your and your team's experience should provide you with a sound assessment of the product and the test coverage.
