
Tips for Software Testing Engineers-For Effective Writing of Integration Tests

Software testing engineers cannot afford to ignore integration tests. An integration test that breaks when a dependent system changes unexpectedly is an early warning that the system is about to break in the real world as well. When an integration test breaks and pinpoints the problematic interface, an alert software testing engineer can prevent problems from piling up and only surfacing at a later stage of the project.

Integration tests are generally problematic to write, yet they are extremely important in any project. Tedious as they are, integration tests, whether automated or not, usually repay the effort put into creating them and keeping them working. Without them, we would be left with only a set of unit tests, and unit tests alone do not confirm that the entire operation works, from the moment a user clicks "Go", through the complete operation, to the results displayed on the user's screen. An end-to-end test confirms that all the pieces integrate together as intended.

Integration testing is aimed at finding defects in the interfaces and invariants between entities that interact in a system or a product. Invariants are sub-states that should remain unchanged by the interaction between two entities.

The objective of integration testing is not to find defects inside the entities being integrated; the underlying assumption is that those have already been found during earlier testing.

When our final product is, or is part of, a complex product, it becomes necessary to introduce further integration test levels, such as hardware-software system integration, software-data system integration, and customer product integration.

The entities to be integrated may be components as defined in the architectural design, or different systems as defined in the product design. The principles of integration testing are the same regardless of what the software testing engineer is integrating.

We can define the scope of our integration tests anywhere from simple tests of a small group of functions working together, up to full-scale end-to-end scenarios covering many tiers of an enterprise system, such as the middleware and the database.

Strategies for the testing order in integration testing:

1) Top down integration: 
This involves first testing the interfaces in the top layer of the design hierarchy, then testing each layer going downwards. The main program serves as the driver. This way we are able to quickly create a working "shell" of the system.

2) Bottom up integration:
This involves first testing the interfaces at the lowest level. Here the higher-level components are replaced with drivers, so many drivers may be needed. This strategy enables early integration with the hardware, where that is relevant.

3) Functional integration:
This involves integrating one functional area at a time; it is, in effect, a vertically divided top-down strategy. With this strategy we quickly gain working functional areas that are available to us.

4) Big-bang integration:
This involves integrating everything in one go. At first glance this strategy seems to reduce the test effort, but it does not: it is impossible to get proper coverage of the interfaces in a big-bang integration, and it is very difficult to trace defects back to a specific interface. Both top-down and bottom-up integration often end up as big-bang, even when that was not the initial intention.
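The top-down and bottom-up strategies above can be sketched in code. The sketch below assumes a hypothetical three-layer design (OrderService on top, PricingEngine in the middle, TaxTable at the bottom); the class names, rates, and amounts are invented for illustration only. Top-down substitutes the lower layer with a stub, while bottom-up exercises the lowest layer directly through a small driver.

```python
from unittest import mock

# Hypothetical bottom layer: a lookup of tax rates per region.
class TaxTable:
    def rate_for(self, region):
        return {"EU": 0.20, "US": 0.07}.get(region, 0.0)

# Hypothetical middle layer: computes gross totals using the tax table.
class PricingEngine:
    def __init__(self, tax_table):
        self.tax_table = tax_table

    def total(self, net, region):
        return round(net * (1 + self.tax_table.rate_for(region)), 2)

# Hypothetical top layer: the main program driving the layers below it.
class OrderService:
    def __init__(self, pricing):
        self.pricing = pricing

    def checkout(self, net, region):
        return self.pricing.total(net, region)

# Top-down: test the top layer first, replacing the layer below with a stub.
stub_pricing = mock.Mock()
stub_pricing.total.return_value = 120.0
assert OrderService(stub_pricing).checkout(100.0, "EU") == 120.0

# Bottom-up: drive the lowest layers directly, then integrate upwards.
assert TaxTable().rate_for("EU") == 0.20
assert PricingEngine(TaxTable()).total(100.0, "EU") == 120.0
```

Either way, each interface is exercised in isolation before the full stack is assembled, which is what distinguishes both strategies from big-bang integration.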

Important tips for writing Integration tests

A) Creation of adequate test environment:
Create a test environment, including a test database, to run the tests in. It should be as close to the live release environment as possible; a frequent data refresh from the live system keeps the test database fresh and realistic. Multiple test environments may be necessary, but remember that they all need to be maintained, which can become a significant burden. A useful combination is one database that can be torn down and reinstated regularly to provide a clean environment every time, plus a database with a dataset similar to the live system to allow system, load, and performance testing.
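A minimal sketch of the "torn down and reinstated" database, using SQLite purely for illustration; the schema is invented, and a real project would point this at its own database server and schema scripts.

```python
import sqlite3

# Hypothetical schema script; in practice this would be your real DDL file.
SCHEMA = "CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT);"

def fresh_test_db():
    """Tear down and reinstate a clean database for each test run.
    An in-memory SQLite database stands in for the disposable environment."""
    db = sqlite3.connect(":memory:")
    db.executescript(SCHEMA)
    return db

db = fresh_test_db()
db.execute("INSERT INTO customers (name) VALUES ('Alice')")
assert db.execute("SELECT count(*) FROM customers").fetchone()[0] == 1
```

Because the environment is rebuilt from the schema every time, no test run can be poisoned by leftovers from a previous one.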

B) Beginning of the tests from "test sandbox":
The software testing engineer should start every test scenario with a script that creates a "test sandbox" containing just the data needed for that test. A common excuse for not running integration tests against a real database is that it would take too much effort to "reset" the database before every run of the test suite. In fact, it is much easier to have each test scenario simply set up just the data it needs.
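One way to sketch such a sandbox: tag every row a scenario creates with a unique run id, so each test sees only its own data and never needs a global reset. The table and columns here are hypothetical.

```python
import sqlite3
import uuid

def make_sandbox(db):
    """Seed just the data this scenario needs, tagged with a unique run id
    so concurrent or repeated runs never collide (hypothetical schema)."""
    run_id = uuid.uuid4().hex
    db.execute("INSERT INTO orders (run_id, status) VALUES (?, 'NEW')", (run_id,))
    return run_id

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, run_id TEXT, status TEXT)")

run_id = make_sandbox(db)
rows = db.execute("SELECT status FROM orders WHERE run_id = ?", (run_id,)).fetchall()
assert rows == [("NEW",)]
```

Each scenario queries only by its own run id, so the database can accumulate data from many runs without tests interfering with one another.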

C) Execution of startup script to kick-start the test scenarios:
Do not execute a tear-down script after each test. It may seem odd, but if a test has to clean up the database after itself in order to run successfully next time, then it relies entirely on its cleanup script having succeeded the previous time; if the cleanup failed last time, the test can never run again. It is far more robust for the software testing engineer to consistently run a "setup script" at the start of each test scenario.
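A sketch of such a setup script, with an invented table and customer name: the setup is idempotent, deleting any leftovers from a previous (possibly failed) run before inserting the known starting state, so no teardown is ever required.

```python
import sqlite3

def setup_scenario(db):
    """Idempotent setup run at the START of the scenario: remove whatever a
    previous run may have left behind, then insert the known starting state.
    No teardown needed; a crashed run cannot block the next one."""
    db.execute("DELETE FROM invoices WHERE customer = 'test-cust'")
    db.execute("INSERT INTO invoices (customer, amount) VALUES ('test-cust', 0)")

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE invoices (customer TEXT, amount REAL)")

setup_scenario(db)  # first run
setup_scenario(db)  # re-run after a hypothetical failed test: state is identical
count = db.execute(
    "SELECT count(*) FROM invoices WHERE customer = 'test-cust'").fetchone()[0]
assert count == 1
```

Because the script establishes the starting state unconditionally, the test is self-healing: however the previous run ended, the next run starts clean.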

D) Avoid execution of scenario tests as part of the regular build:
Running the scenario tests as part of the regular build leads to a very fragile build process. These tests make calls to external systems and depend heavily on the test data being in a specific state, not to mention other unpredictable factors, such as two people running the same tests at the same time against the same test database. Unit and controller tests belong in the automated build because their output is deterministic; scenario tests do not. The scenario tests must still be run regularly, however. If they are not set up to run automatically, chances are they will be forgotten and left to gradually rot until it takes too much effort to get them all passing again. So schedule them to run automatically on a server, hourly or at least once per night, and make sure the results are automatically emailed to the entire team.
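A small sketch of the reporting side of that scheduled run; the addresses, cron entry, and test command are all placeholders. A scheduler would run the suite, then format its output as an email for the whole team.

```python
from email.message import EmailMessage

def build_report(returncode, output, recipients):
    """Format a scenario-suite result as an email to the whole team.
    Addresses are placeholders; a real setup would send via its own SMTP host."""
    msg = EmailMessage()
    msg["Subject"] = f"Scenario tests: {'PASSED' if returncode == 0 else 'FAILED'}"
    msg["From"] = "ci@example.com"
    msg["To"] = ", ".join(recipients)
    msg.set_content(output)
    return msg

# A nightly cron entry (illustrative) would run the suite and mail the report:
#   0 2 * * *  run-scenario-tests > /tmp/scenario.log 2>&1
report = build_report(1, "2 failed, 40 passed", ["team@example.com"])
assert report["Subject"] == "Scenario tests: FAILED"
```

Emailing the raw output to everyone keeps a failing suite visible, which is the whole point of scheduling the run rather than burying it in the build.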

E) Consideration of scenario tests like Black-Box Tests:
Treat scenario tests as "black box" tests: they should know nothing about the internals of the code under test. Unit tests, by contrast, are "white box" tests, because they often need to set internal parameters in the code under test or substitute services with mock objects. Controller tests sit in between as "gray box" tests: it is preferable for them to stay black box, but occasionally they do need to dive deep into the code under test.
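The white-box/black-box distinction can be illustrated with a hypothetical currency converter that normally calls an external rate service. The class names and rates below are invented; the white-box unit test substitutes the service with a mock, while a black-box scenario test would go through the same public entry point against the real service.

```python
from unittest import mock

class ExchangeRates:
    """Hypothetical service; the real version would call an external system."""
    def rate(self, currency):
        raise NotImplementedError("live service not available in unit tests")

class Converter:
    def __init__(self, rates):
        self.rates = rates

    def to_usd(self, amount, currency):
        return amount * self.rates.rate(currency)

# White-box unit test: substitute the external dependency with a mock object.
fake_rates = mock.Mock()
fake_rates.rate.return_value = 1.25
assert Converter(fake_rates).to_usd(100, "EUR") == 125.0

# A black-box scenario test would instead construct Converter(ExchangeRates())
# and exercise the same public to_usd() call, with no knowledge of internals.
```

The scenario test exercises exactly the same entry point, but because it is black box it cannot rely on mocks or internal state, which is why it needs the real test environment described above.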

Important Lessons learnt:

1) Always remember that the objective of integration testing is to address both "external system calls" and "end-to-end testing." 

2) We should test the GUI code as part of our scenario tests.

3) We should deploy some business-friendly testing framework for storing the scenario tests.

4) We should create end-to-end scenario tests. However, the project must not be delayed if these turn out to be complex.

5) We should drive scenario tests from our use case scenarios.

6) We should drive the unit / controller-level integration tests from our conceptual design.

7) We should decide in advance which "level" of integration test to write.

8) We should look for the test patterns in our conceptual design.

9) We should remember to include the security tests in the integration tests.
