What is Integration Testing?
Integration testing (sometimes called Integration and Testing, abbreviated I&T) is the phase of software testing in which individual software modules are combined and tested as a group. It follows unit testing and precedes system testing.
Integration testing takes as its input modules that have been unit tested, groups them in larger aggregates, applies tests defined in an integration test plan to those aggregates, and delivers as its output the integrated system ready for system testing.
Purpose of Integration Testing
The purpose of integration testing is to verify the functional, performance, and reliability requirements placed on major design items. These "design items", i.e. assemblages (or groups of units), are exercised through their interfaces using black-box testing, with success and error cases simulated via appropriate parameter and data inputs.
Simulated usage of shared data areas and inter-process communication is tested, and individual subsystems are exercised through their input interfaces. Test cases are constructed to verify that all components within an assemblage interact correctly, for example across procedure calls or process activations. This is done after the individual modules have been tested, i.e. after unit testing.
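As a minimal sketch of the idea above, the following hypothetical example (the `PriceFormatter` and `ReportBuilder` components are invented for illustration) shows an integration test that exercises two components through their public interface, driving both a success case and an error case via parameter inputs:

```python
# Hypothetical components: one formats prices, the other builds report lines
# by calling the formatter across its interface.

class PriceFormatter:
    def format(self, amount):
        if amount < 0:
            raise ValueError("amount must be non-negative")
        return f"${amount:.2f}"

class ReportBuilder:
    def __init__(self, formatter):
        self.formatter = formatter  # collaborator exercised via its interface

    def line_item(self, name, amount):
        return f"{name}: {self.formatter.format(amount)}"

def test_components_interact_correctly():
    report = ReportBuilder(PriceFormatter())
    # success case: data flows correctly across the interface
    assert report.line_item("Widget", 9.5) == "Widget: $9.50"
    # error case: the failure propagates across the interface as expected
    try:
        report.line_item("Widget", -1)
        assert False, "expected ValueError"
    except ValueError:
        pass

test_components_interact_correctly()
```

Note that the test treats both components as black boxes: it only calls `line_item` and observes the result, without inspecting either component's internals.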
What Are the Entry Criteria for Integration Testing?
The main entry criterion for integration testing is the completion of unit testing. If the individual units have not been properly tested for their functionality, integration testing should not be started.
Integration Testing Principles
Configuration should be minimal
If the integration test requires the presence of an external resource (such as a database, web service, etc.), the software implementing this resource needs to be installed on all machines where the test will run; that is, if you need to test data access logic, SQL Server must be installed; if you need to test queueing, MSMQ must be installed, etc.
However, that does not mean you should also require a user to configure a database on SQL Server, a queue on MSMQ, etc. Many products allow you to automate configuration, so this configuration should be part of the initialization and clean-up logic for the test suite.
The end result is that you should only require minimal configuration to enable the test suite to run; often, this is equivalent to requiring that the product is installed on the machine and that the test code has privileges to perform automated configuration.
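The initialization idea above can be sketched as follows. This is a minimal illustration, with SQLite standing in for SQL Server so the example is self-contained; the only "configuration" the suite needs is permission to create a file, and the schema is built by the suite itself rather than by hand:

```python
import os
import sqlite3
import tempfile

# Assumption: SQLite stands in for the real external resource. The suite's
# initialization performs all configuration automatically.

DB_PATH = os.path.join(tempfile.gettempdir(), "integration_tests.db")

def setup_suite():
    """Automated configuration: create the schema the tests need."""
    conn = sqlite3.connect(DB_PATH)
    conn.execute(
        "CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, item TEXT)"
    )
    conn.commit()
    return conn

conn = setup_suite()
# The suite can now run with no manual database configuration by the user.
tables = [
    r[0]
    for r in conn.execute("SELECT name FROM sqlite_master WHERE type='table'")
]
assert "orders" in tables
```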
Test cases should be independent
This is a requirement inherited from unit testing in general, but in integration testing it can be more difficult to achieve. Particularly when you are dealing with a persistent store (such as a database or transacted queue), a test case will often leave the store in a state different from the one it was in before the test case executed (e.g., if a test case deletes a row from a database table).
A corollary to test case independence is that all test cases should begin in a known state. This means it is necessary to write test initialization code that ensures the external resource is in a known state.
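A minimal sketch of per-test initialization, again using SQLite as a stand-in store: `setUp` rebuilds the fixture rows before every test, so a test that deletes a row cannot affect the next test, regardless of execution order.

```python
import sqlite3
import unittest

class OrderStoreTest(unittest.TestCase):
    def setUp(self):
        # Initialization code: put the store into a known state before each test.
        self.conn = sqlite3.connect(":memory:")
        self.conn.execute(
            "CREATE TABLE orders (id INTEGER PRIMARY KEY, item TEXT)"
        )
        self.conn.execute("INSERT INTO orders VALUES (1, 'widget')")  # fixture row
        self.conn.commit()

    def test_delete_changes_the_store(self):
        self.conn.execute("DELETE FROM orders WHERE id = 1")
        count = self.conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
        self.assertEqual(count, 0)

    def test_still_sees_known_state(self):
        # Independent of the deleting test above: setUp restored the fixture.
        count = self.conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
        self.assertEqual(count, 1)

runner = unittest.TextTestRunner(verbosity=0)
result = runner.run(
    unittest.defaultTestLoader.loadTestsFromTestCase(OrderStoreTest)
)
assert result.wasSuccessful()
```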
Tests should be efficient
A secondary goal is that tests should execute as quickly as possible. While test case independence can be achieved by simply unconfiguring the external resource completely and then reconfiguring it before each test, this may not be the fastest solution.
If you consider a database, you could simply drop the database and recreate it before each test, but that's not the fastest solution; a faster approach is to clear the data out of all tables between test cases.
The test suite should clean up after itself
When the test run is finished, it should leave the test machine in the same state it was in before the run started. If the suite created any databases in SQL Server, it should delete those databases again; if it created any queues in MSMQ, it should remove those queues again, etc.
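A minimal sketch of suite-level clean-up, with a SQLite file standing in for a server-hosted database: the teardown step removes everything the run created, leaving the machine as it found it.

```python
import os
import sqlite3
import tempfile

# The suite creates its own resource in a private temporary directory...
db_dir = tempfile.mkdtemp()
db_path = os.path.join(db_dir, "suite.db")

def setup_suite():
    conn = sqlite3.connect(db_path)
    conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY)")
    conn.close()

def teardown_suite():
    """Clean-up: remove the database (and directory) the suite created."""
    os.remove(db_path)
    os.rmdir(db_dir)

setup_suite()
assert os.path.exists(db_path)      # resource exists during the run
teardown_suite()
assert not os.path.exists(db_path)  # machine restored to its original state
```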
What Are the Exit Criteria of Integration Testing?
Integration testing is complete when all the interfaces where components interact with each other have been covered. It is important to cover negative cases as well, because components may make assumptions about the data they receive.
Some different types of integration testing are
- Big bang Approach
- Top-down integration Approach
- Bottom-up integration Approach
- Hybrid integration Approach
Big Bang Approach
In the Big Bang approach, all or most of the developed modules are coupled together to form a complete software system or major part of the system and then used for integration testing.
The Big Bang method is very effective for saving time in the integration testing process. However, if the test cases and their results are not recorded properly, the entire integration process will be more complicated and may prevent the testing team from achieving the goal of integration testing.
A type of "Big Bang" integration testing is called Usage Model testing. Usage Model testing can be used in both software and hardware integration testing. The idea behind this type of integration testing is to run user-like workloads in integrated, user-like environments. By testing in this manner, the environment is proofed, while the individual components are proofed indirectly through their use.
Usage Model testing takes an optimistic approach to testing, because it expects few problems with the individual components. The strategy relies heavily on the component developers to do the isolated unit testing for their product.
The goal of the strategy is to avoid redoing the testing done by the developers and instead flesh out problems caused by the interaction of the components in the environment. For integration testing, Usage Model testing can be more efficient and provides better test coverage than traditional focused functional integration testing.
To be efficient and accurate, care must be taken in defining the user-like workloads so that they create realistic scenarios when exercising the environment. This gives added confidence that the integrated environment will work as expected for the target customers.
Limitation of Big Bang Approach
Any conditions not stated in specified integration tests, outside of the confirmation of the execution of design items, will generally not be tested.
Top-Down Integration Approach
Top-down integration testing is an incremental integration testing technique that begins by testing the top-level module and progressively adds lower-level modules one by one.
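The top-down technique can be sketched as follows. The module names (`OrderService`, `StubInventory`, `RealInventory`) are hypothetical: the top-level module is tested first against a stub that stands in for a lower-level module not yet integrated, and the stub is then replaced by the real implementation.

```python
class StubInventory:
    """Stub standing in for the not-yet-integrated lower-level module."""
    def in_stock(self, item):
        return True  # canned answer, no real logic

class RealInventory:
    """The real lower-level module, integrated later."""
    def __init__(self, stock):
        self.stock = stock
    def in_stock(self, item):
        return self.stock.get(item, 0) > 0

class OrderService:
    """Top-level module, tested first in top-down integration."""
    def __init__(self, inventory):
        self.inventory = inventory
    def place(self, item):
        return "accepted" if self.inventory.in_stock(item) else "rejected"

# Step 1: exercise the top-level module against the stub.
assert OrderService(StubInventory()).place("widget") == "accepted"

# Step 2: progressively swap the stub for the real lower-level module.
assert OrderService(RealInventory({"widget": 3})).place("widget") == "accepted"
assert OrderService(RealInventory({})).place("widget") == "rejected"
```

The same tests run unchanged against stub and real module, which is what lets top-down integration proceed one lower-level module at a time.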