What is Black Box Testing Technique?
Black box testing is a technique in which we provide inputs and check for the desired outputs, without looking into the code to determine what those inputs should be.
Black box testing can be divided into three sub-techniques:
- Equivalence Class Partitioning
- Boundary Value Analysis
- Error Guessing
Equivalence Class Partitioning
Equivalence partitioning is a black-box method for deriving test cases. Classes of input conditions, called equivalence classes, are identified such that each member of a class causes the same kind of processing and output to occur. The tester identifies the various equivalence classes used to partition the input.
A class is a set of input conditions that is likely to be handled the same way by the system. If the system handles one case in the class erroneously, it will handle all cases in the class erroneously.
- Divide the input domain of a program into classes of data
- Derive test cases based on these partitions
An equivalence class represents a set of valid or invalid states for input conditions.
An input condition is:
- A specific numeric value or a range of values
- A set of related values, or a Boolean condition
Importance of Learning Equivalence Partitioning
Equivalence partitioning drastically cuts down the number of test cases required to test a system reasonably. It is an attempt to get a good 'hit rate': to find the most errors with the smallest number of test cases.
Designing Test Cases Using Equivalence Partitioning
To use equivalence partitioning, you need to perform two steps:
- Identify the equivalence classes
- Design test cases
Identify the equivalence classes
Take each input condition described in the specification and derive at least two equivalence classes for it: one class represents the set of cases that satisfy the condition (the valid class), and one represents the cases that do not (the invalid class).
A general guideline for identifying the equivalence classes:
If the requirements state that a numeric value input to the system must be within a range of values, identify one valid class (inputs within the valid range) and two invalid equivalence classes (inputs that are too low and inputs that are too high).
For example, if an item in inventory can have a quantity of -9999 to +9999, identify the following classes:
- One valid class: QTY is greater than or equal to -9999 and less than or equal to 9999, written as (-9999 <= QTY <= 9999)
- One invalid class: QTY is less than -9999, written as (QTY < -9999)
- One invalid class: QTY is greater than 9999, written as (QTY > 9999)
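As a sketch, one representative test case per class is enough to cover all three partitions. The `validate_qty` function below is hypothetical, assumed only to illustrate the idea:

```python
def validate_qty(qty):
    """Hypothetical validator: accept quantities in the range -9999..9999."""
    return -9999 <= qty <= 9999

# One representative per equivalence class covers the whole partition:
representatives = [
    (500, True),      # valid class: -9999 <= QTY <= 9999
    (-12000, False),  # invalid class: QTY < -9999
    (12000, False),   # invalid class: QTY > 9999
]

for qty, expected in representatives:
    assert validate_qty(qty) == expected
```

If any representative fails, by the equivalence assumption every other member of that class would fail the same way, so testing more values from the same class adds little.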
Boundary Value Analysis
Boundary value analysis is a software test design technique for deriving test cases that cover off-by-one errors. The boundaries of a software component's input ranges are areas where problems frequently occur.
Testing experience has shown that the boundaries of input ranges to a software component are especially liable to defects. For example, a programmer implementing an input range of 1 to 12 (standing for the months January to December in a date) has a line in the code checking this range.
This may look like:
if (month > 0 && month < 13)
But a common programming error is to check the wrong range, e.g., starting the range at 0 by writing:
if (month >=0 && month < 13)
For more complex range checks, such a problem may not be as easily spotted as in the simple example above.
Applying boundary value analysis, you select a test case on each side of the boundary between two partitions. In the example above, this would be 0 and 1 for the lower boundary, and 12 and 13 for the upper boundary.
Each of these pairs consists of a "clean" and a "dirty" test case. A "clean" test case should give a valid operation result from your program. A "dirty" test case should lead to the correct, specified input-error treatment, such as limiting the value, using a substitute value, or, in the case of a program with a user interface, a warning and a request to enter correct data.
Boundary value analysis can yield six test cases: n, n - 1, and n + 1 for the lower limit, and n, n - 1, and n + 1 for the upper limit.
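A minimal sketch of those six cases, applied to the month check from the example above:

```python
def is_valid_month(month):
    """The correct range check from the example: months 1..12."""
    return month > 0 and month < 13

LOWER, UPPER = 1, 12

# n - 1, n, n + 1 around each limit gives six boundary test cases.
boundary_cases = [LOWER - 1, LOWER, LOWER + 1, UPPER - 1, UPPER, UPPER + 1]

results = {m: is_valid_month(m) for m in boundary_cases}
# 0 and 13 are the "dirty" cases; 1, 2, 11, and 12 are "clean".
```

Had the check been written with the faulty `month >= 0` condition, the dirty test case 0 would expose the defect immediately, while interior values such as 6 would never catch it.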
A further set of boundaries has to be considered when you set up your test cases: a solid testing strategy also considers the natural boundaries of the data types used in the program. If you are working with signed values, this is especially the range around zero (-1, 0, +1).
Similar to the typical range-check faults, programmers tend to have weaknesses in their programs in this range, e.g., a division-by-zero problem where a zero value occurs even though the programmer always thought the range started at 1.
It could also be a sign problem, when a value turns out to be negative in some rare case although the programmer always expected it to be positive. Even if this critical natural boundary lies within a single equivalence partition, it should lead to additional test cases checking the range around zero.
A further natural boundary is the lower and upper limit of the data type itself. E.g., an unsigned 8-bit value has the range 0 to 255. A good test strategy would also check how the program reacts to inputs of -1 and 0 as well as 255 and 256.
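A sketch of such a data-type boundary check, assuming a hypothetical input that must fit an unsigned 8-bit value:

```python
def fits_unsigned_byte(value):
    """Hypothetical check: does the value fit an unsigned 8-bit integer?"""
    return 0 <= value <= 255

# Natural data-type boundaries: -1 and 0 at the bottom, 255 and 256 at the top.
for value, expected in [(-1, False), (0, True), (255, True), (256, False)]:
    assert fits_unsigned_byte(value) == expected
```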
The tendency is to relate boundary value analysis to so-called black-box testing, which strictly checks a software component at its interfaces without considering the internal structure of the software. But on closer examination, there are cases where it also applies to white-box testing.
After determining the necessary test cases with equivalence partitioning and subsequent boundary value analysis, it is necessary to define the combinations of the test cases when there are multiple inputs to a software component.
Error Guessing
Error Guessing is the art of guessing where errors can be hidden, and it comes with experience with the technology and the project. There are no specific tools and techniques for it, but we can write test cases depending on the situation: either when reading the functional documents, or when we are testing and find an error that we have not documented.
Error Guessing is not in itself a testing technique but rather a skill that can be applied to all the other testing techniques to produce more effective tests (i.e., tests that find defects).
Error Guessing is the ability to find errors or defects in the AUT (application under test) by what appears to be intuition. In fact, testers who are effective at error guessing actually use a range of techniques, including:
- Knowledge about the AUT, such as the design method or implementation technology
- Knowledge of the results of any earlier testing phases (particularly important in Regression Testing)
- Experience of testing similar or related systems (and knowing where defects have arisen previously in those systems)
- Knowledge of typical implementation errors (such as division by zero errors)
- General testing rules of thumb or heuristics.
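These heuristics often translate into a checklist of historically error-prone inputs. As a sketch, a tester might guess at empty input and values around zero before ever reading the code; the `average` function here is hypothetical:

```python
def average(values):
    """Hypothetical function under test: mean of a list of numbers."""
    return sum(values) / len(values) if values else 0.0

# Error-guessing checklist: inputs that commonly expose defects.
guessed_inputs = [
    [],        # empty input -- classic division-by-zero candidate
    [0],       # a zero value
    [-1, 1],   # sign problems around zero
]

outcomes = [average(v) for v in guessed_inputs]
```

A naive implementation without the empty-list guard would raise `ZeroDivisionError` on the very first guessed input, which is exactly the kind of defect this skill is meant to flush out.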
Error guessing is a skill well worth cultivating, since it can make testing much more effective and efficient, two extremely important goals in the testing process.