|Question Papers for ISTQB|
When I was preparing for the ISTQB certification, I went through the study material provided by our guide and several other sites that offered practice questions. Here is the set of questions that we can practise before we sit for the ISTQB certification.
In 1998, in Britain, the first approach to a multi-level qualification programme of this kind was developed.
Indian Testing Board: 2005 (Foundation level)
Indian Testing Board: 2004 (Advanced level)
Different Testing boards:
The Spanish Testing Board: SSTQB
The German Testing Board: GTB
International Software Quality Institute: iSQI
Global Association for Software Quality: GASQ
ISTQB Newsletter
This post will help you clear the ISTQB Foundation exam, as well as help you answer ISTQB questions in interviews.
Few Terminologies for ISTQB Guide:
Test Scenario :
It can be a single test case, a collection of test cases, or a test script. It is a particular situation that is verified by a set of test cases.
Test Condition :
It is a condition under which the test is performed. The pre-condition that is specified as part of the test case is also a test condition.
Test Strategy :
It is a high-level description of the testing activities that will be performed in the programme or the organization.
Test Plan :
A software project test plan is a document that describes the objectives, scope, approach and focus of a software testing effort. The process of preparing a test plan is a useful way to think through the effort needed to validate the acceptability of a software product. The completed document helps people outside the test group understand the why and how of product validation. It should be thorough enough to be useful, but not so exhaustive that no one outside the test group will be able to read it.
It is the document that describes the
- Approach of testing
- Resource requirements
- Entry criteria and Exit Criteria
- Features that will be tested
- Features that will not be tested
- Cycles of testing (system, regression, ad hoc, etc.)
- Test environment.
- Risks and the Contingency plan.
System Requirement Specification:
A structured collection of information that specifies the requirements of the system.
This is generally done by the Business Analyst. The information provided is generally a high-level requirement, written from an end-user perspective.
Functional Requirement Specification
This is a much more detailed list of the functions the system is supposed to perform. Ideally, this document is designed jointly by the business owner, the developers and QA.
It consists of workflow diagrams, the functions performed on each screen, compliance requirements, the operation of each screen, etc. It is usually written for a particular release (in the case of iterative development) or project.
I suppose this is referring to the SDLCs. In that case, it depends on the product being developed.
It can be iterative development.
It can be a W-model or V-model.
Nowadays it is often Agile or Extreme Programming.
Error: a human action that produces an incorrect result, e.g. a programming mistake.
Defect: an incorrect flaw in the system that does not give the expected output.
Failure: a defect encountered during execution may cause a failure, i.e. a deviation of the actual outcome from the expected outcome.
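To make the chain concrete, here is a small hypothetical Python sketch (the `average` function is invented for illustration): the programmer's mistake is the error, the flawed line left in the code is the defect, and the wrong output at run time is the failure.

```python
# Hypothetical example: the programmer's mistake (error) leaves a flaw
# in the code (defect); executing the code exposes it (failure).

def average(values):
    # Error: the programmer typed len(values) + 1 instead of len(values).
    # That mistake is now a defect sitting in the code.
    return sum(values) / (len(values) + 1)

def average_fixed(values):
    # Corrected version: divides by the actual number of values.
    return sum(values) / len(values)

# The defect only becomes a failure when the code is executed and the
# actual output deviates from the expected output.
print(average([2, 4, 6]))        # failure: 3.0 instead of the expected 4.0
print(average_fixed([2, 4, 6]))  # expected outcome: 4.0
```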
Causes of software failure:
Defects may be introduced into the code of a system.
Reasons for this include:
- Time pressure
- Excessive demands
- Wrong understanding
- Wrong requirements
Failures can also be caused by changes in the system environment, for example:
- Hard disk crashes
Debugging and Software Testing:
Debugging is done by developers; software testing is done by testers.
The objective of testing is to identify the defect. The objective of debugging is to rectify the identified defect.
A tester identifies the defect in the software testing phase and sends it to the development team to fix. The developer then re-runs the code, identifies the cause of the defect and fixes it. This is debugging.
Features of Software Testing and who performs it?
- Test early and test often.
- Integrate the application development and testing life cycles. You’ll get better results and you won’t have to mediate between two armed camps in your IT shop.
- Formalize a testing methodology; you’ll test everything the same way and you’ll get uniform results.
- Develop a comprehensive test plan; it forms the basis for the testing methodology.
- Use both static and dynamic testing.
- Define your expected results.
- Understand the business reason behind the application. You’ll write a better application and better testing scripts.
- Use multiple levels and types of testing (regression, systems, integration, stress and load).
- Review and inspect the work; it will lower costs.
- Don’t let your programmers check their own work; they’ll miss their own errors.
- Software Testing is performed by Software Testers.
Limitations of Software Testing:
Software testing can show the presence of errors; it cannot show the absence of errors.
If at any point during planned test execution the test entry criteria are found to be not satisfied, test execution will terminate until such time that the unsatisfactory condition is removed, at which point test execution shall resume.
Testing may be suspended for the following reasons:
1. Test Environment is unavailable or cannot be efficiently utilized due to performance or other issues.
2. Bug tracking tool is unavailable or cannot be efficiently utilized due to performance or other issues.
3. Smoke Test failure.
4. Production test data is unavailable.
5. Test data is invalid and causes component failure.
6. Insufficient knowledge transfer and no clarification for queries
7. Unscheduled outage
These are the conditions under which testing cannot proceed.
Testing will be resumed when the cause of test suspension has been resolved and the testing team has been notified in accordance with the communication plan.
Note that these are suspension and resumption criteria, not entry and exit criteria.
The links for ISTQB Mock Tests are as follows:
Types of testing:
An endurance test is typically a subset of a load test that exercises the workload and load volumes anticipated during production operation. It is also called soak testing.
As per Wiki..
This test is usually done to determine if the application can sustain the continuous expected load. During endurance tests, memory utilization is monitored to detect potential leaks. Also important, but often overlooked is performance degradation. That is, to ensure that the throughput and/or response times after some long period of sustained activity are as good or better than at the beginning of the test.
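As a rough illustration (not a real load-testing tool), the leak-detection side of an endurance test can be sketched with Python's standard `tracemalloc` module; `do_work` and the cache are invented stand-ins for the workload:

```python
import tracemalloc

# Minimal sketch of the memory-monitoring side of an endurance test:
# run the same workload repeatedly and watch whether memory grows.
# leaky_cache and do_work() are hypothetical stand-ins for the system
# under test.

leaky_cache = []

def do_work(leak=False):
    data = list(range(1000))
    if leak:
        leaky_cache.append(data)  # simulated memory leak

def soak(iterations, leak):
    tracemalloc.start()
    do_work(leak)                          # warm-up run
    baseline, _ = tracemalloc.get_traced_memory()
    for _ in range(iterations):
        do_work(leak)
    current, _ = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return current - baseline              # memory growth over the run, in bytes

print("leaky growth:", soak(200, leak=True))   # keeps climbing
print("clean growth:", soak(200, leak=False))  # stays roughly flat
```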
Smoke Vs Sanity Testing:
When a build is received, a smoke test is run to ascertain whether the build is stable and can be considered for further testing. Smoke testing can be done to test the stability of any interim build, and can be executed for platform qualification tests.
Once a new build with minor revisions is obtained, instead of doing a thorough regression, a sanity test is performed to ascertain that the build has indeed rectified the issues and that no further issues have been introduced by the fixes. It is generally a subset of regression testing, and a group of test cases related to the changes made to the application is executed.
Generally, when multiple cycles of testing are executed, sanity testing may be done during the later cycles, after thorough regression cycles.
Static analysis tools
Static analysis tools applied to source code can enforce coding standards, but if applied to existing code may generate a lot of messages. Warning messages do not stop the code being translated into an executable program, but should ideally be addressed so that maintenance of the code is easier in the future. A gradual implementation with initial filters to exclude some messages would be an effective approach.
What is the difference between integration test cases and system test cases?
Integration testing happens between the components/units of the application once they are built; system testing happens once integration testing of all the components has completed successfully. System testing is nothing but testing the system as a whole!
Smoke Vs Sanity Testing:
|Smoke Testing||Sanity Testing|
|Smoke testing originated in the hardware testing practice of turning on a new piece of hardware for the first time and considering it a success if it does not catch fire and smoke. In the software industry, smoke testing is a shallow and wide approach whereby all areas of the application are tested without going into too much depth.||A sanity test is a narrow regression test that focuses on one or a few areas of functionality. Sanity testing is usually narrow and deep.|
|A smoke test is scripted, using either a written set of tests or an automated test.||A sanity test is usually unscripted.|
|A smoke test is designed to touch every part of the application in a cursory way. It is shallow and wide.||A sanity test is used to determine that a small section of the application is still working after a minor change.|
|Smoke testing is conducted to ensure that the most crucial functions of a program work, without bothering with finer details (such as build verification).||Sanity testing is cursory testing; it is performed whenever cursory testing is sufficient to prove that the application is functioning according to specifications. This level of testing is a subset of regression testing.|
|Smoke testing is a normal health check-up of a build of an application before taking it into in-depth testing.||Sanity testing is done to verify whether the requirements are met or not, checking all features breadth-first.|
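The shallow-and-wide idea can be sketched as follows; the `app` dictionary and the three checks are purely hypothetical stand-ins for a real build:

```python
# Hedged sketch of a breadth-first smoke check: touch every major area
# shallowly and reject the build for deeper testing if anything
# fundamental is broken. The checks and the app object are hypothetical.

def smoke_test(app):
    checks = {
        "starts":   app.get("started", False),
        "login":    app.get("login_works", False),
        "homepage": app.get("homepage_loads", False),
    }
    failed = [name for name, ok in checks.items() if not ok]
    # The build is accepted for in-depth testing only if every check passes.
    return len(failed) == 0, failed

stable_build = {"started": True, "login_works": True, "homepage_loads": True}
broken_build = {"started": True, "login_works": False, "homepage_loads": True}

print(smoke_test(stable_build))  # (True, [])
print(smoke_test(broken_build))  # (False, ['login'])
```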
1. Test Plan: Covers how the testing will be managed, scheduled and executed.
2. Test Design Specification: Defines logically what needs to be tested by examining the requirements or features; these requirements can then be converted into test conditions.
3. Test Case Specification: Converts the test conditions into test cases by adding real data, pre-conditions and expected results.
4. Test Procedure: Describes in practical terms how the tests are run.
5. Test Item Transmittal Report: Specifies the items released for testing.
6. Test Log: An audit trail that records the details of tests in chronological order.
7. Test Incident Report: Records details of any unexpected events and behaviours that need to be investigated.
8. Test Summary Report: Summarises and evaluates the tests.
Functional Quality Attributes:
The focus areas are:
1. Correctness: the functionality meets the required attributes.
2. Completeness: the functionality meets all requirements.
Non-Functional Quality Attributes
IEEE-829 in brief
IEEE 829 standard for software testing documentation
One of the challenges facing software testers has been the availability of an agreed set of document standards and templates for testing. The IEEE 829 provides an internationally recognised set of standards for test planning documentation.
IEEE 829 has been developed specifically with software testing in mind and is applicable to each stage of the testing life cycle including system and acceptance testing.
Types of Quality Assurance (QA):
Constructive QA: this process aims to prevent defects.
1. Defects need to be fixed.
2. Defects should not be repeated.
It has two classifications:
1. Process rules and regulations
2. IDE (Integrated Development Environment)
Analytical QA: this process is for finding defects, leading to correcting defects and preventing failures.
1. Defects are detected as early as possible.
2. Static testing: examination without execution.
3. Dynamic testing: finding defects by executing the programme.
It has two classifications:
1. Dynamic testing
1.1. Black-box techniques
1.1.1. Equivalence partitioning
1.1.2. Boundary value analysis
1.1.3. State transition testing
1.1.4. Decision table
1.1.5. Use-case based testing
1.2. Experience-based testing
1.3. White-box techniques
1.3.1. Statement coverage
1.3.2. Branch coverage
1.3.3. Condition coverage
1.3.4. Path coverage
2. Static analysis
2.2. Control flow analysis
2.3. Data flow analysis
2.4. Code metrics
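To illustrate the difference between the first two white-box coverage levels, here is a small sketch with an invented `grade` function:

```python
# Sketch contrasting statement coverage and branch coverage on a tiny
# function with one decision.

def grade(score):
    result = "fail"
    if score >= 50:
        result = "pass"
    return result

# One test case with score=60 executes every statement (100% statement
# coverage), but the False outcome of the decision is never taken.
statement_only = [60]

# Branch coverage additionally needs a case where the condition is
# False, so both outcomes of the decision are exercised.
branch_cases = [60, 40]

print([grade(s) for s in statement_only])  # ['pass']
print([grade(s) for s in branch_cases])    # ['pass', 'fail']
```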
Bug Life cycle:
The standard bug life cycle, which I have collected from Bugzilla, is given below.
Generally, what we tend to follow is a more simplified version of the above.
Refused vs Could not Reproduce Vs Rejected bugs:
I heard this term for the first time here. After a lot of research I came to the conclusion that a few bug-tracking tools use this information.
This can happen in two different ways:
Could not reproduce:
If the developer is not able to reproduce the bug from the steps given in the bug report by QA, the developer can mark the bug as 'CNR'. QA then needs to check whether the bug is reproducible and can assign it back to the developer with detailed reproduction steps. In most cases these turn out to be environment issues.
Or the developer may need more information: if the developer is not clear about the reproduction steps provided by QA, he or she can mark the bug as 'Need more information'. In this case QA needs to add detailed reproduction steps and assign the bug back to the developer for a fix.
Sometimes a developer or team lead can mark a bug as Rejected or Invalid if the system is working according to the specifications and the bug is just due to some misinterpretation.
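A much-simplified sketch of such a life cycle as a transition table (an assumption for illustration, not Bugzilla's exact workflow):

```python
# Simplified bug life cycle modelled as a state-transition table.
# The states and allowed transitions are an illustrative assumption.

TRANSITIONS = {
    "NEW":                 {"ASSIGNED", "REJECTED"},
    "ASSIGNED":            {"FIXED", "COULD_NOT_REPRODUCE", "REJECTED"},
    "COULD_NOT_REPRODUCE": {"ASSIGNED"},   # QA adds steps, reassigns
    "FIXED":               {"CLOSED", "REOPENED"},  # retest passes or fails
    "REOPENED":            {"ASSIGNED"},
    "REJECTED":            set(),
    "CLOSED":              set(),
}

def move(state, new_state):
    # Reject any transition the workflow does not allow.
    if new_state not in TRANSITIONS[state]:
        raise ValueError(f"illegal transition {state} -> {new_state}")
    return new_state

# Walk one typical path: fix fails retesting once, then succeeds.
state = "NEW"
for step in ["ASSIGNED", "FIXED", "REOPENED", "ASSIGNED", "FIXED", "CLOSED"]:
    state = move(state, step)
print(state)  # CLOSED
```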
What is difference between Retesting and Regression Testing?
We generally get confused by the two terms retesting and regression testing; a few of us think they are the same. By retesting some mean that we need to test the same application again, and by regression that we again need to test end to end.
Retesting is a type of testing done after a bug fix. Say testers raise a few defects for a build; developers fix those bugs and hand the application back to the testers. This time the testing team evaluates the bug fixes: they go through all the defects and check whether each one is really fixed. If a defect is fixed, the tester closes it; otherwise it goes to the reopened state and is assigned back to the developers.
This process is called retesting.
Now think about this scenario: when developers fix defects or modify code, testers need to make sure the change has no harmful impact on the application, since changing code in one place can affect another. Test cases that previously passed may begin to fail and lead the application to a serious failure. To prevent this, after a modification testers are supposed to test the entire application end to end to check its integrity.
This is called regression testing.
What is ECP(Equivalence partitioning)?
read here for more.
Types of Bugs:
It is extremely important to understand the type & importance of every bug detected during the testing & its subsequent effect on the users of the subject software application being tested.
Such information is helpful to the developers and the management in deciding the urgency or priority of fixing the bug during the product-testing phase.
Following Severity Levels are assigned during the Testing Phase:
Critical defects:
Critical is the most dangerous level; it does not permit the testing effort to continue beyond a particular point. A critical situation can arise when an error message pops up or the system crashes, forcing full or partial closure of the application. The criticality of the situation is judged by the fact that no workaround of any type is feasible. A bug can also fall into the "Critical" category when a menu option is absent or special security permissions are needed to access the function being tested. We sometimes call such a bug a stopper. Typically the following come under this level:
1. Wrong functionality
2. Fundamental problems
High priority defects:
A high-impact bug is a major defect: the product fails to behave according to the desired expectations, or it can lead to malfunctioning of other functions, thereby failing to meet the customer requirements. Bugs in this category can be tackled through some sort of workaround. Examples of bugs of this type are mistakes in calculation formulas or an incorrect field format in the database causing record updates to fail. Likewise there can be many instances.
2. Improper Service Levels (Control flow defects)
3. Interpreting Data Defects
4. Race Conditions (Compatibility and Intersystem defects)
5. Load Conditions (Memory Leakages under load)
Medium priority defects:
Defects falling under this category of medium or average severity do not affect the performance of the application, but they are certainly not acceptable due to non-conformance to standards or company-wide conventions. Medium-level bugs are comparatively easier to tackle, since simple workarounds can achieve the desired behaviour. An example of a bug of this type is a mismatch between a visible link and its corresponding text link.
1.Boundary Related Defects
2.Error Handling Defects
Low priority defects:
Defects falling under the low-priority or minor category are the ones that do not affect the functionality of the product. Low-severity failures generally do not happen during normal usage of the application and have very little effect on the business. Such bugs are generally related to the look and feel of the user interface and are mainly cosmetic in nature.
1.User Interface Defects
What is the next step when we find a bug?
When a bug is found, it needs to be communicated and assigned to developers that can fix it. After the problem is resolved, fixes should be re-tested. Additionally, determinations should be made regarding requirements, software, hardware, safety impact, etc., for regression testing to check the fixes didn’t create other problems elsewhere. If a problem-tracking system is in place, it should encapsulate these determinations. A variety of commercial, problem-tracking/management software tools are available. These tools, with the detailed input of software test engineers, will give the team complete information so developers can understand the bug, get an idea of its severity, reproduce it and fix it.
Alpha Testing Vs Beta Testing
In today’s system, the role of the customer while developing a system is very important. Typically, software goes through two stages of testing before it is considered finished. The first stage, called alpha testing, is often performed only by users within the organization developing the software. The second stage, called beta testing, generally involves a limited number of external users.
Alpha testing is simulated or actual operational testing by potential users/customers or an independent test team at the developers’ site. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing, before the software goes to beta testing–wiki
Beta testing comes after alpha testing and can be considered a form of external user acceptance testing. Versions of the software, known as beta versions, are released to a limited audience outside of the programming team. The software is released to groups of people so that further testing can ensure the product has few faults or bugs. Sometimes, beta versions are made available to the open public to increase the feedback field to a maximal number of future users—wiki
A test for a computer product prior to commercial release. Beta testing is the last stage of testing, and normally can involve sending the product to beta test sites outside the company for real-world exposure or offering the product for a free trial download over the Internet. Beta testing is often preceded by a round of testing called alpha testing
Q: What does “Beta” mean?
A: Beta testing is the process of taking software which is in the process of development and giving it to a larger group of users to test. While the testing done in our laboratory is extensive, beta testing will expose many problems not found in our laboratory testing by exposing the software to a much broader range of uses and environments. We beta test software and services because no laboratory can possibly cover every use and set of conditions that will exist after the software has been released.
A free pre-release version of software made available for public use by a company is typically a beta release.
What is the difference between Verification and Validation?
Q: What is verification?
A: Verification ensures the product is designed to deliver all functionality to the customer; it typically involves reviews and meetings to evaluate documents, plans, code, requirements, and specifications; this can be done with checklists, issues lists, walkthroughs and inspection meetings.
Q: What is validation?
A: Validation ensures that functionality, as defined in requirements, is the intended behavior of the product; validation typically involves actual testing and takes place after verification is completed.
|Traceability in Projects|
4 Different Types Of Traceability in Projects:
Vertical traceability :
It identifies the origin of items (e.g., customer needs) and follows these same items as they travel through the hierarchy of the Work Breakdown Structure to the project teams and eventually to the customer. When requirements are managed well, traceability can be established from a source requirement to its lower-level requirements and from the lower-level requirements back to their source. Such bidirectional traceability helps determine that all source requirements have been completely addressed and that all lower-level requirements can be traced to a valid source. Horizontal traceability is also important and is mentioned in sub-practice 3, but it is not required to satisfy bidirectional traceability.
Horizontal traceability :
It identifies the relationships among related items across workgroups or product components for the purpose of avoiding potential conflicts. It enables the project to anticipate potential problems (and mitigate or solve them) before integration testing. For example, horizontal traceability would follow related requirements across two work groups working on two associated components of a product. The traceability across these two work groups enables the work groups to see when and how a change in a requirement for one of the components may affect the other component. Thus, horizontal traceability enables the project to anticipate potential problems (and mitigate or solve them) before integration testing.
In business terms, this is traceability "from requirements through the associated life-cycle work products of architecture specifications, detailed designs, code, unit test plans, integration test plans, system test plans, and so forth, and back".
Traceability to plans refers to the traceability from the requirements to the associated plans such as the project plan, quality assurance plan, configuration management plan, risk management plan, and so forth.
Negative testing is basically a "test to fail" attitude.
Hence a sample scenario for negative testing in Notepad:
Type "this app can break", then save the file and close Notepad.
When you reopen it, you cannot view the same content (it will appear corrupted).
That is, if you type content in Notepad in the 4-3-3-5 letter pattern (for example, "this app can break"), it does not work as intended.
How many test cases are necessary to cover all the possible sequences of statements (paths) for the following program fragment? Assume that the two conditions are independent of each other : –
if (Condition 1)
then statement 1
else statement 2
if (Condition 2)
then statement 3
a. 2 Test Cases
b. 3 Test Cases
c. 4 Test Cases
To solve this we need to cover every path:
1. Condition 1 true, Condition 2 true: executes statement 1 and statement 3.
2. Condition 1 true, Condition 2 false: executes statement 1 only.
3. Condition 1 false, Condition 2 true: executes statement 2 and statement 3.
4. Condition 1 false, Condition 2 false: executes statement 2 only.
Since the two conditions are independent, there are 2 × 2 = 4 distinct statement sequences, so we need 4 test cases (answer c). Note that statement coverage alone would already be achieved by just cases 1 and 3.
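The fragment can be sketched in Python to enumerate the distinct statement sequences:

```python
# Sketch of the fragment above; executed() records which statements run
# for a given combination of the two independent conditions.

def executed(cond1, cond2):
    statements = []
    if cond1:
        statements.append("statement 1")
    else:
        statements.append("statement 2")
    if cond2:
        statements.append("statement 3")
    return statements

# Enumerate every combination: each yields a distinct statement sequence
# (path), so covering all paths takes 4 test cases, while statement
# coverage alone is achieved with just (True, True) and (False, True).
paths = {tuple(executed(c1, c2)) for c1 in (True, False) for c2 in (True, False)}
print(len(paths))  # 4
```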
Testing Principle – Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is
tested differently from an e-commerce site.
Postal rates for 'light letters' are 25p up to 10g, 35p up to 50g, plus an extra 10p for each additional 25g up to 100g.
Which test inputs (in grams) would be selected using equivalence partitioning?
The valid equivalence partitions are 0–10g, 11–50g, 51–75g and 76–100g (weights above 100g are invalid), so one representative value is chosen from each partition. The answer is b.
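Under one reading of the rate rule (an assumption: 10p per started 25g block above 50g, and weights above 100g invalid), the partitions can be exercised like this:

```python
import math

# Hedged sketch of the postal-rate rule as read above: 25p up to 10g,
# 35p up to 50g, then an extra 10p for each additional 25g (or part
# thereof) up to 100g. postal_rate() is an illustrative interpretation.

def postal_rate(grams):
    if grams <= 0 or grams > 100:
        raise ValueError("outside the 'light letter' range")
    if grams <= 10:
        return 25
    if grams <= 50:
        return 35
    # 51-100g: 35p plus 10p per started 25g block above 50g
    return 35 + 10 * math.ceil((grams - 50) / 25)

# One representative weight per valid equivalence partition:
for weight in (5, 30, 60, 90):
    print(weight, postal_rate(weight))  # 25, 35, 45, 55 pence
```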
Ask: "What type of ticket do you require, single or return?"
IF the customer wants 'return'
    Ask: "What rate, Standard or Cheap-day?"
    IF the customer replies 'Cheap-day'
        Say: "That will be £11:20"
    ELSE
        Say: "That will be £19:50"
ELSE
    Say: "That will be £9:75"
Test cases:
1. Customer wants return + Cheap-day ticket: charge = £11:20
2. Customer wants return + Standard ticket: charge = £19:50
3. Customer wants a single ticket: charge = £9:75
These 3 test cases cover all the combinations.
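The three rules above can be sketched as a small function (the function name and signature are invented for illustration):

```python
# The ticket-pricing decision table above, as a hypothetical function.

def ticket_price(ticket_type, rate=None):
    if ticket_type == "single":
        return "£9:75"
    if ticket_type == "return":
        # Cheap-day return vs Standard return
        return "£11:20" if rate == "Cheap-day" else "£19:50"
    raise ValueError("unknown ticket type")

# The three test cases that cover all the combinations:
print(ticket_price("return", "Cheap-day"))  # £11:20
print(ticket_price("return", "Standard"))   # £19:50
print(ticket_price("single"))               # £9:75
```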