Certified Tester Advanced Level Syllabus - Test Analyst
International Software Testing Qualifications Board
Version 2012, Page 16 of 64, 19 October 2012
© International Software Testing Qualifications Board

tests are to be run, carefully checking for constraints that might require tests to be run in a particular order. Dependencies must be documented and checked. The level of detail and associated complexity for work done during test implementation may be influenced by the detail of the test cases and test conditions. In some cases regulatory rules apply, and tests should provide evidence of compliance to applicable standards such as the United States Federal Aviation Administration’s DO-178B/ED-12B [RTCA DO-178B/ED-12B].

As specified above, test data is needed for testing, and in some cases these sets of data can be quite large. During implementation, Test Analysts create input and environment data to load into databases and other such repositories. Test Analysts also create data to be used with data-driven automation tests as well as for manual testing.

Test implementation is also concerned with the test environment(s). During this stage the environment(s) should be fully set up and verified prior to test execution. A "fit for purpose" test environment is essential, i.e., the test environment should be capable of enabling the exposure of the defects present during controlled testing, operate normally when failures are not occurring, and adequately replicate, if required, the production or end-user environment for higher levels of testing. Test environment changes may be necessary during test execution depending on unanticipated changes, test results or other considerations. If environment changes do occur during execution, it is important to assess the impact of the changes on tests that have already been run.
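The data-driven automation mentioned above can be sketched in a few lines: the test data (input values plus expected results) is kept separate from the test logic, and one test is executed per data row. This is a minimal illustrative sketch only — the system under test (`apply_discount`), the CSV columns, and the function names are all hypothetical, not taken from the syllabus.

```python
import csv
import io

# Hypothetical system under test: a simple discount calculator.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

# Test data prepared by the Test Analyst, inlined here as CSV for the
# sketch; in practice it would be loaded from a file or database.
TEST_DATA = """price,percent,expected
100.00,10,90.00
50.00,0,50.00
200.00,25,150.00
"""

def run_data_driven_tests(csv_text):
    """Run one test per data row; return (passed, failed) counts."""
    passed = failed = 0
    for row in csv.DictReader(io.StringIO(csv_text)):
        actual = apply_discount(float(row["price"]), float(row["percent"]))
        if actual == float(row["expected"]):
            passed += 1
        else:
            failed += 1
    return passed, failed
```

The point of the separation is that when the data sets grow large, new cases are added as data rows without touching the automation code, and the same rows can also serve as a checklist for manual testing.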
During test implementation, testers must ensure that those responsible for the creation and maintenance of the test environment are known and available, and that all the testware and test support tools and associated processes are ready for use. This includes configuration management, defect management, and test logging and management. In addition, Test Analysts must verify the procedures that gather data for exit criteria evaluation and test results reporting.

It is wise to use a balanced approach to test implementation as determined during test planning. For example, risk-based analytical test strategies are often blended with dynamic test strategies. In this case, some percentage of the test implementation effort is allocated to testing which does not follow predetermined scripts (unscripted). Unscripted testing should not be ad hoc or aimless, as this can be unpredictable in duration and coverage unless time-boxed and chartered. Over the years, testers have developed a variety of experience-based techniques, such as attacks, error guessing [Myers79], and exploratory testing. Test analysis, test design, and test implementation still occur, but they occur primarily during test execution.

When following such dynamic test strategies, the results of each test influence the analysis, design, and implementation of the subsequent tests. While these strategies are lightweight and often effective at finding defects, there are some drawbacks. These techniques require expertise from the Test Analyst, duration can be difficult to predict, coverage can be difficult to track, and repeatability can be lost without good documentation or tool support.

1.7 Test Execution

Test execution begins once the test object is delivered and the entry criteria to test execution are satisfied (or waived).
Tests should be executed according to the plan determined during test implementation, but the Test Analyst should have adequate time to ensure coverage of additional interesting test scenarios and behaviors that are observed during testing (any failure detected during such deviations should be described, including the variations from the scripted test case that are necessary to reproduce the failure). This integration of scripted and unscripted (e.g., exploratory) testing techniques helps to guard against test escapes due to gaps in scripted coverage and to circumvent the pesticide paradox.
At the heart of the test execution activity is the comparison of actual results with expected results. Test Analysts must bring attention and focus to these tasks; otherwise all the work of designing and implementing the test can be wasted when failures are missed (false-negative result) or correct behavior is misclassified as incorrect (false-positive result). If the expected and actual results do not match, an incident has occurred. Incidents must be carefully scrutinized to determine the cause (which might or might not be a defect in the test object) and to gather data to assist with the resolution of the incident (see Chapter 6 for further details on defect management).

When a failure is identified, the test documentation (test specification, test case, etc.) should be carefully evaluated to ensure correctness. A test document can be incorrect for a number of reasons. If it is incorrect, it should be corrected and the test should be re-run. Since changes in the test basis and the test object can render a test case incorrect even after the test has been run successfully many times, testers should remain aware of the possibility that the observed results could be due to an incorrect test.

During test execution, test results must be logged appropriately. Tests which were run but for which results were not logged may have to be repeated to identify the correct result, leading to inefficiency and delays. (Note that adequate logging can address the coverage and repeatability concerns associated with test techniques such as exploratory testing.) Since the test object, testware, and test environments may all be evolving, logging should identify the specific versions tested as well as the specific environment configurations.
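The logging requirement just described — recording for each test the specific versions tested and the environment configuration — can be sketched as a simple record type. All field and function names below are illustrative assumptions, not prescribed by the syllabus or by any particular test management tool.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class TestLogEntry:
    """One chronological record of a single test execution."""
    test_id: str               # unique identifier for the test
    status: str                # e.g. "Passed" or "Failed"
    test_object_version: str   # version of the software under test
    testware_version: str      # version of the test case/script used
    environment: str           # specific environment configuration
    notes: str = ""            # events affecting execution, delay reasons
    timestamp: str = field(
        default_factory=lambda: datetime.now(timezone.utc).isoformat())

def progress_summary(log):
    """Derive simple progress figures from a list of log entries."""
    executed = {entry.test_id for entry in log}
    passed = sum(1 for entry in log if entry.status == "Passed")
    return {"tests_executed": len(executed), "passed": passed}
```

Because each entry carries the versions and configuration alongside the result, a failure observed weeks later can be traced back to exactly what was tested, and summaries for coverage and progress reporting can be derived mechanically from the log.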
Test logging provides a chronological record of relevant details about the execution of tests. Results logging applies both to individual tests and to activities and events. Each test should be uniquely identified and its status logged as test execution proceeds. Any events that affect the test execution should be logged. Sufficient information should be logged to measure test coverage and document reasons for delays and interruptions in testing. In addition, information must be logged to support test control, test progress reporting, measurement of exit criteria, and test process improvement.

Logging varies depending on the level of testing and the strategy. For example, if automated component testing is occurring, the automated tests should produce most of the logging information. If manual testing is occurring, the Test Analyst will log the information regarding the test execution, often into a test management tool that will track the test execution information. In some cases, as with test implementation, the amount of test execution information that is logged is influenced by regulatory or audit requirements.

In some cases, users or customers may participate in test execution. This can serve as a way to build their confidence in the system, though that presumes that the tests find few defects. Such an assumption is often invalid in early test levels, but might be valid during acceptance test.

The following are some specific areas that should be considered during test execution:
- Notice and explore “irrelevant” oddities. Observations or results that may seem irrelevant are often indicators for defects that (like icebergs) are lurking beneath the surface.
- Check that the product is not doing what it is not supposed to do. Checking that the product does what it is supposed to do is a normal focus of testing, but the Test Analyst must also be sure the product is not misbehaving by doing something it should not be doing (for example, additional undesired functions).
- Build the test suite and expect it to grow and change over time. The code will evolve and additional tests will need to be implemented to cover these new functionalities, as well as to check for regressions in other areas of the software. Gaps in testing are often discovered during execution. Building the test suite is a continuous process.
- Take notes for the next testing effort. The testing tasks do not end when the software is provided to the user or distributed to the market. A new version or release of the software will most likely be produced, so knowledge should be stored and transferred to the testers responsible for the next testing effort.
- Do not expect to rerun all manual tests. It is unrealistic to expect that all manual tests will be rerun. If a problem is suspected, the Test Analyst should investigate it and note it rather than assume it will be caught in a subsequent execution of the test cases.
- Mine the data in the defect tracking tool for additional test cases. Consider creating test cases for defects that were discovered during unscripted or exploratory testing and add them to the regression test suite.
- Find the defects before regression testing. Time is often limited for regression testing, and finding failures during regression testing can result in schedule delays. Regression tests generally do not find a large proportion of the defects, mostly because they are tests which have already been run (e.g., for a previous version of the same software), and defects should have been detected in those previous runs. This does not mean that regression tests should be eliminated altogether, only that the effectiveness of regression tests, in terms of the capacity to detect new defects, is lower than that of other tests.

1.8 Evaluating Exit Criteria and Reporting

From the point of view of the test process, test progress monitoring entails ensuring the collection of proper information to support the reporting requirements. This includes measuring progress towards completion. When the exit criteria are defined in the planning stages, there may be a breakdown of “must” and “should” criteria.
For example, the criteria might state that there “must be no open Priority 1 or Priority 2 bugs” and there “should be a 95% pass rate across all test cases”. In this case, a failure to meet the “must” criteria should cause the exit criteria to fail, whereas a 93% pass rate could still allow the project to proceed to the next level. The exit criteria must be clearly defined so they can be objectively assessed.

The Test Analyst is responsible for supplying the information that is used by the Test Manager to evaluate progress toward meeting the exit criteria, and for ensuring that the data is accurate. If, for example, the test management system provides the following status codes for test case completion:
- Passed
- Failed
- Passed with exception
then the Test Analyst must be very clear on what each of these means and must apply that status consistently. Does “passed with exception” mean that a defect was found but it is not affecting the functionality of the system? What about a usability defect that causes the user to be confused? If the pass rate is a “must” exit criterion, counting a test case as “failed” rather than “passed with exception” becomes a critical factor. There must also be consideration for test cases that are marked as “failed” where the cause of the failure is not a defect (e.g., the test environment was improperly configured). If there is any confusion on the metrics being tracked or the usage of the status values, the Test Analyst must clarify with the Test Manager so the information can be tracked accurately and consistently throughout the project.

It is not unusual for the Test Analyst to be asked for a status report during the testing cycles as well as to contribute to the final report at the end of the testing. This may require gathering metrics from the defect and test management systems as well as assessing the overall coverage and progress.
The Test Analyst should be able to use the reporting tools to provide the requested information, enabling the Test Manager to extract the information needed.
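The “must”/“should” evaluation described above can be sketched in code. The two criteria values (no open Priority 1 or Priority 2 defects; a 95% pass rate) come from the example in the text, while the function name, argument shapes, and the decision to exclude “Passed with exception” from the pass count are illustrative assumptions for the sketch.

```python
def evaluate_exit_criteria(open_defects, test_results):
    """
    Evaluate the example criteria from the text:
      "must":   no open Priority 1 or Priority 2 defects
      "should": at least a 95% pass rate across all test cases
    A failed "must" blocks exit; a missed "should" is reported but does
    not block on its own (e.g. 93% may still allow proceeding).
    """
    must_met = not any(d["priority"] in (1, 2) for d in open_defects)
    # Assumption for this sketch: "Passed with exception" is NOT counted
    # as "Passed"; how to count it must be agreed with the Test Manager.
    passed = sum(1 for status in test_results if status == "Passed")
    pass_rate = passed / len(test_results) if test_results else 0.0
    return {
        "must_met": must_met,
        "should_met": pass_rate >= 0.95,
        "pass_rate": round(pass_rate, 2),
        "can_exit": must_met,
    }
```

Writing the criteria down this literally also exposes the ambiguities the text warns about: whether “passed with exception” counts toward the pass rate, and how to treat failures caused by the environment rather than by a defect, both change the computed result.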
1.9 Test Closure Activities

Once test execution is determined to be complete, the key outputs from the testing effort should be captured and either passed to the relevant person or archived. Collectively, these are test closure activities. The Test Analyst should expect to be involved in delivering work products to those who will need them. For example, known defects deferred or accepted should be communicated to those who will use and support the use of the system. Tests and test environments should be given to those responsible for maintenance testing. Another work product may be a regression test set (either automated or manual). Information about test work products must be clearly documented, including appropriate links, and appropriate access rights must be granted.

The Test Analyst should also expect to participate in retrospective meetings (“lessons learned”) where important lessons (both from within the testing project and across the whole software development lifecycle) can be documented and plans established to reinforce the “good” and eliminate, or at least control, the “bad”. The Test Analyst is a knowledgeable source of information for these meetings and must participate if valid process improvement information is to be gathered. If only the Test Manager will attend the meeting, the Test Analyst must convey the pertinent information to the Test Manager so an accurate picture of the project is presented.

Archiving results, logs, reports, and other documents and work products in the configuration management system must also be done. This task often falls to the Test Analyst and is an important closure activity, particularly if a future project will require the use of this information.
While the Test Manager usually determines what information should be archived, the Test Analyst should also think about what information would be needed if the project were to be started up again at a future time. Thinking about this at the end of a project can save months of effort when the project is restarted later or picked up by another team.
2. Test Management: Responsibilities for the Test Analyst - 90 mins.

Keywords
product risk, risk analysis, risk identification, risk level, risk management, risk mitigation, risk-based testing, test monitoring, test strategy

Learning Objectives for Test Management: Responsibilities for the Test Analyst

2.2 Test Progress Monitoring and Control
TA-2.2.1 (K2) Explain the types of information that must be tracked during testing to enable adequate monitoring and controlling of the project

2.3 Distributed, Outsourced and Insourced Testing
TA-2.3.1 (K2) Provide examples of good communication practices when working in a 24-hour testing environment

2.4 The Test Analyst’s Tasks in Risk-Based Testing
TA-2.4.1 (K3) For a given project situation, participate in risk identification, perform risk assessment and propose appropriate risk mitigation