1.3 Test Planning, Monitoring and Control

This section focuses on the processes of planning, monitoring and controlling testing.

1.3.1 Test Planning

Test planning for the most part occurs at the initiation of the test effort and involves the identification and planning of all of the activities and resources required to meet the mission and objectives identified in the test strategy. During test planning it is important for the Test Analyst, working with the Test Manager, to consider and plan for the following:

• Be sure the test plans are not limited to functional testing. All types of testing should be considered in the test plan and scheduled accordingly. For example, in addition to functional testing, the Test Analyst may be responsible for usability testing. That type of testing must also be covered in a test plan document.
• Review the test estimates with the Test Manager and ensure adequate time is budgeted for the procurement and validation of the testing environment.
• Plan for configuration testing. If multiple types of processors, operating systems, virtual machines, browsers, and various peripherals can be combined into many possible configurations, plan to apply testing techniques that will provide adequate coverage of these combinations.
• Plan to test the documentation. Users are provided with the software and with documentation. The documentation must be accurate to be effective. The Test Analyst must allocate time to verify the documentation and may need to work with the technical writing staff to help prepare data to be used for screen shots and video clips.
• Plan to test the installation procedures. Installation procedures, as well as backup and restore procedures, must be tested sufficiently. These procedures can be more critical than the software; if the software cannot be installed, it will not be used at all. This can be difficult to plan since the Test Analyst is often doing the initial testing on a system that has been preconfigured without the final installation processes in place.
• Plan the testing to align with the software lifecycle. Sequential execution of tasks does not fit into most schedules. Many tasks often need to be performed (at least partly) concurrently. The Test Analyst must be aware of the selected lifecycle and the expectations for involvement during the design, development and implementation of the software. This also includes allocating time for confirmation and regression testing.
• Allow adequate time for identifying and analyzing risks with the cross-functional team. Although usually not responsible for organizing the risk management sessions, the Test Analyst should expect to be involved actively in these activities.

Complex relationships may exist among the test basis, test conditions and test cases, such that many-to-many relationships may exist among these work products. These need to be understood to enable test planning and control to be effectively implemented. The Test Analyst is usually the best person to determine these relationships and to work to separate dependencies as much as possible.

1.3.2 Test Monitoring and Control

While test monitoring and control is usually the job of the Test Manager, the Test Analyst contributes the measurements that make the control possible.
A variety of quantitative data should be gathered throughout the software development lifecycle (e.g., percentage of planning activities completed, percentage of coverage attained, number of test cases that have passed or failed). In each case a baseline (i.e., reference standard) must be defined and then progress tracked with relation to this baseline. While the Test Manager will be concerned with compiling and reporting the summarized metric information, the Test Analyst gathers the information for each metric. Each test case that is completed, each defect report that is written, and each milestone that is achieved will roll up into the overall project metrics.
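As an illustration of the kind of measurement a Test Analyst contributes, the sketch below records test case results against a planned baseline and reports progress as a percentage. The class and field names are invented for the example; the syllabus does not prescribe any particular tool or data model.

```python
from dataclasses import dataclass

@dataclass
class TestProgress:
    """Hypothetical progress record tracked against a planned baseline."""
    planned: int   # baseline: number of test cases planned for this cycle
    passed: int = 0
    failed: int = 0

    def record(self, outcome: str) -> None:
        """Record a single test case result ('pass' or 'fail')."""
        if outcome == "pass":
            self.passed += 1
        elif outcome == "fail":
            self.failed += 1
        else:
            raise ValueError(f"unknown outcome: {outcome!r}")

    @property
    def executed(self) -> int:
        return self.passed + self.failed

    @property
    def completion(self) -> float:
        """Progress relative to the baseline, as a percentage."""
        return 100.0 * self.executed / self.planned if self.planned else 0.0

progress = TestProgress(planned=120)
progress.record("pass")
progress.record("fail")
print(f"{progress.completion:.1f}% executed, {progress.failed} failed")
```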
It is important that the information entered into the various tracking tools be as accurate as possible so the metrics reflect reality. Accurate metrics allow managers to manage a project (monitor) and to initiate changes as needed (control). For example, a high number of defects being reported from one area of the software may indicate that additional testing effort is needed in that area. Requirements and risk coverage information (traceability) may be used to prioritize remaining work and to allocate resources. Root cause information is used to determine areas for process improvement. If the data that is recorded is accurate, the project can be controlled and accurate status information can be reported to the stakeholders. Future projects can be planned more effectively when the planning considers data gathered from past projects. There are myriad uses for accurate data. It is part of the Test Analyst's job to ensure that the data is accurate, timely and objective.

1.4 Test Analysis

During test planning, the scope of the testing project is defined. The Test Analyst uses this scope definition to:
• Analyze the test basis
• Identify the test conditions

In order for the Test Analyst to proceed effectively with test analysis, the following entry criteria should be met:
• There is a document describing the test object that can serve as the test basis
• This document has passed review with reasonable results and has been updated as needed after the review
• There is a reasonable budget and schedule available to accomplish the remaining testing work for this test object

Test conditions are typically identified by analysis of the test basis and the test objectives. In some situations, where documentation may be old or non-existent, the test conditions may be identified by talking to relevant stakeholders (e.g., in workshops or during sprint planning). These conditions are then used to determine what to test, using test design techniques identified within the test strategy and/or the test plan.

While test conditions are usually specific to the item being tested, there are some standard considerations for the Test Analyst:
• It is usually advisable to define test conditions at differing levels of detail. Initially, high-level conditions are identified to define general targets for testing, such as "functionality of screen x". Subsequently, more detailed conditions are identified as the basis of specific test cases, such as "screen x rejects an account number that is one digit short of the correct length". Using this type of hierarchical approach to defining test conditions can help to ensure the coverage is sufficient for the high-level items.
• If product risks have been defined, then the test conditions that will be necessary to address each product risk must be identified and traced back to that risk item.

At the conclusion of the test analysis activities, the Test Analyst should know what specific tests must be designed in order to meet the needs of the assigned areas of the test project.
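As an illustration of the hierarchical, risk-traced conditions described above, the following sketch models test conditions as a small tree with traceability back to risk items. The structure and names (TestCondition, RISK-07) are hypothetical; in practice a test management tool would normally record these relationships.

```python
from dataclasses import dataclass, field

@dataclass
class TestCondition:
    """A condition to be tested, optionally refined into more detailed ones."""
    description: str
    risk_ids: list[str] = field(default_factory=list)  # traceability to risk items
    children: list["TestCondition"] = field(default_factory=list)

# High-level condition defining a general target for testing ...
screen_x = TestCondition("Functionality of screen x", risk_ids=["RISK-07"])

# ... refined into a detailed condition that becomes the basis of test cases.
screen_x.children.append(
    TestCondition(
        "Screen x rejects an account number one digit short of the correct length",
        risk_ids=["RISK-07"],
    )
)
```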
1.5 Test Design

Still adhering to the scope determined during test planning, the test process continues as the Test Analyst designs the tests which will be implemented and executed. The process of test design includes the following activities:
• Determine in which test areas low-level (concrete) or high-level (logical) test cases are most appropriate
• Determine the test case design technique(s) that provide the necessary test coverage
• Create test cases that exercise the identified test conditions

Prioritization criteria identified during risk analysis and test planning should be applied throughout the process, from analysis and design to implementation and execution.

Depending on the types of tests being designed, one of the entry criteria for test design may be the availability of tools that will be used during the design work.

When designing tests, it is important to remember the following:
• Some test items are better addressed by defining only the test conditions rather than going further into defining scripted tests. In this case, the test conditions should be defined to be used as a guide for the unscripted testing. The pass/fail criteria should be clearly identified.
• Tests should be designed to be understandable by other testers, not just the author. If the author is not the person who executes the test, other testers will need to read and understand previously specified tests in order to understand the test objectives and the relative importance of the test. Tests must also be understandable by other stakeholders such as developers, who will review the tests, and auditors, who may have to approve the tests.
• Tests should be designed to cover all the interactions of the software with the actors (e.g., end users, other systems), not just the interactions that occur through the user-visible interface. Inter-process communications, batch execution and other interrupts also interact with the software and can contain defects, so the Test Analyst must design tests to mitigate these risks.
• Tests should be designed to test the interfaces between the various test objects.

1.5.1 Concrete and Logical Test Cases

One of the jobs of the Test Analyst is to determine the best types of test cases for a given situation.

Concrete test cases provide all the specific information and procedures needed for the tester to execute the test case (including any data requirements) and verify the results. Concrete test cases are useful when the requirements are well-defined, when the testing staff is less experienced and when external verification of the tests, such as audits, is required. Concrete test cases provide excellent reproducibility (i.e., another tester will get the same results), but may also require a significant amount of maintenance effort and tend to limit tester ingenuity during execution.

Logical test cases provide guidelines for what should be tested, but allow the Test Analyst to vary the actual data or even the procedure that is followed when executing the test. Logical test cases may provide better coverage than concrete test cases because they will vary somewhat each time they are executed. This, however, also leads to a loss of reproducibility. Logical test cases are best used when the requirements are not well-defined, when the Test Analyst who will be executing the test is experienced with both testing and the product, and when formal documentation is not required (e.g., no audits will be conducted).

Logical test cases may be defined early in the requirements process when the requirements are not yet well-defined. These test cases may be used to develop concrete test cases when the requirements become more defined and stable. In this case, the test case creation is done sequentially, flowing from logical to concrete, with only the concrete test cases used for execution.
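To make the distinction concrete, here is an illustrative pair of records, one logical and one concrete, for the account-number condition used earlier. All identifiers and field values are invented for the example.

```python
# Logical test case: guidelines only; the tester chooses data and procedure.
logical = {
    "id": "TC-LOG-01",
    "condition": "Screen x rejects account numbers of incorrect length",
    "approach": "Try several lengths around the valid boundary",
}

# Concrete refinement: every input and expected result is fixed,
# giving full reproducibility at the cost of flexibility.
concrete = {
    "id": "TC-CON-01",
    "derived_from": "TC-LOG-01",
    "input": {"account_number": "1234567"},   # one digit short of 8
    "steps": ["Open screen x", "Enter account number", "Submit"],
    "expected": "Error message: account number must be 8 digits",
}
```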
1.5.2 Creation of Test Cases

Test cases are designed by the stepwise elaboration and refinement of the identified test conditions using test design techniques (see Chapter 3) identified in the test strategy and/or the test plan. The test cases should be repeatable, verifiable and traceable back to the test basis (e.g., requirements) as dictated by the test strategy that is being used.

Test case design includes the identification of the following:
• Objective
• Preconditions, such as either project or localized test environment requirements and the plans for their delivery, state of the system, etc.
• Test data requirements (both input data for the test case as well as data that must exist in the system for the test case to be executed)
• Expected results
• Post-conditions, such as affected data, state of the system, triggers for subsequent processing, etc.
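These design elements map naturally onto a simple record structure. The following is a minimal sketch mirroring the list above; the class and field names are hypothetical and not prescribed by the syllabus.

```python
from dataclasses import dataclass, field

@dataclass
class TestCase:
    """Hypothetical test case record mirroring the design elements above."""
    objective: str
    preconditions: list[str] = field(default_factory=list)  # environment, system state
    test_data: dict = field(default_factory=dict)            # inputs and required stored data
    expected_results: list[str] = field(default_factory=list)
    postconditions: list[str] = field(default_factory=list)  # affected data, triggers, state
```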
The level of detail of the test cases, which impacts both the cost to develop and the level of repeatability during execution, should be defined prior to actually creating the test cases. Less detail in the test case allows the Test Analyst more flexibility when executing the test case and provides an opportunity to investigate potentially interesting areas. Less detail, however, also tends to lead to less reproducibility.

A particular challenge is often the definition of the expected result of a test. Computing this manually is often tedious and error-prone; if possible, it is preferable to find or create an automated test oracle. In identifying the expected result, testers are concerned not only with outputs on the screen, but also with data and environmental post-conditions. If the test basis is clearly defined, identifying the correct result should, theoretically, be simple. However, test bases are often vague, contradictory, lacking coverage of key areas, or missing entirely. In such cases, a Test Analyst must have, or have access to, subject matter expertise. Also, even where the test basis is well-specified, complex interactions of complex stimuli and responses can make the definition of expected results difficult; therefore, a test oracle is essential. Test case execution without any way to determine the correctness of results has very low added value or benefit, often generating spurious failure reports or false confidence in the system.
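Where the expected result can be computed independently of the system under test, a small automated oracle avoids the tedious, error-prone manual computation described above. The sketch below uses assumed names: interest_due is an invented reference calculation, and a real oracle must itself be trustworthy (e.g., an independent implementation or a known-good data set).

```python
def interest_due(principal: float, annual_rate: float, days: int) -> float:
    """Independent reference computation used as a test oracle (illustrative)."""
    return round(principal * annual_rate * days / 365, 2)

def check(actual: float, principal: float, annual_rate: float, days: int) -> bool:
    """Compare the system's output with the oracle's expected result."""
    expected = interest_due(principal, annual_rate, days)
    return abs(actual - expected) < 0.005  # tolerance for rounding differences

# e.g., 'actual' is the value captured from the system under test:
assert check(actual=4.11, principal=1000.0, annual_rate=0.05, days=30)
```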
The activities described above may be applied to all test levels, though the test basis will vary. For example, user acceptance tests may be based primarily on the requirements specification, use cases and defined business processes, while component tests may be based primarily on low-level design specifications, user stories and the code itself. It is important to remember that these activities occur throughout all the test levels, although the target of the test may vary. For example, functional testing at the unit level is designed to ensure that a particular component provides the functionality specified in the detailed design for that component. Functional testing at the integration level verifies that components interact together and provide functionality through their interaction. At the system level, end-to-end functionality should be a target of the testing. When analyzing and designing tests, it is important to remember the target level for the test as well as the objective of the test. This helps to determine the level of detail required as well as any tools that may be needed (e.g., drivers and stubs at the component test level).

During the development of test conditions and test cases, some amount of documentation is typically created, resulting in test work products. In practice the extent to which test work products are documented varies considerably. This can be affected by any of the following:
• Project risks (what must/must not be documented)
• The "value added" which the documentation brings to the project
• Standards to be followed and/or regulations to be met
• Lifecycle model used (e.g., an Agile approach aims for "just enough" documentation)
• The requirement for traceability from the test basis through test analysis and design
Depending on the scope of the testing, test analysis and design address the quality characteristics for the test object(s). The ISO 25000 standard [ISO25000] (which is replacing ISO 9126) provides a useful reference. When testing hardware/software systems, additional characteristics may apply.

The processes of test analysis and test design may be enhanced by intertwining them with reviews and static analysis. In fact, conducting test analysis and test design is often a form of static testing because problems may be found in the basis documents during this process. Test analysis and test design based on the requirements specification is an excellent way to prepare for a requirements review meeting. Reading the requirements in order to use them for creating tests requires understanding each requirement and being able to determine a way to assess its fulfillment. This activity often uncovers requirements that are not clear, are untestable or do not have defined acceptance criteria. Similarly, test work products such as test cases, risk analyses, and test plans should be subjected to reviews.

Some projects, such as those following an Agile lifecycle, may have only minimally documented requirements. These are sometimes in the form of "user stories" which describe small but demonstrable bits of functionality. A user story should include a definition of the acceptance criteria. If the software is able to demonstrate that it has fulfilled the acceptance criteria, it is usually considered to be ready for integration with the other completed functionality, or it may already have been integrated in order to demonstrate its functionality.

During test design the required detailed test infrastructure requirements may be defined, although in practice these may not be finalized until test implementation. It must be remembered that test infrastructure includes more than test objects and testware. For example, the infrastructure requirements may include rooms, equipment, personnel, software, tools, peripherals, communications equipment, user authorizations, and all other items required to run the tests.

The exit criteria for test analysis and test design will vary depending on the project parameters, but all items discussed in these two sections should be considered for inclusion in the defined exit criteria. It is important that the criteria be measurable and ensure that all the information and preparation required for the subsequent steps have been provided.

1.6 Test Implementation

Test implementation is the fulfillment of the test design. This includes creating automated tests, organizing tests (both manual and automated) into execution order, finalizing test data and test environments, and forming a test execution schedule, including resource allocation, to enable test case execution to begin. It also includes checking against explicit and implicit entry criteria for the test level in question and ensuring that the exit criteria for the previous steps in the process have been met. If the exit criteria have been skipped, either for the test level or for a step in the test process, the implementation effort is likely to be affected by delayed schedules, insufficient quality and unexpected extra effort.
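Checking exit criteria before implementation begins can be as simple as evaluating a checklist. Below is a minimal sketch with invented criteria; the actual criteria come from the test plan.

```python
# Hypothetical exit criteria for the preceding analysis/design steps.
exit_criteria = {
    "test basis reviewed and updated": True,
    "test conditions traced to risks": True,
    "test environment specified": False,
}

unmet = [name for name, met in exit_criteria.items() if not met]
if unmet:
    # Proceeding anyway risks delayed schedules, insufficient quality
    # and unexpected extra effort, as noted above.
    print("Exit criteria not met:", ", ".join(unmet))
```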
It is important to ensure that all exit criteria have been met prior to starting the test implementation effort.

When determining the execution order, there may be many considerations. In some cases, it may make sense to organize the test cases into test suites (i.e., groups of test cases). This can help organize the testing so that related test cases are executed together. If a risk-based testing strategy is being used, risk priority order may dictate the execution order for the test cases. There may be other factors that determine order, such as the availability of the right people, equipment, data and the functionality to be tested. It is not unusual for code to be released in sections, and the test effort then has to be coordinated with the order in which the software becomes available for test. Particularly in incremental lifecycle models, it is important for the Test Analyst to coordinate with the development team to ensure that the software will be released for testing in a testable order. During test implementation, Test Analysts should finalize and confirm the order in which manual and automated tests are to be executed.
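Under a risk-based strategy, one simple way to derive an initial execution order is to sort the test cases by risk priority while keeping related suites together. The sketch below uses hypothetical identifiers; in practice the availability of people, equipment, data and functionality constrains the order further.

```python
# priority 1 = highest risk -- hypothetical test case identifiers
test_cases = [
    {"id": "TC-12", "suite": "payments", "risk_priority": 2},
    {"id": "TC-03", "suite": "login",    "risk_priority": 1},
    {"id": "TC-27", "suite": "payments", "risk_priority": 1},
]

# Highest-risk tests first; ties grouped by suite so related tests run together.
execution_order = sorted(test_cases, key=lambda tc: (tc["risk_priority"], tc["suite"]))
for tc in execution_order:
    print(tc["id"], tc["suite"])
```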