1. Design the tools that will be used to create and maintain the specification and database cross-referenced indexes. Establish the creation of the indexes as part of the QA function for each project.
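The cross-referenced index described above can be sketched as a simple bidirectional map. This is a minimal illustration, not a prescribed tool design; the clause and element identifiers are hypothetical.

```python
# Minimal sketch of a specification/design cross-reference index.
# Clause IDs like "SPEC-4.2" and element names are illustrative only.
from collections import defaultdict

class CrossReferenceIndex:
    """Maps specification clauses to the design and database elements
    that implement them, and back again."""
    def __init__(self):
        self.spec_to_design = defaultdict(set)
        self.design_to_spec = defaultdict(set)

    def link(self, spec_clause, design_element):
        self.spec_to_design[spec_clause].add(design_element)
        self.design_to_spec[design_element].add(spec_clause)

    def unreferenced_clauses(self, all_clauses):
        """Spec clauses with no implementing element -- candidate gaps
        that QA should flag during the index-building step."""
        return [c for c in all_clauses if not self.spec_to_design[c]]

idx = CrossReferenceIndex()
idx.link("SPEC-4.2", "module_login")
idx.link("SPEC-4.3", "module_login")
print(idx.unreferenced_clauses(["SPEC-4.2", "SPEC-4.3", "SPEC-4.4"]))
# -> ['SPEC-4.4']
```

Keeping both directions of the mapping makes the later review steps cheap: gaps show up as clauses with no design element, and gold-plating shows up as elements with no clause.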
2. Establish a standard for terminology glossaries. Create a glossary for each project and get concurrence on it from designers.
3. Examine the master cross-reference to verify the designer’s intent to comply with the specification. Identify ambiguities and contradictions within the specification, the design, and the database, and among the three.
4. Establish an informal review procedure based on the master cross-reference. Get concurrence from management on the overt and covert objectives of this activity.
5. Select and train individuals with design experience who are inherently good at reviews and can avoid being judgemental.
6. Make sure that management understands, approves, appreciates and rewards the accomplishments of the indirect methods used.
7. Define beforehand, for each project, who is to design and execute the system-wide functional test and the formal acceptance test. Obtain prior agreement on the extent to which the two test design efforts can be merged or must be separated.
8. Design and adopt a standard for the formal system test or acceptance plan document. Define all permanent sections and their contents, define test design standards and conventions, test database standards and conventions, all necessary forms for summary sheets, test sheets, input preparation and definition, status reports and so on.
9. Identify all test tools (hardware and software) that will be used, that must be modified, or that must be acquired for the project. Include sufficient time in the schedule and sufficient cost in the budget to obtain the working tools.
10. Propose to centralize the acquisition, modification, and development of test tools under QA, both to avoid redesigning the same tools anew for each project and to build a library of increasingly sophisticated and effective tools over the course of several projects.
11. Establish a hierarchy of tests, such as group, subgroup, test and sub test. Define a sufficiently broad set of groups and subgroups to cut across all projects. Develop first and second level outlines (subgroup, test) for all groups to form a skeleton on which subsequent acceptance tests or system tests can be based. Refine this outline, project by project, to eventually construct a library of test plans from which the bulk of any test plan can be started. Just as the designers try to save code from one project to the next, try to save tests.
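The four-level hierarchy above (group, subgroup, test, subtest) can be represented as a nested structure, as in this sketch; the group and subtest names are purely illustrative.

```python
# Sketch of a test-plan skeleton using the group > subgroup > test > subtest
# hierarchy. All names below are hypothetical examples.
test_plan = {
    "performance": {                        # group
        "response_time": {                  # subgroup
            "login_screen": [               # test
                "single_user",              # subtests
                "fifty_concurrent_users",
            ],
        },
    },
    "security": {
        "access_control": {
            "password_rules": ["min_length", "lockout_after_retries"],
        },
    },
}

def count_subtests(plan):
    """Total subtests -- the number that drives the cost projection."""
    return sum(
        len(subtests)
        for group in plan.values()
        for subgroup in group.values()
        for subtests in subgroup.values()
    )

print(count_subtests(test_plan))  # -> 4
```

A skeleton like this, refined project by project, is what lets the bulk of any new test plan be started from the library rather than from scratch.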
12. For each project, do labor and cost projections for the system test and the acceptance test, based on the final number of subtests to be implemented. Get management concurrence up front that there is sufficient time and money to design and conduct the test, and concurrence from the buyer that a test of the proposed scope is adequate. Establish the principle of selective culling as the development proceeds.
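The projection described above is straightforward arithmetic once the subtest count is fixed. A back-of-the-envelope sketch, with all effort figures and rates as hypothetical placeholders:

```python
# Rough test-effort projection from the subtest count.
# The per-subtest hours and the hourly rate below are placeholder
# assumptions, not figures from the text.
def project_test_effort(num_subtests,
                        hours_per_subtest_design=2.0,
                        hours_per_subtest_run=0.5,
                        hourly_rate=60.0):
    hours = num_subtests * (hours_per_subtest_design + hours_per_subtest_run)
    return hours, hours * hourly_rate

hours, cost = project_test_effort(400)
print(hours, cost)  # -> 1000.0 60000.0
```

Running the numbers this early is the point: it gives management a concrete figure to concur with, and gives you the baseline against which selective culling decisions are made later.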
13. Define formal testing staffing as early as possible, identifying the necessary participation from the design staff, from QA, and from the buyer. Acquire all necessary consumable supplies and other resources and facilities and get them to the test site. Ask for twice as many terminals as you think you need, which is four times what you’ll be offered.
14. Get the principle of incremental sign-off at the subtest level, with conditional system acceptance after a specified percentage of all subtests have been run successfully, accepted as the basis for the successful conclusion of the test.
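The acceptance rule above reduces to a simple check once subtest results are recorded. A sketch, assuming the acceptance threshold has been negotiated in advance (the 95% figure here is illustrative):

```python
# Sketch of conditional acceptance under incremental sign-off.
# The threshold is a hypothetical negotiated value, not one from the text.
def acceptance_status(results, threshold=0.95):
    """results: dict mapping subtest name -> True (passed) / False (failed)."""
    passed = sum(1 for ok in results.values() if ok)
    ratio = passed / len(results)
    return "conditionally accepted" if ratio >= threshold else "not accepted"

# 99 of 100 subtests passed in this hypothetical run.
results = {f"subtest_{i}": i != 0 for i in range(100)}
print(acceptance_status(results))  # -> conditionally accepted
```

The value of agreeing on such a rule beforehand is that acceptance becomes a mechanical computation over the sign-off records rather than a negotiation at the end of the test.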
15. Prepare all test documentation, scripts, scores, and associated paperwork in advance and in triplicate. Be sure there is an adequate backup for all files and programs.