13 June 2007

VERIFICATION AND VALIDATION TECHNIQUES

Verification Techniques

Verification is the process of confirming that interim deliverables have been developed according to their inputs, process specifications, and standards. Verification techniques include the following:
· Feasibility reviews – These reviews verify that the proposed solution is achievable; that is, that the software can reasonably be expected to perform as the developers intend once the solution is implemented. Output from this review is a preliminary statement of high-level market requirements that becomes input to the requirements definition process (where the detailed technical requirements are produced).
· Requirements reviews – These reviews examine system requirements to ensure they are feasible and that they meet the stated needs of the user. They also verify structural relationships and limits; for example, how much load (e.g., transactions or number of concurrent users) the system must be able to handle. Output from this review is a statement of requirements ready to be translated into system design.
· Design reviews – These structural tests include study and discussion of the system design to ensure it will support the system requirements. Design reviews yield a system design, ready to be translated into software, hardware configurations, documentation and training.
· Code walkthroughs – These are informal, semi-structured reviews of the program source code against specifications and standards to find defects and verify coding techniques. On completion, the source code is ready for testing by the developer or for a more detailed code inspection.
· Code inspections (Fagan) or structured walkthroughs (Yourdon) – These test techniques use a formal, highly structured session to review the program source code against clearly defined criteria (system design specifications, product standards) to find defects. Completion of the inspection results in computer software ready for testing by the developer.
· Requirements tracing – At each stage of the life cycle (beginning with requirements or stakeholder needs), this review verifies that the inputs to that stage are correctly translated and represented in the resulting deliverables. Requirements must be traced throughout the rest of the software development life cycle to ensure they are delivered in the final product. This is accomplished by tracing the functional and non-functional requirements into analysis and design models, class and sequence diagrams, test plans, and code. This level of traceability also enables project teams to track the status of each requirement throughout the development and test process.
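
In practice, this tracing is often recorded in a requirements traceability matrix. Below is a minimal sketch in Python; all requirement IDs and artifact names are hypothetical:

    # Minimal requirements traceability matrix: each requirement is traced
    # forward into the design artifacts, code, and tests that realize it.
    # All IDs and artifact names are hypothetical.
    trace_matrix = {
        "REQ-001": {
            "design": ["billing class diagram"],
            "code": ["billing/limits.py"],
            "tests": ["test_credit_limit_classes"],
            "status": "verified",
        },
        "REQ-002": {
            "design": ["checkout sequence diagram"],
            "code": ["checkout/flow.py"],
            "tests": [],  # no covering test yet -- a traceability gap
            "status": "open",
        },
    }

    # Flag requirements that have not been traced into any test case.
    untested = [req for req, links in trace_matrix.items() if not links["tests"]]
    print("Requirements with no covering test:", untested)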


Validation Techniques
Validation assures that the end product (system) meets requirements and expectations under defined operating conditions. Within an IT environment, the end product is typically executable code. Validation ensures that the system operates according to plan by executing the system functions through a series of tests that can be observed and evaluated for compliance with expected results.

Figure 6-1 illustrates how various techniques can be used throughout the standard test stages. Each technique is described below.



White Box
White Box testing (logic driven) assumes that the path of logic in a unit or program is known. White box testing consists of testing paths, branch by branch, to produce predictable results. Multiple white box testing techniques are listed below. These techniques can be combined as appropriate for the application, but should be limited, as too many techniques can lead to an unmanageable number of test cases.
· Statement coverage - execute all statements at least once
· Decision coverage - execute each decision direction at least once
· Condition coverage - execute each condition within a decision with all possible outcomes at least once
· Decision/condition coverage - execute each condition within a decision with all possible outcomes, and each decision direction, at least once; treat each loop as a two-way decision that is exercised both zero times and at least once
· Multiple condition coverage - execute all possible combinations of condition outcomes in each decision at least once
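
As a minimal sketch of the two weakest criteria, consider a hypothetical discount function containing a single decision:

    # A tiny unit with one decision, used to contrast coverage criteria.
    def apply_discount(total, is_member):
        if is_member:
            total = total * 0.9  # 10% member discount
        return total

    # Statement coverage: a single test that takes the True branch executes
    # every statement, yet never exercises the False direction.
    assert apply_discount(100, True) == 90.0

    # Decision coverage: adding a test for the False direction ensures both
    # decision outcomes are executed at least once.
    assert apply_discount(100, False) == 100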

When evaluating the payback from various test techniques, white box (program-based) testing produces a higher defect yield than the other dynamic techniques when planned and executed correctly.


Black Box
In black box testing (data or condition driven), the focus is on evaluating the function of a program or application against its currently approved specifications. Specifically, this technique determines whether combinations of inputs and operations produce expected results. As a result, the initial conditions and input data are critical for black box test cases.

Three successful techniques for managing the amount of input data required include:
· Equivalence partitioning - An equivalence class is a subset of data that represents a larger class. Equivalence partitioning is a technique for testing equivalence classes rather than undertaking exhaustive testing of each value of the larger class. For example, a program that edits credit limits within a given range (at least $10,000 but less than $15,000) would have the three equivalence classes listed below (a code sketch follows the list):
- Less than $10,000 (invalid)
- At least $10,000 but less than $15,000 (valid)
- $15,000 or greater (invalid)
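
A minimal sketch of this example in Python, assuming a hypothetical credit_limit_is_valid function:

    # Hypothetical validator for the credit limit example: an amount is
    # valid when it is at least $10,000 but less than $15,000.
    def credit_limit_is_valid(amount):
        return 10_000 <= amount < 15_000

    # One representative value per equivalence class stands in for
    # exhaustive testing of every possible amount.
    assert credit_limit_is_valid(5_000) is False    # below $10,000 (invalid)
    assert credit_limit_is_valid(12_500) is True    # within range (valid)
    assert credit_limit_is_valid(20_000) is False   # $15,000 or more (invalid)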


· Boundary analysis - This technique consists of developing test cases and data that focus on the input and output boundaries of a given function. In the credit limit example, boundary analysis would test the following values (a code sketch follows the list):
- Low boundary plus or minus one ($9,999 and $10,001)
- Boundaries ($10,000 and $15,000)
- Upper boundary plus or minus one ($14,999 and $15,001)
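
A minimal sketch of the boundary cases, reusing the same hypothetical validator:

    # Same hypothetical validator as in the equivalence partitioning sketch.
    def credit_limit_is_valid(amount):
        return 10_000 <= amount < 15_000

    # Each boundary, plus the values one dollar to either side of it.
    boundary_cases = {
        9_999: False,    # low boundary minus one
        10_000: True,    # low boundary (first valid amount)
        10_001: True,    # low boundary plus one
        14_999: True,    # upper boundary minus one
        15_000: False,   # upper boundary (first invalid amount)
        15_001: False,   # upper boundary plus one
    }
    for amount, expected in boundary_cases.items():
        assert credit_limit_is_valid(amount) is expected, amount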


· Error guessing – This technique is based on the theory that test cases can be developed from the intuition and experience of the test engineer. For example, in a test where one of the inputs is a date, a test engineer might try February 29, 2000 (a valid leap day), February 29, 2001 (an invalid date), or 9/9/99 (a value once commonly used as a sentinel); a sketch of these guesses follows.
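
A minimal sketch showing how these guessed dates fare against Python's own date validation:

    from datetime import date

    # Error-guessing candidates: values that experience suggests are likely
    # to break naive date handling.
    guesses = [
        (2000, 2, 29),  # leap day in a year divisible by 400 -- valid
        (2001, 2, 29),  # February 29 in a non-leap year -- invalid
        (1999, 9, 9),   # 9/9/99, once commonly used as a sentinel value
    ]
    for year, month, day in guesses:
        try:
            date(year, month, day)
            print(f"{year}-{month:02d}-{day:02d}: accepted")
        except ValueError:
            print(f"{year}-{month:02d}-{day:02d}: rejected")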

Incremental
Incremental testing is a disciplined method of testing the interfaces between unit-tested programs and between system components. It involves adding unit-tested programs to a given module or component one by one, and testing each resultant combination. There are two types of incremental testing, illustrated by the sketch after this list:
· Top-down – which begins testing from the top of the module hierarchy and works down to the bottom using interim stubs to simulate lower interfacing modules or programs. Modules are added in descending hierarchical order.
· Bottom-up – which begins testing from the bottom of the hierarchy and works up to the top. Modules are added in ascending hierarchical order. Bottom-up testing requires the development of driver modules, which provide the test input, call the module or program being tested, and display test output.
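
Below is a minimal sketch of the top-down approach in Python, using unittest.mock as the interim stub; both modules are hypothetical:

    from unittest import mock

    # Hypothetical two-level hierarchy: the top module calls a lower-level
    # module that has not yet been written or unit tested.
    def fetch_rate(currency):       # lower module -- not ready yet
        raise NotImplementedError

    def convert(amount, currency):  # top module under test
        return amount * fetch_rate(currency)

    # Top-down: replace the lower module with an interim stub that returns
    # a canned value, so the top of the hierarchy can be tested first.
    with mock.patch(f"{__name__}.fetch_rate", return_value=0.5):
        assert convert(100, "EUR") == 50.0

    # Bottom-up would instead use a driver: a small piece of code that
    # feeds input to the real fetch_rate and checks its output directly.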

There are pros and cons associated with each of these methods, although bottom-up testing is generally considered easier to use. Drivers tend to be less difficult to create than stubs, and can serve multiple purposes. Output from bottom-up testing is also often easier to examine, as it always comes from the module directly above the module under test.


Thread
This test technique, which is often used during early integration testing, demonstrates key functional capabilities by testing a string of units that accomplish a specific function in the application. Thread testing and incremental testing are usually used together. For example, units can undergo incremental testing until enough of them are integrated for a single business function to be performed, threading through the integrated components.
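
As a minimal sketch, a thread test might drive one valid business transaction through a string of hypothetical, already unit-tested functions:

    # Hypothetical units that together accomplish one business function:
    # accepting an order. Each unit has already passed unit testing.
    def validate_order(order):
        return bool(order.get("items"))

    def price_order(order):
        return sum(item["cents"] for item in order["items"])

    def record_order(order, total_cents):
        return {"total_cents": total_cents, "status": "accepted"}

    # Thread test: one valid transaction exercises the whole string of
    # units end to end, and the final result is checked.
    order = {"items": [{"cents": 1999}, {"cents": 501}]}
    assert validate_order(order)
    result = record_order(order, price_order(order))
    assert result == {"total_cents": 2500, "status": "accepted"}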

When testing client/server applications, these techniques are especially critical. An example of an effective strategy for a simple two-tier client/server application could include:

1. Unit and bottom-up incrementally test the application server components
2. Unit and incrementally test the GUI or client components
3. Test the network
4. Thread test a valid business transaction through the integrated client, server, and network

Regression

There are always risks associated with introducing change to an application. To reduce this risk, regression testing should be conducted during all stages of testing after a functional change, reduction, improvement, or repair has been made. This technique assures that the change will not cause adverse effects on parts of the application or system that were not supposed to change. Regression testing can be a very expensive undertaking, both in terms of time and money. The test manager’s objective is to maximize the benefits of the regression test while minimizing the time and effort required for executing the test.

The test manager must choose the type of regression test that minimizes the impact to the project schedule when changes are made, while still assuring that no new defects are introduced. The types of regression tests are described below, followed by a sketch of how a test suite might be partitioned to support them:

· Unit regression testing – which retests a single program or component after a change has been made. At a minimum, the developer should always execute unit regression testing when a change is made.

· Regional regression testing – which retests modules connected to the program or component that has been changed. If accurate system models or system documentation are available, it is possible to use them to identify the system components adjacent to the changed components and to define the appropriate set of test cases to be executed. A regional regression test executes a subset of the full set of application test cases. This is a significant time savings over executing a full regression test, and it still helps assure the project team and users that no new defects were introduced.

· Full regression testing – which retests the entire application after a change has been made. A full regression test is usually executed when multiple changes have been made to critical components of the application. This is the full set of test cases defined for the application.
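
A minimal sketch of one way to make these regression scopes selectable, assuming pytest with hypothetical custom markers (marker names and tests are illustrative):

    # test_billing.py -- hypothetical tests tagged by regression scope.
    # (Register the markers in pytest.ini to avoid warnings.)
    import pytest

    @pytest.mark.unit_regression
    def test_changed_component_totals_correctly():
        assert 1999 + 501 == 2500   # stands in for a real unit check

    @pytest.mark.regional_regression
    def test_adjacent_component_accepts_new_total():
        assert 2500 > 0             # stands in for a real regional check

    # Selecting the regression scope from the command line:
    #   pytest -m unit_regression       # retest only the changed program
    #   pytest -m regional_regression   # retest connected modules
    #   pytest                          # full regression: run everything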

When an application feeds data to another application (the "downstream" application), a determination must be made whether regression testing should extend to the integrated applications. Testers from both project teams cooperate to execute this integrated test, which involves passing data from the changed application to the downstream application and then executing a set of test cases for the receiving application to assure that it was not adversely affected by the changes.


Structural and Functional Testing
Structural testing is considered white box testing because knowledge of the internal logic of the system is used to develop test cases. Structural testing includes path testing, code coverage testing and analysis, logic testing, nested loop testing, and similar techniques. Unit testing, string or integration testing, load testing, stress testing, and performance testing are considered structural.

Functional testing addresses the overall behavior of the program by testing transaction flows, input validation, and functional completeness. Functional testing is considered black box testing because no knowledge of the internal logic of the system is used to develop test cases. System testing, regression testing, and user acceptance testing are types of functional testing.

As part of verifying and validating the project team’s solution, testers perform structural and functional tests that can be applied to every element of a computerized system. Both methods together validate the entire system. For example, a functional test case might be derived from the user documentation’s description of how to perform a certain function, such as accepting bar code input, while a structural test case might be derived from the technical documentation. To test systems effectively, both methods are needed (a sketch contrasting the two follows the list of pros and cons below).

Each method has its pros and cons:
· Structural testing advantages
- The logic of the software’s structure can be tested
- Parts of the software will be tested which might have been forgotten if only functional testing were performed
· Structural testing disadvantages
- Does not ensure that user requirements have been met
- Its tests may not mimic real-world situations
· Functional testing advantages
- Simulates actual system usage
- Makes no system structure assumptions
· Functional testing disadvantages
- Potential of missing logical errors in software
- Possibility of redundant testing
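
As a minimal sketch of the contrast, consider a hypothetical bar code routine: the functional case is derived only from documented behavior, while the structural case targets a specific internal branch:

    # Hypothetical unit: classify a bar code by its prefix.
    def product_category(barcode):
        if barcode.startswith("978"):   # internal branch: book prefix
            return "book"
        if not barcode.isdigit():
            raise ValueError("bar code must be numeric")
        return "general"

    # Functional (black box): derived from the documented behavior only --
    # "numeric bar codes are accepted; non-numeric ones are rejected."
    assert product_category("4006381333931") == "general"
    try:
        product_category("ABC123")
        raise AssertionError("non-numeric bar code was accepted")
    except ValueError:
        pass  # expected rejection

    # Structural (white box): derived from the code itself -- force the
    # "978" branch, which a functional case might never exercise.
    assert product_category("9781234567890") == "book"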
