CEN4020: Software Engineering II

Software Testing Techniques & Strategies

These lecture notes have been assembled from a variety of sources, over several offerings of software engineering courses. The primary source is a collection of notes on Software Testing Techniques posted at http://www.scribd.com/doc/8622037/Software-Testing-Techniques.

They contain portions of copyrighted materials from various textbook and web sources, collected here for classroom use under the "fair use" doctrine. They are not to be published or otherwise reproduced, and must be maintained under password protection.


Testing Topics

Software Testing

"Testing is the process of exercising a program with the specific intent of finding errors, prior to delivery to the end user."

Roger Pressman, in Software Engineering: A Practitioner's Approach

Purpose: uncover as many errors as feasible

Verification & Validation

What Testing Can Do

What Testing Cannot Do

Software Quality Assurance

SQA Complements to Testing

Informal SQA Techniques

Formal methods

Formal methods form a valuable complement to testing, but not a replacement.

Testability Characteristics of Software

Designing for testability can make testing easier and more effective.

Need for Early Testing


Need for Independent Testers

Developers can and should do unit testing. Other testing should be done by independent testers.

This does not imply that independent testing should be done in place of testing by the development group. On the contrary, both are necessary. Moreover, when independent testers are used they need to be monitored closely to ensure the quality of their work. Independent testers have been known to go through the motions of testing without doing a thorough job of finding bugs. In contrast, good programmers write very thorough tests of their own code. Generally, knowledge of the code (white-box testing) can be helpful in finding certain kinds of errors, but black-box testing is equally valuable. Both should be done.

Test Planning

Testing Artifacts in UP

A test case specifies

Test Case Design

"Bugs lurk in corners and congregate at boundaries..." Boris Beer *

Testing Methods: Views of Box

Black Box Testing

White/Open/Glass Box Testing

Grey Box Testing

Test Coverage Analysis

White Box Testing Techniques

Code Coverage Analysis

Objective: to see how much of the code has been covered by the testing

What to Cover?

Can define coverage in terms of a control flow graph

White Box: Exhaustive Path Testing


White Box: Selective Path Testing


Basis Path Set

Basis set = a minimal set of independent paths that can, in linear combination, generate all possible paths through the CFG

Idea: Construct a set of test cases that covers a basis set of control flow paths.

Basis Sets & Cyclomatic Complexity

Fact: a CFG may have many basis sets

Fact: every basis set of a given CFG contains the same number of paths; this number, V(G), is called the Cyclomatic Complexity of the CFG.

Fact: V(G) = |E| - |N| + p, where
|E| = number of edges
|N| = number of nodes
p = 1 + number of connected components (normally p = 2)

Fact: every edge of a flow graph is traversed by at least one path in every basis set.

Consequence: covering every edge of a CFG requires no more than V(G) tests. (It may sometimes be done with fewer.)

Q: How should we choose a good basis set?

A: One way is to start with what seems to be the most important path through the code, and use this as the baseline from which we start to build an independent set of paths.

Simplified Computation of V(G)

  1. Start with the value 1, and for each decision node N in the CFG add one less than the branching factor of N.
  2. Count predicates directly from the source code:
    • if, while, etc. each add one to the complexity
    • boolean operators add one if they have short-circuit semantics
    • switch statements add one for each case-labeled statement (not the number of labels)
  3. If the CFG is planar, count the regions.

There is another way to compute V(G), using an adjacency (connection) matrix.
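
For a single-procedure CFG (so p = 2 in the formula above), the simplest use of the matrix is to count its nonzero entries to obtain |E| and then apply V(G) = |E| - |N| + 2 directly. A minimal sketch, using a small hypothetical example graph:

#include <stdio.h>

#define N 6   /* number of CFG nodes in the hypothetical example graph */

/* adj[i][j] = 1 if there is a control-flow edge from node i to node j.
   The graph below is an if-then-else followed by a loop, for illustration only. */
static const int adj[N][N] = {
    {0,1,1,0,0,0},   /* node 0: decision, two successors        */
    {0,0,0,1,0,0},   /* node 1: then-branch                     */
    {0,0,0,1,0,0},   /* node 2: else-branch                     */
    {0,0,0,0,1,1},   /* node 3: loop test, two successors       */
    {0,0,0,1,0,0},   /* node 4: loop body, back edge to node 3  */
    {0,0,0,0,0,0}    /* node 5: exit                            */
};

int main(void) {
    int edges = 0;
    for (int i = 0; i < N; i++)           /* |E| = number of nonzero entries */
        for (int j = 0; j < N; j++)
            edges += adj[i][j];
    printf("V(G) = %d\n", edges - N + 2); /* V(G) = |E| - |N| + p, with p = 2 */
    return 0;
}

This prints V(G) = 3, matching the two binary decision nodes (the if and the loop test) plus one.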

Computing V(G) By Counting Branches

V(G) = 1 + 1 + 1 + 1 + 3 = 7

Diagram is reproduced from http://www.cs.technion.ac.il/Courses/OOP/slides/export/236804-Fall-1997/metrics/part2.html.

Computing V(G) Directly from Code

void complexity6 (int i, int j) {                // V(G) = 1
  if (i > 0 && j < 0) {                          // +2
     while (i > j) {                             // + 1
        if (i % 2 && j % 2) printf ("%d\n", i);  // + 2 = 6
        else printf ("%d\n", j);
        i--;
     }
  }
}

Computing V(G) By Counting Regions


Diagram reproduced from http://www.cs.technion.ac.il/Courses/OOP/slides/export/236804-Fall-1997/metrics/part2.html.

Other Uses of Cyclomatic Complexity

A software metric that reflects the logical complexity of code, which can be applied to:

Cyclomatic Complexity


Basis Path Testing


Basis Path Testing


Basis Path Testing Notes


Basis Path Coverage Example

The function is supposed to return the number of occurrences of 'C' if the string begins with 'A', and -1 otherwise.

int count(char * string) {     //  V(G) = 1
  int index = 0, i = 0, j = 0;       
  if (string[index] == 'A') {       // +1
L1:  index = index + 1;
     if (string[index] == 'B') {    // +1
        j++; goto L1; }
     if (string[index] == 'C') {    // +1
       i += j; j = 0; goto L1; }
     i += j;
     if (string[index] != '\0') {   // +1 = 5
        j = 0; goto L1; }
  } else i = -1;
  return i; }

Error: The count is incorrect for strings that begin with the letter 'A' and in which the number of occurrences of 'B' is not equal to the number of occurrences of 'C'.

The commonly used statement coverage and branch coverage criteria can both be satisfied with just two test cases, for example "X" and "ABCX".

Since V(G) = 5, these tests do not satisfy the basis set testing criterion.

A set of test cases that satisfy the criterion:

string    return    pass/fail
"X"         -1      correct
"ABCX"       1      correct
"A"          0      correct
"AB"         1      incorrect
"AC"         0      incorrect

Weak Structured Testing

Control Structure Testing

Typical Errors

Control Structure Testing

Condition Testing

Data Flow Testing

This is one of several helpful strategies, but like the others it is not necessarily complete. Can you think of a counter-example?

Compilers do flow analysis as part of the code "optimization" process. The same analysis can be used by test coverage analysis tools.

Loop Testing


Loop Testing

Coverage for a simple n-bounded loop (see the sketch after this list):

  1. skip the loop entirely
  2. just one pass through the loop
  3. two passes through the loop
  4. m passes through the loop, where m < n
  5. n-1, n, n+1 passes through the loop
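
A minimal sketch of these cases in C: the function sum_first and its bound MAX are hypothetical, but the driver below exercises 0, 1, 2, m < n, n-1, n, and an attempted n+1 passes through the loop.

#include <assert.h>

#define MAX 10   /* the loop bound n in the hypothetical unit under test */

/* Sums the first k elements of a, but never iterates more than MAX times. */
static int sum_first(const int a[], int k) {
    int sum = 0;
    for (int i = 0; i < k && i < MAX; i++)   /* a simple loop bounded by MAX */
        sum += a[i];
    return sum;
}

int main(void) {
    int a[MAX] = {1, 1, 1, 1, 1, 1, 1, 1, 1, 1};

    assert(sum_first(a, 0)       == 0);        /* skip the loop entirely      */
    assert(sum_first(a, 1)       == 1);        /* exactly one pass            */
    assert(sum_first(a, 2)       == 2);        /* two passes                  */
    assert(sum_first(a, 5)       == 5);        /* m passes, m < n             */
    assert(sum_first(a, MAX - 1) == MAX - 1);  /* n-1 passes                  */
    assert(sum_first(a, MAX)     == MAX);      /* n passes                    */
    assert(sum_first(a, MAX + 1) == MAX);      /* attempted n+1 passes        */
    return 0;
}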

Loop Testing: Nested Loops

  1. start at innermost loop; set all outer loops to their minimum iteration parameter values
  2. test the min+1, typical, max-1, and max for the innermost loop, while holding the outer loops at their minimum values
  3. move out one loop and set it up as in step 2, holding all other loops at typical values.
  4. repeat step 3 until the outermost loop has been tested

Loop Testing: Concatenated Loops

Unstructured loops are not amenable to testing based on loop structure. One can either rewrite them into structured form, or use some other technique, such as basis path coverage.

Testing tools are very helpful in verifying test coverage. Some tools work like debuggers or execution profilers, by adding instrumentation code or breakpoints to the program under test. This allows one to verify that a given path has been taken.

Black-Box Testing

Goal = good coverage of the requirements and domain model

Black Box Testing Techniques

Equivalence Partitioning


Equivalence Partitioning

Idea:

This is also known as input space partitioning.

Examples

Boundary Value Analysis

Grocery Store Example

Follow this link for an example of application of the partitioning method.
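
As a generic illustration of both techniques (the valid_quantity function and its range of 1 to 99 are hypothetical), one can pick a representative value from each equivalence class and then add values at and on either side of each boundary:

#include <assert.h>
#include <stdbool.h>

/* Hypothetical function under test: a quantity is valid if 1 <= q <= 99. */
static bool valid_quantity(int q) {
    return q >= 1 && q <= 99;
}

int main(void) {
    /* Equivalence partitioning: one representative per class of the input space. */
    assert(valid_quantity(-5)  == false);   /* class: below the valid range  */
    assert(valid_quantity(50)  == true);    /* class: within the valid range */
    assert(valid_quantity(200) == false);   /* class: above the valid range  */

    /* Boundary value analysis: values at and adjacent to each boundary. */
    assert(valid_quantity(0)   == false);
    assert(valid_quantity(1)   == true);
    assert(valid_quantity(2)   == true);
    assert(valid_quantity(98)  == true);
    assert(valid_quantity(99)  == true);
    assert(valid_quantity(100) == false);
    return 0;
}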

Observations

Equivalence partitioning & boundary analysis

Comparison Testing

See test examples in Ada for some tests that use this technique.

Other Black Box Techniques

Testing is an "Umbrella Activity"


Remember the V, spiral, and UP models. Testing activities occur in all phases.

Testing in UP


Testing Levels

Progress is bottom-up (smaller to larger scope)

All kinds are necessary

* Regression testing is critical, should be automated, and should encompass all types of tests.

Unit Tests

Unit Testing Rules (Dennis)

* Some of these are intentional over-generalizations.

Unit Testing


Unit Test Environment


Unit Test Frameworks

Unit Test Examples

See examples in Ada for some tests that use this technique.

Interface testing

Does code reuse or inheritance mean we don't need to test the reused code or inherited operations?

See also Object-Oriented Testing: Myth and Reality, by Robert V. Binder, from Object magazine, May 1995.

Types of Interfaces

Interface errors

Interface testing guidelines

Stress testing

Integration Testing

Integration Testing Strategies


Top Down Integration


Bottom-Up Integration


Sandwich Testing


Testing approaches

Regression Testing

System Testing

Acceptance Tests

Alpha Testing

Beta Testing

Testing Tools

When a correction is made to code, it is a good idea to include a comment at the location of the patch, indicating the reason for the change. If this is a bug-fix, one should indicate which test failures exposed the problem that motivated the fix. If there is no concise test, one should be written and added to the regression test set. Later, if someone questions that part of the code (maybe in fixing another bug), they will know to look at the relevant regression tests *before* making a new patch.

Example of Test Assertions
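
A minimal sketch in C (the sat_add function and its overflow bug-fix are hypothetical): each assertion records the expected behavior, so a later regression aborts the test run at the exact failing check.

#include <assert.h>
#include <limits.h>

/* Hypothetical unit under test: saturating addition of two ints. */
static int sat_add(int a, int b) {
    if (a > 0 && b > INT_MAX - a) return INT_MAX;   /* would overflow upward   */
    if (a < 0 && b < INT_MIN - a) return INT_MIN;   /* would overflow downward */
    return a + b;
}

/* Regression tests for the (hypothetical) overflow bug-fix. */
int main(void) {
    assert(sat_add(2, 3) == 5);                 /* ordinary case                 */
    assert(sat_add(INT_MAX, 1) == INT_MAX);     /* the case the fix addressed    */
    assert(sat_add(INT_MIN, -1) == INT_MIN);    /* symmetric underflow case      */
    assert(sat_add(-4, 4) == 0);                /* boundary: result exactly zero */
    return 0;
}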

Debugging

Debugging: A Diagnostic Process


Debugging Effort


Symptoms & Causes


Consequences of Bugs


Dangers of Patching

Tests catch defect


Dangers of Patching

Patch enables tests to pass


But may not completely correct defect

Patch may also introduce new defect(s)

Dangers of Patching

More patching makes defects still harder to catch


Also makes code harder to understand

Debugging Approaches

Brute Force

Backtracking

  1. Start from point of failure
  2. Trace flow of control and/or data backward in code
  3. Hope to locate error

Cause Elimination: Induction

  1. Locate pertinent data
  2. Organize the data
  3. Devise a hypothesis
  4. Prove the hypothesis

Deduction → The Scientific Method

  1. Analyze
  2. Hypothesize a set of possible causes
  3. Design experiment to test hypothesis
  4. Go back to step 1 until problem is understood

Bisection

Every Experiment Should Narrow the Search Space


This has sometimes been called the "wolf fence" approach.
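
A minimal sketch of the idea (the process_step function and its planted defect are hypothetical): the probe still_ok(n) replays the first n steps and checks an invariant, and each probe halves the range of steps that could contain the defect, so roughly log2 of the number of steps experiments suffice.

#include <stdbool.h>
#include <stdio.h>

#define N_STEPS 1000

/* Hypothetical processing step; step 619 is deliberately defective. */
static int process_step(int i) {
    return (i == 619) ? -1 : 2 * i;   /* a negative result breaks the invariant */
}

/* Probe: replay the first n steps and check the invariant "all results are non-negative". */
static bool still_ok(int n) {
    for (int i = 0; i < n; i++)
        if (process_step(i) < 0)
            return false;
    return true;
}

int main(void) {
    /* Wolf fence: still_ok(good) holds, still_ok(bad) fails; bisect until they are adjacent. */
    int good = 0, bad = N_STEPS;
    while (bad - good > 1) {
        int mid = good + (bad - good) / 2;
        if (still_ok(mid)) good = mid;   /* defect lies beyond the first mid steps */
        else               bad = mid;    /* defect lies within the first mid steps */
    }
    printf("invariant first broken by step %d (about 10 probes)\n", bad - 1);
    return 0;
}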

Debugging Tools

Design for Debugging

Debugging Cautions

Debugging: Final Thoughts

  1. Don't run off half-cocked; think about the symptom you are seeing.
  2. Use tools (e.g., dynamic debugger) to gain more insight.
  3. If at an impasse, get help from someone else.
  4. Be absolutely sure to conduct regression tests when you do "fix" the bug.

For further reference, see my debugging notes from COP 4610.

Object-Oriented Testing

Broadening the View of "Testing"

It can be argued that the review of OO analysis and design models is especially useful because the same semantic constructs (e.g. classes, attributes, operations, messages) appear at the analysis, design, and code level. Therefore, a problem in the definition of class attributes that is uncovered during analysis will circumvent side effects that might occur if the problem were not discovered until design or code (or even the next iteration of analysis).

Testing the CRC Model

  1. Revisit the Class-Responsibility-Collaboration and object-relationship models
  2. Inspect the description of each CRC index card to determine if a delegated responsibility is part of the collaborator's definition.
  3. Invert the connection to ensure that each collaborator that is asked for service is receiving requests from a reasonable source.
  4. Using the inverted connections examined in step 3, determine whether other classes might be required or whether responsibilities are properly grouped among the classes.
  5. Determine whether widely requested responsibilities might be combined into a single responsibility.
  6. Steps 1 to 5 are applied iteratively to each class and through each evolution of the OOA model.

OOT Strategy

OOT - Test Case Design

Berard proposes the following approach (see the sketch after this list):

  1. Each test case should be uniquely identified and should be explicitly associated with the class to be tested.
  2. The purpose of the test should be stated.
  3. A list of testing steps should be developed for each test and should contain:
    • A list of specified states for the object that is to be tested
    • A list of messages and operations that will be exercised as a consequence of the test
    • A list of exceptions that may occur as the object is tested
    • A list of external conditions (i.e. changes in the environment external to the software that must exist in order to properly conduct the test).
    • Supplementary information that will aid in understanding or implementing the test
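
A hedged sketch of these items as a record type in C (the field names and the use of plain strings are illustrative choices, not a prescribed format):

#include <stdbool.h>

typedef struct {
    const char *id;              /* unique identifier of the test case                */
    const char *class_name;      /* the class the test is explicitly associated with  */
    const char *purpose;         /* stated purpose of the test                        */
    const char *initial_states;  /* specified states of the object to be tested       */
    const char *messages;        /* messages and operations exercised by the test     */
    const char *exceptions;      /* exceptions that may occur as the object is tested */
    const char *external_conds;  /* external conditions required to conduct the test  */
    const char *notes;           /* supplementary information                         */
    bool (*run)(void);           /* the executable testing steps themselves           */
} oo_test_case;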

OOT Methods: Random Testing

OOT Methods: Partition Testing

OOT Methods: Inter-Class Testing

Testing levels

Object class testing

Object integration

Approaches to cluster testing