Foundations of Software Testing For VTU

Foundations of Software Testing presents sound engineering approaches for software test generation, selection, minimization, assessment, and enhancement. Using numerous examples, it offers a lucid description of a wide range of techniques for a variety of testing-related tasks. Students, practitioners...


Bibliographic Details
Main author: Mathur, Aditya P.
Format: Electronic book
Language: English
Published: Noida: Pearson India, 2010.
Edition: 1st ed.
Subjects:
View in Biblioteca Universitat Ramon Llull: https://discovery.url.edu/permalink/34CSUC_URL/1im36ta/alma991009820413106719
Table of Contents:
  • Cover
  • Contents
  • Preface
  • Acknowledgments
  • Part I: Preliminaries
  • Chapter 1: Basics of Software Testing
  • 1.1 Humans, Errors, and Testing
  • 1.1.1. Errors, faults, and failures
  • 1.1.2. Test automation
  • 1.1.3. Developer and tester as two roles
  • 1.2 Software Quality
  • 1.2.1. Quality attributes
  • 1.2.2. Reliability
  • 1.3 Requirements, Behavior, and Correctness
  • 1.3.1. Input domain and program correctness
  • 1.3.2. Valid and invalid inputs
  • 1.4 Correctness Versus Reliability
  • 1.4.1. Correctness
  • 1.4.2. Reliability
  • 1.4.3. Program use and the operational profile
  • 1.5 Testing and Debugging
  • 1.5.1. Preparing a test plan
  • 1.5.2. Constructing test data
  • 1.5.3. Executing the program
  • 1.5.4. Specifying program behavior
  • 1.5.5. Assessing the correctness of program behavior
  • 1.5.6. Construction of oracles
  • 1.6 Test Metrics
  • 1.6.1. Organizational metrics
  • 1.6.2. Project metrics
  • 1.6.3. Process metrics
  • 1.6.4. Product metrics: Generic
  • 1.6.5. Product metrics: OO software
  • 1.6.6. Progress monitoring and trends
  • 1.6.7. Static and dynamic metrics
  • 1.6.8. Testability
  • 1.7 Software and Hardware Testing
  • 1.8 Testing and Verification
  • 1.9 Defect Management
  • 1.10 Execution History
  • 1.11 Test-Generation Strategies
  • 1.12 Static Testing
  • 1.12.1. Walkthroughs
  • 1.12.2. Inspections
  • 1.12.3. Use of static code analysis tools in static testing
  • 1.12.4. Software complexity and static testing
  • 1.13 Model-Based Testing and Model Checking
  • 1.14 Control-Flow Graph
  • 1.14.1. Basic block
  • 1.14.2. Flow graph: Definition and pictorial representation
  • 1.14.3. Path
  • 1.15 Dominators and Postdominators
  • 1.16 Program-Dependence Graph
  • 1.16.1. Data dependence
  • 1.16.2. Control dependence
  • 1.17 Strings, Languages, and Regular Expressions
  • 1.18 Types of Testing
  • 1.18.1. Classifier C1: Source of test generation
  • 1.18.2. Classifier C2: Life cycle phase
  • 1.18.3. Classifier C3: Goal-directed testing
  • 1.18.4. Classifier C4: Artifact under test
  • 1.18.5. Classifier C5: Test process models
  • 1.19 The Saturation Effect
  • 1.19.1. Confidence and true reliability
  • 1.19.2. Saturation region
  • 1.19.3. False sense of confidence
  • 1.19.4. Reducing Δ
  • 1.19.5. Impact on test process
  • Summary
  • Bibliographic Notes
  • Exercises
  • Part II: Test Generation
  • Chapter 2: Test Generation from Requirements
  • 2.1 Introduction
  • 2.2 The Test-Selection Problem
  • 2.3 Equivalence Partitioning
  • 2.3.1. Faults targeted
  • 2.3.2. Relations and equivalence partitioning
  • 2.3.3. Equivalence classes for variables
  • 2.3.4. Unidimensional versus multidimensional partitioning
  • 2.3.5. A systematic procedure for equivalence partitioning
  • 2.3.6. Test selection based on equivalence classes
  • 2.3.7. GUI design and equivalence classes
  • 2.4 Boundary-Value Analysis
  • 2.5 Category-Partition Method
  • 2.5.1 Steps in the category-partition method
  • 2.6 Cause-Effect Graphing
  • 2.6.1. Notation used in cause-effect graphing
  • 2.6.2. Creating cause-effect graphs
  • 2.6.3. Decision table from cause-effect graph
  • 2.6.4. Heuristics to avoid combinatorial explosion
  • 2.6.5. Test generation from a decision table
  • 2.7 Test Generation from Predicates
  • 2.7.1. Predicates and boolean expressions
  • 2.7.2. Fault model for predicate testing
  • 2.7.2.1 Missing or Extra Boolean Variable Faults
  • 2.7.3. Predicate constraints
  • 2.7.4. Predicate-testing criteria
  • 2.7.5. Generating BOR-, BRO-, and BRE-adequate tests
  • 2.7.5.1 Generating the BOR-constraint set
  • 2.7.5.2 Generating the BRO-constraint set
  • 2.7.5.3 Generating the BRE-constraint set
  • 2.7.5.4 Generating BOR Constraints for Nonsingular Expressions
  • 2.7.6 Cause-effect Graphs and Predicate Testing
  • 2.7.7 Fault Propagation
  • 2.7.8 Predicate Testing in Practice
  • 2.7.8.1 Specification-based Predicate Test Generation
  • 2.7.8.2 Program-based Predicate Test Generation
  • Summary
  • Bibliographic Notes
  • Exercises
  • Chapter 3: Test Generation from Finite-State Models
  • 3.1 Software Design and Testing
  • 3.2 Finite-state Machines
  • 3.2.1. Excitation using an input sequence
  • 3.2.2. Tabular representation
  • 3.2.3. Properties of FSM
  • 3.3 Conformance Testing
  • 3.3.1. Reset inputs
  • 3.3.2. The testing problem
  • 3.4 A Fault Model
  • 3.4.1. Mutants of FSMs
  • 3.4.2. Fault coverage
  • 3.5 Characterization Set
  • 3.5.1. Construction of the k-equivalence partitions
  • 3.5.2. Deriving the characterization set
  • 3.5.3. Identification sets
  • 3.6 The W-method
  • 3.6.1. Assumptions
  • 3.6.2. Maximum number of states
  • 3.6.3. Computation of the transition cover set
  • 3.6.4. Constructing Z
  • 3.6.5. Deriving a test set
  • 3.6.6. Testing using the W-method
  • 3.6.7. The error-detection process
  • 3.7 The Partial W-method
  • 3.7.1. Testing using the Wp-method for m = n
  • 3.7.2. Testing using the Wp-method for m > n
  • 3.8 The UIO-sequence Method
  • 3.8.1. Assumptions
  • 3.8.2. UIO sequences
  • 3.8.3. Core and noncore behavior
  • 3.8.4. Generation of UIO sequences
  • 3.8.4.1 Explanation of gen-UIO
  • 3.8.5. Distinguishing signatures
  • 3.8.6. Test generation
  • 3.8.7. Test optimization
  • 3.8.8. Fault detection
  • 3.9 Automata-Theoretic Versus Control-Flow-Based Techniques
  • 3.9.1. n-switch-cover
  • 3.9.2. Comparing automata-theoretic methods
  • Summary
  • Bibliographic Notes
  • Exercises
  • Chapter 4: Test Generation from Combinatorial Designs
  • 4.1 Combinatorial Designs
  • 4.1.1. Test configuration and test set
  • 4.1.2. Modeling the input and configuration spaces
  • 4.2 A Combinatorial Test-Design Process
  • 4.3 Fault Model
  • 4.3.1. Fault vectors
  • 4.4 Latin Squares
  • 4.5 Mutually Orthogonal Latin Squares
  • 4.6 Pairwise Design: Binary Factors
  • 4.7 Pairwise Design: Multivalued Factors
  • 4.7.1. Shortcomings of using MOLS for test design
  • 4.8 Orthogonal Arrays
  • 4.8.1. Mixed-level orthogonal arrays
  • 4.9 Covering and Mixed-Level Covering Arrays
  • 4.9.1. Covering arrays
  • 4.9.2. Mixed-level covering arrays
  • 4.10 Arrays of Strength > 2
  • 4.11 Generating Covering Arrays
  • Summary
  • Bibliographic Notes
  • Exercises
  • Chapter 5: Test Selection, Minimization, and Prioritization for Regression Testing
  • 5.1 What is Regression Testing?
  • 5.2 Regression-Test Process
  • 5.2.1. Test revalidation, selection, minimization, and prioritization
  • 5.2.2. Test setup
  • 5.2.3. Test sequencing
  • 5.2.4. Test execution
  • 5.2.5. Output comparison
  • 5.3 RTS: The Problem
  • 5.4 Selecting Regression Tests
  • 5.4.1. Test all
  • 5.4.2. Random selection
  • 5.4.3. Selecting modification-traversing tests
  • 5.4.4. Test minimization
  • 5.4.5. Test prioritization
  • 5.5 Test Selection Using Execution Trace
  • 5.5.1. Obtaining the execution trace
  • 5.5.2. Selecting regression tests
  • 5.5.3. Handling function calls
  • 5.5.4. Handling changes in declarations
  • 5.6 Test Selection Using Dynamic Slicing
  • 5.6.1. Dynamic slicing
  • 5.6.2. Computation of dynamic slices
  • 5.6.3. Selecting tests
  • 5.6.4. Potential dependence
  • 5.6.5. Computing the relevant slice
  • 5.6.6. Addition and deletion of statements
  • 5.6.7. Identifying variables for slicing
  • 5.6.8. Reduced dynamic-dependence graph
  • 5.7 Scalability of Test-Selection Algorithms
  • 5.8 Test Minimization
  • 5.8.1. The set-cover problem
  • 5.8.2. A procedure for test minimization
  • 5.9 Test Prioritization
  • 5.10 Tools For Regression Testing
  • Summary
  • Bibliographic Notes
  • Exercises
  • Part III: Test Adequacy Assessment and Enhancement
  • Chapter 6: Test-Adequacy Assessment Using Control Flow and Data Flow
  • 6.1 Test Adequacy: Basics
  • 6.1.1. What is test adequacy?
  • 6.1.2. Measurement of test adequacy
  • 6.1.3. Test enhancement using measurements of adequacy
  • 6.1.4. Infeasibility and test adequacy
  • 6.1.5. Error detection and test enhancement
  • 6.1.6. Single and multiple executions
  • 6.2 Adequacy Criteria Based on Control Flow
  • 6.2.1. Statement and block coverage
  • 6.2.2. Conditions and decisions
  • 6.2.3. Decision coverage
  • 6.2.4. Condition coverage
  • 6.2.5. Condition/decision coverage
  • 6.2.6. Multiple condition coverage
  • 6.2.7. Linear code sequence and jump (LCSAJ) coverage
  • 6.2.8. Modified condition/decision coverage
  • 6.2.9. MC/DC-adequate tests for compound conditions
  • 6.2.10. Definition of MC/DC coverage
  • 6.2.11. Minimal MC/DC tests
  • 6.2.12. Error detection and MC/DC adequacy
  • 6.2.13. Short-circuit evaluation and infeasibility
  • 6.2.14. Tracing test cases to requirements
  • 6.3 Data-Flow Concepts
  • 6.3.1. Definitions and uses
  • 6.3.2. c-use and p-use
  • 6.3.3. Global and local definitions and uses
  • 6.3.4. Data-flow graph
  • 6.3.5. Def-clear paths
  • 6.3.6. Def-use pairs
  • 6.3.7. Def-use chains
  • 6.3.8. A little optimization
  • 6.3.9. Data contexts and ordered data contexts
  • 6.4 Adequacy Criteria Based on Data Flow
  • 6.4.1. c-use coverage
  • 6.4.2. p-use coverage
  • 6.4.3. all-uses coverage
  • 6.4.4. k-dr chain coverage
  • 6.4.5. Using the k-dr chain coverage
  • 6.4.6. Infeasible c-uses and p-uses
  • 6.4.7. Context coverage
  • 6.5 Control Flow Versus Data Flow
  • 6.6 The Subsumes Relation
  • 6.7 Structural and Functional Testing
  • 6.8 Scalability of Coverage Measurement
  • Summary
  • Bibliographic Notes
  • Exercises
  • Chapter 7: Test-Adequacy Assessment Using Program Mutation