Software testing
Evaluation-based
Oracle-based testing
An oracle is an evaluation tool that tells you whether the program has passed or failed a test.
In high-volume automated testing, the oracle is typically another program that generates expected results or checks the results of the software under test.
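A minimal sketch of such an oracle, assuming a hypothetical function under test fast_sort and using Python's built-in sorted as the trusted reference:

    import random

    def fast_sort(items):
        # Hypothetical function under test (a stand-in insertion sort).
        result = []
        for item in items:
            pos = 0
            while pos < len(result) and result[pos] < item:
                pos += 1
            result.insert(pos, item)
        return result

    def oracle(test_input, actual):
        # The oracle: a separate, trusted program (here Python's built-in sorted)
        # generates the expected result and decides pass/fail.
        return actual == sorted(test_input)

    def run_high_volume(n_cases=10_000):
        failures = []
        for _ in range(n_cases):
            data = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
            if not oracle(data, fast_sort(data)):
                failures.append(data)
        return failures

    if __name__ == "__main__":
        print(f"{len(run_high_volume())} failing inputs found")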
Consistency is an important criterion. An inconsistency may be a reason to report a bug, or it may reflect an intentional design variation. Evaluate consistency:
with users' expectations
with claims (the function's behavior matches what people say it's supposed to be)
with comparable products
with our image (an image the organization wants to project)
with history (past behavior)
Activity-based
performance testing
Determines how quickly the program runs, to decide whether optimization is needed.
A significant change in performance from a previous release can indicate the effect of a coding error.
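A rough timing sketch, assuming a hypothetical operation process_batch and an assumed baseline measured on the previous release; the numbers are illustrative only:

    import time

    BASELINE_SECONDS = 0.50   # assumed measurement taken on the previous release
    TOLERANCE = 1.25          # flag anything more than 25% slower than the baseline

    def process_batch():
        # Hypothetical operation under test.
        return sum(i * i for i in range(200_000))

    def test_performance_has_not_regressed():
        start = time.perf_counter()
        process_batch()
        elapsed = time.perf_counter() - start
        # A large slowdown versus the previous release may point to a coding error.
        assert elapsed <= BASELINE_SECONDS * TOLERANCE, (
            f"took {elapsed:.3f}s; baseline was {BASELINE_SECONDS:.3f}s")

    if __name__ == "__main__":
        test_performance_has_not_regressed()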
long sequence testing
Also called duration testing, reliability testing, or endurance testing.
Goal: to discover errors that short sequences of tests will miss (memory leaks, thread leaks, stack overflows).
Testing is done overnight, or for days or weeks.
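A sketch of an endurance loop, assuming a hypothetical Session object that should release its memory on close(); tracemalloc is used to spot steady memory growth that a short run would never show:

    import tracemalloc

    class Session:
        # Hypothetical resource under test; close() is supposed to release memory.
        def __init__(self):
            self.buffer = bytearray(64 * 1024)
        def close(self):
            self.buffer = None

    def long_sequence_run(iterations=1_000_000, max_growth=10 * 1024 * 1024):
        tracemalloc.start()
        baseline, _ = tracemalloc.get_traced_memory()
        for i in range(1, iterations + 1):
            session = Session()   # repeat the same small operation for hours or days
            session.close()
            if i % 100_000 == 0:
                current, _ = tracemalloc.get_traced_memory()
                growth = current - baseline
                assert growth < max_growth, (
                    f"memory grew by {growth} bytes after {i} iterations")
        tracemalloc.stop()

    if __name__ == "__main__":
        long_sequence_run()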
load testing
The system under test is attacked by many demands for resources.
The pattern of events leading to a failure points to vulnerabilities in the software.
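A sketch of a small load driver, assuming a hypothetical handle_request entry point into the system under test; many workers demand service at once, and failures are collected so the pattern can be analysed:

    from concurrent.futures import ThreadPoolExecutor, as_completed

    def handle_request(request_id):
        # Hypothetical entry point of the system under test.
        return {"id": request_id, "status": "ok"}

    def load_test(n_requests=5_000, n_workers=100):
        failures = []
        with ThreadPoolExecutor(max_workers=n_workers) as pool:
            futures = {pool.submit(handle_request, i): i for i in range(n_requests)}
            for future in as_completed(futures):
                try:
                    future.result(timeout=5)
                except Exception as exc:
                    # The pattern of failing requests points at the vulnerability.
                    failures.append((futures[future], exc))
        return failures

    if __name__ == "__main__":
        print(f"{len(load_test())} requests failed")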
installation testing
Install the software in the various ways, and on the various types of systems, on which it can be installed. Check what happens when you uninstall.
scenario testing
Tests derived from use cases (use-case flow tests). They would be classified as coverage-based tests.
guerilla testing
A form of exploratory testing, done by an experienced exploratory tester. For example, a senior tester might spend a day testing an area that would otherwise be ignored.
exploratory testing
The tester continually learns about the product, its market, its risks, and the ways in which it has failed previous tests, and uses what is learned to design new tests.
smoke testing
A type of side-effect regression testing; the goal is to prove that a new build is not worth testing. Smoke-test the things you expect to work; if they fail, you will suspect that the program was built with the wrong file or that something basic is broken.
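A sketch of a smoke check, assuming a hypothetical App object with launch() and get_version(); if checks this basic fail, the build was probably made from the wrong files and is not worth testing further:

    EXPECTED_VERSION = "2.1.0"   # assumed version string for the new build

    class App:
        # Hypothetical application interface; the real product would be driven here.
        def launch(self):
            return True
        def get_version(self):
            return "2.1.0"

    def test_smoke_app_launches():
        assert App().launch(), "application failed to launch; something basic is broken"

    def test_smoke_build_version():
        # Catches builds made with the wrong file or from the wrong branch.
        assert App().get_version() == EXPECTED_VERSION

    if __name__ == "__main__":
        test_smoke_app_launches()
        test_smoke_build_version()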
scripted testing
Manual testing done by a junior tester who follows a step-by-step procedure written by a senior tester.
regression testing
Reuse of the same old tests after a change. There are three kinds:
Side-effect regression (stability regression): retesting substantial parts of the product to prove that the change has broken something that used to work.
Old-bugs regression: prove that a change has caused an old bug fix to become unfixed.
Bug-fix regression: prove that the fix is no good.
Problems-based
computation constraints
The inputs are fine, but the processing fails.
output constraints
The inputs were legal, but the output causes a failure (e.g., when saving a file).
input constraints
A limit on what the program can handle as input (e.g., an IP address field).
risk-based testing
Use risk analysis to determine what things to test.
The greater the probability of an expensive failure, the more important it is to test that feature as early and as carefully as possible.
Coverage-based
combination testing
Testing two or more variables or functions in combination with each other in order to check for bad interactions (e.g., CFU and OCS).
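A sketch of combination testing over a few assumed configuration variables, with a hypothetical place_call function standing in for the product; the full cross product is small enough to run here, and pairwise selection is the usual way to shrink larger ones:

    import itertools

    FORWARDING = ["none", "CFU", "CFNA", "CFBL"]   # call-forwarding settings
    CHARGING = ["prepaid", "postpaid"]             # assumed billing modes
    ROAMING = [False, True]

    def place_call(forwarding, charging, roaming):
        # Hypothetical function under test; some combinations may interact badly.
        return "connected"

    def test_all_combinations():
        for combo in itertools.product(FORWARDING, CHARGING, ROAMING):
            assert place_call(*combo) == "connected", f"bad interaction for {combo}"

    if __name__ == "__main__":
        test_all_combinations()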
requirements-based testing
Proving that the product satisfies every requirement in a requirements document.
specification-based testing
Verifying every claim that is made about the product in the specification. This often includes every claim made in the manual, in marketing documents, or in advertisements. A traceability matrix is used to track which tests cover which specification claims.
configuration coverage
If you have to test compatibility with 100 printers, and you have tested with 10, you have achieved 10% printer coverage.
statement and branch coverage
Your tests achieve 100% statement coverage if they execute every statement (or line of code).
They achieve 100% statement-and-branch coverage if they execute every statement and every branch from one statement to another.
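A small illustration of the difference, using a made-up function: the first test alone executes every statement, but only the second test exercises the path where the if is not taken.

    def apply_discount(price, is_member):
        # Made-up function used only to illustrate the two coverage measures.
        total = price
        if is_member:
            total = price - price // 10   # statement inside the branch
        return total

    def test_member():
        # Executes every statement of apply_discount: 100% statement coverage.
        assert apply_discount(100, True) == 90

    def test_non_member():
        # Needed for 100% branch coverage: takes the branch where the if is false.
        assert apply_discount(100, False) == 100

Running these under a coverage tool with branch measurement (for example, coverage.py with its --branch option) shows the first test reaching full statement coverage while branch coverage stays incomplete.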
path testing
A (logical) path includes all of the steps that the program passed through in order to reach your current state (e.g., routes through an interface menu, or CFNA vs. CFBL).
state-based testing
A program moves from state to state. In a given state, some inputs are valid, and others are ignored or rejected
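A sketch of state-based tests for an assumed toy Connection object with CLOSED and OPEN states; each test checks which inputs a state accepts and which it rejects:

    class Connection:
        # Toy state machine (assumed example): CLOSED -> OPEN -> CLOSED.
        def __init__(self):
            self.state = "CLOSED"
        def open(self):
            if self.state != "CLOSED":
                raise RuntimeError("already open")
            self.state = "OPEN"
        def send(self, data):
            if self.state != "OPEN":
                raise RuntimeError("not open")
            return len(data)
        def close(self):
            if self.state != "OPEN":
                raise RuntimeError("not open")
            self.state = "CLOSED"

    def test_valid_transition_sequence():
        conn = Connection()
        conn.open()
        assert conn.send(b"hello") == 5
        conn.close()
        assert conn.state == "CLOSED"

    def test_input_rejected_in_wrong_state():
        conn = Connection()
        try:
            conn.send(b"hello")   # invalid input in the CLOSED state
        except RuntimeError:
            pass
        else:
            raise AssertionError("send() should be rejected while CLOSED")

    if __name__ == "__main__":
        test_valid_transition_sequence()
        test_input_rejected_in_wrong_state()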
domain testing
Identify the functions and the variables. For each variable, partition its set of possible values into equivalence classes and pick a small number of representatives from each class. Also change the value of a field in several ways (import data into the field, type it in, copy and paste it, and so on).
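A sketch of domain testing for a single assumed variable, an age field that is supposed to accept integers from 0 to 130: the values are partitioned into classes and a few representatives (mostly boundaries) are tested.

    def accept_age(value):
        # Hypothetical validation routine for the field under test.
        return isinstance(value, int) and 0 <= value <= 130

    # One representative per equivalence class, favouring boundary values.
    CASES = [
        (-1, False),     # just below the valid range
        (0, True),       # lower boundary
        (130, True),     # upper boundary
        (131, False),    # just above the valid range
        ("42", False),   # wrong type, e.g. text pasted into the field
    ]

    def test_age_domain():
        for value, expected in CASES:
            assert accept_age(value) == expected, f"unexpected result for {value!r}"

    if __name__ == "__main__":
        test_age_domain()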
menu tour
Walk through all of the menus and dialogs in a GUI product, taking every available choice
feature or function integration testing
Test several functions in combination, to see how they work together.
black box function testing
focuses on things the user can do or select (commands and features)
white box function testing
concentrates on the functions as you see them in the code
Also called unit testing.
function testing
Test the function thoroughly, to the extent that you can say with confidence that it works.
It's wise to do function testing before doing more complex tests that involve several functions.
People-based (testers)
eat your own food
The company relies on prerelease versions of its own software, waiting until the software is reliable enough for real use (beta) before selling it.
paired testing
Two testers work together to find bugs, sharing one computer.
subject-matter expert testing
Give the product to an expert on some of the issues addressed by the software and request feedback. The expert may or may not be someone you would expect to use the product.
bug bashes
In-house testing using secretaries, programmers, marketers, and anyone else who is available. A typical bug bash lasts a half-day and is done when the software is close to being ready to release.
Beta testing
The product under test is very close to completion.
The testers aren't part of your organization and are members of your product's target market.
Alpha testing
In-house testing by the test team or other insiders
user testing
Testing done by the types of people who would typically use the product.