Software testing
People-based (testers)
user testing
Alpha testing
In-house testing by the test team or other insiders
Beta testing
Testers aren’t part of your organization and are members of your product’s target market
the product under test is typically very close to completion
bug bashes
In-house testing using secretaries, programmers, marketers, and anyone else who is available
A typical bug bash lasts half a day and is run when the software is close to ready for release.
subject-matter expert testing
Give the product to an expert on some issue addressed by the software and request feedback.
The expert may or may not be someone you would expect to use the product.
paired testing
Two testers work together to find bugs, sharing one computer.
eat your own dogfood
Your company relies on prerelease (beta) versions of its own software for real work, waiting until the software is reliable enough for that use before selling it.
Coverage-based
function testing
Test the function thoroughly, to the extent that you can say with confidence that it works.
It’s wise to do function testing before doing more complex tests that involve several functions
white box function testing
often called unit testing
concentrates on the functions as you see them in the code
black box function testing
focuses on things the user can do or select (commands and features)
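A minimal function-test sketch using Python’s unittest (the discount function and its rules are invented for illustration):

    import unittest

    def apply_discount(price, rate):
        # Hypothetical function under test: rejects rates outside [0, 1].
        if not 0 <= rate <= 1:
            raise ValueError("rate must be between 0 and 1")
        return round(price * (1 - rate), 2)

    class TestApplyDiscount(unittest.TestCase):
        # Test this one function thoroughly before moving on to tests
        # that combine several functions.
        def test_typical_rate(self):
            self.assertEqual(apply_discount(100.0, 0.25), 75.0)

        def test_boundary_rates(self):
            self.assertEqual(apply_discount(80.0, 0.0), 80.0)
            self.assertEqual(apply_discount(80.0, 1.0), 0.0)

        def test_invalid_rate_is_rejected(self):
            with self.assertRaises(ValueError):
                apply_discount(100.0, 1.5)

    if __name__ == "__main__":
        unittest.main()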
feature or function integration testing
Test several functions together to see whether they interact correctly.
menu tour
Walk through all of the menus and dialogs in a GUI product, taking every available choice
domain testing
Identify the functions and the variables
For each variable, partition its set of possible values into equivalence classes and pick a small number of representatives from each class, favoring boundary values.
Change the value of a field in several ways (import data into the field, type it in, copy and paste it, and so on).
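A minimal sketch of the partitioning idea in Python (the age field and its 0-130 limit are invented for illustration):

    def validate_age(age):
        # Hypothetical field under test: accepts whole numbers 0..130.
        return isinstance(age, int) and 0 <= age <= 130

    # One or two representatives per equivalence class, favoring boundaries.
    cases = [
        (-1, False),    # below the valid range (invalid class)
        (0, True),      # lower boundary of the valid class
        (35, True),     # interior of the valid class
        (130, True),    # upper boundary of the valid class
        (131, False),   # above the valid range (invalid class)
    ]
    for value, expected in cases:
        assert validate_age(value) == expected, value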
state-based testing
A program moves from state to state. In a given state, some inputs are valid, and others are ignored or rejected
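A toy sketch of the idea (the turnstile and its transition rules are my own example, not from the source):

    class Turnstile:
        # Toy state machine: "locked" accepts coin, "unlocked" accepts
        # push; every other input is ignored in the current state.
        def __init__(self):
            self.state = "locked"

        def handle(self, event):
            if self.state == "locked" and event == "coin":
                self.state = "unlocked"
            elif self.state == "unlocked" and event == "push":
                self.state = "locked"

    # State-based test: drive each (state, input) pair and check the
    # resulting state against the transition table.
    t = Turnstile()
    t.handle("push")                 # ignored while locked
    assert t.state == "locked"
    t.handle("coin")                 # valid transition
    assert t.state == "unlocked"
    t.handle("coin")                 # ignored while unlocked
    assert t.state == "unlocked"
    t.handle("push")                 # valid transition back
    assert t.state == "locked"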
path testing
A (logical) path includes all of the steps the program passed through to reach its current state.
e.g., different routes through the interface menus, or reaching the same call-handling state via CFNA (call forward no answer) vs. CFBL (call forward busy line) in a phone system.
statement and branch coverage
You achieve 100% statement coverage if your tests execute every statement (or line of code) in the program.
You achieve 100% statement-and-branch coverage if your tests also execute every branch from one statement to another.
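A small illustration of the difference (in Python practice this is measured with a tool such as coverage.py):

    def absolute(x):
        if x < 0:
            x = -x
        return x

    # absolute(-3) alone executes every statement, giving 100% statement
    # coverage, but the if's false branch is never taken; adding
    # absolute(3) is needed to reach 100% branch coverage.
    assert absolute(-3) == 3
    assert absolute(3) == 3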
configuration coverage
If you have to test compatibility with 100 printers, and you have tested with 10, you have achieved 10% printer coverage.
specification-based testing
Verifying every claim made about the product in the specification.
Often extended to include every claim made in the manual and in marketing documents or advertisements.
requirements-based testing
Proving that the product satisfies every requirement in a requirements document.
combination testing
Testing two or more variables or functions in combination with each other to check for bad interactions (e.g., CFU (call forwarding unconditional) and OCS (outgoing call screening) in a phone system).
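A sketch of exhaustive two-variable combination testing (the phone features and the planted interaction defect are invented):

    from itertools import product

    def place_call(forwarding_on, screening_on):
        # Invented stand-in for a phone system with two call-handling
        # features; the planted defect only appears when both are on.
        if forwarding_on and screening_on:
            return "error"
        return "ok"

    # Exercise every combination of the two variables; single-variable
    # tests would never reach the failing cell.
    for fwd, scr in product([False, True], repeat=2):
        print(f"forwarding={fwd} screening={scr} -> {place_call(fwd, scr)}")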
Problems-based
risk-based testing
Use risk analysis to determine what to test.
The greater the probability of an expensive failure, the more important it is to test the related feature as early and as carefully as possible.
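A sketch of the prioritization arithmetic (all figures are invented placeholders for a real risk analysis):

    # Rank features by expected loss = probability of failure x cost.
    features = {
        "payment processing": (0.30, 100_000),
        "login":              (0.10,  50_000),
        "report export":      (0.50,   2_000),
    }
    for name, (prob, cost) in sorted(features.items(),
                                     key=lambda kv: kv[1][0] * kv[1][1],
                                     reverse=True):
        print(f"test early: {name} (expected loss {prob * cost:,.0f})")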
input constraints
A limit on what the program can handle (e.g., a field that accepts only a valid IP address); see the probing sketch after these three entries.
output constraints
The inputs were legal, but producing the output caused a failure (e.g., when saving a file).
computation constraints
The inputs are fine, but the computation on them fails.
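A sketch of probing an input constraint, reusing the IP-address example (ipaddress is in Python’s standard library; the accepting function is an invented stand-in for the program’s input handling):

    import ipaddress

    def accepts_ip(text):
        # Stand-in for the program's input handling: the constraint is
        # that only a well-formed IPv4 address may be accepted.
        try:
            ipaddress.IPv4Address(text)
            return True
        except ipaddress.AddressValueError:
            return False

    # Probe the constraint at, just past, and far past the limit.
    assert accepts_ip("192.168.0.1")
    assert not accepts_ip("256.1.1.1")     # octet out of range
    assert not accepts_ip("192.168.0")     # too few octets
    assert not accepts_ip("a" * 10_000)    # absurdly long input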
Activity-based
regression testing
Reuse of the same tests after a change.
Three kinds:
Bug fix regression: the goal is to prove that the fix is no good, i.e., that the bug is still there (see the sketch after this list).
Old bugs regression: the goal is to prove that a change has caused an old bug fix to become unfixed.
Side-effect regression (stability regression): retesting substantial parts of the product to prove that the change has broken something that used to work.
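A sketch of a bug-fix regression test (the bug number and parser are hypothetical): keep the exact input from the original failure report as a permanent test.

    import unittest

    def parse_quantity(text):
        # After the (hypothetical) fix for bug #1042: quantities with
        # thousands separators used to raise ValueError.
        return int(text.replace(",", ""))

    class BugFixRegression(unittest.TestCase):
        def test_bug_1042_thousands_separator(self):
            # The exact input from the original failure report, kept as
            # a permanent test so old-bugs regression catches the fix
            # coming undone later.
            self.assertEqual(parse_quantity("1,024"), 1024)

    if __name__ == "__main__":
        unittest.main()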
scripted testing
Manual testing done by a junior tester who follows a step-by-step procedure written by a senior tester.
smoke testing
A type of side-effect regression testing whose goal is to prove that a new build is not worth testing.
Smoke-test things you expect to work; if they fail, suspect that the program was built with the wrong files or that something basic is broken.
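A sketch of the idea (the checks and the add function are invented stand-ins for a real build’s basic functionality):

    def add(a, b):
        # Stand-in for basic functionality that worked in every past build.
        return a + b

    SMOKE_CHECKS = [
        ("addition works", lambda: add(2, 2) == 4),
        ("negatives handled", lambda: add(-1, 1) == 0),
    ]

    def build_is_testable():
        # If anything this basic fails, suspect a build made from the
        # wrong files or broken packaging, and stop testing.
        return all(check() for _, check in SMOKE_CHECKS)

    if __name__ == "__main__":
        print("worth testing" if build_is_testable() else "reject the build")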
exploratory testing
The tester continually learns about the product, its market, its risks, and the ways in which it has failed previous tests, and uses that learning to design new and better tests.
guerrilla testing
A form of exploratory testing done by an experienced exploratory tester (e.g., a senior tester might spend a day testing an area that would otherwise be ignored).
scenario testing
(use case flow tests) Tests derived from use cases.
These could also be classified as coverage-based tests.
installation testing
Install the software in the various ways and on the various types of systems on which it can be installed. What happens when you uninstall?
load testing
The system under test is attacked by many demands for resources.
When the system fails, the pattern of events leading to the failure points to vulnerabilities in the software.
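A sketch using a thread pool to pile up simultaneous demands (handle_request is an invented stand-in for the system under test):

    from concurrent.futures import ThreadPoolExecutor

    def handle_request(i):
        # Stand-in for the system under test; a real load test would hit
        # a server, database, or device pool instead.
        return sum(range(10_000))

    # Attack the system with many concurrent demands; count completions
    # and watch where and how failures begin to cluster.
    with ThreadPoolExecutor(max_workers=200) as pool:
        results = list(pool.map(handle_request, range(5_000)))
    print(f"{len(results)} requests completed")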
long sequence testing
Testing is done overnight or for days or weeks.
Goal: to discover errors that short-sequence tests will miss (memory leaks, thread leaks, stack overflows).
Also called duration testing, reliability testing, or endurance testing.
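A sketch of leak detection over a long run, using Python’s tracemalloc (the leaky operation is deliberately planted for illustration):

    import gc
    import tracemalloc

    leaked = []  # simulates a defect: a reference retained on every call

    def operation():
        # Stand-in for one user-level operation repeated for hours.
        data = [0] * 100
        leaked.append(data)   # the planted memory leak
        return len(data)

    tracemalloc.start()
    baseline = tracemalloc.get_traced_memory()[0]
    for i in range(1, 10_001):
        operation()
        if i % 2_000 == 0:
            gc.collect()
            current = tracemalloc.get_traced_memory()[0]
            # Memory that climbs steadily across iterations points to a
            # leak that a short test run would never reveal.
            print(f"iteration {i}: {current - baseline:,} bytes above baseline")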
performance testing
to determine how quickly the program runs (to decide whether optimization is needed)
A significant change in performance from a previous release can indicate the effect of a coding error.
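A sketch of a timing check against a previous release (the operation and the baseline figure are invented):

    import time

    def operation():
        # Stand-in for the code path being timed.
        return sorted(range(100_000), reverse=True)

    BASELINE_SECONDS = 0.05   # invented figure from the previous release

    RUNS = 10
    start = time.perf_counter()
    for _ in range(RUNS):
        operation()
    elapsed = (time.perf_counter() - start) / RUNS

    # A large swing from the previous release can flag a coding error
    # even when the output itself is still correct.
    print(f"mean {elapsed:.4f}s vs baseline {BASELINE_SECONDS:.4f}s")
    if elapsed > BASELINE_SECONDS * 1.5:
        print("significant slowdown since the last release; investigate")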
Evaluation-based
Consistency is an important criterion:
with history (past behavior)
with our image (an image the organization wants to project)
with comparable products
with claims (Function behavior is consistent with what people say it’s supposed to be)
with user’s expectations
Inconsistency may be a reason to report a bug or may reflect intentional design variation
Oracle-based testing
An oracle is an evaluation tool that tells you whether the program has passed or failed a test.
In high-volume automated testing, the oracle is typically another program that generates expected results or checks the results of the software under test.
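A sketch of a reference-implementation oracle driving high-volume random testing (both sort routines are invented stand-ins):

    import random

    def fast_sort(xs):
        # Stand-in for the implementation under test.
        return sorted(xs)

    def oracle(xs):
        # Independent, deliberately simple computation of the expected
        # result (selection sort); slow is fine for an oracle.
        out = list(xs)
        for i in range(len(out)):
            j = min(range(i, len(out)), key=out.__getitem__)
            out[i], out[j] = out[j], out[i]
        return out

    # High-volume automated testing: generate many random inputs and let
    # the oracle decide pass/fail for each one.
    for _ in range(10_000):
        xs = [random.randint(-1000, 1000) for _ in range(random.randint(0, 50))]
        assert fast_sort(xs) == oracle(xs), xs
    print("all cases passed")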