Retest makes it simple to automate black box regression testing on Windows and Unix.
Retest works by reading a retest plan (a .rt plain text file) and either generating expected files, or generating actual files and comparing them with previously generated expecteds, reporting any discrepancies. (It can also be used purely to generate files.)
All you need to do to use retest (beyond the easy one-off process of installing it) is to create a suitable retest plan file for each application you want to test.
For developers, retest can also be used as a Rust library; see the Retest API.
Retest is free open source software (FOSS) licensed under the GNU General Public License version 3 (GPLv3).
retest v4.0.10 © 2019-21 Qtrac Ltd. All Rights Reserved. https://www.qtrac.eu/retest.html

usage: retest [verbose] [cpus=n] [nocolor] [tests] [rt.rt]
    run all (or specified numbered) tests and save their outputs in the
    actuals folder and diff their outputs with the expecteds
usage: retest [verbose] [cpus=n] [nocolor] [tests] generate [rt.rt]
    run all (or specified numbered) tests and save their outputs in the
    expecteds folder (g -g --generate gen generate)
usage: retest doc
    show the manual in your web browser and quit (doc -m --manual manual)
usage: retest help
    show this help text and quit (help -h --help /?)
usage: retest version
    show retest's version and quit (version -V --version)

verbose: default is: show summary, errors, failures; use one verbose to
    show each test; use two for more; use quiet to only show errors or
    failures (v -v --verbose verbose q -q --quiet quiet)
cpus: if specified uses at most this number of cpus; default is to use
    all available
nocolor: if specified output is monochrome (useful for redirecting);
    default is to use colors (nocolor --nocolor mono --mono)
tests: numbers of specific tests to run or generate, e.g.,
    1,3,5,8-21 36-39 52 61-65
rt.rt: the retest plan file to use; defaults to rt-{win,unix}.rt if it
    exists, otherwise to rt.rt

The command line arguments may be given in any order.
License: GNU General Public License Version 3.
Setting cpus=1
is useful if you want to force generation or
testing to be done one test at a time in order (e.g., to see the total
time). Note that if any non-existent test numbers are specified, they
will be silently ignored.
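For example, to run just tests 1-5 and 8, one at a time, with verbose output (the test numbers and plan file name here are purely illustrative):

retest v cpus=1 1-5 8 rt.rt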
Retest's exit code/error level is 0 if there were no failures or errors generating or testing, or 1 if any failures or errors occurred, or 2 for any other kind of error.
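Because of this, retest is easy to use as a gate in a script. For example, a minimal Unix shell sketch (the message is illustrative):

retest q || { echo "retest reported failures or errors"; exit 1; }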
If you specify a .rt retest plan file on the command line, retest will use it. Otherwise, on Windows, retest will look for rt-win.rt and use that if it exists, falling back to rt.rt otherwise. Similarly, on Unix, retest will look for rt-unix.rt and use that if it exists, falling back to rt.rt otherwise.
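For example, to run a specific plan file regardless of the defaults (the path here is hypothetical):

retest C:\projects\reports\nightly.rt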
Every retest plan file starts with an optional “environment” section and then has one or more “test” sections. (Retest Plan File Examples are shown further on.)
Blank lines and comment lines (beginning with #) are ignored and may be used freely.
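For example, a plan file might begin like this (the application is hypothetical):

# Tests for the gamma report generator

[ENV]
APP: gamma.exe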
If present at all, this section must come first. It has this form (some entries are required, some optional, and some user-specified):
[ENV]
APP: application-to-test argument-for-application ... argument-for-application
EXPECTED_PATH: path-for-expected-files
ACTUAL_PATH: path-for-actual-files
DIFF: comparison-application argument-for-comparison-application ... argument-for-comparison-application
SET: user-key: user-value
...
Here are explanations of the arguments that can (or must) be used:
application-to-test
The application to test, along with any arguments that are always passed to it. Setting this makes $APP available for use in each test's own APP entry. For example:
APP: C:\Program Files\comparepdfcmd\comparepdfcmd.exe
APP: C:\Python36\python.exe foxtrot.py
(On Unix an interpreter will normally be found in the $PATH, but on Windows—or when you want to use a particular version when two or more are present—it will need to be specified, as illustrated here.)
path-for-expected-files
The folder in which to save the expected files; the default is rt_expected in the current folder. The path must be to a writable folder. The files generated here are the “expecteds” and are meant to be preserved between runs (to compare against), so the folder needs to be somewhere permanent. For example:
EXPECTED_PATH: V:\diffpdf5\rt_expected
path-for-actual-files
The folder in which to save the actual files; the default is rt_actual in the current folder. The path must be to a writable folder. The files generated here are the “actuals” which will be compared with the expecteds. For example:
ACTUAL_PATH: V:\tmp\rt_actual
comparison-application
The comparison application (and any arguments) to use by default. Setting this makes $DIFF available for use in each test's own DIFF entry. For example:
DIFF: C:\bin\comparepdfcmd\comparepdfcmd.exe
DIFF: diff -q -Z
Alternatively, you can specify the diff to use individually for each test (e.g., if they vary). Note that if you use a custom tool it must return an exit code (error level) of 0 when the two files compared are considered to be the same and non-zero otherwise.
user-key: user-value
Any number of user-defined keys may be set. For example:
SET: INV: E:\accounts\invoices
Now, in any entry for any test, you can use $INV, e.g., $INV\inv681.pdf, and this will be expanded as you'd expect into E:\accounts\invoices\inv681.pdf. (A combined [ENV] section is sketched after these explanations.)
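Putting these entries together, a complete [ENV] section might look like this (the application, paths, and user key are hypothetical):

[ENV]
APP: C:\Python36\python.exe report.py
EXPECTED_PATH: V:\report\rt_expected
ACTUAL_PATH: V:\tmp\rt_actual
DIFF: diff -q -Z
SET: RD: V:\report\test_data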
Each retest plan file must have at least one test. Tests are numbered from 1. Each test has this form:
[number]
NAME: name-or-description
EXITCODE: expected-exit-code
WAIT: wait-time-seconds
STDIN: stdin-filename
STDOUT: stdout-filename
APP: application-to-test argument-for-application ... argument-for-application
DIFF: comparison-application argument-for-comparison-application ... argument-for-comparison-application
Here are explanations of the arguments that can (or must) be used:
number
The test's number. Tests are numbered from 1, and leading zeros are allowed (e.g., [01]).
name-or-description
An optional name or description for the test.
expected-exit-code
The exit code (error level) that the application being tested is expected to return; the default is 0.
wait-time-seconds
How long (in seconds) to wait for the test. (A hypothetical test section using several of these entries is sketched after these explanations.)
stdin-filename
The name of a file whose contents are fed into the application's stdin as if entered by the user.
stdout-filename
The name of a file in which to capture the application's output to stdout; the captured file is then generated or compared like any other output file.
application-to-test
The application to test for this particular test. This is typically given as $APP, which expands to the application set in the [ENV] section's APP entry, as the examples show.
argument-for-application
Any arguments to pass to the application. Normally at least one argument should name an output file that uses $OUT_PATH (with \ or Unix / path separators on Windows; or / on Unix)—unless you are using the STDOUT entry.
comparison-application
The comparison application to use for this particular test. Use DIFF: no to skip comparison entirely, DIFF: rt-binary to force retest's built-in byte-by-byte comparison, or DIFF: rt-text to force its built-in text comparison.
argument-for-comparison-application
Any arguments to pass to the comparison application. (These arguments always follow any that are given in the [ENV] section's DIFF entry.) For example, here's how to use Unix diff (rather than retest's built-in text comparison) to compare text while ignoring trailing whitespace at the end of each line:
DIFF: diff -q -Z
Note that if you set DIFF: in the [ENV] section, you can use that setting in each test simply by using:
DIFF: $DIFF
(See Example #3.)
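For instance, here is a hypothetical test section that uses several of these entries together (the number, name, values, and file names are purely illustrative):

[7]
NAME: Summary report
EXITCODE: 2
WAIT: 30
APP: $APP --summary $OUT_PATH/07.txt
DIFF: $DIFF $EXPECTED_PATH/07.txt $ACTUAL_PATH/07.txt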
Note that in addition to using $OUT_PATH (which will be automatically set to either EXPECTED_PATH or ACTUAL_PATH), you can also use $HOME which will be set to your home folder (on all platforms).
[ENV]
APP: C:\Program Files\comparepdfcmd\comparepdfcmd.exe -q

[1]
NAME: Invoice check
EXITCODE: 4
APP: $APP -r $OUT_PATH\01.csv V:\pdfs\invoice_old.pdf V:\pdfs\invoice_new.pdf

[2]
NAME: Selected pages appearance check
APP: $APP -a --pages2=1-3,6-8 V:\pdfs\pages-a1-1-6.pdf V:\pdfs\pages-a2-1-3,6-8.pdf
DIFF: no
This example has two tests. (Note that all commands shown below occupy one line each but may be wrapped by the browser.)
When generating expecteds the first command line will be:
"C:\Program Files\comparepdfcmd\comparepdfcmd.exe" -q -r rt_expected\01.csv V:\pdfs\invoice_old.pdf V:\pdfs\invoice_new.pdf
and the second will be:
"C:\Program Files\comparepdfcmd\comparepdfcmd.exe" -q -a --pages2=1-3,6-8 V:\pdfs\pages-a1-1-6.pdf V:\pdfs\pages-a2-1-3,6-8.pdf
When retest is used to run and compare, the first test is expected to produce an exit code/error level of 4, and to output rt_actual\01.csv which is expected to be identical to rt_expected\01.csv. If either of these isn't true a failure will be reported. Note that retest will do a text comparison since that's the default for non-image non-JSON files.
For the second test no files are compared (due to the DIFF: no line), and the exit code is expected to be 0.
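When running and comparing, the first test's command line should be the same as when generating except that the output goes to the actuals folder (a sketch derived from the plan above):

"C:\Program Files\comparepdfcmd\comparepdfcmd.exe" -q -r rt_actual\01.csv V:\pdfs\invoice_old.pdf V:\pdfs\invoice_new.pdf

after which retest compares rt_expected\01.csv with rt_actual\01.csv using its built-in text comparison.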
[ENV]
APP: alpha.py

[01]
NAME: JSON output
APP: $APP $OUT_PATH/01.json

[02]
NAME: Binary output
APP: $APP $OUT_PATH/02.bin
DIFF: rt-binary

[03]
NAME: Text output (ignoring trailing whitespace differences)
APP: $APP $OUT_PATH/03.txt
DIFF: diff -q -Z $EXPECTED_PATH/03.txt $ACTUAL_PATH/03.txt

[04]
NAME: Captured output
STDOUT: 04.txt
APP: $HOME/bin/beta.sh

[05]
NAME: Interactive usage
STDIN: stdin05.txt
STDOUT: 05.txt
APP: $APP -i
Here, test 1 is expected to have an exit code of 0 (the default) and to produce a UTF-8 encoded JSON file. Retest's JSON comparison compares the actual JSON values and ignores any superfluous whitespace. (Force a text comparison using DIFF: rt-text if you want to compare JSON files as text.)
Test 2 is expected to produce a binary file, so we have used DIFF: rt-binary to force retest to compare byte-by-byte.
For test 3, we have chosen to use an external diff tool. Notice that for this we must use the $EXPECTED_PATH and the $ACTUAL_PATH so that we can give the external comparison tool the generated expected and the newly created actual to compare.
For test 4 we have an application that outputs to stdout so we tell retest to capture that output to a file which can then be compared.
Test 5 checks interactive usage. The input that the user is expected to enter is in the file stdin05.txt—this is fed into the application as if entered by the user. And the program's output (which is to the console, i.e., stdout) is captured into a file that is then generated or compared against. (Note that on Windows it may sometimes be necessary to use DIFF: rt-binary when using STDOUT.)
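When generating, the effect of test 5 is roughly equivalent to this shell command (a sketch; retest itself feeds stdin and captures stdout without involving the shell):

./alpha.py -i < stdin05.txt > rt_expected/05.txt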
[ENV]
APP: delta.py
DIFF: diff -q -Z
SET: TD: test_data

[30]
APP: $APP $TD/30.dat $OUT_PATH/30.txt
DIFF: $DIFF $EXPECTED_PATH/30.txt $ACTUAL_PATH/30.txt

[31]
APP: $APP -a $TD/31.dat $OUT_PATH/31.txt
DIFF: $DIFF $EXPECTED_PATH/31.txt $ACTUAL_PATH/31.txt
In this example we are using an external Unix diff tool for both tests. Because we aren't using one of retest's built-in comparisons, we must specify the two files to compare.
So, for example, when generating (retest g) the two command lines will be:

./delta.py test_data/30.dat rt_expected/30.txt
./delta.py -a test_data/31.dat rt_expected/31.txt
On Windows, they will start with delta.py of course, and the [ENV] section's APP may need to specify the interpreter, e.g.:

APP: C:\Python36\python.exe delta.py
And when testing and comparing (retest v) the command lines will be:

./delta.py test_data/30.dat rt_actual/30.txt
diff -q -Z rt_expected/30.txt rt_actual/30.txt
./delta.py -a test_data/31.dat rt_actual/31.txt
diff -q -Z rt_expected/31.txt rt_actual/31.txt
[ENV]
APP: X:\build\reporter.exe
EXPECTED_PATH: X:\build\rt_expected
ACTUAL_PATH: U:\rt_actual
DIFF: T:\bin\comparepdfcmd\comparepdfcmd.exe

[1]
NAME: Compare Words (Terms and Conditions)
APP: $APP --config=X:\build\rt_data\01.ini -o $OUT_PATH\01.pdf
DIFF: $DIFF $EXPECTED_PATH\01.pdf $ACTUAL_PATH\01.pdf

[2]
NAME: Compare Appearance (Advert)
APP: $APP --config=X:\build\rt_data\02.ini -o $OUT_PATH\02.pdf
DIFF: $DIFF -a $EXPECTED_PATH\02.pdf $ACTUAL_PATH\02.pdf
This example shows how you might automate the testing of an application that produces .pdf files that you want to compare using comparepdfcmd. If comparepdfcmd should always be run with particular options, the [ENV] section's DIFF entry would be something like this:

DIFF: T:\bin\comparepdfcmd\comparepdfcmd.exe -q
In the first test the application to test (reporter.exe) reads a configuration file and outputs a .pdf which is then compared using comparepdfcmd. The second test is similar, only the comparison is done by appearance rather than the default of comparing words.
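So, when generating, the first test's command line should be (a sketch derived from the plan above):

X:\build\reporter.exe --config=X:\build\rt_data\01.ini -o X:\build\rt_expected\01.pdf

and when testing and comparing, the output goes to U:\rt_actual\01.pdf, after which the comparison command line should be:

T:\bin\comparepdfcmd\comparepdfcmd.exe X:\build\rt_expected\01.pdf U:\rt_actual\01.pdf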