Commit 6df6bebb authored by Mathieu Giraud

Merge branch 'feature-36-interleave' into 'dev'

Feature 36, interleave commands and tests

Closes #36

See merge request !10
parents 5eb1d6d3 b34da43c
Pipeline #615 passed with stages in 11 seconds
@@ -117,12 +117,13 @@ The documentation is completed by the files in [demo/](demo/).
The [demo/cal.should](demo/cal.should) example shows
matching on several lines (`l`), counting inside lines (`w`),
ignoring whitespace differences (`b`), expecting a test to fail (`f`),
ignoring whitespace differences (`b`), expecting a test to fail (`f`)
or allowing a test to fail (`a`),
requiring fewer than or more than a given number of occurrences (`<`/`>`).
[demo/commands.should](demo/commands.should) shows that several commands can be used in the same `.should` file.
[demo/commands.should](demo/commands.should) shows that several commands can be used in the same `.should` file. Tests are flushed after each set of commands.
[demo/exit-codes.should](demo/exit-codes.should) shows how to require a particular exit code.
[demo/exit-codes.should](demo/exit-codes.should) shows how to require a particular exit code with `!EXIT_CODE`.
[demo/variables.should](demo/variables.should) shows how to define and use variables.
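For instance, a minimal `.should` file exercising some of these features could look like the following sketch (the command and the expectations are illustrative, not taken from the demos):

```shell
echo "hello world"
$ 'hello' appears in the output
: hello
$ 'mars' never appears
0: mars
$ This test is expected to fail
f: goodbye
```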
@@ -134,9 +135,10 @@ for example to launch tools like `valgrind` on a test set.
**Options and modifiers**
```shell
usage: should [-h] [--cd PATH] [--cd-same] [--launcher CMD] [--extra ARG]
[--mod MODIFIERS] [--var NAME=value] [--log] [--tap] [--xml]
[-v] [-q] [--retry]
usage: should [-h] [--version] [--cd PATH] [--cd-same] [--launcher CMD]
[--extra ARG] [--mod MODIFIERS] [--var NAME=value]
[--timeout TIMEOUT] [--shuffle] [--only-a] [--only-f] [--log]
[--tap] [--xml] [-v] [-q] [--retry]
should-file [should-file ...]
Test command-line applications through .should files
@@ -146,22 +148,31 @@ positional arguments:
optional arguments:
-h, --help show this help message and exit
--version show program's version number and exit
--cd PATH directory from which to run the test commands
--cd-same run the test commands from the same directory as the .should files
--launcher CMD launcher preceding each command (or replacing $LAUNCHER)
--extra ARG extra argument after the first word of each command (or replacing $EXTRA)
--mod MODIFIERS global modifiers (uppercase letters cancel previous modifiers)
f/F consider that the test should fail
a/A consider that the test is allowed to fail
r/R consider as a regular expression
w/W count all occurrences, even on the same line
i/I ignore case changes
b/B ignore whitespace differences as soon as there is at least one space. Implies 'r'
l/L search on all the output rather than on every line
z/Z keep leading and trailing spaces
> requires that the expression occurs strictly more times than the given number
< requires that the expression occurs strictly fewer times than the given number
--var NAME=value variable definition (then use $NAME in .should files)
--timeout TIMEOUT delay (in seconds) after which the task is stopped (default: 120)
--shuffle shuffles the tests
--retry launches again the last failed or warned tests
filter options:
--only-a launches only 'a' tests
--only-f launches only 'f' tests
output options:
--log stores the output into .log files
--tap outputs .tap files
@@ -173,12 +184,14 @@ output options:
**Output.**
By default, `should` only writes to the standard output.
The `--log` and the `--tap` options allow storing the actual output of the tested commands
as well as [`.tap` files](https://testanything.org/tap-specification.html).
The `--log`, `--tap` and `--xml` options allow storing the actual output of the tested commands
as well as [`.tap` files](https://testanything.org/tap-specification.html)
and a JUnit `should.xml` file.
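For instance, a run such as this sketch (file names illustrative):

```shell
should --log --tap --xml demo/*.should
```

stores the output of the tested commands in `.log` files, writes `.tap` files, and produces a JUnit `should.xml` report.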
**Exit code.**
`should.py` returns `0` when all the tests passed (or have been skipped, or marked as TODO with `f`).
As soon as one non-TODO test fails, it returns `1`.
`should.py` returns `0` when all the tests passed (or have been skipped, or marked as `failed-with-ALLOW` with `a`).
As soon as one regular test fails, or as soon as a test marked with `TODO` passes,
`should.py` returns `1`.
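Since the exit code follows the usual convention, `should.py` chains naturally in shell scripts or CI jobs; a minimal sketch:

```shell
should demo/*.should && echo "all tests passed"
```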
### Alternatives
...
@@ -8,13 +8,15 @@
echo -n "hello,"
$ The three commands ran successfully
$ The two commands ran successfully, but the third one, below, has not yet been run
: message
: hello
: world
0: world
# Note that all commands are actually run before the tests
# are checked on the joined output.
# Running a command after a test resets the output buffer
echo "world"
0: message
0: hello
: world
\ No newline at end of file
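A minimal sketch of this interleaving (hypothetical commands and tests, following the reset behavior described above):

```shell
echo "one"
$ Checked against the output of the first command only
: one
# This command resets the buffer: 'one' is no longer in the output
echo "two"
0: one
: two
```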
@@ -8,3 +8,8 @@ $ 'world' is in the diff
# The files differ
!EXIT_CODE: 1
# Different exit codes may be tested for different commands
diff demo/hello.should demo/hello.should
!EXIT_CODE: 0
!OPTIONS: --var MIN_VERSION=(3,5)
# A !REQUIRES directive is executed before any test.
# The .should file is taken into account only if the !REQUIRES command exits with 0.
!REQUIRES: python3 -c "import sys; sys.exit(0 if sys.version_info >= $MIN_VERSION else 1)"
# Note that !REQUIRES directives may also use variables
!OPTIONS: --var MIN_VERSION=(3,5)
# Note that !REQUIRES directives may also use variables defined above
# We test here a function that was introduced in Python 3.5.
python3 -c "import math; print(math.isclose(math.pi, 3.14, rel_tol=0.01))"
...
# These tests will run only on a terminal supporting UTF-8.
# When the command does not output in utf-8, tests will probably fail.
!REQUIRES: locale | grep UTF-8
#😃 .should files are utf-8 files.
# 👉 Unicode characters can thus be used in comments, in test names, and in test expressions.
echo "é" | iconv -f utf8 -t latin1
echo "✔ á"
$ Check acute a (á)
@@ -13,10 +14,7 @@ $ Check acute a (á)
$ Check some emoji (✔)
:✔
# When the command does not output in utf-8, tests will probably fail.
echo "✔ á"
echo "é" | iconv -f utf8 -t latin1
$ Check acute e (é)
f:é
...
@@ -680,9 +680,11 @@ class TestSuite():
self.stdin = []
self.stdout = []
self.test_lines = []
self.skip = False
self.status = None
self.modifiers = modifiers
self.variables = []
self.status = None
self.stats = Stats('test')
self.source = None
self.cd = cd
@@ -701,8 +703,15 @@ class TestSuite():
def test(self, should_lines, variables=[], verbose=0, colorize=True, only=None):
name = ''
this_cmd_continues = False
for l in should_lines:
current_cmd = '' # multi-line command
current_cmds = [] # commands since the last command run
current_tests = [] # tests since the last command run
self.only = only
self.variables_all = self.variables + variables
# Iterate over should_lines,
# then use DIRECTIVE_SCRIPT once to flush the last tests
for l in list(should_lines) + [DIRECTIVE_SCRIPT]:
l = l.lstrip().rstrip(ENDLINE_CHARS)
if not l:
@@ -715,16 +724,29 @@ class TestSuite():
# Directive -- Requires
if l.startswith(DIRECTIVE_REQUIRES):
self.requires_cmd = l[len(DIRECTIVE_REQUIRES):].strip()
self.variables_all = self.variables + variables
requires_cmd = self.cmd_variables_cd(self.requires_cmd, verbose, colorize)
p = subprocess.Popen(requires_cmd, shell=True, stdout=subprocess.DEVNULL, stderr=subprocess.PIPE)
self.requires = (p.wait() == 0)
self.requires_stderr = [l.decode(errors='replace') for l in p.stderr.readlines()]
if not self.requires:
self.skip_set('Condition is not met: %s' % self.requires_cmd, verbose)
if verbose > 0:
print(color(ANSI.CYAN, ''.join(self.requires_stderr), colorize))
continue
# Directive -- No launcher
if l.startswith(DIRECTIVE_NO_LAUNCHER):
self.use_launcher = False
if replace_variables(VAR_LAUNCHER, self.variables_all):
self.skip_set('%s while %s is given' % (DIRECTIVE_NO_LAUNCHER, VAR_LAUNCHER), verbose)
continue
# Directive -- No extra options
if l.startswith(DIRECTIVE_NO_EXTRA):
self.variables = [(VAR_EXTRA, '')] + self.variables
self.variables_all = self.variables + variables
continue
# Directive -- Source
@@ -736,6 +758,7 @@ class TestSuite():
if l.startswith(DIRECTIVE_OPTIONS):
opts, unknown = options.parse_known_args(l[len(DIRECTIVE_OPTIONS):].split())
self.variables = populate_variables(opts.var) + self.variables
self.variables_all = self.variables + variables
if opts.mod:
self.modifiers += ''.join(opts.mod)
continue
@@ -763,50 +786,51 @@ class TestSuite():
if RE_TEST.search(l):
pos = l.find(TOKEN_TEST)
modifiers, expression = l[:pos], l[pos+1:]
self.tests.append(TestCase(self.modifiers + modifiers, expression, name))
test = TestCase(self.modifiers + modifiers, expression, name)
current_tests.append(test)
self.tests.append(test)
continue
# Command
# Command: flush and check the previous tests
# If the command is empty (for example at the end), launch the previous commands even when there are no tests
l = l.strip()
next_cmd_continues = l.endswith(CONTINUATION_CHAR)
if next_cmd_continues:
l = l[:-1]
if this_cmd_continues:
self.cmds[-1] += l
else:
self.cmds.append(l)
if current_tests or not l:
this_cmd_continues = next_cmd_continues
# Test current_cmds with current_tests
if not self.skip:
test_lines, exit_test = self.launch(current_cmds, verbose, colorize)
current_tests.append(exit_test)
self.test_lines += test_lines
self.tests_on_lines(current_tests, test_lines, verbose, colorize)
self.debug(self.status, "\n".join(current_cmds), test_lines, verbose, colorize)
else:
self.skip_tests(current_tests)
# Test
self.only = only
self.variables_all = self.variables + variables
if verbose > 1:
print_variables(self.variables_all)
current_cmds = []
current_tests = []
self.status = None
# Command
if not l:
continue
if self.requires_cmd:
requires_cmd = self.cmd_variables_cd(self.requires_cmd, verbose, colorize)
p = subprocess.Popen(requires_cmd, shell=True, stdout=subprocess.DEVNULL, stderr=subprocess.PIPE)
self.requires = (p.wait() == 0)
self.requires_stderr = [l.decode(errors='replace') for l in p.stderr.readlines()]
if verbose > 0:
print(color(ANSI.CYAN, ''.join(self.requires_stderr), colorize))
next_cmd_continues = l.endswith(CONTINUATION_CHAR)
if next_cmd_continues:
l = l[:-1]
current_cmd += l
if not self.requires:
self.skip_all('Condition is not met: %s' % self.requires_cmd, verbose)
return self.status
if not next_cmd_continues:
current_cmds.append(current_cmd)
self.cmds.append(current_cmd)
current_cmd = ''
if not self.use_launcher:
if replace_variables(VAR_LAUNCHER, self.variables_all):
self.skip_all('%s while %s is given' % (DIRECTIVE_NO_LAUNCHER, VAR_LAUNCHER), verbose)
return self.status
self.test_lines += self.launch(self.cmds, verbose, colorize)
self.tests_on_lines(self.tests, self.test_lines, verbose, colorize)
self.debug(self.status, "\n".join(self.cmds), self.test_lines, verbose, colorize)
# end of loop on should_lines
if verbose > 1:
print_variables(self.variables_all)
return self.status
@@ -825,12 +849,15 @@ class TestSuite():
try:
self.exit_code = p.wait(self.timeout)
self.tests.append(ExternalTestCase('Exit code is %d' % self.expected_exit_code, self.exit_code == self.expected_exit_code, str(self.exit_code)))
exit_test = ExternalTestCase('Exit code is %d' % self.expected_exit_code, self.exit_code == self.expected_exit_code, str(self.exit_code))
except subprocess.TimeoutExpired:
self.exit_code = None
self.tests.append(ExternalTestCase('Exit code is %d' % self.expected_exit_code, SKIP, 'timeout after %s seconds' % self.timeout))
exit_test = ExternalTestCase('Exit code is %d' % self.expected_exit_code, SKIP, 'timeout after %s seconds' % self.timeout)
p.kill()
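# whether the command finished or timed out, record the exit-code test once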
self.tests.append(exit_test)
self.status = combine_status(self.status, exit_test.status)
if self.elapsed_time is None:
self.elapsed_time = 0
self.elapsed_time += time.time() - start_time
@@ -845,7 +872,7 @@ class TestSuite():
if verbose > 0:
self.print_stderr(colorize)
return open(self.source).readlines() if self.source else self.stdout
return open(self.source).readlines() if self.source else self.stdout, exit_test
@@ -875,13 +902,16 @@ class TestSuite():
print(' stderr --> %s lines' % len(self.stderr))
print(color(ANSI.CYAN, ''.join(self.stderr), colorize))
def skip_all(self, reason, verbose=1):
def skip_set(self, reason, verbose=1):
if verbose > 0:
print('Skipping tests: %s' % reason)
for test in self.tests:
self.skip = True
self.status = combine_status(self.status, SKIP)
def skip_tests(self, tests):
for test in tests:
test.status = SKIP
self.stats.up(test.status)
self.status = SKIP
def debug(self, status, cmd, test_lines, verbose, colorize):
if status in FAIL_STATUS and verbose <= 0:
...
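The interleaving above relies on a common pattern: a sentinel element (here `DIRECTIVE_SCRIPT`) is appended to the input so that the trailing batch of commands and tests is flushed by the same code path as every other batch. A standalone sketch of that pattern (names and grouping logic are illustrative, not the actual `should.py` API):

```python
FLUSH = object()  # sentinel playing the role of DIRECTIVE_SCRIPT

def batches(lines):
    """Group command lines with the tests that follow them;
    a command after one or more tests starts a new batch."""
    cmds, tests = [], []
    for l in list(lines) + [FLUSH]:
        if l is FLUSH or l.startswith('cmd'):
            if tests or l is FLUSH:
                if cmds:
                    yield cmds, tests  # flush the previous batch
                cmds, tests = [], []
            if l is not FLUSH:
                cmds.append(l)
        else:
            tests.append(l)

for cmds, tests in batches(['cmd a', 'test 1', 'cmd b', 'cmd c', 'test 2']):
    print(cmds, tests)
# ['cmd a'] ['test 1']
# ['cmd b', 'cmd c'] ['test 2']
```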