File: test-framework.rst

#######################
SCons Testing Framework
#######################

.. contents::
   :local:

Introduction
============

SCons uses extensive automated tests to ensure quality. The primary goal
is that users be able to upgrade from version to version without
any surprise changes in behavior.

In general, no change goes into SCons unless it has one or more new
or modified tests that demonstrably exercise the bug being fixed or
the feature being added.  There are exceptions to this guideline, but
they should be just that, *exceptions*.  When in doubt, make sure
it's tested.

Test organization
=================

There are three types of SCons tests:

*End-to-End Tests*
   End-to-end tests of SCons are Python scripts (``*.py``) underneath the
   ``test/`` subdirectory.  They use the test infrastructure modules in
   the ``testing/framework`` subdirectory. They set up small complete
   projects and call SCons to execute them, checking that the behavior is
   as expected.

*Unit Tests*
   Unit tests for individual SCons modules live underneath the
   ``SCons/`` subdirectory and have the same base name as the module
   to be tested, with ``Tests`` appended to the basename. For example,
   the unit tests for the ``Builder.py`` module are in the
   ``BuilderTests.py`` script.  Unit tests tend to be based on assertions;
   a short sketch in this style appears after this list.

*External Tests*
   For the support of external Tools (in the form of packages, preferably),
   the testing framework is extended so it can run in standalone mode.
   You can start it from the top-level directory of your Tool's source tree,
   where it then finds all Python scripts (``*.py``) underneath the local
   ``test/`` directory.  This implies that Tool tests have to be kept in
   a directory named ``test``, like for the SCons core.
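
As a small illustration of the unit-test style, a test typically imports
the SCons module under consideration and asserts on its behavior directly
(this is only a sketch; the module and function were chosen arbitrarily,
not taken from a real test file)::

   import unittest

   import SCons.Util


   class FlattenTestCase(unittest.TestCase):
       def test_flatten_nested_lists(self):
           # SCons.Util.flatten() turns nested sequences into a flat list
           self.assertEqual(SCons.Util.flatten([1, [2, [3]]]), [1, 2, 3])


   if __name__ == "__main__":
       unittest.main()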


Contrasting end-to-end and unit tests
-------------------------------------

In general, end-to-end tests verify hardened parts of the public interface:
interfaces documented in the manpage that a user might use in their
project. These cannot be broken (of course, errors can be corrected,
though sometimes a transition period may be required).
Unit tests are now primarily used for testing internal interfaces, which do
not themselves carry direct API guarantees.  An example could be using
an end-to-end test to verify that things added by ``env.Append()`` actually
appear correctly in issued command lines, while unit tests check that the
internal routines ``Append()`` makes use of behave correctly for a variety
of inputs. If a reported error can be tested by adding a new
case to an existing unit test, by all means, do that, as it tends to be
simpler and cleaner. On the other hand, reported problems that come with
a reproducer are by their nature more like an e2e test - this is something
a user has tried in their SConscripts that didn't have the expected result.

End-to-end tests are by their nature harder to debug. For the unit
tests, you're running a test program directly, so you can drop straight
into the Python debugger by calling ``runtest.py`` with the ``-d / --debug``
option and setting breakpoints to help examine the internal state as
the test is running. The e2e tests are each mini SCons projects executed
by an instance of scons in a subprocess, and the Python debugger isn't
particularly useful in this context.
There's a separate section of this document on that topic: see `Debugging
end-to-end tests`_.


Naming conventions
------------------

The end-to-end tests, more or less, follow this naming convention:

#. All tests end with a ``.py`` suffix.
#. In the *General* form we use

   ``Feature.py``
      for the test of a specified feature; try to keep this description
      reasonably short
   ``Feature-x.py``
      for the test of a specified feature using option ``x``
#. The *command line option* tests take the form

   ``option-x.py``
      for a lower-case single-letter option
   ``option--X.py``
      upper-case single-letter option (with an extra hyphen, so the
      file names will be unique on case-insensitive systems)
   ``option--lo.py``
      long option; you can abbreviate the long option name to a
      few characters (the abbreviation must be unique, of course).
#. Use a suitably named subdirectory if there's a whole group of
   related test files.


Testing Architecture
====================

The test framework provides a lot of useful functions for use within a
test program. This includes test setup, parameterization, running tests,
looking at results and reporting outcomes. You can run a particular test
directly by making sure the Python interpreter can find the framework::

    $ PYTHONPATH=testing/framework python SCons/ActionTests.py

The framework does *not* provide facilities for handling a collection of
test programs. For that, SCons provides a driver script ``runtest.py``.
Help is available through the ``-h`` option::

   $ python runtest.py -h

You run tests from the top-level source directory.
To simply run all the tests, use the ``-a`` option::

   $ python runtest.py -a

You may specifically list one or more tests to be run. ``runtest``
considers all arguments it doesn't recognize as options to be
part of the test list::

   $ python runtest.py SCons/BuilderTests.py
   $ python runtest.py -t test/option/option-j.py test/option/option-p.py

Folder names work in the test list as well, so you can do::

   $ python runtest.py test/SWIG

to run all SWIG tests (and no others).

You can also use the ``-f`` option to execute just the tests listed in
a test list file::

   $ cat testlist.txt
   test/option/option-j.py
   test/option/option-p.py
   $ python runtest.py -f testlist.txt

List one test file per line. Lines that begin with the
comment mark ``#`` will be ignored (this lets you quickly change the
test list by commenting out a few tests in the testlist file).

If more than one test is run, the ``runtest.py`` script prints a summary
and count of tests that failed or yielded no result (skips). Skipped
tests do not count towards considering the overall run to have failed,
unless the ``--no-ignore-skips`` option is used. Passed tests can be
listed using the ``--passed`` option, though this tends to make the
result section at the end quite noisy, which is why it's off by default.
Also by default, ``runtest.py`` prints a running count and completion
percentage message for each test case as it finishes, along with the name
of the test file.  You can quiet this output:
have a look at the ``-q``, ``-s`` and ``-k`` options.

Since a test run can produce a lot of output that you may want to examine
later, there is an option ``-o FILE`` to save the same output that went
to the screen to a file named by ``FILE``. There is also an option to
save the results in a custom XML format.
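
For example, to run all the SWIG tests while keeping a copy of the screen
output in a log file (the log file name here is just an example)::

   $ python runtest.py -o /tmp/runtest.log test/SWIG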

The above invocations all test against the SCons files in the current
directory (that is, in ``./SCons``), and do not require that a packaging
build of SCons be performed first.  This is the most common mode: make
some changes, and test the effects in place.  The ``runtest.py`` script
supports additional options to run tests against unpacked packages in the
``build/test-*/`` subdirectories.

If you are testing a separate Tool outside of the SCons source tree,
call the ``runtest.py`` script in *external* (stand-alone) mode::

   $ python ~/scons/runtest.py -e -a

This ensures that the testing framework doesn't try to access SCons
classes needed for some of the *internal* test cases.

Note that as each test is run, it is executed in a temporary directory
created just for that test, which is by default removed when the
test is complete.  This ensures that your source directories
don't get clobbered with temporary files and changes from the test runs.
If the test itself needs to know the directory, it can be obtained
as ``test.workdir``, or more commonly by calling ``test.workpath()``,
a function which takes a path-component argument and returns the path to
that path-component in the testing directory.
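
For example, a test that expects the build to create a file in the test
directory could check for it like this (the file name is purely
illustrative)::

   logpath = test.workpath('build.log')   # absolute path inside the test dir
   test.must_exist(logpath)               # fails the test if the file is absent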

The use of an ephemeral test directory means that you can't simply change
into a directory to debug after a test has gone wrong.
For a way around this, check out the ``PRESERVE`` environment variable.
It can be seen in action in `How to convert old tests to use fixtures`_ below.

Not running tests
=================

If you simply want to check which tests would get executed, you can call
the ``runtest.py`` script with the ``-l`` option combined with whichever
test selection options (see below) you intend to use. Example::

   $ python runtest.py -l test/scons-time

``runtest.py`` also has a ``-n`` option, which prints the command line for
each test which would have been run, but doesn't actually run them::

   $ python runtest.py -n -a

Selecting tests
===============

When started in *standard* mode::

   $ python runtest.py -a

``runtest.py`` assumes that it is run from the SCons top-level source
directory.  It then dives into the ``SCons`` and ``test`` directories,
where it tries to find filenames

``*Tests.py``
   for the ``SCons`` directory (unit tests)

``*.py``
   for the ``test`` directory (end-to-end tests)

When using fixtures, you may end up in a situation where you have
supporting Python script files in a subdirectory which shouldn't be
picked up as test scripts of their own.  There are two options here:

#. Add a file with the name ``sconstest.skip`` to your subdirectory. This
   tells ``runtest.py`` to skip the contents of the directory completely.
#. Create a file ``.exclude_tests`` in each directory in question, and in
   it list line-by-line the files to exclude from testing.

The same rules apply when testing external Tools when using the ``-e``
option.
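
For example, an ``.exclude_tests`` file that keeps two supporting scripts
(names invented for illustration) out of the test collection would simply
list them::

   $ cat test/MyTool/.exclude_tests
   generate_input.py
   common_checks.py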


Example End-to-End test script
==============================

To illustrate how the end-to-end test scripts work, let's walk through
a simple *Hello, world!* example::

    #!python
    import TestSCons

    test = TestSCons.TestSCons()

    test.write('SConstruct', """\
    Program('hello.c')
    """)

    test.write('hello.c', """\
    #include <stdio.h>

    int
    main(int argc, char *argv[])
    {
        printf("Hello, world!\\n");
        return 0;
    }
    """)

    test.run()

    test.run(program='./hello', stdout="Hello, world!\n")

    test.pass_test()

Explanation
-----------

``import TestSCons``
   Imports the main infrastructure for SCons tests.  This is
   normally the only part of the infrastructure that needs importing.
   Sometimes other Python modules are necessary or helpful, and get
   imported before this line.

``test = TestSCons.TestSCons()``
   Initializes an object for testing.  A fair amount happens under
   the covers when the object is created, including:

   * A temporary directory is created for all the in-line files that will
     get created.
   * The temporary directory's removal is arranged for when
     the test is finished.
   * The test does ``os.chdir()`` to the temporary directory.

``test.write('SConstruct', ...)``
   This line creates an ``SConstruct`` file in the temporary directory,
   to be used as input to the ``scons`` run(s) that we're testing.
   Note the use of the Python triple-quoted string for the contents
   of the ``SConstruct`` file (and see the next section for an
   alternative approach).

``test.write('hello.c', ...)``
   This line creates a ``hello.c`` file in the temporary directory.
   Note that we have to escape the newline in the
   ``"Hello, world!\\n"`` string so that it ends up as a single
   backslash in the ``hello.c`` file on disk.

``test.run()``
   This actually runs SCons.  Like the object initialization, things
   happen under the covers:

   * The exit status is verified; the test exits with a failure if
     the exit status is not zero.
   * The error output is examined, and the test exits with a failure
     if there is any.

``test.run(program='./hello', stdout="Hello, world!\n")``
   This shows use of the ``TestSCons.run()`` method to execute a program
   other than ``scons``, in this case the ``hello`` program we just
   built.  The ``stdout=`` keyword argument also tells the
   ``TestSCons.run()`` method to fail if the program output does not
   match the expected string ``"Hello, world!\n"``.  Like the previous
   ``test.run()`` line, it will also fail the test if the exit status is
   non-zero, or there is any error output.

``test.pass_test()``
   This is always the last line in a test script.  If we get to
   this line, it means we haven't bailed out on a failure or skip,
   so the result was good. It prints ``PASSED``
   on the screen and makes sure we exit with a ``0`` status to indicate
   the test passed.  As a side effect of destroying the ``test`` object,
   the created temporary directory will be removed.

Working with fixtures
=====================

In the simple example above, the files to set up the test are created
on the fly by the test program. We give a filename to the ``TestSCons.write()``
method, plus a string holding its contents, and it gets written to the test
directory right before starting.

This simple technique can be seen throughout most of the end-to-end
tests as it was the original technique provided to test developers,
but it is no longer the preferred way to write a new test.
To develop this way, you first need to create the necessary files and
get them to work, then convert them to an embedded string form, which may
involve lots of extra escaping.  These embedded files are then tricky
to maintain.  As a test grows to include multiple steps, it becomes harder to
read, since many of the embedded strings aren't quite the final files,
and the volume of test code obscures the flow of the testing steps.
Additionally, as SCons moves more to the use of automated code checkers
and formatters to detect problems and keep a standard coding style for
better readability, note that such tools don't look inside strings
for code, so the effect is lost on them.

In testing parlance, a fixture is a repeatable test setup.  The SCons
test harness allows the use of saved files or directories to be used
in that sense: *the fixture for this test is foo*, instead of writing
a whole bunch of strings to create files. Since these setups can be
reusable across multiple tests, the *fixture* terminology applies well.

Note: fixtures must not be treated by SCons as runnable tests. To exclude
them, see instructions in the above section named `Selecting tests`_.

Directory fixtures
------------------

The test harness method ``dir_fixture(srcdir, [dstdir])``
copies the contents of the specified directory ``srcdir`` from
the directory of the called test script to the current temporary test
directory.  The ``srcdir`` name may be a list, in which case the elements
are concatenated into a path first.  The optional ``dstdir`` is
used as a destination path under the temporary working directory.
``dstdir`` is created automatically if it does not already exist.

If ``srcdir`` represents an absolute path, it is used as-is.
Otherwise, if the harness was invoked with the environment variable
``FIXTURE_DIRS`` set (which ``runtest.py`` does by default),
the test instance makes that list of directories available
as ``self.fixture_dirs``, and each of them is additionally searched for
a directory with the name given by ``srcdir``.

A short syntax example::

   test = TestSCons.TestSCons()
   test.dir_fixture('image')
   test.run()

would copy all files and subdirectories from the local ``image`` directory
to the temporary directory for the current test, then run it.

To see a real example for this in action, refer to the test named
``test/packaging/convenience-functions/convenience-functions.py``.

File fixtures
-------------

The method ``file_fixture(srcfile, [dstfile])``
copies the file ``srcfile`` from the directory of the called script
to the temporary test directory.
The optional ``dstfile`` is used as a destination file name
under the temporary working directory, unless it is an absolute path name.
If ``dstfile`` includes directory elements, they are
created automatically if they don't already exist.
The ``srcfile`` and ``dstfile`` parameters may each be a list,
which will be concatenated into a path.

If ``srcfile`` represents an absolute path, it is used as-is. Otherwise,
any passed in fixture directories are used as additional places to
search for the fixture file, as for the ``dir_fixture`` case.

With the following code::

   test = TestSCons.TestSCons()
   test.file_fixture('SConstruct')
   test.file_fixture(['src', 'main.cpp'], ['src', 'main.cpp'])
   test.run()

The files ``SConstruct`` and ``src/main.cpp`` are copied to the
temporary test directory. Notice the second ``file_fixture`` call
preserves the path of the original; otherwise, ``main.cpp``
would have been placed in the top level of the test directory.

Again, a reference example can be found in the current revision
of SCons, see ``test/packaging/sandbox-test/sandbox-test.py``.

For even more examples you should check out one of the external Tools,
e.g. the *Qt5* Tool at
https://github.com/SCons/scons-contrib/tree/master/sconscontrib/SCons/Tool/qt5.
There are many other tools in the contrib repository,
and you can also visit the SCons Tools
Index at https://github.com/SCons/scons/wiki/ToolsIndex for a complete
list of available Tools, though not all may have tests yet.

How to convert old tests to use fixtures
----------------------------------------

Tests using the inline ``TestSCons.write()`` method can fairly easily be
converted to the fixture based approach. For this, we need to get at the
files as they are written to each temporary test directory,
which we can do by taking advantage of an existing debugging aid,
namely that ``runtest.py`` checks for the existence of an environment
variable named ``PRESERVE``. If it is set to a non-zero value, the testing
framework preserves the test directory instead of deleting it, and prints
a message about its name to the screen.

So, you should be able to give the command::

   $ PRESERVE=1 python runtest.py test/packaging/sandbox-test.py

assuming Linux and a bash-like shell. For a Windows ``cmd`` shell, use
``set PRESERVE=1`` (that will leave it set for the duration of the
``cmd`` session, unless manually cleared).

The output will then look something like this::

   1/1 (100.00%) /usr/bin/python test/packaging/sandbox-test.py
   PASSED
   preserved directory /tmp/testcmd.4060.twlYNI

You can now copy the files from that directory to your new
*fixture* directory. Then, in the test script you simply remove all the
tedious ``TestSCons.write()`` statements and replace them with a single
``TestSCons.dir_fixture()`` call.

For more complex testing scenarios you can use ``file_fixture`` with
the optional second argument (or the keyword arg ``dstfile``) to assign
a name to the file being copied.  For example, some tests need to
write multiple ``SConstruct`` files across the full run.
These files can be given different names in the source (perhaps using a
suffix to distinguish them), and then be successively copied to the
final name as needed::

   test.file_fixture('fixture/SConstruct.part1', 'SConstruct')
   # more setup, then run test
   test.file_fixture('fixture/SConstruct.part2', 'SConstruct')
   # run new test


When not to use a fixture
-------------------------

Note that some files are not appropriate for use in a fixture as-is:
fixture files should be static. If the creation of the file involves
interpolating data discovered during the run of the test script,
that process should stay in the script.  Here is an example of this
kind of usage that does not lend itself easily to a fixture::

   import TestSCons
   _python_ = TestSCons._python_

   test.write('SConstruct', f"""
   cc = Environment().Dictionary('CC')
   env = Environment(
       LINK=r'{_python_} mylink.py',
       LINKFLAGS=[],
       CC=r'{_python_} mycc.py',
       CXX=cc,
       CXXFLAGS=[],
   )
   env.Program(target='test1', source='test1.c')
   """

Here the value of ``_python_`` from the test program is
pasted in via f-string formatting. A fixture would be hard to use
here because we don't know the value of ``_python_`` until runtime
(also note that as it will be an absolute pathname, it's entered using
Python raw string notation to avoid interpretation problems on Windows,
where the path separator is a backslash).

The other files created in this test may still be candidates for
use as fixture files, however.


Debugging end-to-end tests
==========================

The end-to-end tests are hand-crafted SCons projects, so testing
involves running an instance of scons with those inputs. The
tests treat the SCons invocation as a *black box*,
usually looking for *external* effects of the test - targets are
created, created files have expected contents, files properly
removed on clean, etc.  They often also look for
the flow of messages from SCons.

Simple tricks like inserting ``print`` statements in the SCons code
itself don't really help as they end up disrupting those external
effects (e.g. ``test.run(stdout="Some text")``, but with the
``print``, ``stdout`` contains the extra print output and the
result doesn't match).

Even more irritatingly, added text can cause other tests to fail and
obscure the error you're looking for.  Say you have three different
tests in a script exercising different code paths for the same feature,
and the third one is unexpectedly failing. You add some debug prints to
the affected part of scons, and now the first test of the three starts
failing, aborting the test run before it even gets to the third test -
the one you were trying to debug.

Still, there are some techniques to help debugging.

The first step should be to run the tests so the harness
emits more information, without forcing more information into
the test stdout/stderr which will confuse result evaluation.
``runtest.py`` has several levels of verbosity which can be used
for this purpose::

   $ python runtest.py --verbose=2 test/foo.py

You can also use the internal
``SCons.Debug.Trace()`` function, which prints output to
``/dev/tty`` on Linux/UNIX systems and ``con`` on Windows systems,
so you can see what's going on without contributing to the
captured stdout/stderr and upsetting the test expectations.
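
A minimal sketch of such a trace call, dropped temporarily into the SCons
code being investigated (the messages and the optional trace-file argument
are only examples; see ``SCons/Debug.py`` for the exact signature)::

   from SCons.Debug import Trace

   Trace("reached the interesting spot\n")        # goes to /dev/tty (or con)
   Trace("more detail here\n", "/tmp/trace.log")  # or direct it to a file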

If you do need to add informational messages in scons code
to debug a problem, you can use logging and send the messages
to a file instead, so they don't interrupt the test expectations.
Or write directly to a trace file of your choosing.
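
For example, a throwaway setup along these lines (the file name is
arbitrary) keeps diagnostic messages entirely out of stdout/stderr::

   import logging

   logging.basicConfig(filename='/tmp/scons-debug.log', level=logging.DEBUG)
   logging.debug("reached the code path being investigated")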

Part of the technique discussed in the section
`How to convert old tests to use fixtures`_ can also be helpful
for debugging purposes.  If you have a failing test, try::

   $ PRESERVE=1 python runtest.py test/failing-test.py

You can now go to the preserved directory reported by this run and invoke
scons manually (with appropriate arguments matching what the test did)
to see the results without the presence of the test infrastructure which
would otherwise consume output you may be interested in. In this case,
adding debug prints may be more useful.

There are related variables ``PRESERVE_PASS``, ``PRESERVE_FAIL`` and
``PRESERVE_NORESULT`` that preserve the directory only if the test result
was the indicated one, which is helpful if you're trying to work with
multiple tests showing an unusual result.

From a Windows ``cmd`` shell, you will have to set the environment
variable first; it doesn't work on a single line like the example above for
POSIX-style shells.


Test infrastructure
===================

The main end-to-end test API is defined in the ``TestSCons`` class.
``TestSCons`` is a subclass of ``TestCommon``,
which is a subclass of ``TestCmd``.
``TestSCons`` provides the support for running an instance of SCons
during the test.

The unit tests do not run an instance of SCons separately, but instead
import the modules of SCons that they intend to test. Those tests
should use the ``TestCmd`` class - it is intended for runnable scripts.

Those classes are defined in Python files of the same name
in ``testing/framework``.
Start in ``testing/framework/TestCmd.py`` for the base API definitions, like how
to create files (``test.write()``) and run commands (``test.run()``).

The match functions work like this:

``TestSCons.match_re``
   match each line with an RE

   * Splits the lines into a list (unless they already are)
   * splits the REs at newlines (unless already a list)
     and puts ``^..$`` around each
   * then each RE must match the corresponding line.  This means there
     must be as many REs as lines.

``TestSCons.match_re_dotall``
   match all the lines against a single RE

   * Joins the lines with newline (unless already a string)
   * joins the REs with newline (unless already a string) and puts ``^..$``
     around the whole thing
   * then the whole thing must match, using the Python ``re.DOTALL`` flag.

Use them in a test like this::

   test.run(..., match=TestSCons.match_re, ...)

or::

   test.must_match(..., match=TestSCons.match_re, ...)
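
As an illustration (the expected text is invented, not real SCons output),
a check that the output contains an install line somewhere could use the
*dotall* variant with a single RE wrapped in ``.*``::

   expect = r""".*Install file: "\S+" as "\S+".*"""
   test.run(arguments='install', stdout=expect, match=TestSCons.match_re_dotall)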

Often you want to supply arguments to SCons when it is invoked
to run a test, which you can do using an *arguments* parameter::

   test.run(arguments="-O -v FOO=BAR")

One caveat here: the way the parameter is processed is unavoidably
different from typing on the command line - if you need it not to
be split on spaces, pre-split it yourself, and pass the list, like::

   test.run(arguments=["-f", "SConstruct2", "FOO=Two Words"])


Avoiding tests based on tool existence
======================================

For many tests, if the tool being tested is backed by an external program
which is not installed on the machine under test, it may not be worth
proceeding with the test. For example, it's hard to test compiling code with
a C compiler if no C compiler exists. In this case, the test should be
skipped.

Here's a simple example for end-to-end tests::

   intelc = test.detect_tool('intelc', prog='icpc')
   if not intelc:
       test.skip_test("Could not load 'intelc' Tool; skipping test(s).\n")

See ``testing/framework/TestSCons.py`` for the ``detect_tool()`` method.
It calls the tool's ``generate()`` method, and then looks for the given
program (tool name by default) in ``env['ENV']['PATH']``.

The ``where_is()`` method can be used to look for programs that
do not have tool specifications. The existing test code
has many examples of using either or both of these to detect
whether it is worth even proceeding with a test.
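
A brief sketch of the ``where_is()`` style (the program name is just an
example)::

   gcc = test.where_is('gcc')
   if not gcc:
       test.skip_test("Could not find 'gcc'; skipping test(s).\n")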

For the unit tests, there are decorators for conditional skipping and
other actions that will produce the correct output display and statistics
in abnormal situations.

``@unittest.skip(reason)``
   Unconditionally skip the decorated test.
   ``reason`` should describe why the test is being skipped.

``@unittest.skipIf(condition, reason)``
   Skip the decorated test if ``condition`` is true.

``@unittest.skipUnless(condition, reason)``
   Skip the decorated test unless ``condition`` is true.

``@unittest.expectedFailure``
   Mark the test as an expected failure.
   If the test fails it will be considered a success.
   If the test passes, it will be considered a failure.

You can also directly call ``testcase.skipTest(reason)``.
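
A short sketch of how these might appear in a unit-test file (the test
case, conditions and messages are invented for illustration)::

   import os
   import shutil
   import unittest


   class MyToolTestCase(unittest.TestCase):
       @unittest.skipIf(os.name == 'nt', "test relies on POSIX path handling")
       def test_posix_paths(self):
           self.assertEqual(os.path.join('dir', 'file'), 'dir/file')

       @unittest.skipUnless(shutil.which('gcc'), "gcc is not installed")
       def test_with_real_compiler(self):
           ...  # exercise code that needs a real gcc here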

Note that it is usually possible to test at least part of the operation of
a tool without the underlying program.  Tools are responsible for setting up
construction variables and having the right builders, scanners and emitters
plumbed into the environment.  These things can be tested by mocking the
behavior of the executable.  Many examples of this can be found in the
``test`` directory. See for example ``test/subdivide.py``.

Testing DOs and DONTs
=====================

There's no question that having to write tests in order to get a change
approved - even an apparently trivial change - does make it a little harder
to contribute to the SCons code base - but the requirement to have features
and bugfixes testable is a necessary part of ensuring SCons quality.
Thinking of SCons development in terms of the red/green model from
Test Driven Development should make things a little easier.

If you are working on an SCons bug, try to come up with a simple
reproducer first.  Bug reports (even your own!) are often like *I tried
to do this but it surprisingly failed*, and a reproducer is normally an
``SConstruct`` along with, probably, some supporting files such as source
files, data files, subsidiary SConscripts, etc.  Try to make this example
as simple and clean as possible.  No, this isn't necessarily easy to do,
but winnowing down what triggers a problem and removing the stuff that
doesn't actually contribute to triggering the problem is a step that
lets you (and later readers) more clearly understand what is going on.
You don't have to turn this into a formal testcase yet, but keep this
reproducer around, and document with it what you expect to happen,
and what actually happens.  This material will help produce an E2E
test later, and this is something you *may* be able to get help with,
if the way the tests are usually written and the test harness proves
too confusing.  With a clean test in hand (make sure it's failing!)
you can go ahead and code up a fix and make sure it passes with the fix
in place.  Jumping straight to a fix without working on a testcase like
this will often lead to a disappointing *how do I come up with a test
so the maintainer will be willing to merge* phase. Asking questions on
a public forum can be productive here.

E2E-specific suggestions:

* Do not require the use of an external tool unless necessary.
  Usually the SCons behavior is the thing we want to test,
  not the behavior of the external tool. *Necessary* is not a precise term -
  sometimes it would be too time-consuming to write a script to mock
  a compiler with an extensive set of options, and sometimes it's
  not a good idea to assume you know what all those will do vs what
  the real tool does; there may be other good reasons for just going
  ahead and calling the external tool.
* If using an external tool, be prepared to skip the test if it is unavailable.
* Do not combine tests that need an external tool with ones that
  do not - split these into separate test files. There is no concept
  of partial skip for e2e tests, so if you successfully complete seven
  of eight tests, and then come to a conditional "skip if tool missing"
  or "skip if on Windows", and that branch is taken, then the
  whole test file ends up skipped, and the seven that ran will
  never be recorded.  Some tests follow the convention of creating a
  second test file with the ending ``-live`` for the part that requires
  actually running the external tool.
* In testing, *fail fast* is not always the best policy - if you can think
  of many scenarios that could go wrong and they are all run linearly in
  a single test file, then you only hear about the first one that fails.
  In some cases it may make sense to split them out a bit more, so you
  can see several fails at once, which may show a helpful failure pattern
  you wouldn't spot from a single fail.
* Use test fixtures where it makes sense, and in particular, try to
  make use of shareable mocked tools, which, by getting lots of use,
  will be better debugged (that is, don't have each test produce its
  own ``myfortran.py`` or ``mylex.py`` etc. unless they need drastically
  different behaviors).

Unittest-specific hints:

- Let the ``unittest`` module help!  Lots of the existing tests just
  use a bare ``assert`` call for checks, which works fine, but then
  you are responsible for preparing the message if it fails.  The base
  ``TestCase`` class has methods which know how to display many things,
  for example ``self.assertEqual()`` displays in what way the two arguments
  differ if they are *not* equal. Checking for an expected exception can
  be done with ``self.assertRaises()`` rather than crafting a stub of
  code using a try block for this situation.
- The *fail fast* consideration applies here, too: try not to fail a whole
  testcase on the first problem if there are more checks to go.
  Again, existing tests may use elaborate tricks for this, but modern
  ``unittest`` has a ``subTest`` context manager that can be used to wrap
  each distinct piece and not abort the testcase for a failing subtest
  (to be fair, this functionality is a recent addition, after most SCons
  unit tests were written - but it should be used going forward).
  A minimal sketch of this style follows below.
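
Here is such a sketch (the data and assertions are invented for the
example)::

   import unittest


   class SplitTestCase(unittest.TestCase):
       def test_flag_splitting(self):
           cases = [
               ("-O", ["-O"]),
               ("-O -g", ["-O", "-g"]),
               ("", []),
           ]
           for flags, expected in cases:
               # each case is reported separately; one failure does not
               # stop the remaining cases from being checked
               with self.subTest(flags=flags):
                   self.assertEqual(flags.split(), expected)


   if __name__ == "__main__":
       unittest.main()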