File: API.md

# Sharness API

### SHARNESS_VERSION

    Public: Current version of Sharness.

### SHARNESS_TEST_EXTENSION

    Public: The file extension for tests.  By default, it is set to "t".

### SHARNESS_ORIG_TERM

    Public: The unsanitized TERM under which Sharness was originally run.

### test_set_prereq()

    Public: Define that a test prerequisite is available.

    The prerequisite can later be checked explicitly using test_have_prereq or
    implicitly by specifying the prerequisite name in calls to test_expect_success
    or test_expect_failure.

    $1 - Name of prerequisite (a simple word, in all capital letters by convention)

    Examples

      # Set PYTHON prerequisite if interpreter is available.
      command -v python >/dev/null && test_set_prereq PYTHON

      # Set prerequisite depending on some variable.
      test -z "$NO_GETTEXT" && test_set_prereq GETTEXT

    Returns nothing.

### test_have_prereq()

    Public: Check if one or more test prerequisites are defined.

    The prerequisites must have previously been set with test_set_prereq.
    The most common use of this is to skip all the tests if some essential
    prerequisite is missing.

    $1 - Comma-separated list of test prerequisites.

    Examples

      # Skip all remaining tests if prerequisite is not set.
      if ! test_have_prereq PERL; then
          skip_all='skipping perl interface tests, perl not available'
          test_done
      fi

    Returns 0 if all prerequisites are defined or 1 otherwise.
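The comma-separated matching with "!" negation can be sketched in plain POSIX shell. This is an illustrative reimplementation, not Sharness's actual code; `satisfied_prereqs` and `have_prereq` are hypothetical names.

```shell
# List of prerequisites assumed to be set (illustrative).
satisfied_prereqs=" PERL PYTHON "

have_prereq() {
    save_IFS=$IFS
    IFS=,
    set -- $1          # split the argument on commas
    IFS=$save_IFS
    for p in "$@"
    do
        case "$p" in
        !*)
            # Negated prerequisite: fail if it IS satisfied.
            case "$satisfied_prereqs" in
            *" ${p#!} "*) return 1 ;;
            esac
            ;;
        *)
            # Plain prerequisite: fail if it is NOT satisfied.
            case "$satisfied_prereqs" in
            *" $p "*) ;;
            *) return 1 ;;
            esac
            ;;
        esac
    done
    return 0
}

have_prereq PERL,PYTHON && echo "both available"
have_prereq PERL,!RUBY && echo "negation works"
```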

### test_debug()

    Public: Execute commands in debug mode.

    Takes a single argument and evaluates it only when the test script is started
    with --debug. This is primarily meant for use during the development of test
    scripts.

    $1 - Commands to be executed.

    Examples

      test_debug "cat some_log_file"

    Returns the exit code of the last command executed in debug mode or 0
      otherwise.

### test_pause()

    Public: Stop execution and start a shell.

    This is useful for debugging tests and only makes sense together with "-v".
    Be sure to remove all invocations of this command before submitting.

### test_expect_success()

    Public: Run test commands and expect them to succeed.

    When the test passes, an "ok" message is printed and the number of successful
    tests is incremented. When it fails, a "not ok" message is printed and the
    number of failed tests is incremented.

    With --immediate, the test script exits immediately upon the first failed test.

    Usually takes two arguments:
    $1 - Test description
    $2 - Commands to be executed.

    With three arguments, the first will be taken to be a prerequisite:
    $1 - Comma-separated list of test prerequisites. The test will be skipped if
         not all of the given prerequisites are set. To negate a prerequisite,
         put a "!" in front of it.
    $2 - Test description
    $3 - Commands to be executed.

    Examples

      test_expect_success \
          'git-write-tree should be able to write an empty tree.' \
          'tree=$(git-write-tree)'

      # Test depending on one prerequisite.
      test_expect_success TTY 'git --paginate rev-list uses a pager' \
          ' ... '

      # Multiple prerequisites are separated by a comma.
      test_expect_success PERL,PYTHON 'yo dawg' \
          ' test "$(perl -E '\''print eval "1 +" . qx[python -c "print 2"]'\'')" = "3" '

    Returns nothing.

### test_expect_failure()

    Public: Run test commands and expect them to fail. Used to demonstrate a known
    breakage.

    This is NOT the opposite of test_expect_success, but rather used to mark a
    test that demonstrates a known breakage.

    When the test passes, an "ok" message is printed and the number of fixed tests
    is incremented. When it fails, a "not ok" message is printed and the number
    of tests still broken is incremented.

    Failures from these tests won't cause --immediate to stop.

    Usually takes two arguments:
    $1 - Test description
    $2 - Commands to be executed.

    With three arguments, the first will be taken to be a prerequisite:
    $1 - Comma-separated list of test prerequisites. The test will be skipped if
         not all of the given prerequisites are set. To negate a prerequisite,
         put a "!" in front of it.
    $2 - Test description
    $3 - Commands to be executed.

    Returns nothing.

### test_must_fail()

    Public: Run command and ensure that it fails in a controlled way.

    Use it instead of "! <command>". For example, when <command> dies due to a
    segfault, test_must_fail diagnoses it as an error, while "! <command>" would
    mistakenly be treated as just another expected failure.

    This is one of the prefix functions to be used inside test_expect_success or
    test_expect_failure.

    $1.. - Command to be executed.

    Examples

      test_expect_success 'complain and die' '
          do something &&
          do something else &&
          test_must_fail git checkout ../outerspace
      '

    Returns 1 if the command succeeded (exit code 0).
    Returns 1 if the command died by signal (exit codes 130-192).
    Returns 1 if the command could not be found (exit code 127).
    Returns 0 otherwise.
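The exit-status classification above can be sketched as follows. The function name `check_controlled_failure` is illustrative, not Sharness's internal code.

```shell
check_controlled_failure() {
    "$@"
    status=$?
    if test "$status" -eq 0
    then
        echo >&2 "command succeeded, but failure was expected"
        return 1
    elif test "$status" -ge 130 && test "$status" -le 192
    then
        echo >&2 "command died by signal"
        return 1
    elif test "$status" -eq 127
    then
        echo >&2 "command could not be found"
        return 1
    fi
    return 0
}

# "false" exits 1: an ordinary, controlled failure.
check_controlled_failure false && echo "failed in a controlled way"
```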

### test_might_fail()

    Public: Run command and ensure that it succeeds or fails in a controlled way.

    Similar to test_must_fail, but tolerates success too. Use it instead of
    "<command> || :" to catch failures caused by a segfault, for instance.

    This is one of the prefix functions to be used inside test_expect_success or
    test_expect_failure.

    $1.. - Command to be executed.

    Examples

      test_expect_success 'some command works without configuration' '
          test_might_fail git config --unset all.configuration &&
          do something
      '

    Returns 1 if the command died by signal (exit codes 130-192).
    Returns 1 if the command could not be found (exit code 127).
    Returns 0 otherwise.

### test_expect_code()

    Public: Run command and ensure it exits with a given exit code.

    This is one of the prefix functions to be used inside test_expect_success or
    test_expect_failure.

    $1   - Expected exit code.
    $2.. - Command to be executed.

    Examples

      test_expect_success 'Merge with d/f conflicts' '
          test_expect_code 1 git merge "merge msg" B master
      '

    Returns 0 if the expected exit code is returned or 1 otherwise.
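The core idea reduces to capturing `$?` and comparing it against the expected value. This is a minimal sketch; `expect_code` is a hypothetical stand-in for test_expect_code.

```shell
expect_code() {
    want_code=$1
    shift
    "$@"
    # Compare the command's actual exit status against the expected one.
    test "$?" -eq "$want_code"
}

expect_code 3 sh -c 'exit 3' && echo "exit code matched"
```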

### test_cmp()

    Public: Compare two files to see if expected output matches actual output.

    The TEST_CMP variable defines the command used for the comparison; it
    defaults to "diff -u". The command's output (the diff) is printed to
    standard output only when the test script was started with --verbose.

    This is one of the prefix functions to be used inside test_expect_success or
    test_expect_failure.

    $1 - Path to file with expected output.
    $2 - Path to file with actual output.

    Examples

      test_expect_success 'foo works' '
          echo expected >expected &&
          foo >actual &&
          test_cmp expected actual
      '

    Returns the exit code of the command set by TEST_CMP.
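The TEST_CMP fallback can be exercised outside of Sharness; the `${TEST_CMP:-"diff -u"}` default below mirrors the documented "diff -u" default.

```shell
# Use the caller's TEST_CMP if set, otherwise the documented default.
TEST_CMP=${TEST_CMP:-"diff -u"}

printf 'hello\n' >expected
printf 'hello\n' >actual
$TEST_CMP expected actual && echo "files match"
```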

### test_seq()

    Public: Portably print a sequence of numbers.

    seq is not in POSIX and GNU seq might not be available everywhere,
    so it is nice to have a seq implementation, even a very simple one.

    $1 - Starting number.
    $2 - Ending number.

    Examples

      test_expect_success 'foo works 10 times' '
          for i in $(test_seq 1 10)
          do
              foo || return
          done
      '

    Returns 0 if all the specified numbers can be displayed.
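A minimal portable replacement in the spirit of test_seq can be written with POSIX shell arithmetic alone, so it works where seq is unavailable. `my_seq` is a hypothetical name, not Sharness's implementation.

```shell
my_seq() {
    i=$1
    while test "$i" -le "$2"
    do
        echo "$i"
        i=$((i + 1))
    done
}

my_seq 1 3
```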

### test_must_be_empty()

    Public: Check that a file expected to be empty is indeed empty, and fail
    otherwise.

    $1 - File to check for emptiness.

    Returns 0 if file is empty, 1 otherwise.
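The check comes down to `test -s`, which is true when a file exists and has a size greater than zero. A sketch with the illustrative name `must_be_empty`:

```shell
must_be_empty() {
    if test -s "$1"
    then
        echo >&2 "'$1' is not empty"
        return 1
    fi
    return 0
}

: >empty.txt          # create an empty file
echo data >full.txt   # create a non-empty file
must_be_empty empty.txt && echo "empty as expected"
```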

### test_when_finished()

    Public: Schedule cleanup commands to be run unconditionally at the end of a
    test.

    If some cleanup command fails, the test will not pass. With --immediate, no
    cleanup is done, so that the state can be inspected to diagnose what went
    wrong.

    This is one of the prefix functions to be used inside test_expect_success or
    test_expect_failure.

    $1.. - Commands to prepend to the list of cleanup commands.

    Examples

      test_expect_success 'test core.capslock' '
          git config core.capslock true &&
          test_when_finished "git config --unset core.capslock" &&
          do_something
      '

    Returns the exit code of the last cleanup command executed.
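The word "prepend" above matters: because each call prepends to the cleanup list, cleanups run in reverse order of registration, undoing the most recent setup first. A sketch with hypothetical names (`cleanup_cmds`, `when_finished`):

```shell
cleanup_cmds=:
when_finished() {
    # Prepend, so the newest cleanup runs first.
    cleanup_cmds="$* && $cleanup_cmds"
}

when_finished 'echo "undo step 1"'
when_finished 'echo "undo step 2"'
eval "$cleanup_cmds"
```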

### final_cleanup()

    Public: Schedule cleanup commands to be run unconditionally when all tests
    have run.

    This can be used to clean up things like test databases. There is no need to
    clean up temporary files, as test_done already does that.

    Examples

      final_cleanup mysql -e "DROP DATABASE mytest"

    Returns the exit code of the last cleanup command executed.

### test_done()

    Public: Summarize test results and exit with an appropriate error code.

    Must be called at the end of each test script.

    Can also be used to stop tests early and skip all remaining tests. For this,
    set skip_all to a string explaining why the tests were skipped before calling
    test_done.

    Examples

      # Each test script must call test_done at the end.
      test_done

      # Skip all remaining tests if prerequisite is not set.
      if ! test_have_prereq PERL; then
          skip_all='skipping perl interface tests, perl not available'
          test_done
      fi

    Returns 0 if all tests passed or 1 if there was a failure.

### SHARNESS_TEST_DIRECTORY

    Public: Root directory containing tests. Tests can override this variable,
    e.g. for testing Sharness itself.

### SHARNESS_TEST_SRCDIR

    Public: Source directory of test code and sharness library.
    This directory may be different from the directory in which tests are
    being run.

### SHARNESS_BUILD_DIRECTORY

    Public: Build directory that will be added to PATH. By default, it is set to
    the parent directory of SHARNESS_TEST_DIRECTORY.

### SHARNESS_TEST_FILE

    Public: Path to test script currently executed.

### SHARNESS_TRASH_DIRECTORY

    Public: Empty trash directory, the test area, provided for each test. The HOME
    variable is set to that directory too.

Generated by tomdoc.sh version 0.1.5