==============================================================================
CFEngine acceptance testsuite
==============================================================================

CFEngine has an extensive testsuite which covers a lot of functionality that can
be tested as a series of cf-agent runs.

You are encouraged to run this testsuite on any new software/hardware
configuration, in order to
 * verify that CFEngine functions correctly
 * provide developers with a reproducible way to fix any problems encountered in
   new configurations / environments

If you find a bug, you are encouraged to write tests in the format of this
testsuite which demonstrate the bug, so that the tests can be added to the
testsuite and checked for in the future.

Note that the testall script generates JUnit-style XML output for parsing by CI systems.

https://llg.cubic.org/docs/junit/


------------------------------------------------------------------------------
Preparing for running tests
------------------------------------------------------------------------------

* Compile CFEngine.
  - Using Tokyo Cabinet is advised, as it gives much better performance than
    Berkeley DB in the test suite.

* Install fakeroot(1). If this tool is not available for your operating system,
  you may use any other "fake root" environment or even sudo(1). Alternative
  tools can be selected with the --gainroot option of the `testall' script. Note
  that if you use the --unsafe option (which can damage your system), you may
  have to use --gainroot=sudo in order to get correct results.
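
  For example, to run the unsafe tests while gaining root via sudo(1) (the
  agent path is just an illustration):

    ./testall --agent=/var/cfengine/bin/cf-agent --gainroot=sudo --unsafe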

* If you want output in color, set CFENGINE_COLOR to 1.  If you want
  diff output colorized, you also need to install the colordiff
  utility.
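
  For example (assuming CFENGINE_COLOR is read from the environment):

    CFENGINE_COLOR=1 ./testall --agent=/var/cfengine/bin/cf-agent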

* If you plan to use `git bisect` to hunt for a bug, try
  tests/acceptance/bisect-acceptance.pl, contributed by the
  indomitable Chris Dituri.

------------------------------------------------------------------------------
Running testsuite
------------------------------------------------------------------------------

All tests ought only to create files and directories in /tmp, and ought not to
modify other files. The exception to this rule is the so-called unsafe tests,
which reside in a special directory. More on unsafe tests below.

Run

  ./testall --agent=$workdir/bin/cf-agent

e.g.

  ./testall --agent=/var/cfengine/bin/cf-agent

Testing will start. For every test case, the name and result (failed / passed)
will be printed. At the end, a testing summary will be provided.

The test runner creates the following log files:

 * test.log contains detailed information about each test case (name, standard
   output/error contents, return code, and test status).
 * summary.log contains summary information, like the one displayed during
   testsuite run.

Also, a directory .succeeded will be created, containing stamps for each passed
test case, so test cases which passed before but fail in a subsequent testsuite
run will be additionally marked in the output as "(UNEXPECTED FAILURE)".

You may run a subset of tests by passing either filenames:

  ./testall --agent=$workdir/bin/cf-agent 01_vars/01_basic/sysvars.cf

or directories to 'testall':

  ./testall --agent=$workdir/bin/cf-agent 01_vars

------------------------------------------------------------------------------
Creating/editing test cases
------------------------------------------------------------------------------

Each test should be 100% standalone. If you include "default.cf.sub" in
inputs, then the bundlesequence is automatically defined, and your file
must contain at least one of the main bundles:

* init - setup, create initial and hoped-for final states
* test - the actual test code
* check - the comparison of expected and actual results
* destroy - anything you want to do at the end, like killing processes etc.

However, if the test is really simple, you might want to skip including
"default.cf.sub" in inputs. Then you should define your own
bundlesequence, e.g. {"test","check"}. It is recommended to avoid
including "default.cf.sub" when not needed, since the test will run much
faster. A minimal sketch of such a standalone test is shown below.
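
For illustration, a standalone test with its own bundlesequence might look
like this (the variable name and value are invented):

  body common control
  {
        bundlesequence => { "test", "check" };
  }

  bundle agent test
  {
    vars:
        # Hypothetical promise whose outcome the check bundle inspects.
        "result" string => "expected";
  }

  bundle agent check
  {
    classes:
        "ok" expression => strcmp("$(test.result)", "expected");

    reports:
      ok::
        "$(this.promise_filename) Pass";
      !ok::
        "$(this.promise_filename) FAIL";
  }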

Look in default.cf for some standard check bundles (for example, to compare
files when testing file edits, or for cleaning up temporary files).

For a test named XYZ, if the test runner finds the file XYZ.def.json,
that file will be copied to the input directory so it will be loaded
on startup. That lets you set hard classes and def variables and add
inputs before the test ever runs.
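
For example, a hypothetical mytest.def.json next to mytest.cf might look like
this (the layout follows the usual augments format; the class and variable
names are invented):

  {
    "classes": {
      "my_precondition": [ "any" ]
    },
    "vars": {
      "my_test_var": "some value"
    }
  }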

Tests should be named with short names describing what they test, using lower-
case letters and underscores only. If a test is expected to generate an error
(that is, if it contains syntax errors or other faults), it should have an
additional '.x' suffix before '.cf' (e.g. 'string_vars.x.cf'). A crash will
still cause a failure.

Tests which are not expected to pass yet (e.g. because a bug in the code
prevents them from passing) should be placed in the 'staging' subdirectory of
the test directory where they belong. Such test cases will only be run if the
--staging argument is passed to ./testall.

Tests which modify the system outside of /tmp are the so-called unsafe tests,
and should be placed in the 'unsafe' subdirectory of the directory where they
belong. Such test cases have the potential to damage the host system and will
only be run if the --unsafe argument is given to ./testall. For the user's
protection, this option is needed even if the test file name is specified
directly on the command line.

Tests which need network connectivity should be placed in 'network'
subdirectories. Those tests may be disabled by passing the --no-network option
to 'testall'.

Tests which cannot be run in parallel with other tests should be put in a
'serial' subdirectory. This is necessary if the test depends on, for example,
a certain number of cf-agent processes running, or on opening a network port
that may be shared with other tests. Unsafe tests are always carried out
serially, and do not need to be in a 'serial' subdirectory. You can also give
individual tests a name that contains "serial" as a word.

Serial tests enforce strictly sorted order when they are encountered, so you
can use them to initialize some precondition for a set of tests by sorting one
lexicographically before the others and giving it a name containing "serial".
The other tests can then be executed in parallel, unless they need to be serial
for other reasons. For example, you can use two serial tests, one to set up
cf-serverd and one to tear it down, and in between you put normal tests. Note
that timed tests (see below) have one small exception: if a test that is both
timed and serial is executed in the same "block" as other serial tests (that
is, when one or more serial tests follow another), it will always be executed
first. So put it in a timed directory as well if you want to be absolutely
sure.

NOTE: Since the class 'ok' is used in most tests, never create a persistent
class called 'ok' in any test. Persistent classes are cleaned up between test
runs, but better safe than sorry.

All tests should contain three bundles: init, test and check. In "body common
control", "default.cf.sub" should be included in inputs, and the bundlesequence
should be set to default("$(this.promise_filename)").
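
In other words, a test using "default.cf.sub" typically begins with a preamble
like this (a sketch of the conventional header):

  body common control
  {
        inputs => { "../default.cf.sub" };
        bundlesequence => { default("$(this.promise_filename)") };
  }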

Output "$(this.promise_filename) Pass" for passing
   and "$(this.promise_filename) FAIL" for failing.

If you want to use tools like grep, diff, touch, etc., please use the
$(G.grep) form so that the correct path to the tool is chosen by the test
template. If a tool is missing, you can add it to dcs.cf.sub.
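
A sketch of a check bundle using $(G.grep) (the output file path and the
pattern are made up for illustration):

  bundle agent check
  {
    classes:
        # Pass if the hypothetical output file contains the expected line;
        # $(G.grep) expands to the platform's grep binary.
        "ok" expression => returnszero(
          "$(G.grep) -q 'expected line' /tmp/actual_output.txt", "useshell");

    reports:
      ok::
        "$(this.promise_filename) Pass";
      !ok::
        "$(this.promise_filename) FAIL";
  }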

------------------------------------------------------------------------------
Waiting in tests
------------------------------------------------------------------------------

If your test needs to wait for a significant amount of time, for example in
order for locks to expire, you should use the wait functionality in the test
suite. It requires three parts:

  1. Your test needs to be put in a "timed" directory.
  2. Whenever you want to wait, use the
     "dcs_wait($(this.promise_filename), <seconds>)" method to wait the
     specified number of seconds.
  3. Each test invocation will have a predefined class set,
     "test_pass_<number>", where <number> is the current pass number starting
     from one. This means you can wait several times (see the sketch below).
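
Putting the parts together, the test bundle of a timed test might look like
this (the 60-second wait and the bundle contents are illustrative):

  bundle agent test
  {
    methods:
      test_pass_1::
        # First pass: do any setup, then ask the suite to wait.
        "wait" usebundle => dcs_wait($(this.promise_filename), 60);

    reports:
      test_pass_2::
        # Second pass: resumed roughly 60 seconds later.
        "$(this.promise_filename): resumed after waiting";
  }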

The test suite will keep track of time, and run other tests while your test is
waiting. Some things to look out for though:

  - During the wait time, your test is no longer running, so you cannot for
    example do polling.
  - You cannot leave daemons running while waiting, because it may interfere
    with other tests. If you need that you will have to wait the traditional
    way, by introducing sleeps in the policy itself.
  - The timing is not guaranteed to be accurate to the second. The test will be
    resumed as soon as the current test has finished running, but if it takes
    a long time, this will add to the wait time.

------------------------------------------------------------------------------
Handling different platforms
------------------------------------------------------------------------------

For tests that need to be skipped on certain platforms, you can add
special meta variables to the *test* bundle. These are the
possible variable names:

  - test_skip_unsupported
      Skips a test because it makes no sense on that platform (e.g.
      symbolic links on Windows).

  - test_skip_needs_work
      Skips a test because the test itself is not adapted to the
      platform (even if the functionality exists).

  - test_soft_fail
  - test_suppress_fail
  - test_flakey_fail
      Runs the test, but will accept failure. Use this when there is a
      real failure, but it is acceptable for the time being. These
      variables require a meta tag on the variable, set to
      "redmine<number>", where <number> is a Redmine issue number.
      There is a subtle difference between the three. Soft failures will
      not be reported as failures in the XML output, and are
      appropriate for test cases that document incoming bug reports.
      Suppressed failures will count as failures, but won't block
      the build, and are appropriate for regressions or bad test
      failures.
      Flakey failures count in a category by themselves and won't block
      the build. If any are present, a different exit code will be
      produced from the test run so that CI runners can react accordingly.

Additionally, a *description* meta variable can be added to the test to describe
its function.

  meta:
      "description" -> { "CFE-2971" }
        string => "Test that class expressions can be used to define classes via augments";

The rule of thumb is:

* If you are writing an acceptance test for a (not yet fixed) bug in
  Redmine, use test_soft_fail.

* If you need the build to work, but the bug you are suppressing cannot
  stay unfixed for long, use test_suppress_fail.

In all cases, the variable is expected to be set to a class expression
suitable for ifvarclass, where a positive match means that the test
will be skipped. So the expression '"test_skip_needs_work" string =>
"hpux|aix";' will skip the test on HP-UX and AIX, and nowhere else.

Example:
  bundle agent test
  {
    meta:

      # Indicate that this test should be skipped on hpux because the
      # functionality is unsupported on that platform.
      "test_skip_unsupported" string => "hpux";
  }

  bundle agent test
  {
    meta:

      # Indicates that the test should be skipped on hpux because the
      # test needs to be adapted for the platform.
      "test_skip_needs_work" string => "hpux";
  }

  bundle agent test
  {
    meta:

      # Indicate the test is expected to fail on hpux and aix, but
      # should not cause the build to change from green. This is
      # appropriate for test cases that document incoming bug reports.
      "test_soft_fail"
        string => "hpux|aix",
        meta => { "redmine1234" };
  }

  bundle agent test
  {
    meta:

      # Indicate the test is expected to fail on hpux but will not
      # block the build. This is appropriate for regressions or very
      # bad test failures.

      "test_suppress_fail"
        string => "hpux",
        meta => { "redmine1234" };
  }


------------------------------------------------------------------------------
Glossary
------------------------------------------------------------------------------

For purposes of testing, here is what our terms mean:

Pass: the test did what we expected (whether that was setting a variable,
editing a file, killing or starting a process, or correctly failing to do
these actions in the light of existing conditions or attributes).  Note that
for tests whose names end in '.x.cf', a Pass is generated when the test
terminates abnormally, as we wanted it to.

FAIL: the test did not do what we wanted: either the test finished and
returned "FAIL" from the check bundle, or something went wrong - cf-agent
might have dropped core, cf-promises may have denied execution of the
promises, etc.

Soft fail: the test failed as expected by a "test_soft_fail" promise.

Skipped: the test was skipped, either because it was explicitly disabled or
because it is Nova-specific and was run on the Community cf-agent.

------------------------------------------------------------------------------
Example Test Skeleton
------------------------------------------------------------------------------
body file control
{
      # Feel free to avoid including "default.cf.sub" and define your
      # own bundlesequence for simple tests

      inputs => { "../default.cf.sub" };
}

#######################################################

bundle agent init
# Initialize the environment and prepare for test
{

}

bundle agent test
# Activate policy for behaviour you wish to inspect
{
  meta:
    "description" -> { "CFE-1234" }
      string => "What does this test?";

    "test_soft_fail"
      string => "Class_Expression|here",
      meta => { "CFE-1234" };
}

bundle agent check
# Check the result of the test
{

  # Pass/Fail requires a specific format:
  # reports: "$(this.promise_filename) Pass";
  # reports: "$(this.promise_filename) FAIL";

  # Consider using one of the dcs bundles
  # methods: "Pass/Fail" usebundle => dcs_passif( "Namespace_Scoped_Class_Indicating_Test_Passed", $(this.promise_filename) );

}
bundle agent __main__
# @brief The test entry
# Note: The testall script runs the agent with the acceptance test as the entry point, for example:
##+begin_src sh :results output :exports both :dir ~/CFEngine/core/tests/acceptance
# pwd | sed 's/\/.*CFEngine\///'
# find . -name "*.cf*" | xargs chmod 600
# cf-agent -Kf ./01_vars/02_functions/basename_1.cf --define AUTO
##+end_src sh
#
##+RESULTS:
#: core/tests/acceptance
#: R: /home/nickanderson/Northern.Tech/CFEngine/core/tests/acceptance/./01_vars/02_functions/basename_1.cf Pass
{
  methods:
      "Run the default test bundle sequence (init,test,check,cleanup) if defined"
        usebundle => default( $(this.promise_filename) );
}
------------------------------------------------------------------------------