==============================================================================
CFEngine acceptance testsuite
==============================================================================

CFEngine has an extensive testsuite which covers a lot of functionality that can
be tested as a series of cf-agent runs.

You are encouraged to run this testsuite on any new software/hardware
configuration, in order to
 * verify that CFEngine functions correctly
 * provide developers with a reproducible way to fix any problems encountered
   in new configurations / environments

If you find a bug, you are encouraged to create tests in the format of this
testsuite which demonstrate the bug, so the tests can be added to the testsuite
and checked for in the future.

------------------------------------------------------------------------------
Preparing for running tests
------------------------------------------------------------------------------

* Compile CFEngine.
  - It is advised to use Tokyo Cabinet, as it gives much better performance in
    the testsuite than Berkeley DB.

* Install fakeroot(1). If this tool is not available for your operating system,
  you may use any other "fake root" environment or even sudo(1). Alternative
  tools are specified with the --gainroot option of the `testall' script. Note
  that if you use the --unsafe option (which can damage your system), you may
  have to use --gainroot=sudo in order to get correct results (see the example
  after this list).

* If you want output in color, set CFENGINE_COLOR to 1.  If you want
  diff output colorized, you also need to install the colordiff
  utility.
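
For example, if fakeroot is not available and you intend to run the unsafe
tests with colorized output, the invocation might look like this (the cf-agent
path is the same illustrative path used in the examples below):

  CFENGINE_COLOR=1 ./testall --agent=/var/cfengine/bin/cf-agent \
      --gainroot=sudo --unsafe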

------------------------------------------------------------------------------
Running testsuite
------------------------------------------------------------------------------

All tests ought only to create files and directories in /tmp, and ought not to
modify other files. The exception to this rule is the so-called unsafe tests,
which reside in a special directory. More on unsafe tests below.

Run

  ./testall --agent=$workdir/bin/cf-agent

e.g.

  ./testall --agent=/var/cfengine/bin/cf-agent

Testing will start. For every test case, the name and result (failed / passed)
will be printed. At the end, a testing summary will be provided.

The test runner creates the following log files:

 * test.log contains detailed information about each test case (name, standard
   output/error contents, return code, and test status).
 * summary.log contains summary information, like that displayed during the
   testsuite run.

Also, a directory .succeeded will be created, containing a stamp for each
passed test case, so test cases which passed before but fail in a subsequent
testsuite run will additionally be marked in the output as "(UNEXPECTED
FAILURE)".

You can run a subset of tests by passing either filenames:

  ./testall --agent=$workdir/bin/cf-agent 01_vars/01_basic/sysvars.cf

or directories to 'testall':

  ./testall --agent=$workdir/bin/cf-agent 01_vars

------------------------------------------------------------------------------
Creating/editing test cases
------------------------------------------------------------------------------

Each test should be 100% standalone, and must contain at least one of the main
bundles:
	init		setup, create initial and hoped-for final states
	test		the actual test code
	check		the comparison of expected and actual results

Look in default.cf for some standard check bundles (for example, to compare
files when testing file edits, or for cleaning up temporary files).

Tests should be named with short names describing what they test, using lower-
case letters and underscores only. If a test is expected to generate a crash
or error (that is, if it contains syntax errors or other faults), it should
have an additional '.x' suffix before '.cf' (e.g. 'string_vars.x.cf').

Tests which are not expected to pass yet (e.g. there is a bug in the code which
prevents them from passing) should be placed in the 'staging' subdirectory of
the test directory where they belong. Such test cases will only be run if the
--staging argument is passed to ./testall.
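
For example, to include the staging tests in a run (with the same illustrative
agent path as before):

  ./testall --agent=/var/cfengine/bin/cf-agent --staging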

Tests which modify the system outside of /tmp are so-called unsafe tests, and
should be placed in the 'unsafe' subdirectory of the directory where they
belong. Such test cases have the potential to damage the host system and will
only be run if the --unsafe argument is given to ./testall. For the user's
protection, this option is needed even if the test file name is specified
directly on the command line.

Tests which need network connectivity should be placed in 'network'
subdirectories. Those tests may be disabled by passing the --no-network option
to 'testall'.

NOTE: Since the class 'ok' is used in most tests, never create a persistent
class called 'ok' in any test. Persistent classes are cleaned up between test
runs, but better safe than sorry.

All tests should contain three bundles: init, test and check. In the
"body common control", this file should be included via inputs, and the
bundlesequence should be set to default("$(this.promise_filename)").

Output "$(this.promise_filename) Pass" for passing
   and "$(this.promise_filename) FAIL" for failing.

If you want to use tools like grep, diff, touch, etc., please use the
$(G.grep) format so that the correct path to the tool is chosen by
the test template. If a tool is missing, you can add it to dcs.cf.sub.
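
Putting these conventions together, a minimal test might look like the sketch
below. The inputs path is an assumption (it depends on where the test lives
relative to default.cf), and the trivial string comparison in the check bundle
stands in for the standard check bundles that default.cf provides:

  body common control
  {
        # Assumed relative path; adjust to where default.cf actually lives.
        inputs => { "../default.cf" };
        bundlesequence => { default("$(this.promise_filename)") };
  }

  bundle agent init
  {
    vars:
        # Real tests create their initial state here (files under /tmp, etc.).
        "dummy" string => "dummy";
  }

  bundle agent test
  {
    vars:
        # The behaviour under test; here just a variable assignment.
        "result" string => "expected";
  }

  bundle agent check
  {
    classes:
        "ok" expression => strcmp("$(test.result)", "expected");

    reports:
      ok::
        "$(this.promise_filename) Pass";
      !ok::
        "$(this.promise_filename) FAIL";
  }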

------------------------------------------------------------------------------
Waiting in tests
------------------------------------------------------------------------------

If your test needs to wait for a significant amount of time, for example in
order for locks to expire, you should use the wait functionality in the test
suite (a sketch follows at the end of this section). It requires three parts:

  1. Your test needs to be put in a "timed" directory.
  2. Whenever you want to wait, use the
     "dcs_wait($(this.promise_filename), <seconds>)" method to wait the
     specified number of seconds.
  3. Each test invocation will have a predefined class set,
     "test_pass_<number>", where <number> is the current pass number starting
     from one. This means you can wait several times.

The test suite will keep track of time, and run other tests while your test is
waiting. Some things to look out for though:

  - During the wait time, your test is no longer running, so you cannot for
    example do polling.
  - You cannot leave daemons running while waiting, because they may interfere
    with other tests. If you need that, you will have to wait the traditional
    way, by introducing sleeps in the policy itself.
  - The timing is not guaranteed to be accurate to the second. The test will be
    resumed as soon as the currently running test has finished, but if that
    test takes a long time, this will add to the wait time.
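
As a sketch of how the pieces fit together (only the test bundle is shown; it
assumes dcs_wait is invoked as a methods promise, with an arbitrary 60-second
wait and an illustrative class name), a test placed in a "timed" directory
might look like:

  bundle agent test
  {
    methods:
      test_pass_1::
        # First pass: do the work that needs time to take effect, then ask
        # the suite to wait before re-running this policy.
        "wait" usebundle => dcs_wait("$(this.promise_filename)", "60");

    classes:
      test_pass_2::
        # Second pass: the suite re-runs the policy after the wait, so the
        # check bundle can gate its verification on this class.
        "waited_for_locks" expression => "any";
  }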

------------------------------------------------------------------------------
Handling different platforms
------------------------------------------------------------------------------

For tests that need to be skipped on certain platforms, you can add
special meta variables to one of the test bundles. These are the
possible variable names:

  - test_skip_unsupported
      Skips a test because it makes no sense on that platform (e.g.
      symbolic links on Windows).

  - test_skip_needs_work
      Skips a test because the test itself is not adapted to the
      platform (even if the functionality exists).

  - test_suppress_fail
      Runs the test, but will accept failure. Use this when there is a
      real failure, but it is acceptable for the time being. This
      variable requires a meta tag on the variable set to
      "redmine<number>", where <number> is a Redmine issue number.

In all cases, the variable is expected to be set to a class expression
suitable for ifvarclass, where a positive match means that the test
will be skipped. So the expression '"test_skip_needs_work" string =>
"hpux|aix";' will skip the test on HP-UX and AIX, and nowhere else.

Example:
  bundle agent test
  {
    meta:
      # Class expression giving platforms that should be skipped.
      "test_skip_unsupported" string => "hpux";
  }

  bundle agent test
  {
    meta:
      # Class expression giving platforms that should be skipped.
      "test_skip_needs_work" string => "hpux";
  }

  bundle agent test
  {
    meta:
      # Class expression giving platforms where we suppress a failure.
      "test_suppress_fail" string => "hpux",
        meta => { "redmine1234" };
  }


------------------------------------------------------------------------------
Glossary
------------------------------------------------------------------------------

For purposes of testing, here is what our terms mean:

Pass: the test did what we expected (whether that was setting a variable,
editing a file, killing or starting a process, or correctly failing to do
these actions in the light of existing conditions or attributes).  Note that
in the case of tests whose names end in '.x.cf', a Pass is generated when the
test terminates abnormally and we wanted it to do that.

FAIL: not doing what we wanted: either the test finished and returned "FAIL"
from the check bundle, or something went wrong - cf-agent might have dumped
core, cf-promises may have denied execution of the promises, etc.

FAILed to crash: the test was expected to crash, but did not. This is another
kind of failure, reported separately due to its low impact.

Skipped: the test is skipped because it is either explicitly disabled or is
Nova-specific and being run on the Community cf-agent.