File: test-definition.md

# Test definitions

The test definition is a `yaml` file that describes the tests that you want
LAVA to run on the DUT.

## Smoke example

Let's look at this example:

```yaml
--8<-- "tests/smoke.yaml"
```

## Metadata

The `metadata` dictionary describes the test. `format` and `name` are mandatory
while `description` is optional.
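A minimal `metadata` block might look like this (the `name` and `description`
values are illustrative):

```yaml
metadata:
  format: Lava-Test Test Definition 1.0
  name: smoke-example
  description: Basic smoke test for the DUT
```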

## Run

The `run` dictionary lists the actions that LAVA should execute on the DUT.
The only available key is `steps`.

`steps` is an array that will be transformed into a shell script that LAVA will
run on the DUT.

In this example, `run.steps` will be rendered into the following shell script:

```shell
--8<-- "tests/smoke.sh"
```
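As a minimal illustration of the mapping (the commands here are illustrative),
a `run` block such as:

```yaml
run:
  steps:
    - echo "starting smoke test"
    - uname -a
```

is turned into a shell script that executes each step in order.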

### LAVA test helpers

The LAVA Test Helpers are scripts maintained in the LAVA codebase, like
`lava-test-case`. These are designed to work using only the barest
minimum of operating system support, to make them portable to all deployments.
The helpers are mainly used to:

* embed information from LAVA into the test shell
* support communication with LAVA during test runs

#### lava-test-case

##### Result

Record a test result using the `--result` argument:

```shell
lava-test-case <test-case-id> --result <pass|fail|skip|unknown> [--measurement <val>] [--units <unit>] [--output <file>]
```
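For example, to record a passing result together with a measurement (the
test-case id and values are illustrative):

```yaml
run:
  steps:
    - lava-test-case boot-time --result pass --measurement 4.2 --units seconds
```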

##### Shell

Record a test result using the `--shell` command exit code:

```shell
lava-test-case <test-case-id> --shell <command>
```

If `<command>` exits with 0, the result is `pass`. Otherwise, it is `fail`.

!!! warning
    When using `--shell`, multiple commands or a combined one-liner with pipes
    and redirects should be quoted so that it is passed as a single argument and
    the final exit code is checked.
    ```yaml title="Correct"
    - lava-test-case check-os --shell "cat /etc/os-release | grep 'ID=debian'"
    ```
    If not quoted, only the exit code of the first command is checked, and a
    `| grep <pattern>` check could prevent the helper from sending the LAVA
    test result signal, leading to a missing test result.
    ```yaml title="Incorrect"
    - lava-test-case check-os --shell cat /etc/os-release | grep "ID=debian"
    ```

##### Output

Attach output from a file to a test result using `--output`:

```shell
lava-test-case <test-case-id> --result <result> --output <file>
```

This streams the contents of `<file>` between start/end test case signals,
similar to how `--shell` captures command output. Useful when the test output
has already been saved to a file by a previous step.
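A sketch of this two-step pattern, assuming an earlier step wrote its log to
`/tmp/dmesg.log` (an illustrative path):

```yaml
run:
  steps:
    - dmesg > /tmp/dmesg.log
    - lava-test-case kernel-log --result pass --output /tmp/dmesg.log
```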

#### lava-test-set

Group test cases into named sets. This allows test writers to subdivide
results within a single test definition using an arbitrary label:

```shell
lava-test-set start <name>
lava-test-set stop
```

```yaml
steps:
  - lava-test-set start network-tests
  - lava-test-case ping --shell ping -c4 localhost
  - lava-test-set stop
```

#### lava-test-reference

Some test cases may relate to specific bug reports or have specific URLs
associated with the result. This helper can be used to associate a URL with a
test result:

```shell
lava-test-reference <test-case-id> --result <result> --reference <url>
```
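For example, linking a failing test case to its bug report (the test-case id
and URL are illustrative):

```yaml
run:
  steps:
    - lava-test-reference known-issue --result fail --reference https://example.com/bugs/1234.html
```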

!!! note
    The URL should be a simple file reference; complex query strings may fail
    to parse.

#### lava-test-raise

Raise a `TestError` to abort the current test job immediately:

```shell
lava-test-raise <message>
```

This is useful when a setup step fails and running subsequent tests would be
pointless. The message is included in the error reported by LAVA.
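For example, aborting the job when a setup step fails (the mount point and
message are illustrative):

```yaml
run:
  steps:
    - mount /dev/sda1 /mnt || lava-test-raise "failed to mount test partition"
    - ./run-storage-tests.sh
```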

#### lava-background-process

Manage background processes during the test:

```yaml
steps:
  - lava-background-process-start MON --cmd "top -b"
  - ./run-my-tests.sh
  - lava-background-process-stop MON
```

### Device Information Helpers

Some elements of the static device configuration are exposed to the test shell,
where it is safe to do so and where the admin has explicitly configured the
information.

#### lava-target-ip

Prints the target's IP address, if configured by the admin in the device dictionary
using [device_ip](../../technical-references/configuration/device-dictionary.md#device_ip).
Devices with a fixed IPv4 address configured in the device dictionary will populate
this field. Test writers can use this in a Docker container to connect to the device:

```shell
ping -c4 $(lava-target-ip)
```

#### lava-target-mac

Prints the target's MAC address, if configured by the admin in the device dictionary
using [device_mac](../../technical-references/configuration/device-dictionary.md#device_mac).
This can be useful to look up the IP address of the device:

```shell
echo $(lava-target-mac)
```

#### lava-echo-ipv4

Prints the IPv4 address for a given network interface using `ifconfig` or `ip`:

```shell
lava-echo-ipv4 <interface>
```
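For example, capturing the address of `eth0` for later steps (the interface
name is illustrative):

```yaml
run:
  steps:
    - IP=$(lava-echo-ipv4 eth0)
    - ping -c4 "$IP"
```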

#### lava-target-storage

Prints the available storage devices, if configured by the admin in the device dictionary
using [storage_info](../../technical-references/configuration/device-dictionary.md#storage_info).

Without arguments, outputs one line per device with the name and value separated
by a tab:

```shell
$ lava-target-storage
UMS	/dev/disk/by-id/usb-Linux_UMS_disk_0_WaRP7-0xac2400d300000054-0:0
SATA	/dev/disk/by-id/ata-ST500DM002-1BD142_W3T79GCW
```

With a name filter, outputs only the matching device value:

```shell
$ lava-target-storage UMS
/dev/disk/by-id/usb-Linux_UMS_disk_0_WaRP7-0xac2400d300000054-0:0
```

If there is no matching name, exits non-zero and outputs nothing.
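A sketch of using the filtered output inside a test step (`UMS` matches the
example above; the test-case id is illustrative):

```yaml
run:
  steps:
    - lava-test-case ums-present --shell "test -e $(lava-target-storage UMS)"
```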

### Using custom scripts

When multiple steps are necessary to run a test and get usable output, write a
custom script to go alongside the YAML and execute it as a run step:

```yaml
run:
  steps:
    - ./my-script.sh arguments
```

You can choose whatever scripting language you prefer, as long as it is
available in the test image. The best practices are:

* Be verbose - Add progress messages, error handling, and debug output so
  that test logs are useful during triage. Control the total amount of output
  to keep logs readable.
* Watch out for `cd` - If you change directories inside a script, save the
  original working directory first and return to it at the end.
* Wait for subprocesses - In Python, use `subprocess.check_call()` or
  `subprocess.run()` instead of `subprocess.Popen()` when calling LAVA helpers.
  `Popen` returns immediately, which can cause output to arrive after the test
  definition finishes and lead to missing results.
* Make it portable - Scripts should check whether the LAVA helper is in `$PATH`
  to determine whether they are running inside LAVA. When the helper is not
  available, report results with `echo` or `print()` instead. This allows the
  same script to run both inside and outside of LAVA, helping developers
  reproduce issues without the full CI system.

    ```shell
    #!/bin/sh
    if command -v lava-test-case >/dev/null 2>&1; then
        lava-test-case my-test --result pass
    else
        echo "my-test: pass"
    fi
    ```
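The same fallback, combined with the advice to wait for subprocesses, can be
sketched in Python (the `report_result` helper name is illustrative; only
`lava-test-case` itself comes from LAVA):

```python
import shutil
import subprocess

def report_result(test_case, result):
    """Report a test result, using the LAVA helper when it is available."""
    if shutil.which("lava-test-case"):
        # subprocess.run() waits for the helper to finish, so the result
        # signal is sent before the next step starts (unlike Popen).
        subprocess.run(["lava-test-case", test_case, "--result", result],
                       check=True)
    else:
        # Outside LAVA, fall back to plain output.
        print(f"{test_case}: {result}")

report_result("my-test", "pass")
```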

See also the [test writing guidelines](https://github.com/Linaro/test-definitions/blob/master/docs/test-writing-guidelines.md#test-writing-guidelines), particularly the **Running in
LAVA** section. For test definition examples that follow these best practices,
see [automated](https://github.com/Linaro/test-definitions/tree/master/automated).

## Install

Before executing the `run` steps, the `install` dictionary lets LAVA clone Git
repositories and run arbitrary shell commands to prepare for the following
test runs.

### git-repos

Specifies the Git repositories to be cloned into the test working directory.
The repositories are cloned on the LAVA worker and applied to the test image as
part of the LAVA overlay.

```yaml
install:
  git-repos:
    - url: https://gitlab.com/lava/lava.git
      branch: 2026.01
      destination: lava202601
```

### steps

Specifies arbitrary shell commands to run during the install phase. The install
steps are run directly on the DUT.

```yaml
install:
  git-repos:
    - url: https://example.com/repo.git
  steps:
    - cd repo
    - make install
```

## Expected

The `expected` dictionary allows users to define a list of expected test cases.
At the end of each test run, expected test cases missing from the test results
are marked as `fail`. Conversely, test cases present in the results but not in
the expected list are logged as warnings.

With the following test definition example, tc3 and tc4 will be reported as `fail`,
and warnings will be logged for tc5 and tc6.

```yaml title="Test definition"
metadata:
  format: Lava-Test Test Definition 1.0
  name: expected-testdef-example
run:
  steps:
    - lava-test-case tc1 --result pass
    - lava-test-case tc2 --result pass
    - lava-test-case tc5 --result pass
    - lava-test-case tc6 --result pass
expected:
  - tc1
  - tc2
  - tc3
  - tc4
```

The list can be defined in either the test definition or the job definition. If
both are provided, the value in the job definition takes precedence.

!!! warning "Limitation"
    Expected test cases defined in a test definition fetched from `git` are
    checked and reported after test execution. If the test definition is not
    executed at all, they are not reported. This limitation exists because
    these test definitions, and the expected test case lists defined inside
    them, may be unreachable or not yet deployed when the job exits on error.
    If you need consistent and predictable job results for the test suites and
    cases, provide the expected test cases in the
    [job definition](../../technical-references/job-definition/actions/test.md#expected).

--8<-- "refs.txt"