File: 05_tools_and_utilities.md

# Tools and Utilities

To run the Python utilities (those ending in `.py`), you will need to
have the `PYTHONPATH` environment variable set. This can be accomplished
in one of two ways: by prefixing each command with an inline assignment
(`PYTHONPATH=/path/to/scap-security-guide`), or by exporting
`PYTHONPATH` in your shell environment. We provide a script for making
this easier: `.pyenv.sh`. To set `PYTHONPATH` correctly for the current
shell, simply call `source .pyenv.sh`. For more information on how to
use this script, please see the comments at the top of the file.
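
For example, either approach works (the repository path below is illustrative):

```bash
# One-off: set PYTHONPATH for a single command
PYTHONPATH=/path/to/scap-security-guide ./build-scripts/profile_tool.py --help

# Persistent: set it for the current shell via the helper script
source .pyenv.sh
```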

It is also possible to install the module with `pip`.
In this case it is recommended to install it within a Python virtual environment.
Please note that this option was added after the 0.1.75 release, so until 0.1.76 is out the module can only be installed from the `master` branch.
To install the ssg module currently present in the `master` branch, run the following command:

```bash
pip install git+https://github.com/ComplianceAsCode/content
```
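
For example, to install it into a fresh virtual environment (the directory name `.venv` is arbitrary):

```bash
python3 -m venv .venv
source .venv/bin/activate
pip install git+https://github.com/ComplianceAsCode/content
```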

It is also possible to install the ssg library version associated with a particular release.
This is recommended because the library is not stable and can change unexpectedly when installing from `master`.
For example, this command installs the library associated with the 0.1.76 release:

```bash
pip install git+https://github.com/ComplianceAsCode/content@v0.1.76
```

The installed package is named `ssg`.
Please note that the name of the package will very likely change in the future.
Therefore, if you install the module this way, pay close attention to the release notes.
It is worth emphasizing that stability of the module is not guaranteed.
Currently, it is used mainly for building the content, and it is therefore modified predominantly based on the needs of the content build system.
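
A quick way to confirm that the module is importable after installation:

```bash
python -c "import ssg; print(ssg.__file__)"
```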

## Profile Statistics and Utilities

The `profile_tool.py` tool displays XCCDF profile statistics. It can
show the number of rules in the profile, how many of those rules have an
OVAL check implemented and how many have a remediation available, list the
rule IDs which are missing them, and provide other useful information.

To use the script, first build the content, then pass the built XCCDF
(not data stream) to the script.

For example, to check which rules in the RHEL 8 OSPP profile are missing
remediations, run these commands:

```bash
    $ ./build_product rhel8
    $ ./build-scripts/profile_tool.py stats --missing-fixes --profile ospp --benchmark build/ssg-rhel8-xccdf.xml
```

Note: There is an automated job which provides the latest statistics from
all products and all profiles; you can view it here:
[Statistics](https://complianceascode.github.io/content-pages/statistics/index.html)

The tool can also subtract rules between YAML profiles.

For example, to subtract selected rules from a given profile based on
rules selected by another profile, run this command:

```bash
    $ ./build-scripts/profile_tool.py sub --profile1 rhel9/profiles/ospp.profile --profile2 rhel9/profiles/pci-dss.profile
```

This will result in a new YAML profile containing only the rules exclusive to
the profile passed via the `--profile1` option.

The tool can also generate a list of the most used rules contained in profiles from a given data stream or benchmark.

For example, to get a list of the most used rules in the benchmark for `rhel8`, run this command:

```bash
    $ ./build-scripts/profile_tool.py most-used-rules build/ssg-rhel8-xccdf.xml
```

Or you can also run this command to get a list of the most used rules in the entire project:

```bash
    $ ./build-scripts/profile_tool.py most-used-rules
```

Optionally, you can use this command to limit the statistics for a specific product:

```bash
    $ ./build-scripts/profile_tool.py most-used-rules --products rhel9
```

The result will be a list of rules with the number of times each is used in the profiles.
The list can be generated as plain text, JSON, or CSV via the `--format FORMAT` parameter.
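
For example, assuming `json` is one of the accepted `--format` values:

```bash
    # --format value assumed; check the sub-command's --help for the full list
    $ ./build-scripts/profile_tool.py most-used-rules --format json --products rhel9
```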

The tool can also generate a list of the most used components, based on the rules contained in profiles across the entire project:

```bash
    $ ./build-scripts/profile_tool.py most-used-components
```

Optionally, you can use this command to limit the statistics for a specific product:

```bash
    $ ./build-scripts/profile_tool.py most-used-components --products rhel9
```

To also list the rules used by each component, for example for the RHEL 9 product, use the `--rules` flag,
as shown in this command:

```bash
    $ ./build-scripts/profile_tool.py most-used-components --products rhel9 --rules
```

You can also use the `--all` flag to get a list of all components and rules in the output, including unused components and unused rules,
as shown in this command:

```bash
    $ ./build-scripts/profile_tool.py most-used-components --products rhel9 --all
```

The result will be a list of components with the number of times each is used in the profiles.
The list can be generated as plain text, JSON, or CSV via the `--format FORMAT` parameter.

## Generating Controls from DISA's XCCDF Files

The `utils/build_stig_control.py` script generates a control file for a product from DISA's XCCDF files.
It supports the following arguments:

```text
options:
  -h, --help            show this help message and exit
  -r ROOT, --root ROOT  Path to SSG root directory (defaults to the root of the repository)
  -o OUTPUT, --output OUTPUT
                        File to write yaml output to (defaults to build/stig_control.yml)
  -p PRODUCT, --product PRODUCT
                        What product to get STIGs for
  -m MANUAL, --manual MANUAL
                        Path to XML XCCDF manual file to use as the source of the STIGs
  -j JSON, --json JSON  Path to the rules_dir.json (defaults to build/rule_dirs.json)
  -c BUILD_CONFIG_YAML, --build-config-yaml BUILD_CONFIG_YAML
                        YAML file with information about the build configuration.
  -ref REFERENCE, --reference REFERENCE
                        Reference system to check for, defaults to stigid
  -s, --split           Splits the each ID into its own file.
```

Example:

```bash
    $ ./utils/build_stig_control.py -p rhel8 -m shared/references/disa-stig-rhel8-v1r5-xccdf-manual.xml
```


## Generating Controls From a Reference
When converting a profile to use a control file, this script can be helpful in creating the skeleton control file.
The output of this script will need to be adjusted to add other keys, such as title or description, to the controls.
This script requires that `./utils/rule_dir_json.py` be run before it is used.
See `./utils/build_control_from_reference.py --help` for the full set of options the script provides.


Example:
```bash
    $ ./utils/build_control_from_reference.py --product rhel10 --reference ospp --output controls/ospp.yml
```

## Generating login banner regular expressions

Rules like `banner_etc_issue` and `dconf_gnome_login_banner_text` will
check for configuration of login banners and remediate them. Both rules
source the banner text from the same variable `login_banner_text`, and
the banner texts need to be in the form of a regular expression. There
are a few utilities you can use to transform your text into the
appropriate regular expression:

When adding a new banner directly to the `login_banner_text`, use the
custom Jinja filter `banner_regexify`.
If customizing content via SCAP Workbench, or directly writing your
tailoring XML, use `utils/regexify_banner.py` to generate the
appropriate regular expression.
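
As a sketch, assuming the script accepts the banner text as a positional argument (check `./utils/regexify_banner.py --help` for the exact interface):

```bash
    # assumed interface: banner text passed as a positional argument
    $ ./utils/regexify_banner.py "Authorized users only. All activity may be monitored and reported."
```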

## Modifying rule directory content files

All utilities discussed below require information about the existing rules
for fast operation. We've provided the `utils/rule_dir_json.py` script to
build this information in a format understood by these scripts.

To execute it:

```bash
    $ ./utils/rule_dir_json.py
```

Optionally, provide a path to a CaC root and destination YAML file:

```bash
    $ ./utils/rule_dir_json.py --root /path/to/ComplianceAsCode/content \
                               --output /tmp/rule_dirs.json
```

### `utils/fix_rules.py` – automatically fix-up rules

`utils/fix_rules.py` includes various sub-commands for automatically fixing
common problems in rules.

These sub-commands are:

- `empty_identifiers`: removes any `identifiers` which are empty.
- `invalid_identifiers`: removes any invalid CCE `identifiers` (due to
  incorrect format).
- `int_identifiers`: turns any identifiers which are an integer into a
  string.
- `empty_references`: removes any `references` which are empty.
- `int_references`: turns any references which are an integer into a string.
- `duplicate_subkeys`: finds (but doesn't fix!) any rules with duplicated
  `identifiers` or `references`.
- `sort_subkeys`: sorts all subkeys under `identifiers` and `references`.
- `add-cce`: automatically assign CCE identifiers to rules.

To execute:

```bash
    $ ./utils/fix_rules.py [--assume-yes] [--dry-run] <command>
```

For example:

```bash
    $ ./utils/fix_rules.py -y sort_subkeys
```

Note that it is generally good practice to commit all changes prior to running
one of these commands and then commit the results separately.
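
A typical workflow might look like this (the commit messages are illustrative):

```bash
    $ git commit -am "WIP before fix_rules"      # commit pending work first
    $ ./utils/fix_rules.py -y sort_subkeys
    $ git diff                                   # review what the tool changed
    $ git commit -am "Sort identifier and reference subkeys"
```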

#### How to automatically assign CCEs with the `add-cce` sub-command

First, make sure that `rule_dirs.json` exists; run the following command to create it:

```bash
    $ ./utils/rule_dir_json.py
```

Then, based on the pool from which you want to assign the CCEs, you can run something like:

```bash
    $ python utils/fix_rules.py --product products/rhel9/product.yml add-cce --cce-pool redhat audit_rules_privileged_commands_newuidmap
```

Note: CCEs can be assigned to multiple rules at once by adding space-separated rule IDs.

Example for `sle15` product:

```bash
    $ python utils/fix_rules.py --product products/sle15/product.yml add-cce --cce-pool sle15 audit_rules_privileged_commands_newuidmap
```


### `utils/refchecker.py` &ndash; automatically check `rule.yml` for references

This utility checks all `rule.yml` files referenced from a given profile for the
specified reference.  Unlike `build-scripts/profile_tool.py`, which operates
on the built XCCDF information, `utils/refchecker.py` operates on the contents
of the `rule.yml` files.

To execute:

```bash
    $ ./utils/refchecker.py <product> <profile> <reference>
```

For example:

```bash
    $ ./utils/refchecker.py ubuntu2004 cis_level1_server cis
```

This utility has some knowledge of which references are product-specific
(checking for `cis@ubuntu2004` in the above example) and which are
product-independent.

Note that this utility does not modify the rule directories at all.


### `utils/mod_checks.py` and `utils/mod_fixes.py` &ndash; programmatically modify check and fix applicability

These two utilities have nearly identical usage; `mod_fixes.py` additionally takes a
remediation type (such as `bash` or `ansible`). Both modify the platform/product
applicability of various files (either OVAL or hardening content). They support
the following sub-commands:

- `add`: add the given platform(s) to the specified rule's OVAL check.
  **Note**: Only applies to shared content.
- `list`: list the given OVAL(s) and the products that apply to them; empty
  if product-independent.
- `remove`: remove the given platform(s) from the specified rule's OVAL check.
  **Note**: Only applies to shared content.
- `replace`: perform a pattern-match replacement on the specified rule's
  platform applicability. **Note**: Only applies to shared content.
- `diff`: perform a textual diff between content for the specified products.
- `delete`: remove an OVAL for the specified product.
- `make_shared`: move a product-specific OVAL into a shared OVAL.

To execute:

```bash
    $ ./utils/mod_checks.py <rule_id> <command> [...other arguments...]
    $ ./utils/mod_fixes.py <rule_id> <type> <command> [...other arguments...]
```

For an example of `add`:

```bash
    $ ./utils/mod_checks.py clean_components_post_updating add multi_platform_sle
    $ ./utils/mod_fixes.py clean_components_post_updating bash add multi_platform_sle
```

For an example of `list`:

```bash
    $ ./utils/mod_checks.py clean_components_post_updating list
    $ ./utils/mod_fixes.py clean_components_post_updating ansible list
```

For an example of `remove`:

```bash
    $ ./utils/mod_checks.py file_permissions_local_var_log_messages remove multi_platform_sle
    $ ./utils/mod_fixes.py file_permissions_local_var_log_messages bash remove multi_platform_sle
```

For an example of `replace`:

```bash
    $ ./utils/mod_checks.py file_permissions_local_var_log_messages replace multi_platform_sle~multi_platform_sle,multi_platform_ubuntu
    $ ./utils/mod_fixes.py file_permissions_local_var_log_messages bash replace multi_platform_sle~multi_platform_sle,multi_platform_ubuntu
```

For an example of `diff`:

```bash
    $ ./utils/mod_checks.py clean_components_post_updating diff sle12 sle15
    $ ./utils/mod_fixes.py clean_components_post_updating bash diff sle12 sle15
```

For an example of `delete`:

```bash
    $ ./utils/mod_checks.py clean_components_post_updating delete sle12
    $ ./utils/mod_fixes.py clean_components_post_updating bash delete sle12
```

For an example of `make_shared`:

```bash
    $ ./utils/mod_checks.py clean_components_post_updating make_shared sle12
    $ ./utils/mod_fixes.py clean_components_post_updating bash make_shared sle12
```

### `utils/create_scap_delta_tailoring.py` &ndash; Create tailoring files for rules not covered by other content

The goal of this tool is to create a tailoring file that enables rules that are not covered by other SCAP content and disables rules that are covered by the given content.
It supports the following arguments:

- `-r`, `--root` - Path to SSG root directory
- `-p`, `--product` - What product to produce the tailoring file for (required)
- `-m`, `--manual` - Path to the XCCDF XML file of the SCAP content (required)
- `-j`, `--json` - Path to the `rules_dir.json` file.
    - Defaults to `build/stig_control.json`
- `-c`, `--build-config-yaml` - YAML file with information about the build configuration.
    - Defaults to `build/build_config.yml`
- `-b`, `--profile` - What profile to use.
    - Defaults to `stig`
- `-ref`, `--reference` - What reference system to check for.
    - Defaults to `stigid`
- `-o`, `--output` - Output path.
    - Defaults to `build/PRODUCT_PROFILE_tailoring.xml`, where `PRODUCT` and `PROFILE` are the respective parameters given to the script.
- `--profile-id` - The ID of the created profile.
    - Defaults to `PROFILE_delta`
- `--tailoring-id` - The ID of the created tailoring file.
    - Defaults to `xccdf_content-disa-delta_tailoring_default`

To execute:

```bash
    $ ./utils/create_scap_delta_tailoring.py -p rhel8 -b stig -m shared/references/disa-stig-rhel8-v1r4-xccdf-scap.xml
```

### `utils/compare_ds.py` &ndash; Compare two data streams (can also compare XCCDFs)

This script compares two data streams or two benchmarks and generates a diff output.
It can show what changed in rules, for example in description, references and remediation scripts.
Changes in checks (OVAL and OCIL) are shown too, but the OVAL diff is limited to the `criteria`
and `criterion` order and their IDs.

When comparing content from DISA, either the automated content or the manual benchmark, use the
`--disa-content` option; it maps the rules in the old content to the rules in the new content.
The rule mapping is necessary because the rule IDs in DISA's content change whenever
the rules are updated.

By default, the script prints the diff to the standard output, which can generate a single huge
diff for the whole data stream or benchmark.
The option `--rule-diffs` can be used to generate a diff file per rule. In this mode the diff files
are created in the `./compare_ds-diffs` directory. To change the output directory, use the `--output-dir` option.

Compare two versions of DISA's manual benchmark, generating per-rule diffs:

```bash
    $ utils/compare_ds.py --disa-content --rule-diffs ./disa-stig-rhel8-v1r6-xccdf-manual.xml shared/references/disa-stig-rhel8-v1r7-xccdf-manual.xml
```

Compare two data streams:

```bash
    $ utils/compare_ds.py /tmp/ssg-rhel8-ds.xml build/ssg-rhel8-ds.xml > content.diff
```

#### HTML Diffs

The diffs generated by `utils/compare_ds.py` can be transformed into HTML diffs with the `diff2html` utility.

Install `diff2html`:

```bash
    # Fedora
    $ sudo dnf install npm
    $ sudo npm install -g diff2html-cli
```

Generate the HTML diffs:

```bash
    $ mkdir -p html
    $ for f in compare_ds-diffs/*; do b=$(basename "$f"); diff2html -i file -t "$b" -F "html/$b.html" "$f"; done
```

### `utils/compare_results.py` &ndash; Compare two ARF result files

The goal of this script is to compare the results of two ARF files.
It will show which rules are missing, different, and the same between the two files.
The script can take results from content created by this repo and by [DISA](https://public.cyber.mil/stigs/scap/).
If the result files come from the same source, the script will use XCCDF IDs as the basis for the comparison.
Otherwise, the script will use STIG IDs to compare.

If one STIG ID has more than one result (this is the case for a few STIG IDs in this repo), the results will be merged.
Given a set of statuses, the script will select the status that appears highest on the list below.

1. Error
2. Fail
3. Not applicable
4. Not selected
5. Not checked
6. Informational
7. Pass

Examples:

- `[pass, pass]` will result in `pass`
- `[pass, fail]` will result in `fail`
- `[pass, error, fail]` will result in `error`

To execute:

```bash
    $ ./utils/compare_results.py ssg_results.xml disa_results.xml
```

### `utils/import_srg_spreadsheet.py` &ndash; Import changes made to an SRG Spreadsheet into the project

This script will import changes from an SRG export spreadsheet.
It is designed to be run first, with each changed file then reviewed carefully before being committed.

It supports the following arguments:

- `-b`, `--current` &mdash; Path to the current XLSX export (required)
- `-c`, `--changed` &mdash; Path to the XLSX that was changed, defaults to RHEL 9
- `-e`, `--end-row` &mdash; What row to stop scanning, defaults to 600.
- `-j`, `--json` &mdash; Path to the `rules_dir.json` file.
- `-n`, `--changed-name` &mdash; The name of the current in the changed file (required)
- `-p`, `--product` &mdash; What product to produce the tailoring file for (required)
- `-r`, `--root` &mdash; Path to SSG root directory

To execute:

```bash
    $ ./utils/import_srg_spreadsheet.py --changed 20220811_submission.xlsx --current build/cac_stig_output.xlsx -p rhel9
```

### `utils/import_disa_stig.py` &ndash; Import Content from DISA's XML Content

This script imports SRG Requirements, Check Text, and Fix Text from a DISA STIG XML file.
The script updates only those STIG items that have exactly one rule assigned to them.

To execute:
```bash
$ ./utils/import_disa_stig.py --product rhel9 --control stig_rhel9 shared/references/disa-stig-rhel9-v1r2-xccdf-manual.xml
```

## Profiling the Build System

The goal of `utils/build_profiler.sh` and `utils/build_profiler_report.py` is to help developers measure and compare build times of a single product or a group of products and determine what impact their changes had on the speed of the build system.
Neither of these tools should be invoked directly; instead, invoke them through the `build_product` script using the `-p`/`--profiling` switch.

The intended usage is:

```bash
    $ ./build_product <products> -p|--profiling
```
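
For example, to profile a RHEL 9 build:

```bash
    $ ./build_product rhel9 --profiling
```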

### `utils/build_profiler.sh` &ndash; Handles the directory structure for profiling files and invokes the report script

The goal of this tool is to create the directory structure necessary for the profiling system and create a new numbered logfile, as well as to invoke `utils/build_profiler_report.py` and subsequently generate an interactive HTML file using webtreemap.

It is invoked by the `build_product` script. When invoked for the first time, it creates the `.build_profiling` directory and then a directory inside it named after `product_string`, which is passed from the `build_product` script.
This is done so that each product combination being built has its own directory for log files, because different combinations may affect each other and have different build times.

The script then moves the ninja log from the build folder to the `product_string` folder and numbers it.
The baseline log is number 0; if it is missing, the script will call the `utils/build_profiler_report.py` script with the `--baseline` switch.

It then invokes the `utils/build_profiler_report.py` script with a logfile name, as well as the optional baseline switch.
After that, it checks whether the `.webtreemap` file was generated and then uses the `webtreemap` command to generate an interactive HTML report.

It supports exactly one argument:

- `"product_string"` - Contains all the product names that were built joined by underscores

To execute:

```bash
    $ ./utils/build_profiler.sh <product_string>
```

### `utils/build_profiler_report.py` &ndash; Parse a ninja file and display report to user

The goal of this tool is to generate a report about the differences in build times, both as a text version in the terminal and as a webtreemap version that is later converted into an interactive HTML report.

The script parses the data from `"logfile"` as the current logfile and the data from `0.ninja_log` as the baseline logfile (if the `--baseline` switch is used, the baseline log is not loaded).
It then generates a `.webtreemap` file and prints the report:

- `Target` - The target/output file that was built
- `%` - The percentage of the total build time that the target took
- `D` - The difference of the `%` value between the baseline and current logfile (a dimensionless value, not a percentage)
    - This is the most important metric, as it reflects the ratio of this target to the other targets and therefore shouldn't be affected too much by the speed of the hardware
- `T` - The time that the target took to build, in `h:m:s` format
- `TD` - The time difference between the baseline and current logfile, in `h:m:s` format
- `%TD` - The percentage difference of build times between the current and baseline logfile

It supports up to two arguments:

- `"logfile"`  - [mandatory] Name of the numbered ninja logfile, e.g. `0.ninja_log`
- `--baseline` - [optional] If the switch is used, the values are not compared with a baseline log

To execute:

```bash
    $ ./utils/build_profiler_report.py <logfile> [--baseline]
```

## Other Scripts

### `utils/srg_diff.py` &ndash; Compare Two SRG Spreadsheets

This script will output an HTML page that compares two SRG exports.
It should help with reviewing changes created by the `utils/import_srg_spreadsheet.py` script.
The script assumes that the STIG ID columns are compatible.
It needs the project built for the given product and `utils/rule_dir_json.py` to have been run.

The report has the following sections:

- Missing in base/target: These sections list rules that are not in the other spreadsheet.
- Rows Missing STIG ID in base/target: These sections list the requirement and SRG IDs of rows that do not have a STIG ID defined in the STIG ID column and have a status of "Applicable - Configurable".
- Delta: If a rule is not the same in both spreadsheets, an HTML diff will appear; base content is on the left.

Example:

```bash
    $ ./utils/srg_diff.py --target submission.xlsx --base build/cac_stig_output.xlsx --output build/diff.html -p rhel9
    Wrote output to build/diff.html.
```

### `utils/shorthand_to_oval.py` &ndash; Convert shorthand OVAL to a full OVAL

This script converts (resolved) shorthand OVAL files to a valid OVAL file.

It can be useful for example when creating minimal bug reproducers.
If you know that a problem is located in the OVAL for a specific rule, you can use it to convert a resolved shorthand OVAL file from the `build/product/checks/oval` directory to a standalone valid OVAL file that you can then pass to `oscap`.

Example:

```bash
$ ./build_product rhel9
$ utils/shorthand_to_oval.py build/rhel9/checks/oval/accounts_tmout.xml oval.xml
$ oscap oval eval oval.xml
```

### `utils/controlrefcheck.py` &ndash; Ensure Control Files and Rules Are Consistent

This script helps ensure that control files and rule files are in sync.
The script takes the product, control, and reference you are looking for.
It loops over every rule in each control and checks whether the control's ID is present in the rule's values for the reference passed to the script.
If a control's ID does not match, or a rule does not exist in the project, the script will exit with a non-zero status.
If the reference you are looking for is `cis`, the script will not attempt to match anything that does not look like a CIS section number.

To execute:
```bash
$ ./utils/controlrefcheck.py rhel9 cis_rhel9 cis
```
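
Because the script exits with a non-zero status on a mismatch, it can serve as a simple gate in scripts or CI:

```bash
$ ./utils/controlrefcheck.py rhel9 cis_rhel9 cis || echo "control file and rule references are out of sync"
```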

### `utils/check_eof.py` - Ensure Files Follow EOF Style

This script checks files of specific extensions (see `EXTENSIONS` in the script for the full list) to ensure that the files end with a newline, as required by the [style guide](https://complianceascode.readthedocs.io/en/latest/manual/developer/04_style_guide.html).
The script can also fix files that don't have a newline at the end if the `--fix` flag is passed.
By default, the script just outputs a list of files that are non-compliant.

To execute:
```bash
$ ./utils/check_eof.py ssg linux_os utils tests products shared docs apple_os applications build-scripts cmake Dockerfiles
```
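
To fix non-compliant files in place, add the `--fix` flag:

```bash
$ ./utils/check_eof.py --fix ssg linux_os utils
```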

### `utils/generate_profile.py` &ndash; Generating CIS Control Files

This script accepts a CIS benchmark spreadsheet (XLSX) and outputs a profile,
section, or rule. This is primarily useful for contributors maintaining
content. The script doesn't make assumptions about which rules implement the
controls; that mapping should still be done by someone knowledgeable about the
platform and benchmark. The goal of the script is to reduce the amount of text
contributors have to copy and paste from benchmarks, making it easier to
automate parts of the benchmark maintenance process.

You can download CIS XLSX spreadsheets from CIS directly if you have access to
[CIS workbench](https://workbench.cisecurity.org/).

You can use the script to list all controls in a benchmark:

```bash
$ python utils/generate_profile.py -i benchmark.xlsx list
1.1.1
1.1.2
1.1.3
1.1.4
1.1.5
1.1.6
1.1.7
1.1.8
1.1.9
1.1.10
...
```

To generate a rule for a specific control:

```
$ python utils/generate_profile.py -i benchmark.xlsx generate -c 1.1.2
documentation_complete: false
title: |-
  Ensure that the API server pod specification file ownership is set to root:root
description: 'Ensure that the API server pod specification file ownership is set to
  `root:root`.

  No remediation required; file permissions are managed by the operator.'
rationale: |-
  The API server pod specification file controls various parameters that set the behavior of the API server. You should set its file ownership to maintain the integrity of the file. The file should be owned by `root:root`.
severity: PLACEHOLDER
references: PLACEHOLDER
ocil: "OpenShift 4 deploys two API servers: the OpenShift API server and the Kube\
  \ API server. \n\nThe OpenShift API server is managed as a deployment. The pod specification\
  \ yaml for openshift-apiserver is stored in etcd. \n\nThe Kube API Server is managed\
  \ as a static pod. The pod specification file for the kube-apiserver is created\
  \ on the control plane nodes at /etc/kubernetes/manifests/kube-apiserver-pod.yaml.\
  \ The kube-apiserver is mounted via hostpath to the kube-apiserver pods via /etc/kubernetes/static-pod-resources/kube-apiserver-pod.yaml\
  \ with ownership `root:root`.\n\nTo verify pod specification file ownership for\
  \ the kube-apiserver, run the following command.\n\n```\n#echo \u201Ccheck kube-apiserver\
  \ pod specification file ownership\u201D\n\nfor i in $( oc get pods -n openshift-kube-apiserver\
  \ -l app=openshift-kube-apiserver -o name )\ndo\n oc exec -n openshift-kube-apiserver\
  \ $i -- \\\n stat -c %U:%G /etc/kubernetes/static-pod-resources/kube-apiserver-pod.yaml\n\
  done\n```\nVerify that the ownership is set to `root:root`."
ocil_clause: PLACEHOLDER
warnings: PLACEHOLDER
template: PLACEHOLDER
```

To generate an entire section:

```
$ python utils/generate_profile.py -i benchmark.xlsx generate -s 1
```

The `PLACEHOLDER` values must be filled in later, ideally when the rules are
provided for each control.


### `utils/compare_versions.py` &ndash; Compare ComplianceAsCode versions

Show differences between two ComplianceAsCode versions.
Lists added or removed rules, profiles, changes in profile composition and changes in remediations and platforms.
For comparison, you can use git tags or ComplianceAsCode JSON manifest files directly.

To compare two ComplianceAsCode JSON manifests, provide the manifest files.

```
$ python3 utils/compare_versions.py compare_manifests ~/manifests/old.json ~/manifests/new.json
```

To compare two upstream versions, you need to specify the version git tags and a product ID.

```
$ python3 utils/compare_versions.py compare_tags v0.1.67 v0.1.68 rhel9
```

It will internally clone the upstream project, check out these tags, generate ComplianceAsCode JSON manifests, compare them, and print the output.


### `utils/oscal/build_cd_from_policy.py` &ndash; Build a Component Definition from a Policy

This script builds an OSCAL Component Definition (cd) (version `1.0.4`) for an existing OSCAL profile from a policy. The script uses the
[compliance-trestle](https://github.com/oscal-compass/compliance-trestle) library to build the component definition. The component definition can be used with the `compliance-trestle` CLI after generation.

Some assumptions made by this script:

- The script maps control file statuses to valid OSCAL [statuses](https://pages.nist.gov/OSCAL-Reference/models/v1.1.1/system-security-plan/json-reference/#/system-security-plan/control-implementation/implemented-requirements/by-components/implementation-status) as follows:

  * `pending`: `alternative`
  * `not applicable`: `not-applicable`
  * `inherently met`: `implemented`
  * `documentation`: `implemented`
  * `planned`: `planned`
  * `partial`: `partial`
  * `supported`: `implemented`
  * `automated`: `implemented`
  * `manual`: `alternative`
  * `does not meet`: `alternative`

- The script uses the "Section *letter*:" convention in the control notes to create statements under the implemented requirements.
- The script maps parameters to rules using the `xccdf_variable` field under `template.vars`.
- To determine which responses will be mapped to the controls in the OSCAL profile, the control ID and label property from the resolved catalog are searched.

It supports the following arguments:
  - `-o`, `--output` &mdash; Path to write the cd to
  - `-r`, `--root` &mdash; Root of the SSG project. Defaults to /content.
  - `-v`, `--vendor-dir` &mdash; Path to the vendor directory with third party OSCAL artifacts
  - `-p`, `--profile` &mdash; Main profile href, or name of the profile model in the trestle workspace
  - `-pr`, `--product` &mdash; Product to build cd with
  - `-c`, `--control` &mdash; Control to use as the source for control responses. To optionally filter by level, use the format `<control_id>:<level>`.
  - `-j`, `--json` &mdash; Path to the rules_dir.json. Defaults to /content/build/rule_dirs.json.
  - `-b`, `--build-config-yaml` &mdash; YAML file with information about the build configuration
  - `-t`, `--component-definition-type` &mdash; Type of component definition to create. Defaults to service. Options are service or validation.

An example of how to execute the script:

```bash
$ ./build_product ocp4
$ ./utils/rule_dir_json.py
$ ./utils/oscal/build_cd_from_policy.py -o build/ocp4.json -p fedramp_rev4_high -pr ocp4 -c nist_ocp4:high
```

### `utils/ansible_playbook_to_role.py` &ndash; Generates Ansible Roles and pushes them to GitHub

This script converts the Ansible Playbooks created by the build system into Ansible roles and can upload them to GitHub.


An example of how to execute the script to generate roles locally:

```bash
$ ./build_product rhel9
$ ./utils/ansible_playbook_to_role.py --dry-run output
```

### `utils/find_unused_rules.py` &ndash; List Rules That Are Not Used In Any Data Stream

This script will output rules that are not in any data stream.
To prevent false positives, the script will not run if the number of built data streams is less than the total number of products in the project.
The script assumes that `./build_product --derivatives` was executed before the script is used.
It also requires that `./utils/rule_dir_json.py` was executed beforehand.

This script works by comparing the rules in the data streams to the rules in the `rule_dirs.json` file.
It adds the rule IDs from the data streams to a `set`, converts the keys of `rule_dirs.json` to another set, and subtracts the data stream set from the `rule_dirs.json` set.
The difference is then output to the user.
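
Conceptually, the set subtraction is equivalent to this shell sketch (the file names are illustrative):

```bash
# Print IDs present in rule_dirs.json but absent from every data stream
comm -13 <(sort datastream_rule_ids.txt) <(sort rule_dir_ids.txt)
```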

Example usage:

```bash
$ ./build_product --derivatives
$ ./utils/rule_dir_json.py
$ ./utils/find_unused_rules.py
```