# Field Trial Testing Configuration
This directory contains the `fieldtrial_testing_config.json` configuration file,
which is used to ensure test coverage of active field trials.
For each study, the first available experiment after platform filtering is used
as the default experiment for Chromium builds. This experiment is also used for
perf bots and various tests in the waterfall (browser tests, including those in
browser_tests, components_browsertests, content_browsertests,
extensions_browsertests, interactive_ui_tests, and sync_integration_tests, and
[web platform tests](/docs/testing/web_platform_tests.md)). It is not used by
unit test targets.
> Note: This configuration applies specifically to Chromium developer and
> [Chrome for Testing branded](https://goo.gle/chrome-for-testing) builds.
> Chrome branded builds do not use these definitions by default. They can,
> however, be enabled with the `--enable-field-trial-config` switch. For Chrome
> branded Android builds, the configuration cannot be applied by this switch
> due to binary size constraints.

> Note: Non-developer builds of Chromium (for example, non-Chrome browsers,
> or Chromium builds provided by Linux distros) should disable the testing
> config by either (1) specifying the GN flag `disable_fieldtrial_testing_config=true`,
> (2) specifying the `--disable-field-trial-config` switch, or (3) specifying a
> custom variations server URL using the `--variations-server-url` switch.

> Note: An experiment in the testing configuration file that enables/disables a
> feature that is explicitly overridden (e.g. using the `--enable-features` or
> `--disable-features` switches) will be skipped.
## Config File Format
```json
{
    "StudyName": [
        {
            "platforms": [Array of Strings of Valid Platforms for These Experiments],
            "experiments": [
                {
                    "//0": "Comment Line 0. Lines 0-9 are supported.",
                    "name": "ExperimentName",
                    "params": {Dictionary of Params},
                    "enable_features": [Array of Strings of Features],
                    "disable_features": [Array of Strings of Features]
                },
                ...
            ]
        },
        ...
    ],
    ...
}
```
The config file is a top-level dictionary mapping each study name to an
array of *study configurations*. The study name in the configuration file
**must** match the FieldTrial name used in the Chromium client code.
> Note: Many newer studies do not use study names in the client code at all, and
> rely on the [Feature List API][FeatureListAPI] instead. Nonetheless, if a
> study has a server-side configuration, the study `name` specified here
> must still match the name specified in the server-side configuration; this is
> used to implement consistency checks on the server.
### Study Configurations
Each *study configuration* is a dictionary containing `platforms` and
`experiments`.
`platforms` is an array of strings indicating the targeted platforms. The
strings may be `android`, `android_weblayer`, `android_webview`, `chromeos`,
`chromeos_lacros`, `ios`, `linux`, `mac`, or `windows`.
`experiments` is an array containing the *experiments*.
The converter uses the `platforms` array to determine which experiment to use
for the study. The first experiment matching the active platform will be used.
> Note: While `experiments` is defined as an array, currently only the first
> entry is used*\**. We would love to be able to test all possible study
> configurations, but don't currently have the buildbot resources to do so.
> Hence, the current best-practice is to identify which experiment group is the
> most likely candidate for ultimate launch, and to test that configuration. If
> there is a server-side configuration for this study, it's typically
> appropriate to copy/paste one of the experiment definitions into this file.
>
> *\**
> <small>
> Technically, there is one exception: If there's a `forcing_flag` group
> specified in the config, that group will be used if the corresponding
> forcing flag is specified on the command line. You, dear reader, should
> probably not use this fancy mechanism unless you're <em>quite</em> sure you
> know what you're doing =)
> </small>
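
To illustrate the first-entry rule, here is a sketch with two experiment
entries (study and group names are hypothetical): only `Enabled`, the first
entry, is applied on the listed platforms; the `Control` entry is ignored by
the converter.

```json
{
    "FirstEntryExample": [
        {
            "platforms": ["linux", "mac", "windows"],
            "experiments": [
                { "name": "Enabled" },
                { "name": "Control" }
            ]
        }
    ]
}
```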
### Experiments (Groups)
Each *experiment* is a dictionary that must contain the `name` key, identifying
the experiment group name.
> Note: Studies should typically use the [Feature List API][FeatureListAPI]. For
> such studies, the experiment `name` specified in the testing config is still
> required (for legacy reasons), but it is ignored. However, the lists of
> `enable_features`, `disable_features`, and `params` **must** match the server
> config. This is enforced via server-side Tricorder checks.
>
> For old-school studies that do check the actual experiment group name in the
> client code, the `name` **must** exactly match the client code and the server
> config.
The remaining keys (`enable_features`, `disable_features`, `min_os_version`,
`disable_benchmarking`, and `params`) are optional.
`enable_features` and `disable_features` indicate which features should be
enabled and disabled, respectively, through the
[Feature List API][FeatureListAPI].
`min_os_version` indicates the minimum OS version (e.g. "10.0.0") at which the
experiment applies. The string is decoded as a `base::Version`. The same
version is applied on all platforms; if you need different versions for
different platforms, you will need to use separate studies.
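
For example, a hypothetical study that only applies on Windows 10 and later
might look like this (study and group names are illustrative):

```json
{
    "MinVersionExample": [
        {
            "platforms": ["windows"],
            "experiments": [
                {
                    "name": "Enabled",
                    "min_os_version": "10.0.0"
                }
            ]
        }
    ]
}
```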
`disable_benchmarking` indicates that this experiment should not be enabled
when the `--enable-benchmarking` flag is passed at startup. This should be
used extremely sparingly.
> Warning: `disable_benchmarking` works as described above on most platforms.
> However, the
> [fieldtrial_util.py](https://source.chromium.org/chromium/chromium/src/+/main:tools/variations/fieldtrial_util.py)
> script always excludes `disable_benchmarking` experiments. This is because
> the script is primarily used for benchmarking, and because it generates
> command-line flags to set state, it cannot know whether
> `--enable-benchmarking` will be passed.
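
As a sketch (study and group names are hypothetical), an experiment opts out
of benchmarking runs with a boolean value:

```json
{
    "BenchmarkSensitiveExample": [
        {
            "platforms": ["linux"],
            "experiments": [
                {
                    "name": "Enabled",
                    "disable_benchmarking": true
                }
            ]
        }
    ]
}
```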
`params` is a dictionary mapping parameter name to parameter value.
> Reminder: The variations framework does not actually fetch any field trial
> definitions from the server for Chromium builds, so any feature enabling or
> disabling must be configured here.
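
Putting these keys together, a hypothetical experiment entry that enables one
feature, disables another, and sets a parameter might look like this (all
study, feature, and parameter names are illustrative):

```json
{
    "FeatureParamsExample": [
        {
            "platforms": ["android"],
            "experiments": [
                {
                    "name": "EnabledWithFastMode",
                    "enable_features": ["SomeFeature"],
                    "disable_features": ["SomeOtherFeature"],
                    "params": {
                        "mode": "fast"
                    }
                }
            ]
        }
    ]
}
```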
[FeatureListAPI]: https://cs.chromium.org/chromium/src/base/feature_list.h
#### Comments
Each experiment may have up to 10 lines of comments. The comment key must be of
the form `//N` where `N` is between 0 and 9.
```json
{
    "AStudyWithExperimentComment": [
        {
            "platforms": ["chromeos", "linux", "mac", "windows"],
            "experiments": [
                {
                    "//0": "This is the first comment line.",
                    "//1": "This is the second comment line.",
                    "name": "DesktopExperiment"
                }
            ]
        }
    ]
}
```
### Specifying Different Experiments for Different Platforms
Simply specify two different study configurations in the study:
```json
{
    "DifferentExperimentsPerPlatform": [
        {
            "platforms": ["chromeos", "linux", "mac", "windows"],
            "experiments": [{ "name": "DesktopExperiment" }]
        },
        {
            "platforms": ["android", "ios"],
            "experiments": [{ "name": "MobileExperiment" }]
        }
    ]
}
```
## Formatting
Run the following command to auto-format the `fieldtrial_testing_config.json`
configuration file:
```shell
python3 testing/variations/PRESUBMIT.py testing/variations/fieldtrial_testing_config.json
```
The presubmit tool will also ensure that your changes follow the correct
ordering and format.