#!/usr/bin/env python
# Copyright 2019 The Chromium Authors
# Use of this source code is governed by a BSD-style license that can be
# found in the LICENSE file.
"""Merge results from code-coverage/pgo swarming runs.
This script merges code-coverage/pgo profiles from multiple shards. It also
merges the test results of the shards.
It is functionally similar to merge_steps.py but it accepts the parameters
passed by swarming api.
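
Example invocation (script name and paths are illustrative):

  python merge_results.py --task-output-dir out/task_output \\
      --profdata-dir out/profdata --llvm-profdata path/to/llvm-profdata \\
      --output-json out/merged_results.json shard0/output.json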
"""

import argparse
import json
import logging
import os
import subprocess
import sys

import merge_lib as profile_merger
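# merge_lib (imported as profile_merger) supplies the merge_java_exec_files()
# and merge_profiles() helpers used in main() below.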


def _MergeAPIArgumentParser(*args, **kwargs):
  """Parameters passed to this merge script, as per:
  https://chromium.googlesource.com/chromium/tools/build/+/main/scripts/slave/recipe_modules/swarming/resources/merge_api.py
  """
  parser = argparse.ArgumentParser(*args, **kwargs)
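  # Standard arguments passed by the swarming merge API; they are hidden from
  # --help via argparse.SUPPRESS.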
  parser.add_argument('--build-properties',
                      help=argparse.SUPPRESS,
                      default='{}')
  parser.add_argument('--summary-json', help=argparse.SUPPRESS)
  parser.add_argument('--task-output-dir', help=argparse.SUPPRESS)
  parser.add_argument('-o',
                      '--output-json',
                      required=True,
                      help=argparse.SUPPRESS)
  parser.add_argument('jsons_to_merge', nargs='*', help=argparse.SUPPRESS)

  # Custom arguments for this merge script.
  parser.add_argument('--additional-merge-script',
                      help='additional merge script to run')
  parser.add_argument(
      '--additional-merge-script-args',
      help='JSON serialized string of args for the additional merge script')
  parser.add_argument('--profdata-dir',
                      required=True,
                      help='where to store the merged data')
  parser.add_argument('--llvm-profdata',
                      required=True,
                      help='path to llvm-profdata executable')
  parser.add_argument('--test-target-name', help='test target name')
  parser.add_argument('--java-coverage-dir',
                      help='directory for Java coverage data')
  parser.add_argument('--jacococli-path', help='path to jacococli.jar.')
  parser.add_argument(
      '--merged-jacoco-filename',
      help='filename used to uniquely name the merged exec file.')
  parser.add_argument('--javascript-coverage-dir',
                      help='directory for JavaScript coverage data')
  parser.add_argument('--chromium-src-dir',
                      help='directory for chromium/src checkout')
  parser.add_argument('--build-dir',
                      help='directory for the build directory in chromium/src')
  parser.add_argument(
      '--per-cl-coverage',
      action='store_true',
      help='set to indicate that this is a per-CL coverage build')
  parser.add_argument('--sparse',
                      action='store_true',
                      dest='sparse',
                      help='run llvm-profdata with the sparse flag.')
  # (crbug.com/1091310) - IR PGO is incompatible with the initial conversion
  # of .profraw -> .profdata that is run to detect validation errors, so this
  # flag bypasses validation and merges all .profraw files directly into the
  # final .profdata.
  parser.add_argument(
      '--skip-validation',
      action='store_true',
      help='Skip validation of raw profile data; pass every raw profile found '
      'directly to llvm-profdata to be merged. Only applicable when the '
      'input extension is .profraw.')
  return parser


def main():
  desc = 'Merge profraw files in <--task-output-dir> into a single profdata.'
  parser = _MergeAPIArgumentParser(description=desc)
  params = parser.parse_args()
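
  # The merge proceeds in up to four independent steps, each guarded by the
  # flags it needs:
  #   1. Merge JaCoCo .exec files (Java coverage).
  #   2. Merge JavaScript coverage via merge_js_results.py.
  #   3. Merge the shards' .profraw files into a single .profdata.
  #   4. Merge the shards' JSON test results, either through an additional
  #      merge script or by copying the single shard output.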
  if params.java_coverage_dir:
    if not params.jacococli_path:
      parser.error('--jacococli-path required when merging Java coverage')
    if not params.merged_jacoco_filename:
      parser.error(
          '--merged-jacoco-filename required when merging Java coverage')

    output_path = os.path.join(params.java_coverage_dir,
                               '%s.exec' % params.merged_jacoco_filename)
    logging.info('Merging JaCoCo .exec files to %s', output_path)
    profile_merger.merge_java_exec_files(params.task_output_dir, output_path,
                                         params.jacococli_path)

  failed = False
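
  # JavaScript coverage merging is delegated to merge_js_results.py (expected
  # alongside this script) in a subprocess; a non-zero exit marks the run as
  # failed but does not stop the native profile merge below.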
  if params.javascript_coverage_dir and params.chromium_src_dir \
      and params.build_dir:
    current_dir = os.path.dirname(__file__)
    merge_js_results_script = os.path.join(current_dir, 'merge_js_results.py')
    args = [
        sys.executable,
        merge_js_results_script,
        '--task-output-dir',
        params.task_output_dir,
        '--javascript-coverage-dir',
        params.javascript_coverage_dir,
        '--chromium-src-dir',
        params.chromium_src_dir,
        '--build-dir',
        params.build_dir,
    ]
    rc = subprocess.call(args)
    if rc != 0:
      failed = True
      logging.warning('%s exited with %s', merge_js_results_script, rc)
  # Name the merged output {test_target}.profdata, falling back to
  # default.profdata when no test target name is given.
  output_profdata_filename = (params.test_target_name or
                              'default') + '.profdata'

  # NOTE: The profile data merge script must make sure that the profraw files
  # are deleted from the task output directory after merging. Otherwise, other
  # test results merge scripts (e.g. the one for layout tests) will treat them
  # as JSON test result files and fail.
  invalid_profiles, counter_overflows = profile_merger.merge_profiles(
      params.task_output_dir,
      os.path.join(params.profdata_dir, output_profdata_filename),
      '.profraw',
      params.llvm_profdata,
      sparse=params.sparse,
      skip_validation=params.skip_validation)
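
  # merge_profiles() reports which shard profiles failed validation and which
  # hit counter overflows; both lists are dumped to JSON files in
  # --profdata-dir below.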
  # At the moment counter overflows overlap with invalid profiles, but this is
  # not guaranteed to remain the case indefinitely. To avoid future conflicts
  # treat these separately.
  if counter_overflows:
    with open(
        os.path.join(params.profdata_dir, 'profiles_with_overflows.json'),
        'w') as f:
      json.dump(counter_overflows, f)

  if invalid_profiles:
    with open(os.path.join(params.profdata_dir, 'invalid_profiles.json'),
              'w') as f:
      json.dump(invalid_profiles, f)
  # If given, always run the additional merge script, even if we only have one
  # output json. Merge scripts sometimes upload artifacts to cloud storage, or
  # do other processing which can be needed even if there's only one output.
  if params.additional_merge_script:
    new_args = [
        '--build-properties',
        params.build_properties,
        '--summary-json',
        params.summary_json,
        '--task-output-dir',
        params.task_output_dir,
        '--output-json',
        params.output_json,
    ]
    if params.additional_merge_script_args:
      new_args += json.loads(params.additional_merge_script_args)
    new_args += params.jsons_to_merge

    args = [sys.executable, params.additional_merge_script] + new_args
    rc = subprocess.call(args)
    if rc != 0:
      failed = True
      logging.warning('Additional merge script %s exited with %s',
                      params.additional_merge_script, rc)
  elif len(params.jsons_to_merge) == 1:
    logging.info('Only one output needs to be merged; directly copying it.')
    with open(params.jsons_to_merge[0]) as f_read:
      with open(params.output_json, 'w') as f_write:
        f_write.write(f_read.read())
  else:
    logging.warning(
        'This script was told to merge test results, but no additional merge '
        'script was given.')

  # TODO(crbug.com/40868908): Return non-zero if invalid_profiles is not None
  return 1 if failed else 0


if __name__ == '__main__':
  logging.basicConfig(format='[%(asctime)s %(levelname)s] %(message)s',
                      level=logging.INFO)
  sys.exit(main())