# frozen_string_literal: true

require 'json' # for Array#to_json in #save_perf_data

require_relative 'bson'
require_relative 'multi_doc'
require_relative 'parallel'
require_relative 'single_doc'
module Mongo
  module DriverBench
    # Every benchmark class, used to run the full suite.
    ALL = [ *BSON::ALL, *SingleDoc::ALL, *MultiDoc::ALL, *Parallel::ALL ].freeze

    # The named composite benchmarks, each scored as the average of its
    # members' individual scores.
    BENCHES = {
      'BSONBench' => BSON::BENCH,
      'SingleBench' => SingleDoc::BENCH,
      'MultiBench' => MultiDoc::BENCH,
      'ParallelBench' => Parallel::BENCH,
      'ReadBench' => [
        SingleDoc::FindOneByID,
        MultiDoc::FindMany,
        MultiDoc::GridFS::Download,
        Parallel::LDJSON::Export,
        Parallel::GridFS::Download
      ].freeze,
      'WriteBench' => [
        SingleDoc::InsertOne::SmallDoc,
        SingleDoc::InsertOne::LargeDoc,
        MultiDoc::BulkInsert::SmallDoc,
        MultiDoc::BulkInsert::LargeDoc,
        MultiDoc::GridFS::Upload,
        Parallel::LDJSON::Import,
        Parallel::GridFS::Upload
      ].freeze
    }.freeze
    # A benchmark suite for running all benchmarks and aggregating (and
    # reporting) the results.
    #
    # @api private
    class Suite
      # The percentiles to report for each benchmark's timings.
      PERCENTILES = [ 10, 25, 50, 75, 90, 95, 98, 99 ].freeze

      # Convenience method for instantiating and running the suite.
      def self.run!
        new.run
      end

      # Runs each benchmark in turn, compiles the composite benchmark
      # scores, saves the results to disk, and prints a summary.
      def run
        perf_data = []
        benches = Hash.new { |h, k| h[k] = [] }

        ALL.each do |klass|
          result = run_benchmark(klass)
          perf_data << compile_perf_data(result)
          append_to_benchmarks(klass, result, benches)
        end

        perf_data += compile_benchmarks(benches)

        save_perf_data(perf_data)
        summarize_perf_data(perf_data)
      end
      private

      # Runs a single benchmark class, echoing its name and (on completion)
      # its score, and returns the result hash.
      def run_benchmark(klass)
        print klass.bench_name, ': '
        $stdout.flush

        klass.new.run.tap do |result|
          puts format('%4.4g', result[:score])
        end
      end
      # Converts a single benchmark's result into a hash of metrics:
      # the score, plus one entry per reported timing percentile.
      def compile_perf_data(result)
        percentile_data = PERCENTILES.map do |percentile|
          { 'name' => "time-#{percentile}%",
            'value' => result[:percentiles][percentile] }
        end

        {
          'info' => {
            'test_name' => result[:name],
            'args' => {},
          },
          'metrics' => [
            { 'name' => 'score',
              'value' => result[:score] },
            *percentile_data
          ]
        }
      end
      # Records the given result's score under each composite benchmark
      # that includes the given class.
      def append_to_benchmarks(klass, result, benches)
        BENCHES.each do |benchmark, list|
          benches[benchmark] << result[:score] if list.include?(klass)
        end
      end
      # Averages the accumulated scores for each composite benchmark, and
      # computes the overall DriverBench score as the mean of the ReadBench
      # and WriteBench scores. For example, ReadBench scores of [2.0, 4.0]
      # and a WriteBench score of [6.0] yield ReadBench 3.0, WriteBench 6.0,
      # and DriverBench (3.0 + 6.0) / 2 = 4.5.
      def compile_benchmarks(benches)
        benches.each_key do |key|
          benches[key] = benches[key].sum / benches[key].length
        end

        benches['DriverBench'] = (benches['ReadBench'] + benches['WriteBench']) / 2

        benches.map do |bench, score|
          {
            'info' => {
              'test_name' => bench,
              'args' => {}
            },
            'metrics' => [
              { 'name' => 'score',
                'value' => score }
            ]
          }
        end
      end
      # rubocop:disable Metrics/AbcSize

      # Prints a summary of the compiled performance data: each benchmark's
      # score, followed by any non-score metrics (the timing percentiles).
      def summarize_perf_data(data)
        puts '===== Performance Results ====='
        data.each do |item|
          puts format('%s : %4.4g', item['info']['test_name'], item['metrics'][0]['value'])
          next unless item['metrics'].length > 1

          item['metrics'].each do |metric|
            next if metric['name'] == 'score'

            puts format('  %s : %4.4g', metric['name'], metric['value'])
          end
        end
      end
      # rubocop:enable Metrics/AbcSize
      # Writes the compiled performance data as JSON to the named file.
      def save_perf_data(data, file_name: ENV['PERFORMANCE_RESULTS_FILE'] || 'results.json')
        File.write(file_name, data.to_json)
      end
    end
  end
end
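
# Example usage (a sketch; assumes this file is saved as suite.rb, that the
# required benchmark files define the classes referenced above, and that a
# MongoDB deployment is reachable with the driver's default configuration):
#
#   require_relative 'suite'
#
#   # Runs every benchmark, prints the summary, and writes results.json
#   # (or the file named by the PERFORMANCE_RESULTS_FILE env var).
#   Mongo::DriverBench::Suite.run!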