// Copyright 2023 The Chromium Authors
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include "gpu/command_buffer/service/graphite_cache_controller.h"

#include <atomic>
#include <chrono>

#include "base/functional/bind.h"
#include "base/functional/callback_helpers.h"
#include "base/task/sequenced_task_runner.h"
#include "base/time/time.h"
#include "gpu/command_buffer/service/graphite_image_provider.h"
#include "gpu/command_buffer/service/graphite_shared_context.h"
#include "skia/buildflags.h"
#include "third_party/skia/include/gpu/graphite/Context.h"
#include "third_party/skia/include/gpu/graphite/Recorder.h"

#if BUILDFLAG(SKIA_USE_DAWN)
#include "gpu/command_buffer/service/dawn_context_provider.h"
#include "third_party/dawn/include/dawn/native/DawnNative.h"
#endif
namespace gpu::raster {
namespace {

// Any resources not used in the last 5 seconds should be purged.
constexpr base::TimeDelta kResourceNotUsedSinceDelay = base::Seconds(5);

// All unused resources should be purged after an idle delay of 1 second.
constexpr base::TimeDelta kCleanUpAllResourcesDelay = base::Seconds(1);

// Global atomic idle id shared between all instances of this class. There are
// multiple GraphiteCacheControllers used from gpu threads, and we want to
// defer cleanup until all of those threads are idle.
std::atomic<uint32_t> g_current_idle_id = 0;

}  // namespace

GraphiteCacheController::GraphiteCacheController(
    skgpu::graphite::Recorder* recorder,
    bool can_handle_context_resources,
    DawnContextProvider* dawn_context_provider)
    : recorder_(recorder),
      dawn_context_provider_(dawn_context_provider),
#if BUILDFLAG(SKIA_USE_DAWN)
      can_handle_context_resources_(can_handle_context_resources &&
                                    dawn_context_provider) {
#else
      can_handle_context_resources_(false) {
#endif
  CHECK(recorder_);
  DETACH_FROM_SEQUENCE(sequence_checker_);
}

GraphiteCacheController::~GraphiteCacheController() {
  DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
}
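
// Purges resources that have not been used within `kResourceNotUsedSinceDelay`
// from the shared context (when this controller is responsible for context
// resources), the image provider, and the recorder. Then bumps the idle id to
// record activity and, if no delayed cleanup task is pending, schedules one.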
void GraphiteCacheController::ScheduleCleanup() {
  DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
  if (can_handle_context_resources_) {
    GetGraphiteSharedContext()->performDeferredCleanup(
        std::chrono::seconds(kResourceNotUsedSinceDelay.InSeconds()));
  }
  auto* image_provider =
      static_cast<GraphiteImageProvider*>(recorder_->clientImageProvider());
  image_provider->PurgeImagesNotUsedSince(kResourceNotUsedSinceDelay);
  recorder_->performDeferredCleanup(
      std::chrono::seconds(kResourceNotUsedSinceDelay.InSeconds()));

  if (UseGlobalIdleId()) {
    g_current_idle_id.fetch_add(1, std::memory_order_relaxed);
  } else {
    ++local_idle_id_;
  }
  if (idle_cleanup_cb_.IsCancelled()) {
    ScheduleCleanUpAllResources(GetIdleId());
  }
}
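
// Frees cached GPU resources that are not currently in use, both on the
// recorder and, when this controller is responsible for context resources,
// on the shared context.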
void GraphiteCacheController::CleanUpScratchResources() {
  DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
  if (can_handle_context_resources_) {
    GetGraphiteSharedContext()->freeGpuResources();
  }
  recorder_->freeGpuResources();
}
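
// Runs the full cleanup immediately, cancelling any pending delayed cleanup
// task first so it does not run a second time.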
void GraphiteCacheController::CleanUpAllResources() {
  DCHECK_CALLED_ON_VALID_SEQUENCE(sequence_checker_);
  idle_cleanup_cb_.Cancel();
  CleanUpAllResourcesImpl();
}
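
// Controllers with a Dawn context provider share the process-wide idle id, so
// full cleanup is deferred until every gpu thread using the device has gone
// idle; controllers without one track idleness per instance.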
bool GraphiteCacheController::UseGlobalIdleId() const {
  return dawn_context_provider_ != nullptr;
}
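
// Returns the idle id this controller observes: the global counter when
// sharing a Dawn context, otherwise the per-instance counter.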
uint32_t GraphiteCacheController::GetIdleId() const {
  return UseGlobalIdleId() ? g_current_idle_id.load(std::memory_order_relaxed)
                           : local_idle_id_;
}
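
// Posts a delayed task that cleans up all resources if the idle id is still
// `idle_id` when the task runs, i.e. if no new work arrived in the meantime.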
void GraphiteCacheController::ScheduleCleanUpAllResources(uint32_t idle_id) {
  idle_cleanup_cb_.Reset(
      base::BindOnce(&GraphiteCacheController::MaybeCleanUpAllResources,
                     weak_ptr_factory_.GetWeakPtr(), idle_id));
  base::SequencedTaskRunner::GetCurrentDefault()->PostDelayedTask(
      FROM_HERE, idle_cleanup_cb_.callback(), kCleanUpAllResourcesDelay);
}
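
// Runs from the delayed task posted by ScheduleCleanUpAllResources(). Cleans
// up all resources only if the idle id is unchanged; otherwise re-arms the
// delayed task with the current idle id.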
void GraphiteCacheController::MaybeCleanUpAllResources(
    uint32_t posted_idle_id) {
  idle_cleanup_cb_.Cancel();
  uint32_t current_idle_id = GetIdleId();
  if (posted_idle_id != current_idle_id) {
    // If `GetIdleId()` has changed since this task was posted then the
    // GPU process has not been idle. Check again after another delay.
    ScheduleCleanUpAllResources(current_idle_id);
    return;
  }
  CleanUpAllResourcesImpl();
}
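
// Clears the image cache and frees scratch resources and, when Dawn is in
// use, asks the Dawn device to release as much memory as possible.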
void GraphiteCacheController::CleanUpAllResourcesImpl() {
  auto* image_provider =
      static_cast<GraphiteImageProvider*>(recorder_->clientImageProvider());
  image_provider->ClearImageCache();
  CleanUpScratchResources();
#if BUILDFLAG(SKIA_USE_DAWN)
  if (can_handle_context_resources_) {
    if (dawn::native::ReduceMemoryUsage(
            dawn_context_provider_->GetDevice().Get())) {
      // There is scheduled work on the GPU that must complete before
      // finishing cleanup. Schedule cleanup to run again after a delay.
      ScheduleCleanUpAllResources(GetIdleId());
    }
    dawn::native::PerformIdleTasks(dawn_context_provider_->GetDevice());
  }
#endif
}
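
// May return null; callers only dereference the result when
// `can_handle_context_resources_` is true, which implies a Dawn context
// provider is present.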
GraphiteSharedContext* GraphiteCacheController::GetGraphiteSharedContext() {
#if BUILDFLAG(SKIA_USE_DAWN)
  if (dawn_context_provider_) {
    return dawn_context_provider_->GetGraphiteSharedContext();
  }
#endif
  return nullptr;
}

}  // namespace gpu::raster