// Copyright 2023 The Chromium Authors
// Use of this source code is governed by a BSD-style license that can be
// found in the LICENSE file.

#include "base/task/sequence_manager/work_tracker.h"

#include "base/check.h"
#include "base/task/common/scoped_defer_task_posting.h"
#include "base/threading/thread_restrictions.h"

namespace base::sequence_manager::internal {

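// Moving a `SyncWorkAuthorization` transfers responsibility for releasing it;
// the moved-from object is left without a tracker and its destructor is a
// no-op.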
SyncWorkAuthorization::SyncWorkAuthorization(SyncWorkAuthorization&& other)
    : tracker_(other.tracker_) {
  other.tracker_ = nullptr;
}

SyncWorkAuthorization& SyncWorkAuthorization::operator=(
    SyncWorkAuthorization&& other) {
  tracker_ = other.tracker_;
  other.tracker_ = nullptr;
  return *this;
}

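// Releasing a valid authorization clears `kActiveSyncWork` under
// `active_sync_work_lock_`, then signals `active_sync_work_cv_` to wake a
// thread blocked in `WorkTracker::WaitNoSyncWork()`.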
SyncWorkAuthorization::~SyncWorkAuthorization() {
  if (!tracker_) {
    return;
  }

  {
    base::internal::CheckedAutoLock auto_lock(
        tracker_->active_sync_work_lock_);
    uint32_t prev = tracker_->state_.fetch_and(
        ~WorkTracker::kActiveSyncWork, WorkTracker::kMemoryReleaseAllowWork);
    DCHECK(prev & WorkTracker::kActiveSyncWork);
  }

  tracker_->active_sync_work_cv_.Signal();
}

SyncWorkAuthorization::SyncWorkAuthorization(WorkTracker* state)
    : tracker_(state) {}

WorkTracker::WorkTracker() {
  DETACH_FROM_THREAD(thread_checker_);
}

WorkTracker::~WorkTracker() = default;

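// Toggles support for sync work. When support is removed, waits for any
// in-flight sync work so that work running untracked after this returns is
// correctly sequenced after it.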
void WorkTracker::SetRunTaskSynchronouslyAllowed(
    bool can_run_tasks_synchronously) {
  DCHECK_CALLED_ON_VALID_THREAD(thread_checker_);
  if (can_run_tasks_synchronously) {
    state_.fetch_or(kSyncWorkSupported, kMemoryReleaseAllowWork);
  } else {
    // After this returns, non-sync work may run without being tracked by
    // `this`. Ensures that such work is correctly sequenced with sync work
    // by:
    // - Waiting until sync work is complete.
    // - Acquiring memory written by sync work (`kMemoryAcquireBeforeWork`
    //   here is paired with `kMemoryReleaseAllowWork` in
    //   `~SyncWorkAuthorization`).
    uint32_t prev =
        state_.fetch_and(~kSyncWorkSupported, kMemoryAcquireBeforeWork);
    if (prev & kActiveSyncWork) {
      WaitNoSyncWork();
    }
  }
}

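// Blocks the current thread until no `SyncWorkAuthorization` is outstanding.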
void WorkTracker::WaitNoSyncWork() {
  // Do not process new PostTasks, defer them. Tracing can call PostTask, but
  // it will try to grab locks that are not allowed here.
  ScopedDeferTaskPosting disallow_task_posting;
  ScopedAllowBaseSyncPrimitivesOutsideBlockingScope allow;

  // `std::memory_order_relaxed` instead of `kMemoryAcquireBeforeWork` because
  // the lock implicitly acquires memory released by `~SyncWorkAuthorization`.
  base::internal::CheckedAutoLock auto_lock(active_sync_work_lock_);
  uint32_t prev = state_.load(std::memory_order_relaxed);
  while (prev & kActiveSyncWork) {
    active_sync_work_cv_.Wait();
    prev = state_.load(std::memory_order_relaxed);
  }
}

void WorkTracker::WillRequestReloadImmediateWorkQueue() {
  // May be called from any thread.

  // Sync work is disallowed until `WillReloadImmediateWorkQueues()` and
  // `OnIdle()` are called.
  state_.fetch_or(kImmediateWorkQueueNeedsReload,
                  kMemoryRelaxedNotAllowOrBeforeWork);
}

void WorkTracker::WillReloadImmediateWorkQueues() {
  DCHECK_CALLED_ON_VALID_THREAD(thread_checker_);

  // Sync work is disallowed until `OnIdle()` is called.
  state_.fetch_and(
      ~(kImmediateWorkQueueNeedsReload | kWorkQueuesEmptyAndNoWorkRunning),
      kMemoryRelaxedNotAllowOrBeforeWork);
}

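// Sequenced work is about to run: clear `kWorkQueuesEmptyAndNoWorkRunning` so
// that no new sync work can be authorized, then wait for sync work that was
// already authorized.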
void WorkTracker::OnBeginWork() {
  DCHECK_CALLED_ON_VALID_THREAD(thread_checker_);
  uint32_t prev = state_.fetch_and(~kWorkQueuesEmptyAndNoWorkRunning,
                                   kMemoryAcquireBeforeWork);
  if (prev & kActiveSyncWork) {
    DCHECK(prev & kSyncWorkSupported);
    WaitNoSyncWork();
  }
}

void WorkTracker::OnIdle() {
  DCHECK_CALLED_ON_VALID_THREAD(thread_checker_);

  // This may allow sync work. "release" so that sync work that runs after
  // this sees all writes issued by previous sequenced work.
  state_.fetch_or(kWorkQueuesEmptyAndNoWorkRunning, std::memory_order_release);
}

SyncWorkAuthorization WorkTracker::TryAcquireSyncWorkAuthorization() {
  // May be called from any thread.

  uint32_t state = state_.load(std::memory_order_relaxed);
  // "acquire" so that sync work sees writes issued by sequenced work that
  // precedes it.
  if (state == (kSyncWorkSupported | kWorkQueuesEmptyAndNoWorkRunning) &&
      state_.compare_exchange_strong(state, state | kActiveSyncWork,
                                     std::memory_order_acquire,
                                     std::memory_order_relaxed)) {
    return SyncWorkAuthorization(this);
  }

  return SyncWorkAuthorization(nullptr);
}

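// CHECKs that the tracker believes there is pending or running work.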
void WorkTracker::AssertHasWork() {
  CHECK(!(state_.load(std::memory_order_relaxed) &
          kWorkQueuesEmptyAndNoWorkRunning));
}

}  // namespace base::sequence_manager::internal