#include "support/Threading.h"
#include "support/Trace.h"
#include "llvm/ADT/ScopeExit.h"
#include "llvm/Support/FormatVariadic.h"
#include "llvm/Support/Threading.h"
#include "llvm/Support/thread.h"
#include <atomic>
#include <thread>
#ifdef __USE_POSIX
#include <pthread.h>
#elif defined(__APPLE__)
#include <sys/resource.h>
#elif defined(_WIN32)
#include <windows.h>
#endif
namespace clang {
namespace clangd {
void Notification::notify() {
  {
    std::lock_guard<std::mutex> Lock(Mu);
    Notified = true;
    // Broadcast with the lock held. This ensures that it's safe to destroy
    // a Notification after wait() returns, even from another thread.
    CV.notify_all();
  }
}

void Notification::wait() const {
  std::unique_lock<std::mutex> Lock(Mu);
  CV.wait(Lock, [this] { return Notified; });
}
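
// Example usage (illustrative sketch, not part of this file): a Notification
// lets one thread block until another signals. Because notify() broadcasts
// with the lock held, the waiter may destroy the Notification as soon as
// wait() returns, even though notify() ran on another thread:
//
//   Notification Done;
//   std::thread Worker([&] { /* ...do work... */ Done.notify(); });
//   Done.wait(); // safe to destroy Done after this returns
//   Worker.join();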

Semaphore::Semaphore(std::size_t MaxLocks) : FreeSlots(MaxLocks) {}

bool Semaphore::try_lock() {
  std::unique_lock<std::mutex> Lock(Mutex);
  if (FreeSlots > 0) {
    --FreeSlots;
    return true;
  }
  return false;
}

void Semaphore::lock() {
  trace::Span Span("WaitForFreeSemaphoreSlot");
  // trace::Span can also acquire locks in its ctor and dtor; make sure that
  // happens while Semaphore's own lock is not held.
  {
    std::unique_lock<std::mutex> Lock(Mutex);
    SlotsChanged.wait(Lock, [&]() { return FreeSlots > 0; });
    --FreeSlots;
  }
}

void Semaphore::unlock() {
  std::unique_lock<std::mutex> Lock(Mutex);
  ++FreeSlots;
  // Notify outside the lock, so the woken thread doesn't immediately block
  // on Mutex again.
  Lock.unlock();
  SlotsChanged.notify_one();
}
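
// Example usage (illustrative sketch): Semaphore satisfies the BasicLockable
// requirements (lock/unlock), so it composes with std::lock_guard to bound
// concurrency:
//
//   Semaphore ConcurrencyLimit(4); // at most 4 holders at once
//   {
//     std::lock_guard<Semaphore> Guard(ConcurrencyLimit); // may block
//     // ...do bounded work...
//   } // slot released; one waiter is woken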

AsyncTaskRunner::~AsyncTaskRunner() { wait(); }

bool AsyncTaskRunner::wait(Deadline D) const {
  std::unique_lock<std::mutex> Lock(Mutex);
  return clangd::wait(Lock, TasksReachedZero, D,
                      [&] { return InFlightTasks == 0; });
}

void AsyncTaskRunner::runAsync(const llvm::Twine &Name,
                               llvm::unique_function<void()> Action) {
  {
    std::lock_guard<std::mutex> Lock(Mutex);
    ++InFlightTasks;
  }

  auto CleanupTask = llvm::make_scope_exit([this]() {
    std::lock_guard<std::mutex> Lock(Mutex);
    int NewTasksCnt = --InFlightTasks;
    if (NewTasksCnt == 0) {
      // Note: we can't unlock here because we don't want the object to be
      // destroyed before we notify.
      TasksReachedZero.notify_one();
    }
  });

  auto Task = [Name = Name.str(), Action = std::move(Action),
               Cleanup = std::move(CleanupTask)]() mutable {
    llvm::set_thread_name(Name);
    Action();
    // Make sure the function stored by ThreadFunc is destroyed before Cleanup
    // runs.
    Action = nullptr;
  };

  // Ensure our worker threads have big enough stacks to run clang.
  llvm::thread Thread(
      /*clang::DesiredStackSize*/ llvm::Optional<unsigned>(8 << 20),
      std::move(Task));
  Thread.detach();
}
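
// Example usage (illustrative sketch): fire-and-forget tasks whose joint
// completion can be awaited with an optional deadline:
//
//   AsyncTaskRunner Runner;
//   Runner.runAsync("background-work", [] { /* ... */ });
//   bool AllDone = Runner.wait(timeoutSeconds(10)); // false if it timed out
//   // ~AsyncTaskRunner also waits (without a deadline) for remaining tasks.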

Deadline timeoutSeconds(llvm::Optional<double> Seconds) {
  using namespace std::chrono;
  if (!Seconds)
    return Deadline::infinity();
  return steady_clock::now() +
         duration_cast<steady_clock::duration>(duration<double>(*Seconds));
}

void wait(std::unique_lock<std::mutex> &Lock, std::condition_variable &CV,
          Deadline D) {
  if (D == Deadline::zero())
    return;
  if (D == Deadline::infinity())
    return CV.wait(Lock);
  CV.wait_until(Lock, D.time());
}

bool PeriodicThrottler::operator()() {
  Rep Now = Stopwatch::now().time_since_epoch().count();
  Rep OldNext = Next.load(std::memory_order_acquire);
  if (Now < OldNext)
    return false;
  // We're ready to run (but may be racing other threads).
  // Work out the updated target time, and run if we successfully bump it.
  Rep NewNext = Now + Period;
  return Next.compare_exchange_strong(OldNext, NewNext,
                                      std::memory_order_acq_rel);
}
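
// Example usage (illustrative sketch): rate-limit a repeated operation so that
// across racing threads at most one caller "wins" per period:
//
//   PeriodicThrottler Throttle(std::chrono::seconds(1));
//   if (Throttle())
//     rebuildIndex(); // rebuildIndex() is a hypothetical callee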
} // namespace clangd
} // namespace clang