Description: QReadWriteLock: fix data race on weakly-ordered memory architectures
 The fix changes the relaxed load of d_ptr in tryLockFor{Read,Write},
 performed after acquiring the mutex, into an acquire load. This
 establishes synchronization with the release store of d_ptr that is
 performed when an uncontended lock is converted into a contended one.
Origin: upstream, https://code.qt.io/cgit/qt/qtbase.git/commit?id=4fd88011fa7975ce
Last-Update: 2025-12-14
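
A minimal sketch of the ordering this patch relies on (d_ptr and
QReadWriteLockPrivate follow the hunks below; the surrounding code and
the newPrivate name are illustrative, not the actual implementation):

  // Thread A: converts the uncontended fast path into a contended
  // lock; it sets up a QReadWriteLockPrivate and publishes it with
  // release semantics so its initialization is visible to others:
  d_ptr.storeRelease(newPrivate);

  // Thread B: holds the mutex of a (possibly stale) private object
  // and re-reads d_ptr. A relaxed load does not synchronize with the
  // release store above, so on weakly-ordered hardware Thread B could
  // observe the new pointer but stale contents; the acquire load
  // pairs with the release store and establishes the needed
  // happens-before:
  QReadWriteLockPrivate *dd = d_ptr.loadAcquire();
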
--- a/src/corelib/thread/qreadwritelock.cpp
+++ b/src/corelib/thread/qreadwritelock.cpp
@@ -267,14 +267,14 @@ bool QReadWriteLock::tryLockForRead(int
return d->recursiveLockForRead(timeout);
auto lock = qt_unique_lock(d->mutex);
- if (d != d_ptr.loadRelaxed()) {
+ if (QReadWriteLockPrivate *dd = d_ptr.loadAcquire(); d != dd) {
// d_ptr has changed: this QReadWriteLock was unlocked before we had
// time to lock d->mutex.
// We are holding a lock to a mutex within a QReadWriteLockPrivate
// that is already released (or even is already re-used). That's ok
// because the QFreeList never frees them.
// Just unlock d->mutex (at the end of the scope) and retry.
- d = d_ptr.loadAcquire();
+ d = dd;
continue;
}
return d->lockForRead(timeout);
@@ -377,11 +377,11 @@ bool QReadWriteLock::tryLockForWrite(int
return d->recursiveLockForWrite(timeout);
auto lock = qt_unique_lock(d->mutex);
- if (d != d_ptr.loadRelaxed()) {
+ if (QReadWriteLockPrivate *dd = d_ptr.loadAcquire(); d != dd) {
// The mutex was unlocked before we had time to lock the mutex.
// We are holding to a mutex within a QReadWriteLockPrivate that is already released
// (or even is already re-used) but that's ok because the QFreeList never frees them.
- d = d_ptr.loadAcquire();
+ d = dd;
continue;
}
return d->lockForWrite(timeout);
--- a/tests/auto/corelib/thread/qreadwritelock/tst_qreadwritelock.cpp
+++ b/tests/auto/corelib/thread/qreadwritelock/tst_qreadwritelock.cpp
@@ -85,6 +85,7 @@ private slots:
void multipleReadersLoop();
void multipleWritersLoop();
void multipleReadersWritersLoop();
+ void heavyLoadLocks();
void countingTest();
void limitedReaders();
void deleteOnUnlock();
@@ -635,6 +636,111 @@ public:
}
};
+class HeavyLoadLockThread : public QThread
+{
+public:
+ QReadWriteLock &testRwlock;
+ const qsizetype iterations;
+ const int numThreads;
+ inline HeavyLoadLockThread(QReadWriteLock &l, qsizetype iters, int numThreads, QVector<QAtomicInt *> &counters):
+ testRwlock(l),
+ iterations(iters),
+ numThreads(numThreads),
+ counters(counters)
+ { }
+
+private:
+ QVector<QAtomicInt *> &counters;
+ QAtomicInt *getCounter(qsizetype index)
+ {
+ QReadLocker locker(&testRwlock);
+ /*
+ The index is increased monotonically, so the index
+ being requested should be always within or at the end of the
+ counters vector.
+ */
+ Q_ASSERT(index <= counters.size());
+ if (counters.size() <= index || counters[index] == nullptr) {
+ locker.unlock();
+ QWriteLocker wlocker(&testRwlock);
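+ // Re-check both conditions under the write lock: another thread
+ // may have resized the vector or created this counter while no
+ // lock was held between the read unlock and the write lock.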
+ if (counters.size() <= index)
+ counters.resize(index + 1, nullptr);
+ if (counters[index] == nullptr)
+ counters[index] = new QAtomicInt(0);
+ return counters[index];
+ }
+ return counters[index];
+ }
+ void releaseCounter(qsizetype index)
+ {
+ QWriteLocker locker(&testRwlock);
+ delete counters[index];
+ counters[index] = nullptr;
+ }
+
+public:
+ void run() override
+ {
+ for (qsizetype i = 0; i < iterations; ++i) {
+ QAtomicInt *counter = getCounter(i);
+ /*
+ Here each counter is accessed by each thread
+ and increased only once. As a result, when the
+ counter reaches numThreads, i.e. the fetched
+ value before the increment is numThreads-1,
+ we know all threads have accessed this counter
+ and we can delete it safely.
+ */
+ int prev = counter->fetchAndAddRelaxed(1);
+ if (prev == numThreads - 1) {
+#ifdef QT_BUILDING_UNDER_TSAN
+ /*
+ Under TSAN, deleting and freeing an object
+ will trigger a write operation on the memory
+ of the object. Since we used fetchAndAddRelaxed
+ to update the counter, TSAN will report a data
+ race when deleting the counter here. To avoid
+ the false positive, we simply reset the counter
+ to 0 here with ordered semantics, establishing
+ an ordering that guarantees the freeing operation
+ happens after all fetchAndAddRelaxed operations
+ in other threads.
+
+ When not building under TSAN, deleting the counter
+ will not result in any data read or written to the
+ memory region of the counter, so no data race will
+ happen.
+ */
+ counter->fetchAndStoreOrdered(0);
+#endif
+ releaseCounter(i);
+ }
+ }
+ }
+};
+
+/*
+ Multiple threads race to acquire and release
+ locks on the same indices.
+*/
+
+void tst_QReadWriteLock::heavyLoadLocks()
+{
+ constexpr qsizetype iterations = 65536 * 4;
+ constexpr int numThreads = 8;
+ QVector<QAtomicInt *> counters;
+ QReadWriteLock testLock;
+ std::array<std::unique_ptr<HeavyLoadLockThread>, numThreads> threads;
+ for (auto &thread : threads)
+ thread = std::make_unique<HeavyLoadLockThread>(testLock, iterations, numThreads, counters);
+ for (auto &thread : threads)
+ thread->start();
+ for (auto &thread : threads)
+ thread->wait();
+ QVERIFY(counters.size() == iterations);
+ for (qsizetype i = 0; i < iterations; ++i)
+ QVERIFY(counters[i] == nullptr);
+}
/*
A writer acquires a read-lock, a reader locks