30.6.4 Shared state [futures.state]
//
//Many of the classes introduced in this sub-clause use some state to
//communicate results. This shared state consists of some state information and
//some (possibly not yet evaluated) result, which can be a (possibly void) value
//or an exception.
//
// [ Note: Futures, promises, and tasks defined in this clause reference such
// shared state. — end note ]
//
//
//Asynchronous return object
//==========================
//
//An asynchronous return object is an object that reads (obtains) results from a
//shared state.
//
//A waiting function of an asynchronous return object potentially blocks to
//wait for the shared state to be made ready (see "Making a shared state
//ready" below).
//
//Asynchronous provider
//=====================
//
//An asynchronous provider is an object that provides a result to a shared
//state.
//
//The result of a shared state is set by member functions of the asynchronous
//provider. [ Note: Such as promises or tasks. — end note ]
//
//The means of setting the result of a shared state is specified in the
//description of those classes and functions that create such a state object.
//
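As an illustration (a minimal sketch using only standard facilities; the
variable and lambda names are invented for the example), a std::promise plays
the role of the asynchronous provider and the std::future obtained from it is
the asynchronous return object; both refer to the same shared state:

#include <future>
#include <thread>

int main() {
    std::promise<int> provider;                       // asynchronous provider
    std::future<int> result = provider.get_future();  // asynchronous return object;
                                                      // both now refer to one shared state
    std::thread t([&provider] {
        provider.set_value(42);       // sets the result and makes the shared state ready
    });
    int value = result.get();         // waiting function: blocks until the shared
                                      // state is ready, then reads the result
    t.join();
    return value == 42 ? 0 : 1;
}
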
//Releasing the shared state
//==========================
//
//When an asynchronous return object or an asynchronous provider is said to
//release its shared state, it means:
//
// — if the return object or provider holds the last reference to its shared
//   state, the shared state is destroyed; and
// — the return object or provider gives up its reference to its shared
//   state.
//
//
//Making a shared state ready
//===========================
//
//When an asynchronous provider is said to make its shared state ready, it
//means:
//
// — first, the provider marks its shared state as ready; and
// — second, the provider unblocks any execution agents waiting for its
// shared state to become ready.
//
//Abandoning a shared state
//=========================
//
//When an asynchronous provider is said to abandon its shared state, it means:
//
// — first, if that state is not ready, the provider
//     — stores an exception object of type future_error with an error
//       condition of broken_promise within its shared state; and then
//     — makes its shared state ready;
// — second, the provider releases its shared state.
//
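A minimal sketch of abandonment (standard facilities only; the structure of
the example is illustrative): destroying a std::promise that never set a
result abandons its shared state, so a later get() on the future throws a
future_error with the broken_promise condition:

#include <future>
#include <iostream>

int main() {
    std::future<int> f;
    {
        std::promise<int> p;
        f = p.get_future();
    }   // p is destroyed without setting a result: it abandons its shared
        // state, which stores a future_error with broken_promise, makes the
        // state ready, and then releases the provider's reference

    try {
        f.get();                       // the stored exception is rethrown here
    } catch (const std::future_error& e) {
        std::cout << (e.code() ==
                      std::make_error_code(std::future_errc::broken_promise))
                  << '\n';             // prints 1
    }
}
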
//Shared state being ready
//========================
//
//A shared state is ready only if it holds a value or an exception ready for
//retrieval.
//
//Waiting for a shared state to become ready may invoke code to compute the
//result on the waiting thread (if so specified in the description of the class
//or function that creates the state object).
//
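One case where waiting runs the computation on the waiting thread is
std::async with std::launch::deferred; a small illustrative sketch:

#include <future>
#include <iostream>
#include <thread>

int main() {
    // With launch::deferred the callable is not run eagerly; it is invoked on
    // the thread that first waits on the shared state (here, the main thread).
    auto f = std::async(std::launch::deferred, [] {
        return std::this_thread::get_id();
    });
    std::cout << (f.get() == std::this_thread::get_id()) << '\n';  // prints 1
}
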
//Calls to functions that successfully set the stored result of a shared state
//synchronize with (1.10) calls to functions successfully detecting the ready
//state resulting from that setting.
//
//The storage of the result (whether normal or exceptional) into the shared
//state synchronizes with (1.10) the successful return from a call to a waiting
//function on the shared state.
//
//Accesses to the same shared state conflict (1.10).
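An illustrative sketch of these guarantees (the names are invented): a side
effect made before the provider stores the result is visible after a waiting
function returns successfully:

#include <cassert>
#include <future>
#include <thread>
#include <vector>

int main() {
    std::vector<int> data;             // ordinary, non-atomic object
    std::promise<void> p;
    std::future<void> f = p.get_future();

    std::thread t([&] {
        data.push_back(1);             // side effect made before the result is set
        p.set_value();                 // stores the (void) result, makes the state ready
    });

    f.wait();                          // successful return from the waiting function:
                                       // the storage of the result synchronizes with it,
    assert(data.size() == 1);          // so the push_back above is visible here
    t.join();
}
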
==============================
1.10 Multi-threaded executions and data races [intro.multithread]
//A thread of execution (also known as a thread) is a single flow of control
//within a program, including the initial invocation of a specific top-level
//function, and recursively including every function invocation subsequently
//executed by the thread.
//
//Note:
//When one thread creates another, the initial call to the top-level function of
//the new thread is executed by the new thread, not by the creating thread.
//
//Every thread in a
//program can potentially access every object and function in a program.
//
//Under a hosted implementation, a C++ program can have more than one thread
//running concurrently. The execution of each thread proceeds as defined by the
//remainder of this standard. The execution of the entire program consists of an
//execution of all of its threads.
//
//Note: Usually the execution can be viewed as an interleaving of all its
//threads. However, some kinds of atomic operations, for example, allow
//executions inconsistent with a simple interleaving, as described below.
//
//Under a freestanding implementation, it is implementation-defined whether
//a program can have more than one thread of execution.
-------
Implementations should ensure that all unblocked threads eventually make
progress.
Note: Standard library functions may silently block on I/O or locks. Factors
in the execution environment, including externally-imposed thread priorities,
may prevent an implementation from making certain guarantees of forward
progress.
-------
The value of an object visible to a thread T at a particular point is the
initial value of the object, a value assigned to the object by T , or a value
assigned to the object by another thread, according to the rules below.
Note: In some cases, there may instead be undefined behavior. Much of this
section is motivated by the desire to support atomic operations with explicit
and detailed visibility constraints. However, it also implicitly supports a
simpler view for more restricted programs.
----------
Two expression evaluations conflict if one of them modifies a memory location
and the other one accesses or modifies the same memory location.
----------
//The library defines a number of atomic operations (Clause 29 ) and operations
//on mutexes (Clause 30 ) that are specially identified as synchronization
//operations.
//
//These operations play a special role in making assignments in one thread
//visible to another. A synchronization operation on one or more memory
//locations is either a consume operation, an acquire operation, a release
//operation, or both an acquire and release operation. A synchronization
//operation without an associated memory location is a fence and can be either
//an acquire fence, a release fence, or both an acquire and release fence. In
//addition, there are relaxed atomic operations, which are not synchronization
//operations, and atomic read-modify-write operations, which have special
//characteristics.
//
//Note: For example, a call that acquires a mutex will perform an acquire
//operation on the locations comprising the mutex. Correspondingly, a call that
//releases the same mutex will perform a release operation on those same
//locations. Informally, performing a release operation on A forces prior side
//effects on other memory locations to become visible to other threads that
//later perform a consume or an acquire operation on A . `Relaxed' atomic
//operations are not synchronization operations even though, like
//synchronization operations, they cannot contribute to data races.
//
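An illustrative sketch of the mutex case described in the note (function and
variable names are invented): the unlock is a release operation and a later
lock of the same mutex is an acquire operation, so writes made while holding
the lock become visible to the next locker:

#include <mutex>
#include <string>
#include <thread>

std::mutex m;
std::string message;          // ordinary (non-atomic) shared data
bool has_message = false;

void producer() {
    std::lock_guard<std::mutex> lock(m);   // lock: acquire operation on the mutex
    message = "hello";
    has_message = true;
}                                          // unlock at scope exit: release operation

void consumer() {
    std::lock_guard<std::mutex> lock(m);   // a later acquire on the same mutex makes the
    if (has_message) {                     // side effects before the prior release visible
        message.clear();                   // safe: no data race on 'message'
    }
}

int main() {
    std::thread t1(producer), t2(consumer);
    t1.join();
    t2.join();
}
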
//--------------
//
//In other words, function executions do not interleave with each other.
//
------------
An object with automatic or thread storage duration ( 3.7 ) is associated with
one specific thread, and can be accessed by a different thread only indirectly
through a pointer or reference ( 3.9.2 ).
--------
All modifications to a particular atomic object M occur in some particular
total order, called the modification order of M . If A and B are modifications
of an atomic object M and A happens before (as defined below) B , then A shall
precede B in the modification order of M , which is defined below.
Note: This states that the modification orders must respect the `happens
before' relationship.
Note: There is a separate order for each atomic object. There is no
requirement that these can be combined into a single total order for all
objects. In general this will be impossible since different threads may
observe modifications to different objects in inconsistent orders.
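An illustrative sketch of that last point (thread and variable names are
invented); all operations are relaxed, so the two reader threads may observe
the writes to x and y in opposite orders, while each object still has a single
modification order of its own:

#include <atomic>
#include <thread>

std::atomic<int> x{0}, y{0};
int r1, r2, r3, r4;

int main() {
    // With relaxed operations the outcome r1==1, r2==0, r3==1, r4==0 is
    // permitted: t3 observes the write to x before the write to y, while t4
    // observes them in the opposite order.
    std::thread t1([] { x.store(1, std::memory_order_relaxed); });
    std::thread t2([] { y.store(1, std::memory_order_relaxed); });
    std::thread t3([] { r1 = x.load(std::memory_order_relaxed);
                        r2 = y.load(std::memory_order_relaxed); });
    std::thread t4([] { r3 = y.load(std::memory_order_relaxed);
                        r4 = x.load(std::memory_order_relaxed); });
    t1.join(); t2.join(); t3.join(); t4.join();
}
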
--------
A release sequence headed by a release operation A on an atomic object M is a
maximal contiguous sub-sequence of side effects in the modification order of
M, where the first operation is A, and every subsequent operation
 — is performed by the same thread that performed A, or
 — is an atomic read-modify-write operation.
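An illustrative sketch under the rules quoted above (the names are invented):
a relaxed read-modify-write performed by another thread remains in the release
sequence, so an acquire load that reads its result still synchronizes with the
heading release store:

#include <atomic>
#include <cassert>
#include <thread>

std::atomic<int> count{0};
int data = 0;                  // ordinary data published by the release store

int main() {
    std::thread producer([] {
        data = 7;
        count.store(1, std::memory_order_release);    // A: heads a release sequence
    });
    std::thread bumper([] {
        while (count.load(std::memory_order_relaxed) == 0) { }
        // A read-modify-write by another thread stays in the release
        // sequence headed by A, even though it is relaxed.
        count.fetch_add(1, std::memory_order_relaxed);
    });
    std::thread consumer([] {
        // An acquire load that reads the value written by the relaxed RMW
        // still synchronizes with the release store A, so 'data' is visible.
        while (count.load(std::memory_order_acquire) < 2) { }
        assert(data == 7);
    });
    producer.join(); bumper.join(); consumer.join();
}
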
--------
Certain library calls synchronize with other library calls performed by
another thread. For example, an atomic store-release synchronizes with a
load-acquire that takes its value from the store ( 29.3 ).
Note: Except in the specified cases, reading a later value does not
necessarily ensure visibility as described below. Such a requirement would
sometimes interfere with efficient implementation.
Note: The specifications of the synchronization operations define when one
reads the value written by another. For atomic objects, the definition is
clear. All operations on a given mutex occur in a single total order. Each
mutex acquisition “reads the value written” by the last mutex release.
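An illustrative sketch of the store-release / load-acquire case (the names are
invented):

#include <atomic>
#include <cassert>
#include <thread>

std::atomic<bool> ready{false};
int payload = 0;               // ordinary data published through 'ready'

int main() {
    std::thread writer([] {
        payload = 42;                                      // ordinary store
        ready.store(true, std::memory_order_release);      // store-release
    });
    std::thread reader([] {
        while (!ready.load(std::memory_order_acquire)) { } // load-acquire that takes
                                                           // its value from the store
        assert(payload == 42);   // the store-release synchronizes with the load-acquire,
                                 // so the write to 'payload' is visible here
    });
    writer.join(); reader.join();
}
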
--------
An evaluation A carries a dependency to an evaluation B if
 — the value of A is used as an operand of B, unless:
     — B is an invocation of any specialization of std::kill_dependency
       (29.3), or
     — A is the left operand of a built-in logical AND (&&, see 5.14) or
       logical OR (||, see 5.15) operator, or
     — A is the left operand of a conditional (?:, see 5.16) operator, or
     — A is the left operand of the built-in comma (,) operator (5.18); or
 — A writes a scalar object or bit-field M, B reads the value written by A
   from M, and A is sequenced before B, or
 — for some evaluation X, A carries a dependency to X, and X carries a
   dependency to B.
Note: `Carries a dependency to' is a subset of `is sequenced before', and is
similarly strictly intra-thread.
--------
An evaluation A is dependency-ordered before an evaluation B if
 — A performs a release operation on an atomic object M, and, in another
   thread, B performs a consume operation on M and reads a value written by
   any side effect in the release sequence headed by A, or
 — for some evaluation X, A is dependency-ordered before X and X carries a
   dependency to B.
Note: The relation `is dependency-ordered before' is analogous to
`synchronizes with', but uses release/consume in place of release/acquire.
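An illustrative sketch of release/consume (the types and names are invented;
note that most current implementations strengthen memory_order_consume to
acquire):

#include <atomic>
#include <cassert>
#include <thread>

struct Node { int value; };

std::atomic<Node*> head{nullptr};

int main() {
    std::thread producer([] {
        Node* n = new Node{42};
        head.store(n, std::memory_order_release);     // release operation on 'head'
    });
    std::thread consumer([] {
        Node* n;
        while ((n = head.load(std::memory_order_consume)) == nullptr) { }  // consume op
        // The consume load carries a dependency to this dereference, so the
        // producer's write to n->value is dependency-ordered before it.
        assert(n->value == 42);
        delete n;
    });
    producer.join(); consumer.join();
}
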
--------
An evaluation A inter-thread happens before an evaluation B if
 — A synchronizes with B, or
 — A is dependency-ordered before B, or
 — for some evaluation X
     — A synchronizes with X and X is sequenced before B, or
     — A is sequenced before X and X inter-thread happens before B, or
     — A inter-thread happens before X and X inter-thread happens before B.
Note: The `inter-thread happens before' relation describes arbitrary
concatenations of `sequenced before', `synchronizes with' and
`dependency-ordered before' relationships, with two exceptions. The first
exception is that a concatenation is not permitted to end with
`dependency-ordered before' followed by `sequenced before'. The reason for
this limitation is that a consume operation participating in a
`dependency-ordered before' relationship provides ordering only with respect
to operations to which this consume operation actually carries a dependency.
The reason that this limitation applies only to the end of such a
concatenation is that any subsequent release operation will provide the
required ordering for a prior consume operation. The second exception is that
a concatenation is not permitted to consist entirely of `sequenced before'.
The reasons for this limitation are (1) to permit `inter-thread happens
before' to be transitively closed and (2) the `happens before' relation,
defined below, provides for relationships consisting entirely of `sequenced
before'.
------
An evaluation A happens before an evaluation B if:
— A is sequenced before B , or
— A inter-thread happens before B .
The implementation shall ensure that no program execution demonstrates a cycle
in the `happens before' relation.
Note: This cycle would otherwise be possible only through the use of consume
operations.
--------
A visible side effect A on a scalar object or bit-field M with respect to a
value computation B of M satisfies the conditions:
— A happens before B and
— there is no other side effect X to M such that A happens before X and X
happens before B .
The value of a non-atomic scalar object or bit-field M , as determined by
evaluation B , shall be the value stored by the visible side effect A .
Note: If there is ambiguity about which side effect to a non-atomic object or
bit-field is visible, then the behavior is either unspecified or undefined.
Note: This states that operations on ordinary objects are not visibly
reordered. This is not actually detectable without data races, but it is
necessary to ensure that data races, as defined below, and with suitable
restrictions on the use of atomics, correspond to data races in a simple
interleaved (sequentially consistent) execution.
--------
The visible sequence of side effects on an atomic object M , with respect to a
value computation B of M , is a maximal contiguous sub-sequence of side
effects in the modification order of M , where the first side effect is
visible with respect to B , and for every side effect, it is not the case that
B happens before it. The value of an atomic object M , as determined by
evaluation B , shall be the value stored by some operation in the visible
sequence of M with respect to B .
Note: It can be shown that the visible sequence of side effects of a value
computation is unique given the coherence requirements below.
-----
If an operation A that modifies an atomic object M happens before an operation
B that modifies M , then A shall be earlier than B in the modification order
of M .
Note: This requirement is known as write-write coherence.
-------
If a value computation A of an atomic object M happens before a value
computation B of M , and A takes its value from a side effect X on M , then
the value computed by B shall either be the value stored by X or the value
stored by a side effect Y on M , where Y follows X in the modification order
of M .
Note: This requirement is known as read-read coherence.
-------
If a value computation A of an atomic object M happens before an operation B
that modifies M , then A shall take its value from a side effect X on M ,
where X precedes B in the modification order of M .
Note: This requirement is known as read-write coherence.
------
If a side effect X on an atomic object M happens before a value computation B
of M , then the evaluation B shall take its value from X or from a side effect
Y that follows X in the modification order of M.
Note:
This requirement is known as write-read coherence.
------------
Note: The four preceding coherence requirements effectively disallow compiler
reordering of atomic operations to a single object, even if both operations
are relaxed loads. This effectively makes the cache coherence guarantee
provided by most hardware available to C++ atomic operations.
Note: The visible sequence of side effects depends on the `happens before'
relation, which depends on the values observed by loads of atomics, which we
are restricting here. The intended reading is that there must exist an
association of atomic loads with modifications they observe that, together
with suitably chosen modification orders and the `happens before' relation
derived as described above, satisfy the resulting constraints as imposed here.
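An illustrative sketch of read-read coherence (the names are invented):

#include <atomic>
#include <thread>

std::atomic<int> m{0};

int main() {
    std::thread writer([] {
        m.store(1, std::memory_order_relaxed);
        m.store(2, std::memory_order_relaxed);   // later in m's modification order
    });
    std::thread reader([] {
        int a = m.load(std::memory_order_relaxed);
        int b = m.load(std::memory_order_relaxed);
        // Read-read coherence: the first load happens before the second, so
        // 'b' may not observe a value that precedes the value observed by 'a'
        // in the modification order.  (a, b) == (2, 1) is impossible, even
        // though both loads are relaxed.
        (void)a; (void)b;
    });
    writer.join(); reader.join();
}
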
-----
//The execution of a program contains a data race if it contains two conflicting
//actions in different threads, at least one of which is not atomic, and neither
//happens before the other. Any such data race results in undefined behavior.
Note: It can be shown that programs that correctly use mutexes and
memory_order_seq_cst operations to prevent all data races and use no other
synchronization operations behave as if the operations executed by their
constituent threads were simply interleaved, with each value computation of an
object being taken from the last side effect on that object in that
interleaving. This is normally referred to as `sequential
consistency'. However, this applies only to data-race-free programs, and
data-race-free programs cannot observe most program transformations that do
not change single-threaded program semantics. In fact, most single-threaded
program transformations continue to be allowed, since any program that behaves
differently as a result must perform an undefined operation.
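An illustrative sketch contrasting a data race with a race-free alternative
(the names are invented):

#include <atomic>
#include <thread>

int plain_counter = 0;                  // non-atomic
std::atomic<int> atomic_counter{0};

int main() {
    auto work = [] {
        // ++plain_counter;             // two threads executing this would be two
                                        // conflicting, non-atomic accesses with no
                                        // happens-before ordering: a data race,
                                        // hence undefined behavior
        atomic_counter.fetch_add(1, std::memory_order_relaxed);   // atomic: no data race
    };
    std::thread t1(work), t2(work);
    t1.join(); t2.join();
    return atomic_counter.load() == 2 ? 0 : 1;
}
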
Note: Compiler transformations that introduce assignments to a potentially
shared memory location that would not be modified by the abstract machine are
generally precluded by this standard, since such an assignment might overwrite
another assignment by a different thread in cases in which an abstract machine
execution would not have encountered a data race. This includes
implementations of data member assignment that overwrite adjacent members in
separate memory locations. Reordering of atomic loads in cases in which the
atomics in question may alias is also generally precluded, since this may
violate the `visible sequence' rules.
Note: Transformations that introduce a speculative read of a potentially
shared memory location may not preserve the semantics of the C++ program as
defined in this standard, since they potentially introduce a data
race. However, they are typically valid in the context of an optimizing
compiler that targets a specific machine with well-defined semantics for data
races. They would be invalid for a hypothetical machine that is not tolerant
of races or provides hardware race detection.
-------
The implementation may assume that any thread will eventually do one of the
following:
— terminate,
— make a call to a library I/O function,
— access or modify a volatile object, or
— perform a synchronization operation or an atomic operation.
Note: This is intended to allow compiler transformations such as removal of
empty loops, even when termination cannot be proven.
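An illustrative sketch (the names are invented): a spin-wait loop whose every
iteration performs an atomic operation satisfies this assumption and cannot be
removed, whereas an empty loop that does none of the listed things could be
assumed to terminate:

#include <atomic>
#include <thread>

std::atomic<bool> stop{false};

void spin_until_stopped() {
    // Each iteration performs an atomic operation, so this loop does one of
    // the things the implementation may rely on.  A loop spinning on a plain,
    // non-volatile bool would do none of them (and would also be a data race),
    // so the implementation could assume it terminates.
    while (!stop.load(std::memory_order_relaxed)) {
        std::this_thread::yield();
    }
}

int main() {
    std::thread t(spin_until_stopped);
    stop.store(true, std::memory_order_relaxed);
    t.join();
}
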
---------
An implementation should ensure that the last value (in modification order)
assigned by an atomic or synchronization operation will become visible to all
other threads in a finite period of time.