Lines Matching full:which

11 LLVM supports instructions which are well-defined in the presence of threads and
32 which ensures that every volatile load and store happens and is performed in the
52 section specifically goes into the one optimizer restriction which applies in
53 concurrent environments, which gets a bit more of an extended description
91 expecting, which can lead to undefined behavior down the line. (This example is
95 Note that speculative loads are allowed; a load which is part of a race returns
113 A ``fence`` provides Acquire and/or Release ordering which is not part of
123 therefore an instruction which is wider than the target natively supports can be
141 NotAtomic is the obvious, a load or store which is not atomic. (This isn't
172 which writes to surrounding bytes. (If you are writing a backend for an
173 architecture which cannot satisfy these restrictions and cares about
191 "safe" languages which need to guarantee that the generated code never
201 narrows a store, or stores a value which would not be stored otherwise. Some
210 unordered loads and unordered stores, a load cannot see a value which was
213 instructions (or an instruction which does multiple memory operations, like
229 If you are writing a frontend which uses this directly, use with caution. The
231 only used in a pattern which you know is correct. Generally, these would
232 either be used for atomic operations which do not protect other memory (like
240 operations are unlikely to be used in ways which would make those
259 If you are writing a frontend which uses this directly, use with caution.
273 a simple implementation, most architectures provide a barrier which is strong
288 If you are writing a frontend which uses this directly, use with caution.
308 barrier (for fences and operations which both read and write memory).
314 If you are writing a frontend which uses this directly, use with caution.
335 the gcc-compatible ``__sync_*`` builtins which do not specify otherwise.
354 fence after the stores; which is preferred varies by architecture.
361 * ``isSimple()``: A load or store which is not volatile or atomic. This is
364 * ``isUnordered()``: A load or store which is not volatile and at most
369 that they return true for any operation which is volatile or at least
400 MemoryDependencyAnalysis (which is also used by other passes like GVN).
410 On architectures which use barrier instructions for all atomic ordering (like
434 ``setMaxAtomicSizeInBitsSupported`` (which defaults to 0).
449 which take some sort of exclusive lock on a cache line (``LDREX`` and ``STREX``
502 There are four generic functions, which can be called with data of any size or
510 There are also size-specialized versions of the above functions, which can only
521 Finally there are some read-modify-write functions, which are only available in
535 - They support all sizes and alignments -- including those which cannot be
540 compiler-support library, as they have state which must be shared amongst all
548 produced by an old compiler (which will have called the ``__atomic_*`` function)
549 interoperates with code produced by the new compiler (which will use native
565 Some CPUs support multiple instruction sets which can be switched back and forth
566 on function-call boundaries. For example, MIPS supports the MIPS16 ISA, which
575 function which on older CPUs contains a "magically-restartable" atomic sequence
576 (which looks atomic so long as there's only one CPU), and contains actual atomic
578 be provided on any architecture, if all CPUs which are missing atomic
584 Some targets (like RISCV) support a ``+forced-atomics`` target feature, which
632 On AArch64, a variant of the __sync_* routines is used which contain the memory
634 whether the single-instruction atomic operations which were introduced as part