Lines matching "operations"
122 operations. A Monotonic load followed by an Acquire fence is roughly
192 operations; if you need such an operation, use explicit locking.
212 call. Reordering unordered operations is safe, though, and optimizers should
213 take advantage of that because unordered operations are common in languages
217 These operations are required to be atomic in the sense that if you use
221 instructions (or an instruction which does multiple memory operations, like
229 essentially guarantees that if you take all the operations affecting a specific
240 either be used for atomic operations which do not protect other memory (like
248 operations are unlikely to be used in ways which would make those
316 barrier (for fences and operations which both read and write memory).
339 ordering exists between all SequentiallyConsistent operations.
346 If a frontend is exposing atomic operations, these are much easier to reason
347 about for the programmer than other kinds of operations, and using them is
354 operations may not be reordered.
358 operations and SequentiallyConsistent stores require Release
370 what, for example, memcpyopt would check for operations it might transform.
388 To support optimizing around atomic operations, make sure you are using the
390 optimize some atomic operations (Unordered operations in particular), make sure
394 operations:
397 memcpy/memset, including unordered loads/stores. It can pull operations
398 across some atomic operations.
401 monotonic operations like a read+write to a memory location, and anything
417 Atomic operations are represented in the SelectionDAG with ``ATOMIC_*`` opcodes.
422 The MachineMemOperand for all atomic operations is currently marked as volatile;
426 One very important property of the atomic operations is that if your backend
427 supports any inline lock-free atomic operations of a given size, you should
428 support *ALL* operations of that size in a lock-free manner.
431 this is trivial: all the other operations can be implemented on top of those
440 AtomicExpandPass can help with that: it will expand all atomic operations to the
449 other ``atomicrmw`` operations generate a loop with ``LOCK CMPXCHG``. Depending
450 on the users of the result, some ``atomicrmw`` operations can be translated into
451 operations like ``LOCK AND``, but that does not work in general.
484 operations. However, many architectures have strict requirements for LL/SC
507 LLVM's AtomicExpandPass will translate atomic operations on data sizes above
562 implement the operations on naturally-aligned pointers of supported sizes, and a
600 appropriate size, and then implement some subset of the operations via libcalls
642 whether the single-instruction atomic operations which were introduced as part
658 operations are used in place.