
/*
 * This software is available to you under a choice of one of two
 * licenses.  You may choose to be licensed under the terms of the GNU
 * General Public License (GPL) Version 2, available from the file
 * COPYING in the main directory of this source tree, or the
 * OpenIB.org BSD license below:
 *
 *     Redistribution and use in source and binary forms, with or
 *     without modification, are permitted provided that the following
 *     conditions are met:
 *
 *      - Redistributions of source code must retain the above
 *        copyright notice, this list of conditions and the following
 *        disclaimer.
 *
 *      - Redistributions in binary form must reproduce the above
 *        copyright notice, this list of conditions and the following
 *        disclaimer in the documentation and/or other materials
 *        provided with the distribution.
 *
 * THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND,
 * EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
 * MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND
 * NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS
 * BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN
 * ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
 * CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
 * SOFTWARE.
 */

#include <pthread.h>
#include <stdint.h>
/* Barriers for DMA.

   These barriers are explicitly only for use with user DMA operations. If you
   are looking for barriers to use with cache-coherent multi-threaded
   consistency then look in stdatomic.h. If you need both kinds of
   synchronicity at once then use an atomic operation followed by one of these
   barriers.
   When reasoning about these barriers there are two objects:
     - CPU attached address space (the CPU memory could be a range of things:
       cached/uncached/non-temporal CPU DRAM, uncached MMIO space in another
       device, pMEM). Generally speaking the ordering is only relative to the
       local CPU's view of the system.
     - A DMA initiator on a bus. For instance a PCI-E device issuing
       MemRd/MemWr TLPs.

   The ordering guarantee is always stated between those two streams. Eg what
   happens if a MemRd TLP is sent in via PCI-E relative to a CPU WRITE to the
   same memory location.
   Even when an architecture substitutes a stronger barrier, the narrowest
   correct barrier name should be used in the provider as a form of
   documentation. */
/* Ensure that the device's view of memory matches the CPU's view of memory.
   This should be placed before any MMIO store that could trigger the device
   to begin doing DMA, such as a device doorbell ring.

   eg
    *dma_buf = 1;
    udma_to_device_barrier();
    mmio_write(DOORBELL);

   dma_buf will be visible to the device before the doorbell is. The prior
   writes could be to any CPU mapped memory object with any cacheability mode.

   NOTE: x86 has historically used a weaker semantic for this barrier, and
   only fenced normal stores to normal memory. libibverbs users using other
   memory types or non-temporal stores are required to use SFENCE in their own
   code prior to calling verbs to start a DMA. */
#if defined(__i386__) || defined(__x86_64__)
#define udma_to_device_barrier() asm volatile("" ::: "memory")
#elif defined(__PPC64__) || defined(__PPC__)
#define udma_to_device_barrier() asm volatile("sync" ::: "memory")
#elif defined(__ia64__)
#define udma_to_device_barrier() asm volatile("mf" ::: "memory")
#elif defined(__sparc_v9__)
#define udma_to_device_barrier() asm volatile("membar #StoreStore" ::: "memory")
#elif defined(__aarch64__)
#define udma_to_device_barrier() asm volatile("dsb st" ::: "memory")
#elif defined(__sparc__) || defined(__s390x__)
#define udma_to_device_barrier() asm volatile("" ::: "memory")
#else
#error No architecture specific memory barrier defines found!
#endif
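
/* Illustrative sketch (not part of the upstream header): fill a DMA buffer
   and then ring a doorbell.  'example_buf' and 'example_db' are invented
   names; a real provider would use its own queue memory and mapped UAR
   page. */
static inline void example_start_dma(uint32_t *example_buf,
				     volatile uint32_t *example_db)
{
	example_buf[0] = 0xdeadbeef;	/* payload the device will DMA read */
	udma_to_device_barrier();	/* make the payload visible first */
	*example_db = 1;		/* MMIO doorbell that starts the DMA */
}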
/* Ensure that all ordered stores from the device are observable from the
   CPU. This only makes sense after something that observes an ordered store
   from the device - eg by reading a MMIO register or seeing that CPU memory is
   updated.

   This guarantees that all reads that follow the barrier see the ordered
   stores that preceded the observation.

   For instance, this would be used after testing a valid bit in a memory
   region written by the device, prior to reading the rest of the values in
   that region. */
#if defined(__i386__)
#define udma_from_device_barrier() asm volatile("lock; addl $0,0(%%esp) " ::: "memory")
#elif defined(__x86_64__)
#define udma_from_device_barrier() asm volatile("lfence" ::: "memory")
#elif defined(__PPC64__)
#define udma_from_device_barrier() asm volatile("lwsync" ::: "memory")
#elif defined(__PPC__)
#define udma_from_device_barrier() asm volatile("sync" ::: "memory")
#elif defined(__ia64__)
#define udma_from_device_barrier() asm volatile("mf" ::: "memory")
#elif defined(__sparc_v9__)
#define udma_from_device_barrier() asm volatile("membar #LoadLoad" ::: "memory")
#elif defined(__aarch64__)
#define udma_from_device_barrier() asm volatile("dsb ld" ::: "memory")
#elif defined(__sparc__) || defined(__s390x__)
#define udma_from_device_barrier() asm volatile("" ::: "memory")
#else
#error No architecture specific memory barrier defines found!
#endif
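
/* Illustrative sketch (not part of the upstream header): poll a completion
   entry written by the device.  The CQE layout is invented for the
   example. */
struct example_cqe {
	uint32_t data;
	uint32_t valid;
};

static inline int example_poll_cqe(volatile struct example_cqe *cqe,
				   uint32_t *data_out)
{
	if (!cqe->valid)
		return 0;
	/* The valid bit was observed; fence before reading the payload the
	 * device wrote before setting the bit. */
	udma_from_device_barrier();
	*data_out = cqe->data;
	return 1;
}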
/* Order writes to CPU memory so that a DMA device cannot view writes after
   the barrier without also seeing all writes before the barrier. This does
   not guarantee any writes are visible to DMA.

   This would be used in cases where a DMA buffer might have a valid bit and
   data, and the barrier is placed after writing the data but before writing
   the valid bit, so the DMA device cannot observe a set valid bit with
   unwritten data.

   Compared to udma_to_device_barrier() this barrier is not required to fence
   anything but normal stores to normal malloc memory. Usage should be:

   write_wqe
      udma_to_device_barrier();       // Get user memory ready for DMA
      wqe->addr = ...;
      wqe->flags = ...;
      udma_ordering_write_barrier();  // Guarantee WQE written in order
      wqe->valid = 1;
*/
#define udma_ordering_write_barrier() udma_to_device_barrier()
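
/* Illustrative sketch (not part of the upstream header): the write_wqe
   pattern from the comment above as compilable code.  The WQE layout is
   invented. */
struct example_wqe {
	uint64_t addr;
	uint32_t flags;
	uint32_t valid;
};

static inline void example_write_wqe(struct example_wqe *wqe, uint64_t addr)
{
	udma_to_device_barrier();	/* get user memory ready for DMA */
	wqe->addr = addr;
	wqe->flags = 0;
	udma_ordering_write_barrier();	/* guarantee WQE written in order */
	wqe->valid = 1;
}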
/* Promptly flush writes to MMIO Write Combining memory.
   This should be used after a write to WC memory. This is both a barrier
   and a hint to the CPU to flush any buffers, to reduce the latency to TLP
   generation.

   This is not required to have any effect on CPU memory.

   Note that there is no order guarantee for writes to WC memory without
   barriers.

   This is intended to be used in conjunction with WC memory to generate large
   PCI-E MemWr TLPs from the CPU. */
#if defined(__i386__)
#define mmio_flush_writes() asm volatile("lock; addl $0,0(%%esp) " ::: "memory")
#elif defined(__x86_64__)
#define mmio_flush_writes() asm volatile("sfence" ::: "memory")
#elif defined(__PPC64__) || defined(__PPC__)
#define mmio_flush_writes() asm volatile("sync" ::: "memory")
#elif defined(__ia64__)
#define mmio_flush_writes() asm volatile("fwb" ::: "memory")
#elif defined(__sparc_v9__)
#define mmio_flush_writes() asm volatile("membar #StoreStore" ::: "memory")
#elif defined(__aarch64__)
#define mmio_flush_writes() asm volatile("dsb st" ::: "memory")
#elif defined(__sparc__) || defined(__s390x__)
#define mmio_flush_writes() asm volatile("" ::: "memory")
#else
#error No architecture specific memory barrier defines found!
#endif
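
/* Illustrative sketch (not part of the upstream header): promptly push a
   doorbell value through WC mapped MMIO.  'example_wc_db' is an invented
   pointer into a WC BAR mapping. */
static inline void example_ring_wc_doorbell(volatile uint64_t *example_wc_db,
					    uint64_t val)
{
	*example_wc_db = val;	/* store into write-combining MMIO */
	mmio_flush_writes();	/* push the WC buffer out as a TLP now */
}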
/* Prevent WC writes from being re-ordered relative to other MMIO
   writes. This should be used before a write to WC memory.

   This must act as a barrier to prevent write re-ordering from different
   memory types:
    *mmio_mem = 1;
    mmio_wc_start();
    *wc_mem = 2;
   Must always produce a TLP '1' followed by '2'.

   This barrier implies udma_to_device_barrier().

   This is intended to be used in conjunction with WC memory to generate large
   PCI-E MemWr TLPs from the CPU. */
#define mmio_wc_start() mmio_flush_writes()
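
/* Illustrative sketch (not part of the upstream header): write a 64 byte
   descriptor into WC memory so the CPU can emit it as a single large MemWr
   TLP.  The buffer and descriptor are invented. */
static inline void example_write_wc_desc(uint64_t *wc_buf,
					 const uint64_t *desc)
{
	int i;

	mmio_wc_start();	/* order against prior MMIO and DMA writes */
	for (i = 0; i != 8; i++)
		wc_buf[i] = desc[i];	/* 64 bytes the CPU may combine */
	mmio_flush_writes();	/* close the WC buffer, emit the TLP */
}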
/* Keep MMIO writes in order.
   Currently we lack writel macros that universally guarantee MMIO
   writes happen in order, like the kernel does. Even worse, many
   providers haphazardly open code writes to MMIO memory omitting even
   volatile.

   Until this can be resolved with a proper writel macro, this barrier
   is a stand in to indicate places where MMIO writes should be switched
   to some future writel. */
#define mmio_ordered_writes_hack() mmio_flush_writes()
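
/* Illustrative sketch (not part of the upstream header): two MMIO register
   writes that must reach the device in order, marked with the stand-in
   barrier above.  The register pointers are invented. */
static inline void example_two_mmio_writes(volatile uint32_t *reg_a,
					   volatile uint32_t *reg_b)
{
	*reg_a = 1;
	mmio_ordered_writes_hack();	/* keep the two stores ordered */
	*reg_b = 2;
}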
/* Write Combining spinlock primitives.

   Any access to a multi-value WC region must ensure that multiple cpus do not
   write to the same values concurrently; these helpers make that
   straightforward when the choice is to use a spinlock. The TLPs must be
   seen by the device strictly in the order that the spinlocks are acquired,
   and combining WC writes between different critical sections is not
   possible. */
static inline void mmio_wc_spinlock(pthread_spinlock_t *lock)
{
	pthread_spin_lock(lock);
#if !defined(__i386__) && !defined(__x86_64__)
	/* For x86 the serialization within the spin lock is enough to
	 * strongly order WC and other memory types. */
	mmio_wc_start();
#endif
}

static inline void mmio_wc_spinunlock(pthread_spinlock_t *lock)
{
	/* It is possible that on x86 the atomic in the lock is strong enough
	 * to force-flush the WC buffers quickly, and this SFENCE can be
	 * deferred. Until that is proven, flush conservatively before
	 * releasing the lock. */
	mmio_flush_writes();
	pthread_spin_unlock(lock);
}
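
/* Illustrative sketch (not part of the upstream header): segmenting WC
   writes from multiple threads with the spinlock helpers.  The lock and
   buffer names are invented. */
static inline void example_locked_wc_write(pthread_spinlock_t *lock,
					   uint64_t *wc_buf,
					   const uint64_t *desc)
{
	int i;

	mmio_wc_spinlock(lock);		/* implies mmio_wc_start() */
	for (i = 0; i != 8; i++)
		wc_buf[i] = desc[i];
	mmio_wc_spinunlock(lock);	/* flushes WC writes before unlock */
}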