/spdk/doc/

memory.md
    1:  # Direct Memory Access (DMA) From User Space {#memory}
    5:  DPDK's proven base functionality to implement memory management. (Note: DPDK
    9:  Computing platforms generally carve physical memory up into 4KiB segments
   11:  addressable memory. Operating systems then overlay 4KiB virtual memory pages on
   15:  Physical memory is attached on channels, where each memory channel provides
   16:  some fixed amount of bandwidth. To optimize total memory bandwidth, the
   19:  1, page 2 on channel 2, etc. This is so that writing to memory sequentially
   29:  NVMe devices transfer data to and from system memory using Direct Memory Access
   31:  transfers. In the absence of an IOMMU, these messages contain *physical* memory
   33:  is responsible for making access to memory coherent.
   [all …]
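
As context for the memory.md matches above: devices DMA to and from buffers that must stay pinned and have a known physical (or IOMMU-translated) address, which is why SPDK hands out I/O memory from its own hugepage-backed pool. Below is a minimal illustrative sketch of allocating such a buffer and asking for the address a device would use; it assumes the env layer initializes successfully, and the exact spdk_vtophys() signature has varied across SPDK releases:

```c
#include <stdio.h>
#include <inttypes.h>
#include "spdk/env.h"

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "dma_example";
	if (spdk_env_init(&opts) < 0) {
		fprintf(stderr, "failed to initialize SPDK env\n");
		return 1;
	}

	/* Allocate 4 KiB of pinned, hugepage-backed memory, 4 KiB aligned. */
	void *buf = spdk_dma_zmalloc(4096, 4096, NULL);
	if (buf == NULL) {
		fprintf(stderr, "DMA-able allocation failed\n");
		return 1;
	}

	/* Look up the address a device would use for DMA to this buffer
	 * (a physical address, or an IOVA when an IOMMU is in use). */
	uint64_t len = 4096;
	uint64_t paddr = spdk_vtophys(buf, &len);

	printf("vaddr %p -> device address 0x%" PRIx64 " (%" PRIu64 " contiguous bytes)\n",
	       buf, paddr, len);

	spdk_dma_free(buf);
	return 0;
}
```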
|
nvme_spec.md
   12:  queue pairs in host memory. The term "host" is used a lot, so to clarify that's
   33:  necessary, a location in host memory containing a descriptor for host memory
   34:  associated with the command. This host memory is the data to be written on a
   73:  built into memory embedded into the request object - not directly into an NVMe
   89:  memory. A virtually contiguous memory region may not be physically contiguous,
   93:  separate requests transparently. For more information on how memory is managed,
   94:  see @ref memory.
   97:  PRP list description must be allocated in DMA-able memory and can be quite
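
The nvme_spec.md matches above note that each command carries a descriptor (PRP or SGL) for host memory, so the payload must live in DMA-able memory. A rough sketch of a single-block read under those constraints; `ns` and `qpair` are assumed to come from the usual controller probe/attach and spdk_nvme_ctrlr_alloc_io_qpair() flow, and the helper names are illustrative only:

```c
#include <stdbool.h>
#include <stdio.h>
#include "spdk/env.h"
#include "spdk/nvme.h"

/* Completion callback: the controller has finished DMA-ing the data
 * into the payload buffer described by the command's PRP/SGL entries. */
static void
read_complete(void *cb_arg, const struct spdk_nvme_cpl *cpl)
{
	bool *done = cb_arg;

	if (spdk_nvme_cpl_is_error(cpl)) {
		fprintf(stderr, "read failed\n");
	}
	*done = true;
}

/* Illustrative helper: issue a one-block read and poll for its completion. */
static int
read_one_block(struct spdk_nvme_ns *ns, struct spdk_nvme_qpair *qpair, uint64_t lba)
{
	bool done = false;
	uint32_t sector = spdk_nvme_ns_get_sector_size(ns);

	/* Payload must be DMA-able; the driver builds the PRP list from it. */
	void *payload = spdk_dma_zmalloc(sector, 0x1000, NULL);
	if (payload == NULL) {
		return -1;
	}

	int rc = spdk_nvme_ns_cmd_read(ns, qpair, payload, lba, 1,
				       read_complete, &done, 0);
	if (rc != 0) {
		spdk_dma_free(payload);
		return rc;
	}

	/* Poll the queue pair until our completion callback fires. */
	while (!done) {
		spdk_nvme_qpair_process_completions(qpair, 0);
	}

	spdk_dma_free(payload);
	return 0;
}
```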
|
virtio.md
   25:  vhost-user specification puts a limitation on the number of "memory regions" used (8).
   26:  Each region corresponds to one file descriptor, and DPDK - as SPDK's memory allocator -
   30:  non-physically-contiguous hugetlbfs file for all its memory.
|
porting.md
    5:  to allocate physically contiguous and pinned memory, perform PCI
    7:  address translation and managing memory pools. The *env* API is
|
bdev_pg.md
   25:  - Automatic queueing of I/O requests in response to queue full or out-of-memory conditions
   90:  other memory may be freed. A bdev cannot be torn down while open descriptors
  116:  memory or a scatter gather list describing memory that will be transferred to
  117:  the block device. This memory must be allocated through spdk_dma_malloc() or
  118:  its variants. For a full explanation of why the memory must come from a
  119:  special allocation pool, see @ref memory. Where possible, data in memory will
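
Following the bdev_pg.md matches above, which require I/O buffers to come from spdk_dma_malloc() and its variants rather than the heap, here is a small sketch of a bdev read using such a buffer. It assumes `desc` and `ch` were obtained from spdk_bdev_open_ext() and spdk_bdev_get_io_channel() on an SPDK thread; the function names are illustrative:

```c
#include <errno.h>
#include <stdbool.h>
#include "spdk/env.h"
#include "spdk/bdev.h"

/* Completion callback for the read issued below. */
static void
bdev_read_done(struct spdk_bdev_io *bdev_io, bool success, void *cb_arg)
{
	void *buf = cb_arg;

	/* ... consume the data in buf when success is true ... */
	(void)success;
	(void)buf;
	spdk_bdev_free_io(bdev_io);
}

/* Illustrative helper: read 4 KiB from offset 0. The buffer comes from
 * SPDK's DMA-safe allocator so the underlying driver can DMA into it. */
static int
read_first_block(struct spdk_bdev_desc *desc, struct spdk_io_channel *ch)
{
	void *buf = spdk_dma_zmalloc(4096, 0x1000, NULL);

	if (buf == NULL) {
		return -ENOMEM;
	}
	return spdk_bdev_read(desc, ch, buf, 0, 4096, bdev_read_done, buf);
}
```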
|
userspace.md
    9:  virtual memory into two categories of addresses based on privilege level -
   11:  separation is aided by features on the CPU itself that enforce memory
   33:  which is a critical piece of hardware for ensuring memory safety in user space
   34:  drivers. See @ref memory for full details.
   66:  memory needs to be read (no MMIO) to check a queue pair for a bit flip and
   69:  will ensure that the host memory being checked is present in the CPU cache
|
vhost_processing.md
   85:  After the negotiation, the Vhost-user driver shares its memory, so that the vhost
   86:  device (SPDK) can access it directly. The memory can be fragmented into multiple
   92:  - user address - for memory translations in Vhost-user messages (e.g.
   99:  The front-end will send new memory regions after each memory change - usually
  121:  The front-end sends I/O by allocating proper buffers in shared memory, filling
  176:  the Vhost-user memory region table and goes through a gpa_to_vva translation
  178:  and response data to be contained within a single memory region. I/O buffers
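
The vhost_processing.md matches above describe translating guest physical addresses (gpa) to the target's own virtual addresses (vva) through the memory region table shared by the front-end. A simplified, purely illustrative sketch of that lookup follows; the struct and function names here are hypothetical and not SPDK's actual implementation:

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical copy of one entry from the Vhost-user memory region table. */
struct mem_region {
	uint64_t  guest_phys_addr;  /* gpa at which the region starts          */
	uint64_t  memory_size;      /* length of the region in bytes           */
	uintptr_t mmap_addr;        /* where this process mmap'd the region    */
};

/* Translate a guest physical address to a local virtual address.
 * Returns NULL if the address range is not covered by a single shared
 * region, mirroring the single-region requirement mentioned above. */
static void *
gpa_to_vva(const struct mem_region *regions, size_t nregions,
	   uint64_t gpa, uint64_t len)
{
	for (size_t i = 0; i < nregions; i++) {
		const struct mem_region *r = &regions[i];

		if (gpa >= r->guest_phys_addr &&
		    gpa + len <= r->guest_phys_addr + r->memory_size) {
			return (void *)(r->mmap_addr + (gpa - r->guest_phys_addr));
		}
	}
	return NULL;
}
```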
|
applications.md
   37:  -n | --mem-channels | integer | all channels | number of memory channels u…
   40:  -s | --mem-size | integer | all hugepage memory | @ref cmd_arg_memory_size
   98:  Applications using the same shm-id share their memory and
  108:  Total size of the hugepage memory to reserve. If DPDK env layer is used, it will
  109:  reserve memory from all available hugetlbfs mounts, starting with the one with
  114:  that SPDK application can be started with 0 pre-reserved memory. Unlike hugepages
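
The applications.md matches above cover the `-n`/`--mem-channels` and `-s`/`--mem-size` options. As a sketch, the rough programmatic equivalents when initializing the env layer directly; the field names are as found in spdk/env.h at the time of writing and may differ across releases:

```c
#include <stdio.h>
#include "spdk/env.h"

int
main(void)
{
	struct spdk_env_opts opts;

	spdk_env_opts_init(&opts);
	opts.name = "env_opts_example";
	opts.mem_size = 1024;   /* like `-s 1024`: reserve 1024 MB of hugepage memory */
	opts.mem_channel = 4;   /* like `-n 4`: assume 4 memory channels */

	if (spdk_env_init(&opts) < 0) {
		fprintf(stderr, "spdk_env_init() failed\n");
		return 1;
	}
	return 0;
}
```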
|
concepts.md
    4:  - @subpage memory
|
blobfs.md
   46:  Make sure you have at least 5GB of memory allocated for huge pages.
   48:  The following will allocate 5GB of huge page memory (in addition to binding the NVMe devices to uio…
   65:  3. `spdk_cache_size` - Defines the amount of userspace cache memory used by SPDK. Specified in ter…
|
system_configuration.md
   91:  As soon as the first device is attached to SPDK, all of SPDK memory will be
   92:  mapped to the IOMMU through the VFIO APIs. VFIO will try to mlock that memory and
   93:  will likely exceed user ulimit on locked memory. Besides having various
  110:  try to map not only its reserved hugepages, but also all the memory that's
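
The system_configuration.md matches above point out that VFIO's attempt to pin (mlock) all of SPDK's memory can exceed the user's locked-memory ulimit. A small sketch for checking that limit from C using the standard POSIX getrlimit() call (raising the limit itself is normally done in limits.conf or the service unit, not in code):

```c
#include <stdio.h>
#include <sys/resource.h>

int
main(void)
{
	struct rlimit rl;

	if (getrlimit(RLIMIT_MEMLOCK, &rl) != 0) {
		perror("getrlimit");
		return 1;
	}
	if (rl.rlim_cur == RLIM_INFINITY) {
		printf("locked-memory limit: unlimited\n");
	} else {
		printf("locked-memory limit: %llu bytes\n",
		       (unsigned long long)rl.rlim_cur);
	}
	return 0;
}
```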
|
vhost.md
  170:  First, specify the memory backend for the virtual machine. Since QEMU must
  171:  share the virtual machine's memory with the SPDK vhost target, the memory
  175:  -object memory-backend-file,id=mem,size=1G,mem-path=/dev/hugepages,share=on
  270:  …-m 1G -object memory-backend-file,id=mem0,size=1G,mem-path=/dev/hugepages,share=on -numa node,memd…
|
compression.md
    6:  storage and persistent memory for metadata. This metadata includes mappings of logical blocks
   48:  "chunk map" in persistent memory. Each chunk map consists of N 64-bit values, where N is the maxim…
   79:  * 5 chunk maps will be allocated in 160B of persistent memory. This corresponds to 4 chunk maps
   84:  * The "logical map" will be allocated in 32B of persistent memory. This corresponds to
  114:  * Allocate a 16KB buffer in memory
|
about.md
   23:  handle conditions such as out of memory or I/O hangs, and logical volume
|
ftl.md
   22:  addresses in memory (the amount is configurable), and page them in and out of the cache device
  156:  ### Shared memory recovery {#ftl_shm_recovery}
  158:  …after crash of the target application, FTL also stores its metadata in shared memory (`shm`) - this
|
/spdk/lib/virtio/

virtio_vhost_user.c
  in prepare_vhost_memory_user():
    238:  SPDK_ERRLOG("Failed to prepare memory for vhost-user\n");
    243:  /* the memory regions are unaligned */
    244:  msg->payload.memory.regions[i].guest_phys_addr = hugepages[i].addr; /* use vaddr! */
    245:  msg->payload.memory.regions[i].userspace_addr = hugepages[i].addr;
    246:  msg->payload.memory.regions[i].memory_size = hugepages[i].size;
    247:  msg->payload.memory.regions[i].flags_padding = 0;
    251:  msg->payload.memory.nregions = num;
    252:  msg->payload.memory.padding = 0;
  in vhost_user_sock():
    319:  fd_num = msg.payload.memory.nregions;
    320:  msg.size = sizeof(msg.payload.memory
  [all …]

/spdk/test/vhost/migration/

migration-tc2.sh
   98:  --migrate-to=$target_vm --memory=1024 --vhost-name=0
   99:  …e=$target_vm --disk-type=spdk_vhost_scsi --disks=VhostScsi0 --incoming=$incoming_vm --memory=1024 \
|
/spdk/test/env/

env.sh
   10:  run_test "env_memory" $testdir/memory/memory_ut
|
Makefile
   14:  DIRS-y += env_dpdk_post_init memory pci
|
/spdk/test/app/fuzz/vhost_fuzz/

README.md
   25:  memory locations. By default, these values are overwritten by the application even when supplied as…
   31:  the request will no longer point to a valid memory location.
|
/spdk/include/spdk_internal/

vhost_user.h
  103:  struct vhost_memory_padded memory;
|
/spdk/test/vhost/windows/

windows_scsi_compliance.sh
   71:  vm_setup --force=1 --disk-type=spdk_vhost_scsi --os=$WINDOWS_IMG --disks=vhost --memory=4096
|
/spdk/test/vfio_user/nvme/

vfio_user_fio.sh
   48:  vm_setup --disk-type=vfio_user --force=$i --os=$VM_IMAGE --memory=768 --disks="$i"
|
/spdk/test/ftl/

common.sh
  204:  echo Remove shared memory files
|
/spdk/docker/

README.md
   15:  customization of host resources like CPUs, memory, and block I/O.
   99:  - [telegraf](https://www.influxdata.com/time-series-platform/telegraf/) is a very minimal memory fo…
|