/spdk/docker/
README.md:
  into docker container images. The example containers consist of an SPDK NVMe-oF
  target sharing devices to another SPDK NVMe-oF application, which serves as
  both initiator and target, and finally a traffic generator based on FIO …
  docker-compose: we recommend version 1.29.2 or newer. …
  under /dev/shm. Depending on the use case, some kernel modules should also be …
  [docker-proxy](https://docs.docker.com/config/daemon/systemd/#httphttps-proxy) …
  To pass `$http_proxy` to docker-compose build, use:
      docker-compose build --build-arg PROXY=$http_proxy
  …
  ## How-To
  …
  `docker-compose.yaml` shows an example deployment of the storage containers based on SPDK. [all …]
docker-compose.monitoring.yaml:
  # SPDX-License-Identifier: Apache-2.0
  …
  storage-target:
    image: spdk-app
    …
    context: spdk-app
    container_name: storage-target
    …
    - build_base
    …
    - /dev/hugepages:/dev/hugepages
    - ./spdk-app/storage-target.conf:/config
    …
    - SPDK_HTTP_PROXY=0.0.0.0 9009 spdkuser spdkpass
    …
    - ./monitoring/telegraf.conf:/etc/telegraf/telegraf.conf:ro
  [all …]
docker-compose.yaml:
  # SPDX-License-Identifier: Apache-2.0
  …
  storage-target:
    image: spdk-app
    …
    context: spdk-app
    container_name: storage-target
    …
    - build_base
    …
    - /dev/hugepages:/dev/hugepages
    - ./spdk-app/storage-target.conf:/config
    …
    - SPDK_ARGS=-m 0x2
    …
  proxy-container:
  [all …]
/spdk/doc/
about.md:
  The Storage Performance Development Kit (SPDK) provides a set of tools and
  libraries for writing high performance, scalable, user-mode storage …
  and enables zero-copy access from the application. …
  The bedrock of SPDK is a user space, polled-mode, asynchronous, lockless
  [NVMe](http://www.nvmexpress.org) driver. This provides zero-copy, highly …
  includes unifying the interface between disparate storage devices, queueing to …
  [NVMe-oF](http://www.nvmexpress.org/nvm-express-over-fabrics-specification-released),
  …
  [vhost](http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html)
  …
  NVMe-oF and iSCSI interoperate with these targets, as well as QEMU with vhost. …
  high performance storage target, or used as the basis for production
nvmf_tgt_pg.md:
  # NVMe over Fabrics Target Programming Guide {#nvmf_tgt_pg}

  ## Target Audience
  …
  use the SPDK NVMe-oF target library (`lib/nvmf`). It is intended to provide …
  guide will not cover how to use the SPDK NVMe-oF target application. For a
  guide on how to use the existing application as-is, see @ref nvmf.
  …
  The SPDK NVMe-oF target library is located in `lib/nvmf`. The library
  implements all logic required to create an NVMe-oF target application. It is
  used in the implementation of the example NVMe-oF target application in
  …
  The library exposes a number of primitives - basic objects that the user …
  `struct spdk_nvmf_tgt`: An NVMe-oF target. This concept, surprisingly, does
  [all …]
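To ground the excerpt above, here is a minimal sketch of creating the `struct spdk_nvmf_tgt` primitive and one subsystem with `lib/nvmf`. It assumes it runs on an SPDK thread inside a started application and that recent `spdk/nvmf.h` signatures apply (option fields vary between SPDK releases); treat it as illustrative, not a drop-in implementation:

```c
/* Sketch only: create an NVMe-oF target plus one subsystem.
 * Assumes an SPDK app is already running on this thread; field names
 * follow recent spdk/nvmf.h but may differ across releases. */
#include "spdk/stdinc.h"
#include "spdk/nvmf.h"

static struct spdk_nvmf_tgt *
create_example_target(void)
{
	struct spdk_nvmf_target_opts tgt_opts = {};
	struct spdk_nvmf_subsystem *subsystem;
	struct spdk_nvmf_tgt *tgt;

	snprintf(tgt_opts.name, sizeof(tgt_opts.name), "example_tgt");
	tgt_opts.max_subsystems = 16;

	/* struct spdk_nvmf_tgt: the root object that owns subsystems,
	 * transports and poll groups */
	tgt = spdk_nvmf_tgt_create(&tgt_opts);
	if (tgt == NULL) {
		return NULL;
	}

	/* A subsystem, identified by its NQN, is what hosts connect to */
	subsystem = spdk_nvmf_subsystem_create(tgt, "nqn.2016-06.io.spdk:cnode1",
					       SPDK_NVMF_SUBTYPE_NVME, 1);
	if (subsystem == NULL) {
		spdk_nvmf_tgt_destroy(tgt, NULL, NULL); /* async teardown */
		return NULL;
	}

	spdk_nvmf_subsystem_set_allow_any_host(subsystem, true);
	return tgt;
}
```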
vhost.md:
  # vhost Target {#vhost}

  - @ref vhost_intro
  - @ref vhost_prereqs
  - @ref vhost_start
  - @ref vhost_config
  - @ref vhost_qemu_config
  - @ref vhost_example
  - @ref vhost_advanced_topics
  - @ref vhost_bugs
  …
  A vhost target provides a local storage service as a process running on a local machine.
  [all …]
sma.md:
  # Storage Management Agent {#sma}

  Storage Management Agent (SMA) is a service providing a gRPC interface for …
  users to create and manage various types of devices (e.g. NVMe, virtio-blk,
  etc.). The major difference between SMA's API and the existing SPDK-RPC …
  SPDK-RPCs, which enables it to be more easily consumed by orchestration …
  lot of hardware-specific options.
  …
  `sma.proto` file, while device-specific options are defined in their separate …
  some storage media. It is equivalent to an SPDK bdev and/or an NVMe namespace …
  The following sections provide a high-level description of each method. For …
  … it becomes a no-op and returns a handle to that device.
  [all …]
template_pg.md:
  ## Target Audience {#componentname_pg_audience}
  …
  a lengthy tutorial or commentary on storage in general or the goodness of SPDK, but provide enough i…
  …
  starting from scratch if they're at this point, they are by definition a storage application develo…
  …
  need to consider initialization options, threading, limitations, any sort of quirky or non-obvious …
  …
  when you can configure it - i.e. at run time or only up front. For specifics about how the RPCs wor…
  …
  If sequence diagrams make sense for this module, use mscgen to create simple UML-style (they don't…
iscsi.md:
  # iSCSI Target {#iscsi}

  ## iSCSI Target Getting Started Guide {#iscsi_getting_started}

  The Storage Performance Development Kit iSCSI target application is named `iscsi_tgt`.
  …
  ### Assigning CPU Cores to the iSCSI Target {#iscsi_config_lcore}
  …
  To ensure the SPDK iSCSI target has the best performance, place the NICs and the NVMe devices on the
  same NUMA node and configure the target to run on CPU cores associated with that node. The following
  command line option is used to configure the SPDK iSCSI target:
  …
  -m 0xF000000
  …
  This is a hexadecimal bit mask of the CPU cores where the iSCSI target will start polling threads.
  …
  ## Configuring iSCSI Target via RPC method {#iscsi_rpc}
  [all …]
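To make the mask arithmetic concrete, here is a tiny plain-C decoder (illustrative only; ordinary bit-twiddling, not an SPDK API) showing which cores `-m 0xF000000` selects:

```c
/* Decode an SPDK-style hexadecimal core mask (plain C, not an SPDK API). */
#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint64_t mask = 0xF000000; /* the -m value from the excerpt above */

	for (unsigned int core = 0; core < 64; core++) {
		if (mask & (1ULL << core)) {
			printf("polling thread on core %u\n", core);
		}
	}
	return 0; /* 0xF000000 sets bits 24-27, so this prints cores 24..27 */
}
```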
nvme_spec.md:
  … storage devices. The specification includes network transport definitions for
  remote storage as well as a hardware register layout for local PCIe devices.
  …
  queues - a submission queue and a completion queue. These queues are more …
  structures, plus 2 integers (head and tail indices). There are also two 32-bit …
  …
  The user supplies a data buffer, the target LBA, and the length, as well as …
  slow, so SPDK keeps a pre-allocated set of request objects inside of the NVMe
  queue pair object - `struct spdk_nvme_qpair`. The number of requests allocated to …
  software queueing - SPDK will allow the user to submit more requests than the …
  …
  built into memory embedded into the request object - not directly into an NVMe …
  …
  PRP list description must be allocated in DMA-able memory and can be quite
  [all …]
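The request-object and queueing behavior described above is easiest to see in a short I/O path. The sketch below assumes a controller and namespace were already obtained via `spdk_nvme_probe()`/attach, uses public `spdk/nvme.h` calls, and trims all error handling:

```c
/* Sketch: one read through an I/O queue pair. The user supplies buffer,
 * LBA and length; SPDK pulls a pre-allocated request object from the
 * qpair. Error handling omitted for brevity. */
#include "spdk/nvme.h"
#include "spdk/env.h"

static bool io_done;

static void
read_complete(void *arg, const struct spdk_nvme_cpl *cpl)
{
	io_done = true;
}

static void
read_one_block(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_ns *ns)
{
	/* I/O buffers must be DMA-able, hence spdk_zmalloc(), not malloc() */
	void *buf = spdk_zmalloc(spdk_nvme_ns_get_sector_size(ns), 0x1000, NULL,
				 SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
	struct spdk_nvme_qpair *qpair =
		spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);

	/* Queues the command; if all request objects are busy, SPDK can
	 * queue in software (subject to qpair options). */
	spdk_nvme_ns_cmd_read(ns, qpair, buf, 0 /* LBA */, 1 /* count */,
			      read_complete, NULL, 0);

	/* Polled mode: completions only surface when we ask for them */
	while (!io_done) {
		spdk_nvme_qpair_process_completions(qpair, 0);
	}

	spdk_nvme_ctrlr_free_io_qpair(qpair);
	spdk_free(buf);
}
```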
bdev_module.md:
  ## Target Audience
  …
  device modules including NVMe, RAM-disk, and Ceph RBD. However, some users …
  …
  existing storage software stack. This guide is intended to demonstrate exactly …
  …
  a new bdev module - SPDK_BDEV_MODULE_REGISTER. This macro takes as argument a …
  …
  the function that returns context size (`get_ctx_size`) - scratch space that …
  …
  * Output driver-specific configuration to a JSON stream. Optional - may be NULL. …
  …
  /* Get spin-time per I/O channel in microseconds.
   * Optional - may be NULL.
  …
  longer needs it. What `destruct` does is up to the module - it may just be …
  …
  probably only makes sense to implement those if the backing storage device is
  [all …]
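As a skeleton of what the excerpt describes, a minimal module registration might look like the sketch below (field names checked against recent `spdk/bdev_module.h`, but treat it as illustrative; a real module also implements a `struct spdk_bdev_fn_table` with `destruct`, `submit_request`, and friends):

```c
/* Sketch: the bare skeleton of a bdev module registration. */
#include "spdk/bdev_module.h"

struct example_bdev_io {
	/* Per-I/O scratch space; its size is reported via get_ctx_size() */
	int dummy_state;
};

static int
example_init(void)
{
	return 0; /* called once when the bdev layer initializes */
}

static int
example_get_ctx_size(void)
{
	return sizeof(struct example_bdev_io);
}

static struct spdk_bdev_module example_if = {
	.name = "example",
	.module_init = example_init,
	.get_ctx_size = example_get_ctx_size,
};

/* Registers the module with the bdev layer at application start-up */
SPDK_BDEV_MODULE_REGISTER(example, &example_if)
```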
bdev_pg.md:
  ## Target Audience
  …
  A block device is a storage device that supports reading and writing data in
  fixed-size blocks. These blocks are usually 512 or 4096 bytes. The …
  …
  - Automatic queueing of I/O requests in response to queue full or out-of-memory conditions
  - Hot remove support, even while I/O traffic is occurring.
  - I/O statistics such as bandwidth and latency
  - Device reset support and I/O timeout tracking
  …
  Aliases behave like symlinks - they can be used interchangeably with the real …
  …
  a physical NVMe SSD when the NVMe SSD is hot-unplugged. In this case …
  spdk_bdev_get_io_channel(). This will allocate the necessary per-thread
  [all …]
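A short sketch of the open-then-get-channel flow referenced above, assuming it runs on an SPDK thread inside a running application; `"Malloc0"` is a hypothetical bdev name and error handling is trimmed:

```c
/* Sketch: open a bdev by name, get a per-thread channel, issue one read. */
#include "spdk/bdev.h"

static void
bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev,
	      void *event_ctx)
{
	/* e.g. SPDK_BDEV_EVENT_REMOVE arrives on hot-unplug, as described above */
}

static void
read_done(struct spdk_bdev_io *bdev_io, bool success, void *cb_arg)
{
	spdk_bdev_free_io(bdev_io);
}

static int
read_from_bdev(void *buf)
{
	struct spdk_bdev_desc *desc;
	struct spdk_io_channel *ch;
	int rc;

	rc = spdk_bdev_open_ext("Malloc0", false, bdev_event_cb, NULL, &desc);
	if (rc != 0) {
		return rc;
	}

	/* Allocates (or reference-counts) the per-thread channel */
	ch = spdk_bdev_get_io_channel(desc);

	rc = spdk_bdev_read(desc, ch, buf, 0, 4096, read_done, NULL);

	/* Real code releases ch and desc only after all I/O has completed */
	return rc;
}
```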
blob.md:
  ## Target Audience {#blob_pg_audience}
  …
  Blobstore is a persistent, power-fail-safe block allocator designed to be used as the local storage …
  backing a higher-level storage service, typically in lieu of a traditional filesystem. These higher …
  …
  distributed storage systems (e.g. Ceph, Cassandra). It is not designed to be a general purpose files…
  …
  The Blobstore defines a hierarchy of storage abstractions as follows.
  …
  * **Blobstore**: An SSD which has been initialized by a Blobstore-based application is referred to …
  …
  (ASCII-art diagram of the Blobstore disk layout elided)
  [all …]
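To connect the hierarchy to code, the sketch below initializes a Blobstore on top of a bdev using the public `spdk/blob.h` and `spdk/blob_bdev.h` APIs; `"Nvme0n1"` is a hypothetical bdev name, and the completion callback is where a real application would begin creating blobs:

```c
/* Sketch: initialize a Blobstore on a bdev inside a running SPDK app. */
#include "spdk/bdev.h"
#include "spdk/blob.h"
#include "spdk/blob_bdev.h"

static void
bs_init_done(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
{
	/* On success, bs is the handle used for spdk_bs_create_blob(), etc. */
}

static void
bs_dev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev,
		void *event_ctx)
{
	/* Handle bdev hot-remove and similar events here */
}

static void
init_blobstore(void)
{
	struct spdk_bs_dev *bs_dev = NULL;

	/* Wrap a bdev in the block-device interface Blobstore consumes */
	if (spdk_bdev_create_bs_dev_ext("Nvme0n1", bs_dev_event_cb, NULL,
					&bs_dev) != 0) {
		return;
	}

	/* NULL opts = defaults; writes Blobstore metadata asynchronously */
	spdk_bs_init(bs_dev, NULL, bs_init_done, NULL);
}
```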
/spdk/
README.md:
  # Storage Performance Development Kit

  (Travis CI build-status and GoDoc badges truncated in the excerpt)
  …
  The Storage Performance Development Kit ([SPDK](http://www.spdk.io)) provides a set of tools
  and libraries for writing high performance, scalable, user-mode storage …
  …
  * [NVMe over Fabrics target](http://www.spdk.io/doc/nvmf.html)
  * [iSCSI target](http://www.spdk.io/doc/iscsi.html)
  * [vhost target](http://www.spdk.io/doc/vhost.html)
  * [Virtio-SCSI driver](http://www.spdk.io/doc/virtio.html)
  [all …]
/spdk/docker/monitoring/
telegraf.conf:
  # SPDX-License-Identifier: Apache-2.0
  …
  urls = ["http://storage-target:9009"]
  headers = {"Content-Type" = "application/json"}
/spdk/test/nvmf/host/
discovery.sh:
  # SPDX-License-Identifier: BSD-3-Clause
  …
  testdir=$(readlink -f $(dirname $0))
  rootdir=$(readlink -f $testdir/../../..)
  …
  … "Skipping tests on RDMA because the rdma stack fails to configure the same IP for host and target."
  …
  DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
  …
  NQN=nqn.2016-06.io.spdk:cnode
  …
  HOST_NQN=nqn.2021-12.io.spdk:test
  …
  # We will start the target as normal, emulating a storage cluster. We will simulate new paths …
  …
  nvmfappstart -m 0x2
  …
  $rpc_py nvmf_create_transport $NVMF_TRANSPORT_OPTS -u 8192
  [all …]
mdns_discovery.sh:
  # SPDX-License-Identifier: BSD-3-Clause
  …
  testdir=$(readlink -f $(dirname $0))
  rootdir=$(readlink -f $testdir/../../..)
  …
  DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery
  …
  NQN=nqn.2016-06.io.spdk:cnode
  NQN2=nqn.2016-06.io.spdk:cnode2
  …
  HOST_NQN=nqn.2021-1…
  [all …]
/spdk/lib/nvmf/
nvmf_internal.h:
  /* SPDX-License-Identifier: BSD-3-Clause …
  …
  /* Used for round-robin assignment of connections to poll groups */
  …
  /* Allowed DH-HMAC-CHAP digests/dhgroups */
  …
  /* Array of namespace information for each namespace indexed by nsid - 1 */
  …
   * This structure represents an NVMe-oF controller, …
  …
  /* Time to trigger keep-aliv…
  [all …]
vfio_user.c:
  /* SPDX-License-Identifier: BSD-3-Clause
   * Copyright (c) 2019-2022, Nutanix Inc. All rights reserved.
  …
   * NVMe over vfio-user transport
  …
  #include <vfio-user/libvfio-user.h>
  #include <vfio-user/pci_defs.h>
  …
  #define NVMF_VFIO_USER_DEFAULT_MAX_IO_SIZE ((NVMF_REQ_MAX_BUFFERS -…
  [all …]
/spdk/test/common/
autotest_common.sh:
  # SPDX-License-Identifier: BSD-3-Clause
  …
  # unset'ing foo[-1] under older Bash (4.2 -> CentOS 7) won't work, hence the dance
  unset -v "X_STACK[${#X_STACK[@]} - 1 < 0 ? 0 : ${#X_STACK[@]} - 1]" # pop
  …
  set -…
  [all …]
/spdk/scripts/
rpc.py:
  # SPDX-License-Identifier: BSD-3-Clause
  …
  # Copyright (c) 2022-2024 NVIDIA CORPORATION & AFFILIATES. All rights reserved.
  …
  parser.add_argument('-s', dest='server_addr',
  …
  parser.add_argument('-p', dest='port',
  …
  parser.add_argument('-t', dest='timeout',
  …
  parser.add_argument('-r', dest='conn_retries',
  …
  parser.add_argument('-…
  [all …]
setup.sh:
  # SPDX-License-Identifier: BSD-3-Clause
  …
  set -e
  shopt -s nullglob extglob
  …
  os=$(uname -s)
  …
  rootdir=$(readlink -f $(dirname $0))/..
  …
  [[ -n $2 ]] && (
  …
  echo "$options - a…
  [all …]