# What is SPDK {#about}

The Storage Performance Development Kit (SPDK) provides a set of tools and
libraries for writing high performance, scalable, user-mode storage
applications. It achieves high performance through the use of a number of key
techniques:

* Moving all of the necessary drivers into userspace, which avoids syscalls
  and enables zero-copy access from the application.
* Polling hardware for completions instead of relying on interrupts, which
  lowers both total latency and latency variance.
* Avoiding all locks in the I/O path, instead relying on message passing (see
  the sketch after this list).

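To make the last of these techniques concrete, here is a minimal sketch of the
message-passing pattern using SPDK's thread library. It is an illustration
rather than code from the SPDK tree: the `counter` structure and the thread
name are hypothetical, error handling is omitted, and signatures may differ
slightly between SPDK releases.

```c
#include <inttypes.h>
#include <stdio.h>

#include "spdk/env.h"
#include "spdk/thread.h"

/* State owned exclusively by one SPDK thread, so it needs no lock. */
struct counter {
	uint64_t value;
};

/* Runs on the owning thread; it is the only context that touches 'value'. */
static void
increment_msg(void *ctx)
{
	struct counter *c = ctx;

	c->value++;
}

int
main(void)
{
	struct spdk_env_opts env_opts;
	struct spdk_thread *thread;
	struct counter c = { 0 };

	/* The thread library allocates its message rings from the SPDK env. */
	spdk_env_opts_init(&env_opts);
	env_opts.name = "msg_example";
	if (spdk_env_init(&env_opts) < 0) {
		return 1;
	}

	spdk_thread_lib_init(NULL, 0);
	thread = spdk_thread_create("owner", NULL);
	spdk_set_thread(thread);

	/* Instead of locking 'c', other threads send a message to its owner. */
	spdk_thread_send_msg(thread, increment_msg, &c);

	/* Drive the owner's message queue; a real application polls in a loop. */
	spdk_thread_poll(thread, 0, 0);
	printf("counter = %" PRIu64 "\n", c.value);

	/* Tear the thread down once it has drained its remaining messages. */
	spdk_thread_exit(thread);
	while (!spdk_thread_is_exited(thread)) {
		spdk_thread_poll(thread, 0, 0);
	}
	spdk_thread_destroy(thread);
	spdk_thread_lib_fini();

	return 0;
}
```
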
The bedrock of SPDK is a user space, polled-mode, asynchronous, lockless
[NVMe](http://www.nvmexpress.org) driver. This provides zero-copy, highly
parallel access directly to an SSD from a user space application. The driver is
written as a C library with a single public header. See @ref nvme for more
details.
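
As an illustration of how the driver is consumed, the sketch below probes the
local PCIe bus and attaches to any NVMe controllers it finds, loosely following
the driver's hello_world example. It is not a verbatim copy of that example:
names such as `g_ctrlr` are illustrative, error handling is trimmed, and
details of `spdk_env_opts` vary between releases.

```c
#include <stdbool.h>
#include <stdio.h>

#include "spdk/env.h"
#include "spdk/nvme.h"

static struct spdk_nvme_ctrlr *g_ctrlr;

/* Called for each controller found; returning true requests an attach. */
static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	 struct spdk_nvme_ctrlr_opts *opts)
{
	return true;
}

/* Called once the controller has been initialized and is ready for I/O. */
static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
	  struct spdk_nvme_ctrlr *ctrlr, const struct spdk_nvme_ctrlr_opts *opts)
{
	const struct spdk_nvme_ctrlr_data *cdata = spdk_nvme_ctrlr_get_data(ctrlr);

	printf("Attached to %s (%.*s)\n", trid->traddr,
	       (int)sizeof(cdata->mn), cdata->mn);
	g_ctrlr = ctrlr;
}

int
main(void)
{
	struct spdk_env_opts opts;

	/* Initialize the SPDK environment (hugepages, PCI access, and so on). */
	spdk_env_opts_init(&opts);
	opts.name = "hello_nvme";
	if (spdk_env_init(&opts) < 0) {
		return 1;
	}

	/* A NULL transport ID enumerates the local PCIe bus. */
	if (spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) != 0) {
		return 1;
	}

	if (g_ctrlr != NULL) {
		/* I/O would be submitted here via queue pairs before detaching. */
		spdk_nvme_detach(g_ctrlr);
	}

	return 0;
}
```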

SPDK further provides a full block stack as a user space library that performs
many of the same operations as a block stack in an operating system. This
includes unifying the interface between disparate storage devices, queueing to
handle conditions such as out of memory or I/O hangs, and logical volume
management. See @ref bdev for more information.
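
The sketch below, loosely modeled on the hello_bdev example, shows the shape of
that API: an application started through SPDK's event framework opens a block
device by name and submits a read, with the completion delivered through an
asynchronous callback. The bdev name `Malloc0` and the config file `bdev.json`
are assumptions made for illustration, and error handling is abbreviated.

```c
#include <stdbool.h>
#include <stdio.h>

#include "spdk/bdev.h"
#include "spdk/env.h"
#include "spdk/event.h"
#include "spdk/thread.h"

static struct spdk_bdev_desc *g_desc;
static struct spdk_io_channel *g_ch;
static void *g_buf;

/* Notified of events such as hot-remove or resize; ignored in this sketch. */
static void
bdev_event_cb(enum spdk_bdev_event_type type, struct spdk_bdev *bdev, void *ctx)
{
}

static void
read_complete(struct spdk_bdev_io *bdev_io, bool success, void *cb_arg)
{
	printf("read %s\n", success ? "completed" : "failed");
	spdk_bdev_free_io(bdev_io);
	spdk_dma_free(g_buf);
	spdk_put_io_channel(g_ch);
	spdk_bdev_close(g_desc);
	spdk_app_stop(success ? 0 : -1);
}

/* Runs on the app thread once the framework and bdev layer are up. */
static void
app_start(void *arg)
{
	struct spdk_bdev *bdev;
	uint32_t blk_size;

	if (spdk_bdev_open_ext("Malloc0", false, bdev_event_cb, NULL, &g_desc) != 0) {
		spdk_app_stop(-1);
		return;
	}
	bdev = spdk_bdev_desc_get_bdev(g_desc);
	g_ch = spdk_bdev_get_io_channel(g_desc);
	blk_size = spdk_bdev_get_block_size(bdev);
	g_buf = spdk_dma_zmalloc(blk_size, spdk_bdev_get_buf_align(bdev), NULL);

	/* The call is the same whether Malloc0 is RAM, an NVMe namespace, etc. */
	spdk_bdev_read(g_desc, g_ch, g_buf, 0, blk_size, read_complete, NULL);
}

int
main(void)
{
	struct spdk_app_opts opts;
	int rc;

	spdk_app_opts_init(&opts, sizeof(opts));
	opts.name = "hello_bdev";
	/* Hypothetical JSON config that defines a bdev named "Malloc0". */
	opts.json_config_file = "bdev.json";

	rc = spdk_app_start(&opts, app_start, NULL);
	spdk_app_fini();

	return rc;
}
```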

Finally, SPDK provides
[NVMe-oF](http://www.nvmexpress.org/nvm-express-over-fabrics-specification-released),
[iSCSI](https://en.wikipedia.org/wiki/ISCSI), and
[vhost](http://blog.vmsplice.net/2011/09/qemu-internals-vhost-architecture.html)
servers built on top of these components that are capable of serving disks over
the network or to other processes. The standard Linux kernel initiators for
NVMe-oF and iSCSI interoperate with these targets, as does QEMU with vhost.
These servers can be up to an order of magnitude more CPU efficient than other
implementations. These targets can be used as examples of how to implement a
high performance storage target, or used as the basis for production
deployments.