| a91d250f | 20-Nov-2024 |
Shuhei Matsumoto <smatsumoto@nvidia.com> |
bdev: Insert metadata using bounce/accel buffer if I/O is not aware of metadata
Add DIF insert/strip into the generic bdev layer.
Use the bounce buffer if the I/O already allocates one, regardless of whether the I/O is aware of metadata. Allocate and use a bounce buffer if the I/O is not aware of metadata and either does not use a memory domain or the bdev module does not support accel sequences. Allocate and use an accel buffer if the I/O uses a memory domain, the bdev module supports accel sequences, and the I/O is not aware of metadata.
When accel buffer is used, join the existing code path to pull data.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Change-Id: I057d64ab4a1f48c838e1cddd08f3e12cc595817b Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25087 Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Jim Harris <jim.harris@nvidia.com> Community-CI: Mellanox Build Bot Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
|
| ff173863 | 20-Nov-2024 |
Shuhei Matsumoto <smatsumoto@nvidia.com> |
ut/bdev: Remove duplication with many stubs among unit test files
Move all duplicated stubs in bdev.c, mt/bdev.c, and part.c unit test files into the new file common_stubs.h in test/common/lib/bdev.
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Change-Id: Ic3d75821bf828e196fa576a18feae90d8bd2ffeb Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25455 Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Reviewed-by: Jim Harris <jim.harris@nvidia.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Community-CI: Mellanox Build Bot
|
| cab1decc | 13-Aug-2024 |
Jim Harris <jim.harris@samsung.com> |
thread: add NUMA node support to spdk_iobuf_put()
For the default numa-disabled case, just always free the buffer to node 0. Otherwise use spdk_mem_get_numa_id() to find the numa_id based on the buffer pointer.
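The put-side logic described above can be sketched in a few lines. This is an illustrative stand-in only, not SPDK's actual implementation: the names `node_cache`, `cache_put`, and `buf_numa_id` are hypothetical, and `buf_numa_id` merely mimics the role of `spdk_mem_get_numa_id()` by deriving a node index from the buffer address.

```c
#include <assert.h>
#include <stddef.h>
#include <stdint.h>

#define MAX_NUMA_NODES 2
#define CACHE_DEPTH    8

/* Hypothetical per-NUMA-node free-buffer cache. */
struct node_cache {
	void *bufs[CACHE_DEPTH];
	size_t count;
};

static struct node_cache g_cache[MAX_NUMA_NODES];
static int g_numa_enabled; /* 0: default numa-disabled case, always node 0 */

/* Stand-in for spdk_mem_get_numa_id(): derive a node from the address. */
static int
buf_numa_id(void *buf)
{
	return (int)(((uintptr_t)buf >> 12) % MAX_NUMA_NODES);
}

/* Free a buffer back to the cache of the node that owns it. */
static void
cache_put(void *buf)
{
	int node = g_numa_enabled ? buf_numa_id(buf) : 0;
	struct node_cache *c = &g_cache[node];

	if (c->count < CACHE_DEPTH) {
		c->bufs[c->count++] = buf;
	}
}
```

With NUMA disabled every buffer lands in node 0's cache; with NUMA enabled the owning node is looked up from the buffer pointer.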
Signed-off-by: Jim Harris <jim.harris@samsung.com> Change-Id: Ibcb6d8dc83c5e9a6a01aac9732728c62fa4c0719 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24557 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Mellanox Build Bot Reviewed-by: Ben Walker <ben@nvidia.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com> Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
|
| 2ef611c1 | 12-Aug-2024 |
Jim Harris <jim.harris@samsung.com> |
thread: update all iobuf non-get/put functions for multiple NUMA nodes
We add a custom IOBUF_FOREACH_SOCKET_ID() iterator, so that when NUMA is disabled, it will only use node 0 for all operations.
But the get/put functions still only use NUMA node 0. Upcoming patches will add the APIs and implementation to choose a NUMA node to allocate from.
Signed-off-by: Jim Harris <jim.harris@samsung.com> Change-Id: Ic863d483432ad2f059c7facccf69b820bcad828a Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24542 Reviewed-by: Ben Walker <ben@nvidia.com> Community-CI: Mellanox Build Bot Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com>
|
| 42d1bd28 | 12-Aug-2024 |
Jim Harris <jim.harris@samsung.com> |
thread: add enable_numa parameter to iobuf_set_options RPC
This parameter does not yet actually enable per-NUMA node iobuf buffer pools. It only checks that the application was built with support for the number of NUMA nodes reported by the system.
Signed-off-by: Jim Harris <jim.harris@samsung.com> Change-Id: I1b9d11ccb8f6914280874a40754c51625d21645d Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24539 Reviewed-by: Ben Walker <ben@nvidia.com> Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com> Community-CI: Mellanox Build Bot
|
| f1900e4d | 19-Jul-2024 |
Jim Harris <jim.harris@samsung.com> |
thread: convert iobuf nodes to 1-sized arrays
This is just a step towards supporting a bigger array, 1 element per NUMA node.
Signed-off-by: Jim Harris <jim.harris@samsung.com> Change-Id: If235aa9120965921cda6291043169a5c55ceb144 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24520 Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Reviewed-by: Ben Walker <ben@nvidia.com> Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com> Community-CI: Mellanox Build Bot Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
|
| eba178bf | 19-Jul-2024 |
Jim Harris <jim.harris@samsung.com> |
thread: add spdk_iobuf_node_cache
We also rename spdk_iobuf_pool to spdk_iobuf_pool_cache.
This makes it more clear that the iobuf_channel has a cache of buffers per NUMA node. Currently there is only one node, but future changes will extend this to support multiple NUMA nodes when configured.
Signed-off-by: Jim Harris <jim.harris@samsung.com> Change-Id: Id403a089a0de943bd3717e40aba156cbb2368cab Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24517 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Community-CI: Mellanox Build Bot Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Reviewed-by: Ben Walker <ben@nvidia.com> Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com> Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
|
| b82fd48a | 19-Jul-2024 |
Jim Harris <jim.harris@samsung.com> |
thread: remove pool parameter from spdk_iobuf_for_each_entry
We always want to iterate across the entire iobuf_channel, so just make the iterator do exactly that, rather than requiring the caller to have to iterate both of the pools itself.
This both simplifies the calling logic, and prepares for upcoming changes which will support multiple NUMA node caches in one channel.
Signed-off-by: Jim Harris <jim.harris@samsung.com> Change-Id: Ieed144671e7ee7cb4d7b7b28e803ea3cae641fee Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24515 Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com> Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Ben Walker <ben@nvidia.com> Community-CI: Mellanox Build Bot Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com>
|
| ffd098cf | 17-Jul-2024 |
Konrad Sztyber <konrad.sztyber@intel.com> |
nvme/auth: process auth transactions inside transports
We chose to process auth transactions in the common fabrics code to avoid code duplication. However, it makes it difficult to implement reauthentication, as the transports are polling for fabrics connect, which doesn't happen once a connection has been established.
So, polling for the authentication will now be done by the transports. Also, the next patch will add a transport callback that moves a qpair back to the AUTHENTICATING state after it's been connected, which will be used to implement reauthentication.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com> Change-Id: Idd0e82593f1730b37eec452edc5e0ea49627921d Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24232 Reviewed-by: Jim Harris <jim.harris@samsung.com> Community-CI: Mellanox Build Bot Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Ben Walker <ben@nvidia.com>
|
| 9ae05589 | 13-Aug-2024 |
Jim Harris <jim.harris@samsung.com> |
Revert "test/env: add UNIT_TEST_NO_PCI_ADDR"
This #define is no longer needed, there have been other changes to the env_dpdk code that have made it obsolete.
This reverts commit 577b667ac364701f222025309e6c6d9df2dc7aef.
Signed-off-by: Jim Harris <jim.harris@samsung.com> Change-Id: I61bd4e172ee70a5dcb99c8c6fc1fb19070a2a7ce Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24556 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com> Reviewed-by: Ben Walker <ben@nvidia.com>
|
| 38b1eaa4 | 20-Aug-2024 |
Jim Harris <jim.harris@samsung.com> |
env: add spdk_env_get_numa_id()
This will effectively replace spdk_env_get_socket_id(), which is marked obsolete as part of this patch.
Signed-off-by: Jim Harris <jim.harris@samsung.com> Change-Id: I5d39e5e1b98e07f709b14c86382e59ea76584def Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24608 Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com> Community-CI: Mellanox Build Bot Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Reviewed-by: Ben Walker <ben@nvidia.com> Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
|
| 186b109d | 20-Aug-2024 |
Jim Harris <jim.harris@samsung.com> |
env: add SPDK_ENV_NUMA_ID_ANY and replace socket_id with numa_id
We will try to avoid further proliferation of "SOCKET_ID" to refer to a NUMA socket ID moving forward, and just use "NUMA_ID" to avoid confusion with TCP sockets.
Change all of the existing in-tree SPDK_ENV_SOCKET_ID_ANY uses to SPDK_ENV_NUMA_ID_ANY, but keep the old #define around, at least for now. Also change all 'socket_id' parameters to 'numa_id'.
We still have spdk_env_get_socket_id(), we will need to keep this but next patch will add spdk_env_get_numa_id().
Signed-off-by: Jim Harris <jim.harris@samsung.com> Change-Id: Idc31c29e32b708c24d88f9c6fecaf9a99e34ba1e Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24607 Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com> Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Reviewed-by: Ben Walker <ben@nvidia.com> Community-CI: Mellanox Build Bot
|
| 34f6147c | 12-Apr-2024 |
Alexey Marchuk <alexeymar@nvidia.com> |
nvme/rdma: Use rdma_utils to manage memory domains
Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com> Change-Id: Ia50cf1389798593a90a141b5f49641b9ce6b072d Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/23095 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Mellanox Build Bot Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Reviewed-by: Ben Walker <ben@nvidia.com>
|
| 8ffb2c09 | 11-Apr-2024 |
Alexey Marchuk <alexeymar@nvidia.com> |
lib/rdma_utils: Explicit access flags to create memmap
This provides more flexibility to the caller.
Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com> Change-Id: Ie0ec8093fd86d36c5f75424eea4aa033d3c5999f Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/23093 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Mellanox Build Bot Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Reviewed-by: Ben Walker <ben@nvidia.com>
|
| 8a01b4d6 | 11-Apr-2024 |
Alexey Marchuk <alexeymar@nvidia.com> |
lib/rdma_utils: Introduce generic rdma utils lib
This library holds generic rdma functions from rdma_provider. That is done to avoid cross-link references in future patches. The library will be extended with new functionality.
Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com> Change-Id: If1918efb0fe6f0baa77cf20f992fbd6a97de4264 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/23072 Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Reviewed-by: Ben Walker <ben@nvidia.com> Community-CI: Mellanox Build Bot Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
|
| cf151d60 | 11-Apr-2024 |
Alexey Marchuk <alexeymar@nvidia.com> |
lib/rdma: Rename lib to rdma_provider
The new name better reflects the purpose of this library. The next patch moves part of its functions to a dedicated lib, and the new name helps to avoid confusion.
Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com> Change-Id: If7296ed77a07f7084bce66971d6937d7671b3a91 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/23071 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Mellanox Build Bot Reviewed-by: Ben Walker <ben@nvidia.com> Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
|
| 194983ee | 28-Feb-2024 |
John Levon <john.levon@nutanix.com> |
harmonize spdk_*_get/set_opts()
spdk_bdev_get/set_opts() is careful to check its size argument, so that it can add options in a backwards-compatible manner. However, spdk_iobuf_get/set_opts() and spdk_accel_get/set_opts() both have slightly different interfaces to the bdev variant, and are less careful.
Make all three variants operate in the same manner instead.
For spdk_iobuf_set_opts(), make all validation consistently return an error instead of trying to adjust automatically.
Signed-off-by: John Levon <john.levon@nutanix.com> Change-Id: I4077a5f1df7039992a556544acdcb1ef379887ae Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/22093 Reviewed-by: Jim Harris <jim.harris@samsung.com> Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Community-CI: Mellanox Build Bot Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
|
| bd7c9e07 | 11-Nov-2023 |
Jim Harris <jim.harris@samsung.com> |
nvme: do port number checking in nvme_parse_addr()
TCP was already range checking the port number, so move it to the common nvme_parse_addr() so that RDMA gets the port checking as well.
Note, previously TCP was checking against MAX_INT, so change this to reject >= 65536 instead which is the real IP port limit.
Signed-off-by: Jim Harris <jim.harris@samsung.com> Suggested-by: Alexey Marchuk <alexeymar@nvidia.com> Change-Id: I91e2f23ac4f4d8ec80cdea95a4bbff73477b8464 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/20565 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Mellanox Build Bot Reviewed-by: Changpeng Liu <changpeng.liu@intel.com> Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
|
| 01bbc271 | 11-Nov-2023 |
Jim Harris <jim.harris@samsung.com> |
nvme: add nvme_parse_addr()
nvme_tcp_parse_addr() and nvme_rdma_parse_addr() were exact duplicates, so break this out into a common helper function.
Signed-off-by: Jim Harris <jim.harris@samsung.com> Change-Id: I5dcf3c218992c41f51a883f020847de36f1652e3 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/20564 Reviewed-by: Changpeng Liu <changpeng.liu@intel.com> Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Mellanox Build Bot
|
| ae431e31 | 28-Jul-2023 |
Konrad Sztyber <konrad.sztyber@intel.com> |
test/unit: move spdk_cunit.h to include/spdk_internal
It'll make it easier to include this file outside of unit tests.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com> Change-Id: I171ddb8649f67b5786f08647560e2907603d0574 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/19284 Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com> Community-CI: Mellanox Build Bot Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
|
| 784b3b89 | 29-Nov-2022 |
Ben Walker <benjamin.walker@intel.com> |
sock: Add a zero copy receive interface
This is not implemented by any sock module just yet.
The basic concept is that the user provides buffers to a socket group. These get filled in with the next portions of the stream on each socket. The user then calls spdk_sock_recv_next() to get the pointer to the buffer holding the next part of the stream. When the user is done, they can put the buffer back to the group to be used again.
The provided buffers are held in a pool on the socket group. Implementations can request a buffer from the pool or set up a notification any time a buffer is returned.
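The buffer lifecycle described above can be sketched as follows. This is a hedged, self-contained mock of the concept only: the `buf_pool`, `pool_provide`, `pool_fill_one`, and `pool_recv_next` names are hypothetical stand-ins for the group-provide/`spdk_sock_recv_next` flow, not the actual spdk_sock API.

```c
#include <assert.h>
#include <stddef.h>

#define POOL_SIZE 4

/* Hypothetical socket-group buffer pool: buffers posted by the user wait in
 * free_bufs; the transport moves them to ready_bufs as stream data arrives. */
struct buf_pool {
	void *free_bufs[POOL_SIZE];
	size_t free_count;
	void *ready_bufs[POOL_SIZE];
	size_t ready_count;
};

/* User provides a buffer to the group for the transport to fill. */
static void
pool_provide(struct buf_pool *p, void *buf)
{
	if (p->free_count < POOL_SIZE) {
		p->free_bufs[p->free_count++] = buf;
	}
}

/* Transport side: take a posted buffer and mark it filled with stream data. */
static void *
pool_fill_one(struct buf_pool *p)
{
	if (p->free_count == 0) {
		return NULL;
	}
	void *buf = p->free_bufs[--p->free_count];
	p->ready_bufs[p->ready_count++] = buf;
	return buf;
}

/* User side: get the buffer holding the next part of the stream. */
static void *
pool_recv_next(struct buf_pool *p)
{
	if (p->ready_count == 0) {
		return NULL;
	}
	return p->ready_bufs[--p->ready_count];
}
```

When the user is done with a returned buffer, putting it back is just another `pool_provide()` call, which matches the recycle step in the commit message.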
Change-Id: I236fca43c6e3e11aafeb58caba0e4798654124be Signed-off-by: Ben Walker <benjamin.walker@intel.com> Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16105 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com> Community-CI: Mellanox Build Bot Reviewed-by: Jim Harris <james.r.harris@intel.com>
|
| 4bb9dcdb | 17-Jan-2023 |
Ben Walker <benjamin.walker@intel.com> |
test: Add test_iobuf.c to mock the iobuf library
Use it in all of the places that were previously hooking spdk_mempool_get.
Change-Id: I311f75fb9601b4f987b106160eb0a0014d3327cd Signed-off-by: Ben Walker <benjamin.walker@intel.com> Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16329 Community-CI: Mellanox Build Bot Reviewed-by: Jim Harris <james.r.harris@intel.com> Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
|
| bcd987ea | 07-Dec-2022 |
Shuhei Matsumoto <smatsumoto@nvidia.com> |
nvme_rdma: Support SRQ for I/O qpairs
Support SRQ in RDMA transport of NVMe-oF initiator.
Add a new spdk_nvme_transport_opts structure and add rdma_srq_size to the spdk_nvme_transport_opts structure.
For the user of the NVMe driver, provide two public APIs, spdk_nvme_transport_get_opts() and spdk_nvme_transport_set_opts().
In the NVMe driver, the instance of spdk_nvme_transport_opts, g_spdk_nvme_transport_opts, is accessible throughout.
Due to an issue where async event handling caused conflicts between initiator and target, the NVMe-oF RDMA initiator does not handle the LAST_WQE_REACHED event. Hence, it may get a WC for an already destroyed QP. To clarify this, add a comment in the source code.
The following is a result of a small performance evaluation using SPDK NVMe perf tool. Even for queue_depth=1, overhead was less than 1%. Eventually, we may be able to enable SRQ by default for NVMe-oF initiator.
1.1 randwrite, qd=1, srq=enabled
./build/examples/perf -q 1 -s 1024 -w randwrite -t 30 -c 0XF -o 4096 -r
========================================================
                                                                               Latency(us)
Device Information                                                       :       IOPS      MiB/s   Average       min       max
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  162411.97     634.42      6.14      5.42    284.07
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 1:  163095.87     637.09      6.12      5.41    423.95
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  164725.30     643.46      6.06      5.32    165.60
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  162548.57     634.96      6.14      5.39    227.24
========================================================
Total                                                                    :  652781.70    2549.93      6.12

1.2 randwrite, qd=1, srq=disabled
./build/examples/perf -q 1 -s 1024 -w randwrite -t 30 -c 0XF -o 4096 -r
========================================================
                                                                               Latency(us)
Device Information                                                       :       IOPS      MiB/s   Average       min       max
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  163398.03     638.27      6.11      5.33    240.76
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 1:  164632.47     643.10      6.06      5.29    125.22
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  164694.40     643.34      6.06      5.31    408.43
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  164007.13     640.65      6.08      5.33    170.10
========================================================
Total                                                                    :  656732.03    2565.36      6.08      5.29    408.43

2.1 randread, qd=1, srq=enabled
./build/examples/perf -q 1 -s 1024 -w randread -t 30 -c 0xF -o 4096 -r '
========================================================
                                                                               Latency(us)
Device Information                                                       :       IOPS      MiB/s   Average       min       max
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  153514.40     599.67      6.50      5.97    277.22
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 1:  153567.57     599.87      6.50      5.95    408.06
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  153590.33     599.96      6.50      5.88    134.74
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  153357.40     599.05      6.51      5.97    229.03
========================================================
Total                                                                    :  614029.70    2398.55      6.50      5.88    408.06

2.2 randread, qd=1, srq=disabled
./build/examples/perf -q 1 -s 1024 -w randread -t 30 -c 0XF -o 4096 -r '
========================================================
                                                                               Latency(us)
Device Information                                                       :       IOPS      MiB/s   Average       min       max
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  154452.40     603.33      6.46      5.94    233.15
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 1:  154711.67     604.34      6.45      5.91     25.55
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  154717.70     604.37      6.45      5.88    130.92
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  154713.77     604.35      6.45      5.91    128.19
========================================================
Total                                                                    :  618595.53    2416.39      6.45      5.88    233.15

3.1 randwrite, qd=32, srq=enabled
./build/examples/perf -q 32 -s 1024 -w randwrite -t 30 -c 0XF -o 4096 -r 'trtype:RDMA adrfam:IPv4 traddr:1.1.18.1 trsvcid:4420'
========================================================
                                                                               Latency(us)
Device Information                                                       :       IOPS      MiB/s   Average       min       max
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  672608.17    2627.38     47.56     11.33    326.96
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 1:  672386.20    2626.51     47.58     11.03    221.88
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  673343.70    2630.25     47.51      9.11    387.54
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  672799.10    2628.12     47.55     10.48    552.80
========================================================
Total                                                                    : 2691137.17   10512.25     47.55      9.11    552.80

3.2 randwrite, qd=32, srq=disabled
./build/examples/perf -q 32 -s 1024 -w randwrite -t 30 -c 0XF -o 4096 -r 'trtype:RDMA adrfam:IPv4 traddr:1.1.18.1 trsvcid:4420'
========================================================
                                                                               Latency(us)
Device Information                                                       :       IOPS      MiB/s   Average       min       max
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  672647.53    2627.53     47.56     11.13    389.95
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 1:  672756.50    2627.96     47.55      9.53    394.83
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  672464.63    2626.81     47.57      9.48    528.07
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  673250.73    2629.89     47.52      9.43    389.83
========================================================
Total                                                                    : 2691119.40   10512.19     47.55      9.43    528.07

4.1 randread, qd=32, srq=enabled
./build/examples/perf -q 32 -s 1024 -w randread -t 30 -c 0xF -o 4096 -r
========================================================
                                                                               Latency(us)
Device Information                                                       :       IOPS      MiB/s   Average       min       max
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  677286.30    2645.65     47.23     12.29    335.90
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 1:  677554.97    2646.70     47.22     20.39    196.21
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  677086.07    2644.87     47.25     19.17    386.26
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  677654.93    2647.09     47.21     18.92    181.05
========================================================
Total                                                                    : 2709582.27   10584.31     47.23     12.29    386.26

4.2 randread, qd=32, srq=disabled
./build/examples/perf -q 32 -s 1024 -w randread -t 30 -c 0XF -o 4096 -r
========================================================
                                                                               Latency(us)
Device Information                                                       :       IOPS      MiB/s   Average       min       max
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0:  677432.60    2646.22     47.22     13.05    435.91
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 1:  677450.43    2646.29     47.22     16.26    178.60
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2:  677647.10    2647.06     47.21     17.82    177.83
RDMA (addr:1.1.18.1 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3:  677047.33    2644.72     47.25     15.62    308.21
========================================================
Total                                                                    : 2709577.47   10584.29     47.23     13.05    435.91
Signed-off-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Signed-off-by: Denis Nagorny <denisn@nvidia.com> Signed-off-by: Evgeniy Kochetov <evgeniik@nvidia.com> Change-Id: I843a5eda14e872bf6e2010e9f63b8e46d5bba691 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14174 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com> Reviewed-by: Ben Walker <benjamin.walker@intel.com>
|
| a6dbe372 | 01-Nov-2022 |
paul luse <paul.e.luse@intel.com> |
update Intel copyright notices
per Intel policy to include file commit date using git cmd below. The policy does not apply to non-Intel (C) notices.
git log --follow -C90% --format=%ad --date default <file> | tail -1
and then pull just the 4 digit year from the result.
Intel copyrights were not added to files where Intel either had no contribution or the contribution lacked substance (i.e., license header updates, formatting changes, etc.). Contribution date used "--follow -C95%" to get the most accurate date.
Note that several files in this patch didn't end the license/(c) block with a blank comment line so these were added as the vast majority of files do have this last blank line. Simply there for consistency.
Signed-off-by: paul luse <paul.e.luse@intel.com> Change-Id: Id5b7ce4f658fe87132f14139ead58d6e285c04d4 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/15192 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Jim Harris <james.r.harris@intel.com> Reviewed-by: Ben Walker <benjamin.walker@intel.com> Community-CI: Mellanox Build Bot
|
| c66b68e9 | 08-Sep-2022 |
Aleksey Marchuk <alexeymar@nvidia.com> |
nvme/rdma: Inline nvme_rdma_calloc/free
These functions used to allocate resources using calloc/spdk_zmalloc depending on the g_nvme_hooks pointer. Later they were refactored to always use spdk_zmalloc, so they became simple wrappers around spdk_zmalloc and spdk_free. There is no reason to keep them; call the SPDK memory API directly.
Signed-off-by: Aleksey Marchuk <alexeymar@nvidia.com> Change-Id: I3b514b20e2128beb5d2397881d3de00111a8a3bc Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/14429 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Mellanox Build Bot Reviewed-by: Dong Yi <dongx.yi@intel.com> Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Reviewed-by: Ben Walker <benjamin.walker@intel.com>
|