2c140f58 | 18-Jul-2024 |
Alexey Marchuk <alexeymar@nvidia.com> |
nvme/rdma: Support accel sequence
If a request has an accel sequence, we append a copy task with an RDMA memory domain and don't send the capsule until the data_transfer callback is called. In the callback we expect to get a single iov and a memory key, which are sent in the NVMF capsule to the remote target. When the network transmission is finished, we finish the data transfer operation. The request is completed in the accel sequence finish_cb. A request which is executing an accel sequence has a special flag; we don't abort such requests. Also, we store the data transfer completion callback and call it in case of network failure. Added tests for this feature.
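The deferred-capsule flow described above can be sketched as a tiny state machine. This is an illustrative model only; mock_req, submit(), data_transfer_cb() and finish_cb() are invented stand-ins, not real SPDK symbols:

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

/* Invented stand-in for the request state relevant to this flow. */
struct mock_req {
	bool in_accel_seq;  /* special flag: such requests must not be aborted */
	bool capsule_sent;
	bool completed;
	uint32_t mkey;      /* memory key delivered by the data_transfer callback */
};

/* Submission path: with an accel sequence attached, defer the capsule send
 * instead of sending it immediately. */
static void submit(struct mock_req *req, bool has_accel_seq)
{
	if (has_accel_seq) {
		req->in_accel_seq = true;  /* capsule send is deferred */
	} else {
		req->capsule_sent = true;  /* regular path sends right away */
	}
}

/* data_transfer callback: the copy task produced a single iov plus a memory
 * key, so the capsule can finally be sent to the remote target. */
static void data_transfer_cb(struct mock_req *req, uint32_t mkey)
{
	req->mkey = mkey;
	req->capsule_sent = true;
}

/* accel sequence finish_cb: network transmission done, complete the request. */
static void finish_cb(struct mock_req *req)
{
	req->in_accel_seq = false;
	req->completed = true;
}

static bool run_accel_flow(struct mock_req *req)
{
	submit(req, true);
	if (req->capsule_sent) {
		return false;  /* the send must have been deferred */
	}
	data_transfer_cb(req, 0x1234);
	finish_cb(req);
	return req->capsule_sent && req->completed;
}
```

The ordering is the whole point: the capsule leaves only after the copy task reports its iov and mkey, and completion happens only in finish_cb.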
Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com> Change-Id: I021bd5f268185a5e1b2d77eb098f8daf491aacf9 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24702 Community-CI: Mellanox Build Bot Reviewed-by: Ben Walker <ben@nvidia.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
|
6c35d974 | 07-Nov-2024 |
Nathan Claudel <nclaudel@kalrayinc.com> |
lib/nvme: destruct controllers that failed init asynchronously
The controller destroy sequence is as follows:
- Set `CC.SHN` to request shutdown
- Wait for `CSTS.SHST` to be set to `0b10` (Shutdown complete)
- Destroy the associated structs when it's done or after a timeout
To do it, two things should be done:
- First, call `nvme_ctrlr_destruct_async`
- Then, poll `nvme_ctrlr_destruct_poll_async`
However, when a controller fails to initialize on probe, this polling is done synchronously using `nvme_ctrlr_destruct`, which introduces 1ms sleep between each poll.
This is really bad if a controller does not behave as expected and does not set its `CSTS.SHST` in a timely manner, because it stalls the probe thread with repeated 1ms sleeps. If hot-plug is enabled, it makes things even worse, because this operation is retried again and again.
Fix this by doing an asynchronous destruct when the controller fails to initialize. Add contexts for this operation on the probe context and poll for controllers destruction in the probe poller function.
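The difference between the old synchronous path and the new asynchronous one can be sketched with a mock poll loop. All names here are invented; the real entry points are nvme_ctrlr_destruct_async()/nvme_ctrlr_destruct_poll_async():

```c
#include <assert.h>
#include <errno.h>
#include <stdint.h>

/* Invented stand-in for the controller shutdown state. */
struct mock_ctrlr {
	uint8_t shst;        /* models CSTS.SHST */
	int polls_remaining; /* polls until the device reports 0b10 */
};

static void destruct_async(struct mock_ctrlr *c)
{
	(void)c;  /* models setting CC.SHN to request shutdown */
}

/* Non-blocking poll: returns -EAGAIN while shutdown is still in progress,
 * 0 once CSTS.SHST reads 0b10 (Shutdown complete).  A probe poller can call
 * this repeatedly without ever sleeping on the probe thread. */
static int destruct_poll_async(struct mock_ctrlr *c)
{
	if (c->polls_remaining > 0) {
		c->polls_remaining--;
		return -EAGAIN;
	}
	c->shst = 0x2;
	return 0;
}

/* Drive the poll loop the way a probe poller would, counting iterations. */
static int destruct_and_count_polls(struct mock_ctrlr *c)
{
	int polls = 0;

	destruct_async(c);
	while (destruct_poll_async(c) == -EAGAIN) {
		polls++;  /* previously, each of these iterations was a 1ms sleep */
	}
	return polls;
}
```

With a misbehaving device, polls_remaining can grow very large; the async variant simply returns to the poller each time instead of burning the probe thread in usleep().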
Signed-off-by: Nathan Claudel <nclaudel@kalrayinc.com> Change-Id: Ic072a2b7c3351a229d3b6e5c667b71dca2a84b93 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25414 Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Reviewed-by: Vasuki Manikarnike <vasuki.manikarnike@hpe.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Reviewed-by: Jim Harris <jim.harris@nvidia.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Ankit Kumar <ankit.kumar@samsung.com> Reviewed-by: Changpeng Liu <changpeliu@tencent.com> Community-CI: Mellanox Build Bot
|
d1c46ed8 | 18-Jul-2024 |
Alexey Marchuk <alexeymar@nvidia.com> |
lib/rdma_provider: Add API to check if accel seq supported
The verbs provider doesn't support accel sequences. The mlx5_dv provider supports accel sequences if a module which implements UMR is registered, i.e. the accel_mlx5 driver is registered.
Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com> Change-Id: I59944aceb22661f9de3198ecd571b1a73af28584 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24701 Community-CI: Mellanox Build Bot Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com> Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
|
af0187bf | 17-Jul-2024 |
Alexey Marchuk <alexeymar@nvidia.com> |
nvme/rdma: Remove qpair::max_recv_sge as unused
Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com> Change-Id: I92cb54e92e93ffccc9bfaa42deab30a5433d336f Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24696 Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Community-CI: Mellanox Build Bot Reviewed-by: Ben Walker <ben@nvidia.com>
|
51bde662 | 16-Jul-2024 |
Alexey Marchuk <alexeymar@nvidia.com> |
nvme/rdma: Factor out contig request preparation
Move the NVMF configuration to dedicated functions; they will be used in the next patches. Move rdma_req and cid initialization out of nvme_rdma_req_init; that is needed in the next patches to support accel sequences.
Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com> Change-Id: I9aca26d96c92d44b1b3f6542c3cf00fe9af9cc4b Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24694 Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Community-CI: Mellanox Build Bot Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Ben Walker <ben@nvidia.com>
|
1794c395 | 05-Jul-2024 |
Alexey Marchuk <alexeymar@nvidia.com> |
nvme/rdma: Allocate memory domain in rdma provider
The next patches add a data_transfer function to the memory domain; for the mlx5_dv provider, that means we can't use a memory domain created via rdma_utils. In the future, the memory domain will hold a qpair pointer; to minimize changes, we create a memory domain per qpair in this patch. The verbs provider still uses the rdma_utils library.
Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com> Change-Id: I53c20b70901c1061c8a067c612dc4ce6b9a3999a Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24692 Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Community-CI: Mellanox Build Bot Reviewed-by: Ben Walker <ben@nvidia.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com>
|
6de68644 | 23-Sep-2024 |
Ankit Kumar <ankit.kumar@samsung.com> |
nvme/poll_group: create and manage fd_group for nvme poll group
Create spdk_fd_group within spdk_nvme_poll_group, which manages interrupt events for all the file descriptors of spdk_nvme_qpair that are part of this poll group.
Two new APIs have been introduced to manage this fd_group:
1) spdk_nvme_poll_group_get_fd(): Fetches the internal epoll file descriptor of the poll group.
2) spdk_nvme_poll_group_wait(): Collectively waits for interrupt events on all the I/O queue pair file descriptors managed by the poll group. When an interrupt event is generated, it processes any outstanding completions on the interrupt-enabled I/O queue pairs. These interrupt events are registered at the time of I/O queue pair creation.
nvme_poll_group_connect_qpair() has been modified: based on the poll group's interrupt support, it now registers an event source for the queue pair's file descriptor with the internal epoll file descriptor of the poll group. Similarly, nvme_poll_group_disconnect_qpair() unregisters the event source for the queue pair's file descriptor from the internal epoll file descriptor of the poll group.
Additional checks are in place to prevent mixing interrupt-enabled and interrupt-disabled I/O queue pairs. The poll group's interrupt support capability is set by the first I/O queue pair added to it.
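The mechanism can be approximated with raw epoll and eventfd; this is a hedged sketch, not the actual fd_group implementation. Only epoll_create1/epoll_ctl/epoll_wait/eventfd below are real APIs, the wrapper names are invented:

```c
#include <assert.h>
#include <stdint.h>
#include <sys/epoll.h>
#include <sys/eventfd.h>
#include <unistd.h>

/* The poll group owns a single epoll fd; each qpair contributes one fd. */
static int make_poll_group_fd(void)
{
	return epoll_create1(0);
}

static int add_qpair_fd(int group_fd, int qpair_fd)
{
	struct epoll_event ev = { .events = EPOLLIN, .data.fd = qpair_fd };
	return epoll_ctl(group_fd, EPOLL_CTL_ADD, qpair_fd, &ev);
}

/* Roughly what spdk_nvme_poll_group_wait() does at its core: block until
 * any registered qpair fd signals, then report how many fds are ready. */
static int poll_group_wait(int group_fd, int timeout_ms)
{
	struct epoll_event evs[16];
	return epoll_wait(group_fd, evs, 16, timeout_ms);
}

static int demo(void)
{
	int group_fd = make_poll_group_fd();
	int qp_fd = eventfd(0, 0);  /* stands in for an interrupt-enabled qpair's fd */
	uint64_t one = 1;
	int nready = -1;

	if (group_fd < 0 || qp_fd < 0 || add_qpair_fd(group_fd, qp_fd) != 0) {
		return -1;
	}
	/* Simulate an interrupt event on the qpair, then wait on the group. */
	if (write(qp_fd, &one, sizeof(one)) == (ssize_t)sizeof(one)) {
		nready = poll_group_wait(group_fd, 1000);
	}
	close(qp_fd);
	close(group_fd);
	return nready;
}
```

One wait call on the group fd covers every registered qpair, which is what lets an application sleep instead of busy-polling each queue pair.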
Change-Id: If40f1ea82051ae598590f5a23ab9ed58bcb4af09 Signed-off-by: Ankit Kumar <ankit.kumar@samsung.com> Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25080 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com> Community-CI: Mellanox Build Bot Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Reviewed-by: Jim Harris <jim.harris@samsung.com>
|
f43b7650 | 16-Oct-2024 |
Ankit Kumar <ankit.kumar@samsung.com> |
lib/nvme: add opts_size to spdk_nvme_io_qpair_opts
Add opts_size to spdk_nvme_io_qpair_opts to align it with other opts structures. Clean up spdk_nvme_ctrlr_get_default_io_qpair_opts() a bit.
Use nvme_ctrlr_io_qpair_opts_copy() instead of memcpy.
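The opts_size pattern can be illustrated with a toy struct; the layout below is a stand-in, not the real spdk_nvme_io_qpair_opts:

```c
#include <assert.h>
#include <stddef.h>
#include <string.h>

/* Invented stand-in: a caller built against an older header passes a
 * smaller opts_size, and the library only touches fields that existed
 * in that version. */
struct mock_opts {
	size_t opts_size;  /* set by the caller to sizeof(their struct) */
	int io_queue_size;
	int new_field;     /* added in a newer library version */
};

static void get_default_opts(struct mock_opts *opts, size_t opts_size)
{
	memset(opts, 0, opts_size);
	opts->opts_size = opts_size;
#define FIELD_OK(f) (offsetof(struct mock_opts, f) + sizeof(opts->f) <= opts_size)
	if (FIELD_OK(io_queue_size)) {
		opts->io_queue_size = 256;
	}
	if (FIELD_OK(new_field)) {
		opts->new_field = 42;
	}
#undef FIELD_OK
}

/* Copy in the spirit of nvme_ctrlr_io_qpair_opts_copy(): never write past
 * the size the caller declared, instead of a blind full-struct memcpy. */
static void opts_copy(struct mock_opts *dst, const struct mock_opts *src)
{
	size_t n = src->opts_size < sizeof(*dst) ? src->opts_size : sizeof(*dst);

	memcpy(dst, src, n);
}
```

This is why a size-bounded copy replaces memcpy: an old caller's struct simply ends before new_field, and neither defaulting nor copying may reach past it.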
Change-Id: I6d2f7d16a2f4f6cfb68e3fe5ac0515050e8c36ee Signed-off-by: Ankit Kumar <ankit.kumar@samsung.com> Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25246 Reviewed-by: Jim Harris <jim.harris@samsung.com> Community-CI: Mellanox Build Bot Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com> Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
|
3ab7a1f6 | 12-Sep-2024 |
Ankit Kumar <ankit.kumar@samsung.com> |
nvme: enable interrupts for pcie nvme devices
Add an enable_interrupts option to the spdk_nvme_ctrlr_opts structure. If this is set to true for PCIe controllers, interrupts may be enabled during initialization. Applications are required to check the resulting value after the attach step to verify success. A maximum of 256 eventfds can be reserved for I/O queues, but the actual number can be lower and is based on the minimum of the requested and available I/O queues. The nvme_pcie_ctrlr_cmd_create_io_cq() interface has been modified to create I/O completion queues with interrupts. The interrupt vector field corresponds to the queue identifier in this case.
This is only supported within a primary SPDK process, and if enabled SPDK will not support any secondary processes.
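The opt-in contract described above can be sketched with invented stand-in types; the point is only that the application must re-read the option after attach:

```c
#include <assert.h>
#include <stdbool.h>

/* Invented stand-ins, not the real SPDK structures. */
struct mock_opts {
	bool enable_interrupts;
};

struct mock_ctrlr {
	struct mock_opts opts;
};

/* Init clears the flag back to false when interrupts could not actually be
 * enabled, e.g. not a primary process or no eventfds available. */
static void ctrlr_init(struct mock_ctrlr *c, struct mock_opts req, bool hw_ok)
{
	c->opts = req;
	if (c->opts.enable_interrupts && !hw_ok) {
		c->opts.enable_interrupts = false;
	}
}

/* What the application checks after the attach step. */
static bool interrupts_enabled(const struct mock_ctrlr *c)
{
	return c->opts.enable_interrupts;
}
```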
Change-Id: Iff4e2348a0b77199cabb30c0bf93e0eed920cc93 Signed-off-by: Ankit Kumar <ankit.kumar@samsung.com> Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24905 Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Reviewed-by: Jim Harris <jim.harris@samsung.com> Community-CI: Mellanox Build Bot Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com> Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
|
f7f0fdf5 | 03-Sep-2024 |
Ankit Kumar <ankit.kumar@samsung.com> |
nvme: Add transport interface to enable interrupts
The following commit will enable interrupts for the PCIe transport, so add a new transport interface for it.
Change-Id: I5cd87b0bb4ec95d6a9b862b659405cf56d8f864a Signed-off-by: Ankit Kumar <ankit.kumar@samsung.com> Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24904 Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Reviewed-by: Jim Harris <jim.harris@samsung.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Community-CI: Mellanox Build Bot
|
75a12cbf | 10-Oct-2024 |
Slawomir Ptak <slawomir.ptak@intel.com> |
test: Comparison operator fixes
Change-Id: I1296b19b590c2c6cbb75b9362e441cd6219d7a9f Signed-off-by: Slawomir Ptak <slawomir.ptak@intel.com> Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25198 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Mellanox Build Bot Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com> Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
|
e1d1df9e | 03-Sep-2024 |
Konrad Sztyber <konrad.sztyber@intel.com> |
nvme: check process when completing error injected requests
Obviously, requests need to be completed in the process that submitted them. However, if one process injects an error to the admin queue while another process is polling it, a request with injected errors could be completed in the wrong process. To avoid that, we now check the pid of the submitting process before completing a request.
Fixes: #3495
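The fix can be modeled with a toy request that records its owner's pid (illustrative types only, not the SPDK internals):

```c
#include <assert.h>
#include <stdbool.h>
#include <sys/types.h>
#include <unistd.h>

/* Invented stand-in: each request remembers who submitted it. */
struct mock_req {
	pid_t pid;  /* process that submitted the request */
	bool completed;
};

static void submit(struct mock_req *req)
{
	req->pid = getpid();
	req->completed = false;
}

/* Only complete requests owned by the polling process; an error-injected
 * request submitted by another process is left for its owner to reap. */
static bool try_complete(struct mock_req *req)
{
	if (req->pid != getpid()) {
		return false;
	}
	req->completed = true;
	return true;
}
```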
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com> Change-Id: Ifce5b0cb5dce8adadbae7f3eeb10f019cfd73f0b Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24790 Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Mellanox Build Bot Reviewed-by: Jim Harris <jim.harris@samsung.com>
|
4ddd77b2 | 01-Jul-2024 |
Konrad Sztyber <konrad.sztyber@intel.com> |
nvme: add methods for forcing (re)authentication
These two functions can be used to force IO and admin qpairs to authenticate. They can be called at any time while a qpair is active. If it has already been connected and authenticated, it'll be forced to reauthenticate. If the connection is still being established, these functions will only register the callback and let the regular authentication flow proceed.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com> Change-Id: I830fbe7d775c0997f24689fab2b2fc3cd875b15a Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24233 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Ben Walker <ben@nvidia.com> Reviewed-by: Jim Harris <jim.harris@samsung.com> Community-CI: Mellanox Build Bot
|
d341bee7 | 05-Sep-2024 |
Konrad Sztyber <konrad.sztyber@intel.com> |
nvme: require TLS PSKs to be specified via keyring
It's no longer possible to pass the key directly in spdk_nvme_ctrl_opts. This method was deprecated and was supposed to be removed in the upcoming release.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com> Change-Id: If06e087abb83da6b2f22c4a9f7129720f26e6f0d Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24810 Reviewed-by: Jim Harris <jim.harris@samsung.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Mellanox Build Bot Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
|
9ce869c2 | 09-Aug-2024 |
Jinlong Chen <chenjinlong.cjl@alibaba-inc.com> |
nvme: add spdk_nvme_probe*_ext variants that report attach failures
Currently, users cannot get any failure information if spdk_nvme_probe or spdk_nvme_probe_async fails to attach probed devices.
This patch adds two functions, spdk_nvme_probe_ext and spdk_nvme_probe_async_ext, which accept an extra attach_fail_cb parameter compared to their existing counterparts. For each device that is probed but unable to attach, attach_fail_cb is called with the trid of the device and an error code describing the failure reason, to inform the user about the failure.
Fixes #3106
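The callback contract can be sketched with stand-in types; mock_trid and probe_ext below are invented, the real functions being spdk_nvme_probe_ext()/spdk_nvme_probe_async_ext():

```c
#include <assert.h>

/* Invented stand-in for a transport ID. */
struct mock_trid {
	int id;
};

typedef void (*attach_fail_cb_t)(void *cb_ctx, const struct mock_trid *trid, int rc);

struct fail_ctx {
	int failures;
	int last_rc;
};

static void on_attach_fail(void *cb_ctx, const struct mock_trid *trid, int rc)
{
	struct fail_ctx *ctx = cb_ctx;

	(void)trid;
	ctx->failures++;  /* the user now learns which device failed and why */
	ctx->last_rc = rc;
}

/* Probe loop: for each probed device, try to attach; on failure, report
 * through the callback instead of failing silently. */
static int probe_ext(const int *attach_rcs, int ndev,
		     attach_fail_cb_t fail_cb, void *cb_ctx)
{
	int attached = 0;

	for (int i = 0; i < ndev; i++) {
		struct mock_trid trid = { .id = i };

		if (attach_rcs[i] != 0) {
			fail_cb(cb_ctx, &trid, attach_rcs[i]);
		} else {
			attached++;
		}
	}
	return attached;
}
```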
Change-Id: I55a69086b85987d9e0f5a31d1321fe3e006c2de7 Signed-off-by: Jinlong Chen <chenjinlong.cjl@alibaba-inc.com> Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/19872 Reviewed-by: GangCao <gang.cao@intel.com> Reviewed-by: Jim Harris <jim.harris@samsung.com> Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com> Community-CI: Mellanox Build Bot Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
|
fcc1652c | 11-Jul-2024 |
Jim Harris <jim.harris@samsung.com> |
nvme/pcie: allocate cq from device-local numa node's memory
Signed-off-by: Jim Harris <jim.harris@samsung.com> Change-Id: I5c2fd1baa8584d3332dbc2d073e3d3d7e99ed062 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24149 Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com> Community-CI: Mellanox Build Bot Reviewed-by: Ben Walker <ben@nvidia.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
|
dc2dd173 | 11-Jul-2024 |
Jim Harris <jim.harris@samsung.com> |
nvme: populate numa.id for rdma controllers
Signed-off-by: Jim Harris <jim.harris@samsung.com> Change-Id: I0a0ca0b0f00fc4e753a7a7603dfd71465809f0d7 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24146 Community-CI: Mellanox Build Bot Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Ben Walker <ben@nvidia.com> Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
|
70dccebe | 10-Jul-2024 |
Jim Harris <jim.harris@samsung.com> |
nvme: populate numa.id for tcp controllers
Signed-off-by: Jim Harris <jim.harris@samsung.com> Change-Id: I7813c7f548e7de90b88fbb574dff6be57bbffdf5 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24125 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Mellanox Build Bot Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com> Reviewed-by: Ben Walker <ben@nvidia.com> Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
|
85be1014 | 03-Jul-2024 |
Jim Harris <jim.harris@samsung.com> |
nvme: add spdk_nvme_ctrlr_get_numa_id()
By default, transports will just return SPDK_ENV_NUMA_ID_ANY. Future patches will add support to populate this on a transport-by-transport basis.
Note that transports define their own ctrlr object and embed spdk_nvme_ctrlr inside of it. They use calloc() to allocate the structure, which means that the new numa_id member will always be set to 0. So we add numa_id_valid (also 0 by default) so that we can detect whether that 0 is a real numa_id set by the transport, or just the default value implicitly set by transports that don't support numa_ids.
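The numa_id_valid reasoning can be shown with a toy calloc()-allocated controller (stand-in types; MOCK_NUMA_ID_ANY is a placeholder for SPDK_ENV_NUMA_ID_ANY):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>
#include <stdlib.h>

#define MOCK_NUMA_ID_ANY INT32_MAX  /* placeholder for SPDK_ENV_NUMA_ID_ANY */

/* Invented stand-in: the embedded ctrlr comes from calloc(), so numa_id is
 * 0 whether or not the transport ever set it; the flag disambiguates. */
struct mock_ctrlr {
	bool numa_id_valid;  /* 0 from calloc() means "not set by transport" */
	int32_t numa_id;     /* also 0 from calloc(), which is a real node id */
};

static struct mock_ctrlr *alloc_ctrlr(void)
{
	return calloc(1, sizeof(struct mock_ctrlr));
}

/* A transport that supports NUMA IDs sets both fields. */
static void transport_set_numa_id(struct mock_ctrlr *c, int32_t numa_id)
{
	c->numa_id = numa_id;
	c->numa_id_valid = true;
}

/* Without the valid flag, node 0 and "never set" would be indistinguishable. */
static int32_t get_numa_id(const struct mock_ctrlr *c)
{
	return c->numa_id_valid ? c->numa_id : MOCK_NUMA_ID_ANY;
}
```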
Signed-off-by: Jim Harris <jim.harris@samsung.com> Change-Id: I0e7aaec053ec14c4e683187efa735df9c155f46a Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24045 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Mellanox Build Bot Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com> Reviewed-by: Ben Walker <ben@nvidia.com> Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
|
186b109d | 20-Aug-2024 |
Jim Harris <jim.harris@samsung.com> |
env: add SPDK_ENV_NUMA_ID_ANY and replace socket_id with numa_id
We will try to avoid further proliferation of "SOCKET_ID" to refer to a NUMA socket ID moving forward, and just use "NUMA_ID" to avoid confusion with TCP sockets.
Change all of the existing in-tree SPDK_ENV_SOCKET_ID_ANY uses to SPDK_ENV_NUMA_ID_ANY, but keep the old #define around, at least for now. Also change all 'socket_id' parameters to 'numa_id'.
We still have spdk_env_get_socket_id(), we will need to keep this but next patch will add spdk_env_get_numa_id().
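The rename-with-compat-alias pattern looks roughly like this; the macro values below are placeholders, only the aliasing scheme matters:

```c
#include <assert.h>

/* New preferred name (placeholder value). */
#define SPDK_ENV_NUMA_ID_ANY   (-1)
/* Old spelling kept around, at least for now, so out-of-tree code that
 * still says SOCKET_ID continues to build unchanged. */
#define SPDK_ENV_SOCKET_ID_ANY SPDK_ENV_NUMA_ID_ANY

/* Parameters renamed socket_id -> numa_id; behavior is unchanged. */
static int pick_numa_id(int requested_numa_id)
{
	return requested_numa_id == SPDK_ENV_NUMA_ID_ANY ?
	       SPDK_ENV_NUMA_ID_ANY : requested_numa_id;
}
```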
Signed-off-by: Jim Harris <jim.harris@samsung.com> Change-Id: Idc31c29e32b708c24d88f9c6fecaf9a99e34ba1e Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24607 Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com> Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Reviewed-by: Ben Walker <ben@nvidia.com> Community-CI: Mellanox Build Bot
|
f490bcce | 26-Aug-2024 |
Jim Harris <jim.harris@samsung.com> |
test/unit/rdma: don't use NULL when constructing iov_base addresses
Newer ubsan doesn't like adding offsets to NULL pointers, so use a non-NULL address when constructing the test iov_base addresses.
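The workaround amounts to basing test addresses on an arbitrary non-NULL value instead of NULL; a minimal sketch with invented names:

```c
#include <assert.h>
#include <stdint.h>

/* Arbitrary non-NULL base; NULL + offset is UB that newer UBSan flags. */
#define TEST_IOV_BASE ((uintptr_t)0x1000)

/* Invented stand-in for struct iovec to keep the sketch self-contained. */
struct mock_iovec {
	void *iov_base;
	uint64_t iov_len;
};

static void fill_test_iov(struct mock_iovec *iov, uint64_t offset, uint64_t len)
{
	/* Offsets are added to a non-NULL base, never to a NULL pointer. */
	iov->iov_base = (void *)(TEST_IOV_BASE + offset);
	iov->iov_len = len;
}
```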
Signed-off-by: Jim Harris <jim.harris@samsung.com> Change-Id: I23860873e510f04fce7946c5a69ad0c8c1e6247b Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24675 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Mellanox Build Bot Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
|
17cb940f | 26-Jul-2024 |
Ben Walker <ben@nvidia.com> |
nvme: Fix bug in SGL splitting when elements do not align to blocks
The scattered memory elements do not have to end on block boundaries. We now split them into pieces when this occurs.
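The splitting rule can be sketched as a helper that computes how many bytes of an element complete whole blocks, with the remainder carried into the next element. This is an illustrative model, not the actual SPDK code:

```c
#include <assert.h>
#include <stdint.h>

/* Given partial-block bytes carried from previous elements, return how many
 * bytes of this element end on a block boundary; the leftover (the trailing
 * partial block) is what must be split off and carried forward. */
static uint32_t aligned_bytes(uint32_t carried, uint32_t elem_len,
			      uint32_t block_size)
{
	uint32_t total = carried + elem_len;
	uint32_t remainder = total % block_size;

	if (remainder > elem_len) {
		/* Element too short to even finish the carried partial block. */
		return 0;
	}
	return elem_len - remainder;
}
```

For example, with 512-byte blocks, a 1000-byte element contributes 512 aligned bytes and carries 488 into the next element, which is exactly the split the fix performs.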
Change-Id: I7d203c50dd6ded786abdd7072ad79b739828a1d5 Signed-off-by: Ben Walker <ben@nvidia.com> Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24654 Reviewed-by: Jim Harris <jim.harris@samsung.com> Community-CI: Mellanox Build Bot Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
|
c0b7e26c | 26-Jul-2024 |
Ben Walker <ben@nvidia.com> |
test/nvme: Improve sgl mocking capabilities in nvme_ns_cmd_ut.c
Let the tests pass in exactly the SGL they want to iterate through.
Change-Id: Ie8704c89b3bf7eb0196fba5682552c8136c48879 Signed-off-by: Ben Walker <ben@nvidia.com> Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24653 Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com> Community-CI: Mellanox Build Bot Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Jim Harris <jim.harris@samsung.com>
|
34edd9f1 | 10-Jul-2024 |
Kamil Godzwon <kamilx.godzwon@intel.com> |
general: fix misspells and typos
Signed-off-by: Kamil Godzwon <kamilx.godzwon@intel.com> Change-Id: Iab206ef526eb7032c6681a3145450010c91705a4 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24120 Community-CI: Mellanox Build Bot Reviewed-by: Karol Latecki <karol.latecki@intel.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com> Reviewed-by: Jim Harris <jim.harris@samsung.com>
|
673f3731 | 23-Jul-2024 |
Konrad Sztyber <konrad.sztyber@intel.com> |
ut/nvme_pcie: allocate nvme_pcie_qpair instead of spdk_nvme_qpair
This patch fixes stack-buffer-underflow errors, which started to appear after introducing e431ba2e4 ("nvme/pcie: add disable_pcie_sgl_merge option"). It happens because nvme_pcie_qpair_build_hw_sgl_request() now touches nvme_pcie_qpair to retrieve the disable_pcie_sgl_merge option, while the test only allocated spdk_nvme_qpair.
Fixes: #3451
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com> Change-Id: I4fd778b11ff8ef12568b8b71b5f01aea9d2a5d03 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24300 Community-CI: Mellanox Build Bot Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com> Bypass-Merge-Requirements: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
|