d58eef2a | 11-Dec-2024 | Alex Michon <amichon@kalrayinc.com>
nvme/rdma: Fix reinserting qpair in connecting list after stale state
When a qpair is first created, we add it to a list of connecting qpairs. If the connection fails, we move the qpair to a stale state and we retry later. At this point, we should not add the qpair again to the connecting qpairs list.
Change-Id: If38a8a51d3cb86f4d52d926d1acc349af21a6947 Signed-off-by: Alex Michon <amichon@kalrayinc.com> Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25526 Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Jim Harris <jim.harris@nvidia.com> Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com> Community-CI: Mellanox Build Bot
62638991 | 23-Aug-2024 | Alexey Marchuk <alexeymar@nvidia.com>
nvme/rdma: Don't limit max_sge if UMR is used
Since UMR creates a virtually contiguous memory buffer, we can always support up to 16 SGEs regardless of the MSDBD reported by the target
Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com> Change-Id: Ibd339f71ad35d355783993f777fcf8009ea68466 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24710 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Ben Walker <ben@nvidia.com> Reviewed-by: Jim Harris <jim.harris@nvidia.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Community-CI: Mellanox Build Bot
cec5ba28 | 23-Aug-2024 | Alexey Marchuk <alexeymar@nvidia.com>
nvme/rdma: Register UMR per IO request
If accel sequences are supported, append a copy task even when there is no accel sequence. The NVMe RDMA driver expects the accel framework to register a UMR for the data buffer. This UMR makes it possible to represent a fragmented payload as a virtually contiguous one.
Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com> Change-Id: I410f991959b08eab033105a7dbb4a9aaba491567 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24709 Reviewed-by: Ben Walker <ben@nvidia.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Reviewed-by: Jim Harris <jim.harris@nvidia.com> Community-CI: Mellanox Build Bot Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
7219bd1a | 14-Nov-2024 | Ankit Kumar <ankit.kumar@samsung.com>
thread: use extended version of fd group add
As the thread's message file descriptor is an eventfd, we can set the fd type in the event handler opts and use SPDK_FD_GROUP_ADD_EXT. This way we don't need to read the msg fd explicitly; that read is done during the fd group wait.
Set msg_fd to -1 if fd group creation fails, to prevent incorrect cleanup during the thread_interrupt_destroy call.
Change-Id: I2b64ad09bb368484a81781e4805ecbf8fc8cb02c Signed-off-by: Ankit Kumar <ankit.kumar@samsung.com> Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25432 Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com> Reviewed-by: Jim Harris <jim.harris@nvidia.com> Community-CI: Mellanox Build Bot Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
1a5bdab3 | 14-Nov-2024 | Ankit Kumar <ankit.kumar@samsung.com>
event: use extended version of fd group add
Since both of the reactor file descriptors are eventfds, we can set the fd type in the event handler opts and use SPDK_FD_GROUP_ADD_EXT. This way we don't need to explicitly read those fds; that read is done during the fd group wait.
Change-Id: Ic966919a9b2ce83395f98e7c6c6a883b1ffd9a0d Signed-off-by: Ankit Kumar <ankit.kumar@samsung.com> Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25431 Reviewed-by: Jim Harris <jim.harris@nvidia.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com> Community-CI: Mellanox Build Bot Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
52a41348 | 02-Dec-2024 | Jinlong Chen <chenjinlong.cjl@alibaba-inc.com>
bdev: do not retry nomem I/Os during aborting them
If a bdev module reports ENOMEM for the first I/O in an I/O channel, all subsequent I/Os are queued in the nomem list. In this case, io_outstanding and nomem_threshold remain 0, allowing nomem I/Os to be resubmitted unconditionally.
Now, an incoming reset could trigger nomem I/O resubmission while aborting nomem I/Os, via the following path:
```
bdev_reset_freeze_channel
  -> bdev_abort_all_queued_io
    -> spdk_bdev_io_complete
      -> _bdev_io_handle_no_mem
        -> bdev_ch_retry_io
```
Both bdev_abort_all_queued_io and bdev_ch_retry_io modify the nomem_io list in this path. Thus, there may be I/Os that are first submitted to the underlying device by bdev_ch_retry_io and then aborted by bdev_abort_all_queued_io, resulting in a double completion of these I/Os later.
To fix this, just do not resubmit nomem I/Os when aborting is in progress.
Change-Id: I1f66262216885779d1a883ec9250d58a13d8c228 Signed-off-by: Jinlong Chen <chenjinlong.cjl@alibaba-inc.com> Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25522 Community-CI: Mellanox Build Bot Reviewed-by: Jim Harris <jim.harris@nvidia.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com>
d1394291 | 02-Dec-2024 | Jinlong Chen <chenjinlong.cjl@alibaba-inc.com>
bdev: simplify bdev_reset_freeze_channel
Commit 055de83a "bdev: multiple QoS queues with atomic-based QoS quota" removed locking around swapping qos_queued_io, but forgot to remove the swapping. Let's remove it to simplify bdev_reset_freeze_channel.
Change-Id: I48c1b7e09e7d92450bc2d91a2adefa26d072af52 Signed-off-by: Jinlong Chen <chenjinlong.cjl@alibaba-inc.com> Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25521 Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com> Community-CI: Mellanox Build Bot Reviewed-by: Jim Harris <jim.harris@nvidia.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: GangCao <gang.cao@intel.com>
1ae735a5 | 05-Nov-2024 | Konrad Sztyber <konrad.sztyber@intel.com>
nvme: add poll_group interrupt callback
In interrupt mode, I/O completions are processed when waiting on the poll_group's fd_group. But some events (e.g. qpair disconnection) require extra handling. Normally, this happens in spdk_nvme_poll_group_wait(), but when manually doing a spdk_fd_group_wait() on the poll_group's fd_group, we need a notification to get this done.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com> Change-Id: I08979e42ff57b53f0c97670e9996b0ce6dad713e Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25468 Reviewed-by: Ankit Kumar <ankit.kumar@samsung.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Community-CI: Mellanox Build Bot Reviewed-by: Ben Walker <ben@nvidia.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Jim Harris <jim.harris@nvidia.com>
f8047163 | 01-Nov-2024 | Konrad Sztyber <konrad.sztyber@intel.com>
nvme: add spdk_nvme_poll_group_get_fd_group()
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com> Change-Id: I0eb30622baf8d1d0ba0af632482570aaaeef52af Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25467 Reviewed-by: Ankit Kumar <ankit.kumar@samsung.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Community-CI: Mellanox Build Bot Reviewed-by: Jim Harris <jim.harris@nvidia.com> Reviewed-by: Ben Walker <ben@nvidia.com>
969b360d | 01-Nov-2024 | Konrad Sztyber <konrad.sztyber@intel.com>
thread: fd_group-based interrupts
It's now possible to register an interrupt for a whole fd_group. The advantage of doing this over registering an interrupt using fd_group's fd is that the fd_group is nested in thread's fd_group, so spdk_fd_group_wait() on thread's fd_group will trigger events of the registered fd_group.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com> Change-Id: I1b2e4e9ea0b5dc2a8ba5e7ab7366fe1c412167f5 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25466 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Jim Harris <jim.harris@nvidia.com> Reviewed-by: Ben Walker <ben@nvidia.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Community-CI: Mellanox Build Bot Reviewed-by: Ankit Kumar <ankit.kumar@samsung.com>
851f166e | 01-Nov-2024 | Konrad Sztyber <konrad.sztyber@intel.com>
thread: move interrupt allocation to a function
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com> Change-Id: I71cd8fbfb0e7dfb6836d716ab420b3af45b3d2d3 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25465 Community-CI: Mellanox Build Bot Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Reviewed-by: Jim Harris <jim.harris@nvidia.com> Reviewed-by: Ankit Kumar <ankit.kumar@samsung.com> Reviewed-by: Ben Walker <ben@nvidia.com>
c12cb8fe | 01-Nov-2024 | Konrad Sztyber <konrad.sztyber@intel.com>
util: add method for setting fd_group's wrapper
A wrapper is a function that is executed when an event is triggered prior to executing the callback associated with that event. It can be used to perform tasks common to all fds in an fd_group, without having to control the code that adds the fds.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com> Change-Id: Ia6e29d430dad220497aa2858529662a3934c6c52 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25464 Reviewed-by: Jim Harris <jim.harris@nvidia.com> Reviewed-by: Ankit Kumar <ankit.kumar@samsung.com> Community-CI: Mellanox Build Bot Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Reviewed-by: Ben Walker <ben@nvidia.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
43c35d80 | 01-Nov-2024 | Konrad Sztyber <konrad.sztyber@intel.com>
util: multi-level fd_group nesting
This patch adds the ability to nest multiple fd_groups into one another. This builds a tree with fds from all fd_groups being registered at root fd_group's epfd. For instance, in the following configuration:
```
      fgrp0
        |
fgrp1---+---fgrp2
  |
fgrp3
```
fds from all fd_groups will be registered to the epfd of fgrp0. After unnesting fgrp1, the fds of fgrp1 and fgrp3 will be removed from fgrp0's epfd and added to fgrp1's epfd.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com> Change-Id: I4f586c21fe3db1739bf2010578b20606c53e5e84 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25463 Reviewed-by: Ankit Kumar <ankit.kumar@samsung.com> Reviewed-by: Ben Walker <ben@nvidia.com> Community-CI: Mellanox Build Bot Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Jim Harris <jim.harris@nvidia.com>
6336b7c5 | 31-Oct-2024 | Konrad Sztyber <konrad.sztyber@intel.com>
util: keep track of nested child fd_groups
We'll need this information in the next patch, which will allow for multi-level nesting.
Signed-off-by: Konrad Sztyber <konrad.sztyber@intel.com> Change-Id: I1c11b35d96d7926ff176ffd577db6b08aec2323a Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25462 Reviewed-by: Changpeng Liu <changpeliu@tencent.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Reviewed-by: Ankit Kumar <ankit.kumar@samsung.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Mellanox Build Bot Reviewed-by: Ben Walker <ben@nvidia.com> Reviewed-by: Jim Harris <jim.harris@nvidia.com>
2e1d23f4 | 06-Dec-2024 | Jim Harris <jim.harris@nvidia.com>
fuse_dispatcher: make header internal
The fuse_dispatcher is not intended to be a public API, it's for internal use only.
Signed-off-by: Jim Harris <jim.harris@nvidia.com> Change-Id: I23e839a8f71557960fe27f83c2eb9e51c57c8ea8 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25516 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Mellanox Build Bot Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com> Reviewed-by: Ben Walker <ben@nvidia.com>
3318278a | 07-Dec-2024 | Haoqian He <haoqian.he@smartx.com>
vhost: check if vsession exists before remove scsi vdev
Before removing the vhost scsi target when deleting the controller, return EBUSY if a vsession still exists, which means inflight I/O has not finished yet. Otherwise the guest may get an I/O error.
Change-Id: Ib76b276efe1e9c6fa324bedc3cb5d9d4622c753f Signed-off-by: Haoqian He <haoqian.he@smartx.com> Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25517 Community-CI: Mellanox Build Bot Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Reviewed-by: Changpeng Liu <changpeliu@tencent.com> Reviewed-by: Jim Harris <jim.harris@nvidia.com>
a2f5e1c2 | 08-Nov-2024 | Jinlong Chen <chenjinlong.cjl@alibaba-inc.com>
blob: don't free bs when spdk_bs_destroy/spdk_bs_unload fails
The error handling of spdk_bs_destroy and spdk_bs_unload is confusing. They may or may not free the spdk_blob_store structure on error, depending on when the error happens. Users cannot know whether the structure has been freed after the process finishes, and are thus unable to handle it correctly.
To fix this problem, only free the structure when no error has happened. This way, users can be sure that the structure pointer is still valid after a failed operation. They can then retry the operation or debug the failure.
Fixes #3560.
Change-Id: I4f7194ab8fce4f1a408ce3e6500514fd214427d4 Signed-off-by: Jinlong Chen <chenjinlong.cjl@alibaba-inc.com> Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25472 Reviewed-by: Jim Harris <jim.harris@nvidia.com> Reviewed-by: GangCao <gang.cao@intel.com> Reviewed-by: Yankun Li <845245370@qq.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Ben Walker <ben@nvidia.com> Community-CI: Mellanox Build Bot
0f59982b | 08-Nov-2024 | Jinlong Chen <chenjinlong.cjl@alibaba-inc.com>
blob: don't use bs_load_ctx_fail in bs_write_used_* functions
The bs_write_used_* functions are used in both the blobstore loading and unloading processes. However, the two processes actually need different error handling logic.
Let the functions call their callbacks directly instead of calling bs_load_ctx_fail on error, so that loading and unloading can use different error handling logic.
Change-Id: I4865eb91f1d8aa36e3ca64779c08a252433a7b34 Signed-off-by: Jinlong Chen <chenjinlong.cjl@alibaba-inc.com> Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25471 Reviewed-by: Jim Harris <jim.harris@nvidia.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Community-CI: Mellanox Build Bot Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Ben Walker <ben@nvidia.com>
0354bb8e | 05-Dec-2024 | Alexey Marchuk <alexeymar@nvidia.com>
nvme/rdma: Force qp disconnect on pg remove
If a qpair is removed from a poll group while it still has a poller, we must force a qpair disconnect; otherwise the group reference is removed and we won't be able to release the poller
Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com> Change-Id: I42699e4a692e6b878a828812328737a729e0295e Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25513 Reviewed-by: Jim Harris <jim.harris@nvidia.com> Reviewed-by: Ben Walker <ben@nvidia.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Mellanox Build Bot
60adca7e | 22-Aug-2024 | Alexey Marchuk <alexeymar@nvidia.com>
lib/mlx5: API to configure UMR
Add an API to configure a regular UMR, without BSF or any offloads, providing only scatter-gather functionality
Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com> Change-Id: Ib4fb21c5d27c3a89aef649ca6fd0162ba9d10e8a Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24706 Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Community-CI: Mellanox Build Bot Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Reviewed-by: Ben Walker <ben@nvidia.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
c2471e45 | 24-Sep-2024 | Alexey Marchuk <alexeymar@nvidia.com>
nvmf: Clean unassociated_qpairs on connect error
When spdk_nvmf_poll_group_add returns an error, the qpair is in the uninitialized state and spdk_nvmf_qpair_disconnect handles this state in a special way, i.e. we don't decrement the `group->current_unassociated_qpairs` counter. That is not a critical error but may lead to an uneven qpair distribution.
Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com> Change-Id: If68c4c4c8f3a99a690ba15694b5568940a7e0c21 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25012 Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Community-CI: Mellanox Build Bot Reviewed-by: Jim Harris <jim.harris@nvidia.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
5469bd2d | 24-Sep-2024 | Alexey Marchuk <alexeymar@nvidia.com>
nvmf/rdma: Fix destroy of uninitialized qpair
When rdma_accept fails, we destroy the qpair immediately; however, if SRQ is disabled, the qpair has already posted RECV WRs to the shared CQ. Later, when we poll the CQ, we can reap these RECV WRs while the qpair is already destroyed, which may crash the app. To fix this issue, destroy a qpair in the uninitialized state in the regular way: wait for all resources to be released
Signed-off-by: Alexey Marchuk <alexeymar@nvidia.com> Change-Id: Iec2c0d712de981a531b3696cf43b49ef92ff6f6e Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25011 Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Reviewed-by: Alex Michon <amichon@kalrayinc.com> Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Jim Harris <jim.harris@nvidia.com> Community-CI: Mellanox Build Bot
a5e6ecf2 | 08-Nov-2024 | Yankun Li <yankun@staff.sina.com>
lib/reduce: Data copy logic in thin read operations
If the read data is not compressed, set req->copy_after_decompress = true; the function _read_decompress_done then copies the data to the host buffers.
Change-Id: I1385e380cea0e908cd0b69762674508105b9c09e Signed-off-by: Yankun Li <yankun@staff.sina.com> Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25416 Community-CI: Mellanox Build Bot Reviewed-by: GangCao <gang.cao@intel.com> Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Reviewed-by: Jim Harris <jim.harris@nvidia.com>
a333974e | 13-Nov-2024 | Alex Michon <amichon@kalrayinc.com>
nvme/rdma: Flush queued send WRs when disconnecting a qpair
This will prevent staying in lingering state until the disconnection timeout for no good reason.
Change-Id: Ife01eb2a7dd28e000fee15fba10dfd8aa7802725 Signed-off-by: Alex Michon <amichon@kalrayinc.com> Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25429 Community-CI: Mellanox Build Bot Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com> Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Ben Walker <ben@nvidia.com>
2b867217 | 07-Nov-2024 | Alex Michon <amichon@kalrayinc.com>
nvme/rdma: Prevent submitting new recv WR when disconnecting
If we are in a disconnection process, we may never get a WC for these recv WRs, and we would have to wait for the entire disconnection timeout before deciding to destroy the qpair.
Change-Id: Ifdd5ed7866dec4c3e8b37b45aea2c95293c0d994 Signed-off-by: Alex Michon <amichon@kalrayinc.com> Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/25415 Community-CI: Community CI Samsung <spdk.community.ci.samsung@gmail.com> Community-CI: Mellanox Build Bot Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com> Reviewed-by: Ben Walker <ben@nvidia.com>