Revision tags: v22.01-rc1, v21.10, v21.10-rc1

# 1f0b8df7 | 05-Oct-2021 | yupeng <yupeng0921@gmail.com>
blobstore: implement spdk_bs_grow and bdev_lvol_grow_lvstore RPC
The bdev_lvol_grow_lvstore RPC grows the lvstore size if the underlying bdev size has increased. It invokes spdk_bs_grow internally, which extends the used_clusters bitmap. If not enough space was reserved for the used_clusters bitmap, the API fails. The reserved space is calculated from num_md_pages at blobstore creation time.
Signed-off-by: Peng Yu <yupeng0921@gmail.com> Change-Id: If6e8c0794dbe4eaa7042acf5031de58138ce7bca Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9730 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Community-CI: Mellanox Build Bot Reviewed-by: Jim Harris <james.r.harris@intel.com> Reviewed-by: Ben Walker <benjamin.walker@intel.com>
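
As a rough illustration of the flow described above, a minimal caller might look like the sketch below. It assumes spdk_bs_grow() mirrors spdk_bs_load()'s signature (bs_dev, optional opts, completion callback with handle); error handling and the surrounding lvol plumbing are omitted.

```c
#include "spdk/stdinc.h"
#include "spdk/blob.h"
#include "spdk/log.h"

/* Hedged sketch: grow a blobstore after the underlying bs_dev was resized.
 * Assumes spdk_bs_grow() takes (bs_dev, opts, cb_fn, cb_arg) like spdk_bs_load(). */
static void
grow_done(void *cb_arg, struct spdk_blob_store *bs, int bserrno)
{
    if (bserrno != 0) {
        /* Typically means not enough metadata space was reserved at creation time. */
        SPDK_ERRLOG("blobstore grow failed: %d\n", bserrno);
        return;
    }
    SPDK_NOTICELOG("grown blobstore has %" PRIu64 " free clusters\n",
                   spdk_bs_free_cluster_count(bs));
}

static void
grow_blobstore(struct spdk_bs_dev *bs_dev)
{
    spdk_bs_grow(bs_dev, NULL, grow_done, NULL);
}
```

The bdev_lvol_grow_lvstore RPC wraps this call for lvolstores, presumably identified by name or UUID like the other lvstore RPCs.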

# 88833020 | 10-Apr-2022 | yupeng <yupeng0921@gmail.com>
blobstore: reserve space for growing blobstore
Reserve space for the used_cluster bitmap. The reserved space is calculated from num_md_pages and will be used when the blobstore is extended in the future. Add a num_md_pages_per_cluster_ratio parameter to the bdev_lvol_create_lvstore API; num_md_pages is then calculated from num_md_pages_per_cluster_ratio and the total bdev size, and passed to the blobstore.
Signed-off-by: Peng Yu <yupeng0921@gmail.com> Change-Id: I61a28a3c931227e0fd3e1ef6b145fc18a3657751 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9517 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Community-CI: Mellanox Build Bot Reviewed-by: Jim Harris <james.r.harris@intel.com> Reviewed-by: Ben Walker <benjamin.walker@intel.com>
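
A back-of-the-envelope sketch of that sizing rule follows; the names and the lack of rounding are assumptions for illustration, not the exact lvol code.

```c
#include <stdint.h>

/* Hypothetical helper: reserve metadata pages proportional to the number of
 * clusters the bdev could hold, so the blobstore can be grown later. */
static uint64_t
calc_num_md_pages(uint64_t num_md_pages_per_cluster_ratio,
                  uint64_t bdev_size_bytes, uint64_t cluster_size_bytes)
{
    uint64_t total_clusters = bdev_size_bytes / cluster_size_bytes;

    return num_md_pages_per_cluster_ratio * total_clusters;
}
```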

# 8dd1cd21 | 22-Jun-2022 | Ben Walker <benjamin.walker@intel.com>
check_format: For C files only, fix return type breaks
In SPDK, declarations have the return type on the same line. Definitions have the return type on a separate line. Astyle has an option for enforcing this. Unfortunately, it seems to have two bugs:
1) It doesn't work correctly at all on C++ files.
2) It often fails on functions that return enums or long type names.
Deal with 1) by adjusting the check_format.sh script to only tell astyle to fix return type line breaks for C files and not C++. Deal with 2) by adding a few typedefs to work around the problem.
Change-Id: Idf28281466cab8411ce252d5f02ab384166790c6 Signed-off-by: Ben Walker <benjamin.walker@intel.com> Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13437 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Jim Harris <james.r.harris@intel.com> Reviewed-by: Dong Yi <dongx.yi@intel.com> Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com> Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
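
For reference, the convention the check enforces looks like this (spdk_example_sum is a made-up name):

```c
/* Declaration: return type stays on the same line. */
int spdk_example_sum(int a, int b);

/* Definition: return type goes on its own line. */
int
spdk_example_sum(int a, int b)
{
    return a + b;
}
```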

# 488570eb | 03-Jun-2022 | Jim Harris <james.r.harris@intel.com>
Replace most BSD 3-clause license text with SPDX identifier.
Many open source projects have moved to using SPDX identifiers to specify license information, reducing the amount of boilerplate code in every source file. This patch replaces the bulk of SPDK .c, .cpp and Makefiles with the BSD-3-Clause identifier.
Almost all of these files share the exact same license text, and this patch only modifies the files that contain the most common license text. There can be slight variations because the third clause contains company names - most say "Intel Corporation", but there are instances for Nvidia, Samsung, Eideticom and even "the copyright holder".
A bash script, checked in as scripts/spdx.sh, was used to automate replacement of the license text with the SPDX identifier.
Signed-off-by: Jim Harris <james.r.harris@intel.com> Change-Id: Iaa88ab5e92ea471691dc298cfe41ebfb5d169780 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12904 Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Community-CI: Mellanox Build Bot Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Aleksey Marchuk <alexeymar@nvidia.com> Reviewed-by: Changpeng Liu <changpeng.liu@intel.com> Reviewed-by: Dong Yi <dongx.yi@intel.com> Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com> Reviewed-by: Paul Luse <paul.e.luse@intel.com> Reviewed-by: <qun.wan@intel.com>
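
After the conversion, a typical header is reduced to roughly the following (the copyright line varies per file):

```c
/*   SPDX-License-Identifier: BSD-3-Clause
 *   Copyright (C) 2022 Intel Corporation.
 *   All rights reserved.
 */
```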

# 1eca87c3 | 01-Apr-2022 | Alexey Marchuk <alexeymar@mellanox.com>
blobstore: Preallocate md_page for new cluster
When a new cluster is added to a thin-provisioned blob, an md_page is allocated to update extents in the base dev. This memory allocation reduces performance; it can take 250 usec - 1 msec on an ARM platform.
Since we may have only one outstanding cluster allocation per io_channel, we can preallocate the md_page on each channel and remove the dynamic memory allocation.
With this change, blob_write_extent_page() expects the md_page to be supplied by the caller. Since this function is also used during snapshot deletion, this patch updates that process as well: we now allocate a single page and reuse it for each extent in the snapshot.
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com> Change-Id: I815a4c8c69bd38d8eff4f45c088e5d05215b9e57 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/12129 Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Ben Walker <benjamin.walker@intel.com> Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com>
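
A minimal sketch of the per-channel preallocation idea, with illustrative struct and function names rather than the real blobstore internals:

```c
#include "spdk/env.h"

struct example_blob_channel {
    /* One DMA-able metadata page, reused for every cluster allocation on
     * this channel (at most one is outstanding at a time). */
    void *new_cluster_page;
};

static int
example_channel_create(struct example_blob_channel *ch)
{
    ch->new_cluster_page = spdk_zmalloc(4096, 4096, NULL,
                                        SPDK_ENV_SOCKET_ID_ANY, SPDK_MALLOC_DMA);
    return ch->new_cluster_page != NULL ? 0 : -1;
}

static void
example_channel_destroy(struct example_blob_channel *ch)
{
    spdk_free(ch->new_cluster_page);
}
```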

# a2360845 | 31-Jan-2022 | Alexey Marchuk <alexeymar@mellanox.com>
blob: Add readv/writev_ext functions
These functions accept an optional spdk_blob_ext_io_opts structure. If this structure is provided by the user, the readv/writev_ext ops of the base dev are used in the data path.
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com> Change-Id: I370dd43f8c56f5752f7a52d0780bcfe3e3ae2d9e Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11371 Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Community-CI: Mellanox Build Bot Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Jim Harris <james.r.harris@intel.com> Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com>
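
A hedged usage sketch: it assumes spdk_blob_io_readv_ext() takes the same arguments as spdk_blob_io_readv() plus a trailing opts pointer, and that spdk_blob_ext_io_opts carries a size field for versioning.

```c
#include "spdk/stdinc.h"
#include "spdk/blob.h"

static void
read_done(void *cb_arg, int bserrno)
{
    /* completion handling elided */
}

static void
read_with_ext_opts(struct spdk_blob *blob, struct spdk_io_channel *ch,
                   struct iovec *iov, int iovcnt,
                   uint64_t offset, uint64_t length)
{
    struct spdk_blob_ext_io_opts ext_opts = {
        .size = sizeof(ext_opts),   /* assumed versioning field */
        /* memory-domain related fields, if any, would go here */
    };

    spdk_blob_io_readv_ext(blob, ch, iov, iovcnt, offset, length,
                           read_done, NULL, &ext_opts);
}
```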

# 8b25bfce | 02-Feb-2022 | Alexey Marchuk <alexeymar@mellanox.com>
blob: Destroy snapshot's back_bs_dev during initialization
When a snapshot is created, the new blob is loaded and examined for the BLOB_SNAPSHOT xattr in the blob_load_backing_dev function. At this step there is no such xattr, so a zeroes back_bs_dev is created. Later, the snapshot inherits back_bs_dev from the original blob, so the previously created back_bs_dev can be lost.
Signed-off-by: Alexey Marchuk <alexeymar@mellanox.com> Change-Id: I90cc9b02f56598d8c5c7fe00409f571fba0aa91a Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11384 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Jim Harris <james.r.harris@intel.com> Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Community-CI: Mellanox Build Bot Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>

# 0b034da1 | 28-Feb-2022 | Tomasz Zawadzki <tomasz.zawadzki@intel.com>
blob: add return codes to bs_user_op_abort
Prior to this patch bs_user_op_abort() always returned EIO back to the bdev layer.
This is not sufficient for ENOMEM cases where the I/O should be resubmitted by the bdev layer.
Returning ENOMEM when bs_sequence_start() fails in bs_allocate_and_copy_cluster() specifically addresses issue #2306.
Signed-off-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com> Change-Id: Icfb0ce9ca20e1c4dd1668ba77d121f7091acb044 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11764 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Community-CI: Mellanox Build Bot Reviewed-by: Jim Harris <james.r.harris@intel.com> Reviewed-by: Ben Walker <benjamin.walker@intel.com>

# 742d818e | 17-Feb-2022 | Dong Yi <dongx.yi@intel.com>
blobstore: Defer memcpy until all xattr mallocs have finished.
This ensures the error path can return more efficiently, without performing memcpy operations such as the one for xattr->name.
Signed-off-by: Dong Yi <dongx.yi@intel.com> Change-Id: Ic2ed28121ed76eda9d7b24ed6c4c95b0588817de Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11654 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Community-CI: Mellanox Build Bot Reviewed-by: Changpeng Liu <changpeng.liu@intel.com> Reviewed-by: Jim Harris <james.r.harris@intel.com> Reviewed-by: Paul Luse <paul.e.luse@intel.com>
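
The pattern reads roughly like the generic sketch below (hypothetical xattr struct, not the blobstore one): allocate everything first, so the error path frees and returns before any memcpy happens.

```c
#include <stdlib.h>
#include <string.h>

struct example_xattr {
    char *name;
    void *value;
    size_t value_len;
};

static int
example_xattr_set(struct example_xattr *x, const char *name,
                  const void *value, size_t value_len)
{
    char *new_name = malloc(strlen(name) + 1);
    void *new_value = malloc(value_len);

    if (new_name == NULL || new_value == NULL) {
        /* Error path exits before any data was copied. */
        free(new_name);
        free(new_value);
        return -1;
    }

    /* All allocations succeeded; only now perform the copies. */
    memcpy(new_name, name, strlen(name) + 1);
    memcpy(new_value, value, value_len);
    x->name = new_name;
    x->value = new_value;
    x->value_len = value_len;
    return 0;
}
```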

# 8ddb1790 | 20-Dec-2021 | Mike Gerdts <mgerdts@nvidia.com>
blob: print LBA when dumping a metadata page
When printing metadata pages, blobcli now prints the start LBA to aid anyone who needs to debug with dd and od.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com> Change-Id: I380bd923dfcd1149e3f705dd0ec0ab46b1000019 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11260 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Community-CI: Mellanox Build Bot Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com> Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Reviewed-by: Paul Luse <paul.e.luse@intel.com> Reviewed-by: Jim Harris <james.r.harris@intel.com>

# 5c29449f | 20-Dec-2021 | Mike Gerdts <mgerdts@nvidia.com>
blob: print extent tables
When blobcli is printing blob metadata, extent tables are now printed.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com> Change-Id: Ie748a2f2b3fbc3e6e5ee06a0f2eb9bd491bfed46 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11259 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Community-CI: Mellanox Build Bot Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Reviewed-by: Paul Luse <paul.e.luse@intel.com> Reviewed-by: Jim Harris <james.r.harris@intel.com>

# 8caf8f5e | 20-Dec-2021 | Mike Gerdts <mgerdts@nvidia.com>
blob: report unexpected descriptor types
When printing blob metadata via blobcli, descriptor types that do not have full dump support should not be silently ignored. This prints a message that indicates an unsupported descriptor type was encountered so that the person debugging with blobcli knows that there is more metadata present.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com> Change-Id: Id30b671fd9dee1ec12e10625eb2af4c1e43eda27 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11258 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Community-CI: Mellanox Build Bot Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com> Reviewed-by: Paul Luse <paul.e.luse@intel.com> Reviewed-by: Jim Harris <james.r.harris@intel.com>

# 6e440ff1 | 20-Dec-2021 | Mike Gerdts <mgerdts@nvidia.com>
blob: print invalid, data_ro, and md_ro flags
When blobcli prints blob metadata, it now prints invalid_flags, data_ro_flags, and md_ro_flags. The complete mask is printed, along with the meaning of each bit or set of bits. If unknown bits are set, that is indicated in the output as well.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com> Change-Id: I743a843a5d23b0e81c04482304515ab3c3b4c7bc Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11257 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Shuhei Matsumoto <smatsumoto@nvidia.com> Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com> Reviewed-by: Jim Harris <james.r.harris@intel.com> Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Community-CI: Mellanox Build Bot

# c5d80a8b | 02-Feb-2022 | Jim Harris <james.r.harris@intel.com>
blob: avoid recursion when split IOs immediately complete
In some scenarios, a split IO can immediately complete. For example, a very large unmap operation to a newly thin-provisioned blob has no operations to perform, so the batch for its operation immediately completes.
But if it immediately completes, we can't recursively submit the next split IO. So use variables in the context structure to detect when an operation immediately completes, to allow it to unwind and submit the next operation without recursing.
Fixes issue #2347.
Signed-off-by: Jim Harris <james.r.harris@intel.com> Change-Id: I8e4c121190c7d08152aa8de20cf6abc55b5edc46 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11388 Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Community-CI: Mellanox Build Bot Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Ben Walker <benjamin.walker@intel.com> Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
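
The unwind technique is roughly the following (a generic sketch with made-up names, not the blobstore code): the completion callback only sets a flag when it fires synchronously, and the submitter loops instead of recursing.

```c
#include <stdbool.h>

struct split_ctx {
    bool in_submit;    /* true while the submit loop is on the stack */
    bool completed;    /* set when an op completes during that submit */
};

static void
op_complete(struct split_ctx *ctx)
{
    if (ctx->in_submit) {
        /* Synchronous completion: let the submit loop continue
         * rather than recursing into the next submission. */
        ctx->completed = true;
        return;
    }
    /* Asynchronous completion: safe to submit the next child op here. */
}

static void
submit_next_op(struct split_ctx *ctx)
{
    do {
        ctx->in_submit = true;
        ctx->completed = false;
        /* start one child op here; it may call op_complete() synchronously */
        ctx->in_submit = false;
    } while (ctx->completed);
}
```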

# b6992a90 | 02-Feb-2022 | Jim Harris <james.r.harris@intel.com>
blob: add do/while (false) to blob_request_submit_op_split_next
No functional change here; this only prepares the function for functional changes in the next patch. Adding the do/while loop now reduces the amount of whitespace change in the next patch.
Signed-off-by: Jim Harris <james.r.harris@intel.com> Change-Id: I09d64fd1fb69ee232af1d298619c762e562fdc79 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11387 Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Community-CI: Mellanox Build Bot Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Ben Walker <benjamin.walker@intel.com> Reviewed-by: Changpeng Liu <changpeng.liu@intel.com> Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com>
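
For context, the wrapper is just the classic single-exit idiom, sketched here generically:

```c
#include <stdbool.h>

static void
example_submit_next(bool done)
{
    do {
        if (done) {
            break;   /* single early-exit point, no goto needed */
        }
        /* submission logic that the next patch will extend */
    } while (false);
    /* common completion / cleanup path */
}
```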

# 7caa514f | 20-Dec-2021 | Mike Gerdts <mgerdts@nvidia.com>
blob: blobcli should dump XATTR_INTERNAL
Refactor the code that dumps XATTR into a function. Call this function for XATTR and XATTR_INTERNAL.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com> Change-Id: Ic0cb32b14f7a34e030a48e1ea468ec63172e2bf1 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11256 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Community-CI: Mellanox Build Bot Reviewed-by: Ben Walker <benjamin.walker@intel.com> Reviewed-by: Jim Harris <james.r.harris@intel.com> Reviewed-by: Paul Luse <paul.e.luse@intel.com>

# a6c5feb0 | 18-Nov-2021 | Mike Gerdts <mgerdts@nvidia.com>
blob: add forced recovery
Add the ability to open a blobstore in such a way that recovery happens even if the superblock says it is clean.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com> Change-Id: I475e51beff24428d387446f7785e025294d2f014 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11253 Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Community-CI: Mellanox Build Bot Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Ben Walker <benjamin.walker@intel.com> Reviewed-by: Jim Harris <james.r.harris@intel.com>

# fae72b34 | 20-Dec-2021 | Mike Gerdts <mgerdts@nvidia.com>
blob: add logging for blobstore recovery
When a blobstore is not clean, a message is logged at the notice level. As other progress is made, messages are logged at the info level.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com> Change-Id: Icfbe375faaa95d5be53864f7eb8a73e1ae7c5d01 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11251 Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Community-CI: Mellanox Build Bot Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Ben Walker <benjamin.walker@intel.com> Reviewed-by: Jim Harris <james.r.harris@intel.com>

# d715c82c | 18-Nov-2021 | Mike Gerdts <mgerdts@nvidia.com>
blob: print sequence and next while dumping pages
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com> Change-Id: I2873633e435560ed1199b141851ba43fffcfe2c4 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11248 Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Community-CI: Mellanox Build Bot Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Ben Walker <benjamin.walker@intel.com> Reviewed-by: Jim Harris <james.r.harris@intel.com>

# 148bcefa | 18-Nov-2021 | Mike Gerdts <mgerdts@nvidia.com>
blob: report bit arrays that reference each page
While dumping the blobstore with blobcli, read the super block and bit arrays. As each metadata page is dumped, indicate which bit arrays reference the page.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com> Change-Id: Ie023594343861d0fbf065c270424649ec715d8b4 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11247 Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Community-CI: Mellanox Build Bot Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com> Reviewed-by: Ben Walker <benjamin.walker@intel.com> Reviewed-by: Jim Harris <james.r.harris@intel.com>

# 76a577b0 | 17-Nov-2021 | Mike Gerdts <mgerdts@nvidia.com>
blob: blobcli should use hex for blob IDs
Blob IDs are sequentially assigned starting at 0x100000000. When debugging with a small number of blob IDs, it is much more intuitive to see blob ID 0x100000000 rather than blob ID 4294967296. If blob IDs are displayed in hex, the things that parse commands should also accept hex to facilitate copy and paste.
Signed-off-by: Mike Gerdts <mgerdts@nvidia.com> Change-Id: Ic71eaaf1987609b4f705d372ced4240650b12684 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11245 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com> Reviewed-by: Ben Walker <benjamin.walker@intel.com> Reviewed-by: Paul Luse <paul.e.luse@intel.com> Reviewed-by: Krzysztof Karas <krzysztof.karas@intel.com> Reviewed-by: Jim Harris <james.r.harris@intel.com> Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Community-CI: Mellanox Build Bot
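
An illustrative parser for that behavior (not blobcli's actual code): strtoull() with base 0 accepts both decimal and 0x-prefixed input, and output is printed in hex so it can be pasted back into a command.

```c
#include <inttypes.h>
#include <stdio.h>
#include <stdlib.h>

static int
parse_blob_id(const char *arg, uint64_t *blob_id)
{
    char *end = NULL;

    *blob_id = strtoull(arg, &end, 0);   /* base 0: auto-detects the 0x prefix */
    if (end == arg || *end != '\0') {
        return -1;                       /* not a number */
    }
    printf("blob ID 0x%" PRIx64 "\n", *blob_id);
    return 0;
}
```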

# 7de351f1 | 29-Dec-2021 | Liu Xiaodong <xiaodong.liu@intel.com>
blobstore: Use RB_TREE to do blob lookup
When a blobstore holds many open blobs, looking them up via an RB_TREE is much more efficient.
Change-Id: I7075b95c597a958e7bb10890f803191309532021 Signed-off-by: Liu Xiaodong <xiaodong.liu@intel.com> Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10917 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Changpeng Liu <changpeng.liu@intel.com> Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com> Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Community-CI: Mellanox Build Bot
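
A generic sketch of that lookup structure, keyed by blob ID and assuming the BSD <sys/tree.h> red-black tree macros are available:

```c
#include <sys/tree.h>   /* BSD red-black tree macros (RB_*) */
#include <stdint.h>

struct blob_entry {
    uint64_t id;                    /* stand-in for the blob ID key */
    RB_ENTRY(blob_entry) link;      /* intrusive tree linkage */
};

static int
blob_entry_cmp(struct blob_entry *a, struct blob_entry *b)
{
    return (a->id < b->id) ? -1 : (a->id > b->id);
}

RB_HEAD(blob_tree, blob_entry);
RB_GENERATE_STATIC(blob_tree, blob_entry, link, blob_entry_cmp);

/* O(log n) lookup by blob ID, instead of walking a linked list. */
static struct blob_entry *
blob_lookup(struct blob_tree *tree, uint64_t id)
{
    struct blob_entry find = { .id = id };

    return RB_FIND(blob_tree, tree, &find);
}
```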

# 10f32b9f | 01-Dec-2021 | GangCao <gang.cao@intel.com>
lib/blob: do not assume realloc(NULL, 0) returns a not-NULL value
There is a situation where num_extent_pages is zero and the original pointer is also NULL; in that case realloc() is not guaranteed to return a non-NULL pointer.
Related unit tests have been added and updated:
1) In the default allocation (num_clusters == 0), the extent_pages are not allocated, as expected.
2) In the thin provisioning allocation (num_clusters != 0), the extent_pages are allocated if the extent_table is used.
More related information below:
The crux of the problem is that according to POSIX:
realloc: "If ptr is NULL, then the call is equivalent to malloc(size)."
malloc: "If size is 0, then malloc() returns either NULL or a unique pointer value that can later be successfully passed to free()."
blobstore was relying on realloc(NULL, 0) always returning a unique pointer value, and not NULL. This is not portable behavior.
Change-Id: Ibc28d9696f15a3c0e2aa6bb2371dc23576c28954 Signed-off-by: GangCao <gang.cao@intel.com> Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10470 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Changpeng Liu <changpeng.liu@intel.com> Reviewed-by: Jim Harris <james.r.harris@intel.com>
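
A defensive sketch of the resulting pattern (hypothetical helper, not the actual blobstore function): treat a NULL return as an error only when a non-zero size was requested.

```c
#include <errno.h>
#include <stdint.h>
#include <stdlib.h>

static int
resize_extent_pages(uint64_t **pages, size_t num_extent_pages)
{
    uint64_t *tmp;

    tmp = realloc(*pages, num_extent_pages * sizeof(**pages));
    if (tmp == NULL && num_extent_pages > 0) {
        return -ENOMEM;    /* genuine allocation failure */
    }
    /* NULL is acceptable here when zero pages were requested. */
    *pages = tmp;
    return 0;
}
```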

# cc6920a4 | 25-Nov-2021 | Josh Soref <jsoref@gmail.com>
spelling: lib
Part of #2256
* accessible * activation * additional * allocate * association * attempt * barrier * broadcast * buffer * calculate * cases * channel * children * command * completion * connect * copied * currently * descriptor * destroy * detachment * doesn't * enqueueing * exceeds * execution * extended * fallback * finalize * first * handling * hugepages * ignored * implementation * in_capsule * initialization * initialized * initializing * initiator * negotiated * notification * occurred * original * outstanding * partially * partition * processing * receive * received * receiving * redirected * regions * request * requested * response * retrieved * running * satisfied * should * snapshot * status * succeeds * successfully * supplied * those * transferred * translate * triggering * unregister * unsupported * urlsafe * virtqueue * volumes * workaround * zeroed
Change-Id: I569218754bd9d332ba517d4a61ad23d29eedfd0c Signed-off-by: Josh Soref <jsoref@gmail.com> Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/10405 Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com> Reviewed-by: Jim Harris <james.r.harris@intel.com> Reviewed-by: Shuhei Matsumoto <shuhei.matsumoto.xt@hitachi.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>

# f01146ae | 05-Oct-2021 | Jim Harris <james.r.harris@intel.com>
blob: use uint64_t for unmap and write_zeroes lba count
Previous patches (5363eb3c) tried to work around the 32-bit unmap and write_zeroes LBA counts by breaking larger operations into chunks of at most UINT32_MAX LBAs.
But some SSDs may simply ignore unmap operations that are not aligned to full physical block boundaries - and a UINT32_MAX-LBA unmap on a 512B-logical / 4KiB-physical SSD would not be aligned. If the SSD decided to ignore the unmap/deallocate (which it is allowed to do according to the NVMe spec), we could end up not unmapping *any* blocks. SSDs should probably always try hard to unmap as many blocks as possible, but let's not depend on that in blobstore.
So one option would be to break them into chunks close to UINT32_MAX which are still aligned to 4KiB boundaries. But the better fix is to just change the unmap and write_zeroes APIs to take 64-bit arguments, and then we can avoid the chunking altogether.
Fixes issue #2190.
Signed-off-by: Jim Harris <james.r.harris@intel.com> Change-Id: I23998e493a764d466927c3520c7a8c7f943000a6 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9737 Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Community-CI: Mellanox Build Bot Reviewed-by: Xiaodong Liu <xiaodong.liu@intel.com> Reviewed-by: Changpeng Liu <changpeng.liu@intel.com> Reviewed-by: Aleksey Marchuk <alexeymar@mellanox.com> Reviewed-by: Dong Yi <dongx.yi@intel.com> Reviewed-by: Ben Walker <benjamin.walker@intel.com> Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>