| 0070858e | 17-Jul-2024 |
Michal Berger <michal.berger@intel.com> |
scripts/setup.sh: Use HUGE_EVEN_ALLOC logic by default
To that end, remove it altogether and allow setup.sh to always split the requested amount of hugepages across all available nodes. Custom setups per node are still available through the HUGENODE var.
Adjust some of the hugepages.sh tests to adhere to the new default. This change allows us to remove the per_node_1G_alloc() test since its flow is now covered by custom_alloc().
By default, autotest.sh sets HUGEMEM=4096, and spreading that across all the nodes leaves only 2GB on a single node in the minimal (two-node) scenario. For vhost tests, the default allocation per VM via vm_setup() is set to 1GB, so sharing that 2GB between qemu and SPDK spreads it a bit too thin.
Case in point: in the vhost.vhost_blk_packed_ring_integrity test, roughly >600 hugepages are used by vhost. Since it slurps everything from node0, fewer than 512 remain there. VMs are assigned per node, and the most basic setups keep VM_*_qemu_numa_node set to 0, which qemu then uses to bind to that specific node. With 1GB per VM, <512 hugepages is simply not enough.
With that in mind, for vhost tests, keep all allocations on a single node instead of trying to figure out the right amount of memory per node to preserve the old behavior (this may change in the future when SPDK/vhost becomes numa-aware).
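For reference, a minimal sketch of how this is typically driven from the environment; the exact per-node HUGENODE syntax shown below is an assumption and may differ between SPDK versions:
# split 4096MB of hugepages evenly across all online NUMA nodes (the new default)
HUGEMEM=4096 ./scripts/setup.sh
# keep the entire allocation on a single node, e.g. node0 for vhost-style setups
HUGENODE=0 HUGEMEM=4096 ./scripts/setup.sh
# assumed per-node form: explicit hugepage counts for node0 and node1
HUGENODE='nodes_hp[0]=1024,nodes_hp[1]=1024' ./scripts/setup.sh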
Change-Id: I83e18bfa4cc6de0a777804b354de083ae6ae9d8c Signed-off-by: Michal Berger <michal.berger@intel.com> Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/24176 Community-CI: Mellanox Build Bot Reviewed-by: Tomasz Zawadzki <tomasz@tzawadzki.com> Reviewed-by: Jim Harris <jim.harris@samsung.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
|
| 26c2f987 | 26-Jan-2023 |
Michal Berger <michal.berger@intel.com> |
test/setup: Use hw_sector_size to convert size to sectors
The {logical,physical}_block_size may actually differ (physical can be bigger than logical), so always use the smallest available unit - hw_sector_size is effectively an alias for logical_block_size, and it also clearly indicates what unit sgdisk is working with.
In case the physical_block_size differs, the resulting partitions may end up with a different size than expected. For instance, on an nvme with a 512/4096 layout, the partitions ended up 128MB in size instead of 1GB, causing dmsetup to fail (as it expects to join partitions that are 1GB in size each).
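A minimal sketch of the conversion (device name and size are placeholders; this only illustrates the idea, not the exact test code):
dev=nvme0n1                                        # hypothetical device
sector_size=$(< "/sys/block/$dev/queue/hw_sector_size")
part_size=$((1024 * 1024 * 1024))                  # 1GB, in bytes
sectors=$((part_size / sector_size))               # the unit sgdisk actually works with
sgdisk -n 1:0:+"$sectors" "/dev/$dev"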
Signed-off-by: Michal Berger <michal.berger@intel.com> Change-Id: Ib6d3afd3471af2c2e9a5ced17004dd9c565708c8 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/16551 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Konrad Sztyber <konrad.sztyber@intel.com> Reviewed-by: Jim Harris <james.r.harris@intel.com>
|
| ab356d40 | 02-Aug-2022 |
Michal Berger <michal.berger@intel.com> |
test/setup: Remove the hp_status() test
This test merely verifies a modest portion of setup.sh's output, which is of little relevance. On top of that, depending on the system's state, the test itself may be flaky, so get rid of it.
Signed-off-by: Michal Berger <michal.berger@intel.com> Change-Id: Icc918a0dbbb54067c281aa465a097c4e40a32e11 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/13827 Reviewed-by: Pawel Piatek <pawelx.piatek@intel.com> Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com> Reviewed-by: Jim Harris <james.r.harris@intel.com> Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
|
| 9af7c30e | 16-Jan-2022 |
Michal Berger <michallinuxstuff@gmail.com> |
scripts/setup: Skip devices which have any valid data present
This is done to make sure we don't miss more complex setups where target devices are not mounted but still hold some valid data that shouldn't be touched in any way.
Also, adjust function names so they clearly indicate what is being checked.
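As a rough illustration of the kind of check involved (not the actual setup.sh implementation), any recognizable signature on the device is enough to leave it alone:
dev=/dev/nvme0n1                                   # hypothetical device
# blkid exits 0 when it finds a filesystem/RAID/partition-table signature on the device
if blkid "$dev" >/dev/null 2>&1; then
        echo "skipping $dev: holds valid data"
fi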
Signed-off-by: Michal Berger <michallinuxstuff@gmail.com> Change-Id: Ibb0f1f21de68009a2f8f1faf4595a07ae527da35 Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/11111 Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com> Reviewed-by: Jim Harris <james.r.harris@intel.com> Reviewed-by: Ben Walker <benjamin.walker@intel.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Community-CI: Mellanox Build Bot Community-CI: Broadcom CI <spdk-ci.pdl@broadcom.com>
|
| 0231fdc7 | 13-Aug-2021 |
Michal Berger <michalx.berger@intel.com> |
autotest: Skip use of any zoned nvme devices
Our tests, especially those which use nvme block devices for various use cases, won't be able to perform successful IO on such devices. The idea is to skip them for now and introduce basic, dedicated tests for zoned nvmes in upcoming patches.
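A minimal sketch of how a zoned nvme can be recognized from sysfs (the actual autotest helper may differ):
dev=nvme0n1                                        # hypothetical device name
# /sys/block/*/queue/zoned reports none, host-aware or host-managed
if [[ $(< "/sys/block/$dev/queue/zoned") != none ]]; then
        echo "skipping zoned device $dev"
fi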
Signed-off-by: Michal Berger <michalx.berger@intel.com> Change-Id: I67baad5c85c662921e3327f2101180283c89e96c Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/9181 Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com> Reviewed-by: Karol Latecki <karol.latecki@intel.com> Reviewed-by: Jim Harris <james.r.harris@intel.com> Tested-by: SPDK CI Jenkins <sys_sgci@intel.com>
|
| ea71df4f | 17-Jun-2021 |
Michal Berger <michalx.berger@intel.com> |
scripts/vagrant: Drop OCSSD awareness from functional tests
This also translates into switching fully to upstream QEMU for the vagrant setup.
This is done in order to move away from OCSSD and SPDK's qemu fork and align with what upstream QEMU supports. The main changes touch the way nvme namespaces are configured. With QEMU >= 5.2.0 it's now possible to configure multiple namespaces under a single nvme device. Each namespace requires a separate disk image to work with. This:
-b foo.img,nvme,1...
-b foo.img
-b foo.img,,..
Will still configure an nvme controller with a single namespace attached to foo.img.
This:
-b foo.img,,foo-ns1.img:foo-ns2.img
Will configure an nvme controller with three namespaces.
Configuring an nvme controller with no namespaces is possible via:
-b none ...
Note that this still allows defining other options specific to the nvme controller, like CMB and PMR. E.g.:
-b none,nvme,,true
This will create an nvme controller with no namespaces but with CMB enabled.
It's now also possible to request that a given controller be zoned. Currently, if requested, all namespaces under the target controller will be zoned, with no limit set on max open|active zones.
All nvme devices have their block size fixed to 4KB to imitate the behavior of SPDK's qemu fork.
Compatibility with the spdk-5.0.0 fork is preserved in the context of setting up namespaces, so this:
-b foo.img,nvme,2
is still valid as long as the emulator is set to the spdk-5.0.0 fork.
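For context, a rough sketch of what such a setup maps to on the upstream QEMU side; the ids, CMB size and exact property set below are illustrative assumptions, not the command line the scripts generate:
qemu-system-x86_64 ... \
  -drive file=foo-ns1.img,if=none,id=ns1 \
  -drive file=foo-ns2.img,if=none,id=ns2 \
  -device nvme,id=nvme0,serial=deadbeef,cmb_size_mb=128 \
  -device nvme-ns,drive=ns1,bus=nvme0,logical_block_size=4096,physical_block_size=4096 \
  -device nvme-ns,drive=ns2,bus=nvme0,logical_block_size=4096,physical_block_size=4096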
Signed-off-by: Michal Berger <michalx.berger@intel.com> Change-Id: Ib5d53cb5c330c1f84b57e0bf877ea0e2d0312ddd Reviewed-on: https://review.spdk.io/gerrit/c/spdk/spdk/+/8421 Tested-by: SPDK CI Jenkins <sys_sgci@intel.com> Reviewed-by: Karol Latecki <karol.latecki@intel.com> Reviewed-by: Tomasz Zawadzki <tomasz.zawadzki@intel.com> Reviewed-by: Jim Harris <james.r.harris@intel.com>
|