# Running NVMe-oF Performance Test Cases

Scripts contained in this directory are used to run TCP and RDMA benchmark tests,
which are later published at the [spdk.io performance reports section](https://spdk.io/doc/performance_reports.html).
To run the scripts in your environment please follow the steps below.

## Test Systems Requirements

- The OS installed on test systems must be a Linux OS.
  Scripts were primarily used on systems with Fedora and
  Ubuntu 18.04/20.04 distributions.
- Each test system must have at least one RDMA-capable NIC installed for RDMA tests.
  For TCP tests any TCP-capable NIC will do. However, high-bandwidth,
  high-performance NICs like Intel E810 CQDA2 or Mellanox ConnectX-5 are
  suggested because the NVMe-oF workload is network bound.
  So, if you use NICs capable of less than 100 Gbps on the NVMe-oF target
  system, you will quickly saturate them.
- Python3 interpreter must be available on all test systems.
  The Paramiko and Pandas modules must be installed.
- nvme-cli package must be installed on all test systems.
- fio must be downloaded from [GitHub](https://github.com/axboe/fio) and built.
  This must be done on Initiator test systems to later build SPDK with
  the "--with-fio" option.
- All test systems must have a user account with a common name,
  password and passwordless sudo enabled.
- [mlnx-tools](https://github.com/Mellanox/mlnx-tools) package must be downloaded
  to the /usr/src/local directory in order to configure NIC ports IRQ affinity.
  If a custom directory is to be used, it must be set using the irq_scripts_dir
  option in the Target and Initiator configuration sections.
- `sysstat` package must be installed for SAR CPU utilization measurements.
- `bwm-ng` package must be installed for NIC bandwidth utilization measurements.
- `pcm` package must be installed for pcm, pcm-power and pcm-memory measurements.

### Optional

- For tests using the Kernel Target, nvmetcli must be downloaded and built on the Target system.
  nvmetcli is available [here](http://git.infradead.org/users/hch/nvmetcli.git).

## Manual configuration

Before running the scripts some manual test system configuration is required
(an example command sequence follows this list):

- Configure IP address assignment on the NIC ports that will be used for test.
  Make sure to make these assignments persistent, as in some cases NIC drivers may be reloaded.
- Adjust the firewall service to allow traffic on the IP - port pairs used in test
  (or disable the firewall service completely if possible).
- Adjust or completely disable local security engines like AppArmor or SELinux.
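For example, on a system using firewalld and SELinux the last two steps might
look like the sketch below; the interface name and address are illustrative,
and IP persistence should be configured with your distribution's own mechanism:

``` ~sh
# Example only: assign a test IP to the NIC port used for NVMe-oF traffic
# (make the assignment persistent, since NIC drivers may be reloaded).
ip addr add 192.0.1.1/24 dev ens801f0

# Either allow the IP/port pairs used in the test, or disable the firewall.
systemctl stop firewalld
systemctl disable firewalld

# Put SELinux into permissive mode (or disable it in /etc/selinux/config).
setenforce 0
```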
## JSON configuration for test run automation

An example JSON configuration file with the minimum configuration required
to automate NVMe-oF testing is provided in this repository.
The following sub-chapters describe each configuration section in more detail.

### General settings section

``` ~sh
"general": {
    "username": "user",
    "password": "password",
    "transport": "transport_type",
    "skip_spdk_install": bool,
    "irdma_roce_enable": bool
}
```

Required:

- username - username for the SSH session
- password - password for the SSH session
- transport - transport layer to be used throughout the test ("tcp" or "rdma")

Optional:

- skip_spdk_install - by default SPDK sources will be copied from the Target
  to the Initiator systems each time the run_nvmf.py script is run. If SPDK
  is already in place on the Initiator systems and there is no need to re-build it,
  set this option to true.
  Default: false.
- irdma_roce_enable - loads the irdma driver with the RoCEv2 network protocol
  enabled on the Target and Initiator machines. This option applies only to
  systems with Intel E810 NICs.
  Default: false.
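For orientation, a minimal complete configuration file built only from the
required options of each section (all described in the following sub-chapters)
could look like this sketch; all values are illustrative:

``` ~sh
{
    "general": {
        "username": "user",
        "password": "password",
        "transport": "tcp"
    },
    "target": {
        "mode": "spdk",
        "nic_ips": ["192.0.1.1"],
        "core_mask": "[1-10]"
    },
    "initiator1": {
        "ip": "10.0.0.1",
        "nic_ips": ["192.0.1.2"],
        "target_nic_ips": ["192.0.1.1"],
        "mode": "spdk"
    },
    "fio": {
        "bs": ["4k"],
        "qd": [32],
        "rw": ["randread"],
        "rwmixread": 100,
        "num_jobs": 1,
        "run_time": 30,
        "ramp_time": 30,
        "run_num": 1
    }
}
```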
### Target System Configuration

``` ~sh
"target": {
    "mode": "spdk",
    "nic_ips": ["192.0.1.1", "192.0.2.1"],
    "core_mask": "[1-10]",
    "null_block_devices": 8,
    "nvmet_bin": "/path/to/nvmetcli",
    "sar_settings": true,
    "pcm_settings": false,
    "enable_bandwidth": [true, 60],
    "enable_dpdk_memory": true,
    "num_shared_buffers": 4096,
    "scheduler_settings": "static",
    "zcopy_settings": false,
    "dif_insert_strip": true,
    "null_block_dif_type": 3,
    "pm_settings": [true, 30, 1, 60],
    "irq_settings": {
        "mode": "cpulist",
        "cpulist": "[0-10]",
        "exclude_cpulist": false
    }
}
```

Required:

- mode - Target application mode, "spdk" or "kernel".
- nic_ips - IP addresses of NIC ports to be used by the target to export
  NVMe-oF subsystems.
- core_mask - Used by SPDK target only.
  CPU core mask either in form of an actual mask (e.g. 0xAAAA) or a core list
  (e.g. [0,1,2-5,6]).
  At this moment the scripts cannot restrict the Kernel target to only
  use certain CPU cores. Important: the upper bound of the range is inclusive!

Optional, common:

- null_block_devices - int, number of null block devices to create.
  Detected NVMe devices are not used if this option is present. Default: 0.
- sar_settings - bool.
  Enable SAR CPU utilization measurement on the Target side. The SAR thread
  will wait until fio finishes its "ramp_time" and then start measurement for
  the fio "run_time" duration. Default: enabled.
- pcm_settings - bool.
  Enable [PCM](https://github.com/opcm/pcm.git) measurements on the Target side.
  Measurements include CPU, memory and power consumption. Default: enabled.
- enable_bandwidth - bool. Measure bandwidth utilization on network
  interfaces. Default: enabled.
- tuned_profile - tuned-adm profile to apply on the system before starting
  the test.
- irq_scripts_dir - path to the scripts directory of the Mellanox mlnx-tools
  package; used to run the set_irq_affinity.sh script.
  Default: /usr/src/local/mlnx-tools/ofed_scripts
- enable_pm - bool;
  if set to true, power measurement is enabled via collect-bmc-pm on
  the target side. Default: true.
- irq_settings - dict;
  choose how to adjust network interface IRQ settings.
  - mode: default - run the IRQ alignment script with no additional options.
  - mode: bynode - align IRQs to be processed only on CPU cores matching the
    NIC NUMA node.
  - mode: cpulist - align IRQs to be processed only on CPU cores provided
    in the cpulist parameter.
  - cpulist: list of CPU cores to use for cpulist mode. Can be provided as
    a list of individual cores ("[0,1,10]"), core ranges ("[0-10]"), or a mix
    of both ("[0-1,10,20-22]").
  - exclude_cpulist: reverse the effect of cpulist mode. Allow IRQ processing
    only on CPU cores which are not provided in the cpulist parameter.

Optional, Kernel Target only:

- nvmet_bin - path to the nvmetcli binary, if not available in $PATH.
  Only for Kernel Target. Default: "nvmetcli".

Optional, SPDK Target only:

- zcopy_settings - bool. Disable or enable the target-side zero-copy option.
  Default: false.
- scheduler_settings - str.
  Select the SPDK Target thread scheduler (static/dynamic).
  Default: static.
- num_shared_buffers - int, number of shared buffers to allocate when
  creating the transport layer. Default: 4096.
- max_queue_depth - int, max number of outstanding I/O per queue. Default: 128.
- dif_insert_strip - bool. Only for TCP transport. Enable the DIF option when
  creating the transport layer. Default: false.
- num_cqe - int, number of completion queue entries. See the doc/json_rpc.md
  "nvmf_create_transport" section. Default: 4096.
- null_block_dif_type - int, 0-3. Level of DIF type to use when creating
  null block bdevs. Default: 0.
- enable_dpdk_memory - bool. Wait for the fio ramp_time to finish and
  call the env_dpdk_get_mem_stats RPC to dump DPDK memory stats.
  Default: enabled.
- adq_enable - bool; only for TCP transport.
  Configure system modules, NIC settings and create priority traffic classes
  for ADQ testing. You need an ADQ-capable NIC like the Intel E810.
- bpf_scripts - list of bpftrace scripts that will be attached during the
  test run. Available scripts can be found in the spdk/scripts/bpf directory.
- dsa_settings - bool. Only for TCP transport. Enable offloading of CRC32C
  calculation to DSA. You need a CPU with the Intel(R) Data Streaming
  Accelerator (DSA) engine.
- scheduler_core_limit - int, 0-100. Dynamic scheduler option to set the load
  limit at which a core is considered full.
- irq_settings - dict;
  choose how to adjust network interface IRQ settings.
  Same as in the common options section, but the SPDK Target allows more modes:
  - mode: shared - align IRQs to be processed only on the same CPU cores which
    are already used by the SPDK Target process.
  - mode: split - align IRQs to be processed only on CPU cores which are not
    used by the SPDK Target process.
  - mode: split-bynode - same as "split", but reduce the number of CPU cores
    used for IRQ processing to only those matching the NIC NUMA node.
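As an illustration, a Target section for a Kernel Target run can stay quite
small, since most of the options above are SPDK-only; the paths and addresses
below are examples:

``` ~sh
"target": {
    "mode": "kernel",
    "nic_ips": ["192.0.1.1", "192.0.2.1"],
    "nvmet_bin": "/usr/src/nvmetcli/nvmetcli",
    "null_block_devices": 8
}
```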
### Initiator system settings section

There can be one or more `initiatorX` setting sections, depending on the test setup.

``` ~sh
"initiator1": {
    "ip": "10.0.0.1",
    "nic_ips": ["192.0.1.2"],
    "target_nic_ips": ["192.0.1.1"],
    "mode": "spdk",
    "fio_bin": "/path/to/fio/bin",
    "nvmecli_bin": "/path/to/nvmecli/bin",
    "cpus_allowed": "0,1,10-15",
    "cpus_allowed_policy": "shared",
    "num_cores": 4,
    "cpu_frequency": 2100000,
    "adq_enable": false,
    "kernel_engine": "io_uring",
    "irq_settings": { "mode": "bynode" }
}
```

Required:

- ip - management IP address of the initiator system used to set up the SSH connection.
- nic_ips - list of IP addresses of NIC ports to be used in the test,
  local to the given initiator system.
- target_nic_ips - list of IP addresses of Target NIC ports to which the
  initiator will attempt to connect.
- mode - initiator mode, "spdk" or "kernel". For SPDK, the bdev fio plugin
  will be used to connect to NVMe-oF subsystems and submit I/O. For "kernel",
  nvmecli will be used to connect to NVMe-oF subsystems and fio will use the
  libaio ioengine to submit I/Os.

Optional, common:

- nvmecli_bin - path to the nvmecli binary; will be used for the "discovery"
  command (for both SPDK and Kernel modes) and for "connect" (in case of
  Kernel mode). Default: system-wide "nvme".
- fio_bin - path to a custom fio binary, which will be used to run IO.
  Additionally, the directory where the binary is located should also contain
  the fio sources needed to build the SPDK fio_plugin for spdk initiator mode.
  Default: /usr/src/fio/fio.
- cpus_allowed - str, list of CPU cores to run fio threads on. Takes precedence
  over the `num_cores` setting. Default: None (CPU cores randomly allocated).
  For more information see `man fio`.
- cpus_allowed_policy - str, "shared" or "split". CPU sharing policy for fio
  threads. Default: shared. For more information see `man fio`.
- num_cores - by default fio threads on the initiator side will use as many CPUs
  as there are connected subsystems. This option limits the number of CPU cores
  used for fio threads to this number; cores are allocated randomly and fio
  `filename` parameters are grouped if needed. The `cpus_allowed` option takes
  precedence and `num_cores` is ignored if both are present in the config.
- cpu_frequency - int, custom CPU frequency to set. By default test setups are
  configured to run in performance mode at max frequencies. This option allows
  the user to select a CPU frequency instead of running at max frequency. Before
  using this option `intel_pstate=disable` must be set in boot options and the
  cpupower governor must be set to `userspace`.
- tuned_profile - tuned-adm profile to apply on the system before starting
  the test.
- irq_scripts_dir - path to the scripts directory of the Mellanox mlnx-tools
  package; used to run the set_irq_affinity.sh script.
  Default: /usr/src/local/mlnx-tools/ofed_scripts
- kernel_engine - select the fio ioengine mode to run tests. io_uring libraries
  and an io_uring capable fio binary must be present on Initiator systems!
  Available options:
  - libaio (default)
  - io_uring
- irq_settings - dict;
  same as "irq_settings" in the Target common options section.

Optional, SPDK Initiator only:

- adq_enable - bool; only for TCP transport. Configure system modules, NIC
  settings and create priority traffic classes for ADQ testing.
  You need an ADQ-capable NIC like the Intel E810.
- enable_data_digest - bool; only for TCP transport. Enable the data
  digest for the bdev controller. The target can use IDXD to calculate the
  data digest or fall back to a software-optimized implementation on systems
  that don't have the Intel(R) Data Streaming Accelerator (DSA) engine.
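Because multiple `initiatorX` sections are allowed, a two-initiator setup using
only the required options might look like the sketch below (addresses are
illustrative); each initiator targets a different Target NIC port:

``` ~sh
"initiator1": {
    "ip": "10.0.0.1",
    "nic_ips": ["192.0.1.2"],
    "target_nic_ips": ["192.0.1.1"],
    "mode": "spdk"
},
"initiator2": {
    "ip": "10.0.0.2",
    "nic_ips": ["192.0.2.2"],
    "target_nic_ips": ["192.0.2.1"],
    "mode": "kernel"
}
```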
### Fio settings section

``` ~sh
"fio": {
    "bs": ["4k", "128k"],
    "qd": [32, 128],
    "rw": ["randwrite", "write"],
    "rwmixread": 100,
    "rate_iops": 10000,
    "num_jobs": 2,
    "offset": true,
    "offset_inc": 10,
    "run_time": 30,
    "ramp_time": 30,
    "run_num": 3
}
```

Required:

- bs - fio IO block size
- qd - fio iodepth
- rw - fio rw mode
- rwmixread - read operations percentage in case of mixed workloads
- num_jobs - fio numjobs parameter
  Note: may affect the total number of CPU cores used by initiator systems
- run_time - fio run time
- ramp_time - fio ramp time; no measurements are taken during this period
- run_num - number of times each workload combination is run.
  If more than 1, then the final result is the average of all runs.

Optional:

- rate_iops - limit IOPS to this number
- offset - bool; enable offsetting of the IO into the file. When this option is
  enabled the file is "split" into a number of chunks equal to the "num_jobs"
  parameter value, and each of the "num_jobs" fio threads gets its own chunk to
  work with.
  For more detail see "offset", "offset_increment" and "size" in the fio man
  pages. Default: false.
- offset_inc - int; percentage value determining the offset, size and
  offset_increment when the "offset" option is enabled. By default, if "offset"
  is enabled the fio file will get split evenly between the fio threads doing
  the IO. offset_inc can be used to specify a custom value.

#### Test Combinations

It is possible to specify more than one value for the bs, qd and rw parameters.
In that case the script creates a list of their combinations and runs IO tests
for all of these combinations. For example, the following configuration:

``` ~sh
"bs": ["4k"],
"qd": [32, 128],
"rw": ["write", "read"]
```

results in the following workloads being tested:

- 4k-write-32
- 4k-write-128
- 4k-read-32
- 4k-read-128

#### Important note about queue depth parameter

qd in the fio settings section refers to the iodepth generated per single fio
target device ("filename" in the resulting fio configuration file). It is
re-calculated while the script is running, so the generated fio configuration
file might contain a different value than the one specified at input, especially
when also using the "num_jobs" or initiator "num_cores" parameters. For example:

The Target system exposes 4 NVMe-oF subsystems. One initiator system connects
to all of these subsystems.

Initiator configuration (relevant settings only):

``` ~sh
"initiator1": {
    "num_cores": 1
}
```

Fio configuration:

``` ~sh
"fio": {
    "bs": ["4k"],
    "qd": [128],
    "rw": ["randread"],
    "rwmixread": 100,
    "num_jobs": 1,
    "run_time": 30,
    "ramp_time": 30,
    "run_num": 1
}
```

In this case the generated fio configuration will look like this
(relevant settings only):

``` ~sh
[global]
numjobs=1

[job_section0]
filename=Nvme0n1
filename=Nvme1n1
filename=Nvme2n1
filename=Nvme3n1
iodepth=512
```

The `num_cores` option results in the 4 connected subsystems being grouped under
a single fio thread (job_section0). Because `iodepth` is local to `job_section0`,
it is distributed between the `filename` entries local to the job section in a
round-robin (by default) fashion. In case of fio targets with the same
characteristics (IOPS & bandwidth capabilities) this means that the iodepth is
distributed **roughly** equally. Ultimately the above fio configuration results
in iodepth=128 per filename.

`numjobs` higher than 1 is also taken into account, so that the desired qd per
filename is retained:

``` ~sh
[global]
numjobs=2

[job_section0]
filename=Nvme0n1
filename=Nvme1n1
filename=Nvme2n1
filename=Nvme3n1
iodepth=256
```
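Both generated configurations are consistent with the following relation (our
reading of the examples above, not a formula quoted from the script):

``` ~sh
# iodepth written to a job section:
#   iodepth = qd * <number of filenames in the section> / numjobs
# First example:  128 * 4 / 1 = 512
# Second example: 128 * 4 / 2 = 256
```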
Apart from `run_num`, more information on these options can be found in `man fio`.

## Running the test

Before running the test script, run the spdk/scripts/setup.sh script on the
Target system. This binds the devices to the VFIO/UIO userspace driver and
allocates hugepages for the SPDK process.

Run the script on the NVMe-oF target system:

``` ~sh
cd spdk
sudo PYTHONPATH=$PYTHONPATH:$PWD/python scripts/perf/nvmf/run_nvmf.py
```

By default the script uses the config.json configuration file in the
scripts/perf/nvmf directory. You can specify a different configuration file at
runtime as shown below:

``` ~sh
sudo PYTHONPATH=$PYTHONPATH:$PWD/python scripts/perf/nvmf/run_nvmf.py -c /path/to/config.json
```

The PYTHONPATH environment variable is needed because the script uses SPDK-local
Python modules. If you'd like to get rid of `PYTHONPATH=$PYTHONPATH:$PWD/python`,
you need to modify your environment so that the Python interpreter is aware of
the `spdk/python` directory.

## Test Results

Test results for all workload combinations are printed to the screen once the
tests are finished. Additionally, all aggregate results are saved to
/tmp/results/nvmf_results.conf.
The results directory path can be changed with the -r script parameter.
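For instance, to combine a custom configuration file with a custom results
directory (both parameters are described above; the paths are examples):

``` ~sh
sudo PYTHONPATH=$PYTHONPATH:$PWD/python scripts/perf/nvmf/run_nvmf.py \
    -c /path/to/config.json -r /path/to/results/dir
```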