# Running NVMe-oF Performance Test Cases

Scripts contained in this directory are used to run TCP and RDMA benchmark tests,
whose results are later published in the [spdk.io performance reports section](https://spdk.io/doc/performance_reports.html).
To run the scripts in your environment, please follow the steps below.

## Test Systems Requirements

- The OS installed on the test systems must be a Linux OS.
  The scripts were primarily used on systems running Fedora and
  Ubuntu 18.04/20.04 distributions.
- Each test system must have at least one RDMA-capable NIC installed for RDMA tests.
  For TCP tests any TCP-capable NIC will do. However, high-bandwidth,
  high-performance NICs like Intel E810 CQDA2 or Mellanox ConnectX-5 are
  suggested because the NVMe-oF workload is network bound.
  If you use NICs capable of less than 100 Gbps on the NVMe-oF target
  system, you will quickly saturate them.
- A Python3 interpreter must be available on all test systems.
  The Paramiko and Pandas modules must be installed.
- The nvme-cli package must be installed on all test systems.
- fio must be downloaded from [GitHub](https://github.com/axboe/fio) and built.
  This must be done on the Initiator test systems so that SPDK can later be built
  with the "--with-fio" option.
- All test systems must have a user account with a common name,
  password and passwordless sudo enabled.
- The [mlnx-tools](https://github.com/Mellanox/mlnx-tools) package must be downloaded
  to the /usr/src/local directory in order to configure NIC port IRQ affinity.
  If a custom directory is to be used, it must be set using the irq_scripts_dir
  option in the Target and Initiator configuration sections.

### Optional

- For tests using the Kernel Target, nvmet-cli must be downloaded and built on the Target system.
  nvmet-cli is available [here](http://git.infradead.org/users/hch/nvmetcli.git).

## Manual configuration

Before running the scripts, some manual configuration of the test systems is required:

- Configure IP address assignment on the NIC ports that will be used for the test.
  Make sure these assignments are persistent, as in some cases NIC drivers may be reloaded.
- Adjust the firewall service to allow traffic on the IP-port pairs used in the test
  (or disable the firewall service completely if possible).
- Adjust or completely disable local security engines like AppArmor or SELinux.

## JSON configuration for test run automation

An example JSON configuration file with the minimum configuration required
to automate NVMe-oF testing is provided in this repository.
The following sub-chapters describe each configuration section in more detail.

### General settings section

``` ~sh
"general": {
    "username": "user",
    "password": "password",
    "transport": "transport_type",
    "skip_spdk_install": bool
}
```

Required:

- username - username for the SSH session
- password - password for the SSH session
- transport - transport layer to be used throughout the test ("tcp" or "rdma")

Optional:

- skip_spdk_install - by default SPDK sources are copied from the Target
  to the Initiator systems each time the run_nvmf.py script is run. If SPDK
  is already in place on the Initiator systems and there is no need to re-build it,
  set this option to true.
  Default: false.
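The snippets in this and the following sub-chapters suggest that each section is a
top-level key of a single JSON object (config.json by default). As a rough sketch only,
with every value a placeholder and the sub-chapters below listing the full set of options,
a minimal configuration file could be structured like this:

``` ~sh
{
    "general": {
        "username": "user",
        "password": "password",
        "transport": "tcp"
    },
    "target": {
        "mode": "spdk",
        "nic_ips": ["192.0.1.1"],
        "core_mask": "[1-10]"
    },
    "initiator1": {
        "ip": "10.0.0.1",
        "nic_ips": ["192.0.1.2"],
        "target_nic_ips": ["192.0.1.1"],
        "mode": "spdk"
    },
    "fio": {
        "bs": ["4k"],
        "qd": [32],
        "rw": ["randread"],
        "rwmixread": 100,
        "num_jobs": 1,
        "run_time": 30,
        "ramp_time": 30,
        "run_num": 1
    }
}
```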
### Target System Configuration

``` ~sh
"target": {
    "mode": "spdk",
    "nic_ips": ["192.0.1.1", "192.0.2.1"],
    "core_mask": "[1-10]",
    "null_block_devices": 8,
    "nvmet_bin": "/path/to/nvmetcli",
    "sar_settings": [true, 30, 1, 60],
    "pcm_settings": ["/tmp/pcm", 30, 1, 60],
    "enable_bandwidth": [true, 60],
    "enable_dpdk_memory": [true, 30],
    "num_shared_buffers": 4096,
    "scheduler_settings": "static",
    "zcopy_settings": false,
    "dif_insert_strip": true,
    "null_block_dif_type": 3
}
```

Required:

- mode - Target application mode, "spdk" or "kernel".
- nic_ips - IP addresses of NIC ports to be used by the target to export
  NVMe-oF subsystems.
- core_mask - Used by SPDK target only.
  CPU core mask either in the form of an actual mask (e.g. 0xAAAA) or a core list
  (e.g. [0,1,2-5,6]).
  At this moment the scripts cannot restrict the Kernel target to only
  use certain CPU cores. Important: the upper bound of the range is inclusive!

Optional, common:

- null_block_devices - int, number of null block devices to create.
  Detected NVMe devices are not used if this option is present. Default: 0.
- sar_settings - [bool, int(x), int(y), int(z)];
  Enable SAR CPU utilization measurement on the Target side.
  Wait for "x" seconds before starting measurements, then take "z" samples
  with "y" second intervals between them. Default: disabled.
- pcm_settings - [path, int(x), int(y), int(z)];
  Enable [PCM](https://github.com/opcm/pcm.git) measurements on the Target side.
  Measurements include CPU, memory and power consumption. "path" points to a
  directory where the pcm executables are present.
  "x" - time to wait before starting measurements (suggested to be equal to the fio
  ramp_time).
  "y" - time interval between measurements.
  "z" - number of measurement samples.
  Default: disabled.
- enable_bandwidth - [bool, int]. Wait a given number of seconds and run
  bwm-ng until the end of the test to measure bandwidth utilization on network
  interfaces. Default: disabled.
- tuned_profile - tuned-adm profile to apply on the system before starting
  the test.
- irq_scripts_dir - path to the scripts directory of the Mellanox mlnx-tools package;
  used to run the set_irq_affinity.sh script.
  Default: /usr/src/local/mlnx-tools/ofed_scripts

Optional, Kernel Target only:

- nvmet_bin - path to the nvmetcli binary, if not available in $PATH.
  Only for Kernel Target. Default: "nvmetcli".

Optional, SPDK Target only:

- zcopy_settings - bool. Disable or enable the target-side zero-copy option.
  Default: false.
- scheduler_settings - str. Select the SPDK Target thread scheduler (static/dynamic).
  Default: static.
- num_shared_buffers - int, number of shared buffers to allocate when
  creating the transport layer. Default: 4096.
- max_queue_depth - int, max number of outstanding I/Os per queue. Default: 128.
- dif_insert_strip - bool. Only for TCP transport. Enable the DIF option when
  creating the transport layer. Default: false.
- null_block_dif_type - int, 0-3. DIF type to use when creating
  null block bdevs. Default: 0.
- enable_dpdk_memory - [bool, int]. Wait for a given number of seconds and
  call the env_dpdk_get_mem_stats RPC to dump DPDK memory stats. Typically the
  wait time should be at least the fio ramp_time described in a later section.
- adq_enable - bool; only for TCP transport.
  Configure system modules, NIC settings and create priority traffic classes
  for ADQ testing. You need an ADQ-capable NIC like the Intel E810.
- bpf_scripts - list of bpftrace scripts that will be attached during the
  test run. Available scripts can be found in the spdk/scripts/bpf directory.
- dsa_settings - bool. Only for TCP transport. Enable offloading of CRC32C
  calculation to DSA. You need a CPU with the Intel(R) Data Streaming
  Accelerator (DSA) engine.
- scheduler_core_limit - int, 0-100. Dynamic scheduler option: load limit on
  a core for it to be considered full.
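Most of the options above are SPDK-specific, so a Kernel Target configuration is
typically much shorter. A minimal sketch (the path and addresses below are
placeholders, not defaults) might look like this:

``` ~sh
"target": {
    "mode": "kernel",
    "nic_ips": ["192.0.1.1", "192.0.2.1"],
    "nvmet_bin": "/path/to/nvmetcli",
    "null_block_devices": 8
}
```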
### Initiator system settings section

There can be one or more `initiatorX` setting sections, depending on the test setup.

``` ~sh
"initiator1": {
    "ip": "10.0.0.1",
    "nic_ips": ["192.0.1.2"],
    "target_nic_ips": ["192.0.1.1"],
    "mode": "spdk",
    "fio_bin": "/path/to/fio/bin",
    "nvmecli_bin": "/path/to/nvmecli/bin",
    "cpus_allowed": "0,1,10-15",
    "cpus_allowed_policy": "shared",
    "num_cores": 4,
    "cpu_frequency": 2100000,
    "adq_enable": false,
    "kernel_engine": "io_uring"
}
```

Required:

- ip - management IP address of the initiator system, used to set up the SSH connection.
- nic_ips - list of IP addresses of NIC ports to be used in the test,
  local to the given initiator system.
- target_nic_ips - list of IP addresses of Target NIC ports to which the initiator
  will attempt to connect.
- mode - initiator mode, "spdk" or "kernel". For "spdk", the bdev fio plugin
  will be used to connect to NVMe-oF subsystems and submit I/O. For "kernel",
  nvme-cli will be used to connect to NVMe-oF subsystems and fio will use the
  libaio ioengine to submit I/Os.

Optional, common:

- nvmecli_bin - path to the nvme-cli binary; will be used for the "discovery" command
  (in both SPDK and Kernel modes) and for "connect" (in Kernel mode).
  Default: system-wide "nvme".
- fio_bin - path to a custom fio binary, which will be used to run I/O.
  Additionally, the directory where the binary is located should also contain
  the fio sources needed to build the SPDK fio_plugin for spdk initiator mode.
  Default: /usr/src/fio/fio.
- cpus_allowed - str, list of CPU cores to run fio threads on. Takes precedence
  over the `num_cores` setting. Default: None (CPU cores allocated randomly).
  For more information see `man fio`.
- cpus_allowed_policy - str, "shared" or "split". CPU sharing policy for fio
  threads. Default: shared. For more information see `man fio`.
- num_cores - By default fio threads on the initiator side will use as many CPUs
  as there are connected subsystems. This option limits the number of CPU cores
  used for fio threads to this number; cores are allocated randomly and fio
  `filename` parameters are grouped if needed. The `cpus_allowed` option takes
  precedence and `num_cores` is ignored if both are present in the config.
- cpu_frequency - int, custom CPU frequency to set. By default test setups are
  configured to run in performance mode at maximum frequencies. This option allows
  the user to select a CPU frequency instead of running at the maximum frequency.
  Before using this option `intel_pstate=disable` must be set in the boot options
  and the cpupower governor must be set to `userspace`.
- tuned_profile - tuned-adm profile to apply on the system before starting
  the test.
- irq_scripts_dir - path to the scripts directory of the Mellanox mlnx-tools package;
  used to run the set_irq_affinity.sh script.
  Default: /usr/src/local/mlnx-tools/ofed_scripts
- kernel_engine - Select the fio ioengine mode to run tests. io_uring libraries and
  an io_uring-capable fio binary must be present on the Initiator systems!
  Available options:
  - libaio (default)
  - io_uring

Optional, SPDK Initiator only:

- adq_enable - bool; only for TCP transport. Configure system modules, NIC
  settings and create priority traffic classes for ADQ testing.
  You need an ADQ-capable NIC like the Intel E810.
- enable_data_digest - bool; only for TCP transport. Enable the data
  digest for the bdev controller. The target can use IDXD to calculate the
  data digest or fall back to a software-optimized implementation on systems
  that don't have the Intel(R) Data Streaming Accelerator (DSA) engine.
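For comparison with the SPDK-mode example above, a kernel-mode initiator section using
the io_uring engine could look like the sketch below (the addresses and the nvme-cli
path are placeholders):

``` ~sh
"initiator1": {
    "ip": "10.0.0.1",
    "nic_ips": ["192.0.1.2"],
    "target_nic_ips": ["192.0.1.1"],
    "mode": "kernel",
    "nvmecli_bin": "/path/to/nvmecli/bin",
    "kernel_engine": "io_uring"
}
```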
### Fio settings section

``` ~sh
"fio": {
    "bs": ["4k", "128k"],
    "qd": [32, 128],
    "rw": ["randwrite", "write"],
    "rwmixread": 100,
    "rate_iops": 10000,
    "num_jobs": 2,
    "run_time": 30,
    "ramp_time": 30,
    "run_num": 3
}
```

Required:

- bs - fio IO block size
- qd - fio iodepth
- rw - fio rw mode
- rwmixread - read operations percentage in case of mixed workloads
- num_jobs - fio numjobs parameter
  Note: may affect the total number of CPU cores used by initiator systems
- run_time - fio run time
- ramp_time - fio ramp time; no measurements are taken during this period
- run_num - number of times each workload combination is run.
  If more than 1, the final result is the average of all runs.

Optional:

- rate_iops - limit IOPS to this number

#### Test Combinations

It is possible to specify more than one value for the bs, qd and rw parameters.
In such a case the script creates a list of their combinations and runs IO tests
for all of these combinations. For example, the following configuration:

``` ~sh
  "bs": ["4k"],
  "qd": [32, 128],
  "rw": ["write", "read"]
```

results in the following workloads being tested:

- 4k-write-32
- 4k-write-128
- 4k-read-32
- 4k-read-128

#### Important note about queue depth parameter

qd in the fio settings section refers to the iodepth generated per single fio target
device ("filename" in the resulting fio configuration file). It is re-calculated
while the script is running, so the generated fio configuration file might contain
a different value than what the user specified at input, especially when also
using the "numjobs" or initiator "num_cores" parameters. For example:

The Target system exposes 4 NVMe-oF subsystems. One initiator system connects to
all of these subsystems.

Initiator configuration (relevant settings only):

``` ~sh
"initiator1": {
    "num_cores": 1
}
```

Fio configuration:

``` ~sh
"fio": {
    "bs": ["4k"],
    "qd": [128],
    "rw": ["randread"],
    "rwmixread": 100,
    "num_jobs": 1,
    "run_time": 30,
    "ramp_time": 30,
    "run_num": 1
}
```

In this case the generated fio configuration will look like this
(relevant settings only):

``` ~sh
[global]
numjobs=1

[job_section0]
filename=Nvme0n1
filename=Nvme1n1
filename=Nvme2n1
filename=Nvme3n1
iodepth=512
```

The `num_cores` option results in the 4 connected subsystems being grouped under a
single fio thread (job_section0). Because `iodepth` is local to `job_section0`,
it is distributed between each `filename` local to the job section in a round-robin
(by default) fashion. In case of fio targets with the same characteristics
(IOPS & bandwidth capabilities) this means that the iodepth is distributed **roughly**
equally. Ultimately the above fio configuration results in iodepth=128 per filename.

A `numjobs` value higher than 1 is also taken into account, so that the desired qd per
filename is retained:

``` ~sh
[global]
numjobs=2

[job_section0]
filename=Nvme0n1
filename=Nvme1n1
filename=Nvme2n1
filename=Nvme3n1
iodepth=256
```
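Putting the two examples together, the iodepth written to the generated job section
appears to follow the relationship below (a simplification assuming fio targets with
similar performance):

``` ~sh
# Not fio syntax, just the arithmetic implied by the examples above:
#   generated iodepth = qd * (number of filenames in the job section) / numjobs
# First example:  128 * 4 / 1 = 512
# Second example: 128 * 4 / 2 = 256
# In both cases each filename ends up with an effective queue depth of about 128.
```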
With the exception of `run_num`, more information on these options can be found in `man fio`.

## Running the test

Before running the test script, run the spdk/scripts/setup.sh script on the Target
system. This binds the devices to the VFIO/UIO userspace driver and allocates
hugepages for the SPDK process.

Run the script on the NVMe-oF target system:

``` ~sh
cd spdk
sudo PYTHONPATH=$PYTHONPATH:$PWD/python scripts/perf/nvmf/run_nvmf.py
```

By default the script uses the config.json configuration file in the scripts/perf/nvmf
directory. You can specify a different configuration file at runtime as shown below:

``` ~sh
sudo PYTHONPATH=$PYTHONPATH:$PWD/python scripts/perf/nvmf/run_nvmf.py -c /path/to/config.json
```

The PYTHONPATH environment variable is needed because the script uses SPDK-local Python
modules. If you'd like to get rid of `PYTHONPATH=$PYTHONPATH:$PWD/python`,
you need to modify your environment so that the Python interpreter is aware of
the `spdk/python` directory.

## Test Results

Test results for all workload combinations are printed to the screen once the tests
are finished. Additionally, all aggregate results are saved to /tmp/results/nvmf_results.conf.
The results directory path can be changed with the -r script parameter.
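For example, assuming -r takes the results directory path as its argument (as the
description above suggests), a run with a custom configuration file and results
location could be invoked as:

``` ~sh
cd spdk
sudo PYTHONPATH=$PYTHONPATH:$PWD/python scripts/perf/nvmf/run_nvmf.py \
    -c /path/to/config.json -r /path/to/results_dir
```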