## Running NVMe-OF Performance Test Cases

In order to reproduce the test cases described in [SPDK NVMe-OF Performance Test Cases](https://ci.spdk.io/download/performance-reports/SPDK_nvmeof_perf_report_18.04.pdf) follow the instructions below.

Currently, RDMA NIC IP address assignment must be done manually before running the tests.

# Prepare the configuration file
Configure the target, initiators, and FIO workload in the JSON configuration file.

## General
Options which apply to both the target and all initiator servers, such as the "password" and "username" fields.
All servers are required to have the same user credentials for running the test.
Test results can be found in the /tmp/results directory.
### transport
Transport layer to use between Target and Initiator servers - rdma or tcp.
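For illustration only, the general options above might appear in config.json as a fragment like the one below; the top-level key name "general" and the exact layout are assumptions, and all values are placeholders:
```
"general": {
    "username": "user",
    "password": "password",
    "transport": "rdma"
}
```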

## Target
Configure the target server information.
### nic_ips
List of IP addresses that will be used in this test.
NVMe namespaces will be split between the provided IP addresses.
For example, providing 2 IPs with 16 NVMe drives present will result in each IP managing
8 NVMe subsystems.
### mode
"spdk" or "kernel" values allowed.
### use_null_block
Use a null block device instead of the present NVMe drives. Used for latency measurements as described
in Test Case 3 of the performance report.
### num_cores
List of CPU cores to assign for running the SPDK NVMe-OF Target process. Can specify exact core numbers or ranges, e.g.
[0, 1, 10-15].
### nvmet_bin
Path to the nvmetcli application executable. If not provided then the system-wide package will be used
by default. Not used if "mode" is set to "spdk".
### num_shared_buffers
Number of shared buffers to use when creating the transport layer.
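Putting the target options together, a target section might look like the fragment below; the key name "target" and the exact value formats are assumptions based on the descriptions above, and the IP addresses are placeholders:
```
"target": {
    "nic_ips": ["192.0.2.1", "192.0.2.2"],
    "mode": "spdk",
    "use_null_block": false,
    "num_cores": [0, 1, 2, 3],
    "num_shared_buffers": 4096
}
```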

## Initiator
Describes initiator arguments. There can be more than one initiator section in the configuration file.
To make it easier to parse results from multiple initiators, please use only digits and letters
in initiator section names.
### ip
Management IP address used for SSH communication with the initiator server.
### nic_ips
List of target IP addresses to which the initiator should try to connect.
### mode
"spdk" or "kernel" values allowed.
### num_cores
Applies only to the SPDK initiator. Number of CPU cores to use for running the FIO job.
If not specified, then by default each connected subsystem gets its own CPU core.
### nvmecli_dir
Path to the directory with the nvme-cli application. If not provided then the system-wide package will be used
by default. Not used if "mode" is set to "spdk".
### fio_bin
Path to the fio binary that will be used to compile SPDK and run the test.
If not specified, then the script will use /usr/src/fio/fio as the default.
### extra_params
Space-separated string with additional settings for the "nvme connect" command,
other than -t, -s, -n and -a.
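A single initiator section could then look like the sketch below; the section name "initiator1" and the exact layout are illustrative, and the addresses and paths are placeholders:
```
"initiator1": {
    "ip": "10.0.0.10",
    "nic_ips": ["192.0.2.1"],
    "mode": "spdk",
    "fio_bin": "/usr/src/fio/fio"
}
```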

## fio
Fio job parameters. An example fio section is shown after the parameter list below.

- bs: block size
- qd: IO depth
- rw: workload mode
- rwmixread: percentage of reads in readwrite workloads
- run_time: time (in seconds) to run the workload
- ramp_time: time (in seconds) to run the workload before statistics are gathered
- run_num: how many times to run the given workload in a loop
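For example, a fio section built from these parameters might look like the fragment below; whether each value is given as a scalar or a list is an assumption, and the numbers are placeholders:
```
"fio": {
    "bs": ["4k"],
    "qd": [128],
    "rw": ["randrw"],
    "rwmixread": 70,
    "run_time": 300,
    "ramp_time": 60,
    "run_num": 3
}
```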

# Running Test
Before running the test script, use the setup.sh script to bind the devices you want to
use in the test to the VFIO/UIO driver.
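For example, from the root of the spdk repository:

    sudo scripts/setup.sh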

Run the script on the NVMe-oF target system:

    cd spdk
    sudo PYTHONPATH=$PYTHONPATH:$PWD/scripts scripts/perf/nvmf/run_nvmf.py

The script uses the config.json configuration file in the scripts/perf/nvmf directory by default. You can
specify a different configuration file at runtime as shown below:

    sudo PYTHONPATH=$PYTHONPATH:$PWD/scripts scripts/perf/nvmf/run_nvmf.py /path/to/config.json

The script uses another SPDK script (scripts/rpc.py), so we pass the path to rpc.py by setting the Python path
as a runtime environment parameter.

# Test Results
When the test completes, you will find a CSV file (nvmf_results.csv) containing the results in the target node
directory /tmp/results.

# Processor Counter Monitor (PCM)
PCM Tools provides a number of command-line utilities for real-time monitoring.
Before using PCM Tools in the nvmf perf scripts, they need to be installed on the Target machine.
PCM source and instructions are available at https://github.com/opcm/pcm.
To enable PCM in the perf test you need to add the following Target setting to the config.json file:
```
"pcm_settings": ["pcm_directory", "measure_cpu", "measure_memory", delay_time, measure_interval, sample_count]
```
Example:
```
"pcm_settings": ["/tmp/pcm", true, true, 10, 1, 30]
```
The example above will run PCM measurements for CPU and memory, with a start delay of 10 seconds, a sample taken
every 1 second, and 30 samples for the CPU measurement. The PCM memory measurement does not support a sample count.