## Running NVMe-OF Performance Test Cases

In order to reproduce the test cases described in [SPDK NVMe-OF Performance Test Cases](https://ci.spdk.io/download/performance-reports/SPDK_nvmeof_perf_report_18.04.pdf) follow the instructions below.

Currently RDMA NIC IP address assignment must be done manually before running the tests.

# Prepare the configuration file

Configure the target, initiators, and FIO workload in the JSON configuration file.

## General

Options in this section apply to both the target and all initiator servers, such as the "password" and "username" fields.
All servers are required to have the same user credentials for running the test.
Test results can be found in the /tmp/results directory.

### transport

Transport layer to use between Target and Initiator servers - rdma or tcp.

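A minimal sketch of how these general options might look in config.json; the "general" section key is an assumption based on the field descriptions above, and the values are placeholders:

```
"general": {
    "username": "uname",
    "password": "pass",
    "transport": "rdma"
}
```
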
## Target

Configure the target server information.

### nic_ips

List of IP addresses that will be used in this test.
NVMe namespaces will be split between the provided IP addresses.
For example, providing 2 IPs with 16 NVMe drives present will result in each IP managing
8 NVMe subsystems.

### mode

"spdk" or "kernel" values allowed.

### null_block_devices

Integer. Use null block devices instead of present NVMe drives.
If set to 1, can be used for latency measurements as described in Test Case 3 of the performance report.

### null_block_dif_type

Integer. Enable data protection on the created null block device. Defaults to 0 if the option is
not present in the JSON configuration file. See doc/jsonrpc.md "bdev_null_create" for details.

### num_cores

List of CPU cores to assign for running the SPDK NVMe-OF Target process. Exact core numbers or ranges can be specified, eg:
[0, 1, 10-15].

### nvmet_bin

Path to the nvmetcli application executable. If not provided then the system-wide package will be used
by default. Not used if "mode" is set to "spdk".

### num_shared_buffers

Number of shared buffers to use when creating the transport layer.

### dif_insert_strip

Boolean. If set to true - enable the "dif_insert_or_strip" option for the TCP transport layer.

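Putting these options together, a hedged sketch of a target section; the "target" key name follows the section title above, the values are illustrative, and optional fields (e.g. "null_block_devices", "nvmet_bin") are simply omitted when not needed:

```
"target": {
    "nic_ips": ["192.168.100.1", "192.168.100.2"],
    "mode": "spdk",
    "num_cores": [0, 1],
    "num_shared_buffers": 4096,
    "dif_insert_strip": false
}
```
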
## Initiator

Describes initiator arguments. There can be more than one initiator section in the configuration file.
For the sake of easier results parsing from multiple initiators please use only digits and letters
in initiator section names.

### ip

Management IP address used for SSH communication with the initiator server.

### nic_ips

List of target IP addresses to which the initiator should try to connect.

### mode

"spdk" or "kernel" values allowed.

### cpus_allowed

List of CPU cores to assign for running the SPDK NVMe-OF initiator process.
Exact core numbers can be specified: 0,5
or ranges: 10-15
or a combination of both, binding to CPUs 0, 5, and 8 to 15: `cpus_allowed=0,5,8-15`.
If not specified then the num_cores option will be used.
If both are specified then the cpus_allowed parameter has higher priority than num_cores.

### num_cores

Applies only to SPDK initiator. Number of CPU cores to use for running the FIO job.
If not specified then by default each connected subsystem gets its own CPU core.

### nvmecli_dir

Path to directory with the nvme-cli application. If not provided then the system-wide package will be used
by default. Not used if "mode" is set to "spdk".

### fio_bin

Path to the fio binary that will be used to compile SPDK and run the test.
If not specified, then the script will use /usr/src/fio/fio as the default.

### extra_params

Space separated string with additional settings for the "nvme connect" command
other than -t, -s, -n and -a.

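A hedged sketch of a single initiator section; the section name "initiator1" follows the digits-and-letters naming guidance above, the addresses are illustrative, and passing cpus_allowed as a string is an assumption:

```
"initiator1": {
    "ip": "10.0.0.10",
    "nic_ips": ["192.168.100.1"],
    "mode": "spdk",
    "cpus_allowed": "0,5,8-15",
    "fio_bin": "/usr/src/fio/fio"
}
```
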
## fio

Fio job parameters. An example section is sketched after the list.

- bs: block size
- qd: IO depth, per connected fio filename target
- rw: workload mode
- rwmixread: percentage of reads in readwrite workloads
- run_time: time (in seconds) to run the workload
- ramp_time: time (in seconds) to run the workload before statistics are gathered
- run_num: how many times to run the given workload in a loop

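A hedged sketch of a fio section; treating bs, qd and rw as lists (so several workload combinations can be looped over) is an assumption - check the bundled config.json for the exact shape:

```
"fio": {
    "bs": ["4k"],
    "qd": [128],
    "rw": ["randrw"],
    "rwmixread": 70,
    "run_time": 300,
    "ramp_time": 30,
    "run_num": 3
}
```
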
# Running Test

Before running the test script use the setup.sh script to bind the devices you want to
use in the test to the VFIO/UIO driver.
Run the script on the NVMe-oF target system:

    cd spdk
    sudo PYTHONPATH=$PYTHONPATH:$PWD/scripts scripts/perf/nvmf/run_nvmf.py

The script uses the config.json configuration file in the scripts/perf/nvmf directory by default. You can
specify a different configuration file at runtime as shown below:

    sudo PYTHONPATH=$PYTHONPATH:$PWD/scripts scripts/perf/nvmf/run_nvmf.py /path/to/config.json

The script uses another SPDK script (scripts/rpc.py), so we pass the path to rpc.py by setting the Python path
as a runtime environment parameter.

# Test Results

When the test completes, you will find a csv file (nvmf_results.csv) containing the results in the target node
directory /tmp/results.

# Processor Counter Monitor (PCM)

PCM Tools provide a number of command-line utilities for real-time monitoring.
Before using PCM Tools in nvmf perf scripts they need to be installed on the Target machine.
PCM source and instructions are available on https://github.com/opcm/pcm.
To enable PCM in the perf test you need to add a Target setting in the config.json file:
```
"pcm_settings": ["pcm_directory", "measure_cpu", "measure_memory", delay_time, measure_interval, sample_count]
```
example:
```
"pcm_settings": ["/tmp/pcm", true, true, 10, 1, 30]
```
The example above will run PCM measurements for CPU and memory, with a 10 second start delay, sampling every 1 second,
and taking 30 samples for the CPU measurement. The PCM memory measurement does not support a sample count.

# Bandwidth monitor (bwm-ng)

bwm-ng is a console-based utility for real-time monitoring of network bandwidth.
Before using bwm-ng in nvmf perf scripts it needs to be installed on the Target machine.
To enable the bandwidth monitor in the perf test you need to add a Target setting in the config.json file:
```
"bandwidth_settings": [bool, sample_count]
```
example:
```
"bandwidth_settings": [true, 30]
```

# Enable zcopy on target side

To enable zcopy in the perf test you need to add a Target setting in the config.json file:

```
"zcopy_settings": bool
```
example:
```
"zcopy_settings": true
```

# Scheduler settings in NVMe-oF performance scripts

To select the SPDK scheduler in the perf test you need to add a Target setting in the config.json file:

```
"scheduler_settings": [scheduler_name]
```
example:
```
"scheduler_settings": ["static"]
```