## Running NVMe-OF Performance Test Cases

In order to reproduce the test cases described in [SPDK NVMe-OF Performance Test Cases](https://ci.spdk.io/download/performance-reports/SPDK_nvmeof_perf_report_18.04.pdf) follow the instructions below.

Currently, RDMA NIC IP address assignment must be done manually before running the tests.

# Prepare the configuration file

Configure the target, initiators, and FIO workload in the JSON configuration file.

## General

Options which apply to both the target and all initiator servers, such as the "password" and "username" fields.
All servers are required to have the same user credentials for running the test.
Test results can be found in the /tmp/results directory.

### transport

Transport layer to use between Target and Initiator servers - rdma or tcp.

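For reference, here is a minimal sketch of how the general options described above could look in config.json. The top-level section name "general" and all values are assumptions used for illustration only; adjust them to your environment.

```
"general": {
    "username": "uname",
    "password": "pass",
    "transport": "rdma"
}
```
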
## Target

Configure the target server information.

### nic_ips

List of IP addresses that will be used in this test.
NVMe namespaces will be split between the provided IP addresses.
For example, providing 2 IPs with 16 NVMe drives present will result in each IP managing
8 NVMe subsystems.

### mode

"spdk" or "kernel" values allowed.

### null_block_devices

Integer. Use null block devices instead of the NVMe drives present in the system.
If set to 1, this can be used for latency measurements as described in Test Case 3 of the performance report.

### null_block_dif_type

Integer. Enable data protection on the created null block device. Defaults to 0 if the option
is not present in the JSON configuration file. See doc/jsonrpc.md "bdev_null_create" for details.

### core_mask

List of CPU cores to assign for running the SPDK NVMe-OF Target process. Exact core numbers or ranges can be specified, e.g.:
[0, 1, 10-15].

### nvmet_bin

Path to the nvmetcli application executable. If not provided, the system-wide package will be used
by default. Not used if "mode" is set to "spdk".

### num_shared_buffers

Number of shared buffers to use when creating the transport layer.

### dif_insert_strip

Boolean. If set to true, enable the "dif_insert_or_strip" option for the TCP transport layer.

### adq_enable

Configure and use ADQ on the selected system. Only available when using Intel E810 NICs.
Set to "true" to enable.

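Putting the options above together, a Target section could look roughly like the sketch below. The "target" section name, IP addresses and values are placeholder assumptions, not taken from a verified setup; optional keys such as "null_block_devices", "nvmet_bin", "dif_insert_strip" or "adq_enable" can be added in the same way. The core_mask value format (string following the "[0, 1, 10-15]" example above) is also an assumption.

```
"target": {
    "nic_ips": ["192.0.2.1", "192.0.2.2"],
    "mode": "spdk",
    "core_mask": "[0, 1, 10-15]",
    "num_shared_buffers": 4096
}
```
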
## Initiator

Describes initiator arguments. There can be more than one initiator section in the configuration file.
For the sake of easier results parsing from multiple initiators, please use only digits and letters
in the initiator section names.

### ip

Management IP address used for SSH communication with the initiator server.

### nic_ips

List of IP addresses local to the initiator.

### remote_nic_ips

List of target IP addresses to which the initiator should try to connect.

### mode

"spdk" or "kernel" values allowed.

### cpus_allowed

List of CPU cores to assign for running the SPDK NVMe-OF initiator process.
Exact core numbers can be specified: 0,5
or ranges: 10-15
or a combination, binding to CPUs 0, 5, and 8 to 15: `cpus_allowed=0,5,8-15`.
If not specified, the num_cores option will be used.
If specified together with num_cores, the cpus_allowed parameter has higher priority than num_cores.

### num_cores

Applies only to the SPDK initiator. Number of CPU cores to use for running the FIO job.
If not specified, then by default each connected subsystem gets its own CPU core.

### nvmecli_dir

Path to the directory with the nvme-cli application. If not provided, the system-wide package will be used
by default. Not used if "mode" is set to "spdk".

### fio_bin

Path to the fio binary that will be used to compile SPDK and run the test.
If not specified, then the script will use /usr/src/fio/fio as the default.

### adq_enable

Configure and use ADQ on the selected system. Only available when using Intel E810 NICs.
Set to "true" to enable.

### extra_params

Space-separated string with additional settings for the "nvme connect" command
other than -t, -s, -n and -a.

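A single initiator section combining the options above might look like the sketch below. The section name "initiator1" simply follows the digits-and-letters guidance, and the addresses and values are placeholder assumptions:

```
"initiator1": {
    "ip": "192.0.2.10",
    "nic_ips": ["192.0.2.11"],
    "remote_nic_ips": ["192.0.2.1"],
    "mode": "spdk",
    "cpus_allowed": "0,5,8-15"
}
```
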
## fio

Fio job parameters. A sketch of an example section follows the list below.

- bs: block size
- qd: IO depth, per connected fio filename target
- rw: workload mode
- rwmixread: percentage of reads in readwrite workloads
- run_time: time (in seconds) to run the workload
- ramp_time: time (in seconds) to run the workload before statistics are gathered
- run_num: how many times to run the given workload in a loop

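The sketch below shows one possible shape of the fio section. Whether parameters such as bs, qd and rw take single values or lists of values to iterate over is an assumption here; treat the entries as placeholders:

```
"fio": {
    "bs": ["4k"],
    "qd": [128],
    "rw": ["randrw"],
    "rwmixread": 100,
    "run_time": 30,
    "ramp_time": 10,
    "run_num": 3
}
```
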
# Running Test

Before running the test script, use the setup.sh script to bind the devices you want to
use in the test to the VFIO/UIO driver.
Run the script on the NVMe-oF target system:

    cd spdk
    sudo PYTHONPATH=$PYTHONPATH:$PWD/scripts scripts/perf/nvmf/run_nvmf.py

The script uses the config.json configuration file in the scripts/perf/nvmf directory by default. You can
specify a different configuration file at runtime as shown below:

    sudo PYTHONPATH=$PYTHONPATH:$PWD/scripts scripts/perf/nvmf/run_nvmf.py /path/to/config.json

The script uses another SPDK script (scripts/rpc.py), so we pass the path to rpc.py by setting the Python path
as a runtime environment parameter.

# Test Results

When the test completes, you will find a CSV file (nvmf_results.csv) containing the results in the target node
directory /tmp/results.

# Processor Counter Monitor (PCM)
PCM Tools provide a number of command-line utilities for real-time monitoring.
Before using PCM Tools in the nvmf perf scripts, they need to be installed on the Target machine.
PCM source and instructions are available at https://github.com/opcm/pcm.
To enable PCM in the perf test, you need to add the following Target setting in the config.json file:
```
"pcm_settings": ["pcm_directory", delay_time, measure_interval, sample_count]
```
example:
```
"pcm_settings": ["/tmp/pcm", 10, 1, 30]
```
The example above will run PCM measurements for CPU, memory and power. The start will be delayed by 10 seconds,
with a sample taken every 1 second. The last parameter is the number of samples for the CPU and power measurements;
PCM memory does not support a sample count.

# Bandwidth monitor (bwm-ng)
bwm-ng is a console-based utility that provides real-time monitoring of network bandwidth.
Before using bwm-ng in the nvmf perf scripts, it needs to be installed on the Target machine.
To enable the bandwidth monitor in the perf test, you need to add the following Target setting in the config.json file:
```
"bandwidth_settings": [bool, sample_count]
```
example:
```
"bandwidth_settings": [true, 30]
```

# Enable zcopy on target side
To enable zcopy in the perf test, you need to add the following Target setting in the config.json file:

```
"zcopy_settings": bool
```
example:
```
"zcopy_settings": true
```
# Scheduler settings in NVMe-oF performance scripts
To configure the SPDK scheduler used during the perf test, you need to add the following Target setting in the config.json file:

```
"scheduler_settings": [scheduler_name]
```
example:
```
"scheduler_settings": ["static"]
```
