# iSCSI Target {#iscsi}

## iSCSI Target Getting Started Guide {#iscsi_getting_started}

The Storage Performance Development Kit iSCSI target application is named `iscsi_tgt`.
The following section describes how to run the iSCSI target from your cloned repository.

## Prerequisites {#iscsi_prereqs}

This guide starts by assuming that you can already build the standard SPDK distribution on your
platform.

Once built, the binary will be in `build/bin`.

If you want to kill the application with a signal, use SIGTERM. The application will then release
all of its shared memory resources before exiting. If the application is killed with SIGKILL, those
shared memory resources are not released and you may need to clean them up manually.
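
For example, assuming the target is running as a process named `iscsi_tgt`:

~~~bash
kill -SIGTERM $(pidof iscsi_tgt)
~~~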

## Introduction

The following diagram shows the relations between the different parts of the iSCSI structure
described in this document.

![iSCSI structure](iscsi.svg)

### Assigning CPU Cores to the iSCSI Target {#iscsi_config_lcore}

SPDK uses the [DPDK Environment Abstraction Layer](http://dpdk.org/doc/guides/prog_guide/env_abstraction_layer.html)
to gain access to hardware resources such as huge memory pages and CPU core(s). DPDK EAL provides
functions to assign threads to specific cores.
To ensure the SPDK iSCSI target has the best performance, place the NICs and the NVMe devices on the
same NUMA node and configure the target to run on CPU cores associated with that node. The following
command line option is used to configure the SPDK iSCSI target:

~~~bash
-m 0xF000000
~~~

This is a hexadecimal bit mask of the CPU cores where the iSCSI target will start polling threads.
In this example, CPU cores 24, 25, 26 and 27 would be used.
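
For example, to start the target on those cores (assuming the binary has been built into `build/bin`
as described above):

~~~bash
./build/bin/iscsi_tgt -m 0xF000000
~~~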

## Configuring iSCSI Target via RPC method {#iscsi_rpc}

The iSCSI target is configured via JSON-RPC calls. See @ref jsonrpc for details.

### Portal groups

- iscsi_create_portal_group -- Add a portal group.
- iscsi_delete_portal_group -- Delete an existing portal group.
- iscsi_target_node_add_pg_ig_maps -- Add initiator group to portal group mappings to an existing iSCSI target node.
- iscsi_target_node_remove_pg_ig_maps -- Delete initiator group to portal group mappings from an existing iSCSI target node.
- iscsi_get_portal_groups -- Show information about all available portal groups.

~~~bash
/path/to/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260
~~~
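
The resulting configuration can be inspected with the `iscsi_get_portal_groups` RPC listed above:

~~~bash
/path/to/spdk/scripts/rpc.py iscsi_get_portal_groups
~~~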

### Initiator groups

- iscsi_create_initiator_group -- Add an initiator group.
- iscsi_delete_initiator_group -- Delete an existing initiator group.
- iscsi_initiator_group_add_initiators -- Add initiators to an existing initiator group.
- iscsi_get_initiator_groups -- Show information about all available initiator groups.

~~~bash
/path/to/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32
~~~

### Target nodes

- iscsi_create_target_node -- Add an iSCSI target node.
- iscsi_delete_target_node -- Delete an iSCSI target node.
- iscsi_target_node_add_lun -- Add a LUN to an existing iSCSI target node.
- iscsi_get_target_nodes -- Show information about all available iSCSI target nodes.

~~~bash
/path/to/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias MyBdev:0 1:2 64 -d
~~~
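
For reference, here is the same command with each positional argument annotated; the meanings are as
understood from the RPC's help text, so verify with `./scripts/rpc.py iscsi_create_target_node --help`
on your SPDK version:

~~~bash
# Target3            -- target node name
# Target3_alias      -- target node alias
# MyBdev:0           -- bdev_name:lun_id pairs (bdev MyBdev becomes LUN 0)
# 1:2                -- pg_tag:ig_tag pairs (portal group 1 mapped to initiator group 2)
# 64                 -- queue depth
# -d                 -- disable CHAP authentication
/path/to/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias MyBdev:0 1:2 64 -d
~~~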

## Configuring iSCSI Initiator {#iscsi_initiator}

The Linux initiator is open-iscsi.

Installing the open-iscsi package:

Fedora:
~~~bash
yum install -y iscsi-initiator-utils
~~~

Ubuntu:
~~~bash
apt-get install -y open-iscsi
~~~

### Setup

Edit /etc/iscsi/iscsid.conf:
~~~bash
node.session.cmds_max = 4096
node.session.queue_depth = 128
~~~

iscsid must be restarted or receive SIGHUP for changes to take effect. To send SIGHUP, run:
~~~bash
killall -HUP iscsid
~~~
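
On systemd-based distributions, the daemon can instead be restarted (assuming the service name is
`iscsid`):

~~~bash
systemctl restart iscsid
~~~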

Recommended changes to /etc/sysctl.conf:
~~~bash
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 0

net.ipv4.tcp_rmem = 10000000 10000000 10000000
net.ipv4.tcp_wmem = 10000000 10000000 10000000
net.ipv4.tcp_mem = 10000000 10000000 10000000
net.core.rmem_default = 524287
net.core.wmem_default = 524287
net.core.rmem_max = 524287
net.core.wmem_max = 524287
net.core.optmem_max = 524287
net.core.netdev_max_backlog = 300000
~~~
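
Apply the new settings without rebooting:

~~~bash
sysctl -p
~~~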

### Discovery

Assume the target is at 10.0.0.1:

~~~bash
iscsiadm -m discovery -t sendtargets -p 10.0.0.1
~~~

### Connect to target

~~~bash
iscsiadm -m node --login
~~~

At this point the iSCSI target should show up as SCSI disks. Check dmesg to see what
they came up as.
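
For example:

~~~bash
dmesg | grep "Attached SCSI disk"
~~~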

### Disconnect from target

~~~bash
iscsiadm -m node --logout
~~~

### Deleting target node cache

~~~bash
iscsiadm -m node -o delete
~~~

This will cause the initiator to forget all previously discovered iSCSI target nodes.

### Finding /dev/sdX nodes for iSCSI LUNs

~~~bash
iscsiadm -m session -P 3 | grep "Attached scsi disk" | awk '{print $4}'
~~~

This will show the /dev node name for each SCSI LUN in all logged-in iSCSI sessions.

### Tuning

After the targets are connected, they can be tuned. For example, if /dev/sdc is
an iSCSI disk, the following can be done:

Set the noop scheduler:

~~~bash
echo noop > /sys/block/sdc/queue/scheduler
~~~

Disable merging/coalescing (can be useful for precise workload measurements):

~~~bash
echo "2" > /sys/block/sdc/queue/nomerges
~~~

Increase the number of requests for the block queue:

~~~bash
echo "1024" > /sys/block/sdc/queue/nr_requests
~~~
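
To apply the same settings to every attached iSCSI disk, the device names reported by `iscsiadm`
(see above) can be fed into a loop like this sketch (run as root):

~~~bash
for dev in $(iscsiadm -m session -P 3 | grep "Attached scsi disk" | awk '{print $4}'); do
    echo noop > /sys/block/${dev}/queue/scheduler
    echo "2" > /sys/block/${dev}/queue/nomerges
    echo "1024" > /sys/block/${dev}/queue/nr_requests
done
~~~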

### Example: Configure simple iSCSI Target with one portal and two LUNs

Assuming we have one iSCSI target server with a portal at 10.0.0.1:3260, two LUNs (Malloc0 and Malloc1),
and accepting initiators on 10.0.0.2/32, as in the diagram below:

![Sample iSCSI configuration](iscsi_example.svg)

#### Configure iSCSI Target

Start the iscsi_tgt application:

```bash
./build/bin/iscsi_tgt
```

Construct two 64MB Malloc block devices with 512B sector size "Malloc0" and "Malloc1":

```bash
./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
./scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
```

Create a new portal group with id 1 and address 10.0.0.1:3260:

```bash
./scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260
```

Create one initiator group with id 2 to accept any connection from 10.0.0.2/32:

```bash
./scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32
```

Finally, construct one target node using the previously created bdevs as LUN0 (Malloc0) and LUN1 (Malloc1)
with the name "disk1" and alias "Data Disk1", using portal group 1 and initiator group 2:

```bash
./scripts/rpc.py iscsi_create_target_node disk1 "Data Disk1" "Malloc0:0 Malloc1:1" 1:2 64 -d
```
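
You can verify the resulting configuration with the `iscsi_get_target_nodes` RPC:

```bash
./scripts/rpc.py iscsi_get_target_nodes
```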

#### Configure initiator

Discover the target:

~~~bash
$ iscsiadm -m discovery -t sendtargets -p 10.0.0.1
10.0.0.1:3260,1 iqn.2016-06.io.spdk:disk1
~~~

Connect to the target:

~~~bash
iscsiadm -m node --login
~~~

At this point the iSCSI target should show up as SCSI disks.

Check dmesg to see what they came up as. In this example it may look like the output below:
~~~bash
...
[630111.860078] scsi host68: iSCSI Initiator over TCP/IP
[630112.124743] scsi 68:0:0:0: Direct-Access     INTEL    Malloc disk      0001 PQ: 0 ANSI: 5
[630112.125445] sd 68:0:0:0: [sdd] 131072 512-byte logical blocks: (67.1 MB/64.0 MiB)
[630112.125468] sd 68:0:0:0: Attached scsi generic sg3 type 0
[630112.125926] sd 68:0:0:0: [sdd] Write Protect is off
[630112.125934] sd 68:0:0:0: [sdd] Mode Sense: 83 00 00 08
[630112.126049] sd 68:0:0:0: [sdd] Write cache: enabled, read cache: disabled, doesn't support DPO or FUA
[630112.126483] scsi 68:0:0:1: Direct-Access     INTEL    Malloc disk      0001 PQ: 0 ANSI: 5
[630112.127096] sd 68:0:0:1: Attached scsi generic sg4 type 0
[630112.127143] sd 68:0:0:1: [sde] 131072 512-byte logical blocks: (67.1 MB/64.0 MiB)
[630112.127566] sd 68:0:0:1: [sde] Write Protect is off
[630112.127573] sd 68:0:0:1: [sde] Mode Sense: 83 00 00 08
[630112.127728] sd 68:0:0:1: [sde] Write cache: enabled, read cache: disabled, doesn't support DPO or FUA
[630112.128246] sd 68:0:0:0: [sdd] Attached SCSI disk
[630112.129789] sd 68:0:0:1: [sde] Attached SCSI disk
...
~~~

You may also use a simple bash command to find the /dev/sdX nodes for each iSCSI LUN
in all logged-in iSCSI sessions:

~~~bash
$ iscsiadm -m session -P 3 | grep "Attached scsi disk" | awk '{print $4}'
sdd
sde
~~~

## iSCSI Hotplug {#iscsi_hotplug}

At the iSCSI level, we provide the following support for hotplug:

1. bdev/nvme:

At the bdev/nvme level, we start one hotplug monitor which calls
spdk_nvme_probe() periodically to detect hotplug events. We provide
private attach_cb and remove_cb callbacks to spdk_nvme_probe(). In the attach_cb,
we create a block device based on the newly attached NVMe device; in the
remove_cb, we unregister the block device, which also notifies the
upper level stack (for the iSCSI target, the upper level stack is scsi/lun) to
handle the hot-remove event.

2. scsi/lun:

When the LUN receives the hot-remove notification from the block device layer,
the LUN is marked as removed, and all I/Os submitted after this point
return with CHECK CONDITION status. The LUN then starts a poller which
waits for all the commands that have already been submitted to the block device to
complete; after all of those commands complete, the LUN is deleted.

@sa spdk_nvme_probe

## iSCSI Login Redirection {#iscsi_login_redirection}

The SPDK iSCSI target application supports the iSCSI login redirection feature.

A portal refers to an IP address and TCP port number pair, and a portal group
contains a set of portals. Users of the SPDK iSCSI target application configure
portals through portal groups.

To support the login redirection feature, we utilize two types of portal groups:
public portal groups and private portal groups.

The SPDK iSCSI target application usually has a discovery portal. An initiator connects
to the discovery portal and, through a discovery session, gets a list of targets as well
as the list of portals on which these targets may be accessed.

Public portal groups have their portals returned by a discovery session. Private
portal groups do not have their portals returned by a discovery session. A public
portal group may optionally have a redirect portal for non-discovery logins for
each associated target. This redirect portal must be from a private portal group.

Initiators configure portals in public portal groups as target portals. When an
initiator logs in to a target through a portal in an associated public portal group,
the target sends a temporary redirection response with a redirect portal. Then the
initiator logs in to the target again through the redirect portal.

Users set a portal group to public or private at creation using the
`iscsi_create_portal_group` RPC, associate portal groups with a target using the
`iscsi_create_target_node` RPC or the `iscsi_target_node_add_pg_ig_maps` RPC,
specify an up-to-date redirect portal in a public portal group for a target using
the `iscsi_target_node_set_redirect` RPC, and terminate the corresponding connections
by asynchronous logout request using the `iscsi_target_node_request_logout` RPC.
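
A hypothetical sketch of this flow is shown below. The option names for marking a portal group as
private and for setting the redirect portal are assumptions; check `./scripts/rpc.py <method> --help`
for the exact syntax in your SPDK version.

~~~bash
# Public portal group 1 is returned by discovery; private portal group 2 is not
# (the option marking group 2 as private is an assumption).
./scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260
./scripts/rpc.py iscsi_create_portal_group -p 2 10.0.0.1:3261

# Point non-discovery logins on the public portal group to the private portal
# (exact option names for the redirect address and port are assumptions).
./scripts/rpc.py iscsi_target_node_set_redirect disk1 1 -a 10.0.0.1 -p 3261

# Ask initiators logged in through portal group 1 to log out and reconnect.
./scripts/rpc.py iscsi_target_node_request_logout disk1 -t 1
~~~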

Typically, users will use the login redirection feature in a scale-out iSCSI target
system, which runs multiple SPDK iSCSI target applications.
332