# iSCSI Target {#iscsi}

# iSCSI Target Getting Started Guide {#iscsi_getting_started}

The Storage Performance Development Kit iSCSI target application is named `iscsi_tgt`.
The following section describes how to run the iSCSI target from your cloned package.

## Prerequisites {#iscsi_prereqs}

This guide assumes that you can already build the standard SPDK distribution on your
platform.

Once built, the binary will be in `build/bin`.

If you want to kill the application with a signal, use SIGTERM. The application will then
release all of its shared memory resources before exiting. SIGKILL gives the application no
chance to release these resources, and you may need to release them manually.

## Introduction

The following diagram shows the relations between the different parts of the iSCSI structure
described in this document.

### Assigning CPU Cores to the iSCSI Target {#iscsi_config_lcore}

SPDK uses the [DPDK Environment Abstraction Layer](http://dpdk.org/doc/guides/prog_guide/env_abstraction_layer.html)
to gain access to hardware resources such as huge memory pages and CPU core(s). DPDK EAL provides
functions to assign threads to specific cores.
To ensure the SPDK iSCSI target has the best performance, place the NICs and the NVMe devices on the
same NUMA node and configure the target to run on CPU cores associated with that node. The following
command line option is used to configure the SPDK iSCSI target:

~~~
-m 0xF000000
~~~

This is a hexadecimal bit mask of the CPU cores where the iSCSI target will start polling threads.
In this example, CPU cores 24, 25, 26 and 27 would be used.

## Configuring iSCSI Target via RPC method {#iscsi_rpc}

The iSCSI target is configured via JSON-RPC calls. See @ref jsonrpc for details.

### Portal groups

- iscsi_create_portal_group -- Add a portal group.
- iscsi_delete_portal_group -- Delete an existing portal group.
- iscsi_target_node_add_pg_ig_maps -- Add initiator group to portal group mappings to an existing iSCSI target node.
- iscsi_target_node_remove_pg_ig_maps -- Delete initiator group to portal group mappings from an existing iSCSI target node.
- iscsi_get_portal_groups -- Show information about all available portal groups.

~~~
/path/to/spdk/scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260
~~~

### Initiator groups

- iscsi_create_initiator_group -- Add an initiator group.
- iscsi_delete_initiator_group -- Delete an existing initiator group.
- iscsi_initiator_group_add_initiators -- Add initiators to an existing initiator group.
- iscsi_get_initiator_groups -- Show information about all available initiator groups.

~~~
/path/to/spdk/scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32
~~~

### Target nodes

- iscsi_create_target_node -- Add an iSCSI target node.
- iscsi_delete_target_node -- Delete an iSCSI target node.
- iscsi_target_node_add_lun -- Add a LUN to an existing iSCSI target node.
- iscsi_get_target_nodes -- Show information about all available iSCSI target nodes.

~~~
/path/to/spdk/scripts/rpc.py iscsi_create_target_node Target3 Target3_alias MyBdev:0 1:2 64 -d
~~~
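Each of the `iscsi_get_*` RPCs listed above takes no arguments and reports the current
configuration as JSON, which is a convenient way to verify the objects created by the commands
above. For example:

~~~
/path/to/spdk/scripts/rpc.py iscsi_get_portal_groups
/path/to/spdk/scripts/rpc.py iscsi_get_initiator_groups
/path/to/spdk/scripts/rpc.py iscsi_get_target_nodes
~~~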
## Configuring iSCSI Initiator {#iscsi_initiator}

The Linux initiator is open-iscsi.

Installing the open-iscsi package:

Fedora:
~~~
yum install -y iscsi-initiator-utils
~~~

Ubuntu:
~~~
apt-get install -y open-iscsi
~~~

### Setup

Edit /etc/iscsi/iscsid.conf
~~~
node.session.cmds_max = 4096
node.session.queue_depth = 128
~~~

iscsid must be restarted or receive SIGHUP for changes to take effect. To send SIGHUP, run:
~~~
killall -HUP iscsid
~~~

Recommended changes to /etc/sysctl.conf
~~~
net.ipv4.tcp_timestamps = 1
net.ipv4.tcp_sack = 0

net.ipv4.tcp_rmem = 10000000 10000000 10000000
net.ipv4.tcp_wmem = 10000000 10000000 10000000
net.ipv4.tcp_mem = 10000000 10000000 10000000
net.core.rmem_default = 524287
net.core.wmem_default = 524287
net.core.rmem_max = 524287
net.core.wmem_max = 524287
net.core.optmem_max = 524287
net.core.netdev_max_backlog = 300000
~~~

### Discovery

Assume the target is at 10.0.0.1:
~~~
iscsiadm -m discovery -t sendtargets -p 10.0.0.1
~~~

### Connect to target

~~~
iscsiadm -m node --login
~~~

At this point the iSCSI target should show up as SCSI disks. Check dmesg to see what
they came up as.

### Disconnect from target

~~~
iscsiadm -m node --logout
~~~

### Deleting target node cache

~~~
iscsiadm -m node -o delete
~~~

This will cause the initiator to forget all previously discovered iSCSI target nodes.

### Finding /dev/sdX nodes for iSCSI LUNs

~~~
iscsiadm -m session -P 3 | grep "Attached scsi disk" | awk '{print $4}'
~~~

This will show the /dev node name for each SCSI LUN in all logged in iSCSI sessions.

### Tuning

After the targets are connected, they can be tuned. For example, if /dev/sdc is
an iSCSI disk, the following can be done:

Set the I/O scheduler to noop:

~~~
echo noop > /sys/block/sdc/queue/scheduler
~~~

Disable merging/coalescing (can be useful for precise workload measurements):

~~~
echo "2" > /sys/block/sdc/queue/nomerges
~~~

Increase the number of requests for the block queue:

~~~
echo "1024" > /sys/block/sdc/queue/nr_requests
~~~

### Example: Configure simple iSCSI Target with one portal and two LUNs

Assume we have one iSCSI target server with a portal at 10.0.0.1:3260, two LUNs (Malloc0 and Malloc1),
accepting initiators from 10.0.0.2/32, as in the diagram below:

#### Configure iSCSI Target

Start the iscsi_tgt application:

```
./build/bin/iscsi_tgt
```

Construct two 64MB Malloc block devices with 512B sector size, "Malloc0" and "Malloc1":

```
./scripts/rpc.py bdev_malloc_create -b Malloc0 64 512
./scripts/rpc.py bdev_malloc_create -b Malloc1 64 512
```

Create a new portal group with id 1 and address 10.0.0.1:3260:

```
./scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260
```

Create one initiator group with id 2 that accepts any connection from 10.0.0.2/32:

```
./scripts/rpc.py iscsi_create_initiator_group 2 ANY 10.0.0.2/32
```
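Optionally, verify what has been created so far; the `iscsi_get_*` RPCs described earlier take
no arguments and report the current configuration:

```
./scripts/rpc.py iscsi_get_portal_groups
./scripts/rpc.py iscsi_get_initiator_groups
```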
Finally, construct one target using the previously created bdevs as LUN0 (Malloc0) and LUN1 (Malloc1),
with the name "disk1" and alias "Data Disk1", using portal group 1 and initiator group 2:

```
./scripts/rpc.py iscsi_create_target_node disk1 "Data Disk1" "Malloc0:0 Malloc1:1" 1:2 64 -d
```

#### Configure initiator

Discover the target:

~~~
$ iscsiadm -m discovery -t sendtargets -p 10.0.0.1
10.0.0.1:3260,1 iqn.2016-06.io.spdk:disk1
~~~

Connect to the target:

~~~
iscsiadm -m node --login
~~~

At this point the iSCSI target should show up as SCSI disks.

Check dmesg to see what they came up as. In this example it can look like below:

~~~
...
[630111.860078] scsi host68: iSCSI Initiator over TCP/IP
[630112.124743] scsi 68:0:0:0: Direct-Access INTEL Malloc disk 0001 PQ: 0 ANSI: 5
[630112.125445] sd 68:0:0:0: [sdd] 131072 512-byte logical blocks: (67.1 MB/64.0 MiB)
[630112.125468] sd 68:0:0:0: Attached scsi generic sg3 type 0
[630112.125926] sd 68:0:0:0: [sdd] Write Protect is off
[630112.125934] sd 68:0:0:0: [sdd] Mode Sense: 83 00 00 08
[630112.126049] sd 68:0:0:0: [sdd] Write cache: enabled, read cache: disabled, doesn't support DPO or FUA
[630112.126483] scsi 68:0:0:1: Direct-Access INTEL Malloc disk 0001 PQ: 0 ANSI: 5
[630112.127096] sd 68:0:0:1: Attached scsi generic sg4 type 0
[630112.127143] sd 68:0:0:1: [sde] 131072 512-byte logical blocks: (67.1 MB/64.0 MiB)
[630112.127566] sd 68:0:0:1: [sde] Write Protect is off
[630112.127573] sd 68:0:0:1: [sde] Mode Sense: 83 00 00 08
[630112.127728] sd 68:0:0:1: [sde] Write cache: enabled, read cache: disabled, doesn't support DPO or FUA
[630112.128246] sd 68:0:0:0: [sdd] Attached SCSI disk
[630112.129789] sd 68:0:0:1: [sde] Attached SCSI disk
...
~~~

You may also use a simple bash command to find the /dev/sdX node for each iSCSI LUN
in all logged-in iSCSI sessions:

~~~
$ iscsiadm -m session -P 3 | grep "Attached scsi disk" | awk '{print $4}'
sdd
sde
~~~

# iSCSI Hotplug {#iscsi_hotplug}

At the iSCSI level, we provide the following support for hotplug:

1. bdev/nvme:
   At the bdev/nvme level, we start one hotplug monitor which calls
   spdk_nvme_probe() periodically to get hotplug events. We provide private
   attach_cb and remove_cb callbacks for spdk_nvme_probe(). In the attach_cb,
   we create a block device based on the attached NVMe device, and in the
   remove_cb, we unregister the block device, which also notifies the
   upper level stack (for the iSCSI target, the upper level stack is scsi/lun) to
   handle the hot-remove event.

2. scsi/lun:
   When the LUN receives the hot-remove notification from the block device layer,
   the LUN is marked as removed, and all I/Os submitted after this point
   return with check condition status. The LUN then starts a poller which waits
   for all the commands already submitted to the block device to return;
   after all of them have returned, the LUN is deleted.

@sa spdk_nvme_probe
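In recent SPDK releases the NVMe hotplug monitor can typically be enabled at runtime through the
generic bdev RPC set. The following is a minimal sketch that assumes the `bdev_nvme_set_hotplug`
RPC is available in your build (it is not part of the iSCSI-specific RPCs above); check
`./scripts/rpc.py bdev_nvme_set_hotplug --help` for the exact options in your version:

~~~
# Enable the NVMe hotplug monitor on a running target (assumed RPC and -e flag)
./scripts/rpc.py bdev_nvme_set_hotplug -e
~~~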
# iSCSI Login Redirection {#iscsi_login_redirection}

The SPDK iSCSI target application supports the iSCSI login redirection feature.

A portal refers to an IP address and TCP port number pair, and a portal group
contains a set of portals. Users of the SPDK iSCSI target application configure
portals through portal groups.

To support the login redirection feature, we utilize two types of portal groups,
public portal groups and private portal groups.

The SPDK iSCSI target application usually has a discovery portal. An initiator connects
to the discovery portal and, through a discovery session, gets the list of targets as well
as the list of portals on which these targets may be accessed.

Public portal groups have their portals returned by a discovery session. Private
portal groups do not have their portals returned by a discovery session. A public
portal group may optionally have a redirect portal for non-discovery logins for
each associated target. This redirect portal must be from a private portal group.

Initiators configure portals in public portal groups as target portals. When an
initiator logs in to a target through a portal in an associated public portal group,
the target sends a temporary redirection response with a redirect portal. Then the
initiator logs in to the target again through the redirect portal.

Users set a portal group to public or private at creation using the
`iscsi_create_portal_group` RPC, associate portal groups with a target using the
`iscsi_create_target_node` RPC or the `iscsi_target_node_add_pg_ig_maps` RPC,
specify an up-to-date redirect portal in a public portal group for a target using
the `iscsi_target_node_set_redirect` RPC, and terminate the corresponding connections
by an asynchronous logout request using the `iscsi_target_node_request_logout` RPC.

Typically, users will use the login redirection feature in a scale-out iSCSI target
system, which runs multiple SPDK iSCSI target applications.
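As an illustration of the workflow above, a public and a private portal group might be created as
follows. This is only a sketch: the `--private` option name for `iscsi_create_portal_group` is an
assumption and may differ between SPDK versions, so verify it with
`./scripts/rpc.py iscsi_create_portal_group --help`. The redirect portal and the logout request are
then configured with the `iscsi_target_node_set_redirect` and `iscsi_target_node_request_logout`
RPCs named above (see their `--help` output for the exact arguments).

~~~
# Public portal group 1: its portal is returned by discovery sessions
./scripts/rpc.py iscsi_create_portal_group 1 10.0.0.1:3260

# Private portal group 2: not returned by discovery sessions
# (the --private option name is an assumption; verify with --help)
./scripts/rpc.py iscsi_create_portal_group --private 2 10.0.0.1:3261
~~~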