..  BSD LICENSE

    Copyright(c) 2016 Intel Corporation. All rights reserved.
    All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
    * Neither the name of Intel Corporation nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.


Live Migration of VM with Virtio on host running vhost_user
============================================================

Overview
--------

This section describes the live migration of a VM with the DPDK Virtio PMD
on a host which is running the Vhost sample application (vhost-switch)
and using the DPDK PMD (ixgbe or i40e).

The Vhost sample application uses VMDQ, so SRIOV must be disabled on the NICs.

The following sections show an example of how to do this migration.

Test Setup
----------

To test the live migration, two servers with identical operating systems are used.
KVM and QEMU are also required on both servers.

QEMU 2.5 is required for live migration of a VM with vhost_user running on the hosts.

In this example, the servers have Niantic and/or Fortville NICs installed.
The NICs on both servers are connected to a switch,
which is also connected to the traffic generator.

The switch is configured to broadcast traffic on all the NIC ports.

The IP address of host_server_1 is 10.237.212.46.

The IP address of host_server_2 is 10.237.212.131.

.. _figure_lm_vhost_user:

.. figure:: img/lm_vhost_user.*

Live Migration steps
--------------------

The sample scripts mentioned in the steps below can be found in the
:ref:`Sample host scripts <lm_virtio_vhost_user_host_scripts>` and
:ref:`Sample VM scripts <lm_virtio_vhost_user_vm_scripts>` sections.

On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Set up DPDK on host_server_1.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./setup_dpdk_on_host.sh

On host_server_1: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Bind the Niantic or Fortville NIC to igb_uio on host_server_1.

For the Fortville NIC:

.. code-block:: console

   cd /root/dpdk/usertools
   ./dpdk-devbind.py -b igb_uio 0000:02:00.0

For the Niantic NIC:

.. code-block:: console

   cd /root/dpdk/usertools
   ./dpdk-devbind.py -b igb_uio 0000:09:00.0
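The binding can be verified before moving on. ``dpdk-devbind.py --status`` is the
same status query used later in the sample VM scripts and lists which devices are
bound to igb_uio. This is an optional check; the same command applies on
host_server_2 later.

.. code-block:: console

   ./dpdk-devbind.py --status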
On host_server_1: Terminal 3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For the Fortville and Niantic NICs, reset SRIOV and run the
vhost_user sample application (vhost-switch) on host_server_1.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./reset_vf_on_212_46.sh
   ./run_vhost_switch_on_host.sh

On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Start the VM on host_server_1.

.. code-block:: console

   ./vm_virtio_vhost_user.sh

On host_server_1: Terminal 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Connect to the QEMU monitor on host_server_1.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./connect_to_qemu_mon_on_host.sh
   (qemu)

On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_1:**

Set up DPDK in the VM and run testpmd in the VM.

.. code-block:: console

   cd /root/dpdk/vm_scripts
   ./setup_dpdk_in_vm.sh
   ./run_testpmd_in_vm.sh

   testpmd> show port info all
   testpmd> set fwd mac retry
   testpmd> start tx_first
   testpmd> show port stats all

Virtio traffic is seen at P1 and P2.

On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Set up DPDK on host_server_2.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./setup_dpdk_on_host.sh

On host_server_2: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Bind the Niantic or Fortville NIC to igb_uio on host_server_2.

For the Fortville NIC:

.. code-block:: console

   cd /root/dpdk/usertools
   ./dpdk-devbind.py -b igb_uio 0000:03:00.0

For the Niantic NIC:

.. code-block:: console

   cd /root/dpdk/usertools
   ./dpdk-devbind.py -b igb_uio 0000:06:00.0

On host_server_2: Terminal 3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For the Fortville and Niantic NICs, reset SRIOV and run
the vhost_user sample application (vhost-switch) on host_server_2.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./reset_vf_on_212_131.sh
   ./run_vhost_switch_on_host.sh

On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Start the VM on host_server_2. The launch script is the same as the one used on
host_server_1, except that QEMU is started with ``-incoming tcp:0:5555`` so that
it waits for the migration stream.

.. code-block:: console

   ./vm_virtio_vhost_user_migrate.sh

On host_server_2: Terminal 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Connect to the QEMU monitor on host_server_2.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./connect_to_qemu_mon_on_host.sh
   (qemu) info status
   VM status: paused (inmigrate)
   (qemu)
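The ``inmigrate`` state confirms that this QEMU instance is paused and waiting for
an incoming migration on TCP port 5555. Optionally, the listening socket can also
be checked from a shell on host_server_2; a minimal sketch, assuming the ``ss``
utility from iproute2 is installed:

.. code-block:: console

   ss -tln | grep 5555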
On host_server_1: Terminal 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Check that the vhost-switch is up before migrating the VM.

.. code-block:: console

   (qemu) migrate tcp:10.237.212.131:5555
   (qemu) info status
   VM status: paused (postmigrate)

   (qemu) info migrate
   capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off
   Migration status: completed
   total time: 11619 milliseconds
   downtime: 5 milliseconds
   setup: 7 milliseconds
   transferred ram: 379699 kbytes
   throughput: 267.82 mbps
   remaining ram: 0 kbytes
   total ram: 1590088 kbytes
   duplicate: 303985 pages
   skipped: 0 pages
   normal: 94073 pages
   normal bytes: 376292 kbytes
   dirty sync count: 2
   (qemu) quit

On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_2:**

Hit the Enter key. This brings up the prompt of the testpmd session that was
started before the migration and has moved with the VM.

.. code-block:: console

   testpmd>

On host_server_2: Terminal 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In QEMU monitor on host_server_2:**

.. code-block:: console

   (qemu) info status
   VM status: running

On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_2:**

.. code-block:: console

   testpmd> show port info all
   testpmd> show port stats all

Virtio traffic is seen at P0 and P1.


.. _lm_virtio_vhost_user_host_scripts:

Sample host scripts
-------------------

reset_vf_on_212_46.sh
~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # This script is run on the host 10.237.212.46 to reset SRIOV

   # BDF for Fortville NIC is 0000:02:00.0
   cat /sys/bus/pci/devices/0000\:02\:00.0/max_vfs
   echo 0 > /sys/bus/pci/devices/0000\:02\:00.0/max_vfs
   cat /sys/bus/pci/devices/0000\:02\:00.0/max_vfs

   # BDF for Niantic NIC is 0000:09:00.0
   cat /sys/bus/pci/devices/0000\:09\:00.0/max_vfs
   echo 0 > /sys/bus/pci/devices/0000\:09\:00.0/max_vfs
   cat /sys/bus/pci/devices/0000\:09\:00.0/max_vfs

vm_virtio_vhost_user.sh
~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # Script for use with the vhost_user sample application
   # The host system has 8 CPUs (0-7)

   # Path to KVM tool
   KVM_PATH="/usr/bin/qemu-system-x86_64"

   # Guest Disk image
   DISK_IMG="/home/user/disk_image/virt1_sml.disk"

   # Number of guest cpus
   VCPUS_NR="6"

   # Memory
   MEM=1024

   VIRTIO_OPTIONS="csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off"

   # Socket Path
   SOCKET_PATH="/root/dpdk/host_scripts/usvhost"

   taskset -c 2-7 $KVM_PATH \
    -enable-kvm \
    -m $MEM \
    -smp $VCPUS_NR \
    -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem,nodeid=0 \
    -cpu host \
    -name VM1 \
    -no-reboot \
    -net none \
    -vnc none \
    -nographic \
    -hda $DISK_IMG \
    -chardev socket,id=chr0,path=$SOCKET_PATH \
    -netdev type=vhost-user,id=net1,chardev=chr0,vhostforce \
    -device virtio-net-pci,netdev=net1,mac=CC:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
    -chardev socket,id=chr1,path=$SOCKET_PATH \
    -netdev type=vhost-user,id=net2,chardev=chr1,vhostforce \
    -device virtio-net-pci,netdev=net2,mac=DD:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
    -monitor telnet::3333,server,nowait
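Both this script and vm_virtio_vhost_user_migrate.sh below back the guest memory
with a file on ``/mnt/huge`` (``-object memory-backend-file,...,mem-path=/mnt/huge,share=on``),
so a hugetlbfs mount must already exist at that path when the VM is started. A
quick sanity check, shown here as a sketch (the mount point is the one assumed by
the scripts in this guide):

.. code-block:: sh

   # Verify that hugepages are mounted and available on the host
   mount | grep hugetlbfs
   grep Huge /proc/meminfo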
connect_to_qemu_mon_on_host.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # This script is run on both hosts when the VM is up,
   # to connect to the QEMU Monitor.

   telnet 0 3333

reset_vf_on_212_131.sh
~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # This script is run on the host 10.237.212.131 to reset SRIOV

   # BDF for Niantic NIC is 0000:06:00.0
   cat /sys/bus/pci/devices/0000\:06\:00.0/max_vfs
   echo 0 > /sys/bus/pci/devices/0000\:06\:00.0/max_vfs
   cat /sys/bus/pci/devices/0000\:06\:00.0/max_vfs

   # BDF for Fortville NIC is 0000:03:00.0
   cat /sys/bus/pci/devices/0000\:03\:00.0/max_vfs
   echo 0 > /sys/bus/pci/devices/0000\:03\:00.0/max_vfs
   cat /sys/bus/pci/devices/0000\:03\:00.0/max_vfs

vm_virtio_vhost_user_migrate.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # Script for use with the vhost_user sample application
   # The host system has 8 CPUs (0-7)

   # Path to KVM tool
   KVM_PATH="/usr/bin/qemu-system-x86_64"

   # Guest Disk image
   DISK_IMG="/home/user/disk_image/virt1_sml.disk"

   # Number of guest cpus
   VCPUS_NR="6"

   # Memory
   MEM=1024

   VIRTIO_OPTIONS="csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off"

   # Socket Path
   SOCKET_PATH="/root/dpdk/host_scripts/usvhost"

   taskset -c 2-7 $KVM_PATH \
    -enable-kvm \
    -m $MEM \
    -smp $VCPUS_NR \
    -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem,nodeid=0 \
    -cpu host \
    -name VM1 \
    -no-reboot \
    -net none \
    -vnc none \
    -nographic \
    -hda $DISK_IMG \
    -chardev socket,id=chr0,path=$SOCKET_PATH \
    -netdev type=vhost-user,id=net1,chardev=chr0,vhostforce \
    -device virtio-net-pci,netdev=net1,mac=CC:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
    -chardev socket,id=chr1,path=$SOCKET_PATH \
    -netdev type=vhost-user,id=net2,chardev=chr1,vhostforce \
    -device virtio-net-pci,netdev=net2,mac=DD:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
    -incoming tcp:0:5555 \
    -monitor telnet::3333,server,nowait

.. _lm_virtio_vhost_user_vm_scripts:

Sample VM scripts
-----------------

setup_dpdk_virtio_in_vm.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # This script matches the vm_virtio_vhost_user script
   # virtio port is 03
   # virtio port is 04

   cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
   echo 1024 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
   cat /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages

   ifconfig -a
   /root/dpdk/usertools/dpdk-devbind.py --status

   rmmod virtio-pci

   modprobe uio
   insmod /root/dpdk/x86_64-default-linuxapp-gcc/kmod/igb_uio.ko

   /root/dpdk/usertools/dpdk-devbind.py -b igb_uio 0000:00:03.0
   /root/dpdk/usertools/dpdk-devbind.py -b igb_uio 0000:00:04.0

   /root/dpdk/usertools/dpdk-devbind.py --status

run_testpmd_in_vm.sh
~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # Run testpmd for use with the vhost_user sample app.
   # The test system has 8 CPUs (0-7); CPUs 2-7 are used for the VM.

   /root/dpdk/x86_64-default-linuxapp-gcc/app/testpmd \
   -c 3f -n 4 --socket-mem 350 -- --burst=64 --i --disable-hw-vlan-filter
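Once testing is complete, forwarding can be stopped and the application exited
cleanly from inside the VM using the standard testpmd commands, shown here for
completeness; they are not part of the migration itself.

.. code-block:: console

   testpmd> stop
   testpmd> quit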