..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2016 Intel Corporation.

Live Migration of VM with SR-IOV VF
===================================

Overview
--------

It is not possible to migrate a Virtual Machine which has an SR-IOV Virtual Function (VF),
because the hypervisor cannot save and restore the state of a passed-through device.

To get around this problem, the bonding PMD is used.

The following sections show an example of how to do this.

Test Setup
----------

A bonding device is created in the VM.
The virtio and VF PMDs are added as members to the bonding device.
The VF is set as the primary member of the bonding device.
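
In testpmd terms this amounts to the following commands, shown here only as a
summary sketch; the full sequence, with its expected output, is walked through
in the Live Migration steps below (virtio is port 0, the VF is port 1, and the
bonding device is port 2):

.. code-block:: console

   testpmd> create bonding device 1 0
   testpmd> add bonding member 0 2
   testpmd> add bonding member 1 2
   testpmd> set bonding primary 1 2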

A bridge must be set up on the host connecting the tap device, which is the
backend of the virtio device, and the Physical Function (PF) device.
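
A minimal sketch of that bridge setup, assuming the bridge ``virbr0`` already
exists and using the interface names from the sample scripts later in this
guide:

.. code-block:: console

   brctl addif virbr0 ens3f0   # the PF
   brctl addif virbr0 tap1     # the tap backing the virtio device
   ifconfig virbr0 up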

To test the Live Migration, two servers with identical operating systems installed are used.
KVM and Qemu 2.3 are also required on the servers.

In this example, the servers have Niantic and/or Fortville NICs installed.
The NICs on both servers are connected to a switch
which is also connected to the traffic generator.

The switch is configured to broadcast traffic on all the NIC ports.
A :ref:`Sample switch configuration <lm_bond_virtio_sriov_switch_conf>`
can be found in a later section.

The host is running the kernel PF driver (ixgbe or i40e).
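
To confirm which kernel driver the PF is bound to, ``ethtool`` can be used
(the interface name here is an example taken from the sample bridge scripts):

.. code-block:: console

   ethtool -i ens3f0 | grep ^driver
   driver: ixgbe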

The IP address of host_server_1 is 10.237.212.46.

The IP address of host_server_2 is 10.237.212.131.

.. _figure_lm_bond_virtio_sriov:

.. figure:: img/lm_bond_virtio_sriov.*

Live Migration steps
--------------------

The sample scripts mentioned in the steps below can be found in the
:ref:`Sample host scripts <lm_bond_virtio_sriov_host_scripts>` and
:ref:`Sample VM scripts <lm_bond_virtio_sriov_vm_scripts>` sections.

On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./setup_vf_on_212_46.sh

For the Fortville NIC:

.. code-block:: console

   ./vm_virtio_vf_i40e_212_46.sh

For the Niantic NIC:

.. code-block:: console

   ./vm_virtio_vf_one_212_46.sh

On host_server_1: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./setup_bridge_on_212_46.sh
   ./connect_to_qemu_mon_on_host.sh
   (qemu)

On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_1:**

.. code-block:: console

   cd /root/dpdk/vm_scripts
   ./setup_dpdk_in_vm.sh
   ./run_testpmd_bonding_in_vm.sh

   testpmd> show port info all

The ``mac_addr`` command only works with the kernel PF for Niantic.

.. code-block:: console

   testpmd> mac_addr add port 1 vf 0 AA:BB:CC:DD:EE:FF

The syntax of the ``testpmd`` command is:

create bonding device (mode) (socket)

Mode 1 is active backup.

Virtio is port 0 (P0).

VF is port 1 (P1).

Bonding is port 2 (P2).

.. code-block:: console

   testpmd> create bonding device 1 0
   Created new bonding device net_bond_testpmd_0 on (port 2).
   testpmd> add bonding member 0 2
   testpmd> add bonding member 1 2
   testpmd> show bonding config 2

The syntax of the ``testpmd`` command is:

set bonding primary (member id) (port id)

Set the primary to P1 before starting the bonding port.

.. code-block:: console

   testpmd> set bonding primary 1 2
   testpmd> show bonding config 2
   testpmd> port start 2
   Port 2: 02:09:C0:68:99:A5
   Checking link statuses...
   Port 0 Link Up - speed 10000 Mbps - full-duplex
   Port 1 Link Up - speed 10000 Mbps - full-duplex
   Port 2 Link Up - speed 10000 Mbps - full-duplex

   testpmd> show bonding config 2

Primary is now P1. There are 2 active members.

Use P2 only for forwarding.

.. code-block:: console

   testpmd> set portlist 2
   testpmd> show config fwd
   testpmd> set fwd mac
   testpmd> start
   testpmd> show bonding config 2

Primary is now P1. There are 2 active members.

.. code-block:: console

   testpmd> show port stats all

VF traffic is seen at P1 and P2.

.. code-block:: console

   testpmd> clear port stats all
   testpmd> set bonding primary 0 2
   testpmd> remove bonding member 1 2
   testpmd> show bonding config 2

Primary is now P0. There is 1 active member.

.. code-block:: console

   testpmd> clear port stats all
   testpmd> show port stats all

No VF traffic is seen at P0 and P2; the VF MAC address is still present.

.. code-block:: console

   testpmd> port stop 1
   testpmd> port close 1

Port close should remove the VF MAC address; it does not remove the perm_addr.

The ``mac_addr`` command only works with the kernel PF for Niantic.

.. code-block:: console

   testpmd> mac_addr remove 1 AA:BB:CC:DD:EE:FF
   testpmd> port detach 1
   Port '0000:00:04.0' is detached. Now total ports is 2
   testpmd> show port stats all

No VF traffic is seen at P0 and P2.

On host_server_1: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: console

   (qemu) device_del vf1

On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_1:**

.. code-block:: console

   testpmd> show bonding config 2

Primary is now P0. There is 1 active member.

.. code-block:: console

   testpmd> show port info all
   testpmd> show port stats all

On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./setup_vf_on_212_131.sh
   ./vm_virtio_one_migrate.sh

On host_server_2: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: console

   ./setup_bridge_on_212_131.sh
   ./connect_to_qemu_mon_on_host.sh
   (qemu) info status
   VM status: paused (inmigrate)
   (qemu)

On host_server_1: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Check that the switch is up before migrating.
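
The switch port state can be checked from the switch console with the same
command used in the :ref:`Sample switch configuration <lm_bond_virtio_sriov_switch_conf>`:

.. code-block:: console

   show port 1,5,9,13,17,21,25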

.. code-block:: console

   (qemu) migrate tcp:10.237.212.131:5555
   (qemu) info status
   VM status: paused (postmigrate)

For the Niantic NIC:

.. code-block:: console

   (qemu) info migrate
   capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off
   Migration status: completed
   total time: 11834 milliseconds
   downtime: 18 milliseconds
   setup: 3 milliseconds
   transferred ram: 389137 kbytes
   throughput: 269.49 mbps
   remaining ram: 0 kbytes
   total ram: 1590088 kbytes
   duplicate: 301620 pages
   skipped: 0 pages
   normal: 96433 pages
   normal bytes: 385732 kbytes
   dirty sync count: 2
   (qemu) quit

For the Fortville NIC:

.. code-block:: console

   (qemu) info migrate
   capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off
   Migration status: completed
   total time: 11619 milliseconds
   downtime: 5 milliseconds
   setup: 7 milliseconds
   transferred ram: 379699 kbytes
   throughput: 267.82 mbps
   remaining ram: 0 kbytes
   total ram: 1590088 kbytes
   duplicate: 303985 pages
   skipped: 0 pages
   normal: 94073 pages
   normal bytes: 376292 kbytes
   dirty sync count: 2
   (qemu) quit

On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_2:**

Hit the Enter key. This brings the user to the testpmd prompt.

.. code-block:: console

   testpmd>

On host_server_2: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: console

   (qemu) info status
   VM status: running

For the Niantic NIC:

.. code-block:: console

   (qemu) device_add pci-assign,host=06:10.0,id=vf1

For the Fortville NIC:

.. code-block:: console

   (qemu) device_add pci-assign,host=03:02.0,id=vf1

On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_2:**

.. code-block:: console

   testpmd> show port info all
   testpmd> show port stats all
   testpmd> show bonding config 2
   testpmd> port attach 0000:00:04.0
   Port 1 is attached.
   Now total ports is 3
   Done

   testpmd> port start 1

The ``mac_addr`` command only works with the kernel PF for Niantic.

.. code-block:: console

   testpmd> mac_addr add port 1 vf 0 AA:BB:CC:DD:EE:FF
   testpmd> show port stats all
   testpmd> show config fwd
   testpmd> show bonding config 2
   testpmd> add bonding member 1 2
   testpmd> set bonding primary 1 2
   testpmd> show bonding config 2
   testpmd> show port stats all

VF traffic is seen at P1 (VF) and P2 (Bonded device).

.. code-block:: console

   testpmd> remove bonding member 0 2
   testpmd> show bonding config 2
   testpmd> port stop 0
   testpmd> port close 0
   testpmd> port detach 0
   Port '0000:00:03.0' is detached. Now total ports is 2

   testpmd> show port info all
   testpmd> show config fwd
   testpmd> show port stats all

VF traffic is seen at P1 (VF) and P2 (Bonded device).

.. _lm_bond_virtio_sriov_host_scripts:

Sample host scripts
-------------------

setup_vf_on_212_46.sh
~~~~~~~~~~~~~~~~~~~~~

Set up Virtual Functions on host_server_1.

.. code-block:: sh

   #!/bin/sh
   # This script is run on the host 10.237.212.46 to set up the VF

   # set up the Niantic VF: writing 1 to sriov_numvfs creates one VF on the PF
   cat /sys/bus/pci/devices/0000\:09\:00.0/sriov_numvfs
   echo 1 > /sys/bus/pci/devices/0000\:09\:00.0/sriov_numvfs
   cat /sys/bus/pci/devices/0000\:09\:00.0/sriov_numvfs
   # unbind the kernel VF driver so the VF can be passed through to the VM
   rmmod ixgbevf

   # set up the Fortville VF
   cat /sys/bus/pci/devices/0000\:02\:00.0/sriov_numvfs
   echo 1 > /sys/bus/pci/devices/0000\:02\:00.0/sriov_numvfs
   cat /sys/bus/pci/devices/0000\:02\:00.0/sriov_numvfs
   rmmod iavf

vm_virtio_vf_one_212_46.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~

Set up the Virtual Machine on host_server_1.

.. code-block:: sh

   #!/bin/sh

   # Path to KVM tool
   KVM_PATH="/usr/bin/qemu-system-x86_64"

   # Guest Disk image
   DISK_IMG="/home/username/disk_image/virt1_sml.disk"

   # Number of guest cpus
   VCPUS_NR="4"

   # Memory
   MEM=1536

   taskset -c 1-5 $KVM_PATH \
    -enable-kvm \
    -m $MEM \
    -smp $VCPUS_NR \
    -cpu host \
    -name VM1 \
    -no-reboot \
    -net none \
    -vnc none -nographic \
    -hda $DISK_IMG \
    -netdev type=tap,id=net1,script=no,downscript=no,ifname=tap1 \
    -device virtio-net-pci,netdev=net1,mac=CC:BB:BB:BB:BB:BB \
    -device pci-assign,host=09:10.0,id=vf1 \
    -monitor telnet::3333,server,nowait

setup_bridge_on_212_46.sh
~~~~~~~~~~~~~~~~~~~~~~~~~

Set up the bridge on host_server_1.

.. code-block:: sh

   #!/bin/sh
   # This script is run on the host 10.237.212.46 to set up the bridge
   # for the tap device and the PF device.
   # This enables traffic to go from the PF to the tap to the virtio PMD in the VM.

   # ens3f0 is the Niantic NIC
   # ens6f0 is the Fortville NIC

   ifconfig ens3f0 down
   ifconfig tap1 down
   ifconfig ens6f0 down
   ifconfig virbr0 down

   brctl show virbr0
   brctl addif virbr0 ens3f0
   brctl addif virbr0 ens6f0
   brctl addif virbr0 tap1
   brctl show virbr0

   ifconfig ens3f0 up
   ifconfig tap1 up
   ifconfig ens6f0 up
   ifconfig virbr0 up

connect_to_qemu_mon_on_host.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # This script is run on both hosts when the VM is up,
   # to connect to the Qemu Monitor.

   telnet 0 3333

setup_vf_on_212_131.sh
~~~~~~~~~~~~~~~~~~~~~~

Set up Virtual Functions on host_server_2.

.. code-block:: sh

   #!/bin/sh
   # This script is run on the host 10.237.212.131 to set up the VF

   # set up the Niantic VF: writing 1 to sriov_numvfs creates one VF on the PF
   cat /sys/bus/pci/devices/0000\:06\:00.0/sriov_numvfs
   echo 1 > /sys/bus/pci/devices/0000\:06\:00.0/sriov_numvfs
   cat /sys/bus/pci/devices/0000\:06\:00.0/sriov_numvfs
   # unbind the kernel VF driver so the VF can be passed through to the VM
   rmmod ixgbevf

   # set up the Fortville VF
   cat /sys/bus/pci/devices/0000\:03\:00.0/sriov_numvfs
   echo 1 > /sys/bus/pci/devices/0000\:03\:00.0/sriov_numvfs
   cat /sys/bus/pci/devices/0000\:03\:00.0/sriov_numvfs
   rmmod iavf

vm_virtio_one_migrate.sh
~~~~~~~~~~~~~~~~~~~~~~~~

Set up the Virtual Machine on host_server_2.

.. code-block:: sh

   #!/bin/sh
   # Start the VM on host_server_2 with the same parameters as the VM on
   # host_server_1, except without the VF parameters, in migration-listen mode
   # (-incoming tcp:0:5555)

   # Path to KVM tool
   KVM_PATH="/usr/bin/qemu-system-x86_64"

   # Guest Disk image
   DISK_IMG="/home/username/disk_image/virt1_sml.disk"

   # Number of guest cpus
   VCPUS_NR="4"

   # Memory
   MEM=1536

   taskset -c 1-5 $KVM_PATH \
    -enable-kvm \
    -m $MEM \
    -smp $VCPUS_NR \
    -cpu host \
    -name VM1 \
    -no-reboot \
    -net none \
    -vnc none -nographic \
    -hda $DISK_IMG \
    -netdev type=tap,id=net1,script=no,downscript=no,ifname=tap1 \
    -device virtio-net-pci,netdev=net1,mac=CC:BB:BB:BB:BB:BB \
    -incoming tcp:0:5555 \
    -monitor telnet::3333,server,nowait

setup_bridge_on_212_131.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~

Set up the bridge on host_server_2.

.. code-block:: sh

   #!/bin/sh
   # This script is run on the host 10.237.212.131 to set up the bridge
   # for the tap device and the PF device.
   # This enables traffic to go from the PF to the tap to the virtio PMD in the VM.

   # ens4f0 is the Niantic NIC
   # ens5f0 is the Fortville NIC

   ifconfig ens4f0 down
   ifconfig tap1 down
   ifconfig ens5f0 down
   ifconfig virbr0 down

   brctl show virbr0
   brctl addif virbr0 ens4f0
   brctl addif virbr0 ens5f0
   brctl addif virbr0 tap1
   brctl show virbr0

   ifconfig ens4f0 up
   ifconfig tap1 up
   ifconfig ens5f0 up
   ifconfig virbr0 up

.. _lm_bond_virtio_sriov_vm_scripts:

Sample VM scripts
-----------------

setup_dpdk_in_vm.sh
~~~~~~~~~~~~~~~~~~~

Set up DPDK in the Virtual Machine.

.. code-block:: sh

   #!/bin/sh
   # This script matches the vm_virtio_vf_one script.
   # The virtio port is 03, the VF port is 04.

   # reserve hugepages for DPDK
   /root/dpdk/usertools/dpdk-hugepages.py --show
   /root/dpdk/usertools/dpdk-hugepages.py --setup 2G
   /root/dpdk/usertools/dpdk-hugepages.py --show

   ifconfig -a
   /root/dpdk/usertools/dpdk-devbind.py --status

   # unbind the kernel drivers from the virtio and VF ports
   rmmod virtio-pci ixgbevf

   modprobe uio
   insmod igb_uio.ko

   # bind both ports to igb_uio so DPDK can use them
   /root/dpdk/usertools/dpdk-devbind.py -b igb_uio 0000:00:03.0
   /root/dpdk/usertools/dpdk-devbind.py -b igb_uio 0000:00:04.0

   /root/dpdk/usertools/dpdk-devbind.py --status

run_testpmd_bonding_in_vm.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Run testpmd in the Virtual Machine.

.. code-block:: sh

   #!/bin/sh
   # Run testpmd in the VM.

   # The test system has 8 cpus (0-7), use cpus 2-7 for the VM.
   # Use taskset -pc <core number> <thread_id>

   # Use for bonding of virtio and VF tests in the VM.

   /root/dpdk/<build_dir>/app/dpdk-testpmd \
   -l 0-3 -n 4 --socket-mem 350 -- -i --port-topology=chained
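
The core pinning mentioned in the comments above is done from the host; a
minimal sketch, assuming the QEMU thread ids are read from the process list:

.. code-block:: console

   ps -eLf | grep qemu-system
   taskset -pc 2 <thread_id>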

.. _lm_bond_virtio_sriov_switch_conf:

Sample switch configuration
---------------------------

The Intel switch is used to connect the traffic generator to the
NICs on host_server_1 and host_server_2.

In order to run the switch configuration, two console windows are required.

Log in as root in both windows.

``TestPointShared``, ``run_switch.sh`` and ``load /root/switch_config`` must be
executed in the sequence below.

On Switch: Terminal 1
~~~~~~~~~~~~~~~~~~~~~

Run ``TestPointShared``.

.. code-block:: console

   /usr/bin/TestPointShared

On Switch: Terminal 2
~~~~~~~~~~~~~~~~~~~~~

Execute ``run_switch.sh``.

.. code-block:: console

   /root/run_switch.sh

On Switch: Terminal 1
~~~~~~~~~~~~~~~~~~~~~

Load the switch configuration.

.. code-block:: console

   load /root/switch_config

Sample switch configuration script
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The ``/root/switch_config`` script:

.. code-block:: sh

   # TestPoint History
   show port 1,5,9,13,17,21,25
   set port 1,5,9,13,17,21,25 up
   show port 1,5,9,13,17,21,25
   del acl 1
   create acl 1
   create acl-port-set
   create acl-port-set
   add port port-set 1 0
   add port port-set 5,9,13,17,21,25 1
   create acl-rule 1 1
   add acl-rule condition 1 1 port-set 1
   add acl-rule action 1 1 redirect 1
   apply acl
   create vlan 1000
   add vlan port 1000 1,5,9,13,17,21,25
   set vlan tagging 1000 1,5,9,13,17,21,25 tag
   set switch config flood_ucast fwd
   show port stats all 1,5,9,13,17,21,25