..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2016 Intel Corporation.

Live Migration of VM with Virtio on host running vhost_user
===========================================================

Overview
--------

This section describes the live migration of a VM with a DPDK Virtio PMD on a
host which is running the Vhost sample application (vhost-switch) and using
the DPDK PMD (ixgbe or i40e).

The Vhost sample application uses VMDQ, so SRIOV must be disabled on the NICs.

The following sections show an example of how to do this migration.

Test Setup
----------

To test the Live Migration, two servers with identical operating systems
installed are used. KVM and QEMU are also required on both servers.

QEMU 2.5 is required for Live Migration of a VM with vhost_user running on
the hosts.

In this example, the servers have Niantic and/or Fortville NICs installed.
The NICs on both servers are connected to a switch
which is also connected to the traffic generator.

The switch is configured to broadcast traffic on all the NIC ports.

The IP address of host_server_1 is 10.237.212.46.

The IP address of host_server_2 is 10.237.212.131.

.. _figure_lm_vhost_user:

.. figure:: img/lm_vhost_user.*

Live Migration steps
--------------------

The sample scripts mentioned in the steps below can be found in the
:ref:`Sample host scripts <lm_virtio_vhost_user_host_scripts>` and
:ref:`Sample VM scripts <lm_virtio_vhost_user_vm_scripts>` sections.

On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Set up DPDK on host_server_1.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./setup_dpdk_on_host.sh

On host_server_1: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Bind the Niantic or Fortville NIC to igb_uio on host_server_1.

For the Fortville NIC:

.. code-block:: console

   cd /root/dpdk/usertools
   ./dpdk-devbind.py -b igb_uio 0000:02:00.0

For the Niantic NIC:

.. code-block:: console

   cd /root/dpdk/usertools
   ./dpdk-devbind.py -b igb_uio 0000:09:00.0

On host_server_1: Terminal 3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For both the Fortville and Niantic NICs, reset SRIOV and run the
vhost_user sample application (vhost-switch) on host_server_1.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./reset_vf_on_212_46.sh
   ./run_vhost_switch_on_host.sh

On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Start the VM on host_server_1.

.. code-block:: console

   ./vm_virtio_vhost_user.sh

On host_server_1: Terminal 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Connect to the QEMU monitor on host_server_1.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./connect_to_qemu_mon_on_host.sh
   (qemu)

On host_server_1: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_1:**

Set up DPDK in the VM and run testpmd in the VM.

.. code-block:: console

   cd /root/dpdk/vm_scripts
   ./setup_dpdk_in_vm.sh
   ./run_testpmd_in_vm.sh

   testpmd> show port info all
   testpmd> set fwd mac retry
   testpmd> start tx_first
   testpmd> show port stats all

The ``retry`` option of ``set fwd mac retry`` makes testpmd retry failed
transmissions instead of dropping those packets, which helps avoid losses
while the virtio rings are briefly unavailable during the migration.

Virtio traffic is seen at P1 and P2.

On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Set up DPDK on host_server_2.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./setup_dpdk_on_host.sh

On host_server_2: Terminal 2
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Bind the Niantic or Fortville NIC to igb_uio on host_server_2.

For the Fortville NIC:

.. code-block:: console

   cd /root/dpdk/usertools
   ./dpdk-devbind.py -b igb_uio 0000:03:00.0

For the Niantic NIC:

.. code-block:: console

   cd /root/dpdk/usertools
   ./dpdk-devbind.py -b igb_uio 0000:06:00.0

On host_server_2: Terminal 3
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

For both the Fortville and Niantic NICs, reset SRIOV and run
the vhost_user sample application on host_server_2.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./reset_vf_on_212_131.sh
   ./run_vhost_switch_on_host.sh

On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Start the VM on host_server_2.

.. code-block:: console

   ./vm_virtio_vhost_user_migrate.sh

On host_server_2: Terminal 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Connect to the QEMU monitor on host_server_2.

.. code-block:: console

   cd /root/dpdk/host_scripts
   ./connect_to_qemu_mon_on_host.sh
   (qemu) info status
   VM status: paused (inmigrate)
   (qemu)

On host_server_1: Terminal 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Check that the switch is up before migrating the VM.

.. code-block:: console

   (qemu) migrate tcp:10.237.212.131:5555
   (qemu) info status
   VM status: paused (postmigrate)

   (qemu) info migrate
   capabilities: xbzrle: off rdma-pin-all: off auto-converge: off zero-blocks: off
   Migration status: completed
   total time: 11619 milliseconds
   downtime: 5 milliseconds
   setup: 7 milliseconds
   transferred ram: 379699 kbytes
   throughput: 267.82 mbps
   remaining ram: 0 kbytes
   total ram: 1590088 kbytes
   duplicate: 303985 pages
   skipped: 0 pages
   normal: 94073 pages
   normal bytes: 376292 kbytes
   dirty sync count: 2
   (qemu) quit

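The reported throughput can be cross-checked from the other counters:
``transferred ram`` divided by ``total time`` (reading kbytes as KiB and mbps
as 10\ :sup:`6` bit/s). A quick check with awk:

.. code-block:: sh

   # 379699 KiB transferred in 11.619 s:
   # 379699 * 1024 * 8 bits / 11.619 s / 1e6 ~= 267.7 Mbit/s
   awk 'BEGIN { printf "%.1f mbps\n", 379699 * 1024 * 8 / 11.619 / 1e6 }'

This comes out close to the 267.82 mbps that QEMU reports; QEMU samples over
a slightly different window, so an exact match is not expected.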
On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_2:**

Press the Enter key. This brings up the testpmd prompt.

.. code-block:: console

   testpmd>

On host_server_2: Terminal 4
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In QEMU monitor on host_server_2:**

.. code-block:: console

   (qemu) info status
   VM status: running

On host_server_2: Terminal 1
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

**In VM on host_server_2:**

.. code-block:: console

   testpmd> show port info all
   testpmd> show port stats all

Virtio traffic is seen at P0 and P1.


.. _lm_virtio_vhost_user_host_scripts:

Sample host scripts
-------------------

reset_vf_on_212_46.sh
~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # This script is run on the host 10.237.212.46 to reset SRIOV

   # BDF for Fortville NIC is 0000:02:00.0
   cat /sys/bus/pci/devices/0000\:02\:00.0/max_vfs
   echo 0 > /sys/bus/pci/devices/0000\:02\:00.0/max_vfs
   cat /sys/bus/pci/devices/0000\:02\:00.0/max_vfs

   # BDF for Niantic NIC is 0000:09:00.0
   cat /sys/bus/pci/devices/0000\:09\:00.0/max_vfs
   echo 0 > /sys/bus/pci/devices/0000\:09\:00.0/max_vfs
   cat /sys/bus/pci/devices/0000\:09\:00.0/max_vfs

vm_virtio_vhost_user.sh
~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # Script for use with the vhost_user sample application.
   # The host system has 8 CPUs (0-7).

   # Path to KVM tool
   KVM_PATH="/usr/bin/qemu-system-x86_64"

   # Guest disk image
   DISK_IMG="/home/user/disk_image/virt1_sml.disk"

   # Number of guest CPUs
   VCPUS_NR="6"

   # Memory
   MEM=1024

   VIRTIO_OPTIONS="csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off"

   # Socket path
   SOCKET_PATH="/root/dpdk/host_scripts/usvhost"

   taskset -c 2-7 $KVM_PATH \
    -enable-kvm \
    -m $MEM \
    -smp $VCPUS_NR \
    -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem,nodeid=0 \
    -cpu host \
    -name VM1 \
    -no-reboot \
    -net none \
    -vnc none \
    -nographic \
    -hda $DISK_IMG \
    -chardev socket,id=chr0,path=$SOCKET_PATH \
    -netdev type=vhost-user,id=net1,chardev=chr0,vhostforce \
    -device virtio-net-pci,netdev=net1,mac=CC:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
    -chardev socket,id=chr1,path=$SOCKET_PATH \
    -netdev type=vhost-user,id=net2,chardev=chr1,vhostforce \
    -device virtio-net-pci,netdev=net2,mac=DD:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
    -monitor telnet::3333,server,nowait

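vhost-user backends access guest memory through the shared
``memory-backend-file``, which is why ``share=on`` is set and why the backend
``size`` (1024M) matches the ``-m`` value. A minimal sketch of a sanity check
for this invariant (the variable names mirror the script above):

.. code-block:: sh

   #!/bin/sh
   # The shared hugepage backend must cover the whole guest RAM (-m),
   # otherwise the vhost-user backend cannot map all guest memory.
   MEM=1024                # -m value, in MiB
   BACKEND_SIZE="1024M"    # size= of the memory-backend-file
   if [ "${BACKEND_SIZE%M}" -eq "$MEM" ]; then
       echo "backend size matches -m"
   else
       echo "mismatch: -m ${MEM}M vs size=${BACKEND_SIZE}" >&2
       exit 1
   fi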
connect_to_qemu_mon_on_host.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # This script is run on both hosts when the VM is up,
   # to connect to the QEMU monitor.

   telnet 0 3333

reset_vf_on_212_131.sh
~~~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # This script is run on the host 10.237.212.131 to reset SRIOV

   # BDF for Niantic NIC is 0000:06:00.0
   cat /sys/bus/pci/devices/0000\:06\:00.0/max_vfs
   echo 0 > /sys/bus/pci/devices/0000\:06\:00.0/max_vfs
   cat /sys/bus/pci/devices/0000\:06\:00.0/max_vfs

   # BDF for Fortville NIC is 0000:03:00.0
   cat /sys/bus/pci/devices/0000\:03\:00.0/max_vfs
   echo 0 > /sys/bus/pci/devices/0000\:03\:00.0/max_vfs
   cat /sys/bus/pci/devices/0000\:03\:00.0/max_vfs

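The ``max_vfs`` sysfs node used above is specific to older ixgbe/i40e
drivers; on newer kernels the generic ``sriov_numvfs`` node serves the same
purpose. A hedged sketch of a fallback that tries whichever node exists (the
BDF is an example and must be adjusted to your NIC):

.. code-block:: sh

   #!/bin/sh
   # Reset VFs via whichever SR-IOV control node the kernel exposes.
   # 0000:03:00.0 is an example BDF; adjust for your system.
   DEV="/sys/bus/pci/devices/0000:03:00.0"
   if [ -w "$DEV/sriov_numvfs" ]; then
       echo 0 > "$DEV/sriov_numvfs"
       echo "VFs reset via sriov_numvfs"
   elif [ -w "$DEV/max_vfs" ]; then
       echo 0 > "$DEV/max_vfs"
       echo "VFs reset via max_vfs"
   else
       echo "no SR-IOV control node for $DEV (device absent or SRIOV unsupported)"
   fi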
vm_virtio_vhost_user_migrate.sh
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This script differs from ``vm_virtio_vhost_user.sh`` only in the
``-incoming tcp:0:5555`` option, which starts QEMU paused, waiting for the
incoming migration stream on TCP port 5555.

.. code-block:: sh

   #!/bin/sh
   # Script for use with the vhost_user sample application.
   # The host system has 8 CPUs (0-7).

   # Path to KVM tool
   KVM_PATH="/usr/bin/qemu-system-x86_64"

   # Guest disk image
   DISK_IMG="/home/user/disk_image/virt1_sml.disk"

   # Number of guest CPUs
   VCPUS_NR="6"

   # Memory
   MEM=1024

   VIRTIO_OPTIONS="csum=off,gso=off,guest_tso4=off,guest_tso6=off,guest_ecn=off"

   # Socket path
   SOCKET_PATH="/root/dpdk/host_scripts/usvhost"

   taskset -c 2-7 $KVM_PATH \
    -enable-kvm \
    -m $MEM \
    -smp $VCPUS_NR \
    -object memory-backend-file,id=mem,size=1024M,mem-path=/mnt/huge,share=on \
    -numa node,memdev=mem,nodeid=0 \
    -cpu host \
    -name VM1 \
    -no-reboot \
    -net none \
    -vnc none \
    -nographic \
    -hda $DISK_IMG \
    -chardev socket,id=chr0,path=$SOCKET_PATH \
    -netdev type=vhost-user,id=net1,chardev=chr0,vhostforce \
    -device virtio-net-pci,netdev=net1,mac=CC:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
    -chardev socket,id=chr1,path=$SOCKET_PATH \
    -netdev type=vhost-user,id=net2,chardev=chr1,vhostforce \
    -device virtio-net-pci,netdev=net2,mac=DD:BB:BB:BB:BB:BB,$VIRTIO_OPTIONS \
    -incoming tcp:0:5555 \
    -monitor telnet::3333,server,nowait

.. _lm_virtio_vhost_user_vm_scripts:

Sample VM scripts
-----------------

setup_dpdk_in_vm.sh
~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # This script matches the vm_virtio_vhost_user script.
   # The virtio devices appear at PCI slots 03 and 04.

   /root/dpdk/usertools/dpdk-hugepages.py --show
   /root/dpdk/usertools/dpdk-hugepages.py --setup 2G
   /root/dpdk/usertools/dpdk-hugepages.py --show

   ifconfig -a
   /root/dpdk/usertools/dpdk-devbind.py --status

   rmmod virtio-pci

   modprobe uio
   insmod igb_uio.ko

   /root/dpdk/usertools/dpdk-devbind.py -b igb_uio 0000:00:03.0
   /root/dpdk/usertools/dpdk-devbind.py -b igb_uio 0000:00:04.0

   /root/dpdk/usertools/dpdk-devbind.py --status

run_testpmd_in_vm.sh
~~~~~~~~~~~~~~~~~~~~

.. code-block:: sh

   #!/bin/sh
   # Run testpmd for use with the vhost_user sample app.
   # The test system has 8 CPUs (0-7); CPUs 2-7 are used for the VM.

   /root/dpdk/<build_dir>/app/dpdk-testpmd \
   -l 0-5 -n 4 --socket-mem 350 -- --burst=64 -i
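
As a sanity check tying this script to the VM launch scripts above: the VM is
started with ``VCPUS_NR=6``, so the EAL core list passed via ``-l`` must stay
within vCPUs 0-5. A hypothetical sketch of that check:

.. code-block:: sh

   #!/bin/sh
   # The VM was started with VCPUS_NR=6, i.e. vCPUs 0-5 are available.
   VCPUS=6
   CORELIST="0-5"          # the -l argument used above
   LAST=${CORELIST#*-}     # highest core in the range
   if [ "$LAST" -lt "$VCPUS" ]; then
       echo "core list $CORELIST fits in $VCPUS vCPUs"
   else
       echo "core list $CORELIST exceeds $VCPUS vCPUs" >&2
       exit 1
   fi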
442