..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2010-2014 Intel Corporation.

Intel Virtual Function Driver
=============================

Supported Intel® Ethernet Controllers (see the *DPDK Release Notes* for details)
support the following modes of operation in a virtualized environment:

*   **SR-IOV mode**: Involves direct assignment of part of the port resources to different guest operating systems
    using the PCI-SIG Single Root I/O Virtualization (SR-IOV) standard,
    also known as "native mode" or "pass-through" mode.
    In this chapter, this mode is referred to as IOV mode.

*   **VMDq mode**: Involves central management of the networking resources by an IO Virtual Machine (IOVM) or
    a Virtual Machine Monitor (VMM), also known as software switch acceleration mode.
    In this chapter, this mode is referred to as the Next Generation VMDq mode.

SR-IOV Mode Utilization in a DPDK Environment
---------------------------------------------

The DPDK uses the SR-IOV feature for hardware-based I/O sharing in IOV mode.
Therefore, it is possible to logically partition the NIC resources of an SR-IOV capable Ethernet controller and
expose them to a virtual machine as a separate PCI function called a "Virtual Function".
Refer to :numref:`figure_single_port_nic`.

Therefore, a NIC is logically distributed among multiple virtual machines (as shown in :numref:`figure_single_port_nic`),
while still having global data in common to share with the Physical Function and other Virtual Functions.
The DPDK fm10kvf, i40evf, igbvf and ixgbevf Poll Mode Drivers (PMDs) serve the virtual PCI functions of the
Intel® 82576 Gigabit Ethernet Controller, the Intel® Ethernet Controller I350 family,
the Intel® 82599 10 Gigabit Ethernet Controller, the Intel® Fortville 10/40 Gigabit Ethernet Controller,
and the PCIe host-interface of the Intel Ethernet Switch FM10000 Series.
Meanwhile, the DPDK Poll Mode Driver (PMD) also supports the "Physical Function" of such NICs on the host.

The DPDK PF/VF Poll Mode Driver (PMD) supports the Layer 2 switch on the Intel® 82576 Gigabit Ethernet Controller,
Intel® Ethernet Controller I350 family, Intel® 82599 10 Gigabit Ethernet Controller,
and Intel® Fortville 10/40 Gigabit Ethernet Controller NICs, so that guests can use it for inter-virtual machine traffic in SR-IOV mode.

For more detail on SR-IOV, please refer to the following documents:

*   `SR-IOV provides hardware based I/O sharing <http://www.intel.com/network/connectivity/solutions/vmdc.htm>`_

*   `PCI-SIG-Single Root I/O Virtualization Support on IA
    <http://www.intel.com/content/www/us/en/pci-express/pci-sig-single-root-io-virtualization-support-in-virtualization-technology-for-connectivity-paper.html>`_

*   `Scalable I/O Virtualized Servers <http://www.intel.com/content/www/us/en/virtualization/server-virtualization/scalable-i-o-virtualized-servers-paper.html>`_

.. _figure_single_port_nic:

.. figure:: img/single_port_nic.*

   Virtualization for a Single Port NIC in SR-IOV Mode


Physical and Virtual Function Infrastructure
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The following describes the Physical Function and Virtual Functions infrastructure for the supported Ethernet Controller NICs.

Virtual Functions operate under the respective Physical Function on the same NIC Port and therefore have no access
to the global NIC resources that are shared between other functions for the same NIC port.

A Virtual Function has basic access to the queue resources and control structures of the queues assigned to it.
For global resource access, a Virtual Function has to send a request to the Physical Function for that port,
and the Physical Function operates on the global resources on behalf of the Virtual Function.
For this out-of-band communication, an SR-IOV enabled NIC provides a memory buffer for each Virtual Function,
which is called a "Mailbox".

Intel® Ethernet Adaptive Virtual Function
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
Adaptive Virtual Function (IAVF) is an SR-IOV Virtual Function with the same device ID (8086:1889) across different Intel Ethernet Controllers.
The IAVF driver is a VF driver that supports current and future Intel devices without requiring a VM update.
Because it is adaptive, each new release of the VF driver can enable additional advanced features in the VM,
in a device-agnostic way, whenever the underlying hardware device supports them, without ever compromising the base functionality.
IAVF provides a generic hardware interface, and the interface between the IAVF driver and a compliant PF driver is specified.

Intel products starting from the Ethernet Controller 700 Series support the Adaptive Virtual Function.

Virtual Functions are generated in the usual way, and the resources assigned to a VF depend on the NIC infrastructure.
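
For example, on a kernel that exposes the standard SR-IOV sysfs interface, the VFs could be created as follows
(a sketch; the PCI address 0000:18:00.0 is only illustrative):

.. code-block:: console

    echo 2 > /sys/bus/pci/devices/0000\:18\:00.0/sriov_numvfs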

For more detail on the Adaptive Virtual Function, please refer to the following document:

*   `Intel® IAVF HAS <https://www.intel.com/content/dam/www/public/us/en/documents/product-specifications/ethernet-adaptive-virtual-function-hardware-spec.pdf>`_

.. note::

    To use the DPDK IAVF PMD on an Intel® 700 Series Ethernet Controller, the device ID (0x1889) needs to be
    specified during device assignment in the hypervisor. Taking QEMU as an example, the device assignment
    should carry the IAVF device ID (0x1889) like ``-device vfio-pci,x-pci-device-id=0x1889,host=03:0a.0``.
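
    As a fuller sketch of such an invocation (the memory size, disk image and host PCI address are only illustrative):

    .. code-block:: console

        qemu-system-x86_64 -enable-kvm -m 4096 -smp 4 -hda guest.qcow2 \
            -device vfio-pci,x-pci-device-id=0x1889,host=03:0a.0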

    When IAVF is backed by an Intel® E810 device, the "Protocol Extraction" feature supported by the ice PMD is also
    available for the IAVF PMD. The same devargs with the same parameters can be applied to the IAVF PMD; for details,
    please refer to the section ``Protocol extraction for per queue`` of ice.rst.

The PCIe host-interface of Intel Ethernet Switch FM10000 Series VF infrastructure
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In a virtualized environment, the programmer can enable a maximum of *64 Virtual Functions (VF)*
globally per PCIe host-interface of the Intel Ethernet Switch FM10000 Series device.
Each VF can have a maximum of 16 queue pairs.
The Physical Function in the host can only be configured by the Linux* fm10k driver
(in the case of the Linux Kernel-based Virtual Machine [KVM]); the DPDK PMD PF driver does not support it yet.

For example,

*   Using Linux* fm10k driver:

    .. code-block:: console

        rmmod fm10k (To remove the fm10k module)
        insmod fm10k.ko max_vfs=2,2 (To enable two Virtual Functions per port)

Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for a dual-port NIC.
When you enable the four Virtual Functions with the above command, the four enabled functions have a Function#
represented by (Bus#, Device#, Function#) in sequence starting from 0 to 3.
However:

*   Virtual Functions 0 and 2 belong to Physical Function 0

*   Virtual Functions 1 and 3 belong to Physical Function 1

.. note::

    The above is an important consideration to take into account when targeting specific packets to a selected port.

Intel® X710/XL710 Gigabit Ethernet Controller VF Infrastructure
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In a virtualized environment, the programmer can enable a maximum of *128 Virtual Functions (VF)*
globally per Intel® X710/XL710 Gigabit Ethernet Controller NIC device.
The Physical Function in the host can be configured either by the Linux* i40e driver
(in the case of the Linux Kernel-based Virtual Machine [KVM]) or by the DPDK PMD PF driver.
When both DPDK PMD PF/VF drivers are used, the whole NIC is taken over by the DPDK-based application.

For example,

*   Using Linux* i40e driver:

    .. code-block:: console

        rmmod i40e (To remove the i40e module)
        insmod i40e.ko max_vfs=2,2 (To enable two Virtual Functions per port)

*   Using the DPDK PMD PF i40e driver:

    Kernel Params: iommu=pt, intel_iommu=on

    .. code-block:: console

        modprobe uio
        insmod igb_uio
        ./dpdk-devbind.py -b igb_uio bb:ss.f
        echo 2 > /sys/bus/pci/devices/0000\:bb\:ss.f/max_vfs (To enable two VFs on a specific PCI device)

    Launch the DPDK testpmd/example or your own host daemon application using the DPDK PMD library.

Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for a dual-port NIC.
When you enable the four Virtual Functions with the above command, the four enabled functions have a Function#
represented by (Bus#, Device#, Function#) in sequence starting from 0 to 3.
However:

*   Virtual Functions 0 and 2 belong to Physical Function 0

*   Virtual Functions 1 and 3 belong to Physical Function 1

.. note::

    The above is an important consideration to take into account when targeting specific packets to a selected port.

    For the Intel® X710/XL710 Gigabit Ethernet Controller, queues are in pairs. One queue pair means one receive queue and
    one transmit queue. The default number of queue pairs per VF is 4, and the maximum is 16.
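
    For example, inside a VF testpmd session the queue count can be changed at runtime, as sketched below
    (it only succeeds if the VF has been assigned enough queue pairs):

    .. code-block:: console

        testpmd> port stop all
        testpmd> port config all rxq 16
        testpmd> port config all txq 16
        testpmd> port start all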

Intel® 82599 10 Gigabit Ethernet Controller VF Infrastructure
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The programmer can enable a maximum of *63 Virtual Functions* and there must be *one Physical Function* per Intel® 82599
10 Gigabit Ethernet Controller NIC port.
The reason for this is that the device allows for a maximum of 128 queues per port and a virtual/physical function has to
have at least one queue pair (RX/TX).
The current implementation of the DPDK ixgbevf driver supports a single queue pair (RX/TX) per Virtual Function.
The Physical Function in the host can be configured either by the Linux* ixgbe driver
(in the case of the Linux Kernel-based Virtual Machine [KVM]) or by the DPDK PMD PF driver.
When both DPDK PMD PF/VF drivers are used, the whole NIC is taken over by the DPDK-based application.


For example,

*   Using Linux* ixgbe driver:

    .. code-block:: console

        rmmod ixgbe (To remove the ixgbe module)
        insmod ixgbe max_vfs=2,2 (To enable two Virtual Functions per port)

*   Using the DPDK PMD PF ixgbe driver:

    Kernel Params: iommu=pt, intel_iommu=on

    .. code-block:: console

        modprobe uio
        insmod igb_uio
        ./dpdk-devbind.py -b igb_uio bb:ss.f
        echo 2 > /sys/bus/pci/devices/0000\:bb\:ss.f/max_vfs (To enable two VFs on a specific PCI device)

    Launch the DPDK testpmd/example or your own host daemon application using the DPDK PMD library.

*   Using the DPDK PMD PF ixgbe driver to enable VF RSS:

    Same steps as above to install the modules of uio, igb_uio, specify max_vfs for the PCI device, and
    launch the DPDK testpmd/example or your own host daemon application using the DPDK PMD library
    (see the sketch after this list).

    The number of queues available per VF (at most 4) depends on the total number of pools, which is
    determined by the maximum number of VFs at PF initialization stage and the number of queues specified
    in the configuration:

    *   If the maximum number of VFs (max_vfs) is set in the range of 1 to 32:

        If the number of Rx queues is specified as 4 (``--rxq=4`` in testpmd), then there are 32
        pools in total (ETH_32_POOLS), and each VF can have 4 Rx queues;

        If the number of Rx queues is specified as 2 (``--rxq=2`` in testpmd), then there are 32
        pools in total (ETH_32_POOLS), and each VF can have 2 Rx queues;

    *   If the maximum number of VFs (max_vfs) is in the range of 33 to 64:

        If the number of Rx queues is specified as 4 (``--rxq=4`` in testpmd), then an error message is expected,
        as ``rxq`` is not valid in this case;

        If the number of Rx queues is 2 (``--rxq=2`` in testpmd), then there are 64 pools in total (ETH_64_POOLS),
        and each VF can have 2 Rx queues;

    On the host, to enable VF RSS functionality, the Rx mq mode should be set to ETH_MQ_RX_VMDQ_RSS
    or ETH_MQ_RX_RSS, and SR-IOV mode should be activated (max_vfs >= 1).
    The VF RSS information, such as the hash function, RSS key and RSS key length, also needs to be configured.
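
    As an illustrative sketch (the PCI address ``bb:ss.f`` is a placeholder), enabling 32 or fewer VFs and
    launching testpmd with four Rx queues per VF could look like:

    .. code-block:: console

        echo 4 > /sys/bus/pci/devices/0000\:bb\:ss.f/max_vfs
        ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -- -i --rxq=4 --txq=4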

.. note::

    The limitation for VF RSS on the Intel® 82599 10 Gigabit Ethernet Controller is:
    the hash and key are shared among the PF and all VFs, and the 128-entry RETA table is also shared
    among the PF and all VFs. Therefore it is not possible to query the hash and RETA content per
    VF on the guest; if needed, query them on the host for the shared RETA information.

Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for a dual-port NIC.
When you enable the four Virtual Functions with the above command, the four enabled functions have a Function#
represented by (Bus#, Device#, Function#) in sequence starting from 0 to 3.
However:

*   Virtual Functions 0 and 2 belong to Physical Function 0

*   Virtual Functions 1 and 3 belong to Physical Function 1

.. note::

    The above is an important consideration to take into account when targeting specific packets to a selected port.

Intel® 82576 Gigabit Ethernet Controller and Intel® Ethernet Controller I350 Family VF Infrastructure
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In a virtualized environment, an Intel® 82576 Gigabit Ethernet Controller serves up to eight virtual machines (VMs).
The controller has 16 TX and 16 RX queues.
They are generally referred to (or thought of) as queue pairs (one TX and one RX queue).
This gives the controller 16 queue pairs.

A pool is a group of queue pairs for assignment to the same VF, used for transmit and receive operations.
The controller has eight pools, with each pool containing two queue pairs, that is, two TX and two RX queues assigned to each VF.

In a virtualized environment, an Intel® Ethernet Controller I350 family device serves up to eight virtual machines (VMs) per port.
The eight queues can be accessed by eight different VMs if configured correctly (the I350 has 4x 1GbE ports, each with 8 TX and 8 RX queues);
that means one Transmit and one Receive queue assigned to each VF.

For example,

*   Using Linux* igb driver:

    .. code-block:: console

        rmmod igb (To remove the igb module)
        insmod igb max_vfs=2,2 (To enable two Virtual Functions per port)

*   Using DPDK PMD PF igb driver:

    Kernel Params: iommu=pt, intel_iommu=on

    .. code-block:: console

        modprobe uio
        insmod igb_uio
        ./dpdk-devbind.py -b igb_uio bb:ss.f
        echo 2 > /sys/bus/pci/devices/0000\:bb\:ss.f/max_vfs (To enable two VFs on a specific PCI device)

    Launch DPDK testpmd/example or your own host daemon application using the DPDK PMD library.

Virtual Function enumeration is performed in the following sequence by the Linux* pci driver for a four-port NIC.
When you enable the eight Virtual Functions with the above command, the eight enabled functions have a Function#
represented by (Bus#, Device#, Function#) in sequence, starting from 0 to 7.
However:

*   Virtual Functions 0 and 4 belong to Physical Function 0

*   Virtual Functions 1 and 5 belong to Physical Function 1

*   Virtual Functions 2 and 6 belong to Physical Function 2

*   Virtual Functions 3 and 7 belong to Physical Function 3

.. note::

    The above is an important consideration to take into account when targeting specific packets to a selected port.

Validated Hypervisors
~~~~~~~~~~~~~~~~~~~~~

The validated hypervisor is:

*   KVM (Kernel Virtual Machine) with Qemu, version 0.14.0

However, because the hypervisor is bypassed and the Virtual Function devices are configured using the Mailbox interface,
the solution is hypervisor-agnostic.
Xen* and VMware* (when SR-IOV is supported) will also be able to support the DPDK with Virtual Function driver support.

Expected Guest Operating System in Virtual Machine
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The expected guest operating systems in a virtualized environment are:

*   Fedora* 14 (64-bit)

*   Ubuntu* 10.04 (64-bit)

For supported kernel versions, refer to the *DPDK Release Notes*.

Setting Up a KVM Virtual Machine Monitor
----------------------------------------

The following describes a target environment:

*   Host Operating System: Fedora 14

*   Hypervisor: KVM (Kernel Virtual Machine) with Qemu version 0.14.0

*   Guest Operating System: Fedora 14

*   Linux Kernel Version: Refer to the *DPDK Getting Started Guide*

*   Target Applications: l2fwd, l3fwd-vf

The setup procedure is as follows:

#.  Before booting the Host OS, open **BIOS setup** and enable **Intel® VT features**.

#.  While booting the Host OS kernel, pass the intel_iommu=on kernel command line argument using GRUB.
    When using the DPDK PF driver on the host, also pass the iommu=pt kernel command line argument in GRUB.
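
    For example, with GRUB 2 this could be done by editing the kernel command line in /etc/default/grub and
    regenerating the configuration (a sketch; file locations and the regeneration command vary by distribution):

    .. code-block:: console

        # in /etc/default/grub:
        GRUB_CMDLINE_LINUX="... intel_iommu=on iommu=pt"
        # then regenerate the GRUB configuration:
        grub2-mkconfig -o /boot/grub2/grub.cfg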

#.  Download qemu-kvm-0.14.0 from
    `http://sourceforge.net/projects/kvm/files/qemu-kvm/ <http://sourceforge.net/projects/kvm/files/qemu-kvm/>`_
    and install it in the Host OS using the following steps:

    When using a recent kernel (2.6.25+) with kvm modules included:

    .. code-block:: console

        tar xzf qemu-kvm-release.tar.gz
        cd qemu-kvm-release
        ./configure --prefix=/usr/local/kvm
        make
        sudo make install
        sudo /sbin/modprobe kvm-intel

    When using an older kernel, or a kernel from a distribution without the kvm modules,
    you must download (from the same link), compile and install the modules yourself:

    .. code-block:: console

        tar xjf kvm-kmod-release.tar.bz2
        cd kvm-kmod-release
        ./configure
        make
        sudo make install
        sudo /sbin/modprobe kvm-intel

    qemu-kvm installs in the /usr/local/bin directory.

    For more details about KVM configuration and usage, please refer to:

    `http://www.linux-kvm.org/page/HOWTO1 <http://www.linux-kvm.org/page/HOWTO1>`_.

#.  Create a Virtual Machine and install Fedora 14 on the Virtual Machine.
    This is referred to as the Guest Operating System (Guest OS).

#.  Download and install the latest ixgbe driver from:

    `http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&DwnldID=14687 <http://downloadcenter.intel.com/Detail_Desc.aspx?agr=Y&DwnldID=14687>`_

#.  In the Host OS:

    When using the Linux kernel ixgbe driver, unload the Linux ixgbe driver and reload it with the max_vfs=2,2 argument:

    .. code-block:: console

        rmmod ixgbe
        modprobe ixgbe max_vfs=2,2

    When using the DPDK PMD PF driver, insert the DPDK kernel module igb_uio and set the number of VFs via the sysfs max_vfs attribute:

    .. code-block:: console

        modprobe uio
        insmod igb_uio
        ./dpdk-devbind.py -b igb_uio 02:00.0 02:00.1 0e:00.0 0e:00.1
        echo 2 > /sys/bus/pci/devices/0000\:02\:00.0/max_vfs
        echo 2 > /sys/bus/pci/devices/0000\:02\:00.1/max_vfs
        echo 2 > /sys/bus/pci/devices/0000\:0e\:00.0/max_vfs
        echo 2 > /sys/bus/pci/devices/0000\:0e\:00.1/max_vfs

    .. note::

        You need to explicitly specify the number of VFs for each port; the commands above create two VFs
        for each of the four ixgbe ports.

    Let's say we have a machine with four physical ixgbe ports:


        0000:02:00.0

        0000:02:00.1

        0000:0e:00.0

        0000:0e:00.1

    The command above creates two VFs for device 0000:02:00.0:

    .. code-block:: console

        ls -alrt /sys/bus/pci/devices/0000\:02\:00.0/virt*
        lrwxrwxrwx. 1 root root 0 Apr 13 05:40 /sys/bus/pci/devices/0000:02:00.0/virtfn1 -> ../0000:02:10.2
        lrwxrwxrwx. 1 root root 0 Apr 13 05:40 /sys/bus/pci/devices/0000:02:00.0/virtfn0 -> ../0000:02:10.0

    It also creates two VFs for device 0000:02:00.1:

    .. code-block:: console

        ls -alrt /sys/bus/pci/devices/0000\:02\:00.1/virt*
        lrwxrwxrwx. 1 root root 0 Apr 13 05:51 /sys/bus/pci/devices/0000:02:00.1/virtfn1 -> ../0000:02:10.3
        lrwxrwxrwx. 1 root root 0 Apr 13 05:51 /sys/bus/pci/devices/0000:02:00.1/virtfn0 -> ../0000:02:10.1


#.  List the PCI devices connected and notice that the Host OS shows two Physical Functions (traditional ports)
    and four Virtual Functions (two for each port).
    This is the result of the previous step.
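
    A sketch of the expected output (device names and bus addresses will vary with the NIC and system):

    .. code-block:: console

        lspci | grep -i ethernet
        02:00.0 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection
        02:00.1 Ethernet controller: Intel Corporation 82599ES 10-Gigabit SFI/SFP+ Network Connection
        02:10.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function
        02:10.1 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function
        02:10.2 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function
        02:10.3 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function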

#.  Insert the pci_stub module to hold the PCI devices that are freed from the default driver using the following command
    (see http://www.linux-kvm.org/page/How_to_assign_devices_with_VT-d_in_KVM Section 4 for more information):

    .. code-block:: console

        sudo /sbin/modprobe pci-stub

    Unbind the default driver from the PCI devices representing the Virtual Functions.
    A script to perform this action is as follows:

    .. code-block:: console

        echo "8086 10ed" > /sys/bus/pci/drivers/pci-stub/new_id
        echo 0000:08:10.0 > /sys/bus/pci/devices/0000:08:10.0/driver/unbind
        echo 0000:08:10.0 > /sys/bus/pci/drivers/pci-stub/bind

    where 0000:08:10.0 is the PCI address of a Virtual Function visible in the Host OS.

#.  Now, start the Virtual Machine by running the following command:

    .. code-block:: console

        /usr/local/kvm/bin/qemu-system-x86_64 -m 4096 -smp 4 -boot c -hda lucid.qcow2 -device pci-assign,host=08:10.0

    where:

        — -m = memory to assign

        — -smp = number of smp cores

        — -boot = boot option

        — -hda = virtual disk image

        — -device = device to attach

    .. note::

        — The pci-assign,host=08:10.0 value indicates that you want to attach a PCI device
        to a Virtual Machine and the respective (Bus:Device.Function)
        numbers should be passed for the Virtual Function to be attached.

        — qemu-kvm-0.14.0 allows a maximum of four PCI devices assigned to a VM,
        but this is qemu-kvm version dependent since qemu-kvm-0.14.1 allows a maximum of five PCI devices.

        — qemu-system-x86_64 also has a -cpu command line option that is used to select the cpu_model
        to emulate in a Virtual Machine. Therefore, it can be used as:

        .. code-block:: console

            /usr/local/kvm/bin/qemu-system-x86_64 -cpu ?

            (to list all available cpu_models)

            /usr/local/kvm/bin/qemu-system-x86_64 -m 4096 -cpu host -smp 4 -boot c -hda lucid.qcow2 -device pci-assign,host=08:10.0

            (to use the same cpu_model equivalent to the host cpu)

        For more information, please refer to: `http://wiki.qemu.org/Features/CPUModels <http://wiki.qemu.org/Features/CPUModels>`_.

#.  If vfio-pci is used to pass through the device instead of pci-assign, steps 8 and 9 need to be updated to bind the device to vfio-pci and
    to replace pci-assign with vfio-pci when starting the virtual machine.

    .. code-block:: console

        sudo /sbin/modprobe vfio-pci

        echo "8086 10ed" > /sys/bus/pci/drivers/vfio-pci/new_id
        echo 0000:08:10.0 > /sys/bus/pci/devices/0000:08:10.0/driver/unbind
        echo 0000:08:10.0 > /sys/bus/pci/drivers/vfio-pci/bind

        /usr/local/kvm/bin/qemu-system-x86_64 -m 4096 -smp 4 -boot c -hda lucid.qcow2 -device vfio-pci,host=08:10.0

#.  Install and run the DPDK host application to take over the Physical Function, e.g.:

    .. code-block:: console

        ./<build_dir>/app/dpdk-testpmd -l 0-3 -n 4 -- -i

#.  Finally, access the Guest OS using vncviewer with the localhost:5900 port and check the lspci command output in the Guest OS.
    The virtual functions will be listed as available for use.
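
    A sketch of what the guest might show (the guest bus address is assigned by QEMU and will vary):

    .. code-block:: console

        lspci | grep Ethernet
        00:04.0 Ethernet controller: Intel Corporation 82599 Ethernet Controller Virtual Function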

#.  Configure and install the DPDK on the Guest OS as normal, that is, there is no change to the normal installation procedure.

.. note::

    If you are unable to compile the DPDK and you are getting "error: CPU you selected does not support x86-64 instruction set",
    power off the Guest OS and start the virtual machine with the correct -cpu option in the qemu-system-x86_64 command as shown in step 9.
    You must select the best x86_64 cpu_model to emulate, or you can select the host option if available.

.. note::

    Run the DPDK l2fwd sample application in the Guest OS with Hugepages enabled.
    For the expected benchmark performance, you must pin the cores from the Guest OS to the Host OS (taskset can be used to do this) and
    you must also look at the PCI Bus layout on the board to ensure you are not running the traffic over the QPI Interface.
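
    A sketch of such pinning, assuming the QEMU vCPU thread IDs have been identified (for example with
    ``ps -eLf | grep qemu``); the thread ID 12345 and the core 2 are only illustrative:

    .. code-block:: console

        taskset -pc 2 12345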

.. note::

    *   The Virtual Machine Manager (the Fedora package name is virt-manager) is a utility for virtual machine management
        that can also be used to create, start, stop and delete virtual machines.
        If this option is used, steps 2 and 6 in the instructions provided will be different.

    *   virsh, a command line utility for virtual machine management,
        can also be used to bind and unbind devices to a virtual machine in Ubuntu.
        If this option is used, step 6 in the instructions provided will be different.

    *   The Virtual Machine Monitor (see :numref:`figure_perf_benchmark`) is equivalent to a Host OS with KVM installed as described in the instructions.

.. _figure_perf_benchmark:

.. figure:: img/perf_benchmark.*

   Performance Benchmark Setup


DPDK SR-IOV PMD PF/VF Driver Usage Model
----------------------------------------

Fast Host-based Packet Processing
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Software Defined Network (SDN) trends are demanding fast host-based packet handling.
In a virtualization environment,
the DPDK VF PMD driver achieves the same throughput as in a non-VT native environment.

With such fast packet processing in the host instance, many services such as filtering, QoS and
DPI can be offloaded onto the host fast path.

:numref:`figure_fast_pkt_proc` shows the scenario where some VMs directly communicate externally via VFs,
while others connect to a virtual switch and share the same uplink bandwidth.

.. _figure_fast_pkt_proc:

.. figure:: img/fast_pkt_proc.*

   Fast Host-based Packet Processing


SR-IOV (PF/VF) Approach for Inter-VM Communication
--------------------------------------------------

Inter-VM data communication is one of the traffic bottlenecks in virtualization platforms.
SR-IOV device assignment helps a VM to attach the real device, taking advantage of the bridge in the NIC.
So VF-to-VF traffic within the same physical port (VM0<->VM1) has hardware acceleration.
However, when traffic crosses physical ports (VM0<->VM2), there is no such hardware bridge.
In this case, the DPDK PMD PF driver provides host forwarding between such VMs.

:numref:`figure_inter_vm_comms` shows an example.
In this case an update of the MAC address lookup tables in both the NIC and host DPDK application is required.

In the NIC, the destination MAC address of a VM that sits behind another device is written to the PF-specific pool.
So when a packet comes in, its destination MAC address matches and it is forwarded to the host DPDK PMD application.

In the host DPDK application, the behavior is similar to L2 forwarding,
that is, the packet is forwarded to the correct PF pool.
The SR-IOV NIC switch forwards the packet to a specific VM according to the destination MAC address,
which belongs to the destination VF on the VM.
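
For example, testpmd running on the host PF exposes commands to program a VF MAC address into a pool; a sketch
(check your testpmd version's help for the exact syntax, and treat the port/VF numbers and the address as placeholders):

.. code-block:: console

    testpmd> mac_addr add port 0 vf 1 00:11:22:33:44:55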

.. _figure_inter_vm_comms:

.. figure:: img/inter_vm_comms.*

   Inter-VM Communication
