.. SPDX-License-Identifier: BSD-3-Clause
   Copyright 2019 Mellanox Technologies, Ltd

.. include:: <isonum.txt>

NVIDIA MLX5 vDPA Driver
=======================

.. note::

   NVIDIA acquired Mellanox Technologies in 2020.
   The DPDK documentation and code might still include instances
   of or references to Mellanox trademarks (like BlueField and ConnectX)
   that are now NVIDIA trademarks.

The mlx5 vDPA (vhost data path acceleration) driver library
(**librte_vdpa_mlx5**) provides support for **NVIDIA ConnectX-6**,
**NVIDIA ConnectX-6 Dx**, **NVIDIA ConnectX-6 Lx**, **NVIDIA ConnectX-7**,
**NVIDIA BlueField**, **NVIDIA BlueField-2** and **NVIDIA BlueField-3** families
of 10/25/40/50/100/200 Gb/s adapters, as well as their virtual functions (VF)
in SR-IOV context.

.. note::

   This driver is enabled automatically when using the "meson" build system,
   which detects the required dependencies.

See the :doc:`../../platform/mlx5` guide for design details
and for the PMDs that can be combined with the vDPA PMD.

Supported NICs
--------------

* NVIDIA\ |reg| ConnectX\ |reg|-6 200G MCX654106A-HCAT (2x200G)
* NVIDIA\ |reg| ConnectX\ |reg|-6 Dx EN 25G MCX621102AN-ADAT (2x25G)
* NVIDIA\ |reg| ConnectX\ |reg|-6 Dx EN 100G MCX623106AN-CDAT (2x100G)
* NVIDIA\ |reg| ConnectX\ |reg|-6 Dx EN 200G MCX623105AN-VDAT (1x200G)
* NVIDIA\ |reg| ConnectX\ |reg|-6 Lx EN 25G MCX631102AN-ADAT (2x25G)
* NVIDIA\ |reg| ConnectX\ |reg|-7 200G CX713106AE-HEA_QP1_Ax (2x200G)
* NVIDIA\ |reg| BlueField SmartNIC 25G MBF1M332A-ASCAT (2x25G)
* NVIDIA\ |reg| BlueField\ |reg|-2 SmartNIC MT41686 - MBF2H332A-AEEOT_A1 (2x25G)
* NVIDIA\ |reg| BlueField\ |reg|-3 200GbE 900-9D3B6-00CV-AAB_Ax

Prerequisites
-------------

- NVIDIA MLNX_OFED version: **5.0**
  See :ref:`mlx5 common prerequisites <mlx5_linux_prerequisites>` for more details.

Run-time configuration
~~~~~~~~~~~~~~~~~~~~~~

Driver options
^^^^^^^^^^^^^^

Please refer to :ref:`mlx5 common options <mlx5_common_driver_options>`
for an additional list of options shared with other mlx5 drivers.
An example combining several of the parameters below is given after the list.

- ``event_mode`` parameter [int]

  - 0, Completion queue scheduling will be managed by a timer thread which
    automatically adjusts its delay to the incoming traffic rate.

  - 1, Completion queue scheduling will be managed by a timer thread with a
    fixed delay time.

  - 2, Completion queue scheduling will be managed by interrupts. Each CQ burst
    arms the CQ in order to get an interrupt event in the next traffic burst.

  - Default mode is 1.

- ``event_us`` parameter [int]

  Per-mode microseconds parameter - relevant only for event modes 0 and 1:

  - 0, A nonzero value that sets the timer step in microseconds. The timer
    thread's dynamic delay changes in steps of this value. Default value is 1us.

  - 1, A value that sets the fixed timer delay in microseconds. Default value
    is 0us.

- ``no_traffic_time`` parameter [int]

  A nonzero value defines the traffic-off time, in polling cycle time units,
  after which the driver moves to no-traffic mode. In this mode polling is
  stopped and interrupts are configured on the device so that the driver is
  notified when traffic resumes. Default value is 16.

- ``event_core`` parameter [int]

  The CPU core number of the timer thread, default: EAL main lcore.

.. note::

   This core can be shared as ``event_core`` among different mlx5 vDPA devices,
   but using it for other tasks as well may affect the performance and latency
   of the mlx5 vDPA devices.

- ``max_conf_threads`` parameter [int]

  Allow the driver to use internal threads to speed up configuration.
  All the threads will run on the same core as the event completion queue
  scheduling thread.

  - 0, default, don't use internal threads for configuration.

  - 1 - 256, number of internal threads in addition to the caller thread (8 is suggested).
    If not 0, this value should be the same for all the devices;
    the first probing sets it, together with the ``event_core``,
    for all the multi-threaded configurations in the driver.

- ``hw_latency_mode`` parameter [int]

  The completion queue moderation mode:

  - 0, HW default.

  - 1, Latency is counted from the first packet completion report.

  - 2, Latency is counted from the last packet completion.

- ``hw_max_latency_us`` parameter [int]

  - 1 - 4095, The maximum time in microseconds that a packet completion report
    can be delayed.

  - 0, HW default.

- ``hw_max_pending_comp`` parameter [int]

  - 1 - 65535, The maximum number of pending packet completions in an HW queue.

  - 0, HW default.

- ``queue_size`` parameter [int]

  - 1 - 1024, Virtio queue depth used to pre-create queue resources and speed
    up first-time queue creation. Set it together with the ``queues`` parameter.

  - 0, default value, no virtq resource is pre-created.

- ``queues`` parameter [int]

  - 1 - 128, Maximum number of virtio queue pairs (each including 1 Rx queue
    and 1 Tx queue) for which queue resources are pre-created to speed up
    first-time queue creation. Set it together with the ``queue_size`` parameter.

  - 0, default value, no virtq resource is pre-created.

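The parameters above are passed as device arguments when the device is probed.
Below is a minimal sketch, assuming a hypothetical VF at PCI address
``0000:01:00.2`` and purely illustrative parameter values;
``class=vdpa`` is the mlx5 common option selecting the vDPA driver class.

.. code-block:: c

   #include <rte_dev.h>

   /*
    * Minimal sketch: hot-plug an mlx5 VF as a vDPA device and tune the
    * event timer thread. The PCI address and all values below are
    * illustrative placeholders, not recommended settings.
    */
   static int
   probe_mlx5_vdpa(void)
   {
       return rte_dev_probe("0000:01:00.2,"
                            "class=vdpa,"            /* select the vDPA driver class */
                            "event_mode=0,"          /* dynamic-delay timer thread */
                            "event_us=2,"            /* 2 us timer step */
                            "event_core=3,"          /* timer thread on lcore 3 */
                            "no_traffic_time=32,"    /* traffic-off threshold */
                            "queues=8,queue_size=256"); /* pre-create virtq resources */
   }

The same device argument string can equally be passed on the application
command line through the EAL ``-a`` (allow) option.
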
Error handling
^^^^^^^^^^^^^^

Upon potential hardware errors, the mlx5 PMD tries to recover, and gives up
after 3 failures within 3 seconds; the virtq is then put in the disabled state.
The user should check the log for error information, or query the vDPA
statistics counters to get the error type and count.

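The per-virtq counters, including the error counters, can be read through the
generic vDPA statistics API of the vhost library. A minimal sketch, assuming a
hypothetical device name and a fixed upper bound on the number of counters:

.. code-block:: c

   #include <inttypes.h>
   #include <stdio.h>

   #include <rte_common.h>
   #include <rte_vdpa.h>

   /*
    * Minimal sketch: dump the counters of one virtq, including the error
    * counters reported by the driver. The device name is a placeholder.
    */
   static void
   dump_virtq_stats(const char *dev_name, uint16_t qid)
   {
       struct rte_vdpa_device *dev = rte_vdpa_find_device_by_name(dev_name);
       struct rte_vdpa_stat_name names[64];
       struct rte_vdpa_stat stats[64];
       int n, i;

       if (dev == NULL)
           return;
       n = rte_vdpa_get_stats_names(dev, names, RTE_DIM(names));
       if (n <= 0)
           return;
       n = rte_vdpa_get_stats(dev, qid, stats, RTE_MIN(n, (int)RTE_DIM(stats)));
       for (i = 0; i < n; i++)
           printf("%s: %" PRIu64 "\n", names[stats[i].id].name, stats[i].value);
   }
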
Statistics
^^^^^^^^^^

The device statistics counters persist across reconfiguration until the device
is removed. The user can reset the counters by calling the
``rte_vdpa_reset_stats()`` function.

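For example, a minimal sketch resetting the counters of every virtq of one
device (the device name and the number of virtqs are placeholders):

.. code-block:: c

   #include <rte_vdpa.h>

   /*
    * Minimal sketch: reset the statistics counters of all virtqs of one
    * vDPA device. Both arguments are illustrative placeholders.
    */
   static void
   reset_all_virtq_stats(const char *dev_name, uint16_t nr_virtqs)
   {
       struct rte_vdpa_device *dev = rte_vdpa_find_device_by_name(dev_name);
       uint16_t qid;

       if (dev == NULL)
           return;
       for (qid = 0; qid < nr_virtqs; qid++)
           rte_vdpa_reset_stats(dev, qid);
   }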