History log of /dpdk/drivers/net/mlx5/mlx5_ethdev.c (Results 1 – 25 of 249)
Revision Date Author Comments
# 4c3d7961 07-Aug-2024 Igor Gutorov <igootorov@gmail.com>

net/mlx5: fix reported Rx/Tx descriptor limits

Currently, both `rte_eth_dev_info.rx_desc_lim.nb_max` and
`rte_eth_dev_info.tx_desc_lim.nb_max` report 65535 as the limit,
which results in a few problems:

* It is not the actual Rx/Tx queue limit
* Allocating an Rx queue and passing `rx_desc_lim.nb_max` results in an
integer overflow and 0 ring size:

```
rte_eth_rx_queue_setup(0, 0, rx_desc_lim.nb_max, 0, NULL, mb_pool);
```

This overflows the ring size and generates the following log:
```
mlx5_net: port 0 increased number of descriptors in Rx queue 0 to the
next power of two (0)
```
The same holds for allocating a Tx queue.

Fixes: e60fbd5b24fc ("mlx5: add device configure/start/stop")
Cc: stable@dpdk.org

Signed-off-by: Igor Gutorov <igootorov@gmail.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
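
As an illustration (not part of the commit), a minimal sketch of clamping the requested ring size to the limit the PMD now reports correctly; port_id, queue_id and mb_pool are assumed to exist:

```
#include <rte_common.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Clamp the requested ring size to the limit reported by the PMD. */
static int
setup_rxq_within_limits(uint16_t port_id, uint16_t queue_id,
                        uint16_t requested, struct rte_mempool *mb_pool)
{
        struct rte_eth_dev_info info;
        int ret = rte_eth_dev_info_get(port_id, &info);

        if (ret != 0)
                return ret;
        /* Never ask for more descriptors than the device reports. */
        requested = RTE_MIN(requested, info.rx_desc_lim.nb_max);
        return rte_eth_rx_queue_setup(port_id, queue_id, requested,
                                      rte_eth_dev_socket_id(port_id),
                                      NULL, mb_pool);
}
```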


# 6a3446cf 11-Oct-2024 Dariusz Sosnowski <dsosnowski@nvidia.com>

net/mlx5: disable config restore

The mlx5 PMD does not require configuration restore
on rte_eth_dev_start().
Add an implementation of get_restore_flags() indicating that.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>


# 10859ecf 08-Jul-2024 Dariusz Sosnowski <dsosnowski@nvidia.com>

net/mlx5: fix MTU configuration

Apply the provided MTU, derived from rte_eth_conf.rxmode.mtu,
at port configuration time.

Bugzilla ID: 1483
Fixes: e60fbd5b24fc ("mlx5: add device configure/start/stop")
Cc: stable@dpdk.org

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
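
For context, a hedged sketch of the configuration path this fix covers: the MTU requested through rte_eth_conf.rxmode.mtu is now applied by the PMD at configure time (the jumbo-frame value is only an example):

```
#include <rte_ethdev.h>

/* Request a specific MTU via rte_eth_conf.rxmode.mtu at configure time. */
static int
configure_port_with_mtu(uint16_t port_id, uint16_t nb_rxq, uint16_t nb_txq,
                        uint32_t mtu)
{
        struct rte_eth_conf conf = {0};

        conf.rxmode.mtu = mtu;          /* e.g. 9000 for jumbo frames */
        return rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &conf);
}
```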


# 1944fbc3 05-Jun-2024 Suanming Mou <suanmingm@nvidia.com>

net/mlx5: support flow match with external Tx queue

To use externally created Tx queues with RTE_FLOW_ITEM_TX_QUEUE,
this commit provides map and unmap functions to convert an
externally created SQ's DevX ID to a DPDK flow item Tx queue ID.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>


# 3d57ec9f 05-Mar-2024 Thomas Monjalon <thomas@monjalon.net>

net/mlx5: apply default tuning to future speeds

Some default parameters for the number of queues and the ring size
are different starting with the 100G speed capability.

Instead of checking each speed above 100G individually, make sure the
tuning is applied for any speed capability of 100G or faster
(including 400G, for instance).

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
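
A hedged sketch of the idea behind this change (illustrative, not the PMD's internal code): since link speed capability flags are bits ordered by speed, "100G class or faster" can be tested with a single mask instead of enumerating each speed:

```
#include <stdbool.h>
#include <stdint.h>
#include <rte_ethdev.h>

/* True if the capability bitmap contains any speed of 100G or faster. */
static bool
is_100g_class_or_faster(uint32_t speed_capa)
{
        const uint32_t below_100g = RTE_ETH_LINK_SPEED_100G - 1;

        return (speed_capa & ~below_100g) != 0;
}
```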


# ba6a168a 01-Feb-2024 Sivaramakrishnan Venkat <venkatx.sivaramakrishnan@intel.com>

drivers/net: return number of supported packet types

Missing "RTE_PTYPE_UNKNOWN" ptype causes buffer overflow.
Enhance code such that the dev_supported_ptypes_get()
function pointer now returns the number of elements to
eliminate the need for "RTE_PTYPE_UNKNOWN" as the last item.

Same applied to 'buffer_split_supported_hdr_ptypes_get()' dev_ops too.

Signed-off-by: Sivaramakrishnan Venkat <venkatx.sivaramakrishnan@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
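
An illustrative sketch of the new callback shape (the exact dev_ops prototype is an assumption based on the description above; only the ptype constants are taken as-is):

```
#include <stddef.h>
#include <stdint.h>
#include <rte_common.h>
#include <rte_mbuf_ptype.h>

/*
 * Return the supported ptypes and report their count through an
 * out-parameter, instead of terminating the array with RTE_PTYPE_UNKNOWN.
 */
static const uint32_t *
example_supported_ptypes_get(size_t *no_of_elements)
{
        static const uint32_t ptypes[] = {
                RTE_PTYPE_L2_ETHER,
                RTE_PTYPE_L3_IPV4,
                RTE_PTYPE_L3_IPV6,
                RTE_PTYPE_L4_TCP,
                RTE_PTYPE_L4_UDP,
        };

        *no_of_elements = RTE_DIM(ptypes);
        return ptypes;
}
```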


# 11c73de9 31-Oct-2023 Dariusz Sosnowski <dsosnowski@nvidia.com>

net/mlx5: probe multi-port E-Switch device

This patch adds support for probing ports of a Multiport
E-Switch device to mlx5 PMD.

Multiport E-Switch is a configuration of NVIDIA ConnectX/BlueField HCAs
where all connected entities (i.e. physical ports, VFs and SFs)
share the same switch domain.
In this mode, applications are allowed to create transfer flow rules
which explicitly match on the physical port on which traffic
arrives and/or on VFs and SFs, regardless of the root PF.
On top of that, forwarding to any of these entities is allowed.
Notably, applications are allowed to explicitly forward traffic
to any of the physical ports of the HCA.

This patch implements the following procedure for probing ports
of the device configured as Multiport E-Switch:

1. EAL calls mlx5 PMD to probe certain PCI device (with address BDF).
2. mlx5 PMD iterates over all existing IB devices:
2.1. Check if IB device has a PCI address which matches BDF.
2.2. Check if IB device is configured as Multiport E-Switch device,
using devlink interface.
2.3. Iterate over all IB ports of this device to find a netdev with
matching PCI address.
If any is found, IB device is chosen to instantiate DPDK ports
from it.
3. Iterate over all IB ports of the selected IB device,
to choose which ports to instantiate:
3.1. Choose IB ports which match the selected representor ports
(selected through representor devarg).
Instantiate DPDK ports based on those.
3.2. If an IB port represents an uplink port and this port corresponds
to the probed PCI device, the instantiated DPDK port is selected
as the switch master port.

Bulk of this work was done in mlx5_os_pci_probe_pf().

To properly enable support for Multiport E-Switch, this patch also
changes the following:

- Probing of representors of type RTE_ETH_REPRESENTOR_PF is allowed,
but only if Multiport E-Switch is enabled.
- Uplink ports have representor type NONE and a
representor ID equal to UINT16_MAX.
The rte_eth_dev_representor_info struct returned for uplink ports
has its index stored in the `pf` field.
- flow_hw_set_port_info() used by the HWS steering layer sets the `is_wire`
field to true if a port is an uplink port,
provided Multiport E-Switch is enabled.
- Changing the MAC address of a port marked as representor is done directly
through its corresponding netdev if it is a Multiport E-Switch uplink.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>


# 86647d46 31-Oct-2023 Thomas Monjalon <thomas@monjalon.net>

net/mlx5: add global API prefix to public constants

The file rte_pmd_mlx5.h is a public API,
so its components must be prefixed with RTE_PMD_.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>


# f2d43ff5 06-Oct-2022 Dariusz Sosnowski <dsosnowski@nvidia.com>

net/mlx5: allow hairpin Rx queue in locked memory

This patch adds a capability to place hairpin Rx queue in locked device
memory. This capability is equivalent to storing hairpin RQ's data
buffers in locked internal device memory.

Hairpin Rx queue creation is extended with requesting that RQ is
allocated in locked internal device memory. If allocation fails and
force_memory hairpin configuration is set, then hairpin queue creation
(and, as a result, device start) fails. If force_memory is unset, then
the PMD will fall back to allocating memory for the hairpin RQ in
unlocked internal device memory.

To allow such allocation, the user must set HAIRPIN_DATA_BUFFER_LOCK
flag in FW using mlxconfig tool.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
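
From the application side, a hedged sketch of requesting this placement through the generic hairpin configuration (field names assumed from the ethdev hairpin API of the same release; the HAIRPIN_DATA_BUFFER_LOCK FW setting is still a prerequisite):

```
#include <rte_ethdev.h>

/* Ask the PMD to place the hairpin RQ data buffers in locked device memory. */
static int
setup_locked_hairpin_rxq(uint16_t port_id, uint16_t rxq, uint16_t nb_desc,
                         uint16_t peer_txq)
{
        struct rte_eth_hairpin_conf conf = {
                .peer_count = 1,
                .use_locked_device_memory = 1,
                .force_memory = 1,      /* fail instead of falling back */
                .peers[0] = { .port = port_id, .queue = peer_txq },
        };

        return rte_eth_rx_hairpin_queue_setup(port_id, rxq, nb_desc, &conf);
}
```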


# 7274b417 06-Oct-2022 Dariusz Sosnowski <dsosnowski@nvidia.com>

net/mlx5: allow hairpin Tx queue in host memory

This patch adds a capability to place hairpin Tx queue in host memory
managed by DPDK. This capability is equivalent to storing hairpin SQ's
WQ buffer in host memory.

Hairpin Tx queue creation is extended with allocating a memory buffer of
proper size (calculated from required number of packets and WQE BB size
advertised in HCA capabilities).

force_memory flag of hairpin queue configuration is also supported.
If it is set and:

- allocation of memory buffer fails,
- or hairpin SQ creation fails,

then device start will fail. If it is unset, the PMD will fall back to
creating the hairpin SQ with the WQ buffer located in unlocked device
memory.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
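
Symmetrically, a hedged sketch for the Tx side, placing the hairpin SQ's WQ buffer in DPDK-managed host memory (same assumptions about the generic hairpin fields):

```
#include <rte_ethdev.h>

/* Ask the PMD to back the hairpin SQ's WQ buffer with host (RTE) memory. */
static int
setup_host_mem_hairpin_txq(uint16_t port_id, uint16_t txq, uint16_t nb_desc,
                           uint16_t peer_rxq)
{
        struct rte_eth_hairpin_conf conf = {
                .peer_count = 1,
                .use_rte_memory = 1,
                .force_memory = 0,      /* allow fallback to device memory */
                .peers[0] = { .port = port_id, .queue = peer_rxq },
        };

        return rte_eth_tx_hairpin_queue_setup(port_id, txq, nb_desc, &conf);
}
```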


# 1f37cb2b 28-Jul-2022 David Marchand <david.marchand@redhat.com>

bus/pci: make driver-only headers private

The pci bus interface is for drivers only.
Mark as internal and move the header in the driver headers list.

While at it, cleanup the code:
- fix indentation,
- remove unneeded reference to bus specific singleton object,
- remove unneeded list head structure type,
- reorder the definitions and macro manipulating the bus singleton object,
- remove inclusion of rte_bus.h and fix the code that relied on implicit
inclusion,

Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>


# 3ef18940 03-Mar-2022 Bing Zhao <bingz@nvidia.com>

net/mlx5: fix configuration without Rx queue

A DPDK application configured with no Rx queue should be supported.
In this mode, the NIC can be used to generate packets without
receiving any ingress traffic.

In the current implementation, when no Rx queue is specified,
the array storing the queue pointers is NULL after allocation.
The check on the array allocation then prevents the application
from starting up.

By adding another check on the number of Rx queues, an
application with no Rx queue can start up successfully.

Fixes: 4cda06c3c35e ("net/mlx5: split Rx queue into shareable and private")
Cc: stable@dpdk.org

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
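
A minimal sketch of the Tx-only setup this fix enables (assuming the port is already probed; no mempool is needed since no Rx queue is created):

```
#include <rte_ethdev.h>

/* Configure and start a port with zero Rx queues, e.g. for packet generation. */
static int
start_tx_only_port(uint16_t port_id, uint16_t nb_txq, uint16_t nb_txd)
{
        struct rte_eth_conf conf = {0};
        uint16_t q;
        int ret;

        ret = rte_eth_dev_configure(port_id, 0 /* no Rx queues */, nb_txq, &conf);
        if (ret != 0)
                return ret;
        for (q = 0; q < nb_txq; q++) {
                ret = rte_eth_tx_queue_setup(port_id, q, nb_txd,
                                             rte_eth_dev_socket_id(port_id),
                                             NULL);
                if (ret != 0)
                        return ret;
        }
        return rte_eth_dev_start(port_id);
}
```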


# 80f872ee 24-Feb-2022 Michael Baum <michaelba@nvidia.com>

net/mlx5: add external Rx queue mapping API

An external queue is a queue that has been created and is managed outside
the PMD. The queue's owner might use the PMD to generate flow rules using
these external queues.

When the queue is created in hardware it is given an ID represented by
32 bits. In contrast, the index of the queues in PMD is represented by
16 bits. To enable the use of PMD to generate flow rules, the queue
owner must provide a mapping between the HW index and a 16-bit index
corresponding to the ethdev API.

This patch adds an API to insert/cancel a mapping between a HW
queue ID and an ethdev queue ID.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
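
A hedged usage sketch of the mapping API (names shown with the RTE_PMD_ prefix introduced later in this log; exact signatures are assumed from the mlx5 public header rte_pmd_mlx5.h):

```
#include <errno.h>
#include <stdint.h>
#include <rte_pmd_mlx5.h>

/*
 * Map an externally created HW RQ (32-bit HW/DevX index) to a 16-bit
 * ethdev-level index so flow rules can reference it.
 */
static int
map_external_rxq(uint16_t port_id, uint16_t dpdk_queue_id, uint32_t hw_queue_id)
{
        /* External queue indexes live in a reserved high range. */
        if (dpdk_queue_id < RTE_PMD_MLX5_EXTERNAL_RX_QUEUE_ID_MIN)
                return -EINVAL;
        return rte_pmd_mlx5_external_rx_queue_id_map(port_id, dpdk_queue_id,
                                                     hw_queue_id);
}
```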


# c06f77ae 24-Feb-2022 Michael Baum <michaelba@nvidia.com>

net/mlx5: optimize queue type checks

The RxQ/TxQ control structure has a field named type. This type is an
enum with values for standard and hairpin.
This field is used to check whether the queue is of the hairpin
type or the standard one.

This patch replaces it with a boolean variable indicating whether the
queue is a hairpin queue.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>


# 45a6df80 14-Feb-2022 Michael Baum <michaelba@nvidia.com>

net/mlx5: separate per port configuration

Add a configuration structure for the port (ethdev). This structure
contains all port-oriented configurations coming from devargs. It is a
field of the mlx5_priv structure, and is updated in the spawn function
for each port.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>


# c4b86201 14-Feb-2022 Michael Baum <michaelba@nvidia.com>

net/mlx5: refactor to detect operation by DevX

Add an inline function indicating whether HW object operations can be
created by DevX. It makes the code more readable.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>


# a13ec19c 14-Feb-2022 Michael Baum <michaelba@nvidia.com>

net/mlx5: add shared device context config structure

Add a configuration structure for the shared device context. This
structure contains all device-oriented configurations coming from
devargs. It is a field of the shared device context (SH) structure, and
is updated once in the mlx5_alloc_shared_dev_ctx() function.
This structure cannot be changed when probing again, so add a function
to enforce that. The mlx5_probe_again_args_validate() function creates a
temporary IB context configuration structure according to the new
devargs attached in the new probe, then checks that the temporary
structure matches the existing IB context configuration structure.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>


# 87af0d1e 14-Feb-2022 Michael Baum <michaelba@nvidia.com>

net/mlx5: concentrate all device configurations

Move all device configuration to be performed by the mlx5_os_cap_config()
function instead of the spawn function.
In addition, move all relevant fields from the mlx5_dev_config structure
to mlx5_dev_cap.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>


# 91d1cfaf 14-Feb-2022 Michael Baum <michaelba@nvidia.com>

net/mlx5: rearrange device attribute structure

Rearrange the mlx5_os_get_dev_attr() function in such a way that it
first executes the queries and only then updates the fields.
In addition, rename it in preparation for expanding its operations to
configure the capabilities inside it.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>


# cf004fd3 14-Feb-2022 Michael Baum <michaelba@nvidia.com>

net/mlx5: add E-Switch mode flag

This patch adds a flag in the SH structure which indicates whether
E-Switch mode is active.
When "dv_esw_en" is configured from devargs, it is enabled only in
E-Switch mode. So, once "dv_esw_en" has been configured, it is enough to
check whether "dv_esw_en" is valid.
This patch also removes the E-Switch mode check wherever "dv_esw_en" is
already checked.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>


# 6dc0cbc6 14-Feb-2022 Michael Baum <michaelba@nvidia.com>

net/mlx5: remove DevX flag duplication

The shared device context structure has a field named "devx" which
indicates whether DevX is supported.
The common configuration structure also has a field named "devx" with
the same meaning.

There is no need for this duplication, because there is a reference to
the common structure from within the shared device context structure.

This patch removes the field from the shared device context structure
and uses the common config structure instead.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>


# 53820561 14-Feb-2022 Michael Baum <michaelba@nvidia.com>

net/mlx5: remove HCA attribute structure duplication

The HCA attribute structure is a field of the net configuration structure.
It is also a field of the common configuration structure.

There is no need for this duplication, because there is a reference to
the common structure from within the net structures.

This patch removes it from the net configuration structure and uses the
common config structure instead.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>


# 09c25553 04-Nov-2021 Xueming Li <xuemingl@nvidia.com>

net/mlx5: support shared Rx queue

This patch introduces shared RxQ. All shared Rx queues with the same group
and queue ID share the same rxq_ctrl. Rxq_ctrl and rxq_data are shared;
all queues from different member ports share the same WQ and CQ,
essentially one Rx WQ, and mbufs are filled into this singleton WQ.

The shared rxq_data is set into the device Rx queues of all member ports
as the RxQ object used for receiving packets. Polling a queue of any
member port returns packets of any member; mbuf->port is used to identify
the source port.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
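
On the application side, a hedged sketch of joining a shared Rx queue group through the generic ethdev fields (assuming the port advertises RTE_ETH_DEV_CAPA_RXQ_SHARE):

```
#include <errno.h>
#include <rte_ethdev.h>
#include <rte_mempool.h>

/* Set up queue 'qid' as a member of shared Rx queue group 1. */
static int
setup_shared_rxq(uint16_t port_id, uint16_t qid, uint16_t nb_desc,
                 struct rte_mempool *mb_pool)
{
        struct rte_eth_dev_info info;
        struct rte_eth_rxconf rxconf;
        int ret = rte_eth_dev_info_get(port_id, &info);

        if (ret != 0)
                return ret;
        if ((info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE) == 0)
                return -ENOTSUP;
        rxconf = info.default_rxconf;
        rxconf.share_group = 1;         /* non-zero: join shared group 1 */
        rxconf.share_qid = qid;         /* shared queue index within the group */
        return rte_eth_rx_queue_setup(port_id, qid, nb_desc,
                                      rte_eth_dev_socket_id(port_id),
                                      &rxconf, mb_pool);
}
```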


# 5cf0707f 04-Nov-2021 Xueming Li <xuemingl@nvidia.com>

net/mlx5: remove Rx queue data list from device

The Rx queue data list (priv->rxqs) can be replaced by the Rx queue
list (priv->rxq_privs); remove it and replace its uses with the
universal wrapper API.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>


# 4cda06c3 04-Nov-2021 Xueming Li <xuemingl@nvidia.com>

net/mlx5: split Rx queue into shareable and private

To prepare for shared Rx queues, split the RxQ data into shareable and
private parts.
Struct mlx5_rxq_priv holds per-queue data.
Struct mlx5_rxq_ctrl holds shared queue resources and data.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
