# a6b9d5a5 | 23-Feb-2022 | Michael Baum <michaelba@nvidia.com>
common/mlx5: update doorbell mapping parameter name
The "tx_db_nc" devarg forces doorbell register mapping to non-cached region eliminating the extra write memory barrier. This argument was used in
common/mlx5: update doorbell mapping parameter name
The "tx_db_nc" devarg forces doorbell register mapping to non-cached region eliminating the extra write memory barrier. This argument was used in creating the UAR for Tx and thus affected its performance.
Recently [1] its use was extended to all UAR creation in all mlx5 drivers, so its name is no longer accurate.
This patch changes its name to "sq_db_nc" to suit any send queue that uses it. The old name will still work for backward compatibility.
[1] commit 5dfa003db53f ("common/mlx5: fix post doorbell barrier")
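As a hedged illustration (the handler below is hypothetical, not the actual mlx5 parsing code), a devarg handler can honor both spellings so that "sq_db_nc=1" and the legacy "tx_db_nc=1" behave identically:

```c
#include <stdlib.h>
#include <string.h>

/* Hypothetical devarg handler honoring both names; "sq_db_nc" is the new
 * spelling, "tx_db_nc" is kept for backward compatibility. */
static int
db_nc_handler(const char *key, const char *val, void *opaque)
{
    int *nc = opaque;

    if (strcmp(key, "sq_db_nc") == 0 || strcmp(key, "tx_db_nc") == 0)
        *nc = atoi(val) != 0;
    return 0;
}
```

With a handler like this, a device argument string such as `sq_db_nc=1` keeps working for scripts that still pass the old key.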
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reviewed-by: Raslan Darawsheh <rasland@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
# 45a6df80 | 14-Feb-2022 | Michael Baum <michaelba@nvidia.com>
net/mlx5: separate per port configuration
Add a configuration structure for the port (ethdev). This structure contains all port-oriented configurations coming from devargs. It is a field of the mlx5_priv structure and is updated in the spawn function for each port.
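A minimal sketch of the idea, with illustrative field names (the real structure holds many more devarg-driven settings):

```c
/* Hedged sketch: port-oriented devarg settings grouped into one structure
 * embedded in the per-port private data. Field names are illustrative. */
struct mlx5_port_config {
    unsigned int rx_vec_en:1; /* example devarg-driven flag */
    unsigned int log_hp_size; /* example devarg-driven value */
};

struct mlx5_priv {
    /* ... other per-port (ethdev) state ... */
    struct mlx5_port_config config; /* updated in the spawn function */
};
```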
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
# 87af0d1e | 14-Feb-2022 | Michael Baum <michaelba@nvidia.com>
net/mlx5: concentrate all device configurations
Move all device configuration to the mlx5_os_cap_config() function instead of the spawn function. In addition, move all relevant fields from the mlx5_dev_config structure to mlx5_dev_cap.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
# 91d1cfaf | 14-Feb-2022 | Michael Baum <michaelba@nvidia.com>
net/mlx5: rearrange device attribute structure
Rearrange the mlx5_os_get_dev_attr() function so that it first executes the queries and only then updates the fields. In addition, rename it in preparation for expanding its operations to configure the capabilities inside it.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
# 53820561 | 14-Feb-2022 | Michael Baum <michaelba@nvidia.com>
net/mlx5: remove HCA attribute structure duplication
The HCA attribute structure is a field of the net configuration structure, and also a field of the common configuration structure.
There is no need for this duplication, because the net structures hold a reference to the common structure.
This patch removes it from the net configuration structure and uses the common config structure instead.
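A hedged sketch of the resulting access pattern, with an assumed field path and stub types standing in for the real definitions:

```c
/* Hedged sketch: the net driver reaches the HCA attributes through its
 * reference to the common device instead of holding a duplicate copy. */
struct mlx5_hca_attr { int rq_delay_drop; };
struct mlx5_common_dev_config { struct mlx5_hca_attr hca_attr; };
struct mlx5_common_device { struct mlx5_common_dev_config config; };
struct mlx5_dev_ctx_shared { struct mlx5_common_device *cdev; };
struct mlx5_priv { struct mlx5_dev_ctx_shared *sh; };

static int
hca_supports_delay_drop(const struct mlx5_priv *priv)
{
    return priv->sh->cdev->config.hca_attr.rq_delay_drop; /* one shared copy */
}
```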
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
# 0947ed38 | 23-Nov-2021 | Michael Baum <michaelba@nvidia.com>
net/mlx5: improve stride parameter names
In striding RQ management there are two important parameters: the size of a single stride in bytes and the number of strides.
Both the data-path structure and the config structure keep the log of these parameters. However, their names do not mention that the value is a log, which may mislead readers into thinking the fields hold the values themselves.
This patch updates their names to describe the values more accurately.
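A small sketch of the naming convention, with illustrative names; the "log_" prefix signals that the stored value is log2 of the real one:

```c
#include <stdint.h>

/* Hedged sketch of the renamed fields; names are illustrative. */
struct mprq_config {
    uint32_t log_stride_size; /* log2 of single stride size in bytes */
    uint32_t log_stride_num;  /* log2 of the number of strides */
};

static inline uint32_t
stride_size(const struct mprq_config *cfg)
{
    return 1u << cfg->log_stride_size; /* e.g. log 11 -> 2048-byte strides */
}
```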
Fixes: ecb160456aed ("net/mlx5: add device parameter for MPRQ stride size")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
# 5dfa003d | 03-Nov-2021 | Michael Baum <michaelba@nvidia.com>
common/mlx5: fix post doorbell barrier
The rdma-core library can map the doorbell register in two ways, depending on the environment variable "MLX5_SHUT_UP_BF":
- as regular cached memory, when the variable is missing or set to zero. This type of mapping may cause significant doorbell register write latency and requires an explicit write memory barrier to mitigate this issue and prevent write combining.
- as non-cached memory, when the variable is present and set to a non-zero value. This type of mapping may impact performance under heavy load, but the explicit write memory barrier is not required, which may improve core performance.
The UAR creation function maps a doorbell in one of the above ways according to the system. At run time, it always adds an explicit memory barrier after writing to the doorbell. In cases where the doorbell was mapped as non-cached memory, the explicit memory barrier is unnecessary and may impair performance.
Commit [1] solved this problem for the Tx queue: at run time, it checks the mapping type and issues the memory barrier after writing to the Tx doorbell register only if it is needed. The mapping type is extracted directly from the uar_mmap_offset field in the queue properties.
This patch shares this code between the drivers and extends the above solution to each of them.
[1] commit 8409a28573d3 ("net/mlx5: control transmit doorbell register mapping")
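A hedged sketch of the run-time decision, assuming a db_nc flag derived from the mapping type; this is not the exact shared code:

```c
#include <stdint.h>
#include <rte_atomic.h> /* rte_wmb() */

/* Hedged sketch: the barrier is issued only for the regular cached
 * (write-combining) mapping. Flag and function shape are illustrative. */
static inline void
ring_doorbell(volatile uint64_t *db_reg, uint64_t value, int uar_mapped_nc)
{
    *db_reg = value;  /* post the doorbell */
    if (!uar_mapped_nc)
        rte_wmb();    /* needed only for the cached mapping */
}
```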
Fixes: f8c97babc9f4 ("compress/mlx5: add data-path functions")
Fixes: 8e196c08ab53 ("crypto/mlx5: support enqueue/dequeue operations")
Fixes: 4d4e245ad637 ("regex/mlx5: support enqueue")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
# b6e9c33c | 03-Nov-2021 | Michael Baum <michaelba@nvidia.com>
net/mlx5: remove duplicated reference of Tx doorbell
The Tx doorbell has a different virtual address per process. The secondary process takes the UAR physical page ID of the primary and mmaps it to its own virtual address. The primary doorbell references were saved in two shared memory locations: the TxQ structure and a dedicated doorbell array.
Remove the doorbell reference from the TxQ structure and take the UAR information from the primary doorbell array instead.
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
# febcac7b | 05-Nov-2021 | Bing Zhao <bingz@nvidia.com>
net/mlx5: support Rx queue delay drop
For Ethernet RQs, if all receive descriptors are exhausted, the packets being received are dropped. This behavior prevents slow or malicious software entities at the host from affecting the network. For hairpin cases, even though no software is involved in forwarding packets from the Rx to the Tx side, a hiccup in the hardware or back pressure from the Tx side may still exhaust the descriptors. In certain scenarios it may be preferable to configure the device to avoid such packet drops, assuming the posting of descriptors will resume shortly.
To support this, a new devarg "delay_drop" is introduced. By default, delay drop is enabled for hairpin Rx queues and disabled for standard Rx queues. The value is used as a bit mask:
- bit 0: enablement of standard Rx queues
- bit 1: enablement of hairpin Rx queues
This attribute is applied to all Rx queues of a device.
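A hedged sketch of interpreting this mask (macro names are illustrative, not the mlx5 definitions):

```c
#include <stdint.h>

/* Hedged sketch of the "delay_drop" bit mask described above. */
#define DELAY_DROP_STANDARD (1u << 0) /* bit 0: standard Rx queues */
#define DELAY_DROP_HAIRPIN  (1u << 1) /* bit 1: hairpin Rx queues */

static inline int
rxq_delay_drop_enabled(uint32_t delay_drop_mask, int is_hairpin)
{
    return !!(delay_drop_mask &
              (is_hairpin ? DELAY_DROP_HAIRPIN : DELAY_DROP_STANDARD));
}
/* e.g. the devarg value "delay_drop=3" enables it for both queue types. */
```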
The "rq_delay_drop" capability in the HCA_CAP is checked before creating any queue. If the hardware capabilities do not support this delay drop, all the Rx queues will still be created without this attribute, and the devarg setting will be ignored even if it is specified explicitly. A warning log is used to notify the application when this occurs.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
# 25ed2ebf | 04-Nov-2021 | Viacheslav Ovsiienko <viacheslavo@nvidia.com>
net/mlx5: support shared Rx queue port data path
When receiving a packet, the mlx5 PMD takes the mbuf port number from the RxQ data.
To support shared RxQ, save the port number into the RQ context as the user index. A received packet then resolves its port number from the CQE user index, which is derived from the RQ context.
The legacy Verbs API does not support setting the RQ user index, so in that case the port number is still read from the RxQ.
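A hedged sketch of the resolution rule, with stand-in types for the CQE and queue data:

```c
#include <stdint.h>

/* Hedged sketch; the structure is an illustrative stand-in for the CQE. */
struct cqe_view { uint32_t user_index; /* set from the RQ context */ };

static uint16_t
resolve_rx_port(const struct cqe_view *cqe, int rq_user_index_supported,
                uint16_t rxq_port)
{
    /* DevX path: port carried in the CQE; legacy Verbs path: RxQ port. */
    return rq_user_index_supported ? (uint16_t)cqe->user_index : rxq_port;
}
```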
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
# 09c25553 | 04-Nov-2021 | Xueming Li <xuemingl@nvidia.com>
net/mlx5: support shared Rx queue
This patch introduces the shared RxQ. All shared Rx queues with the same group and queue ID share the same rxq_ctrl. rxq_ctrl and rxq_data are shared; all queues from different member ports share the same WQ and CQ, essentially one Rx WQ, and mbufs are filled into this singleton WQ.
The shared rxq_data is set as the RxQ object into the device Rx queues of all member ports and is used for receiving packets. Polling the queue of any member port returns packets of any member; mbuf->port identifies the source port.
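A hedged usage sketch against the public ethdev API: polling one member port may return packets of every member, and mbuf->port distinguishes them:

```c
#include <stdint.h>
#include <rte_ethdev.h>
#include <rte_mbuf.h>

/* Hedged usage sketch of receiving from a shared RxQ. */
static void
poll_shared_rxq(uint16_t member_port, uint16_t queue_id)
{
    struct rte_mbuf *pkts[32];
    uint16_t n = rte_eth_rx_burst(member_port, queue_id, pkts, 32);

    for (uint16_t i = 0; i < n; i++) {
        uint16_t src = pkts[i]->port; /* actual source member port */
        (void)src; /* ... dispatch per source port ... */
        rte_pktmbuf_free(pkts[i]);
    }
}
```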
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
# 5cf0707f | 04-Nov-2021 | Xueming Li <xuemingl@nvidia.com>
net/mlx5: remove Rx queue data list from device
The Rx queue data list (priv->rxqs) can be replaced by the Rx queue list (priv->rxq_privs); remove it and replace its uses with a universal wrapper API.
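A hedged sketch of what such a wrapper could look like, with illustrative stub types:

```c
#include <stddef.h>
#include <stdint.h>

/* Hedged sketch: one accessor resolves rxq_data through the per-queue
 * private list, so no second parallel array is needed. */
struct mlx5_rxq_data { int placeholder; };
struct mlx5_rxq_priv { struct mlx5_rxq_data *rxq; };
struct port_priv { struct mlx5_rxq_priv **rxq_privs; uint16_t rxqs_n; };

static struct mlx5_rxq_data *
rxq_data_get(struct port_priv *priv, uint16_t idx)
{
    if (idx >= priv->rxqs_n || priv->rxq_privs[idx] == NULL)
        return NULL;
    return priv->rxq_privs[idx]->rxq;
}
```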
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
# 5ceb3a02 | 04-Nov-2021 | Xueming Li <xuemingl@nvidia.com>
net/mlx5: move Rx queue DevX resource
To support shared Rx queues, move the DevX RQ, which is a per-queue resource, to the Rx queue private data.
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
# 5db77fef | 04-Nov-2021 | Xueming Li <xuemingl@nvidia.com>
net/mlx5: remove port info from shareable Rx queue
To prepare for shared Rx queues, remove the port info from the shareable Rx queue control structure.
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
# 056c87d0 | 04-Nov-2021 | Xueming Li <xuemingl@nvidia.com>
common/mlx5: support receive memory pool
The hardware Receive Memory Pool (RMP) object holds the destination for incoming packets/messages that are routed to the RMP through RQs. The RMP enables sharing memory across multiple Receive Queues: multiple Receive Queues can be attached to the same RMP and consume memory from that shared pool. When using RMPs, completions are reported to the CQ pointed to by the RQ, and the user index set at RQ creation time is carried into the completion entry.
This patch enables RMP-based RQs; an RMP is created when mlx5_devx_rq.rmp is set.
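A hedged sketch of the creation-time switch, with stand-in types:

```c
#include <stdbool.h>
#include <stddef.h>

/* Hedged sketch: when the rmp member is set, the RQ is created attached to
 * the shared RMP rather than owning its own WQ. Types are illustrative. */
struct mlx5_devx_rmp { int ref_cnt; /* shared by all attached RQs */ };
struct mlx5_devx_rq { struct mlx5_devx_rmp *rmp; /* NULL: standalone RQ */ };

static bool
rq_is_shared(const struct mlx5_devx_rq *rq)
{
    return rq->rmp != NULL; /* RMP-based RQ, memory from the shared pool */
}
```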
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
# bc5bee02 | 02-Nov-2021 | Dmitry Kozlyuk <dkozlyuk@nvidia.com>
net/mlx5: create drop queue using DevX
Drop queue creation and destruction were not implemented for the DevX flow engine; the Verbs engine methods were used as a workaround. Implement these methods for DevX so that there is a valid queue ID that can be used regardless of queue configuration via the API.
Cc: stable@dpdk.org
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
# a89f6433 | 21-Oct-2021 | Rongwei Liu <rongweil@nvidia.com>
net/mlx5: set Tx queue affinity in round-robin
Previously, we set the txq affinity to 0 and let firmware perform round-robin when bonding. Firmware uses a global counter to assign txq affinity to different physical ports according to the remainder after division.
There are three disadvantages:
1. The global counter is shared between kernel and DPDK.
2. After restarting the PMD or port, the previous counter value is reused, so the new affinity is unpredictable.
3. There is no way to get what affinity firmware has set.
In this update, we create several TISs, up to the number of bonding ports, and bind each TIS to one PF port.
Each port starts to pick up a TIS using its own port index, so an upper-layer application can quickly calculate each txq's affinity without querying.
At the DPDK layer, when creating txqs with 2 bonding ports, the affinity is set like:
port 0: 1-->2-->1-->2
port 1: 2-->1-->2-->1
port 2: 1-->2-->1-->2
Note: this is only applicable to the DevX API, and the affinity is subject to HW hash.
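A hedged sketch reproducing the pattern above; the formula is inferred from the example, not taken from the mlx5 source:

```c
#include <stdint.h>

/* Hedged sketch: TIS indexes are 1-based and each port starts its
 * round-robin at its own index. */
static inline uint16_t
txq_tis_index(uint16_t port_id, uint16_t txq_idx, uint16_t bonding_ports)
{
    return (uint16_t)((port_id + txq_idx) % bonding_ports + 1);
}
/* With 2 bonding ports:
 * port 0, txq 0..3 -> 1, 2, 1, 2
 * port 1, txq 0..3 -> 2, 1, 2, 1 */
```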
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
# 686d05b6 | 19-Oct-2021 | Xueming Li <xuemingl@nvidia.com>
net/mlx5: enable DevX Tx queue creation
The Verbs API does not support InfiniBand device port numbers larger than 255 by design. To support more representors on a single InfiniBand device, the DevX API should be engaged.
When creating a Send Queue (SQ) object with the Verbs API, the PMD assigned the IB device port attribute and the kernel created the default miss flows in the FDB domain to redirect egress traffic from the queue being created to the appropriate representor peer (wire, HPF, VF or SF).
With the DevX API there is no IB-device port attribute (it is merely a kernel notion; DevX operates in PRM terms), and the PMD must create the default miss flows in the FDB explicitly. The PMD did not provide this, so using the DevX API for E-Switch configurations was disabled.
The default miss FDB flow matches the E-Switch manager vport (to make sure the source is some representor) and the SQn (Send Queue number, a device-internal queue index). The root flow table is managed by kernel/firmware and does not support the vport redirect action, so we have to split the default miss flow into two:
- a flow with the lowest priority in the root table that matches the E-Switch manager vport ID and jumps to group 1;
- a flow in group 1 that matches the E-Switch manager vport ID and SQn and forwards the packet to the peer vport.
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
# fe46b20c | 19-Oct-2021 | Michael Baum <michaelba@nvidia.com>
common/mlx5: share HCA capabilities handle
Add the HCA attributes structure as a field of the device config structure. It is queried in common probing, which also updates the timestamp format fields.
Each driver uses the HCA attributes from the common device config structure instead of querying them for itself.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
# e35ccf24 | 19-Oct-2021 | Michael Baum <michaelba@nvidia.com>
common/mlx5: share protection domain object
Create a shared Protection Domain in the common area and add it and its PDN as fields of the common device structure.
Use this Protection Domain in all drivers and remove the PD and PDN fields from their private structures.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
# ca1418ce | 19-Oct-2021 | Michael Baum <michaelba@nvidia.com>
common/mlx5: share device context object
Create a shared device context in the common area and add it as a field of the common device. Use this device context in all drivers and remove the ctx field from their private structures.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
# c150dff4 | 14-Jun-2021 | Viacheslav Ovsiienko <viacheslavo@nvidia.com>
net/mlx5: fix Rx queue timestamp format
The timestamp format was not configured correctly for the receive queues created via DevX calls. It caused non-UTC timestamps in CQEs for real-time configurations.
Fixes: d61381ad46d0 ("net/mlx5: support timestamp format")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
# 23233fd6 | 17-May-2021 | Bing Zhao <bingz@nvidia.com>
net/mlx5: fix loopback for Direct Verbs queue
In the past, all the queues and other hardware objects were created through the Verbs interface. Currently, most object creation is migrated to the DevX interface by default, including queues. Only when DV is disabled by a device arg, or when eswitch is enabled, are all or some of the objects created through the Verbs interface.
When using the DevX interface to create queues, the kernel driver behavior differs from the Verbs case: Tx loopback cannot work properly even if the Tx and Rx queues are configured with the loopback attribute. To support self loopback for Tx, a Verbs dummy queue pair needs to be created to trigger the kernel to enable the global loopback capability.
This is only required when a TIR is created for Rx and loopback is needed. Only a CQ and a QP are needed for this case; no WQ (RQ) needs to be created.
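A hedged sketch of the dummy queue pair trick using plain libibverbs calls; error handling is minimal and this is not the exact mlx5 code:

```c
#include <stddef.h>
#include <infiniband/verbs.h>

/* Hedged sketch: creating (and simply keeping) a minimal Verbs QP nudges
 * the kernel into enabling the global loopback capability. */
static struct ibv_qp *
create_dummy_loopback_qp(struct ibv_context *ctx, struct ibv_pd *pd)
{
    struct ibv_cq *cq = ibv_create_cq(ctx, 1, NULL, NULL, 0);

    if (cq == NULL)
        return NULL;
    struct ibv_qp_init_attr attr = {
        .send_cq = cq,
        .recv_cq = cq, /* one CQ serves both; no WQ (RQ) is ever used */
        .cap = { .max_send_wr = 1, .max_recv_wr = 1,
                 .max_send_sge = 1, .max_recv_sge = 1 },
        .qp_type = IBV_QPT_RAW_PACKET,
    };
    return ibv_create_qp(pd, &attr); /* never used to carry traffic */
}
```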
Bugzilla ID: 645
Fixes: 6deb19e1b2d2 ("net/mlx5: separate Rx queue object creations")
Cc: stable@dpdk.org
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
# 31625e62 | 28-Apr-2021 | Viacheslav Ovsiienko <viacheslavo@nvidia.com>
net/mlx5: fix Tx queue doorbell record field offset
If the Send Queue (the backing one for a PMD Tx queue) was created with the DevX API, the doorbell record offset for the producer index field was incorrect. If hardware missed the doorbell register write event, the wrong doorbell record content could cause queue malfunction. For Send Queues created with the Verbs API, the doorbell record offset was configured correctly.
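A hedged sketch of the assumed doorbell record layout (two 32-bit words, with the send counter in the second); the macro names are illustrative, not the exact mlx5 definitions:

```c
#include <stdint.h>

#define DBR_RCV_WORD 0 /* receive counter */
#define DBR_SND_WORD 1 /* send/producer counter: the offset the fix targets */

static inline void
sq_update_dbrec(volatile uint32_t *dbrec, uint32_t wqe_ci_big_endian)
{
    /* If this lands in the wrong word, hardware that misses the doorbell
     * register write is left reading a stale producer index. */
    dbrec[DBR_SND_WORD] = wqe_ci_big_endian;
}
```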
Fixes: 86d259cec852 ("net/mlx5: separate Tx queue object creations")
Cc: stable@dpdk.org
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
# 377b69fb | 12-Apr-2021 | Michael Baum <michaelba@nvidia.com>
net/mlx5: separate Tx function declarations to another file
This patch separates the Tx function declarations into a different header file, in preparation for removing their implementation from the source file and as an optional preparation for Tx cleanup.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>