# d46f3b52 | 27-Oct-2024 | Gregory Etelson <getelson@nvidia.com>

net/mlx5: fix error notifications in counter initialization
Use rte_flow_error structure to report failures during counter action initialization instead of integer values. This provides more detailed error information to upper layers.
Previously, the PMD was using integer values like -1 or errno to notify of initialization failures, which limited the error details that could be communicated.
Fixes: 4d368e1da3a4 ("net/mlx5: support flow counter action for HWS")
Cc: stable@dpdk.org

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>

# d9f28495 | 22-Oct-2024 | Dariusz Sosnowski <dsosnowski@nvidia.com>

net/mlx5: add dynamic unicast flow rule management
This patch extends the mlx5_traffic interface with a couple of functions:

- mlx5_traffic_mac_add() - Create a unicast DMAC flow rule, without recreating all control flow rules.
- mlx5_traffic_mac_remove() - Remove a unicast DMAC flow rule, without recreating all control flow rules.
- mlx5_traffic_mac_vlan_add() - Create a unicast DMAC with VLAN flow rule, without recreating all control flow rules.
- mlx5_traffic_mac_vlan_remove() - Remove a unicast DMAC with VLAN flow rule, without recreating all control flow rules.
These functions will be used in the follow up commit, which will modify the behavior of adding/removing MAC address and enabling/disabling VLAN filter in mlx5 PMD.
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# 86d09686 | 22-Oct-2024 | Dariusz Sosnowski <dsosnowski@nvidia.com>

net/mlx5: add legacy unicast flow rule registration
Whenever a unicast DMAC or unicast DMAC with VLAN ID control flow rule is created when working with Verbs or DV flow engine, add this flow rule to the control flow rule list, with information required for recognizing it.
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# 821a6a5c | 04-Jul-2024 | Bing Zhao <bingz@nvidia.com>

net/mlx5: add metadata split for compatibility
The method will not create any new flow rule implicitly during split stage, but only generate needed items, actions and attributes based on the detection.
All the rules will still be created in the flow_hw_list_create().
In the meanwhile, once the mark action is specified in the FDB rule, a new rule in the NIC Rx will be created to:

1. match the mark value on REG_C_x in FDB and set it into the Rx flow tag field.
2. copy the metadata in REG_C_x' into the REG_B.

If there is no mark, the default rule with only copying metadata will be hit if there is a Queue or RSS action in the NIC Rx rule.

Regarding the NIC Tx, only the metadata is relevant and it will be copied in NIC Tx from REG_A into some REG_C_x. The current HWS implementation already supports this in the default copy rule or the default SQ miss rule in the NIC Tx root table.
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>

# e0d947a1 | 04-Oct-2024 | Ferruh Yigit <ferruh.yigit@amd.com>

ethdev: convert string initialization
gcc 15 experimental [1], with the -Wextra flag, gives a warning on variable initialization as a string [2].

The warning has a point when the initialized variable is intended to be used as a string, since the assignment is missing the required null terminator in that case. But the warning is useless for our usecase.

In this patch only a few instances are updated to show the issue; there are many more instances to fix, if we prefer to go this way. The other option is to disable the warning, but it can be useful for actual string usecases, so I prefer to keep it.
Converted string initialization to array initialization.
[1] gcc (GCC) 15.0.0 20241003 (experimental)
../lib/ethdev/rte_flow.h:906:36: error: initializer-string for array of ‘unsigned char’ is too long [-Werror=unterminated-string-initialization]
  906 |   .hdr.dst_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
../lib/ethdev/rte_flow.h:907:36: error: initializer-string for array of ‘unsigned char’ is too long [-Werror=unterminated-string-initialization]
  907 |   .hdr.src_addr.addr_bytes = "\xff\xff\xff\xff\xff\xff",
../lib/ethdev/rte_flow.h:1009:25: error: initializer-string for array of ‘unsigned char’ is too long [-Werror=unterminated-string-initialization]
 1009 |   "\xff\xff\xff\xff\xff\xff\xff\xff"
../lib/ethdev/rte_flow.h:1012:25: error: initializer-string for array of ‘unsigned char’ is too long [-Werror=unterminated-string-initialization]
 1012 |   "\xff\xff\xff\xff\xff\xff\xff\xff"
../lib/ethdev/rte_flow.h:1135:20: error: initializer-string for array of ‘unsigned char’ is too long [-Werror=unterminated-string-initialization]
 1135 |   .hdr.vni = "\xff\xff\xff",
Signed-off-by: Ferruh Yigit <ferruh.yigit@amd.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>

# cf9a91c6 | 18-Jul-2024 | Dariusz Sosnowski <dsosnowski@nvidia.com>

net/mlx5: fix disabling E-Switch default flow rules
`fdb_def_rule_en` devarg controls whether mlx5 PMD creates default E-Switch flow rules for:
- Transferring traffic from wire, VFs and SFs to group 1 (default jump).
- Providing default behavior for application traffic (default SQ miss flow rules).
With these flow rules, applications effectively create transfer flow rules in group 1 and higher (application group is translated to one higher) allowing for faster insertion on all groups and providing ability to forward to VF, SF and wire on any group.
By default, these rules are created (`fdb_def_rule_en` == 1).
When these default flow rules are disabled (`fdb_def_rule_en` == 0) with the HW Steering flow engine (`dv_flow_en` == 2), only the creation of the default jump rules was disabled. The necessary template table and pattern/actions templates were still created, but they were never used. SQ miss flow rules were still created. This is a bug, because with `fdb_def_rule_en` == 0, the application should not expect any default E-Switch flow rules.
This patch fixes that by disabling all default E-Switch flow rules creation and disabling creating templates for these flow rules, when `fdb_def_rule_en` == 0. If an application needs to run with these flow rules disabled, and requires flow rules providing SQ miss flow rules functionality, then application must explicitly create similar flow rules.
Fixes: 1939eb6f660c ("net/mlx5: support flow port action with HWS")
Cc: stable@dpdk.org

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# e38776c3 | 09-Jun-2024 | Maayan Kashani <mkashani@nvidia.com>

net/mlx5: introduce HWS for non-template flow API
Implement the framework and needed building blocks for the non-template to HWS APIs.

Add validate, list_create and list_destroy to mlx5_flow_hw_drv_ops. Rename the old list_create/list_destroy functions to legacy_* and add a call from the verbs/dv ops to the legacy functions.
Updated rte_flow_hw as needed. Added rte_flow_nt2hws structure for non-template rule data.
Signed-off-by: Maayan Kashani <mkashani@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>

# e12a0166 | 14-May-2024 | Tyler Retzlaff <roretzla@linux.microsoft.com>

drivers: use stdatomic API
Replace the use of gcc builtin __atomic_xxx intrinsics with corresponding rte_atomic_xxx optional rte stdatomic API.
Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>

# 87e4384d | 21-Feb-2024 | Bing Zhao <bingz@nvidia.com>

net/mlx5: fix condition of LACP miss flow
The LACP traffic is only related to the bond interface. The default miss flow to redirect the LACP traffic with ethertype 0x8809 to the kernel driver should only be created on the bond device.
This commit will:

1. remove the incorrect assertion of the port role.
2. skip the resource allocation and flow rule creation on the representor port.
Fixes: 0f0ae73a3287 ("net/mlx5: add parameter for LACP packets control")
Fixes: 49dffadf4b0c ("net/mlx5: fix LACP redirection in Rx domain")
Cc: stable@dpdk.org

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>

# 49dffadf | 13-Nov-2023 | Bing Zhao <bingz@nvidia.com>

net/mlx5: fix LACP redirection in Rx domain
When the "lacp_by_user" is not set from the application in bond mode, the LACP traffic should be handled by the kernel driver by default.
This commit adds the missing support in the template API when "dv_flow_en=2". The behavior will be the same as that in the DV mode with "dv_flow_en=1". The LACP packets will be redirected to the kernel when starting the steering in the NIC Rx domain.
With this commit, the DEFAULT_MISS action usage is refactored a bit. In the HWS, one unique action can be created with supported bits set in the "flag" per port. The *ROOT_FDB and *HWS_FDB flag bits will only be set when the port is in switchdev mode and working as the E-Switch manager proxy port. The SF/VF and all other representors won't have the FDB flag bits when creating the DEFAULT_MISS action.
Fixes: 9fa7c1cddb85 ("net/mlx5: create control flow rules with HWS")
Cc: stable@dpdk.org

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>

# ca638c49 | 09-Nov-2023 | Dariusz Sosnowski <dsosnowski@nvidia.com>

net/mlx5: fix hairpin queue states
This patch fixes the expected SQ and RQ states used in MODIFY_SQ and MODIFY_RQ during unbinding of the hairpin queues. When unbinding the queue objects, they are in RDY state and are transitioning to RST state, instead of going from RST to RST state.
Also, this patch fixes the constants used for RQ states. Instead of MLX5_SQC_STATE_*, now MLX5_RQC_STATE_* are used.
Fixes: 6a338ad4f7fe ("net/mlx5: add hairpin binding function")
Fixes: 37cd4501e873 ("net/mlx5: support two ports hairpin mode")
Cc: stable@dpdk.org

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# ab2439f8 | 09-Nov-2023 | Dariusz Sosnowski <dsosnowski@nvidia.com>

net/mlx5: fix hairpin queue unbind
Let's take an application with the following configuration:
- It uses 2 ports.
- Each port has 3 Rx queues and 3 Tx queues.
- On each port, Rx queues have the following purposes:
  - Rx queue 0 - SW queue,
  - Rx queue 1 - hairpin queue, bound to Tx queue on the same port,
  - Rx queue 2 - hairpin queue, bound to Tx queue on another port.
- On each port, Tx queues have the following purposes:
  - Tx queue 0 - SW queue,
  - Tx queue 1 - hairpin queue, bound to Rx queue on the same port,
  - Tx queue 2 - hairpin queue, bound to Rx queue on another port.
- Application configured all of the hairpin queues for manual binding.
After ports are configured and queues are set up, if the application does the following API call sequence:
1. rte_eth_dev_start(port_id=0)
2. rte_eth_hairpin_bind(tx_port=0, rx_port=0)
3. rte_eth_hairpin_bind(tx_port=0, rx_port=1)
mlx5 PMD fails to modify SQ and logs this error:
mlx5_common: mlx5_devx_cmds.c:2079: mlx5_devx_cmd_modify_sq(): Failed to modify SQ using DevX
This error was caused by an incorrect unbind operation taken during error handling inside call (3).
(3) fails, because port 1 (Rx side of the hairpin) was not started. As a result of this failure, PMD goes into error handling, where all previously bound hairpin queues are unbound. This is incorrect, since this error handling procedure in rte_eth_hairpin_bind() implementation assumes that all hairpin queues are bound to the same rx_port, which is not the case. The following sequence of function calls appears:
- rte_eth_hairpin_queue_peer_unbind(rx_port=**1**, rx_queue=1, 0),
- mlx5_hairpin_queue_peer_unbind(dev=**port 0**, tx_queue=1, 1).
Which violates the hairpin queue destroy flow, by unbinding Tx queue 1 on port 0, before unbinding Rx queue 1 on port 1.
This patch fixes that behavior, by filtering Tx queues on which error handling is done to only affect:
- hairpin queues (it also reduces unnecessary debug log messages),
- hairpin queues connected to the rx_port which is currently processed.
Fixes: 37cd4501e873 ("net/mlx5: support two ports hairpin mode")
Cc: stable@dpdk.org

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# c93943c5 | 09-Nov-2023 | Dariusz Sosnowski <dsosnowski@nvidia.com>

net/mlx5: fix use after free on Rx queue start
If an Rx queue is not started yet, then an mlx5_rxq_obj struct used for storing HW queue objects will be allocated and added to the list held in the port's private data structure. After that allocation, Rx queue HW object configuration is done. If that configuration failed, then the mlx5_rxq_obj struct was freed, but not removed from the list. This caused a use-after-free bug during error handling in mlx5_rxq_start(), where the deallocated struct was accessed during list cleanup.
This patch fixes that by inserting mlx5_rxq_obj struct to the list only after HW queue object configuration succeeded.
Fixes: 09c2555303be ("net/mlx5: support shared Rx queue")
Cc: stable@dpdk.org

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# f37c184a | 09-Nov-2023 | Suanming Mou <suanmingm@nvidia.com>

net/mlx5: fix destroying external representor flow
The external representor matched SQ flows are managed by an external SQ, so PMD traffic enable/disable should not touch these flows.
This commit adds an extra external list for the external representor matched SQ flows.
Fixes: 26e1eaf2dac4 ("net/mlx5: support device control for E-Switch default rule")
Cc: stable@dpdk.org

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>

# fca8cba4 | 21-Jun-2023 | David Marchand <david.marchand@redhat.com>

ethdev: advertise flow restore in mbuf
As reported by Ilya [1], unconditionally calling rte_flow_get_restore_info() impacts an application performance for drivers that do not provide this ops. It could also impact processing of packets that require no call to rte_flow_get_restore_info() at all.
Register a dynamic mbuf flag when an application negotiates tunnel metadata delivery (calling rte_eth_rx_metadata_negotiate() with RTE_ETH_RX_METADATA_TUNNEL_ID).
Drivers then advertise that metadata can be extracted by setting this dynamic flag in each mbuf.
The application then calls rte_flow_get_restore_info() only when required.
Link: http://inbox.dpdk.org/dev/5248c2ca-f2a6-3fb0-38b8-7f659bfa40de@ovn.org/
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Tested-by: Ali Alnubani <alialnu@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>

# 9284987a | 07-Mar-2023 | Bing Zhao <bingz@nvidia.com>

net/mlx5: fix hairpin Tx queue reference count
When calling the hairpin unbind interface, all the hairpin Tx queues of the port will be unbound from the peer Rx queues. If one of the Tx queues is working in the auto bind mode, the interface will return directly.
Only when the Tx and peer Rx ports are the same, the auto bind mode is supported. In this condition branch, the Tx queue release is missed and the reference count is not decreased. Then in the port stop stage, the hardware resources of this Tx queue won't be freed. There would be some assertion or failure when starting the port again.
With this commit, the reference count will be operated correctly.
Fixes: 37cd4501e873 ("net/mlx5: support two ports hairpin mode")
Cc: stable@dpdk.org

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# 8275d5fc | 25-Oct-2022 | Thomas Monjalon <thomas@monjalon.net>

ethdev: use Ethernet protocol struct for flow matching
As announced in the deprecation notice, flow item structures should re-use the protocol header definitions from the directory lib/net/. The Ethernet headers (including VLAN) structures are used instead of the redundant fields in the flow items.
The remaining protocols to clean up are listed for future work in the deprecation list. Some protocols are not even defined in the directory net yet.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
Reviewed-by: Niklas Söderlund <niklas.soderlund@corigine.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>

# 8e82ebe2 | 14-Nov-2022 | Dariusz Sosnowski <dsosnowski@nvidia.com>

net/mlx5: document E-Switch limitations with HWS
This patch adds the following limitations to the mlx5 PMD guide:
- With HW Steering and E-Switch enabled, transfer proxy port must be started before any port representor.
- With HW Steering and E-Switch enabled, all representors must be stopped before transfer proxy port is stopped.
Documentation of mlx5 PMD's implementations of rte_eth_dev_start() and rte_eth_dev_stop() is updated accordingly:
- rte_eth_dev_start() returns (-EAGAIN) when transfer proxy port cannot be started.
- rte_eth_dev_stop() returns (-EBUSY) when port representor cannot be stopped.
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# f359b715 | 14-Nov-2022 | Dariusz Sosnowski <dsosnowski@nvidia.com>

net/mlx5: fix log level of transfer proxy stop failure
This patch increases log level for error reporting when stopping the transfer proxy port failed. Stopping can fail with EBUSY when related representor ports are still running.
Fixes: 483181f7b6dd ("net/mlx5: support device control of representor matching")
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# f64a7946 | 10-Nov-2022 | Rongwei Liu <rongweil@nvidia.com>

net/mlx5: fix marks on Rx packets
If HW Steering is enabled, Rx queues were configured to receive MARKs when a table with MARK actions was created. After stopping the port, the Rx queue configuration is released; when starting the port again, the mark flag was not updated in the Rx queue configuration.
This patch introduces a reference count on the MARK action and it increases/decreases per template_table create/destroy.
When the port is stopped, Rx queue configuration is not cleared if reference count is not zero.
Fixes: 3a2f674b6aa8 ("net/mlx5: add queue and RSS HW steering action")
Cc: stable@dpdk.org

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# b9f1f4c2 | 09-Nov-2022 | Gregory Etelson <getelson@nvidia.com>

net/mlx5: fix port initialization with small LRO
If the application-provided maximal LRO size was less than the expected PMD minimum, the PMD either crashed with an assert, if asserts were enabled, or proceeded with port initialization to set the port-private maximal LRO size below the supported minimum.
The patch terminates port start if LRO size does not match PMD requirements and TCP LRO offload was requested at least for one Rx queue.
Fixes: 50c00baff763 ("net/mlx5: limit LRO size to maximum Rx packet")
Cc: stable@dpdk.org

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# 9fa7c1cd | 20-Oct-2022 | Dariusz Sosnowski <dsosnowski@nvidia.com>

net/mlx5: create control flow rules with HWS
This patch adds the creation of control flow rules required to receive default traffic (based on port configuration) with HWS.
Control flow rules are created on port start and destroyed on port stop. Handling of destroying these rules was already implemented before that patch.
Control flow rules are created if and only if flow isolation mode is disabled and the creation process goes as follows:
- Port configuration is collected into a set of flags. Each flag corresponds to a certain Ethernet pattern type, defined by the mlx5_flow_ctrl_rx_eth_pattern_type enumeration. There is a separate flag for VLAN filtering.
- For each possible Ethernet pattern type and:
  - For each possible RSS action configuration:
    - If configuration flags do not match this combination, it is omitted.
    - A template table is created using this combination of pattern and actions template (templates are fetched from the hw_ctrl_rx struct stored in the port's private data).
    - Flow rules are created in this table.
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# 483181f7 | 20-Oct-2022 | Dariusz Sosnowski <dsosnowski@nvidia.com>

net/mlx5: support device control of representor matching
In some E-Switch use cases, applications want to receive all traffic on a single port. Since currently, flow API does not provide a way to match traffic forwarded to any port representor, this patch adds support for controlling representor matching on ingress flow rules.
Representor matching is controlled through a new device argument repr_matching_en.
- If representor matching is enabled (default setting), then each ingress pattern template has an implicit REPRESENTED_PORT item added. Flow rules based on this pattern template will match the vport associated with the port on which the rule is created.
- If representor matching is disabled, then there will be no implicit item added. As a result, ingress flow rules will match traffic coming to any port, not only the port on which the flow rule is created.
Representor matching is enabled by default, to provide an expected default behavior.
This patch enables egress flow rules on representors when E-Switch is enabled in the following configurations:
- repr_matching_en=1 and dv_xmeta_en=4
- repr_matching_en=1 and dv_xmeta_en=0
- repr_matching_en=0 and dv_xmeta_en=0
When representor matching is enabled, the following logic is implemented:
1. Creating an egress template table in group 0 for each port. These tables will hold default flow rules defined as follows:
   pattern SQ
   actions MODIFY_FIELD (set available bits in REG_C_0 to vport_meta_tag)
           MODIFY_FIELD (copy REG_A to REG_C_1, only when dv_xmeta_en == 4)
           JUMP (group 1)
2. Egress pattern templates created by an application have an implicit MLX5_RTE_FLOW_ITEM_TYPE_TAG item prepended to the pattern, which matches available bits of REG_C_0.
3. Egress flow rules created by an application have an implicit MLX5_RTE_FLOW_ITEM_TYPE_TAG item prepended to the pattern, which matches vport_meta_tag placed in available bits of REG_C_0.
4. Egress template tables created by an application, which are in group n, are placed in group n + 1.
5. Items and actions related to META are operating on REG_A when dv_xmeta_en == 0 or REG_C_1 when dv_xmeta_en == 4.
When representor matching is disabled and extended metadata is disabled, no changes to the current logic are required.
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# 26e1eaf2 | 20-Oct-2022 | Dariusz Sosnowski <dsosnowski@nvidia.com>

net/mlx5: support device control for E-Switch default rule
This patch adds support for fdb_def_rule_en device argument to HW Steering, which controls:
- the creation of the default FDB jump flow rule.
- the ability of the user to create transfer flow rules in the root table.
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# f1fecffa | 20-Oct-2022 | Dariusz Sosnowski <dsosnowski@nvidia.com>

net/mlx5: support Direct Rules action template API
This patch adapts mlx5 PMD to changes in mlx5dr API regarding the action templates. It changes the following:
1. Actions template creation:
- Flow actions types are translated to mlx5dr action types in order to create the mlx5dr_action_template object.
- An offset is assigned to each flow action. This offset is used to predetermine the action's location in the rule_acts array passed on rule creation.
2. Template table creation:
- Fixed actions are created and put in the rule_acts cache using predetermined offsets.
- mlx5dr matcher is parametrized by action templates bound to the template table.
- mlx5dr matcher is configured to optimize rule creation based on passed rule indices.
3. Flow rule creation:
- mlx5dr rule is parametrized by the action template on which the rule's actions are based.
- Rule index hint is provided to mlx5dr.
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>