#
5d2301a2 | 27-Feb-2024 | Dariusz Sosnowski <dsosnowski@nvidia.com>
net/mlx5: fix VLAN handling in meter split
On an attempt to create a flow rule with:
- matching on REPRESENTED_PORT,
- matching on outer VLAN tag,
- matching on inner VLAN tag,
- METER action,
the flow splitting mechanism used for handling metering flows caused memory corruption. It was assumed that the suffix flow would have a single VLAN item (used for translation of OF_PUSH_VLAN/OF_SET_VLAN_VID actions); however, during flow_meter_split_prep() 2 VLAN items were parsed. This caused a buffer overflow in the allocated suffix flow item buffer.
This patch fixes the overflow by accounting for the number of VLAN items in the flow rule pattern when allocating items for the suffix flow.
Fixes: 50f576d657d7 ("net/mlx5: fix VLAN actions in meter")
Cc: stable@dpdk.org
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
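A minimal illustration of the sizing fix described above (a sketch, not the actual PMD code): count the VLAN items in the original pattern before allocating the suffix flow item buffer.
    #include <stddef.h>
    #include <rte_flow.h>

    /* Count VLAN items in a flow pattern so the suffix flow buffer can
     * be sized for all of them instead of assuming a single one. */
    static size_t
    count_vlan_items(const struct rte_flow_item *items)
    {
        size_t nb_vlan = 0;

        for (; items->type != RTE_FLOW_ITEM_TYPE_END; items++)
            if (items->type == RTE_FLOW_ITEM_TYPE_VLAN)
                nb_vlan++;
        return nb_vlan;
    }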
#
c156799c | 26-Feb-2024 | Michael Baum <michaelba@nvidia.com>
net/mlx5: support inner fields modification
This patch adds support for copying from inner fields using "level" 2.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
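A hedged example of the new capability: a modify_field action copying from the inner (level 2) IPv4 DSCP into packet metadata. Member names follow the public rte_flow API; exact initializers may differ between DPDK versions.
    #include <rte_flow.h>

    /* Copy 6 DSCP bits from the inner IPv4 header (level 2) into META. */
    static const struct rte_flow_action_modify_field copy_inner_dscp = {
        .operation = RTE_FLOW_MODIFY_SET,
        .dst = {
            .field = RTE_FLOW_FIELD_META,
        },
        .src = {
            .field = RTE_FLOW_FIELD_IPV4_DSCP,
            .level = 2, /* level 2 selects the inner header */
        },
        .width = 6,
    };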
#
74f98c15 | 23-Feb-2024 | Rongwei Liu <rongweil@nvidia.com>
net/mlx5: fix modify flex item
In the rte_flow_field_data structure, the flex item handle is part of a union with other members such as level/tag_index.
If the user wants to modify the flex item as source or destination, there should not be any checking against zero, because the union then holds the flex item handle rather than a level or tag index.
Fixes: c23626f27b09 ("ethdev: add MPLS header modification")
Cc: stable@dpdk.org
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
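An illustrative shape of the union in question (simplified, not the exact upstream declaration): the flex item handle overlays level/tag_index, so a non-NULL handle must not be rejected by a zero check on those members.
    #include <stdint.h>
    #include <rte_flow.h>

    /* Simplified sketch of the overlapping members described above. */
    struct field_data_sketch {
        enum rte_flow_field_id field;
        union {
            struct {
                uint32_t level;    /* encap level / inner selector */
                uint8_t tag_index; /* TAG array index */
            };
            struct rte_flow_item_flex_handle *flex_handle;
        };
    };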
#
f5177bdc | 25-Jan-2024 | Michael Baum <michaelba@nvidia.com>
net/mlx5: add GENEVE TLV options parser API
Add a new private API to create/destroy parser for GENEVE TLV options.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
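A hedged usage sketch of the new private API; the function names are assumed from this series' rte_pmd_mlx5.h, and the TLV option descriptors themselves are left to the caller.
    #include <rte_errno.h>
    #include <rte_pmd_mlx5.h>

    /* Create a parser for the given GENEVE TLV options on a port, then
     * destroy it again; 'opts'/'nb_opts' describe the options to parse. */
    static int
    geneve_parser_roundtrip(uint16_t port_id,
                            const struct rte_pmd_mlx5_geneve_tlv *opts,
                            uint8_t nb_opts)
    {
        void *parser;

        parser = rte_pmd_mlx5_create_geneve_tlv_parser(port_id, opts, nb_opts);
        if (parser == NULL)
            return -rte_errno;
        return rte_pmd_mlx5_destroy_geneve_tlv_parser(parser);
    }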
#
279aa34f | 12-Jan-2024 | Gavin Li <gavinl@nvidia.com>
net/mlx5: support VXLAN-GPE reserved fields matching
This adds matching on the reserved fields of the VXLAN-GPE header (the 16 bits before Next Protocol and the last 8 bits).
To support all the header fields, tunnel_header_0_1 should be supported by the FW and misc5_cap must be set.
If one of the reserved fields is matched on, misc5 is used for matching. Otherwise, misc3 keeps being used.
Signed-off-by: Gavin Li <gavinl@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
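A hedged sketch of a matching spec for the newly supported reserved fields, using the legacy member names of struct rte_flow_item_vxlan_gpe:
    #include <rte_flow.h>

    /* Match the 16 reserved bits before Next Protocol and the last
     * reserved byte of the VXLAN-GPE header. */
    static const struct rte_flow_item_vxlan_gpe gpe_spec = {
        .rsvd0 = { 0x01, 0x02 },
        .rsvd1 = 0x03,
    };
    static const struct rte_flow_item_vxlan_gpe gpe_mask = {
        .rsvd0 = { 0xff, 0xff },
        .rsvd1 = 0xff,
    };
    static const struct rte_flow_item gpe_item = {
        .type = RTE_FLOW_ITEM_TYPE_VXLAN_GPE,
        .spec = &gpe_spec,
        .mask = &gpe_mask,
    };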
#
b4aec7dd | 12-Jan-2024 | Gavin Li <gavinl@nvidia.com>
net/mlx5: support VXLAN-GPE flags matching
This commit adds support for matching on the flags field of the VXLAN-GPE header (the first 8 bits).
Signed-off-by: Gavin Li <gavinl@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
#
d1c84dc0 | 12-Jan-2024 | Gavin Li <gavinl@nvidia.com>
net/mlx5: discover IPv6 traffic class support
Previously, IPv6 traffic class used the same IDs as IPv4 DSCP and ECN in rdma-core and firmware. New FW supports a new IPv6 traffic class ID, which is recommended, though the old way still works.
FW exposes a new capability bit to indicate support for the new ID, while rdma-core has no such mechanism.
To handle the backward compatibility issue across combinations of rdma-core and FW of different versions, a new function and a new flag were introduced to check whether the new IPv6 traffic class ID is supported by rdma-core.
Signed-off-by: Gavin Li <gavinl@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
#
bb328f44 | 13-Feb-2024 | Ori Kam <orika@nvidia.com>
net/mlx5: support encapsulation hash calculation
This commit adds support for encap hash calculation.
Signed-off-by: Ori Kam <orika@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
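A hedged call sketch, assuming the rte_flow_calc_encap_hash() API and the source-port destination selector that this PMD support backs; the pattern is supplied by the caller.
    #include <rte_flow.h>

    /* Ask the PMD which hash it would place in the outer UDP source port
     * of the encapsulation header for the given inner packet pattern. */
    static int
    encap_hash_for_sport(uint16_t port_id, const struct rte_flow_item pattern[],
                         uint8_t hash[2], struct rte_flow_error *error)
    {
        return rte_flow_calc_encap_hash(port_id, pattern,
                                        RTE_FLOW_ENCAP_HASH_FIELD_SRC_PORT,
                                        2, hash, error);
    }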
#
537bfdda | 06-Feb-2024 | Dariusz Sosnowski <dsosnowski@nvidia.com>
ethdev: rework fast path async flow API
This patch reworks the async flow API functions called in the data path, to reduce the overhead of flow operations at the library level. The main source of the overhead was the indirection and checks done while the ethdev library was fetching rte_flow_ops from a given driver.
This patch introduces the rte_flow_fp_ops struct which holds callbacks to the driver's implementation of fast path async flow API functions. Each driver implementing these functions must populate the flow_fp_ops field inside the rte_eth_dev structure with a reference to its own implementation. By default, the ethdev library provides dummy callbacks with implementations returning ENOSYS. This design provides a few invariants:
- the rte_flow_fp_ops struct for a given port is always available;
- each callback is either:
  - the default provided by the library, or
  - set up by the driver.
As a result, no checks for availability of the implementation are needed at the library level in the data path (a conceptual sketch is shown below). Any library-level validation checks in the async flow API are compiled in if and only if the RTE_FLOW_DEBUG macro is defined.
This design was based on changes in the ethdev library introduced in [1].
These changes apply only to the following API functions:
- rte_flow_async_create()
- rte_flow_async_create_by_index()
- rte_flow_async_actions_update()
- rte_flow_async_destroy()
- rte_flow_push()
- rte_flow_pull()
- rte_flow_async_action_handle_create()
- rte_flow_async_action_handle_destroy()
- rte_flow_async_action_handle_update()
- rte_flow_async_action_handle_query()
- rte_flow_async_action_handle_query_update()
- rte_flow_async_action_list_handle_create()
- rte_flow_async_action_list_handle_destroy()
- rte_flow_async_action_list_handle_query_update()
This patch also adjusts the mlx5 PMD to the introduced flow API changes.
[1] commit c87d435a4d79 ("ethdev: copy fast-path API into separate structure")
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
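A conceptual sketch of the dispatch scheme described above (not the actual rte_flow_fp_ops layout): every port always references a valid ops table, the library installs ENOSYS stubs by default, and drivers overwrite only the callbacks they implement, so the fast path needs no NULL checks.
    #include <errno.h>

    struct fp_ops_sketch {
        int (*async_create)(void *queue, const void *rule, void *user_data);
        int (*push)(void *queue);
    };

    /* Library-provided defaults: always callable, just report ENOSYS. */
    static int dummy_async_create(void *queue, const void *rule, void *user_data)
    {
        (void)queue; (void)rule; (void)user_data;
        return -ENOSYS;
    }

    static int dummy_push(void *queue)
    {
        (void)queue;
        return -ENOSYS;
    }

    /* Referenced by every port until the driver installs its own table. */
    static const struct fp_ops_sketch default_fp_ops = {
        .async_create = dummy_async_create,
        .push = dummy_push,
    };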
#
77edfda9 | 06-Feb-2024 | Suanming Mou <suanmingm@nvidia.com>
ethdev: rename flow field data structure
The current rte_flow_action_modify_data struct describes the packet field perfectly and is used only in actions.
It is planned to be used for items as well. This commit renames it to "rte_flow_field_data", making it suitable for use by items too.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
#
86647d46 | 31-Oct-2023 | Thomas Monjalon <thomas@monjalon.net>
net/mlx5: add global API prefix to public constants
The file rte_pmd_mlx5.h is a public API, so its components must be prefixed with RTE_PMD_.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
#
2ece3b71 | 26-Oct-2023 | Bing Zhao <bingz@nvidia.com>
net/mlx5: fix flow workspace double free in Windows
The thread-specific variable workspace, indicated by "key_workspace", should be freed explicitly when closing a device. For example, in Linux, when exiting an application, the thread will not exit explicitly and the thread resources will not be destructed.
The commit that solved this introduced a global list to manage the workspace resources as a garbage collector. It is also executed in Windows, but there the workspaces have already been freed in the function mlx5_flow_os_release_workspace().
With this commit, the garbage collector is only executed in Linux. The workspace resource management in Windows remains the same, with some stub functions where needed.
Fixes: dc7c5e0aa905 ("net/mlx5: fix flow workspace destruction")
Cc: stable@dpdk.org
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
#
6c991cd9 | 29-Oct-2023 | Ori Kam <orika@nvidia.com>
net/mlx5: calculate flow table hash
This commit adds support for the calculate-hash function in the mlx5 PMD.
Signed-off-by: Ori Kam <orika@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
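A hedged call sketch, assuming the rte_flow_calc_table_hash() prototype introduced alongside this PMD support; the table, pattern and template index are caller-supplied placeholders.
    #include <rte_flow.h>

    /* Compute the hash a template table would use for 'pattern' as seen
     * by its first pattern template (index 0). */
    static int
    table_hash_of(uint16_t port_id, struct rte_flow_template_table *table,
                  const struct rte_flow_item pattern[], uint32_t *hash,
                  struct rte_flow_error *error)
    {
        return rte_flow_calc_table_hash(port_id, table, pattern, 0, hash, error);
    }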
#
5e9f9a28 | 29-Oct-2023 | Gregory Etelson <getelson@nvidia.com>
net/mlx5: merge C registers aliases
Merge `mtr_color_reg` and `mlx5_flow_hw_aso_tag` into `aso_reg`.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
#
04e740e6 | 29-Oct-2023 | Gregory Etelson <getelson@nvidia.com>
net/mlx5: separate registers usage per port
The current implementation stored the REG_C registers available for HWS tags in a PMD global array. As a result, the PMD could not work properly with different port types that allocate REG_C registers differently.
The patch stores the registers available to a port in the port's shared context. Register values will be assigned according to the port capabilities.
A new function, `flow_hw_get_reg_id_from_ctx()`, matches a REG_C register to the input DR5 context.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
#
8ce638ef | 26-Sep-2023 | Tomer Shmilovich <tshmilovich@nvidia.com>
net/mlx5: support group set miss actions
Add implementation for rte_flow_group_set_miss_actions() API.
Signed-off-by: Tomer Shmilovich <tshmilovich@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
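A hedged usage sketch of the API named above: packets that miss all rules of transfer group 1 jump to group 2 (attribute and action values are illustrative only).
    #include <rte_flow.h>

    static int
    set_group_miss(uint16_t port_id, struct rte_flow_error *error)
    {
        const struct rte_flow_group_attr attr = { .transfer = 1 };
        const struct rte_flow_action_jump jump = { .group = 2 };
        const struct rte_flow_action actions[] = {
            { .type = RTE_FLOW_ACTION_TYPE_JUMP, .conf = &jump },
            { .type = RTE_FLOW_ACTION_TYPE_END },
        };

        /* Miss actions for group 1 in the transfer (E-Switch) domain. */
        return rte_flow_group_set_miss_actions(port_id, 1, &attr, actions, error);
    }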
#
5e26c99f | 29-Oct-2023 | Rongwei Liu <rongweil@nvidia.com>
net/mlx5: support indirect flow encap/decap
Support the raw_encap/decap combinations in the indirect action list, translating to 4 types of underlying tunnel operations:
1. Layer 2 encapsulation, like VXLAN.
2. Layer 2 decapsulation, like VXLAN.
3. Layer 3 encapsulation, like GRE.
4. Layer 3 decapsulation, like GRE.
Each indirect action list has a unique handle ID and stands for a different tunnel operation. The operation is shared globally with fixed patterns. This means there is no configuration associated with each handle ID, and the conf pointer should always be NULL, whether in the action template or in flow rules.
If the handle ID mask in the action template is NULL, each flow rule can take its own indirect handle; otherwise, the ID in the action template is used for all rules. The handle ID used in the flow rules must be of the same type as the one in the action template.
Testpmd CLI example:
flow indirect_action 0 create action_id 10 transfer list actions raw_decap index 1 / raw_encap index 2 / end
flow pattern_template 0 create transfer pattern_template_id 1 template eth / ipv4 / udp / end
flow actions_template 0 create transfer actions_template_id 1 template indirect_list handle 10 / jump / end mask indirect_list / jump / end
flow template_table 0 create table_id 1 group 1 priority 0 transfer rules_number 64 pattern_template 1 actions_template 1
flow queue 0 create 0 template_table 1 pattern_template 0 actions_template 0 postpone no pattern eth / ipv4 / udp / end actions indirect_list handle 11 / jump group 10 / end
Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
#
e26f50ad | 26-Oct-2023 | Gregory Etelson <getelson@nvidia.com>
net/mlx5: support indirect list meter mark action
Support indirect list METER_MARK action.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
#
3564e928 | 26-Oct-2023 | Gregory Etelson <getelson@nvidia.com>
net/mlx5: support HWS flow mirror action
HWS mirror clones the original packet to one or two destinations and proceeds with the original packet path.
The mirror has no dedicated RTE flow action type. A mirror object is referenced by the INDIRECT_LIST action. An INDIRECT_LIST for a mirror is built from the actions list:
SAMPLE [/ SAMPLE] / <Orig. packet destination> / END
The mirror SAMPLE action defines a packet clone. It specifies the clone destination and an optional clone reformat action. The destination action for both the clone and the original packet depends on the HCA domain:
- for NIC RX, the destination is either RSS or QUEUE
- for FDB, the destination is PORT
HWS mirror was implemented with the INDIRECT_LIST flow action.
MLX5 PMD defines a general `struct mlx5_indirect_list` type for all INDIRECT_LIST handler objects:
    struct mlx5_indirect_list {
        enum mlx5_indirect_list_type type;
        LIST_ENTRY(mlx5_indirect_list) chain;
        char data[];
    };
A specific INDIRECT_LIST type must overload `mlx5_indirect_list::data` and provide a unique `type` value. The PMD returns a pointer to the `mlx5_indirect_list` object.
The existing non-masked actions template API cannot identify flow actions in an INDIRECT_LIST handler because an INDIRECT_LIST handler can represent several flow actions.
For example:
A: SAMPLE / JUMP
B: SAMPLE / SAMPLE / RSS
Actions template command
template indirect_list / end mask indirect_list 0 / end
does not provide any information to differentiate between flow actions in A and B.
The MLX5 PMD requires an INDIRECT_LIST configuration parameter in the template section:
Non-masked INDIRECT_LIST API:
=============================
template indirect_list X / end mask indirect_list 0 / end
The PMD identifies the type of handler X and will use the same type in template creation. Actual parameters for actions in the list will be extracted from the flow configuration.
Masked INDIRECT_LIST API:
=========================
template indirect_list X / end mask indirect_list -1UL / end
The PMD creates the action template from the action types and configurations referenced by X.
An INDIRECT_LIST action without configuration is invalid and will be rejected by the PMD.
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
#
c40023ae | 11-Oct-2023 | Jiawei Wang <jiaweiw@nvidia.com>
net/mlx5: fix decap action checking in sample flow
This patch uses a temporary variable to check the current action type, to avoid overlapping the sample action that follows the decap.
Fixes: 7356aec64c48 ("net/mlx5: fix mirror flow split with L3 encapsulation")
Cc: stable@dpdk.org
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
#
6f7d6622 | 08-Oct-2023 | Haifei Luo <haifeil@nvidia.com>
net/mlx5: support NSH flow matching
1. Add validation for item NSH. It will fail if the HCA capability for NSH is false.
2. Add item_flags for NSH.
3. For VXLAN-GPE, if the next header is NSH, set next_protocol to NSH.
Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
#
93722589 | 08-Oct-2023 | Haifei Luo <haifeil@nvidia.com>
net/mlx5: enhance validation of item VXLAN-GPE
Enhance the validation so that configuring VXLAN-GPE's next protocol as NSH is supported:
1. The spec's protocol can have a value and the nic_mask's protocol is 0xff.
Signed-off-by: Haifei Luo <haifeil@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
#
3d4e27fd | 25-Aug-2023 | David Marchand <david.marchand@redhat.com>
use abstracted bit count functions
Now that DPDK provides such bit count functions, make use of them.
This patch was prepared with a "brutal" commandline:
$ old=__builtin_clzll; new=rte_clz64; git grep -lw $old :^lib/eal/include/rte_bitops.h | xargs sed -i -e "s#\<$old\>#$new#g"
$ old=__builtin_clz; new=rte_clz32; git grep -lw $old :^lib/eal/include/rte_bitops.h | xargs sed -i -e "s#\<$old\>#$new#g"
$ old=__builtin_ctzll; new=rte_ctz64; git grep -lw $old :^lib/eal/include/rte_bitops.h | xargs sed -i -e "s#\<$old\>#$new#g"
$ old=__builtin_ctz; new=rte_ctz32; git grep -lw $old :^lib/eal/include/rte_bitops.h | xargs sed -i -e "s#\<$old\>#$new#g"
$ old=__builtin_popcountll; new=rte_popcount64; git grep -lw $old :^lib/eal/include/rte_bitops.h | xargs sed -i -e "s#\<$old\>#$new#g"
$ old=__builtin_popcount; new=rte_popcount32; git grep -lw $old :^lib/eal/include/rte_bitops.h | xargs sed -i -e "s#\<$old\>#$new#g"
Then inclusion of rte_bitops.h was added where necessary.
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Long Li <longli@microsoft.com>
#
8b6f2439 | 18-Jul-2023 | Alexander Kozyrev <akozyrev@nvidia.com>
net/mlx5: fix handle validation for meter mark
Skip the METER_MARK validation for the indirect action update. The old synchronous indirect action update was left out during the METER_MARK implementation in favor of the async way. Allow the sync method of doing this with relaxed validation.
Fixes: 48fbb0e93d06 ("net/mlx5: support flow meter mark indirect action with HWS")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
#
a9156d68 | 18-Jul-2023 | Bing Zhao <bingz@nvidia.com>
net/mlx5: fix validation for conntrack indirect action
After the rte_flow_shared_action_* API was replaced with the rte_flow_action_handle_* API, one input parameter of the update interface was also changed. A generic pointer was used instead of the "const struct rte_flow_action *" pointer.
At the entry of the mlx5 PMD callback for update, validation is called for all indirect actions. But for the conntrack type, the pointer is no longer of the rte_flow_action pointer type, which causes an incorrect cast and an error.
The content for updating should only be validated when needed, so the validation at the entry should be skipped. Right now, the content is already added before updating the hardware by WQE. So the type of the indirect action should be checked before calling the action validate function.
When creating a new conntrack object, the validation is still needed since all the content will be used to update the hardware context.
Fixes: 4b61b8774be9 ("ethdev: introduce indirect flow action")
Cc: stable@dpdk.org
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>