# 388dd1c9 | 16-Nov-2020 | Xiaoyu Min <jackmin@nvidia.com>

net/mlx5: fix encap/decap limit for hairpin flow split

The rte_flow_item_eth and rte_flow_item_vlan items were refined: the structs no longer exactly represent the packet bits captured on the wire, so the real header should be used instead of the whole struct.

Replace the rte_flow_item_* structs with the existing corresponding rte_*_hdr structs.

Fixes: 09315fc83861 ("ethdev: add VLAN attributes to ethernet and VLAN items")
Fixes: f9210259cac7 ("net/mlx5: fix raw encap/decap limit")
Signed-off-by: Xiaoyu Min <jackmin@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# 31ef2982 | 18-Nov-2020 | Dekel Peled <dekelp@nvidia.com>

net/mlx5: fix input register for ASO object

Existing code uses the hard-coded value REG_C_5 as input for function mlx5dv_dr_action_create_flow_hit().

This patch updates function mlx5_flow_get_reg_id() to return the selected REG_C value for the ASO flow hit operation. The returned value, after subtracting the offset of REG_C_0, is used as input for function mlx5dv_dr_action_create_flow_hit().

Fixes: f935ed4b645a ("net/mlx5: support flow hit action for aging")
Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# 1db3678d | 18-Nov-2020 | Gregory Etelson <getelson@nvidia.com>

net/mlx5: fix restore info in non-tunnel traffic

The tunnel offload API provides applications with the ability to restore packet outer headers after partial offload. Exact feature execution depends on hardware abilities and the PMD implementation. Hardware supported by the MLX5 PMD places a mark on a packet after partial offload; the PMD decodes that mark and provides the application with the required information.

An application can call the restore API both for packets that are part of an offloaded tunnel and for packets that are not; it is up to the PMD to provide correct information. The current MLX5 tunnel offload implementation does not allow applications to use flow MARK actions; marking is restricted to tunnel offload use only.

This fault was triggered by an application that did not activate tunnel offload and called the restore API with a marked packet. The PMD tried to decode the mark value and crashed. The patch decodes the mark value only if tunnel offload is active.

Fixes: 4ec6360de37d ("net/mlx5: implement tunnel offload")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# 76563365 | 16-Nov-2020 | Xiaoyu Min <jackmin@nvidia.com>

net/mlx5: fix RSS queue type validation

When the RSS queues' types are not uniform, i.e., normal Rx queues are mixed with hairpin queues, the PMD accepts such a flow after commit [1] instead of rejecting it.

This is because commit [1] creates the Rx queue object as a DevX type via the DevX API instead of an IBV type via Verbs: the latter checks the queues' type when creating the Verbs indirection table, but the former does not check it when creating the DevX indirection table.

In any case, the PMD should logically check whether the input configuration of the RSS action is reasonable, which includes the queues' type check as well as the others.

So add the check of the RSS queues' type in the validation function to fix the issue.

[1]: commit 6deb19e1b2d2 ("net/mlx5: separate Rx queue object creations")

Fixes: 63bd16292c3a ("net/mlx5: support RSS on hairpin")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyu Min <jackmin@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# a0e4728c | 16-Nov-2020 | Gregory Etelson <getelson@nvidia.com>

net/mlx5: fix crash in tunnel offload setup

The new flow table resource management API triggered a PMD crash in tunnel offload mode when a tunnel match flow rule was inserted before the tunnel set rule.

The reason for the crash was double flow table registration: the table was registered first by the tunnel offload code and once more by the PMD code as part of general table processing. The table counter was decremented only once during rule destruction, causing a resource leak that triggered the crash.

The patch updates PMD registration with tunnel offload parameters and removes the table registration in the tunnel related code.

Fixes: afd7a62514ad ("net/mlx5: make flow table cache thread safe")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# 868d2e34 | 16-Nov-2020 | Gregory Etelson <getelson@nvidia.com>

net/mlx5: fix tunnel offload hub multi-thread protection

The original patch removed active tunnel offload objects from the tunnels db list without checking their reference counter value. That action was leading to a PMD crash.

The current patch isolates the tunnels db list behind a separate API. That API manages MT protection of the tunnel offload db.

Fixes: 5b38d8cd4663 ("net/mlx5: make tunnel hub list thread safe")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# 9cac7ded | 16-Nov-2020 | Gregory Etelson <getelson@nvidia.com>

net/mlx5: fix tunnel offload object allocation

The original patch allocated tunnel offload objects with invalid indexes. As a result, PMD tunnel object allocation failed.

In this patch the indexed pool provides both an index and memory for a new tunnel offload object. The tunnel offload ipool is also moved to DV enabled code only.

Fixes: 4ae8825c5085 ("net/mlx5: use indexed pool as id generator")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# eab3ca48 | 16-Nov-2020 | Gregory Etelson <getelson@nvidia.com>

net/mlx5: fix structure passing method in function call

The tunnel offload implementation introduced the 64-bit-field flow_grp_info structure. Since the structure size is 64 bits, the code passed that type by value in function calls.

The patch changes that structure passing method to by reference.

Fixes: 4ec6360de37d ("net/mlx5: implement tunnel offload")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# bc1d90a3 | 16-Nov-2020 | Gregory Etelson <getelson@nvidia.com>

net/mlx5: fix build with Direct Verbs disabled

The tunnel offload API is implemented for the Direct Verbs environment only. The current patch re-arranges tunnel related functions so that they compile in non Direct Verbs setups, to prevent compilation failures. The patch does not introduce new functions.

Fixes: 4ec6360de37d ("net/mlx5: implement tunnel offload")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# 8b11f9aa | 16-Nov-2020 | Gregory Etelson <getelson@nvidia.com>

net/mlx5: fix tunnel offload callback names

Fix the mlx5_flow_tunnel_action_release and mlx5_flow_tunnel_item_release callback names to match the tunnel offload naming pattern.

Fixes: 4ec6360de37d ("net/mlx5: implement tunnel offload")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# 8b379953 | 11-Nov-2020 | Michael Baum <michaelba@nvidia.com>

net/mlx5: remove unused calculation in RSS expansion

The RSS flow expansion gets a memory buffer to fill with the new patterns of the expanded flows. This memory management saves the next address to write into the buffer in a dedicated variable.

The calculation of the next address was wrongly done even when all the patterns were ready.

Remove it.

Fixes: 4ed05fcd441b ("ethdev: add flow API to expand RSS flows")
Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# 0064bf43 | 09-Nov-2020 | Xueming Li <xuemingl@nvidia.com>

net/mlx5: fix nested flow creation

If metadata mode 1 is enabled and a flow with RSS and mark actions is created, rdma-core failed to create the RQT due to a wrong queue definition. This was caused by mixed flow creation in the thread-specific flow workspace.

This patch introduces a nested flow workspace (context data): each flow uses a dedicated flow workspace, and the workspace is popped and restored when the nested flow creation is done, so the original flow continues with the original flow workspace. The total number of thread-specific flow workspaces should be 2, since there is only one nested flow creation scenario so far.

Fixes: 8bb81f2649b1 ("net/mlx5: use thread specific flow workspace")
Fixes: 3ac3d8234b82 ("net/mlx5: fix index when creating flow")
Cc: stable@dpdk.org
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# 6f921f61 | 10-Nov-2020 | Xiaoyu Min <jackmin@nvidia.com>

net/mlx5: validate MPLSoGRE with GRE key

Currently the PMD only accepts flows in which item_mpls directly follows item_gre, which means matching the GRE header without the GRE optional key field in the MPLSoGRE encapsulation.

However, for MPLSoGRE the GRE header may have the optional key field according to the RFC, so the PMD needs to accept this.

Add MLX5_FLOW_LAYER_GRE_KEY to the allowed prev_layer to fix the issue.

Fixes: a7a0365565a4 ("net/mlx5: match GRE key and present bits")
Cc: stable@dpdk.org
Signed-off-by: Xiaoyu Min <jackmin@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# 1d1f909c | 11-Nov-2020 | Bing Zhao <bingz@nvidia.com>

net/mlx5: fix check of eCPRI previous layer

Based on the specification, eCPRI can only follow an ETH (VLAN) layer or a UDP layer. When creating a flow with an eCPRI item, this should be checked and an invalid layout of the layers should be rejected.

Fixes: c7eca23657b7 ("net/mlx5: add flow validation of eCPRI header")
Cc: stable@dpdk.org
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# fabf8a37 | 10-Nov-2020 | Suanming Mou <suanmingm@nvidia.com>

net/mlx5: fix shared RSS action release

As a shared RSS action is shared by multiple flows, the action is created as a global standalone action and managed only by the relevant shared action management functions.

Currently, hrxqs are created either by a shared RSS action or by a general queue action. Hrxqs created by a shared RSS action should also only be released with the shared RSS action; it is not correct to release the shared RSS action hrxqs the way general queue actions do in flow destroy.

This commit adds a new fate action type for the shared RSS action to handle the shared RSS action hrxq release correctly.

Fixes: e1592b6c4dea ("net/mlx5: make Rx queue thread safe")
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# 9ade91df | 04-Nov-2020 | Jiawei Wang <jiaweiw@nvidia.com>

net/mlx5: fix group value of sample suffix flow

The mlx5 PMD splits a sampling flow into a prefix flow and a suffix flow. In the sample action translation function, the scaled group value of the suffix flow is attached to the sample object and saved in the sample resource.

The mlx5 PMD fetched the group value from the sample resource to create the suffix flow. In the mlx5_flow_group_to_table function, the group value of the suffix flow was scaled with the table factor again and translated into the HW table. That caused an incorrect group value for the sample suffix flow.

The fix introduces a 'skip_scale' flag and sets it to 1 for the sample suffix flow creation. The mlx5_flow_group_to_table function then skips the scaling with the table factor and uses the correct group value.

Fixes: 4ec6360de37d ("net/mlx5: implement tunnel offload")
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# e82ddd28 | 03-Nov-2020 | Tal Shnaiderman <talshn@nvidia.com>

common/mlx5: split PCI relaxed ordering for read and write

The current DevX implementation of the relaxed ordering feature enables relaxed ordering usage only if both relaxed ordering read AND write are supported. In that case both relaxed ordering read and write are activated.

This commit optimizes the usage of relaxed ordering by enabling it when the read OR write feature is supported. Each relaxed ordering type is activated according to its own capability bit.

This aligns the DevX flow with the Verbs implementation of ibv_reg_mr when using the IBV_ACCESS_RELAXED_ORDERING flag.

Fixes: 53ac93f71ad1 ("net/mlx5: create relaxed ordering memory regions")
Cc: stable@dpdk.org
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# 64ed71d5 | 28-Oct-2020 | Gregory Etelson <getelson@nvidia.com>

net/mlx5: fix tunnel flow destroy

The flow destructor tried to access flow related resources after the flow object memory was already released, which crashed the DPDK process.

The patch moves the flow memory release to the end of the destructor.

Fixes: 4ec6360de37d ("net/mlx5: implement tunnel offload")
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# 81073e1f | 01-Nov-2020 | Matan Azrad <matan@nvidia.com>

net/mlx5: support shared age action

Add support for the rte_flow shared action API for the ASO age action.

As a first step, support validate, create, query and destroy.

The support is only for the age ASO mode.

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Dekel Peled <dekelp@nvidia.com>

# 4a42ac1f | 01-Nov-2020 | Matan Azrad <matan@nvidia.com>

net/mlx5: optimize shared RSS action memory

The RSS shared action was saved in the flow memory by a pointer, which means every flow's memory included 8B just for the optional shared RSS case.

Move the RSS objects to an indexed pool, which reduces the flow handle memory to 4B.

So, now, the shared action handle is also just a 4B index.

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Dekel Peled <dekelp@nvidia.com>

# f935ed4b | 01-Nov-2020 | Dekel Peled <dekelp@nvidia.com>

net/mlx5: support flow hit action for aging

A new ASO (Advanced Steering Operation) feature was added in the latest mlx5 adapters to support flow hit detection.

Using this new steering action, the driver can detect flow traffic hits and reset this indication at any time.

The ASO age action cannot support flows in table 0.

Add support for the flow aging action in rte_flow using this new feature.

The counter aging mode is used only when the ASO feature is not supported for the user flow groups.

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Signed-off-by: Matan Azrad <matan@nvidia.com>

# 1be514fb | 22-Oct-2020 | Andrew Rybchenko <arybchenko@solarflare.com>

ethdev: remove legacy FDIR filter type support

Instead of FDIR filters, the RTE flow API should be used.

Signed-off-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>

# 5b38d8cd | 28-Oct-2020 | Suanming Mou <suanmingm@nvidia.com>

net/mlx5: make tunnel hub list thread safe

This commit uses a spinlock to protect the tunnel hub list across multiple threads.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# 1e2c7ced | 28-Oct-2020 | Suanming Mou <suanmingm@nvidia.com>

net/mlx5: make tunnel offloading table thread safe

To support multi-thread flow insertion, this patch updates the tunnel offloading hash table to use a thread safe hash list.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# f7f73ac1 | 28-Oct-2020 | Xueming Li <xuemingl@nvidia.com>

net/mlx5: make metadata copy flow list thread safe

To support multi-thread flow insertion, this patch updates the metadata copy flow list to use a thread safe hash list.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
