# ff7ab341 | 28-Oct-2020 | Suanming Mou <suanmingm@nvidia.com>

net/mlx5: remove unused mreg copy
After the non-cache mode feature was implemented, flows can only be created once the port is started, so there is no need to check whether the mreg flows were created while the port was stopped, and applying the mreg flows after port start can never happen either.

This commit removes the now-unused mreg copy code.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# afd7a625 | 28-Oct-2020 | Xueming Li <xuemingl@nvidia.com>

net/mlx5: make flow table cache thread safe
To support multi-thread flow insertion and removal, this patch uses the thread-safe hash list API for the flow table cache hash list.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# e69a5922 | 28-Oct-2020 | Xueming Li <xuemingl@nvidia.com>

net/mlx5: support concurrent access for hash list
In order to support concurrent access to the hash list, the following are added:
1. A list-level read/write lock.
2. An entry reference counter.
3. Entry create/match/remove callbacks.
4. Removal of the insert/lookup/remove functions, which are not thread safe.
5. Register/unregister functions to support entry reuse.

For better performance, the lookup function takes the read lock to allow concurrent lookups from different threads, while all other hash list modification functions take the write lock, which blocks concurrent modification and lookup from other threads.

The exact object changes will be applied in the next patches.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
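
Below is a minimal, hypothetical sketch of the scheme this commit describes: a list-level read/write lock, a per-entry reference counter, and caller-supplied create/match/remove callbacks behind register/unregister entry points. All names are illustrative; the real mlx5_hlist code differs in detail:

    #include <stdint.h>
    #include <sys/queue.h>
    #include <rte_rwlock.h>

    struct hlist_entry {
        LIST_ENTRY(hlist_entry) next;
        uint64_t key;
        uint32_t ref_cnt;                /* entry reference counter */
    };

    LIST_HEAD(hlist_bucket, hlist_entry);

    struct hlist {
        rte_rwlock_t lock;               /* list-level read/write lock */
        uint32_t mask;                   /* bucket count - 1, power of two */
        /* create/match/remove callbacks supplied at list creation */
        struct hlist_entry *(*cb_create)(uint64_t key, void *ctx);
        int (*cb_match)(struct hlist_entry *e, uint64_t key, void *ctx);
        void (*cb_remove)(struct hlist_entry *e);
        struct hlist_bucket buckets[];
    };

    static struct hlist_entry *
    bucket_lookup(struct hlist *h, uint64_t key, void *ctx)
    {
        struct hlist_entry *e;

        LIST_FOREACH(e, &h->buckets[key & h->mask], next)
            if (h->cb_match(e, key, ctx) == 0)
                return e;
        return NULL;
    }

    /* Lookup-or-create: reuse an existing entry by taking a reference. */
    struct hlist_entry *
    hlist_register(struct hlist *h, uint64_t key, void *ctx)
    {
        struct hlist_entry *e;

        /* Fast path under the read lock: concurrent lookups allowed. */
        rte_rwlock_read_lock(&h->lock);
        e = bucket_lookup(h, key, ctx);
        if (e)
            __atomic_fetch_add(&e->ref_cnt, 1, __ATOMIC_RELAXED);
        rte_rwlock_read_unlock(&h->lock);
        if (e)
            return e;
        /* Slow path: take the write lock and re-check before insert. */
        rte_rwlock_write_lock(&h->lock);
        e = bucket_lookup(h, key, ctx);
        if (e) {
            __atomic_fetch_add(&e->ref_cnt, 1, __ATOMIC_RELAXED);
        } else {
            e = h->cb_create(key, ctx);
            if (e) {
                e->ref_cnt = 1;
                LIST_INSERT_HEAD(&h->buckets[key & h->mask], e, next);
            }
        }
        rte_rwlock_write_unlock(&h->lock);
        return e;
    }

    /* Drop one reference; the last reference removes the entry. */
    void
    hlist_unregister(struct hlist *h, struct hlist_entry *e)
    {
        rte_rwlock_write_lock(&h->lock);
        if (__atomic_sub_fetch(&e->ref_cnt, 1, __ATOMIC_RELAXED) == 0) {
            LIST_REMOVE(e, next);
            h->cb_remove(e);
        }
        rte_rwlock_write_unlock(&h->lock);
    }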

# d163fc2d | 28-Oct-2020 | Xueming Li <xuemingl@nvidia.com>

net/mlx5: make flow list thread safe
To support multi-thread flow operations, this patch introduces a list lock for the rte_flow list that manages all the rte_flow handles.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
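
For illustration, guarding a flow list with a spinlock in the spirit of this commit; the sketch below is hypothetical (mlx5 keeps its own list and lock inside struct mlx5_priv):

    #include <sys/queue.h>
    #include <rte_spinlock.h>

    struct flow {
        TAILQ_ENTRY(flow) next;
        /* ... rule data ... */
    };

    TAILQ_HEAD(flow_list, flow);

    struct port_ctx {
        rte_spinlock_t flow_list_lock;   /* serializes list updates */
        struct flow_list flows;
    };

    static void
    flow_list_insert(struct port_ctx *p, struct flow *f)
    {
        rte_spinlock_lock(&p->flow_list_lock);
        TAILQ_INSERT_TAIL(&p->flows, f, next);
        rte_spinlock_unlock(&p->flow_list_lock);
    }

    static void
    flow_list_remove(struct port_ctx *p, struct flow *f)
    {
        rte_spinlock_lock(&p->flow_list_lock);
        TAILQ_REMOVE(&p->flows, f, next);
        rte_spinlock_unlock(&p->flow_list_lock);
    }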

# 4ae8825c | 28-Oct-2020 | Xueming Li <xuemingl@nvidia.com>

net/mlx5: use indexed pool as id generator
The ID generation API used an integer pool to save released IDs. To support multi-thread flow operations, it has to be enhanced to be thread safe.

An indexed pool can be used to generate unique IDs by setting the size of the pool entry to zero. Since a bitmap is used, an extra benefit is a memory saving of about one bit per entry. Furthermore, the indexed pool can be made thread safe by enabling its lock.

This patch leverages the indexed pool to generate IDs and removes the unused ID generation API.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
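
As an illustration of the bitmap idea, a toy thread-safe ID allocator follows — roughly one bit of state per ID, guarded by a spinlock. This is purely hypothetical; mlx5 actually reuses its generic indexed pool (mlx5_ipool) with zero-sized entries rather than a dedicated allocator:

    #include <stdint.h>
    #include <stdlib.h>
    #include <rte_spinlock.h>

    struct id_pool {
        rte_spinlock_t lock;
        uint32_t n_ids;
        uint64_t words[];                /* bit set = ID free */
    };

    struct id_pool *
    id_pool_create(uint32_t n_ids)
    {
        uint32_t n_words = (n_ids + 63) / 64;
        struct id_pool *p = calloc(1, sizeof(*p) + n_words * sizeof(uint64_t));

        if (!p)
            return NULL;
        rte_spinlock_init(&p->lock);
        p->n_ids = n_ids;
        for (uint32_t i = 0; i < n_ids; i++)
            p->words[i / 64] |= 1ULL << (i % 64);
        return p;
    }

    /* Allocate the lowest free ID, or UINT32_MAX when exhausted. */
    uint32_t
    id_alloc(struct id_pool *p)
    {
        uint32_t id = UINT32_MAX;

        rte_spinlock_lock(&p->lock);
        for (uint32_t w = 0; w < (p->n_ids + 63) / 64; w++) {
            if (p->words[w]) {
                uint32_t bit = __builtin_ctzll(p->words[w]);

                p->words[w] &= ~(1ULL << bit);
                id = w * 64 + bit;
                break;
            }
        }
        rte_spinlock_unlock(&p->lock);
        return id;
    }

    void
    id_free(struct id_pool *p, uint32_t id)
    {
        rte_spinlock_lock(&p->lock);
        p->words[id / 64] |= 1ULL << (id % 64);
        rte_spinlock_unlock(&p->lock);
    }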

# 94b6d884 | 28-Oct-2020 | Xueming Li <xuemingl@nvidia.com>

net/mlx5: reuse flow id as hairpin id
Hairpin flow matching requires a unique flow ID for matching. This patch reuses the flow ID as the hairpin flow ID, which saves the code that generated a separate hairpin ID and also saves flow memory by removing the hairpin ID field.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# 8bb81f26 | 28-Oct-2020 | Xueming Li <xuemingl@nvidia.com>

net/mlx5: use thread specific flow workspace
As part of multi-thread flow support, this patch moves the flow intermediate data into a thread-specific flow workspace. The workspace is allocated per thread and destroyed along with the thread life-cycle.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
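
One plausible shape for such a per-thread workspace, using pthread TLS keys so each thread lazily allocates its own copy and the key destructor reclaims it at thread exit. The struct and names are hypothetical, not mlx5's actual definitions:

    #include <pthread.h>
    #include <stdlib.h>

    struct flow_workspace {
        /* intermediate data used while building one flow */
        void *rss_desc;
    };

    static pthread_key_t wks_key;
    static pthread_once_t wks_once = PTHREAD_ONCE_INIT;

    static void
    wks_destroy(void *data)
    {
        free(data);                      /* runs automatically at thread exit */
    }

    static void
    wks_key_init(void)
    {
        pthread_key_create(&wks_key, wks_destroy);
    }

    /* Each thread lazily allocates its own workspace: no sharing, no locks. */
    struct flow_workspace *
    flow_workspace_get(void)
    {
        struct flow_workspace *wks;

        pthread_once(&wks_once, wks_key_init);
        wks = pthread_getspecific(wks_key);
        if (!wks) {
            wks = calloc(1, sizeof(*wks));
            if (wks)
                pthread_setspecific(wks_key, wks);
        }
        return wks;
    }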

# 23f627e0 | 27-Oct-2020 | Bing Zhao <bingz@nvidia.com>

net/mlx5: add flow sync API
When creating a flow, the rule itself might not take effect immediately once the function call returns with success. It takes some time for the steering to synchronize with the hardware.

If the application wants a packet sent after the flow is created to hit that flow, this flow sync API can be used to clear the steering HW cache and enforce that the next packet hits the latest rules.

For TX, usually the NIC TX domain and/or the FDB domain should be synchronized, depending on which domain the flow is created in.

The application can also synchronize the NIC RX and/or the FDB domain for ingress packets.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
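
A hedged usage sketch, assuming the PMD-private API added by this commit is the rte_pmd_mlx5_sync_flow() function and the MLX5_DOMAIN_BIT_* flags from rte_pmd_mlx5.h: create an ingress rule, then synchronize the relevant domains before relying on it:

    #include <rte_flow.h>
    #include <rte_pmd_mlx5.h>

    int
    create_and_sync(uint16_t port_id, const struct rte_flow_attr *attr,
                    const struct rte_flow_item items[],
                    const struct rte_flow_action actions[])
    {
        struct rte_flow_error err;
        struct rte_flow *flow;

        flow = rte_flow_create(port_id, attr, items, actions, &err);
        if (!flow)
            return -1;
        /* Block until the rule is committed to HW for these domains. */
        return rte_pmd_mlx5_sync_flow(port_id,
                                      MLX5_DOMAIN_BIT_NIC_RX |
                                      MLX5_DOMAIN_BIT_FDB);
    }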

# 86b59a1a | 25-Oct-2020 | Matan Azrad <matan@nvidia.com>

net/mlx5: support VLAN matching fields
The fields ``has_vlan`` and ``has_more_vlan`` were added to rte_flow by patch [1].

Using these fields, the application can match all the VLAN cases with a single flow: any, VLAN only, and non-VLAN only.

Add support for these fields, and along the way add support for matching QinQ packets.

The VLAN/QinQ limitations are listed in the driver documentation.

[1] https://patches.dpdk.org/patch/80965/

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
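
A hedged pattern sketch matching only VLAN-tagged packets through the new ``has_vlan`` bit (attr and actions omitted); exact semantics follow the referenced ethdev patch:

    #include <rte_flow.h>

    /* Match packets that carry at least one VLAN tag, regardless of
     * MAC addresses or EtherType. */
    static const struct rte_flow_item_eth eth_spec = { .has_vlan = 1 };
    static const struct rte_flow_item_eth eth_mask = { .has_vlan = 1 };

    static const struct rte_flow_item pattern[] = {
        {
            .type = RTE_FLOW_ITEM_TYPE_ETH,
            .spec = &eth_spec,
            .mask = &eth_mask,
        },
        /* a VLAN item may follow here to also match the TCI */
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };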

# 509f8470 | 26-Oct-2020 | Bing Zhao <bingz@nvidia.com>

net/mlx5: do not split hairpin flow in explicit mode
In the current implementation, the hairpin flow is split into two flows implicitly if there is some action that belongs only to the Tx part. A Tx device flow is then inserted by the mlx5 PMD itself.

For hairpin between two ports, the explicit Tx flow mode will be the only one supported. It is not appropriate behavior to insert a Tx flow into another device implicitly. The application can create any flow it likes and has full control of the user flows. Hairpin flows then have no difference from standard flows, and the application can decide how to chain Rx and Tx flows together.

Even in single-port hairpin, this explicit Tx flow mode can also be supported.

When checking whether the hairpin flow needs to be split, the code now just returns if the hairpin queue has the "tx_explicit" attribute. The following validation and translation steps then take the same code path as standard flows.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
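
A hedged configuration sketch, assuming the tx_explicit bit of struct rte_eth_hairpin_conf from this series: setting up a hairpin Tx queue in explicit mode so the application, not the PMD, creates the Tx-side rules:

    #include <rte_ethdev.h>

    int
    setup_explicit_hairpin_txq(uint16_t port_id, uint16_t queue_id,
                               uint16_t peer_port, uint16_t peer_queue)
    {
        struct rte_eth_hairpin_conf conf = {
            .peer_count = 1,
            .tx_explicit = 1,            /* app chains Rx and Tx flows */
            .peers[0] = { .port = peer_port, .queue = peer_queue },
        };

        return rte_eth_tx_hairpin_queue_setup(port_id, queue_id,
                                              512 /* descriptors */, &conf);
    }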

# 4ec6360d | 25-Oct-2020 | Gregory Etelson <getelson@nvidia.com>

net/mlx5: implement tunnel offload
The Tunnel Offload API provides a hardware-independent, unified model to offload tunneled traffic. Key model elements are:
- apply matches to both outer and inner packet headers during the entire offload procedure;
- restore the outer header of a partially offloaded packet;
- the model is implemented as a set of helper functions.

Implementation details:
* the tunnel_offload PMD parameter must be set to 1 to enable the feature;
* the application cannot use MARK and META flow actions with tunnels;
* the offload JUMP action is restricted to steering the tunnel rule only.

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
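
A hedged sketch of one helper call from that model, assuming the rte_flow tunnel API of this release (rte_flow_tunnel_decap_set() handing back PMD-private actions); the exact struct fields and signatures should be checked against rte_flow.h:

    #include <rte_flow.h>

    /* Ask the PMD for its private actions implementing decap for a VXLAN
     * tunnel; the application then prepends them to its own rule actions. */
    int
    get_tunnel_decap_actions(uint16_t port_id,
                             struct rte_flow_action **pmd_actions,
                             uint32_t *n_actions)
    {
        struct rte_flow_tunnel tunnel = {
            .type = RTE_FLOW_ITEM_TYPE_VXLAN,
        };
        struct rte_flow_error err;

        return rte_flow_tunnel_decap_set(port_id, &tunnel, pmd_actions,
                                         n_actions, &err);
    }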

# d7cfcddd | 23-Oct-2020 | Andrey Vesnovaty <andreyv@nvidia.com>

net/mlx5: translate shared action for RSS action
Handle the shared action on flow validation/creation/destruction. The mlx5 PMD translates a shared action into a regular one before handling flow validation/creation. The shared action translation is applied in order to utilize the same execution path for both shared and regular actions. The current implementation supports shared action translation for the shared RSS action only.

RSS action validation is split in order to validate the shared RSS action on its creation, in addition to the action validation in the flow validation/creation path.

Implement the rte_flow shared action API for the mlx5 PMD, mostly forwarding calls to the flow driver operations (see struct mlx5_flow_driver_ops).

Signed-off-by: Andrey Vesnovaty <andreyv@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
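
A hedged usage sketch of the rte_flow shared action API as of this release (later renamed to indirect actions); 'rss_conf' is assumed to be an RSS configuration prepared by the caller:

    #include <rte_flow.h>

    struct rte_flow_shared_action *
    make_shared_rss(uint16_t port_id,
                    const struct rte_flow_action_rss *rss_conf)
    {
        const struct rte_flow_shared_action_conf conf = {
            .ingress = 1,
        };
        const struct rte_flow_action rss = {
            .type = RTE_FLOW_ACTION_TYPE_RSS,
            .conf = rss_conf,
        };
        struct rte_flow_error err;

        /* The returned handle can be referenced from many flows via
         * RTE_FLOW_ACTION_TYPE_SHARED with .conf set to this pointer. */
        return rte_flow_shared_action_create(port_id, &conf, &rss, &err);
    }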

# 2b5b1aeb | 20-Oct-2020 | Suanming Mou <suanmingm@nvidia.com>

net/mlx5: optimize counter extend memory
Counter extend memory was allocated for non-batch counters to save the extra DevX object. Currently, for a non-batch counter that does not support aging, the 'entry' field in the generic counter struct is used only while the counter is free in the free list, and the 'bytes' field is used only while the counter is allocated and in use.

In this case, the DevX object can be saved in the generic counter struct, in a union with the 'entry' memory while the counter is allocated and in a union with 'bytes' while the counter is free. The pool type is also no longer needed, as non-fallback mode has only generic counters and aging counters; a single bit indicating whether the pool is aged is enough.

This eliminates the counter extend info struct and saves its memory.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
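
A hypothetical layout illustrating the union arrangement described above; the field names are invented for illustration and are not the exact mlx5 definitions:

    #include <stdint.h>
    #include <sys/queue.h>

    struct flow_counter {
        union {
            TAILQ_ENTRY(flow_counter) next; /* valid while free in list */
            void *dcs_when_active;          /* DevX obj while in use */
        };
        union {
            uint64_t bytes;                 /* valid while in use */
            void *dcs_when_free;            /* DevX obj while free */
        };
        uint64_t hits;
    };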

# 3aa27915 | 20-Oct-2020 | Suanming Mou <suanmingm@nvidia.com>

net/mlx5: synchronize flow counter pool creation
Currently, counter operations are not thread safe, as the counter pools' array resize is not protected.

This commit protects the container pools' array resize with a spinlock. The counter pool statistics memory allocation is moved to the host thread in order to minimize the critical section, since that statistics memory is required only at query time. The container pools' array must still be resized by the user threads, because the new pool may be used by other rte_flow APIs before the host thread finishes; if the pool were not saved to the counter management pool array, the specified counter memory would not be found. The pool's raw statistics memory is then filled in by the host thread.

The shared counters will be protected in another commit.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# 994829e6 | 20-Oct-2020 | Suanming Mou <suanmingm@nvidia.com>

net/mlx5: remove single counter container
A flow counter that was allocated by the batch API couldn't be assigned to a flow in the root table (group 0) with old rdma-core versions. Hence, a root table flow counter required a PMD mechanism to manage counters which were allocated singly.

Currently, batch counters are already supported in the root table when a new rdma-core version with the MLX5_FLOW_ACTION_COUNTER_OFFSET enum is used together with a kernel driver that includes the MLX5_IB_ATTR_CREATE_FLOW_ARR_COUNTERS_DEVX_OFFSET enum.

When the PMD uses the rdma-core API to assign a batch counter to a root table flow using an invalid counter offset, it gets an error only if batch counter assignment for the root table is supported. Performing this trial at initialization time can therefore detect the support.

Using the above trial: if the support is present, remove the management of the single counter container from the fast counter mechanism; otherwise, move the counter mechanism to fallback mode.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# 6b7c717e | 20-Oct-2020 | Suanming Mou <suanmingm@nvidia.com>

net/mlx5: locate aging pools in the general container
Commit [1] introduced a different container for the aging counter pools. In order to save container memory, the aging counter pools can be located in the general pool container.

This patch locates the aging counter pools in the general pool container and removes the aging container management.

[1] commit fd143711a6ea ("net/mlx5: separate aging counter pool range")

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# d5a7d04c | 19-Oct-2020 | Dekel Peled <dekelp@nvidia.com>

net/mlx5: support query of age action
A recent patch [1] adds to ethdev the API for the query of the age action. This patch implements the age action query in the MLX5 PMD using this API.

[1] https://mails.dpdk.org/archives/dev/2020-October/184864.html

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
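
A hedged sketch of the query side, assuming the rte_flow_query_age layout (aged / sec_since_last_hit fields) added by the referenced ethdev patch:

    #include <rte_flow.h>

    int
    flow_is_aged(uint16_t port_id, struct rte_flow *flow)
    {
        static const struct rte_flow_action age_action = {
            .type = RTE_FLOW_ACTION_TYPE_AGE,
        };
        struct rte_flow_query_age resp = { 0 };
        struct rte_flow_error err;

        if (rte_flow_query(port_id, flow, &age_action, &resp, &err))
            return -1;
        return resp.aged; /* 1 once the timeout expired without hits */
    }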

# 90e30c74 | 15-Oct-2020 | Dekel Peled <dekelp@nvidia.com>

net/mlx5: fix use of atomic cmpset for age state
According to the documentation [1], the return value of rte_atomic16_cmpset() is non-zero on success and 0 on failure. In the existing code this function is called and the return value is compared to AGE_CANDIDATE, which is defined as 1. Such a comparison is incorrect and can lead to unwanted behavior.

This patch updates the calls to rte_atomic16_cmpset() to check only whether the return value is 0 or non-zero.

[1] https://doc.dpdk.org/api/rte__atomic_8h.html

Fixes: fa2d01c87d2b ("net/mlx5: support flow aging")
Cc: stable@dpdk.org

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
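
To illustrate the bug class, a before/after of the check; the AGE_* values mirror the mlx5 aging states but are defined locally here:

    #include <rte_atomic.h>

    #define AGE_FREE      0
    #define AGE_CANDIDATE 1
    #define AGE_TMOUT     2

    static void
    age_out(volatile uint16_t *state)
    {
        /* WRONG: the return value is 0 or non-zero, never a state, so
         * comparing it to AGE_CANDIDATE (1) is meaningless:
         *
         * if (rte_atomic16_cmpset(state, AGE_CANDIDATE, AGE_TMOUT) ==
         *     AGE_CANDIDATE)
         */

        /* RIGHT: test only for success (non-zero) or failure (zero). */
        if (rte_atomic16_cmpset(state, AGE_CANDIDATE, AGE_TMOUT)) {
            /* transition happened: report the aged-out flow */
        }
    }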

# 49175737 | 15-Oct-2020 | Dekel Peled <dekelp@nvidia.com>

net/mlx5: enforce limitation on IPv6 next protocol
Due to a PRM requirement, the IPv6 header item 'proto' field, indicating the next header protocol, should not be set to an extension header. This patch adds the relevant validation and documents the limitation.

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>

# 6859e67e | 15-Oct-2020 | Dekel Peled <dekelp@nvidia.com>

net/mlx5: support match on IPv4 fragment packets
This patch adds to the MLX5 PMD support for matching on IPv4 fragmented and non-fragmented packets, using the IPv4 header fragment_offset field.

Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
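
A hedged pattern sketch matching only non-fragmented IPv4, where both the more-fragments bit and the 13-bit offset must be zero; the exact mask semantics are documented in the mlx5 guide:

    #include <rte_byteorder.h>
    #include <rte_flow.h>

    static const struct rte_flow_item_ipv4 ipv4_spec = {
        .hdr.fragment_offset = RTE_BE16(0),      /* no MF bit, zero offset */
    };
    static const struct rte_flow_item_ipv4 ipv4_mask = {
        .hdr.fragment_offset = RTE_BE16(0x3fff), /* MF bit + 13-bit offset */
    };

    static const struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        {
            .type = RTE_FLOW_ITEM_TYPE_IPV4,
            .spec = &ipv4_spec,
            .mask = &ipv4_mask,
        },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };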

# 50390aab | 13-Oct-2020 | Jiawei Wang <jiaweiw@nvidia.com>

net/mlx5: update flow mirroring validation
A mirroring flow uses the sample action with ratio 1, and it does not support a jump action in the same flow.

The sample action must have destination actions, like port or queue, for mirroring, and it does not need the split function used for sampling flows.

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# 0756228b | 13-Oct-2020 | Jiawei Wang <jiaweiw@nvidia.com>

net/mlx5: update translate function for sample action
Translate the attributes of the sample action, which include the sample ratio and the sub-actions list, then create the sample DR action.

The metadata register value will be lost in the default path after the sampler in FDB due to a CX5 HW limitation.

Since the source vport is also shared with metadata register c0, the MLX5 PMD sets the source vport in rdma-core, and rdma-core restores the regc0 value after the sampler.

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# b4c0ddbf | 13-Oct-2020 | Jiawei Wang <jiaweiw@nvidia.com>

net/mlx5: split sample flow into two sub-flows
A flow with the sample action will be split into two sub-flows: the prefix sub-flow, with all the actions preceding the sample action plus the sample action itself, and the suffix sub-flow, with the actions following the sample action.

The original items remain in the prefix sub-flow; an implicit tag action with a unique id is added to set a metadata register, and the suffix sub-flow uses a tag item to match on that unique id.

The flow is split as below:

    Original flow:    items / actions pre / sample / actions sfx
    Prefix sub-flow:  items / actions pre / set_tag action / sample
    Suffix sub-flow:  tag_item / actions sfx

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# b1088fcc | 09-Oct-2020 | Li Zhang <lizh@nvidia.com>

net/mlx5: support ICMP identifier matching
The PRM exposes the "icmp_header_data" field for IPv4 ICMP. Update the ICMP mask parameter with the ICMP identifier and sequence number fields: for an ICMP sequence number spec with mask, the low 16 bits of icmp_header_data are set; for an ICMP identifier spec with mask, the high 16 bits of icmp_header_data are set.

Signed-off-by: Li Zhang <lizh@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
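
A hedged pattern sketch matching ICMP echo requests (type 8) with a specific identifier, using the fields this commit wires into matching:

    #include <rte_byteorder.h>
    #include <rte_flow.h>
    #include <rte_icmp.h>

    static const struct rte_flow_item_icmp icmp_spec = {
        .hdr = {
            .icmp_type = 8,                  /* echo request */
            .icmp_ident = RTE_BE16(0x1234),  /* identifier to match */
        },
    };
    static const struct rte_flow_item_icmp icmp_mask = {
        .hdr = {
            .icmp_type = 0xff,
            .icmp_ident = RTE_BE16(0xffff),
        },
    };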

# 657df3ca | 10-Sep-2020 | Xueming Li <xuemingl@nvidia.com>

net/mlx5: disable dump of Verbs flows
There was a segmentation fault when dumping flows with the device argument dv_flow_en=0. In such a case, the Verbs flow engine was enabled and FDB resources were not initialized. It is suggested to use mlx_fs_dump for Verbs flow dump.

This patch adds a Verbs engine check, prints a warning message, and returns gracefully.

Fixes: f6d7202402c9 ("net/mlx5: support flow dump API")
Cc: stable@dpdk.org

Reported-by: Jørgen Østergaard Sloth <jorgen.sloth@xci.dk>
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>