# 4ec6360d | 25-Oct-2020 | Gregory Etelson <getelson@nvidia.com>
net/mlx5: implement tunnel offload
The Tunnel Offload API provides a hardware-independent, unified model to offload tunneled traffic. Key model elements are:
- apply matches to both outer and inner packet headers during the entire offload procedure;
- restore the outer header of a partially offloaded packet;
- the model is implemented as a set of helper functions.
Implementation details:
* the tunnel_offload PMD parameter must be set to 1 to enable the feature;
* the application cannot use MARK and META flow actions with a tunnel;
* the offload JUMP action is restricted to steering the tunnel rule only.
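Below is a hedged usage sketch of the helper sequence from rte_flow.h (a VXLAN tunnel is assumed, error handling and flow-rule assembly are elided); partially offloaded packets are restored on the Rx side with rte_flow_get_restore_info().

```c
#include <rte_flow.h>

/* Sketch of the tunnel offload helper sequence, assuming a VXLAN tunnel;
 * error handling and the actual rte_flow_create() calls are elided. */
static int
tunnel_offload_example(uint16_t port_id)
{
	struct rte_flow_tunnel tunnel = { .type = RTE_FLOW_ITEM_TYPE_VXLAN };
	struct rte_flow_action *pmd_actions = NULL;
	struct rte_flow_item *pmd_items = NULL;
	uint32_t n_actions = 0, n_items = 0;
	struct rte_flow_error err;

	/* PMD-provided actions that steer the tunnel rule (includes JUMP). */
	if (rte_flow_tunnel_decap_set(port_id, &tunnel, &pmd_actions,
				      &n_actions, &err))
		return -1;
	/* PMD-provided items to prepend to the inner-rule pattern. */
	if (rte_flow_tunnel_match(port_id, &tunnel, &pmd_items,
				  &n_items, &err))
		return -1;
	/*
	 * Prepend pmd_items/pmd_actions to the application pattern/actions,
	 * create both rules with rte_flow_create(), then release the
	 * PMD-owned arrays.
	 */
	rte_flow_tunnel_action_decap_release(port_id, pmd_actions,
					     n_actions, &err);
	rte_flow_tunnel_item_release(port_id, pmd_items, n_items, &err);
	return 0;
}
```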
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# 16dbba25 | 21-Oct-2020 | Xueming Li <xuemingl@nvidia.com>
net/mlx5: fix port shared data reference count
When probing a representor, the tag cache hash table and the modification cache hash table were allocated per port, overwriting the previously existing caches in the shared context data.
This patch moves the reference check of the shared data prior to the hash table allocation to avoid this issue.
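A minimal sketch of the fix's shape, with hypothetical structure, field, and helper names (the PMD's shared-context layout differs):

```c
#include <errno.h>

struct mlx5_hlist;	/* PMD hash-list type, opaque here */

/* Hypothetical shared context holding the per-device caches. */
struct shared_ctx {
	unsigned int refcnt;
	struct mlx5_hlist *tag_table;
	struct mlx5_hlist *modify_cmds;
};

/* Hypothetical allocation helpers. */
struct mlx5_hlist *tag_table_create(struct shared_ctx *sh);
struct mlx5_hlist *modify_table_create(struct shared_ctx *sh);
void tag_table_destroy(struct shared_ctx *sh);

static int
shared_tables_init(struct shared_ctx *sh)
{
	/* Reference check first: a representor reuses the tables that the
	 * first port already allocated instead of overwriting them. */
	if (sh->refcnt > 1)
		return 0;
	sh->tag_table = tag_table_create(sh);
	if (sh->tag_table == NULL)
		return -ENOMEM;
	sh->modify_cmds = modify_table_create(sh);
	if (sh->modify_cmds == NULL) {
		tag_table_destroy(sh);
		return -ENOMEM;
	}
	return 0;
}
```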
Fixes: 6801116688fe ("net/mlx5: fix multiple flow table hash list")
Fixes: 1ef4cdef2682 ("net/mlx5: fix flow tag hash list conversion")
Cc: stable@dpdk.org

Acked-by: Matan Azrad <matan@nvidia.com>
Signed-off-by: Xueming Li <xuemingl@nvidia.com>

# 2b5b1aeb | 20-Oct-2020 | Suanming Mou <suanmingm@nvidia.com>
net/mlx5: optimize counter extend memory
Counter extend memory was allocated for non-batch counters to hold the extra DevX object. Currently, for a non-batch counter which does not support aging, the entry in the generic counter struct is used only while the counter is in the free list, and the bytes field in the struct is used only while the counter is allocated and in use.
In this case, the DevX object can be saved in the generic counter struct, in a union with the entry memory when the counter is allocated and in a union with the bytes memory when the counter is free. The pool type is also not needed, as non-fallback mode has only generic and aging counters; a single bit indicating whether the pool is aged is enough.
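A sketch of the resulting layout, with hypothetical field names (the real counter struct carries more state):

```c
#include <stdint.h>
#include <sys/queue.h>

struct mlx5_devx_obj;	/* DevX counter object, opaque here */

struct flow_counter {
	union {
		/* free-list linkage, used only while the counter is free */
		TAILQ_ENTRY(flow_counter) next;
		/* DevX object, parked here while the counter is allocated */
		struct mlx5_devx_obj *dcs_when_active;
	};
	union {
		/* byte count, valid only while the counter is allocated */
		uint64_t bytes;
		/* DevX object, parked here while the counter is free */
		struct mlx5_devx_obj *dcs_when_free;
	};
	uint64_t hits;
};
```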
This eliminates the counter extend info struct and saves the memory.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# 3aa27915 | 20-Oct-2020 | Suanming Mou <suanmingm@nvidia.com>
net/mlx5: synchronize flow counter pool creation
Currently, counter operations are not thread safe as the counter pools' array resize is not protected.
This commit protects the container pools' array resize with a spinlock. The original counter pool statistic memory allocation is moved to the host thread in order to minimize the critical section, since the pool statistic memory is required only at query time.
The pools' array itself must still be resized by the user threads: a new pool may be used by other rte_flow APIs before the host thread runs, and if the pool were not saved to the counter management pools' array, the counter memory belonging to it could not be found. Only the pool raw statistic memory is filled in the host thread.
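A hedged sketch of the locking pattern (structure and names are illustrative, not the PMD's actual ones):

```c
#include <errno.h>
#include <rte_malloc.h>
#include <rte_spinlock.h>

#define CONTAINER_RESIZE_STEP 8

struct counter_pool;	/* counter pool body, elided */

/* Hypothetical container guarding its pools array with a spinlock. */
struct counter_container {
	rte_spinlock_t resize_sl;
	uint32_t n;			/* current array capacity */
	struct counter_pool **pools;
};

/* Resize (if needed) and publish a new pool under the lock, so concurrent
 * user threads never observe a pool missing from the array. */
static int
container_add_pool(struct counter_container *cont,
		   struct counter_pool *pool, uint32_t index)
{
	rte_spinlock_lock(&cont->resize_sl);
	if (index >= cont->n) {
		uint32_t resize = cont->n + CONTAINER_RESIZE_STEP;
		struct counter_pool **pools =
			rte_realloc(cont->pools, sizeof(*pools) * resize, 0);

		if (pools == NULL) {
			rte_spinlock_unlock(&cont->resize_sl);
			return -ENOMEM;
		}
		cont->pools = pools;
		cont->n = resize;
	}
	cont->pools[index] = pool;
	rte_spinlock_unlock(&cont->resize_sl);
	return 0;
}
```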
The shared counters will be protected in another commit.
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# 994829e6 | 20-Oct-2020 | Suanming Mou <suanmingm@nvidia.com>
net/mlx5: remove single counter container
A flow counter which was allocated by a batch API couldn't be assigned to a flow in the root table (group 0) with old rdma-core versions. Hence, a root table flow counter required a PMD mechanism to manage counters which were allocated singly.
Currently, batch counters are supported in the root table when both a new rdma-core version with the MLX5_FLOW_ACTION_COUNTER_OFFSET enum and a kernel driver with the MLX5_IB_ATTR_CREATE_FLOW_ARR_COUNTERS_DEVX_OFFSET enum are present.
When the PMD uses the rdma-core API to assign a batch counter to a root table flow using an invalid counter offset, it gets an error only if batch counter assignment for the root table is supported. Running this trial at initialization time can therefore detect the support.
Using the above trial: if the support is present, remove the management of the single counter container from the fast counter mechanism; otherwise, move the counter mechanism to fallback mode.
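An illustrative sketch of the probe-time trial; the flow-trial helper is hypothetical and stands for building a group-0 flow that uses the counter at the given offset:

```c
#include <stdbool.h>
#include <stdint.h>

struct mlx5_devx_obj;

/* DevX command wrappers modeled on common/mlx5; the flow-trial helper
 * below is hypothetical. */
struct mlx5_devx_obj *mlx5_devx_cmd_flow_counter_alloc(void *ctx,
						       uint32_t bulk_sz);
int mlx5_devx_cmd_destroy(struct mlx5_devx_obj *obj);
int try_root_flow_with_counter_offset(void *ctx, struct mlx5_devx_obj *dcs,
				      uint16_t offset);

static bool
root_batch_counter_supported(void *ctx)
{
	struct mlx5_devx_obj *dcs;
	bool supported;

	dcs = mlx5_devx_cmd_flow_counter_alloc(ctx, 0x4);
	if (dcs == NULL)
		return false;
	/*
	 * An invalid offset is rejected only when the offset is honored:
	 * an error therefore means root-table batch counters work, while
	 * silent success means the old path ignored the offset.
	 */
	supported = try_root_flow_with_counter_offset(ctx, dcs,
						      UINT16_MAX) != 0;
	mlx5_devx_cmd_destroy(dcs);
	return supported;
}
```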
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# 613d64e4 | 15-Oct-2020 | Dekel Peled <dekelp@nvidia.com>
net/mlx5: log LRO minimal size
Add a debug printout showing the HCA capability lro_min_mss_size - the minimal size of a TCP segment required for coalescing. The MLX5 PMD documentation is updated to note this condition.
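A sketch of the added printout, assuming the capability has been parsed into the PMD's HCA attributes struct (DRV_LOG is the mlx5 PMD logging macro; placement and field layout are illustrative):

```c
static void
log_lro_min_size(const struct mlx5_hca_attr *attr)
{
	DRV_LOG(DEBUG,
		"minimal size of TCP segment required for coalescing is %u bytes",
		attr->lro_min_mss_size);
}
```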
Signed-off-by: Dekel Peled <dekelp@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# 3ec73abe | 15-Oct-2020 | Matan Azrad <matan@nvidia.com>
net/mlx5/linux: fix Tx queue operations decision
One of the conditions for creating the Tx queue object by DevX is that the DPDK mlx5 driver is not going to be the E-Switch manager of the device. The issue is with the default FDB flows managed by the kernel driver: they are not created by the kernel when the Tx queues are created by DevX.
The correct decision is to create the Tx queues by Verbs when E-Switch is enabled, while the current behavior uses the opposite condition to create them by DevX.
Create the Tx queues by Verbs when E-Switch is enabled.
Fixes: 86d259cec852 ("net/mlx5: separate Tx queue object creations")
Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# f30e69b4 | 14-Oct-2020 | Ferruh Yigit <ferruh.yigit@intel.com>
ethdev: add device flag to bypass auto-filled queue xstats
Queue stats are stored in 'struct rte_eth_stats' as arrays whose size is defined by the 'RTE_ETHDEV_QUEUE_STAT_CNTRS' compile-time flag.
As a result of a technical board discussion, it was decided to remove the queue statistics from 'struct rte_eth_stats' in the long term.
Instead, PMDs should expose the queue statistics via xstats; this gives more flexibility in the number of queues supported.
Currently the queue stats in the xstats are filled by the ethdev layer, using some basic stats; when the queue stats are removed from the basic stats, the responsibility for filling the relevant xstats will be pushed to the PMDs.
During the transition period, a temporary 'RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS' device flag is created. Initially all PMDs using xstats set this flag. PMDs that implement queue stats in their own xstats should clear the flag.
When all PMDs have switched to xstats for the queue stats, the queue stats related fields will be removed from 'struct rte_eth_stats', along with the 'RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS' flag. Later the 'RTE_ETHDEV_QUEUE_STAT_CNTRS' compile-time flag can be removed as well.
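A sketch of the transition flag in use at probe time (the function name is illustrative, not from a specific driver): a PMD that still relies on the ethdev layer to auto-fill per-queue xstats sets the flag, while a PMD exposing queue stats through its own xstats callbacks leaves it cleared.

```c
#include <rte_ethdev_driver.h>	/* PMD-side ethdev header in DPDK 20.11 */

static void
example_dev_flags_setup(struct rte_eth_dev *eth_dev)
{
	/* Keep the ethdev-layer auto-fill until this PMD implements
	 * per-queue xstats itself. */
	eth_dev->data->dev_flags |= RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS;
}
```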
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Xiao Wang <xiao.w.wang@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>

# 96b1f027 | 13-Oct-2020 | Jiawei Wang <jiaweiw@nvidia.com>
net/mlx5: validate sample action
Add sample action validate function.
Sample Flow is supported in NIC-RX and FDB domains. For the NIC-RX the Sample Flow action list must include the destination queue action.
Only the NIC-RX domain supports an optional action list. FDB doesn't support any optional actions; the sampled packets are always forwarded to the E-Switch manager port.
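An illustrative condensation of these rules, not the PMD's actual validation code (sub_actions is the sample action's action list):

```c
#include <errno.h>
#include <stdbool.h>
#include <rte_flow.h>

static int
sample_sub_actions_validate(bool is_fdb,
			    const struct rte_flow_action *sub_actions,
			    struct rte_flow_error *error)
{
	bool has_queue = false;

	for (; sub_actions->type != RTE_FLOW_ACTION_TYPE_END; sub_actions++) {
		/* FDB: no optional sub-actions are allowed at all. */
		if (is_fdb)
			return rte_flow_error_set(error, ENOTSUP,
					RTE_FLOW_ERROR_TYPE_ACTION,
					sub_actions,
					"no sample action list in FDB");
		if (sub_actions->type == RTE_FLOW_ACTION_TYPE_QUEUE)
			has_queue = true;
	}
	/* NIC-RX: the destination queue action is mandatory. */
	if (!is_fdb && !has_queue)
		return rte_flow_error_set(error, EINVAL,
				RTE_FLOW_ERROR_TYPE_ACTION, NULL,
				"sample list must include a queue action");
	return 0;
}
```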
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# 5d9f3c3f | 01-Oct-2020 | Michael Baum <michaelba@nvidia.com>
net/mlx5: separate Tx queue object modification
Separate Tx object modification to the Verbs and DevX modules.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# f49f4483 | 01-Oct-2020 | Michael Baum <michaelba@nvidia.com>
net/mlx5: share Tx control code
Move the Tx objects' similar resource allocations and debug logs from the DevX and Verbs modules to a shared location.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# 86d259ce | 01-Oct-2020 | Michael Baum <michaelba@nvidia.com>
net/mlx5: separate Tx queue object creations
In preparation for Windows OS support, the Verbs operations should be separated into another file. This way, the build can easily cut the unsupported Verbs APIs from the compilation process.
Define an operation structure and a DevX module in addition to the existing Linux Verbs module. Separate the Tx object creation into the Verbs/DevX modules and update the operation structure according to the OS support and the user configuration.
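A hypothetical sketch of the operation structure and its selection at spawn time (names are illustrative; the PMD's real structure has more callbacks). A Windows build can then simply omit the Verbs file from compilation:

```c
#include <stdbool.h>
#include <stdint.h>

struct mlx5_txq_obj;	/* Tx queue object, elided */

struct mlx5_txq_obj_ops {
	int (*txq_obj_new)(struct mlx5_txq_obj *obj, uint16_t idx);
	void (*txq_obj_release)(struct mlx5_txq_obj *obj);
};

extern const struct mlx5_txq_obj_ops ibv_txq_obj_ops;	/* Linux Verbs file */
extern const struct mlx5_txq_obj_ops devx_txq_obj_ops;	/* DevX file */

/* At device spawn, pick the backend the OS and configuration allow. */
static const struct mlx5_txq_obj_ops *
txq_obj_ops_select(bool devx_supported, bool devx_preferred)
{
	return (devx_supported && devx_preferred) ? &devx_txq_obj_ops
						  : &ibv_txq_obj_ops;
}
```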
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# fbd19135 | 28-Sep-2020 | Thomas Monjalon <thomas@monjalon.net>
ethdev: remove old close behaviour
The temporary flag RTE_ETH_DEV_CLOSE_REMOVE is removed. It was introduced in DPDK 18.11 in order to give time for PMDs to migrate.
The old behaviour was to free only the queues when closing a port. The new behaviour is to call rte_eth_dev_release_port(), which does three more tasks:
- trigger the event callback
- reset state and a few pointers
- free all generic port resources
The private port resources must be released in the .dev_close callback.
The .remove callback should:
- call the .dev_close callback
- call rte_eth_dev_release_port()
- free multi-port device shared resources
Despite waiting two years, some drivers have not migrated, so they may hit issues with the incompatible new behaviour. After sending emails, adding logs, and announcing the deprecation, the only remaining solution is to declare these drivers as unmaintained: ionic, liquidio, nfp. Below is a summary of what to implement in those drivers.
* The freeing of private port resources must be moved from the ".remove(device)" function to the ".dev_close(port)" function.
* If a generic resource (.mac_addrs or .hash_mac_addrs) cannot be freed, it must be set to NULL in ".dev_close" function to protect from subsequent rte_eth_dev_release_port() freeing.
* Note 1: The generic resources are freed in rte_eth_dev_release_port(), after ".dev_close" is called in rte_eth_dev_close(), but not when calling ".dev_close" directly from the ".remove" PMD function. That's why rte_eth_dev_release_port() must still be called explicitly from ".remove(device)" after calling the ".dev_close" PMD function.
* Note 2: If a device can have multiple ports, the common resources must be freed only in the ".remove(device)" function.
* Note 3: The port is supposed to be in a stopped state when it is closed. If that is not the case, it is up to the PMD implementation how to react when trying to close a non-stopped port: either try to stop it automatically or just return an error.
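A minimal sketch of the migration pattern described above, for a hypothetical single-port PCI PMD (the example_* helpers stand for driver-specific cleanup):

```c
#include <rte_bus_pci.h>
#include <rte_ethdev_driver.h>	/* rte_eth_dev_allocated/release_port */

static void example_free_queues(struct rte_eth_dev *dev);
static void example_free_private(void *priv);

static int
example_dev_close(struct rte_eth_dev *dev)
{
	example_free_queues(dev);			/* old behaviour kept */
	example_free_private(dev->data->dev_private);	/* moved from .remove */
	/* mac_addrs etc. are freed later by rte_eth_dev_release_port(). */
	return 0;
}

static int
example_pci_remove(struct rte_pci_device *pci_dev)
{
	struct rte_eth_dev *dev =
		rte_eth_dev_allocated(pci_dev->device.name);

	if (dev == NULL)
		return 0;
	example_dev_close(dev);
	/* Note 1: must be called explicitly from .remove. */
	return rte_eth_dev_release_port(dev);
}
```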
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Liron Himi <lironh@marvell.com>
Reviewed-by: Haiyue Wang <haiyue.wang@intel.com>
Acked-by: Jeff Guo <jia.guo@intel.com>
Acked-by: Andrew Rybchenko <arybchenko@solarflare.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>

# bf615b07 | 16-Sep-2020 | Suanming Mou <suanmingm@nvidia.com>
net/mlx5: manage header reformat actions with hashed list
To manage the encap/decap header format actions, the mlx5 PMD used a singly linked list, and lookup and insertion operations took too long if there were millions of objects; this impacted the flow insertion/deletion rate.
In order to optimize the performance, a hashed list is engaged. The list implementation is updated to support non-unique keys with few collisions.
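A self-contained sketch of the idea (not the PMD's mlx5_hlist code): a bucketed hash list keyed by a hash of the reformat data, tolerating a few same-key collisions per bucket, instead of one O(n) linked list.

```c
#include <stdint.h>
#include <sys/queue.h>

#define HLIST_BUCKETS 4096	/* power of two */

struct reformat_entry {
	LIST_ENTRY(reformat_entry) next;
	uint64_t key;		/* non-unique hash of the reformat data */
	/* encap/decap resource body, elided */
};

LIST_HEAD(hlist_bucket, reformat_entry);
static struct hlist_bucket buckets[HLIST_BUCKETS];

static struct reformat_entry *
hlist_lookup(uint64_t key)
{
	struct reformat_entry *e;

	/* Only same-bucket entries are scanned; the caller still compares
	 * the full reformat data on key collisions. */
	LIST_FOREACH(e, &buckets[key & (HLIST_BUCKETS - 1)], next)
		if (e->key == key)
			return e;
	return NULL;
}

static void
hlist_insert(struct reformat_entry *e)
{
	LIST_INSERT_HEAD(&buckets[e->key & (HLIST_BUCKETS - 1)], e, next);
}
```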
Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# c21e5fac | 15-Sep-2020 | Xueming Li <xuemingl@nvidia.com>
net/mlx5: use bond index for netdev operations
In case of bonding, the device ifindex was detected as the PF ifindex, so any operation using the ifindex was applied to the PF instead of the bond device. These operations include MTU get/set, link up/down, MAC address manipulation, etc.
This patch detects the bond interface ifindex and name for a PF that joined a bond interface, and uses them by default for netdev operations.
Cc: stable@dpdk.org
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

# 7aa9892f | 13-Sep-2020 | Michael Baum <michaelba@nvidia.com>
net/mlx5: fix Rx objects creator selection
There are two creators for Rx objects: DevX and Verbs. There are supported DR versions with which a DevX destination TIR flow action cannot be created; with these versions the TIR object should be created by Verbs, which forces all the Rx objects to be created by Verbs.
The selection of the Rx objects creator wrongly didn't take into account the destination TIR action support, which caused a failure in the Rx flows creation.
Select the Verbs creator when destination TIR action creation is not supported by the DR version.
Fixes: 6deb19e1b2d2 ("net/mlx5: separate Rx queue object creations")
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# cbfc6111 | 09-Sep-2020 | Ferruh Yigit <ferruh.yigit@intel.com>
ethdev: move inline device operations
This patch is a preparation to hide the 'struct eth_dev_ops' from applications by moving some device operations from 'struct eth_dev_ops' to 'struct rte_eth_dev'.
The mentioned ethdev APIs are in the data path and are implemented inline for performance reasons.
Exposing 'struct eth_dev_ops' to applications is bad because it is a contract between ethdev and PMDs that does not really need to be known by applications; also, changes in the struct cause ABI breakages, which they shouldn't.
To be able to both keep the APIs inline and hide 'struct eth_dev_ops', the device operations used in the ethdev inline APIs are moved to 'struct rte_eth_dev', to the same level as the Rx/Tx burst functions.
The list of dev_ops moved:
  eth_rx_queue_count_t rx_queue_count;
  eth_rx_descriptor_done_t rx_descriptor_done;
  eth_rx_descriptor_status_t rx_descriptor_status;
  eth_tx_descriptor_status_t tx_descriptor_status;
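A sketch of why the move works: the inline wrapper reads the callback straight from 'struct rte_eth_dev', so 'struct eth_dev_ops' never has to be visible to applications (simplified from the rte_ethdev.h pattern; validation checks elided).

```c
#include <rte_ethdev.h>

static inline int
example_rx_queue_count(uint16_t port_id, uint16_t queue_id)
{
	struct rte_eth_dev *dev = &rte_eth_devices[port_id];

	/* rx_queue_count now sits beside rx_pkt_burst/tx_pkt_burst. */
	return (int)(*dev->rx_queue_count)(dev, queue_id);
}
```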
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Reviewed-by: Andrew Rybchenko <arybchenko@solarflare.com>
Acked-by: David Marchand <david.marchand@redhat.com>
Acked-by: Sachin Saxena <sachin.saxena@nxp.com>

# 0c762e81 | 03-Sep-2020 | Michael Baum <michaelba@nvidia.com>
net/mlx5: share Rx queue drop action code
Move the Rx queue drop action's similar resource allocations from the Verbs module to a shared location.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# 5eaf882e | 03-Sep-2020 | Michael Baum <michaelba@nvidia.com>
net/mlx5: separate Rx queue drop
Separate Rx queue drop creation into both Verbs and DevX modules.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# 6deb19e1 | 03-Sep-2020 | Michael Baum <michaelba@nvidia.com>
net/mlx5: separate Rx queue object creations
In preparation for Windows OS support, the Verbs operations should be separated into another file. This way, the build can easily cut the unsupported Verbs APIs from the compilation process.
Define an operation structure and a DevX module in addition to the existing Linux Verbs module. Separate the Rx object creation into the Verbs/DevX modules and update the operation structure according to the OS support and the user configuration.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# f00f6562 | 25-Aug-2020 | Ophir Munk <ophirmu@mellanox.com>
net/mlx5: remove netlink dependency in shared code
This commit adds a Linux implementation of the routine mlx5_os_mac_addr_flush as a wrapper to the Netlink API, to avoid direct Netlink calls under non-Linux operating systems.
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>

# 3fe88961 | 31-Jul-2020 | Suanming Mou <suanmingm@mellanox.com>
net/mlx5: manage modify actions with hashed list
To manage the header modify actions, the mlx5 PMD used a singly linked list, and lookup and insertion operations took too long if there were millions of objects; this impacted the flow insertion/deletion rate.
In order to optimize the performance, a hashed list is engaged, as sketched for the header reformat actions above. The list implementation is updated to support non-unique keys with few collisions.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>

# 972a1bf8 | 29-Jul-2020 | Viacheslav Ovsiienko <viacheslavo@mellanox.com>
common/mlx5: fix user mode register access command
To detect the timestamp mode configured on the NIC, the mlx5 PMD uses the firmware command ACCESS_REGISTER_USER. This command is relatively new and might not be supported by older firmware versions; it was then rejected, causing annoying messages in the kernel log.
This patch adds a check of the attribute flag reporting whether the firmware supports the command, and avoids the call if it does not.
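A sketch of the capability gate; the attribute struct below is a minimal stand-in and the field name follows the patch context, while mlx5_devx_cmd_register_read() is modeled on the common/mlx5 DevX wrapper:

```c
#include <errno.h>
#include <stdint.h>

/* Minimal stand-ins; the real definitions live in common/mlx5. */
struct hca_attr { uint32_t access_register_user:1; };
int mlx5_devx_cmd_register_read(void *ctx, uint16_t reg_id, uint32_t arg,
				uint32_t *data, uint32_t dw_cnt);

static int
read_register_checked(void *ctx, const struct hca_attr *attr,
		      uint16_t reg_id, uint32_t *data, uint32_t dw_cnt)
{
	if (!attr->access_register_user)
		return -ENOTSUP;	/* old firmware: avoid kernel noise */
	return mlx5_devx_cmd_register_read(ctx, reg_id, 0, data, dw_cnt);
}
```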
Fixes: bb7ef9a96281 ("common/mlx5: add register access DevX routine")
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>

# d462a83c | 21-Jul-2020 | Michael Baum <michaelba@mellanox.com>
net/mlx5: optimize stack memory in probe
The device configuration struct is too large to be passed as a function argument by value.
Call the spawn function with the device configuration by reference.
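An illustrative before/after (struct and function names simplified): each by-value call copied the whole config struct onto the stack, while passing a pointer avoids the copy.

```c
#include <stddef.h>

struct dev_config {
	unsigned int hw_padding:1;
	unsigned int mps;
	unsigned char data[512];	/* stands for many more fields */
};

/* before: static void *dev_spawn(struct dev_config config); */
static void *
dev_spawn(const struct dev_config *config)
{
	(void)config->hw_padding;	/* fields used through the pointer */
	return NULL;			/* spawn body elided */
}
```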
Signed-off-by: Michael Baum <michaelba@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>

# f6d099d7 | 27-Jul-2020 | Parav Pandit <parav@mellanox.com>
common/mlx5: remove class check from class drivers
Now that the mlx5_pci PMD checks for enabled classes and performs probe() and remove() of the associated classes, an individual class driver does not need to check whether another driver is enabled.
Signed-off-by: Parav Pandit <parav@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>