#
b15d75b2 |
| 21-Feb-2023 |
Gerry Gribbon <ggribbon@nvidia.com> |
regex/mlx5: fix doorbell record
We were writing a value that should represent the number of items to be processed by hardware. The value being written, (N*4)+3, was off by one; the value should be (N*4)+4, which simplifies to (N+1)*4.
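The arithmetic can be sketched in a couple of lines (function names hypothetical, with N used as in the commit message):

```c
#include <stdint.h>

/* With N as in the commit message, the doorbell record must hold
 * (N * 4) + 4 == (N + 1) * 4; the buggy code wrote (N * 4) + 3,
 * one short of the number of items hardware should process. */
static uint32_t
db_value_buggy(uint32_t n) { return (n * 4) + 3; }

static uint32_t
db_value_fixed(uint32_t n) { return (n + 1) * 4; }
```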
Fixes: 5dfa003db53f ("common/mlx5: fix post doorbell barrier") Cc: stable@dpdk.org
Signed-off-by: Gerry Gribbon <ggribbon@nvidia.com>
|
#
2fa696a2 |
| 21-Feb-2023 |
Gerry Gribbon <ggribbon@nvidia.com> |
regex/mlx5: utilize all available queue pairs
Fix overflow of the free QP mask. Regex used 64 QPs and used a bitmask to select a free QP. The bitmask in use was only 32 bits wide, so it did not allow half of the QPs to be utilised. Upgraded to a 64-bit mask, now using ffsll() instead of ffs().
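The widened mask can be sketched as follows (helper name hypothetical; the equivalent GCC builtin stands in for ffsll() to keep the sketch self-contained):

```c
#include <stdint.h>

/* A set bit marks a free QP. With a 32-bit mask and ffs(), QPs 32..63
 * could never be found; a 64-bit mask scanned with ffsll() (here the
 * GCC builtin) covers all 64. */
static int
alloc_qp(uint64_t *free_mask)
{
    int bit = __builtin_ffsll((long long)*free_mask); /* 1-based lowest set bit */
    if (bit == 0)
        return -1;                      /* no free QP */
    *free_mask &= ~(1ULL << (bit - 1)); /* mark the QP busy */
    return bit - 1;
}
```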
Fixes: 270032608503 ("regex/mlx5: refactor HW queue objects") Cc: stable@dpdk.org
Signed-off-by: Gerry Gribbon <ggribbon@nvidia.com>
|
#
60ffb0d7 |
| 01-Sep-2022 |
Gerry Gribbon <ggribbon@nvidia.com> |
regex/mlx5: support stop on first match
Handle flag RTE_REGEX_OPS_REQ_STOP_ON_MATCH_F.
Signed-off-by: Gerry Gribbon <ggribbon@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com>
|
#
70f1ea71 |
| 07-Oct-2022 |
Gerry Gribbon <ggribbon@nvidia.com> |
regexdev: add maximum number of mbuf segments
Allows an application to query the maximum number of mbuf segments that can be chained together.
Signed-off-by: Gerry Gribbon <ggribbon@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com>
|
#
1f37cb2b |
| 28-Jul-2022 |
David Marchand <david.marchand@redhat.com> |
bus/pci: make driver-only headers private
The pci bus interface is for drivers only. Mark as internal and move the header in the driver headers list.
While at it, cleanup the code:
- fix indentation,
- remove unneeded reference to bus specific singleton object,
- remove unneeded list head structure type,
- reorder the definitions and macros manipulating the bus singleton object,
- remove inclusion of rte_bus.h and fix the code that relied on implicit inclusion.
Signed-off-by: David Marchand <david.marchand@redhat.com> Acked-by: Bruce Richardson <bruce.richardson@intel.com> Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com> Acked-by: Rosen Xu <rosen.xu@intel.com>
|
#
5dfa003d |
| 03-Nov-2021 |
Michael Baum <michaelba@nvidia.com> |
common/mlx5: fix post doorbell barrier
The rdma-core library can map doorbell register in two ways, depending on the environment variable "MLX5_SHUT_UP_BF":
- as regular cached memory, when the variable is either missing or set to zero. This type of mapping may cause significant doorbell register write latency and requires an explicit memory write barrier to mitigate this issue and prevent write combining.
- as non-cached memory, when the variable is present and set to a non-zero value. This type of mapping may impact performance under heavy load, but the explicit write memory barrier is not required, which may improve core performance.
The UAR creation function maps a doorbell in one of the above ways according to the system. At run time, it always adds an explicit memory barrier after writing to it. In cases where the doorbell was mapped as non-cached memory, the explicit memory barrier is unnecessary and may impair performance.
The commit [1] solved this problem for a Tx queue. At run time, it checks the mapping type and provides the memory barrier after writing to a Tx doorbell register only when needed. The mapping type is extracted directly from the uar_mmap_offset field in the queue properties.
This patch shares this code between the drivers and extends the above solution for each of them.
[1] commit 8409a28573d3 ("net/mlx5: control transmit doorbell register mapping")
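The run-time decision can be sketched roughly as below (structure and field names are hypothetical, not the actual mlx5 code; the barrier is stubbed with a counter so its behavior is observable):

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for the real write memory barrier (rte_wmb() in DPDK). */
static unsigned int barriers_issued;
static void write_barrier(void) { barriers_issued++; }

/* Hypothetical UAR state: the doorbell register and whether its
 * mapping is non-cached (db_nc), in which case no barrier is needed. */
struct uar_sketch {
    uint64_t db_reg;   /* stands in for the mapped doorbell register */
    bool db_nc;        /* true: non-cached mapping, skip the barrier */
};

/* Ring the doorbell; issue the explicit barrier only for the cached
 * mapping, where write combining must be prevented. */
static void
post_doorbell(struct uar_sketch *uar, uint64_t val)
{
    uar->db_reg = val;
    if (!uar->db_nc)
        write_barrier();
}
```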
Fixes: f8c97babc9f4 ("compress/mlx5: add data-path functions") Fixes: 8e196c08ab53 ("crypto/mlx5: support enqueue/dequeue operations") Fixes: 4d4e245ad637 ("regex/mlx5: support enqueue") Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com> Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com>
|
#
20489176 |
| 03-Nov-2021 |
Michael Baum <michaelba@nvidia.com> |
common/mlx5: make multi-process MR management port-agnostic
In the multi-process mechanism, there are things that the secondary process does not perform itself but asks the primary process to perform on its behalf. There is a special API for communication between the processes; it receives the parameters necessary for the specific action required, as well as a special structure called mp_id that contains the port number through which the initiating process finds the relevant ETH device.
One of the operations performed through this mechanism is the creation of a memory region, where the secondary process sends the virtual address as a parameter along with the mp_id structure containing the port number. However, once memory area management is shared between the drivers, neither the port number nor the ETH device is relevant to them, so it seems unnecessary to continue communicating between the processes through the mp_id variable.
In this patch we remove the use of the above structure for all MR management, and add to the operations' parameters a pointer to the common device that contains everything needed to create/register an MR.
Fixes: 9f1d636f3ef08 ("common/mlx5: share MR management")
Signed-off-by: Michael Baum <michaelba@nvidia.com> Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> Reviewed-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com>
|
#
334ed198 |
| 03-Nov-2021 |
Michael Baum <michaelba@nvidia.com> |
common/mlx5: remove redundant parameter in MR search
Memory region management has recently been shared between drivers, including the search for caches in the data plane. The initial search in the queue's local linear cache usually yields a result, and one should not continue searching in the next-level caches.
The function that searches in the local cache gets a pointer to the device as a parameter, which is not necessary for its own operation but only for subsequent searches (which, as mentioned, usually do not happen). Passing the device to the function and maintaining it takes some time and has some impact on performance.
Add the pointer to the device as a field of the mr_ctrl structure. The field will be updated during control path and will be used only when needed in the search.
Fixes: fc59a1ec556b ("common/mlx5: share MR mempool registration")
Signed-off-by: Michael Baum <michaelba@nvidia.com> Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> Reviewed-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com>
|
#
9c777ccf |
| 04-Nov-2021 |
Xueming Li <xuemingl@nvidia.com> |
common/mlx5: introduce user index field in completion
On ConnectX devices the completion entry provides a dedicated 24-bit field, which is filled with a static value assigned at Receive Queue creation. This patch declares this field. This is a preparation step for supporting shared RQs; the field is supposed to provide the actual port index while handling the shared receive queue(s).
Signed-off-by: Xueming Li <xuemingl@nvidia.com> Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
|
#
fe375336 |
| 22-Oct-2021 |
Ori Kam <orika@nvidia.com> |
regex/mlx5: add cleanup on stop
When stopping the device we should release all data allocated.
After rte_regexdev_configure(), the QPs are pre-allocated, and will be configured only in rte_regexdev_queue_pair_setup(). That's why the QP jobs array initialization is checked before attempting to destroy the QP.
Signed-off-by: Ori Kam <orika@nvidia.com> Signed-off-by: Ady Agbarih <adypodoman@gmail.com>
|
#
9f1d636f |
| 19-Oct-2021 |
Michael Baum <michaelba@nvidia.com> |
common/mlx5: share MR management
Add global shared MR cache as a field of common device structure. Move MR management to use this global cache for all drivers.
Signed-off-by: Michael Baum <michaelba@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com>
|
#
fb690f71 |
| 19-Oct-2021 |
Michael Baum <michaelba@nvidia.com> |
common/mlx5: share MR top-half search function
Add a function to search in the local linear cache and use it in the drivers instead of their own functions.
Signed-off-by: Michael Baum <michaelba@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com>
|
#
e35ccf24 |
| 19-Oct-2021 |
Michael Baum <michaelba@nvidia.com> |
common/mlx5: share protection domain object
Create shared Protection Domain in common area and add it and its PDN as fields of common device structure.
Use this Protection Domain in all drivers and remove the PD and PDN fields from their private structure.
Signed-off-by: Michael Baum <michaelba@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com>
|
#
ca1418ce |
| 19-Oct-2021 |
Michael Baum <michaelba@nvidia.com> |
common/mlx5: share device context object
Create shared context device in common area and add it as a field of common device. Use this context device in all drivers and remove the ctx field from their private structure.
Signed-off-by: Michael Baum <michaelba@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com>
|
#
27003260 |
| 05-Oct-2021 |
Raja Zidane <rzidane@nvidia.com> |
regex/mlx5: refactor HW queue objects
The mlx5 PMD for the regex class uses an MMO WQE operated by the GGA engine in BF devices. Currently, all the MMO WQEs are managed by the SQ object. Starting from BF3, the queue of the MMO WQEs should be connected to the GGA engine using a new configuration, MMO, that will be supported only in the QP object. The FW introduced new capabilities to define whether the MMO configuration should be configured for the GGA queue. Replace all the GGA queue objects with QPs, setting the MMO configuration according to the new FW capabilities.
Signed-off-by: Raja Zidane <rzidane@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com>
|
#
51d73964 |
| 08-Aug-2021 |
Thomas Monjalon <thomas@monjalon.net> |
regex/mlx5: fix minsize build
Error occurs when configuring meson with --buildtype=minsize with GCC 11.1.0:
drivers/regex/mlx5/mlx5_regex_fastpath.c:398:17: error: ‘len’ may be used uninitialized in this function [-Werror=maybe-uninitialized]
    complete_umr_wqe(qp, sq, &qp->jobs[mkey_job_id], sq->pi,
                     klm_num, len);
drivers/regex/mlx5/mlx5_regex_fastpath.c:315:31: note: ‘len’ was declared here
    uint32_t klm_num = 0, len;
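The shape of the warning, and the usual remedy of initializing at the declaration, can be reduced to a small sketch (this is an illustrative reduction, not the actual driver code or necessarily the exact fix applied):

```c
#include <stdbool.h>
#include <stdint.h>

/* The compiler cannot prove that every path through the loop assigns
 * `len` before the final use, so -Wmaybe-uninitialized fires under some
 * optimization levels. Initializing `len` at its declaration silences
 * the warning on all paths. */
static uint32_t
build_job(const uint32_t *seg_len, int nb_segs, bool scattered)
{
    uint32_t klm_num = 0, len = 0;   /* was: uint32_t klm_num = 0, len; */

    for (int i = 0; i < nb_segs; i++) {
        if (scattered)
            len += seg_len[i];       /* only assigned on this path */
        klm_num++;
    }
    return klm_num * 1000 + len;     /* uses len on every path */
}
```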
Signed-off-by: Thomas Monjalon <thomas@monjalon.net> Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
|
#
29ca3215 |
| 12-Jul-2021 |
Michael Baum <michaelba@nvidia.com> |
regex/mlx5: fix memory region unregistration
The issue can cause illegal physical address access while huge-page A is released and huge-page B is allocated on the same virtual address. The old MR can be matched using the virtual address of huge-page B, but the HW will access the physical address of huge-page A, which is no longer part of the DPDK process.
Register a driver callback for memory event in order to free out all the MRs of memory that is going to be freed from the DPDK process.
Fixes: cda883bbb655 ("regex/mlx5: add dynamic memory registration to datapath") Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com>
|
#
423719a3 |
| 01-Jul-2021 |
Michael Baum <michaelba@nvidia.com> |
regex/mlx5: fix size of setup constants
The constant representing the size of the metadata is defined as a 32-bit unsigned int. Similarly, the constant representing the maximal output is also defined as a 32-bit unsigned int.
There is a potentially overflowing expression when those constants are evaluated using 32-bit arithmetic and then used in a context that expects an expression of type size_t, which might be 64-bit.
Change the size of the above constants to size_t.
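The overflow mode can be illustrated with hypothetical constant values (the actual values are not shown in this log; the sketch assumes an LP64 target where size_t is 64-bit):

```c
#include <stddef.h>
#include <stdint.h>

/* With a 32-bit constant, the multiplication below is evaluated in
 * 32-bit arithmetic and wraps BEFORE being widened to size_t. Defining
 * the constant as size_t keeps the whole expression 64-bit on LP64. */
#define OUT_SIZE_U32  ((uint32_t)1 << 20)   /* buggy: 32-bit arithmetic */
#define OUT_SIZE_SZ   ((size_t)1 << 20)     /* fixed: size_t arithmetic */

static size_t
total_buf_size_buggy(uint32_t nb_desc)
{
    return OUT_SIZE_U32 * nb_desc;   /* wraps at 2^32, then widened */
}

static size_t
total_buf_size_fixed(uint32_t nb_desc)
{
    return OUT_SIZE_SZ * nb_desc;    /* nb_desc promoted to size_t */
}
```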
Fixes: 30d604bb1504 ("regex/mlx5: fix type of setup constants") Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com>
|
#
330a70b7 |
| 07-Apr-2021 |
Suanming Mou <suanmingm@nvidia.com> |
regex/mlx5: add data path scattered mbuf process
A UMR (User-Mode Memory Registration) WQE can present data buffers scattered across multiple mbufs through a single indirect mkey. Taking advantage of the UMR WQE, the scattered mbufs of one operation can be presented via an indirect mkey, so the RegEx engine, which accepts only one mkey, can now process the whole scattered mbuf in one operation.
The maximum number of scattered mbufs supported in one UMR WQE is now defined as 64. Mbufs from multiple operations can also be combined into one UMR WQE if there is enough space in the KLM array, since each operation addresses its own mbuf content by the mkey's address and length. However, one operation's scattered mbufs cannot be placed in two different UMR WQEs' KLM arrays; if a UMR WQE's KLM array does not have enough free space for an operation, an extra UMR WQE is engaged.
In case the UMR WQE's indirect mkey would be wrapped over by the SQ's WQE advance, the mkey index used by the UMR WQE should be the index of the last RegEx WQE in the operations. As one operation consumes one WQE set, building the RegEx WQEs in reverse helps address the mkey more efficiently. When the operations in one burst consume multiple mkeys and the mkey KLM array is full, the reverse WQE set index will always be the last one of the new mkey's for the new UMR WQE.
In GGA mode, the SQ WQE memory layout becomes UMR/NOP and RegEx WQEs interleaved. The UMR (or NOP) plus RegEx WQE pair can be called a WQE set. The SQ's pi and ci are also advanced per WQE set, not per WQE.
For operations that don't have a scattered mbuf, the mbuf's mkey is used directly and the WQE set combination is NOP + RegEx. For operations that have a scattered mbuf but share the UMR WQE with others, the WQE set combination is also NOP + RegEx. For operations that complete the UMR WQE, the combination is UMR + RegEx.
Signed-off-by: John Hurley <jhurley@nvidia.com> Signed-off-by: Suanming Mou <suanmingm@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com>
|
#
88e2a46d |
| 07-Jan-2021 |
Ori Kam <orika@nvidia.com> |
regex/mlx5: support priority match
The high priority match request flag means that the RegEx engine should stop on the first match.
This commit adds this flag check to the RegEx engine.
Signed-off-by: Ori Kam <orika@nvidia.com>
|
#
2cace110 |
| 07-Jan-2021 |
Ori Kam <orika@nvidia.com> |
regex/mlx5: fix support for group id
In order to know which groups in the RegEx engine should be used, there is a need to check the req_flags.
This commit adds the missing check.
Fixes: 4d4e245ad637 ("regex/mlx5: support enqueue") Cc: stable@dpdk.org
Signed-off-by: Ori Kam <orika@nvidia.com>
|
#
9de7b160 |
| 06-Jan-2021 |
Michael Baum <michaelba@nvidia.com> |
regex/mlx5: move DevX SQ creation to common
Using common function for DevX SQ creation.
Signed-off-by: Michael Baum <michaelba@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com>
|
#
3ddf5706 |
| 06-Jan-2021 |
Michael Baum <michaelba@nvidia.com> |
regex/mlx5: move DevX CQ creation to common
Using common function for DevX CQ creation.
Signed-off-by: Michael Baum <michaelba@nvidia.com> Acked-by: Matan Azrad <matan@nvidia.com>
|
#
9b27a37b |
| 17-Dec-2020 |
Ori Kam <orika@nvidia.com> |
regex/mlx5: add response flags
This commit propagates the response flags from the RegEx engine.
Signed-off-by: Francis Kelly <fkelly@nvidia.com> Signed-off-by: Ori Kam <orika@nvidia.com>
|
#
30d604bb |
| 18-Nov-2020 |
Michael Baum <michaelba@nvidia.com> |
regex/mlx5: fix type of setup constants
The constant representing the size of the metadata is defined as a regular number (32-bit signed), even though all of its uses request an unsigned int variable. Similarly the constant representing the maximal output is also defined as a regular number, even though all of its uses request an unsigned int variable.
Change the type of the above constants to unsigned.
Fixes: 5f41b66d12cd ("regex/mlx5: setup fast path") Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com>
|