# e7750639 | 10-Jan-2025 | Andre Muezerie <andremue@linux.microsoft.com>
drivers: replace packed attributes

MSVC struct packing is not compatible with GCC. Replace the macro __rte_packed with __rte_packed_begin, which pushes the existing pack value and sets packing to 1 byte, and the macro __rte_packed_end, which restores the pack value that was in effect before the push.

__rte_packed_end is deliberately implemented to trigger an MSVC compiler warning if no packing has been pushed, making it easy to identify locations where __rte_packed_begin is missing.

Signed-off-by: Andre Muezerie <andremue@linux.microsoft.com>
Reviewed-by: Tyler Retzlaff <roretzla@linux.microsoft.com>

Revision tags: v24.11, v24.11-rc4, v24.11-rc3, v24.11-rc2, v24.11-rc1, v24.07, v24.07-rc4, v24.07-rc3, v24.07-rc2

# d54e82e1 | 07-Jul-2024 | Gregory Etelson <getelson@nvidia.com>
net/mlx5: fix indexed pool resize

On success, indexed pool resize sets the maximal number of pool entries to the value of the `num_entries` parameter.

The patch fixes the maximal pool entries assignment and adds an `error` parameter to log error types.

Fixes: 89578504edd9 ("net/mlx5: add ipool resize function")
Cc: stable@dpdk.org

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>

Revision tags: v24.07-rc1

# 3be8d0d2 | 24-May-2024 | Thomas Monjalon <thomas@monjalon.net>
net/mlx5: remove redundant macro

The macro MLX5_BITSHIFT() is not used anymore, and is redundant with RTE_BIT64(), so it can be removed.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>

# e12a0166 | 14-May-2024 | Tyler Retzlaff <roretzla@linux.microsoft.com>
drivers: use stdatomic API

Replace the use of gcc builtin __atomic_xxx intrinsics with the corresponding rte_atomic_xxx optional rte stdatomic API.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>

# 27595cd8 | 15-Apr-2024 | Tyler Retzlaff <roretzla@linux.microsoft.com>
drivers: move alignment attribute on types for MSVC

Move the location of __rte_aligned(a) to the new conventional location. The new placement, between {struct,union} and the tag, allows the desired alignment to be imparted on the type regardless of the toolchain being used, for both C and C++. Additionally, it avoids confusion by Doxygen when generating documentation.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>

Revision tags: v24.03, v24.03-rc4, v24.03-rc3, v24.03-rc2

# 89578504 | 28-Feb-2024 | Maayan Kashani <mkashani@nvidia.com>
net/mlx5: add ipool resize function

Before this patch, the ipool size could be fixed by setting max_idx in mlx5_indexed_pool_config upon ipool creation, or it could be auto-resized up to the maximum limit by setting max_idx to zero upon ipool creation, in which case the saved value is the maximum index possible.

This patch adds an ipool_resize API that enables updating the value of max_idx in case it is not set to the maximum, meaning not in auto-resize mode. It enables the allocation of new trunks when using malloc/zmalloc, up to the max_idx limit. Note that the resize number of entries must be divisible by trunk_size.

Signed-off-by: Maayan Kashani <mkashani@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>

Revision tags: v24.03-rc1, v23.11, v23.11-rc4, v23.11-rc3, v23.11-rc2, v23.11-rc1, v23.07, v23.07-rc4, v23.07-rc3, v23.07-rc2, v23.07-rc1, v23.03, v23.03-rc4, v23.03-rc3, v23.03-rc2, v23.03-rc1, v22.11, v22.11-rc4, v22.11-rc3, v22.11-rc2

# 04a4de75 | 20-Oct-2022 | Michael Baum <michaelba@nvidia.com>
net/mlx5: support flow age action with HWS

Add support for the AGE action for HW steering. This patch includes:

1. Add new structures to manage aging.
2. Initialize all of them in the configure function.
3. Implement a per-second aging check using the CNT background thread.
4. Enable the AGE action in flow create/destroy operations.
5. Implement a queue-based function to report aged flow rules.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

Revision tags: v22.11-rc1, v22.07, v22.07-rc4, v22.07-rc3, v22.07-rc2, v22.07-rc1, v22.03, v22.03-rc4, v22.03-rc3, v22.03-rc2

# ad98ff6c | 15-Feb-2022 | Suanming Mou <suanmingm@nvidia.com>
net/mlx5: remove unused function

The mlx5_l3t_prepare_entry() function is not used anymore. This commit removes it.

Fixes: 92ef4b8f1688 ("ethdev: remove deprecated shared counter attribute")
Cc: stable@dpdk.org

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

Revision tags: v22.03-rc1

# 7be78d02 | 29-Nov-2021 | Josh Soref <jsoref@gmail.com>
fix spelling in comments and strings

The tool comes from https://github.com/jsoref

Signed-off-by: Josh Soref <jsoref@gmail.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>

Revision tags: v21.11, v21.11-rc4, v21.11-rc3, v21.11-rc2, v21.11-rc1, v21.08, v21.08-rc4, v21.08-rc3, v21.08-rc2

# 9c373c52 | 13-Jul-2021 | Suanming Mou <suanmingm@nvidia.com>
common/mlx5: move list utility from net driver

Hash list is planned to be implemented with the cache list code. This commit moves the list utility to the common directory.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# 679f46c7 | 13-Jul-2021 | Matan Azrad <matan@nvidia.com>
net/mlx5: allocate list memory in create function

Currently, the list memory is allocated by the list API caller. Move the allocation into the create API in order to keep consistency with the hlist utility.

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>

# a603b55a | 13-Jul-2021 | Matan Azrad <matan@nvidia.com>
net/mlx5: manage list cache entries release

When a cache entry is allocated by lcore A and released by lcore B, the driver should synchronize access to the cache list of lcore A.

The design decision is to manage a counter per lcore cache that is increased atomically when a non-original lcore decreases the reference counter of a cache entry to 0.

In the list register operation, before the running lcore starts a lookup in its cache, it checks the counter in order to free invalid entries in its cache.

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>

# 0b4ce17a | 13-Jul-2021 | Matan Azrad <matan@nvidia.com>
net/mlx5: minimize list critical sections

The mlx5 internal list utility is thread safe. In order to synchronize list access between threads, an RW lock is taken for the critical sections.

The create/remove/clone/clone_free operations are in the critical sections. These operations are heavy, and they make the critical sections heavy because they perform memory and other resource allocations and deallocations.

Move the operations out of the critical sections and use a generation counter in order to detect parallel allocations.

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>

# 491b7137 | 13-Jul-2021 | Matan Azrad <matan@nvidia.com>
net/mlx5: add per-lcore cache to the list utility

When an mlx5 list object is accessed by multiple cores, the list lock counter is written by all the cores all the time, which increases cache misses in the memory caches. In addition, when one thread accesses the list for an add/remove/lookup operation, all the other threads coming to operate on the list are stuck on the lock.

Add a per-lcore cache to allow thread manipulations to be lockless when the list objects are mostly reused. Synchronization with atomic operations is done in order to allow threads to unregister an entry from another thread's cache.

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>

# e78e5408 | 13-Jul-2021 | Matan Azrad <matan@nvidia.com>
net/mlx5: remove cache term from the list utility

The internal mlx5 list tool is used mainly when the list objects need to be synchronized between multiple threads. The "cache" term is used in the internal mlx5 list API, but the next enhancements of this tool will use the "cache" term for per-thread cache management. To prevent confusion, remove the current "cache" term from the API's names.

Signed-off-by: Matan Azrad <matan@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>

# 42f46339 | 13-Jul-2021 | Suanming Mou <suanmingm@nvidia.com>
net/mlx5: support indexed pool non-lcore operations

This commit supports indexed pool operations from non-lcore threads with an extra cache and an lcore lock.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# 64a80f1c | 13-Jul-2021 | Suanming Mou <suanmingm@nvidia.com>
net/mlx5: add indexed pool iterator

In some cases, an application may want to know all the allocated indexes in order to apply some operation to each of them. This commit adds indexed pool functions to support a foreach operation.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# d15c0946 | 13-Jul-2021 | Suanming Mou <suanmingm@nvidia.com>
net/mlx5: add indexed pool local cache

For objects that need efficient index allocation and free, a local cache is very helpful.

A two-level cache is introduced to allocate and free indexes more efficiently: one level is local and the other global. The global cache is able to save all the allocated indexes, which means allocated indexes are never freed back to the trunks. Once the local cache is full, the extra indexes are flushed to the global cache. Once the local cache is empty, it first tries to fetch more indexes from the global cache; if the global cache is also empty, a new trunk with more indexes is allocated.

This commit adds the new local cache mechanism for the indexed pool.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# 58ecd3ad | 13-Jul-2021 | Suanming Mou <suanmingm@nvidia.com>
net/mlx5: allow limiting the indexed pool maximum index

Some ipool instances in the driver are used as ID/index allocators and add extra logic in order to work with a limited range of index values.

Add a new ipool configuration to specify the maximum index value. The ipool will ensure that no index bigger than the maximum value is provided. Use this configuration in the ID allocator cases instead of the current logic. This patch makes the maximum ID configurable for the index pool.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

Revision tags: v21.08-rc1, v21.05, v21.05-rc4, v21.05-rc3, v21.05-rc2

# 25245d5d | 04-May-2021 | Shiri Kuzin <shirik@nvidia.com>
common/mlx5: share hash list tool

In order to use the hash list defined in net in other drivers, the hash list is moved to the common utilities. In addition, the log definition was moved from the common utilities to a dedicated new log file in common in order to prevent a conflict.

Signed-off-by: Shiri Kuzin <shirik@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

Revision tags: v21.05-rc1

# c123b821 | 20-Apr-2021 | Suanming Mou <suanmingm@nvidia.com>
net/mlx5: support three level table walk

This commit adds table entry walk for the three level table.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# 2d2546ad | 15-Mar-2021 | Asaf Penso <asafp@nvidia.com>
common/mlx5: align log prefix across drivers

Some mlx5 PMDs define the log prefix as "mlx5_pmd" while others use "pmd_mlx5". The patch aligns all PMDs to use the "mlx5_pmd" format.

Signed-off-by: Asaf Penso <asafp@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>

Revision tags: v21.02, v21.02-rc4, v21.02-rc3, v21.02-rc2, v21.02-rc1

# f5b0aed2 | 03-Dec-2020 | Suanming Mou <suanmingm@nvidia.com>
net/mlx5: optimize hash list entry memory

Currently, the hash list saves the hash key in the hash entry, and the key is mostly used only to get the bucket index.

Saving the entire 64-bit key in the entry is not a good option when the key is only used to get the bucket index: the 64 bits cost extra memory per entry, while the signature data in the key mostly uses only 32 bits. And in the unregister function, the key in the entry causes an extra bucket index calculation.

This commit saves the bucket index in the entry instead of the hash key. For hash lists like table, tag and mreg_copy, which save signature data in the key, the signature data is moved to the resource data struct itself.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# d14cbf3d | 03-Dec-2020 | Suanming Mou <suanmingm@nvidia.com>
net/mlx5: optimize hash list synchronization

Since every hash table operation relates to one dedicated bucket, the hash table lock and gen_cnt can be allocated per bucket.

Currently, the hash table uses one global lock to protect all the buckets. That global lock prevents different buckets from being operated on at the same time, which hurts hash table performance. And a gen_cnt updated for the entire hash table causes incorrect redundant list re-searches.

This commit moves the lock and gen_cnt to the bucket, allowing entries in different buckets to be operated on more efficiently.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

Revision tags: v20.11, v20.11-rc5, v20.11-rc4, v20.11-rc3, v20.11-rc2

# a12c188b | 28-Oct-2020 | Suanming Mou <suanmingm@nvidia.com>
net/mlx5: remove unused hash list operations

In previous commits the hash list objects have been converted to the new thread-safe hash list. The legacy hash list code can be removed now.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>