# 07a99de8 | 28-Dec-2020 | Tal Shnaiderman <talshn@nvidia.com>
net/mlx5: wrap glue reg/dereg UMEM per OS
Wrap the glue calls for UMEM registration and deregistration with generic OS calls, since each OS (Linux or Windows) has different glue API parameters.
Signed-off-by: Tal Shnaiderman <talshn@nvidia.com>
Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
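
A minimal sketch of the per-OS wrapper pattern described above; every name below is illustrative, not an actual mlx5 symbol. Shared code calls the generic wrappers, and each OS supplies bodies that match its own glue API.

    /* Generic wrappers over the OS-specific glue (illustrative names). */
    #include <stddef.h>
    #include <stdint.h>

    /* OS-specific glue entry points, assumed to be declared per OS. */
    extern void *glue_umem_reg(void *ctx, void *addr, size_t size, uint32_t access);
    extern int glue_umem_dereg(void *umem);

    void *
    os_umem_reg(void *ctx, void *addr, size_t size, uint32_t access)
    {
        return glue_umem_reg(ctx, addr, size, access);
    }

    int
    os_umem_dereg(void *umem)
    {
        return glue_umem_dereg(umem);
    }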

# 9b9890e2 | 28-Dec-2020 | Ophir Munk <ophirmu@nvidia.com>
net/mlx5: move static asserts to global scope
Some Windows compilers treat static_assert() as a call to another function rather than as a compiler directive that checks type information at compile time. This only occurs when the static_assert call appears inside a function scope. To solve it, move the static_assert calls to global scope in the files where they are used.
Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
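
A small illustration of the move; the struct and message are made up, only the placement of static_assert() matters.

    #include <assert.h>
    #include <stdint.h>

    struct pkt_meta { uint32_t flags; uint32_t id; };

    /* Global (file) scope: accepted by all supported compilers. */
    static_assert(sizeof(struct pkt_meta) == 8, "unexpected pkt_meta size");

    /* Before the change the same assert sat inside a function body,
     * e.g. void init(void) { static_assert(...); }, which some Windows
     * compilers rejected. */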

# 20698c9f | 28-Dec-2020 | Ophir Munk <ophirmu@nvidia.com>
net/mlx5: replace Linux sleep
Replace the Linux APIs usleep() and nanosleep() with rte_delay_us_sleep(). The replacement occurs in shared files compiled under different operating systems.
Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
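
A minimal sketch of the substitution; the delay value and wrapper function are illustrative, while rte_delay_us_sleep() is the DPDK API from rte_cycles.h.

    #include <rte_cycles.h>

    static void
    wait_a_bit(void)
    {
        /* Previously: usleep(10 * 1000); -- Linux-only */
        rte_delay_us_sleep(10 * 1000); /* portable microsecond sleep */
    }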

# b492e288 | 28-Dec-2020 | Ophir Munk <ophirmu@nvidia.com>
net/mlx5: fix freeing packet pacing
Packet pacing is allocated under the condition #ifdef HAVE_MLX5DV_PP_ALLOC. In a similar way, free the packet pacing index under the same condition. This update is required to successfully compile under operating systems which do not support packet pacing.
Fixes: aef1e20ebeb2 ("net/mlx5: allocate packet pacing context")
Cc: stable@dpdk.org
Signed-off-by: Ophir Munk <ophirmu@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
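
A sketch of the guarded free path, under the stated assumption that the glue free call only exists when HAVE_MLX5DV_PP_ALLOC is defined; the function names are illustrative.

    #include <stdint.h>

    #ifdef HAVE_MLX5DV_PP_ALLOC
    extern void glue_pp_free(void *ctx, uint16_t pp_index); /* assumed glue call */
    #endif

    static void
    pp_index_free(void *ctx, uint16_t pp_index)
    {
    #ifdef HAVE_MLX5DV_PP_ALLOC
        glue_pp_free(ctx, pp_index); /* free only where allocation exists */
    #else
        (void)ctx;
        (void)pp_index;
    #endif
    }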

# b15af157 | 19-Nov-2020 | Viacheslav Ovsiienko <viacheslavo@nvidia.com>
net/mlx5: make Tx scheduling xstats names compliant
xstats names for Tx packet scheduling should be compliant with [1]
[1] http://doc.dpdk.org/guides/prog_guide/poll_mode_drv.html?highlight=xstats#extended-statistics-api
Bugzilla ID: 558
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
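
For reference, the compliant names can be listed at runtime through the generic xstats API; a small example (error handling trimmed):

    #include <stdio.h>
    #include <stdlib.h>
    #include <rte_ethdev.h>

    static void
    dump_xstats_names(uint16_t port_id)
    {
        int i, n = rte_eth_xstats_get_names(port_id, NULL, 0);

        if (n <= 0)
            return;
        struct rte_eth_xstat_name *names = calloc(n, sizeof(*names));
        if (names == NULL)
            return;
        n = rte_eth_xstats_get_names(port_id, names, n);
        for (i = 0; i < n; i++)
            printf("%s\n", names[i].name);
        free(names);
    }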

# 41c2bb63 | 28-Oct-2020 | Viacheslav Ovsiienko <viacheslavo@nvidia.com>
net/mlx5: use C11 atomics in packet scheduling
The rte_atomic API is deprecated and needs to be replaced with C11 atomic builtins. Use relaxed ordering and an explicit memory barrier for Clock Queue and timestamp synchronization.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
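
A hedged sketch of the conversion style (variable names are illustrative): relaxed loads and stores for a shared timestamp cache, plus an explicit barrier only where ordering actually matters.

    #include <stdint.h>
    #include <rte_atomic.h>

    static uint64_t ts_cached; /* shared timestamp cache */

    static inline void
    ts_update(uint64_t ts)
    {
        /* Previously: rte_atomic64_set(&ts_cached, ts); */
        __atomic_store_n(&ts_cached, ts, __ATOMIC_RELAXED);
        rte_wmb(); /* make the new value visible before dependent updates */
    }

    static inline uint64_t
    ts_read(void)
    {
        return __atomic_load_n(&ts_cached, __ATOMIC_RELAXED);
    }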

# e7055bbf | 01-Oct-2020 | Michael Baum <michaelba@nvidia.com>
net/mlx5: reposition event queue number field
The eqn field has become a direct field of sh since it is also relevant for Tx and Rx.
Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>

# 1f66ac5b | 25-Aug-2020 | Ophir Munk <ophirmu@mellanox.com>
net/mlx5: remove more Direct Verbs dependencies
Several DV-based structs of type 'struct mlx5dv_devx_XXX' are replaced with 'void *' to enable compilation under non-Linux operating systems. New getter functions were added to retrieve the specific fields that were previously accessed directly.
Replaced structs:
- 'struct mlx5dv_pp *'
- 'struct mlx5dv_devx_event_channel *'
- 'struct mlx5dv_devx_umem *'
- 'struct mlx5dv_devx_uar *'
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
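
A sketch of the opaque-handle-plus-getter pattern; the struct, field, and getter names here are assumptions for illustration only.

    #include <stdint.h>

    struct devx_obj {
        void *umem_obj; /* was: struct mlx5dv_devx_umem * */
    };

    /* The getter hides the Linux-only struct layout from shared code. */
    uint32_t
    devx_umem_get_id(void *umem_obj)
    {
        /* A Linux build would cast back to struct mlx5dv_devx_umem *
         * and return its id; other OSes use their own handle type. */
        (void)umem_obj;
        return 0;
    }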

# 2aba9fc7 | 19-Jul-2020 | Ophir Munk <ophirmu@mellanox.com>
net/mlx5: replace Linux specific calls
The following Linux calls are replaced by their matching rte APIs:
- mmap() ==> rte_mem_map()
- munmap() ==> rte_mem_unmap()
- sysconf(_SC_PAGESIZE) ==> rte_mem_page_size()
Signed-off-by: Ophir Munk <ophirmu@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
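
A short usage example of the portable EAL paging APIs named above (the mapped object is illustrative):

    #include <stddef.h>
    #include <rte_eal_paging.h>

    static void *
    map_one_page(int fd)
    {
        size_t sz = rte_mem_page_size(); /* replaces sysconf(_SC_PAGESIZE) */
        void *va = rte_mem_map(NULL, sz, RTE_PROT_READ | RTE_PROT_WRITE,
                               RTE_MAP_SHARED, fd, 0); /* replaces mmap() */

        return va; /* release later with rte_mem_unmap(va, sz) */
    }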

# ac3fc732 | 28-Jun-2020 | Suanming Mou <suanmingm@mellanox.com>
net/mlx5: convert queue objects to unified malloc
This commit allocates the Rx/Tx queue objects from the unified malloc function.
Signed-off-by: Suanming Mou <suanmingm@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
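
A minimal sketch assuming the driver's common mlx5_malloc()/mlx5_free() helpers with the (flags, size, align, socket) argument order; the queue-object struct is illustrative.

    #include <stdint.h>
    #include <mlx5_malloc.h>

    struct queue_obj { uint32_t id; };

    static struct queue_obj *
    queue_obj_new(int socket)
    {
        /* Zeroed allocation from rte memory on the queue's NUMA socket;
         * release with mlx5_free(). */
        return mlx5_malloc(MLX5_MEM_ZERO | MLX5_MEM_RTE,
                           sizeof(struct queue_obj), 0, socket);
    }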

# 3b025c0c | 16-Jul-2020 | Viacheslav Ovsiienko <viacheslavo@mellanox.com>
net/mlx5: provide send scheduling error statistics
The mlx5 PMD exposes the following newly introduced extended statistics counters to report errors of packet send scheduling on timestamps:
- txpp_err_miss_int - the Rearm Queue interrupt was not handled in time and the service routine might have missed completions
- txpp_err_rearm_queue - reports errors in the Rearm Queue
- txpp_err_clock_queue - reports errors in the Clock Queue
- txpp_err_ts_past - timestamps in the packets being sent were found in the past and were ignored
- txpp_err_ts_future - timestamps in the packets being sent were found in the too distant future (beyond the HW/Clock Queue capability to schedule, typically about 16M tx_pp devarg periods)
- txpp_jitter - estimated jitter in device clocks between 8K completions of the Clock Queue
- txpp_wander - estimated wander in device clocks between 16M completions of the Clock Queue
- txpp_sync_lost - error flag: Clock Queue completion synchronization is lost, accurate packet scheduling cannot be handled, timestamps are ignored, and all ports using scheduling must be restarted
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
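
Any of the counters above can be read by name through the generic xstats API; a brief example (the chosen counter is just one from the list):

    #include <stdint.h>
    #include <rte_ethdev.h>

    static int
    read_txpp_sync_lost(uint16_t port_id, uint64_t *value)
    {
        uint64_t id;

        if (rte_eth_xstats_get_id_by_name(port_id, "txpp_sync_lost", &id) != 0)
            return -1;
        return rte_eth_xstats_get_by_id(port_id, &id, value, 1) == 1 ? 0 : -1;
    }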

# b94d93ca | 16-Jul-2020 | Viacheslav Ovsiienko <viacheslavo@mellanox.com>
net/mlx5: support reading device clock
If the send scheduling feature is engaged, the Clock Queue is created and reliably reports the current device clock counter value. The device clock counter can be read directly from the Clock Queue CQE.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
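
From the application side the counter is reached through the generic ethdev call, presumably the hook this commit implements for mlx5; a short usage example:

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    print_dev_clock(uint16_t port_id)
    {
        uint64_t clk;

        if (rte_eth_read_clock(port_id, &clk) == 0)
            printf("port %u device clock: %" PRIu64 "\n", port_id, clk);
    }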

# 085ff447 | 16-Jul-2020 | Viacheslav Ovsiienko <viacheslavo@mellanox.com>
net/mlx5: convert timestamp to completion index
The application provides timestamps in the Tx mbuf as clocks; the hardware performs scheduling on a Clock Queue completion index match. This patch introduces the timestamp-to-completion-index inline routine.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
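
A hedged sketch of the conversion idea only: given a reference Clock Queue completion (timestamp, index) and the device clocks per completion, the target index follows linearly. All names and details below are assumptions, not the actual routine.

    #include <stdint.h>

    struct clock_ref {
        uint64_t ts;   /* device timestamp of the reference completion */
        uint64_t ci;   /* completion index at that timestamp */
        uint64_t tick; /* device clocks per Clock Queue completion */
    };

    static inline uint64_t
    ts_to_completion_index(const struct clock_ref *ref, uint64_t ts)
    {
        /* Completions advance linearly with the device clock. */
        return ref->ci + (ts - ref->ts) / ref->tick;
    }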

# 77522be0 | 16-Jul-2020 | Viacheslav Ovsiienko <viacheslavo@mellanox.com>
net/mlx5: introduce clock queue service routine
The service routine is invoked periodically on Rearm Queue completion interrupts, typically once every few milliseconds (1-16), to track clock jitter and wander in a robust fashion. It performs the following:
- fetches the completed CQEs for the Rearm Queue
- restarts the Rearm Queue on errors
- pushes new requests to the Rearm Queue to keep it continuously running and pushing cross-channel requests to the Clock Queue
- reads and caches the Clock Queue CQE to be used in the datapath
- gathers statistics to estimate clock jitter and wander
- gathers Clock Queue error statistics
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
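
A bare skeleton of that flow, with every name a stand-in for illustration rather than an actual mlx5 symbol:

    /* Stub steps; the real bodies touch the Rearm/Clock Queue hardware. */
    static void rearm_queue_poll(void) { /* fetch completed Rearm Queue CQEs */ }
    static void rearm_queue_recover(void) { /* restart the Rearm Queue on errors */ }
    static void rearm_queue_refill(void) { /* push new cross-channel requests */ }
    static void clock_cqe_cache(void) { /* cache the Clock Queue CQE for datapath */ }
    static void sched_stats_update(void) { /* estimate jitter/wander, count errors */ }

    /* Invoked from the Rearm Queue completion interrupt every few milliseconds. */
    static void
    txpp_interrupt_handler(void *arg)
    {
        (void)arg;
        rearm_queue_poll();
        rearm_queue_recover();
        rearm_queue_refill();
        clock_cqe_cache();
        sched_stats_update();
    }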

# aef1e20e | 16-Jul-2020 | Viacheslav Ovsiienko <viacheslavo@mellanox.com>
net/mlx5: allocate packet pacing context
This patch allocates the Packet Pacing context from the kernel, configures it according to the requested send scheduling granularity, and assigns it to the Clock Queue.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>
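
A hedged sketch of the allocation call shape, assuming the rdma-core mlx5dv_pp_alloc() interface guarded by HAVE_MLX5DV_PP_ALLOC; the rate-limit context contents are device (PRM) specific and not shown here.

    #ifdef HAVE_MLX5DV_PP_ALLOC
    #include <infiniband/mlx5dv.h>

    static struct mlx5dv_pp *
    pp_alloc(struct ibv_context *ctx, const void *rl_ctx, size_t rl_ctx_sz)
    {
        /* Request a dedicated packet pacing index for the Clock Queue. */
        return mlx5dv_pp_alloc(ctx, rl_ctx_sz, rl_ctx,
                               MLX5DV_PP_ALLOC_FLAGS_DEDICATED_INDEX);
    }
    #endif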

# 551c94c8 | 16-Jul-2020 | Viacheslav Ovsiienko <viacheslavo@mellanox.com>
net/mlx5: create rearm queue for packet pacing
A dedicated Rearm Queue is needed to fire the work requests to the Clock Queue in real time. The Clock Queue should never stop, otherwise clock synchronization might be broken and packet send scheduling would fail. The Rearm Queue uses cross-channel SEND_EN/WAIT operations to provide the requests to the Clock Queue in a robust way.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>

# d133f4cd | 16-Jul-2020 | Viacheslav Ovsiienko <viacheslavo@mellanox.com>
net/mlx5: create clock queue for packet pacing
This patch creates the special completion queue providing reference completions to schedule packet send from other transmitting queues.
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>