# 3dfa7877 | 04-Jun-2024 | Kiran Vedere <kiranv@nvidia.com>

net/mlx5: add hardware queue object context dump

Add a debug capability to the mlx5 PMD to dump the SQ/RQ/CQ HW object context for a given port/queue. The context dump can provide real-time information on the cause of certain Tx/Rx failures.

Signed-off-by: Kiran Vedere <kiranv@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
# 1944fbc3 | 05-Jun-2024 | Suanming Mou <suanmingm@nvidia.com>

net/mlx5: support flow match with external Tx queue

To use externally created Tx queues in RTE_FLOW_ITEM_TX_QUEUE, this commit provides the map and unmap functions to convert an externally created SQ's DevX ID to a DPDK flow item Tx queue ID.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
# f5177bdc | 25-Jan-2024 | Michael Baum <michaelba@nvidia.com>

net/mlx5: add GENEVE TLV options parser API

Add a new private API to create/destroy a parser for GENEVE TLV options.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Acked-by: Suanming Mou <suanmingm@nvidia.com>
# 86647d46 | 31-Oct-2023 | Thomas Monjalon <thomas@monjalon.net>

net/mlx5: add global API prefix to public constants

The file rte_pmd_mlx5.h is a public API, so its components must be prefixed with RTE_PMD_.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
# 5f5e2f86 | 26-Jan-2023 | Alexander Kozyrev <akozyrev@nvidia.com>

net/mlx5: define index register for linear tables

Set MLX5_LINEAR_HASH_TAG_INDEX as a special ID for the TAG item: it holds the index in a linear table that a packet should land in. This rule index uses the upper 16 bits of REG_C_3; handle this TAG item in the modify_field API for setting the index.

Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
# 2eece379 | 15-Feb-2023 | Rongwei Liu <rongweil@nvidia.com>

net/mlx5: support live migration

When a DPDK application must be upgraded, the traffic downtime should be shortened as much as possible. During the migration time, the old application may stay alive while the new application is starting and being configured.

In order to optimize the switch to the new application, the old application may need to be aware of the presence of the new application being prepared. This is achieved with a new API allowing the user to set the new application to the standby state, and to the active state later.

The added function tries to apply the new mode to all probed mlx5 ports. To make this API simple and easy to use, the same flags have to be accepted by all devices.

This is the scenario of operations in the old and new applications:
- device: already configured by the old application
- new: start as active
- new: probe the same device
- new: set as standby
- new: configure the device
- device: has configurations from old and new applications
- old: clear its device configuration
- device: has only 1 configuration, from the new application
- new: set as active
- device: downtime for connecting all to the new application
- old: shutdown

The active mode means network handling configurations are programmed to the HW immediately, with no behavior change. This is the default state. The standby mode means configurations are queued in the HW. If there is no application in active mode, any configuration is effective immediately.

Signed-off-by: Rongwei Liu <rongweil@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
# 1094dd94 | 28-Oct-2022 | David Marchand <david.marchand@redhat.com>

cleanup compat header inclusions

With symbols going through experimental/stable stages, we accumulated a lot of discrepancies about inclusion of the rte_compat.h header.

Some headers are including it where unneeded, while others rely on implicit inclusion.

Fix unneeded inclusions:
$ git grep -l include..rte_compat.h |
    xargs grep -LE '__rte_(internal|experimental)' |
    xargs sed -i -e '/#include..rte_compat.h/d'

Fix missing inclusion, by inserting rte_compat.h before the first inclusion of a DPDK header:
$ git grep -lE '__rte_(internal|experimental)' |
    xargs grep -L include..rte_compat.h |
    xargs sed -i -e '0,/#include..\(rte_\|.*pmd.h.$\)/{ s/\(#include..\(rte_\|.*pmd.h.$\)\)/#include <rte_compat.h>\n\1/ }'

Fix missing inclusion, by inserting rte_compat.h after the last inclusion of a non-DPDK header:
$ for file in $(git grep -lE '__rte_(internal|experimental)' | xargs grep -L include..rte_compat.h); do
    tac $file > $file.$$
    sed -i -e '0,/#include../{ s/\(#include..*$\)/#include <rte_compat.h>\n\n\1/ }' $file.$$
    tac $file.$$ > $file
    rm $file.$$
  done

Fix missing inclusion, by inserting rte_compat.h after the header guard:
$ git grep -lE '__rte_(internal|experimental)' |
    xargs grep -L include..rte_compat.h |
    xargs sed -i -e '0,/#define/{ s/\(#define .*$\)/\1\n\n#include <rte_compat.h>/ }'

And finally, exclude rte_compat.h itself:
$ git checkout lib/eal/include/rte_compat.h

At the end of all this, we have a clean tree:
$ git grep -lE '__rte_(internal|experimental)' | xargs grep -L include..rte_compat.h
buildtools/check-symbols.sh
devtools/checkpatches.sh
doc/guides/contributing/abi_policy.rst
doc/guides/rel_notes/release_20_11.rst
lib/eal/include/rte_compat.h

Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
# 26e1eaf2 | 20-Oct-2022 | Dariusz Sosnowski <dsosnowski@nvidia.com>

net/mlx5: support device control for E-Switch default rule

This patch adds support for the fdb_def_rule_en device argument to HW Steering, which controls:
- the creation of the default FDB jump flow rule;
- the ability of the user to create transfer flow rules in the root table.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
# 2235fcda | 16-Jun-2022 | Spike Du <spiked@nvidia.com>

net/mlx5: add API to configure host port shaper

The host port shaper can be configured with QSHR (QoS Shaper Host Register). Add a check in the build files to enable or disable this function.

The host shaper configuration affects all the ethdev ports belonging to the same host port.

The host shaper can configure the shaper rate and lwm-triggered mode for a host port. The shaper limits the rate of traffic from the host port to the wire port. If lwm-triggered is enabled, a 100 Mbps shaper is enabled automatically when one of the host port's Rx queues receives an available descriptor threshold event.

Signed-off-by: Spike Du <spiked@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
# 80f872ee | 24-Feb-2022 | Michael Baum <michaelba@nvidia.com>

net/mlx5: add external Rx queue mapping API

An external queue is a queue that has been created and is managed outside the PMD. The queue's owner might use the PMD to generate flow rules using these external queues.

When the queue is created in hardware, it is given an ID represented by 32 bits. In contrast, the index of the queues in the PMD is represented by 16 bits. To enable the use of the PMD to generate flow rules, the queue owner must provide a mapping between the HW index and a 16-bit index corresponding to the ethdev API.

This patch adds an API enabling the insertion/cancellation of a mapping between a HW queue ID and an ethdev queue ID.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
# 8e83ba28 | 16-Dec-2021 | Elena Agostini <eagostini@nvidia.com>

net/mlx5: add C++ include guard to public header

The support for linking rte_pmd_mlx5.h functions with C++ applications was missing.

Signed-off-by: Elena Agostini <eagostini@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
# 23f627e0 | 27-Oct-2020 | Bing Zhao <bingz@nvidia.com>

net/mlx5: add flow sync API

When creating a flow, the rule itself might not take effect immediately once the function call returns with success. It takes some time for the steering to synchronize with the hardware.

If the application wants a packet sent after flow creation to hit that flow, this flow sync API can be used to clear the steering HW cache and enforce that the next packet hits the latest rules.

For Tx, usually the NIC Tx domain and/or the FDB domain should be synchronized, depending on the domain in which the flow was created.

The application could also try to synchronize the NIC Rx and/or the FDB domain for the ingress packets.

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
# efa79e68 | 29-Jan-2020 | Ori Kam <orika@mellanox.com>

net/mlx5: support fine grain dynamic flag

The inline feature is designed to save PCI bandwidth by copying some of the data to the WQE. This feature, if enabled, works for all packets.

In some cases, when using external memory, the PCI bandwidth is not relevant since the memory can be accessed by other means.

This commit introduces the ability to control inlining with mbuf granularity.

In order to use this feature, the application should register the field name and restart the port.

Signed-off-by: Ori Kam <orika@mellanox.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@mellanox.com>
Acked-by: Matan Azrad <matan@mellanox.com>