| ff4a29d1 | 28-Oct-2021 |
Radu Nicolau <radu.nicolau@intel.com> |
ipsec: support TSO
Add support for transmit segmentation offload to inline crypto processing mode. This offload is not supported by other offload modes, as at a minimum it requires inline crypto for IPsec to be supported on the network interface.
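As an illustration (not code from the patch), an application might mark an outbound mbuf for TSO on a port with inline crypto IPsec enabled roughly as below; flag names are the 21.11 ones, and the MSS and header lengths are assumptions:

  #include <rte_ether.h>
  #include <rte_ip.h>
  #include <rte_mbuf.h>
  #include <rte_tcp.h>

  /* Hedged sketch: request TSO plus inline crypto for an outbound mbuf.
   * MSS and header lengths are illustrative. */
  static void
  mark_for_tso(struct rte_mbuf *m, uint16_t mss)
  {
          m->ol_flags |= RTE_MBUF_F_TX_TCP_SEG | RTE_MBUF_F_TX_SEC_OFFLOAD;
          m->tso_segsz = mss;            /* payload bytes per segment */
          m->l2_len = RTE_ETHER_HDR_LEN;
          m->l3_len = sizeof(struct rte_ipv4_hdr);
          m->l4_len = sizeof(struct rte_tcp_hdr);
  }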
Signed-off-by: Declan Doherty <declan.doherty@intel.com>
Signed-off-by: Radu Nicolau <radu.nicolau@intel.com>
Signed-off-by: Abhijit Sinha <abhijit.sinha@intel.com>
Signed-off-by: Daniel Martin Buckley <daniel.m.buckley@intel.com>
Acked-by: Fan Zhang <roy.fan.zhang@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
|
| 1c559ee8 | 26-Oct-2021 |
Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com> |
cryptodev: add telemetry endpoint for capabilities
Add telemetry endpoint for getting cryptodev capabilities.
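A hypothetical query in the telemetry console format shown in the next entry; the endpoint name (/cryptodev/caps here) and output shape are assumptions, not confirmed by this log:

  --> /cryptodev/caps,0
  {"/cryptodev/caps": {...}}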
Signed-off-by: Gowrishankar Muthukrishnan <gmuthukrishn@marvell.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
|
| d3d98f5c | 26-Oct-2021 |
Rebecca Troy <rebecca.troy@intel.com> |
cryptodev: support telemetry
The cryptodev library now registers commands with telemetry, and implements the corresponding callback functions. These commands allow a list of cryptodevs to be queried, as well as info and stats for the corresponding cryptodev.
An example usage can be seen below:
Connecting to /var/run/dpdk/rte/dpdk_telemetry.v2
{"version": "DPDK 21.11.0-rc0", "pid": 1135019, "max_output_len": 16384}
--> /
{"/": ["/", "/cryptodev/info", "/cryptodev/list", "/cryptodev/stats", ...]}
--> /cryptodev/list
{"/cryptodev/list": [0,1,2,3]}
--> /cryptodev/info,0
{"/cryptodev/info": {"device_name": "0000:1c:01.0_qat_sym", "max_nb_queue_pairs": 2}}
--> /cryptodev/stats,0
{"/cryptodev/stats": {"enqueued_count": 0, "dequeued_count": 0, "enqueue_err_count": 0, "dequeue_err_count": 0}}
Signed-off-by: Rebecca Troy <rebecca.troy@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
Acked-by: Akhil Goyal <gakhil@marvell.com>
|
| ab4bb424 | 02-Nov-2021 |
Maxime Coquelin <maxime.coquelin@redhat.com> |
vhost: rename driver callbacks struct
As previously announced, this patch renames struct vhost_device_ops to struct rte_vhost_device_ops.
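A minimal sketch of how a backend might use the renamed struct; the callback bodies and socket path are placeholders:

  #include <rte_vhost.h>

  static int new_device_cb(int vid) { (void)vid; return 0; }
  static void destroy_device_cb(int vid) { (void)vid; }

  static const struct rte_vhost_device_ops ops = {
          .new_device = new_device_cb,
          .destroy_device = destroy_device_cb,
  };

  /* rte_vhost_driver_callback_register("/tmp/vhost.sock", &ops); */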
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Reviewed-by: Chenbo Xia <chenbo.xia@intel.com>
|
| 2c9cd45d | 02-Nov-2021 |
Dmitry Kozlyuk <dkozlyuk@nvidia.com> |
ethdev: add capability to keep shared objects on restart
rte_flow_action_handle_create() did not mention what happens with an indirect action when a device is stopped and started again. It is natural for some indirect actions, like counter, to be persistent. Keeping others at least saves application time and complexity. However, not all PMDs can support it, or the support may be limited by particular action kinds, that is, combinations of action type and the value of the transfer bit in its configuration.
Add a device capability to indicate if at least some indirect actions are kept across the above sequence. Without this capability the behavior is still unspecified, and the application is required to destroy the indirect actions before stopping the device. In the future, indirect actions may not be the only type of objects shared between flow rules. The capability bit intends to cover all possible types of such objects, hence its name.
Declare that the application can test for the persistence of a particular indirect action kind by attempting to create an indirect action of that kind when the device is stopped and checking for the specific error type. This is logical because if the PMD can create an indirect action when the device is not started and use it after the start happens, it is natural that it can move its internal flow shared object to the same state when the device is stopped and restore the state when the device is started.
Indirect action persistence across reconfiguration is not required. In case a PMD cannot keep the indirect actions across reconfiguration, it is allowed just to report an error. The application must then flush the indirect actions before attempting it.
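A minimal sketch of testing the capability, assuming the bit is named RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP as in the 21.11 release:

  #include <stdbool.h>
  #include <rte_ethdev.h>

  /* Returns true if at least some indirect actions may survive
   * stop/start; probe a particular kind by creating it while the
   * device is stopped and checking the documented error. */
  static bool
  keeps_shared_objects(uint16_t port_id)
  {
          struct rte_eth_dev_info info;

          return rte_eth_dev_info_get(port_id, &info) == 0 &&
                 (info.dev_capa & RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP);
  }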
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
|
| 1d5a3d68 | 02-Nov-2021 |
Dmitry Kozlyuk <dkozlyuk@nvidia.com> |
ethdev: add capability to keep flow rules on restart
Previously, it was not specified what happens to the flow rules when the device is stopped, possibly reconfigured, then started. If flow rules were kept, it could be convenient for application developers, because they wouldn't need to save and restore them. However, due to the number of flows and possible creation rate it is impractical to save all flow rules in the DPDK layer. This means that flow rule persistence really depends on whether the PMD and HW can implement it efficiently. It can also be limited by the rule item and action types, and by the transfer bit attribute (a combination of an item/action type and a value of the transfer bit is called a rule feature).
Add a device capability bit for PMDs that can keep at least some of the flow rules across restart. Without this capability the behavior is still unspecified and it is declared that the application must flush the rules before stopping the device. Allow the application to test for persistence of rules using a particular feature by attempting to create a flow rule using that feature when the device is stopped and checking for the specific error. This is logical because if the PMD can create the flow rule when the device is not started and use it after the start happens, it is natural that it can move its internal flow rule object to the same state when the device is stopped and restore the state when the device is started.
Rule persistence across reconfiguration is not required, because tracking all the rules and configuration-dependent resources they use may be infeasible. In case a PMD cannot keep the rules across reconfiguration, it is allowed just to report an error. The application must then flush the rules before attempting it.
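A companion sketch to the previous entry, assuming the bit is named RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP as in the 21.11 release:

  #include <stdbool.h>
  #include <rte_ethdev.h>

  /* Returns true if at least some flow rules may survive stop/start;
   * probe a specific rule feature by attempting rte_flow_create()
   * while the device is stopped and checking the documented error. */
  static bool
  keeps_flow_rules(uint16_t port_id)
  {
          struct rte_eth_dev_info info;

          return rte_eth_dev_info_get(port_id, &info) == 0 &&
                 (info.dev_capa & RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP);
  }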
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
|
| 44c730b0 | 04-Nov-2021 |
Wojciech Liguzinski <wojciechx.liguzinski@intel.com> |
sched: add PIE based congestion management
Implement PIE-based congestion management based on RFC 8033.
The Proportional Integral Controller Enhanced (PIE) algorithm works by proactively dropping packets randomly. PIE is implemented because more advanced queue management is required to address the bufferbloat problem and provide desirable quality of service to users.
Tests for the PIE code were added to the test application, and PIE-related information was added to the documentation.
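For reference, the core of the algorithm is the RFC 8033 control law, sketched below in plain C; this illustrates the math, not the DPDK rte_pie API:

  struct pie_state {
          double drop_prob;  /* current drop probability */
          double qdelay_old; /* queue delay at the previous update */
  };

  /* p += alpha * (qdelay - target) + beta * (qdelay - qdelay_old) */
  static void
  pie_update(struct pie_state *s, double qdelay, double target,
             double alpha, double beta)
  {
          s->drop_prob += alpha * (qdelay - target) +
                          beta * (qdelay - s->qdelay_old);
          if (s->drop_prob < 0.0)
                  s->drop_prob = 0.0;
          else if (s->drop_prob > 1.0)
                  s->drop_prob = 1.0;
          s->qdelay_old = qdelay;
  }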
Signed-off-by: Wojciech Liguzinski <wojciechx.liguzinski@intel.com>
Acked-by: Cristian Dumitrescu <cristian.dumitrescu@intel.com>
Acked-by: Jasvinder Singh <jasvinder.singh@intel.com>
|
| 31d7c069 | 02-Nov-2021 |
Vladimir Medvedkin <vladimir.medvedkin@intel.com> |
hash: add bulk Toeplitz hash implementation
This patch adds a bulk version of the Toeplitz hash implemented with Galois Field New Instructions (GFNI).
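A hedged sketch of the bulk call; the prototypes are assumed from the 21.11 rte_thash_gfni.h and should be verified against your DPDK version:

  #include <rte_thash.h>
  #include <rte_thash_gfni.h>

  uint8_t rss_key[40] = { 0x6d, 0x5a /* ... */ };
  uint64_t mtrx[40];                      /* GF matrices derived from the key */
  uint8_t t0[12] = { 0 }, t1[12] = { 0 }; /* e.g. IPv4 addresses + ports */
  uint8_t *tuples[2] = { t0, t1 };
  uint32_t hashes[2];

  rte_thash_complete_matrix(mtrx, rss_key, sizeof(rss_key));
  rte_thash_gfni_bulk(mtrx, sizeof(t0), tuples, hashes, 2);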
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
|
| 4fd8c4cb | 02-Nov-2021 |
Vladimir Medvedkin <vladimir.medvedkin@intel.com> |
hash: add new Toeplitz hash implementation
This patch adds a new Toeplitz hash implementation using Galois Field New Instructions (GFNI).
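A hedged sketch of the single-tuple variant; as with the bulk entry above, the prototypes are assumed from the 21.11 headers:

  #include <rte_thash.h>
  #include <rte_thash_gfni.h>

  uint8_t rss_key[40] = { 0x6d, 0x5a /* ... */ };
  uint64_t mtrx[40];
  uint8_t tuple[12] = { 0 };

  rte_thash_complete_matrix(mtrx, rss_key, sizeof(rss_key));
  uint32_t hash = rte_thash_gfni(mtrx, tuple, sizeof(tuple));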
Signed-off-by: Vladimir Medvedkin <vladimir.medvedkin@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
|
| 6cc51b12 | 20-Oct-2021 |
Zhihong Peng <zhihongx.peng@intel.com> |
mem: instrument allocator for ASan
This patch adds necessary hooks in the memory allocator for ASan.
This feature is currently available in DPDK only on Linux x86_64. If other OS/architectures want to support it, ASAN_SHADOW_OFFSET must be defined and RTE_MALLOC_ASAN must be set accordingly in meson.
Signed-off-by: Xueqin Lin <xueqin.lin@intel.com>
Signed-off-by: Zhihong Peng <zhihongx.peng@intel.com>
Acked-by: Anatoly Burakov <anatoly.burakov@intel.com>
|
| 6e029025 | 20-Oct-2021 |
Zhihong Peng <zhihongx.peng@intel.com> |
build: enable AddressSanitizer
AddressSanitizer [1], a.k.a. ASan, is a widely used debugging tool to detect memory access errors. It helps detect issues like use-after-free, various kinds of buffer overruns in C/C++ programs, and other similar errors, and it prints detailed debug information whenever an error is detected.
ASan is integrated with gcc and clang and can be enabled via a meson option: -Db_sanitize=address. See the documentation for details (especially regarding clang).
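For example:

  $ meson setup build -Db_sanitize=address
  $ ninja -C build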
Enabling ASan has an impact on performance since additional checks are added to generated binaries.
Enabling ASan with Windows is currently not supported in DPDK.
1: https://github.com/google/sanitizers/wiki/AddressSanitizer
Signed-off-by: Xueqin Lin <xueqin.lin@intel.com>
Signed-off-by: Zhihong Peng <zhihongx.peng@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
|
| daa02b5c | 15-Oct-2021 |
Olivier Matz <olivier.matz@6wind.com> |
mbuf: add namespace to offload flags
Fix the mbuf offload flags namespace by adding an RTE_ prefix to the name. The old flags remain usable, but a deprecation warning is issued at compilation.
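For example (the flag pair is chosen for illustration; m is an rte_mbuf pointer):

  /* Deprecated name, still accepted with a compile-time warning: */
  m->ol_flags |= PKT_RX_VLAN;
  /* New namespaced name: */
  m->ol_flags |= RTE_MBUF_F_RX_VLAN;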
Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
|
| 295968d1 | 22-Oct-2021 |
Ferruh Yigit <ferruh.yigit@intel.com> |
ethdev: add namespace
Add 'RTE_ETH' namespace to all enums & macros in a backward compatible way. The macros for backward compatibility can be removed in next LTS. Also updated some struct names to have 'rte_eth' prefix.
All internal components switched to using new names.
Syntax fixed on lines that this patch touches.
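Illustrative old-to-new mappings (this pair is an example, not an exhaustive list):

  #include <rte_ethdev.h>

  struct rte_eth_conf conf = {
          .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },    /* was ETH_MQ_RX_RSS */
          .rx_adv_conf.rss_conf.rss_hf = RTE_ETH_RSS_IP, /* was ETH_RSS_IP */
  };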
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
|
| dd22740c | 21-Oct-2021 |
Xueming Li <xuemingl@nvidia.com> |
ethdev: introduce shared Rx queue
In the current DPDK framework, each Rx queue is pre-loaded with mbufs to save incoming packets. For some PMDs, when the number of representors scales out in a switch domain, the memory consumption becomes significant. Polling all ports also leads to high cache miss rates, high latency, and low throughput.
This patch introduces the shared Rx queue. Ports in the same Rx domain and switch domain can share an Rx queue set by specifying a non-zero sharing group in the Rx queue configuration.
A shared Rx queue is identified by the share_rxq field of the Rx queue configuration. Port A RxQ X can share an RxQ with Port B RxQ Y by using the same shared Rx queue ID.
No special API is defined to receive packets from a shared Rx queue. Polling any member port of a shared Rx queue receives packets of that queue for all member ports; port_id is identified by mbuf->port. The PMD is responsible for resolving the shared Rx queue from the device and queue data.
A shared Rx queue must be polled in the same thread or core; polling a queue ID of any member port is essentially the same.
Multiple share groups are supported. A PMD should support mixed configuration by allowing multiple share groups and non-shared Rx queues on one port.
Example grouping and polling model to reflect service priority:
  Group1, 2 shared Rx queues per port: PF, rep0, rep1
  Group2, 1 shared Rx queue per port: rep2, rep3, ... rep127
  Core0: poll PF queue0
  Core1: poll PF queue1
  Core2: poll rep2 queue0
The PMD advertises the shared Rx queue capability via RTE_ETH_DEV_CAPA_RXQ_SHARE.
The PMD is responsible for shared Rx queue consistency checks, to avoid member ports' configurations contradicting each other.
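A hedged sketch of joining two ports to one share group; the commit text calls the field share_rxq, while the merged 21.11 rte_eth_rxconf uses share_group/share_qid, so verify the field names against your DPDK version:

  #include <rte_ethdev.h>

  static int
  setup_shared_rxq(uint16_t port_a, uint16_t port_b, uint16_t nb_desc,
                   unsigned int socket_id, struct rte_mempool *mp)
  {
          struct rte_eth_rxconf rxconf = { .share_group = 1, .share_qid = 0 };

          if (rte_eth_rx_queue_setup(port_a, 0, nb_desc, socket_id,
                                     &rxconf, mp) != 0)
                  return -1;
          return rte_eth_rx_queue_setup(port_b, 0, nb_desc, socket_id,
                                        &rxconf, mp);
  }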
Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
|
| 3a929df1 | 21-Oct-2021 |
Jie Wang <jie1x.wang@intel.com> |
ethdev: support L2TPv2 and PPP protocol
Added flow pattern items and header formats of L2TPv2 and PPP.
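A sketch of a pattern matching PPP over L2TPv2 over UDP/IPv4; the item type names are assumed from the 21.11 rte_flow additions:

  #include <rte_flow.h>

  struct rte_flow_item pattern[] = {
          { .type = RTE_FLOW_ITEM_TYPE_ETH },
          { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
          { .type = RTE_FLOW_ITEM_TYPE_UDP },
          { .type = RTE_FLOW_ITEM_TYPE_L2TPV2 },
          { .type = RTE_FLOW_ITEM_TYPE_PPP },
          { .type = RTE_FLOW_ITEM_TYPE_END },
  };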
Signed-off-by: Wenjun Wu <wenjun1.wu@intel.com>
Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
|
| 280c3ca0 | 20-Oct-2021 |
Kevin Laatz <kevin.laatz@intel.com> |
dma/idxd: add operation statistic tracking
Add statistic tracking for DSA devices.
The dmadev library documentation is also updated to add a generic section for using the library's statistics APIs.
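A minimal sketch of reading these statistics through the generic dmadev API (vchan 0 is an assumption; the driver-side counters back these values):

  #include <inttypes.h>
  #include <stdio.h>
  #include <rte_dmadev.h>

  static void
  print_dma_stats(int16_t dev_id)
  {
          struct rte_dma_stats stats;

          if (rte_dma_stats_get(dev_id, 0, &stats) == 0)
                  printf("submitted=%" PRIu64 " completed=%" PRIu64
                         " errors=%" PRIu64 "\n",
                         stats.submitted, stats.completed, stats.errors);
  }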
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Reviewed-by: Chengwen Feng <fengchengwen@huawei.com>
|
| 3d36a0a1 | 20-Oct-2021 |
Kevin Laatz <kevin.laatz@intel.com> |
dma/idxd: add data path job submission
Add data path functions for enqueuing and submitting operations to DSA devices.
Documentation updates are included for dmadev library and IDXD driver docs as appropriate.
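A sketch of the generic dmadev data path these driver functions implement: enqueue a copy, submit the batch, then poll for completion. The vchan (0) and busy-wait loop are illustrative:

  #include <rte_dmadev.h>

  static int
  do_copy(int16_t dev_id, rte_iova_t src, rte_iova_t dst, uint32_t len)
  {
          int idx = rte_dma_copy(dev_id, 0, src, dst, len, 0);

          if (idx < 0)
                  return idx;
          rte_dma_submit(dev_id, 0);
          while (rte_dma_completed(dev_id, 0, 1, NULL, NULL) == 0)
                  ; /* poll until the job completes */
          return 0;
  }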
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Signed-off-by: Kevin Laatz <kevin.laatz@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Reviewed-by: Chengwen Feng <fengchengwen@huawei.com>
|
| cbb44143 | 20-Oct-2021 |
Stephen Hemminger <stephen@networkplumber.org> |
app/dumpcap: add new packet capture application
This is a new packet capture application to replace the existing pdump. The new application works like the Wireshark dumpcap program and supports the pdump API features.
It is not complete yet; some features, such as filtering, are not implemented.
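An assumed invocation, mirroring Wireshark's dumpcap option names (-i interface, -w output file); check dpdk-dumpcap --help for the actual options:

  $ sudo dpdk-dumpcap -i 0000:03:00.0 -w /tmp/capture.pcapng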
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
|
| 10f726ef | 20-Oct-2021 |
Stephen Hemminger <stephen@networkplumber.org> |
pdump: support pcapng and filtering
This enhances the DPDK pdump library to support the new pcapng format and filtering via BPF.
The internal client/server protocol is changed to support two versions: the original pdump basic version and a new pcapng version.
The internal version number (not part of the exposed API or ABI) is intentionally increased to cause any attempt to mix mismatched primary and secondary processes to fail.
Add a new API to allow filtering of captured packets with a DPDK BPF (eBPF) filter program. It keeps statistics on packets captured, filtered, and missed (because the ring was full).
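A heavily hedged sketch of the new filtering entry point; the function name and signature below are assumptions based on the 21.11 pdump additions and all arguments are placeholders, so consult rte_pdump.h in your tree:

  /* Hedged sketch -- signature assumed; ring, mp, and bpf_prm are
   * prepared elsewhere (capture ring, mbuf pool, compiled BPF program). */
  rte_pdump_enable_bpf(port_id, queue_id, RTE_PDUMP_FLAG_RX, snaplen,
                       ring, mp, bpf_prm);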
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
|
| 8d23ce8f | 20-Oct-2021 |
Stephen Hemminger <stephen@networkplumber.org> |
pcapng: add new library for writing pcapng files
This is a utility library for writing pcapng format files, used by the Wireshark family of utilities. Older tcpdump also knows how to read (but not write) this format.
See https://github.com/pcapng/pcapng/
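A hedged sketch of the writer life cycle; the prototypes are assumed from the 21.11 rte_pcapng.h and should be verified:

  #include <fcntl.h>
  #include <rte_pcapng.h>

  int fd = open("/tmp/out.pcapng", O_WRONLY | O_CREAT | O_TRUNC, 0644);
  rte_pcapng_t *pc = rte_pcapng_fdopen(fd, NULL, NULL, "example", NULL);
  /* ... copy mbufs with rte_pcapng_copy() and write them with
   * rte_pcapng_write_packets() ... */
  rte_pcapng_close(pc);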
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Reshma Pattan <reshma.pattan@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
|
| b06bca69 | 06-Oct-2021 |
Naga Harish K S V <s.v.naga.harish.k@intel.com> |
eventdev/eth_rx: add per-queue event buffer
Added a per-queue event buffer. To configure the per-queue event buffer size, the application sets the rte_event_eth_rx_adapter_params::use_queue_event_buf flag to true while using rte_event_eth_rx_adapter_create_with_params().
The per-queue event buffer size is populated in rte_event_eth_rx_adapter_queue_conf::event_buf_size and passed to rte_event_eth_rx_adapter_queue_add().
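A sketch combining both calls; identifiers like id, dev_id, port_conf, and eth_dev_id are placeholders set up elsewhere, and the buffer size is illustrative:

  struct rte_event_eth_rx_adapter_params params = {
          .use_queue_event_buf = true,
  };
  struct rte_event_eth_rx_adapter_queue_conf qconf = {
          .event_buf_size = 1024, /* per-queue buffer, in events */
  };

  rte_event_eth_rx_adapter_create_with_params(id, dev_id, &port_conf,
                                              &params);
  rte_event_eth_rx_adapter_queue_add(id, eth_dev_id, 0, &qconf);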
Signed-off-by: Naga Harish K S V <s.v.naga.harish.k@intel.com>
Acked-by: Jay Jayatheerthan <jay.jayatheerthan@intel.com>
|
| bc0df25c | 06-Oct-2021 |
Naga Harish K S V <s.v.naga.harish.k@intel.com> |
eventdev/eth_rx: add event buffer size configurability
Currently, the event buffer is a static array with a default size defined internally.
To configure the event buffer size from the application, the rte_event_eth_rx_adapter_create_with_params() API is added, which takes struct rte_event_eth_rx_adapter_params to configure the event buffer size in addition to other params. The event buffer size is rounded up for better buffer utilization and performance. If the params argument is NULL, the default event buffer size is used.
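For example (id, dev_id, and port_conf are placeholders; the size is illustrative):

  /* params may be NULL to get the default event buffer size. */
  struct rte_event_eth_rx_adapter_params params = {
          .event_buf_size = 16384,
  };

  rte_event_eth_rx_adapter_create_with_params(id, dev_id, &port_conf,
                                              &params);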
Signed-off-by: Naga Harish K S V <s.v.naga.harish.k@intel.com>
Signed-off-by: Ganapati Kundapura <ganapati.kundapura@intel.com>
Acked-by: Jay Jayatheerthan <jay.jayatheerthan@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
|
| da781e64 | 16-Sep-2021 |
Ganapati Kundapura <ganapati.kundapura@intel.com> |
eventdev/eth_rx: support Rx queue config get
Added the rte_event_eth_rx_adapter_queue_conf_get() API to get Rx queue information - event queue identifier, flags for handling received packets, scheduler type, event priority, polling frequency of the receive queue, and flow identifier - in the rte_event_eth_rx_adapter_queue_conf structure.
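A minimal usage sketch; the parameter order is assumed from the API's naming, and id, eth_dev_id, and rx_queue_id are placeholders:

  struct rte_event_eth_rx_adapter_queue_conf qconf;

  if (rte_event_eth_rx_adapter_queue_conf_get(id, eth_dev_id,
                                              rx_queue_id, &qconf) == 0)
          /* inspect qconf.ev, qconf.rx_queue_flags, ... */;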
Signed-off-by: Ganapati Kundapura <ganapati.kundapura@intel.com>
Acked-by: Jay Jayatheerthan <jay.jayatheerthan@intel.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
|
| 929ebdd5 | 15-Sep-2021 |
Pavan Nikhilesh <pbhagavatula@marvell.com> |
eventdev/eth_rx: simplify event vector config
Include vector configuration into the structure ``rte_event_eth_rx_adapter_queue_conf`` that is used to configure Rx adapter ethernet device Rx queue parameters. This simplifies event vector configuration as it avoids splitting configuration per Rx queue.
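A hedged sketch of configuring vectors directly in the queue configuration; the vector field names are assumed from the 21.11 header, and id, eth_dev_id, and vector_pool are placeholders:

  /* Hedged sketch -- vector fields as merged in 21.11; .ev setup and
   * capability checks omitted for brevity. */
  struct rte_event_eth_rx_adapter_queue_conf qc = {
          .rx_queue_flags = RTE_EVENT_ETH_RX_ADAPTER_QUEUE_EVENT_VECTOR,
          .vector_sz = 64,
          .vector_timeout_ns = 100 * 1000,
          .vector_mp = vector_pool, /* mempool of rte_event_vector */
  };

  rte_event_eth_rx_adapter_queue_add(id, eth_dev_id, -1, &qc);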
Signed-off-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Jay Jayatheerthan <jay.jayatheerthan@intel.com>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Jerin Jacob <jerinj@marvell.com>
|
| 1752b087 | 19-Oct-2021 |
David Marchand <david.marchand@redhat.com> |
test: rely on EAL detection for core list
The core count has a direct impact on the time needed to complete unit tests.
Currently, the core list used for unit tests is enforced to "all cores on the system" with no way for (CI) users to adapt it. On the other hand, EAL's default behavior (when no -c/-l option gets passed) is to start threads on as many cores as are available in the process CPU affinity.
Remove logic from meson: users can then select where to run the tests by either running meson with a custom cpu affinity (using taskset/cpuset depending on OS) or by passing a --test-args option to meson.
Example:
  $ sudo meson test -C build --suite fast-tests -t 3 --test-args "-l 0-3"
Signed-off-by: David Marchand <david.marchand@redhat.com>
Tested-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Aaron Conole <aconole@redhat.com>
|