# fca8cba4 | 21-Jun-2023 | David Marchand <david.marchand@redhat.com>

ethdev: advertise flow restore in mbuf

As reported by Ilya [1], unconditionally calling rte_flow_get_restore_info() impacts application performance for drivers that do not provide this op. It could also impact processing of packets that require no call to rte_flow_get_restore_info() at all.
Register a dynamic mbuf flag when an application negotiates tunnel metadata delivery (calling rte_eth_rx_metadata_negotiate() with RTE_ETH_RX_METADATA_TUNNEL_ID).
Drivers then advertise that metadata can be extracted by setting this dynamic flag in each mbuf.
The application then calls rte_flow_get_restore_info() only when required.
Link: http://inbox.dpdk.org/dev/5248c2ca-f2a6-3fb0-38b8-7f659bfa40de@ovn.org/
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Tested-by: Ali Alnubani <alialnu@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
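A minimal application-side sketch of this flow, assuming the dynamic flag value is obtained via rte_flow_restore_info_dynflag() (the accessor introduced alongside this change; treat the exact call as an assumption):

    #include <rte_ethdev.h>
    #include <rte_flow.h>
    #include <rte_mbuf.h>

    static uint64_t restore_info_flag;

    static int
    negotiate_tunnel_metadata(uint16_t port_id)
    {
        uint64_t features = RTE_ETH_RX_METADATA_TUNNEL_ID;
        int ret = rte_eth_rx_metadata_negotiate(port_id, &features);

        if (ret == 0 && (features & RTE_ETH_RX_METADATA_TUNNEL_ID))
            restore_info_flag = rte_flow_restore_info_dynflag();
        return ret;
    }

    static void
    handle_rx_mbuf(uint16_t port_id, struct rte_mbuf *m)
    {
        struct rte_flow_restore_info info;
        struct rte_flow_error error;

        /* Only pay for the restore-info lookup when the driver flagged the mbuf. */
        if (restore_info_flag != 0 && (m->ol_flags & restore_info_flag) != 0)
            (void)rte_flow_get_restore_info(port_id, m, &info, &error);
    }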
# 66a1e115 | 14-Jun-2023 | Thomas Monjalon <thomas@monjalon.net>

ethdev: rename functions checking queue validity

Two functions helping to check Rx/Tx queue validity were added in DPDK 23.07-rc1. As the release is not closed, there is still time to rename them.
The names originally proposed, rte_eth_dev_is_valid_*xq, are consistent with rte_eth_dev_is_valid_port(). However, the suffixes "rxq" and "txq" are uncommon in ethdev functions.
Also, for shortness, many functions managing the queues drop "_dev_":
    rte_eth_*x_queue_info_get()
    rte_eth_*x_queue_setup()
    rte_eth_*x_hairpin_queue_setup()
For completeness, there are some old functions keeping "_dev_":
    rte_eth_dev_*x_queue_start()
    rte_eth_dev_*x_queue_stop()
In all the examples above, the subject comes after the prefix and the verb is at the end.
That's why I propose renaming into: rte_eth_*x_queue_is_valid()
Fixes: 7ea7e0cd3a08 ("ethdev: add functions to check queue validity")
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
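A minimal usage sketch with the renamed helpers (the return-value convention is an assumption here: 0 when the queue has been set up, a negative errno otherwise):

    #include <rte_ethdev.h>

    static int
    queue_pair_ready(uint16_t port_id, uint16_t qid)
    {
        /* Both helpers are assumed to return 0 for a queue that was set up. */
        if (rte_eth_rx_queue_is_valid(port_id, qid) != 0)
            return 0;
        if (rte_eth_tx_queue_is_valid(port_id, qid) != 0)
            return 0;
        return 1;
    }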
# 9fdcf2be | 08-May-2023 | Denis Pryazhennikov <denis.pryazhennikov@arknetworks.am>

ethdev: check that at least one FEC mode is specified

The behaviour of the rte_eth_fec_set() function is undefined when the fec_capa parameter is equal to zero. Add a check to handle this case.
Fixes: b7ccfb09da95 ("ethdev: introduce FEC API")
Cc: stable@dpdk.org
Signed-off-by: Denis Pryazhennikov <denis.pryazhennikov@arknetworks.am>
Acked-by: Ivan Malov <ivan.malov@arknetworks.am>
Acked-by: Viacheslav Galaktionov <viacheslav.galaktionov@arknetworks.am>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
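An illustrative sketch of a call that satisfies the new check by requesting at least one FEC mode (the exact error code returned for a zero fec_capa is assumed to be negative):

    #include <stdio.h>
    #include <rte_ethdev.h>

    static int
    enable_rs_fec(uint16_t port_id)
    {
        /* Request RS FEC; passing 0 in fec_capa is now rejected by the API. */
        uint32_t fec_capa = RTE_ETH_FEC_MODE_CAPA_MASK(RS);
        int ret = rte_eth_fec_set(port_id, fec_capa);

        if (ret < 0)
            printf("failed to set FEC mode: %d\n", ret);
        return ret;
    }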
# 8bddc981 | 13-Jun-2023 | David Marchand <david.marchand@redhat.com>

ethdev: prefer offload names in logs

Displaying a bitmask is terrible for users. Prefer offload names when refusing some offloads in rte_eth_dev_configure().
Before:
    Ethdev port_id=0 requested Rx offloads 0x621 doesn't match Rx offloads capabilities 0x0 in rte_eth_dev_configure()
After:
    Ethdev port_id=0 does not support Rx offloads VLAN_STRIP,QINQ_STRIP,VLAN_FILTER,VLAN_EXTEND
    Ethdev port_id=0 was requested Rx offloads VLAN_STRIP,QINQ_STRIP,VLAN_FILTER,VLAN_EXTEND
    Ethdev port_id=0 supports Rx offloads none
Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
# 4fe767ac | 07-Jun-2023 | Jie Hai <haijie1@huawei.com>

ethdev: extract telemetry code to a file

This patch extracts the telemetry-related code from rte_ethdev.c into a new file, rte_ethdev_telemetry.c.
Signed-off-by: Jie Hai <haijie1@huawei.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
# bc7126d9 | 07-Jun-2023 | Jie Hai <haijie1@huawei.com>

ethdev: fix calloc arguments

calloc() takes the number of elements as its first argument and the element size as the second; passing sizeof() as the first argument is generally wrong. This patch fixes it.
Fixes: 8af559f94cef ("ethdev: support telemetry private dump")
Cc: stable@dpdk.org
Signed-off-by: Jie Hai <haijie1@huawei.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
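A minimal sketch of the argument-order point (the helper and variable names are illustrative, not taken from the patch):

    #include <stdlib.h>
    #include <rte_ethdev.h>

    static struct rte_eth_xstat *
    alloc_xstats(unsigned int num)
    {
        /* Correct order: element count first, element size second.
         * calloc(sizeof(struct rte_eth_xstat), num) would be the transposed
         * form that this fix removes. */
        return calloc(num, sizeof(struct rte_eth_xstat));
    }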
# 7ea7e0cd | 05-Jun-2023 | Dengdui Huang <huangdengdui@huawei.com>

ethdev: add functions to check queue validity

Add the APIs rte_eth_dev_is_valid_rxq/txq, which are used to check whether an Rx/Tx queue is valid. If the queue has been set up, it is considered valid.
Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
# 8f02f472 | 19-May-2023 | Huisong Li <lihuisong@huawei.com>

ethdev: fix MAC address occupies two entries

The dev->data->mac_addrs[0] entry is changed to a new MAC address when an application modifies the default MAC address via .mac_addr_set(). However, if the new default address has already been added as a non-default MAC address via .mac_addr_add(), .mac_addr_set() did not check for it. As a result, this MAC address occupies two entries in the list. For example:
    add(MAC1)
    add(MAC2)
    add(MAC3)
    add(MAC4)
    set_default(MAC3)
    default = MAC3, rest of the list = MAC1, MAC2, MAC3, MAC4
Note: MAC3 occupies two entries.
But .mac_addr_set() cannot remove it implicitly, as that would shrink the MAC address list. So this patch adds a check for whether the new default address is already in the list and, if so, requires the user to remove it first.
In addition, this patch documents the position of the default MAC address and the uniqueness of addresses in the list.
Fixes: 854d8ad4ef68 ("ethdev: add default mac address modifier")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
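A hypothetical application-level sketch of the resulting flow: an address previously added with rte_eth_dev_mac_addr_add() is removed before being promoted to the default address, so it never occupies two entries.

    #include <rte_ethdev.h>
    #include <rte_ether.h>

    static int
    promote_to_default(uint16_t port_id, struct rte_ether_addr *mac)
    {
        /* Drop the non-default entry first, then set it as the default. */
        rte_eth_dev_mac_addr_remove(port_id, mac);
        return rte_eth_dev_default_mac_addr_set(port_id, mac);
    }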
Revision tags: v23.03, v23.03-rc4, v23.03-rc3, v23.03-rc2
# 26749cd1 | 03-Mar-2023 | Ankur Dwivedi <adwivedi@marvell.com>

ethdev: optimise parameter passing to xstats trace

The rte_eth_xstat_name structure is 64 bytes in size. Instead of passing the structure by value, it is passed as a pointer, to avoid copying 64 bytes onto the function call stack.
Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
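A hypothetical pair of signatures illustrating the change (the function names below are made up; only the parameter style matters):

    #include <rte_ethdev.h>

    /* Before: the whole 64-byte structure is copied on every call. */
    void trace_xstat_name_by_value(struct rte_eth_xstat_name name);

    /* After: only a pointer is passed. */
    void trace_xstat_name_by_pointer(const struct rte_eth_xstat_name *name);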
# 6686b2d2 | 28-Feb-2023 | Ferruh Yigit <ferruh.yigit@amd.com>

ethdev: remove telemetry Rx mbuf alloc failed field

The 'eth_dev->data->rx_mbuf_alloc_failed' field is not directly exposed to the user via ethdev APIs, but it is used internally to set 'stats->rx_nombuf', which is exposed via the ethdev stats APIs.
However, telemetry exposes this field to the user via "/ethdev/info"; instead, the user can get the 'rx_nombuf' value from the stats via "/ethdev/stats".
Remove 'rx_mbuf_alloc_failed' from telemetry to align with the ethdev APIs.
Fixes: 58b43c1ddfd1 ("ethdev: add telemetry endpoint for device info")
Cc: stable@dpdk.org
Signed-off-by: Ferruh Yigit <ferruh.yigit@amd.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
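For reference, a small sketch of how an application keeps access to the same counter through the stats API (independent of telemetry):

    #include <inttypes.h>
    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    print_rx_nombuf(uint16_t port_id)
    {
        struct rte_eth_stats stats;

        if (rte_eth_stats_get(port_id, &stats) == 0)
            printf("rx_nombuf: %" PRIu64 "\n", stats.rx_nombuf);
    }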
# a131d9ec | 01-Mar-2023 | Thomas Monjalon <thomas@monjalon.net>

ethdev: add link speed 400G

There are some devices supporting 400G speed, and it is well standardized in IEEE.
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
Revision tags: v23.03-rc1
# 06ea5479 | 17-Feb-2023 | Jiawei Wang <jiaweiw@nvidia.com>

ethdev: add Tx queue mapping of aggregated ports

When multiple ports are aggregated into a single DPDK port (for example: Linux bonding, DPDK bonding, failsafe, etc.), we want to know which port is used for Tx via a given queue.
This patch introduces the new ethdev API rte_eth_dev_map_aggr_tx_affinity(), which maps a Tx queue to an aggregated port of the DPDK port (specified with port_id). The affinity is the number of the aggregated port. Value 0 means no affinity, so traffic can be routed to any aggregated port; this is the current default behavior.
The maximum affinity value is given by rte_eth_dev_count_aggr_ports().
Add trace points for the ethdev rte_eth_dev_count_aggr_ports() and rte_eth_dev_map_aggr_tx_affinity() functions.
Add the testpmd command line:
    testpmd> port config (port_id) txq (queue_id) affinity (value)
For example, two physical ports are connected to a single DPDK port (port id 0); affinity 1 stands for the first physical port and affinity 2 for the second physical port. Use the commands below to configure Tx affinity per Tx queue:
    port config 0 txq 0 affinity 1
    port config 0 txq 1 affinity 1
    port config 0 txq 2 affinity 2
    port config 0 txq 3 affinity 2
These commands configure Tx queues 0 and 1 with affinity 1, so packets sent on Tx queue 0 or 1 leave through the first physical port; similarly, packets sent on Tx queue 2 or 3 leave through the second physical port.
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
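A minimal C-level sketch of the same configuration done through the new API (the signatures are inferred from the description above; treat them as assumptions):

    #include <rte_ethdev.h>

    static void
    map_tx_queues_to_aggr_ports(uint16_t port_id)
    {
        int nb_aggr = rte_eth_dev_count_aggr_ports(port_id);

        if (nb_aggr == 2) {
            /* Queues 0-1 go to the first aggregated port, 2-3 to the second. */
            rte_eth_dev_map_aggr_tx_affinity(port_id, 0, 1);
            rte_eth_dev_map_aggr_tx_affinity(port_id, 1, 1);
            rte_eth_dev_map_aggr_tx_affinity(port_id, 2, 2);
            rte_eth_dev_map_aggr_tx_affinity(port_id, 3, 2);
        }
    }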
# 50e374c6 | 14-Feb-2023 | Chengwen Feng <fengchengwen@huawei.com>

ethdev: add telemetry xstats parameter to hide zero

The number of xstats may be large; with the hide-zero option added, only non-zero values can be displayed.
Display xstats with zeros hidden:
    /ethdev/xstats,0,hide_zero=true
and without hiding zeros:
    /ethdev/xstats,0,hide_zero=false
or
    /ethdev/xstats,0
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
# 6679cf21 | 08-Feb-2023 | Ankur Dwivedi <adwivedi@marvell.com>

ethdev: add trace points

Adds trace points for ethdev functions.
The rte_ethdev_trace.h file is removed. The file ethdev_trace.h is added as an internal header; it contains the internal slow-path and fast-path tracepoints. The public fast-path tracepoints are present in the rte_ethdev_trace_fp.h header.
Signed-off-by: Ankur Dwivedi <adwivedi@marvell.com>
Acked-by: Sunil Kumar Kori <skori@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
# af0785a2 | 12-Jan-2023 | Bruce Richardson <bruce.richardson@intel.com>

rename telemetry u64 functions to uint versions

Within the DPDK code base, replace all occurrences of "rte_tel_data_add_array_u64" with "rte_tel_data_add_array_uint", and similarly replace all occurrences of "rte_tel_data_add_dict_u64" with "rte_tel_data_add_dict_uint". This allows us to later mark the older functions as deprecated without hitting warnings.
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Ciara Power <ciara.power@intel.com>
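A small sketch of the renamed call in use (the dictionary key is illustrative):

    #include <rte_telemetry.h>

    static int
    fill_counters(struct rte_tel_data *d, uint64_t rx_packets)
    {
        rte_tel_data_start_dict(d);
        /* Formerly rte_tel_data_add_dict_u64(). */
        return rte_tel_data_add_dict_uint(d, "rx_packets", rx_packets);
    }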
# 2d2c55e4 | 12-Jan-2023 | Bruce Richardson <bruce.richardson@intel.com>

telemetry: rename unsigned 64-bit enum value to uint

For telemetry data, rather than having unsigned 64-bit values and signed 32-bit values, we want to just have unsigned and signed values, each stored with the maximum bit width, i.e. 64 bits. To that end, we rename the U64 enum entry to "UINT" to give it a more generic name.
For backward API-level compatibility, we can use a macro to alias the old name to the new.
Suggested-by: Morten Brørup <mb@smartsharesystems.com>
Signed-off-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Ciara Power <ciara.power@intel.com>
# 796b0316 | 19-Dec-2022 | Huisong Li <lihuisong@huawei.com>

ethdev: get capabilities from telemetry in hexadecimal

The 'dev_flags', 'rx_offloads', 'tx_offloads' and 'rss_hf' values are better displayed in hexadecimal format.
Old display for /ethdev/info,0:
    "dev_flags": 3,
    "rx_offloads": 524288,
    "tx_offloads": 65536,
    "ethdev_rss_hf": 9100
New display:
    "dev_flags": "0x3",
    "rx_offloads": "0x80000",
    "tx_offloads": "0x10000",
    "ethdev_rss_hf": "0x238c"
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
# 7db3f2af | 19-Dec-2022 | Huisong Li <lihuisong@huawei.com>

ethdev: fix telemetry data truncation

The 'u32' and 'u64' data cannot be assigned to an 'int' type variable; they need to be added with the 'u64' APIs.
Fixes: 58b43c1ddfd1 ("ethdev: add telemetry endpoint for device info")
Cc: stable@dpdk.org
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
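A minimal sketch of the kind of truncation this fixes (the dictionary key is illustrative):

    #include <rte_ethdev.h>
    #include <rte_telemetry.h>

    static void
    add_offload_capa(struct rte_tel_data *d, uint16_t port_id)
    {
        struct rte_eth_dev_info info;

        if (rte_eth_dev_info_get(port_id, &info) != 0)
            return;
        /* rx_offload_capa is 64-bit: adding it with the int variant would
         * truncate it, so the u64 variant is used instead. */
        rte_tel_data_add_dict_u64(d, "rx_offloads", info.rx_offload_capa);
    }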
Revision tags: v22.11, v22.11-rc4, v22.11-rc3, v22.11-rc2, v22.11-rc1
# 605975b8 | 09-Oct-2022 | Yuan Wang <yuanx.wang@intel.com>

ethdev: introduce protocol-based buffer split

Currently, Rx buffer split supports length-based split. With the Rx queue offload RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT enabled and Rx packet segments configured, the PMD is able to split the received packets into multiple segments.
However, length-based buffer split is not suitable for NICs that do split based on protocol headers. Given an arbitrarily variable length in an Rx packet segment, it is almost impossible to pass a fixed protocol header to the driver. Besides, the existence of tunneling means the composition of a packet varies, which makes the situation even worse.
This patch extends the current buffer split to support protocol-header-based buffer split. A new proto_hdr field is introduced in the reserved field of the rte_eth_rxseg_split structure to specify the protocol header. The proto_hdr field defines the split position of a packet: splitting will always happen after the protocol header defined in the Rx packet segment. When the Rx queue offload RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT is enabled and the corresponding protocol header is configured, the driver will split the ingress packets into multiple segments.
Examples of proto_hdr field definitions: to split after ETH-IPV4-UDP, it should be defined as
    proto_hdr = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP
For inner ETH-IPV4-UDP, it should be defined as
    proto_hdr = RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_UDP
If a protocol header repeats one defined previously, the repeated part should be omitted. For example, to split after ETH, ETH-IPV4 and ETH-IPV4-UDP, it should be defined as
    proto_hdr0 = RTE_PTYPE_L2_ETHER
    proto_hdr1 = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN
    proto_hdr2 = RTE_PTYPE_L4_UDP
If protocol header split can be supported by a PMD, the rte_eth_buffer_split_get_supported_hdr_ptypes function can be used to obtain a list of these protocol headers.
For example, suppose we configured the Rx queue with the following segments:
    seg0 - pool0, proto_hdr0=RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4, off0=2B
    seg1 - pool1, proto_hdr1=RTE_PTYPE_L4_UDP, off1=128B
    seg2 - pool2, proto_hdr2=0, off2=0B
A packet consisting of ETH_IPV4_UDP_PAYLOAD will be split as follows:
    seg0 - IPv4 header @ RTE_PKTMBUF_HEADROOM + 2 in mbuf from pool0
    seg1 - UDP header @ 128 in mbuf from pool1
    seg2 - payload @ 0 in mbuf from pool2
Buffer split can now be configured in two modes. The user can choose length or protocol header to configure buffer split according to the NIC's capability. For length-based buffer split, the mp, length and offset fields in the Rx packet segment should be configured, while the proto_hdr field must be 0. For protocol-header-based buffer split, the mp, offset and proto_hdr fields in the Rx packet segment should be configured, while the length field must be 0.
Note: when protocol header split is enabled, the NIC may receive packets which do not match all the protocol headers within the Rx segments. In that case, the NIC has two possible split behaviours according to the matching result: exact match or longest match. The split result of the NIC must be one of them.
Exact match means the NIC only splits when the packet exactly matches all the protocol headers in the segments; otherwise, the whole packet is put into the last valid mempool. Longest match means the NIC splits until a packet mismatches a protocol header in the segments; the rest is put into the last valid pool.
Pseudo-code for exact match:
    FOR each seg in segs except last one
        IF proto_hdr is not matched THEN
            BREAK
        END IF
    END FOR
    IF loop was broken THEN
        put whole pkt in last seg
    ELSE
        put protocol header in each seg
        put everything else in last seg
    END IF
Pseudo-code for longest match:
    FOR each seg in segs except last one
        IF proto_hdr is matched THEN
            put protocol header in seg
        ELSE
            BREAK
        END IF
    END FOR
    put everything else in last seg
Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Signed-off-by: Wenxuan Wu <wenxuanx.wu@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
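A configuration sketch based on the description above, using the struct, field and flag names named in this entry (the exact union/field layout is assumed): the headers up to UDP land in pool0 and the payload in pool1.

    #include <string.h>
    #include <rte_ethdev.h>
    #include <rte_mbuf_ptype.h>
    #include <rte_mempool.h>

    static int
    setup_proto_split_rxq(uint16_t port_id, uint16_t qid, uint16_t nb_desc,
                          struct rte_mempool *pool0, struct rte_mempool *pool1)
    {
        union rte_eth_rxseg segs[2];
        struct rte_eth_rxconf rxconf;

        memset(segs, 0, sizeof(segs));
        memset(&rxconf, 0, sizeof(rxconf));

        /* Split after the outer ETH-IPV4-UDP headers. */
        segs[0].split.mp = pool0;
        segs[0].split.proto_hdr = RTE_PTYPE_L2_ETHER |
                                  RTE_PTYPE_L3_IPV4_EXT_UNKNOWN |
                                  RTE_PTYPE_L4_UDP;
        /* Last segment takes the remaining payload (proto_hdr stays 0). */
        segs[1].split.mp = pool1;

        rxconf.rx_seg = segs;
        rxconf.rx_nseg = 2;
        rxconf.offloads = RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;

        return rte_eth_rx_queue_setup(port_id, qid, nb_desc,
                                      rte_eth_dev_socket_id(port_id),
                                      &rxconf, NULL);
    }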
# e4e6f4cb | 09-Oct-2022 | Yuan Wang <yuanx.wang@intel.com>

ethdev: introduce protocol header API

Add a new ethdev API to retrieve the supported protocol headers of a PMD, which helps to configure protocol-header-based buffer split.
Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Signed-off-by: Wenxuan Wu <wenxuanx.wu@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
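A usage sketch of the query named in the previous entry, rte_eth_buffer_split_get_supported_hdr_ptypes() (the parameter list and return convention are assumed to mirror rte_eth_dev_get_supported_ptypes()):

    #include <stdio.h>
    #include <rte_common.h>
    #include <rte_ethdev.h>

    static void
    list_split_ptypes(uint16_t port_id)
    {
        uint32_t ptypes[32];
        int i, n;

        n = rte_eth_buffer_split_get_supported_hdr_ptypes(port_id, ptypes,
                                                           RTE_DIM(ptypes));
        for (i = 0; i < n && i < (int)RTE_DIM(ptypes); i++)
            printf("supported split ptype: 0x%08x\n", ptypes[i]);
    }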
# 458485eb | 07-Oct-2022 | Hanumanth Pothula <hpothula@marvell.com>

ethdev: support multiple mbuf pools per Rx queue

Some HW has support for choosing memory pools based on the packet's size. This is often useful for saving memory, where the application can create different pools to steer specific packet sizes, thus enabling more efficient usage of memory.
For example, say the HW has a capability of three pools:
    - pool-1 size is 2K
    - pool-2 size is > 2K and < 4K
    - pool-3 size is > 4K
Here, pool-1 can accommodate packets with sizes < 2K, pool-2 packets with sizes > 2K and < 4K, and pool-3 packets with sizes > 4K.
With the multiple mempool capability enabled in SW, an application may create three pools of different sizes and pass them to the PMD, allowing the PMD to program the HW based on the packet lengths, so that packets shorter than 2K are received on pool-1, packets with lengths between 2K and 4K on pool-2, and packets greater than 4K on pool-3.
Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
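A configuration sketch under the assumption that this capability is exposed through rte_eth_rxconf fields named rx_mempools/rx_nmempool (field names assumed, not quoted from this entry):

    #include <string.h>
    #include <rte_common.h>
    #include <rte_ethdev.h>
    #include <rte_mempool.h>

    static int
    setup_multi_pool_rxq(uint16_t port_id, uint16_t qid, uint16_t nb_desc,
                         struct rte_mempool *pool_2k,
                         struct rte_mempool *pool_4k,
                         struct rte_mempool *pool_8k)
    {
        struct rte_mempool *pools[] = { pool_2k, pool_4k, pool_8k };
        struct rte_eth_rxconf rxconf;

        memset(&rxconf, 0, sizeof(rxconf));
        rxconf.rx_mempools = pools;
        rxconf.rx_nmempool = RTE_DIM(pools);

        return rte_eth_rx_queue_setup(port_id, qid, nb_desc,
                                      rte_eth_dev_socket_id(port_id),
                                      &rxconf, NULL);
    }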
# b7fc7c53 | 07-Oct-2022 | Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

ethdev: factor out helper function to check Rx mempool

Avoid duplicating the Rx mempool check logic.
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
# bc705061 | 06-Oct-2022 | Dariusz Sosnowski <dsosnowski@nvidia.com>

ethdev: introduce hairpin memory capabilities

Before this patch, implementation details and configuration of hairpin queues were decided internally by the PMD. Applications had no control over the configuration of Rx and Tx hairpin queues, apart from the number of descriptors, explicit Tx flow mode and disabling automatic binding. This patch addresses that by adding:
- Hairpin queue capabilities reported by PMDs.
- New configuration options for Rx and Tx hairpin queues.
The main goal of this patch is to allow applications to provide configuration hints regarding the placement of hairpin queues. These hints specify whether buffers of hairpin queues should be placed in host memory or in dedicated device memory. Different memory options may have different performance characteristics, and the hairpin configuration should be fine-tuned to the specific application and use case.
This patch introduces new hairpin queue configuration options through the rte_eth_hairpin_conf struct, allowing to tune Rx and Tx hairpin queue memory configuration. The hairpin configuration is extended with the following fields:
- use_locked_device_memory - If set, the PMD will use specialized on-device memory to store RX or TX hairpin queue data.
- use_rte_memory - If set, the PMD will use DPDK-managed memory to store RX or TX hairpin queue data.
- force_memory - If set, the PMD will be forced to use the provided memory settings. If no appropriate resources are available, then device start will fail. If unset and no resources are available, the PMD will fall back to using the default type of resource for the given queue.
If the application chooses to use the PMD default memory configuration, all of these flags should remain unset.
Hairpin capabilities are also extended, to allow verification of support for a given hairpin memory configuration. Struct rte_eth_hairpin_cap is extended with two additional fields of type rte_eth_hairpin_queue_cap:
- rx_cap - memory capabilities of hairpin RX queues.
- tx_cap - memory capabilities of hairpin TX queues.
Struct rte_eth_hairpin_queue_cap exposes whether a given queue type supports the use_locked_device_memory and use_rte_memory flags.
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
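A minimal sketch of requesting device memory for an Rx hairpin queue with the new fields (peer values and error handling are illustrative):

    #include <string.h>
    #include <rte_ethdev.h>

    static int
    setup_hairpin_rxq(uint16_t port_id, uint16_t qid, uint16_t nb_desc,
                      uint16_t peer_port, uint16_t peer_queue)
    {
        struct rte_eth_hairpin_conf conf;

        memset(&conf, 0, sizeof(conf));
        conf.peer_count = 1;
        conf.peers[0].port = peer_port;
        conf.peers[0].queue = peer_queue;
        /* New fields: ask for on-device memory and fail start if unavailable. */
        conf.use_locked_device_memory = 1;
        conf.force_memory = 1;

        return rte_eth_rx_hairpin_queue_setup(port_id, qid, nb_desc, &conf);
    }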
# 6b81dddb | 04-Oct-2022 | Jerin Jacob <jerinj@marvell.com>

ethdev: support congestion management

NIC HW controllers often come with congestion management support on various HW objects such as Rx queue depth or mempool queue depth.
They can also support various modes of operation, such as RED (Random Early Discard), WRED, etc., on those HW objects.
Add a framework to express such modes (enum rte_cman_mode) and introduce enum rte_eth_cman_obj to enumerate the different objects the modes can operate on.
Add RTE_CMAN_RED mode of operation and RTE_ETH_CMAN_OBJ_RX_QUEUE, RTE_ETH_CMAN_OBJ_RX_QUEUE_MEMPOOL objects.
Introduce reserved fields in the configuration structure, backed by rte_eth_cman_config_init(), to allow adding new configuration parameters without ABI breakage.
Add the rte_eth_cman_info_get() API to get information such as the supported modes and objects.
Add rte_eth_cman_config_init(), rte_eth_cman_config_set() APIs to configure congestion management on those object with associated mode.
Finally, add rte_eth_cman_config_get() API to retrieve the applied configuration.
Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Sunil Kumar Kori <skori@marvell.com>
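A sketch of the call sequence described above for enabling RED on an Rx queue; only the functions and enum values named in this entry are used, and the remaining struct fields (queue id, RED thresholds) are left out since their layout is not quoted here:

    #include <string.h>
    #include <rte_cman.h>
    #include <rte_ethdev.h>

    static int
    enable_red_on_rxq(uint16_t port_id)
    {
        struct rte_eth_cman_info info;
        struct rte_eth_cman_config cfg;

        memset(&info, 0, sizeof(info));
        if (rte_eth_cman_info_get(port_id, &info) != 0)
            return -1;

        rte_eth_cman_config_init(port_id, &cfg);
        cfg.obj = RTE_ETH_CMAN_OBJ_RX_QUEUE;
        cfg.mode = RTE_CMAN_RED;
        /* Queue id and RED thresholds would be filled in here. */

        return rte_eth_cman_config_set(port_id, &cfg);
    }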
# 092b701f | 06-Oct-2022 | Dongdong Liu <liudongdong3@huawei.com>

ethdev: introduce Rx/Tx descriptor dump API

Add the ethdev Rx/Tx descriptor dump API, which provides functions to query descriptors from the device. HW descriptor info differs between NICs; the information describes the I/O process, which is important for debugging. As the information differs between NICs, a new API is introduced.
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
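A debugging sketch using the dump functions (the exact parameter list - offset, count and output file - is an assumption based on the API description):

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    dump_first_descs(uint16_t port_id, uint16_t queue_id)
    {
        /* Dump the first 4 Rx and Tx descriptors of the queue to stderr. */
        rte_eth_rx_descriptor_dump(port_id, queue_id, 0, 4, stderr);
        rte_eth_tx_descriptor_dump(port_id, queue_id, 0, 4, stderr);
    }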