Revision tags: v23.07, v23.07-rc4

# 3084aa90 | 18-Jul-2023 | Long Wu <long.wu@corigine.com>

doc: announce bonding macro renaming

In order to support inclusive naming, some of the macros in DPDK will need to be renamed. Do this through the deprecation process now for 23.07.

Signed-off-by: Long Wu <long.wu@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Huisong Li <lihuisong@huawei.com>

Revision tags: v23.07-rc3, v23.07-rc2

# d5d13ef9 | 14-Jun-2023 | Thomas Monjalon <thomas@monjalon.net>

lib: align comment blocks

Some comment blocks were missing a space, or had too many spaces, at the beginning of the lines, resulting in misalignment of asterisks.

Such mistakes were found with this kind of command:
    git grep '^\*' lib
    git grep '^ *\*' lib

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>

# b4f0a9bb | 14-Jun-2023 | Thomas Monjalon <thomas@monjalon.net>

lib: remove blank line ending comment blocks

At the end of a comment, there is no need for an extra blank line.

This pattern was fixed with the following command:
    git ls lib | xargs sed -i '/^ *\* *$/{N;/ *\*\/ *$/D;}'

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>

# 66a1e115 | 14-Jun-2023 | Thomas Monjalon <thomas@monjalon.net>

ethdev: rename functions checking queue validity

Two functions helping to check Rx/Tx queue validity were added in DPDK 23.07-rc1. As the release is not closed, it is still time to rename them.

The name proposed originally, rte_eth_dev_is_valid_*xq, is consistent with this function: rte_eth_dev_is_valid_port(). However, the suffixes "rxq" and "txq" are uncommon in ethdev functions.

Also for shortness, many functions drop "_dev_", like these functions which manage the queues:
    rte_eth_*x_queue_info_get()
    rte_eth_*x_queue_setup()
    rte_eth_*x_hairpin_queue_setup()
For completeness, there are some old functions having "_dev_":
    rte_eth_dev_*x_queue_start()
    rte_eth_dev_*x_queue_stop()
Anyway, in all the above examples, the subject is after the prefix, and the verb is at the end.

That's why I propose renaming into: rte_eth_*x_queue_is_valid()

Fixes: 7ea7e0cd3a08 ("ethdev: add functions to check queue validity")

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
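
Illustrative only: a minimal sketch of the renamed helper, assuming it is callable as shown in the log and returns 0 for a queue that has been set up (the exact error codes are an assumption, not stated here):

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Poll a queue only if it has been set up (sketch). */
    static int
    queue_ready(uint16_t port_id, uint16_t rxq)
    {
        int ret = rte_eth_rx_queue_is_valid(port_id, rxq);

        if (ret != 0) {
            /* Assumed: a negative errno such as -EINVAL when not set up. */
            printf("rxq %u on port %u not usable: %d\n", rxq, port_id, ret);
            return 0;
        }
        return 1;
    }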

Revision tags: v23.07-rc1

# 9e5c9e18 | 08-May-2023 | Denis Pryazhennikov <denis.pryazhennikov@arknetworks.am>

ethdev: update documentation for API to get FEC

The documentation for rte_eth_fec_get() is updated to clarify the description of the fec_capa parameter. The previous description implied that more than one FEC mode can be obtained.

Fixes: b7ccfb09da95 ("ethdev: introduce FEC API")
Cc: stable@dpdk.org

Signed-off-by: Denis Pryazhennikov <denis.pryazhennikov@arknetworks.am>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
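
A sketch of querying the current FEC mode per the clarified semantics (a single mode is reported at a time); the mask macro follows the FEC API:

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    print_fec_mode(uint16_t port_id)
    {
        uint32_t fec_capa;

        /* Per the clarified doc, fec_capa reports one current mode. */
        if (rte_eth_fec_get(port_id, &fec_capa) != 0)
            return;
        if (fec_capa & RTE_ETH_FEC_MODE_CAPA_MASK(RS))
            printf("port %u: RS FEC active\n", port_id);
        else if (fec_capa & RTE_ETH_FEC_MODE_CAPA_MASK(BASER))
            printf("port %u: BASE-R FEC active\n", port_id);
    }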

# 6af24dc3 | 08-May-2023 | Denis Pryazhennikov <denis.pryazhennikov@arknetworks.am>

ethdev: update documentation for API to set FEC

The documentation for rte_eth_fec_set() is updated to provide more detailed information about how FEC modes are handled. It also includes a description of the case when only the AUTO bit is set.

Fixes: b7ccfb09da95 ("ethdev: introduce FEC API")
Cc: stable@dpdk.org

Signed-off-by: Denis Pryazhennikov <denis.pryazhennikov@arknetworks.am>
Acked-by: Ivan Malov <ivan.malov@arknetworks.am>
Acked-by: Viacheslav Galaktionov <viacheslav.galaktionov@arknetworks.am>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
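
The AUTO-only case described above, as a sketch:

    #include <rte_ethdev.h>

    /* Request automatic FEC mode selection: with only the AUTO bit
     * set, the driver picks the FEC mode itself. */
    static int
    enable_fec_auto(uint16_t port_id)
    {
        return rte_eth_fec_set(port_id, RTE_ETH_FEC_MODE_CAPA_MASK(AUTO));
    }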

# 7ea7e0cd | 05-Jun-2023 | Dengdui Huang <huangdengdui@huawei.com>

ethdev: add functions to check queue validity

Add the APIs rte_eth_dev_is_valid_rxq/txq, which are used to check whether an Rx/Tx queue is valid. If the queue has been set up, it is considered valid.

Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>

# e30aa525 | 08-Apr-2023 | Jie Hai <haijie1@huawei.com>

ethdev: introduce low latency RS FEC

This patch introduces LLRS (low latency Reed-Solomon FEC). LLRS is supported for 25 Gbps, 50 Gbps, 100 Gbps, 200 Gbps and 400 Gbps Ethernet networks.

Signed-off-by: Jie Hai <haijie1@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
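
A sketch of probing for LLRS support via the existing FEC capability query (RTE_ETH_FEC_LLRS is the mode this commit adds; the per-speed capability array comes from the FEC API):

    #include <rte_common.h>
    #include <rte_ethdev.h>

    static int
    port_supports_llrs(uint16_t port_id)
    {
        struct rte_eth_fec_capa capa[8];
        int n, i;

        n = rte_eth_fec_get_capability(port_id, capa, RTE_DIM(capa));
        for (i = 0; i < n && i < (int)RTE_DIM(capa); i++)
            if (capa[i].capa & RTE_ETH_FEC_MODE_CAPA_MASK(LLRS))
                return 1; /* LLRS offered for at least one speed */
        return 0;
    }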

# 8f02f472 | 19-May-2023 | Huisong Li <lihuisong@huawei.com>

ethdev: fix MAC address occupies two entries

The dev->data->mac_addrs[0] entry will be changed to a new MAC address when applications modify the default MAC address by .mac_addr_set(). However, if the new default address has already been added as a non-default MAC address by .mac_addr_add(), .mac_addr_set() did not check for this address. As a result, the MAC address occupies two entries in the list. For example:
    add(MAC1)
    add(MAC2)
    add(MAC3)
    add(MAC4)
    set_default(MAC3)
    default = MAC3, the rest of the list = MAC1, MAC2, MAC3, MAC4
Note: MAC3 occupies two entries.

But .mac_addr_set() cannot remove it implicitly, in case of MAC address shrinking in the list. So this patch adds a check on whether the new default address was already in the list, and if so requires the user to remove it first.

In addition, this patch documents the position of the default MAC address and the uniqueness of addresses in the list.

Fixes: 854d8ad4ef68 ("ethdev: add default mac address modifier")
Cc: stable@dpdk.org

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
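
Under the new check, an application promoting an already-added address must remove it first; a minimal sketch using the public wrappers:

    #include <rte_ethdev.h>
    #include <rte_ether.h>

    /* Promote `addr` to default; remove it first if it was added before. */
    static int
    set_default_mac(uint16_t port_id, struct rte_ether_addr *addr)
    {
        /* Ignore the error if the address was never in the list. */
        (void)rte_eth_dev_mac_addr_remove(port_id, addr);
        return rte_eth_dev_default_mac_addr_set(port_id, addr);
    }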

Revision tags: v23.03, v23.03-rc4, v23.03-rc3, v23.03-rc2

# f92c5652 | 22-Feb-2023 | Stephen Hemminger <stephen@networkplumber.org>

ethdev: reword description of device info query

The original comment was redundant and had a duplicated word 'of'.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>

# a131d9ec | 01-Mar-2023 | Thomas Monjalon <thomas@monjalon.net>

ethdev: add link speed 400G

There are some devices supporting 400G speed, and it is well standardized in IEEE.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
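
A small sketch checking for the new speed value with the standard link query (RTE_ETH_SPEED_NUM_400G is the constant this commit adds):

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    report_400g(uint16_t port_id)
    {
        struct rte_eth_link link;

        if (rte_eth_link_get(port_id, &link) == 0 &&
            link.link_speed == RTE_ETH_SPEED_NUM_400G)
            printf("port %u is running at 400G\n", port_id);
    }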

Revision tags: v23.03-rc1

# 06ea5479 | 17-Feb-2023 | Jiawei Wang <jiaweiw@nvidia.com>

ethdev: add Tx queue mapping of aggregated ports

When multiple ports are aggregated into a single DPDK port (for example: Linux bonding, DPDK bonding, failsafe, etc.), we want to know which underlying port is used for Tx via a given queue.

This patch introduces the new ethdev API rte_eth_dev_map_aggr_tx_affinity(), used to map a Tx queue to an aggregated port of the DPDK port (specified with port_id). The affinity is the number of the aggregated port. Value 0 means no affinity, and traffic could be routed to any aggregated port; this is the default current behavior.

The maximum number of affinities is given by rte_eth_dev_count_aggr_ports().

Add the trace points for the ethdev rte_eth_dev_count_aggr_ports() and rte_eth_dev_map_aggr_tx_affinity() functions.

Add the testpmd command line:
    testpmd> port config (port_id) txq (queue_id) affinity (value)

For example, if there are two physical ports connected to a single DPDK port (port id 0), affinity 1 stands for the first physical port and affinity 2 stands for the second physical port. Use the below commands to configure the Tx affinity per Tx queue:
    port config 0 txq 0 affinity 1
    port config 0 txq 1 affinity 1
    port config 0 txq 2 affinity 2
    port config 0 txq 3 affinity 2
These commands configure Tx queue index 0 and Tx queue index 1 with affinity 1, so packets sent on Tx queue 0 or Tx queue 1 will leave through the first physical port, and similarly for the second physical port when sending on Tx queue 2 or Tx queue 3.

Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
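
The equivalent mapping from C, as a sketch (count/affinity semantics as described above; error handling trimmed):

    #include <errno.h>
    #include <rte_ethdev.h>

    /* Pin the first half of the Tx queues to aggregated port 1,
     * the second half to aggregated port 2 (sketch). */
    static int
    split_txq_affinity(uint16_t port_id, uint16_t nb_txq)
    {
        int n = rte_eth_dev_count_aggr_ports(port_id);
        uint16_t q;

        if (n < 2)
            return -ENOTSUP; /* nothing to split */
        for (q = 0; q < nb_txq; q++) {
            uint8_t affinity = (q < nb_txq / 2) ? 1 : 2;
            int ret = rte_eth_dev_map_aggr_tx_affinity(port_id, q, affinity);
            if (ret != 0)
                return ret;
        }
        return 0;
    }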

Revision tags: v22.11, v22.11-rc4, v22.11-rc3

# 47a4e1fb | 14-Nov-2022 | Dariusz Sosnowski <dsosnowski@nvidia.com>

ethdev: explicit some errors of port start and stop

This patch clarifies the handling of the following cases in the ethdev API docs:

- If rte_eth_dev_start() returns (-EAGAIN) for some port, it cannot be started right now and the start operation must be retried.
- If rte_eth_dev_stop() returns (-EBUSY) for some port, it cannot be stopped in the current state.

When stopping the port in testpmd fails, the port's state is switched back to STARTED to allow users to manually retry stopping the port.

No additional changes in testpmd are required to handle failures to start the port. If rte_eth_dev_start() fails, the port's state is switched to STOPPED and users are allowed to retry the operation.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
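
A bounded retry loop reflecting the clarified -EAGAIN semantics (the retry count and delay are arbitrary choices for the sketch):

    #include <errno.h>
    #include <rte_cycles.h>
    #include <rte_ethdev.h>

    static int
    start_port_retry(uint16_t port_id)
    {
        int i, ret = -EAGAIN;

        for (i = 0; i < 10 && ret == -EAGAIN; i++) {
            ret = rte_eth_dev_start(port_id);
            if (ret == -EAGAIN)
                rte_delay_ms(100); /* port not ready yet, retry */
        }
        return ret;
    }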

Revision tags: v22.11-rc2

# c622735d | 11-Oct-2022 | Chengwen Feng <fengchengwen@huawei.com>

net/bonding: call Tx prepare before Tx burst

Normally, to use the HW offload capabilities (e.g. checksum and TSO) in the Tx direction, the application needs to call rte_eth_tx_prepare() to do some adjustments to the packets before sending them. But the tx_prepare callback of the bonding driver is not implemented. Therefore, the sent packets may have errors (e.g. checksum errors).

However, it is difficult to design the tx_prepare callback for the bonding driver, because when a bonded device sends packets, it allocates the packets to different slave devices based on the real-time link status and bonding mode. That is, it is very difficult for the bonded device to determine which slave device's prepare function should be invoked.

So in this patch, the tx_prepare callback of the bonding driver is not implemented. Instead, rte_eth_tx_prepare() will be called before rte_eth_tx_burst(). In this way, all Tx offloads can be processed correctly for all NIC devices.

Note: because it is rare to bond different PMDs together, tx_prepare is called just once in broadcast bonding mode.

Also the following description was added to the rte_eth_tx_burst() function: "@note This function must not modify mbufs (including packets data) unless the refcnt is 1. An exception is the bonding PMD, which does not have a tx_prepare function; in this case, mbufs may be modified."

Signed-off-by: Chengchang Tang <tangchengchang@huawei.com>
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Min Hu (Connor) <humin29@huawei.com>
Acked-by: Chas Williams <3chas3@gmail.com>
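
The prepare-then-burst pattern the commit describes, as an application-side sketch:

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Prepare offloaded packets, then transmit only the prepared ones. */
    static uint16_t
    send_burst(uint16_t port_id, uint16_t queue_id,
               struct rte_mbuf **pkts, uint16_t nb_pkts)
    {
        uint16_t nb_prep = rte_eth_tx_prepare(port_id, queue_id, pkts, nb_pkts);

        /* Packets at index >= nb_prep failed preparation; the caller
         * should inspect rte_errno and drop or fix them. */
        return rte_eth_tx_burst(port_id, queue_id, pkts, nb_prep);
    }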

# eb0d471a | 13-Oct-2022 | Kalesh AP <kalesh-anakkur.purayil@broadcom.com>

ethdev: add proactive error handling mode

Some PMDs (e.g. hns3) can detect hardware or firmware errors. One existing error recovery mode is to report an RTE_ETH_EVENT_INTR_RESET event and wait for the application to invoke rte_eth_dev_reset() to recover the port. However, this mode has the following weaknesses:

1) Due to different hardware and software designs, some NIC port recovery processes require multiple handshakes with the firmware and the PF (when the port is a VF). It takes a long time to complete the entire operation for one port, and if multiple ports (for example, multiple VFs of a PF) are reset at the same time, other VFs may fail to be reset (because the reset processing is serial, the previous VFs must be processed before the subsequent VFs).

2) The impact on the application layer is great: it should stop the working queues, stop calling the Rx and Tx functions, then call rte_eth_dev_reset(), and re-set everything up again.

This patch introduces a proactive error handling mode in which the PMD tries to recover from errors itself. In this process, the PMD sets the data path pointers to dummy functions (which prevents a crash), and also makes sure the control path operations fail with retcode -EBUSY.

Because the PMD recovers automatically, the application can only sense that the data flow is disconnected for a while and that the control API returns an error in this period.

In order to sense the error happening/recovering, three events are introduced:

1) RTE_ETH_EVENT_ERR_RECOVERING: notifies the application that the PMD detected an error and the recovery is being started. Upon receiving the event, the application should not invoke any control path APIs until receiving the RTE_ETH_EVENT_RECOVERY_SUCCESS or RTE_ETH_EVENT_RECOVERY_FAILED event.

2) RTE_ETH_EVENT_RECOVERY_SUCCESS: notifies the application that the PMD recovered successfully from the error. The PMD already re-configured the port, and the effect is the same as that of a restart operation.

3) RTE_ETH_EVENT_RECOVERY_FAILED: notifies the application that the PMD failed to recover from the error and the port is no longer usable. The application should close the port.

Signed-off-by: Kalesh AP <kalesh-anakkur.purayil@broadcom.com>
Signed-off-by: Somnath Kotur <somnath.kotur@broadcom.com>
Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
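
A sketch of wiring up the three new events with the standard callback registration API:

    #include <stdio.h>
    #include <rte_ethdev.h>

    static int
    recovery_event_cb(uint16_t port_id, enum rte_eth_event_type event,
                      void *cb_arg, void *ret_param)
    {
        (void)cb_arg;
        (void)ret_param;

        switch (event) {
        case RTE_ETH_EVENT_ERR_RECOVERING:
            printf("port %u: recovery started, hold control-path calls\n",
                   port_id);
            break;
        case RTE_ETH_EVENT_RECOVERY_SUCCESS:
            printf("port %u: recovered, resume normal operation\n", port_id);
            break;
        case RTE_ETH_EVENT_RECOVERY_FAILED:
            printf("port %u: recovery failed, close the port\n", port_id);
            break;
        default:
            break;
        }
        return 0;
    }

    static void
    register_recovery_events(uint16_t port_id)
    {
        rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_ERR_RECOVERING,
                                      recovery_event_cb, NULL);
        rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_RECOVERY_SUCCESS,
                                      recovery_event_cb, NULL);
        rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_RECOVERY_FAILED,
                                      recovery_event_cb, NULL);
    }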

# 0d5c38ba | 13-Oct-2022 | Chengwen Feng <fengchengwen@huawei.com>

ethdev: add error handling mode to device info

Currently, the defined error handling modes include:

1) NONE: no error handling mode is supported by this port.

2) PASSIVE: passive error handling; after the PMD detects that a reset is required, the PMD reports the RTE_ETH_EVENT_INTR_RESET event, and the application invokes rte_eth_dev_reset() to recover the port.

Signed-off-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
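
A sketch of reading the reported mode from the device info; the err_handle_mode field name and the RTE_ETH_ERROR_HANDLE_MODE_* values follow this series and should be treated as assumptions if your tree differs:

    #include <rte_ethdev.h>

    static int
    uses_proactive_recovery(uint16_t port_id)
    {
        struct rte_eth_dev_info dev_info;

        if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
            return 0;
        return dev_info.err_handle_mode == RTE_ETH_ERROR_HANDLE_MODE_PROACTIVE;
    }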

Revision tags: v22.11-rc1

# 605975b8 | 09-Oct-2022 | Yuan Wang <yuanx.wang@intel.com>

ethdev: introduce protocol-based buffer split

Currently, Rx buffer split supports length-based split. With the Rx queue offload RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT enabled and Rx packet segments configured, the PMD will be able to split the received packets into multiple segments.

However, length-based buffer split is not suitable for NICs that do split based on protocol headers. Given an arbitrarily variable length in an Rx packet segment, it is almost impossible to pass a fixed protocol header to the driver. Besides, the existence of tunneling means the composition of a packet varies, which makes the situation even worse.

This patch extends the current buffer split to support protocol-header-based buffer split. A new proto_hdr field is introduced in the reserved field of the rte_eth_rxseg_split structure to specify the protocol header. The proto_hdr field defines the split position of the packet; splitting will always happen after the protocol header defined in the Rx packet segment. When the Rx queue offload RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT is enabled and the corresponding protocol header is configured, the driver will split the ingress packets into multiple segments.

Examples of proto_hdr field definitions: to split after ETH-IPV4-UDP, it should be defined as
    proto_hdr = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_L4_UDP
For inner ETH-IPV4-UDP, it should be defined as
    proto_hdr = RTE_PTYPE_TUNNEL_GRENAT | RTE_PTYPE_INNER_L2_ETHER | RTE_PTYPE_INNER_L3_IPV4_EXT_UNKNOWN | RTE_PTYPE_INNER_L4_UDP
If a protocol header repeats one defined previously, the repeated part should be omitted. For example, to split after ETH, ETH-IPV4 and ETH-IPV4-UDP, it should be defined as
    proto_hdr0 = RTE_PTYPE_L2_ETHER
    proto_hdr1 = RTE_PTYPE_L3_IPV4_EXT_UNKNOWN
    proto_hdr2 = RTE_PTYPE_L4_UDP

If protocol header split is supported by a PMD, the rte_eth_buffer_split_get_supported_hdr_ptypes function can be used to obtain a list of these protocol headers.

For example, let's suppose we configured the Rx queue with the following segments:
    seg0 - pool0, proto_hdr0=RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4, off0=2B
    seg1 - pool1, proto_hdr1=RTE_PTYPE_L4_UDP, off1=128B
    seg2 - pool2, proto_hdr2=0, off2=0B
A packet consisting of ETH_IPV4_UDP_PAYLOAD will be split as follows:
    seg0 - ipv4 header @ RTE_PKTMBUF_HEADROOM + 2 in mbuf from pool0
    seg1 - udp header @ 128 in mbuf from pool1
    seg2 - payload @ 0 in mbuf from pool2

Now buffer split can be configured in two modes. The user can choose length or protocol header to configure buffer split according to the NIC's capability. For length-based buffer split, the mp, length and offset fields in the Rx packet segment should be configured, while the proto_hdr field must be 0. For protocol-header-based buffer split, the mp, offset and proto_hdr fields in the Rx packet segment should be configured, while the length field must be 0.

Note: when protocol header split is enabled, the NIC may receive packets which do not match all the protocol headers within the Rx segments. At this point, the NIC has two possible split behaviors according to the matching results: one is exact match, the other is longest match. The split result of the NIC must belong to one of them.

Exact match means the NIC only does a split when the packets exactly match all the protocol headers in the segments; otherwise, the whole packet is put into the last valid mempool. Longest match means the NIC does a split until the packets mismatch a protocol header in the segments; the rest is put into the last valid pool.

Pseudo-code for exact match:
    FOR each seg in segs except last one
        IF proto_hdr is not matched THEN
            BREAK
        END IF
    END FOR
    IF loop broke THEN
        put whole pkt in last seg
    ELSE
        put protocol header in each seg
        put everything else in last seg
    END IF

Pseudo-code for longest match:
    FOR each seg in segs except last one
        IF proto_hdr is matched THEN
            put protocol header in seg
        ELSE
            BREAK
        END IF
    END FOR
    put everything else in last seg

Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Signed-off-by: Wenxuan Wu <wenxuanx.wu@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
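
A configuration sketch following the seg0/seg1/seg2 example above (mempool creation, port configuration and full error handling omitted; pool arguments and the descriptor count are placeholders):

    #include <string.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>
    #include <rte_mbuf.h>

    static int
    setup_proto_split_rxq(uint16_t port_id, uint16_t rxq,
                          struct rte_mempool *pool0,
                          struct rte_mempool *pool1,
                          struct rte_mempool *pool2)
    {
        union rte_eth_rxseg segs[3];
        struct rte_eth_rxconf rxconf;

        memset(segs, 0, sizeof(segs));
        memset(&rxconf, 0, sizeof(rxconf));

        segs[0].split.mp = pool0;
        segs[0].split.proto_hdr = RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4;
        segs[0].split.offset = 2;
        segs[1].split.mp = pool1;
        segs[1].split.proto_hdr = RTE_PTYPE_L4_UDP;
        segs[1].split.offset = 128;
        segs[2].split.mp = pool2; /* proto_hdr = 0: rest of the packet */

        rxconf.rx_seg = segs;
        rxconf.rx_nseg = 3;
        rxconf.offloads = RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;

        /* mp argument is NULL: buffers come from the per-segment pools. */
        return rte_eth_rx_queue_setup(port_id, rxq, 512, rte_socket_id(),
                                      &rxconf, NULL);
    }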

# e4e6f4cb | 09-Oct-2022 | Yuan Wang <yuanx.wang@intel.com>

ethdev: introduce protocol header API

Add a new ethdev API to retrieve the supported protocol headers of a PMD, which helps to configure protocol-header-based buffer split.

Signed-off-by: Yuan Wang <yuanx.wang@intel.com>
Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Signed-off-by: Wenxuan Wu <wenxuanx.wu@intel.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
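
Querying the list, as a sketch; the return semantics (count of supported ptypes) are an assumption modeled on rte_eth_dev_get_supported_ptypes():

    #include <stdio.h>
    #include <rte_common.h>
    #include <rte_ethdev.h>

    static void
    show_split_ptypes(uint16_t port_id)
    {
        uint32_t ptypes[32];
        int n, i;

        n = rte_eth_buffer_split_get_supported_hdr_ptypes(port_id, ptypes,
                                                          RTE_DIM(ptypes));
        for (i = 0; i < n && i < (int)RTE_DIM(ptypes); i++)
            printf("supported split ptype: 0x%08x\n", ptypes[i]);
    }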

# 458485eb | 07-Oct-2022 | Hanumanth Pothula <hpothula@marvell.com>

ethdev: support multiple mbuf pools per Rx queue

Some HW has support for choosing memory pools based on the packet's size.

This is often useful for saving memory, where the application can create different pools to steer packets of specific sizes, thus enabling more efficient usage of memory.

For example, let's say the HW has a capability of three pools:
    - pool-1 size is 2K
    - pool-2 size is > 2K and < 4K
    - pool-3 size is > 4K
Here,
    pool-1 can accommodate packets with sizes < 2K
    pool-2 can accommodate packets with sizes > 2K and < 4K
    pool-3 can accommodate packets with sizes > 4K

With the multiple mempool capability enabled in SW, an application may create three pools of different sizes and send them to the PMD, allowing the PMD to program the HW based on the packet lengths. So packets with sizes less than 2K are received on pool-1, packets with lengths between 2K and 4K are received on pool-2, and finally packets greater than 4K are received on pool-3.

Signed-off-by: Hanumanth Pothula <hpothula@marvell.com>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
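
A setup sketch using the rx_mempools/rx_nmempool fields this feature adds to rte_eth_rxconf (the pool arguments are placeholders; their element sizes must match what the HW can steer):

    #include <string.h>
    #include <rte_ethdev.h>
    #include <rte_lcore.h>

    static int
    setup_multi_pool_rxq(uint16_t port_id, uint16_t rxq,
                         struct rte_mempool *small_pool,
                         struct rte_mempool *medium_pool,
                         struct rte_mempool *large_pool)
    {
        struct rte_mempool *pools[3] = { small_pool, medium_pool, large_pool };
        struct rte_eth_rxconf rxconf;

        memset(&rxconf, 0, sizeof(rxconf));
        rxconf.rx_mempools = pools;
        rxconf.rx_nmempool = 3;

        /* mp argument is NULL: the PMD picks a pool per packet size. */
        return rte_eth_rx_queue_setup(port_id, rxq, 512, rte_socket_id(),
                                      &rxconf, NULL);
    }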

# bc705061 | 06-Oct-2022 | Dariusz Sosnowski <dsosnowski@nvidia.com>

ethdev: introduce hairpin memory capabilities

Before this patch, implementation details and configuration of hairpin queues were decided internally by the PMD. Applications had no control over the configuration of Rx and Tx hairpin queues, apart from the number of descriptors, explicit Tx flow mode and disabling automatic binding. This patch addresses that by adding:

- Hairpin queue capabilities reported by PMDs.
- New configuration options for Rx and Tx hairpin queues.

The main goal of this patch is to allow applications to provide configuration hints regarding the placement of hairpin queues. These hints specify whether buffers of hairpin queues should be placed in host memory or in dedicated device memory. Different memory options may have different performance characteristics, and the hairpin configuration should be fine-tuned to the specific application and use case.

This patch introduces new hairpin queue configuration options through the rte_eth_hairpin_conf struct, allowing to tune the Rx and Tx hairpin queue memory configuration. The hairpin configuration is extended with the following fields:

- use_locked_device_memory - If set, the PMD will use specialized on-device memory to store Rx or Tx hairpin queue data.
- use_rte_memory - If set, the PMD will use DPDK-managed memory to store Rx or Tx hairpin queue data.
- force_memory - If set, the PMD will be forced to use the provided memory settings. If no appropriate resources are available, then device start will fail. If unset and no resources are available, the PMD will fall back to using the default type of resource for the given queue.

If the application chooses to use the PMD default memory configuration, all of these flags should remain unset.

Hairpin capabilities are also extended, to allow verification of support of given hairpin memory configurations. Struct rte_eth_hairpin_cap is extended with two additional fields of type rte_eth_hairpin_queue_cap:

- rx_cap - memory capabilities of hairpin Rx queues.
- tx_cap - memory capabilities of hairpin Tx queues.

Struct rte_eth_hairpin_queue_cap exposes whether a given queue type supports the use_locked_device_memory and use_rte_memory flags.

Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
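
A sketch of requesting device memory for an Rx hairpin queue, with a capability check first; the capability field name (locked_device_memory) is inferred from the series description, and the single self-peered port wiring is a simplification:

    #include <errno.h>
    #include <string.h>
    #include <rte_ethdev.h>

    static int
    setup_locked_mem_hairpin_rxq(uint16_t port_id, uint16_t rxq,
                                 uint16_t peer_txq)
    {
        struct rte_eth_hairpin_cap cap;
        struct rte_eth_hairpin_conf conf;

        if (rte_eth_dev_hairpin_capability_get(port_id, &cap) != 0 ||
            !cap.rx_cap.locked_device_memory)
            return -ENOTSUP;

        memset(&conf, 0, sizeof(conf));
        conf.peer_count = 1;
        conf.peers[0].port = port_id;
        conf.peers[0].queue = peer_txq;
        conf.use_locked_device_memory = 1;
        conf.force_memory = 1; /* fail device start if unavailable */

        return rte_eth_rx_hairpin_queue_setup(port_id, rxq, 128, &conf);
    }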

# 6b81dddb | 04-Oct-2022 | Jerin Jacob <jerinj@marvell.com>

ethdev: support congestion management

NIC HW controllers often come with congestion management support on various HW objects such as Rx queue depth or mempool queue depth.

Also, they can support various modes of operation such as RED (Random Early Discard), WRED, etc. on those HW objects.

Add a framework to express such modes (enum rte_cman_mode) and introduce enum rte_eth_cman_obj to enumerate the different objects the modes can operate on.

Add the RTE_CMAN_RED mode of operation and the RTE_ETH_CMAN_OBJ_RX_QUEUE and RTE_ETH_CMAN_OBJ_RX_QUEUE_MEMPOOL objects.

Introduce reserved fields in the configuration structure, backed by rte_eth_cman_config_init(), to allow adding new configuration parameters without ABI breakage.

Add the rte_eth_cman_info_get() API to get information such as supported modes and objects.

Add the rte_eth_cman_config_init() and rte_eth_cman_config_set() APIs to configure congestion management on those objects with the associated mode.

Finally, add the rte_eth_cman_config_get() API to retrieve the applied configuration.

Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Sunil Kumar Kori <skori@marvell.com>
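
The init-then-set flow, as a sketch; the obj_param.rx_queue member name is assumed from the series description, and the RED thresholds are left at the defaults filled in by config_init:

    #include <string.h>
    #include <rte_ethdev.h>

    static int
    enable_red_on_rxq(uint16_t port_id, uint16_t rxq)
    {
        struct rte_eth_cman_info info;
        struct rte_eth_cman_config cfg;
        int ret;

        ret = rte_eth_cman_info_get(port_id, &info); /* discover support */
        if (ret != 0)
            return ret; /* congestion management not available */

        memset(&cfg, 0, sizeof(cfg));
        ret = rte_eth_cman_config_init(port_id, &cfg); /* fill defaults */
        if (ret != 0)
            return ret;

        cfg.obj = RTE_ETH_CMAN_OBJ_RX_QUEUE;
        cfg.obj_param.rx_queue = rxq; /* assumed member name */
        cfg.mode = RTE_CMAN_RED;

        return rte_eth_cman_config_set(port_id, &cfg);
    }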

# 092b701f | 06-Oct-2022 | Dongdong Liu <liudongdong3@huawei.com>

ethdev: introduce Rx/Tx descriptor dump API

Added the ethdev Rx/Tx descriptor dump API, which provides functions for querying descriptors from the device. HW descriptor info differs between NICs. The information demonstrates the I/O process, which is important for debugging. As the information differs between NICs, the new API is introduced.

Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
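
A debug sketch dumping the first few descriptors of an Rx queue to stdout (the parameter order follows this API as added in 22.11; verify against rte_ethdev.h):

    #include <stdio.h>
    #include <rte_ethdev.h>

    static void
    dump_first_rx_descs(uint16_t port_id, uint16_t queue_id)
    {
        /* Dump 4 descriptors starting at offset 0. */
        if (rte_eth_rx_descriptor_dump(port_id, queue_id, 0, 4, stdout) != 0)
            printf("descriptor dump not supported on port %u\n", port_id);
    }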

# 7dcd73e3 | 04-Oct-2022 | Olivier Matz <olivier.matz@6wind.com>

drivers/bus: set device NUMA node to unknown by default

The dev->device.numa_node field is set by each bus driver for every device it manages, to indicate on which NUMA node this device lies.

When this information is unknown, the assigned value is not consistent across the bus drivers.

Set the default value to SOCKET_ID_ANY (-1) in all bus drivers when the NUMA information is unavailable. This change impacts rte_eth_dev_socket_id() in the same manner.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
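
With this change, callers should treat -1 as "unknown NUMA node" rather than an error; a small sketch:

    #include <rte_ethdev.h>
    #include <rte_lcore.h>

    /* Choose a socket for queue memory: fall back to the caller's
     * socket when the device's NUMA node is unknown (SOCKET_ID_ANY). */
    static int
    pick_socket(uint16_t port_id)
    {
        int sock = rte_eth_dev_socket_id(port_id);

        return (sock == SOCKET_ID_ANY) ? (int)rte_socket_id() : sock;
    }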

# 3a26e41e | 28-Sep-2022 | Satha Rao <skoteshwar@marvell.com>

ethdev: increase queue rate parameter from 16b to 32b

The rate parameter is modified to uint32_t, so that it can work for more than 64 Gbps.

Signed-off-by: Satha Rao <skoteshwar@marvell.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
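
The widened parameter in use, as a sketch: rte_eth_set_queue_rate_limit() takes the rate in Mbps, so 100000 below stands for 100 Gbps, which did not fit in the old 16-bit field (max 65535 Mbps, about 64 Gbps):

    #include <rte_ethdev.h>

    /* Cap a Tx queue at 100 Gbps (rate is in Mbps, now uint32_t). */
    static int
    cap_queue_100g(uint16_t port_id, uint16_t queue_idx)
    {
        uint32_t rate_mbps = 100000; /* > UINT16_MAX, needs the 32-bit field */

        return rte_eth_set_queue_rate_limit(port_id, queue_idx, rate_mbps);
    }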

# 8d54b1ec | 12-Aug-2022 | Xuan Ding <xuan.ding@intel.com>

ethdev: remove Rx header split port offload

As announced in the deprecation note, remove the Rx offload flag 'RTE_ETH_RX_OFFLOAD_HEADER_SPLIT' and the 'split_hdr_size' field from the structure 'rte_eth_rxmode'. Meanwhile, the places where the examples and apps initialize the 'split_hdr_size' field, and where the drivers check whether the 'split_hdr_size' value is 0, are also removed.

Users can still use 'RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT' for per-queue packet split offload, which is configured by 'rte_eth_rxseg_split'.

Signed-off-by: Xuan Ding <xuan.ding@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>