History log of /dpdk/app/test-pmd/testpmd.c (Results 51 – 75 of 421)
Revision Date Author Comments
# 3889a322 09-Jun-2022 Huisong Li <lihuisong@huawei.com>

app/testpmd: fix bonding slave devices not released

Currently, when some eth devices are added to a bond device, these devices
are not released when the quit command is executed in testpmd. This patch
adds the release operation for all active slaves under a bond device.

Fixes: 0e545d3047fe ("app/testpmd: check stopping port is not in bonding")
Cc: stable@dpdk.org

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Signed-off-by: Dongdong Liu <liudongdong3@huawei.com>
Acked-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
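
For illustration only (not part of the commit), a minimal sketch of releasing a
bonded port's active slaves through the bonding API; release_bond_slaves is a
made-up helper and error handling is omitted:

    #include <rte_ethdev.h>
    #include <rte_eth_bond.h>

    /* Stop and close every active slave of a bonded port. */
    static void
    release_bond_slaves(uint16_t bond_port_id)
    {
        uint16_t slaves[RTE_MAX_ETHPORTS];
        int i, n;

        n = rte_eth_bond_active_slaves_get(bond_port_id, slaves,
                                           RTE_MAX_ETHPORTS);
        for (i = 0; i < n; i++) {
            rte_eth_dev_stop(slaves[i]);   /* return values ignored for brevity */
            rte_eth_dev_close(slaves[i]);
        }
    }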



# bc70e559 08-Jun-2022 Spike Du <spiked@nvidia.com>

ethdev: introduce available Rx descriptors threshold

A new event, RTE_ETH_EVENT_RX_AVAIL_THRESH, should be generated by the HW
when the number of available descriptors in an Rx queue goes below the
threshold.

The threshold is defined as a percentage of the Rx queue size, with valid
values from 0 to 99 (inclusive). The zero (default) value disables it.

There is no capability reporting for the feature. The application should
simply try to set the required threshold value and handle the result.

Add testpmd commands to control the threshold:
set port <port_id> rxq <rxq_id> avail_thresh <avail_thresh_num>

Signed-off-by: Spike Du <spiked@nvidia.com>
Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
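
As a hedged sketch of how an application could consume this (the callback name
and the port/queue/threshold values are arbitrary, not from the commit):

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Callback fired when an Rx queue drops below its availability threshold. */
    static int
    rx_avail_thresh_cb(uint16_t port_id, enum rte_eth_event_type event,
                       void *cb_arg, void *ret_param)
    {
        RTE_SET_USED(event);
        RTE_SET_USED(cb_arg);
        RTE_SET_USED(ret_param);
        printf("port %u: Rx queue fell below its avail_thresh\n", port_id);
        return 0;
    }

    static void
    arm_rx_avail_thresh(uint16_t port_id, uint16_t queue_id)
    {
        rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_RX_AVAIL_THRESH,
                                      rx_avail_thresh_cb, NULL);
        rte_eth_rx_avail_thresh_set(port_id, queue_id, 30); /* 30% of queue size */
    }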



# 68629be3 25-Mar-2022 Ke Zhang <ke1x.zhang@intel.com>

app/testpmd: fix multicast address pool leak

A multicast address pool is allocated for a port when
using mcast_addr testpmd commands.

When closing a port or stopping testpmd, this pool was
not freed, resulting in a leak.
This issue has been caught using ASan.

Free this pool when closing the port.

The error info is as follows:
ERROR: LeakSanitizer: detected memory leaks
Direct leak of 192 byte(s)
0 0x7f6a2e0aeffe in __interceptor_realloc
(/lib/x86_64-linux-gnu/libasan.so.5+0x10dffe)
1 0x565361eb340f in mcast_addr_pool_extend
../app/test-pmd/config.c:5162
2 0x565361eb3556 in mcast_addr_pool_append
../app/test-pmd/config.c:5180
3 0x565361eb3aae in mcast_addr_add
../app/test-pmd/config.c:5243

Fixes: 8fff667578a7 ("app/testpmd: new command to add/remove multicast MAC addresses")
Cc: stable@dpdk.org

Signed-off-by: Ke Zhang <ke1x.zhang@intel.com>
Acked-by: Yuying Zhang <yuying.zhang@intel.com>
Acked-by: Ferruh Yigit <ferruh.yigit@xilinx.com>
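
The shape of such a fix, as a sketch only (the field names follow testpmd's
struct rte_port as an assumption; free() already tolerates NULL):

    #include <stdlib.h>
    #include "testpmd.h"

    /* Release the per-port multicast address pool when closing the port. */
    static void
    mcast_addr_pool_destroy(struct rte_port *port)
    {
        free(port->mc_addr_pool);   /* no NULL check needed */
        port->mc_addr_pool = NULL;
        port->mc_addr_nb = 0;
    }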



# 1108c33e 02-Jun-2022 Raja Zidane <rzidane@nvidia.com>

app/testpmd: fix packet segment allocation

When the --mbuf-size cmdline parameter is specified, the segments to scatter
packets on are allocated sequentially from these extra memory pools (the mbuf
for the first segment is allocated from the first pool, the second one from
the second pool, and so on; if the number of segments is greater than the
number of pools, the mbufs for the remaining segments are allocated from the
last valid pool).
A bug in comparing segment index with mbuf index caused wrong mapping
of one of the segments.

Fix the comparison.

Fixes: 2befc67ff679 ("app/testpmd: add extended Rx queue setup")
Cc: stable@dpdk.org

Signed-off-by: Raja Zidane <rzidane@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
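
To make the mapping rule concrete, a small sketch (the names are illustrative,
not testpmd's):

    #include <rte_mempool.h>

    /* Pick the mempool for segment 'seg_idx': one pool per leading segment,
     * and the last valid pool for any segment beyond the configured pools. */
    static struct rte_mempool *
    pool_for_segment(struct rte_mempool *pools[], unsigned int nb_pools,
                     unsigned int seg_idx)
    {
        return pools[seg_idx < nb_pools ? seg_idx : nb_pools - 1];
    }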



# 592ab76f 24-May-2022 David Marchand <david.marchand@redhat.com>

app/testpmd: register driver specific commands

Introduce a testpmd API so that drivers can register specific commands.

A driver can list files to compile with testpmd by setting them
in the testpmd_sources (driver-local) meson variable.
drivers/meson.build then takes care of appending these to a global meson
variable and adding the driver to the testpmd dependencies.

Note: testpmd.h is fixed so that it is self-sufficient when included.

Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
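
A rough sketch of the driver-side usage, assuming the registration macro
introduced here is TESTPMD_ADD_DRIVER_COMMANDS and each entry pairs a cmdline
context with a help string (consult testpmd.h for the authoritative layout;
cmd_mydrv_dump is a made-up command defined elsewhere in the driver):

    #include <cmdline_parse.h>
    #include "testpmd.h"

    extern cmdline_parse_inst_t cmd_mydrv_dump;    /* hypothetical command */

    static struct testpmd_driver_commands mydrv_cmds = {
        .commands = {
            {
                .ctx = &cmd_mydrv_dump,
                .help = "mydrv dump (port_id)\n"
                        "    Dump driver internals.\n\n",
            },
            { .ctx = NULL },                       /* end-of-list marker */
        },
    };
    TESTPMD_ADD_DRIVER_COMMANDS(mydrv_cmds)

On the build side, the driver lists the corresponding file in its
testpmd_sources meson variable, as described above.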



# 3c4426db 07-Mar-2022 Dmitry Kozlyuk <dkozlyuk@nvidia.com>

app/testpmd: do not poll stopped queues

Calling Rx/Tx functions on a stopped queue is not supported.
Do not run packet forwarding for streams that use stopped queues.

Each stream has a read-only "disabled" field,
so that the lcore function can skip such streams.
Forwarding engines can set this field
using a new "stream_init" callback function
by checking relevant queue states,
which are stored along with queue configurations
(not all PMDs implement rte_eth_rx/tx_queue_info_get()
to query the state from there).

Fixes: 5f4ec54f1d16 ("testpmd: queue start and stop")
Cc: stable@dpdk.org

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
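
For reference, a sketch of querying the Rx queue state through the ethdev API
mentioned above (with a fallback, since not all PMDs implement it):

    #include <rte_ethdev.h>

    static int
    rx_queue_stopped(uint16_t port_id, uint16_t queue_id)
    {
        struct rte_eth_rxq_info qinfo;

        if (rte_eth_rx_queue_info_get(port_id, queue_id, &qinfo) != 0)
            return 0;   /* state unknown: assume the queue is started */
        return qinfo.queue_state == RTE_ETH_QUEUE_STATE_STOPPED;
    }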



# f7352c17 07-Mar-2022 Dmitry Kozlyuk <dkozlyuk@nvidia.com>

app/testpmd: fix use of indirect action after port close

When a port was closed, indirect actions could remain
with their handles no longer valid.
If a newly attached device was assigned the same ID as the closed port,
those indirect actions became accessible again.
Any attempt to use them resulted in undefined behavior.
Automatically flush indirect actions when a port is closed.

Fixes: 4b61b8774be9 ("ethdev: introduce indirect flow action")
Cc: stable@dpdk.org

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
Acked-by: Aman Singh <aman.deep.singh@intel.com>



# e46372d7 11-May-2022 Huisong Li <lihuisong@huawei.com>

app/testpmd: fix port status of bonding slave device

Starting or stopping a bonded port also starts or stops all active slaves
under the bonded port. If this port is a bonded device, we need to modify
the port status of all slaves.

Fixes: 0e545d3047fe ("app/testpmd: check stopping port is not in bonding")
Cc: stable@dpdk.org

Signed-off-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Acked-by: Aman Singh <aman.deep.singh@intel.com>
Acked-by: Konstantin Ananyev <konstantin.v.ananyev@yandex.ru>



# baef6bbf 06-Apr-2022 Min Hu (Connor) <humin29@huawei.com>

app/testpmd: check statistics query before printing

In the 'fwd_stats_display' function, if 'rte_eth_stats_get' fails, 'stats'
holds an indeterminate value and the displayed result will be abnormal.

This patch checks the return value of 'rte_eth_stats_get' to avoid
displaying abnormal stats.

Fixes: 53324971a14e ("app/testpmd: display/clear forwarding stats on demand")
Cc: stable@dpdk.org

Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Acked-by: Aman Singh <aman.deep.singh@intel.com>
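
A minimal sketch of the pattern (printing one counter only; not the testpmd
code itself):

    #include <stdio.h>
    #include <inttypes.h>
    #include <rte_ethdev.h>

    static void
    show_rx_packets(uint16_t port_id)
    {
        struct rte_eth_stats stats;
        int ret = rte_eth_stats_get(port_id, &stats);

        if (ret != 0) {
            fprintf(stderr, "port %u: rte_eth_stats_get() failed: %d\n",
                    port_id, ret);
            return;     /* do not print indeterminate values */
        }
        printf("port %u: %" PRIu64 " Rx packets\n", port_id, stats.ipackets);
    }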



# d8c079a5 17-Feb-2022 Min Hu (Connor) <humin29@huawei.com>

app/testpmd: check starting port is not in bonding

In bonding, starting or stopping a slave port should be done through the
bonding port. This patch adds a port_is_bonding_slave check to the
start_port function.

Fixes: 0e545d3047fe ("app/testpmd: check stopping port is not in bonding")
Cc: stable@dpdk.org

Signed-off-by: Min Hu (Connor) <humin29@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>



# 06c047b6 09-Feb-2022 Stephen Hemminger <stephen@networkplumber.org>

remove unnecessary null checks

Functions like free, rte_free, and rte_mempool_free
already handle NULL pointers, so the checks here are not necessary.

Remove redundant NULL pointer checks before free functions
found by nullfree.cocci

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
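
The removed pattern has this shape (the guard adds nothing because free(NULL),
rte_free(NULL) and rte_mempool_free(NULL) are defined no-ops):

    #include <stdlib.h>

    static void
    release_buf(char *buf)
    {
        /* Before:
         *      if (buf != NULL)
         *              free(buf);
         * After: */
        free(buf);
    }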



# 13b19642 17-Dec-2021 Dmitry Kozlyuk <dkozlyuk@nvidia.com>

app/testpmd: fix external buffer allocation

External pinned buffer memory (--mp-alloc=xbuf)
was allocated as multiple IOVA-contiguous memzones
of 2M size and 2M alignment.
Due to the malloc overhead and the alignment requirement,
each 2M memzone consumed 4M of hugepage memory:
2M of usable memory + X of malloc overhead + (2M-X) padding.
The allocation often failed with 2M hugepages and IOVA-as-PA
if a PA-contiguous span of 2 hugepages could not be found.
Also, with any hugepage size and IOVA mode
memory consumption was almost 2x of the usable amount.

Alignment requirement of 2M for external buffers is redundant.
It was an attempt to ensure IOVA-contiguity
by forcing memzones to start at hugepage boundaries,
while 2M size intended to leave no unused space on the page.
As shown above, this in fact caused excessive memory consumption
and decreased the chance of a successful allocation.
RTE_MEMZONE_F_IOVA_CONTIG already ensures IOVA-contiguity.

Remove the alignment requirement.
Reduce the memzone size by the malloc overhead size (4 cache lines),
so that memory consumption for each memzone is
(2M-X) of usable memory + X of malloc overhead = 2M.
This also means that whenever there are free 2M hugepages,
an IOVA-contiguous memzone can always be allocated.

Fixes: 72512e1897b2 ("app/testpmd: add mempool with external data buffers")
Cc: stable@dpdk.org

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
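
A sketch of the resulting reservation (sizes per the arithmetic above; the
function and zone name are illustrative):

    #include <rte_memzone.h>

    /* Reserve one external-buffer zone: 2M minus the malloc overhead
     * (4 cache lines), with no extra alignment requested, so the whole
     * allocation fits into a single 2M hugepage. */
    static const struct rte_memzone *
    extbuf_zone_reserve(const char *name, int socket_id)
    {
        size_t len = (size_t)(2U << 20) - 4 * RTE_CACHE_LINE_SIZE;

        return rte_memzone_reserve(name, len, socket_id,
                                   RTE_MEMZONE_IOVA_CONTIG);
    }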



# 7be78d02 29-Nov-2021 Josh Soref <jsoref@gmail.com>

fix spelling in comments and strings

The tool comes from https://github.com/jsoref

Signed-off-by: Josh Soref <jsoref@gmail.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>


# 2490bb89 16-Nov-2021 Ivan Malov <ivan.malov@oktetlabs.ru>

app/testpmd: fix flow transfer proxy port handling

The current approach detects the proxy port on each port (re-)plug and
may spam the log with error messages if the PMD does not support flows.
As testpmd is a debug tool, it must not do such implicit port handling.
Instead, the new API should be called only when the user requests that.

Revoke the existing code. Implement an explicit command-line primitive
to let the user find the proxy port themselves. Provide relevant hints.

Fixes: 1179f05cc9a0 ("ethdev: query proxy port to manage transfer flows")

Signed-off-by: Ivan Malov <ivan.malov@oktetlabs.ru>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
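
For reference, a hedged sketch of the explicit query that the user-driven path
relies on (not testpmd's exact code):

    #include <stdio.h>
    #include <rte_flow.h>

    /* Report which port should be used to manage transfer flows for 'port_id'. */
    static uint16_t
    transfer_proxy_of(uint16_t port_id)
    {
        uint16_t proxy_id = port_id;
        struct rte_flow_error error = { .message = NULL };

        if (rte_flow_pick_transfer_proxy(port_id, &proxy_id, &error) != 0)
            printf("port %u: no transfer proxy reported (%s)\n", port_id,
                   error.message != NULL ? error.message : "unknown reason");
        return proxy_id;
    }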



# bb9be9a4 17-Nov-2021 David Marchand <david.marchand@redhat.com>

build: make metrics libraries optional

metrics, bitratestats, jobstats and latencystats libraries can be made
optional as they provide standalone features.

Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>



# 6970401e 17-Nov-2021 David Marchand <david.marchand@redhat.com>

build: make GRO/GSO libraries optional

GRO and GSO integration in testpmd is relatively self-contained and easy
to extract.
Those libraries can be made optional as they provide standalone
features.

Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>



# eac341d3 17-Nov-2021 Joyce Kong <joyce.kong@arm.com>

app/testpmd: remove atomic operations for port status

The port_status changes do not need to be handled
atomically, as they are modified during initialization
or through the testpmd prompt, not from multiple
threads.

Signed-off-by: Joyce Kong <joyce.kong@arm.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>



# cbe70fde 15-Nov-2021 Jie Wang <jie1x.wang@intel.com>

app/testpmd: fix DCB in VT configuration

When enabling DCB in VT mode on a port, the RSS HASH offload should be
removed before reconfiguring the device and queues.

This is because the port multi-queue mode is changed from RSS to DCB in VT.

Fixes: 2a977b891f99 ("app/testpmd: fix DCB configuration")
Cc: stable@dpdk.org

Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Acked-by: Aman Singh <aman.deep.singh@intel.com>
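
In ethdev terms, the adjustment amounts to something like this sketch applied
before the reconfiguration:

    #include <rte_ethdev.h>

    /* Drop the RSS hash offload from a port configuration before switching
     * the multi-queue mode from RSS to DCB in VT. */
    static void
    drop_rss_hash_offload(struct rte_eth_conf *conf)
    {
        conf->rxmode.offloads &= ~RTE_ETH_RX_OFFLOAD_RSS_HASH;
    }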



# 295968d1 22-Oct-2021 Ferruh Yigit <ferruh.yigit@intel.com>

ethdev: add namespace

Add 'RTE_ETH' namespace to all enums & macros in a backward compatible
way. The macros for backward compatibility can be removed in next LTS.
Also updated some struct names to have 'rte_eth' prefix.

All internal components switched to using new names.

Syntax fixed on lines that this patch touches.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>



# 6a8b64fd 21-Oct-2021 Eli Britstein <elibr@nvidia.com>

app/testpmd: fix packet burst spreading stats

The Rx/Tx functions (rte_eth_rx_burst/rte_eth_tx_burst) take an 'nb_pkts'
argument, which specifies the maximum number of packets to receive/transmit.
The return value can be 0..nb_pkts, meaning nb_pkts+1 possible values.
Testpmd can provide statistics of the burst sizes ('set
record-burst-stats on') by incrementing the array cell at index
<burst-size>. This array is mistakenly sized [MAX_PKT_BURST], so receiving
the maximum burst causes an out-of-bounds write.
Enlarge the spread stats array by one cell to fix it.

Fixes: af75078fece3 ("first public release")
Cc: stable@dpdk.org

Signed-off-by: Eli Britstein <elibr@nvidia.com>
Reviewed-by: Matan Azrad <matan@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
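
The sizing rule, as a sketch (MAX_PKT_BURST shown with testpmd's usual value
only for context):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define MAX_PKT_BURST 512

    /* One cell per possible burst size: 0..MAX_PKT_BURST inclusive. */
    static uint64_t pkt_burst_spread[MAX_PKT_BURST + 1];

    static void
    record_rx_burst(uint16_t port_id, uint16_t queue_id)
    {
        struct rte_mbuf *pkts[MAX_PKT_BURST];
        uint16_t nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts,
                                          MAX_PKT_BURST);

        pkt_burst_spread[nb_rx]++;  /* nb_rx == MAX_PKT_BURST no longer overflows */
        rte_pktmbuf_free_bulk(pkts, nb_rx);
    }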



# 59840375 21-Oct-2021 Xueming Li <xuemingl@nvidia.com>

app/testpmd: add forwarding engine for shared Rx queue

To support shared Rx queues, this patch introduces a dedicated forwarding
engine. The engine groups received packets by mbuf->port into sub-groups,
updates stream statistics and simply frees the packets.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
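
The core of such an engine, sketched (per-source-port counters; the names are
illustrative):

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Attribute each received packet to the port recorded in mbuf->port,
     * then free the whole burst. */
    static void
    count_by_source_port(struct rte_mbuf *pkts[], uint16_t nb_rx,
                         uint64_t rx_per_port[RTE_MAX_ETHPORTS])
    {
        uint16_t i;

        for (i = 0; i < nb_rx; i++)
            rx_per_port[pkts[i]->port]++;
        rte_pktmbuf_free_bulk(pkts, nb_rx);
    }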



# 65744833 21-Oct-2021 Xueming Li <xuemingl@nvidia.com>

app/testpmd: force shared Rx queue polled on same core

A shared Rx queue must be polled on the same core. This patch checks for
this and stops forwarding if a shared RxQ is being scheduled on multiple
cores.

It is suggested to use the same number of Rx queues and polling cores.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Xiaoyun Li <xiaoyun.li@intel.com>



# f4d178c1 21-Oct-2021 Xueming Li <xuemingl@nvidia.com>

app/testpmd: add parameter for shared Rx queue

Adds "--rxq-share=X" parameter to enable shared RxQ.

Rx queue is shared if device supports, otherwise fallback to standard
RxQ.

Shared Rx queues are grouped per X ports. X defaults to UINT32_MAX,
implies all ports join share group 1. Queue ID is mapped equally with
shared Rx queue ID.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
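
At the ethdev level this maps to Rx queue configuration along these lines (a
sketch; the capability and field names follow the shared-RxQ ethdev API and
should be checked against rte_ethdev.h):

    #include <rte_ethdev.h>

    /* Mark an Rx queue as shared (group 1, queue ID mapped 1:1) when the
     * device reports the RxQ-share capability; otherwise leave it standard. */
    static void
    setup_shared_rxq(struct rte_eth_rxconf *rxq_conf,
                     const struct rte_eth_dev_info *info, uint16_t queue_id)
    {
        if (info->dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE) {
            rxq_conf->share_group = 1;
            rxq_conf->share_qid = queue_id;
        }
    }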



# 59f3a8ac 20-Oct-2021 Gregory Etelson <getelson@nvidia.com>

app/testpmd: add flex item commands

Network port hardware is shipped with a fixed number of
supported network protocols. If an application must work with a
protocol that is not included in the port hardware by default, it
can try to add the new protocol to the port hardware.

The flex item, or flex parser, is port infrastructure that allows an
application to add support for a custom network header and
offload flows to match the header elements.

An application must complete the following tasks to create a flow
rule that matches a custom header:

1. Create a flow item object in port hardware.
The application must provide the custom header configuration to the PMD.
The PMD will use that configuration to create the flex item object in
port hardware.

2. Create flex patterns to match. A flex pattern has spec and mask
components, like a regular flow item. Combined together, the spec and mask
can target a unique data sequence or a number of data sequences in the
custom header.
Flex patterns of the same flex item can have different lengths.
A flex pattern is identified by a unique handler value.

3. Create a flow rule with a flex flow item that references
flow pattern.

Testpmd flex CLI commands are:

testpmd> flow flex_item create <port> <flex_id> <filename>

testpmd> set flex_pattern <pattern_id> \
spec <spec data> mask <mask data>

testpmd> set flex_pattern <pattern_id> is <spec_data>

testpmd> flow create <port> ... \
/ flex item is <flex_id> pattern is <pattern_id> / ...

The patch works with the jansson library API.
A new optional dependency on the jansson library is added for
testpmd. If jansson is not detected, the flex item functionality
is disabled.
The jansson development files must be present:
jansson.pc, jansson.h, libjansson.[a,so]

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>



# b563c142 18-Oct-2021 Ferruh Yigit <ferruh.yigit@intel.com>

ethdev: remove jumbo offload flag

Removing 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.

Instead of drivers announcing this capability, applications can deduce
the capability by checking the reported 'dev_info.max_mtu' or
'dev_info.max_rx_pktlen'.

And instead of the application setting this flag explicitly to enable jumbo
frames, this can be deduced by the driver by comparing the requested 'mtu'
to 'RTE_ETHER_MTU'.

Remove this additional configuration for simplification.

Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
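
For example, after this change an application enables jumbo frames roughly as
in this sketch:

    #include <errno.h>
    #include <rte_ethdev.h>

    /* Jumbo support is deduced from dev_info; requesting an MTU above
     * RTE_ETHER_MTU is all that is needed to enable jumbo frames. */
    static int
    enable_jumbo(uint16_t port_id, uint16_t mtu)
    {
        struct rte_eth_dev_info dev_info;
        int ret = rte_eth_dev_info_get(port_id, &dev_info);

        if (ret != 0)
            return ret;
        if (mtu > dev_info.max_mtu)
            return -EINVAL;
        return rte_eth_dev_set_mtu(port_id, mtu);
    }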


