History log of /dpdk/lib/ethdev/rte_ethdev.h (Results 76 – 100 of 119)
Revision    Date    Author    Comments
# a75ab6e5 08-Feb-2022 Akhil Goyal <gakhil@marvell.com>

ethdev: introduce IP reassembly offload

IP Reassembly is a costly operation if it is done in software.
The operation becomes even more costly if the IP fragments are encrypted.
However, if it is offloaded to HW, it can save considerable application
cycles.

Hence, a new offload feature is exposed in eth_dev ops for devices which
can attempt IP reassembly of packets in hardware.
- rte_eth_ip_reassembly_capability_get() - to get the maximum values
of reassembly configuration which can be set.
- rte_eth_ip_reassembly_conf_set() - to set IP reassembly configuration
and to enable the feature in the PMD (to be called before
rte_eth_dev_start()).
- rte_eth_ip_reassembly_conf_get() - to get the current configuration
set in PMD.

Now, when the offload is enabled using rte_eth_ip_reassembly_conf_set(),
the resulting reassembled IP packet is a typical segmented mbuf in
case of success.

If reassembly of the IP fragments fails or is incomplete (for example, if
fragments do not arrive before the reass_timeout, or they overlap), the
PMD can update the mbuf dynamic flags. This is added in a subsequent
patch.

Signed-off-by: Akhil Goyal <gakhil@marvell.com>
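
A minimal usage sketch of the three calls above, assuming the configuration
is carried in a 'struct rte_eth_ip_reassembly_params' whose 'timeout_ms'
field holds the reassembly timeout (struct and field names are assumptions,
not taken from this log):

    #include <rte_ethdev.h>

    /* Enable IP reassembly offload; must run before rte_eth_dev_start(). */
    static int
    enable_ip_reassembly(uint16_t port_id)
    {
        struct rte_eth_ip_reassembly_params capa;
        struct rte_eth_ip_reassembly_params conf;
        int ret;

        /* Query the maximum values the device supports. */
        ret = rte_eth_ip_reassembly_capability_get(port_id, &capa);
        if (ret != 0)
            return ret; /* offload not supported or query failed */

        conf = capa;            /* start from the device maximums */
        conf.timeout_ms = 500;  /* assumed field: requested reassembly timeout */

        return rte_eth_ip_reassembly_conf_set(port_id, &conf);
    }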


# f840cf77 09-Feb-2022 Jie Wang <jie1x.wang@intel.com>

ethdev: add L2TPv2 RSS offload type

This patch defines a new RSS offload type for L2TPv2, which
is required when users want to distribute packets based on
the L2TPv2 session ID field.

Signed-off-by: Jie Wang <jie1x.wang@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
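
A small sketch of requesting L2TPv2-based RSS at configuration time; the
flag name RTE_ETH_RSS_L2TPV2 is an assumption, since the log does not spell
out the new macro:

    #include <rte_ethdev.h>

    /* Distribute packets by L2TPv2 session ID via RSS. */
    static const struct rte_eth_conf l2tpv2_rss_conf = {
        .rxmode = { .mq_mode = RTE_ETH_MQ_RX_RSS },
        .rx_adv_conf = {
            .rss_conf = {
                .rss_key = NULL,                /* use the default key */
                .rss_hf = RTE_ETH_RSS_L2TPV2,   /* assumed flag name */
            },
        },
    };
    /* rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &l2tpv2_rss_conf); */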


# 0de345e9 08-Feb-2022 Jerin Jacob <jerinj@marvell.com>

ethdev: support queue-based priority flow control

Based on device support and use-case need, there are two different ways
to enable PFC. The first case is port-level PFC configuration; in this
case, the rte_eth_dev_priority_flow_ctrl_set() API shall be used to
configure PFC, and PFC frames will be generated based on the VLAN TC
value.

The second case is queue-level PFC configuration; in this case, any
packet field content can be used to steer the packet to a specific
queue using rte_flow or RSS, and then
rte_eth_dev_priority_flow_ctrl_queue_configure() is used to configure
the TC mapping on each queue.
When congestion is detected on the specific queue, the configured TC
shall be used to generate PFC frames.

Signed-off-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Sunil Kumar Kori <skori@marvell.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
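
A sketch of the queue-level configuration path, assuming the configuration
struct is 'struct rte_eth_pfc_queue_conf' with 'mode', 'rx_pause' and
'tx_pause' members as shown (struct and field names are assumptions):

    #include <rte_ethdev.h>

    /* Map queue 0 to traffic class 3 for queue-based PFC. */
    static int
    setup_queue_pfc(uint16_t port_id)
    {
        struct rte_eth_pfc_queue_conf conf = {
            .mode = RTE_ETH_FC_FULL,  /* generate and honour PFC frames */
            .rx_pause = { .tx_qid = 0, .tc = 3 },
            .tx_pause = { .rx_qid = 0, .pause_time = 0xffff, .tc = 3 },
        };

        return rte_eth_dev_priority_flow_ctrl_queue_configure(port_id, &conf);
    }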


# b1cb3035 12-Jan-2022 Ferruh Yigit <ferruh.yigit@intel.com>

ethdev: mark old macros as deprecated

Old macros were kept for backward compatibility, but this causes old
macro usage to sneak in silently.

Mark the old macros as deprecated. The downside is that this will cause
some noise for applications that are using the old macros.

Fixes: 295968d17407 ("ethdev: add namespace")

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>


# f8dbaebb 22-Nov-2021 Sean Morrissey <sean.morrissey@intel.com>

fix PMD wording

Removing the use of "driver" following "PMD" as it is unnecessary.

Cc: stable@dpdk.org

Signed-off-by: Sean Morrissey <sean.morrissey@intel.com>
Signed-off-by: Conor Fogarty <conor.fogarty@intel.com>
Acked-by: John McNamara <john.mcnamara@intel.com>
Reviewed-by: Conor Walsh <conor.walsh@intel.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>


# 285725d9 28-Oct-2021 Thomas Monjalon <thomas@monjalon.net>

ethdev: promote device removal check function as stable

The function rte_eth_dev_is_removed() was introduced in DPDK 18.02,
and is integrated into the error checks of the ethdev library.

It is promoted to stable ABI.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Acked-by: Ray Kinsella <mdr@ashroe.eu>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ferruh Yigit <ferruh.yigit@intel.com>
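
A short example of the now-stable check; rte_eth_dev_is_removed() returns a
non-zero value once the underlying device has been hot-unplugged:

    #include <stdbool.h>
    #include <rte_ethdev.h>

    /* After an I/O error, decide whether a retry makes sense. */
    static bool
    port_still_present(uint16_t port_id)
    {
        return rte_eth_dev_is_removed(port_id) == 0;
    }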


# 2c9cd45d 02-Nov-2021 Dmitry Kozlyuk <dkozlyuk@nvidia.com>

ethdev: add capability to keep shared objects on restart

rte_flow_action_handle_create() did not mention what happens
with an indirect action when a device is stopped and started again.
It is natural for some indirect actions, like counter, to be persistent.
Keeping others at least saves application time and complexity.
However, not all PMDs can support it, or the support may be limited
by particular action kinds, that is, combinations of action type
and the value of the transfer bit in its configuration.

Add a device capability to indicate if at least some indirect actions
are kept across the above sequence. Without this capability the behavior
is still unspecified, and the application is required to destroy
the indirect actions before stopping the device.
In the future, indirect actions may not be the only type of objects
shared between flow rules. The capability bit intends to cover all
possible types of such objects, hence its name.

Declare that the application can test for the persistence
of a particular indirect action kind by attempting to create
an indirect action of that kind when the device is stopped
and checking for the specific error type.
This is logical because if the PMD can create an indirect action
when the device is not started and use it after the start happens,
it is natural that it can move its internal flow shared object
to the same state when the device is stopped and restore the state
when the device is started.

Indirect action persistence across a reconfiguration is not required.
In case a PMD cannot keep the indirect actions across reconfiguration,
it is allowed to simply report an error.
The application must then flush the indirect actions before attempting it.

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
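
A minimal sketch of checking the capability before relying on indirect
actions surviving a stop/start cycle; the bit name
RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP is an assumption:

    #include <stdbool.h>
    #include <rte_ethdev.h>

    /* True if the port advertises keeping shared flow objects across restart. */
    static bool
    keeps_shared_objects(uint16_t port_id)
    {
        struct rte_eth_dev_info info;

        if (rte_eth_dev_info_get(port_id, &info) != 0)
            return false;
        return (info.dev_capa & RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP) != 0;
    }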


# 1d5a3d68 02-Nov-2021 Dmitry Kozlyuk <dkozlyuk@nvidia.com>

ethdev: add capability to keep flow rules on restart

Previously, it was not specified what happens to the flow rules
when the device is stopped, possibly reconfigured, then started.
If flow rules were kept, it could be convenient for application
developers, because they wouldn't need to save and restore them.
However, due to the number of flows and the possible creation rate it is
impractical to save all flow rules in the DPDK layer. This means that flow
rule persistence really depends on whether the PMD and HW can implement it
efficiently. It can also be limited by the rule item and action types,
and by the transfer bit in its attributes (a combination of an item/action
type and a value of the transfer bit is called a rule feature).

Add a device capability bit for PMDs that can keep at least some
of the flow rules across restart. Without this capability the behavior
is still unspecified, and it is declared that the application must
flush the rules before stopping the device.
Allow the application to test for persistence of rules using
a particular feature by attempting to create a flow rule
using that feature when the device is stopped
and checking for the specific error.
This is logical because if the PMD can create the flow rule
when the device is not started and use it after the start happens,
it is natural that it can move its internal flow rule object
to the same state when the device is stopped and restore the state
when the device is started.

Rule persistence across a reconfiguration is not required,
because tracking all the rules and the configuration-dependent resources
they use may be infeasible. In case a PMD cannot keep the rules
across reconfiguration, it is allowed to simply report an error.
The application must then flush the rules before attempting it.

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
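
A sketch of the application-side consequence: flush the rules before
stopping unless the device advertises keeping them (the bit name
RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP is an assumption):

    #include <rte_ethdev.h>
    #include <rte_flow.h>

    /* Flush flow rules before stop if they would not survive a restart. */
    static int
    stop_port(uint16_t port_id)
    {
        struct rte_eth_dev_info info;

        if (rte_eth_dev_info_get(port_id, &info) != 0 ||
            (info.dev_capa & RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP) == 0)
            rte_flow_flush(port_id, NULL);

        return rte_eth_dev_stop(port_id);
    }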


Revision tags: v21.11-rc1
# daa02b5c 15-Oct-2021 Olivier Matz <olivier.matz@6wind.com>

mbuf: add namespace to offload flags

Fix the mbuf offload flags namespace by adding an RTE_ prefix to the
name. The old flags remain usable, but a deprecation warning is issued
at compilation.

Signed-off-by: Olivier Matz <olivier.matz@6wind.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
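
A tiny illustration of the renaming; the old spellings keep working but now
emit deprecation warnings:

    #include <stdbool.h>
    #include <rte_mbuf.h>

    /* PKT_RX_RSS_HASH is now spelled RTE_MBUF_F_RX_RSS_HASH. */
    static inline bool
    mbuf_has_rss_hash(const struct rte_mbuf *m)
    {
        return (m->ol_flags & RTE_MBUF_F_RX_RSS_HASH) != 0;
    }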


# 68e8ca7b 22-Oct-2021 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

ethdev: avoid usage of ULL for 64-bit unsigned constants

Use UINT64_C() macro instead.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
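
For illustration, the two forms below yield the same 64-bit constant; the
macro names are hypothetical:

    #include <stdint.h>

    #define EXAMPLE_FLAG_OLD (1ULL << 33)          /* relies on the ULL suffix */
    #define EXAMPLE_FLAG_NEW (UINT64_C(1) << 33)   /* preferred form */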


# 4852c647 22-Oct-2021 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

ethdev: replace single bit masks with macros

The macros RTE_BIT32 and RTE_BIT64 are used to replace single bit masks.

Do not switch the VLAN offload flags since their type is not a fixed size.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>
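
A short illustration with hypothetical flag names; RTE_BIT32()/RTE_BIT64()
produce a single-bit mask of the matching fixed-size type:

    #include <rte_bitops.h>

    #define EXAMPLE_CAPA_FOO RTE_BIT64(0)   /* was: 0x0000000000000001 */
    #define EXAMPLE_CAPA_BAR RTE_BIT32(3)   /* was: 0x00000008 */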


# 295968d1 22-Oct-2021 Ferruh Yigit <ferruh.yigit@intel.com>

ethdev: add namespace

Add 'RTE_ETH' namespace to all enums & macros in a backward compatible
way. The macros for backward compatibility can be removed in next LTS.
Also updated some struct names to have 'rte_eth' prefix.

All internal components switched to using new names.

Syntax fixed on lines that this patch touches.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
Acked-by: Wisam Jaddo <wisamm@nvidia.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Chenbo Xia <chenbo.xia@intel.com>
Acked-by: Hemant Agrawal <hemant.agrawal@nxp.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
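
A small sketch of an Rx configuration written with the new 'RTE_ETH'
namespace; old spellings such as ETH_MQ_RX_RSS and DEV_RX_OFFLOAD_IPV4_CKSUM
remain available through compatibility macros until the next LTS:

    #include <rte_ethdev.h>

    static const struct rte_eth_conf example_conf = {
        .rxmode = {
            .mq_mode = RTE_ETH_MQ_RX_RSS,                 /* was ETH_MQ_RX_RSS */
            .offloads = RTE_ETH_RX_OFFLOAD_IPV4_CKSUM,    /* was DEV_RX_OFFLOAD_IPV4_CKSUM */
        },
    };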


# 9ce1717d 22-Oct-2021 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

ethdev: remove unused L2 tunnel mask defines

Fixes: cf47acc0f9ba ("ethdev: remove L2 tunnel offload control API")
Cc: stable@dpdk.org

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>


# 93e441c9 21-Oct-2021 Xueming Li <xuemingl@nvidia.com>

ethdev: get device capability name as string

This patch adds an API to return the name of a device capability.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
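
A sketch of how an application might use the new helper, assuming it is
'const char *rte_eth_dev_capability_name(uint64_t capability)' taking a
single capability bit:

    #include <stdio.h>
    #include <rte_ethdev.h>

    /* Print the names of all capabilities a port advertises. */
    static void
    print_capabilities(uint16_t port_id)
    {
        struct rte_eth_dev_info info;
        uint64_t bit;

        if (rte_eth_dev_info_get(port_id, &info) != 0)
            return;
        for (bit = 1; bit != 0; bit <<= 1)
            if (info.dev_capa & bit)
                printf("capability: %s\n", rte_eth_dev_capability_name(bit));
    }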


# dd22740c 21-Oct-2021 Xueming Li <xuemingl@nvidia.com>

ethdev: introduce shared Rx queue

In the current DPDK framework, each Rx queue is pre-loaded with mbufs to
store incoming packets. For some PMDs, when the number of representors
scales out in a switch domain, the memory consumption becomes significant.
Polling all ports also leads to high cache misses, high latency and low
throughput.

This patch introduces the shared Rx queue. Ports in the same Rx domain and
switch domain can share an Rx queue set by specifying a non-zero sharing
group in the Rx queue configuration.

A shared Rx queue is identified by the share_rxq field of the Rx queue
configuration. Port A RxQ X can share an RxQ with Port B RxQ Y by using
the same shared Rx queue ID.

No special API is defined to receive packets from a shared Rx queue.
Polling any member port of a shared Rx queue receives packets of that
queue for all member ports; the port is identified by mbuf->port. The PMD
is responsible for resolving the shared Rx queue from the device and
queue data.

A shared Rx queue must be polled from the same thread or core; polling
the queue ID of any member port is essentially the same.

Multiple share groups are supported. A PMD should support mixed
configuration by allowing multiple share groups and non-shared Rx queues
on one port.

Example grouping and polling model to reflect service priority:
Group1, 2 shared Rx queues per port: PF, rep0, rep1
Group2, 1 shared Rx queue per port: rep2, rep3, ... rep127
Core0: poll PF queue0
Core1: poll PF queue1
Core2: poll rep2 queue0

PMDs advertise the shared Rx queue capability via RTE_ETH_DEV_CAPA_RXQ_SHARE.

The PMD is responsible for shared Rx queue consistency checks to avoid
member ports' configurations contradicting each other.

Signed-off-by: Xueming Li <xuemingl@nvidia.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
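
A configuration sketch under stated assumptions: the capability bit is the
RTE_ETH_DEV_CAPA_RXQ_SHARE named above, and the sharing group is requested
through a 'share_group' field of 'struct rte_eth_rxconf' (the log calls the
field share_rxq, so treat the field name as an assumption):

    #include <errno.h>
    #include <rte_ethdev.h>
    #include <rte_mempool.h>

    /* Put Rx queue 0 of this port into sharing group 1, if supported. */
    static int
    setup_shared_rxq(uint16_t port_id, struct rte_mempool *mp)
    {
        struct rte_eth_dev_info info;
        struct rte_eth_rxconf rx_conf;

        if (rte_eth_dev_info_get(port_id, &info) != 0 ||
            (info.dev_capa & RTE_ETH_DEV_CAPA_RXQ_SHARE) == 0)
            return -ENOTSUP;

        rx_conf = info.default_rxconf;
        rx_conf.share_group = 1;   /* assumed field: non-zero joins group 1 */

        return rte_eth_rx_queue_setup(port_id, 0, 1024,
                                      rte_eth_dev_socket_id(port_id),
                                      &rx_conf, mp);
    }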


# 5906be5a 20-Oct-2021 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

ethdev: fix ID spelling in comments and log messages

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>


# 5b49ba65 20-Oct-2021 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

ethdev: fix VLAN spelling including VLAN ID case

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>


# 064e90c4 20-Oct-2021 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

ethdev: fix DCB and VMDq spelling

Fix both in one changeset since they share a line in a number of cases.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>


# 0d9f56a8 20-Oct-2021 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

ethdev: fix Ethernet spelling

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>


# 09fd4227 20-Oct-2021 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

ethdev: fix Rx/Tx spelling

Fix it everywhere in ethdev including log messages.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>


# 3c2ca0a9 20-Oct-2021 Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>

ethdev: avoid documentation in next lines

Documentation on the next, separate line is confusing. If documentation
requires its own line, it should come before, not after.

Move documentation to the previous line if documentation on the same
line makes it too long.

Fix a number of incorrect markups on the way.

When a line is touched by the patch anyway, do other cosmetic
changes to avoid changes in the next patches.

Signed-off-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Ori Kam <orika@nvidia.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@intel.com>


# e1823e08 20-Oct-2021 Thomas Monjalon <thomas@monjalon.net>

ethdev: replace bit shifts with macros

The macros RTE_BIT32 and RTE_BIT64 are used to replace bit shifts.
The macro UINT64_C() is also used to replace remaining occurrences of ULL.

The bit shifts of ETH_RSS_LEVEL_* are kept for aesthetic reasons.

The API of rte_mtr and rte_tm uses enums for 64-bit variables.
As they are enums, the unsigned bit macros cannot be used.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>


# 990912e6 18-Oct-2021 Ferruh Yigit <ferruh.yigit@intel.com>

ethdev: unify MTU checks

Both 'rte_eth_dev_configure()' and 'rte_eth_dev_set_mtu()' set the MTU but
have slightly different checks; for example, one checks the minimum MTU
against RTE_ETHER_MIN_MTU and the other against RTE_ETHER_MIN_LEN.

The checks are moved into a common function to unify them. This also has
the benefit of common error logs.

The default 'dev_info->min_mtu' (the one set by ethdev if the driver
doesn't provide one) is changed to ('RTE_ETHER_MIN_LEN' - overhead).
Previously it was 'RTE_ETHER_MIN_MTU', which is the minimum MTU for IPv4
packets. Since the intention is to provide a minimum MTU corresponding to
the minimum frame size, the new default value suits better.

Suggested-by: Huisong Li <lihuisong@huawei.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
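
For illustration, with the standard Ethernet overhead (L2 header plus CRC)
the new default works out as below; devices that report extra overhead
(e.g. VLAN tags) end up with a correspondingly smaller default:

    #include <rte_ether.h>

    /* RTE_ETHER_MIN_LEN (64) - RTE_ETHER_HDR_LEN (14) - RTE_ETHER_CRC_LEN (4) = 46,
     * versus the previous default RTE_ETHER_MIN_MTU (68). */
    #define EXAMPLE_DEFAULT_MIN_MTU \
        (RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN)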


# b563c142 18-Oct-2021 Ferruh Yigit <ferruh.yigit@intel.com>

ethdev: remove jumbo offload flag

Removing 'DEV_RX_OFFLOAD_JUMBO_FRAME' offload flag.

Instead of drivers announcing this capability, the application can deduce
it by checking the reported 'dev_info.max_mtu' or
'dev_info.max_rx_pktlen'.

And instead of the application setting this flag explicitly to enable
jumbo frames, this can be deduced by the driver by comparing the requested
'mtu' to 'RTE_ETHER_MTU'.

Removing this additional configuration for simplification.

Suggested-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Reviewed-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
Acked-by: Michal Krawczyk <mk@semihalf.com>
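
A sketch of the deduction described above; the helper name is illustrative
only:

    #include <stdbool.h>
    #include <rte_ether.h>

    /* Drivers now infer jumbo mode from the requested MTU instead of a flag. */
    static bool
    jumbo_frames_requested(uint16_t mtu)
    {
        return mtu > RTE_ETHER_MTU;
    }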


# 1bb4a528 18-Oct-2021 Ferruh Yigit <ferruh.yigit@intel.com>

ethdev: fix max Rx packet length

There is confusion about setting the max Rx packet length; this patch aims
to clarify it.

'rte_eth_dev_configure()' API accepts max Rx packet size via
'uint32_t max_rx_pkt_len' field of the config struct 'struct
rte_eth_conf'.

Also, the 'rte_eth_dev_set_mtu()' API can be used to set the MTU, and the
result is stored into '(struct rte_eth_dev)->data->mtu'.

These two APIs are related, but they work in a disconnected way; they
store the set values in different variables, which makes it hard to figure
out which one to use, and having two different methods for related
functionality is confusing for the users.

Other issues causing confusion are:
* The maximum transmission unit (MTU) is the payload of the Ethernet
frame, while 'max_rx_pkt_len' is the size of the Ethernet frame. The
difference is the Ethernet frame overhead, and this overhead may differ
from device to device based on what the device supports, like VLAN and
QinQ.
* 'max_rx_pkt_len' is only valid when the application requests jumbo
frames, which adds additional confusion, and some APIs and PMDs already
disregard this documented behavior.
* For the jumbo-frame-enabled case, 'max_rx_pkt_len' is a mandatory
field, which adds configuration complexity for the application.

As a solution, both APIs take the MTU as a parameter, and both save the
result in the same variable, '(struct rte_eth_dev)->data->mtu'. For this,
'max_rx_pkt_len' is updated to 'mtu', and it is always valid, independent
of jumbo frames.

For 'rte_eth_dev_configure()', 'dev->data->dev_conf.rxmode.mtu' is the user
request; it should be used only within the configure function, and the
result should be stored in '(struct rte_eth_dev)->data->mtu'. After that
point both the application and the PMD use the MTU from this variable.

When the application doesn't provide an MTU during 'rte_eth_dev_configure()',
the default 'RTE_ETHER_MTU' value is used.

Additional clarification is done on the scattered Rx configuration, in
relation to the MTU and the Rx buffer size.
The MTU is used to configure the device for the physical Rx/Tx size
limitation; the Rx buffer is where Rx packets are stored, and many PMDs use
the mbuf data buffer size as the Rx buffer size.
PMDs compare the MTU against the Rx buffer size to decide whether to enable
scattered Rx. If scattered Rx is not supported by the device, an MTU bigger
than the Rx buffer size should fail.

Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Somnath Kotur <somnath.kotur@broadcom.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
Acked-by: Konstantin Ananyev <konstantin.ananyev@intel.com>
Acked-by: Rosen Xu <rosen.xu@intel.com>
Acked-by: Hyong Youb Kim <hyonkim@cisco.com>
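
A sketch of the resulting configuration flow, assuming the renamed field is
reachable as 'rxmode.mtu' in 'struct rte_eth_conf':

    #include <rte_ethdev.h>

    /* Request the MTU at configure time; it can still be adjusted later
     * with rte_eth_dev_set_mtu(). Both paths end up in
     * (struct rte_eth_dev)->data->mtu. */
    static int
    configure_port_mtu(uint16_t port_id, uint16_t mtu)
    {
        struct rte_eth_conf conf = {
            .rxmode = { .mtu = mtu },   /* was .max_rx_pkt_len (frame size) */
        };
        int ret;

        ret = rte_eth_dev_configure(port_id, 1, 1, &conf);
        if (ret != 0)
            return ret;

        return rte_eth_dev_set_mtu(port_id, mtu);
    }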
