8ac3a1cd | 16-Mar-2023 |
Eli Britstein <elibr@nvidia.com> |
app/testpmd: assign custom ID to flow rules
Upon creation of a flow, testpmd assigns it a flow ID. Later, the flow ID is used for flow operations (query, destroy, dump).
The testpmd application allows managing flow rules by their IDs. The flow ID is known only when the flow is created. In order to prepare a complete sequence of testpmd commands to copy/paste, the flow IDs must be predictable.
Allow the user to provide an assigned ID.
Example:
testpmd> flow create 0 ingress user_id 0x1234 pattern eth / end actions count / drop / end
Flow rule #0 created, user-id 0x1234
testpmd> flow query 0 0x1234 count user_id
testpmd> flow dump 0 user_id rule 0x1234
testpmd> flow destroy 0 rule 0x1234 user_id
Flow rule #0 destroyed, user-id 0x1234
Here, "user_id" is a flag that signifies the "rule" ID is the user-id.
The motivation comes from OVS. OVS dumps its "rte_flow_create" calls to the log in testpmd command syntax. As the flow ID testpmd would assign is unknown, it cannot log valid "flow destroy" commands.
With this enhancement, valid testpmd commands can be written to a log for copy/paste into testpmd. The application's flow sequence can then be played back in testpmd, enabling enhanced DPDK debug capabilities for the application's flows in the controlled environment of testpmd rather than the dynamic, more difficult to debug environment of the application.
Signed-off-by: Eli Britstein <elibr@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com>
|
e30aa525 | 08-Apr-2023 |
Jie Hai <haijie1@huawei.com> |
ethdev: introduce low latency RS FEC
This patch introduces LLRS (low-latency Reed-Solomon FEC). LLRS is supported for 25 Gbps, 50 Gbps, 100 Gbps, 200 Gbps and 400 Gbps Ethernet networks.
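For reference, a minimal C sketch of how an application might request this mode through the existing ethdev FEC calls (rte_eth_fec_get_capability() and rte_eth_fec_set()); the RTE_ETH_FEC_LLRS mode name is assumed from this patch:

#include <errno.h>
#include <rte_common.h>
#include <rte_ethdev.h>

/* Request low latency RS FEC on a port if any advertised speed supports it.
 * RTE_ETH_FEC_LLRS is assumed to be the enum value added by this patch.
 */
static int
enable_llrs_fec(uint16_t port_id)
{
    struct rte_eth_fec_capa capa[8];
    int num, i;

    num = rte_eth_fec_get_capability(port_id, capa, RTE_DIM(capa));
    if (num < 0)
        return num;

    for (i = 0; i < num; i++) {
        /* Is LLRS advertised for this supported link speed? */
        if (capa[i].capa & RTE_ETH_FEC_MODE_TO_CAPA(RTE_ETH_FEC_LLRS))
            return rte_eth_fec_set(port_id,
                    RTE_ETH_FEC_MODE_TO_CAPA(RTE_ETH_FEC_LLRS));
    }
    return -ENOTSUP;
}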
Signed-off-by: Jie Hai <haijie1@huawei.com> Signed-off-by: Dongdong Liu <liudongdong3@huawei.com> Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
|
e9b8532e | 31-May-2023 |
Dong Zhou <dongzhou@nvidia.com> |
ethdev: add flow item for RoCE infiniband BTH
IB (InfiniBand) is a type of networking used in high-performance computing, with high throughput and low latency. Like Ethernet, IB defines a layered protocol (Physical, Link, Network, Transport Layers). IB provides native support for RDMA (Remote DMA), an extension of DMA that allows direct access to remote host memory without CPU intervention. An IB network requires NICs and switches that support the IB protocol.
RoCE (RDMA over Converged Ethernet) is a network protocol that allows RDMA to run on Ethernet. RoCE encapsulates IB packets on Ethernet and has two versions, RoCEv1 and RoCEv2. RoCEv1 is an Ethernet link-layer protocol: IB packets are encapsulated at the Ethernet layer and use Ethernet type 0x8915. RoCEv2 is an internet-layer protocol: IB packets are encapsulated in the UDP payload and use destination port 4791. The format of a RoCEv2 packet is as follows: ETH + IP + UDP(dport 4791) + IB(BTH + ExtHDR + PAYLOAD + CRC)
BTH (Base Transport Header) is the IB transport-layer header; both RoCEv1 and RoCEv2 contain it. This patch introduces a new RTE item to match the IB BTH in RoCE packets. One use of this match is that the user can monitor RoCEv2 CNP (Congestion Notification Packet) traffic by matching BTH opcode 0x81.
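For illustration, a minimal C sketch (assuming the new RTE_FLOW_ITEM_TYPE_IB_BTH item from this patch) that steers RoCEv2 traffic to a queue; matching a specific BTH opcode such as 0x81 would additionally require a spec/mask for the BTH item, whose field layout is not shown here:

#include <rte_byteorder.h>
#include <rte_flow.h>

/* Match ETH / IPv4 / UDP(dport 4791) / IB BTH and steer it to Rx queue 0. */
static struct rte_flow *
create_rocev2_rule(uint16_t port_id, struct rte_flow_error *error)
{
    struct rte_flow_attr attr = { .ingress = 1, .group = 1 };
    struct rte_flow_item_udp udp_spec = { .hdr.dst_port = RTE_BE16(4791) };
    struct rte_flow_item_udp udp_mask = { .hdr.dst_port = RTE_BE16(0xffff) };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV4 },
        { .type = RTE_FLOW_ITEM_TYPE_UDP,
          .spec = &udp_spec, .mask = &udp_mask },
        { .type = RTE_FLOW_ITEM_TYPE_IB_BTH },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue queue = { .index = 0 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(port_id, &attr, pattern, actions, error);
}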
This patch also adds the testpmd command line to match the RoCEv2 BTH. Usage example:
testpmd> flow create 0 group 1 ingress pattern eth / ipv4 / udp dst is 4791 / ib_bth opcode is 0x81 dst_qp is 0xd3 / end actions queue index 0 / end
Signed-off-by: Dong Zhou <dongzhou@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
|
8ebc396b | 17-Feb-2023 |
Jiawei Wang <jiaweiw@nvidia.com> |
ethdev: add flow matching of aggregated port
When multiple ports are aggregated into a single DPDK port (for example: Linux bonding, DPDK bonding, failsafe, etc.), we want to know which port is used for Rx and Tx.
This patch allows mapping an Rx queue to an aggregated port by using a flow rule. The new item is called RTE_FLOW_ITEM_TYPE_AGGR_AFFINITY.
When the aggregated affinity is used as a matching item in a flow rule, and the same affinity value is set by calling rte_eth_dev_map_aggr_tx_affinity(), packets can be sent from the same port they were received on. The affinity numbering starts from 1, so trying to match on aggr_affinity 0 will result in an error.
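As a sketch in C (item and function names are taken from this patch series; treat the exact field layout as an assumption), matching affinity 1 on ingress and mapping Tx queue 0 to the same affinity might look like:

#include <rte_ethdev.h>
#include <rte_flow.h>

/* Steer packets received from aggregated port 1 to Rx queue 0, then map
 * Tx queue 0 to the same affinity so replies leave through the same port.
 */
static int
pin_aggr_affinity(uint16_t port_id, struct rte_flow_error *error)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item_aggr_affinity aff_spec = { .affinity = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_AGGR_AFFINITY, .spec = &aff_spec },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue queue = { .index = 0 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    if (rte_flow_create(port_id, &attr, pattern, actions, error) == NULL)
        return -1;

    /* Tx queue 0 will use the aggregated port with affinity 1. */
    return rte_eth_dev_map_aggr_tx_affinity(port_id, 0, 1);
}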
Add the testpmd command line to match the new item:
flow create 0 ingress group 0 pattern aggr_affinity affinity is 1 / end actions queue index 0 / end
The above command creates a flow rule on a single DPDK port, matches packets coming from the first physical port and redirects these packets to Rx queue 0.
Signed-off-by: Jiawei Wang <jiaweiw@nvidia.com> Acked-by: Thomas Monjalon <thomas@monjalon.net>
|
f5b2846d | 10-Feb-2023 |
Viacheslav Ovsiienko <viacheslavo@nvidia.com> |
ethdev: share indirect action between ports
The flow API implements the concept of shared objects, known as indirect actions (RTE_FLOW_ACTION_TYPE_INDIRECT). An application can create an indirect action of the desired type and configuration with the rte_flow_action_handle_create call and then specify the obtained action handle in multiple flows.
The initial concept supposes the action handle is strictly attached to the port it was created on and is used exclusively in the flows installed on that port.
Nowadays multipath network topologies are quite common, packets belonging to the same connection might arrive and be sent over multiple ports, and there is a rising demand to handle these "spread" connections. To fulfil this demand it is proposed to extend indirect action sharing across multiple ports. This kind of sharing would be extremely useful for meters and counters, allowing a single connection to be managed over multiple ports.
This cross-port object sharing is hard to implement in a generic way merely with software on the upper layers, but can be provided by the driver over a single hardware instance, where multiple ports reside on the same physical NIC and share the same hardware context.
To allow this action sharing, the application should specify the "host port" during flow configuration to declare its intention to share indirect actions. All indirect actions reside within the "host port" context and can be shared in flows installed on the host port and on all the ports referencing it.
If sharing between the host port and the port being configured is not supported, the configuration should be rejected with an error. There might be multiple independent (mutually exclusive) sharing domains, each with a dedicated host port and referencing ports.
To manage a shared indirect action, any port from the sharing domain can be specified. Whether or not to share a created action is up to the application; no API change is needed.
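A minimal C sketch of such a configuration, assuming the host_port_id field and the RTE_FLOW_PORT_FLAG_SHARE_INDIRECT flag described by this patch (names taken from the patch, not verified here):

#include <rte_flow.h>

/* Configure 'guest_port' to share the indirect actions hosted on 'host_port'. */
static int
configure_shared_indirect(uint16_t host_port, uint16_t guest_port)
{
    struct rte_flow_error error;
    struct rte_flow_queue_attr queue_attr = { .size = 64 };
    const struct rte_flow_queue_attr *queue_attrs[] = { &queue_attr };
    struct rte_flow_port_attr host_attr = { .nb_counters = 64 };
    struct rte_flow_port_attr guest_attr = {
        .host_port_id = host_port,
        .flags = RTE_FLOW_PORT_FLAG_SHARE_INDIRECT,
    };
    int ret;

    /* The host port owns the indirect action resources. */
    ret = rte_flow_configure(host_port, &host_attr, 1, queue_attrs, &error);
    if (ret != 0)
        return ret;

    /* The referencing port reuses the host port's indirect actions. */
    return rte_flow_configure(guest_port, &guest_attr, 1, queue_attrs, &error);
}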
Support is added into testpmd to share an indirect action. An action should be created on a single port, and the handle can then be used in the templates and flows of multiple ports, for example:
flow configure 0 queues_number 1 queues_size 64 counters_number 64
flow configure 1 queues_number 1 queues_size 64 counters_number 0 \
    host_port 0 flags 1
flow indirect_action 0 create ingress action_id 0 action count / end
flow actions_template 0 create ingress actions_template_id 8 template indirect 0 / queue index 0 / end mask count / queue index 0 / end
flow actions_template 1 create ingress actions_template_id 18 template shared_indirect 0 0 / queue index 0 / end mask count / queue index 0 / end
Signed-off-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com>
|
750ee81d | 05-Feb-2023 |
Leo Xu <yongquanx@nvidia.com> |
ethdev: match ICMPv6 ID and sequence
This patch adds API support for ICMPv6 ID and sequence.
1: Add two new pattern item types for ICMPv6 echo request and reply:
   RTE_FLOW_ITEM_TYPE_ICMP6_ECHO_REQUEST
   RTE_FLOW_ITEM_TYPE_ICMP6_ECHO_REPLY
2: Add new structures for ICMP packet definitions.
   struct rte_icmp_base_hdr;  # For the basic header common to all ICMP/ICMPv6 headers
   struct rte_icmp_echo_hdr;  # For the ICMP/ICMPv6 echo header
The existing struct rte_icmp_hdr should not be used in new code and should be deprecated in the future, because icmp_ident/icmp_seq_nb are not common fields of ICMP/ICMPv6 packets.
3: Enhance testpmd flow pattern to support ICMPv6 identifier and sequence.
Example of ICMPv6 echo pattern in testpmd command:
pattern eth / ipv6 / icmp6_echo_request / end
pattern eth / ipv6 / icmp6_echo_reply / end
pattern eth / ipv6 / icmp6_echo_request ident is 20 seq is 30 / end
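The same match expressed through the C flow API might look like the following sketch (matching any echo request; narrowing to a specific ident/seq would require a spec/mask for the new echo item, whose field layout is not repeated here):

#include <rte_flow.h>

/* Steer ICMPv6 echo requests to Rx queue 0. */
static struct rte_flow *
match_icmp6_echo_request(uint16_t port_id, struct rte_flow_error *error)
{
    struct rte_flow_attr attr = { .ingress = 1 };
    struct rte_flow_item pattern[] = {
        { .type = RTE_FLOW_ITEM_TYPE_ETH },
        { .type = RTE_FLOW_ITEM_TYPE_IPV6 },
        { .type = RTE_FLOW_ITEM_TYPE_ICMP6_ECHO_REQUEST },
        { .type = RTE_FLOW_ITEM_TYPE_END },
    };
    struct rte_flow_action_queue queue = { .index = 0 };
    struct rte_flow_action actions[] = {
        { .type = RTE_FLOW_ACTION_TYPE_QUEUE, .conf = &queue },
        { .type = RTE_FLOW_ACTION_TYPE_END },
    };

    return rte_flow_create(port_id, &attr, pattern, actions, error);
}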
Signed-off-by: Leo Xu <yongquanx@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com>
|
a4bf5421 | 21-Nov-2022 |
Hanumanth Pothula <hpothula@marvell.com> |
app/testpmd: add option to fix multi-mempool check
Add a new testpmd command line argument, multi-rx-mempool, to control the multi-Rx-mempool feature. By default it is disabled.
Also, validate the ethdev parameter 'max_rx_mempools' to know whether the device supports the multi-mempool feature or not.
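A minimal sketch of that validation in C, using the existing rte_eth_dev_info_get() call:

#include <rte_ethdev.h>

/* Return >0 if the device supports more than one mempool per Rx queue,
 * 0 if not, or a negative errno on query failure.
 */
static int
multi_mempool_supported(uint16_t port_id)
{
    struct rte_eth_dev_info dev_info;
    int ret;

    ret = rte_eth_dev_info_get(port_id, &dev_info);
    if (ret != 0)
        return ret;

    /* A value of 0 or 1 means only one mempool per Rx queue. */
    return dev_info.max_rx_mempools > 1;
}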
Bugzilla ID: 1128 Fixes: 4f04edcda769 ("app/testpmd: support multiple mbuf pools per Rx queue")
Signed-off-by: Hanumanth Pothula <hpothula@marvell.com> Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com> Tested-by: Yingya Han <yingyax.han@intel.com> Tested-by: Yaqi Tang <yaqi.tang@intel.com>
|
d252801d | 16-Nov-2022 |
Michael Baum <michaelba@nvidia.com> |
doc: fix spaces in testpmd flow syntax
In the flow syntax documentation, there is an example for creating a pattern template.
Before the example, a blank line is missing, causing it to look like regular bold text. In addition, inside the example, a tab is used instead of spaces, which expands the indentation of one line.
This patch adds the blank line and replaces the tab with spaces.
Fixes: 04cc665fab38 ("app/testpmd: add flow template management") Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com> Reviewed-by: Thomas Monjalon <thomas@monjalon.net> Acked-by: Yuying Zhang <yuying.zhang@intel.com>
|
ea30023e | 16-Nov-2022 |
Michael Baum <michaelba@nvidia.com> |
doc: fix colons in testpmd aged flow rules
In the testpmd documentation, for listing aged-out flow rules there are some boxes of examples.
In Sphinx syntax, those boxes are introduced by a preceding "::". However, in two places ":" is used instead and the example looks like regular text.
This patch replaces the ":" with "::" to get a code box.
Fixes: 0e459ffa0889 ("app/testpmd: support flow aging") Cc: stable@dpdk.org
Signed-off-by: Michael Baum <michaelba@nvidia.com> Reviewed-by: Thomas Monjalon <thomas@monjalon.net>
|
966eb55e | 26-Oct-2022 |
Michael Baum <michaelba@nvidia.com> |
ethdev: add queue-based API to report aged flow rules
When an application uses queue-based flow rule management and operates on the same flow rule via the same queue, e.g. create/destroy/query, the API for querying aged flow rules should also take a queue ID parameter, just like the other queue-based flow APIs.
This way, the PMD can work in a more optimized manner since resources are isolated by queue and need no synchronization.
If the application does use queue-based flow management but configures the port without RTE_FLOW_PORT_FLAG_STRICT_QUEUE, meaning it may operate on a given flow rule from different queues, the queue ID parameter will be ignored.
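A minimal C sketch of the queue-based query, assuming the rte_flow_get_q_aged_flows() prototype added by this patch:

#include <rte_common.h>
#include <rte_flow.h>

/* Poll aged-out flow contexts for one flow queue of a port.
 * Returns the number of contexts retrieved, or a negative errno.
 */
static int
drain_aged_flows(uint16_t port_id, uint32_t queue_id)
{
    void *contexts[64];
    struct rte_flow_error error;
    int n;

    n = rte_flow_get_q_aged_flows(port_id, queue_id, contexts,
                                  RTE_DIM(contexts), &error);
    /* Each context is the user pointer set in the AGE action configuration;
     * the application would look up and destroy the matching rules here.
     */
    return n;
}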
Signed-off-by: Michael Baum <michaelba@nvidia.com> Acked-by: Ori Kam <orika@nvidia.com> Acked-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
|