# 4c2e7468 | 21-Nov-2024 | Stephen Hemminger <stephen@networkplumber.org>
app/testpmd: remove redundant policy action condition
The loop over policy actions will always exit when it sees the flow end action, so the next check is redundant.
Link: https://pvs-studio.com/en/blog/posts/cpp/1179/
Fixes: f29fa2c59b85 ("app/testpmd: support policy actions per color")
Cc: stable@dpdk.org
Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
# 098f949f | 18-Nov-2024 | Danylo Vodopianov <dvo-plv@napatech.com>
app/testpmd: fix aged flow destroy
The port_flow_destroy() function never assumed that the rule array can be freed while it is executing, and port_flow_aged() violated that assumption.
In case of an async flow create failure, testpmd tries to do a cleanup, but it wrongly removes the 1st flow (with id 0): pf->id is not set at that moment and is always 0, so the 1st flow is removed instead. A local copy of flow->id must be used in the call to port_flow_destroy() to avoid accessing and processing flow->id after the flow is removed.
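The fix pattern can be sketched with a self-contained stand-in (the struct and function names here are illustrative, not the actual testpmd code):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative stand-in for testpmd's flow entry; not the real struct. */
struct flow_rule {
    uint64_t id;
};

/* Mimics port_flow_destroy() freeing the entry while the caller may
 * still hold a pointer to it. */
static void destroy_one(struct flow_rule *flow)
{
    flow->id = UINT64_MAX; /* poison before free to make misuse visible */
    free(flow);
}

/* The fix: copy flow->id to a local BEFORE the destroy call, so the id
 * is never read from freed memory. */
static uint64_t destroy_and_report(struct flow_rule *flow)
{
    uint64_t id = flow->id;

    destroy_one(flow);
    return id;
}
```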
Fixes: de956d5ecf08 ("app/testpmd: support age shared action context")
Cc: stable@dpdk.org
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
# 5b7d82e8 | 18-Nov-2024 | Danylo Vodopianov <dvo-plv@napatech.com>
app/testpmd: fix flow update
If actions provided to "flow update..." command contained an age action, then testpmd did not update the age action context accordingly.
Thus the "flow aged <port_id> destroy" command cannot execute successfully.
The fix consists of the following steps:
1. Generate a new port flow entry to add/replace action(s).
2. Set the age context if an age action is present.
3. Replace the flow in the flow list.
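A minimal sketch of those three steps on a singly linked flow list (the types and field names are illustrative, not testpmd's real struct port_flow):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>
#include <stdint.h>
#include <stdlib.h>

/* Illustrative flow list entry; "has_age_ctx" stands in for the age
 * action context the real code must carry over to the new entry. */
struct pf {
    uint32_t id;
    bool has_age_ctx;
    struct pf *next;
};

/* Update flow `id`: 1. generate a new entry, 2. set the age context if
 * the new actions contain an age action, 3. replace the old entry. */
static struct pf *flow_update(struct pf **head, uint32_t id, bool new_has_age)
{
    for (struct pf **link = head; *link != NULL; link = &(*link)->next) {
        if ((*link)->id != id)
            continue;
        struct pf *old = *link;
        struct pf *nf = calloc(1, sizeof(*nf));   /* step 1 */
        nf->id = id;
        nf->has_age_ctx = new_has_age;            /* step 2 */
        nf->next = old->next;                     /* step 3 */
        *link = nf;
        free(old);
        return nf;
    }
    return NULL;
}
```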
Fixes: 2d9c7e56e52c ("app/testpmd: support updating flow rule actions")
Cc: stable@dpdk.org
Signed-off-by: Danylo Vodopianov <dvo-plv@napatech.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
# 045e35aa | 10-Oct-2024 | James Hershaw <james.hershaw@corigine.com>
app/testpmd: support switching LED on/off
Add command to change the state of a controllable LED on an ethdev port to on/off. This is for the purpose of identifying which physical port is associated with an ethdev.
Usage: testpmd> set port <port-id> led <on/off>
Signed-off-by: James Hershaw <james.hershaw@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
# 54780542 | 10-Oct-2024 | James Hershaw <james.hershaw@corigine.com>
app/testpmd: support setting device EEPROM
There is currently no means to test the .set_eeprom function callback of a given PMD in drivers/net/. This patch adds functionality to allow a user to set device eeprom from the testpmd cmdline.
Usage:
testpmd> set port <port-id> eeprom <accept_risk> magic <magic> \
         value <value> offset <offset>
- <accept_risk> is a fixed string that is required from the user to acknowledge the risk involved in using this command and accept responsibility for it.
- <magic> is a decimal.
- <value> is a hex-as-string with no leading "0x".
- <offset> is a decimal (this field is optional and defaults to 0 if not specified).
Signed-off-by: James Hershaw <james.hershaw@corigine.com>
Reviewed-by: Chaoyong He <chaoyong.he@corigine.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
# 60bac722 | 26-Sep-2024 | Damodharam Ammepalli <damodharam.ammepalli@broadcom.com>
ethdev: add link speed lanes configuration
Update the eth_dev_ops structure with new function vectors to get link speed lanes, get their capabilities, and set them. Update testpmd to provide the required configuration and information display infrastructure.
A supporting Ethernet controller driver registers callbacks to provide the link speed lanes get and set services. This lanes configuration is applicable only when the NIC is forced to fixed speeds. In autonegotiation mode, the hardware automatically negotiates the number of lanes.
These are the new commands.
testpmd> show port 0 speed_lanes capabilities
Supported speeds   Valid lanes
-----------------------------------
 10 Gbps           1
 25 Gbps           1
 40 Gbps           4
 50 Gbps           1 2
100 Gbps           1 2 4
200 Gbps           2 4
400 Gbps           4 8
testpmd> port stop 0
testpmd> port config 0 speed_lanes 4
testpmd> port config 0 speed 200000 duplex full
testpmd> port start 0
testpmd> show port info 0
********************* Infos for port 0 *********************
MAC address: 14:23:F2:C3:BA:D2
Device name: 0000:b1:00.0
Driver name: net_bnxt
Firmware-version: 228.9.115.0
Connect to socket: 2
memory allocation on the socket: 2
Link status: up
Link speed: 200 Gbps
Active Lanes: 4
Link duplex: full-duplex
Autoneg status: Off
Signed-off-by: Damodharam Ammepalli <damodharam.ammepalli@broadcom.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
Reviewed-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
# 933f18db | 25-Sep-2024 | Alexander Kozyrev <akozyrev@nvidia.com>
ethdev: add flow rule by index with pattern
Add a new API to enqueue flow rule creation by index with pattern. The new template table rules insertion type, index-based insertion with pattern, requires a new flow rule creation function with both rule index and pattern provided. Packets will match on the provided pattern at the provided index.
In testpmd, allow specifying both the rule index and the pattern in the flow rule creation command line parameters. Both are needed for rte_flow_async_create_by_index_with_pattern().
flow queue 0 create 0 template_table 2 rule_index 5 pattern_template 0 \
     actions_template 0 postpone no pattern eth / end \
     actions count / queue index 1 / end
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
# 1f47f469 | 22-Jul-2024 | Ferruh Yigit <ferruh.yigit@amd.com>
app/testpmd: fix build on signed comparison
Build error:
.../app/test-pmd/config.c: In function 'icmp_echo_config_setup':
.../app/test-pmd/config.c:5159:30: error: comparison between signed and unsigned integer expressions [-Werror=sign-compare]
  if ((nb_txq * nb_fwd_ports) < nb_fwd_lcores)
                              ^
All of 'nb_txq', 'nb_fwd_ports' and 'nb_fwd_lcores' are unsigned variables, but the warning is related to the integer promotion rules of C:
  'nb_txq'                -> uint16_t, promoted to 'int'
  'nb_fwd_ports'          -> uint16_t, promoted to 'int'
  (nb_txq * nb_fwd_ports) -> result 'int'
  'nb_fwd_lcores'         -> 'uint32_t'
This ends up comparing 'int' with 'uint32_t'.
Fix by adding back the cast that was initially part of the patch.
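The promotion rules can be checked directly; this sketch (not the actual testpmd code) uses `_Generic` to show that the product of two uint16_t values has type int, and the cast that silences the warning:

```c
#include <assert.h>
#include <stdint.h>

/* 1 if the expression has type int, i.e. the operands were promoted. */
#define IS_INT(x) _Generic((x), int: 1, default: 0)

static uint16_t nb_txq = 4, nb_fwd_ports = 2;
static uint32_t nb_fwd_lcores = 16;

static int too_few_tx_queues(void)
{
    /* The fix: cast the product so both sides are unsigned. */
    return (uint32_t)(nb_txq * nb_fwd_ports) < nb_fwd_lcores;
}
```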
Fixes: 2bf44dd14fa5 ("app/testpmd: fix lcore ID restriction")
Cc: stable@dpdk.org
Reported-by: Raslan Darawsheh <rasland@nvidia.com>
Signed-off-by: Ferruh Yigit <ferruh.yigit@amd.com>
Tested-by: Ali Alnubani <alialnu@nvidia.com>
# 2bf44dd1 | 06-Jun-2024 | Sivaprasad Tummala <sivaprasad.tummala@amd.com>
app/testpmd: fix lcore ID restriction
With modern CPUs it is possible to have a higher CPU count, and thus a higher RTE_MAX_LCORES. In the testpmd application, the current forwarding cores option "--nb-cores" is hard limited to 255.
The patch lifts this constraint and also widens the lcore data structure to 32-bit to align with the rte lcore APIs.
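The old limitation can be sketched as follows (field names are illustrative, not the actual testpmd fields): an 8-bit store silently truncates lcore ids above 255, while a 32-bit store matches the rte lcore APIs, which pass lcore ids as unsigned int.

```c
#include <assert.h>
#include <stdint.h>

/* Old layout: lcore id stored in a uint8_t, truncating ids above 255. */
static uint8_t old_store(uint32_t lcore_id)
{
    return (uint8_t)lcore_id;  /* truncates: 300 becomes 44 */
}

/* New layout: 32-bit storage preserves the full lcore id range. */
static uint32_t new_store(uint32_t lcore_id)
{
    return lcore_id;
}
```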
Fixes: af75078fece3 ("first public release")
Cc: stable@dpdk.org
Signed-off-by: Sivaprasad Tummala <sivaprasad.tummala@amd.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
# ecf408d2 | 25-Mar-2024 | Bing Zhao <bingz@nvidia.com>
app/testpmd: fix indirect action flush
The memory of the indirect action handles should be freed after being destroyed in the flush. The behavior needs to be consistent with the single handle destroy port_action_handle_destroy().
Otherwise, there would be a memory leak when closing / detaching a port without quitting the application. Meanwhile, since the action handles are already destroyed, it makes no sense to hold the indirect action software resources any longer.
Fixes: f7352c176bbf ("app/testpmd: fix use of indirect action after port close")
Cc: stable@dpdk.org
Signed-off-by: Bing Zhao <bingz@nvidia.com>
Reviewed-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
# c1496cb6 | 07-Mar-2024 | Gregory Etelson <getelson@nvidia.com>
app/testpmd: fix async indirect action list creation
Testpmd calls the same function to create a legacy indirect action and an indirect list action. The function did not identify the required action type correctly.
The patch adds an `indirect_list` boolean function parameter that is derived from the action type.
Fixes: 72a3dec7126f ("ethdev: add indirect flow list action")
Cc: stable@dpdk.org
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
# 0da12ecb | 28-Feb-2024 | Dariusz Sosnowski <dsosnowski@nvidia.com>
app/testpmd: fix async flow create failure handling
In case of an error when an asynchronous flow create operation was enqueued, test-pmd attempted to enqueue a flow destroy operation of that flow rule. However, this was incorrect because:
- The flow rule index was used to enqueue a flow destroy operation. This flow rule index was not yet initialized, so flow rule number 0 was always destroyed as a result.
- Since rte_flow_async_create() does not return a handle on error, there is no flow rule to destroy.
test-pmd only needs to free internal memory allocated for storing a flow rule. Any flow destroy operation is not needed in this case.
Fixes: ecdc927b99f2 ("app/testpmd: add async flow create/destroy operations")
Cc: stable@dpdk.org
Signed-off-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
# 2d9c7e56 | 29-Feb-2024 | Oleksandr Kolomeiets <okl-plv@napatech.com>
app/testpmd: support updating flow rule actions
"flow update" updates a flow rule specified by a rule ID with a new action list by making a call to "rte_flow_actions_update()":
flow update {port_id} {rule_id} actions {action} [/ {action} [...]] / end [user_id]
Creating, updating and destroying a flow rule:
testpmd> flow create 0 group 1 pattern eth / end actions drop / end
Flow rule #0 created
testpmd> flow update 0 0 actions queue index 1 / end
Flow rule #0 updated with new actions
testpmd> flow destroy 0 rule 0
Flow rule #0 destroyed
Signed-off-by: Oleksandr Kolomeiets <okl-plv@napatech.com>
Reviewed-by: Mykola Kostenok <mko-plv@napatech.com>
Reviewed-by: Christian Koue Muf <ckm@napatech.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
# 99231e48 | 15-Feb-2024 | Gregory Etelson <getelson@nvidia.com>
ethdev: add template table resize
Template table creation API sets the table flow capacity. If the application needs more flows than the table was designed for, the following procedure must be completed:
1. Create a new template table with a larger flow capacity.
2. Re-create existing flows in the new table and delete the flows from the original table.
3. Destroy the original table.
The application cannot always execute that procedure:
* The port may not have sufficient resources to allocate a new table while maintaining the original table.
* The application may not have the existing flow "recipes" to re-create the flows in a new table.
The patch defines a new API that allows application to resize existing template table:
* Resizable template table must be created with the RTE_FLOW_TABLE_SPECIALIZE_RESIZABLE_TABLE bit set.
* Application resizes an existing table with the `rte_flow_template_table_resize()` function call. The table resize procedure updates only the table maximal flow number. Other table attributes are not affected by the table resize.
  ** The table resize procedure must not interrupt existing table flow operations in hardware.
  ** The table resize procedure must not alter flow handles held by the application.
* After `rte_flow_template_table_resize()` returned, application must update table flow rules by calling `rte_flow_async_update_resized()`. The call reconfigures internal flow resources for the new table configuration. The flow update must not interrupt hardware flow operations.
* After table flows were updated, application must call `rte_flow_template_table_resize_complete()`. The function releases PMD resources related to the original table. Application can start new table resize after `rte_flow_template_table_resize_complete()` returned.
Testpmd commands:
* Create a resizable template table:
  flow template_table <port-id> create table_id <tbl-id> resizable \
    [transfer|ingress|egress] group <group-id> \
    rules_number <initial table capacity> \
    pattern_template <pt1> [ pattern_template <pt2> [ ... ]] \
    actions_template <at1> [ actions_template <at2> [ ... ]]
* Resize a table:
  flow template_table <tbl-id> resize table_resize_id <tbl-id> \
    table_resize_rules_num <new table capacity>
* Queue a flow update:
  flow queue <port-id> update_resized <tbl-id> rule <flow-id>
* Complete the table resize:
  flow template_table <port-id> resize_complete table <tbl-id>
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
Acked-by: Thomas Monjalon <thomas@monjalon.net>
# 9733f099 | 13-Feb-2024 | Ori Kam <orika@nvidia.com>
ethdev: add encapsulation hash calculation
During encapsulation of a packet, it is possible to change some outer headers to improve flow distribution. For example, from VXLAN RFC: "It is recommended that the UDP source port number be calculated using a hash of fields from the inner packet -- one example being a hash of the inner Ethernet frame's headers. This is to enable a level of entropy for the ECMP/load-balancing"
The tunnel protocol defines which outer field should hold this hash, but it doesn't define the hash calculation algorithm.
An application that uses flow offloads gets the first few packets (exception path) and then decides to offload the flow. As a result, there are two different paths that a packet from a given flow may take. SW for the first few packets or HW for the rest. When the packet goes through the SW, the SW encapsulates the packet and must use the same hash calculation as the HW will do for the rest of the packets in this flow.
The new function rte_flow_calc_encap_hash can query the hash value from the driver for a given packet as if the packet was passed through the HW.
Testpmd command:
flow hash {port} encap {target field} pattern {item} [/ {item} [...] ] / end
Testpmd example for VXLAN encapsulation:
flow hash 0 encap hash_field_sport pattern ipv4 dst is 7.7.7.7 src is 8.8.8.8 / udp dst is 5678 src is 1234 / end
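For illustration only: the RFC does not mandate a hash algorithm (each device has its own, which is exactly why rte_flow_calc_encap_hash() is needed), but the property the SW path must share with HW, the same inner headers yielding the same outer source port, can be sketched with a toy FNV-1a hash:

```c
#include <assert.h>
#include <stdint.h>

/* Toy entropy hash for the outer UDP source port, FNV-1a over the inner
 * headers. NOT a PMD algorithm; real values must be queried from the
 * driver via rte_flow_calc_encap_hash(). */
static uint16_t encap_sport(const uint8_t *inner_hdr, int len)
{
    uint32_t h = 2166136261u; /* FNV-1a 32-bit offset basis */

    for (int i = 0; i < len; i++) {
        h ^= inner_hdr[i];
        h *= 16777619u;       /* FNV-1a 32-bit prime */
    }
    /* Keep the result in the ephemeral port range 49152-65535. */
    return (uint16_t)(49152u + (h % 16384u));
}
```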
Signed-off-by: Ori Kam <orika@nvidia.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
# 42392190 | 09-Feb-2024 | Ajit Khaparde <ajit.khaparde@broadcom.com>
ethdev: support RSS based on IPv6 flow label
On supporting hardware, the 20-bit Flow Label field in the IPv6 header can be used to perform RSS in the ingress path.
Flow label values can be chosen such that they can be used as part of the input to a hash function used in a load distribution scheme.
Example to configure IPv6 flow label based RSS:
flow create 0 ingress pattern eth / ipv6 / tcp / end actions rss types ipv6-flow-label end / end
Signed-off-by: Ajit Khaparde <ajit.khaparde@broadcom.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
# b3a33138 | 30-Jan-2024 | Dengdui Huang <huangdengdui@huawei.com>
app/testpmd: fix crash in multi-process forwarding
In a multi-process scenario, each process creates flows based on the number of queues. When nb-cores is greater than 1, multiple cores may use the same queue to forward packets, like:
dpdk-testpmd -a BDF --proc-type=auto -- -i --rxq=4 --txq=4 --nb-cores=2 --num-procs=2 --proc-id=0
testpmd> start
mac packet forwarding - ports=1 - cores=2 - streams=4 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 2 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
Logical Core 3 (socket 0) forwards packets on 2 streams:
  RX P=0/Q=0 (socket 0) -> TX P=0/Q=0 (socket 0) peer=02:00:00:00:00:00
  RX P=0/Q=1 (socket 0) -> TX P=0/Q=1 (socket 0) peer=02:00:00:00:00:00
After this commit, the result will be:
dpdk-testpmd -a BDF --proc-type=auto -- -i --rxq=4 --txq=4 --nb-cores=2 --num-procs=2 --proc-id=0
testpmd> start
io packet forwarding - ports=1 - cores=2 - streams=2 - NUMA support enabled, MP allocation mode: native
Logical Core 2 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=0 (socket 2) -> TX P=0/Q=0 (socket 2) peer=02:00:00:00:00:00
Logical Core 3 (socket 0) forwards packets on 1 streams:
  RX P=0/Q=1 (socket 2) -> TX P=0/Q=1 (socket 2) peer=02:00:00:00:00:00
Fixes: a550baf24af9 ("app/testpmd: support multi-process")
Cc: stable@dpdk.org
Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
# 75c7849a | 03-Nov-2023 | Huisong Li <lihuisong@huawei.com>
ethdev: add maximum Rx buffer size
The "min_rx_bufsize" field in struct rte_eth_dev_info stands for the minimum Rx buffer size supported by hardware. Some engines also have a maximum Rx buffer specification, e.g. hns3 and i40e. If the mbuf data room size in the mempool is greater than the maximum Rx buffer size per descriptor supported by HW, the data size usable in each mbuf is only the maximum Rx buffer size instead of the whole data room size.
So introduce the maximum Rx buffer size, which is not enforced but only reported, so that the user can avoid memory waste. In addition, fix the comment for "min_rx_bufsize" to make it more specific.
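The effect on the usable data size can be sketched as a simple min() (field names here are illustrative, not the actual ethdev fields):

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative: the usable data size per mbuf is capped by the HW's
 * per-descriptor maximum Rx buffer size, even if the mempool's data
 * room is larger; the surplus room is simply wasted. */
static uint32_t usable_rx_bufsize(uint32_t mbuf_data_room,
                                  uint32_t max_rx_bufsize)
{
    return mbuf_data_room < max_rx_bufsize ? mbuf_data_room
                                           : max_rx_bufsize;
}
```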
Signed-off-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
# 92628e2b | 02-Nov-2023 | Jie Hai <haijie1@huawei.com>
ethdev: get RSS algorithm names
This patch adds a new API, rte_eth_dev_rss_algo_name(), to get the name of an RSS algorithm, and documents it.
Example:
testpmd> show port 0 rss-hash algorithm
RSS algorithm:
  toeplitz
Signed-off-by: Jie Hai <haijie1@huawei.com>
Acked-by: Huisong Li <lihuisong@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Ferruh Yigit <ferruh.yigit@amd.com>
# a080d5bf | 08-Aug-2023 | Gregory Etelson <getelson@nvidia.com>
ethdev: remove init color from meter mark action
The indirect list API defines 2 types of action update:
• Action mutable context is always shared between all flows that referenced the indirect actions list handle. Action mutable context can be changed by explicit invocation of the indirect handle update function.
• Flow mutable context is private to a flow. Flow mutable context can be updated by indirect list handle flow rule configuration.
`METER_MARK::init_color` is a flow resource. The current flows implementation placed `init_color` in `rte_flow_action_meter_mark`, making it an action-level resource.
The patch removes `init_color` from the `rte_flow_action_meter_mark` structure.
API change. The patch removed:
• struct rte_flow_action_meter_mark::init_color
• struct rte_flow_update_meter_mark::init_color_valid
Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
# ffe18b05 | 10-Oct-2023 | Ori Kam <orika@nvidia.com>
ethdev: add calculate hash function
rte_flow supports insert by index table, see commit 60261a005dff ("ethdev: add flow template table insertion type").
Using the above table, the application can create rules that are based on hash. For example, the application can create the following logic in order to implement load balancing:
1. Create an insert-by-index table with 2 rules that hashes based on dmac.
2. Insert at index 0 a rule that sends the traffic to port A.
3. Insert at index 1 a rule that sends the traffic to port B.
Let's also assume that before this table, there is a 5 tuple match table that jumps to the above table.
So each packet that matches one of the 5 tuple rules is RSSed to port A or B, based on dmac hash.
The issue arises when there is a miss on the 5-tuple table, which can happen when the packet is the first packet of this flow, a fragmented packet, or for any other reason. In this case, the application must calculate what hash would be computed by the HW so it can send the packet to the correct port.
This new API allows applications to calculate the hash value of a given packet for a given table.
Signed-off-by: Ori Kam <orika@nvidia.com>
# ef8bd7d0 | 08-Oct-2023 | Dengdui Huang <huangdengdui@huawei.com>
app/testpmd: add command to flush multicast MAC addresses
Add a command to flush all multicast MAC addresses.
Usage:
  mcast_addr flush <port_id> : flush all multicast MAC addresses on port_id
Signed-off-by: Dengdui Huang <huangdengdui@huawei.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Acked-by: Ferruh Yigit <ferruh.yigit@amd.com>
# 8a26a658 | 20-Sep-2023 | Tomer Shmilovich <tshmilovich@nvidia.com>
ethdev: set flow group miss actions
Introduce new group set miss actions API: rte_flow_group_set_miss_actions().
A group's miss actions are a set of actions to be performed in case of a miss on a group, meaning a packet didn't hit any rules in the group. This API function allows a user to set a group's miss actions.
Add testpmd CLI interface for the group set miss actions API:
flow group 0 group_id 1 ingress set_miss_actions jump group 3 / end
flow group 0 group_id 1 ingress set_miss_actions end
Signed-off-by: Tomer Shmilovich <tshmilovich@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>
# 3d4e27fd | 25-Aug-2023 | David Marchand <david.marchand@redhat.com>
use abstracted bit count functions
Now that DPDK provides such bit count functions, make use of them.
This patch was prepared with a "brutal" commandline:
$ old=__builtin_clzll; new=rte_clz64; git grep -lw $old :^lib/eal/include/rte_bitops.h | xargs sed -i -e "s#\<$old\>#$new#g"
$ old=__builtin_clz; new=rte_clz32; git grep -lw $old :^lib/eal/include/rte_bitops.h | xargs sed -i -e "s#\<$old\>#$new#g"
$ old=__builtin_ctzll; new=rte_ctz64; git grep -lw $old :^lib/eal/include/rte_bitops.h | xargs sed -i -e "s#\<$old\>#$new#g"
$ old=__builtin_ctz; new=rte_ctz32; git grep -lw $old :^lib/eal/include/rte_bitops.h | xargs sed -i -e "s#\<$old\>#$new#g"
$ old=__builtin_popcountll; new=rte_popcount64; git grep -lw $old :^lib/eal/include/rte_bitops.h | xargs sed -i -e "s#\<$old\>#$new#g"
$ old=__builtin_popcount; new=rte_popcount32; git grep -lw $old :^lib/eal/include/rte_bitops.h | xargs sed -i -e "s#\<$old\>#$new#g"
Then inclusion of rte_bitops.h was added where necessary.
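For reference, portable loop versions of two of the counts being wrapped (illustrative sketches, not the rte_bitops implementations):

```c
#include <assert.h>
#include <stdint.h>

/* Count leading zeros of a 32-bit value; rte_clz32() wraps
 * __builtin_clz, which is undefined for 0 -- this loop returns 32. */
static unsigned clz32(uint32_t v)
{
    unsigned n = 0;

    for (uint32_t m = UINT32_C(1) << 31; m != 0 && !(v & m); m >>= 1)
        n++;
    return n;
}

/* Population count (number of set bits); rte_popcount32() wraps
 * __builtin_popcount. Each iteration clears the lowest set bit. */
static unsigned popcount32(uint32_t v)
{
    unsigned n = 0;

    for (; v != 0; v &= v - 1)
        n++;
    return n;
}
```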
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Long Li <longli@microsoft.com>
# 5ed39609 | 18-Jul-2023 | Alexander Kozyrev <akozyrev@nvidia.com>
app/testpmd: fix meter mark handle update
The indirect action handle update for the METER_MARK action was implemented only for the async flow API. Allow updating the METER_MARK parameters via the old sync method.
Fixes: 9c4a0c1859a3 ("ethdev: add meter color mark flow action")
Cc: stable@dpdk.org
Signed-off-by: Alexander Kozyrev <akozyrev@nvidia.com>
Acked-by: Ori Kam <orika@nvidia.com>