/* SPDX-License-Identifier: BSD-3-Clause
 * Copyright(c) 2010-2017 Intel Corporation
 */

#ifndef _RTE_ETHDEV_H_
#define _RTE_ETHDEV_H_

/**
 * @file
 *
 * RTE Ethernet Device API
 *
 * The Ethernet Device API is composed of two parts:
 *
 * - The application-oriented Ethernet API that includes functions to set up
 *   an Ethernet device (configure it, set up its Rx and Tx queues and start it),
 *   to get its MAC address, the speed and the status of its physical link,
 *   to receive and to transmit packets, and so on.
 *
 * - The driver-oriented Ethernet API that exports functions allowing
 *   an Ethernet Poll Mode Driver (PMD) to allocate an Ethernet device instance,
 *   create memzones for HW rings, process registered callbacks, and so on.
 *   PMDs should include ethdev_driver.h instead of this header.
 *
 * By default, all the functions of the Ethernet Device API exported by a PMD
 * are lock-free functions that are assumed not to be invoked in parallel on
 * different logical cores to work on the same target object. For instance,
 * the receive function of a PMD cannot be invoked in parallel on two logical
 * cores to poll the same Rx queue [of the same port]. Of course, this function
 * can be invoked in parallel by different logical cores on different Rx queues.
 * It is the responsibility of the upper-level application to enforce this rule.
 *
 * If needed, parallel accesses by multiple logical cores to shared queues
 * shall be explicitly protected by dedicated inline lock-aware functions
 * built on top of their corresponding lock-free functions of the PMD API.
 *
 * In all functions of the Ethernet API, the Ethernet device is
 * designated by an integer >= 0 named the device port identifier.
 *
 * At the Ethernet driver level, Ethernet devices are represented by a generic
 * data structure of type *rte_eth_dev*.
 *
 * Ethernet devices are dynamically registered during the PCI probing phase
 * performed at EAL initialization time.
 * When an Ethernet device is being probed, an *rte_eth_dev* structure and
 * a new port identifier are allocated for that device. Then, the eth_dev_init()
 * function supplied by the Ethernet driver matching the probed PCI
 * device is invoked to properly initialize the device.
 *
 * The role of the device init function consists of resetting the hardware,
 * checking access to Non-Volatile Memory (NVM), reading the MAC address
 * from NVM, etc.
 *
 * If the device init operation is successful, the correspondence between
 * the port identifier assigned to the new device and its associated
 * *rte_eth_dev* structure is effectively registered.
 * Otherwise, both the *rte_eth_dev* structure and the port identifier are
 * freed.
 *
 * The functions exported by the application Ethernet API to set up a device
 * designated by its port identifier must be invoked in the following order
 * (a minimal call sequence is sketched further below):
 * - rte_eth_dev_configure()
 * - rte_eth_tx_queue_setup()
 * - rte_eth_rx_queue_setup()
 * - rte_eth_dev_start()
 *
 * Then, the network application can invoke, in any order, the functions
 * exported by the Ethernet API to get the MAC address of a given device, to
 * get the speed and the status of a device physical link, to receive/transmit
 * [bursts of] packets, and so on.
 *
 * If the application wants to change the configuration (i.e. call
 * rte_eth_dev_configure(), rte_eth_tx_queue_setup(), or
 * rte_eth_rx_queue_setup()), it must call rte_eth_dev_stop() first to stop the
 * device and then do the reconfiguration before calling rte_eth_dev_start()
 * again. The transmit and receive functions should not be invoked when the
 * device or the queue is stopped.
 *
 * Please note that some configuration is not stored between calls to
 * rte_eth_dev_stop()/rte_eth_dev_start(). The following configuration will
 * be retained:
 *
 * - MTU
 * - flow control settings
 * - receive mode configuration (promiscuous mode, all-multicast mode,
 *   hardware checksum mode, RSS/VMDq settings etc.)
 * - VLAN filtering configuration
 * - default MAC address
 * - MAC addresses supplied to MAC address array
 * - flow director filtering mode (but not filtering rules)
 * - NIC queue statistics mappings
 *
 * The following configuration may be retained or not
 * depending on the device capabilities:
 *
 * - flow rules
 * - flow-related shared objects, e.g. indirect actions
 *
 * Any other configuration will not be stored and will need to be re-entered
 * before a call to rte_eth_dev_start().
 *
 * Finally, a network application can close an Ethernet device by invoking the
 * rte_eth_dev_close() function.
 *
 * Each function of the application Ethernet API invokes a specific function
 * of the PMD that controls the target device designated by its port
 * identifier.
 * For this purpose, all device-specific functions of an Ethernet driver are
 * supplied through a set of pointers contained in a generic structure of type
 * *eth_dev_ops*.
 * The address of the *eth_dev_ops* structure is stored in the *rte_eth_dev*
 * structure by the device init function of the Ethernet driver, which is
 * invoked during the PCI probing phase, as explained earlier.
 *
 * In other words, each function of the Ethernet API simply retrieves the
 * *rte_eth_dev* structure associated with the device port identifier and
 * performs an indirect invocation of the corresponding driver function
 * supplied in the *eth_dev_ops* structure of the *rte_eth_dev* structure.
 *
 * For performance reasons, the addresses of the burst-oriented Rx and Tx
 * functions of the Ethernet driver are not contained in the *eth_dev_ops*
 * structure. Instead, they are directly stored at the beginning of the
 * *rte_eth_dev* structure to avoid an extra indirect memory access during
 * their invocation.
 *
 * RTE Ethernet device drivers do not use interrupts for transmitting or
 * receiving. Instead, Ethernet drivers export Poll-Mode receive and transmit
 * functions to applications.
 * Both receive and transmit functions are packet-burst oriented to minimize
 * their cost per packet through the following optimizations:
 *
 * - Sharing among multiple packets the incompressible cost of the
 *   invocation of receive/transmit functions.
 *
 * - Enabling receive/transmit functions to take advantage of burst-oriented
 *   hardware features (L1 cache, prefetch instructions, NIC head/tail
 *   registers) to minimize the number of CPU cycles per packet, for instance,
 *   by avoiding useless read memory accesses to ring descriptors, or by
 *   systematically using arrays of pointers that exactly fit L1 cache line
 *   boundaries and sizes.
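 *
 * As an illustrative sketch only (not a normative example), the call order
 * described above typically looks as follows for a hypothetical port 0 with
 * one Rx and one Tx queue. Error handling is omitted, default queue
 * configurations are used, and the mbuf pool is assumed to be created
 * beforehand (e.g. with rte_pktmbuf_pool_create()):
 *
 * @code{.c}
 * struct rte_eth_conf port_conf = {0};  // default device configuration
 * struct rte_mempool *mb_pool;          // assumed to be created beforehand
 * uint16_t port_id = 0;                 // hypothetical port identifier
 *
 * rte_eth_dev_configure(port_id, 1, 1, &port_conf);  // 1 Rx and 1 Tx queue
 * rte_eth_tx_queue_setup(port_id, 0, 512, SOCKET_ID_ANY, NULL);
 * rte_eth_rx_queue_setup(port_id, 0, 512, SOCKET_ID_ANY, NULL, mb_pool);
 * rte_eth_dev_start(port_id);
 *
 * // After start, packets can be polled in bursts on Rx queue 0:
 * struct rte_mbuf *pkts[32];
 * uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, pkts, 32);
 * @endcode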
 *
 * The burst-oriented receive function does not provide any error notification,
 * to avoid the corresponding overhead. As a hint, the upper-level application
 * might check the status of the device link when the receive function of the
 * driver has systematically returned 0 for a given number of tries.
 */

#ifdef __cplusplus
extern "C" {
#endif

#include <stdint.h>

/* Use this macro to check if LRO API is supported */
#define RTE_ETHDEV_HAS_LRO_SUPPORT

/* Alias RTE_LIBRTE_ETHDEV_DEBUG for backward compatibility. */
#ifdef RTE_LIBRTE_ETHDEV_DEBUG
#define RTE_ETHDEV_DEBUG_RX
#define RTE_ETHDEV_DEBUG_TX
#endif

#include <rte_compat.h>
#include <rte_log.h>
#include <rte_interrupts.h>
#include <rte_dev.h>
#include <rte_devargs.h>
#include <rte_bitops.h>
#include <rte_errno.h>
#include <rte_common.h>
#include <rte_config.h>
#include <rte_power_intrinsics.h>

#include "rte_ethdev_trace_fp.h"
#include "rte_dev_info.h"

extern int rte_eth_dev_logtype;

#define RTE_ETHDEV_LOG(level, ...) \
	rte_log(RTE_LOG_ ## level, rte_eth_dev_logtype, "" __VA_ARGS__)

struct rte_mbuf;

/**
 * Initializes a device iterator.
 *
 * This iterator allows accessing a list of devices matching some devargs.
 *
 * @param iter
 *   Device iterator handle initialized by the function.
 *   The fields bus_str and cls_str might be dynamically allocated,
 *   and could be freed by calling rte_eth_iterator_cleanup().
 *
 * @param devargs
 *   Device description string.
 *
 * @return
 *   0 on successful initialization, negative otherwise.
 */
int rte_eth_iterator_init(struct rte_dev_iterator *iter, const char *devargs);

/**
 * Iterates on devices with devargs filter.
 * The ownership is not checked.
 *
 * The next port ID is returned, and the iterator is updated.
 *
 * @param iter
 *   Device iterator handle initialized by rte_eth_iterator_init().
 *   The fields bus_str and cls_str might be freed (through an internal call
 *   to rte_eth_iterator_cleanup()) when no more port is found.
 *
 * @return
 *   A port ID if found, RTE_MAX_ETHPORTS otherwise.
 */
uint16_t rte_eth_iterator_next(struct rte_dev_iterator *iter);

/**
 * Free some allocated fields of the iterator.
 *
 * This function is automatically called by rte_eth_iterator_next()
 * on the last iteration (i.e. when no more matching port is found).
 *
 * It is safe to call this function twice; the second call will do nothing more.
 *
 * @param iter
 *   Device iterator handle initialized by rte_eth_iterator_init().
 *   The fields bus_str and cls_str are freed if needed.
 */
void rte_eth_iterator_cleanup(struct rte_dev_iterator *iter);

/**
 * Macro to iterate over all ethdev ports matching some devargs.
 *
 * If the loop is broken before reaching its end,
 * the function rte_eth_iterator_cleanup() must be called.
 *
 * @param id
 *   Iterated port ID of type uint16_t.
 * @param devargs
 *   Device parameters input as string of type char*.
 * @param iter
 *   Iterator handle of type struct rte_dev_iterator, used internally.
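 *
 * An illustrative sketch of typical usage (the devargs string "bus=pci" is
 * only an example of a matching filter; the loop body is application-specific):
 *
 * @code{.c}
 * struct rte_dev_iterator iterator;
 * uint16_t port_id;
 *
 * RTE_ETH_FOREACH_MATCHING_DEV(port_id, "bus=pci", &iterator) {
 *     // ... use port_id ...
 *     // If breaking out of the loop early, call
 *     // rte_eth_iterator_cleanup(&iterator) first.
 * }
 * @endcode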
 */
#define RTE_ETH_FOREACH_MATCHING_DEV(id, devargs, iter) \
	for (rte_eth_iterator_init(iter, devargs), \
	     id = rte_eth_iterator_next(iter); \
	     id != RTE_MAX_ETHPORTS; \
	     id = rte_eth_iterator_next(iter))

/**
 * A structure used to retrieve statistics for an Ethernet port.
 * Not all statistics fields in struct rte_eth_stats are supported
 * by every type of network interface card (NIC). If a statistics
 * field is not supported, its value is 0.
 * All byte-related statistics exclude the Ethernet FCS regardless
 * of whether these bytes have been delivered to the application
 * (see RTE_ETH_RX_OFFLOAD_KEEP_CRC).
 */
struct rte_eth_stats {
	uint64_t ipackets;  /**< Total number of successfully received packets. */
	uint64_t opackets;  /**< Total number of successfully transmitted packets. */
	uint64_t ibytes;    /**< Total number of successfully received bytes. */
	uint64_t obytes;    /**< Total number of successfully transmitted bytes. */
	/**
	 * Total number of Rx packets dropped by the HW
	 * because there are no available buffers (i.e. Rx queues are full).
	 */
	uint64_t imissed;
	uint64_t ierrors;   /**< Total number of erroneous received packets. */
	uint64_t oerrors;   /**< Total number of failed transmitted packets. */
	uint64_t rx_nombuf; /**< Total number of Rx mbuf allocation failures. */
	/* Queue stats are limited to max 256 queues */
	/** Total number of queue Rx packets. */
	uint64_t q_ipackets[RTE_ETHDEV_QUEUE_STAT_CNTRS];
	/** Total number of queue Tx packets. */
	uint64_t q_opackets[RTE_ETHDEV_QUEUE_STAT_CNTRS];
	/** Total number of successfully received queue bytes. */
	uint64_t q_ibytes[RTE_ETHDEV_QUEUE_STAT_CNTRS];
	/** Total number of successfully transmitted queue bytes. */
	uint64_t q_obytes[RTE_ETHDEV_QUEUE_STAT_CNTRS];
	/** Total number of queue packets received that are dropped.
*/ 283 uint64_t q_errors[RTE_ETHDEV_QUEUE_STAT_CNTRS]; 284 }; 285 286 /**@{@name Link speed capabilities 287 * Device supported speeds bitmap flags 288 */ 289 #define RTE_ETH_LINK_SPEED_AUTONEG 0 /**< Autonegotiate (all speeds) */ 290 #define RTE_ETH_LINK_SPEED_FIXED RTE_BIT32(0) /**< Disable autoneg (fixed speed) */ 291 #define RTE_ETH_LINK_SPEED_10M_HD RTE_BIT32(1) /**< 10 Mbps half-duplex */ 292 #define RTE_ETH_LINK_SPEED_10M RTE_BIT32(2) /**< 10 Mbps full-duplex */ 293 #define RTE_ETH_LINK_SPEED_100M_HD RTE_BIT32(3) /**< 100 Mbps half-duplex */ 294 #define RTE_ETH_LINK_SPEED_100M RTE_BIT32(4) /**< 100 Mbps full-duplex */ 295 #define RTE_ETH_LINK_SPEED_1G RTE_BIT32(5) /**< 1 Gbps */ 296 #define RTE_ETH_LINK_SPEED_2_5G RTE_BIT32(6) /**< 2.5 Gbps */ 297 #define RTE_ETH_LINK_SPEED_5G RTE_BIT32(7) /**< 5 Gbps */ 298 #define RTE_ETH_LINK_SPEED_10G RTE_BIT32(8) /**< 10 Gbps */ 299 #define RTE_ETH_LINK_SPEED_20G RTE_BIT32(9) /**< 20 Gbps */ 300 #define RTE_ETH_LINK_SPEED_25G RTE_BIT32(10) /**< 25 Gbps */ 301 #define RTE_ETH_LINK_SPEED_40G RTE_BIT32(11) /**< 40 Gbps */ 302 #define RTE_ETH_LINK_SPEED_50G RTE_BIT32(12) /**< 50 Gbps */ 303 #define RTE_ETH_LINK_SPEED_56G RTE_BIT32(13) /**< 56 Gbps */ 304 #define RTE_ETH_LINK_SPEED_100G RTE_BIT32(14) /**< 100 Gbps */ 305 #define RTE_ETH_LINK_SPEED_200G RTE_BIT32(15) /**< 200 Gbps */ 306 /**@}*/ 307 308 #define ETH_LINK_SPEED_AUTONEG RTE_DEPRECATED(ETH_LINK_SPEED_AUTONEG) RTE_ETH_LINK_SPEED_AUTONEG 309 #define ETH_LINK_SPEED_FIXED RTE_DEPRECATED(ETH_LINK_SPEED_FIXED) RTE_ETH_LINK_SPEED_FIXED 310 #define ETH_LINK_SPEED_10M_HD RTE_DEPRECATED(ETH_LINK_SPEED_10M_HD) RTE_ETH_LINK_SPEED_10M_HD 311 #define ETH_LINK_SPEED_10M RTE_DEPRECATED(ETH_LINK_SPEED_10M) RTE_ETH_LINK_SPEED_10M 312 #define ETH_LINK_SPEED_100M_HD RTE_DEPRECATED(ETH_LINK_SPEED_100M_HD) RTE_ETH_LINK_SPEED_100M_HD 313 #define ETH_LINK_SPEED_100M RTE_DEPRECATED(ETH_LINK_SPEED_100M) RTE_ETH_LINK_SPEED_100M 314 #define ETH_LINK_SPEED_1G RTE_DEPRECATED(ETH_LINK_SPEED_1G) RTE_ETH_LINK_SPEED_1G 315 #define ETH_LINK_SPEED_2_5G RTE_DEPRECATED(ETH_LINK_SPEED_2_5G) RTE_ETH_LINK_SPEED_2_5G 316 #define ETH_LINK_SPEED_5G RTE_DEPRECATED(ETH_LINK_SPEED_5G) RTE_ETH_LINK_SPEED_5G 317 #define ETH_LINK_SPEED_10G RTE_DEPRECATED(ETH_LINK_SPEED_10G) RTE_ETH_LINK_SPEED_10G 318 #define ETH_LINK_SPEED_20G RTE_DEPRECATED(ETH_LINK_SPEED_20G) RTE_ETH_LINK_SPEED_20G 319 #define ETH_LINK_SPEED_25G RTE_DEPRECATED(ETH_LINK_SPEED_25G) RTE_ETH_LINK_SPEED_25G 320 #define ETH_LINK_SPEED_40G RTE_DEPRECATED(ETH_LINK_SPEED_40G) RTE_ETH_LINK_SPEED_40G 321 #define ETH_LINK_SPEED_50G RTE_DEPRECATED(ETH_LINK_SPEED_50G) RTE_ETH_LINK_SPEED_50G 322 #define ETH_LINK_SPEED_56G RTE_DEPRECATED(ETH_LINK_SPEED_56G) RTE_ETH_LINK_SPEED_56G 323 #define ETH_LINK_SPEED_100G RTE_DEPRECATED(ETH_LINK_SPEED_100G) RTE_ETH_LINK_SPEED_100G 324 #define ETH_LINK_SPEED_200G RTE_DEPRECATED(ETH_LINK_SPEED_200G) RTE_ETH_LINK_SPEED_200G 325 326 /**@{@name Link speed 327 * Ethernet numeric link speeds in Mbps 328 */ 329 #define RTE_ETH_SPEED_NUM_NONE 0 /**< Not defined */ 330 #define RTE_ETH_SPEED_NUM_10M 10 /**< 10 Mbps */ 331 #define RTE_ETH_SPEED_NUM_100M 100 /**< 100 Mbps */ 332 #define RTE_ETH_SPEED_NUM_1G 1000 /**< 1 Gbps */ 333 #define RTE_ETH_SPEED_NUM_2_5G 2500 /**< 2.5 Gbps */ 334 #define RTE_ETH_SPEED_NUM_5G 5000 /**< 5 Gbps */ 335 #define RTE_ETH_SPEED_NUM_10G 10000 /**< 10 Gbps */ 336 #define RTE_ETH_SPEED_NUM_20G 20000 /**< 20 Gbps */ 337 #define RTE_ETH_SPEED_NUM_25G 25000 /**< 25 Gbps */ 338 #define RTE_ETH_SPEED_NUM_40G 40000 /**< 40 Gbps */ 
339 #define RTE_ETH_SPEED_NUM_50G 50000 /**< 50 Gbps */ 340 #define RTE_ETH_SPEED_NUM_56G 56000 /**< 56 Gbps */ 341 #define RTE_ETH_SPEED_NUM_100G 100000 /**< 100 Gbps */ 342 #define RTE_ETH_SPEED_NUM_200G 200000 /**< 200 Gbps */ 343 #define RTE_ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */ 344 /**@}*/ 345 346 #define ETH_SPEED_NUM_NONE RTE_DEPRECATED(ETH_SPEED_NUM_NONE) RTE_ETH_SPEED_NUM_NONE 347 #define ETH_SPEED_NUM_10M RTE_DEPRECATED(ETH_SPEED_NUM_10M) RTE_ETH_SPEED_NUM_10M 348 #define ETH_SPEED_NUM_100M RTE_DEPRECATED(ETH_SPEED_NUM_100M) RTE_ETH_SPEED_NUM_100M 349 #define ETH_SPEED_NUM_1G RTE_DEPRECATED(ETH_SPEED_NUM_1G) RTE_ETH_SPEED_NUM_1G 350 #define ETH_SPEED_NUM_2_5G RTE_DEPRECATED(ETH_SPEED_NUM_2_5G) RTE_ETH_SPEED_NUM_2_5G 351 #define ETH_SPEED_NUM_5G RTE_DEPRECATED(ETH_SPEED_NUM_5G) RTE_ETH_SPEED_NUM_5G 352 #define ETH_SPEED_NUM_10G RTE_DEPRECATED(ETH_SPEED_NUM_10G) RTE_ETH_SPEED_NUM_10G 353 #define ETH_SPEED_NUM_20G RTE_DEPRECATED(ETH_SPEED_NUM_20G) RTE_ETH_SPEED_NUM_20G 354 #define ETH_SPEED_NUM_25G RTE_DEPRECATED(ETH_SPEED_NUM_25G) RTE_ETH_SPEED_NUM_25G 355 #define ETH_SPEED_NUM_40G RTE_DEPRECATED(ETH_SPEED_NUM_40G) RTE_ETH_SPEED_NUM_40G 356 #define ETH_SPEED_NUM_50G RTE_DEPRECATED(ETH_SPEED_NUM_50G) RTE_ETH_SPEED_NUM_50G 357 #define ETH_SPEED_NUM_56G RTE_DEPRECATED(ETH_SPEED_NUM_56G) RTE_ETH_SPEED_NUM_56G 358 #define ETH_SPEED_NUM_100G RTE_DEPRECATED(ETH_SPEED_NUM_100G) RTE_ETH_SPEED_NUM_100G 359 #define ETH_SPEED_NUM_200G RTE_DEPRECATED(ETH_SPEED_NUM_200G) RTE_ETH_SPEED_NUM_200G 360 #define ETH_SPEED_NUM_UNKNOWN RTE_DEPRECATED(ETH_SPEED_NUM_UNKNOWN) RTE_ETH_SPEED_NUM_UNKNOWN 361 362 /** 363 * A structure used to retrieve link-level information of an Ethernet port. 364 */ 365 __extension__ 366 struct rte_eth_link { 367 uint32_t link_speed; /**< RTE_ETH_SPEED_NUM_ */ 368 uint16_t link_duplex : 1; /**< RTE_ETH_LINK_[HALF/FULL]_DUPLEX */ 369 uint16_t link_autoneg : 1; /**< RTE_ETH_LINK_[AUTONEG/FIXED] */ 370 uint16_t link_status : 1; /**< RTE_ETH_LINK_[DOWN/UP] */ 371 } __rte_aligned(8); /**< aligned for atomic64 read/write */ 372 373 /**@{@name Link negotiation 374 * Constants used in link management. 375 */ 376 #define RTE_ETH_LINK_HALF_DUPLEX 0 /**< Half-duplex connection (see link_duplex). */ 377 #define RTE_ETH_LINK_FULL_DUPLEX 1 /**< Full-duplex connection (see link_duplex). */ 378 #define RTE_ETH_LINK_DOWN 0 /**< Link is down (see link_status). */ 379 #define RTE_ETH_LINK_UP 1 /**< Link is up (see link_status). */ 380 #define RTE_ETH_LINK_FIXED 0 /**< No autonegotiation (see link_autoneg). */ 381 #define RTE_ETH_LINK_AUTONEG 1 /**< Autonegotiated (see link_autoneg). */ 382 #define RTE_ETH_LINK_MAX_STR_LEN 40 /**< Max length of default link string. */ 383 /**@}*/ 384 385 #define ETH_LINK_HALF_DUPLEX RTE_DEPRECATED(ETH_LINK_HALF_DUPLEX) RTE_ETH_LINK_HALF_DUPLEX 386 #define ETH_LINK_FULL_DUPLEX RTE_DEPRECATED(ETH_LINK_FULL_DUPLEX) RTE_ETH_LINK_FULL_DUPLEX 387 #define ETH_LINK_DOWN RTE_DEPRECATED(ETH_LINK_DOWN) RTE_ETH_LINK_DOWN 388 #define ETH_LINK_UP RTE_DEPRECATED(ETH_LINK_UP) RTE_ETH_LINK_UP 389 #define ETH_LINK_FIXED RTE_DEPRECATED(ETH_LINK_FIXED) RTE_ETH_LINK_FIXED 390 #define ETH_LINK_AUTONEG RTE_DEPRECATED(ETH_LINK_AUTONEG) RTE_ETH_LINK_AUTONEG 391 392 /** 393 * A structure used to configure the ring threshold registers of an Rx/Tx 394 * queue for an Ethernet port. 395 */ 396 struct rte_eth_thresh { 397 uint8_t pthresh; /**< Ring prefetch threshold. */ 398 uint8_t hthresh; /**< Ring host threshold. */ 399 uint8_t wthresh; /**< Ring writeback threshold. 
*/ 400 }; 401 402 /**@{@name Multi-queue mode 403 * @see rte_eth_conf.rxmode.mq_mode. 404 */ 405 #define RTE_ETH_MQ_RX_RSS_FLAG RTE_BIT32(0) /**< Enable RSS. @see rte_eth_rss_conf */ 406 #define RTE_ETH_MQ_RX_DCB_FLAG RTE_BIT32(1) /**< Enable DCB. */ 407 #define RTE_ETH_MQ_RX_VMDQ_FLAG RTE_BIT32(2) /**< Enable VMDq. */ 408 /**@}*/ 409 410 #define ETH_MQ_RX_RSS_FLAG RTE_DEPRECATED(ETH_MQ_RX_RSS_FLAG) RTE_ETH_MQ_RX_RSS_FLAG 411 #define ETH_MQ_RX_DCB_FLAG RTE_DEPRECATED(ETH_MQ_RX_DCB_FLAG) RTE_ETH_MQ_RX_DCB_FLAG 412 #define ETH_MQ_RX_VMDQ_FLAG RTE_DEPRECATED(ETH_MQ_RX_VMDQ_FLAG) RTE_ETH_MQ_RX_VMDQ_FLAG 413 414 /** 415 * A set of values to identify what method is to be used to route 416 * packets to multiple queues. 417 */ 418 enum rte_eth_rx_mq_mode { 419 /** None of DCB, RSS or VMDq mode */ 420 RTE_ETH_MQ_RX_NONE = 0, 421 422 /** For Rx side, only RSS is on */ 423 RTE_ETH_MQ_RX_RSS = RTE_ETH_MQ_RX_RSS_FLAG, 424 /** For Rx side,only DCB is on. */ 425 RTE_ETH_MQ_RX_DCB = RTE_ETH_MQ_RX_DCB_FLAG, 426 /** Both DCB and RSS enable */ 427 RTE_ETH_MQ_RX_DCB_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG, 428 429 /** Only VMDq, no RSS nor DCB */ 430 RTE_ETH_MQ_RX_VMDQ_ONLY = RTE_ETH_MQ_RX_VMDQ_FLAG, 431 /** RSS mode with VMDq */ 432 RTE_ETH_MQ_RX_VMDQ_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_VMDQ_FLAG, 433 /** Use VMDq+DCB to route traffic to queues */ 434 RTE_ETH_MQ_RX_VMDQ_DCB = RTE_ETH_MQ_RX_VMDQ_FLAG | RTE_ETH_MQ_RX_DCB_FLAG, 435 /** Enable both VMDq and DCB in VMDq */ 436 RTE_ETH_MQ_RX_VMDQ_DCB_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG | 437 RTE_ETH_MQ_RX_VMDQ_FLAG, 438 }; 439 440 #define ETH_MQ_RX_NONE RTE_DEPRECATED(ETH_MQ_RX_NONE) RTE_ETH_MQ_RX_NONE 441 #define ETH_MQ_RX_RSS RTE_DEPRECATED(ETH_MQ_RX_RSS) RTE_ETH_MQ_RX_RSS 442 #define ETH_MQ_RX_DCB RTE_DEPRECATED(ETH_MQ_RX_DCB) RTE_ETH_MQ_RX_DCB 443 #define ETH_MQ_RX_DCB_RSS RTE_DEPRECATED(ETH_MQ_RX_DCB_RSS) RTE_ETH_MQ_RX_DCB_RSS 444 #define ETH_MQ_RX_VMDQ_ONLY RTE_DEPRECATED(ETH_MQ_RX_VMDQ_ONLY) RTE_ETH_MQ_RX_VMDQ_ONLY 445 #define ETH_MQ_RX_VMDQ_RSS RTE_DEPRECATED(ETH_MQ_RX_VMDQ_RSS) RTE_ETH_MQ_RX_VMDQ_RSS 446 #define ETH_MQ_RX_VMDQ_DCB RTE_DEPRECATED(ETH_MQ_RX_VMDQ_DCB) RTE_ETH_MQ_RX_VMDQ_DCB 447 #define ETH_MQ_RX_VMDQ_DCB_RSS RTE_DEPRECATED(ETH_MQ_RX_VMDQ_DCB_RSS) RTE_ETH_MQ_RX_VMDQ_DCB_RSS 448 449 /** 450 * A set of values to identify what method is to be used to transmit 451 * packets using multi-TCs. 452 */ 453 enum rte_eth_tx_mq_mode { 454 RTE_ETH_MQ_TX_NONE = 0, /**< It is in neither DCB nor VT mode. */ 455 RTE_ETH_MQ_TX_DCB, /**< For Tx side,only DCB is on. */ 456 RTE_ETH_MQ_TX_VMDQ_DCB, /**< For Tx side,both DCB and VT is on. */ 457 RTE_ETH_MQ_TX_VMDQ_ONLY, /**< Only VT on, no DCB */ 458 }; 459 460 #define ETH_MQ_TX_NONE RTE_DEPRECATED(ETH_MQ_TX_NONE) RTE_ETH_MQ_TX_NONE 461 #define ETH_MQ_TX_DCB RTE_DEPRECATED(ETH_MQ_TX_DCB) RTE_ETH_MQ_TX_DCB 462 #define ETH_MQ_TX_VMDQ_DCB RTE_DEPRECATED(ETH_MQ_TX_VMDQ_DCB) RTE_ETH_MQ_TX_VMDQ_DCB 463 #define ETH_MQ_TX_VMDQ_ONLY RTE_DEPRECATED(ETH_MQ_TX_VMDQ_ONLY) RTE_ETH_MQ_TX_VMDQ_ONLY 464 465 /** 466 * A structure used to configure the Rx features of an Ethernet port. 467 */ 468 struct rte_eth_rxmode { 469 /** The multi-queue packet distribution mode to be used, e.g. RSS. */ 470 enum rte_eth_rx_mq_mode mq_mode; 471 uint32_t mtu; /**< Requested MTU. */ 472 /** Maximum allowed size of LRO aggregated packet. 
*/ 473 uint32_t max_lro_pkt_size; 474 uint16_t split_hdr_size; /**< hdr buf size (header_split enabled).*/ 475 /** 476 * Per-port Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags. 477 * Only offloads set on rx_offload_capa field on rte_eth_dev_info 478 * structure are allowed to be set. 479 */ 480 uint64_t offloads; 481 482 uint64_t reserved_64s[2]; /**< Reserved for future fields */ 483 void *reserved_ptrs[2]; /**< Reserved for future fields */ 484 }; 485 486 /** 487 * VLAN types to indicate if it is for single VLAN, inner VLAN or outer VLAN. 488 * Note that single VLAN is treated the same as inner VLAN. 489 */ 490 enum rte_vlan_type { 491 RTE_ETH_VLAN_TYPE_UNKNOWN = 0, 492 RTE_ETH_VLAN_TYPE_INNER, /**< Inner VLAN. */ 493 RTE_ETH_VLAN_TYPE_OUTER, /**< Single VLAN, or outer VLAN. */ 494 RTE_ETH_VLAN_TYPE_MAX, 495 }; 496 497 #define ETH_VLAN_TYPE_UNKNOWN RTE_DEPRECATED(ETH_VLAN_TYPE_UNKNOWN) RTE_ETH_VLAN_TYPE_UNKNOWN 498 #define ETH_VLAN_TYPE_INNER RTE_DEPRECATED(ETH_VLAN_TYPE_INNER) RTE_ETH_VLAN_TYPE_INNER 499 #define ETH_VLAN_TYPE_OUTER RTE_DEPRECATED(ETH_VLAN_TYPE_OUTER) RTE_ETH_VLAN_TYPE_OUTER 500 #define ETH_VLAN_TYPE_MAX RTE_DEPRECATED(ETH_VLAN_TYPE_MAX) RTE_ETH_VLAN_TYPE_MAX 501 502 /** 503 * A structure used to describe a VLAN filter. 504 * If the bit corresponding to a VID is set, such VID is on. 505 */ 506 struct rte_vlan_filter_conf { 507 uint64_t ids[64]; 508 }; 509 510 /** 511 * A structure used to configure the Receive Side Scaling (RSS) feature 512 * of an Ethernet port. 513 * If not NULL, the *rss_key* pointer of the *rss_conf* structure points 514 * to an array holding the RSS key to use for hashing specific header 515 * fields of received packets. The length of this array should be indicated 516 * by *rss_key_len* below. Otherwise, a default random hash key is used by 517 * the device driver. 518 * 519 * The *rss_key_len* field of the *rss_conf* structure indicates the length 520 * in bytes of the array pointed by *rss_key*. To be compatible, this length 521 * will be checked in i40e only. Others assume 40 bytes to be used as before. 522 * 523 * The *rss_hf* field of the *rss_conf* structure indicates the different 524 * types of IPv4/IPv6 packets to which the RSS hashing must be applied. 525 * Supplying an *rss_hf* equal to zero disables the RSS feature. 526 */ 527 struct rte_eth_rss_conf { 528 uint8_t *rss_key; /**< If not NULL, 40-byte hash key. */ 529 uint8_t rss_key_len; /**< hash key length in bytes. */ 530 uint64_t rss_hf; /**< Hash functions to apply - see below. */ 531 }; 532 533 /* 534 * A packet can be identified by hardware as different flow types. Different 535 * NIC hardware may support different flow types. 536 * Basically, the NIC hardware identifies the flow type as deep protocol as 537 * possible, and exclusively. For example, if a packet is identified as 538 * 'RTE_ETH_FLOW_NONFRAG_IPV4_TCP', it will not be any of other flow types, 539 * though it is an actual IPV4 packet. 
540 */ 541 #define RTE_ETH_FLOW_UNKNOWN 0 542 #define RTE_ETH_FLOW_RAW 1 543 #define RTE_ETH_FLOW_IPV4 2 544 #define RTE_ETH_FLOW_FRAG_IPV4 3 545 #define RTE_ETH_FLOW_NONFRAG_IPV4_TCP 4 546 #define RTE_ETH_FLOW_NONFRAG_IPV4_UDP 5 547 #define RTE_ETH_FLOW_NONFRAG_IPV4_SCTP 6 548 #define RTE_ETH_FLOW_NONFRAG_IPV4_OTHER 7 549 #define RTE_ETH_FLOW_IPV6 8 550 #define RTE_ETH_FLOW_FRAG_IPV6 9 551 #define RTE_ETH_FLOW_NONFRAG_IPV6_TCP 10 552 #define RTE_ETH_FLOW_NONFRAG_IPV6_UDP 11 553 #define RTE_ETH_FLOW_NONFRAG_IPV6_SCTP 12 554 #define RTE_ETH_FLOW_NONFRAG_IPV6_OTHER 13 555 #define RTE_ETH_FLOW_L2_PAYLOAD 14 556 #define RTE_ETH_FLOW_IPV6_EX 15 557 #define RTE_ETH_FLOW_IPV6_TCP_EX 16 558 #define RTE_ETH_FLOW_IPV6_UDP_EX 17 559 /** Consider device port number as a flow differentiator */ 560 #define RTE_ETH_FLOW_PORT 18 561 #define RTE_ETH_FLOW_VXLAN 19 /**< VXLAN protocol based flow */ 562 #define RTE_ETH_FLOW_GENEVE 20 /**< GENEVE protocol based flow */ 563 #define RTE_ETH_FLOW_NVGRE 21 /**< NVGRE protocol based flow */ 564 #define RTE_ETH_FLOW_VXLAN_GPE 22 /**< VXLAN-GPE protocol based flow */ 565 #define RTE_ETH_FLOW_GTPU 23 /**< GTPU protocol based flow */ 566 #define RTE_ETH_FLOW_MAX 24 567 568 /* 569 * Below macros are defined for RSS offload types, they can be used to 570 * fill rte_eth_rss_conf.rss_hf or rte_flow_action_rss.types. 571 */ 572 #define RTE_ETH_RSS_IPV4 RTE_BIT64(2) 573 #define RTE_ETH_RSS_FRAG_IPV4 RTE_BIT64(3) 574 #define RTE_ETH_RSS_NONFRAG_IPV4_TCP RTE_BIT64(4) 575 #define RTE_ETH_RSS_NONFRAG_IPV4_UDP RTE_BIT64(5) 576 #define RTE_ETH_RSS_NONFRAG_IPV4_SCTP RTE_BIT64(6) 577 #define RTE_ETH_RSS_NONFRAG_IPV4_OTHER RTE_BIT64(7) 578 #define RTE_ETH_RSS_IPV6 RTE_BIT64(8) 579 #define RTE_ETH_RSS_FRAG_IPV6 RTE_BIT64(9) 580 #define RTE_ETH_RSS_NONFRAG_IPV6_TCP RTE_BIT64(10) 581 #define RTE_ETH_RSS_NONFRAG_IPV6_UDP RTE_BIT64(11) 582 #define RTE_ETH_RSS_NONFRAG_IPV6_SCTP RTE_BIT64(12) 583 #define RTE_ETH_RSS_NONFRAG_IPV6_OTHER RTE_BIT64(13) 584 #define RTE_ETH_RSS_L2_PAYLOAD RTE_BIT64(14) 585 #define RTE_ETH_RSS_IPV6_EX RTE_BIT64(15) 586 #define RTE_ETH_RSS_IPV6_TCP_EX RTE_BIT64(16) 587 #define RTE_ETH_RSS_IPV6_UDP_EX RTE_BIT64(17) 588 #define RTE_ETH_RSS_PORT RTE_BIT64(18) 589 #define RTE_ETH_RSS_VXLAN RTE_BIT64(19) 590 #define RTE_ETH_RSS_GENEVE RTE_BIT64(20) 591 #define RTE_ETH_RSS_NVGRE RTE_BIT64(21) 592 #define RTE_ETH_RSS_GTPU RTE_BIT64(23) 593 #define RTE_ETH_RSS_ETH RTE_BIT64(24) 594 #define RTE_ETH_RSS_S_VLAN RTE_BIT64(25) 595 #define RTE_ETH_RSS_C_VLAN RTE_BIT64(26) 596 #define RTE_ETH_RSS_ESP RTE_BIT64(27) 597 #define RTE_ETH_RSS_AH RTE_BIT64(28) 598 #define RTE_ETH_RSS_L2TPV3 RTE_BIT64(29) 599 #define RTE_ETH_RSS_PFCP RTE_BIT64(30) 600 #define RTE_ETH_RSS_PPPOE RTE_BIT64(31) 601 #define RTE_ETH_RSS_ECPRI RTE_BIT64(32) 602 #define RTE_ETH_RSS_MPLS RTE_BIT64(33) 603 #define RTE_ETH_RSS_IPV4_CHKSUM RTE_BIT64(34) 604 605 #define ETH_RSS_IPV4 RTE_DEPRECATED(ETH_RSS_IPV4) RTE_ETH_RSS_IPV4 606 #define ETH_RSS_FRAG_IPV4 RTE_DEPRECATED(ETH_RSS_FRAG_IPV4) RTE_ETH_RSS_FRAG_IPV4 607 #define ETH_RSS_NONFRAG_IPV4_TCP RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV4_TCP) RTE_ETH_RSS_NONFRAG_IPV4_TCP 608 #define ETH_RSS_NONFRAG_IPV4_UDP RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV4_UDP) RTE_ETH_RSS_NONFRAG_IPV4_UDP 609 #define ETH_RSS_NONFRAG_IPV4_SCTP RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV4_SCTP) RTE_ETH_RSS_NONFRAG_IPV4_SCTP 610 #define ETH_RSS_NONFRAG_IPV4_OTHER RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV4_OTHER) RTE_ETH_RSS_NONFRAG_IPV4_OTHER 611 #define ETH_RSS_IPV6 RTE_DEPRECATED(ETH_RSS_IPV6) RTE_ETH_RSS_IPV6 612 
#define ETH_RSS_FRAG_IPV6 RTE_DEPRECATED(ETH_RSS_FRAG_IPV6) RTE_ETH_RSS_FRAG_IPV6 613 #define ETH_RSS_NONFRAG_IPV6_TCP RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV6_TCP) RTE_ETH_RSS_NONFRAG_IPV6_TCP 614 #define ETH_RSS_NONFRAG_IPV6_UDP RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV6_UDP) RTE_ETH_RSS_NONFRAG_IPV6_UDP 615 #define ETH_RSS_NONFRAG_IPV6_SCTP RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV6_SCTP) RTE_ETH_RSS_NONFRAG_IPV6_SCTP 616 #define ETH_RSS_NONFRAG_IPV6_OTHER RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV6_OTHER) RTE_ETH_RSS_NONFRAG_IPV6_OTHER 617 #define ETH_RSS_L2_PAYLOAD RTE_DEPRECATED(ETH_RSS_L2_PAYLOAD) RTE_ETH_RSS_L2_PAYLOAD 618 #define ETH_RSS_IPV6_EX RTE_DEPRECATED(ETH_RSS_IPV6_EX) RTE_ETH_RSS_IPV6_EX 619 #define ETH_RSS_IPV6_TCP_EX RTE_DEPRECATED(ETH_RSS_IPV6_TCP_EX) RTE_ETH_RSS_IPV6_TCP_EX 620 #define ETH_RSS_IPV6_UDP_EX RTE_DEPRECATED(ETH_RSS_IPV6_UDP_EX) RTE_ETH_RSS_IPV6_UDP_EX 621 #define ETH_RSS_PORT RTE_DEPRECATED(ETH_RSS_PORT) RTE_ETH_RSS_PORT 622 #define ETH_RSS_VXLAN RTE_DEPRECATED(ETH_RSS_VXLAN) RTE_ETH_RSS_VXLAN 623 #define ETH_RSS_GENEVE RTE_DEPRECATED(ETH_RSS_GENEVE) RTE_ETH_RSS_GENEVE 624 #define ETH_RSS_NVGRE RTE_DEPRECATED(ETH_RSS_NVGRE) RTE_ETH_RSS_NVGRE 625 #define ETH_RSS_GTPU RTE_DEPRECATED(ETH_RSS_GTPU) RTE_ETH_RSS_GTPU 626 #define ETH_RSS_ETH RTE_DEPRECATED(ETH_RSS_ETH) RTE_ETH_RSS_ETH 627 #define ETH_RSS_S_VLAN RTE_DEPRECATED(ETH_RSS_S_VLAN) RTE_ETH_RSS_S_VLAN 628 #define ETH_RSS_C_VLAN RTE_DEPRECATED(ETH_RSS_C_VLAN) RTE_ETH_RSS_C_VLAN 629 #define ETH_RSS_ESP RTE_DEPRECATED(ETH_RSS_ESP) RTE_ETH_RSS_ESP 630 #define ETH_RSS_AH RTE_DEPRECATED(ETH_RSS_AH) RTE_ETH_RSS_AH 631 #define ETH_RSS_L2TPV3 RTE_DEPRECATED(ETH_RSS_L2TPV3) RTE_ETH_RSS_L2TPV3 632 #define ETH_RSS_PFCP RTE_DEPRECATED(ETH_RSS_PFCP) RTE_ETH_RSS_PFCP 633 #define ETH_RSS_PPPOE RTE_DEPRECATED(ETH_RSS_PPPOE) RTE_ETH_RSS_PPPOE 634 #define ETH_RSS_ECPRI RTE_DEPRECATED(ETH_RSS_ECPRI) RTE_ETH_RSS_ECPRI 635 #define ETH_RSS_MPLS RTE_DEPRECATED(ETH_RSS_MPLS) RTE_ETH_RSS_MPLS 636 #define ETH_RSS_IPV4_CHKSUM RTE_DEPRECATED(ETH_RSS_IPV4_CHKSUM) RTE_ETH_RSS_IPV4_CHKSUM 637 638 /** 639 * The ETH_RSS_L4_CHKSUM works on checksum field of any L4 header. 640 * It is similar to ETH_RSS_PORT that they don't specify the specific type of 641 * L4 header. This macro is defined to replace some specific L4 (TCP/UDP/SCTP) 642 * checksum type for constructing the use of RSS offload bits. 643 * 644 * Due to above reason, some old APIs (and configuration) don't support 645 * RTE_ETH_RSS_L4_CHKSUM. The rte_flow RSS API supports it. 646 * 647 * For the case that checksum is not used in an UDP header, 648 * it takes the reserved value 0 as input for the hash function. 649 */ 650 #define RTE_ETH_RSS_L4_CHKSUM RTE_BIT64(35) 651 #define ETH_RSS_L4_CHKSUM RTE_DEPRECATED(ETH_RSS_L4_CHKSUM) RTE_ETH_RSS_L4_CHKSUM 652 653 #define RTE_ETH_RSS_L2TPV2 RTE_BIT64(36) 654 655 /* 656 * We use the following macros to combine with above RTE_ETH_RSS_* for 657 * more specific input set selection. These bits are defined starting 658 * from the high end of the 64 bits. 659 * Note: If we use above RTE_ETH_RSS_* without SRC/DST_ONLY, it represents 660 * both SRC and DST are taken into account. If SRC_ONLY and DST_ONLY of 661 * the same level are used simultaneously, it is the same case as none of 662 * them are added. 
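 *
 * As a minimal illustrative sketch (the exact set of RSS types supported
 * depends on the device), an application could request hashing on the
 * IPv4/TCP source side only when filling struct rte_eth_rss_conf:
 *
 * @code{.c}
 * struct rte_eth_rss_conf rss_conf = {
 *     .rss_key = NULL,   // let the PMD use its default RSS key
 *     .rss_key_len = 0,
 *     .rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_L4_SRC_ONLY,
 * };
 * @endcode
 *
 * Requesting both the SRC_ONLY and DST_ONLY modifiers of the same level at
 * the same time is equivalent to requesting neither, as described above.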
663 */ 664 #define RTE_ETH_RSS_L3_SRC_ONLY RTE_BIT64(63) 665 #define RTE_ETH_RSS_L3_DST_ONLY RTE_BIT64(62) 666 #define RTE_ETH_RSS_L4_SRC_ONLY RTE_BIT64(61) 667 #define RTE_ETH_RSS_L4_DST_ONLY RTE_BIT64(60) 668 #define RTE_ETH_RSS_L2_SRC_ONLY RTE_BIT64(59) 669 #define RTE_ETH_RSS_L2_DST_ONLY RTE_BIT64(58) 670 671 #define ETH_RSS_L3_SRC_ONLY RTE_DEPRECATED(ETH_RSS_L3_SRC_ONLY) RTE_ETH_RSS_L3_SRC_ONLY 672 #define ETH_RSS_L3_DST_ONLY RTE_DEPRECATED(ETH_RSS_L3_DST_ONLY) RTE_ETH_RSS_L3_DST_ONLY 673 #define ETH_RSS_L4_SRC_ONLY RTE_DEPRECATED(ETH_RSS_L4_SRC_ONLY) RTE_ETH_RSS_L4_SRC_ONLY 674 #define ETH_RSS_L4_DST_ONLY RTE_DEPRECATED(ETH_RSS_L4_DST_ONLY) RTE_ETH_RSS_L4_DST_ONLY 675 #define ETH_RSS_L2_SRC_ONLY RTE_DEPRECATED(ETH_RSS_L2_SRC_ONLY) RTE_ETH_RSS_L2_SRC_ONLY 676 #define ETH_RSS_L2_DST_ONLY RTE_DEPRECATED(ETH_RSS_L2_DST_ONLY) RTE_ETH_RSS_L2_DST_ONLY 677 678 /* 679 * Only select IPV6 address prefix as RSS input set according to 680 * https://tools.ietf.org/html/rfc6052 681 * Must be combined with RTE_ETH_RSS_IPV6, RTE_ETH_RSS_NONFRAG_IPV6_UDP, 682 * RTE_ETH_RSS_NONFRAG_IPV6_TCP, RTE_ETH_RSS_NONFRAG_IPV6_SCTP. 683 */ 684 #define RTE_ETH_RSS_L3_PRE32 RTE_BIT64(57) 685 #define RTE_ETH_RSS_L3_PRE40 RTE_BIT64(56) 686 #define RTE_ETH_RSS_L3_PRE48 RTE_BIT64(55) 687 #define RTE_ETH_RSS_L3_PRE56 RTE_BIT64(54) 688 #define RTE_ETH_RSS_L3_PRE64 RTE_BIT64(53) 689 #define RTE_ETH_RSS_L3_PRE96 RTE_BIT64(52) 690 691 /* 692 * Use the following macros to combine with the above layers 693 * to choose inner and outer layers or both for RSS computation. 694 * Bits 50 and 51 are reserved for this. 695 */ 696 697 /** 698 * level 0, requests the default behavior. 699 * Depending on the packet type, it can mean outermost, innermost, 700 * anything in between or even no RSS. 701 * It basically stands for the innermost encapsulation level RSS 702 * can be performed on according to PMD and device capabilities. 703 */ 704 #define RTE_ETH_RSS_LEVEL_PMD_DEFAULT (UINT64_C(0) << 50) 705 #define ETH_RSS_LEVEL_PMD_DEFAULT RTE_DEPRECATED(ETH_RSS_LEVEL_PMD_DEFAULT) RTE_ETH_RSS_LEVEL_PMD_DEFAULT 706 707 /** 708 * level 1, requests RSS to be performed on the outermost packet 709 * encapsulation level. 710 */ 711 #define RTE_ETH_RSS_LEVEL_OUTERMOST (UINT64_C(1) << 50) 712 #define ETH_RSS_LEVEL_OUTERMOST RTE_DEPRECATED(ETH_RSS_LEVEL_OUTERMOST) RTE_ETH_RSS_LEVEL_OUTERMOST 713 714 /** 715 * level 2, requests RSS to be performed on the specified inner packet 716 * encapsulation level, from outermost to innermost (lower to higher values). 717 */ 718 #define RTE_ETH_RSS_LEVEL_INNERMOST (UINT64_C(2) << 50) 719 #define RTE_ETH_RSS_LEVEL_MASK (UINT64_C(3) << 50) 720 721 #define ETH_RSS_LEVEL_INNERMOST RTE_DEPRECATED(ETH_RSS_LEVEL_INNERMOST) RTE_ETH_RSS_LEVEL_INNERMOST 722 #define ETH_RSS_LEVEL_MASK RTE_DEPRECATED(ETH_RSS_LEVEL_MASK) RTE_ETH_RSS_LEVEL_MASK 723 724 #define RTE_ETH_RSS_LEVEL(rss_hf) ((rss_hf & RTE_ETH_RSS_LEVEL_MASK) >> 50) 725 #define ETH_RSS_LEVEL(rss_hf) RTE_DEPRECATED(ETH_RSS_LEVEL(rss_hf)) RTE_ETH_RSS_LEVEL(rss_hf) 726 727 /** 728 * For input set change of hash filter, if SRC_ONLY and DST_ONLY of 729 * the same level are used simultaneously, it is the same case as 730 * none of them are added. 731 * 732 * @param rss_hf 733 * RSS types with SRC/DST_ONLY. 734 * @return 735 * RSS types. 
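 *
 * As an illustrative sketch, SRC_ONLY and DST_ONLY modifiers of the same
 * level cancel each other out:
 *
 * @code{.c}
 * uint64_t hf = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
 *               RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY;
 * hf = rte_eth_rss_hf_refine(hf);
 * // hf is now RTE_ETH_RSS_NONFRAG_IPV4_UDP
 * @endcode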
736 */ 737 static inline uint64_t 738 rte_eth_rss_hf_refine(uint64_t rss_hf) 739 { 740 if ((rss_hf & RTE_ETH_RSS_L3_SRC_ONLY) && (rss_hf & RTE_ETH_RSS_L3_DST_ONLY)) 741 rss_hf &= ~(RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY); 742 743 if ((rss_hf & RTE_ETH_RSS_L4_SRC_ONLY) && (rss_hf & RTE_ETH_RSS_L4_DST_ONLY)) 744 rss_hf &= ~(RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY); 745 746 return rss_hf; 747 } 748 749 #define RTE_ETH_RSS_IPV6_PRE32 ( \ 750 RTE_ETH_RSS_IPV6 | \ 751 RTE_ETH_RSS_L3_PRE32) 752 #define ETH_RSS_IPV6_PRE32 RTE_DEPRECATED(ETH_RSS_IPV6_PRE32) RTE_ETH_RSS_IPV6_PRE32 753 754 #define RTE_ETH_RSS_IPV6_PRE40 ( \ 755 RTE_ETH_RSS_IPV6 | \ 756 RTE_ETH_RSS_L3_PRE40) 757 #define ETH_RSS_IPV6_PRE40 RTE_DEPRECATED(ETH_RSS_IPV6_PRE40) RTE_ETH_RSS_IPV6_PRE40 758 759 #define RTE_ETH_RSS_IPV6_PRE48 ( \ 760 RTE_ETH_RSS_IPV6 | \ 761 RTE_ETH_RSS_L3_PRE48) 762 #define ETH_RSS_IPV6_PRE48 RTE_DEPRECATED(ETH_RSS_IPV6_PRE48) RTE_ETH_RSS_IPV6_PRE48 763 764 #define RTE_ETH_RSS_IPV6_PRE56 ( \ 765 RTE_ETH_RSS_IPV6 | \ 766 RTE_ETH_RSS_L3_PRE56) 767 #define ETH_RSS_IPV6_PRE56 RTE_DEPRECATED(ETH_RSS_IPV6_PRE56) RTE_ETH_RSS_IPV6_PRE56 768 769 #define RTE_ETH_RSS_IPV6_PRE64 ( \ 770 RTE_ETH_RSS_IPV6 | \ 771 RTE_ETH_RSS_L3_PRE64) 772 #define ETH_RSS_IPV6_PRE64 RTE_DEPRECATED(ETH_RSS_IPV6_PRE64) RTE_ETH_RSS_IPV6_PRE64 773 774 #define RTE_ETH_RSS_IPV6_PRE96 ( \ 775 RTE_ETH_RSS_IPV6 | \ 776 RTE_ETH_RSS_L3_PRE96) 777 #define ETH_RSS_IPV6_PRE96 RTE_DEPRECATED(ETH_RSS_IPV6_PRE96) RTE_ETH_RSS_IPV6_PRE96 778 779 #define RTE_ETH_RSS_IPV6_PRE32_UDP ( \ 780 RTE_ETH_RSS_NONFRAG_IPV6_UDP | \ 781 RTE_ETH_RSS_L3_PRE32) 782 #define ETH_RSS_IPV6_PRE32_UDP RTE_DEPRECATED(ETH_RSS_IPV6_PRE32_UDP) RTE_ETH_RSS_IPV6_PRE32_UDP 783 784 #define RTE_ETH_RSS_IPV6_PRE40_UDP ( \ 785 RTE_ETH_RSS_NONFRAG_IPV6_UDP | \ 786 RTE_ETH_RSS_L3_PRE40) 787 #define ETH_RSS_IPV6_PRE40_UDP RTE_DEPRECATED(ETH_RSS_IPV6_PRE40_UDP) RTE_ETH_RSS_IPV6_PRE40_UDP 788 789 #define RTE_ETH_RSS_IPV6_PRE48_UDP ( \ 790 RTE_ETH_RSS_NONFRAG_IPV6_UDP | \ 791 RTE_ETH_RSS_L3_PRE48) 792 #define ETH_RSS_IPV6_PRE48_UDP RTE_DEPRECATED(ETH_RSS_IPV6_PRE48_UDP) RTE_ETH_RSS_IPV6_PRE48_UDP 793 794 #define RTE_ETH_RSS_IPV6_PRE56_UDP ( \ 795 RTE_ETH_RSS_NONFRAG_IPV6_UDP | \ 796 RTE_ETH_RSS_L3_PRE56) 797 #define ETH_RSS_IPV6_PRE56_UDP RTE_DEPRECATED(ETH_RSS_IPV6_PRE56_UDP) RTE_ETH_RSS_IPV6_PRE56_UDP 798 799 #define RTE_ETH_RSS_IPV6_PRE64_UDP ( \ 800 RTE_ETH_RSS_NONFRAG_IPV6_UDP | \ 801 RTE_ETH_RSS_L3_PRE64) 802 #define ETH_RSS_IPV6_PRE64_UDP RTE_DEPRECATED(ETH_RSS_IPV6_PRE64_UDP) RTE_ETH_RSS_IPV6_PRE64_UDP 803 804 #define RTE_ETH_RSS_IPV6_PRE96_UDP ( \ 805 RTE_ETH_RSS_NONFRAG_IPV6_UDP | \ 806 RTE_ETH_RSS_L3_PRE96) 807 #define ETH_RSS_IPV6_PRE96_UDP RTE_DEPRECATED(ETH_RSS_IPV6_PRE96_UDP) RTE_ETH_RSS_IPV6_PRE96_UDP 808 809 #define RTE_ETH_RSS_IPV6_PRE32_TCP ( \ 810 RTE_ETH_RSS_NONFRAG_IPV6_TCP | \ 811 RTE_ETH_RSS_L3_PRE32) 812 #define ETH_RSS_IPV6_PRE32_TCP RTE_DEPRECATED(ETH_RSS_IPV6_PRE32_TCP) RTE_ETH_RSS_IPV6_PRE32_TCP 813 814 #define RTE_ETH_RSS_IPV6_PRE40_TCP ( \ 815 RTE_ETH_RSS_NONFRAG_IPV6_TCP | \ 816 RTE_ETH_RSS_L3_PRE40) 817 #define ETH_RSS_IPV6_PRE40_TCP RTE_DEPRECATED(ETH_RSS_IPV6_PRE40_TCP) RTE_ETH_RSS_IPV6_PRE40_TCP 818 819 #define RTE_ETH_RSS_IPV6_PRE48_TCP ( \ 820 RTE_ETH_RSS_NONFRAG_IPV6_TCP | \ 821 RTE_ETH_RSS_L3_PRE48) 822 #define ETH_RSS_IPV6_PRE48_TCP RTE_DEPRECATED(ETH_RSS_IPV6_PRE48_TCP) RTE_ETH_RSS_IPV6_PRE48_TCP 823 824 #define RTE_ETH_RSS_IPV6_PRE56_TCP ( \ 825 RTE_ETH_RSS_NONFRAG_IPV6_TCP | \ 826 RTE_ETH_RSS_L3_PRE56) 827 #define 
ETH_RSS_IPV6_PRE56_TCP RTE_DEPRECATED(ETH_RSS_IPV6_PRE56_TCP) RTE_ETH_RSS_IPV6_PRE56_TCP 828 829 #define RTE_ETH_RSS_IPV6_PRE64_TCP ( \ 830 RTE_ETH_RSS_NONFRAG_IPV6_TCP | \ 831 RTE_ETH_RSS_L3_PRE64) 832 #define ETH_RSS_IPV6_PRE64_TCP RTE_DEPRECATED(ETH_RSS_IPV6_PRE64_TCP) RTE_ETH_RSS_IPV6_PRE64_TCP 833 834 #define RTE_ETH_RSS_IPV6_PRE96_TCP ( \ 835 RTE_ETH_RSS_NONFRAG_IPV6_TCP | \ 836 RTE_ETH_RSS_L3_PRE96) 837 #define ETH_RSS_IPV6_PRE96_TCP RTE_DEPRECATED(ETH_RSS_IPV6_PRE96_TCP) RTE_ETH_RSS_IPV6_PRE96_TCP 838 839 #define RTE_ETH_RSS_IPV6_PRE32_SCTP ( \ 840 RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \ 841 RTE_ETH_RSS_L3_PRE32) 842 #define ETH_RSS_IPV6_PRE32_SCTP RTE_DEPRECATED(ETH_RSS_IPV6_PRE32_SCTP) RTE_ETH_RSS_IPV6_PRE32_SCTP 843 844 #define RTE_ETH_RSS_IPV6_PRE40_SCTP ( \ 845 RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \ 846 RTE_ETH_RSS_L3_PRE40) 847 #define ETH_RSS_IPV6_PRE40_SCTP RTE_DEPRECATED(ETH_RSS_IPV6_PRE40_SCTP) RTE_ETH_RSS_IPV6_PRE40_SCTP 848 849 #define RTE_ETH_RSS_IPV6_PRE48_SCTP ( \ 850 RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \ 851 RTE_ETH_RSS_L3_PRE48) 852 #define ETH_RSS_IPV6_PRE48_SCTP RTE_DEPRECATED(ETH_RSS_IPV6_PRE48_SCTP) RTE_ETH_RSS_IPV6_PRE48_SCTP 853 854 #define RTE_ETH_RSS_IPV6_PRE56_SCTP ( \ 855 RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \ 856 RTE_ETH_RSS_L3_PRE56) 857 #define ETH_RSS_IPV6_PRE56_SCTP RTE_DEPRECATED(ETH_RSS_IPV6_PRE56_SCTP) RTE_ETH_RSS_IPV6_PRE56_SCTP 858 859 #define RTE_ETH_RSS_IPV6_PRE64_SCTP ( \ 860 RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \ 861 RTE_ETH_RSS_L3_PRE64) 862 #define ETH_RSS_IPV6_PRE64_SCTP RTE_DEPRECATED(ETH_RSS_IPV6_PRE64_SCTP) RTE_ETH_RSS_IPV6_PRE64_SCTP 863 864 #define RTE_ETH_RSS_IPV6_PRE96_SCTP ( \ 865 RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \ 866 RTE_ETH_RSS_L3_PRE96) 867 #define ETH_RSS_IPV6_PRE96_SCTP RTE_DEPRECATED(ETH_RSS_IPV6_PRE96_SCTP) RTE_ETH_RSS_IPV6_PRE96_SCTP 868 869 #define RTE_ETH_RSS_IP ( \ 870 RTE_ETH_RSS_IPV4 | \ 871 RTE_ETH_RSS_FRAG_IPV4 | \ 872 RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \ 873 RTE_ETH_RSS_IPV6 | \ 874 RTE_ETH_RSS_FRAG_IPV6 | \ 875 RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \ 876 RTE_ETH_RSS_IPV6_EX) 877 #define ETH_RSS_IP RTE_DEPRECATED(ETH_RSS_IP) RTE_ETH_RSS_IP 878 879 #define RTE_ETH_RSS_UDP ( \ 880 RTE_ETH_RSS_NONFRAG_IPV4_UDP | \ 881 RTE_ETH_RSS_NONFRAG_IPV6_UDP | \ 882 RTE_ETH_RSS_IPV6_UDP_EX) 883 #define ETH_RSS_UDP RTE_DEPRECATED(ETH_RSS_UDP) RTE_ETH_RSS_UDP 884 885 #define RTE_ETH_RSS_TCP ( \ 886 RTE_ETH_RSS_NONFRAG_IPV4_TCP | \ 887 RTE_ETH_RSS_NONFRAG_IPV6_TCP | \ 888 RTE_ETH_RSS_IPV6_TCP_EX) 889 #define ETH_RSS_TCP RTE_DEPRECATED(ETH_RSS_TCP) RTE_ETH_RSS_TCP 890 891 #define RTE_ETH_RSS_SCTP ( \ 892 RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \ 893 RTE_ETH_RSS_NONFRAG_IPV6_SCTP) 894 #define ETH_RSS_SCTP RTE_DEPRECATED(ETH_RSS_SCTP) RTE_ETH_RSS_SCTP 895 896 #define RTE_ETH_RSS_TUNNEL ( \ 897 RTE_ETH_RSS_VXLAN | \ 898 RTE_ETH_RSS_GENEVE | \ 899 RTE_ETH_RSS_NVGRE) 900 #define ETH_RSS_TUNNEL RTE_DEPRECATED(ETH_RSS_TUNNEL) RTE_ETH_RSS_TUNNEL 901 902 #define RTE_ETH_RSS_VLAN ( \ 903 RTE_ETH_RSS_S_VLAN | \ 904 RTE_ETH_RSS_C_VLAN) 905 #define ETH_RSS_VLAN RTE_DEPRECATED(ETH_RSS_VLAN) RTE_ETH_RSS_VLAN 906 907 /** Mask of valid RSS hash protocols */ 908 #define RTE_ETH_RSS_PROTO_MASK ( \ 909 RTE_ETH_RSS_IPV4 | \ 910 RTE_ETH_RSS_FRAG_IPV4 | \ 911 RTE_ETH_RSS_NONFRAG_IPV4_TCP | \ 912 RTE_ETH_RSS_NONFRAG_IPV4_UDP | \ 913 RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \ 914 RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \ 915 RTE_ETH_RSS_IPV6 | \ 916 RTE_ETH_RSS_FRAG_IPV6 | \ 917 RTE_ETH_RSS_NONFRAG_IPV6_TCP | \ 918 RTE_ETH_RSS_NONFRAG_IPV6_UDP | \ 919 RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \ 920 
RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \ 921 RTE_ETH_RSS_L2_PAYLOAD | \ 922 RTE_ETH_RSS_IPV6_EX | \ 923 RTE_ETH_RSS_IPV6_TCP_EX | \ 924 RTE_ETH_RSS_IPV6_UDP_EX | \ 925 RTE_ETH_RSS_PORT | \ 926 RTE_ETH_RSS_VXLAN | \ 927 RTE_ETH_RSS_GENEVE | \ 928 RTE_ETH_RSS_NVGRE | \ 929 RTE_ETH_RSS_MPLS) 930 #define ETH_RSS_PROTO_MASK RTE_DEPRECATED(ETH_RSS_PROTO_MASK) RTE_ETH_RSS_PROTO_MASK 931 932 /* 933 * Definitions used for redirection table entry size. 934 * Some RSS RETA sizes may not be supported by some drivers, check the 935 * documentation or the description of relevant functions for more details. 936 */ 937 #define RTE_ETH_RSS_RETA_SIZE_64 64 938 #define RTE_ETH_RSS_RETA_SIZE_128 128 939 #define RTE_ETH_RSS_RETA_SIZE_256 256 940 #define RTE_ETH_RSS_RETA_SIZE_512 512 941 #define RTE_ETH_RETA_GROUP_SIZE 64 942 943 #define ETH_RSS_RETA_SIZE_64 RTE_DEPRECATED(ETH_RSS_RETA_SIZE_64) RTE_ETH_RSS_RETA_SIZE_64 944 #define ETH_RSS_RETA_SIZE_128 RTE_DEPRECATED(ETH_RSS_RETA_SIZE_128) RTE_ETH_RSS_RETA_SIZE_128 945 #define ETH_RSS_RETA_SIZE_256 RTE_DEPRECATED(ETH_RSS_RETA_SIZE_256) RTE_ETH_RSS_RETA_SIZE_256 946 #define ETH_RSS_RETA_SIZE_512 RTE_DEPRECATED(ETH_RSS_RETA_SIZE_512) RTE_ETH_RSS_RETA_SIZE_512 947 #define RTE_RETA_GROUP_SIZE RTE_DEPRECATED(RTE_RETA_GROUP_SIZE) RTE_ETH_RETA_GROUP_SIZE 948 949 /**@{@name VMDq and DCB maximums */ 950 #define RTE_ETH_VMDQ_MAX_VLAN_FILTERS 64 /**< Maximum nb. of VMDq VLAN filters. */ 951 #define RTE_ETH_DCB_NUM_USER_PRIORITIES 8 /**< Maximum nb. of DCB priorities. */ 952 #define RTE_ETH_VMDQ_DCB_NUM_QUEUES 128 /**< Maximum nb. of VMDq DCB queues. */ 953 #define RTE_ETH_DCB_NUM_QUEUES 128 /**< Maximum nb. of DCB queues. */ 954 /**@}*/ 955 956 #define ETH_VMDQ_MAX_VLAN_FILTERS RTE_DEPRECATED(ETH_VMDQ_MAX_VLAN_FILTERS) RTE_ETH_VMDQ_MAX_VLAN_FILTERS 957 #define ETH_DCB_NUM_USER_PRIORITIES RTE_DEPRECATED(ETH_DCB_NUM_USER_PRIORITIES) RTE_ETH_DCB_NUM_USER_PRIORITIES 958 #define ETH_VMDQ_DCB_NUM_QUEUES RTE_DEPRECATED(ETH_VMDQ_DCB_NUM_QUEUES) RTE_ETH_VMDQ_DCB_NUM_QUEUES 959 #define ETH_DCB_NUM_QUEUES RTE_DEPRECATED(ETH_DCB_NUM_QUEUES) RTE_ETH_DCB_NUM_QUEUES 960 961 /**@{@name DCB capabilities */ 962 #define RTE_ETH_DCB_PG_SUPPORT RTE_BIT32(0) /**< Priority Group(ETS) support. */ 963 #define RTE_ETH_DCB_PFC_SUPPORT RTE_BIT32(1) /**< Priority Flow Control support. 
*/ 964 /**@}*/ 965 966 #define ETH_DCB_PG_SUPPORT RTE_DEPRECATED(ETH_DCB_PG_SUPPORT) RTE_ETH_DCB_PG_SUPPORT 967 #define ETH_DCB_PFC_SUPPORT RTE_DEPRECATED(ETH_DCB_PFC_SUPPORT) RTE_ETH_DCB_PFC_SUPPORT 968 969 /**@{@name VLAN offload bits */ 970 #define RTE_ETH_VLAN_STRIP_OFFLOAD 0x0001 /**< VLAN Strip On/Off */ 971 #define RTE_ETH_VLAN_FILTER_OFFLOAD 0x0002 /**< VLAN Filter On/Off */ 972 #define RTE_ETH_VLAN_EXTEND_OFFLOAD 0x0004 /**< VLAN Extend On/Off */ 973 #define RTE_ETH_QINQ_STRIP_OFFLOAD 0x0008 /**< QINQ Strip On/Off */ 974 975 #define ETH_VLAN_STRIP_OFFLOAD RTE_DEPRECATED(ETH_VLAN_STRIP_OFFLOAD) RTE_ETH_VLAN_STRIP_OFFLOAD 976 #define ETH_VLAN_FILTER_OFFLOAD RTE_DEPRECATED(ETH_VLAN_FILTER_OFFLOAD) RTE_ETH_VLAN_FILTER_OFFLOAD 977 #define ETH_VLAN_EXTEND_OFFLOAD RTE_DEPRECATED(ETH_VLAN_EXTEND_OFFLOAD) RTE_ETH_VLAN_EXTEND_OFFLOAD 978 #define ETH_QINQ_STRIP_OFFLOAD RTE_DEPRECATED(ETH_QINQ_STRIP_OFFLOAD) RTE_ETH_QINQ_STRIP_OFFLOAD 979 980 #define RTE_ETH_VLAN_STRIP_MASK 0x0001 /**< VLAN Strip setting mask */ 981 #define RTE_ETH_VLAN_FILTER_MASK 0x0002 /**< VLAN Filter setting mask*/ 982 #define RTE_ETH_VLAN_EXTEND_MASK 0x0004 /**< VLAN Extend setting mask*/ 983 #define RTE_ETH_QINQ_STRIP_MASK 0x0008 /**< QINQ Strip setting mask */ 984 #define RTE_ETH_VLAN_ID_MAX 0x0FFF /**< VLAN ID is in lower 12 bits*/ 985 /**@}*/ 986 987 #define ETH_VLAN_STRIP_MASK RTE_DEPRECATED(ETH_VLAN_STRIP_MASK) RTE_ETH_VLAN_STRIP_MASK 988 #define ETH_VLAN_FILTER_MASK RTE_DEPRECATED(ETH_VLAN_FILTER_MASK) RTE_ETH_VLAN_FILTER_MASK 989 #define ETH_VLAN_EXTEND_MASK RTE_DEPRECATED(ETH_VLAN_EXTEND_MASK) RTE_ETH_VLAN_EXTEND_MASK 990 #define ETH_QINQ_STRIP_MASK RTE_DEPRECATED(ETH_QINQ_STRIP_MASK) RTE_ETH_QINQ_STRIP_MASK 991 #define ETH_VLAN_ID_MAX RTE_DEPRECATED(ETH_VLAN_ID_MAX) RTE_ETH_VLAN_ID_MAX 992 993 /* Definitions used for receive MAC address */ 994 #define RTE_ETH_NUM_RECEIVE_MAC_ADDR 128 /**< Maximum nb. of receive mac addr. */ 995 #define ETH_NUM_RECEIVE_MAC_ADDR RTE_DEPRECATED(ETH_NUM_RECEIVE_MAC_ADDR) RTE_ETH_NUM_RECEIVE_MAC_ADDR 996 997 /* Definitions used for unicast hash */ 998 #define RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY 128 /**< Maximum nb. of UC hash array. */ 999 #define ETH_VMDQ_NUM_UC_HASH_ARRAY RTE_DEPRECATED(ETH_VMDQ_NUM_UC_HASH_ARRAY) RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY 1000 1001 /**@{@name VMDq Rx mode 1002 * @see rte_eth_vmdq_rx_conf.rx_mode 1003 */ 1004 /** Accept untagged packets. */ 1005 #define RTE_ETH_VMDQ_ACCEPT_UNTAG RTE_BIT32(0) 1006 /** Accept packets in multicast table. */ 1007 #define RTE_ETH_VMDQ_ACCEPT_HASH_MC RTE_BIT32(1) 1008 /** Accept packets in unicast table. */ 1009 #define RTE_ETH_VMDQ_ACCEPT_HASH_UC RTE_BIT32(2) 1010 /** Accept broadcast packets. */ 1011 #define RTE_ETH_VMDQ_ACCEPT_BROADCAST RTE_BIT32(3) 1012 /** Multicast promiscuous. */ 1013 #define RTE_ETH_VMDQ_ACCEPT_MULTICAST RTE_BIT32(4) 1014 /**@}*/ 1015 1016 #define ETH_VMDQ_ACCEPT_UNTAG RTE_DEPRECATED(ETH_VMDQ_ACCEPT_UNTAG) RTE_ETH_VMDQ_ACCEPT_UNTAG 1017 #define ETH_VMDQ_ACCEPT_HASH_MC RTE_DEPRECATED(ETH_VMDQ_ACCEPT_HASH_MC) RTE_ETH_VMDQ_ACCEPT_HASH_MC 1018 #define ETH_VMDQ_ACCEPT_HASH_UC RTE_DEPRECATED(ETH_VMDQ_ACCEPT_HASH_UC) RTE_ETH_VMDQ_ACCEPT_HASH_UC 1019 #define ETH_VMDQ_ACCEPT_BROADCAST RTE_DEPRECATED(ETH_VMDQ_ACCEPT_BROADCAST) RTE_ETH_VMDQ_ACCEPT_BROADCAST 1020 #define ETH_VMDQ_ACCEPT_MULTICAST RTE_DEPRECATED(ETH_VMDQ_ACCEPT_MULTICAST) RTE_ETH_VMDQ_ACCEPT_MULTICAST 1021 1022 /** 1023 * A structure used to configure 64 entries of Redirection Table of the 1024 * Receive Side Scaling (RSS) feature of an Ethernet port. 
To configure 1025 * more than 64 entries supported by hardware, an array of this structure 1026 * is needed. 1027 */ 1028 struct rte_eth_rss_reta_entry64 { 1029 /** Mask bits indicate which entries need to be updated/queried. */ 1030 uint64_t mask; 1031 /** Group of 64 redirection table entries. */ 1032 uint16_t reta[RTE_ETH_RETA_GROUP_SIZE]; 1033 }; 1034 1035 /** 1036 * This enum indicates the possible number of traffic classes 1037 * in DCB configurations 1038 */ 1039 enum rte_eth_nb_tcs { 1040 RTE_ETH_4_TCS = 4, /**< 4 TCs with DCB. */ 1041 RTE_ETH_8_TCS = 8 /**< 8 TCs with DCB. */ 1042 }; 1043 #define ETH_4_TCS RTE_DEPRECATED(ETH_4_TCS) RTE_ETH_4_TCS 1044 #define ETH_8_TCS RTE_DEPRECATED(ETH_8_TCS) RTE_ETH_8_TCS 1045 1046 /** 1047 * This enum indicates the possible number of queue pools 1048 * in VMDq configurations. 1049 */ 1050 enum rte_eth_nb_pools { 1051 RTE_ETH_8_POOLS = 8, /**< 8 VMDq pools. */ 1052 RTE_ETH_16_POOLS = 16, /**< 16 VMDq pools. */ 1053 RTE_ETH_32_POOLS = 32, /**< 32 VMDq pools. */ 1054 RTE_ETH_64_POOLS = 64 /**< 64 VMDq pools. */ 1055 }; 1056 #define ETH_8_POOLS RTE_DEPRECATED(ETH_8_POOLS) RTE_ETH_8_POOLS 1057 #define ETH_16_POOLS RTE_DEPRECATED(ETH_16_POOLS) RTE_ETH_16_POOLS 1058 #define ETH_32_POOLS RTE_DEPRECATED(ETH_32_POOLS) RTE_ETH_32_POOLS 1059 #define ETH_64_POOLS RTE_DEPRECATED(ETH_64_POOLS) RTE_ETH_64_POOLS 1060 1061 /* This structure may be extended in future. */ 1062 struct rte_eth_dcb_rx_conf { 1063 enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs */ 1064 /** Traffic class each UP mapped to. */ 1065 uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES]; 1066 }; 1067 1068 struct rte_eth_vmdq_dcb_tx_conf { 1069 enum rte_eth_nb_pools nb_queue_pools; /**< With DCB, 16 or 32 pools. */ 1070 /** Traffic class each UP mapped to. */ 1071 uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES]; 1072 }; 1073 1074 struct rte_eth_dcb_tx_conf { 1075 enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs. */ 1076 /** Traffic class each UP mapped to. */ 1077 uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES]; 1078 }; 1079 1080 struct rte_eth_vmdq_tx_conf { 1081 enum rte_eth_nb_pools nb_queue_pools; /**< VMDq mode, 64 pools. */ 1082 }; 1083 1084 /** 1085 * A structure used to configure the VMDq+DCB feature 1086 * of an Ethernet port. 1087 * 1088 * Using this feature, packets are routed to a pool of queues, based 1089 * on the VLAN ID in the VLAN tag, and then to a specific queue within 1090 * that pool, using the user priority VLAN tag field. 1091 * 1092 * A default pool may be used, if desired, to route all traffic which 1093 * does not match the VLAN filter rules. 1094 */ 1095 struct rte_eth_vmdq_dcb_conf { 1096 enum rte_eth_nb_pools nb_queue_pools; /**< With DCB, 16 or 32 pools */ 1097 uint8_t enable_default_pool; /**< If non-zero, use a default pool */ 1098 uint8_t default_pool; /**< The default pool, if applicable */ 1099 uint8_t nb_pool_maps; /**< We can have up to 64 filters/mappings */ 1100 struct { 1101 uint16_t vlan_id; /**< The VLAN ID of the received frame */ 1102 uint64_t pools; /**< Bitmask of pools for packet Rx */ 1103 } pool_map[RTE_ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq VLAN pool maps. */ 1104 /** Selects a queue in a pool */ 1105 uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES]; 1106 }; 1107 1108 /** 1109 * A structure used to configure the VMDq feature of an Ethernet port when 1110 * not combined with the DCB feature. 1111 * 1112 * Using this feature, packets are routed to a pool of queues. 
By default, 1113 * the pool selection is based on the MAC address, the VLAN ID in the 1114 * VLAN tag as specified in the pool_map array. 1115 * Passing the RTE_ETH_VMDQ_ACCEPT_UNTAG in the rx_mode field allows pool 1116 * selection using only the MAC address. MAC address to pool mapping is done 1117 * using the rte_eth_dev_mac_addr_add function, with the pool parameter 1118 * corresponding to the pool ID. 1119 * 1120 * Queue selection within the selected pool will be done using RSS when 1121 * it is enabled or revert to the first queue of the pool if not. 1122 * 1123 * A default pool may be used, if desired, to route all traffic which 1124 * does not match the VLAN filter rules or any pool MAC address. 1125 */ 1126 struct rte_eth_vmdq_rx_conf { 1127 enum rte_eth_nb_pools nb_queue_pools; /**< VMDq only mode, 8 or 64 pools */ 1128 uint8_t enable_default_pool; /**< If non-zero, use a default pool */ 1129 uint8_t default_pool; /**< The default pool, if applicable */ 1130 uint8_t enable_loop_back; /**< Enable VT loop back */ 1131 uint8_t nb_pool_maps; /**< We can have up to 64 filters/mappings */ 1132 uint32_t rx_mode; /**< Flags from ETH_VMDQ_ACCEPT_* */ 1133 struct { 1134 uint16_t vlan_id; /**< The VLAN ID of the received frame */ 1135 uint64_t pools; /**< Bitmask of pools for packet Rx */ 1136 } pool_map[RTE_ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq VLAN pool maps. */ 1137 }; 1138 1139 /** 1140 * A structure used to configure the Tx features of an Ethernet port. 1141 */ 1142 struct rte_eth_txmode { 1143 enum rte_eth_tx_mq_mode mq_mode; /**< Tx multi-queues mode. */ 1144 /** 1145 * Per-port Tx offloads to be set using RTE_ETH_TX_OFFLOAD_* flags. 1146 * Only offloads set on tx_offload_capa field on rte_eth_dev_info 1147 * structure are allowed to be set. 1148 */ 1149 uint64_t offloads; 1150 1151 uint16_t pvid; 1152 __extension__ 1153 uint8_t /** If set, reject sending out tagged pkts */ 1154 hw_vlan_reject_tagged : 1, 1155 /** If set, reject sending out untagged pkts */ 1156 hw_vlan_reject_untagged : 1, 1157 /** If set, enable port based VLAN insertion */ 1158 hw_vlan_insert_pvid : 1; 1159 1160 uint64_t reserved_64s[2]; /**< Reserved for future fields */ 1161 void *reserved_ptrs[2]; /**< Reserved for future fields */ 1162 }; 1163 1164 /** 1165 * @warning 1166 * @b EXPERIMENTAL: this structure may change without prior notice. 1167 * 1168 * A structure used to configure an Rx packet segment to split. 1169 * 1170 * If RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT flag is set in offloads field, 1171 * the PMD will split the received packets into multiple segments 1172 * according to the specification in the description array: 1173 * 1174 * - The first network buffer will be allocated from the memory pool, 1175 * specified in the first array element, the second buffer, from the 1176 * pool in the second element, and so on. 1177 * 1178 * - The offsets from the segment description elements specify 1179 * the data offset from the buffer beginning except the first mbuf. 1180 * The first segment offset is added with RTE_PKTMBUF_HEADROOM. 1181 * 1182 * - The lengths in the elements define the maximal data amount 1183 * being received to each segment. The receiving starts with filling 1184 * up the first mbuf data buffer up to specified length. If the 1185 * there are data remaining (packet is longer than buffer in the first 1186 * mbuf) the following data will be pushed to the next segment 1187 * up to its own length, and so on. 
1188 * 1189 * - If the length in the segment description element is zero 1190 * the actual buffer size will be deduced from the appropriate 1191 * memory pool properties. 1192 * 1193 * - If there is not enough elements to describe the buffer for entire 1194 * packet of maximal length the following parameters will be used 1195 * for the all remaining segments: 1196 * - pool from the last valid element 1197 * - the buffer size from this pool 1198 * - zero offset 1199 */ 1200 struct rte_eth_rxseg_split { 1201 struct rte_mempool *mp; /**< Memory pool to allocate segment from. */ 1202 uint16_t length; /**< Segment data length, configures split point. */ 1203 uint16_t offset; /**< Data offset from beginning of mbuf data buffer. */ 1204 uint32_t reserved; /**< Reserved field. */ 1205 }; 1206 1207 /** 1208 * @warning 1209 * @b EXPERIMENTAL: this structure may change without prior notice. 1210 * 1211 * A common structure used to describe Rx packet segment properties. 1212 */ 1213 union rte_eth_rxseg { 1214 /* The settings for buffer split offload. */ 1215 struct rte_eth_rxseg_split split; 1216 /* The other features settings should be added here. */ 1217 }; 1218 1219 /** 1220 * A structure used to configure an Rx ring of an Ethernet port. 1221 */ 1222 struct rte_eth_rxconf { 1223 struct rte_eth_thresh rx_thresh; /**< Rx ring threshold registers. */ 1224 uint16_t rx_free_thresh; /**< Drives the freeing of Rx descriptors. */ 1225 uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */ 1226 uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */ 1227 uint16_t rx_nseg; /**< Number of descriptions in rx_seg array. */ 1228 /** 1229 * Share group index in Rx domain and switch domain. 1230 * Non-zero value to enable Rx queue share, zero value disable share. 1231 * PMD is responsible for Rx queue consistency checks to avoid member 1232 * port's configuration contradict to each other. 1233 */ 1234 uint16_t share_group; 1235 uint16_t share_qid; /**< Shared Rx queue ID in group */ 1236 /** 1237 * Per-queue Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags. 1238 * Only offloads set on rx_queue_offload_capa or rx_offload_capa 1239 * fields on rte_eth_dev_info structure are allowed to be set. 1240 */ 1241 uint64_t offloads; 1242 /** 1243 * Points to the array of segment descriptions for an entire packet. 1244 * Array elements are properties for consecutive Rx segments. 1245 * 1246 * The supported capabilities of receiving segmentation is reported 1247 * in rte_eth_dev_info.rx_seg_capa field. 1248 */ 1249 union rte_eth_rxseg *rx_seg; 1250 1251 uint64_t reserved_64s[2]; /**< Reserved for future fields */ 1252 void *reserved_ptrs[2]; /**< Reserved for future fields */ 1253 }; 1254 1255 /** 1256 * A structure used to configure a Tx ring of an Ethernet port. 1257 */ 1258 struct rte_eth_txconf { 1259 struct rte_eth_thresh tx_thresh; /**< Tx ring threshold registers. */ 1260 uint16_t tx_rs_thresh; /**< Drives the setting of RS bit on TXDs. */ 1261 uint16_t tx_free_thresh; /**< Start freeing Tx buffers if there are 1262 less free descriptors than this value. */ 1263 1264 uint8_t tx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */ 1265 /** 1266 * Per-queue Tx offloads to be set using RTE_ETH_TX_OFFLOAD_* flags. 1267 * Only offloads set on tx_queue_offload_capa or tx_offload_capa 1268 * fields on rte_eth_dev_info structure are allowed to be set. 
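	 *
	 * As an illustrative sketch (the offload flag used here is only an
	 * example and must actually be advertised by the device), a per-queue
	 * Tx offload would typically be validated against the capabilities
	 * reported by rte_eth_dev_info_get() before being requested:
	 *
	 * @code{.c}
	 * struct rte_eth_dev_info dev_info;
	 * struct rte_eth_txconf txconf = {0};
	 * uint16_t port_id = 0;  // hypothetical port identifier
	 *
	 * rte_eth_dev_info_get(port_id, &dev_info);
	 * if (dev_info.tx_queue_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
	 *     txconf.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;
	 * // txconf is then passed to rte_eth_tx_queue_setup()
	 * @endcode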
1269 */ 1270 uint64_t offloads; 1271 1272 uint64_t reserved_64s[2]; /**< Reserved for future fields */ 1273 void *reserved_ptrs[2]; /**< Reserved for future fields */ 1274 }; 1275 1276 /** 1277 * @warning 1278 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice 1279 * 1280 * A structure used to return the hairpin capabilities that are supported. 1281 */ 1282 struct rte_eth_hairpin_cap { 1283 /** The max number of hairpin queues (different bindings). */ 1284 uint16_t max_nb_queues; 1285 /** Max number of Rx queues to be connected to one Tx queue. */ 1286 uint16_t max_rx_2_tx; 1287 /** Max number of Tx queues to be connected to one Rx queue. */ 1288 uint16_t max_tx_2_rx; 1289 uint16_t max_nb_desc; /**< The max num of descriptors. */ 1290 }; 1291 1292 #define RTE_ETH_MAX_HAIRPIN_PEERS 32 1293 1294 /** 1295 * @warning 1296 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice 1297 * 1298 * A structure used to hold hairpin peer data. 1299 */ 1300 struct rte_eth_hairpin_peer { 1301 uint16_t port; /**< Peer port. */ 1302 uint16_t queue; /**< Peer queue. */ 1303 }; 1304 1305 /** 1306 * @warning 1307 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice 1308 * 1309 * A structure used to configure hairpin binding. 1310 */ 1311 struct rte_eth_hairpin_conf { 1312 uint32_t peer_count:16; /**< The number of peers. */ 1313 1314 /** 1315 * Explicit Tx flow rule mode. 1316 * One hairpin pair of queues should have the same attribute. 1317 * 1318 * - When set, the user should be responsible for inserting the hairpin 1319 * Tx part flows and removing them. 1320 * - When clear, the PMD will try to handle the Tx part of the flows, 1321 * e.g., by splitting one flow into two parts. 1322 */ 1323 uint32_t tx_explicit:1; 1324 1325 /** 1326 * Manually bind hairpin queues. 1327 * One hairpin pair of queues should have the same attribute. 1328 * 1329 * - When set, to enable hairpin, the user should call the hairpin bind 1330 * function after all the queues are set up properly and the ports are 1331 * started. Also, the hairpin unbind function should be called 1332 * accordingly before stopping a port that with hairpin configured. 1333 * - When clear, the PMD will try to enable the hairpin with the queues 1334 * configured automatically during port start. 1335 */ 1336 uint32_t manual_bind:1; 1337 uint32_t reserved:14; /**< Reserved bits. */ 1338 struct rte_eth_hairpin_peer peers[RTE_ETH_MAX_HAIRPIN_PEERS]; 1339 }; 1340 1341 /** 1342 * A structure contains information about HW descriptor ring limitations. 1343 */ 1344 struct rte_eth_desc_lim { 1345 uint16_t nb_max; /**< Max allowed number of descriptors. */ 1346 uint16_t nb_min; /**< Min allowed number of descriptors. */ 1347 uint16_t nb_align; /**< Number of descriptors should be aligned to. */ 1348 1349 /** 1350 * Max allowed number of segments per whole packet. 1351 * 1352 * - For TSO packet this is the total number of data descriptors allowed 1353 * by device. 1354 * 1355 * @see nb_mtu_seg_max 1356 */ 1357 uint16_t nb_seg_max; 1358 1359 /** 1360 * Max number of segments per one MTU. 1361 * 1362 * - For non-TSO packet, this is the maximum allowed number of segments 1363 * in a single transmit packet. 1364 * 1365 * - For TSO packet each segment within the TSO may span up to this 1366 * value. 
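 *
 * As an illustration only (not a guarantee made by this API; port_id and
 * the mbuf pointer m are application-provided), an application may check a
 * non-TSO mbuf chain against this limit, reported through
 * rte_eth_dev_info_get() declared later in this file, before transmission:
 *
 * @code
 * struct rte_eth_dev_info dev_info;
 *
 * if (rte_eth_dev_info_get(port_id, &dev_info) == 0 &&
 *     dev_info.tx_desc_lim.nb_mtu_seg_max != 0 &&
 *     m->nb_segs > dev_info.tx_desc_lim.nb_mtu_seg_max) {
 *         // Too many segments for this device: drop (or linearize) the packet.
 *         rte_pktmbuf_free(m);
 * }
 * @endcode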
1367 *
1368 * @see nb_seg_max
1369 */
1370 uint16_t nb_mtu_seg_max;
1371 };
1372
1373 /**
1374 * This enum indicates the flow control mode
1375 */
1376 enum rte_eth_fc_mode {
1377 RTE_ETH_FC_NONE = 0, /**< Disable flow control. */
1378 RTE_ETH_FC_RX_PAUSE, /**< Rx pause frame, enable flowctrl on Tx side. */
1379 RTE_ETH_FC_TX_PAUSE, /**< Tx pause frame, enable flowctrl on Rx side. */
1380 RTE_ETH_FC_FULL /**< Enable flow control on both sides. */
1381 };
1382 #define RTE_FC_NONE RTE_DEPRECATED(RTE_FC_NONE) RTE_ETH_FC_NONE
1383 #define RTE_FC_RX_PAUSE RTE_DEPRECATED(RTE_FC_RX_PAUSE) RTE_ETH_FC_RX_PAUSE
1384 #define RTE_FC_TX_PAUSE RTE_DEPRECATED(RTE_FC_TX_PAUSE) RTE_ETH_FC_TX_PAUSE
1385 #define RTE_FC_FULL RTE_DEPRECATED(RTE_FC_FULL) RTE_ETH_FC_FULL
1386
1387 /**
1388 * A structure used to configure Ethernet flow control parameters.
1389 * These parameters will be configured into the registers of the NIC.
1390 * Please refer to the corresponding data sheet for proper values.
1391 */
1392 struct rte_eth_fc_conf {
1393 uint32_t high_water; /**< High threshold value to trigger XOFF */
1394 uint32_t low_water; /**< Low threshold value to trigger XON */
1395 uint16_t pause_time; /**< Pause quota in the Pause frame */
1396 uint16_t send_xon; /**< Whether an XON frame needs to be sent */
1397 enum rte_eth_fc_mode mode; /**< Link flow control mode */
1398 uint8_t mac_ctrl_frame_fwd; /**< Forward MAC control frames */
1399 uint8_t autoneg; /**< Use Pause autoneg */
1400 };
1401
1402 /**
1403 * A structure used to configure Ethernet priority flow control parameters.
1404 * These parameters will be configured into the registers of the NIC.
1405 * Please refer to the corresponding data sheet for proper values.
1406 */
1407 struct rte_eth_pfc_conf {
1408 struct rte_eth_fc_conf fc; /**< General flow control parameter. */
1409 uint8_t priority; /**< VLAN User Priority. */
1410 };
1411
1412 /**
1413 * @warning
1414 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
1415 *
1416 * A structure used to retrieve information about queue-based PFC.
1417 */
1418 struct rte_eth_pfc_queue_info {
1419 /**
1420 * Maximum supported traffic class as per PFC (802.1Qbb) specification.
1421 */
1422 uint8_t tc_max;
1423 /** PFC queue mode capabilities. */
1424 enum rte_eth_fc_mode mode_capa;
1425 };
1426
1427 /**
1428 * @warning
1429 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
1430 *
1431 * A structure used to configure Ethernet priority flow control parameters for
1432 * ethdev queues.
1433 *
1434 * rte_eth_pfc_queue_conf::rx_pause structure shall be used to configure a given
1435 * tx_qid with the corresponding tc. When the ethdev device receives a PFC frame
1436 * with rte_eth_pfc_queue_conf::rx_pause::tc, traffic will be paused on
1437 * rte_eth_pfc_queue_conf::rx_pause::tx_qid for that tc.
1438 *
1439 * rte_eth_pfc_queue_conf::tx_pause structure shall be used to configure a given
1440 * rx_qid. When rx_qid is congested, PFC frames are generated with
1441 * rte_eth_pfc_queue_conf::tx_pause::tc and
1442 * rte_eth_pfc_queue_conf::tx_pause::pause_time to the peer.
1443 */
1444 struct rte_eth_pfc_queue_conf {
1445 enum rte_eth_fc_mode mode; /**< Link flow control mode */
1446
1447 struct {
1448 uint16_t tx_qid; /**< Tx queue ID */
1449 /** Traffic class as per PFC (802.1Qbb) spec.
The value must be
1450 * in the range [0, rte_eth_pfc_queue_info::tc_max - 1]
1451 */
1452 uint8_t tc;
1453 } rx_pause; /* Valid when (mode == FC_RX_PAUSE || mode == FC_FULL) */
1454
1455 struct {
1456 uint16_t pause_time; /**< Pause quota in the Pause frame */
1457 uint16_t rx_qid; /**< Rx queue ID */
1458 /** Traffic class as per PFC (802.1Qbb) spec. The value must be
1459 * in the range [0, rte_eth_pfc_queue_info::tc_max - 1]
1460 */
1461 uint8_t tc;
1462 } tx_pause; /* Valid when (mode == FC_TX_PAUSE || mode == FC_FULL) */
1463 };
1464
1465 /**
1466 * Tunnel type for device-specific classifier configuration.
1467 * @see rte_eth_udp_tunnel
1468 */
1469 enum rte_eth_tunnel_type {
1470 RTE_ETH_TUNNEL_TYPE_NONE = 0,
1471 RTE_ETH_TUNNEL_TYPE_VXLAN,
1472 RTE_ETH_TUNNEL_TYPE_GENEVE,
1473 RTE_ETH_TUNNEL_TYPE_TEREDO,
1474 RTE_ETH_TUNNEL_TYPE_NVGRE,
1475 RTE_ETH_TUNNEL_TYPE_IP_IN_GRE,
1476 RTE_ETH_L2_TUNNEL_TYPE_E_TAG,
1477 RTE_ETH_TUNNEL_TYPE_VXLAN_GPE,
1478 RTE_ETH_TUNNEL_TYPE_ECPRI,
1479 RTE_ETH_TUNNEL_TYPE_MAX,
1480 };
1481 #define RTE_TUNNEL_TYPE_NONE RTE_DEPRECATED(RTE_TUNNEL_TYPE_NONE) RTE_ETH_TUNNEL_TYPE_NONE
1482 #define RTE_TUNNEL_TYPE_VXLAN RTE_DEPRECATED(RTE_TUNNEL_TYPE_VXLAN) RTE_ETH_TUNNEL_TYPE_VXLAN
1483 #define RTE_TUNNEL_TYPE_GENEVE RTE_DEPRECATED(RTE_TUNNEL_TYPE_GENEVE) RTE_ETH_TUNNEL_TYPE_GENEVE
1484 #define RTE_TUNNEL_TYPE_TEREDO RTE_DEPRECATED(RTE_TUNNEL_TYPE_TEREDO) RTE_ETH_TUNNEL_TYPE_TEREDO
1485 #define RTE_TUNNEL_TYPE_NVGRE RTE_DEPRECATED(RTE_TUNNEL_TYPE_NVGRE) RTE_ETH_TUNNEL_TYPE_NVGRE
1486 #define RTE_TUNNEL_TYPE_IP_IN_GRE RTE_DEPRECATED(RTE_TUNNEL_TYPE_IP_IN_GRE) RTE_ETH_TUNNEL_TYPE_IP_IN_GRE
1487 #define RTE_L2_TUNNEL_TYPE_E_TAG RTE_DEPRECATED(RTE_L2_TUNNEL_TYPE_E_TAG) RTE_ETH_L2_TUNNEL_TYPE_E_TAG
1488 #define RTE_TUNNEL_TYPE_VXLAN_GPE RTE_DEPRECATED(RTE_TUNNEL_TYPE_VXLAN_GPE) RTE_ETH_TUNNEL_TYPE_VXLAN_GPE
1489 #define RTE_TUNNEL_TYPE_ECPRI RTE_DEPRECATED(RTE_TUNNEL_TYPE_ECPRI) RTE_ETH_TUNNEL_TYPE_ECPRI
1490 #define RTE_TUNNEL_TYPE_MAX RTE_DEPRECATED(RTE_TUNNEL_TYPE_MAX) RTE_ETH_TUNNEL_TYPE_MAX
1491
1492 /* Deprecated API file for rte_eth_dev_filter_* functions */
1493 #include "rte_eth_ctrl.h"
1494
1495 /**
1496 * Memory space that can be configured to store Flow Director filters
1497 * in the board memory.
1498 */
1499 enum rte_eth_fdir_pballoc_type {
1500 RTE_ETH_FDIR_PBALLOC_64K = 0, /**< 64k. */
1501 RTE_ETH_FDIR_PBALLOC_128K, /**< 128k. */
1502 RTE_ETH_FDIR_PBALLOC_256K, /**< 256k. */
1503 };
1504 #define rte_fdir_pballoc_type rte_eth_fdir_pballoc_type
1505
1506 #define RTE_FDIR_PBALLOC_64K RTE_DEPRECATED(RTE_FDIR_PBALLOC_64K) RTE_ETH_FDIR_PBALLOC_64K
1507 #define RTE_FDIR_PBALLOC_128K RTE_DEPRECATED(RTE_FDIR_PBALLOC_128K) RTE_ETH_FDIR_PBALLOC_128K
1508 #define RTE_FDIR_PBALLOC_256K RTE_DEPRECATED(RTE_FDIR_PBALLOC_256K) RTE_ETH_FDIR_PBALLOC_256K
1509
1510 /**
1511 * Select report mode of FDIR hash information in Rx descriptors.
1512 */
1513 enum rte_fdir_status_mode {
1514 RTE_FDIR_NO_REPORT_STATUS = 0, /**< Never report FDIR hash. */
1515 RTE_FDIR_REPORT_STATUS, /**< Only report FDIR hash for matching pkts. */
1516 RTE_FDIR_REPORT_STATUS_ALWAYS, /**< Always report FDIR hash. */
1517 };
1518
1519 /**
1520 * A structure used to configure the Flow Director (FDIR) feature
1521 * of an Ethernet port.
1522 *
1523 * If mode is RTE_FDIR_MODE_NONE, the pballoc value is ignored.
1524 */
1525 struct rte_eth_fdir_conf {
1526 enum rte_fdir_mode mode; /**< Flow Director mode. */
1527 enum rte_eth_fdir_pballoc_type pballoc; /**< Space for FDIR filters.
*/ 1528 enum rte_fdir_status_mode status; /**< How to report FDIR hash. */ 1529 /** Rx queue of packets matching a "drop" filter in perfect mode. */ 1530 uint8_t drop_queue; 1531 struct rte_eth_fdir_masks mask; 1532 /** Flex payload configuration. */ 1533 struct rte_eth_fdir_flex_conf flex_conf; 1534 }; 1535 #define rte_fdir_conf rte_eth_fdir_conf 1536 1537 /** 1538 * UDP tunneling configuration. 1539 * 1540 * Used to configure the classifier of a device, 1541 * associating an UDP port with a type of tunnel. 1542 * 1543 * Some NICs may need such configuration to properly parse a tunnel 1544 * with any standard or custom UDP port. 1545 */ 1546 struct rte_eth_udp_tunnel { 1547 uint16_t udp_port; /**< UDP port used for the tunnel. */ 1548 uint8_t prot_type; /**< Tunnel type. @see rte_eth_tunnel_type */ 1549 }; 1550 1551 /** 1552 * A structure used to enable/disable specific device interrupts. 1553 */ 1554 struct rte_eth_intr_conf { 1555 /** enable/disable lsc interrupt. 0 (default) - disable, 1 enable */ 1556 uint32_t lsc:1; 1557 /** enable/disable rxq interrupt. 0 (default) - disable, 1 enable */ 1558 uint32_t rxq:1; 1559 /** enable/disable rmv interrupt. 0 (default) - disable, 1 enable */ 1560 uint32_t rmv:1; 1561 }; 1562 1563 #define rte_intr_conf rte_eth_intr_conf 1564 1565 /** 1566 * A structure used to configure an Ethernet port. 1567 * Depending upon the Rx multi-queue mode, extra advanced 1568 * configuration settings may be needed. 1569 */ 1570 struct rte_eth_conf { 1571 uint32_t link_speeds; /**< bitmap of RTE_ETH_LINK_SPEED_XXX of speeds to be 1572 used. RTE_ETH_LINK_SPEED_FIXED disables link 1573 autonegotiation, and a unique speed shall be 1574 set. Otherwise, the bitmap defines the set of 1575 speeds to be advertised. If the special value 1576 RTE_ETH_LINK_SPEED_AUTONEG (0) is used, all speeds 1577 supported are advertised. */ 1578 struct rte_eth_rxmode rxmode; /**< Port Rx configuration. */ 1579 struct rte_eth_txmode txmode; /**< Port Tx configuration. */ 1580 uint32_t lpbk_mode; /**< Loopback operation mode. By default the value 1581 is 0, meaning the loopback mode is disabled. 1582 Read the datasheet of given Ethernet controller 1583 for details. The possible values of this field 1584 are defined in implementation of each driver. */ 1585 struct { 1586 struct rte_eth_rss_conf rss_conf; /**< Port RSS configuration */ 1587 /** Port VMDq+DCB configuration. */ 1588 struct rte_eth_vmdq_dcb_conf vmdq_dcb_conf; 1589 /** Port DCB Rx configuration. */ 1590 struct rte_eth_dcb_rx_conf dcb_rx_conf; 1591 /** Port VMDq Rx configuration. */ 1592 struct rte_eth_vmdq_rx_conf vmdq_rx_conf; 1593 } rx_adv_conf; /**< Port Rx filtering configuration. */ 1594 union { 1595 /** Port VMDq+DCB Tx configuration. */ 1596 struct rte_eth_vmdq_dcb_tx_conf vmdq_dcb_tx_conf; 1597 /** Port DCB Tx configuration. */ 1598 struct rte_eth_dcb_tx_conf dcb_tx_conf; 1599 /** Port VMDq Tx configuration. */ 1600 struct rte_eth_vmdq_tx_conf vmdq_tx_conf; 1601 } tx_adv_conf; /**< Port Tx DCB configuration (union). */ 1602 /** Currently,Priority Flow Control(PFC) are supported,if DCB with PFC 1603 is needed,and the variable must be set RTE_ETH_DCB_PFC_SUPPORT. */ 1604 uint32_t dcb_capability_en; 1605 struct rte_eth_fdir_conf fdir_conf; /**< FDIR configuration. DEPRECATED */ 1606 struct rte_eth_intr_conf intr_conf; /**< Interrupt mode configuration. */ 1607 }; 1608 1609 /** 1610 * Rx offload capabilities of a device. 
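 *
 * A minimal sketch (for illustration only; port_id, nb_rxq and nb_txq are
 * application-provided) of clamping the requested Rx offloads to what the
 * device reports before configuring it:
 *
 * @code
 * struct rte_eth_dev_info dev_info;
 * struct rte_eth_conf port_conf = {0};
 * uint64_t wanted = RTE_ETH_RX_OFFLOAD_CHECKSUM | RTE_ETH_RX_OFFLOAD_VLAN_STRIP;
 * int ret;
 *
 * ret = rte_eth_dev_info_get(port_id, &dev_info);
 * if (ret != 0)
 *         return ret;
 * // Keep only the offloads the device can actually provide.
 * port_conf.rxmode.offloads = wanted & dev_info.rx_offload_capa;
 * ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
 * @endcode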
1611 */ 1612 #define RTE_ETH_RX_OFFLOAD_VLAN_STRIP RTE_BIT64(0) 1613 #define RTE_ETH_RX_OFFLOAD_IPV4_CKSUM RTE_BIT64(1) 1614 #define RTE_ETH_RX_OFFLOAD_UDP_CKSUM RTE_BIT64(2) 1615 #define RTE_ETH_RX_OFFLOAD_TCP_CKSUM RTE_BIT64(3) 1616 #define RTE_ETH_RX_OFFLOAD_TCP_LRO RTE_BIT64(4) 1617 #define RTE_ETH_RX_OFFLOAD_QINQ_STRIP RTE_BIT64(5) 1618 #define RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM RTE_BIT64(6) 1619 #define RTE_ETH_RX_OFFLOAD_MACSEC_STRIP RTE_BIT64(7) 1620 #define RTE_ETH_RX_OFFLOAD_HEADER_SPLIT RTE_BIT64(8) 1621 #define RTE_ETH_RX_OFFLOAD_VLAN_FILTER RTE_BIT64(9) 1622 #define RTE_ETH_RX_OFFLOAD_VLAN_EXTEND RTE_BIT64(10) 1623 #define RTE_ETH_RX_OFFLOAD_SCATTER RTE_BIT64(13) 1624 /** 1625 * Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME 1626 * and RTE_MBUF_DYNFLAG_RX_TIMESTAMP_NAME is set in ol_flags. 1627 * The mbuf field and flag are registered when the offload is configured. 1628 */ 1629 #define RTE_ETH_RX_OFFLOAD_TIMESTAMP RTE_BIT64(14) 1630 #define RTE_ETH_RX_OFFLOAD_SECURITY RTE_BIT64(15) 1631 #define RTE_ETH_RX_OFFLOAD_KEEP_CRC RTE_BIT64(16) 1632 #define RTE_ETH_RX_OFFLOAD_SCTP_CKSUM RTE_BIT64(17) 1633 #define RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM RTE_BIT64(18) 1634 #define RTE_ETH_RX_OFFLOAD_RSS_HASH RTE_BIT64(19) 1635 #define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT RTE_BIT64(20) 1636 1637 #define DEV_RX_OFFLOAD_VLAN_STRIP RTE_DEPRECATED(DEV_RX_OFFLOAD_VLAN_STRIP) RTE_ETH_RX_OFFLOAD_VLAN_STRIP 1638 #define DEV_RX_OFFLOAD_IPV4_CKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_IPV4_CKSUM) RTE_ETH_RX_OFFLOAD_IPV4_CKSUM 1639 #define DEV_RX_OFFLOAD_UDP_CKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_UDP_CKSUM) RTE_ETH_RX_OFFLOAD_UDP_CKSUM 1640 #define DEV_RX_OFFLOAD_TCP_CKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_TCP_CKSUM) RTE_ETH_RX_OFFLOAD_TCP_CKSUM 1641 #define DEV_RX_OFFLOAD_TCP_LRO RTE_DEPRECATED(DEV_RX_OFFLOAD_TCP_LRO) RTE_ETH_RX_OFFLOAD_TCP_LRO 1642 #define DEV_RX_OFFLOAD_QINQ_STRIP RTE_DEPRECATED(DEV_RX_OFFLOAD_QINQ_STRIP) RTE_ETH_RX_OFFLOAD_QINQ_STRIP 1643 #define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM 1644 #define DEV_RX_OFFLOAD_MACSEC_STRIP RTE_DEPRECATED(DEV_RX_OFFLOAD_MACSEC_STRIP) RTE_ETH_RX_OFFLOAD_MACSEC_STRIP 1645 #define DEV_RX_OFFLOAD_HEADER_SPLIT RTE_DEPRECATED(DEV_RX_OFFLOAD_HEADER_SPLIT) RTE_ETH_RX_OFFLOAD_HEADER_SPLIT 1646 #define DEV_RX_OFFLOAD_VLAN_FILTER RTE_DEPRECATED(DEV_RX_OFFLOAD_VLAN_FILTER) RTE_ETH_RX_OFFLOAD_VLAN_FILTER 1647 #define DEV_RX_OFFLOAD_VLAN_EXTEND RTE_DEPRECATED(DEV_RX_OFFLOAD_VLAN_EXTEND) RTE_ETH_RX_OFFLOAD_VLAN_EXTEND 1648 #define DEV_RX_OFFLOAD_SCATTER RTE_DEPRECATED(DEV_RX_OFFLOAD_SCATTER) RTE_ETH_RX_OFFLOAD_SCATTER 1649 #define DEV_RX_OFFLOAD_TIMESTAMP RTE_DEPRECATED(DEV_RX_OFFLOAD_TIMESTAMP) RTE_ETH_RX_OFFLOAD_TIMESTAMP 1650 #define DEV_RX_OFFLOAD_SECURITY RTE_DEPRECATED(DEV_RX_OFFLOAD_SECURITY) RTE_ETH_RX_OFFLOAD_SECURITY 1651 #define DEV_RX_OFFLOAD_KEEP_CRC RTE_DEPRECATED(DEV_RX_OFFLOAD_KEEP_CRC) RTE_ETH_RX_OFFLOAD_KEEP_CRC 1652 #define DEV_RX_OFFLOAD_SCTP_CKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_SCTP_CKSUM) RTE_ETH_RX_OFFLOAD_SCTP_CKSUM 1653 #define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_OUTER_UDP_CKSUM) RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM 1654 #define DEV_RX_OFFLOAD_RSS_HASH RTE_DEPRECATED(DEV_RX_OFFLOAD_RSS_HASH) RTE_ETH_RX_OFFLOAD_RSS_HASH 1655 1656 #define RTE_ETH_RX_OFFLOAD_CHECKSUM (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \ 1657 RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \ 1658 RTE_ETH_RX_OFFLOAD_TCP_CKSUM) 1659 #define DEV_RX_OFFLOAD_CHECKSUM 
RTE_DEPRECATED(DEV_RX_OFFLOAD_CHECKSUM) RTE_ETH_RX_OFFLOAD_CHECKSUM 1660 #define RTE_ETH_RX_OFFLOAD_VLAN (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \ 1661 RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \ 1662 RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \ 1663 RTE_ETH_RX_OFFLOAD_QINQ_STRIP) 1664 #define DEV_RX_OFFLOAD_VLAN RTE_DEPRECATED(DEV_RX_OFFLOAD_VLAN) RTE_ETH_RX_OFFLOAD_VLAN 1665 1666 /* 1667 * If new Rx offload capabilities are defined, they also must be 1668 * mentioned in rte_rx_offload_names in rte_ethdev.c file. 1669 */ 1670 1671 /** 1672 * Tx offload capabilities of a device. 1673 */ 1674 #define RTE_ETH_TX_OFFLOAD_VLAN_INSERT RTE_BIT64(0) 1675 #define RTE_ETH_TX_OFFLOAD_IPV4_CKSUM RTE_BIT64(1) 1676 #define RTE_ETH_TX_OFFLOAD_UDP_CKSUM RTE_BIT64(2) 1677 #define RTE_ETH_TX_OFFLOAD_TCP_CKSUM RTE_BIT64(3) 1678 #define RTE_ETH_TX_OFFLOAD_SCTP_CKSUM RTE_BIT64(4) 1679 #define RTE_ETH_TX_OFFLOAD_TCP_TSO RTE_BIT64(5) 1680 #define RTE_ETH_TX_OFFLOAD_UDP_TSO RTE_BIT64(6) 1681 #define RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM RTE_BIT64(7) /**< Used for tunneling packet. */ 1682 #define RTE_ETH_TX_OFFLOAD_QINQ_INSERT RTE_BIT64(8) 1683 #define RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO RTE_BIT64(9) /**< Used for tunneling packet. */ 1684 #define RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO RTE_BIT64(10) /**< Used for tunneling packet. */ 1685 #define RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO RTE_BIT64(11) /**< Used for tunneling packet. */ 1686 #define RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO RTE_BIT64(12) /**< Used for tunneling packet. */ 1687 #define RTE_ETH_TX_OFFLOAD_MACSEC_INSERT RTE_BIT64(13) 1688 /** 1689 * Multiple threads can invoke rte_eth_tx_burst() concurrently on the same 1690 * Tx queue without SW lock. 1691 */ 1692 #define RTE_ETH_TX_OFFLOAD_MT_LOCKFREE RTE_BIT64(14) 1693 /** Device supports multi segment send. */ 1694 #define RTE_ETH_TX_OFFLOAD_MULTI_SEGS RTE_BIT64(15) 1695 /** 1696 * Device supports optimization for fast release of mbufs. 1697 * When set application must guarantee that per-queue all mbufs comes from 1698 * the same mempool and has refcnt = 1. 1699 */ 1700 #define RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE RTE_BIT64(16) 1701 #define RTE_ETH_TX_OFFLOAD_SECURITY RTE_BIT64(17) 1702 /** 1703 * Device supports generic UDP tunneled packet TSO. 1704 * Application must set RTE_MBUF_F_TX_TUNNEL_UDP and other mbuf fields required 1705 * for tunnel TSO. 1706 */ 1707 #define RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO RTE_BIT64(18) 1708 /** 1709 * Device supports generic IP tunneled packet TSO. 1710 * Application must set RTE_MBUF_F_TX_TUNNEL_IP and other mbuf fields required 1711 * for tunnel TSO. 1712 */ 1713 #define RTE_ETH_TX_OFFLOAD_IP_TNL_TSO RTE_BIT64(19) 1714 /** Device supports outer UDP checksum */ 1715 #define RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM RTE_BIT64(20) 1716 /** 1717 * Device sends on time read from RTE_MBUF_DYNFIELD_TIMESTAMP_NAME 1718 * if RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME is set in ol_flags. 1719 * The mbuf field and flag are registered when the offload is configured. 1720 */ 1721 #define RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP RTE_BIT64(21) 1722 /* 1723 * If new Tx offload capabilities are defined, they also must be 1724 * mentioned in rte_tx_offload_names in rte_ethdev.c file. 
1725 */ 1726 1727 #define DEV_TX_OFFLOAD_VLAN_INSERT RTE_DEPRECATED(DEV_TX_OFFLOAD_VLAN_INSERT) RTE_ETH_TX_OFFLOAD_VLAN_INSERT 1728 #define DEV_TX_OFFLOAD_IPV4_CKSUM RTE_DEPRECATED(DEV_TX_OFFLOAD_IPV4_CKSUM) RTE_ETH_TX_OFFLOAD_IPV4_CKSUM 1729 #define DEV_TX_OFFLOAD_UDP_CKSUM RTE_DEPRECATED(DEV_TX_OFFLOAD_UDP_CKSUM) RTE_ETH_TX_OFFLOAD_UDP_CKSUM 1730 #define DEV_TX_OFFLOAD_TCP_CKSUM RTE_DEPRECATED(DEV_TX_OFFLOAD_TCP_CKSUM) RTE_ETH_TX_OFFLOAD_TCP_CKSUM 1731 #define DEV_TX_OFFLOAD_SCTP_CKSUM RTE_DEPRECATED(DEV_TX_OFFLOAD_SCTP_CKSUM) RTE_ETH_TX_OFFLOAD_SCTP_CKSUM 1732 #define DEV_TX_OFFLOAD_TCP_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_TCP_TSO) RTE_ETH_TX_OFFLOAD_TCP_TSO 1733 #define DEV_TX_OFFLOAD_UDP_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_UDP_TSO) RTE_ETH_TX_OFFLOAD_UDP_TSO 1734 #define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM RTE_DEPRECATED(DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM 1735 #define DEV_TX_OFFLOAD_QINQ_INSERT RTE_DEPRECATED(DEV_TX_OFFLOAD_QINQ_INSERT) RTE_ETH_TX_OFFLOAD_QINQ_INSERT 1736 #define DEV_TX_OFFLOAD_VXLAN_TNL_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_VXLAN_TNL_TSO) RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO 1737 #define DEV_TX_OFFLOAD_GRE_TNL_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_GRE_TNL_TSO) RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO 1738 #define DEV_TX_OFFLOAD_IPIP_TNL_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_IPIP_TNL_TSO) RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO 1739 #define DEV_TX_OFFLOAD_GENEVE_TNL_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_GENEVE_TNL_TSO) RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO 1740 #define DEV_TX_OFFLOAD_MACSEC_INSERT RTE_DEPRECATED(DEV_TX_OFFLOAD_MACSEC_INSERT) RTE_ETH_TX_OFFLOAD_MACSEC_INSERT 1741 #define DEV_TX_OFFLOAD_MT_LOCKFREE RTE_DEPRECATED(DEV_TX_OFFLOAD_MT_LOCKFREE) RTE_ETH_TX_OFFLOAD_MT_LOCKFREE 1742 #define DEV_TX_OFFLOAD_MULTI_SEGS RTE_DEPRECATED(DEV_TX_OFFLOAD_MULTI_SEGS) RTE_ETH_TX_OFFLOAD_MULTI_SEGS 1743 #define DEV_TX_OFFLOAD_MBUF_FAST_FREE RTE_DEPRECATED(DEV_TX_OFFLOAD_MBUF_FAST_FREE) RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE 1744 #define DEV_TX_OFFLOAD_SECURITY RTE_DEPRECATED(DEV_TX_OFFLOAD_SECURITY) RTE_ETH_TX_OFFLOAD_SECURITY 1745 #define DEV_TX_OFFLOAD_UDP_TNL_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_UDP_TNL_TSO) RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO 1746 #define DEV_TX_OFFLOAD_IP_TNL_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_IP_TNL_TSO) RTE_ETH_TX_OFFLOAD_IP_TNL_TSO 1747 #define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM RTE_DEPRECATED(DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM 1748 #define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP RTE_DEPRECATED(DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP) RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP 1749 1750 /**@{@name Device capabilities 1751 * Non-offload capabilities reported in rte_eth_dev_info.dev_capa. 1752 */ 1753 /** Device supports Rx queue setup after device started. */ 1754 #define RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP RTE_BIT64(0) 1755 /** Device supports Tx queue setup after device started. */ 1756 #define RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP RTE_BIT64(1) 1757 /** 1758 * Device supports shared Rx queue among ports within Rx domain and 1759 * switch domain. Mbufs are consumed by shared Rx queue instead of 1760 * each queue. Multiple groups are supported by share_group of Rx 1761 * queue configuration. Shared Rx queue is identified by PMD using 1762 * share_qid of Rx queue configuration. Polling any port in the group 1763 * receive packets of all member ports, source port identified by 1764 * mbuf->port field. 1765 */ 1766 #define RTE_ETH_DEV_CAPA_RXQ_SHARE RTE_BIT64(2) 1767 /** Device supports keeping flow rules across restart. 
*/ 1768 #define RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP RTE_BIT64(3) 1769 /** Device supports keeping shared flow objects across restart. */ 1770 #define RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP RTE_BIT64(4) 1771 /**@}*/ 1772 1773 /* 1774 * Fallback default preferred Rx/Tx port parameters. 1775 * These are used if an application requests default parameters 1776 * but the PMD does not provide preferred values. 1777 */ 1778 #define RTE_ETH_DEV_FALLBACK_RX_RINGSIZE 512 1779 #define RTE_ETH_DEV_FALLBACK_TX_RINGSIZE 512 1780 #define RTE_ETH_DEV_FALLBACK_RX_NBQUEUES 1 1781 #define RTE_ETH_DEV_FALLBACK_TX_NBQUEUES 1 1782 1783 /** 1784 * Preferred Rx/Tx port parameters. 1785 * There are separate instances of this structure for transmission 1786 * and reception respectively. 1787 */ 1788 struct rte_eth_dev_portconf { 1789 uint16_t burst_size; /**< Device-preferred burst size */ 1790 uint16_t ring_size; /**< Device-preferred size of queue rings */ 1791 uint16_t nb_queues; /**< Device-preferred number of queues */ 1792 }; 1793 1794 /** 1795 * Default values for switch domain ID when ethdev does not support switch 1796 * domain definitions. 1797 */ 1798 #define RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID (UINT16_MAX) 1799 1800 /** 1801 * Ethernet device associated switch information 1802 */ 1803 struct rte_eth_switch_info { 1804 const char *name; /**< switch name */ 1805 uint16_t domain_id; /**< switch domain ID */ 1806 /** 1807 * Mapping to the devices physical switch port as enumerated from the 1808 * perspective of the embedded interconnect/switch. For SR-IOV enabled 1809 * device this may correspond to the VF_ID of each virtual function, 1810 * but each driver should explicitly define the mapping of switch 1811 * port identifier to that physical interconnect/switch 1812 */ 1813 uint16_t port_id; 1814 /** 1815 * Shared Rx queue sub-domain boundary. Only ports in same Rx domain 1816 * and switch domain can share Rx queue. Valid only if device advertised 1817 * RTE_ETH_DEV_CAPA_RXQ_SHARE capability. 1818 */ 1819 uint16_t rx_domain; 1820 }; 1821 1822 /** 1823 * @warning 1824 * @b EXPERIMENTAL: this structure may change without prior notice. 1825 * 1826 * Ethernet device Rx buffer segmentation capabilities. 1827 */ 1828 struct rte_eth_rxseg_capa { 1829 __extension__ 1830 uint32_t multi_pools:1; /**< Supports receiving to multiple pools.*/ 1831 uint32_t offset_allowed:1; /**< Supports buffer offsets. */ 1832 uint32_t offset_align_log2:4; /**< Required offset alignment. */ 1833 uint16_t max_nseg; /**< Maximum amount of segments to split. */ 1834 uint16_t reserved; /**< Reserved field. */ 1835 }; 1836 1837 /** 1838 * Ethernet device information 1839 */ 1840 1841 /** 1842 * Ethernet device representor port type. 1843 */ 1844 enum rte_eth_representor_type { 1845 RTE_ETH_REPRESENTOR_NONE, /**< not a representor. */ 1846 RTE_ETH_REPRESENTOR_VF, /**< representor of Virtual Function. */ 1847 RTE_ETH_REPRESENTOR_SF, /**< representor of Sub Function. */ 1848 RTE_ETH_REPRESENTOR_PF, /**< representor of Physical Function. */ 1849 }; 1850 1851 /** 1852 * A structure used to retrieve the contextual information of 1853 * an Ethernet device, such as the controlling driver of the 1854 * device, etc... 1855 */ 1856 struct rte_eth_dev_info { 1857 struct rte_device *device; /**< Generic device information */ 1858 const char *driver_name; /**< Device Driver name. */ 1859 unsigned int if_index; /**< Index to bound host interface, or 0 if none. 1860 Use if_indextoname() to translate into an interface name. 
*/ 1861 uint16_t min_mtu; /**< Minimum MTU allowed */ 1862 uint16_t max_mtu; /**< Maximum MTU allowed */ 1863 const uint32_t *dev_flags; /**< Device flags */ 1864 uint32_t min_rx_bufsize; /**< Minimum size of Rx buffer. */ 1865 uint32_t max_rx_pktlen; /**< Maximum configurable length of Rx pkt. */ 1866 /** Maximum configurable size of LRO aggregated packet. */ 1867 uint32_t max_lro_pkt_size; 1868 uint16_t max_rx_queues; /**< Maximum number of Rx queues. */ 1869 uint16_t max_tx_queues; /**< Maximum number of Tx queues. */ 1870 uint32_t max_mac_addrs; /**< Maximum number of MAC addresses. */ 1871 /** Maximum number of hash MAC addresses for MTA and UTA. */ 1872 uint32_t max_hash_mac_addrs; 1873 uint16_t max_vfs; /**< Maximum number of VFs. */ 1874 uint16_t max_vmdq_pools; /**< Maximum number of VMDq pools. */ 1875 struct rte_eth_rxseg_capa rx_seg_capa; /**< Segmentation capability.*/ 1876 /** All Rx offload capabilities including all per-queue ones */ 1877 uint64_t rx_offload_capa; 1878 /** All Tx offload capabilities including all per-queue ones */ 1879 uint64_t tx_offload_capa; 1880 /** Device per-queue Rx offload capabilities. */ 1881 uint64_t rx_queue_offload_capa; 1882 /** Device per-queue Tx offload capabilities. */ 1883 uint64_t tx_queue_offload_capa; 1884 /** Device redirection table size, the total number of entries. */ 1885 uint16_t reta_size; 1886 uint8_t hash_key_size; /**< Hash key size in bytes */ 1887 /** Bit mask of RSS offloads, the bit offset also means flow type */ 1888 uint64_t flow_type_rss_offloads; 1889 struct rte_eth_rxconf default_rxconf; /**< Default Rx configuration */ 1890 struct rte_eth_txconf default_txconf; /**< Default Tx configuration */ 1891 uint16_t vmdq_queue_base; /**< First queue ID for VMDq pools. */ 1892 uint16_t vmdq_queue_num; /**< Queue number for VMDq pools. */ 1893 uint16_t vmdq_pool_base; /**< First ID of VMDq pools. */ 1894 struct rte_eth_desc_lim rx_desc_lim; /**< Rx descriptors limits */ 1895 struct rte_eth_desc_lim tx_desc_lim; /**< Tx descriptors limits */ 1896 uint32_t speed_capa; /**< Supported speeds bitmap (RTE_ETH_LINK_SPEED_). */ 1897 /** Configured number of Rx/Tx queues */ 1898 uint16_t nb_rx_queues; /**< Number of Rx queues. */ 1899 uint16_t nb_tx_queues; /**< Number of Tx queues. */ 1900 /** Rx parameter recommendations */ 1901 struct rte_eth_dev_portconf default_rxportconf; 1902 /** Tx parameter recommendations */ 1903 struct rte_eth_dev_portconf default_txportconf; 1904 /** Generic device capabilities (RTE_ETH_DEV_CAPA_). */ 1905 uint64_t dev_capa; 1906 /** 1907 * Switching information for ports on a device with a 1908 * embedded managed interconnect/switch. 1909 */ 1910 struct rte_eth_switch_info switch_info; 1911 1912 uint64_t reserved_64s[2]; /**< Reserved for future fields */ 1913 void *reserved_ptrs[2]; /**< Reserved for future fields */ 1914 }; 1915 1916 /**@{@name Rx/Tx queue states */ 1917 #define RTE_ETH_QUEUE_STATE_STOPPED 0 /**< Queue stopped. */ 1918 #define RTE_ETH_QUEUE_STATE_STARTED 1 /**< Queue started. */ 1919 #define RTE_ETH_QUEUE_STATE_HAIRPIN 2 /**< Queue used for hairpin. */ 1920 /**@}*/ 1921 1922 /** 1923 * Ethernet device Rx queue information structure. 1924 * Used to retrieve information about configured queue. 1925 */ 1926 struct rte_eth_rxq_info { 1927 struct rte_mempool *mp; /**< mempool used by that queue. */ 1928 struct rte_eth_rxconf conf; /**< queue config parameters. */ 1929 uint8_t scattered_rx; /**< scattered packets Rx supported. */ 1930 uint8_t queue_state; /**< one of RTE_ETH_QUEUE_STATE_*. 
*/
1931 uint16_t nb_desc; /**< configured number of RXDs. */
1932 uint16_t rx_buf_size; /**< hardware receive buffer size. */
1933 /**
1934 * Available Rx descriptors threshold, defined as a percentage
1935 * of the Rx queue size. If the number of available descriptors is lower,
1936 * the event RTE_ETH_EVENT_RX_AVAIL_THRESH is generated.
1937 * Value 0 means that the threshold monitoring is disabled.
1938 */
1939 uint8_t avail_thresh;
1940 } __rte_cache_min_aligned;
1941
1942 /**
1943 * Ethernet device Tx queue information structure.
1944 * Used to retrieve information about a configured queue.
1945 */
1946 struct rte_eth_txq_info {
1947 struct rte_eth_txconf conf; /**< queue config parameters. */
1948 uint16_t nb_desc; /**< configured number of TXDs. */
1949 uint8_t queue_state; /**< one of RTE_ETH_QUEUE_STATE_*. */
1950 } __rte_cache_min_aligned;
1951
1952 /* Generic Burst mode flag definition, values can be ORed. */
1953
1954 /**
1955 * If the queues have different burst mode descriptions, this bit will be set
1956 * by the PMD; the application can then iterate over the queues to retrieve
1957 * the burst description of each one.
1958 */
1959 #define RTE_ETH_BURST_FLAG_PER_QUEUE RTE_BIT64(0)
1960
1961 /**
1962 * Ethernet device Rx/Tx queue packet burst mode information structure.
1963 * Used to retrieve information about the packet burst mode setting.
1964 */
1965 struct rte_eth_burst_mode {
1966 uint64_t flags; /**< The ORed values of RTE_ETH_BURST_FLAG_xxx */
1967
1968 #define RTE_ETH_BURST_MODE_INFO_SIZE 1024 /**< Maximum size for information */
1969 char info[RTE_ETH_BURST_MODE_INFO_SIZE]; /**< burst mode information */
1970 };
1971
1972 /** Maximum name length for extended statistics counters */
1973 #define RTE_ETH_XSTATS_NAME_SIZE 64
1974
1975 /**
1976 * An Ethernet device extended statistic structure
1977 *
1978 * This structure is used by rte_eth_xstats_get() to provide
1979 * statistics that are not provided in the generic *rte_eth_stats*
1980 * structure.
1981 * It maps a name ID, corresponding to an index in the array returned
1982 * by rte_eth_xstats_get_names(), to a statistic value.
1983 */
1984 struct rte_eth_xstat {
1985 uint64_t id; /**< The index in xstats name array. */
1986 uint64_t value; /**< The statistic counter value. */
1987 };
1988
1989 /**
1990 * A name element for extended statistics.
1991 *
1992 * An array of this structure is returned by rte_eth_xstats_get_names().
1993 * It lists the names of extended statistics for a PMD. The *rte_eth_xstat*
1994 * structure references these names by their array index.
1995 *
1996 * The xstats should follow a common naming scheme.
1997 * Some names are standardized in rte_stats_strings.
1998 * Examples:
1999 * - rx_missed_errors
2000 * - tx_q3_bytes
2001 * - tx_size_128_to_255_packets
2002 */
2003 struct rte_eth_xstat_name {
2004 char name[RTE_ETH_XSTATS_NAME_SIZE]; /**< The statistic name. */
2005 };
2006
2007 #define RTE_ETH_DCB_NUM_TCS 8
2008 #define RTE_ETH_MAX_VMDQ_POOL 64
2009
2010 #define ETH_DCB_NUM_TCS RTE_DEPRECATED(ETH_DCB_NUM_TCS) RTE_ETH_DCB_NUM_TCS
2011 #define ETH_MAX_VMDQ_POOL RTE_DEPRECATED(ETH_MAX_VMDQ_POOL) RTE_ETH_MAX_VMDQ_POOL
2012
2013 /**
2014 * A structure used to get the information of queue to TC mapping
2015 * on both the Tx and Rx paths.
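 *
 * A minimal sketch (for illustration only; port_id is application-provided)
 * of reading this mapping back from a configured port through
 * rte_eth_dev_get_dcb_info():
 *
 * @code
 * struct rte_eth_dcb_info dcb_info;
 * uint8_t tc;
 *
 * if (rte_eth_dev_get_dcb_info(port_id, &dcb_info) == 0) {
 *         for (tc = 0; tc < dcb_info.nb_tcs; tc++)
 *                 printf("TC%u: Rx base %u, %u queue(s)\n", tc,
 *                        dcb_info.tc_queue.tc_rxq[0][tc].base,
 *                        dcb_info.tc_queue.tc_rxq[0][tc].nb_queue);
 * }
 * @endcode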
2016 */ 2017 struct rte_eth_dcb_tc_queue_mapping { 2018 /** Rx queues assigned to tc per Pool */ 2019 struct { 2020 uint16_t base; 2021 uint16_t nb_queue; 2022 } tc_rxq[RTE_ETH_MAX_VMDQ_POOL][RTE_ETH_DCB_NUM_TCS]; 2023 /** Rx queues assigned to tc per Pool */ 2024 struct { 2025 uint16_t base; 2026 uint16_t nb_queue; 2027 } tc_txq[RTE_ETH_MAX_VMDQ_POOL][RTE_ETH_DCB_NUM_TCS]; 2028 }; 2029 2030 /** 2031 * A structure used to get the information of DCB. 2032 * It includes TC UP mapping and queue TC mapping. 2033 */ 2034 struct rte_eth_dcb_info { 2035 uint8_t nb_tcs; /**< number of TCs */ 2036 uint8_t prio_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES]; /**< Priority to tc */ 2037 uint8_t tc_bws[RTE_ETH_DCB_NUM_TCS]; /**< Tx BW percentage for each TC */ 2038 /** Rx queues assigned to tc */ 2039 struct rte_eth_dcb_tc_queue_mapping tc_queue; 2040 }; 2041 2042 /** 2043 * This enum indicates the possible Forward Error Correction (FEC) modes 2044 * of an ethdev port. 2045 */ 2046 enum rte_eth_fec_mode { 2047 RTE_ETH_FEC_NOFEC = 0, /**< FEC is off */ 2048 RTE_ETH_FEC_AUTO, /**< FEC autonegotiation modes */ 2049 RTE_ETH_FEC_BASER, /**< FEC using common algorithm */ 2050 RTE_ETH_FEC_RS, /**< FEC using RS algorithm */ 2051 }; 2052 2053 /* Translate from FEC mode to FEC capa */ 2054 #define RTE_ETH_FEC_MODE_TO_CAPA(x) RTE_BIT32(x) 2055 2056 /* This macro indicates FEC capa mask */ 2057 #define RTE_ETH_FEC_MODE_CAPA_MASK(x) RTE_BIT32(RTE_ETH_FEC_ ## x) 2058 2059 /* A structure used to get capabilities per link speed */ 2060 struct rte_eth_fec_capa { 2061 uint32_t speed; /**< Link speed (see RTE_ETH_SPEED_NUM_*) */ 2062 uint32_t capa; /**< FEC capabilities bitmask */ 2063 }; 2064 2065 #define RTE_ETH_ALL RTE_MAX_ETHPORTS 2066 2067 /* Macros to check for valid port */ 2068 #define RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, retval) do { \ 2069 if (!rte_eth_dev_is_valid_port(port_id)) { \ 2070 RTE_ETHDEV_LOG(ERR, "Invalid port_id=%u\n", port_id); \ 2071 return retval; \ 2072 } \ 2073 } while (0) 2074 2075 #define RTE_ETH_VALID_PORTID_OR_RET(port_id) do { \ 2076 if (!rte_eth_dev_is_valid_port(port_id)) { \ 2077 RTE_ETHDEV_LOG(ERR, "Invalid port_id=%u\n", port_id); \ 2078 return; \ 2079 } \ 2080 } while (0) 2081 2082 /** 2083 * Function type used for Rx packet processing packet callbacks. 2084 * 2085 * The callback function is called on Rx with a burst of packets that have 2086 * been received on the given port and queue. 2087 * 2088 * @param port_id 2089 * The Ethernet port on which Rx is being performed. 2090 * @param queue 2091 * The queue on the Ethernet port which is being used to receive the packets. 2092 * @param pkts 2093 * The burst of packets that have just been received. 2094 * @param nb_pkts 2095 * The number of packets in the burst pointed to by "pkts". 2096 * @param max_pkts 2097 * The max number of packets that can be stored in the "pkts" array. 2098 * @param user_param 2099 * The arbitrary user parameter passed in by the application when the callback 2100 * was originally configured. 2101 * @return 2102 * The number of packets returned to the user. 2103 */ 2104 typedef uint16_t (*rte_rx_callback_fn)(uint16_t port_id, uint16_t queue, 2105 struct rte_mbuf *pkts[], uint16_t nb_pkts, uint16_t max_pkts, 2106 void *user_param); 2107 2108 /** 2109 * Function type used for Tx packet processing packet callbacks. 2110 * 2111 * The callback function is called on Tx with a burst of packets immediately 2112 * before the packets are put onto the hardware queue for transmission. 
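 *
 * A minimal sketch of such a callback (for illustration only; the function
 * name and the meaning of user_param are application choices), which could
 * be registered with rte_eth_add_tx_callback():
 *
 * @code
 * static uint16_t
 * count_tx_cb(uint16_t port_id, uint16_t queue, struct rte_mbuf *pkts[],
 *             uint16_t nb_pkts, void *user_param)
 * {
 *         uint64_t *counter = user_param;
 *
 *         RTE_SET_USED(port_id);
 *         RTE_SET_USED(queue);
 *         RTE_SET_USED(pkts);
 *         *counter += nb_pkts;
 *         // Hand the whole burst to the driver unchanged.
 *         return nb_pkts;
 * }
 * @endcode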
2113 * 2114 * @param port_id 2115 * The Ethernet port on which Tx is being performed. 2116 * @param queue 2117 * The queue on the Ethernet port which is being used to transmit the packets. 2118 * @param pkts 2119 * The burst of packets that are about to be transmitted. 2120 * @param nb_pkts 2121 * The number of packets in the burst pointed to by "pkts". 2122 * @param user_param 2123 * The arbitrary user parameter passed in by the application when the callback 2124 * was originally configured. 2125 * @return 2126 * The number of packets to be written to the NIC. 2127 */ 2128 typedef uint16_t (*rte_tx_callback_fn)(uint16_t port_id, uint16_t queue, 2129 struct rte_mbuf *pkts[], uint16_t nb_pkts, void *user_param); 2130 2131 /** 2132 * Possible states of an ethdev port. 2133 */ 2134 enum rte_eth_dev_state { 2135 /** Device is unused before being probed. */ 2136 RTE_ETH_DEV_UNUSED = 0, 2137 /** Device is attached when allocated in probing. */ 2138 RTE_ETH_DEV_ATTACHED, 2139 /** Device is in removed state when plug-out is detected. */ 2140 RTE_ETH_DEV_REMOVED, 2141 }; 2142 2143 struct rte_eth_dev_sriov { 2144 uint8_t active; /**< SRIOV is active with 16, 32 or 64 pools */ 2145 uint8_t nb_q_per_pool; /**< Rx queue number per pool */ 2146 uint16_t def_vmdq_idx; /**< Default pool num used for PF */ 2147 uint16_t def_pool_q_idx; /**< Default pool queue start reg index */ 2148 }; 2149 #define RTE_ETH_DEV_SRIOV(dev) ((dev)->data->sriov) 2150 2151 #define RTE_ETH_NAME_MAX_LEN RTE_DEV_NAME_MAX_LEN 2152 2153 #define RTE_ETH_DEV_NO_OWNER 0 2154 2155 #define RTE_ETH_MAX_OWNER_NAME_LEN 64 2156 2157 struct rte_eth_dev_owner { 2158 uint64_t id; /**< The owner unique identifier. */ 2159 char name[RTE_ETH_MAX_OWNER_NAME_LEN]; /**< The owner name. */ 2160 }; 2161 2162 /**@{@name Device flags 2163 * Flags internally saved in rte_eth_dev_data.dev_flags 2164 * and reported in rte_eth_dev_info.dev_flags. 2165 */ 2166 /** PMD supports thread-safe flow operations */ 2167 #define RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE RTE_BIT32(0) 2168 /** Device supports link state interrupt */ 2169 #define RTE_ETH_DEV_INTR_LSC RTE_BIT32(1) 2170 /** Device is a bonded slave */ 2171 #define RTE_ETH_DEV_BONDED_SLAVE RTE_BIT32(2) 2172 /** Device supports device removal interrupt */ 2173 #define RTE_ETH_DEV_INTR_RMV RTE_BIT32(3) 2174 /** Device is port representor */ 2175 #define RTE_ETH_DEV_REPRESENTOR RTE_BIT32(4) 2176 /** Device does not support MAC change after started */ 2177 #define RTE_ETH_DEV_NOLIVE_MAC_ADDR RTE_BIT32(5) 2178 /** 2179 * Queue xstats filled automatically by ethdev layer. 2180 * PMDs filling the queue xstats themselves should not set this flag 2181 */ 2182 #define RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS RTE_BIT32(6) 2183 /**@}*/ 2184 2185 /** 2186 * Iterates over valid ethdev ports owned by a specific owner. 2187 * 2188 * @param port_id 2189 * The ID of the next possible valid owned port. 2190 * @param owner_id 2191 * The owner identifier. 2192 * RTE_ETH_DEV_NO_OWNER means iterate over all valid ownerless ports. 2193 * @return 2194 * Next valid port ID owned by owner_id, RTE_MAX_ETHPORTS if there is none. 2195 */ 2196 uint64_t rte_eth_find_next_owned_by(uint16_t port_id, 2197 const uint64_t owner_id); 2198 2199 /** 2200 * Macro to iterate over all enabled ethdev ports owned by a specific owner. 
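 *
 * A minimal sketch (for illustration only; "my_app" and port 0 are arbitrary
 * choices) combining this macro with the ownership API declared below:
 *
 * @code
 * struct rte_eth_dev_owner owner = { .name = "my_app" };
 * uint16_t port_id;
 *
 * if (rte_eth_dev_owner_new(&owner.id) == 0 &&
 *     rte_eth_dev_owner_set(0, &owner) == 0) {
 *         // Walk every valid port owned by this owner identifier.
 *         RTE_ETH_FOREACH_DEV_OWNED_BY(port_id, owner.id)
 *                 printf("port %u owned by %s\n", port_id, owner.name);
 * }
 * @endcode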
2201 */ 2202 #define RTE_ETH_FOREACH_DEV_OWNED_BY(p, o) \ 2203 for (p = rte_eth_find_next_owned_by(0, o); \ 2204 (unsigned int)p < (unsigned int)RTE_MAX_ETHPORTS; \ 2205 p = rte_eth_find_next_owned_by(p + 1, o)) 2206 2207 /** 2208 * Iterates over valid ethdev ports. 2209 * 2210 * @param port_id 2211 * The ID of the next possible valid port. 2212 * @return 2213 * Next valid port ID, RTE_MAX_ETHPORTS if there is none. 2214 */ 2215 uint16_t rte_eth_find_next(uint16_t port_id); 2216 2217 /** 2218 * Macro to iterate over all enabled and ownerless ethdev ports. 2219 */ 2220 #define RTE_ETH_FOREACH_DEV(p) \ 2221 RTE_ETH_FOREACH_DEV_OWNED_BY(p, RTE_ETH_DEV_NO_OWNER) 2222 2223 /** 2224 * Iterates over ethdev ports of a specified device. 2225 * 2226 * @param port_id_start 2227 * The ID of the next possible valid port. 2228 * @param parent 2229 * The generic device behind the ports to iterate. 2230 * @return 2231 * Next port ID of the device, possibly port_id_start, 2232 * RTE_MAX_ETHPORTS if there is none. 2233 */ 2234 uint16_t 2235 rte_eth_find_next_of(uint16_t port_id_start, 2236 const struct rte_device *parent); 2237 2238 /** 2239 * Macro to iterate over all ethdev ports of a specified device. 2240 * 2241 * @param port_id 2242 * The ID of the matching port being iterated. 2243 * @param parent 2244 * The rte_device pointer matching the iterated ports. 2245 */ 2246 #define RTE_ETH_FOREACH_DEV_OF(port_id, parent) \ 2247 for (port_id = rte_eth_find_next_of(0, parent); \ 2248 port_id < RTE_MAX_ETHPORTS; \ 2249 port_id = rte_eth_find_next_of(port_id + 1, parent)) 2250 2251 /** 2252 * Iterates over sibling ethdev ports (i.e. sharing the same rte_device). 2253 * 2254 * @param port_id_start 2255 * The ID of the next possible valid sibling port. 2256 * @param ref_port_id 2257 * The ID of a reference port to compare rte_device with. 2258 * @return 2259 * Next sibling port ID, possibly port_id_start or ref_port_id itself, 2260 * RTE_MAX_ETHPORTS if there is none. 2261 */ 2262 uint16_t 2263 rte_eth_find_next_sibling(uint16_t port_id_start, uint16_t ref_port_id); 2264 2265 /** 2266 * Macro to iterate over all ethdev ports sharing the same rte_device 2267 * as the specified port. 2268 * Note: the specified reference port is part of the loop iterations. 2269 * 2270 * @param port_id 2271 * The ID of the matching port being iterated. 2272 * @param ref_port_id 2273 * The ID of the port being compared. 2274 */ 2275 #define RTE_ETH_FOREACH_DEV_SIBLING(port_id, ref_port_id) \ 2276 for (port_id = rte_eth_find_next_sibling(0, ref_port_id); \ 2277 port_id < RTE_MAX_ETHPORTS; \ 2278 port_id = rte_eth_find_next_sibling(port_id + 1, ref_port_id)) 2279 2280 /** 2281 * Get a new unique owner identifier. 2282 * An owner identifier is used to owns Ethernet devices by only one DPDK entity 2283 * to avoid multiple management of device by different entities. 2284 * 2285 * @param owner_id 2286 * Owner identifier pointer. 2287 * @return 2288 * Negative errno value on error, 0 on success. 2289 */ 2290 int rte_eth_dev_owner_new(uint64_t *owner_id); 2291 2292 /** 2293 * Set an Ethernet device owner. 2294 * 2295 * @param port_id 2296 * The identifier of the port to own. 2297 * @param owner 2298 * The owner pointer. 2299 * @return 2300 * Negative errno value on error, 0 on success. 2301 */ 2302 int rte_eth_dev_owner_set(const uint16_t port_id, 2303 const struct rte_eth_dev_owner *owner); 2304 2305 /** 2306 * Unset Ethernet device owner to make the device ownerless. 
2307 * 2308 * @param port_id 2309 * The identifier of port to make ownerless. 2310 * @param owner_id 2311 * The owner identifier. 2312 * @return 2313 * 0 on success, negative errno value on error. 2314 */ 2315 int rte_eth_dev_owner_unset(const uint16_t port_id, 2316 const uint64_t owner_id); 2317 2318 /** 2319 * Remove owner from all Ethernet devices owned by a specific owner. 2320 * 2321 * @param owner_id 2322 * The owner identifier. 2323 * @return 2324 * 0 on success, negative errno value on error. 2325 */ 2326 int rte_eth_dev_owner_delete(const uint64_t owner_id); 2327 2328 /** 2329 * Get the owner of an Ethernet device. 2330 * 2331 * @param port_id 2332 * The port identifier. 2333 * @param owner 2334 * The owner structure pointer to fill. 2335 * @return 2336 * 0 on success, negative errno value on error.. 2337 */ 2338 int rte_eth_dev_owner_get(const uint16_t port_id, 2339 struct rte_eth_dev_owner *owner); 2340 2341 /** 2342 * Get the number of ports which are usable for the application. 2343 * 2344 * These devices must be iterated by using the macro 2345 * ``RTE_ETH_FOREACH_DEV`` or ``RTE_ETH_FOREACH_DEV_OWNED_BY`` 2346 * to deal with non-contiguous ranges of devices. 2347 * 2348 * @return 2349 * The count of available Ethernet devices. 2350 */ 2351 uint16_t rte_eth_dev_count_avail(void); 2352 2353 /** 2354 * Get the total number of ports which are allocated. 2355 * 2356 * Some devices may not be available for the application. 2357 * 2358 * @return 2359 * The total count of Ethernet devices. 2360 */ 2361 uint16_t rte_eth_dev_count_total(void); 2362 2363 /** 2364 * Convert a numerical speed in Mbps to a bitmap flag that can be used in 2365 * the bitmap link_speeds of the struct rte_eth_conf 2366 * 2367 * @param speed 2368 * Numerical speed value in Mbps 2369 * @param duplex 2370 * RTE_ETH_LINK_[HALF/FULL]_DUPLEX (only for 10/100M speeds) 2371 * @return 2372 * 0 if the speed cannot be mapped 2373 */ 2374 uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex); 2375 2376 /** 2377 * Get RTE_ETH_RX_OFFLOAD_* flag name. 2378 * 2379 * @param offload 2380 * Offload flag. 2381 * @return 2382 * Offload name or 'UNKNOWN' if the flag cannot be recognised. 2383 */ 2384 const char *rte_eth_dev_rx_offload_name(uint64_t offload); 2385 2386 /** 2387 * Get RTE_ETH_TX_OFFLOAD_* flag name. 2388 * 2389 * @param offload 2390 * Offload flag. 2391 * @return 2392 * Offload name or 'UNKNOWN' if the flag cannot be recognised. 2393 */ 2394 const char *rte_eth_dev_tx_offload_name(uint64_t offload); 2395 2396 /** 2397 * @warning 2398 * @b EXPERIMENTAL: this API may change without prior notice. 2399 * 2400 * Get RTE_ETH_DEV_CAPA_* flag name. 2401 * 2402 * @param capability 2403 * Capability flag. 2404 * @return 2405 * Capability name or 'UNKNOWN' if the flag cannot be recognized. 2406 */ 2407 __rte_experimental 2408 const char *rte_eth_dev_capability_name(uint64_t capability); 2409 2410 /** 2411 * Configure an Ethernet device. 2412 * This function must be invoked first before any other function in the 2413 * Ethernet API. This function can also be re-invoked when a device is in the 2414 * stopped state. 2415 * 2416 * @param port_id 2417 * The port identifier of the Ethernet device to configure. 2418 * @param nb_rx_queue 2419 * The number of receive queues to set up for the Ethernet device. 2420 * @param nb_tx_queue 2421 * The number of transmit queues to set up for the Ethernet device. 2422 * @param eth_conf 2423 * The pointer to the configuration data to be used for the Ethernet device. 
2424 * The *rte_eth_conf* structure includes: 2425 * - the hardware offload features to activate, with dedicated fields for 2426 * each statically configurable offload hardware feature provided by 2427 * Ethernet devices, such as IP checksum or VLAN tag stripping for 2428 * example. 2429 * The Rx offload bitfield API is obsolete and will be deprecated. 2430 * Applications should set the ignore_bitfield_offloads bit on *rxmode* 2431 * structure and use offloads field to set per-port offloads instead. 2432 * - Any offloading set in eth_conf->[rt]xmode.offloads must be within 2433 * the [rt]x_offload_capa returned from rte_eth_dev_info_get(). 2434 * Any type of device supported offloading set in the input argument 2435 * eth_conf->[rt]xmode.offloads to rte_eth_dev_configure() is enabled 2436 * on all queues and it can't be disabled in rte_eth_[rt]x_queue_setup() 2437 * - the Receive Side Scaling (RSS) configuration when using multiple Rx 2438 * queues per port. Any RSS hash function set in eth_conf->rss_conf.rss_hf 2439 * must be within the flow_type_rss_offloads provided by drivers via 2440 * rte_eth_dev_info_get() API. 2441 * 2442 * Embedding all configuration information in a single data structure 2443 * is the more flexible method that allows the addition of new features 2444 * without changing the syntax of the API. 2445 * @return 2446 * - 0: Success, device configured. 2447 * - <0: Error code returned by the driver configuration function. 2448 */ 2449 int rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_queue, 2450 uint16_t nb_tx_queue, const struct rte_eth_conf *eth_conf); 2451 2452 /** 2453 * Check if an Ethernet device was physically removed. 2454 * 2455 * @param port_id 2456 * The port identifier of the Ethernet device. 2457 * @return 2458 * 1 when the Ethernet device is removed, otherwise 0. 2459 */ 2460 int 2461 rte_eth_dev_is_removed(uint16_t port_id); 2462 2463 /** 2464 * Allocate and set up a receive queue for an Ethernet device. 2465 * 2466 * The function allocates a contiguous block of memory for *nb_rx_desc* 2467 * receive descriptors from a memory zone associated with *socket_id* 2468 * and initializes each receive descriptor with a network buffer allocated 2469 * from the memory pool *mb_pool*. 2470 * 2471 * @param port_id 2472 * The port identifier of the Ethernet device. 2473 * @param rx_queue_id 2474 * The index of the receive queue to set up. 2475 * The value must be in the range [0, nb_rx_queue - 1] previously supplied 2476 * to rte_eth_dev_configure(). 2477 * @param nb_rx_desc 2478 * The number of receive descriptors to allocate for the receive ring. 2479 * @param socket_id 2480 * The *socket_id* argument is the socket identifier in case of NUMA. 2481 * The value can be *SOCKET_ID_ANY* if there is no NUMA constraint for 2482 * the DMA memory allocated for the receive descriptors of the ring. 2483 * @param rx_conf 2484 * The pointer to the configuration data to be used for the receive queue. 2485 * NULL value is allowed, in which case default Rx configuration 2486 * will be used. 2487 * The *rx_conf* structure contains an *rx_thresh* structure with the values 2488 * of the Prefetch, Host, and Write-Back threshold registers of the receive 2489 * ring. 2490 * In addition it contains the hardware offloads features to activate using 2491 * the RTE_ETH_RX_OFFLOAD_* flags. 
2492 * If an offloading set in rx_conf->offloads 2493 * hasn't been set in the input argument eth_conf->rxmode.offloads 2494 * to rte_eth_dev_configure(), it is a new added offloading, it must be 2495 * per-queue type and it is enabled for the queue. 2496 * No need to repeat any bit in rx_conf->offloads which has already been 2497 * enabled in rte_eth_dev_configure() at port level. An offloading enabled 2498 * at port level can't be disabled at queue level. 2499 * The configuration structure also contains the pointer to the array 2500 * of the receiving buffer segment descriptions, see rx_seg and rx_nseg 2501 * fields, this extended configuration might be used by split offloads like 2502 * RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT. If mb_pool is not NULL, 2503 * the extended configuration fields must be set to NULL and zero. 2504 * @param mb_pool 2505 * The pointer to the memory pool from which to allocate *rte_mbuf* network 2506 * memory buffers to populate each descriptor of the receive ring. There are 2507 * two options to provide Rx buffer configuration: 2508 * - single pool: 2509 * mb_pool is not NULL, rx_conf.rx_nseg is 0. 2510 * - multiple segments description: 2511 * mb_pool is NULL, rx_conf.rx_seg is not NULL, rx_conf.rx_nseg is not 0. 2512 * Taken only if flag RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT is set in offloads. 2513 * 2514 * @return 2515 * - 0: Success, receive queue correctly set up. 2516 * - -EIO: if device is removed. 2517 * - -ENODEV: if *port_id* is invalid. 2518 * - -EINVAL: The memory pool pointer is null or the size of network buffers 2519 * which can be allocated from this memory pool does not fit the various 2520 * buffer sizes allowed by the device controller. 2521 * - -ENOMEM: Unable to allocate the receive ring descriptors or to 2522 * allocate network memory buffers from the memory pool when 2523 * initializing receive descriptors. 2524 */ 2525 int rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, 2526 uint16_t nb_rx_desc, unsigned int socket_id, 2527 const struct rte_eth_rxconf *rx_conf, 2528 struct rte_mempool *mb_pool); 2529 2530 /** 2531 * @warning 2532 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice 2533 * 2534 * Allocate and set up a hairpin receive queue for an Ethernet device. 2535 * 2536 * The function set up the selected queue to be used in hairpin. 2537 * 2538 * @param port_id 2539 * The port identifier of the Ethernet device. 2540 * @param rx_queue_id 2541 * The index of the receive queue to set up. 2542 * The value must be in the range [0, nb_rx_queue - 1] previously supplied 2543 * to rte_eth_dev_configure(). 2544 * @param nb_rx_desc 2545 * The number of receive descriptors to allocate for the receive ring. 2546 * 0 means the PMD will use default value. 2547 * @param conf 2548 * The pointer to the hairpin configuration. 2549 * 2550 * @return 2551 * - (0) if successful. 2552 * - (-ENODEV) if *port_id* is invalid. 2553 * - (-ENOTSUP) if hardware doesn't support. 2554 * - (-EINVAL) if bad parameter. 2555 * - (-ENOMEM) if unable to allocate the resources. 2556 */ 2557 __rte_experimental 2558 int rte_eth_rx_hairpin_queue_setup 2559 (uint16_t port_id, uint16_t rx_queue_id, uint16_t nb_rx_desc, 2560 const struct rte_eth_hairpin_conf *conf); 2561 2562 /** 2563 * Allocate and set up a transmit queue for an Ethernet device. 2564 * 2565 * @param port_id 2566 * The port identifier of the Ethernet device. 2567 * @param tx_queue_id 2568 * The index of the transmit queue to set up. 
2569 * The value must be in the range [0, nb_tx_queue - 1] previously supplied 2570 * to rte_eth_dev_configure(). 2571 * @param nb_tx_desc 2572 * The number of transmit descriptors to allocate for the transmit ring. 2573 * @param socket_id 2574 * The *socket_id* argument is the socket identifier in case of NUMA. 2575 * Its value can be *SOCKET_ID_ANY* if there is no NUMA constraint for 2576 * the DMA memory allocated for the transmit descriptors of the ring. 2577 * @param tx_conf 2578 * The pointer to the configuration data to be used for the transmit queue. 2579 * NULL value is allowed, in which case default Tx configuration 2580 * will be used. 2581 * The *tx_conf* structure contains the following data: 2582 * - The *tx_thresh* structure with the values of the Prefetch, Host, and 2583 * Write-Back threshold registers of the transmit ring. 2584 * When setting Write-Back threshold to the value greater then zero, 2585 * *tx_rs_thresh* value should be explicitly set to one. 2586 * - The *tx_free_thresh* value indicates the [minimum] number of network 2587 * buffers that must be pending in the transmit ring to trigger their 2588 * [implicit] freeing by the driver transmit function. 2589 * - The *tx_rs_thresh* value indicates the [minimum] number of transmit 2590 * descriptors that must be pending in the transmit ring before setting the 2591 * RS bit on a descriptor by the driver transmit function. 2592 * The *tx_rs_thresh* value should be less or equal then 2593 * *tx_free_thresh* value, and both of them should be less then 2594 * *nb_tx_desc* - 3. 2595 * - The *offloads* member contains Tx offloads to be enabled. 2596 * If an offloading set in tx_conf->offloads 2597 * hasn't been set in the input argument eth_conf->txmode.offloads 2598 * to rte_eth_dev_configure(), it is a new added offloading, it must be 2599 * per-queue type and it is enabled for the queue. 2600 * No need to repeat any bit in tx_conf->offloads which has already been 2601 * enabled in rte_eth_dev_configure() at port level. An offloading enabled 2602 * at port level can't be disabled at queue level. 2603 * 2604 * Note that setting *tx_free_thresh* or *tx_rs_thresh* value to 0 forces 2605 * the transmit function to use default values. 2606 * @return 2607 * - 0: Success, the transmit queue is correctly set up. 2608 * - -ENOMEM: Unable to allocate the transmit ring descriptors. 2609 */ 2610 int rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id, 2611 uint16_t nb_tx_desc, unsigned int socket_id, 2612 const struct rte_eth_txconf *tx_conf); 2613 2614 /** 2615 * @warning 2616 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice 2617 * 2618 * Allocate and set up a transmit hairpin queue for an Ethernet device. 2619 * 2620 * @param port_id 2621 * The port identifier of the Ethernet device. 2622 * @param tx_queue_id 2623 * The index of the transmit queue to set up. 2624 * The value must be in the range [0, nb_tx_queue - 1] previously supplied 2625 * to rte_eth_dev_configure(). 2626 * @param nb_tx_desc 2627 * The number of transmit descriptors to allocate for the transmit ring. 2628 * 0 to set default PMD value. 2629 * @param conf 2630 * The hairpin configuration. 2631 * 2632 * @return 2633 * - (0) if successful. 2634 * - (-ENODEV) if *port_id* is invalid. 2635 * - (-ENOTSUP) if hardware doesn't support. 2636 * - (-EINVAL) if bad parameter. 2637 * - (-ENOMEM) if unable to allocate the resources. 
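 *
 * A minimal single-port loopback sketch (for illustration only; it assumes
 * the port was configured with at least two Rx and two Tx queues, and uses
 * queue index 1 for the hairpin pair):
 *
 * @code
 * struct rte_eth_hairpin_conf hairpin_conf = {
 *         .peer_count = 1,
 *         .tx_explicit = 0,
 *         .manual_bind = 0, // let the PMD bind the pair at port start
 * };
 * int ret;
 *
 * hairpin_conf.peers[0].port = port_id;
 * hairpin_conf.peers[0].queue = 1;
 * ret = rte_eth_rx_hairpin_queue_setup(port_id, 1, 0, &hairpin_conf);
 * if (ret == 0)
 *         ret = rte_eth_tx_hairpin_queue_setup(port_id, 1, 0, &hairpin_conf);
 * @endcode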
2638 */ 2639 __rte_experimental 2640 int rte_eth_tx_hairpin_queue_setup 2641 (uint16_t port_id, uint16_t tx_queue_id, uint16_t nb_tx_desc, 2642 const struct rte_eth_hairpin_conf *conf); 2643 2644 /** 2645 * @warning 2646 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice 2647 * 2648 * Get all the hairpin peer Rx / Tx ports of the current port. 2649 * The caller should ensure that the array is large enough to save the ports 2650 * list. 2651 * 2652 * @param port_id 2653 * The port identifier of the Ethernet device. 2654 * @param peer_ports 2655 * Pointer to the array to store the peer ports list. 2656 * @param len 2657 * Length of the array to store the port identifiers. 2658 * @param direction 2659 * Current port to peer port direction 2660 * positive - current used as Tx to get all peer Rx ports. 2661 * zero - current used as Rx to get all peer Tx ports. 2662 * 2663 * @return 2664 * - (0 or positive) actual peer ports number. 2665 * - (-EINVAL) if bad parameter. 2666 * - (-ENODEV) if *port_id* invalid 2667 * - (-ENOTSUP) if hardware doesn't support. 2668 * - Others detailed errors from PMDs. 2669 */ 2670 __rte_experimental 2671 int rte_eth_hairpin_get_peer_ports(uint16_t port_id, uint16_t *peer_ports, 2672 size_t len, uint32_t direction); 2673 2674 /** 2675 * @warning 2676 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice 2677 * 2678 * Bind all hairpin Tx queues of one port to the Rx queues of the peer port. 2679 * It is only allowed to call this function after all hairpin queues are 2680 * configured properly and the devices are in started state. 2681 * 2682 * @param tx_port 2683 * The identifier of the Tx port. 2684 * @param rx_port 2685 * The identifier of peer Rx port. 2686 * RTE_MAX_ETHPORTS is allowed for the traversal of all devices. 2687 * Rx port ID could have the same value as Tx port ID. 2688 * 2689 * @return 2690 * - (0) if successful. 2691 * - (-ENODEV) if Tx port ID is invalid. 2692 * - (-EBUSY) if device is not in started state. 2693 * - (-ENOTSUP) if hardware doesn't support. 2694 * - Others detailed errors from PMDs. 2695 */ 2696 __rte_experimental 2697 int rte_eth_hairpin_bind(uint16_t tx_port, uint16_t rx_port); 2698 2699 /** 2700 * @warning 2701 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice 2702 * 2703 * Unbind all hairpin Tx queues of one port from the Rx queues of the peer port. 2704 * This should be called before closing the Tx or Rx devices, if the bind 2705 * function is called before. 2706 * After unbinding the hairpin ports pair, it is allowed to bind them again. 2707 * Changing queues configuration should be after stopping the device(s). 2708 * 2709 * @param tx_port 2710 * The identifier of the Tx port. 2711 * @param rx_port 2712 * The identifier of peer Rx port. 2713 * RTE_MAX_ETHPORTS is allowed for traversal of all devices. 2714 * Rx port ID could have the same value as Tx port ID. 2715 * 2716 * @return 2717 * - (0) if successful. 2718 * - (-ENODEV) if Tx port ID is invalid. 2719 * - (-EBUSY) if device is in stopped state. 2720 * - (-ENOTSUP) if hardware doesn't support. 2721 * - Others detailed errors from PMDs. 
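 *
 * A short sketch of the expected ordering when two started ports are bound
 * into a hairpin pair and later torn down; error handling is omitted and
 * tx_port/rx_port are illustrative identifiers chosen by the application:
 *
 * @code
 * // after both devices are started with manually bound hairpin queues
 * ret = rte_eth_hairpin_bind(tx_port, rx_port);
 * // ... traffic flows through the hairpin path ...
 * // before stopping or closing either device
 * ret = rte_eth_hairpin_unbind(tx_port, rx_port);
 * @endcode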
2722 */
2723 __rte_experimental
2724 int rte_eth_hairpin_unbind(uint16_t tx_port, uint16_t rx_port);
2725
2726 /**
2727 * Return the NUMA socket to which an Ethernet device is connected.
2728 *
2729 * @param port_id
2730 * The port identifier of the Ethernet device.
2731 * @return
2732 * The NUMA socket ID to which the Ethernet device is connected or
2733 * a default of zero if the socket could not be determined.
2734 * -1 is returned if the port_id value is out of range.
2735 */
2736 int rte_eth_dev_socket_id(uint16_t port_id);
2737
2738 /**
2739 * Check whether the device identified by port_id is attached.
2740 *
2741 * @param port_id
2742 * The port identifier of the Ethernet device.
2743 * @return
2744 * - 0 if port is out of range or not attached
2745 * - 1 if device is attached
2746 */
2747 int rte_eth_dev_is_valid_port(uint16_t port_id);
2748
2749 /**
2750 * Start the specified Rx queue of a port. It is used when the rx_deferred_start
2751 * flag of the specified queue is true.
2752 *
2753 * @param port_id
2754 * The port identifier of the Ethernet device.
2755 * @param rx_queue_id
2756 * The index of the Rx queue to start.
2757 * The value must be in the range [0, nb_rx_queue - 1] previously supplied
2758 * to rte_eth_dev_configure().
2759 * @return
2760 * - 0: Success, the receive queue is started.
2761 * - -ENODEV: if *port_id* is invalid.
2762 * - -EINVAL: The queue_id is out of range or the queue is a hairpin queue.
2763 * - -EIO: if device is removed.
2764 * - -ENOTSUP: The function is not supported by the PMD.
2765 */
2766 int rte_eth_dev_rx_queue_start(uint16_t port_id, uint16_t rx_queue_id);
2767
2768 /**
2769 * Stop the specified Rx queue of a port.
2770 *
2771 * @param port_id
2772 * The port identifier of the Ethernet device.
2773 * @param rx_queue_id
2774 * The index of the Rx queue to stop.
2775 * The value must be in the range [0, nb_rx_queue - 1] previously supplied
2776 * to rte_eth_dev_configure().
2777 * @return
2778 * - 0: Success, the receive queue is stopped.
2779 * - -ENODEV: if *port_id* is invalid.
2780 * - -EINVAL: The queue_id is out of range or the queue is a hairpin queue.
2781 * - -EIO: if device is removed.
2782 * - -ENOTSUP: The function is not supported by the PMD.
2783 */
2784 int rte_eth_dev_rx_queue_stop(uint16_t port_id, uint16_t rx_queue_id);
2785
2786 /**
2787 * Start the specified Tx queue of a port. It is used when the tx_deferred_start
2788 * flag of the specified queue is true.
2789 *
2790 * @param port_id
2791 * The port identifier of the Ethernet device.
2792 * @param tx_queue_id
2793 * The index of the Tx queue to start.
2794 * The value must be in the range [0, nb_tx_queue - 1] previously supplied
2795 * to rte_eth_dev_configure().
2796 * @return
2797 * - 0: Success, the transmit queue is started.
2798 * - -ENODEV: if *port_id* is invalid.
2799 * - -EINVAL: The queue_id is out of range or the queue is a hairpin queue.
2800 * - -EIO: if device is removed.
2801 * - -ENOTSUP: The function is not supported by the PMD.
2802 */
2803 int rte_eth_dev_tx_queue_start(uint16_t port_id, uint16_t tx_queue_id);
2804
2805 /**
2806 * Stop the specified Tx queue of a port.
2807 *
2808 * @param port_id
2809 * The port identifier of the Ethernet device.
2810 * @param tx_queue_id
2811 * The index of the Tx queue to stop.
2812 * The value must be in the range [0, nb_tx_queue - 1] previously supplied
2813 * to rte_eth_dev_configure().
2814 * @return
2815 * - 0: Success, the transmit queue is stopped.
2816 * - -ENODEV: if *port_id* is invalid.
2817 * - -EINVAL: The queue_id is out of range or the queue is a hairpin queue.
2818 * - -EIO: if device is removed.
2819 * - -ENOTSUP: The function is not supported by the PMD.
2820 */
2821 int rte_eth_dev_tx_queue_stop(uint16_t port_id, uint16_t tx_queue_id);
2822
2823 /**
2824 * Start an Ethernet device.
2825 *
2826 * The device start step is the last one; it consists of setting the configured
2827 * offload features and starting the transmit and the receive units of the
2828 * device.
2829 *
2830 * The device RTE_ETH_DEV_NOLIVE_MAC_ADDR flag causes the MAC address to be set
2831 * before the PMD port start callback function is invoked.
2832 *
2833 * On success, all basic functions exported by the Ethernet API (link status,
2834 * receive/transmit, and so on) can be invoked.
2835 *
2836 * @param port_id
2837 * The port identifier of the Ethernet device.
2838 * @return
2839 * - 0: Success, Ethernet device started.
2840 * - <0: Error code of the driver device start function.
2841 */
2842 int rte_eth_dev_start(uint16_t port_id);
2843
2844 /**
2845 * Stop an Ethernet device. The device can be restarted with a call to
2846 * rte_eth_dev_start().
2847 *
2848 * @param port_id
2849 * The port identifier of the Ethernet device.
2850 * @return
2851 * - 0: Success, Ethernet device stopped.
2852 * - <0: Error code of the driver device stop function.
2853 */
2854 int rte_eth_dev_stop(uint16_t port_id);
2855
2856 /**
2857 * Link up an Ethernet device.
2858 *
2859 * Setting the device link up re-enables the device Rx/Tx functionality
2860 * after it has previously been set link down.
2861 *
2862 * @param port_id
2863 * The port identifier of the Ethernet device.
2864 * @return
2865 * - 0: Success, Ethernet device linked up.
2866 * - <0: Error code of the driver device link up function.
2867 */
2868 int rte_eth_dev_set_link_up(uint16_t port_id);
2869
2870 /**
2871 * Link down an Ethernet device.
2872 * On success, the device Rx/Tx functionality is disabled;
2873 * it can be re-enabled with a call to
2874 * rte_eth_dev_set_link_up().
2875 *
2876 * @param port_id
2877 * The port identifier of the Ethernet device.
2878 */
2879 int rte_eth_dev_set_link_down(uint16_t port_id);
2880
2881 /**
2882 * Close a stopped Ethernet device. The device cannot be restarted!
2883 * The function frees all port resources.
2884 *
2885 * @param port_id
2886 * The port identifier of the Ethernet device.
2887 * @return
2888 * - Zero if the port is closed successfully.
2889 * - Negative if something went wrong.
2890 */
2891 int rte_eth_dev_close(uint16_t port_id);
2892
2893 /**
2894 * Reset an Ethernet device and keep its port ID.
2895 *
2896 * When a port has to be reset passively, the DPDK application can invoke
2897 * this function. For example, when a PF is reset, all its VFs should also
2898 * be reset. Normally a DPDK application invokes this function when the
2899 * RTE_ETH_EVENT_INTR_RESET event is detected, but it can also be used to start
2900 * a port reset in other circumstances.
2901 *
2902 * When this function is called, it first stops the port and then calls the
2903 * PMD specific dev_uninit() and dev_init() to return the port to its initial
2904 * state, in which no Tx and Rx queues are set up, as if the port has been
2905 * reset and not started. The port keeps the port ID it had before the
2906 * function call.
2907 *
2908 * After calling rte_eth_dev_reset(), the application should use
2909 * rte_eth_dev_configure(), rte_eth_rx_queue_setup(),
2910 * rte_eth_tx_queue_setup(), and rte_eth_dev_start()
2911 * to reconfigure the device as appropriate.
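 *
 * As a hedged illustration, a typical RTE_ETH_EVENT_INTR_RESET handler could
 * follow the sequence below; the queue counts, descriptor counts and the
 * port_conf/mb_pool variables are application-defined placeholders and error
 * handling is abbreviated:
 *
 * @code
 * int ret = rte_eth_dev_reset(port_id);
 * if (ret == 0) {
 *     ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, &port_conf);
 *     // re-create every Rx/Tx queue, then restart the port
 *     ret = rte_eth_rx_queue_setup(port_id, 0, 1024,
 *                                  rte_eth_dev_socket_id(port_id), NULL, mb_pool);
 *     ret = rte_eth_tx_queue_setup(port_id, 0, 1024,
 *                                  rte_eth_dev_socket_id(port_id), NULL);
 *     ret = rte_eth_dev_start(port_id);
 * }
 * @endcode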
2912 * 2913 * Note: To avoid unexpected behavior, the application should stop calling 2914 * Tx and Rx functions before calling rte_eth_dev_reset( ). For thread 2915 * safety, all these controlling functions should be called from the same 2916 * thread. 2917 * 2918 * @param port_id 2919 * The port identifier of the Ethernet device. 2920 * 2921 * @return 2922 * - (0) if successful. 2923 * - (-ENODEV) if *port_id* is invalid. 2924 * - (-ENOTSUP) if hardware doesn't support this function. 2925 * - (-EPERM) if not ran from the primary process. 2926 * - (-EIO) if re-initialisation failed or device is removed. 2927 * - (-ENOMEM) if the reset failed due to OOM. 2928 * - (-EAGAIN) if the reset temporarily failed and should be retried later. 2929 */ 2930 int rte_eth_dev_reset(uint16_t port_id); 2931 2932 /** 2933 * Enable receipt in promiscuous mode for an Ethernet device. 2934 * 2935 * @param port_id 2936 * The port identifier of the Ethernet device. 2937 * @return 2938 * - (0) if successful. 2939 * - (-ENOTSUP) if support for promiscuous_enable() does not exist 2940 * for the device. 2941 * - (-ENODEV) if *port_id* invalid. 2942 */ 2943 int rte_eth_promiscuous_enable(uint16_t port_id); 2944 2945 /** 2946 * Disable receipt in promiscuous mode for an Ethernet device. 2947 * 2948 * @param port_id 2949 * The port identifier of the Ethernet device. 2950 * @return 2951 * - (0) if successful. 2952 * - (-ENOTSUP) if support for promiscuous_disable() does not exist 2953 * for the device. 2954 * - (-ENODEV) if *port_id* invalid. 2955 */ 2956 int rte_eth_promiscuous_disable(uint16_t port_id); 2957 2958 /** 2959 * Return the value of promiscuous mode for an Ethernet device. 2960 * 2961 * @param port_id 2962 * The port identifier of the Ethernet device. 2963 * @return 2964 * - (1) if promiscuous is enabled 2965 * - (0) if promiscuous is disabled. 2966 * - (-1) on error 2967 */ 2968 int rte_eth_promiscuous_get(uint16_t port_id); 2969 2970 /** 2971 * Enable the receipt of any multicast frame by an Ethernet device. 2972 * 2973 * @param port_id 2974 * The port identifier of the Ethernet device. 2975 * @return 2976 * - (0) if successful. 2977 * - (-ENOTSUP) if support for allmulticast_enable() does not exist 2978 * for the device. 2979 * - (-ENODEV) if *port_id* invalid. 2980 */ 2981 int rte_eth_allmulticast_enable(uint16_t port_id); 2982 2983 /** 2984 * Disable the receipt of all multicast frames by an Ethernet device. 2985 * 2986 * @param port_id 2987 * The port identifier of the Ethernet device. 2988 * @return 2989 * - (0) if successful. 2990 * - (-ENOTSUP) if support for allmulticast_disable() does not exist 2991 * for the device. 2992 * - (-ENODEV) if *port_id* invalid. 2993 */ 2994 int rte_eth_allmulticast_disable(uint16_t port_id); 2995 2996 /** 2997 * Return the value of allmulticast mode for an Ethernet device. 2998 * 2999 * @param port_id 3000 * The port identifier of the Ethernet device. 3001 * @return 3002 * - (1) if allmulticast is enabled 3003 * - (0) if allmulticast is disabled. 3004 * - (-1) on error 3005 */ 3006 int rte_eth_allmulticast_get(uint16_t port_id); 3007 3008 /** 3009 * Retrieve the link status (up/down), the duplex mode (half/full), 3010 * the negotiation (auto/fixed), and if available, the speed (Mbps). 3011 * 3012 * It might need to wait up to 9 seconds. 3013 * @see rte_eth_link_get_nowait. 3014 * 3015 * @param port_id 3016 * The port identifier of the Ethernet device. 3017 * @param link 3018 * Link information written back. 3019 * @return 3020 * - (0) if successful. 
3021 * - (-ENOTSUP) if the function is not supported in PMD. 3022 * - (-ENODEV) if *port_id* invalid. 3023 * - (-EINVAL) if bad parameter. 3024 */ 3025 int rte_eth_link_get(uint16_t port_id, struct rte_eth_link *link); 3026 3027 /** 3028 * Retrieve the link status (up/down), the duplex mode (half/full), 3029 * the negotiation (auto/fixed), and if available, the speed (Mbps). 3030 * 3031 * @param port_id 3032 * The port identifier of the Ethernet device. 3033 * @param link 3034 * Link information written back. 3035 * @return 3036 * - (0) if successful. 3037 * - (-ENOTSUP) if the function is not supported in PMD. 3038 * - (-ENODEV) if *port_id* invalid. 3039 * - (-EINVAL) if bad parameter. 3040 */ 3041 int rte_eth_link_get_nowait(uint16_t port_id, struct rte_eth_link *link); 3042 3043 /** 3044 * @warning 3045 * @b EXPERIMENTAL: this API may change without prior notice. 3046 * 3047 * The function converts a link_speed to a string. It handles all special 3048 * values like unknown or none speed. 3049 * 3050 * @param link_speed 3051 * link_speed of rte_eth_link struct 3052 * @return 3053 * Link speed in textual format. It's pointer to immutable memory. 3054 * No free is required. 3055 */ 3056 __rte_experimental 3057 const char *rte_eth_link_speed_to_str(uint32_t link_speed); 3058 3059 /** 3060 * @warning 3061 * @b EXPERIMENTAL: this API may change without prior notice. 3062 * 3063 * The function converts a rte_eth_link struct representing a link status to 3064 * a string. 3065 * 3066 * @param str 3067 * A pointer to a string to be filled with textual representation of 3068 * device status. At least RTE_ETH_LINK_MAX_STR_LEN bytes should be allocated to 3069 * store default link status text. 3070 * @param len 3071 * Length of available memory at 'str' string. 3072 * @param eth_link 3073 * Link status returned by rte_eth_link_get function 3074 * @return 3075 * Number of bytes written to str array or -EINVAL if bad parameter. 3076 */ 3077 __rte_experimental 3078 int rte_eth_link_to_str(char *str, size_t len, 3079 const struct rte_eth_link *eth_link); 3080 3081 /** 3082 * Retrieve the general I/O statistics of an Ethernet device. 3083 * 3084 * @param port_id 3085 * The port identifier of the Ethernet device. 3086 * @param stats 3087 * A pointer to a structure of type *rte_eth_stats* to be filled with 3088 * the values of device counters for the following set of statistics: 3089 * - *ipackets* with the total of successfully received packets. 3090 * - *opackets* with the total of successfully transmitted packets. 3091 * - *ibytes* with the total of successfully received bytes. 3092 * - *obytes* with the total of successfully transmitted bytes. 3093 * - *ierrors* with the total of erroneous received packets. 3094 * - *oerrors* with the total of failed transmitted packets. 3095 * @return 3096 * Zero if successful. Non-zero otherwise. 3097 */ 3098 int rte_eth_stats_get(uint16_t port_id, struct rte_eth_stats *stats); 3099 3100 /** 3101 * Reset the general I/O statistics of an Ethernet device. 3102 * 3103 * @param port_id 3104 * The port identifier of the Ethernet device. 3105 * @return 3106 * - (0) if device notified to reset stats. 3107 * - (-ENOTSUP) if hardware doesn't support. 3108 * - (-ENODEV) if *port_id* invalid. 3109 * - (<0): Error code of the driver stats reset function. 3110 */ 3111 int rte_eth_stats_reset(uint16_t port_id); 3112 3113 /** 3114 * Retrieve names of extended statistics of an Ethernet device. 
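 *
 * A minimal usage sketch (error checks trimmed; the names/values pairing
 * relies on the index assumption described below):
 *
 * @code
 * int nb = rte_eth_xstats_get_names(port_id, NULL, 0); // query the count
 * struct rte_eth_xstat_name *names = malloc(nb * sizeof(*names));
 * struct rte_eth_xstat *values = malloc(nb * sizeof(*values));
 *
 * rte_eth_xstats_get_names(port_id, names, nb);
 * rte_eth_xstats_get(port_id, values, nb);
 * for (int i = 0; i < nb; i++)
 *     printf("%s: %" PRIu64 "\n", names[i].name, values[i].value);
 * @endcode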
3115 * 3116 * There is an assumption that 'xstat_names' and 'xstats' arrays are matched 3117 * by array index: 3118 * xstats_names[i].name => xstats[i].value 3119 * 3120 * And the array index is same with id field of 'struct rte_eth_xstat': 3121 * xstats[i].id == i 3122 * 3123 * This assumption makes key-value pair matching less flexible but simpler. 3124 * 3125 * @param port_id 3126 * The port identifier of the Ethernet device. 3127 * @param xstats_names 3128 * An rte_eth_xstat_name array of at least *size* elements to 3129 * be filled. If set to NULL, the function returns the required number 3130 * of elements. 3131 * @param size 3132 * The size of the xstats_names array (number of elements). 3133 * @return 3134 * - A positive value lower or equal to size: success. The return value 3135 * is the number of entries filled in the stats table. 3136 * - A positive value higher than size: error, the given statistics table 3137 * is too small. The return value corresponds to the size that should 3138 * be given to succeed. The entries in the table are not valid and 3139 * shall not be used by the caller. 3140 * - A negative value on error (invalid port ID). 3141 */ 3142 int rte_eth_xstats_get_names(uint16_t port_id, 3143 struct rte_eth_xstat_name *xstats_names, 3144 unsigned int size); 3145 3146 /** 3147 * Retrieve extended statistics of an Ethernet device. 3148 * 3149 * There is an assumption that 'xstat_names' and 'xstats' arrays are matched 3150 * by array index: 3151 * xstats_names[i].name => xstats[i].value 3152 * 3153 * And the array index is same with id field of 'struct rte_eth_xstat': 3154 * xstats[i].id == i 3155 * 3156 * This assumption makes key-value pair matching less flexible but simpler. 3157 * 3158 * @param port_id 3159 * The port identifier of the Ethernet device. 3160 * @param xstats 3161 * A pointer to a table of structure of type *rte_eth_xstat* 3162 * to be filled with device statistics ids and values. 3163 * This parameter can be set to NULL if and only if n is 0. 3164 * @param n 3165 * The size of the xstats array (number of elements). 3166 * If lower than the required number of elements, the function returns 3167 * the required number of elements. 3168 * If equal to zero, the xstats must be NULL, the function returns the 3169 * required number of elements. 3170 * @return 3171 * - A positive value lower or equal to n: success. The return value 3172 * is the number of entries filled in the stats table. 3173 * - A positive value higher than n: error, the given statistics table 3174 * is too small. The return value corresponds to the size that should 3175 * be given to succeed. The entries in the table are not valid and 3176 * shall not be used by the caller. 3177 * - A negative value on error (invalid port ID). 3178 */ 3179 int rte_eth_xstats_get(uint16_t port_id, struct rte_eth_xstat *xstats, 3180 unsigned int n); 3181 3182 /** 3183 * Retrieve names of extended statistics of an Ethernet device. 3184 * 3185 * @param port_id 3186 * The port identifier of the Ethernet device. 3187 * @param xstats_names 3188 * Array to be filled in with names of requested device statistics. 3189 * Must not be NULL if @p ids are specified (not NULL). 3190 * @param size 3191 * Number of elements in @p xstats_names array (if not NULL) and in 3192 * @p ids array (if not NULL). Must be 0 if both array pointers are NULL. 3193 * @param ids 3194 * IDs array given by app to retrieve specific statistics. 
May be NULL to 3195 * retrieve names of all available statistics or, if @p xstats_names is 3196 * NULL as well, just the number of available statistics. 3197 * @return 3198 * - A positive value lower or equal to size: success. The return value 3199 * is the number of entries filled in the stats table. 3200 * - A positive value higher than size: success. The given statistics table 3201 * is too small. The return value corresponds to the size that should 3202 * be given to succeed. The entries in the table are not valid and 3203 * shall not be used by the caller. 3204 * - A negative value on error. 3205 */ 3206 int 3207 rte_eth_xstats_get_names_by_id(uint16_t port_id, 3208 struct rte_eth_xstat_name *xstats_names, unsigned int size, 3209 uint64_t *ids); 3210 3211 /** 3212 * Retrieve extended statistics of an Ethernet device. 3213 * 3214 * @param port_id 3215 * The port identifier of the Ethernet device. 3216 * @param ids 3217 * IDs array given by app to retrieve specific statistics. May be NULL to 3218 * retrieve all available statistics or, if @p values is NULL as well, 3219 * just the number of available statistics. 3220 * @param values 3221 * Array to be filled in with requested device statistics. 3222 * Must not be NULL if ids are specified (not NULL). 3223 * @param size 3224 * Number of elements in @p values array (if not NULL) and in @p ids 3225 * array (if not NULL). Must be 0 if both array pointers are NULL. 3226 * @return 3227 * - A positive value lower or equal to size: success. The return value 3228 * is the number of entries filled in the stats table. 3229 * - A positive value higher than size: success: The given statistics table 3230 * is too small. The return value corresponds to the size that should 3231 * be given to succeed. The entries in the table are not valid and 3232 * shall not be used by the caller. 3233 * - A negative value on error. 3234 */ 3235 int rte_eth_xstats_get_by_id(uint16_t port_id, const uint64_t *ids, 3236 uint64_t *values, unsigned int size); 3237 3238 /** 3239 * Gets the ID of a statistic from its name. 3240 * 3241 * This function searches for the statistics using string compares, and 3242 * as such should not be used on the fast-path. For fast-path retrieval of 3243 * specific statistics, store the ID as provided in *id* from this function, 3244 * and pass the ID to rte_eth_xstats_get() 3245 * 3246 * @param port_id The port to look up statistics from 3247 * @param xstat_name The name of the statistic to return 3248 * @param[out] id A pointer to an app-supplied uint64_t which should be 3249 * set to the ID of the stat if the stat exists. 3250 * @return 3251 * 0 on success 3252 * -ENODEV for invalid port_id, 3253 * -EIO if device is removed, 3254 * -EINVAL if the xstat_name doesn't exist in port_id 3255 * -ENOMEM if bad parameter. 3256 */ 3257 int rte_eth_xstats_get_id_by_name(uint16_t port_id, const char *xstat_name, 3258 uint64_t *id); 3259 3260 /** 3261 * Reset extended statistics of an Ethernet device. 3262 * 3263 * @param port_id 3264 * The port identifier of the Ethernet device. 3265 * @return 3266 * - (0) if device notified to reset extended stats. 3267 * - (-ENOTSUP) if pmd doesn't support both 3268 * extended stats and basic stats reset. 3269 * - (-ENODEV) if *port_id* invalid. 3270 * - (<0): Error code of the driver xstats reset function. 3271 */ 3272 int rte_eth_xstats_reset(uint16_t port_id); 3273 3274 /** 3275 * Set a mapping for the specified transmit queue to the specified per-queue 3276 * statistics counter. 
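 *
 * For illustration only (the queue and counter indexes are arbitrary choices;
 * q_opackets is the per-queue counter array of struct rte_eth_stats):
 *
 * @code
 * struct rte_eth_stats stats;
 *
 * // count packets transmitted on Tx queue 3 in per-queue counter 0
 * rte_eth_dev_set_tx_queue_stats_mapping(port_id, 3, 0);
 * rte_eth_stats_get(port_id, &stats);
 * printf("txq3 packets: %" PRIu64 "\n", stats.q_opackets[0]);
 * @endcode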
3277 * 3278 * @param port_id 3279 * The port identifier of the Ethernet device. 3280 * @param tx_queue_id 3281 * The index of the transmit queue for which a queue stats mapping is required. 3282 * The value must be in the range [0, nb_tx_queue - 1] previously supplied 3283 * to rte_eth_dev_configure(). 3284 * @param stat_idx 3285 * The per-queue packet statistics functionality number that the transmit 3286 * queue is to be assigned. 3287 * The value must be in the range [0, RTE_ETHDEV_QUEUE_STAT_CNTRS - 1]. 3288 * Max RTE_ETHDEV_QUEUE_STAT_CNTRS being 256. 3289 * @return 3290 * Zero if successful. Non-zero otherwise. 3291 */ 3292 int rte_eth_dev_set_tx_queue_stats_mapping(uint16_t port_id, 3293 uint16_t tx_queue_id, uint8_t stat_idx); 3294 3295 /** 3296 * Set a mapping for the specified receive queue to the specified per-queue 3297 * statistics counter. 3298 * 3299 * @param port_id 3300 * The port identifier of the Ethernet device. 3301 * @param rx_queue_id 3302 * The index of the receive queue for which a queue stats mapping is required. 3303 * The value must be in the range [0, nb_rx_queue - 1] previously supplied 3304 * to rte_eth_dev_configure(). 3305 * @param stat_idx 3306 * The per-queue packet statistics functionality number that the receive 3307 * queue is to be assigned. 3308 * The value must be in the range [0, RTE_ETHDEV_QUEUE_STAT_CNTRS - 1]. 3309 * Max RTE_ETHDEV_QUEUE_STAT_CNTRS being 256. 3310 * @return 3311 * Zero if successful. Non-zero otherwise. 3312 */ 3313 int rte_eth_dev_set_rx_queue_stats_mapping(uint16_t port_id, 3314 uint16_t rx_queue_id, 3315 uint8_t stat_idx); 3316 3317 /** 3318 * Retrieve the Ethernet address of an Ethernet device. 3319 * 3320 * @param port_id 3321 * The port identifier of the Ethernet device. 3322 * @param mac_addr 3323 * A pointer to a structure of type *ether_addr* to be filled with 3324 * the Ethernet address of the Ethernet device. 3325 * @return 3326 * - (0) if successful 3327 * - (-ENODEV) if *port_id* invalid. 3328 * - (-EINVAL) if bad parameter. 3329 */ 3330 int rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr); 3331 3332 /** 3333 * @warning 3334 * @b EXPERIMENTAL: this API may change without prior notice 3335 * 3336 * Retrieve the Ethernet addresses of an Ethernet device. 3337 * 3338 * @param port_id 3339 * The port identifier of the Ethernet device. 3340 * @param ma 3341 * A pointer to an array of structures of type *ether_addr* to be filled with 3342 * the Ethernet addresses of the Ethernet device. 3343 * @param num 3344 * Number of elements in the @p ma array. 3345 * Note that rte_eth_dev_info::max_mac_addrs can be used to retrieve 3346 * max number of Ethernet addresses for given port. 3347 * @return 3348 * - number of retrieved addresses if successful 3349 * - (-ENODEV) if *port_id* invalid. 3350 * - (-EINVAL) if bad parameter. 3351 */ 3352 __rte_experimental 3353 int rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr *ma, 3354 unsigned int num); 3355 3356 /** 3357 * Retrieve the contextual information of an Ethernet device. 
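 *
 * A common pattern is to query the device limits before configuring it, as
 * sketched below (the requested queue counts are illustrative):
 *
 * @code
 * struct rte_eth_dev_info dev_info;
 * uint16_t nb_rxq = 4, nb_txq = 4;
 *
 * if (rte_eth_dev_info_get(port_id, &dev_info) == 0) {
 *     if (nb_rxq > dev_info.max_rx_queues)
 *         nb_rxq = dev_info.max_rx_queues;
 *     if (nb_txq > dev_info.max_tx_queues)
 *         nb_txq = dev_info.max_tx_queues;
 * }
 * @endcode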
3358 *
3359 * As part of this function, a number of fields in dev_info will be
3360 * initialized as follows:
3361 *
3362 * rx_desc_lim = lim
3363 * tx_desc_lim = lim
3364 *
3365 * where lim is defined within rte_eth_dev_info_get() as
3366 *
3367 * const struct rte_eth_desc_lim lim = {
3368 * .nb_max = UINT16_MAX,
3369 * .nb_min = 0,
3370 * .nb_align = 1,
3371 * .nb_seg_max = UINT16_MAX,
3372 * .nb_mtu_seg_max = UINT16_MAX,
3373 * };
3374 *
3375 * device = dev->device
3376 * min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN
3377 * max_mtu = UINT16_MAX
3378 *
3379 * The following fields will be populated if support for dev_infos_get()
3380 * exists for the device and the rte_eth_dev 'dev' has been populated
3381 * successfully with a call to it:
3382 *
3383 * driver_name = dev->device->driver->name
3384 * nb_rx_queues = dev->data->nb_rx_queues
3385 * nb_tx_queues = dev->data->nb_tx_queues
3386 * dev_flags = &dev->data->dev_flags
3387 *
3388 * @param port_id
3389 * The port identifier of the Ethernet device.
3390 * @param dev_info
3391 * A pointer to a structure of type *rte_eth_dev_info* to be filled with
3392 * the contextual information of the Ethernet device.
3393 * @return
3394 * - (0) if successful.
3395 * - (-ENOTSUP) if support for dev_infos_get() does not exist for the device.
3396 * - (-ENODEV) if *port_id* invalid.
3397 * - (-EINVAL) if bad parameter.
3398 */
3399 int rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info);
3400
3401 /**
3402 * @warning
3403 * @b EXPERIMENTAL: this API may change without prior notice.
3404 *
3405 * Retrieve the configuration of an Ethernet device.
3406 *
3407 * @param port_id
3408 * The port identifier of the Ethernet device.
3409 * @param dev_conf
3410 * Location for Ethernet device configuration to be filled in.
3411 * @return
3412 * - (0) if successful.
3413 * - (-ENODEV) if *port_id* invalid.
3414 * - (-EINVAL) if bad parameter.
3415 */
3416 __rte_experimental
3417 int rte_eth_dev_conf_get(uint16_t port_id, struct rte_eth_conf *dev_conf);
3418
3419 /**
3420 * Retrieve the firmware version of a device.
3421 *
3422 * @param port_id
3423 * The port identifier of the device.
3424 * @param fw_version
3425 * A pointer to a character array to store the firmware version of a device;
3426 * the string includes the terminating null. This buffer is allocated by the caller.
3427 * @param fw_size
3428 * The size of the character array pointed to by fw_version, which should be
3429 * large enough to store the firmware version of the device.
3430 * @return
3431 * - (0) if successful.
3432 * - (-ENOTSUP) if operation is not supported.
3433 * - (-ENODEV) if *port_id* invalid.
3434 * - (-EIO) if device is removed.
3435 * - (-EINVAL) if bad parameter.
3436 * - (>0) if *fw_size* is not enough to store the firmware version; the return
3437 * value is the size of the non-truncated string.
3438 */
3439 int rte_eth_dev_fw_version_get(uint16_t port_id,
3440 char *fw_version, size_t fw_size);
3441
3442 /**
3443 * Retrieve the supported packet types of an Ethernet device.
3444 *
3445 * When a packet type is announced as supported, it *must* be recognized by
3446 * the PMD.
For instance, if RTE_PTYPE_L2_ETHER, RTE_PTYPE_L2_ETHER_VLAN 3447 * and RTE_PTYPE_L3_IPV4 are announced, the PMD must return the following 3448 * packet types for these packets: 3449 * - Ether/IPv4 -> RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 3450 * - Ether/VLAN/IPv4 -> RTE_PTYPE_L2_ETHER_VLAN | RTE_PTYPE_L3_IPV4 3451 * - Ether/[anything else] -> RTE_PTYPE_L2_ETHER 3452 * - Ether/VLAN/[anything else] -> RTE_PTYPE_L2_ETHER_VLAN 3453 * 3454 * When a packet is received by a PMD, the most precise type must be 3455 * returned among the ones supported. However a PMD is allowed to set 3456 * packet type that is not in the supported list, at the condition that it 3457 * is more precise. Therefore, a PMD announcing no supported packet types 3458 * can still set a matching packet type in a received packet. 3459 * 3460 * @note 3461 * Better to invoke this API after the device is already started or Rx burst 3462 * function is decided, to obtain correct supported ptypes. 3463 * @note 3464 * if a given PMD does not report what ptypes it supports, then the supported 3465 * ptype count is reported as 0. 3466 * @param port_id 3467 * The port identifier of the Ethernet device. 3468 * @param ptype_mask 3469 * A hint of what kind of packet type which the caller is interested in. 3470 * @param ptypes 3471 * An array pointer to store adequate packet types, allocated by caller. 3472 * @param num 3473 * Size of the array pointed by param ptypes. 3474 * @return 3475 * - (>=0) Number of supported ptypes. If the number of types exceeds num, 3476 * only num entries will be filled into the ptypes array, but the full 3477 * count of supported ptypes will be returned. 3478 * - (-ENODEV) if *port_id* invalid. 3479 * - (-EINVAL) if bad parameter. 3480 */ 3481 int rte_eth_dev_get_supported_ptypes(uint16_t port_id, uint32_t ptype_mask, 3482 uint32_t *ptypes, int num); 3483 /** 3484 * Inform Ethernet device about reduced range of packet types to handle. 3485 * 3486 * Application can use this function to set only specific ptypes that it's 3487 * interested. This information can be used by the PMD to optimize Rx path. 3488 * 3489 * The function accepts an array `set_ptypes` allocated by the caller to 3490 * store the packet types set by the driver, the last element of the array 3491 * is set to RTE_PTYPE_UNKNOWN. The size of the `set_ptype` array should be 3492 * `rte_eth_dev_get_supported_ptypes() + 1` else it might only be filled 3493 * partially. 3494 * 3495 * @param port_id 3496 * The port identifier of the Ethernet device. 3497 * @param ptype_mask 3498 * The ptype family that application is interested in should be bitwise OR of 3499 * RTE_PTYPE_*_MASK or 0. 3500 * @param set_ptypes 3501 * An array pointer to store set packet types, allocated by caller. The 3502 * function marks the end of array with RTE_PTYPE_UNKNOWN. 3503 * @param num 3504 * Size of the array pointed by param ptypes. 3505 * Should be rte_eth_dev_get_supported_ptypes() + 1 to accommodate the 3506 * set ptypes. 3507 * @return 3508 * - (0) if Success. 3509 * - (-ENODEV) if *port_id* invalid. 3510 * - (-EINVAL) if *ptype_mask* is invalid (or) set_ptypes is NULL and 3511 * num > 0. 3512 */ 3513 int rte_eth_dev_set_ptypes(uint16_t port_id, uint32_t ptype_mask, 3514 uint32_t *set_ptypes, unsigned int num); 3515 3516 /** 3517 * Retrieve the MTU of an Ethernet device. 3518 * 3519 * @param port_id 3520 * The port identifier of the Ethernet device. 3521 * @param mtu 3522 * A pointer to a uint16_t where the retrieved MTU is to be stored. 
3523 * @return
3524 * - (0) if successful.
3525 * - (-ENODEV) if *port_id* invalid.
3526 * - (-EINVAL) if bad parameter.
3527 */
3528 int rte_eth_dev_get_mtu(uint16_t port_id, uint16_t *mtu);
3529
3530 /**
3531 * Change the MTU of an Ethernet device.
3532 *
3533 * @param port_id
3534 * The port identifier of the Ethernet device.
3535 * @param mtu
3536 * A uint16_t for the MTU to be applied.
3537 * @return
3538 * - (0) if successful.
3539 * - (-ENOTSUP) if operation is not supported.
3540 * - (-ENODEV) if *port_id* invalid.
3541 * - (-EIO) if device is removed.
3542 * - (-EINVAL) if *mtu* is invalid; validation of the MTU can occur within
3543 * rte_eth_dev_set_mtu() if dev_infos_get is supported by the device, or
3544 * when the MTU is set using dev->dev_ops->mtu_set.
3545 * - (-EBUSY) if the operation is not allowed when the port is running.
3546 */
3547 int rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu);
3548
3549 /**
3550 * Enable/Disable hardware filtering by an Ethernet device of received
3551 * VLAN packets tagged with a given VLAN Tag Identifier.
3552 *
3553 * @param port_id
3554 * The port identifier of the Ethernet device.
3555 * @param vlan_id
3556 * The VLAN Tag Identifier whose filtering must be enabled or disabled.
3557 * @param on
3558 * If > 0, enable VLAN filtering of VLAN packets tagged with *vlan_id*.
3559 * Otherwise, disable VLAN filtering of VLAN packets tagged with *vlan_id*.
3560 * @return
3561 * - (0) if successful.
3562 * - (-ENOTSUP) if hardware-assisted VLAN filtering not configured.
3563 * - (-ENODEV) if *port_id* invalid.
3564 * - (-EIO) if device is removed.
3565 * - (-ENOSYS) if VLAN filtering on *port_id* disabled.
3566 * - (-EINVAL) if *vlan_id* > 4095.
3567 */
3568 int rte_eth_dev_vlan_filter(uint16_t port_id, uint16_t vlan_id, int on);
3569
3570 /**
3571 * Enable/Disable hardware VLAN stripping on an Rx queue of an Ethernet device.
3572 *
3573 * @param port_id
3574 * The port identifier of the Ethernet device.
3575 * @param rx_queue_id
3576 * The index of the receive queue on which VLAN stripping is to be enabled or disabled.
3577 * The value must be in the range [0, nb_rx_queue - 1] previously supplied
3578 * to rte_eth_dev_configure().
3579 * @param on
3580 * If 1, enable VLAN stripping on the receive queue of the Ethernet port.
3581 * If 0, disable VLAN stripping on the receive queue of the Ethernet port.
3582 * @return
3583 * - (0) if successful.
3584 * - (-ENOTSUP) if hardware-assisted VLAN stripping not configured.
3585 * - (-ENODEV) if *port_id* invalid.
3586 * - (-EINVAL) if *rx_queue_id* invalid.
3587 */
3588 int rte_eth_dev_set_vlan_strip_on_queue(uint16_t port_id, uint16_t rx_queue_id,
3589 int on);
3590
3591 /**
3592 * Set the outer VLAN Ether Type used by an Ethernet device; it can be inserted
3593 * into the VLAN header.
3594 *
3595 * @param port_id
3596 * The port identifier of the Ethernet device.
3597 * @param vlan_type
3598 * The VLAN type.
3599 * @param tag_type
3600 * The Tag Protocol ID.
3601 * @return
3602 * - (0) if successful.
3603 * - (-ENOTSUP) if hardware-assisted VLAN TPID setup is not supported.
3604 * - (-ENODEV) if *port_id* invalid.
3605 * - (-EIO) if device is removed.
3606 */
3607 int rte_eth_dev_set_vlan_ether_type(uint16_t port_id,
3608 enum rte_vlan_type vlan_type,
3609 uint16_t tag_type);
3610
3611 /**
3612 * Set VLAN offload configuration on an Ethernet device.
3613 *
3614 * @param port_id
3615 * The port identifier of the Ethernet device.
3616 * @param offload_mask 3617 * The VLAN Offload bit mask can be mixed use with "OR" 3618 * RTE_ETH_VLAN_STRIP_OFFLOAD 3619 * RTE_ETH_VLAN_FILTER_OFFLOAD 3620 * RTE_ETH_VLAN_EXTEND_OFFLOAD 3621 * RTE_ETH_QINQ_STRIP_OFFLOAD 3622 * @return 3623 * - (0) if successful. 3624 * - (-ENOTSUP) if hardware-assisted VLAN filtering not configured. 3625 * - (-ENODEV) if *port_id* invalid. 3626 * - (-EIO) if device is removed. 3627 */ 3628 int rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask); 3629 3630 /** 3631 * Read VLAN Offload configuration from an Ethernet device 3632 * 3633 * @param port_id 3634 * The port identifier of the Ethernet device. 3635 * @return 3636 * - (>0) if successful. Bit mask to indicate 3637 * RTE_ETH_VLAN_STRIP_OFFLOAD 3638 * RTE_ETH_VLAN_FILTER_OFFLOAD 3639 * RTE_ETH_VLAN_EXTEND_OFFLOAD 3640 * RTE_ETH_QINQ_STRIP_OFFLOAD 3641 * - (-ENODEV) if *port_id* invalid. 3642 */ 3643 int rte_eth_dev_get_vlan_offload(uint16_t port_id); 3644 3645 /** 3646 * Set port based Tx VLAN insertion on or off. 3647 * 3648 * @param port_id 3649 * The port identifier of the Ethernet device. 3650 * @param pvid 3651 * Port based Tx VLAN identifier together with user priority. 3652 * @param on 3653 * Turn on or off the port based Tx VLAN insertion. 3654 * 3655 * @return 3656 * - (0) if successful. 3657 * - negative if failed. 3658 */ 3659 int rte_eth_dev_set_vlan_pvid(uint16_t port_id, uint16_t pvid, int on); 3660 3661 /** 3662 * @warning 3663 * @b EXPERIMENTAL: this API may change without prior notice. 3664 * 3665 * Set Rx queue available descriptors threshold. 3666 * 3667 * @param port_id 3668 * The port identifier of the Ethernet device. 3669 * @param queue_id 3670 * The index of the receive queue. 3671 * @param avail_thresh 3672 * The available descriptors threshold is percentage of Rx queue size 3673 * which describes the availability of Rx queue for hardware. 3674 * If the Rx queue availability is below it, 3675 * the event RTE_ETH_EVENT_RX_AVAIL_THRESH is triggered. 3676 * [1-99] to set a new available descriptors threshold. 3677 * 0 to disable threshold monitoring. 3678 * 3679 * @return 3680 * - 0 if successful. 3681 * - (-ENODEV) if @p port_id is invalid. 3682 * - (-EINVAL) if bad parameter. 3683 * - (-ENOTSUP) if available Rx descriptors threshold is not supported. 3684 * - (-EIO) if device is removed. 3685 */ 3686 __rte_experimental 3687 int rte_eth_rx_avail_thresh_set(uint16_t port_id, uint16_t queue_id, 3688 uint8_t avail_thresh); 3689 3690 /** 3691 * @warning 3692 * @b EXPERIMENTAL: this API may change without prior notice. 3693 * 3694 * Find Rx queue with RTE_ETH_EVENT_RX_AVAIL_THRESH event pending. 3695 * 3696 * @param port_id 3697 * The port identifier of the Ethernet device. 3698 * @param[inout] queue_id 3699 * On input starting Rx queue index to search from. 3700 * If the queue_id is bigger than maximum queue ID of the port, 3701 * search is started from 0. So that application can keep calling 3702 * this function to handle all pending events with a simple increment 3703 * of queue_id on the next call. 3704 * On output if return value is 1, Rx queue index with the event pending. 3705 * @param[out] avail_thresh 3706 * Location for available descriptors threshold of the found Rx queue. 3707 * 3708 * @return 3709 * - 1 if an Rx queue with pending event is found. 3710 * - 0 if no Rx queue with pending event is found. 3711 * - (-ENODEV) if @p port_id is invalid. 3712 * - (-EINVAL) if bad parameter (e.g. @p queue_id is NULL). 
3713 * - (-ENOTSUP) if operation is not supported.
3714 * - (-EIO) if device is removed.
3715 */
3716 __rte_experimental
3717 int rte_eth_rx_avail_thresh_query(uint16_t port_id, uint16_t *queue_id,
3718 uint8_t *avail_thresh);
3719
3720 typedef void (*buffer_tx_error_fn)(struct rte_mbuf **unsent, uint16_t count,
3721 void *userdata);
3722
3723 /**
3724 * Structure used to buffer packets for future Tx.
3725 * Used by APIs rte_eth_tx_buffer() and rte_eth_tx_buffer_flush().
3726 */
3727 struct rte_eth_dev_tx_buffer {
3728 buffer_tx_error_fn error_callback;
3729 void *error_userdata;
3730 uint16_t size; /**< Size of buffer for buffered Tx */
3731 uint16_t length; /**< Number of packets in the array */
3732 /** Pending packets to be sent on explicit flush or when full */
3733 struct rte_mbuf *pkts[];
3734 };
3735
3736 /**
3737 * Calculate the size of the Tx buffer.
3738 *
3739 * @param sz
3740 * Number of stored packets.
3741 */
3742 #define RTE_ETH_TX_BUFFER_SIZE(sz) \
3743 (sizeof(struct rte_eth_dev_tx_buffer) + (sz) * sizeof(struct rte_mbuf *))
3744
3745 /**
3746 * Initialize default values for buffered transmitting.
3747 *
3748 * @param buffer
3749 * Tx buffer to be initialized.
3750 * @param size
3751 * Buffer size.
3752 * @return
3753 * 0 if no error.
3754 */
3755 int
3756 rte_eth_tx_buffer_init(struct rte_eth_dev_tx_buffer *buffer, uint16_t size);
3757
3758 /**
3759 * Configure a callback for buffered packets which cannot be sent.
3760 *
3761 * Register a specific callback to be called when an attempt is made to send
3762 * all packets buffered on an Ethernet port, but not all packets can
3763 * successfully be sent. The callback registered here will be called only
3764 * from calls to rte_eth_tx_buffer() and rte_eth_tx_buffer_flush() APIs.
3765 * The default callback configured for each queue simply frees the
3766 * packets back to their originating mempool. If additional behaviour is required,
3767 * for example, to count dropped packets, or to retry transmission of packets
3768 * which cannot be sent, this function should be used to register a suitable
3769 * callback function to implement the desired behaviour.
3770 * The example callback rte_eth_tx_buffer_count_callback() is also
3771 * provided as a reference.
3772 *
3773 * @param buffer
3774 * The Tx buffer on which the callback is to be registered.
3775 * @param callback
3776 * The function to be used as the callback.
3777 * @param userdata
3778 * Arbitrary parameter to be passed to the callback function.
3779 * @return
3780 * 0 on success, or -EINVAL if bad parameter.
3781 */
3782 int
3783 rte_eth_tx_buffer_set_err_callback(struct rte_eth_dev_tx_buffer *buffer,
3784 buffer_tx_error_fn callback, void *userdata);
3785
3786 /**
3787 * Callback function for silently dropping unsent buffered packets.
3788 *
3789 * This function can be passed to rte_eth_tx_buffer_set_err_callback() to
3790 * adjust the default behavior when buffered packets cannot be sent. This
3791 * function drops any unsent packets silently and is used by Tx buffered
3792 * operations as default behavior.
3793 *
3794 * NOTE: this function should not be called directly; instead, it should be used
3795 * as a callback for packet buffering.
3796 *
3797 * NOTE: when configuring this function as a callback with
3798 * rte_eth_tx_buffer_set_err_callback(), the final, userdata parameter
3799 * should point to a uint64_t value.
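 *
 * As a hedged sketch, replacing this default drop behaviour with the counting
 * callback declared below might look as follows; the buffer allocation via
 * rte_malloc() (declared in rte_malloc.h) and the burst size of 32 are
 * illustrative choices:
 *
 * @code
 * static uint64_t dropped;
 * struct rte_eth_dev_tx_buffer *buffer =
 *     rte_malloc("tx_buffer", RTE_ETH_TX_BUFFER_SIZE(32), 0);
 *
 * rte_eth_tx_buffer_init(buffer, 32);
 * rte_eth_tx_buffer_set_err_callback(buffer,
 *     rte_eth_tx_buffer_count_callback, &dropped);
 * @endcode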
3800 * 3801 * @param pkts 3802 * The previously buffered packets which could not be sent 3803 * @param unsent 3804 * The number of unsent packets in the pkts array 3805 * @param userdata 3806 * Not used 3807 */ 3808 void 3809 rte_eth_tx_buffer_drop_callback(struct rte_mbuf **pkts, uint16_t unsent, 3810 void *userdata); 3811 3812 /** 3813 * Callback function for tracking unsent buffered packets. 3814 * 3815 * This function can be passed to rte_eth_tx_buffer_set_err_callback() to 3816 * adjust the default behavior when buffered packets cannot be sent. This 3817 * function drops any unsent packets, but also updates a user-supplied counter 3818 * to track the overall number of packets dropped. The counter should be an 3819 * uint64_t variable. 3820 * 3821 * NOTE: this function should not be called directly, instead it should be used 3822 * as a callback for packet buffering. 3823 * 3824 * NOTE: when configuring this function as a callback with 3825 * rte_eth_tx_buffer_set_err_callback(), the final, userdata parameter 3826 * should point to an uint64_t value. 3827 * 3828 * @param pkts 3829 * The previously buffered packets which could not be sent 3830 * @param unsent 3831 * The number of unsent packets in the pkts array 3832 * @param userdata 3833 * Pointer to an uint64_t value, which will be incremented by unsent 3834 */ 3835 void 3836 rte_eth_tx_buffer_count_callback(struct rte_mbuf **pkts, uint16_t unsent, 3837 void *userdata); 3838 3839 /** 3840 * Request the driver to free mbufs currently cached by the driver. The 3841 * driver will only free the mbuf if it is no longer in use. It is the 3842 * application's responsibility to ensure rte_eth_tx_buffer_flush(..) is 3843 * called if needed. 3844 * 3845 * @param port_id 3846 * The port identifier of the Ethernet device. 3847 * @param queue_id 3848 * The index of the transmit queue through which output packets must be 3849 * sent. 3850 * The value must be in the range [0, nb_tx_queue - 1] previously supplied 3851 * to rte_eth_dev_configure(). 3852 * @param free_cnt 3853 * Maximum number of packets to free. Use 0 to indicate all possible packets 3854 * should be freed. Note that a packet may be using multiple mbufs. 3855 * @return 3856 * Failure: < 0 3857 * -ENODEV: Invalid interface 3858 * -EIO: device is removed 3859 * -ENOTSUP: Driver does not support function 3860 * Success: >= 0 3861 * 0-n: Number of packets freed. More packets may still remain in ring that 3862 * are in use. 3863 */ 3864 int 3865 rte_eth_tx_done_cleanup(uint16_t port_id, uint16_t queue_id, uint32_t free_cnt); 3866 3867 /** 3868 * Subtypes for IPsec offload event(@ref RTE_ETH_EVENT_IPSEC) raised by 3869 * eth device. 3870 */ 3871 enum rte_eth_event_ipsec_subtype { 3872 /** Unknown event type */ 3873 RTE_ETH_EVENT_IPSEC_UNKNOWN = 0, 3874 /** Sequence number overflow */ 3875 RTE_ETH_EVENT_IPSEC_ESN_OVERFLOW, 3876 /** Soft time expiry of SA */ 3877 RTE_ETH_EVENT_IPSEC_SA_TIME_EXPIRY, 3878 /** Soft byte expiry of SA */ 3879 RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY, 3880 /** Max value of this enum */ 3881 RTE_ETH_EVENT_IPSEC_MAX 3882 }; 3883 3884 /** 3885 * Descriptor for @ref RTE_ETH_EVENT_IPSEC event. Used by eth dev to send extra 3886 * information of the IPsec offload event. 3887 */ 3888 struct rte_eth_event_ipsec_desc { 3889 /** Type of RTE_ETH_EVENT_IPSEC_* event */ 3890 enum rte_eth_event_ipsec_subtype subtype; 3891 /** 3892 * Event specific metadata. 
3893 * 3894 * For the following events, *userdata* registered 3895 * with the *rte_security_session* would be returned 3896 * as metadata, 3897 * 3898 * - @ref RTE_ETH_EVENT_IPSEC_ESN_OVERFLOW 3899 * - @ref RTE_ETH_EVENT_IPSEC_SA_TIME_EXPIRY 3900 * - @ref RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY 3901 * 3902 * @see struct rte_security_session_conf 3903 * 3904 */ 3905 uint64_t metadata; 3906 }; 3907 3908 /** 3909 * The eth device event type for interrupt, and maybe others in the future. 3910 */ 3911 enum rte_eth_event_type { 3912 RTE_ETH_EVENT_UNKNOWN, /**< unknown event type */ 3913 RTE_ETH_EVENT_INTR_LSC, /**< lsc interrupt event */ 3914 /** queue state event (enabled/disabled) */ 3915 RTE_ETH_EVENT_QUEUE_STATE, 3916 /** reset interrupt event, sent to VF on PF reset */ 3917 RTE_ETH_EVENT_INTR_RESET, 3918 RTE_ETH_EVENT_VF_MBOX, /**< message from the VF received by PF */ 3919 RTE_ETH_EVENT_MACSEC, /**< MACsec offload related event */ 3920 RTE_ETH_EVENT_INTR_RMV, /**< device removal event */ 3921 RTE_ETH_EVENT_NEW, /**< port is probed */ 3922 RTE_ETH_EVENT_DESTROY, /**< port is released */ 3923 RTE_ETH_EVENT_IPSEC, /**< IPsec offload related event */ 3924 RTE_ETH_EVENT_FLOW_AGED,/**< New aged-out flows is detected */ 3925 /** 3926 * Number of available Rx descriptors is smaller than the threshold. 3927 * @see rte_eth_rx_avail_thresh_set() 3928 */ 3929 RTE_ETH_EVENT_RX_AVAIL_THRESH, 3930 RTE_ETH_EVENT_MAX /**< max value of this enum */ 3931 }; 3932 3933 /** User application callback to be registered for interrupts. */ 3934 typedef int (*rte_eth_dev_cb_fn)(uint16_t port_id, 3935 enum rte_eth_event_type event, void *cb_arg, void *ret_param); 3936 3937 /** 3938 * Register a callback function for port event. 3939 * 3940 * @param port_id 3941 * Port ID. 3942 * RTE_ETH_ALL means register the event for all port ids. 3943 * @param event 3944 * Event interested. 3945 * @param cb_fn 3946 * User supplied callback function to be called. 3947 * @param cb_arg 3948 * Pointer to the parameters for the registered callback. 3949 * 3950 * @return 3951 * - On success, zero. 3952 * - On failure, a negative value. 3953 */ 3954 int rte_eth_dev_callback_register(uint16_t port_id, 3955 enum rte_eth_event_type event, 3956 rte_eth_dev_cb_fn cb_fn, void *cb_arg); 3957 3958 /** 3959 * Unregister a callback function for port event. 3960 * 3961 * @param port_id 3962 * Port ID. 3963 * RTE_ETH_ALL means unregister the event for all port ids. 3964 * @param event 3965 * Event interested. 3966 * @param cb_fn 3967 * User supplied callback function to be called. 3968 * @param cb_arg 3969 * Pointer to the parameters for the registered callback. -1 means to 3970 * remove all for the same callback address and same event. 3971 * 3972 * @return 3973 * - On success, zero. 3974 * - On failure, a negative value. 3975 */ 3976 int rte_eth_dev_callback_unregister(uint16_t port_id, 3977 enum rte_eth_event_type event, 3978 rte_eth_dev_cb_fn cb_fn, void *cb_arg); 3979 3980 /** 3981 * When there is no Rx packet coming in Rx Queue for a long time, we can 3982 * sleep lcore related to Rx Queue for power saving, and enable Rx interrupt 3983 * to be triggered when Rx packet arrives. 3984 * 3985 * The rte_eth_dev_rx_intr_enable() function enables Rx queue 3986 * interrupt on specific Rx queue of a port. 3987 * 3988 * @param port_id 3989 * The port identifier of the Ethernet device. 3990 * @param queue_id 3991 * The index of the receive queue from which to retrieve input packets. 
3992 * The value must be in the range [0, nb_rx_queue - 1] previously supplied 3993 * to rte_eth_dev_configure(). 3994 * @return 3995 * - (0) if successful. 3996 * - (-ENOTSUP) if underlying hardware OR driver doesn't support 3997 * that operation. 3998 * - (-ENODEV) if *port_id* invalid. 3999 * - (-EIO) if device is removed. 4000 */ 4001 int rte_eth_dev_rx_intr_enable(uint16_t port_id, uint16_t queue_id); 4002 4003 /** 4004 * When lcore wakes up from Rx interrupt indicating packet coming, disable Rx 4005 * interrupt and returns to polling mode. 4006 * 4007 * The rte_eth_dev_rx_intr_disable() function disables Rx queue 4008 * interrupt on specific Rx queue of a port. 4009 * 4010 * @param port_id 4011 * The port identifier of the Ethernet device. 4012 * @param queue_id 4013 * The index of the receive queue from which to retrieve input packets. 4014 * The value must be in the range [0, nb_rx_queue - 1] previously supplied 4015 * to rte_eth_dev_configure(). 4016 * @return 4017 * - (0) if successful. 4018 * - (-ENOTSUP) if underlying hardware OR driver doesn't support 4019 * that operation. 4020 * - (-ENODEV) if *port_id* invalid. 4021 * - (-EIO) if device is removed. 4022 */ 4023 int rte_eth_dev_rx_intr_disable(uint16_t port_id, uint16_t queue_id); 4024 4025 /** 4026 * Rx Interrupt control per port. 4027 * 4028 * @param port_id 4029 * The port identifier of the Ethernet device. 4030 * @param epfd 4031 * Epoll instance fd which the intr vector associated to. 4032 * Using RTE_EPOLL_PER_THREAD allows to use per thread epoll instance. 4033 * @param op 4034 * The operation be performed for the vector. 4035 * Operation type of {RTE_INTR_EVENT_ADD, RTE_INTR_EVENT_DEL}. 4036 * @param data 4037 * User raw data. 4038 * @return 4039 * - On success, zero. 4040 * - On failure, a negative value. 4041 */ 4042 int rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data); 4043 4044 /** 4045 * Rx Interrupt control per queue. 4046 * 4047 * @param port_id 4048 * The port identifier of the Ethernet device. 4049 * @param queue_id 4050 * The index of the receive queue from which to retrieve input packets. 4051 * The value must be in the range [0, nb_rx_queue - 1] previously supplied 4052 * to rte_eth_dev_configure(). 4053 * @param epfd 4054 * Epoll instance fd which the intr vector associated to. 4055 * Using RTE_EPOLL_PER_THREAD allows to use per thread epoll instance. 4056 * @param op 4057 * The operation be performed for the vector. 4058 * Operation type of {RTE_INTR_EVENT_ADD, RTE_INTR_EVENT_DEL}. 4059 * @param data 4060 * User raw data. 4061 * @return 4062 * - On success, zero. 4063 * - On failure, a negative value. 4064 */ 4065 int rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id, 4066 int epfd, int op, void *data); 4067 4068 /** 4069 * Get interrupt fd per Rx queue. 4070 * 4071 * @param port_id 4072 * The port identifier of the Ethernet device. 4073 * @param queue_id 4074 * The index of the receive queue from which to retrieve input packets. 4075 * The value must be in the range [0, nb_rx_queue - 1] previously supplied 4076 * to rte_eth_dev_configure(). 4077 * @return 4078 * - (>=0) the interrupt fd associated to the requested Rx queue if 4079 * successful. 4080 * - (-1) on error. 4081 */ 4082 int 4083 rte_eth_dev_rx_intr_ctl_q_get_fd(uint16_t port_id, uint16_t queue_id); 4084 4085 /** 4086 * Turn on the LED on the Ethernet device. 4087 * This function turns on the LED on the Ethernet device. 4088 * 4089 * @param port_id 4090 * The port identifier of the Ethernet device. 
4091 * @return
4092 * - (0) if successful.
4093 * - (-ENOTSUP) if underlying hardware OR driver doesn't support
4094 * that operation.
4095 * - (-ENODEV) if *port_id* invalid.
4096 * - (-EIO) if device is removed.
4097 */
4098 int rte_eth_led_on(uint16_t port_id);
4099
4100 /**
4101 * Turn off the LED on the Ethernet device.
4102 * This function turns off the LED on the Ethernet device.
4103 *
4104 * @param port_id
4105 * The port identifier of the Ethernet device.
4106 * @return
4107 * - (0) if successful.
4108 * - (-ENOTSUP) if underlying hardware OR driver doesn't support
4109 * that operation.
4110 * - (-ENODEV) if *port_id* invalid.
4111 * - (-EIO) if device is removed.
4112 */
4113 int rte_eth_led_off(uint16_t port_id);
4114
4115 /**
4116 * @warning
4117 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
4118 *
4119 * Get Forward Error Correction (FEC) capability.
4120 *
4121 * @param port_id
4122 * The port identifier of the Ethernet device.
4123 * @param speed_fec_capa
4124 * speed_fec_capa is an output-only array of per-speed capabilities.
4125 * If set to NULL, the function returns the required number
4126 * of array entries.
4127 * @param num
4128 * The number of elements in the speed_fec_capa array.
4129 *
4130 * @return
4131 * - A non-negative value lower than or equal to num: success. The return value
4132 * is the number of entries filled in the FEC capability array.
4133 * - A non-negative value higher than num: error, the given FEC capability array
4134 * is too small. The return value corresponds to the num that should
4135 * be given to succeed. The entries in the FEC capability array are not valid and
4136 * shall not be used by the caller.
4137 * - (-ENOTSUP) if underlying hardware OR driver doesn't support
4138 * that operation.
4139 * - (-EIO) if device is removed.
4140 * - (-ENODEV) if *port_id* invalid.
4141 * - (-EINVAL) if *num* or *speed_fec_capa* invalid.
4142 */
4143 __rte_experimental
4144 int rte_eth_fec_get_capability(uint16_t port_id,
4145 struct rte_eth_fec_capa *speed_fec_capa,
4146 unsigned int num);
4147
4148 /**
4149 * @warning
4150 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
4151 *
4152 * Get the current Forward Error Correction (FEC) mode.
4153 * If the link is down and AUTO is enabled, AUTO is returned; otherwise,
4154 * the configured FEC mode is returned.
4155 * If the link is up, the current FEC mode is returned.
4156 *
4157 * @param port_id
4158 * The port identifier of the Ethernet device.
4159 * @param fec_capa
4160 * A bitmask of enabled FEC modes. If the AUTO bit is set, other
4161 * bits specify FEC modes which may be negotiated. If the AUTO
4162 * bit is clear, specify FEC modes to be used (only one valid
4163 * mode per speed may be set).
4164 * @return
4165 * - (0) if successful.
4166 * - (-ENOTSUP) if underlying hardware OR driver doesn't support
4167 * that operation.
4168 * - (-EIO) if device is removed.
4169 * - (-ENODEV) if *port_id* invalid.
4170 */
4171 __rte_experimental
4172 int rte_eth_fec_get(uint16_t port_id, uint32_t *fec_capa);
4173
4174 /**
4175 * @warning
4176 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
4177 *
4178 * Set Forward Error Correction (FEC) mode.
4179 *
4180 * @param port_id
4181 * The port identifier of the Ethernet device.
4182 * @param fec_capa
4183 * A bitmask of allowed FEC modes. If the AUTO bit is set, other
4184 * bits specify FEC modes which may be negotiated. If the AUTO
4185 * bit is clear, specify FEC modes to be used (only one valid
4186 * mode per speed may be set).
4187 * @return 4188 * - (0) if successful. 4189 * - (-EINVAL) if the FEC mode is not valid. 4190 * - (-ENOTSUP) if underlying hardware OR driver doesn't support. 4191 * - (-EIO) if device is removed. 4192 * - (-ENODEV) if *port_id* invalid. 4193 */ 4194 __rte_experimental 4195 int rte_eth_fec_set(uint16_t port_id, uint32_t fec_capa); 4196 4197 /** 4198 * Get current status of the Ethernet link flow control for Ethernet device 4199 * 4200 * @param port_id 4201 * The port identifier of the Ethernet device. 4202 * @param fc_conf 4203 * The pointer to the structure where to store the flow control parameters. 4204 * @return 4205 * - (0) if successful. 4206 * - (-ENOTSUP) if hardware doesn't support flow control. 4207 * - (-ENODEV) if *port_id* invalid. 4208 * - (-EIO) if device is removed. 4209 * - (-EINVAL) if bad parameter. 4210 */ 4211 int rte_eth_dev_flow_ctrl_get(uint16_t port_id, 4212 struct rte_eth_fc_conf *fc_conf); 4213 4214 /** 4215 * Configure the Ethernet link flow control for Ethernet device 4216 * 4217 * @param port_id 4218 * The port identifier of the Ethernet device. 4219 * @param fc_conf 4220 * The pointer to the structure of the flow control parameters. 4221 * @return 4222 * - (0) if successful. 4223 * - (-ENOTSUP) if hardware doesn't support flow control mode. 4224 * - (-ENODEV) if *port_id* invalid. 4225 * - (-EINVAL) if bad parameter 4226 * - (-EIO) if flow control setup failure or device is removed. 4227 */ 4228 int rte_eth_dev_flow_ctrl_set(uint16_t port_id, 4229 struct rte_eth_fc_conf *fc_conf); 4230 4231 /** 4232 * Configure the Ethernet priority flow control under DCB environment 4233 * for Ethernet device. 4234 * 4235 * @param port_id 4236 * The port identifier of the Ethernet device. 4237 * @param pfc_conf 4238 * The pointer to the structure of the priority flow control parameters. 4239 * @return 4240 * - (0) if successful. 4241 * - (-ENOTSUP) if hardware doesn't support priority flow control mode. 4242 * - (-ENODEV) if *port_id* invalid. 4243 * - (-EINVAL) if bad parameter 4244 * - (-EIO) if flow control setup failure or device is removed. 4245 */ 4246 int rte_eth_dev_priority_flow_ctrl_set(uint16_t port_id, 4247 struct rte_eth_pfc_conf *pfc_conf); 4248 4249 /** 4250 * Add a MAC address to the set used for filtering incoming packets. 4251 * 4252 * @param port_id 4253 * The port identifier of the Ethernet device. 4254 * @param mac_addr 4255 * The MAC address to add. 4256 * @param pool 4257 * VMDq pool index to associate address with (if VMDq is enabled). If VMDq is 4258 * not enabled, this should be set to 0. 4259 * @return 4260 * - (0) if successfully added or *mac_addr* was already added. 4261 * - (-ENOTSUP) if hardware doesn't support this feature. 4262 * - (-ENODEV) if *port* is invalid. 4263 * - (-EIO) if device is removed. 4264 * - (-ENOSPC) if no more MAC addresses can be added. 4265 * - (-EINVAL) if MAC address is invalid. 4266 */ 4267 int rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *mac_addr, 4268 uint32_t pool); 4269 4270 /** 4271 * @warning 4272 * @b EXPERIMENTAL: this API may change without prior notice. 4273 * 4274 * Retrieve the information for queue based PFC. 4275 * 4276 * @param port_id 4277 * The port identifier of the Ethernet device. 4278 * @param pfc_queue_info 4279 * A pointer to a structure of type *rte_eth_pfc_queue_info* to be filled with 4280 * the information about queue based PFC. 4281 * @return 4282 * - (0) if successful. 4283 * - (-ENOTSUP) if support for priority_flow_ctrl_queue_info_get does not exist. 
4284 * - (-ENODEV) if *port_id* invalid. 4285 * - (-EINVAL) if bad parameter. 4286 */ 4287 __rte_experimental 4288 int rte_eth_dev_priority_flow_ctrl_queue_info_get(uint16_t port_id, 4289 struct rte_eth_pfc_queue_info *pfc_queue_info); 4290 4291 /** 4292 * @warning 4293 * @b EXPERIMENTAL: this API may change without prior notice. 4294 * 4295 * Configure the queue based priority flow control for a given queue 4296 * for Ethernet device. 4297 * 4298 * @note When an ethdev port switches to queue based PFC mode, the 4299 * unconfigured queues shall be configured by the driver with 4300 * default values such as lower priority value for TC etc. 4301 * 4302 * @param port_id 4303 * The port identifier of the Ethernet device. 4304 * @param pfc_queue_conf 4305 * The pointer to the structure of the priority flow control parameters 4306 * for the queue. 4307 * @return 4308 * - (0) if successful. 4309 * - (-ENOTSUP) if hardware doesn't support queue based PFC mode. 4310 * - (-ENODEV) if *port_id* invalid. 4311 * - (-EINVAL) if bad parameter 4312 * - (-EIO) if flow control setup queue failure 4313 */ 4314 __rte_experimental 4315 int rte_eth_dev_priority_flow_ctrl_queue_configure(uint16_t port_id, 4316 struct rte_eth_pfc_queue_conf *pfc_queue_conf); 4317 4318 /** 4319 * Remove a MAC address from the internal array of addresses. 4320 * 4321 * @param port_id 4322 * The port identifier of the Ethernet device. 4323 * @param mac_addr 4324 * MAC address to remove. 4325 * @return 4326 * - (0) if successful, or *mac_addr* didn't exist. 4327 * - (-ENOTSUP) if hardware doesn't support. 4328 * - (-ENODEV) if *port* invalid. 4329 * - (-EADDRINUSE) if attempting to remove the default MAC address. 4330 * - (-EINVAL) if MAC address is invalid. 4331 */ 4332 int rte_eth_dev_mac_addr_remove(uint16_t port_id, 4333 struct rte_ether_addr *mac_addr); 4334 4335 /** 4336 * Set the default MAC address. 4337 * 4338 * @param port_id 4339 * The port identifier of the Ethernet device. 4340 * @param mac_addr 4341 * New default MAC address. 4342 * @return 4343 * - (0) if successful, or *mac_addr* didn't exist. 4344 * - (-ENOTSUP) if hardware doesn't support. 4345 * - (-ENODEV) if *port* invalid. 4346 * - (-EINVAL) if MAC address is invalid. 4347 */ 4348 int rte_eth_dev_default_mac_addr_set(uint16_t port_id, 4349 struct rte_ether_addr *mac_addr); 4350 4351 /** 4352 * Update Redirection Table(RETA) of Receive Side Scaling of Ethernet device. 4353 * 4354 * @param port_id 4355 * The port identifier of the Ethernet device. 4356 * @param reta_conf 4357 * RETA to update. 4358 * @param reta_size 4359 * Redirection table size. The table size can be queried by 4360 * rte_eth_dev_info_get(). 4361 * @return 4362 * - (0) if successful. 4363 * - (-ENODEV) if *port_id* is invalid. 4364 * - (-ENOTSUP) if hardware doesn't support. 4365 * - (-EINVAL) if bad parameter. 4366 * - (-EIO) if device is removed. 4367 */ 4368 int rte_eth_dev_rss_reta_update(uint16_t port_id, 4369 struct rte_eth_rss_reta_entry64 *reta_conf, 4370 uint16_t reta_size); 4371 4372 /** 4373 * Query Redirection Table(RETA) of Receive Side Scaling of Ethernet device. 4374 * 4375 * @param port_id 4376 * The port identifier of the Ethernet device. 4377 * @param reta_conf 4378 * RETA to query. For each requested reta entry, corresponding bit 4379 * in mask must be set. 4380 * @param reta_size 4381 * Redirection table size. The table size can be queried by 4382 * rte_eth_dev_info_get(). 4383 * @return 4384 * - (0) if successful. 4385 * - (-ENODEV) if *port_id* is invalid. 
4386 * - (-ENOTSUP) if hardware doesn't support. 4387 * - (-EINVAL) if bad parameter. 4388 * - (-EIO) if device is removed. 4389 */ 4390 int rte_eth_dev_rss_reta_query(uint16_t port_id, 4391 struct rte_eth_rss_reta_entry64 *reta_conf, 4392 uint16_t reta_size); 4393 4394 /** 4395 * Updates unicast hash table for receiving packet with the given destination 4396 * MAC address, and the packet is routed to all VFs for which the Rx mode is 4397 * accept packets that match the unicast hash table. 4398 * 4399 * @param port_id 4400 * The port identifier of the Ethernet device. 4401 * @param addr 4402 * Unicast MAC address. 4403 * @param on 4404 * 1 - Set an unicast hash bit for receiving packets with the MAC address. 4405 * 0 - Clear an unicast hash bit. 4406 * @return 4407 * - (0) if successful. 4408 * - (-ENOTSUP) if hardware doesn't support. 4409 * - (-ENODEV) if *port_id* invalid. 4410 * - (-EIO) if device is removed. 4411 * - (-EINVAL) if bad parameter. 4412 */ 4413 int rte_eth_dev_uc_hash_table_set(uint16_t port_id, struct rte_ether_addr *addr, 4414 uint8_t on); 4415 4416 /** 4417 * Updates all unicast hash bitmaps for receiving packet with any Unicast 4418 * Ethernet MAC addresses,the packet is routed to all VFs for which the Rx 4419 * mode is accept packets that match the unicast hash table. 4420 * 4421 * @param port_id 4422 * The port identifier of the Ethernet device. 4423 * @param on 4424 * 1 - Set all unicast hash bitmaps for receiving all the Ethernet 4425 * MAC addresses 4426 * 0 - Clear all unicast hash bitmaps 4427 * @return 4428 * - (0) if successful. 4429 * - (-ENOTSUP) if hardware doesn't support. 4430 * - (-ENODEV) if *port_id* invalid. 4431 * - (-EIO) if device is removed. 4432 * - (-EINVAL) if bad parameter. 4433 */ 4434 int rte_eth_dev_uc_all_hash_table_set(uint16_t port_id, uint8_t on); 4435 4436 /** 4437 * Set the rate limitation for a queue on an Ethernet device. 4438 * 4439 * @param port_id 4440 * The port identifier of the Ethernet device. 4441 * @param queue_idx 4442 * The queue ID. 4443 * @param tx_rate 4444 * The Tx rate in Mbps. Allocated from the total port link speed. 4445 * @return 4446 * - (0) if successful. 4447 * - (-ENOTSUP) if hardware doesn't support this feature. 4448 * - (-ENODEV) if *port_id* invalid. 4449 * - (-EIO) if device is removed. 4450 * - (-EINVAL) if bad parameter. 4451 */ 4452 int rte_eth_set_queue_rate_limit(uint16_t port_id, uint16_t queue_idx, 4453 uint16_t tx_rate); 4454 4455 /** 4456 * Configuration of Receive Side Scaling hash computation of Ethernet device. 4457 * 4458 * @param port_id 4459 * The port identifier of the Ethernet device. 4460 * @param rss_conf 4461 * The new configuration to use for RSS hash computation on the port. 4462 * @return 4463 * - (0) if successful. 4464 * - (-ENODEV) if port identifier is invalid. 4465 * - (-EIO) if device is removed. 4466 * - (-ENOTSUP) if hardware doesn't support. 4467 * - (-EINVAL) if bad parameter. 4468 */ 4469 int rte_eth_dev_rss_hash_update(uint16_t port_id, 4470 struct rte_eth_rss_conf *rss_conf); 4471 4472 /** 4473 * Retrieve current configuration of Receive Side Scaling hash computation 4474 * of Ethernet device. 4475 * 4476 * @param port_id 4477 * The port identifier of the Ethernet device. 4478 * @param rss_conf 4479 * Where to store the current RSS hash configuration of the Ethernet device. 4480 * @return 4481 * - (0) if successful. 4482 * - (-ENODEV) if port identifier is invalid. 4483 * - (-EIO) if device is removed. 4484 * - (-ENOTSUP) if hardware doesn't support RSS. 
4485 * - (-EINVAL) if bad parameter. 4486 */ 4487 int 4488 rte_eth_dev_rss_hash_conf_get(uint16_t port_id, 4489 struct rte_eth_rss_conf *rss_conf); 4490 4491 /** 4492 * Add UDP tunneling port for a type of tunnel. 4493 * 4494 * Some NICs may require such configuration to properly parse a tunnel 4495 * with any standard or custom UDP port. 4496 * The packets with this UDP port will be parsed for this type of tunnel. 4497 * The device parser will also check the rest of the tunnel headers 4498 * before classifying the packet. 4499 * 4500 * With some devices, this API will affect packet classification, i.e.: 4501 * - mbuf.packet_type reported on Rx 4502 * - rte_flow rules with tunnel items 4503 * 4504 * @param port_id 4505 * The port identifier of the Ethernet device. 4506 * @param tunnel_udp 4507 * UDP tunneling configuration. 4508 * 4509 * @return 4510 * - (0) if successful. 4511 * - (-ENODEV) if port identifier is invalid. 4512 * - (-EIO) if device is removed. 4513 * - (-ENOTSUP) if hardware doesn't support tunnel type. 4514 */ 4515 int 4516 rte_eth_dev_udp_tunnel_port_add(uint16_t port_id, 4517 struct rte_eth_udp_tunnel *tunnel_udp); 4518 4519 /** 4520 * Delete UDP tunneling port for a type of tunnel. 4521 * 4522 * The packets with this UDP port will not be classified as this type of tunnel 4523 * anymore if the device use such mapping for tunnel packet classification. 4524 * 4525 * @see rte_eth_dev_udp_tunnel_port_add 4526 * 4527 * @param port_id 4528 * The port identifier of the Ethernet device. 4529 * @param tunnel_udp 4530 * UDP tunneling configuration. 4531 * 4532 * @return 4533 * - (0) if successful. 4534 * - (-ENODEV) if port identifier is invalid. 4535 * - (-EIO) if device is removed. 4536 * - (-ENOTSUP) if hardware doesn't support tunnel type. 4537 */ 4538 int 4539 rte_eth_dev_udp_tunnel_port_delete(uint16_t port_id, 4540 struct rte_eth_udp_tunnel *tunnel_udp); 4541 4542 /** 4543 * Get DCB information on an Ethernet device. 4544 * 4545 * @param port_id 4546 * The port identifier of the Ethernet device. 4547 * @param dcb_info 4548 * DCB information. 4549 * @return 4550 * - (0) if successful. 4551 * - (-ENODEV) if port identifier is invalid. 4552 * - (-EIO) if device is removed. 4553 * - (-ENOTSUP) if hardware doesn't support. 4554 * - (-EINVAL) if bad parameter. 4555 */ 4556 int rte_eth_dev_get_dcb_info(uint16_t port_id, 4557 struct rte_eth_dcb_info *dcb_info); 4558 4559 struct rte_eth_rxtx_callback; 4560 4561 /** 4562 * Add a callback to be called on packet Rx on a given port and queue. 4563 * 4564 * This API configures a function to be called for each burst of 4565 * packets received on a given NIC port queue. The return value is a pointer 4566 * that can be used to later remove the callback using 4567 * rte_eth_remove_rx_callback(). 4568 * 4569 * Multiple functions are called in the order that they are added. 4570 * 4571 * @param port_id 4572 * The port identifier of the Ethernet device. 4573 * @param queue_id 4574 * The queue on the Ethernet device on which the callback is to be added. 4575 * @param fn 4576 * The callback function 4577 * @param user_param 4578 * A generic pointer parameter which will be passed to each invocation of the 4579 * callback function on this port and queue. Inter-thread synchronization 4580 * of any user data changes is the responsibility of the user. 4581 * 4582 * @return 4583 * NULL on error. 4584 * On success, a pointer value which can later be used to remove the callback. 
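 *
 * A minimal sketch of a callback that only counts received packets
 * (the callback body and the counter variable are illustrative, not part
 * of this API):
 *
 *     static uint16_t
 *     count_rx_cb(uint16_t port __rte_unused, uint16_t queue __rte_unused,
 *                 struct rte_mbuf **pkts __rte_unused, uint16_t nb_pkts,
 *                 uint16_t max_pkts __rte_unused, void *user_param)
 *     {
 *         *(uint64_t *)user_param += nb_pkts;
 *         return nb_pkts;
 *     }
 *
 *     static uint64_t rx_count;
 *     const struct rte_eth_rxtx_callback *cb =
 *         rte_eth_add_rx_callback(port_id, queue_id, count_rx_cb, &rx_count);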
4585 */ 4586 const struct rte_eth_rxtx_callback * 4587 rte_eth_add_rx_callback(uint16_t port_id, uint16_t queue_id, 4588 rte_rx_callback_fn fn, void *user_param); 4589 4590 /** 4591 * Add a callback that must be called first on packet Rx on a given port 4592 * and queue. 4593 * 4594 * This API configures a first function to be called for each burst of 4595 * packets received on a given NIC port queue. The return value is a pointer 4596 * that can be used to later remove the callback using 4597 * rte_eth_remove_rx_callback(). 4598 * 4599 * Multiple functions are called in the order that they are added. 4600 * 4601 * @param port_id 4602 * The port identifier of the Ethernet device. 4603 * @param queue_id 4604 * The queue on the Ethernet device on which the callback is to be added. 4605 * @param fn 4606 * The callback function 4607 * @param user_param 4608 * A generic pointer parameter which will be passed to each invocation of the 4609 * callback function on this port and queue. Inter-thread synchronization 4610 * of any user data changes is the responsibility of the user. 4611 * 4612 * @return 4613 * NULL on error. 4614 * On success, a pointer value which can later be used to remove the callback. 4615 */ 4616 const struct rte_eth_rxtx_callback * 4617 rte_eth_add_first_rx_callback(uint16_t port_id, uint16_t queue_id, 4618 rte_rx_callback_fn fn, void *user_param); 4619 4620 /** 4621 * Add a callback to be called on packet Tx on a given port and queue. 4622 * 4623 * This API configures a function to be called for each burst of 4624 * packets sent on a given NIC port queue. The return value is a pointer 4625 * that can be used to later remove the callback using 4626 * rte_eth_remove_tx_callback(). 4627 * 4628 * Multiple functions are called in the order that they are added. 4629 * 4630 * @param port_id 4631 * The port identifier of the Ethernet device. 4632 * @param queue_id 4633 * The queue on the Ethernet device on which the callback is to be added. 4634 * @param fn 4635 * The callback function 4636 * @param user_param 4637 * A generic pointer parameter which will be passed to each invocation of the 4638 * callback function on this port and queue. Inter-thread synchronization 4639 * of any user data changes is the responsibility of the user. 4640 * 4641 * @return 4642 * NULL on error. 4643 * On success, a pointer value which can later be used to remove the callback. 4644 */ 4645 const struct rte_eth_rxtx_callback * 4646 rte_eth_add_tx_callback(uint16_t port_id, uint16_t queue_id, 4647 rte_tx_callback_fn fn, void *user_param); 4648 4649 /** 4650 * Remove an Rx packet callback from a given port and queue. 4651 * 4652 * This function is used to removed callbacks that were added to a NIC port 4653 * queue using rte_eth_add_rx_callback(). 4654 * 4655 * Note: the callback is removed from the callback list but it isn't freed 4656 * since the it may still be in use. The memory for the callback can be 4657 * subsequently freed back by the application by calling rte_free(): 4658 * 4659 * - Immediately - if the port is stopped, or the user knows that no 4660 * callbacks are in flight e.g. if called from the thread doing Rx/Tx 4661 * on that queue. 4662 * 4663 * - After a short delay - where the delay is sufficient to allow any 4664 * in-flight callbacks to complete. Alternately, the RCU mechanism can be 4665 * used to detect when data plane threads have ceased referencing the 4666 * callback memory. 4667 * 4668 * @param port_id 4669 * The port identifier of the Ethernet device. 
4670 * @param queue_id
4671 *   The queue on the Ethernet device from which the callback is to be removed.
4672 * @param user_cb
4673 *   User supplied callback created via rte_eth_add_rx_callback().
4674 *
4675 * @return
4676 *   - 0: Success. Callback was removed.
4677 *   - -ENODEV: If *port_id* is invalid.
4678 *   - -ENOTSUP: Callback support is not available.
4679 *   - -EINVAL: The queue_id is out of range, or the callback
4680 *     is NULL or not found for the port/queue.
4681 */
4682 int rte_eth_remove_rx_callback(uint16_t port_id, uint16_t queue_id,
4683 		const struct rte_eth_rxtx_callback *user_cb);
4684
4685 /**
4686 * Remove a Tx packet callback from a given port and queue.
4687 *
4688 * This function is used to remove callbacks that were added to a NIC port
4689 * queue using rte_eth_add_tx_callback().
4690 *
4691 * Note: the callback is removed from the callback list but it isn't freed
4692 * since it may still be in use. The memory for the callback can be
4693 * subsequently freed by the application by calling rte_free():
4694 *
4695 * - Immediately - if the port is stopped, or the user knows that no
4696 *   callbacks are in flight e.g. if called from the thread doing Rx/Tx
4697 *   on that queue.
4698 *
4699 * - After a short delay - where the delay is sufficient to allow any
4700 *   in-flight callbacks to complete. Alternatively, the RCU mechanism can be
4701 *   used to detect when data plane threads have ceased referencing the
4702 *   callback memory.
4703 *
4704 * @param port_id
4705 *   The port identifier of the Ethernet device.
4706 * @param queue_id
4707 *   The queue on the Ethernet device from which the callback is to be removed.
4708 * @param user_cb
4709 *   User supplied callback created via rte_eth_add_tx_callback().
4710 *
4711 * @return
4712 *   - 0: Success. Callback was removed.
4713 *   - -ENODEV: If *port_id* is invalid.
4714 *   - -ENOTSUP: Callback support is not available.
4715 *   - -EINVAL: The queue_id is out of range, or the callback
4716 *     is NULL or not found for the port/queue.
4717 */
4718 int rte_eth_remove_tx_callback(uint16_t port_id, uint16_t queue_id,
4719 		const struct rte_eth_rxtx_callback *user_cb);
4720
4721 /**
4722 * Retrieve information about a given port's Rx queue.
4723 *
4724 * @param port_id
4725 *   The port identifier of the Ethernet device.
4726 * @param queue_id
4727 *   The Rx queue on the Ethernet device for which information
4728 *   will be retrieved.
4729 * @param qinfo
4730 *   A pointer to a structure of type *rte_eth_rxq_info* to be filled with
4731 *   the information about the given Rx queue.
4732 *
4733 * @return
4734 *   - 0: Success
4735 *   - -ENODEV: If *port_id* is invalid.
4736 *   - -ENOTSUP: routine is not supported by the device PMD.
4737 *   - -EINVAL: The queue_id is out of range, or the queue
4738 *     is a hairpin queue.
4739 */
4740 int rte_eth_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
4741 	struct rte_eth_rxq_info *qinfo);
4742
4743 /**
4744 * Retrieve information about a given port's Tx queue.
4745 *
4746 * @param port_id
4747 *   The port identifier of the Ethernet device.
4748 * @param queue_id
4749 *   The Tx queue on the Ethernet device for which information
4750 *   will be retrieved.
4751 * @param qinfo
4752 *   A pointer to a structure of type *rte_eth_txq_info* to be filled with
4753 *   the information about the given Tx queue.
4754 *
4755 * @return
4756 *   - 0: Success
4757 *   - -ENODEV: If *port_id* is invalid.
4758 *   - -ENOTSUP: routine is not supported by the device PMD.
4759 * - -EINVAL: The queue_id is out of range, or the queue 4760 * is hairpin queue. 4761 */ 4762 int rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id, 4763 struct rte_eth_txq_info *qinfo); 4764 4765 /** 4766 * Retrieve information about the Rx packet burst mode. 4767 * 4768 * @param port_id 4769 * The port identifier of the Ethernet device. 4770 * @param queue_id 4771 * The Rx queue on the Ethernet device for which information 4772 * will be retrieved. 4773 * @param mode 4774 * A pointer to a structure of type *rte_eth_burst_mode* to be filled 4775 * with the information of the packet burst mode. 4776 * 4777 * @return 4778 * - 0: Success 4779 * - -ENODEV: If *port_id* is invalid. 4780 * - -ENOTSUP: routine is not supported by the device PMD. 4781 * - -EINVAL: The queue_id is out of range. 4782 */ 4783 int rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id, 4784 struct rte_eth_burst_mode *mode); 4785 4786 /** 4787 * Retrieve information about the Tx packet burst mode. 4788 * 4789 * @param port_id 4790 * The port identifier of the Ethernet device. 4791 * @param queue_id 4792 * The Tx queue on the Ethernet device for which information 4793 * will be retrieved. 4794 * @param mode 4795 * A pointer to a structure of type *rte_eth_burst_mode* to be filled 4796 * with the information of the packet burst mode. 4797 * 4798 * @return 4799 * - 0: Success 4800 * - -ENODEV: If *port_id* is invalid. 4801 * - -ENOTSUP: routine is not supported by the device PMD. 4802 * - -EINVAL: The queue_id is out of range. 4803 */ 4804 int rte_eth_tx_burst_mode_get(uint16_t port_id, uint16_t queue_id, 4805 struct rte_eth_burst_mode *mode); 4806 4807 /** 4808 * @warning 4809 * @b EXPERIMENTAL: this API may change without prior notice. 4810 * 4811 * Retrieve the monitor condition for a given receive queue. 4812 * 4813 * @param port_id 4814 * The port identifier of the Ethernet device. 4815 * @param queue_id 4816 * The Rx queue on the Ethernet device for which information 4817 * will be retrieved. 4818 * @param pmc 4819 * The pointer to power-optimized monitoring condition structure. 4820 * 4821 * @return 4822 * - 0: Success. 4823 * -ENOTSUP: Operation not supported. 4824 * -EINVAL: Invalid parameters. 4825 * -ENODEV: Invalid port ID. 4826 */ 4827 __rte_experimental 4828 int rte_eth_get_monitor_addr(uint16_t port_id, uint16_t queue_id, 4829 struct rte_power_monitor_cond *pmc); 4830 4831 /** 4832 * Retrieve device registers and register attributes (number of registers and 4833 * register size) 4834 * 4835 * @param port_id 4836 * The port identifier of the Ethernet device. 4837 * @param info 4838 * Pointer to rte_dev_reg_info structure to fill in. If info->data is 4839 * NULL the function fills in the width and length fields. If non-NULL 4840 * the registers are put into the buffer pointed at by the data field. 4841 * @return 4842 * - (0) if successful. 4843 * - (-ENOTSUP) if hardware doesn't support. 4844 * - (-EINVAL) if bad parameter. 4845 * - (-ENODEV) if *port_id* invalid. 4846 * - (-EIO) if device is removed. 4847 * - others depends on the specific operations implementation. 4848 */ 4849 int rte_eth_dev_get_reg_info(uint16_t port_id, struct rte_dev_reg_info *info); 4850 4851 /** 4852 * Retrieve size of device EEPROM 4853 * 4854 * @param port_id 4855 * The port identifier of the Ethernet device. 4856 * @return 4857 * - (>=0) EEPROM size if successful. 4858 * - (-ENOTSUP) if hardware doesn't support. 4859 * - (-ENODEV) if *port_id* invalid. 4860 * - (-EIO) if device is removed. 
4861 * - others depends on the specific operations implementation. 4862 */ 4863 int rte_eth_dev_get_eeprom_length(uint16_t port_id); 4864 4865 /** 4866 * Retrieve EEPROM and EEPROM attribute 4867 * 4868 * @param port_id 4869 * The port identifier of the Ethernet device. 4870 * @param info 4871 * The template includes buffer for return EEPROM data and 4872 * EEPROM attributes to be filled. 4873 * @return 4874 * - (0) if successful. 4875 * - (-ENOTSUP) if hardware doesn't support. 4876 * - (-EINVAL) if bad parameter. 4877 * - (-ENODEV) if *port_id* invalid. 4878 * - (-EIO) if device is removed. 4879 * - others depends on the specific operations implementation. 4880 */ 4881 int rte_eth_dev_get_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info); 4882 4883 /** 4884 * Program EEPROM with provided data 4885 * 4886 * @param port_id 4887 * The port identifier of the Ethernet device. 4888 * @param info 4889 * The template includes EEPROM data for programming and 4890 * EEPROM attributes to be filled 4891 * @return 4892 * - (0) if successful. 4893 * - (-ENOTSUP) if hardware doesn't support. 4894 * - (-ENODEV) if *port_id* invalid. 4895 * - (-EINVAL) if bad parameter. 4896 * - (-EIO) if device is removed. 4897 * - others depends on the specific operations implementation. 4898 */ 4899 int rte_eth_dev_set_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info); 4900 4901 /** 4902 * @warning 4903 * @b EXPERIMENTAL: this API may change without prior notice. 4904 * 4905 * Retrieve the type and size of plugin module EEPROM 4906 * 4907 * @param port_id 4908 * The port identifier of the Ethernet device. 4909 * @param modinfo 4910 * The type and size of plugin module EEPROM. 4911 * @return 4912 * - (0) if successful. 4913 * - (-ENOTSUP) if hardware doesn't support. 4914 * - (-ENODEV) if *port_id* invalid. 4915 * - (-EINVAL) if bad parameter. 4916 * - (-EIO) if device is removed. 4917 * - others depends on the specific operations implementation. 4918 */ 4919 __rte_experimental 4920 int 4921 rte_eth_dev_get_module_info(uint16_t port_id, 4922 struct rte_eth_dev_module_info *modinfo); 4923 4924 /** 4925 * @warning 4926 * @b EXPERIMENTAL: this API may change without prior notice. 4927 * 4928 * Retrieve the data of plugin module EEPROM 4929 * 4930 * @param port_id 4931 * The port identifier of the Ethernet device. 4932 * @param info 4933 * The template includes the plugin module EEPROM attributes, and the 4934 * buffer for return plugin module EEPROM data. 4935 * @return 4936 * - (0) if successful. 4937 * - (-ENOTSUP) if hardware doesn't support. 4938 * - (-EINVAL) if bad parameter. 4939 * - (-ENODEV) if *port_id* invalid. 4940 * - (-EIO) if device is removed. 4941 * - others depends on the specific operations implementation. 4942 */ 4943 __rte_experimental 4944 int 4945 rte_eth_dev_get_module_eeprom(uint16_t port_id, 4946 struct rte_dev_eeprom_info *info); 4947 4948 /** 4949 * Set the list of multicast addresses to filter on an Ethernet device. 4950 * 4951 * @param port_id 4952 * The port identifier of the Ethernet device. 4953 * @param mc_addr_set 4954 * The array of multicast addresses to set. Equal to NULL when the function 4955 * is invoked to flush the set of filtered addresses. 4956 * @param nb_mc_addr 4957 * The number of multicast addresses in the *mc_addr_set* array. Equal to 0 4958 * when the function is invoked to flush the set of filtered addresses. 4959 * @return 4960 * - (0) if successful. 4961 * - (-ENODEV) if *port_id* invalid. 4962 * - (-EIO) if device is removed. 
4963 * - (-ENOTSUP) if PMD of *port_id* doesn't support multicast filtering. 4964 * - (-ENOSPC) if *port_id* has not enough multicast filtering resources. 4965 * - (-EINVAL) if bad parameter. 4966 */ 4967 int rte_eth_dev_set_mc_addr_list(uint16_t port_id, 4968 struct rte_ether_addr *mc_addr_set, 4969 uint32_t nb_mc_addr); 4970 4971 /** 4972 * Enable IEEE1588/802.1AS timestamping for an Ethernet device. 4973 * 4974 * @param port_id 4975 * The port identifier of the Ethernet device. 4976 * 4977 * @return 4978 * - 0: Success. 4979 * - -ENODEV: The port ID is invalid. 4980 * - -EIO: if device is removed. 4981 * - -ENOTSUP: The function is not supported by the Ethernet driver. 4982 */ 4983 int rte_eth_timesync_enable(uint16_t port_id); 4984 4985 /** 4986 * Disable IEEE1588/802.1AS timestamping for an Ethernet device. 4987 * 4988 * @param port_id 4989 * The port identifier of the Ethernet device. 4990 * 4991 * @return 4992 * - 0: Success. 4993 * - -ENODEV: The port ID is invalid. 4994 * - -EIO: if device is removed. 4995 * - -ENOTSUP: The function is not supported by the Ethernet driver. 4996 */ 4997 int rte_eth_timesync_disable(uint16_t port_id); 4998 4999 /** 5000 * Read an IEEE1588/802.1AS Rx timestamp from an Ethernet device. 5001 * 5002 * @param port_id 5003 * The port identifier of the Ethernet device. 5004 * @param timestamp 5005 * Pointer to the timestamp struct. 5006 * @param flags 5007 * Device specific flags. Used to pass the Rx timesync register index to 5008 * i40e. Unused in igb/ixgbe, pass 0 instead. 5009 * 5010 * @return 5011 * - 0: Success. 5012 * - -EINVAL: No timestamp is available. 5013 * - -ENODEV: The port ID is invalid. 5014 * - -EIO: if device is removed. 5015 * - -ENOTSUP: The function is not supported by the Ethernet driver. 5016 */ 5017 int rte_eth_timesync_read_rx_timestamp(uint16_t port_id, 5018 struct timespec *timestamp, uint32_t flags); 5019 5020 /** 5021 * Read an IEEE1588/802.1AS Tx timestamp from an Ethernet device. 5022 * 5023 * @param port_id 5024 * The port identifier of the Ethernet device. 5025 * @param timestamp 5026 * Pointer to the timestamp struct. 5027 * 5028 * @return 5029 * - 0: Success. 5030 * - -EINVAL: No timestamp is available. 5031 * - -ENODEV: The port ID is invalid. 5032 * - -EIO: if device is removed. 5033 * - -ENOTSUP: The function is not supported by the Ethernet driver. 5034 */ 5035 int rte_eth_timesync_read_tx_timestamp(uint16_t port_id, 5036 struct timespec *timestamp); 5037 5038 /** 5039 * Adjust the timesync clock on an Ethernet device. 5040 * 5041 * This is usually used in conjunction with other Ethdev timesync functions to 5042 * synchronize the device time using the IEEE1588/802.1AS protocol. 5043 * 5044 * @param port_id 5045 * The port identifier of the Ethernet device. 5046 * @param delta 5047 * The adjustment in nanoseconds. 5048 * 5049 * @return 5050 * - 0: Success. 5051 * - -ENODEV: The port ID is invalid. 5052 * - -EIO: if device is removed. 5053 * - -ENOTSUP: The function is not supported by the Ethernet driver. 5054 */ 5055 int rte_eth_timesync_adjust_time(uint16_t port_id, int64_t delta); 5056 5057 /** 5058 * Read the time from the timesync clock on an Ethernet device. 5059 * 5060 * This is usually used in conjunction with other Ethdev timesync functions to 5061 * synchronize the device time using the IEEE1588/802.1AS protocol. 5062 * 5063 * @param port_id 5064 * The port identifier of the Ethernet device. 5065 * @param time 5066 * Pointer to the timespec struct that holds the time. 
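 *
 * @note
 *   An illustrative one-shot alignment of the device clock to the system
 *   clock (clock_gettime() is POSIX and not part of this API; the
 *   nanosecond constant is written out explicitly):
 *
 *     struct timespec sys, dev;
 *     clock_gettime(CLOCK_REALTIME, &sys);
 *     rte_eth_timesync_read_time(port_id, &dev);
 *     int64_t delta = (sys.tv_sec - dev.tv_sec) * (int64_t)1000000000 +
 *                     (sys.tv_nsec - dev.tv_nsec);
 *     rte_eth_timesync_adjust_time(port_id, delta);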
5067 *
5068 * @return
5069 *   - 0: Success.
5070 *   - -EINVAL: Bad parameter.
5071 */
5072 int rte_eth_timesync_read_time(uint16_t port_id, struct timespec *time);
5073
5074 /**
5075 * Set the time of the timesync clock on an Ethernet device.
5076 *
5077 * This is usually used in conjunction with other Ethdev timesync functions to
5078 * synchronize the device time using the IEEE1588/802.1AS protocol.
5079 *
5080 * @param port_id
5081 *   The port identifier of the Ethernet device.
5082 * @param time
5083 *   Pointer to the timespec struct that holds the time.
5084 *
5085 * @return
5086 *   - 0: Success.
5087 *   - -EINVAL: No timestamp is available.
5088 *   - -ENODEV: The port ID is invalid.
5089 *   - -EIO: if device is removed.
5090 *   - -ENOTSUP: The function is not supported by the Ethernet driver.
5091 */
5092 int rte_eth_timesync_write_time(uint16_t port_id, const struct timespec *time);
5093
5094 /**
5095 * @warning
5096 * @b EXPERIMENTAL: this API may change without prior notice.
5097 *
5098 * Read the current clock counter of an Ethernet device.
5099 *
5100 * This returns the current raw clock value of an Ethernet device. It is
5101 * a raw amount of ticks, with no given time reference.
5102 * The value returned here is from the same clock as the one
5103 * filling the timestamp field of Rx packets when using hardware timestamp
5104 * offload. Therefore it can be used to compute a precise conversion of
5105 * the device clock to real time.
5106 *
5107 * E.g., a simple heuristic to derive the frequency would be:
5108 *   uint64_t start, end;
5109 *   rte_eth_read_clock(port, &start);
5110 *   rte_delay_ms(100);
5111 *   rte_eth_read_clock(port, &end);
5112 *   double freq = (end - start) * 10;
5113 *
5114 * Compute a common reference with:
5115 *   uint64_t base_time_sec = current_time();
5116 *   uint64_t base_clock;
5117 *   rte_eth_read_clock(port, &base_clock);
5118 *
5119 * Then, convert the raw mbuf timestamp with:
5120 *   base_time_sec + (double)(*timestamp_dynfield(mbuf) - base_clock) / freq;
5121 *
5122 * This simple example will not provide very good accuracy. One should
5123 * at least measure the frequency multiple times and do a regression.
5124 * To avoid deviation from the system time, the common reference can
5125 * be repeated from time to time. The integer division can also be
5126 * replaced by a multiplication and a shift for better performance.
5127 *
5128 * @param port_id
5129 *   The port identifier of the Ethernet device.
5130 * @param clock
5131 *   Pointer to the uint64_t that holds the raw clock value.
5132 *
5133 * @return
5134 *   - 0: Success.
5135 *   - -ENODEV: The port ID is invalid.
5136 *   - -ENOTSUP: The function is not supported by the Ethernet driver.
5137 *   - -EINVAL: if bad parameter.
5138 */
5139 __rte_experimental
5140 int
5141 rte_eth_read_clock(uint16_t port_id, uint64_t *clock);
5142
5143 /**
5144 * Get the port ID from device name. The device name should be specified
5145 * as below:
5146 * - PCIe address (Domain:Bus:Device.Function), for example: 0000:2:00.0
5147 * - SoC device name, for example: fsl-gmac0
5148 * - vdev dpdk name, for example: net_[pcap0|null0|tap0]
5149 *
5150 * @param name
5151 *   PCI address or name of the device.
5152 * @param port_id
5153 *   Pointer to the port identifier of the device.
5154 * @return
5155 *   - (0) if successful and port_id is filled.
5156 *   - (-ENODEV or -EINVAL) on failure.
5157 */
5158 int
5159 rte_eth_dev_get_port_by_name(const char *name, uint16_t *port_id);
5160
5161 /**
5162 * Get the device name from port ID.
The device name is specified as below: 5163 * - PCIe address (Domain:Bus:Device.Function), for example- 0000:02:00.0 5164 * - SoC device name, for example- fsl-gmac0 5165 * - vdev dpdk name, for example- net_[pcap0|null0|tun0|tap0] 5166 * 5167 * @param port_id 5168 * Port identifier of the device. 5169 * @param name 5170 * Buffer of size RTE_ETH_NAME_MAX_LEN to store the name. 5171 * @return 5172 * - (0) if successful. 5173 * - (-ENODEV) if *port_id* is invalid. 5174 * - (-EINVAL) on failure. 5175 */ 5176 int 5177 rte_eth_dev_get_name_by_port(uint16_t port_id, char *name); 5178 5179 /** 5180 * Check that numbers of Rx and Tx descriptors satisfy descriptors limits from 5181 * the Ethernet device information, otherwise adjust them to boundaries. 5182 * 5183 * @param port_id 5184 * The port identifier of the Ethernet device. 5185 * @param nb_rx_desc 5186 * A pointer to a uint16_t where the number of receive 5187 * descriptors stored. 5188 * @param nb_tx_desc 5189 * A pointer to a uint16_t where the number of transmit 5190 * descriptors stored. 5191 * @return 5192 * - (0) if successful. 5193 * - (-ENOTSUP, -ENODEV or -EINVAL) on failure. 5194 */ 5195 int rte_eth_dev_adjust_nb_rx_tx_desc(uint16_t port_id, 5196 uint16_t *nb_rx_desc, 5197 uint16_t *nb_tx_desc); 5198 5199 /** 5200 * Test if a port supports specific mempool ops. 5201 * 5202 * @param port_id 5203 * Port identifier of the Ethernet device. 5204 * @param [in] pool 5205 * The name of the pool operations to test. 5206 * @return 5207 * - 0: best mempool ops choice for this port. 5208 * - 1: mempool ops are supported for this port. 5209 * - -ENOTSUP: mempool ops not supported for this port. 5210 * - -ENODEV: Invalid port Identifier. 5211 * - -EINVAL: Pool param is null. 5212 */ 5213 int 5214 rte_eth_dev_pool_ops_supported(uint16_t port_id, const char *pool); 5215 5216 /** 5217 * Get the security context for the Ethernet device. 5218 * 5219 * @param port_id 5220 * Port identifier of the Ethernet device 5221 * @return 5222 * - NULL on error. 5223 * - pointer to security context on success. 5224 */ 5225 void * 5226 rte_eth_dev_get_sec_ctx(uint16_t port_id); 5227 5228 /** 5229 * @warning 5230 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice 5231 * 5232 * Query the device hairpin capabilities. 5233 * 5234 * @param port_id 5235 * The port identifier of the Ethernet device. 5236 * @param cap 5237 * Pointer to a structure that will hold the hairpin capabilities. 5238 * @return 5239 * - (0) if successful. 5240 * - (-ENOTSUP) if hardware doesn't support. 5241 * - (-EINVAL) if bad parameter. 5242 */ 5243 __rte_experimental 5244 int rte_eth_dev_hairpin_capability_get(uint16_t port_id, 5245 struct rte_eth_hairpin_cap *cap); 5246 5247 /** 5248 * @warning 5249 * @b EXPERIMENTAL: this structure may change without prior notice. 5250 * 5251 * Ethernet device representor ID range entry 5252 */ 5253 struct rte_eth_representor_range { 5254 enum rte_eth_representor_type type; /**< Representor type */ 5255 int controller; /**< Controller index */ 5256 int pf; /**< Physical function index */ 5257 __extension__ 5258 union { 5259 int vf; /**< VF start index */ 5260 int sf; /**< SF start index */ 5261 }; 5262 uint32_t id_base; /**< Representor ID start index */ 5263 uint32_t id_end; /**< Representor ID end index */ 5264 char name[RTE_DEV_NAME_MAX_LEN]; /**< Representor name */ 5265 }; 5266 5267 /** 5268 * @warning 5269 * @b EXPERIMENTAL: this structure may change without prior notice. 
5270 * 5271 * Ethernet device representor information 5272 */ 5273 struct rte_eth_representor_info { 5274 uint16_t controller; /**< Controller ID of caller device. */ 5275 uint16_t pf; /**< Physical function ID of caller device. */ 5276 uint32_t nb_ranges_alloc; /**< Size of the ranges array. */ 5277 uint32_t nb_ranges; /**< Number of initialized ranges. */ 5278 struct rte_eth_representor_range ranges[];/**< Representor ID range. */ 5279 }; 5280 5281 /** 5282 * Retrieve the representor info of the device. 5283 * 5284 * Get device representor info to be able to calculate a unique 5285 * representor ID. @see rte_eth_representor_id_get helper. 5286 * 5287 * @param port_id 5288 * The port identifier of the device. 5289 * @param info 5290 * A pointer to a representor info structure. 5291 * NULL to return number of range entries and allocate memory 5292 * for next call to store detail. 5293 * The number of ranges that were written into this structure 5294 * will be placed into its nb_ranges field. This number cannot be 5295 * larger than the nb_ranges_alloc that by the user before calling 5296 * this function. It can be smaller than the value returned by the 5297 * function, however. 5298 * @return 5299 * - (-ENOTSUP) if operation is not supported. 5300 * - (-ENODEV) if *port_id* invalid. 5301 * - (-EIO) if device is removed. 5302 * - (>=0) number of available representor range entries. 5303 */ 5304 __rte_experimental 5305 int rte_eth_representor_info_get(uint16_t port_id, 5306 struct rte_eth_representor_info *info); 5307 5308 /** The NIC is able to deliver flag (if set) with packets to the PMD. */ 5309 #define RTE_ETH_RX_METADATA_USER_FLAG RTE_BIT64(0) 5310 5311 /** The NIC is able to deliver mark ID with packets to the PMD. */ 5312 #define RTE_ETH_RX_METADATA_USER_MARK RTE_BIT64(1) 5313 5314 /** The NIC is able to deliver tunnel ID with packets to the PMD. */ 5315 #define RTE_ETH_RX_METADATA_TUNNEL_ID RTE_BIT64(2) 5316 5317 /** 5318 * @warning 5319 * @b EXPERIMENTAL: this API may change without prior notice 5320 * 5321 * Negotiate the NIC's ability to deliver specific kinds of metadata to the PMD. 5322 * 5323 * Invoke this API before the first rte_eth_dev_configure() invocation 5324 * to let the PMD make preparations that are inconvenient to do later. 5325 * 5326 * The negotiation process is as follows: 5327 * 5328 * - the application requests features intending to use at least some of them; 5329 * - the PMD responds with the guaranteed subset of the requested feature set; 5330 * - the application can retry negotiation with another set of features; 5331 * - the application can pass zero to clear the negotiation result; 5332 * - the last negotiated result takes effect upon 5333 * the ethdev configure and start. 5334 * 5335 * @note 5336 * The PMD is supposed to first consider enabling the requested feature set 5337 * in its entirety. Only if it fails to do so, does it have the right to 5338 * respond with a smaller set of the originally requested features. 5339 * 5340 * @note 5341 * Return code (-ENOTSUP) does not necessarily mean that the requested 5342 * features are unsupported. In this case, the application should just 5343 * assume that these features can be used without prior negotiations. 
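 *
 * A minimal sketch of requesting flag and mark delivery before the first
 * rte_eth_dev_configure() call (adjust_flow_rules_without_mark() stands in
 * for whatever fallback the application implements and is not part of this
 * API):
 *
 *     uint64_t features = RTE_ETH_RX_METADATA_USER_FLAG |
 *                         RTE_ETH_RX_METADATA_USER_MARK;
 *     int ret = rte_eth_rx_metadata_negotiate(port_id, &features);
 *     if (ret == 0 && (features & RTE_ETH_RX_METADATA_USER_MARK) == 0)
 *         adjust_flow_rules_without_mark();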
5344 * 5345 * @param port_id 5346 * Port (ethdev) identifier 5347 * 5348 * @param[inout] features 5349 * Feature selection buffer 5350 * 5351 * @return 5352 * - (-EBUSY) if the port can't handle this in its current state; 5353 * - (-ENOTSUP) if the method itself is not supported by the PMD; 5354 * - (-ENODEV) if *port_id* is invalid; 5355 * - (-EINVAL) if *features* is NULL; 5356 * - (-EIO) if the device is removed; 5357 * - (0) on success 5358 */ 5359 __rte_experimental 5360 int rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features); 5361 5362 /** Flag to offload IP reassembly for IPv4 packets. */ 5363 #define RTE_ETH_DEV_REASSEMBLY_F_IPV4 (RTE_BIT32(0)) 5364 /** Flag to offload IP reassembly for IPv6 packets. */ 5365 #define RTE_ETH_DEV_REASSEMBLY_F_IPV6 (RTE_BIT32(1)) 5366 5367 /** 5368 * A structure used to get/set IP reassembly configuration. It is also used 5369 * to get the maximum capability values that a PMD can support. 5370 * 5371 * If rte_eth_ip_reassembly_capability_get() returns 0, IP reassembly can be 5372 * enabled using rte_eth_ip_reassembly_conf_set() and params values lower than 5373 * capability params can be set in the PMD. 5374 */ 5375 struct rte_eth_ip_reassembly_params { 5376 /** Maximum time in ms which PMD can wait for other fragments. */ 5377 uint32_t timeout_ms; 5378 /** Maximum number of fragments that can be reassembled. */ 5379 uint16_t max_frags; 5380 /** 5381 * Flags to enable reassembly of packet types - 5382 * RTE_ETH_DEV_REASSEMBLY_F_xxx. 5383 */ 5384 uint16_t flags; 5385 }; 5386 5387 /** 5388 * @warning 5389 * @b EXPERIMENTAL: this API may change without prior notice 5390 * 5391 * Get IP reassembly capabilities supported by the PMD. This is the first API 5392 * to be called for enabling the IP reassembly offload feature. PMD will return 5393 * the maximum values of parameters that PMD can support and user can call 5394 * rte_eth_ip_reassembly_conf_set() with param values lower than capability. 5395 * 5396 * @param port_id 5397 * The port identifier of the device. 5398 * @param capa 5399 * A pointer to rte_eth_ip_reassembly_params structure. 5400 * @return 5401 * - (-ENOTSUP) if offload configuration is not supported by device. 5402 * - (-ENODEV) if *port_id* invalid. 5403 * - (-EIO) if device is removed. 5404 * - (-EINVAL) if device is not configured or *capa* passed is NULL. 5405 * - (0) on success. 5406 */ 5407 __rte_experimental 5408 int rte_eth_ip_reassembly_capability_get(uint16_t port_id, 5409 struct rte_eth_ip_reassembly_params *capa); 5410 5411 /** 5412 * @warning 5413 * @b EXPERIMENTAL: this API may change without prior notice 5414 * 5415 * Get IP reassembly configuration parameters currently set in PMD. 5416 * The API will return error if the configuration is not already 5417 * set using rte_eth_ip_reassembly_conf_set() before calling this API or if 5418 * the device is not configured. 5419 * 5420 * @param port_id 5421 * The port identifier of the device. 5422 * @param conf 5423 * A pointer to rte_eth_ip_reassembly_params structure. 5424 * @return 5425 * - (-ENOTSUP) if offload configuration is not supported by device. 5426 * - (-ENODEV) if *port_id* invalid. 5427 * - (-EIO) if device is removed. 5428 * - (-EINVAL) if device is not configured or if *conf* passed is NULL or if 5429 * configuration is not set using rte_eth_ip_reassembly_conf_set(). 5430 * - (0) on success. 
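 *
 * A minimal sketch of the configuration flow this getter is part of
 * (the 500 ms timeout is an illustrative value; the real limits come from
 * the capability query):
 *
 *     struct rte_eth_ip_reassembly_params capa, conf;
 *     if (rte_eth_ip_reassembly_capability_get(port_id, &capa) == 0) {
 *         conf.timeout_ms = RTE_MIN(capa.timeout_ms, (uint32_t)500);
 *         conf.max_frags = capa.max_frags;
 *         conf.flags = RTE_ETH_DEV_REASSEMBLY_F_IPV4;
 *         rte_eth_ip_reassembly_conf_set(port_id, &conf);
 *         rte_eth_ip_reassembly_conf_get(port_id, &conf);
 *     }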
5431 */ 5432 __rte_experimental 5433 int rte_eth_ip_reassembly_conf_get(uint16_t port_id, 5434 struct rte_eth_ip_reassembly_params *conf); 5435 5436 /** 5437 * @warning 5438 * @b EXPERIMENTAL: this API may change without prior notice 5439 * 5440 * Set IP reassembly configuration parameters if the PMD supports IP reassembly 5441 * offload. User should first call rte_eth_ip_reassembly_capability_get() to 5442 * check the maximum values supported by the PMD before setting the 5443 * configuration. The use of this API is mandatory to enable this feature and 5444 * should be called before rte_eth_dev_start(). 5445 * 5446 * In datapath, PMD cannot guarantee that IP reassembly is always successful. 5447 * Hence, PMD shall register mbuf dynamic field and dynamic flag using 5448 * rte_eth_ip_reassembly_dynfield_register() to denote incomplete IP reassembly. 5449 * If dynfield is not successfully registered, error will be returned and 5450 * IP reassembly offload cannot be used. 5451 * 5452 * @param port_id 5453 * The port identifier of the device. 5454 * @param conf 5455 * A pointer to rte_eth_ip_reassembly_params structure. 5456 * @return 5457 * - (-ENOTSUP) if offload configuration is not supported by device. 5458 * - (-ENODEV) if *port_id* invalid. 5459 * - (-EIO) if device is removed. 5460 * - (-EINVAL) if device is not configured or if device is already started or 5461 * if *conf* passed is NULL or if mbuf dynfield is not registered 5462 * successfully by the PMD. 5463 * - (0) on success. 5464 */ 5465 __rte_experimental 5466 int rte_eth_ip_reassembly_conf_set(uint16_t port_id, 5467 const struct rte_eth_ip_reassembly_params *conf); 5468 5469 /** 5470 * In case of IP reassembly offload failure, packet will be updated with 5471 * dynamic flag - RTE_MBUF_DYNFLAG_IP_REASSEMBLY_INCOMPLETE_NAME and packets 5472 * will be returned without alteration. 5473 * The application can retrieve the attached fragments using mbuf dynamic field 5474 * RTE_MBUF_DYNFIELD_IP_REASSEMBLY_NAME. 5475 */ 5476 typedef struct { 5477 /** 5478 * Next fragment packet. Application should fetch dynamic field of 5479 * each fragment until a NULL is received and nb_frags is 0. 5480 */ 5481 struct rte_mbuf *next_frag; 5482 /** Time spent(in ms) by HW in waiting for further fragments. */ 5483 uint16_t time_spent; 5484 /** Number of more fragments attached in mbuf dynamic fields. */ 5485 uint16_t nb_frags; 5486 } rte_eth_ip_reassembly_dynfield_t; 5487 5488 /** 5489 * @warning 5490 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice 5491 * 5492 * Dump private info from device to a file. Provided data and the order depends 5493 * on the PMD. 5494 * 5495 * @param port_id 5496 * The port identifier of the Ethernet device. 5497 * @param file 5498 * A pointer to a file for output. 5499 * @return 5500 * - (0) on success. 5501 * - (-ENODEV) if *port_id* is invalid. 5502 * - (-EINVAL) if null file. 5503 * - (-ENOTSUP) if the device does not support this function. 5504 * - (-EIO) if device is removed. 5505 */ 5506 __rte_experimental 5507 int rte_eth_dev_priv_dump(uint16_t port_id, FILE *file); 5508 5509 #include <rte_ethdev_core.h> 5510 5511 /** 5512 * @internal 5513 * Helper routine for rte_eth_rx_burst(). 5514 * Should be called at exit from PMD's rte_eth_rx_bulk implementation. 5515 * Does necessary post-processing - invokes Rx callbacks if any, etc. 5516 * 5517 * @param port_id 5518 * The port identifier of the Ethernet device. 
5519 * @param queue_id 5520 * The index of the receive queue from which to retrieve input packets. 5521 * @param rx_pkts 5522 * The address of an array of pointers to *rte_mbuf* structures that 5523 * have been retrieved from the device. 5524 * @param nb_rx 5525 * The number of packets that were retrieved from the device. 5526 * @param nb_pkts 5527 * The number of elements in @p rx_pkts array. 5528 * @param opaque 5529 * Opaque pointer of Rx queue callback related data. 5530 * 5531 * @return 5532 * The number of packets effectively supplied to the @p rx_pkts array. 5533 */ 5534 uint16_t rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id, 5535 struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts, 5536 void *opaque); 5537 5538 /** 5539 * 5540 * Retrieve a burst of input packets from a receive queue of an Ethernet 5541 * device. The retrieved packets are stored in *rte_mbuf* structures whose 5542 * pointers are supplied in the *rx_pkts* array. 5543 * 5544 * The rte_eth_rx_burst() function loops, parsing the Rx ring of the 5545 * receive queue, up to *nb_pkts* packets, and for each completed Rx 5546 * descriptor in the ring, it performs the following operations: 5547 * 5548 * - Initialize the *rte_mbuf* data structure associated with the 5549 * Rx descriptor according to the information provided by the NIC into 5550 * that Rx descriptor. 5551 * 5552 * - Store the *rte_mbuf* data structure into the next entry of the 5553 * *rx_pkts* array. 5554 * 5555 * - Replenish the Rx descriptor with a new *rte_mbuf* buffer 5556 * allocated from the memory pool associated with the receive queue at 5557 * initialization time. 5558 * 5559 * When retrieving an input packet that was scattered by the controller 5560 * into multiple receive descriptors, the rte_eth_rx_burst() function 5561 * appends the associated *rte_mbuf* buffers to the first buffer of the 5562 * packet. 5563 * 5564 * The rte_eth_rx_burst() function returns the number of packets 5565 * actually retrieved, which is the number of *rte_mbuf* data structures 5566 * effectively supplied into the *rx_pkts* array. 5567 * A return value equal to *nb_pkts* indicates that the Rx queue contained 5568 * at least *rx_pkts* packets, and this is likely to signify that other 5569 * received packets remain in the input queue. Applications implementing 5570 * a "retrieve as much received packets as possible" policy can check this 5571 * specific case and keep invoking the rte_eth_rx_burst() function until 5572 * a value less than *nb_pkts* is returned. 5573 * 5574 * This receive method has the following advantages: 5575 * 5576 * - It allows a run-to-completion network stack engine to retrieve and 5577 * to immediately process received packets in a fast burst-oriented 5578 * approach, avoiding the overhead of unnecessary intermediate packet 5579 * queue/dequeue operations. 5580 * 5581 * - Conversely, it also allows an asynchronous-oriented processing 5582 * method to retrieve bursts of received packets and to immediately 5583 * queue them for further parallel processing by another logical core, 5584 * for instance. However, instead of having received packets being 5585 * individually queued by the driver, this approach allows the caller 5586 * of the rte_eth_rx_burst() function to queue a burst of retrieved 5587 * packets at a time and therefore dramatically reduce the cost of 5588 * enqueue/dequeue operations per packet. 
5589 * 5590 * - It allows the rte_eth_rx_burst() function of the driver to take 5591 * advantage of burst-oriented hardware features (CPU cache, 5592 * prefetch instructions, and so on) to minimize the number of CPU 5593 * cycles per packet. 5594 * 5595 * To summarize, the proposed receive API enables many 5596 * burst-oriented optimizations in both synchronous and asynchronous 5597 * packet processing environments with no overhead in both cases. 5598 * 5599 * @note 5600 * Some drivers using vector instructions require that *nb_pkts* is 5601 * divisible by 4 or 8, depending on the driver implementation. 5602 * 5603 * The rte_eth_rx_burst() function does not provide any error 5604 * notification to avoid the corresponding overhead. As a hint, the 5605 * upper-level application might check the status of the device link once 5606 * being systematically returned a 0 value for a given number of tries. 5607 * 5608 * @param port_id 5609 * The port identifier of the Ethernet device. 5610 * @param queue_id 5611 * The index of the receive queue from which to retrieve input packets. 5612 * The value must be in the range [0, nb_rx_queue - 1] previously supplied 5613 * to rte_eth_dev_configure(). 5614 * @param rx_pkts 5615 * The address of an array of pointers to *rte_mbuf* structures that 5616 * must be large enough to store *nb_pkts* pointers in it. 5617 * @param nb_pkts 5618 * The maximum number of packets to retrieve. 5619 * The value must be divisible by 8 in order to work with any driver. 5620 * @return 5621 * The number of packets actually retrieved, which is the number 5622 * of pointers to *rte_mbuf* structures effectively supplied to the 5623 * *rx_pkts* array. 5624 */ 5625 static inline uint16_t 5626 rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id, 5627 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts) 5628 { 5629 uint16_t nb_rx; 5630 struct rte_eth_fp_ops *p; 5631 void *qd; 5632 5633 #ifdef RTE_ETHDEV_DEBUG_RX 5634 if (port_id >= RTE_MAX_ETHPORTS || 5635 queue_id >= RTE_MAX_QUEUES_PER_PORT) { 5636 RTE_ETHDEV_LOG(ERR, 5637 "Invalid port_id=%u or queue_id=%u\n", 5638 port_id, queue_id); 5639 return 0; 5640 } 5641 #endif 5642 5643 /* fetch pointer to queue data */ 5644 p = &rte_eth_fp_ops[port_id]; 5645 qd = p->rxq.data[queue_id]; 5646 5647 #ifdef RTE_ETHDEV_DEBUG_RX 5648 RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0); 5649 5650 if (qd == NULL) { 5651 RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n", 5652 queue_id, port_id); 5653 return 0; 5654 } 5655 #endif 5656 5657 nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts); 5658 5659 #ifdef RTE_ETHDEV_RXTX_CALLBACKS 5660 { 5661 void *cb; 5662 5663 /* __ATOMIC_RELEASE memory order was used when the 5664 * call back was inserted into the list. 5665 * Since there is a clear dependency between loading 5666 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is 5667 * not required. 5668 */ 5669 cb = __atomic_load_n((void **)&p->rxq.clbk[queue_id], 5670 __ATOMIC_RELAXED); 5671 if (unlikely(cb != NULL)) 5672 nb_rx = rte_eth_call_rx_callbacks(port_id, queue_id, 5673 rx_pkts, nb_rx, nb_pkts, cb); 5674 } 5675 #endif 5676 5677 rte_ethdev_trace_rx_burst(port_id, queue_id, (void **)rx_pkts, nb_rx); 5678 return nb_rx; 5679 } 5680 5681 /** 5682 * Get the number of used descriptors of a Rx queue 5683 * 5684 * @param port_id 5685 * The port identifier of the Ethernet device. 5686 * @param queue_id 5687 * The queue ID on the specific port. 
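 *
 * @note
 *   An illustrative use is a coarse backlog check before deciding how
 *   aggressively to poll (MAX_BURST, pkts and nb_rx are hypothetical
 *   application names):
 *
 *     if (rte_eth_rx_queue_count(port_id, queue_id) > 0)
 *         nb_rx = rte_eth_rx_burst(port_id, queue_id, pkts, MAX_BURST);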
5688 * @return
5689 *   The number of used descriptors in the specific queue, or:
5690 *   - (-ENODEV) if *port_id* is invalid.
5691 *   - (-EINVAL) if *queue_id* is invalid.
5692 *   - (-ENOTSUP) if the device does not support this function.
5693 */
5694 static inline int
5695 rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
5696 {
5697 	struct rte_eth_fp_ops *p;
5698 	void *qd;
5699
5700 	if (port_id >= RTE_MAX_ETHPORTS ||
5701 			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
5702 		RTE_ETHDEV_LOG(ERR,
5703 			"Invalid port_id=%u or queue_id=%u\n",
5704 			port_id, queue_id);
5705 		return -EINVAL;
5706 	}
5707
5708 	/* fetch pointer to queue data */
5709 	p = &rte_eth_fp_ops[port_id];
5710 	qd = p->rxq.data[queue_id];
5711
5712 	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
5713 	if (*p->rx_queue_count == NULL)
5714 		return -ENOTSUP;
5715 	if (qd == NULL)
5716 		return -EINVAL;
5717
5718 	return (int)(*p->rx_queue_count)(qd);
5719 }
5720
5721 /**@{@name Rx hardware descriptor states
5722 * @see rte_eth_rx_descriptor_status
5723 */
5724 #define RTE_ETH_RX_DESC_AVAIL    0 /**< Desc available for hw. */
5725 #define RTE_ETH_RX_DESC_DONE     1 /**< Desc done, filled by hw. */
5726 #define RTE_ETH_RX_DESC_UNAVAIL  2 /**< Desc used by driver or hw. */
5727 /**@}*/
5728
5729 /**
5730 * Check the status of an Rx descriptor in the queue.
5731 *
5732 * It should be called in a similar context to the Rx function:
5733 * - on a dataplane core
5734 * - not concurrently on the same queue
5735 *
5736 * Since it's a dataplane function, no check is performed on port_id and
5737 * queue_id. The caller must therefore ensure that the port is enabled
5738 * and the queue is configured and running.
5739 *
5740 * Note: accessing a random descriptor in the ring may trigger cache
5741 * misses and have a performance impact.
5742 *
5743 * @param port_id
5744 *   A valid port identifier of the Ethernet device.
5745 * @param queue_id
5746 *   A valid Rx queue identifier on this port.
5747 * @param offset
5748 *   The offset of the descriptor starting from tail (0 is the next
5749 *   packet to be received by the driver).
5750 *
5751 * @return
5752 *   - (RTE_ETH_RX_DESC_AVAIL): Descriptor is available for the hardware to
5753 *     receive a packet.
5754 *   - (RTE_ETH_RX_DESC_DONE): Descriptor is done, it is filled by hw, but
5755 *     not yet processed by the driver (i.e. in the receive queue).
5756 *   - (RTE_ETH_RX_DESC_UNAVAIL): Descriptor is unavailable, either held by
5757 *     the driver and not yet returned to hw, or reserved by the hw.
5758 *   - (-EINVAL) bad descriptor offset.
5759 *   - (-ENOTSUP) if the device does not support this function.
5760 *   - (-ENODEV) bad port or queue (only if compiled with debug).
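 *
 * An illustrative check of how far the hardware has progressed into the
 * ring (nb_rxd and queue_is_backlogged are hypothetical application
 * variables, not part of this API):
 *
 *     int status = rte_eth_rx_descriptor_status(port_id, queue_id,
 *                                               nb_rxd / 2);
 *     if (status == RTE_ETH_RX_DESC_DONE)
 *         queue_is_backlogged = true;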

/**@{@name Rx hardware descriptor states
 * @see rte_eth_rx_descriptor_status
 */
#define RTE_ETH_RX_DESC_AVAIL    0 /**< Desc available for hw. */
#define RTE_ETH_RX_DESC_DONE     1 /**< Desc done, filled by hw. */
#define RTE_ETH_RX_DESC_UNAVAIL  2 /**< Desc used by driver or hw. */
/**@}*/

/**
 * Check the status of a Rx descriptor in the queue
 *
 * It should be called in a similar context to the Rx function:
 * - on a dataplane core
 * - not concurrently on the same queue
 *
 * Since it's a dataplane function, no check is performed on port_id and
 * queue_id. The caller must therefore ensure that the port is enabled
 * and the queue is configured and running.
 *
 * Note: accessing a random descriptor in the ring may trigger cache
 * misses and have a performance impact.
 *
 * @param port_id
 *   A valid port identifier of the Ethernet device.
 * @param queue_id
 *   A valid Rx queue identifier on this port.
 * @param offset
 *   The offset of the descriptor starting from tail (0 is the next
 *   packet to be received by the driver).
 *
 * @return
 *   - (RTE_ETH_RX_DESC_AVAIL): Descriptor is available for the hardware to
 *     receive a packet.
 *   - (RTE_ETH_RX_DESC_DONE): Descriptor is done, it is filled by hw, but
 *     not yet processed by the driver (i.e. in the receive queue).
 *   - (RTE_ETH_RX_DESC_UNAVAIL): Descriptor is unavailable, either held by
 *     the driver and not yet returned to hw, or reserved by the hw.
 *   - (-EINVAL) bad descriptor offset.
 *   - (-ENOTSUP) if the device does not support this function.
 *   - (-ENODEV) bad port or queue (only if compiled with debug).
 */
static inline int
rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
	uint16_t offset)
{
	struct rte_eth_fp_ops *p;
	void *qd;

#ifdef RTE_ETHDEV_DEBUG_RX
	if (port_id >= RTE_MAX_ETHPORTS ||
			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
		RTE_ETHDEV_LOG(ERR,
			"Invalid port_id=%u or queue_id=%u\n",
			port_id, queue_id);
		return -EINVAL;
	}
#endif

	/* fetch pointer to queue data */
	p = &rte_eth_fp_ops[port_id];
	qd = p->rxq.data[queue_id];

#ifdef RTE_ETHDEV_DEBUG_RX
	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
	if (qd == NULL)
		return -ENODEV;
#endif
	if (*p->rx_descriptor_status == NULL)
		return -ENOTSUP;
	return (*p->rx_descriptor_status)(qd, offset);
}

/**@{@name Tx hardware descriptor states
 * @see rte_eth_tx_descriptor_status
 */
#define RTE_ETH_TX_DESC_FULL    0 /**< Desc filled for hw, waiting xmit. */
#define RTE_ETH_TX_DESC_DONE    1 /**< Desc done, packet is transmitted. */
#define RTE_ETH_TX_DESC_UNAVAIL 2 /**< Desc used by driver or hw. */
/**@}*/

/**
 * Check the status of a Tx descriptor in the queue.
 *
 * It should be called in a similar context to the Tx function:
 * - on a dataplane core
 * - not concurrently on the same queue
 *
 * Since it's a dataplane function, no check is performed on port_id and
 * queue_id. The caller must therefore ensure that the port is enabled
 * and the queue is configured and running.
 *
 * Note: accessing a random descriptor in the ring may trigger cache
 * misses and have a performance impact.
 *
 * @param port_id
 *   A valid port identifier of the Ethernet device.
 * @param queue_id
 *   A valid Tx queue identifier on this port.
 * @param offset
 *   The offset of the descriptor starting from tail (0 is the place where
 *   the next packet will be sent).
 *
 * @return
 *   - (RTE_ETH_TX_DESC_FULL) Descriptor is being processed by the hw, i.e.
 *     in the transmit queue.
 *   - (RTE_ETH_TX_DESC_DONE) Hardware is done with this descriptor, it can
 *     be reused by the driver.
 *   - (RTE_ETH_TX_DESC_UNAVAIL): Descriptor is unavailable, reserved by the
 *     driver or the hardware.
 *   - (-EINVAL) bad descriptor offset.
 *   - (-ENOTSUP) if the device does not support this function.
 *   - (-ENODEV) bad port or queue (only if compiled with debug).
 */
static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
	uint16_t queue_id, uint16_t offset)
{
	struct rte_eth_fp_ops *p;
	void *qd;

#ifdef RTE_ETHDEV_DEBUG_TX
	if (port_id >= RTE_MAX_ETHPORTS ||
			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
		RTE_ETHDEV_LOG(ERR,
			"Invalid port_id=%u or queue_id=%u\n",
			port_id, queue_id);
		return -EINVAL;
	}
#endif

	/* fetch pointer to queue data */
	p = &rte_eth_fp_ops[port_id];
	qd = p->txq.data[queue_id];

#ifdef RTE_ETHDEV_DEBUG_TX
	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
	if (qd == NULL)
		return -ENODEV;
#endif
	if (*p->tx_descriptor_status == NULL)
		return -ENOTSUP;
	return (*p->tx_descriptor_status)(qd, offset);
}
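
/*
 * Illustrative usage sketch (editorial addition): querying the status of
 * the Tx descriptor that will be used 64 packets from now, e.g. to estimate
 * transmit ring pressure. The offset of 64 is an arbitrary assumption.
 *
 *	int status = rte_eth_tx_descriptor_status(port_id, queue_id, 64);
 *
 *	if (status == RTE_ETH_TX_DESC_FULL) {
 *		// at least 64 descriptors are still waiting for the NIC;
 *		// the application may throttle its transmit rate
 *	} else if (status == -ENOTSUP) {
 *		// the driver does not implement descriptor status queries
 *	}
 */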

/**
 * @internal
 * Helper routine for rte_eth_tx_burst().
 * Should be called before entering the PMD's tx_pkt_burst implementation.
 * Does necessary pre-processing - invokes Tx callbacks if any, etc.
 *
 * @param port_id
 *   The port identifier of the Ethernet device.
 * @param queue_id
 *   The index of the transmit queue through which output packets must be
 *   sent.
 * @param tx_pkts
 *   The address of an array of *nb_pkts* pointers to *rte_mbuf* structures
 *   which contain the output packets.
 * @param nb_pkts
 *   The maximum number of packets to transmit.
 * @return
 *   The number of output packets to transmit.
 */
uint16_t rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id,
	struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque);

/**
 * Send a burst of output packets on a transmit queue of an Ethernet device.
 *
 * The rte_eth_tx_burst() function is invoked to transmit output packets
 * on the output queue *queue_id* of the Ethernet device designated by its
 * *port_id*.
 * The *nb_pkts* parameter is the number of packets to send which are
 * supplied in the *tx_pkts* array of *rte_mbuf* structures, each of them
 * allocated from a pool created with rte_pktmbuf_pool_create().
 * The rte_eth_tx_burst() function loops, sending *nb_pkts* packets,
 * up to the number of transmit descriptors available in the Tx ring of the
 * transmit queue.
 * For each packet to send, the rte_eth_tx_burst() function performs
 * the following operations:
 *
 * - Pick up the next available descriptor in the transmit ring.
 *
 * - Free the network buffer previously sent with that descriptor, if any.
 *
 * - Initialize the transmit descriptor with the information provided
 *   in the *rte_mbuf* data structure.
 *
 * In the case of a segmented packet composed of a list of *rte_mbuf* buffers,
 * the rte_eth_tx_burst() function uses several transmit descriptors
 * of the ring.
 *
 * The rte_eth_tx_burst() function returns the number of packets it
 * actually sent. A return value equal to *nb_pkts* means that all packets
 * have been sent, and this is likely to signify that other output packets
 * could be immediately transmitted again. Applications that implement a
 * "send as many packets to transmit as possible" policy can check this
 * specific case and keep invoking the rte_eth_tx_burst() function until
 * a value less than *nb_pkts* is returned.
 *
 * It is the responsibility of the rte_eth_tx_burst() function to
 * transparently free the memory buffers of packets previously sent.
 * This feature is driven by the *tx_free_thresh* value supplied to the
 * rte_eth_dev_configure() function at device configuration time.
 * When the number of free Tx descriptors drops below this threshold, the
 * rte_eth_tx_burst() function must [attempt to] free the *rte_mbuf* buffers
 * of those packets whose transmission was effectively completed.
 *
 * If the PMD is RTE_ETH_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can
 * invoke this function concurrently on the same Tx queue without SW lock.
 * @see rte_eth_dev_info_get, struct rte_eth_txconf::offloads
 *
 * @see rte_eth_tx_prepare to perform some prior checks or adjustments
 * for offloads.
 *
 * @param port_id
 *   The port identifier of the Ethernet device.
 * @param queue_id
 *   The index of the transmit queue through which output packets must be
 *   sent.
 *   The value must be in the range [0, nb_tx_queue - 1] previously supplied
 *   to rte_eth_dev_configure().
 * @param tx_pkts
 *   The address of an array of *nb_pkts* pointers to *rte_mbuf* structures
 *   which contain the output packets.
 * @param nb_pkts
 *   The maximum number of packets to transmit.
 * @return
 *   The number of output packets actually stored in transmit descriptors of
 *   the transmit ring. The return value can be less than the value of the
 *   *nb_pkts* parameter when the transmit ring is full or has been filled up.
 */
static inline uint16_t
rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id,
		 struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
	struct rte_eth_fp_ops *p;
	void *qd;

#ifdef RTE_ETHDEV_DEBUG_TX
	if (port_id >= RTE_MAX_ETHPORTS ||
			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
		RTE_ETHDEV_LOG(ERR,
			"Invalid port_id=%u or queue_id=%u\n",
			port_id, queue_id);
		return 0;
	}
#endif

	/* fetch pointer to queue data */
	p = &rte_eth_fp_ops[port_id];
	qd = p->txq.data[queue_id];

#ifdef RTE_ETHDEV_DEBUG_TX
	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0);

	if (qd == NULL) {
		RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n",
			queue_id, port_id);
		return 0;
	}
#endif

#ifdef RTE_ETHDEV_RXTX_CALLBACKS
	{
		void *cb;

		/* __ATOMIC_RELEASE memory order was used when the
		 * callback was inserted into the list.
		 * Since there is a clear dependency between loading
		 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is
		 * not required.
		 */
		cb = __atomic_load_n((void **)&p->txq.clbk[queue_id],
				__ATOMIC_RELAXED);
		if (unlikely(cb != NULL))
			nb_pkts = rte_eth_call_tx_callbacks(port_id, queue_id,
					tx_pkts, nb_pkts, cb);
	}
#endif

	nb_pkts = p->tx_pkt_burst(qd, tx_pkts, nb_pkts);

	rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, nb_pkts);
	return nb_pkts;
}
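
/*
 * Illustrative usage sketch (editorial addition): the "send as many packets
 * as possible" policy described above, retrying until the whole burst has
 * been accepted by the driver. pkts and nb_pkts are assumed to come from
 * the application.
 *
 *	uint16_t sent = 0;
 *
 *	while (sent < nb_pkts) {
 *		uint16_t n = rte_eth_tx_burst(port_id, queue_id,
 *				&pkts[sent], nb_pkts - sent);
 *		if (n == 0)
 *			break;          // Tx ring full: stop retrying
 *		sent += n;
 *	}
 *
 *	// mbufs not accepted by the driver remain owned by the application
 *	for (uint16_t i = sent; i < nb_pkts; i++)
 *		rte_pktmbuf_free(pkts[i]);
 */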

/**
 * Process a burst of output packets on a transmit queue of an Ethernet device.
 *
 * The rte_eth_tx_prepare() function is invoked to prepare output packets to be
 * transmitted on the output queue *queue_id* of the Ethernet device designated
 * by its *port_id*.
 * The *nb_pkts* parameter is the number of packets to be prepared which are
 * supplied in the *tx_pkts* array of *rte_mbuf* structures, each of them
 * allocated from a pool created with rte_pktmbuf_pool_create().
 * For each packet to send, the rte_eth_tx_prepare() function performs
 * the following operations:
 *
 * - Check if the packet meets the device's requirements for Tx offloads.
 *
 * - Check limitations on the number of segments.
 *
 * - Check additional requirements when debug is enabled.
 *
 * - Update and/or reset required checksums when a Tx offload is set for the
 *   packet.
 *
 * Since this function can modify packet data, provided mbufs must be safely
 * writable (e.g. modified data cannot be in a shared segment).
 *
 * The rte_eth_tx_prepare() function returns the number of packets ready to be
 * sent. A return value equal to *nb_pkts* means that all packets are valid and
 * ready to be sent; otherwise it stops processing on the first invalid packet
 * and leaves the rest of the packets untouched.
 *
 * When this functionality is not implemented in the driver, all packets are
 * returned untouched.
 *
 * @param port_id
 *   The port identifier of the Ethernet device.
 *   The value must be a valid port ID.
 * @param queue_id
 *   The index of the transmit queue through which output packets must be
 *   sent.
 *   The value must be in the range [0, nb_tx_queue - 1] previously supplied
 *   to rte_eth_dev_configure().
 * @param tx_pkts
 *   The address of an array of *nb_pkts* pointers to *rte_mbuf* structures
 *   which contain the output packets.
 * @param nb_pkts
 *   The maximum number of packets to process.
 * @return
 *   The number of packets correct and ready to be sent. The return value can
 *   be less than the value of the *nb_pkts* parameter when a packet doesn't
 *   meet the device's requirements, with rte_errno set appropriately:
 *   - EINVAL: offload flags are not correctly set
 *   - ENOTSUP: the offload feature is not supported by the hardware
 *   - ENODEV: if *port_id* is invalid (with debug enabled only)
 *
 */

#ifndef RTE_ETHDEV_TX_PREPARE_NOOP

static inline uint16_t
rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id,
		struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
	struct rte_eth_fp_ops *p;
	void *qd;

#ifdef RTE_ETHDEV_DEBUG_TX
	if (port_id >= RTE_MAX_ETHPORTS ||
			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
		RTE_ETHDEV_LOG(ERR,
			"Invalid port_id=%u or queue_id=%u\n",
			port_id, queue_id);
		rte_errno = ENODEV;
		return 0;
	}
#endif

	/* fetch pointer to queue data */
	p = &rte_eth_fp_ops[port_id];
	qd = p->txq.data[queue_id];

#ifdef RTE_ETHDEV_DEBUG_TX
	if (!rte_eth_dev_is_valid_port(port_id)) {
		RTE_ETHDEV_LOG(ERR, "Invalid Tx port_id=%u\n", port_id);
		rte_errno = ENODEV;
		return 0;
	}
	if (qd == NULL) {
		RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n",
			queue_id, port_id);
		rte_errno = EINVAL;
		return 0;
	}
#endif

	if (!p->tx_pkt_prepare)
		return nb_pkts;

	return p->tx_pkt_prepare(qd, tx_pkts, nb_pkts);
}

#else

/*
 * Native NOOP operation for compilation targets which don't require any
 * preparation steps, where a functional NOOP would introduce an unnecessary
 * performance drop.
 *
 * It is generally not a good idea to turn this on globally, and it should
 * not be used if the behavior of tx_prepare can change.
 */

static inline uint16_t
rte_eth_tx_prepare(__rte_unused uint16_t port_id,
		__rte_unused uint16_t queue_id,
		__rte_unused struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
	return nb_pkts;
}

#endif
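
/*
 * Illustrative usage sketch (editorial addition): validating a burst with
 * rte_eth_tx_prepare() before handing it to rte_eth_tx_burst(). pkts and
 * nb_pkts are assumed to come from the application; handle_bad_packet() is
 * a hypothetical application-side helper.
 *
 *	uint16_t nb_prep = rte_eth_tx_prepare(port_id, queue_id,
 *			pkts, nb_pkts);
 *
 *	if (nb_prep != nb_pkts) {
 *		// pkts[nb_prep] failed the offload checks; rte_errno says why
 *		handle_bad_packet(pkts[nb_prep]);
 *	}
 *	uint16_t nb_sent = rte_eth_tx_burst(port_id, queue_id, pkts, nb_prep);
 */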

/**
 * Send any packets queued up for transmission on a port and HW queue
 *
 * This causes an explicit flush of packets previously buffered via the
 * rte_eth_tx_buffer() function. It returns the number of packets successfully
 * sent to the NIC, and calls the error callback for any unsent packets. Unless
 * explicitly set up otherwise, the default callback simply frees the unsent
 * packets back to the owning mempool.
 *
 * @param port_id
 *   The port identifier of the Ethernet device.
 * @param queue_id
 *   The index of the transmit queue through which output packets must be
 *   sent.
 *   The value must be in the range [0, nb_tx_queue - 1] previously supplied
 *   to rte_eth_dev_configure().
 * @param buffer
 *   Buffer of packets to be transmitted.
 * @return
 *   The number of packets successfully sent to the Ethernet device. The error
 *   callback is called for any packets which could not be sent.
 */
static inline uint16_t
rte_eth_tx_buffer_flush(uint16_t port_id, uint16_t queue_id,
		struct rte_eth_dev_tx_buffer *buffer)
{
	uint16_t sent;
	uint16_t to_send = buffer->length;

	if (to_send == 0)
		return 0;

	sent = rte_eth_tx_burst(port_id, queue_id, buffer->pkts, to_send);

	buffer->length = 0;

	/* All packets sent, or to be dealt with by callback below */
	if (unlikely(sent != to_send))
		buffer->error_callback(&buffer->pkts[sent],
				(uint16_t)(to_send - sent),
				buffer->error_userdata);

	return sent;
}

/**
 * Buffer a single packet for future transmission on a port and queue
 *
 * This function takes a single mbuf/packet and buffers it for later
 * transmission on the particular port and queue specified. Once the buffer is
 * full of packets, an attempt will be made to transmit all the buffered
 * packets. In case of error, where not all packets can be transmitted, a
 * callback is called with the unsent packets as a parameter. If no callback
 * is explicitly set up, the unsent packets are just freed back to the owning
 * mempool. The function returns the number of packets actually sent, i.e.
 * 0 if no buffer flush occurred, otherwise the number of packets successfully
 * flushed.
 *
 * @param port_id
 *   The port identifier of the Ethernet device.
 * @param queue_id
 *   The index of the transmit queue through which output packets must be
 *   sent.
 *   The value must be in the range [0, nb_tx_queue - 1] previously supplied
 *   to rte_eth_dev_configure().
 * @param buffer
 *   Buffer used to collect packets to be sent.
 * @param tx_pkt
 *   Pointer to the packet mbuf to be sent.
 * @return
 *   0 = packet has been buffered for later transmission
 *   N > 0 = packet has been buffered, and the buffer was subsequently flushed,
 *     causing N packets to be sent, and the error callback to be called for
 *     the rest.
 */
static __rte_always_inline uint16_t
rte_eth_tx_buffer(uint16_t port_id, uint16_t queue_id,
		struct rte_eth_dev_tx_buffer *buffer, struct rte_mbuf *tx_pkt)
{
	buffer->pkts[buffer->length++] = tx_pkt;
	if (buffer->length < buffer->size)
		return 0;

	return rte_eth_tx_buffer_flush(port_id, queue_id, buffer);
}

#ifdef __cplusplus
}
#endif

#endif /* _RTE_ETHDEV_H_ */
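
/*
 * Illustrative usage sketch (editorial addition, appended after the include
 * guard): buffering packets with rte_eth_tx_buffer() and flushing the
 * remainder at the end of a polling iteration. BUF_SIZE is an arbitrary
 * assumption; rte_eth_tx_buffer_init() and RTE_ETH_TX_BUFFER_SIZE() are the
 * standard helpers for sizing and initializing rte_eth_dev_tx_buffer, and
 * rte_zmalloc() comes from rte_malloc.h.
 *
 *	#define BUF_SIZE 32
 *
 *	struct rte_eth_dev_tx_buffer *txbuf = rte_zmalloc("tx_buffer",
 *			RTE_ETH_TX_BUFFER_SIZE(BUF_SIZE), 0);
 *	rte_eth_tx_buffer_init(txbuf, BUF_SIZE);
 *
 *	// in the forwarding loop: queue each packet, flushing automatically
 *	// whenever the buffer becomes full
 *	rte_eth_tx_buffer(port_id, queue_id, txbuf, pkt);
 *
 *	// at the end of the iteration, push out anything still buffered
 *	rte_eth_tx_buffer_flush(port_id, queue_id, txbuf);
 */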