/* SPDX-License-Identifier: BSD-3-Clause
 * Copyright(c) 2010-2017 Intel Corporation
 */

#ifndef _RTE_ETHDEV_H_
#define _RTE_ETHDEV_H_

/**
 * @file
 *
 * RTE Ethernet Device API
 *
 * The Ethernet Device API is composed of two parts:
 *
 * - The application-oriented Ethernet API that includes functions to setup
 *   an Ethernet device (configure it, setup its Rx and Tx queues and start it),
 *   to get its MAC address, the speed and the status of its physical link,
 *   to receive and to transmit packets, and so on.
 *
 * - The driver-oriented Ethernet API that exports functions allowing
 *   an Ethernet Poll Mode Driver (PMD) to allocate an Ethernet device instance,
 *   create memzones for HW rings, process registered callbacks, and so on.
 *   PMDs should include ethdev_driver.h instead of this header.
 *
 * By default, all the functions of the Ethernet Device API exported by a PMD
 * are lock-free functions which are assumed not to be invoked in parallel
 * from different logical cores on the same target object. For instance,
 * the receive function of a PMD cannot be invoked in parallel on two logical
 * cores to poll the same Rx queue [of the same port]. Of course, this function
 * can be invoked in parallel by different logical cores on different Rx queues.
 * It is the responsibility of the upper level application to enforce this rule.
 *
 * If needed, parallel accesses by multiple logical cores to shared queues
 * shall be explicitly protected by dedicated inline lock-aware functions
 * built on top of their corresponding lock-free functions of the PMD API.
 *
 * In all functions of the Ethernet API, the Ethernet device is
 * designated by an integer >= 0 named the device port identifier.
 *
 * At the Ethernet driver level, Ethernet devices are represented by a generic
 * data structure of type *rte_eth_dev*.
 *
 * Ethernet devices are dynamically registered during the PCI probing phase
 * performed at EAL initialization time.
 * When an Ethernet device is being probed, an *rte_eth_dev* structure and
 * a new port identifier are allocated for that device. Then, the eth_dev_init()
 * function supplied by the Ethernet driver matching the probed PCI
 * device is invoked to properly initialize the device.
 *
 * The role of the device init function consists of resetting the hardware,
 * checking access to Non-volatile Memory (NVM), reading the MAC address
 * from NVM, etc.
 *
 * If the device init operation is successful, the correspondence between
 * the port identifier assigned to the new device and its associated
 * *rte_eth_dev* structure is effectively registered.
 * Otherwise, both the *rte_eth_dev* structure and the port identifier are
 * freed.
 *
 * The functions exported by the application Ethernet API to setup a device
 * designated by its port identifier must be invoked in the following order:
 *     - rte_eth_dev_configure()
 *     - rte_eth_tx_queue_setup()
 *     - rte_eth_rx_queue_setup()
 *     - rte_eth_dev_start()
 *
 * Then, the network application can invoke, in any order, the functions
 * exported by the Ethernet API to get the MAC address of a given device, to
 * get the speed and the status of a device physical link, to receive/transmit
 * [burst of] packets, and so on.
 *
 * If the application wants to change the configuration (i.e. call
 * rte_eth_dev_configure(), rte_eth_tx_queue_setup(), or
 * rte_eth_rx_queue_setup()), it must call rte_eth_dev_stop() first to stop the
 * device and then do the reconfiguration before calling rte_eth_dev_start()
 * again. The transmit and receive functions should not be invoked when the
 * device is stopped.
 *
 * Please note that some configuration is not stored between calls to
 * rte_eth_dev_stop()/rte_eth_dev_start(). The following configuration will
 * be retained:
 *
 *     - MTU
 *     - flow control settings
 *     - receive mode configuration (promiscuous mode, all-multicast mode,
 *       hardware checksum mode, RSS/VMDq settings etc.)
 *     - VLAN filtering configuration
 *     - default MAC address
 *     - MAC addresses supplied to MAC address array
 *     - flow director filtering mode (but not filtering rules)
 *     - NIC queue statistics mappings
 *
 * The following configuration may be retained or not
 * depending on the device capabilities:
 *
 *     - flow rules
 *     - flow-related shared objects, e.g. indirect actions
 *
 * Any other configuration will not be stored and will need to be re-entered
 * before a call to rte_eth_dev_start().
 *
 * Finally, a network application can close an Ethernet device by invoking the
 * rte_eth_dev_close() function.
 *
 * Each function of the application Ethernet API invokes a specific function
 * of the PMD that controls the target device designated by its port
 * identifier.
 * For this purpose, all device-specific functions of an Ethernet driver are
 * supplied through a set of pointers contained in a generic structure of type
 * *eth_dev_ops*.
 * The address of the *eth_dev_ops* structure is stored in the *rte_eth_dev*
 * structure by the device init function of the Ethernet driver, which is
 * invoked during the PCI probing phase, as explained earlier.
 *
 * In other words, each function of the Ethernet API simply retrieves the
 * *rte_eth_dev* structure associated with the device port identifier and
 * performs an indirect invocation of the corresponding driver function
 * supplied in the *eth_dev_ops* structure of the *rte_eth_dev* structure.
 *
 * For performance reasons, the addresses of the burst-oriented Rx and Tx
 * functions of the Ethernet driver are not contained in the *eth_dev_ops*
 * structure. Instead, they are directly stored at the beginning of the
 * *rte_eth_dev* structure to avoid an extra indirect memory access during
 * their invocation.
 *
 * RTE Ethernet device drivers do not use interrupts for transmitting or
 * receiving. Instead, Ethernet drivers export Poll-Mode receive and transmit
 * functions to applications.
 * Both receive and transmit functions are packet-burst oriented to minimize
 * their cost per packet through the following optimizations:
 *
 * - Sharing among multiple packets the incompressible cost of the
 *   invocation of receive/transmit functions.
 *
 * - Enabling receive/transmit functions to take advantage of burst-oriented
 *   hardware features (L1 cache, prefetch instructions, NIC head/tail
 *   registers) to minimize the number of CPU cycles per packet, for instance,
 *   by avoiding useless read memory accesses to ring descriptors, or by
 *   systematically using arrays of pointers that exactly fit L1 cache line
 *   boundaries and sizes.
 *
 * The burst-oriented receive function does not provide any error notification,
 * to avoid the corresponding overhead. As a hint, the upper-level application
 * might check the status of the device link when the receive function of the
 * driver has systematically returned 0 for a given number of tries.
 */
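/*
 * Illustrative usage sketch (not part of the API): a minimal single-queue
 * port bring-up and forwarding loop following the call order described in
 * the comment above. Descriptor counts, burst size and the mbuf pool
 * parameters are arbitrary example values; error handling is omitted.
 * The mbuf pool helpers come from rte_mbuf.h.
 *
 *   struct rte_eth_conf port_conf = {0};      // default port configuration
 *   struct rte_mempool *mb_pool;
 *   struct rte_mbuf *pkts[32];
 *   uint16_t port_id = 0;
 *
 *   mb_pool = rte_pktmbuf_pool_create("rx_pool", 8192, 256, 0,
 *                                     RTE_MBUF_DEFAULT_BUF_SIZE,
 *                                     rte_socket_id());
 *   rte_eth_dev_configure(port_id, 1, 1, &port_conf);  // 1 Rx + 1 Tx queue
 *   rte_eth_tx_queue_setup(port_id, 0, 512, rte_socket_id(), NULL);
 *   rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(), NULL, mb_pool);
 *   rte_eth_dev_start(port_id);
 *
 *   for (;;) {
 *       uint16_t nb_rx = rte_eth_rx_burst(port_id, 0, pkts, 32);
 *       uint16_t nb_tx = rte_eth_tx_burst(port_id, 0, pkts, nb_rx);
 *       while (nb_tx < nb_rx)
 *           rte_pktmbuf_free(pkts[nb_tx++]);  // free what was not sent
 *   }
 */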
#ifdef __cplusplus
extern "C" {
#endif

#include <stdint.h>

/* Use this macro to check if LRO API is supported */
#define RTE_ETHDEV_HAS_LRO_SUPPORT

/* Alias RTE_LIBRTE_ETHDEV_DEBUG for backward compatibility. */
#ifdef RTE_LIBRTE_ETHDEV_DEBUG
#define RTE_ETHDEV_DEBUG_RX
#define RTE_ETHDEV_DEBUG_TX
#endif

#include <rte_compat.h>
#include <rte_log.h>
#include <rte_interrupts.h>
#include <rte_dev.h>
#include <rte_devargs.h>
#include <rte_bitops.h>
#include <rte_errno.h>
#include <rte_common.h>
#include <rte_config.h>
#include <rte_ether.h>
#include <rte_power_intrinsics.h>

#include "rte_ethdev_trace_fp.h"
#include "rte_dev_info.h"

extern int rte_eth_dev_logtype;

#define RTE_ETHDEV_LOG(level, ...) \
	rte_log(RTE_LOG_ ## level, rte_eth_dev_logtype, "" __VA_ARGS__)

struct rte_mbuf;

/**
 * Initializes a device iterator.
 *
 * This iterator allows accessing a list of devices matching some devargs.
 *
 * @param iter
 *   Device iterator handle initialized by the function.
 *   The fields bus_str and cls_str might be dynamically allocated,
 *   and could be freed by calling rte_eth_iterator_cleanup().
 *
 * @param devargs
 *   Device description string.
 *
 * @return
 *   0 on successful initialization, negative otherwise.
 */
int rte_eth_iterator_init(struct rte_dev_iterator *iter, const char *devargs);

/**
 * Iterates on devices with devargs filter.
 * The ownership is not checked.
 *
 * The next port ID is returned, and the iterator is updated.
 *
 * @param iter
 *   Device iterator handle initialized by rte_eth_iterator_init().
 *   Some fields bus_str and cls_str might be freed when no more port is found,
 *   by calling rte_eth_iterator_cleanup().
 *
 * @return
 *   A port ID if found, RTE_MAX_ETHPORTS otherwise.
 */
uint16_t rte_eth_iterator_next(struct rte_dev_iterator *iter);

/**
 * Free some allocated fields of the iterator.
 *
 * This function is automatically called by rte_eth_iterator_next()
 * on the last iteration (i.e. when no more matching port is found).
 *
 * It is safe to call this function twice; it will do nothing more.
 *
 * @param iter
 *   Device iterator handle initialized by rte_eth_iterator_init().
 *   The fields bus_str and cls_str are freed if needed.
 */
void rte_eth_iterator_cleanup(struct rte_dev_iterator *iter);

/**
 * Macro to iterate over all ethdev ports matching some devargs.
 *
 * If a break is done before the end of the loop,
 * the function rte_eth_iterator_cleanup() must be called.
 *
 * @param id
 *   Iterated port ID of type uint16_t.
 * @param devargs
 *   Device parameters input as string of type char*.
 * @param iter
 *   Iterator handle of type struct rte_dev_iterator, used internally.
 */
#define RTE_ETH_FOREACH_MATCHING_DEV(id, devargs, iter) \
	for (rte_eth_iterator_init(iter, devargs), \
	     id = rte_eth_iterator_next(iter); \
	     id != RTE_MAX_ETHPORTS; \
	     id = rte_eth_iterator_next(iter))
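/*
 * Illustrative sketch: iterating over the ports matching a devargs string.
 * The devargs value and the early-exit condition are assumptions made for
 * this example only; on early exit rte_eth_iterator_cleanup() must be
 * called, as documented above.
 *
 *   struct rte_dev_iterator iterator;
 *   uint16_t port_id;
 *
 *   RTE_ETH_FOREACH_MATCHING_DEV(port_id, "class=eth", &iterator) {
 *       printf("matched port %u\n", port_id);
 *       if (some_condition) {                 // hypothetical early exit
 *           rte_eth_iterator_cleanup(&iterator);
 *           break;
 *       }
 *   }
 */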
/**
 * A structure used to retrieve statistics for an Ethernet port.
 * Not all statistics fields in struct rte_eth_stats are supported
 * by any type of network interface card (NIC). If any statistics
 * field is not supported, its value is 0.
 * All byte-related statistics do not include Ethernet FCS regardless
 * of whether these bytes have been delivered to the application
 * (see RTE_ETH_RX_OFFLOAD_KEEP_CRC).
 */
struct rte_eth_stats {
	uint64_t ipackets;  /**< Total number of successfully received packets. */
	uint64_t opackets;  /**< Total number of successfully transmitted packets. */
	uint64_t ibytes;    /**< Total number of successfully received bytes. */
	uint64_t obytes;    /**< Total number of successfully transmitted bytes. */
	/**
	 * Total of Rx packets dropped by the HW,
	 * because there are no available buffers (i.e. Rx queues are full).
	 */
	uint64_t imissed;
	uint64_t ierrors;   /**< Total number of erroneous received packets. */
	uint64_t oerrors;   /**< Total number of failed transmitted packets. */
	uint64_t rx_nombuf; /**< Total number of Rx mbuf allocation failures. */
	/* Queue stats are limited to max 256 queues */
	/** Total number of queue Rx packets. */
	uint64_t q_ipackets[RTE_ETHDEV_QUEUE_STAT_CNTRS];
	/** Total number of queue Tx packets. */
	uint64_t q_opackets[RTE_ETHDEV_QUEUE_STAT_CNTRS];
	/** Total number of successfully received queue bytes. */
	uint64_t q_ibytes[RTE_ETHDEV_QUEUE_STAT_CNTRS];
	/** Total number of successfully transmitted queue bytes. */
	uint64_t q_obytes[RTE_ETHDEV_QUEUE_STAT_CNTRS];
	/** Total number of queue packets received that are dropped. */
	uint64_t q_errors[RTE_ETHDEV_QUEUE_STAT_CNTRS];
};
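/*
 * Illustrative sketch: reading the basic counters of a port with
 * rte_eth_stats_get(), declared later in this file. Unsupported fields
 * simply read as 0, as noted above.
 *
 *   struct rte_eth_stats stats;
 *
 *   if (rte_eth_stats_get(port_id, &stats) == 0)
 *       printf("rx=%" PRIu64 " tx=%" PRIu64 " missed=%" PRIu64 "\n",
 *              stats.ipackets, stats.opackets, stats.imissed);
 */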
/**@{@name Link speed capabilities
 * Device supported speeds bitmap flags
 */
#define RTE_ETH_LINK_SPEED_AUTONEG 0             /**< Autonegotiate (all speeds) */
#define RTE_ETH_LINK_SPEED_FIXED   RTE_BIT32(0)  /**< Disable autoneg (fixed speed) */
#define RTE_ETH_LINK_SPEED_10M_HD  RTE_BIT32(1)  /**< 10 Mbps half-duplex */
#define RTE_ETH_LINK_SPEED_10M     RTE_BIT32(2)  /**< 10 Mbps full-duplex */
#define RTE_ETH_LINK_SPEED_100M_HD RTE_BIT32(3)  /**< 100 Mbps half-duplex */
#define RTE_ETH_LINK_SPEED_100M    RTE_BIT32(4)  /**< 100 Mbps full-duplex */
#define RTE_ETH_LINK_SPEED_1G      RTE_BIT32(5)  /**< 1 Gbps */
#define RTE_ETH_LINK_SPEED_2_5G    RTE_BIT32(6)  /**< 2.5 Gbps */
#define RTE_ETH_LINK_SPEED_5G      RTE_BIT32(7)  /**< 5 Gbps */
#define RTE_ETH_LINK_SPEED_10G     RTE_BIT32(8)  /**< 10 Gbps */
#define RTE_ETH_LINK_SPEED_20G     RTE_BIT32(9)  /**< 20 Gbps */
#define RTE_ETH_LINK_SPEED_25G     RTE_BIT32(10) /**< 25 Gbps */
#define RTE_ETH_LINK_SPEED_40G     RTE_BIT32(11) /**< 40 Gbps */
#define RTE_ETH_LINK_SPEED_50G     RTE_BIT32(12) /**< 50 Gbps */
#define RTE_ETH_LINK_SPEED_56G     RTE_BIT32(13) /**< 56 Gbps */
#define RTE_ETH_LINK_SPEED_100G    RTE_BIT32(14) /**< 100 Gbps */
#define RTE_ETH_LINK_SPEED_200G    RTE_BIT32(15) /**< 200 Gbps */
/**@}*/

#define ETH_LINK_SPEED_AUTONEG RTE_DEPRECATED(ETH_LINK_SPEED_AUTONEG) RTE_ETH_LINK_SPEED_AUTONEG
#define ETH_LINK_SPEED_FIXED RTE_DEPRECATED(ETH_LINK_SPEED_FIXED) RTE_ETH_LINK_SPEED_FIXED
#define ETH_LINK_SPEED_10M_HD RTE_DEPRECATED(ETH_LINK_SPEED_10M_HD) RTE_ETH_LINK_SPEED_10M_HD
#define ETH_LINK_SPEED_10M RTE_DEPRECATED(ETH_LINK_SPEED_10M) RTE_ETH_LINK_SPEED_10M
#define ETH_LINK_SPEED_100M_HD RTE_DEPRECATED(ETH_LINK_SPEED_100M_HD) RTE_ETH_LINK_SPEED_100M_HD
#define ETH_LINK_SPEED_100M RTE_DEPRECATED(ETH_LINK_SPEED_100M) RTE_ETH_LINK_SPEED_100M
#define ETH_LINK_SPEED_1G RTE_DEPRECATED(ETH_LINK_SPEED_1G) RTE_ETH_LINK_SPEED_1G
#define ETH_LINK_SPEED_2_5G RTE_DEPRECATED(ETH_LINK_SPEED_2_5G) RTE_ETH_LINK_SPEED_2_5G
#define ETH_LINK_SPEED_5G RTE_DEPRECATED(ETH_LINK_SPEED_5G) RTE_ETH_LINK_SPEED_5G
#define ETH_LINK_SPEED_10G RTE_DEPRECATED(ETH_LINK_SPEED_10G) RTE_ETH_LINK_SPEED_10G
#define ETH_LINK_SPEED_20G RTE_DEPRECATED(ETH_LINK_SPEED_20G) RTE_ETH_LINK_SPEED_20G
#define ETH_LINK_SPEED_25G RTE_DEPRECATED(ETH_LINK_SPEED_25G) RTE_ETH_LINK_SPEED_25G
#define ETH_LINK_SPEED_40G RTE_DEPRECATED(ETH_LINK_SPEED_40G) RTE_ETH_LINK_SPEED_40G
#define ETH_LINK_SPEED_50G RTE_DEPRECATED(ETH_LINK_SPEED_50G) RTE_ETH_LINK_SPEED_50G
#define ETH_LINK_SPEED_56G RTE_DEPRECATED(ETH_LINK_SPEED_56G) RTE_ETH_LINK_SPEED_56G
#define ETH_LINK_SPEED_100G RTE_DEPRECATED(ETH_LINK_SPEED_100G) RTE_ETH_LINK_SPEED_100G
#define ETH_LINK_SPEED_200G RTE_DEPRECATED(ETH_LINK_SPEED_200G) RTE_ETH_LINK_SPEED_200G

/**@{@name Link speed
 * Ethernet numeric link speeds in Mbps
 */
#define RTE_ETH_SPEED_NUM_NONE    0          /**< Not defined */
#define RTE_ETH_SPEED_NUM_10M     10         /**< 10 Mbps */
#define RTE_ETH_SPEED_NUM_100M    100        /**< 100 Mbps */
#define RTE_ETH_SPEED_NUM_1G      1000       /**< 1 Gbps */
#define RTE_ETH_SPEED_NUM_2_5G    2500       /**< 2.5 Gbps */
#define RTE_ETH_SPEED_NUM_5G      5000       /**< 5 Gbps */
#define RTE_ETH_SPEED_NUM_10G     10000      /**< 10 Gbps */
#define RTE_ETH_SPEED_NUM_20G     20000      /**< 20 Gbps */
#define RTE_ETH_SPEED_NUM_25G     25000      /**< 25 Gbps */
#define RTE_ETH_SPEED_NUM_40G     40000      /**< 40 Gbps */
#define RTE_ETH_SPEED_NUM_50G     50000      /**< 50 Gbps */
#define RTE_ETH_SPEED_NUM_56G     56000      /**< 56 Gbps */
#define RTE_ETH_SPEED_NUM_100G    100000     /**< 100 Gbps */
#define RTE_ETH_SPEED_NUM_200G    200000     /**< 200 Gbps */
#define RTE_ETH_SPEED_NUM_UNKNOWN UINT32_MAX /**< Unknown */
/**@}*/

#define ETH_SPEED_NUM_NONE RTE_DEPRECATED(ETH_SPEED_NUM_NONE) RTE_ETH_SPEED_NUM_NONE
#define ETH_SPEED_NUM_10M RTE_DEPRECATED(ETH_SPEED_NUM_10M) RTE_ETH_SPEED_NUM_10M
#define ETH_SPEED_NUM_100M RTE_DEPRECATED(ETH_SPEED_NUM_100M) RTE_ETH_SPEED_NUM_100M
#define ETH_SPEED_NUM_1G RTE_DEPRECATED(ETH_SPEED_NUM_1G) RTE_ETH_SPEED_NUM_1G
#define ETH_SPEED_NUM_2_5G RTE_DEPRECATED(ETH_SPEED_NUM_2_5G) RTE_ETH_SPEED_NUM_2_5G
#define ETH_SPEED_NUM_5G RTE_DEPRECATED(ETH_SPEED_NUM_5G) RTE_ETH_SPEED_NUM_5G
#define ETH_SPEED_NUM_10G RTE_DEPRECATED(ETH_SPEED_NUM_10G) RTE_ETH_SPEED_NUM_10G
#define ETH_SPEED_NUM_20G RTE_DEPRECATED(ETH_SPEED_NUM_20G) RTE_ETH_SPEED_NUM_20G
#define ETH_SPEED_NUM_25G RTE_DEPRECATED(ETH_SPEED_NUM_25G) RTE_ETH_SPEED_NUM_25G
#define ETH_SPEED_NUM_40G RTE_DEPRECATED(ETH_SPEED_NUM_40G) RTE_ETH_SPEED_NUM_40G
#define ETH_SPEED_NUM_50G RTE_DEPRECATED(ETH_SPEED_NUM_50G) RTE_ETH_SPEED_NUM_50G
#define ETH_SPEED_NUM_56G RTE_DEPRECATED(ETH_SPEED_NUM_56G) RTE_ETH_SPEED_NUM_56G
#define ETH_SPEED_NUM_100G RTE_DEPRECATED(ETH_SPEED_NUM_100G) RTE_ETH_SPEED_NUM_100G
#define ETH_SPEED_NUM_200G RTE_DEPRECATED(ETH_SPEED_NUM_200G) RTE_ETH_SPEED_NUM_200G
#define ETH_SPEED_NUM_UNKNOWN RTE_DEPRECATED(ETH_SPEED_NUM_UNKNOWN) RTE_ETH_SPEED_NUM_UNKNOWN

/**
 * A structure used to retrieve link-level information of an Ethernet port.
 */
__extension__
struct rte_eth_link {
	uint32_t link_speed;        /**< RTE_ETH_SPEED_NUM_ */
	uint16_t link_duplex  : 1;  /**< RTE_ETH_LINK_[HALF/FULL]_DUPLEX */
	uint16_t link_autoneg : 1;  /**< RTE_ETH_LINK_[AUTONEG/FIXED] */
	uint16_t link_status  : 1;  /**< RTE_ETH_LINK_[DOWN/UP] */
} __rte_aligned(8);      /**< aligned for atomic64 read/write */

/**@{@name Link negotiation
 * Constants used in link management.
 */
#define RTE_ETH_LINK_HALF_DUPLEX 0  /**< Half-duplex connection (see link_duplex). */
#define RTE_ETH_LINK_FULL_DUPLEX 1  /**< Full-duplex connection (see link_duplex). */
#define RTE_ETH_LINK_DOWN        0  /**< Link is down (see link_status). */
#define RTE_ETH_LINK_UP          1  /**< Link is up (see link_status). */
#define RTE_ETH_LINK_FIXED       0  /**< No autonegotiation (see link_autoneg). */
#define RTE_ETH_LINK_AUTONEG     1  /**< Autonegotiated (see link_autoneg). */
#define RTE_ETH_LINK_MAX_STR_LEN 40 /**< Max length of default link string. */
/**@}*/

#define ETH_LINK_HALF_DUPLEX RTE_DEPRECATED(ETH_LINK_HALF_DUPLEX) RTE_ETH_LINK_HALF_DUPLEX
#define ETH_LINK_FULL_DUPLEX RTE_DEPRECATED(ETH_LINK_FULL_DUPLEX) RTE_ETH_LINK_FULL_DUPLEX
#define ETH_LINK_DOWN RTE_DEPRECATED(ETH_LINK_DOWN) RTE_ETH_LINK_DOWN
#define ETH_LINK_UP RTE_DEPRECATED(ETH_LINK_UP) RTE_ETH_LINK_UP
#define ETH_LINK_FIXED RTE_DEPRECATED(ETH_LINK_FIXED) RTE_ETH_LINK_FIXED
#define ETH_LINK_AUTONEG RTE_DEPRECATED(ETH_LINK_AUTONEG) RTE_ETH_LINK_AUTONEG

/**
 * A structure used to configure the ring threshold registers of an Rx/Tx
 * queue for an Ethernet port.
 */
struct rte_eth_thresh {
	uint8_t pthresh; /**< Ring prefetch threshold. */
	uint8_t hthresh; /**< Ring host threshold. */
	uint8_t wthresh; /**< Ring writeback threshold. */
};

/**@{@name Multi-queue mode
 * @see rte_eth_conf.rxmode.mq_mode.
 */
#define RTE_ETH_MQ_RX_RSS_FLAG  RTE_BIT32(0) /**< Enable RSS. @see rte_eth_rss_conf */
#define RTE_ETH_MQ_RX_DCB_FLAG  RTE_BIT32(1) /**< Enable DCB. */
#define RTE_ETH_MQ_RX_VMDQ_FLAG RTE_BIT32(2) /**< Enable VMDq. */
/**@}*/

#define ETH_MQ_RX_RSS_FLAG RTE_DEPRECATED(ETH_MQ_RX_RSS_FLAG) RTE_ETH_MQ_RX_RSS_FLAG
#define ETH_MQ_RX_DCB_FLAG RTE_DEPRECATED(ETH_MQ_RX_DCB_FLAG) RTE_ETH_MQ_RX_DCB_FLAG
#define ETH_MQ_RX_VMDQ_FLAG RTE_DEPRECATED(ETH_MQ_RX_VMDQ_FLAG) RTE_ETH_MQ_RX_VMDQ_FLAG

/**
 * A set of values to identify what method is to be used to route
 * packets to multiple queues.
 */
enum rte_eth_rx_mq_mode {
	/** None of DCB, RSS or VMDq mode */
	RTE_ETH_MQ_RX_NONE = 0,

	/** For Rx side, only RSS is on */
	RTE_ETH_MQ_RX_RSS = RTE_ETH_MQ_RX_RSS_FLAG,
	/** For Rx side, only DCB is on. */
	RTE_ETH_MQ_RX_DCB = RTE_ETH_MQ_RX_DCB_FLAG,
	/** Both DCB and RSS enabled */
	RTE_ETH_MQ_RX_DCB_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG,

	/** Only VMDq, no RSS nor DCB */
	RTE_ETH_MQ_RX_VMDQ_ONLY = RTE_ETH_MQ_RX_VMDQ_FLAG,
	/** RSS mode with VMDq */
	RTE_ETH_MQ_RX_VMDQ_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_VMDQ_FLAG,
	/** Use VMDq+DCB to route traffic to queues */
	RTE_ETH_MQ_RX_VMDQ_DCB = RTE_ETH_MQ_RX_VMDQ_FLAG | RTE_ETH_MQ_RX_DCB_FLAG,
	/** Enable both VMDq and DCB in VMDq */
	RTE_ETH_MQ_RX_VMDQ_DCB_RSS = RTE_ETH_MQ_RX_RSS_FLAG | RTE_ETH_MQ_RX_DCB_FLAG |
				     RTE_ETH_MQ_RX_VMDQ_FLAG,
};

#define ETH_MQ_RX_NONE RTE_DEPRECATED(ETH_MQ_RX_NONE) RTE_ETH_MQ_RX_NONE
#define ETH_MQ_RX_RSS RTE_DEPRECATED(ETH_MQ_RX_RSS) RTE_ETH_MQ_RX_RSS
#define ETH_MQ_RX_DCB RTE_DEPRECATED(ETH_MQ_RX_DCB) RTE_ETH_MQ_RX_DCB
#define ETH_MQ_RX_DCB_RSS RTE_DEPRECATED(ETH_MQ_RX_DCB_RSS) RTE_ETH_MQ_RX_DCB_RSS
#define ETH_MQ_RX_VMDQ_ONLY RTE_DEPRECATED(ETH_MQ_RX_VMDQ_ONLY) RTE_ETH_MQ_RX_VMDQ_ONLY
#define ETH_MQ_RX_VMDQ_RSS RTE_DEPRECATED(ETH_MQ_RX_VMDQ_RSS) RTE_ETH_MQ_RX_VMDQ_RSS
#define ETH_MQ_RX_VMDQ_DCB RTE_DEPRECATED(ETH_MQ_RX_VMDQ_DCB) RTE_ETH_MQ_RX_VMDQ_DCB
#define ETH_MQ_RX_VMDQ_DCB_RSS RTE_DEPRECATED(ETH_MQ_RX_VMDQ_DCB_RSS) RTE_ETH_MQ_RX_VMDQ_DCB_RSS

/**
 * A set of values to identify what method is to be used to transmit
 * packets using multi-TCs.
 */
enum rte_eth_tx_mq_mode {
	RTE_ETH_MQ_TX_NONE = 0,  /**< It is in neither DCB nor VT mode. */
	RTE_ETH_MQ_TX_DCB,       /**< For Tx side, only DCB is on. */
	RTE_ETH_MQ_TX_VMDQ_DCB,  /**< For Tx side, both DCB and VT is on. */
	RTE_ETH_MQ_TX_VMDQ_ONLY, /**< Only VT on, no DCB */
};

#define ETH_MQ_TX_NONE RTE_DEPRECATED(ETH_MQ_TX_NONE) RTE_ETH_MQ_TX_NONE
#define ETH_MQ_TX_DCB RTE_DEPRECATED(ETH_MQ_TX_DCB) RTE_ETH_MQ_TX_DCB
#define ETH_MQ_TX_VMDQ_DCB RTE_DEPRECATED(ETH_MQ_TX_VMDQ_DCB) RTE_ETH_MQ_TX_VMDQ_DCB
#define ETH_MQ_TX_VMDQ_ONLY RTE_DEPRECATED(ETH_MQ_TX_VMDQ_ONLY) RTE_ETH_MQ_TX_VMDQ_ONLY

/**
 * A structure used to configure the Rx features of an Ethernet port.
 */
struct rte_eth_rxmode {
	/** The multi-queue packet distribution mode to be used, e.g. RSS. */
	enum rte_eth_rx_mq_mode mq_mode;
	uint32_t mtu;  /**< Requested MTU. */
	/** Maximum allowed size of LRO aggregated packet. */
	uint32_t max_lro_pkt_size;
	uint16_t split_hdr_size;  /**< hdr buf size (header_split enabled). */
	/**
	 * Per-port Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags.
	 * Only offloads set on rx_offload_capa field on rte_eth_dev_info
	 * structure are allowed to be set.
	 */
	uint64_t offloads;

	uint64_t reserved_64s[2]; /**< Reserved for future fields */
	void *reserved_ptrs[2];   /**< Reserved for future fields */
};

/**
 * VLAN types to indicate if it is for single VLAN, inner VLAN or outer VLAN.
 * Note that single VLAN is treated the same as inner VLAN.
 */
enum rte_vlan_type {
	RTE_ETH_VLAN_TYPE_UNKNOWN = 0,
	RTE_ETH_VLAN_TYPE_INNER, /**< Inner VLAN. */
	RTE_ETH_VLAN_TYPE_OUTER, /**< Single VLAN, or outer VLAN. */
	RTE_ETH_VLAN_TYPE_MAX,
};

#define ETH_VLAN_TYPE_UNKNOWN RTE_DEPRECATED(ETH_VLAN_TYPE_UNKNOWN) RTE_ETH_VLAN_TYPE_UNKNOWN
#define ETH_VLAN_TYPE_INNER RTE_DEPRECATED(ETH_VLAN_TYPE_INNER) RTE_ETH_VLAN_TYPE_INNER
#define ETH_VLAN_TYPE_OUTER RTE_DEPRECATED(ETH_VLAN_TYPE_OUTER) RTE_ETH_VLAN_TYPE_OUTER
#define ETH_VLAN_TYPE_MAX RTE_DEPRECATED(ETH_VLAN_TYPE_MAX) RTE_ETH_VLAN_TYPE_MAX

/**
 * A structure used to describe a VLAN filter.
 * If the bit corresponding to a VID is set, such VID is on.
 */
struct rte_vlan_filter_conf {
	uint64_t ids[64];
};

/**
 * A structure used to configure the Receive Side Scaling (RSS) feature
 * of an Ethernet port.
 * If not NULL, the *rss_key* pointer of the *rss_conf* structure points
 * to an array holding the RSS key to use for hashing specific header
 * fields of received packets. The length of this array should be indicated
 * by *rss_key_len* below. Otherwise, a default random hash key is used by
 * the device driver.
 *
 * The *rss_key_len* field of the *rss_conf* structure indicates the length
 * in bytes of the array pointed to by *rss_key*. To be compatible, this length
 * will be checked in i40e only. Others assume 40 bytes to be used as before.
 *
 * The *rss_hf* field of the *rss_conf* structure indicates the different
 * types of IPv4/IPv6 packets to which the RSS hashing must be applied.
 * Supplying an *rss_hf* equal to zero disables the RSS feature.
 */
struct rte_eth_rss_conf {
	uint8_t *rss_key;    /**< If not NULL, 40-byte hash key. */
	uint8_t rss_key_len; /**< hash key length in bytes. */
	uint64_t rss_hf;     /**< Hash functions to apply - see below. */
};
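/*
 * Illustrative sketch: requesting RSS on IPv4/IPv6 TCP and UDP traffic with
 * the driver's default key. The structure is typically embedded in the
 * device configuration (rx_adv_conf.rss_conf) passed to
 * rte_eth_dev_configure(); which hash types are honoured depends on the
 * device capabilities reported in rte_eth_dev_info. The RTE_ETH_RSS_*
 * type flags are defined below in this file.
 *
 *   struct rte_eth_rss_conf rss_conf = {
 *       .rss_key = NULL,      // let the PMD use its default key
 *       .rss_key_len = 0,
 *       .rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_TCP | RTE_ETH_RSS_NONFRAG_IPV4_UDP |
 *                 RTE_ETH_RSS_NONFRAG_IPV6_TCP | RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 *   };
 */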
/*
 * A packet can be identified by hardware as different flow types. Different
 * NIC hardware may support different flow types.
 * Basically, the NIC hardware identifies the flow type as deep protocol as
 * possible, and exclusively. For example, if a packet is identified as
 * 'RTE_ETH_FLOW_NONFRAG_IPV4_TCP', it will not be any of other flow types,
 * though it is an actual IPV4 packet.
 */
#define RTE_ETH_FLOW_UNKNOWN             0
#define RTE_ETH_FLOW_RAW                 1
#define RTE_ETH_FLOW_IPV4                2
#define RTE_ETH_FLOW_FRAG_IPV4           3
#define RTE_ETH_FLOW_NONFRAG_IPV4_TCP    4
#define RTE_ETH_FLOW_NONFRAG_IPV4_UDP    5
#define RTE_ETH_FLOW_NONFRAG_IPV4_SCTP   6
#define RTE_ETH_FLOW_NONFRAG_IPV4_OTHER  7
#define RTE_ETH_FLOW_IPV6                8
#define RTE_ETH_FLOW_FRAG_IPV6           9
#define RTE_ETH_FLOW_NONFRAG_IPV6_TCP   10
#define RTE_ETH_FLOW_NONFRAG_IPV6_UDP   11
#define RTE_ETH_FLOW_NONFRAG_IPV6_SCTP  12
#define RTE_ETH_FLOW_NONFRAG_IPV6_OTHER 13
#define RTE_ETH_FLOW_L2_PAYLOAD         14
#define RTE_ETH_FLOW_IPV6_EX            15
#define RTE_ETH_FLOW_IPV6_TCP_EX        16
#define RTE_ETH_FLOW_IPV6_UDP_EX        17
/** Consider device port number as a flow differentiator */
#define RTE_ETH_FLOW_PORT               18
#define RTE_ETH_FLOW_VXLAN              19 /**< VXLAN protocol based flow */
#define RTE_ETH_FLOW_GENEVE             20 /**< GENEVE protocol based flow */
#define RTE_ETH_FLOW_NVGRE              21 /**< NVGRE protocol based flow */
#define RTE_ETH_FLOW_VXLAN_GPE          22 /**< VXLAN-GPE protocol based flow */
#define RTE_ETH_FLOW_GTPU               23 /**< GTPU protocol based flow */
#define RTE_ETH_FLOW_MAX                24

/*
 * Below macros are defined for RSS offload types, they can be used to
 * fill rte_eth_rss_conf.rss_hf or rte_flow_action_rss.types.
 */
#define RTE_ETH_RSS_IPV4               RTE_BIT64(2)
#define RTE_ETH_RSS_FRAG_IPV4          RTE_BIT64(3)
#define RTE_ETH_RSS_NONFRAG_IPV4_TCP   RTE_BIT64(4)
#define RTE_ETH_RSS_NONFRAG_IPV4_UDP   RTE_BIT64(5)
#define RTE_ETH_RSS_NONFRAG_IPV4_SCTP  RTE_BIT64(6)
#define RTE_ETH_RSS_NONFRAG_IPV4_OTHER RTE_BIT64(7)
#define RTE_ETH_RSS_IPV6               RTE_BIT64(8)
#define RTE_ETH_RSS_FRAG_IPV6          RTE_BIT64(9)
#define RTE_ETH_RSS_NONFRAG_IPV6_TCP   RTE_BIT64(10)
#define RTE_ETH_RSS_NONFRAG_IPV6_UDP   RTE_BIT64(11)
#define RTE_ETH_RSS_NONFRAG_IPV6_SCTP  RTE_BIT64(12)
#define RTE_ETH_RSS_NONFRAG_IPV6_OTHER RTE_BIT64(13)
#define RTE_ETH_RSS_L2_PAYLOAD         RTE_BIT64(14)
#define RTE_ETH_RSS_IPV6_EX            RTE_BIT64(15)
#define RTE_ETH_RSS_IPV6_TCP_EX        RTE_BIT64(16)
#define RTE_ETH_RSS_IPV6_UDP_EX        RTE_BIT64(17)
#define RTE_ETH_RSS_PORT               RTE_BIT64(18)
#define RTE_ETH_RSS_VXLAN              RTE_BIT64(19)
#define RTE_ETH_RSS_GENEVE             RTE_BIT64(20)
#define RTE_ETH_RSS_NVGRE              RTE_BIT64(21)
#define RTE_ETH_RSS_GTPU               RTE_BIT64(23)
#define RTE_ETH_RSS_ETH                RTE_BIT64(24)
#define RTE_ETH_RSS_S_VLAN             RTE_BIT64(25)
#define RTE_ETH_RSS_C_VLAN             RTE_BIT64(26)
#define RTE_ETH_RSS_ESP                RTE_BIT64(27)
#define RTE_ETH_RSS_AH                 RTE_BIT64(28)
#define RTE_ETH_RSS_L2TPV3             RTE_BIT64(29)
#define RTE_ETH_RSS_PFCP               RTE_BIT64(30)
#define RTE_ETH_RSS_PPPOE              RTE_BIT64(31)
#define RTE_ETH_RSS_ECPRI              RTE_BIT64(32)
#define RTE_ETH_RSS_MPLS               RTE_BIT64(33)
#define RTE_ETH_RSS_IPV4_CHKSUM        RTE_BIT64(34)

#define ETH_RSS_IPV4 RTE_DEPRECATED(ETH_RSS_IPV4) RTE_ETH_RSS_IPV4
#define ETH_RSS_FRAG_IPV4 RTE_DEPRECATED(ETH_RSS_FRAG_IPV4) RTE_ETH_RSS_FRAG_IPV4
#define ETH_RSS_NONFRAG_IPV4_TCP RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV4_TCP) RTE_ETH_RSS_NONFRAG_IPV4_TCP
#define ETH_RSS_NONFRAG_IPV4_UDP RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV4_UDP) RTE_ETH_RSS_NONFRAG_IPV4_UDP
#define ETH_RSS_NONFRAG_IPV4_SCTP RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV4_SCTP) RTE_ETH_RSS_NONFRAG_IPV4_SCTP
#define ETH_RSS_NONFRAG_IPV4_OTHER RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV4_OTHER) RTE_ETH_RSS_NONFRAG_IPV4_OTHER
#define ETH_RSS_IPV6 RTE_DEPRECATED(ETH_RSS_IPV6) RTE_ETH_RSS_IPV6
#define ETH_RSS_FRAG_IPV6 RTE_DEPRECATED(ETH_RSS_FRAG_IPV6) RTE_ETH_RSS_FRAG_IPV6
#define ETH_RSS_NONFRAG_IPV6_TCP RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV6_TCP) RTE_ETH_RSS_NONFRAG_IPV6_TCP
#define ETH_RSS_NONFRAG_IPV6_UDP RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV6_UDP) RTE_ETH_RSS_NONFRAG_IPV6_UDP
#define ETH_RSS_NONFRAG_IPV6_SCTP RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV6_SCTP) RTE_ETH_RSS_NONFRAG_IPV6_SCTP
#define ETH_RSS_NONFRAG_IPV6_OTHER RTE_DEPRECATED(ETH_RSS_NONFRAG_IPV6_OTHER) RTE_ETH_RSS_NONFRAG_IPV6_OTHER
#define ETH_RSS_L2_PAYLOAD RTE_DEPRECATED(ETH_RSS_L2_PAYLOAD) RTE_ETH_RSS_L2_PAYLOAD
#define ETH_RSS_IPV6_EX RTE_DEPRECATED(ETH_RSS_IPV6_EX) RTE_ETH_RSS_IPV6_EX
#define ETH_RSS_IPV6_TCP_EX RTE_DEPRECATED(ETH_RSS_IPV6_TCP_EX) RTE_ETH_RSS_IPV6_TCP_EX
#define ETH_RSS_IPV6_UDP_EX RTE_DEPRECATED(ETH_RSS_IPV6_UDP_EX) RTE_ETH_RSS_IPV6_UDP_EX
#define ETH_RSS_PORT RTE_DEPRECATED(ETH_RSS_PORT) RTE_ETH_RSS_PORT
#define ETH_RSS_VXLAN RTE_DEPRECATED(ETH_RSS_VXLAN) RTE_ETH_RSS_VXLAN
#define ETH_RSS_GENEVE RTE_DEPRECATED(ETH_RSS_GENEVE) RTE_ETH_RSS_GENEVE
#define ETH_RSS_NVGRE RTE_DEPRECATED(ETH_RSS_NVGRE) RTE_ETH_RSS_NVGRE
#define ETH_RSS_GTPU RTE_DEPRECATED(ETH_RSS_GTPU) RTE_ETH_RSS_GTPU
#define ETH_RSS_ETH RTE_DEPRECATED(ETH_RSS_ETH) RTE_ETH_RSS_ETH
#define ETH_RSS_S_VLAN RTE_DEPRECATED(ETH_RSS_S_VLAN) RTE_ETH_RSS_S_VLAN
#define ETH_RSS_C_VLAN RTE_DEPRECATED(ETH_RSS_C_VLAN) RTE_ETH_RSS_C_VLAN
#define ETH_RSS_ESP RTE_DEPRECATED(ETH_RSS_ESP) RTE_ETH_RSS_ESP
#define ETH_RSS_AH RTE_DEPRECATED(ETH_RSS_AH) RTE_ETH_RSS_AH
#define ETH_RSS_L2TPV3 RTE_DEPRECATED(ETH_RSS_L2TPV3) RTE_ETH_RSS_L2TPV3
#define ETH_RSS_PFCP RTE_DEPRECATED(ETH_RSS_PFCP) RTE_ETH_RSS_PFCP
#define ETH_RSS_PPPOE RTE_DEPRECATED(ETH_RSS_PPPOE) RTE_ETH_RSS_PPPOE
#define ETH_RSS_ECPRI RTE_DEPRECATED(ETH_RSS_ECPRI) RTE_ETH_RSS_ECPRI
#define ETH_RSS_MPLS RTE_DEPRECATED(ETH_RSS_MPLS) RTE_ETH_RSS_MPLS
#define ETH_RSS_IPV4_CHKSUM RTE_DEPRECATED(ETH_RSS_IPV4_CHKSUM) RTE_ETH_RSS_IPV4_CHKSUM

/**
 * RTE_ETH_RSS_L4_CHKSUM works on the checksum field of any L4 header.
 * Like RTE_ETH_RSS_PORT, it does not specify a particular L4 protocol;
 * it is defined to replace the specific L4 (TCP/UDP/SCTP) checksum types
 * when constructing the set of RSS offload bits.
 *
 * Due to the above reason, some old APIs (and configuration) don't support
 * RTE_ETH_RSS_L4_CHKSUM. The rte_flow RSS API supports it.
 *
 * For the case that checksum is not used in an UDP header,
 * it takes the reserved value 0 as input for the hash function.
 */
#define RTE_ETH_RSS_L4_CHKSUM RTE_BIT64(35)
#define ETH_RSS_L4_CHKSUM RTE_DEPRECATED(ETH_RSS_L4_CHKSUM) RTE_ETH_RSS_L4_CHKSUM

/*
 * We use the following macros to combine with above RTE_ETH_RSS_* for
 * more specific input set selection. These bits are defined starting
 * from the high end of the 64 bits.
 * Note: If we use above RTE_ETH_RSS_* without SRC/DST_ONLY, it represents
 * both SRC and DST are taken into account. If SRC_ONLY and DST_ONLY of
 * the same level are used simultaneously, it is the same case as none of
 * them are added.
 */
#define RTE_ETH_RSS_L3_SRC_ONLY RTE_BIT64(63)
#define RTE_ETH_RSS_L3_DST_ONLY RTE_BIT64(62)
#define RTE_ETH_RSS_L4_SRC_ONLY RTE_BIT64(61)
#define RTE_ETH_RSS_L4_DST_ONLY RTE_BIT64(60)
#define RTE_ETH_RSS_L2_SRC_ONLY RTE_BIT64(59)
#define RTE_ETH_RSS_L2_DST_ONLY RTE_BIT64(58)

#define ETH_RSS_L3_SRC_ONLY RTE_DEPRECATED(ETH_RSS_L3_SRC_ONLY) RTE_ETH_RSS_L3_SRC_ONLY
#define ETH_RSS_L3_DST_ONLY RTE_DEPRECATED(ETH_RSS_L3_DST_ONLY) RTE_ETH_RSS_L3_DST_ONLY
#define ETH_RSS_L4_SRC_ONLY RTE_DEPRECATED(ETH_RSS_L4_SRC_ONLY) RTE_ETH_RSS_L4_SRC_ONLY
#define ETH_RSS_L4_DST_ONLY RTE_DEPRECATED(ETH_RSS_L4_DST_ONLY) RTE_ETH_RSS_L4_DST_ONLY
#define ETH_RSS_L2_SRC_ONLY RTE_DEPRECATED(ETH_RSS_L2_SRC_ONLY) RTE_ETH_RSS_L2_SRC_ONLY
#define ETH_RSS_L2_DST_ONLY RTE_DEPRECATED(ETH_RSS_L2_DST_ONLY) RTE_ETH_RSS_L2_DST_ONLY

/*
 * Only select IPV6 address prefix as RSS input set according to
 * https://tools.ietf.org/html/rfc6052
 * Must be combined with RTE_ETH_RSS_IPV6, RTE_ETH_RSS_NONFRAG_IPV6_UDP,
 * RTE_ETH_RSS_NONFRAG_IPV6_TCP, RTE_ETH_RSS_NONFRAG_IPV6_SCTP.
 */
#define RTE_ETH_RSS_L3_PRE32 RTE_BIT64(57)
#define RTE_ETH_RSS_L3_PRE40 RTE_BIT64(56)
#define RTE_ETH_RSS_L3_PRE48 RTE_BIT64(55)
#define RTE_ETH_RSS_L3_PRE56 RTE_BIT64(54)
#define RTE_ETH_RSS_L3_PRE64 RTE_BIT64(53)
#define RTE_ETH_RSS_L3_PRE96 RTE_BIT64(52)

/*
 * Use the following macros to combine with the above layers
 * to choose inner and outer layers or both for RSS computation.
 * Bits 50 and 51 are reserved for this.
 */

/**
 * level 0, requests the default behavior.
 * Depending on the packet type, it can mean outermost, innermost,
 * anything in between or even no RSS.
 * It basically stands for the innermost encapsulation level RSS
 * can be performed on according to PMD and device capabilities.
 */
#define RTE_ETH_RSS_LEVEL_PMD_DEFAULT (UINT64_C(0) << 50)
#define ETH_RSS_LEVEL_PMD_DEFAULT RTE_DEPRECATED(ETH_RSS_LEVEL_PMD_DEFAULT) RTE_ETH_RSS_LEVEL_PMD_DEFAULT

/**
 * level 1, requests RSS to be performed on the outermost packet
 * encapsulation level.
 */
#define RTE_ETH_RSS_LEVEL_OUTERMOST (UINT64_C(1) << 50)
#define ETH_RSS_LEVEL_OUTERMOST RTE_DEPRECATED(ETH_RSS_LEVEL_OUTERMOST) RTE_ETH_RSS_LEVEL_OUTERMOST

/**
 * level 2, requests RSS to be performed on the specified inner packet
 * encapsulation level, from outermost to innermost (lower to higher values).
 */
#define RTE_ETH_RSS_LEVEL_INNERMOST (UINT64_C(2) << 50)
#define RTE_ETH_RSS_LEVEL_MASK      (UINT64_C(3) << 50)

#define ETH_RSS_LEVEL_INNERMOST RTE_DEPRECATED(ETH_RSS_LEVEL_INNERMOST) RTE_ETH_RSS_LEVEL_INNERMOST
#define ETH_RSS_LEVEL_MASK RTE_DEPRECATED(ETH_RSS_LEVEL_MASK) RTE_ETH_RSS_LEVEL_MASK

#define RTE_ETH_RSS_LEVEL(rss_hf) ((rss_hf & RTE_ETH_RSS_LEVEL_MASK) >> 50)
#define ETH_RSS_LEVEL(rss_hf) RTE_DEPRECATED(ETH_RSS_LEVEL(rss_hf)) RTE_ETH_RSS_LEVEL(rss_hf)
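/*
 * Illustrative sketch: combining a hash type with the SRC_ONLY and level
 * modifiers defined above. The value below requests hashing on the source
 * L4 port only of the outermost IPv4/UDP headers; RTE_ETH_RSS_LEVEL()
 * extracts the encapsulation level back from the bitmap.
 *
 *   uint64_t rss_hf = RTE_ETH_RSS_NONFRAG_IPV4_UDP |
 *                     RTE_ETH_RSS_L4_SRC_ONLY |
 *                     RTE_ETH_RSS_LEVEL_OUTERMOST;
 *   uint64_t level = RTE_ETH_RSS_LEVEL(rss_hf);   // == 1 (outermost)
 */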
/**
 * For input set change of hash filter, if SRC_ONLY and DST_ONLY of
 * the same level are used simultaneously, it is the same case as
 * none of them are added.
 *
 * @param rss_hf
 *   RSS types with SRC/DST_ONLY.
 * @return
 *   RSS types.
 */
static inline uint64_t
rte_eth_rss_hf_refine(uint64_t rss_hf)
{
	if ((rss_hf & RTE_ETH_RSS_L3_SRC_ONLY) && (rss_hf & RTE_ETH_RSS_L3_DST_ONLY))
		rss_hf &= ~(RTE_ETH_RSS_L3_SRC_ONLY | RTE_ETH_RSS_L3_DST_ONLY);

	if ((rss_hf & RTE_ETH_RSS_L4_SRC_ONLY) && (rss_hf & RTE_ETH_RSS_L4_DST_ONLY))
		rss_hf &= ~(RTE_ETH_RSS_L4_SRC_ONLY | RTE_ETH_RSS_L4_DST_ONLY);

	return rss_hf;
}

#define RTE_ETH_RSS_IPV6_PRE32 ( \
	RTE_ETH_RSS_IPV6 | \
	RTE_ETH_RSS_L3_PRE32)
#define ETH_RSS_IPV6_PRE32 RTE_DEPRECATED(ETH_RSS_IPV6_PRE32) RTE_ETH_RSS_IPV6_PRE32

#define RTE_ETH_RSS_IPV6_PRE40 ( \
	RTE_ETH_RSS_IPV6 | \
	RTE_ETH_RSS_L3_PRE40)
#define ETH_RSS_IPV6_PRE40 RTE_DEPRECATED(ETH_RSS_IPV6_PRE40) RTE_ETH_RSS_IPV6_PRE40

#define RTE_ETH_RSS_IPV6_PRE48 ( \
	RTE_ETH_RSS_IPV6 | \
	RTE_ETH_RSS_L3_PRE48)
#define ETH_RSS_IPV6_PRE48 RTE_DEPRECATED(ETH_RSS_IPV6_PRE48) RTE_ETH_RSS_IPV6_PRE48

#define RTE_ETH_RSS_IPV6_PRE56 ( \
	RTE_ETH_RSS_IPV6 | \
	RTE_ETH_RSS_L3_PRE56)
#define ETH_RSS_IPV6_PRE56 RTE_DEPRECATED(ETH_RSS_IPV6_PRE56) RTE_ETH_RSS_IPV6_PRE56

#define RTE_ETH_RSS_IPV6_PRE64 ( \
	RTE_ETH_RSS_IPV6 | \
	RTE_ETH_RSS_L3_PRE64)
#define ETH_RSS_IPV6_PRE64 RTE_DEPRECATED(ETH_RSS_IPV6_PRE64) RTE_ETH_RSS_IPV6_PRE64

#define RTE_ETH_RSS_IPV6_PRE96 ( \
	RTE_ETH_RSS_IPV6 | \
	RTE_ETH_RSS_L3_PRE96)
#define ETH_RSS_IPV6_PRE96 RTE_DEPRECATED(ETH_RSS_IPV6_PRE96) RTE_ETH_RSS_IPV6_PRE96

#define RTE_ETH_RSS_IPV6_PRE32_UDP ( \
	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
	RTE_ETH_RSS_L3_PRE32)
#define ETH_RSS_IPV6_PRE32_UDP RTE_DEPRECATED(ETH_RSS_IPV6_PRE32_UDP) RTE_ETH_RSS_IPV6_PRE32_UDP

#define RTE_ETH_RSS_IPV6_PRE40_UDP ( \
	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
	RTE_ETH_RSS_L3_PRE40)
#define ETH_RSS_IPV6_PRE40_UDP RTE_DEPRECATED(ETH_RSS_IPV6_PRE40_UDP) RTE_ETH_RSS_IPV6_PRE40_UDP

#define RTE_ETH_RSS_IPV6_PRE48_UDP ( \
	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
	RTE_ETH_RSS_L3_PRE48)
#define ETH_RSS_IPV6_PRE48_UDP RTE_DEPRECATED(ETH_RSS_IPV6_PRE48_UDP) RTE_ETH_RSS_IPV6_PRE48_UDP

#define RTE_ETH_RSS_IPV6_PRE56_UDP ( \
	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
	RTE_ETH_RSS_L3_PRE56)
#define ETH_RSS_IPV6_PRE56_UDP RTE_DEPRECATED(ETH_RSS_IPV6_PRE56_UDP) RTE_ETH_RSS_IPV6_PRE56_UDP

#define RTE_ETH_RSS_IPV6_PRE64_UDP ( \
	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
	RTE_ETH_RSS_L3_PRE64)
#define ETH_RSS_IPV6_PRE64_UDP RTE_DEPRECATED(ETH_RSS_IPV6_PRE64_UDP) RTE_ETH_RSS_IPV6_PRE64_UDP

#define RTE_ETH_RSS_IPV6_PRE96_UDP ( \
	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
	RTE_ETH_RSS_L3_PRE96)
#define ETH_RSS_IPV6_PRE96_UDP RTE_DEPRECATED(ETH_RSS_IPV6_PRE96_UDP) RTE_ETH_RSS_IPV6_PRE96_UDP

#define RTE_ETH_RSS_IPV6_PRE32_TCP ( \
	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
	RTE_ETH_RSS_L3_PRE32)
#define ETH_RSS_IPV6_PRE32_TCP RTE_DEPRECATED(ETH_RSS_IPV6_PRE32_TCP) RTE_ETH_RSS_IPV6_PRE32_TCP

#define RTE_ETH_RSS_IPV6_PRE40_TCP ( \
	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
	RTE_ETH_RSS_L3_PRE40)
#define ETH_RSS_IPV6_PRE40_TCP RTE_DEPRECATED(ETH_RSS_IPV6_PRE40_TCP) RTE_ETH_RSS_IPV6_PRE40_TCP

#define RTE_ETH_RSS_IPV6_PRE48_TCP ( \
	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
	RTE_ETH_RSS_L3_PRE48)
#define ETH_RSS_IPV6_PRE48_TCP RTE_DEPRECATED(ETH_RSS_IPV6_PRE48_TCP) RTE_ETH_RSS_IPV6_PRE48_TCP

#define RTE_ETH_RSS_IPV6_PRE56_TCP ( \
	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
	RTE_ETH_RSS_L3_PRE56)
#define ETH_RSS_IPV6_PRE56_TCP RTE_DEPRECATED(ETH_RSS_IPV6_PRE56_TCP) RTE_ETH_RSS_IPV6_PRE56_TCP

#define RTE_ETH_RSS_IPV6_PRE64_TCP ( \
	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
	RTE_ETH_RSS_L3_PRE64)
#define ETH_RSS_IPV6_PRE64_TCP RTE_DEPRECATED(ETH_RSS_IPV6_PRE64_TCP) RTE_ETH_RSS_IPV6_PRE64_TCP

#define RTE_ETH_RSS_IPV6_PRE96_TCP ( \
	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
	RTE_ETH_RSS_L3_PRE96)
#define ETH_RSS_IPV6_PRE96_TCP RTE_DEPRECATED(ETH_RSS_IPV6_PRE96_TCP) RTE_ETH_RSS_IPV6_PRE96_TCP

#define RTE_ETH_RSS_IPV6_PRE32_SCTP ( \
	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
	RTE_ETH_RSS_L3_PRE32)
#define ETH_RSS_IPV6_PRE32_SCTP RTE_DEPRECATED(ETH_RSS_IPV6_PRE32_SCTP) RTE_ETH_RSS_IPV6_PRE32_SCTP

#define RTE_ETH_RSS_IPV6_PRE40_SCTP ( \
	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
	RTE_ETH_RSS_L3_PRE40)
#define ETH_RSS_IPV6_PRE40_SCTP RTE_DEPRECATED(ETH_RSS_IPV6_PRE40_SCTP) RTE_ETH_RSS_IPV6_PRE40_SCTP

#define RTE_ETH_RSS_IPV6_PRE48_SCTP ( \
	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
	RTE_ETH_RSS_L3_PRE48)
#define ETH_RSS_IPV6_PRE48_SCTP RTE_DEPRECATED(ETH_RSS_IPV6_PRE48_SCTP) RTE_ETH_RSS_IPV6_PRE48_SCTP

#define RTE_ETH_RSS_IPV6_PRE56_SCTP ( \
	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
	RTE_ETH_RSS_L3_PRE56)
#define ETH_RSS_IPV6_PRE56_SCTP RTE_DEPRECATED(ETH_RSS_IPV6_PRE56_SCTP) RTE_ETH_RSS_IPV6_PRE56_SCTP

#define RTE_ETH_RSS_IPV6_PRE64_SCTP ( \
	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
	RTE_ETH_RSS_L3_PRE64)
#define ETH_RSS_IPV6_PRE64_SCTP RTE_DEPRECATED(ETH_RSS_IPV6_PRE64_SCTP) RTE_ETH_RSS_IPV6_PRE64_SCTP

#define RTE_ETH_RSS_IPV6_PRE96_SCTP ( \
	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
	RTE_ETH_RSS_L3_PRE96)
#define ETH_RSS_IPV6_PRE96_SCTP RTE_DEPRECATED(ETH_RSS_IPV6_PRE96_SCTP) RTE_ETH_RSS_IPV6_PRE96_SCTP

#define RTE_ETH_RSS_IP ( \
	RTE_ETH_RSS_IPV4 | \
	RTE_ETH_RSS_FRAG_IPV4 | \
	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
	RTE_ETH_RSS_IPV6 | \
	RTE_ETH_RSS_FRAG_IPV6 | \
	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
	RTE_ETH_RSS_IPV6_EX)
#define ETH_RSS_IP RTE_DEPRECATED(ETH_RSS_IP) RTE_ETH_RSS_IP

#define RTE_ETH_RSS_UDP ( \
	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
	RTE_ETH_RSS_IPV6_UDP_EX)
#define ETH_RSS_UDP RTE_DEPRECATED(ETH_RSS_UDP) RTE_ETH_RSS_UDP

#define RTE_ETH_RSS_TCP ( \
	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
	RTE_ETH_RSS_IPV6_TCP_EX)
#define ETH_RSS_TCP RTE_DEPRECATED(ETH_RSS_TCP) RTE_ETH_RSS_TCP

#define RTE_ETH_RSS_SCTP ( \
	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
	RTE_ETH_RSS_NONFRAG_IPV6_SCTP)
#define ETH_RSS_SCTP RTE_DEPRECATED(ETH_RSS_SCTP) RTE_ETH_RSS_SCTP

#define RTE_ETH_RSS_TUNNEL ( \
	RTE_ETH_RSS_VXLAN | \
	RTE_ETH_RSS_GENEVE | \
	RTE_ETH_RSS_NVGRE)
#define ETH_RSS_TUNNEL RTE_DEPRECATED(ETH_RSS_TUNNEL) RTE_ETH_RSS_TUNNEL

#define RTE_ETH_RSS_VLAN ( \
	RTE_ETH_RSS_S_VLAN | \
	RTE_ETH_RSS_C_VLAN)
#define ETH_RSS_VLAN RTE_DEPRECATED(ETH_RSS_VLAN) RTE_ETH_RSS_VLAN

/** Mask of valid RSS hash protocols */
#define RTE_ETH_RSS_PROTO_MASK ( \
	RTE_ETH_RSS_IPV4 | \
	RTE_ETH_RSS_FRAG_IPV4 | \
	RTE_ETH_RSS_NONFRAG_IPV4_TCP | \
	RTE_ETH_RSS_NONFRAG_IPV4_UDP | \
	RTE_ETH_RSS_NONFRAG_IPV4_SCTP | \
	RTE_ETH_RSS_NONFRAG_IPV4_OTHER | \
	RTE_ETH_RSS_IPV6 | \
	RTE_ETH_RSS_FRAG_IPV6 | \
	RTE_ETH_RSS_NONFRAG_IPV6_TCP | \
	RTE_ETH_RSS_NONFRAG_IPV6_UDP | \
	RTE_ETH_RSS_NONFRAG_IPV6_SCTP | \
	RTE_ETH_RSS_NONFRAG_IPV6_OTHER | \
	RTE_ETH_RSS_L2_PAYLOAD | \
	RTE_ETH_RSS_IPV6_EX | \
	RTE_ETH_RSS_IPV6_TCP_EX | \
	RTE_ETH_RSS_IPV6_UDP_EX | \
	RTE_ETH_RSS_PORT | \
	RTE_ETH_RSS_VXLAN | \
	RTE_ETH_RSS_GENEVE | \
	RTE_ETH_RSS_NVGRE | \
	RTE_ETH_RSS_MPLS)
#define ETH_RSS_PROTO_MASK RTE_DEPRECATED(ETH_RSS_PROTO_MASK) RTE_ETH_RSS_PROTO_MASK

/*
 * Definitions used for redirection table entry size.
 * Some RSS RETA sizes may not be supported by some drivers, check the
 * documentation or the description of relevant functions for more details.
 */
#define RTE_ETH_RSS_RETA_SIZE_64  64
#define RTE_ETH_RSS_RETA_SIZE_128 128
#define RTE_ETH_RSS_RETA_SIZE_256 256
#define RTE_ETH_RSS_RETA_SIZE_512 512
#define RTE_ETH_RETA_GROUP_SIZE   64

#define ETH_RSS_RETA_SIZE_64 RTE_DEPRECATED(ETH_RSS_RETA_SIZE_64) RTE_ETH_RSS_RETA_SIZE_64
#define ETH_RSS_RETA_SIZE_128 RTE_DEPRECATED(ETH_RSS_RETA_SIZE_128) RTE_ETH_RSS_RETA_SIZE_128
#define ETH_RSS_RETA_SIZE_256 RTE_DEPRECATED(ETH_RSS_RETA_SIZE_256) RTE_ETH_RSS_RETA_SIZE_256
#define ETH_RSS_RETA_SIZE_512 RTE_DEPRECATED(ETH_RSS_RETA_SIZE_512) RTE_ETH_RSS_RETA_SIZE_512
#define RTE_RETA_GROUP_SIZE RTE_DEPRECATED(RTE_RETA_GROUP_SIZE) RTE_ETH_RETA_GROUP_SIZE

/**@{@name VMDq and DCB maximums */
#define RTE_ETH_VMDQ_MAX_VLAN_FILTERS   64  /**< Maximum nb. of VMDq VLAN filters. */
#define RTE_ETH_DCB_NUM_USER_PRIORITIES 8   /**< Maximum nb. of DCB priorities. */
#define RTE_ETH_VMDQ_DCB_NUM_QUEUES     128 /**< Maximum nb. of VMDq DCB queues. */
#define RTE_ETH_DCB_NUM_QUEUES          128 /**< Maximum nb. of DCB queues. */
/**@}*/

#define ETH_VMDQ_MAX_VLAN_FILTERS RTE_DEPRECATED(ETH_VMDQ_MAX_VLAN_FILTERS) RTE_ETH_VMDQ_MAX_VLAN_FILTERS
#define ETH_DCB_NUM_USER_PRIORITIES RTE_DEPRECATED(ETH_DCB_NUM_USER_PRIORITIES) RTE_ETH_DCB_NUM_USER_PRIORITIES
#define ETH_VMDQ_DCB_NUM_QUEUES RTE_DEPRECATED(ETH_VMDQ_DCB_NUM_QUEUES) RTE_ETH_VMDQ_DCB_NUM_QUEUES
#define ETH_DCB_NUM_QUEUES RTE_DEPRECATED(ETH_DCB_NUM_QUEUES) RTE_ETH_DCB_NUM_QUEUES

/**@{@name DCB capabilities */
#define RTE_ETH_DCB_PG_SUPPORT  RTE_BIT32(0) /**< Priority Group(ETS) support. */
#define RTE_ETH_DCB_PFC_SUPPORT RTE_BIT32(1) /**< Priority Flow Control support. */
/**@}*/

#define ETH_DCB_PG_SUPPORT RTE_DEPRECATED(ETH_DCB_PG_SUPPORT) RTE_ETH_DCB_PG_SUPPORT
#define ETH_DCB_PFC_SUPPORT RTE_DEPRECATED(ETH_DCB_PFC_SUPPORT) RTE_ETH_DCB_PFC_SUPPORT

/**@{@name VLAN offload bits */
#define RTE_ETH_VLAN_STRIP_OFFLOAD  0x0001 /**< VLAN Strip On/Off */
#define RTE_ETH_VLAN_FILTER_OFFLOAD 0x0002 /**< VLAN Filter On/Off */
#define RTE_ETH_VLAN_EXTEND_OFFLOAD 0x0004 /**< VLAN Extend On/Off */
#define RTE_ETH_QINQ_STRIP_OFFLOAD  0x0008 /**< QINQ Strip On/Off */

#define ETH_VLAN_STRIP_OFFLOAD RTE_DEPRECATED(ETH_VLAN_STRIP_OFFLOAD) RTE_ETH_VLAN_STRIP_OFFLOAD
#define ETH_VLAN_FILTER_OFFLOAD RTE_DEPRECATED(ETH_VLAN_FILTER_OFFLOAD) RTE_ETH_VLAN_FILTER_OFFLOAD
#define ETH_VLAN_EXTEND_OFFLOAD RTE_DEPRECATED(ETH_VLAN_EXTEND_OFFLOAD) RTE_ETH_VLAN_EXTEND_OFFLOAD
#define ETH_QINQ_STRIP_OFFLOAD RTE_DEPRECATED(ETH_QINQ_STRIP_OFFLOAD) RTE_ETH_QINQ_STRIP_OFFLOAD

#define RTE_ETH_VLAN_STRIP_MASK  0x0001 /**< VLAN Strip setting mask */
#define RTE_ETH_VLAN_FILTER_MASK 0x0002 /**< VLAN Filter setting mask */
#define RTE_ETH_VLAN_EXTEND_MASK 0x0004 /**< VLAN Extend setting mask */
#define RTE_ETH_QINQ_STRIP_MASK  0x0008 /**< QINQ Strip setting mask */
#define RTE_ETH_VLAN_ID_MAX      0x0FFF /**< VLAN ID is in lower 12 bits */
/**@}*/

#define ETH_VLAN_STRIP_MASK RTE_DEPRECATED(ETH_VLAN_STRIP_MASK) RTE_ETH_VLAN_STRIP_MASK
#define ETH_VLAN_FILTER_MASK RTE_DEPRECATED(ETH_VLAN_FILTER_MASK) RTE_ETH_VLAN_FILTER_MASK
#define ETH_VLAN_EXTEND_MASK RTE_DEPRECATED(ETH_VLAN_EXTEND_MASK) RTE_ETH_VLAN_EXTEND_MASK
#define ETH_QINQ_STRIP_MASK RTE_DEPRECATED(ETH_QINQ_STRIP_MASK) RTE_ETH_QINQ_STRIP_MASK
#define ETH_VLAN_ID_MAX RTE_DEPRECATED(ETH_VLAN_ID_MAX) RTE_ETH_VLAN_ID_MAX

/* Definitions used for receive MAC address */
#define RTE_ETH_NUM_RECEIVE_MAC_ADDR 128 /**< Maximum nb. of receive mac addr. */
#define ETH_NUM_RECEIVE_MAC_ADDR RTE_DEPRECATED(ETH_NUM_RECEIVE_MAC_ADDR) RTE_ETH_NUM_RECEIVE_MAC_ADDR

/* Definitions used for unicast hash */
#define RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY 128 /**< Maximum nb. of UC hash array. */
#define ETH_VMDQ_NUM_UC_HASH_ARRAY RTE_DEPRECATED(ETH_VMDQ_NUM_UC_HASH_ARRAY) RTE_ETH_VMDQ_NUM_UC_HASH_ARRAY

/**@{@name VMDq Rx mode
 * @see rte_eth_vmdq_rx_conf.rx_mode
 */
/** Accept untagged packets. */
#define RTE_ETH_VMDQ_ACCEPT_UNTAG     RTE_BIT32(0)
/** Accept packets in multicast table. */
#define RTE_ETH_VMDQ_ACCEPT_HASH_MC   RTE_BIT32(1)
/** Accept packets in unicast table. */
#define RTE_ETH_VMDQ_ACCEPT_HASH_UC   RTE_BIT32(2)
/** Accept broadcast packets. */
#define RTE_ETH_VMDQ_ACCEPT_BROADCAST RTE_BIT32(3)
/** Multicast promiscuous. */
#define RTE_ETH_VMDQ_ACCEPT_MULTICAST RTE_BIT32(4)
/**@}*/

#define ETH_VMDQ_ACCEPT_UNTAG RTE_DEPRECATED(ETH_VMDQ_ACCEPT_UNTAG) RTE_ETH_VMDQ_ACCEPT_UNTAG
#define ETH_VMDQ_ACCEPT_HASH_MC RTE_DEPRECATED(ETH_VMDQ_ACCEPT_HASH_MC) RTE_ETH_VMDQ_ACCEPT_HASH_MC
#define ETH_VMDQ_ACCEPT_HASH_UC RTE_DEPRECATED(ETH_VMDQ_ACCEPT_HASH_UC) RTE_ETH_VMDQ_ACCEPT_HASH_UC
#define ETH_VMDQ_ACCEPT_BROADCAST RTE_DEPRECATED(ETH_VMDQ_ACCEPT_BROADCAST) RTE_ETH_VMDQ_ACCEPT_BROADCAST
#define ETH_VMDQ_ACCEPT_MULTICAST RTE_DEPRECATED(ETH_VMDQ_ACCEPT_MULTICAST) RTE_ETH_VMDQ_ACCEPT_MULTICAST

/**
 * A structure used to configure 64 entries of Redirection Table of the
 * Receive Side Scaling (RSS) feature of an Ethernet port. To configure
 * more than 64 entries supported by hardware, an array of this structure
 * is needed.
 */
struct rte_eth_rss_reta_entry64 {
	/** Mask bits indicate which entries need to be updated/queried. */
	uint64_t mask;
	/** Group of 64 redirection table entries. */
	uint16_t reta[RTE_ETH_RETA_GROUP_SIZE];
};
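/*
 * Illustrative sketch: spreading traffic over the first 4 Rx queues on a
 * device with a 128-entry redirection table (two 64-entry groups). The
 * update call rte_eth_dev_rss_reta_update() is declared later in this
 * file; the table size used must match rte_eth_dev_info.reta_size.
 *
 *   struct rte_eth_rss_reta_entry64 reta_conf[2];
 *   uint16_t i;
 *
 *   memset(reta_conf, 0, sizeof(reta_conf));
 *   for (i = 0; i < RTE_ETH_RSS_RETA_SIZE_128; i++) {
 *       uint16_t group = i / RTE_ETH_RETA_GROUP_SIZE;
 *       uint16_t idx = i % RTE_ETH_RETA_GROUP_SIZE;
 *
 *       reta_conf[group].mask |= UINT64_C(1) << idx;
 *       reta_conf[group].reta[idx] = i % 4;   // target queue 0..3
 *   }
 *   rte_eth_dev_rss_reta_update(port_id, reta_conf, RTE_ETH_RSS_RETA_SIZE_128);
 */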
/**
 * This enum indicates the possible number of traffic classes
 * in DCB configurations
 */
enum rte_eth_nb_tcs {
	RTE_ETH_4_TCS = 4, /**< 4 TCs with DCB. */
	RTE_ETH_8_TCS = 8  /**< 8 TCs with DCB. */
};
#define ETH_4_TCS RTE_DEPRECATED(ETH_4_TCS) RTE_ETH_4_TCS
#define ETH_8_TCS RTE_DEPRECATED(ETH_8_TCS) RTE_ETH_8_TCS

/**
 * This enum indicates the possible number of queue pools
 * in VMDq configurations.
 */
enum rte_eth_nb_pools {
	RTE_ETH_8_POOLS = 8,   /**< 8 VMDq pools. */
	RTE_ETH_16_POOLS = 16, /**< 16 VMDq pools. */
	RTE_ETH_32_POOLS = 32, /**< 32 VMDq pools. */
	RTE_ETH_64_POOLS = 64  /**< 64 VMDq pools. */
};
#define ETH_8_POOLS RTE_DEPRECATED(ETH_8_POOLS) RTE_ETH_8_POOLS
#define ETH_16_POOLS RTE_DEPRECATED(ETH_16_POOLS) RTE_ETH_16_POOLS
#define ETH_32_POOLS RTE_DEPRECATED(ETH_32_POOLS) RTE_ETH_32_POOLS
#define ETH_64_POOLS RTE_DEPRECATED(ETH_64_POOLS) RTE_ETH_64_POOLS

/* This structure may be extended in future. */
struct rte_eth_dcb_rx_conf {
	enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs */
	/** Traffic class each UP mapped to. */
	uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
};

struct rte_eth_vmdq_dcb_tx_conf {
	enum rte_eth_nb_pools nb_queue_pools; /**< With DCB, 16 or 32 pools. */
	/** Traffic class each UP mapped to. */
	uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
};

struct rte_eth_dcb_tx_conf {
	enum rte_eth_nb_tcs nb_tcs; /**< Possible DCB TCs, 4 or 8 TCs. */
	/** Traffic class each UP mapped to. */
	uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
};

struct rte_eth_vmdq_tx_conf {
	enum rte_eth_nb_pools nb_queue_pools; /**< VMDq mode, 64 pools. */
};

/**
 * A structure used to configure the VMDq+DCB feature
 * of an Ethernet port.
 *
 * Using this feature, packets are routed to a pool of queues, based
 * on the VLAN ID in the VLAN tag, and then to a specific queue within
 * that pool, using the user priority VLAN tag field.
 *
 * A default pool may be used, if desired, to route all traffic which
 * does not match the VLAN filter rules.
 */
struct rte_eth_vmdq_dcb_conf {
	enum rte_eth_nb_pools nb_queue_pools; /**< With DCB, 16 or 32 pools */
	uint8_t enable_default_pool; /**< If non-zero, use a default pool */
	uint8_t default_pool; /**< The default pool, if applicable */
	uint8_t nb_pool_maps; /**< We can have up to 64 filters/mappings */
	struct {
		uint16_t vlan_id; /**< The VLAN ID of the received frame */
		uint64_t pools;   /**< Bitmask of pools for packet Rx */
	} pool_map[RTE_ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq VLAN pool maps. */
	/** Selects a queue in a pool */
	uint8_t dcb_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES];
};

/**
 * A structure used to configure the VMDq feature of an Ethernet port when
 * not combined with the DCB feature.
 *
 * Using this feature, packets are routed to a pool of queues. By default,
 * the pool selection is based on the MAC address and the VLAN ID in the
 * VLAN tag, as specified in the pool_map array.
 * Passing RTE_ETH_VMDQ_ACCEPT_UNTAG in the rx_mode field allows pool
 * selection using only the MAC address. MAC address to pool mapping is done
 * using the rte_eth_dev_mac_addr_add function, with the pool parameter
 * corresponding to the pool ID.
 *
 * Queue selection within the selected pool will be done using RSS when
 * it is enabled or revert to the first queue of the pool if not.
 *
 * A default pool may be used, if desired, to route all traffic which
 * does not match the VLAN filter rules or any pool MAC address.
 */
struct rte_eth_vmdq_rx_conf {
	enum rte_eth_nb_pools nb_queue_pools; /**< VMDq only mode, 8 or 64 pools */
	uint8_t enable_default_pool; /**< If non-zero, use a default pool */
	uint8_t default_pool; /**< The default pool, if applicable */
	uint8_t enable_loop_back; /**< Enable VT loop back */
	uint8_t nb_pool_maps; /**< We can have up to 64 filters/mappings */
	uint32_t rx_mode; /**< Flags from RTE_ETH_VMDQ_ACCEPT_* */
	struct {
		uint16_t vlan_id; /**< The VLAN ID of the received frame */
		uint64_t pools;   /**< Bitmask of pools for packet Rx */
	} pool_map[RTE_ETH_VMDQ_MAX_VLAN_FILTERS]; /**< VMDq VLAN pool maps. */
};

/**
 * A structure used to configure the Tx features of an Ethernet port.
 */
struct rte_eth_txmode {
	enum rte_eth_tx_mq_mode mq_mode; /**< Tx multi-queues mode. */
	/**
	 * Per-port Tx offloads to be set using RTE_ETH_TX_OFFLOAD_* flags.
	 * Only offloads set on tx_offload_capa field on rte_eth_dev_info
	 * structure are allowed to be set.
	 */
	uint64_t offloads;

	uint16_t pvid;
	__extension__
	uint8_t /** If set, reject sending out tagged pkts */
		hw_vlan_reject_tagged : 1,
		/** If set, reject sending out untagged pkts */
		hw_vlan_reject_untagged : 1,
		/** If set, enable port based VLAN insertion */
		hw_vlan_insert_pvid : 1;

	uint64_t reserved_64s[2]; /**< Reserved for future fields */
	void *reserved_ptrs[2];   /**< Reserved for future fields */
};

/**
 * @warning
 * @b EXPERIMENTAL: this structure may change without prior notice.
 *
 * A structure used to configure an Rx packet segment to split.
 *
 * If RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT flag is set in offloads field,
 * the PMD will split the received packets into multiple segments
 * according to the specification in the description array:
 *
 * - The first network buffer will be allocated from the memory pool,
 *   specified in the first array element, the second buffer, from the
 *   pool in the second element, and so on.
 *
 * - The offsets from the segment description elements specify
 *   the data offset from the buffer beginning except the first mbuf.
 *   The first segment offset is added with RTE_PKTMBUF_HEADROOM.
 *
 * - The lengths in the elements define the maximal data amount
 *   being received to each segment. The receiving starts with filling
 *   up the first mbuf data buffer up to the specified length. If there
 *   are data remaining (packet is longer than the buffer in the first
 *   mbuf), the following data will be pushed to the next segment
 *   up to its own length, and so on.
 *
 * - If the length in the segment description element is zero
 *   the actual buffer size will be deduced from the appropriate
 *   memory pool properties.
 *
 * - If there are not enough elements to describe the buffer for the
 *   entire packet of maximal length, the following parameters will be
 *   used for all remaining segments:
 *     - pool from the last valid element
 *     - the buffer size from this pool
 *     - zero offset
 */
struct rte_eth_rxseg_split {
	struct rte_mempool *mp; /**< Memory pool to allocate segment from. */
	uint16_t length; /**< Segment data length, configures split point. */
	uint16_t offset; /**< Data offset from beginning of mbuf data buffer. */
	uint32_t reserved; /**< Reserved field. */
};

/**
 * @warning
 * @b EXPERIMENTAL: this structure may change without prior notice.
 *
 * A common structure used to describe Rx packet segment properties.
 */
union rte_eth_rxseg {
	/* The settings for buffer split offload. */
	struct rte_eth_rxseg_split split;
	/* The other features settings should be added here. */
};

/**
 * A structure used to configure an Rx ring of an Ethernet port.
 */
struct rte_eth_rxconf {
	struct rte_eth_thresh rx_thresh; /**< Rx ring threshold registers. */
	uint16_t rx_free_thresh; /**< Drives the freeing of Rx descriptors. */
	uint8_t rx_drop_en; /**< Drop packets if no descriptors are available. */
	uint8_t rx_deferred_start; /**< Do not start queue with rte_eth_dev_start(). */
	uint16_t rx_nseg; /**< Number of descriptions in rx_seg array. */
	/**
	 * Share group index in Rx domain and switch domain.
	 * Non-zero value to enable Rx queue sharing, zero value to disable it.
	 * The PMD is responsible for Rx queue consistency checks to avoid
	 * member ports' configurations contradicting each other.
	 */
	uint16_t share_group;
	uint16_t share_qid; /**< Shared Rx queue ID in group */
	/**
	 * Per-queue Rx offloads to be set using RTE_ETH_RX_OFFLOAD_* flags.
	 * Only offloads set on rx_queue_offload_capa or rx_offload_capa
	 * fields on rte_eth_dev_info structure are allowed to be set.
	 */
	uint64_t offloads;
	/**
	 * Points to the array of segment descriptions for an entire packet.
	 * Array elements are properties for consecutive Rx segments.
	 *
	 * The supported capabilities of receive segmentation are reported
	 * in the rte_eth_dev_info.rx_seg_capa field.
	 */
	union rte_eth_rxseg *rx_seg;

	uint64_t reserved_64s[2]; /**< Reserved for future fields */
	void *reserved_ptrs[2];   /**< Reserved for future fields */
};
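/*
 * Illustrative sketch: a two-segment buffer split where the first 128 bytes
 * of each packet land in buffers from hdr_pool and the remainder in buffers
 * from data_pool (both pool names are assumptions for this example). It
 * requires RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT (defined later in this file) and
 * a device reporting the capability in rte_eth_dev_info.rx_seg_capa; the
 * per-segment pools are passed via rx_seg instead of the mb_pool argument
 * of rte_eth_rx_queue_setup().
 *
 *   union rte_eth_rxseg rx_seg[2] = {0};
 *   struct rte_eth_rxconf rxconf = {0};
 *
 *   rx_seg[0].split.mp = hdr_pool;    // first segment: headers
 *   rx_seg[0].split.length = 128;     // split point after 128 bytes
 *   rx_seg[1].split.mp = data_pool;   // remaining data
 *   rx_seg[1].split.length = 0;       // deduced from data_pool buffer size
 *
 *   rxconf.rx_seg = rx_seg;
 *   rxconf.rx_nseg = 2;
 *   rxconf.offloads = RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT;
 *   rte_eth_rx_queue_setup(port_id, 0, 512, rte_socket_id(), &rxconf, NULL);
 */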
1268 */ 1269 uint64_t offloads; 1270 1271 uint64_t reserved_64s[2]; /**< Reserved for future fields */ 1272 void *reserved_ptrs[2]; /**< Reserved for future fields */ 1273 }; 1274 1275 /** 1276 * @warning 1277 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice 1278 * 1279 * A structure used to return the hairpin capabilities that are supported. 1280 */ 1281 struct rte_eth_hairpin_cap { 1282 /** The max number of hairpin queues (different bindings). */ 1283 uint16_t max_nb_queues; 1284 /** Max number of Rx queues to be connected to one Tx queue. */ 1285 uint16_t max_rx_2_tx; 1286 /** Max number of Tx queues to be connected to one Rx queue. */ 1287 uint16_t max_tx_2_rx; 1288 uint16_t max_nb_desc; /**< The max num of descriptors. */ 1289 }; 1290 1291 #define RTE_ETH_MAX_HAIRPIN_PEERS 32 1292 1293 /** 1294 * @warning 1295 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice 1296 * 1297 * A structure used to hold hairpin peer data. 1298 */ 1299 struct rte_eth_hairpin_peer { 1300 uint16_t port; /**< Peer port. */ 1301 uint16_t queue; /**< Peer queue. */ 1302 }; 1303 1304 /** 1305 * @warning 1306 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice 1307 * 1308 * A structure used to configure hairpin binding. 1309 */ 1310 struct rte_eth_hairpin_conf { 1311 uint32_t peer_count:16; /**< The number of peers. */ 1312 1313 /** 1314 * Explicit Tx flow rule mode. 1315 * One hairpin pair of queues should have the same attribute. 1316 * 1317 * - When set, the user should be responsible for inserting the hairpin 1318 * Tx part flows and removing them. 1319 * - When clear, the PMD will try to handle the Tx part of the flows, 1320 * e.g., by splitting one flow into two parts. 1321 */ 1322 uint32_t tx_explicit:1; 1323 1324 /** 1325 * Manually bind hairpin queues. 1326 * One hairpin pair of queues should have the same attribute. 1327 * 1328 * - When set, to enable hairpin, the user should call the hairpin bind 1329 * function after all the queues are set up properly and the ports are 1330 * started. Also, the hairpin unbind function should be called 1331 * accordingly before stopping a port that with hairpin configured. 1332 * - When clear, the PMD will try to enable the hairpin with the queues 1333 * configured automatically during port start. 1334 */ 1335 uint32_t manual_bind:1; 1336 uint32_t reserved:14; /**< Reserved bits. */ 1337 struct rte_eth_hairpin_peer peers[RTE_ETH_MAX_HAIRPIN_PEERS]; 1338 }; 1339 1340 /** 1341 * A structure contains information about HW descriptor ring limitations. 1342 */ 1343 struct rte_eth_desc_lim { 1344 uint16_t nb_max; /**< Max allowed number of descriptors. */ 1345 uint16_t nb_min; /**< Min allowed number of descriptors. */ 1346 uint16_t nb_align; /**< Number of descriptors should be aligned to. */ 1347 1348 /** 1349 * Max allowed number of segments per whole packet. 1350 * 1351 * - For TSO packet this is the total number of data descriptors allowed 1352 * by device. 1353 * 1354 * @see nb_mtu_seg_max 1355 */ 1356 uint16_t nb_seg_max; 1357 1358 /** 1359 * Max number of segments per one MTU. 1360 * 1361 * - For non-TSO packet, this is the maximum allowed number of segments 1362 * in a single transmit packet. 1363 * 1364 * - For TSO packet each segment within the TSO may span up to this 1365 * value. 
 *
 * @see nb_seg_max
 */
	uint16_t nb_mtu_seg_max;
};

/**
 * This enum indicates the flow control mode.
 */
enum rte_eth_fc_mode {
	RTE_ETH_FC_NONE = 0, /**< Disable flow control. */
	RTE_ETH_FC_RX_PAUSE, /**< Rx pause frame, enable flowctrl on Tx side. */
	RTE_ETH_FC_TX_PAUSE, /**< Tx pause frame, enable flowctrl on Rx side. */
	RTE_ETH_FC_FULL      /**< Enable flow control on both sides. */
};
#define RTE_FC_NONE RTE_DEPRECATED(RTE_FC_NONE) RTE_ETH_FC_NONE
#define RTE_FC_RX_PAUSE RTE_DEPRECATED(RTE_FC_RX_PAUSE) RTE_ETH_FC_RX_PAUSE
#define RTE_FC_TX_PAUSE RTE_DEPRECATED(RTE_FC_TX_PAUSE) RTE_ETH_FC_TX_PAUSE
#define RTE_FC_FULL RTE_DEPRECATED(RTE_FC_FULL) RTE_ETH_FC_FULL

/**
 * A structure used to configure Ethernet flow control parameters.
 * These parameters will be configured into the registers of the NIC.
 * Please refer to the corresponding data sheet for proper values.
 */
struct rte_eth_fc_conf {
	uint32_t high_water;  /**< High threshold value to trigger XOFF */
	uint32_t low_water;   /**< Low threshold value to trigger XON */
	uint16_t pause_time;  /**< Pause quota in the Pause frame */
	uint16_t send_xon;    /**< Whether the XON frame needs to be sent */
	enum rte_eth_fc_mode mode;  /**< Link flow control mode */
	uint8_t mac_ctrl_frame_fwd;  /**< Forward MAC control frames */
	uint8_t autoneg;      /**< Use Pause autoneg */
};

/**
 * A structure used to configure Ethernet priority flow control parameters.
 * These parameters will be configured into the registers of the NIC.
 * Please refer to the corresponding data sheet for proper values.
 */
struct rte_eth_pfc_conf {
	struct rte_eth_fc_conf fc; /**< General flow control parameters. */
	uint8_t priority; /**< VLAN User Priority. */
};

/**
 * Tunnel type for device-specific classifier configuration.
1413 * @see rte_eth_udp_tunnel 1414 */ 1415 enum rte_eth_tunnel_type { 1416 RTE_ETH_TUNNEL_TYPE_NONE = 0, 1417 RTE_ETH_TUNNEL_TYPE_VXLAN, 1418 RTE_ETH_TUNNEL_TYPE_GENEVE, 1419 RTE_ETH_TUNNEL_TYPE_TEREDO, 1420 RTE_ETH_TUNNEL_TYPE_NVGRE, 1421 RTE_ETH_TUNNEL_TYPE_IP_IN_GRE, 1422 RTE_ETH_L2_TUNNEL_TYPE_E_TAG, 1423 RTE_ETH_TUNNEL_TYPE_VXLAN_GPE, 1424 RTE_ETH_TUNNEL_TYPE_ECPRI, 1425 RTE_ETH_TUNNEL_TYPE_MAX, 1426 }; 1427 #define RTE_TUNNEL_TYPE_NONE RTE_DEPRECATED(RTE_TUNNEL_TYPE_NONE) RTE_ETH_TUNNEL_TYPE_NONE 1428 #define RTE_TUNNEL_TYPE_VXLAN RTE_DEPRECATED(RTE_TUNNEL_TYPE_VXLAN) RTE_ETH_TUNNEL_TYPE_VXLAN 1429 #define RTE_TUNNEL_TYPE_GENEVE RTE_DEPRECATED(RTE_TUNNEL_TYPE_GENEVE) RTE_ETH_TUNNEL_TYPE_GENEVE 1430 #define RTE_TUNNEL_TYPE_TEREDO RTE_DEPRECATED(RTE_TUNNEL_TYPE_TEREDO) RTE_ETH_TUNNEL_TYPE_TEREDO 1431 #define RTE_TUNNEL_TYPE_NVGRE RTE_DEPRECATED(RTE_TUNNEL_TYPE_NVGRE) RTE_ETH_TUNNEL_TYPE_NVGRE 1432 #define RTE_TUNNEL_TYPE_IP_IN_GRE RTE_DEPRECATED(RTE_TUNNEL_TYPE_IP_IN_GRE) RTE_ETH_TUNNEL_TYPE_IP_IN_GRE 1433 #define RTE_L2_TUNNEL_TYPE_E_TAG RTE_DEPRECATED(RTE_L2_TUNNEL_TYPE_E_TAG) RTE_ETH_L2_TUNNEL_TYPE_E_TAG 1434 #define RTE_TUNNEL_TYPE_VXLAN_GPE RTE_DEPRECATED(RTE_TUNNEL_TYPE_VXLAN_GPE) RTE_ETH_TUNNEL_TYPE_VXLAN_GPE 1435 #define RTE_TUNNEL_TYPE_ECPRI RTE_DEPRECATED(RTE_TUNNEL_TYPE_ECPRI) RTE_ETH_TUNNEL_TYPE_ECPRI 1436 #define RTE_TUNNEL_TYPE_MAX RTE_DEPRECATED(RTE_TUNNEL_TYPE_MAX) RTE_ETH_TUNNEL_TYPE_MAX 1437 1438 /* Deprecated API file for rte_eth_dev_filter_* functions */ 1439 #include "rte_eth_ctrl.h" 1440 1441 /** 1442 * Memory space that can be configured to store Flow Director filters 1443 * in the board memory. 1444 */ 1445 enum rte_eth_fdir_pballoc_type { 1446 RTE_ETH_FDIR_PBALLOC_64K = 0, /**< 64k. */ 1447 RTE_ETH_FDIR_PBALLOC_128K, /**< 128k. */ 1448 RTE_ETH_FDIR_PBALLOC_256K, /**< 256k. */ 1449 }; 1450 #define rte_fdir_pballoc_type rte_eth_fdir_pballoc_type 1451 1452 #define RTE_FDIR_PBALLOC_64K RTE_DEPRECATED(RTE_FDIR_PBALLOC_64K) RTE_ETH_FDIR_PBALLOC_64K 1453 #define RTE_FDIR_PBALLOC_128K RTE_DEPRECATED(RTE_FDIR_PBALLOC_128K) RTE_ETH_FDIR_PBALLOC_128K 1454 #define RTE_FDIR_PBALLOC_256K RTE_DEPRECATED(RTE_FDIR_PBALLOC_256K) RTE_ETH_FDIR_PBALLOC_256K 1455 1456 /** 1457 * Select report mode of FDIR hash information in Rx descriptors. 1458 */ 1459 enum rte_fdir_status_mode { 1460 RTE_FDIR_NO_REPORT_STATUS = 0, /**< Never report FDIR hash. */ 1461 RTE_FDIR_REPORT_STATUS, /**< Only report FDIR hash for matching pkts. */ 1462 RTE_FDIR_REPORT_STATUS_ALWAYS, /**< Always report FDIR hash. */ 1463 }; 1464 1465 /** 1466 * A structure used to configure the Flow Director (FDIR) feature 1467 * of an Ethernet port. 1468 * 1469 * If mode is RTE_FDIR_MODE_NONE, the pballoc value is ignored. 1470 */ 1471 struct rte_eth_fdir_conf { 1472 enum rte_fdir_mode mode; /**< Flow Director mode. */ 1473 enum rte_eth_fdir_pballoc_type pballoc; /**< Space for FDIR filters. */ 1474 enum rte_fdir_status_mode status; /**< How to report FDIR hash. */ 1475 /** Rx queue of packets matching a "drop" filter in perfect mode. */ 1476 uint8_t drop_queue; 1477 struct rte_eth_fdir_masks mask; 1478 /** Flex payload configuration. */ 1479 struct rte_eth_fdir_flex_conf flex_conf; 1480 }; 1481 #define rte_fdir_conf rte_eth_fdir_conf 1482 1483 /** 1484 * UDP tunneling configuration. 1485 * 1486 * Used to configure the classifier of a device, 1487 * associating an UDP port with a type of tunnel. 1488 * 1489 * Some NICs may need such configuration to properly parse a tunnel 1490 * with any standard or custom UDP port. 
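 *
 * For example, an illustrative sketch for VXLAN (4789 is the IANA-assigned
 * VXLAN UDP port; choosing it here is an assumption of the example):
 * @code
 * struct rte_eth_udp_tunnel tunnel_udp = {
 *         .udp_port = 4789,
 *         .prot_type = RTE_ETH_TUNNEL_TYPE_VXLAN,
 * };
 * @endcode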
1491 */ 1492 struct rte_eth_udp_tunnel { 1493 uint16_t udp_port; /**< UDP port used for the tunnel. */ 1494 uint8_t prot_type; /**< Tunnel type. @see rte_eth_tunnel_type */ 1495 }; 1496 1497 /** 1498 * A structure used to enable/disable specific device interrupts. 1499 */ 1500 struct rte_eth_intr_conf { 1501 /** enable/disable lsc interrupt. 0 (default) - disable, 1 enable */ 1502 uint32_t lsc:1; 1503 /** enable/disable rxq interrupt. 0 (default) - disable, 1 enable */ 1504 uint32_t rxq:1; 1505 /** enable/disable rmv interrupt. 0 (default) - disable, 1 enable */ 1506 uint32_t rmv:1; 1507 }; 1508 1509 #define rte_intr_conf rte_eth_intr_conf 1510 1511 /** 1512 * A structure used to configure an Ethernet port. 1513 * Depending upon the Rx multi-queue mode, extra advanced 1514 * configuration settings may be needed. 1515 */ 1516 struct rte_eth_conf { 1517 uint32_t link_speeds; /**< bitmap of RTE_ETH_LINK_SPEED_XXX of speeds to be 1518 used. RTE_ETH_LINK_SPEED_FIXED disables link 1519 autonegotiation, and a unique speed shall be 1520 set. Otherwise, the bitmap defines the set of 1521 speeds to be advertised. If the special value 1522 RTE_ETH_LINK_SPEED_AUTONEG (0) is used, all speeds 1523 supported are advertised. */ 1524 struct rte_eth_rxmode rxmode; /**< Port Rx configuration. */ 1525 struct rte_eth_txmode txmode; /**< Port Tx configuration. */ 1526 uint32_t lpbk_mode; /**< Loopback operation mode. By default the value 1527 is 0, meaning the loopback mode is disabled. 1528 Read the datasheet of given Ethernet controller 1529 for details. The possible values of this field 1530 are defined in implementation of each driver. */ 1531 struct { 1532 struct rte_eth_rss_conf rss_conf; /**< Port RSS configuration */ 1533 /** Port VMDq+DCB configuration. */ 1534 struct rte_eth_vmdq_dcb_conf vmdq_dcb_conf; 1535 /** Port DCB Rx configuration. */ 1536 struct rte_eth_dcb_rx_conf dcb_rx_conf; 1537 /** Port VMDq Rx configuration. */ 1538 struct rte_eth_vmdq_rx_conf vmdq_rx_conf; 1539 } rx_adv_conf; /**< Port Rx filtering configuration. */ 1540 union { 1541 /** Port VMDq+DCB Tx configuration. */ 1542 struct rte_eth_vmdq_dcb_tx_conf vmdq_dcb_tx_conf; 1543 /** Port DCB Tx configuration. */ 1544 struct rte_eth_dcb_tx_conf dcb_tx_conf; 1545 /** Port VMDq Tx configuration. */ 1546 struct rte_eth_vmdq_tx_conf vmdq_tx_conf; 1547 } tx_adv_conf; /**< Port Tx DCB configuration (union). */ 1548 /** Currently,Priority Flow Control(PFC) are supported,if DCB with PFC 1549 is needed,and the variable must be set RTE_ETH_DCB_PFC_SUPPORT. */ 1550 uint32_t dcb_capability_en; 1551 struct rte_eth_fdir_conf fdir_conf; /**< FDIR configuration. DEPRECATED */ 1552 struct rte_eth_intr_conf intr_conf; /**< Interrupt mode configuration. */ 1553 }; 1554 1555 /** 1556 * Rx offload capabilities of a device. 
1557 */ 1558 #define RTE_ETH_RX_OFFLOAD_VLAN_STRIP RTE_BIT64(0) 1559 #define RTE_ETH_RX_OFFLOAD_IPV4_CKSUM RTE_BIT64(1) 1560 #define RTE_ETH_RX_OFFLOAD_UDP_CKSUM RTE_BIT64(2) 1561 #define RTE_ETH_RX_OFFLOAD_TCP_CKSUM RTE_BIT64(3) 1562 #define RTE_ETH_RX_OFFLOAD_TCP_LRO RTE_BIT64(4) 1563 #define RTE_ETH_RX_OFFLOAD_QINQ_STRIP RTE_BIT64(5) 1564 #define RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM RTE_BIT64(6) 1565 #define RTE_ETH_RX_OFFLOAD_MACSEC_STRIP RTE_BIT64(7) 1566 #define RTE_ETH_RX_OFFLOAD_HEADER_SPLIT RTE_BIT64(8) 1567 #define RTE_ETH_RX_OFFLOAD_VLAN_FILTER RTE_BIT64(9) 1568 #define RTE_ETH_RX_OFFLOAD_VLAN_EXTEND RTE_BIT64(10) 1569 #define RTE_ETH_RX_OFFLOAD_SCATTER RTE_BIT64(13) 1570 /** 1571 * Timestamp is set by the driver in RTE_MBUF_DYNFIELD_TIMESTAMP_NAME 1572 * and RTE_MBUF_DYNFLAG_RX_TIMESTAMP_NAME is set in ol_flags. 1573 * The mbuf field and flag are registered when the offload is configured. 1574 */ 1575 #define RTE_ETH_RX_OFFLOAD_TIMESTAMP RTE_BIT64(14) 1576 #define RTE_ETH_RX_OFFLOAD_SECURITY RTE_BIT64(15) 1577 #define RTE_ETH_RX_OFFLOAD_KEEP_CRC RTE_BIT64(16) 1578 #define RTE_ETH_RX_OFFLOAD_SCTP_CKSUM RTE_BIT64(17) 1579 #define RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM RTE_BIT64(18) 1580 #define RTE_ETH_RX_OFFLOAD_RSS_HASH RTE_BIT64(19) 1581 #define RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT RTE_BIT64(20) 1582 1583 #define DEV_RX_OFFLOAD_VLAN_STRIP RTE_DEPRECATED(DEV_RX_OFFLOAD_VLAN_STRIP) RTE_ETH_RX_OFFLOAD_VLAN_STRIP 1584 #define DEV_RX_OFFLOAD_IPV4_CKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_IPV4_CKSUM) RTE_ETH_RX_OFFLOAD_IPV4_CKSUM 1585 #define DEV_RX_OFFLOAD_UDP_CKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_UDP_CKSUM) RTE_ETH_RX_OFFLOAD_UDP_CKSUM 1586 #define DEV_RX_OFFLOAD_TCP_CKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_TCP_CKSUM) RTE_ETH_RX_OFFLOAD_TCP_CKSUM 1587 #define DEV_RX_OFFLOAD_TCP_LRO RTE_DEPRECATED(DEV_RX_OFFLOAD_TCP_LRO) RTE_ETH_RX_OFFLOAD_TCP_LRO 1588 #define DEV_RX_OFFLOAD_QINQ_STRIP RTE_DEPRECATED(DEV_RX_OFFLOAD_QINQ_STRIP) RTE_ETH_RX_OFFLOAD_QINQ_STRIP 1589 #define DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_OUTER_IPV4_CKSUM) RTE_ETH_RX_OFFLOAD_OUTER_IPV4_CKSUM 1590 #define DEV_RX_OFFLOAD_MACSEC_STRIP RTE_DEPRECATED(DEV_RX_OFFLOAD_MACSEC_STRIP) RTE_ETH_RX_OFFLOAD_MACSEC_STRIP 1591 #define DEV_RX_OFFLOAD_HEADER_SPLIT RTE_DEPRECATED(DEV_RX_OFFLOAD_HEADER_SPLIT) RTE_ETH_RX_OFFLOAD_HEADER_SPLIT 1592 #define DEV_RX_OFFLOAD_VLAN_FILTER RTE_DEPRECATED(DEV_RX_OFFLOAD_VLAN_FILTER) RTE_ETH_RX_OFFLOAD_VLAN_FILTER 1593 #define DEV_RX_OFFLOAD_VLAN_EXTEND RTE_DEPRECATED(DEV_RX_OFFLOAD_VLAN_EXTEND) RTE_ETH_RX_OFFLOAD_VLAN_EXTEND 1594 #define DEV_RX_OFFLOAD_SCATTER RTE_DEPRECATED(DEV_RX_OFFLOAD_SCATTER) RTE_ETH_RX_OFFLOAD_SCATTER 1595 #define DEV_RX_OFFLOAD_TIMESTAMP RTE_DEPRECATED(DEV_RX_OFFLOAD_TIMESTAMP) RTE_ETH_RX_OFFLOAD_TIMESTAMP 1596 #define DEV_RX_OFFLOAD_SECURITY RTE_DEPRECATED(DEV_RX_OFFLOAD_SECURITY) RTE_ETH_RX_OFFLOAD_SECURITY 1597 #define DEV_RX_OFFLOAD_KEEP_CRC RTE_DEPRECATED(DEV_RX_OFFLOAD_KEEP_CRC) RTE_ETH_RX_OFFLOAD_KEEP_CRC 1598 #define DEV_RX_OFFLOAD_SCTP_CKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_SCTP_CKSUM) RTE_ETH_RX_OFFLOAD_SCTP_CKSUM 1599 #define DEV_RX_OFFLOAD_OUTER_UDP_CKSUM RTE_DEPRECATED(DEV_RX_OFFLOAD_OUTER_UDP_CKSUM) RTE_ETH_RX_OFFLOAD_OUTER_UDP_CKSUM 1600 #define DEV_RX_OFFLOAD_RSS_HASH RTE_DEPRECATED(DEV_RX_OFFLOAD_RSS_HASH) RTE_ETH_RX_OFFLOAD_RSS_HASH 1601 1602 #define RTE_ETH_RX_OFFLOAD_CHECKSUM (RTE_ETH_RX_OFFLOAD_IPV4_CKSUM | \ 1603 RTE_ETH_RX_OFFLOAD_UDP_CKSUM | \ 1604 RTE_ETH_RX_OFFLOAD_TCP_CKSUM) 1605 #define DEV_RX_OFFLOAD_CHECKSUM 
RTE_DEPRECATED(DEV_RX_OFFLOAD_CHECKSUM) RTE_ETH_RX_OFFLOAD_CHECKSUM 1606 #define RTE_ETH_RX_OFFLOAD_VLAN (RTE_ETH_RX_OFFLOAD_VLAN_STRIP | \ 1607 RTE_ETH_RX_OFFLOAD_VLAN_FILTER | \ 1608 RTE_ETH_RX_OFFLOAD_VLAN_EXTEND | \ 1609 RTE_ETH_RX_OFFLOAD_QINQ_STRIP) 1610 #define DEV_RX_OFFLOAD_VLAN RTE_DEPRECATED(DEV_RX_OFFLOAD_VLAN) RTE_ETH_RX_OFFLOAD_VLAN 1611 1612 /* 1613 * If new Rx offload capabilities are defined, they also must be 1614 * mentioned in rte_rx_offload_names in rte_ethdev.c file. 1615 */ 1616 1617 /** 1618 * Tx offload capabilities of a device. 1619 */ 1620 #define RTE_ETH_TX_OFFLOAD_VLAN_INSERT RTE_BIT64(0) 1621 #define RTE_ETH_TX_OFFLOAD_IPV4_CKSUM RTE_BIT64(1) 1622 #define RTE_ETH_TX_OFFLOAD_UDP_CKSUM RTE_BIT64(2) 1623 #define RTE_ETH_TX_OFFLOAD_TCP_CKSUM RTE_BIT64(3) 1624 #define RTE_ETH_TX_OFFLOAD_SCTP_CKSUM RTE_BIT64(4) 1625 #define RTE_ETH_TX_OFFLOAD_TCP_TSO RTE_BIT64(5) 1626 #define RTE_ETH_TX_OFFLOAD_UDP_TSO RTE_BIT64(6) 1627 #define RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM RTE_BIT64(7) /**< Used for tunneling packet. */ 1628 #define RTE_ETH_TX_OFFLOAD_QINQ_INSERT RTE_BIT64(8) 1629 #define RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO RTE_BIT64(9) /**< Used for tunneling packet. */ 1630 #define RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO RTE_BIT64(10) /**< Used for tunneling packet. */ 1631 #define RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO RTE_BIT64(11) /**< Used for tunneling packet. */ 1632 #define RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO RTE_BIT64(12) /**< Used for tunneling packet. */ 1633 #define RTE_ETH_TX_OFFLOAD_MACSEC_INSERT RTE_BIT64(13) 1634 /** 1635 * Multiple threads can invoke rte_eth_tx_burst() concurrently on the same 1636 * Tx queue without SW lock. 1637 */ 1638 #define RTE_ETH_TX_OFFLOAD_MT_LOCKFREE RTE_BIT64(14) 1639 /** Device supports multi segment send. */ 1640 #define RTE_ETH_TX_OFFLOAD_MULTI_SEGS RTE_BIT64(15) 1641 /** 1642 * Device supports optimization for fast release of mbufs. 1643 * When set application must guarantee that per-queue all mbufs comes from 1644 * the same mempool and has refcnt = 1. 1645 */ 1646 #define RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE RTE_BIT64(16) 1647 #define RTE_ETH_TX_OFFLOAD_SECURITY RTE_BIT64(17) 1648 /** 1649 * Device supports generic UDP tunneled packet TSO. 1650 * Application must set RTE_MBUF_F_TX_TUNNEL_UDP and other mbuf fields required 1651 * for tunnel TSO. 1652 */ 1653 #define RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO RTE_BIT64(18) 1654 /** 1655 * Device supports generic IP tunneled packet TSO. 1656 * Application must set RTE_MBUF_F_TX_TUNNEL_IP and other mbuf fields required 1657 * for tunnel TSO. 1658 */ 1659 #define RTE_ETH_TX_OFFLOAD_IP_TNL_TSO RTE_BIT64(19) 1660 /** Device supports outer UDP checksum */ 1661 #define RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM RTE_BIT64(20) 1662 /** 1663 * Device sends on time read from RTE_MBUF_DYNFIELD_TIMESTAMP_NAME 1664 * if RTE_MBUF_DYNFLAG_TX_TIMESTAMP_NAME is set in ol_flags. 1665 * The mbuf field and flag are registered when the offload is configured. 1666 */ 1667 #define RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP RTE_BIT64(21) 1668 /* 1669 * If new Tx offload capabilities are defined, they also must be 1670 * mentioned in rte_tx_offload_names in rte_ethdev.c file. 
1671 */ 1672 1673 #define DEV_TX_OFFLOAD_VLAN_INSERT RTE_DEPRECATED(DEV_TX_OFFLOAD_VLAN_INSERT) RTE_ETH_TX_OFFLOAD_VLAN_INSERT 1674 #define DEV_TX_OFFLOAD_IPV4_CKSUM RTE_DEPRECATED(DEV_TX_OFFLOAD_IPV4_CKSUM) RTE_ETH_TX_OFFLOAD_IPV4_CKSUM 1675 #define DEV_TX_OFFLOAD_UDP_CKSUM RTE_DEPRECATED(DEV_TX_OFFLOAD_UDP_CKSUM) RTE_ETH_TX_OFFLOAD_UDP_CKSUM 1676 #define DEV_TX_OFFLOAD_TCP_CKSUM RTE_DEPRECATED(DEV_TX_OFFLOAD_TCP_CKSUM) RTE_ETH_TX_OFFLOAD_TCP_CKSUM 1677 #define DEV_TX_OFFLOAD_SCTP_CKSUM RTE_DEPRECATED(DEV_TX_OFFLOAD_SCTP_CKSUM) RTE_ETH_TX_OFFLOAD_SCTP_CKSUM 1678 #define DEV_TX_OFFLOAD_TCP_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_TCP_TSO) RTE_ETH_TX_OFFLOAD_TCP_TSO 1679 #define DEV_TX_OFFLOAD_UDP_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_UDP_TSO) RTE_ETH_TX_OFFLOAD_UDP_TSO 1680 #define DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM RTE_DEPRECATED(DEV_TX_OFFLOAD_OUTER_IPV4_CKSUM) RTE_ETH_TX_OFFLOAD_OUTER_IPV4_CKSUM 1681 #define DEV_TX_OFFLOAD_QINQ_INSERT RTE_DEPRECATED(DEV_TX_OFFLOAD_QINQ_INSERT) RTE_ETH_TX_OFFLOAD_QINQ_INSERT 1682 #define DEV_TX_OFFLOAD_VXLAN_TNL_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_VXLAN_TNL_TSO) RTE_ETH_TX_OFFLOAD_VXLAN_TNL_TSO 1683 #define DEV_TX_OFFLOAD_GRE_TNL_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_GRE_TNL_TSO) RTE_ETH_TX_OFFLOAD_GRE_TNL_TSO 1684 #define DEV_TX_OFFLOAD_IPIP_TNL_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_IPIP_TNL_TSO) RTE_ETH_TX_OFFLOAD_IPIP_TNL_TSO 1685 #define DEV_TX_OFFLOAD_GENEVE_TNL_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_GENEVE_TNL_TSO) RTE_ETH_TX_OFFLOAD_GENEVE_TNL_TSO 1686 #define DEV_TX_OFFLOAD_MACSEC_INSERT RTE_DEPRECATED(DEV_TX_OFFLOAD_MACSEC_INSERT) RTE_ETH_TX_OFFLOAD_MACSEC_INSERT 1687 #define DEV_TX_OFFLOAD_MT_LOCKFREE RTE_DEPRECATED(DEV_TX_OFFLOAD_MT_LOCKFREE) RTE_ETH_TX_OFFLOAD_MT_LOCKFREE 1688 #define DEV_TX_OFFLOAD_MULTI_SEGS RTE_DEPRECATED(DEV_TX_OFFLOAD_MULTI_SEGS) RTE_ETH_TX_OFFLOAD_MULTI_SEGS 1689 #define DEV_TX_OFFLOAD_MBUF_FAST_FREE RTE_DEPRECATED(DEV_TX_OFFLOAD_MBUF_FAST_FREE) RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE 1690 #define DEV_TX_OFFLOAD_SECURITY RTE_DEPRECATED(DEV_TX_OFFLOAD_SECURITY) RTE_ETH_TX_OFFLOAD_SECURITY 1691 #define DEV_TX_OFFLOAD_UDP_TNL_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_UDP_TNL_TSO) RTE_ETH_TX_OFFLOAD_UDP_TNL_TSO 1692 #define DEV_TX_OFFLOAD_IP_TNL_TSO RTE_DEPRECATED(DEV_TX_OFFLOAD_IP_TNL_TSO) RTE_ETH_TX_OFFLOAD_IP_TNL_TSO 1693 #define DEV_TX_OFFLOAD_OUTER_UDP_CKSUM RTE_DEPRECATED(DEV_TX_OFFLOAD_OUTER_UDP_CKSUM) RTE_ETH_TX_OFFLOAD_OUTER_UDP_CKSUM 1694 #define DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP RTE_DEPRECATED(DEV_TX_OFFLOAD_SEND_ON_TIMESTAMP) RTE_ETH_TX_OFFLOAD_SEND_ON_TIMESTAMP 1695 1696 /**@{@name Device capabilities 1697 * Non-offload capabilities reported in rte_eth_dev_info.dev_capa. 1698 */ 1699 /** Device supports Rx queue setup after device started. */ 1700 #define RTE_ETH_DEV_CAPA_RUNTIME_RX_QUEUE_SETUP RTE_BIT64(0) 1701 /** Device supports Tx queue setup after device started. */ 1702 #define RTE_ETH_DEV_CAPA_RUNTIME_TX_QUEUE_SETUP RTE_BIT64(1) 1703 /** 1704 * Device supports shared Rx queue among ports within Rx domain and 1705 * switch domain. Mbufs are consumed by shared Rx queue instead of 1706 * each queue. Multiple groups are supported by share_group of Rx 1707 * queue configuration. Shared Rx queue is identified by PMD using 1708 * share_qid of Rx queue configuration. Polling any port in the group 1709 * receive packets of all member ports, source port identified by 1710 * mbuf->port field. 1711 */ 1712 #define RTE_ETH_DEV_CAPA_RXQ_SHARE RTE_BIT64(2) 1713 /** Device supports keeping flow rules across restart. 
*/ 1714 #define RTE_ETH_DEV_CAPA_FLOW_RULE_KEEP RTE_BIT64(3) 1715 /** Device supports keeping shared flow objects across restart. */ 1716 #define RTE_ETH_DEV_CAPA_FLOW_SHARED_OBJECT_KEEP RTE_BIT64(4) 1717 /**@}*/ 1718 1719 /* 1720 * Fallback default preferred Rx/Tx port parameters. 1721 * These are used if an application requests default parameters 1722 * but the PMD does not provide preferred values. 1723 */ 1724 #define RTE_ETH_DEV_FALLBACK_RX_RINGSIZE 512 1725 #define RTE_ETH_DEV_FALLBACK_TX_RINGSIZE 512 1726 #define RTE_ETH_DEV_FALLBACK_RX_NBQUEUES 1 1727 #define RTE_ETH_DEV_FALLBACK_TX_NBQUEUES 1 1728 1729 /** 1730 * Preferred Rx/Tx port parameters. 1731 * There are separate instances of this structure for transmission 1732 * and reception respectively. 1733 */ 1734 struct rte_eth_dev_portconf { 1735 uint16_t burst_size; /**< Device-preferred burst size */ 1736 uint16_t ring_size; /**< Device-preferred size of queue rings */ 1737 uint16_t nb_queues; /**< Device-preferred number of queues */ 1738 }; 1739 1740 /** 1741 * Default values for switch domain ID when ethdev does not support switch 1742 * domain definitions. 1743 */ 1744 #define RTE_ETH_DEV_SWITCH_DOMAIN_ID_INVALID (UINT16_MAX) 1745 1746 /** 1747 * Ethernet device associated switch information 1748 */ 1749 struct rte_eth_switch_info { 1750 const char *name; /**< switch name */ 1751 uint16_t domain_id; /**< switch domain ID */ 1752 /** 1753 * Mapping to the devices physical switch port as enumerated from the 1754 * perspective of the embedded interconnect/switch. For SR-IOV enabled 1755 * device this may correspond to the VF_ID of each virtual function, 1756 * but each driver should explicitly define the mapping of switch 1757 * port identifier to that physical interconnect/switch 1758 */ 1759 uint16_t port_id; 1760 /** 1761 * Shared Rx queue sub-domain boundary. Only ports in same Rx domain 1762 * and switch domain can share Rx queue. Valid only if device advertised 1763 * RTE_ETH_DEV_CAPA_RXQ_SHARE capability. 1764 */ 1765 uint16_t rx_domain; 1766 }; 1767 1768 /** 1769 * @warning 1770 * @b EXPERIMENTAL: this structure may change without prior notice. 1771 * 1772 * Ethernet device Rx buffer segmentation capabilities. 1773 */ 1774 struct rte_eth_rxseg_capa { 1775 __extension__ 1776 uint32_t multi_pools:1; /**< Supports receiving to multiple pools.*/ 1777 uint32_t offset_allowed:1; /**< Supports buffer offsets. */ 1778 uint32_t offset_align_log2:4; /**< Required offset alignment. */ 1779 uint16_t max_nseg; /**< Maximum amount of segments to split. */ 1780 uint16_t reserved; /**< Reserved field. */ 1781 }; 1782 1783 /** 1784 * Ethernet device information 1785 */ 1786 1787 /** 1788 * Ethernet device representor port type. 1789 */ 1790 enum rte_eth_representor_type { 1791 RTE_ETH_REPRESENTOR_NONE, /**< not a representor. */ 1792 RTE_ETH_REPRESENTOR_VF, /**< representor of Virtual Function. */ 1793 RTE_ETH_REPRESENTOR_SF, /**< representor of Sub Function. */ 1794 RTE_ETH_REPRESENTOR_PF, /**< representor of Physical Function. */ 1795 }; 1796 1797 /** 1798 * A structure used to retrieve the contextual information of 1799 * an Ethernet device, such as the controlling driver of the 1800 * device, etc... 1801 */ 1802 struct rte_eth_dev_info { 1803 struct rte_device *device; /** Generic device information */ 1804 const char *driver_name; /**< Device Driver name. */ 1805 unsigned int if_index; /**< Index to bound host interface, or 0 if none. 1806 Use if_indextoname() to translate into an interface name. 
*/ 1807 uint16_t min_mtu; /**< Minimum MTU allowed */ 1808 uint16_t max_mtu; /**< Maximum MTU allowed */ 1809 const uint32_t *dev_flags; /**< Device flags */ 1810 uint32_t min_rx_bufsize; /**< Minimum size of Rx buffer. */ 1811 uint32_t max_rx_pktlen; /**< Maximum configurable length of Rx pkt. */ 1812 /** Maximum configurable size of LRO aggregated packet. */ 1813 uint32_t max_lro_pkt_size; 1814 uint16_t max_rx_queues; /**< Maximum number of Rx queues. */ 1815 uint16_t max_tx_queues; /**< Maximum number of Tx queues. */ 1816 uint32_t max_mac_addrs; /**< Maximum number of MAC addresses. */ 1817 uint32_t max_hash_mac_addrs; 1818 /** Maximum number of hash MAC addresses for MTA and UTA. */ 1819 uint16_t max_vfs; /**< Maximum number of VFs. */ 1820 uint16_t max_vmdq_pools; /**< Maximum number of VMDq pools. */ 1821 struct rte_eth_rxseg_capa rx_seg_capa; /**< Segmentation capability.*/ 1822 /** All Rx offload capabilities including all per-queue ones */ 1823 uint64_t rx_offload_capa; 1824 /** All Tx offload capabilities including all per-queue ones */ 1825 uint64_t tx_offload_capa; 1826 /** Device per-queue Rx offload capabilities. */ 1827 uint64_t rx_queue_offload_capa; 1828 /** Device per-queue Tx offload capabilities. */ 1829 uint64_t tx_queue_offload_capa; 1830 /** Device redirection table size, the total number of entries. */ 1831 uint16_t reta_size; 1832 uint8_t hash_key_size; /**< Hash key size in bytes */ 1833 /** Bit mask of RSS offloads, the bit offset also means flow type */ 1834 uint64_t flow_type_rss_offloads; 1835 struct rte_eth_rxconf default_rxconf; /**< Default Rx configuration */ 1836 struct rte_eth_txconf default_txconf; /**< Default Tx configuration */ 1837 uint16_t vmdq_queue_base; /**< First queue ID for VMDq pools. */ 1838 uint16_t vmdq_queue_num; /**< Queue number for VMDq pools. */ 1839 uint16_t vmdq_pool_base; /**< First ID of VMDq pools. */ 1840 struct rte_eth_desc_lim rx_desc_lim; /**< Rx descriptors limits */ 1841 struct rte_eth_desc_lim tx_desc_lim; /**< Tx descriptors limits */ 1842 uint32_t speed_capa; /**< Supported speeds bitmap (RTE_ETH_LINK_SPEED_). */ 1843 /** Configured number of Rx/Tx queues */ 1844 uint16_t nb_rx_queues; /**< Number of Rx queues. */ 1845 uint16_t nb_tx_queues; /**< Number of Tx queues. */ 1846 /** Rx parameter recommendations */ 1847 struct rte_eth_dev_portconf default_rxportconf; 1848 /** Tx parameter recommendations */ 1849 struct rte_eth_dev_portconf default_txportconf; 1850 /** Generic device capabilities (RTE_ETH_DEV_CAPA_). */ 1851 uint64_t dev_capa; 1852 /** 1853 * Switching information for ports on a device with a 1854 * embedded managed interconnect/switch. 1855 */ 1856 struct rte_eth_switch_info switch_info; 1857 1858 uint64_t reserved_64s[2]; /**< Reserved for future fields */ 1859 void *reserved_ptrs[2]; /**< Reserved for future fields */ 1860 }; 1861 1862 /**@{@name Rx/Tx queue states */ 1863 #define RTE_ETH_QUEUE_STATE_STOPPED 0 /**< Queue stopped. */ 1864 #define RTE_ETH_QUEUE_STATE_STARTED 1 /**< Queue started. */ 1865 #define RTE_ETH_QUEUE_STATE_HAIRPIN 2 /**< Queue used for hairpin. */ 1866 /**@}*/ 1867 1868 /** 1869 * Ethernet device Rx queue information structure. 1870 * Used to retrieve information about configured queue. 1871 */ 1872 struct rte_eth_rxq_info { 1873 struct rte_mempool *mp; /**< mempool used by that queue. */ 1874 struct rte_eth_rxconf conf; /**< queue config parameters. */ 1875 uint8_t scattered_rx; /**< scattered packets Rx supported. */ 1876 uint8_t queue_state; /**< one of RTE_ETH_QUEUE_STATE_*. 
 */
	uint16_t nb_desc;           /**< configured number of RXDs. */
	uint16_t rx_buf_size;       /**< hardware receive buffer size. */
} __rte_cache_min_aligned;

/**
 * Ethernet device Tx queue information structure.
 * Used to retrieve information about configured queue.
 */
struct rte_eth_txq_info {
	struct rte_eth_txconf conf; /**< queue config parameters. */
	uint16_t nb_desc;           /**< configured number of TXDs. */
	uint8_t queue_state;        /**< one of RTE_ETH_QUEUE_STATE_*. */
} __rte_cache_min_aligned;

/* Generic Burst mode flag definition, values can be ORed. */

/**
 * If the queues have different burst mode descriptions, this bit will be set
 * by the PMD, and the application can then iterate over the queues to
 * retrieve the burst description of each queue individually.
 */
#define RTE_ETH_BURST_FLAG_PER_QUEUE RTE_BIT64(0)

/**
 * Ethernet device Rx/Tx queue packet burst mode information structure.
 * Used to retrieve information about packet burst mode setting.
 */
struct rte_eth_burst_mode {
	uint64_t flags; /**< The ORed values of RTE_ETH_BURST_FLAG_xxx */

#define RTE_ETH_BURST_MODE_INFO_SIZE 1024 /**< Maximum size for information */
	char info[RTE_ETH_BURST_MODE_INFO_SIZE]; /**< burst mode information */
};

/** Maximum name length for extended statistics counters */
#define RTE_ETH_XSTATS_NAME_SIZE 64

/**
 * An Ethernet device extended statistic structure
 *
 * This structure is used by rte_eth_xstats_get() to provide
 * statistics that are not provided in the generic *rte_eth_stats*
 * structure.
 * It maps a name ID, corresponding to an index in the array returned
 * by rte_eth_xstats_get_names(), to a statistic value.
 */
struct rte_eth_xstat {
	uint64_t id;        /**< The index in xstats name array. */
	uint64_t value;     /**< The statistic counter value. */
};

/**
 * A name element for extended statistics.
 *
 * An array of this structure is returned by rte_eth_xstats_get_names().
 * It lists the names of extended statistics for a PMD. The *rte_eth_xstat*
 * structure references these names by their array index.
 *
 * The xstats should follow a common naming scheme.
 * Some names are standardized in rte_stats_strings.
 * Examples:
 * - rx_missed_errors
 * - tx_q3_bytes
 * - tx_size_128_to_255_packets
 */
struct rte_eth_xstat_name {
	char name[RTE_ETH_XSTATS_NAME_SIZE]; /**< The statistic name. */
};

#define RTE_ETH_DCB_NUM_TCS    8
#define RTE_ETH_MAX_VMDQ_POOL  64

#define ETH_DCB_NUM_TCS RTE_DEPRECATED(ETH_DCB_NUM_TCS) RTE_ETH_DCB_NUM_TCS
#define ETH_MAX_VMDQ_POOL RTE_DEPRECATED(ETH_MAX_VMDQ_POOL) RTE_ETH_MAX_VMDQ_POOL

/**
 * A structure used to get the queue and TC mapping information
 * on both the Tx and Rx paths.
 */
struct rte_eth_dcb_tc_queue_mapping {
	/** Rx queues assigned to tc per Pool */
	struct {
		uint16_t base;
		uint16_t nb_queue;
	} tc_rxq[RTE_ETH_MAX_VMDQ_POOL][RTE_ETH_DCB_NUM_TCS];
	/** Tx queues assigned to tc per Pool */
	struct {
		uint16_t base;
		uint16_t nb_queue;
	} tc_txq[RTE_ETH_MAX_VMDQ_POOL][RTE_ETH_DCB_NUM_TCS];
};

/**
 * A structure used to get DCB information.
 * It includes the user priority (UP) to TC mapping and the queue to TC mapping.
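 *
 * A minimal retrieval sketch (illustrative; it assumes the
 * rte_eth_dev_get_dcb_info() accessor declared later in this file and
 * omits error handling):
 * @code
 * struct rte_eth_dcb_info dcb_info;
 *
 * rte_eth_dev_get_dcb_info(port_id, &dcb_info);
 * // dcb_info.nb_tcs, dcb_info.prio_tc[] and dcb_info.tc_queue are now valid
 * @endcode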
1972 */ 1973 struct rte_eth_dcb_info { 1974 uint8_t nb_tcs; /**< number of TCs */ 1975 uint8_t prio_tc[RTE_ETH_DCB_NUM_USER_PRIORITIES]; /**< Priority to tc */ 1976 uint8_t tc_bws[RTE_ETH_DCB_NUM_TCS]; /**< Tx BW percentage for each TC */ 1977 /** Rx queues assigned to tc */ 1978 struct rte_eth_dcb_tc_queue_mapping tc_queue; 1979 }; 1980 1981 /** 1982 * This enum indicates the possible Forward Error Correction (FEC) modes 1983 * of an ethdev port. 1984 */ 1985 enum rte_eth_fec_mode { 1986 RTE_ETH_FEC_NOFEC = 0, /**< FEC is off */ 1987 RTE_ETH_FEC_AUTO, /**< FEC autonegotiation modes */ 1988 RTE_ETH_FEC_BASER, /**< FEC using common algorithm */ 1989 RTE_ETH_FEC_RS, /**< FEC using RS algorithm */ 1990 }; 1991 1992 /* Translate from FEC mode to FEC capa */ 1993 #define RTE_ETH_FEC_MODE_TO_CAPA(x) RTE_BIT32(x) 1994 1995 /* This macro indicates FEC capa mask */ 1996 #define RTE_ETH_FEC_MODE_CAPA_MASK(x) RTE_BIT32(RTE_ETH_FEC_ ## x) 1997 1998 /* A structure used to get capabilities per link speed */ 1999 struct rte_eth_fec_capa { 2000 uint32_t speed; /**< Link speed (see RTE_ETH_SPEED_NUM_*) */ 2001 uint32_t capa; /**< FEC capabilities bitmask */ 2002 }; 2003 2004 #define RTE_ETH_ALL RTE_MAX_ETHPORTS 2005 2006 /* Macros to check for valid port */ 2007 #define RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, retval) do { \ 2008 if (!rte_eth_dev_is_valid_port(port_id)) { \ 2009 RTE_ETHDEV_LOG(ERR, "Invalid port_id=%u\n", port_id); \ 2010 return retval; \ 2011 } \ 2012 } while (0) 2013 2014 #define RTE_ETH_VALID_PORTID_OR_RET(port_id) do { \ 2015 if (!rte_eth_dev_is_valid_port(port_id)) { \ 2016 RTE_ETHDEV_LOG(ERR, "Invalid port_id=%u\n", port_id); \ 2017 return; \ 2018 } \ 2019 } while (0) 2020 2021 /** 2022 * Function type used for Rx packet processing packet callbacks. 2023 * 2024 * The callback function is called on Rx with a burst of packets that have 2025 * been received on the given port and queue. 2026 * 2027 * @param port_id 2028 * The Ethernet port on which Rx is being performed. 2029 * @param queue 2030 * The queue on the Ethernet port which is being used to receive the packets. 2031 * @param pkts 2032 * The burst of packets that have just been received. 2033 * @param nb_pkts 2034 * The number of packets in the burst pointed to by "pkts". 2035 * @param max_pkts 2036 * The max number of packets that can be stored in the "pkts" array. 2037 * @param user_param 2038 * The arbitrary user parameter passed in by the application when the callback 2039 * was originally configured. 2040 * @return 2041 * The number of packets returned to the user. 2042 */ 2043 typedef uint16_t (*rte_rx_callback_fn)(uint16_t port_id, uint16_t queue, 2044 struct rte_mbuf *pkts[], uint16_t nb_pkts, uint16_t max_pkts, 2045 void *user_param); 2046 2047 /** 2048 * Function type used for Tx packet processing packet callbacks. 2049 * 2050 * The callback function is called on Tx with a burst of packets immediately 2051 * before the packets are put onto the hardware queue for transmission. 2052 * 2053 * @param port_id 2054 * The Ethernet port on which Tx is being performed. 2055 * @param queue 2056 * The queue on the Ethernet port which is being used to transmit the packets. 2057 * @param pkts 2058 * The burst of packets that are about to be transmitted. 2059 * @param nb_pkts 2060 * The number of packets in the burst pointed to by "pkts". 2061 * @param user_param 2062 * The arbitrary user parameter passed in by the application when the callback 2063 * was originally configured. 
2064 * @return 2065 * The number of packets to be written to the NIC. 2066 */ 2067 typedef uint16_t (*rte_tx_callback_fn)(uint16_t port_id, uint16_t queue, 2068 struct rte_mbuf *pkts[], uint16_t nb_pkts, void *user_param); 2069 2070 /** 2071 * Possible states of an ethdev port. 2072 */ 2073 enum rte_eth_dev_state { 2074 /** Device is unused before being probed. */ 2075 RTE_ETH_DEV_UNUSED = 0, 2076 /** Device is attached when allocated in probing. */ 2077 RTE_ETH_DEV_ATTACHED, 2078 /** Device is in removed state when plug-out is detected. */ 2079 RTE_ETH_DEV_REMOVED, 2080 }; 2081 2082 struct rte_eth_dev_sriov { 2083 uint8_t active; /**< SRIOV is active with 16, 32 or 64 pools */ 2084 uint8_t nb_q_per_pool; /**< Rx queue number per pool */ 2085 uint16_t def_vmdq_idx; /**< Default pool num used for PF */ 2086 uint16_t def_pool_q_idx; /**< Default pool queue start reg index */ 2087 }; 2088 #define RTE_ETH_DEV_SRIOV(dev) ((dev)->data->sriov) 2089 2090 #define RTE_ETH_NAME_MAX_LEN RTE_DEV_NAME_MAX_LEN 2091 2092 #define RTE_ETH_DEV_NO_OWNER 0 2093 2094 #define RTE_ETH_MAX_OWNER_NAME_LEN 64 2095 2096 struct rte_eth_dev_owner { 2097 uint64_t id; /**< The owner unique identifier. */ 2098 char name[RTE_ETH_MAX_OWNER_NAME_LEN]; /**< The owner name. */ 2099 }; 2100 2101 /**@{@name Device flags 2102 * Flags internally saved in rte_eth_dev_data.dev_flags 2103 * and reported in rte_eth_dev_info.dev_flags. 2104 */ 2105 /** PMD supports thread-safe flow operations */ 2106 #define RTE_ETH_DEV_FLOW_OPS_THREAD_SAFE RTE_BIT32(0) 2107 /** Device supports link state interrupt */ 2108 #define RTE_ETH_DEV_INTR_LSC RTE_BIT32(1) 2109 /** Device is a bonded slave */ 2110 #define RTE_ETH_DEV_BONDED_SLAVE RTE_BIT32(2) 2111 /** Device supports device removal interrupt */ 2112 #define RTE_ETH_DEV_INTR_RMV RTE_BIT32(3) 2113 /** Device is port representor */ 2114 #define RTE_ETH_DEV_REPRESENTOR RTE_BIT32(4) 2115 /** Device does not support MAC change after started */ 2116 #define RTE_ETH_DEV_NOLIVE_MAC_ADDR RTE_BIT32(5) 2117 /** 2118 * Queue xstats filled automatically by ethdev layer. 2119 * PMDs filling the queue xstats themselves should not set this flag 2120 */ 2121 #define RTE_ETH_DEV_AUTOFILL_QUEUE_XSTATS RTE_BIT32(6) 2122 /**@}*/ 2123 2124 /** 2125 * Iterates over valid ethdev ports owned by a specific owner. 2126 * 2127 * @param port_id 2128 * The ID of the next possible valid owned port. 2129 * @param owner_id 2130 * The owner identifier. 2131 * RTE_ETH_DEV_NO_OWNER means iterate over all valid ownerless ports. 2132 * @return 2133 * Next valid port ID owned by owner_id, RTE_MAX_ETHPORTS if there is none. 2134 */ 2135 uint64_t rte_eth_find_next_owned_by(uint16_t port_id, 2136 const uint64_t owner_id); 2137 2138 /** 2139 * Macro to iterate over all enabled ethdev ports owned by a specific owner. 2140 */ 2141 #define RTE_ETH_FOREACH_DEV_OWNED_BY(p, o) \ 2142 for (p = rte_eth_find_next_owned_by(0, o); \ 2143 (unsigned int)p < (unsigned int)RTE_MAX_ETHPORTS; \ 2144 p = rte_eth_find_next_owned_by(p + 1, o)) 2145 2146 /** 2147 * Iterates over valid ethdev ports. 2148 * 2149 * @param port_id 2150 * The ID of the next possible valid port. 2151 * @return 2152 * Next valid port ID, RTE_MAX_ETHPORTS if there is none. 2153 */ 2154 uint16_t rte_eth_find_next(uint16_t port_id); 2155 2156 /** 2157 * Macro to iterate over all enabled and ownerless ethdev ports. 
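 *
 * For example (illustrative sketch):
 * @code
 * uint16_t port_id;
 *
 * RTE_ETH_FOREACH_DEV(port_id) {
 *         // each iteration yields a valid, ownerless port identifier
 * }
 * @endcode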
2158 */ 2159 #define RTE_ETH_FOREACH_DEV(p) \ 2160 RTE_ETH_FOREACH_DEV_OWNED_BY(p, RTE_ETH_DEV_NO_OWNER) 2161 2162 /** 2163 * Iterates over ethdev ports of a specified device. 2164 * 2165 * @param port_id_start 2166 * The ID of the next possible valid port. 2167 * @param parent 2168 * The generic device behind the ports to iterate. 2169 * @return 2170 * Next port ID of the device, possibly port_id_start, 2171 * RTE_MAX_ETHPORTS if there is none. 2172 */ 2173 uint16_t 2174 rte_eth_find_next_of(uint16_t port_id_start, 2175 const struct rte_device *parent); 2176 2177 /** 2178 * Macro to iterate over all ethdev ports of a specified device. 2179 * 2180 * @param port_id 2181 * The ID of the matching port being iterated. 2182 * @param parent 2183 * The rte_device pointer matching the iterated ports. 2184 */ 2185 #define RTE_ETH_FOREACH_DEV_OF(port_id, parent) \ 2186 for (port_id = rte_eth_find_next_of(0, parent); \ 2187 port_id < RTE_MAX_ETHPORTS; \ 2188 port_id = rte_eth_find_next_of(port_id + 1, parent)) 2189 2190 /** 2191 * Iterates over sibling ethdev ports (i.e. sharing the same rte_device). 2192 * 2193 * @param port_id_start 2194 * The ID of the next possible valid sibling port. 2195 * @param ref_port_id 2196 * The ID of a reference port to compare rte_device with. 2197 * @return 2198 * Next sibling port ID, possibly port_id_start or ref_port_id itself, 2199 * RTE_MAX_ETHPORTS if there is none. 2200 */ 2201 uint16_t 2202 rte_eth_find_next_sibling(uint16_t port_id_start, uint16_t ref_port_id); 2203 2204 /** 2205 * Macro to iterate over all ethdev ports sharing the same rte_device 2206 * as the specified port. 2207 * Note: the specified reference port is part of the loop iterations. 2208 * 2209 * @param port_id 2210 * The ID of the matching port being iterated. 2211 * @param ref_port_id 2212 * The ID of the port being compared. 2213 */ 2214 #define RTE_ETH_FOREACH_DEV_SIBLING(port_id, ref_port_id) \ 2215 for (port_id = rte_eth_find_next_sibling(0, ref_port_id); \ 2216 port_id < RTE_MAX_ETHPORTS; \ 2217 port_id = rte_eth_find_next_sibling(port_id + 1, ref_port_id)) 2218 2219 /** 2220 * @warning 2221 * @b EXPERIMENTAL: this API may change without prior notice. 2222 * 2223 * Get a new unique owner identifier. 2224 * An owner identifier is used to owns Ethernet devices by only one DPDK entity 2225 * to avoid multiple management of device by different entities. 2226 * 2227 * @param owner_id 2228 * Owner identifier pointer. 2229 * @return 2230 * Negative errno value on error, 0 on success. 2231 */ 2232 __rte_experimental 2233 int rte_eth_dev_owner_new(uint64_t *owner_id); 2234 2235 /** 2236 * @warning 2237 * @b EXPERIMENTAL: this API may change without prior notice. 2238 * 2239 * Set an Ethernet device owner. 2240 * 2241 * @param port_id 2242 * The identifier of the port to own. 2243 * @param owner 2244 * The owner pointer. 2245 * @return 2246 * Negative errno value on error, 0 on success. 2247 */ 2248 __rte_experimental 2249 int rte_eth_dev_owner_set(const uint16_t port_id, 2250 const struct rte_eth_dev_owner *owner); 2251 2252 /** 2253 * @warning 2254 * @b EXPERIMENTAL: this API may change without prior notice. 2255 * 2256 * Unset Ethernet device owner to make the device ownerless. 2257 * 2258 * @param port_id 2259 * The identifier of port to make ownerless. 2260 * @param owner_id 2261 * The owner identifier. 2262 * @return 2263 * 0 on success, negative errno value on error. 
2264 */ 2265 __rte_experimental 2266 int rte_eth_dev_owner_unset(const uint16_t port_id, 2267 const uint64_t owner_id); 2268 2269 /** 2270 * @warning 2271 * @b EXPERIMENTAL: this API may change without prior notice. 2272 * 2273 * Remove owner from all Ethernet devices owned by a specific owner. 2274 * 2275 * @param owner_id 2276 * The owner identifier. 2277 * @return 2278 * 0 on success, negative errno value on error. 2279 */ 2280 __rte_experimental 2281 int rte_eth_dev_owner_delete(const uint64_t owner_id); 2282 2283 /** 2284 * @warning 2285 * @b EXPERIMENTAL: this API may change without prior notice. 2286 * 2287 * Get the owner of an Ethernet device. 2288 * 2289 * @param port_id 2290 * The port identifier. 2291 * @param owner 2292 * The owner structure pointer to fill. 2293 * @return 2294 * 0 on success, negative errno value on error.. 2295 */ 2296 __rte_experimental 2297 int rte_eth_dev_owner_get(const uint16_t port_id, 2298 struct rte_eth_dev_owner *owner); 2299 2300 /** 2301 * Get the number of ports which are usable for the application. 2302 * 2303 * These devices must be iterated by using the macro 2304 * ``RTE_ETH_FOREACH_DEV`` or ``RTE_ETH_FOREACH_DEV_OWNED_BY`` 2305 * to deal with non-contiguous ranges of devices. 2306 * 2307 * @return 2308 * The count of available Ethernet devices. 2309 */ 2310 uint16_t rte_eth_dev_count_avail(void); 2311 2312 /** 2313 * Get the total number of ports which are allocated. 2314 * 2315 * Some devices may not be available for the application. 2316 * 2317 * @return 2318 * The total count of Ethernet devices. 2319 */ 2320 uint16_t rte_eth_dev_count_total(void); 2321 2322 /** 2323 * Convert a numerical speed in Mbps to a bitmap flag that can be used in 2324 * the bitmap link_speeds of the struct rte_eth_conf 2325 * 2326 * @param speed 2327 * Numerical speed value in Mbps 2328 * @param duplex 2329 * RTE_ETH_LINK_[HALF/FULL]_DUPLEX (only for 10/100M speeds) 2330 * @return 2331 * 0 if the speed cannot be mapped 2332 */ 2333 uint32_t rte_eth_speed_bitflag(uint32_t speed, int duplex); 2334 2335 /** 2336 * Get RTE_ETH_RX_OFFLOAD_* flag name. 2337 * 2338 * @param offload 2339 * Offload flag. 2340 * @return 2341 * Offload name or 'UNKNOWN' if the flag cannot be recognised. 2342 */ 2343 const char *rte_eth_dev_rx_offload_name(uint64_t offload); 2344 2345 /** 2346 * Get RTE_ETH_TX_OFFLOAD_* flag name. 2347 * 2348 * @param offload 2349 * Offload flag. 2350 * @return 2351 * Offload name or 'UNKNOWN' if the flag cannot be recognised. 2352 */ 2353 const char *rte_eth_dev_tx_offload_name(uint64_t offload); 2354 2355 /** 2356 * @warning 2357 * @b EXPERIMENTAL: this API may change without prior notice. 2358 * 2359 * Get RTE_ETH_DEV_CAPA_* flag name. 2360 * 2361 * @param capability 2362 * Capability flag. 2363 * @return 2364 * Capability name or 'UNKNOWN' if the flag cannot be recognized. 2365 */ 2366 __rte_experimental 2367 const char *rte_eth_dev_capability_name(uint64_t capability); 2368 2369 /** 2370 * Configure an Ethernet device. 2371 * This function must be invoked first before any other function in the 2372 * Ethernet API. This function can also be re-invoked when a device is in the 2373 * stopped state. 2374 * 2375 * @param port_id 2376 * The port identifier of the Ethernet device to configure. 2377 * @param nb_rx_queue 2378 * The number of receive queues to set up for the Ethernet device. 2379 * @param nb_tx_queue 2380 * The number of transmit queues to set up for the Ethernet device. 
2381 * @param eth_conf 2382 * The pointer to the configuration data to be used for the Ethernet device. 2383 * The *rte_eth_conf* structure includes: 2384 * - the hardware offload features to activate, with dedicated fields for 2385 * each statically configurable offload hardware feature provided by 2386 * Ethernet devices, such as IP checksum or VLAN tag stripping for 2387 * example. 2388 * The Rx offload bitfield API is obsolete and will be deprecated. 2389 * Applications should set the ignore_bitfield_offloads bit on *rxmode* 2390 * structure and use offloads field to set per-port offloads instead. 2391 * - Any offloading set in eth_conf->[rt]xmode.offloads must be within 2392 * the [rt]x_offload_capa returned from rte_eth_dev_info_get(). 2393 * Any type of device supported offloading set in the input argument 2394 * eth_conf->[rt]xmode.offloads to rte_eth_dev_configure() is enabled 2395 * on all queues and it can't be disabled in rte_eth_[rt]x_queue_setup() 2396 * - the Receive Side Scaling (RSS) configuration when using multiple Rx 2397 * queues per port. Any RSS hash function set in eth_conf->rss_conf.rss_hf 2398 * must be within the flow_type_rss_offloads provided by drivers via 2399 * rte_eth_dev_info_get() API. 2400 * 2401 * Embedding all configuration information in a single data structure 2402 * is the more flexible method that allows the addition of new features 2403 * without changing the syntax of the API. 2404 * @return 2405 * - 0: Success, device configured. 2406 * - <0: Error code returned by the driver configuration function. 2407 */ 2408 int rte_eth_dev_configure(uint16_t port_id, uint16_t nb_rx_queue, 2409 uint16_t nb_tx_queue, const struct rte_eth_conf *eth_conf); 2410 2411 /** 2412 * Check if an Ethernet device was physically removed. 2413 * 2414 * @param port_id 2415 * The port identifier of the Ethernet device. 2416 * @return 2417 * 1 when the Ethernet device is removed, otherwise 0. 2418 */ 2419 int 2420 rte_eth_dev_is_removed(uint16_t port_id); 2421 2422 /** 2423 * Allocate and set up a receive queue for an Ethernet device. 2424 * 2425 * The function allocates a contiguous block of memory for *nb_rx_desc* 2426 * receive descriptors from a memory zone associated with *socket_id* 2427 * and initializes each receive descriptor with a network buffer allocated 2428 * from the memory pool *mb_pool*. 2429 * 2430 * @param port_id 2431 * The port identifier of the Ethernet device. 2432 * @param rx_queue_id 2433 * The index of the receive queue to set up. 2434 * The value must be in the range [0, nb_rx_queue - 1] previously supplied 2435 * to rte_eth_dev_configure(). 2436 * @param nb_rx_desc 2437 * The number of receive descriptors to allocate for the receive ring. 2438 * @param socket_id 2439 * The *socket_id* argument is the socket identifier in case of NUMA. 2440 * The value can be *SOCKET_ID_ANY* if there is no NUMA constraint for 2441 * the DMA memory allocated for the receive descriptors of the ring. 2442 * @param rx_conf 2443 * The pointer to the configuration data to be used for the receive queue. 2444 * NULL value is allowed, in which case default Rx configuration 2445 * will be used. 2446 * The *rx_conf* structure contains an *rx_thresh* structure with the values 2447 * of the Prefetch, Host, and Write-Back threshold registers of the receive 2448 * ring. 2449 * In addition it contains the hardware offloads features to activate using 2450 * the RTE_ETH_RX_OFFLOAD_* flags. 
2451 * If an offloading set in rx_conf->offloads 2452 * hasn't been set in the input argument eth_conf->rxmode.offloads 2453 * to rte_eth_dev_configure(), it is a new added offloading, it must be 2454 * per-queue type and it is enabled for the queue. 2455 * No need to repeat any bit in rx_conf->offloads which has already been 2456 * enabled in rte_eth_dev_configure() at port level. An offloading enabled 2457 * at port level can't be disabled at queue level. 2458 * The configuration structure also contains the pointer to the array 2459 * of the receiving buffer segment descriptions, see rx_seg and rx_nseg 2460 * fields, this extended configuration might be used by split offloads like 2461 * RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT. If mb_pool is not NULL, 2462 * the extended configuration fields must be set to NULL and zero. 2463 * @param mb_pool 2464 * The pointer to the memory pool from which to allocate *rte_mbuf* network 2465 * memory buffers to populate each descriptor of the receive ring. There are 2466 * two options to provide Rx buffer configuration: 2467 * - single pool: 2468 * mb_pool is not NULL, rx_conf.rx_nseg is 0. 2469 * - multiple segments description: 2470 * mb_pool is NULL, rx_conf.rx_seg is not NULL, rx_conf.rx_nseg is not 0. 2471 * Taken only if flag RTE_ETH_RX_OFFLOAD_BUFFER_SPLIT is set in offloads. 2472 * 2473 * @return 2474 * - 0: Success, receive queue correctly set up. 2475 * - -EIO: if device is removed. 2476 * - -ENODEV: if *port_id* is invalid. 2477 * - -EINVAL: The memory pool pointer is null or the size of network buffers 2478 * which can be allocated from this memory pool does not fit the various 2479 * buffer sizes allowed by the device controller. 2480 * - -ENOMEM: Unable to allocate the receive ring descriptors or to 2481 * allocate network memory buffers from the memory pool when 2482 * initializing receive descriptors. 2483 */ 2484 int rte_eth_rx_queue_setup(uint16_t port_id, uint16_t rx_queue_id, 2485 uint16_t nb_rx_desc, unsigned int socket_id, 2486 const struct rte_eth_rxconf *rx_conf, 2487 struct rte_mempool *mb_pool); 2488 2489 /** 2490 * @warning 2491 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice 2492 * 2493 * Allocate and set up a hairpin receive queue for an Ethernet device. 2494 * 2495 * The function set up the selected queue to be used in hairpin. 2496 * 2497 * @param port_id 2498 * The port identifier of the Ethernet device. 2499 * @param rx_queue_id 2500 * The index of the receive queue to set up. 2501 * The value must be in the range [0, nb_rx_queue - 1] previously supplied 2502 * to rte_eth_dev_configure(). 2503 * @param nb_rx_desc 2504 * The number of receive descriptors to allocate for the receive ring. 2505 * 0 means the PMD will use default value. 2506 * @param conf 2507 * The pointer to the hairpin configuration. 2508 * 2509 * @return 2510 * - (0) if successful. 2511 * - (-ENODEV) if *port_id* is invalid. 2512 * - (-ENOTSUP) if hardware doesn't support. 2513 * - (-EINVAL) if bad parameter. 2514 * - (-ENOMEM) if unable to allocate the resources. 2515 */ 2516 __rte_experimental 2517 int rte_eth_rx_hairpin_queue_setup 2518 (uint16_t port_id, uint16_t rx_queue_id, uint16_t nb_rx_desc, 2519 const struct rte_eth_hairpin_conf *conf); 2520 2521 /** 2522 * Allocate and set up a transmit queue for an Ethernet device. 2523 * 2524 * @param port_id 2525 * The port identifier of the Ethernet device. 2526 * @param tx_queue_id 2527 * The index of the transmit queue to set up. 
 *   The value must be in the range [0, nb_tx_queue - 1] previously supplied
 *   to rte_eth_dev_configure().
 * @param nb_tx_desc
 *   The number of transmit descriptors to allocate for the transmit ring.
 * @param socket_id
 *   The *socket_id* argument is the socket identifier in case of NUMA.
 *   Its value can be *SOCKET_ID_ANY* if there is no NUMA constraint for
 *   the DMA memory allocated for the transmit descriptors of the ring.
 * @param tx_conf
 *   The pointer to the configuration data to be used for the transmit queue.
 *   NULL value is allowed, in which case default Tx configuration
 *   will be used.
 *   The *tx_conf* structure contains the following data:
 *   - The *tx_thresh* structure with the values of the Prefetch, Host, and
 *     Write-Back threshold registers of the transmit ring.
 *     When setting the Write-Back threshold to a value greater than zero,
 *     the *tx_rs_thresh* value should be explicitly set to one.
 *   - The *tx_free_thresh* value indicates the [minimum] number of network
 *     buffers that must be pending in the transmit ring to trigger their
 *     [implicit] freeing by the driver transmit function.
 *   - The *tx_rs_thresh* value indicates the [minimum] number of transmit
 *     descriptors that must be pending in the transmit ring before setting the
 *     RS bit on a descriptor by the driver transmit function.
 *     The *tx_rs_thresh* value should be less than or equal to the
 *     *tx_free_thresh* value, and both of them should be less than
 *     *nb_tx_desc* - 3.
 *   - The *offloads* member contains the Tx offloads to be enabled.
 *     If an offload set in tx_conf->offloads
 *     hasn't been set in the eth_conf->txmode.offloads argument
 *     to rte_eth_dev_configure(), it is a newly added offload; it must be
 *     a per-queue offload and it is enabled only for this queue.
 *     No need to repeat any bit in tx_conf->offloads which has already been
 *     enabled in rte_eth_dev_configure() at port level. An offload enabled
 *     at port level can't be disabled at queue level.
 *
 *   Note that setting *tx_free_thresh* or *tx_rs_thresh* value to 0 forces
 *   the transmit function to use default values.
 * @return
 *   - 0: Success, the transmit queue is correctly set up.
 *   - -ENOMEM: Unable to allocate the transmit ring descriptors.
 */
int rte_eth_tx_queue_setup(uint16_t port_id, uint16_t tx_queue_id,
		uint16_t nb_tx_desc, unsigned int socket_id,
		const struct rte_eth_txconf *tx_conf);

/**
 * @warning
 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
 *
 * Allocate and set up a transmit hairpin queue for an Ethernet device.
 *
 * @param port_id
 *   The port identifier of the Ethernet device.
 * @param tx_queue_id
 *   The index of the transmit queue to set up.
 *   The value must be in the range [0, nb_tx_queue - 1] previously supplied
 *   to rte_eth_dev_configure().
 * @param nb_tx_desc
 *   The number of transmit descriptors to allocate for the transmit ring.
 *   0 to set default PMD value.
 * @param conf
 *   The hairpin configuration.
 *
 * @return
 *   - (0) if successful.
 *   - (-ENODEV) if *port_id* is invalid.
 *   - (-ENOTSUP) if hardware doesn't support it.
 *   - (-EINVAL) if bad parameter.
 *   - (-ENOMEM) if unable to allocate the resources.
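 *
 * A minimal sketch (illustrative; the peer port and queue identifiers are
 * assumptions and error handling is omitted):
 * @code
 * struct rte_eth_hairpin_conf hairpin_conf = { .peer_count = 1 };
 *
 * hairpin_conf.peers[0].port = peer_rx_port;   // assumed peer Rx port
 * hairpin_conf.peers[0].queue = peer_rx_queue; // assumed peer Rx queue
 * rte_eth_tx_hairpin_queue_setup(port_id, tx_queue_id, 0, &hairpin_conf);
 * @endcode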
2597 */ 2598 __rte_experimental 2599 int rte_eth_tx_hairpin_queue_setup 2600 (uint16_t port_id, uint16_t tx_queue_id, uint16_t nb_tx_desc, 2601 const struct rte_eth_hairpin_conf *conf); 2602 2603 /** 2604 * @warning 2605 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice 2606 * 2607 * Get all the hairpin peer Rx / Tx ports of the current port. 2608 * The caller should ensure that the array is large enough to save the ports 2609 * list. 2610 * 2611 * @param port_id 2612 * The port identifier of the Ethernet device. 2613 * @param peer_ports 2614 * Pointer to the array to store the peer ports list. 2615 * @param len 2616 * Length of the array to store the port identifiers. 2617 * @param direction 2618 * Current port to peer port direction 2619 * positive - current used as Tx to get all peer Rx ports. 2620 * zero - current used as Rx to get all peer Tx ports. 2621 * 2622 * @return 2623 * - (0 or positive) actual peer ports number. 2624 * - (-EINVAL) if bad parameter. 2625 * - (-ENODEV) if *port_id* invalid 2626 * - (-ENOTSUP) if hardware doesn't support. 2627 * - Others detailed errors from PMDs. 2628 */ 2629 __rte_experimental 2630 int rte_eth_hairpin_get_peer_ports(uint16_t port_id, uint16_t *peer_ports, 2631 size_t len, uint32_t direction); 2632 2633 /** 2634 * @warning 2635 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice 2636 * 2637 * Bind all hairpin Tx queues of one port to the Rx queues of the peer port. 2638 * It is only allowed to call this function after all hairpin queues are 2639 * configured properly and the devices are in started state. 2640 * 2641 * @param tx_port 2642 * The identifier of the Tx port. 2643 * @param rx_port 2644 * The identifier of peer Rx port. 2645 * RTE_MAX_ETHPORTS is allowed for the traversal of all devices. 2646 * Rx port ID could have the same value as Tx port ID. 2647 * 2648 * @return 2649 * - (0) if successful. 2650 * - (-ENODEV) if Tx port ID is invalid. 2651 * - (-EBUSY) if device is not in started state. 2652 * - (-ENOTSUP) if hardware doesn't support. 2653 * - Others detailed errors from PMDs. 2654 */ 2655 __rte_experimental 2656 int rte_eth_hairpin_bind(uint16_t tx_port, uint16_t rx_port); 2657 2658 /** 2659 * @warning 2660 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice 2661 * 2662 * Unbind all hairpin Tx queues of one port from the Rx queues of the peer port. 2663 * This should be called before closing the Tx or Rx devices, if the bind 2664 * function is called before. 2665 * After unbinding the hairpin ports pair, it is allowed to bind them again. 2666 * Changing queues configuration should be after stopping the device(s). 2667 * 2668 * @param tx_port 2669 * The identifier of the Tx port. 2670 * @param rx_port 2671 * The identifier of peer Rx port. 2672 * RTE_MAX_ETHPORTS is allowed for traversal of all devices. 2673 * Rx port ID could have the same value as Tx port ID. 2674 * 2675 * @return 2676 * - (0) if successful. 2677 * - (-ENODEV) if Tx port ID is invalid. 2678 * - (-EBUSY) if device is in stopped state. 2679 * - (-ENOTSUP) if hardware doesn't support. 2680 * - Others detailed errors from PMDs. 
2681 */
2682 __rte_experimental
2683 int rte_eth_hairpin_unbind(uint16_t tx_port, uint16_t rx_port);
2684
2685 /**
2686 * Return the NUMA socket to which an Ethernet device is connected.
2687 *
2688 * @param port_id
2689 * The port identifier of the Ethernet device.
2690 * @return
2691 * The NUMA socket ID to which the Ethernet device is connected or
2692 * a default of zero if the socket could not be determined.
2693 * -1 is returned if the port_id value is out of range.
2694 */
2695 int rte_eth_dev_socket_id(uint16_t port_id);
2696
2697 /**
2698 * Check whether the device with the given port_id is attached.
2699 *
2700 * @param port_id
2701 * The port identifier of the Ethernet device.
2702 * @return
2703 * - 0 if port is out of range or not attached
2704 * - 1 if device is attached
2705 */
2706 int rte_eth_dev_is_valid_port(uint16_t port_id);
2707
2708 /**
2709 * Start the specified Rx queue of a port. It is used when the
2710 * rx_deferred_start flag of the specified queue is true.
2711 *
2712 * @param port_id
2713 * The port identifier of the Ethernet device.
2714 * @param rx_queue_id
2715 * The index of the Rx queue to start.
2716 * The value must be in the range [0, nb_rx_queue - 1] previously supplied
2717 * to rte_eth_dev_configure().
2718 * @return
2719 * - 0: Success, the receive queue is started.
2720 * - -ENODEV: if *port_id* is invalid.
2721 * - -EINVAL: The queue_id is out of range or the queue is a hairpin queue.
2722 * - -EIO: if device is removed.
2723 * - -ENOTSUP: The function is not supported by the PMD.
2724 */
2725 int rte_eth_dev_rx_queue_start(uint16_t port_id, uint16_t rx_queue_id);
2726
2727 /**
2728 * Stop the specified Rx queue of a port.
2729 *
2730 * @param port_id
2731 * The port identifier of the Ethernet device.
2732 * @param rx_queue_id
2733 * The index of the Rx queue to stop.
2734 * The value must be in the range [0, nb_rx_queue - 1] previously supplied
2735 * to rte_eth_dev_configure().
2736 * @return
2737 * - 0: Success, the receive queue is stopped.
2738 * - -ENODEV: if *port_id* is invalid.
2739 * - -EINVAL: The queue_id is out of range or the queue is a hairpin queue.
2740 * - -EIO: if device is removed.
2741 * - -ENOTSUP: The function is not supported by the PMD.
2742 */
2743 int rte_eth_dev_rx_queue_stop(uint16_t port_id, uint16_t rx_queue_id);
2744
2745 /**
2746 * Start the specified Tx queue of a port. It is used when the
2747 * tx_deferred_start flag of the specified queue is true.
2748 *
2749 * @param port_id
2750 * The port identifier of the Ethernet device.
2751 * @param tx_queue_id
2752 * The index of the Tx queue to start.
2753 * The value must be in the range [0, nb_tx_queue - 1] previously supplied
2754 * to rte_eth_dev_configure().
2755 * @return
2756 * - 0: Success, the transmit queue is started.
2757 * - -ENODEV: if *port_id* is invalid.
2758 * - -EINVAL: The queue_id is out of range or the queue is a hairpin queue.
2759 * - -EIO: if device is removed.
2760 * - -ENOTSUP: The function is not supported by the PMD.
2761 */
2762 int rte_eth_dev_tx_queue_start(uint16_t port_id, uint16_t tx_queue_id);
2763
2764 /**
2765 * Stop the specified Tx queue of a port.
2766 *
2767 * @param port_id
2768 * The port identifier of the Ethernet device.
2769 * @param tx_queue_id
2770 * The index of the Tx queue to stop.
2771 * The value must be in the range [0, nb_tx_queue - 1] previously supplied
2772 * to rte_eth_dev_configure().
2773 * @return
2774 * - 0: Success, the transmit queue is stopped.
2775 * - -ENODEV: if *port_id* is invalid.
2776 * - -EINVAL: The queue_id is out of range or the queue is a hairpin queue.
2777 * - -EIO: if device is removed.
2778 * - -ENOTSUP: The function is not supported by the PMD.
2779 */
2780 int rte_eth_dev_tx_queue_stop(uint16_t port_id, uint16_t tx_queue_id);
2781
2782 /**
2783 * Start an Ethernet device.
2784 *
2785 * The device start step is the last one: it applies the configured
2786 * offload features and starts the transmit and the receive units of the
2787 * device.
2788 *
2789 * The RTE_ETH_DEV_NOLIVE_MAC_ADDR device flag causes the MAC address to be
2790 * set before the PMD port start callback function is invoked.
2791 *
2792 * On success, all basic functions exported by the Ethernet API (link status,
2793 * receive/transmit, and so on) can be invoked.
2794 *
2795 * @param port_id
2796 * The port identifier of the Ethernet device.
2797 * @return
2798 * - 0: Success, Ethernet device started.
2799 * - <0: Error code of the driver device start function.
2800 */
2801 int rte_eth_dev_start(uint16_t port_id);
2802
2803 /**
2804 * Stop an Ethernet device. The device can be restarted with a call to
2805 * rte_eth_dev_start().
2806 *
2807 * @param port_id
2808 * The port identifier of the Ethernet device.
2809 * @return
2810 * - 0: Success, Ethernet device stopped.
2811 * - <0: Error code of the driver device stop function.
2812 */
2813 int rte_eth_dev_stop(uint16_t port_id);
2814
2815 /**
2816 * Link up an Ethernet device.
2817 *
2818 * Setting the device link up re-enables the device Rx/Tx functionality
2819 * after it has previously been set link down.
2820 *
2821 * @param port_id
2822 * The port identifier of the Ethernet device.
2823 * @return
2824 * - 0: Success, Ethernet device linked up.
2825 * - <0: Error code of the driver device link up function.
2826 */
2827 int rte_eth_dev_set_link_up(uint16_t port_id);
2828
2829 /**
2830 * Link down an Ethernet device.
2831 * On success, the device Rx/Tx functionality is disabled,
2832 * and it can be re-enabled with a call to
2833 * rte_eth_dev_set_link_up().
2834 *
2835 * @param port_id
2836 * The port identifier of the Ethernet device.
2837 */
2838 int rte_eth_dev_set_link_down(uint16_t port_id);
2839
2840 /**
2841 * Close a stopped Ethernet device. The device cannot be restarted!
2842 * The function frees all port resources.
2843 *
2844 * @param port_id
2845 * The port identifier of the Ethernet device.
2846 * @return
2847 * - Zero if the port is closed successfully.
2848 * - Negative if something went wrong.
2849 */
2850 int rte_eth_dev_close(uint16_t port_id);
2851
2852 /**
2853 * Reset an Ethernet device and keep its port ID.
2854 *
2855 * When a port has to be reset passively, the DPDK application can invoke
2856 * this function. For example, when a PF is reset, all its VFs should also
2857 * be reset. Normally a DPDK application can invoke this function when the
2858 * RTE_ETH_EVENT_INTR_RESET event is detected, but can also use it to start
2859 * a port reset in other circumstances.
2860 *
2861 * When this function is called, it first stops the port and then calls the
2862 * PMD-specific dev_uninit() and dev_init() to return the port to its initial
2863 * state, in which no Tx and Rx queues are set up, as if the port has been
2864 * reset and not started. The port keeps the port ID it had before the
2865 * function call.
2866 *
2867 * After calling rte_eth_dev_reset(), the application should use
2868 * rte_eth_dev_configure(), rte_eth_rx_queue_setup(),
2869 * rte_eth_tx_queue_setup(), and rte_eth_dev_start()
2870 * to reconfigure the device as appropriate.
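 *
 * A possible recovery sequence (illustrative sketch only; the configuration
 * and queue parameters are placeholders for the application's own values):
 *
 *   static int
 *   recover_port(uint16_t port_id, const struct rte_eth_conf *conf,
 *                uint16_t nb_rxq, uint16_t nb_txq, uint16_t nb_desc,
 *                struct rte_mempool *mb_pool)
 *   {
 *       uint16_t q;
 *       int ret = rte_eth_dev_reset(port_id);
 *
 *       if (ret != 0)
 *           return ret;  // e.g. retry later when -EAGAIN is returned
 *       ret = rte_eth_dev_configure(port_id, nb_rxq, nb_txq, conf);
 *       for (q = 0; q < nb_rxq && ret == 0; q++)
 *           ret = rte_eth_rx_queue_setup(port_id, q, nb_desc,
 *                                        rte_eth_dev_socket_id(port_id),
 *                                        NULL, mb_pool);
 *       for (q = 0; q < nb_txq && ret == 0; q++)
 *           ret = rte_eth_tx_queue_setup(port_id, q, nb_desc,
 *                                        rte_eth_dev_socket_id(port_id),
 *                                        NULL);
 *       return ret != 0 ? ret : rte_eth_dev_start(port_id);
 *   }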
2871 * 2872 * Note: To avoid unexpected behavior, the application should stop calling 2873 * Tx and Rx functions before calling rte_eth_dev_reset( ). For thread 2874 * safety, all these controlling functions should be called from the same 2875 * thread. 2876 * 2877 * @param port_id 2878 * The port identifier of the Ethernet device. 2879 * 2880 * @return 2881 * - (0) if successful. 2882 * - (-ENODEV) if *port_id* is invalid. 2883 * - (-ENOTSUP) if hardware doesn't support this function. 2884 * - (-EPERM) if not ran from the primary process. 2885 * - (-EIO) if re-initialisation failed or device is removed. 2886 * - (-ENOMEM) if the reset failed due to OOM. 2887 * - (-EAGAIN) if the reset temporarily failed and should be retried later. 2888 */ 2889 int rte_eth_dev_reset(uint16_t port_id); 2890 2891 /** 2892 * Enable receipt in promiscuous mode for an Ethernet device. 2893 * 2894 * @param port_id 2895 * The port identifier of the Ethernet device. 2896 * @return 2897 * - (0) if successful. 2898 * - (-ENOTSUP) if support for promiscuous_enable() does not exist 2899 * for the device. 2900 * - (-ENODEV) if *port_id* invalid. 2901 */ 2902 int rte_eth_promiscuous_enable(uint16_t port_id); 2903 2904 /** 2905 * Disable receipt in promiscuous mode for an Ethernet device. 2906 * 2907 * @param port_id 2908 * The port identifier of the Ethernet device. 2909 * @return 2910 * - (0) if successful. 2911 * - (-ENOTSUP) if support for promiscuous_disable() does not exist 2912 * for the device. 2913 * - (-ENODEV) if *port_id* invalid. 2914 */ 2915 int rte_eth_promiscuous_disable(uint16_t port_id); 2916 2917 /** 2918 * Return the value of promiscuous mode for an Ethernet device. 2919 * 2920 * @param port_id 2921 * The port identifier of the Ethernet device. 2922 * @return 2923 * - (1) if promiscuous is enabled 2924 * - (0) if promiscuous is disabled. 2925 * - (-1) on error 2926 */ 2927 int rte_eth_promiscuous_get(uint16_t port_id); 2928 2929 /** 2930 * Enable the receipt of any multicast frame by an Ethernet device. 2931 * 2932 * @param port_id 2933 * The port identifier of the Ethernet device. 2934 * @return 2935 * - (0) if successful. 2936 * - (-ENOTSUP) if support for allmulticast_enable() does not exist 2937 * for the device. 2938 * - (-ENODEV) if *port_id* invalid. 2939 */ 2940 int rte_eth_allmulticast_enable(uint16_t port_id); 2941 2942 /** 2943 * Disable the receipt of all multicast frames by an Ethernet device. 2944 * 2945 * @param port_id 2946 * The port identifier of the Ethernet device. 2947 * @return 2948 * - (0) if successful. 2949 * - (-ENOTSUP) if support for allmulticast_disable() does not exist 2950 * for the device. 2951 * - (-ENODEV) if *port_id* invalid. 2952 */ 2953 int rte_eth_allmulticast_disable(uint16_t port_id); 2954 2955 /** 2956 * Return the value of allmulticast mode for an Ethernet device. 2957 * 2958 * @param port_id 2959 * The port identifier of the Ethernet device. 2960 * @return 2961 * - (1) if allmulticast is enabled 2962 * - (0) if allmulticast is disabled. 2963 * - (-1) on error 2964 */ 2965 int rte_eth_allmulticast_get(uint16_t port_id); 2966 2967 /** 2968 * Retrieve the link status (up/down), the duplex mode (half/full), 2969 * the negotiation (auto/fixed), and if available, the speed (Mbps). 2970 * 2971 * It might need to wait up to 9 seconds. 2972 * @see rte_eth_link_get_nowait. 2973 * 2974 * @param port_id 2975 * The port identifier of the Ethernet device. 2976 * @param link 2977 * Link information written back. 2978 * @return 2979 * - (0) if successful. 
2980 * - (-ENOTSUP) if the function is not supported in PMD. 2981 * - (-ENODEV) if *port_id* invalid. 2982 * - (-EINVAL) if bad parameter. 2983 */ 2984 int rte_eth_link_get(uint16_t port_id, struct rte_eth_link *link); 2985 2986 /** 2987 * Retrieve the link status (up/down), the duplex mode (half/full), 2988 * the negotiation (auto/fixed), and if available, the speed (Mbps). 2989 * 2990 * @param port_id 2991 * The port identifier of the Ethernet device. 2992 * @param link 2993 * Link information written back. 2994 * @return 2995 * - (0) if successful. 2996 * - (-ENOTSUP) if the function is not supported in PMD. 2997 * - (-ENODEV) if *port_id* invalid. 2998 * - (-EINVAL) if bad parameter. 2999 */ 3000 int rte_eth_link_get_nowait(uint16_t port_id, struct rte_eth_link *link); 3001 3002 /** 3003 * @warning 3004 * @b EXPERIMENTAL: this API may change without prior notice. 3005 * 3006 * The function converts a link_speed to a string. It handles all special 3007 * values like unknown or none speed. 3008 * 3009 * @param link_speed 3010 * link_speed of rte_eth_link struct 3011 * @return 3012 * Link speed in textual format. It's pointer to immutable memory. 3013 * No free is required. 3014 */ 3015 __rte_experimental 3016 const char *rte_eth_link_speed_to_str(uint32_t link_speed); 3017 3018 /** 3019 * @warning 3020 * @b EXPERIMENTAL: this API may change without prior notice. 3021 * 3022 * The function converts a rte_eth_link struct representing a link status to 3023 * a string. 3024 * 3025 * @param str 3026 * A pointer to a string to be filled with textual representation of 3027 * device status. At least RTE_ETH_LINK_MAX_STR_LEN bytes should be allocated to 3028 * store default link status text. 3029 * @param len 3030 * Length of available memory at 'str' string. 3031 * @param eth_link 3032 * Link status returned by rte_eth_link_get function 3033 * @return 3034 * Number of bytes written to str array or -EINVAL if bad parameter. 3035 */ 3036 __rte_experimental 3037 int rte_eth_link_to_str(char *str, size_t len, 3038 const struct rte_eth_link *eth_link); 3039 3040 /** 3041 * Retrieve the general I/O statistics of an Ethernet device. 3042 * 3043 * @param port_id 3044 * The port identifier of the Ethernet device. 3045 * @param stats 3046 * A pointer to a structure of type *rte_eth_stats* to be filled with 3047 * the values of device counters for the following set of statistics: 3048 * - *ipackets* with the total of successfully received packets. 3049 * - *opackets* with the total of successfully transmitted packets. 3050 * - *ibytes* with the total of successfully received bytes. 3051 * - *obytes* with the total of successfully transmitted bytes. 3052 * - *ierrors* with the total of erroneous received packets. 3053 * - *oerrors* with the total of failed transmitted packets. 3054 * @return 3055 * Zero if successful. Non-zero otherwise. 3056 */ 3057 int rte_eth_stats_get(uint16_t port_id, struct rte_eth_stats *stats); 3058 3059 /** 3060 * Reset the general I/O statistics of an Ethernet device. 3061 * 3062 * @param port_id 3063 * The port identifier of the Ethernet device. 3064 * @return 3065 * - (0) if device notified to reset stats. 3066 * - (-ENOTSUP) if hardware doesn't support. 3067 * - (-ENODEV) if *port_id* invalid. 3068 * - (<0): Error code of the driver stats reset function. 3069 */ 3070 int rte_eth_stats_reset(uint16_t port_id); 3071 3072 /** 3073 * Retrieve names of extended statistics of an Ethernet device. 
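 *
 * For reference, a typical retrieval pattern first queries the number of
 * statistics and then fetches names and values together (illustrative
 * sketch; it assumes <stdlib.h>, <stdio.h> and <inttypes.h> are available
 * and trims error handling):
 *
 *   int i, nb = rte_eth_xstats_get_names(port_id, NULL, 0);
 *
 *   if (nb > 0) {
 *       struct rte_eth_xstat_name *names = calloc(nb, sizeof(*names));
 *       struct rte_eth_xstat *values = calloc(nb, sizeof(*values));
 *
 *       if (names != NULL && values != NULL &&
 *           rte_eth_xstats_get_names(port_id, names, nb) == nb &&
 *           rte_eth_xstats_get(port_id, values, nb) == nb)
 *           for (i = 0; i < nb; i++)
 *               printf("%s: %" PRIu64 "\n", names[i].name, values[i].value);
 *       free(names);
 *       free(values);
 *   }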
3074 * 3075 * There is an assumption that 'xstat_names' and 'xstats' arrays are matched 3076 * by array index: 3077 * xstats_names[i].name => xstats[i].value 3078 * 3079 * And the array index is same with id field of 'struct rte_eth_xstat': 3080 * xstats[i].id == i 3081 * 3082 * This assumption makes key-value pair matching less flexible but simpler. 3083 * 3084 * @param port_id 3085 * The port identifier of the Ethernet device. 3086 * @param xstats_names 3087 * An rte_eth_xstat_name array of at least *size* elements to 3088 * be filled. If set to NULL, the function returns the required number 3089 * of elements. 3090 * @param size 3091 * The size of the xstats_names array (number of elements). 3092 * @return 3093 * - A positive value lower or equal to size: success. The return value 3094 * is the number of entries filled in the stats table. 3095 * - A positive value higher than size: error, the given statistics table 3096 * is too small. The return value corresponds to the size that should 3097 * be given to succeed. The entries in the table are not valid and 3098 * shall not be used by the caller. 3099 * - A negative value on error (invalid port ID). 3100 */ 3101 int rte_eth_xstats_get_names(uint16_t port_id, 3102 struct rte_eth_xstat_name *xstats_names, 3103 unsigned int size); 3104 3105 /** 3106 * Retrieve extended statistics of an Ethernet device. 3107 * 3108 * There is an assumption that 'xstat_names' and 'xstats' arrays are matched 3109 * by array index: 3110 * xstats_names[i].name => xstats[i].value 3111 * 3112 * And the array index is same with id field of 'struct rte_eth_xstat': 3113 * xstats[i].id == i 3114 * 3115 * This assumption makes key-value pair matching less flexible but simpler. 3116 * 3117 * @param port_id 3118 * The port identifier of the Ethernet device. 3119 * @param xstats 3120 * A pointer to a table of structure of type *rte_eth_xstat* 3121 * to be filled with device statistics ids and values. 3122 * This parameter can be set to NULL if n is 0. 3123 * @param n 3124 * The size of the xstats array (number of elements). 3125 * @return 3126 * - A positive value lower or equal to n: success. The return value 3127 * is the number of entries filled in the stats table. 3128 * - A positive value higher than n: error, the given statistics table 3129 * is too small. The return value corresponds to the size that should 3130 * be given to succeed. The entries in the table are not valid and 3131 * shall not be used by the caller. 3132 * - A negative value on error (invalid port ID). 3133 */ 3134 int rte_eth_xstats_get(uint16_t port_id, struct rte_eth_xstat *xstats, 3135 unsigned int n); 3136 3137 /** 3138 * Retrieve names of extended statistics of an Ethernet device. 3139 * 3140 * @param port_id 3141 * The port identifier of the Ethernet device. 3142 * @param xstats_names 3143 * Array to be filled in with names of requested device statistics. 3144 * Must not be NULL if @p ids are specified (not NULL). 3145 * @param size 3146 * Number of elements in @p xstats_names array (if not NULL) and in 3147 * @p ids array (if not NULL). Must be 0 if both array pointers are NULL. 3148 * @param ids 3149 * IDs array given by app to retrieve specific statistics. May be NULL to 3150 * retrieve names of all available statistics or, if @p xstats_names is 3151 * NULL as well, just the number of available statistics. 3152 * @return 3153 * - A positive value lower or equal to size: success. The return value 3154 * is the number of entries filled in the stats table. 
3155 * - A positive value higher than size: success. The given statistics table 3156 * is too small. The return value corresponds to the size that should 3157 * be given to succeed. The entries in the table are not valid and 3158 * shall not be used by the caller. 3159 * - A negative value on error. 3160 */ 3161 int 3162 rte_eth_xstats_get_names_by_id(uint16_t port_id, 3163 struct rte_eth_xstat_name *xstats_names, unsigned int size, 3164 uint64_t *ids); 3165 3166 /** 3167 * Retrieve extended statistics of an Ethernet device. 3168 * 3169 * @param port_id 3170 * The port identifier of the Ethernet device. 3171 * @param ids 3172 * IDs array given by app to retrieve specific statistics. May be NULL to 3173 * retrieve all available statistics or, if @p values is NULL as well, 3174 * just the number of available statistics. 3175 * @param values 3176 * Array to be filled in with requested device statistics. 3177 * Must not be NULL if ids are specified (not NULL). 3178 * @param size 3179 * Number of elements in @p values array (if not NULL) and in @p ids 3180 * array (if not NULL). Must be 0 if both array pointers are NULL. 3181 * @return 3182 * - A positive value lower or equal to size: success. The return value 3183 * is the number of entries filled in the stats table. 3184 * - A positive value higher than size: success: The given statistics table 3185 * is too small. The return value corresponds to the size that should 3186 * be given to succeed. The entries in the table are not valid and 3187 * shall not be used by the caller. 3188 * - A negative value on error. 3189 */ 3190 int rte_eth_xstats_get_by_id(uint16_t port_id, const uint64_t *ids, 3191 uint64_t *values, unsigned int size); 3192 3193 /** 3194 * Gets the ID of a statistic from its name. 3195 * 3196 * This function searches for the statistics using string compares, and 3197 * as such should not be used on the fast-path. For fast-path retrieval of 3198 * specific statistics, store the ID as provided in *id* from this function, 3199 * and pass the ID to rte_eth_xstats_get() 3200 * 3201 * @param port_id The port to look up statistics from 3202 * @param xstat_name The name of the statistic to return 3203 * @param[out] id A pointer to an app-supplied uint64_t which should be 3204 * set to the ID of the stat if the stat exists. 3205 * @return 3206 * 0 on success 3207 * -ENODEV for invalid port_id, 3208 * -EIO if device is removed, 3209 * -EINVAL if the xstat_name doesn't exist in port_id 3210 * -ENOMEM if bad parameter. 3211 */ 3212 int rte_eth_xstats_get_id_by_name(uint16_t port_id, const char *xstat_name, 3213 uint64_t *id); 3214 3215 /** 3216 * Reset extended statistics of an Ethernet device. 3217 * 3218 * @param port_id 3219 * The port identifier of the Ethernet device. 3220 * @return 3221 * - (0) if device notified to reset extended stats. 3222 * - (-ENOTSUP) if pmd doesn't support both 3223 * extended stats and basic stats reset. 3224 * - (-ENODEV) if *port_id* invalid. 3225 * - (<0): Error code of the driver xstats reset function. 3226 */ 3227 int rte_eth_xstats_reset(uint16_t port_id); 3228 3229 /** 3230 * Set a mapping for the specified transmit queue to the specified per-queue 3231 * statistics counter. 3232 * 3233 * @param port_id 3234 * The port identifier of the Ethernet device. 3235 * @param tx_queue_id 3236 * The index of the transmit queue for which a queue stats mapping is required. 3237 * The value must be in the range [0, nb_tx_queue - 1] previously supplied 3238 * to rte_eth_dev_configure(). 
3239 * @param stat_idx 3240 * The per-queue packet statistics functionality number that the transmit 3241 * queue is to be assigned. 3242 * The value must be in the range [0, RTE_ETHDEV_QUEUE_STAT_CNTRS - 1]. 3243 * Max RTE_ETHDEV_QUEUE_STAT_CNTRS being 256. 3244 * @return 3245 * Zero if successful. Non-zero otherwise. 3246 */ 3247 int rte_eth_dev_set_tx_queue_stats_mapping(uint16_t port_id, 3248 uint16_t tx_queue_id, uint8_t stat_idx); 3249 3250 /** 3251 * Set a mapping for the specified receive queue to the specified per-queue 3252 * statistics counter. 3253 * 3254 * @param port_id 3255 * The port identifier of the Ethernet device. 3256 * @param rx_queue_id 3257 * The index of the receive queue for which a queue stats mapping is required. 3258 * The value must be in the range [0, nb_rx_queue - 1] previously supplied 3259 * to rte_eth_dev_configure(). 3260 * @param stat_idx 3261 * The per-queue packet statistics functionality number that the receive 3262 * queue is to be assigned. 3263 * The value must be in the range [0, RTE_ETHDEV_QUEUE_STAT_CNTRS - 1]. 3264 * Max RTE_ETHDEV_QUEUE_STAT_CNTRS being 256. 3265 * @return 3266 * Zero if successful. Non-zero otherwise. 3267 */ 3268 int rte_eth_dev_set_rx_queue_stats_mapping(uint16_t port_id, 3269 uint16_t rx_queue_id, 3270 uint8_t stat_idx); 3271 3272 /** 3273 * Retrieve the Ethernet address of an Ethernet device. 3274 * 3275 * @param port_id 3276 * The port identifier of the Ethernet device. 3277 * @param mac_addr 3278 * A pointer to a structure of type *ether_addr* to be filled with 3279 * the Ethernet address of the Ethernet device. 3280 * @return 3281 * - (0) if successful 3282 * - (-ENODEV) if *port_id* invalid. 3283 * - (-EINVAL) if bad parameter. 3284 */ 3285 int rte_eth_macaddr_get(uint16_t port_id, struct rte_ether_addr *mac_addr); 3286 3287 /** 3288 * @warning 3289 * @b EXPERIMENTAL: this API may change without prior notice 3290 * 3291 * Retrieve the Ethernet addresses of an Ethernet device. 3292 * 3293 * @param port_id 3294 * The port identifier of the Ethernet device. 3295 * @param ma 3296 * A pointer to an array of structures of type *ether_addr* to be filled with 3297 * the Ethernet addresses of the Ethernet device. 3298 * @param num 3299 * Number of elements in the @p ma array. 3300 * Note that rte_eth_dev_info::max_mac_addrs can be used to retrieve 3301 * max number of Ethernet addresses for given port. 3302 * @return 3303 * - number of retrieved addresses if successful 3304 * - (-ENODEV) if *port_id* invalid. 3305 * - (-EINVAL) if bad parameter. 3306 */ 3307 __rte_experimental 3308 int rte_eth_macaddrs_get(uint16_t port_id, struct rte_ether_addr *ma, 3309 unsigned int num); 3310 3311 /** 3312 * Retrieve the contextual information of an Ethernet device. 
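 *
 * For reference, applications commonly use the reported capabilities to
 * bound their own configuration before calling rte_eth_dev_configure()
 * (illustrative sketch; *nb_rxq* and *port_conf* are hypothetical
 * application variables and the offload flag shown is only an assumed
 * example of the RTE_ETH_TX_OFFLOAD_* namespace):
 *
 *   struct rte_eth_dev_info dev_info;
 *
 *   if (rte_eth_dev_info_get(port_id, &dev_info) != 0)
 *       return;                      // invalid port or info not available
 *   if (nb_rxq > dev_info.max_rx_queues)
 *       nb_rxq = dev_info.max_rx_queues;
 *   if (dev_info.tx_offload_capa & RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE)
 *       port_conf.txmode.offloads |= RTE_ETH_TX_OFFLOAD_MBUF_FAST_FREE;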
3313 *
3314 * As part of this function, a number of fields in dev_info will be
3315 * initialized as follows:
3316 *
3317 * rx_desc_lim = lim
3318 * tx_desc_lim = lim
3319 *
3320 * Where lim is defined within rte_eth_dev_info_get() as
3321 *
3322 * const struct rte_eth_desc_lim lim = {
3323 * .nb_max = UINT16_MAX,
3324 * .nb_min = 0,
3325 * .nb_align = 1,
3326 * .nb_seg_max = UINT16_MAX,
3327 * .nb_mtu_seg_max = UINT16_MAX,
3328 * };
3329 *
3330 * device = dev->device
3331 * min_mtu = RTE_ETHER_MIN_LEN - RTE_ETHER_HDR_LEN - RTE_ETHER_CRC_LEN
3332 * max_mtu = UINT16_MAX
3333 *
3334 * The following fields will be populated if support for dev_infos_get()
3335 * exists for the device and the rte_eth_dev 'dev' has been populated
3336 * successfully with a call to it:
3337 *
3338 * driver_name = dev->device->driver->name
3339 * nb_rx_queues = dev->data->nb_rx_queues
3340 * nb_tx_queues = dev->data->nb_tx_queues
3341 * dev_flags = &dev->data->dev_flags
3342 *
3343 * @param port_id
3344 * The port identifier of the Ethernet device.
3345 * @param dev_info
3346 * A pointer to a structure of type *rte_eth_dev_info* to be filled with
3347 * the contextual information of the Ethernet device.
3348 * @return
3349 * - (0) if successful.
3350 * - (-ENOTSUP) if support for dev_infos_get() does not exist for the device.
3351 * - (-ENODEV) if *port_id* invalid.
3352 * - (-EINVAL) if bad parameter.
3353 */
3354 int rte_eth_dev_info_get(uint16_t port_id, struct rte_eth_dev_info *dev_info);
3355
3356 /**
3357 * @warning
3358 * @b EXPERIMENTAL: this API may change without prior notice.
3359 *
3360 * Retrieve the configuration of an Ethernet device.
3361 *
3362 * @param port_id
3363 * The port identifier of the Ethernet device.
3364 * @param dev_conf
3365 * Location for Ethernet device configuration to be filled in.
3366 * @return
3367 * - (0) if successful.
3368 * - (-ENODEV) if *port_id* invalid.
3369 * - (-EINVAL) if bad parameter.
3370 */
3371 __rte_experimental
3372 int rte_eth_dev_conf_get(uint16_t port_id, struct rte_eth_conf *dev_conf);
3373
3374 /**
3375 * Retrieve the firmware version of a device.
3376 *
3377 * @param port_id
3378 * The port identifier of the device.
3379 * @param fw_version
3380 * A pointer to a caller-allocated character array in which to store the
3381 * firmware version of the device; the string includes the terminating null.
3382 * @param fw_size
3383 * The size of the array pointed to by fw_version, which should be
3384 * large enough to store the firmware version of the device.
3385 * @return
3386 * - (0) if successful.
3387 * - (-ENOTSUP) if operation is not supported.
3388 * - (-ENODEV) if *port_id* invalid.
3389 * - (-EIO) if device is removed.
3390 * - (-EINVAL) if bad parameter.
3391 * - (>0) if *fw_size* is not large enough to store the firmware version;
3392 * the return value is the size of the non-truncated string.
3393 */
3394 int rte_eth_dev_fw_version_get(uint16_t port_id,
3395 char *fw_version, size_t fw_size);
3396
3397 /**
3398 * Retrieve the supported packet types of an Ethernet device.
3399 *
3400 * When a packet type is announced as supported, it *must* be recognized by
3401 * the PMD.
For instance, if RTE_PTYPE_L2_ETHER, RTE_PTYPE_L2_ETHER_VLAN 3402 * and RTE_PTYPE_L3_IPV4 are announced, the PMD must return the following 3403 * packet types for these packets: 3404 * - Ether/IPv4 -> RTE_PTYPE_L2_ETHER | RTE_PTYPE_L3_IPV4 3405 * - Ether/VLAN/IPv4 -> RTE_PTYPE_L2_ETHER_VLAN | RTE_PTYPE_L3_IPV4 3406 * - Ether/[anything else] -> RTE_PTYPE_L2_ETHER 3407 * - Ether/VLAN/[anything else] -> RTE_PTYPE_L2_ETHER_VLAN 3408 * 3409 * When a packet is received by a PMD, the most precise type must be 3410 * returned among the ones supported. However a PMD is allowed to set 3411 * packet type that is not in the supported list, at the condition that it 3412 * is more precise. Therefore, a PMD announcing no supported packet types 3413 * can still set a matching packet type in a received packet. 3414 * 3415 * @note 3416 * Better to invoke this API after the device is already started or Rx burst 3417 * function is decided, to obtain correct supported ptypes. 3418 * @note 3419 * if a given PMD does not report what ptypes it supports, then the supported 3420 * ptype count is reported as 0. 3421 * @param port_id 3422 * The port identifier of the Ethernet device. 3423 * @param ptype_mask 3424 * A hint of what kind of packet type which the caller is interested in. 3425 * @param ptypes 3426 * An array pointer to store adequate packet types, allocated by caller. 3427 * @param num 3428 * Size of the array pointed by param ptypes. 3429 * @return 3430 * - (>=0) Number of supported ptypes. If the number of types exceeds num, 3431 * only num entries will be filled into the ptypes array, but the full 3432 * count of supported ptypes will be returned. 3433 * - (-ENODEV) if *port_id* invalid. 3434 * - (-EINVAL) if bad parameter. 3435 */ 3436 int rte_eth_dev_get_supported_ptypes(uint16_t port_id, uint32_t ptype_mask, 3437 uint32_t *ptypes, int num); 3438 /** 3439 * Inform Ethernet device about reduced range of packet types to handle. 3440 * 3441 * Application can use this function to set only specific ptypes that it's 3442 * interested. This information can be used by the PMD to optimize Rx path. 3443 * 3444 * The function accepts an array `set_ptypes` allocated by the caller to 3445 * store the packet types set by the driver, the last element of the array 3446 * is set to RTE_PTYPE_UNKNOWN. The size of the `set_ptype` array should be 3447 * `rte_eth_dev_get_supported_ptypes() + 1` else it might only be filled 3448 * partially. 3449 * 3450 * @param port_id 3451 * The port identifier of the Ethernet device. 3452 * @param ptype_mask 3453 * The ptype family that application is interested in should be bitwise OR of 3454 * RTE_PTYPE_*_MASK or 0. 3455 * @param set_ptypes 3456 * An array pointer to store set packet types, allocated by caller. The 3457 * function marks the end of array with RTE_PTYPE_UNKNOWN. 3458 * @param num 3459 * Size of the array pointed by param ptypes. 3460 * Should be rte_eth_dev_get_supported_ptypes() + 1 to accommodate the 3461 * set ptypes. 3462 * @return 3463 * - (0) if Success. 3464 * - (-ENODEV) if *port_id* invalid. 3465 * - (-EINVAL) if *ptype_mask* is invalid (or) set_ptypes is NULL and 3466 * num > 0. 3467 */ 3468 int rte_eth_dev_set_ptypes(uint16_t port_id, uint32_t ptype_mask, 3469 uint32_t *set_ptypes, unsigned int num); 3470 3471 /** 3472 * Retrieve the MTU of an Ethernet device. 3473 * 3474 * @param port_id 3475 * The port identifier of the Ethernet device. 3476 * @param mtu 3477 * A pointer to a uint16_t where the retrieved MTU is to be stored. 
3478 * @return 3479 * - (0) if successful. 3480 * - (-ENODEV) if *port_id* invalid. 3481 * - (-EINVAL) if bad parameter. 3482 */ 3483 int rte_eth_dev_get_mtu(uint16_t port_id, uint16_t *mtu); 3484 3485 /** 3486 * Change the MTU of an Ethernet device. 3487 * 3488 * @param port_id 3489 * The port identifier of the Ethernet device. 3490 * @param mtu 3491 * A uint16_t for the MTU to be applied. 3492 * @return 3493 * - (0) if successful. 3494 * - (-ENOTSUP) if operation is not supported. 3495 * - (-ENODEV) if *port_id* invalid. 3496 * - (-EIO) if device is removed. 3497 * - (-EINVAL) if *mtu* invalid, validation of mtu can occur within 3498 * rte_eth_dev_set_mtu if dev_infos_get is supported by the device or 3499 * when the mtu is set using dev->dev_ops->mtu_set. 3500 * - (-EBUSY) if operation is not allowed when the port is running 3501 */ 3502 int rte_eth_dev_set_mtu(uint16_t port_id, uint16_t mtu); 3503 3504 /** 3505 * Enable/Disable hardware filtering by an Ethernet device of received 3506 * VLAN packets tagged with a given VLAN Tag Identifier. 3507 * 3508 * @param port_id 3509 * The port identifier of the Ethernet device. 3510 * @param vlan_id 3511 * The VLAN Tag Identifier whose filtering must be enabled or disabled. 3512 * @param on 3513 * If > 0, enable VLAN filtering of VLAN packets tagged with *vlan_id*. 3514 * Otherwise, disable VLAN filtering of VLAN packets tagged with *vlan_id*. 3515 * @return 3516 * - (0) if successful. 3517 * - (-ENOTSUP) if hardware-assisted VLAN filtering not configured. 3518 * - (-ENODEV) if *port_id* invalid. 3519 * - (-EIO) if device is removed. 3520 * - (-ENOSYS) if VLAN filtering on *port_id* disabled. 3521 * - (-EINVAL) if *vlan_id* > 4095. 3522 */ 3523 int rte_eth_dev_vlan_filter(uint16_t port_id, uint16_t vlan_id, int on); 3524 3525 /** 3526 * Enable/Disable hardware VLAN Strip by a Rx queue of an Ethernet device. 3527 * 3528 * @param port_id 3529 * The port identifier of the Ethernet device. 3530 * @param rx_queue_id 3531 * The index of the receive queue for which a queue stats mapping is required. 3532 * The value must be in the range [0, nb_rx_queue - 1] previously supplied 3533 * to rte_eth_dev_configure(). 3534 * @param on 3535 * If 1, Enable VLAN Stripping of the receive queue of the Ethernet port. 3536 * If 0, Disable VLAN Stripping of the receive queue of the Ethernet port. 3537 * @return 3538 * - (0) if successful. 3539 * - (-ENOTSUP) if hardware-assisted VLAN stripping not configured. 3540 * - (-ENODEV) if *port_id* invalid. 3541 * - (-EINVAL) if *rx_queue_id* invalid. 3542 */ 3543 int rte_eth_dev_set_vlan_strip_on_queue(uint16_t port_id, uint16_t rx_queue_id, 3544 int on); 3545 3546 /** 3547 * Set the Outer VLAN Ether Type by an Ethernet device, it can be inserted to 3548 * the VLAN header. 3549 * 3550 * @param port_id 3551 * The port identifier of the Ethernet device. 3552 * @param vlan_type 3553 * The VLAN type. 3554 * @param tag_type 3555 * The Tag Protocol ID 3556 * @return 3557 * - (0) if successful. 3558 * - (-ENOTSUP) if hardware-assisted VLAN TPID setup is not supported. 3559 * - (-ENODEV) if *port_id* invalid. 3560 * - (-EIO) if device is removed. 3561 */ 3562 int rte_eth_dev_set_vlan_ether_type(uint16_t port_id, 3563 enum rte_vlan_type vlan_type, 3564 uint16_t tag_type); 3565 3566 /** 3567 * Set VLAN offload configuration on an Ethernet device. 3568 * 3569 * @param port_id 3570 * The port identifier of the Ethernet device. 
3571 * @param offload_mask
3572 * The VLAN offload bit mask, which may be a bitwise OR of:
3573 * RTE_ETH_VLAN_STRIP_OFFLOAD
3574 * RTE_ETH_VLAN_FILTER_OFFLOAD
3575 * RTE_ETH_VLAN_EXTEND_OFFLOAD
3576 * RTE_ETH_QINQ_STRIP_OFFLOAD
3577 * @return
3578 * - (0) if successful.
3579 * - (-ENOTSUP) if hardware-assisted VLAN filtering not configured.
3580 * - (-ENODEV) if *port_id* invalid.
3581 * - (-EIO) if device is removed.
3582 */
3583 int rte_eth_dev_set_vlan_offload(uint16_t port_id, int offload_mask);
3584
3585 /**
3586 * Read the VLAN offload configuration from an Ethernet device.
3587 *
3588 * @param port_id
3589 * The port identifier of the Ethernet device.
3590 * @return
3591 * - (>0) if successful. Bit mask indicating which of the following are enabled:
3592 * RTE_ETH_VLAN_STRIP_OFFLOAD
3593 * RTE_ETH_VLAN_FILTER_OFFLOAD
3594 * RTE_ETH_VLAN_EXTEND_OFFLOAD
3595 * RTE_ETH_QINQ_STRIP_OFFLOAD
3596 * - (-ENODEV) if *port_id* invalid.
3597 */
3598 int rte_eth_dev_get_vlan_offload(uint16_t port_id);
3599
3600 /**
3601 * Set port based Tx VLAN insertion on or off.
3602 *
3603 * @param port_id
3604 * The port identifier of the Ethernet device.
3605 * @param pvid
3606 * Port based Tx VLAN identifier together with user priority.
3607 * @param on
3608 * Turn on or off the port based Tx VLAN insertion.
3609 *
3610 * @return
3611 * - (0) if successful.
3612 * - negative if failed.
3613 */
3614 int rte_eth_dev_set_vlan_pvid(uint16_t port_id, uint16_t pvid, int on);
3615
3616 typedef void (*buffer_tx_error_fn)(struct rte_mbuf **unsent, uint16_t count,
3617 void *userdata);
3618
3619 /**
3620 * Structure used to buffer packets for future Tx.
3621 * Used by the rte_eth_tx_buffer() and rte_eth_tx_buffer_flush() APIs.
3622 */
3623 struct rte_eth_dev_tx_buffer {
3624 buffer_tx_error_fn error_callback;
3625 void *error_userdata;
3626 uint16_t size; /**< Size of buffer for buffered Tx */
3627 uint16_t length; /**< Number of packets in the array */
3628 /** Pending packets to be sent on explicit flush or when full */
3629 struct rte_mbuf *pkts[];
3630 };
3631
3632 /**
3633 * Calculate the size of the Tx buffer.
3634 *
3635 * @param sz
3636 * Number of packets to be stored in the buffer.
3637 */
3638 #define RTE_ETH_TX_BUFFER_SIZE(sz) \
3639 (sizeof(struct rte_eth_dev_tx_buffer) + (sz) * sizeof(struct rte_mbuf *))
3640
3641 /**
3642 * Initialize default values for buffered transmitting.
3643 *
3644 * @param buffer
3645 * Tx buffer to be initialized.
3646 * @param size
3647 * Buffer size (number of packets the buffer can hold).
3648 * @return
3649 * 0 if no error.
3650 */
3651 int
3652 rte_eth_tx_buffer_init(struct rte_eth_dev_tx_buffer *buffer, uint16_t size);
3653
3654 /**
3655 * Configure a callback for buffered packets which cannot be sent.
3656 *
3657 * Register a specific callback to be called when an attempt is made to send
3658 * all packets buffered on an Ethernet port, but not all packets can
3659 * successfully be sent. The callback registered here will be called only
3660 * from calls to the rte_eth_tx_buffer() and rte_eth_tx_buffer_flush() APIs.
3661 * The default callback configured for each queue simply frees the packets
3662 * back to their originating mempool. If additional behaviour is required,
3663 * for example, to count dropped packets, or to retry transmission of packets
3664 * which cannot be sent, this function should be used to register a suitable
3665 * callback function to implement the desired behaviour.
3666 * The callback rte_eth_tx_buffer_count_callback() documented below is
3667 * provided as a reference.
3668 *
3669 * @param buffer
3670 * The Tx buffer for which the callback is to be configured.
3671 * @param callback 3672 * The function to be used as the callback. 3673 * @param userdata 3674 * Arbitrary parameter to be passed to the callback function 3675 * @return 3676 * 0 on success, or -EINVAL if bad parameter 3677 */ 3678 int 3679 rte_eth_tx_buffer_set_err_callback(struct rte_eth_dev_tx_buffer *buffer, 3680 buffer_tx_error_fn callback, void *userdata); 3681 3682 /** 3683 * Callback function for silently dropping unsent buffered packets. 3684 * 3685 * This function can be passed to rte_eth_tx_buffer_set_err_callback() to 3686 * adjust the default behavior when buffered packets cannot be sent. This 3687 * function drops any unsent packets silently and is used by Tx buffered 3688 * operations as default behavior. 3689 * 3690 * NOTE: this function should not be called directly, instead it should be used 3691 * as a callback for packet buffering. 3692 * 3693 * NOTE: when configuring this function as a callback with 3694 * rte_eth_tx_buffer_set_err_callback(), the final, userdata parameter 3695 * should point to an uint64_t value. 3696 * 3697 * @param pkts 3698 * The previously buffered packets which could not be sent 3699 * @param unsent 3700 * The number of unsent packets in the pkts array 3701 * @param userdata 3702 * Not used 3703 */ 3704 void 3705 rte_eth_tx_buffer_drop_callback(struct rte_mbuf **pkts, uint16_t unsent, 3706 void *userdata); 3707 3708 /** 3709 * Callback function for tracking unsent buffered packets. 3710 * 3711 * This function can be passed to rte_eth_tx_buffer_set_err_callback() to 3712 * adjust the default behavior when buffered packets cannot be sent. This 3713 * function drops any unsent packets, but also updates a user-supplied counter 3714 * to track the overall number of packets dropped. The counter should be an 3715 * uint64_t variable. 3716 * 3717 * NOTE: this function should not be called directly, instead it should be used 3718 * as a callback for packet buffering. 3719 * 3720 * NOTE: when configuring this function as a callback with 3721 * rte_eth_tx_buffer_set_err_callback(), the final, userdata parameter 3722 * should point to an uint64_t value. 3723 * 3724 * @param pkts 3725 * The previously buffered packets which could not be sent 3726 * @param unsent 3727 * The number of unsent packets in the pkts array 3728 * @param userdata 3729 * Pointer to an uint64_t value, which will be incremented by unsent 3730 */ 3731 void 3732 rte_eth_tx_buffer_count_callback(struct rte_mbuf **pkts, uint16_t unsent, 3733 void *userdata); 3734 3735 /** 3736 * Request the driver to free mbufs currently cached by the driver. The 3737 * driver will only free the mbuf if it is no longer in use. It is the 3738 * application's responsibility to ensure rte_eth_tx_buffer_flush(..) is 3739 * called if needed. 3740 * 3741 * @param port_id 3742 * The port identifier of the Ethernet device. 3743 * @param queue_id 3744 * The index of the transmit queue through which output packets must be 3745 * sent. 3746 * The value must be in the range [0, nb_tx_queue - 1] previously supplied 3747 * to rte_eth_dev_configure(). 3748 * @param free_cnt 3749 * Maximum number of packets to free. Use 0 to indicate all possible packets 3750 * should be freed. Note that a packet may be using multiple mbufs. 3751 * @return 3752 * Failure: < 0 3753 * -ENODEV: Invalid interface 3754 * -EIO: device is removed 3755 * -ENOTSUP: Driver does not support function 3756 * Success: >= 0 3757 * 0-n: Number of packets freed. More packets may still remain in ring that 3758 * are in use. 
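 *
 * A minimal sketch of periodic use on a Tx queue (illustrative only; the
 * free_cnt budget of 64 is an arbitrary application choice and
 * handle_error() is a hypothetical helper):
 *
 *   int ret = rte_eth_tx_done_cleanup(port_id, tx_queue_id, 64);
 *
 *   if (ret < 0 && ret != -ENOTSUP)
 *       handle_error(ret);
 *   // ret >= 0 is the number of packets whose mbufs were freed back to
 *   // their mempools; more may still be held in the ring by the hardware.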
3759 */ 3760 int 3761 rte_eth_tx_done_cleanup(uint16_t port_id, uint16_t queue_id, uint32_t free_cnt); 3762 3763 /** 3764 * Subtypes for IPsec offload event(@ref RTE_ETH_EVENT_IPSEC) raised by 3765 * eth device. 3766 */ 3767 enum rte_eth_event_ipsec_subtype { 3768 /** Unknown event type */ 3769 RTE_ETH_EVENT_IPSEC_UNKNOWN = 0, 3770 /** Sequence number overflow */ 3771 RTE_ETH_EVENT_IPSEC_ESN_OVERFLOW, 3772 /** Soft time expiry of SA */ 3773 RTE_ETH_EVENT_IPSEC_SA_TIME_EXPIRY, 3774 /** Soft byte expiry of SA */ 3775 RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY, 3776 /** Max value of this enum */ 3777 RTE_ETH_EVENT_IPSEC_MAX 3778 }; 3779 3780 /** 3781 * Descriptor for @ref RTE_ETH_EVENT_IPSEC event. Used by eth dev to send extra 3782 * information of the IPsec offload event. 3783 */ 3784 struct rte_eth_event_ipsec_desc { 3785 /** Type of RTE_ETH_EVENT_IPSEC_* event */ 3786 enum rte_eth_event_ipsec_subtype subtype; 3787 /** 3788 * Event specific metadata. 3789 * 3790 * For the following events, *userdata* registered 3791 * with the *rte_security_session* would be returned 3792 * as metadata, 3793 * 3794 * - @ref RTE_ETH_EVENT_IPSEC_ESN_OVERFLOW 3795 * - @ref RTE_ETH_EVENT_IPSEC_SA_TIME_EXPIRY 3796 * - @ref RTE_ETH_EVENT_IPSEC_SA_BYTE_EXPIRY 3797 * 3798 * @see struct rte_security_session_conf 3799 * 3800 */ 3801 uint64_t metadata; 3802 }; 3803 3804 /** 3805 * The eth device event type for interrupt, and maybe others in the future. 3806 */ 3807 enum rte_eth_event_type { 3808 RTE_ETH_EVENT_UNKNOWN, /**< unknown event type */ 3809 RTE_ETH_EVENT_INTR_LSC, /**< lsc interrupt event */ 3810 /** queue state event (enabled/disabled) */ 3811 RTE_ETH_EVENT_QUEUE_STATE, 3812 /** reset interrupt event, sent to VF on PF reset */ 3813 RTE_ETH_EVENT_INTR_RESET, 3814 RTE_ETH_EVENT_VF_MBOX, /**< message from the VF received by PF */ 3815 RTE_ETH_EVENT_MACSEC, /**< MACsec offload related event */ 3816 RTE_ETH_EVENT_INTR_RMV, /**< device removal event */ 3817 RTE_ETH_EVENT_NEW, /**< port is probed */ 3818 RTE_ETH_EVENT_DESTROY, /**< port is released */ 3819 RTE_ETH_EVENT_IPSEC, /**< IPsec offload related event */ 3820 RTE_ETH_EVENT_FLOW_AGED,/**< New aged-out flows is detected */ 3821 RTE_ETH_EVENT_MAX /**< max value of this enum */ 3822 }; 3823 3824 /** User application callback to be registered for interrupts. */ 3825 typedef int (*rte_eth_dev_cb_fn)(uint16_t port_id, 3826 enum rte_eth_event_type event, void *cb_arg, void *ret_param); 3827 3828 /** 3829 * Register a callback function for port event. 3830 * 3831 * @param port_id 3832 * Port ID. 3833 * RTE_ETH_ALL means register the event for all port ids. 3834 * @param event 3835 * Event interested. 3836 * @param cb_fn 3837 * User supplied callback function to be called. 3838 * @param cb_arg 3839 * Pointer to the parameters for the registered callback. 3840 * 3841 * @return 3842 * - On success, zero. 3843 * - On failure, a negative value. 3844 */ 3845 int rte_eth_dev_callback_register(uint16_t port_id, 3846 enum rte_eth_event_type event, 3847 rte_eth_dev_cb_fn cb_fn, void *cb_arg); 3848 3849 /** 3850 * Unregister a callback function for port event. 3851 * 3852 * @param port_id 3853 * Port ID. 3854 * RTE_ETH_ALL means unregister the event for all port ids. 3855 * @param event 3856 * Event interested. 3857 * @param cb_fn 3858 * User supplied callback function to be called. 3859 * @param cb_arg 3860 * Pointer to the parameters for the registered callback. -1 means to 3861 * remove all for the same callback address and same event. 
3862 * 3863 * @return 3864 * - On success, zero. 3865 * - On failure, a negative value. 3866 */ 3867 int rte_eth_dev_callback_unregister(uint16_t port_id, 3868 enum rte_eth_event_type event, 3869 rte_eth_dev_cb_fn cb_fn, void *cb_arg); 3870 3871 /** 3872 * When there is no Rx packet coming in Rx Queue for a long time, we can 3873 * sleep lcore related to Rx Queue for power saving, and enable Rx interrupt 3874 * to be triggered when Rx packet arrives. 3875 * 3876 * The rte_eth_dev_rx_intr_enable() function enables Rx queue 3877 * interrupt on specific Rx queue of a port. 3878 * 3879 * @param port_id 3880 * The port identifier of the Ethernet device. 3881 * @param queue_id 3882 * The index of the receive queue from which to retrieve input packets. 3883 * The value must be in the range [0, nb_rx_queue - 1] previously supplied 3884 * to rte_eth_dev_configure(). 3885 * @return 3886 * - (0) if successful. 3887 * - (-ENOTSUP) if underlying hardware OR driver doesn't support 3888 * that operation. 3889 * - (-ENODEV) if *port_id* invalid. 3890 * - (-EIO) if device is removed. 3891 */ 3892 int rte_eth_dev_rx_intr_enable(uint16_t port_id, uint16_t queue_id); 3893 3894 /** 3895 * When lcore wakes up from Rx interrupt indicating packet coming, disable Rx 3896 * interrupt and returns to polling mode. 3897 * 3898 * The rte_eth_dev_rx_intr_disable() function disables Rx queue 3899 * interrupt on specific Rx queue of a port. 3900 * 3901 * @param port_id 3902 * The port identifier of the Ethernet device. 3903 * @param queue_id 3904 * The index of the receive queue from which to retrieve input packets. 3905 * The value must be in the range [0, nb_rx_queue - 1] previously supplied 3906 * to rte_eth_dev_configure(). 3907 * @return 3908 * - (0) if successful. 3909 * - (-ENOTSUP) if underlying hardware OR driver doesn't support 3910 * that operation. 3911 * - (-ENODEV) if *port_id* invalid. 3912 * - (-EIO) if device is removed. 3913 */ 3914 int rte_eth_dev_rx_intr_disable(uint16_t port_id, uint16_t queue_id); 3915 3916 /** 3917 * Rx Interrupt control per port. 3918 * 3919 * @param port_id 3920 * The port identifier of the Ethernet device. 3921 * @param epfd 3922 * Epoll instance fd which the intr vector associated to. 3923 * Using RTE_EPOLL_PER_THREAD allows to use per thread epoll instance. 3924 * @param op 3925 * The operation be performed for the vector. 3926 * Operation type of {RTE_INTR_EVENT_ADD, RTE_INTR_EVENT_DEL}. 3927 * @param data 3928 * User raw data. 3929 * @return 3930 * - On success, zero. 3931 * - On failure, a negative value. 3932 */ 3933 int rte_eth_dev_rx_intr_ctl(uint16_t port_id, int epfd, int op, void *data); 3934 3935 /** 3936 * Rx Interrupt control per queue. 3937 * 3938 * @param port_id 3939 * The port identifier of the Ethernet device. 3940 * @param queue_id 3941 * The index of the receive queue from which to retrieve input packets. 3942 * The value must be in the range [0, nb_rx_queue - 1] previously supplied 3943 * to rte_eth_dev_configure(). 3944 * @param epfd 3945 * Epoll instance fd which the intr vector associated to. 3946 * Using RTE_EPOLL_PER_THREAD allows to use per thread epoll instance. 3947 * @param op 3948 * The operation be performed for the vector. 3949 * Operation type of {RTE_INTR_EVENT_ADD, RTE_INTR_EVENT_DEL}. 3950 * @param data 3951 * User raw data. 3952 * @return 3953 * - On success, zero. 3954 * - On failure, a negative value. 
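 *
 * A minimal sketch of arming the interrupt of one queue on the per-thread
 * epoll instance (illustrative only; the rte_epoll_wait() call that blocks
 * until traffic arrives is only indicated in a comment and handle_error()
 * is a hypothetical helper):
 *
 *   // Register the queue's interrupt vector with this lcore's epoll fd.
 *   if (rte_eth_dev_rx_intr_ctl_q(port_id, queue_id, RTE_EPOLL_PER_THREAD,
 *                                 RTE_INTR_EVENT_ADD, NULL) != 0)
 *       handle_error();
 *   rte_eth_dev_rx_intr_enable(port_id, queue_id);
 *   // ... sleep in rte_epoll_wait() until the interrupt fires ...
 *   rte_eth_dev_rx_intr_disable(port_id, queue_id);
 *   // Resume polling the queue with the Rx burst function.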
3955 */
3956 int rte_eth_dev_rx_intr_ctl_q(uint16_t port_id, uint16_t queue_id,
3957 int epfd, int op, void *data);
3958
3959 /**
3960 * Get interrupt fd per Rx queue.
3961 *
3962 * @param port_id
3963 * The port identifier of the Ethernet device.
3964 * @param queue_id
3965 * The index of the receive queue from which to retrieve input packets.
3966 * The value must be in the range [0, nb_rx_queue - 1] previously supplied
3967 * to rte_eth_dev_configure().
3968 * @return
3969 * - (>=0) the interrupt fd associated to the requested Rx queue if
3970 * successful.
3971 * - (-1) on error.
3972 */
3973 int
3974 rte_eth_dev_rx_intr_ctl_q_get_fd(uint16_t port_id, uint16_t queue_id);
3975
3976 /**
3977 * Turn on the LED on the Ethernet device.
3978 * This function turns on the LED on the Ethernet device.
3979 *
3980 * @param port_id
3981 * The port identifier of the Ethernet device.
3982 * @return
3983 * - (0) if successful.
3984 * - (-ENOTSUP) if underlying hardware OR driver doesn't support
3985 * that operation.
3986 * - (-ENODEV) if *port_id* invalid.
3987 * - (-EIO) if device is removed.
3988 */
3989 int rte_eth_led_on(uint16_t port_id);
3990
3991 /**
3992 * Turn off the LED on the Ethernet device.
3993 * This function turns off the LED on the Ethernet device.
3994 *
3995 * @param port_id
3996 * The port identifier of the Ethernet device.
3997 * @return
3998 * - (0) if successful.
3999 * - (-ENOTSUP) if underlying hardware OR driver doesn't support
4000 * that operation.
4001 * - (-ENODEV) if *port_id* invalid.
4002 * - (-EIO) if device is removed.
4003 */
4004 int rte_eth_led_off(uint16_t port_id);
4005
4006 /**
4007 * @warning
4008 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
4009 *
4010 * Get Forward Error Correction (FEC) capability.
4011 *
4012 * @param port_id
4013 * The port identifier of the Ethernet device.
4014 * @param speed_fec_capa
4015 * speed_fec_capa is an output-only array of per-speed FEC capabilities.
4016 * If set to NULL, the function returns the required number
4017 * of array entries.
4018 * @param num
4019 * The number of elements in the speed_fec_capa array.
4020 *
4021 * @return
4022 * - A non-negative value lower than or equal to num: success. The return value
4023 * is the number of entries filled in the fec capa array.
4024 * - A non-negative value higher than num: error, the given fec capa array
4025 * is too small. The return value corresponds to the num that should
4026 * be given to succeed. The entries in the fec capa array are not valid and
4027 * shall not be used by the caller.
4028 * - (-ENOTSUP) if underlying hardware OR driver doesn't support
4029 * that operation.
4030 * - (-EIO) if device is removed.
4031 * - (-ENODEV) if *port_id* invalid.
4032 * - (-EINVAL) if *num* or *speed_fec_capa* invalid.
4033 */
4034 __rte_experimental
4035 int rte_eth_fec_get_capability(uint16_t port_id,
4036 struct rte_eth_fec_capa *speed_fec_capa,
4037 unsigned int num);
4038
4039 /**
4040 * @warning
4041 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
4042 *
4043 * Get the current Forward Error Correction (FEC) mode.
4044 * If the link is down and AUTO is enabled, AUTO is returned; otherwise,
4045 * the configured FEC mode is returned.
4046 * If the link is up, the current FEC mode is returned.
4047 *
4048 * @param port_id
4049 * The port identifier of the Ethernet device.
4050 * @param fec_capa
4051 * A bitmask of enabled FEC modes. If the AUTO bit is set, other
4052 * bits specify FEC modes which may be negotiated. If the AUTO
If AUTO 4053 * bit is clear, specify FEC modes to be used (only one valid 4054 * mode per speed may be set). 4055 * @return 4056 * - (0) if successful. 4057 * - (-ENOTSUP) if underlying hardware OR driver doesn't support. 4058 * that operation. 4059 * - (-EIO) if device is removed. 4060 * - (-ENODEV) if *port_id* invalid. 4061 */ 4062 __rte_experimental 4063 int rte_eth_fec_get(uint16_t port_id, uint32_t *fec_capa); 4064 4065 /** 4066 * @warning 4067 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice 4068 * 4069 * Set Forward Error Correction(FEC) mode. 4070 * 4071 * @param port_id 4072 * The port identifier of the Ethernet device. 4073 * @param fec_capa 4074 * A bitmask of allowed FEC modes. If AUTO bit is set, other 4075 * bits specify FEC modes which may be negotiated. If AUTO 4076 * bit is clear, specify FEC modes to be used (only one valid 4077 * mode per speed may be set). 4078 * @return 4079 * - (0) if successful. 4080 * - (-EINVAL) if the FEC mode is not valid. 4081 * - (-ENOTSUP) if underlying hardware OR driver doesn't support. 4082 * - (-EIO) if device is removed. 4083 * - (-ENODEV) if *port_id* invalid. 4084 */ 4085 __rte_experimental 4086 int rte_eth_fec_set(uint16_t port_id, uint32_t fec_capa); 4087 4088 /** 4089 * Get current status of the Ethernet link flow control for Ethernet device 4090 * 4091 * @param port_id 4092 * The port identifier of the Ethernet device. 4093 * @param fc_conf 4094 * The pointer to the structure where to store the flow control parameters. 4095 * @return 4096 * - (0) if successful. 4097 * - (-ENOTSUP) if hardware doesn't support flow control. 4098 * - (-ENODEV) if *port_id* invalid. 4099 * - (-EIO) if device is removed. 4100 * - (-EINVAL) if bad parameter. 4101 */ 4102 int rte_eth_dev_flow_ctrl_get(uint16_t port_id, 4103 struct rte_eth_fc_conf *fc_conf); 4104 4105 /** 4106 * Configure the Ethernet link flow control for Ethernet device 4107 * 4108 * @param port_id 4109 * The port identifier of the Ethernet device. 4110 * @param fc_conf 4111 * The pointer to the structure of the flow control parameters. 4112 * @return 4113 * - (0) if successful. 4114 * - (-ENOTSUP) if hardware doesn't support flow control mode. 4115 * - (-ENODEV) if *port_id* invalid. 4116 * - (-EINVAL) if bad parameter 4117 * - (-EIO) if flow control setup failure or device is removed. 4118 */ 4119 int rte_eth_dev_flow_ctrl_set(uint16_t port_id, 4120 struct rte_eth_fc_conf *fc_conf); 4121 4122 /** 4123 * Configure the Ethernet priority flow control under DCB environment 4124 * for Ethernet device. 4125 * 4126 * @param port_id 4127 * The port identifier of the Ethernet device. 4128 * @param pfc_conf 4129 * The pointer to the structure of the priority flow control parameters. 4130 * @return 4131 * - (0) if successful. 4132 * - (-ENOTSUP) if hardware doesn't support priority flow control mode. 4133 * - (-ENODEV) if *port_id* invalid. 4134 * - (-EINVAL) if bad parameter 4135 * - (-EIO) if flow control setup failure or device is removed. 4136 */ 4137 int rte_eth_dev_priority_flow_ctrl_set(uint16_t port_id, 4138 struct rte_eth_pfc_conf *pfc_conf); 4139 4140 /** 4141 * Add a MAC address to the set used for filtering incoming packets. 4142 * 4143 * @param port_id 4144 * The port identifier of the Ethernet device. 4145 * @param mac_addr 4146 * The MAC address to add. 4147 * @param pool 4148 * VMDq pool index to associate address with (if VMDq is enabled). If VMDq is 4149 * not enabled, this should be set to 0. 
4150 * @return 4151 * - (0) if successfully added or *mac_addr* was already added. 4152 * - (-ENOTSUP) if hardware doesn't support this feature. 4153 * - (-ENODEV) if *port* is invalid. 4154 * - (-EIO) if device is removed. 4155 * - (-ENOSPC) if no more MAC addresses can be added. 4156 * - (-EINVAL) if MAC address is invalid. 4157 */ 4158 int rte_eth_dev_mac_addr_add(uint16_t port_id, struct rte_ether_addr *mac_addr, 4159 uint32_t pool); 4160 4161 /** 4162 * Remove a MAC address from the internal array of addresses. 4163 * 4164 * @param port_id 4165 * The port identifier of the Ethernet device. 4166 * @param mac_addr 4167 * MAC address to remove. 4168 * @return 4169 * - (0) if successful, or *mac_addr* didn't exist. 4170 * - (-ENOTSUP) if hardware doesn't support. 4171 * - (-ENODEV) if *port* invalid. 4172 * - (-EADDRINUSE) if attempting to remove the default MAC address. 4173 * - (-EINVAL) if MAC address is invalid. 4174 */ 4175 int rte_eth_dev_mac_addr_remove(uint16_t port_id, 4176 struct rte_ether_addr *mac_addr); 4177 4178 /** 4179 * Set the default MAC address. 4180 * 4181 * @param port_id 4182 * The port identifier of the Ethernet device. 4183 * @param mac_addr 4184 * New default MAC address. 4185 * @return 4186 * - (0) if successful, or *mac_addr* didn't exist. 4187 * - (-ENOTSUP) if hardware doesn't support. 4188 * - (-ENODEV) if *port* invalid. 4189 * - (-EINVAL) if MAC address is invalid. 4190 */ 4191 int rte_eth_dev_default_mac_addr_set(uint16_t port_id, 4192 struct rte_ether_addr *mac_addr); 4193 4194 /** 4195 * Update Redirection Table(RETA) of Receive Side Scaling of Ethernet device. 4196 * 4197 * @param port_id 4198 * The port identifier of the Ethernet device. 4199 * @param reta_conf 4200 * RETA to update. 4201 * @param reta_size 4202 * Redirection table size. The table size can be queried by 4203 * rte_eth_dev_info_get(). 4204 * @return 4205 * - (0) if successful. 4206 * - (-ENODEV) if *port_id* is invalid. 4207 * - (-ENOTSUP) if hardware doesn't support. 4208 * - (-EINVAL) if bad parameter. 4209 * - (-EIO) if device is removed. 4210 */ 4211 int rte_eth_dev_rss_reta_update(uint16_t port_id, 4212 struct rte_eth_rss_reta_entry64 *reta_conf, 4213 uint16_t reta_size); 4214 4215 /** 4216 * Query Redirection Table(RETA) of Receive Side Scaling of Ethernet device. 4217 * 4218 * @param port_id 4219 * The port identifier of the Ethernet device. 4220 * @param reta_conf 4221 * RETA to query. For each requested reta entry, corresponding bit 4222 * in mask must be set. 4223 * @param reta_size 4224 * Redirection table size. The table size can be queried by 4225 * rte_eth_dev_info_get(). 4226 * @return 4227 * - (0) if successful. 4228 * - (-ENODEV) if *port_id* is invalid. 4229 * - (-ENOTSUP) if hardware doesn't support. 4230 * - (-EINVAL) if bad parameter. 4231 * - (-EIO) if device is removed. 4232 */ 4233 int rte_eth_dev_rss_reta_query(uint16_t port_id, 4234 struct rte_eth_rss_reta_entry64 *reta_conf, 4235 uint16_t reta_size); 4236 4237 /** 4238 * Updates unicast hash table for receiving packet with the given destination 4239 * MAC address, and the packet is routed to all VFs for which the Rx mode is 4240 * accept packets that match the unicast hash table. 4241 * 4242 * @param port_id 4243 * The port identifier of the Ethernet device. 4244 * @param addr 4245 * Unicast MAC address. 4246 * @param on 4247 * 1 - Set an unicast hash bit for receiving packets with the MAC address. 4248 * 0 - Clear an unicast hash bit. 4249 * @return 4250 * - (0) if successful. 
4251 * - (-ENOTSUP) if hardware doesn't support. 4252 * - (-ENODEV) if *port_id* invalid. 4253 * - (-EIO) if device is removed. 4254 * - (-EINVAL) if bad parameter. 4255 */ 4256 int rte_eth_dev_uc_hash_table_set(uint16_t port_id, struct rte_ether_addr *addr, 4257 uint8_t on); 4258 4259 /** 4260 * Updates all unicast hash bitmaps for receiving packet with any Unicast 4261 * Ethernet MAC addresses,the packet is routed to all VFs for which the Rx 4262 * mode is accept packets that match the unicast hash table. 4263 * 4264 * @param port_id 4265 * The port identifier of the Ethernet device. 4266 * @param on 4267 * 1 - Set all unicast hash bitmaps for receiving all the Ethernet 4268 * MAC addresses 4269 * 0 - Clear all unicast hash bitmaps 4270 * @return 4271 * - (0) if successful. 4272 * - (-ENOTSUP) if hardware doesn't support. 4273 * - (-ENODEV) if *port_id* invalid. 4274 * - (-EIO) if device is removed. 4275 * - (-EINVAL) if bad parameter. 4276 */ 4277 int rte_eth_dev_uc_all_hash_table_set(uint16_t port_id, uint8_t on); 4278 4279 /** 4280 * Set the rate limitation for a queue on an Ethernet device. 4281 * 4282 * @param port_id 4283 * The port identifier of the Ethernet device. 4284 * @param queue_idx 4285 * The queue ID. 4286 * @param tx_rate 4287 * The Tx rate in Mbps. Allocated from the total port link speed. 4288 * @return 4289 * - (0) if successful. 4290 * - (-ENOTSUP) if hardware doesn't support this feature. 4291 * - (-ENODEV) if *port_id* invalid. 4292 * - (-EIO) if device is removed. 4293 * - (-EINVAL) if bad parameter. 4294 */ 4295 int rte_eth_set_queue_rate_limit(uint16_t port_id, uint16_t queue_idx, 4296 uint16_t tx_rate); 4297 4298 /** 4299 * Configuration of Receive Side Scaling hash computation of Ethernet device. 4300 * 4301 * @param port_id 4302 * The port identifier of the Ethernet device. 4303 * @param rss_conf 4304 * The new configuration to use for RSS hash computation on the port. 4305 * @return 4306 * - (0) if successful. 4307 * - (-ENODEV) if port identifier is invalid. 4308 * - (-EIO) if device is removed. 4309 * - (-ENOTSUP) if hardware doesn't support. 4310 * - (-EINVAL) if bad parameter. 4311 */ 4312 int rte_eth_dev_rss_hash_update(uint16_t port_id, 4313 struct rte_eth_rss_conf *rss_conf); 4314 4315 /** 4316 * Retrieve current configuration of Receive Side Scaling hash computation 4317 * of Ethernet device. 4318 * 4319 * @param port_id 4320 * The port identifier of the Ethernet device. 4321 * @param rss_conf 4322 * Where to store the current RSS hash configuration of the Ethernet device. 4323 * @return 4324 * - (0) if successful. 4325 * - (-ENODEV) if port identifier is invalid. 4326 * - (-EIO) if device is removed. 4327 * - (-ENOTSUP) if hardware doesn't support RSS. 4328 * - (-EINVAL) if bad parameter. 4329 */ 4330 int 4331 rte_eth_dev_rss_hash_conf_get(uint16_t port_id, 4332 struct rte_eth_rss_conf *rss_conf); 4333 4334 /** 4335 * Add UDP tunneling port for a type of tunnel. 4336 * 4337 * Some NICs may require such configuration to properly parse a tunnel 4338 * with any standard or custom UDP port. 4339 * The packets with this UDP port will be parsed for this type of tunnel. 4340 * The device parser will also check the rest of the tunnel headers 4341 * before classifying the packet. 4342 * 4343 * With some devices, this API will affect packet classification, i.e.: 4344 * - mbuf.packet_type reported on Rx 4345 * - rte_flow rules with tunnel items 4346 * 4347 * @param port_id 4348 * The port identifier of the Ethernet device. 
4349 * @param tunnel_udp 4350 * UDP tunneling configuration. 4351 * 4352 * @return 4353 * - (0) if successful. 4354 * - (-ENODEV) if port identifier is invalid. 4355 * - (-EIO) if device is removed. 4356 * - (-ENOTSUP) if hardware doesn't support tunnel type. 4357 */ 4358 int 4359 rte_eth_dev_udp_tunnel_port_add(uint16_t port_id, 4360 struct rte_eth_udp_tunnel *tunnel_udp); 4361 4362 /** 4363 * Delete UDP tunneling port for a type of tunnel. 4364 * 4365 * The packets with this UDP port will not be classified as this type of tunnel 4366 * anymore if the device use such mapping for tunnel packet classification. 4367 * 4368 * @see rte_eth_dev_udp_tunnel_port_add 4369 * 4370 * @param port_id 4371 * The port identifier of the Ethernet device. 4372 * @param tunnel_udp 4373 * UDP tunneling configuration. 4374 * 4375 * @return 4376 * - (0) if successful. 4377 * - (-ENODEV) if port identifier is invalid. 4378 * - (-EIO) if device is removed. 4379 * - (-ENOTSUP) if hardware doesn't support tunnel type. 4380 */ 4381 int 4382 rte_eth_dev_udp_tunnel_port_delete(uint16_t port_id, 4383 struct rte_eth_udp_tunnel *tunnel_udp); 4384 4385 /** 4386 * Get DCB information on an Ethernet device. 4387 * 4388 * @param port_id 4389 * The port identifier of the Ethernet device. 4390 * @param dcb_info 4391 * DCB information. 4392 * @return 4393 * - (0) if successful. 4394 * - (-ENODEV) if port identifier is invalid. 4395 * - (-EIO) if device is removed. 4396 * - (-ENOTSUP) if hardware doesn't support. 4397 * - (-EINVAL) if bad parameter. 4398 */ 4399 int rte_eth_dev_get_dcb_info(uint16_t port_id, 4400 struct rte_eth_dcb_info *dcb_info); 4401 4402 struct rte_eth_rxtx_callback; 4403 4404 /** 4405 * Add a callback to be called on packet Rx on a given port and queue. 4406 * 4407 * This API configures a function to be called for each burst of 4408 * packets received on a given NIC port queue. The return value is a pointer 4409 * that can be used to later remove the callback using 4410 * rte_eth_remove_rx_callback(). 4411 * 4412 * Multiple functions are called in the order that they are added. 4413 * 4414 * @param port_id 4415 * The port identifier of the Ethernet device. 4416 * @param queue_id 4417 * The queue on the Ethernet device on which the callback is to be added. 4418 * @param fn 4419 * The callback function 4420 * @param user_param 4421 * A generic pointer parameter which will be passed to each invocation of the 4422 * callback function on this port and queue. Inter-thread synchronization 4423 * of any user data changes is the responsibility of the user. 4424 * 4425 * @return 4426 * NULL on error. 4427 * On success, a pointer value which can later be used to remove the callback. 4428 */ 4429 const struct rte_eth_rxtx_callback * 4430 rte_eth_add_rx_callback(uint16_t port_id, uint16_t queue_id, 4431 rte_rx_callback_fn fn, void *user_param); 4432 4433 /** 4434 * Add a callback that must be called first on packet Rx on a given port 4435 * and queue. 4436 * 4437 * This API configures a first function to be called for each burst of 4438 * packets received on a given NIC port queue. The return value is a pointer 4439 * that can be used to later remove the callback using 4440 * rte_eth_remove_rx_callback(). 4441 * 4442 * Multiple functions are called in the order that they are added. 4443 * 4444 * @param port_id 4445 * The port identifier of the Ethernet device. 4446 * @param queue_id 4447 * The queue on the Ethernet device on which the callback is to be added. 
 * @param fn
 *   The callback function
 * @param user_param
 *   A generic pointer parameter which will be passed to each invocation of the
 *   callback function on this port and queue. Inter-thread synchronization
 *   of any user data changes is the responsibility of the user.
 *
 * @return
 *   NULL on error.
 *   On success, a pointer value which can later be used to remove the callback.
 */
const struct rte_eth_rxtx_callback *
rte_eth_add_first_rx_callback(uint16_t port_id, uint16_t queue_id,
		rte_rx_callback_fn fn, void *user_param);

/**
 * Add a callback to be called on packet Tx on a given port and queue.
 *
 * This API configures a function to be called for each burst of
 * packets sent on a given NIC port queue. The return value is a pointer
 * that can be used to later remove the callback using
 * rte_eth_remove_tx_callback().
 *
 * Multiple functions are called in the order that they are added.
 *
 * @param port_id
 *   The port identifier of the Ethernet device.
 * @param queue_id
 *   The queue on the Ethernet device on which the callback is to be added.
 * @param fn
 *   The callback function
 * @param user_param
 *   A generic pointer parameter which will be passed to each invocation of the
 *   callback function on this port and queue. Inter-thread synchronization
 *   of any user data changes is the responsibility of the user.
 *
 * @return
 *   NULL on error.
 *   On success, a pointer value which can later be used to remove the callback.
 */
const struct rte_eth_rxtx_callback *
rte_eth_add_tx_callback(uint16_t port_id, uint16_t queue_id,
		rte_tx_callback_fn fn, void *user_param);

/**
 * Remove an Rx packet callback from a given port and queue.
 *
 * This function is used to remove callbacks that were added to a NIC port
 * queue using rte_eth_add_rx_callback().
 *
 * Note: the callback is removed from the callback list but it isn't freed
 * since it may still be in use. The memory for the callback can be
 * subsequently freed by the application by calling rte_free():
 *
 * - Immediately - if the port is stopped, or the user knows that no
 *   callbacks are in flight e.g. if called from the thread doing Rx/Tx
 *   on that queue.
 *
 * - After a short delay - where the delay is sufficient to allow any
 *   in-flight callbacks to complete. Alternately, the RCU mechanism can be
 *   used to detect when data plane threads have ceased referencing the
 *   callback memory.
 *
 * @param port_id
 *   The port identifier of the Ethernet device.
 * @param queue_id
 *   The queue on the Ethernet device from which the callback is to be removed.
 * @param user_cb
 *   User supplied callback created via rte_eth_add_rx_callback().
 *
 * @return
 *   - 0: Success. Callback was removed.
 *   - -ENODEV:  If *port_id* is invalid.
 *   - -ENOTSUP: Callback support is not available.
 *   - -EINVAL:  The queue_id is out of range, or the callback
 *     is NULL or not found for the port/queue.
 */
int rte_eth_remove_rx_callback(uint16_t port_id, uint16_t queue_id,
		const struct rte_eth_rxtx_callback *user_cb);

/**
 * Remove a Tx packet callback from a given port and queue.
 *
 * This function is used to remove callbacks that were added to a NIC port
 * queue using rte_eth_add_tx_callback().
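 *
 * As an illustrative sketch (not a normative pattern), the pointer returned
 * by rte_eth_add_tx_callback() is kept by the application and handed back
 * here at teardown time; *my_tx_fn* below is an application-defined
 * rte_tx_callback_fn and is only an example name:
 *
 *     static const struct rte_eth_rxtx_callback *tx_cb;
 *     ...
 *     tx_cb = rte_eth_add_tx_callback(port_id, queue_id, my_tx_fn, NULL);
 *     ...
 *     rte_eth_remove_tx_callback(port_id, queue_id, tx_cb);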
 *
 * Note: the callback is removed from the callback list but it isn't freed
 * since it may still be in use. The memory for the callback can be
 * subsequently freed by the application by calling rte_free():
 *
 * - Immediately - if the port is stopped, or the user knows that no
 *   callbacks are in flight e.g. if called from the thread doing Rx/Tx
 *   on that queue.
 *
 * - After a short delay - where the delay is sufficient to allow any
 *   in-flight callbacks to complete. Alternately, the RCU mechanism can be
 *   used to detect when data plane threads have ceased referencing the
 *   callback memory.
 *
 * @param port_id
 *   The port identifier of the Ethernet device.
 * @param queue_id
 *   The queue on the Ethernet device from which the callback is to be removed.
 * @param user_cb
 *   User supplied callback created via rte_eth_add_tx_callback().
 *
 * @return
 *   - 0: Success. Callback was removed.
 *   - -ENODEV:  If *port_id* is invalid.
 *   - -ENOTSUP: Callback support is not available.
 *   - -EINVAL:  The queue_id is out of range, or the callback
 *     is NULL or not found for the port/queue.
 */
int rte_eth_remove_tx_callback(uint16_t port_id, uint16_t queue_id,
		const struct rte_eth_rxtx_callback *user_cb);

/**
 * Retrieve information about a given port's Rx queue.
 *
 * @param port_id
 *   The port identifier of the Ethernet device.
 * @param queue_id
 *   The Rx queue on the Ethernet device for which information
 *   will be retrieved.
 * @param qinfo
 *   A pointer to a structure of type *rte_eth_rxq_info* to be filled with
 *   the information of the Rx queue.
 *
 * @return
 *   - 0: Success
 *   - -ENODEV:  If *port_id* is invalid.
 *   - -ENOTSUP: routine is not supported by the device PMD.
 *   - -EINVAL:  The queue_id is out of range, or the queue
 *     is a hairpin queue.
 */
int rte_eth_rx_queue_info_get(uint16_t port_id, uint16_t queue_id,
		struct rte_eth_rxq_info *qinfo);

/**
 * Retrieve information about a given port's Tx queue.
 *
 * @param port_id
 *   The port identifier of the Ethernet device.
 * @param queue_id
 *   The Tx queue on the Ethernet device for which information
 *   will be retrieved.
 * @param qinfo
 *   A pointer to a structure of type *rte_eth_txq_info* to be filled with
 *   the information of the Tx queue.
 *
 * @return
 *   - 0: Success
 *   - -ENODEV:  If *port_id* is invalid.
 *   - -ENOTSUP: routine is not supported by the device PMD.
 *   - -EINVAL:  The queue_id is out of range, or the queue
 *     is a hairpin queue.
 */
int rte_eth_tx_queue_info_get(uint16_t port_id, uint16_t queue_id,
		struct rte_eth_txq_info *qinfo);

/**
 * Retrieve information about the Rx packet burst mode.
 *
 * @param port_id
 *   The port identifier of the Ethernet device.
 * @param queue_id
 *   The Rx queue on the Ethernet device for which information
 *   will be retrieved.
 * @param mode
 *   A pointer to a structure of type *rte_eth_burst_mode* to be filled
 *   with the information of the packet burst mode.
 *
 * @return
 *   - 0: Success
 *   - -ENODEV:  If *port_id* is invalid.
 *   - -ENOTSUP: routine is not supported by the device PMD.
 *   - -EINVAL:  The queue_id is out of range.
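 *
 * A minimal usage sketch (assuming the port and queue are already
 * configured; error handling omitted):
 *
 *     struct rte_eth_burst_mode mode;
 *     if (rte_eth_rx_burst_mode_get(port_id, queue_id, &mode) == 0)
 *         printf("Rx burst mode: %s\n", mode.info);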
4625 */ 4626 int rte_eth_rx_burst_mode_get(uint16_t port_id, uint16_t queue_id, 4627 struct rte_eth_burst_mode *mode); 4628 4629 /** 4630 * Retrieve information about the Tx packet burst mode. 4631 * 4632 * @param port_id 4633 * The port identifier of the Ethernet device. 4634 * @param queue_id 4635 * The Tx queue on the Ethernet device for which information 4636 * will be retrieved. 4637 * @param mode 4638 * A pointer to a structure of type *rte_eth_burst_mode* to be filled 4639 * with the information of the packet burst mode. 4640 * 4641 * @return 4642 * - 0: Success 4643 * - -ENODEV: If *port_id* is invalid. 4644 * - -ENOTSUP: routine is not supported by the device PMD. 4645 * - -EINVAL: The queue_id is out of range. 4646 */ 4647 int rte_eth_tx_burst_mode_get(uint16_t port_id, uint16_t queue_id, 4648 struct rte_eth_burst_mode *mode); 4649 4650 /** 4651 * @warning 4652 * @b EXPERIMENTAL: this API may change without prior notice. 4653 * 4654 * Retrieve the monitor condition for a given receive queue. 4655 * 4656 * @param port_id 4657 * The port identifier of the Ethernet device. 4658 * @param queue_id 4659 * The Rx queue on the Ethernet device for which information 4660 * will be retrieved. 4661 * @param pmc 4662 * The pointer to power-optimized monitoring condition structure. 4663 * 4664 * @return 4665 * - 0: Success. 4666 * -ENOTSUP: Operation not supported. 4667 * -EINVAL: Invalid parameters. 4668 * -ENODEV: Invalid port ID. 4669 */ 4670 __rte_experimental 4671 int rte_eth_get_monitor_addr(uint16_t port_id, uint16_t queue_id, 4672 struct rte_power_monitor_cond *pmc); 4673 4674 /** 4675 * Retrieve device registers and register attributes (number of registers and 4676 * register size) 4677 * 4678 * @param port_id 4679 * The port identifier of the Ethernet device. 4680 * @param info 4681 * Pointer to rte_dev_reg_info structure to fill in. If info->data is 4682 * NULL the function fills in the width and length fields. If non-NULL 4683 * the registers are put into the buffer pointed at by the data field. 4684 * @return 4685 * - (0) if successful. 4686 * - (-ENOTSUP) if hardware doesn't support. 4687 * - (-EINVAL) if bad parameter. 4688 * - (-ENODEV) if *port_id* invalid. 4689 * - (-EIO) if device is removed. 4690 * - others depends on the specific operations implementation. 4691 */ 4692 int rte_eth_dev_get_reg_info(uint16_t port_id, struct rte_dev_reg_info *info); 4693 4694 /** 4695 * Retrieve size of device EEPROM 4696 * 4697 * @param port_id 4698 * The port identifier of the Ethernet device. 4699 * @return 4700 * - (>=0) EEPROM size if successful. 4701 * - (-ENOTSUP) if hardware doesn't support. 4702 * - (-ENODEV) if *port_id* invalid. 4703 * - (-EIO) if device is removed. 4704 * - others depends on the specific operations implementation. 4705 */ 4706 int rte_eth_dev_get_eeprom_length(uint16_t port_id); 4707 4708 /** 4709 * Retrieve EEPROM and EEPROM attribute 4710 * 4711 * @param port_id 4712 * The port identifier of the Ethernet device. 4713 * @param info 4714 * The template includes buffer for return EEPROM data and 4715 * EEPROM attributes to be filled. 4716 * @return 4717 * - (0) if successful. 4718 * - (-ENOTSUP) if hardware doesn't support. 4719 * - (-EINVAL) if bad parameter. 4720 * - (-ENODEV) if *port_id* invalid. 4721 * - (-EIO) if device is removed. 4722 * - others depends on the specific operations implementation. 
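 *
 * A minimal read sketch (illustrative only; assumes the device exposes an
 * EEPROM and that malloc() succeeds):
 *
 *     int len = rte_eth_dev_get_eeprom_length(port_id);
 *     if (len > 0) {
 *         struct rte_dev_eeprom_info info = { .data = malloc(len), .length = len };
 *         if (rte_eth_dev_get_eeprom(port_id, &info) == 0)
 *             ... info.data now holds len bytes of EEPROM content ...
 *         free(info.data);
 *     }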
4723 */ 4724 int rte_eth_dev_get_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info); 4725 4726 /** 4727 * Program EEPROM with provided data 4728 * 4729 * @param port_id 4730 * The port identifier of the Ethernet device. 4731 * @param info 4732 * The template includes EEPROM data for programming and 4733 * EEPROM attributes to be filled 4734 * @return 4735 * - (0) if successful. 4736 * - (-ENOTSUP) if hardware doesn't support. 4737 * - (-ENODEV) if *port_id* invalid. 4738 * - (-EINVAL) if bad parameter. 4739 * - (-EIO) if device is removed. 4740 * - others depends on the specific operations implementation. 4741 */ 4742 int rte_eth_dev_set_eeprom(uint16_t port_id, struct rte_dev_eeprom_info *info); 4743 4744 /** 4745 * @warning 4746 * @b EXPERIMENTAL: this API may change without prior notice. 4747 * 4748 * Retrieve the type and size of plugin module EEPROM 4749 * 4750 * @param port_id 4751 * The port identifier of the Ethernet device. 4752 * @param modinfo 4753 * The type and size of plugin module EEPROM. 4754 * @return 4755 * - (0) if successful. 4756 * - (-ENOTSUP) if hardware doesn't support. 4757 * - (-ENODEV) if *port_id* invalid. 4758 * - (-EINVAL) if bad parameter. 4759 * - (-EIO) if device is removed. 4760 * - others depends on the specific operations implementation. 4761 */ 4762 __rte_experimental 4763 int 4764 rte_eth_dev_get_module_info(uint16_t port_id, 4765 struct rte_eth_dev_module_info *modinfo); 4766 4767 /** 4768 * @warning 4769 * @b EXPERIMENTAL: this API may change without prior notice. 4770 * 4771 * Retrieve the data of plugin module EEPROM 4772 * 4773 * @param port_id 4774 * The port identifier of the Ethernet device. 4775 * @param info 4776 * The template includes the plugin module EEPROM attributes, and the 4777 * buffer for return plugin module EEPROM data. 4778 * @return 4779 * - (0) if successful. 4780 * - (-ENOTSUP) if hardware doesn't support. 4781 * - (-EINVAL) if bad parameter. 4782 * - (-ENODEV) if *port_id* invalid. 4783 * - (-EIO) if device is removed. 4784 * - others depends on the specific operations implementation. 4785 */ 4786 __rte_experimental 4787 int 4788 rte_eth_dev_get_module_eeprom(uint16_t port_id, 4789 struct rte_dev_eeprom_info *info); 4790 4791 /** 4792 * Set the list of multicast addresses to filter on an Ethernet device. 4793 * 4794 * @param port_id 4795 * The port identifier of the Ethernet device. 4796 * @param mc_addr_set 4797 * The array of multicast addresses to set. Equal to NULL when the function 4798 * is invoked to flush the set of filtered addresses. 4799 * @param nb_mc_addr 4800 * The number of multicast addresses in the *mc_addr_set* array. Equal to 0 4801 * when the function is invoked to flush the set of filtered addresses. 4802 * @return 4803 * - (0) if successful. 4804 * - (-ENODEV) if *port_id* invalid. 4805 * - (-EIO) if device is removed. 4806 * - (-ENOTSUP) if PMD of *port_id* doesn't support multicast filtering. 4807 * - (-ENOSPC) if *port_id* has not enough multicast filtering resources. 4808 * - (-EINVAL) if bad parameter. 4809 */ 4810 int rte_eth_dev_set_mc_addr_list(uint16_t port_id, 4811 struct rte_ether_addr *mc_addr_set, 4812 uint32_t nb_mc_addr); 4813 4814 /** 4815 * Enable IEEE1588/802.1AS timestamping for an Ethernet device. 4816 * 4817 * @param port_id 4818 * The port identifier of the Ethernet device. 4819 * 4820 * @return 4821 * - 0: Success. 4822 * - -ENODEV: The port ID is invalid. 4823 * - -EIO: if device is removed. 4824 * - -ENOTSUP: The function is not supported by the Ethernet driver. 
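 *
 * A typical (simplified) sequence is to enable timestamping once the port is
 * started and then poll for latched PTP timestamps, e.g.:
 *
 *     rte_eth_timesync_enable(port_id);
 *     ...
 *     struct timespec ts;
 *     if (rte_eth_timesync_read_rx_timestamp(port_id, &ts, 0) == 0)
 *         ... ts holds the Rx timestamp of the last timestamped PTP packet ...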
4825 */ 4826 int rte_eth_timesync_enable(uint16_t port_id); 4827 4828 /** 4829 * Disable IEEE1588/802.1AS timestamping for an Ethernet device. 4830 * 4831 * @param port_id 4832 * The port identifier of the Ethernet device. 4833 * 4834 * @return 4835 * - 0: Success. 4836 * - -ENODEV: The port ID is invalid. 4837 * - -EIO: if device is removed. 4838 * - -ENOTSUP: The function is not supported by the Ethernet driver. 4839 */ 4840 int rte_eth_timesync_disable(uint16_t port_id); 4841 4842 /** 4843 * Read an IEEE1588/802.1AS Rx timestamp from an Ethernet device. 4844 * 4845 * @param port_id 4846 * The port identifier of the Ethernet device. 4847 * @param timestamp 4848 * Pointer to the timestamp struct. 4849 * @param flags 4850 * Device specific flags. Used to pass the Rx timesync register index to 4851 * i40e. Unused in igb/ixgbe, pass 0 instead. 4852 * 4853 * @return 4854 * - 0: Success. 4855 * - -EINVAL: No timestamp is available. 4856 * - -ENODEV: The port ID is invalid. 4857 * - -EIO: if device is removed. 4858 * - -ENOTSUP: The function is not supported by the Ethernet driver. 4859 */ 4860 int rte_eth_timesync_read_rx_timestamp(uint16_t port_id, 4861 struct timespec *timestamp, uint32_t flags); 4862 4863 /** 4864 * Read an IEEE1588/802.1AS Tx timestamp from an Ethernet device. 4865 * 4866 * @param port_id 4867 * The port identifier of the Ethernet device. 4868 * @param timestamp 4869 * Pointer to the timestamp struct. 4870 * 4871 * @return 4872 * - 0: Success. 4873 * - -EINVAL: No timestamp is available. 4874 * - -ENODEV: The port ID is invalid. 4875 * - -EIO: if device is removed. 4876 * - -ENOTSUP: The function is not supported by the Ethernet driver. 4877 */ 4878 int rte_eth_timesync_read_tx_timestamp(uint16_t port_id, 4879 struct timespec *timestamp); 4880 4881 /** 4882 * Adjust the timesync clock on an Ethernet device. 4883 * 4884 * This is usually used in conjunction with other Ethdev timesync functions to 4885 * synchronize the device time using the IEEE1588/802.1AS protocol. 4886 * 4887 * @param port_id 4888 * The port identifier of the Ethernet device. 4889 * @param delta 4890 * The adjustment in nanoseconds. 4891 * 4892 * @return 4893 * - 0: Success. 4894 * - -ENODEV: The port ID is invalid. 4895 * - -EIO: if device is removed. 4896 * - -ENOTSUP: The function is not supported by the Ethernet driver. 4897 */ 4898 int rte_eth_timesync_adjust_time(uint16_t port_id, int64_t delta); 4899 4900 /** 4901 * Read the time from the timesync clock on an Ethernet device. 4902 * 4903 * This is usually used in conjunction with other Ethdev timesync functions to 4904 * synchronize the device time using the IEEE1588/802.1AS protocol. 4905 * 4906 * @param port_id 4907 * The port identifier of the Ethernet device. 4908 * @param time 4909 * Pointer to the timespec struct that holds the time. 4910 * 4911 * @return 4912 * - 0: Success. 4913 * - -EINVAL: Bad parameter. 4914 */ 4915 int rte_eth_timesync_read_time(uint16_t port_id, struct timespec *time); 4916 4917 /** 4918 * Set the time of the timesync clock on an Ethernet device. 4919 * 4920 * This is usually used in conjunction with other Ethdev timesync functions to 4921 * synchronize the device time using the IEEE1588/802.1AS protocol. 4922 * 4923 * @param port_id 4924 * The port identifier of the Ethernet device. 4925 * @param time 4926 * Pointer to the timespec struct that holds the time. 4927 * 4928 * @return 4929 * - 0: Success. 4930 * - -EINVAL: No timestamp is available. 4931 * - -ENODEV: The port ID is invalid. 
 *   - -EIO: if device is removed.
 *   - -ENOTSUP: The function is not supported by the Ethernet driver.
 */
int rte_eth_timesync_write_time(uint16_t port_id, const struct timespec *time);

/**
 * @warning
 * @b EXPERIMENTAL: this API may change without prior notice.
 *
 * Read the current clock counter of an Ethernet device
 *
 * This returns the current raw clock value of an Ethernet device. It is
 * a raw amount of ticks, with no given time reference.
 * The value returned here is from the same clock as the one that fills
 * the timestamp field of Rx packets when using hardware timestamp offload.
 * Therefore it can be used to compute a precise conversion of the device
 * clock to real time.
 *
 * E.g., a simple heuristic to derive the frequency would be:
 *     uint64_t start, end;
 *     rte_eth_read_clock(port, &start);
 *     rte_delay_ms(100);
 *     rte_eth_read_clock(port, &end);
 *     double freq = (end - start) * 10;
 *
 * Compute a common reference with:
 *     uint64_t base_time_sec = current_time();
 *     uint64_t base_clock;
 *     rte_eth_read_clock(port, &base_clock);
 *
 * Then, convert the raw mbuf timestamp with:
 *     base_time_sec + (double)(*timestamp_dynfield(mbuf) - base_clock) / freq;
 *
 * This simple example will not provide very good accuracy. To improve it,
 * the frequency should be measured several times and a regression performed.
 * To avoid drifting from the system time, the common reference can be
 * refreshed from time to time. The integer division can also be replaced
 * by a multiplication and a shift for better performance.
 *
 * @param port_id
 *   The port identifier of the Ethernet device.
 * @param clock
 *   Pointer to the uint64_t that holds the raw clock value.
 *
 * @return
 *   - 0: Success.
 *   - -ENODEV: The port ID is invalid.
 *   - -ENOTSUP: The function is not supported by the Ethernet driver.
 *   - -EINVAL: if bad parameter.
 */
__rte_experimental
int
rte_eth_read_clock(uint16_t port_id, uint64_t *clock);

/**
 * Get the port ID from device name. The device name should be specified
 * as below:
 * - PCIe address (Domain:Bus:Device.Function), for example: 0000:2:00.0
 * - SoC device name, for example: fsl-gmac0
 * - vdev dpdk name, for example: net_[pcap0|null0|tap0]
 *
 * @param name
 *   PCI address or name of the device.
 * @param port_id
 *   Pointer to the port identifier of the device.
 * @return
 *   - (0) if successful and port_id is filled.
 *   - (-ENODEV or -EINVAL) on failure.
 */
int
rte_eth_dev_get_port_by_name(const char *name, uint16_t *port_id);

/**
 * Get the device name from port ID. The device name is specified as below:
 * - PCIe address (Domain:Bus:Device.Function), for example: 0000:02:00.0
 * - SoC device name, for example: fsl-gmac0
 * - vdev dpdk name, for example: net_[pcap0|null0|tun0|tap0]
 *
 * @param port_id
 *   Port identifier of the device.
 * @param name
 *   Buffer of size RTE_ETH_NAME_MAX_LEN to store the name.
 * @return
 *   - (0) if successful.
 *   - (-ENODEV) if *port_id* is invalid.
 *   - (-EINVAL) on failure.
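 *
 * For example (sketch only; assumes *port_id* designates a valid device):
 *
 *     char name[RTE_ETH_NAME_MAX_LEN];
 *     if (rte_eth_dev_get_name_by_port(port_id, name) == 0)
 *         printf("port %u is %s\n", port_id, name);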
5018 */ 5019 int 5020 rte_eth_dev_get_name_by_port(uint16_t port_id, char *name); 5021 5022 /** 5023 * Check that numbers of Rx and Tx descriptors satisfy descriptors limits from 5024 * the Ethernet device information, otherwise adjust them to boundaries. 5025 * 5026 * @param port_id 5027 * The port identifier of the Ethernet device. 5028 * @param nb_rx_desc 5029 * A pointer to a uint16_t where the number of receive 5030 * descriptors stored. 5031 * @param nb_tx_desc 5032 * A pointer to a uint16_t where the number of transmit 5033 * descriptors stored. 5034 * @return 5035 * - (0) if successful. 5036 * - (-ENOTSUP, -ENODEV or -EINVAL) on failure. 5037 */ 5038 int rte_eth_dev_adjust_nb_rx_tx_desc(uint16_t port_id, 5039 uint16_t *nb_rx_desc, 5040 uint16_t *nb_tx_desc); 5041 5042 /** 5043 * Test if a port supports specific mempool ops. 5044 * 5045 * @param port_id 5046 * Port identifier of the Ethernet device. 5047 * @param [in] pool 5048 * The name of the pool operations to test. 5049 * @return 5050 * - 0: best mempool ops choice for this port. 5051 * - 1: mempool ops are supported for this port. 5052 * - -ENOTSUP: mempool ops not supported for this port. 5053 * - -ENODEV: Invalid port Identifier. 5054 * - -EINVAL: Pool param is null. 5055 */ 5056 int 5057 rte_eth_dev_pool_ops_supported(uint16_t port_id, const char *pool); 5058 5059 /** 5060 * Get the security context for the Ethernet device. 5061 * 5062 * @param port_id 5063 * Port identifier of the Ethernet device 5064 * @return 5065 * - NULL on error. 5066 * - pointer to security context on success. 5067 */ 5068 void * 5069 rte_eth_dev_get_sec_ctx(uint16_t port_id); 5070 5071 /** 5072 * @warning 5073 * @b EXPERIMENTAL: this API may change, or be removed, without prior notice 5074 * 5075 * Query the device hairpin capabilities. 5076 * 5077 * @param port_id 5078 * The port identifier of the Ethernet device. 5079 * @param cap 5080 * Pointer to a structure that will hold the hairpin capabilities. 5081 * @return 5082 * - (0) if successful. 5083 * - (-ENOTSUP) if hardware doesn't support. 5084 * - (-EINVAL) if bad parameter. 5085 */ 5086 __rte_experimental 5087 int rte_eth_dev_hairpin_capability_get(uint16_t port_id, 5088 struct rte_eth_hairpin_cap *cap); 5089 5090 /** 5091 * @warning 5092 * @b EXPERIMENTAL: this structure may change without prior notice. 5093 * 5094 * Ethernet device representor ID range entry 5095 */ 5096 struct rte_eth_representor_range { 5097 enum rte_eth_representor_type type; /**< Representor type */ 5098 int controller; /**< Controller index */ 5099 int pf; /**< Physical function index */ 5100 __extension__ 5101 union { 5102 int vf; /**< VF start index */ 5103 int sf; /**< SF start index */ 5104 }; 5105 uint32_t id_base; /**< Representor ID start index */ 5106 uint32_t id_end; /**< Representor ID end index */ 5107 char name[RTE_DEV_NAME_MAX_LEN]; /**< Representor name */ 5108 }; 5109 5110 /** 5111 * @warning 5112 * @b EXPERIMENTAL: this structure may change without prior notice. 5113 * 5114 * Ethernet device representor information 5115 */ 5116 struct rte_eth_representor_info { 5117 uint16_t controller; /**< Controller ID of caller device. */ 5118 uint16_t pf; /**< Physical function ID of caller device. */ 5119 uint32_t nb_ranges_alloc; /**< Size of the ranges array. */ 5120 uint32_t nb_ranges; /**< Number of initialized ranges. */ 5121 struct rte_eth_representor_range ranges[];/**< Representor ID range. */ 5122 }; 5123 5124 /** 5125 * Retrieve the representor info of the device. 
5126 * 5127 * Get device representor info to be able to calculate a unique 5128 * representor ID. @see rte_eth_representor_id_get helper. 5129 * 5130 * @param port_id 5131 * The port identifier of the device. 5132 * @param info 5133 * A pointer to a representor info structure. 5134 * NULL to return number of range entries and allocate memory 5135 * for next call to store detail. 5136 * The number of ranges that were written into this structure 5137 * will be placed into its nb_ranges field. This number cannot be 5138 * larger than the nb_ranges_alloc that by the user before calling 5139 * this function. It can be smaller than the value returned by the 5140 * function, however. 5141 * @return 5142 * - (-ENOTSUP) if operation is not supported. 5143 * - (-ENODEV) if *port_id* invalid. 5144 * - (-EIO) if device is removed. 5145 * - (>=0) number of available representor range entries. 5146 */ 5147 __rte_experimental 5148 int rte_eth_representor_info_get(uint16_t port_id, 5149 struct rte_eth_representor_info *info); 5150 5151 /** The NIC is able to deliver flag (if set) with packets to the PMD. */ 5152 #define RTE_ETH_RX_METADATA_USER_FLAG RTE_BIT64(0) 5153 5154 /** The NIC is able to deliver mark ID with packets to the PMD. */ 5155 #define RTE_ETH_RX_METADATA_USER_MARK RTE_BIT64(1) 5156 5157 /** The NIC is able to deliver tunnel ID with packets to the PMD. */ 5158 #define RTE_ETH_RX_METADATA_TUNNEL_ID RTE_BIT64(2) 5159 5160 /** 5161 * @warning 5162 * @b EXPERIMENTAL: this API may change without prior notice 5163 * 5164 * Negotiate the NIC's ability to deliver specific kinds of metadata to the PMD. 5165 * 5166 * Invoke this API before the first rte_eth_dev_configure() invocation 5167 * to let the PMD make preparations that are inconvenient to do later. 5168 * 5169 * The negotiation process is as follows: 5170 * 5171 * - the application requests features intending to use at least some of them; 5172 * - the PMD responds with the guaranteed subset of the requested feature set; 5173 * - the application can retry negotiation with another set of features; 5174 * - the application can pass zero to clear the negotiation result; 5175 * - the last negotiated result takes effect upon 5176 * the ethdev configure and start. 5177 * 5178 * @note 5179 * The PMD is supposed to first consider enabling the requested feature set 5180 * in its entirety. Only if it fails to do so, does it have the right to 5181 * respond with a smaller set of the originally requested features. 5182 * 5183 * @note 5184 * Return code (-ENOTSUP) does not necessarily mean that the requested 5185 * features are unsupported. In this case, the application should just 5186 * assume that these features can be used without prior negotiations. 5187 * 5188 * @param port_id 5189 * Port (ethdev) identifier 5190 * 5191 * @param[inout] features 5192 * Feature selection buffer 5193 * 5194 * @return 5195 * - (-EBUSY) if the port can't handle this in its current state; 5196 * - (-ENOTSUP) if the method itself is not supported by the PMD; 5197 * - (-ENODEV) if *port_id* is invalid; 5198 * - (-EINVAL) if *features* is NULL; 5199 * - (-EIO) if the device is removed; 5200 * - (0) on success 5201 */ 5202 __rte_experimental 5203 int rte_eth_rx_metadata_negotiate(uint16_t port_id, uint64_t *features); 5204 5205 #include <rte_ethdev_core.h> 5206 5207 /** 5208 * @internal 5209 * Helper routine for rte_eth_rx_burst(). 5210 * Should be called at exit from PMD's rte_eth_rx_bulk implementation. 
5211 * Does necessary post-processing - invokes Rx callbacks if any, etc. 5212 * 5213 * @param port_id 5214 * The port identifier of the Ethernet device. 5215 * @param queue_id 5216 * The index of the receive queue from which to retrieve input packets. 5217 * @param rx_pkts 5218 * The address of an array of pointers to *rte_mbuf* structures that 5219 * have been retrieved from the device. 5220 * @param nb_rx 5221 * The number of packets that were retrieved from the device. 5222 * @param nb_pkts 5223 * The number of elements in @p rx_pkts array. 5224 * @param opaque 5225 * Opaque pointer of Rx queue callback related data. 5226 * 5227 * @return 5228 * The number of packets effectively supplied to the @p rx_pkts array. 5229 */ 5230 uint16_t rte_eth_call_rx_callbacks(uint16_t port_id, uint16_t queue_id, 5231 struct rte_mbuf **rx_pkts, uint16_t nb_rx, uint16_t nb_pkts, 5232 void *opaque); 5233 5234 /** 5235 * 5236 * Retrieve a burst of input packets from a receive queue of an Ethernet 5237 * device. The retrieved packets are stored in *rte_mbuf* structures whose 5238 * pointers are supplied in the *rx_pkts* array. 5239 * 5240 * The rte_eth_rx_burst() function loops, parsing the Rx ring of the 5241 * receive queue, up to *nb_pkts* packets, and for each completed Rx 5242 * descriptor in the ring, it performs the following operations: 5243 * 5244 * - Initialize the *rte_mbuf* data structure associated with the 5245 * Rx descriptor according to the information provided by the NIC into 5246 * that Rx descriptor. 5247 * 5248 * - Store the *rte_mbuf* data structure into the next entry of the 5249 * *rx_pkts* array. 5250 * 5251 * - Replenish the Rx descriptor with a new *rte_mbuf* buffer 5252 * allocated from the memory pool associated with the receive queue at 5253 * initialization time. 5254 * 5255 * When retrieving an input packet that was scattered by the controller 5256 * into multiple receive descriptors, the rte_eth_rx_burst() function 5257 * appends the associated *rte_mbuf* buffers to the first buffer of the 5258 * packet. 5259 * 5260 * The rte_eth_rx_burst() function returns the number of packets 5261 * actually retrieved, which is the number of *rte_mbuf* data structures 5262 * effectively supplied into the *rx_pkts* array. 5263 * A return value equal to *nb_pkts* indicates that the Rx queue contained 5264 * at least *rx_pkts* packets, and this is likely to signify that other 5265 * received packets remain in the input queue. Applications implementing 5266 * a "retrieve as much received packets as possible" policy can check this 5267 * specific case and keep invoking the rte_eth_rx_burst() function until 5268 * a value less than *nb_pkts* is returned. 5269 * 5270 * This receive method has the following advantages: 5271 * 5272 * - It allows a run-to-completion network stack engine to retrieve and 5273 * to immediately process received packets in a fast burst-oriented 5274 * approach, avoiding the overhead of unnecessary intermediate packet 5275 * queue/dequeue operations. 5276 * 5277 * - Conversely, it also allows an asynchronous-oriented processing 5278 * method to retrieve bursts of received packets and to immediately 5279 * queue them for further parallel processing by another logical core, 5280 * for instance. 
However, instead of having received packets being 5281 * individually queued by the driver, this approach allows the caller 5282 * of the rte_eth_rx_burst() function to queue a burst of retrieved 5283 * packets at a time and therefore dramatically reduce the cost of 5284 * enqueue/dequeue operations per packet. 5285 * 5286 * - It allows the rte_eth_rx_burst() function of the driver to take 5287 * advantage of burst-oriented hardware features (CPU cache, 5288 * prefetch instructions, and so on) to minimize the number of CPU 5289 * cycles per packet. 5290 * 5291 * To summarize, the proposed receive API enables many 5292 * burst-oriented optimizations in both synchronous and asynchronous 5293 * packet processing environments with no overhead in both cases. 5294 * 5295 * @note 5296 * Some drivers using vector instructions require that *nb_pkts* is 5297 * divisible by 4 or 8, depending on the driver implementation. 5298 * 5299 * The rte_eth_rx_burst() function does not provide any error 5300 * notification to avoid the corresponding overhead. As a hint, the 5301 * upper-level application might check the status of the device link once 5302 * being systematically returned a 0 value for a given number of tries. 5303 * 5304 * @param port_id 5305 * The port identifier of the Ethernet device. 5306 * @param queue_id 5307 * The index of the receive queue from which to retrieve input packets. 5308 * The value must be in the range [0, nb_rx_queue - 1] previously supplied 5309 * to rte_eth_dev_configure(). 5310 * @param rx_pkts 5311 * The address of an array of pointers to *rte_mbuf* structures that 5312 * must be large enough to store *nb_pkts* pointers in it. 5313 * @param nb_pkts 5314 * The maximum number of packets to retrieve. 5315 * The value must be divisible by 8 in order to work with any driver. 5316 * @return 5317 * The number of packets actually retrieved, which is the number 5318 * of pointers to *rte_mbuf* structures effectively supplied to the 5319 * *rx_pkts* array. 5320 */ 5321 static inline uint16_t 5322 rte_eth_rx_burst(uint16_t port_id, uint16_t queue_id, 5323 struct rte_mbuf **rx_pkts, const uint16_t nb_pkts) 5324 { 5325 uint16_t nb_rx; 5326 struct rte_eth_fp_ops *p; 5327 void *qd; 5328 5329 #ifdef RTE_ETHDEV_DEBUG_RX 5330 if (port_id >= RTE_MAX_ETHPORTS || 5331 queue_id >= RTE_MAX_QUEUES_PER_PORT) { 5332 RTE_ETHDEV_LOG(ERR, 5333 "Invalid port_id=%u or queue_id=%u\n", 5334 port_id, queue_id); 5335 return 0; 5336 } 5337 #endif 5338 5339 /* fetch pointer to queue data */ 5340 p = &rte_eth_fp_ops[port_id]; 5341 qd = p->rxq.data[queue_id]; 5342 5343 #ifdef RTE_ETHDEV_DEBUG_RX 5344 RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0); 5345 5346 if (qd == NULL) { 5347 RTE_ETHDEV_LOG(ERR, "Invalid Rx queue_id=%u for port_id=%u\n", 5348 queue_id, port_id); 5349 return 0; 5350 } 5351 #endif 5352 5353 nb_rx = p->rx_pkt_burst(qd, rx_pkts, nb_pkts); 5354 5355 #ifdef RTE_ETHDEV_RXTX_CALLBACKS 5356 { 5357 void *cb; 5358 5359 /* __ATOMIC_RELEASE memory order was used when the 5360 * call back was inserted into the list. 5361 * Since there is a clear dependency between loading 5362 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is 5363 * not required. 
	 */
		cb = __atomic_load_n((void **)&p->rxq.clbk[queue_id],
				__ATOMIC_RELAXED);
		if (unlikely(cb != NULL))
			nb_rx = rte_eth_call_rx_callbacks(port_id, queue_id,
					rx_pkts, nb_rx, nb_pkts, cb);
	}
#endif

	rte_ethdev_trace_rx_burst(port_id, queue_id, (void **)rx_pkts, nb_rx);
	return nb_rx;
}

/**
 * Get the number of used descriptors of an Rx queue
 *
 * @param port_id
 *   The port identifier of the Ethernet device.
 * @param queue_id
 *   The queue ID on the specific port.
 * @return
 *   The number of used descriptors in the specific queue, or:
 *   - (-ENODEV) if *port_id* is invalid.
 *   - (-EINVAL) if *queue_id* is invalid.
 *   - (-ENOTSUP) if the device does not support this function.
 */
static inline int
rte_eth_rx_queue_count(uint16_t port_id, uint16_t queue_id)
{
	struct rte_eth_fp_ops *p;
	void *qd;

	if (port_id >= RTE_MAX_ETHPORTS ||
			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
		RTE_ETHDEV_LOG(ERR,
			"Invalid port_id=%u or queue_id=%u\n",
			port_id, queue_id);
		return -EINVAL;
	}

	/* fetch pointer to queue data */
	p = &rte_eth_fp_ops[port_id];
	qd = p->rxq.data[queue_id];

	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_queue_count, -ENOTSUP);
	if (qd == NULL)
		return -EINVAL;

	return (int)(*p->rx_queue_count)(qd);
}

/**@{@name Rx hardware descriptor states
 * @see rte_eth_rx_descriptor_status
 */
#define RTE_ETH_RX_DESC_AVAIL    0 /**< Desc available for hw. */
#define RTE_ETH_RX_DESC_DONE     1 /**< Desc done, filled by hw. */
#define RTE_ETH_RX_DESC_UNAVAIL  2 /**< Desc used by driver or hw. */
/**@}*/

/**
 * Check the status of an Rx descriptor in the queue
 *
 * It should be called in a similar context to the Rx function:
 * - on a dataplane core
 * - not concurrently on the same queue
 *
 * Since it's a dataplane function, no check is performed on port_id and
 * queue_id. The caller must therefore ensure that the port is enabled
 * and the queue is configured and running.
 *
 * Note: accessing a random descriptor in the ring may trigger cache
 * misses and have a performance impact.
 *
 * @param port_id
 *   A valid port identifier of the Ethernet device.
 * @param queue_id
 *   A valid Rx queue identifier on this port.
 * @param offset
 *   The offset of the descriptor starting from tail (0 is the next
 *   packet to be received by the driver).
 *
 * @return
 *   - (RTE_ETH_RX_DESC_AVAIL): Descriptor is available for the hardware to
 *     receive a packet.
 *   - (RTE_ETH_RX_DESC_DONE): Descriptor is done, it is filled by hw, but
 *     not yet processed by the driver (i.e. in the receive queue).
 *   - (RTE_ETH_RX_DESC_UNAVAIL): Descriptor is unavailable, either held by
 *     the driver and not yet returned to hw, or reserved by the hw.
 *   - (-EINVAL) bad descriptor offset.
 *   - (-ENOTSUP) if the device does not support this function.
 *   - (-ENODEV) bad port or queue (only if compiled with debug).
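 *
 * For example, a polling loop may use it to estimate how much work is
 * already pending before issuing an Rx burst (sketch only; the offset of 16
 * is an arbitrary illustration):
 *
 *     if (rte_eth_rx_descriptor_status(port_id, queue_id, 16) ==
 *             RTE_ETH_RX_DESC_DONE)
 *         ... the descriptor 16 slots ahead is already filled by hw, so a
 *             burst of roughly that size is likely available ...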
 */
static inline int
rte_eth_rx_descriptor_status(uint16_t port_id, uint16_t queue_id,
	uint16_t offset)
{
	struct rte_eth_fp_ops *p;
	void *qd;

#ifdef RTE_ETHDEV_DEBUG_RX
	if (port_id >= RTE_MAX_ETHPORTS ||
			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
		RTE_ETHDEV_LOG(ERR,
			"Invalid port_id=%u or queue_id=%u\n",
			port_id, queue_id);
		return -EINVAL;
	}
#endif

	/* fetch pointer to queue data */
	p = &rte_eth_fp_ops[port_id];
	qd = p->rxq.data[queue_id];

#ifdef RTE_ETHDEV_DEBUG_RX
	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
	if (qd == NULL)
		return -ENODEV;
#endif
	RTE_FUNC_PTR_OR_ERR_RET(*p->rx_descriptor_status, -ENOTSUP);
	return (*p->rx_descriptor_status)(qd, offset);
}

/**@{@name Tx hardware descriptor states
 * @see rte_eth_tx_descriptor_status
 */
#define RTE_ETH_TX_DESC_FULL     0 /**< Desc filled for hw, waiting xmit. */
#define RTE_ETH_TX_DESC_DONE     1 /**< Desc done, packet is transmitted. */
#define RTE_ETH_TX_DESC_UNAVAIL  2 /**< Desc used by driver or hw. */
/**@}*/

/**
 * Check the status of a Tx descriptor in the queue.
 *
 * It should be called in a similar context to the Tx function:
 * - on a dataplane core
 * - not concurrently on the same queue
 *
 * Since it's a dataplane function, no check is performed on port_id and
 * queue_id. The caller must therefore ensure that the port is enabled
 * and the queue is configured and running.
 *
 * Note: accessing a random descriptor in the ring may trigger cache
 * misses and have a performance impact.
 *
 * @param port_id
 *   A valid port identifier of the Ethernet device.
 * @param queue_id
 *   A valid Tx queue identifier on this port.
 * @param offset
 *   The offset of the descriptor starting from tail (0 is the place where
 *   the next packet will be sent).
 *
 * @return
 *   - (RTE_ETH_TX_DESC_FULL) Descriptor is being processed by the hw, i.e.
 *     in the transmit queue.
 *   - (RTE_ETH_TX_DESC_DONE) Hardware is done with this descriptor, it can
 *     be reused by the driver.
 *   - (RTE_ETH_TX_DESC_UNAVAIL): Descriptor is unavailable, reserved by the
 *     driver or the hardware.
 *   - (-EINVAL) bad descriptor offset.
 *   - (-ENOTSUP) if the device does not support this function.
 *   - (-ENODEV) bad port or queue (only if compiled with debug).
 */
static inline int rte_eth_tx_descriptor_status(uint16_t port_id,
	uint16_t queue_id, uint16_t offset)
{
	struct rte_eth_fp_ops *p;
	void *qd;

#ifdef RTE_ETHDEV_DEBUG_TX
	if (port_id >= RTE_MAX_ETHPORTS ||
			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
		RTE_ETHDEV_LOG(ERR,
			"Invalid port_id=%u or queue_id=%u\n",
			port_id, queue_id);
		return -EINVAL;
	}
#endif

	/* fetch pointer to queue data */
	p = &rte_eth_fp_ops[port_id];
	qd = p->txq.data[queue_id];

#ifdef RTE_ETHDEV_DEBUG_TX
	RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, -ENODEV);
	if (qd == NULL)
		return -ENODEV;
#endif
	RTE_FUNC_PTR_OR_ERR_RET(*p->tx_descriptor_status, -ENOTSUP);
	return (*p->tx_descriptor_status)(qd, offset);
}

/**
 * @internal
 * Helper routine for rte_eth_tx_burst().
 * Should be called before entering the PMD's rte_eth_tx_bulk implementation.
5561 * Does necessary pre-processing - invokes Tx callbacks if any, etc. 5562 * 5563 * @param port_id 5564 * The port identifier of the Ethernet device. 5565 * @param queue_id 5566 * The index of the transmit queue through which output packets must be 5567 * sent. 5568 * @param tx_pkts 5569 * The address of an array of *nb_pkts* pointers to *rte_mbuf* structures 5570 * which contain the output packets. 5571 * @param nb_pkts 5572 * The maximum number of packets to transmit. 5573 * @return 5574 * The number of output packets to transmit. 5575 */ 5576 uint16_t rte_eth_call_tx_callbacks(uint16_t port_id, uint16_t queue_id, 5577 struct rte_mbuf **tx_pkts, uint16_t nb_pkts, void *opaque); 5578 5579 /** 5580 * Send a burst of output packets on a transmit queue of an Ethernet device. 5581 * 5582 * The rte_eth_tx_burst() function is invoked to transmit output packets 5583 * on the output queue *queue_id* of the Ethernet device designated by its 5584 * *port_id*. 5585 * The *nb_pkts* parameter is the number of packets to send which are 5586 * supplied in the *tx_pkts* array of *rte_mbuf* structures, each of them 5587 * allocated from a pool created with rte_pktmbuf_pool_create(). 5588 * The rte_eth_tx_burst() function loops, sending *nb_pkts* packets, 5589 * up to the number of transmit descriptors available in the Tx ring of the 5590 * transmit queue. 5591 * For each packet to send, the rte_eth_tx_burst() function performs 5592 * the following operations: 5593 * 5594 * - Pick up the next available descriptor in the transmit ring. 5595 * 5596 * - Free the network buffer previously sent with that descriptor, if any. 5597 * 5598 * - Initialize the transmit descriptor with the information provided 5599 * in the *rte_mbuf data structure. 5600 * 5601 * In the case of a segmented packet composed of a list of *rte_mbuf* buffers, 5602 * the rte_eth_tx_burst() function uses several transmit descriptors 5603 * of the ring. 5604 * 5605 * The rte_eth_tx_burst() function returns the number of packets it 5606 * actually sent. A return value equal to *nb_pkts* means that all packets 5607 * have been sent, and this is likely to signify that other output packets 5608 * could be immediately transmitted again. Applications that implement a 5609 * "send as many packets to transmit as possible" policy can check this 5610 * specific case and keep invoking the rte_eth_tx_burst() function until 5611 * a value less than *nb_pkts* is returned. 5612 * 5613 * It is the responsibility of the rte_eth_tx_burst() function to 5614 * transparently free the memory buffers of packets previously sent. 5615 * This feature is driven by the *tx_free_thresh* value supplied to the 5616 * rte_eth_dev_configure() function at device configuration time. 5617 * When the number of free Tx descriptors drops below this threshold, the 5618 * rte_eth_tx_burst() function must [attempt to] free the *rte_mbuf* buffers 5619 * of those packets whose transmission was effectively completed. 5620 * 5621 * If the PMD is RTE_ETH_TX_OFFLOAD_MT_LOCKFREE capable, multiple threads can 5622 * invoke this function concurrently on the same Tx queue without SW lock. 5623 * @see rte_eth_dev_info_get, struct rte_eth_txconf::offloads 5624 * 5625 * @see rte_eth_tx_prepare to perform some prior checks or adjustments 5626 * for offloads. 5627 * 5628 * @param port_id 5629 * The port identifier of the Ethernet device. 5630 * @param queue_id 5631 * The index of the transmit queue through which output packets must be 5632 * sent. 
5633 * The value must be in the range [0, nb_tx_queue - 1] previously supplied 5634 * to rte_eth_dev_configure(). 5635 * @param tx_pkts 5636 * The address of an array of *nb_pkts* pointers to *rte_mbuf* structures 5637 * which contain the output packets. 5638 * @param nb_pkts 5639 * The maximum number of packets to transmit. 5640 * @return 5641 * The number of output packets actually stored in transmit descriptors of 5642 * the transmit ring. The return value can be less than the value of the 5643 * *tx_pkts* parameter when the transmit ring is full or has been filled up. 5644 */ 5645 static inline uint16_t 5646 rte_eth_tx_burst(uint16_t port_id, uint16_t queue_id, 5647 struct rte_mbuf **tx_pkts, uint16_t nb_pkts) 5648 { 5649 struct rte_eth_fp_ops *p; 5650 void *qd; 5651 5652 #ifdef RTE_ETHDEV_DEBUG_TX 5653 if (port_id >= RTE_MAX_ETHPORTS || 5654 queue_id >= RTE_MAX_QUEUES_PER_PORT) { 5655 RTE_ETHDEV_LOG(ERR, 5656 "Invalid port_id=%u or queue_id=%u\n", 5657 port_id, queue_id); 5658 return 0; 5659 } 5660 #endif 5661 5662 /* fetch pointer to queue data */ 5663 p = &rte_eth_fp_ops[port_id]; 5664 qd = p->txq.data[queue_id]; 5665 5666 #ifdef RTE_ETHDEV_DEBUG_TX 5667 RTE_ETH_VALID_PORTID_OR_ERR_RET(port_id, 0); 5668 5669 if (qd == NULL) { 5670 RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n", 5671 queue_id, port_id); 5672 return 0; 5673 } 5674 #endif 5675 5676 #ifdef RTE_ETHDEV_RXTX_CALLBACKS 5677 { 5678 void *cb; 5679 5680 /* __ATOMIC_RELEASE memory order was used when the 5681 * call back was inserted into the list. 5682 * Since there is a clear dependency between loading 5683 * cb and cb->fn/cb->next, __ATOMIC_ACQUIRE memory order is 5684 * not required. 5685 */ 5686 cb = __atomic_load_n((void **)&p->txq.clbk[queue_id], 5687 __ATOMIC_RELAXED); 5688 if (unlikely(cb != NULL)) 5689 nb_pkts = rte_eth_call_tx_callbacks(port_id, queue_id, 5690 tx_pkts, nb_pkts, cb); 5691 } 5692 #endif 5693 5694 nb_pkts = p->tx_pkt_burst(qd, tx_pkts, nb_pkts); 5695 5696 rte_ethdev_trace_tx_burst(port_id, queue_id, (void **)tx_pkts, nb_pkts); 5697 return nb_pkts; 5698 } 5699 5700 /** 5701 * Process a burst of output packets on a transmit queue of an Ethernet device. 5702 * 5703 * The rte_eth_tx_prepare() function is invoked to prepare output packets to be 5704 * transmitted on the output queue *queue_id* of the Ethernet device designated 5705 * by its *port_id*. 5706 * The *nb_pkts* parameter is the number of packets to be prepared which are 5707 * supplied in the *tx_pkts* array of *rte_mbuf* structures, each of them 5708 * allocated from a pool created with rte_pktmbuf_pool_create(). 5709 * For each packet to send, the rte_eth_tx_prepare() function performs 5710 * the following operations: 5711 * 5712 * - Check if packet meets devices requirements for Tx offloads. 5713 * 5714 * - Check limitations about number of segments. 5715 * 5716 * - Check additional requirements when debug is enabled. 5717 * 5718 * - Update and/or reset required checksums when Tx offload is set for packet. 5719 * 5720 * Since this function can modify packet data, provided mbufs must be safely 5721 * writable (e.g. modified data cannot be in shared segment). 5722 * 5723 * The rte_eth_tx_prepare() function returns the number of packets ready to be 5724 * sent. A return value equal to *nb_pkts* means that all packets are valid and 5725 * ready to be sent, otherwise stops processing on the first invalid packet and 5726 * leaves the rest packets untouched. 
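 *
 * A common (simplified) usage pattern is to run rte_eth_tx_prepare() on a
 * burst and hand only the accepted packets to rte_eth_tx_burst(), e.g.:
 *
 *     uint16_t nb_prep = rte_eth_tx_prepare(port_id, queue_id, pkts, nb_pkts);
 *     if (nb_prep != nb_pkts)
 *         ... pkts[nb_prep] is the first rejected packet; rte_errno tells why ...
 *     uint16_t nb_sent = rte_eth_tx_burst(port_id, queue_id, pkts, nb_prep);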
 *
 * When this functionality is not implemented in the driver, all packets are
 * returned untouched.
 *
 * @param port_id
 *   The port identifier of the Ethernet device.
 *   The value must be a valid port ID.
 * @param queue_id
 *   The index of the transmit queue through which output packets must be
 *   sent.
 *   The value must be in the range [0, nb_tx_queue - 1] previously supplied
 *   to rte_eth_dev_configure().
 * @param tx_pkts
 *   The address of an array of *nb_pkts* pointers to *rte_mbuf* structures
 *   which contain the output packets.
 * @param nb_pkts
 *   The maximum number of packets to process.
 * @return
 *   The number of packets that are correct and ready to be sent. The return
 *   value can be less than the value of the *nb_pkts* parameter when some
 *   packet doesn't meet the device's requirements; rte_errno is then set
 *   appropriately:
 *   - EINVAL: offload flags are not correctly set
 *   - ENOTSUP: the offload feature is not supported by the hardware
 *   - ENODEV: if *port_id* is invalid (with debug enabled only)
 */

#ifndef RTE_ETHDEV_TX_PREPARE_NOOP

static inline uint16_t
rte_eth_tx_prepare(uint16_t port_id, uint16_t queue_id,
		struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
	struct rte_eth_fp_ops *p;
	void *qd;

#ifdef RTE_ETHDEV_DEBUG_TX
	if (port_id >= RTE_MAX_ETHPORTS ||
			queue_id >= RTE_MAX_QUEUES_PER_PORT) {
		RTE_ETHDEV_LOG(ERR,
			"Invalid port_id=%u or queue_id=%u\n",
			port_id, queue_id);
		rte_errno = ENODEV;
		return 0;
	}
#endif

	/* fetch pointer to queue data */
	p = &rte_eth_fp_ops[port_id];
	qd = p->txq.data[queue_id];

#ifdef RTE_ETHDEV_DEBUG_TX
	if (!rte_eth_dev_is_valid_port(port_id)) {
		RTE_ETHDEV_LOG(ERR, "Invalid Tx port_id=%u\n", port_id);
		rte_errno = ENODEV;
		return 0;
	}
	if (qd == NULL) {
		RTE_ETHDEV_LOG(ERR, "Invalid Tx queue_id=%u for port_id=%u\n",
			queue_id, port_id);
		rte_errno = EINVAL;
		return 0;
	}
#endif

	if (!p->tx_pkt_prepare)
		return nb_pkts;

	return p->tx_pkt_prepare(qd, tx_pkts, nb_pkts);
}

#else

/*
 * Native NOOP operation for compilation targets which do not require any
 * preparation steps, and where a functional NOOP would only introduce an
 * unnecessary performance drop.
 *
 * It is generally not a good idea to enable this globally, and it should not
 * be used if the behavior of tx_preparation can change.
 */

static inline uint16_t
rte_eth_tx_prepare(__rte_unused uint16_t port_id,
		__rte_unused uint16_t queue_id,
		__rte_unused struct rte_mbuf **tx_pkts, uint16_t nb_pkts)
{
	return nb_pkts;
}

#endif

/**
 * Send any packets queued up for transmission on a port and HW queue
 *
 * This causes an explicit flush of packets previously buffered via the
 * rte_eth_tx_buffer() function. It returns the number of packets successfully
 * sent to the NIC, and calls the error callback for any unsent packets. Unless
 * explicitly set up otherwise, the default callback simply frees the unsent
 * packets back to the owning mempool.
 *
 * @param port_id
 *   The port identifier of the Ethernet device.
 * @param queue_id
 *   The index of the transmit queue through which output packets must be
 *   sent.
5833 * The value must be in the range [0, nb_tx_queue - 1] previously supplied 5834 * to rte_eth_dev_configure(). 5835 * @param buffer 5836 * Buffer of packets to be transmit. 5837 * @return 5838 * The number of packets successfully sent to the Ethernet device. The error 5839 * callback is called for any packets which could not be sent. 5840 */ 5841 static inline uint16_t 5842 rte_eth_tx_buffer_flush(uint16_t port_id, uint16_t queue_id, 5843 struct rte_eth_dev_tx_buffer *buffer) 5844 { 5845 uint16_t sent; 5846 uint16_t to_send = buffer->length; 5847 5848 if (to_send == 0) 5849 return 0; 5850 5851 sent = rte_eth_tx_burst(port_id, queue_id, buffer->pkts, to_send); 5852 5853 buffer->length = 0; 5854 5855 /* All packets sent, or to be dealt with by callback below */ 5856 if (unlikely(sent != to_send)) 5857 buffer->error_callback(&buffer->pkts[sent], 5858 (uint16_t)(to_send - sent), 5859 buffer->error_userdata); 5860 5861 return sent; 5862 } 5863 5864 /** 5865 * Buffer a single packet for future transmission on a port and queue 5866 * 5867 * This function takes a single mbuf/packet and buffers it for later 5868 * transmission on the particular port and queue specified. Once the buffer is 5869 * full of packets, an attempt will be made to transmit all the buffered 5870 * packets. In case of error, where not all packets can be transmitted, a 5871 * callback is called with the unsent packets as a parameter. If no callback 5872 * is explicitly set up, the unsent packets are just freed back to the owning 5873 * mempool. The function returns the number of packets actually sent i.e. 5874 * 0 if no buffer flush occurred, otherwise the number of packets successfully 5875 * flushed 5876 * 5877 * @param port_id 5878 * The port identifier of the Ethernet device. 5879 * @param queue_id 5880 * The index of the transmit queue through which output packets must be 5881 * sent. 5882 * The value must be in the range [0, nb_tx_queue - 1] previously supplied 5883 * to rte_eth_dev_configure(). 5884 * @param buffer 5885 * Buffer used to collect packets to be sent. 5886 * @param tx_pkt 5887 * Pointer to the packet mbuf to be sent. 5888 * @return 5889 * 0 = packet has been buffered for later transmission 5890 * N > 0 = packet has been buffered, and the buffer was subsequently flushed, 5891 * causing N packets to be sent, and the error callback to be called for 5892 * the rest. 5893 */ 5894 static __rte_always_inline uint16_t 5895 rte_eth_tx_buffer(uint16_t port_id, uint16_t queue_id, 5896 struct rte_eth_dev_tx_buffer *buffer, struct rte_mbuf *tx_pkt) 5897 { 5898 buffer->pkts[buffer->length++] = tx_pkt; 5899 if (buffer->length < buffer->size) 5900 return 0; 5901 5902 return rte_eth_tx_buffer_flush(port_id, queue_id, buffer); 5903 } 5904 5905 #ifdef __cplusplus 5906 } 5907 #endif 5908 5909 #endif /* _RTE_ETHDEV_H_ */ 5910