..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2010-2015 Intel Corporation.

Poll Mode Driver
================

The DPDK includes 1 Gigabit, 10 Gigabit and 40 Gigabit as well as paravirtualized virtio Poll Mode Drivers.

A Poll Mode Driver (PMD) consists of APIs, provided through the BSD driver running in user space,
to configure the devices and their respective queues.
In addition, a PMD accesses the RX and TX descriptors directly without any interrupts
(with the exception of Link Status Change interrupts) to quickly receive,
process and deliver packets in the user's application.
This section describes the requirements of the PMDs,
their global design principles and proposes a high-level architecture and a generic external API for the Ethernet PMDs.

Requirements and Assumptions
----------------------------

The DPDK environment for packet processing applications allows for two models, run-to-completion and pipe-line:

* In the *run-to-completion* model, a specific port's RX descriptor ring is polled for packets through an API.
  Packets are then processed on the same core and placed on a port's TX descriptor ring through an API for transmission.

* In the *pipe-line* model, one core polls one or more ports' RX descriptor rings through an API.
  Packets are received and passed to another core via a ring.
  The other core continues to process the packet, which may then be placed on a port's TX descriptor ring through an API for transmission.

In a synchronous run-to-completion model,
each logical core assigned to the DPDK executes a packet processing loop that includes the following steps:

* Retrieve input packets through the PMD receive API

* Process each received packet one at a time, up to its forwarding

* Send pending output packets through the PMD transmit API

A minimal code sketch of this loop is given at the end of this section.

Conversely, in an asynchronous pipe-line model, some logical cores may be dedicated to the retrieval of received packets and
other logical cores to the processing of previously received packets.
Received packets are exchanged between logical cores through rings.
The loop for packet retrieval includes the following steps:

* Retrieve input packets through the PMD receive API

* Provide received packets to processing lcores through packet queues

The loop for packet processing includes the following steps:

* Retrieve the received packet from the packet queue

* Process the received packet, up to its retransmission if forwarded

To avoid any unnecessary interrupt processing overhead, the execution environment must not use any asynchronous notification mechanisms.
Whenever needed and appropriate, asynchronous communication should be introduced as much as possible through the use of rings.

Avoiding lock contention is a key issue in a multi-core environment.
To address this issue, PMDs are designed to work with per-core private resources as much as possible.
For example, a PMD maintains a separate transmit queue per-core, per-port, if the PMD is not ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable.
In the same way, every receive queue of a port is assigned to and polled by a single logical core (lcore).

To comply with Non-Uniform Memory Access (NUMA), memory management is designed to assign to each logical core
a private buffer pool in local memory to minimize remote memory access.
The configuration of packet buffer pools should take into account the underlying physical memory architecture in terms of DIMMS,
channels and ranks.
The application must ensure that appropriate parameters are given at memory pool creation time.
See :doc:`../mempool_lib`.

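To make the synchronous run-to-completion model concrete, the following minimal sketch shows such a packet processing loop,
assuming a port already configured with one RX and one TX queue (the burst size and the empty processing step are placeholders for the example):

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    #define BURST_SIZE 32  /* assumption: maximum packets polled per iteration */

    /* Minimal run-to-completion loop for one lcore: poll RX queue 0 of a
     * port, process the packets, and transmit them on TX queue 0. */
    static void
    lcore_fwd_loop(uint16_t port_id)
    {
        struct rte_mbuf *bufs[BURST_SIZE];
        uint16_t nb_rx, nb_tx;

        for (;;) {
            /* Retrieve input packets through the PMD receive API. */
            nb_rx = rte_eth_rx_burst(port_id, 0, bufs, BURST_SIZE);
            if (nb_rx == 0)
                continue;

            /* Process each received packet here, up to its forwarding. */

            /* Send pending output packets through the PMD transmit API. */
            nb_tx = rte_eth_tx_burst(port_id, 0, bufs, nb_rx);

            /* Free any packets the TX ring could not accept. */
            while (nb_tx < nb_rx)
                rte_pktmbuf_free(bufs[nb_tx++]);
        }
    }
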
Design Principles
-----------------

The API and architecture of the Ethernet* PMDs are designed with the following guidelines in mind.

PMDs must help global policy-oriented decisions to be enforced at the upper application level.
Conversely, NIC PMD functions should not impede the benefits expected by upper-level global policies,
or worse prevent such policies from being applied.

For instance, both the receive and transmit functions of a PMD have a maximum number of packets/descriptors to poll.
This allows a run-to-completion processing stack to statically fix or
to dynamically adapt its overall behavior through different global loop policies, such as:

* Receive, process immediately and transmit packets one at a time in a piecemeal fashion.

* Receive as many packets as possible, then process all received packets, transmitting them immediately.

* Receive a given maximum number of packets, process the received packets, accumulate them and finally send all accumulated packets to transmit.

To achieve optimal performance, overall software design choices and pure software optimization techniques must be considered and
balanced against available low-level hardware-based optimization features (CPU cache properties, bus speed, NIC PCI bandwidth, and so on).
The case of packet transmission is an example of this software/hardware tradeoff issue when optimizing burst-oriented network packet processing engines.
In the initial case, the PMD could export only an rte_eth_tx_one function to transmit one packet at a time on a given queue.
On top of that, one can easily build an rte_eth_tx_burst function that loops invoking the rte_eth_tx_one function to transmit several packets at a time.
However, an rte_eth_tx_burst function is effectively implemented by the PMD to minimize the driver-level transmit cost per packet through the following optimizations:

* Share among multiple packets the un-amortized cost of invoking the rte_eth_tx_one function.

* Enable the rte_eth_tx_burst function to take advantage of burst-oriented hardware features (prefetch data in cache, use of NIC head/tail registers)
  to minimize the number of CPU cycles per packet, for example by avoiding unnecessary read memory accesses to ring transmit descriptors,
  or by systematically using arrays of pointers that exactly fit cache line boundaries and sizes.

* Apply burst-oriented software optimization techniques to remove operations that would otherwise be unavoidable, such as ring index wrap back management.

Burst-oriented functions are also introduced via the API for services that are intensively used by the PMD.
This applies in particular to the buffer allocators used to populate NIC rings, which provide functions to allocate/free several buffers at a time.
For example, an mbuf_multiple_alloc function returning an array of pointers to rte_mbuf buffers speeds up the receive poll function of the PMD when
replenishing multiple descriptors of the receive ring.

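As an illustration of this principle in today's mbuf API, ``rte_pktmbuf_alloc_bulk()`` allocates several mbufs in one call;
a hypothetical RX replenish helper might use it as follows (the pool, the slot array and the replenish count are assumptions for the example):

.. code-block:: c

    #include <rte_mbuf.h>
    #include <rte_mempool.h>

    #define RING_REPLENISH 32  /* assumption: descriptors replenished per call */

    /* Sketch: refill RX ring slots with one bulk allocation instead of
     * RING_REPLENISH separate rte_pktmbuf_alloc() calls, amortizing the
     * per-buffer cost of accessing the mempool. */
    static int
    replenish_rx_slots(struct rte_mempool *pool, struct rte_mbuf **slots)
    {
        if (rte_pktmbuf_alloc_bulk(pool, slots, RING_REPLENISH) != 0)
            return -1;  /* pool exhausted; caller should retry later */
        return 0;
    }
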
Logical Cores, Memory and NIC Queues Relationships
--------------------------------------------------

The DPDK supports NUMA, allowing for better performance when a processor's logical cores and interfaces utilize its local memory.
Therefore, mbufs associated with local PCIe* interfaces should be allocated from memory pools created in local memory.
The buffers should, if possible, remain on the local processor to obtain the best performance results, and RX and TX buffer descriptors
should be populated with mbufs allocated from a mempool allocated from local memory.

The run-to-completion model also performs better if packet or data manipulation is in local memory instead of a remote processor's memory.
This is also true for the pipe-line model provided all logical cores used are located on the same processor.

Multiple logical cores should never share receive or transmit queues for interfaces since this would require global locks and hinder performance.

If the PMD is ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capable, multiple threads can invoke ``rte_eth_tx_burst()``
concurrently on the same tx queue without a SW lock. This PMD feature, found in some NICs, is useful in the following use cases:

* Remove explicit spinlocks in some applications where lcores are not mapped to Tx queues with a 1:1 relation.

* In the eventdev use case, avoid dedicating a separate TX core for transmitting and thus
  enable more scaling as all workers can send the packets.

See `Hardware Offload`_ for ``RTE_ETH_TX_OFFLOAD_MT_LOCKFREE`` capability probing details.

Device Identification, Ownership and Configuration
--------------------------------------------------

Device Identification
~~~~~~~~~~~~~~~~~~~~~

Each NIC port is uniquely designated by its (bus/bridge, device, function) PCI
identifiers assigned by the PCI probing/enumeration function executed at DPDK initialization.
Based on their PCI identifier, NIC ports are assigned two other identifiers:

* A port index used to designate the NIC port in all functions exported by the PMD API.

* A port name used to designate the port in console messages, for administration or debugging purposes.
  For ease of use, the port name includes the port index.

Port Ownership
~~~~~~~~~~~~~~

Ethernet device ports can be owned by a single DPDK entity (application, library, PMD, process, etc.).
The ownership mechanism is controlled by ethdev APIs and allows DPDK entities to set, remove or get a port owner.
It prevents Ethernet ports from being managed by multiple entities.

.. note::

   It is the DPDK entity's responsibility to set the port owner before using it and to manage the port usage synchronization between different threads or processes.

It is recommended to set port ownership early,
for example during the probing notification ``RTE_ETH_EVENT_NEW``.

Device Configuration
~~~~~~~~~~~~~~~~~~~~

The configuration of each NIC port includes the following operations:

* Allocate PCI resources

* Reset the hardware (issue a Global Reset) to a well-known default state

* Set up the PHY and the link

* Initialize statistics counters

The PMD API must also export functions to start/stop the all-multicast feature of a port and functions to set/unset the port in promiscuous mode.

Some hardware offload features must be individually configured at port initialization through specific configuration parameters.
This is the case for the Receive Side Scaling (RSS) and Data Center Bridging (DCB) features, for example.

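Putting these operations together, a minimal port bring-up sequence through the ethdev API could look like the sketch below
(the queue counts, descriptor ring sizes and the ``mbuf_pool`` argument are assumptions for the example):

.. code-block:: c

    #include <rte_ethdev.h>

    /* Minimal port bring-up: configure, set up one RX and one TX queue,
     * start the port, and enable promiscuous mode. */
    static int
    port_init(uint16_t port_id, struct rte_mempool *mbuf_pool)
    {
        struct rte_eth_conf port_conf = {0};  /* default configuration */
        int ret;

        ret = rte_eth_dev_configure(port_id, 1 /* RX queues */,
                                    1 /* TX queues */, &port_conf);
        if (ret != 0)
            return ret;

        ret = rte_eth_rx_queue_setup(port_id, 0, 512,
                                     rte_eth_dev_socket_id(port_id),
                                     NULL, mbuf_pool);
        if (ret < 0)
            return ret;

        ret = rte_eth_tx_queue_setup(port_id, 0, 512,
                                     rte_eth_dev_socket_id(port_id), NULL);
        if (ret < 0)
            return ret;

        ret = rte_eth_dev_start(port_id);
        if (ret < 0)
            return ret;

        return rte_eth_promiscuous_enable(port_id);
    }
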
On-the-Fly Configuration
~~~~~~~~~~~~~~~~~~~~~~~~

All device features that can be started or stopped "on the fly" (that is, without stopping the device) do not require the PMD API to export dedicated functions for this purpose.

All that is required is the mapping address of the device PCI registers to implement the configuration of these features in specific functions outside of the drivers.

For this purpose,
the PMD API exports a function that provides all the information associated with a device that can be used to set up a given device feature outside of the driver.
This includes the PCI vendor identifier, the PCI device identifier, the mapping address of the PCI device registers, and the name of the driver.

The main advantage of this approach is that it gives complete freedom on the choice of the API used to configure, to start, and to stop such features.

As an example, refer to the configuration of the IEEE1588 feature for the Intel® 82576 Gigabit Ethernet Controller and
the Intel® 82599 10 Gigabit Ethernet Controller in the testpmd application.

Other features such as the L3/L4 5-Tuple packet filtering feature of a port can be configured in the same way.
Ethernet* flow control (pause frame) can be configured on an individual port.
Refer to the testpmd source code for details.
Also, L4 (UDP/TCP/SCTP) checksum offload by the NIC can be enabled for an individual packet as long as the packet mbuf is set up correctly. See `Hardware Offload`_ for details.

Configuration of Transmit Queues
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Each transmit queue is independently configured with the following information:

* The number of descriptors of the transmit ring

* The socket identifier used to identify the appropriate DMA memory zone from which to allocate the transmit ring in NUMA architectures

* The values of the Prefetch, Host and Write-Back threshold registers of the transmit queue

* The *minimum* transmit packets to free threshold (tx_free_thresh).
  When the number of descriptors used to transmit packets exceeds this threshold, the network adaptor should be checked to see if it has written back descriptors.
  A value of 0 can be passed during the TX queue configuration to indicate that the default value should be used.
  The default value for tx_free_thresh is 32.
  This ensures that the PMD does not search for completed descriptors until at least 32 have been processed by the NIC for this queue.

* The *minimum* RS bit threshold. The minimum number of transmit descriptors to use before setting the Report Status (RS) bit in the transmit descriptor.
  Note that this parameter may only be valid for Intel 10 GbE network adapters.
  The RS bit is set on the last descriptor used to transmit a packet if the number of descriptors used since the last RS bit setting,
  up to the first descriptor used to transmit the packet, exceeds the transmit RS bit threshold (tx_rs_thresh).
  In short, this parameter controls which transmit descriptors are written back to host memory by the network adapter.
  A value of 0 can be passed during the TX queue configuration to indicate that the default value should be used.
  The default value for tx_rs_thresh is 32.
  This ensures that at least 32 descriptors are used before the network adapter writes back the most recently used descriptor.
  This saves upstream PCIe* bandwidth resulting from TX descriptor write-backs.
  It is important to note that the TX Write-back threshold (TX wthresh) should be set to 0 when tx_rs_thresh is greater than 1.
  Refer to the Intel® 82599 10 Gigabit Ethernet Controller Datasheet for more details.

The following constraints must be satisfied for tx_free_thresh and tx_rs_thresh:

* tx_rs_thresh must be greater than 0.

* tx_rs_thresh must be less than the size of the ring minus 2.

* tx_rs_thresh must be less than or equal to tx_free_thresh.

* tx_free_thresh must be greater than 0.

* tx_free_thresh must be less than the size of the ring minus 3.

* For optimal performance, TX wthresh should be set to 0 when tx_rs_thresh is greater than 1.

One descriptor in the TX ring is used as a sentinel to avoid a hardware race condition, hence the maximum threshold constraints.
A sketch of a queue setup that satisfies these constraints follows the note below.

.. note::

   When configuring for DCB operation, at port initialization, both the number of transmit queues and the number of receive queues must be set to 128.

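A minimal sketch, starting from the driver defaults and applying threshold values that satisfy the constraints above
(the ring size and the particular threshold values are assumptions):

.. code-block:: c

    #include <rte_ethdev.h>

    /* Sketch: configure a TX queue with explicit threshold values. */
    static int
    setup_tx_queue(uint16_t port_id, uint16_t queue_id)
    {
        struct rte_eth_dev_info dev_info;
        struct rte_eth_txconf txconf;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
            return ret;

        txconf = dev_info.default_txconf;  /* start from the driver defaults */
        txconf.tx_free_thresh = 32;        /* check for written-back descriptors
                                            * once 32 are pending */
        txconf.tx_rs_thresh = 32;          /* set the RS bit every 32 descriptors */
        txconf.tx_thresh.wthresh = 0;      /* wthresh must be 0 when tx_rs_thresh > 1 */

        return rte_eth_tx_queue_setup(port_id, queue_id, 1024,
                                      rte_eth_dev_socket_id(port_id), &txconf);
    }
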
Free Tx mbuf on Demand
~~~~~~~~~~~~~~~~~~~~~~

Many of the drivers do not release the mbuf back to the mempool, or local cache,
immediately after the packet has been transmitted.
Instead, they leave the mbuf in their Tx ring and
either perform a bulk release when the ``tx_rs_thresh`` has been crossed
or free the mbuf when a slot in the Tx ring is needed.

An application can request the driver to release used mbufs with the ``rte_eth_tx_done_cleanup()`` API.
This API requests the driver to release mbufs that are no longer in use,
independent of whether or not the ``tx_rs_thresh`` has been crossed.
There are two scenarios when an application may want the mbufs released immediately:

* When a given packet needs to be sent to multiple destination interfaces
  (either for Layer 2 flooding or Layer 3 multi-cast).
  One option is to make a copy of the packet or a copy of the header portion that needs to be manipulated.
  A second option is to transmit the packet and then poll the ``rte_eth_tx_done_cleanup()`` API
  until the reference count on the packet is decremented.
  Then the same packet can be transmitted to the next destination interface.
  The application is still responsible for managing any packet manipulations needed
  between the different destination interfaces, but a packet copy can be avoided.
  This API is independent of whether the packet was transmitted or dropped;
  it only indicates that the mbuf is no longer in use by the interface.
  A sketch of this pattern is shown after this list.

* Some applications are designed to make multiple runs, like a packet generator.
  For performance reasons and consistency between runs,
  the application may want to reset back to an initial state
  between each run, where all mbufs are returned to the mempool.
  In this case, it can call the ``rte_eth_tx_done_cleanup()`` API
  for each destination interface it has been using
  to request the release of all its used mbufs.

To determine if a driver supports this API, check for the *Free Tx mbuf on demand* feature
in the *Network Interface Controller Drivers* document.

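The following sketch illustrates the first scenario, flooding one packet to several interfaces without copying it;
the ports array, the use of queue 0 and the busy polling are assumptions for the example:

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_mbuf.h>

    /* Sketch: send one packet out of several ports without copying it.
     * After each transmit, poll rte_eth_tx_done_cleanup() until the driver
     * releases its reference to the mbuf, then reuse it for the next port. */
    static void
    flood_packet(struct rte_mbuf *pkt, const uint16_t *ports, uint16_t nb_ports)
    {
        uint16_t i;

        for (i = 0; i < nb_ports; i++) {
            rte_mbuf_refcnt_update(pkt, 1);          /* keep our own reference */
            while (rte_eth_tx_burst(ports[i], 0, &pkt, 1) == 0)
                ;                                    /* retry until queued */
            /* Wait until the interface no longer uses the mbuf. */
            while (rte_mbuf_refcnt_read(pkt) > 1)
                rte_eth_tx_done_cleanup(ports[i], 0, 0);
        }
        rte_pktmbuf_free(pkt);                       /* drop our reference */
    }
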
Hardware Offload
~~~~~~~~~~~~~~~~

Depending on driver capabilities advertised by
``rte_eth_dev_info_get()``, the PMD may support hardware offloading
features like checksumming, TCP segmentation, VLAN insertion or
lockfree multithreaded TX burst on the same TX queue.

The support of these offload features implies the addition of dedicated
status bit(s) and value field(s) into the rte_mbuf data structure, along
with their appropriate handling by the receive/transmit functions
exported by each PMD. The list of flags and their precise meaning is
described in the mbuf API documentation and in the :ref:`mbuf_meta` chapter.

Per-Port and Per-Queue Offloads
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

In the DPDK offload API, offloads are divided into per-port and per-queue offloads as follows:

* A per-queue offload can be enabled on one queue and disabled on another queue at the same time.
* A pure per-port offload is one supported by the device but not as a per-queue type.
* A pure per-port offload can't be enabled on one queue and disabled on another queue at the same time.
* A pure per-port offload must be enabled or disabled on all queues at the same time.
* Any offload is either per-queue or pure per-port type; it can't be both types on the same device.
* Port capabilities = per-queue capabilities + pure per-port capabilities.
* Any supported offload can be enabled on all queues.

The different offload capabilities can be queried using ``rte_eth_dev_info_get()``.
The ``dev_info->[rt]x_queue_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all per-queue offloading capabilities.
The ``dev_info->[rt]x_offload_capa`` returned from ``rte_eth_dev_info_get()`` includes all pure per-port and per-queue offloading capabilities.
Supported offloads can be either per-port or per-queue.

Offloads are enabled using the existing ``RTE_ETH_TX_OFFLOAD_*`` or ``RTE_ETH_RX_OFFLOAD_*`` flags.
Any offload requested by an application must be within the device capabilities.
An offload is disabled by default if it is set neither in the parameter
``dev_conf->[rt]xmode.offloads`` to ``rte_eth_dev_configure()`` nor in
``[rt]x_conf->offloads`` to ``rte_eth_[rt]x_queue_setup()``.

If any offload is enabled in ``rte_eth_dev_configure()`` by an application,
it is enabled on all queues no matter whether it is per-queue or
per-port type and no matter whether it is set or cleared in
``[rt]x_conf->offloads`` to ``rte_eth_[rt]x_queue_setup()``.

If a per-queue offload hasn't been enabled in ``rte_eth_dev_configure()``,
it can be enabled or disabled in ``rte_eth_[rt]x_queue_setup()`` for an individual queue.
A newly added offload in ``[rt]x_conf->offloads`` to ``rte_eth_[rt]x_queue_setup()`` input by the application
is one which hasn't been enabled in ``rte_eth_dev_configure()`` and is requested to be enabled
in ``rte_eth_[rt]x_queue_setup()``. It must be of per-queue type, otherwise an error log is triggered.

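The following sketch shows how an application might probe the capabilities and request one per-port and one per-queue offload;
the particular offload flags chosen here are assumptions for the example:

.. code-block:: c

    #include <rte_ethdev.h>

    /* Sketch: enable a port-wide offload at configure time and a per-queue
     * offload at queue-setup time, after checking device capabilities. */
    static int
    configure_with_offloads(uint16_t port_id, struct rte_mempool *mbuf_pool)
    {
        struct rte_eth_dev_info dev_info;
        struct rte_eth_conf port_conf = {0};
        struct rte_eth_rxconf rxconf;
        int ret;

        ret = rte_eth_dev_info_get(port_id, &dev_info);
        if (ret != 0)
            return ret;

        /* Request IPv4 checksum offload on all RX queues, if supported. */
        if (dev_info.rx_offload_capa & RTE_ETH_RX_OFFLOAD_IPV4_CKSUM)
            port_conf.rxmode.offloads |= RTE_ETH_RX_OFFLOAD_IPV4_CKSUM;

        ret = rte_eth_dev_configure(port_id, 1, 1, &port_conf);
        if (ret != 0)
            return ret;

        /* Enable VLAN stripping on queue 0 only, if the device reports it
         * as a per-queue capability. */
        rxconf = dev_info.default_rxconf;
        rxconf.offloads = 0;
        if (dev_info.rx_queue_offload_capa & RTE_ETH_RX_OFFLOAD_VLAN_STRIP)
            rxconf.offloads |= RTE_ETH_RX_OFFLOAD_VLAN_STRIP;

        return rte_eth_rx_queue_setup(port_id, 0, 512,
                                      rte_eth_dev_socket_id(port_id),
                                      &rxconf, mbuf_pool);
    }
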
Poll Mode Driver API
--------------------

Generalities
~~~~~~~~~~~~

By default, all functions exported by a PMD are lock-free functions that are assumed
not to be invoked in parallel on different logical cores to work on the same target object.
For instance, a PMD receive function cannot be invoked in parallel on two logical cores to poll the same RX queue of the same port.
Of course, this function can be invoked in parallel by different logical cores on different RX queues.
It is the responsibility of the upper-level application to enforce this rule.

If needed, parallel accesses by multiple logical cores to shared queues can be explicitly protected by dedicated inline lock-aware functions
built on top of their corresponding lock-free functions of the PMD API.

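For example, a minimal sketch of such a lock-aware wrapper; the ``shared_txq`` structure is an assumption introduced for illustration:

.. code-block:: c

    #include <rte_ethdev.h>
    #include <rte_spinlock.h>

    /* Sketch: a lock-aware transmit wrapper for the case where several
     * lcores must share one TX queue. The PMD function itself stays
     * lock-free; the application serializes access around it. */
    struct shared_txq {
        uint16_t port_id;
        uint16_t queue_id;
        rte_spinlock_t lock;
    };

    static inline uint16_t
    shared_tx_burst(struct shared_txq *txq, struct rte_mbuf **pkts,
                    uint16_t nb_pkts)
    {
        uint16_t sent;

        rte_spinlock_lock(&txq->lock);
        sent = rte_eth_tx_burst(txq->port_id, txq->queue_id, pkts, nb_pkts);
        rte_spinlock_unlock(&txq->lock);
        return sent;
    }
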
Generic Packet Representation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A packet is represented by an rte_mbuf structure, which is a generic metadata structure containing all necessary housekeeping information.
This includes fields and status bits corresponding to offload hardware features, such as checksum computation of IP headers or VLAN tags.

The rte_mbuf data structure includes specific fields to represent, in a generic way, the offload features provided by network controllers.
For an input packet, most fields of the rte_mbuf structure are filled in by the PMD receive function with the information contained in the receive descriptor.
Conversely, for output packets, most fields of rte_mbuf structures are used by the PMD transmit function to initialize transmit descriptors.

See the :doc:`../mbuf_lib` chapter for more details.

Ethernet Device API
~~~~~~~~~~~~~~~~~~~

The Ethernet device API exported by the Ethernet PMDs is described in the *DPDK API Reference*.

.. _ethernet_device_standard_device_arguments:

Ethernet Device Standard Device Arguments
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Standard Ethernet device arguments allow for a set of commonly used arguments/
parameters which are applicable to all Ethernet devices to be available for
specifying a particular device and for passing common configuration
parameters to those ports.

* ``representor`` for a device which supports the creation of representor ports
  this argument allows the user to specify which switch ports to enable port
  representors for::

   -a DBDF,representor=vf0
   -a DBDF,representor=vf[0,4,6,9]
   -a DBDF,representor=vf[0-31]
   -a DBDF,representor=vf[0,2-4,7,9-11]
   -a DBDF,representor=sf0
   -a DBDF,representor=sf[1,3,5]
   -a DBDF,representor=sf[0-1023]
   -a DBDF,representor=sf[0,2-4,7,9-11]
   -a DBDF,representor=pf1vf0
   -a DBDF,representor=pf[0-1]sf[0-127]
   -a DBDF,representor=pf1
   -a DBDF,representor=[pf[0-1],pf2vf[0-2],pf3[3,5-8]]

  (Multiple representors in one device argument can be represented as a list)

Note: PMDs are not required to support the standard device arguments and users
should consult the relevant PMD documentation to see the supported devargs.

Extended Statistics API
~~~~~~~~~~~~~~~~~~~~~~~

The extended statistics API allows a PMD to expose all statistics that are
available to it, including statistics that are unique to the device.
Each statistic has three properties ``name``, ``id`` and ``value``:

* ``name``: A human readable string formatted by the scheme detailed below.
* ``id``: An integer that represents only that statistic.
* ``value``: An unsigned 64-bit integer that is the value of the statistic.

Note that extended statistic identifiers are
driver-specific, and hence might not be the same for different ports.
The API consists of various ``rte_eth_xstats_*()`` functions, and allows an
application to be flexible in how it retrieves statistics.

Scheme for Human Readable Names
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

A naming scheme exists for the strings exposed to clients of the API. This is
to allow scraping of the API for statistics of interest. The naming scheme uses
strings split by a single underscore ``_``. The scheme is as follows:

* direction
* detail 1
* detail 2
* detail n
* unit

Examples of common statistics xstats strings, formatted to comply to the scheme
proposed above:

* ``rx_bytes``
* ``rx_crc_errors``
* ``tx_multicast_packets``

The scheme, although quite simple, allows flexibility in presenting and reading
information from the statistic strings. The following example illustrates the
naming scheme: ``rx_packets``. In this example, the string is split into two
components. The first component ``rx`` indicates that the statistic is
associated with the receive side of the NIC. The second component ``packets``
indicates that the unit of measure is packets.

A more complicated example: ``tx_size_128_to_255_packets``. In this example,
``tx`` indicates transmission, ``size`` is the first detail, ``128`` etc. are
more details, and ``packets`` indicates that this is a packet counter.

Some additions in the metadata scheme are as follows:

* If the first part does not match ``rx`` or ``tx``, the statistic does not
  have an affinity with either receive or transmit.

* If the first letter of the second part is ``q`` and this ``q`` is followed
  by a number, this statistic is part of a specific queue.

An example where queue numbers are used is as follows: ``tx_q7_bytes`` which
indicates this statistic applies to queue number 7, and represents the number
of transmitted bytes on that queue.

API Design
^^^^^^^^^^

The xstats API uses the ``name``, ``id``, and ``value`` to allow performant
lookup of specific statistics. Performant lookup means two things:

* No string comparisons with the ``name`` of the statistic in fast-path
* Allow requesting of only the statistics of interest

The API ensures these requirements are met by mapping the ``name`` of the
statistic to a unique ``id``, which is used as a key for lookup in the fast-path.
The API allows applications to request an array of ``id`` values, so that the
PMD only performs the required calculations. Expected usage is that the
application scans the ``name`` of each statistic, and caches the ``id``
if it has an interest in that statistic. On the fast-path, the integer can be used
to retrieve the actual ``value`` of the statistic that the ``id`` represents.

API Functions
^^^^^^^^^^^^^

The API is built out of a small number of functions, which can be used to
retrieve the number of statistics and the names, IDs and values of those
statistics.

* ``rte_eth_xstats_get_names_by_id()``: returns the names of the statistics. When given a
  ``NULL`` parameter the function returns the number of statistics that are available.

* ``rte_eth_xstats_get_id_by_name()``: Searches for the statistic ID that matches
  ``xstat_name``. If found, the ``id`` integer is set.

* ``rte_eth_xstats_get_by_id()``: Fills in an array of ``uint64_t`` values
  matching the provided ``ids`` array. If the ``ids`` array is NULL, it
  returns all statistics that are available.

Application Usage
^^^^^^^^^^^^^^^^^

Imagine an application that wants to view the dropped packet count. If no
packets are dropped, the application does not read any other metrics for
performance reasons. If packets are dropped, the application has a particular
set of statistics that it requests. This "set" of statistics allows the app to
decide what next steps to perform. The following code-snippets show how the
xstats API can be used to achieve this goal.

First step is to get all statistics names and list them:

.. code-block:: c

    struct rte_eth_xstat_name *xstats_names;
    uint64_t *values;
    int len, i;

    /* Get number of stats */
    len = rte_eth_xstats_get_names_by_id(port_id, NULL, 0, NULL);
    if (len < 0) {
        printf("Cannot get xstats count\n");
        goto err;
    }

    xstats_names = malloc(sizeof(struct rte_eth_xstat_name) * len);
    if (xstats_names == NULL) {
        printf("Cannot allocate memory for xstat names\n");
        goto err;
    }

    /* Retrieve xstats names, passing NULL for IDs to return all statistics */
    if (len != rte_eth_xstats_get_names_by_id(port_id, xstats_names, len, NULL)) {
        printf("Cannot get xstat names\n");
        goto err;
    }

    values = malloc(sizeof(*values) * len);
    if (values == NULL) {
        printf("Cannot allocate memory for xstats\n");
        goto err;
    }

    /* Getting xstats values */
    if (len != rte_eth_xstats_get_by_id(port_id, NULL, values, len)) {
        printf("Cannot get xstat values\n");
        goto err;
    }

    /* Print all xstats names and values */
    for (i = 0; i < len; i++) {
        printf("%s: %"PRIu64"\n", xstats_names[i].name, values[i]);
    }

The application has access to the names of all of the statistics that the PMD
exposes. The application can decide which statistics are of interest, and cache
the ids of those statistics by looking up the name as follows:

.. code-block:: c

    uint64_t id;
    uint64_t value;
    const char *xstat_name = "rx_errors";

    if (!rte_eth_xstats_get_id_by_name(port_id, xstat_name, &id)) {
        rte_eth_xstats_get_by_id(port_id, &id, &value, 1);
        printf("%s: %"PRIu64"\n", xstat_name, value);
    }
    else {
        printf("Cannot find xstats with a given name\n");
        goto err;
    }

The API provides flexibility to the application so that it can look up multiple
statistics using an array containing multiple ``id`` numbers. This reduces the
function call overhead of retrieving statistics, and makes lookup of multiple
statistics simpler for the application.

.. code-block:: c

    #define APP_NUM_STATS 4
    /* application cached these ids previously; see above */
    uint64_t ids_array[APP_NUM_STATS] = {3, 4, 7, 21};
    uint64_t value_array[APP_NUM_STATS];

    /* Getting multiple xstats values from array of IDs */
    rte_eth_xstats_get_by_id(port_id, ids_array, value_array, APP_NUM_STATS);

    uint32_t i;
    for (i = 0; i < APP_NUM_STATS; i++) {
        printf("%"PRIu64": %"PRIu64"\n", ids_array[i], value_array[i]);
    }

This array lookup API for xstats allows the application to create multiple
"groups" of statistics, and look up the values of those IDs using a single API
call. As an end result, the application is able to achieve its goal of
monitoring a single statistic ("rx_errors" in this case), and if that shows
packets being dropped, it can easily retrieve a "set" of statistics using the
IDs array parameter to the ``rte_eth_xstats_get_by_id`` function.

NIC Reset API
~~~~~~~~~~~~~

.. code-block:: c

    int rte_eth_dev_reset(uint16_t port_id);

Sometimes a port has to be reset passively. For example, when a PF is
reset, all its VFs should also be reset by the application to make them
consistent with the PF. A DPDK application can also call this function
to trigger a port reset. Normally, a DPDK application would invoke this
function when an RTE_ETH_EVENT_INTR_RESET event is detected.

It is the duty of the PMD to trigger RTE_ETH_EVENT_INTR_RESET events and
the application should register a callback function to handle these
events. When a PMD needs to trigger a reset, it can trigger an
RTE_ETH_EVENT_INTR_RESET event. On receiving an RTE_ETH_EVENT_INTR_RESET
event, applications can handle it as follows: stop working queues, stop
calling Rx and Tx functions, and then call rte_eth_dev_reset(). For
thread safety, all these operations should be called from the same thread.

For example, when a PF is reset, the PF sends a message to notify VFs of
this event and also triggers an interrupt to VFs. Then in the interrupt
service routine, the VFs detect this notification message and call
rte_eth_dev_callback_process(dev, RTE_ETH_EVENT_INTR_RESET, NULL).
This means that a PF reset triggers an RTE_ETH_EVENT_INTR_RESET
event within VFs. The function rte_eth_dev_callback_process() will
call the registered callback function. The callback function can trigger
the application to handle all operations the VF reset requires including
stopping Rx/Tx queues and calling rte_eth_dev_reset().

rte_eth_dev_reset() itself is a generic function which only does
some hardware reset operations through calling dev_uninit() and
dev_init(). It does not handle synchronization, which is handled
by the application.

The PMD itself should not call rte_eth_dev_reset(). The PMD can trigger
the application to handle the reset event. It is the duty of the application to
handle all synchronization before it calls rte_eth_dev_reset().

The above error handling mode is known as ``RTE_ETH_ERROR_HANDLE_MODE_PASSIVE``.

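A sketch of the callback registration and handler described above; ``stop_datapath()`` and ``start_datapath()``
are hypothetical application helpers that quiesce and restart the lcores calling the Rx/Tx functions:

.. code-block:: c

    #include <rte_ethdev.h>

    /* Hypothetical application helpers: quiesce and restart the lcores
     * that call the Rx/Tx functions on this port. */
    extern void stop_datapath(uint16_t port_id);
    extern void start_datapath(uint16_t port_id);

    /* Callback invoked by rte_eth_dev_callback_process() on a reset event.
     * Simplified sketch: the application must ensure the data path is
     * fully stopped before rte_eth_dev_reset() is called. */
    static int
    reset_event_cb(uint16_t port_id, enum rte_eth_event_type event,
                   void *cb_arg, void *ret_param)
    {
        RTE_SET_USED(event);
        RTE_SET_USED(cb_arg);
        RTE_SET_USED(ret_param);

        stop_datapath(port_id);        /* stop queues and Rx/Tx calls */
        if (rte_eth_dev_reset(port_id) == 0)
            start_datapath(port_id);   /* reconfigure and resume */
        return 0;
    }

    /* Registration, typically done once at initialization time. */
    static int
    register_reset_handler(uint16_t port_id)
    {
        return rte_eth_dev_callback_register(port_id, RTE_ETH_EVENT_INTR_RESET,
                                             reset_event_cb, NULL);
    }
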
Proactive Error Handling Mode
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

This mode is known as ``RTE_ETH_ERROR_HANDLE_MODE_PROACTIVE``.
Unlike the PASSIVE mode, where the application invokes the recovery,
in PROACTIVE mode the PMD automatically recovers from an error,
and only a small amount of work is required from the application.

During error detection and automatic recovery,
the PMD sets the data path pointers to dummy functions
(which will prevent a crash),
and also makes sure the control path operations fail with the return code ``-EBUSY``.

Because the PMD recovers automatically,
the application can only sense that the data flow is disconnected for a while
and that the control API returns an error in this period.

In order to sense the error happening/recovering,
as well as to restore some additional configuration,
three events are available:

``RTE_ETH_EVENT_ERR_RECOVERING``
   Notify the application that an error is detected
   and the recovery is being started.
   Upon receiving the event, the application should not invoke
   any control path function until receiving the
   ``RTE_ETH_EVENT_RECOVERY_SUCCESS`` or ``RTE_ETH_EVENT_RECOVERY_FAILED`` event.

.. note::

   Before the PMD reports the recovery result,
   the PMD may report the ``RTE_ETH_EVENT_ERR_RECOVERING`` event again,
   because a larger error may occur during the recovery.

``RTE_ETH_EVENT_RECOVERY_SUCCESS``
   Notify the application that the recovery from the error was successful,
   the PMD has already re-configured the port,
   and the effect is the same as a restart operation.

``RTE_ETH_EVENT_RECOVERY_FAILED``
   Notify the application that the recovery from the error failed,
   and the port should not be used anymore.
   The application should close the port.

The error handling mode supported by the PMD can be reported through
``rte_eth_dev_info_get()``.

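A sketch of an application callback covering the three events; ``pause_control_path()`` and ``resume_control_path()``
are hypothetical application helpers, and the callback would be registered with ``rte_eth_dev_callback_register()`` for each event:

.. code-block:: c

    #include <rte_ethdev.h>

    /* Hypothetical application helpers: stop and resume issuing control
     * path operations for this port. */
    extern void pause_control_path(uint16_t port_id);
    extern void resume_control_path(uint16_t port_id);

    /* Sketch: track the PMD's automatic recovery through the three events. */
    static int
    recovery_event_cb(uint16_t port_id, enum rte_eth_event_type event,
                      void *cb_arg, void *ret_param)
    {
        RTE_SET_USED(cb_arg);
        RTE_SET_USED(ret_param);

        switch (event) {
        case RTE_ETH_EVENT_ERR_RECOVERING:
            pause_control_path(port_id);   /* control ops now fail with -EBUSY */
            break;
        case RTE_ETH_EVENT_RECOVERY_SUCCESS:
            resume_control_path(port_id);  /* port re-configured by the PMD */
            break;
        case RTE_ETH_EVENT_RECOVERY_FAILED:
            rte_eth_dev_close(port_id);    /* port is not usable anymore */
            break;
        default:
            break;
        }
        return 0;
    }
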