..  BSD LICENSE

    Copyright(c) 2010-2014 Intel Corporation. All rights reserved.
    All rights reserved.

    Redistribution and use in source and binary forms, with or without
    modification, are permitted provided that the following conditions
    are met:

    * Redistributions of source code must retain the above copyright
      notice, this list of conditions and the following disclaimer.
    * Redistributions in binary form must reproduce the above copyright
      notice, this list of conditions and the following disclaimer in
      the documentation and/or other materials provided with the
      distribution.
    * Neither the name of Intel Corporation nor the names of its
      contributors may be used to endorse or promote products derived
      from this software without specific prior written permission.

    THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
    "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
    LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
    A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
    OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
    SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
    LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
    DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
    THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
    (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
    OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.

Packet Framework
================

Design Objectives
-----------------

The main design objectives for the DPDK Packet Framework are:

* Provide a standard methodology to build complex packet processing pipelines.
  Provide reusable and extensible templates for the commonly used pipeline functional blocks;

* Provide the capability to switch between pure software and hardware-accelerated implementations
  for the same pipeline functional block;

* Provide the best trade-off between flexibility and performance.
  Hardcoded pipelines usually provide the best performance, but are not flexible,
  while flexible frameworks are easy to develop, but usually have lower performance;

* Provide a framework that is logically similar to OpenFlow.

Overview
--------

Packet processing applications are frequently structured as pipelines of multiple stages,
with the logic of each stage glued around a lookup table.
For each incoming packet, the table defines the set of actions to be applied to the packet,
as well as the next stage to send the packet to.

The DPDK Packet Framework minimizes the development effort required to build packet processing pipelines
by defining a standard methodology for pipeline development,
as well as providing libraries of reusable templates for the commonly used pipeline blocks.

The pipeline is constructed by connecting the set of input ports with the set of output ports
through a set of tables in a tree-like topology.
As the result of the lookup operation for the current packet in the current table,
one of the table entries (on lookup hit) or the default table entry (on lookup miss)
provides the set of actions to be applied to the current packet,
as well as the next hop for the packet, which can be either another table, an output port or packet drop.

An example of a packet processing pipeline is presented in Figure 32:

.. _pg_figure_32:

**Figure 32 Example of Packet Processing Pipeline where Input Ports 0 and 1 are Connected with Output Ports 0, 1 and 2 through Tables 0 and 1**
|figure32|

Port Library Design
-------------------

Port Types
~~~~~~~~~~

Table 19 is a non-exhaustive list of ports that can be implemented with the Packet Framework.

.. _pg_table_19:

**Table 19 Port Types**

+---+------------------+--------------------------------------------------------------------------+
| # | Port type        | Description                                                              |
+===+==================+==========================================================================+
| 1 | SW ring          | SW circular buffer used for message passing between the application      |
|   |                  | threads. Uses the DPDK rte_ring primitive. Expected to be the most       |
|   |                  | commonly used type of port.                                              |
+---+------------------+--------------------------------------------------------------------------+
| 2 | HW ring          | Queue of buffer descriptors used to interact with NIC, switch or         |
|   |                  | accelerator ports. For NIC ports, it uses the DPDK rte_eth_rx_queue or   |
|   |                  | rte_eth_tx_queue primitives.                                             |
+---+------------------+--------------------------------------------------------------------------+
| 3 | IP reassembly    | Input packets are either IP fragments or complete IP datagrams. Output   |
|   |                  | packets are complete IP datagrams.                                       |
+---+------------------+--------------------------------------------------------------------------+
| 4 | IP fragmentation | Input packets are jumbo (IP datagrams with length bigger than MTU) or    |
|   |                  | non-jumbo packets. Output packets are non-jumbo packets.                 |
+---+------------------+--------------------------------------------------------------------------+
| 5 | Traffic manager  | Traffic manager attached to a specific NIC output port, performing       |
|   |                  | congestion management and hierarchical scheduling according to           |
|   |                  | pre-defined SLAs.                                                        |
+---+------------------+--------------------------------------------------------------------------+
| 6 | KNI              | Send/receive packets to/from Linux kernel space.                         |
+---+------------------+--------------------------------------------------------------------------+
| 7 | Source           | Input port used as packet generator. Similar to Linux kernel /dev/zero   |
|   |                  | character device.                                                        |
+---+------------------+--------------------------------------------------------------------------+
| 8 | Sink             | Output port used to drop all input packets. Similar to Linux kernel      |
|   |                  | /dev/null character device.                                              |
+---+------------------+--------------------------------------------------------------------------+

Port Interface
~~~~~~~~~~~~~~

Each port is unidirectional, i.e. either an input port or an output port.
Each input/output port is required to implement an abstract interface that
defines the initialization and run-time operation of the port.
The port abstract interface is described in Table 20.

.. _pg_table_20:

**Table 20 Port Abstract Interface**

+---+----------------+--------------------------------------------------------------------------+
| # | Port Operation | Description                                                              |
+===+================+==========================================================================+
| 1 | Create         | Create the low-level port object (e.g. queue). Can internally allocate   |
|   |                | memory.                                                                  |
+---+----------------+--------------------------------------------------------------------------+
| 2 | Free           | Free the resources (e.g. memory) used by the low-level port object.      |
+---+----------------+--------------------------------------------------------------------------+
| 3 | RX             | Read a burst of input packets. Non-blocking operation. Only defined for  |
|   |                | input ports.                                                             |
+---+----------------+--------------------------------------------------------------------------+
| 4 | TX             | Write a burst of input packets. Non-blocking operation. Only defined     |
|   |                | for output ports.                                                        |
+---+----------------+--------------------------------------------------------------------------+
| 5 | Flush          | Flush the output buffer. Only defined for output ports.                  |
+---+----------------+--------------------------------------------------------------------------+

Table Library Design
--------------------

Table Types
~~~~~~~~~~~

.. _pg_table_21:

Table 21 is a non-exhaustive list of types of tables that can be implemented with the Packet Framework.

**Table 21 Table Types**

+---+----------------------------+-----------------------------------------------------------------------------+
| # | Table Type                 | Description                                                                 |
+===+============================+=============================================================================+
| 1 | Hash table                 | Lookup key is n-tuple based.                                                |
|   |                            |                                                                             |
|   |                            | Typically, the lookup key is hashed to produce a signature that is used to  |
|   |                            | identify a bucket of entries where the lookup key is searched next.         |
|   |                            |                                                                             |
|   |                            | The signature associated with the lookup key of each input packet is either |
|   |                            | read from the packet descriptor (pre-computed signature) or computed at     |
|   |                            | table lookup time.                                                          |
|   |                            |                                                                             |
|   |                            | The table lookup, add entry and delete entry operations, as well as any     |
|   |                            | other pipeline block that pre-computes the signature, all have to use the   |
|   |                            | same hashing algorithm to generate the signature.                           |
|   |                            |                                                                             |
|   |                            | Typically used to implement flow classification tables, ARP caches, routing |
|   |                            | tables for tunnelling protocols, etc.                                       |
+---+----------------------------+-----------------------------------------------------------------------------+
| 2 | Longest Prefix Match (LPM) | Lookup key is the IP address.                                               |
|   |                            |                                                                             |
|   |                            | Each table entry has an associated IP prefix (IP and depth).                |
|   |                            |                                                                             |
|   |                            | The table lookup operation selects the IP prefix that is matched by the     |
|   |                            | lookup key; in case of multiple matches, the entry with the longest prefix  |
|   |                            | depth wins.                                                                 |
|   |                            |                                                                             |
|   |                            | Typically used to implement IP routing tables.                              |
+---+----------------------------+-----------------------------------------------------------------------------+
| 3 | Access Control List (ACL)  | Lookup key is a 7-tuple of two VLAN/MPLS labels, IP destination address,    |
|   |                            | IP source address, L4 protocol, L4 destination port, L4 source port.        |
|   |                            |                                                                             |
|   |                            | Each table entry has an associated ACL and priority. The ACL contains bit   |
|   |                            | masks for the VLAN/MPLS labels, an IP prefix for the IP destination         |
|   |                            | address, an IP prefix for the IP source address, L4 protocol and bit mask,  |
|   |                            | L4 destination port and bit mask, L4 source port and bit mask.              |
|   |                            |                                                                             |
|   |                            | The table lookup operation selects the ACL that is matched by the lookup    |
|   |                            | key; in case of multiple matches, the entry with the highest priority wins. |
|   |                            |                                                                             |
|   |                            | Typically used to implement rule databases for firewalls, etc.              |
+---+----------------------------+-----------------------------------------------------------------------------+
| 4 | Pattern matching search    | Lookup key is the packet payload.                                           |
|   |                            |                                                                             |
|   |                            | Table is a database of patterns, with each pattern having a priority        |
|   |                            | assigned.                                                                   |
|   |                            |                                                                             |
|   |                            | The table lookup operation selects the pattern that is matched by the       |
|   |                            | input packet; in case of multiple matches, the matching pattern with the    |
|   |                            | highest priority wins.                                                      |
+---+----------------------------+-----------------------------------------------------------------------------+
| 5 | Array                      | Lookup key is the table entry index itself.                                 |
+---+----------------------------+-----------------------------------------------------------------------------+

Table Interface
~~~~~~~~~~~~~~~

Each table is required to implement an abstract interface that defines the initialization
and run-time operation of the table.
The table abstract interface is described in Table 29.

.. _pg_table_29_1:

**Table 29 Table Abstract Interface**

+---+-----------------+----------------------------------------------------------------------------+
| # | Table operation | Description                                                                |
+===+=================+============================================================================+
| 1 | Create          | Create the low-level data structures of the lookup table. Can internally   |
|   |                 | allocate memory.                                                           |
+---+-----------------+----------------------------------------------------------------------------+
| 2 | Free            | Free up all the resources used by the lookup table.                        |
+---+-----------------+----------------------------------------------------------------------------+
| 3 | Add entry       | Add a new entry to the lookup table.                                       |
+---+-----------------+----------------------------------------------------------------------------+
| 4 | Delete entry    | Delete a specific entry from the lookup table.                             |
+---+-----------------+----------------------------------------------------------------------------+
| 5 | Lookup          | Look up a burst of input packets and return a bit mask specifying the      |
|   |                 | result of the lookup operation for each packet: a set bit signifies a      |
|   |                 | lookup hit for the corresponding packet, while a cleared bit signifies a   |
|   |                 | lookup miss.                                                               |
|   |                 |                                                                            |
|   |                 | For each lookup hit packet, the lookup operation also returns a pointer    |
|   |                 | to the table entry that was hit, which contains the actions to be applied  |
|   |                 | to the packet and any associated metadata.                                 |
|   |                 |                                                                            |
|   |                 | For each lookup miss packet, the actions to be applied to the packet and   |
|   |                 | any associated metadata are specified by the default table entry           |
|   |                 | preconfigured for lookup miss.                                             |
+---+-----------------+----------------------------------------------------------------------------+


Hash Table Design
~~~~~~~~~~~~~~~~~

Hash Table Overview
^^^^^^^^^^^^^^^^^^^

Hash tables are important because the key lookup operation is optimized for speed:
instead of having to linearly search the lookup key through all the keys in the table,
the search is limited to only the keys stored in a single table bucket.

**Associative Arrays**

An associative array is a function that can be specified as a set of (key, value) pairs,
with each key from the possible set of input keys present at most once.
For a given associative array, the possible operations are:

#. *add (key, value)*: When no value is currently associated with *key*, then the (*key*, *value*) association is created.
   When *key* is already associated with value *value0*, then the association (*key*, *value0*) is removed
   and the association *(key, value)* is created;

#. *delete key*: When no value is currently associated with *key*, this operation has no effect.
   When *key* is already associated with *value*, then the association *(key, value)* is removed;

#. *lookup key*: When no value is currently associated with *key*, then this operation returns a void value (lookup miss).
   When *key* is associated with *value*, then this operation returns *value*.
   The *(key, value)* association is not changed.
The matching criterion used to compare the input key against the keys in the associative array is *exact match*,
as the key size (number of bytes) and the key value (array of bytes) have to match exactly for the two keys under comparison.

**Hash Function**

A hash function deterministically maps data of variable length (key) to data of fixed size (hash value or key signature).
Typically, the size of the key is bigger than the size of the key signature.
The hash function basically compresses a long key into a short signature.
Several keys can share the same signature (collisions).

High quality hash functions have uniform distribution.
For a large number of keys, when dividing the space of signature values into a fixed number of equal intervals (buckets),
it is desirable to have the key signatures evenly distributed across these intervals (uniform distribution),
as opposed to most of the signatures going into only a few of the intervals
and the rest of the intervals being largely unused (non-uniform distribution).

**Hash Table**

A hash table is an associative array that uses a hash function for its operation.
The reason for using a hash function is to optimize the performance of the lookup operation
by minimizing the number of table keys that have to be compared against the input key.

Instead of storing the (key, value) pairs in a single list, the hash table maintains multiple lists (buckets).
For any given key, there is a single bucket where that key might exist, and this bucket is uniquely identified based on the key signature.
Once the key signature is computed and the hash table bucket identified,
the key is either located in this bucket or it is not present in the hash table at all,
so the key search can be narrowed down from the full set of keys currently in the table
to just the set of keys currently in the identified table bucket.
The performance of the hash table lookup operation is greatly improved,
provided that the table keys are evenly distributed amongst the hash table buckets,
which can be achieved by using a hash function with uniform distribution.
The rule to map a key to its bucket can simply be to use the key signature (modulo the number of table buckets) as the table bucket ID:

    *bucket_id = f_hash(key) % n_buckets;*

By selecting the number of buckets to be a power of two, the modulo operator can be replaced by a bitwise AND logical operation:

    *bucket_id = f_hash(key) & (n_buckets - 1);*

Considering *n_bits* as the number of bits set in *bucket_mask = n_buckets - 1*,
this means that all the keys that end up in the same hash table bucket have the lower *n_bits* of their signature identical.
In order to reduce the number of keys in the same bucket (collisions), the number of hash table buckets needs to be increased.

In a packet processing context, the sequence of operations involved in hash table operations is described in Figure 33:

.. _pg_figure_33:

**Figure 33 Sequence of Steps for Hash Table Operations in a Packet Processing Context**

|figure33|


Hash Table Use Cases
^^^^^^^^^^^^^^^^^^^^

**Flow Classification**

*Description:* The flow classification is executed at least once for each input packet.
This operation maps each incoming packet against one of the known traffic flows in the flow database that typically contains millions of flows.

*Hash table name:* Flow classification table

*Number of keys:* Millions

*Key format:* n-tuple of packet fields that uniquely identify a traffic flow/connection.
Example: DiffServ 5-tuple of (Source IP address, Destination IP address, L4 protocol, L4 protocol source port, L4 protocol destination port).
For the IPv4 protocol and L4 protocols like TCP, UDP or SCTP, the size of the DiffServ 5-tuple is 13 bytes, while for IPv6 it is 37 bytes.

*Key value (key data):* actions and action meta-data describing the processing to be applied to the packets of the current flow.
The size of the data associated with each traffic flow can vary from 8 bytes to kilobytes.

**Address Resolution Protocol (ARP)**

*Description:* Once a route has been identified for an IP packet (so the output interface and the IP address of the next hop station are known),
the MAC address of the next hop station is needed in order to send this packet onto the next leg of the journey
towards its destination (as identified by its destination IP address).
The MAC address of the next hop station becomes the destination MAC address of the outgoing Ethernet frame.

*Hash table name:* ARP table

*Number of keys:* Thousands

*Key format:* The pair of (Output interface, Next Hop IP address), which is typically 5 bytes for IPv4 and 17 bytes for IPv6.

*Key value (key data):* MAC address of the next hop station (6 bytes).

Hash Table Types
^^^^^^^^^^^^^^^^

.. _pg_table_22:

Table 22 lists the hash table configuration parameters shared by all different hash table types.

**Table 22 Configuration Parameters Common for All Hash Table Types**

+---+---------------------------+------------------------------------------------------------------------------+
| # | Parameter                 | Details                                                                      |
+===+===========================+==============================================================================+
| 1 | Key size                  | Measured as number of bytes. All keys have the same size.                    |
+---+---------------------------+------------------------------------------------------------------------------+
| 2 | Key value (key data) size | Measured as number of bytes.                                                 |
+---+---------------------------+------------------------------------------------------------------------------+
| 3 | Number of buckets         | Needs to be a power of two.                                                  |
+---+---------------------------+------------------------------------------------------------------------------+
| 4 | Maximum number of keys    | Needs to be a power of two.                                                  |
+---+---------------------------+------------------------------------------------------------------------------+
| 5 | Hash function             | Examples: jhash, CRC hash, etc.                                              |
+---+---------------------------+------------------------------------------------------------------------------+
| 6 | Hash function seed        | Parameter to be passed to the hash function.                                 |
+---+---------------------------+------------------------------------------------------------------------------+
| 7 | Key offset                | Offset of the lookup key byte array within the packet meta-data stored in    |
|   |                           | the packet buffer.                                                           |
+---+---------------------------+------------------------------------------------------------------------------+

Bucket Full Problem
"""""""""""""""""""

On initialization, each hash table bucket is allocated space for exactly 4 keys.
As keys are added to the table, it can happen that a given bucket already has 4 keys when a new key has to be added to this bucket.
The possible options are:

#. **Least Recently Used (LRU) Hash Table.**
   One of the existing keys in the bucket is deleted and the new key is added in its place.
   The number of keys in each bucket never grows bigger than 4. The logic to pick the key to be dropped from the bucket is LRU.
   The hash table lookup operation maintains the order in which the keys in the same bucket are hit, so every time a key is hit,
   it becomes the new Most Recently Used (MRU) key, i.e. the last candidate for drop.
   When a key is added to the bucket, it also becomes the new MRU key.
   When a key needs to be picked and dropped, the first candidate for drop, i.e. the current LRU key, is always picked.
   The LRU logic requires maintaining specific data structures per each bucket.

#. **Extendible Bucket Hash Table.**
   The bucket is extended with space for 4 more keys.
   This is done by allocating additional memory at table initialization time,
   which is used to create a pool of free keys (the size of this pool is configurable and always a multiple of 4).
   On key add operation, the allocation of a group of 4 keys only happens successfully within the limit of free keys,
   otherwise the key add operation fails.
   On key delete operation, a group of 4 keys is freed back to the pool of free keys
   when the key to be deleted is the only key that was used within its group of 4 keys at that time.
   On key lookup operation, if the current bucket is in extended state and a match is not found in the first group of 4 keys,
   the search continues beyond the first group of 4 keys, potentially until all keys in this bucket are examined.
   The extendible bucket logic requires maintaining specific data structures per table and per each bucket.

.. _pg_table_23:

**Table 23 Configuration Parameters Specific to Extendible Bucket Hash Table**

+---+---------------------------+--------------------------------------------------+
| # | Parameter                 | Details                                          |
+===+===========================+==================================================+
| 1 | Number of additional keys | Needs to be a power of two, at least equal to 4. |
+---+---------------------------+--------------------------------------------------+


Signature Computation
"""""""""""""""""""""

The possible options for key signature computation are:

#. **Pre-computed key signature.**
   The key lookup operation is split between two CPU cores.
   The first CPU core (typically the CPU core that performs packet RX) extracts the key from the input packet,
   computes the key signature and saves both the key and the key signature in the packet buffer as packet meta-data.
   The second CPU core reads both the key and the key signature from the packet meta-data
   and performs the bucket search step of the key lookup operation.

#. **Key signature computed on lookup ("do-sig" version).**
   The same CPU core reads the key from the packet meta-data, uses it to compute the key signature
   and also performs the bucket search step of the key lookup operation.

.. _pg_table_24:

**Table 24 Configuration Parameters Specific to Pre-computed Key Signature Hash Table**

+---+------------------+-----------------------------------------------------------------------+
| # | Parameter        | Details                                                               |
+===+==================+=======================================================================+
| 1 | Signature offset | Offset of the pre-computed key signature within the packet meta-data. |
+---+------------------+-----------------------------------------------------------------------+

Key Size Optimized Hash Tables
""""""""""""""""""""""""""""""

For specific key sizes, the data structures and algorithm of the key lookup operation can be specially handcrafted for further performance improvements,
so the following options are possible:

#. **Implementation supporting configurable key size.**

#. **Implementation supporting a single key size.**
   Typical key sizes are 8 bytes and 16 bytes.

Bucket Search Logic for Configurable Key Size Hash Tables
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The performance of the bucket search logic is one of the main factors influencing the performance of the key lookup operation.
The data structures and algorithm are designed to make the best use of Intel CPU architecture resources like:
cache memory space, cache memory bandwidth, external memory bandwidth, multiple execution units working in parallel,
out of order instruction execution, special CPU instructions, etc.

The bucket search logic handles multiple input packets in parallel.
It is built as a pipeline of several stages (3 or 4), with each pipeline stage handling two different packets from the burst of input packets.
On each pipeline iteration, the packets are pushed to the next pipeline stage: for the 4-stage pipeline,
two packets (that just completed stage 3) exit the pipeline,
two packets (that just completed stage 2) are now executing stage 3, two packets (that just completed stage 1) are now executing stage 2,
two packets (that just completed stage 0) are now executing stage 1 and two packets (the next two packets to read from the burst of input packets)
are entering the pipeline to execute stage 0.
The pipeline iterations continue until all packets from the burst of input packets execute the last stage of the pipeline.

The bucket search logic is broken into pipeline stages at the boundary of the next memory access.
Each pipeline stage uses data structures that are stored (with high probability) in the L1 or L2 cache memory of the current CPU core and
breaks just before the next memory access required by the algorithm.
The current pipeline stage finalizes by prefetching the data structures required by the next pipeline stage,
so given enough time for the prefetch to complete,
when the next pipeline stage eventually gets executed for the same packets,
it will read the data structures it needs from L1 or L2 cache memory and thus avoid the significant penalty incurred by an L2 or L3 cache memory miss.
530 531By prefetching the data structures required by the next pipeline stage in advance (before they are used) 532and switching to executing another pipeline stage for different packets, 533the number of L2 or L3 cache memory misses is greatly reduced, hence one of the main reasons for improved performance. 534This is because the cost of L2/L3 cache memory miss on memory read accesses is high, as usually due to data dependency between instructions, 535the CPU execution units have to stall until the read operation is completed from L3 cache memory or external DRAM memory. 536By using prefetch instructions, the latency of memory read accesses is hidden, 537provided that it is preformed early enough before the respective data structure is actually used. 538 539By splitting the processing into several stages that are executed on different packets (the packets from the input burst are interlaced), 540enough work is created to allow the prefetch instructions to complete successfully (before the prefetched data structures are actually accessed) and 541also the data dependency between instructions is loosened. 542For example, for the 4-stage pipeline, stage 0 is executed on packets 0 and 1 and then, 543before same packets 0 and 1 are used (i.e. before stage 1 is executed on packets 0 and 1), 544different packets are used: packets 2 and 3 (executing stage 1), packets 4 and 5 (executing stage 2) and packets 6 and 7 (executing stage 3). 545By executing useful work while the data structures are brought into the L1 or L2 cache memory, the latency of the read memory accesses is hidden. 
546By increasing the gap between two consecutive accesses to the same data structure, the data dependency between instructions is loosened; 547this allows making the best use of the super-scalar and out-of-order execution CPU architecture, 548as the number of CPU core execution units that are active (rather than idle or stalled due to data dependency constraints between instructions) is maximized. 549 550The bucket search logic is also implemented without using any branch instructions. 551This avoids the important cost associated with flushing the CPU core execution pipeline on every instance of branch misprediction. 552 553Configurable Key Size Hash Table 554"""""""""""""""""""""""""""""""" 555 556Figure 34, Table 25 and Table 26 detail the main data structures used to implement configurable key size hash tables (either LRU or extendable bucket, 557either with pre-computed signature or "do-sig"). 558 559.. _pg_figure_34: 560 561**Figure 34 Data Structures for Configurable Key Size Hash Tables** 562 563.. image65_png has been renamed 564 565|figure34| 566 567.. _pg_table_25: 568 569**Table 25 Main Large Data Structures (Arrays) used for Configurable Key Size Hash Tables** 570 571+---+-------------------------+------------------------------+---------------------------+-------------------------------+ 572| # | Array name | Number of entries | Entry size (bytes) | Description | 573| | | | | | 574+===+=========================+==============================+===========================+===============================+ 575| 1 | Bucket array | n_buckets (configurable) | 32 | Buckets of the hash table. | 576| | | | | | 577+---+-------------------------+------------------------------+---------------------------+-------------------------------+ 578| 2 | Bucket extensions array | n_buckets_ext (configurable) | 32 | This array is only created | 579| | | | | for extendible bucket tables. 
|
|   |                         |                              |                           |                               |
+---+-------------------------+------------------------------+---------------------------+-------------------------------+
| 3 | Key array               | n_keys                       | key_size (configurable)   | Keys added to the hash table. |
|   |                         |                              |                           |                               |
+---+-------------------------+------------------------------+---------------------------+-------------------------------+
| 4 | Data array              | n_keys                       | entry_size (configurable) | Key values (key data)         |
|   |                         |                              |                           | associated with the hash      |
|   |                         |                              |                           | table keys.                   |
|   |                         |                              |                           |                               |
+---+-------------------------+------------------------------+---------------------------+-------------------------------+

.. _pg_table_26:

**Table 26 Field Description for Bucket Array Entry (Configurable Key Size Hash Tables)**

+---+------------------+--------------------+------------------------------------------------------------------+
| # | Field name       | Field size (bytes) | Description                                                      |
|   |                  |                    |                                                                  |
+===+==================+====================+==================================================================+
| 1 | Next Ptr/LRU     | 8                  | For LRU tables, this field represents the LRU list for the       |
|   |                  |                    | current bucket, stored as an array of 4 entries of 2 bytes each. |
|   |                  |                    | Entry 0 stores the index (0 .. 3) of the MRU key, while entry 3  |
|   |                  |                    | stores the index of the LRU key.                                 |
|   |                  |                    |                                                                  |
|   |                  |                    | For extendible bucket tables, this field represents the next     |
|   |                  |                    | pointer (i.e. the pointer to the next group of 4 keys linked to  |
|   |                  |                    | the current bucket). The next pointer is not NULL if the bucket  |
|   |                  |                    | is currently extended or NULL otherwise.                         |
|   |                  |                    | To help the branchless implementation, bit 0 (least significant  |
|   |                  |                    | bit) of this field is set to 1 if the next pointer is not NULL   |
|   |                  |                    | and to 0 otherwise.
| 611| | | | | 612+---+------------------+--------------------+------------------------------------------------------------------+ 613| 2 | Sig[0 .. 3] | 4 x 2 | If key X (X = 0 .. 3) is valid, then sig X bits 15 .. 1 store | 614| | | | the most significant 15 bits of key X signature and sig X bit 0 | 615| | | | is set to 1. | 616| | | | | 617| | | | If key X is not valid, then sig X is set to zero. | 618| | | | | 619+---+------------------+--------------------+------------------------------------------------------------------+ 620| 3 | Key Pos [0 .. 3] | 4 x 4 | If key X is valid (X = 0 .. 3), then Key Pos X represents the | 621| | | | index into the key array where key X is stored, as well as the | 622| | | | index into the data array where the value associated with key X | 623| | | | is stored. | 624| | | | | 625| | | | If key X is not valid, then the value of Key Pos X is undefined. | 626| | | | | 627+---+------------------+--------------------+------------------------------------------------------------------+ 628 629 630Figure 35 and Table 27 detail the bucket search pipeline stages (either LRU or extendable bucket, 631either with pre-computed signature or "do-sig"). 632For each pipeline stage, the described operations are applied to each of the two packets handled by that stage. 633 634.. _pg_figure_35: 635 636**Figure 35 Bucket Search Pipeline for Key Lookup Operation (Configurable Key Size Hash Tables)** 637 638|figure35| 639 640.. _pg_table_27: 641 642**Table 27 Description of the Bucket Search Pipeline Stages (Configurable Key Size Hash Tables)** 643 644+---+---------------------------+------------------------------------------------------------------------------+ 645| # | Stage name | Description | 646| | | | 647+===+===========================+==============================================================================+ 648| 0 | Prefetch packet meta-data | Select next two packets from the burst of input packets. 
|
|   |                           |                                                                              |
|   |                           | Prefetch packet meta-data containing the key and key signature.              |
|   |                           |                                                                              |
+---+---------------------------+------------------------------------------------------------------------------+
| 1 | Prefetch table bucket     | Read the key signature from the packet meta-data (for extendable bucket hash |
|   |                           | tables) or read the key from the packet meta-data and compute the key        |
|   |                           | signature (for LRU tables).                                                  |
|   |                           |                                                                              |
|   |                           | Identify the bucket ID using the key signature.                              |
|   |                           |                                                                              |
|   |                           | Set bit 0 of the signature to 1 (to match only signatures of valid keys from |
|   |                           | the table).                                                                  |
|   |                           |                                                                              |
|   |                           | Prefetch the bucket.                                                         |
|   |                           |                                                                              |
+---+---------------------------+------------------------------------------------------------------------------+
| 2 | Prefetch table key        | Read the key signatures from the bucket.                                     |
|   |                           |                                                                              |
|   |                           | Compare the signature of the input key against the 4 key signatures from the |
|   |                           | bucket. As a result, the following is obtained:                              |
|   |                           |                                                                              |
|   |                           | *match*                                                                      |
|   |                           | = equal to TRUE if there was at least one signature match and to FALSE in    |
|   |                           | the case of no signature match;                                              |
|   |                           |                                                                              |
|   |                           | *match_many*                                                                 |
|   |                           | = equal to TRUE if there was more than one signature match (there can be up  |
|   |                           | to 4 signature matches in the worst case scenario) and to FALSE otherwise;   |
|   |                           |                                                                              |
|   |                           | *match_pos*                                                                  |
|   |                           | = the index of the first key that produced a signature match (only valid if  |
|   |                           | *match* is TRUE).                                                            |
|   |                           |                                                                              |
|   |                           | For extendable bucket hash tables only, set *match_many* to TRUE if the next |
|   |                           | pointer is valid.                                                            |
|   |                           |                                                                              |
|   |                           | Prefetch the bucket key indicated by *match_pos* (even if *match_pos* does   |
|   |                           | not point to a valid key).
| 691| | | | 692+---+---------------------------+------------------------------------------------------------------------------+ 693| 3 | Prefetch table data | Read the bucket key indicated by | 694| | | *match_pos*. | 695| | | | 696| | | Compare the bucket key against the input key. As result, the following is | 697| | | obtained: | 698| | | *match_key* | 699| | | = equal to TRUE if the two keys match and to FALSE otherwise. | 700| | | | 701| | | Report input key as lookup hit only when both | 702| | | *match* | 703| | | and | 704| | | *match_key* | 705| | | are equal to TRUE and as lookup miss otherwise. | 706| | | | 707| | | For LRU tables only, use branchless logic to update the bucket LRU list | 708| | | (the current key becomes the new MRU) only on lookup hit. | 709| | | | 710| | | Prefetch the key value (key data) associated with the current key (to avoid | 711| | | branches, this is done on both lookup hit and miss). | 712| | | | 713+---+---------------------------+------------------------------------------------------------------------------+ 714 715 716Additional notes: 717 718#. The pipelined version of the bucket search algorithm is executed only if there are at least 7 packets in the burst of input packets. 719 If there are less than 7 packets in the burst of input packets, 720 a non-optimized implementation of the bucket search algorithm is executed. 721 722#. Once the pipelined version of the bucket search algorithm has been executed for all the packets in the burst of input packets, 723 the non-optimized implementation of the bucket search algorithm is also executed for any packets that did not produce a lookup hit, 724 but have the *match_many* flag set. 725 As result of executing the non-optimized version, some of these packets may produce a lookup hit or lookup miss. 
726 This does not impact the performance of the key lookup operation, 727 as the probability of matching more than one signature in the same group of 4 keys or of having the bucket in extended state 728 (for extendable bucket hash tables only) is relatively small. 729 730**Key Signature Comparison Logic** 731 732The key signature comparison logic is described in Table 28. 733 734.. _pg_table_28: 735 736**Table 28 Lookup Tables for Match, Match_Many and Match_Pos** 737 738+----+------+---------------+--------------------+--------------------+ 739| # | mask | match (1 bit) | match_many (1 bit) | match_pos (2 bits) | 740| | | | | | 741+----+------+---------------+--------------------+--------------------+ 742| 0 | 0000 | 0 | 0 | 00 | 743| | | | | | 744+----+------+---------------+--------------------+--------------------+ 745| 1 | 0001 | 1 | 0 | 00 | 746| | | | | | 747+----+------+---------------+--------------------+--------------------+ 748| 2 | 0010 | 1 | 0 | 01 | 749| | | | | | 750+----+------+---------------+--------------------+--------------------+ 751| 3 | 0011 | 1 | 1 | 00 | 752| | | | | | 753+----+------+---------------+--------------------+--------------------+ 754| 4 | 0100 | 1 | 0 | 10 | 755| | | | | | 756+----+------+---------------+--------------------+--------------------+ 757| 5 | 0101 | 1 | 1 | 00 | 758| | | | | | 759+----+------+---------------+--------------------+--------------------+ 760| 6 | 0110 | 1 | 1 | 01 | 761| | | | | | 762+----+------+---------------+--------------------+--------------------+ 763| 7 | 0111 | 1 | 1 | 00 | 764| | | | | | 765+----+------+---------------+--------------------+--------------------+ 766| 8 | 1000 | 1 | 0 | 11 | 767| | | | | | 768+----+------+---------------+--------------------+--------------------+ 769| 9 | 1001 | 1 | 1 | 00 | 770| | | | | | 771+----+------+---------------+--------------------+--------------------+ 772| 10 | 1010 | 1 | 1 | 01 | 773| | | | | | 
+----+------+---------------+--------------------+--------------------+
| 11 | 1011 | 1             | 1                  | 00                 |
|    |      |               |                    |                    |
+----+------+---------------+--------------------+--------------------+
| 12 | 1100 | 1             | 1                  | 10                 |
|    |      |               |                    |                    |
+----+------+---------------+--------------------+--------------------+
| 13 | 1101 | 1             | 1                  | 00                 |
|    |      |               |                    |                    |
+----+------+---------------+--------------------+--------------------+
| 14 | 1110 | 1             | 1                  | 01                 |
|    |      |               |                    |                    |
+----+------+---------------+--------------------+--------------------+
| 15 | 1111 | 1             | 1                  | 00                 |
|    |      |               |                    |                    |
+----+------+---------------+--------------------+--------------------+

The input *mask* has bit X (X = 0 .. 3) set to 1 if the input signature is equal to bucket signature X and set to 0 otherwise.
The outputs *match*, *match_many* and *match_pos* are 1 bit, 1 bit and 2 bits in size respectively and their meaning has been explained above.

As displayed in Table 29, the lookup tables for *match* and *match_many* can be collapsed into a single 32-bit value and the lookup table for
*match_pos* can be collapsed into a 64-bit value.
Given the input *mask*, the values for *match*, *match_many* and *match_pos* can be obtained by indexing their respective bit array to extract 1 bit,
1 bit and 2 bits respectively with branchless logic.

..
_pg_table_29: 800 801**Table 29 Collapsed Lookup Tables for Match, Match_Many and Match_Pos** 802 803+------------+------------------------------------------+-------------------+ 804| | Bit array | Hexadecimal value | 805| | | | 806+------------+------------------------------------------+-------------------+ 807| match | 1111_1111_1111_1110 | 0xFFFELLU | 808| | | | 809+------------+------------------------------------------+-------------------+ 810| match_many | 1111_1110_1110_1000 | 0xFEE8LLU | 811| | | | 812+------------+------------------------------------------+-------------------+ 813| match_pos | 0001_0010_0001_0011__0001_0010_0001_0000 | 0x12131210LLU | 814| | | | 815+------------+------------------------------------------+-------------------+ 816 817The pseudo-code is displayed in Figure 36. 818 819.. _pg_figure_36: 820 821**Figure 36 Pseudo-code for match, match_many and match_pos** 822 823 match = (0xFFFELLU >> mask) & 1; 824 825 match_many = (0xFEE8LLU >> mask) & 1; 826 827 match_pos = (0x12131210LLU >> (mask << 1)) & 3; 828 829Single Key Size Hash Tables 830""""""""""""""""""""""""""" 831 832Figure 37, Figure 38, Table 30 and 31 detail the main data structures used to implement 8-byte and 16-byte key hash tables 833(either LRU or extendable bucket, either with pre-computed signature or "do-sig"). 834 835.. _pg_figure_37: 836 837**Figure 37 Data Structures for 8-byte Key Hash Tables** 838 839.. image66_png has been renamed 840 841|figure37| 842 843.. _pg_figure_38: 844 845**Figure 38 Data Structures for 16-byte Key Hash Tables** 846 847.. image67_png has been renamed 848 849|figure38| 850 851.. 
_pg_table_30: 852 853**Table 30 Main Large Data Structures (Arrays) used for 8-byte and 16-byte Key Size Hash Tables** 854 855+---+-------------------------+------------------------------+----------------------+------------------------------------+ 856| # | Array name | Number of entries | Entry size (bytes) | Description | 857| | | | | | 858+===+=========================+==============================+======================+====================================+ 859| 1 | Bucket array | n_buckets (configurable) | *8-byte key size:* | Buckets of the hash table. | 860| | | | | | 861| | | | 64 + 4 x entry_size | | 862| | | | | | 863| | | | | | 864| | | | *16-byte key size:* | | 865| | | | | | 866| | | | 128 + 4 x entry_size | | 867| | | | | | 868+---+-------------------------+------------------------------+----------------------+------------------------------------+ 869| 2 | Bucket extensions array | n_buckets_ext (configurable) | *8-byte key size:* | This array is only created for | 870| | | | | extendible bucket tables. | 871| | | | | | 872| | | | 64 + 4 x entry_size | | 873| | | | | | 874| | | | | | 875| | | | *16-byte key size:* | | 876| | | | | | 877| | | | 128 + 4 x entry_size | | 878| | | | | | 879+---+-------------------------+------------------------------+----------------------+------------------------------------+ 880 881.. _pg_table_31: 882 883**Table 31 Field Description for Bucket Array Entry (8-byte and 16-byte Key Hash Tables)** 884 885+---+---------------+--------------------+-------------------------------------------------------------------------------+ 886| # | Field name | Field size (bytes) | Description | 887| | | | | 888+===+===============+====================+===============================================================================+ 889| 1 | Valid | 8 | Bit X (X = 0 .. 3) is set to 1 if key X is valid or to 0 otherwise. 
|
|   |               |                    |                                                                               |
|   |               |                    | Bit 4 is only used for extendible bucket tables to help with the              |
|   |               |                    | implementation of the branchless logic. In this case, bit 4 is set to 1 if    |
|   |               |                    | the next pointer is valid (not NULL) or to 0 otherwise.                       |
|   |               |                    |                                                                               |
+---+---------------+--------------------+-------------------------------------------------------------------------------+
| 2 | Next Ptr/LRU  | 8                  | For LRU tables, this field represents the LRU list for the current bucket,    |
|   |               |                    | stored as an array of 4 entries of 2 bytes each. Entry 0 stores the index     |
|   |               |                    | (0 .. 3) of the MRU key, while entry 3 stores the index of the LRU key.       |
|   |               |                    |                                                                               |
|   |               |                    | For extendible bucket tables, this field represents the next pointer (i.e.    |
|   |               |                    | the pointer to the next group of 4 keys linked to the current bucket). The    |
|   |               |                    | next pointer is not NULL if the bucket is currently extended or NULL          |
|   |               |                    | otherwise.                                                                    |
|   |               |                    |                                                                               |
+---+---------------+--------------------+-------------------------------------------------------------------------------+
| 3 | Key [0 .. 3]  | 4 x key_size       | Full keys.                                                                    |
|   |               |                    |                                                                               |
+---+---------------+--------------------+-------------------------------------------------------------------------------+
| 4 | Data [0 .. 3] | 4 x entry_size     | Full key values (key data) associated with keys 0 .. 3.                       |
|   |               |                    |                                                                               |
+---+---------------+--------------------+-------------------------------------------------------------------------------+

Figure 39 and Table 32 detail the bucket search pipeline used to implement 8-byte and 16-byte key hash tables (either LRU or extendable bucket,
either with pre-computed signature or "do-sig").
For each pipeline stage, the described operations are applied to each of the two packets handled by that stage.

.. _pg_figure_39:

**Figure 39 Bucket Search Pipeline for Key Lookup Operation (Single Key Size Hash Tables)**

|figure39|

..
_pg_table_32: 924 925**Table 32 Description of the Bucket Search Pipeline Stages (8-byte and 16-byte Key Hash Tables)** 926 927+---+---------------------------+-----------------------------------------------------------------------------+ 928| # | Stage name | Description | 929| | | | 930+===+===========================+=============================================================================+ 931| 0 | Prefetch packet meta-data | #. Select next two packets from the burst of input packets. | 932| | | | 933| | | #. Prefetch packet meta-data containing the key and key signature. | 934| | | | 935+---+---------------------------+-----------------------------------------------------------------------------+ 936| 1 | Prefetch table bucket | #. Read the key signature from the packet meta-data (for extendable bucket | 937| | | hash tables) or read the key from the packet meta-data and compute key | 938| | | signature (for LRU tables). | 939| | | | 940| | | #. Identify the bucket ID using the key signature. | 941| | | | 942| | | #. Prefetch the bucket. | 943| | | | 944+---+---------------------------+-----------------------------------------------------------------------------+ 945| 2 | Prefetch table data | #. Read the bucket. | 946| | | | 947| | | #. Compare all 4 bucket keys against the input key. | 948| | | | 949| | | #. Report input key as lookup hit only when a match is identified (more | 950| | | than one key match is not possible) | 951| | | | 952| | | #. For LRU tables only, use branchless logic to update the bucket LRU list | 953| | | (the current key becomes the new MRU) only on lookup hit. | 954| | | | 955| | | #. Prefetch the key value (key data) associated with the matched key (to | 956| | | avoid branches, this is done on both lookup hit and miss). | 957| | | | 958+---+---------------------------+-----------------------------------------------------------------------------+ 959 960Additional notes: 961 962#. 
The pipelined version of the bucket search algorithm is executed only if there are at least 5 packets in the burst of input packets. 963 If there are less than 5 packets in the burst of input packets, a non-optimized implementation of the bucket search algorithm is executed. 964 965#. For extendible bucket hash tables only, 966 once the pipelined version of the bucket search algorithm has been executed for all the packets in the burst of input packets, 967 the non-optimized implementation of the bucket search algorithm is also executed for any packets that did not produce a lookup hit, 968 but have the bucket in extended state. 969 As result of executing the non-optimized version, some of these packets may produce a lookup hit or lookup miss. 970 This does not impact the performance of the key lookup operation, 971 as the probability of having the bucket in extended state is relatively small. 972 973Pipeline Library Design 974----------------------- 975 976A pipeline is defined by: 977 978#. The set of input ports; 979 980#. The set of output ports; 981 982#. The set of tables; 983 984#. The set of actions. 985 986The input ports are connected with the output ports through tree-like topologies of interconnected tables. 987The table entries contain the actions defining the operations to be executed on the input packets and the packet flow within the pipeline. 988 989Connectivity of Ports and Tables 990~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 991 992To avoid any dependencies on the order in which pipeline elements are created, 993the connectivity of pipeline elements is defined after all the pipeline input ports, 994output ports and tables have been created. 995 996General connectivity rules: 997 998#. Each input port is connected to a single table. No input port should be left unconnected; 999 1000#. The table connectivity to other tables or to output ports is regulated by the next hop actions of each table entry and the default table entry. 
1001 The table connectivity is fluid, as the table entries and the default table entry can be updated during run-time. 1002 1003 * A table can have multiple entries (including the default entry) connected to the same output port. 1004 A table can have different entries connected to different output ports. 1005 Different tables can have entries (including default table entry) connected to the same output port. 1006 1007 * A table can have multiple entries (including the default entry) connected to another table, 1008 in which case all these entries have to point to the same table. 1009 This constraint is enforced by the API and prevents tree-like topologies from being created (allowing table chaining only), 1010 with the purpose of simplifying the implementation of the pipeline run-time execution engine. 1011 1012Port Actions 1013~~~~~~~~~~~~ 1014 1015Port Action Handler 1016^^^^^^^^^^^^^^^^^^^ 1017 1018An action handler can be assigned to each input/output port to define actions to be executed on each input packet that is received by the port. 1019Defining the action handler for a specific input/output port is optional (i.e. the action handler can be disabled). 1020 1021For input ports, the action handler is executed after RX function. For output ports, the action handler is executed before the TX function. 1022 1023The action handler can decide to drop packets. 1024 1025Table Actions 1026~~~~~~~~~~~~~ 1027 1028Table Action Handler 1029^^^^^^^^^^^^^^^^^^^^ 1030 1031An action handler to be executed on each input packet can be assigned to each table. 1032Defining the action handler for a specific table is optional (i.e. the action handler can be disabled). 1033 1034The action handler is executed after the table lookup operation is performed and the table entry associated with each input packet is identified. 1035The action handler can only handle the user-defined actions, while the reserved actions (e.g. the next hop actions) are handled by the Packet Framework. 
1036The action handler can decide to drop the input packet. 1037 1038Reserved Actions 1039^^^^^^^^^^^^^^^^ 1040 1041The reserved actions are handled directly by the Packet Framework without the user being able to change their meaning 1042through the table action handler configuration. 1043A special category of the reserved actions is represented by the next hop actions, which regulate the packet flow between input ports, 1044tables and output ports through the pipeline. 1045Table 33 lists the next hop actions. 1046 1047.. _pg_table_33: 1048 1049**Table 33 Next Hop Actions (Reserved)** 1050 1051+---+---------------------+-----------------------------------------------------------------------------------+ 1052| # | Next hop action | Description | 1053| | | | 1054+===+=====================+===================================================================================+ 1055| 1 | Drop | Drop the current packet. | 1056| | | | 1057+---+---------------------+-----------------------------------------------------------------------------------+ 1058| 2 | Send to output port | Send the current packet to specified output port. The output port ID is metadata | 1059| | | stored in the same table entry. | 1060| | | | 1061+---+---------------------+-----------------------------------------------------------------------------------+ 1062| 3 | Send to table | Send the current packet to specified table. The table ID is metadata stored in | 1063| | | the same table entry. | 1064| | | | 1065+---+---------------------+-----------------------------------------------------------------------------------+ 1066 1067User Actions 1068^^^^^^^^^^^^ 1069 1070For each table, the meaning of user actions is defined through the configuration of the table action handler. 1071Different tables can be configured with different action handlers, therefore the meaning of the user actions 1072and their associated meta-data is private to each table. 
1073Within the same table, all the table entries (including the table default entry) share the same definition 1074for the user actions and their associated meta-data, 1075with each table entry having its own set of enabled user actions and its own copy of the action meta-data. 1076Table 34 contains a non-exhaustive list of user action examples. 1077 1078.. _pg_table_34: 1079 1080**Table 34 User Action Examples** 1081 1082+---+-----------------------------------+---------------------------------------------------------------------+ 1083| # | User action | Description | 1084| | | | 1085+===+===================================+=====================================================================+ 1086| 1 | Metering | Per flow traffic metering using the srTCM and trTCM algorithms. | 1087| | | | 1088+---+-----------------------------------+---------------------------------------------------------------------+ 1089| 2 | Statistics | Update the statistics counters maintained per flow. | 1090| | | | 1091+---+-----------------------------------+---------------------------------------------------------------------+ 1092| 3 | App ID | Per flow state machine fed by variable length sequence of packets | 1093| | | at the flow initialization with the purpose of identifying the | 1094| | | traffic type and application. | 1095| | | | 1096+---+-----------------------------------+---------------------------------------------------------------------+ 1097| 4 | Push/pop labels | Push/pop VLAN/MPLS labels to/from the current packet. | 1098| | | | 1099+---+-----------------------------------+---------------------------------------------------------------------+ 1100| 5 | Network Address Translation (NAT) | Translate between the internal (LAN) and external (WAN) IP | 1101| | | destination/source address and/or L4 protocol destination/source | 1102| | | port. 
| 1103| | | | 1104+---+-----------------------------------+---------------------------------------------------------------------+ 1105| 6 | TTL update | Decrement IP TTL and, in case of IPv4 packets, update the IP | 1106| | | checksum. | 1107| | | | 1108+---+-----------------------------------+---------------------------------------------------------------------+ 1109 1110Multicore Scaling 1111----------------- 1112 1113A complex application is typically split across multiple cores, with cores communicating through SW queues. 1114There is usually a performance limit on the number of table lookups 1115and actions that can be fitted on the same CPU core due to HW constraints like: 1116available CPU cycles, cache memory size, cache transfer BW, memory transfer BW, etc. 1117 1118As the application is split across multiple CPU cores, the Packet Framework facilitates the creation of several pipelines, 1119the assignment of each such pipeline to a different CPU core 1120and the interconnection of all CPU core-level pipelines into a single application-level complex pipeline. 1121For example, if CPU core A is assigned to run pipeline P1 and CPU core B pipeline P2, 1122then the interconnection of P1 with P2 could be achieved by having the same set of SW queues act like output ports 1123for P1 and input ports for P2. 1124 1125This approach enables the application development using the pipeline, run-to-completion (clustered) or hybrid (mixed) models. 1126 1127It is allowed for the same core to run several pipelines, but it is not allowed for several cores to run the same pipeline. 1128 1129Shared Data Structures 1130~~~~~~~~~~~~~~~~~~~~~~ 1131 1132The threads performing table lookup are actually table writers rather than just readers. 1133Even if the specific table lookup algorithm is thread-safe for multiple readers 1134(e. g. 
read-only access to the search algorithm data structures is enough to conduct the lookup operation),
once the table entry for the current packet is identified, the thread is typically expected to update the action meta-data stored in the table entry
(e.g. increment the counter tracking the number of packets that hit this table entry), and thus modify the table entry.
During the time this thread is accessing this table entry (either writing or reading; the duration is application specific),
for data consistency reasons, no other threads (threads performing table lookup or table entry add/delete operations) are allowed to modify this table entry.

Mechanisms to share the same table between multiple threads:

#. **Multiple writer threads.**
   Threads need to use synchronization primitives like semaphores (distinct semaphore per table entry) or atomic instructions.
   The cost of semaphores is usually high, even when the semaphore is free.
   The cost of atomic instructions is normally higher than the cost of regular instructions.

#. **Multiple writer threads, with single thread performing table lookup operations and multiple threads performing table entry add/delete operations.**
   The threads performing table entry add/delete operations send table update requests to the reader (typically through message passing queues),
   which does the actual table updates and then sends the response back to the request initiator.

#. **Single writer thread performing table entry add/delete operations and multiple reader threads that perform table lookup operations with read-only access to the table entries.**
   The reader threads use the main table copy while the writer is updating the mirror copy.
   Once the writer update is done, the writer can signal to the readers and busy wait until all readers have switched from the former main copy
   (which now becomes the mirror copy) to the former mirror copy (which now becomes the main copy).
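The third mechanism can be sketched as a double-buffered table. All names and the trivial table layout below are illustrative (not the Packet Framework API), and the writer's busy-wait for in-flight readers is omitted for brevity:

```c
#include <assert.h>
#include <stdatomic.h>

#define N_ENTRIES 4

/* Illustrative table: two copies, with 'active' selecting the main
 * copy currently used by the reader threads. */
struct table {
    int entries[N_ENTRIES];
};

static struct table copies[2];
static atomic_int active;                  /* index of the main copy */

/* Reader side: lookups only ever touch the current main copy. */
static int lookup(int pos)
{
    return copies[atomic_load(&active)].entries[pos];
}

/* Writer side: apply the update to the mirror copy, then publish it
 * by swapping the active index. A real implementation would busy
 * wait here until all readers have moved off the old main copy
 * before reusing it as the next mirror. */
static void update(int pos, int value)
{
    int mirror = 1 - atomic_load(&active);

    copies[mirror] = copies[1 - mirror];   /* start from the main copy */
    copies[mirror].entries[pos] = value;
    atomic_store(&active, mirror);         /* readers switch over */
}
```

The design choice here is that readers never take a lock: consistency comes from the writer only ever modifying the copy that readers are guaranteed not to be using.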
1155 1156Interfacing with Accelerators 1157----------------------------- 1158 1159The presence of accelerators is usually detected during the initialization phase by inspecting the HW devices that are part of the system (e.g. by PCI bus enumeration). 1160Typical devices with acceleration capabilities are: 1161 1162* Inline accelerators: NICs, switches, FPGAs, etc; 1163 1164* Look-aside accelerators: chipsets, FPGAs, etc. 1165 1166Usually, to support a specific functional block, specific implementation of Packet Framework tables and/or ports and/or actions has to be provided for each accelerator, 1167with all the implementations sharing the same API: pure SW implementation (no acceleration), implementation using accelerator A, implementation using accelerator B, etc. 1168The selection between these implementations could be done at build time or at run-time (recommended), based on which accelerators are present in the system, 1169with no application changes required. 1170 1171.. |figure33| image:: img/figure33.png 1172 1173.. |figure35| image:: img/figure35.png 1174 1175.. |figure39| image:: img/figure39.png 1176 1177.. |figure34| image:: img/figure34.png 1178 1179.. |figure32| image:: img/figure32.png 1180 1181.. |figure37| image:: img/figure37.png 1182 1183.. |figure38| image:: img/figure38.png 1184
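The run-time selection between implementations sharing the same API, described under "Interfacing with Accelerators" above, can be sketched as a table of function pointers. The ``table_ops`` structure and every name below are illustrative assumptions, not the actual librte_table interface:

```c
#include <assert.h>

/* One set of operations per implementation (pure SW, accelerator A,
 * accelerator B, ...), all sharing the same function signatures. */
struct table_ops {
    const char *name;
    int (*lookup)(const void *table, unsigned int key);
};

/* Pure SW stand-in: a trivial 4-entry direct-indexed table. */
static int sw_lookup(const void *table, unsigned int key)
{
    const int *entries = table;

    return entries[key & 3];
}

static const struct table_ops sw_ops = { "sw", sw_lookup };

/* Run-time selection: probe the system for accelerators and return
 * the matching ops; fall back to the pure SW implementation. The
 * application only ever calls through the returned ops pointer, so
 * no application changes are needed when an accelerator appears. */
static const struct table_ops *select_table_ops(int accel_present)
{
    /* A real system would return accelerator-specific ops here. */
    (void)accel_present;
    return &sw_ops;
}
```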