.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2017-2018 Cavium Networks.

Compression Device Library
==========================

The compression framework provides a generic set of APIs to perform compression services
as well as to query and configure compression devices, both physical (hardware) and
virtual (software), to perform those services. The framework currently only supports
lossless compression schemes: Deflate and LZS.

Device Management
-----------------

Device Creation
~~~~~~~~~~~~~~~

Physical compression devices are discovered during the bus probe of the EAL function
which is executed at DPDK initialization, based on their unique device identifier.
For example, PCI devices can be identified using the PCI BDF (bus/bridge, device, function).
Specific physical compression devices, like other physical devices in DPDK, can be
listed using the EAL command line options.

Virtual devices can be created by two mechanisms, either using the EAL command
line options or from within the application using an EAL API directly.

From the command line using the --vdev EAL option

.. code-block:: console

   --vdev '<pmd name>,socket_id=0'

.. Note::

   * If a DPDK application requires multiple software compression PMD devices then the
     required number of ``--vdev`` options with appropriate libraries are to be added.

   * An application with multiple compression device instances exposed by the same PMD
     must specify a unique name for each device.

   Example: ``--vdev 'pmd0' --vdev 'pmd1'``

Or, by using the rte_vdev_init API within the application code.

.. code-block:: c

   rte_vdev_init("<pmd_name>", "socket_id=0")

All virtual compression devices support the following initialization parameters:

* ``socket_id`` - socket on which to allocate the device resources.

Device Identification
~~~~~~~~~~~~~~~~~~~~~

Each device, whether virtual or physical, is uniquely designated by two
identifiers:

- A unique device index used to designate the compression device in all functions
  exported by the compressdev API.

- A device name used to designate the compression device in console messages, for
  administration or debugging purposes.

Device Configuration
~~~~~~~~~~~~~~~~~~~~

The configuration of each compression device includes the following operations:

- Allocation of resources, including hardware resources if a physical device.
- Resetting the device into a well-known default state.
- Initialization of statistics counters.

The ``rte_compressdev_configure`` API is used to configure a compression device.

The ``rte_compressdev_config`` structure is used to pass the configuration
parameters.

See *DPDK API Reference* for details.

Configuration of Queue Pairs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Each compression device queue pair is individually configured through the
``rte_compressdev_queue_pair_setup`` API.

The ``max_inflight_ops`` parameter specifies the maximum number of
``rte_comp_op`` structures that can be present in a queue at a time.
The PMD can then allocate resources accordingly on the specified socket.

See *DPDK API Reference* for details.
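As a concrete illustration, the following minimal sketch configures a device and sets up
one queue pair; ``cdev_id`` and ``MAX_INFLIGHT_OPS`` are assumed application-defined
values, not part of the API.

.. code-block:: c

    #define MAX_INFLIGHT_OPS 512 /* assumed application-chosen queue depth */

    /* allocate one queue pair and room for one shareable priv_xform */
    struct rte_compressdev_config conf = {
        .socket_id = rte_compressdev_socket_id(cdev_id),
        .nb_queue_pairs = 1,
        .max_nb_priv_xforms = 1,
        .max_nb_streams = 0
    };

    if (rte_compressdev_configure(cdev_id, &conf) < 0)
        rte_exit(EXIT_FAILURE, "Failed to configure compressdev %u", cdev_id);

    if (rte_compressdev_queue_pair_setup(cdev_id, 0, MAX_INFLIGHT_OPS,
            conf.socket_id) < 0)
        rte_exit(EXIT_FAILURE, "Failed to setup queue pair\n");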
Logical Cores, Memory and Queue Pair Relationships
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The library supports NUMA similarly to the Cryptodev library, as described in the
Cryptodev library section.

A queue pair cannot be shared, and should be exclusively used by a single processing
context for enqueuing operations or dequeuing operations on the same compression device,
since sharing would require global locks and hinder performance. It is however possible
to use a different logical core to dequeue an operation on a queue pair from the logical
core on which it was enqueued. This means that the compression burst enqueue/dequeue
APIs are a logical place to transition from one logical core to another in a
data processing pipeline.

Device Features and Capabilities
---------------------------------

Compression devices define their functionality through two mechanisms, global device
features and algorithm features. Global device features identify device-wide
features which are applicable to the whole device, such as supported hardware
acceleration and CPU features. The list of compression device features can be seen
in the RTE_COMPDEV_FF_XXX macros.

The algorithm features list the individual features which a device supports per
algorithm, such as stateful compression/decompression, checksum operations etc.
The list of algorithm features can be seen in the RTE_COMP_FF_XXX macros.

Capabilities
~~~~~~~~~~~~
Each PMD has a list of capabilities, including algorithms listed in
the ``rte_comp_algorithm`` enum, the associated feature flags and the
sliding window range as a log base 2 value. The sliding window range gives
the minimum and maximum size of the lookup window that the algorithm uses
to find duplicates.

See *DPDK API Reference* for details.

Each compression poll mode driver defines its array of capabilities
for each algorithm it supports. See the PMD implementation for capability
initialization.

Capabilities Discovery
~~~~~~~~~~~~~~~~~~~~~~

PMD capabilities and features are discovered via the ``rte_compressdev_info_get`` function.

The ``rte_compressdev_info`` structure contains all the relevant information for the device.

See *DPDK API Reference* for details.
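As an illustration, the following minimal sketch walks the capability array returned
by ``rte_compressdev_info_get`` to check for DEFLATE support; ``cdev_id`` is an assumed,
already-initialized device identifier.

.. code-block:: c

    struct rte_compressdev_info dev_info;
    const struct rte_compressdev_capabilities *cap;

    rte_compressdev_info_get(cdev_id, &dev_info);

    /* capability arrays are terminated by RTE_COMP_ALGO_UNSPECIFIED */
    for (cap = dev_info.capabilities;
            cap->algo != RTE_COMP_ALGO_UNSPECIFIED; cap++) {
        if (cap->algo == RTE_COMP_ALGO_DEFLATE &&
                (cap->comp_feature_flags & RTE_COMP_FF_SHAREABLE_PRIV_XFORM))
            printf("DEFLATE with shareable priv_xform is supported\n");
    }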
Compression Operation
----------------------

DPDK compression supports two types of compression methodologies:

- Stateless - data associated with a compression operation is compressed without any
  reference to another compression operation.

- Stateful - data in each compression operation is compressed with reference to previous
  compression operations in the same data stream i.e. history of data is maintained
  between the operations.

For more explanation, please refer to RFC 1951: https://www.ietf.org/rfc/rfc1951.txt

Operation Representation
~~~~~~~~~~~~~~~~~~~~~~~~

A compression operation is described via ``struct rte_comp_op``, which contains both
input and output data. The operation structure includes the operation type (stateless
or stateful), the operation status, the priv_xform/stream handle, and the source,
destination and checksum buffer pointers. It also contains the source mempool from
which the operation is allocated. The PMD updates the consumed field with the amount
of data read from the source buffer and the produced field with the amount of data
written into the destination buffer, along with the status of the operation.
See section *Produced, Consumed And Operation Status* for more details.

The compression operations mempool also has the ability to allocate private memory
with the operation for the application's use. The application software is responsible
for specifying all the operation specific fields in the ``rte_comp_op`` structure which
are then used by the compression PMD to process the requested operation.


Operation Management and Allocation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The compressdev library provides an API set for managing compression operations which
utilize the Mempool Library to allocate operation buffers. Therefore, it ensures
that the compression operation is interleaved optimally across the channels and
ranks for optimal processing.

A ``rte_comp_op`` contains a field indicating the pool it originated from.

``rte_comp_op_alloc()`` and ``rte_comp_op_bulk_alloc()`` are used to allocate
compression operations from a given compression operation mempool.
Each operation is reset before being returned to the user, so that it is always in
a known good state before use by the application.

``rte_comp_op_free()`` is called by the application to return an operation to
its allocating pool.

See *DPDK API Reference* for details.

Passing source data as mbuf-chain
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If input data is scattered across several different buffers, then the
application can either parse through all such buffers and make one
mbuf-chain and enqueue it for processing or, alternatively, it can
make multiple sequential enqueue_burst() calls for each of them,
processing them statefully. See *Compression API Stateful Operation*
for stateful processing of ops.

Operation Status
~~~~~~~~~~~~~~~~
Each operation carries status information updated by the PMD after it is processed.
The following are currently supported:

- RTE_COMP_OP_STATUS_SUCCESS,
  Operation is successfully completed

- RTE_COMP_OP_STATUS_NOT_PROCESSED,
  Operation has not yet been processed by the device

- RTE_COMP_OP_STATUS_INVALID_ARGS,
  Operation failed due to invalid arguments in request

- RTE_COMP_OP_STATUS_ERROR,
  Operation failed because of internal error

- RTE_COMP_OP_STATUS_INVALID_STATE,
  Operation is invoked in invalid state

- RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED,
  Output buffer ran out of space during processing. Error case,
  PMD cannot continue from here.

- RTE_COMP_OP_STATUS_OUT_OF_SPACE_RECOVERABLE,
  Output buffer ran out of space before operation completed, but this
  is not an error case. Output data up to op.produced can be used and
  next op in the stream should continue on from op.consumed+1.

Operation status after enqueue / dequeue
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Some of the above values may arise in the op after an
``rte_compressdev_enqueue_burst()``. If the number of ops enqueued is less than the
number of ops requested, then the application should check the op.status of the first
op that was not enqueued. If the status is RTE_COMP_OP_STATUS_NOT_PROCESSED, it likely
indicates a full-queue case for a hardware device and a retry after dequeuing some ops
is likely to be successful. If the op holds any other status, e.g.
RTE_COMP_OP_STATUS_INVALID_ARGS, a retry with the same op is unlikely to be successful.
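A minimal sketch of this retry logic follows, assuming ``ops`` holds NUM_OPS prepared
operations, ``processed_ops`` is application-defined scratch space for dequeued ops,
and ``handle_processed_ops()`` is a hypothetical application helper.

.. code-block:: c

    uint16_t nb_enqd = rte_compressdev_enqueue_burst(cdev_id, 0, ops, NUM_OPS);

    while (nb_enqd < NUM_OPS) {
        /* inspect the first op that was not enqueued */
        if (ops[nb_enqd]->status != RTE_COMP_OP_STATUS_NOT_PROCESSED)
            break; /* e.g. INVALID_ARGS: retrying the same op will not help */

        /* likely a full hardware queue: drain some processed ops, then retry */
        uint16_t nb_deqd = rte_compressdev_dequeue_burst(cdev_id, 0,
                processed_ops, NUM_OPS);
        handle_processed_ops(processed_ops, nb_deqd); /* hypothetical helper */

        nb_enqd += rte_compressdev_enqueue_burst(cdev_id, 0,
                &ops[nb_enqd], NUM_OPS - nb_enqd);
    }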
Produced, Consumed And Operation Status
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- If status is RTE_COMP_OP_STATUS_SUCCESS,
  consumed = amount of data read from input buffer, and
  produced = amount of data written in destination buffer
- If status is RTE_COMP_OP_STATUS_ERROR,
  consumed = produced = undefined
- If status is RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED,
  consumed = 0 and
  produced = usually 0, but in decompression cases a PMD may return > 0
  i.e. the amount of data successfully produced until the out of space condition
  was hit. The application can consume the output data in this case, if required.
- If status is RTE_COMP_OP_STATUS_OUT_OF_SPACE_RECOVERABLE,
  consumed = amount of data read, and
  produced = amount of data successfully produced until the
  out of space condition was hit. The PMD has the ability to recover
  from here, so the application can submit the next op from
  consumed+1 with a destination buffer that has available space.

Transforms
----------

Compression transforms (``rte_comp_xform``) are the mechanism
to specify the details of the compression operation such as algorithm,
window size and checksum.
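As an illustration, a minimal sketch of a decompression transform follows; it mirrors
the compress transform used in the pseudocode later in this document, and
``DEFAULT_WINDOW_SIZE`` is an assumed application-defined constant.

.. code-block:: c

    struct rte_comp_xform decompress_xform = {
        .type = RTE_COMP_DECOMPRESS,
        .decompress = {
            .algo = RTE_COMP_ALGO_DEFLATE,
            .chksum = RTE_COMP_CHECKSUM_NONE,
            .window_size = DEFAULT_WINDOW_SIZE, /* assumed constant */
            .hash_algo = RTE_COMP_HASH_ALGO_NONE
        }
    };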
Compression API Hash support
----------------------------

The compression API allows an application to enable digest calculation
alongside compression and decompression of data. A PMD reflects its
support for hash algorithms via the capability algo feature flags.
If supported, the PMD always calculates the digest on the plaintext i.e.
before compression and after decompression.

The currently supported hash algorithms are SHA-1 and the SHA2 family's
SHA256.

See *DPDK API Reference* for details.

If required, the application should set a valid hash algo in the compress
or decompress xforms during ``rte_compressdev_stream_create()``
or ``rte_compressdev_private_xform_create()`` and pass a valid
output buffer in the ``rte_comp_op`` hash field struct to store the
resulting digest. The buffer passed should be contiguous and large
enough to store the digest, which is 20 bytes for SHA-1 and
32 bytes for SHA2-256.

Compression API Stateless operation
------------------------------------

An op is processed statelessly if it has:

- op_type set to RTE_COMP_OP_STATELESS
- flush value set to RTE_COMP_FLUSH_FULL or RTE_COMP_FLUSH_FINAL
  (required only on the compression side),
- all required input in the source buffer

When all of the above conditions are met, the PMD initiates stateless processing
and releases acquired resources after processing of the current operation is
complete. The application can enqueue multiple stateless ops in a single burst
and must attach a priv_xform handle to such ops.

priv_xform in Stateless operation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A priv_xform is private data internally managed by the PMD to do stateless processing.
A priv_xform is initialized by the application providing a generic xform structure
to ``rte_compressdev_private_xform_create``, which returns an opaque priv_xform
reference as output. If the PMD supports a SHAREABLE priv_xform, indicated via an
algorithm feature flag, then the application can attach the same priv_xform to many
stateless ops at a time. If not, then the application needs to create as many
priv_xforms as it expects to have stateless operations in-flight.

.. figure:: img/stateless-op.*

   Stateless Ops using Non-Shareable priv_xform


.. figure:: img/stateless-op-shared.*

   Stateless Ops using Shareable priv_xform


The application should call ``rte_compressdev_private_xform_create()`` and attach the
priv_xform to stateless ops before enqueuing them for processing, and free it via
``rte_compressdev_private_xform_free()`` during termination.

An example pseudocode to setup and process NUM_OPS stateless ops, each of length OP_LEN,
using a priv_xform would look like:

.. code-block:: c

    /*
     * pseudocode for stateless compression
     */

    uint8_t cdev_id = rte_compressdev_get_dev_id(<pmd name>);

    /* configure the device. */
    if (rte_compressdev_configure(cdev_id, &conf) < 0)
        rte_exit(EXIT_FAILURE, "Failed to configure compressdev %u", cdev_id);

    if (rte_compressdev_queue_pair_setup(cdev_id, 0, NUM_MAX_INFLIGHT_OPS,
            socket_id()) < 0)
        rte_exit(EXIT_FAILURE, "Failed to setup queue pair\n");

    if (rte_compressdev_start(cdev_id) < 0)
        rte_exit(EXIT_FAILURE, "Failed to start device\n");

    /* setup compress transform */
    struct rte_comp_xform compress_xform = {
        .type = RTE_COMP_COMPRESS,
        .compress = {
            .algo = RTE_COMP_ALGO_DEFLATE,
            .deflate = {
                .huffman = RTE_COMP_HUFFMAN_DEFAULT
            },
            .level = RTE_COMP_LEVEL_PMD_DEFAULT,
            .chksum = RTE_COMP_CHECKSUM_NONE,
            .window_size = DEFAULT_WINDOW_SIZE,
            .hash_algo = RTE_COMP_HASH_ALGO_NONE
        }
    };

    /* create priv_xform and initialize it for the compression device. */
    struct rte_compressdev_info dev_info;
    void *priv_xform = NULL;
    int shareable = 1;
    rte_compressdev_info_get(cdev_id, &dev_info);
    if (dev_info.capabilities->comp_feature_flags & RTE_COMP_FF_SHAREABLE_PRIV_XFORM) {
        rte_compressdev_private_xform_create(cdev_id, &compress_xform, &priv_xform);
    } else {
        shareable = 0;
    }

    /* create operation pool via call to rte_comp_op_pool_create and alloc ops */
    struct rte_comp_op *comp_ops[NUM_OPS];
    rte_comp_op_bulk_alloc(op_pool, comp_ops, NUM_OPS);

    /* prepare ops for compression operations */
    for (i = 0; i < NUM_OPS; i++) {
        struct rte_comp_op *op = comp_ops[i];
        if (!shareable)
            rte_compressdev_private_xform_create(cdev_id, &compress_xform,
                    &op->private_xform);
        else
            op->private_xform = priv_xform;
        op->op_type = RTE_COMP_OP_STATELESS;
        op->flush_flag = RTE_COMP_FLUSH_FINAL;

        op->src.offset = 0;
        op->dst.offset = 0;
        op->src.length = OP_LEN;
        op->input_chksum = 0;
        /* setup op->m_src and op->m_dst */
    }
    num_enqd = rte_compressdev_enqueue_burst(cdev_id, 0, comp_ops, NUM_OPS);
    /* wait for this to complete before enqueuing next */
    do {
        num_deqd += rte_compressdev_dequeue_burst(cdev_id, 0,
                &processed_ops[num_deqd], NUM_OPS - num_deqd);
    } while (num_deqd < num_enqd);


Stateless and OUT_OF_SPACE
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

OUT_OF_SPACE is a condition where the output buffer runs out of space and the PMD
still has more data to produce. If the PMD runs into such a condition, then it returns
the RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED error. In such a case, the PMD resets
itself and can set consumed = 0 and produced = the amount of output it could produce
before hitting the out-of-space condition. The application would need to resubmit the
whole input with a larger output buffer, if it wants the operation to be completed.
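A minimal sketch of such a resubmission follows; ``enlarge_dst_mbuf()`` is a
hypothetical application helper that provides a larger destination buffer.

.. code-block:: c

    if (op->status == RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED) {
        /* hypothetical helper: allocate/attach a larger output buffer */
        op->m_dst = enlarge_dst_mbuf(op->m_dst);
        /* resubmit the whole input from the beginning */
        op->src.offset = 0;
        op->dst.offset = 0;
        rte_compressdev_enqueue_burst(cdev_id, 0, &op, 1);
    }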
Hash in Stateless
~~~~~~~~~~~~~~~~~
If hash is enabled, the digest buffer will contain valid data after the op is
successfully processed i.e. dequeued with status = RTE_COMP_OP_STATUS_SUCCESS.

Checksum in Stateless
~~~~~~~~~~~~~~~~~~~~~
If checksum is enabled, the checksum will only be available after the op is
successfully processed i.e. dequeued with status = RTE_COMP_OP_STATUS_SUCCESS.

Compression API Stateful operation
-----------------------------------

The compression API provides the RTE_COMP_FF_STATEFUL_COMPRESSION and
RTE_COMP_FF_STATEFUL_DECOMPRESSION feature flags for a PMD to reflect
its support for stateful operations.

A stateful operation in DPDK compression means that the application invokes
enqueue_burst() multiple times to process related chunks of data because the
application broke the data into several ops.

In such a case:

- ops are set up with op_type RTE_COMP_OP_STATEFUL,
- all ops except the last are set to flush value = RTE_COMP_FLUSH_NONE/SYNC
  and the last is set to flush value RTE_COMP_FLUSH_FULL/FINAL.

When these conditions are met, the PMD initiates stateful processing and releases
acquired resources after processing of the operation with
flush value = RTE_COMP_FLUSH_FULL/FINAL is complete.
Unlike stateless, the application can enqueue only one stateful op from
a particular stream at a time and must attach a stream handle
to each op.

Stream in Stateful operation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A `stream` in DPDK compression is a logical entity which identifies a related set of
ops. For example, if one large file is broken into multiple chunks, then the file is
represented by a stream and each chunk of that file is represented by a compression op
`rte_comp_op`. Whenever the application wants stateful processing of such data, it must
get a stream handle by calling ``rte_compressdev_stream_create()`` with an xform; as
output, the target PMD will return an opaque stream handle to the application, which
it must attach to all of the ops carrying data of that stream. In stateful processing,
every op requires the previous op's data for compression/decompression. A PMD allocates
and sets up resources such as history, states, etc. within a stream, which are
maintained during the processing of the related ops.

Unlike priv_xforms, a stream is always a NON_SHAREABLE entity. One stream handle must
be attached to only one set of related ops and cannot be reused until all of them are
processed with status success or failure.

.. figure:: img/stateful-op.*

   Stateful Ops


The application should call ``rte_compressdev_stream_create()`` and attach the stream
to each op before enqueuing them for processing, and free it via
``rte_compressdev_stream_free()`` during termination. All ops that are to be processed
statefully should carry the *same* stream.

See the *DPDK API Reference* document for details.
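A minimal sketch of the stream lifecycle follows (the full setup is shown in the
pseudocode below); ``cdev_id`` and ``compress_xform`` are assumed to be already
initialized by the application.

.. code-block:: c

    void *stream = NULL;

    if (rte_compressdev_stream_create(cdev_id, &compress_xform, &stream) < 0)
        rte_exit(EXIT_FAILURE, "Failed to create stream\n");

    /* attach the same stream handle to every op of this data stream */
    op->stream = stream;

    /* ... enqueue/dequeue all related ops of the stream ... */

    rte_compressdev_stream_free(cdev_id, stream);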
An example pseudocode to set up and process a stream having NUM_CHUNKS, with each chunk
of size CHUNK_LEN, would look like:

.. code-block:: c

    /*
     * pseudocode for stateful compression
     */

    uint8_t cdev_id = rte_compressdev_get_dev_id(<pmd name>);

    /* configure the device. */
    if (rte_compressdev_configure(cdev_id, &conf) < 0)
        rte_exit(EXIT_FAILURE, "Failed to configure compressdev %u", cdev_id);

    if (rte_compressdev_queue_pair_setup(cdev_id, 0, NUM_MAX_INFLIGHT_OPS,
            socket_id()) < 0)
        rte_exit(EXIT_FAILURE, "Failed to setup queue pair\n");

    if (rte_compressdev_start(cdev_id) < 0)
        rte_exit(EXIT_FAILURE, "Failed to start device\n");

    /* setup compress transform. */
    struct rte_comp_xform compress_xform = {
        .type = RTE_COMP_COMPRESS,
        .compress = {
            .algo = RTE_COMP_ALGO_DEFLATE,
            .deflate = {
                .huffman = RTE_COMP_HUFFMAN_DEFAULT
            },
            .level = RTE_COMP_LEVEL_PMD_DEFAULT,
            .chksum = RTE_COMP_CHECKSUM_NONE,
            .window_size = DEFAULT_WINDOW_SIZE,
            .hash_algo = RTE_COMP_HASH_ALGO_NONE
        }
    };

    /* create stream */
    void *stream;
    rte_compressdev_stream_create(cdev_id, &compress_xform, &stream);

    /* create an op pool and allocate ops */
    rte_comp_op_bulk_alloc(op_pool, comp_ops, NUM_CHUNKS);

    /* Prepare source and destination mbufs for compression operations */
    unsigned int i;
    for (i = 0; i < NUM_CHUNKS; i++) {
        if (rte_pktmbuf_append(src_mbufs[i], CHUNK_LEN) == NULL)
            rte_exit(EXIT_FAILURE, "Not enough room in the mbuf\n");
        comp_ops[i]->m_src = src_mbufs[i];
        if (rte_pktmbuf_append(dst_mbufs[i], CHUNK_LEN) == NULL)
            rte_exit(EXIT_FAILURE, "Not enough room in the mbuf\n");
        comp_ops[i]->m_dst = dst_mbufs[i];
    }

    /* Set up and enqueue the compress operations one at a time. */
    for (i = 0; i < NUM_CHUNKS; i++) {
        struct rte_comp_op *op = comp_ops[i];
        op->stream = stream;
        op->op_type = RTE_COMP_OP_STATEFUL;
        if (i == NUM_CHUNKS - 1) {
            /* set to final, if last chunk */
            op->flush_flag = RTE_COMP_FLUSH_FINAL;
        } else {
            /* set to NONE, for all intermediary ops */
            op->flush_flag = RTE_COMP_FLUSH_NONE;
        }
        op->src.offset = 0;
        op->dst.offset = 0;
        op->src.length = CHUNK_LEN;
        op->input_chksum = 0;
        num_enqd = rte_compressdev_enqueue_burst(cdev_id, 0, &op, 1);
        /* wait for this to complete before enqueuing next */
        do {
            num_deqd = rte_compressdev_dequeue_burst(cdev_id, 0, &processed_ops, 1);
        } while (num_deqd < num_enqd);
        /* analyze the amount of consumed and produced data before pushing next op */
    }


Stateful and OUT_OF_SPACE
~~~~~~~~~~~~~~~~~~~~~~~~~~~

If a PMD supports stateful operation, then an OUT_OF_SPACE status is not an actual
error for the PMD. In such a case, the PMD returns with status
RTE_COMP_OP_STATUS_OUT_OF_SPACE_RECOVERABLE, with consumed = the number of input bytes
read and produced = the length of the complete output buffer.
The application should enqueue the next op with source starting at consumed+1 and an
output buffer with available space.
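A minimal sketch of continuing after a recoverable out-of-space status; ``next_op``
and ``fresh_dst_mbuf`` are assumed to be prepared by the application.

.. code-block:: c

    if (op->status == RTE_COMP_OP_STATUS_OUT_OF_SPACE_RECOVERABLE) {
        /* output up to op->produced is valid and can be consumed now */
        /* continue the stream from the first unread input byte */
        next_op->stream = op->stream;
        next_op->m_src = op->m_src;
        next_op->src.offset = op->src.offset + op->consumed;
        next_op->src.length = op->src.length - op->consumed;
        next_op->m_dst = fresh_dst_mbuf; /* buffer with available space */
        rte_compressdev_enqueue_burst(cdev_id, 0, &next_op, 1);
    }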
Hash in Stateful
~~~~~~~~~~~~~~~~
If enabled, the digest buffer will contain a valid digest after the last op in the
stream (having flush = RTE_COMP_FLUSH_FINAL) is successfully processed i.e. dequeued
with status = RTE_COMP_OP_STATUS_SUCCESS.

Checksum in Stateful
~~~~~~~~~~~~~~~~~~~~
If enabled, the checksum will only be available after the last op in the stream
(having flush = RTE_COMP_FLUSH_FINAL) is successfully processed i.e. dequeued
with status = RTE_COMP_OP_STATUS_SUCCESS.

Burst in compression API
-------------------------

Scheduling of compression operations on DPDK's application data path is
performed using a burst oriented asynchronous API set. A queue pair on a compression
device accepts a burst of compression operations using the enqueue burst API. On
physical devices the enqueue burst API will place the operations to be processed
on the device's hardware input queue, for virtual devices the processing of the
operations is usually completed during the enqueue call to the compression
device. The dequeue burst API will retrieve any processed operations available
from the queue pair on the compression device, from physical devices this is usually
directly from the device's processed queue, and for virtual devices from a
``rte_ring`` where processed operations are placed after being processed on the
enqueue call.

A burst in DPDK compression can be a combination of stateless and stateful operations
with the condition that for stateful ops only one op at a time should be enqueued from
a particular stream i.e. no two ops should belong to the same stream in a single burst.
However a burst may contain multiple stateful ops, as long as each op is attached to a
different stream, i.e. a burst can look like:

+---------------+--------------+--------------+-----------------+--------------+--------------+
| enqueue_burst | op1.no_flush | op2.no_flush | op3.flush_final | op4.no_flush | op5.no_flush |
+---------------+--------------+--------------+-----------------+--------------+--------------+

Where op1 .. op5 all belong to different independent data units. op1, op2, op4 and op5
must be stateful, as stateless ops can only use flush full or final, and op3 can be of
type stateless or stateful.
Every op with type set to RTE_COMP_OP_STATELESS must be attached to a priv_xform and
every op with type set to RTE_COMP_OP_STATEFUL *must* be attached to a stream.

Since each operation in a burst is independent and thus can be completed
out-of-order, applications which need ordering should set up a per-op user data
area with reordering information, so that they can determine the enqueue order
at dequeue.

Also, if multiple threads call enqueue_burst() on the same queue pair, it is the
application's responsibility to use a proper locking mechanism to ensure exclusive
enqueuing of operations.

Enqueue / Dequeue Burst APIs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The burst enqueue API uses a compression device identifier and a queue pair
identifier to specify the compression device queue pair to schedule the processing on.
The ``nb_ops`` parameter is the number of operations to process which are
supplied in the ``ops`` array of ``rte_comp_op`` structures.
The enqueue function returns the number of operations it actually enqueued for
processing, a return value equal to ``nb_ops`` means that all packets have been
enqueued.

The dequeue API uses the same format as the enqueue API but
the ``nb_ops`` and ``ops`` parameters are now used to specify the maximum number of
processed operations the user wishes to retrieve and the location in which to store
them. The API call returns the actual number of processed operations returned; this
can never be larger than ``nb_ops``.

Sample code
-----------

There are unit test applications that show how to use the compressdev library inside
app/test/test_compressdev.c

Compression Device API
~~~~~~~~~~~~~~~~~~~~~~

The compressdev Library API is described in the *DPDK API Reference* document.