.. SPDX-License-Identifier: BSD-3-Clause
   Copyright(c) 2017-2018 Cavium Networks.

Compression Device Library
==========================

The compression framework provides a generic set of APIs to perform compression services
as well as to query and configure compression devices, both physical (hardware) and
virtual (software), to perform those services. The framework currently supports only
lossless compression schemes: Deflate and LZS.

Device Management
-----------------

Device Creation
~~~~~~~~~~~~~~~

Physical compression devices are discovered during the bus probe of the EAL function
which is executed at DPDK initialization, based on their unique device identifier.
For example, PCI devices can be identified using PCI BDF (bus/bridge, device, function).
Specific physical compression devices, like other physical devices in DPDK, can be
white-listed or black-listed using the EAL command line options.

Virtual devices can be created by two mechanisms, either using the EAL command
line options or from within the application using an EAL API directly.

From the command line using the --vdev EAL option

.. code-block:: console

   --vdev '<pmd name>,socket_id=0'

.. Note::

   * If a DPDK application requires multiple software compression PMD devices then the
     required number of ``--vdev`` arguments with the appropriate libraries must be added.

   * An application with multiple compression device instances exposed by the same PMD
     must specify a unique name for each device.

   Example: ``--vdev 'pmd0' --vdev 'pmd1'``

Or, by using the ``rte_vdev_init`` API within the application code.

.. code-block:: c

    rte_vdev_init("<pmd_name>", "socket_id=0")

All virtual compression devices support the following initialization parameters:

* ``socket_id`` - socket on which to allocate the device resources.

Device Identification
~~~~~~~~~~~~~~~~~~~~~

Each device, whether virtual or physical, is uniquely designated by two
identifiers:

- A unique device index used to designate the compression device in all functions
  exported by the compressdev API.

- A device name used to designate the compression device in console messages, for
  administration or debugging purposes.

Device Configuration
~~~~~~~~~~~~~~~~~~~~

The configuration of each compression device includes the following operations:

- Allocation of resources, including hardware resources if a physical device.
- Resetting the device into a well-known default state.
- Initialization of statistics counters.

The ``rte_compressdev_configure`` API is used to configure a compression device.

The ``rte_compressdev_config`` structure is used to pass the configuration
parameters.

See *DPDK API Reference* for details.

Configuration of Queue Pairs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Each compression device queue pair is individually configured through the
``rte_compressdev_queue_pair_setup`` API.

The ``max_inflight_ops`` parameter specifies the maximum number of
``rte_comp_op`` structures that can be present in a queue at a time.
The PMD can then allocate resources accordingly on the specified socket.

See *DPDK API Reference* for details.
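A minimal bring-up sketch combining both steps is shown below. The device id,
queue-pair count, xform/stream limits and ``MAX_INFLIGHT_OPS`` value are
illustrative placeholders only, to be tuned per PMD:

.. code-block:: c

    #include <rte_compressdev.h>
    #include <rte_lcore.h>
    #include <rte_debug.h>

    #define MAX_INFLIGHT_OPS 512 /* illustrative value */

    uint8_t cdev_id = 0; /* assume device 0 was probed */
    int socket_id = rte_socket_id();

    struct rte_compressdev_config conf = {
        .socket_id = socket_id,
        .nb_queue_pairs = 1,
        .max_nb_priv_xforms = 16, /* illustrative value */
        .max_nb_streams = 0,      /* no stateful use in this sketch */
    };

    if (rte_compressdev_configure(cdev_id, &conf) < 0)
        rte_exit(EXIT_FAILURE, "Failed to configure compressdev %u", cdev_id);

    if (rte_compressdev_queue_pair_setup(cdev_id, 0, MAX_INFLIGHT_OPS,
            socket_id) < 0)
        rte_exit(EXIT_FAILURE, "Failed to setup queue pair\n");

    if (rte_compressdev_start(cdev_id) < 0)
        rte_exit(EXIT_FAILURE, "Failed to start compressdev %u", cdev_id);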
Logical Cores, Memory and Queue Pair Relationships
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The library supports NUMA similarly as described in the Cryptodev library section.

A queue pair cannot be shared, and should be exclusively used by a single processing
context for enqueuing operations or dequeuing operations on the same compression device,
since sharing would require global locks and hinder performance. It is, however, possible
to use a different logical core to dequeue an operation on a queue pair from the logical
core on which it was enqueued. This means that the compression burst enqueue/dequeue
APIs are a logical place to transition from one logical core to another in a
data processing pipeline.

Device Features and Capabilities
--------------------------------

Compression devices define their functionality through two mechanisms: global device
features and algorithm features. Global device features identify device-wide
capabilities which are applicable to the whole device, such as supported hardware
acceleration and CPU features. The list of compression device features can be seen
in the RTE_COMPDEV_FF_XXX macros.

The algorithm features list the individual features which a device supports
per-algorithm, such as stateful compression/decompression, checksum operations etc.
The list of algorithm features can be seen in the RTE_COMP_FF_XXX macros.

Capabilities
~~~~~~~~~~~~

Each PMD has a list of capabilities, including the algorithms listed in
the ``rte_comp_algorithm`` enum, the associated feature flags and the
sliding window range in log base 2 value. The sliding window gives
the minimum and maximum size of the lookup window that the algorithm uses
to find duplicates.

See *DPDK API Reference* for details.

Each compression poll mode driver defines its array of capabilities
for each algorithm it supports. See the PMD implementation for capability
initialization.

Capabilities Discovery
~~~~~~~~~~~~~~~~~~~~~~

PMD capabilities and features are discovered via the ``rte_compressdev_info_get`` function.

The ``rte_compressdev_info`` structure contains all the relevant information for the device.

See *DPDK API Reference* for details.
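As an illustration, a capability walk could look like the sketch below. It assumes
the ``cdev_id`` from the earlier configuration sketch and that the capability array
is terminated by ``RTE_COMP_ALGO_UNSPECIFIED``, as placed by the
``RTE_COMP_END_OF_CAPABILITIES_LIST()`` marker in PMD capability tables:

.. code-block:: c

    #include <stdio.h>
    #include <rte_compressdev.h>

    struct rte_compressdev_info dev_info;
    const struct rte_compressdev_capabilities *cap;

    rte_compressdev_info_get(cdev_id, &dev_info);
    printf("driver: %s, max queue pairs: %u\n",
           dev_info.driver_name, dev_info.max_nb_queue_pairs);

    for (cap = dev_info.capabilities;
         cap->algo != RTE_COMP_ALGO_UNSPECIFIED; cap++) {
        /* comp_feature_flags carries the per-algorithm RTE_COMP_FF_XXX bits */
        printf("algo %d, window log2 range [%u, %u]\n",
               cap->algo, cap->window_size.min, cap->window_size.max);
    }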
Compression Operation
---------------------

DPDK compression supports two types of compression methodologies:

- Stateless - data associated with a compression operation is compressed without any
  reference to another compression operation.

- Stateful - data in each compression operation is compressed with reference to previous
  compression operations in the same data stream, i.e. the history of data is maintained
  between the operations.

For more explanation, please refer to RFC 1951: https://www.ietf.org/rfc/rfc1951.txt

Operation Representation
~~~~~~~~~~~~~~~~~~~~~~~~

A compression operation is described via ``struct rte_comp_op``, which contains both
input and output data. The operation structure includes the operation type (stateless or
stateful), the operation status, the priv_xform/stream handle, and the source, destination
and checksum buffer pointers. It also contains the source mempool from which the operation
is allocated. The PMD updates the consumed field with the amount of data read from the
source buffer and the produced field with the amount of data written into the destination
buffer, along with the status of the operation.
See the section *Produced, Consumed And Operation Status* for more details.

The compression operations mempool also has the ability to allocate private memory with
the operation for the application's use. The application software is responsible for
specifying all the operation specific fields in the ``rte_comp_op`` structure, which are
then used by the compression PMD to process the requested operation.


Operation Management and Allocation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The compressdev library provides an API set for managing compression operations which
utilizes the Mempool Library to allocate operation buffers. Therefore, it ensures
that the compression operation is interleaved optimally across the channels and
ranks for optimal processing.

A ``rte_comp_op`` contains a field indicating the pool it originated from.

``rte_comp_op_alloc()`` and ``rte_comp_op_bulk_alloc()`` are used to allocate
compression operations from a given compression operation mempool.
The operation gets reset before being returned to the user, so that the operation
is always in a good known state before use by the application.

``rte_comp_op_free()`` is called by the application to return an operation to
its allocating pool.

See *DPDK API Reference* for details.
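A hedged sketch of the allocation lifecycle follows; the pool sizing values and the
``NUM_OPS`` constant are illustrative only:

.. code-block:: c

    #include <rte_comp.h>
    #include <rte_debug.h>
    #include <rte_lcore.h>

    #define NUM_OPS 16 /* illustrative burst size */

    struct rte_comp_op *ops[NUM_OPS];

    /* create a mempool of rte_comp_op; the last-but-one argument is the
     * size of the optional per-op private (user) data area, 0 here */
    struct rte_mempool *op_pool = rte_comp_op_pool_create("comp_op_pool",
            1024, 128, 0, rte_socket_id());
    if (op_pool == NULL)
        rte_exit(EXIT_FAILURE, "Failed to create op pool\n");

    /* allocate a burst of ops; each op is returned in a reset state */
    if (rte_comp_op_bulk_alloc(op_pool, ops, NUM_OPS) != NUM_OPS)
        rte_exit(EXIT_FAILURE, "Failed to allocate ops\n");

    /* ... fill in and process the ops ... */

    /* return each op to its originating pool when done */
    for (unsigned int i = 0; i < NUM_OPS; i++)
        rte_comp_op_free(ops[i]);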
Passing source data as mbuf-chain
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If the input data is scattered across several different buffers, then the
application can either parse through all such buffers and make one
mbuf-chain and enqueue it for processing or, alternatively, it can
make multiple sequential enqueue_burst() calls for each of them,
processing them statefully. See *Compression API Stateful Operation*
for stateful processing of ops.
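The chaining approach might look like the following sketch, which links a scattered
set of mbufs with ``rte_pktmbuf_chain()`` before attaching the head to a single op.
The ``segs`` array, ``nb_segs`` count and ``op`` are assumed to be provided by the
application:

.. code-block:: c

    #include <rte_mbuf.h>
    #include <rte_debug.h>

    /* link all segments after the first one onto the head mbuf */
    struct rte_mbuf *head = segs[0];
    for (unsigned int i = 1; i < nb_segs; i++) {
        if (rte_pktmbuf_chain(head, segs[i]) < 0)
            rte_exit(EXIT_FAILURE, "Chain too long\n");
    }

    op->m_src = head;
    op->src.offset = 0;
    op->src.length = rte_pktmbuf_pkt_len(head); /* total across the chain */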
Operation Status
~~~~~~~~~~~~~~~~

Each operation carries status information updated by the PMD after it is processed.
The following are currently supported:

- RTE_COMP_OP_STATUS_SUCCESS,
  Operation is successfully completed

- RTE_COMP_OP_STATUS_NOT_PROCESSED,
  Operation has not yet been processed by the device

- RTE_COMP_OP_STATUS_INVALID_ARGS,
  Operation failed due to invalid arguments in request

- RTE_COMP_OP_STATUS_ERROR,
  Operation failed because of internal error

- RTE_COMP_OP_STATUS_INVALID_STATE,
  Operation is invoked in invalid state

- RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED,
  Output buffer ran out of space during processing. Error case,
  PMD cannot continue from here.

- RTE_COMP_OP_STATUS_OUT_OF_SPACE_RECOVERABLE,
  Output buffer ran out of space before operation completed, but this
  is not an error case. Output data up to op.produced can be used and
  the next op in the stream should continue on from op.consumed+1.

Operation status after enqueue / dequeue
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Some of the above values may arise in the op after an
``rte_compressdev_enqueue_burst()``. If the number of ops enqueued < the number of ops
requested, then the application should check the op.status of the first op that was
not enqueued (i.e. ops[nb_enqd]). If the status is RTE_COMP_OP_STATUS_NOT_PROCESSED,
it likely indicates a full-queue case for a hardware device, and a retry after dequeuing
some ops is likely to be successful. If the op holds any other status, e.g.
RTE_COMP_OP_STATUS_INVALID_ARGS, a retry with the same op is unlikely to be successful.


Produced, Consumed And Operation Status
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- If status is RTE_COMP_OP_STATUS_SUCCESS,
  consumed = amount of data read from input buffer, and
  produced = amount of data written in destination buffer
- If status is RTE_COMP_OP_STATUS_ERROR,
  consumed = produced = undefined
- If status is RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED,
  consumed = 0 and
  produced = usually 0, but in decompression cases a PMD may return > 0,
  i.e. the amount of data successfully produced before the out of space condition
  was hit. The application can consume the output data in this case, if required.
- If status is RTE_COMP_OP_STATUS_OUT_OF_SPACE_RECOVERABLE,
  consumed = amount of data read, and
  produced = amount of data successfully produced before the
  out of space condition was hit. The PMD has the ability to recover
  from here, so the application can submit the next op from
  consumed+1 with a destination buffer that has available space.

Transforms
----------

Compression transforms (``rte_comp_xform``) are the mechanism
to specify the details of the compression operation such as algorithm,
window size and checksum.

Compression API Hash support
----------------------------

The compression API allows an application to enable digest calculation
alongside compression and decompression of data. A PMD reflects its
support for hash algorithms via the capability algo feature flags.
If supported, the PMD always calculates the digest on the plaintext, i.e.
before compression and after decompression.

The currently supported hash algos are SHA-1 and the SHA2 family's
SHA256.

See *DPDK API Reference* for details.

If required, the application should set a valid hash algo in the compress
or decompress xforms during ``rte_compressdev_stream_create()``
or ``rte_compressdev_private_xform_create()`` and pass a valid
output buffer in the ``rte_comp_op`` hash field struct to store the
resulting digest. The buffer passed should be contiguous and large
enough to store the digest, which is 20 bytes for SHA-1 and
32 bytes for SHA2-256.

Compression API Stateless operation
-----------------------------------

An op is processed statelessly if it has:

- op_type set to RTE_COMP_OP_STATELESS,
- flush value set to RTE_COMP_FLUSH_FULL or RTE_COMP_FLUSH_FINAL
  (required only on the compression side),
- all required input in the source buffer.

When all of the above conditions are met, the PMD initiates stateless processing
and releases the acquired resources after processing of the current operation is
complete. The application can enqueue multiple stateless ops in a single burst
and must attach a priv_xform handle to such ops.

priv_xform in Stateless operation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A priv_xform is private data, internally managed by the PMD, that it maintains to do
stateless processing. priv_xforms are initialized from a generic xform structure
provided by the application via a call to ``rte_compressdev_private_xform_create()``;
as an output, the PMD returns an opaque priv_xform reference. If the PMD supports a
SHAREABLE priv_xform, indicated via an algorithm feature flag, then the application
can attach the same priv_xform to many stateless ops at a time. If not, then the
application needs to create as many priv_xforms as it expects to have stateless
operations in flight.

.. figure:: img/stateless-op.*

   Stateless Ops using Non-Shareable priv_xform


.. figure:: img/stateless-op-shared.*

   Stateless Ops using Shareable priv_xform


The application should call ``rte_compressdev_private_xform_create()`` and attach the
handle to each stateless op before enqueuing it for processing, and free the handle
via ``rte_compressdev_private_xform_free()`` during termination.

An example pseudocode to set up and process NUM_OPS stateless ops, each of length
OP_LEN, using a priv_xform would look like:

.. code-block:: c

    /*
     * pseudocode for stateless compression
     */

    uint8_t cdev_id = rte_compressdev_get_dev_id(<pmd name>);

    /* configure the device. */
    if (rte_compressdev_configure(cdev_id, &conf) < 0)
        rte_exit(EXIT_FAILURE, "Failed to configure compressdev %u", cdev_id);

    if (rte_compressdev_queue_pair_setup(cdev_id, 0, NUM_MAX_INFLIGHT_OPS,
            socket_id()) < 0)
        rte_exit(EXIT_FAILURE, "Failed to setup queue pair\n");

    if (rte_compressdev_start(cdev_id) < 0)
        rte_exit(EXIT_FAILURE, "Failed to start device\n");

    /* setup compress transform */
    struct rte_comp_xform compress_xform = {
        .type = RTE_COMP_COMPRESS,
        .compress = {
            .algo = RTE_COMP_ALGO_DEFLATE,
            .deflate = {
                .huffman = RTE_COMP_HUFFMAN_DEFAULT
            },
            .level = RTE_COMP_LEVEL_PMD_DEFAULT,
            .chksum = RTE_COMP_CHECKSUM_NONE,
            .window_size = DEFAULT_WINDOW_SIZE,
            .hash_algo = RTE_COMP_HASH_ALGO_NONE
        }
    };

    /* create priv_xform and initialize it for the compression device. */
    void *priv_xform = NULL;
    int shareable = 1;
    rte_compressdev_info_get(cdev_id, &dev_info);
    if (dev_info.capabilities[0].comp_feature_flags & RTE_COMP_FF_SHAREABLE_PRIV_XFORM) {
        rte_compressdev_private_xform_create(cdev_id, &compress_xform, &priv_xform);
    } else {
        shareable = 0;
    }

    /* create operation pool via call to rte_comp_op_pool_create and alloc ops */
    rte_comp_op_bulk_alloc(op_pool, comp_ops, NUM_OPS);

    /* prepare ops for compression operations */
    unsigned int i;
    for (i = 0; i < NUM_OPS; i++) {
        struct rte_comp_op *op = comp_ops[i];
        if (!shareable)
            rte_compressdev_private_xform_create(cdev_id, &compress_xform,
                    &op->private_xform);
        else
            op->private_xform = priv_xform;
        op->op_type = RTE_COMP_OP_STATELESS;
        op->flush_flag = RTE_COMP_FLUSH_FINAL;

        op->src.offset = 0;
        op->dst.offset = 0;
        op->src.length = OP_LEN;
        op->input_chksum = 0;
        setup op->m_src and op->m_dst;
    }
    num_enqd = rte_compressdev_enqueue_burst(cdev_id, 0, comp_ops, NUM_OPS);
    /* wait for this to complete before enqueuing next */
    do {
        num_deqd = rte_compressdev_dequeue_burst(cdev_id, 0, &processed_ops, NUM_OPS);
    } while (num_deqd < num_enqd);


Stateless and OUT_OF_SPACE
~~~~~~~~~~~~~~~~~~~~~~~~~~

OUT_OF_SPACE is a condition where the output buffer runs out of space while the PMD
still has more data to produce. If the PMD runs into such a condition, then the PMD
returns an RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED error. In such a case, the PMD
resets itself and can set consumed = 0 and produced = the amount of output it could
produce before hitting the out of space condition. The application would need to
resubmit the whole input with a larger output buffer, if it wants the operation to be
completed.
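A hedged sketch of that recovery path follows; ``alloc_bigger_mbuf()`` is a
hypothetical application helper that allocates a destination mbuf with more room:

.. code-block:: c

    /* op was dequeued with status OUT_OF_SPACE_TERMINATED: the whole
     * input must be resubmitted with a larger destination buffer */
    if (op->status == RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED) {
        rte_pktmbuf_free(op->m_dst);
        op->m_dst = alloc_bigger_mbuf(); /* hypothetical helper */
        op->dst.offset = 0;
        /* src fields unchanged: consumed was reset, nothing was kept */
        if (rte_compressdev_enqueue_burst(cdev_id, 0, &op, 1) != 1)
            /* handle enqueue failure, e.g. retry after dequeuing */;
    }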
Hash in Stateless
~~~~~~~~~~~~~~~~~

If hash is enabled, the digest buffer will contain valid data after the op is
successfully processed, i.e. dequeued with status = RTE_COMP_OP_STATUS_SUCCESS.

Checksum in Stateless
~~~~~~~~~~~~~~~~~~~~~

If checksum is enabled, the checksum will only be available after the op is
successfully processed, i.e. dequeued with status = RTE_COMP_OP_STATUS_SUCCESS.

Compression API Stateful operation
----------------------------------

The compression API provides the RTE_COMP_FF_STATEFUL_COMPRESSION and
RTE_COMP_FF_STATEFUL_DECOMPRESSION feature flags for a PMD to reflect
its support for stateful operations.

A stateful operation in DPDK compression means the application invokes
enqueue_burst() multiple times to process related chunks of data because the
application broke the data into several ops.

In such a case:

- ops are set up with op_type RTE_COMP_OP_STATEFUL,
- all ops except the last have the flush value set to RTE_COMP_FLUSH_NONE or
  RTE_COMP_FLUSH_SYNC, and the last has the flush value set to RTE_COMP_FLUSH_FULL
  or RTE_COMP_FLUSH_FINAL.

When these conditions hold, the PMD initiates stateful processing and releases the
acquired resources after processing of the operation with flush value =
RTE_COMP_FLUSH_FULL/FINAL is complete.
Unlike stateless, the application can enqueue only one stateful op from
a particular stream at a time and must attach a stream handle
to each op.

Stream in Stateful operation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A `stream` in DPDK compression is a logical entity which identifies a related set of
ops. Say a large file is broken into multiple chunks: then the file is represented by
a stream and each chunk of that file is represented by a compression op
`rte_comp_op`. Whenever an application wants stateful processing of such data, it
must get a stream handle via a call to ``rte_compressdev_stream_create()`` with an
xform; as an output, the target PMD returns an opaque stream handle to the
application, which it must attach to all of the ops carrying data of that stream. In
stateful processing, every op requires the previous op's data for
compression/decompression. The PMD allocates and sets up resources such as history,
states, etc. within the stream, which are maintained during the processing of the
related ops.

Unlike priv_xforms, a stream is always a NON_SHAREABLE entity. One stream handle must
be attached to only one set of related ops and cannot be reused until all of them are
processed with status success or failure.

.. figure:: img/stateful-op.*

   Stateful Ops


The application should call ``rte_compressdev_stream_create()`` and attach the stream
to each op before enqueuing them for processing, and free it via
``rte_compressdev_stream_free()`` during termination. All ops that are to be
processed statefully should carry the *same* stream.

See the *DPDK API Reference* document for details.

An example pseudocode to set up and process a stream having NUM_CHUNKS, with each
chunk of size CHUNK_LEN, would look like:

.. code-block:: c

    /*
     * pseudocode for stateful compression
     */

    uint8_t cdev_id = rte_compressdev_get_dev_id(<pmd name>);

    /* configure the device. */
    if (rte_compressdev_configure(cdev_id, &conf) < 0)
        rte_exit(EXIT_FAILURE, "Failed to configure compressdev %u", cdev_id);

    if (rte_compressdev_queue_pair_setup(cdev_id, 0, NUM_MAX_INFLIGHT_OPS,
            socket_id()) < 0)
        rte_exit(EXIT_FAILURE, "Failed to setup queue pair\n");

    if (rte_compressdev_start(cdev_id) < 0)
        rte_exit(EXIT_FAILURE, "Failed to start device\n");

    /* setup compress transform */
    struct rte_comp_xform compress_xform = {
        .type = RTE_COMP_COMPRESS,
        .compress = {
            .algo = RTE_COMP_ALGO_DEFLATE,
            .deflate = {
                .huffman = RTE_COMP_HUFFMAN_DEFAULT
            },
            .level = RTE_COMP_LEVEL_PMD_DEFAULT,
            .chksum = RTE_COMP_CHECKSUM_NONE,
            .window_size = DEFAULT_WINDOW_SIZE,
            .hash_algo = RTE_COMP_HASH_ALGO_NONE
        }
    };

    /* create stream */
    void *stream;
    rte_compressdev_stream_create(cdev_id, &compress_xform, &stream);

    /* create an op pool and allocate ops */
    rte_comp_op_bulk_alloc(op_pool, comp_ops, NUM_CHUNKS);

    /* Prepare source and destination mbufs for compression operations */
    unsigned int i;
    for (i = 0; i < NUM_CHUNKS; i++) {
        if (rte_pktmbuf_append(src_mbufs[i], CHUNK_LEN) == NULL)
            rte_exit(EXIT_FAILURE, "Not enough room in the mbuf\n");
        comp_ops[i]->m_src = src_mbufs[i];
        if (rte_pktmbuf_append(dst_mbufs[i], CHUNK_LEN) == NULL)
            rte_exit(EXIT_FAILURE, "Not enough room in the mbuf\n");
        comp_ops[i]->m_dst = dst_mbufs[i];
    }

    /* Set up the compress operations. */
    for (i = 0; i < NUM_CHUNKS; i++) {
        struct rte_comp_op *op = comp_ops[i];
        op->stream = stream;
        op->op_type = RTE_COMP_OP_STATEFUL;
        if (i == NUM_CHUNKS - 1) {
            /* set to final, if last chunk */
            op->flush_flag = RTE_COMP_FLUSH_FINAL;
        } else {
            /* set to NONE, for all intermediary ops */
            op->flush_flag = RTE_COMP_FLUSH_NONE;
        }
        op->src.offset = 0;
        op->dst.offset = 0;
        op->src.length = CHUNK_LEN;
        op->input_chksum = 0;
        num_enqd = rte_compressdev_enqueue_burst(cdev_id, 0, &op, 1);
        /* wait for this to complete before enqueuing next */
        do {
            num_deqd = rte_compressdev_dequeue_burst(cdev_id, 0, &processed_ops, 1);
        } while (num_deqd < num_enqd);
        /* push next op */
    }


Stateful and OUT_OF_SPACE
~~~~~~~~~~~~~~~~~~~~~~~~~

If a PMD supports stateful operation, then an OUT_OF_SPACE status is not an actual
error for the PMD. In such a case, the PMD returns with status
RTE_COMP_OP_STATUS_OUT_OF_SPACE_RECOVERABLE, with consumed = the number of input
bytes read and produced = the length of the complete output buffer.
The application should enqueue the next op with the source starting at consumed+1 and
an output buffer with available space.
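A hedged sketch of continuing after a recoverable out-of-space status is shown below;
``consume_output()`` is a hypothetical application helper, and ``next_op``,
``fresh_dst_mbuf`` and ``stream`` are assumed to be prepared by the application:

.. code-block:: c

    /* op dequeued with status OUT_OF_SPACE_RECOVERABLE: output up to
     * op->produced is valid, input up to op->consumed was absorbed */
    if (op->status == RTE_COMP_OP_STATUS_OUT_OF_SPACE_RECOVERABLE) {
        consume_output(op->m_dst, op->produced); /* hypothetical helper */

        /* continue the same stream from the first unread input byte */
        next_op->stream = stream;
        next_op->op_type = RTE_COMP_OP_STATEFUL;
        next_op->m_src = op->m_src;
        next_op->src.offset = op->src.offset + op->consumed;
        next_op->src.length = op->src.length - op->consumed;
        next_op->m_dst = fresh_dst_mbuf; /* buffer with available space */
        next_op->dst.offset = 0;
        next_op->flush_flag = op->flush_flag;

        rte_compressdev_enqueue_burst(cdev_id, 0, &next_op, 1);
    }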
Hash in Stateful
~~~~~~~~~~~~~~~~

If enabled, the digest buffer will contain a valid digest after the last op in the
stream (having flush = RTE_COMP_FLUSH_FINAL) is successfully processed, i.e. dequeued
with status = RTE_COMP_OP_STATUS_SUCCESS.

Checksum in Stateful
~~~~~~~~~~~~~~~~~~~~

If enabled, the checksum will only be available after the last op in the stream
(having flush = RTE_COMP_FLUSH_FINAL) is successfully processed, i.e. dequeued
with status = RTE_COMP_OP_STATUS_SUCCESS.

Burst in compression API
------------------------

Scheduling of compression operations on DPDK's application data path is
performed using a burst oriented asynchronous API set. A queue pair on a compression
device accepts a burst of compression operations using the enqueue burst API. On
physical devices the enqueue burst API will place the operations to be processed
on the device's hardware input queue; for virtual devices the processing of the
operations is usually completed during the enqueue call to the compression
device. The dequeue burst API will retrieve any processed operations available
from the queue pair on the compression device; for physical devices this is usually
directly from the device's processed queue, and for virtual devices from a
``rte_ring`` where processed operations are placed after being processed on the
enqueue call.

A burst in DPDK compression can be a combination of stateless and stateful operations
with the condition that for stateful ops only one op at a time should be enqueued from
a particular stream, i.e. no two ops should belong to the same stream in a single
burst. However, a burst may contain multiple stateful ops, as long as each op is
attached to a different stream, i.e. a burst can look like:

+---------------+--------------+--------------+-----------------+--------------+--------------+
| enqueue_burst | op1.no_flush | op2.no_flush | op3.flush_final | op4.no_flush | op5.no_flush |
+---------------+--------------+--------------+-----------------+--------------+--------------+

Where op1 .. op5 all belong to different independent data units. op1, op2, op4 and op5
must be stateful, as stateless ops can only use flush full or final, and op3 can be of
type stateless or stateful. Every op with type set to RTE_COMP_OP_STATELESS must be
attached to a priv_xform, and every op with type set to RTE_COMP_OP_STATEFUL *must* be
attached to a stream.

Since each operation in a burst is independent and thus can be completed
out of order, applications which need ordering should set up a per-op user data
area with reordering information, so that they can determine the enqueue order at
dequeue.

Also, if multiple threads call enqueue_burst() on the same queue pair, then it is the
application's onus to use a proper locking mechanism to ensure exclusive enqueuing
of operations.

Enqueue / Dequeue Burst APIs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The burst enqueue API uses a compression device identifier and a queue pair
identifier to specify the compression device queue pair to schedule the processing on.
The ``nb_ops`` parameter is the number of operations to process, which are
supplied in the ``ops`` array of ``rte_comp_op`` structures.
The enqueue function returns the number of operations it actually enqueued for
processing; a return value equal to ``nb_ops`` means that all packets have been
enqueued.

The dequeue API uses the same format as the enqueue API, but
the ``nb_ops`` and ``ops`` parameters are now used to specify the max processed
operations the user wishes to retrieve and the location in which to store them.
The API call returns the actual number of processed operations returned; this
can never be larger than ``nb_ops``.
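A hedged sketch of a typical enqueue/dequeue cycle on queue pair 0 is shown below,
retrying the unaccepted tail of a partial burst. It reuses the ``cdev_id``,
``comp_ops`` and ``NUM_OPS`` names from the earlier sketches and, for brevity,
ignores the per-op error statuses discussed above:

.. code-block:: c

    uint16_t nb_enqd = 0, nb_deqd = 0;
    struct rte_comp_op *deqd_ops[NUM_OPS];

    /* keep offering the unaccepted tail until the queue accepts it all */
    while (nb_enqd < NUM_OPS)
        nb_enqd += rte_compressdev_enqueue_burst(cdev_id, 0,
                &comp_ops[nb_enqd], NUM_OPS - nb_enqd);

    /* poll until every enqueued op has been returned */
    while (nb_deqd < NUM_OPS)
        nb_deqd += rte_compressdev_dequeue_burst(cdev_id, 0,
                &deqd_ops[nb_deqd], NUM_OPS - nb_deqd);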
Sample code
-----------

There are unit test applications that show how to use the compressdev library in
app/test/test_compressdev.c

Compression Device API
~~~~~~~~~~~~~~~~~~~~~~

The compressdev Library API is described in the *DPDK API Reference* document.