..  SPDX-License-Identifier: BSD-3-Clause
    Copyright(c) 2017-2018 Cavium Networks.

Compression Device Library
===========================

The compression framework provides a generic set of APIs to perform compression services
as well as to query and configure compression devices, both physical (hardware) and
virtual (software), to perform those services. The framework currently only supports
lossless compression schemes: Deflate and LZS.

Device Management
-----------------

Device Creation
~~~~~~~~~~~~~~~

Physical compression devices are discovered during the bus probe of the EAL function
which is executed at DPDK initialization, based on their unique device identifier.
For example, PCI devices can be identified using the PCI BDF (bus/bridge, device,
function) notation. Specific physical compression devices, like other physical
devices in DPDK, can be white-listed or black-listed using the EAL command line options.

Virtual devices can be created by two mechanisms, either using the EAL command
line options or from within the application using an EAL API directly.

From the command line, using the --vdev EAL option:

.. code-block:: console

   --vdev  '<pmd name>,socket_id=0'

.. Note::

   * If a DPDK application requires multiple software compression PMD devices, then the
     required number of ``--vdev`` arguments with the appropriate libraries must be added.

   * An application with multiple compression device instances exposed by the same PMD must
     specify a unique name for each device.

   Example: ``--vdev  'pmd0' --vdev  'pmd1'``

Or, by using the rte_vdev_init API within the application code.

.. code-block:: c

   rte_vdev_init("<pmd_name>", "socket_id=0")

All virtual compression devices support the following initialization parameters:

* ``socket_id`` - socket on which to allocate the device resources.

Device Identification
~~~~~~~~~~~~~~~~~~~~~

Each device, whether virtual or physical, is uniquely designated by two
identifiers:

- A unique device index used to designate the compression device in all functions
  exported by the compressdev API.

- A device name used to designate the compression device in console messages, for
  administration or debugging purposes.
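
For illustration, a minimal sketch of mapping between the two identifiers, assuming a
device named ``compress_qat`` exists (the device name here is an assumption of this example):

.. code-block:: c

    /* Look up the device index from its name; a negative value means not found. */
    int cdev_id = rte_compressdev_get_dev_id("compress_qat");
    if (cdev_id < 0)
        rte_exit(EXIT_FAILURE, "Compression device not found\n");

    /* The reverse mapping, e.g. for console messages. */
    const char *name = rte_compressdev_name_get((uint8_t)cdev_id);
    printf("Using compression device %d (%s)\n", cdev_id, name);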

Device Configuration
~~~~~~~~~~~~~~~~~~~~

The configuration of each compression device includes the following operations:

- Allocation of resources, including hardware resources if a physical device.
- Resetting the device into a well-known default state.
- Initialization of statistics counters.

The ``rte_compressdev_configure`` API is used to configure a compression device.

The ``rte_compressdev_config`` structure is used to pass the configuration
parameters.

See *DPDK API Reference* for details.
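
As a minimal sketch (the field values here are example assumptions, not recommendations),
device configuration might look like:

.. code-block:: c

    struct rte_compressdev_config conf = {
        .socket_id = rte_socket_id(),  /* allocate resources on the local socket */
        .nb_queue_pairs = 1,           /* number of queue pairs to set up */
        .max_nb_priv_xforms = 512,     /* max number of private xforms */
        .max_nb_streams = 0,           /* no stateful streams in this example */
    };

    if (rte_compressdev_configure(cdev_id, &conf) < 0)
        rte_exit(EXIT_FAILURE, "Failed to configure compressdev %u", cdev_id);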

Configuration of Queue Pairs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

Each compression device queue pair is individually configured through the
``rte_compressdev_queue_pair_setup`` API.

The ``max_inflight_ops`` parameter is used to pass the maximum number of
``rte_comp_op`` operations that can be present in a queue at a time.
The PMD can then allocate resources accordingly on a specified socket.

See *DPDK API Reference* for details.
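
For example, a single queue pair allowing up to 512 in-flight operations, with resources
allocated on the local NUMA socket, could be set up as follows (the sizing is an assumption):

.. code-block:: c

    #define NUM_MAX_INFLIGHT_OPS 512

    if (rte_compressdev_queue_pair_setup(cdev_id, 0 /* qp id */,
            NUM_MAX_INFLIGHT_OPS, rte_socket_id()) < 0)
        rte_exit(EXIT_FAILURE, "Failed to setup queue pair\n");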

Logical Cores, Memory and Queue Pair Relationships
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The library supports NUMA similarly to what is described in the Cryptodev library section.

A queue pair cannot be shared, and should be used exclusively by a single processing
context for enqueuing operations or dequeuing operations on the same compression device,
since sharing would require global locks and hinder performance. It is however possible
to use a different logical core to dequeue an operation on a queue pair from the logical
core on which it was enqueued. This means that the compression burst enqueue/dequeue
APIs are a logical place to transition from one logical core to another in a
data processing pipeline, as sketched below.
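
A hedged sketch of such a pipeline, with one lcore enqueuing on queue pair 0 and another
dequeuing from it (``struct app_ctx`` and its fields are hypothetical application state):

.. code-block:: c

    /* Producer lcore: enqueue ops on queue pair 0 of the device. */
    static int
    comp_enqueue_lcore(void *arg)
    {
        struct app_ctx *ctx = arg; /* hypothetical application context */

        rte_compressdev_enqueue_burst(ctx->cdev_id, 0, ctx->ops, ctx->nb_ops);
        return 0;
    }

    /* Consumer lcore: dequeue processed ops from the same queue pair. */
    static int
    comp_dequeue_lcore(void *arg)
    {
        struct app_ctx *ctx = arg;
        uint16_t nb_deqd;

        nb_deqd = rte_compressdev_dequeue_burst(ctx->cdev_id, 0,
                                                ctx->deq_ops, ctx->nb_ops);
        /* ... handle the nb_deqd processed ops ... */
        return 0;
    }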

Device Features and Capabilities
---------------------------------

Compression devices define their functionality through two mechanisms: global device
features and algorithm features. Global device features identify device-wide
features which are applicable to the whole device, such as supported hardware
acceleration and CPU features. The list of compression device features can be seen
in the RTE_COMPDEV_FF_XXX macros.

Algorithm features list the individual features which a device supports
per-algorithm, such as stateful compression/decompression, checksum operation etc.
The list of algorithm features can be seen in the RTE_COMP_FF_XXX macros.

Capabilities
~~~~~~~~~~~~

Each PMD has a list of capabilities, including the algorithms listed in
the enum ``rte_comp_algorithm``, with the associated feature flags and
the sliding window range expressed as log base 2 values. The sliding window range gives
the minimum and maximum size of the lookup window that the algorithm uses
to find duplicates.

See *DPDK API Reference* for details.

Each compression poll mode driver defines its array of capabilities
for each algorithm it supports. See the PMD implementation for capability
initialization.

Capabilities Discovery
~~~~~~~~~~~~~~~~~~~~~~

PMD capabilities and features are discovered via the ``rte_compressdev_info_get`` function.

The ``rte_compressdev_info`` structure contains all the relevant information for the device.

See *DPDK API Reference* for details.
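
A hedged sketch of querying a device, assuming the ``rte_compressdev_capability_get()``
helper to look up the DEFLATE entry in the device's capability array:

.. code-block:: c

    struct rte_compressdev_info dev_info;
    const struct rte_compressdev_capabilities *cap;

    rte_compressdev_info_get(cdev_id, &dev_info);
    printf("driver: %s, device feature flags: 0x%" PRIx64 "\n",
           dev_info.driver_name, dev_info.feature_flags);

    cap = rte_compressdev_capability_get(cdev_id, RTE_COMP_ALGO_DEFLATE);
    if (cap == NULL)
        rte_exit(EXIT_FAILURE, "Device does not support DEFLATE\n");

    if (cap->comp_feature_flags & RTE_COMP_FF_SHAREABLE_PRIV_XFORM)
        printf("priv_xform is shareable\n");
    printf("sliding window: min 2^%u, max 2^%u\n",
           cap->window_size.min, cap->window_size.max);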

Compression Operation
----------------------

DPDK compression supports two types of compression methodologies:

- Stateless - data associated with a compression operation is compressed without any reference
  to another compression operation.

- Stateful - data in each compression operation is compressed with reference to previous compression
  operations in the same data stream, i.e. a history of data is maintained between the operations.

For more explanation, please refer to RFC 1951: https://www.ietf.org/rfc/rfc1951.txt

Operation Representation
~~~~~~~~~~~~~~~~~~~~~~~~

A compression operation is described via ``struct rte_comp_op``, which contains both input and
output data. The operation structure includes the operation type (stateless or stateful),
the operation status, the priv_xform/stream handle, and the source, destination and checksum buffer
pointers. It also contains the source mempool from which the operation is allocated.
The PMD updates the consumed field with the amount of data read from the source buffer and the
produced field with the amount of data written into the destination buffer, along with the
status of the operation. See section *Produced, Consumed And Operation Status* for more details.

The compression operation mempool also has the ability to allocate private memory with the
operation for the application's purposes. The application software is responsible for specifying
all the operation specific fields in the ``rte_comp_op`` structure, which are then used
by the compression PMD to process the requested operation.


Operation Management and Allocation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The compressdev library provides an API set for managing compression operations which
utilizes the Mempool Library to allocate operation buffers. Therefore, it ensures
that compression operations are interleaved optimally across the channels and
ranks for optimal processing.

A ``rte_comp_op`` contains a field indicating the pool it originated from.

``rte_comp_op_alloc()`` and ``rte_comp_op_bulk_alloc()`` are used to allocate
compression operations from a given compression operation mempool.
The operation gets reset before being returned to the user, so that the operation
is always in a known good state before use by the application.

``rte_comp_op_free()`` is called by the application to return an operation to
its allocating pool.

See *DPDK API Reference* for details.
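
A minimal sketch of the allocation lifecycle, assuming an op pool of NUM_OPS elements
with no per-op private user area:

.. code-block:: c

    struct rte_mempool *op_pool = rte_comp_op_pool_create("comp_op_pool",
            NUM_OPS, 0 /* cache size */, 0 /* user size */, rte_socket_id());
    if (op_pool == NULL)
        rte_exit(EXIT_FAILURE, "Failed to create op pool\n");

    struct rte_comp_op *comp_ops[NUM_OPS];
    if (rte_comp_op_bulk_alloc(op_pool, comp_ops, NUM_OPS) == 0)
        rte_exit(EXIT_FAILURE, "Failed to allocate ops\n");

    /* ... fill in and process the ops ... */

    /* Return each op to its allocating pool once fully processed. */
    for (i = 0; i < NUM_OPS; i++)
        rte_comp_op_free(comp_ops[i]);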

Passing source data as mbuf-chain
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

If input data is scattered across several different buffers, then the
application can either parse through all such buffers and make one
mbuf-chain and enqueue it for processing or, alternatively, it can
make multiple sequential enqueue_burst() calls, one per buffer,
processing them statefully. See *Compression API Stateful Operation*
for stateful processing of ops.
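
A minimal sketch of building such a chain with the mbuf library (``bufs`` and ``nb_bufs``
are assumed to describe the scattered input segments):

.. code-block:: c

    /* Chain the scattered segments into one mbuf-chain headed by bufs[0]. */
    struct rte_mbuf *head = bufs[0];
    for (i = 1; i < nb_bufs; i++)
        if (rte_pktmbuf_chain(head, bufs[i]) < 0)
            rte_exit(EXIT_FAILURE, "Exceeded mbuf chain segment limit\n");

    /* The whole chain is then submitted as a single source buffer. */
    op->m_src = head;
    op->src.offset = 0;
    op->src.length = rte_pktmbuf_pkt_len(head);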

Operation Status
~~~~~~~~~~~~~~~~

Each operation carries status information updated by the PMD after it is processed.
The following statuses are currently supported:

- RTE_COMP_OP_STATUS_SUCCESS:
    Operation is successfully completed

- RTE_COMP_OP_STATUS_NOT_PROCESSED:
    Operation has not yet been processed by the device

- RTE_COMP_OP_STATUS_INVALID_ARGS:
    Operation failed due to invalid arguments in the request

- RTE_COMP_OP_STATUS_ERROR:
    Operation failed because of an internal error

- RTE_COMP_OP_STATUS_INVALID_STATE:
    Operation is invoked in an invalid state

- RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED:
    Output buffer ran out of space during processing. This is an error case:
    the PMD cannot continue from here.

- RTE_COMP_OP_STATUS_OUT_OF_SPACE_RECOVERABLE:
    Output buffer ran out of space before the operation completed, but this
    is not an error case. Output data up to op.produced can be used and the
    next op in the stream should continue on from op.consumed+1.

Produced, Consumed And Operation Status
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

- If status is RTE_COMP_OP_STATUS_SUCCESS,
    consumed = amount of data read from the input buffer, and
    produced = amount of data written into the destination buffer
- If status is RTE_COMP_OP_STATUS_ERROR,
    consumed = produced = 0 or undefined
- If status is RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED,
    consumed = 0 and
    produced = usually 0, but in decompression cases a PMD may return > 0,
    i.e. the amount of data successfully produced until the out of space condition
    was hit. The application can consume this output data, if required.
- If status is RTE_COMP_OP_STATUS_OUT_OF_SPACE_RECOVERABLE,
    consumed = amount of data read, and
    produced = amount of data successfully produced until the
    out of space condition was hit. The PMD has the ability to recover
    from here, so the application can submit the next op from
    consumed+1 with a destination buffer that has available space,
    as sketched after this list.
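
As an illustrative sketch (not a complete error policy; the ``resubmit_*`` and
``handle_failure`` helpers are hypothetical), an application might react to a dequeued
op as follows:

.. code-block:: c

    switch (op->status) {
    case RTE_COMP_OP_STATUS_SUCCESS:
        /* op->produced bytes of valid output are in op->m_dst. */
        break;
    case RTE_COMP_OP_STATUS_OUT_OF_SPACE_RECOVERABLE:
        /* Stateful recovery: resubmit from op->consumed with more space. */
        resubmit_from(op, op->consumed);
        break;
    case RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED:
        /* Terminal: retry the whole input with a larger output buffer. */
        resubmit_with_larger_dst(op);
        break;
    default:
        /* RTE_COMP_OP_STATUS_ERROR and others: treat as failure. */
        handle_failure(op);
    }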

Transforms
----------

Compression transforms (``rte_comp_xform``) are the mechanism to
specify the details of the compression operation such as algorithm,
window size and checksum.
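
For example, a decompress transform for Deflate might be filled in as follows (the
window size value is an illustrative assumption):

.. code-block:: c

    struct rte_comp_xform decompress_xform = {
        .type = RTE_COMP_DECOMPRESS,
        .decompress = {
            .algo = RTE_COMP_ALGO_DEFLATE,
            .chksum = RTE_COMP_CHECKSUM_NONE,
            .window_size = 15, /* log base 2 of the sliding window size */
            .hash_algo = RTE_COMP_HASH_ALGO_NONE
        }
    };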

Compression API Hash support
----------------------------

The compression API allows an application to enable digest calculation
alongside compression and decompression of data. A PMD reflects its
support for hash algorithms via the capability algo feature flags.
If supported, the PMD always calculates the digest on the plaintext, i.e.
before compression and after decompression.

The currently supported hash algorithms are SHA-1 and the SHA2 family's
SHA256.

See *DPDK API Reference* for details.

If required, the application should set a valid hash algo in the compress
or decompress xforms during ``rte_compressdev_stream_create()``
or ``rte_compressdev_private_xform_create()`` and pass a valid
output buffer in the ``rte_comp_op`` hash field struct to store the
resulting digest. The buffer passed should be contiguous and large
enough to store the digest, which is 20 bytes for SHA-1 and
32 bytes for SHA2-256.
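
A hedged sketch of enabling SHA2-256 hashing; the exact layout of the op's hash field
should be checked against ``rte_comp.h``, and the digest buffer allocation via
``rte_malloc`` is an assumption of this example:

.. code-block:: c

    /* Request digest calculation in the compress xform. */
    compress_xform.compress.hash_algo = RTE_COMP_HASH_ALGO_SHA2_256;

    /* Provide a contiguous 32-byte buffer to receive the digest. */
    uint8_t *digest = rte_malloc(NULL, 32, 0);
    op->hash.digest = digest;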

Compression API Stateless operation
------------------------------------

An op is processed statelessly if it has:

- op_type set to RTE_COMP_OP_STATELESS
- flush value set to RTE_COMP_FLUSH_FULL or RTE_COMP_FLUSH_FINAL
  (required only on the compression side)
- all required input in the source buffer

When all of the above conditions are met, the PMD initiates stateless processing
and releases acquired resources after processing of the current operation is
complete. The application can enqueue multiple stateless ops in a single burst
and must attach a priv_xform handle to such ops.

priv_xform in Stateless operation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A priv_xform is private data, internally managed by the PMD, which the PMD maintains to do
stateless processing. priv_xforms are initialized from a generic xform structure by the
application via a call to ``rte_compressdev_private_xform_create()``; as an output, the PMD
returns an opaque priv_xform reference. If the PMD supports a SHAREABLE priv_xform,
indicated via an algorithm feature flag, then the application can attach the same
priv_xform to many stateless ops at a time. If not, then the application needs to
create as many priv_xforms as it expects to have stateless operations in flight.

.. figure:: img/stateless-op.*

   Stateless Ops using Non-Shareable priv_xform


.. figure:: img/stateless-op-shared.*

   Stateless Ops using Shareable priv_xform


The application should call ``rte_compressdev_private_xform_create()`` and attach the handle
to each stateless op before enqueuing it for processing, and free it via
``rte_compressdev_private_xform_free()`` during termination.

An example pseudocode to set up and process NUM_OPS stateless ops, each of length OP_LEN,
using a priv_xform would look like:

.. code-block:: c

    /*
     * pseudocode for stateless compression
     */

    uint8_t cdev_id = rte_compressdev_get_dev_id(<pmd name>);

    /* configure the device. */
    if (rte_compressdev_configure(cdev_id, &conf) < 0)
        rte_exit(EXIT_FAILURE, "Failed to configure compressdev %u", cdev_id);

    if (rte_compressdev_queue_pair_setup(cdev_id, 0, NUM_MAX_INFLIGHT_OPS,
                            rte_socket_id()) < 0)
        rte_exit(EXIT_FAILURE, "Failed to setup queue pair\n");

    if (rte_compressdev_start(cdev_id) < 0)
        rte_exit(EXIT_FAILURE, "Failed to start device\n");

    /* setup compress transform */
    struct rte_comp_xform compress_xform = {
        .type = RTE_COMP_COMPRESS,
        .compress = {
            .algo = RTE_COMP_ALGO_DEFLATE,
            .deflate = {
                .huffman = RTE_COMP_HUFFMAN_DEFAULT
            },
            .level = RTE_COMP_LEVEL_PMD_DEFAULT,
            .chksum = RTE_COMP_CHECKSUM_NONE,
            .window_size = DEFAULT_WINDOW_SIZE,
            .hash_algo = RTE_COMP_HASH_ALGO_NONE
        }
    };

    /* create priv_xform and initialize it for the compression device. */
    int shareable = 1;
    void *priv_xform = NULL;
    struct rte_compressdev_info dev_info;
    rte_compressdev_info_get(cdev_id, &dev_info);
    if (dev_info.capabilities->comp_feature_flags & RTE_COMP_FF_SHAREABLE_PRIV_XFORM) {
        rte_compressdev_private_xform_create(cdev_id, &compress_xform, &priv_xform);
    } else {
        shareable = 0;
    }

    /* create operation pool via call to rte_comp_op_pool_create and alloc ops */
    rte_comp_op_bulk_alloc(op_pool, comp_ops, NUM_OPS);

    /* prepare ops for compression operations */
    for (i = 0; i < NUM_OPS; i++) {
        struct rte_comp_op *op = comp_ops[i];
        if (!shareable)
            rte_compressdev_private_xform_create(cdev_id, &compress_xform,
                                                 &op->private_xform);
        else
            op->private_xform = priv_xform;
        op->op_type = RTE_COMP_OP_STATELESS;
        op->flush_flag = RTE_COMP_FLUSH_FINAL;

        op->src.offset = 0;
        op->dst.offset = 0;
        op->src.length = OP_LEN;
        op->input_chksum = 0;
        setup op->m_src and op->m_dst;
    }
    num_enqd = rte_compressdev_enqueue_burst(cdev_id, 0, comp_ops, NUM_OPS);
    /* wait for this to complete before enqueuing next */
    uint16_t num_deqd = 0;
    do {
        num_deqd += rte_compressdev_dequeue_burst(cdev_id, 0,
                        &processed_ops[num_deqd], NUM_OPS - num_deqd);
    } while (num_deqd < num_enqd);


Stateless and OUT_OF_SPACE
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

OUT_OF_SPACE is a condition where the output buffer runs out of space while the PMD
still has more data to produce. If the PMD runs into such a condition, it returns an
RTE_COMP_OP_STATUS_OUT_OF_SPACE_TERMINATED error. In such a case, the PMD resets itself
and can set consumed = 0 and produced = the amount of output it could produce before
hitting the out-of-space condition. The application would need to resubmit the whole
input with a larger output buffer, if it wants the operation to be completed.

Hash in Stateless
~~~~~~~~~~~~~~~~~

If hash is enabled, the digest buffer will contain valid data after the op is successfully
processed, i.e. dequeued with status = RTE_COMP_OP_STATUS_SUCCESS.

Checksum in Stateless
~~~~~~~~~~~~~~~~~~~~~

If checksum is enabled, the checksum will only be available after the op is successfully
processed, i.e. dequeued with status = RTE_COMP_OP_STATUS_SUCCESS.

Compression API Stateful operation
-----------------------------------

The compression API provides the RTE_COMP_FF_STATEFUL_COMPRESSION and
RTE_COMP_FF_STATEFUL_DECOMPRESSION feature flags for a PMD to reflect
its support for stateful operations.

A stateful operation in DPDK compression means the application invokes
enqueue_burst() multiple times to process related chunks of data, because the
application broke the data into several ops.

In such cases:

- ops are set up with op_type RTE_COMP_OP_STATEFUL,
- all ops except the last have their flush value set to RTE_COMP_FLUSH_NONE or
  RTE_COMP_FLUSH_SYNC, and the last has its flush value set to RTE_COMP_FLUSH_FULL
  or RTE_COMP_FLUSH_FINAL.

When the above conditions are met, the PMD initiates
stateful processing and releases acquired resources after processing the
operation with flush value = RTE_COMP_FLUSH_FULL/FINAL is complete.
Unlike stateless, the application can enqueue only one stateful op from
a particular stream at a time and must attach a stream handle
to each op.

Stream in Stateful operation
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

A `stream` in DPDK compression is a logical entity which identifies a related set of ops. Say one
large file is broken into multiple chunks: then the file is represented by a stream and each chunk
of that file is represented by a compression op `rte_comp_op`. Whenever an application wants stateful
processing of such data, it must get a stream handle via a call to ``rte_compressdev_stream_create()``
with an xform; as an output, the target PMD will return an opaque stream handle to the application,
which it must attach to all of the ops carrying data of that stream. In stateful processing, every op
requires the previous op's data for compression/decompression. A PMD allocates and sets up resources such
as history, states, etc. within the stream, which are maintained during the processing of the related ops.

Unlike priv_xforms, a stream is always a NON_SHAREABLE entity. One stream handle must be attached to only
one set of related ops and cannot be reused until all of them are processed with status success or failure.

.. figure:: img/stateful-op.*

   Stateful Ops


The application should call ``rte_compressdev_stream_create()`` and attach the handle to each op before
enqueuing it for processing, and free it via ``rte_compressdev_stream_free()`` during
termination. All ops that are to be processed statefully should carry the *same* stream handle.

See *DPDK API Reference* document for details.

An example pseudocode to set up and process a stream having NUM_CHUNKS, with each chunk of size CHUNK_LEN, would look like:

.. code-block:: c

    /*
     * pseudocode for stateful compression
     */

    uint8_t cdev_id = rte_compressdev_get_dev_id(<pmd name>);

    /* configure the device. */
    if (rte_compressdev_configure(cdev_id, &conf) < 0)
        rte_exit(EXIT_FAILURE, "Failed to configure compressdev %u", cdev_id);

    if (rte_compressdev_queue_pair_setup(cdev_id, 0, NUM_MAX_INFLIGHT_OPS,
                                    rte_socket_id()) < 0)
        rte_exit(EXIT_FAILURE, "Failed to setup queue pair\n");

    if (rte_compressdev_start(cdev_id) < 0)
        rte_exit(EXIT_FAILURE, "Failed to start device\n");

    /* setup compress transform. */
    struct rte_comp_xform compress_xform = {
        .type = RTE_COMP_COMPRESS,
        .compress = {
            .algo = RTE_COMP_ALGO_DEFLATE,
            .deflate = {
                .huffman = RTE_COMP_HUFFMAN_DEFAULT
            },
            .level = RTE_COMP_LEVEL_PMD_DEFAULT,
            .chksum = RTE_COMP_CHECKSUM_NONE,
            .window_size = DEFAULT_WINDOW_SIZE,
            .hash_algo = RTE_COMP_HASH_ALGO_NONE
        }
    };

    /* create stream */
    void *stream;
    rte_compressdev_stream_create(cdev_id, &compress_xform, &stream);

    /* create an op pool and allocate ops */
    rte_comp_op_bulk_alloc(op_pool, comp_ops, NUM_CHUNKS);

    /* Prepare source and destination mbufs for compression operations */
    unsigned int i;
    for (i = 0; i < NUM_CHUNKS; i++) {
        if (rte_pktmbuf_append(mbufs[i], CHUNK_LEN) == NULL)
            rte_exit(EXIT_FAILURE, "Not enough room in the mbuf\n");
        comp_ops[i]->m_src = mbufs[i];
        if (rte_pktmbuf_append(dst_mbufs[i], CHUNK_LEN) == NULL)
            rte_exit(EXIT_FAILURE, "Not enough room in the mbuf\n");
        comp_ops[i]->m_dst = dst_mbufs[i];
    }

    /* Set up the compress operations. */
    for (i = 0; i < NUM_CHUNKS; i++) {
        struct rte_comp_op *op = comp_ops[i];
        op->stream = stream;
        op->op_type = RTE_COMP_OP_STATEFUL;
        if (i == NUM_CHUNKS - 1) {
            /* set to final, if last chunk */
            op->flush_flag = RTE_COMP_FLUSH_FINAL;
        } else {
            /* set to NONE, for all intermediary ops */
            op->flush_flag = RTE_COMP_FLUSH_NONE;
        }
        op->src.offset = 0;
        op->dst.offset = 0;
        op->src.length = CHUNK_LEN;
        op->input_chksum = 0;
        num_enqd = rte_compressdev_enqueue_burst(cdev_id, 0, &comp_ops[i], 1);
        /* wait for this to complete before enqueuing the next op */
        do {
            num_deqd = rte_compressdev_dequeue_burst(cdev_id, 0, &processed_ops, 1);
        } while (num_deqd < num_enqd);
        /* push next op */
    }


Stateful and OUT_OF_SPACE
~~~~~~~~~~~~~~~~~~~~~~~~~~~

If the PMD supports stateful operation, then an OUT_OF_SPACE status is not an actual
error for the PMD. In such a case, the PMD returns with status
RTE_COMP_OP_STATUS_OUT_OF_SPACE_RECOVERABLE, with consumed = the number of input bytes
read and produced = the length of the complete output buffer.
The application should enqueue the next op with source starting at consumed+1 and an
output buffer with available space.

Hash in Stateful
~~~~~~~~~~~~~~~~

If enabled, the digest buffer will contain a valid digest after the last op in the stream
(having flush = RTE_COMP_FLUSH_FINAL) is successfully processed, i.e. dequeued
with status = RTE_COMP_OP_STATUS_SUCCESS.

Checksum in Stateful
~~~~~~~~~~~~~~~~~~~~

If enabled, the checksum will only be available after the last op in the stream
(having flush = RTE_COMP_FLUSH_FINAL) is successfully processed, i.e. dequeued
with status = RTE_COMP_OP_STATUS_SUCCESS.

Burst in compression API
-------------------------

Scheduling of compression operations on DPDK's application data path is
performed using a burst-oriented asynchronous API set. A queue pair on a compression
device accepts a burst of compression operations using the enqueue burst API. On physical
devices the enqueue burst API will place the operations to be processed
on the device's hardware input queue; for virtual devices the processing of the
operations is usually completed during the enqueue call to the compression
device. The dequeue burst API will retrieve any processed operations available
from the queue pair on the compression device; for physical devices this is usually
directly from the device's processed queue, and for virtual devices from a
``rte_ring`` where processed operations are placed after being processed on the
enqueue call.

A burst in DPDK compression can be a combination of stateless and stateful operations, with the
condition that for stateful ops only one op at a time should be enqueued from a particular stream,
i.e. no two ops should belong to the same stream in a single burst. However, a burst may contain
multiple stateful ops, as long as each op is attached to a different stream, i.e. a burst can look like:

+---------------+--------------+--------------+-----------------+--------------+--------------+
| enqueue_burst | op1.no_flush | op2.no_flush | op3.flush_final | op4.no_flush | op5.no_flush |
+---------------+--------------+--------------+-----------------+--------------+--------------+

Where op1 .. op5 all belong to different independent data units. op1, op2, op4 and op5 must be
stateful, as stateless ops can only use flush full or final, while op3 can be stateless or stateful.
Every op with op_type set to RTE_COMP_OP_STATELESS must be attached to a priv_xform, and
every op with op_type set to RTE_COMP_OP_STATEFUL *must* be attached to a stream.

Since each operation in a burst is independent and thus can be completed
out of order, applications which need ordering should set up a per-op user data
area with reordering information, so that they can determine the enqueue order at
dequeue.

Also, if multiple threads call enqueue_burst() on the same queue pair, then it is the
application's responsibility to use a proper locking mechanism to ensure exclusive
enqueuing of operations.

Enqueue / Dequeue Burst APIs
~~~~~~~~~~~~~~~~~~~~~~~~~~~~

The burst enqueue API uses a compression device identifier and a queue pair
identifier to specify the compression device queue pair to schedule the processing on.
The ``nb_ops`` parameter is the number of operations to process, which are
supplied in the ``ops`` array of ``rte_comp_op`` structures.
The enqueue function returns the number of operations it actually enqueued for
processing; a return value equal to ``nb_ops`` means that all operations have been
enqueued.

The dequeue API uses the same format as the enqueue API, but
the ``nb_ops`` and ``ops`` parameters are now used to specify the maximum number of processed
operations the user wishes to retrieve and the location in which to store them.
The API call returns the actual number of processed operations returned; this
can never be larger than ``nb_ops``.
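
A short hedged sketch of coping with a partial enqueue, which the return-value semantics
above allow (ops not accepted by enqueue remain owned by the application and can simply
be resubmitted):

.. code-block:: c

    uint16_t sent = 0, drained = 0;

    while (sent < nb_ops)
        sent += rte_compressdev_enqueue_burst(cdev_id, qp_id,
                                              &ops[sent], nb_ops - sent);

    /* Drain everything that was submitted. */
    while (drained < sent)
        drained += rte_compressdev_dequeue_burst(cdev_id, qp_id,
                                                 &deq_ops[drained], sent - drained);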

Sample code
-----------

There are unit test applications that show how to use the compressdev library in
app/test/test_compressdev.c.

Compression Device API
~~~~~~~~~~~~~~~~~~~~~~

The compressdev Library API is described in the *DPDK API Reference* document.