Revision tags: v24.07-rc2, v24.07-rc1, v24.03, v24.03-rc4, v24.03-rc3, v24.03-rc2, v24.03-rc1

# 65b078dc | 24-Jan-2024 | David Marchand <david.marchand@redhat.com>

lib: remove duplicate prefix in logs

RTE_LOG() macros prefix the log messages based on the logtype. This results in logs like:

TMTY: TELEMETRY: Attempting socket bind to path '/run/user/...'
TMTY: TELEMETRY: Socket creation and binding ok
TMTY: TELEMETRY: Telemetry initialized ok

Remove the redundancy in some libraries following their conversion to RTE_LOG/RTE_LOG_LINE.

Note: for consistency, dmadev logs are now prefixed with "DMADEV: " instead of the too generic "dma: ".

Fixes: 97433132c2ed ("lib: use per line logging in helpers")
Fixes: 0e21c7c07d62 ("lib: replace logging helpers")

Reported-by: Thomas Monjalon <thomas@monjalon.net>
Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Ciara Power <ciara.power@intel.com>
Reviewed-by: Chengwen Feng <fengchengwen@huawei.com>
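
A hedged illustration of the redundancy being removed, using the application logtype USER1 (the TELEMETRY case quoted above is analogous; the message text is illustrative, not taken from any library):

    #include <rte_log.h>

    static void
    log_prefix_example(void)
    {
        /* Redundant: the format string repeats a prefix on top of the one
         * already derived from the logtype. */
        RTE_LOG(INFO, USER1, "USER1: Socket creation and binding ok\n");

        /* Preferred: rely on the logtype prefix alone. */
        RTE_LOG(INFO, USER1, "Socket creation and binding ok\n");
    }
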

Revision tags: v23.11, v23.11-rc4

# 97433132 | 17-Nov-2023 | David Marchand <david.marchand@redhat.com>

lib: use per line logging in helpers

Use RTE_LOG_LINE in existing macros that append a \n. This will help catch unwanted newline characters or multi-line log messages.

Signed-off-by: David Marchand <david.marchand@redhat.com>
Reviewed-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Andrew Rybchenko <andrew.rybchenko@oktetlabs.ru>
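
A minimal sketch of the helper pattern described: a per-library macro built on RTE_LOG_LINE(), which appends the trailing newline itself. The GPU_LOG name and the USER1 logtype are assumptions for illustration:

    #include <rte_log.h>

    /* Hypothetical per-library helper: callers pass single-line messages
     * without a trailing '\n'; RTE_LOG_LINE() adds it. */
    #define GPU_LOG(level, ...) \
        RTE_LOG_LINE(level, USER1, __VA_ARGS__)

    static void
    per_line_log_example(void)
    {
        GPU_LOG(ERR, "invalid device ID %d", -1);
    }
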

Revision tags: v23.11-rc3, v23.11-rc2

# 5dbd4e93 | 26-Oct-2023 | Tyler Retzlaff <roretzla@linux.microsoft.com>

gpudev: use stdatomic API

Replace the use of the GCC builtin __atomic_xxx intrinsics with the corresponding rte_atomic_xxx optional stdatomic API.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: David Marchand <david.marchand@redhat.com>
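
A sketch of the kind of conversion described, using DPDK's <rte_stdatomic.h> wrappers; the refcnt variable is hypothetical and not the actual gpudev field:

    #include <stdint.h>
    #include <rte_stdatomic.h>

    static RTE_ATOMIC(uint32_t) refcnt; /* hypothetical counter */

    static void
    stdatomic_example(void)
    {
        /* Before: GCC builtin intrinsic.
         * __atomic_fetch_add(&refcnt, 1, __ATOMIC_RELAXED);
         */

        /* After: optional stdatomic wrapper. */
        rte_atomic_fetch_add_explicit(&refcnt, 1, rte_memory_order_relaxed);
    }
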

Revision tags: v23.11-rc1, v23.07, v23.07-rc4, v23.07-rc3, v23.07-rc2, v23.07-rc1, v23.03, v23.03-rc4, v23.03-rc3, v23.03-rc2, v23.03-rc1

# 75d75530 | 03-Jan-2023 | David Marchand <david.marchand@redhat.com>

gpudev: fix deadlocks when registering callback

gpu_callback_lock was not released in some branches of the register helper. While at it, set rte_errno in one of those branches.

Fixes: 18cb07563165 ("gpudev: add event notification")
Cc: stable@dpdk.org

Signed-off-by: David Marchand <david.marchand@redhat.com>
Acked-by: Elena Agostini <eagostini@nvidia.com>
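
A sketch of the error-path discipline this fix enforces: release the lock and set rte_errno on every early return. The lock, helper and check below are hypothetical, not the actual gpudev code:

    #include <errno.h>
    #include <rte_errno.h>
    #include <rte_spinlock.h>

    static rte_spinlock_t cb_lock = RTE_SPINLOCK_INITIALIZER;

    static int
    callback_register_sketch(void (*callback)(void))
    {
        rte_spinlock_lock(&cb_lock);
        if (callback == NULL) {
            /* Early return: unlock before leaving and report the error. */
            rte_spinlock_unlock(&cb_lock);
            rte_errno = EINVAL;
            return -rte_errno;
        }
        /* ... store the callback ... */
        rte_spinlock_unlock(&cb_lock);
        return 0;
    }
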

Revision tags: v22.11, v22.11-rc4, v22.11-rc3, v22.11-rc2, v22.11-rc1

# 72b452c5 | 27-Aug-2022 | Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>

eal: remove unneeded includes from a public header

Do not include <ctype.h>, <errno.h>, and <stdlib.h> from <rte_common.h>, because they are not used by this file. Include the needed headers directly from the files that need them.

Signed-off-by: Dmitry Kozlyuk <dmitry.kozliuk@gmail.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>

Revision tags: v22.07, v22.07-rc4, v22.07-rc3, v22.07-rc2

# 564178d3 | 20-Jun-2022 | Sean Morrissey <sean.morrissey@intel.com>

gpudev: remove unneeded header includes

These header includes have been flagged by the iwyu_tool and removed.

Signed-off-by: Sean Morrissey <sean.morrissey@intel.com>

Revision tags: v22.07-rc1, v22.03, v22.03-rc4

# 1fd3de64 | 08-Mar-2022 | Elena Agostini <eagostini@nvidia.com>

gpudev: fix page alignment in communication list

Memory allocated for CPU mapping of the status flag in the communication list should be aligned to the GPU page size, which can differ from the CPU page size.

The GPU page size is added to the GPU info, and is used when creating a communication list.

Fixes: 9b8cae4d991e ("gpudev: use CPU mapping in communication list")

Signed-off-by: Elena Agostini <eagostini@nvidia.com>
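
A sketch of the alignment rule described, assuming struct rte_gpu_info exposes the new page_size field and using the generic RTE_ALIGN_CEIL() helper:

    #include <rte_common.h>
    #include <rte_gpudev.h>

    static size_t
    status_alloc_size(int16_t dev_id, size_t flag_size)
    {
        struct rte_gpu_info info;

        if (rte_gpu_info_get(dev_id, &info) < 0)
            return 0;
        /* Round the CPU-mapped status area up to the GPU page size,
         * which may differ from the CPU page size. */
        return RTE_ALIGN_CEIL(flag_size, info.page_size);
    }
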

Revision tags: v22.03-rc3, v22.03-rc2

# 9b8cae4d | 22-Feb-2022 | Elena Agostini <eagostini@nvidia.com>

gpudev: use CPU mapping in communication list

rte_gpu_mem_cpu_map() exposes a GPU memory area to the CPU. In the gpudev communication list this is useful for storing the status flag.

A communication list status flag allocated in GPU memory and mapped for CPU visibility can be updated by the CPU and polled by a GPU workload.

The polling operation is more frequent than the CPU update operation. Having the status flag in GPU memory reduces the GPU workload polling latency.

If the CPU mapping feature is not enabled, the status flag resides in CPU memory registered so that it is visible from the GPU.

To ease interaction with the status flag, this patch also provides set/get functions for it.

Signed-off-by: Elena Agostini <eagostini@nvidia.com>
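
A CPU-side sketch of the set/get status helpers mentioned above; the function and enum names are assumed from this series, and error handling is reduced to early returns:

    #include <rte_gpudev.h>

    static int
    wait_item_done(struct rte_gpu_comm_list *item)
    {
        enum rte_gpu_comm_list_status status;

        /* Mark the item ready for the GPU workload ... */
        if (rte_gpu_comm_set_status(item, RTE_GPU_COMM_LIST_READY) < 0)
            return -1;
        /* ... then poll until the GPU marks it done. */
        do {
            if (rte_gpu_comm_get_status(item, &status) < 0)
                return -1;
        } while (status != RTE_GPU_COMM_LIST_DONE);
        return 0;
    }
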
# 30a1de10 | 15-Feb-2022 | Sean Morrissey <sean.morrissey@intel.com>

lib: remove unneeded header includes

These header includes have been flagged by the iwyu_tool and removed.

Signed-off-by: Sean Morrissey <sean.morrissey@intel.com>

Revision tags: v22.03-rc1

# d69bb47d | 27-Jan-2022 | Elena Agostini <eagostini@nvidia.com>

gpudev: expose GPU memory to CPU

Make it possible to expose a GPU memory area and make it accessible from the CPU.

GPU memory has to be allocated via rte_gpu_mem_alloc().

This patch allows the gpudev library to map (and unmap), through the GPU driver, a chunk of GPU memory and to return a memory pointer usable by the CPU to access the GPU memory area.

Signed-off-by: Elena Agostini <eagostini@nvidia.com>
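
A sketch of the map/unmap flow described, assuming the rte_gpu_mem_cpu_map()/rte_gpu_mem_cpu_unmap() calls of this series and the rte_gpu_mem_alloc() signature with an alignment argument:

    #include <string.h>
    #include <rte_gpudev.h>

    static int
    cpu_map_example(int16_t dev_id, size_t size)
    {
        void *gpu_ptr, *cpu_ptr;

        gpu_ptr = rte_gpu_mem_alloc(dev_id, size, 0);
        if (gpu_ptr == NULL)
            return -1;

        /* Ask the GPU driver for a CPU-visible mapping of the GPU area. */
        cpu_ptr = rte_gpu_mem_cpu_map(dev_id, size, gpu_ptr);
        if (cpu_ptr == NULL) {
            rte_gpu_mem_free(dev_id, gpu_ptr);
            return -1;
        }

        memset(cpu_ptr, 0, size); /* the CPU can now access GPU memory */

        rte_gpu_mem_cpu_unmap(dev_id, gpu_ptr);
        rte_gpu_mem_free(dev_id, gpu_ptr);
        return 0;
    }
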
# c8557ed4 | 08-Jan-2022 | Elena Agostini <eagostini@nvidia.com>

gpudev: add alignment for memory allocation

Similarly to rte_malloc, rte_gpu_mem_alloc accepts the memory alignment size as an input. The GPU driver should return a GPU memory address aligned to the input value.

Signed-off-by: Elena Agostini <eagostini@nvidia.com>
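
A minimal call sketch for the alignment parameter described; the size and alignment values are illustrative:

    #include <rte_gpudev.h>

    static void *
    aligned_alloc_example(int16_t dev_id)
    {
        /* Request 1 MiB of GPU memory aligned to 4 kiB; 0 would mean
         * default alignment. */
        return rte_gpu_mem_alloc(dev_id, 1 << 20, 4096);
    }
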

Revision tags: v21.11

# 579147d7 | 25-Nov-2021 | Elena Agostini <eagostini@nvidia.com>

gpudev: remove unnecessary memory barrier

Remove the unnecessary rte_gpu_wmb from rte_gpu_comm_populate_list_pkts. It causes a performance degradation on the NVIDIA V100 GPU.

This change doesn't affect any functionality as the status resides in CPU registered memory.

Fixes: c7ebd65c1372 ("gpudev: add communication list")

Signed-off-by: Elena Agostini <eagostini@nvidia.com>

Revision tags: v21.11-rc4

# 1674c56d | 23-Nov-2021 | Elena Agostini <eagostini@nvidia.com>

gpudev: manage null parameters in memory functions

The gpudev functions free, register and unregister return gracefully if the input pointer is NULL or the size is 0, as the API doc indicated these are accepted no-op values.

CUDA driver checks are removed because they are redundant with the checks added in the gpudev library.

Fixes: e818c4e2bf50 ("gpudev: add memory API")

Signed-off-by: Elena Agostini <eagostini@nvidia.com>
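
A small usage sketch of the accepted no-op values described in the message (illustrative; return values ignored for brevity):

    #include <rte_gpudev.h>

    static void
    noop_example(int16_t dev_id)
    {
        /* Per the commit message, a NULL pointer or a zero size is treated
         * as an accepted no-op rather than an error. */
        rte_gpu_mem_free(dev_id, NULL);
        rte_gpu_mem_unregister(dev_id, NULL);
        rte_gpu_mem_register(dev_id, 0, NULL);
    }
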

Revision tags: v21.11-rc3, v21.11-rc2

# c7ebd65c | 08-Nov-2021 | Elena Agostini <eagostini@nvidia.com>

gpudev: add communication list

In a heterogeneous computing system, processing is not only in the CPU. Some tasks can be delegated to devices working in parallel. When mixing network activity with task processing, the CPU may need to communicate with the device in order to synchronize operations.

An example could be a receive-and-process application where the CPU is responsible for receiving packets in multiple mbufs and the GPU is responsible for processing the content of those packets.

The purpose of this list is to provide a buffer in CPU memory visible from the GPU that can be treated as a circular buffer, letting the CPU provide fundamental info about received packets to the GPU.

A possible use-case is described below.

CPU:
- Trigger some task on the GPU
- In a loop:
    - receive a number of packets
    - provide packets info to the GPU

GPU:
- Do some pre-processing
- Wait to receive a new set of packets to be processed

Layout of a communication list would be:

     --------
    |   0    | => pkt_list
    | status |
    | #pkts  |
     --------
    |   1    | => pkt_list
    | status |
    | #pkts  |
     --------
    |   2    | => pkt_list
    | status |
    | #pkts  |
     --------
    |  ....  | => pkt_list
     --------

Signed-off-by: Elena Agostini <eagostini@nvidia.com>
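
A CPU-side sketch of the receive-and-process loop described above; the rte_gpu_comm_* function names are assumed from this series and error handling is trimmed:

    #include <rte_gpudev.h>
    #include <rte_mbuf.h>

    #define NB_ITEMS 64 /* illustrative circular-buffer length */

    static void
    comm_list_example(int16_t dev_id, struct rte_mbuf **pkts, uint32_t nb_pkts)
    {
        struct rte_gpu_comm_list *list;
        uint32_t idx = 0;

        list = rte_gpu_comm_create_list(dev_id, NB_ITEMS);
        if (list == NULL)
            return;

        /* Publish one burst of received packets in the next slot. */
        rte_gpu_comm_populate_list_pkts(&list[idx], pkts, nb_pkts);

        /* ... later, once the GPU has marked the item as processed ... */
        rte_gpu_comm_cleanup_list(&list[idx]);

        rte_gpu_comm_destroy_list(list, NB_ITEMS);
    }
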
# f56160a2 | 08-Nov-2021 | Elena Agostini <eagostini@nvidia.com>

gpudev: add communication flag

In a heterogeneous computing system, processing is not only in the CPU. Some tasks can be delegated to devices working in parallel. When mixing network activity with task processing, the CPU may need to communicate with the device in order to synchronize operations.

The purpose of this flag is to allow the CPU and the GPU to exchange ACKs. A possible use-case is described below.

CPU:
- Trigger some task on the GPU
- Prepare some data
- Signal to the GPU that the data is ready by updating the communication flag

GPU:
- Do some pre-processing
- Wait for more data from the CPU, polling on the communication flag
- Consume the data prepared by the CPU

Signed-off-by: Elena Agostini <eagostini@nvidia.com>
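
A CPU-side sketch of the ACK exchange described above; the rte_gpu_comm_flag names are assumed from this series:

    #include <rte_gpudev.h>

    static int
    comm_flag_example(int16_t dev_id)
    {
        struct rte_gpu_comm_flag flag;

        if (rte_gpu_comm_create_flag(dev_id, &flag, RTE_GPU_COMM_FLAG_CPU) < 0)
            return -1;

        /* ... prepare some data ... */

        /* Signal "data ready" to the polling GPU workload. */
        rte_gpu_comm_set_flag(&flag, 1);

        return rte_gpu_comm_destroy_flag(&flag);
    }
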
# 2d61b429 | 08-Nov-2021 | Elena Agostini <eagostini@nvidia.com>

gpudev: add memory barrier

Add a function for the application to ensure the coherency of the writes executed by another device into the GPU memory.

Signed-off-by: Elena Agostini <eagostini@nvidia.com>
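
A minimal usage sketch of the barrier function described (placement illustrative):

    #include <rte_gpudev.h>

    static void
    wmb_example(int16_t dev_id)
    {
        /* Ensure writes done by another device into GPU memory are visible
         * before they are consumed. */
        rte_gpu_wmb(dev_id);
    }
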
# e818c4e2 | 08-Nov-2021 | Elena Agostini <eagostini@nvidia.com>

gpudev: add memory API

In a heterogeneous computing system, processing is not only in the CPU. Some tasks can be delegated to devices working in parallel. Such workload distribution can be achieved by sharing some memory.

As a first step, the features are focused on memory management. A function allows allocating memory inside the device, or in the main (CPU) memory while making it visible to the device. This memory may be used to save packets or for synchronization data.

The next step should focus on GPU processing task control.

Signed-off-by: Elena Agostini <eagostini@nvidia.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
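
A sketch of the two allocation paths described: memory allocated inside the device, and CPU memory registered so the device can see it. Function signatures are assumed from the final rte_gpudev API; error handling is simplified:

    #include <rte_gpudev.h>
    #include <rte_malloc.h>

    static int
    mem_api_example(int16_t dev_id, size_t size)
    {
        void *gpu_mem, *cpu_mem;
        int ret = -1;

        gpu_mem = rte_gpu_mem_alloc(dev_id, size, 0); /* device memory */
        cpu_mem = rte_malloc(NULL, size, 0);          /* main (CPU) memory */
        if (gpu_mem == NULL || cpu_mem == NULL)
            goto out;

        /* Make the CPU buffer visible to the device. */
        if (rte_gpu_mem_register(dev_id, size, cpu_mem) < 0)
            goto out;

        /* ... use the areas for packets or synchronization data ... */

        rte_gpu_mem_unregister(dev_id, cpu_mem);
        ret = 0;
    out:
        rte_gpu_mem_free(dev_id, gpu_mem); /* NULL is an accepted no-op */
        rte_free(cpu_mem);
        return ret;
    }
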
# a9af048a | 08-Nov-2021 | Thomas Monjalon <thomas@monjalon.net>

gpudev: support multi-process

The device data shared between processes is moved into a struct allocated in shared memory (a new memzone for all GPUs). The main struct rte_gpu references the shared memory via the pointer mpshared.

The API function rte_gpu_attach() is added to attach a device from the secondary process. The function rte_gpu_allocate() can be used only by the primary process.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
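
A sketch of the primary/secondary split described, from a driver's point of view; the gpudev_driver.h header name and the dedicated probe helper are assumptions:

    #include <rte_eal.h>
    #include <gpudev_driver.h>

    static struct rte_gpu *
    probe_sketch(const char *name)
    {
        /* The primary process allocates the shared device data; a secondary
         * process attaches to the existing memzone. */
        if (rte_eal_process_type() == RTE_PROC_PRIMARY)
            return rte_gpu_allocate(name);
        return rte_gpu_attach(name);
    }
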
# 82e5f6b6 | 08-Nov-2021 | Thomas Monjalon <thomas@monjalon.net>

gpudev: add child device representing a device context

The computing device may operate in some isolated contexts. Memory and processing are isolated in a silo represented by a child device. The context is provided as an opaque value by the caller of rte_gpu_add_child().

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
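
A minimal call sketch for the child-device concept described; the child name and the exact signature are assumptions:

    #include <rte_gpudev.h>

    static int16_t
    add_child_sketch(int16_t parent_id, uint64_t ctx)
    {
        /* ctx is the opaque context handle owned by the caller. */
        return rte_gpu_add_child("gpu_child0", parent_id, ctx);
    }
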
# 18cb0756 | 08-Nov-2021 | Thomas Monjalon <thomas@monjalon.net>

gpudev: add event notification

Callback functions may be registered for a device event. Callback management is per-process and not thread-safe.

The events RTE_GPU_EVENT_NEW and RTE_GPU_EVENT_DEL are notified respectively after creation and before removal of a device, as part of the library functions. Some future events may be emitted from drivers.

Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
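
A sketch of registering a callback for the RTE_GPU_EVENT_NEW event; the callback prototype is assumed from the rte_gpudev API:

    #include <stdio.h>
    #include <rte_common.h>
    #include <rte_gpudev.h>

    static void
    on_gpu_new(int16_t dev_id, enum rte_gpu_event event, void *user_data)
    {
        RTE_SET_USED(event);
        RTE_SET_USED(user_data);
        printf("new GPU device %d\n", dev_id);
    }

    static int
    callback_example(int16_t dev_id)
    {
        /* Remember: callback management is per-process and not thread-safe. */
        return rte_gpu_callback_register(dev_id, RTE_GPU_EVENT_NEW,
                on_gpu_new, NULL);
    }
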
# 8b8036a6 | 08-Nov-2021 | Elena Agostini <eagostini@nvidia.com>

gpudev: introduce GPU device class library

In a heterogeneous computing system, processing is not only in the CPU. Some tasks can be delegated to devices working in parallel.

The new library gpudev is for dealing with GPGPU computing devices from a DPDK application running on the CPU.

The infrastructure is prepared to welcome drivers in drivers/gpu/.

Signed-off-by: Elena Agostini <eagostini@nvidia.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
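
A small application-side sketch of the new device class: count and enumerate GPU devices. The iteration macro and info getter are assumed from the rte_gpudev API:

    #include <stdio.h>
    #include <rte_gpudev.h>

    static void
    list_gpus(void)
    {
        struct rte_gpu_info info;
        int16_t dev_id;

        printf("%u GPU device(s) available\n", rte_gpu_count_avail());
        RTE_GPU_FOREACH(dev_id) {
            if (rte_gpu_info_get(dev_id, &info) == 0)
                printf("  %d: %s\n", dev_id, info.name);
        }
    }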