#
e12a0166 | 14-May-2024 | Tyler Retzlaff <roretzla@linux.microsoft.com>

drivers: use stdatomic API

Replace the use of gcc builtin __atomic_xxx intrinsics with the corresponding rte_atomic_xxx optional rte stdatomic API.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Stephen Hemminger <stephen@networkplumber.org>
Reviewed-by: Mattias Rönnblom <mattias.ronnblom@ericsson.com>
|
#
f805f70b | 24-Jan-2024 | Ferruh Yigit <ferruh.yigit@amd.com>

common/mlx5: fix calloc parameters

gcc [1] generates a warning [2] about calloc usage because the calloc parameter order is wrong; fix it by swapping the parameters.

[1] gcc (GCC) 14.0.1 20240124 (experimental)

[2] Compiling C object .../common_mlx5_mlx5_common_mr.c.o
    .../mlx5/mlx5_common_mr.c: In function ‘mlx5_mempool_get_chunks’:
    .../common/mlx5/mlx5_common_mr.c:1384:29: warning: ‘calloc’ sizes specified with ‘sizeof’ in the earlier argument and not in the later argument [-Wcalloc-transposed-args]
     1384 |         *out = calloc(sizeof(**out), n);
          |                       ^

Fixes: 7297d2cdecce ("common/mlx5: fix external memory pool registration")
Cc: stable@dpdk.org

Signed-off-by: Ferruh Yigit <ferruh.yigit@amd.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Dariusz Sosnowski <dsosnowski@nvidia.com>
|
#
bd6f2207 | 20-Jun-2023 | Suanming Mou <suanmingm@nvidia.com>

common/mlx5: export memory region lookup by address

In case the user provides an address without a mempool, the function that looks up the address without a mempool needs to be exported.

Signed-off-by: Suanming Mou <suanmingm@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
|
#
ed090599 | 20-Mar-2023 | Tyler Retzlaff <roretzla@linux.microsoft.com>

rework atomic intrinsics fetch operations

Use __atomic_fetch_{add,and,or,sub,xor} instead of __atomic_{add,and,or,sub,xor}_fetch, adding the necessary code to allow consumption of the resulting value.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Reviewed-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Acked-by: Nithin Dabilpuram <ndabilpuram@marvell.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
|
#
f9eb7a4b | 02-Mar-2023 | Tyler Retzlaff <roretzla@linux.microsoft.com>

use atomic intrinsics closer to C11

Use __atomic_fetch_{add,and,or,sub,xor} instead of __atomic_{add,and,or,sub,xor}_fetch when we have no interest in the result of the operation.

This change removes unnecessary code that produced the result of the atomic operation when this result was not used.

It also brings us closer to alignment with the atomics available in the C11 standard and will reduce review effort when they are integrated.

Signed-off-by: Tyler Retzlaff <roretzla@linux.microsoft.com>
Acked-by: Morten Brørup <mb@smartsharesystems.com>
Acked-by: Ruifeng Wang <ruifeng.wang@arm.com>
Acked-by: Chengwen Feng <fengchengwen@huawei.com>
Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
Acked-by: Bruce Richardson <bruce.richardson@intel.com>
Acked-by: Pavan Nikhilesh <pbhagavatula@marvell.com>
Reviewed-by: David Marchand <david.marchand@redhat.com>
|
#
aeca11f8 | 03-Nov-2022 | Gregory Etelson <getelson@nvidia.com>

common/mlx5: fix shared mempool subscription

MLX5 PMD counted each mempool subscribe invocation. The PMD expected that the mempool subscription would be deleted after the mempool counter dropped to 0. However, the current PMD design unsubscribes mempool callbacks only once. As a result, the PMD destroyed mlx5_common_device but kept the shared RX subscription callback. EAL tried to activate that callback and crashed.

The patch removes the mempool subscriptions counter. The PMD registers a mempool subscription only once. An attempt to register an existing subscription returns EEXIST. Also, the PMD expects the subscription to be removed when mempool unsubscribe is activated.

Fixes: 8ad97e4b3215 ("common/mlx5: fix multi-process mempool registration")
Cc: stable@dpdk.org

Signed-off-by: Gregory Etelson <getelson@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
|
#
8ad97e4b | 08-Aug-2022 | Dmitry Kozlyuk <dkozlyuk@nvidia.com>

common/mlx5: fix multi-process mempool registration

The `mp_cb_registered` flag shared between all processes was used to ensure that for any IB device (MLX5 common device) the mempool event callback was registered only once, and mempools that had been existing before the device start were traversed only once to register them. Since mempool callback registrations have become process-private, callback registration must be done by every process. The flag can no longer reflect the state for any single process. Replace it with a registration counter to track when no more callbacks are registered for the device in any process. It is sufficient to only register pre-existing mempools in the primary process because it is the one that starts the device.

Fixes: 690b2a88c2f7 ("common/mlx5: add mempool registration facilities")
Cc: stable@dpdk.org

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
|
#
e96d3d02 | 29-Jun-2022 | Dmitry Kozlyuk <dkozlyuk@nvidia.com>

common/mlx5: fix non-expandable global MR cache

The number of memory regions (MRs) that MLX5 PMD can use was limited to 512 per IB device by the size of the global MR cache, which was fixed at compile time. The cache allows searching an MR LKey by address efficiently; therefore it is the last place searched on the data path (skipping the global MR database, which would be slow). If the application logic caused the PMD to create more than 512 MRs, which can be the case with external memory, those MRs would never be found on the data path and would later cause a HW failure.

The cache size was fixed because at the time of overflow the EAL memory hotplug lock may be held, prohibiting allocation of a larger cache (it must reside in DPDK memory for multi-process support). This patch adds logic to release the necessary locks, extend the cache, and repeat the attempt to insert new entries.

The `mlx5_mr_btree` structure had an `overflow` field that was set when a cache (not only the global one) could not accept new entries. However, it was only checked for the global cache, because caches of upper layers were dynamically expandable. With the global cache size limitation removed, this field is no longer needed. Cache size was previously limited by 16-bit indices. Use the space in the structure previously occupied by the `overflow` field to extend the indices to 32 bits. With this patch, only the HW and RAM limit the number of MRs.

Fixes: 974f1e7ef146 ("net/mlx5: add new memory region support")
Cc: stable@dpdk.org

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
|
#
81132518 | 31-Mar-2022 | Dmitry Kozlyuk <dkozlyuk@nvidia.com>

common/mlx5: fix memory region range calculation

MR end for a mempool chunk may be calculated incorrectly. For example, for a chunk with addr=1.5M and len=1M with 2M page size the range would be [0, 2M), while the proper result is [0, 4M). Fix the calculation.

Fixes: 690b2a88c2f7 ("common/mlx5: add mempool registration facilities")
Cc: stable@dpdk.org

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
|
#
06c047b6 | 09-Feb-2022 | Stephen Hemminger <stephen@networkplumber.org>

remove unnecessary null checks

Functions like free, rte_free, and rte_mempool_free already handle NULL pointers, so the checks here are not necessary.

Remove redundant NULL pointer checks before free functions, found by nullfree.cocci.

Signed-off-by: Stephen Hemminger <stephen@networkplumber.org>
|
#
2eb92b0f | 14-Jan-2022 | Dmitry Kozlyuk <dkozlyuk@nvidia.com>

common/mlx5: fix MR lookup for non-contiguous mempool

Memory region (MR) lookup by address inside mempool MRs was not accounting for the upper bound of an MR. For mempools covered by multiple MRs this could return a wrong MR LKey, typically resulting in an unrecoverable TxQ failure:

    mlx5_net: Cannot change Tx QP state to INIT Invalid argument

Corresponding message from /var/log/dpdk_mlx5_port_X_txq_Y_index_Z*:

    Unexpected CQE error syndrome 0x04 CQN = 128 SQN = 4848 wqe_counter = 0 wq_ci = 9 cq_ci = 122

This is likely to happen with --legacy-mem and IOVA-as-PA, because EAL intentionally maps pages at non-adjacent PA to non-adjacent VA in this mode, and MLX5 PMD works with VA.

Fixes: 690b2a88c2f7 ("common/mlx5: add mempool registration facilities")
Cc: stable@dpdk.org

Reported-by: Wang Yunjian <wangyunjian@huawei.com>
Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
|
#
7be78d02 | 29-Nov-2021 | Josh Soref <jsoref@gmail.com>

fix spelling in comments and strings

The tool comes from https://github.com/jsoref

Signed-off-by: Josh Soref <jsoref@gmail.com>
Signed-off-by: Thomas Monjalon <thomas@monjalon.net>
|
#
63625c5d | 25-Nov-2021 | Dmitry Kozlyuk <dkozlyuk@nvidia.com>

common/mlx5: fix memory region lookup on slow path

Memory region (MR) was being looked up incorrectly for the data address of an externally-attached mbuf. A search was attempted in the mempool of the mbuf, while the mbuf data address does not belong to this mempool in the case of an externally-attached mbuf.

Only attempt the search:
1) for mbufs that are not externally attached;
2) for mbufs coming from the MPRQ mempool;
3) for externally-attached mbufs from mempools with pinned external buffers.

Fixes: 08ac03580ef2 ("common/mlx5: fix mempool registration")

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Reviewed-by: Matan Azrad <matan@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
|
#
8947eebc | 23-Nov-2021 | Bing Zhao <bingz@nvidia.com>

common/mlx5: fix shared memory region ranges allocation

Memory regions (MRs) were allocated in one chunk of memory with a mempool registration object. However, MRs can be reused among different mempool registrations.

When the registration that originally allocated the MRs was destroyed, dangling pointers to the MRs could be left in other registrations sharing these MRs.

Splitting the memory allocation of the registration structure and the MRs in this commit solves this dangling pointer issue.

Fixes: 690b2a88c2f7 ("common/mlx5: add mempool registration facilities")

Signed-off-by: Bing Zhao <bingz@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Reviewed-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
|
#
08ac0358 | 19-Nov-2021 | Dmitry Kozlyuk <dkozlyuk@nvidia.com>

common/mlx5: fix mempool registration

Mempool registration was not correctly processing mempools with the RTE_PKTMBUF_F_PINNED_EXT_BUF flag set ("pinned mempools" for short), because it is not known at registration time whether the mempool is a pktmbuf one, and its elements may not yet be initialized to analyze them. Attempts had been made to recognize such pools, but there was no robust solution; only the owner of a mempool (the application or a device) knows its type. This patch extends common/mlx5 registration code to accept a hint that the mempool is a pinned one and uses this capability from the net/mlx5 driver.

1. Remove all code assuming pktmbuf pool type or trying to recognize the type of a pool.
2. Register pinned mempools used for Rx and their external memory on port start. Populate the MR cache with all their MRs.
3. Change Tx slow path logic as follows:
   3.1. Search the mempool database for a memory region (MR) by the mbuf pool and its buffer address.
   3.2. If no MR for the address is found for the mempool, and the mempool contains only pinned external buffers, perform the mempool registration of the mempool and its external pinned memory.
   3.3. Fall back to using page-based MRs in other cases (for example, a buffer with externally attached memory, but not from a pinned mempool).

Fixes: 690b2a88c2f7 ("common/mlx5: add mempool registration facilities")
Fixes: fec28ca0e3a9 ("net/mlx5: support mempool registration")

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Reviewed-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
|
#
e4c402af | 16-Nov-2021 | Dmitry Kozlyuk <dkozlyuk@nvidia.com>

common/mlx5: fix MPRQ mempool registration

Mempool registration code had a wrong assumption that it is always dealing with packet mempools and always called rte_pktmbuf_priv_flags(), which returned a random value for other types of mempools. In particular, it could consider MPRQ mempools as having externally pinned buffers, which is wrong. Packet mempools cannot be reliably recognized, but it is sufficient to check that the mempool is not a packet one, so it cannot have externally pinned buffers. Compare mempool private data size to that of packet mempools to check.

Fixes: 690b2a88c2f7 ("common/mlx5: add mempool registration facilities")
Fixes: fec28ca0e3a9 ("net/mlx5: support mempool registration")

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
|
#
71304b5c | 16-Nov-2021 | Michael Baum <michaelba@nvidia.com>

common/mlx5: fix redundant field in MR control structure

Inside the MR control structure there is a pointer to the common device. This pointer enables access to the global cache as well as hardware objects that may be required in case a new MR needs to be created.

The purpose of adding this pointer to the MR control structure was to avoid passing it as a parameter to all the functions that search for an MR in the caches. However, adding it to this structure enlarged the Rx and Tx data-path structures; all the fields that followed it were slightly moved away, which caused a reduction in performance.

This patch removes the pointer from the structure. It can be accessed through the existing "dev_gen_ptr" field using the "container_of" operator.

Fixes: 334ed198ab4d ("common/mlx5: remove redundant parameter in MR search")

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
|
#
7297d2cd | 09-Nov-2021 | Dmitry Kozlyuk <dkozlyuk@nvidia.com>

common/mlx5: fix external memory pool registration

Registration of packet mempools with RTE_PKTMBUF_POOL_PINNED_EXT_MEM was performed incorrectly: after population, chunks of such mempools only contain memory for rte_mbuf structures, while pointers to actual external memory are not yet filled. MR LKeys could not be obtained for external memory addresses of such mempools. Rx datapath assumes all used mempools are registered and does not fall back to dynamic MR creation in such a case, so no packets could be received.

Skip registration of extmem pools on population because it is useless. If used for Rx, they are registered at port start. During registration, recognize such pools, inspect their mbufs, and recover the pages they reside in.

While MRs for these pages may already be created by rte_dev_dma_map(), they are not reused to avoid synchronization on the Rx datapath in case these MRs are changed in the database.

Fixes: 690b2a88c2f7 ("common/mlx5: add mempool registration facilities")

Signed-off-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Reviewed-by: Matan Azrad <matan@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
|
#
20489176 | 03-Nov-2021 | Michael Baum <michaelba@nvidia.com>

common/mlx5: make multi-process MR management port-agnostic

In the multi-process mechanism, there are things that the secondary process does not perform itself but asks the primary process to perform for it. A special API for communication between the processes receives the parameters necessary for the requested action, as well as a special structure called mp_id that contains the port number through which the primary process finds the relevant ETH device.

One of the operations performed through this mechanism is the creation of a memory region, where the secondary process sends the virtual address as a parameter along with the mp_id structure carrying the port number. However, now that memory area management is shared between the drivers, neither the port number nor the ETH device is relevant to them any longer, so it seems unnecessary to keep communicating between the processes through the mp_id variable.

This patch removes the use of the above structure for all MR management and adds to the operation-specific parameters a pointer to the common device, which contains everything needed to create/register an MR.

Fixes: 9f1d636f3ef08 ("common/mlx5: share MR management")

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Reviewed-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
|
#
334ed198 | 03-Nov-2021 | Michael Baum <michaelba@nvidia.com>

common/mlx5: remove redundant parameter in MR search

Memory region management has recently been shared between drivers, including the search for caches in the data plane. The initial search in the local linear cache of the queue usually yields a result, and one should not continue searching in the next-level caches.

The function that searches in the local cache receives a pointer to a device as a parameter, which is not necessary for its own operation but only for subsequent searches (which, as mentioned, usually do not happen). Passing the device to the function and maintaining it takes some time and has some impact on performance.

Add the pointer to the device as a field of the mr_ctrl structure. The field will be updated on the control path and will be used only when needed in the search.

Fixes: fc59a1ec556b ("common/mlx5: share MR mempool registration")

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Reviewed-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
|
#
6a4e4385 | 03-Nov-2021 | Michael Baum <michaelba@nvidia.com>

common/mlx5: fix MR search inlining

Memory region management has recently been shared between drivers, including the search for caches in the data plane. The initial search in the local linear cache of the queue usually yields a result, and one should not continue searching in the next-layer caches.

Prior to cache sharing, the local linear cache lookup function was defined with "static inline" attributes; these were missed in the routine commoditizing step, which caused performance degradation.

Set the common function as static inline.

Fixes: fc59a1ec556b ("common/mlx5: share MR mempool registration")

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Reviewed-by: Viacheslav Ovsiienko <viacheslavo@nvidia.com>
Reviewed-by: Dmitry Kozlyuk <dkozlyuk@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
|
#
fc59a1ec | 19-Oct-2021 | Michael Baum <michaelba@nvidia.com>

common/mlx5: share MR mempool registration

Expand the use of mempool registration to MR management for other drivers.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
|
#
a5d06c90 | 19-Oct-2021 | Michael Baum <michaelba@nvidia.com>

common/mlx5: support device DMA map and unmap

Since MR management has moved to the common area, there is no longer a need for per-driver DMA map and unmap functions. This patch shares those functions. For most drivers, these operations are supported for the first time.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
|
#
9f1d636f | 19-Oct-2021 | Michael Baum <michaelba@nvidia.com>

common/mlx5: share MR management

Add a global shared MR cache as a field of the common device structure. Move MR management to use this global cache for all drivers.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
|
#
fb690f71 | 19-Oct-2021 | Michael Baum <michaelba@nvidia.com>

common/mlx5: share MR top-half search function

Add a function to search in the local linear cache and use it in the drivers instead of their own functions.

Signed-off-by: Michael Baum <michaelba@nvidia.com>
Acked-by: Matan Azrad <matan@nvidia.com>
|