# 8e148e49 | 19-Mar-2019 | Tiwei Bie <tiwei.bie@intel.com>

net/virtio: optimize flags update for packed ring
Cache the AVAIL, USED and WRITE bits to avoid recalculating them as much as possible. Note that the WRITE bit isn't cached for the control queue.
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com> Reviewed-by: Jens Freimann <jfreimann@redhat.com> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
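
A minimal sketch of the caching idea (names such as cached_flags and avail_wrap_counter are hypothetical, not the exact driver code). In the packed ring, the AVAIL (bit 7) and USED (bit 15) flag values depend only on the wrap counter, so they can be recomputed once per wrap instead of per descriptor:

    /* Recompute only when the wrap counter toggles (sketch). */
    #define DESC_F_AVAIL(w) ((uint16_t)(w) << 7)
    #define DESC_F_USED(w)  ((uint16_t)(w) << 15)

    static inline void
    refresh_cached_flags(struct packed_vq *vq)
    {
        vq->cached_flags = DESC_F_AVAIL(vq->avail_wrap_counter) |
                           DESC_F_USED(!vq->avail_wrap_counter);
    }

The hot path then only ORs in VRING_DESC_F_WRITE for Rx descriptors; the control queue keeps computing the WRITE bit per descriptor, as noted above.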
# 4905ed3a | 19-Feb-2019 | Tiwei Bie <tiwei.bie@intel.com>

net/virtio: optimize Tx enqueue for packed ring
This patch introduces an optimized enqueue function in the packed ring for the case where the virtio net header can be prepended to an unchained mbuf.
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
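
A sketch of the fast-path condition this optimization targets (names illustrative; hdr_size stands for the negotiated virtio-net header size). When it holds, the header is written into the mbuf headroom and the packet posts as a single descriptor:

    #include <rte_mbuf.h>

    static inline int
    can_prepend_hdr(const struct rte_mbuf *m, size_t hdr_size)
    {
        /* single segment with enough headroom for the net header */
        return m->nb_segs == 1 &&
               rte_pktmbuf_headroom(m) >= hdr_size;
    }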
# b92f1429 | 19-Feb-2019 | Tiwei Bie <tiwei.bie@intel.com>

net/virtio: introduce helper for clearing net header
This patch introduces a helper for clearing the virtio net header to avoid code duplication. A macro is used, as it shows slightly better performance.
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
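
A sketch of what such a macro can look like (illustrative, not necessarily the exact helper): clearing field by field, and only when a field is non-zero, avoids needless stores:

    #define ASSIGN_UNLESS_EQUAL(var, val) do {          \
        if ((var) != (val))                             \
            (var) = (val);                              \
    } while (0)

    #define CLEAR_NET_HDR(hdr) do {                     \
        ASSIGN_UNLESS_EQUAL((hdr)->csum_start, 0);      \
        ASSIGN_UNLESS_EQUAL((hdr)->csum_offset, 0);     \
        ASSIGN_UNLESS_EQUAL((hdr)->flags, 0);           \
        ASSIGN_UNLESS_EQUAL((hdr)->gso_type, 0);        \
        ASSIGN_UNLESS_EQUAL((hdr)->gso_size, 0);        \
        ASSIGN_UNLESS_EQUAL((hdr)->hdr_len, 0);         \
    } while (0)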
# 5c75a8ef | 19-Feb-2019 | Tiwei Bie <tiwei.bie@intel.com>

net/virtio: fix in-order Tx path for packed ring
When the IN_ORDER feature is negotiated, the device may write out just a single used descriptor for a batch of buffers:
""" Some devices always use descriptors in the same order in which they have been made available. These devices can offer the VIRTIO_F_IN_ORDER feature. If negotiated, this knowledge allows devices to notify the use of a batch of buffers to the driver by only writing out a single used descriptor with the Buffer ID corresponding to the last descriptor in the batch.
The device then skips forward in the ring according to the size of the batch. The driver needs to look up the used Buffer ID and calculate the batch size to be able to advance to where the next used descriptor will be written by the device. """
But the Tx path of the packed ring can't handle this. With this patch, when IN_ORDER is negotiated, the driver manages the IDs linearly, looks up the used buffer ID, and advances to the next used descriptor that will be written by the device.
Fixes: 892dc798fa9c ("net/virtio: implement Tx path for packed queues") Cc: stable@dpdk.org
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
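
A sketch of the batch-size lookup the fix describes (names hypothetical): since the driver hands out buffer IDs linearly, the ID in a used descriptor identifies the last buffer of the batch, and the batch size follows by modular arithmetic:

    static uint16_t
    inorder_batch_size(uint16_t last_used_id, uint16_t first_inflight_id,
                       uint16_t ring_size)
    {
        /* +1: last_used_id names the final buffer of the batch */
        return (uint16_t)(last_used_id - first_inflight_id + ring_size)
               % ring_size + 1;
    }

The driver then releases that many buffers and skips forward in the ring by the same amount.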
# e788032a | 19-Feb-2019 | Tiwei Bie <tiwei.bie@intel.com>

net/virtio: fix in-order Tx path for split ring
When the IN_ORDER feature is negotiated, the device may write out just a single used ring entry for a batch of buffers:
""" Some devices always use descriptors in the same order in which they have been made available. These devices can offer the VIRTIO_F_IN_ORDER feature. If negotiated, this knowledge allows devices to notify the use of a batch of buffers to the driver by only writing out a single used ring entry with the id corresponding to the head entry of the descriptor chain describing the last buffer in the batch.
The device then skips forward in the ring according to the size of the batch. Accordingly, it increments the used idx by the size of the batch.
The driver needs to look up the used id and calculate the batch size to be able to advance to where the next used ring entry will be written by the device. """
Currently, the in-order Tx path in the split ring can't handle this. With this patch, the driver allocates desc_extra[] based on the index in the avail/used ring instead of the index in the descriptor table, and can simply rely on the used->idx written by the device to reclaim the descriptors and Tx buffers.
Fixes: e5f456a98d3c ("net/virtio: support in-order Rx and Tx") Cc: stable@dpdk.org
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
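
A sketch of the reclaim loop this enables (field names approximate the virtqueue structure of this era, e.g. vq_descx, vq_used_cons_idx): because desc_extra[] is indexed by ring position, a plain used->idx delta is enough to free completed buffers:

    uint16_t used_idx = vq->used->idx;                    /* written by device */
    uint16_t nb_used  = used_idx - vq->vq_used_cons_idx;  /* wraps mod 2^16 */

    while (nb_used--) {
        struct vq_desc_extra *dxp =
            &vq->vq_descx[vq->vq_used_cons_idx & (vq->vq_nentries - 1)];
        rte_pktmbuf_free(dxp->cookie);
        dxp->cookie = NULL;
        vq->vq_used_cons_idx++;
    }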
# 91397bdc | 19-Feb-2019 | Tiwei Bie <tiwei.bie@intel.com>

net/virtio: fix Tx desc cleanup for packed ring
We should try to clean up at least 'need' descriptors.
Fixes: 892dc798fa9c ("net/virtio: implement Tx path for packed queues") Cc: stable@dpdk.org
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
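
A sketch of the intended loop (helper names hypothetical): keep reclaiming used descriptors until the shortfall is covered, instead of a single fixed-size cleanup pass:

    while (vq->vq_free_cnt < need) {
        if (!vq_has_used_descs(vq))      /* device hasn't completed enough */
            break;                       /* caller reports ENOSPC */
        xmit_cleanup(vq, need - vq->vq_free_cnt);
    }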
# 4b058fdd | 24-Jan-2019 | Ilya Maximets <i.maximets@samsung.com>

net/virtio: add missing read barrier for packed dequeue
A read barrier is required between reading the flags (desc_is_used) and the content of the descriptor to ensure ordering. Otherwise, a speculative read of desc.id could be reordered with the read of desc.flags.
Fixes: a76290c8f1cf ("net/virtio: implement Rx path for packed queues") Cc: stable@dpdk.org
Signed-off-by: Ilya Maximets <i.maximets@samsung.com> Reviewed-by: Jens Freimann <jfreimann@redhat.com> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
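
The pattern of the fix, sketched (helper names as in the driver of this era; index handling elided):

    if (desc_is_used(&descs[used_idx], vq)) {
        rte_smp_rmb();       /* order the flags check before id/len reads */
        id  = descs[used_idx].id;
        len = descs[used_idx].len;
    }

Without the barrier, the CPU may speculatively load desc.id before the flags confirm that the device has finished writing the descriptor.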
# 23d25f1a | 24-Jan-2019 | Ilya Maximets <i.maximets@samsung.com>

net/virtio: add barriers for extra descriptors on Rx split
There should be a read barrier between checking VIRTQUEUE_NUSED (reading used->idx) and reading these descriptors. This is done for the first checks at the beginning of these functions, but was missed when checking for extra required descriptors.
Fixes: e5f456a98d3c ("net/virtio: support in-order Rx and Tx") Fixes: 13ce5e7eb94f ("virtio: mergeable buffers") Cc: stable@dpdk.org
Signed-off-by: Ilya Maximets <i.maximets@samsung.com> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
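
Sketched, the missing piece was a barrier on the second availability check (macro and helper names as in the driver of this era):

    if (likely(VIRTQUEUE_NUSED(vq) >= rcv_cnt)) {
        virtio_rmb(hw->weak_barriers);  /* order idx read before entries */
        /* ... read the extra used-ring entries ... */
    }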
# 3eb50f0c | 24-Jan-2019 | Ilya Maximets <i.maximets@samsung.com>

net/virtio: fix read barriers on packed Tx cleanup
A read barrier must be placed between reading the descriptor flags and the descriptor id. Otherwise, in case of reordering, we could read the wrong descriptor id.
For the reference, similar barrier for split rings is the read barrier between VIRTQUEUE_NUSED (reading the used->idx) and the call to the virtio_xmit_cleanup().
Additionally, the double update of 'used_idx' is removed; it's enough to set it at the end of the loop.
Fixes: 892dc798fa9c ("net/virtio: implement Tx path for packed queues") Cc: stable@dpdk.org
Signed-off-by: Ilya Maximets <i.maximets@samsung.com> Reviewed-by: Jens Freimann <jfreimann@redhat.com> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
# 18f42d78 | 22-Jan-2019 | Tiwei Bie <tiwei.bie@intel.com>

net/virtio: use virtio barrier in packed ring
Always use the virtio variants which support the platform memory ordering.
Fixes: 9230ab8d7913 ("net/virtio: support platform memory ordering")
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com> Acked-by: Ilya Maximets <i.maximets@samsung.com> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
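
The variants referred to are close to this sketch: they fall back to a DMA (I/O coherency) barrier when the device does not give the strong ordering of a software backend:

    static inline void
    virtio_rmb(uint8_t weak_barriers)
    {
        if (weak_barriers)
            rte_smp_rmb();   /* software (vhost) backend */
        else
            rte_cio_rmb();   /* real hardware device, e.g. vDPA */
    }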
# 9230ab8d | 09-Jan-2019 | Ilya Maximets <i.maximets@samsung.com>

net/virtio: support platform memory ordering
VIRTIO_F_ORDER_PLATFORM is required to use proper memory barriers in case of HW vhost implementations like vDPA.
DMA barriers (rte_cio_*) are sufficient for that purpose.
Previously known as VIRTIO_F_IO_BARRIER.
Signed-off-by: Ilya Maximets <i.maximets@samsung.com> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com> Acked-by: Michael S. Tsirkin <mst@redhat.com>
# bcac5aa2 | 20-Dec-2018 | Maxime Coquelin <maxime.coquelin@redhat.com>

net/virtio: improve batching in mergeable path
This patch improves both descriptor dequeue and refill by using the same batching strategy as in the in-order path.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com> Tested-by: Jens Freimann <jfreimann@redhat.com> Reviewed-by: Jens Freimann <jfreimann@redhat.com> Reviewed-by: Gavin Hu <gavin.hu@arm.com>
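
A rough sketch of the strategy (constants and helper names illustrative): used entries are consumed and the avail ring refilled in fixed-size bursts rather than one at a time:

    #define BURST 32
    uint16_t nb_used = VIRTQUEUE_NUSED(vq);

    while (nb_used >= BURST) {
        dequeue_burst_rx(vq, rx_pkts, BURST);  /* batch of used entries */
        nb_used -= BURST;
    }
    if (vq->vq_free_cnt >= BURST)
        refill_avail_ring(vq, BURST);          /* batch refill, too */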
# efcda136 | 20-Dec-2018 | Maxime Coquelin <maxime.coquelin@redhat.com>

net/virtio: add non-mergeable support to in-order path
This patch adds support for the in-order path when the mergeable buffers feature hasn't been negotiated.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com> Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
# 2d51f1e2 | 20-Dec-2018 | Maxime Coquelin <maxime.coquelin@redhat.com>

net/virtio: inline refill and offload helpers
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com> Reviewed-by: Jens Freimann <jfreimann@redhat.com>
# 517ad3e0 | 20-Dec-2018 | Jens Freimann <jfreimann@redhat.com>

net/virtio: avoid double accounting of bytes
Accounting of bytes was moved to a common function, so at the moment we do it twice. This patch fixes it for sending packets with packed virtqueues.
Signed-off-by: Jens Freimann <jfreimann@redhat.com> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
# a76290c8 | 17-Dec-2018 | Jens Freimann <jfreimann@redhat.com>

net/virtio: implement Rx path for packed queues
Implement the receive part.
Signed-off-by: Jens Freimann <jfreimann@redhat.com> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
# 892dc798 | 17-Dec-2018 | Jens Freimann <jfreimann@redhat.com>

net/virtio: implement Tx path for packed queues
This implements the transmit path for devices with support for packed virtqueues.
Signed-off-by: Jens Freimann <jfreimann@redhat.com> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
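
The core ordering rule when posting a packed Tx descriptor, sketched (names illustrative): everything is filled in before the flags word, which is what hands ownership to the device:

    desc[idx].addr = rte_mbuf_data_iova(m);
    desc[idx].len  = m->data_len;
    desc[idx].id   = id;
    rte_smp_wmb();                       /* make addr/len/id visible first */
    desc[idx].flags = avail_used_flags;  /* ownership handoff to device */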
# 81e5cdf1 | 03-Dec-2018 | Ilya Maximets <i.maximets@samsung.com>

net/virtio: move bytes accounting to common function
There is no need to count 'bytes' separately.
Signed-off-by: Ilya Maximets <i.maximets@samsung.com> Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
# 8855839c | 25-Jul-2018 | Tiwei Bie <tiwei.bie@intel.com>

net/virtio: remove unnecessary Rx error assignments
Remove the unnecessary assignments in Rx functions as they are useless and misleading.
Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
# db8d6790 | 02-Jul-2018 | Maxime Coquelin <maxime.coquelin@redhat.com>

net/virtio: improve offload check performance
Instead of checking multiple Virtio feature bits for every packet, let's do the check once at configure time and store the result in the virtio_hw struct.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com> Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
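
Sketched, the change amounts to evaluating the feature bits once (field and helper names illustrative):

    /* at configure time, e.g. in virtio_dev_configure() */
    hw->has_rx_offload = rx_offload_enabled(hw);

    /* per-packet hot path: one flag test instead of feature-bit checks */
    if (hw->has_rx_offload)
        virtio_rx_offload(rxm, hdr);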
# 57f81896 | 02-Jul-2018 | Maxime Coquelin <maxime.coquelin@redhat.com>

net/virtio: remove simple Tx path
The simple Tx path does not comply with the Virtio specification. Now that the VIRTIO_F_IN_ORDER feature is supported by the Virtio PMD, let's use this optimized path instead.
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com> Reviewed-by: Tiwei Bie <tiwei.bie@intel.com>
# e5f456a9 | 02-Jul-2018 | Marvin Liu <yong.liu@intel.com>

net/virtio: support in-order Rx and Tx
The IN_ORDER Rx function depends on the mergeable feature. Descriptor allocation and free are done in bulk.

Virtio dequeue logic:

    dequeue_burst_rx(burst mbufs)
    for (each mbuf b) {
        if (b need merge) {
            merge remained mbufs
            add merged mbuf to return mbufs list
        } else {
            add mbuf to return mbufs list
        }
    }
    if (last mbuf c need merge) {
        dequeue_burst_rx(required mbufs)
        merge last mbuf c
    }
    refill_avail_ring_bulk()
    update_avail_ring()
    return mbufs list

The IN_ORDER Tx function can support offload features. Packets which match the "can_push" option are handled by the simple xmit function; packets which can't match "can_push" are handled by the original xmit function with the in-order flag.

Virtio enqueue logic:

    xmit_cleanup(used descs)
    for (each xmit mbuf b) {
        if (b can inorder xmit) {
            add mbuf b to inorder burst list
            continue
        } else {
            xmit inorder burst list
            xmit mbuf b by original function
        }
    }
    if (inorder burst list not empty) {
        xmit inorder burst list
    }
    update_avail_ring()
Signed-off-by: Marvin Liu <yong.liu@intel.com> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
# 7f9934d5 | 02-Jul-2018 | Marvin Liu <yong.liu@intel.com>

net/virtio: extract common part for in-order functions
The IN_ORDER virtio-user Tx function supports Tx checksum offloading and TSO, which are also supported by the normal Tx function, so the common part is extracted into a separate function for reuse.
Signed-off-by: Marvin Liu <yong.liu@intel.com> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
# 7097ca1b | 02-Jul-2018 | Marvin Liu <yong.liu@intel.com>

net/virtio: free in-order descriptors before device start
Add a new function for freeing IN_ORDER descriptors. Descriptors are allocated and freed sequentially when the IN_ORDER feature is negotiated, so there is no need to use a chain for freed-descriptor management; an index update is enough.
Signed-off-by: Marvin Liu <yong.liu@intel.com> Reviewed-by: Maxime Coquelin <maxime.coquelin@redhat.com>
# a4996bd8 | 10-May-2018 | Wei Dai <wei.dai@intel.com>

ethdev: new Rx/Tx offloads API
This patch checks whether an input requested offload is valid or not. Any requested offload must be supported in the device capabilities. Any offload is disabled by default if it is not set in the parameter dev_conf->[rt]xmode.offloads to rte_eth_dev_configure() and [rt]x_conf->offloads to rte_eth_[rt]x_queue_setup(). If any offload is enabled in rte_eth_dev_configure() by the application, it is enabled on all queues, no matter whether it is per-queue or per-port type and no matter whether it is set or cleared in [rt]x_conf->offloads to rte_eth_[rt]x_queue_setup(). If a per-queue offload hasn't been enabled in rte_eth_dev_configure(), it can be enabled or disabled for an individual queue in rte_eth_[rt]x_queue_setup(). A newly added offload is one which hasn't been enabled in rte_eth_dev_configure() and is requested to be enabled in rte_eth_[rt]x_queue_setup(); it must be per-queue type, otherwise an error log is triggered. The underlying PMD must be aware that the requested offloads passed to the PMD-specific queue_setup() function only carry those newly added offloads of per-queue type.
This patch performs the above checks in a common way in the rte_ethdev layer to avoid duplicating them in each underlying PMD.
This patch assumes that all PMDs in 18.05-rc2 have already converted to the offload API defined in 17.11. It also assumes that all PMDs can return correct offload capabilities in rte_eth_dev_infos_get().
At the beginning of [rt]x_queue_setup() of the underlying PMD, add offloads = [rt]xconf->offloads | dev->data->dev_conf.[rt]xmode.offloads; to keep the same behavior as the offload API defined in 17.11, so that upper applications are not broken by the offload API change. The PMD can use the fact that the input [rt]xconf->offloads only carries the newly added per-queue offloads to do some optimization or code changes on the basis of this patch.
Signed-off-by: Wei Dai <wei.dai@intel.com> Signed-off-by: Ferruh Yigit <ferruh.yigit@intel.com> Signed-off-by: Qi Zhang <qi.z.zhang@intel.com>
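
The prescribed PMD-side pattern, sketched for an Rx queue (pmd_rx_queue_setup is a placeholder name; the body is reduced to the merge line quoted above):

    static int
    pmd_rx_queue_setup(struct rte_eth_dev *dev, uint16_t queue_idx,
                       uint16_t nb_desc, unsigned int socket_id,
                       const struct rte_eth_rxconf *rx_conf,
                       struct rte_mempool *mp)
    {
        uint64_t offloads = rx_conf->offloads |
                            dev->data->dev_conf.rxmode.offloads;

        /* configure the queue using the merged 'offloads' bitmask */
        return 0;
    }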