* [PATCH v6 00/15] Vhost: add support to packed ring layout
@ 2018-07-02  8:16 Maxime Coquelin
  2018-07-02  8:16 ` [PATCH v6 01/15] vhost: add virtio packed virtqueue defines Maxime Coquelin
                   ` (14 more replies)
  0 siblings, 15 replies; 34+ messages in thread
From: Maxime Coquelin @ 2018-07-02  8:16 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

This series is a handover from Jens' "[PATCH v4 00/20]
implement packed virtqueues", and only implements the
vhost side. The Virtio PMD implementation will follow in
a subsequent series.

The series applies on top of the previous reworks I posted
during this cycle, which merge the mergeable and
non-mergeable receive paths and refactor the transmit path
to re-use vector buffers.

I haven't run performance tests yet, as the Virtio PMD
side isn't ready.

The series has been tested with Tiwei's series implementing
packed ring support in the kernel's virtio-net driver, and
with Wei's series implementing the QEMU side.

To test it, I used testpmd on the host side with a vhost
vdev and a tap vdev forwarding to each other. Big random
files have been transferred in both directions, with
integrity verified.

Tests have been run with Rx mrg ON/OFF and event suppression
ON/OFF.

Tests have also been run with the legacy split ring layout
to ensure no regressions have been introduced.

Changes since v5:
=================
- Remove duplicated VHOST_USER_F_PROTOCOL_FEATURES definition (Tiwei)
- Fix vq_is_ready (Maxime)
- Fix local used index overflow in flush_shadow_used_ring_packed (Tiwei)
- Make desc_is_avail() a bool (Tiwei)
- Improve desc_is_avail() logic (Tiwei)
- Remove split rings addr NULL assignment in the right patch (Tiwei)
- Make buffer id a uint16_t (Tiwei)

Jens Freimann (2):
  vhost: add virtio packed virtqueue defines
  vhost: add helpers for packed virtqueues

Maxime Coquelin (12):
  vhost: clear shadow used table index at flush time
  vhost: make indirect desc table copy desc type agnostic
  vhost: clear batch copy index at copy time
  vhost: extract split ring handling from Rx and Tx functions
  vhost: append shadow used ring function names with split
  vhost: add shadow used ring support for packed rings
  vhost: create descriptor mapping function
  vhost: add vector filling support for packed ring
  vhost: add Rx support for packed ring
  vhost: add Tx support for packed ring
  vhost: add notification for packed ring
  vhost: advertize packed ring layout support

Yuanhan Liu (1):
  vhost: vring address setup for packed queues

 lib/librte_vhost/vhost.c         | 126 ++++++-
 lib/librte_vhost/vhost.h         | 104 +++++-
 lib/librte_vhost/vhost_user.c    | 128 ++++++-
 lib/librte_vhost/virtio-packed.h |  41 +++
 lib/librte_vhost/virtio_net.c    | 772 ++++++++++++++++++++++++++++++---------
 5 files changed, 963 insertions(+), 208 deletions(-)
 create mode 100644 lib/librte_vhost/virtio-packed.h

-- 
2.14.4

^ permalink raw reply	[flat|nested] 34+ messages in thread

* [PATCH v6 01/15] vhost: add virtio packed virtqueue defines
  2018-07-02  8:16 [PATCH v6 00/15] Vhost: add support to packed ring layout Maxime Coquelin
@ 2018-07-02  8:16 ` Maxime Coquelin
  2018-07-04  5:37   ` Tiwei Bie
  2018-07-02  8:16 ` [PATCH v6 02/15] vhost: add helpers for packed virtqueues Maxime Coquelin
                   ` (13 subsequent siblings)
  14 siblings, 1 reply; 34+ messages in thread
From: Maxime Coquelin @ 2018-07-02  8:16 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev; +Cc: mst, jasowang, wexu

From: Jens Freimann <jfreimann@redhat.com>

Signed-off-by: Jens Freimann <jfreimann@redhat.com>
---
 lib/librte_vhost/vhost.h         |  3 +++
 lib/librte_vhost/virtio-packed.h | 22 ++++++++++++++++++++++
 2 files changed, 25 insertions(+)
 create mode 100644 lib/librte_vhost/virtio-packed.h

diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 3f8fa3a78..cecd5c1ee 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -191,6 +191,9 @@ struct vhost_msg {
 #ifndef VIRTIO_F_VERSION_1
  #define VIRTIO_F_VERSION_1 32
 #endif
+#ifndef VIRTIO_F_RING_PACKED
+ #define VIRTIO_F_RING_PACKED 34
+#endif
 
 /* Features supported by this builtin vhost-user net driver. */
 #define VIRTIO_NET_SUPPORTED_FEATURES ((1ULL << VIRTIO_NET_F_MRG_RXBUF) | \
diff --git a/lib/librte_vhost/virtio-packed.h b/lib/librte_vhost/virtio-packed.h
new file mode 100644
index 000000000..744b3991b
--- /dev/null
+++ b/lib/librte_vhost/virtio-packed.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) Red Hat Inc.
+ */
+
+#ifndef __VIRTIO_PACKED_H
+#define __VIRTIO_PACKED_H
+
+#define VRING_DESC_F_NEXT       1
+#define VRING_DESC_F_WRITE      2
+#define VRING_DESC_F_INDIRECT   4
+
+#define VRING_DESC_F_AVAIL      (1ULL << 7)
+#define VRING_DESC_F_USED	(1ULL << 15)
+
+struct vring_desc_packed {
+	uint64_t addr;
+	uint32_t len;
+	uint16_t index;
+	uint16_t flags;
+};
+
+#endif /* __VIRTIO_PACKED_H */
-- 
2.14.4


* [PATCH v6 02/15] vhost: add helpers for packed virtqueues
  2018-07-02  8:16 [PATCH v6 00/15] Vhost: add support to packed ring layout Maxime Coquelin
  2018-07-02  8:16 ` [PATCH v6 01/15] vhost: add virtio packed virtqueue defines Maxime Coquelin
@ 2018-07-02  8:16 ` Maxime Coquelin
  2018-07-04  5:39   ` Tiwei Bie
  2018-07-02  8:16 ` [PATCH v6 03/15] vhost: vring address setup for packed queues Maxime Coquelin
                   ` (12 subsequent siblings)
  14 siblings, 1 reply; 34+ messages in thread
From: Maxime Coquelin @ 2018-07-02  8:16 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

From: Jens Freimann <jfreimann@redhat.com>

Add helper functions to check descriptor flags and to
check whether a vring uses the packed layout.
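
As a standalone sketch of the availability check (the struct and flag
values mirror the definitions added earlier in this series): a
descriptor is available when its AVAIL flag matches the driver's wrap
counter while its USED flag does not.

```c
#include <stdbool.h>
#include <stdint.h>

/* Layout and flag values mirror lib/librte_vhost/virtio-packed.h. */
#define VRING_DESC_F_AVAIL	(1ULL << 7)
#define VRING_DESC_F_USED	(1ULL << 15)

struct vring_desc_packed {
	uint64_t addr;
	uint32_t len;
	uint16_t index;
	uint16_t flags;
};

/* Available: AVAIL bit equals the wrap counter, USED bit differs. */
static inline bool
desc_is_avail(struct vring_desc_packed *desc, bool wrap_counter)
{
	return wrap_counter == !!(desc->flags & VRING_DESC_F_AVAIL) &&
		wrap_counter != !!(desc->flags & VRING_DESC_F_USED);
}
```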

Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/vhost.h         | 6 ++++++
 lib/librte_vhost/virtio-packed.h | 8 ++++++++
 2 files changed, 14 insertions(+)

diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index cecd5c1ee..cabbc3ab7 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -318,6 +318,12 @@ struct virtio_net {
 	struct vhost_user_extern_ops extern_ops;
 } __rte_cache_aligned;
 
+static __rte_always_inline bool
+vq_is_packed(struct virtio_net *dev)
+{
+	return dev->features & (1ull << VIRTIO_F_RING_PACKED);
+}
+
 #define VHOST_LOG_PAGE	4096
 
 /*
diff --git a/lib/librte_vhost/virtio-packed.h b/lib/librte_vhost/virtio-packed.h
index 744b3991b..d386cb6df 100644
--- a/lib/librte_vhost/virtio-packed.h
+++ b/lib/librte_vhost/virtio-packed.h
@@ -19,4 +19,12 @@ struct vring_desc_packed {
 	uint16_t flags;
 };
 
+
+static inline bool
+desc_is_avail(struct vring_desc_packed *desc, bool wrap_counter)
+{
+	return wrap_counter == !!(desc->flags & VRING_DESC_F_AVAIL) &&
+		wrap_counter != !!(desc->flags & VRING_DESC_F_USED);
+}
+
 #endif /* __VIRTIO_PACKED_H */
-- 
2.14.4


* [PATCH v6 03/15] vhost: vring address setup for packed queues
  2018-07-02  8:16 [PATCH v6 00/15] Vhost: add support to packed ring layout Maxime Coquelin
  2018-07-02  8:16 ` [PATCH v6 01/15] vhost: add virtio packed virtqueue defines Maxime Coquelin
  2018-07-02  8:16 ` [PATCH v6 02/15] vhost: add helpers for packed virtqueues Maxime Coquelin
@ 2018-07-02  8:16 ` Maxime Coquelin
  2018-07-02  8:16 ` [PATCH v6 04/15] vhost: clear shadow used table index at flush time Maxime Coquelin
                   ` (11 subsequent siblings)
  14 siblings, 0 replies; 34+ messages in thread
From: Maxime Coquelin @ 2018-07-02  8:16 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Yuanhan Liu, Jens Freimann, Maxime Coquelin

From: Yuanhan Liu <yuanhan.liu@linux.intel.com>

Add code to set up packed queues when enabled.
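
The patch initializes avail_wrap_counter and used_wrap_counter to 1 in
alloc_vring_queue(). As an illustrative sketch only (ring_advance() is
a hypothetical helper, not part of this patch), each wrap counter flips
every time its ring index wraps past the end of the ring:

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical helper for illustration: advance a packed-ring index
 * and flip the wrap counter when the index wraps past the ring end. */
static void
ring_advance(uint16_t *idx, bool *wrap_counter, uint16_t ring_size)
{
	if (++(*idx) >= ring_size) {
		*idx = 0;
		*wrap_counter = !*wrap_counter;
	}
}
```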

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Signed-off-by: Jens Freimann <jfreiman@redhat.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/vhost.c      | 44 ++++++++++++++++++++++++++++++++++++++-----
 lib/librte_vhost/vhost.h      |  7 ++++++-
 lib/librte_vhost/vhost_user.c | 40 +++++++++++++++++++++++++++++++++++----
 3 files changed, 81 insertions(+), 10 deletions(-)

diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
index afded4952..05d2bb182 100644
--- a/lib/librte_vhost/vhost.c
+++ b/lib/librte_vhost/vhost.c
@@ -23,6 +23,7 @@
 #include "iotlb.h"
 #include "vhost.h"
 #include "vhost_user.h"
+#include "virtio-packed.h"
 
 struct virtio_net *vhost_devices[MAX_VHOST_DEVICE];
 
@@ -115,14 +116,11 @@ free_device(struct virtio_net *dev)
 	rte_free(dev);
 }
 
-int
-vring_translate(struct virtio_net *dev, struct vhost_virtqueue *vq)
+static int
+vring_translate_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
 {
 	uint64_t req_size, size;
 
-	if (!(dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)))
-		goto out;
-
 	req_size = sizeof(struct vring_desc) * vq->size;
 	size = req_size;
 	vq->desc = (struct vring_desc *)(uintptr_t)vhost_iova_to_vva(dev, vq,
@@ -153,6 +151,40 @@ vring_translate(struct virtio_net *dev, struct vhost_virtqueue *vq)
 	if (!vq->used || size != req_size)
 		return -1;
 
+	return 0;
+}
+
+static int
+vring_translate_packed(struct virtio_net *dev, struct vhost_virtqueue *vq)
+{
+	uint64_t req_size, size;
+
+	req_size = sizeof(struct vring_desc_packed) * vq->size;
+	size = req_size;
+	vq->desc_packed =
+		(struct vring_desc_packed *)(uintptr_t)vhost_iova_to_vva(dev,
+					vq, vq->ring_addrs.desc_user_addr,
+					&size, VHOST_ACCESS_RW);
+	if (!vq->desc_packed || size != req_size)
+		return -1;
+
+	return 0;
+}
+
+int
+vring_translate(struct virtio_net *dev, struct vhost_virtqueue *vq)
+{
+
+	if (!(dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)))
+		goto out;
+
+	if (vq_is_packed(dev)) {
+		if (vring_translate_packed(dev, vq) < 0)
+			return -1;
+	} else {
+		if (vring_translate_split(dev, vq) < 0)
+			return -1;
+	}
 out:
 	vq->access_ok = 1;
 
@@ -234,6 +266,8 @@ alloc_vring_queue(struct virtio_net *dev, uint32_t vring_idx)
 	dev->virtqueue[vring_idx] = vq;
 	init_vring_queue(dev, vring_idx);
 	rte_spinlock_init(&vq->access_lock);
+	vq->avail_wrap_counter = 1;
+	vq->used_wrap_counter = 1;
 
 	dev->nr_vring += 1;
 
diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index cabbc3ab7..35beff699 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -84,7 +84,10 @@ struct log_cache_entry {
  * Structure contains variables relevant to RX/TX virtqueues.
  */
 struct vhost_virtqueue {
-	struct vring_desc	*desc;
+	union {
+		struct vring_desc	*desc;
+		struct vring_desc_packed   *desc_packed;
+	};
 	struct vring_avail	*avail;
 	struct vring_used	*used;
 	uint32_t		size;
@@ -122,6 +125,8 @@ struct vhost_virtqueue {
 
 	struct batch_copy_elem	*batch_copy_elems;
 	uint16_t		batch_copy_nb_elems;
+	bool			used_wrap_counter;
+	bool			avail_wrap_counter;
 
 	struct log_cache_entry log_cache[VHOST_LOG_CACHE_NR];
 	uint16_t log_cache_nb_elem;
diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
index 947290fc3..ae03ec29b 100644
--- a/lib/librte_vhost/vhost_user.c
+++ b/lib/librte_vhost/vhost_user.c
@@ -39,6 +39,7 @@
 #include "iotlb.h"
 #include "vhost.h"
 #include "vhost_user.h"
+#include "virtio-packed.h"
 
 #define VIRTIO_MIN_MTU 68
 #define VIRTIO_MAX_MTU 65535
@@ -477,6 +478,27 @@ translate_ring_addresses(struct virtio_net *dev, int vq_index)
 	struct vhost_vring_addr *addr = &vq->ring_addrs;
 	uint64_t len;
 
+	if (vq_is_packed(dev)) {
+		len = sizeof(struct vring_desc_packed) * vq->size;
+		vq->desc_packed = (struct vring_desc_packed *) ring_addr_to_vva
+			(dev, vq, addr->desc_user_addr, &len);
+		vq->log_guest_addr = 0;
+		if (vq->desc_packed == NULL ||
+				len != sizeof(struct vring_desc_packed) *
+				vq->size) {
+			RTE_LOG(DEBUG, VHOST_CONFIG,
+				"(%d) failed to map desc_packed ring.\n",
+				dev->vid);
+			return dev;
+		}
+
+		dev = numa_realloc(dev, vq_index);
+		vq = dev->virtqueue[vq_index];
+		addr = &vq->ring_addrs;
+
+		return dev;
+	}
+
 	/* The addresses are converted from QEMU virtual to Vhost virtual. */
 	if (vq->desc && vq->avail && vq->used)
 		return dev;
@@ -885,10 +907,20 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *pmsg)
 	return -1;
 }
 
-static int
-vq_is_ready(struct vhost_virtqueue *vq)
+static bool
+vq_is_ready(struct virtio_net *dev, struct vhost_virtqueue *vq)
 {
-	return vq && vq->desc && vq->avail && vq->used &&
+	bool rings_ok;
+
+	if (!vq)
+		return false;
+
+	if (vq_is_packed(dev))
+		rings_ok = !!vq->desc_packed;
+	else
+		rings_ok = vq->desc && vq->avail && vq->used;
+
+	return rings_ok &&
 	       vq->kickfd != VIRTIO_UNINITIALIZED_EVENTFD &&
 	       vq->callfd != VIRTIO_UNINITIALIZED_EVENTFD;
 }
@@ -905,7 +937,7 @@ virtio_is_ready(struct virtio_net *dev)
 	for (i = 0; i < dev->nr_vring; i++) {
 		vq = dev->virtqueue[i];
 
-		if (!vq_is_ready(vq))
+		if (!vq_is_ready(dev, vq))
 			return 0;
 	}
 
-- 
2.14.4


* [PATCH v6 04/15] vhost: clear shadow used table index at flush time
  2018-07-02  8:16 [PATCH v6 00/15] Vhost: add support to packed ring layout Maxime Coquelin
                   ` (2 preceding siblings ...)
  2018-07-02  8:16 ` [PATCH v6 03/15] vhost: vring address setup for packed queues Maxime Coquelin
@ 2018-07-02  8:16 ` Maxime Coquelin
  2018-07-02  8:16 ` [PATCH v6 05/15] vhost: make indirect desc table copy desc type agnostic Maxime Coquelin
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 34+ messages in thread
From: Maxime Coquelin @ 2018-07-02  8:16 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 2c13a59a2..28ce2d073 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -115,6 +115,7 @@ flush_shadow_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq)
 	vhost_log_cache_sync(dev, vq);
 
 	*(volatile uint16_t *)&vq->used->idx += vq->shadow_used_idx;
+	vq->shadow_used_idx = 0;
 	vhost_log_used_vring(dev, vq, offsetof(struct vring_used, idx),
 		sizeof(vq->used->idx));
 }
@@ -561,7 +562,6 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 
 	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
 
-	vq->shadow_used_idx = 0;
 	avail_head = *((volatile uint16_t *)&vq->avail->idx);
 	for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
 		uint32_t pkt_len = pkts[pkt_idx]->pkt_len + dev->vhost_hlen;
@@ -1048,7 +1048,6 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 		goto out_access_unlock;
 
 	vq->batch_copy_nb_elems = 0;
-	vq->shadow_used_idx = 0;
 
 	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
 		vhost_user_iotlb_rd_lock(vq);
@@ -1079,7 +1078,6 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 
 		flush_shadow_used_ring(dev, vq);
 		vhost_vring_call(dev, vq);
-		vq->shadow_used_idx = 0;
 	}
 
 	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
-- 
2.14.4


* [PATCH v6 05/15] vhost: make indirect desc table copy desc type agnostic
  2018-07-02  8:16 [PATCH v6 00/15] Vhost: add support to packed ring layout Maxime Coquelin
                   ` (3 preceding siblings ...)
  2018-07-02  8:16 ` [PATCH v6 04/15] vhost: clear shadow used table index at flush time Maxime Coquelin
@ 2018-07-02  8:16 ` Maxime Coquelin
  2018-07-02  8:16 ` [PATCH v6 06/15] vhost: clear batch copy index at copy time Maxime Coquelin
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 34+ messages in thread
From: Maxime Coquelin @ 2018-07-02  8:16 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 28ce2d073..5cfc100a9 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -37,16 +37,15 @@ is_valid_virt_queue_idx(uint32_t idx, int is_tx, uint32_t nr_vring)
 	return (is_tx ^ (idx & 1)) == 0 && idx < nr_vring;
 }
 
-static __rte_always_inline struct vring_desc *
+static __rte_always_inline void *
 alloc_copy_ind_table(struct virtio_net *dev, struct vhost_virtqueue *vq,
-					 struct vring_desc *desc)
+					 uint64_t desc_addr, uint64_t desc_len)
 {
-	struct vring_desc *idesc;
+	void *idesc;
 	uint64_t src, dst;
-	uint64_t len, remain = desc->len;
-	uint64_t desc_addr = desc->addr;
+	uint64_t len, remain = desc_len;
 
-	idesc = rte_malloc(__func__, desc->len, 0);
+	idesc = rte_malloc(__func__, desc_len, 0);
 	if (unlikely(!idesc))
 		return 0;
 
@@ -72,7 +71,7 @@ alloc_copy_ind_table(struct virtio_net *dev, struct vhost_virtqueue *vq,
 }
 
 static __rte_always_inline void
-free_ind_table(struct vring_desc *idesc)
+free_ind_table(void *idesc)
 {
 	rte_free(idesc);
 }
@@ -251,7 +250,8 @@ fill_vec_buf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			 * The indirect desc table is not contiguous
 			 * in process VA space, we have to copy it.
 			 */
-			idesc = alloc_copy_ind_table(dev, vq, &vq->desc[idx]);
+			idesc = alloc_copy_ind_table(dev, vq,
+					vq->desc[idx].addr, vq->desc[idx].len);
 			if (unlikely(!idesc))
 				return -1;
 
-- 
2.14.4


* [PATCH v6 06/15] vhost: clear batch copy index at copy time
  2018-07-02  8:16 [PATCH v6 00/15] Vhost: add support to packed ring layout Maxime Coquelin
                   ` (4 preceding siblings ...)
  2018-07-02  8:16 ` [PATCH v6 05/15] vhost: make indirect desc table copy desc type agnostic Maxime Coquelin
@ 2018-07-02  8:16 ` Maxime Coquelin
  2018-07-02  8:16 ` [PATCH v6 07/15] vhost: extract split ring handling from Rx and Tx functions Maxime Coquelin
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 34+ messages in thread
From: Maxime Coquelin @ 2018-07-02  8:16 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 5cfc100a9..816d5fc1d 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -141,6 +141,8 @@ do_data_copy_enqueue(struct virtio_net *dev, struct vhost_virtqueue *vq)
 		vhost_log_cache_write(dev, vq, elem[i].log_addr, elem[i].len);
 		PRINT_PACKET(dev, (uintptr_t)elem[i].dst, elem[i].len, 0);
 	}
+
+	vq->batch_copy_nb_elems = 0;
 }
 
 static inline void
@@ -152,6 +154,8 @@ do_data_copy_dequeue(struct vhost_virtqueue *vq)
 
 	for (i = 0; i < count; i++)
 		rte_memcpy(elem[i].dst, elem[i].src, elem[i].len);
+
+	vq->batch_copy_nb_elems = 0;
 }
 
 /* avoid write operation when necessary, to lessen cache issues */
@@ -558,8 +562,6 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 	if (count == 0)
 		goto out;
 
-	vq->batch_copy_nb_elems = 0;
-
 	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
 
 	avail_head = *((volatile uint16_t *)&vq->avail->idx);
@@ -1047,8 +1049,6 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 	if (unlikely(vq->enabled == 0))
 		goto out_access_unlock;
 
-	vq->batch_copy_nb_elems = 0;
-
 	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
 		vhost_user_iotlb_rd_lock(vq);
 
-- 
2.14.4


* [PATCH v6 07/15] vhost: extract split ring handling from Rx and Tx functions
  2018-07-02  8:16 [PATCH v6 00/15] Vhost: add support to packed ring layout Maxime Coquelin
                   ` (5 preceding siblings ...)
  2018-07-02  8:16 ` [PATCH v6 06/15] vhost: clear batch copy index at copy time Maxime Coquelin
@ 2018-07-02  8:16 ` Maxime Coquelin
  2018-07-04  6:51   ` Tiwei Bie
  2018-07-02  8:16 ` [PATCH v6 08/15] vhost: append shadow used ring function names with split Maxime Coquelin
                   ` (7 subsequent siblings)
  14 siblings, 1 reply; 34+ messages in thread
From: Maxime Coquelin @ 2018-07-02  8:16 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 240 +++++++++++++++++++++++-------------------
 1 file changed, 131 insertions(+), 109 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 816d5fc1d..385876527 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -226,13 +226,13 @@ virtio_enqueue_offload(struct rte_mbuf *m_buf, struct virtio_net_hdr *net_hdr)
 }
 
 static __rte_always_inline int
-fill_vec_buf(struct virtio_net *dev, struct vhost_virtqueue *vq,
-			 uint32_t avail_idx, uint32_t *vec_idx,
+fill_vec_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
+			 uint32_t avail_idx, uint16_t *vec_idx,
 			 struct buf_vector *buf_vec, uint16_t *desc_chain_head,
 			 uint16_t *desc_chain_len, uint8_t perm)
 {
 	uint16_t idx = vq->avail->ring[avail_idx & (vq->size - 1)];
-	uint32_t vec_id = *vec_idx;
+	uint16_t vec_id = *vec_idx;
 	uint32_t len    = 0;
 	uint64_t dlen, desc_avail, desc_iova;
 	struct vring_desc *descs = vq->desc;
@@ -317,13 +317,13 @@ fill_vec_buf(struct virtio_net *dev, struct vhost_virtqueue *vq,
  * Returns -1 on fail, 0 on success
  */
 static inline int
-reserve_avail_buf(struct virtio_net *dev, struct vhost_virtqueue *vq,
+reserve_avail_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 				uint32_t size, struct buf_vector *buf_vec,
 				uint16_t *num_buffers, uint16_t avail_head,
 				uint16_t *nr_vec)
 {
 	uint16_t cur_idx;
-	uint32_t vec_idx = 0;
+	uint16_t vec_idx = 0;
 	uint16_t max_tries, tries = 0;
 
 	uint16_t head_idx = 0;
@@ -341,7 +341,8 @@ reserve_avail_buf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		if (unlikely(cur_idx == avail_head))
 			return -1;
 
-		if (unlikely(fill_vec_buf(dev, vq, cur_idx, &vec_idx, buf_vec,
+		if (unlikely(fill_vec_buf_split(dev, vq, cur_idx,
+						&vec_idx, buf_vec,
 						&head_idx, &len,
 						VHOST_ACCESS_RO) < 0))
 			return -1;
@@ -528,48 +529,22 @@ copy_mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 }
 
 static __rte_always_inline uint32_t
-virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
+virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	struct rte_mbuf **pkts, uint32_t count)
 {
-	struct vhost_virtqueue *vq;
 	uint32_t pkt_idx = 0;
 	uint16_t num_buffers;
 	struct buf_vector buf_vec[BUF_VECTOR_MAX];
 	uint16_t avail_head;
 
-	VHOST_LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__);
-	if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) {
-		RTE_LOG(ERR, VHOST_DATA, "(%d) %s: invalid virtqueue idx %d.\n",
-			dev->vid, __func__, queue_id);
-		return 0;
-	}
-
-	vq = dev->virtqueue[queue_id];
-
-	rte_spinlock_lock(&vq->access_lock);
-
-	if (unlikely(vq->enabled == 0))
-		goto out_access_unlock;
-
-	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
-		vhost_user_iotlb_rd_lock(vq);
-
-	if (unlikely(vq->access_ok == 0))
-		if (unlikely(vring_translate(dev, vq) < 0))
-			goto out;
-
-	count = RTE_MIN((uint32_t)MAX_PKT_BURST, count);
-	if (count == 0)
-		goto out;
-
 	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
-
 	avail_head = *((volatile uint16_t *)&vq->avail->idx);
+
 	for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
 		uint32_t pkt_len = pkts[pkt_idx]->pkt_len + dev->vhost_hlen;
 		uint16_t nr_vec = 0;
 
-		if (unlikely(reserve_avail_buf(dev, vq,
+		if (unlikely(reserve_avail_buf_split(dev, vq,
 						pkt_len, buf_vec, &num_buffers,
 						avail_head, &nr_vec) < 0)) {
 			VHOST_LOG_DEBUG(VHOST_DATA,
@@ -602,6 +577,42 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 		vhost_vring_call(dev, vq);
 	}
 
+	return pkt_idx;
+}
+
+static __rte_always_inline uint32_t
+virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
+	struct rte_mbuf **pkts, uint32_t count)
+{
+	struct vhost_virtqueue *vq;
+
+	VHOST_LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__);
+	if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) {
+		RTE_LOG(ERR, VHOST_DATA, "(%d) %s: invalid virtqueue idx %d.\n",
+			dev->vid, __func__, queue_id);
+		return 0;
+	}
+
+	vq = dev->virtqueue[queue_id];
+
+	rte_spinlock_lock(&vq->access_lock);
+
+	if (unlikely(vq->enabled == 0))
+		goto out_access_unlock;
+
+	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
+		vhost_user_iotlb_rd_lock(vq);
+
+	if (unlikely(vq->access_ok == 0))
+		if (unlikely(vring_translate(dev, vq) < 0))
+			goto out;
+
+	count = RTE_MIN((uint32_t)MAX_PKT_BURST, count);
+	if (count == 0)
+		goto out;
+
+	count = virtio_dev_rx_split(dev, vq, pkts, count);
+
 out:
 	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
 		vhost_user_iotlb_rd_unlock(vq);
@@ -609,7 +620,7 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 out_access_unlock:
 	rte_spinlock_unlock(&vq->access_lock);
 
-	return pkt_idx;
+	return count;
 }
 
 uint16_t
@@ -1014,48 +1025,13 @@ restore_mbuf(struct rte_mbuf *m)
 	}
 }
 
-uint16_t
-rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
+static __rte_always_inline uint16_t
+virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
 {
-	struct virtio_net *dev;
-	struct rte_mbuf *rarp_mbuf = NULL;
-	struct vhost_virtqueue *vq;
-	uint32_t i = 0;
+	uint16_t i;
 	uint16_t free_entries;
 
-	dev = get_device(vid);
-	if (!dev)
-		return 0;
-
-	if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) {
-		RTE_LOG(ERR, VHOST_DATA,
-			"(%d) %s: built-in vhost net backend is disabled.\n",
-			dev->vid, __func__);
-		return 0;
-	}
-
-	if (unlikely(!is_valid_virt_queue_idx(queue_id, 1, dev->nr_vring))) {
-		RTE_LOG(ERR, VHOST_DATA, "(%d) %s: invalid virtqueue idx %d.\n",
-			dev->vid, __func__, queue_id);
-		return 0;
-	}
-
-	vq = dev->virtqueue[queue_id];
-
-	if (unlikely(rte_spinlock_trylock(&vq->access_lock) == 0))
-		return 0;
-
-	if (unlikely(vq->enabled == 0))
-		goto out_access_unlock;
-
-	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
-		vhost_user_iotlb_rd_lock(vq);
-
-	if (unlikely(vq->access_ok == 0))
-		if (unlikely(vring_translate(dev, vq) < 0))
-			goto out;
-
 	if (unlikely(dev->dequeue_zero_copy)) {
 		struct zcopy_mbuf *zmbuf, *next;
 		int nr_updated = 0;
@@ -1082,39 +1058,10 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 
 	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
 
-	/*
-	 * Construct a RARP broadcast packet, and inject it to the "pkts"
-	 * array, to looks like that guest actually send such packet.
-	 *
-	 * Check user_send_rarp() for more information.
-	 *
-	 * broadcast_rarp shares a cacheline in the virtio_net structure
-	 * with some fields that are accessed during enqueue and
-	 * rte_atomic16_cmpset() causes a write if using cmpxchg. This could
-	 * result in false sharing between enqueue and dequeue.
-	 *
-	 * Prevent unnecessary false sharing by reading broadcast_rarp first
-	 * and only performing cmpset if the read indicates it is likely to
-	 * be set.
-	 */
-
-	if (unlikely(rte_atomic16_read(&dev->broadcast_rarp) &&
-			rte_atomic16_cmpset((volatile uint16_t *)
-				&dev->broadcast_rarp.cnt, 1, 0))) {
-
-		rarp_mbuf = rte_net_make_rarp_packet(mbuf_pool, &dev->mac);
-		if (rarp_mbuf == NULL) {
-			RTE_LOG(ERR, VHOST_DATA,
-				"Failed to make RARP packet.\n");
-			return 0;
-		}
-		count -= 1;
-	}
-
 	free_entries = *((volatile uint16_t *)&vq->avail->idx) -
 		vq->last_avail_idx;
 	if (free_entries == 0)
-		goto out;
+		return 0;
 
 	VHOST_LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__);
 
@@ -1126,10 +1073,10 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 	for (i = 0; i < count; i++) {
 		struct buf_vector buf_vec[BUF_VECTOR_MAX];
 		uint16_t head_idx, dummy_len;
-		uint32_t nr_vec = 0;
+		uint16_t nr_vec = 0;
 		int err;
 
-		if (unlikely(fill_vec_buf(dev, vq,
+		if (unlikely(fill_vec_buf_split(dev, vq,
 						vq->last_avail_idx + i,
 						&nr_vec, buf_vec,
 						&head_idx, &dummy_len,
@@ -1188,6 +1135,81 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 		vhost_vring_call(dev, vq);
 	}
 
+	return i;
+}
+
+uint16_t
+rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
+	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
+{
+	struct virtio_net *dev;
+	struct rte_mbuf *rarp_mbuf = NULL;
+	struct vhost_virtqueue *vq;
+
+	dev = get_device(vid);
+	if (!dev)
+		return 0;
+
+	if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) {
+		RTE_LOG(ERR, VHOST_DATA,
+			"(%d) %s: built-in vhost net backend is disabled.\n",
+			dev->vid, __func__);
+		return 0;
+	}
+
+	if (unlikely(!is_valid_virt_queue_idx(queue_id, 1, dev->nr_vring))) {
+		RTE_LOG(ERR, VHOST_DATA, "(%d) %s: invalid virtqueue idx %d.\n",
+			dev->vid, __func__, queue_id);
+		return 0;
+	}
+
+	vq = dev->virtqueue[queue_id];
+
+	if (unlikely(rte_spinlock_trylock(&vq->access_lock) == 0))
+		return 0;
+
+	if (unlikely(vq->enabled == 0))
+		goto out_access_unlock;
+
+	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
+		vhost_user_iotlb_rd_lock(vq);
+
+	if (unlikely(vq->access_ok == 0))
+		if (unlikely(vring_translate(dev, vq) < 0))
+			goto out;
+
+
+	/*
+	 * Construct a RARP broadcast packet, and inject it to the "pkts"
+	 * array, to looks like that guest actually send such packet.
+	 *
+	 * Check user_send_rarp() for more information.
+	 *
+	 * broadcast_rarp shares a cacheline in the virtio_net structure
+	 * with some fields that are accessed during enqueue and
+	 * rte_atomic16_cmpset() causes a write if using cmpxchg. This could
+	 * result in false sharing between enqueue and dequeue.
+	 *
+	 * Prevent unnecessary false sharing by reading broadcast_rarp first
+	 * and only performing cmpset if the read indicates it is likely to
+	 * be set.
+	 */
+
+	if (unlikely(rte_atomic16_read(&dev->broadcast_rarp) &&
+			rte_atomic16_cmpset((volatile uint16_t *)
+				&dev->broadcast_rarp.cnt, 1, 0))) {
+
+		rarp_mbuf = rte_net_make_rarp_packet(mbuf_pool, &dev->mac);
+		if (rarp_mbuf == NULL) {
+			RTE_LOG(ERR, VHOST_DATA,
+				"Failed to make RARP packet.\n");
+			return 0;
+		}
+		count -= 1;
+	}
+
+	count = virtio_dev_tx_split(dev, vq, mbuf_pool, pkts, count);
+
 out:
 	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
 		vhost_user_iotlb_rd_unlock(vq);
@@ -1200,10 +1222,10 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 		 * Inject it to the head of "pkts" array, so that switch's mac
 		 * learning table will get updated first.
 		 */
-		memmove(&pkts[1], pkts, i * sizeof(struct rte_mbuf *));
+		memmove(&pkts[1], pkts, count * sizeof(struct rte_mbuf *));
 		pkts[0] = rarp_mbuf;
-		i += 1;
+		count += 1;
 	}
 
-	return i;
+	return count;
 }
-- 
2.14.4


* [PATCH v6 08/15] vhost: append shadow used ring function names with split
  2018-07-02  8:16 [PATCH v6 00/15] Vhost: add support to packed ring layout Maxime Coquelin
                   ` (6 preceding siblings ...)
  2018-07-02  8:16 ` [PATCH v6 07/15] vhost: extract split ring handling from Rx and Tx functions Maxime Coquelin
@ 2018-07-02  8:16 ` Maxime Coquelin
  2018-07-02  8:16 ` [PATCH v6 09/15] vhost: add shadow used ring support for packed rings Maxime Coquelin
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 34+ messages in thread
From: Maxime Coquelin @ 2018-07-02  8:16 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 28 +++++++++++++++-------------
 1 file changed, 15 insertions(+), 13 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 385876527..35f8cf90a 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -77,8 +77,9 @@ free_ind_table(void *idesc)
 }
 
 static __rte_always_inline void
-do_flush_shadow_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq,
-			  uint16_t to, uint16_t from, uint16_t size)
+do_flush_shadow_used_ring_split(struct virtio_net *dev,
+			struct vhost_virtqueue *vq,
+			uint16_t to, uint16_t from, uint16_t size)
 {
 	rte_memcpy(&vq->used->ring[to],
 			&vq->shadow_used_ring[from],
@@ -89,22 +90,22 @@ do_flush_shadow_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq,
 }
 
 static __rte_always_inline void
-flush_shadow_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq)
+flush_shadow_used_ring_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
 {
 	uint16_t used_idx = vq->last_used_idx & (vq->size - 1);
 
 	if (used_idx + vq->shadow_used_idx <= vq->size) {
-		do_flush_shadow_used_ring(dev, vq, used_idx, 0,
+		do_flush_shadow_used_ring_split(dev, vq, used_idx, 0,
 					  vq->shadow_used_idx);
 	} else {
 		uint16_t size;
 
 		/* update used ring interval [used_idx, vq->size] */
 		size = vq->size - used_idx;
-		do_flush_shadow_used_ring(dev, vq, used_idx, 0, size);
+		do_flush_shadow_used_ring_split(dev, vq, used_idx, 0, size);
 
 		/* update the left half used ring interval [0, left_size] */
-		do_flush_shadow_used_ring(dev, vq, 0, size,
+		do_flush_shadow_used_ring_split(dev, vq, 0, size,
 					  vq->shadow_used_idx - size);
 	}
 	vq->last_used_idx += vq->shadow_used_idx;
@@ -120,7 +121,7 @@ flush_shadow_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq)
 }
 
 static __rte_always_inline void
-update_shadow_used_ring(struct vhost_virtqueue *vq,
+update_shadow_used_ring_split(struct vhost_virtqueue *vq,
 			 uint16_t desc_idx, uint16_t len)
 {
 	uint16_t i = vq->shadow_used_idx++;
@@ -347,7 +348,7 @@ reserve_avail_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 						VHOST_ACCESS_RO) < 0))
 			return -1;
 		len = RTE_MIN(len, size);
-		update_shadow_used_ring(vq, head_idx, len);
+		update_shadow_used_ring_split(vq, head_idx, len);
 		size -= len;
 
 		cur_idx++;
@@ -573,7 +574,7 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	do_data_copy_enqueue(dev, vq);
 
 	if (likely(vq->shadow_used_idx)) {
-		flush_shadow_used_ring(dev, vq);
+		flush_shadow_used_ring_split(dev, vq);
 		vhost_vring_call(dev, vq);
 	}
 
@@ -1041,7 +1042,8 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			next = TAILQ_NEXT(zmbuf, next);
 
 			if (mbuf_is_consumed(zmbuf->mbuf)) {
-				update_shadow_used_ring(vq, zmbuf->desc_idx, 0);
+				update_shadow_used_ring_split(vq,
+						zmbuf->desc_idx, 0);
 				nr_updated += 1;
 
 				TAILQ_REMOVE(&vq->zmbuf_list, zmbuf, next);
@@ -1052,7 +1054,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			}
 		}
 
-		flush_shadow_used_ring(dev, vq);
+		flush_shadow_used_ring_split(dev, vq);
 		vhost_vring_call(dev, vq);
 	}
 
@@ -1084,7 +1086,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			break;
 
 		if (likely(dev->dequeue_zero_copy == 0))
-			update_shadow_used_ring(vq, head_idx, 0);
+			update_shadow_used_ring_split(vq, head_idx, 0);
 
 		rte_prefetch0((void *)(uintptr_t)buf_vec[0].buf_addr);
 
@@ -1131,7 +1133,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		do_data_copy_dequeue(vq);
 		if (unlikely(i < count))
 			vq->shadow_used_idx = i;
-		flush_shadow_used_ring(dev, vq);
+		flush_shadow_used_ring_split(dev, vq);
 		vhost_vring_call(dev, vq);
 	}
 
-- 
2.14.4

^ permalink raw reply related	[flat|nested] 34+ messages in thread
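
The renamed `flush_shadow_used_ring_split()` above copies the shadow entries into the used ring in at most two `memcpy()` calls when the interval wraps past the ring end. A simplified standalone model of that two-chunk copy (ring size, element type and the `flush_shadow()` name are illustrative, not the vhost API):

```c
#include <assert.h>
#include <stdint.h>
#include <string.h>

#define RING_SIZE 8	/* illustrative power-of-two ring size */

/*
 * Copy shadow[0..n) into ring[] starting at used_idx, splitting the
 * copy in two when used_idx + n runs past the end of the ring.
 */
static void flush_shadow(uint32_t *ring, uint16_t used_idx,
			 const uint32_t *shadow, uint16_t n)
{
	if (used_idx + n <= RING_SIZE) {
		memcpy(&ring[used_idx], shadow, n * sizeof(*ring));
	} else {
		uint16_t size = RING_SIZE - used_idx;

		/* tail interval [used_idx, RING_SIZE) */
		memcpy(&ring[used_idx], shadow, size * sizeof(*ring));
		/* wrapped interval [0, n - size) */
		memcpy(&ring[0], &shadow[size], (n - size) * sizeof(*ring));
	}
}
```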

* [PATCH v6 09/15] vhost: add shadow used ring support for packed rings
  2018-07-02  8:16 [PATCH v6 00/15] Vhost: add support to packed ring layout Maxime Coquelin
                   ` (7 preceding siblings ...)
  2018-07-02  8:16 ` [PATCH v6 08/15] vhost: append shadow used ring function names with split Maxime Coquelin
@ 2018-07-02  8:16 ` Maxime Coquelin
  2018-07-02  8:16 ` [PATCH v6 10/15] vhost: create descriptor mapping function Maxime Coquelin
                   ` (5 subsequent siblings)
  14 siblings, 0 replies; 34+ messages in thread
From: Maxime Coquelin @ 2018-07-02  8:16 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/vhost.c      |  9 ++++--
 lib/librte_vhost/vhost.h      | 13 ++++++--
 lib/librte_vhost/vhost_user.c | 64 ++++++++++++++++++++++++++++----------
 lib/librte_vhost/virtio_net.c | 72 +++++++++++++++++++++++++++++++++++++++++--
 4 files changed, 134 insertions(+), 24 deletions(-)

diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
index 05d2bb182..8538302c9 100644
--- a/lib/librte_vhost/vhost.c
+++ b/lib/librte_vhost/vhost.c
@@ -94,9 +94,12 @@ cleanup_device(struct virtio_net *dev, int destroy)
 }
 
 void
-free_vq(struct vhost_virtqueue *vq)
+free_vq(struct virtio_net *dev, struct vhost_virtqueue *vq)
 {
-	rte_free(vq->shadow_used_ring);
+	if (vq_is_packed(dev))
+		rte_free(vq->shadow_used_packed);
+	else
+		rte_free(vq->shadow_used_split);
 	rte_free(vq->batch_copy_elems);
 	rte_mempool_free(vq->iotlb_pool);
 	rte_free(vq);
@@ -111,7 +114,7 @@ free_device(struct virtio_net *dev)
 	uint32_t i;
 
 	for (i = 0; i < dev->nr_vring; i++)
-		free_vq(dev->virtqueue[i]);
+		free_vq(dev, dev->virtqueue[i]);
 
 	rte_free(dev);
 }
diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 35beff699..a7320469a 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -80,6 +80,12 @@ struct log_cache_entry {
 	unsigned long val;
 };
 
+struct vring_used_elem_packed {
+	uint16_t id;
+	uint32_t len;
+	uint32_t count;
+};
+
 /**
  * Structure contains variables relevant to RX/TX virtqueues.
  */
@@ -119,7 +125,10 @@ struct vhost_virtqueue {
 	struct zcopy_mbuf	*zmbufs;
 	struct zcopy_mbuf_list	zmbuf_list;
 
-	struct vring_used_elem  *shadow_used_ring;
+	union {
+		struct vring_used_elem  *shadow_used_split;
+		struct vring_used_elem_packed *shadow_used_packed;
+	};
 	uint16_t                shadow_used_idx;
 	struct vhost_vring_addr ring_addrs;
 
@@ -552,7 +561,7 @@ void reset_device(struct virtio_net *dev);
 void vhost_destroy_device(int);
 
 void cleanup_vq(struct vhost_virtqueue *vq, int destroy);
-void free_vq(struct vhost_virtqueue *vq);
+void free_vq(struct virtio_net *dev, struct vhost_virtqueue *vq);
 
 int alloc_vring_queue(struct virtio_net *dev, uint32_t vring_idx);
 
diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
index ae03ec29b..b2b57de57 100644
--- a/lib/librte_vhost/vhost_user.c
+++ b/lib/librte_vhost/vhost_user.c
@@ -244,7 +244,7 @@ vhost_user_set_features(struct virtio_net *dev, uint64_t features)
 
 			dev->virtqueue[dev->nr_vring] = NULL;
 			cleanup_vq(vq, 1);
-			free_vq(vq);
+			free_vq(dev, vq);
 		}
 	}
 
@@ -293,13 +293,26 @@ vhost_user_set_vring_num(struct virtio_net *dev,
 		TAILQ_INIT(&vq->zmbuf_list);
 	}
 
-	vq->shadow_used_ring = rte_malloc(NULL,
+	if (vq_is_packed(dev)) {
+		vq->shadow_used_packed = rte_malloc(NULL,
+				vq->size *
+				sizeof(struct vring_used_elem_packed),
+				RTE_CACHE_LINE_SIZE);
+		if (!vq->shadow_used_packed) {
+			RTE_LOG(ERR, VHOST_CONFIG,
+					"failed to allocate memory for shadow used ring.\n");
+			return -1;
+		}
+
+	} else {
+		vq->shadow_used_split = rte_malloc(NULL,
 				vq->size * sizeof(struct vring_used_elem),
 				RTE_CACHE_LINE_SIZE);
-	if (!vq->shadow_used_ring) {
-		RTE_LOG(ERR, VHOST_CONFIG,
-			"failed to allocate memory for shadow used ring.\n");
-		return -1;
+		if (!vq->shadow_used_split) {
+			RTE_LOG(ERR, VHOST_CONFIG,
+					"failed to allocate memory for shadow used ring.\n");
+			return -1;
+		}
 	}
 
 	vq->batch_copy_elems = rte_malloc(NULL,
@@ -326,7 +339,8 @@ numa_realloc(struct virtio_net *dev, int index)
 	struct virtio_net *old_dev;
 	struct vhost_virtqueue *old_vq, *vq;
 	struct zcopy_mbuf *new_zmbuf;
-	struct vring_used_elem *new_shadow_used_ring;
+	struct vring_used_elem *new_shadow_used_split;
+	struct vring_used_elem_packed *new_shadow_used_packed;
 	struct batch_copy_elem *new_batch_copy_elems;
 	int ret;
 
@@ -361,13 +375,26 @@ numa_realloc(struct virtio_net *dev, int index)
 			vq->zmbufs = new_zmbuf;
 		}
 
-		new_shadow_used_ring = rte_malloc_socket(NULL,
-			vq->size * sizeof(struct vring_used_elem),
-			RTE_CACHE_LINE_SIZE,
-			newnode);
-		if (new_shadow_used_ring) {
-			rte_free(vq->shadow_used_ring);
-			vq->shadow_used_ring = new_shadow_used_ring;
+		if (vq_is_packed(dev)) {
+			new_shadow_used_packed = rte_malloc_socket(NULL,
+					vq->size *
+					sizeof(struct vring_used_elem_packed),
+					RTE_CACHE_LINE_SIZE,
+					newnode);
+			if (new_shadow_used_packed) {
+				rte_free(vq->shadow_used_packed);
+				vq->shadow_used_packed = new_shadow_used_packed;
+			}
+		} else {
+			new_shadow_used_split = rte_malloc_socket(NULL,
+					vq->size *
+					sizeof(struct vring_used_elem),
+					RTE_CACHE_LINE_SIZE,
+					newnode);
+			if (new_shadow_used_split) {
+				rte_free(vq->shadow_used_split);
+				vq->shadow_used_split = new_shadow_used_split;
+			}
 		}
 
 		new_batch_copy_elems = rte_malloc_socket(NULL,
@@ -1067,8 +1094,13 @@ vhost_user_get_vring_base(struct virtio_net *dev,
 
 	if (dev->dequeue_zero_copy)
 		free_zmbufs(vq);
-	rte_free(vq->shadow_used_ring);
-	vq->shadow_used_ring = NULL;
+	if (vq_is_packed(dev)) {
+		rte_free(vq->shadow_used_packed);
+		vq->shadow_used_packed = NULL;
+	} else {
+		rte_free(vq->shadow_used_split);
+		vq->shadow_used_split = NULL;
+	}
 
 	rte_free(vq->batch_copy_elems);
 	vq->batch_copy_elems = NULL;
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 35f8cf90a..f0e2e6a1f 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -20,6 +20,7 @@
 
 #include "iotlb.h"
 #include "vhost.h"
+#include "virtio-packed.h"
 
 #define MAX_PKT_BURST 32
 
@@ -82,7 +83,7 @@ do_flush_shadow_used_ring_split(struct virtio_net *dev,
 			uint16_t to, uint16_t from, uint16_t size)
 {
 	rte_memcpy(&vq->used->ring[to],
-			&vq->shadow_used_ring[from],
+			&vq->shadow_used_split[from],
 			size * sizeof(struct vring_used_elem));
 	vhost_log_cache_used_vring(dev, vq,
 			offsetof(struct vring_used, ring[to]),
@@ -126,8 +127,73 @@ update_shadow_used_ring_split(struct vhost_virtqueue *vq,
 {
 	uint16_t i = vq->shadow_used_idx++;
 
-	vq->shadow_used_ring[i].id  = desc_idx;
-	vq->shadow_used_ring[i].len = len;
+	vq->shadow_used_split[i].id  = desc_idx;
+	vq->shadow_used_split[i].len = len;
+}
+
+static __rte_always_inline void
+flush_shadow_used_ring_packed(struct virtio_net *dev,
+			struct vhost_virtqueue *vq)
+{
+	int i;
+	uint16_t used_idx = vq->last_used_idx;
+
+	/* Split loop in two to save memory barriers */
+	for (i = 0; i < vq->shadow_used_idx; i++) {
+		vq->desc_packed[used_idx].index = vq->shadow_used_packed[i].id;
+		vq->desc_packed[used_idx].len = vq->shadow_used_packed[i].len;
+
+		used_idx += vq->shadow_used_packed[i].count;
+		if (used_idx >= vq->size)
+			used_idx -= vq->size;
+	}
+
+	rte_smp_wmb();
+
+	for (i = 0; i < vq->shadow_used_idx; i++) {
+		uint16_t flags;
+
+		if (vq->shadow_used_packed[i].len)
+			flags = VRING_DESC_F_WRITE;
+		else
+			flags = 0;
+
+		if (vq->used_wrap_counter) {
+			flags |= VRING_DESC_F_USED;
+			flags |= VRING_DESC_F_AVAIL;
+		} else {
+			flags &= ~VRING_DESC_F_USED;
+			flags &= ~VRING_DESC_F_AVAIL;
+		}
+
+		vq->desc_packed[vq->last_used_idx].flags = flags;
+
+		vhost_log_cache_used_vring(dev, vq,
+					vq->last_used_idx *
+					sizeof(struct vring_desc_packed),
+					sizeof(struct vring_desc_packed));
+
+		vq->last_used_idx += vq->shadow_used_packed[i].count;
+		if (vq->last_used_idx >= vq->size) {
+			vq->used_wrap_counter ^= 1;
+			vq->last_used_idx -= vq->size;
+		}
+	}
+
+	rte_smp_wmb();
+	vq->shadow_used_idx = 0;
+	vhost_log_cache_sync(dev, vq);
+}
+
+static __rte_always_inline void
+update_shadow_used_ring_packed(struct vhost_virtqueue *vq,
+			 uint16_t desc_idx, uint16_t len, uint16_t count)
+{
+	uint16_t i = vq->shadow_used_idx++;
+
+	vq->shadow_used_packed[i].id  = desc_idx;
+	vq->shadow_used_packed[i].len = len;
+	vq->shadow_used_packed[i].count = count;
 }
 
 static inline void
-- 
2.14.4

^ permalink raw reply related	[flat|nested] 34+ messages in thread
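
The flag computation in `flush_shadow_used_ring_packed()` above can be condensed into a small sketch: a used packed descriptor gets `AVAIL` and `USED` both equal to the device's wrap counter, and `WRITE` only when data was written (`len != 0`). The bit values below follow the virtio 1.1 packed ring layout (`AVAIL` = bit 7, `USED` = bit 15); `used_flags()` is an illustrative helper, not a vhost function:

```c
#include <assert.h>
#include <stdint.h>

#define VRING_DESC_F_WRITE	(1 << 1)
#define VRING_DESC_F_AVAIL	(1 << 7)
#define VRING_DESC_F_USED	(1 << 15)

/*
 * Compute the flags written back for one used descriptor:
 * AVAIL == USED == used_wrap_counter marks it as used for the current
 * ring lap; WRITE is set only for device-written (Rx) buffers.
 */
static uint16_t used_flags(uint32_t len, int used_wrap_counter)
{
	uint16_t flags = len ? VRING_DESC_F_WRITE : 0;

	if (used_wrap_counter)
		flags |= VRING_DESC_F_AVAIL | VRING_DESC_F_USED;

	return flags;
}
```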

* [PATCH v6 10/15] vhost: create descriptor mapping function
  2018-07-02  8:16 [PATCH v6 00/15] Vhost: add support to packed ring layout Maxime Coquelin
                   ` (8 preceding siblings ...)
  2018-07-02  8:16 ` [PATCH v6 09/15] vhost: add shadow used ring support for packed rings Maxime Coquelin
@ 2018-07-02  8:16 ` Maxime Coquelin
  2018-07-04  5:56   ` Tiwei Bie
  2018-07-02  8:16 ` [PATCH v6 11/15] vhost: add vector filling support for packed ring Maxime Coquelin
                   ` (4 subsequent siblings)
  14 siblings, 1 reply; 34+ messages in thread
From: Maxime Coquelin @ 2018-07-02  8:16 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 61 ++++++++++++++++++++++++++-----------------
 1 file changed, 37 insertions(+), 24 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index f0e2e6a1f..64664b7de 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -292,6 +292,37 @@ virtio_enqueue_offload(struct rte_mbuf *m_buf, struct virtio_net_hdr *net_hdr)
 	}
 }
 
+static __rte_always_inline int
+map_one_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
+		struct buf_vector *buf_vec, uint16_t *vec_idx,
+		uint64_t desc_iova, uint64_t desc_len, uint8_t perm)
+{
+	uint16_t vec_id = *vec_idx;
+
+	while (desc_len) {
+		uint64_t desc_addr;
+		uint64_t desc_chunck_len = desc_len;
+
+		desc_addr = vhost_iova_to_vva(dev, vq,
+				desc_iova,
+				&desc_chunck_len,
+				perm);
+		if (unlikely(!desc_addr))
+			return -1;
+
+		buf_vec[vec_id].buf_iova = desc_iova;
+		buf_vec[vec_id].buf_addr = desc_addr;
+		buf_vec[vec_id].buf_len  = desc_chunck_len;
+
+		desc_len -= desc_chunck_len;
+		desc_iova += desc_chunck_len;
+		vec_id++;
+	}
+	*vec_idx = vec_id;
+
+	return 0;
+}
+
 static __rte_always_inline int
 fill_vec_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			 uint32_t avail_idx, uint16_t *vec_idx,
@@ -301,7 +332,7 @@ fill_vec_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	uint16_t idx = vq->avail->ring[avail_idx & (vq->size - 1)];
 	uint16_t vec_id = *vec_idx;
 	uint32_t len    = 0;
-	uint64_t dlen, desc_avail, desc_iova;
+	uint64_t dlen;
 	struct vring_desc *descs = vq->desc;
 	struct vring_desc *idesc = NULL;
 
@@ -339,30 +370,12 @@ fill_vec_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		}
 
 		len += descs[idx].len;
-		desc_avail = descs[idx].len;
-		desc_iova = descs[idx].addr;
-
-		while (desc_avail) {
-			uint64_t desc_addr;
-			uint64_t desc_chunck_len = desc_avail;
-
-			desc_addr = vhost_iova_to_vva(dev, vq,
-					desc_iova,
-					&desc_chunck_len,
-					perm);
-			if (unlikely(!desc_addr)) {
-				free_ind_table(idesc);
-				return -1;
-			}
-
-			buf_vec[vec_id].buf_iova = desc_iova;
-			buf_vec[vec_id].buf_addr = desc_addr;
-			buf_vec[vec_id].buf_len  = desc_chunck_len;
-			buf_vec[vec_id].desc_idx = idx;
 
-			desc_avail -= desc_chunck_len;
-			desc_iova += desc_chunck_len;
-			vec_id++;
+		if (unlikely(map_one_desc(dev, vq, buf_vec, &vec_id,
+						descs[idx].addr, descs[idx].len,
+						perm))) {
+			free_ind_table(idesc);
+			return -1;
 		}
 
 		if ((descs[idx].flags & VRING_DESC_F_NEXT) == 0)
-- 
2.14.4

^ permalink raw reply related	[flat|nested] 34+ messages in thread
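
The `map_one_desc()` function factored out above loops because one guest IOVA range may map to several host-contiguous chunks. A toy standalone model of that loop, where a page-boundary rule stands in for `vhost_iova_to_vva()` and `struct chunk` stands in for `struct buf_vector` (both hypothetical):

```c
#include <assert.h>
#include <stdint.h>

#define PAGE 4096	/* contiguity granule in this toy model */

struct chunk { uint64_t iova; uint64_t len; };

/*
 * Emit one vector entry per host-contiguous chunk of the guest buffer
 * [iova, iova + len); here contiguity simply ends at page boundaries.
 */
static int map_desc(struct chunk *vec, uint16_t *vec_idx,
		    uint64_t iova, uint64_t len)
{
	uint16_t id = *vec_idx;

	while (len) {
		uint64_t chunk_len = PAGE - (iova & (PAGE - 1));

		if (chunk_len > len)
			chunk_len = len;

		vec[id].iova = iova;
		vec[id].len = chunk_len;
		len -= chunk_len;
		iova += chunk_len;
		id++;
	}
	*vec_idx = id;

	return 0;
}
```

Factoring this loop out lets the split and (next patches) packed fill paths share the chunking logic instead of open-coding it twice.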

* [PATCH v6 11/15] vhost: add vector filling support for packed ring
  2018-07-02  8:16 [PATCH v6 00/15] Vhost: add support to packed ring layout Maxime Coquelin
                   ` (9 preceding siblings ...)
  2018-07-02  8:16 ` [PATCH v6 10/15] vhost: create descriptor mapping function Maxime Coquelin
@ 2018-07-02  8:16 ` Maxime Coquelin
  2018-07-04  5:53   ` Tiwei Bie
  2018-07-02  8:16 ` [PATCH v6 12/15] vhost: add Rx " Maxime Coquelin
                   ` (3 subsequent siblings)
  14 siblings, 1 reply; 34+ messages in thread
From: Maxime Coquelin @ 2018-07-02  8:16 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 111 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 111 insertions(+)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 64664b7de..2d867e88e 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -448,6 +448,117 @@ reserve_avail_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	return 0;
 }
 
+static __rte_always_inline int
+fill_vec_buf_packed_indirect(struct virtio_net *dev,
+			struct vhost_virtqueue *vq,
+			struct vring_desc_packed *desc, uint16_t *vec_idx,
+			struct buf_vector *buf_vec, uint16_t *len, uint8_t perm)
+{
+	uint16_t i;
+	uint32_t nr_descs;
+	uint16_t vec_id = *vec_idx;
+	uint64_t dlen;
+	struct vring_desc_packed *descs, *idescs = NULL;
+
+	dlen = desc->len;
+	descs = (struct vring_desc_packed *)(uintptr_t)
+		vhost_iova_to_vva(dev, vq, desc->addr, &dlen, VHOST_ACCESS_RO);
+	if (unlikely(!descs))
+		return -1;
+
+	if (unlikely(dlen < desc->len)) {
+		/*
+		 * The indirect desc table is not contiguous
+		 * in process VA space, we have to copy it.
+		 */
+		idescs = alloc_copy_ind_table(dev, vq, desc->addr, desc->len);
+		if (unlikely(!idescs))
+			return -1;
+
+		descs = idescs;
+	}
+
+	nr_descs = desc->len / sizeof(struct vring_desc_packed);
+	if (unlikely(nr_descs >= vq->size)) {
+		free_ind_table(idescs);
+		return -1;
+	}
+
+	for (i = 0; i < nr_descs; i++) {
+		if (unlikely(vec_id >= BUF_VECTOR_MAX)) {
+			free_ind_table(idescs);
+			return -1;
+		}
+
+		*len += descs[i].len;
+		if (unlikely(map_one_desc(dev, vq, buf_vec, &vec_id,
+						descs[i].addr, descs[i].len,
+						perm)))
+			return -1;
+	}
+	*vec_idx = vec_id;
+
+	if (unlikely(!!idescs))
+		free_ind_table(idescs);
+
+	return 0;
+}
+
+static inline int
+fill_vec_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
+				uint16_t avail_idx, uint16_t *desc_count,
+				struct buf_vector *buf_vec, uint16_t *vec_idx,
+				uint16_t *buf_id, uint16_t *len, uint8_t perm)
+{
+	bool wrap_counter = vq->avail_wrap_counter;
+	struct vring_desc_packed *descs = vq->desc_packed;
+	uint16_t vec_id = *vec_idx;
+
+	if (avail_idx < vq->last_avail_idx)
+		wrap_counter ^= 1;
+
+	if (unlikely(!desc_is_avail(&descs[avail_idx], wrap_counter)))
+		return -1;
+
+	*desc_count = 0;
+
+	while (1) {
+		if (unlikely(vec_id >= BUF_VECTOR_MAX))
+			return -1;
+
+		*desc_count += 1;
+		*buf_id = descs[avail_idx].index;
+
+		if (descs[avail_idx].flags & VRING_DESC_F_INDIRECT) {
+			if (unlikely(fill_vec_buf_packed_indirect(dev, vq,
+							&descs[avail_idx],
+							&vec_id, buf_vec,
+							len, perm) < 0))
+				return -1;
+		} else {
+			*len += descs[avail_idx].len;
+
+			if (unlikely(map_one_desc(dev, vq, buf_vec, &vec_id,
+							descs[avail_idx].addr,
+							descs[avail_idx].len,
+							perm)))
+				return -1;
+		}
+
+		if ((descs[avail_idx].flags & VRING_DESC_F_NEXT) == 0)
+			break;
+
+		if (++avail_idx >= vq->size) {
+			avail_idx -= vq->size;
+			wrap_counter ^= 1;
+		}
+	}
+
+	*vec_idx = vec_id;
+
+	return 0;
+}
+
 static __rte_always_inline int
 copy_mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			    struct rte_mbuf *m, struct buf_vector *buf_vec,
-- 
2.14.4

^ permalink raw reply related	[flat|nested] 34+ messages in thread
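
`fill_vec_buf_packed()` above gates on `desc_is_avail()`, whose assumed shape (per the virtio 1.1 packed ring rules) is: a descriptor is available when its `AVAIL` bit matches the wrap counter and its `USED` bit does not. A minimal sketch of that predicate (flag values from the spec; the function body is an assumption, not copied from the patch):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define VRING_DESC_F_AVAIL	(1 << 7)
#define VRING_DESC_F_USED	(1 << 15)

/*
 * Available iff AVAIL == wrap_counter and USED != wrap_counter;
 * AVAIL == USED means the descriptor is used (same lap) or stale.
 */
static bool desc_is_avail(uint16_t flags, bool wrap_counter)
{
	bool avail = !!(flags & VRING_DESC_F_AVAIL);
	bool used  = !!(flags & VRING_DESC_F_USED);

	return avail == wrap_counter && used != wrap_counter;
}
```

This is why `fill_vec_buf_packed()` flips its local `wrap_counter` when `avail_idx < vq->last_avail_idx`: descriptors past the ring end belong to the next lap.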

* [PATCH v6 12/15] vhost: add Rx support for packed ring
  2018-07-02  8:16 [PATCH v6 00/15] Vhost: add support to packed ring layout Maxime Coquelin
                   ` (10 preceding siblings ...)
  2018-07-02  8:16 ` [PATCH v6 11/15] vhost: add vector filling support for packed ring Maxime Coquelin
@ 2018-07-02  8:16 ` Maxime Coquelin
  2018-07-02  8:16 ` [PATCH v6 13/15] vhost: add Tx " Maxime Coquelin
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 34+ messages in thread
From: Maxime Coquelin @ 2018-07-02  8:16 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 117 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 116 insertions(+), 1 deletion(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 2d867e88e..2e286e228 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -559,6 +559,65 @@ fill_vec_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	return 0;
 }
 
+/*
+ * Returns -1 on fail, 0 on success
+ */
+static inline int
+reserve_avail_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
+				uint32_t size, struct buf_vector *buf_vec,
+				uint16_t *nr_vec, uint16_t *num_buffers,
+				uint16_t *nr_descs)
+{
+	uint16_t avail_idx;
+	uint16_t vec_idx = 0;
+	uint16_t max_tries, tries = 0;
+
+	uint16_t buf_id = 0;
+	uint16_t len = 0;
+	uint16_t desc_count;
+
+	*num_buffers = 0;
+	avail_idx = vq->last_avail_idx;
+
+	if (rxvq_is_mergeable(dev))
+		max_tries = vq->size;
+	else
+		max_tries = 1;
+
+	while (size > 0) {
+		if (unlikely(fill_vec_buf_packed(dev, vq,
+						avail_idx, &desc_count,
+						buf_vec, &vec_idx,
+						&buf_id, &len,
+						VHOST_ACCESS_RO) < 0))
+			return -1;
+
+		len = RTE_MIN(len, size);
+		update_shadow_used_ring_packed(vq, buf_id, len, desc_count);
+		size -= len;
+
+		avail_idx += desc_count;
+		if (avail_idx >= vq->size)
+			avail_idx -= vq->size;
+
+		*nr_descs += desc_count;
+		tries++;
+		*num_buffers += 1;
+
+		/*
+		 * If we have tried all available ring items and still
+		 * can't get enough buffers, something abnormal
+		 * happened.
+		 */
+		if (unlikely(tries > max_tries))
+			return -1;
+	}
+
+	*nr_vec = vec_idx;
+
+	return 0;
+}
+
 static __rte_always_inline int
 copy_mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			    struct rte_mbuf *m, struct buf_vector *buf_vec,
@@ -771,6 +830,59 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	return pkt_idx;
 }
 
+static __rte_always_inline uint32_t
+virtio_dev_rx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
+	struct rte_mbuf **pkts, uint32_t count)
+{
+	uint32_t pkt_idx = 0;
+	uint16_t num_buffers;
+	struct buf_vector buf_vec[BUF_VECTOR_MAX];
+
+	for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
+		uint32_t pkt_len = pkts[pkt_idx]->pkt_len + dev->vhost_hlen;
+		uint16_t nr_vec = 0;
+		uint16_t nr_descs = 0;
+
+		if (unlikely(reserve_avail_buf_packed(dev, vq,
+						pkt_len, buf_vec, &nr_vec,
+						&num_buffers, &nr_descs) < 0)) {
+			VHOST_LOG_DEBUG(VHOST_DATA,
+				"(%d) failed to get enough desc from vring\n",
+				dev->vid);
+			vq->shadow_used_idx -= num_buffers;
+			break;
+		}
+
+		rte_prefetch0((void *)(uintptr_t)buf_vec[0].buf_addr);
+
+		VHOST_LOG_DEBUG(VHOST_DATA, "(%d) current index %d | end index %d\n",
+			dev->vid, vq->last_avail_idx,
+			vq->last_avail_idx + num_buffers);
+
+		if (copy_mbuf_to_desc(dev, vq, pkts[pkt_idx],
+						buf_vec, nr_vec,
+						num_buffers) < 0) {
+			vq->shadow_used_idx -= num_buffers;
+			break;
+		}
+
+		vq->last_avail_idx += nr_descs;
+		if (vq->last_avail_idx >= vq->size) {
+			vq->last_avail_idx -= vq->size;
+			vq->avail_wrap_counter ^= 1;
+		}
+	}
+
+	do_data_copy_enqueue(dev, vq);
+
+	if (likely(vq->shadow_used_idx)) {
+		flush_shadow_used_ring_packed(dev, vq);
+		vhost_vring_call(dev, vq);
+	}
+
+	return pkt_idx;
+}
+
 static __rte_always_inline uint32_t
 virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 	struct rte_mbuf **pkts, uint32_t count)
@@ -802,7 +914,10 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 	if (count == 0)
 		goto out;
 
-	count = virtio_dev_rx_split(dev, vq, pkts, count);
+	if (vq_is_packed(dev))
+		count = virtio_dev_rx_packed(dev, vq, pkts, count);
+	else
+		count = virtio_dev_rx_split(dev, vq, pkts, count);
 
 out:
 	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
-- 
2.14.4

^ permalink raw reply related	[flat|nested] 34+ messages in thread

* [PATCH v6 13/15] vhost: add Tx support for packed ring
  2018-07-02  8:16 [PATCH v6 00/15] Vhost: add support to packed ring layout Maxime Coquelin
                   ` (11 preceding siblings ...)
  2018-07-02  8:16 ` [PATCH v6 12/15] vhost: add Rx " Maxime Coquelin
@ 2018-07-02  8:16 ` Maxime Coquelin
  2018-07-04  5:45   ` Tiwei Bie
  2018-07-02  8:16 ` [PATCH v6 14/15] vhost: add notification " Maxime Coquelin
  2018-07-02  8:16 ` [PATCH v6 15/15] vhost: advertize packed ring layout support Maxime Coquelin
  14 siblings, 1 reply; 34+ messages in thread
From: Maxime Coquelin @ 2018-07-02  8:16 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/vhost.h      |   1 +
 lib/librte_vhost/virtio_net.c | 121 +++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 121 insertions(+), 1 deletion(-)

diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index a7320469a..6ea8fb896 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -56,6 +56,7 @@ struct buf_vector {
 struct zcopy_mbuf {
 	struct rte_mbuf *mbuf;
 	uint32_t desc_idx;
+	uint16_t desc_count;
 	uint16_t in_use;
 
 	TAILQ_ENTRY(zcopy_mbuf) next;
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 2e286e228..03dd38235 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -1445,6 +1445,122 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	return i;
 }
 
+static __rte_always_inline uint16_t
+virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
+	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
+{
+	uint16_t i;
+
+	rte_prefetch0(&vq->desc_packed[vq->last_avail_idx]);
+
+	if (unlikely(dev->dequeue_zero_copy)) {
+		struct zcopy_mbuf *zmbuf, *next;
+		int nr_updated = 0;
+
+		for (zmbuf = TAILQ_FIRST(&vq->zmbuf_list);
+		     zmbuf != NULL; zmbuf = next) {
+			next = TAILQ_NEXT(zmbuf, next);
+
+			if (mbuf_is_consumed(zmbuf->mbuf)) {
+				update_shadow_used_ring_packed(vq,
+						zmbuf->desc_idx,
+						0,
+						zmbuf->desc_count);
+				nr_updated += 1;
+
+				TAILQ_REMOVE(&vq->zmbuf_list, zmbuf, next);
+				restore_mbuf(zmbuf->mbuf);
+				rte_pktmbuf_free(zmbuf->mbuf);
+				put_zmbuf(zmbuf);
+				vq->nr_zmbuf -= 1;
+			}
+		}
+
+		flush_shadow_used_ring_packed(dev, vq);
+		vhost_vring_call(dev, vq);
+	}
+
+	VHOST_LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__);
+
+	count = RTE_MIN(count, MAX_PKT_BURST);
+	VHOST_LOG_DEBUG(VHOST_DATA, "(%d) about to dequeue %u buffers\n",
+			dev->vid, count);
+
+	for (i = 0; i < count; i++) {
+		struct buf_vector buf_vec[BUF_VECTOR_MAX];
+		uint16_t buf_id, dummy_len;
+		uint16_t desc_count, nr_vec = 0;
+		int err;
+
+		if (unlikely(fill_vec_buf_packed(dev, vq,
+						vq->last_avail_idx, &desc_count,
+						buf_vec, &nr_vec,
+						&buf_id, &dummy_len,
+						VHOST_ACCESS_RW) < 0))
+			break;
+
+		if (likely(dev->dequeue_zero_copy == 0))
+			update_shadow_used_ring_packed(vq, buf_id, 0,
+					desc_count);
+
+		rte_prefetch0((void *)(uintptr_t)buf_vec[0].buf_addr);
+
+		pkts[i] = rte_pktmbuf_alloc(mbuf_pool);
+		if (unlikely(pkts[i] == NULL)) {
+			RTE_LOG(ERR, VHOST_DATA,
+				"Failed to allocate memory for mbuf.\n");
+			break;
+		}
+
+		err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts[i],
+				mbuf_pool);
+		if (unlikely(err)) {
+			rte_pktmbuf_free(pkts[i]);
+			break;
+		}
+
+		if (unlikely(dev->dequeue_zero_copy)) {
+			struct zcopy_mbuf *zmbuf;
+
+			zmbuf = get_zmbuf(vq);
+			if (!zmbuf) {
+				rte_pktmbuf_free(pkts[i]);
+				break;
+			}
+			zmbuf->mbuf = pkts[i];
+			zmbuf->desc_idx = buf_id;
+			zmbuf->desc_count = desc_count;
+
+			/*
+			 * Pin the mbuf with an extra reference; we check
+			 * later whether it has been freed by the
+			 * application (i.e. we are its last user), in
+			 * which case the used ring can be updated safely.
+			 */
+			rte_mbuf_refcnt_update(pkts[i], 1);
+
+			vq->nr_zmbuf += 1;
+			TAILQ_INSERT_TAIL(&vq->zmbuf_list, zmbuf, next);
+		}
+
+		vq->last_avail_idx += desc_count;
+		if (vq->last_avail_idx >= vq->size) {
+			vq->last_avail_idx -= vq->size;
+			vq->avail_wrap_counter ^= 1;
+		}
+	}
+
+	if (likely(dev->dequeue_zero_copy == 0)) {
+		do_data_copy_dequeue(vq);
+		if (unlikely(i < count))
+			vq->shadow_used_idx = i;
+		flush_shadow_used_ring_packed(dev, vq);
+		vhost_vring_call(dev, vq);
+	}
+
+	return i;
+}
+
 uint16_t
 rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
@@ -1515,7 +1631,10 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 		count -= 1;
 	}
 
-	count = virtio_dev_tx_split(dev, vq, mbuf_pool, pkts, count);
+	if (vq_is_packed(dev))
+		count = virtio_dev_tx_packed(dev, vq, mbuf_pool, pkts, count);
+	else
+		count = virtio_dev_tx_split(dev, vq, mbuf_pool, pkts, count);
 
 out:
 	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
-- 
2.14.4

^ permalink raw reply related	[flat|nested] 34+ messages in thread
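
Both the Rx and Tx packed paths above advance `last_avail_idx` by the consumed descriptor count and toggle `avail_wrap_counter` when crossing the ring end. That recurring idiom, as a standalone sketch (the `advance()` helper is illustrative, the patches open-code it):

```c
#include <assert.h>
#include <stdint.h>

/*
 * Advance a packed-ring index by count entries, wrapping at size and
 * flipping the wrap counter each time the end of the ring is crossed.
 * Assumes count <= size, which holds since a descriptor chain cannot
 * exceed the ring size.
 */
static void advance(uint16_t *idx, int *wrap, uint16_t size, uint16_t count)
{
	*idx += count;
	if (*idx >= size) {
		*idx -= size;
		*wrap ^= 1;
	}
}
```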

* [PATCH v6 14/15] vhost: add notification for packed ring
  2018-07-02  8:16 [PATCH v6 00/15] Vhost: add support to packed ring layout Maxime Coquelin
                   ` (12 preceding siblings ...)
  2018-07-02  8:16 ` [PATCH v6 13/15] vhost: add Tx " Maxime Coquelin
@ 2018-07-02  8:16 ` Maxime Coquelin
  2018-07-03  5:57   ` Jason Wang
                     ` (2 more replies)
  2018-07-02  8:16 ` [PATCH v6 15/15] vhost: advertize packed ring layout support Maxime Coquelin
  14 siblings, 3 replies; 34+ messages in thread
From: Maxime Coquelin @ 2018-07-02  8:16 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/vhost.c         | 73 ++++++++++++++++++++++++++++++++++++----
 lib/librte_vhost/vhost.h         | 71 ++++++++++++++++++++++++++++++++++----
 lib/librte_vhost/vhost_user.c    | 24 +++++++++++++
 lib/librte_vhost/virtio-packed.h | 11 ++++++
 lib/librte_vhost/virtio_net.c    | 12 +++----
 5 files changed, 172 insertions(+), 19 deletions(-)

diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
index 8538302c9..78f20c402 100644
--- a/lib/librte_vhost/vhost.c
+++ b/lib/librte_vhost/vhost.c
@@ -171,6 +171,24 @@ vring_translate_packed(struct virtio_net *dev, struct vhost_virtqueue *vq)
 	if (!vq->desc_packed || size != req_size)
 		return -1;
 
+	req_size = sizeof(struct vring_packed_desc_event);
+	size = req_size;
+	vq->driver_event = (struct vring_packed_desc_event *)(uintptr_t)
+		vhost_iova_to_vva(dev,
+					vq,	vq->ring_addrs.avail_user_addr,
+					&size, VHOST_ACCESS_RW);
+	if (!vq->driver_event || size != req_size)
+		return -1;
+
+	req_size = sizeof(struct vring_packed_desc_event);
+	size = req_size;
+	vq->device_event = (struct vring_packed_desc_event *)(uintptr_t)
+		vhost_iova_to_vva(dev,
+					vq, vq->ring_addrs.used_user_addr,
+					&size, VHOST_ACCESS_RW);
+	if (!vq->device_event || size != req_size)
+		return -1;
+
 	return 0;
 }
 
@@ -595,7 +613,11 @@ rte_vhost_vring_call(int vid, uint16_t vring_idx)
 	if (!vq)
 		return -1;
 
-	vhost_vring_call(dev, vq);
+	if (vq_is_packed(dev))
+		vhost_vring_call_packed(dev, vq);
+	else
+		vhost_vring_call_split(dev, vq);
+
 	return 0;
 }
 
@@ -616,20 +638,59 @@ rte_vhost_avail_entries(int vid, uint16_t queue_id)
 	return *(volatile uint16_t *)&vq->avail->idx - vq->last_used_idx;
 }
 
+static inline int
+vhost_enable_notify_split(struct vhost_virtqueue *vq, int enable)
+{
+	if (enable)
+		vq->used->flags &= ~VRING_USED_F_NO_NOTIFY;
+	else
+		vq->used->flags |= VRING_USED_F_NO_NOTIFY;
+
+	return 0;
+}
+
+static inline int
+vhost_enable_notify_packed(struct virtio_net *dev,
+		struct vhost_virtqueue *vq, int enable)
+{
+	uint16_t flags;
+
+	if (!enable) {
+		vq->device_event->desc_event_flags = RING_EVENT_FLAGS_DISABLE;
+		return 0;
+	}
+
+	flags = RING_EVENT_FLAGS_ENABLE;
+	if (dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX)) {
+		flags = RING_EVENT_FLAGS_DESC;
+		vq->device_event->desc_event_off_wrap = vq->last_avail_idx |
+			vq->avail_wrap_counter << 15;
+	}
+
+	rte_smp_wmb();
+
+	vq->device_event->desc_event_flags = flags;
+
+	rte_smp_wmb();
+
+	return 0;
+}
+
 int
 rte_vhost_enable_guest_notification(int vid, uint16_t queue_id, int enable)
 {
 	struct virtio_net *dev = get_device(vid);
+	struct vhost_virtqueue *vq;
 
 	if (!dev)
 		return -1;
 
-	if (enable)
-		dev->virtqueue[queue_id]->used->flags &=
-			~VRING_USED_F_NO_NOTIFY;
+	vq = dev->virtqueue[queue_id];
+
+	if (vq_is_packed(dev))
+		return vhost_enable_notify_packed(dev, vq, enable);
 	else
-		dev->virtqueue[queue_id]->used->flags |= VRING_USED_F_NO_NOTIFY;
-	return 0;
+		return vhost_enable_notify_split(vq, enable);
 }
 
 void
diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 6ea8fb896..728fd2f6b 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -21,6 +21,7 @@
 
 #include "rte_vhost.h"
 #include "rte_vdpa.h"
+#include "virtio-packed.h"
 
 /* Used to indicate that the device is running on a data core */
 #define VIRTIO_DEV_RUNNING 1
@@ -95,8 +96,14 @@ struct vhost_virtqueue {
 		struct vring_desc	*desc;
 		struct vring_desc_packed   *desc_packed;
 	};
-	struct vring_avail	*avail;
-	struct vring_used	*used;
+	union {
+		struct vring_avail	*avail;
+		struct vring_packed_desc_event *driver_event;
+	};
+	union {
+		struct vring_used	*used;
+		struct vring_packed_desc_event *device_event;
+	};
 	uint32_t		size;
 
 	uint16_t		last_avail_idx;
@@ -613,7 +620,7 @@ vhost_need_event(uint16_t event_idx, uint16_t new_idx, uint16_t old)
 }
 
 static __rte_always_inline void
-vhost_vring_call(struct virtio_net *dev, struct vhost_virtqueue *vq)
+vhost_vring_call_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
 {
 	/* Flush used->idx update before we read avail->flags. */
 	rte_mb();
@@ -624,11 +631,11 @@ vhost_vring_call(struct virtio_net *dev, struct vhost_virtqueue *vq)
 		uint16_t new = vq->last_used_idx;
 
 		VHOST_LOG_DEBUG(VHOST_DATA, "%s: used_event_idx=%d, old=%d, new=%d\n",
-			__func__,
-			vhost_used_event(vq),
-			old, new);
+				__func__,
+				vhost_used_event(vq),
+				old, new);
 		if (vhost_need_event(vhost_used_event(vq), new, old)
-			&& (vq->callfd >= 0)) {
+				&& (vq->callfd >= 0)) {
 			vq->signalled_used = vq->last_used_idx;
 			eventfd_write(vq->callfd, (eventfd_t) 1);
 		}
@@ -640,4 +647,54 @@ vhost_vring_call(struct virtio_net *dev, struct vhost_virtqueue *vq)
 	}
 }
 
+static __rte_always_inline void
+vhost_vring_call_packed(struct virtio_net *dev, struct vhost_virtqueue *vq)
+{
+	uint16_t old, new, off, off_wrap, wrap;
+	bool kick = false;
+
+
+	/*  Flush used desc update. */
+	rte_smp_mb();
+
+	if (!(dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX))) {
+		if (vq->driver_event->desc_event_flags !=
+				RING_EVENT_FLAGS_DISABLE)
+			kick = true;
+		goto kick;
+	}
+
+	old = vq->signalled_used;
+	new = vq->last_used_idx;
+	vq->signalled_used = new;
+
+	if (vq->driver_event->desc_event_flags != RING_EVENT_FLAGS_DESC) {
+		if (vq->driver_event->desc_event_flags !=
+				RING_EVENT_FLAGS_DISABLE)
+			kick = true;
+		goto kick;
+	}
+
+	rte_smp_rmb();
+
+	off_wrap = vq->driver_event->desc_event_off_wrap;
+	off = off_wrap & ~(1 << 15);
+	wrap = vq->used_wrap_counter;
+
+	if (new < old) {
+		new += vq->size;
+		wrap ^= 1;
+	}
+
+	if (wrap != off_wrap >> 15)
+		off += vq->size;
+
+	if (vhost_need_event(off, new, old))
+		kick = true;
+
+kick:
+	if (kick)
+		eventfd_write(vq->callfd, (eventfd_t)1);
+}
+
 #endif /* _VHOST_NET_CDEV_H_ */
diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
index b2b57de57..bda515bdb 100644
--- a/lib/librte_vhost/vhost_user.c
+++ b/lib/librte_vhost/vhost_user.c
@@ -523,6 +523,30 @@ translate_ring_addresses(struct virtio_net *dev, int vq_index)
 		vq = dev->virtqueue[vq_index];
 		addr = &vq->ring_addrs;
 
+		len = sizeof(struct vring_packed_desc_event);
+		vq->driver_event = (struct vring_packed_desc_event *)
+					(uintptr_t)ring_addr_to_vva(dev,
+					vq, addr->avail_user_addr, &len);
+		if (vq->driver_event == 0 ||
+				len != sizeof(struct vring_packed_desc_event)) {
+			RTE_LOG(DEBUG, VHOST_CONFIG,
+				"(%d) failed to find driver area address.\n",
+				dev->vid);
+			return dev;
+		}
+
+		len = sizeof(struct vring_packed_desc_event);
+		vq->device_event = (struct vring_packed_desc_event *)
+					(uintptr_t)ring_addr_to_vva(dev,
+					vq, addr->used_user_addr, &len);
+		if (vq->device_event == 0 ||
+				len != sizeof(struct vring_packed_desc_event)) {
+			RTE_LOG(DEBUG, VHOST_CONFIG,
+				"(%d) failed to find device area address.\n",
+				dev->vid);
+			return dev;
+		}
+
 		return dev;
 	}
 
diff --git a/lib/librte_vhost/virtio-packed.h b/lib/librte_vhost/virtio-packed.h
index d386cb6df..ce3b28313 100644
--- a/lib/librte_vhost/virtio-packed.h
+++ b/lib/librte_vhost/virtio-packed.h
@@ -19,6 +19,17 @@ struct vring_desc_packed {
 	uint16_t flags;
 };
 
+#define RING_EVENT_FLAGS_ENABLE 0x0
+#define RING_EVENT_FLAGS_DISABLE 0x1
+#define RING_EVENT_FLAGS_DESC 0x2
+#define RING_EVENT_FLAGS_MASK 0xFFFC
+#define RING_EVENT_WRAP_MASK 0x8000
+#define RING_EVENT_OFF_MASK 0x7FFF
+
+struct vring_packed_desc_event {
+	uint16_t desc_event_off_wrap;
+	uint16_t desc_event_flags;
+};
 
 static inline bool
 desc_is_avail(struct vring_desc_packed *desc, bool wrap_counter)
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 03dd38235..11c10aaf8 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -824,7 +824,7 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 
 	if (likely(vq->shadow_used_idx)) {
 		flush_shadow_used_ring_split(dev, vq);
-		vhost_vring_call(dev, vq);
+		vhost_vring_call_split(dev, vq);
 	}
 
 	return pkt_idx;
@@ -877,7 +877,7 @@ virtio_dev_rx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 
 	if (likely(vq->shadow_used_idx)) {
 		flush_shadow_used_ring_packed(dev, vq);
-		vhost_vring_call(dev, vq);
+		vhost_vring_call_packed(dev, vq);
 	}
 
 	return pkt_idx;
@@ -1360,7 +1360,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		}
 
 		flush_shadow_used_ring_split(dev, vq);
-		vhost_vring_call(dev, vq);
+		vhost_vring_call_split(dev, vq);
 	}
 
 	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
@@ -1439,7 +1439,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		if (unlikely(i < count))
 			vq->shadow_used_idx = i;
 		flush_shadow_used_ring_split(dev, vq);
-		vhost_vring_call(dev, vq);
+		vhost_vring_call_split(dev, vq);
 	}
 
 	return i;
@@ -1477,7 +1477,7 @@ virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		}
 
 		flush_shadow_used_ring_packed(dev, vq);
-		vhost_vring_call(dev, vq);
+		vhost_vring_call_packed(dev, vq);
 	}
 
 	VHOST_LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__);
@@ -1555,7 +1555,7 @@ virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		if (unlikely(i < count))
 			vq->shadow_used_idx = i;
 		flush_shadow_used_ring_packed(dev, vq);
-		vhost_vring_call(dev, vq);
+		vhost_vring_call_packed(dev, vq);
 	}
 
 	return i;
-- 
2.14.4


* [PATCH v6 15/15] vhost: advertize packed ring layout support
  2018-07-02  8:16 [PATCH v6 00/15] Vhost: add support to packed ring layout Maxime Coquelin
                   ` (13 preceding siblings ...)
  2018-07-02  8:16 ` [PATCH v6 14/15] vhost: add notification " Maxime Coquelin
@ 2018-07-02  8:16 ` Maxime Coquelin
  14 siblings, 0 replies; 34+ messages in thread
From: Maxime Coquelin @ 2018-07-02  8:16 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/vhost.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 728fd2f6b..ef2558795 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -241,7 +241,8 @@ struct vhost_msg {
 				(1ULL << VIRTIO_RING_F_INDIRECT_DESC) | \
 				(1ULL << VIRTIO_RING_F_EVENT_IDX) | \
 				(1ULL << VIRTIO_NET_F_MTU) | \
-				(1ULL << VIRTIO_F_IOMMU_PLATFORM))
+				(1ULL << VIRTIO_F_IOMMU_PLATFORM) | \
+				(1ULL << VIRTIO_F_RING_PACKED))
 
 
 struct guest_page {
-- 
2.14.4


* Re: [PATCH v6 14/15] vhost: add notification for packed ring
  2018-07-02  8:16 ` [PATCH v6 14/15] vhost: add notification " Maxime Coquelin
@ 2018-07-03  5:57   ` Jason Wang
  2018-07-03  6:43     ` Maxime Coquelin
  2018-07-03  6:06   ` Jason Wang
  2018-07-04  6:25   ` Tiwei Bie
  2 siblings, 1 reply; 34+ messages in thread
From: Jason Wang @ 2018-07-03  5:57 UTC (permalink / raw)
  To: Maxime Coquelin, tiwei.bie, zhihong.wang, jfreimann, dev; +Cc: mst, wexu



On 2018-07-02 16:16, Maxime Coquelin wrote:
> +static inline int
> +vhost_enable_notify_split(struct vhost_virtqueue *vq, int enable)
> +{
> +	if (enable)
> +		vq->used->flags &= ~VRING_USED_F_NO_NOTIFY;
> +	else
> +		vq->used->flags |= VRING_USED_F_NO_NOTIFY;
> +
> +	return 0;
> +}
> +
> +static inline int
> +vhost_enable_notify_packed(struct virtio_net *dev,
> +		struct vhost_virtqueue *vq, int enable)
> +{
> +	uint16_t flags;
> +
> +	if (!enable) {
> +		vq->device_event->desc_event_flags = RING_EVENT_FLAGS_DISABLE;
> +		return 0;
> +	}
> +
> +	flags = RING_EVENT_FLAGS_ENABLE;
> +	if (dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX)) {
> +		flags = RING_EVENT_FLAGS_DESC;
> +		vq->device_event->desc_event_off_wrap = vq->last_avail_idx |
> +			vq->avail_wrap_counter << 15;
> +	}
> +
> +	rte_smp_wmb();
> +
> +	vq->device_event->desc_event_flags = flags;
> +
> +	rte_smp_wmb();
> +

We don't do this for the split version. Any specific reason for doing it here?

Btw, it looks like we don't care about the return value, so how about returning void instead?

Thanks

> +	return 0;
> +}
> +


* Re: [PATCH v6 14/15] vhost: add notification for packed ring
  2018-07-02  8:16 ` [PATCH v6 14/15] vhost: add notification " Maxime Coquelin
  2018-07-03  5:57   ` Jason Wang
@ 2018-07-03  6:06   ` Jason Wang
  2018-07-04 16:02     ` Maxime Coquelin
  2018-07-04  6:25   ` Tiwei Bie
  2 siblings, 1 reply; 34+ messages in thread
From: Jason Wang @ 2018-07-03  6:06 UTC (permalink / raw)
  To: Maxime Coquelin, tiwei.bie, zhihong.wang, jfreimann, dev; +Cc: mst, wexu



On 2018-07-02 16:16, Maxime Coquelin wrote:
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
>   lib/librte_vhost/vhost.c         | 73 ++++++++++++++++++++++++++++++++++++----
>   lib/librte_vhost/vhost.h         | 71 ++++++++++++++++++++++++++++++++++----
>   lib/librte_vhost/vhost_user.c    | 24 +++++++++++++
>   lib/librte_vhost/virtio-packed.h | 11 ++++++
>   lib/librte_vhost/virtio_net.c    | 12 +++----
>   5 files changed, 172 insertions(+), 19 deletions(-)
>
> diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
> index 8538302c9..78f20c402 100644
> --- a/lib/librte_vhost/vhost.c
> +++ b/lib/librte_vhost/vhost.c
> @@ -171,6 +171,24 @@ vring_translate_packed(struct virtio_net *dev, struct vhost_virtqueue *vq)
>   	if (!vq->desc_packed || size != req_size)
>   		return -1;
>   
> +	req_size = sizeof(struct vring_packed_desc_event);
> +	size = req_size;
> +	vq->driver_event = (struct vring_packed_desc_event *)(uintptr_t)
> +		vhost_iova_to_vva(dev,
> +					vq,	vq->ring_addrs.avail_user_addr,
> +					&size, VHOST_ACCESS_RW);
> +	if (!vq->driver_event || size != req_size)
> +		return -1;
> +
> +	req_size = sizeof(struct vring_packed_desc_event);
> +	size = req_size;
> +	vq->device_event = (struct vring_packed_desc_event *)(uintptr_t)
> +		vhost_iova_to_vva(dev,
> +					vq, vq->ring_addrs.used_user_addr,
> +					&size, VHOST_ACCESS_RW);
> +	if (!vq->device_event || size != req_size)
> +		return -1;
> +
>   	return 0;
>   }
>   
> @@ -595,7 +613,11 @@ rte_vhost_vring_call(int vid, uint16_t vring_idx)
>   	if (!vq)
>   		return -1;
>   
> -	vhost_vring_call(dev, vq);
> +	if (vq_is_packed(dev))
> +		vhost_vring_call_packed(dev, vq);
> +	else
> +		vhost_vring_call_split(dev, vq);
> +
>   	return 0;
>   }
>   
> @@ -616,20 +638,59 @@ rte_vhost_avail_entries(int vid, uint16_t queue_id)
>   	return *(volatile uint16_t *)&vq->avail->idx - vq->last_used_idx;
>   }
>   
> +static inline int
> +vhost_enable_notify_split(struct vhost_virtqueue *vq, int enable)
> +{
> +	if (enable)
> +		vq->used->flags &= ~VRING_USED_F_NO_NOTIFY;
> +	else
> +		vq->used->flags |= VRING_USED_F_NO_NOTIFY;
> +
> +	return 0;
> +}
> +
> +static inline int
> +vhost_enable_notify_packed(struct virtio_net *dev,
> +		struct vhost_virtqueue *vq, int enable)
> +{
> +	uint16_t flags;
> +
> +	if (!enable) {
> +		vq->device_event->desc_event_flags = RING_EVENT_FLAGS_DISABLE;
> +		return 0;
> +	}
> +
> +	flags = RING_EVENT_FLAGS_ENABLE;
> +	if (dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX)) {
> +		flags = RING_EVENT_FLAGS_DESC;
> +		vq->device_event->desc_event_off_wrap = vq->last_avail_idx |
> +			vq->avail_wrap_counter << 15;
> +	}
> +
> +	rte_smp_wmb();
> +
> +	vq->device_event->desc_event_flags = flags;
> +
> +	rte_smp_wmb();
> +
> +	return 0;
> +}
> +
>   int
>   rte_vhost_enable_guest_notification(int vid, uint16_t queue_id, int enable)
>   {
>   	struct virtio_net *dev = get_device(vid);
> +	struct vhost_virtqueue *vq;
>   
>   	if (!dev)
>   		return -1;
>   
> -	if (enable)
> -		dev->virtqueue[queue_id]->used->flags &=
> -			~VRING_USED_F_NO_NOTIFY;
> +	vq = dev->virtqueue[queue_id];
> +
> +	if (vq_is_packed(dev))
> +		return vhost_enable_notify_packed(dev, vq, enable);
>   	else
> -		dev->virtqueue[queue_id]->used->flags |= VRING_USED_F_NO_NOTIFY;
> -	return 0;
> +		return vhost_enable_notify_split(vq, enable);
>   }
>   
>   void
> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> index 6ea8fb896..728fd2f6b 100644
> --- a/lib/librte_vhost/vhost.h
> +++ b/lib/librte_vhost/vhost.h
> @@ -21,6 +21,7 @@
>   
>   #include "rte_vhost.h"
>   #include "rte_vdpa.h"
> +#include "virtio-packed.h"
>   
>   /* Used to indicate that the device is running on a data core */
>   #define VIRTIO_DEV_RUNNING 1
> @@ -95,8 +96,14 @@ struct vhost_virtqueue {
>   		struct vring_desc	*desc;
>   		struct vring_desc_packed   *desc_packed;
>   	};
> -	struct vring_avail	*avail;
> -	struct vring_used	*used;
> +	union {
> +		struct vring_avail	*avail;
> +		struct vring_packed_desc_event *driver_event;
> +	};
> +	union {
> +		struct vring_used	*used;
> +		struct vring_packed_desc_event *device_event;
> +	};
>   	uint32_t		size;
>   
>   	uint16_t		last_avail_idx;
> @@ -613,7 +620,7 @@ vhost_need_event(uint16_t event_idx, uint16_t new_idx, uint16_t old)
>   }
>   
>   static __rte_always_inline void
> -vhost_vring_call(struct virtio_net *dev, struct vhost_virtqueue *vq)
> +vhost_vring_call_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
>   {
>   	/* Flush used->idx update before we read avail->flags. */
>   	rte_mb();
> @@ -624,11 +631,11 @@ vhost_vring_call(struct virtio_net *dev, struct vhost_virtqueue *vq)
>   		uint16_t new = vq->last_used_idx;
>   
>   		VHOST_LOG_DEBUG(VHOST_DATA, "%s: used_event_idx=%d, old=%d, new=%d\n",
> -			__func__,
> -			vhost_used_event(vq),
> -			old, new);
> +				__func__,
> +				vhost_used_event(vq),
> +				old, new);
>   		if (vhost_need_event(vhost_used_event(vq), new, old)
> -			&& (vq->callfd >= 0)) {
> +				&& (vq->callfd >= 0)) {
>   			vq->signalled_used = vq->last_used_idx;
>   			eventfd_write(vq->callfd, (eventfd_t) 1);
>   		}

Looks like style fixes; move those to another patch?

> @@ -640,4 +647,54 @@ vhost_vring_call(struct virtio_net *dev, struct vhost_virtqueue *vq)
>   	}
>   }
>   
> +static __rte_always_inline void
> +vhost_vring_call_packed(struct virtio_net *dev, struct vhost_virtqueue *vq)
> +{
> +	uint16_t old, new, off, off_wrap, wrap;
> +	bool kick = false;
> +
> +
> +	/*  Flush used desc update. */
> +	rte_smp_mb();
> +
> +	if (!(dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX))) {
> +		if (vq->driver_event->desc_event_flags !=
> +				RING_EVENT_FLAGS_DISABLE)
> +			kick = true;
> +		goto kick;
> +	}
> +
> +	old = vq->signalled_used;
> +	new = vq->last_used_idx;
> +	vq->signalled_used = new;
> +
> +	if (vq->driver_event->desc_event_flags != RING_EVENT_FLAGS_DESC) {
> +		if (vq->driver_event->desc_event_flags !=
> +				RING_EVENT_FLAGS_DISABLE)
> +			kick = true;
> +		goto kick;
> +	}
> +
> +	rte_smp_rmb();
> +
> +	off_wrap = vq->driver_event->desc_event_off_wrap;
> +	off = off_wrap & ~(1 << 15);
> +	wrap = vq->used_wrap_counter;
> +
> +	if (new < old) {
> +		new += vq->size;
> +		wrap ^= 1;
> +	}
> +
> +	if (wrap != off_wrap >> 15)
> +		off += vq->size;
> +

Just FYI. Maybe we can switch to a more compact version like the one Tiwei used:

...
         wrap_counter = off_wrap >> 15;
         event_idx = off_wrap & ~(1<<15);
         if (wrap_counter != vq->avail_wrap_counter)
                 event_idx -= vq->vring_packed.num;

         if (flags == VRING_EVENT_F_DESC)
                 needs_kick = vring_need_event(event_idx, new, old);
         else
                 needs_kick = (flags != VRING_EVENT_F_DISABLE);

(I've switched to this version in vhost).

Thanks

> +	if (vhost_need_event(off, new, old))
> +		kick = true;
> +
> +kick:
> +	if (kick)
> +		eventfd_write(vq->callfd, (eventfd_t)1);
> +}
> +
>   #endif /* _VHOST_NET_CDEV_H_ */
> diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
> index b2b57de57..bda515bdb 100644
> --- a/lib/librte_vhost/vhost_user.c
> +++ b/lib/librte_vhost/vhost_user.c
> @@ -523,6 +523,30 @@ translate_ring_addresses(struct virtio_net *dev, int vq_index)
>   		vq = dev->virtqueue[vq_index];
>   		addr = &vq->ring_addrs;
>   
> +		len = sizeof(struct vring_packed_desc_event);
> +		vq->driver_event = (struct vring_packed_desc_event *)
> +					(uintptr_t)ring_addr_to_vva(dev,
> +					vq, addr->avail_user_addr, &len);
> +		if (vq->driver_event == 0 ||
> +				len != sizeof(struct vring_packed_desc_event)) {
> +			RTE_LOG(DEBUG, VHOST_CONFIG,
> +				"(%d) failed to find driver area address.\n",
> +				dev->vid);
> +			return dev;
> +		}
> +
> +		len = sizeof(struct vring_packed_desc_event);
> +		vq->device_event = (struct vring_packed_desc_event *)
> +					(uintptr_t)ring_addr_to_vva(dev,
> +					vq, addr->used_user_addr, &len);
> +		if (vq->device_event == 0 ||
> +				len != sizeof(struct vring_packed_desc_event)) {
> +			RTE_LOG(DEBUG, VHOST_CONFIG,
> +				"(%d) failed to find device area address.\n",
> +				dev->vid);
> +			return dev;
> +		}
> +
>   		return dev;
>   	}
>   
> diff --git a/lib/librte_vhost/virtio-packed.h b/lib/librte_vhost/virtio-packed.h
> index d386cb6df..ce3b28313 100644
> --- a/lib/librte_vhost/virtio-packed.h
> +++ b/lib/librte_vhost/virtio-packed.h
> @@ -19,6 +19,17 @@ struct vring_desc_packed {
>   	uint16_t flags;
>   };
>   
> +#define RING_EVENT_FLAGS_ENABLE 0x0
> +#define RING_EVENT_FLAGS_DISABLE 0x1
> +#define RING_EVENT_FLAGS_DESC 0x2
> +#define RING_EVENT_FLAGS_MASK 0xFFFC
> +#define RING_EVENT_WRAP_MASK 0x8000
> +#define RING_EVENT_OFF_MASK 0x7FFF
> +
> +struct vring_packed_desc_event {
> +	uint16_t desc_event_off_wrap;
> +	uint16_t desc_event_flags;
> +};
>   
>   static inline bool
>   desc_is_avail(struct vring_desc_packed *desc, bool wrap_counter)
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index 03dd38235..11c10aaf8 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -824,7 +824,7 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>   
>   	if (likely(vq->shadow_used_idx)) {
>   		flush_shadow_used_ring_split(dev, vq);
> -		vhost_vring_call(dev, vq);
> +		vhost_vring_call_split(dev, vq);
>   	}
>   
>   	return pkt_idx;
> @@ -877,7 +877,7 @@ virtio_dev_rx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
>   
>   	if (likely(vq->shadow_used_idx)) {
>   		flush_shadow_used_ring_packed(dev, vq);
> -		vhost_vring_call(dev, vq);
> +		vhost_vring_call_packed(dev, vq);
>   	}
>   
>   	return pkt_idx;
> @@ -1360,7 +1360,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>   		}
>   
>   		flush_shadow_used_ring_split(dev, vq);
> -		vhost_vring_call(dev, vq);
> +		vhost_vring_call_split(dev, vq);
>   	}
>   
>   	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
> @@ -1439,7 +1439,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>   		if (unlikely(i < count))
>   			vq->shadow_used_idx = i;
>   		flush_shadow_used_ring_split(dev, vq);
> -		vhost_vring_call(dev, vq);
> +		vhost_vring_call_split(dev, vq);
>   	}
>   
>   	return i;
> @@ -1477,7 +1477,7 @@ virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
>   		}
>   
>   		flush_shadow_used_ring_packed(dev, vq);
> -		vhost_vring_call(dev, vq);
> +		vhost_vring_call_packed(dev, vq);
>   	}
>   
>   	VHOST_LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__);
> @@ -1555,7 +1555,7 @@ virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
>   		if (unlikely(i < count))
>   			vq->shadow_used_idx = i;
>   		flush_shadow_used_ring_packed(dev, vq);
> -		vhost_vring_call(dev, vq);
> +		vhost_vring_call_packed(dev, vq);
>   	}
>   
>   	return i;


* Re: [PATCH v6 14/15] vhost: add notification for packed ring
  2018-07-03  5:57   ` Jason Wang
@ 2018-07-03  6:43     ` Maxime Coquelin
  0 siblings, 0 replies; 34+ messages in thread
From: Maxime Coquelin @ 2018-07-03  6:43 UTC (permalink / raw)
  To: Jason Wang, tiwei.bie, zhihong.wang, jfreimann, dev; +Cc: mst, wexu



On 07/03/2018 07:57 AM, Jason Wang wrote:
> 
> 
On 2018-07-02 16:16, Maxime Coquelin wrote:
>> +static inline int
>> +vhost_enable_notify_split(struct vhost_virtqueue *vq, int enable)
>> +{
>> +    if (enable)
>> +        vq->used->flags &= ~VRING_USED_F_NO_NOTIFY;
>> +    else
>> +        vq->used->flags |= VRING_USED_F_NO_NOTIFY;
>> +
>> +    return 0;
>> +}
>> +
>> +static inline int
>> +vhost_enable_notify_packed(struct virtio_net *dev,
>> +        struct vhost_virtqueue *vq, int enable)
>> +{
>> +    uint16_t flags;
>> +
>> +    if (!enable) {
>> +        vq->device_event->desc_event_flags = RING_EVENT_FLAGS_DISABLE;
>> +        return 0;
>> +    }
>> +
>> +    flags = RING_EVENT_FLAGS_ENABLE;
>> +    if (dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX)) {
>> +        flags = RING_EVENT_FLAGS_DESC;
>> +        vq->device_event->desc_event_off_wrap = vq->last_avail_idx |
>> +            vq->avail_wrap_counter << 15;
>> +    }
>> +
>> +    rte_smp_wmb();
>> +
>> +    vq->device_event->desc_event_flags = flags;
>> +
>> +    rte_smp_wmb();
>> +
> 
> We don't do this for the split version. Any specific reason for doing it here?

Yeah, maybe this is not necessary.

> Btw, it looks like we don't care about the return value, so how about returning void instead?

Right, it could be void.

Thanks!
Maxime

> Thanks
> 
>> +    return 0;
>> +}
>> +
> 
> 


* Re: [PATCH v6 01/15] vhost: add virtio packed virtqueue defines
  2018-07-02  8:16 ` [PATCH v6 01/15] vhost: add virtio packed virtqueue defines Maxime Coquelin
@ 2018-07-04  5:37   ` Tiwei Bie
  2018-07-04 16:03     ` Maxime Coquelin
  0 siblings, 1 reply; 34+ messages in thread
From: Tiwei Bie @ 2018-07-04  5:37 UTC (permalink / raw)
  To: Maxime Coquelin; +Cc: zhihong.wang, jfreimann, dev, mst, jasowang, wexu

On Mon, Jul 02, 2018 at 10:16:15AM +0200, Maxime Coquelin wrote:
> From: Jens Freimann <jfreimann@redhat.com>
> 
> Signed-off-by: Jens Freimann <jfreimann@redhat.com>
> ---
>  lib/librte_vhost/vhost.h         |  3 +++
>  lib/librte_vhost/virtio-packed.h | 22 ++++++++++++++++++++++
>  2 files changed, 25 insertions(+)
>  create mode 100644 lib/librte_vhost/virtio-packed.h
> 
> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> index 3f8fa3a78..cecd5c1ee 100644
> --- a/lib/librte_vhost/vhost.h
> +++ b/lib/librte_vhost/vhost.h
> @@ -191,6 +191,9 @@ struct vhost_msg {
>  #ifndef VIRTIO_F_VERSION_1
>   #define VIRTIO_F_VERSION_1 32
>  #endif
> +#ifndef VIRTIO_F_RING_PACKED
> + #define VIRTIO_F_RING_PACKED 34
> +#endif
>  
>  /* Features supported by this builtin vhost-user net driver. */
>  #define VIRTIO_NET_SUPPORTED_FEATURES ((1ULL << VIRTIO_NET_F_MRG_RXBUF) | \
> diff --git a/lib/librte_vhost/virtio-packed.h b/lib/librte_vhost/virtio-packed.h
> new file mode 100644
> index 000000000..744b3991b
> --- /dev/null
> +++ b/lib/librte_vhost/virtio-packed.h
> @@ -0,0 +1,22 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) Red Hat Inc.
> + */
> +
> +#ifndef __VIRTIO_PACKED_H
> +#define __VIRTIO_PACKED_H
> +
> +#define VRING_DESC_F_NEXT       1
> +#define VRING_DESC_F_WRITE      2
> +#define VRING_DESC_F_INDIRECT   4
> +
> +#define VRING_DESC_F_AVAIL      (1ULL << 7)
> +#define VRING_DESC_F_USED	(1ULL << 15)

Please use space or tab consistently.

> +
> +struct vring_desc_packed {
> +	uint64_t addr;
> +	uint32_t len;
> +	uint16_t index;
> +	uint16_t flags;
> +};
> +
> +#endif /* __VIRTIO_PACKED_H */
> -- 
> 2.14.4
> 


* Re: [PATCH v6 02/15] vhost: add helpers for packed virtqueues
  2018-07-02  8:16 ` [PATCH v6 02/15] vhost: add helpers for packed virtqueues Maxime Coquelin
@ 2018-07-04  5:39   ` Tiwei Bie
  2018-07-04 16:03     ` Maxime Coquelin
  0 siblings, 1 reply; 34+ messages in thread
From: Tiwei Bie @ 2018-07-04  5:39 UTC (permalink / raw)
  To: Maxime Coquelin; +Cc: zhihong.wang, jfreimann, dev, mst, jasowang, wexu

On Mon, Jul 02, 2018 at 10:16:16AM +0200, Maxime Coquelin wrote:
> From: Jens Freimann <jfreimann@redhat.com>
> 
> Add some helper functions to check descriptor flags
> and check if a vring is of type packed.
> 
> Signed-off-by: Jens Freimann <jfreimann@redhat.com>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
>  lib/librte_vhost/vhost.h         | 6 ++++++
>  lib/librte_vhost/virtio-packed.h | 8 ++++++++
>  2 files changed, 14 insertions(+)
> 
> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> index cecd5c1ee..cabbc3ab7 100644
> --- a/lib/librte_vhost/vhost.h
> +++ b/lib/librte_vhost/vhost.h
> @@ -318,6 +318,12 @@ struct virtio_net {
>  	struct vhost_user_extern_ops extern_ops;
>  } __rte_cache_aligned;
>  
> +static __rte_always_inline bool
> +vq_is_packed(struct virtio_net *dev)
> +{
> +	return dev->features & (1ull << VIRTIO_F_RING_PACKED);
> +}
> +
>  #define VHOST_LOG_PAGE	4096
>  
>  /*
> diff --git a/lib/librte_vhost/virtio-packed.h b/lib/librte_vhost/virtio-packed.h
> index 744b3991b..d386cb6df 100644
> --- a/lib/librte_vhost/virtio-packed.h
> +++ b/lib/librte_vhost/virtio-packed.h
> @@ -19,4 +19,12 @@ struct vring_desc_packed {
>  	uint16_t flags;
>  };
>  
> +

There is no need to add above blank line.

> +static inline bool
> +desc_is_avail(struct vring_desc_packed *desc, bool wrap_counter)
> +{
> +	return wrap_counter == !!(desc->flags & VRING_DESC_F_AVAIL) &&
> +		wrap_counter != !!(desc->flags & VRING_DESC_F_USED);
> +}
> +
>  #endif /* __VIRTIO_PACKED_H */
> -- 
> 2.14.4
> 


* Re: [PATCH v6 13/15] vhost: add Tx support for packed ring
  2018-07-02  8:16 ` [PATCH v6 13/15] vhost: add Tx " Maxime Coquelin
@ 2018-07-04  5:45   ` Tiwei Bie
  2018-07-04 16:09     ` Maxime Coquelin
  0 siblings, 1 reply; 34+ messages in thread
From: Tiwei Bie @ 2018-07-04  5:45 UTC (permalink / raw)
  To: Maxime Coquelin; +Cc: zhihong.wang, jfreimann, dev, mst, jasowang, wexu

On Mon, Jul 02, 2018 at 10:16:27AM +0200, Maxime Coquelin wrote:
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
>  lib/librte_vhost/vhost.h      |   1 +
>  lib/librte_vhost/virtio_net.c | 121 +++++++++++++++++++++++++++++++++++++++++-
>  2 files changed, 121 insertions(+), 1 deletion(-)
> 
> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> index a7320469a..6ea8fb896 100644
> --- a/lib/librte_vhost/vhost.h
> +++ b/lib/librte_vhost/vhost.h
> @@ -56,6 +56,7 @@ struct buf_vector {
>  struct zcopy_mbuf {
>  	struct rte_mbuf *mbuf;
>  	uint32_t desc_idx;
> +	uint16_t desc_count;
>  	uint16_t in_use;
>  
>  	TAILQ_ENTRY(zcopy_mbuf) next;
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index 2e286e228..03dd38235 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -1445,6 +1445,122 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>  	return i;
>  }
>  
> +static __rte_always_inline uint16_t
> +virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
> +	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
> +{
> +	uint16_t i;
> +
> +	rte_prefetch0(&vq->desc_packed[vq->last_avail_idx]);
> +
> +	if (unlikely(dev->dequeue_zero_copy)) {
> +		struct zcopy_mbuf *zmbuf, *next;
> +		int nr_updated = 0;
> +
> +		for (zmbuf = TAILQ_FIRST(&vq->zmbuf_list);
> +		     zmbuf != NULL; zmbuf = next) {
> +			next = TAILQ_NEXT(zmbuf, next);
> +
> +			if (mbuf_is_consumed(zmbuf->mbuf)) {
> +				update_shadow_used_ring_packed(vq,
> +						zmbuf->desc_idx,
> +						0,
> +						zmbuf->desc_count);
> +				nr_updated += 1;

nr_updated isn't really used.

> +
> +				TAILQ_REMOVE(&vq->zmbuf_list, zmbuf, next);
> +				restore_mbuf(zmbuf->mbuf);
> +				rte_pktmbuf_free(zmbuf->mbuf);
> +				put_zmbuf(zmbuf);
> +				vq->nr_zmbuf -= 1;
> +			}
> +		}
> +
> +		flush_shadow_used_ring_packed(dev, vq);
> +		vhost_vring_call(dev, vq);
> +	}
> +
[...]


* Re: [PATCH v6 11/15] vhost: add vector filling support for packed ring
  2018-07-02  8:16 ` [PATCH v6 11/15] vhost: add vector filling support for packed ring Maxime Coquelin
@ 2018-07-04  5:53   ` Tiwei Bie
  2018-07-04 16:18     ` Maxime Coquelin
  0 siblings, 1 reply; 34+ messages in thread
From: Tiwei Bie @ 2018-07-04  5:53 UTC (permalink / raw)
  To: Maxime Coquelin; +Cc: zhihong.wang, jfreimann, dev, mst, jasowang, wexu

On Mon, Jul 02, 2018 at 10:16:25AM +0200, Maxime Coquelin wrote:
[...]
> +static __rte_always_inline int
> +fill_vec_buf_packed_indirect(struct virtio_net *dev,
> +			struct vhost_virtqueue *vq,
> +			struct vring_desc_packed *desc, uint16_t *vec_idx,
> +			struct buf_vector *buf_vec, uint16_t *len, uint8_t perm)
> +{
> +	uint16_t i;
> +	uint32_t nr_decs;
> +	uint16_t vec_id = *vec_idx;
> +	uint64_t dlen;
> +	struct vring_desc_packed *descs, *idescs = NULL;
> +
> +	dlen = desc->len;
> +	descs = (struct vring_desc_packed *)(uintptr_t)
> +		vhost_iova_to_vva(dev, vq, desc->addr, &dlen, VHOST_ACCESS_RO);
> +	if (unlikely(!descs))
> +		return -1;
> +
> +	if (unlikely(dlen < desc->len)) {
> +		/*
> +		 * The indirect desc table is not contiguous
> +		 * in process VA space, we have to copy it.
> +		 */
> +		idescs = alloc_copy_ind_table(dev, vq, desc->addr, desc->len);
> +		if (unlikely(!idescs))
> +			return -1;
> +
> +		descs = idescs;
> +	}
> +
> +	nr_decs =  desc->len / sizeof(struct vring_desc_packed);

s/nr_decs =  /nr_desc = /


> +	if (unlikely(nr_decs >= vq->size)) {
> +		free_ind_table(idescs);
> +		return -1;
> +	}
[...]
> +
> +static inline int
> +fill_vec_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
> +				uint16_t avail_idx, uint16_t *desc_count,
> +				struct buf_vector *buf_vec, uint16_t *vec_idx,
> +				uint16_t *buf_id, uint16_t *len, uint8_t perm)
> +{
> +	bool wrap_counter = vq->avail_wrap_counter;
> +	struct vring_desc_packed *descs = vq->desc_packed;
> +	uint16_t vec_id = *vec_idx;
> +
> +	if (avail_idx < vq->last_avail_idx)
> +		wrap_counter ^= 1;

In which case will avail_idx be less than vq->last_avail_idx,
so that the wrap_counter needs to be flipped?
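
To illustrate the case being asked about: when the index being examined has already wrapped past the end of the ring while last_avail_idx has not, the index is numerically smaller than last_avail_idx and the wrap counter that applies to that slot is the inverse of the queue's current one. A minimal standalone sketch (hypothetical helper and struct names, not from the patch):

```c
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical model of a packed virtqueue's cursor state. */
struct ring_cursor {
	uint16_t size;             /* number of descriptors in the ring */
	uint16_t last_avail_idx;   /* next slot the device will process */
	bool avail_wrap_counter;   /* flips each time the device wraps */
};

/*
 * Return the wrap counter that applies to an arbitrary slot avail_idx.
 * If avail_idx is numerically below last_avail_idx, the slot lies past
 * a ring wrap, so the effective wrap counter is the inverse of the
 * queue's current one.
 */
static bool
wrap_counter_for(const struct ring_cursor *c, uint16_t avail_idx)
{
	bool wc = c->avail_wrap_counter;

	if (avail_idx < c->last_avail_idx)
		wc = !wc;
	return wc;
}
```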

> +
> +	if (unlikely(!desc_is_avail(&descs[avail_idx], wrap_counter)))
> +		return -1;
> +
> +	*desc_count = 0;
> +
> +	while (1) {
> +		if (unlikely(vec_id >= BUF_VECTOR_MAX))
> +			return -1;
> +
> +		*desc_count += 1;
> +		*buf_id = descs[avail_idx].index;
> +
> +		if (descs[avail_idx].flags & VRING_DESC_F_INDIRECT) {
> +			if (unlikely(fill_vec_buf_packed_indirect(dev, vq,
> +							&descs[avail_idx],
> +							&vec_id, buf_vec,
> +							len, perm) < 0))
> +				return -1;
> +		} else {
> +			*len += descs[avail_idx].len;
> +
> +			if (unlikely(map_one_desc(dev, vq, buf_vec, &vec_id,
> +							descs[avail_idx].addr,
> +							descs[avail_idx].len,
> +							perm)))
> +				return -1;
> +		}
> +
> +		if ((descs[avail_idx].flags & VRING_DESC_F_NEXT) == 0)
> +			break;
> +
> +		if (++avail_idx >= vq->size) {
> +			avail_idx -= vq->size;
> +			wrap_counter ^= 1;
> +		}
> +	}
> +
> +	*vec_idx = vec_id;
> +
> +	return 0;
> +}
[...]


* Re: [PATCH v6 10/15] vhost: create descriptor mapping function
  2018-07-02  8:16 ` [PATCH v6 10/15] vhost: create descriptor mapping function Maxime Coquelin
@ 2018-07-04  5:56   ` Tiwei Bie
  2018-07-04 16:18     ` Maxime Coquelin
  0 siblings, 1 reply; 34+ messages in thread
From: Tiwei Bie @ 2018-07-04  5:56 UTC (permalink / raw)
  To: Maxime Coquelin; +Cc: zhihong.wang, jfreimann, dev, mst, jasowang, wexu

On Mon, Jul 02, 2018 at 10:16:24AM +0200, Maxime Coquelin wrote:
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
>  lib/librte_vhost/virtio_net.c | 61 ++++++++++++++++++++++++++-----------------
>  1 file changed, 37 insertions(+), 24 deletions(-)
> 
> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
> index f0e2e6a1f..64664b7de 100644
> --- a/lib/librte_vhost/virtio_net.c
> +++ b/lib/librte_vhost/virtio_net.c
> @@ -292,6 +292,37 @@ virtio_enqueue_offload(struct rte_mbuf *m_buf, struct virtio_net_hdr *net_hdr)
>  	}
>  }
>  
> +static __rte_always_inline int
> +map_one_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
> +		struct buf_vector *buf_vec, uint16_t *vec_idx,
> +		uint64_t desc_iova, uint64_t desc_len, uint8_t perm)
> +{
> +	uint16_t vec_id = *vec_idx;
> +
> +	while (desc_len) {
> +		uint64_t desc_addr;
> +		uint64_t desc_chunck_len = desc_len;
> +
> +		desc_addr = vhost_iova_to_vva(dev, vq,
> +				desc_iova,
> +				&desc_chunck_len,
> +				perm);
> +		if (unlikely(!desc_addr))
> +			return -1;
> +
> +		buf_vec[vec_id].buf_iova = desc_iova;
> +		buf_vec[vec_id].buf_addr = desc_addr;
> +		buf_vec[vec_id].buf_len  = desc_chunck_len;
> +
> +		desc_len -= desc_chunck_len;
> +		desc_iova += desc_chunck_len;
> +		vec_id++;

FYI, a rebase on top of the latest "vhost: generalize
buffer vectors" series is needed to make sure that
accessing buf_vec[vec_id] won't cause an overflow.
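
The overflow concern can be made explicit by bounding vec_id inside the loop. A sketch of the chunked-mapping pattern with such a check (simplified signatures; sample_map is a hypothetical translator standing in for vhost_iova_to_vva, which may shrink the requested length to the contiguous chunk):

```c
#include <stdint.h>

#define BUF_VECTOR_MAX 256 /* mirrors the vhost limit */

struct buf_vector {
	uint64_t buf_iova;
	uint64_t buf_addr;
	uint32_t buf_len;
};

/* Translation callback: maps iova, may shrink *len to the contiguous chunk. */
typedef uint64_t (*iova_to_vva_t)(uint64_t iova, uint64_t *len);

/* Sample translator: identity-plus-offset mapping, contiguous in 4 KiB chunks. */
static uint64_t
sample_map(uint64_t iova, uint64_t *len)
{
	if (*len > 4096)
		*len = 4096;
	return 0x1000 + iova;
}

/*
 * Split one guest descriptor into host-VA chunks, bounds-checking vec_id
 * so buf_vec[] can never overflow even for adversarial descriptor lengths.
 * Returns 0 on success, -1 on translation failure or vector exhaustion.
 */
static int
map_one_desc_checked(struct buf_vector *buf_vec, uint16_t *vec_idx,
		uint64_t desc_iova, uint64_t desc_len, iova_to_vva_t translate)
{
	uint16_t vec_id = *vec_idx;

	while (desc_len) {
		uint64_t chunk_len = desc_len;
		uint64_t vva;

		if (vec_id >= BUF_VECTOR_MAX) /* the check the rebase adds */
			return -1;

		vva = translate(desc_iova, &chunk_len);
		if (!vva || !chunk_len)
			return -1;

		buf_vec[vec_id].buf_iova = desc_iova;
		buf_vec[vec_id].buf_addr = vva;
		buf_vec[vec_id].buf_len  = (uint32_t)chunk_len;

		desc_len  -= chunk_len;
		desc_iova += chunk_len;
		vec_id++;
	}
	*vec_idx = vec_id;
	return 0;
}
```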

> +	}
> +	*vec_idx = vec_id;
> +
> +	return 0;
> +}
> +
[...]


* Re: [PATCH v6 14/15] vhost: add notification for packed ring
  2018-07-02  8:16 ` [PATCH v6 14/15] vhost: add notification " Maxime Coquelin
  2018-07-03  5:57   ` Jason Wang
  2018-07-03  6:06   ` Jason Wang
@ 2018-07-04  6:25   ` Tiwei Bie
  2018-07-04 20:20     ` Maxime Coquelin
  2 siblings, 1 reply; 34+ messages in thread
From: Tiwei Bie @ 2018-07-04  6:25 UTC (permalink / raw)
  To: Maxime Coquelin; +Cc: zhihong.wang, jfreimann, dev, mst, jasowang, wexu

On Mon, Jul 02, 2018 at 10:16:28AM +0200, Maxime Coquelin wrote:
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
>  lib/librte_vhost/vhost.c         | 73 ++++++++++++++++++++++++++++++++++++----
>  lib/librte_vhost/vhost.h         | 71 ++++++++++++++++++++++++++++++++++----
>  lib/librte_vhost/vhost_user.c    | 24 +++++++++++++
>  lib/librte_vhost/virtio-packed.h | 11 ++++++
>  lib/librte_vhost/virtio_net.c    | 12 +++----
>  5 files changed, 172 insertions(+), 19 deletions(-)
> 
> diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
> index 8538302c9..78f20c402 100644
> --- a/lib/librte_vhost/vhost.c
> +++ b/lib/librte_vhost/vhost.c
> @@ -171,6 +171,24 @@ vring_translate_packed(struct virtio_net *dev, struct vhost_virtqueue *vq)
>  	if (!vq->desc_packed || size != req_size)
>  		return -1;
>  
> +	req_size = sizeof(struct vring_packed_desc_event);
> +	size = req_size;
> +	vq->driver_event = (struct vring_packed_desc_event *)(uintptr_t)
> +		vhost_iova_to_vva(dev,
> +					vq,	vq->ring_addrs.avail_user_addr,
> +					&size, VHOST_ACCESS_RW);

It should be a space instead of a tab after "vq,"

Why not put "vq, vq->ring_addrs.avail_user_addr,"
and "vhost_iova_to_vva(dev," on the same line?

> +	if (!vq->driver_event || size != req_size)
> +		return -1;
> +
> +	req_size = sizeof(struct vring_packed_desc_event);
> +	size = req_size;
> +	vq->device_event = (struct vring_packed_desc_event *)(uintptr_t)
> +		vhost_iova_to_vva(dev,
> +					vq, vq->ring_addrs.used_user_addr,

It's better to put "vhost_iova_to_vva(dev," and
"vq, vq->ring_addrs.used_user_addr," on the same line.

Currently, it looks something like this in my editor:

¦       vq->device_event = (struct vring_packed_desc_event *)(uintptr_t)
¦       ¦       vhost_iova_to_vva(dev,
¦       ¦       ¦       ¦       ¦       vq, vq->ring_addrs.used_user_addr,
¦       ¦       ¦       ¦       ¦       &size, VHOST_ACCESS_RW);

> +					&size, VHOST_ACCESS_RW);
> +	if (!vq->device_event || size != req_size)
> +		return -1;
> +
>  	return 0;
>  }
>  
[...]
> @@ -640,4 +647,54 @@ vhost_vring_call(struct virtio_net *dev, struct vhost_virtqueue *vq)
>  	}
>  }
>  
> +static __rte_always_inline void
> +vhost_vring_call_packed(struct virtio_net *dev, struct vhost_virtqueue *vq)
> +{
> +	uint16_t old, new, off, off_wrap, wrap;
> +	bool kick = false;
> +
> +

There is no need to have two blank lines.

> +	/*  Flush used desc update. */

Just need one space between "/*" and "Flush".

> +	rte_smp_mb();
> +
> +	if (!(dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX))) {
> +		if (vq->driver_event->desc_event_flags !=
> +				RING_EVENT_FLAGS_DISABLE)
> +			kick = true;
> +		goto kick;
> +	}
> +
> +	old = vq->signalled_used;
> +	new = vq->last_used_idx;
> +	vq->signalled_used = new;
> +
> +	if (vq->driver_event->desc_event_flags != RING_EVENT_FLAGS_DESC) {
> +		if (vq->driver_event->desc_event_flags !=
> +				RING_EVENT_FLAGS_DISABLE)
> +			kick = true;
> +		goto kick;
> +	}
> +
> +	rte_smp_rmb();
> +
> +	off_wrap = vq->driver_event->desc_event_off_wrap;
> +	off = off_wrap & ~(1 << 15);

Maybe it's better to use: RING_EVENT_OFF_MASK

> +	wrap = vq->used_wrap_counter;
> +
> +	if (new < old) {
> +		new += vq->size;
> +		wrap ^= 1;
> +	}
> +
> +	if (wrap != off_wrap >> 15)
> +		off += vq->size;
> +
> +	if (vhost_need_event(off, new, old))
> +		kick = true;
> +
> +kick:
> +	if (kick)
> +		eventfd_write(vq->callfd, (eventfd_t)1);
> +}
> +
>  #endif /* _VHOST_NET_CDEV_H_ */
> diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
> index b2b57de57..bda515bdb 100644
> --- a/lib/librte_vhost/vhost_user.c
> +++ b/lib/librte_vhost/vhost_user.c
> @@ -523,6 +523,30 @@ translate_ring_addresses(struct virtio_net *dev, int vq_index)
>  		vq = dev->virtqueue[vq_index];
>  		addr = &vq->ring_addrs;
>  
> +		len = sizeof(struct vring_packed_desc_event);
> +		vq->driver_event = (struct vring_packed_desc_event *)
> +					(uintptr_t)ring_addr_to_vva(dev,
> +					vq, addr->avail_user_addr, &len);
> +		if (vq->driver_event == 0 ||

It's better to compare with NULL.

> +				len != sizeof(struct vring_packed_desc_event)) {
> +			RTE_LOG(DEBUG, VHOST_CONFIG,
> +				"(%d) failed to find driver area address.\n",
> +				dev->vid);
> +			return dev;
> +		}
> +
> +		len = sizeof(struct vring_packed_desc_event);
> +		vq->device_event = (struct vring_packed_desc_event *)
> +					(uintptr_t)ring_addr_to_vva(dev,
> +					vq, addr->used_user_addr, &len);
> +		if (vq->device_event == 0 ||

It's better to compare with NULL.

> +				len != sizeof(struct vring_packed_desc_event)) {
> +			RTE_LOG(DEBUG, VHOST_CONFIG,
> +				"(%d) failed to find device area address.\n",
> +				dev->vid);
> +			return dev;
> +		}
> +
>  		return dev;
>  	}
>  
> diff --git a/lib/librte_vhost/virtio-packed.h b/lib/librte_vhost/virtio-packed.h
> index d386cb6df..ce3b28313 100644
> --- a/lib/librte_vhost/virtio-packed.h
> +++ b/lib/librte_vhost/virtio-packed.h

Normally, we name .c/.h files with underscores, e.g.: virtio_packed.h

Besides, it'd be better to move the definitions into
vhost.h and define the types and flags for packed
ring only when e.g. VIRTIO_F_RING_PACKED is not defined.
Just like how VIRTIO_F_IOMMU_PLATFORM works:

/* Declare IOMMU related bits for older kernels */
#ifndef VIRTIO_F_IOMMU_PLATFORM

#define VIRTIO_F_IOMMU_PLATFORM 33

struct vhost_iotlb_msg {
	__u64 iova;
	__u64 size;
	__u64 uaddr;
#define VHOST_ACCESS_RO      0x1
#define VHOST_ACCESS_WO      0x2
#define VHOST_ACCESS_RW      0x3
	__u8 perm;
#define VHOST_IOTLB_MISS           1
#define VHOST_IOTLB_UPDATE         2
#define VHOST_IOTLB_INVALIDATE     3
#define VHOST_IOTLB_ACCESS_FAIL    4
	__u8 type;
};

#define VHOST_IOTLB_MSG 0x1

struct vhost_msg {
	int type;
	union {
		struct vhost_iotlb_msg iotlb;
		__u8 padding[64];
	};
};
#endif

> @@ -19,6 +19,17 @@ struct vring_desc_packed {
>  	uint16_t flags;
>  };
>  
> +#define RING_EVENT_FLAGS_ENABLE 0x0
> +#define RING_EVENT_FLAGS_DISABLE 0x1
> +#define RING_EVENT_FLAGS_DESC 0x2
> +#define RING_EVENT_FLAGS_MASK 0xFFFC

RING_EVENT_FLAGS_MASK should be 0x3?

> +#define RING_EVENT_WRAP_MASK 0x8000
> +#define RING_EVENT_OFF_MASK 0x7FFF

It's better to define above macros as VRING_EVENT_*
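
For reference, the driver/device event areas pack two pieces of state into each 16-bit field: the flags value only occupies bits 0-1 (hence a 0x3 mask), while off_wrap carries a 15-bit descriptor offset plus the wrap counter in bit 15. A sketch of the extraction, using the VRING_EVENT_* naming proposed above (which differs from the patch's macros):

```c
#include <stdbool.h>
#include <stdint.h>

#define VRING_EVENT_FLAGS_MASK 0x3    /* flags occupy bits 0-1 only */
#define VRING_EVENT_OFF_MASK   0x7FFF /* bits 0-14: descriptor offset */
#define VRING_EVENT_WRAP_MASK  0x8000 /* bit 15: wrap counter */

/* Extract the 15-bit descriptor offset from the off_wrap field. */
static uint16_t
event_offset(uint16_t off_wrap)
{
	return off_wrap & VRING_EVENT_OFF_MASK;
}

/* Extract the wrap counter from the off_wrap field. */
static bool
event_wrap(uint16_t off_wrap)
{
	return (off_wrap & VRING_EVENT_WRAP_MASK) != 0;
}

/* Mask the flags field down to its two meaningful bits. */
static uint16_t
event_flags(uint16_t raw_flags)
{
	return raw_flags & VRING_EVENT_FLAGS_MASK;
}
```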

> +
> +struct vring_packed_desc_event {
> +	uint16_t desc_event_off_wrap;
> +	uint16_t desc_event_flags;
> +};
>  
[...]


* Re: [PATCH v6 07/15] vhost: extract split ring handling from Rx and Tx functions
  2018-07-02  8:16 ` [PATCH v6 07/15] vhost: extract split ring handling from Rx and Tx functions Maxime Coquelin
@ 2018-07-04  6:51   ` Tiwei Bie
  2018-07-04 21:04     ` Maxime Coquelin
  0 siblings, 1 reply; 34+ messages in thread
From: Tiwei Bie @ 2018-07-04  6:51 UTC (permalink / raw)
  To: Maxime Coquelin; +Cc: zhihong.wang, jfreimann, dev, mst, jasowang, wexu

On Mon, Jul 02, 2018 at 10:16:21AM +0200, Maxime Coquelin wrote:
[...]
> +
> +	if (unlikely(vq->access_ok == 0))
> +		if (unlikely(vring_translate(dev, vq) < 0))
> +			goto out;
> +
> +

Please just keep one blank line.

> +	/*
> +	 * Construct a RARP broadcast packet, and inject it to the "pkts"
> +	 * array, to looks like that guest actually send such packet.
> +	 *
> +	 * Check user_send_rarp() for more information.
> +	 *
> +	 * broadcast_rarp shares a cacheline in the virtio_net structure
> +	 * with some fields that are accessed during enqueue and
> +	 * rte_atomic16_cmpset() causes a write if using cmpxchg. This could
> +	 * result in false sharing between enqueue and dequeue.
> +	 *
> +	 * Prevent unnecessary false sharing by reading broadcast_rarp first
> +	 * and only performing cmpset if the read indicates it is likely to
> +	 * be set.
> +	 */
> +

Please remove the blank line above.

> +	if (unlikely(rte_atomic16_read(&dev->broadcast_rarp) &&
> +			rte_atomic16_cmpset((volatile uint16_t *)
> +				&dev->broadcast_rarp.cnt, 1, 0))) {
> +
> +		rarp_mbuf = rte_net_make_rarp_packet(mbuf_pool, &dev->mac);
> +		if (rarp_mbuf == NULL) {
> +			RTE_LOG(ERR, VHOST_DATA,
> +				"Failed to make RARP packet.\n");
> +			return 0;
> +		}
> +		count -= 1;
> +	}
> +
> +	count = virtio_dev_tx_split(dev, vq, mbuf_pool, pkts, count);
> +
>  out:
>  	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
>  		vhost_user_iotlb_rd_unlock(vq);
> @@ -1200,10 +1222,10 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
>  		 * Inject it to the head of "pkts" array, so that switch's mac
>  		 * learning table will get updated first.
>  		 */
> -		memmove(&pkts[1], pkts, i * sizeof(struct rte_mbuf *));
> +		memmove(&pkts[1], pkts, count * sizeof(struct rte_mbuf *));
>  		pkts[0] = rarp_mbuf;
> -		i += 1;
> +		count += 1;
>  	}
>  
> -	return i;
> +	return count;
>  }
> -- 
> 2.14.4
> 


* Re: [PATCH v6 14/15] vhost: add notification for packed ring
  2018-07-03  6:06   ` Jason Wang
@ 2018-07-04 16:02     ` Maxime Coquelin
  0 siblings, 0 replies; 34+ messages in thread
From: Maxime Coquelin @ 2018-07-04 16:02 UTC (permalink / raw)
  To: Jason Wang, tiwei.bie, zhihong.wang, jfreimann, dev; +Cc: mst, wexu



On 07/03/2018 08:06 AM, Jason Wang wrote:
> 
> 
> On 2018年07月02日 16:16, Maxime Coquelin wrote:
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>> ---
>>   lib/librte_vhost/vhost.c         | 73 
>> ++++++++++++++++++++++++++++++++++++----
>>   lib/librte_vhost/vhost.h         | 71 
>> ++++++++++++++++++++++++++++++++++----
>>   lib/librte_vhost/vhost_user.c    | 24 +++++++++++++
>>   lib/librte_vhost/virtio-packed.h | 11 ++++++
>>   lib/librte_vhost/virtio_net.c    | 12 +++----
>>   5 files changed, 172 insertions(+), 19 deletions(-)
>>
>> diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
>> index 8538302c9..78f20c402 100644
>> --- a/lib/librte_vhost/vhost.c
>> +++ b/lib/librte_vhost/vhost.c
>> @@ -171,6 +171,24 @@ vring_translate_packed(struct virtio_net *dev, 
>> struct vhost_virtqueue *vq)
>>       if (!vq->desc_packed || size != req_size)
>>           return -1;
>> +    req_size = sizeof(struct vring_packed_desc_event);
>> +    size = req_size;
>> +    vq->driver_event = (struct vring_packed_desc_event *)(uintptr_t)
>> +        vhost_iova_to_vva(dev,
>> +                    vq,    vq->ring_addrs.avail_user_addr,
>> +                    &size, VHOST_ACCESS_RW);
>> +    if (!vq->driver_event || size != req_size)
>> +        return -1;
>> +
>> +    req_size = sizeof(struct vring_packed_desc_event);
>> +    size = req_size;
>> +    vq->device_event = (struct vring_packed_desc_event *)(uintptr_t)
>> +        vhost_iova_to_vva(dev,
>> +                    vq, vq->ring_addrs.used_user_addr,
>> +                    &size, VHOST_ACCESS_RW);
>> +    if (!vq->device_event || size != req_size)
>> +        return -1;
>> +
>>       return 0;
>>   }
>> @@ -595,7 +613,11 @@ rte_vhost_vring_call(int vid, uint16_t vring_idx)
>>       if (!vq)
>>           return -1;
>> -    vhost_vring_call(dev, vq);
>> +    if (vq_is_packed(dev))
>> +        vhost_vring_call_packed(dev, vq);
>> +    else
>> +        vhost_vring_call_split(dev, vq);
>> +
>>       return 0;
>>   }
>> @@ -616,20 +638,59 @@ rte_vhost_avail_entries(int vid, uint16_t queue_id)
>>       return *(volatile uint16_t *)&vq->avail->idx - vq->last_used_idx;
>>   }
>> +static inline int
>> +vhost_enable_notify_split(struct vhost_virtqueue *vq, int enable)
>> +{
>> +    if (enable)
>> +        vq->used->flags &= ~VRING_USED_F_NO_NOTIFY;
>> +    else
>> +        vq->used->flags |= VRING_USED_F_NO_NOTIFY;
>> +
>> +    return 0;
>> +}
>> +
>> +static inline int
>> +vhost_enable_notify_packed(struct virtio_net *dev,
>> +        struct vhost_virtqueue *vq, int enable)
>> +{
>> +    uint16_t flags;
>> +
>> +    if (!enable) {
>> +        vq->device_event->desc_event_flags = RING_EVENT_FLAGS_DISABLE;
>> +        return 0;
>> +    }
>> +
>> +    flags = RING_EVENT_FLAGS_ENABLE;
>> +    if (dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX)) {
>> +        flags = RING_EVENT_FLAGS_DESC;
>> +        vq->device_event->desc_event_off_wrap = vq->last_avail_idx |
>> +            vq->avail_wrap_counter << 15;
>> +    }
>> +
>> +    rte_smp_wmb();
>> +
>> +    vq->device_event->desc_event_flags = flags;
>> +
>> +    rte_smp_wmb();
>> +
>> +    return 0;
>> +}
>> +
>>   int
>>   rte_vhost_enable_guest_notification(int vid, uint16_t queue_id, int 
>> enable)
>>   {
>>       struct virtio_net *dev = get_device(vid);
>> +    struct vhost_virtqueue *vq;
>>       if (!dev)
>>           return -1;
>> -    if (enable)
>> -        dev->virtqueue[queue_id]->used->flags &=
>> -            ~VRING_USED_F_NO_NOTIFY;
>> +    vq = dev->virtqueue[queue_id];
>> +
>> +    if (vq_is_packed(dev))
>> +        return vhost_enable_notify_packed(dev, vq, enable);
>>       else
>> -        dev->virtqueue[queue_id]->used->flags |= VRING_USED_F_NO_NOTIFY;
>> -    return 0;
>> +        return vhost_enable_notify_split(vq, enable);
>>   }
>>   void
>> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
>> index 6ea8fb896..728fd2f6b 100644
>> --- a/lib/librte_vhost/vhost.h
>> +++ b/lib/librte_vhost/vhost.h
>> @@ -21,6 +21,7 @@
>>   #include "rte_vhost.h"
>>   #include "rte_vdpa.h"
>> +#include "virtio-packed.h"
>>   /* Used to indicate that the device is running on a data core */
>>   #define VIRTIO_DEV_RUNNING 1
>> @@ -95,8 +96,14 @@ struct vhost_virtqueue {
>>           struct vring_desc    *desc;
>>           struct vring_desc_packed   *desc_packed;
>>       };
>> -    struct vring_avail    *avail;
>> -    struct vring_used    *used;
>> +    union {
>> +        struct vring_avail    *avail;
>> +        struct vring_packed_desc_event *driver_event;
>> +    };
>> +    union {
>> +        struct vring_used    *used;
>> +        struct vring_packed_desc_event *device_event;
>> +    };
>>       uint32_t        size;
>>       uint16_t        last_avail_idx;
>> @@ -613,7 +620,7 @@ vhost_need_event(uint16_t event_idx, uint16_t 
>> new_idx, uint16_t old)
>>   }
>>   static __rte_always_inline void
>> -vhost_vring_call(struct virtio_net *dev, struct vhost_virtqueue *vq)
>> +vhost_vring_call_split(struct virtio_net *dev, struct vhost_virtqueue 
>> *vq)
>>   {
>>       /* Flush used->idx update before we read avail->flags. */
>>       rte_mb();
>> @@ -624,11 +631,11 @@ vhost_vring_call(struct virtio_net *dev, struct 
>> vhost_virtqueue *vq)
>>           uint16_t new = vq->last_used_idx;
>>           VHOST_LOG_DEBUG(VHOST_DATA, "%s: used_event_idx=%d, old=%d, 
>> new=%d\n",
>> -            __func__,
>> -            vhost_used_event(vq),
>> -            old, new);
>> +                __func__,
>> +                vhost_used_event(vq),
>> +                old, new);
>>           if (vhost_need_event(vhost_used_event(vq), new, old)
>> -            && (vq->callfd >= 0)) {
>> +                && (vq->callfd >= 0)) {
>>               vq->signalled_used = vq->last_used_idx;
>>               eventfd_write(vq->callfd, (eventfd_t) 1);
>>           }
> 
> Looks like style fixes, move those to another patch?

Indeed, I'll remove these style fixes in the next version.

>> @@ -640,4 +647,54 @@ vhost_vring_call(struct virtio_net *dev, struct 
>> vhost_virtqueue *vq)
>>       }
>>   }
>> +static __rte_always_inline void
>> +vhost_vring_call_packed(struct virtio_net *dev, struct 
>> vhost_virtqueue *vq)
>> +{
>> +    uint16_t old, new, off, off_wrap, wrap;
>> +    bool kick = false;
>> +
>> +
>> +    /*  Flush used desc update. */
>> +    rte_smp_mb();
>> +
>> +    if (!(dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX))) {
>> +        if (vq->driver_event->desc_event_flags !=
>> +                RING_EVENT_FLAGS_DISABLE)
>> +            kick = true;
>> +        goto kick;
>> +    }
>> +
>> +    old = vq->signalled_used;
>> +    new = vq->last_used_idx;
>> +    vq->signalled_used = new;
>> +
>> +    if (vq->driver_event->desc_event_flags != RING_EVENT_FLAGS_DESC) {
>> +        if (vq->driver_event->desc_event_flags !=
>> +                RING_EVENT_FLAGS_DISABLE)
>> +            kick = true;
>> +        goto kick;
>> +    }
>> +
>> +    rte_smp_rmb();
>> +
>> +    off_wrap = vq->driver_event->desc_event_off_wrap;
>> +    off = off_wrap & ~(1 << 15);
>> +    wrap = vq->used_wrap_counter;
>> +
>> +    if (new < old) {
>> +        new += vq->size;
>> +        wrap ^= 1;
>> +    }
>> +
>> +    if (wrap != off_wrap >> 15)
>> +        off += vq->size;
>> +
> 
> Just FYI. Maybe we can switch to a more compact version like the one Tiwei used:
> 
> ...
>          wrap_counter = off_wrap >> 15;
>          event_idx = off_wrap & ~(1<<15);
>          if (wrap_counter != vq->avail_wrap_counter)
>                  event_idx -= vq->vring_packed.num;
> 
>          if (flags == VRING_EVENT_F_DESC)
>                  needs_kick = vring_need_event(event_idx, new, old);
>          else
>                  needs_kick = (flags != VRING_EVENT_F_DISABLE);
> 
> (I've switched to this version in vhost).

Sure, it is indeed simpler.
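
The compact check can be exercised as a standalone function; the sketch below is a simplified model (not the patch's exact code), with vring_need_event following the virtio event-index rule and the flag values mirroring VRING_EVENT_F_*:

```c
#include <stdbool.h>
#include <stdint.h>

#define VRING_EVENT_F_ENABLE  0x0
#define VRING_EVENT_F_DISABLE 0x1
#define VRING_EVENT_F_DESC    0x2

/* Virtio event-index rule: notify if event_idx was crossed in (old, new]. */
static bool
vring_need_event(uint16_t event_idx, uint16_t new_idx, uint16_t old)
{
	return (uint16_t)(new_idx - event_idx - 1) < (uint16_t)(new_idx - old);
}

/*
 * Decide whether to kick the driver, given the driver event area contents
 * (flags, off_wrap) and the device's used-index progress (old -> new_idx).
 * ring_size and used_wrap model the corresponding vq state.
 */
static bool
need_kick(uint16_t flags, uint16_t off_wrap, uint16_t old, uint16_t new_idx,
		uint16_t ring_size, bool used_wrap)
{
	bool wrap = off_wrap >> 15;
	uint16_t event_idx = off_wrap & ~(1 << 15);

	if (flags != VRING_EVENT_F_DESC)
		return flags != VRING_EVENT_F_DISABLE;

	if (new_idx < old) { /* used index wrapped since the last signal */
		new_idx += ring_size;
		used_wrap = !used_wrap;
	}
	if (wrap != used_wrap)
		event_idx += ring_size; /* driver's event is one lap ahead */

	return vring_need_event(event_idx, new_idx, old);
}
```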

Thanks,
Maxime

> Thanks
> 
>> +    if (vhost_need_event(off, new, old))
>> +        kick = true;
>> +
>> +kick:
>> +    if (kick)
>> +        eventfd_write(vq->callfd, (eventfd_t)1);
>> +}
>> +
>>   #endif /* _VHOST_NET_CDEV_H_ */
>> diff --git a/lib/librte_vhost/vhost_user.c 
>> b/lib/librte_vhost/vhost_user.c
>> index b2b57de57..bda515bdb 100644
>> --- a/lib/librte_vhost/vhost_user.c
>> +++ b/lib/librte_vhost/vhost_user.c
>> @@ -523,6 +523,30 @@ translate_ring_addresses(struct virtio_net *dev, 
>> int vq_index)
>>           vq = dev->virtqueue[vq_index];
>>           addr = &vq->ring_addrs;
>> +        len = sizeof(struct vring_packed_desc_event);
>> +        vq->driver_event = (struct vring_packed_desc_event *)
>> +                    (uintptr_t)ring_addr_to_vva(dev,
>> +                    vq, addr->avail_user_addr, &len);
>> +        if (vq->driver_event == 0 ||
>> +                len != sizeof(struct vring_packed_desc_event)) {
>> +            RTE_LOG(DEBUG, VHOST_CONFIG,
>> +                "(%d) failed to find driver area address.\n",
>> +                dev->vid);
>> +            return dev;
>> +        }
>> +
>> +        len = sizeof(struct vring_packed_desc_event);
>> +        vq->device_event = (struct vring_packed_desc_event *)
>> +                    (uintptr_t)ring_addr_to_vva(dev,
>> +                    vq, addr->used_user_addr, &len);
>> +        if (vq->device_event == 0 ||
>> +                len != sizeof(struct vring_packed_desc_event)) {
>> +            RTE_LOG(DEBUG, VHOST_CONFIG,
>> +                "(%d) failed to find device area address.\n",
>> +                dev->vid);
>> +            return dev;
>> +        }
>> +
>>           return dev;
>>       }
>> diff --git a/lib/librte_vhost/virtio-packed.h 
>> b/lib/librte_vhost/virtio-packed.h
>> index d386cb6df..ce3b28313 100644
>> --- a/lib/librte_vhost/virtio-packed.h
>> +++ b/lib/librte_vhost/virtio-packed.h
>> @@ -19,6 +19,17 @@ struct vring_desc_packed {
>>       uint16_t flags;
>>   };
>> +#define RING_EVENT_FLAGS_ENABLE 0x0
>> +#define RING_EVENT_FLAGS_DISABLE 0x1
>> +#define RING_EVENT_FLAGS_DESC 0x2
>> +#define RING_EVENT_FLAGS_MASK 0xFFFC
>> +#define RING_EVENT_WRAP_MASK 0x8000
>> +#define RING_EVENT_OFF_MASK 0x7FFF
>> +
>> +struct vring_packed_desc_event {
>> +    uint16_t desc_event_off_wrap;
>> +    uint16_t desc_event_flags;
>> +};
>>   static inline bool
>>   desc_is_avail(struct vring_desc_packed *desc, bool wrap_counter)
>> diff --git a/lib/librte_vhost/virtio_net.c 
>> b/lib/librte_vhost/virtio_net.c
>> index 03dd38235..11c10aaf8 100644
>> --- a/lib/librte_vhost/virtio_net.c
>> +++ b/lib/librte_vhost/virtio_net.c
>> @@ -824,7 +824,7 @@ virtio_dev_rx_split(struct virtio_net *dev, struct 
>> vhost_virtqueue *vq,
>>       if (likely(vq->shadow_used_idx)) {
>>           flush_shadow_used_ring_split(dev, vq);
>> -        vhost_vring_call(dev, vq);
>> +        vhost_vring_call_split(dev, vq);
>>       }
>>       return pkt_idx;
>> @@ -877,7 +877,7 @@ virtio_dev_rx_packed(struct virtio_net *dev, 
>> struct vhost_virtqueue *vq,
>>       if (likely(vq->shadow_used_idx)) {
>>           flush_shadow_used_ring_packed(dev, vq);
>> -        vhost_vring_call(dev, vq);
>> +        vhost_vring_call_packed(dev, vq);
>>       }
>>       return pkt_idx;
>> @@ -1360,7 +1360,7 @@ virtio_dev_tx_split(struct virtio_net *dev, 
>> struct vhost_virtqueue *vq,
>>           }
>>           flush_shadow_used_ring_split(dev, vq);
>> -        vhost_vring_call(dev, vq);
>> +        vhost_vring_call_split(dev, vq);
>>       }
>>       rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 
>> 1)]);
>> @@ -1439,7 +1439,7 @@ virtio_dev_tx_split(struct virtio_net *dev, 
>> struct vhost_virtqueue *vq,
>>           if (unlikely(i < count))
>>               vq->shadow_used_idx = i;
>>           flush_shadow_used_ring_split(dev, vq);
>> -        vhost_vring_call(dev, vq);
>> +        vhost_vring_call_split(dev, vq);
>>       }
>>       return i;
>> @@ -1477,7 +1477,7 @@ virtio_dev_tx_packed(struct virtio_net *dev, 
>> struct vhost_virtqueue *vq,
>>           }
>>           flush_shadow_used_ring_packed(dev, vq);
>> -        vhost_vring_call(dev, vq);
>> +        vhost_vring_call_packed(dev, vq);
>>       }
>>       VHOST_LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__);
>> @@ -1555,7 +1555,7 @@ virtio_dev_tx_packed(struct virtio_net *dev, 
>> struct vhost_virtqueue *vq,
>>           if (unlikely(i < count))
>>               vq->shadow_used_idx = i;
>>           flush_shadow_used_ring_packed(dev, vq);
>> -        vhost_vring_call(dev, vq);
>> +        vhost_vring_call_packed(dev, vq);
>>       }
>>       return i;
> 


* Re: [PATCH v6 01/15] vhost: add virtio packed virtqueue defines
  2018-07-04  5:37   ` Tiwei Bie
@ 2018-07-04 16:03     ` Maxime Coquelin
  0 siblings, 0 replies; 34+ messages in thread
From: Maxime Coquelin @ 2018-07-04 16:03 UTC (permalink / raw)
  To: Tiwei Bie; +Cc: zhihong.wang, jfreimann, dev, mst, jasowang, wexu



On 07/04/2018 07:37 AM, Tiwei Bie wrote:
> On Mon, Jul 02, 2018 at 10:16:15AM +0200, Maxime Coquelin wrote:
>> From: Jens Freimann <jfreimann@redhat.com>
>>
>> Signed-off-by: Jens Freimann <jfreimann@redhat.com>
>> ---
>>   lib/librte_vhost/vhost.h         |  3 +++
>>   lib/librte_vhost/virtio-packed.h | 22 ++++++++++++++++++++++
>>   2 files changed, 25 insertions(+)
>>   create mode 100644 lib/librte_vhost/virtio-packed.h
>>
>> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
>> index 3f8fa3a78..cecd5c1ee 100644
>> --- a/lib/librte_vhost/vhost.h
>> +++ b/lib/librte_vhost/vhost.h
>> @@ -191,6 +191,9 @@ struct vhost_msg {
>>   #ifndef VIRTIO_F_VERSION_1
>>    #define VIRTIO_F_VERSION_1 32
>>   #endif
>> +#ifndef VIRTIO_F_RING_PACKED
>> + #define VIRTIO_F_RING_PACKED 34
>> +#endif
>>   
>>   /* Features supported by this builtin vhost-user net driver. */
>>   #define VIRTIO_NET_SUPPORTED_FEATURES ((1ULL << VIRTIO_NET_F_MRG_RXBUF) | \
>> diff --git a/lib/librte_vhost/virtio-packed.h b/lib/librte_vhost/virtio-packed.h
>> new file mode 100644
>> index 000000000..744b3991b
>> --- /dev/null
>> +++ b/lib/librte_vhost/virtio-packed.h
>> @@ -0,0 +1,22 @@
>> +/* SPDX-License-Identifier: BSD-3-Clause
>> + * Copyright(c) Red Hat Inc.
>> + */
>> +
>> +#ifndef __VIRTIO_PACKED_H
>> +#define __VIRTIO_PACKED_H
>> +
>> +#define VRING_DESC_F_NEXT       1
>> +#define VRING_DESC_F_WRITE      2
>> +#define VRING_DESC_F_INDIRECT   4
>> +
>> +#define VRING_DESC_F_AVAIL      (1ULL << 7)
>> +#define VRING_DESC_F_USED	(1ULL << 15)
> 
> Please use space or tab consistently.

Right.

Thanks,
Maxime
>> +
>> +struct vring_desc_packed {
>> +	uint64_t addr;
>> +	uint32_t len;
>> +	uint16_t index;
>> +	uint16_t flags;
>> +};
>> +
>> +#endif /* __VIRTIO_PACKED_H */
>> -- 
>> 2.14.4
>>


* Re: [PATCH v6 02/15] vhost: add helpers for packed virtqueues
  2018-07-04  5:39   ` Tiwei Bie
@ 2018-07-04 16:03     ` Maxime Coquelin
  0 siblings, 0 replies; 34+ messages in thread
From: Maxime Coquelin @ 2018-07-04 16:03 UTC (permalink / raw)
  To: Tiwei Bie; +Cc: zhihong.wang, jfreimann, dev, mst, jasowang, wexu



On 07/04/2018 07:39 AM, Tiwei Bie wrote:
> On Mon, Jul 02, 2018 at 10:16:16AM +0200, Maxime Coquelin wrote:
>> From: Jens Freimann <jfreimann@redhat.com>
>>
>> Add some helper functions to check descriptor flags
>> and check if a vring is of type packed.
>>
>> Signed-off-by: Jens Freimann <jfreimann@redhat.com>
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>> ---
>>   lib/librte_vhost/vhost.h         | 6 ++++++
>>   lib/librte_vhost/virtio-packed.h | 8 ++++++++
>>   2 files changed, 14 insertions(+)
>>
>> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
>> index cecd5c1ee..cabbc3ab7 100644
>> --- a/lib/librte_vhost/vhost.h
>> +++ b/lib/librte_vhost/vhost.h
>> @@ -318,6 +318,12 @@ struct virtio_net {
>>   	struct vhost_user_extern_ops extern_ops;
>>   } __rte_cache_aligned;
>>   
>> +static __rte_always_inline bool
>> +vq_is_packed(struct virtio_net *dev)
>> +{
>> +	return dev->features & (1ull << VIRTIO_F_RING_PACKED);
>> +}
>> +
>>   #define VHOST_LOG_PAGE	4096
>>   
>>   /*
>> diff --git a/lib/librte_vhost/virtio-packed.h b/lib/librte_vhost/virtio-packed.h
>> index 744b3991b..d386cb6df 100644
>> --- a/lib/librte_vhost/virtio-packed.h
>> +++ b/lib/librte_vhost/virtio-packed.h
>> @@ -19,4 +19,12 @@ struct vring_desc_packed {
>>   	uint16_t flags;
>>   };
>>   
>> +
> 
> There is no need to add above blank line.

Removed.

>> +static inline bool
>> +desc_is_avail(struct vring_desc_packed *desc, bool wrap_counter)
>> +{
>> +	return wrap_counter == !!(desc->flags & VRING_DESC_F_AVAIL) &&
>> +		wrap_counter != !!(desc->flags & VRING_DESC_F_USED);
>> +}
>> +
>>   #endif /* __VIRTIO_PACKED_H */
>> -- 
>> 2.14.4
>>


* Re: [PATCH v6 13/15] vhost: add Tx support for packed ring
  2018-07-04  5:45   ` Tiwei Bie
@ 2018-07-04 16:09     ` Maxime Coquelin
  0 siblings, 0 replies; 34+ messages in thread
From: Maxime Coquelin @ 2018-07-04 16:09 UTC (permalink / raw)
  To: Tiwei Bie; +Cc: zhihong.wang, jfreimann, dev, mst, jasowang, wexu



On 07/04/2018 07:45 AM, Tiwei Bie wrote:
> On Mon, Jul 02, 2018 at 10:16:27AM +0200, Maxime Coquelin wrote:
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>> ---
>>   lib/librte_vhost/vhost.h      |   1 +
>>   lib/librte_vhost/virtio_net.c | 121 +++++++++++++++++++++++++++++++++++++++++-
>>   2 files changed, 121 insertions(+), 1 deletion(-)
>>
>> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
>> index a7320469a..6ea8fb896 100644
>> --- a/lib/librte_vhost/vhost.h
>> +++ b/lib/librte_vhost/vhost.h
>> @@ -56,6 +56,7 @@ struct buf_vector {
>>   struct zcopy_mbuf {
>>   	struct rte_mbuf *mbuf;
>>   	uint32_t desc_idx;
>> +	uint16_t desc_count;
>>   	uint16_t in_use;
>>   
>>   	TAILQ_ENTRY(zcopy_mbuf) next;
>> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
>> index 2e286e228..03dd38235 100644
>> --- a/lib/librte_vhost/virtio_net.c
>> +++ b/lib/librte_vhost/virtio_net.c
>> @@ -1445,6 +1445,122 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
>>   	return i;
>>   }
>>   
>> +static __rte_always_inline uint16_t
>> +virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
>> +	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
>> +{
>> +	uint16_t i;
>> +
>> +	rte_prefetch0(&vq->desc_packed[vq->last_avail_idx]);
>> +
>> +	if (unlikely(dev->dequeue_zero_copy)) {
>> +		struct zcopy_mbuf *zmbuf, *next;
>> +		int nr_updated = 0;
>> +
>> +		for (zmbuf = TAILQ_FIRST(&vq->zmbuf_list);
>> +		     zmbuf != NULL; zmbuf = next) {
>> +			next = TAILQ_NEXT(zmbuf, next);
>> +
>> +			if (mbuf_is_consumed(zmbuf->mbuf)) {
>> +				update_shadow_used_ring_packed(vq,
>> +						zmbuf->desc_idx,
>> +						0,
>> +						zmbuf->desc_count);
>> +				nr_updated += 1;
> 
> nr_updated isn't really used.

Right, it seems it should be removed in the _split version too, but I'll 
do that in a separate patch.

Thanks,
Maxime
>> +
>> +				TAILQ_REMOVE(&vq->zmbuf_list, zmbuf, next);
>> +				restore_mbuf(zmbuf->mbuf);
>> +				rte_pktmbuf_free(zmbuf->mbuf);
>> +				put_zmbuf(zmbuf);
>> +				vq->nr_zmbuf -= 1;
>> +			}
>> +		}
>> +
>> +		flush_shadow_used_ring_packed(dev, vq);
>> +		vhost_vring_call(dev, vq);
>> +	}
>> +
> [...]
> 


* Re: [PATCH v6 11/15] vhost: add vector filling support for packed ring
  2018-07-04  5:53   ` Tiwei Bie
@ 2018-07-04 16:18     ` Maxime Coquelin
  0 siblings, 0 replies; 34+ messages in thread
From: Maxime Coquelin @ 2018-07-04 16:18 UTC (permalink / raw)
  To: Tiwei Bie; +Cc: zhihong.wang, jfreimann, dev, mst, jasowang, wexu



On 07/04/2018 07:53 AM, Tiwei Bie wrote:
> On Mon, Jul 02, 2018 at 10:16:25AM +0200, Maxime Coquelin wrote:
> [...]
>> +static __rte_always_inline int
>> +fill_vec_buf_packed_indirect(struct virtio_net *dev,
>> +			struct vhost_virtqueue *vq,
>> +			struct vring_desc_packed *desc, uint16_t *vec_idx,
>> +			struct buf_vector *buf_vec, uint16_t *len, uint8_t perm)
>> +{
>> +	uint16_t i;
>> +	uint32_t nr_decs;
>> +	uint16_t vec_id = *vec_idx;
>> +	uint64_t dlen;
>> +	struct vring_desc_packed *descs, *idescs = NULL;
>> +
>> +	dlen = desc->len;
>> +	descs = (struct vring_desc_packed *)(uintptr_t)
>> +		vhost_iova_to_vva(dev, vq, desc->addr, &dlen, VHOST_ACCESS_RO);
>> +	if (unlikely(!descs))
>> +		return -1;
>> +
>> +	if (unlikely(dlen < desc->len)) {
>> +		/*
>> +		 * The indirect desc table is not contiguous
>> +		 * in process VA space, we have to copy it.
>> +		 */
>> +		idescs = alloc_copy_ind_table(dev, vq, desc->addr, desc->len);
>> +		if (unlikely(!idescs))
>> +			return -1;
>> +
>> +		descs = idescs;
>> +	}
>> +
>> +	nr_decs =  desc->len / sizeof(struct vring_desc_packed);
> 
> s/nr_decs =  /nr_desc = /

Fixed.

> 
> 
>> +	if (unlikely(nr_decs >= vq->size)) {
>> +		free_ind_table(idescs);
>> +		return -1;
>> +	}
> [...]
>> +
>> +static inline int
>> +fill_vec_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
>> +				uint16_t avail_idx, uint16_t *desc_count,
>> +				struct buf_vector *buf_vec, uint16_t *vec_idx,
>> +				uint16_t *buf_id, uint16_t *len, uint8_t perm)
>> +{
>> +	bool wrap_counter = vq->avail_wrap_counter;
>> +	struct vring_desc_packed *descs = vq->desc_packed;
>> +	uint16_t vec_id = *vec_idx;
>> +
>> +	if (avail_idx < vq->last_avail_idx)
>> +		wrap_counter ^= 1;
> 
> In which case avail_idx will be less than vq->last_avail_idx
> and we need to wrap the wrap_counter?

In the receive mergeable case, it can happen (see patch 12):

static inline int
reserve_avail_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
				uint32_t size, struct buf_vector *buf_vec,
				uint16_t *nr_vec, uint16_t *num_buffers,
				uint16_t *nr_descs)
{
	uint16_t avail_idx;
	uint16_t vec_idx = 0;
	uint16_t max_tries, tries = 0;

	uint16_t buf_id = 0;
	uint16_t len = 0;
	uint16_t desc_count;

	*num_buffers = 0;
	avail_idx = vq->last_avail_idx;

	if (rxvq_is_mergeable(dev))
		max_tries = vq->size;
	else
		max_tries = 1;

	while (size > 0) {
		if (unlikely(fill_vec_buf_packed(dev, vq,
						avail_idx, &desc_count,
						buf_vec, &vec_idx,
						&buf_id, &len,
						VHOST_ACCESS_RO) < 0))
			return -1;

		len = RTE_MIN(len, size);
		update_shadow_used_ring_packed(vq, buf_id, len, desc_count);
		size -= len;

		avail_idx += desc_count;
		if (avail_idx >= vq->size)
			avail_idx -= vq->size;

		*nr_descs += desc_count;
		tries++;
		*num_buffers += 1;

		/*
		 * if we tried all available ring items, and still
		 * can't get enough buf, it means something abnormal
		 * happened.
		 */
		if (unlikely(tries > max_tries))
			return -1;
	}

	*nr_vec = vec_idx;

	return 0;
}

>> +
>> +	if (unlikely(!desc_is_avail(&descs[avail_idx], wrap_counter)))
>> +		return -1;
>> +
>> +	*desc_count = 0;
>> +
>> +	while (1) {
>> +		if (unlikely(vec_id >= BUF_VECTOR_MAX))
>> +			return -1;
>> +
>> +		*desc_count += 1;
>> +		*buf_id = descs[avail_idx].index;
>> +
>> +		if (descs[avail_idx].flags & VRING_DESC_F_INDIRECT) {
>> +			if (unlikely(fill_vec_buf_packed_indirect(dev, vq,
>> +							&descs[avail_idx],
>> +							&vec_id, buf_vec,
>> +							len, perm) < 0))
>> +				return -1;
>> +		} else {
>> +			*len += descs[avail_idx].len;
>> +
>> +			if (unlikely(map_one_desc(dev, vq, buf_vec, &vec_id,
>> +							descs[avail_idx].addr,
>> +							descs[avail_idx].len,
>> +							perm)))
>> +				return -1;
>> +		}
>> +
>> +		if ((descs[avail_idx].flags & VRING_DESC_F_NEXT) == 0)
>> +			break;
>> +
>> +		if (++avail_idx >= vq->size) {
>> +			avail_idx -= vq->size;
>> +			wrap_counter ^= 1;
>> +		}
>> +	}
>> +
>> +	*vec_idx = vec_id;
>> +
>> +	return 0;
>> +}
> [...]
> 


* Re: [PATCH v6 10/15] vhost: create descriptor mapping function
  2018-07-04  5:56   ` Tiwei Bie
@ 2018-07-04 16:18     ` Maxime Coquelin
  0 siblings, 0 replies; 34+ messages in thread
From: Maxime Coquelin @ 2018-07-04 16:18 UTC (permalink / raw)
  To: Tiwei Bie; +Cc: zhihong.wang, jfreimann, dev, mst, jasowang, wexu



On 07/04/2018 07:56 AM, Tiwei Bie wrote:
> On Mon, Jul 02, 2018 at 10:16:24AM +0200, Maxime Coquelin wrote:
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>> ---
>>   lib/librte_vhost/virtio_net.c | 61 ++++++++++++++++++++++++++-----------------
>>   1 file changed, 37 insertions(+), 24 deletions(-)
>>
>> diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
>> index f0e2e6a1f..64664b7de 100644
>> --- a/lib/librte_vhost/virtio_net.c
>> +++ b/lib/librte_vhost/virtio_net.c
>> @@ -292,6 +292,37 @@ virtio_enqueue_offload(struct rte_mbuf *m_buf, struct virtio_net_hdr *net_hdr)
>>   	}
>>   }
>>   
>> +static __rte_always_inline int
>> +map_one_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
>> +		struct buf_vector *buf_vec, uint16_t *vec_idx,
>> +		uint64_t desc_iova, uint64_t desc_len, uint8_t perm)
>> +{
>> +	uint16_t vec_id = *vec_idx;
>> +
>> +	while (desc_len) {
>> +		uint64_t desc_addr;
>> +		uint64_t desc_chunck_len = desc_len;
>> +
>> +		desc_addr = vhost_iova_to_vva(dev, vq,
>> +				desc_iova,
>> +				&desc_chunck_len,
>> +				perm);
>> +		if (unlikely(!desc_addr))
>> +			return -1;
>> +
>> +		buf_vec[vec_id].buf_iova = desc_iova;
>> +		buf_vec[vec_id].buf_addr = desc_addr;
>> +		buf_vec[vec_id].buf_len  = desc_chunck_len;
>> +
>> +		desc_len -= desc_chunck_len;
>> +		desc_iova += desc_chunck_len;
>> +		vec_id++;
> 
> FYI, a rebase on top of the latest "vhost: generalize
> buffer vectors" series is needed to make sure that
> access buf_vec[vec_id] won't cause overflow.

Right, this is fixed now that I rebased.

Thanks,
Maxime

>> +	}
>> +	*vec_idx = vec_id;
>> +
>> +	return 0;
>> +}
>> +
> [...]
> 


* Re: [PATCH v6 14/15] vhost: add notification for packed ring
  2018-07-04  6:25   ` Tiwei Bie
@ 2018-07-04 20:20     ` Maxime Coquelin
  0 siblings, 0 replies; 34+ messages in thread
From: Maxime Coquelin @ 2018-07-04 20:20 UTC (permalink / raw)
  To: Tiwei Bie; +Cc: zhihong.wang, jfreimann, dev, mst, jasowang, wexu



On 07/04/2018 08:25 AM, Tiwei Bie wrote:
> On Mon, Jul 02, 2018 at 10:16:28AM +0200, Maxime Coquelin wrote:
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>> ---
>>   lib/librte_vhost/vhost.c         | 73 ++++++++++++++++++++++++++++++++++++----
>>   lib/librte_vhost/vhost.h         | 71 ++++++++++++++++++++++++++++++++++----
>>   lib/librte_vhost/vhost_user.c    | 24 +++++++++++++
>>   lib/librte_vhost/virtio-packed.h | 11 ++++++
>>   lib/librte_vhost/virtio_net.c    | 12 +++----
>>   5 files changed, 172 insertions(+), 19 deletions(-)
>>
>> diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
>> index 8538302c9..78f20c402 100644
>> --- a/lib/librte_vhost/vhost.c
>> +++ b/lib/librte_vhost/vhost.c
>> @@ -171,6 +171,24 @@ vring_translate_packed(struct virtio_net *dev, struct vhost_virtqueue *vq)
>>   	if (!vq->desc_packed || size != req_size)
>>   		return -1;
>>   
>> +	req_size = sizeof(struct vring_packed_desc_event);
>> +	size = req_size;
>> +	vq->driver_event = (struct vring_packed_desc_event *)(uintptr_t)
>> +		vhost_iova_to_vva(dev,
>> +					vq,	vq->ring_addrs.avail_user_addr,
>> +					&size, VHOST_ACCESS_RW);
> 
> It should be a space instead of a tab after "vq,"
> 
> Why not put "vq, vq->ring_addrs.avail_user_addr,"
> and "vhost_iova_to_vva(dev," on the same line?
> 
>> +	if (!vq->driver_event || size != req_size)
>> +		return -1;
>> +
>> +	req_size = sizeof(struct vring_packed_desc_event);
>> +	size = req_size;
>> +	vq->device_event = (struct vring_packed_desc_event *)(uintptr_t)
>> +		vhost_iova_to_vva(dev,
>> +					vq, vq->ring_addrs.used_user_addr,
> 
> It's better to put "vhost_iova_to_vva(dev," and
> "vq, vq->ring_addrs.used_user_addr," on the same line.
> 
> Currently, it looks like something as this in my editor:
> 
> ¦       vq->device_event = (struct vring_packed_desc_event *)(uintptr_t)
> ¦       ¦       vhost_iova_to_vva(dev,
> ¦       ¦       ¦       ¦       ¦       vq, vq->ring_addrs.used_user_addr,
> ¦       ¦       ¦       ¦       ¦       &size, VHOST_ACCESS_RW);
> 

It looked a bit cleaner in my editor :) This is fixed now.

>> +					&size, VHOST_ACCESS_RW);
>> +	if (!vq->device_event || size != req_size)
>> +		return -1;
>> +
>>   	return 0;
>>   }
>>   
> [...]
>> @@ -640,4 +647,54 @@ vhost_vring_call(struct virtio_net *dev, struct vhost_virtqueue *vq)
>>   	}
>>   }
>>   
>> +static __rte_always_inline void
>> +vhost_vring_call_packed(struct virtio_net *dev, struct vhost_virtqueue *vq)
>> +{
>> +	uint16_t old, new, off, off_wrap, wrap;
>> +	bool kick = false;
>> +
>> +
> 
> There is no need to have two blank lines.
> 
>> +	/*  Flush used desc update. */
> 
> Just need one space between "/*" and "Flush".
> 
>> +	rte_smp_mb();
>> +
>> +	if (!(dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX))) {
>> +		if (vq->driver_event->desc_event_flags !=
>> +				RING_EVENT_FLAGS_DISABLE)
>> +			kick = true;
>> +		goto kick;
>> +	}
>> +
>> +	old = vq->signalled_used;
>> +	new = vq->last_used_idx;
>> +	vq->signalled_used = new;
>> +
>> +	if (vq->driver_event->desc_event_flags != RING_EVENT_FLAGS_DESC) {
>> +		if (vq->driver_event->desc_event_flags !=
>> +				RING_EVENT_FLAGS_DISABLE)
>> +			kick = true;
>> +		goto kick;
>> +	}
>> +
>> +	rte_smp_rmb();
>> +
>> +	off_wrap = vq->driver_event->desc_event_off_wrap;
>> +	off = off_wrap & ~(1 << 15);
> 
> Maybe it's better to use: RING_EVENT_OFF_MASK

I planned to remove its definition to be consistent with
the virtio_ring.h kernel header.

> 
>> +	wrap = vq->used_wrap_counter;
>> +
>> +	if (new < old) {
>> +		new += vq->size;
>> +		wrap ^= 1;
>> +	}
>> +
>> +	if (wrap != off_wrap >> 15)
>> +		off += vq->size;
>> +
>> +	if (vhost_need_event(off, new, old))
>> +		kick = true;
>> +
>> +kick:
>> +	if (kick)
>> +		eventfd_write(vq->callfd, (eventfd_t)1);
>> +}
>> +
>>   #endif /* _VHOST_NET_CDEV_H_ */
>> diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
>> index b2b57de57..bda515bdb 100644
>> --- a/lib/librte_vhost/vhost_user.c
>> +++ b/lib/librte_vhost/vhost_user.c
>> @@ -523,6 +523,30 @@ translate_ring_addresses(struct virtio_net *dev, int vq_index)
>>   		vq = dev->virtqueue[vq_index];
>>   		addr = &vq->ring_addrs;
>>   
>> +		len = sizeof(struct vring_packed_desc_event);
>> +		vq->driver_event = (struct vring_packed_desc_event *)
>> +					(uintptr_t)ring_addr_to_vva(dev,
>> +					vq, addr->avail_user_addr, &len);
>> +		if (vq->driver_event == 0 ||
> 
> It's better to compare with NULL.

Yes.

>> +				len != sizeof(struct vring_packed_desc_event)) {
>> +			RTE_LOG(DEBUG, VHOST_CONFIG,
>> +				"(%d) failed to find driver area address.\n",
>> +				dev->vid);
>> +			return dev;
>> +		}
>> +
>> +		len = sizeof(struct vring_packed_desc_event);
>> +		vq->device_event = (struct vring_packed_desc_event *)
>> +					(uintptr_t)ring_addr_to_vva(dev,
>> +					vq, addr->used_user_addr, &len);
>> +		if (vq->device_event == 0 ||
> 
> It's better to compare with NULL.
> 
>> +				len != sizeof(struct vring_packed_desc_event)) {
>> +			RTE_LOG(DEBUG, VHOST_CONFIG,
>> +				"(%d) failed to find device area address.\n",
>> +				dev->vid);
>> +			return dev;
>> +		}
>> +
>>   		return dev;
>>   	}
>>   
>> diff --git a/lib/librte_vhost/virtio-packed.h b/lib/librte_vhost/virtio-packed.h
>> index d386cb6df..ce3b28313 100644
>> --- a/lib/librte_vhost/virtio-packed.h
>> +++ b/lib/librte_vhost/virtio-packed.h
> 
> Normally, we name c/h files as something like: virtio_packed.h
> 
> Besides, it'd be better to move the definitions into
> vhost.h and define the types and flags for packed
> ring only when e.g. VIRTIO_F_RING_PACKED not defined.
> Just like how VIRTIO_F_IOMMU_PLATFORM works:
> 
> /* Declare IOMMU related bits for older kernels */
> #ifndef VIRTIO_F_IOMMU_PLATFORM
> 
> #define VIRTIO_F_IOMMU_PLATFORM 33
> 
> struct vhost_iotlb_msg {
> 	__u64 iova;
> 	__u64 size;
> 	__u64 uaddr;
> #define VHOST_ACCESS_RO      0x1
> #define VHOST_ACCESS_WO      0x2
> #define VHOST_ACCESS_RW      0x3
> 	__u8 perm;
> #define VHOST_IOTLB_MISS           1
> #define VHOST_IOTLB_UPDATE         2
> #define VHOST_IOTLB_INVALIDATE     3
> #define VHOST_IOTLB_ACCESS_FAIL    4
> 	__u8 type;
> };
> 
> #define VHOST_IOTLB_MSG 0x1
> 
> struct vhost_msg {
> 	int type;
> 	union {
> 		struct vhost_iotlb_msg iotlb;
> 		__u8 padding[64];
> 	};
> };
> #endif

I agree. I replied to Jason's series so that the define names are
consistent.

>> @@ -19,6 +19,17 @@ struct vring_desc_packed {
>>   	uint16_t flags;
>>   };
>>   
>> +#define RING_EVENT_FLAGS_ENABLE 0x0
>> +#define RING_EVENT_FLAGS_DISABLE 0x1
>> +#define RING_EVENT_FLAGS_DESC 0x2
>> +#define RING_EVENT_FLAGS_MASK 0xFFFC
> 
> RING_EVENT_FLAGS_MASK should be 0x3?

I actually removed it, but yes I agree.

>> +#define RING_EVENT_WRAP_MASK 0x8000
>> +#define RING_EVENT_OFF_MASK 0x7FFF
> 
> It's better to define above macros as VRING_EVENT_*

Yes, I changed it to VRING_EVENT_F_* as you suggested to Jason.

>> +
>> +struct vring_packed_desc_event {
>> +	uint16_t desc_event_off_wrap;
>> +	uint16_t desc_event_flags;
>> +};
>>   
> [...]
> 


* Re: [PATCH v6 07/15] vhost: extract split ring handling from Rx and Tx functions
  2018-07-04  6:51   ` Tiwei Bie
@ 2018-07-04 21:04     ` Maxime Coquelin
  0 siblings, 0 replies; 34+ messages in thread
From: Maxime Coquelin @ 2018-07-04 21:04 UTC (permalink / raw)
  To: Tiwei Bie; +Cc: zhihong.wang, jfreimann, dev, mst, jasowang, wexu



On 07/04/2018 08:51 AM, Tiwei Bie wrote:
> On Mon, Jul 02, 2018 at 10:16:21AM +0200, Maxime Coquelin wrote:
> [...]
>> +
>> +	if (unlikely(vq->access_ok == 0))
>> +		if (unlikely(vring_translate(dev, vq) < 0))
>> +			goto out;
>> +
>> +
> 
> Please just keep one blank line.
> 
>> +	/*
>> +	 * Construct a RARP broadcast packet, and inject it to the "pkts"
>> +	 * array, to looks like that guest actually send such packet.
>> +	 *
>> +	 * Check user_send_rarp() for more information.
>> +	 *
>> +	 * broadcast_rarp shares a cacheline in the virtio_net structure
>> +	 * with some fields that are accessed during enqueue and
>> +	 * rte_atomic16_cmpset() causes a write if using cmpxchg. This could
>> +	 * result in false sharing between enqueue and dequeue.
>> +	 *
>> +	 * Prevent unnecessary false sharing by reading broadcast_rarp first
>> +	 * and only performing cmpset if the read indicates it is likely to
>> +	 * be set.
>> +	 */
>> +
> 
> Please remove above blank line.
> 
>> +	if (unlikely(rte_atomic16_read(&dev->broadcast_rarp) &&
>> +			rte_atomic16_cmpset((volatile uint16_t *)
>> +				&dev->broadcast_rarp.cnt, 1, 0))) {
>> +
>> +		rarp_mbuf = rte_net_make_rarp_packet(mbuf_pool, &dev->mac);
>> +		if (rarp_mbuf == NULL) {
>> +			RTE_LOG(ERR, VHOST_DATA,
>> +				"Failed to make RARP packet.\n");
>> +			return 0;
>> +		}
>> +		count -= 1;
>> +	}
>> +
>> +	count = virtio_dev_tx_split(dev, vq, mbuf_pool, pkts, count);
>> +
>>   out:
>>   	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
>>   		vhost_user_iotlb_rd_unlock(vq);
>> @@ -1200,10 +1222,10 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
>>   		 * Inject it to the head of "pkts" array, so that switch's mac
>>   		 * learning table will get updated first.
>>   		 */
>> -		memmove(&pkts[1], pkts, i * sizeof(struct rte_mbuf *));
>> +		memmove(&pkts[1], pkts, count * sizeof(struct rte_mbuf *));
>>   		pkts[0] = rarp_mbuf;
>> -		i += 1;
>> +		count += 1;
>>   	}
>>   
>> -	return i;
>> +	return count;
>>   }
>> -- 
>> 2.14.4
>>

Done,

Thanks,
Maxime


end of thread, other threads:[~2018-07-04 21:04 UTC | newest]

Thread overview: 34+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2018-07-02  8:16 [PATCH v6 00/15] Vhost: add support to packed ring layout Maxime Coquelin
2018-07-02  8:16 ` [PATCH v6 01/15] vhost: add virtio packed virtqueue defines Maxime Coquelin
2018-07-04  5:37   ` Tiwei Bie
2018-07-04 16:03     ` Maxime Coquelin
2018-07-02  8:16 ` [PATCH v6 02/15] vhost: add helpers for packed virtqueues Maxime Coquelin
2018-07-04  5:39   ` Tiwei Bie
2018-07-04 16:03     ` Maxime Coquelin
2018-07-02  8:16 ` [PATCH v6 03/15] vhost: vring address setup for packed queues Maxime Coquelin
2018-07-02  8:16 ` [PATCH v6 04/15] vhost: clear shadow used table index at flush time Maxime Coquelin
2018-07-02  8:16 ` [PATCH v6 05/15] vhost: make indirect desc table copy desc type agnostic Maxime Coquelin
2018-07-02  8:16 ` [PATCH v6 06/15] vhost: clear batch copy index at copy time Maxime Coquelin
2018-07-02  8:16 ` [PATCH v6 07/15] vhost: extract split ring handling from Rx and Tx functions Maxime Coquelin
2018-07-04  6:51   ` Tiwei Bie
2018-07-04 21:04     ` Maxime Coquelin
2018-07-02  8:16 ` [PATCH v6 08/15] vhost: append shadow used ring function names with split Maxime Coquelin
2018-07-02  8:16 ` [PATCH v6 09/15] vhost: add shadow used ring support for packed rings Maxime Coquelin
2018-07-02  8:16 ` [PATCH v6 10/15] vhost: create descriptor mapping function Maxime Coquelin
2018-07-04  5:56   ` Tiwei Bie
2018-07-04 16:18     ` Maxime Coquelin
2018-07-02  8:16 ` [PATCH v6 11/15] vhost: add vector filling support for packed ring Maxime Coquelin
2018-07-04  5:53   ` Tiwei Bie
2018-07-04 16:18     ` Maxime Coquelin
2018-07-02  8:16 ` [PATCH v6 12/15] vhost: add Rx " Maxime Coquelin
2018-07-02  8:16 ` [PATCH v6 13/15] vhost: add Tx " Maxime Coquelin
2018-07-04  5:45   ` Tiwei Bie
2018-07-04 16:09     ` Maxime Coquelin
2018-07-02  8:16 ` [PATCH v6 14/15] vhost: add notification " Maxime Coquelin
2018-07-03  5:57   ` Jason Wang
2018-07-03  6:43     ` Maxime Coquelin
2018-07-03  6:06   ` Jason Wang
2018-07-04 16:02     ` Maxime Coquelin
2018-07-04  6:25   ` Tiwei Bie
2018-07-04 20:20     ` Maxime Coquelin
2018-07-02  8:16 ` [PATCH v6 15/15] vhost: advertize packed ring layout support Maxime Coquelin
