All of lore.kernel.org
* [PATCH v5 00/15] Vhost: add support to packed ring layout
@ 2018-06-22 13:43 Maxime Coquelin
  2018-06-22 13:43 ` [PATCH v5 01/15] vhost: add virtio packed virtqueue defines Maxime Coquelin
                   ` (14 more replies)
  0 siblings, 15 replies; 26+ messages in thread
From: Maxime Coquelin @ 2018-06-22 13:43 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

This series is a handover from Jens' "[PATCH v4 00/20]
implement packed virtqueues", and implements only the
vhost side. The Virtio PMD implementation will follow in a
subsequent series.

The series applies on top of the previous reworks I posted
during this cycle, which merge the mergeable and non-mergeable
receive paths and refactor the transmit path to re-use
vector buffers.

I haven't run performance tests yet, as the Virtio PMD
side isn't ready.

The series has been tested with Tiwei's series implementing
packed ring support in the kernel's virtio-net driver, and
with Wei's series implementing the QEMU side.

To test it, I have used testpmd on the host side with a vhost
vdev and a tap vdev forwarding to each other. Transfers
of big random files have been done in both directions, with
integrity verified.

Tests have been run with Rx mergeable buffers ON/OFF and event
suppression ON/OFF.

Tests have also been run with the legacy split ring layout to
ensure no regressions have been introduced.
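For readers new to the packed ring layout: a single descriptor ring replaces the split ring's separate descriptor/avail/used structures, and availability is signalled through two per-descriptor flag bits interpreted against a wrap counter. The following is an illustrative sketch only (not code from this series); the names mirror the defines added in patch 01/15 and the virtio 1.1 draft.

```c
#include <stdbool.h>
#include <stdint.h>

#define VRING_DESC_F_AVAIL (1 << 7)
#define VRING_DESC_F_USED  (1 << 15)

/*
 * A descriptor is available when its AVAIL bit matches the driver's
 * wrap counter and its USED bit does not; the device marks it used by
 * setting both bits equal to its own wrap counter.
 */
static bool desc_is_avail(uint16_t flags, bool wrap_counter)
{
	bool avail = !!(flags & VRING_DESC_F_AVAIL);
	bool used = !!(flags & VRING_DESC_F_USED);

	return avail == wrap_counter && used != wrap_counter;
}
```

On the first pass over the ring the wrap counter is 1, so the driver makes a descriptor available by setting AVAIL and clearing USED; on the next pass the counter flips to 0 and the bit patterns invert, which is why no separate avail index is needed.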

Jens Freimann (2):
  vhost: add virtio packed virtqueue defines
  vhost: add helpers for packed virtqueues

Maxime Coquelin (12):
  vhost: clear shadow used table index at flush time
  vhost: make indirect desc table copy desc type agnostic
  vhost: clear batch copy index at copy time
  vhost: extract split ring handling from Rx and Tx functions
  vhost: append shadow used ring function names with split
  vhost: add shadow used ring support for packed rings
  vhost: create descriptor mapping function
  vhost: add vector filling support for packed ring
  vhost: add Rx support for packed ring
  vhost: add Tx support for packed ring
  vhost: add notification for packed ring
  vhost: advertize packed ring layout support

Yuanhan Liu (1):
  vhost: vring address setup for packed queues

 lib/librte_vhost/vhost.c         | 108 +++++-
 lib/librte_vhost/vhost.h         | 105 +++++-
 lib/librte_vhost/vhost_user.c    | 114 +++++-
 lib/librte_vhost/virtio-packed.h |  49 +++
 lib/librte_vhost/virtio_net.c    | 770 ++++++++++++++++++++++++++++++---------
 5 files changed, 940 insertions(+), 206 deletions(-)
 create mode 100644 lib/librte_vhost/virtio-packed.h

-- 
2.14.4

^ permalink raw reply	[flat|nested] 26+ messages in thread

* [PATCH v5 01/15] vhost: add virtio packed virtqueue defines
  2018-06-22 13:43 [PATCH v5 00/15] Vhost: add support to packed ring layout Maxime Coquelin
@ 2018-06-22 13:43 ` Maxime Coquelin
  2018-06-29 15:47   ` Tiwei Bie
  2018-06-22 13:43 ` [PATCH v5 02/15] vhost: add helpers for packed virtqueues Maxime Coquelin
                   ` (13 subsequent siblings)
  14 siblings, 1 reply; 26+ messages in thread
From: Maxime Coquelin @ 2018-06-22 13:43 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev; +Cc: mst, jasowang, wexu

From: Jens Freimann <jfreimann@redhat.com>

Signed-off-by: Jens Freimann <jfreimann@redhat.com>
---
 lib/librte_vhost/vhost.h         |  4 ++++
 lib/librte_vhost/virtio-packed.h | 22 ++++++++++++++++++++++
 2 files changed, 26 insertions(+)
 create mode 100644 lib/librte_vhost/virtio-packed.h

diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 3f8fa3a78..bf2059198 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -191,6 +191,10 @@ struct vhost_msg {
 #ifndef VIRTIO_F_VERSION_1
  #define VIRTIO_F_VERSION_1 32
 #endif
+#ifndef VIRTIO_F_RING_PACKED
+ #define VIRTIO_F_RING_PACKED 34
+#endif
+#define VHOST_USER_F_PROTOCOL_FEATURES	30
 
 /* Features supported by this builtin vhost-user net driver. */
 #define VIRTIO_NET_SUPPORTED_FEATURES ((1ULL << VIRTIO_NET_F_MRG_RXBUF) | \
diff --git a/lib/librte_vhost/virtio-packed.h b/lib/librte_vhost/virtio-packed.h
new file mode 100644
index 000000000..744b3991b
--- /dev/null
+++ b/lib/librte_vhost/virtio-packed.h
@@ -0,0 +1,22 @@
+/* SPDX-License-Identifier: BSD-3-Clause
+ * Copyright(c) Red Hat Inc.
+ */
+
+#ifndef __VIRTIO_PACKED_H
+#define __VIRTIO_PACKED_H
+
+#define VRING_DESC_F_NEXT       1
+#define VRING_DESC_F_WRITE      2
+#define VRING_DESC_F_INDIRECT   4
+
+#define VRING_DESC_F_AVAIL      (1ULL << 7)
+#define VRING_DESC_F_USED	(1ULL << 15)
+
+struct vring_desc_packed {
+	uint64_t addr;
+	uint32_t len;
+	uint16_t index;
+	uint16_t flags;
+};
+
+#endif /* __VIRTIO_PACKED_H */
-- 
2.14.4


* [PATCH v5 02/15] vhost: add helpers for packed virtqueues
  2018-06-22 13:43 [PATCH v5 00/15] Vhost: add support to packed ring layout Maxime Coquelin
  2018-06-22 13:43 ` [PATCH v5 01/15] vhost: add virtio packed virtqueue defines Maxime Coquelin
@ 2018-06-22 13:43 ` Maxime Coquelin
  2018-06-29 15:51   ` Tiwei Bie
  2018-06-22 13:43 ` [PATCH v5 03/15] vhost: vring address setup for packed queues Maxime Coquelin
                   ` (12 subsequent siblings)
  14 siblings, 1 reply; 26+ messages in thread
From: Maxime Coquelin @ 2018-06-22 13:43 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

From: Jens Freimann <jfreimann@redhat.com>

Add helper functions to check descriptor flags
and to check whether a vring uses the packed layout.
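As a usage sketch (a hypothetical caller, not part of this patch), the data path is expected to branch on the negotiated feature bit before touching either ring layout. The struct below is a minimal stand-in for struct virtio_net, keeping only the feature bits.

```c
#include <stdbool.h>
#include <stdint.h>

#define VIRTIO_F_RING_PACKED 34

/* Minimal stand-in for struct virtio_net; only feature bits matter here. */
struct virtio_net {
	uint64_t features;
};

/* True when the packed ring layout was negotiated (feature bit 34). */
static bool vq_is_packed(const struct virtio_net *dev)
{
	return (dev->features & (1ULL << VIRTIO_F_RING_PACKED)) != 0;
}
```

A device that negotiated bit 34 takes the packed-ring code path; everything else falls back to the existing split-ring handling.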

Signed-off-by: Jens Freimann <jfreimann@redhat.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/vhost.h         |  6 ++++++
 lib/librte_vhost/virtio-packed.h | 16 ++++++++++++++++
 2 files changed, 22 insertions(+)

diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index bf2059198..34a614c97 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -319,6 +319,12 @@ struct virtio_net {
 	struct vhost_user_extern_ops extern_ops;
 } __rte_cache_aligned;
 
+static __rte_always_inline bool
+vq_is_packed(struct virtio_net *dev)
+{
+	return dev->features & (1ull << VIRTIO_F_RING_PACKED);
+}
+
 #define VHOST_LOG_PAGE	4096
 
 /*
diff --git a/lib/librte_vhost/virtio-packed.h b/lib/librte_vhost/virtio-packed.h
index 744b3991b..21fe5045b 100644
--- a/lib/librte_vhost/virtio-packed.h
+++ b/lib/librte_vhost/virtio-packed.h
@@ -19,4 +19,20 @@ struct vring_desc_packed {
 	uint16_t flags;
 };
 
+
+static inline int
+desc_is_avail(struct vring_desc_packed *desc, uint16_t wrap_counter)
+{
+	if (wrap_counter == 1) {
+		if ((desc->flags & VRING_DESC_F_AVAIL) &&
+				!(desc->flags & VRING_DESC_F_USED))
+			return 1;
+	} else {
+		if (!(desc->flags & VRING_DESC_F_AVAIL) &&
+				(desc->flags & VRING_DESC_F_USED))
+			return 1;
+	}
+	return 0;
+}
+
 #endif /* __VIRTIO_PACKED_H */
-- 
2.14.4


* [PATCH v5 03/15] vhost: vring address setup for packed queues
  2018-06-22 13:43 [PATCH v5 00/15] Vhost: add support to packed ring layout Maxime Coquelin
  2018-06-22 13:43 ` [PATCH v5 01/15] vhost: add virtio packed virtqueue defines Maxime Coquelin
  2018-06-22 13:43 ` [PATCH v5 02/15] vhost: add helpers for packed virtqueues Maxime Coquelin
@ 2018-06-22 13:43 ` Maxime Coquelin
  2018-06-29 15:59   ` Tiwei Bie
  2018-06-22 13:43 ` [PATCH v5 04/15] vhost: clear shadow used table index at flush time Maxime Coquelin
                   ` (11 subsequent siblings)
  14 siblings, 1 reply; 26+ messages in thread
From: Maxime Coquelin @ 2018-06-22 13:43 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Yuanhan Liu, Jens Freimann, Maxime Coquelin

From: Yuanhan Liu <yuanhan.liu@linux.intel.com>

Add code to set up packed queues when enabled.
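The avail/used wrap counters initialised to 1 below flip each time the ring wraps around. A sketch of the expected index-advance behaviour (illustrative only; the struct is a stand-in for the ring-position fields of struct vhost_virtqueue, and the helper name is hypothetical):

```c
#include <stdint.h>

/* Stand-in for the ring-position fields of struct vhost_virtqueue. */
struct vq_pos {
	uint16_t last_avail_idx;
	uint16_t avail_wrap_counter;
	uint16_t size;
};

/* Advance to the next descriptor, flipping the wrap counter on wrap-around. */
static void vq_inc_last_avail(struct vq_pos *vq)
{
	if (++vq->last_avail_idx >= vq->size) {
		vq->last_avail_idx = 0;
		vq->avail_wrap_counter ^= 1;
	}
}
```

Starting both counters at 1 matches the device view of a freshly reset ring: the first lap expects AVAIL set and USED clear.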

Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
Signed-off-by: Jens Freimann <jfreiman@redhat.com>
Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/vhost.c      | 44 ++++++++++++++++++++++++++++++++++++++-----
 lib/librte_vhost/vhost.h      |  7 ++++++-
 lib/librte_vhost/vhost_user.c | 29 +++++++++++++++++++++++++++-
 3 files changed, 73 insertions(+), 7 deletions(-)

diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
index afded4952..a85c6646f 100644
--- a/lib/librte_vhost/vhost.c
+++ b/lib/librte_vhost/vhost.c
@@ -23,6 +23,7 @@
 #include "iotlb.h"
 #include "vhost.h"
 #include "vhost_user.h"
+#include "virtio-packed.h"
 
 struct virtio_net *vhost_devices[MAX_VHOST_DEVICE];
 
@@ -115,14 +116,11 @@ free_device(struct virtio_net *dev)
 	rte_free(dev);
 }
 
-int
-vring_translate(struct virtio_net *dev, struct vhost_virtqueue *vq)
+static int
+vring_translate_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
 {
 	uint64_t req_size, size;
 
-	if (!(dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)))
-		goto out;
-
 	req_size = sizeof(struct vring_desc) * vq->size;
 	size = req_size;
 	vq->desc = (struct vring_desc *)(uintptr_t)vhost_iova_to_vva(dev, vq,
@@ -153,6 +151,40 @@ vring_translate(struct virtio_net *dev, struct vhost_virtqueue *vq)
 	if (!vq->used || size != req_size)
 		return -1;
 
+	return 0;
+}
+
+static int
+vring_translate_packed(struct virtio_net *dev, struct vhost_virtqueue *vq)
+{
+	uint64_t req_size, size;
+
+	req_size = sizeof(struct vring_desc_packed) * vq->size;
+	size = req_size;
+	vq->desc_packed =
+		(struct vring_desc_packed *)(uintptr_t)vhost_iova_to_vva(dev,
+					vq, vq->ring_addrs.desc_user_addr,
+					&size, VHOST_ACCESS_RW);
+	if (!vq->desc_packed || size != req_size)
+		return -1;
+
+	return 0;
+}
+
+int
+vring_translate(struct virtio_net *dev, struct vhost_virtqueue *vq)
+{
+
+	if (!(dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)))
+		goto out;
+
+	if (vq_is_packed(dev)) {
+		if (vring_translate_packed(dev, vq) < 0)
+			return -1;
+	} else {
+		if (vring_translate_split(dev, vq) < 0)
+			return -1;
+	}
 out:
 	vq->access_ok = 1;
 
@@ -234,6 +266,8 @@ alloc_vring_queue(struct virtio_net *dev, uint32_t vring_idx)
 	dev->virtqueue[vring_idx] = vq;
 	init_vring_queue(dev, vring_idx);
 	rte_spinlock_init(&vq->access_lock);
+	vq->avail_wrap_counter = 1;
+	vq->used_wrap_counter = 1;
 
 	dev->nr_vring += 1;
 
diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 34a614c97..671b4b3bf 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -84,7 +84,10 @@ struct log_cache_entry {
  * Structure contains variables relevant to RX/TX virtqueues.
  */
 struct vhost_virtqueue {
-	struct vring_desc	*desc;
+	union {
+		struct vring_desc	*desc;
+		struct vring_desc_packed   *desc_packed;
+	};
 	struct vring_avail	*avail;
 	struct vring_used	*used;
 	uint32_t		size;
@@ -122,6 +125,8 @@ struct vhost_virtqueue {
 
 	struct batch_copy_elem	*batch_copy_elems;
 	uint16_t		batch_copy_nb_elems;
+	uint16_t		used_wrap_counter;
+	uint16_t		avail_wrap_counter;
 
 	struct log_cache_entry log_cache[VHOST_LOG_CACHE_NR];
 	uint16_t log_cache_nb_elem;
diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
index 947290fc3..b6097c085 100644
--- a/lib/librte_vhost/vhost_user.c
+++ b/lib/librte_vhost/vhost_user.c
@@ -39,6 +39,7 @@
 #include "iotlb.h"
 #include "vhost.h"
 #include "vhost_user.h"
+#include "virtio-packed.h"
 
 #define VIRTIO_MIN_MTU 68
 #define VIRTIO_MAX_MTU 65535
@@ -477,6 +478,30 @@ translate_ring_addresses(struct virtio_net *dev, int vq_index)
 	struct vhost_vring_addr *addr = &vq->ring_addrs;
 	uint64_t len;
 
+	if (vq_is_packed(dev)) {
+		len = sizeof(struct vring_desc_packed) * vq->size;
+		vq->desc_packed = (struct vring_desc_packed *) ring_addr_to_vva
+			(dev, vq, addr->desc_user_addr, &len);
+		vq->desc = NULL;
+		vq->avail = NULL;
+		vq->used = NULL;
+		vq->log_guest_addr = 0;
+		if (vq->desc_packed == 0 ||
+				len != sizeof(struct vring_desc_packed) *
+				vq->size) {
+			RTE_LOG(DEBUG, VHOST_CONFIG,
+				"(%d) failed to map desc_packed ring.\n",
+				dev->vid);
+			return dev;
+		}
+
+		dev = numa_realloc(dev, vq_index);
+		vq = dev->virtqueue[vq_index];
+		addr = &vq->ring_addrs;
+
+		return dev;
+	}
+
 	/* The addresses are converted from QEMU virtual to Vhost virtual. */
 	if (vq->desc && vq->avail && vq->used)
 		return dev;
@@ -490,6 +515,7 @@ translate_ring_addresses(struct virtio_net *dev, int vq_index)
 			dev->vid);
 		return dev;
 	}
+	vq->desc_packed = NULL;
 
 	dev = numa_realloc(dev, vq_index);
 	vq = dev->virtqueue[vq_index];
@@ -888,7 +914,8 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *pmsg)
 static int
 vq_is_ready(struct vhost_virtqueue *vq)
 {
-	return vq && vq->desc && vq->avail && vq->used &&
+	return vq &&
+	       (vq->desc_packed || (vq->desc && vq->avail && vq->used)) &&
 	       vq->kickfd != VIRTIO_UNINITIALIZED_EVENTFD &&
 	       vq->callfd != VIRTIO_UNINITIALIZED_EVENTFD;
 }
-- 
2.14.4


* [PATCH v5 04/15] vhost: clear shadow used table index at flush time
  2018-06-22 13:43 [PATCH v5 00/15] Vhost: add support to packed ring layout Maxime Coquelin
                   ` (2 preceding siblings ...)
  2018-06-22 13:43 ` [PATCH v5 03/15] vhost: vring address setup for packed queues Maxime Coquelin
@ 2018-06-22 13:43 ` Maxime Coquelin
  2018-06-22 13:43 ` [PATCH v5 05/15] vhost: make indirect desc table copy desc type agnostic Maxime Coquelin
                   ` (10 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Maxime Coquelin @ 2018-06-22 13:43 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 4 +---
 1 file changed, 1 insertion(+), 3 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 2c13a59a2..28ce2d073 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -115,6 +115,7 @@ flush_shadow_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq)
 	vhost_log_cache_sync(dev, vq);
 
 	*(volatile uint16_t *)&vq->used->idx += vq->shadow_used_idx;
+	vq->shadow_used_idx = 0;
 	vhost_log_used_vring(dev, vq, offsetof(struct vring_used, idx),
 		sizeof(vq->used->idx));
 }
@@ -561,7 +562,6 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 
 	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
 
-	vq->shadow_used_idx = 0;
 	avail_head = *((volatile uint16_t *)&vq->avail->idx);
 	for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
 		uint32_t pkt_len = pkts[pkt_idx]->pkt_len + dev->vhost_hlen;
@@ -1048,7 +1048,6 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 		goto out_access_unlock;
 
 	vq->batch_copy_nb_elems = 0;
-	vq->shadow_used_idx = 0;
 
 	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
 		vhost_user_iotlb_rd_lock(vq);
@@ -1079,7 +1078,6 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 
 		flush_shadow_used_ring(dev, vq);
 		vhost_vring_call(dev, vq);
-		vq->shadow_used_idx = 0;
 	}
 
 	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
-- 
2.14.4


* [PATCH v5 05/15] vhost: make indirect desc table copy desc type agnostic
  2018-06-22 13:43 [PATCH v5 00/15] Vhost: add support to packed ring layout Maxime Coquelin
                   ` (3 preceding siblings ...)
  2018-06-22 13:43 ` [PATCH v5 04/15] vhost: clear shadow used table index at flush time Maxime Coquelin
@ 2018-06-22 13:43 ` Maxime Coquelin
  2018-06-22 13:43 ` [PATCH v5 06/15] vhost: clear batch copy index at copy time Maxime Coquelin
                   ` (9 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Maxime Coquelin @ 2018-06-22 13:43 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 28ce2d073..5cfc100a9 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -37,16 +37,15 @@ is_valid_virt_queue_idx(uint32_t idx, int is_tx, uint32_t nr_vring)
 	return (is_tx ^ (idx & 1)) == 0 && idx < nr_vring;
 }
 
-static __rte_always_inline struct vring_desc *
+static __rte_always_inline void *
 alloc_copy_ind_table(struct virtio_net *dev, struct vhost_virtqueue *vq,
-					 struct vring_desc *desc)
+					 uint64_t desc_addr, uint64_t desc_len)
 {
-	struct vring_desc *idesc;
+	void *idesc;
 	uint64_t src, dst;
-	uint64_t len, remain = desc->len;
-	uint64_t desc_addr = desc->addr;
+	uint64_t len, remain = desc_len;
 
-	idesc = rte_malloc(__func__, desc->len, 0);
+	idesc = rte_malloc(__func__, desc_len, 0);
 	if (unlikely(!idesc))
 		return 0;
 
@@ -72,7 +71,7 @@ alloc_copy_ind_table(struct virtio_net *dev, struct vhost_virtqueue *vq,
 }
 
 static __rte_always_inline void
-free_ind_table(struct vring_desc *idesc)
+free_ind_table(void *idesc)
 {
 	rte_free(idesc);
 }
@@ -251,7 +250,8 @@ fill_vec_buf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			 * The indirect desc table is not contiguous
 			 * in process VA space, we have to copy it.
 			 */
-			idesc = alloc_copy_ind_table(dev, vq, &vq->desc[idx]);
+			idesc = alloc_copy_ind_table(dev, vq,
+					vq->desc[idx].addr, vq->desc[idx].len);
 			if (unlikely(!idesc))
 				return -1;
 
-- 
2.14.4


* [PATCH v5 06/15] vhost: clear batch copy index at copy time
  2018-06-22 13:43 [PATCH v5 00/15] Vhost: add support to packed ring layout Maxime Coquelin
                   ` (4 preceding siblings ...)
  2018-06-22 13:43 ` [PATCH v5 05/15] vhost: make indirect desc table copy desc type agnostic Maxime Coquelin
@ 2018-06-22 13:43 ` Maxime Coquelin
  2018-06-22 13:43 ` [PATCH v5 07/15] vhost: extract split ring handling from Rx and Tx functions Maxime Coquelin
                   ` (8 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Maxime Coquelin @ 2018-06-22 13:43 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 8 ++++----
 1 file changed, 4 insertions(+), 4 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 5cfc100a9..816d5fc1d 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -141,6 +141,8 @@ do_data_copy_enqueue(struct virtio_net *dev, struct vhost_virtqueue *vq)
 		vhost_log_cache_write(dev, vq, elem[i].log_addr, elem[i].len);
 		PRINT_PACKET(dev, (uintptr_t)elem[i].dst, elem[i].len, 0);
 	}
+
+	vq->batch_copy_nb_elems = 0;
 }
 
 static inline void
@@ -152,6 +154,8 @@ do_data_copy_dequeue(struct vhost_virtqueue *vq)
 
 	for (i = 0; i < count; i++)
 		rte_memcpy(elem[i].dst, elem[i].src, elem[i].len);
+
+	vq->batch_copy_nb_elems = 0;
 }
 
 /* avoid write operation when necessary, to lessen cache issues */
@@ -558,8 +562,6 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 	if (count == 0)
 		goto out;
 
-	vq->batch_copy_nb_elems = 0;
-
 	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
 
 	avail_head = *((volatile uint16_t *)&vq->avail->idx);
@@ -1047,8 +1049,6 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 	if (unlikely(vq->enabled == 0))
 		goto out_access_unlock;
 
-	vq->batch_copy_nb_elems = 0;
-
 	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
 		vhost_user_iotlb_rd_lock(vq);
 
-- 
2.14.4


* [PATCH v5 07/15] vhost: extract split ring handling from Rx and Tx functions
  2018-06-22 13:43 [PATCH v5 00/15] Vhost: add support to packed ring layout Maxime Coquelin
                   ` (5 preceding siblings ...)
  2018-06-22 13:43 ` [PATCH v5 06/15] vhost: clear batch copy index at copy time Maxime Coquelin
@ 2018-06-22 13:43 ` Maxime Coquelin
  2018-06-22 13:43 ` [PATCH v5 08/15] vhost: append shadow used ring function names with split Maxime Coquelin
                   ` (7 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Maxime Coquelin @ 2018-06-22 13:43 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 240 +++++++++++++++++++++++-------------------
 1 file changed, 131 insertions(+), 109 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 816d5fc1d..385876527 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -226,13 +226,13 @@ virtio_enqueue_offload(struct rte_mbuf *m_buf, struct virtio_net_hdr *net_hdr)
 }
 
 static __rte_always_inline int
-fill_vec_buf(struct virtio_net *dev, struct vhost_virtqueue *vq,
-			 uint32_t avail_idx, uint32_t *vec_idx,
+fill_vec_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
+			 uint32_t avail_idx, uint16_t *vec_idx,
 			 struct buf_vector *buf_vec, uint16_t *desc_chain_head,
 			 uint16_t *desc_chain_len, uint8_t perm)
 {
 	uint16_t idx = vq->avail->ring[avail_idx & (vq->size - 1)];
-	uint32_t vec_id = *vec_idx;
+	uint16_t vec_id = *vec_idx;
 	uint32_t len    = 0;
 	uint64_t dlen, desc_avail, desc_iova;
 	struct vring_desc *descs = vq->desc;
@@ -317,13 +317,13 @@ fill_vec_buf(struct virtio_net *dev, struct vhost_virtqueue *vq,
  * Returns -1 on fail, 0 on success
  */
 static inline int
-reserve_avail_buf(struct virtio_net *dev, struct vhost_virtqueue *vq,
+reserve_avail_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 				uint32_t size, struct buf_vector *buf_vec,
 				uint16_t *num_buffers, uint16_t avail_head,
 				uint16_t *nr_vec)
 {
 	uint16_t cur_idx;
-	uint32_t vec_idx = 0;
+	uint16_t vec_idx = 0;
 	uint16_t max_tries, tries = 0;
 
 	uint16_t head_idx = 0;
@@ -341,7 +341,8 @@ reserve_avail_buf(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		if (unlikely(cur_idx == avail_head))
 			return -1;
 
-		if (unlikely(fill_vec_buf(dev, vq, cur_idx, &vec_idx, buf_vec,
+		if (unlikely(fill_vec_buf_split(dev, vq, cur_idx,
+						&vec_idx, buf_vec,
 						&head_idx, &len,
 						VHOST_ACCESS_RO) < 0))
 			return -1;
@@ -528,48 +529,22 @@ copy_mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 }
 
 static __rte_always_inline uint32_t
-virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
+virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	struct rte_mbuf **pkts, uint32_t count)
 {
-	struct vhost_virtqueue *vq;
 	uint32_t pkt_idx = 0;
 	uint16_t num_buffers;
 	struct buf_vector buf_vec[BUF_VECTOR_MAX];
 	uint16_t avail_head;
 
-	VHOST_LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__);
-	if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) {
-		RTE_LOG(ERR, VHOST_DATA, "(%d) %s: invalid virtqueue idx %d.\n",
-			dev->vid, __func__, queue_id);
-		return 0;
-	}
-
-	vq = dev->virtqueue[queue_id];
-
-	rte_spinlock_lock(&vq->access_lock);
-
-	if (unlikely(vq->enabled == 0))
-		goto out_access_unlock;
-
-	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
-		vhost_user_iotlb_rd_lock(vq);
-
-	if (unlikely(vq->access_ok == 0))
-		if (unlikely(vring_translate(dev, vq) < 0))
-			goto out;
-
-	count = RTE_MIN((uint32_t)MAX_PKT_BURST, count);
-	if (count == 0)
-		goto out;
-
 	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
-
 	avail_head = *((volatile uint16_t *)&vq->avail->idx);
+
 	for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
 		uint32_t pkt_len = pkts[pkt_idx]->pkt_len + dev->vhost_hlen;
 		uint16_t nr_vec = 0;
 
-		if (unlikely(reserve_avail_buf(dev, vq,
+		if (unlikely(reserve_avail_buf_split(dev, vq,
 						pkt_len, buf_vec, &num_buffers,
 						avail_head, &nr_vec) < 0)) {
 			VHOST_LOG_DEBUG(VHOST_DATA,
@@ -602,6 +577,42 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 		vhost_vring_call(dev, vq);
 	}
 
+	return pkt_idx;
+}
+
+static __rte_always_inline uint32_t
+virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
+	struct rte_mbuf **pkts, uint32_t count)
+{
+	struct vhost_virtqueue *vq;
+
+	VHOST_LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__);
+	if (unlikely(!is_valid_virt_queue_idx(queue_id, 0, dev->nr_vring))) {
+		RTE_LOG(ERR, VHOST_DATA, "(%d) %s: invalid virtqueue idx %d.\n",
+			dev->vid, __func__, queue_id);
+		return 0;
+	}
+
+	vq = dev->virtqueue[queue_id];
+
+	rte_spinlock_lock(&vq->access_lock);
+
+	if (unlikely(vq->enabled == 0))
+		goto out_access_unlock;
+
+	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
+		vhost_user_iotlb_rd_lock(vq);
+
+	if (unlikely(vq->access_ok == 0))
+		if (unlikely(vring_translate(dev, vq) < 0))
+			goto out;
+
+	count = RTE_MIN((uint32_t)MAX_PKT_BURST, count);
+	if (count == 0)
+		goto out;
+
+	count = virtio_dev_rx_split(dev, vq, pkts, count);
+
 out:
 	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
 		vhost_user_iotlb_rd_unlock(vq);
@@ -609,7 +620,7 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 out_access_unlock:
 	rte_spinlock_unlock(&vq->access_lock);
 
-	return pkt_idx;
+	return count;
 }
 
 uint16_t
@@ -1014,48 +1025,13 @@ restore_mbuf(struct rte_mbuf *m)
 	}
 }
 
-uint16_t
-rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
+static __rte_always_inline uint16_t
+virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
 {
-	struct virtio_net *dev;
-	struct rte_mbuf *rarp_mbuf = NULL;
-	struct vhost_virtqueue *vq;
-	uint32_t i = 0;
+	uint16_t i;
 	uint16_t free_entries;
 
-	dev = get_device(vid);
-	if (!dev)
-		return 0;
-
-	if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) {
-		RTE_LOG(ERR, VHOST_DATA,
-			"(%d) %s: built-in vhost net backend is disabled.\n",
-			dev->vid, __func__);
-		return 0;
-	}
-
-	if (unlikely(!is_valid_virt_queue_idx(queue_id, 1, dev->nr_vring))) {
-		RTE_LOG(ERR, VHOST_DATA, "(%d) %s: invalid virtqueue idx %d.\n",
-			dev->vid, __func__, queue_id);
-		return 0;
-	}
-
-	vq = dev->virtqueue[queue_id];
-
-	if (unlikely(rte_spinlock_trylock(&vq->access_lock) == 0))
-		return 0;
-
-	if (unlikely(vq->enabled == 0))
-		goto out_access_unlock;
-
-	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
-		vhost_user_iotlb_rd_lock(vq);
-
-	if (unlikely(vq->access_ok == 0))
-		if (unlikely(vring_translate(dev, vq) < 0))
-			goto out;
-
 	if (unlikely(dev->dequeue_zero_copy)) {
 		struct zcopy_mbuf *zmbuf, *next;
 		int nr_updated = 0;
@@ -1082,39 +1058,10 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 
 	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
 
-	/*
-	 * Construct a RARP broadcast packet, and inject it to the "pkts"
-	 * array, to looks like that guest actually send such packet.
-	 *
-	 * Check user_send_rarp() for more information.
-	 *
-	 * broadcast_rarp shares a cacheline in the virtio_net structure
-	 * with some fields that are accessed during enqueue and
-	 * rte_atomic16_cmpset() causes a write if using cmpxchg. This could
-	 * result in false sharing between enqueue and dequeue.
-	 *
-	 * Prevent unnecessary false sharing by reading broadcast_rarp first
-	 * and only performing cmpset if the read indicates it is likely to
-	 * be set.
-	 */
-
-	if (unlikely(rte_atomic16_read(&dev->broadcast_rarp) &&
-			rte_atomic16_cmpset((volatile uint16_t *)
-				&dev->broadcast_rarp.cnt, 1, 0))) {
-
-		rarp_mbuf = rte_net_make_rarp_packet(mbuf_pool, &dev->mac);
-		if (rarp_mbuf == NULL) {
-			RTE_LOG(ERR, VHOST_DATA,
-				"Failed to make RARP packet.\n");
-			return 0;
-		}
-		count -= 1;
-	}
-
 	free_entries = *((volatile uint16_t *)&vq->avail->idx) -
 		vq->last_avail_idx;
 	if (free_entries == 0)
-		goto out;
+		return 0;
 
 	VHOST_LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__);
 
@@ -1126,10 +1073,10 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 	for (i = 0; i < count; i++) {
 		struct buf_vector buf_vec[BUF_VECTOR_MAX];
 		uint16_t head_idx, dummy_len;
-		uint32_t nr_vec = 0;
+		uint16_t nr_vec = 0;
 		int err;
 
-		if (unlikely(fill_vec_buf(dev, vq,
+		if (unlikely(fill_vec_buf_split(dev, vq,
 						vq->last_avail_idx + i,
 						&nr_vec, buf_vec,
 						&head_idx, &dummy_len,
@@ -1188,6 +1135,81 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 		vhost_vring_call(dev, vq);
 	}
 
+	return i;
+}
+
+uint16_t
+rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
+	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
+{
+	struct virtio_net *dev;
+	struct rte_mbuf *rarp_mbuf = NULL;
+	struct vhost_virtqueue *vq;
+
+	dev = get_device(vid);
+	if (!dev)
+		return 0;
+
+	if (unlikely(!(dev->flags & VIRTIO_DEV_BUILTIN_VIRTIO_NET))) {
+		RTE_LOG(ERR, VHOST_DATA,
+			"(%d) %s: built-in vhost net backend is disabled.\n",
+			dev->vid, __func__);
+		return 0;
+	}
+
+	if (unlikely(!is_valid_virt_queue_idx(queue_id, 1, dev->nr_vring))) {
+		RTE_LOG(ERR, VHOST_DATA, "(%d) %s: invalid virtqueue idx %d.\n",
+			dev->vid, __func__, queue_id);
+		return 0;
+	}
+
+	vq = dev->virtqueue[queue_id];
+
+	if (unlikely(rte_spinlock_trylock(&vq->access_lock) == 0))
+		return 0;
+
+	if (unlikely(vq->enabled == 0))
+		goto out_access_unlock;
+
+	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
+		vhost_user_iotlb_rd_lock(vq);
+
+	if (unlikely(vq->access_ok == 0))
+		if (unlikely(vring_translate(dev, vq) < 0))
+			goto out;
+
+
+	/*
+	 * Construct a RARP broadcast packet, and inject it to the "pkts"
+	 * array, to looks like that guest actually send such packet.
+	 *
+	 * Check user_send_rarp() for more information.
+	 *
+	 * broadcast_rarp shares a cacheline in the virtio_net structure
+	 * with some fields that are accessed during enqueue and
+	 * rte_atomic16_cmpset() causes a write if using cmpxchg. This could
+	 * result in false sharing between enqueue and dequeue.
+	 *
+	 * Prevent unnecessary false sharing by reading broadcast_rarp first
+	 * and only performing cmpset if the read indicates it is likely to
+	 * be set.
+	 */
+
+	if (unlikely(rte_atomic16_read(&dev->broadcast_rarp) &&
+			rte_atomic16_cmpset((volatile uint16_t *)
+				&dev->broadcast_rarp.cnt, 1, 0))) {
+
+		rarp_mbuf = rte_net_make_rarp_packet(mbuf_pool, &dev->mac);
+		if (rarp_mbuf == NULL) {
+			RTE_LOG(ERR, VHOST_DATA,
+				"Failed to make RARP packet.\n");
+			return 0;
+		}
+		count -= 1;
+	}
+
+	count = virtio_dev_tx_split(dev, vq, mbuf_pool, pkts, count);
+
 out:
 	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
 		vhost_user_iotlb_rd_unlock(vq);
@@ -1200,10 +1222,10 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 		 * Inject it to the head of "pkts" array, so that switch's mac
 		 * learning table will get updated first.
 		 */
-		memmove(&pkts[1], pkts, i * sizeof(struct rte_mbuf *));
+		memmove(&pkts[1], pkts, count * sizeof(struct rte_mbuf *));
 		pkts[0] = rarp_mbuf;
-		i += 1;
+		count += 1;
 	}
 
-	return i;
+	return count;
 }
-- 
2.14.4


* [PATCH v5 08/15] vhost: append shadow used ring function names with split
  2018-06-22 13:43 [PATCH v5 00/15] Vhost: add support to packed ring layout Maxime Coquelin
                   ` (6 preceding siblings ...)
  2018-06-22 13:43 ` [PATCH v5 07/15] vhost: extract split ring handling from Rx and Tx functions Maxime Coquelin
@ 2018-06-22 13:43 ` Maxime Coquelin
  2018-06-22 13:43 ` [PATCH v5 09/15] vhost: add shadow used ring support for packed rings Maxime Coquelin
                   ` (6 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Maxime Coquelin @ 2018-06-22 13:43 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 28 +++++++++++++++-------------
 1 file changed, 15 insertions(+), 13 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 385876527..35f8cf90a 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -77,8 +77,9 @@ free_ind_table(void *idesc)
 }
 
 static __rte_always_inline void
-do_flush_shadow_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq,
-			  uint16_t to, uint16_t from, uint16_t size)
+do_flush_shadow_used_ring_split(struct virtio_net *dev,
+			struct vhost_virtqueue *vq,
+			uint16_t to, uint16_t from, uint16_t size)
 {
 	rte_memcpy(&vq->used->ring[to],
 			&vq->shadow_used_ring[from],
@@ -89,22 +90,22 @@ do_flush_shadow_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq,
 }
 
 static __rte_always_inline void
-flush_shadow_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq)
+flush_shadow_used_ring_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
 {
 	uint16_t used_idx = vq->last_used_idx & (vq->size - 1);
 
 	if (used_idx + vq->shadow_used_idx <= vq->size) {
-		do_flush_shadow_used_ring(dev, vq, used_idx, 0,
+		do_flush_shadow_used_ring_split(dev, vq, used_idx, 0,
 					  vq->shadow_used_idx);
 	} else {
 		uint16_t size;
 
 		/* update used ring interval [used_idx, vq->size] */
 		size = vq->size - used_idx;
-		do_flush_shadow_used_ring(dev, vq, used_idx, 0, size);
+		do_flush_shadow_used_ring_split(dev, vq, used_idx, 0, size);
 
 		/* update the left half used ring interval [0, left_size] */
-		do_flush_shadow_used_ring(dev, vq, 0, size,
+		do_flush_shadow_used_ring_split(dev, vq, 0, size,
 					  vq->shadow_used_idx - size);
 	}
 	vq->last_used_idx += vq->shadow_used_idx;
@@ -120,7 +121,7 @@ flush_shadow_used_ring(struct virtio_net *dev, struct vhost_virtqueue *vq)
 }
 
 static __rte_always_inline void
-update_shadow_used_ring(struct vhost_virtqueue *vq,
+update_shadow_used_ring_split(struct vhost_virtqueue *vq,
 			 uint16_t desc_idx, uint16_t len)
 {
 	uint16_t i = vq->shadow_used_idx++;
@@ -347,7 +348,7 @@ reserve_avail_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 						VHOST_ACCESS_RO) < 0))
 			return -1;
 		len = RTE_MIN(len, size);
-		update_shadow_used_ring(vq, head_idx, len);
+		update_shadow_used_ring_split(vq, head_idx, len);
 		size -= len;
 
 		cur_idx++;
@@ -573,7 +574,7 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	do_data_copy_enqueue(dev, vq);
 
 	if (likely(vq->shadow_used_idx)) {
-		flush_shadow_used_ring(dev, vq);
+		flush_shadow_used_ring_split(dev, vq);
 		vhost_vring_call(dev, vq);
 	}
 
@@ -1041,7 +1042,8 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			next = TAILQ_NEXT(zmbuf, next);
 
 			if (mbuf_is_consumed(zmbuf->mbuf)) {
-				update_shadow_used_ring(vq, zmbuf->desc_idx, 0);
+				update_shadow_used_ring_split(vq,
+						zmbuf->desc_idx, 0);
 				nr_updated += 1;
 
 				TAILQ_REMOVE(&vq->zmbuf_list, zmbuf, next);
@@ -1052,7 +1054,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			}
 		}
 
-		flush_shadow_used_ring(dev, vq);
+		flush_shadow_used_ring_split(dev, vq);
 		vhost_vring_call(dev, vq);
 	}
 
@@ -1084,7 +1086,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			break;
 
 		if (likely(dev->dequeue_zero_copy == 0))
-			update_shadow_used_ring(vq, head_idx, 0);
+			update_shadow_used_ring_split(vq, head_idx, 0);
 
 		rte_prefetch0((void *)(uintptr_t)buf_vec[0].buf_addr);
 
@@ -1131,7 +1133,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		do_data_copy_dequeue(vq);
 		if (unlikely(i < count))
 			vq->shadow_used_idx = i;
-		flush_shadow_used_ring(dev, vq);
+		flush_shadow_used_ring_split(dev, vq);
 		vhost_vring_call(dev, vq);
 	}
 
-- 
2.14.4


* [PATCH v5 09/15] vhost: add shadow used ring support for packed rings
  2018-06-22 13:43 [PATCH v5 00/15] Vhost: add support to packed ring layout Maxime Coquelin
                   ` (7 preceding siblings ...)
  2018-06-22 13:43 ` [PATCH v5 08/15] vhost: append shadow used ring function names with split Maxime Coquelin
@ 2018-06-22 13:43 ` Maxime Coquelin
  2018-06-29 16:08   ` Tiwei Bie
  2018-06-22 13:43 ` [PATCH v5 10/15] vhost: create descriptor mapping function Maxime Coquelin
                   ` (5 subsequent siblings)
  14 siblings, 1 reply; 26+ messages in thread
From: Maxime Coquelin @ 2018-06-22 13:43 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/vhost.c      |  9 ++++--
 lib/librte_vhost/vhost.h      | 13 ++++++--
 lib/librte_vhost/vhost_user.c | 64 +++++++++++++++++++++++++++++----------
 lib/librte_vhost/virtio_net.c | 70 +++++++++++++++++++++++++++++++++++++++++--
 4 files changed, 132 insertions(+), 24 deletions(-)

diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
index a85c6646f..7cbf1eded 100644
--- a/lib/librte_vhost/vhost.c
+++ b/lib/librte_vhost/vhost.c
@@ -94,9 +94,12 @@ cleanup_device(struct virtio_net *dev, int destroy)
 }
 
 void
-free_vq(struct vhost_virtqueue *vq)
+free_vq(struct virtio_net *dev, struct vhost_virtqueue *vq)
 {
-	rte_free(vq->shadow_used_ring);
+	if (vq_is_packed(dev))
+		rte_free(vq->shadow_used_packed);
+	else
+		rte_free(vq->shadow_used_split);
 	rte_free(vq->batch_copy_elems);
 	rte_mempool_free(vq->iotlb_pool);
 	rte_free(vq);
@@ -111,7 +114,7 @@ free_device(struct virtio_net *dev)
 	uint32_t i;
 
 	for (i = 0; i < dev->nr_vring; i++)
-		free_vq(dev->virtqueue[i]);
+		free_vq(dev, dev->virtqueue[i]);
 
 	rte_free(dev);
 }
diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 671b4b3bf..62d49f238 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -80,6 +80,12 @@ struct log_cache_entry {
 	unsigned long val;
 };
 
+struct vring_used_elem_packed {
+	uint32_t id;
+	uint32_t len;
+	uint32_t count;
+};
+
 /**
  * Structure contains variables relevant to RX/TX virtqueues.
  */
@@ -119,7 +125,10 @@ struct vhost_virtqueue {
 	struct zcopy_mbuf	*zmbufs;
 	struct zcopy_mbuf_list	zmbuf_list;
 
-	struct vring_used_elem  *shadow_used_ring;
+	union {
+		struct vring_used_elem  *shadow_used_split;
+		struct vring_used_elem_packed *shadow_used_packed;
+	};
 	uint16_t                shadow_used_idx;
 	struct vhost_vring_addr ring_addrs;
 
@@ -553,7 +562,7 @@ void reset_device(struct virtio_net *dev);
 void vhost_destroy_device(int);
 
 void cleanup_vq(struct vhost_virtqueue *vq, int destroy);
-void free_vq(struct vhost_virtqueue *vq);
+void free_vq(struct virtio_net *dev, struct vhost_virtqueue *vq);
 
 int alloc_vring_queue(struct virtio_net *dev, uint32_t vring_idx);
 
diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
index b6097c085..a08d99314 100644
--- a/lib/librte_vhost/vhost_user.c
+++ b/lib/librte_vhost/vhost_user.c
@@ -244,7 +244,7 @@ vhost_user_set_features(struct virtio_net *dev, uint64_t features)
 
 			dev->virtqueue[dev->nr_vring] = NULL;
 			cleanup_vq(vq, 1);
-			free_vq(vq);
+			free_vq(dev, vq);
 		}
 	}
 
@@ -293,13 +293,26 @@ vhost_user_set_vring_num(struct virtio_net *dev,
 		TAILQ_INIT(&vq->zmbuf_list);
 	}
 
-	vq->shadow_used_ring = rte_malloc(NULL,
+	if (vq_is_packed(dev)) {
+		vq->shadow_used_packed = rte_malloc(NULL,
+				vq->size *
+				sizeof(struct vring_used_elem_packed),
+				RTE_CACHE_LINE_SIZE);
+		if (!vq->shadow_used_packed) {
+			RTE_LOG(ERR, VHOST_CONFIG,
+					"failed to allocate memory for shadow used ring.\n");
+			return -1;
+		}
+
+	} else {
+		vq->shadow_used_split = rte_malloc(NULL,
 				vq->size * sizeof(struct vring_used_elem),
 				RTE_CACHE_LINE_SIZE);
-	if (!vq->shadow_used_ring) {
-		RTE_LOG(ERR, VHOST_CONFIG,
-			"failed to allocate memory for shadow used ring.\n");
-		return -1;
+		if (!vq->shadow_used_split) {
+			RTE_LOG(ERR, VHOST_CONFIG,
+					"failed to allocate memory for shadow used ring.\n");
+			return -1;
+		}
 	}
 
 	vq->batch_copy_elems = rte_malloc(NULL,
@@ -326,7 +339,8 @@ numa_realloc(struct virtio_net *dev, int index)
 	struct virtio_net *old_dev;
 	struct vhost_virtqueue *old_vq, *vq;
 	struct zcopy_mbuf *new_zmbuf;
-	struct vring_used_elem *new_shadow_used_ring;
+	struct vring_used_elem *new_shadow_used_split;
+	struct vring_used_elem_packed *new_shadow_used_packed;
 	struct batch_copy_elem *new_batch_copy_elems;
 	int ret;
 
@@ -361,13 +375,26 @@ numa_realloc(struct virtio_net *dev, int index)
 			vq->zmbufs = new_zmbuf;
 		}
 
-		new_shadow_used_ring = rte_malloc_socket(NULL,
-			vq->size * sizeof(struct vring_used_elem),
-			RTE_CACHE_LINE_SIZE,
-			newnode);
-		if (new_shadow_used_ring) {
-			rte_free(vq->shadow_used_ring);
-			vq->shadow_used_ring = new_shadow_used_ring;
+		if (vq_is_packed(dev)) {
+			new_shadow_used_packed = rte_malloc_socket(NULL,
+					vq->size *
+					sizeof(struct vring_used_elem_packed),
+					RTE_CACHE_LINE_SIZE,
+					newnode);
+			if (new_shadow_used_packed) {
+				rte_free(vq->shadow_used_packed);
+				vq->shadow_used_packed = new_shadow_used_packed;
+			}
+		} else {
+			new_shadow_used_split = rte_malloc_socket(NULL,
+					vq->size *
+					sizeof(struct vring_used_elem),
+					RTE_CACHE_LINE_SIZE,
+					newnode);
+			if (new_shadow_used_split) {
+				rte_free(vq->shadow_used_split);
+				vq->shadow_used_split = new_shadow_used_split;
+			}
 		}
 
 		new_batch_copy_elems = rte_malloc_socket(NULL,
@@ -1062,8 +1089,13 @@ vhost_user_get_vring_base(struct virtio_net *dev,
 
 	if (dev->dequeue_zero_copy)
 		free_zmbufs(vq);
-	rte_free(vq->shadow_used_ring);
-	vq->shadow_used_ring = NULL;
+	if (vq_is_packed(dev)) {
+		rte_free(vq->shadow_used_packed);
+		vq->shadow_used_packed = NULL;
+	} else {
+		rte_free(vq->shadow_used_split);
+		vq->shadow_used_split = NULL;
+	}
 
 	rte_free(vq->batch_copy_elems);
 	vq->batch_copy_elems = NULL;
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 35f8cf90a..9571d5cdc 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -20,6 +20,7 @@
 
 #include "iotlb.h"
 #include "vhost.h"
+#include "virtio-packed.h"
 
 #define MAX_PKT_BURST 32
 
@@ -82,7 +83,7 @@ do_flush_shadow_used_ring_split(struct virtio_net *dev,
 			uint16_t to, uint16_t from, uint16_t size)
 {
 	rte_memcpy(&vq->used->ring[to],
-			&vq->shadow_used_ring[from],
+			&vq->shadow_used_split[from],
 			size * sizeof(struct vring_used_elem));
 	vhost_log_cache_used_vring(dev, vq,
 			offsetof(struct vring_used, ring[to]),
@@ -126,8 +127,71 @@ update_shadow_used_ring_split(struct vhost_virtqueue *vq,
 {
 	uint16_t i = vq->shadow_used_idx++;
 
-	vq->shadow_used_ring[i].id  = desc_idx;
-	vq->shadow_used_ring[i].len = len;
+	vq->shadow_used_split[i].id  = desc_idx;
+	vq->shadow_used_split[i].len = len;
+}
+
+static __rte_always_inline void
+flush_shadow_used_ring_packed(struct virtio_net *dev,
+			struct vhost_virtqueue *vq)
+{
+	int i;
+	uint16_t used_idx = vq->last_used_idx;
+
+	/* Split loop in two to save memory barriers */
+	for (i = 0; i < vq->shadow_used_idx; i++) {
+		vq->desc_packed[used_idx].index = vq->shadow_used_packed[i].id;
+		vq->desc_packed[used_idx].len = vq->shadow_used_packed[i].len;
+
+		used_idx += vq->shadow_used_packed[i].count;
+	}
+
+	rte_smp_wmb();
+
+	for (i = 0; i < vq->shadow_used_idx; i++) {
+		uint16_t flags;
+
+		if (vq->shadow_used_packed[i].len)
+			flags = VRING_DESC_F_WRITE;
+		else
+			flags = 0;
+
+		if (vq->used_wrap_counter) {
+			flags |= VRING_DESC_F_USED;
+			flags |= VRING_DESC_F_AVAIL;
+		} else {
+			flags &= ~VRING_DESC_F_USED;
+			flags &= ~VRING_DESC_F_AVAIL;
+		}
+
+		vq->desc_packed[vq->last_used_idx].flags = flags;
+
+		vhost_log_cache_used_vring(dev, vq,
+					vq->last_used_idx *
+					sizeof(struct vring_desc_packed),
+					sizeof(struct vring_desc_packed));
+
+		vq->last_used_idx += vq->shadow_used_packed[i].count;
+		if (vq->last_used_idx >= vq->size) {
+			vq->used_wrap_counter ^= 1;
+			vq->last_used_idx -= vq->size;
+		}
+	}
+
+	rte_smp_wmb();
+	vq->shadow_used_idx = 0;
+	vhost_log_cache_sync(dev, vq);
+}
+
+static __rte_always_inline void
+update_shadow_used_ring_packed(struct vhost_virtqueue *vq,
+			 uint16_t desc_idx, uint16_t len, uint16_t count)
+{
+	uint16_t i = vq->shadow_used_idx++;
+
+	vq->shadow_used_packed[i].id  = desc_idx;
+	vq->shadow_used_packed[i].len = len;
+	vq->shadow_used_packed[i].count = count;
 }
 
 static inline void
-- 
2.14.4

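[Editor's note: the flag handling in flush_shadow_used_ring_packed() above can be modeled in isolation. A sketch, assuming the bit positions from the virtio 1.1 packed-ring proposal (AVAIL = bit 7, USED = bit 15); `used_flags` is an illustrative helper, not part of the patch.]

```c
#include <assert.h>
#include <stdint.h>

#define VRING_DESC_F_WRITE (1 << 1)
#define VRING_DESC_F_AVAIL (1 << 7)
#define VRING_DESC_F_USED  (1 << 15)

/*
 * Flags the device writes back for a used descriptor: WRITE marks
 * descriptors that carried data to the guest (len != 0), and AVAIL/USED
 * both mirror the device's wrap counter, so the driver can tell the
 * entry was consumed during the current pass over the ring.
 */
static uint16_t used_flags(uint32_t len, int used_wrap_counter)
{
	uint16_t flags = len ? VRING_DESC_F_WRITE : 0;

	if (used_wrap_counter)
		flags |= VRING_DESC_F_AVAIL | VRING_DESC_F_USED;

	return flags;
}
```

The first loop in the patch stores id/len for all shadow entries, a write barrier is issued, and only then are the flags written — the flags update is what publishes the descriptor to the driver.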

* [PATCH v5 10/15] vhost: create descriptor mapping function
  2018-06-22 13:43 [PATCH v5 00/15] Vhost: add support to packed ring layout Maxime Coquelin
                   ` (8 preceding siblings ...)
  2018-06-22 13:43 ` [PATCH v5 09/15] vhost: add shadow used ring support for packed rings Maxime Coquelin
@ 2018-06-22 13:43 ` Maxime Coquelin
  2018-06-22 13:43 ` [PATCH v5 11/15] vhost: add vector filling support for packed ring Maxime Coquelin
                   ` (4 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Maxime Coquelin @ 2018-06-22 13:43 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 61 ++++++++++++++++++++++++++-----------------
 1 file changed, 37 insertions(+), 24 deletions(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 9571d5cdc..58c7144d0 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -290,6 +290,37 @@ virtio_enqueue_offload(struct rte_mbuf *m_buf, struct virtio_net_hdr *net_hdr)
 	}
 }
 
+static __rte_always_inline int
+map_one_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
+		struct buf_vector *buf_vec, uint16_t *vec_idx,
+		uint64_t desc_iova, uint64_t desc_len, uint8_t perm)
+{
+	uint16_t vec_id = *vec_idx;
+
+	while (desc_len) {
+		uint64_t desc_addr;
+		uint64_t desc_chunck_len = desc_len;
+
+		desc_addr = vhost_iova_to_vva(dev, vq,
+				desc_iova,
+				&desc_chunck_len,
+				perm);
+		if (unlikely(!desc_addr))
+			return -1;
+
+		buf_vec[vec_id].buf_iova = desc_iova;
+		buf_vec[vec_id].buf_addr = desc_addr;
+		buf_vec[vec_id].buf_len  = desc_chunck_len;
+
+		desc_len -= desc_chunck_len;
+		desc_iova += desc_chunck_len;
+		vec_id++;
+	}
+	*vec_idx = vec_id;
+
+	return 0;
+}
+
 static __rte_always_inline int
 fill_vec_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			 uint32_t avail_idx, uint16_t *vec_idx,
@@ -299,7 +330,7 @@ fill_vec_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	uint16_t idx = vq->avail->ring[avail_idx & (vq->size - 1)];
 	uint16_t vec_id = *vec_idx;
 	uint32_t len    = 0;
-	uint64_t dlen, desc_avail, desc_iova;
+	uint64_t dlen;
 	struct vring_desc *descs = vq->desc;
 	struct vring_desc *idesc = NULL;
 
@@ -337,30 +368,12 @@ fill_vec_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		}
 
 		len += descs[idx].len;
-		desc_avail = descs[idx].len;
-		desc_iova = descs[idx].addr;
-
-		while (desc_avail) {
-			uint64_t desc_addr;
-			uint64_t desc_chunck_len = desc_avail;
-
-			desc_addr = vhost_iova_to_vva(dev, vq,
-					desc_iova,
-					&desc_chunck_len,
-					perm);
-			if (unlikely(!desc_addr)) {
-				free_ind_table(idesc);
-				return -1;
-			}
-
-			buf_vec[vec_id].buf_iova = desc_iova;
-			buf_vec[vec_id].buf_addr = desc_addr;
-			buf_vec[vec_id].buf_len  = desc_chunck_len;
-			buf_vec[vec_id].desc_idx = idx;
 
-			desc_avail -= desc_chunck_len;
-			desc_iova += desc_chunck_len;
-			vec_id++;
+		if (unlikely(map_one_desc(dev, vq, buf_vec, &vec_id,
+						descs[idx].addr, descs[idx].len,
+						perm))) {
+			free_ind_table(idesc);
+			return -1;
 		}
 
 		if ((descs[idx].flags & VRING_DESC_F_NEXT) == 0)
-- 
2.14.4

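[Editor's note: the chunking loop factored out into map_one_desc() above exists because a guest-physically-contiguous buffer may not be contiguous in the host process's virtual address space. A minimal sketch of the same shape; `toy_iova_to_vva` is a hypothetical stand-in for vhost_iova_to_vva() that maps at most up to the next 4K boundary.]

```c
#include <assert.h>
#include <stdint.h>

#define BUF_VECTOR_MAX 4
#define PAGE 4096

struct buf_vector { uint64_t buf_iova, buf_addr, buf_len; };

/* Toy translator: may shorten *len when the region is not contiguous
 * in process VA space, just like vhost_iova_to_vva(). */
static uint64_t toy_iova_to_vva(uint64_t iova, uint64_t *len)
{
	uint64_t left = PAGE - (iova & (PAGE - 1));

	if (*len > left)
		*len = left;
	return 0x1000000 + iova; /* fake identity-plus-offset mapping */
}

/* Walk one descriptor, emitting a buf_vector entry per contiguous
 * chunk returned by the translator. */
static int map_one_desc(struct buf_vector *vec, uint16_t *vec_idx,
			uint64_t iova, uint64_t len)
{
	uint16_t id = *vec_idx;

	while (len) {
		uint64_t chunk = len;
		uint64_t vva = toy_iova_to_vva(iova, &chunk);

		if (!vva || id >= BUF_VECTOR_MAX)
			return -1;

		vec[id].buf_iova = iova;
		vec[id].buf_addr = vva;
		vec[id].buf_len = chunk;

		len -= chunk;
		iova += chunk;
		id++;
	}
	*vec_idx = id;
	return 0;
}
```

Factoring this loop out is what lets the later packed-ring patches reuse it for both direct and indirect descriptors.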

* [PATCH v5 11/15] vhost: add vector filling support for packed ring
  2018-06-22 13:43 [PATCH v5 00/15] Vhost: add support to packed ring layout Maxime Coquelin
                   ` (9 preceding siblings ...)
  2018-06-22 13:43 ` [PATCH v5 10/15] vhost: create descriptor mapping function Maxime Coquelin
@ 2018-06-22 13:43 ` Maxime Coquelin
  2018-06-22 13:43 ` [PATCH v5 12/15] vhost: add Rx " Maxime Coquelin
                   ` (3 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Maxime Coquelin @ 2018-06-22 13:43 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 111 ++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 111 insertions(+)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 58c7144d0..28b0cb566 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -446,6 +446,117 @@ reserve_avail_buf_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	return 0;
 }
 
+static __rte_always_inline int
+fill_vec_buf_packed_indirect(struct virtio_net *dev,
+			struct vhost_virtqueue *vq,
+			struct vring_desc_packed *desc, uint16_t *vec_idx,
+			struct buf_vector *buf_vec, uint16_t *len, uint8_t perm)
+{
+	uint16_t i;
+	uint32_t nr_descs;
+	uint16_t vec_id = *vec_idx;
+	uint64_t dlen;
+	struct vring_desc_packed *descs, *idescs = NULL;
+
+	dlen = desc->len;
+	descs = (struct vring_desc_packed *)(uintptr_t)
+		vhost_iova_to_vva(dev, vq, desc->addr, &dlen, VHOST_ACCESS_RO);
+	if (unlikely(!descs))
+		return -1;
+
+	if (unlikely(dlen < desc->len)) {
+		/*
+		 * The indirect desc table is not contiguous
+		 * in process VA space; we have to copy it.
+		 */
+		idescs = alloc_copy_ind_table(dev, vq, desc->addr, desc->len);
+		if (unlikely(!idescs))
+			return -1;
+
+		descs = idescs;
+	}
+
+	nr_descs = desc->len / sizeof(struct vring_desc_packed);
+	if (unlikely(nr_descs >= vq->size)) {
+		free_ind_table(idescs);
+		return -1;
+	}
+
+	for (i = 0; i < nr_descs; i++) {
+		if (unlikely(vec_id >= BUF_VECTOR_MAX)) {
+			free_ind_table(idescs);
+			return -1;
+		}
+
+		*len += descs[i].len;
+		if (unlikely(map_one_desc(dev, vq, buf_vec, &vec_id,
+						descs[i].addr, descs[i].len,
+						perm)))
+			return -1;
+	}
+	*vec_idx = vec_id;
+
+	if (unlikely(!!idescs))
+		free_ind_table(idescs);
+
+	return 0;
+}
+
+static inline int
+fill_vec_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
+				uint16_t avail_idx, uint16_t *desc_count,
+				struct buf_vector *buf_vec, uint16_t *vec_idx,
+				uint16_t *buf_id, uint16_t *len, uint8_t perm)
+{
+	uint16_t wrap_counter = vq->avail_wrap_counter;
+	struct vring_desc_packed *descs = vq->desc_packed;
+	uint16_t vec_id = *vec_idx;
+
+	if (avail_idx < vq->last_avail_idx)
+		wrap_counter ^= 1;
+
+	if (unlikely(!desc_is_avail(&descs[avail_idx], wrap_counter)))
+		return -1;
+
+	*desc_count = 0;
+
+	while (1) {
+		if (unlikely(vec_id >= BUF_VECTOR_MAX))
+			return -1;
+
+		*desc_count += 1;
+		*buf_id = descs[avail_idx].index;
+
+		if (descs[avail_idx].flags & VRING_DESC_F_INDIRECT) {
+			if (unlikely(fill_vec_buf_packed_indirect(dev, vq,
+							&descs[avail_idx],
+							&vec_id, buf_vec,
+							len, perm) < 0))
+				return -1;
+		} else {
+			*len += descs[avail_idx].len;
+
+			if (unlikely(map_one_desc(dev, vq, buf_vec, &vec_id,
+							descs[avail_idx].addr,
+							descs[avail_idx].len,
+							perm)))
+				return -1;
+		}
+
+		if ((descs[avail_idx].flags & VRING_DESC_F_NEXT) == 0)
+			break;
+
+		if (++avail_idx >= vq->size) {
+			avail_idx -= vq->size;
+			wrap_counter ^= 1;
+		}
+	}
+
+	*vec_idx = vec_id;
+
+	return 0;
+}
+
 static __rte_always_inline int
 copy_mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			    struct rte_mbuf *m, struct buf_vector *buf_vec,
-- 
2.14.4

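[Editor's note: the availability test that fill_vec_buf_packed() relies on can be shown on its own. A sketch of desc_is_avail() over bare flags, assuming the bit positions from the virtio 1.1 packed-ring proposal; the function shape is illustrative, not the exact helper from virtio-packed.h.]

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define VRING_DESC_F_AVAIL (1 << 7)
#define VRING_DESC_F_USED  (1 << 15)

/*
 * A packed-ring descriptor is available to the device when its AVAIL
 * bit matches the current wrap counter and its USED bit does not.
 * Both bits flip meaning each time the ring index wraps, which is why
 * fill_vec_buf_packed() toggles its local wrap counter whenever
 * avail_idx runs past the end of the ring.
 */
static bool desc_is_avail(uint16_t flags, bool wrap_counter)
{
	bool avail = !!(flags & VRING_DESC_F_AVAIL);
	bool used = !!(flags & VRING_DESC_F_USED);

	return avail == wrap_counter && used != wrap_counter;
}
```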

* [PATCH v5 12/15] vhost: add Rx support for packed ring
  2018-06-22 13:43 [PATCH v5 00/15] Vhost: add support to packed ring layout Maxime Coquelin
                   ` (10 preceding siblings ...)
  2018-06-22 13:43 ` [PATCH v5 11/15] vhost: add vector filling support for packed ring Maxime Coquelin
@ 2018-06-22 13:43 ` Maxime Coquelin
  2018-06-22 13:43 ` [PATCH v5 13/15] vhost: add Tx " Maxime Coquelin
                   ` (2 subsequent siblings)
  14 siblings, 0 replies; 26+ messages in thread
From: Maxime Coquelin @ 2018-06-22 13:43 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/virtio_net.c | 117 +++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 116 insertions(+), 1 deletion(-)

diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index 28b0cb566..e094357be 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -557,6 +557,65 @@ fill_vec_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	return 0;
 }
 
+/*
+ * Returns -1 on fail, 0 on success
+ */
+static inline int
+reserve_avail_buf_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
+				uint32_t size, struct buf_vector *buf_vec,
+				uint16_t *nr_vec, uint16_t *num_buffers,
+				uint16_t *nr_descs)
+{
+	uint16_t avail_idx;
+	uint16_t vec_idx = 0;
+	uint16_t max_tries, tries = 0;
+
+	uint16_t buf_id = 0;
+	uint16_t len = 0;
+	uint16_t desc_count;
+
+	*num_buffers = 0;
+	avail_idx = vq->last_avail_idx;
+
+	if (rxvq_is_mergeable(dev))
+		max_tries = vq->size;
+	else
+		max_tries = 1;
+
+	while (size > 0) {
+		if (unlikely(fill_vec_buf_packed(dev, vq,
+						avail_idx, &desc_count,
+						buf_vec, &vec_idx,
+						&buf_id, &len,
+						VHOST_ACCESS_RO) < 0))
+			return -1;
+
+		len = RTE_MIN(len, size);
+		update_shadow_used_ring_packed(vq, buf_id, len, desc_count);
+		size -= len;
+
+		avail_idx += desc_count;
+		if (avail_idx >= vq->size)
+			avail_idx -= vq->size;
+
+		*nr_descs += desc_count;
+		tries++;
+		*num_buffers += 1;
+
+		/*
+		 * If we have tried all available ring items and still
+		 * can't get enough buffers, something abnormal
+		 * happened.
+		 */
+		if (unlikely(tries > max_tries))
+			return -1;
+	}
+
+	*nr_vec = vec_idx;
+
+	return 0;
+}
+
 static __rte_always_inline int
 copy_mbuf_to_desc(struct virtio_net *dev, struct vhost_virtqueue *vq,
 			    struct rte_mbuf *m, struct buf_vector *buf_vec,
@@ -769,6 +828,59 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	return pkt_idx;
 }
 
+static __rte_always_inline uint32_t
+virtio_dev_rx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
+	struct rte_mbuf **pkts, uint32_t count)
+{
+	uint32_t pkt_idx = 0;
+	uint16_t num_buffers;
+	struct buf_vector buf_vec[BUF_VECTOR_MAX];
+
+	for (pkt_idx = 0; pkt_idx < count; pkt_idx++) {
+		uint32_t pkt_len = pkts[pkt_idx]->pkt_len + dev->vhost_hlen;
+		uint16_t nr_vec = 0;
+		uint16_t nr_descs = 0;
+
+		if (unlikely(reserve_avail_buf_packed(dev, vq,
+						pkt_len, buf_vec, &nr_vec,
+						&num_buffers, &nr_descs) < 0)) {
+			VHOST_LOG_DEBUG(VHOST_DATA,
+				"(%d) failed to get enough desc from vring\n",
+				dev->vid);
+			vq->shadow_used_idx -= num_buffers;
+			break;
+		}
+
+		rte_prefetch0((void *)(uintptr_t)buf_vec[0].buf_addr);
+
+		VHOST_LOG_DEBUG(VHOST_DATA, "(%d) current index %d | end index %d\n",
+			dev->vid, vq->last_avail_idx,
+			vq->last_avail_idx + num_buffers);
+
+		if (copy_mbuf_to_desc(dev, vq, pkts[pkt_idx],
+						buf_vec, nr_vec,
+						num_buffers) < 0) {
+			vq->shadow_used_idx -= num_buffers;
+			break;
+		}
+
+		vq->last_avail_idx += nr_descs;
+		if (vq->last_avail_idx >= vq->size) {
+			vq->last_avail_idx -= vq->size;
+			vq->avail_wrap_counter ^= 1;
+		}
+	}
+
+	do_data_copy_enqueue(dev, vq);
+
+	if (likely(vq->shadow_used_idx)) {
+		flush_shadow_used_ring_packed(dev, vq);
+		vhost_vring_call(dev, vq);
+	}
+
+	return pkt_idx;
+}
+
 static __rte_always_inline uint32_t
 virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 	struct rte_mbuf **pkts, uint32_t count)
@@ -800,7 +912,10 @@ virtio_dev_rx(struct virtio_net *dev, uint16_t queue_id,
 	if (count == 0)
 		goto out;
 
-	count = virtio_dev_rx_split(dev, vq, pkts, count);
+	if (vq_is_packed(dev))
+		count = virtio_dev_rx_packed(dev, vq, pkts, count);
+	else
+		count = virtio_dev_rx_split(dev, vq, pkts, count);
 
 out:
 	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
-- 
2.14.4

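[Editor's note: the reservation loop in reserve_avail_buf_packed() above can be reduced to its core accounting. A sketch under simplifying assumptions — `buf_len[]` stands in for the lengths returned by fill_vec_buf_packed(), and `reserve_buffers` is an illustrative helper, not the patch's function.]

```c
#include <assert.h>
#include <stdint.h>

/*
 * Keep pulling guest buffers until the packet fits. A non-mergeable
 * queue may use only one buffer per packet (max_tries = 1); a
 * mergeable one may chain buffers up to the whole ring. Exceeding
 * max_tries means something abnormal happened.
 */
static int reserve_buffers(const uint16_t *buf_len, uint16_t nr_bufs,
			   uint32_t size, int mergeable,
			   uint16_t *num_buffers)
{
	uint16_t max_tries = mergeable ? nr_bufs : 1;
	uint16_t tries = 0, idx = 0;

	*num_buffers = 0;
	while (size > 0) {
		if (idx >= nr_bufs)
			return -1; /* ring exhausted */

		uint32_t len = buf_len[idx++];

		if (len > size)
			len = size;
		size -= len;
		(*num_buffers)++;

		if (++tries > max_tries)
			return -1; /* packet cannot fit */
	}
	return 0;
}
```

In the patch, each successful iteration also records the buffer in the shadow used ring, which is why a failed reservation rewinds shadow_used_idx by num_buffers in the caller.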

* [PATCH v5 13/15] vhost: add Tx support for packed ring
  2018-06-22 13:43 [PATCH v5 00/15] Vhost: add support to packed ring layout Maxime Coquelin
                   ` (11 preceding siblings ...)
  2018-06-22 13:43 ` [PATCH v5 12/15] vhost: add Rx " Maxime Coquelin
@ 2018-06-22 13:43 ` Maxime Coquelin
  2018-06-22 13:43 ` [PATCH v5 14/15] vhost: add notification " Maxime Coquelin
  2018-06-22 13:43 ` [PATCH v5 15/15] vhost: advertize packed ring layout support Maxime Coquelin
  14 siblings, 0 replies; 26+ messages in thread
From: Maxime Coquelin @ 2018-06-22 13:43 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/vhost.h      |   1 +
 lib/librte_vhost/virtio_net.c | 121 +++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 121 insertions(+), 1 deletion(-)

diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 62d49f238..5dcb61c71 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -56,6 +56,7 @@ struct buf_vector {
 struct zcopy_mbuf {
 	struct rte_mbuf *mbuf;
 	uint32_t desc_idx;
+	uint16_t desc_count;
 	uint16_t in_use;
 
 	TAILQ_ENTRY(zcopy_mbuf) next;
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index e094357be..b9f49175c 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -1443,6 +1443,122 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 	return i;
 }
 
+static __rte_always_inline uint16_t
+virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
+	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
+{
+	uint16_t i;
+
+	rte_prefetch0(&vq->desc_packed[vq->last_avail_idx]);
+
+	if (unlikely(dev->dequeue_zero_copy)) {
+		struct zcopy_mbuf *zmbuf, *next;
+		int nr_updated = 0;
+
+		for (zmbuf = TAILQ_FIRST(&vq->zmbuf_list);
+		     zmbuf != NULL; zmbuf = next) {
+			next = TAILQ_NEXT(zmbuf, next);
+
+			if (mbuf_is_consumed(zmbuf->mbuf)) {
+				update_shadow_used_ring_packed(vq,
+						zmbuf->desc_idx,
+						0,
+						zmbuf->desc_count);
+				nr_updated += 1;
+
+				TAILQ_REMOVE(&vq->zmbuf_list, zmbuf, next);
+				restore_mbuf(zmbuf->mbuf);
+				rte_pktmbuf_free(zmbuf->mbuf);
+				put_zmbuf(zmbuf);
+				vq->nr_zmbuf -= 1;
+			}
+		}
+
+		flush_shadow_used_ring_packed(dev, vq);
+		vhost_vring_call(dev, vq);
+	}
+
+	VHOST_LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__);
+
+	count = RTE_MIN(count, MAX_PKT_BURST);
+	VHOST_LOG_DEBUG(VHOST_DATA, "(%d) about to dequeue %u buffers\n",
+			dev->vid, count);
+
+	for (i = 0; i < count; i++) {
+		struct buf_vector buf_vec[BUF_VECTOR_MAX];
+		uint16_t buf_id, dummy_len;
+		uint16_t desc_count, nr_vec = 0;
+		int err;
+
+		if (unlikely(fill_vec_buf_packed(dev, vq,
+						vq->last_avail_idx, &desc_count,
+						buf_vec, &nr_vec,
+						&buf_id, &dummy_len,
+						VHOST_ACCESS_RW) < 0))
+			break;
+
+		if (likely(dev->dequeue_zero_copy == 0))
+			update_shadow_used_ring_packed(vq, buf_id, 0,
+					desc_count);
+
+		rte_prefetch0((void *)(uintptr_t)buf_vec[0].buf_addr);
+
+		pkts[i] = rte_pktmbuf_alloc(mbuf_pool);
+		if (unlikely(pkts[i] == NULL)) {
+			RTE_LOG(ERR, VHOST_DATA,
+				"Failed to allocate memory for mbuf.\n");
+			break;
+		}
+
+		err = copy_desc_to_mbuf(dev, vq, buf_vec, nr_vec, pkts[i],
+				mbuf_pool);
+		if (unlikely(err)) {
+			rte_pktmbuf_free(pkts[i]);
+			break;
+		}
+
+		if (unlikely(dev->dequeue_zero_copy)) {
+			struct zcopy_mbuf *zmbuf;
+
+			zmbuf = get_zmbuf(vq);
+			if (!zmbuf) {
+				rte_pktmbuf_free(pkts[i]);
+				break;
+			}
+			zmbuf->mbuf = pkts[i];
+			zmbuf->desc_idx = buf_id;
+			zmbuf->desc_count = desc_count;
+
+			/*
+			 * Pin the mbuf by bumping its refcount; later we
+			 * check whether it has been freed, i.e. whether we
+			 * are the last user. Only then can the used ring
+			 * be updated safely.
+			 */
+			rte_mbuf_refcnt_update(pkts[i], 1);
+
+			vq->nr_zmbuf += 1;
+			TAILQ_INSERT_TAIL(&vq->zmbuf_list, zmbuf, next);
+		}
+
+		vq->last_avail_idx += desc_count;
+		if (vq->last_avail_idx >= vq->size) {
+			vq->last_avail_idx -= vq->size;
+			vq->avail_wrap_counter ^= 1;
+		}
+	}
+
+	if (likely(dev->dequeue_zero_copy == 0)) {
+		do_data_copy_dequeue(vq);
+		if (unlikely(i < count))
+			vq->shadow_used_idx = i;
+		flush_shadow_used_ring_packed(dev, vq);
+		vhost_vring_call(dev, vq);
+	}
+
+	return i;
+}
+
 uint16_t
 rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 	struct rte_mempool *mbuf_pool, struct rte_mbuf **pkts, uint16_t count)
@@ -1513,7 +1629,10 @@ rte_vhost_dequeue_burst(int vid, uint16_t queue_id,
 		count -= 1;
 	}
 
-	count = virtio_dev_tx_split(dev, vq, mbuf_pool, pkts, count);
+	if (vq_is_packed(dev))
+		count = virtio_dev_tx_packed(dev, vq, mbuf_pool, pkts, count);
+	else
+		count = virtio_dev_tx_split(dev, vq, mbuf_pool, pkts, count);
 
 out:
 	if (dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM))
-- 
2.14.4

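[Editor's note: the zero-copy "pin" trick in virtio_dev_tx_packed() above can be illustrated standalone. `toy_mbuf` and the helpers below are hypothetical stand-ins for rte_mbuf, rte_mbuf_refcnt_update() and the patch's mbuf_is_consumed() check, kept deliberately minimal.]

```c
#include <assert.h>

struct toy_mbuf { int refcnt; };

/* Stand-in for rte_mbuf_refcnt_update(). */
static void mbuf_refcnt_update(struct toy_mbuf *m, int v)
{
	m->refcnt += v;
}

/*
 * The dequeue path holds one extra reference, so the descriptor may be
 * marked used only once the application's free drops the count back to
 * our single reference, i.e. we are the last user of the buffer.
 */
static int mbuf_is_consumed(const struct toy_mbuf *m)
{
	return m->refcnt == 1;
}
```

This is why the zero-copy branch defers update_shadow_used_ring_packed() until the zmbuf-list scan at the start of the next dequeue call, rather than updating the used ring immediately.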

* [PATCH v5 14/15] vhost: add notification for packed ring
  2018-06-22 13:43 [PATCH v5 00/15] Vhost: add support to packed ring layout Maxime Coquelin
                   ` (12 preceding siblings ...)
  2018-06-22 13:43 ` [PATCH v5 13/15] vhost: add Tx " Maxime Coquelin
@ 2018-06-22 13:43 ` Maxime Coquelin
  2018-06-29 16:22   ` Tiwei Bie
  2018-06-22 13:43 ` [PATCH v5 15/15] vhost: advertize packed ring layout support Maxime Coquelin
  14 siblings, 1 reply; 26+ messages in thread
From: Maxime Coquelin @ 2018-06-22 13:43 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/vhost.c         | 55 +++++++++++++++++++++++++++----
 lib/librte_vhost/vhost.h         | 71 ++++++++++++++++++++++++++++++++++++----
 lib/librte_vhost/vhost_user.c    | 29 +++++++++++++---
 lib/librte_vhost/virtio-packed.h | 11 +++++++
 lib/librte_vhost/virtio_net.c    | 12 +++----
 5 files changed, 154 insertions(+), 24 deletions(-)

diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
index 7cbf1eded..c26162542 100644
--- a/lib/librte_vhost/vhost.c
+++ b/lib/librte_vhost/vhost.c
@@ -595,7 +595,11 @@ rte_vhost_vring_call(int vid, uint16_t vring_idx)
 	if (!vq)
 		return -1;
 
-	vhost_vring_call(dev, vq);
+	if (vq_is_packed(dev))
+		vhost_vring_call_packed(dev, vq);
+	else
+		vhost_vring_call_split(dev, vq);
+
 	return 0;
 }
 
@@ -616,20 +620,59 @@ rte_vhost_avail_entries(int vid, uint16_t queue_id)
 	return *(volatile uint16_t *)&vq->avail->idx - vq->last_used_idx;
 }
 
+static inline int
+vhost_enable_notify_split(struct vhost_virtqueue *vq, int enable)
+{
+	if (enable)
+		vq->used->flags &= ~VRING_USED_F_NO_NOTIFY;
+	else
+		vq->used->flags |= VRING_USED_F_NO_NOTIFY;
+
+	return 0;
+}
+
+static inline int
+vhost_enable_notify_packed(struct virtio_net *dev,
+		struct vhost_virtqueue *vq, int enable)
+{
+	uint16_t flags;
+
+	if (!enable) {
+		vq->device_event->desc_event_flags = RING_EVENT_FLAGS_DISABLE;
+		return 0;
+	}
+
+	flags = RING_EVENT_FLAGS_DISABLE;
+	if (dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX)) {
+		flags = RING_EVENT_FLAGS_DESC;
+		vq->device_event->desc_event_off_wrap = vq->last_avail_idx |
+			vq->avail_wrap_counter << 15;
+	}
+
+	rte_smp_wmb();
+
+	vq->device_event->desc_event_flags = flags;
+
+	rte_smp_wmb();
+
+	return 0;
+}
+
 int
 rte_vhost_enable_guest_notification(int vid, uint16_t queue_id, int enable)
 {
 	struct virtio_net *dev = get_device(vid);
+	struct vhost_virtqueue *vq;
 
 	if (!dev)
 		return -1;
 
-	if (enable)
-		dev->virtqueue[queue_id]->used->flags &=
-			~VRING_USED_F_NO_NOTIFY;
+	vq = dev->virtqueue[queue_id];
+
+	if (vq_is_packed(dev))
+		return vhost_enable_notify_packed(dev, vq, enable);
 	else
-		dev->virtqueue[queue_id]->used->flags |= VRING_USED_F_NO_NOTIFY;
-	return 0;
+		return vhost_enable_notify_split(vq, enable);
 }
 
 void
diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 5dcb61c71..99db688a8 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -21,6 +21,7 @@
 
 #include "rte_vhost.h"
 #include "rte_vdpa.h"
+#include "virtio-packed.h"
 
 /* Used to indicate that the device is running on a data core */
 #define VIRTIO_DEV_RUNNING 1
@@ -95,8 +96,14 @@ struct vhost_virtqueue {
 		struct vring_desc	*desc;
 		struct vring_desc_packed   *desc_packed;
 	};
-	struct vring_avail	*avail;
-	struct vring_used	*used;
+	union {
+		struct vring_avail	*avail;
+		struct vring_packed_desc_event *driver_event;
+	};
+	union {
+		struct vring_used	*used;
+		struct vring_packed_desc_event *device_event;
+	};
 	uint32_t		size;
 
 	uint16_t		last_avail_idx;
@@ -614,7 +621,7 @@ vhost_need_event(uint16_t event_idx, uint16_t new_idx, uint16_t old)
 }
 
 static __rte_always_inline void
-vhost_vring_call(struct virtio_net *dev, struct vhost_virtqueue *vq)
+vhost_vring_call_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
 {
 	/* Flush used->idx update before we read avail->flags. */
 	rte_mb();
@@ -625,11 +632,11 @@ vhost_vring_call(struct virtio_net *dev, struct vhost_virtqueue *vq)
 		uint16_t new = vq->last_used_idx;
 
 		VHOST_LOG_DEBUG(VHOST_DATA, "%s: used_event_idx=%d, old=%d, new=%d\n",
-			__func__,
-			vhost_used_event(vq),
-			old, new);
+				__func__,
+				vhost_used_event(vq),
+				old, new);
 		if (vhost_need_event(vhost_used_event(vq), new, old)
-			&& (vq->callfd >= 0)) {
+				&& (vq->callfd >= 0)) {
 			vq->signalled_used = vq->last_used_idx;
 			eventfd_write(vq->callfd, (eventfd_t) 1);
 		}
@@ -641,4 +648,54 @@ vhost_vring_call(struct virtio_net *dev, struct vhost_virtqueue *vq)
 	}
 }
 
+static __rte_always_inline void
+vhost_vring_call_packed(struct virtio_net *dev, struct vhost_virtqueue *vq)
+{
+	uint16_t old, new, off, off_wrap, wrap;
+	bool kick = false;
+
+
+	/*  Flush used desc update. */
+	rte_smp_mb();
+
+	if (!(dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX))) {
+		if (vq->driver_event->desc_event_flags !=
+				RING_EVENT_FLAGS_DISABLE)
+			kick = true;
+		goto kick;
+	}
+
+	old = vq->signalled_used;
+	new = vq->last_used_idx;
+	vq->signalled_used = new;
+
+	if (vq->driver_event->desc_event_flags != RING_EVENT_FLAGS_DESC) {
+		if (vq->driver_event->desc_event_flags !=
+				RING_EVENT_FLAGS_DISABLE)
+			kick = true;
+		goto kick;
+	}
+
+	rte_smp_rmb();
+
+	off_wrap = vq->driver_event->desc_event_off_wrap;
+	off = off_wrap & ~(1 << 15);
+	wrap = vq->used_wrap_counter;
+
+	if (new < old) {
+		new += vq->size;
+		wrap ^= 1;
+	}
+
+	if (wrap != off_wrap >> 15)
+		off += vq->size;
+
+	if (vhost_need_event(off, new, old))
+		kick = true;
+
+kick:
+	if (kick)
+		eventfd_write(vq->callfd, (eventfd_t)1);
+}
+
 #endif /* _VHOST_NET_CDEV_H_ */
diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
index a08d99314..c74924564 100644
--- a/lib/librte_vhost/vhost_user.c
+++ b/lib/librte_vhost/vhost_user.c
@@ -509,9 +509,6 @@ translate_ring_addresses(struct virtio_net *dev, int vq_index)
 		len = sizeof(struct vring_desc_packed) * vq->size;
 		vq->desc_packed = (struct vring_desc_packed *) ring_addr_to_vva
 			(dev, vq, addr->desc_user_addr, &len);
-		vq->desc = NULL;
-		vq->avail = NULL;
-		vq->used = NULL;
 		vq->log_guest_addr = 0;
 		if (vq->desc_packed == 0 ||
 				len != sizeof(struct vring_desc_packed) *
@@ -526,6 +523,30 @@ translate_ring_addresses(struct virtio_net *dev, int vq_index)
 		vq = dev->virtqueue[vq_index];
 		addr = &vq->ring_addrs;
 
+		len = sizeof(struct vring_packed_desc_event);
+		vq->driver_event = (struct vring_packed_desc_event *)
+					(uintptr_t)ring_addr_to_vva(dev,
+					vq, addr->avail_user_addr, &len);
+		if (vq->driver_event == 0 ||
+				len != sizeof(struct vring_packed_desc_event)) {
+			RTE_LOG(DEBUG, VHOST_CONFIG,
+				"(%d) failed to find driver area address.\n",
+				dev->vid);
+			return dev;
+		}
+
+		len = sizeof(struct vring_packed_desc_event);
+		vq->device_event = (struct vring_packed_desc_event *)
+					(uintptr_t)ring_addr_to_vva(dev,
+					vq, addr->used_user_addr, &len);
+		if (vq->device_event == 0 ||
+				len != sizeof(struct vring_packed_desc_event)) {
+			RTE_LOG(DEBUG, VHOST_CONFIG,
+				"(%d) failed to find device area address.\n",
+				dev->vid);
+			return dev;
+		}
+
 		return dev;
 	}
 
@@ -542,8 +563,6 @@ translate_ring_addresses(struct virtio_net *dev, int vq_index)
 			dev->vid);
 		return dev;
 	}
-	vq->desc_packed = NULL;
-
 	dev = numa_realloc(dev, vq_index);
 	vq = dev->virtqueue[vq_index];
 	addr = &vq->ring_addrs;
diff --git a/lib/librte_vhost/virtio-packed.h b/lib/librte_vhost/virtio-packed.h
index 21fe5045b..f271249d8 100644
--- a/lib/librte_vhost/virtio-packed.h
+++ b/lib/librte_vhost/virtio-packed.h
@@ -19,6 +19,17 @@ struct vring_desc_packed {
 	uint16_t flags;
 };
 
+#define RING_EVENT_FLAGS_ENABLE 0x0
+#define RING_EVENT_FLAGS_DISABLE 0x1
+#define RING_EVENT_FLAGS_DESC 0x2
+#define RING_EVENT_FLAGS_MASK 0xFFFC
+#define RING_EVENT_WRAP_MASK 0x8000
+#define RING_EVENT_OFF_MASK 0x7FFF
+
+struct vring_packed_desc_event {
+	uint16_t desc_event_off_wrap;
+	uint16_t desc_event_flags;
+};
 
 static inline int
 desc_is_avail(struct vring_desc_packed *desc, uint16_t wrap_counter)
diff --git a/lib/librte_vhost/virtio_net.c b/lib/librte_vhost/virtio_net.c
index b9f49175c..9e8c9969b 100644
--- a/lib/librte_vhost/virtio_net.c
+++ b/lib/librte_vhost/virtio_net.c
@@ -822,7 +822,7 @@ virtio_dev_rx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 
 	if (likely(vq->shadow_used_idx)) {
 		flush_shadow_used_ring_split(dev, vq);
-		vhost_vring_call(dev, vq);
+		vhost_vring_call_split(dev, vq);
 	}
 
 	return pkt_idx;
@@ -875,7 +875,7 @@ virtio_dev_rx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 
 	if (likely(vq->shadow_used_idx)) {
 		flush_shadow_used_ring_packed(dev, vq);
-		vhost_vring_call(dev, vq);
+		vhost_vring_call_packed(dev, vq);
 	}
 
 	return pkt_idx;
@@ -1358,7 +1358,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		}
 
 		flush_shadow_used_ring_split(dev, vq);
-		vhost_vring_call(dev, vq);
+		vhost_vring_call_split(dev, vq);
 	}
 
 	rte_prefetch0(&vq->avail->ring[vq->last_avail_idx & (vq->size - 1)]);
@@ -1437,7 +1437,7 @@ virtio_dev_tx_split(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		if (unlikely(i < count))
 			vq->shadow_used_idx = i;
 		flush_shadow_used_ring_split(dev, vq);
-		vhost_vring_call(dev, vq);
+		vhost_vring_call_split(dev, vq);
 	}
 
 	return i;
@@ -1475,7 +1475,7 @@ virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		}
 
 		flush_shadow_used_ring_packed(dev, vq);
-		vhost_vring_call(dev, vq);
+		vhost_vring_call_packed(dev, vq);
 	}
 
 	VHOST_LOG_DEBUG(VHOST_DATA, "(%d) %s\n", dev->vid, __func__);
@@ -1553,7 +1553,7 @@ virtio_dev_tx_packed(struct virtio_net *dev, struct vhost_virtqueue *vq,
 		if (unlikely(i < count))
 			vq->shadow_used_idx = i;
 		flush_shadow_used_ring_packed(dev, vq);
-		vhost_vring_call(dev, vq);
+		vhost_vring_call_packed(dev, vq);
 	}
 
 	return i;
-- 
2.14.4

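The desc_event_off_wrap field written in vhost_enable_notify_packed() and parsed in vhost_vring_call_packed() above packs a 15-bit descriptor offset together with the wrap counter in bit 15. A standalone sketch of that encode/decode pair (helper names invented here for illustration, not part of the patch):

```c
#include <stdint.h>

/* Pack a 15-bit ring offset and a 1-bit wrap counter into off_wrap,
 * mirroring `vq->last_avail_idx | vq->avail_wrap_counter << 15`. */
static uint16_t event_off_wrap_pack(uint16_t off, uint16_t wrap)
{
	return (uint16_t)((off & 0x7FFF) | (wrap << 15));
}

/* Extract the offset, using the same mask as the patch: ~(1 << 15). */
static uint16_t event_off_wrap_off(uint16_t off_wrap)
{
	return off_wrap & ~(1 << 15);
}

/* Extract the wrap counter from bit 15. */
static uint16_t event_off_wrap_wrap(uint16_t off_wrap)
{
	return off_wrap >> 15;
}
```

Decoding must happen before comparing against the local used_wrap_counter, which is why vhost_vring_call_packed() splits off and off_wrap apart before calling vhost_need_event().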

* [PATCH v5 15/15] vhost: advertize packed ring layout support
  2018-06-22 13:43 [PATCH v5 00/15] Vhost: add support to packed ring layout Maxime Coquelin
                   ` (13 preceding siblings ...)
  2018-06-22 13:43 ` [PATCH v5 14/15] vhost: add notification " Maxime Coquelin
@ 2018-06-22 13:43 ` Maxime Coquelin
  14 siblings, 0 replies; 26+ messages in thread
From: Maxime Coquelin @ 2018-06-22 13:43 UTC (permalink / raw)
  To: tiwei.bie, zhihong.wang, jfreimann, dev
  Cc: mst, jasowang, wexu, Maxime Coquelin

Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
---
 lib/librte_vhost/vhost.h | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
index 99db688a8..579233545 100644
--- a/lib/librte_vhost/vhost.h
+++ b/lib/librte_vhost/vhost.h
@@ -242,7 +242,8 @@ struct vhost_msg {
 				(1ULL << VIRTIO_RING_F_INDIRECT_DESC) | \
 				(1ULL << VIRTIO_RING_F_EVENT_IDX) | \
 				(1ULL << VIRTIO_NET_F_MTU) | \
-				(1ULL << VIRTIO_F_IOMMU_PLATFORM))
+				(1ULL << VIRTIO_F_IOMMU_PLATFORM) | \
+				(1ULL << VIRTIO_F_RING_PACKED))
 
 
 struct guest_page {
-- 
2.14.4


* Re: [PATCH v5 01/15] vhost: add virtio packed virtqueue defines
  2018-06-22 13:43 ` [PATCH v5 01/15] vhost: add virtio packed virtqueue defines Maxime Coquelin
@ 2018-06-29 15:47   ` Tiwei Bie
  2018-06-29 16:20     ` Maxime Coquelin
  0 siblings, 1 reply; 26+ messages in thread
From: Tiwei Bie @ 2018-06-29 15:47 UTC (permalink / raw)
  To: Maxime Coquelin; +Cc: zhihong.wang, jfreimann, dev, mst, jasowang, wexu

On Fri, Jun 22, 2018 at 03:43:13PM +0200, Maxime Coquelin wrote:
> From: Jens Freimann <jfreimann@redhat.com>
> 
> Signed-off-by: Jens Freimann <jfreimann@redhat.com>
> ---
>  lib/librte_vhost/vhost.h         |  4 ++++
>  lib/librte_vhost/virtio-packed.h | 22 ++++++++++++++++++++++
>  2 files changed, 26 insertions(+)
>  create mode 100644 lib/librte_vhost/virtio-packed.h
> 
> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> index 3f8fa3a78..bf2059198 100644
> --- a/lib/librte_vhost/vhost.h
> +++ b/lib/librte_vhost/vhost.h
> @@ -191,6 +191,10 @@ struct vhost_msg {
>  #ifndef VIRTIO_F_VERSION_1
>   #define VIRTIO_F_VERSION_1 32
>  #endif
> +#ifndef VIRTIO_F_RING_PACKED
> + #define VIRTIO_F_RING_PACKED 34
> +#endif
> +#define VHOST_USER_F_PROTOCOL_FEATURES	30

VHOST_USER_F_PROTOCOL_FEATURES is already defined
in rte_vhost.h, which is included by this file.

>  
>  /* Features supported by this builtin vhost-user net driver. */
>  #define VIRTIO_NET_SUPPORTED_FEATURES ((1ULL << VIRTIO_NET_F_MRG_RXBUF) | \
> diff --git a/lib/librte_vhost/virtio-packed.h b/lib/librte_vhost/virtio-packed.h
> new file mode 100644
> index 000000000..744b3991b
> --- /dev/null
> +++ b/lib/librte_vhost/virtio-packed.h
> @@ -0,0 +1,22 @@
> +/* SPDX-License-Identifier: BSD-3-Clause
> + * Copyright(c) Red Hat Inc.
> + */
> +
> +#ifndef __VIRTIO_PACKED_H
> +#define __VIRTIO_PACKED_H
> +
> +#define VRING_DESC_F_NEXT       1
> +#define VRING_DESC_F_WRITE      2
> +#define VRING_DESC_F_INDIRECT   4
> +
> +#define VRING_DESC_F_AVAIL      (1ULL << 7)
> +#define VRING_DESC_F_USED	(1ULL << 15)
> +
> +struct vring_desc_packed {
> +	uint64_t addr;
> +	uint32_t len;
> +	uint16_t index;
> +	uint16_t flags;
> +};
> +
> +#endif /* __VIRTIO_PACKED_H */
> -- 
> 2.14.4
> 


* Re: [PATCH v5 02/15] vhost: add helpers for packed virtqueues
  2018-06-22 13:43 ` [PATCH v5 02/15] vhost: add helpers for packed virtqueues Maxime Coquelin
@ 2018-06-29 15:51   ` Tiwei Bie
  2018-06-29 16:21     ` Maxime Coquelin
  0 siblings, 1 reply; 26+ messages in thread
From: Tiwei Bie @ 2018-06-29 15:51 UTC (permalink / raw)
  To: Maxime Coquelin; +Cc: zhihong.wang, jfreimann, dev, mst, jasowang, wexu

On Fri, Jun 22, 2018 at 03:43:14PM +0200, Maxime Coquelin wrote:
> From: Jens Freimann <jfreimann@redhat.com>
> 
> Add some helper functions to check descriptor flags
> and check if a vring is of type packed.
> 
> Signed-off-by: Jens Freimann <jfreimann@redhat.com>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
>  lib/librte_vhost/vhost.h         |  6 ++++++
>  lib/librte_vhost/virtio-packed.h | 16 ++++++++++++++++
>  2 files changed, 22 insertions(+)
> 
> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> index bf2059198..34a614c97 100644
> --- a/lib/librte_vhost/vhost.h
> +++ b/lib/librte_vhost/vhost.h
> @@ -319,6 +319,12 @@ struct virtio_net {
>  	struct vhost_user_extern_ops extern_ops;
>  } __rte_cache_aligned;
>  
> +static __rte_always_inline bool
> +vq_is_packed(struct virtio_net *dev)
> +{
> +	return dev->features & (1ull << VIRTIO_F_RING_PACKED);
> +}
> +
>  #define VHOST_LOG_PAGE	4096
>  
>  /*
> diff --git a/lib/librte_vhost/virtio-packed.h b/lib/librte_vhost/virtio-packed.h
> index 744b3991b..21fe5045b 100644
> --- a/lib/librte_vhost/virtio-packed.h
> +++ b/lib/librte_vhost/virtio-packed.h
> @@ -19,4 +19,20 @@ struct vring_desc_packed {
>  	uint16_t flags;
>  };
>  
> +
> +static inline int

Better to return bool.

> +desc_is_avail(struct vring_desc_packed *desc, uint16_t wrap_counter)
> +{
> +	if (wrap_counter == 1) {
> +		if ((desc->flags & VRING_DESC_F_AVAIL) &&
> +				!(desc->flags & VRING_DESC_F_USED))
> +			return 1;
> +	} else {
> +		if (!(desc->flags & VRING_DESC_F_AVAIL) &&
> +				(desc->flags & VRING_DESC_F_USED))
> +			return 1;
> +	}
> +	return 0;

Maybe something like:

return wrap_counter == !!(desc->flags & VRING_DESC_F_AVAIL) &&
	wrap_counter != !!(desc->flags & VRING_DESC_F_USED);

is better?

Best regards,
Tiwei Bie

> +}
> +
>  #endif /* __VIRTIO_PACKED_H */
> -- 
> 2.14.4
> 

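The compact expression suggested in the review above is equivalent to the patch's original if/else. A quick standalone check (function names invented for this comparison only) puts the two formulations side by side:

```c
#include <stdint.h>

#define VRING_DESC_F_AVAIL (1 << 7)
#define VRING_DESC_F_USED  (1 << 15)

/* Original if/else formulation from the patch. */
static int desc_is_avail_orig(uint16_t flags, uint16_t wrap_counter)
{
	if (wrap_counter == 1) {
		if ((flags & VRING_DESC_F_AVAIL) &&
				!(flags & VRING_DESC_F_USED))
			return 1;
	} else {
		if (!(flags & VRING_DESC_F_AVAIL) &&
				(flags & VRING_DESC_F_USED))
			return 1;
	}
	return 0;
}

/* The reviewer's suggested one-liner: the descriptor is available
 * when AVAIL matches the wrap counter and USED does not. */
static int desc_is_avail_new(uint16_t flags, uint16_t wrap_counter)
{
	return wrap_counter == !!(flags & VRING_DESC_F_AVAIL) &&
		wrap_counter != !!(flags & VRING_DESC_F_USED);
}
```

Iterating both over all four flag combinations and both wrap values confirms they always agree.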

* Re: [PATCH v5 03/15] vhost: vring address setup for packed queues
  2018-06-22 13:43 ` [PATCH v5 03/15] vhost: vring address setup for packed queues Maxime Coquelin
@ 2018-06-29 15:59   ` Tiwei Bie
  2018-06-29 16:34     ` Maxime Coquelin
  2018-07-01  9:58     ` Maxime Coquelin
  0 siblings, 2 replies; 26+ messages in thread
From: Tiwei Bie @ 2018-06-29 15:59 UTC (permalink / raw)
  To: Maxime Coquelin
  Cc: zhihong.wang, jfreimann, dev, mst, jasowang, wexu, Yuanhan Liu,
	Jens Freimann

On Fri, Jun 22, 2018 at 03:43:15PM +0200, Maxime Coquelin wrote:
> From: Yuanhan Liu <yuanhan.liu@linux.intel.com>
> 
> Add code to set up packed queues when enabled.
> 
> Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
> Signed-off-by: Jens Freimann <jfreiman@redhat.com>
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
>  lib/librte_vhost/vhost.c      | 44 ++++++++++++++++++++++++++++++++++++++-----
>  lib/librte_vhost/vhost.h      |  7 ++++++-
>  lib/librte_vhost/vhost_user.c | 29 +++++++++++++++++++++++++++-
>  3 files changed, 73 insertions(+), 7 deletions(-)
> 
> diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
> index afded4952..a85c6646f 100644
> --- a/lib/librte_vhost/vhost.c
> +++ b/lib/librte_vhost/vhost.c
> @@ -23,6 +23,7 @@
>  #include "iotlb.h"
>  #include "vhost.h"
>  #include "vhost_user.h"
> +#include "virtio-packed.h"
>  
>  struct virtio_net *vhost_devices[MAX_VHOST_DEVICE];
>  
> @@ -115,14 +116,11 @@ free_device(struct virtio_net *dev)
>  	rte_free(dev);
>  }
>  
> -int
> -vring_translate(struct virtio_net *dev, struct vhost_virtqueue *vq)
> +static int
> +vring_translate_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
>  {
>  	uint64_t req_size, size;
>  
> -	if (!(dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)))
> -		goto out;
> -
>  	req_size = sizeof(struct vring_desc) * vq->size;
>  	size = req_size;
>  	vq->desc = (struct vring_desc *)(uintptr_t)vhost_iova_to_vva(dev, vq,
> @@ -153,6 +151,40 @@ vring_translate(struct virtio_net *dev, struct vhost_virtqueue *vq)
>  	if (!vq->used || size != req_size)
>  		return -1;
>  
> +	return 0;
> +}
> +
> +static int
> +vring_translate_packed(struct virtio_net *dev, struct vhost_virtqueue *vq)
> +{
> +	uint64_t req_size, size;
> +
> +	req_size = sizeof(struct vring_desc_packed) * vq->size;
> +	size = req_size;
> +	vq->desc_packed =
> +		(struct vring_desc_packed *)(uintptr_t)vhost_iova_to_vva(dev,
> +					vq, vq->ring_addrs.desc_user_addr,
> +					&size, VHOST_ACCESS_RW);
> +	if (!vq->desc || size != req_size)

Should check vq->desc_packed instead of vq->desc.

> +		return -1;

We also need to translate vq->driver_event and
vq->device_event. Maybe that shouldn't be done in this
patch, but it is missing from the whole patch set.

> +
> +	return 0;
> +}
> +
> +int
> +vring_translate(struct virtio_net *dev, struct vhost_virtqueue *vq)
> +{
> +
> +	if (!(dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)))
> +		goto out;
> +
> +	if (vq_is_packed(dev)) {
> +		if (vring_translate_packed(dev, vq) < 0)
> +			return -1;
> +	} else {
> +		if (vring_translate_split(dev, vq) < 0)
> +			return -1;
> +	}
>  out:
>  	vq->access_ok = 1;
>  
> @@ -234,6 +266,8 @@ alloc_vring_queue(struct virtio_net *dev, uint32_t vring_idx)
>  	dev->virtqueue[vring_idx] = vq;
>  	init_vring_queue(dev, vring_idx);
>  	rte_spinlock_init(&vq->access_lock);
> +	vq->avail_wrap_counter = 1;
> +	vq->used_wrap_counter = 1;
>  
>  	dev->nr_vring += 1;
>  
> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> index 34a614c97..671b4b3bf 100644
> --- a/lib/librte_vhost/vhost.h
> +++ b/lib/librte_vhost/vhost.h
> @@ -84,7 +84,10 @@ struct log_cache_entry {
>   * Structure contains variables relevant to RX/TX virtqueues.
>   */
>  struct vhost_virtqueue {
> -	struct vring_desc	*desc;
> +	union {
> +		struct vring_desc	*desc;
> +		struct vring_desc_packed   *desc_packed;
> +	};
>  	struct vring_avail	*avail;
>  	struct vring_used	*used;
>  	uint32_t		size;
> @@ -122,6 +125,8 @@ struct vhost_virtqueue {
>  
>  	struct batch_copy_elem	*batch_copy_elems;
>  	uint16_t		batch_copy_nb_elems;
> +	uint16_t		used_wrap_counter;
> +	uint16_t		avail_wrap_counter;

Not quite sure about this. Do you think it would be
better to define the wrap counters as bool, since only
0 and 1 are possible values?

>  
>  	struct log_cache_entry log_cache[VHOST_LOG_CACHE_NR];
>  	uint16_t log_cache_nb_elem;
> diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
> index 947290fc3..b6097c085 100644
> --- a/lib/librte_vhost/vhost_user.c
> +++ b/lib/librte_vhost/vhost_user.c
> @@ -39,6 +39,7 @@
>  #include "iotlb.h"
>  #include "vhost.h"
>  #include "vhost_user.h"
> +#include "virtio-packed.h"
>  
>  #define VIRTIO_MIN_MTU 68
>  #define VIRTIO_MAX_MTU 65535
> @@ -477,6 +478,30 @@ translate_ring_addresses(struct virtio_net *dev, int vq_index)
>  	struct vhost_vring_addr *addr = &vq->ring_addrs;
>  	uint64_t len;
>  
> +	if (vq_is_packed(dev)) {
> +		len = sizeof(struct vring_desc_packed) * vq->size;
> +		vq->desc_packed = (struct vring_desc_packed *) ring_addr_to_vva
> +			(dev, vq, addr->desc_user_addr, &len);
> +		vq->desc = NULL;

`vq->desc = NULL` will set vq->desc_packed to NULL.

> +		vq->avail = NULL;
> +		vq->used = NULL;
> +		vq->log_guest_addr = 0;
> +		if (vq->desc_packed == 0 ||

It's better to compare with NULL.

> +				len != sizeof(struct vring_desc_packed) *
> +				vq->size) {
> +			RTE_LOG(DEBUG, VHOST_CONFIG,
> +				"(%d) failed to map desc_packed ring.\n",
> +				dev->vid);
> +			return dev;
> +		}
> +
> +		dev = numa_realloc(dev, vq_index);
> +		vq = dev->virtqueue[vq_index];
> +		addr = &vq->ring_addrs;
> +
> +		return dev;
> +	}
> +
>  	/* The addresses are converted from QEMU virtual to Vhost virtual. */
>  	if (vq->desc && vq->avail && vq->used)
>  		return dev;
> @@ -490,6 +515,7 @@ translate_ring_addresses(struct virtio_net *dev, int vq_index)
>  			dev->vid);
>  		return dev;
>  	}
> +	vq->desc_packed = NULL;

It seems that above assignment isn't needed.

>  
>  	dev = numa_realloc(dev, vq_index);
>  	vq = dev->virtqueue[vq_index];
> @@ -888,7 +914,8 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *pmsg)
>  static int
>  vq_is_ready(struct vhost_virtqueue *vq)
>  {
> -	return vq && vq->desc && vq->avail && vq->used &&
> +	return vq &&
> +	       (vq->desc_packed || (vq->desc && vq->avail && vq->used)) &&
>  	       vq->kickfd != VIRTIO_UNINITIALIZED_EVENTFD &&
>  	       vq->callfd != VIRTIO_UNINITIALIZED_EVENTFD;
>  }
> -- 
> 2.14.4
> 

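The "`vq->desc = NULL` will set vq->desc_packed to NULL" remark above follows from desc and desc_packed sharing an anonymous union. A minimal sketch (simplified stand-in structs, invented for illustration) shows why writing one member clobbers the other:

```c
#include <stddef.h>
#include <stdint.h>

struct vring_desc { uint64_t addr; };          /* simplified stand-ins */
struct vring_desc_packed { uint64_t addr; };

/* Mirrors the anonymous union introduced in struct vhost_virtqueue. */
struct vq_stub {
	union {
		struct vring_desc		*desc;
		struct vring_desc_packed	*desc_packed;
	};
};

/* Returns 1 when clearing desc also cleared desc_packed:
 * both members occupy the same storage. */
static int union_alias_demo(void)
{
	struct vring_desc_packed ring;
	struct vq_stub vq;

	vq.desc_packed = &ring;   /* set via the packed member */
	vq.desc = NULL;           /* clear via the split member */
	return vq.desc_packed == NULL;
}
```

This is exactly why the later patch in the series drops the explicit NULL assignments in translate_ring_addresses(): they were overwriting the pointer that had just been set.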

* Re: [PATCH v5 09/15] vhost: add shadow used ring support for packed rings
  2018-06-22 13:43 ` [PATCH v5 09/15] vhost: add shadow used ring support for packed rings Maxime Coquelin
@ 2018-06-29 16:08   ` Tiwei Bie
  0 siblings, 0 replies; 26+ messages in thread
From: Tiwei Bie @ 2018-06-29 16:08 UTC (permalink / raw)
  To: Maxime Coquelin; +Cc: zhihong.wang, jfreimann, dev, mst, jasowang, wexu

On Fri, Jun 22, 2018 at 03:43:21PM +0200, Maxime Coquelin wrote:
[...]
> index 671b4b3bf..62d49f238 100644
> --- a/lib/librte_vhost/vhost.h
> +++ b/lib/librte_vhost/vhost.h
> @@ -80,6 +80,12 @@ struct log_cache_entry {
>  	unsigned long val;
>  };
>  
> +struct vring_used_elem_packed {
> +	uint32_t id;

The id field in a packed ring descriptor is 16 bits
wide, so it would be better to define it as uint16_t here.

> +	uint32_t len;
> +	uint32_t count;
> +};
> +
[...]
> +
> +static __rte_always_inline void
> +flush_shadow_used_ring_packed(struct virtio_net *dev,
> +			struct vhost_virtqueue *vq)
> +{
> +	int i;
> +	uint16_t used_idx = vq->last_used_idx;
> +
> +	/* Split loop in two to save memory barriers */
> +	for (i = 0; i < vq->shadow_used_idx; i++) {
> +		vq->desc_packed[used_idx].index = vq->shadow_used_packed[i].id;
> +		vq->desc_packed[used_idx].len = vq->shadow_used_packed[i].len;
> +
> +		used_idx += vq->shadow_used_packed[i].count;

used_idx may wrap.

> +	}
> +
> +	rte_smp_wmb();
> +
> +	for (i = 0; i < vq->shadow_used_idx; i++) {
> +		uint16_t flags;
> +
> +		if (vq->shadow_used_packed[i].len)
> +			flags = VRING_DESC_F_WRITE;
> +		else
> +			flags = 0;
> +
> +		if (vq->used_wrap_counter) {
> +			flags |= VRING_DESC_F_USED;
> +			flags |= VRING_DESC_F_AVAIL;
> +		} else {
> +			flags &= ~VRING_DESC_F_USED;
> +			flags &= ~VRING_DESC_F_AVAIL;
> +		}
> +
> +		vq->desc_packed[vq->last_used_idx].flags = flags;
> +
> +		vhost_log_cache_used_vring(dev, vq,
> +					vq->last_used_idx *
> +					sizeof(struct vring_desc_packed),
> +					sizeof(struct vring_desc_packed));
> +
> +		vq->last_used_idx += vq->shadow_used_packed[i].count;
> +		if (vq->last_used_idx >= vq->size) {
> +			vq->used_wrap_counter ^= 1;
> +			vq->last_used_idx -= vq->size;
> +		}
> +	}
> +
> +	rte_smp_wmb();
> +	vq->shadow_used_idx = 0;
> +	vhost_log_cache_sync(dev, vq);
> +}
> +
> +static __rte_always_inline void
> +update_shadow_used_ring_packed(struct vhost_virtqueue *vq,
> +			 uint16_t desc_idx, uint16_t len, uint16_t count)
> +{
> +	uint16_t i = vq->shadow_used_idx++;
> +
> +	vq->shadow_used_packed[i].id  = desc_idx;
> +	vq->shadow_used_packed[i].len = len;
> +	vq->shadow_used_packed[i].count = count;
>  }
>  
>  static inline void
> -- 
> 2.14.4
> 

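The "used_idx may wrap" remark above points out that the first loop in flush_shadow_used_ring_packed() keeps adding counts without bounding them, while the second loop already wraps last_used_idx and toggles the counter. A small sketch (hypothetical helper, not from the patch) of the wrapping advance the first loop also needs:

```c
#include <stdint.h>

/* Advance *idx by count (count < size) in a ring of `size` entries,
 * toggling the wrap counter when the index passes the end of the
 * ring -- the same pattern the second flush loop already uses. */
static void ring_idx_advance(uint16_t *idx, uint16_t *wrap_counter,
		uint16_t count, uint16_t size)
{
	*idx += count;
	if (*idx >= size) {
		*wrap_counter ^= 1;
		*idx -= size;
	}
}
```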

* Re: [PATCH v5 01/15] vhost: add virtio packed virtqueue defines
  2018-06-29 15:47   ` Tiwei Bie
@ 2018-06-29 16:20     ` Maxime Coquelin
  0 siblings, 0 replies; 26+ messages in thread
From: Maxime Coquelin @ 2018-06-29 16:20 UTC (permalink / raw)
  To: Tiwei Bie; +Cc: zhihong.wang, jfreimann, dev, mst, jasowang, wexu



On 06/29/2018 05:47 PM, Tiwei Bie wrote:
> On Fri, Jun 22, 2018 at 03:43:13PM +0200, Maxime Coquelin wrote:
>> From: Jens Freimann<jfreimann@redhat.com>
>>
>> Signed-off-by: Jens Freimann<jfreimann@redhat.com>
>> ---
>>   lib/librte_vhost/vhost.h         |  4 ++++
>>   lib/librte_vhost/virtio-packed.h | 22 ++++++++++++++++++++++
>>   2 files changed, 26 insertions(+)
>>   create mode 100644 lib/librte_vhost/virtio-packed.h
>>
>> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
>> index 3f8fa3a78..bf2059198 100644
>> --- a/lib/librte_vhost/vhost.h
>> +++ b/lib/librte_vhost/vhost.h
>> @@ -191,6 +191,10 @@ struct vhost_msg {
>>   #ifndef VIRTIO_F_VERSION_1
>>    #define VIRTIO_F_VERSION_1 32
>>   #endif
>> +#ifndef VIRTIO_F_RING_PACKED
>> + #define VIRTIO_F_RING_PACKED 34
>> +#endif
>> +#define VHOST_USER_F_PROTOCOL_FEATURES	30
> VHOST_USER_F_PROTOCOL_FEATURES has already been
> defined in rte_vhost.h which has been included
> in this file.

Good catch!

Thanks,
Maxime


* Re: [PATCH v5 02/15] vhost: add helpers for packed virtqueues
  2018-06-29 15:51   ` Tiwei Bie
@ 2018-06-29 16:21     ` Maxime Coquelin
  0 siblings, 0 replies; 26+ messages in thread
From: Maxime Coquelin @ 2018-06-29 16:21 UTC (permalink / raw)
  To: Tiwei Bie; +Cc: zhihong.wang, jfreimann, dev, mst, jasowang, wexu



On 06/29/2018 05:51 PM, Tiwei Bie wrote:
> On Fri, Jun 22, 2018 at 03:43:14PM +0200, Maxime Coquelin wrote:
>> From: Jens Freimann <jfreimann@redhat.com>
>>
>> Add some helper functions to check descriptor flags
>> and check if a vring is of type packed.
>>
>> Signed-off-by: Jens Freimann <jfreimann@redhat.com>
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>> ---
>>   lib/librte_vhost/vhost.h         |  6 ++++++
>>   lib/librte_vhost/virtio-packed.h | 16 ++++++++++++++++
>>   2 files changed, 22 insertions(+)
>>
>> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
>> index bf2059198..34a614c97 100644
>> --- a/lib/librte_vhost/vhost.h
>> +++ b/lib/librte_vhost/vhost.h
>> @@ -319,6 +319,12 @@ struct virtio_net {
>>   	struct vhost_user_extern_ops extern_ops;
>>   } __rte_cache_aligned;
>>   
>> +static __rte_always_inline bool
>> +vq_is_packed(struct virtio_net *dev)
>> +{
>> +	return dev->features & (1ull << VIRTIO_F_RING_PACKED);
>> +}
>> +
>>   #define VHOST_LOG_PAGE	4096
>>   
>>   /*
>> diff --git a/lib/librte_vhost/virtio-packed.h b/lib/librte_vhost/virtio-packed.h
>> index 744b3991b..21fe5045b 100644
>> --- a/lib/librte_vhost/virtio-packed.h
>> +++ b/lib/librte_vhost/virtio-packed.h
>> @@ -19,4 +19,20 @@ struct vring_desc_packed {
>>   	uint16_t flags;
>>   };
>>   
>> +
>> +static inline int
> 
> Better to return bool.

Yes.

>> +desc_is_avail(struct vring_desc_packed *desc, uint16_t wrap_counter)
>> +{
>> +	if (wrap_counter == 1) {
>> +		if ((desc->flags & VRING_DESC_F_AVAIL) &&
>> +				!(desc->flags & VRING_DESC_F_USED))
>> +			return 1;
>> +	} else {
>> +		if (!(desc->flags & VRING_DESC_F_AVAIL) &&
>> +				(desc->flags & VRING_DESC_F_USED))
>> +			return 1;
>> +	}
>> +	return 0;
> 
> Maybe something like:
> 
> return wrap_counter == !!(desc->flags & VRING_DESC_F_AVAIL) &&
> 	wrap_counter != !!(desc->flags & VRING_DESC_F_USED);
> 
> is better?

Definitely!

Thanks,
Maxime

> Best regards,
> Tiwei Bie
> 
>> +}
>> +
>>   #endif /* __VIRTIO_PACKED_H */
>> -- 
>> 2.14.4
>>


* Re: [PATCH v5 14/15] vhost: add notification for packed ring
  2018-06-22 13:43 ` [PATCH v5 14/15] vhost: add notification " Maxime Coquelin
@ 2018-06-29 16:22   ` Tiwei Bie
  0 siblings, 0 replies; 26+ messages in thread
From: Tiwei Bie @ 2018-06-29 16:22 UTC (permalink / raw)
  To: Maxime Coquelin; +Cc: zhihong.wang, jfreimann, dev, mst, jasowang, wexu

On Fri, Jun 22, 2018 at 03:43:26PM +0200, Maxime Coquelin wrote:
> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
> ---
>  lib/librte_vhost/vhost.c         | 55 +++++++++++++++++++++++++++----
>  lib/librte_vhost/vhost.h         | 71 ++++++++++++++++++++++++++++++++++++----
>  lib/librte_vhost/vhost_user.c    | 29 +++++++++++++---
>  lib/librte_vhost/virtio-packed.h | 11 +++++++
>  lib/librte_vhost/virtio_net.c    | 12 +++----
>  5 files changed, 154 insertions(+), 24 deletions(-)
> 
> diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
> index 7cbf1eded..c26162542 100644
> --- a/lib/librte_vhost/vhost.c
> +++ b/lib/librte_vhost/vhost.c
> @@ -595,7 +595,11 @@ rte_vhost_vring_call(int vid, uint16_t vring_idx)
>  	if (!vq)
>  		return -1;
>  
> -	vhost_vring_call(dev, vq);
> +	if (vq_is_packed(dev))
> +		vhost_vring_call_packed(dev, vq);
> +	else
> +		vhost_vring_call_split(dev, vq);
> +
>  	return 0;
>  }
>  
> @@ -616,20 +620,59 @@ rte_vhost_avail_entries(int vid, uint16_t queue_id)
>  	return *(volatile uint16_t *)&vq->avail->idx - vq->last_used_idx;
>  }
>  
> +static inline int
> +vhost_enable_notify_split(struct vhost_virtqueue *vq, int enable)
> +{
> +	if (enable)
> +		vq->used->flags &= ~VRING_USED_F_NO_NOTIFY;
> +	else
> +		vq->used->flags |= VRING_USED_F_NO_NOTIFY;
> +
> +	return 0;
> +}
> +
> +static inline int
> +vhost_enable_notify_packed(struct virtio_net *dev,
> +		struct vhost_virtqueue *vq, int enable)
> +{
> +	uint16_t flags;
> +
> +	if (!enable) {
> +		vq->device_event->desc_event_flags = RING_EVENT_FLAGS_DISABLE;
> +		return 0;
> +	}
> +
> +	flags = RING_EVENT_FLAGS_DISABLE;

Should be RING_EVENT_FLAGS_ENABLE.

> +	if (dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX)) {
> +		flags = RING_EVENT_FLAGS_DESC;
> +		vq->device_event->desc_event_off_wrap = vq->last_avail_idx |
> +			vq->avail_wrap_counter << 15;
> +	}
> +
> +	rte_smp_wmb();
> +
> +	vq->device_event->desc_event_flags = flags;
> +
> +	rte_smp_wmb();
> +
> +	return 0;
> +}
> +
>  int
>  rte_vhost_enable_guest_notification(int vid, uint16_t queue_id, int enable)
>  {
>  	struct virtio_net *dev = get_device(vid);
> +	struct vhost_virtqueue *vq;
>  
>  	if (!dev)
>  		return -1;
>  
> -	if (enable)
> -		dev->virtqueue[queue_id]->used->flags &=
> -			~VRING_USED_F_NO_NOTIFY;
> +	vq = dev->virtqueue[queue_id];
> +
> +	if (vq_is_packed(dev))
> +		return vhost_enable_notify_packed(dev, vq, enable);
>  	else
> -		dev->virtqueue[queue_id]->used->flags |= VRING_USED_F_NO_NOTIFY;
> -	return 0;
> +		return vhost_enable_notify_split(vq, enable);
>  }
>  
>  void
> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> index 5dcb61c71..99db688a8 100644
> --- a/lib/librte_vhost/vhost.h
> +++ b/lib/librte_vhost/vhost.h
> @@ -21,6 +21,7 @@
>  
>  #include "rte_vhost.h"
>  #include "rte_vdpa.h"
> +#include "virtio-packed.h"

The definitions in virtio-packed.h will likely conflict
with the definitions in linux/virtio_ring.h, which is
included by linux/vhost.h (itself included by this
file), once packed ring support is upstreamed to Linux.
That would cause build issues in the future.

>  
>  /* Used to indicate that the device is running on a data core */
>  #define VIRTIO_DEV_RUNNING 1
> @@ -95,8 +96,14 @@ struct vhost_virtqueue {
>  		struct vring_desc	*desc;
>  		struct vring_desc_packed   *desc_packed;
>  	};
> -	struct vring_avail	*avail;
> -	struct vring_used	*used;
> +	union {
> +		struct vring_avail	*avail;
> +		struct vring_packed_desc_event *driver_event;
> +	};
> +	union {
> +		struct vring_used	*used;
> +		struct vring_packed_desc_event *device_event;
> +	};
>  	uint32_t		size;
>  
>  	uint16_t		last_avail_idx;
> @@ -614,7 +621,7 @@ vhost_need_event(uint16_t event_idx, uint16_t new_idx, uint16_t old)
>  }
>  
>  static __rte_always_inline void
> -vhost_vring_call(struct virtio_net *dev, struct vhost_virtqueue *vq)
> +vhost_vring_call_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
>  {
>  	/* Flush used->idx update before we read avail->flags. */
>  	rte_mb();
> @@ -625,11 +632,11 @@ vhost_vring_call(struct virtio_net *dev, struct vhost_virtqueue *vq)
>  		uint16_t new = vq->last_used_idx;
>  
>  		VHOST_LOG_DEBUG(VHOST_DATA, "%s: used_event_idx=%d, old=%d, new=%d\n",
> -			__func__,
> -			vhost_used_event(vq),
> -			old, new);
> +				__func__,
> +				vhost_used_event(vq),
> +				old, new);
>  		if (vhost_need_event(vhost_used_event(vq), new, old)
> -			&& (vq->callfd >= 0)) {
> +				&& (vq->callfd >= 0)) {
>  			vq->signalled_used = vq->last_used_idx;
>  			eventfd_write(vq->callfd, (eventfd_t) 1);
>  		}
> @@ -641,4 +648,54 @@ vhost_vring_call(struct virtio_net *dev, struct vhost_virtqueue *vq)
>  	}
>  }
>  
> +static __rte_always_inline void
> +vhost_vring_call_packed(struct virtio_net *dev, struct vhost_virtqueue *vq)
> +{
> +	uint16_t old, new, off, off_wrap, wrap;
> +	bool kick = false;
> +
> +

There is no need to have two blank lines.

> +	/*  Flush used desc update. */
> +	rte_smp_mb();
> +
> +	if (!(dev->features & (1ULL << VIRTIO_RING_F_EVENT_IDX))) {
> +		if (vq->driver_event->desc_event_flags !=
> +				RING_EVENT_FLAGS_DISABLE)
> +			kick = true;
> +		goto kick;
> +	}
> +
> +	old = vq->signalled_used;
> +	new = vq->last_used_idx;
> +	vq->signalled_used = new;
> +
> +	if (vq->driver_event->desc_event_flags != RING_EVENT_FLAGS_DESC) {
> +		if (vq->driver_event->desc_event_flags !=
> +				RING_EVENT_FLAGS_DISABLE)
> +			kick = true;
> +		goto kick;
> +	}
> +
> +	rte_smp_rmb();
> +
> +	off_wrap = vq->driver_event->desc_event_off_wrap;
> +	off = off_wrap & ~(1 << 15);
> +	wrap = vq->used_wrap_counter;
> +
> +	if (new < old) {
> +		new += vq->size;
> +		wrap ^= 1;
> +	}
> +
> +	if (wrap != off_wrap >> 15)
> +		off += vq->size;
> +
> +	if (vhost_need_event(off, new, old))
> +		kick = true;
> +
> +kick:
> +	if (kick)
> +		eventfd_write(vq->callfd, (eventfd_t)1);
> +}
> +
>  #endif /* _VHOST_NET_CDEV_H_ */
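
As an aside on the event suppression path above: vhost_need_event()
boils down to an unsigned 16-bit wrap-around comparison, true when the
event index lies in the window [old, new) modulo 2^16. A standalone
sketch of that comparison (it mirrors the semantics of the existing
helper; the name and the isolation from vq state are mine):

```c
#include <stdbool.h>
#include <stdint.h>

/* Classic VIRTIO_RING_F_EVENT_IDX test: returns true when event_idx
 * falls in [old, new_idx) modulo 2^16, i.e. the driver asked to be
 * notified for an index the device has just moved past. The casts
 * make the subtraction wrap correctly at the 16-bit boundary. */
static bool need_event(uint16_t event_idx, uint16_t new_idx, uint16_t old)
{
	return (uint16_t)(new_idx - event_idx - 1) <
	       (uint16_t)(new_idx - old);
}
```

In vhost_vring_call_packed() above, off and new are first adjusted by
vq->size when the wrap counters differ, so the same 16-bit comparison
can be applied to the packed ring's off_wrap encoding.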
> diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
> index a08d99314..c74924564 100644
> --- a/lib/librte_vhost/vhost_user.c
> +++ b/lib/librte_vhost/vhost_user.c
> @@ -509,9 +509,6 @@ translate_ring_addresses(struct virtio_net *dev, int vq_index)
>  		len = sizeof(struct vring_desc_packed) * vq->size;
>  		vq->desc_packed = (struct vring_desc_packed *) ring_addr_to_vva
>  			(dev, vq, addr->desc_user_addr, &len);
> -		vq->desc = NULL;
> -		vq->avail = NULL;
> -		vq->used = NULL;

There is no need to add these assignments in the first place.

>  		vq->log_guest_addr = 0;
>  		if (vq->desc_packed == 0 ||
>  				len != sizeof(struct vring_desc_packed) *
> @@ -526,6 +523,30 @@ translate_ring_addresses(struct virtio_net *dev, int vq_index)
>  		vq = dev->virtqueue[vq_index];
>  		addr = &vq->ring_addrs;
>  
> +		len = sizeof(struct vring_packed_desc_event);
> +		vq->driver_event = (struct vring_packed_desc_event *)
> +					(uintptr_t)ring_addr_to_vva(dev,
> +					vq, addr->avail_user_addr, &len);
> +		if (vq->driver_event == 0 ||

It's better to compare with NULL.

> +				len != sizeof(struct vring_packed_desc_event)) {
> +			RTE_LOG(DEBUG, VHOST_CONFIG,
> +				"(%d) failed to find driver area address.\n",
> +				dev->vid);
> +			return dev;
> +		}
> +
> +		len = sizeof(struct vring_packed_desc_event);
> +		vq->device_event = (struct vring_packed_desc_event *)
> +					(uintptr_t)ring_addr_to_vva(dev,
> +					vq, addr->used_user_addr, &len);
> +		if (vq->device_event == 0 ||

It's better to compare with NULL.

> +				len != sizeof(struct vring_packed_desc_event)) {
> +			RTE_LOG(DEBUG, VHOST_CONFIG,
> +				"(%d) failed to find device area address.\n",
> +				dev->vid);
> +			return dev;
> +		}
> +
>  		return dev;
>  	}
>  
> @@ -542,8 +563,6 @@ translate_ring_addresses(struct virtio_net *dev, int vq_index)
>  			dev->vid);
>  		return dev;
>  	}
> -	vq->desc_packed = NULL;

There is no need to add this assignment in the first place.

Best regards,
Tiwei Bie

^ permalink raw reply	[flat|nested] 26+ messages in thread

* Re: [PATCH v5 03/15] vhost: vring address setup for packed queues
  2018-06-29 15:59   ` Tiwei Bie
@ 2018-06-29 16:34     ` Maxime Coquelin
  2018-06-30  2:18       ` Tiwei Bie
  2018-07-01  9:58     ` Maxime Coquelin
  1 sibling, 1 reply; 26+ messages in thread
From: Maxime Coquelin @ 2018-06-29 16:34 UTC (permalink / raw)
  To: Tiwei Bie
  Cc: zhihong.wang, jfreimann, dev, mst, jasowang, wexu, Yuanhan Liu,
	Jens Freimann



On 06/29/2018 05:59 PM, Tiwei Bie wrote:
> On Fri, Jun 22, 2018 at 03:43:15PM +0200, Maxime Coquelin wrote:
>> From: Yuanhan Liu <yuanhan.liu@linux.intel.com>
>>
>> Add code to set up packed queues when enabled.
>>
>> Signed-off-by: Yuanhan Liu <yuanhan.liu@linux.intel.com>
>> Signed-off-by: Jens Freimann <jfreiman@redhat.com>
>> Signed-off-by: Maxime Coquelin <maxime.coquelin@redhat.com>
>> ---
>>   lib/librte_vhost/vhost.c      | 44 ++++++++++++++++++++++++++++++++++++++-----
>>   lib/librte_vhost/vhost.h      |  7 ++++++-
>>   lib/librte_vhost/vhost_user.c | 29 +++++++++++++++++++++++++++-
>>   3 files changed, 73 insertions(+), 7 deletions(-)
>>
>> diff --git a/lib/librte_vhost/vhost.c b/lib/librte_vhost/vhost.c
>> index afded4952..a85c6646f 100644
>> --- a/lib/librte_vhost/vhost.c
>> +++ b/lib/librte_vhost/vhost.c
>> @@ -23,6 +23,7 @@
>>   #include "iotlb.h"
>>   #include "vhost.h"
>>   #include "vhost_user.h"
>> +#include "virtio-packed.h"
>>   
>>   struct virtio_net *vhost_devices[MAX_VHOST_DEVICE];
>>   
>> @@ -115,14 +116,11 @@ free_device(struct virtio_net *dev)
>>   	rte_free(dev);
>>   }
>>   
>> -int
>> -vring_translate(struct virtio_net *dev, struct vhost_virtqueue *vq)
>> +static int
>> +vring_translate_split(struct virtio_net *dev, struct vhost_virtqueue *vq)
>>   {
>>   	uint64_t req_size, size;
>>   
>> -	if (!(dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)))
>> -		goto out;
>> -
>>   	req_size = sizeof(struct vring_desc) * vq->size;
>>   	size = req_size;
>>   	vq->desc = (struct vring_desc *)(uintptr_t)vhost_iova_to_vva(dev, vq,
>> @@ -153,6 +151,40 @@ vring_translate(struct virtio_net *dev, struct vhost_virtqueue *vq)
>>   	if (!vq->used || size != req_size)
>>   		return -1;
>>   
>> +	return 0;
>> +}
>> +
>> +static int
>> +vring_translate_packed(struct virtio_net *dev, struct vhost_virtqueue *vq)
>> +{
>> +	uint64_t req_size, size;
>> +
>> +	req_size = sizeof(struct vring_desc_packed) * vq->size;
>> +	size = req_size;
>> +	vq->desc_packed =
>> +		(struct vring_desc_packed *)(uintptr_t)vhost_iova_to_vva(dev,
>> +					vq, vq->ring_addrs.desc_user_addr,
>> +					&size, VHOST_ACCESS_RW);
>> +	if (!vq->desc || size != req_size)
> 
> Should check vq->desc_packed instead of vq->desc.

Yes

>> +		return -1;
> 
> Also need to get vq->driver_event and vq->device_event.
> Maybe shouldn't do it in this patch. But it's missing
> in the whole patch set.

I'll add it in the notif patch.

Thanks,
Maxime

>> +
>> +	return 0;
>> +}
>> +
>> +int
>> +vring_translate(struct virtio_net *dev, struct vhost_virtqueue *vq)
>> +{
>> +
>> +	if (!(dev->features & (1ULL << VIRTIO_F_IOMMU_PLATFORM)))
>> +		goto out;
>> +
>> +	if (vq_is_packed(dev)) {
>> +		if (vring_translate_packed(dev, vq) < 0)
>> +			return -1;
>> +	} else {
>> +		if (vring_translate_split(dev, vq) < 0)
>> +			return -1;
>> +	}
>>   out:
>>   	vq->access_ok = 1;
>>   
>> @@ -234,6 +266,8 @@ alloc_vring_queue(struct virtio_net *dev, uint32_t vring_idx)
>>   	dev->virtqueue[vring_idx] = vq;
>>   	init_vring_queue(dev, vring_idx);
>>   	rte_spinlock_init(&vq->access_lock);
>> +	vq->avail_wrap_counter = 1;
>> +	vq->used_wrap_counter = 1;
>>   
>>   	dev->nr_vring += 1;
>>   
>> diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
>> index 34a614c97..671b4b3bf 100644
>> --- a/lib/librte_vhost/vhost.h
>> +++ b/lib/librte_vhost/vhost.h
>> @@ -84,7 +84,10 @@ struct log_cache_entry {
>>    * Structure contains variables relevant to RX/TX virtqueues.
>>    */
>>   struct vhost_virtqueue {
>> -	struct vring_desc	*desc;
>> +	union {
>> +		struct vring_desc	*desc;
>> +		struct vring_desc_packed   *desc_packed;
>> +	};
>>   	struct vring_avail	*avail;
>>   	struct vring_used	*used;
>>   	uint32_t		size;
>> @@ -122,6 +125,8 @@ struct vhost_virtqueue {
>>   
>>   	struct batch_copy_elem	*batch_copy_elems;
>>   	uint16_t		batch_copy_nb_elems;
>> +	uint16_t		used_wrap_counter;
>> +	uint16_t		avail_wrap_counter;
> 
> Not quite sure about this. Do you think it will be
> better to define wrap counters as bool (as only 0 or
> 1 are available)?

I think it should work, but I'm not sure it is cleaner.

When defining something as bool, I expect to use true or false
for its assignment, but maybe that's me.

But I could certainly use uint8_t instead to reduce the size of the
struct.

What do you think?
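
To illustrate what the counters being discussed actually gate: in the
packed layout, a descriptor's state is encoded by whether its AVAIL and
USED flag bits match the current wrap counter. A minimal sketch — the
bit positions follow the packed ring proposal this series targets and
are stated here as an assumption, not lifted from the patch:

```c
#include <stdbool.h>
#include <stdint.h>

#define VRING_DESC_F_AVAIL	(1 << 7)
#define VRING_DESC_F_USED	(1 << 15)

/* A packed descriptor is available to the device when its AVAIL bit
 * equals the current wrap counter and its USED bit does not. The
 * counter flips each time last_avail_idx wraps past the ring size,
 * which is how old entries are distinguished from new ones without
 * a separate avail ring. */
static bool desc_is_avail(uint16_t flags, bool wrap_counter)
{
	bool avail = !!(flags & VRING_DESC_F_AVAIL);
	bool used  = !!(flags & VRING_DESC_F_USED);

	return avail == wrap_counter && used != wrap_counter;
}
```

Since only the single flip matters, any of bool, uint8_t or uint16_t
can hold the counter; the choice is purely about struct size and style.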

>>   
>>   	struct log_cache_entry log_cache[VHOST_LOG_CACHE_NR];
>>   	uint16_t log_cache_nb_elem;
>> diff --git a/lib/librte_vhost/vhost_user.c b/lib/librte_vhost/vhost_user.c
>> index 947290fc3..b6097c085 100644
>> --- a/lib/librte_vhost/vhost_user.c
>> +++ b/lib/librte_vhost/vhost_user.c
>> @@ -39,6 +39,7 @@
>>   #include "iotlb.h"
>>   #include "vhost.h"
>>   #include "vhost_user.h"
>> +#include "virtio-packed.h"
>>   
>>   #define VIRTIO_MIN_MTU 68
>>   #define VIRTIO_MAX_MTU 65535
>> @@ -477,6 +478,30 @@ translate_ring_addresses(struct virtio_net *dev, int vq_index)
>>   	struct vhost_vring_addr *addr = &vq->ring_addrs;
>>   	uint64_t len;
>>   
>> +	if (vq_is_packed(dev)) {
>> +		len = sizeof(struct vring_desc_packed) * vq->size;
>> +		vq->desc_packed = (struct vring_desc_packed *) ring_addr_to_vva
>> +			(dev, vq, addr->desc_user_addr, &len);
>> +		vq->desc = NULL;
> 
> `vq->desc = NULL` will set vq->desc_packed to NULL.

Indeed, I was sure I had removed that line, as it makes
translate_ring_addresses fail systematically...

Ha, I squashed the fix in the wrong patch (PATCH 14).

>> +		vq->avail = NULL;
>> +		vq->used = NULL;
>> +		vq->log_guest_addr = 0;
>> +		if (vq->desc_packed == 0 ||
> 
> It's better to compare with NULL.

I agree.

>> +				len != sizeof(struct vring_desc_packed) *
>> +				vq->size) {
>> +			RTE_LOG(DEBUG, VHOST_CONFIG,
>> +				"(%d) failed to map desc_packed ring.\n",
>> +				dev->vid);
>> +			return dev;
>> +		}
>> +
>> +		dev = numa_realloc(dev, vq_index);
>> +		vq = dev->virtqueue[vq_index];
>> +		addr = &vq->ring_addrs;
>> +
>> +		return dev;
>> +	}
>> +
>>   	/* The addresses are converted from QEMU virtual to Vhost virtual. */
>>   	if (vq->desc && vq->avail && vq->used)
>>   		return dev;
>> @@ -490,6 +515,7 @@ translate_ring_addresses(struct virtio_net *dev, int vq_index)
>>   			dev->vid);
>>   		return dev;
>>   	}
>> +	vq->desc_packed = NULL;

Same mistake, I squashed the change in patch 14.

> It seems that above assignment isn't needed.
> 
>>   
>>   	dev = numa_realloc(dev, vq_index);
>>   	vq = dev->virtqueue[vq_index];
>> @@ -888,7 +914,8 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *pmsg)
>>   static int
>>   vq_is_ready(struct vhost_virtqueue *vq)
>>   {
>> -	return vq && vq->desc && vq->avail && vq->used &&
>> +	return vq &&
>> +	       (vq->desc_packed || (vq->desc && vq->avail && vq->used)) &&
>>   	       vq->kickfd != VIRTIO_UNINITIALIZED_EVENTFD &&
>>   	       vq->callfd != VIRTIO_UNINITIALIZED_EVENTFD;
>>   }
>> -- 
>> 2.14.4
>>

* Re: [PATCH v5 03/15] vhost: vring address setup for packed queues
  2018-06-29 16:34     ` Maxime Coquelin
@ 2018-06-30  2:18       ` Tiwei Bie
  0 siblings, 0 replies; 26+ messages in thread
From: Tiwei Bie @ 2018-06-30  2:18 UTC (permalink / raw)
  To: Maxime Coquelin
  Cc: zhihong.wang, jfreimann, dev, mst, jasowang, wexu, Yuanhan Liu,
	Jens Freimann

On Fri, Jun 29, 2018 at 06:34:09PM +0200, Maxime Coquelin wrote:
> On 06/29/2018 05:59 PM, Tiwei Bie wrote:
> > On Fri, Jun 22, 2018 at 03:43:15PM +0200, Maxime Coquelin wrote:
[...]
> > > diff --git a/lib/librte_vhost/vhost.h b/lib/librte_vhost/vhost.h
> > > index 34a614c97..671b4b3bf 100644
> > > --- a/lib/librte_vhost/vhost.h
> > > +++ b/lib/librte_vhost/vhost.h
> > > @@ -84,7 +84,10 @@ struct log_cache_entry {
> > >    * Structure contains variables relevant to RX/TX virtqueues.
> > >    */
> > >   struct vhost_virtqueue {
> > > -	struct vring_desc	*desc;
> > > +	union {
> > > +		struct vring_desc	*desc;
> > > +		struct vring_desc_packed   *desc_packed;
> > > +	};
> > >   	struct vring_avail	*avail;
> > >   	struct vring_used	*used;
> > >   	uint32_t		size;
> > > @@ -122,6 +125,8 @@ struct vhost_virtqueue {
> > >   	struct batch_copy_elem	*batch_copy_elems;
> > >   	uint16_t		batch_copy_nb_elems;
> > > +	uint16_t		used_wrap_counter;
> > > +	uint16_t		avail_wrap_counter;
> > 
> > Not quite sure about this. Do you think it will be
> > better to define wrap counters as bool (as only 0 or
> > 1 are available)?
> 
> I think it should work, but not sure this is cleaner.
> 
> When defining something as bool, I expect to use true or false
> for its assignment, but maybe that's me.
> 
> But I could certainly use uint8_t instead to reduce the size of the
> struct.
> 
> What do you think?

I don't have a strong opinion on this.
Jason suggested that I use bool in my patch [1].
So I wanted to hear your opinion too.

[1] https://lkml.org/lkml/2018/5/29/47

Best regards,
Tiwei Bie

* Re: [PATCH v5 03/15] vhost: vring address setup for packed queues
  2018-06-29 15:59   ` Tiwei Bie
  2018-06-29 16:34     ` Maxime Coquelin
@ 2018-07-01  9:58     ` Maxime Coquelin
  1 sibling, 0 replies; 26+ messages in thread
From: Maxime Coquelin @ 2018-07-01  9:58 UTC (permalink / raw)
  To: Tiwei Bie
  Cc: zhihong.wang, jfreimann, dev, mst, jasowang, wexu, Yuanhan Liu,
	Jens Freimann



On 06/29/2018 05:59 PM, Tiwei Bie wrote:
> @@ -888,7 +914,8 @@ vhost_user_set_mem_table(struct virtio_net **pdev, struct VhostUserMsg *pmsg)
>   static int
>   vq_is_ready(struct vhost_virtqueue *vq)
>   {
> -	return vq && vq->desc && vq->avail && vq->used &&
> +	return vq &&
> +	       (vq->desc_packed || (vq->desc && vq->avail && vq->used)) &&
>   	       vq->kickfd != VIRTIO_UNINITIALIZED_EVENTFD &&
>   	       vq->callfd != VIRTIO_UNINITIALIZED_EVENTFD;

It seems that the check is wrong here as desc_packed and desc are in a
union. We may have to check whether packed ring has been negotiated.
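
For illustration, a possible shape of the corrected check, with the
vhost types reduced to stand-ins and the kickfd/callfd tests omitted —
the fake_vq type and the bare packed flag are inventions for this
sketch; the real code would key on vq_is_packed(dev):

```c
#include <stdbool.h>
#include <stddef.h>

/* Minimal stand-in for struct vhost_virtqueue. In the real structure
 * desc and desc_packed share a union, so a non-NULL desc_packed is
 * indistinguishable from a non-NULL desc: the readiness test must be
 * driven by whether the packed layout was negotiated, not by which
 * union member happens to be set. */
struct fake_vq {
	void *desc;	/* aliases desc_packed in the real union */
	void *avail;
	void *used;
};

static bool vq_is_ready(const struct fake_vq *vq, bool packed)
{
	if (vq == NULL)
		return false;
	if (packed)
		return vq->desc != NULL; /* desc_packed via the union */
	return vq->desc != NULL && vq->avail != NULL && vq->used != NULL;
}
```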

end of thread, other threads:[~2018-07-01  9:58 UTC | newest]

Thread overview: 26+ messages
2018-06-22 13:43 [PATCH v5 00/15] Vhost: add support to packed ring layout Maxime Coquelin
2018-06-22 13:43 ` [PATCH v5 01/15] vhost: add virtio packed virtqueue defines Maxime Coquelin
2018-06-29 15:47   ` Tiwei Bie
2018-06-29 16:20     ` Maxime Coquelin
2018-06-22 13:43 ` [PATCH v5 02/15] vhost: add helpers for packed virtqueues Maxime Coquelin
2018-06-29 15:51   ` Tiwei Bie
2018-06-29 16:21     ` Maxime Coquelin
2018-06-22 13:43 ` [PATCH v5 03/15] vhost: vring address setup for packed queues Maxime Coquelin
2018-06-29 15:59   ` Tiwei Bie
2018-06-29 16:34     ` Maxime Coquelin
2018-06-30  2:18       ` Tiwei Bie
2018-07-01  9:58     ` Maxime Coquelin
2018-06-22 13:43 ` [PATCH v5 04/15] vhost: clear shadow used table index at flush time Maxime Coquelin
2018-06-22 13:43 ` [PATCH v5 05/15] vhost: make indirect desc table copy desc type agnostic Maxime Coquelin
2018-06-22 13:43 ` [PATCH v5 06/15] vhost: clear batch copy index at copy time Maxime Coquelin
2018-06-22 13:43 ` [PATCH v5 07/15] vhost: extract split ring handling from Rx and Tx functions Maxime Coquelin
2018-06-22 13:43 ` [PATCH v5 08/15] vhost: append shadow used ring function names with split Maxime Coquelin
2018-06-22 13:43 ` [PATCH v5 09/15] vhost: add shadow used ring support for packed rings Maxime Coquelin
2018-06-29 16:08   ` Tiwei Bie
2018-06-22 13:43 ` [PATCH v5 10/15] vhost: create descriptor mapping function Maxime Coquelin
2018-06-22 13:43 ` [PATCH v5 11/15] vhost: add vector filling support for packed ring Maxime Coquelin
2018-06-22 13:43 ` [PATCH v5 12/15] vhost: add Rx " Maxime Coquelin
2018-06-22 13:43 ` [PATCH v5 13/15] vhost: add Tx " Maxime Coquelin
2018-06-22 13:43 ` [PATCH v5 14/15] vhost: add notification " Maxime Coquelin
2018-06-29 16:22   ` Tiwei Bie
2018-06-22 13:43 ` [PATCH v5 15/15] vhost: advertize packed ring layout support Maxime Coquelin
