linux-kernel.vger.kernel.org archive mirror
* [PATCH net-next v2 0/5] virtio: support packed ring
@ 2018-07-11  2:27 Tiwei Bie
  2018-07-11  2:27 ` [PATCH net-next v2 1/5] virtio: add packed ring definitions Tiwei Bie
                   ` (7 more replies)
  0 siblings, 8 replies; 53+ messages in thread
From: Tiwei Bie @ 2018-07-11  2:27 UTC (permalink / raw)
  To: mst, jasowang, virtualization, linux-kernel, netdev, virtio-dev
  Cc: wexu, jfreimann, tiwei.bie

Hello everyone,

This patch set implements packed ring support in the virtio driver.

Some functional tests have been done with Jason's
packed ring implementation in vhost:

https://lkml.org/lkml/2018/7/3/33

Both ping and netperf worked as expected.

v1 -> v2:
- Use READ_ONCE() to read event off_wrap and flags together (Jason);
  see the sketch after this list;
- Add comments related to ccw (Jason);
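
A note on the READ_ONCE() change above: the device event suppression
structure is two adjacent 16-bit fields (off_wrap followed by flags),
so both can be observed consistently with a single 32-bit load. Here
is a stand-alone sketch of the idea, not the kernel code; the real
version in patch 4 also converts each half with virtio16_to_cpu(),
and the low/high split below assumes a little-endian host:

#include <stdint.h>
#include <stdio.h>

/* Mirrors struct vring_packed_desc_event from patch 1. */
struct vring_packed_desc_event {
        uint16_t off_wrap;
        uint16_t flags;
};

int main(void)
{
        struct vring_packed_desc_event ev = { 0x8005, 0x2 };
        /* A single 32-bit read sees both fields consistently;
         * the kernel uses READ_ONCE() for this. */
        uint32_t snapshot = *(volatile uint32_t *)&ev;
        uint16_t off_wrap = snapshot & 0xffff;
        uint16_t flags = (snapshot >> 16) & 0x3;

        printf("off_wrap=0x%04x flags=0x%x\n", off_wrap, flags);
        return 0;
}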

RFC (v6) -> v1:
- Avoid extra virtio_wmb() in virtqueue_enable_cb_delayed_packed()
  when event idx is off (Jason);
- Fix bufs calculation in virtqueue_enable_cb_delayed_packed() (Jason);
- Test the state of the desc at used_idx instead of last_used_idx
  in virtqueue_enable_cb_delayed_packed() (Jason);
- Save wrap counter (as part of queue state) in the return value
  of virtqueue_enable_cb_prepare_packed();
- Refine the packed ring definitions in uapi;
- Rebase on the net-next tree;

RFC v5 -> RFC v6:
- Avoid tracking addr/len/flags when DMA API isn't used (MST/Jason);
- Define wrap counter as bool (Jason);
- Use ALIGN() in vring_init_packed() (Jason);
- Avoid using pointer to track `next` in detach_buf_packed() (Jason);
- Add comments for barriers (Jason);
- Don't enable RING_PACKED on ccw for now (noticed by Jason);
- Refine the memory barrier in virtqueue_poll();
- Add a missing memory barrier in virtqueue_enable_cb_delayed_packed();
- Remove the hacks in virtqueue_enable_cb_prepare_packed();

RFC v4 -> RFC v5:
- Save DMA addr, etc in desc state (Jason);
- Track used wrap counter;

RFC v3 -> RFC v4:
- Make ID allocation support out-of-order (Jason);
- Various fixes for EVENT_IDX support;

RFC v2 -> RFC v3:
- Split into small patches (Jason);
- Add helper virtqueue_use_indirect() (Jason);
- Just set id for the last descriptor of a list (Jason);
- Calculate the prev in virtqueue_add_packed() (Jason);
- Fix/improve desc suppression code (Jason/MST);
- Refine the code layout for XXX_split/packed and wrappers (MST);
- Fix the comments and API in uapi (MST);
- Remove the BUG_ON() for indirect (Jason);
- Some other refinements and bug fixes;

RFC v1 -> RFC v2:
- Add indirect descriptor support - compile test only;
- Add event suppression support - compile test only;
- Move vring_packed_init() out of uapi (Jason, MST);
- Merge two loops into one in virtqueue_add_packed() (Jason);
- Split vring_unmap_one() for packed ring and split ring (Jason);
- Avoid using '%' operator (Jason);
- Rename free_head -> next_avail_idx (Jason);
- Add comments for virtio_wmb() in virtqueue_add_packed() (Jason);
- Some other refinements and bug fixes;

Thanks!

Tiwei Bie (5):
  virtio: add packed ring definitions
  virtio_ring: support creating packed ring
  virtio_ring: add packed ring support
  virtio_ring: add event idx support in packed ring
  virtio_ring: enable packed ring

 drivers/s390/virtio/virtio_ccw.c   |   14 +
 drivers/virtio/virtio_ring.c       | 1365 ++++++++++++++++++++++------
 include/linux/virtio_ring.h        |    8 +-
 include/uapi/linux/virtio_config.h |    3 +
 include/uapi/linux/virtio_ring.h   |   43 +
 5 files changed, 1157 insertions(+), 276 deletions(-)

-- 
2.18.0



* [PATCH net-next v2 1/5] virtio: add packed ring definitions
  2018-07-11  2:27 [PATCH net-next v2 0/5] virtio: support packed ring Tiwei Bie
@ 2018-07-11  2:27 ` Tiwei Bie
  2018-09-07 13:51   ` Michael S. Tsirkin
  2018-07-11  2:27 ` [PATCH net-next v2 2/5] virtio_ring: support creating packed ring Tiwei Bie
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 53+ messages in thread
From: Tiwei Bie @ 2018-07-11  2:27 UTC (permalink / raw)
  To: mst, jasowang, virtualization, linux-kernel, netdev, virtio-dev
  Cc: wexu, jfreimann, tiwei.bie

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
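
A quick illustration of how the VRING_DESC_F_AVAIL/VRING_DESC_F_USED
bits added below are meant to be tested (a stand-alone sketch of
mine, mirroring the is_used_desc_packed() check that patch 3 adds):

#include <stdbool.h>
#include <stdint.h>

#define VRING_DESC_F_AVAIL      (1 << 7)
#define VRING_DESC_F_USED       (1 << 15)

static bool desc_is_used(uint16_t flags, bool used_wrap_counter)
{
        bool avail = flags & VRING_DESC_F_AVAIL;
        bool used = flags & VRING_DESC_F_USED;

        /* The device marks a descriptor as used by making both bits
         * match the driver's current used wrap counter. */
        return avail == used && used == used_wrap_counter;
}

On the first pass over the ring (wrap counter 1), a descriptor whose
flags have both bits set (0x8080) tests as used; once the ring wraps
and the counter flips, the same test expects both bits clear.
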
 include/uapi/linux/virtio_config.h |  3 +++
 include/uapi/linux/virtio_ring.h   | 43 ++++++++++++++++++++++++++++++
 2 files changed, 46 insertions(+)

diff --git a/include/uapi/linux/virtio_config.h b/include/uapi/linux/virtio_config.h
index 449132c76b1c..1196e1c1d4f6 100644
--- a/include/uapi/linux/virtio_config.h
+++ b/include/uapi/linux/virtio_config.h
@@ -75,6 +75,9 @@
  */
 #define VIRTIO_F_IOMMU_PLATFORM		33
 
+/* This feature indicates support for the packed virtqueue layout. */
+#define VIRTIO_F_RING_PACKED		34
+
 /*
  * Does the device support Single Root I/O Virtualization?
  */
diff --git a/include/uapi/linux/virtio_ring.h b/include/uapi/linux/virtio_ring.h
index 6d5d5faa989b..0254a2ba29cf 100644
--- a/include/uapi/linux/virtio_ring.h
+++ b/include/uapi/linux/virtio_ring.h
@@ -44,6 +44,10 @@
 /* This means the buffer contains a list of buffer descriptors. */
 #define VRING_DESC_F_INDIRECT	4
 
+/* Mark a descriptor as available or used. */
+#define VRING_DESC_F_AVAIL	(1ul << 7)
+#define VRING_DESC_F_USED	(1ul << 15)
+
 /* The Host uses this in used->flags to advise the Guest: don't kick me when
  * you add a buffer.  It's unreliable, so it's simply an optimization.  Guest
  * will still kick if it's out of buffers. */
@@ -53,6 +57,17 @@
  * optimization.  */
 #define VRING_AVAIL_F_NO_INTERRUPT	1
 
+/* Enable events. */
+#define VRING_EVENT_F_ENABLE	0x0
+/* Disable events. */
+#define VRING_EVENT_F_DISABLE	0x1
+/*
+ * Enable events for a specific descriptor
+ * (as specified by Descriptor Ring Change Event Offset/Wrap Counter).
+ * Only valid if VIRTIO_RING_F_EVENT_IDX has been negotiated.
+ */
+#define VRING_EVENT_F_DESC	0x2
+
 /* We support indirect buffer descriptors */
 #define VIRTIO_RING_F_INDIRECT_DESC	28
 
@@ -171,4 +186,32 @@ static inline int vring_need_event(__u16 event_idx, __u16 new_idx, __u16 old)
 	return (__u16)(new_idx - event_idx - 1) < (__u16)(new_idx - old);
 }
 
+struct vring_packed_desc_event {
+	/* Descriptor Ring Change Event Offset/Wrap Counter. */
+	__virtio16 off_wrap;
+	/* Descriptor Ring Change Event Flags. */
+	__virtio16 flags;
+};
+
+struct vring_packed_desc {
+	/* Buffer Address. */
+	__virtio64 addr;
+	/* Buffer Length. */
+	__virtio32 len;
+	/* Buffer ID. */
+	__virtio16 id;
+	/* The flags depending on descriptor type. */
+	__virtio16 flags;
+};
+
+struct vring_packed {
+	unsigned int num;
+
+	struct vring_packed_desc *desc;
+
+	struct vring_packed_desc_event *driver;
+
+	struct vring_packed_desc_event *device;
+};
+
 #endif /* _UAPI_LINUX_VIRTIO_RING_H */
-- 
2.18.0



* [PATCH net-next v2 2/5] virtio_ring: support creating packed ring
  2018-07-11  2:27 [PATCH net-next v2 0/5] virtio: support packed ring Tiwei Bie
  2018-07-11  2:27 ` [PATCH net-next v2 1/5] virtio: add packed ring definitions Tiwei Bie
@ 2018-07-11  2:27 ` Tiwei Bie
  2018-09-07 14:03   ` Michael S. Tsirkin
  2018-07-11  2:27 ` [PATCH net-next v2 3/5] virtio_ring: add packed ring support Tiwei Bie
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 53+ messages in thread
From: Tiwei Bie @ 2018-07-11  2:27 UTC (permalink / raw)
  To: mst, jasowang, virtualization, linux-kernel, netdev, virtio-dev
  Cc: wexu, jfreimann, tiwei.bie

This commit introduces support for creating the packed ring.
All split-ring-specific functions are renamed with a _split suffix.
Some necessary stubs for the packed ring are also added.

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
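
To make the packed ring layout below concrete, here is a stand-alone
sketch (my arithmetic, mirroring vring_init_packed() and
vring_size_packed() from this patch): with num = 128 and align = 64,
the descriptors take 128 * 16 = 2048 bytes, which is already aligned,
so the driver event struct lands at offset 2048, the device event
struct at 2052, and the whole ring takes 2056 bytes.

#include <stdint.h>
#include <stdio.h>

#define ALIGN(x, a) (((x) + (a) - 1) & ~((uintptr_t)(a) - 1))

struct vring_packed_desc { uint64_t addr; uint32_t len; uint16_t id, flags; };
struct vring_packed_desc_event { uint16_t off_wrap, flags; };

int main(void)
{
        unsigned int num = 128;
        unsigned long align = 64;
        size_t desc_bytes = num * sizeof(struct vring_packed_desc);
        size_t driver_off = ALIGN(desc_bytes, align);
        size_t device_off = driver_off + sizeof(struct vring_packed_desc_event);
        size_t total = driver_off + 2 * sizeof(struct vring_packed_desc_event);

        /* Prints: driver@2048 device@2052 total=2056 */
        printf("driver@%zu device@%zu total=%zu\n",
               driver_off, device_off, total);
        return 0;
}
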
 drivers/virtio/virtio_ring.c | 801 +++++++++++++++++++++++------------
 include/linux/virtio_ring.h  |   8 +-
 2 files changed, 546 insertions(+), 263 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 814b395007b2..c4f8abc7445a 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -60,11 +60,15 @@ struct vring_desc_state {
 	struct vring_desc *indir_desc;	/* Indirect descriptor, if any. */
 };
 
+struct vring_desc_state_packed {
+	int next;			/* The next desc state. */
+};
+
 struct vring_virtqueue {
 	struct virtqueue vq;
 
-	/* Actual memory layout for this queue */
-	struct vring vring;
+	/* Is this a packed ring? */
+	bool packed;
 
 	/* Can we use weak barriers? */
 	bool weak_barriers;
@@ -86,11 +90,39 @@ struct vring_virtqueue {
 	/* Last used index we've seen. */
 	u16 last_used_idx;
 
-	/* Last written value to avail->flags */
-	u16 avail_flags_shadow;
+	union {
+		/* Available for split ring */
+		struct {
+			/* Actual memory layout for this queue. */
+			struct vring vring;
 
-	/* Last written value to avail->idx in guest byte order */
-	u16 avail_idx_shadow;
+			/* Last written value to avail->flags */
+			u16 avail_flags_shadow;
+
+			/* Last written value to avail->idx in
+			 * guest byte order. */
+			u16 avail_idx_shadow;
+		};
+
+		/* Available for packed ring */
+		struct {
+			/* Actual memory layout for this queue. */
+			struct vring_packed vring_packed;
+
+			/* Driver ring wrap counter. */
+			bool avail_wrap_counter;
+
+			/* Device ring wrap counter. */
+			bool used_wrap_counter;
+
+			/* Index of the next avail descriptor. */
+			u16 next_avail_idx;
+
+			/* Last written value to driver->flags in
+			 * guest byte order. */
+			u16 event_flags_shadow;
+		};
+	};
 
 	/* How to notify other side. FIXME: commonalize hcalls! */
 	bool (*notify)(struct virtqueue *vq);
@@ -110,11 +142,24 @@ struct vring_virtqueue {
 #endif
 
 	/* Per-descriptor state. */
-	struct vring_desc_state desc_state[];
+	union {
+		struct vring_desc_state desc_state[1];
+		struct vring_desc_state_packed desc_state_packed[1];
+	};
 };
 
 #define to_vvq(_vq) container_of(_vq, struct vring_virtqueue, vq)
 
+static inline bool virtqueue_use_indirect(struct virtqueue *_vq,
+					  unsigned int total_sg)
+{
+	struct vring_virtqueue *vq = to_vvq(_vq);
+
+	/* If the host supports indirect descriptor tables, and we have multiple
+	 * buffers, then go indirect. FIXME: tune this threshold */
+	return (vq->indirect && total_sg > 1 && vq->vq.num_free);
+}
+
 /*
  * Modern virtio devices have feature bits to specify whether they need a
  * quirk and bypass the IOMMU. If not there, just use the DMA API.
@@ -200,8 +245,17 @@ static dma_addr_t vring_map_single(const struct vring_virtqueue *vq,
 			      cpu_addr, size, direction);
 }
 
-static void vring_unmap_one(const struct vring_virtqueue *vq,
-			    struct vring_desc *desc)
+static int vring_mapping_error(const struct vring_virtqueue *vq,
+			       dma_addr_t addr)
+{
+	if (!vring_use_dma_api(vq->vq.vdev))
+		return 0;
+
+	return dma_mapping_error(vring_dma_dev(vq), addr);
+}
+
+static void vring_unmap_one_split(const struct vring_virtqueue *vq,
+				  struct vring_desc *desc)
 {
 	u16 flags;
 
@@ -225,17 +279,9 @@ static void vring_unmap_one(const struct vring_virtqueue *vq,
 	}
 }
 
-static int vring_mapping_error(const struct vring_virtqueue *vq,
-			       dma_addr_t addr)
-{
-	if (!vring_use_dma_api(vq->vq.vdev))
-		return 0;
-
-	return dma_mapping_error(vring_dma_dev(vq), addr);
-}
-
-static struct vring_desc *alloc_indirect(struct virtqueue *_vq,
-					 unsigned int total_sg, gfp_t gfp)
+static struct vring_desc *alloc_indirect_split(struct virtqueue *_vq,
+					       unsigned int total_sg,
+					       gfp_t gfp)
 {
 	struct vring_desc *desc;
 	unsigned int i;
@@ -256,14 +302,14 @@ static struct vring_desc *alloc_indirect(struct virtqueue *_vq,
 	return desc;
 }
 
-static inline int virtqueue_add(struct virtqueue *_vq,
-				struct scatterlist *sgs[],
-				unsigned int total_sg,
-				unsigned int out_sgs,
-				unsigned int in_sgs,
-				void *data,
-				void *ctx,
-				gfp_t gfp)
+static inline int virtqueue_add_split(struct virtqueue *_vq,
+				      struct scatterlist *sgs[],
+				      unsigned int total_sg,
+				      unsigned int out_sgs,
+				      unsigned int in_sgs,
+				      void *data,
+				      void *ctx,
+				      gfp_t gfp)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
 	struct scatterlist *sg;
@@ -299,10 +345,8 @@ static inline int virtqueue_add(struct virtqueue *_vq,
 
 	head = vq->free_head;
 
-	/* If the host supports indirect descriptor tables, and we have multiple
-	 * buffers, then go indirect. FIXME: tune this threshold */
-	if (vq->indirect && total_sg > 1 && vq->vq.num_free)
-		desc = alloc_indirect(_vq, total_sg, gfp);
+	if (virtqueue_use_indirect(_vq, total_sg))
+		desc = alloc_indirect_split(_vq, total_sg, gfp);
 	else {
 		desc = NULL;
 		WARN_ON_ONCE(total_sg > vq->vring.num && !vq->indirect);
@@ -423,7 +467,7 @@ static inline int virtqueue_add(struct virtqueue *_vq,
 	for (n = 0; n < total_sg; n++) {
 		if (i == err_idx)
 			break;
-		vring_unmap_one(vq, &desc[i]);
+		vring_unmap_one_split(vq, &desc[i]);
 		i = virtio16_to_cpu(_vq->vdev, vq->vring.desc[i].next);
 	}
 
@@ -434,6 +478,355 @@ static inline int virtqueue_add(struct virtqueue *_vq,
 	return -EIO;
 }
 
+static bool virtqueue_kick_prepare_split(struct virtqueue *_vq)
+{
+	struct vring_virtqueue *vq = to_vvq(_vq);
+	u16 new, old;
+	bool needs_kick;
+
+	START_USE(vq);
+	/* We need to expose available array entries before checking avail
+	 * event. */
+	virtio_mb(vq->weak_barriers);
+
+	old = vq->avail_idx_shadow - vq->num_added;
+	new = vq->avail_idx_shadow;
+	vq->num_added = 0;
+
+#ifdef DEBUG
+	if (vq->last_add_time_valid) {
+		WARN_ON(ktime_to_ms(ktime_sub(ktime_get(),
+					      vq->last_add_time)) > 100);
+	}
+	vq->last_add_time_valid = false;
+#endif
+
+	if (vq->event) {
+		needs_kick = vring_need_event(virtio16_to_cpu(_vq->vdev, vring_avail_event(&vq->vring)),
+					      new, old);
+	} else {
+		needs_kick = !(vq->vring.used->flags & cpu_to_virtio16(_vq->vdev, VRING_USED_F_NO_NOTIFY));
+	}
+	END_USE(vq);
+	return needs_kick;
+}
+
+static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head,
+			     void **ctx)
+{
+	unsigned int i, j;
+	__virtio16 nextflag = cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_NEXT);
+
+	/* Clear data ptr. */
+	vq->desc_state[head].data = NULL;
+
+	/* Put back on free list: unmap first-level descriptors and find end */
+	i = head;
+
+	while (vq->vring.desc[i].flags & nextflag) {
+		vring_unmap_one_split(vq, &vq->vring.desc[i]);
+		i = virtio16_to_cpu(vq->vq.vdev, vq->vring.desc[i].next);
+		vq->vq.num_free++;
+	}
+
+	vring_unmap_one_split(vq, &vq->vring.desc[i]);
+	vq->vring.desc[i].next = cpu_to_virtio16(vq->vq.vdev, vq->free_head);
+	vq->free_head = head;
+
+	/* Plus final descriptor */
+	vq->vq.num_free++;
+
+	if (vq->indirect) {
+		struct vring_desc *indir_desc = vq->desc_state[head].indir_desc;
+		u32 len;
+
+		/* Free the indirect table, if any, now that it's unmapped. */
+		if (!indir_desc)
+			return;
+
+		len = virtio32_to_cpu(vq->vq.vdev, vq->vring.desc[head].len);
+
+		BUG_ON(!(vq->vring.desc[head].flags &
+			 cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_INDIRECT)));
+		BUG_ON(len == 0 || len % sizeof(struct vring_desc));
+
+		for (j = 0; j < len / sizeof(struct vring_desc); j++)
+			vring_unmap_one_split(vq, &indir_desc[j]);
+
+		kfree(indir_desc);
+		vq->desc_state[head].indir_desc = NULL;
+	} else if (ctx) {
+		*ctx = vq->desc_state[head].indir_desc;
+	}
+}
+
+static inline bool more_used_split(const struct vring_virtqueue *vq)
+{
+	return vq->last_used_idx != virtio16_to_cpu(vq->vq.vdev, vq->vring.used->idx);
+}
+
+static void *virtqueue_get_buf_ctx_split(struct virtqueue *_vq,
+					 unsigned int *len,
+					 void **ctx)
+{
+	struct vring_virtqueue *vq = to_vvq(_vq);
+	void *ret;
+	unsigned int i;
+	u16 last_used;
+
+	START_USE(vq);
+
+	if (unlikely(vq->broken)) {
+		END_USE(vq);
+		return NULL;
+	}
+
+	if (!more_used_split(vq)) {
+		pr_debug("No more buffers in queue\n");
+		END_USE(vq);
+		return NULL;
+	}
+
+	/* Only get used array entries after they have been exposed by host. */
+	virtio_rmb(vq->weak_barriers);
+
+	last_used = (vq->last_used_idx & (vq->vring.num - 1));
+	i = virtio32_to_cpu(_vq->vdev, vq->vring.used->ring[last_used].id);
+	*len = virtio32_to_cpu(_vq->vdev, vq->vring.used->ring[last_used].len);
+
+	if (unlikely(i >= vq->vring.num)) {
+		BAD_RING(vq, "id %u out of range\n", i);
+		return NULL;
+	}
+	if (unlikely(!vq->desc_state[i].data)) {
+		BAD_RING(vq, "id %u is not a head!\n", i);
+		return NULL;
+	}
+
+	/* detach_buf_split clears data, so grab it now. */
+	ret = vq->desc_state[i].data;
+	detach_buf_split(vq, i, ctx);
+	vq->last_used_idx++;
+	/* If we expect an interrupt for the next entry, tell host
+	 * by writing event index and flush out the write before
+	 * the read in the next get_buf call. */
+	if (!(vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT))
+		virtio_store_mb(vq->weak_barriers,
+				&vring_used_event(&vq->vring),
+				cpu_to_virtio16(_vq->vdev, vq->last_used_idx));
+
+#ifdef DEBUG
+	vq->last_add_time_valid = false;
+#endif
+
+	END_USE(vq);
+	return ret;
+}
+
+static void virtqueue_disable_cb_split(struct virtqueue *_vq)
+{
+	struct vring_virtqueue *vq = to_vvq(_vq);
+
+	if (!(vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT)) {
+		vq->avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT;
+		if (!vq->event)
+			vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);
+	}
+}
+
+static unsigned virtqueue_enable_cb_prepare_split(struct virtqueue *_vq)
+{
+	struct vring_virtqueue *vq = to_vvq(_vq);
+	u16 last_used_idx;
+
+	START_USE(vq);
+
+	/* We optimistically turn back on interrupts, then check if there was
+	 * more to do. */
+	/* Depending on the VIRTIO_RING_F_EVENT_IDX feature, we need to
+	 * either clear the flags bit or point the event index at the next
+	 * entry. Always do both to keep code simple. */
+	if (vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT) {
+		vq->avail_flags_shadow &= ~VRING_AVAIL_F_NO_INTERRUPT;
+		if (!vq->event)
+			vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);
+	}
+	vring_used_event(&vq->vring) = cpu_to_virtio16(_vq->vdev, last_used_idx = vq->last_used_idx);
+	END_USE(vq);
+	return last_used_idx;
+}
+
+static bool virtqueue_poll_split(struct virtqueue *_vq, unsigned last_used_idx)
+{
+	struct vring_virtqueue *vq = to_vvq(_vq);
+
+	virtio_mb(vq->weak_barriers);
+	return (u16)last_used_idx != virtio16_to_cpu(_vq->vdev, vq->vring.used->idx);
+}
+
+static bool virtqueue_enable_cb_delayed_split(struct virtqueue *_vq)
+{
+	struct vring_virtqueue *vq = to_vvq(_vq);
+	u16 bufs;
+
+	START_USE(vq);
+
+	/* We optimistically turn back on interrupts, then check if there was
+	 * more to do. */
+	/* Depending on the VIRTIO_RING_F_USED_EVENT_IDX feature, we need to
+	 * either clear the flags bit or point the event index at the next
+	 * entry. Always update the event index to keep code simple. */
+	if (vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT) {
+		vq->avail_flags_shadow &= ~VRING_AVAIL_F_NO_INTERRUPT;
+		if (!vq->event)
+			vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);
+	}
+	/* TODO: tune this threshold */
+	bufs = (u16)(vq->avail_idx_shadow - vq->last_used_idx) * 3 / 4;
+
+	virtio_store_mb(vq->weak_barriers,
+			&vring_used_event(&vq->vring),
+			cpu_to_virtio16(_vq->vdev, vq->last_used_idx + bufs));
+
+	if (unlikely((u16)(virtio16_to_cpu(_vq->vdev, vq->vring.used->idx) - vq->last_used_idx) > bufs)) {
+		END_USE(vq);
+		return false;
+	}
+
+	END_USE(vq);
+	return true;
+}
+
+static void *virtqueue_detach_unused_buf_split(struct virtqueue *_vq)
+{
+	struct vring_virtqueue *vq = to_vvq(_vq);
+	unsigned int i;
+	void *buf;
+
+	START_USE(vq);
+
+	for (i = 0; i < vq->vring.num; i++) {
+		if (!vq->desc_state[i].data)
+			continue;
+		/* detach_buf clears data, so grab it now. */
+		buf = vq->desc_state[i].data;
+		detach_buf_split(vq, i, NULL);
+		vq->avail_idx_shadow--;
+		vq->vring.avail->idx = cpu_to_virtio16(_vq->vdev, vq->avail_idx_shadow);
+		END_USE(vq);
+		return buf;
+	}
+	/* That should have freed everything. */
+	BUG_ON(vq->vq.num_free != vq->vring.num);
+
+	END_USE(vq);
+	return NULL;
+}
+
+/*
+ * The layout for the packed ring is a continuous chunk of memory
+ * which looks like this.
+ *
+ * struct vring_packed {
+ *	// The actual descriptors (16 bytes each)
+ *	struct vring_packed_desc desc[num];
+ *
+ *	// Padding to the next align boundary.
+ *	char pad[];
+ *
+ *	// Driver Event Suppression
+ *	struct vring_packed_desc_event driver;
+ *
+ *	// Device Event Suppression
+ *	struct vring_packed_desc_event device;
+ * };
+ */
+static inline void vring_init_packed(struct vring_packed *vr, unsigned int num,
+				     void *p, unsigned long align)
+{
+	vr->num = num;
+	vr->desc = p;
+	vr->driver = (void *)ALIGN(((uintptr_t)p +
+		sizeof(struct vring_packed_desc) * num), align);
+	vr->device = vr->driver + 1;
+}
+
+static inline unsigned vring_size_packed(unsigned int num, unsigned long align)
+{
+	return ((sizeof(struct vring_packed_desc) * num + align - 1)
+		& ~(align - 1)) + sizeof(struct vring_packed_desc_event) * 2;
+}
+
+static inline int virtqueue_add_packed(struct virtqueue *_vq,
+				       struct scatterlist *sgs[],
+				       unsigned int total_sg,
+				       unsigned int out_sgs,
+				       unsigned int in_sgs,
+				       void *data,
+				       void *ctx,
+				       gfp_t gfp)
+{
+	return -EIO;
+}
+
+static bool virtqueue_kick_prepare_packed(struct virtqueue *_vq)
+{
+	return false;
+}
+
+static inline bool more_used_packed(const struct vring_virtqueue *vq)
+{
+	return false;
+}
+
+static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq,
+					  unsigned int *len,
+					  void **ctx)
+{
+	return NULL;
+}
+
+static void virtqueue_disable_cb_packed(struct virtqueue *_vq)
+{
+}
+
+static unsigned virtqueue_enable_cb_prepare_packed(struct virtqueue *_vq)
+{
+	return 0;
+}
+
+static bool virtqueue_poll_packed(struct virtqueue *_vq, unsigned last_used_idx)
+{
+	return false;
+}
+
+static bool virtqueue_enable_cb_delayed_packed(struct virtqueue *_vq)
+{
+	return false;
+}
+
+static void *virtqueue_detach_unused_buf_packed(struct virtqueue *_vq)
+{
+	return NULL;
+}
+
+static inline int virtqueue_add(struct virtqueue *_vq,
+				struct scatterlist *sgs[],
+				unsigned int total_sg,
+				unsigned int out_sgs,
+				unsigned int in_sgs,
+				void *data,
+				void *ctx,
+				gfp_t gfp)
+{
+	struct vring_virtqueue *vq = to_vvq(_vq);
+
+	return vq->packed ? virtqueue_add_packed(_vq, sgs, total_sg, out_sgs,
+						 in_sgs, data, ctx, gfp) :
+			    virtqueue_add_split(_vq, sgs, total_sg, out_sgs,
+						in_sgs, data, ctx, gfp);
+}
+
 /**
  * virtqueue_add_sgs - expose buffers to other end
  * @vq: the struct virtqueue we're talking about.
@@ -550,34 +943,9 @@ EXPORT_SYMBOL_GPL(virtqueue_add_inbuf_ctx);
 bool virtqueue_kick_prepare(struct virtqueue *_vq)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
-	u16 new, old;
-	bool needs_kick;
 
-	START_USE(vq);
-	/* We need to expose available array entries before checking avail
-	 * event. */
-	virtio_mb(vq->weak_barriers);
-
-	old = vq->avail_idx_shadow - vq->num_added;
-	new = vq->avail_idx_shadow;
-	vq->num_added = 0;
-
-#ifdef DEBUG
-	if (vq->last_add_time_valid) {
-		WARN_ON(ktime_to_ms(ktime_sub(ktime_get(),
-					      vq->last_add_time)) > 100);
-	}
-	vq->last_add_time_valid = false;
-#endif
-
-	if (vq->event) {
-		needs_kick = vring_need_event(virtio16_to_cpu(_vq->vdev, vring_avail_event(&vq->vring)),
-					      new, old);
-	} else {
-		needs_kick = !(vq->vring.used->flags & cpu_to_virtio16(_vq->vdev, VRING_USED_F_NO_NOTIFY));
-	}
-	END_USE(vq);
-	return needs_kick;
+	return vq->packed ? virtqueue_kick_prepare_packed(_vq) :
+			    virtqueue_kick_prepare_split(_vq);
 }
 EXPORT_SYMBOL_GPL(virtqueue_kick_prepare);
 
@@ -625,58 +993,9 @@ bool virtqueue_kick(struct virtqueue *vq)
 }
 EXPORT_SYMBOL_GPL(virtqueue_kick);
 
-static void detach_buf(struct vring_virtqueue *vq, unsigned int head,
-		       void **ctx)
-{
-	unsigned int i, j;
-	__virtio16 nextflag = cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_NEXT);
-
-	/* Clear data ptr. */
-	vq->desc_state[head].data = NULL;
-
-	/* Put back on free list: unmap first-level descriptors and find end */
-	i = head;
-
-	while (vq->vring.desc[i].flags & nextflag) {
-		vring_unmap_one(vq, &vq->vring.desc[i]);
-		i = virtio16_to_cpu(vq->vq.vdev, vq->vring.desc[i].next);
-		vq->vq.num_free++;
-	}
-
-	vring_unmap_one(vq, &vq->vring.desc[i]);
-	vq->vring.desc[i].next = cpu_to_virtio16(vq->vq.vdev, vq->free_head);
-	vq->free_head = head;
-
-	/* Plus final descriptor */
-	vq->vq.num_free++;
-
-	if (vq->indirect) {
-		struct vring_desc *indir_desc = vq->desc_state[head].indir_desc;
-		u32 len;
-
-		/* Free the indirect table, if any, now that it's unmapped. */
-		if (!indir_desc)
-			return;
-
-		len = virtio32_to_cpu(vq->vq.vdev, vq->vring.desc[head].len);
-
-		BUG_ON(!(vq->vring.desc[head].flags &
-			 cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_INDIRECT)));
-		BUG_ON(len == 0 || len % sizeof(struct vring_desc));
-
-		for (j = 0; j < len / sizeof(struct vring_desc); j++)
-			vring_unmap_one(vq, &indir_desc[j]);
-
-		kfree(indir_desc);
-		vq->desc_state[head].indir_desc = NULL;
-	} else if (ctx) {
-		*ctx = vq->desc_state[head].indir_desc;
-	}
-}
-
 static inline bool more_used(const struct vring_virtqueue *vq)
 {
-	return vq->last_used_idx != virtio16_to_cpu(vq->vq.vdev, vq->vring.used->idx);
+	return vq->packed ? more_used_packed(vq) : more_used_split(vq);
 }
 
 /**
@@ -699,57 +1018,9 @@ void *virtqueue_get_buf_ctx(struct virtqueue *_vq, unsigned int *len,
 			    void **ctx)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
-	void *ret;
-	unsigned int i;
-	u16 last_used;
 
-	START_USE(vq);
-
-	if (unlikely(vq->broken)) {
-		END_USE(vq);
-		return NULL;
-	}
-
-	if (!more_used(vq)) {
-		pr_debug("No more buffers in queue\n");
-		END_USE(vq);
-		return NULL;
-	}
-
-	/* Only get used array entries after they have been exposed by host. */
-	virtio_rmb(vq->weak_barriers);
-
-	last_used = (vq->last_used_idx & (vq->vring.num - 1));
-	i = virtio32_to_cpu(_vq->vdev, vq->vring.used->ring[last_used].id);
-	*len = virtio32_to_cpu(_vq->vdev, vq->vring.used->ring[last_used].len);
-
-	if (unlikely(i >= vq->vring.num)) {
-		BAD_RING(vq, "id %u out of range\n", i);
-		return NULL;
-	}
-	if (unlikely(!vq->desc_state[i].data)) {
-		BAD_RING(vq, "id %u is not a head!\n", i);
-		return NULL;
-	}
-
-	/* detach_buf clears data, so grab it now. */
-	ret = vq->desc_state[i].data;
-	detach_buf(vq, i, ctx);
-	vq->last_used_idx++;
-	/* If we expect an interrupt for the next entry, tell host
-	 * by writing event index and flush out the write before
-	 * the read in the next get_buf call. */
-	if (!(vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT))
-		virtio_store_mb(vq->weak_barriers,
-				&vring_used_event(&vq->vring),
-				cpu_to_virtio16(_vq->vdev, vq->last_used_idx));
-
-#ifdef DEBUG
-	vq->last_add_time_valid = false;
-#endif
-
-	END_USE(vq);
-	return ret;
+	return vq->packed ? virtqueue_get_buf_ctx_packed(_vq, len, ctx) :
+			    virtqueue_get_buf_ctx_split(_vq, len, ctx);
 }
 EXPORT_SYMBOL_GPL(virtqueue_get_buf_ctx);
 
@@ -771,12 +1042,10 @@ void virtqueue_disable_cb(struct virtqueue *_vq)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
 
-	if (!(vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT)) {
-		vq->avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT;
-		if (!vq->event)
-			vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);
-	}
-
+	if (vq->packed)
+		virtqueue_disable_cb_packed(_vq);
+	else
+		virtqueue_disable_cb_split(_vq);
 }
 EXPORT_SYMBOL_GPL(virtqueue_disable_cb);
 
@@ -795,23 +1064,9 @@ EXPORT_SYMBOL_GPL(virtqueue_disable_cb);
 unsigned virtqueue_enable_cb_prepare(struct virtqueue *_vq)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
-	u16 last_used_idx;
 
-	START_USE(vq);
-
-	/* We optimistically turn back on interrupts, then check if there was
-	 * more to do. */
-	/* Depending on the VIRTIO_RING_F_EVENT_IDX feature, we need to
-	 * either clear the flags bit or point the event index at the next
-	 * entry. Always do both to keep code simple. */
-	if (vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT) {
-		vq->avail_flags_shadow &= ~VRING_AVAIL_F_NO_INTERRUPT;
-		if (!vq->event)
-			vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);
-	}
-	vring_used_event(&vq->vring) = cpu_to_virtio16(_vq->vdev, last_used_idx = vq->last_used_idx);
-	END_USE(vq);
-	return last_used_idx;
+	return vq->packed ? virtqueue_enable_cb_prepare_packed(_vq) :
+			    virtqueue_enable_cb_prepare_split(_vq);
 }
 EXPORT_SYMBOL_GPL(virtqueue_enable_cb_prepare);
 
@@ -828,8 +1083,8 @@ bool virtqueue_poll(struct virtqueue *_vq, unsigned last_used_idx)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
 
-	virtio_mb(vq->weak_barriers);
-	return (u16)last_used_idx != virtio16_to_cpu(_vq->vdev, vq->vring.used->idx);
+	return vq->packed ? virtqueue_poll_packed(_vq, last_used_idx) :
+			    virtqueue_poll_split(_vq, last_used_idx);
 }
 EXPORT_SYMBOL_GPL(virtqueue_poll);
 
@@ -867,34 +1122,9 @@ EXPORT_SYMBOL_GPL(virtqueue_enable_cb);
 bool virtqueue_enable_cb_delayed(struct virtqueue *_vq)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
-	u16 bufs;
 
-	START_USE(vq);
-
-	/* We optimistically turn back on interrupts, then check if there was
-	 * more to do. */
-	/* Depending on the VIRTIO_RING_F_USED_EVENT_IDX feature, we need to
-	 * either clear the flags bit or point the event index at the next
-	 * entry. Always update the event index to keep code simple. */
-	if (vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT) {
-		vq->avail_flags_shadow &= ~VRING_AVAIL_F_NO_INTERRUPT;
-		if (!vq->event)
-			vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);
-	}
-	/* TODO: tune this threshold */
-	bufs = (u16)(vq->avail_idx_shadow - vq->last_used_idx) * 3 / 4;
-
-	virtio_store_mb(vq->weak_barriers,
-			&vring_used_event(&vq->vring),
-			cpu_to_virtio16(_vq->vdev, vq->last_used_idx + bufs));
-
-	if (unlikely((u16)(virtio16_to_cpu(_vq->vdev, vq->vring.used->idx) - vq->last_used_idx) > bufs)) {
-		END_USE(vq);
-		return false;
-	}
-
-	END_USE(vq);
-	return true;
+	return vq->packed ? virtqueue_enable_cb_delayed_packed(_vq) :
+			    virtqueue_enable_cb_delayed_split(_vq);
 }
 EXPORT_SYMBOL_GPL(virtqueue_enable_cb_delayed);
 
@@ -909,27 +1139,9 @@ EXPORT_SYMBOL_GPL(virtqueue_enable_cb_delayed);
 void *virtqueue_detach_unused_buf(struct virtqueue *_vq)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
-	unsigned int i;
-	void *buf;
 
-	START_USE(vq);
-
-	for (i = 0; i < vq->vring.num; i++) {
-		if (!vq->desc_state[i].data)
-			continue;
-		/* detach_buf clears data, so grab it now. */
-		buf = vq->desc_state[i].data;
-		detach_buf(vq, i, NULL);
-		vq->avail_idx_shadow--;
-		vq->vring.avail->idx = cpu_to_virtio16(_vq->vdev, vq->avail_idx_shadow);
-		END_USE(vq);
-		return buf;
-	}
-	/* That should have freed everything. */
-	BUG_ON(vq->vq.num_free != vq->vring.num);
-
-	END_USE(vq);
-	return NULL;
+	return vq->packed ? virtqueue_detach_unused_buf_packed(_vq) :
+			    virtqueue_detach_unused_buf_split(_vq);
 }
 EXPORT_SYMBOL_GPL(virtqueue_detach_unused_buf);
 
@@ -954,7 +1166,8 @@ irqreturn_t vring_interrupt(int irq, void *_vq)
 EXPORT_SYMBOL_GPL(vring_interrupt);
 
 struct virtqueue *__vring_new_virtqueue(unsigned int index,
-					struct vring vring,
+					union vring_union vring,
+					bool packed,
 					struct virtio_device *vdev,
 					bool weak_barriers,
 					bool context,
@@ -962,19 +1175,22 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index,
 					void (*callback)(struct virtqueue *),
 					const char *name)
 {
-	unsigned int i;
 	struct vring_virtqueue *vq;
+	unsigned int num, i;
+	size_t size;
 
-	vq = kmalloc(sizeof(*vq) + vring.num * sizeof(struct vring_desc_state),
-		     GFP_KERNEL);
+	num = packed ? vring.vring_packed.num : vring.vring_split.num;
+	size = packed ? num * sizeof(struct vring_desc_state_packed) :
+			num * sizeof(struct vring_desc_state);
+
+	vq = kmalloc(sizeof(*vq) + size, GFP_KERNEL);
 	if (!vq)
 		return NULL;
 
-	vq->vring = vring;
 	vq->vq.callback = callback;
 	vq->vq.vdev = vdev;
 	vq->vq.name = name;
-	vq->vq.num_free = vring.num;
+	vq->vq.num_free = num;
 	vq->vq.index = index;
 	vq->we_own_ring = false;
 	vq->queue_dma_addr = 0;
@@ -983,9 +1199,8 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index,
 	vq->weak_barriers = weak_barriers;
 	vq->broken = false;
 	vq->last_used_idx = 0;
-	vq->avail_flags_shadow = 0;
-	vq->avail_idx_shadow = 0;
 	vq->num_added = 0;
+	vq->packed = packed;
 	list_add_tail(&vq->vq.list, &vdev->vqs);
 #ifdef DEBUG
 	vq->in_use = false;
@@ -996,19 +1211,48 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index,
 		!context;
 	vq->event = virtio_has_feature(vdev, VIRTIO_RING_F_EVENT_IDX);
 
+	if (vq->packed) {
+		vq->vring_packed = vring.vring_packed;
+		vq->next_avail_idx = 0;
+		vq->avail_wrap_counter = 1;
+		vq->used_wrap_counter = 1;
+		vq->event_flags_shadow = 0;
+
+		memset(vq->desc_state_packed, 0,
+			num * sizeof(struct vring_desc_state_packed));
+
+		/* Put everything in free lists. */
+		vq->free_head = 0;
+		for (i = 0; i < num-1; i++)
+			vq->desc_state_packed[i].next = i + 1;
+	} else {
+		vq->vring = vring.vring_split;
+		vq->avail_flags_shadow = 0;
+		vq->avail_idx_shadow = 0;
+
+		/* Put everything in free lists. */
+		vq->free_head = 0;
+		for (i = 0; i < num-1; i++)
+			vq->vring.desc[i].next = cpu_to_virtio16(vdev, i + 1);
+
+		memset(vq->desc_state, 0,
+			num * sizeof(struct vring_desc_state));
+	}
+
 	/* No callback?  Tell other side not to bother us. */
 	if (!callback) {
-		vq->avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT;
-		if (!vq->event)
-			vq->vring.avail->flags = cpu_to_virtio16(vdev, vq->avail_flags_shadow);
+		if (packed) {
+			vq->event_flags_shadow = VRING_EVENT_F_DISABLE;
+			vq->vring_packed.driver->flags = cpu_to_virtio16(vdev,
+						vq->event_flags_shadow);
+		} else {
+			vq->avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT;
+			if (!vq->event)
+				vq->vring.avail->flags = cpu_to_virtio16(vdev,
+						vq->avail_flags_shadow);
+		}
 	}
 
-	/* Put everything in free lists. */
-	vq->free_head = 0;
-	for (i = 0; i < vring.num-1; i++)
-		vq->vring.desc[i].next = cpu_to_virtio16(vdev, i + 1);
-	memset(vq->desc_state, 0, vring.num * sizeof(struct vring_desc_state));
-
 	return &vq->vq;
 }
 EXPORT_SYMBOL_GPL(__vring_new_virtqueue);
@@ -1055,6 +1299,12 @@ static void vring_free_queue(struct virtio_device *vdev, size_t size,
 	}
 }
 
+static inline int
+__vring_size(unsigned int num, unsigned long align, bool packed)
+{
+	return packed ? vring_size_packed(num, align) : vring_size(num, align);
+}
+
 struct virtqueue *vring_create_virtqueue(
 	unsigned int index,
 	unsigned int num,
@@ -1071,7 +1321,8 @@ struct virtqueue *vring_create_virtqueue(
 	void *queue = NULL;
 	dma_addr_t dma_addr;
 	size_t queue_size_in_bytes;
-	struct vring vring;
+	union vring_union vring;
+	bool packed;
 
 	/* We assume num is a power of 2. */
 	if (num & (num - 1)) {
@@ -1079,9 +1330,13 @@ struct virtqueue *vring_create_virtqueue(
 		return NULL;
 	}
 
+	packed = virtio_has_feature(vdev, VIRTIO_F_RING_PACKED);
+
 	/* TODO: allocate each queue chunk individually */
-	for (; num && vring_size(num, vring_align) > PAGE_SIZE; num /= 2) {
-		queue = vring_alloc_queue(vdev, vring_size(num, vring_align),
+	for (; num && __vring_size(num, vring_align, packed) > PAGE_SIZE;
+			num /= 2) {
+		queue = vring_alloc_queue(vdev, __vring_size(num, vring_align,
+							     packed),
 					  &dma_addr,
 					  GFP_KERNEL|__GFP_NOWARN|__GFP_ZERO);
 		if (queue)
@@ -1093,17 +1348,21 @@ struct virtqueue *vring_create_virtqueue(
 
 	if (!queue) {
 		/* Try to get a single page. You are my only hope! */
-		queue = vring_alloc_queue(vdev, vring_size(num, vring_align),
+		queue = vring_alloc_queue(vdev, __vring_size(num, vring_align,
+							     packed),
 					  &dma_addr, GFP_KERNEL|__GFP_ZERO);
 	}
 	if (!queue)
 		return NULL;
 
-	queue_size_in_bytes = vring_size(num, vring_align);
-	vring_init(&vring, num, queue, vring_align);
+	queue_size_in_bytes = __vring_size(num, vring_align, packed);
+	if (packed)
+		vring_init_packed(&vring.vring_packed, num, queue, vring_align);
+	else
+		vring_init(&vring.vring_split, num, queue, vring_align);
 
-	vq = __vring_new_virtqueue(index, vring, vdev, weak_barriers, context,
-				   notify, callback, name);
+	vq = __vring_new_virtqueue(index, vring, packed, vdev, weak_barriers,
+				   context, notify, callback, name);
 	if (!vq) {
 		vring_free_queue(vdev, queue_size_in_bytes, queue,
 				 dma_addr);
@@ -1129,10 +1388,17 @@ struct virtqueue *vring_new_virtqueue(unsigned int index,
 				      void (*callback)(struct virtqueue *vq),
 				      const char *name)
 {
-	struct vring vring;
-	vring_init(&vring, num, pages, vring_align);
-	return __vring_new_virtqueue(index, vring, vdev, weak_barriers, context,
-				     notify, callback, name);
+	union vring_union vring;
+	bool packed;
+
+	packed = virtio_has_feature(vdev, VIRTIO_F_RING_PACKED);
+	if (packed)
+		vring_init_packed(&vring.vring_packed, num, pages, vring_align);
+	else
+		vring_init(&vring.vring_split, num, pages, vring_align);
+
+	return __vring_new_virtqueue(index, vring, packed, vdev, weak_barriers,
+				     context, notify, callback, name);
 }
 EXPORT_SYMBOL_GPL(vring_new_virtqueue);
 
@@ -1142,7 +1408,9 @@ void vring_del_virtqueue(struct virtqueue *_vq)
 
 	if (vq->we_own_ring) {
 		vring_free_queue(vq->vq.vdev, vq->queue_size_in_bytes,
-				 vq->vring.desc, vq->queue_dma_addr);
+				 vq->packed ? (void *)vq->vring_packed.desc :
+					      (void *)vq->vring.desc,
+				 vq->queue_dma_addr);
 	}
 	list_del(&_vq->list);
 	kfree(vq);
@@ -1184,7 +1452,7 @@ unsigned int virtqueue_get_vring_size(struct virtqueue *_vq)
 
 	struct vring_virtqueue *vq = to_vvq(_vq);
 
-	return vq->vring.num;
+	return vq->packed ? vq->vring_packed.num : vq->vring.num;
 }
 EXPORT_SYMBOL_GPL(virtqueue_get_vring_size);
 
@@ -1227,6 +1495,10 @@ dma_addr_t virtqueue_get_avail_addr(struct virtqueue *_vq)
 
 	BUG_ON(!vq->we_own_ring);
 
+	if (vq->packed)
+		return vq->queue_dma_addr + ((char *)vq->vring_packed.driver -
+				(char *)vq->vring_packed.desc);
+
 	return vq->queue_dma_addr +
 		((char *)vq->vring.avail - (char *)vq->vring.desc);
 }
@@ -1238,11 +1510,16 @@ dma_addr_t virtqueue_get_used_addr(struct virtqueue *_vq)
 
 	BUG_ON(!vq->we_own_ring);
 
+	if (vq->packed)
+		return vq->queue_dma_addr + ((char *)vq->vring_packed.device -
+				(char *)vq->vring_packed.desc);
+
 	return vq->queue_dma_addr +
 		((char *)vq->vring.used - (char *)vq->vring.desc);
 }
 EXPORT_SYMBOL_GPL(virtqueue_get_used_addr);
 
+/* Only available for split ring */
 const struct vring *virtqueue_get_vring(struct virtqueue *vq)
 {
 	return &to_vvq(vq)->vring;
diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h
index fab02133a919..992142b35f55 100644
--- a/include/linux/virtio_ring.h
+++ b/include/linux/virtio_ring.h
@@ -60,6 +60,11 @@ static inline void virtio_store_mb(bool weak_barriers,
 struct virtio_device;
 struct virtqueue;
 
+union vring_union {
+	struct vring vring_split;
+	struct vring_packed vring_packed;
+};
+
 /*
  * Creates a virtqueue and allocates the descriptor ring.  If
  * may_reduce_num is set, then this may allocate a smaller ring than
@@ -79,7 +84,8 @@ struct virtqueue *vring_create_virtqueue(unsigned int index,
 
 /* Creates a virtqueue with a custom layout. */
 struct virtqueue *__vring_new_virtqueue(unsigned int index,
-					struct vring vring,
+					union vring_union vring,
+					bool packed,
 					struct virtio_device *vdev,
 					bool weak_barriers,
 					bool ctx,
-- 
2.18.0



* [PATCH net-next v2 3/5] virtio_ring: add packed ring support
  2018-07-11  2:27 [PATCH net-next v2 0/5] virtio: support packed ring Tiwei Bie
  2018-07-11  2:27 ` [PATCH net-next v2 1/5] virtio: add packed ring definitions Tiwei Bie
  2018-07-11  2:27 ` [PATCH net-next v2 2/5] virtio_ring: support creating packed ring Tiwei Bie
@ 2018-07-11  2:27 ` Tiwei Bie
  2018-09-07 13:49   ` Michael S. Tsirkin
                     ` (2 more replies)
  2018-07-11  2:27 ` [PATCH net-next v2 4/5] virtio_ring: add event idx support in packed ring Tiwei Bie
                   ` (4 subsequent siblings)
  7 siblings, 3 replies; 53+ messages in thread
From: Tiwei Bie @ 2018-07-11  2:27 UTC (permalink / raw)
  To: mst, jasowang, virtualization, linux-kernel, netdev, virtio-dev
  Cc: wexu, jfreimann, tiwei.bie

This commit introduces packed ring support (without EVENT_IDX).

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
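
For intuition, a stand-alone sketch (mine, not the kernel code) of
the flag handshake that the _VRING_DESC_F_AVAIL/_VRING_DESC_F_USED
encoding below implements: the driver makes a descriptor available
by writing the AVAIL bit equal to its avail wrap counter and the
USED bit to the opposite value; the device marks it used by writing
both bits equal to the driver's used wrap counter, which is exactly
what is_used_desc_packed() tests.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define AVAIL(b)        ((uint16_t)(b) << 7)
#define USED(b)         ((uint16_t)(b) << 15)

static uint16_t driver_make_avail(bool wrap)
{
        return AVAIL(wrap) | USED(!wrap);
}

static uint16_t device_mark_used(bool wrap)
{
        return AVAIL(wrap) | USED(wrap);
}

static bool is_used(uint16_t flags, bool used_wrap)
{
        bool avail = flags & AVAIL(1), used = flags & USED(1);

        return avail == used && used == used_wrap;
}

int main(void)
{
        bool avail_wrap = true, used_wrap = true;
        uint16_t f;

        f = driver_make_avail(avail_wrap);      /* 0x0080: not yet used */
        printf("avail: 0x%04x used? %d\n", f, is_used(f, used_wrap));
        f = device_mark_used(used_wrap);        /* 0x8080: used */
        printf("used:  0x%04x used? %d\n", f, is_used(f, used_wrap));
        return 0;
}
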
 drivers/virtio/virtio_ring.c | 495 ++++++++++++++++++++++++++++++++++-
 1 file changed, 487 insertions(+), 8 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index c4f8abc7445a..f317b485ba54 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -55,12 +55,21 @@
 #define END_USE(vq)
 #endif
 
+#define _VRING_DESC_F_AVAIL(b)	((__u16)(b) << 7)
+#define _VRING_DESC_F_USED(b)	((__u16)(b) << 15)
+
 struct vring_desc_state {
 	void *data;			/* Data for callback. */
 	struct vring_desc *indir_desc;	/* Indirect descriptor, if any. */
 };
 
 struct vring_desc_state_packed {
+	void *data;			/* Data for callback. */
+	struct vring_packed_desc *indir_desc; /* Indirect descriptor, if any. */
+	int num;			/* Descriptor list length. */
+	dma_addr_t addr;		/* Buffer DMA addr. */
+	u32 len;			/* Buffer length. */
+	u16 flags;			/* Descriptor flags. */
 	int next;			/* The next desc state. */
 };
 
@@ -660,7 +669,6 @@ static bool virtqueue_poll_split(struct virtqueue *_vq, unsigned last_used_idx)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
 
-	virtio_mb(vq->weak_barriers);
 	return (u16)last_used_idx != virtio16_to_cpu(_vq->vdev, vq->vring.used->idx);
 }
 
@@ -757,6 +765,72 @@ static inline unsigned vring_size_packed(unsigned int num, unsigned long align)
 		& ~(align - 1)) + sizeof(struct vring_packed_desc_event) * 2;
 }
 
+static void vring_unmap_state_packed(const struct vring_virtqueue *vq,
+				     struct vring_desc_state_packed *state)
+{
+	u16 flags;
+
+	if (!vring_use_dma_api(vq->vq.vdev))
+		return;
+
+	flags = state->flags;
+
+	if (flags & VRING_DESC_F_INDIRECT) {
+		dma_unmap_single(vring_dma_dev(vq),
+				 state->addr, state->len,
+				 (flags & VRING_DESC_F_WRITE) ?
+				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
+	} else {
+		dma_unmap_page(vring_dma_dev(vq),
+			       state->addr, state->len,
+			       (flags & VRING_DESC_F_WRITE) ?
+			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
+	}
+}
+
+static void vring_unmap_desc_packed(const struct vring_virtqueue *vq,
+				   struct vring_packed_desc *desc)
+{
+	u16 flags;
+
+	if (!vring_use_dma_api(vq->vq.vdev))
+		return;
+
+	flags = virtio16_to_cpu(vq->vq.vdev, desc->flags);
+
+	if (flags & VRING_DESC_F_INDIRECT) {
+		dma_unmap_single(vring_dma_dev(vq),
+				 virtio64_to_cpu(vq->vq.vdev, desc->addr),
+				 virtio32_to_cpu(vq->vq.vdev, desc->len),
+				 (flags & VRING_DESC_F_WRITE) ?
+				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
+	} else {
+		dma_unmap_page(vring_dma_dev(vq),
+			       virtio64_to_cpu(vq->vq.vdev, desc->addr),
+			       virtio32_to_cpu(vq->vq.vdev, desc->len),
+			       (flags & VRING_DESC_F_WRITE) ?
+			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
+	}
+}
+
+static struct vring_packed_desc *alloc_indirect_packed(struct virtqueue *_vq,
+						       unsigned int total_sg,
+						       gfp_t gfp)
+{
+	struct vring_packed_desc *desc;
+
+	/*
+	 * We require lowmem mappings for the descriptors because
+	 * otherwise virt_to_phys will give us bogus addresses in the
+	 * virtqueue.
+	 */
+	gfp &= ~__GFP_HIGHMEM;
+
+	desc = kmalloc(total_sg * sizeof(struct vring_packed_desc), gfp);
+
+	return desc;
+}
+
 static inline int virtqueue_add_packed(struct virtqueue *_vq,
 				       struct scatterlist *sgs[],
 				       unsigned int total_sg,
@@ -766,47 +840,449 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
 				       void *ctx,
 				       gfp_t gfp)
 {
+	struct vring_virtqueue *vq = to_vvq(_vq);
+	struct vring_packed_desc *desc;
+	struct scatterlist *sg;
+	unsigned int i, n, descs_used, uninitialized_var(prev), err_idx;
+	__virtio16 uninitialized_var(head_flags), flags;
+	u16 head, avail_wrap_counter, id, curr;
+	bool indirect;
+
+	START_USE(vq);
+
+	BUG_ON(data == NULL);
+	BUG_ON(ctx && vq->indirect);
+
+	if (unlikely(vq->broken)) {
+		END_USE(vq);
+		return -EIO;
+	}
+
+#ifdef DEBUG
+	{
+		ktime_t now = ktime_get();
+
+		/* No kick or get, with .1 second between?  Warn. */
+		if (vq->last_add_time_valid)
+			WARN_ON(ktime_to_ms(ktime_sub(now, vq->last_add_time))
+					    > 100);
+		vq->last_add_time = now;
+		vq->last_add_time_valid = true;
+	}
+#endif
+
+	BUG_ON(total_sg == 0);
+
+	head = vq->next_avail_idx;
+	avail_wrap_counter = vq->avail_wrap_counter;
+
+	if (virtqueue_use_indirect(_vq, total_sg))
+		desc = alloc_indirect_packed(_vq, total_sg, gfp);
+	else {
+		desc = NULL;
+		WARN_ON_ONCE(total_sg > vq->vring_packed.num && !vq->indirect);
+	}
+
+	if (desc) {
+		/* Use a single buffer which doesn't continue */
+		indirect = true;
+		/* Set up rest to use this indirect table. */
+		i = 0;
+		descs_used = 1;
+	} else {
+		indirect = false;
+		desc = vq->vring_packed.desc;
+		i = head;
+		descs_used = total_sg;
+	}
+
+	if (vq->vq.num_free < descs_used) {
+		pr_debug("Can't add buf len %i - avail = %i\n",
+			 descs_used, vq->vq.num_free);
+		/* FIXME: for historical reasons, we force a notify here if
+		 * there are outgoing parts to the buffer.  Presumably the
+		 * host should service the ring ASAP. */
+		if (out_sgs)
+			vq->notify(&vq->vq);
+		if (indirect)
+			kfree(desc);
+		END_USE(vq);
+		return -ENOSPC;
+	}
+
+	id = vq->free_head;
+	BUG_ON(id == vq->vring_packed.num);
+
+	curr = id;
+	for (n = 0; n < out_sgs + in_sgs; n++) {
+		for (sg = sgs[n]; sg; sg = sg_next(sg)) {
+			dma_addr_t addr = vring_map_one_sg(vq, sg, n < out_sgs ?
+					       DMA_TO_DEVICE : DMA_FROM_DEVICE);
+			if (vring_mapping_error(vq, addr))
+				goto unmap_release;
+
+			flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_NEXT |
+				  (n < out_sgs ? 0 : VRING_DESC_F_WRITE) |
+				  _VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
+				  _VRING_DESC_F_USED(!vq->avail_wrap_counter));
+			if (!indirect && i == head)
+				head_flags = flags;
+			else
+				desc[i].flags = flags;
+
+			desc[i].addr = cpu_to_virtio64(_vq->vdev, addr);
+			desc[i].len = cpu_to_virtio32(_vq->vdev, sg->length);
+			i++;
+			if (!indirect) {
+				if (vring_use_dma_api(_vq->vdev)) {
+					vq->desc_state_packed[curr].addr = addr;
+					vq->desc_state_packed[curr].len =
+						sg->length;
+					vq->desc_state_packed[curr].flags =
+						virtio16_to_cpu(_vq->vdev,
+								flags);
+				}
+				curr = vq->desc_state_packed[curr].next;
+
+				if (i >= vq->vring_packed.num) {
+					i = 0;
+					vq->avail_wrap_counter ^= 1;
+				}
+			}
+		}
+	}
+
+	prev = (i > 0 ? i : vq->vring_packed.num) - 1;
+	desc[prev].id = cpu_to_virtio16(_vq->vdev, id);
+
+	/* Last one doesn't continue. */
+	if (total_sg == 1)
+		head_flags &= cpu_to_virtio16(_vq->vdev, ~VRING_DESC_F_NEXT);
+	else
+		desc[prev].flags &= cpu_to_virtio16(_vq->vdev,
+						~VRING_DESC_F_NEXT);
+
+	if (indirect) {
+		/* Now that the indirect table is filled in, map it. */
+		dma_addr_t addr = vring_map_single(
+			vq, desc, total_sg * sizeof(struct vring_packed_desc),
+			DMA_TO_DEVICE);
+		if (vring_mapping_error(vq, addr))
+			goto unmap_release;
+
+		head_flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_INDIRECT |
+				      _VRING_DESC_F_AVAIL(avail_wrap_counter) |
+				      _VRING_DESC_F_USED(!avail_wrap_counter));
+		vq->vring_packed.desc[head].addr = cpu_to_virtio64(_vq->vdev,
+								   addr);
+		vq->vring_packed.desc[head].len = cpu_to_virtio32(_vq->vdev,
+				total_sg * sizeof(struct vring_packed_desc));
+		vq->vring_packed.desc[head].id = cpu_to_virtio16(_vq->vdev, id);
+
+		if (vring_use_dma_api(_vq->vdev)) {
+			vq->desc_state_packed[id].addr = addr;
+			vq->desc_state_packed[id].len = total_sg *
+					sizeof(struct vring_packed_desc);
+			vq->desc_state_packed[id].flags =
+					virtio16_to_cpu(_vq->vdev, head_flags);
+		}
+	}
+
+	/* We're using some buffers from the free list. */
+	vq->vq.num_free -= descs_used;
+
+	/* Update free pointer */
+	if (indirect) {
+		n = head + 1;
+		if (n >= vq->vring_packed.num) {
+			n = 0;
+			vq->avail_wrap_counter ^= 1;
+		}
+		vq->next_avail_idx = n;
+		vq->free_head = vq->desc_state_packed[id].next;
+	} else {
+		vq->next_avail_idx = i;
+		vq->free_head = curr;
+	}
+
+	/* Store token and indirect buffer state. */
+	vq->desc_state_packed[id].num = descs_used;
+	vq->desc_state_packed[id].data = data;
+	if (indirect)
+		vq->desc_state_packed[id].indir_desc = desc;
+	else
+		vq->desc_state_packed[id].indir_desc = ctx;
+
+	/* A driver MUST NOT make the first descriptor in the list
+	 * available before all subsequent descriptors comprising
+	 * the list are made available. */
+	virtio_wmb(vq->weak_barriers);
+	vq->vring_packed.desc[head].flags = head_flags;
+	vq->num_added += descs_used;
+
+	pr_debug("Added buffer head %i to %p\n", head, vq);
+	END_USE(vq);
+
+	return 0;
+
+unmap_release:
+	err_idx = i;
+	i = head;
+
+	for (n = 0; n < total_sg; n++) {
+		if (i == err_idx)
+			break;
+		vring_unmap_desc_packed(vq, &desc[i]);
+		i++;
+		if (!indirect && i >= vq->vring_packed.num)
+			i = 0;
+	}
+
+	vq->avail_wrap_counter = avail_wrap_counter;
+
+	if (indirect)
+		kfree(desc);
+
+	END_USE(vq);
 	return -EIO;
 }
 
 static bool virtqueue_kick_prepare_packed(struct virtqueue *_vq)
 {
-	return false;
+	struct vring_virtqueue *vq = to_vvq(_vq);
+	u16 flags;
+	bool needs_kick;
+	u32 snapshot;
+
+	START_USE(vq);
+	/* We need to expose the new flags value before checking notification
+	 * suppressions. */
+	virtio_mb(vq->weak_barriers);
+
+	snapshot = READ_ONCE(*(u32 *)vq->vring_packed.device);
+	flags = virtio16_to_cpu(_vq->vdev, (__virtio16)(snapshot >> 16)) & 0x3;
+
+#ifdef DEBUG
+	if (vq->last_add_time_valid) {
+		WARN_ON(ktime_to_ms(ktime_sub(ktime_get(),
+					      vq->last_add_time)) > 100);
+	}
+	vq->last_add_time_valid = false;
+#endif
+
+	needs_kick = (flags != VRING_EVENT_F_DISABLE);
+	END_USE(vq);
+	return needs_kick;
+}
+
+static void detach_buf_packed(struct vring_virtqueue *vq,
+			      unsigned int id, void **ctx)
+{
+	struct vring_desc_state_packed *state = NULL;
+	struct vring_packed_desc *desc;
+	unsigned int curr, i;
+
+	/* Clear data ptr. */
+	vq->desc_state_packed[id].data = NULL;
+
+	curr = id;
+	for (i = 0; i < vq->desc_state_packed[id].num; i++) {
+		state = &vq->desc_state_packed[curr];
+		vring_unmap_state_packed(vq, state);
+		curr = state->next;
+	}
+
+	BUG_ON(state == NULL);
+	vq->vq.num_free += vq->desc_state_packed[id].num;
+	state->next = vq->free_head;
+	vq->free_head = id;
+
+	if (vq->indirect) {
+		u32 len;
+
+		/* Free the indirect table, if any, now that it's unmapped. */
+		desc = vq->desc_state_packed[id].indir_desc;
+		if (!desc)
+			return;
+
+		if (vring_use_dma_api(vq->vq.vdev)) {
+			len = vq->desc_state_packed[id].len;
+			for (i = 0; i < len / sizeof(struct vring_packed_desc);
+					i++)
+				vring_unmap_desc_packed(vq, &desc[i]);
+		}
+		kfree(desc);
+		vq->desc_state_packed[id].indir_desc = NULL;
+	} else if (ctx) {
+		*ctx = vq->desc_state_packed[id].indir_desc;
+	}
+}
+
+static inline bool is_used_desc_packed(const struct vring_virtqueue *vq,
+				       u16 idx, bool used_wrap_counter)
+{
+	u16 flags;
+	bool avail, used;
+
+	flags = virtio16_to_cpu(vq->vq.vdev,
+				vq->vring_packed.desc[idx].flags);
+	avail = !!(flags & VRING_DESC_F_AVAIL);
+	used = !!(flags & VRING_DESC_F_USED);
+
+	return avail == used && used == used_wrap_counter;
 }
 
 static inline bool more_used_packed(const struct vring_virtqueue *vq)
 {
-	return false;
+	return is_used_desc_packed(vq, vq->last_used_idx,
+			vq->used_wrap_counter);
 }
 
 static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq,
 					  unsigned int *len,
 					  void **ctx)
 {
-	return NULL;
+	struct vring_virtqueue *vq = to_vvq(_vq);
+	u16 last_used, id;
+	void *ret;
+
+	START_USE(vq);
+
+	if (unlikely(vq->broken)) {
+		END_USE(vq);
+		return NULL;
+	}
+
+	if (!more_used_packed(vq)) {
+		pr_debug("No more buffers in queue\n");
+		END_USE(vq);
+		return NULL;
+	}
+
+	/* Only get used elements after they have been exposed by host. */
+	virtio_rmb(vq->weak_barriers);
+
+	last_used = vq->last_used_idx;
+	id = virtio16_to_cpu(_vq->vdev, vq->vring_packed.desc[last_used].id);
+	*len = virtio32_to_cpu(_vq->vdev, vq->vring_packed.desc[last_used].len);
+
+	if (unlikely(id >= vq->vring_packed.num)) {
+		BAD_RING(vq, "id %u out of range\n", id);
+		return NULL;
+	}
+	if (unlikely(!vq->desc_state_packed[id].data)) {
+		BAD_RING(vq, "id %u is not a head!\n", id);
+		return NULL;
+	}
+
+	vq->last_used_idx += vq->desc_state_packed[id].num;
+	if (vq->last_used_idx >= vq->vring_packed.num) {
+		vq->last_used_idx -= vq->vring_packed.num;
+		vq->used_wrap_counter ^= 1;
+	}
+
+	/* detach_buf_packed clears data, so grab it now. */
+	ret = vq->desc_state_packed[id].data;
+	detach_buf_packed(vq, id, ctx);
+
+#ifdef DEBUG
+	vq->last_add_time_valid = false;
+#endif
+
+	END_USE(vq);
+	return ret;
 }
 
 static void virtqueue_disable_cb_packed(struct virtqueue *_vq)
 {
+	struct vring_virtqueue *vq = to_vvq(_vq);
+
+	if (vq->event_flags_shadow != VRING_EVENT_F_DISABLE) {
+		vq->event_flags_shadow = VRING_EVENT_F_DISABLE;
+		vq->vring_packed.driver->flags = cpu_to_virtio16(_vq->vdev,
+							vq->event_flags_shadow);
+	}
 }
 
 static unsigned virtqueue_enable_cb_prepare_packed(struct virtqueue *_vq)
 {
-	return 0;
+	struct vring_virtqueue *vq = to_vvq(_vq);
+
+	START_USE(vq);
+
+	/* We optimistically turn back on interrupts, then check if there was
+	 * more to do. */
+
+	if (vq->event_flags_shadow == VRING_EVENT_F_DISABLE) {
+		vq->event_flags_shadow = VRING_EVENT_F_ENABLE;
+		vq->vring_packed.driver->flags = cpu_to_virtio16(_vq->vdev,
+							vq->event_flags_shadow);
+	}
+
+	END_USE(vq);
+	return vq->last_used_idx | ((u16)vq->used_wrap_counter << 15);
 }
 
-static bool virtqueue_poll_packed(struct virtqueue *_vq, unsigned last_used_idx)
+static bool virtqueue_poll_packed(struct virtqueue *_vq, unsigned off_wrap)
 {
-	return false;
+	struct vring_virtqueue *vq = to_vvq(_vq);
+	bool wrap_counter;
+	u16 used_idx;
+
+	wrap_counter = off_wrap >> 15;
+	used_idx = off_wrap & ~(1 << 15);
+
+	return is_used_desc_packed(vq, used_idx, wrap_counter);
 }
 
 static bool virtqueue_enable_cb_delayed_packed(struct virtqueue *_vq)
 {
-	return false;
+	struct vring_virtqueue *vq = to_vvq(_vq);
+
+	START_USE(vq);
+
+	/* We optimistically turn back on interrupts, then check if there was
+	 * more to do. */
+
+	if (vq->event_flags_shadow == VRING_EVENT_F_DISABLE) {
+		vq->event_flags_shadow = VRING_EVENT_F_ENABLE;
+		vq->vring_packed.driver->flags = cpu_to_virtio16(_vq->vdev,
+							vq->event_flags_shadow);
+		/* We need to enable interrupts first before re-checking
+		 * for more used buffers. */
+		virtio_mb(vq->weak_barriers);
+	}
+
+	if (more_used_packed(vq)) {
+		END_USE(vq);
+		return false;
+	}
+
+	END_USE(vq);
+	return true;
 }
 
 static void *virtqueue_detach_unused_buf_packed(struct virtqueue *_vq)
 {
+	struct vring_virtqueue *vq = to_vvq(_vq);
+	unsigned int i;
+	void *buf;
+
+	START_USE(vq);
+
+	for (i = 0; i < vq->vring_packed.num; i++) {
+		if (!vq->desc_state_packed[i].data)
+			continue;
+		/* detach_buf clears data, so grab it now. */
+		buf = vq->desc_state_packed[i].data;
+		detach_buf_packed(vq, i, NULL);
+		END_USE(vq);
+		return buf;
+	}
+	/* That should have freed everything. */
+	BUG_ON(vq->vq.num_free != vq->vring_packed.num);
+
+	END_USE(vq);
 	return NULL;
 }
 
@@ -1083,6 +1559,9 @@ bool virtqueue_poll(struct virtqueue *_vq, unsigned last_used_idx)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
 
+	/* We need to enable interrupts first before re-checking
+	 * for more used buffers. */
+	virtio_mb(vq->weak_barriers);
 	return vq->packed ? virtqueue_poll_packed(_vq, last_used_idx) :
 			    virtqueue_poll_split(_vq, last_used_idx);
 }
-- 
2.18.0



* [PATCH net-next v2 4/5] virtio_ring: add event idx support in packed ring
  2018-07-11  2:27 [PATCH net-next v2 0/5] virtio: support packed ring Tiwei Bie
                   ` (2 preceding siblings ...)
  2018-07-11  2:27 ` [PATCH net-next v2 3/5] virtio_ring: add packed ring support Tiwei Bie
@ 2018-07-11  2:27 ` Tiwei Bie
  2018-09-07 14:10   ` Michael S. Tsirkin
  2018-07-11  2:27 ` [PATCH net-next v2 5/5] virtio_ring: enable " Tiwei Bie
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 53+ messages in thread
From: Tiwei Bie @ 2018-07-11  2:27 UTC (permalink / raw)
  To: mst, jasowang, virtualization, linux-kernel, netdev, virtio-dev
  Cc: wexu, jfreimann, tiwei.bie

This commit introduces the EVENT_IDX support in packed ring.

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
 drivers/virtio/virtio_ring.c | 73 ++++++++++++++++++++++++++++++++----
 1 file changed, 65 insertions(+), 8 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index f317b485ba54..f79a1e17f7d1 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -1050,7 +1050,7 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
 static bool virtqueue_kick_prepare_packed(struct virtqueue *_vq)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
-	u16 flags;
+	u16 new, old, off_wrap, flags, wrap_counter, event_idx;
 	bool needs_kick;
 	u32 snapshot;
 
@@ -1059,9 +1059,19 @@ static bool virtqueue_kick_prepare_packed(struct virtqueue *_vq)
 	 * suppressions. */
 	virtio_mb(vq->weak_barriers);
 
+	old = vq->next_avail_idx - vq->num_added;
+	new = vq->next_avail_idx;
+	vq->num_added = 0;
+
 	snapshot = READ_ONCE(*(u32 *)vq->vring_packed.device);
+	off_wrap = virtio16_to_cpu(_vq->vdev, (__virtio16)(snapshot & 0xffff));
 	flags = virtio16_to_cpu(_vq->vdev, (__virtio16)(snapshot >> 16)) & 0x3;
 
+	wrap_counter = off_wrap >> 15;
+	event_idx = off_wrap & ~(1 << 15);
+	if (wrap_counter != vq->avail_wrap_counter)
+		event_idx -= vq->vring_packed.num;
+
 #ifdef DEBUG
 	if (vq->last_add_time_valid) {
 		WARN_ON(ktime_to_ms(ktime_sub(ktime_get(),
@@ -1070,7 +1080,10 @@ static bool virtqueue_kick_prepare_packed(struct virtqueue *_vq)
 	vq->last_add_time_valid = false;
 #endif
 
-	needs_kick = (flags != VRING_EVENT_F_DISABLE);
+	if (flags == VRING_EVENT_F_DESC)
+		needs_kick = vring_need_event(event_idx, new, old);
+	else
+		needs_kick = (flags != VRING_EVENT_F_DISABLE);
 	END_USE(vq);
 	return needs_kick;
 }
@@ -1185,6 +1198,15 @@ static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq,
 	ret = vq->desc_state_packed[id].data;
 	detach_buf_packed(vq, id, ctx);
 
+	/* If we expect an interrupt for the next entry, tell host
+	 * by writing event index and flush out the write before
+	 * the read in the next get_buf call. */
+	if (vq->event_flags_shadow == VRING_EVENT_F_DESC)
+		virtio_store_mb(vq->weak_barriers,
+				&vq->vring_packed.driver->off_wrap,
+				cpu_to_virtio16(_vq->vdev, vq->last_used_idx |
+					((u16)vq->used_wrap_counter << 15)));
+
 #ifdef DEBUG
 	vq->last_add_time_valid = false;
 #endif
@@ -1213,8 +1235,18 @@ static unsigned virtqueue_enable_cb_prepare_packed(struct virtqueue *_vq)
 	/* We optimistically turn back on interrupts, then check if there was
 	 * more to do. */
 
+	if (vq->event) {
+		vq->vring_packed.driver->off_wrap = cpu_to_virtio16(_vq->vdev,
+				vq->last_used_idx |
+				((u16)vq->used_wrap_counter << 15));
+		/* We need to update event offset and event wrap
+		 * counter first before updating event flags. */
+		virtio_wmb(vq->weak_barriers);
+	}
+
 	if (vq->event_flags_shadow == VRING_EVENT_F_DISABLE) {
-		vq->event_flags_shadow = VRING_EVENT_F_ENABLE;
+		vq->event_flags_shadow = vq->event ? VRING_EVENT_F_DESC :
+						     VRING_EVENT_F_ENABLE;
 		vq->vring_packed.driver->flags = cpu_to_virtio16(_vq->vdev,
 							vq->event_flags_shadow);
 	}
@@ -1238,22 +1270,47 @@ static bool virtqueue_poll_packed(struct virtqueue *_vq, unsigned off_wrap)
 static bool virtqueue_enable_cb_delayed_packed(struct virtqueue *_vq)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
+	u16 bufs, used_idx, wrap_counter;
 
 	START_USE(vq);
 
 	/* We optimistically turn back on interrupts, then check if there was
 	 * more to do. */
 
+	if (vq->event) {
+		/* TODO: tune this threshold */
+		bufs = (vq->vring_packed.num - _vq->num_free) * 3 / 4;
+		wrap_counter = vq->used_wrap_counter;
+
+		used_idx = vq->last_used_idx + bufs;
+		if (used_idx >= vq->vring_packed.num) {
+			used_idx -= vq->vring_packed.num;
+			wrap_counter ^= 1;
+		}
+
+		vq->vring_packed.driver->off_wrap = cpu_to_virtio16(_vq->vdev,
+				used_idx | (wrap_counter << 15));
+
+		/* We need to update event offset and event wrap
+		 * counter first before updating event flags. */
+		virtio_wmb(vq->weak_barriers);
+	} else {
+		used_idx = vq->last_used_idx;
+		wrap_counter = vq->used_wrap_counter;
+	}
+
 	if (vq->event_flags_shadow == VRING_EVENT_F_DISABLE) {
-		vq->event_flags_shadow = VRING_EVENT_F_ENABLE;
+		vq->event_flags_shadow = vq->event ? VRING_EVENT_F_DESC :
+						     VRING_EVENT_F_ENABLE;
 		vq->vring_packed.driver->flags = cpu_to_virtio16(_vq->vdev,
 							vq->event_flags_shadow);
-		/* We need to enable interrupts first before re-checking
-		 * for more used buffers. */
-		virtio_mb(vq->weak_barriers);
 	}
 
-	if (more_used_packed(vq)) {
+	/* We need to update event suppression structure first
+	 * before re-checking for more used buffers. */
+	virtio_mb(vq->weak_barriers);
+
+	if (is_used_desc_packed(vq, used_idx, wrap_counter)) {
 		END_USE(vq);
 		return false;
 	}
-- 
2.18.0


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* [PATCH net-next v2 5/5] virtio_ring: enable packed ring
  2018-07-11  2:27 [PATCH net-next v2 0/5] virtio: support packed ring Tiwei Bie
                   ` (3 preceding siblings ...)
  2018-07-11  2:27 ` [PATCH net-next v2 4/5] virtio_ring: add event idx support in packed ring Tiwei Bie
@ 2018-07-11  2:27 ` Tiwei Bie
  2018-07-11  2:52 ` [PATCH net-next v2 0/5] virtio: support " Jason Wang
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 53+ messages in thread
From: Tiwei Bie @ 2018-07-11  2:27 UTC (permalink / raw)
  To: mst, jasowang, virtualization, linux-kernel, netdev, virtio-dev
  Cc: wexu, jfreimann, tiwei.bie

Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
---
 drivers/s390/virtio/virtio_ccw.c | 14 ++++++++++++++
 drivers/virtio/virtio_ring.c     |  2 ++
 2 files changed, 16 insertions(+)

diff --git a/drivers/s390/virtio/virtio_ccw.c b/drivers/s390/virtio/virtio_ccw.c
index 8f5c1d7f751a..8654f3a94635 100644
--- a/drivers/s390/virtio/virtio_ccw.c
+++ b/drivers/s390/virtio/virtio_ccw.c
@@ -765,6 +765,17 @@ static u64 virtio_ccw_get_features(struct virtio_device *vdev)
 	return rc;
 }
 
+static void ccw_transport_features(struct virtio_device *vdev)
+{
+	/*
+	 * Packed ring isn't enabled on virtio_ccw for now,
+	 * because virtio_ccw uses some legacy accessors,
+	 * e.g. virtqueue_get_avail() and virtqueue_get_used(),
+	 * which aren't available in packed ring currently.
+	 */
+	__virtio_clear_bit(vdev, VIRTIO_F_RING_PACKED);
+}
+
 static int virtio_ccw_finalize_features(struct virtio_device *vdev)
 {
 	struct virtio_ccw_device *vcdev = to_vc_device(vdev);
@@ -791,6 +802,9 @@ static int virtio_ccw_finalize_features(struct virtio_device *vdev)
 	/* Give virtio_ring a chance to accept features. */
 	vring_transport_features(vdev);
 
+	/* Give virtio_ccw a chance to accept features. */
+	ccw_transport_features(vdev);
+
 	features->index = 0;
 	features->features = cpu_to_le32((u32)vdev->features);
 	/* Write the first half of the feature bits to the host. */
diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index f79a1e17f7d1..807ed4b362c5 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -1968,6 +1968,8 @@ void vring_transport_features(struct virtio_device *vdev)
 			break;
 		case VIRTIO_F_IOMMU_PLATFORM:
 			break;
+		case VIRTIO_F_RING_PACKED:
+			break;
 		default:
 			/* We don't understand this bit. */
 			__virtio_clear_bit(vdev, i);
-- 
2.18.0


^ permalink raw reply related	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 0/5] virtio: support packed ring
  2018-07-11  2:27 [PATCH net-next v2 0/5] virtio: support packed ring Tiwei Bie
                   ` (4 preceding siblings ...)
  2018-07-11  2:27 ` [PATCH net-next v2 5/5] virtio_ring: enable " Tiwei Bie
@ 2018-07-11  2:52 ` Jason Wang
  2018-07-12 21:44 ` David Miller
  2018-08-27 14:00 ` Michael S. Tsirkin
  7 siblings, 0 replies; 53+ messages in thread
From: Jason Wang @ 2018-07-11  2:52 UTC (permalink / raw)
  To: Tiwei Bie, mst, virtualization, linux-kernel, netdev, virtio-dev
  Cc: wexu, jfreimann



On 2018-07-11 10:27, Tiwei Bie wrote:
> Hello everyone,
>
> This patch set implements packed ring support in virtio driver.
>
> Some functional tests have been done with Jason's
> packed ring implementation in vhost:
>
> https://lkml.org/lkml/2018/7/3/33
>
> Both of ping and netperf worked as expected.
>
> [...]

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks!


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 0/5] virtio: support packed ring
  2018-07-11  2:27 [PATCH net-next v2 0/5] virtio: support packed ring Tiwei Bie
                   ` (5 preceding siblings ...)
  2018-07-11  2:52 ` [PATCH net-next v2 0/5] virtio: support " Jason Wang
@ 2018-07-12 21:44 ` David Miller
  2018-07-13  0:52   ` Jason Wang
  2018-07-13  3:26   ` Michael S. Tsirkin
  2018-08-27 14:00 ` Michael S. Tsirkin
  7 siblings, 2 replies; 53+ messages in thread
From: David Miller @ 2018-07-12 21:44 UTC (permalink / raw)
  To: tiwei.bie
  Cc: mst, jasowang, virtualization, linux-kernel, netdev, virtio-dev,
	wexu, jfreimann

From: Tiwei Bie <tiwei.bie@intel.com>
Date: Wed, 11 Jul 2018 10:27:06 +0800

> Hello everyone,
> 
> This patch set implements packed ring support in virtio driver.
> 
> Some functional tests have been done with Jason's
> packed ring implementation in vhost:
> 
> https://lkml.org/lkml/2018/7/3/33
> 
> Both of ping and netperf worked as expected.

Michael and Jason, where are we with this series?

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 0/5] virtio: support packed ring
  2018-07-12 21:44 ` David Miller
@ 2018-07-13  0:52   ` Jason Wang
  2018-07-13  3:26   ` Michael S. Tsirkin
  1 sibling, 0 replies; 53+ messages in thread
From: Jason Wang @ 2018-07-13  0:52 UTC (permalink / raw)
  To: David Miller, tiwei.bie
  Cc: mst, virtualization, linux-kernel, netdev, virtio-dev, wexu, jfreimann



On 2018-07-13 05:44, David Miller wrote:
> From: Tiwei Bie <tiwei.bie@intel.com>
> Date: Wed, 11 Jul 2018 10:27:06 +0800
>
>> Hello everyone,
>>
>> This patch set implements packed ring support in virtio driver.
>>
>> Some functional tests have been done with Jason's
>> packed ring implementation in vhost:
>>
>> https://lkml.org/lkml/2018/7/3/33
>>
>> Both of ping and netperf worked as expected.
> Michael and Jason, where are we with this series?

For the series:

Acked-by: Jason Wang <jasowang@redhat.com>


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 0/5] virtio: support packed ring
  2018-07-12 21:44 ` David Miller
  2018-07-13  0:52   ` Jason Wang
@ 2018-07-13  3:26   ` Michael S. Tsirkin
  1 sibling, 0 replies; 53+ messages in thread
From: Michael S. Tsirkin @ 2018-07-13  3:26 UTC (permalink / raw)
  To: David Miller
  Cc: tiwei.bie, jasowang, virtualization, linux-kernel, netdev,
	virtio-dev, wexu, jfreimann

On Thu, Jul 12, 2018 at 02:44:58PM -0700, David Miller wrote:
> From: Tiwei Bie <tiwei.bie@intel.com>
> Date: Wed, 11 Jul 2018 10:27:06 +0800
> 
> > Hello everyone,
> > 
> > This patch set implements packed ring support in virtio driver.
> > 
> > Some functional tests have been done with Jason's
> > packed ring implementation in vhost:
> > 
> > https://lkml.org/lkml/2018/7/3/33
> > 
> > Both of ping and netperf worked as expected.
> 
> Michael and Jason, where are we with this series?

I'm at netdev, won't be able to review before Monday.

-- 
MST

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 0/5] virtio: support packed ring
  2018-07-11  2:27 [PATCH net-next v2 0/5] virtio: support packed ring Tiwei Bie
                   ` (6 preceding siblings ...)
  2018-07-12 21:44 ` David Miller
@ 2018-08-27 14:00 ` Michael S. Tsirkin
  2018-08-28  5:51   ` [virtio-dev] " Jens Freimann
  2018-09-07  1:22   ` Tiwei Bie
  7 siblings, 2 replies; 53+ messages in thread
From: Michael S. Tsirkin @ 2018-08-27 14:00 UTC (permalink / raw)
  To: Tiwei Bie
  Cc: jasowang, virtualization, linux-kernel, netdev, virtio-dev, wexu,
	jfreimann

Are there still plans to test the performance with the vhost PMD?
vhost doesn't seem to show a performance gain ...


On Wed, Jul 11, 2018 at 10:27:06AM +0800, Tiwei Bie wrote:
> Hello everyone,
> 
> This patch set implements packed ring support in virtio driver.
> 
> Some functional tests have been done with Jason's
> packed ring implementation in vhost:
> 
> https://lkml.org/lkml/2018/7/3/33
> 
> Both of ping and netperf worked as expected.
> 
> [...]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [virtio-dev] Re: [PATCH net-next v2 0/5] virtio: support packed ring
  2018-08-27 14:00 ` Michael S. Tsirkin
@ 2018-08-28  5:51   ` Jens Freimann
  2018-09-07  1:22   ` Tiwei Bie
  1 sibling, 0 replies; 53+ messages in thread
From: Jens Freimann @ 2018-08-28  5:51 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Tiwei Bie, jasowang, virtualization, linux-kernel, netdev,
	virtio-dev, wexu

On Mon, Aug 27, 2018 at 05:00:40PM +0300, Michael S. Tsirkin wrote:
>Are there still plans to test the performance with the vhost PMD?
>vhost doesn't seem to show a performance gain ...

Yes, I'm having trouble getting it to work with the virtio PMD (it works
with Tiwei's guest driver though), but I'm getting closer. It should only
be 1-2 more days. 

regards,
Jens 

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [virtio-dev] Re: [PATCH net-next v2 0/5] virtio: support packed ring
  2018-08-27 14:00 ` Michael S. Tsirkin
  2018-08-28  5:51   ` [virtio-dev] " Jens Freimann
@ 2018-09-07  1:22   ` Tiwei Bie
  2018-09-07 13:00     ` Michael S. Tsirkin
  1 sibling, 1 reply; 53+ messages in thread
From: Tiwei Bie @ 2018-09-07  1:22 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: jasowang, virtualization, linux-kernel, netdev, virtio-dev, wexu,
	jfreimann

On Mon, Aug 27, 2018 at 05:00:40PM +0300, Michael S. Tsirkin wrote:
> Are there still plans to test the performance with the vhost PMD?
> vhost doesn't seem to show a performance gain ...
> 

I tried some performance tests with the vhost PMD. In the guest,
the XDP program returns XDP_DROP directly. And in the host,
testpmd does txonly forwarding.

When the burst size is 1 and the packet size is 64 in testpmd,
and testpmd needs to iterate over 5 Tx queues (but only the first
two queues are enabled) to prepare and inject packets, I got a
~12% performance boost (5.7 Mpps -> 6.4 Mpps). And if the vhost
PMD is faster (e.g. it only needs to iterate over the first two
queues to prepare and inject packets), then I got similar
performance for both rings (~9.9 Mpps); the packed ring's
performance can be lower, because it's more complicated in the
driver.
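
For reference, the testpmd side was set up roughly like this
(reconstructed from memory, not the exact session):

	testpmd> set fwd txonly
	testpmd> set burst 1
	testpmd> set txpkts 64
	testpmd> start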

I think the packed ring makes the vhost PMD faster, but it doesn't
make the driver faster. In the packed ring, the ring itself is
simplified, and the handling of the ring in vhost (the device) is
also simplified, but things are not simplified in the driver: e.g.
although there is no desc table in the virtqueue anymore, the
driver still needs to maintain a private desc state table (which
is still managed as a list in this patch set) to support the
out-of-order desc processing in vhost (the device).

I think this patch set is mainly about giving the driver fully
functional support for the packed ring, which makes it possible
to leverage the packed ring feature in vhost (the device). But
I'm not sure whether there is a better idea; I'd like to hear
your thoughts. Thanks!


> 
> On Wed, Jul 11, 2018 at 10:27:06AM +0800, Tiwei Bie wrote:
> > Hello everyone,
> > 
> > This patch set implements packed ring support in virtio driver.
> > 
> > Some functional tests have been done with Jason's
> > packed ring implementation in vhost:
> > 
> > https://lkml.org/lkml/2018/7/3/33
> > 
> > Both of ping and netperf worked as expected.
> > 
> > [...]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [virtio-dev] Re: [PATCH net-next v2 0/5] virtio: support packed ring
  2018-09-07  1:22   ` Tiwei Bie
@ 2018-09-07 13:00     ` Michael S. Tsirkin
  2018-09-10  3:00       ` Tiwei Bie
  0 siblings, 1 reply; 53+ messages in thread
From: Michael S. Tsirkin @ 2018-09-07 13:00 UTC (permalink / raw)
  To: Tiwei Bie
  Cc: jasowang, virtualization, linux-kernel, netdev, virtio-dev, wexu,
	jfreimann

On Fri, Sep 07, 2018 at 09:22:25AM +0800, Tiwei Bie wrote:
> On Mon, Aug 27, 2018 at 05:00:40PM +0300, Michael S. Tsirkin wrote:
> > Are there still plans to test the performance with the vhost PMD?
> > vhost doesn't seem to show a performance gain ...
> > 
> 
> I tried some performance tests with the vhost PMD. In the guest,
> the XDP program returns XDP_DROP directly. And in the host,
> testpmd does txonly forwarding.
> 
> When the burst size is 1 and the packet size is 64 in testpmd,
> and testpmd needs to iterate over 5 Tx queues (but only the first
> two queues are enabled) to prepare and inject packets, I got a
> ~12% performance boost (5.7 Mpps -> 6.4 Mpps). And if the vhost
> PMD is faster (e.g. it only needs to iterate over the first two
> queues to prepare and inject packets), then I got similar
> performance for both rings (~9.9 Mpps); the packed ring's
> performance can be lower, because it's more complicated in the
> driver.
> 
> I think the packed ring makes the vhost PMD faster, but it doesn't
> make the driver faster. In the packed ring, the ring itself is
> simplified, and the handling of the ring in vhost (the device) is
> also simplified, but things are not simplified in the driver: e.g.
> although there is no desc table in the virtqueue anymore, the
> driver still needs to maintain a private desc state table (which
> is still managed as a list in this patch set) to support the
> out-of-order desc processing in vhost (the device).
> 
> I think this patch set is mainly about giving the driver fully
> functional support for the packed ring, which makes it possible
> to leverage the packed ring feature in vhost (the device). But
> I'm not sure whether there is a better idea; I'd like to hear
> your thoughts. Thanks!

Just this: Jens seems to report a nice gain with the virtio and
vhost PMDs across the board. Could you compare virtio and the
virtio PMD to see what the PMD does better?


> 
> > 
> > On Wed, Jul 11, 2018 at 10:27:06AM +0800, Tiwei Bie wrote:
> > > Hello everyone,
> > > 
> > > This patch set implements packed ring support in virtio driver.
> > > 
> > > Some functional tests have been done with Jason's
> > > packed ring implementation in vhost:
> > > 
> > > https://lkml.org/lkml/2018/7/3/33
> > > 
> > > Both of ping and netperf worked as expected.
> > > 
> > > [...]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 3/5] virtio_ring: add packed ring support
  2018-07-11  2:27 ` [PATCH net-next v2 3/5] virtio_ring: add packed ring support Tiwei Bie
@ 2018-09-07 13:49   ` Michael S. Tsirkin
  2018-09-10  2:03     ` Tiwei Bie
  2018-09-12 16:22   ` Michael S. Tsirkin
  2018-11-07 17:48   ` Michael S. Tsirkin
  2 siblings, 1 reply; 53+ messages in thread
From: Michael S. Tsirkin @ 2018-09-07 13:49 UTC (permalink / raw)
  To: Tiwei Bie
  Cc: jasowang, virtualization, linux-kernel, netdev, virtio-dev, wexu,
	jfreimann

On Wed, Jul 11, 2018 at 10:27:09AM +0800, Tiwei Bie wrote:
> This commit introduces the support (without EVENT_IDX) for
> packed ring.
> 
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> ---
>  drivers/virtio/virtio_ring.c | 495 ++++++++++++++++++++++++++++++++++-
>  1 file changed, 487 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index c4f8abc7445a..f317b485ba54 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -55,12 +55,21 @@
>  #define END_USE(vq)
>  #endif
>  
> +#define _VRING_DESC_F_AVAIL(b)	((__u16)(b) << 7)
> +#define _VRING_DESC_F_USED(b)	((__u16)(b) << 15)

An underscore followed by an upper-case letter isn't a good idea
for a symbol. And it's not nice that this does not
use VRING_DESC_F_USED from the header.
How about ((b) ? VRING_DESC_F_USED : 0) instead?
Is the produced code any worse then?
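
Something like this (untested sketch, names made up):

	#define VRING_PACKED_DESC_F_AVAIL(b) ((b) ? VRING_DESC_F_AVAIL : 0)
	#define VRING_PACKED_DESC_F_USED(b)  ((b) ? VRING_DESC_F_USED : 0)

For b in {0, 1} the compiler should still be able to turn the
conditional into a shift.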

> +
>  struct vring_desc_state {
>  	void *data;			/* Data for callback. */
>  	struct vring_desc *indir_desc;	/* Indirect descriptor, if any. */
>  };
>  
>  struct vring_desc_state_packed {
> +	void *data;			/* Data for callback. */
> +	struct vring_packed_desc *indir_desc; /* Indirect descriptor, if any. */

Include vring_desc_state for these?
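
I.e. something like (untested sketch):

	struct vring_desc_state_packed {
		struct vring_desc_state ds;	/* data + indir_desc */
		int num;			/* Descriptor list length. */
		dma_addr_t addr;		/* Buffer DMA addr. */
		u32 len;			/* Buffer length. */
		u16 flags;			/* Descriptor flags. */
		int next;			/* The next desc state. */
	};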

> +	int num;			/* Descriptor list length. */
> +	dma_addr_t addr;		/* Buffer DMA addr. */
> +	u32 len;			/* Buffer length. */
> +	u16 flags;			/* Descriptor flags. */

This seems only to be used for indirect. Check indirect field above
instead and drop this.

>  	int next;			/* The next desc state. */

Packing things into 16 bit integers will help reduce
cache pressure here.

Also, this is only used for unmap, so when the DMA API is not used,
maintaining the addr and len lists is unnecessary. How about we
maintain addr/len in a separate array, avoiding those accesses on unmap
in that case?
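
E.g. (untested sketch, struct name made up):

	struct vring_desc_extra_packed {
		dma_addr_t addr;	/* Buffer DMA addr. */
		u32 len;		/* Buffer length. */
		u16 flags;		/* Descriptor flags. */
	};

allocated only when vring_use_dma_api() returns true, so the
non-DMA path never touches it on unmap.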

Also, lots of architectures have a nop unmap in the DMA API.
See a proposed patch at the end for optimizing that case.

>  };
>  
> @@ -660,7 +669,6 @@ static bool virtqueue_poll_split(struct virtqueue *_vq, unsigned last_used_idx)
>  {
>  	struct vring_virtqueue *vq = to_vvq(_vq);
>  
> -	virtio_mb(vq->weak_barriers);
>  	return (u16)last_used_idx != virtio16_to_cpu(_vq->vdev, vq->vring.used->idx);
>  }

why is this changing the split queue implementation?


>  
> @@ -757,6 +765,72 @@ static inline unsigned vring_size_packed(unsigned int num, unsigned long align)
>  		& ~(align - 1)) + sizeof(struct vring_packed_desc_event) * 2;
>  }
>  
> +static void vring_unmap_state_packed(const struct vring_virtqueue *vq,
> +				     struct vring_desc_state_packed *state)
> +{
> +	u16 flags;
> +
> +	if (!vring_use_dma_api(vq->vq.vdev))
> +		return;
> +
> +	flags = state->flags;
> +
> +	if (flags & VRING_DESC_F_INDIRECT) {
> +		dma_unmap_single(vring_dma_dev(vq),
> +				 state->addr, state->len,
> +				 (flags & VRING_DESC_F_WRITE) ?
> +				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
> +	} else {
> +		dma_unmap_page(vring_dma_dev(vq),
> +			       state->addr, state->len,
> +			       (flags & VRING_DESC_F_WRITE) ?
> +			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
> +	}
> +}
> +
> +static void vring_unmap_desc_packed(const struct vring_virtqueue *vq,
> +				   struct vring_packed_desc *desc)
> +{
> +	u16 flags;
> +
> +	if (!vring_use_dma_api(vq->vq.vdev))
> +		return;
> +
> +	flags = virtio16_to_cpu(vq->vq.vdev, desc->flags);

I see no reason to use virtioXX wrappers for the packed ring.
That's there to support legacy devices. Just use leXX.

> +
> +	if (flags & VRING_DESC_F_INDIRECT) {
> +		dma_unmap_single(vring_dma_dev(vq),
> +				 virtio64_to_cpu(vq->vq.vdev, desc->addr),
> +				 virtio32_to_cpu(vq->vq.vdev, desc->len),
> +				 (flags & VRING_DESC_F_WRITE) ?
> +				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
> +	} else {
> +		dma_unmap_page(vring_dma_dev(vq),
> +			       virtio64_to_cpu(vq->vq.vdev, desc->addr),
> +			       virtio32_to_cpu(vq->vq.vdev, desc->len),
> +			       (flags & VRING_DESC_F_WRITE) ?
> +			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
> +	}
> +}
> +
> +static struct vring_packed_desc *alloc_indirect_packed(struct virtqueue *_vq,
> +						       unsigned int total_sg,
> +						       gfp_t gfp)
> +{
> +	struct vring_packed_desc *desc;
> +
> +	/*
> +	 * We require lowmem mappings for the descriptors because
> +	 * otherwise virt_to_phys will give us bogus addresses in the
> +	 * virtqueue.

Where is virt_to_phys used? I don't see it in this patch.

> +	 */
> +	gfp &= ~__GFP_HIGHMEM;
> +
> +	desc = kmalloc(total_sg * sizeof(struct vring_packed_desc), gfp);
> +
> +	return desc;
> +}
> +
>  static inline int virtqueue_add_packed(struct virtqueue *_vq,
>  				       struct scatterlist *sgs[],
>  				       unsigned int total_sg,
> @@ -766,47 +840,449 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
>  				       void *ctx,
>  				       gfp_t gfp)
>  {
> +	struct vring_virtqueue *vq = to_vvq(_vq);
> +	struct vring_packed_desc *desc;
> +	struct scatterlist *sg;
> +	unsigned int i, n, descs_used, uninitialized_var(prev), err_idx;
> +	__virtio16 uninitialized_var(head_flags), flags;
> +	u16 head, avail_wrap_counter, id, curr;
> +	bool indirect;
> +
> +	START_USE(vq);
> +
> +	BUG_ON(data == NULL);
> +	BUG_ON(ctx && vq->indirect);
> +
> +	if (unlikely(vq->broken)) {
> +		END_USE(vq);
> +		return -EIO;
> +	}
> +
> +#ifdef DEBUG
> +	{
> +		ktime_t now = ktime_get();
> +
> +		/* No kick or get, with .1 second between?  Warn. */
> +		if (vq->last_add_time_valid)
> +			WARN_ON(ktime_to_ms(ktime_sub(now, vq->last_add_time))
> +					    > 100);
> +		vq->last_add_time = now;
> +		vq->last_add_time_valid = true;
> +	}
> +#endif

Add inline helpers for this debug stuff rather than
duplicating it from the split ring?
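
E.g. (sketch, name made up; the body is just the code quoted above):

	static inline void virtqueue_check_add_time(struct vring_virtqueue *vq)
	{
	#ifdef DEBUG
		ktime_t now = ktime_get();

		/* No kick or get, with .1 second between?  Warn. */
		if (vq->last_add_time_valid)
			WARN_ON(ktime_to_ms(ktime_sub(now,
						      vq->last_add_time)) > 100);
		vq->last_add_time = now;
		vq->last_add_time_valid = true;
	#endif
	}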


> +
> +	BUG_ON(total_sg == 0);
> +
> +	head = vq->next_avail_idx;
> +	avail_wrap_counter = vq->avail_wrap_counter;
> +
> +	if (virtqueue_use_indirect(_vq, total_sg))
> +		desc = alloc_indirect_packed(_vq, total_sg, gfp);
> +	else {
> +		desc = NULL;
> +		WARN_ON_ONCE(total_sg > vq->vring_packed.num && !vq->indirect);
> +	}


Apparently this attempts to treat indirect descriptors the same as
direct ones. This is how it is in the split ring, but not in
the packed one. I think you want two separate functions, for
direct and indirect.

> +
> +	if (desc) {
> +		/* Use a single buffer which doesn't continue */
> +		indirect = true;
> +		/* Set up rest to use this indirect table. */
> +		i = 0;
> +		descs_used = 1;
> +	} else {
> +		indirect = false;
> +		desc = vq->vring_packed.desc;
> +		i = head;
> +		descs_used = total_sg;
> +	}
> +
> +	if (vq->vq.num_free < descs_used) {
> +		pr_debug("Can't add buf len %i - avail = %i\n",
> +			 descs_used, vq->vq.num_free);
> +		/* FIXME: for historical reasons, we force a notify here if
> +		 * there are outgoing parts to the buffer.  Presumably the
> +		 * host should service the ring ASAP. */
> +		if (out_sgs)
> +			vq->notify(&vq->vq);
> +		if (indirect)
> +			kfree(desc);
> +		END_USE(vq);
> +		return -ENOSPC;
> +	}
> +
> +	id = vq->free_head;
> +	BUG_ON(id == vq->vring_packed.num);
> +
> +	curr = id;
> +	for (n = 0; n < out_sgs + in_sgs; n++) {
> +		for (sg = sgs[n]; sg; sg = sg_next(sg)) {
> +			dma_addr_t addr = vring_map_one_sg(vq, sg, n < out_sgs ?
> +					       DMA_TO_DEVICE : DMA_FROM_DEVICE);
> +			if (vring_mapping_error(vq, addr))
> +				goto unmap_release;
> +
> +			flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_NEXT |
> +				  (n < out_sgs ? 0 : VRING_DESC_F_WRITE) |
> +				  _VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
> +				  _VRING_DESC_F_USED(!vq->avail_wrap_counter));

Spec says:
	The VIRTQ_DESC_F_WRITE flags bit is the only valid flag for descriptors in the
	indirect table.

All this logic isn't needed for indirect.

Also, why re-calculate avail/used flags every time? They only change
when you wrap around.
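
I.e. keep a shadow value and flip it on wrap (untested sketch):

	u16 avail_used_flags = vq->avail_wrap_counter ?
			       VRING_DESC_F_AVAIL : VRING_DESC_F_USED;
	...
	if (++i >= vq->vring_packed.num) {
		i = 0;
		avail_used_flags ^= VRING_DESC_F_AVAIL | VRING_DESC_F_USED;
	}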


> +			if (!indirect && i == head)
> +				head_flags = flags;
> +			else
> +				desc[i].flags = flags;
> +
> +			desc[i].addr = cpu_to_virtio64(_vq->vdev, addr);
> +			desc[i].len = cpu_to_virtio32(_vq->vdev, sg->length);
> +			i++;
> +			if (!indirect) {
> +				if (vring_use_dma_api(_vq->vdev)) {
> +					vq->desc_state_packed[curr].addr = addr;
> +					vq->desc_state_packed[curr].len =
> +						sg->length;
> +					vq->desc_state_packed[curr].flags =
> +						virtio16_to_cpu(_vq->vdev,
> +								flags);
> +				}
> +				curr = vq->desc_state_packed[curr].next;
> +
> +				if (i >= vq->vring_packed.num) {
> +					i = 0;
> +					vq->avail_wrap_counter ^= 1;
> +				}
> +			}
> +		}
> +	}
> +
> +	prev = (i > 0 ? i : vq->vring_packed.num) - 1;
> +	desc[prev].id = cpu_to_virtio16(_vq->vdev, id);

Is it easier to write this out in all descriptors, to avoid the need to
calculate prev?

> +
> +	/* Last one doesn't continue. */
> +	if (total_sg == 1)
> +		head_flags &= cpu_to_virtio16(_vq->vdev, ~VRING_DESC_F_NEXT);
> +	else
> +		desc[prev].flags &= cpu_to_virtio16(_vq->vdev,
> +						~VRING_DESC_F_NEXT);

Wouldn't it be easier to avoid setting VRING_DESC_F_NEXT
in the first place?
	 if (n != in_sgs - 1 && n != out_sgs - 1)

must be better than writing the descriptor an extra time.

> +
> +	if (indirect) {
> +		/* Now that the indirect table is filled in, map it. */
> +		dma_addr_t addr = vring_map_single(
> +			vq, desc, total_sg * sizeof(struct vring_packed_desc),
> +			DMA_TO_DEVICE);
> +		if (vring_mapping_error(vq, addr))
> +			goto unmap_release;
> +
> +		head_flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_INDIRECT |
> +				      _VRING_DESC_F_AVAIL(avail_wrap_counter) |
> +				      _VRING_DESC_F_USED(!avail_wrap_counter));
> +		vq->vring_packed.desc[head].addr = cpu_to_virtio64(_vq->vdev,
> +								   addr);
> +		vq->vring_packed.desc[head].len = cpu_to_virtio32(_vq->vdev,
> +				total_sg * sizeof(struct vring_packed_desc));
> +		vq->vring_packed.desc[head].id = cpu_to_virtio16(_vq->vdev, id);
> +
> +		if (vring_use_dma_api(_vq->vdev)) {
> +			vq->desc_state_packed[id].addr = addr;
> +			vq->desc_state_packed[id].len = total_sg *
> +					sizeof(struct vring_packed_desc);
> +			vq->desc_state_packed[id].flags =
> +					virtio16_to_cpu(_vq->vdev, head_flags);
> +		}
> +	}
> +
> +	/* We're using some buffers from the free list. */
> +	vq->vq.num_free -= descs_used;
> +
> +	/* Update free pointer */
> +	if (indirect) {
> +		n = head + 1;
> +		if (n >= vq->vring_packed.num) {
> +			n = 0;
> +			vq->avail_wrap_counter ^= 1;
> +		}
> +		vq->next_avail_idx = n;
> +		vq->free_head = vq->desc_state_packed[id].next;
> +	} else {
> +		vq->next_avail_idx = i;
> +		vq->free_head = curr;
> +	}
> +
> +	/* Store token and indirect buffer state. */
> +	vq->desc_state_packed[id].num = descs_used;
> +	vq->desc_state_packed[id].data = data;
> +	if (indirect)
> +		vq->desc_state_packed[id].indir_desc = desc;
> +	else
> +		vq->desc_state_packed[id].indir_desc = ctx;
> +
> +	/* A driver MUST NOT make the first descriptor in the list
> +	 * available before all subsequent descriptors comprising
> +	 * the list are made available. */
> +	virtio_wmb(vq->weak_barriers);
> +	vq->vring_packed.desc[head].flags = head_flags;
> +	vq->num_added += descs_used;
> +
> +	pr_debug("Added buffer head %i to %p\n", head, vq);
> +	END_USE(vq);
> +
> +	return 0;
> +
> +unmap_release:
> +	err_idx = i;
> +	i = head;
> +
> +	for (n = 0; n < total_sg; n++) {
> +		if (i == err_idx)
> +			break;
> +		vring_unmap_desc_packed(vq, &desc[i]);
> +		i++;
> +		if (!indirect && i >= vq->vring_packed.num)
> +			i = 0;
> +	}
> +
> +	vq->avail_wrap_counter = avail_wrap_counter;
> +
> +	if (indirect)
> +		kfree(desc);
> +
> +	END_USE(vq);
>  	return -EIO;
>  }
>  
>  static bool virtqueue_kick_prepare_packed(struct virtqueue *_vq)
>  {
> -	return false;
> +	struct vring_virtqueue *vq = to_vvq(_vq);
> +	u16 flags;
> +	bool needs_kick;
> +	u32 snapshot;
> +
> +	START_USE(vq);
> +	/* We need to expose the new flags value before checking notification
> +	 * suppressions. */
> +	virtio_mb(vq->weak_barriers);
> +
> +	snapshot = READ_ONCE(*(u32 *)vq->vring_packed.device);
> +	flags = virtio16_to_cpu(_vq->vdev, (__virtio16)(snapshot >> 16)) & 0x3;

What are all these hard-coded things? Also, either we do READ_ONCE
everywhere or nowhere. Why is this place special? And why read 32 bits
if you only want the 16?

And doesn't sparse complain about cast to __virtio16?
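
If only the flags are needed here, a plain 16 bit read would do
(untested sketch):

	flags = virtio16_to_cpu(_vq->vdev,
			READ_ONCE(vq->vring_packed.device->flags));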

> +
> +#ifdef DEBUG
> +	if (vq->last_add_time_valid) {
> +		WARN_ON(ktime_to_ms(ktime_sub(ktime_get(),
> +					      vq->last_add_time)) > 100);
> +	}
> +	vq->last_add_time_valid = false;
> +#endif
> +
> +	needs_kick = (flags != VRING_EVENT_F_DISABLE);
> +	END_USE(vq);
> +	return needs_kick;
> +}
> +
> +static void detach_buf_packed(struct vring_virtqueue *vq,
> +			      unsigned int id, void **ctx)
> +{
> +	struct vring_desc_state_packed *state = NULL;
> +	struct vring_packed_desc *desc;
> +	unsigned int curr, i;
> +
> +	/* Clear data ptr. */
> +	vq->desc_state_packed[id].data = NULL;
> +
> +	curr = id;
> +	for (i = 0; i < vq->desc_state_packed[id].num; i++) {
> +		state = &vq->desc_state_packed[curr];
> +		vring_unmap_state_packed(vq, state);
> +		curr = state->next;
> +	}
> +
> +	BUG_ON(state == NULL);
> +	vq->vq.num_free += vq->desc_state_packed[id].num;
> +	state->next = vq->free_head;
> +	vq->free_head = id;
> +
> +	if (vq->indirect) {
> +		u32 len;
> +
> +		/* Free the indirect table, if any, now that it's unmapped. */
> +		desc = vq->desc_state_packed[id].indir_desc;
> +		if (!desc)
> +			return;
> +
> +		if (vring_use_dma_api(vq->vq.vdev)) {
> +			len = vq->desc_state_packed[id].len;
> +			for (i = 0; i < len / sizeof(struct vring_packed_desc);
> +					i++)
> +				vring_unmap_desc_packed(vq, &desc[i]);
> +		}
> +		kfree(desc);
> +		vq->desc_state_packed[id].indir_desc = NULL;
> +	} else if (ctx) {
> +		*ctx = vq->desc_state_packed[id].indir_desc;
> +	}
> +}
> +
> +static inline bool is_used_desc_packed(const struct vring_virtqueue *vq,
> +				       u16 idx, bool used_wrap_counter)
> +{
> +	u16 flags;
> +	bool avail, used;
> +
> +	flags = virtio16_to_cpu(vq->vq.vdev,
> +				vq->vring_packed.desc[idx].flags);
> +	avail = !!(flags & VRING_DESC_F_AVAIL);
> +	used = !!(flags & VRING_DESC_F_USED);
> +
> +	return avail == used && used == used_wrap_counter;

I think that you don't need to look at the avail flag to detect a used
descriptor. The reason the device writes it is to avoid confusing
the *device* the next time the descriptor wraps.
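
I.e., if that invariant holds, the check could shrink to (untested):

	return !!(flags & VRING_DESC_F_USED) == used_wrap_counter;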

>  }
>  
>  static inline bool more_used_packed(const struct vring_virtqueue *vq)
>  {
> -	return false;
> +	return is_used_desc_packed(vq, vq->last_used_idx,
> +			vq->used_wrap_counter);
>  }
>  
>  static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq,
>  					  unsigned int *len,
>  					  void **ctx)
>  {
> -	return NULL;
> +	struct vring_virtqueue *vq = to_vvq(_vq);
> +	u16 last_used, id;
> +	void *ret;
> +
> +	START_USE(vq);
> +
> +	if (unlikely(vq->broken)) {
> +		END_USE(vq);
> +		return NULL;
> +	}
> +
> +	if (!more_used_packed(vq)) {
> +		pr_debug("No more buffers in queue\n");
> +		END_USE(vq);
> +		return NULL;
> +	}
> +
> +	/* Only get used elements after they have been exposed by host. */
> +	virtio_rmb(vq->weak_barriers);
> +
> +	last_used = vq->last_used_idx;
> +	id = virtio16_to_cpu(_vq->vdev, vq->vring_packed.desc[last_used].id);
> +	*len = virtio32_to_cpu(_vq->vdev, vq->vring_packed.desc[last_used].len);
> +
> +	if (unlikely(id >= vq->vring_packed.num)) {
> +		BAD_RING(vq, "id %u out of range\n", id);
> +		return NULL;
> +	}
> +	if (unlikely(!vq->desc_state_packed[id].data)) {
> +		BAD_RING(vq, "id %u is not a head!\n", id);
> +		return NULL;
> +	}
> +
> +	vq->last_used_idx += vq->desc_state_packed[id].num;
> +	if (vq->last_used_idx >= vq->vring_packed.num) {
> +		vq->last_used_idx -= vq->vring_packed.num;
> +		vq->used_wrap_counter ^= 1;
> +	}
> +
> +	/* detach_buf_packed clears data, so grab it now. */
> +	ret = vq->desc_state_packed[id].data;
> +	detach_buf_packed(vq, id, ctx);
> +
> +#ifdef DEBUG
> +	vq->last_add_time_valid = false;
> +#endif
> +
> +	END_USE(vq);
> +	return ret;
>  }
>  
>  static void virtqueue_disable_cb_packed(struct virtqueue *_vq)
>  {
> +	struct vring_virtqueue *vq = to_vvq(_vq);
> +
> +	if (vq->event_flags_shadow != VRING_EVENT_F_DISABLE) {
> +		vq->event_flags_shadow = VRING_EVENT_F_DISABLE;
> +		vq->vring_packed.driver->flags = cpu_to_virtio16(_vq->vdev,
> +							vq->event_flags_shadow);
> +	}
>  }
>  
>  static unsigned virtqueue_enable_cb_prepare_packed(struct virtqueue *_vq)
>  {
> -	return 0;
> +	struct vring_virtqueue *vq = to_vvq(_vq);
> +
> +	START_USE(vq);
> +
> +	/* We optimistically turn back on interrupts, then check if there was
> +	 * more to do. */
> +
> +	if (vq->event_flags_shadow == VRING_EVENT_F_DISABLE) {
> +		vq->event_flags_shadow = VRING_EVENT_F_ENABLE;
> +		vq->vring_packed.driver->flags = cpu_to_virtio16(_vq->vdev,
> +							vq->event_flags_shadow);
> +	}
> +
> +	END_USE(vq);
> +	return vq->last_used_idx | ((u16)vq->used_wrap_counter << 15);
>  }
>  
> -static bool virtqueue_poll_packed(struct virtqueue *_vq, unsigned last_used_idx)
> +static bool virtqueue_poll_packed(struct virtqueue *_vq, unsigned off_wrap)
>  {
> -	return false;
> +	struct vring_virtqueue *vq = to_vvq(_vq);
> +	bool wrap_counter;
> +	u16 used_idx;
> +
> +	wrap_counter = off_wrap >> 15;
> +	used_idx = off_wrap & ~(1 << 15);
> +
> +	return is_used_desc_packed(vq, used_idx, wrap_counter);

These >> 15 and << 15 shifts all over the place duplicate information.
Pls put the 15 in the header.

Also can you maintain the counters properly shifted?
Then just use them.
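
E.g. in the uapi header (sketch, name made up):

	#define VRING_PACKED_EVENT_F_WRAP_CTR	15

and then:

	wrap_counter = off_wrap >> VRING_PACKED_EVENT_F_WRAP_CTR;
	used_idx = off_wrap & ~(1 << VRING_PACKED_EVENT_F_WRAP_CTR);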


>  }
>  
>  static bool virtqueue_enable_cb_delayed_packed(struct virtqueue *_vq)
>  {
> -	return false;
> +	struct vring_virtqueue *vq = to_vvq(_vq);
> +
> +	START_USE(vq);
> +
> +	/* We optimistically turn back on interrupts, then check if there was
> +	 * more to do. */
> +
> +	if (vq->event_flags_shadow == VRING_EVENT_F_DISABLE) {
> +		vq->event_flags_shadow = VRING_EVENT_F_ENABLE;
> +		vq->vring_packed.driver->flags = cpu_to_virtio16(_vq->vdev,
> +							vq->event_flags_shadow);
> +		/* We need to enable interrupts first before re-checking
> +		 * for more used buffers. */
> +		virtio_mb(vq->weak_barriers);
> +	}
> +
> +	if (more_used_packed(vq)) {
> +		END_USE(vq);
> +		return false;
> +	}
> +
> +	END_USE(vq);
> +	return true;
>  }
>  
>  static void *virtqueue_detach_unused_buf_packed(struct virtqueue *_vq)
>  {
> +	struct vring_virtqueue *vq = to_vvq(_vq);
> +	unsigned int i;
> +	void *buf;
> +
> +	START_USE(vq);
> +
> +	for (i = 0; i < vq->vring_packed.num; i++) {
> +		if (!vq->desc_state_packed[i].data)
> +			continue;
> +		/* detach_buf clears data, so grab it now. */
> +		buf = vq->desc_state_packed[i].data;
> +		detach_buf_packed(vq, i, NULL);
> +		END_USE(vq);
> +		return buf;
> +	}
> +	/* That should have freed everything. */
> +	BUG_ON(vq->vq.num_free != vq->vring_packed.num);
> +
> +	END_USE(vq);
>  	return NULL;
>  }
>  
> @@ -1083,6 +1559,9 @@ bool virtqueue_poll(struct virtqueue *_vq, unsigned last_used_idx)
>  {
>  	struct vring_virtqueue *vq = to_vvq(_vq);
>  
> +	/* We need to enable interrupts first before re-checking
> +	 * for more used buffers. */
> +	virtio_mb(vq->weak_barriers);
>  	return vq->packed ? virtqueue_poll_packed(_vq, last_used_idx) :
>  			    virtqueue_poll_split(_vq, last_used_idx);
>  }

Possible optimization for when dma API is in use:

--->

dma: detecting nop unmap

Drivers need to maintain the DMA address for unmap purposes,
but these cycles are wasted when the unmap callback is not
defined. Add an API for drivers to check that and avoid
the unmap completely. Debug builds still do the unmap.

Signed-off-by: Michael S. Tsirkin <mst@redhat.com>

----

diff --git a/include/linux/dma-debug.h b/include/linux/dma-debug.h
index a785f2507159..38b2c27c8449 100644
--- a/include/linux/dma-debug.h
+++ b/include/linux/dma-debug.h
@@ -42,6 +42,11 @@ extern void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);
 extern void debug_dma_unmap_page(struct device *dev, dma_addr_t addr,
 				 size_t size, int direction, bool map_single);
 
+static inline bool has_debug_dma_unmap(struct device *dev)
+{
+	return true;
+}
+
 extern void debug_dma_map_sg(struct device *dev, struct scatterlist *sg,
 			     int nents, int mapped_ents, int direction);
 
@@ -121,6 +126,11 @@ static inline void debug_dma_unmap_page(struct device *dev, dma_addr_t addr,
 {
 }
 
+static inline bool has_debug_dma_unmap(struct device *dev)
+{
+	return false;
+}
+
 static inline void debug_dma_map_sg(struct device *dev, struct scatterlist *sg,
 				    int nents, int mapped_ents, int direction)
 {
diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
index 1db6a6b46d0d..656f3e518166 100644
--- a/include/linux/dma-mapping.h
+++ b/include/linux/dma-mapping.h
@@ -241,6 +241,14 @@ static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr,
 	return addr;
 }
 
+static inline bool has_dma_unmap(struct device *dev)
+{
+	const struct dma_map_ops *ops = get_dma_ops(dev);
+
+	return ops->unmap_page || ops->unmap_sg || ops->unmap_resource ||
+		has_debug_dma_unmap(dev);
+}
+
 static inline void dma_unmap_single_attrs(struct device *dev, dma_addr_t addr,
 					  size_t size,
 					  enum dma_data_direction dir,
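
The virtio_ring unmap helpers could then bail out early, along
these lines (untested sketch):

	if (!vring_use_dma_api(vq->vq.vdev) ||
	    !has_dma_unmap(vring_dma_dev(vq)))
		return;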

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 1/5] virtio: add packed ring definitions
  2018-07-11  2:27 ` [PATCH net-next v2 1/5] virtio: add packed ring definitions Tiwei Bie
@ 2018-09-07 13:51   ` Michael S. Tsirkin
  2018-09-10  2:13     ` Tiwei Bie
  0 siblings, 1 reply; 53+ messages in thread
From: Michael S. Tsirkin @ 2018-09-07 13:51 UTC (permalink / raw)
  To: Tiwei Bie
  Cc: jasowang, virtualization, linux-kernel, netdev, virtio-dev, wexu,
	jfreimann

On Wed, Jul 11, 2018 at 10:27:07AM +0800, Tiwei Bie wrote:
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> ---
>  include/uapi/linux/virtio_config.h |  3 +++
>  include/uapi/linux/virtio_ring.h   | 43 ++++++++++++++++++++++++++++++
>  2 files changed, 46 insertions(+)
> 
> diff --git a/include/uapi/linux/virtio_config.h b/include/uapi/linux/virtio_config.h
> index 449132c76b1c..1196e1c1d4f6 100644
> --- a/include/uapi/linux/virtio_config.h
> +++ b/include/uapi/linux/virtio_config.h
> @@ -75,6 +75,9 @@
>   */
>  #define VIRTIO_F_IOMMU_PLATFORM		33
>  
> +/* This feature indicates support for the packed virtqueue layout. */
> +#define VIRTIO_F_RING_PACKED		34
> +
>  /*
>   * Does the device support Single Root I/O Virtualization?
>   */
> diff --git a/include/uapi/linux/virtio_ring.h b/include/uapi/linux/virtio_ring.h
> index 6d5d5faa989b..0254a2ba29cf 100644
> --- a/include/uapi/linux/virtio_ring.h
> +++ b/include/uapi/linux/virtio_ring.h
> @@ -44,6 +44,10 @@
>  /* This means the buffer contains a list of buffer descriptors. */
>  #define VRING_DESC_F_INDIRECT	4
>  
> +/* Mark a descriptor as available or used. */
> +#define VRING_DESC_F_AVAIL	(1ul << 7)
> +#define VRING_DESC_F_USED	(1ul << 15)
> +
>  /* The Host uses this in used->flags to advise the Guest: don't kick me when
>   * you add a buffer.  It's unreliable, so it's simply an optimization.  Guest
>   * will still kick if it's out of buffers. */
> @@ -53,6 +57,17 @@
>   * optimization.  */
>  #define VRING_AVAIL_F_NO_INTERRUPT	1
>  
> +/* Enable events. */
> +#define VRING_EVENT_F_ENABLE	0x0
> +/* Disable events. */
> +#define VRING_EVENT_F_DISABLE	0x1
> +/*
> + * Enable events for a specific descriptor
> + * (as specified by Descriptor Ring Change Event Offset/Wrap Counter).
> + * Only valid if VIRTIO_RING_F_EVENT_IDX has been negotiated.
> + */
> +#define VRING_EVENT_F_DESC	0x2
> +
>  /* We support indirect buffer descriptors */
>  #define VIRTIO_RING_F_INDIRECT_DESC	28
>  

These are for the packed ring, right? Pls prefix accordingly.
Also, you likely need macros for the wrap counters.

> @@ -171,4 +186,32 @@ static inline int vring_need_event(__u16 event_idx, __u16 new_idx, __u16 old)
>  	return (__u16)(new_idx - event_idx - 1) < (__u16)(new_idx - old);
>  }
>  
> +struct vring_packed_desc_event {
> +	/* Descriptor Ring Change Event Offset/Wrap Counter. */
> +	__virtio16 off_wrap;
> +	/* Descriptor Ring Change Event Flags. */
> +	__virtio16 flags;
> +};
> +
> +struct vring_packed_desc {
> +	/* Buffer Address. */
> +	__virtio64 addr;
> +	/* Buffer Length. */
> +	__virtio32 len;
> +	/* Buffer ID. */
> +	__virtio16 id;
> +	/* The flags depending on descriptor type. */
> +	__virtio16 flags;
> +};

Don't use __virtioXX types, just __leXX ones.
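
I.e. (sketch):

	struct vring_packed_desc {
		__le64 addr;
		__le32 len;
		__le16 id;
		__le16 flags;
	};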

> +
> +struct vring_packed {
> +	unsigned int num;
> +
> +	struct vring_packed_desc *desc;
> +
> +	struct vring_packed_desc_event *driver;
> +
> +	struct vring_packed_desc_event *device;
> +};
> +
>  #endif /* _UAPI_LINUX_VIRTIO_RING_H */
> -- 
> 2.18.0

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 2/5] virtio_ring: support creating packed ring
  2018-07-11  2:27 ` [PATCH net-next v2 2/5] virtio_ring: support creating packed ring Tiwei Bie
@ 2018-09-07 14:03   ` Michael S. Tsirkin
  2018-09-10  2:28     ` Tiwei Bie
  0 siblings, 1 reply; 53+ messages in thread
From: Michael S. Tsirkin @ 2018-09-07 14:03 UTC (permalink / raw)
  To: Tiwei Bie
  Cc: jasowang, virtualization, linux-kernel, netdev, virtio-dev, wexu,
	jfreimann

On Wed, Jul 11, 2018 at 10:27:08AM +0800, Tiwei Bie wrote:
> This commit introduces support for creating the packed ring.
> All split ring specific functions get a _split suffix.
> Some necessary stubs for the packed ring are also added.
> 
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>

I'd rather have a patch just renaming split functions, then
add all packed stuff in as a separate patch on top.


> ---
>  drivers/virtio/virtio_ring.c | 801 +++++++++++++++++++++++------------
>  include/linux/virtio_ring.h  |   8 +-
>  2 files changed, 546 insertions(+), 263 deletions(-)
> 
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index 814b395007b2..c4f8abc7445a 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -60,11 +60,15 @@ struct vring_desc_state {
>  	struct vring_desc *indir_desc;	/* Indirect descriptor, if any. */
>  };
>  
> +struct vring_desc_state_packed {
> +	int next;			/* The next desc state. */

So this can go away with IN_ORDER?

> +};
> +
>  struct vring_virtqueue {
>  	struct virtqueue vq;
>  
> -	/* Actual memory layout for this queue */
> -	struct vring vring;
> +	/* Is this a packed ring? */
> +	bool packed;
>  
>  	/* Can we use weak barriers? */
>  	bool weak_barriers;
> @@ -86,11 +90,39 @@ struct vring_virtqueue {
>  	/* Last used index we've seen. */
>  	u16 last_used_idx;
>  
> -	/* Last written value to avail->flags */
> -	u16 avail_flags_shadow;
> +	union {
> +		/* Available for split ring */
> +		struct {
> +			/* Actual memory layout for this queue. */
> +			struct vring vring;
>  
> -	/* Last written value to avail->idx in guest byte order */
> -	u16 avail_idx_shadow;
> +			/* Last written value to avail->flags */
> +			u16 avail_flags_shadow;
> +
> +			/* Last written value to avail->idx in
> +			 * guest byte order. */
> +			u16 avail_idx_shadow;
> +		};

Name this field split so it's easier to detect misuse of e.g.
packed fields in split code?
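
E.g. (sketch):

	union {
		struct {
			struct vring vring;
			u16 avail_flags_shadow;
			u16 avail_idx_shadow;
		} split;

		struct {
			struct vring_packed vring;
			bool avail_wrap_counter;
			bool used_wrap_counter;
			u16 next_avail_idx;
			u16 event_flags_shadow;
		} packed;
	};

so accesses read like vq->split.avail_idx_shadow and any misuse
stands out.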

> +
> +		/* Available for packed ring */
> +		struct {
> +			/* Actual memory layout for this queue. */
> +			struct vring_packed vring_packed;
> +
> +			/* Driver ring wrap counter. */
> +			bool avail_wrap_counter;
> +
> +			/* Device ring wrap counter. */
> +			bool used_wrap_counter;
> +
> +			/* Index of the next avail descriptor. */
> +			u16 next_avail_idx;
> +
> +			/* Last written value to driver->flags in
> +			 * guest byte order. */
> +			u16 event_flags_shadow;
> +		};
> +	};
>  
>  	/* How to notify other side. FIXME: commonalize hcalls! */
>  	bool (*notify)(struct virtqueue *vq);
> @@ -110,11 +142,24 @@ struct vring_virtqueue {
>  #endif
>  
>  	/* Per-descriptor state. */
> -	struct vring_desc_state desc_state[];
> +	union {
> +		struct vring_desc_state desc_state[1];
> +		struct vring_desc_state_packed desc_state_packed[1];
> +	};
>  };
>  
>  #define to_vvq(_vq) container_of(_vq, struct vring_virtqueue, vq)
>  
> +static inline bool virtqueue_use_indirect(struct virtqueue *_vq,
> +					  unsigned int total_sg)
> +{
> +	struct vring_virtqueue *vq = to_vvq(_vq);
> +
> +	/* If the host supports indirect descriptor tables, and we have multiple
> +	 * buffers, then go indirect. FIXME: tune this threshold */
> +	return (vq->indirect && total_sg > 1 && vq->vq.num_free);
> +}
> +
>  /*
>   * Modern virtio devices have feature bits to specify whether they need a
>   * quirk and bypass the IOMMU. If not there, just use the DMA API.
> @@ -200,8 +245,17 @@ static dma_addr_t vring_map_single(const struct vring_virtqueue *vq,
>  			      cpu_addr, size, direction);
>  }
>  
> -static void vring_unmap_one(const struct vring_virtqueue *vq,
> -			    struct vring_desc *desc)
> +static int vring_mapping_error(const struct vring_virtqueue *vq,
> +			       dma_addr_t addr)
> +{
> +	if (!vring_use_dma_api(vq->vq.vdev))
> +		return 0;
> +
> +	return dma_mapping_error(vring_dma_dev(vq), addr);
> +}
> +
> +static void vring_unmap_one_split(const struct vring_virtqueue *vq,
> +				  struct vring_desc *desc)
>  {
>  	u16 flags;
>  
> @@ -225,17 +279,9 @@ static void vring_unmap_one(const struct vring_virtqueue *vq,
>  	}
>  }
>  
> -static int vring_mapping_error(const struct vring_virtqueue *vq,
> -			       dma_addr_t addr)
> -{
> -	if (!vring_use_dma_api(vq->vq.vdev))
> -		return 0;
> -
> -	return dma_mapping_error(vring_dma_dev(vq), addr);
> -}
> -
> -static struct vring_desc *alloc_indirect(struct virtqueue *_vq,
> -					 unsigned int total_sg, gfp_t gfp)
> +static struct vring_desc *alloc_indirect_split(struct virtqueue *_vq,
> +					       unsigned int total_sg,
> +					       gfp_t gfp)
>  {
>  	struct vring_desc *desc;
>  	unsigned int i;
> @@ -256,14 +302,14 @@ static struct vring_desc *alloc_indirect(struct virtqueue *_vq,
>  	return desc;
>  }
>  
> -static inline int virtqueue_add(struct virtqueue *_vq,
> -				struct scatterlist *sgs[],
> -				unsigned int total_sg,
> -				unsigned int out_sgs,
> -				unsigned int in_sgs,
> -				void *data,
> -				void *ctx,
> -				gfp_t gfp)
> +static inline int virtqueue_add_split(struct virtqueue *_vq,
> +				      struct scatterlist *sgs[],
> +				      unsigned int total_sg,
> +				      unsigned int out_sgs,
> +				      unsigned int in_sgs,
> +				      void *data,
> +				      void *ctx,
> +				      gfp_t gfp)
>  {
>  	struct vring_virtqueue *vq = to_vvq(_vq);
>  	struct scatterlist *sg;
> @@ -299,10 +345,8 @@ static inline int virtqueue_add(struct virtqueue *_vq,
>  
>  	head = vq->free_head;
>  
> -	/* If the host supports indirect descriptor tables, and we have multiple
> -	 * buffers, then go indirect. FIXME: tune this threshold */
> -	if (vq->indirect && total_sg > 1 && vq->vq.num_free)
> -		desc = alloc_indirect(_vq, total_sg, gfp);
> +	if (virtqueue_use_indirect(_vq, total_sg))
> +		desc = alloc_indirect_split(_vq, total_sg, gfp);
>  	else {
>  		desc = NULL;
>  		WARN_ON_ONCE(total_sg > vq->vring.num && !vq->indirect);
> @@ -423,7 +467,7 @@ static inline int virtqueue_add(struct virtqueue *_vq,
>  	for (n = 0; n < total_sg; n++) {
>  		if (i == err_idx)
>  			break;
> -		vring_unmap_one(vq, &desc[i]);
> +		vring_unmap_one_split(vq, &desc[i]);
>  		i = virtio16_to_cpu(_vq->vdev, vq->vring.desc[i].next);
>  	}
>  
> @@ -434,6 +478,355 @@ static inline int virtqueue_add(struct virtqueue *_vq,
>  	return -EIO;
>  }
>  
> +static bool virtqueue_kick_prepare_split(struct virtqueue *_vq)
> +{
> +	struct vring_virtqueue *vq = to_vvq(_vq);
> +	u16 new, old;
> +	bool needs_kick;
> +
> +	START_USE(vq);
> +	/* We need to expose available array entries before checking avail
> +	 * event. */
> +	virtio_mb(vq->weak_barriers);
> +
> +	old = vq->avail_idx_shadow - vq->num_added;
> +	new = vq->avail_idx_shadow;
> +	vq->num_added = 0;
> +
> +#ifdef DEBUG
> +	if (vq->last_add_time_valid) {
> +		WARN_ON(ktime_to_ms(ktime_sub(ktime_get(),
> +					      vq->last_add_time)) > 100);
> +	}
> +	vq->last_add_time_valid = false;
> +#endif
> +
> +	if (vq->event) {
> +		needs_kick = vring_need_event(virtio16_to_cpu(_vq->vdev, vring_avail_event(&vq->vring)),
> +					      new, old);
> +	} else {
> +		needs_kick = !(vq->vring.used->flags & cpu_to_virtio16(_vq->vdev, VRING_USED_F_NO_NOTIFY));
> +	}
> +	END_USE(vq);
> +	return needs_kick;
> +}
> +
> +static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head,
> +			     void **ctx)
> +{
> +	unsigned int i, j;
> +	__virtio16 nextflag = cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_NEXT);
> +
> +	/* Clear data ptr. */
> +	vq->desc_state[head].data = NULL;
> +
> +	/* Put back on free list: unmap first-level descriptors and find end */
> +	i = head;
> +
> +	while (vq->vring.desc[i].flags & nextflag) {
> +		vring_unmap_one_split(vq, &vq->vring.desc[i]);
> +		i = virtio16_to_cpu(vq->vq.vdev, vq->vring.desc[i].next);
> +		vq->vq.num_free++;
> +	}
> +
> +	vring_unmap_one_split(vq, &vq->vring.desc[i]);
> +	vq->vring.desc[i].next = cpu_to_virtio16(vq->vq.vdev, vq->free_head);
> +	vq->free_head = head;
> +
> +	/* Plus final descriptor */
> +	vq->vq.num_free++;
> +
> +	if (vq->indirect) {
> +		struct vring_desc *indir_desc = vq->desc_state[head].indir_desc;
> +		u32 len;
> +
> +		/* Free the indirect table, if any, now that it's unmapped. */
> +		if (!indir_desc)
> +			return;
> +
> +		len = virtio32_to_cpu(vq->vq.vdev, vq->vring.desc[head].len);
> +
> +		BUG_ON(!(vq->vring.desc[head].flags &
> +			 cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_INDIRECT)));
> +		BUG_ON(len == 0 || len % sizeof(struct vring_desc));
> +
> +		for (j = 0; j < len / sizeof(struct vring_desc); j++)
> +			vring_unmap_one_split(vq, &indir_desc[j]);
> +
> +		kfree(indir_desc);
> +		vq->desc_state[head].indir_desc = NULL;
> +	} else if (ctx) {
> +		*ctx = vq->desc_state[head].indir_desc;
> +	}
> +}
> +
> +static inline bool more_used_split(const struct vring_virtqueue *vq)
> +{
> +	return vq->last_used_idx != virtio16_to_cpu(vq->vq.vdev, vq->vring.used->idx);
> +}
> +
> +static void *virtqueue_get_buf_ctx_split(struct virtqueue *_vq,
> +					 unsigned int *len,
> +					 void **ctx)
> +{
> +	struct vring_virtqueue *vq = to_vvq(_vq);
> +	void *ret;
> +	unsigned int i;
> +	u16 last_used;
> +
> +	START_USE(vq);
> +
> +	if (unlikely(vq->broken)) {
> +		END_USE(vq);
> +		return NULL;
> +	}
> +
> +	if (!more_used_split(vq)) {
> +		pr_debug("No more buffers in queue\n");
> +		END_USE(vq);
> +		return NULL;
> +	}
> +
> +	/* Only get used array entries after they have been exposed by host. */
> +	virtio_rmb(vq->weak_barriers);
> +
> +	last_used = (vq->last_used_idx & (vq->vring.num - 1));
> +	i = virtio32_to_cpu(_vq->vdev, vq->vring.used->ring[last_used].id);
> +	*len = virtio32_to_cpu(_vq->vdev, vq->vring.used->ring[last_used].len);
> +
> +	if (unlikely(i >= vq->vring.num)) {
> +		BAD_RING(vq, "id %u out of range\n", i);
> +		return NULL;
> +	}
> +	if (unlikely(!vq->desc_state[i].data)) {
> +		BAD_RING(vq, "id %u is not a head!\n", i);
> +		return NULL;
> +	}
> +
> +	/* detach_buf_split clears data, so grab it now. */
> +	ret = vq->desc_state[i].data;
> +	detach_buf_split(vq, i, ctx);
> +	vq->last_used_idx++;
> +	/* If we expect an interrupt for the next entry, tell host
> +	 * by writing event index and flush out the write before
> +	 * the read in the next get_buf call. */
> +	if (!(vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT))
> +		virtio_store_mb(vq->weak_barriers,
> +				&vring_used_event(&vq->vring),
> +				cpu_to_virtio16(_vq->vdev, vq->last_used_idx));
> +
> +#ifdef DEBUG
> +	vq->last_add_time_valid = false;
> +#endif
> +
> +	END_USE(vq);
> +	return ret;
> +}
> +
> +static void virtqueue_disable_cb_split(struct virtqueue *_vq)
> +{
> +	struct vring_virtqueue *vq = to_vvq(_vq);
> +
> +	if (!(vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT)) {
> +		vq->avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT;
> +		if (!vq->event)
> +			vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);
> +	}
> +}
> +
> +static unsigned virtqueue_enable_cb_prepare_split(struct virtqueue *_vq)
> +{
> +	struct vring_virtqueue *vq = to_vvq(_vq);
> +	u16 last_used_idx;
> +
> +	START_USE(vq);
> +
> +	/* We optimistically turn back on interrupts, then check if there was
> +	 * more to do. */
> +	/* Depending on the VIRTIO_RING_F_EVENT_IDX feature, we need to
> +	 * either clear the flags bit or point the event index at the next
> +	 * entry. Always do both to keep code simple. */
> +	if (vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT) {
> +		vq->avail_flags_shadow &= ~VRING_AVAIL_F_NO_INTERRUPT;
> +		if (!vq->event)
> +			vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);
> +	}
> +	vring_used_event(&vq->vring) = cpu_to_virtio16(_vq->vdev, last_used_idx = vq->last_used_idx);
> +	END_USE(vq);
> +	return last_used_idx;
> +}
> +
> +static bool virtqueue_poll_split(struct virtqueue *_vq, unsigned last_used_idx)
> +{
> +	struct vring_virtqueue *vq = to_vvq(_vq);
> +
> +	virtio_mb(vq->weak_barriers);
> +	return (u16)last_used_idx != virtio16_to_cpu(_vq->vdev, vq->vring.used->idx);
> +}
> +
> +static bool virtqueue_enable_cb_delayed_split(struct virtqueue *_vq)
> +{
> +	struct vring_virtqueue *vq = to_vvq(_vq);
> +	u16 bufs;
> +
> +	START_USE(vq);
> +
> +	/* We optimistically turn back on interrupts, then check if there was
> +	 * more to do. */
> +	/* Depending on the VIRTIO_RING_F_EVENT_IDX feature, we need to
> +	 * either clear the flags bit or point the event index at the next
> +	 * entry. Always update the event index to keep code simple. */
> +	if (vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT) {
> +		vq->avail_flags_shadow &= ~VRING_AVAIL_F_NO_INTERRUPT;
> +		if (!vq->event)
> +			vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);
> +	}
> +	/* TODO: tune this threshold */
> +	bufs = (u16)(vq->avail_idx_shadow - vq->last_used_idx) * 3 / 4;
> +
> +	virtio_store_mb(vq->weak_barriers,
> +			&vring_used_event(&vq->vring),
> +			cpu_to_virtio16(_vq->vdev, vq->last_used_idx + bufs));
> +
> +	if (unlikely((u16)(virtio16_to_cpu(_vq->vdev, vq->vring.used->idx) - vq->last_used_idx) > bufs)) {
> +		END_USE(vq);
> +		return false;
> +	}
> +
> +	END_USE(vq);
> +	return true;
> +}
> +
> +static void *virtqueue_detach_unused_buf_split(struct virtqueue *_vq)
> +{
> +	struct vring_virtqueue *vq = to_vvq(_vq);
> +	unsigned int i;
> +	void *buf;
> +
> +	START_USE(vq);
> +
> +	for (i = 0; i < vq->vring.num; i++) {
> +		if (!vq->desc_state[i].data)
> +			continue;
> +		/* detach_buf clears data, so grab it now. */
> +		buf = vq->desc_state[i].data;
> +		detach_buf_split(vq, i, NULL);
> +		vq->avail_idx_shadow--;
> +		vq->vring.avail->idx = cpu_to_virtio16(_vq->vdev, vq->avail_idx_shadow);
> +		END_USE(vq);
> +		return buf;
> +	}
> +	/* That should have freed everything. */
> +	BUG_ON(vq->vq.num_free != vq->vring.num);
> +
> +	END_USE(vq);
> +	return NULL;
> +}
> +
> +/*
> + * The layout for the packed ring is a contiguous chunk of memory
> + * which looks like this.
> + *
> + * struct vring_packed {
> + *	// The actual descriptors (16 bytes each)
> + *	struct vring_packed_desc desc[num];
> + *
> + *	// Padding to the next align boundary.
> + *	char pad[];
> + *
> + *	// Driver Event Suppression
> + *	struct vring_packed_desc_event driver;
> + *
> + *	// Device Event Suppression
> + *	struct vring_packed_desc_event device;
> + * };
> + */

Why not just allocate event structures separately?
Is it a win to have them share a cache line for some reason?

> +static inline void vring_init_packed(struct vring_packed *vr, unsigned int num,
> +				     void *p, unsigned long align)
> +{
> +	vr->num = num;
> +	vr->desc = p;
> +	vr->driver = (void *)ALIGN(((uintptr_t)p +
> +		sizeof(struct vring_packed_desc) * num), align);
> +	vr->device = vr->driver + 1;
> +}

What's all this about alignment? Where does it come from?

> +
> +static inline unsigned vring_size_packed(unsigned int num, unsigned long align)
> +{
> +	return ((sizeof(struct vring_packed_desc) * num + align - 1)
> +		& ~(align - 1)) + sizeof(struct vring_packed_desc_event) * 2;
> +}
> +
> +static inline int virtqueue_add_packed(struct virtqueue *_vq,
> +				       struct scatterlist *sgs[],
> +				       unsigned int total_sg,
> +				       unsigned int out_sgs,
> +				       unsigned int in_sgs,
> +				       void *data,
> +				       void *ctx,
> +				       gfp_t gfp)
> +{
> +	return -EIO;
> +}
> +
> +static bool virtqueue_kick_prepare_packed(struct virtqueue *_vq)
> +{
> +	return false;
> +}
> +
> +static inline bool more_used_packed(const struct vring_virtqueue *vq)
> +{
> +	return false;
> +}
> +
> +static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq,
> +					  unsigned int *len,
> +					  void **ctx)
> +{
> +	return NULL;
> +}
> +
> +static void virtqueue_disable_cb_packed(struct virtqueue *_vq)
> +{
> +}
> +
> +static unsigned virtqueue_enable_cb_prepare_packed(struct virtqueue *_vq)
> +{
> +	return 0;
> +}
> +
> +static bool virtqueue_poll_packed(struct virtqueue *_vq, unsigned last_used_idx)
> +{
> +	return false;
> +}
> +
> +static bool virtqueue_enable_cb_delayed_packed(struct virtqueue *_vq)
> +{
> +	return false;
> +}
> +
> +static void *virtqueue_detach_unused_buf_packed(struct virtqueue *_vq)
> +{
> +	return NULL;
> +}
> +
> +static inline int virtqueue_add(struct virtqueue *_vq,
> +				struct scatterlist *sgs[],
> +				unsigned int total_sg,
> +				unsigned int out_sgs,
> +				unsigned int in_sgs,
> +				void *data,
> +				void *ctx,
> +				gfp_t gfp)
> +{
> +	struct vring_virtqueue *vq = to_vvq(_vq);
> +
> +	return vq->packed ? virtqueue_add_packed(_vq, sgs, total_sg, out_sgs,
> +						 in_sgs, data, ctx, gfp) :
> +			    virtqueue_add_split(_vq, sgs, total_sg, out_sgs,
> +						in_sgs, data, ctx, gfp);
> +}
> +
>  /**
>   * virtqueue_add_sgs - expose buffers to other end
>   * @vq: the struct virtqueue we're talking about.
> @@ -550,34 +943,9 @@ EXPORT_SYMBOL_GPL(virtqueue_add_inbuf_ctx);
>  bool virtqueue_kick_prepare(struct virtqueue *_vq)
>  {
>  	struct vring_virtqueue *vq = to_vvq(_vq);
> -	u16 new, old;
> -	bool needs_kick;
>  
> -	START_USE(vq);
> -	/* We need to expose available array entries before checking avail
> -	 * event. */
> -	virtio_mb(vq->weak_barriers);
> -
> -	old = vq->avail_idx_shadow - vq->num_added;
> -	new = vq->avail_idx_shadow;
> -	vq->num_added = 0;
> -
> -#ifdef DEBUG
> -	if (vq->last_add_time_valid) {
> -		WARN_ON(ktime_to_ms(ktime_sub(ktime_get(),
> -					      vq->last_add_time)) > 100);
> -	}
> -	vq->last_add_time_valid = false;
> -#endif
> -
> -	if (vq->event) {
> -		needs_kick = vring_need_event(virtio16_to_cpu(_vq->vdev, vring_avail_event(&vq->vring)),
> -					      new, old);
> -	} else {
> -		needs_kick = !(vq->vring.used->flags & cpu_to_virtio16(_vq->vdev, VRING_USED_F_NO_NOTIFY));
> -	}
> -	END_USE(vq);
> -	return needs_kick;
> +	return vq->packed ? virtqueue_kick_prepare_packed(_vq) :
> +			    virtqueue_kick_prepare_split(_vq);
>  }
>  EXPORT_SYMBOL_GPL(virtqueue_kick_prepare);
>  
> @@ -625,58 +993,9 @@ bool virtqueue_kick(struct virtqueue *vq)
>  }
>  EXPORT_SYMBOL_GPL(virtqueue_kick);
>  
> -static void detach_buf(struct vring_virtqueue *vq, unsigned int head,
> -		       void **ctx)
> -{
> -	unsigned int i, j;
> -	__virtio16 nextflag = cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_NEXT);
> -
> -	/* Clear data ptr. */
> -	vq->desc_state[head].data = NULL;
> -
> -	/* Put back on free list: unmap first-level descriptors and find end */
> -	i = head;
> -
> -	while (vq->vring.desc[i].flags & nextflag) {
> -		vring_unmap_one(vq, &vq->vring.desc[i]);
> -		i = virtio16_to_cpu(vq->vq.vdev, vq->vring.desc[i].next);
> -		vq->vq.num_free++;
> -	}
> -
> -	vring_unmap_one(vq, &vq->vring.desc[i]);
> -	vq->vring.desc[i].next = cpu_to_virtio16(vq->vq.vdev, vq->free_head);
> -	vq->free_head = head;
> -
> -	/* Plus final descriptor */
> -	vq->vq.num_free++;
> -
> -	if (vq->indirect) {
> -		struct vring_desc *indir_desc = vq->desc_state[head].indir_desc;
> -		u32 len;
> -
> -		/* Free the indirect table, if any, now that it's unmapped. */
> -		if (!indir_desc)
> -			return;
> -
> -		len = virtio32_to_cpu(vq->vq.vdev, vq->vring.desc[head].len);
> -
> -		BUG_ON(!(vq->vring.desc[head].flags &
> -			 cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_INDIRECT)));
> -		BUG_ON(len == 0 || len % sizeof(struct vring_desc));
> -
> -		for (j = 0; j < len / sizeof(struct vring_desc); j++)
> -			vring_unmap_one(vq, &indir_desc[j]);
> -
> -		kfree(indir_desc);
> -		vq->desc_state[head].indir_desc = NULL;
> -	} else if (ctx) {
> -		*ctx = vq->desc_state[head].indir_desc;
> -	}
> -}
> -
>  static inline bool more_used(const struct vring_virtqueue *vq)
>  {
> -	return vq->last_used_idx != virtio16_to_cpu(vq->vq.vdev, vq->vring.used->idx);
> +	return vq->packed ? more_used_packed(vq) : more_used_split(vq);
>  }
>  
>  /**
> @@ -699,57 +1018,9 @@ void *virtqueue_get_buf_ctx(struct virtqueue *_vq, unsigned int *len,
>  			    void **ctx)
>  {
>  	struct vring_virtqueue *vq = to_vvq(_vq);
> -	void *ret;
> -	unsigned int i;
> -	u16 last_used;
>  
> -	START_USE(vq);
> -
> -	if (unlikely(vq->broken)) {
> -		END_USE(vq);
> -		return NULL;
> -	}
> -
> -	if (!more_used(vq)) {
> -		pr_debug("No more buffers in queue\n");
> -		END_USE(vq);
> -		return NULL;
> -	}
> -
> -	/* Only get used array entries after they have been exposed by host. */
> -	virtio_rmb(vq->weak_barriers);
> -
> -	last_used = (vq->last_used_idx & (vq->vring.num - 1));
> -	i = virtio32_to_cpu(_vq->vdev, vq->vring.used->ring[last_used].id);
> -	*len = virtio32_to_cpu(_vq->vdev, vq->vring.used->ring[last_used].len);
> -
> -	if (unlikely(i >= vq->vring.num)) {
> -		BAD_RING(vq, "id %u out of range\n", i);
> -		return NULL;
> -	}
> -	if (unlikely(!vq->desc_state[i].data)) {
> -		BAD_RING(vq, "id %u is not a head!\n", i);
> -		return NULL;
> -	}
> -
> -	/* detach_buf clears data, so grab it now. */
> -	ret = vq->desc_state[i].data;
> -	detach_buf(vq, i, ctx);
> -	vq->last_used_idx++;
> -	/* If we expect an interrupt for the next entry, tell host
> -	 * by writing event index and flush out the write before
> -	 * the read in the next get_buf call. */
> -	if (!(vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT))
> -		virtio_store_mb(vq->weak_barriers,
> -				&vring_used_event(&vq->vring),
> -				cpu_to_virtio16(_vq->vdev, vq->last_used_idx));
> -
> -#ifdef DEBUG
> -	vq->last_add_time_valid = false;
> -#endif
> -
> -	END_USE(vq);
> -	return ret;
> +	return vq->packed ? virtqueue_get_buf_ctx_packed(_vq, len, ctx) :
> +			    virtqueue_get_buf_ctx_split(_vq, len, ctx);
>  }
>  EXPORT_SYMBOL_GPL(virtqueue_get_buf_ctx);
>  
> @@ -771,12 +1042,10 @@ void virtqueue_disable_cb(struct virtqueue *_vq)
>  {
>  	struct vring_virtqueue *vq = to_vvq(_vq);
>  
> -	if (!(vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT)) {
> -		vq->avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT;
> -		if (!vq->event)
> -			vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);
> -	}
> -
> +	if (vq->packed)
> +		virtqueue_disable_cb_packed(_vq);
> +	else
> +		virtqueue_disable_cb_split(_vq);
>  }
>  EXPORT_SYMBOL_GPL(virtqueue_disable_cb);
>  
> @@ -795,23 +1064,9 @@ EXPORT_SYMBOL_GPL(virtqueue_disable_cb);
>  unsigned virtqueue_enable_cb_prepare(struct virtqueue *_vq)
>  {
>  	struct vring_virtqueue *vq = to_vvq(_vq);
> -	u16 last_used_idx;
>  
> -	START_USE(vq);
> -
> -	/* We optimistically turn back on interrupts, then check if there was
> -	 * more to do. */
> -	/* Depending on the VIRTIO_RING_F_EVENT_IDX feature, we need to
> -	 * either clear the flags bit or point the event index at the next
> -	 * entry. Always do both to keep code simple. */
> -	if (vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT) {
> -		vq->avail_flags_shadow &= ~VRING_AVAIL_F_NO_INTERRUPT;
> -		if (!vq->event)
> -			vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);
> -	}
> -	vring_used_event(&vq->vring) = cpu_to_virtio16(_vq->vdev, last_used_idx = vq->last_used_idx);
> -	END_USE(vq);
> -	return last_used_idx;
> +	return vq->packed ? virtqueue_enable_cb_prepare_packed(_vq) :
> +			    virtqueue_enable_cb_prepare_split(_vq);
>  }
>  EXPORT_SYMBOL_GPL(virtqueue_enable_cb_prepare);
>  
> @@ -828,8 +1083,8 @@ bool virtqueue_poll(struct virtqueue *_vq, unsigned last_used_idx)
>  {
>  	struct vring_virtqueue *vq = to_vvq(_vq);
>  
> -	virtio_mb(vq->weak_barriers);
> -	return (u16)last_used_idx != virtio16_to_cpu(_vq->vdev, vq->vring.used->idx);
> +	return vq->packed ? virtqueue_poll_packed(_vq, last_used_idx) :
> +			    virtqueue_poll_split(_vq, last_used_idx);
>  }
>  EXPORT_SYMBOL_GPL(virtqueue_poll);
>  
> @@ -867,34 +1122,9 @@ EXPORT_SYMBOL_GPL(virtqueue_enable_cb);
>  bool virtqueue_enable_cb_delayed(struct virtqueue *_vq)
>  {
>  	struct vring_virtqueue *vq = to_vvq(_vq);
> -	u16 bufs;
>  
> -	START_USE(vq);
> -
> -	/* We optimistically turn back on interrupts, then check if there was
> -	 * more to do. */
> -	/* Depending on the VIRTIO_RING_F_USED_EVENT_IDX feature, we need to
> -	 * either clear the flags bit or point the event index at the next
> -	 * entry. Always update the event index to keep code simple. */
> -	if (vq->avail_flags_shadow & VRING_AVAIL_F_NO_INTERRUPT) {
> -		vq->avail_flags_shadow &= ~VRING_AVAIL_F_NO_INTERRUPT;
> -		if (!vq->event)
> -			vq->vring.avail->flags = cpu_to_virtio16(_vq->vdev, vq->avail_flags_shadow);
> -	}
> -	/* TODO: tune this threshold */
> -	bufs = (u16)(vq->avail_idx_shadow - vq->last_used_idx) * 3 / 4;
> -
> -	virtio_store_mb(vq->weak_barriers,
> -			&vring_used_event(&vq->vring),
> -			cpu_to_virtio16(_vq->vdev, vq->last_used_idx + bufs));
> -
> -	if (unlikely((u16)(virtio16_to_cpu(_vq->vdev, vq->vring.used->idx) - vq->last_used_idx) > bufs)) {
> -		END_USE(vq);
> -		return false;
> -	}
> -
> -	END_USE(vq);
> -	return true;
> +	return vq->packed ? virtqueue_enable_cb_delayed_packed(_vq) :
> +			    virtqueue_enable_cb_delayed_split(_vq);
>  }
>  EXPORT_SYMBOL_GPL(virtqueue_enable_cb_delayed);
>  
> @@ -909,27 +1139,9 @@ EXPORT_SYMBOL_GPL(virtqueue_enable_cb_delayed);
>  void *virtqueue_detach_unused_buf(struct virtqueue *_vq)
>  {
>  	struct vring_virtqueue *vq = to_vvq(_vq);
> -	unsigned int i;
> -	void *buf;
>  
> -	START_USE(vq);
> -
> -	for (i = 0; i < vq->vring.num; i++) {
> -		if (!vq->desc_state[i].data)
> -			continue;
> -		/* detach_buf clears data, so grab it now. */
> -		buf = vq->desc_state[i].data;
> -		detach_buf(vq, i, NULL);
> -		vq->avail_idx_shadow--;
> -		vq->vring.avail->idx = cpu_to_virtio16(_vq->vdev, vq->avail_idx_shadow);
> -		END_USE(vq);
> -		return buf;
> -	}
> -	/* That should have freed everything. */
> -	BUG_ON(vq->vq.num_free != vq->vring.num);
> -
> -	END_USE(vq);
> -	return NULL;
> +	return vq->packed ? virtqueue_detach_unused_buf_packed(_vq) :
> +			    virtqueue_detach_unused_buf_split(_vq);
>  }
>  EXPORT_SYMBOL_GPL(virtqueue_detach_unused_buf);
>  
> @@ -954,7 +1166,8 @@ irqreturn_t vring_interrupt(int irq, void *_vq)
>  EXPORT_SYMBOL_GPL(vring_interrupt);
>  
>  struct virtqueue *__vring_new_virtqueue(unsigned int index,
> -					struct vring vring,
> +					union vring_union vring,
> +					bool packed,
>  					struct virtio_device *vdev,
>  					bool weak_barriers,
>  					bool context,
> @@ -962,19 +1175,22 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index,
>  					void (*callback)(struct virtqueue *),
>  					const char *name)
>  {
> -	unsigned int i;
>  	struct vring_virtqueue *vq;
> +	unsigned int num, i;
> +	size_t size;
>  
> -	vq = kmalloc(sizeof(*vq) + vring.num * sizeof(struct vring_desc_state),
> -		     GFP_KERNEL);
> +	num = packed ? vring.vring_packed.num : vring.vring_split.num;
> +	size = packed ? num * sizeof(struct vring_desc_state_packed) :
> +			num * sizeof(struct vring_desc_state);
> +
> +	vq = kmalloc(sizeof(*vq) + size, GFP_KERNEL);
>  	if (!vq)
>  		return NULL;
>  
> -	vq->vring = vring;
>  	vq->vq.callback = callback;
>  	vq->vq.vdev = vdev;
>  	vq->vq.name = name;
> -	vq->vq.num_free = vring.num;
> +	vq->vq.num_free = num;
>  	vq->vq.index = index;
>  	vq->we_own_ring = false;
>  	vq->queue_dma_addr = 0;
> @@ -983,9 +1199,8 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index,
>  	vq->weak_barriers = weak_barriers;
>  	vq->broken = false;
>  	vq->last_used_idx = 0;
> -	vq->avail_flags_shadow = 0;
> -	vq->avail_idx_shadow = 0;
>  	vq->num_added = 0;
> +	vq->packed = packed;
>  	list_add_tail(&vq->vq.list, &vdev->vqs);
>  #ifdef DEBUG
>  	vq->in_use = false;
> @@ -996,19 +1211,48 @@ struct virtqueue *__vring_new_virtqueue(unsigned int index,
>  		!context;
>  	vq->event = virtio_has_feature(vdev, VIRTIO_RING_F_EVENT_IDX);
>  
> +	if (vq->packed) {
> +		vq->vring_packed = vring.vring_packed;
> +		vq->next_avail_idx = 0;
> +		vq->avail_wrap_counter = 1;
> +		vq->used_wrap_counter = 1;
> +		vq->event_flags_shadow = 0;
> +
> +		memset(vq->desc_state_packed, 0,
> +			num * sizeof(struct vring_desc_state_packed));
> +
> +		/* Put everything in free lists. */
> +		vq->free_head = 0;
> +		for (i = 0; i < num-1; i++)
> +			vq->desc_state_packed[i].next = i + 1;
> +	} else {
> +		vq->vring = vring.vring_split;
> +		vq->avail_flags_shadow = 0;
> +		vq->avail_idx_shadow = 0;
> +
> +		/* Put everything in free lists. */
> +		vq->free_head = 0;
> +		for (i = 0; i < num-1; i++)
> +			vq->vring.desc[i].next = cpu_to_virtio16(vdev, i + 1);
> +
> +		memset(vq->desc_state, 0,
> +			num * sizeof(struct vring_desc_state));
> +	}
> +
>  	/* No callback?  Tell other side not to bother us. */
>  	if (!callback) {
> -		vq->avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT;
> -		if (!vq->event)
> -			vq->vring.avail->flags = cpu_to_virtio16(vdev, vq->avail_flags_shadow);
> +		if (packed) {
> +			vq->event_flags_shadow = VRING_EVENT_F_DISABLE;
> +			vq->vring_packed.driver->flags = cpu_to_virtio16(vdev,
> +						vq->event_flags_shadow);
> +		} else {
> +			vq->avail_flags_shadow |= VRING_AVAIL_F_NO_INTERRUPT;
> +			if (!vq->event)
> +				vq->vring.avail->flags = cpu_to_virtio16(vdev,
> +						vq->avail_flags_shadow);
> +		}
>  	}
>  
> -	/* Put everything in free lists. */
> -	vq->free_head = 0;
> -	for (i = 0; i < vring.num-1; i++)
> -		vq->vring.desc[i].next = cpu_to_virtio16(vdev, i + 1);
> -	memset(vq->desc_state, 0, vring.num * sizeof(struct vring_desc_state));
> -
>  	return &vq->vq;
>  }
>  EXPORT_SYMBOL_GPL(__vring_new_virtqueue);
> @@ -1055,6 +1299,12 @@ static void vring_free_queue(struct virtio_device *vdev, size_t size,
>  	}
>  }
>  
> +static inline int
> +__vring_size(unsigned int num, unsigned long align, bool packed)
> +{
> +	return packed ? vring_size_packed(num, align) : vring_size(num, align);
> +}
> +
>  struct virtqueue *vring_create_virtqueue(
>  	unsigned int index,
>  	unsigned int num,
> @@ -1071,7 +1321,8 @@ struct virtqueue *vring_create_virtqueue(
>  	void *queue = NULL;
>  	dma_addr_t dma_addr;
>  	size_t queue_size_in_bytes;
> -	struct vring vring;
> +	union vring_union vring;
> +	bool packed;
>  
>  	/* We assume num is a power of 2. */
>  	if (num & (num - 1)) {
> @@ -1079,9 +1330,13 @@ struct virtqueue *vring_create_virtqueue(
>  		return NULL;
>  	}
>  
> +	packed = virtio_has_feature(vdev, VIRTIO_F_RING_PACKED);
> +
>  	/* TODO: allocate each queue chunk individually */
> -	for (; num && vring_size(num, vring_align) > PAGE_SIZE; num /= 2) {
> -		queue = vring_alloc_queue(vdev, vring_size(num, vring_align),
> +	for (; num && __vring_size(num, vring_align, packed) > PAGE_SIZE;
> +			num /= 2) {
> +		queue = vring_alloc_queue(vdev, __vring_size(num, vring_align,
> +							     packed),
>  					  &dma_addr,
>  					  GFP_KERNEL|__GFP_NOWARN|__GFP_ZERO);
>  		if (queue)
> @@ -1093,17 +1348,21 @@ struct virtqueue *vring_create_virtqueue(
>  
>  	if (!queue) {
>  		/* Try to get a single page. You are my only hope! */
> -		queue = vring_alloc_queue(vdev, vring_size(num, vring_align),
> +		queue = vring_alloc_queue(vdev, __vring_size(num, vring_align,
> +							     packed),
>  					  &dma_addr, GFP_KERNEL|__GFP_ZERO);
>  	}
>  	if (!queue)
>  		return NULL;
>  
> -	queue_size_in_bytes = vring_size(num, vring_align);
> -	vring_init(&vring, num, queue, vring_align);
> +	queue_size_in_bytes = __vring_size(num, vring_align, packed);
> +	if (packed)
> +		vring_init_packed(&vring.vring_packed, num, queue, vring_align);
> +	else
> +		vring_init(&vring.vring_split, num, queue, vring_align);
>  
> -	vq = __vring_new_virtqueue(index, vring, vdev, weak_barriers, context,
> -				   notify, callback, name);
> +	vq = __vring_new_virtqueue(index, vring, packed, vdev, weak_barriers,
> +				   context, notify, callback, name);
>  	if (!vq) {
>  		vring_free_queue(vdev, queue_size_in_bytes, queue,
>  				 dma_addr);
> @@ -1129,10 +1388,17 @@ struct virtqueue *vring_new_virtqueue(unsigned int index,
>  				      void (*callback)(struct virtqueue *vq),
>  				      const char *name)
>  {
> -	struct vring vring;
> -	vring_init(&vring, num, pages, vring_align);
> -	return __vring_new_virtqueue(index, vring, vdev, weak_barriers, context,
> -				     notify, callback, name);
> +	union vring_union vring;
> +	bool packed;
> +
> +	packed = virtio_has_feature(vdev, VIRTIO_F_RING_PACKED);
> +	if (packed)
> +		vring_init_packed(&vring.vring_packed, num, pages, vring_align);
> +	else
> +		vring_init(&vring.vring_split, num, pages, vring_align);


vring_init in the UAPI header is more or less a bug.
I'd just stop using it, keep it around for legacy userspace.
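
E.g. a driver-local copy (untested sketch), leaving the uapi helper
alone for legacy userspace:

static inline void vring_init_split(struct vring *vr, unsigned int num,
				    void *p, unsigned long align)
{
	/* Same layout math as the uapi vring_init(). */
	vr->num = num;
	vr->desc = p;
	vr->avail = (void *)((char *)p + num * sizeof(struct vring_desc));
	vr->used = (void *)ALIGN((uintptr_t)&vr->avail->ring[num] +
				 sizeof(__virtio16), align);
}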

> +
> +	return __vring_new_virtqueue(index, vring, packed, vdev, weak_barriers,
> +				     context, notify, callback, name);
>  }
>  EXPORT_SYMBOL_GPL(vring_new_virtqueue);
>  
> @@ -1142,7 +1408,9 @@ void vring_del_virtqueue(struct virtqueue *_vq)
>  
>  	if (vq->we_own_ring) {
>  		vring_free_queue(vq->vq.vdev, vq->queue_size_in_bytes,
> -				 vq->vring.desc, vq->queue_dma_addr);
> +				 vq->packed ? (void *)vq->vring_packed.desc :
> +					      (void *)vq->vring.desc,
> +				 vq->queue_dma_addr);
>  	}
>  	list_del(&_vq->list);
>  	kfree(vq);
> @@ -1184,7 +1452,7 @@ unsigned int virtqueue_get_vring_size(struct virtqueue *_vq)
>  
>  	struct vring_virtqueue *vq = to_vvq(_vq);
>  
> -	return vq->vring.num;
> +	return vq->packed ? vq->vring_packed.num : vq->vring.num;
>  }
>  EXPORT_SYMBOL_GPL(virtqueue_get_vring_size);
>  
> @@ -1227,6 +1495,10 @@ dma_addr_t virtqueue_get_avail_addr(struct virtqueue *_vq)
>  
>  	BUG_ON(!vq->we_own_ring);
>  
> +	if (vq->packed)
> +		return vq->queue_dma_addr + ((char *)vq->vring_packed.driver -
> +				(char *)vq->vring_packed.desc);
> +
>  	return vq->queue_dma_addr +
>  		((char *)vq->vring.avail - (char *)vq->vring.desc);
>  }
> @@ -1238,11 +1510,16 @@ dma_addr_t virtqueue_get_used_addr(struct virtqueue *_vq)
>  
>  	BUG_ON(!vq->we_own_ring);
>  
> +	if (vq->packed)
> +		return vq->queue_dma_addr + ((char *)vq->vring_packed.device -
> +				(char *)vq->vring_packed.desc);
> +
>  	return vq->queue_dma_addr +
>  		((char *)vq->vring.used - (char *)vq->vring.desc);
>  }
>  EXPORT_SYMBOL_GPL(virtqueue_get_used_addr);
>  
> +/* Only available for split ring */
>  const struct vring *virtqueue_get_vring(struct virtqueue *vq)
>  {
>  	return &to_vvq(vq)->vring;
> diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h
> index fab02133a919..992142b35f55 100644
> --- a/include/linux/virtio_ring.h
> +++ b/include/linux/virtio_ring.h
> @@ -60,6 +60,11 @@ static inline void virtio_store_mb(bool weak_barriers,
>  struct virtio_device;
>  struct virtqueue;
>  
> +union vring_union {
> +	struct vring vring_split;
> +	struct vring_packed vring_packed;
> +};
> +
>  /*
>   * Creates a virtqueue and allocates the descriptor ring.  If
>   * may_reduce_num is set, then this may allocate a smaller ring than
> @@ -79,7 +84,8 @@ struct virtqueue *vring_create_virtqueue(unsigned int index,
>  
>  /* Creates a virtqueue with a custom layout. */
>  struct virtqueue *__vring_new_virtqueue(unsigned int index,
> -					struct vring vring,
> +					union vring_union vring,
> +					bool packed,
>  					struct virtio_device *vdev,
>  					bool weak_barriers,
>  					bool ctx,
> -- 
> 2.18.0

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 4/5] virtio_ring: add event idx support in packed ring
  2018-07-11  2:27 ` [PATCH net-next v2 4/5] virtio_ring: add event idx support in packed ring Tiwei Bie
@ 2018-09-07 14:10   ` Michael S. Tsirkin
  2018-09-10  2:35     ` [virtio-dev] " Tiwei Bie
  0 siblings, 1 reply; 53+ messages in thread
From: Michael S. Tsirkin @ 2018-09-07 14:10 UTC (permalink / raw)
  To: Tiwei Bie
  Cc: jasowang, virtualization, linux-kernel, netdev, virtio-dev, wexu,
	jfreimann

On Wed, Jul 11, 2018 at 10:27:10AM +0800, Tiwei Bie wrote:
> This commit introduces the EVENT_IDX support in the packed ring.
> 
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>

Besides the usual comment about hard-coded constants like <<15:
does this actually do any good for performance? We don't
have to support EVENT_IDX for the packed ring if we don't want to.

> ---
>  drivers/virtio/virtio_ring.c | 73 ++++++++++++++++++++++++++++++++----
>  1 file changed, 65 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index f317b485ba54..f79a1e17f7d1 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -1050,7 +1050,7 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
>  static bool virtqueue_kick_prepare_packed(struct virtqueue *_vq)
>  {
>  	struct vring_virtqueue *vq = to_vvq(_vq);
> -	u16 flags;
> +	u16 new, old, off_wrap, flags, wrap_counter, event_idx;
>  	bool needs_kick;
>  	u32 snapshot;
>  
> @@ -1059,9 +1059,19 @@ static bool virtqueue_kick_prepare_packed(struct virtqueue *_vq)
>  	 * suppressions. */
>  	virtio_mb(vq->weak_barriers);
>  
> +	old = vq->next_avail_idx - vq->num_added;
> +	new = vq->next_avail_idx;
> +	vq->num_added = 0;
> +
>  	snapshot = READ_ONCE(*(u32 *)vq->vring_packed.device);
> +	off_wrap = virtio16_to_cpu(_vq->vdev, (__virtio16)(snapshot & 0xffff));
>  	flags = virtio16_to_cpu(_vq->vdev, (__virtio16)(snapshot >> 16)) & 0x3;


Some kind of struct/union would be helpful to make this readable.
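
E.g. (sketch; assumes the fields become __le16 as suggested for
patch 3):

	union {
		struct vring_packed_desc_event event;
		u32 raw;	/* single READ_ONCE() of both fields */
	} snapshot;

	snapshot.raw = READ_ONCE(*(u32 *)vq->vring_packed.device);
	off_wrap = le16_to_cpu(snapshot.event.off_wrap);
	flags = le16_to_cpu(snapshot.event.flags) & 0x3;

which also avoids open-coding the byte order in the shifts.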

>  
> +	wrap_counter = off_wrap >> 15;
> +	event_idx = off_wrap & ~(1 << 15);
> +	if (wrap_counter != vq->avail_wrap_counter)
> +		event_idx -= vq->vring_packed.num;
> +
>  #ifdef DEBUG
>  	if (vq->last_add_time_valid) {
>  		WARN_ON(ktime_to_ms(ktime_sub(ktime_get(),
> @@ -1070,7 +1080,10 @@ static bool virtqueue_kick_prepare_packed(struct virtqueue *_vq)
>  	vq->last_add_time_valid = false;
>  #endif
>  
> -	needs_kick = (flags != VRING_EVENT_F_DISABLE);
> +	if (flags == VRING_EVENT_F_DESC)
> +		needs_kick = vring_need_event(event_idx, new, old);
> +	else
> +		needs_kick = (flags != VRING_EVENT_F_DISABLE);
>  	END_USE(vq);
>  	return needs_kick;
>  }
> @@ -1185,6 +1198,15 @@ static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq,
>  	ret = vq->desc_state_packed[id].data;
>  	detach_buf_packed(vq, id, ctx);
>  
> +	/* If we expect an interrupt for the next entry, tell host
> +	 * by writing event index and flush out the write before
> +	 * the read in the next get_buf call. */
> +	if (vq->event_flags_shadow == VRING_EVENT_F_DESC)
> +		virtio_store_mb(vq->weak_barriers,
> +				&vq->vring_packed.driver->off_wrap,
> +				cpu_to_virtio16(_vq->vdev, vq->last_used_idx |
> +					((u16)vq->used_wrap_counter << 15)));
> +
>  #ifdef DEBUG
>  	vq->last_add_time_valid = false;
>  #endif
> @@ -1213,8 +1235,18 @@ static unsigned virtqueue_enable_cb_prepare_packed(struct virtqueue *_vq)
>  	/* We optimistically turn back on interrupts, then check if there was
>  	 * more to do. */
>  
> +	if (vq->event) {
> +		vq->vring_packed.driver->off_wrap = cpu_to_virtio16(_vq->vdev,
> +				vq->last_used_idx |
> +				((u16)vq->used_wrap_counter << 15));
> +		/* We need to update event offset and event wrap
> +		 * counter first before updating event flags. */
> +		virtio_wmb(vq->weak_barriers);
> +	}
> +
>  	if (vq->event_flags_shadow == VRING_EVENT_F_DISABLE) {
> -		vq->event_flags_shadow = VRING_EVENT_F_ENABLE;
> +		vq->event_flags_shadow = vq->event ? VRING_EVENT_F_DESC :
> +						     VRING_EVENT_F_ENABLE;
>  		vq->vring_packed.driver->flags = cpu_to_virtio16(_vq->vdev,
>  							vq->event_flags_shadow);
>  	}
> @@ -1238,22 +1270,47 @@ static bool virtqueue_poll_packed(struct virtqueue *_vq, unsigned off_wrap)
>  static bool virtqueue_enable_cb_delayed_packed(struct virtqueue *_vq)
>  {
>  	struct vring_virtqueue *vq = to_vvq(_vq);
> +	u16 bufs, used_idx, wrap_counter;
>  
>  	START_USE(vq);
>  
>  	/* We optimistically turn back on interrupts, then check if there was
>  	 * more to do. */
>  
> +	if (vq->event) {
> +		/* TODO: tune this threshold */
> +		bufs = (vq->vring_packed.num - _vq->num_free) * 3 / 4;
> +		wrap_counter = vq->used_wrap_counter;
> +
> +		used_idx = vq->last_used_idx + bufs;
> +		if (used_idx >= vq->vring_packed.num) {
> +			used_idx -= vq->vring_packed.num;
> +			wrap_counter ^= 1;
> +		}
> +
> +		vq->vring_packed.driver->off_wrap = cpu_to_virtio16(_vq->vdev,
> +				used_idx | (wrap_counter << 15));
> +
> +		/* We need to update event offset and event wrap
> +		 * counter first before updating event flags. */
> +		virtio_wmb(vq->weak_barriers);
> +	} else {
> +		used_idx = vq->last_used_idx;
> +		wrap_counter = vq->used_wrap_counter;
> +	}
> +
>  	if (vq->event_flags_shadow == VRING_EVENT_F_DISABLE) {
> -		vq->event_flags_shadow = VRING_EVENT_F_ENABLE;
> +		vq->event_flags_shadow = vq->event ? VRING_EVENT_F_DESC :
> +						     VRING_EVENT_F_ENABLE;
>  		vq->vring_packed.driver->flags = cpu_to_virtio16(_vq->vdev,
>  							vq->event_flags_shadow);
> -		/* We need to enable interrupts first before re-checking
> -		 * for more used buffers. */
> -		virtio_mb(vq->weak_barriers);
>  	}
>  
> -	if (more_used_packed(vq)) {
> +	/* We need to update event suppression structure first
> +	 * before re-checking for more used buffers. */
> +	virtio_mb(vq->weak_barriers);
> +

mb is expensive. We should not do it if we changed nothing.
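
E.g. (sketch):

	bool changed = false;

	if (vq->event) {
		/* ... write driver->off_wrap + virtio_wmb() ... */
		changed = true;
	}
	if (vq->event_flags_shadow == VRING_EVENT_F_DISABLE) {
		/* ... write driver->flags ... */
		changed = true;
	}
	/* Only order the suppression updates against the re-check
	 * when something was actually written. */
	if (changed)
		virtio_mb(vq->weak_barriers);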

> +	if (is_used_desc_packed(vq, used_idx, wrap_counter)) {
>  		END_USE(vq);
>  		return false;
>  	}
> -- 
> 2.18.0

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 3/5] virtio_ring: add packed ring support
  2018-09-07 13:49   ` Michael S. Tsirkin
@ 2018-09-10  2:03     ` Tiwei Bie
  0 siblings, 0 replies; 53+ messages in thread
From: Tiwei Bie @ 2018-09-10  2:03 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: jasowang, virtualization, linux-kernel, netdev, virtio-dev, wexu,
	jfreimann

On Fri, Sep 07, 2018 at 09:49:14AM -0400, Michael S. Tsirkin wrote:
> On Wed, Jul 11, 2018 at 10:27:09AM +0800, Tiwei Bie wrote:
> > This commit introduces support (without EVENT_IDX) for the
> > packed ring.
> > 
> > Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> > ---
> >  drivers/virtio/virtio_ring.c | 495 ++++++++++++++++++++++++++++++++++-
> >  1 file changed, 487 insertions(+), 8 deletions(-)
> > 
> > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > index c4f8abc7445a..f317b485ba54 100644
> > --- a/drivers/virtio/virtio_ring.c
> > +++ b/drivers/virtio/virtio_ring.c
> > @@ -55,12 +55,21 @@
> >  #define END_USE(vq)
> >  #endif
> >  
> > +#define _VRING_DESC_F_AVAIL(b)	((__u16)(b) << 7)
> > +#define _VRING_DESC_F_USED(b)	((__u16)(b) << 15)
> 
> Underscore followed by an upper case letter is reserved for the
> implementation in C, so it isn't a good idea for a symbol. And it's
> not nice that this does not use VRING_DESC_F_USED from the header.
> How about ((b) ? VRING_DESC_F_USED : 0) instead?
> Is the produced code worse then?

Yes, I think the produced code is worse when we use a
conditional expression. Below is a simple test:

#define foo1(b) ((b) << 10)
#define foo2(b) ((b) ? (1 << 10) : 0)

unsigned short bar(unsigned short b)
{
	return foo1(b);
}

unsigned short baz(unsigned short b)
{
	return foo2(b);
}

With `gcc -O3 -S`, I got the following assembly code:

	.file	"tmp.c"
	.text
	.p2align 4,,15
	.globl	bar
	.type	bar, @function
bar:
.LFB0:
	.cfi_startproc
	movl	%edi, %eax
	sall	$10, %eax
	ret
	.cfi_endproc
.LFE0:
	.size	bar, .-bar
	.p2align 4,,15
	.globl	baz
	.type	baz, @function
baz:
.LFB1:
	.cfi_startproc
	xorl	%eax, %eax
	testw	%di, %di
	setne	%al
	sall	$10, %eax
	ret
	.cfi_endproc
.LFE1:
	.size	baz, .-baz
	.ident	"GCC: (Debian 8.2.0-4) 8.2.0"
	.section	.note.GNU-stack,"",@progbits

That is to say, for `((b) << 10)`, it will shift the
register directly. But for `((b) ? (1 << 10) : 0)`,
in the above code, it will zero eax first, set al
to 1 depending on whether di is 0, and then shift eax.

> 
> > +
> >  struct vring_desc_state {
> >  	void *data;			/* Data for callback. */
> >  	struct vring_desc *indir_desc;	/* Indirect descriptor, if any. */
> >  };
> >  
> >  struct vring_desc_state_packed {
> > +	void *data;			/* Data for callback. */
> > +	struct vring_packed_desc *indir_desc; /* Indirect descriptor, if any. */
> 
> Include vring_desc_state for these?

Sure.

> 
> > +	int num;			/* Descriptor list length. */
> > +	dma_addr_t addr;		/* Buffer DMA addr. */
> > +	u32 len;			/* Buffer length. */
> > +	u16 flags;			/* Descriptor flags. */
> 
> This seems only to be used for indirect. Check indirect field above
> instead and drop this.

The `flags` field is also needed to know the direction, i.e.
DMA_FROM_DEVICE or DMA_TO_DEVICE, when doing the DMA unmap.

> 
> >  	int next;			/* The next desc state. */
> 
> Packing things into 16 bit integers will help reduce
> cache pressure here.
> 
> Also, this is only used for unmap, so when dma API is not used,
> maintaining addr and len list management is unnecessary. How about we
> maintain addr/len in a separate array, avoiding accesses on unmap in that case?

Sure. I'll give it a try.
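
E.g. (an untested sketch; the _extra name is made up):

struct vring_desc_state_packed {
	void *data;			/* Data for callback. */
	struct vring_packed_desc *indir_desc; /* Indirect descriptor, if any. */
	u16 num;			/* Descriptor list length. */
	u16 next;			/* The next desc state. */
};

/* Kept in a separate array, allocated only when
 * vring_use_dma_api() returns true: */
struct vring_desc_extra_packed {
	dma_addr_t addr;		/* Buffer DMA addr. */
	u32 len;			/* Buffer length. */
	u16 flags;			/* Descriptor flags. */
};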

> 
> Also, lots of architectures have a nop unmap in the DMA API.
> See a proposed patch at the end for optimizing that case.

Got it. Thanks.

> 
> >  };
> >  
> > @@ -660,7 +669,6 @@ static bool virtqueue_poll_split(struct virtqueue *_vq, unsigned last_used_idx)
> >  {
> >  	struct vring_virtqueue *vq = to_vvq(_vq);
> >  
> > -	virtio_mb(vq->weak_barriers);
> >  	return (u16)last_used_idx != virtio16_to_cpu(_vq->vdev, vq->vring.used->idx);
> >  }
> 
> why is this changing the split queue implementation?

Because the above barrier is also needed by virtqueue_poll_packed(),
I moved it to virtqueue_poll() and added some comments for it.
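
The wrapper then looks roughly like this:

bool virtqueue_poll(struct virtqueue *_vq, unsigned last_used_idx)
{
	struct vring_virtqueue *vq = to_vvq(_vq);

	/* Needed by both ring layouts before reading the used
	 * index (split) or the descriptor flags (packed). */
	virtio_mb(vq->weak_barriers);
	return vq->packed ? virtqueue_poll_packed(_vq, last_used_idx) :
			    virtqueue_poll_split(_vq, last_used_idx);
}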

> 
> 
> >  
> > @@ -757,6 +765,72 @@ static inline unsigned vring_size_packed(unsigned int num, unsigned long align)
> >  		& ~(align - 1)) + sizeof(struct vring_packed_desc_event) * 2;
> >  }
> >  
> > +static void vring_unmap_state_packed(const struct vring_virtqueue *vq,
> > +				     struct vring_desc_state_packed *state)
> > +{
> > +	u16 flags;
> > +
> > +	if (!vring_use_dma_api(vq->vq.vdev))
> > +		return;
> > +
> > +	flags = state->flags;
> > +
> > +	if (flags & VRING_DESC_F_INDIRECT) {
> > +		dma_unmap_single(vring_dma_dev(vq),
> > +				 state->addr, state->len,
> > +				 (flags & VRING_DESC_F_WRITE) ?
> > +				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
> > +	} else {
> > +		dma_unmap_page(vring_dma_dev(vq),
> > +			       state->addr, state->len,
> > +			       (flags & VRING_DESC_F_WRITE) ?
> > +			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
> > +	}
> > +}
> > +
> > +static void vring_unmap_desc_packed(const struct vring_virtqueue *vq,
> > +				   struct vring_packed_desc *desc)
> > +{
> > +	u16 flags;
> > +
> > +	if (!vring_use_dma_api(vq->vq.vdev))
> > +		return;
> > +
> > +	flags = virtio16_to_cpu(vq->vq.vdev, desc->flags);
> 
> I see no reason to use virtioXX wrappers for the packed ring.
> That's there to support legacy devices. Just use leXX.

Okay.

> 
> > +
> > +	if (flags & VRING_DESC_F_INDIRECT) {
> > +		dma_unmap_single(vring_dma_dev(vq),
> > +				 virtio64_to_cpu(vq->vq.vdev, desc->addr),
> > +				 virtio32_to_cpu(vq->vq.vdev, desc->len),
> > +				 (flags & VRING_DESC_F_WRITE) ?
> > +				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
> > +	} else {
> > +		dma_unmap_page(vring_dma_dev(vq),
> > +			       virtio64_to_cpu(vq->vq.vdev, desc->addr),
> > +			       virtio32_to_cpu(vq->vq.vdev, desc->len),
> > +			       (flags & VRING_DESC_F_WRITE) ?
> > +			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
> > +	}
> > +}
> > +
> > +static struct vring_packed_desc *alloc_indirect_packed(struct virtqueue *_vq,
> > +						       unsigned int total_sg,
> > +						       gfp_t gfp)
> > +{
> > +	struct vring_packed_desc *desc;
> > +
> > +	/*
> > +	 * We require lowmem mappings for the descriptors because
> > +	 * otherwise virt_to_phys will give us bogus addresses in the
> > +	 * virtqueue.
> 
> Where is virt_to_phys used? I don't see it in this patch.

In vring_map_single(), virt_to_phys() will be used to translate
the address to a physical address if the DMA API isn't used:

https://github.com/torvalds/linux/blob/a49a9dcce802/drivers/virtio/virtio_ring.c#L197

> 
> > +	 */
> > +	gfp &= ~__GFP_HIGHMEM;
> > +
> > +	desc = kmalloc(total_sg * sizeof(struct vring_packed_desc), gfp);
> > +
> > +	return desc;
> > +}
> > +
> >  static inline int virtqueue_add_packed(struct virtqueue *_vq,
> >  				       struct scatterlist *sgs[],
> >  				       unsigned int total_sg,
> > @@ -766,47 +840,449 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
> >  				       void *ctx,
> >  				       gfp_t gfp)
> >  {
> > +	struct vring_virtqueue *vq = to_vvq(_vq);
> > +	struct vring_packed_desc *desc;
> > +	struct scatterlist *sg;
> > +	unsigned int i, n, descs_used, uninitialized_var(prev), err_idx;
> > +	__virtio16 uninitialized_var(head_flags), flags;
> > +	u16 head, avail_wrap_counter, id, curr;
> > +	bool indirect;
> > +
> > +	START_USE(vq);
> > +
> > +	BUG_ON(data == NULL);
> > +	BUG_ON(ctx && vq->indirect);
> > +
> > +	if (unlikely(vq->broken)) {
> > +		END_USE(vq);
> > +		return -EIO;
> > +	}
> > +
> > +#ifdef DEBUG
> > +	{
> > +		ktime_t now = ktime_get();
> > +
> > +		/* No kick or get, with .1 second between?  Warn. */
> > +		if (vq->last_add_time_valid)
> > +			WARN_ON(ktime_to_ms(ktime_sub(now, vq->last_add_time))
> > +					    > 100);
> > +		vq->last_add_time = now;
> > +		vq->last_add_time_valid = true;
> > +	}
> > +#endif
> 
> Add inline helpers for this debug stuff rather than
> duplicate it from split ring?

Sure. I'd like to do that.
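
Something like this for the kick path (sketch; the helper name is
made up):

static inline void virtqueue_check_kick_delay(struct vring_virtqueue *vq)
{
#ifdef DEBUG
	/* No kick or get, with .1 second between?  Warn. */
	if (vq->last_add_time_valid)
		WARN_ON(ktime_to_ms(ktime_sub(ktime_get(),
					      vq->last_add_time)) > 100);
	vq->last_add_time_valid = false;
#endif
}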

> 
> 
> > +
> > +	BUG_ON(total_sg == 0);
> > +
> > +	head = vq->next_avail_idx;
> > +	avail_wrap_counter = vq->avail_wrap_counter;
> > +
> > +	if (virtqueue_use_indirect(_vq, total_sg))
> > +		desc = alloc_indirect_packed(_vq, total_sg, gfp);
> > +	else {
> > +		desc = NULL;
> > +		WARN_ON_ONCE(total_sg > vq->vring_packed.num && !vq->indirect);
> > +	}
> 
> 
> Apparently this attempts to treat indirect descriptors the same as
> direct ones. This is how it is in the split ring, but not in
> the packed one. I think you want two separate functions, for
> direct and indirect.

Okay, I'll do that.

> 
> > +
> > +	if (desc) {
> > +		/* Use a single buffer which doesn't continue */
> > +		indirect = true;
> > +		/* Set up rest to use this indirect table. */
> > +		i = 0;
> > +		descs_used = 1;
> > +	} else {
> > +		indirect = false;
> > +		desc = vq->vring_packed.desc;
> > +		i = head;
> > +		descs_used = total_sg;
> > +	}
> > +
> > +	if (vq->vq.num_free < descs_used) {
> > +		pr_debug("Can't add buf len %i - avail = %i\n",
> > +			 descs_used, vq->vq.num_free);
> > +		/* FIXME: for historical reasons, we force a notify here if
> > +		 * there are outgoing parts to the buffer.  Presumably the
> > +		 * host should service the ring ASAP. */
> > +		if (out_sgs)
> > +			vq->notify(&vq->vq);
> > +		if (indirect)
> > +			kfree(desc);
> > +		END_USE(vq);
> > +		return -ENOSPC;
> > +	}
> > +
> > +	id = vq->free_head;
> > +	BUG_ON(id == vq->vring_packed.num);
> > +
> > +	curr = id;
> > +	for (n = 0; n < out_sgs + in_sgs; n++) {
> > +		for (sg = sgs[n]; sg; sg = sg_next(sg)) {
> > +			dma_addr_t addr = vring_map_one_sg(vq, sg, n < out_sgs ?
> > +					       DMA_TO_DEVICE : DMA_FROM_DEVICE);
> > +			if (vring_mapping_error(vq, addr))
> > +				goto unmap_release;
> > +
> > +			flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_NEXT |
> > +				  (n < out_sgs ? 0 : VRING_DESC_F_WRITE) |
> > +				  _VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
> > +				  _VRING_DESC_F_USED(!vq->avail_wrap_counter));
> 
> Spec says:
> 	The VIRTQ_DESC_F_WRITE flags bit is the only valid flag for descriptors in the
> 	indirect table.
> 
> All this logic isn't needed for indirect.
> 
> Also, why re-calculate avail/used flags every time? They only change
> when you wrap around.

Will do that. Thanks.
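
I.e. keep a shadow of the avail/used bits and only flip it on wrap,
something like:

	u16 avail_used_flags = vq->avail_wrap_counter ?
			VRING_DESC_F_AVAIL : VRING_DESC_F_USED;
	...
	if (++i >= vq->vring_packed.num) {
		i = 0;
		/* Both bits flip together when the ring wraps. */
		avail_used_flags ^= VRING_DESC_F_AVAIL | VRING_DESC_F_USED;
	}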

> 
> 
> > +			if (!indirect && i == head)
> > +				head_flags = flags;
> > +			else
> > +				desc[i].flags = flags;
> > +
> > +			desc[i].addr = cpu_to_virtio64(_vq->vdev, addr);
> > +			desc[i].len = cpu_to_virtio32(_vq->vdev, sg->length);
> > +			i++;
> > +			if (!indirect) {
> > +				if (vring_use_dma_api(_vq->vdev)) {
> > +					vq->desc_state_packed[curr].addr = addr;
> > +					vq->desc_state_packed[curr].len =
> > +						sg->length;
> > +					vq->desc_state_packed[curr].flags =
> > +						virtio16_to_cpu(_vq->vdev,
> > +								flags);
> > +				}
> > +				curr = vq->desc_state_packed[curr].next;
> > +
> > +				if (i >= vq->vring_packed.num) {
> > +					i = 0;
> > +					vq->avail_wrap_counter ^= 1;
> > +				}
> > +			}
> > +		}
> > +	}
> > +
> > +	prev = (i > 0 ? i : vq->vring_packed.num) - 1;
> > +	desc[prev].id = cpu_to_virtio16(_vq->vdev, id);
> 
> Is it easier to write this out in all descriptors, to avoid the need to
> calculate prev?

Yeah, I'll do that.

> 
> > +
> > +	/* Last one doesn't continue. */
> > +	if (total_sg == 1)
> > +		head_flags &= cpu_to_virtio16(_vq->vdev, ~VRING_DESC_F_NEXT);
> > +	else
> > +		desc[prev].flags &= cpu_to_virtio16(_vq->vdev,
> > +						~VRING_DESC_F_NEXT);
> 
> Wouldn't it be easier to avoid setting VRING_DESC_F_NEXT
> in the first place?
> 	 if (n != in_sgs - 1 && n != out_sgs - 1)

Will this affect the branch prediction?

> 
> must be better than writing the descriptor an extra time.

Not quite sure about this. I think this descriptor has just
been written, so it should still be in the cache.

> 
> > +
> > +	if (indirect) {
> > +		/* Now that the indirect table is filled in, map it. */
> > +		dma_addr_t addr = vring_map_single(
> > +			vq, desc, total_sg * sizeof(struct vring_packed_desc),
> > +			DMA_TO_DEVICE);
> > +		if (vring_mapping_error(vq, addr))
> > +			goto unmap_release;
> > +
> > +		head_flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_INDIRECT |
> > +				      _VRING_DESC_F_AVAIL(avail_wrap_counter) |
> > +				      _VRING_DESC_F_USED(!avail_wrap_counter));
> > +		vq->vring_packed.desc[head].addr = cpu_to_virtio64(_vq->vdev,
> > +								   addr);
> > +		vq->vring_packed.desc[head].len = cpu_to_virtio32(_vq->vdev,
> > +				total_sg * sizeof(struct vring_packed_desc));
> > +		vq->vring_packed.desc[head].id = cpu_to_virtio16(_vq->vdev, id);
> > +
> > +		if (vring_use_dma_api(_vq->vdev)) {
> > +			vq->desc_state_packed[id].addr = addr;
> > +			vq->desc_state_packed[id].len = total_sg *
> > +					sizeof(struct vring_packed_desc);
> > +			vq->desc_state_packed[id].flags =
> > +					virtio16_to_cpu(_vq->vdev, head_flags);
> > +		}
> > +	}
> > +
> > +	/* We're using some buffers from the free list. */
> > +	vq->vq.num_free -= descs_used;
> > +
> > +	/* Update free pointer */
> > +	if (indirect) {
> > +		n = head + 1;
> > +		if (n >= vq->vring_packed.num) {
> > +			n = 0;
> > +			vq->avail_wrap_counter ^= 1;
> > +		}
> > +		vq->next_avail_idx = n;
> > +		vq->free_head = vq->desc_state_packed[id].next;
> > +	} else {
> > +		vq->next_avail_idx = i;
> > +		vq->free_head = curr;
> > +	}
> > +
> > +	/* Store token and indirect buffer state. */
> > +	vq->desc_state_packed[id].num = descs_used;
> > +	vq->desc_state_packed[id].data = data;
> > +	if (indirect)
> > +		vq->desc_state_packed[id].indir_desc = desc;
> > +	else
> > +		vq->desc_state_packed[id].indir_desc = ctx;
> > +
> > +	/* A driver MUST NOT make the first descriptor in the list
> > +	 * available before all subsequent descriptors comprising
> > +	 * the list are made available. */
> > +	virtio_wmb(vq->weak_barriers);
> > +	vq->vring_packed.desc[head].flags = head_flags;
> > +	vq->num_added += descs_used;
> > +
> > +	pr_debug("Added buffer head %i to %p\n", head, vq);
> > +	END_USE(vq);
> > +
> > +	return 0;
> > +
> > +unmap_release:
> > +	err_idx = i;
> > +	i = head;
> > +
> > +	for (n = 0; n < total_sg; n++) {
> > +		if (i == err_idx)
> > +			break;
> > +		vring_unmap_desc_packed(vq, &desc[i]);
> > +		i++;
> > +		if (!indirect && i >= vq->vring_packed.num)
> > +			i = 0;
> > +	}
> > +
> > +	vq->avail_wrap_counter = avail_wrap_counter;
> > +
> > +	if (indirect)
> > +		kfree(desc);
> > +
> > +	END_USE(vq);
> >  	return -EIO;
> >  }
> >  
> >  static bool virtqueue_kick_prepare_packed(struct virtqueue *_vq)
> >  {
> > -	return false;
> > +	struct vring_virtqueue *vq = to_vvq(_vq);
> > +	u16 flags;
> > +	bool needs_kick;
> > +	u32 snapshot;
> > +
> > +	START_USE(vq);
> > +	/* We need to expose the new flags value before checking notification
> > +	 * suppressions. */
> > +	virtio_mb(vq->weak_barriers);
> > +
> > +	snapshot = READ_ONCE(*(u32 *)vq->vring_packed.device);
> > +	flags = virtio16_to_cpu(_vq->vdev, (__virtio16)(snapshot >> 16)) & 0x3;
> 
> What are all these hard-coded things? Also, either we do READ_ONCE
> everywhere or nowhere. Why is this place special? And why read 32 bits
> if you only want 16?

Yeah, READ_ONCE() and reading 32 bits at once are not really
needed in this patch. But they're needed in the next patch. I
thought it wasn't wrong to do this, so I introduced them in
the first place.

Just to double check: is the below code (apart from the
hard-coded value and virtio16) from the next patch OK?

"""
    snapshot = READ_ONCE(*(u32 *)vq->vring_packed.device);
+   off_wrap = virtio16_to_cpu(_vq->vdev, (__virtio16)(snapshot & 0xffff));
    flags = virtio16_to_cpu(_vq->vdev, (__virtio16)(snapshot >> 16)) & 0x3;
"""

> 
> And doesn't sparse complain about cast to __virtio16?

I'll give it a try in the next version.

> 
> > +
> > +#ifdef DEBUG
> > +	if (vq->last_add_time_valid) {
> > +		WARN_ON(ktime_to_ms(ktime_sub(ktime_get(),
> > +					      vq->last_add_time)) > 100);
> > +	}
> > +	vq->last_add_time_valid = false;
> > +#endif
> > +
> > +	needs_kick = (flags != VRING_EVENT_F_DISABLE);
> > +	END_USE(vq);
> > +	return needs_kick;
> > +}
> > +
> > +static void detach_buf_packed(struct vring_virtqueue *vq,
> > +			      unsigned int id, void **ctx)
> > +{
> > +	struct vring_desc_state_packed *state = NULL;
> > +	struct vring_packed_desc *desc;
> > +	unsigned int curr, i;
> > +
> > +	/* Clear data ptr. */
> > +	vq->desc_state_packed[id].data = NULL;
> > +
> > +	curr = id;
> > +	for (i = 0; i < vq->desc_state_packed[id].num; i++) {
> > +		state = &vq->desc_state_packed[curr];
> > +		vring_unmap_state_packed(vq, state);
> > +		curr = state->next;
> > +	}
> > +
> > +	BUG_ON(state == NULL);
> > +	vq->vq.num_free += vq->desc_state_packed[id].num;
> > +	state->next = vq->free_head;
> > +	vq->free_head = id;
> > +
> > +	if (vq->indirect) {
> > +		u32 len;
> > +
> > +		/* Free the indirect table, if any, now that it's unmapped. */
> > +		desc = vq->desc_state_packed[id].indir_desc;
> > +		if (!desc)
> > +			return;
> > +
> > +		if (vring_use_dma_api(vq->vq.vdev)) {
> > +			len = vq->desc_state_packed[id].len;
> > +			for (i = 0; i < len / sizeof(struct vring_packed_desc);
> > +					i++)
> > +				vring_unmap_desc_packed(vq, &desc[i]);
> > +		}
> > +		kfree(desc);
> > +		vq->desc_state_packed[id].indir_desc = NULL;
> > +	} else if (ctx) {
> > +		*ctx = vq->desc_state_packed[id].indir_desc;
> > +	}
> > +}
> > +
> > +static inline bool is_used_desc_packed(const struct vring_virtqueue *vq,
> > +				       u16 idx, bool used_wrap_counter)
> > +{
> > +	u16 flags;
> > +	bool avail, used;
> > +
> > +	flags = virtio16_to_cpu(vq->vq.vdev,
> > +				vq->vring_packed.desc[idx].flags);
> > +	avail = !!(flags & VRING_DESC_F_AVAIL);
> > +	used = !!(flags & VRING_DESC_F_USED);
> > +
> > +	return avail == used && used == used_wrap_counter;
> 
> I think that you don't need to look at avail flag to detect a used
> descriptor. The reason device writes it is to avoid confusing
> *device* next time descriptor wraps.

Okay, I'll just check the used flag.
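
I.e. something like this (just a sketch of the simplified check):

"""
static inline bool is_used_desc_packed(const struct vring_virtqueue *vq,
				       u16 idx, bool used_wrap_counter)
{
	u16 flags = virtio16_to_cpu(vq->vq.vdev,
				    vq->vring_packed.desc[idx].flags);

	return !!(flags & VRING_DESC_F_USED) == used_wrap_counter;
}
"""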

> 
> >  }
> >  
> >  static inline bool more_used_packed(const struct vring_virtqueue *vq)
> >  {
> > -	return false;
> > +	return is_used_desc_packed(vq, vq->last_used_idx,
> > +			vq->used_wrap_counter);
> >  }
> >  
> >  static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq,
> >  					  unsigned int *len,
> >  					  void **ctx)
> >  {
> > -	return NULL;
> > +	struct vring_virtqueue *vq = to_vvq(_vq);
> > +	u16 last_used, id;
> > +	void *ret;
> > +
> > +	START_USE(vq);
> > +
> > +	if (unlikely(vq->broken)) {
> > +		END_USE(vq);
> > +		return NULL;
> > +	}
> > +
> > +	if (!more_used_packed(vq)) {
> > +		pr_debug("No more buffers in queue\n");
> > +		END_USE(vq);
> > +		return NULL;
> > +	}
> > +
> > +	/* Only get used elements after they have been exposed by host. */
> > +	virtio_rmb(vq->weak_barriers);
> > +
> > +	last_used = vq->last_used_idx;
> > +	id = virtio16_to_cpu(_vq->vdev, vq->vring_packed.desc[last_used].id);
> > +	*len = virtio32_to_cpu(_vq->vdev, vq->vring_packed.desc[last_used].len);
> > +
> > +	if (unlikely(id >= vq->vring_packed.num)) {
> > +		BAD_RING(vq, "id %u out of range\n", id);
> > +		return NULL;
> > +	}
> > +	if (unlikely(!vq->desc_state_packed[id].data)) {
> > +		BAD_RING(vq, "id %u is not a head!\n", id);
> > +		return NULL;
> > +	}
> > +
> > +	vq->last_used_idx += vq->desc_state_packed[id].num;
> > +	if (vq->last_used_idx >= vq->vring_packed.num) {
> > +		vq->last_used_idx -= vq->vring_packed.num;
> > +		vq->used_wrap_counter ^= 1;
> > +	}
> > +
> > +	/* detach_buf_packed clears data, so grab it now. */
> > +	ret = vq->desc_state_packed[id].data;
> > +	detach_buf_packed(vq, id, ctx);
> > +
> > +#ifdef DEBUG
> > +	vq->last_add_time_valid = false;
> > +#endif
> > +
> > +	END_USE(vq);
> > +	return ret;
> >  }
> >  
> >  static void virtqueue_disable_cb_packed(struct virtqueue *_vq)
> >  {
> > +	struct vring_virtqueue *vq = to_vvq(_vq);
> > +
> > +	if (vq->event_flags_shadow != VRING_EVENT_F_DISABLE) {
> > +		vq->event_flags_shadow = VRING_EVENT_F_DISABLE;
> > +		vq->vring_packed.driver->flags = cpu_to_virtio16(_vq->vdev,
> > +							vq->event_flags_shadow);
> > +	}
> >  }
> >  
> >  static unsigned virtqueue_enable_cb_prepare_packed(struct virtqueue *_vq)
> >  {
> > -	return 0;
> > +	struct vring_virtqueue *vq = to_vvq(_vq);
> > +
> > +	START_USE(vq);
> > +
> > +	/* We optimistically turn back on interrupts, then check if there was
> > +	 * more to do. */
> > +
> > +	if (vq->event_flags_shadow == VRING_EVENT_F_DISABLE) {
> > +		vq->event_flags_shadow = VRING_EVENT_F_ENABLE;
> > +		vq->vring_packed.driver->flags = cpu_to_virtio16(_vq->vdev,
> > +							vq->event_flags_shadow);
> > +	}
> > +
> > +	END_USE(vq);
> > +	return vq->last_used_idx | ((u16)vq->used_wrap_counter << 15);
> >  }
> >  
> > -static bool virtqueue_poll_packed(struct virtqueue *_vq, unsigned last_used_idx)
> > +static bool virtqueue_poll_packed(struct virtqueue *_vq, unsigned off_wrap)
> >  {
> > -	return false;
> > +	struct vring_virtqueue *vq = to_vvq(_vq);
> > +	bool wrap_counter;
> > +	u16 used_idx;
> > +
> > +	wrap_counter = off_wrap >> 15;
> > +	used_idx = off_wrap & ~(1 << 15);
> > +
> > +	return is_used_desc_packed(vq, used_idx, wrap_counter);
> 
> These >> 15 << 15 all over the place duplicate info.
> Pls put 15 in the header.

Sure.

> 
> Also can you maintain the counters properly shifted?
> Then just use them.

Then, we would need to maintain both the shifted and
un-shifted wrap counters at the same time.
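
E.g. (a rough sketch) we could keep a pre-shifted copy in
struct vring_virtqueue:

"""
	/* Either 0 or 1 << 15, toggled when last_used_idx wraps. */
	u16 used_wrap_counter_shifted;
"""

so that e.g. virtqueue_enable_cb_prepare_packed() becomes just:

"""
	return vq->last_used_idx | vq->used_wrap_counter_shifted;
"""

but is_used_desc_packed() would still want the un-shifted bool.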

> 
> 
> >  }
> >  
> >  static bool virtqueue_enable_cb_delayed_packed(struct virtqueue *_vq)
> >  {
> > -	return false;
> > +	struct vring_virtqueue *vq = to_vvq(_vq);
> > +
> > +	START_USE(vq);
> > +
> > +	/* We optimistically turn back on interrupts, then check if there was
> > +	 * more to do. */
> > +
> > +	if (vq->event_flags_shadow == VRING_EVENT_F_DISABLE) {
> > +		vq->event_flags_shadow = VRING_EVENT_F_ENABLE;
> > +		vq->vring_packed.driver->flags = cpu_to_virtio16(_vq->vdev,
> > +							vq->event_flags_shadow);
> > +		/* We need to enable interrupts first before re-checking
> > +		 * for more used buffers. */
> > +		virtio_mb(vq->weak_barriers);
> > +	}
> > +
> > +	if (more_used_packed(vq)) {
> > +		END_USE(vq);
> > +		return false;
> > +	}
> > +
> > +	END_USE(vq);
> > +	return true;
> >  }
> >  
> >  static void *virtqueue_detach_unused_buf_packed(struct virtqueue *_vq)
> >  {
> > +	struct vring_virtqueue *vq = to_vvq(_vq);
> > +	unsigned int i;
> > +	void *buf;
> > +
> > +	START_USE(vq);
> > +
> > +	for (i = 0; i < vq->vring_packed.num; i++) {
> > +		if (!vq->desc_state_packed[i].data)
> > +			continue;
> > +		/* detach_buf clears data, so grab it now. */
> > +		buf = vq->desc_state_packed[i].data;
> > +		detach_buf_packed(vq, i, NULL);
> > +		END_USE(vq);
> > +		return buf;
> > +	}
> > +	/* That should have freed everything. */
> > +	BUG_ON(vq->vq.num_free != vq->vring_packed.num);
> > +
> > +	END_USE(vq);
> >  	return NULL;
> >  }
> >  
> > @@ -1083,6 +1559,9 @@ bool virtqueue_poll(struct virtqueue *_vq, unsigned last_used_idx)
> >  {
> >  	struct vring_virtqueue *vq = to_vvq(_vq);
> >  
> > +	/* We need to enable interrupts first before re-checking
> > +	 * for more used buffers. */
> > +	virtio_mb(vq->weak_barriers);
> >  	return vq->packed ? virtqueue_poll_packed(_vq, last_used_idx) :
> >  			    virtqueue_poll_split(_vq, last_used_idx);
> >  }
> 
> Possible optimization for when dma API is in use:

Got it. Will give it a try!

Best regards,
Tiwei Bie

> 
> --->
> 
> dma: detecting nop unmap
> 
> drivers need to maintain the dma address for unmap purposes,
> but these cycles are wasted when unmap callback is not
> defined. Add an API for drivers to check that and avoid
> unmap completely. Debug builds still have unmap.
> 
> Signed-off-by: Michael S. Tsirkin <mst@redhat.com>
> 
> ----
> 
> diff --git a/include/linux/dma-debug.h b/include/linux/dma-debug.h
> index a785f2507159..38b2c27c8449 100644
> --- a/include/linux/dma-debug.h
> +++ b/include/linux/dma-debug.h
> @@ -42,6 +42,11 @@ extern void debug_dma_mapping_error(struct device *dev, dma_addr_t dma_addr);
>  extern void debug_dma_unmap_page(struct device *dev, dma_addr_t addr,
>  				 size_t size, int direction, bool map_single);
>  
> +static inline bool has_debug_dma_unmap(struct device *dev)
> +{
> +	return true;
> +}
> +
>  extern void debug_dma_map_sg(struct device *dev, struct scatterlist *sg,
>  			     int nents, int mapped_ents, int direction);
>  
> @@ -121,6 +126,11 @@ static inline void debug_dma_unmap_page(struct device *dev, dma_addr_t addr,
>  {
>  }
>  
> +static inline bool has_debug_dma_unmap(struct device *dev)
> +{
> +	return false;
> +}
> +
>  static inline void debug_dma_map_sg(struct device *dev, struct scatterlist *sg,
>  				    int nents, int mapped_ents, int direction)
>  {
> diff --git a/include/linux/dma-mapping.h b/include/linux/dma-mapping.h
> index 1db6a6b46d0d..656f3e518166 100644
> --- a/include/linux/dma-mapping.h
> +++ b/include/linux/dma-mapping.h
> @@ -241,6 +241,14 @@ static inline dma_addr_t dma_map_single_attrs(struct device *dev, void *ptr,
>  	return addr;
>  }
>  
> +static inline bool has_dma_unmap(struct device *dev)
> +{
> +	const struct dma_map_ops *ops = get_dma_ops(dev);
> +
> +	return ops->unmap_page || ops->unmap_sg || ops->unmap_resource ||
> +		has_debug_dma_unmap(dev);
> +}
> +
>  static inline void dma_unmap_single_attrs(struct device *dev, dma_addr_t addr,
>  					  size_t size,
>  					  enum dma_data_direction dir,

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 1/5] virtio: add packed ring definitions
  2018-09-07 13:51   ` Michael S. Tsirkin
@ 2018-09-10  2:13     ` Tiwei Bie
  2018-09-12 12:53       ` Michael S. Tsirkin
  0 siblings, 1 reply; 53+ messages in thread
From: Tiwei Bie @ 2018-09-10  2:13 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: jasowang, virtualization, linux-kernel, netdev, virtio-dev, wexu,
	jfreimann

On Fri, Sep 07, 2018 at 09:51:23AM -0400, Michael S. Tsirkin wrote:
> On Wed, Jul 11, 2018 at 10:27:07AM +0800, Tiwei Bie wrote:
> > Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> > ---
> >  include/uapi/linux/virtio_config.h |  3 +++
> >  include/uapi/linux/virtio_ring.h   | 43 ++++++++++++++++++++++++++++++
> >  2 files changed, 46 insertions(+)
> > 
> > diff --git a/include/uapi/linux/virtio_config.h b/include/uapi/linux/virtio_config.h
> > index 449132c76b1c..1196e1c1d4f6 100644
> > --- a/include/uapi/linux/virtio_config.h
> > +++ b/include/uapi/linux/virtio_config.h
> > @@ -75,6 +75,9 @@
> >   */
> >  #define VIRTIO_F_IOMMU_PLATFORM		33
> >  
> > +/* This feature indicates support for the packed virtqueue layout. */
> > +#define VIRTIO_F_RING_PACKED		34
> > +
> >  /*
> >   * Does the device support Single Root I/O Virtualization?
> >   */
> > diff --git a/include/uapi/linux/virtio_ring.h b/include/uapi/linux/virtio_ring.h
> > index 6d5d5faa989b..0254a2ba29cf 100644
> > --- a/include/uapi/linux/virtio_ring.h
> > +++ b/include/uapi/linux/virtio_ring.h
> > @@ -44,6 +44,10 @@
> >  /* This means the buffer contains a list of buffer descriptors. */
> >  #define VRING_DESC_F_INDIRECT	4
> >  
> > +/* Mark a descriptor as available or used. */
> > +#define VRING_DESC_F_AVAIL	(1ul << 7)
> > +#define VRING_DESC_F_USED	(1ul << 15)
> > +
> >  /* The Host uses this in used->flags to advise the Guest: don't kick me when
> >   * you add a buffer.  It's unreliable, so it's simply an optimization.  Guest
> >   * will still kick if it's out of buffers. */
> > @@ -53,6 +57,17 @@
> >   * optimization.  */
> >  #define VRING_AVAIL_F_NO_INTERRUPT	1
> >  
> > +/* Enable events. */
> > +#define VRING_EVENT_F_ENABLE	0x0
> > +/* Disable events. */
> > +#define VRING_EVENT_F_DISABLE	0x1
> > +/*
> > + * Enable events for a specific descriptor
> > + * (as specified by Descriptor Ring Change Event Offset/Wrap Counter).
> > + * Only valid if VIRTIO_RING_F_EVENT_IDX has been negotiated.
> > + */
> > +#define VRING_EVENT_F_DESC	0x2
> > +
> >  /* We support indirect buffer descriptors */
> >  #define VIRTIO_RING_F_INDIRECT_DESC	28
> >  
> 
> These are for the packed ring, right? Pls prefix accordingly.

How about something like this:

#define VRING_PACKED_DESC_F_AVAIL	(1u << 7)
#define VRING_PACKED_DESC_F_USED	(1u << 15)

#define VRING_PACKED_EVENT_F_ENABLE	0x0
#define VRING_PACKED_EVENT_F_DISABLE	0x1
#define VRING_PACKED_EVENT_F_DESC	0x2


> Also, you likely need macros for the wrap counters.

How about something like this:

#define VRING_PACKED_EVENT_WRAP_COUNTER_SHIFT	15
#define VRING_PACKED_EVENT_WRAP_COUNTER_MASK	\
			(1u << VRING_PACKED_EVENT_WRAP_COUNTER_SHIFT)
#define VRING_PACKED_EVENT_OFFSET_MASK	\
			((1u << VRING_PACKED_EVENT_WRAP_COUNTER_SHIFT) - 1)
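
With these, the open-coded shifts/masks, e.g. in
virtqueue_poll_packed(), would become:

"""
	wrap_counter = off_wrap >> VRING_PACKED_EVENT_WRAP_COUNTER_SHIFT;
	used_idx = off_wrap & VRING_PACKED_EVENT_OFFSET_MASK;
"""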

> 
> > @@ -171,4 +186,32 @@ static inline int vring_need_event(__u16 event_idx, __u16 new_idx, __u16 old)
> >  	return (__u16)(new_idx - event_idx - 1) < (__u16)(new_idx - old);
> >  }
> >  
> > +struct vring_packed_desc_event {
> > +	/* Descriptor Ring Change Event Offset/Wrap Counter. */
> > +	__virtio16 off_wrap;
> > +	/* Descriptor Ring Change Event Flags. */
> > +	__virtio16 flags;
> > +};
> > +
> > +struct vring_packed_desc {
> > +	/* Buffer Address. */
> > +	__virtio64 addr;
> > +	/* Buffer Length. */
> > +	__virtio32 len;
> > +	/* Buffer ID. */
> > +	__virtio16 id;
> > +	/* The flags depending on descriptor type. */
> > +	__virtio16 flags;
> > +};
> 
> Don't use __virtioXX types, just __leXX ones.

Got it, will do that.
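
I.e. something like this (the same structs, just with __leXX):

"""
struct vring_packed_desc_event {
	/* Descriptor Ring Change Event Offset/Wrap Counter. */
	__le16 off_wrap;
	/* Descriptor Ring Change Event Flags. */
	__le16 flags;
};

struct vring_packed_desc {
	/* Buffer Address. */
	__le64 addr;
	/* Buffer Length. */
	__le32 len;
	/* Buffer ID. */
	__le16 id;
	/* The flags depending on descriptor type. */
	__le16 flags;
};
"""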

> 
> > +
> > +struct vring_packed {
> > +	unsigned int num;
> > +
> > +	struct vring_packed_desc *desc;
> > +
> > +	struct vring_packed_desc_event *driver;
> > +
> > +	struct vring_packed_desc_event *device;
> > +};
> > +
> >  #endif /* _UAPI_LINUX_VIRTIO_RING_H */
> > -- 
> > 2.18.0

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 2/5] virtio_ring: support creating packed ring
  2018-09-07 14:03   ` Michael S. Tsirkin
@ 2018-09-10  2:28     ` Tiwei Bie
  2018-09-12  7:51       ` Tiwei Bie
  2018-09-12 12:45       ` Michael S. Tsirkin
  0 siblings, 2 replies; 53+ messages in thread
From: Tiwei Bie @ 2018-09-10  2:28 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: jasowang, virtualization, linux-kernel, netdev, virtio-dev, wexu,
	jfreimann

On Fri, Sep 07, 2018 at 10:03:24AM -0400, Michael S. Tsirkin wrote:
> On Wed, Jul 11, 2018 at 10:27:08AM +0800, Tiwei Bie wrote:
> > This commit introduces the support for creating packed ring.
> > All split ring specific functions are added _split suffix.
> > Some necessary stubs for packed ring are also added.
> > 
> > Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> 
> I'd rather have a patch just renaming split functions, then
> add all packed stuff in as a separate patch on top.

Sure, I will do that.

> 
> 
> > ---
> >  drivers/virtio/virtio_ring.c | 801 +++++++++++++++++++++++------------
> >  include/linux/virtio_ring.h  |   8 +-
> >  2 files changed, 546 insertions(+), 263 deletions(-)
> > 
> > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > index 814b395007b2..c4f8abc7445a 100644
> > --- a/drivers/virtio/virtio_ring.c
> > +++ b/drivers/virtio/virtio_ring.c
> > @@ -60,11 +60,15 @@ struct vring_desc_state {
> >  	struct vring_desc *indir_desc;	/* Indirect descriptor, if any. */
> >  };
> >  
> > +struct vring_desc_state_packed {
> > +	int next;			/* The next desc state. */
> 
> So this can go away with IN_ORDER?

Yes. If IN_ORDER is negotiated, next won't be needed anymore.
Currently, IN_ORDER isn't included in this patch set, because
some changes for split ring are needed to make sure that it
will use the descs in the expected order. After that,
optimizations can be done for both split ring and packed
ring respectively.

> 
> > +};
> > +
> >  struct vring_virtqueue {
> >  	struct virtqueue vq;
> >  
> > -	/* Actual memory layout for this queue */
> > -	struct vring vring;
> > +	/* Is this a packed ring? */
> > +	bool packed;
> >  
> >  	/* Can we use weak barriers? */
> >  	bool weak_barriers;
> > @@ -86,11 +90,39 @@ struct vring_virtqueue {
> >  	/* Last used index we've seen. */
> >  	u16 last_used_idx;
> >  
> > -	/* Last written value to avail->flags */
> > -	u16 avail_flags_shadow;
> > +	union {
> > +		/* Available for split ring */
> > +		struct {
> > +			/* Actual memory layout for this queue. */
> > +			struct vring vring;
> >  
> > -	/* Last written value to avail->idx in guest byte order */
> > -	u16 avail_idx_shadow;
> > +			/* Last written value to avail->flags */
> > +			u16 avail_flags_shadow;
> > +
> > +			/* Last written value to avail->idx in
> > +			 * guest byte order. */
> > +			u16 avail_idx_shadow;
> > +		};
> 
> Name this field split so it's easier to detect misuse of e.g.
> packed fields in split code?

Good point, I'll do that.
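
E.g. a rough sketch of the renamed layout:

"""
	union {
		/* Available for split ring. */
		struct {
			struct vring vring;
			u16 avail_flags_shadow;
			u16 avail_idx_shadow;
		} split;

		/* Available for packed ring. */
		struct {
			struct vring_packed vring;
			bool avail_wrap_counter;
			bool used_wrap_counter;
			u16 next_avail_idx;
			u16 event_flags_shadow;
		} packed;
	};
"""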

> 
> > +
> > +		/* Available for packed ring */
> > +		struct {
> > +			/* Actual memory layout for this queue. */
> > +			struct vring_packed vring_packed;
> > +
> > +			/* Driver ring wrap counter. */
> > +			bool avail_wrap_counter;
> > +
> > +			/* Device ring wrap counter. */
> > +			bool used_wrap_counter;
> > +
> > +			/* Index of the next avail descriptor. */
> > +			u16 next_avail_idx;
> > +
> > +			/* Last written value to driver->flags in
> > +			 * guest byte order. */
> > +			u16 event_flags_shadow;
> > +		};
> > +	};
[...]
> > +
> > +/*
> > + * The layout for the packed ring is a continuous chunk of memory
> > + * which looks like this.
> > + *
> > + * struct vring_packed {
> > + *	// The actual descriptors (16 bytes each)
> > + *	struct vring_packed_desc desc[num];
> > + *
> > + *	// Padding to the next align boundary.
> > + *	char pad[];
> > + *
> > + *	// Driver Event Suppression
> > + *	struct vring_packed_desc_event driver;
> > + *
> > + *	// Device Event Suppression
> > + *	struct vring_packed_desc_event device;
> > + * };
> > + */
> 
> Why not just allocate event structures separately?
> Is it a win to have them share a cache line for some reason?

Will do that.
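
E.g. (a sketch -- the dma_addr_t names are made up) the two event
structs could be allocated with vring_alloc_queue() just like the
descriptor ring itself:

"""
	vq->vring_packed.driver = vring_alloc_queue(vdev,
			sizeof(struct vring_packed_desc_event),
			&driver_event_dma_addr, GFP_KERNEL);
	vq->vring_packed.device = vring_alloc_queue(vdev,
			sizeof(struct vring_packed_desc_event),
			&device_event_dma_addr, GFP_KERNEL);
"""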

> 
> > +static inline void vring_init_packed(struct vring_packed *vr, unsigned int num,
> > +				     void *p, unsigned long align)
> > +{
> > +	vr->num = num;
> > +	vr->desc = p;
> > +	vr->driver = (void *)ALIGN(((uintptr_t)p +
> > +		sizeof(struct vring_packed_desc) * num), align);
> > +	vr->device = vr->driver + 1;
> > +}
> 
> What's all this about alignment? Where does it come from?

It comes from the `vring_align` parameter of vring_create_virtqueue()
and vring_new_virtqueue():

https://github.com/torvalds/linux/blob/a49a9dcce802/drivers/virtio/virtio_ring.c#L1061
https://github.com/torvalds/linux/blob/a49a9dcce802/drivers/virtio/virtio_ring.c#L1123

Should I just ignore it in packed ring?

CCW defined this:

#define KVM_VIRTIO_CCW_RING_ALIGN 4096

I'm not familiar with CCW. Currently, in this patch set, packed ring
isn't enabled on CCW because some legacy accessors haven't been
implemented for packed ring yet.

> 
> > +
> > +static inline unsigned vring_size_packed(unsigned int num, unsigned long align)
> > +{
> > +	return ((sizeof(struct vring_packed_desc) * num + align - 1)
> > +		& ~(align - 1)) + sizeof(struct vring_packed_desc_event) * 2;
> > +}
[...]
> > @@ -1129,10 +1388,17 @@ struct virtqueue *vring_new_virtqueue(unsigned int index,
> >  				      void (*callback)(struct virtqueue *vq),
> >  				      const char *name)
> >  {
> > -	struct vring vring;
> > -	vring_init(&vring, num, pages, vring_align);
> > -	return __vring_new_virtqueue(index, vring, vdev, weak_barriers, context,
> > -				     notify, callback, name);
> > +	union vring_union vring;
> > +	bool packed;
> > +
> > +	packed = virtio_has_feature(vdev, VIRTIO_F_RING_PACKED);
> > +	if (packed)
> > +		vring_init_packed(&vring.vring_packed, num, pages, vring_align);
> > +	else
> > +		vring_init(&vring.vring_split, num, pages, vring_align);
> 
> 
> vring_init in the UAPI header is more or less a bug.
> I'd just stop using it, keep it around for legacy userspace.

Got it. I'd like to do that. Thanks.
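
E.g. with a kernel-internal helper along these lines (a sketch), the
UAPI vring_init() could be left for legacy userspace only:

"""
static inline void vring_init_split(struct vring *vr, unsigned int num,
				    void *p, unsigned long align)
{
	vr->num = num;
	vr->desc = p;
	vr->avail = (struct vring_avail *)((char *)p +
			num * sizeof(struct vring_desc));
	vr->used = (void *)ALIGN((uintptr_t)&vr->avail->ring[num] +
			sizeof(__virtio16), align);
}
"""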

> 
> > +
> > +	return __vring_new_virtqueue(index, vring, packed, vdev, weak_barriers,
> > +				     context, notify, callback, name);
> >  }
> >  EXPORT_SYMBOL_GPL(vring_new_virtqueue);
> >  
> > @@ -1142,7 +1408,9 @@ void vring_del_virtqueue(struct virtqueue *_vq)
> >  
> >  	if (vq->we_own_ring) {
> >  		vring_free_queue(vq->vq.vdev, vq->queue_size_in_bytes,
> > -				 vq->vring.desc, vq->queue_dma_addr);
> > +				 vq->packed ? (void *)vq->vring_packed.desc :
> > +					      (void *)vq->vring.desc,
> > +				 vq->queue_dma_addr);
> >  	}
> >  	list_del(&_vq->list);
> >  	kfree(vq);
> > @@ -1184,7 +1452,7 @@ unsigned int virtqueue_get_vring_size(struct virtqueue *_vq)
> >  
> >  	struct vring_virtqueue *vq = to_vvq(_vq);
> >  
> > -	return vq->vring.num;
> > +	return vq->packed ? vq->vring_packed.num : vq->vring.num;
> >  }
> >  EXPORT_SYMBOL_GPL(virtqueue_get_vring_size);
> >  
> > @@ -1227,6 +1495,10 @@ dma_addr_t virtqueue_get_avail_addr(struct virtqueue *_vq)
> >  
> >  	BUG_ON(!vq->we_own_ring);
> >  
> > +	if (vq->packed)
> > +		return vq->queue_dma_addr + ((char *)vq->vring_packed.driver -
> > +				(char *)vq->vring_packed.desc);
> > +
> >  	return vq->queue_dma_addr +
> >  		((char *)vq->vring.avail - (char *)vq->vring.desc);
> >  }
> > @@ -1238,11 +1510,16 @@ dma_addr_t virtqueue_get_used_addr(struct virtqueue *_vq)
> >  
> >  	BUG_ON(!vq->we_own_ring);
> >  
> > +	if (vq->packed)
> > +		return vq->queue_dma_addr + ((char *)vq->vring_packed.device -
> > +				(char *)vq->vring_packed.desc);
> > +
> >  	return vq->queue_dma_addr +
> >  		((char *)vq->vring.used - (char *)vq->vring.desc);
> >  }
> >  EXPORT_SYMBOL_GPL(virtqueue_get_used_addr);
> >  
> > +/* Only available for split ring */
> >  const struct vring *virtqueue_get_vring(struct virtqueue *vq)
> >  {
> >  	return &to_vvq(vq)->vring;
> > diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h
> > index fab02133a919..992142b35f55 100644
> > --- a/include/linux/virtio_ring.h
> > +++ b/include/linux/virtio_ring.h
> > @@ -60,6 +60,11 @@ static inline void virtio_store_mb(bool weak_barriers,
> >  struct virtio_device;
> >  struct virtqueue;
> >  
> > +union vring_union {
> > +	struct vring vring_split;
> > +	struct vring_packed vring_packed;
> > +};
> > +
> >  /*
> >   * Creates a virtqueue and allocates the descriptor ring.  If
> >   * may_reduce_num is set, then this may allocate a smaller ring than
> > @@ -79,7 +84,8 @@ struct virtqueue *vring_create_virtqueue(unsigned int index,
> >  
> >  /* Creates a virtqueue with a custom layout. */
> >  struct virtqueue *__vring_new_virtqueue(unsigned int index,
> > -					struct vring vring,
> > +					union vring_union vring,
> > +					bool packed,
> >  					struct virtio_device *vdev,
> >  					bool weak_barriers,
> >  					bool ctx,
> > -- 
> > 2.18.0

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [virtio-dev] Re: [PATCH net-next v2 4/5] virtio_ring: add event idx support in packed ring
  2018-09-07 14:10   ` Michael S. Tsirkin
@ 2018-09-10  2:35     ` Tiwei Bie
  0 siblings, 0 replies; 53+ messages in thread
From: Tiwei Bie @ 2018-09-10  2:35 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: jasowang, virtualization, linux-kernel, netdev, virtio-dev, wexu,
	jfreimann

On Fri, Sep 07, 2018 at 10:10:14AM -0400, Michael S. Tsirkin wrote:
> On Wed, Jul 11, 2018 at 10:27:10AM +0800, Tiwei Bie wrote:
> > This commit introduces the EVENT_IDX support in packed ring.
> > 
> > Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> 
> Besides the usual comment about hard-coded constants like <<15:
> does this actually do any good for performance? We don't
> have to if we do not want to.

Got it. Thanks.

> 
> > ---
> >  drivers/virtio/virtio_ring.c | 73 ++++++++++++++++++++++++++++++++----
> >  1 file changed, 65 insertions(+), 8 deletions(-)
> > 
> > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > index f317b485ba54..f79a1e17f7d1 100644
> > --- a/drivers/virtio/virtio_ring.c
> > +++ b/drivers/virtio/virtio_ring.c
> > @@ -1050,7 +1050,7 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
> >  static bool virtqueue_kick_prepare_packed(struct virtqueue *_vq)
> >  {
> >  	struct vring_virtqueue *vq = to_vvq(_vq);
> > -	u16 flags;
> > +	u16 new, old, off_wrap, flags, wrap_counter, event_idx;
> >  	bool needs_kick;
> >  	u32 snapshot;
> >  
> > @@ -1059,9 +1059,19 @@ static bool virtqueue_kick_prepare_packed(struct virtqueue *_vq)
> >  	 * suppressions. */
> >  	virtio_mb(vq->weak_barriers);
> >  
> > +	old = vq->next_avail_idx - vq->num_added;
> > +	new = vq->next_avail_idx;
> > +	vq->num_added = 0;
> > +
> >  	snapshot = READ_ONCE(*(u32 *)vq->vring_packed.device);
> > +	off_wrap = virtio16_to_cpu(_vq->vdev, (__virtio16)(snapshot & 0xffff));
> >  	flags = virtio16_to_cpu(_vq->vdev, (__virtio16)(snapshot >> 16)) & 0x3;
> 
> 
> some kind of struct union would be helpful to make this readable.

I will define a struct/union for this.
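
Maybe something like this (just a sketch, the names are made up):

"""
	union {
		struct {
			__virtio16 off_wrap;
			__virtio16 flags;
		};
		u32 raw;
	} snapshot;

	snapshot.raw = READ_ONCE(*(u32 *)vq->vring_packed.device);
	off_wrap = virtio16_to_cpu(_vq->vdev, snapshot.off_wrap);
	flags = virtio16_to_cpu(_vq->vdev, snapshot.flags) & 0x3;
"""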

> 
> >  
> > +	wrap_counter = off_wrap >> 15;
> > +	event_idx = off_wrap & ~(1 << 15);
> > +	if (wrap_counter != vq->avail_wrap_counter)
> > +		event_idx -= vq->vring_packed.num;
> > +
> >  #ifdef DEBUG
> >  	if (vq->last_add_time_valid) {
> >  		WARN_ON(ktime_to_ms(ktime_sub(ktime_get(),
> > @@ -1070,7 +1080,10 @@ static bool virtqueue_kick_prepare_packed(struct virtqueue *_vq)
> >  	vq->last_add_time_valid = false;
> >  #endif
> >  
> > -	needs_kick = (flags != VRING_EVENT_F_DISABLE);
> > +	if (flags == VRING_EVENT_F_DESC)
> > +		needs_kick = vring_need_event(event_idx, new, old);
> > +	else
> > +		needs_kick = (flags != VRING_EVENT_F_DISABLE);
> >  	END_USE(vq);
> >  	return needs_kick;
> >  }
> > @@ -1185,6 +1198,15 @@ static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq,
> >  	ret = vq->desc_state_packed[id].data;
> >  	detach_buf_packed(vq, id, ctx);
> >  
> > +	/* If we expect an interrupt for the next entry, tell host
> > +	 * by writing event index and flush out the write before
> > +	 * the read in the next get_buf call. */
> > +	if (vq->event_flags_shadow == VRING_EVENT_F_DESC)
> > +		virtio_store_mb(vq->weak_barriers,
> > +				&vq->vring_packed.driver->off_wrap,
> > +				cpu_to_virtio16(_vq->vdev, vq->last_used_idx |
> > +					((u16)vq->used_wrap_counter << 15)));
> > +
> >  #ifdef DEBUG
> >  	vq->last_add_time_valid = false;
> >  #endif
> > @@ -1213,8 +1235,18 @@ static unsigned virtqueue_enable_cb_prepare_packed(struct virtqueue *_vq)
> >  	/* We optimistically turn back on interrupts, then check if there was
> >  	 * more to do. */
> >  
> > +	if (vq->event) {
> > +		vq->vring_packed.driver->off_wrap = cpu_to_virtio16(_vq->vdev,
> > +				vq->last_used_idx |
> > +				((u16)vq->used_wrap_counter << 15));
> > +		/* We need to update event offset and event wrap
> > +		 * counter first before updating event flags. */
> > +		virtio_wmb(vq->weak_barriers);
> > +	}
> > +
> >  	if (vq->event_flags_shadow == VRING_EVENT_F_DISABLE) {
> > -		vq->event_flags_shadow = VRING_EVENT_F_ENABLE;
> > +		vq->event_flags_shadow = vq->event ? VRING_EVENT_F_DESC :
> > +						     VRING_EVENT_F_ENABLE;
> >  		vq->vring_packed.driver->flags = cpu_to_virtio16(_vq->vdev,
> >  							vq->event_flags_shadow);
> >  	}
> > @@ -1238,22 +1270,47 @@ static bool virtqueue_poll_packed(struct virtqueue *_vq, unsigned off_wrap)
> >  static bool virtqueue_enable_cb_delayed_packed(struct virtqueue *_vq)
> >  {
> >  	struct vring_virtqueue *vq = to_vvq(_vq);
> > +	u16 bufs, used_idx, wrap_counter;
> >  
> >  	START_USE(vq);
> >  
> >  	/* We optimistically turn back on interrupts, then check if there was
> >  	 * more to do. */
> >  
> > +	if (vq->event) {
> > +		/* TODO: tune this threshold */
> > +		bufs = (vq->vring_packed.num - _vq->num_free) * 3 / 4;
> > +		wrap_counter = vq->used_wrap_counter;
> > +
> > +		used_idx = vq->last_used_idx + bufs;
> > +		if (used_idx >= vq->vring_packed.num) {
> > +			used_idx -= vq->vring_packed.num;
> > +			wrap_counter ^= 1;
> > +		}
> > +
> > +		vq->vring_packed.driver->off_wrap = cpu_to_virtio16(_vq->vdev,
> > +				used_idx | (wrap_counter << 15));
> > +
> > +		/* We need to update event offset and event wrap
> > +		 * counter first before updating event flags. */
> > +		virtio_wmb(vq->weak_barriers);
> > +	} else {
> > +		used_idx = vq->last_used_idx;
> > +		wrap_counter = vq->used_wrap_counter;
> > +	}
> > +
> >  	if (vq->event_flags_shadow == VRING_EVENT_F_DISABLE) {
> > -		vq->event_flags_shadow = VRING_EVENT_F_ENABLE;
> > +		vq->event_flags_shadow = vq->event ? VRING_EVENT_F_DESC :
> > +						     VRING_EVENT_F_ENABLE;
> >  		vq->vring_packed.driver->flags = cpu_to_virtio16(_vq->vdev,
> >  							vq->event_flags_shadow);
> > -		/* We need to enable interrupts first before re-checking
> > -		 * for more used buffers. */
> > -		virtio_mb(vq->weak_barriers);
> >  	}
> >  
> > -	if (more_used_packed(vq)) {
> > +	/* We need to update event suppression structure first
> > +	 * before re-checking for more used buffers. */
> > +	virtio_mb(vq->weak_barriers);
> > +
> 
> mb is expensive. We should not do it if we changed nothing.

I will try to avoid it when possible.

> 
> > +	if (is_used_desc_packed(vq, used_idx, wrap_counter)) {
> >  		END_USE(vq);
> >  		return false;
> >  	}
> > -- 
> > 2.18.0
> 
> ---------------------------------------------------------------------
> To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
> For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org
> 

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [virtio-dev] Re: [PATCH net-next v2 0/5] virtio: support packed ring
  2018-09-07 13:00     ` Michael S. Tsirkin
@ 2018-09-10  3:00       ` Tiwei Bie
  2018-09-10  3:33         ` Jason Wang
  2018-09-12 13:06         ` Michael S. Tsirkin
  0 siblings, 2 replies; 53+ messages in thread
From: Tiwei Bie @ 2018-09-10  3:00 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: jasowang, virtualization, linux-kernel, netdev, virtio-dev, wexu,
	jfreimann

On Fri, Sep 07, 2018 at 09:00:49AM -0400, Michael S. Tsirkin wrote:
> On Fri, Sep 07, 2018 at 09:22:25AM +0800, Tiwei Bie wrote:
> > On Mon, Aug 27, 2018 at 05:00:40PM +0300, Michael S. Tsirkin wrote:
> > > Are there still plans to test the performance with vhost pmd?
> > > vhost doesn't seem to show a performance gain ...
> > > 
> > 
> > I tried some performance tests with vhost PMD. In guest, the
> > XDP program will return XDP_DROP directly. And in host, testpmd
> > will do txonly fwd.
> > 
> > When burst size is 1 and packet size is 64 in testpmd and
> > testpmd needs to iterate 5 Tx queues (but only the first two
> > queues are enabled) to prepare and inject packets, I got ~12%
> > performance boost (5.7Mpps -> 6.4Mpps). And if the vhost PMD
> > is faster (e.g. just need to iterate the first two queues to
> > prepare and inject packets), then I got similar performance
> > for both rings (~9.9Mpps) (packed ring's performance can be
> > lower, because it's more complicated in driver.)
> > 
> > I think packed ring makes vhost PMD faster, but it doesn't make
> > the driver faster. In packed ring, the ring is simplified, and
> > the handling of the ring in vhost (device) is also simplified,
> > but things are not simplified in driver, e.g. although there is
> > no desc table in the virtqueue anymore, driver still needs to
> > maintain a private desc state table (which is still managed as
> > a list in this patch set) to support the out-of-order desc
> > processing in vhost (device).
> > 
> > I think this patch set is mainly to make the driver have a full
> > functional support for the packed ring, which makes it possible
> > to leverage the packed ring feature in vhost (device). But I'm
> > not sure whether there is any other better idea, I'd like to
> > hear your thoughts. Thanks!
> 
> Just this: Jens seems to report a nice gain with virtio and
> vhost pmd across the board. Try to compare virtio and
> virtio pmd to see what the pmd does better?

The virtio PMD (drivers/net/virtio) in DPDK doesn't need to share
the virtio ring operation code with other drivers and is highly
optimized for networking. E.g. in Rx, the Rx burst function won't
chain descs. So the ID management for the Rx ring can be quite
simple and straightforward: we just need to initialize these IDs
when initializing the ring and don't need to change these IDs
in the data path anymore (the mergeable Rx code in that patch set
assumes the descs will be written back in order, which should be
fixed. I.e., the ID in the desc should be used to index vq->descx[]).
The Tx code in that patch set also assumes the descs will be
written back by the device in order, which should be fixed.
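
To illustrate, with unchained Rx descs the ID setup can boil down to
a one-time loop like this (a hypothetical sketch, reusing the kernel
struct names for familiarity):

"""
	/* Desc i always carries ID i; nothing to update on the data path. */
	for (i = 0; i < vq->vring_packed.num; i++)
		vq->vring_packed.desc[i].id = cpu_to_le16(i);
"""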

But in the kernel virtio driver, virtio_ring.c is very generic.
The enqueue (virtqueue_add()) and dequeue (virtqueue_get_buf_ctx())
functions need to support all the virtio devices and should be
able to handle all the possible cases that may happen. So although
the packed ring can be very efficient in some cases, currently
the room to optimize the performance in the kernel's virtio_ring.c
isn't that much. If we want to take full advantage of the
packed ring's efficiency, we need some further e.g. API changes
in virtio_ring.c, which shouldn't be part of this patch set. So
I still think this patch set is mainly to make the kernel virtio
driver have full functional support for the packed ring, and
we can't expect an impressive performance boost from it.

[...]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [virtio-dev] Re: [PATCH net-next v2 0/5] virtio: support packed ring
  2018-09-10  3:00       ` Tiwei Bie
@ 2018-09-10  3:33         ` Jason Wang
  2018-09-11  5:37           ` Tiwei Bie
  2018-09-12 13:06         ` Michael S. Tsirkin
  1 sibling, 1 reply; 53+ messages in thread
From: Jason Wang @ 2018-09-10  3:33 UTC (permalink / raw)
  To: Tiwei Bie, Michael S. Tsirkin
  Cc: virtualization, linux-kernel, netdev, virtio-dev, wexu, jfreimann



On 2018-09-10 11:00, Tiwei Bie wrote:
> On Fri, Sep 07, 2018 at 09:00:49AM -0400, Michael S. Tsirkin wrote:
>> On Fri, Sep 07, 2018 at 09:22:25AM +0800, Tiwei Bie wrote:
>>> On Mon, Aug 27, 2018 at 05:00:40PM +0300, Michael S. Tsirkin wrote:
>>>> Are there still plans to test the performance with vhost pmd?
>>>> vhost doesn't seem to show a performance gain ...
>>>>
>>> I tried some performance tests with vhost PMD. In guest, the
>>> XDP program will return XDP_DROP directly. And in host, testpmd
>>> will do txonly fwd.
>>>
>>> When burst size is 1 and packet size is 64 in testpmd and
>>> testpmd needs to iterate 5 Tx queues (but only the first two
>>> queues are enabled) to prepare and inject packets, I got ~12%
>>> performance boost (5.7Mpps -> 6.4Mpps). And if the vhost PMD
>>> is faster (e.g. just need to iterate the first two queues to
>>> prepare and inject packets), then I got similar performance
>>> for both rings (~9.9Mpps) (packed ring's performance can be
>>> lower, because it's more complicated in driver.)
>>>
>>> I think packed ring makes vhost PMD faster, but it doesn't make
>>> the driver faster. In packed ring, the ring is simplified, and
>>> the handling of the ring in vhost (device) is also simplified,
>>> but things are not simplified in driver, e.g. although there is
>>> no desc table in the virtqueue anymore, driver still needs to
>>> maintain a private desc state table (which is still managed as
>>> a list in this patch set) to support the out-of-order desc
>>> processing in vhost (device).
>>>
>>> I think this patch set is mainly to make the driver have a full
>>> functional support for the packed ring, which makes it possible
>>> to leverage the packed ring feature in vhost (device). But I'm
>>> not sure whether there is any other better idea, I'd like to
>>> hear your thoughts. Thanks!
>> Just this: Jens seems to report a nice gain with virtio and
>> vhost pmd across the board. Try to compare virtio and
>> virtio pmd to see what the pmd does better?
> The virtio PMD (drivers/net/virtio) in DPDK doesn't need to share
> the virtio ring operation code with other drivers and is highly
> optimized for network. E.g. in Rx, the Rx burst function won't
> chain descs. So the ID management for the Rx ring can be quite
> simple and straightforward, we just need to initialize these IDs
> when initializing the ring and don't need to change these IDs
> in data path anymore (the mergeable Rx code in that patch set
> assumes the descs will be written back in order, which should be
> fixed. I.e., the ID in the desc should be used to index vq->descx[]).
> The Tx code in that patch set also assumes the descs will be
> written back by device in order, which should be fixed.

Yes it is. I think I've pointed it out in some early version of the
pmd patch. So I suspect part (or all) of the boost may come from the
in-order feature.

>
> But in kernel virtio driver, the virtio_ring.c is very generic.
> The enqueue (virtqueue_add()) and dequeue (virtqueue_get_buf_ctx())
> functions need to support all the virtio devices and should be
> able to handle all the possible cases that may happen. So although
> the packed ring can be very efficient in some cases, currently
> the room to optimize the performance in kernel's virtio_ring.c
> isn't that much. If we want to take full advantage of the
> packed ring's efficiency, we need some further e.g. API changes
> in virtio_ring.c, which shouldn't be part of this patch set.

Could you please share more thoughts on this, e.g. how to improve the
API? Notice that since the API is shared by both split ring and packed
ring, it may improve the performance of split ring as well. One can
easily imagine a batching API, but it does not have many real users
now; the only case is XDP transmission, which can accept an array of
XDP frames.
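
(To make the batching idea concrete, a purely hypothetical sketch --
the name is invented, and a real implementation would amortize the
avail flag/index updates across the whole batch instead of just
looping:

"""
static inline int virtqueue_add_outbufs_batch(struct virtqueue *vq,
					      struct scatterlist *sgs[],
					      unsigned int nbufs,
					      void *data[], gfp_t gfp)
{
	unsigned int i;
	int err;

	/* Naive version: one virtqueue_add_outbuf() per buffer. */
	for (i = 0; i < nbufs; i++) {
		err = virtqueue_add_outbuf(vq, sgs[i], 1, data[i], gfp);
		if (err)
			return err;
	}
	return 0;
}
"""
)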

> So
> I still think this patch set is mainly to make the kernel virtio
> driver to have a full functional support of the packed ring, and
> we can't expect impressive performance boost with it.

We can only gain when the virtio ring layout is the bottleneck. If
there're bottlenecks elsewhere, we probably won't see any increase in
the numbers. Vhost-net is an example, and lots of optimizations have
proved that the virtio ring is not the main bottleneck for the current
code. I suspect it's also the case for the virtio driver. Did perf
tell us anything interesting in the virtio driver?

Thanks

[...]


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [virtio-dev] Re: [PATCH net-next v2 0/5] virtio: support packed ring
  2018-09-10  3:33         ` Jason Wang
@ 2018-09-11  5:37           ` Tiwei Bie
  2018-09-12 16:16             ` Michael S. Tsirkin
  0 siblings, 1 reply; 53+ messages in thread
From: Tiwei Bie @ 2018-09-11  5:37 UTC (permalink / raw)
  To: Jason Wang
  Cc: Michael S. Tsirkin, virtualization, linux-kernel, netdev,
	virtio-dev, wexu, jfreimann

On Mon, Sep 10, 2018 at 11:33:17AM +0800, Jason Wang wrote:
> On 2018-09-10 11:00, Tiwei Bie wrote:
> > On Fri, Sep 07, 2018 at 09:00:49AM -0400, Michael S. Tsirkin wrote:
> > > On Fri, Sep 07, 2018 at 09:22:25AM +0800, Tiwei Bie wrote:
> > > > On Mon, Aug 27, 2018 at 05:00:40PM +0300, Michael S. Tsirkin wrote:
> > > > > Are there still plans to test the performance with vhost pmd?
> > > > > vhost doesn't seem to show a performance gain ...
> > > > > 
> > > > I tried some performance tests with vhost PMD. In guest, the
> > > > XDP program will return XDP_DROP directly. And in host, testpmd
> > > > will do txonly fwd.
> > > > 
> > > > When burst size is 1 and packet size is 64 in testpmd and
> > > > testpmd needs to iterate 5 Tx queues (but only the first two
> > > > queues are enabled) to prepare and inject packets, I got ~12%
> > > > performance boost (5.7Mpps -> 6.4Mpps). And if the vhost PMD
> > > > is faster (e.g. just need to iterate the first two queues to
> > > > prepare and inject packets), then I got similar performance
> > > > for both rings (~9.9Mpps) (packed ring's performance can be
> > > > lower, because it's more complicated in driver.)
> > > > 
> > > > I think packed ring makes vhost PMD faster, but it doesn't make
> > > > the driver faster. In packed ring, the ring is simplified, and
> > > > the handling of the ring in vhost (device) is also simplified,
> > > > but things are not simplified in driver, e.g. although there is
> > > > no desc table in the virtqueue anymore, driver still needs to
> > > > maintain a private desc state table (which is still managed as
> > > > a list in this patch set) to support the out-of-order desc
> > > > processing in vhost (device).
> > > > 
> > > > I think this patch set is mainly to make the driver have a full
> > > > functional support for the packed ring, which makes it possible
> > > > to leverage the packed ring feature in vhost (device). But I'm
> > > > not sure whether there is any other better idea, I'd like to
> > > > hear your thoughts. Thanks!
> > > Just this: Jens seems to report a nice gain with virtio and
> > > vhost pmd across the board. Try to compare virtio and
> > > virtio pmd to see what the pmd does better?
> > The virtio PMD (drivers/net/virtio) in DPDK doesn't need to share
> > the virtio ring operation code with other drivers and is highly
> > optimized for network. E.g. in Rx, the Rx burst function won't
> > chain descs. So the ID management for the Rx ring can be quite
> > simple and straightforward, we just need to initialize these IDs
> > when initializing the ring and don't need to change these IDs
> > in data path anymore (the mergeable Rx code in that patch set
> > assumes the descs will be written back in order, which should be
> > fixed. I.e., the ID in the desc should be used to index vq->descx[]).
> > The Tx code in that patch set also assumes the descs will be
> > written back by device in order, which should be fixed.
> 
> Yes it is. I think I've pointed it out in some early version of the pmd patch.
> So I suspect part (or all) of the boost may come from the in-order feature.
> 
> > 
> > But in kernel virtio driver, the virtio_ring.c is very generic.
> > The enqueue (virtqueue_add()) and dequeue (virtqueue_get_buf_ctx())
> > functions need to support all the virtio devices and should be
> > able to handle all the possible cases that may happen. So although
> > the packed ring can be very efficient in some cases, currently
> > the room to optimize the performance in kernel's virtio_ring.c
> > isn't that much. If we want to take full advantage of the
> > packed ring's efficiency, we need some further e.g. API changes
> > in virtio_ring.c, which shouldn't be part of this patch set.
> 
> Could you please share more thoughts on this, e.g. how to improve the API?
> Notice that since the API is shared by both split ring and packed ring, it may
> improve the performance of split ring as well. One can easily imagine a
> batching API, but it does not have many real users now; the only case is
> XDP transmission, which can accept an array of XDP frames.

I don't have detailed thoughts on this yet. But kernel's
virtio_ring.c is quite generic compared with what we did
in the virtio PMD.

[...]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 2/5] virtio_ring: support creating packed ring
  2018-09-10  2:28     ` Tiwei Bie
@ 2018-09-12  7:51       ` Tiwei Bie
  2018-09-12 16:12         ` Michael S. Tsirkin
  2018-09-12 12:45       ` Michael S. Tsirkin
  1 sibling, 1 reply; 53+ messages in thread
From: Tiwei Bie @ 2018-09-12  7:51 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: jasowang, virtualization, linux-kernel, netdev, virtio-dev, wexu,
	jfreimann

On Mon, Sep 10, 2018 at 10:28:37AM +0800, Tiwei Bie wrote:
> On Fri, Sep 07, 2018 at 10:03:24AM -0400, Michael S. Tsirkin wrote:
> > On Wed, Jul 11, 2018 at 10:27:08AM +0800, Tiwei Bie wrote:
> > > This commit introduces support for creating the packed ring.
> > > All split-ring-specific functions get a _split suffix.
> > > Some necessary stubs for the packed ring are also added.
> > > 
> > > Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> > 
[...]
> > > +
> > > +/*
> > > + * The layout for the packed ring is a continuous chunk of memory
> > > + * which looks like this.
> > > + *
> > > + * struct vring_packed {
> > > + *	// The actual descriptors (16 bytes each)
> > > + *	struct vring_packed_desc desc[num];
> > > + *
> > > + *	// Padding to the next align boundary.
> > > + *	char pad[];
> > > + *
> > > + *	// Driver Event Suppression
> > > + *	struct vring_packed_desc_event driver;
> > > + *
> > > + *	// Device Event Suppression
> > > + *	struct vring_packed_desc_event device;
> > > + * };
> > > + */
> > 
> > Why not just allocate event structures separately?
> > Is it a win to have them share a cache line for some reason?

Users may call vring_new_virtqueue() with preallocated
memory to set up the ring, e.g.:

https://github.com/torvalds/linux/blob/11da3a7f84f1/drivers/s390/virtio/virtio_ccw.c#L513-L522
https://github.com/torvalds/linux/blob/11da3a7f84f1/drivers/misc/mic/vop/vop_main.c#L306-L307
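
For reference, such a caller looks roughly like this (a sketch
modeled on the ccw code above; error handling and unrelated
details are trimmed, so the surrounding names are approximate):

	/* The transport allocates one contiguous chunk up front and
	 * hands it to vring_new_virtqueue(), which only lays the
	 * ring structures out on top of that memory. */
	size = PAGE_ALIGN(vring_size(num, KVM_VIRTIO_CCW_RING_ALIGN));
	info->queue = alloc_pages_exact(size, GFP_KERNEL | __GFP_ZERO);
	if (!info->queue)
		goto out_err;

	vq = vring_new_virtqueue(index, num, KVM_VIRTIO_CCW_RING_ALIGN,
				 vdev, true /* weak_barriers */, ctx,
				 info->queue, virtio_ccw_kvm_notify,
				 callback, name);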

Below is the corresponding definition in split ring:

https://github.com/oasis-tcs/virtio-spec/blob/89dd55f5e606/split-ring.tex#L64-L78

If my understanding is correct, this is just for legacy
interfaces, so we won't define a fixed layout for the packed
ring and don't need to support vring_new_virtqueue() for
packed ring. Is that correct? Thanks!



> 
> Will do that.
> 
> > 
> > > +static inline void vring_init_packed(struct vring_packed *vr, unsigned int num,
> > > +				     void *p, unsigned long align)
> > > +{
> > > +	vr->num = num;
> > > +	vr->desc = p;
> > > +	vr->driver = (void *)ALIGN(((uintptr_t)p +
> > > +		sizeof(struct vring_packed_desc) * num), align);
> > > +	vr->device = vr->driver + 1;
> > > +}
> > 
> > What's all this about alignment? Where does it come from?
> 
> It comes from the `vring_align` parameter of vring_create_virtqueue()
> and vring_new_virtqueue():
> 
> https://github.com/torvalds/linux/blob/a49a9dcce802/drivers/virtio/virtio_ring.c#L1061
> https://github.com/torvalds/linux/blob/a49a9dcce802/drivers/virtio/virtio_ring.c#L1123
> 
> Should I just ignore it in packed ring?
> 
> CCW defined this:
> 
> #define KVM_VIRTIO_CCW_RING_ALIGN 4096
> 
> I'm not familiar with CCW. Currently, in this patch set, packed ring
> isn't enabled on CCW because some legacy accessors are not implemented
> in packed ring yet.
> 
> > 
> > > +
> > > +static inline unsigned vring_size_packed(unsigned int num, unsigned long align)
> > > +{
> > > +	return ((sizeof(struct vring_packed_desc) * num + align - 1)
> > > +		& ~(align - 1)) + sizeof(struct vring_packed_desc_event) * 2;
> > > +}
> [...]
> > > @@ -1129,10 +1388,17 @@ struct virtqueue *vring_new_virtqueue(unsigned int index,
> > >  				      void (*callback)(struct virtqueue *vq),
> > >  				      const char *name)
> > >  {
> > > -	struct vring vring;
> > > -	vring_init(&vring, num, pages, vring_align);
> > > -	return __vring_new_virtqueue(index, vring, vdev, weak_barriers, context,
> > > -				     notify, callback, name);
> > > +	union vring_union vring;
> > > +	bool packed;
> > > +
> > > +	packed = virtio_has_feature(vdev, VIRTIO_F_RING_PACKED);
> > > +	if (packed)
> > > +		vring_init_packed(&vring.vring_packed, num, pages, vring_align);
> > > +	else
> > > +		vring_init(&vring.vring_split, num, pages, vring_align);
> > 
> > 
> > vring_init in the UAPI header is more or less a bug.
> > I'd just stop using it, keep it around for legacy userspace.
> 
> Got it. I'd like to do that. Thanks.
> 
> > 
> > > +
> > > +	return __vring_new_virtqueue(index, vring, packed, vdev, weak_barriers,
> > > +				     context, notify, callback, name);
> > >  }
> > >  EXPORT_SYMBOL_GPL(vring_new_virtqueue);
> > >  
> > > @@ -1142,7 +1408,9 @@ void vring_del_virtqueue(struct virtqueue *_vq)
> > >  
> > >  	if (vq->we_own_ring) {
> > >  		vring_free_queue(vq->vq.vdev, vq->queue_size_in_bytes,
> > > -				 vq->vring.desc, vq->queue_dma_addr);
> > > +				 vq->packed ? (void *)vq->vring_packed.desc :
> > > +					      (void *)vq->vring.desc,
> > > +				 vq->queue_dma_addr);
> > >  	}
> > >  	list_del(&_vq->list);
> > >  	kfree(vq);
> > > @@ -1184,7 +1452,7 @@ unsigned int virtqueue_get_vring_size(struct virtqueue *_vq)
> > >  
> > >  	struct vring_virtqueue *vq = to_vvq(_vq);
> > >  
> > > -	return vq->vring.num;
> > > +	return vq->packed ? vq->vring_packed.num : vq->vring.num;
> > >  }
> > >  EXPORT_SYMBOL_GPL(virtqueue_get_vring_size);
> > >  
> > > @@ -1227,6 +1495,10 @@ dma_addr_t virtqueue_get_avail_addr(struct virtqueue *_vq)
> > >  
> > >  	BUG_ON(!vq->we_own_ring);
> > >  
> > > +	if (vq->packed)
> > > +		return vq->queue_dma_addr + ((char *)vq->vring_packed.driver -
> > > +				(char *)vq->vring_packed.desc);
> > > +
> > >  	return vq->queue_dma_addr +
> > >  		((char *)vq->vring.avail - (char *)vq->vring.desc);
> > >  }
> > > @@ -1238,11 +1510,16 @@ dma_addr_t virtqueue_get_used_addr(struct virtqueue *_vq)
> > >  
> > >  	BUG_ON(!vq->we_own_ring);
> > >  
> > > +	if (vq->packed)
> > > +		return vq->queue_dma_addr + ((char *)vq->vring_packed.device -
> > > +				(char *)vq->vring_packed.desc);
> > > +
> > >  	return vq->queue_dma_addr +
> > >  		((char *)vq->vring.used - (char *)vq->vring.desc);
> > >  }
> > >  EXPORT_SYMBOL_GPL(virtqueue_get_used_addr);
> > >  
> > > +/* Only available for split ring */
> > >  const struct vring *virtqueue_get_vring(struct virtqueue *vq)
> > >  {
> > >  	return &to_vvq(vq)->vring;
> > > diff --git a/include/linux/virtio_ring.h b/include/linux/virtio_ring.h
> > > index fab02133a919..992142b35f55 100644
> > > --- a/include/linux/virtio_ring.h
> > > +++ b/include/linux/virtio_ring.h
> > > @@ -60,6 +60,11 @@ static inline void virtio_store_mb(bool weak_barriers,
> > >  struct virtio_device;
> > >  struct virtqueue;
> > >  
> > > +union vring_union {
> > > +	struct vring vring_split;
> > > +	struct vring_packed vring_packed;
> > > +};
> > > +
> > >  /*
> > >   * Creates a virtqueue and allocates the descriptor ring.  If
> > >   * may_reduce_num is set, then this may allocate a smaller ring than
> > > @@ -79,7 +84,8 @@ struct virtqueue *vring_create_virtqueue(unsigned int index,
> > >  
> > >  /* Creates a virtqueue with a custom layout. */
> > >  struct virtqueue *__vring_new_virtqueue(unsigned int index,
> > > -					struct vring vring,
> > > +					union vring_union vring,
> > > +					bool packed,
> > >  					struct virtio_device *vdev,
> > >  					bool weak_barriers,
> > >  					bool ctx,
> > > -- 
> > > 2.18.0

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 2/5] virtio_ring: support creating packed ring
  2018-09-10  2:28     ` Tiwei Bie
  2018-09-12  7:51       ` Tiwei Bie
@ 2018-09-12 12:45       ` Michael S. Tsirkin
  1 sibling, 0 replies; 53+ messages in thread
From: Michael S. Tsirkin @ 2018-09-12 12:45 UTC (permalink / raw)
  To: Tiwei Bie
  Cc: jasowang, virtualization, linux-kernel, netdev, virtio-dev, wexu,
	jfreimann

On Mon, Sep 10, 2018 at 10:28:37AM +0800, Tiwei Bie wrote:
> On Fri, Sep 07, 2018 at 10:03:24AM -0400, Michael S. Tsirkin wrote:
> > On Wed, Jul 11, 2018 at 10:27:08AM +0800, Tiwei Bie wrote:
> > > This commit introduces support for creating the packed ring.
> > > All split-ring-specific functions get a _split suffix.
> > > Some necessary stubs for the packed ring are also added.
> > > 
> > > Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> > 
> > I'd rather have a patch just renaming split functions, then
> > add all packed stuff in as a separate patch on top.
> 
> Sure, I will do that.
> 
> > 
> > 
> > > ---
> > >  drivers/virtio/virtio_ring.c | 801 +++++++++++++++++++++++------------
> > >  include/linux/virtio_ring.h  |   8 +-
> > >  2 files changed, 546 insertions(+), 263 deletions(-)
> > > 
> > > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > > index 814b395007b2..c4f8abc7445a 100644
> > > --- a/drivers/virtio/virtio_ring.c
> > > +++ b/drivers/virtio/virtio_ring.c
> > > @@ -60,11 +60,15 @@ struct vring_desc_state {
> > >  	struct vring_desc *indir_desc;	/* Indirect descriptor, if any. */
> > >  };
> > >  
> > > +struct vring_desc_state_packed {
> > > +	int next;			/* The next desc state. */
> > 
> > So this can go away with IN_ORDER?
> 
> Yes. If IN_ORDER is negotiated, next won't be needed anymore.
> Currently, IN_ORDER isn't included in this patch set, because
> some changes for split ring are needed first to make sure that
> it will use the descs in the expected order. After that,
> optimizations can be done for the split ring and the packed
> ring respectively.
> 
> > 
> > > +};
> > > +
> > >  struct vring_virtqueue {
> > >  	struct virtqueue vq;
> > >  
> > > -	/* Actual memory layout for this queue */
> > > -	struct vring vring;
> > > +	/* Is this a packed ring? */
> > > +	bool packed;
> > >  
> > >  	/* Can we use weak barriers? */
> > >  	bool weak_barriers;
> > > @@ -86,11 +90,39 @@ struct vring_virtqueue {
> > >  	/* Last used index we've seen. */
> > >  	u16 last_used_idx;
> > >  
> > > -	/* Last written value to avail->flags */
> > > -	u16 avail_flags_shadow;
> > > +	union {
> > > +		/* Available for split ring */
> > > +		struct {
> > > +			/* Actual memory layout for this queue. */
> > > +			struct vring vring;
> > >  
> > > -	/* Last written value to avail->idx in guest byte order */
> > > -	u16 avail_idx_shadow;
> > > +			/* Last written value to avail->flags */
> > > +			u16 avail_flags_shadow;
> > > +
> > > +			/* Last written value to avail->idx in
> > > +			 * guest byte order. */
> > > +			u16 avail_idx_shadow;
> > > +		};
> > 
> > Name this field split so it's easier to detect misuse of e.g.
> > packed fields in split code?
> 
> Good point, I'll do that.
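
For concreteness, something like this (a sketch reusing the
fields from your current patch):

	union {
		/* Available for split ring */
		struct {
			struct vring vring;
			u16 avail_flags_shadow;
			u16 avail_idx_shadow;
		} split;

		/* Available for packed ring */
		struct {
			struct vring_packed vring;
			bool avail_wrap_counter;
			bool used_wrap_counter;
			u16 next_avail_idx;
			u16 event_flags_shadow;
		} packed;
	};

Then a stray vq->split.avail_idx_shadow in packed code (or a
vq->packed.next_avail_idx in split code) sticks out immediately.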
> 
> > 
> > > +
> > > +		/* Available for packed ring */
> > > +		struct {
> > > +			/* Actual memory layout for this queue. */
> > > +			struct vring_packed vring_packed;
> > > +
> > > +			/* Driver ring wrap counter. */
> > > +			bool avail_wrap_counter;
> > > +
> > > +			/* Device ring wrap counter. */
> > > +			bool used_wrap_counter;
> > > +
> > > +			/* Index of the next avail descriptor. */
> > > +			u16 next_avail_idx;
> > > +
> > > +			/* Last written value to driver->flags in
> > > +			 * guest byte order. */
> > > +			u16 event_flags_shadow;
> > > +		};
> > > +	};
> [...]
> > > +
> > > +/*
> > > + * The layout for the packed ring is a continuous chunk of memory
> > > + * which looks like this.
> > > + *
> > > + * struct vring_packed {
> > > + *	// The actual descriptors (16 bytes each)
> > > + *	struct vring_packed_desc desc[num];
> > > + *
> > > + *	// Padding to the next align boundary.
> > > + *	char pad[];
> > > + *
> > > + *	// Driver Event Suppression
> > > + *	struct vring_packed_desc_event driver;
> > > + *
> > > + *	// Device Event Suppression
> > > + *	struct vring_packed_desc_event device;
> > > + * };
> > > + */
> > 
> > Why not just allocate event structures separately?
> > Is it a win to have them share a cache line for some reason?
> 
> Will do that.
> 
> > 
> > > +static inline void vring_init_packed(struct vring_packed *vr, unsigned int num,
> > > +				     void *p, unsigned long align)
> > > +{
> > > +	vr->num = num;
> > > +	vr->desc = p;
> > > +	vr->driver = (void *)ALIGN(((uintptr_t)p +
> > > +		sizeof(struct vring_packed_desc) * num), align);
> > > +	vr->device = vr->driver + 1;
> > > +}
> > 
> > What's all this about alignment? Where does it come from?
> 
> It comes from the `vring_align` parameter of vring_create_virtqueue()
> and vring_new_virtqueue():
> 
> https://github.com/torvalds/linux/blob/a49a9dcce802/drivers/virtio/virtio_ring.c#L1061
> https://github.com/torvalds/linux/blob/a49a9dcce802/drivers/virtio/virtio_ring.c#L1123

Note the TODO - we just never got to fixing it. It would be great to fix
this for virtio 1 rings, but if not - at least let's not add this
stuff for packed rings.

> Should I just ignore it in packed ring?
> 
> CCW defined this:
> 
> #define KVM_VIRTIO_CCW_RING_ALIGN 4096
> 
> I'm not familiar with CCW. Currently, in this patch set, packed ring
> isn't enabled on CCW because some legacy accessors are not implemented
> in packed ring yet.


Then you need to take steps not to negotiate this feature bit for
ccw drivers.
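
I.e. mask it out in the transport before features are finalized.
A sketch (the helper name here is only illustrative):

	static void ccw_transport_features(struct virtio_device *vdev)
	{
		/* Packed ring isn't supported by virtio_ccw yet, so
		 * make sure the driver never accepts the feature,
		 * no matter what the device offers. */
		__virtio_clear_bit(vdev, VIRTIO_F_RING_PACKED);
	}

called from the ccw finalize_features path next to
vring_transport_features().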

> > [...]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 1/5] virtio: add packed ring definitions
  2018-09-10  2:13     ` Tiwei Bie
@ 2018-09-12 12:53       ` Michael S. Tsirkin
  0 siblings, 0 replies; 53+ messages in thread
From: Michael S. Tsirkin @ 2018-09-12 12:53 UTC (permalink / raw)
  To: Tiwei Bie
  Cc: jasowang, virtualization, linux-kernel, netdev, virtio-dev, wexu,
	jfreimann

On Mon, Sep 10, 2018 at 10:13:22AM +0800, Tiwei Bie wrote:
> On Fri, Sep 07, 2018 at 09:51:23AM -0400, Michael S. Tsirkin wrote:
> > On Wed, Jul 11, 2018 at 10:27:07AM +0800, Tiwei Bie wrote:
> > > Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> > > ---
> > >  include/uapi/linux/virtio_config.h |  3 +++
> > >  include/uapi/linux/virtio_ring.h   | 43 ++++++++++++++++++++++++++++++
> > >  2 files changed, 46 insertions(+)
> > > 
> > > diff --git a/include/uapi/linux/virtio_config.h b/include/uapi/linux/virtio_config.h
> > > index 449132c76b1c..1196e1c1d4f6 100644
> > > --- a/include/uapi/linux/virtio_config.h
> > > +++ b/include/uapi/linux/virtio_config.h
> > > @@ -75,6 +75,9 @@
> > >   */
> > >  #define VIRTIO_F_IOMMU_PLATFORM		33
> > >  
> > > +/* This feature indicates support for the packed virtqueue layout. */
> > > +#define VIRTIO_F_RING_PACKED		34
> > > +
> > >  /*
> > >   * Does the device support Single Root I/O Virtualization?
> > >   */
> > > diff --git a/include/uapi/linux/virtio_ring.h b/include/uapi/linux/virtio_ring.h
> > > index 6d5d5faa989b..0254a2ba29cf 100644
> > > --- a/include/uapi/linux/virtio_ring.h
> > > +++ b/include/uapi/linux/virtio_ring.h
> > > @@ -44,6 +44,10 @@
> > >  /* This means the buffer contains a list of buffer descriptors. */
> > >  #define VRING_DESC_F_INDIRECT	4
> > >  
> > > +/* Mark a descriptor as available or used. */
> > > +#define VRING_DESC_F_AVAIL	(1ul << 7)
> > > +#define VRING_DESC_F_USED	(1ul << 15)
> > > +
> > >  /* The Host uses this in used->flags to advise the Guest: don't kick me when
> > >   * you add a buffer.  It's unreliable, so it's simply an optimization.  Guest
> > >   * will still kick if it's out of buffers. */
> > > @@ -53,6 +57,17 @@
> > >   * optimization.  */
> > >  #define VRING_AVAIL_F_NO_INTERRUPT	1
> > >  
> > > +/* Enable events. */
> > > +#define VRING_EVENT_F_ENABLE	0x0
> > > +/* Disable events. */
> > > +#define VRING_EVENT_F_DISABLE	0x1
> > > +/*
> > > + * Enable events for a specific descriptor
> > > + * (as specified by Descriptor Ring Change Event Offset/Wrap Counter).
> > > + * Only valid if VIRTIO_RING_F_EVENT_IDX has been negotiated.
> > > + */
> > > +#define VRING_EVENT_F_DESC	0x2
> > > +
> > >  /* We support indirect buffer descriptors */
> > >  #define VIRTIO_RING_F_INDIRECT_DESC	28
> > >  
> > 
> > These are for the packed ring, right? Pls prefix accordingly.
> 
> How about something like this:
> 
> #define VRING_PACKED_DESC_F_AVAIL	(1u << 7)
> #define VRING_PACKED_DESC_F_USED	(1u << 15)

_F_ should be a bit number, not a shifted value.
Yes, I know the current virtio_ring.h is inconsistent in
that respect. Changes are welcome, though we need to
maintain the old names for the sake of old apps.

> 
> #define VRING_PACKED_EVENT_F_ENABLE	0x0
> #define VRING_PACKED_EVENT_F_DISABLE	0x1
> #define VRING_PACKED_EVENT_F_DESC	0x2

These are values, so I think they should not have _F_.
Yes, I know the current virtio_ring.h is inconsistent in
that respect.


> 
> > Also, you likely need macros for the wrap counters.
> 
> How about something like this:
> 
> #define VRING_PACKED_EVENT_WRAP_COUNTER_SHIFT	15

Given it's a single bit, maybe even
VRING_PACKED_EVENT_F_WRAP_CTR

> #define VRING_PACKED_EVENT_WRAP_COUNTER_MASK	\
> 			(1u << VRING_PACKED_EVENT_WRAP_COUNTER_SHIFT)
> #define VRING_PACKED_EVENT_OFFSET_MASK	\
> 			((1u << VRING_PACKED_EVENT_WRAP_COUNTER_SHIFT) - 1)

I'm not sure the mask is justified.
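
To illustrate (a sketch, final names to be decided): with bit
numbers you'd have

	#define VRING_PACKED_DESC_F_AVAIL	7
	#define VRING_PACKED_DESC_F_USED	15
	#define VRING_PACKED_EVENT_F_WRAP_CTR	15

and decoding an event off_wrap then only needs the bit number:

	u16 off_wrap = le16_to_cpu(e->off_wrap);
	bool wrap_ctr = off_wrap >> VRING_PACKED_EVENT_F_WRAP_CTR;
	u16 event_off = off_wrap & ~(1U << VRING_PACKED_EVENT_F_WRAP_CTR);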

> > 
> > > @@ -171,4 +186,32 @@ static inline int vring_need_event(__u16 event_idx, __u16 new_idx, __u16 old)
> > >  	return (__u16)(new_idx - event_idx - 1) < (__u16)(new_idx - old);
> > >  }
> > >  
> > > +struct vring_packed_desc_event {
> > > +	/* Descriptor Ring Change Event Offset/Wrap Counter. */
> > > +	__virtio16 off_wrap;
> > > +	/* Descriptor Ring Change Event Flags. */
> > > +	__virtio16 flags;
> > > +};
> > > +
> > > +struct vring_packed_desc {
> > > +	/* Buffer Address. */
> > > +	__virtio64 addr;
> > > +	/* Buffer Length. */
> > > +	__virtio32 len;
> > > +	/* Buffer ID. */
> > > +	__virtio16 id;
> > > +	/* The flags depending on descriptor type. */
> > > +	__virtio16 flags;
> > > +};
> > 
> > Don't use __virtioXX types, just __leXX ones.
> 
> Got it, will do that.
> 
> > 
> > > +
> > > +struct vring_packed {
> > > +	unsigned int num;
> > > +
> > > +	struct vring_packed_desc *desc;
> > > +
> > > +	struct vring_packed_desc_event *driver;
> > > +
> > > +	struct vring_packed_desc_event *device;
> > > +};
> > > +
> > >  #endif /* _UAPI_LINUX_VIRTIO_RING_H */
> > > -- 
> > > 2.18.0

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [virtio-dev] Re: [PATCH net-next v2 0/5] virtio: support packed ring
  2018-09-10  3:00       ` Tiwei Bie
  2018-09-10  3:33         ` Jason Wang
@ 2018-09-12 13:06         ` Michael S. Tsirkin
  1 sibling, 0 replies; 53+ messages in thread
From: Michael S. Tsirkin @ 2018-09-12 13:06 UTC (permalink / raw)
  To: Tiwei Bie
  Cc: jasowang, virtualization, linux-kernel, netdev, virtio-dev, wexu,
	jfreimann

On Mon, Sep 10, 2018 at 11:00:53AM +0800, Tiwei Bie wrote:
> On Fri, Sep 07, 2018 at 09:00:49AM -0400, Michael S. Tsirkin wrote:
> > On Fri, Sep 07, 2018 at 09:22:25AM +0800, Tiwei Bie wrote:
> > > On Mon, Aug 27, 2018 at 05:00:40PM +0300, Michael S. Tsirkin wrote:
> > > > Are there still plans to test the performance with vhost pmd?
> > > > vhost doesn't seem to show a performance gain ...
> > > > 
> > > 
> > > I tried some performance tests with vhost PMD. In guest, the
> > > XDP program will return XDP_DROP directly. And in host, testpmd
> > > will do txonly fwd.
> > > 
> > > When burst size is 1 and packet size is 64 in testpmd and
> > > testpmd needs to iterate 5 Tx queues (but only the first two
> > > queues are enabled) to prepare and inject packets, I got ~12%
> > > performance boost (5.7Mpps -> 6.4Mpps). And if the vhost PMD
> > > is faster (e.g. just need to iterate the first two queues to
> > > prepare and inject packets), then I got similar performance
> > > for both rings (~9.9Mpps) (packed ring's performance can be
> > > lower, because it's more complicated in driver.)
> > > 
> > > I think packed ring makes vhost PMD faster, but it doesn't make
> > > the driver faster. In packed ring, the ring is simplified, and
> > > the handling of the ring in vhost (device) is also simplified,
> > > but things are not simplified in driver, e.g. although there is
> > > no desc table in the virtqueue anymore, driver still needs to
> > > maintain a private desc state table (which is still managed as
> > > a list in this patch set) to support the out-of-order desc
> > > processing in vhost (device).
> > > 
> > > I think this patch set is mainly to make the driver have a full
> > > functional support for the packed ring, which makes it possible
> > > to leverage the packed ring feature in vhost (device). But I'm
> > > not sure whether there is any other better idea, I'd like to
> > > hear your thoughts. Thanks!
> > 
> > Just this: Jens seems to report a nice gain with virtio and
> > vhost pmd across the board. Try to compare virtio and
> > virtio pmd to see what the pmd does better?
> 
> The virtio PMD (drivers/net/virtio) in DPDK doesn't need to share
> the virtio ring operation code with other drivers and is highly
> optimized for network. E.g. in Rx, the Rx burst function won't
> chain descs.
> So the ID management for the Rx ring can be quite
> simple and straightforward, we just need to initialize these IDs
> when initializing the ring and don't need to change these IDs
> in data path anymore (the mergable Rx code in that patch set
> assumes the descs will be written back in order, which should be
> fixed. I.e., the ID in the desc should be used to index vq->descx[]).
> The Tx code in that patch set also assumes the descs will be
> written back by device in order, which should be fixed.
> 
> But in kernel virtio driver, the virtio_ring.c is very generic.
> The enqueue (virtqueue_add()) and dequeue (virtqueue_get_buf_ctx())
> functions need to support all the virtio devices and should be
> able to handle all the possible cases that may happen. So although
> the packed ring can be very efficient in some cases, currently
> the room to optimize the performance in kernel's virtio_ring.c
> isn't that much. If we want to take the fully advantage of the
> packed ring's efficiency, we need some further e.g. API changes
> in virtio_ring.c, which shouldn't be part of this patch set. So
> I still think this patch set is mainly to make the kernel virtio
> driver to have a full functional support of the packed ring, and
> we can't expect impressive performance boost with it.

So what are the cases that make things complex?
Maybe we should drop support for them completely.


> > 
> > 
> > > 
> > > > 
> > > > On Wed, Jul 11, 2018 at 10:27:06AM +0800, Tiwei Bie wrote:
> > > > > Hello everyone,
> > > > > 
> > > > > This patch set implements packed ring support in virtio driver.
> > > > > 
> > > > > Some functional tests have been done with Jason's
> > > > > packed ring implementation in vhost:
> > > > > 
> > > > > https://lkml.org/lkml/2018/7/3/33
> > > > > 
> > > > > Both of ping and netperf worked as expected.
> > > > > 
> > > > > v1 -> v2:
> > > > > - Use READ_ONCE() to read event off_wrap and flags together (Jason);
> > > > > - Add comments related to ccw (Jason);
> > > > > 
> > > > > RFC (v6) -> v1:
> > > > > - Avoid extra virtio_wmb() in virtqueue_enable_cb_delayed_packed()
> > > > >   when event idx is off (Jason);
> > > > > - Fix bufs calculation in virtqueue_enable_cb_delayed_packed() (Jason);
> > > > > - Test the state of the desc at used_idx instead of last_used_idx
> > > > >   in virtqueue_enable_cb_delayed_packed() (Jason);
> > > > > - Save wrap counter (as part of queue state) in the return value
> > > > >   of virtqueue_enable_cb_prepare_packed();
> > > > > - Refine the packed ring definitions in uapi;
> > > > > - Rebase on the net-next tree;
> > > > > 
> > > > > RFC v5 -> RFC v6:
> > > > > - Avoid tracking addr/len/flags when DMA API isn't used (MST/Jason);
> > > > > - Define wrap counter as bool (Jason);
> > > > > - Use ALIGN() in vring_init_packed() (Jason);
> > > > > - Avoid using pointer to track `next` in detach_buf_packed() (Jason);
> > > > > - Add comments for barriers (Jason);
> > > > > - Don't enable RING_PACKED on ccw for now (noticed by Jason);
> > > > > - Refine the memory barrier in virtqueue_poll();
> > > > > - Add a missing memory barrier in virtqueue_enable_cb_delayed_packed();
> > > > > - Remove the hacks in virtqueue_enable_cb_prepare_packed();
> > > > > 
> > > > > RFC v4 -> RFC v5:
> > > > > - Save DMA addr, etc in desc state (Jason);
> > > > > - Track used wrap counter;
> > > > > 
> > > > > RFC v3 -> RFC v4:
> > > > > - Make ID allocation support out-of-order (Jason);
> > > > > - Various fixes for EVENT_IDX support;
> > > > > 
> > > > > RFC v2 -> RFC v3:
> > > > > - Split into small patches (Jason);
> > > > > - Add helper virtqueue_use_indirect() (Jason);
> > > > > - Just set id for the last descriptor of a list (Jason);
> > > > > - Calculate the prev in virtqueue_add_packed() (Jason);
> > > > > - Fix/improve desc suppression code (Jason/MST);
> > > > > - Refine the code layout for XXX_split/packed and wrappers (MST);
> > > > > - Fix the comments and API in uapi (MST);
> > > > > - Remove the BUG_ON() for indirect (Jason);
> > > > > - Some other refinements and bug fixes;
> > > > > 
> > > > > RFC v1 -> RFC v2:
> > > > > - Add indirect descriptor support - compile test only;
> > > > > - Add event suppression supprt - compile test only;
> > > > > - Move vring_packed_init() out of uapi (Jason, MST);
> > > > > - Merge two loops into one in virtqueue_add_packed() (Jason);
> > > > > - Split vring_unmap_one() for packed ring and split ring (Jason);
> > > > > - Avoid using '%' operator (Jason);
> > > > > - Rename free_head -> next_avail_idx (Jason);
> > > > > - Add comments for virtio_wmb() in virtqueue_add_packed() (Jason);
> > > > > - Some other refinements and bug fixes;
> > > > > 
> > > > > Thanks!
> > > > > 
> > > > > Tiwei Bie (5):
> > > > >   virtio: add packed ring definitions
> > > > >   virtio_ring: support creating packed ring
> > > > >   virtio_ring: add packed ring support
> > > > >   virtio_ring: add event idx support in packed ring
> > > > >   virtio_ring: enable packed ring
> > > > > 
> > > > >  drivers/s390/virtio/virtio_ccw.c   |   14 +
> > > > >  drivers/virtio/virtio_ring.c       | 1365 ++++++++++++++++++++++------
> > > > >  include/linux/virtio_ring.h        |    8 +-
> > > > >  include/uapi/linux/virtio_config.h |    3 +
> > > > >  include/uapi/linux/virtio_ring.h   |   43 +
> > > > >  5 files changed, 1157 insertions(+), 276 deletions(-)
> > > > > 
> > > > > -- 
> > > > > 2.18.0
> > > > 
> > > > ---------------------------------------------------------------------
> > > > To unsubscribe, e-mail: virtio-dev-unsubscribe@lists.oasis-open.org
> > > > For additional commands, e-mail: virtio-dev-help@lists.oasis-open.org
> > > > 

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 2/5] virtio_ring: support creating packed ring
  2018-09-12  7:51       ` Tiwei Bie
@ 2018-09-12 16:12         ` Michael S. Tsirkin
  0 siblings, 0 replies; 53+ messages in thread
From: Michael S. Tsirkin @ 2018-09-12 16:12 UTC (permalink / raw)
  To: Tiwei Bie
  Cc: jasowang, virtualization, linux-kernel, netdev, virtio-dev, wexu,
	jfreimann

On Wed, Sep 12, 2018 at 03:51:40PM +0800, Tiwei Bie wrote:
> On Mon, Sep 10, 2018 at 10:28:37AM +0800, Tiwei Bie wrote:
> > On Fri, Sep 07, 2018 at 10:03:24AM -0400, Michael S. Tsirkin wrote:
> > > On Wed, Jul 11, 2018 at 10:27:08AM +0800, Tiwei Bie wrote:
> > > > This commit introduces support for creating the packed ring.
> > > > All split-ring-specific functions get a _split suffix.
> > > > Some necessary stubs for the packed ring are also added.
> > > > 
> > > > Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> > > 
> [...]
> > > > +
> > > > +/*
> > > > + * The layout for the packed ring is a continuous chunk of memory
> > > > + * which looks like this.
> > > > + *
> > > > + * struct vring_packed {
> > > > + *	// The actual descriptors (16 bytes each)
> > > > + *	struct vring_packed_desc desc[num];
> > > > + *
> > > > + *	// Padding to the next align boundary.
> > > > + *	char pad[];
> > > > + *
> > > > + *	// Driver Event Suppression
> > > > + *	struct vring_packed_desc_event driver;
> > > > + *
> > > > + *	// Device Event Suppression
> > > > + *	struct vring_packed_desc_event device;
> > > > + * };
> > > > + */
> > > 
> > > Why not just allocate event structures separately?
> > > Is it a win to have them share a cache line for some reason?
> 
> Users may call vring_new_virtqueue() with preallocated
> memory to setup the ring, e.g.:
> 
> https://github.com/torvalds/linux/blob/11da3a7f84f1/drivers/s390/virtio/virtio_ccw.c#L513-L522
> https://github.com/torvalds/linux/blob/11da3a7f84f1/drivers/misc/mic/vop/vop_main.c#L306-L307
> 
> Below is the corresponding definition in split ring:
> 
> https://github.com/oasis-tcs/virtio-spec/blob/89dd55f5e606/split-ring.tex#L64-L78
> 
> If my understanding is correct, this is just for legacy
> interfaces, and we won't define layout in packed ring
> and don't need to support vring_new_virtqueue() in packed
> ring. Is it correct? Thanks!

mic doesn't support 1.0 yet but ccw does.

It's probably best to look into converting ccw to
vring_create_virtqueue(), then you can just fail
vring_new_virtqueue() for packed.

Cornelia, are there any gotchas to look out for?
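
The failure path itself would be trivial, something like this in
vring_new_virtqueue() (sketch):

	/* Packed ring has no fixed layout to impose on
	 * caller-provided memory, so only split ring can be
	 * supported here. */
	if (virtio_has_feature(vdev, VIRTIO_F_RING_PACKED))
		return NULL;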

> [...]
^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [virtio-dev] Re: [PATCH net-next v2 0/5] virtio: support packed ring
  2018-09-11  5:37           ` Tiwei Bie
@ 2018-09-12 16:16             ` Michael S. Tsirkin
  2018-09-13  8:59               ` Tiwei Bie
  0 siblings, 1 reply; 53+ messages in thread
From: Michael S. Tsirkin @ 2018-09-12 16:16 UTC (permalink / raw)
  To: Tiwei Bie
  Cc: Jason Wang, virtualization, linux-kernel, netdev, virtio-dev,
	wexu, jfreimann

On Tue, Sep 11, 2018 at 01:37:26PM +0800, Tiwei Bie wrote:
> On Mon, Sep 10, 2018 at 11:33:17AM +0800, Jason Wang wrote:
> > On 2018-09-10 11:00, Tiwei Bie wrote:
> > > On Fri, Sep 07, 2018 at 09:00:49AM -0400, Michael S. Tsirkin wrote:
> > > > On Fri, Sep 07, 2018 at 09:22:25AM +0800, Tiwei Bie wrote:
> > > > > On Mon, Aug 27, 2018 at 05:00:40PM +0300, Michael S. Tsirkin wrote:
> > > > > > Are there still plans to test the performance with vhost pmd?
> > > > > > vhost doesn't seem to show a performance gain ...
> > > > > > 
> > > > > I tried some performance tests with vhost PMD. In guest, the
> > > > > XDP program will return XDP_DROP directly. And in host, testpmd
> > > > > will do txonly fwd.
> > > > > 
> > > > > When burst size is 1 and packet size is 64 in testpmd and
> > > > > testpmd needs to iterate 5 Tx queues (but only the first two
> > > > > queues are enabled) to prepare and inject packets, I got ~12%
> > > > > performance boost (5.7Mpps -> 6.4Mpps). And if the vhost PMD
> > > > > is faster (e.g. just need to iterate the first two queues to
> > > > > prepare and inject packets), then I got similar performance
> > > > > for both rings (~9.9Mpps) (packed ring's performance can be
> > > > > lower, because it's more complicated in driver.)
> > > > > 
> > > > > I think packed ring makes vhost PMD faster, but it doesn't make
> > > > > the driver faster. In packed ring, the ring is simplified, and
> > > > > the handling of the ring in vhost (device) is also simplified,
> > > > > but things are not simplified in driver, e.g. although there is
> > > > > no desc table in the virtqueue anymore, driver still needs to
> > > > > maintain a private desc state table (which is still managed as
> > > > > a list in this patch set) to support the out-of-order desc
> > > > > processing in vhost (device).
> > > > > 
> > > > > I think this patch set is mainly to make the driver have a full
> > > > > functional support for the packed ring, which makes it possible
> > > > > to leverage the packed ring feature in vhost (device). But I'm
> > > > > not sure whether there is any other better idea, I'd like to
> > > > > hear your thoughts. Thanks!
> > > > Just this: Jens seems to report a nice gain with virtio and
> > > > vhost pmd across the board. Try to compare virtio and
> > > > > virtio pmd to see what the pmd does better?
> > > The virtio PMD (drivers/net/virtio) in DPDK doesn't need to share
> > > the virtio ring operation code with other drivers and is highly
> > > optimized for network. E.g. in Rx, the Rx burst function won't
> > > chain descs. So the ID management for the Rx ring can be quite
> > > simple and straightforward, we just need to initialize these IDs
> > > when initializing the ring and don't need to change these IDs
> > > in data path anymore (the mergable Rx code in that patch set
> > > assumes the descs will be written back in order, which should be
> > > fixed. I.e., the ID in the desc should be used to index vq->descx[]).
> > > The Tx code in that patch set also assumes the descs will be
> > > written back by device in order, which should be fixed.
> > 
> > Yes it is. I think I've pointed it out in some early version of the pmd patch.
> > So I suspect part (or all) of the boost may come from the in-order feature.
> > 
> > > 
> > > But in kernel virtio driver, the virtio_ring.c is very generic.
> > > The enqueue (virtqueue_add()) and dequeue (virtqueue_get_buf_ctx())
> > > functions need to support all the virtio devices and should be
> > > able to handle all the possible cases that may happen. So although
> > > the packed ring can be very efficient in some cases, currently
> > > the room to optimize the performance in kernel's virtio_ring.c
> > > isn't that much. If we want to take the fully advantage of the
> > > packed ring's efficiency, we need some further e.g. API changes
> > > in virtio_ring.c, which shouldn't be part of this patch set.
> > 
> > Could you please share more thoughts on this, e.g. how to improve the API?
> > Note that since the API is shared by both split ring and packed ring, it may
> > improve the performance of split ring as well. One can easily imagine a
> > batching API, but it does not have many real users now; the only case is
> > XDP transmission, which can accept an array of XDP frames.
> 
> I don't have detailed thoughts on this yet. But kernel's
> virtio_ring.c is quite generic compared with what we did
> in virtio PMD.

In what way? What are some things that aren't implemented there?

If what you say is true, then we should take a careful look
and avoid supporting these generic things with the packed layout.
Once we do support them, it will be too late and we won't
be able to get the performance back.



> > 
> > > So
> > > I still think this patch set is mainly to make the kernel virtio
> > > driver to have a full functional support of the packed ring, and
> > > we can't expect impressive performance boost with it.
> > 
> > We can only gain when virtio ring layout is the bottleneck. If there're
> > bottlenecks elsewhere, we probably won't see any increasing in the numbers.
> > Vhost-net is an example, and lots of optimizations have proved that virtio
> > ring is not the main bottleneck for the current codes. I suspect it also the
> > case of virtio driver. Did perf tell us any interesting things in virtio
> > driver?
> > 
> > Thanks
> > 
> > > 
> > > > 
> > > > > > > On Wed, Jul 11, 2018 at 10:27:06AM +0800, Tiwei Bie wrote:
> > > > > > > > [...]
> > 

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 3/5] virtio_ring: add packed ring support
  2018-07-11  2:27 ` [PATCH net-next v2 3/5] virtio_ring: add packed ring support Tiwei Bie
  2018-09-07 13:49   ` Michael S. Tsirkin
@ 2018-09-12 16:22   ` Michael S. Tsirkin
  2018-11-07 17:48   ` Michael S. Tsirkin
  2 siblings, 0 replies; 53+ messages in thread
From: Michael S. Tsirkin @ 2018-09-12 16:22 UTC (permalink / raw)
  To: Tiwei Bie
  Cc: jasowang, virtualization, linux-kernel, netdev, virtio-dev, wexu,
	jfreimann

On Wed, Jul 11, 2018 at 10:27:09AM +0800, Tiwei Bie wrote:
> This commit introduces the support (without EVENT_IDX) for
> packed ring.
> 
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> ---
>  drivers/virtio/virtio_ring.c | 495 ++++++++++++++++++++++++++++++++++-
>  1 file changed, 487 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index c4f8abc7445a..f317b485ba54 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -55,12 +55,21 @@
>  #define END_USE(vq)
>  #endif
>  
> +#define _VRING_DESC_F_AVAIL(b)	((__u16)(b) << 7)
> +#define _VRING_DESC_F_USED(b)	((__u16)(b) << 15)
> +
>  struct vring_desc_state {
>  	void *data;			/* Data for callback. */
>  	struct vring_desc *indir_desc;	/* Indirect descriptor, if any. */
>  };
>  
>  struct vring_desc_state_packed {
> +	void *data;			/* Data for callback. */
> +	struct vring_packed_desc *indir_desc; /* Indirect descriptor, if any. */
> +	int num;			/* Descriptor list length. */
> +	dma_addr_t addr;		/* Buffer DMA addr. */
> +	u32 len;			/* Buffer length. */
> +	u16 flags;			/* Descriptor flags. */
>  	int next;			/* The next desc state. */
>  };
>  

Idea: how about using data to chain these descriptors?
You can validate it's pointing within an array to distinguish
e.g. on detach.
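
Roughly like this (a sketch, untested, with a made-up field name
for the packed desc state array):

	/* Free entries chain through ->data, which then points at
	 * another element of the same array. Buffer tokens passed
	 * in by callers never do, so a simple range check on the
	 * pointer tells the two apart, e.g. on detach. */
	static bool is_free_link(struct vring_virtqueue *vq, void *p)
	{
		return p >= (void *)vq->desc_state_packed &&
		       p < (void *)(vq->desc_state_packed +
				    vq->vring_packed.num);
	}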

-- 
MST

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [virtio-dev] Re: [PATCH net-next v2 0/5] virtio: support packed ring
  2018-09-12 16:16             ` Michael S. Tsirkin
@ 2018-09-13  8:59               ` Tiwei Bie
  2018-09-13  9:47                 ` Jason Wang
  0 siblings, 1 reply; 53+ messages in thread
From: Tiwei Bie @ 2018-09-13  8:59 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Jason Wang, virtualization, linux-kernel, netdev, virtio-dev,
	wexu, jfreimann

On Wed, Sep 12, 2018 at 12:16:32PM -0400, Michael S. Tsirkin wrote:
> On Tue, Sep 11, 2018 at 01:37:26PM +0800, Tiwei Bie wrote:
> > On Mon, Sep 10, 2018 at 11:33:17AM +0800, Jason Wang wrote:
> > > On 2018-09-10 11:00, Tiwei Bie wrote:
> > > > On Fri, Sep 07, 2018 at 09:00:49AM -0400, Michael S. Tsirkin wrote:
> > > > > On Fri, Sep 07, 2018 at 09:22:25AM +0800, Tiwei Bie wrote:
> > > > > > On Mon, Aug 27, 2018 at 05:00:40PM +0300, Michael S. Tsirkin wrote:
> > > > > > > Are there still plans to test the performance with vhost pmd?
> > > > > > > vhost doesn't seem to show a performance gain ...
> > > > > > > 
> > > > > > I tried some performance tests with vhost PMD. In guest, the
> > > > > > XDP program will return XDP_DROP directly. And in host, testpmd
> > > > > > will do txonly fwd.
> > > > > > 
> > > > > > When burst size is 1 and packet size is 64 in testpmd and
> > > > > > testpmd needs to iterate 5 Tx queues (but only the first two
> > > > > > queues are enabled) to prepare and inject packets, I got ~12%
> > > > > > performance boost (5.7Mpps -> 6.4Mpps). And if the vhost PMD
> > > > > > is faster (e.g. just need to iterate the first two queues to
> > > > > > prepare and inject packets), then I got similar performance
> > > > > > for both rings (~9.9Mpps) (packed ring's performance can be
> > > > > > lower, because it's more complicated in driver.)
> > > > > > 
> > > > > > I think packed ring makes vhost PMD faster, but it doesn't make
> > > > > > the driver faster. In packed ring, the ring is simplified, and
> > > > > > the handling of the ring in vhost (device) is also simplified,
> > > > > > but things are not simplified in driver, e.g. although there is
> > > > > > no desc table in the virtqueue anymore, driver still needs to
> > > > > > maintain a private desc state table (which is still managed as
> > > > > > a list in this patch set) to support the out-of-order desc
> > > > > > processing in vhost (device).
> > > > > > 
> > > > > > I think this patch set is mainly to make the driver have a full
> > > > > > functional support for the packed ring, which makes it possible
> > > > > > to leverage the packed ring feature in vhost (device). But I'm
> > > > > > not sure whether there is any other better idea, I'd like to
> > > > > > hear your thoughts. Thanks!
> > > > > Just this: Jens seems to report a nice gain with virtio and
> > > > > vhost pmd across the board. Try to compare virtio and
> > > > > virtio pmd to see what the pmd does better?
> > > > The virtio PMD (drivers/net/virtio) in DPDK doesn't need to share
> > > > the virtio ring operation code with other drivers and is highly
> > > > optimized for network. E.g. in Rx, the Rx burst function won't
> > > > chain descs. So the ID management for the Rx ring can be quite
> > > > simple and straightforward, we just need to initialize these IDs
> > > > when initializing the ring and don't need to change these IDs
> > > > in data path anymore (the mergable Rx code in that patch set
> > > > assumes the descs will be written back in order, which should be
> > > > fixed. I.e., the ID in the desc should be used to index vq->descx[]).
> > > > The Tx code in that patch set also assumes the descs will be
> > > > written back by device in order, which should be fixed.
> > > 
> > > Yes it is. I think I've pointed it out in some early version of the pmd patch.
> > > So I suspect part (or all) of the boost may come from the in-order feature.
> > > 
> > > > 
> > > > But in kernel virtio driver, the virtio_ring.c is very generic.
> > > > The enqueue (virtqueue_add()) and dequeue (virtqueue_get_buf_ctx())
> > > > functions need to support all the virtio devices and should be
> > > > able to handle all the possible cases that may happen. So although
> > > > the packed ring can be very efficient in some cases, currently
> > > > the room to optimize the performance in kernel's virtio_ring.c
> > > > isn't that much. If we want to take the fully advantage of the
> > > > packed ring's efficiency, we need some further e.g. API changes
> > > > in virtio_ring.c, which shouldn't be part of this patch set.
> > > 
> > > Could you please share more thoughts on this, e.g. how to improve the API?
> > > Note that since the API is shared by both split ring and packed ring, it may
> > > improve the performance of split ring as well. One can easily imagine a
> > > batching API, but it does not have many real users now; the only case is
> > > XDP transmission, which can accept an array of XDP frames.
> > 
> > I don't have detailed thoughts on this yet. But kernel's
> > virtio_ring.c is quite generic compared with what we did
> > in virtio PMD.
> 
> In what way? What are some things that aren't implemented there?

Below is the code corresponding to virtqueue_add()
for the Rx ring in the virtio PMD:

https://github.com/DPDK/dpdk/blob/3605968c2fa7/drivers/net/virtio/virtio_rxtx.c#L278-L304

And below is the code of virtqueue_add() in Linux:

https://github.com/torvalds/linux/blob/54eda9df17f3/drivers/virtio/virtio_ring.c#L275-L417

In the virtio PMD, for each packet (mbuf), the code is pretty
straightforward: it just checks whether there is an
available desc and, if so, fills that desc
directly.

But in virtqueue_add(), the logic is obviously
much more complicated and generic. It has to
handle sglists (which may consist of multiple IN
buffers and multiple OUT buffers at the same time), and
it will try to use indirect descriptors. It then needs
several loops to parse the sglist. That's why I said
it's quite generic.
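
To make the contrast concrete, here's a minimal sketch of what the
PMD-style Rx refill boils down to (illustration only, not the actual
DPDK code; the struct and function names are made up). Each mbuf maps
to exactly one desc, with one availability check and no sg parsing:

#include <stdint.h>

struct rx_desc {
	uint64_t addr;
	uint32_t len;
	uint16_t id;	/* set once at ring init, never touched again */
	uint16_t flags;
};

static int rx_refill_one(struct rx_desc *ring, uint16_t num,
			 uint16_t *next_avail, uint16_t *free_cnt,
			 uint64_t buf_addr, uint32_t buf_len)
{
	struct rx_desc *desc;

	if (*free_cnt == 0)		/* the only availability check */
		return -1;

	desc = &ring[*next_avail];	/* fill the desc directly */
	desc->addr = buf_addr;
	desc->len  = buf_len;

	*next_avail = (*next_avail + 1) & (num - 1);	/* num is a power of two */
	(*free_cnt)--;
	return 0;
}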

> 
> If what you say is true, then we should take a careful look
> and not support these generic things with the packed layout.
> Once we do support them, it will be too late and we won't
> be able to get the performance back.

I think it's a good point that we don't need to support
everything in packed ring (especially those things which
would hurt performance), as the packed ring aims at high
performance. I'm also wondering about the features. Is
there any possibility that we won't support out-of-order
processing (at least not by default) in packed ring?
If I didn't miss anything, the need to support out-of-order
processing in packed ring will make the data structure
inside the driver not cache friendly, which is similar to
the case of the descriptor table in the split ring (the
difference is that it now only happens in the driver).
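
For reference, a minimal sketch of the id free list that out-of-order
completion forces on the driver (the field names loosely follow this
patch's vring_desc_state_packed, but this is only an illustration):

struct desc_state {
	void *data;	/* token for the completed buffer */
	int num;	/* length of the descriptor list */
	int next;	/* next free id: a linked list, because the
			 * device may return ids in any order */
};

static int id_alloc(struct desc_state *state, int *free_head)
{
	int id = *free_head;

	if (id < 0)
		return -1;
	*free_head = state[id].next;	/* pointer chasing, not sequential */
	return id;
}

static void id_free(struct desc_state *state, int *free_head, int id)
{
	state[id].next = *free_head;	/* freed ids arrive in random order */
	*free_head = id;
}

With in-order completion, the same bookkeeping would collapse into two
monotonically increasing counters, which is what would make it cache
friendly.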


> 
> 
> 
> > > 
> > > > So
> > > > I still think this patch set is mainly to make the kernel virtio
> > > > driver have fully functional support for the packed ring, and
> > > > we can't expect an impressive performance boost from it.
> > > 
> > > We can only gain when the virtio ring layout is the bottleneck. If there are
> > > bottlenecks elsewhere, we probably won't see any increase in the numbers.
> > > Vhost-net is an example, and lots of optimizations have proved that the virtio
> > > ring is not the main bottleneck for the current code. I suspect it is also the
> > > case for the virtio driver. Did perf tell us anything interesting in the virtio
> > > driver?
> > > 
> > > Thanks
> > > 
> > > > 
> > > > > 
> > > > > > > On Wed, Jul 11, 2018 at 10:27:06AM +0800, Tiwei Bie wrote:
> > > > > > > > [...]
> > > 

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [virtio-dev] Re: [PATCH net-next v2 0/5] virtio: support packed ring
  2018-09-13  8:59               ` Tiwei Bie
@ 2018-09-13  9:47                 ` Jason Wang
  2018-10-10 14:36                   ` Michael S. Tsirkin
  0 siblings, 1 reply; 53+ messages in thread
From: Jason Wang @ 2018-09-13  9:47 UTC (permalink / raw)
  To: Tiwei Bie, Michael S. Tsirkin
  Cc: virtualization, linux-kernel, netdev, virtio-dev, wexu, jfreimann



On 2018-09-13 16:59, Tiwei Bie wrote:
>> If what you say is true, then we should take a careful look
>> and not support these generic things with the packed layout.
>> Once we do support them, it will be too late and we won't
>> be able to get the performance back.
> I think it's a good point that we don't need to support
> everything in packed ring (especially those things which
> would hurt performance), as the packed ring aims at high
> performance. I'm also wondering about the features. Is
> there any possibility that we won't support out-of-order
> processing (at least not by default) in packed ring?
> If I didn't miss anything, the need to support out-of-order
> processing in packed ring will make the data structure
> inside the driver not cache friendly, which is similar to
> the case of the descriptor table in the split ring (the
> difference is that it now only happens in the driver).

Out of order is not the only user; DMA is another one. We don't have a
used ring (with len), so we need to maintain the buffer length somewhere
even for an in-order device. But if it's not too late, I second an
OUT_OF_ORDER feature. Starting from in-order allows much simpler code
in the driver.
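
To illustrate the point (a sketch only; in this patch these fields live
in vring_desc_state_packed): with the DMA API, the driver has to stash
what it mapped, because the ring no longer reports it back:

#include <linux/dma-mapping.h>
#include <linux/virtio_ring.h>

struct dma_state {
	dma_addr_t addr;	/* as returned by dma_map_page() */
	u32 len;
	u16 flags;		/* direction: VRING_DESC_F_WRITE or not */
};

static void unmap_one(struct device *dev, const struct dma_state *st)
{
	/* Without the saved addr/len there is nothing to feed
	 * dma_unmap_page(), even when the device completes in order. */
	dma_unmap_page(dev, st->addr, st->len,
		       (st->flags & VRING_DESC_F_WRITE) ?
		       DMA_FROM_DEVICE : DMA_TO_DEVICE);
}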

Thanks


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [virtio-dev] Re: [PATCH net-next v2 0/5] virtio: support packed ring
  2018-09-13  9:47                 ` Jason Wang
@ 2018-10-10 14:36                   ` Michael S. Tsirkin
  2018-10-11 12:12                     ` Tiwei Bie
  0 siblings, 1 reply; 53+ messages in thread
From: Michael S. Tsirkin @ 2018-10-10 14:36 UTC (permalink / raw)
  To: Jason Wang
  Cc: Tiwei Bie, virtualization, linux-kernel, netdev, virtio-dev,
	wexu, jfreimann

On Thu, Sep 13, 2018 at 05:47:29PM +0800, Jason Wang wrote:
> 
> 
> On 2018-09-13 16:59, Tiwei Bie wrote:
> > > If what you say is true, then we should take a careful look
> > > and not support these generic things with the packed layout.
> > > Once we do support them, it will be too late and we won't
> > > be able to get the performance back.
> > I think it's a good point that we don't need to support
> > everything in packed ring (especially those things which
> > would hurt performance), as the packed ring aims at high
> > performance. I'm also wondering about the features. Is
> > there any possibility that we won't support out-of-order
> > processing (at least not by default) in packed ring?
> > If I didn't miss anything, the need to support out-of-order
> > processing in packed ring will make the data structure
> > inside the driver not cache friendly, which is similar to
> > the case of the descriptor table in the split ring (the
> > difference is that it now only happens in the driver).
> 
> Out of order is not the only user; DMA is another one. We don't have a used
> ring (with len), so we need to maintain the buffer length somewhere even for
> an in-order device.

For a bunch of systems, DMA unmap is a nop, so we do not really
need to maintain it. It's a question of having an API to detect that
and optimize for it. I posted a proposed patch for that -
want to try using it?
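
Rough shape of the idea (the helper name is hypothetical - the real
detection API is whatever that patch introduces):

/* Only pay for addr/len bookkeeping when unmap is real work,
 * e.g. when an IOMMU or swiotlb is in the path. */
static void save_dma_state(struct vring_desc_state_packed *state,
			   struct virtio_device *vdev,
			   dma_addr_t addr, u32 len, u16 flags)
{
	if (!vring_needs_unmap(vdev))	/* hypothetical helper */
		return;			/* unmap is a nop: track nothing */
	state->addr  = addr;
	state->len   = len;
	state->flags = flags;
}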

> But if it's not too late, I second an OUT_OF_ORDER feature.
> Starting from in-order allows much simpler code in the driver.
> 
> Thanks

It's tricky to change the flag polarity because of compatibility
with legacy interfaces. Why is this such a big deal?

Let's teach drivers about IN_ORDER, then if devices
are in order it will get enabled by default.
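
i.e. roughly (a sketch, assuming the VIRTIO_F_IN_ORDER definition -
feature bit 35 in the spec - lands in the uapi headers):

#include <linux/virtio_config.h>

static bool vring_in_order(struct virtio_device *vdev)
{
	/* Device completes buffers in the order they were made
	 * available, so the driver can recycle ids sequentially. */
	return virtio_has_feature(vdev, VIRTIO_F_IN_ORDER);
}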

-- 
MST

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [virtio-dev] Re: [PATCH net-next v2 0/5] virtio: support packed ring
  2018-10-10 14:36                   ` Michael S. Tsirkin
@ 2018-10-11 12:12                     ` Tiwei Bie
  2018-10-11 13:48                       ` Michael S. Tsirkin
  0 siblings, 1 reply; 53+ messages in thread
From: Tiwei Bie @ 2018-10-11 12:12 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Jason Wang, virtualization, linux-kernel, netdev, virtio-dev,
	wexu, jfreimann

On Wed, Oct 10, 2018 at 10:36:26AM -0400, Michael S. Tsirkin wrote:
> On Thu, Sep 13, 2018 at 05:47:29PM +0800, Jason Wang wrote:
> > On 2018-09-13 16:59, Tiwei Bie wrote:
> > > > If what you say is true, then we should take a careful look
> > > > and not support these generic things with the packed layout.
> > > > Once we do support them, it will be too late and we won't
> > > > be able to get the performance back.
> > > I think it's a good point that we don't need to support
> > > everything in packed ring (especially those things which
> > > would hurt performance), as the packed ring aims at high
> > > performance. I'm also wondering about the features. Is
> > > there any possibility that we won't support out-of-order
> > > processing (at least not by default) in packed ring?
> > > If I didn't miss anything, the need to support out-of-order
> > > processing in packed ring will make the data structure
> > > inside the driver not cache friendly, which is similar to
> > > the case of the descriptor table in the split ring (the
> > > difference is that it now only happens in the driver).
> > 
> > Out of order is not the only user; DMA is another one. We don't have a used
> > ring (with len), so we need to maintain the buffer length somewhere even for
> > an in-order device.
> 
> For a bunch of systems, DMA unmap is a nop, so we do not really
> need to maintain it. It's a question of having an API to detect that
> and optimize for it. I posted a proposed patch for that -
> want to try using it?

Yeah, definitely!

> 
> > But if it's not too late, I second an OUT_OF_ORDER feature.
> > Starting from in-order allows much simpler code in the driver.
> > 
> > Thanks
> 
> It's tricky to change the flag polarity because of compatibility
> with legacy interfaces. Why is this such a big deal?
> 
> Let's teach drivers about IN_ORDER, then if devices
> are in order it will get enabled by default.

Yeah, makes sense.

Besides, I have done some further profiling and debugging
both in the kernel driver and DPDK vhost. Previously I was misled
by a bug in the vhost code. I will send a patch to fix that bug.
With that bug fixed, the performance of packed ring in the
test between the kernel driver and DPDK vhost is better now.
I will send a new series soon. Thanks!

> 
> -- 
> MST

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [virtio-dev] Re: [PATCH net-next v2 0/5] virtio: support packed ring
  2018-10-11 12:12                     ` Tiwei Bie
@ 2018-10-11 13:48                       ` Michael S. Tsirkin
  2018-10-11 14:13                         ` Tiwei Bie
  0 siblings, 1 reply; 53+ messages in thread
From: Michael S. Tsirkin @ 2018-10-11 13:48 UTC (permalink / raw)
  To: Tiwei Bie
  Cc: Jason Wang, virtualization, linux-kernel, netdev, virtio-dev,
	wexu, jfreimann

On Thu, Oct 11, 2018 at 08:12:21PM +0800, Tiwei Bie wrote:
> > > But if it's not too late, I second an OUT_OF_ORDER feature.
> > > Starting from in-order allows much simpler code in the driver.
> > > 
> > > Thanks
> > 
> > It's tricky to change the flag polarity because of compatibility
> > with legacy interfaces. Why is this such a big deal?
> > 
> > Let's teach drivers about IN_ORDER, then if devices
> > are in order it will get enabled by default.
> 
> Yeah, makes sense.
> 
> Besides, I have done some further profiling and debugging
> both in the kernel driver and DPDK vhost. Previously I was misled
> by a bug in the vhost code. I will send a patch to fix that bug.
> With that bug fixed, the performance of packed ring in the
> test between the kernel driver and DPDK vhost is better now.

OK, if we get a performance gain on the virtio side, we can finally
upstream it. If you see that, please re-post ASAP so we can
put it in the next kernel release.

-- 
MST

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [virtio-dev] Re: [PATCH net-next v2 0/5] virtio: support packed ring
  2018-10-11 13:48                       ` Michael S. Tsirkin
@ 2018-10-11 14:13                         ` Tiwei Bie
  2018-10-11 14:17                           ` Michael S. Tsirkin
  0 siblings, 1 reply; 53+ messages in thread
From: Tiwei Bie @ 2018-10-11 14:13 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Jason Wang, virtualization, linux-kernel, netdev, virtio-dev,
	wexu, jfreimann, maxime.coquelin, zhihong.wang

On Thu, Oct 11, 2018 at 09:48:48AM -0400, Michael S. Tsirkin wrote:
> On Thu, Oct 11, 2018 at 08:12:21PM +0800, Tiwei Bie wrote:
> > > > But if it's not too late, I second an OUT_OF_ORDER feature.
> > > > Starting from in-order allows much simpler code in the driver.
> > > > 
> > > > Thanks
> > > 
> > > It's tricky to change the flag polarity because of compatibility
> > > with legacy interfaces. Why is this such a big deal?
> > > 
> > > Let's teach drivers about IN_ORDER, then if devices
> > > are in order it will get enabled by default.
> > 
> > Yeah, makes sense.
> > 
> > Besides, I have done some further profiling and debugging
> > both in the kernel driver and DPDK vhost. Previously I was misled
> > by a bug in the vhost code. I will send a patch to fix that bug.
> > With that bug fixed, the performance of packed ring in the
> > test between the kernel driver and DPDK vhost is better now.
> 
> OK, if we get a performance gain on the virtio side, we can finally
> upstream it. If you see that, please re-post ASAP so we can
> put it in the next kernel release.

Got it, I will re-post ASAP.

Thanks!


> 
> -- 
> MST

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [virtio-dev] Re: [PATCH net-next v2 0/5] virtio: support packed ring
  2018-10-11 14:13                         ` Tiwei Bie
@ 2018-10-11 14:17                           ` Michael S. Tsirkin
  2018-10-11 14:34                             ` Tiwei Bie
  0 siblings, 1 reply; 53+ messages in thread
From: Michael S. Tsirkin @ 2018-10-11 14:17 UTC (permalink / raw)
  To: Tiwei Bie
  Cc: Jason Wang, virtualization, linux-kernel, netdev, virtio-dev,
	wexu, jfreimann, maxime.coquelin, zhihong.wang

On Thu, Oct 11, 2018 at 10:13:31PM +0800, Tiwei Bie wrote:
> On Thu, Oct 11, 2018 at 09:48:48AM -0400, Michael S. Tsirkin wrote:
> > On Thu, Oct 11, 2018 at 08:12:21PM +0800, Tiwei Bie wrote:
> > > > > But if it's not too late, I second an OUT_OF_ORDER feature.
> > > > > Starting from in-order allows much simpler code in the driver.
> > > > > 
> > > > > Thanks
> > > > 
> > > > It's tricky to change the flag polarity because of compatibility
> > > > with legacy interfaces. Why is this such a big deal?
> > > > 
> > > > Let's teach drivers about IN_ORDER, then if devices
> > > > are in order it will get enabled by default.
> > > 
> > > Yeah, makes sense.
> > > 
> > > Besides, I have done some further profiling and debugging
> > > both in the kernel driver and DPDK vhost. Previously I was misled
> > > by a bug in the vhost code. I will send a patch to fix that bug.
> > > With that bug fixed, the performance of packed ring in the
> > > test between the kernel driver and DPDK vhost is better now.
> > 
> > OK, if we get a performance gain on the virtio side, we can finally
> > upstream it. If you see that, please re-post ASAP so we can
> > put it in the next kernel release.
> 
> Got it, I will re-post ASAP.
> 
> Thanks!


Pls remember to include data on performance gain in the cover letter.


> 
> > 
> > -- 
> > MST

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [virtio-dev] Re: [PATCH net-next v2 0/5] virtio: support packed ring
  2018-10-11 14:17                           ` Michael S. Tsirkin
@ 2018-10-11 14:34                             ` Tiwei Bie
  0 siblings, 0 replies; 53+ messages in thread
From: Tiwei Bie @ 2018-10-11 14:34 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Jason Wang, virtualization, linux-kernel, netdev, virtio-dev,
	wexu, jfreimann, maxime.coquelin, zhihong.wang

On Thu, Oct 11, 2018 at 10:17:15AM -0400, Michael S. Tsirkin wrote:
> On Thu, Oct 11, 2018 at 10:13:31PM +0800, Tiwei Bie wrote:
> > On Thu, Oct 11, 2018 at 09:48:48AM -0400, Michael S. Tsirkin wrote:
> > > On Thu, Oct 11, 2018 at 08:12:21PM +0800, Tiwei Bie wrote:
> > > > > > But if it's not too late, I second an OUT_OF_ORDER feature.
> > > > > > Starting from in-order allows much simpler code in the driver.
> > > > > > 
> > > > > > Thanks
> > > > > 
> > > > > It's tricky to change the flag polarity because of compatibility
> > > > > with legacy interfaces. Why is this such a big deal?
> > > > > 
> > > > > Let's teach drivers about IN_ORDER, then if devices
> > > > > are in order it will get enabled by default.
> > > > 
> > > > Yeah, makes sense.
> > > > 
> > > > Besides, I have done some further profiling and debugging
> > > > both in the kernel driver and DPDK vhost. Previously I was misled
> > > > by a bug in the vhost code. I will send a patch to fix that bug.
> > > > With that bug fixed, the performance of packed ring in the
> > > > test between the kernel driver and DPDK vhost is better now.
> > > 
> > > OK, if we get a performance gain on the virtio side, we can finally
> > > upstream it. If you see that, please re-post ASAP so we can
> > > put it in the next kernel release.
> > 
> > Got it, I will re-post ASAP.
> > 
> > Thanks!
> 
> 
> Pls remember to include data on performance gain in the cover letter.

Sure. I'll try to include some performance analyses.


^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 3/5] virtio_ring: add packed ring support
  2018-07-11  2:27 ` [PATCH net-next v2 3/5] virtio_ring: add packed ring support Tiwei Bie
  2018-09-07 13:49   ` Michael S. Tsirkin
  2018-09-12 16:22   ` Michael S. Tsirkin
@ 2018-11-07 17:48   ` Michael S. Tsirkin
  2018-11-08  1:38     ` Tiwei Bie
  2 siblings, 1 reply; 53+ messages in thread
From: Michael S. Tsirkin @ 2018-11-07 17:48 UTC (permalink / raw)
  To: Tiwei Bie
  Cc: jasowang, virtualization, linux-kernel, netdev, virtio-dev, wexu,
	jfreimann

On Wed, Jul 11, 2018 at 10:27:09AM +0800, Tiwei Bie wrote:
> This commit introduces the support (without EVENT_IDX) for
> packed ring.
> 
> Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> ---
>  drivers/virtio/virtio_ring.c | 495 ++++++++++++++++++++++++++++++++++-
>  1 file changed, 487 insertions(+), 8 deletions(-)
> 
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index c4f8abc7445a..f317b485ba54 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -55,12 +55,21 @@
>  #define END_USE(vq)
>  #endif
>  
> +#define _VRING_DESC_F_AVAIL(b)	((__u16)(b) << 7)
> +#define _VRING_DESC_F_USED(b)	((__u16)(b) << 15)
> +
>  struct vring_desc_state {
>  	void *data;			/* Data for callback. */
>  	struct vring_desc *indir_desc;	/* Indirect descriptor, if any. */
>  };
>  
>  struct vring_desc_state_packed {
> +	void *data;			/* Data for callback. */
> +	struct vring_packed_desc *indir_desc; /* Indirect descriptor, if any. */
> +	int num;			/* Descriptor list length. */
> +	dma_addr_t addr;		/* Buffer DMA addr. */
> +	u32 len;			/* Buffer length. */
> +	u16 flags;			/* Descriptor flags. */
>  	int next;			/* The next desc state. */
>  };
>  
> @@ -660,7 +669,6 @@ static bool virtqueue_poll_split(struct virtqueue *_vq, unsigned last_used_idx)
>  {
>  	struct vring_virtqueue *vq = to_vvq(_vq);
>  
> -	virtio_mb(vq->weak_barriers);
>  	return (u16)last_used_idx != virtio16_to_cpu(_vq->vdev, vq->vring.used->idx);
>  }
>  
> @@ -757,6 +765,72 @@ static inline unsigned vring_size_packed(unsigned int num, unsigned long align)
>  		& ~(align - 1)) + sizeof(struct vring_packed_desc_event) * 2;
>  }
>  
> +static void vring_unmap_state_packed(const struct vring_virtqueue *vq,
> +				     struct vring_desc_state_packed *state)
> +{
> +	u16 flags;
> +
> +	if (!vring_use_dma_api(vq->vq.vdev))
> +		return;
> +
> +	flags = state->flags;
> +
> +	if (flags & VRING_DESC_F_INDIRECT) {
> +		dma_unmap_single(vring_dma_dev(vq),
> +				 state->addr, state->len,
> +				 (flags & VRING_DESC_F_WRITE) ?
> +				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
> +	} else {
> +		dma_unmap_page(vring_dma_dev(vq),
> +			       state->addr, state->len,
> +			       (flags & VRING_DESC_F_WRITE) ?
> +			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
> +	}
> +}
> +
> +static void vring_unmap_desc_packed(const struct vring_virtqueue *vq,
> +				   struct vring_packed_desc *desc)
> +{
> +	u16 flags;
> +
> +	if (!vring_use_dma_api(vq->vq.vdev))
> +		return;
> +
> +	flags = virtio16_to_cpu(vq->vq.vdev, desc->flags);

BTW this stuff is only used on error etc. Is there a way to
reuse vring_unmap_state_packed?

> +
> +	if (flags & VRING_DESC_F_INDIRECT) {
> +		dma_unmap_single(vring_dma_dev(vq),
> +				 virtio64_to_cpu(vq->vq.vdev, desc->addr),
> +				 virtio32_to_cpu(vq->vq.vdev, desc->len),
> +				 (flags & VRING_DESC_F_WRITE) ?
> +				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
> +	} else {
> +		dma_unmap_page(vring_dma_dev(vq),
> +			       virtio64_to_cpu(vq->vq.vdev, desc->addr),
> +			       virtio32_to_cpu(vq->vq.vdev, desc->len),
> +			       (flags & VRING_DESC_F_WRITE) ?
> +			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
> +	}
> +}
> +
> +static struct vring_packed_desc *alloc_indirect_packed(struct virtqueue *_vq,
> +						       unsigned int total_sg,
> +						       gfp_t gfp)
> +{
> +	struct vring_packed_desc *desc;
> +
> +	/*
> +	 * We require lowmem mappings for the descriptors because
> +	 * otherwise virt_to_phys will give us bogus addresses in the
> +	 * virtqueue.
> +	 */
> +	gfp &= ~__GFP_HIGHMEM;
> +
> +	desc = kmalloc(total_sg * sizeof(struct vring_packed_desc), gfp);
> +
> +	return desc;
> +}
> +
>  static inline int virtqueue_add_packed(struct virtqueue *_vq,
>  				       struct scatterlist *sgs[],
>  				       unsigned int total_sg,
> @@ -766,47 +840,449 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
>  				       void *ctx,
>  				       gfp_t gfp)
>  {
> +	struct vring_virtqueue *vq = to_vvq(_vq);
> +	struct vring_packed_desc *desc;
> +	struct scatterlist *sg;
> +	unsigned int i, n, descs_used, uninitialized_var(prev), err_idx;
> +	__virtio16 uninitialized_var(head_flags), flags;
> +	u16 head, avail_wrap_counter, id, curr;
> +	bool indirect;
> +
> +	START_USE(vq);
> +
> +	BUG_ON(data == NULL);
> +	BUG_ON(ctx && vq->indirect);
> +
> +	if (unlikely(vq->broken)) {
> +		END_USE(vq);
> +		return -EIO;
> +	}
> +
> +#ifdef DEBUG
> +	{
> +		ktime_t now = ktime_get();
> +
> +		/* No kick or get, with .1 second between?  Warn. */
> +		if (vq->last_add_time_valid)
> +			WARN_ON(ktime_to_ms(ktime_sub(now, vq->last_add_time))
> +					    > 100);
> +		vq->last_add_time = now;
> +		vq->last_add_time_valid = true;
> +	}
> +#endif
> +
> +	BUG_ON(total_sg == 0);
> +
> +	head = vq->next_avail_idx;
> +	avail_wrap_counter = vq->avail_wrap_counter;
> +
> +	if (virtqueue_use_indirect(_vq, total_sg))
> +		desc = alloc_indirect_packed(_vq, total_sg, gfp);
> +	else {
> +		desc = NULL;
> +		WARN_ON_ONCE(total_sg > vq->vring_packed.num && !vq->indirect);
> +	}
> +
> +	if (desc) {
> +		/* Use a single buffer which doesn't continue */
> +		indirect = true;
> +		/* Set up rest to use this indirect table. */
> +		i = 0;
> +		descs_used = 1;
> +	} else {
> +		indirect = false;
> +		desc = vq->vring_packed.desc;
> +		i = head;
> +		descs_used = total_sg;
> +	}
> +
> +	if (vq->vq.num_free < descs_used) {
> +		pr_debug("Can't add buf len %i - avail = %i\n",
> +			 descs_used, vq->vq.num_free);
> +		/* FIXME: for historical reasons, we force a notify here if
> +		 * there are outgoing parts to the buffer.  Presumably the
> +		 * host should service the ring ASAP. */

I don't think we have a reason to do this for packed ring.
No historical baggage there, right?

> +		if (out_sgs)
> +			vq->notify(&vq->vq);
> +		if (indirect)
> +			kfree(desc);
> +		END_USE(vq);
> +		return -ENOSPC;
> +	}
> +
> +	id = vq->free_head;
> +	BUG_ON(id == vq->vring_packed.num);
> +
> +	curr = id;
> +	for (n = 0; n < out_sgs + in_sgs; n++) {
> +		for (sg = sgs[n]; sg; sg = sg_next(sg)) {
> +			dma_addr_t addr = vring_map_one_sg(vq, sg, n < out_sgs ?
> +					       DMA_TO_DEVICE : DMA_FROM_DEVICE);
> +			if (vring_mapping_error(vq, addr))
> +				goto unmap_release;
> +
> +			flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_NEXT |
> +				  (n < out_sgs ? 0 : VRING_DESC_F_WRITE) |
> +				  _VRING_DESC_F_AVAIL(vq->avail_wrap_counter) |
> +				  _VRING_DESC_F_USED(!vq->avail_wrap_counter));
> +			if (!indirect && i == head)
> +				head_flags = flags;
> +			else
> +				desc[i].flags = flags;
> +
> +			desc[i].addr = cpu_to_virtio64(_vq->vdev, addr);
> +			desc[i].len = cpu_to_virtio32(_vq->vdev, sg->length);
> +			i++;
> +			if (!indirect) {
> +				if (vring_use_dma_api(_vq->vdev)) {
> +					vq->desc_state_packed[curr].addr = addr;
> +					vq->desc_state_packed[curr].len =
> +						sg->length;
> +					vq->desc_state_packed[curr].flags =
> +						virtio16_to_cpu(_vq->vdev,
> +								flags);
> +				}
> +				curr = vq->desc_state_packed[curr].next;
> +
> +				if (i >= vq->vring_packed.num) {
> +					i = 0;
> +					vq->avail_wrap_counter ^= 1;
> +				}
> +			}
> +		}
> +	}
> +
> +	prev = (i > 0 ? i : vq->vring_packed.num) - 1;
> +	desc[prev].id = cpu_to_virtio16(_vq->vdev, id);
> +
> +	/* Last one doesn't continue. */
> +	if (total_sg == 1)
> +		head_flags &= cpu_to_virtio16(_vq->vdev, ~VRING_DESC_F_NEXT);
> +	else
> +		desc[prev].flags &= cpu_to_virtio16(_vq->vdev,
> +						~VRING_DESC_F_NEXT);
> +
> +	if (indirect) {
> +		/* Now that the indirect table is filled in, map it. */
> +		dma_addr_t addr = vring_map_single(
> +			vq, desc, total_sg * sizeof(struct vring_packed_desc),
> +			DMA_TO_DEVICE);
> +		if (vring_mapping_error(vq, addr))
> +			goto unmap_release;
> +
> +		head_flags = cpu_to_virtio16(_vq->vdev, VRING_DESC_F_INDIRECT |
> +				      _VRING_DESC_F_AVAIL(avail_wrap_counter) |
> +				      _VRING_DESC_F_USED(!avail_wrap_counter));
> +		vq->vring_packed.desc[head].addr = cpu_to_virtio64(_vq->vdev,
> +								   addr);
> +		vq->vring_packed.desc[head].len = cpu_to_virtio32(_vq->vdev,
> +				total_sg * sizeof(struct vring_packed_desc));
> +		vq->vring_packed.desc[head].id = cpu_to_virtio16(_vq->vdev, id);
> +
> +		if (vring_use_dma_api(_vq->vdev)) {
> +			vq->desc_state_packed[id].addr = addr;
> +			vq->desc_state_packed[id].len = total_sg *
> +					sizeof(struct vring_packed_desc);
> +			vq->desc_state_packed[id].flags =
> +					virtio16_to_cpu(_vq->vdev, head_flags);
> +		}
> +	}
> +
> +	/* We're using some buffers from the free list. */
> +	vq->vq.num_free -= descs_used;
> +
> +	/* Update free pointer */
> +	if (indirect) {
> +		n = head + 1;
> +		if (n >= vq->vring_packed.num) {
> +			n = 0;
> +			vq->avail_wrap_counter ^= 1;
> +		}
> +		vq->next_avail_idx = n;
> +		vq->free_head = vq->desc_state_packed[id].next;
> +	} else {
> +		vq->next_avail_idx = i;
> +		vq->free_head = curr;
> +	}
> +
> +	/* Store token and indirect buffer state. */
> +	vq->desc_state_packed[id].num = descs_used;
> +	vq->desc_state_packed[id].data = data;
> +	if (indirect)
> +		vq->desc_state_packed[id].indir_desc = desc;
> +	else
> +		vq->desc_state_packed[id].indir_desc = ctx;
> +
> +	/* A driver MUST NOT make the first descriptor in the list
> +	 * available before all subsequent descriptors comprising
> +	 * the list are made available. */
> +	virtio_wmb(vq->weak_barriers);
> +	vq->vring_packed.desc[head].flags = head_flags;
> +	vq->num_added += descs_used;
> +
> +	pr_debug("Added buffer head %i to %p\n", head, vq);
> +	END_USE(vq);
> +
> +	return 0;
> +
> +unmap_release:
> +	err_idx = i;
> +	i = head;
> +
> +	for (n = 0; n < total_sg; n++) {
> +		if (i == err_idx)
> +			break;
> +		vring_unmap_desc_packed(vq, &desc[i]);
> +		i++;
> +		if (!indirect && i >= vq->vring_packed.num)
> +			i = 0;
> +	}
> +
> +	vq->avail_wrap_counter = avail_wrap_counter;
> +
> +	if (indirect)
> +		kfree(desc);
> +
> +	END_USE(vq);
>  	return -EIO;
>  }
>  
>  static bool virtqueue_kick_prepare_packed(struct virtqueue *_vq)
>  {
> -	return false;
> +	struct vring_virtqueue *vq = to_vvq(_vq);
> +	u16 flags;
> +	bool needs_kick;
> +	u32 snapshot;
> +
> +	START_USE(vq);
> +	/* We need to expose the new flags value before checking notification
> +	 * suppressions. */
> +	virtio_mb(vq->weak_barriers);
> +
> +	snapshot = READ_ONCE(*(u32 *)vq->vring_packed.device);
> +	flags = virtio16_to_cpu(_vq->vdev, (__virtio16)(snapshot >> 16)) & 0x3;
> +
> +#ifdef DEBUG
> +	if (vq->last_add_time_valid) {
> +		WARN_ON(ktime_to_ms(ktime_sub(ktime_get(),
> +					      vq->last_add_time)) > 100);
> +	}
> +	vq->last_add_time_valid = false;
> +#endif
> +
> +	needs_kick = (flags != VRING_EVENT_F_DISABLE);
> +	END_USE(vq);
> +	return needs_kick;
> +}
> +
> +static void detach_buf_packed(struct vring_virtqueue *vq,
> +			      unsigned int id, void **ctx)
> +{
> +	struct vring_desc_state_packed *state = NULL;
> +	struct vring_packed_desc *desc;
> +	unsigned int curr, i;
> +
> +	/* Clear data ptr. */
> +	vq->desc_state_packed[id].data = NULL;
> +
> +	curr = id;
> +	for (i = 0; i < vq->desc_state_packed[id].num; i++) {
> +		state = &vq->desc_state_packed[curr];
> +		vring_unmap_state_packed(vq, state);
> +		curr = state->next;
> +	}
> +
> +	BUG_ON(state == NULL);
> +	vq->vq.num_free += vq->desc_state_packed[id].num;
> +	state->next = vq->free_head;
> +	vq->free_head = id;
> +
> +	if (vq->indirect) {
> +		u32 len;
> +
> +		/* Free the indirect table, if any, now that it's unmapped. */
> +		desc = vq->desc_state_packed[id].indir_desc;
> +		if (!desc)
> +			return;
> +
> +		if (vring_use_dma_api(vq->vq.vdev)) {
> +			len = vq->desc_state_packed[id].len;
> +			for (i = 0; i < len / sizeof(struct vring_packed_desc);
> +					i++)
> +				vring_unmap_desc_packed(vq, &desc[i]);
> +		}
> +		kfree(desc);
> +		vq->desc_state_packed[id].indir_desc = NULL;
> +	} else if (ctx) {
> +		*ctx = vq->desc_state_packed[id].indir_desc;
> +	}
> +}
> +
> +static inline bool is_used_desc_packed(const struct vring_virtqueue *vq,
> +				       u16 idx, bool used_wrap_counter)
> +{
> +	u16 flags;
> +	bool avail, used;
> +
> +	flags = virtio16_to_cpu(vq->vq.vdev,
> +				vq->vring_packed.desc[idx].flags);
> +	avail = !!(flags & VRING_DESC_F_AVAIL);
> +	used = !!(flags & VRING_DESC_F_USED);
> +
> +	return avail == used && used == used_wrap_counter;
>  }
>  
>  static inline bool more_used_packed(const struct vring_virtqueue *vq)
>  {
> -	return false;
> +	return is_used_desc_packed(vq, vq->last_used_idx,
> +			vq->used_wrap_counter);
>  }
>  
>  static void *virtqueue_get_buf_ctx_packed(struct virtqueue *_vq,
>  					  unsigned int *len,
>  					  void **ctx)
>  {
> -	return NULL;
> +	struct vring_virtqueue *vq = to_vvq(_vq);
> +	u16 last_used, id;
> +	void *ret;
> +
> +	START_USE(vq);
> +
> +	if (unlikely(vq->broken)) {
> +		END_USE(vq);
> +		return NULL;
> +	}
> +
> +	if (!more_used_packed(vq)) {
> +		pr_debug("No more buffers in queue\n");
> +		END_USE(vq);
> +		return NULL;
> +	}
> +
> +	/* Only get used elements after they have been exposed by host. */
> +	virtio_rmb(vq->weak_barriers);
> +
> +	last_used = vq->last_used_idx;
> +	id = virtio16_to_cpu(_vq->vdev, vq->vring_packed.desc[last_used].id);
> +	*len = virtio32_to_cpu(_vq->vdev, vq->vring_packed.desc[last_used].len);
> +
> +	if (unlikely(id >= vq->vring_packed.num)) {
> +		BAD_RING(vq, "id %u out of range\n", id);
> +		return NULL;
> +	}
> +	if (unlikely(!vq->desc_state_packed[id].data)) {
> +		BAD_RING(vq, "id %u is not a head!\n", id);
> +		return NULL;
> +	}
> +
> +	vq->last_used_idx += vq->desc_state_packed[id].num;
> +	if (vq->last_used_idx >= vq->vring_packed.num) {
> +		vq->last_used_idx -= vq->vring_packed.num;
> +		vq->used_wrap_counter ^= 1;
> +	}
> +
> +	/* detach_buf_packed clears data, so grab it now. */
> +	ret = vq->desc_state_packed[id].data;
> +	detach_buf_packed(vq, id, ctx);
> +
> +#ifdef DEBUG
> +	vq->last_add_time_valid = false;
> +#endif
> +
> +	END_USE(vq);
> +	return ret;
>  }
>  
>  static void virtqueue_disable_cb_packed(struct virtqueue *_vq)
>  {
> +	struct vring_virtqueue *vq = to_vvq(_vq);
> +
> +	if (vq->event_flags_shadow != VRING_EVENT_F_DISABLE) {
> +		vq->event_flags_shadow = VRING_EVENT_F_DISABLE;
> +		vq->vring_packed.driver->flags = cpu_to_virtio16(_vq->vdev,
> +							vq->event_flags_shadow);
> +	}
>  }
>  
>  static unsigned virtqueue_enable_cb_prepare_packed(struct virtqueue *_vq)
>  {
> -	return 0;
> +	struct vring_virtqueue *vq = to_vvq(_vq);
> +
> +	START_USE(vq);
> +
> +	/* We optimistically turn back on interrupts, then check if there was
> +	 * more to do. */
> +
> +	if (vq->event_flags_shadow == VRING_EVENT_F_DISABLE) {
> +		vq->event_flags_shadow = VRING_EVENT_F_ENABLE;
> +		vq->vring_packed.driver->flags = cpu_to_virtio16(_vq->vdev,
> +							vq->event_flags_shadow);
> +	}
> +
> +	END_USE(vq);
> +	return vq->last_used_idx | ((u16)vq->used_wrap_counter << 15);
>  }
>  
> -static bool virtqueue_poll_packed(struct virtqueue *_vq, unsigned last_used_idx)
> +static bool virtqueue_poll_packed(struct virtqueue *_vq, unsigned off_wrap)
>  {
> -	return false;
> +	struct vring_virtqueue *vq = to_vvq(_vq);
> +	bool wrap_counter;
> +	u16 used_idx;
> +
> +	wrap_counter = off_wrap >> 15;
> +	used_idx = off_wrap & ~(1 << 15);
> +
> +	return is_used_desc_packed(vq, used_idx, wrap_counter);
>  }
>  
>  static bool virtqueue_enable_cb_delayed_packed(struct virtqueue *_vq)
>  {
> -	return false;
> +	struct vring_virtqueue *vq = to_vvq(_vq);
> +
> +	START_USE(vq);
> +
> +	/* We optimistically turn back on interrupts, then check if there was
> +	 * more to do. */
> +
> +	if (vq->event_flags_shadow == VRING_EVENT_F_DISABLE) {
> +		vq->event_flags_shadow = VRING_EVENT_F_ENABLE;
> +		vq->vring_packed.driver->flags = cpu_to_virtio16(_vq->vdev,
> +							vq->event_flags_shadow);
> +		/* We need to enable interrupts first before re-checking
> +		 * for more used buffers. */
> +		virtio_mb(vq->weak_barriers);
> +	}
> +
> +	if (more_used_packed(vq)) {
> +		END_USE(vq);
> +		return false;
> +	}
> +
> +	END_USE(vq);
> +	return true;
>  }
>  
>  static void *virtqueue_detach_unused_buf_packed(struct virtqueue *_vq)
>  {
> +	struct vring_virtqueue *vq = to_vvq(_vq);
> +	unsigned int i;
> +	void *buf;
> +
> +	START_USE(vq);
> +
> +	for (i = 0; i < vq->vring_packed.num; i++) {
> +		if (!vq->desc_state_packed[i].data)
> +			continue;
> +		/* detach_buf clears data, so grab it now. */
> +		buf = vq->desc_state_packed[i].data;
> +		detach_buf_packed(vq, i, NULL);
> +		END_USE(vq);
> +		return buf;
> +	}
> +	/* That should have freed everything. */
> +	BUG_ON(vq->vq.num_free != vq->vring_packed.num);
> +
> +	END_USE(vq);
>  	return NULL;
>  }
>  
> @@ -1083,6 +1559,9 @@ bool virtqueue_poll(struct virtqueue *_vq, unsigned last_used_idx)
>  {
>  	struct vring_virtqueue *vq = to_vvq(_vq);
>  
> +	/* We need to enable interrupts first before re-checking
> +	 * for more used buffers. */
> +	virtio_mb(vq->weak_barriers);
>  	return vq->packed ? virtqueue_poll_packed(_vq, last_used_idx) :
>  			    virtqueue_poll_split(_vq, last_used_idx);
>  }
> -- 
> 2.18.0

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 3/5] virtio_ring: add packed ring support
  2018-11-07 17:48   ` Michael S. Tsirkin
@ 2018-11-08  1:38     ` Tiwei Bie
  2018-11-08  8:18       ` Jason Wang
  0 siblings, 1 reply; 53+ messages in thread
From: Tiwei Bie @ 2018-11-08  1:38 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: jasowang, virtualization, linux-kernel, netdev, virtio-dev, wexu,
	jfreimann

On Wed, Nov 07, 2018 at 12:48:46PM -0500, Michael S. Tsirkin wrote:
> On Wed, Jul 11, 2018 at 10:27:09AM +0800, Tiwei Bie wrote:
> > This commit introduces the support (without EVENT_IDX) for
> > packed ring.
> > 
> > Signed-off-by: Tiwei Bie <tiwei.bie@intel.com>
> > ---
> >  drivers/virtio/virtio_ring.c | 495 ++++++++++++++++++++++++++++++++++-
> >  1 file changed, 487 insertions(+), 8 deletions(-)
[...]
> >  
> > +static void vring_unmap_state_packed(const struct vring_virtqueue *vq,
> > +				     struct vring_desc_state_packed *state)
> > +{
> > +	u16 flags;
> > +
> > +	if (!vring_use_dma_api(vq->vq.vdev))
> > +		return;
> > +
> > +	flags = state->flags;
> > +
> > +	if (flags & VRING_DESC_F_INDIRECT) {
> > +		dma_unmap_single(vring_dma_dev(vq),
> > +				 state->addr, state->len,
> > +				 (flags & VRING_DESC_F_WRITE) ?
> > +				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
> > +	} else {
> > +		dma_unmap_page(vring_dma_dev(vq),
> > +			       state->addr, state->len,
> > +			       (flags & VRING_DESC_F_WRITE) ?
> > +			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
> > +	}
> > +}
> > +
> > +static void vring_unmap_desc_packed(const struct vring_virtqueue *vq,
> > +				   struct vring_packed_desc *desc)
> > +{
> > +	u16 flags;
> > +
> > +	if (!vring_use_dma_api(vq->vq.vdev))
> > +		return;
> > +
> > +	flags = virtio16_to_cpu(vq->vq.vdev, desc->flags);
> 
> BTW this stuff is only used on error etc. Is there a way to
> reuse vring_unmap_state_packed?

It's also used by the INDIRECT path. We don't allocate desc
state for INDIRECT descriptors to save the DMA addr/len etc.,
so the unmap has to read them back from the descriptors.

> 
> > +
> > +	if (flags & VRING_DESC_F_INDIRECT) {
> > +		dma_unmap_single(vring_dma_dev(vq),
> > +				 virtio64_to_cpu(vq->vq.vdev, desc->addr),
> > +				 virtio32_to_cpu(vq->vq.vdev, desc->len),
> > +				 (flags & VRING_DESC_F_WRITE) ?
> > +				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
> > +	} else {
> > +		dma_unmap_page(vring_dma_dev(vq),
> > +			       virtio64_to_cpu(vq->vq.vdev, desc->addr),
> > +			       virtio32_to_cpu(vq->vq.vdev, desc->len),
> > +			       (flags & VRING_DESC_F_WRITE) ?
> > +			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
> > +	}
> > +}
[...]
> > @@ -766,47 +840,449 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
> >  				       void *ctx,
> >  				       gfp_t gfp)
> >  {
> > +	struct vring_virtqueue *vq = to_vvq(_vq);
> > +	struct vring_packed_desc *desc;
> > +	struct scatterlist *sg;
> > +	unsigned int i, n, descs_used, uninitialized_var(prev), err_idx;
> > +	__virtio16 uninitialized_var(head_flags), flags;
> > +	u16 head, avail_wrap_counter, id, curr;
> > +	bool indirect;
> > +
> > +	START_USE(vq);
> > +
> > +	BUG_ON(data == NULL);
> > +	BUG_ON(ctx && vq->indirect);
> > +
> > +	if (unlikely(vq->broken)) {
> > +		END_USE(vq);
> > +		return -EIO;
> > +	}
> > +
> > +#ifdef DEBUG
> > +	{
> > +		ktime_t now = ktime_get();
> > +
> > +		/* No kick or get, with .1 second between?  Warn. */
> > +		if (vq->last_add_time_valid)
> > +			WARN_ON(ktime_to_ms(ktime_sub(now, vq->last_add_time))
> > +					    > 100);
> > +		vq->last_add_time = now;
> > +		vq->last_add_time_valid = true;
> > +	}
> > +#endif
> > +
> > +	BUG_ON(total_sg == 0);
> > +
> > +	head = vq->next_avail_idx;
> > +	avail_wrap_counter = vq->avail_wrap_counter;
> > +
> > +	if (virtqueue_use_indirect(_vq, total_sg))
> > +		desc = alloc_indirect_packed(_vq, total_sg, gfp);
> > +	else {
> > +		desc = NULL;
> > +		WARN_ON_ONCE(total_sg > vq->vring_packed.num && !vq->indirect);
> > +	}
> > +
> > +	if (desc) {
> > +		/* Use a single buffer which doesn't continue */
> > +		indirect = true;
> > +		/* Set up rest to use this indirect table. */
> > +		i = 0;
> > +		descs_used = 1;
> > +	} else {
> > +		indirect = false;
> > +		desc = vq->vring_packed.desc;
> > +		i = head;
> > +		descs_used = total_sg;
> > +	}
> > +
> > +	if (vq->vq.num_free < descs_used) {
> > +		pr_debug("Can't add buf len %i - avail = %i\n",
> > +			 descs_used, vq->vq.num_free);
> > +		/* FIXME: for historical reasons, we force a notify here if
> > +		 * there are outgoing parts to the buffer.  Presumably the
> > +		 * host should service the ring ASAP. */
> 
> I don't think we have a reason to do this for packed ring.
> No historical baggage there, right?

Based on the original commit log, it seems that the notify here
is just an "optimization". But I don't quite understand what
"the heuristics which KVM uses" refers to. If it's safe to drop
this in packed ring, I'd like to do it.

commit 44653eae1407f79dff6f52fcf594ae84cb165ec4
Author: Rusty Russell <rusty@rustcorp.com.au>
Date:   Fri Jul 25 12:06:04 2008 -0500

    virtio: don't always force a notification when ring is full
    
    We force notification when the ring is full, even if the host has
    indicated it doesn't want to know.  This seemed like a good idea at
    the time: if we fill the transmit ring, we should tell the host
    immediately.
    
    Unfortunately this logic also applies to the receiving ring, which is
    refilled constantly.  We should introduce real notification thresholds
    to replace this logic.  Meanwhile, removing the logic altogether breaks
    the heuristics which KVM uses, so we use a hack: only notify if there are
    outgoing parts of the new buffer.
    
    Here are the number of exits with lguest's crappy network implementation:
    Before:
            network xmit 7859051 recv 236420
    After:
            network xmit 7858610 recv 118136
    
    Signed-off-by: Rusty Russell <rusty@rustcorp.com.au>

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 72bf8bc09014..21d9a62767af 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -87,8 +87,11 @@ static int vring_add_buf(struct virtqueue *_vq,
 	if (vq->num_free < out + in) {
 		pr_debug("Can't add buf len %i - avail = %i\n",
 			 out + in, vq->num_free);
-		/* We notify *even if* VRING_USED_F_NO_NOTIFY is set here. */
-		vq->notify(&vq->vq);
+		/* FIXME: for historical reasons, we force a notify here if
+		 * there are outgoing parts to the buffer.  Presumably the
+		 * host should service the ring ASAP. */
+		if (out)
+			vq->notify(&vq->vq);
 		END_USE(vq);
 		return -ENOSPC;
 	}
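
For the packed ring, dropping the hack would leave the -ENOSPC path as
simple as this (a sketch against the code in this patch):

	if (vq->vq.num_free < descs_used) {
		pr_debug("Can't add buf len %i - avail = %i\n",
			 descs_used, vq->vq.num_free);
		/* No forced notify: rely solely on the event
		 * suppression state the device exposed. */
		if (indirect)
			kfree(desc);
		END_USE(vq);
		return -ENOSPC;
	}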


> 
> > +		if (out_sgs)
> > +			vq->notify(&vq->vq);
> > +		if (indirect)
> > +			kfree(desc);
> > +		END_USE(vq);
> > +		return -ENOSPC;
> > +	}
> > +
[...]

^ permalink raw reply related	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 3/5] virtio_ring: add packed ring support
  2018-11-08  1:38     ` Tiwei Bie
@ 2018-11-08  8:18       ` Jason Wang
  2018-11-08 11:51         ` Tiwei Bie
  2018-11-08 14:14         ` Michael S. Tsirkin
  0 siblings, 2 replies; 53+ messages in thread
From: Jason Wang @ 2018-11-08  8:18 UTC (permalink / raw)
  To: Tiwei Bie, Michael S. Tsirkin
  Cc: virtualization, linux-kernel, netdev, virtio-dev, wexu, jfreimann


On 2018/11/8 9:38 AM, Tiwei Bie wrote:
>>> +
>>> +	if (vq->vq.num_free < descs_used) {
>>> +		pr_debug("Can't add buf len %i - avail = %i\n",
>>> +			 descs_used, vq->vq.num_free);
>>> +		/* FIXME: for historical reasons, we force a notify here if
>>> +		 * there are outgoing parts to the buffer.  Presumably the
>>> +		 * host should service the ring ASAP. */
>> I don't think we have a reason to do this for packed ring.
>> No historical baggage there, right?
> Based on the original commit log, it seems that the notify here
> is just an "optimization". But I don't quite understand what
> "the heuristics which KVM uses" refers to. If it's safe to drop
> this in packed ring, I'd like to do it.


According to the commit log, it seems like a workaround for the lguest
networking backend. I agree to drop it; we should not carry such a burden.

But we should note that, with this removed, the comparison between packed
and split is kind of unfair. Considering the recent removal of lguest
support, maybe we can drop this for split ring as well?

Thanks


>
> commit 44653eae1407f79dff6f52fcf594ae84cb165ec4
> Author: Rusty Russell<rusty@rustcorp.com.au>
> Date:   Fri Jul 25 12:06:04 2008 -0500
>
>      virtio: don't always force a notification when ring is full
>      
>      We force notification when the ring is full, even if the host has
>      indicated it doesn't want to know.  This seemed like a good idea at
>      the time: if we fill the transmit ring, we should tell the host
>      immediately.
>      
>      Unfortunately this logic also applies to the receiving ring, which is
>      refilled constantly.  We should introduce real notification thresholds
>      to replace this logic.  Meanwhile, removing the logic altogether breaks
>      the heuristics which KVM uses, so we use a hack: only notify if there are
>      outgoing parts of the new buffer.
>      
>      Here are the number of exits with lguest's crappy network implementation:
>      Before:
>              network xmit 7859051 recv 236420
>      After:
>              network xmit 7858610 recv 118136
>      
>      Signed-off-by: Rusty Russell<rusty@rustcorp.com.au>
>
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index 72bf8bc09014..21d9a62767af 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -87,8 +87,11 @@ static int vring_add_buf(struct virtqueue *_vq,
>   	if (vq->num_free < out + in) {
>   		pr_debug("Can't add buf len %i - avail = %i\n",
>   			 out + in, vq->num_free);
> -		/* We notify*even if*  VRING_USED_F_NO_NOTIFY is set here. */
> -		vq->notify(&vq->vq);
> +		/* FIXME: for historical reasons, we force a notify here if
> +		 * there are outgoing parts to the buffer.  Presumably the
> +		 * host should service the ring ASAP. */
> +		if (out)
> +			vq->notify(&vq->vq);
>   		END_USE(vq);
>   		return -ENOSPC;
>   	}
>
>

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 3/5] virtio_ring: add packed ring support
  2018-11-08  8:18       ` Jason Wang
@ 2018-11-08 11:51         ` Tiwei Bie
  2018-11-08 15:56           ` Michael S. Tsirkin
  2018-11-08 14:14         ` Michael S. Tsirkin
  1 sibling, 1 reply; 53+ messages in thread
From: Tiwei Bie @ 2018-11-08 11:51 UTC (permalink / raw)
  To: Jason Wang
  Cc: Michael S. Tsirkin, virtualization, linux-kernel, netdev,
	virtio-dev, wexu, jfreimann

On Thu, Nov 08, 2018 at 04:18:25PM +0800, Jason Wang wrote:
> 
> On 2018/11/8 9:38 AM, Tiwei Bie wrote:
> > > > +
> > > > +	if (vq->vq.num_free < descs_used) {
> > > > +		pr_debug("Can't add buf len %i - avail = %i\n",
> > > > +			 descs_used, vq->vq.num_free);
> > > > +		/* FIXME: for historical reasons, we force a notify here if
> > > > +		 * there are outgoing parts to the buffer.  Presumably the
> > > > +		 * host should service the ring ASAP. */
> > > I don't think we have a reason to do this for packed ring.
> > > No historical baggage there, right?
> > Based on the original commit log, it seems that the notify here
> > is just an "optimization". But I don't quite understand what
> > "the heuristics which KVM uses" refers to. If it's safe to drop
> > this in packed ring, I'd like to do it.
> 
> 
> According to the commit log, it seems like a workaround for the lguest
> networking backend.

Do you know why removing this notify in Tx would break "the
heuristics which KVM uses"? Or what "the heuristics
which KVM uses" refers to?


> I agree to drop it; we should not carry such a burden.
> 
> But we should note that, with this removed, the comparison between packed
> and split is kind of unfair. Considering the recent removal of lguest
> support, maybe we can drop this for split ring as well?
> 
> Thanks
> 
> 
> > 
> > commit 44653eae1407f79dff6f52fcf594ae84cb165ec4
> > Author: Rusty Russell<rusty@rustcorp.com.au>
> > Date:   Fri Jul 25 12:06:04 2008 -0500
> > 
> >      virtio: don't always force a notification when ring is full
> >      We force notification when the ring is full, even if the host has
> >      indicated it doesn't want to know.  This seemed like a good idea at
> >      the time: if we fill the transmit ring, we should tell the host
> >      immediately.
> >      Unfortunately this logic also applies to the receiving ring, which is
> >      refilled constantly.  We should introduce real notification thresholds
> >      to replace this logic.  Meanwhile, removing the logic altogether breaks
> >      the heuristics which KVM uses, so we use a hack: only notify if there are
> >      outgoing parts of the new buffer.
> >      Here are the number of exits with lguest's crappy network implementation:
> >      Before:
> >              network xmit 7859051 recv 236420
> >      After:
> >              network xmit 7858610 recv 118136
> >      Signed-off-by: Rusty Russell<rusty@rustcorp.com.au>
> > 
> > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > index 72bf8bc09014..21d9a62767af 100644
> > --- a/drivers/virtio/virtio_ring.c
> > +++ b/drivers/virtio/virtio_ring.c
> > @@ -87,8 +87,11 @@ static int vring_add_buf(struct virtqueue *_vq,
> >   	if (vq->num_free < out + in) {
> >   		pr_debug("Can't add buf len %i - avail = %i\n",
> >   			 out + in, vq->num_free);
> > -		/* We notify*even if*  VRING_USED_F_NO_NOTIFY is set here. */
> > -		vq->notify(&vq->vq);
> > +		/* FIXME: for historical reasons, we force a notify here if
> > +		 * there are outgoing parts to the buffer.  Presumably the
> > +		 * host should service the ring ASAP. */
> > +		if (out)
> > +			vq->notify(&vq->vq);
> >   		END_USE(vq);
> >   		return -ENOSPC;
> >   	}
> > 
> > 

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 3/5] virtio_ring: add packed ring support
  2018-11-08  8:18       ` Jason Wang
  2018-11-08 11:51         ` Tiwei Bie
@ 2018-11-08 14:14         ` Michael S. Tsirkin
  2018-11-09  2:25           ` Jason Wang
  1 sibling, 1 reply; 53+ messages in thread
From: Michael S. Tsirkin @ 2018-11-08 14:14 UTC (permalink / raw)
  To: Jason Wang
  Cc: Tiwei Bie, virtualization, linux-kernel, netdev, virtio-dev,
	wexu, jfreimann

On Thu, Nov 08, 2018 at 04:18:25PM +0800, Jason Wang wrote:
> 
> On 2018/11/8 9:38 AM, Tiwei Bie wrote:
> > > > +
> > > > +	if (vq->vq.num_free < descs_used) {
> > > > +		pr_debug("Can't add buf len %i - avail = %i\n",
> > > > +			 descs_used, vq->vq.num_free);
> > > > +		/* FIXME: for historical reasons, we force a notify here if
> > > > +		 * there are outgoing parts to the buffer.  Presumably the
> > > > +		 * host should service the ring ASAP. */
> > > I don't think we have a reason to do this for packed ring.
> > > No historical baggage there, right?
> > Based on the original commit log, it seems that the notify here
> > is just an "optimization". But I don't quite understand what
> > "the heuristics which KVM uses" refers to. If it's safe to drop
> > this in packed ring, I'd like to do it.
> 
> 
> According to the commit log, it seems like a workaround for the lguest
> networking backend. I agree to drop it; we should not carry such a burden.
> 
> But we should note that, with this removed, the comparison between packed
> and split is kind of unfair.

I don't think this ever triggers, to be frank. When would it?

> Considering the recent removal of lguest support,
> maybe we can drop this for split ring as well?
> 
> Thanks

If it's helpful, then for sure we can drop it for virtio 1.
Can you see any perf differences at all? With which device?

> 
> > 
> > commit 44653eae1407f79dff6f52fcf594ae84cb165ec4
> > Author: Rusty Russell<rusty@rustcorp.com.au>
> > Date:   Fri Jul 25 12:06:04 2008 -0500
> > 
> >      virtio: don't always force a notification when ring is full
> >      We force notification when the ring is full, even if the host has
> >      indicated it doesn't want to know.  This seemed like a good idea at
> >      the time: if we fill the transmit ring, we should tell the host
> >      immediately.
> >      Unfortunately this logic also applies to the receiving ring, which is
> >      refilled constantly.  We should introduce real notification thesholds
> >      refilled constantly.  We should introduce real notification thresholds
> >      the heuristics which KVM uses, so we use a hack: only notify if there are
> >      outgoing parts of the new buffer.
> >      Here are the number of exits with lguest's crappy network implementation:
> >      Before:
> >              network xmit 7859051 recv 236420
> >      After:
> >              network xmit 7858610 recv 118136
> >      Signed-off-by: Rusty Russell<rusty@rustcorp.com.au>
> > 
> > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > index 72bf8bc09014..21d9a62767af 100644
> > --- a/drivers/virtio/virtio_ring.c
> > +++ b/drivers/virtio/virtio_ring.c
> > @@ -87,8 +87,11 @@ static int vring_add_buf(struct virtqueue *_vq,
> >   	if (vq->num_free < out + in) {
> >   		pr_debug("Can't add buf len %i - avail = %i\n",
> >   			 out + in, vq->num_free);
> > -		/* We notify*even if*  VRING_USED_F_NO_NOTIFY is set here. */
> > -		vq->notify(&vq->vq);
> > +		/* FIXME: for historical reasons, we force a notify here if
> > +		 * there are outgoing parts to the buffer.  Presumably the
> > +		 * host should service the ring ASAP. */
> > +		if (out)
> > +			vq->notify(&vq->vq);
> >   		END_USE(vq);
> >   		return -ENOSPC;
> >   	}
> > 
> > 

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 3/5] virtio_ring: add packed ring support
  2018-11-08 11:51         ` Tiwei Bie
@ 2018-11-08 15:56           ` Michael S. Tsirkin
  2018-11-09  1:50             ` Tiwei Bie
  2018-11-09  2:30             ` Jason Wang
  0 siblings, 2 replies; 53+ messages in thread
From: Michael S. Tsirkin @ 2018-11-08 15:56 UTC (permalink / raw)
  To: Tiwei Bie
  Cc: Jason Wang, virtualization, linux-kernel, netdev, virtio-dev,
	wexu, jfreimann

On Thu, Nov 08, 2018 at 07:51:48PM +0800, Tiwei Bie wrote:
> On Thu, Nov 08, 2018 at 04:18:25PM +0800, Jason Wang wrote:
> > 
> > On 2018/11/8 9:38 AM, Tiwei Bie wrote:
> > > > > +
> > > > > +	if (vq->vq.num_free < descs_used) {
> > > > > +		pr_debug("Can't add buf len %i - avail = %i\n",
> > > > > +			 descs_used, vq->vq.num_free);
> > > > > +		/* FIXME: for historical reasons, we force a notify here if
> > > > > +		 * there are outgoing parts to the buffer.  Presumably the
> > > > > +		 * host should service the ring ASAP. */
> > > > I don't think we have a reason to do this for packed ring.
> > > > No historical baggage there, right?
> > > Based on the original commit log, it seems that the notify here
> > > is just an "optimization". But I don't quite understand what
> > > "the heuristics which KVM uses" refers to. If it's safe to drop
> > > this in packed ring, I'd like to do it.
> > 
> > 
> > According to the commit log, it seems like a workaround for the lguest
> > networking backend.
> 
> Do you know why removing this notify in Tx will break "the
> heuristics which KVM uses"? Or what does "the heuristics
> which KVM uses" refer to?

Yes. QEMU has a mode where it disables notifications and processes the TX
ring periodically from a timer.  It's off by default but used to be on
by default a long time ago. If the ring becomes full, this causes traffic
stalls.  As a work-around, Rusty put in this hack to kick on ring full
even with notifications disabled.  It's easy enough to make sure QEMU
does not combine devices with packed ring support with the timer hack.
And I am guessing it's safe enough to also block that option completely,
e.g. when virtio 1.0 is enabled.
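
For illustration, a condensed sketch of the interaction being described,
reusing the names from the quoted diff (this is the split-ring logic, not
the packed-ring patch):

	/* Device side (e.g. the old QEMU tx=timer mode): suppress guest
	 * kicks and rely on a periodic timer to drain the ring. */
	used->flags |= VRING_USED_F_NO_NOTIFY;

	/* Driver side, in vring_add_buf(): if an outgoing (TX) buffer
	 * hits a full ring, kick anyway, so a timer-driven host services
	 * the ring before the timer fires and traffic does not stall. */
	if (vq->num_free < out + in) {
		if (out)
			vq->notify(&vq->vq);	/* forced kick, ignores NO_NOTIFY */
		END_USE(vq);
		return -ENOSPC;
	}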

> 
> > I agree to drop it; we should not carry such a burden.
> > 
> > But we should note that, with this removed, the comparison between packed and
> > split is kind of unfair. Considering the recent removal of lguest support,
> > maybe we can drop this for split ring as well?
> > 
> > Thanks
> > 
> > 
> > > 
> > > [...]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 3/5] virtio_ring: add packed ring support
  2018-11-08 15:56           ` Michael S. Tsirkin
@ 2018-11-09  1:50             ` Tiwei Bie
  2018-11-09  2:30             ` Jason Wang
  1 sibling, 0 replies; 53+ messages in thread
From: Tiwei Bie @ 2018-11-09  1:50 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Jason Wang, virtualization, linux-kernel, netdev, virtio-dev,
	wexu, jfreimann

On Thu, Nov 08, 2018 at 10:56:02AM -0500, Michael S. Tsirkin wrote:
> On Thu, Nov 08, 2018 at 07:51:48PM +0800, Tiwei Bie wrote:
> > On Thu, Nov 08, 2018 at 04:18:25PM +0800, Jason Wang wrote:
> > > 
> > On 2018/11/8 9:38 AM, Tiwei Bie wrote:
> > > > > > +
> > > > > > +	if (vq->vq.num_free < descs_used) {
> > > > > > +		pr_debug("Can't add buf len %i - avail = %i\n",
> > > > > > +			 descs_used, vq->vq.num_free);
> > > > > > +		/* FIXME: for historical reasons, we force a notify here if
> > > > > > +		 * there are outgoing parts to the buffer.  Presumably the
> > > > > > +		 * host should service the ring ASAP. */
> > > > > I don't think we have a reason to do this for packed ring.
> > > > > No historical baggage there, right?
> > > > Based on the original commit log, it seems that the notify here
> > > > is just an "optimization". But I don't quite understand what
> > > > "the heuristics which KVM uses" refers to. If it's safe to drop
> > > > this in packed ring, I'd like to do it.
> > > 
> > > 
> > > According to the commit log, it seems like a workaround for the lguest
> > > networking backend.
> > 
> > Do you know why removing this notify in Tx will break "the
> > heuristics which KVM uses"? Or what does "the heuristics
> > which KVM uses" refer to?
> 
> Yes. QEMU has a mode where it disables notifications and processes the TX
> ring periodically from a timer.  It's off by default but used to be on
> by default a long time ago. If the ring becomes full, this causes traffic
> stalls.  As a work-around, Rusty put in this hack to kick on ring full
> even with notifications disabled.  It's easy enough to make sure QEMU
> does not combine devices with packed ring support with the timer hack.
> And I am guessing it's safe enough to also block that option completely,
> e.g. when virtio 1.0 is enabled.

I see. Thanks!

> 
> > 
> > > I agree to drop it; we should not carry such a burden.
> > > 
> > > But we should note that, with this removed, the comparison between packed and
> > > split is kind of unfair. Considering the recent removal of lguest support,
> > > maybe we can drop this for split ring as well?
> > > 
> > > Thanks
> > > 
> > > 
> > > > 
> > > > [...]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 3/5] virtio_ring: add packed ring support
  2018-11-08 14:14         ` Michael S. Tsirkin
@ 2018-11-09  2:25           ` Jason Wang
  2018-11-09  3:58             ` Michael S. Tsirkin
  0 siblings, 1 reply; 53+ messages in thread
From: Jason Wang @ 2018-11-09  2:25 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Tiwei Bie, virtualization, linux-kernel, netdev, virtio-dev,
	wexu, jfreimann


On 2018/11/8 10:14 PM, Michael S. Tsirkin wrote:
> On Thu, Nov 08, 2018 at 04:18:25PM +0800, Jason Wang wrote:
>> On 2018/11/8 9:38 AM, Tiwei Bie wrote:
>>>>> +
>>>>> +	if (vq->vq.num_free < descs_used) {
>>>>> +		pr_debug("Can't add buf len %i - avail = %i\n",
>>>>> +			 descs_used, vq->vq.num_free);
>>>>> +		/* FIXME: for historical reasons, we force a notify here if
>>>>> +		 * there are outgoing parts to the buffer.  Presumably the
>>>>> +		 * host should service the ring ASAP. */
>>>> I don't think we have a reason to do this for packed ring.
>>>> No historical baggage there, right?
>>> Based on the original commit log, it seems that the notify here
>>> is just an "optimization". But I don't quite understand what
>>> "the heuristics which KVM uses" refers to. If it's safe to drop
>>> this in packed ring, I'd like to do it.
>>
>> According to the commit log, it seems like a workaround for the lguest
>> networking backend. I agree to drop it; we should not carry such a burden.
>>
>> But we should note that, with this removed, the comparison between packed and
>> split is kind of unfair.
> I don't think this ever triggers, to be frank. When would it?


I think it can happen e.g. in the path of XDP transmission in
__virtnet_xdp_xmit_one():


         err = virtqueue_add_outbuf(sq->vq, sq->sg, 1, xdpf, GFP_ATOMIC);
         if (unlikely(err))
                 return -ENOSPC; /* Caller handle free/refcnt */
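
To make the asymmetry concrete, a rough sketch (the split-ring path
follows the quoted diff and the shape of this series' virtqueue_add_split()
check; the packed-ring path is what dropping the hack would look like):

	/* Split ring: on a full ring, force a kick for outgoing buffers
	 * even if the host has suppressed notifications, then fail. */
	if (vq->vq.num_free < descs_used) {
		if (out_sgs)
			vq->notify(&vq->vq);
		END_USE(vq);
		return -ENOSPC;
	}

	/* Packed ring with the hack dropped: just fail.  An XDP_TX burst
	 * that fills the ring never kicks a host that has notifications
	 * disabled. */
	if (vq->vq.num_free < descs_used) {
		END_USE(vq);
		return -ENOSPC;
	}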


>
>> Considering the recent removal of lguest support,
>> maybe we can drop this for split ring as well?
>>
>> Thanks
> If it's helpful, then for sure we can drop it for virtio 1.
> Can you see any perf differences at all? With which device?


I didn't test, but consider the case of XDP_TX in the guest plus vhost_net
in the host. Since vhost_net is half duplex, it's pretty easy to trigger
this condition.

Thanks


>
>>> [...]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 3/5] virtio_ring: add packed ring support
  2018-11-08 15:56           ` Michael S. Tsirkin
  2018-11-09  1:50             ` Tiwei Bie
@ 2018-11-09  2:30             ` Jason Wang
  2018-11-09  4:00               ` Michael S. Tsirkin
  1 sibling, 1 reply; 53+ messages in thread
From: Jason Wang @ 2018-11-09  2:30 UTC (permalink / raw)
  To: Michael S. Tsirkin, Tiwei Bie
  Cc: virtualization, linux-kernel, netdev, virtio-dev, wexu, jfreimann


On 2018/11/8 11:56 PM, Michael S. Tsirkin wrote:
> On Thu, Nov 08, 2018 at 07:51:48PM +0800, Tiwei Bie wrote:
>> On Thu, Nov 08, 2018 at 04:18:25PM +0800, Jason Wang wrote:
>>> On 2018/11/8 9:38 AM, Tiwei Bie wrote:
>>>>>> +
>>>>>> +	if (vq->vq.num_free < descs_used) {
>>>>>> +		pr_debug("Can't add buf len %i - avail = %i\n",
>>>>>> +			 descs_used, vq->vq.num_free);
>>>>>> +		/* FIXME: for historical reasons, we force a notify here if
>>>>>> +		 * there are outgoing parts to the buffer.  Presumably the
>>>>>> +		 * host should service the ring ASAP. */
>>>>> I don't think we have a reason to do this for packed ring.
>>>>> No historical baggage there, right?
>>>> Based on the original commit log, it seems that the notify here
>>>> is just an "optimization". But I don't quite understand what
>>>> "the heuristics which KVM uses" refers to. If it's safe to drop
>>>> this in packed ring, I'd like to do it.
>>>
>>> According to the commit log, it seems like a workaround for the lguest
>>> networking backend.
>> Do you know why removing this notify in Tx will break "the
>> heuristics which KVM uses"? Or what does "the heuristics
>> which KVM uses" refer to?
> Yes. QEMU has a mode where it disables notifications and processes the TX
> ring periodically from a timer.  It's off by default but used to be on
> by default a long time ago. If the ring becomes full, this causes traffic
> stalls.


Do you mean tx-timer? If yes, we can still enable it for packed ring and
the timer will eventually fire and we can make progress.
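
Roughly, the mode in question behaves like this hypothetical host-side
loop (names invented for illustration; not actual QEMU code):

	/* With guest->host notifications suppressed, the TX ring is only
	 * drained when the timer fires, so a full ring stalls for up to
	 * one timer period unless the guest forces a kick. */
	static void tx_timer_cb(void *opaque)
	{
		struct vq_state *vq = opaque;

		drain_tx_ring(vq);		/* process pending buffers */
		rearm_timer(&vq->tx_timer, TX_PERIOD_NS);
	}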


> As a work-around, Rusty put in this hack to kick on ring full
> even with notifications disabled.


From the commit log it looks more like a performance workaround than a
bug fix.


> It's easy enough to make sure QEMU
> does not combine devices with packed ring support with the timer hack.
> And I am guessing it's safe enough to also block that option completely
> e.g. when virtio 1.0 is enabled.


I agree.

Thanks


>>> I agree to drop it; we should not carry such a burden.
>>>
>>> But we should note that, with this removed, the comparison between packed and
>>> split is kind of unfair. Considering the recent removal of lguest support,
>>> maybe we can drop this for split ring as well?
>>>
>>> Thanks
>>>
>>>
>>>> [...]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 3/5] virtio_ring: add packed ring support
  2018-11-09  2:25           ` Jason Wang
@ 2018-11-09  3:58             ` Michael S. Tsirkin
  2018-11-09 10:04               ` Jason Wang
  0 siblings, 1 reply; 53+ messages in thread
From: Michael S. Tsirkin @ 2018-11-09  3:58 UTC (permalink / raw)
  To: Jason Wang
  Cc: Tiwei Bie, virtualization, linux-kernel, netdev, virtio-dev,
	wexu, jfreimann

On Fri, Nov 09, 2018 at 10:25:28AM +0800, Jason Wang wrote:
> 
> On 2018/11/8 10:14 PM, Michael S. Tsirkin wrote:
> > On Thu, Nov 08, 2018 at 04:18:25PM +0800, Jason Wang wrote:
> > > On 2018/11/8 9:38 AM, Tiwei Bie wrote:
> > > > > > +
> > > > > > +	if (vq->vq.num_free < descs_used) {
> > > > > > +		pr_debug("Can't add buf len %i - avail = %i\n",
> > > > > > +			 descs_used, vq->vq.num_free);
> > > > > > +		/* FIXME: for historical reasons, we force a notify here if
> > > > > > +		 * there are outgoing parts to the buffer.  Presumably the
> > > > > > +		 * host should service the ring ASAP. */
> > > > > I don't think we have a reason to do this for packed ring.
> > > > > No historical baggage there, right?
> > > > Based on the original commit log, it seems that the notify here
> > > > is just an "optimization". But I don't quite understand what
> > > > "the heuristics which KVM uses" refers to. If it's safe to drop
> > > > this in packed ring, I'd like to do it.
> > > 
> > > According to the commit log, it seems like a workaround for the lguest
> > > networking backend. I agree to drop it; we should not carry such a burden.
> > > 
> > > But we should note that, with this removed, the comparison between packed and
> > > split is kind of unfair.
> > I don't think this ever triggers, to be frank. When would it?
> 
> 
> I think it can happen e.g. in the path of XDP transmission in
> __virtnet_xdp_xmit_one():
> 
> 
>         err = virtqueue_add_outbuf(sq->vq, sq->sg, 1, xdpf, GFP_ATOMIC);
>         if (unlikely(err))
>                 return -ENOSPC; /* Caller handle free/refcnt */
> 

I see. We used to do it for regular xmit but stopped
doing it. Is it fine for xdp then?

> > 
> > > Considering the recent removal of lguest support,
> > > maybe we can drop this for split ring as well?
> > > 
> > > Thanks
> > If it's helpful, then for sure we can drop it for virtio 1.
> > Can you see any perf differences at all? With which device?
> 
> 
> I didn't test, but consider the case of XDP_TX in the guest plus vhost_net
> in the host. Since vhost_net is half duplex, it's pretty easy to trigger
> this condition.
> 
> Thanks

Sounds reasonable. Worth testing before we change things though.

> 
> > 
> > > > [...]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 3/5] virtio_ring: add packed ring support
  2018-11-09  2:30             ` Jason Wang
@ 2018-11-09  4:00               ` Michael S. Tsirkin
  2018-11-09 10:05                 ` Jason Wang
  0 siblings, 1 reply; 53+ messages in thread
From: Michael S. Tsirkin @ 2018-11-09  4:00 UTC (permalink / raw)
  To: Jason Wang
  Cc: Tiwei Bie, virtualization, linux-kernel, netdev, virtio-dev,
	wexu, jfreimann

On Fri, Nov 09, 2018 at 10:30:50AM +0800, Jason Wang wrote:
> 
> On 2018/11/8 11:56 PM, Michael S. Tsirkin wrote:
> > On Thu, Nov 08, 2018 at 07:51:48PM +0800, Tiwei Bie wrote:
> > > On Thu, Nov 08, 2018 at 04:18:25PM +0800, Jason Wang wrote:
> > > > On 2018/11/8 9:38 AM, Tiwei Bie wrote:
> > > > > > > +
> > > > > > > +	if (vq->vq.num_free < descs_used) {
> > > > > > > +		pr_debug("Can't add buf len %i - avail = %i\n",
> > > > > > > +			 descs_used, vq->vq.num_free);
> > > > > > > +		/* FIXME: for historical reasons, we force a notify here if
> > > > > > > +		 * there are outgoing parts to the buffer.  Presumably the
> > > > > > > +		 * host should service the ring ASAP. */
> > > > > > I don't think we have a reason to do this for packed ring.
> > > > > > No historical baggage there, right?
> > > > > Based on the original commit log, it seems that the notify here
> > > > > is just an "optimization". But I don't quite understand what
> > > > > "the heuristics which KVM uses" refers to. If it's safe to drop
> > > > > this in packed ring, I'd like to do it.
> > > > 
> > > > According to the commit log, it seems like a workaround for the lguest
> > > > networking backend.
> > > Do you know why removing this notify in Tx will break "the
> > > heuristics which KVM uses"? Or what does "the heuristics
> > > which KVM uses" refer to?
> > Yes. QEMU has a mode where it disables notifications and processes the TX
> > ring periodically from a timer.  It's off by default but used to be on
> > by default a long time ago. If the ring becomes full, this causes traffic
> > stalls.
> 
> 
> Do you mean tx-timer? If yes, we can still enable it for packed ring

Yes, we can, but I doubt anyone does.

> and the
> timer will eventually fire and we can make progress.

On tx ring full we probably don't want to wait for the timer.
But I think we can just prevent QEMU from using the tx timer
with virtio 1.
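
Something like this hypothetical guard at device setup time would do
(illustrative names only; not actual QEMU code):

	/* Refuse the legacy timer mode when the device offers
	 * VIRTIO_F_RING_PACKED (or VIRTIO_F_VERSION_1 more broadly),
	 * since the driver may no longer kick on ring full. */
	if (conf->tx_timer_enabled &&
	    has_feature(dev->host_features, VIRTIO_F_RING_PACKED)) {
		error_report("tx=timer is incompatible with packed ring");
		return -EINVAL;
	}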

> 
> > As a work-around, Rusty put in this hack to kick on ring full
> > even with notifications disabled.
> 
> 
> From the commit log it looks more like a performance workaround than a
> bug fix.

It's a quality-of-implementation issue, yes.

> 
> > It's easy enough to make sure QEMU
> > does not combine devices with packed ring support with the timer hack.
> > And I am guessing it's safe enough to also block that option completely
> > e.g. when virtio 1.0 is enabled.
> 
> 
> I agree.
> 
> Thanks
> 
> 
> > > > I agree to drop it; we should not carry such a burden.
> > > > 
> > > > But we should note that, with this removed, the comparison between packed and
> > > > split is kind of unfair. Considering the recent removal of lguest support,
> > > > maybe we can drop this for split ring as well?
> > > > 
> > > > Thanks
> > > > 
> > > > 
> > > > > [...]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 3/5] virtio_ring: add packed ring support
  2018-11-09  3:58             ` Michael S. Tsirkin
@ 2018-11-09 10:04               ` Jason Wang
  0 siblings, 0 replies; 53+ messages in thread
From: Jason Wang @ 2018-11-09 10:04 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Tiwei Bie, virtualization, linux-kernel, netdev, virtio-dev,
	wexu, jfreimann


On 2018/11/9 11:58 AM, Michael S. Tsirkin wrote:
> On Fri, Nov 09, 2018 at 10:25:28AM +0800, Jason Wang wrote:
>> On 2018/11/8 10:14 PM, Michael S. Tsirkin wrote:
>>> On Thu, Nov 08, 2018 at 04:18:25PM +0800, Jason Wang wrote:
>>>> On 2018/11/8 9:38 AM, Tiwei Bie wrote:
>>>>>>> +
>>>>>>> +	if (vq->vq.num_free < descs_used) {
>>>>>>> +		pr_debug("Can't add buf len %i - avail = %i\n",
>>>>>>> +			 descs_used, vq->vq.num_free);
>>>>>>> +		/* FIXME: for historical reasons, we force a notify here if
>>>>>>> +		 * there are outgoing parts to the buffer.  Presumably the
>>>>>>> +		 * host should service the ring ASAP. */
>>>>>> I don't think we have a reason to do this for packed ring.
>>>>>> No historical baggage there, right?
>>>>> Based on the original commit log, it seems that the notify here
>>>>> is just an "optimization". But I don't quite understand what
>>>>> "the heuristics which KVM uses" refers to. If it's safe to drop
>>>>> this in packed ring, I'd like to do it.
>>>> According to the commit log, it seems like a workaround for the lguest
>>>> networking backend. I agree to drop it; we should not carry such a burden.
>>>>
>>>> But we should note that, with this removed, the comparison between packed and
>>>> split is kind of unfair.
>>> I don't think this ever triggers, to be frank. When would it?
>>
>> I think it can happen e.g. in the path of XDP transmission in
>> __virtnet_xdp_xmit_one():
>>
>>
>>          err = virtqueue_add_outbuf(sq->vq, sq->sg, 1, xdpf, GFP_ATOMIC);
>>          if (unlikely(err))
>>                  return -ENOSPC; /* Caller handle free/refcnt */
>>
> I see. We used to do it for regular xmit but stopped
> doing it. Is it fine for xdp then?


There's no traffic control in XDP, so it was the only thing we could do.
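
For contrast, a simplified sketch of the difference (loosely based on
virtio-net; details approximate): the regular xmit path can push back
through the stack, while XDP cannot:

	/* Regular xmit: stop the queue when the ring is nearly full and
	 * let the qdisc hold packets until buffers are reclaimed. */
	if (sq->vq->num_free < 2 + MAX_SKB_FRAGS)
		netif_stop_subqueue(dev, qnum);

	/* XDP_TX: there is no queue to stop; on a full ring the frame is
	 * simply dropped by the caller. */
	err = virtqueue_add_outbuf(sq->vq, sq->sg, 1, xdpf, GFP_ATOMIC);
	if (unlikely(err))
		return -ENOSPC;	/* caller frees the frame */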


>
>>>> Considering the recent removal of lguest support,
>>>> maybe we can drop this for split ring as well?
>>>>
>>>> Thanks
>>> If it's helpful, then for sure we can drop it for virtio 1.
>>> Can you see any perf differences at all? With which device?
>>
>> I didn't test, but consider the case of XDP_TX in the guest plus vhost_net
>> in the host. Since vhost_net is half duplex, it's pretty easy to trigger
>> this condition.
>>
>> Thanks
> Sounds reasonable. Worth testing before we change things though.


Let me test and submit a patch.

Thanks


>
>>>>> [...]

^ permalink raw reply	[flat|nested] 53+ messages in thread

* Re: [PATCH net-next v2 3/5] virtio_ring: add packed ring support
  2018-11-09  4:00               ` Michael S. Tsirkin
@ 2018-11-09 10:05                 ` Jason Wang
  0 siblings, 0 replies; 53+ messages in thread
From: Jason Wang @ 2018-11-09 10:05 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Tiwei Bie, virtualization, linux-kernel, netdev, virtio-dev,
	wexu, jfreimann


On 2018/11/9 12:00 PM, Michael S. Tsirkin wrote:
> On Fri, Nov 09, 2018 at 10:30:50AM +0800, Jason Wang wrote:
>> On 2018/11/8 11:56 PM, Michael S. Tsirkin wrote:
>>> On Thu, Nov 08, 2018 at 07:51:48PM +0800, Tiwei Bie wrote:
>>>> On Thu, Nov 08, 2018 at 04:18:25PM +0800, Jason Wang wrote:
>>>>> On 2018/11/8 9:38 AM, Tiwei Bie wrote:
>>>>>>>> +
>>>>>>>> +	if (vq->vq.num_free < descs_used) {
>>>>>>>> +		pr_debug("Can't add buf len %i - avail = %i\n",
>>>>>>>> +			 descs_used, vq->vq.num_free);
>>>>>>>> +		/* FIXME: for historical reasons, we force a notify here if
>>>>>>>> +		 * there are outgoing parts to the buffer.  Presumably the
>>>>>>>> +		 * host should service the ring ASAP. */
>>>>>>> I don't think we have a reason to do this for packed ring.
>>>>>>> No historical baggage there, right?
>>>>>> Based on the original commit log, it seems that the notify here
>>>>>> is just an "optimization". But I don't quite understand what
>>>>>> "the heuristics which KVM uses" refers to. If it's safe to drop
>>>>>> this in packed ring, I'd like to do it.
>>>>> According to the commit log, it seems like a workaround for the lguest
>>>>> networking backend.
>>>> Do you know why removing this notify in Tx will break "the
>>>> heuristics which KVM uses"? Or what does "the heuristics
>>>> which KVM uses" refer to?
>>> Yes. QEMU has a mode where it disables notifications and processes the TX
>>> ring periodically from a timer.  It's off by default but used to be on
>>> by default a long time ago. If the ring becomes full, this causes traffic
>>> stalls.
>>
>> Do you mean tx-timer? If yes, we can still enable it for packed ring
> Yes, we can, but I doubt anyone does.
>
>> and the
>> timer will eventually fire and we can make progress.
> On tx ring full we probably don't want to wait for the timer.
> But I think we can just prevent QEMU from using the tx timer
> with virtio 1.


Yes, we can.

Thanks


>
>>> As a work-around, Rusty put in this hack to kick on ring full
>>> even with notifications disabled.
>>
>> From the commit log it looks more like a performance workaround than a
>> bug fix.
> It's a quality-of-implementation issue, yes.
>
>>> It's easy enough to make sure QEMU
>>> does not combine devices with packed ring support with the timer hack.
>>> And I am guessing it's safe enough to also block that option completely
>>> e.g. when virtio 1.0 is enabled.
>>
>> I agree.
>>
>> Thanks
>>
>>
>>>>> I agree to drop it; we should not carry such a burden.
>>>>>
>>>>> But we should note that, with this removed, the comparison between packed and
>>>>> split is kind of unfair. Considering the recent removal of lguest support,
>>>>> maybe we can drop this for split ring as well?
>>>>>
>>>>> Thanks
>>>>>
>>>>>
>>>>>> [...]

^ permalink raw reply	[flat|nested] 53+ messages in thread

end of thread, other threads:[~2018-11-09 10:05 UTC | newest]

Thread overview: 53+ messages
2018-07-11  2:27 [PATCH net-next v2 0/5] virtio: support packed ring Tiwei Bie
2018-07-11  2:27 ` [PATCH net-next v2 1/5] virtio: add packed ring definitions Tiwei Bie
2018-09-07 13:51   ` Michael S. Tsirkin
2018-09-10  2:13     ` Tiwei Bie
2018-09-12 12:53       ` Michael S. Tsirkin
2018-07-11  2:27 ` [PATCH net-next v2 2/5] virtio_ring: support creating packed ring Tiwei Bie
2018-09-07 14:03   ` Michael S. Tsirkin
2018-09-10  2:28     ` Tiwei Bie
2018-09-12  7:51       ` Tiwei Bie
2018-09-12 16:12         ` Michael S. Tsirkin
2018-09-12 12:45       ` Michael S. Tsirkin
2018-07-11  2:27 ` [PATCH net-next v2 3/5] virtio_ring: add packed ring support Tiwei Bie
2018-09-07 13:49   ` Michael S. Tsirkin
2018-09-10  2:03     ` Tiwei Bie
2018-09-12 16:22   ` Michael S. Tsirkin
2018-11-07 17:48   ` Michael S. Tsirkin
2018-11-08  1:38     ` Tiwei Bie
2018-11-08  8:18       ` Jason Wang
2018-11-08 11:51         ` Tiwei Bie
2018-11-08 15:56           ` Michael S. Tsirkin
2018-11-09  1:50             ` Tiwei Bie
2018-11-09  2:30             ` Jason Wang
2018-11-09  4:00               ` Michael S. Tsirkin
2018-11-09 10:05                 ` Jason Wang
2018-11-08 14:14         ` Michael S. Tsirkin
2018-11-09  2:25           ` Jason Wang
2018-11-09  3:58             ` Michael S. Tsirkin
2018-11-09 10:04               ` Jason Wang
2018-07-11  2:27 ` [PATCH net-next v2 4/5] virtio_ring: add event idx support in packed ring Tiwei Bie
2018-09-07 14:10   ` Michael S. Tsirkin
2018-09-10  2:35     ` [virtio-dev] " Tiwei Bie
2018-07-11  2:27 ` [PATCH net-next v2 5/5] virtio_ring: enable " Tiwei Bie
2018-07-11  2:52 ` [PATCH net-next v2 0/5] virtio: support " Jason Wang
2018-07-12 21:44 ` David Miller
2018-07-13  0:52   ` Jason Wang
2018-07-13  3:26   ` Michael S. Tsirkin
2018-08-27 14:00 ` Michael S. Tsirkin
2018-08-28  5:51   ` [virtio-dev] " Jens Freimann
2018-09-07  1:22   ` Tiwei Bie
2018-09-07 13:00     ` Michael S. Tsirkin
2018-09-10  3:00       ` Tiwei Bie
2018-09-10  3:33         ` Jason Wang
2018-09-11  5:37           ` Tiwei Bie
2018-09-12 16:16             ` Michael S. Tsirkin
2018-09-13  8:59               ` Tiwei Bie
2018-09-13  9:47                 ` Jason Wang
2018-10-10 14:36                   ` Michael S. Tsirkin
2018-10-11 12:12                     ` Tiwei Bie
2018-10-11 13:48                       ` Michael S. Tsirkin
2018-10-11 14:13                         ` Tiwei Bie
2018-10-11 14:17                           ` Michael S. Tsirkin
2018-10-11 14:34                             ` Tiwei Bie
2018-09-12 13:06         ` Michael S. Tsirkin
