* [PATCH v1 0/6] virtio: support advance DMA
@ 2022-02-10  8:51 Xuan Zhuo
  2022-02-10  8:51 ` [PATCH v1 1/6] virtio: rename vring_unmap_state_packed() to vring_unmap_extra_packed() Xuan Zhuo
                   ` (6 more replies)
  0 siblings, 7 replies; 36+ messages in thread
From: Xuan Zhuo @ 2022-02-10  8:51 UTC (permalink / raw)
  To: virtualization; +Cc: Michael S. Tsirkin

virtqueue_add() only supports virtual addresses; the DMA mapping is done
inside virtqueue_add().

In some scenarios (such as the AF_XDP scenario), the memory is mapped for
DMA in advance, so we need to support passing an already-mapped DMA address
to virtqueue_add().

Record this predma information in extra->flags so that the DMA unmap can be
skipped for such buffers.

v1:
   1. All sgs submitted in one request must uniformly be PREDMA; mixing
      PREDMA and non-PREDMA sgs is not supported
   2. virtio_dma_map() is removed from this patch set and will be submitted
      later, together with the AF_XDP support for virtio DMA
   3. Added patches #2 and #3 to remove the flags check when unmapping
      indirect descs

Xuan Zhuo (6):
  virtio: rename vring_unmap_state_packed() to
    vring_unmap_extra_packed()
  virtio: remove flags check for unmap split indirect desc
  virtio: remove flags check for unmap packed indirect desc
  virtio: virtqueue_add() support predma
  virtio: split: virtqueue_add_split() support dma address
  virtio: packed: virtqueue_add_packed() support dma address

 drivers/virtio/virtio_ring.c | 199 ++++++++++++++++++++++-------------
 1 file changed, 126 insertions(+), 73 deletions(-)

--
2.31.0

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

* [PATCH v1 1/6] virtio: rename vring_unmap_state_packed() to vring_unmap_extra_packed()
  2022-02-10  8:51 [PATCH v1 0/6] virtio: support advance DMA Xuan Zhuo
@ 2022-02-10  8:51 ` Xuan Zhuo
  2022-02-23  2:41   ` Jason Wang
  2022-02-10  8:51 ` [PATCH v1 2/6] virtio: remove flags check for unmap split indirect desc Xuan Zhuo
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 36+ messages in thread
From: Xuan Zhuo @ 2022-02-10  8:51 UTC (permalink / raw)
  To: virtualization; +Cc: Michael S. Tsirkin

The actual parameter handled by vring_unmap_state_packed() is a struct
vring_desc_extra, so this function should use "extra" instead of "state".

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_ring.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 962f1477b1fa..7cf3ae057833 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -984,24 +984,24 @@ static struct virtqueue *vring_create_virtqueue_split(
  * Packed ring specific functions - *_packed().
  */
 
-static void vring_unmap_state_packed(const struct vring_virtqueue *vq,
-				     struct vring_desc_extra *state)
+static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
+				     struct vring_desc_extra *extra)
 {
 	u16 flags;
 
 	if (!vq->use_dma_api)
 		return;
 
-	flags = state->flags;
+	flags = extra->flags;
 
 	if (flags & VRING_DESC_F_INDIRECT) {
 		dma_unmap_single(vring_dma_dev(vq),
-				 state->addr, state->len,
+				 extra->addr, extra->len,
 				 (flags & VRING_DESC_F_WRITE) ?
 				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
 	} else {
 		dma_unmap_page(vring_dma_dev(vq),
-			       state->addr, state->len,
+			       extra->addr, extra->len,
 			       (flags & VRING_DESC_F_WRITE) ?
 			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
 	}
@@ -1303,8 +1303,7 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
 	for (n = 0; n < total_sg; n++) {
 		if (i == err_idx)
 			break;
-		vring_unmap_state_packed(vq,
-					 &vq->packed.desc_extra[curr]);
+		vring_unmap_extra_packed(vq, &vq->packed.desc_extra[curr]);
 		curr = vq->packed.desc_extra[curr].next;
 		i++;
 		if (i >= vq->packed.vring.num)
@@ -1383,8 +1382,8 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
 	if (unlikely(vq->use_dma_api)) {
 		curr = id;
 		for (i = 0; i < state->num; i++) {
-			vring_unmap_state_packed(vq,
-				&vq->packed.desc_extra[curr]);
+			vring_unmap_extra_packed(vq,
+						 &vq->packed.desc_extra[curr]);
 			curr = vq->packed.desc_extra[curr].next;
 		}
 	}
-- 
2.31.0

* [PATCH v1 2/6] virtio: remove flags check for unmap split indirect desc
  2022-02-10  8:51 [PATCH v1 0/6] virtio: support advance DMA Xuan Zhuo
  2022-02-10  8:51 ` [PATCH v1 1/6] virtio: rename vring_unmap_state_packed() to vring_unmap_extra_packed() Xuan Zhuo
@ 2022-02-10  8:51 ` Xuan Zhuo
  2022-02-23  2:42   ` Jason Wang
  2022-02-10  8:51 ` [PATCH v1 3/6] virtio: remove flags check for unmap packed " Xuan Zhuo
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 36+ messages in thread
From: Xuan Zhuo @ 2022-02-10  8:51 UTC (permalink / raw)
  To: virtualization; +Cc: Michael S. Tsirkin

vring_unmap_one_split_indirect() is never called on a descriptor whose
flags contain VRING_DESC_F_INDIRECT, so remove the dead branch.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_ring.c | 18 +++++-------------
 1 file changed, 5 insertions(+), 13 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 7cf3ae057833..fadd0a7503e9 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -379,19 +379,11 @@ static void vring_unmap_one_split_indirect(const struct vring_virtqueue *vq,
 
 	flags = virtio16_to_cpu(vq->vq.vdev, desc->flags);
 
-	if (flags & VRING_DESC_F_INDIRECT) {
-		dma_unmap_single(vring_dma_dev(vq),
-				 virtio64_to_cpu(vq->vq.vdev, desc->addr),
-				 virtio32_to_cpu(vq->vq.vdev, desc->len),
-				 (flags & VRING_DESC_F_WRITE) ?
-				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
-	} else {
-		dma_unmap_page(vring_dma_dev(vq),
-			       virtio64_to_cpu(vq->vq.vdev, desc->addr),
-			       virtio32_to_cpu(vq->vq.vdev, desc->len),
-			       (flags & VRING_DESC_F_WRITE) ?
-			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
-	}
+	dma_unmap_page(vring_dma_dev(vq),
+		       virtio64_to_cpu(vq->vq.vdev, desc->addr),
+		       virtio32_to_cpu(vq->vq.vdev, desc->len),
+		       (flags & VRING_DESC_F_WRITE) ?
+		       DMA_FROM_DEVICE : DMA_TO_DEVICE);
 }
 
 static unsigned int vring_unmap_one_split(const struct vring_virtqueue *vq,
-- 
2.31.0

* [PATCH v1 3/6] virtio: remove flags check for unmap packed indirect desc
  2022-02-10  8:51 [PATCH v1 0/6] virtio: support advance DMA Xuan Zhuo
  2022-02-10  8:51 ` [PATCH v1 1/6] virtio: rename vring_unmap_state_packed() to vring_unmap_extra_packed() Xuan Zhuo
  2022-02-10  8:51 ` [PATCH v1 2/6] virtio: remove flags check for unmap split indirect desc Xuan Zhuo
@ 2022-02-10  8:51 ` Xuan Zhuo
  2022-02-23  2:53   ` Jason Wang
  2022-02-10  8:51 ` [PATCH v1 4/6] virtio: virtqueue_add() support predma Xuan Zhuo
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 36+ messages in thread
From: Xuan Zhuo @ 2022-02-10  8:51 UTC (permalink / raw)
  To: virtualization; +Cc: Michael S. Tsirkin

vring_unmap_desc_packed() is never called on a descriptor whose flags
contain VRING_DESC_F_INDIRECT, so remove the dead branch.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_ring.c | 18 +++++-------------
 1 file changed, 5 insertions(+), 13 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index fadd0a7503e9..cfb028ca238e 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -1009,19 +1009,11 @@ static void vring_unmap_desc_packed(const struct vring_virtqueue *vq,
 
 	flags = le16_to_cpu(desc->flags);
 
-	if (flags & VRING_DESC_F_INDIRECT) {
-		dma_unmap_single(vring_dma_dev(vq),
-				 le64_to_cpu(desc->addr),
-				 le32_to_cpu(desc->len),
-				 (flags & VRING_DESC_F_WRITE) ?
-				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
-	} else {
-		dma_unmap_page(vring_dma_dev(vq),
-			       le64_to_cpu(desc->addr),
-			       le32_to_cpu(desc->len),
-			       (flags & VRING_DESC_F_WRITE) ?
-			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
-	}
+	dma_unmap_page(vring_dma_dev(vq),
+		       le64_to_cpu(desc->addr),
+		       le32_to_cpu(desc->len),
+		       (flags & VRING_DESC_F_WRITE) ?
+		       DMA_FROM_DEVICE : DMA_TO_DEVICE);
 }
 
 static struct vring_packed_desc *alloc_indirect_packed(unsigned int total_sg,
-- 
2.31.0

* [PATCH v1 4/6] virtio: virtqueue_add() support predma
  2022-02-10  8:51 [PATCH v1 0/6] virtio: support advance DMA Xuan Zhuo
                   ` (2 preceding siblings ...)
  2022-02-10  8:51 ` [PATCH v1 3/6] virtio: remove flags check for unmap packed " Xuan Zhuo
@ 2022-02-10  8:51 ` Xuan Zhuo
  2022-02-23  3:00   ` Jason Wang
  2022-02-10  8:51 ` [PATCH v1 5/6] virtio: split: virtqueue_add_split() support dma address Xuan Zhuo
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 36+ messages in thread
From: Xuan Zhuo @ 2022-02-10  8:51 UTC (permalink / raw)
  To: virtualization; +Cc: Michael S. Tsirkin

virtqueue_add() gains a new parameter: predma.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_ring.c | 11 ++++++-----
 1 file changed, 6 insertions(+), 5 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index cfb028ca238e..cf9d118668f1 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -1780,7 +1780,8 @@ static inline int virtqueue_add(struct virtqueue *_vq,
 				unsigned int in_sgs,
 				void *data,
 				void *ctx,
-				gfp_t gfp)
+				gfp_t gfp,
+				bool predma)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
 
@@ -1821,7 +1822,7 @@ int virtqueue_add_sgs(struct virtqueue *_vq,
 			total_sg++;
 	}
 	return virtqueue_add(_vq, sgs, total_sg, out_sgs, in_sgs,
-			     data, NULL, gfp);
+			     data, NULL, gfp, false);
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_sgs);
 
@@ -1843,7 +1844,7 @@ int virtqueue_add_outbuf(struct virtqueue *vq,
 			 void *data,
 			 gfp_t gfp)
 {
-	return virtqueue_add(vq, &sg, num, 1, 0, data, NULL, gfp);
+	return virtqueue_add(vq, &sg, num, 1, 0, data, NULL, gfp, false);
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_outbuf);
 
@@ -1865,7 +1866,7 @@ int virtqueue_add_inbuf(struct virtqueue *vq,
 			void *data,
 			gfp_t gfp)
 {
-	return virtqueue_add(vq, &sg, num, 0, 1, data, NULL, gfp);
+	return virtqueue_add(vq, &sg, num, 0, 1, data, NULL, gfp, false);
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_inbuf);
 
@@ -1889,7 +1890,7 @@ int virtqueue_add_inbuf_ctx(struct virtqueue *vq,
 			void *ctx,
 			gfp_t gfp)
 {
-	return virtqueue_add(vq, &sg, num, 0, 1, data, ctx, gfp);
+	return virtqueue_add(vq, &sg, num, 0, 1, data, ctx, gfp, false);
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_inbuf_ctx);
 
-- 
2.31.0

* [PATCH v1 5/6] virtio: split: virtqueue_add_split() support dma address
  2022-02-10  8:51 [PATCH v1 0/6] virtio: support advance DMA Xuan Zhuo
                   ` (3 preceding siblings ...)
  2022-02-10  8:51 ` [PATCH v1 4/6] virtio: virtqueue_add() support predma Xuan Zhuo
@ 2022-02-10  8:51 ` Xuan Zhuo
  2022-02-23  3:38   ` Jason Wang
  2022-02-10  8:51 ` [PATCH v1 6/6] virtio: packed: virtqueue_add_packed() " Xuan Zhuo
  2022-02-17  7:19 ` [PATCH v1 0/6] virtio: support advance DMA Jason Wang
  6 siblings, 1 reply; 36+ messages in thread
From: Xuan Zhuo @ 2022-02-10  8:51 UTC (permalink / raw)
  To: virtualization; +Cc: Michael S. Tsirkin

virtqueue_add_split() only supports virtual addresses; the DMA mapping is
done inside virtqueue_add_split().

In some scenarios (such as the AF_XDP scenario), the memory is allocated
and mapped for DMA in advance, so we need to support passing an
already-mapped DMA address to virtqueue_add_split().

Record this predma information in extra->flags so that the DMA unmap can
be skipped for such buffers.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_ring.c | 62 ++++++++++++++++++++++++++++--------
 1 file changed, 48 insertions(+), 14 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index cf9d118668f1..d32c0bf6016f 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -66,6 +66,9 @@
 #define LAST_ADD_TIME_INVALID(vq)
 #endif
 
+/* This means the buffer dma is pre-alloc. Just used by vring_desc_extra */
+#define VIRTIO_DESC_F_PREDMA (1 << 15)
+
 struct vring_desc_state_split {
 	void *data;			/* Data for callback. */
 	struct vring_desc *indir_desc;	/* Indirect descriptor, if any. */
@@ -387,7 +390,7 @@ static void vring_unmap_one_split_indirect(const struct vring_virtqueue *vq,
 }
 
 static unsigned int vring_unmap_one_split(const struct vring_virtqueue *vq,
-					  unsigned int i)
+					  unsigned int i, bool predma)
 {
 	struct vring_desc_extra *extra = vq->split.desc_extra;
 	u16 flags;
@@ -404,6 +407,9 @@ static unsigned int vring_unmap_one_split(const struct vring_virtqueue *vq,
 				 (flags & VRING_DESC_F_WRITE) ?
 				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
 	} else {
+		if (predma)
+			goto out;
+
 		dma_unmap_page(vring_dma_dev(vq),
 			       extra[i].addr,
 			       extra[i].len,
@@ -474,7 +480,8 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 				      unsigned int in_sgs,
 				      void *data,
 				      void *ctx,
-				      gfp_t gfp)
+				      gfp_t gfp,
+				      bool predma)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
 	struct scatterlist *sg;
@@ -535,9 +542,16 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 
 	for (n = 0; n < out_sgs; n++) {
 		for (sg = sgs[n]; sg; sg = sg_next(sg)) {
-			dma_addr_t addr = vring_map_one_sg(vq, sg, DMA_TO_DEVICE);
-			if (vring_mapping_error(vq, addr))
-				goto unmap_release;
+			dma_addr_t addr;
+
+			if (predma) {
+				addr = sg_dma_address(sg);
+
+			} else {
+				addr = vring_map_one_sg(vq, sg, DMA_TO_DEVICE);
+				if (vring_mapping_error(vq, addr))
+					goto unmap_release;
+			}
 
 			prev = i;
 			/* Note that we trust indirect descriptor
@@ -550,9 +564,16 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 	}
 	for (; n < (out_sgs + in_sgs); n++) {
 		for (sg = sgs[n]; sg; sg = sg_next(sg)) {
-			dma_addr_t addr = vring_map_one_sg(vq, sg, DMA_FROM_DEVICE);
-			if (vring_mapping_error(vq, addr))
-				goto unmap_release;
+			dma_addr_t addr;
+
+			if (predma) {
+				addr = sg_dma_address(sg);
+
+			} else {
+				addr = vring_map_one_sg(vq, sg, DMA_FROM_DEVICE);
+				if (vring_mapping_error(vq, addr))
+					goto unmap_release;
+			}
 
 			prev = i;
 			/* Note that we trust indirect descriptor
@@ -602,6 +623,9 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 	else
 		vq->split.desc_state[head].indir_desc = ctx;
 
+	if (predma)
+		vq->split.desc_extra[head].flags |= VIRTIO_DESC_F_PREDMA;
+
 	/* Put entry in available array (but don't update avail->idx until they
 	 * do sync). */
 	avail = vq->split.avail_idx_shadow & (vq->split.vring.num - 1);
@@ -626,6 +650,9 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 	return 0;
 
 unmap_release:
+	if (predma)
+		goto skip_unmap;
+
 	err_idx = i;
 
 	if (indirect)
@@ -640,9 +667,10 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 			vring_unmap_one_split_indirect(vq, &desc[i]);
 			i = virtio16_to_cpu(_vq->vdev, desc[i].next);
 		} else
-			i = vring_unmap_one_split(vq, i);
+			i = vring_unmap_one_split(vq, i, false);
 	}
 
+skip_unmap:
 	if (indirect)
 		kfree(desc);
 
@@ -686,20 +714,24 @@ static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head,
 {
 	unsigned int i, j;
 	__virtio16 nextflag = cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_NEXT);
+	bool predma = false;
 
 	/* Clear data ptr. */
 	vq->split.desc_state[head].data = NULL;
 
+	if (vq->split.desc_extra[head].flags & VIRTIO_DESC_F_PREDMA)
+		predma = true;
+
 	/* Put back on free list: unmap first-level descriptors and find end */
 	i = head;
 
 	while (vq->split.vring.desc[i].flags & nextflag) {
-		vring_unmap_one_split(vq, i);
+		vring_unmap_one_split(vq, i, predma);
 		i = vq->split.desc_extra[i].next;
 		vq->vq.num_free++;
 	}
 
-	vring_unmap_one_split(vq, i);
+	vring_unmap_one_split(vq, i, predma);
 	vq->split.desc_extra[i].next = vq->free_head;
 	vq->free_head = head;
 
@@ -721,8 +753,10 @@ static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head,
 				VRING_DESC_F_INDIRECT));
 		BUG_ON(len == 0 || len % sizeof(struct vring_desc));
 
-		for (j = 0; j < len / sizeof(struct vring_desc); j++)
-			vring_unmap_one_split_indirect(vq, &indir_desc[j]);
+		if (!predma) {
+			for (j = 0; j < len / sizeof(struct vring_desc); j++)
+				vring_unmap_one_split_indirect(vq, &indir_desc[j]);
+		}
 
 		kfree(indir_desc);
 		vq->split.desc_state[head].indir_desc = NULL;
@@ -1788,7 +1822,7 @@ static inline int virtqueue_add(struct virtqueue *_vq,
 	return vq->packed_ring ? virtqueue_add_packed(_vq, sgs, total_sg,
 					out_sgs, in_sgs, data, ctx, gfp) :
 				 virtqueue_add_split(_vq, sgs, total_sg,
-					out_sgs, in_sgs, data, ctx, gfp);
+					out_sgs, in_sgs, data, ctx, gfp, predma);
 }
 
 /**
-- 
2.31.0

* [PATCH v1 6/6] virtio: packed: virtqueue_add_packed() support dma address
  2022-02-10  8:51 [PATCH v1 0/6] virtio: support advance DMA Xuan Zhuo
                   ` (4 preceding siblings ...)
  2022-02-10  8:51 ` [PATCH v1 5/6] virtio: split: virtqueue_add_split() support dma address Xuan Zhuo
@ 2022-02-10  8:51 ` Xuan Zhuo
  2022-02-23  3:43   ` Jason Wang
  2022-02-17  7:19 ` [PATCH v1 0/6] virtio: support advance DMA Jason Wang
  6 siblings, 1 reply; 36+ messages in thread
From: Xuan Zhuo @ 2022-02-10  8:51 UTC (permalink / raw)
  To: virtualization; +Cc: Michael S. Tsirkin

virtqueue_add_packed() only supports virtual addresses; the DMA mapping is
done inside virtqueue_add_packed().

In some scenarios (such as the AF_XDP scenario), the memory is allocated
and mapped for DMA in advance, so we need to support passing an
already-mapped DMA address to virtqueue_add_packed().

Record this predma information in extra->flags so that the DMA unmap can
be skipped for such buffers.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_ring.c | 79 ++++++++++++++++++++++++++----------
 1 file changed, 57 insertions(+), 22 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index d32c0bf6016f..b8c7697e925d 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -1011,7 +1011,8 @@ static struct virtqueue *vring_create_virtqueue_split(
  */
 
 static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
-				     struct vring_desc_extra *extra)
+				     struct vring_desc_extra *extra,
+				     bool predma)
 {
 	u16 flags;
 
@@ -1026,6 +1027,9 @@ static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
 				 (flags & VRING_DESC_F_WRITE) ?
 				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
 	} else {
+		if (predma)
+			return;
+
 		dma_unmap_page(vring_dma_dev(vq),
 			       extra->addr, extra->len,
 			       (flags & VRING_DESC_F_WRITE) ?
@@ -1073,7 +1077,8 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 					 unsigned int out_sgs,
 					 unsigned int in_sgs,
 					 void *data,
-					 gfp_t gfp)
+					 gfp_t gfp,
+					 bool predma)
 {
 	struct vring_packed_desc *desc;
 	struct scatterlist *sg;
@@ -1099,10 +1104,15 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 
 	for (n = 0; n < out_sgs + in_sgs; n++) {
 		for (sg = sgs[n]; sg; sg = sg_next(sg)) {
-			addr = vring_map_one_sg(vq, sg, n < out_sgs ?
-					DMA_TO_DEVICE : DMA_FROM_DEVICE);
-			if (vring_mapping_error(vq, addr))
-				goto unmap_release;
+			if (predma) {
+				addr = sg_dma_address(sg);
+
+			} else {
+				addr = vring_map_one_sg(vq, sg, n < out_sgs ?
+							DMA_TO_DEVICE : DMA_FROM_DEVICE);
+				if (vring_mapping_error(vq, addr))
+					goto unmap_release;
+			}
 
 			desc[i].flags = cpu_to_le16(n < out_sgs ?
 						0 : VRING_DESC_F_WRITE);
@@ -1132,6 +1142,9 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 						  vq->packed.avail_used_flags;
 	}
 
+	if (predma)
+		vq->packed.desc_extra[id].flags |= VIRTIO_DESC_F_PREDMA;
+
 	/*
 	 * A driver MUST NOT make the first descriptor in the list
 	 * available before all subsequent descriptors comprising
@@ -1170,10 +1183,11 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 	return 0;
 
 unmap_release:
-	err_idx = i;
-
-	for (i = 0; i < err_idx; i++)
-		vring_unmap_desc_packed(vq, &desc[i]);
+	if (!predma) {
+		err_idx = i;
+		for (i = 0; i < err_idx; i++)
+			vring_unmap_desc_packed(vq, &desc[i]);
+	}
 
 	kfree(desc);
 
@@ -1188,7 +1202,8 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
 				       unsigned int in_sgs,
 				       void *data,
 				       void *ctx,
-				       gfp_t gfp)
+				       gfp_t gfp,
+				       bool predma)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
 	struct vring_packed_desc *desc;
@@ -1214,7 +1229,7 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
 
 	if (virtqueue_use_indirect(_vq, total_sg)) {
 		err = virtqueue_add_indirect_packed(vq, sgs, total_sg, out_sgs,
-						    in_sgs, data, gfp);
+						    in_sgs, data, gfp, predma);
 		if (err != -ENOMEM) {
 			END_USE(vq);
 			return err;
@@ -1246,10 +1261,17 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
 	c = 0;
 	for (n = 0; n < out_sgs + in_sgs; n++) {
 		for (sg = sgs[n]; sg; sg = sg_next(sg)) {
-			dma_addr_t addr = vring_map_one_sg(vq, sg, n < out_sgs ?
-					DMA_TO_DEVICE : DMA_FROM_DEVICE);
-			if (vring_mapping_error(vq, addr))
-				goto unmap_release;
+			dma_addr_t addr;
+
+			if (predma) {
+				addr = sg_dma_address(sg);
+
+			} else {
+				addr = vring_map_one_sg(vq, sg, n < out_sgs ?
+							DMA_TO_DEVICE : DMA_FROM_DEVICE);
+				if (vring_mapping_error(vq, addr))
+					goto unmap_release;
+			}
 
 			flags = cpu_to_le16(vq->packed.avail_used_flags |
 				    (++c == total_sg ? 0 : VRING_DESC_F_NEXT) |
@@ -1297,6 +1319,9 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
 	vq->packed.desc_state[id].indir_desc = ctx;
 	vq->packed.desc_state[id].last = prev;
 
+	if (predma)
+		vq->packed.desc_extra[id].flags |= VIRTIO_DESC_F_PREDMA;
+
 	/*
 	 * A driver MUST NOT make the first descriptor in the list
 	 * available before all subsequent descriptors comprising
@@ -1312,22 +1337,27 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
 	return 0;
 
 unmap_release:
+	vq->packed.avail_used_flags = avail_used_flags;
+
+	if (predma)
+		goto skip_unmap;
+
 	err_idx = i;
 	i = head;
 	curr = vq->free_head;
 
-	vq->packed.avail_used_flags = avail_used_flags;
-
 	for (n = 0; n < total_sg; n++) {
 		if (i == err_idx)
 			break;
-		vring_unmap_extra_packed(vq, &vq->packed.desc_extra[curr]);
+		vring_unmap_extra_packed(vq, &vq->packed.desc_extra[curr], false);
 		curr = vq->packed.desc_extra[curr].next;
 		i++;
 		if (i >= vq->packed.vring.num)
 			i = 0;
 	}
 
+skip_unmap:
+
 	END_USE(vq);
 	return -EIO;
 }
@@ -1387,9 +1417,13 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
 	struct vring_desc_state_packed *state = NULL;
 	struct vring_packed_desc *desc;
 	unsigned int i, curr;
+	bool predma = false;
 
 	state = &vq->packed.desc_state[id];
 
+	if (vq->packed.desc_extra[state->last].flags & VIRTIO_DESC_F_PREDMA)
+		predma = true;
+
 	/* Clear data ptr. */
 	state->data = NULL;
 
@@ -1401,7 +1435,8 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
 		curr = id;
 		for (i = 0; i < state->num; i++) {
 			vring_unmap_extra_packed(vq,
-						 &vq->packed.desc_extra[curr]);
+						 &vq->packed.desc_extra[curr],
+						 predma);
 			curr = vq->packed.desc_extra[curr].next;
 		}
 	}
@@ -1414,7 +1449,7 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
 		if (!desc)
 			return;
 
-		if (vq->use_dma_api) {
+		if (vq->use_dma_api && !predma) {
 			len = vq->packed.desc_extra[id].len;
 			for (i = 0; i < len / sizeof(struct vring_packed_desc);
 					i++)
@@ -1820,7 +1855,7 @@ static inline int virtqueue_add(struct virtqueue *_vq,
 	struct vring_virtqueue *vq = to_vvq(_vq);
 
 	return vq->packed_ring ? virtqueue_add_packed(_vq, sgs, total_sg,
-					out_sgs, in_sgs, data, ctx, gfp) :
+					out_sgs, in_sgs, data, ctx, gfp, predma) :
 				 virtqueue_add_split(_vq, sgs, total_sg,
 					out_sgs, in_sgs, data, ctx, gfp, predma);
 }
-- 
2.31.0

* Re: [PATCH v1 0/6] virtio: support advance DMA
  2022-02-10  8:51 [PATCH v1 0/6] virtio: support advance DMA Xuan Zhuo
                   ` (5 preceding siblings ...)
  2022-02-10  8:51 ` [PATCH v1 6/6] virtio: packed: virtqueue_add_packed() " Xuan Zhuo
@ 2022-02-17  7:19 ` Jason Wang
  2022-02-18  8:55   ` Xuan Zhuo
  6 siblings, 1 reply; 36+ messages in thread
From: Jason Wang @ 2022-02-17  7:19 UTC (permalink / raw)
  To: Xuan Zhuo; +Cc: Michael S. Tsirkin, virtualization

On Thu, Feb 10, 2022 at 4:51 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> virtqueue_add() only supports virtual addresses, dma is completed in
> virtqueue_add().
>
> In some scenarios (such as the AF_XDP scenario), DMA is completed in advance, so
> it is necessary for us to support passing the DMA address to virtqueue_add().

I'd suggest rename this feature as "unmanaged DMA".

>
> Record this predma information in extra->flags, which can be skipped when
> executing dma unmap.

Question still, can we use per-virtqueue flag instead of per
descriptor flag? If my memory is correct, the answer is yes in the
discussion for the previous version.

Thanks

>
> v1:
>    1. All sgs requested at one time are required to be unified PREDMA, and several
>       of them are not supported to be PREDMA
>    2. virtio_dma_map() is removed from this patch set and will be submitted
>       together with the next time AF_XDP supports virtio dma
>    3. Added patch #2 #3 to remove the check for flags when performing unmap
>       indirect desc
>
> Xuan Zhuo (6):
>   virtio: rename vring_unmap_state_packed() to
>     vring_unmap_extra_packed()
>   virtio: remove flags check for unmap split indirect desc
>   virtio: remove flags check for unmap packed indirect desc
>   virtio: virtqueue_add() support predma
>   virtio: split: virtqueue_add_split() support dma address
>   virtio: packed: virtqueue_add_packed() support dma address
>
>  drivers/virtio/virtio_ring.c | 199 ++++++++++++++++++++++-------------
>  1 file changed, 126 insertions(+), 73 deletions(-)
>
> --
> 2.31.0
>

* Re: [PATCH v1 0/6] virtio: support advance DMA
  2022-02-17  7:19 ` [PATCH v1 0/6] virtio: support advance DMA Jason Wang
@ 2022-02-18  8:55   ` Xuan Zhuo
  2022-02-18  9:24     ` Michael S. Tsirkin
  2022-02-21  3:32     ` Jason Wang
  0 siblings, 2 replies; 36+ messages in thread
From: Xuan Zhuo @ 2022-02-18  8:55 UTC (permalink / raw)
  To: Jason Wang; +Cc: Michael S. Tsirkin, virtualization

On Thu, 17 Feb 2022 15:19:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Thu, Feb 10, 2022 at 4:51 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > virtqueue_add() only supports virtual addresses, dma is completed in
> > virtqueue_add().
> >
> > In some scenarios (such as the AF_XDP scenario), DMA is completed in advance, so
> > it is necessary for us to support passing the DMA address to virtqueue_add().
>
> I'd suggest rename this feature as "unmanaged DMA".

OK

>
> >
> > Record this predma information in extra->flags, which can be skipped when
> > executing dma unmap.
>
> Question still, can we use per-virtqueue flag instead of per
> descriptor flag? If my memory is correct, the answer is yes in the
> discussion for the previous version.
>

Yes.

per-virtqueue? I guess it should be per-submit.

This patch set only adds a flag to desc_extra[head].flags, so that we can know
if we need to unmap dma when we detach.

Thanks.

> Thanks
>
> >
> > v1:
> >    1. All sgs requested at one time are required to be unified PREDMA, and several
> >       of them are not supported to be PREDMA
> >    2. virtio_dma_map() is removed from this patch set and will be submitted
> >       together with the next time AF_XDP supports virtio dma
> >    3. Added patch #2 #3 to remove the check for flags when performing unmap
> >       indirect desc
> >
> > Xuan Zhuo (6):
> >   virtio: rename vring_unmap_state_packed() to
> >     vring_unmap_extra_packed()
> >   virtio: remove flags check for unmap split indirect desc
> >   virtio: remove flags check for unmap packed indirect desc
> >   virtio: virtqueue_add() support predma
> >   virtio: split: virtqueue_add_split() support dma address
> >   virtio: packed: virtqueue_add_packed() support dma address
> >
> >  drivers/virtio/virtio_ring.c | 199 ++++++++++++++++++++++-------------
> >  1 file changed, 126 insertions(+), 73 deletions(-)
> >
> > --
> > 2.31.0
> >
>
* Re: [PATCH v1 0/6] virtio: support advance DMA
  2022-02-18  8:55   ` Xuan Zhuo
@ 2022-02-18  9:24     ` Michael S. Tsirkin
  2022-02-18  9:29       ` Xuan Zhuo
  2022-02-21  1:36       ` Jason Wang
  2022-02-21  3:32     ` Jason Wang
  1 sibling, 2 replies; 36+ messages in thread
From: Michael S. Tsirkin @ 2022-02-18  9:24 UTC (permalink / raw)
  To: Xuan Zhuo; +Cc: virtualization

On Fri, Feb 18, 2022 at 04:55:23PM +0800, Xuan Zhuo wrote:
> On Thu, 17 Feb 2022 15:19:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > On Thu, Feb 10, 2022 at 4:51 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > >
> > > virtqueue_add() only supports virtual addresses, dma is completed in
> > > virtqueue_add().
> > >
> > > In some scenarios (such as the AF_XDP scenario), DMA is completed in advance, so
> > > it is necessary for us to support passing the DMA address to virtqueue_add().
> >
> > I'd suggest rename this feature as "unmanaged DMA".
> 
> OK

IIUC when you say "dma is completed" you really mean "memory is mapped
for dma". What is special is that virtio suddenly gets s/g that is
already mapped for DMA, not in the way DMA (direct memory access) itself
happens. right? Pls document as such.
And this is why I'd call this flag "mapped" or "premapped", not
"predma". "unmanaged" is also ok, though it's a bit vague wrt what
exactly is managed.
I'll defer to Jason on this.  Jason what do you prefer?

> >
> > >
> > > Record this predma information in extra->flags, which can be skipped when
> > > executing dma unmap.
> >
> > Question still, can we use per-virtqueue flag instead of per
> > descriptor flag? If my memory is correct, the answer is yes in the
> > discussion for the previous version.
> >
> 
> Yes.
> 
> per-virtqueue? I guess it should be per-submit.
> 
> This patch set only adds a flag to desc_extra[head].flags, so that we can know
> if we need to unmap dma when we detach.
> 
> Thanks.
> 
> > Thanks
> >
> > >
> > > v1:
> > >    1. All sgs requested at one time are required to be unified PREDMA, and several
> > >       of them are not supported to be PREDMA
> > >    2. virtio_dma_map() is removed from this patch set and will be submitted
> > >       together with the next time AF_XDP supports virtio dma
> > >    3. Added patch #2 #3 to remove the check for flags when performing unmap
> > >       indirect desc
> > >
> > > Xuan Zhuo (6):
> > >   virtio: rename vring_unmap_state_packed() to
> > >     vring_unmap_extra_packed()
> > >   virtio: remove flags check for unmap split indirect desc
> > >   virtio: remove flags check for unmap packed indirect desc
> > >   virtio: virtqueue_add() support predma
> > >   virtio: split: virtqueue_add_split() support dma address
> > >   virtio: packed: virtqueue_add_packed() support dma address
> > >
> > >  drivers/virtio/virtio_ring.c | 199 ++++++++++++++++++++++-------------
> > >  1 file changed, 126 insertions(+), 73 deletions(-)
> > >
> > > --
> > > 2.31.0
> > >
> >


* Re: [PATCH v1 0/6] virtio: support advance DMA
  2022-02-18  9:24     ` Michael S. Tsirkin
@ 2022-02-18  9:29       ` Xuan Zhuo
  2022-02-21  1:36       ` Jason Wang
  1 sibling, 0 replies; 36+ messages in thread
From: Xuan Zhuo @ 2022-02-18  9:29 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: virtualization

On Fri, 18 Feb 2022 04:24:58 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Fri, Feb 18, 2022 at 04:55:23PM +0800, Xuan Zhuo wrote:
> > On Thu, 17 Feb 2022 15:19:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > On Thu, Feb 10, 2022 at 4:51 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > >
> > > > virtqueue_add() only supports virtual addresses, dma is completed in
> > > > virtqueue_add().
> > > >
> > > > In some scenarios (such as the AF_XDP scenario), DMA is completed in advance, so
> > > > it is necessary for us to support passing the DMA address to virtqueue_add().
> > >
> > > I'd suggest rename this feature as "unmanaged DMA".
> >
> > OK
>
> IIUC when you say "dma is completed" you really mean "memory is mapped
> for dma". What is special is that virtio suddenly gets s/g that is
> already mapped for DMA, not in the way DMA (direct memory access) itself
> happens. right? Pls document as such.
> And this is why I'd call this flag "mapped" or "premapped", not
> "predma". "unmanaged" is also ok, though it's a bit vague wrt what
> exactly is managed.
> I'll defer to Jason on this.  Jason what do you prefer?

OK. Thanks

>
> > >
> > > >
> > > > Record this predma information in extra->flags, which can be skipped when
> > > > executing dma unmap.
> > >
> > > Question still, can we use per-virtqueue flag instead of per
> > > descriptor flag? If my memory is correct, the answer is yes in the
> > > discussion for the previous version.
> > >
> >
> > Yes.
> >
> > per-virtqueue? I guess it should be per-submit.
> >
> > This patch set only adds a flag to desc_extra[head].flags, so that we can know
> > if we need to unmap dma when we detach.
> >
> > Thanks.
> >
> > > Thanks
> > >
> > > >
> > > > v1:
> > > >    1. All sgs requested at one time are required to be unified PREDMA, and several
> > > >       of them are not supported to be PREDMA
> > > >    2. virtio_dma_map() is removed from this patch set and will be submitted
> > > >       together with the next time AF_XDP supports virtio dma
> > > >    3. Added patch #2 #3 to remove the check for flags when performing unmap
> > > >       indirect desc
> > > >
> > > > Xuan Zhuo (6):
> > > >   virtio: rename vring_unmap_state_packed() to
> > > >     vring_unmap_extra_packed()
> > > >   virtio: remove flags check for unmap split indirect desc
> > > >   virtio: remove flags check for unmap packed indirect desc
> > > >   virtio: virtqueue_add() support predma
> > > >   virtio: split: virtqueue_add_split() support dma address
> > > >   virtio: packed: virtqueue_add_packed() support dma address
> > > >
> > > >  drivers/virtio/virtio_ring.c | 199 ++++++++++++++++++++++-------------
> > > >  1 file changed, 126 insertions(+), 73 deletions(-)
> > > >
> > > > --
> > > > 2.31.0
> > > >
> > >
>

* Re: [PATCH v1 0/6] virtio: support advance DMA
  2022-02-18  9:24     ` Michael S. Tsirkin
  2022-02-18  9:29       ` Xuan Zhuo
@ 2022-02-21  1:36       ` Jason Wang
  1 sibling, 0 replies; 36+ messages in thread
From: Jason Wang @ 2022-02-21  1:36 UTC (permalink / raw)
  To: Michael S. Tsirkin; +Cc: virtualization

On Fri, Feb 18, 2022 at 5:25 PM Michael S. Tsirkin <mst@redhat.com> wrote:
>
> On Fri, Feb 18, 2022 at 04:55:23PM +0800, Xuan Zhuo wrote:
> > On Thu, 17 Feb 2022 15:19:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > On Thu, Feb 10, 2022 at 4:51 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > >
> > > > virtqueue_add() only supports virtual addresses, dma is completed in
> > > > virtqueue_add().
> > > >
> > > > In some scenarios (such as the AF_XDP scenario), DMA is completed in advance, so
> > > > it is necessary for us to support passing the DMA address to virtqueue_add().
> > >
> > > I'd suggest rename this feature as "unmanaged DMA".
> >
> > OK
>
> IIUC when you say "dma is completed" you really mean "memory is mapped
> for dma". What is special is that virtio suddenly gets s/g that is
> already mapped for DMA, not in the way DMA (direct memory access) itself
> happens. right? Pls document as such.
> And this is why I'd call this flag "mapped" or "premapped", not
> "predma". "unmanaged" is also ok, though it's a bit vague wrt what
> exactly is managed.
> I'll defer to Jason on this.  Jason what do you prefer?

I think "premapped" should be fine.

Thanks

>
> > >
> > > >
> > > > Record this predma information in extra->flags, which can be skipped when
> > > > executing dma unmap.
> > >
> > > Question still, can we use per-virtqueue flag instead of per
> > > descriptor flag? If my memory is correct, the answer is yes in the
> > > discussion for the previous version.
> > >
> >
> > Yes.
> >
> > per-virtqueue? I guess it should be per-submit.
> >
> > This patch set only adds a flag to desc_extra[head].flags, so that we can know
> > if we need to unmap dma when we detach.
> >
> > Thanks.
> >
> > > Thanks
> > >
> > > >
> > > > v1:
> > > >    1. All sgs requested at one time are required to be unified PREDMA, and several
> > > >       of them are not supported to be PREDMA
> > > >    2. virtio_dma_map() is removed from this patch set and will be submitted
> > > >       together with the next time AF_XDP supports virtio dma
> > > >    3. Added patch #2 #3 to remove the check for flags when performing unmap
> > > >       indirect desc
> > > >
> > > > Xuan Zhuo (6):
> > > >   virtio: rename vring_unmap_state_packed() to
> > > >     vring_unmap_extra_packed()
> > > >   virtio: remove flags check for unmap split indirect desc
> > > >   virtio: remove flags check for unmap packed indirect desc
> > > >   virtio: virtqueue_add() support predma
> > > >   virtio: split: virtqueue_add_split() support dma address
> > > >   virtio: packed: virtqueue_add_packed() support dma address
> > > >
> > > >  drivers/virtio/virtio_ring.c | 199 ++++++++++++++++++++++-------------
> > > >  1 file changed, 126 insertions(+), 73 deletions(-)
> > > >
> > > > --
> > > > 2.31.0
> > > >
> > >
>


* Re: [PATCH v1 0/6] virtio: support advance DMA
  2022-02-18  8:55   ` Xuan Zhuo
  2022-02-18  9:24     ` Michael S. Tsirkin
@ 2022-02-21  3:32     ` Jason Wang
  2022-02-21  3:39       ` Xuan Zhuo
  2022-02-21 11:23       ` Xuan Zhuo
  1 sibling, 2 replies; 36+ messages in thread
From: Jason Wang @ 2022-02-21  3:32 UTC (permalink / raw)
  To: Xuan Zhuo; +Cc: Michael S. Tsirkin, virtualization

On Fri, Feb 18, 2022 at 5:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> On Thu, 17 Feb 2022 15:19:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > On Thu, Feb 10, 2022 at 4:51 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > >
> > > virtqueue_add() only supports virtual addresses, dma is completed in
> > > virtqueue_add().
> > >
> > > In some scenarios (such as the AF_XDP scenario), DMA is completed in advance, so
> > > it is necessary for us to support passing the DMA address to virtqueue_add().
> >
> > I'd suggest rename this feature as "unmanaged DMA".
>
> OK
>
> >
> > >
> > > Record this predma information in extra->flags, which can be skipped when
> > > executing dma unmap.
> >
> > Question still, can we use per-virtqueue flag instead of per
> > descriptor flag? If my memory is correct, the answer is yes in the
> > discussion for the previous version.
> >
>
> Yes.
>
> per-virtqueue? I guess it should be per-submit.
>
> This patch set only adds a flag to desc_extra[head].flags, so that we can know
> if we need to unmap dma when we detach.

I meant if we can manage to make it per virtqueue, there's no need to
maintain per buffer flag.

So we know something that needs to be mapped by virtio core itself,
e.g the indirect page. Other than this, all the rest could be
pre-mapped.

For vnet header, it could be mapped by virtio-net which could be still
treated as pre mapped DMA since it's not the virtio ring code.

Anything I miss here?

Thanks


>
> Thanks.
>
> > Thanks
> >
> > >
> > > v1:
> > >    1. All sgs requested at one time are required to be unified PREDMA, and several
> > >       of them are not supported to be PREDMA
> > >    2. virtio_dma_map() is removed from this patch set and will be submitted
> > >       together with the next time AF_XDP supports virtio dma
> > >    3. Added patch #2 #3 to remove the check for flags when performing unmap
> > >       indirect desc
> > >
> > > Xuan Zhuo (6):
> > >   virtio: rename vring_unmap_state_packed() to
> > >     vring_unmap_extra_packed()
> > >   virtio: remove flags check for unmap split indirect desc
> > >   virtio: remove flags check for unmap packed indirect desc
> > >   virtio: virtqueue_add() support predma
> > >   virtio: split: virtqueue_add_split() support dma address
> > >   virtio: packed: virtqueue_add_packed() support dma address
> > >
> > >  drivers/virtio/virtio_ring.c | 199 ++++++++++++++++++++++-------------
> > >  1 file changed, 126 insertions(+), 73 deletions(-)
> > >
> > > --
> > > 2.31.0
> > >
> >
>


* Re: [PATCH v1 0/6] virtio: support advance DMA
  2022-02-21  3:32     ` Jason Wang
@ 2022-02-21  3:39       ` Xuan Zhuo
  2022-02-21  3:53         ` Jason Wang
  2022-02-21 11:23       ` Xuan Zhuo
  1 sibling, 1 reply; 36+ messages in thread
From: Xuan Zhuo @ 2022-02-21  3:39 UTC (permalink / raw)
  To: Jason Wang; +Cc: Michael S. Tsirkin, virtualization

On Mon, 21 Feb 2022 11:32:52 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Fri, Feb 18, 2022 at 5:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > On Thu, 17 Feb 2022 15:19:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > On Thu, Feb 10, 2022 at 4:51 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > >
> > > > virtqueue_add() only supports virtual addresses, dma is completed in
> > > > virtqueue_add().
> > > >
> > > > In some scenarios (such as the AF_XDP scenario), DMA is completed in advance, so
> > > > it is necessary for us to support passing the DMA address to virtqueue_add().
> > >
> > > I'd suggest rename this feature as "unmanaged DMA".
> >
> > OK
> >
> > >
> > > >
> > > > Record this predma information in extra->flags, which can be skipped when
> > > > executing dma unmap.
> > >
> > > Question still, can we use per-virtqueue flag instead of per
> > > descriptor flag? If my memory is correct, the answer is yes in the
> > > discussion for the previous version.
> > >
> >
> > Yes.
> >
> > per-virtqueue? I guess it should be per-submit.
> >
> > This patch set only adds a flag to desc_extra[head].flags, so that we can know
> > if we need to unmap dma when we detach.
>
> I meant if we can manage to make it per virtqueue, there's no need to
> maintain per buffer flag.
>
> So we know something that needs to be mapped by virtio core itself,
> e.g the indirect page. Other than this, all the rest could be
> pre-mapped.
>
> For vnet header, it could be mapped by virtio-net which could be still
> treated as pre mapped DMA since it's not the virtio ring code.
>
> Anything I miss here?

I guess your understanding is that after the queue is reset, the queue is used
by xsk (AF_XDP), so everything then submitted to this vq uses premapped dma addresses.

This is ok for rx.

But for tx, just like XDP_TX, although the vq is used by xsk, the kernel also
passes skbs to it at the same time. The queue is shared.

We can guarantee that all sgs within a single submission uniformly use either
the premapped dma address or the virtual address. It is not guaranteed that all
the sgs submitted to the vq are uniform.

Thanks.

>
> Thanks
>
>
> >
> > Thanks.
> >
> > > Thanks
> > >
> > > >
> > > > v1:
> > > >    1. All sgs requested at one time are required to be unified PREDMA, and several
> > > >       of them are not supported to be PREDMA
> > > >    2. virtio_dma_map() is removed from this patch set and will be submitted
> > > >       together with the next time AF_XDP supports virtio dma
> > > >    3. Added patch #2 #3 to remove the check for flags when performing unmap
> > > >       indirect desc
> > > >
> > > > Xuan Zhuo (6):
> > > >   virtio: rename vring_unmap_state_packed() to
> > > >     vring_unmap_extra_packed()
> > > >   virtio: remove flags check for unmap split indirect desc
> > > >   virtio: remove flags check for unmap packed indirect desc
> > > >   virtio: virtqueue_add() support predma
> > > >   virtio: split: virtqueue_add_split() support dma address
> > > >   virtio: packed: virtqueue_add_packed() support dma address
> > > >
> > > >  drivers/virtio/virtio_ring.c | 199 ++++++++++++++++++++++-------------
> > > >  1 file changed, 126 insertions(+), 73 deletions(-)
> > > >
> > > > --
> > > > 2.31.0
> > > >
> > >
> >
>

* Re: [PATCH v1 0/6] virtio: support advance DMA
  2022-02-21  3:39       ` Xuan Zhuo
@ 2022-02-21  3:53         ` Jason Wang
  2022-02-21  5:59           ` Xuan Zhuo
  0 siblings, 1 reply; 36+ messages in thread
From: Jason Wang @ 2022-02-21  3:53 UTC (permalink / raw)
  To: Xuan Zhuo; +Cc: Michael S. Tsirkin, virtualization

On Mon, Feb 21, 2022 at 11:46 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> On Mon, 21 Feb 2022 11:32:52 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > On Fri, Feb 18, 2022 at 5:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > >
> > > On Thu, 17 Feb 2022 15:19:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > On Thu, Feb 10, 2022 at 4:51 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > >
> > > > > virtqueue_add() only supports virtual addresses, dma is completed in
> > > > > virtqueue_add().
> > > > >
> > > > > In some scenarios (such as the AF_XDP scenario), DMA is completed in advance, so
> > > > > it is necessary for us to support passing the DMA address to virtqueue_add().
> > > >
> > > > I'd suggest rename this feature as "unmanaged DMA".
> > >
> > > OK
> > >
> > > >
> > > > >
> > > > > Record this predma information in extra->flags, which can be skipped when
> > > > > executing dma unmap.
> > > >
> > > > Question still, can we use per-virtqueue flag instead of per
> > > > descriptor flag? If my memory is correct, the answer is yes in the
> > > > discussion for the previous version.
> > > >
> > >
> > > Yes.
> > >
> > > per-virtqueue? I guess it should be per-submit.
> > >
> > > This patch set only adds a flag to desc_extra[head].flags, so that we can know
> > > if we need to unmap dma when we detach.
> >
> > I meant if we can manage to make it per virtqueue, there's no need to
> > maintain per buffer flag.
> >
> > So we know something that needs to be mapped by virtio core itself,
> > e.g the indirect page. Other than this, all the rest could be
> > pre-mapped.
> >
> > For vnet header, it could be mapped by virtio-net which could be still
> > treated as pre mapped DMA since it's not the virtio ring code.
> >
> > Anything I miss here?
>
> I guess, your understanding is that after the queue is reset, the queue is used
> by xsk(AF_XDP), then all commits to this vq are premapped dma addresses.
>
> This is ok for rx.
>
> But for tx, just like XDP TX, although vq is used by xsk, the kernel also passes
> skb to it at the same time. It is shared.

Right.

>
> We can guarantee that the sg of the sgs submitted at one time uses the premapped
> dma address or virtual address uniformly. It is not guaranteed that all the sgs
> to the vq are uniform

Sorry, I don't understand here. We can let virtio-net do the mapping
even for TX, then from the virtio_ring point of view, it's still
pre-mapped?

Thanks

>
> Thanks.
>
> >
> > Thanks
> >
> >
> > >
> > > Thanks.
> > >
> > > > Thanks
> > > >
> > > > >
> > > > > v1:
> > > > >    1. All sgs requested at one time are required to be unified PREDMA, and several
> > > > >       of them are not supported to be PREDMA
> > > > >    2. virtio_dma_map() is removed from this patch set and will be submitted
> > > > >       together with the next time AF_XDP supports virtio dma
> > > > >    3. Added patch #2 #3 to remove the check for flags when performing unmap
> > > > >       indirect desc
> > > > >
> > > > > Xuan Zhuo (6):
> > > > >   virtio: rename vring_unmap_state_packed() to
> > > > >     vring_unmap_extra_packed()
> > > > >   virtio: remove flags check for unmap split indirect desc
> > > > >   virtio: remove flags check for unmap packed indirect desc
> > > > >   virtio: virtqueue_add() support predma
> > > > >   virtio: split: virtqueue_add_split() support dma address
> > > > >   virtio: packed: virtqueue_add_packed() support dma address
> > > > >
> > > > >  drivers/virtio/virtio_ring.c | 199 ++++++++++++++++++++++-------------
> > > > >  1 file changed, 126 insertions(+), 73 deletions(-)
> > > > >
> > > > > --
> > > > > 2.31.0
> > > > >
> > > >
> > >
> >
>


* Re: [PATCH v1 0/6] virtio: support advance DMA
  2022-02-21  3:53         ` Jason Wang
@ 2022-02-21  5:59           ` Xuan Zhuo
  2022-02-21  6:18             ` Xuan Zhuo
  0 siblings, 1 reply; 36+ messages in thread
From: Xuan Zhuo @ 2022-02-21  5:59 UTC (permalink / raw)
  To: Jason Wang; +Cc: Michael S. Tsirkin, virtualization

On Mon, 21 Feb 2022 11:53:33 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Mon, Feb 21, 2022 at 11:46 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > On Mon, 21 Feb 2022 11:32:52 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > On Fri, Feb 18, 2022 at 5:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > >
> > > > On Thu, 17 Feb 2022 15:19:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > On Thu, Feb 10, 2022 at 4:51 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > >
> > > > > > virtqueue_add() only supports virtual addresses, dma is completed in
> > > > > > virtqueue_add().
> > > > > >
> > > > > > In some scenarios (such as the AF_XDP scenario), DMA is completed in advance, so
> > > > > > it is necessary for us to support passing the DMA address to virtqueue_add().
> > > > >
> > > > > I'd suggest rename this feature as "unmanaged DMA".
> > > >
> > > > OK
> > > >
> > > > >
> > > > > >
> > > > > > Record this predma information in extra->flags, which can be skipped when
> > > > > > executing dma unmap.
> > > > >
> > > > > Question still, can we use per-virtqueue flag instead of per
> > > > > descriptor flag? If my memory is correct, the answer is yes in the
> > > > > discussion for the previous version.
> > > > >
> > > >
> > > > Yes.
> > > >
> > > > per-virtqueue? I guess it should be per-submit.
> > > >
> > > > This patch set only adds a flag to desc_extra[head].flags, so that we can know
> > > > if we need to unmap dma when we detach.
> > >
> > > I meant if we can manage to make it per virtqueue, there's no need to
> > > maintain per buffer flag.
> > >
> > > So we know something that needs to be mapped by virtio core itself,
> > > e.g the indirect page. Other than this, all the rest could be
> > > pre-mapped.
> > >
> > > For vnet header, it could be mapped by virtio-net which could be still
> > > treated as pre mapped DMA since it's not the virtio ring code.
> > >
> > > Anything I miss here?
> >
> > I guess, your understanding is that after the queue is reset, the queue is used
> > by xsk(AF_XDP), then all commits to this vq are premapped dma addresses.
> >
> > This is ok for rx.
> >
> > But for tx, just like XDP TX, although vq is used by xsk, the kernel also passes
> > skb to it at the same time. It is shared.
>
> Right.
>
> >
> > We can guarantee that the sg of the sgs submitted at one time uses the premapped
> > dma address or virtual address uniformly. It is not guaranteed that all the sgs
> > to the vq are uniform
>
> Sorry, I don't understand here. We can let virtio-net do the mapping
> even for TX, then from the virtio_ring point of view, it's still
> pre-mapped?
>

Yes, we can do this. My previous thought was to keep the skb path unchanged.

Then we can make it clear that in the xsk case, after the queue reset
completes, all addresses submitted to virtio are already dma-mapped; even in
the skb case, the dma map operation must be completed first.

In this case, I feel like we can do without this patch set.

Thanks.

> Thanks
>
> >
> > Thanks.
> >
> > >
> > > Thanks
> > >
> > >
> > > >
> > > > Thanks.
> > > >
> > > > > Thanks
> > > > >
> > > > > >
> > > > > > v1:
> > > > > >    1. All sgs requested at one time are required to be unified PREDMA, and several
> > > > > >       of them are not supported to be PREDMA
> > > > > >    2. virtio_dma_map() is removed from this patch set and will be submitted
> > > > > >       together with the next time AF_XDP supports virtio dma
> > > > > >    3. Added patch #2 #3 to remove the check for flags when performing unmap
> > > > > >       indirect desc
> > > > > >
> > > > > > Xuan Zhuo (6):
> > > > > >   virtio: rename vring_unmap_state_packed() to
> > > > > >     vring_unmap_extra_packed()
> > > > > >   virtio: remove flags check for unmap split indirect desc
> > > > > >   virtio: remove flags check for unmap packed indirect desc
> > > > > >   virtio: virtqueue_add() support predma
> > > > > >   virtio: split: virtqueue_add_split() support dma address
> > > > > >   virtio: packed: virtqueue_add_packed() support dma address
> > > > > >
> > > > > >  drivers/virtio/virtio_ring.c | 199 ++++++++++++++++++++++-------------
> > > > > >  1 file changed, 126 insertions(+), 73 deletions(-)
> > > > > >
> > > > > > --
> > > > > > 2.31.0
> > > > > >
> > > > >
> > > >
> > >
> >
>

* Re: [PATCH v1 0/6] virtio: support advance DMA
  2022-02-21  5:59           ` Xuan Zhuo
@ 2022-02-21  6:18             ` Xuan Zhuo
  2022-02-21  6:37               ` Jason Wang
  0 siblings, 1 reply; 36+ messages in thread
From: Xuan Zhuo @ 2022-02-21  6:18 UTC (permalink / raw)
  To: Xuan Zhuo; +Cc: virtualization, Michael S. Tsirkin

On Mon, 21 Feb 2022 13:59:06 +0800, Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> On Mon, 21 Feb 2022 11:53:33 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > On Mon, Feb 21, 2022 at 11:46 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > >
> > > On Mon, 21 Feb 2022 11:32:52 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > On Fri, Feb 18, 2022 at 5:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > >
> > > > > On Thu, 17 Feb 2022 15:19:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > On Thu, Feb 10, 2022 at 4:51 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > >
> > > > > > > virtqueue_add() only supports virtual addresses, dma is completed in
> > > > > > > virtqueue_add().
> > > > > > >
> > > > > > > In some scenarios (such as the AF_XDP scenario), DMA is completed in advance, so
> > > > > > > it is necessary for us to support passing the DMA address to virtqueue_add().
> > > > > >
> > > > > > I'd suggest rename this feature as "unmanaged DMA".
> > > > >
> > > > > OK
> > > > >
> > > > > >
> > > > > > >
> > > > > > > Record this predma information in extra->flags, which can be skipped when
> > > > > > > executing dma unmap.
> > > > > >
> > > > > > Question still, can we use per-virtqueue flag instead of per
> > > > > > descriptor flag? If my memory is correct, the answer is yes in the
> > > > > > discussion for the previous version.
> > > > > >
> > > > >
> > > > > Yes.
> > > > >
> > > > > per-virtqueue? I guess it should be per-submit.
> > > > >
> > > > > This patch set only adds a flag to desc_extra[head].flags, so that we can know
> > > > > if we need to unmap dma when we detach.
> > > >
> > > > I meant if we can manage to make it per virtqueue, there's no need to
> > > > maintain per buffer flag.
> > > >
> > > > So we know something that needs to be mapped by virtio core itself,
> > > > e.g the indirect page. Other than this, all the rest could be
> > > > pre-mapped.
> > > >
> > > > For vnet header, it could be mapped by virtio-net which could be still
> > > > treated as pre mapped DMA since it's not the virtio ring code.
> > > >
> > > > Anything I miss here?
> > >
> > > I guess, your understanding is that after the queue is reset, the queue is used
> > > by xsk(AF_XDP), then all commits to this vq are premapped dma addresses.
> > >
> > > This is ok for rx.
> > >
> > > But for tx, just like XDP TX, although vq is used by xsk, the kernel also passes
> > > skb to it at the same time. It is shared.
> >
> > Right.
> >
> > >
> > > We can guarantee that the sg of the sgs submitted at one time uses the premapped
> > > dma address or virtual address uniformly. It is not guaranteed that all the sgs
> > > to the vq are uniform
> >
> > Sorry, I don't understand here. We can let virtio-net do the mapping
> > even for TX, then from the virtio_ring point of view, it's still
> > pre-mapped?
> >
>
> Yes, we can do this. My previous thought was to keep the skb path unchanged.
>
> Then we can make it clear that in the case of xsk, after completing the queue
> reset, all the addresses submitted to virtio are the addresses of the completed
> dma, including the skb case, the dma map operation must be completed first.
>
> In this case, I feel like we can do without this patch set.

I originally thought that use_dma_api could be reused, but I found that this is
not the case: the sg_phys() logic does not fit this design. We still need a
separate flag.

Thanks.

>
> Thanks.
>
> > Thanks
> >
> > >
> > > Thanks.
> > >
> > > >
> > > > Thanks
> > > >
> > > >
> > > > >
> > > > > Thanks.
> > > > >
> > > > > > Thanks
> > > > > >
> > > > > > >
> > > > > > > v1:
> > > > > > >    1. All sgs requested at one time are required to be unified PREDMA, and several
> > > > > > >       of them are not supported to be PREDMA
> > > > > > >    2. virtio_dma_map() is removed from this patch set and will be submitted
> > > > > > >       together with the next time AF_XDP supports virtio dma
> > > > > > >    3. Added patch #2 #3 to remove the check for flags when performing unmap
> > > > > > >       indirect desc
> > > > > > >
> > > > > > > Xuan Zhuo (6):
> > > > > > >   virtio: rename vring_unmap_state_packed() to
> > > > > > >     vring_unmap_extra_packed()
> > > > > > >   virtio: remove flags check for unmap split indirect desc
> > > > > > >   virtio: remove flags check for unmap packed indirect desc
> > > > > > >   virtio: virtqueue_add() support predma
> > > > > > >   virtio: split: virtqueue_add_split() support dma address
> > > > > > >   virtio: packed: virtqueue_add_packed() support dma address
> > > > > > >
> > > > > > >  drivers/virtio/virtio_ring.c | 199 ++++++++++++++++++++++-------------
> > > > > > >  1 file changed, 126 insertions(+), 73 deletions(-)
> > > > > > >
> > > > > > > --
> > > > > > > 2.31.0
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >

* Re: [PATCH v1 0/6] virtio: support advance DMA
  2022-02-21  6:18             ` Xuan Zhuo
@ 2022-02-21  6:37               ` Jason Wang
  2022-02-21  6:41                 ` Xuan Zhuo
  2022-02-21  6:46                 ` Xuan Zhuo
  0 siblings, 2 replies; 36+ messages in thread
From: Jason Wang @ 2022-02-21  6:37 UTC (permalink / raw)
  To: Xuan Zhuo; +Cc: virtualization, Michael S. Tsirkin

On Mon, Feb 21, 2022 at 2:20 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> On Mon, 21 Feb 2022 13:59:06 +0800, Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > On Mon, 21 Feb 2022 11:53:33 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > On Mon, Feb 21, 2022 at 11:46 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > >
> > > > On Mon, 21 Feb 2022 11:32:52 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > On Fri, Feb 18, 2022 at 5:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > >
> > > > > > On Thu, 17 Feb 2022 15:19:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > On Thu, Feb 10, 2022 at 4:51 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > >
> > > > > > > > virtqueue_add() only supports virtual addresses, dma is completed in
> > > > > > > > virtqueue_add().
> > > > > > > >
> > > > > > > > In some scenarios (such as the AF_XDP scenario), DMA is completed in advance, so
> > > > > > > > it is necessary for us to support passing the DMA address to virtqueue_add().
> > > > > > >
> > > > > > > I'd suggest rename this feature as "unmanaged DMA".
> > > > > >
> > > > > > OK
> > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > > Record this predma information in extra->flags, which can be skipped when
> > > > > > > > executing dma unmap.
> > > > > > >
> > > > > > > Question still, can we use per-virtqueue flag instead of per
> > > > > > > descriptor flag? If my memory is correct, the answer is yes in the
> > > > > > > discussion for the previous version.
> > > > > > >
> > > > > >
> > > > > > Yes.
> > > > > >
> > > > > > per-virtqueue? I guess it should be per-submit.
> > > > > >
> > > > > > This patch set only adds a flag to desc_extra[head].flags, so that we can know
> > > > > > if we need to unmap dma when we detach.
> > > > >
> > > > > I meant if we can manage to make it per virtqueue, there's no need to
> > > > > maintain per buffer flag.
> > > > >
> > > > > So we know something that needs to be mapped by virtio core itself,
> > > > > e.g the indirect page. Other than this, all the rest could be
> > > > > pre-mapped.
> > > > >
> > > > > For vnet header, it could be mapped by virtio-net which could be still
> > > > > treated as pre mapped DMA since it's not the virtio ring code.
> > > > >
> > > > > Anything I miss here?
> > > >
> > > > I guess, your understanding is that after the queue is reset, the queue is used
> > > > by xsk(AF_XDP), then all commits to this vq are premapped dma addresses.
> > > >
> > > > This is ok for rx.
> > > >
> > > > But for tx, just like XDP TX, although vq is used by xsk, the kernel also passes
> > > > skb to it at the same time. It is shared.
> > >
> > > Right.
> > >
> > > >
> > > > We can guarantee that the sg of the sgs submitted at one time uses the premapped
> > > > dma address or virtual address uniformly. It is not guaranteed that all the sgs
> > > > to the vq are uniform
> > >
> > > Sorry, I don't understand here. We can let virtio-net do the mapping
> > > even for TX, then from the virtio_ring point of view, it's still
> > > pre-mapped?
> > >
> >
> > Yes, we can do this. My previous thought was to keep the skb path unchanged.

We can hear from Michael and others, but I think it would be simpler.

And we can even make the pre-mapping per driver. E.g. for virtio-net, we
just let the virtio-net driver do the DMA mapping. This allows us to
do a lot of optimizations (e.g. page flipping), as other networking
drivers do.
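A per-virtqueue flag of the kind discussed here could look roughly like this. This is a standalone toy model, not the actual drivers/virtio/virtio_ring.c code: the struct name, the `premapped` field, and the stubbed unmap counter are all illustrative assumptions.

```c
#include <assert.h>
#include <stdbool.h>

/* Stub standing in for dma_unmap_single(); counts how often unmap runs. */
static int unmap_calls;

static void dma_unmap_stub(unsigned long addr)
{
	(void)addr;
	unmap_calls++;
}

/* Toy model of a vring with a hypothetical per-virtqueue flag: when
 * premapped is set, every buffer on this vq was mapped by the driver,
 * so the core must skip the unmap on detach. */
struct vring_virtqueue_sketch {
	bool use_dma_api;
	bool premapped;	/* the driver did the DMA mapping for this vq */
};

static void detach_buf(struct vring_virtqueue_sketch *vq, unsigned long addr)
{
	/* One per-vq check replaces a per-descriptor PREDMA flag. */
	if (vq->use_dma_api && !vq->premapped)
		dma_unmap_stub(addr);
}
```

Once a queue is dedicated to AF_XDP and flipped to premapped mode, the core never touches the mapping again, which is why no per-buffer state would be needed in this scheme.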

> >
> > Then we can make it clear that in the case of xsk, after completing the queue
> > reset, all the addresses submitted to virtio are the addresses of the completed
> > dma, including the skb case, the dma map operation must be completed first.
> >
> > In this case, I feel like we can do without this patch set.
>
> I originally thought that use_dma_api could be reused, but I found that this is
> not the case. The logic of sg_phys() does not fit our needs. We still need a
> separate flag.

Just to make sure I understand here: for this flag, do you mean

1) per buffer
2) per virtqueue
or
3) per device?

Thanks

>
> Thanks.
>
> >
> > Thanks.
> >
> > > Thanks
> > >
> > > >
> > > > Thanks.
> > > >
> > > > >
> > > > > Thanks
> > > > >
> > > > >
> > > > > >
> > > > > > Thanks.
> > > > > >
> > > > > > > Thanks
> > > > > > >
> > > > > > > >
> > > > > > > > v1:
> > > > > > > >    1. All sgs requested at one time are required to be unified PREDMA, and several
> > > > > > > >       of them are not supported to be PREDMA
> > > > > > > >    2. virtio_dma_map() is removed from this patch set and will be submitted
> > > > > > > >       together with the next time AF_XDP supports virtio dma
> > > > > > > >    3. Added patch #2 #3 to remove the check for flags when performing unmap
> > > > > > > >       indirect desc
> > > > > > > >
> > > > > > > > Xuan Zhuo (6):
> > > > > > > >   virtio: rename vring_unmap_state_packed() to
> > > > > > > >     vring_unmap_extra_packed()
> > > > > > > >   virtio: remove flags check for unmap split indirect desc
> > > > > > > >   virtio: remove flags check for unmap packed indirect desc
> > > > > > > >   virtio: virtqueue_add() support predma
> > > > > > > >   virtio: split: virtqueue_add_split() support dma address
> > > > > > > >   virtio: packed: virtqueue_add_packed() support dma address
> > > > > > > >
> > > > > > > >  drivers/virtio/virtio_ring.c | 199 ++++++++++++++++++++++-------------
> > > > > > > >  1 file changed, 126 insertions(+), 73 deletions(-)
> > > > > > > >
> > > > > > > > --
> > > > > > > > 2.31.0
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
>



* Re: [PATCH v1 0/6] virtio: support advance DMA
  2022-02-21  6:37               ` Jason Wang
@ 2022-02-21  6:41                 ` Xuan Zhuo
  2022-02-21  6:55                   ` Jason Wang
  2022-02-21  6:46                 ` Xuan Zhuo
  1 sibling, 1 reply; 36+ messages in thread
From: Xuan Zhuo @ 2022-02-21  6:41 UTC (permalink / raw)
  To: Jason Wang; +Cc: virtualization, Michael S. Tsirkin

On Mon, 21 Feb 2022 14:37:49 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Mon, Feb 21, 2022 at 2:20 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > On Mon, 21 Feb 2022 13:59:06 +0800, Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > On Mon, 21 Feb 2022 11:53:33 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > On Mon, Feb 21, 2022 at 11:46 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > >
> > > > > On Mon, 21 Feb 2022 11:32:52 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > On Fri, Feb 18, 2022 at 5:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > >
> > > > > > > On Thu, 17 Feb 2022 15:19:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > > On Thu, Feb 10, 2022 at 4:51 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > > >
> > > > > > > > > virtqueue_add() only supports virtual addresses, dma is completed in
> > > > > > > > > virtqueue_add().
> > > > > > > > >
> > > > > > > > > In some scenarios (such as the AF_XDP scenario), DMA is completed in advance, so
> > > > > > > > > it is necessary for us to support passing the DMA address to virtqueue_add().
> > > > > > > >
> > > > > > > > I'd suggest rename this feature as "unmanaged DMA".
> > > > > > >
> > > > > > > OK
> > > > > > >
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Record this predma information in extra->flags, which can be skipped when
> > > > > > > > > executing dma unmap.
> > > > > > > >
> > > > > > > > Question still, can we use per-virtqueue flag instead of per
> > > > > > > > descriptor flag? If my memory is correct, the answer is yes in the
> > > > > > > > discussion for the previous version.
> > > > > > > >
> > > > > > >
> > > > > > > Yes.
> > > > > > >
> > > > > > > per-virtqueue? I guess it should be per-submit.
> > > > > > >
> > > > > > > This patch set only adds a flag to desc_extra[head].flags, so that we can know
> > > > > > > if we need to unmap dma when we detach.
> > > > > >
> > > > > > I meant if we can manage to make it per virtqueue, there's no need to
> > > > > > maintain per buffer flag.
> > > > > >
> > > > > > So we know something that needs to be mapped by virtio core itself,
> > > > > > e.g the indirect page. Other than this, all the rest could be
> > > > > > pre-mapped.
> > > > > >
> > > > > > For vnet header, it could be mapped by virtio-net which could be still
> > > > > > treated as pre mapped DMA since it's not the virtio ring code.
> > > > > >
> > > > > > Anything I miss here?
> > > > >
> > > > > I guess, your understanding is that after the queue is reset, the queue is used
> > > > > by xsk(AF_XDP), then all commits to this vq are premapped dma addresses.
> > > > >
> > > > > This is ok for rx.
> > > > >
> > > > > But for tx, just like XDP TX, although vq is used by xsk, the kernel also passes
> > > > > skb to it at the same time. It is shared.
> > > >
> > > > Right.
> > > >
> > > > >
> > > > > We can guarantee that the sg of the sgs submitted at one time uses the premapped
> > > > > dma address or virtual address uniformly. It is not guaranteed that all the sgs
> > > > > to the vq are uniform
> > > >
> > > > Sorry, I don't understand here. We can let virtio-net do the mapping
> > > > even for TX, then from the virtio_ring point of view, it's still
> > > > pre-mapped?
> > > >
> > >
> > > Yes, we can do this. My previous thought was to keep the skb path unchanged.
>
> We can listen from Michael and others but I think it would be simpler.
>
> And we can even make the pre-mapped per driver. E.g for virtio-net we
> just let the virtio-net driver do the DMA mapping. This allows us to
> do a lot of optimizations (e.g page flip) as what other networking
> drivers did.
>
> > >
> > > Then we can make it clear that in the case of xsk, after completing the queue
> > > reset, all the addresses submitted to virtio are the addresses of the completed
> > > dma, including the skb case, the dma map operation must be completed first.
> > >
> > > In this case, I feel like we can do without this patch set.
> >
> > I originally thought that use_dma_api could be reused, but I found that this is
> > not the case. The logic of sg_phys() does not fit our needs. We still need a
> > separate flag.
>
> Just to make sure I understand here, for this flag you mean
>
> 1) per buffer
> 2) per virtqueue
> or
> 3) per device?

To be honest, I haven't figured out what the advantage of the driver managing
DMA by itself is. If the goal is just to implement AF_XDP, then per virtqueue
should be fine.

And if it's per device, that feels like a separate, interesting job. Would
premapped dma addresses be used by default?

Should I submit a patch set to make virtio-net use premapped dma addresses
directly by default?

Thanks.


>
> Thanks
>
> >
> > Thanks.
> >
> > >
> > > Thanks.
> > >
> > > > Thanks
> > > >
> > > > >
> > > > > Thanks.
> > > > >
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > >
> > > > > > >
> > > > > > > Thanks.
> > > > > > >
> > > > > > > > Thanks
> > > > > > > >
> > > > > > > > >
> > > > > > > > > v1:
> > > > > > > > >    1. All sgs requested at one time are required to be unified PREDMA, and several
> > > > > > > > >       of them are not supported to be PREDMA
> > > > > > > > >    2. virtio_dma_map() is removed from this patch set and will be submitted
> > > > > > > > >       together with the next time AF_XDP supports virtio dma
> > > > > > > > >    3. Added patch #2 #3 to remove the check for flags when performing unmap
> > > > > > > > >       indirect desc
> > > > > > > > >
> > > > > > > > > Xuan Zhuo (6):
> > > > > > > > >   virtio: rename vring_unmap_state_packed() to
> > > > > > > > >     vring_unmap_extra_packed()
> > > > > > > > >   virtio: remove flags check for unmap split indirect desc
> > > > > > > > >   virtio: remove flags check for unmap packed indirect desc
> > > > > > > > >   virtio: virtqueue_add() support predma
> > > > > > > > >   virtio: split: virtqueue_add_split() support dma address
> > > > > > > > >   virtio: packed: virtqueue_add_packed() support dma address
> > > > > > > > >
> > > > > > > > >  drivers/virtio/virtio_ring.c | 199 ++++++++++++++++++++++-------------
> > > > > > > > >  1 file changed, 126 insertions(+), 73 deletions(-)
> > > > > > > > >
> > > > > > > > > --
> > > > > > > > > 2.31.0
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> >
>


* Re: [PATCH v1 0/6] virtio: support advance DMA
  2022-02-21  6:37               ` Jason Wang
  2022-02-21  6:41                 ` Xuan Zhuo
@ 2022-02-21  6:46                 ` Xuan Zhuo
  2022-02-21  6:56                   ` Jason Wang
  1 sibling, 1 reply; 36+ messages in thread
From: Xuan Zhuo @ 2022-02-21  6:46 UTC (permalink / raw)
  To: Jason Wang; +Cc: virtualization, Michael S. Tsirkin

On Mon, 21 Feb 2022 14:37:49 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Mon, Feb 21, 2022 at 2:20 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > On Mon, 21 Feb 2022 13:59:06 +0800, Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > On Mon, 21 Feb 2022 11:53:33 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > On Mon, Feb 21, 2022 at 11:46 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > >
> > > > > On Mon, 21 Feb 2022 11:32:52 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > On Fri, Feb 18, 2022 at 5:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > >
> > > > > > > On Thu, 17 Feb 2022 15:19:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > > On Thu, Feb 10, 2022 at 4:51 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > > >
> > > > > > > > > virtqueue_add() only supports virtual addresses, dma is completed in
> > > > > > > > > virtqueue_add().
> > > > > > > > >
> > > > > > > > > In some scenarios (such as the AF_XDP scenario), DMA is completed in advance, so
> > > > > > > > > it is necessary for us to support passing the DMA address to virtqueue_add().
> > > > > > > >
> > > > > > > > I'd suggest rename this feature as "unmanaged DMA".
> > > > > > >
> > > > > > > OK
> > > > > > >
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Record this predma information in extra->flags, which can be skipped when
> > > > > > > > > executing dma unmap.
> > > > > > > >
> > > > > > > > Question still, can we use per-virtqueue flag instead of per
> > > > > > > > descriptor flag? If my memory is correct, the answer is yes in the
> > > > > > > > discussion for the previous version.
> > > > > > > >
> > > > > > >
> > > > > > > Yes.
> > > > > > >
> > > > > > > per-virtqueue? I guess it should be per-submit.
> > > > > > >
> > > > > > > This patch set only adds a flag to desc_extra[head].flags, so that we can know
> > > > > > > if we need to unmap dma when we detach.
> > > > > >
> > > > > > I meant if we can manage to make it per virtqueue, there's no need to
> > > > > > maintain per buffer flag.
> > > > > >
> > > > > > So we know something that needs to be mapped by virtio core itself,
> > > > > > e.g the indirect page. Other than this, all the rest could be
> > > > > > pre-mapped.
> > > > > >
> > > > > > For vnet header, it could be mapped by virtio-net which could be still
> > > > > > treated as pre mapped DMA since it's not the virtio ring code.
> > > > > >
> > > > > > Anything I miss here?
> > > > >
> > > > > I guess, your understanding is that after the queue is reset, the queue is used
> > > > > by xsk(AF_XDP), then all commits to this vq are premapped dma addresses.
> > > > >
> > > > > This is ok for rx.
> > > > >
> > > > > But for tx, just like XDP TX, although vq is used by xsk, the kernel also passes
> > > > > skb to it at the same time. It is shared.
> > > >
> > > > Right.
> > > >
> > > > >
> > > > > We can guarantee that the sg of the sgs submitted at one time uses the premapped
> > > > > dma address or virtual address uniformly. It is not guaranteed that all the sgs
> > > > > to the vq are uniform
> > > >
> > > > Sorry, I don't understand here. We can let virtio-net do the mapping
> > > > even for TX, then from the virtio_ring point of view, it's still
> > > > pre-mapped?
> > > >
> > >
> > > Yes, we can do this. My previous thought was to keep the skb path unchanged.
>
> We can listen from Michael and others but I think it would be simpler.
>
> And we can even make the pre-mapped per driver. E.g for virtio-net we
> just let the virtio-net driver do the DMA mapping. This allows us to
> do a lot of optimizations (e.g page flip) as what other networking
> drivers did.
>
> > >
> > > Then we can make it clear that in the case of xsk, after completing the queue
> > > reset, all the addresses submitted to virtio are the addresses of the completed
> > > dma, including the skb case, the dma map operation must be completed first.
> > >
> > > In this case, I feel like we can do without this patch set.
> >
> > I originally thought that use_dma_api could be reused, but I found that this is
> > not the case. The logic of sg_phys() does not fit our needs. We still need a
> > separate flag.
>
> Just to make sure I understand here, for this flag you mean
>
> 1) per buffer
> 2) per virtqueue
> or
> 3) per device?


As far as this question is concerned, I personally prefer per virtqueue,
because it is more flexible. It is also very simple to implement per device
on top of this: just set each vq to premapped mode.
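That per-device-on-top-of-per-virtqueue layering can be sketched as a toy model. The struct and function names here are invented for illustration and are not the actual virtio API:

```c
#include <assert.h>
#include <stdbool.h>

#define MAX_VQS_SKETCH 4

/* Toy model: per-device premapped mode is nothing more than the
 * per-virtqueue toggle applied to every queue of the device. */
struct vq_sketch {
	bool premapped;
};

struct vdev_sketch {
	struct vq_sketch vqs[MAX_VQS_SKETCH];
	int nvqs;
};

/* Hypothetical per-vq toggle, as in the per-virtqueue proposal. */
static void virtqueue_set_premapped(struct vq_sketch *vq, bool on)
{
	vq->premapped = on;
}

/* Per-device mode: loop over the queues and set each one. */
static void vdev_set_premapped(struct vdev_sketch *dev, bool on)
{
	for (int i = 0; i < dev->nvqs; i++)
		virtqueue_set_premapped(&dev->vqs[i], on);
}
```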

Thanks.

>
> Thanks
>
> >
> > Thanks.
> >
> > >
> > > Thanks.
> > >
> > > > Thanks
> > > >
> > > > >
> > > > > Thanks.
> > > > >
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > >
> > > > > > >
> > > > > > > Thanks.
> > > > > > >
> > > > > > > > Thanks
> > > > > > > >
> > > > > > > > >
> > > > > > > > > v1:
> > > > > > > > >    1. All sgs requested at one time are required to be unified PREDMA, and several
> > > > > > > > >       of them are not supported to be PREDMA
> > > > > > > > >    2. virtio_dma_map() is removed from this patch set and will be submitted
> > > > > > > > >       together with the next time AF_XDP supports virtio dma
> > > > > > > > >    3. Added patch #2 #3 to remove the check for flags when performing unmap
> > > > > > > > >       indirect desc
> > > > > > > > >
> > > > > > > > > Xuan Zhuo (6):
> > > > > > > > >   virtio: rename vring_unmap_state_packed() to
> > > > > > > > >     vring_unmap_extra_packed()
> > > > > > > > >   virtio: remove flags check for unmap split indirect desc
> > > > > > > > >   virtio: remove flags check for unmap packed indirect desc
> > > > > > > > >   virtio: virtqueue_add() support predma
> > > > > > > > >   virtio: split: virtqueue_add_split() support dma address
> > > > > > > > >   virtio: packed: virtqueue_add_packed() support dma address
> > > > > > > > >
> > > > > > > > >  drivers/virtio/virtio_ring.c | 199 ++++++++++++++++++++++-------------
> > > > > > > > >  1 file changed, 126 insertions(+), 73 deletions(-)
> > > > > > > > >
> > > > > > > > > --
> > > > > > > > > 2.31.0
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> >
>


* Re: [PATCH v1 0/6] virtio: support advance DMA
  2022-02-21  6:41                 ` Xuan Zhuo
@ 2022-02-21  6:55                   ` Jason Wang
  2022-02-21  7:46                     ` Xuan Zhuo
  0 siblings, 1 reply; 36+ messages in thread
From: Jason Wang @ 2022-02-21  6:55 UTC (permalink / raw)
  To: Xuan Zhuo; +Cc: virtualization, Michael S. Tsirkin

On Mon, Feb 21, 2022 at 2:45 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> On Mon, 21 Feb 2022 14:37:49 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > On Mon, Feb 21, 2022 at 2:20 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > >
> > > On Mon, 21 Feb 2022 13:59:06 +0800, Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > On Mon, 21 Feb 2022 11:53:33 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > On Mon, Feb 21, 2022 at 11:46 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > >
> > > > > > On Mon, 21 Feb 2022 11:32:52 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > On Fri, Feb 18, 2022 at 5:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > >
> > > > > > > > On Thu, 17 Feb 2022 15:19:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > > > On Thu, Feb 10, 2022 at 4:51 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > > > >
> > > > > > > > > > virtqueue_add() only supports virtual addresses, dma is completed in
> > > > > > > > > > virtqueue_add().
> > > > > > > > > >
> > > > > > > > > > In some scenarios (such as the AF_XDP scenario), DMA is completed in advance, so
> > > > > > > > > > it is necessary for us to support passing the DMA address to virtqueue_add().
> > > > > > > > >
> > > > > > > > > I'd suggest rename this feature as "unmanaged DMA".
> > > > > > > >
> > > > > > > > OK
> > > > > > > >
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Record this predma information in extra->flags, which can be skipped when
> > > > > > > > > > executing dma unmap.
> > > > > > > > >
> > > > > > > > > Question still, can we use per-virtqueue flag instead of per
> > > > > > > > > descriptor flag? If my memory is correct, the answer is yes in the
> > > > > > > > > discussion for the previous version.
> > > > > > > > >
> > > > > > > >
> > > > > > > > Yes.
> > > > > > > >
> > > > > > > > per-virtqueue? I guess it should be per-submit.
> > > > > > > >
> > > > > > > > This patch set only adds a flag to desc_extra[head].flags, so that we can know
> > > > > > > > if we need to unmap dma when we detach.
> > > > > > >
> > > > > > > I meant if we can manage to make it per virtqueue, there's no need to
> > > > > > > maintain per buffer flag.
> > > > > > >
> > > > > > > So we know something that needs to be mapped by virtio core itself,
> > > > > > > e.g the indirect page. Other than this, all the rest could be
> > > > > > > pre-mapped.
> > > > > > >
> > > > > > > For vnet header, it could be mapped by virtio-net which could be still
> > > > > > > treated as pre mapped DMA since it's not the virtio ring code.
> > > > > > >
> > > > > > > Anything I miss here?
> > > > > >
> > > > > > I guess, your understanding is that after the queue is reset, the queue is used
> > > > > > by xsk(AF_XDP), then all commits to this vq are premapped dma addresses.
> > > > > >
> > > > > > This is ok for rx.
> > > > > >
> > > > > > But for tx, just like XDP TX, although vq is used by xsk, the kernel also passes
> > > > > > skb to it at the same time. It is shared.
> > > > >
> > > > > Right.
> > > > >
> > > > > >
> > > > > > We can guarantee that the sg of the sgs submitted at one time uses the premapped
> > > > > > dma address or virtual address uniformly. It is not guaranteed that all the sgs
> > > > > > to the vq are uniform
> > > > >
> > > > > Sorry, I don't understand here. We can let virtio-net do the mapping
> > > > > even for TX, then from the virtio_ring point of view, it's still
> > > > > pre-mapped?
> > > > >
> > > >
> > > > Yes, we can do this. My previous thought was to keep the skb path unchanged.
> >
> > We can listen from Michael and others but I think it would be simpler.
> >
> > And we can even make the pre-mapped per driver. E.g for virtio-net we
> > just let the virtio-net driver do the DMA mapping. This allows us to
> > do a lot of optimizations (e.g page flip) as what other networking
> > drivers did.
> >
> > > >
> > > > Then we can make it clear that in the case of xsk, after completing the queue
> > > > reset, all the addresses submitted to virtio are the addresses of the completed
> > > > dma, including the skb case, the dma map operation must be completed first.
> > > >
> > > > In this case, I feel like we can do without this patch set.
> > >
> > > I originally thought that use_dma_api could be reused, but I found that this is
> > > not the case. The logic of sg_phys() does not fit our needs. We still need a
> > > separate flag.
> >
> > Just to make sure I understand here, for this flag you mean
> >
> > 1) per buffer
> > 2) per virtqueue
> > or
> > 3) per device?
>
> To be honest, I haven't figured out what the advantage of the driver's own
> management of dma is. If it just implements AF_XDP, then per virtqueue should be
> fine.

I haven't thought about it deeply, but generally it allows the driver to do
ad-hoc optimizations on DMA mapping.

But it is also a question of complexity: if we don't see a real use case, we
can choose the easier way. I thought using per virtqueue or per device would
be simpler, but I might be wrong. Most Ethernet drivers manage the DMA
mapping themselves.

>
> And if it's per device, then I feel like another interesting job. Is premapped
> dma address used by default?

If we go this way, it should be the driver's job to decide whether to enable it.

>
> Should I submit a patch set to make virtio-net use the premapped dma address
> directly by default?

It might be a better idea if it's simpler. (I haven't seen a driver that
mixes pre-mapped DMA with managed DMA.)

Thanks

>
> Thanks.
>
>
> >
> > Thanks
> >
> > >
> > > Thanks.
> > >
> > > >
> > > > Thanks.
> > > >
> > > > > Thanks
> > > > >
> > > > > >
> > > > > > Thanks.
> > > > > >
> > > > > > >
> > > > > > > Thanks
> > > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > > Thanks.
> > > > > > > >
> > > > > > > > > Thanks
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > v1:
> > > > > > > > > >    1. All sgs requested at one time are required to be unified PREDMA, and several
> > > > > > > > > >       of them are not supported to be PREDMA
> > > > > > > > > >    2. virtio_dma_map() is removed from this patch set and will be submitted
> > > > > > > > > >       together with the next time AF_XDP supports virtio dma
> > > > > > > > > >    3. Added patch #2 #3 to remove the check for flags when performing unmap
> > > > > > > > > >       indirect desc
> > > > > > > > > >
> > > > > > > > > > Xuan Zhuo (6):
> > > > > > > > > >   virtio: rename vring_unmap_state_packed() to
> > > > > > > > > >     vring_unmap_extra_packed()
> > > > > > > > > >   virtio: remove flags check for unmap split indirect desc
> > > > > > > > > >   virtio: remove flags check for unmap packed indirect desc
> > > > > > > > > >   virtio: virtqueue_add() support predma
> > > > > > > > > >   virtio: split: virtqueue_add_split() support dma address
> > > > > > > > > >   virtio: packed: virtqueue_add_packed() support dma address
> > > > > > > > > >
> > > > > > > > > >  drivers/virtio/virtio_ring.c | 199 ++++++++++++++++++++++-------------
> > > > > > > > > >  1 file changed, 126 insertions(+), 73 deletions(-)
> > > > > > > > > >
> > > > > > > > > > --
> > > > > > > > > > 2.31.0
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > >
> >
>



* Re: [PATCH v1 0/6] virtio: support advance DMA
  2022-02-21  6:46                 ` Xuan Zhuo
@ 2022-02-21  6:56                   ` Jason Wang
  0 siblings, 0 replies; 36+ messages in thread
From: Jason Wang @ 2022-02-21  6:56 UTC (permalink / raw)
  To: Xuan Zhuo; +Cc: virtualization, Michael S. Tsirkin

On Mon, Feb 21, 2022 at 2:48 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> On Mon, 21 Feb 2022 14:37:49 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > On Mon, Feb 21, 2022 at 2:20 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > >
> > > On Mon, 21 Feb 2022 13:59:06 +0800, Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > On Mon, 21 Feb 2022 11:53:33 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > On Mon, Feb 21, 2022 at 11:46 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > >
> > > > > > On Mon, 21 Feb 2022 11:32:52 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > On Fri, Feb 18, 2022 at 5:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > >
> > > > > > > > On Thu, 17 Feb 2022 15:19:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > > > On Thu, Feb 10, 2022 at 4:51 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > > > >
> > > > > > > > > > virtqueue_add() only supports virtual addresses, dma is completed in
> > > > > > > > > > virtqueue_add().
> > > > > > > > > >
> > > > > > > > > > In some scenarios (such as the AF_XDP scenario), DMA is completed in advance, so
> > > > > > > > > > it is necessary for us to support passing the DMA address to virtqueue_add().
> > > > > > > > >
> > > > > > > > > I'd suggest rename this feature as "unmanaged DMA".
> > > > > > > >
> > > > > > > > OK
> > > > > > > >
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > Record this predma information in extra->flags, which can be skipped when
> > > > > > > > > > executing dma unmap.
> > > > > > > > >
> > > > > > > > > Question still, can we use per-virtqueue flag instead of per
> > > > > > > > > descriptor flag? If my memory is correct, the answer is yes in the
> > > > > > > > > discussion for the previous version.
> > > > > > > > >
> > > > > > > >
> > > > > > > > Yes.
> > > > > > > >
> > > > > > > > per-virtqueue? I guess it should be per-submit.
> > > > > > > >
> > > > > > > > This patch set only adds a flag to desc_extra[head].flags, so that we can know
> > > > > > > > if we need to unmap dma when we detach.
> > > > > > >
> > > > > > > I meant if we can manage to make it per virtqueue, there's no need to
> > > > > > > maintain per buffer flag.
> > > > > > >
> > > > > > > So we know something that needs to be mapped by virtio core itself,
> > > > > > > e.g the indirect page. Other than this, all the rest could be
> > > > > > > pre-mapped.
> > > > > > >
> > > > > > > For vnet header, it could be mapped by virtio-net which could be still
> > > > > > > treated as pre mapped DMA since it's not the virtio ring code.
> > > > > > >
> > > > > > > Anything I miss here?
> > > > > >
> > > > > > I guess, your understanding is that after the queue is reset, the queue is used
> > > > > > by xsk(AF_XDP), then all commits to this vq are premapped dma addresses.
> > > > > >
> > > > > > This is ok for rx.
> > > > > >
> > > > > > But for tx, just like XDP TX, although vq is used by xsk, the kernel also passes
> > > > > > skb to it at the same time. It is shared.
> > > > >
> > > > > Right.
> > > > >
> > > > > >
> > > > > > We can guarantee that the sg of the sgs submitted at one time uses the premapped
> > > > > > dma address or virtual address uniformly. It is not guaranteed that all the sgs
> > > > > > to the vq are uniform
> > > > >
> > > > > Sorry, I don't understand here. We can let virtio-net do the mapping
> > > > > even for TX, then from the virtio_ring point of view, it's still
> > > > > pre-mapped?
> > > > >
> > > >
> > > > Yes, we can do this. My previous thought was to keep the skb path unchanged.
> >
> > We can listen from Michael and others but I think it would be simpler.
> >
> > And we can even make the pre-mapped per driver. E.g for virtio-net we
> > just let the virtio-net driver do the DMA mapping. This allows us to
> > do a lot of optimizations (e.g page flip) as what other networking
> > drivers did.
> >
> > > >
> > > > Then we can make it clear that in the case of xsk, after completing the queue
> > > > reset, all the addresses submitted to virtio are the addresses of the completed
> > > > dma, including the skb case, the dma map operation must be completed first.
> > > >
> > > > In this case, I feel like we can do without this patch set.
> > >
> > > I originally thought that use_dma_api could be reused, but I found that this is
> > > not the case. The logic of sg_phys() does not meet our ideas. We still have a
> > > separate flag.
> >
> > Just to make sure I understand here, for this flag you mean
> >
> > 1) per buffer
> > 2) per virtqueue
> > or
> > 3) per device?
>
>
> As far as this question is concerned, I personally prefer per virtqueue. Because
> this will be more flexible. It is also very simple to implement per device based
> on this, as long as each vq is set to premapped mode.

It should be fine I think.

Thanks

>
> Thanks.
>
> >
> > Thanks
> >
> > >
> > > Thanks.
> > >
> > > >
> > > > Thanks.
> > > >
> > > > > Thanks
> > > > >
> > > > > >
> > > > > > Thanks.
> > > > > >
> > > > > > >
> > > > > > > Thanks
> > > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > > Thanks.
> > > > > > > >
> > > > > > > > > Thanks
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > v1:
> > > > > > > > > >    1. All sgs requested at one time are required to be unified PREDMA, and several
> > > > > > > > > >       of them are not supported to be PREDMA
> > > > > > > > > >    2. virtio_dma_map() is removed from this patch set and will be submitted
> > > > > > > > > >       together with the next time AF_XDP supports virtio dma
> > > > > > > > > >    3. Added patch #2 #3 to remove the check for flags when performing unmap
> > > > > > > > > >       indirect desc
> > > > > > > > > >
> > > > > > > > > > Xuan Zhuo (6):
> > > > > > > > > >   virtio: rename vring_unmap_state_packed() to
> > > > > > > > > >     vring_unmap_extra_packed()
> > > > > > > > > >   virtio: remove flags check for unmap split indirect desc
> > > > > > > > > >   virtio: remove flags check for unmap packed indirect desc
> > > > > > > > > >   virtio: virtqueue_add() support predma
> > > > > > > > > >   virtio: split: virtqueue_add_split() support dma address
> > > > > > > > > >   virtio: packed: virtqueue_add_packed() support dma address
> > > > > > > > > >
> > > > > > > > > >  drivers/virtio/virtio_ring.c | 199 ++++++++++++++++++++++-------------
> > > > > > > > > >  1 file changed, 126 insertions(+), 73 deletions(-)
> > > > > > > > > >
> > > > > > > > > > --
> > > > > > > > > > 2.31.0
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > > _______________________________________________
> > > > Virtualization mailing list
> > > > Virtualization@lists.linux-foundation.org
> > > > https://lists.linuxfoundation.org/mailman/listinfo/virtualization
> > >
> >
>

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v1 0/6] virtio: support advance DMA
  2022-02-21  6:55                   ` Jason Wang
@ 2022-02-21  7:46                     ` Xuan Zhuo
  0 siblings, 0 replies; 36+ messages in thread
From: Xuan Zhuo @ 2022-02-21  7:46 UTC (permalink / raw)
  To: Jason Wang; +Cc: virtualization, Michael S. Tsirkin

On Mon, 21 Feb 2022 14:55:43 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Mon, Feb 21, 2022 at 2:45 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > On Mon, 21 Feb 2022 14:37:49 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > On Mon, Feb 21, 2022 at 2:20 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > >
> > > > On Mon, 21 Feb 2022 13:59:06 +0800, Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > On Mon, 21 Feb 2022 11:53:33 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > On Mon, Feb 21, 2022 at 11:46 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > >
> > > > > > > On Mon, 21 Feb 2022 11:32:52 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > > On Fri, Feb 18, 2022 at 5:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > > >
> > > > > > > > > On Thu, 17 Feb 2022 15:19:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > > > > On Thu, Feb 10, 2022 at 4:51 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > > > > >
> > > > > > > > > > > virtqueue_add() only supports virtual addresses, dma is completed in
> > > > > > > > > > > virtqueue_add().
> > > > > > > > > > >
> > > > > > > > > > > In some scenarios (such as the AF_XDP scenario), DMA is completed in advance, so
> > > > > > > > > > > it is necessary for us to support passing the DMA address to virtqueue_add().
> > > > > > > > > >
> > > > > > > > > > I'd suggest rename this feature as "unmanaged DMA".
> > > > > > > > >
> > > > > > > > > OK
> > > > > > > > >
> > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > Record this predma information in extra->flags, which can be skipped when
> > > > > > > > > > > executing dma unmap.
> > > > > > > > > >
> > > > > > > > > > Question still, can we use per-virtqueue flag instead of per
> > > > > > > > > > descriptor flag? If my memory is correct, the answer is yes in the
> > > > > > > > > > discussion for the previous version.
> > > > > > > > > >
> > > > > > > > >
> > > > > > > > > Yes.
> > > > > > > > >
> > > > > > > > > per-virtqueue? I guess it should be per-submit.
> > > > > > > > >
> > > > > > > > > This patch set only adds a flag to desc_extra[head].flags, so that we can know
> > > > > > > > > if we need to unmap dma when we detach.
> > > > > > > >
> > > > > > > > I meant if we can manage to make it per virtqueue, there's no need to
> > > > > > > > maintain per buffer flag.
> > > > > > > >
> > > > > > > > So we know something that needs to be mapped by virtio core itself,
> > > > > > > > e.g the indirect page. Other than this, all the rest could be
> > > > > > > > pre-mapped.
> > > > > > > >
> > > > > > > > For vnet header, it could be mapped by virtio-net which could be still
> > > > > > > > treated as pre mapped DMA since it's not the virtio ring code.
> > > > > > > >
> > > > > > > > Anything I miss here?
> > > > > > >
> > > > > > > I guess, your understanding is that after the queue is reset, the queue is used
> > > > > > > by xsk(AF_XDP), then all commits to this vq are premapped dma addresses.
> > > > > > >
> > > > > > > This is ok for rx.
> > > > > > >
> > > > > > > But for tx, just like XDP TX, although vq is used by xsk, the kernel also passes
> > > > > > > skb to it at the same time. It is shared.
> > > > > >
> > > > > > Right.
> > > > > >
> > > > > > >
> > > > > > > We can guarantee that the sg of the sgs submitted at one time uses the premapped
> > > > > > > dma address or virtual address uniformly. It is not guaranteed that all the sgs
> > > > > > > to the vq are uniform
> > > > > >
> > > > > > Sorry, I don't understand here. We can let virtio-net do the mapping
> > > > > > even for TX, then from the virtio_ring point of view, it's still
> > > > > > pre-mapped?
> > > > > >
> > > > >
> > > > > Yes, we can do this. My previous thought was to keep the skb path unchanged.
> > >
> > > We can listen from Michael and others but I think it would be simpler.
> > >
> > > And we can even make the pre-mapped per driver. E.g for virtio-net we
> > > just let the virtio-net driver do the DMA mapping. This allows us to
> > > do a lot of optimizations (e.g page flip) as what other networking
> > > drivers did.
> > >
> > > > >
> > > > > Then we can make it clear that in the case of xsk, after completing the queue
> > > > > reset, all the addresses submitted to virtio are the addresses of the completed
> > > > > dma, including the skb case, the dma map operation must be completed first.
> > > > >
> > > > > In this case, I feel like we can do without this patch set.
> > > >
> > > > I originally thought that use_dma_api could be reused, but I found that this is
> > > > not the case. The logic of sg_phys() does not meet our ideas. We still have a
> > > > separate flag.
> > >
> > > Just to make sure I understand here, for this flag you mean
> > >
> > > 1) per buffer
> > > 2) per virtqueue
> > > or
> > > 3) per device?
> >
> > To be honest, I haven't figured out what the advantage of the driver's own
> > management of dma is. If it just implements AF_XDP, then per virtqueue should be
> > fine.
>
> Haven't thought it deeply but generally it allows driver to do ad-hoc
> optimizations on DMA mapping.
>
> But it should be also a question of complexity, if we don't see real
> use case, we can choose the easier way. I thought using per virtqueue
> or per device is simpler but I might be wrong. Most of ethernet driver
> manage the DMA mapping by itself.
>
> >
> > And if it's per device, then I feel like another interesting job. Is premapped
> > dma address used by default?
>
> If we go with this way, it should be job of the driver to enable it or not.
>
> >
> > Should I submit a patch set to make virtnet-net use the premapped dma address
> > directly by default?
>
> It might be a better idea if it's simpler. (I haven't seen a driver
> that mixes pre-mapped DMA with managed DMA.)

I tried to implement this. One of the problems I encountered is that we
need to save the dma addresses in virtnet for unmap. Especially in the
xmit case, one skb may have multiple dma addresses.

1. Let virtio return the addr of each desc when detached.
2. Allocate a block of memory for each sq/rq to hold the dma address.
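
Option 2 could be sketched roughly like this (a standalone illustration
only; the struct and function names are made up, and dma_addr_t is a
stand-in for the kernel type, this is not the in-tree virtio-net code):

```c
#include <assert.h>
#include <stdint.h>
#include <stdlib.h>

typedef uint64_t dma_addr_t;	/* stand-in for the kernel type */

/* Hypothetical per-send-queue bookkeeping: one slot per ring entry. */
struct sq_dma_state {
	dma_addr_t *addr;	/* addr[i] holds the mapping of desc i */
	uint32_t *len;		/* len[i] is needed for the later unmap */
	unsigned int num;	/* ring size */
};

static int sq_dma_state_init(struct sq_dma_state *s, unsigned int num)
{
	s->addr = calloc(num, sizeof(*s->addr));
	s->len = calloc(num, sizeof(*s->len));
	s->num = num;
	return (s->addr && s->len) ? 0 : -1;
}

/* Called when a buffer (or one frag of an skb) is submitted. */
static void sq_dma_record(struct sq_dma_state *s, unsigned int idx,
			  dma_addr_t addr, uint32_t len)
{
	s->addr[idx] = addr;
	s->len[idx] = len;
}

/* Called at completion time, instead of asking virtio for the address. */
static dma_addr_t sq_dma_lookup(const struct sq_dma_state *s,
				unsigned int idx)
{
	return s->addr[idx];
}
```

The cost is one extra array per queue, sized to the ring, but detach no
longer needs virtio to report anything back.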

What do you think about these two options?

Thanks.

>
> Thanks
>
> >
> > Thanks.
> >
> >
> > >
> > > Thanks
> > >
> > > >
> > > > Thanks.
> > > >
> > > > >
> > > > > Thanks.
> > > > >
> > > > > > Thanks
> > > > > >
> > > > > > >
> > > > > > > Thanks.
> > > > > > >
> > > > > > > >
> > > > > > > > Thanks
> > > > > > > >
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Thanks.
> > > > > > > > >
> > > > > > > > > > Thanks
> > > > > > > > > >
> > > > > > > > > > >
> > > > > > > > > > > v1:
> > > > > > > > > > >    1. All sgs requested at one time are required to be unified PREDMA, and several
> > > > > > > > > > >       of them are not supported to be PREDMA
> > > > > > > > > > >    2. virtio_dma_map() is removed from this patch set and will be submitted
> > > > > > > > > > >       together with the next time AF_XDP supports virtio dma
> > > > > > > > > > >    3. Added patch #2 #3 to remove the check for flags when performing unmap
> > > > > > > > > > >       indirect desc
> > > > > > > > > > >
> > > > > > > > > > > Xuan Zhuo (6):
> > > > > > > > > > >   virtio: rename vring_unmap_state_packed() to
> > > > > > > > > > >     vring_unmap_extra_packed()
> > > > > > > > > > >   virtio: remove flags check for unmap split indirect desc
> > > > > > > > > > >   virtio: remove flags check for unmap packed indirect desc
> > > > > > > > > > >   virtio: virtqueue_add() support predma
> > > > > > > > > > >   virtio: split: virtqueue_add_split() support dma address
> > > > > > > > > > >   virtio: packed: virtqueue_add_packed() support dma address
> > > > > > > > > > >
> > > > > > > > > > >  drivers/virtio/virtio_ring.c | 199 ++++++++++++++++++++++-------------
> > > > > > > > > > >  1 file changed, 126 insertions(+), 73 deletions(-)
> > > > > > > > > > >
> > > > > > > > > > > --
> > > > > > > > > > > 2.31.0
> > > > > > > > > > >
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > >
> > >
> >
>


* Re: [PATCH v1 0/6] virtio: support advance DMA
  2022-02-21  3:32     ` Jason Wang
  2022-02-21  3:39       ` Xuan Zhuo
@ 2022-02-21 11:23       ` Xuan Zhuo
  2022-02-22  4:02         ` Jason Wang
  1 sibling, 1 reply; 36+ messages in thread
From: Xuan Zhuo @ 2022-02-21 11:23 UTC (permalink / raw)
  To: Jason Wang; +Cc: Michael S. Tsirkin, virtualization

On Mon, 21 Feb 2022 11:32:52 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Fri, Feb 18, 2022 at 5:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > On Thu, 17 Feb 2022 15:19:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > On Thu, Feb 10, 2022 at 4:51 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > >
> > > > virtqueue_add() only supports virtual addresses, dma is completed in
> > > > virtqueue_add().
> > > >
> > > > In some scenarios (such as the AF_XDP scenario), DMA is completed in advance, so
> > > > it is necessary for us to support passing the DMA address to virtqueue_add().
> > >
> > > I'd suggest rename this feature as "unmanaged DMA".
> >
> > OK
> >
> > >
> > > >
> > > > Record this predma information in extra->flags, which can be skipped when
> > > > executing dma unmap.
> > >
> > > Question still, can we use per-virtqueue flag instead of per
> > > descriptor flag? If my memory is correct, the answer is yes in the
> > > discussion for the previous version.
> > >
> >
> > Yes.
> >
> > per-virtqueue? I guess it should be per-submit.
> >
> > This patch set only adds a flag to desc_extra[head].flags, so that we can know
> > if we need to unmap dma when we detach.
>
> I meant if we can manage to make it per virtqueue, there's no need to
> maintain per buffer flag.
>

Rethinking this question, I feel there is no essential difference between per
virtqueue and per sgs.

per virtqueue:
	1. add buf:
		a. check vq->premapped for map every sg
	2. detach:
	        a. check vq->premapped for unmap

per sgs:
	1. add buf:
	        a. check function parameter "premapped" for map every sg
		b. add flag to extra[head].flag

	2. detach:
	        a: check extra[head].flag for unmap
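
To make the comparison concrete, the two unmap decisions could be
sketched with stub types like this (every name here is hypothetical,
not the actual virtio_ring fields):

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

#define F_PREDMA 0x1	/* hypothetical "premapped" flag bit */

/* Stub of the per-descriptor extra state. */
struct desc_extra {
	uint16_t flags;
};

/* Per virtqueue: one flag on the vq decides for every buffer. */
struct vq_stub {
	bool premapped;
};

static bool need_unmap_per_queue(const struct vq_stub *vq)
{
	return !vq->premapped;
}

/* Per sgs: the flag recorded in extra[head] decides per buffer. */
static bool need_unmap_per_sgs(const struct desc_extra *head)
{
	return !(head->flags & F_PREDMA);
}
```

The per sgs variant costs one flag bit per buffer, but it lets
premapped and virtio-mapped buffers coexist on the same vq.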


Thanks.


> So we know something that needs to be mapped by virtio core itself,
> e.g the indirect page. Other than this, all the rest could be
> pre-mapped.
>
> For vnet header, it could be mapped by virtio-net which could be still
> treated as pre mapped DMA since it's not the virtio ring code.
>
> Anything I miss here?
>
> Thanks
>
>
> >
> > Thanks.
> >
> > > Thanks
> > >
> > > >
> > > > v1:
> > > >    1. All sgs requested at one time are required to be unified PREDMA, and several
> > > >       of them are not supported to be PREDMA
> > > >    2. virtio_dma_map() is removed from this patch set and will be submitted
> > > >       together with the next time AF_XDP supports virtio dma
> > > >    3. Added patch #2 #3 to remove the check for flags when performing unmap
> > > >       indirect desc
> > > >
> > > > Xuan Zhuo (6):
> > > >   virtio: rename vring_unmap_state_packed() to
> > > >     vring_unmap_extra_packed()
> > > >   virtio: remove flags check for unmap split indirect desc
> > > >   virtio: remove flags check for unmap packed indirect desc
> > > >   virtio: virtqueue_add() support predma
> > > >   virtio: split: virtqueue_add_split() support dma address
> > > >   virtio: packed: virtqueue_add_packed() support dma address
> > > >
> > > >  drivers/virtio/virtio_ring.c | 199 ++++++++++++++++++++++-------------
> > > >  1 file changed, 126 insertions(+), 73 deletions(-)
> > > >
> > > > --
> > > > 2.31.0
> > > >
> > >
> >
>


* Re: [PATCH v1 0/6] virtio: support advance DMA
  2022-02-21 11:23       ` Xuan Zhuo
@ 2022-02-22  4:02         ` Jason Wang
  2022-02-22  7:48           ` Xuan Zhuo
  0 siblings, 1 reply; 36+ messages in thread
From: Jason Wang @ 2022-02-22  4:02 UTC (permalink / raw)
  To: Xuan Zhuo; +Cc: Michael S. Tsirkin, virtualization


在 2022/2/21 下午7:23, Xuan Zhuo 写道:
> On Mon, 21 Feb 2022 11:32:52 +0800, Jason Wang <jasowang@redhat.com> wrote:
>> On Fri, Feb 18, 2022 at 5:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>>> On Thu, 17 Feb 2022 15:19:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
>>>> On Thu, Feb 10, 2022 at 4:51 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>>>>> virtqueue_add() only supports virtual addresses, dma is completed in
>>>>> virtqueue_add().
>>>>>
>>>>> In some scenarios (such as the AF_XDP scenario), DMA is completed in advance, so
>>>>> it is necessary for us to support passing the DMA address to virtqueue_add().
>>>> I'd suggest rename this feature as "unmanaged DMA".
>>> OK
>>>
>>>>> Record this predma information in extra->flags, which can be skipped when
>>>>> executing dma unmap.
>>>> Question still, can we use per-virtqueue flag instead of per
>>>> descriptor flag? If my memory is correct, the answer is yes in the
>>>> discussion for the previous version.
>>>>
>>> Yes.
>>>
>>> per-virtqueue? I guess it should be per-submit.
>>>
>>> This patch set only adds a flag to desc_extra[head].flags, so that we can know
>>> if we need to unmap dma when we detach.
>> I meant if we can manage to make it per virtqueue, there's no need to
>> maintain per buffer flag.
>>
> Rethinking this question, I feel there is no essential difference between per
> virtqueue and per sgs.
>
> per virtqueue:
> 	1. add buf:
> 		a. check vq->premapped for map every sg
> 	2. detach:
> 	        a. check vq->premapped for unmap
>
> per sgs:
> 	1. add buf:
> 	        a. check function parameter "premapped" for map every sg
> 		b. add flag to extra[head].flag
>
> 	2. detach:
> 	        a: check extra[head].flag for unmap
>
>
> Thanks.


Per-virtqueue is still a little bit easier at first glance.

Actually, per-sg has one advantage: it can be used without a virtqueue
reset (to allow switching between the two modes). But I'm not sure
whether we have such a requirement.

I think to answer this question, we probably need a real use case (if we 
can come up with a case that is more lightweight than AF_XDP, that would 
be even better).

Thanks


>
>
>> So we know something that needs to be mapped by virtio core itself,
>> e.g the indirect page. Other than this, all the rest could be
>> pre-mapped.
>>
>> For vnet header, it could be mapped by virtio-net which could be still
>> treated as pre mapped DMA since it's not the virtio ring code.
>>
>> Anything I miss here?
>>
>> Thanks
>>
>>
>>> Thanks.
>>>
>>>> Thanks
>>>>
>>>>> v1:
>>>>>     1. All sgs requested at one time are required to be unified PREDMA, and several
>>>>>        of them are not supported to be PREDMA
>>>>>     2. virtio_dma_map() is removed from this patch set and will be submitted
>>>>>        together with the next time AF_XDP supports virtio dma
>>>>>     3. Added patch #2 #3 to remove the check for flags when performing unmap
>>>>>        indirect desc
>>>>>
>>>>> Xuan Zhuo (6):
>>>>>    virtio: rename vring_unmap_state_packed() to
>>>>>      vring_unmap_extra_packed()
>>>>>    virtio: remove flags check for unmap split indirect desc
>>>>>    virtio: remove flags check for unmap packed indirect desc
>>>>>    virtio: virtqueue_add() support predma
>>>>>    virtio: split: virtqueue_add_split() support dma address
>>>>>    virtio: packed: virtqueue_add_packed() support dma address
>>>>>
>>>>>   drivers/virtio/virtio_ring.c | 199 ++++++++++++++++++++++-------------
>>>>>   1 file changed, 126 insertions(+), 73 deletions(-)
>>>>>
>>>>> --
>>>>> 2.31.0
>>>>>



* Re: [PATCH v1 0/6] virtio: support advance DMA
  2022-02-22  4:02         ` Jason Wang
@ 2022-02-22  7:48           ` Xuan Zhuo
  2022-02-23  2:58             ` Jason Wang
  0 siblings, 1 reply; 36+ messages in thread
From: Xuan Zhuo @ 2022-02-22  7:48 UTC (permalink / raw)
  To: Jason Wang; +Cc: Michael S. Tsirkin, virtualization

On Tue, 22 Feb 2022 12:02:14 +0800, Jason Wang <jasowang@redhat.com> wrote:
>
> 在 2022/2/21 下午7:23, Xuan Zhuo 写道:
> > On Mon, 21 Feb 2022 11:32:52 +0800, Jason Wang <jasowang@redhat.com> wrote:
> >> On Fri, Feb 18, 2022 at 5:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >>> On Thu, 17 Feb 2022 15:19:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> >>>> On Thu, Feb 10, 2022 at 4:51 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >>>>> virtqueue_add() only supports virtual addresses, dma is completed in
> >>>>> virtqueue_add().
> >>>>>
> >>>>> In some scenarios (such as the AF_XDP scenario), DMA is completed in advance, so
> >>>>> it is necessary for us to support passing the DMA address to virtqueue_add().
> >>>> I'd suggest rename this feature as "unmanaged DMA".
> >>> OK
> >>>
> >>>>> Record this predma information in extra->flags, which can be skipped when
> >>>>> executing dma unmap.
> >>>> Question still, can we use per-virtqueue flag instead of per
> >>>> descriptor flag? If my memory is correct, the answer is yes in the
> >>>> discussion for the previous version.
> >>>>
> >>> Yes.
> >>>
> >>> per-virtqueue? I guess it should be per-submit.
> >>>
> >>> This patch set only adds a flag to desc_extra[head].flags, so that we can know
> >>> if we need to unmap dma when we detach.
> >> I meant if we can manage to make it per virtqueue, there's no need to
> >> maintain per buffer flag.
> >>
> > Rethinking this question, I feel there is no essential difference between per
> > virtqueue and per sgs.
> >
> > per virtqueue:
> > 	1. add buf:
> > 		a. check vq->premapped for map every sg
> > 	2. detach:
> > 	        a. check vq->premapped for unmap
> >
> > per sgs:
> > 	1. add buf:
> > 	        a. check function parameter "premapped" for map every sg
> > 		b. add flag to extra[head].flag
> >
> > 	2. detach:
> > 	        a: check extra[head].flag for unmap
> >
> >
> > Thanks.
>
>
> Per-virtqueue is still a little bit easier at first glance.
>
> Actually, per-sg has one advantage: it can be used without a virtqueue
> reset (to allow switching between the two modes). But I'm not sure
> whether we have such a requirement.
>
> I think to answer this question, we probably need a real use case (if we
> can come up with a case that is more lightweight than AF_XDP, that would
> be even better).

Sadly, I can't think of other scenarios. Hopefully someone can suggest one.

With per virtqueue, virtio-net will also have to switch to premapped
mode, because the tx queue is shared.

But while implementing this, I ran into a troublesome problem: we need
to record the dma addresses in virtnet. For tx, since an skb may contain
multiple frags, there will be many dma addresses, and unmapping in
virtnet becomes harder because we have to recover those dma addresses.

I think of two ways:

1. Let virtio return the addr of each desc when detached.
2. Allocate a block of memory for each sq/rq to hold the dma address.
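
As a sketch of option 1, the detach path could hand the recorded
addresses back through an out array (stub types with invented names; a
dma_addr_t stand-in replaces the kernel type):

```c
#include <assert.h>
#include <stdint.h>

typedef uint64_t dma_addr_t;	/* stand-in for the kernel type */

/* Minimal stand-in for the ring's per-descriptor bookkeeping. */
struct desc_extra {
	dma_addr_t addr;
	uint32_t len;
	uint16_t next;		/* index of the next desc in the chain */
};

/*
 * Hypothetical detach variant: walk the descriptor chain starting at
 * head and report each DMA address so the driver can unmap premapped
 * buffers itself.  Returns the number of addresses written.
 */
static unsigned int detach_report_addr(const struct desc_extra *extra,
				       unsigned int head, unsigned int num,
				       dma_addr_t *out)
{
	unsigned int i, curr = head;

	for (i = 0; i < num; i++) {
		out[i] = extra[curr].addr;
		curr = extra[curr].next;
	}
	return num;
}
```

This avoids the per-queue array of option 2, at the cost of a wider
detach API.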

Thanks.

>
> Thanks
>
>
> >
> >
> >> So we know something that needs to be mapped by virtio core itself,
> >> e.g the indirect page. Other than this, all the rest could be
> >> pre-mapped.
> >>
> >> For vnet header, it could be mapped by virtio-net which could be still
> >> treated as pre mapped DMA since it's not the virtio ring code.
> >>
> >> Anything I miss here?
> >>
> >> Thanks
> >>
> >>
> >>> Thanks.
> >>>
> >>>> Thanks
> >>>>
> >>>>> v1:
> >>>>>     1. All sgs requested at one time are required to be unified PREDMA, and several
> >>>>>        of them are not supported to be PREDMA
> >>>>>     2. virtio_dma_map() is removed from this patch set and will be submitted
> >>>>>        together with the next time AF_XDP supports virtio dma
> >>>>>     3. Added patch #2 #3 to remove the check for flags when performing unmap
> >>>>>        indirect desc
> >>>>>
> >>>>> Xuan Zhuo (6):
> >>>>>    virtio: rename vring_unmap_state_packed() to
> >>>>>      vring_unmap_extra_packed()
> >>>>>    virtio: remove flags check for unmap split indirect desc
> >>>>>    virtio: remove flags check for unmap packed indirect desc
> >>>>>    virtio: virtqueue_add() support predma
> >>>>>    virtio: split: virtqueue_add_split() support dma address
> >>>>>    virtio: packed: virtqueue_add_packed() support dma address
> >>>>>
> >>>>>   drivers/virtio/virtio_ring.c | 199 ++++++++++++++++++++++-------------
> >>>>>   1 file changed, 126 insertions(+), 73 deletions(-)
> >>>>>
> >>>>> --
> >>>>> 2.31.0
> >>>>>
>


* Re: [PATCH v1 1/6] virtio: rename vring_unmap_state_packed() to vring_unmap_extra_packed()
  2022-02-10  8:51 ` [PATCH v1 1/6] virtio: rename vring_unmap_state_packed() to vring_unmap_extra_packed() Xuan Zhuo
@ 2022-02-23  2:41   ` Jason Wang
  0 siblings, 0 replies; 36+ messages in thread
From: Jason Wang @ 2022-02-23  2:41 UTC (permalink / raw)
  To: Xuan Zhuo, virtualization; +Cc: Michael S. Tsirkin


在 2022/2/10 下午4:51, Xuan Zhuo 写道:
> The actual parameter handled by vring_unmap_state_packed() is a
> vring_desc_extra, so this function should use "extra" instead of "state".
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>


Acked-by: Jason Wang <jasowang@redhat.com>


> ---
>   drivers/virtio/virtio_ring.c | 17 ++++++++---------
>   1 file changed, 8 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index 962f1477b1fa..7cf3ae057833 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -984,24 +984,24 @@ static struct virtqueue *vring_create_virtqueue_split(
>    * Packed ring specific functions - *_packed().
>    */
>   
> -static void vring_unmap_state_packed(const struct vring_virtqueue *vq,
> -				     struct vring_desc_extra *state)
> +static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
> +				     struct vring_desc_extra *extra)
>   {
>   	u16 flags;
>   
>   	if (!vq->use_dma_api)
>   		return;
>   
> -	flags = state->flags;
> +	flags = extra->flags;
>   
>   	if (flags & VRING_DESC_F_INDIRECT) {
>   		dma_unmap_single(vring_dma_dev(vq),
> -				 state->addr, state->len,
> +				 extra->addr, extra->len,
>   				 (flags & VRING_DESC_F_WRITE) ?
>   				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
>   	} else {
>   		dma_unmap_page(vring_dma_dev(vq),
> -			       state->addr, state->len,
> +			       extra->addr, extra->len,
>   			       (flags & VRING_DESC_F_WRITE) ?
>   			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
>   	}
> @@ -1303,8 +1303,7 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
>   	for (n = 0; n < total_sg; n++) {
>   		if (i == err_idx)
>   			break;
> -		vring_unmap_state_packed(vq,
> -					 &vq->packed.desc_extra[curr]);
> +		vring_unmap_extra_packed(vq, &vq->packed.desc_extra[curr]);
>   		curr = vq->packed.desc_extra[curr].next;
>   		i++;
>   		if (i >= vq->packed.vring.num)
> @@ -1383,8 +1382,8 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
>   	if (unlikely(vq->use_dma_api)) {
>   		curr = id;
>   		for (i = 0; i < state->num; i++) {
> -			vring_unmap_state_packed(vq,
> -				&vq->packed.desc_extra[curr]);
> +			vring_unmap_extra_packed(vq,
> +						 &vq->packed.desc_extra[curr]);
>   			curr = vq->packed.desc_extra[curr].next;
>   		}
>   	}



* Re: [PATCH v1 2/6] virtio: remove flags check for unmap split indirect desc
  2022-02-10  8:51 ` [PATCH v1 2/6] virtio: remove flags check for unmap split indirect desc Xuan Zhuo
@ 2022-02-23  2:42   ` Jason Wang
  0 siblings, 0 replies; 36+ messages in thread
From: Jason Wang @ 2022-02-23  2:42 UTC (permalink / raw)
  To: Xuan Zhuo, virtualization; +Cc: Michael S. Tsirkin


在 2022/2/10 下午4:51, Xuan Zhuo 写道:
> When calling vring_unmap_one_split_indirect(), it will not encounter the
> situation where the flags contain VRING_DESC_F_INDIRECT. So remove this
> logic.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>


Acked-by: Jason Wang <jasowang@redhat.com>


> ---
>   drivers/virtio/virtio_ring.c | 18 +++++-------------
>   1 file changed, 5 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index 7cf3ae057833..fadd0a7503e9 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -379,19 +379,11 @@ static void vring_unmap_one_split_indirect(const struct vring_virtqueue *vq,
>   
>   	flags = virtio16_to_cpu(vq->vq.vdev, desc->flags);
>   
> -	if (flags & VRING_DESC_F_INDIRECT) {
> -		dma_unmap_single(vring_dma_dev(vq),
> -				 virtio64_to_cpu(vq->vq.vdev, desc->addr),
> -				 virtio32_to_cpu(vq->vq.vdev, desc->len),
> -				 (flags & VRING_DESC_F_WRITE) ?
> -				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
> -	} else {
> -		dma_unmap_page(vring_dma_dev(vq),
> -			       virtio64_to_cpu(vq->vq.vdev, desc->addr),
> -			       virtio32_to_cpu(vq->vq.vdev, desc->len),
> -			       (flags & VRING_DESC_F_WRITE) ?
> -			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
> -	}
> +	dma_unmap_page(vring_dma_dev(vq),
> +		       virtio64_to_cpu(vq->vq.vdev, desc->addr),
> +		       virtio32_to_cpu(vq->vq.vdev, desc->len),
> +		       (flags & VRING_DESC_F_WRITE) ?
> +		       DMA_FROM_DEVICE : DMA_TO_DEVICE);
>   }
>   
>   static unsigned int vring_unmap_one_split(const struct vring_virtqueue *vq,



* Re: [PATCH v1 3/6] virtio: remove flags check for unmap packed indirect desc
  2022-02-10  8:51 ` [PATCH v1 3/6] virtio: remove flags check for unmap packed " Xuan Zhuo
@ 2022-02-23  2:53   ` Jason Wang
  2022-02-23  8:30     ` Xuan Zhuo
  0 siblings, 1 reply; 36+ messages in thread
From: Jason Wang @ 2022-02-23  2:53 UTC (permalink / raw)
  To: Xuan Zhuo, virtualization; +Cc: Michael S. Tsirkin


在 2022/2/10 下午4:51, Xuan Zhuo 写道:
> When calling vring_unmap_desc_packed(), it will not encounter the
> situation where the flags contain VRING_DESC_F_INDIRECT. So remove this
> logic.


This seems incorrect.

There's a call from detach_buf_packed() that can work for indirect 
descriptors?

Thanks


>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>   drivers/virtio/virtio_ring.c | 18 +++++-------------
>   1 file changed, 5 insertions(+), 13 deletions(-)
>
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index fadd0a7503e9..cfb028ca238e 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -1009,19 +1009,11 @@ static void vring_unmap_desc_packed(const struct vring_virtqueue *vq,
>   
>   	flags = le16_to_cpu(desc->flags);
>   
> -	if (flags & VRING_DESC_F_INDIRECT) {
> -		dma_unmap_single(vring_dma_dev(vq),
> -				 le64_to_cpu(desc->addr),
> -				 le32_to_cpu(desc->len),
> -				 (flags & VRING_DESC_F_WRITE) ?
> -				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
> -	} else {
> -		dma_unmap_page(vring_dma_dev(vq),
> -			       le64_to_cpu(desc->addr),
> -			       le32_to_cpu(desc->len),
> -			       (flags & VRING_DESC_F_WRITE) ?
> -			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
> -	}
> +	dma_unmap_page(vring_dma_dev(vq),
> +		       le64_to_cpu(desc->addr),
> +		       le32_to_cpu(desc->len),
> +		       (flags & VRING_DESC_F_WRITE) ?
> +		       DMA_FROM_DEVICE : DMA_TO_DEVICE);
>   }
>   
>   static struct vring_packed_desc *alloc_indirect_packed(unsigned int total_sg,


* Re: [PATCH v1 0/6] virtio: support advance DMA
  2022-02-22  7:48           ` Xuan Zhuo
@ 2022-02-23  2:58             ` Jason Wang
  2022-02-23  3:44               ` Jason Wang
  0 siblings, 1 reply; 36+ messages in thread
From: Jason Wang @ 2022-02-23  2:58 UTC (permalink / raw)
  To: Xuan Zhuo; +Cc: Michael S. Tsirkin, virtualization

On Tue, Feb 22, 2022 at 4:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> On Tue, 22 Feb 2022 12:02:14 +0800, Jason Wang <jasowang@redhat.com> wrote:
> >
> > On 2022/2/21 7:23 PM, Xuan Zhuo wrote:
> > > On Mon, 21 Feb 2022 11:32:52 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > >> On Fri, Feb 18, 2022 at 5:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > >>> On Thu, 17 Feb 2022 15:19:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > >>>> On Thu, Feb 10, 2022 at 4:51 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > >>>>> virtqueue_add() only supports virtual addresses, dma is completed in
> > >>>>> virtqueue_add().
> > >>>>>
> > >>>>> In some scenarios (such as the AF_XDP scenario), DMA is completed in advance, so
> > >>>>> it is necessary for us to support passing the DMA address to virtqueue_add().
> > >>>> I'd suggest rename this feature as "unmanaged DMA".
> > >>> OK
> > >>>
> > >>>>> Record this predma information in extra->flags, which can be skipped when
> > >>>>> executing dma unmap.
> > >>>> Question still, can we use per-virtqueue flag instead of per
> > >>>> descriptor flag? If my memory is correct, the answer is yes in the
> > >>>> discussion for the previous version.
> > >>>>
> > >>> Yes.
> > >>>
> > >>> per-virtqueue? I guess it should be per-submit.
> > >>>
> > >>> This patch set only adds a flag to desc_extra[head].flags, so that we can know
> > >>> if we need to unmap dma when we detach.
> > >> I meant if we can manage to make it per virtqueue, there's no need to
> > >> maintain per buffer flag.
> > >>
> > > Rethinking this question, I feel there is no essential difference between per
> > > virtqueue and per sgs.
> > >
> > > per virtqueue:
> > >     1. add buf:
> > >             a. check vq->premapped for map every sg
> > >     2. detach:
> > >             a. check vq->premaped for unmap
> > >
> > > per sgs:
> > >     1. add buf:
> > >             a. check function parameter "premapped" for map every sg
> > >             b. add flag to extra[head].flag
> > >
> > >     2. detach:
> > >             a: check extra[head].flag for unmap
> > >
> > >
> > > Thanks.
> >
> >
> > Per-virtqueue is still a little bit easier at the first glance.
> >
> > Actually, per-sg have one advantage: it can be used without virtqueue
> > reset (to allow switching between the two modes). But I'm not sure
> > whether we had such requirements.
> >
> > I think to answer this question, we probably need a real use case (if we
> > can come up with a case that is more lightweight than AF_XDP, that would
> > be even better).
>
> Sadly, I can't think of other scenarios; hopefully someone can suggest one.
>
> For the per-virtqueue approach, virtio-net would also have to switch to
> premapped, because the tx queue is shared.
>
> But in the process of implementing this, I ran into a troublesome problem: we
> need to record the DMA addresses in virtio-net. For tx, since an skb contains
> multiple frags, there are many DMA addresses, and unmapping in virtio-net
> becomes awkward because we have to recover all of those addresses.

Right, actually, we store the dma address in desc_extra, but exposing
it to the driver seems like overkill.
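
To make the bookkeeping concrete, here is a simplified user-space sketch
(the types and field layout are stand-ins, not the actual kernel structs):
the ring core already records every mapping it creates in desc_extra,
which is exactly what a premapped driver would otherwise have to duplicate.

```c
#include <stdint.h>

typedef uint64_t dma_addr_t;

/* simplified stand-in for the per-descriptor shadow state */
struct vring_desc_extra {
	dma_addr_t addr;	/* DMA address the core mapped */
	uint32_t len;
	uint16_t flags;
	uint16_t next;
};

/* the core records the mapping when a buffer is added ... */
static void record_mapping(struct vring_desc_extra *extra,
			   unsigned int i, dma_addr_t addr, uint32_t len)
{
	extra[i].addr = addr;
	extra[i].len = len;
}

/* ... and reads it back from the same table when detaching. */
static dma_addr_t mapped_addr(const struct vring_desc_extra *extra,
			      unsigned int i)
{
	return extra[i].addr;
}
```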

>
> I think of two ways:
>
> 1. Let virtio return the addr of each desc when detached.
> 2. Allocate a block of memory for each sq/rq to hold the dma address.
>
> Thanks.

So it looks to me like having a per-buffer flag is OK. Let me go through
this series.

Thanks

>
> >
> > Thanks
> >
> >
> > >
> > >
> > >> So we know something that needs to be mapped by virtio core itself,
> > >> e.g the indirect page. Other than this, all the rest could be
> > >> pre-mapped.
> > >>
> > >> For vnet header, it could be mapped by virtio-net which could be still
> > >> treated as pre mapped DMA since it's not the virtio ring code.
> > >>
> > >> Anything I miss here?
> > >>
> > >> Thanks
> > >>
> > >>
> > >>> Thanks.
> > >>>
> > >>>> Thanks
> > >>>>
> > >>>>> v1:
> > >>>>>     1. All sgs requested at one time are required to be unified PREDMA, and several
> > >>>>>        of them are not supported to be PREDMA
> > >>>>>     2. virtio_dma_map() is removed from this patch set and will be submitted
> > >>>>>        together with the next time AF_XDP supports virtio dma
> > >>>>>     3. Added patch #2 #3 to remove the check for flags when performing unmap
> > >>>>>        indirect desc
> > >>>>>
> > >>>>> Xuan Zhuo (6):
> > >>>>>    virtio: rename vring_unmap_state_packed() to
> > >>>>>      vring_unmap_extra_packed()
> > >>>>>    virtio: remove flags check for unmap split indirect desc
> > >>>>>    virtio: remove flags check for unmap packed indirect desc
> > >>>>>    virtio: virtqueue_add() support predma
> > >>>>>    virtio: split: virtqueue_add_split() support dma address
> > >>>>>    virtio: packed: virtqueue_add_packed() support dma address
> > >>>>>
> > >>>>>   drivers/virtio/virtio_ring.c | 199 ++++++++++++++++++++++-------------
> > >>>>>   1 file changed, 126 insertions(+), 73 deletions(-)
> > >>>>>
> > >>>>> --
> > >>>>> 2.31.0
> > >>>>>
> >
>


* Re: [PATCH v1 4/6] virtio: virtqueue_add() support predma
  2022-02-10  8:51 ` [PATCH v1 4/6] virtio: virtqueue_add() support predma Xuan Zhuo
@ 2022-02-23  3:00   ` Jason Wang
  2022-02-23  3:02     ` Jason Wang
  0 siblings, 1 reply; 36+ messages in thread
From: Jason Wang @ 2022-02-23  3:00 UTC (permalink / raw)
  To: Xuan Zhuo, virtualization; +Cc: Michael S. Tsirkin


On 2022/2/10 4:51 PM, Xuan Zhuo wrote:
> virtqueue_add() adds a parameter, predma.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>   drivers/virtio/virtio_ring.c | 11 ++++++-----
>   1 file changed, 6 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index cfb028ca238e..cf9d118668f1 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -1780,7 +1780,8 @@ static inline int virtqueue_add(struct virtqueue *_vq,
>   				unsigned int in_sgs,
>   				void *data,
>   				void *ctx,
> -				gfp_t gfp)
> +				gfp_t gfp,
> +				bool predma)


The sg is assumed to carry a DMA address, so I wonder whether "sg_is_phys" is a 
better name?

Thanks


>   {
>   	struct vring_virtqueue *vq = to_vvq(_vq);
>   
> @@ -1821,7 +1822,7 @@ int virtqueue_add_sgs(struct virtqueue *_vq,
>   			total_sg++;
>   	}
>   	return virtqueue_add(_vq, sgs, total_sg, out_sgs, in_sgs,
> -			     data, NULL, gfp);
> +			     data, NULL, gfp, false);
>   }
>   EXPORT_SYMBOL_GPL(virtqueue_add_sgs);
>   
> @@ -1843,7 +1844,7 @@ int virtqueue_add_outbuf(struct virtqueue *vq,
>   			 void *data,
>   			 gfp_t gfp)
>   {
> -	return virtqueue_add(vq, &sg, num, 1, 0, data, NULL, gfp);
> +	return virtqueue_add(vq, &sg, num, 1, 0, data, NULL, gfp, false);
>   }
>   EXPORT_SYMBOL_GPL(virtqueue_add_outbuf);
>   
> @@ -1865,7 +1866,7 @@ int virtqueue_add_inbuf(struct virtqueue *vq,
>   			void *data,
>   			gfp_t gfp)
>   {
> -	return virtqueue_add(vq, &sg, num, 0, 1, data, NULL, gfp);
> +	return virtqueue_add(vq, &sg, num, 0, 1, data, NULL, gfp, false);
>   }
>   EXPORT_SYMBOL_GPL(virtqueue_add_inbuf);
>   
> @@ -1889,7 +1890,7 @@ int virtqueue_add_inbuf_ctx(struct virtqueue *vq,
>   			void *ctx,
>   			gfp_t gfp)
>   {
> -	return virtqueue_add(vq, &sg, num, 0, 1, data, ctx, gfp);
> +	return virtqueue_add(vq, &sg, num, 0, 1, data, ctx, gfp, false);
>   }
>   EXPORT_SYMBOL_GPL(virtqueue_add_inbuf_ctx);
>   


* Re: [PATCH v1 4/6] virtio: virtqueue_add() support predma
  2022-02-23  3:00   ` Jason Wang
@ 2022-02-23  3:02     ` Jason Wang
  0 siblings, 0 replies; 36+ messages in thread
From: Jason Wang @ 2022-02-23  3:02 UTC (permalink / raw)
  To: Xuan Zhuo, virtualization; +Cc: Michael S. Tsirkin

On Wed, Feb 23, 2022 at 11:01 AM Jason Wang <jasowang@redhat.com> wrote:
>
>
> On 2022/2/10 4:51 PM, Xuan Zhuo wrote:
> > virtqueue_add() adds a parameter, predma.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >   drivers/virtio/virtio_ring.c | 11 ++++++-----
> >   1 file changed, 6 insertions(+), 5 deletions(-)
> >
> > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > index cfb028ca238e..cf9d118668f1 100644
> > --- a/drivers/virtio/virtio_ring.c
> > +++ b/drivers/virtio/virtio_ring.c
> > @@ -1780,7 +1780,8 @@ static inline int virtqueue_add(struct virtqueue *_vq,
> >                               unsigned int in_sgs,
> >                               void *data,
> >                               void *ctx,
> > -                             gfp_t gfp)
> > +                             gfp_t gfp,
> > +                             bool predma)
>
>
> The sg is assumed to carry a DMA address, so I wonder whether "sg_is_phys" is a
> better name?

I spoke too fast; I was wrong here. I think we should be consistent,
so "premapped" would be better.

Thanks
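
The calling convention behind the name can be sketched like this (a
simplified user-space model; the struct layout and fake_map() are
stand-ins, not kernel code): with premapped, the sg already carries a
DMA address set up by the caller; otherwise the ring core maps the
buffer itself.

```c
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t dma_addr_t;

/* user-space stand-in; not the kernel scatterlist */
struct scatterlist {
	dma_addr_t dma_address;	/* valid only when premapped */
	uint64_t phys;		/* otherwise the core maps this page */
};

/* pretend IOMMU mapping, just to make the two paths distinguishable */
static dma_addr_t fake_map(uint64_t phys)
{
	return phys | (1ull << 63);
}

/*
 * premapped == true: trust the DMA address the caller already set up;
 * premapped == false: the ring core performs the mapping itself.
 */
static dma_addr_t buf_addr(const struct scatterlist *sg, bool premapped)
{
	return premapped ? sg->dma_address : fake_map(sg->phys);
}
```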

>
> Thanks
>
>
> >   {
> >       struct vring_virtqueue *vq = to_vvq(_vq);
> >
> > @@ -1821,7 +1822,7 @@ int virtqueue_add_sgs(struct virtqueue *_vq,
> >                       total_sg++;
> >       }
> >       return virtqueue_add(_vq, sgs, total_sg, out_sgs, in_sgs,
> > -                          data, NULL, gfp);
> > +                          data, NULL, gfp, false);
> >   }
> >   EXPORT_SYMBOL_GPL(virtqueue_add_sgs);
> >
> > @@ -1843,7 +1844,7 @@ int virtqueue_add_outbuf(struct virtqueue *vq,
> >                        void *data,
> >                        gfp_t gfp)
> >   {
> > -     return virtqueue_add(vq, &sg, num, 1, 0, data, NULL, gfp);
> > +     return virtqueue_add(vq, &sg, num, 1, 0, data, NULL, gfp, false);
> >   }
> >   EXPORT_SYMBOL_GPL(virtqueue_add_outbuf);
> >
> > @@ -1865,7 +1866,7 @@ int virtqueue_add_inbuf(struct virtqueue *vq,
> >                       void *data,
> >                       gfp_t gfp)
> >   {
> > -     return virtqueue_add(vq, &sg, num, 0, 1, data, NULL, gfp);
> > +     return virtqueue_add(vq, &sg, num, 0, 1, data, NULL, gfp, false);
> >   }
> >   EXPORT_SYMBOL_GPL(virtqueue_add_inbuf);
> >
> > @@ -1889,7 +1890,7 @@ int virtqueue_add_inbuf_ctx(struct virtqueue *vq,
> >                       void *ctx,
> >                       gfp_t gfp)
> >   {
> > -     return virtqueue_add(vq, &sg, num, 0, 1, data, ctx, gfp);
> > +     return virtqueue_add(vq, &sg, num, 0, 1, data, ctx, gfp, false);
> >   }
> >   EXPORT_SYMBOL_GPL(virtqueue_add_inbuf_ctx);
> >


* Re: [PATCH v1 5/6] virtio: split: virtqueue_add_split() support dma address
  2022-02-10  8:51 ` [PATCH v1 5/6] virtio: split: virtqueue_add_split() support dma address Xuan Zhuo
@ 2022-02-23  3:38   ` Jason Wang
  0 siblings, 0 replies; 36+ messages in thread
From: Jason Wang @ 2022-02-23  3:38 UTC (permalink / raw)
  To: Xuan Zhuo, virtualization; +Cc: Michael S. Tsirkin


On 2022/2/10 4:51 PM, Xuan Zhuo wrote:
> virtqueue_add_split() only supports virtual addresses; DMA mapping is
> done inside virtqueue_add_split().
>
> In some scenarios (such as the AF_XDP scenario), the memory is allocated
> and DMA is completed in advance, so it is necessary for us to support
> passing the DMA address to virtqueue_add_split().
>
> And record this predma information in extra->flags, which can be skipped
> when executing dma unmap.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>   drivers/virtio/virtio_ring.c | 62 ++++++++++++++++++++++++++++--------
>   1 file changed, 48 insertions(+), 14 deletions(-)
>
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index cf9d118668f1..d32c0bf6016f 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -66,6 +66,9 @@
>   #define LAST_ADD_TIME_INVALID(vq)
>   #endif
>   
> +/* This means the buffer dma is pre-alloc. Just used by vring_desc_extra */
> +#define VIRTIO_DESC_F_PREDMA (1 << 15)


I suggest using a new field in desc_extra to avoid conflicts with future
virtio extensions.
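
One way to read the suggestion (a hypothetical simplified layout, not
part of the patch): rather than stealing bit 15 of the flags word,
which the virtio spec could assign later, keep the driver-private
"premapped" state in its own desc_extra member.

```c
#include <stdbool.h>
#include <stdint.h>

/* hypothetical layout, not the actual kernel struct */
struct vring_desc_extra {
	uint64_t addr;		/* DMA address of the buffer */
	uint32_t len;
	uint16_t flags;		/* mirrors on-ring flags only */
	uint16_t next;
	bool premapped;		/* private: skip dma unmap on detach */
};

/* the detach path consults the private field, not a flag bit */
static bool need_unmap(const struct vring_desc_extra *extra)
{
	return !extra->premapped;
}
```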


> +
>   struct vring_desc_state_split {
>   	void *data;			/* Data for callback. */
>   	struct vring_desc *indir_desc;	/* Indirect descriptor, if any. */
> @@ -387,7 +390,7 @@ static void vring_unmap_one_split_indirect(const struct vring_virtqueue *vq,
>   }
>   
>   static unsigned int vring_unmap_one_split(const struct vring_virtqueue *vq,
> -					  unsigned int i)
> +					  unsigned int i, bool predma)
>   {
>   	struct vring_desc_extra *extra = vq->split.desc_extra;
>   	u16 flags;
> @@ -404,6 +407,9 @@ static unsigned int vring_unmap_one_split(const struct vring_virtqueue *vq,
>   				 (flags & VRING_DESC_F_WRITE) ?
>   				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
>   	} else {
> +		if (predma)
> +			goto out;
> +
>   		dma_unmap_page(vring_dma_dev(vq),
>   			       extra[i].addr,
>   			       extra[i].len,
> @@ -474,7 +480,8 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
>   				      unsigned int in_sgs,
>   				      void *data,
>   				      void *ctx,
> -				      gfp_t gfp)
> +				      gfp_t gfp,
> +				      bool predma)
>   {
>   	struct vring_virtqueue *vq = to_vvq(_vq);
>   	struct scatterlist *sg;
> @@ -535,9 +542,16 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
>   
>   	for (n = 0; n < out_sgs; n++) {
>   		for (sg = sgs[n]; sg; sg = sg_next(sg)) {
> -			dma_addr_t addr = vring_map_one_sg(vq, sg, DMA_TO_DEVICE);
> -			if (vring_mapping_error(vq, addr))
> -				goto unmap_release;
> +			dma_addr_t addr;
> +
> +			if (predma) {
> +				addr = sg_dma_address(sg);
> +
> +			} else {
> +				addr = vring_map_one_sg(vq, sg, DMA_TO_DEVICE);
> +				if (vring_mapping_error(vq, addr))
> +					goto unmap_release;
> +			}
>   
>   			prev = i;
>   			/* Note that we trust indirect descriptor
> @@ -550,9 +564,16 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
>   	}
>   	for (; n < (out_sgs + in_sgs); n++) {
>   		for (sg = sgs[n]; sg; sg = sg_next(sg)) {
> -			dma_addr_t addr = vring_map_one_sg(vq, sg, DMA_FROM_DEVICE);
> -			if (vring_mapping_error(vq, addr))
> -				goto unmap_release;
> +			dma_addr_t addr;
> +
> +			if (predma) {
> +				addr = sg_dma_address(sg);
> +
> +			} else {
> +				addr = vring_map_one_sg(vq, sg, DMA_FROM_DEVICE);
> +				if (vring_mapping_error(vq, addr))
> +					goto unmap_release;
> +			}
>   
>   			prev = i;
>   			/* Note that we trust indirect descriptor
> @@ -602,6 +623,9 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
>   	else
>   		vq->split.desc_state[head].indir_desc = ctx;
>   
> +	if (predma)
> +		vq->split.desc_extra[head].flags |= VIRTIO_DESC_F_PREDMA;
> +
>   	/* Put entry in available array (but don't update avail->idx until they
>   	 * do sync). */
>   	avail = vq->split.avail_idx_shadow & (vq->split.vring.num - 1);
> @@ -626,6 +650,9 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
>   	return 0;
>   
>   unmap_release:
> +	if (predma)
> +		goto skip_unmap;
> +


Nit: we probably need a better name for the label; how about "unmap_free"?

Thanks


>   	err_idx = i;
>   
>   	if (indirect)
> @@ -640,9 +667,10 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
>   			vring_unmap_one_split_indirect(vq, &desc[i]);
>   			i = virtio16_to_cpu(_vq->vdev, desc[i].next);
>   		} else
> -			i = vring_unmap_one_split(vq, i);
> +			i = vring_unmap_one_split(vq, i, false);
>   	}
>   
> +skip_unmap:
>   	if (indirect)
>   		kfree(desc);
>   
> @@ -686,20 +714,24 @@ static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head,
>   {
>   	unsigned int i, j;
>   	__virtio16 nextflag = cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_NEXT);
> +	bool predma = false;
>   
>   	/* Clear data ptr. */
>   	vq->split.desc_state[head].data = NULL;
>   
> +	if (vq->split.desc_extra[head].flags & VIRTIO_DESC_F_PREDMA)
> +		predma = true;
> +
>   	/* Put back on free list: unmap first-level descriptors and find end */
>   	i = head;
>   
>   	while (vq->split.vring.desc[i].flags & nextflag) {
> -		vring_unmap_one_split(vq, i);
> +		vring_unmap_one_split(vq, i, predma);
>   		i = vq->split.desc_extra[i].next;
>   		vq->vq.num_free++;
>   	}
>   
> -	vring_unmap_one_split(vq, i);
> +	vring_unmap_one_split(vq, i, predma);
>   	vq->split.desc_extra[i].next = vq->free_head;
>   	vq->free_head = head;
>   
> @@ -721,8 +753,10 @@ static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head,
>   				VRING_DESC_F_INDIRECT));
>   		BUG_ON(len == 0 || len % sizeof(struct vring_desc));
>   
> -		for (j = 0; j < len / sizeof(struct vring_desc); j++)
> -			vring_unmap_one_split_indirect(vq, &indir_desc[j]);
> +		if (!predma) {
> +			for (j = 0; j < len / sizeof(struct vring_desc); j++)
> +				vring_unmap_one_split_indirect(vq, &indir_desc[j]);
> +		}
>   
>   		kfree(indir_desc);
>   		vq->split.desc_state[head].indir_desc = NULL;
> @@ -1788,7 +1822,7 @@ static inline int virtqueue_add(struct virtqueue *_vq,
>   	return vq->packed_ring ? virtqueue_add_packed(_vq, sgs, total_sg,
>   					out_sgs, in_sgs, data, ctx, gfp) :
>   				 virtqueue_add_split(_vq, sgs, total_sg,
> -					out_sgs, in_sgs, data, ctx, gfp);
> +					out_sgs, in_sgs, data, ctx, gfp, predma);
>   }
>   
>   /**


* Re: [PATCH v1 6/6] virtio: packed: virtqueue_add_packed() support dma address
  2022-02-10  8:51 ` [PATCH v1 6/6] virtio: packed: virtqueue_add_packed() " Xuan Zhuo
@ 2022-02-23  3:43   ` Jason Wang
  0 siblings, 0 replies; 36+ messages in thread
From: Jason Wang @ 2022-02-23  3:43 UTC (permalink / raw)
  To: Xuan Zhuo, virtualization; +Cc: Michael S. Tsirkin


On 2022/2/10 4:51 PM, Xuan Zhuo wrote:
> virtqueue_add_packed() only supports virtual addresses; DMA mapping is
> done inside virtqueue_add_packed().
>
> In some scenarios (such as the AF_XDP scenario), the memory is allocated
> and DMA is completed in advance, so it is necessary for us to support
> passing the DMA address to virtqueue_add_packed().
>
> Record this predma information in extra->flags, which can be skipped
> when executing dma unmap.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>


Except for comments similar to those on the split version, the rest looks good.

Thanks


> ---
>   drivers/virtio/virtio_ring.c | 79 ++++++++++++++++++++++++++----------
>   1 file changed, 57 insertions(+), 22 deletions(-)
>
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index d32c0bf6016f..b8c7697e925d 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -1011,7 +1011,8 @@ static struct virtqueue *vring_create_virtqueue_split(
>    */
>   
>   static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
> -				     struct vring_desc_extra *extra)
> +				     struct vring_desc_extra *extra,
> +				     bool predma)
>   {
>   	u16 flags;
>   
> @@ -1026,6 +1027,9 @@ static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
>   				 (flags & VRING_DESC_F_WRITE) ?
>   				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
>   	} else {
> +		if (predma)
> +			return;
> +
>   		dma_unmap_page(vring_dma_dev(vq),
>   			       extra->addr, extra->len,
>   			       (flags & VRING_DESC_F_WRITE) ?
> @@ -1073,7 +1077,8 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
>   					 unsigned int out_sgs,
>   					 unsigned int in_sgs,
>   					 void *data,
> -					 gfp_t gfp)
> +					 gfp_t gfp,
> +					 bool predma)
>   {
>   	struct vring_packed_desc *desc;
>   	struct scatterlist *sg;
> @@ -1099,10 +1104,15 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
>   
>   	for (n = 0; n < out_sgs + in_sgs; n++) {
>   		for (sg = sgs[n]; sg; sg = sg_next(sg)) {
> -			addr = vring_map_one_sg(vq, sg, n < out_sgs ?
> -					DMA_TO_DEVICE : DMA_FROM_DEVICE);
> -			if (vring_mapping_error(vq, addr))
> -				goto unmap_release;
> +			if (predma) {
> +				addr = sg_dma_address(sg);
> +
> +			} else {
> +				addr = vring_map_one_sg(vq, sg, n < out_sgs ?
> +							DMA_TO_DEVICE : DMA_FROM_DEVICE);
> +				if (vring_mapping_error(vq, addr))
> +					goto unmap_release;
> +			}
>   
>   			desc[i].flags = cpu_to_le16(n < out_sgs ?
>   						0 : VRING_DESC_F_WRITE);
> @@ -1132,6 +1142,9 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
>   						  vq->packed.avail_used_flags;
>   	}
>   
> +	if (predma)
> +		vq->packed.desc_extra[id].flags |= VIRTIO_DESC_F_PREDMA;
> +
>   	/*
>   	 * A driver MUST NOT make the first descriptor in the list
>   	 * available before all subsequent descriptors comprising
> @@ -1170,10 +1183,11 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
>   	return 0;
>   
>   unmap_release:
> -	err_idx = i;
> -
> -	for (i = 0; i < err_idx; i++)
> -		vring_unmap_desc_packed(vq, &desc[i]);
> +	if (!predma) {
> +		err_idx = i;
> +		for (i = 0; i < err_idx; i++)
> +			vring_unmap_desc_packed(vq, &desc[i]);
> +	}
>   
>   	kfree(desc);
>   
> @@ -1188,7 +1202,8 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
>   				       unsigned int in_sgs,
>   				       void *data,
>   				       void *ctx,
> -				       gfp_t gfp)
> +				       gfp_t gfp,
> +				       bool predma)
>   {
>   	struct vring_virtqueue *vq = to_vvq(_vq);
>   	struct vring_packed_desc *desc;
> @@ -1214,7 +1229,7 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
>   
>   	if (virtqueue_use_indirect(_vq, total_sg)) {
>   		err = virtqueue_add_indirect_packed(vq, sgs, total_sg, out_sgs,
> -						    in_sgs, data, gfp);
> +						    in_sgs, data, gfp, predma);
>   		if (err != -ENOMEM) {
>   			END_USE(vq);
>   			return err;
> @@ -1246,10 +1261,17 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
>   	c = 0;
>   	for (n = 0; n < out_sgs + in_sgs; n++) {
>   		for (sg = sgs[n]; sg; sg = sg_next(sg)) {
> -			dma_addr_t addr = vring_map_one_sg(vq, sg, n < out_sgs ?
> -					DMA_TO_DEVICE : DMA_FROM_DEVICE);
> -			if (vring_mapping_error(vq, addr))
> -				goto unmap_release;
> +			dma_addr_t addr;
> +
> +			if (predma) {
> +				addr = sg_dma_address(sg);
> +
> +			} else {
> +				addr = vring_map_one_sg(vq, sg, n < out_sgs ?
> +							DMA_TO_DEVICE : DMA_FROM_DEVICE);
> +				if (vring_mapping_error(vq, addr))
> +					goto unmap_release;
> +			}
>   
>   			flags = cpu_to_le16(vq->packed.avail_used_flags |
>   				    (++c == total_sg ? 0 : VRING_DESC_F_NEXT) |
> @@ -1297,6 +1319,9 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
>   	vq->packed.desc_state[id].indir_desc = ctx;
>   	vq->packed.desc_state[id].last = prev;
>   
> +	if (predma)
> +		vq->packed.desc_extra[id].flags |= VIRTIO_DESC_F_PREDMA;
> +
>   	/*
>   	 * A driver MUST NOT make the first descriptor in the list
>   	 * available before all subsequent descriptors comprising
> @@ -1312,22 +1337,27 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
>   	return 0;
>   
>   unmap_release:
> +	vq->packed.avail_used_flags = avail_used_flags;
> +
> +	if (predma)
> +		goto skip_unmap;
> +
>   	err_idx = i;
>   	i = head;
>   	curr = vq->free_head;
>   
> -	vq->packed.avail_used_flags = avail_used_flags;
> -
>   	for (n = 0; n < total_sg; n++) {
>   		if (i == err_idx)
>   			break;
> -		vring_unmap_extra_packed(vq, &vq->packed.desc_extra[curr]);
> +		vring_unmap_extra_packed(vq, &vq->packed.desc_extra[curr], false);
>   		curr = vq->packed.desc_extra[curr].next;
>   		i++;
>   		if (i >= vq->packed.vring.num)
>   			i = 0;
>   	}
>   
> +skip_unmap:
> +
>   	END_USE(vq);
>   	return -EIO;
>   }
> @@ -1387,9 +1417,13 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
>   	struct vring_desc_state_packed *state = NULL;
>   	struct vring_packed_desc *desc;
>   	unsigned int i, curr;
> +	bool predma = false;
>   
>   	state = &vq->packed.desc_state[id];
>   
> +	if (vq->packed.desc_extra[state->last].flags & VIRTIO_DESC_F_PREDMA)
> +		predma = true;
> +
>   	/* Clear data ptr. */
>   	state->data = NULL;
>   
> @@ -1401,7 +1435,8 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
>   		curr = id;
>   		for (i = 0; i < state->num; i++) {
>   			vring_unmap_extra_packed(vq,
> -						 &vq->packed.desc_extra[curr]);
> +						 &vq->packed.desc_extra[curr],
> +						 predma);
>   			curr = vq->packed.desc_extra[curr].next;
>   		}
>   	}
> @@ -1414,7 +1449,7 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
>   		if (!desc)
>   			return;
>   
> -		if (vq->use_dma_api) {
> +		if (vq->use_dma_api && !predma) {
>   			len = vq->packed.desc_extra[id].len;
>   			for (i = 0; i < len / sizeof(struct vring_packed_desc);
>   					i++)
> @@ -1820,7 +1855,7 @@ static inline int virtqueue_add(struct virtqueue *_vq,
>   	struct vring_virtqueue *vq = to_vvq(_vq);
>   
>   	return vq->packed_ring ? virtqueue_add_packed(_vq, sgs, total_sg,
> -					out_sgs, in_sgs, data, ctx, gfp) :
> +					out_sgs, in_sgs, data, ctx, gfp, predma) :
>   				 virtqueue_add_split(_vq, sgs, total_sg,
>   					out_sgs, in_sgs, data, ctx, gfp, predma);
>   }


* Re: [PATCH v1 0/6] virtio: support advance DMA
  2022-02-23  2:58             ` Jason Wang
@ 2022-02-23  3:44               ` Jason Wang
  0 siblings, 0 replies; 36+ messages in thread
From: Jason Wang @ 2022-02-23  3:44 UTC (permalink / raw)
  To: Xuan Zhuo; +Cc: Michael S. Tsirkin, virtualization

On Wed, Feb 23, 2022 at 10:58 AM Jason Wang <jasowang@redhat.com> wrote:
>
> On Tue, Feb 22, 2022 at 4:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > On Tue, 22 Feb 2022 12:02:14 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > >
> > > On 2022/2/21 7:23 PM, Xuan Zhuo wrote:
> > > > On Mon, 21 Feb 2022 11:32:52 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > >> On Fri, Feb 18, 2022 at 5:00 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > >>> On Thu, 17 Feb 2022 15:19:44 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > >>>> On Thu, Feb 10, 2022 at 4:51 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > >>>>> virtqueue_add() only supports virtual addresses, dma is completed in
> > > >>>>> virtqueue_add().
> > > >>>>>
> > > >>>>> In some scenarios (such as the AF_XDP scenario), DMA is completed in advance, so
> > > >>>>> it is necessary for us to support passing the DMA address to virtqueue_add().
> > > >>>> I'd suggest rename this feature as "unmanaged DMA".
> > > >>> OK
> > > >>>
> > > >>>>> Record this predma information in extra->flags, which can be skipped when
> > > >>>>> executing dma unmap.
> > > >>>> Question still, can we use per-virtqueue flag instead of per
> > > >>>> descriptor flag? If my memory is correct, the answer is yes in the
> > > >>>> discussion for the previous version.
> > > >>>>
> > > >>> Yes.
> > > >>>
> > > >>> per-virtqueue? I guess it should be per-submit.
> > > >>>
> > > >>> This patch set only adds a flag to desc_extra[head].flags, so that we can know
> > > >>> if we need to unmap dma when we detach.
> > > >> I meant if we can manage to make it per virtqueue, there's no need to
> > > >> maintain per buffer flag.
> > > >>
> > > > Rethinking this question, I feel there is no essential difference between per
> > > > virtqueue and per sgs.
> > > >
> > > > per virtqueue:
> > > >     1. add buf:
> > > >             a. check vq->premapped for map every sg
> > > >     2. detach:
> > > >             a. check vq->premaped for unmap
> > > >
> > > > per sgs:
> > > >     1. add buf:
> > > >             a. check function parameter "premapped" for map every sg
> > > >             b. add flag to extra[head].flag
> > > >
> > > >     2. detach:
> > > >             a: check extra[head].flag for unmap
> > > >
> > > >
> > > > Thanks.
> > >
> > >
> > > Per-virtqueue is still a little bit easier at the first glance.
> > >
> > > Actually, per-sg have one advantage: it can be used without virtqueue
> > > reset (to allow switching between the two modes). But I'm not sure
> > > whether we had such requirements.
> > >
> > > I think to answer this question, we probably need a real use case (if we
> > > can come up with a case that is more lightweight than AF_XDP, that would
> > > be even better).
> >
> > Sadly, I can't think of other scenarios; hopefully someone can suggest one.
> >
> > For the per-virtqueue approach, virtio-net would also have to switch to
> > premapped, because the tx queue is shared.
> >
> > But in the process of implementing this, I ran into a troublesome problem: we
> > need to record the DMA addresses in virtio-net. For tx, since an skb contains
> > multiple frags, there are many DMA addresses, and unmapping in virtio-net
> > becomes awkward because we have to recover all of those addresses.
>
> Right, actually, we store the dma address in desc_extra, but exposing
> it to the driver seems like overkill.
>
> >
> > I think of two ways:
> >
> > 1. Let virtio return the addr of each desc when detached.
> > 2. Allocate a block of memory for each sq/rq to hold the dma address.
> >
> > Thanks.
>
> So it looks to me like having a per-buffer flag is OK. Let me go through
> this series.
>

OK, so the series looks good overall, but we need a user. I wonder if
we can convert XDP to use this as an example (and AF_XDP on top).

Thanks

> Thanks
>
> >
> > >
> > > Thanks
> > >
> > >
> > > >
> > > >
> > > >> So we know something that needs to be mapped by virtio core itself,
> > > >> e.g the indirect page. Other than this, all the rest could be
> > > >> pre-mapped.
> > > >>
> > > >> For vnet header, it could be mapped by virtio-net which could be still
> > > >> treated as pre mapped DMA since it's not the virtio ring code.
> > > >>
> > > >> Anything I miss here?
> > > >>
> > > >> Thanks
> > > >>
> > > >>
> > > >>> Thanks.
> > > >>>
> > > >>>> Thanks
> > > >>>>
> > > >>>>> v1:
> > > >>>>>     1. All sgs requested at one time are required to be unified PREDMA, and several
> > > >>>>>        of them are not supported to be PREDMA
> > > >>>>>     2. virtio_dma_map() is removed from this patch set and will be submitted
> > > >>>>>        together with the next time AF_XDP supports virtio dma
> > > >>>>>     3. Added patch #2 #3 to remove the check for flags when performing unmap
> > > >>>>>        indirect desc
> > > >>>>>
> > > >>>>> Xuan Zhuo (6):
> > > >>>>>    virtio: rename vring_unmap_state_packed() to
> > > >>>>>      vring_unmap_extra_packed()
> > > >>>>>    virtio: remove flags check for unmap split indirect desc
> > > >>>>>    virtio: remove flags check for unmap packed indirect desc
> > > >>>>>    virtio: virtqueue_add() support predma
> > > >>>>>    virtio: split: virtqueue_add_split() support dma address
> > > >>>>>    virtio: packed: virtqueue_add_packed() support dma address
> > > >>>>>
> > > >>>>>   drivers/virtio/virtio_ring.c | 199 ++++++++++++++++++++++-------------
> > > >>>>>   1 file changed, 126 insertions(+), 73 deletions(-)
> > > >>>>>
> > > >>>>> --
> > > >>>>> 2.31.0
> > > >>>>>
> > >
> >

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v1 3/6] virtio: remove flags check for unmap packed indirect desc
  2022-02-23  2:53   ` Jason Wang
@ 2022-02-23  8:30     ` Xuan Zhuo
  0 siblings, 0 replies; 36+ messages in thread
From: Xuan Zhuo @ 2022-02-23  8:30 UTC (permalink / raw)
  To: Jason Wang; +Cc: virtualization, Michael S. Tsirkin

On Wed, 23 Feb 2022 10:53:26 +0800, Jason Wang <jasowang@redhat.com> wrote:
>
> 在 2022/2/10 下午4:51, Xuan Zhuo 写道:
> > When vring_unmap_desc_packed() is called, the flags will never contain
> > VRING_DESC_F_INDIRECT, so remove this logic.
>
>
> This seems not correct.
>
> There's a call from detach_buf_packed() that can work for indirect
> descriptors?
>

Yes, it is called for indirect descriptors, but those descriptors themselves
do not contain VRING_DESC_F_INDIRECT.

The flags that do contain VRING_DESC_F_INDIRECT are
vq->packed.desc_extra[id].flags, and those are handled by
vring_unmap_state_packed().

Thanks.
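
To restate the split as a toy userspace model (my own illustration, not the
kernel code): VRING_DESC_F_INDIRECT can only appear in the per-id
desc_extra flags, which cover the mapping of the indirect table itself
(done with dma_map_single()); the descriptors inside that table describe
sg pages, so vring_unmap_desc_packed() can unconditionally use
dma_unmap_page():

```c
#include <stdint.h>

/* Flag values as defined by the virtio spec. */
#define VRING_DESC_F_WRITE    2
#define VRING_DESC_F_INDIRECT 4

/* Returns the name of the unmap helper the flags would select: a toy
 * model of the vring_unmap_state_packed() / vring_unmap_desc_packed()
 * split.  desc_extra[id].flags may carry INDIRECT; in-table descriptor
 * flags never do. */
static const char *unmap_helper(uint16_t flags)
{
	if (flags & VRING_DESC_F_INDIRECT)
		return "dma_unmap_single";	/* the indirect table */
	return "dma_unmap_page";		/* an sg page */
}
```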

> Thanks
>
>
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >   drivers/virtio/virtio_ring.c | 18 +++++-------------
> >   1 file changed, 5 insertions(+), 13 deletions(-)
> >
> > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > index fadd0a7503e9..cfb028ca238e 100644
> > --- a/drivers/virtio/virtio_ring.c
> > +++ b/drivers/virtio/virtio_ring.c
> > @@ -1009,19 +1009,11 @@ static void vring_unmap_desc_packed(const struct vring_virtqueue *vq,
> >
> >   	flags = le16_to_cpu(desc->flags);
> >
> > -	if (flags & VRING_DESC_F_INDIRECT) {
> > -		dma_unmap_single(vring_dma_dev(vq),
> > -				 le64_to_cpu(desc->addr),
> > -				 le32_to_cpu(desc->len),
> > -				 (flags & VRING_DESC_F_WRITE) ?
> > -				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
> > -	} else {
> > -		dma_unmap_page(vring_dma_dev(vq),
> > -			       le64_to_cpu(desc->addr),
> > -			       le32_to_cpu(desc->len),
> > -			       (flags & VRING_DESC_F_WRITE) ?
> > -			       DMA_FROM_DEVICE : DMA_TO_DEVICE);
> > -	}
> > +	dma_unmap_page(vring_dma_dev(vq),
> > +		       le64_to_cpu(desc->addr),
> > +		       le32_to_cpu(desc->len),
> > +		       (flags & VRING_DESC_F_WRITE) ?
> > +		       DMA_FROM_DEVICE : DMA_TO_DEVICE);
> >   }
> >
> >   static struct vring_packed_desc *alloc_indirect_packed(unsigned int total_sg,
>

^ permalink raw reply	[flat|nested] 36+ messages in thread

end of thread, other threads:[~2022-02-23  8:33 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-02-10  8:51 [PATCH v1 0/6] virtio: support advance DMA Xuan Zhuo
2022-02-10  8:51 ` [PATCH v1 1/6] virtio: rename vring_unmap_state_packed() to vring_unmap_extra_packed() Xuan Zhuo
2022-02-23  2:41   ` Jason Wang
2022-02-10  8:51 ` [PATCH v1 2/6] virtio: remove flags check for unmap split indirect desc Xuan Zhuo
2022-02-23  2:42   ` Jason Wang
2022-02-10  8:51 ` [PATCH v1 3/6] virtio: remove flags check for unmap packed " Xuan Zhuo
2022-02-23  2:53   ` Jason Wang
2022-02-23  8:30     ` Xuan Zhuo
2022-02-10  8:51 ` [PATCH v1 4/6] virtio: virtqueue_add() support predma Xuan Zhuo
2022-02-23  3:00   ` Jason Wang
2022-02-23  3:02     ` Jason Wang
2022-02-10  8:51 ` [PATCH v1 5/6] virtio: split: virtqueue_add_split() support dma address Xuan Zhuo
2022-02-23  3:38   ` Jason Wang
2022-02-10  8:51 ` [PATCH v1 6/6] virtio: packed: virtqueue_add_packed() " Xuan Zhuo
2022-02-23  3:43   ` Jason Wang
2022-02-17  7:19 ` [PATCH v1 0/6] virtio: support advance DMA Jason Wang
2022-02-18  8:55   ` Xuan Zhuo
2022-02-18  9:24     ` Michael S. Tsirkin
2022-02-18  9:29       ` Xuan Zhuo
2022-02-21  1:36       ` Jason Wang
2022-02-21  3:32     ` Jason Wang
2022-02-21  3:39       ` Xuan Zhuo
2022-02-21  3:53         ` Jason Wang
2022-02-21  5:59           ` Xuan Zhuo
2022-02-21  6:18             ` Xuan Zhuo
2022-02-21  6:37               ` Jason Wang
2022-02-21  6:41                 ` Xuan Zhuo
2022-02-21  6:55                   ` Jason Wang
2022-02-21  7:46                     ` Xuan Zhuo
2022-02-21  6:46                 ` Xuan Zhuo
2022-02-21  6:56                   ` Jason Wang
2022-02-21 11:23       ` Xuan Zhuo
2022-02-22  4:02         ` Jason Wang
2022-02-22  7:48           ` Xuan Zhuo
2022-02-23  2:58             ` Jason Wang
2022-02-23  3:44               ` Jason Wang
