* [PATCH vhost v3 0/4] virtio_net: rx enable premapped mode by default
@ 2024-04-24  8:16 Xuan Zhuo
  2024-04-24  8:16 ` [PATCH vhost v3 1/4] virtio_ring: enable premapped mode whatever use_dma_api Xuan Zhuo
                   ` (4 more replies)
  0 siblings, 5 replies; 10+ messages in thread
From: Xuan Zhuo @ 2024-04-24  8:16 UTC (permalink / raw)
  To: virtualization
  Cc: Michael S. Tsirkin, Jason Wang, Xuan Zhuo, David S. Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev

Actually, for the virtio drivers, we can enable premapped mode regardless
of the value of use_dma_api, because we provide the virtio DMA APIs.
So the driver can enable premapped mode unconditionally.
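For illustration, the idea can be sketched in user-space C (all names here
are made up for the sketch, not the kernel API): the wrapper maps through
the DMA API when the core uses it, and otherwise hands the device the
address as-is, so a premapped driver works either way.

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for the device-visible address type. */
typedef uint64_t dma_addr_t;

/* Minimal stand-in for a virtqueue: all we model is whether the
 * virtio core goes through the DMA API. */
struct vq {
    bool use_dma_api;
};

/* Fake "real" DMA mapping, e.g. through an IOMMU: the device-visible
 * address differs from the CPU address. */
static dma_addr_t dma_api_map(void *cpu_addr)
{
    return (dma_addr_t)(uintptr_t)cpu_addr ^ 0xffff000000000000ULL;
}

/* Model of a virtio-style DMA wrapper: the driver always calls this,
 * and it does the right thing whatever use_dma_api is -- which is why
 * premapped mode no longer needs to depend on use_dma_api. */
static dma_addr_t vq_dma_map(struct vq *vq, void *cpu_addr)
{
    if (vq->use_dma_api)
        return dma_api_map(cpu_addr);
    /* No DMA API: the device uses the address as-is. */
    return (dma_addr_t)(uintptr_t)cpu_addr;
}
```

Either way the driver sees one uniform mapping call, which is the premise
of patch 1 below.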

This patch set makes the big mode of virtio-net support premapped mode,
and enables premapped mode for rx by default.

Based on the following points, we do not use page pool to manage these
    pages:

    1. virtio-net uses the DMA APIs wrapped by virtio core. Therefore,
       we can only prevent the page pool from performing DMA operations, and
       let the driver perform DMA operations on the allocated pages.
    2. But when the page pool releases the page, we have no chance to
       execute dma unmap.
    3. A solution to #2 is to execute dma unmap every time before putting
       the page back to the page pool. (This is actually a waste, we don't
       execute unmap so frequently.)
    4. But there is another problem, we still need to use page.dma_addr to
       save the dma address. Using page.dma_addr while using page pool is
       unsafe behavior.
    5. And we need space to chain the pages submitted at once to the virtio core.

    More:
        https://lore.kernel.org/all/CACGkMEu=Aok9z2imB_c5qVuujSh=vjj1kx12fy9N7hqyi+M5Ow@mail.gmail.com/

Why do we not use the page space to store the dma address?
    http://lore.kernel.org/all/CACGkMEuyeJ9mMgYnnB42=hw6umNuo=agn7VBqBqYPd7GN=+39Q@mail.gmail.com
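
What the driver does instead of a page pool -- a small driver-owned header
at the head of each rx page with a reference count, so the page is mapped
once and unmapped only when every buffer from it has been returned -- can
be sketched in user-space C (made-up names; malloc and counters stand in
for page allocation and the DMA API):

```c
#include <stdint.h>
#include <stdlib.h>

/* Counters so the map-once-per-page behaviour is observable. */
static int map_calls, unmap_calls;

/* Header the driver keeps at the start of each rx page (points 4/5):
 * this space belongs to the driver, not to any page pool. */
struct rq_dma {
    uint64_t addr;   /* device address for the rest of the page */
    int ref;         /* buffers still outstanding on this page  */
};

/* Page-fragment allocator state, as in the driver's alloc_frag. */
struct frag {
    char *page;      /* current page, NULL until first use */
    size_t offset;   /* next free byte in the page         */
    size_t size;     /* page size                          */
};

static uint64_t fake_map(void *p)      { map_calls++; return (uintptr_t)p; }
static void     fake_unmap(uint64_t a) { (void)a; unmap_calls++; }

/* Hand out one rx buffer; DMA-map only when a fresh page is started. */
static char *rq_alloc(struct frag *f, size_t len, struct rq_dma **dmap)
{
    struct rq_dma *dma;
    char *buf;

    if (!f->page || f->offset + len > f->size) {
        f->page = malloc(f->size);
        dma = (struct rq_dma *)f->page;
        dma->addr = fake_map(f->page + sizeof(*dma));
        dma->ref = 0;
        f->offset = sizeof(*dma);
    }
    dma = (struct rq_dma *)f->page;
    dma->ref++;                 /* one reference per outstanding buffer */
    buf = f->page + f->offset;
    f->offset += len;
    *dmap = dma;
    return buf;
}

/* Return a buffer; unmap (and free) only when the whole page is done. */
static void rq_put(struct rq_dma *dma)
{
    if (--dma->ref == 0) {
        fake_unmap(dma->addr);
        free(dma);              /* dma sits at the head of the page */
    }
}
```

The point of the sketch: map/unmap happen per page, not per buffer, and
the refcount lives in space the driver owns -- which a page pool would
not allow without the per-recycle unmap described in point 3.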

Please review.

v3:
    1. big mode still uses the mode where the virtio core does the dma map/unmap

v2:
    1. make gcc happy in page_chain_get_dma()
        http://lore.kernel.org/all/202404221325.SX5ChRGP-lkp@intel.com

v1:
    1. discussed for using page pool
    2. use dma sync to replace the unmap for the first page

Thanks.




Xuan Zhuo (4):
  virtio_ring: enable premapped mode whatever use_dma_api
  virtio_net: big mode skip the unmap check
  virtio_net: rx remove premapped failover code
  virtio_net: remove the misleading comment

 drivers/net/virtio_net.c     | 90 +++++++++++++++---------------------
 drivers/virtio/virtio_ring.c |  7 +--
 2 files changed, 38 insertions(+), 59 deletions(-)

--
2.32.0.3.g01195cf9f



* [PATCH vhost v3 1/4] virtio_ring: enable premapped mode whatever use_dma_api
  2024-04-24  8:16 [PATCH vhost v3 0/4] virtio_net: rx enable premapped mode by default Xuan Zhuo
@ 2024-04-24  8:16 ` Xuan Zhuo
  2024-04-24  8:16 ` [PATCH vhost v3 2/4] virtio_net: big mode skip the unmap check Xuan Zhuo
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: Xuan Zhuo @ 2024-04-24  8:16 UTC (permalink / raw)
  To: virtualization
  Cc: Michael S. Tsirkin, Jason Wang, Xuan Zhuo, David S. Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev

Now that we have the virtio DMA APIs, the driver can use premapped
mode regardless of whether the virtio core uses the DMA API or not.

So remove the limit of checking use_dma_api from
virtqueue_set_dma_premapped().

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 drivers/virtio/virtio_ring.c | 7 +------
 1 file changed, 1 insertion(+), 6 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 70de1a9a81a3..a939104d551f 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -2730,7 +2730,7 @@ EXPORT_SYMBOL_GPL(virtqueue_resize);
  *
  * Returns zero or a negative error.
  * 0: success.
- * -EINVAL: vring does not use the dma api, so we can not enable premapped mode.
+ * -EINVAL: the vq is in use.
  */
 int virtqueue_set_dma_premapped(struct virtqueue *_vq)
 {
@@ -2746,11 +2746,6 @@ int virtqueue_set_dma_premapped(struct virtqueue *_vq)
 		return -EINVAL;
 	}
 
-	if (!vq->use_dma_api) {
-		END_USE(vq);
-		return -EINVAL;
-	}
-
 	vq->premapped = true;
 	vq->do_unmap = false;
 
-- 
2.32.0.3.g01195cf9f



* [PATCH vhost v3 2/4] virtio_net: big mode skip the unmap check
  2024-04-24  8:16 [PATCH vhost v3 0/4] virtio_net: rx enable premapped mode by default Xuan Zhuo
  2024-04-24  8:16 ` [PATCH vhost v3 1/4] virtio_ring: enable premapped mode whatever use_dma_api Xuan Zhuo
@ 2024-04-24  8:16 ` Xuan Zhuo
  2024-04-25  2:11   ` Jason Wang
  2024-04-24  8:16 ` [PATCH vhost v3 3/4] virtio_net: rx remove premapped failover code Xuan Zhuo
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 10+ messages in thread
From: Xuan Zhuo @ 2024-04-24  8:16 UTC (permalink / raw)
  To: virtualization
  Cc: Michael S. Tsirkin, Jason Wang, Xuan Zhuo, David S. Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev

The virtio-net big mode did not enable premapped mode, so we did
not need to check the unmap there. And a subsequent commit will
remove the failover code that handles a failure to enable premapped
mode for the mergeable and small modes. So we need to remove the
do_dma check from the big mode path.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index c22d1118a133..16d84c95779c 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -820,7 +820,7 @@ static void virtnet_rq_unmap_free_buf(struct virtqueue *vq, void *buf)
 
 	rq = &vi->rq[i];
 
-	if (rq->do_dma)
+	if (!vi->big_packets || vi->mergeable_rx_bufs)
 		virtnet_rq_unmap(rq, buf, 0);
 
 	virtnet_rq_free_buf(vi, rq, buf);
@@ -2128,7 +2128,7 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
 		}
 	} else {
 		while (packets < budget &&
-		       (buf = virtnet_rq_get_buf(rq, &len, NULL)) != NULL) {
+		       (buf = virtqueue_get_buf(rq->vq, &len)) != NULL) {
 			receive_buf(vi, rq, buf, len, NULL, xdp_xmit, &stats);
 			packets++;
 		}
-- 
2.32.0.3.g01195cf9f



* [PATCH vhost v3 3/4] virtio_net: rx remove premapped failover code
  2024-04-24  8:16 [PATCH vhost v3 0/4] virtio_net: rx enable premapped mode by default Xuan Zhuo
  2024-04-24  8:16 ` [PATCH vhost v3 1/4] virtio_ring: enable premapped mode whatever use_dma_api Xuan Zhuo
  2024-04-24  8:16 ` [PATCH vhost v3 2/4] virtio_net: big mode skip the unmap check Xuan Zhuo
@ 2024-04-24  8:16 ` Xuan Zhuo
  2024-04-24  8:16 ` [PATCH vhost v3 4/4] virtio_net: remove the misleading comment Xuan Zhuo
  2024-04-26 11:00 ` [PATCH vhost v3 0/4] virtio_net: rx enable premapped mode by default Paolo Abeni
  4 siblings, 0 replies; 10+ messages in thread
From: Xuan Zhuo @ 2024-04-24  8:16 UTC (permalink / raw)
  To: virtualization
  Cc: Michael S. Tsirkin, Jason Wang, Xuan Zhuo, David S. Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev

Now, the premapped mode can be enabled unconditionally.

So we can remove the failover code for the mergeable and small modes.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 drivers/net/virtio_net.c | 85 +++++++++++++++++-----------------------
 1 file changed, 35 insertions(+), 50 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 16d84c95779c..a4b924ba18d3 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -213,9 +213,6 @@ struct receive_queue {
 
 	/* Record the last dma info to free after new pages is allocated. */
 	struct virtnet_rq_dma *last_dma;
-
-	/* Do dma by self */
-	bool do_dma;
 };
 
 /* This structure can contain rss message with maximum settings for indirection table and keysize
@@ -707,7 +704,7 @@ static void *virtnet_rq_get_buf(struct receive_queue *rq, u32 *len, void **ctx)
 	void *buf;
 
 	buf = virtqueue_get_buf_ctx(rq->vq, len, ctx);
-	if (buf && rq->do_dma)
+	if (buf)
 		virtnet_rq_unmap(rq, buf, *len);
 
 	return buf;
@@ -720,11 +717,6 @@ static void virtnet_rq_init_one_sg(struct receive_queue *rq, void *buf, u32 len)
 	u32 offset;
 	void *head;
 
-	if (!rq->do_dma) {
-		sg_init_one(rq->sg, buf, len);
-		return;
-	}
-
 	head = page_address(rq->alloc_frag.page);
 
 	offset = buf - head;
@@ -750,44 +742,42 @@ static void *virtnet_rq_alloc(struct receive_queue *rq, u32 size, gfp_t gfp)
 
 	head = page_address(alloc_frag->page);
 
-	if (rq->do_dma) {
-		dma = head;
-
-		/* new pages */
-		if (!alloc_frag->offset) {
-			if (rq->last_dma) {
-				/* Now, the new page is allocated, the last dma
-				 * will not be used. So the dma can be unmapped
-				 * if the ref is 0.
-				 */
-				virtnet_rq_unmap(rq, rq->last_dma, 0);
-				rq->last_dma = NULL;
-			}
+	dma = head;
 
-			dma->len = alloc_frag->size - sizeof(*dma);
+	/* new pages */
+	if (!alloc_frag->offset) {
+		if (rq->last_dma) {
+			/* Now, the new page is allocated, the last dma
+			 * will not be used. So the dma can be unmapped
+			 * if the ref is 0.
+			 */
+			virtnet_rq_unmap(rq, rq->last_dma, 0);
+			rq->last_dma = NULL;
+		}
 
-			addr = virtqueue_dma_map_single_attrs(rq->vq, dma + 1,
-							      dma->len, DMA_FROM_DEVICE, 0);
-			if (virtqueue_dma_mapping_error(rq->vq, addr))
-				return NULL;
+		dma->len = alloc_frag->size - sizeof(*dma);
 
-			dma->addr = addr;
-			dma->need_sync = virtqueue_dma_need_sync(rq->vq, addr);
+		addr = virtqueue_dma_map_single_attrs(rq->vq, dma + 1,
+						      dma->len, DMA_FROM_DEVICE, 0);
+		if (virtqueue_dma_mapping_error(rq->vq, addr))
+			return NULL;
 
-			/* Add a reference to dma to prevent the entire dma from
-			 * being released during error handling. This reference
-			 * will be freed after the pages are no longer used.
-			 */
-			get_page(alloc_frag->page);
-			dma->ref = 1;
-			alloc_frag->offset = sizeof(*dma);
+		dma->addr = addr;
+		dma->need_sync = virtqueue_dma_need_sync(rq->vq, addr);
 
-			rq->last_dma = dma;
-		}
+		/* Add a reference to dma to prevent the entire dma from
+		 * being released during error handling. This reference
+		 * will be freed after the pages are no longer used.
+		 */
+		get_page(alloc_frag->page);
+		dma->ref = 1;
+		alloc_frag->offset = sizeof(*dma);
 
-		++dma->ref;
+		rq->last_dma = dma;
 	}
 
+	++dma->ref;
+
 	buf = head + alloc_frag->offset;
 
 	get_page(alloc_frag->page);
@@ -804,12 +794,9 @@ static void virtnet_rq_set_premapped(struct virtnet_info *vi)
 	if (!vi->mergeable_rx_bufs && vi->big_packets)
 		return;
 
-	for (i = 0; i < vi->max_queue_pairs; i++) {
-		if (virtqueue_set_dma_premapped(vi->rq[i].vq))
-			continue;
-
-		vi->rq[i].do_dma = true;
-	}
+	for (i = 0; i < vi->max_queue_pairs; i++)
+		/* error never happen */
+		BUG_ON(virtqueue_set_dma_premapped(vi->rq[i].vq));
 }
 
 static void virtnet_rq_unmap_free_buf(struct virtqueue *vq, void *buf)
@@ -1881,8 +1868,7 @@ static int add_recvbuf_small(struct virtnet_info *vi, struct receive_queue *rq,
 
 	err = virtqueue_add_inbuf_ctx(rq->vq, rq->sg, 1, buf, ctx, gfp);
 	if (err < 0) {
-		if (rq->do_dma)
-			virtnet_rq_unmap(rq, buf, 0);
+		virtnet_rq_unmap(rq, buf, 0);
 		put_page(virt_to_head_page(buf));
 	}
 
@@ -1996,8 +1982,7 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi,
 	ctx = mergeable_len_to_ctx(len + room, headroom);
 	err = virtqueue_add_inbuf_ctx(rq->vq, rq->sg, 1, buf, ctx, gfp);
 	if (err < 0) {
-		if (rq->do_dma)
-			virtnet_rq_unmap(rq, buf, 0);
+		virtnet_rq_unmap(rq, buf, 0);
 		put_page(virt_to_head_page(buf));
 	}
 
@@ -4271,7 +4256,7 @@ static void free_receive_page_frags(struct virtnet_info *vi)
 	int i;
 	for (i = 0; i < vi->max_queue_pairs; i++)
 		if (vi->rq[i].alloc_frag.page) {
-			if (vi->rq[i].do_dma && vi->rq[i].last_dma)
+			if (vi->rq[i].last_dma)
 				virtnet_rq_unmap(&vi->rq[i], vi->rq[i].last_dma, 0);
 			put_page(vi->rq[i].alloc_frag.page);
 		}
-- 
2.32.0.3.g01195cf9f



* [PATCH vhost v3 4/4] virtio_net: remove the misleading comment
  2024-04-24  8:16 [PATCH vhost v3 0/4] virtio_net: rx enable premapped mode by default Xuan Zhuo
                   ` (2 preceding siblings ...)
  2024-04-24  8:16 ` [PATCH vhost v3 3/4] virtio_net: rx remove premapped failover code Xuan Zhuo
@ 2024-04-24  8:16 ` Xuan Zhuo
  2024-04-26 11:00 ` [PATCH vhost v3 0/4] virtio_net: rx enable premapped mode by default Paolo Abeni
  4 siblings, 0 replies; 10+ messages in thread
From: Xuan Zhuo @ 2024-04-24  8:16 UTC (permalink / raw)
  To: virtualization
  Cc: Michael S. Tsirkin, Jason Wang, Xuan Zhuo, David S. Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev

We actually call build_skb() without copying data, so the
comment is misleading. Remove it.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
Acked-by: Jason Wang <jasowang@redhat.com>
---
 drivers/net/virtio_net.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index a4b924ba18d3..3e8694837a29 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -600,7 +600,6 @@ static struct sk_buff *page_to_skb(struct virtnet_info *vi,
 
 	shinfo_size = SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
 
-	/* copy small packet so we can reuse these pages */
 	if (!NET_IP_ALIGN && len > GOOD_COPY_LEN && tailroom >= shinfo_size) {
 		skb = virtnet_build_skb(buf, truesize, p - buf, len);
 		if (unlikely(!skb))
-- 
2.32.0.3.g01195cf9f



* Re: [PATCH vhost v3 2/4] virtio_net: big mode skip the unmap check
  2024-04-24  8:16 ` [PATCH vhost v3 2/4] virtio_net: big mode skip the unmap check Xuan Zhuo
@ 2024-04-25  2:11   ` Jason Wang
  2024-04-25  2:35     ` Xuan Zhuo
  0 siblings, 1 reply; 10+ messages in thread
From: Jason Wang @ 2024-04-25  2:11 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: virtualization, Michael S. Tsirkin, David S. Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev

On Wed, Apr 24, 2024 at 4:17 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> The virtio-net big mode did not enable premapped mode,
> so we did not need to check the unmap. And the subsequent
> commit will remove the failover code for failing enable
> premapped for merge and small mode. So we need to remove
> the checking do_dma code in the big mode path.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/net/virtio_net.c | 4 ++--
>  1 file changed, 2 insertions(+), 2 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index c22d1118a133..16d84c95779c 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -820,7 +820,7 @@ static void virtnet_rq_unmap_free_buf(struct virtqueue *vq, void *buf)
>
>         rq = &vi->rq[i];
>
> -       if (rq->do_dma)
> +       if (!vi->big_packets || vi->mergeable_rx_bufs)

This seems to be equivalent to

if (!vi->big_packets)


>                 virtnet_rq_unmap(rq, buf, 0);
>
>         virtnet_rq_free_buf(vi, rq, buf);
> @@ -2128,7 +2128,7 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
>                 }
>         } else {
>                 while (packets < budget &&
> -                      (buf = virtnet_rq_get_buf(rq, &len, NULL)) != NULL) {
> +                      (buf = virtqueue_get_buf(rq->vq, &len)) != NULL) {
>                         receive_buf(vi, rq, buf, len, NULL, xdp_xmit, &stats);
>                         packets++;
>                 }

Other part looks good.

Thanks

> --
> 2.32.0.3.g01195cf9f
>



* Re: [PATCH vhost v3 2/4] virtio_net: big mode skip the unmap check
  2024-04-25  2:11   ` Jason Wang
@ 2024-04-25  2:35     ` Xuan Zhuo
  2024-04-25  6:04       ` Jason Wang
  0 siblings, 1 reply; 10+ messages in thread
From: Xuan Zhuo @ 2024-04-25  2:35 UTC (permalink / raw)
  To: Jason Wang
  Cc: virtualization, Michael S. Tsirkin, David S. Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev

On Thu, 25 Apr 2024 10:11:55 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Wed, Apr 24, 2024 at 4:17 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > The virtio-net big mode did not enable premapped mode,
> > so we did not need to check the unmap. And the subsequent
> > commit will remove the failover code for failing enable
> > premapped for merge and small mode. So we need to remove
> > the checking do_dma code in the big mode path.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  drivers/net/virtio_net.c | 4 ++--
> >  1 file changed, 2 insertions(+), 2 deletions(-)
> >
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index c22d1118a133..16d84c95779c 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -820,7 +820,7 @@ static void virtnet_rq_unmap_free_buf(struct virtqueue *vq, void *buf)
> >
> >         rq = &vi->rq[i];
> >
> > -       if (rq->do_dma)
> > +       if (!vi->big_packets || vi->mergeable_rx_bufs)
>
> This seems to be equivalent to
>
> if (!vi->big_packets)


If VIRTIO_NET_F_MRG_RXBUF and the guest GSO features coexist,
big_packets and mergeable_rx_bufs are both true.
!vi->big_packets only means small mode.

Did I miss something?

Thanks.
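
To make the three cases concrete, the two candidate conditions can be
spelled out in a stand-alone sketch (flag names follow the driver fields;
this is illustration, not driver code):

```c
#include <stdbool.h>

/* The three rx configurations virtio-net can end up in:
 *   small:     !big_packets && !mergeable_rx_bufs
 *   big:        big_packets && !mergeable_rx_bufs
 *   mergeable:  mergeable_rx_bufs true; big_packets may ALSO be true
 *               when guest GSO features are negotiated with MRG_RXBUF.
 */
struct mode {
    bool big_packets;
    bool mergeable_rx_bufs;
};

/* Condition from the patch: unmap for everything except big mode. */
static bool patch_cond(struct mode m)
{
    return !m.big_packets || m.mergeable_rx_bufs;
}

/* The suggested simplification. */
static bool simpler_cond(struct mode m)
{
    return !m.big_packets;
}
```

The two differ exactly in the MRG_RXBUF-plus-GSO case, which is mergeable
mode even though big_packets is set.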


>
>
> >                 virtnet_rq_unmap(rq, buf, 0);
> >
> >         virtnet_rq_free_buf(vi, rq, buf);
> > @@ -2128,7 +2128,7 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
> >                 }
> >         } else {
> >                 while (packets < budget &&
> > -                      (buf = virtnet_rq_get_buf(rq, &len, NULL)) != NULL) {
> > +                      (buf = virtqueue_get_buf(rq->vq, &len)) != NULL) {
> >                         receive_buf(vi, rq, buf, len, NULL, xdp_xmit, &stats);
> >                         packets++;
> >                 }
>
> Other part looks good.
>
> Thanks
>
> > --
> > 2.32.0.3.g01195cf9f
> >
>


* Re: [PATCH vhost v3 2/4] virtio_net: big mode skip the unmap check
  2024-04-25  2:35     ` Xuan Zhuo
@ 2024-04-25  6:04       ` Jason Wang
  0 siblings, 0 replies; 10+ messages in thread
From: Jason Wang @ 2024-04-25  6:04 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: virtualization, Michael S. Tsirkin, David S. Miller,
	Eric Dumazet, Jakub Kicinski, Paolo Abeni, netdev

On Thu, Apr 25, 2024 at 10:37 AM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> On Thu, 25 Apr 2024 10:11:55 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > On Wed, Apr 24, 2024 at 4:17 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > >
> > > The virtio-net big mode did not enable premapped mode,
> > > so we did not need to check the unmap. And the subsequent
> > > commit will remove the failover code for failing enable
> > > premapped for merge and small mode. So we need to remove
> > > the checking do_dma code in the big mode path.
> > >
> > > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > > ---
> > >  drivers/net/virtio_net.c | 4 ++--
> > >  1 file changed, 2 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > index c22d1118a133..16d84c95779c 100644
> > > --- a/drivers/net/virtio_net.c
> > > +++ b/drivers/net/virtio_net.c
> > > @@ -820,7 +820,7 @@ static void virtnet_rq_unmap_free_buf(struct virtqueue *vq, void *buf)
> > >
> > >         rq = &vi->rq[i];
> > >
> > > -       if (rq->do_dma)
> > > +       if (!vi->big_packets || vi->mergeable_rx_bufs)
> >
> > This seems to be equivalent to
> >
> > if (!vi->big_packets)
>
>
> If VIRTIO_NET_F_MRG_RXBUF and guest_gso are coexisting,
> big_packets and mergeable_rx_bufs are all true.
> !vi->big_packets only means the small.
>
> Did I miss something?

Nope, you are right.

big_packets is kind of misleading, as it doesn't mean big mode.

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks

>
> Thanks.
>
>
> >
> >
> > >                 virtnet_rq_unmap(rq, buf, 0);
> > >
> > >         virtnet_rq_free_buf(vi, rq, buf);
> > > @@ -2128,7 +2128,7 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
> > >                 }
> > >         } else {
> > >                 while (packets < budget &&
> > > -                      (buf = virtnet_rq_get_buf(rq, &len, NULL)) != NULL) {
> > > +                      (buf = virtqueue_get_buf(rq->vq, &len)) != NULL) {
> > >                         receive_buf(vi, rq, buf, len, NULL, xdp_xmit, &stats);
> > >                         packets++;
> > >                 }
> >
> > Other part looks good.
> >
> > Thanks
> >
> > > --
> > > 2.32.0.3.g01195cf9f
> > >
> >
>



* Re: [PATCH vhost v3 0/4] virtio_net: rx enable premapped mode by default
  2024-04-24  8:16 [PATCH vhost v3 0/4] virtio_net: rx enable premapped mode by default Xuan Zhuo
                   ` (3 preceding siblings ...)
  2024-04-24  8:16 ` [PATCH vhost v3 4/4] virtio_net: remove the misleading comment Xuan Zhuo
@ 2024-04-26 11:00 ` Paolo Abeni
  2024-04-28  1:18   ` Xuan Zhuo
  4 siblings, 1 reply; 10+ messages in thread
From: Paolo Abeni @ 2024-04-26 11:00 UTC (permalink / raw)
  To: Xuan Zhuo, virtualization
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Eric Dumazet,
	Jakub Kicinski, netdev

On Wed, 2024-04-24 at 16:16 +0800, Xuan Zhuo wrote:
> Actually, for the virtio drivers, we can enable premapped mode whatever
> the value of use_dma_api. Because we provide the virtio dma apis.
> So the driver can enable premapped mode unconditionally.
> 
> This patch set makes the big mode of virtio-net to support premapped mode.
> And enable premapped mode for rx by default.
> 
> Based on the following points, we do not use page pool to manage these
>     pages:
> 
>     1. virtio-net uses the DMA APIs wrapped by virtio core. Therefore,
>        we can only prevent the page pool from performing DMA operations, and
>        let the driver perform DMA operations on the allocated pages.
>     2. But when the page pool releases the page, we have no chance to
>        execute dma unmap.
>     3. A solution to #2 is to execute dma unmap every time before putting
>        the page back to the page pool. (This is actually a waste, we don't
>        execute unmap so frequently.)
>     4. But there is another problem, we still need to use page.dma_addr to
>        save the dma address. Using page.dma_addr while using page pool is
>        unsafe behavior.
>     5. And we need space the chain the pages submitted once to virtio core.
> 
>     More:
>         https://lore.kernel.org/all/CACGkMEu=Aok9z2imB_c5qVuujSh=vjj1kx12fy9N7hqyi+M5Ow@mail.gmail.com/
> 
> Why we do not use the page space to store the dma?
>     http://lore.kernel.org/all/CACGkMEuyeJ9mMgYnnB42=hw6umNuo=agn7VBqBqYPd7GN=+39Q@mail.gmail.com
> 
> Please review.
> 
> v3:
>     1. big mode still use the mode that virtio core does the dma map/unmap
> 
> v2:
>     1. make gcc happy in page_chain_get_dma()
>         http://lore.kernel.org/all/202404221325.SX5ChRGP-lkp@intel.com
> 
> v1:
>     1. discussed for using page pool
>     2. use dma sync to replace the unmap for the first page

Judging by the subj prefix, this is targeting the vhost tree, right? 

There are a few patches landing on virtio_net in net-next; I guess
there will be some conflicts while pushing to Linux (but I haven't
double-checked yet!)

Perhaps you could provide a stable git branch so that both vhost and
netdev could pull this set?

Thanks!

Paolo



* Re: [PATCH vhost v3 0/4] virtio_net: rx enable premapped mode by default
  2024-04-26 11:00 ` [PATCH vhost v3 0/4] virtio_net: rx enable premapped mode by default Paolo Abeni
@ 2024-04-28  1:18   ` Xuan Zhuo
  0 siblings, 0 replies; 10+ messages in thread
From: Xuan Zhuo @ 2024-04-28  1:18 UTC (permalink / raw)
  To: Paolo Abeni
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Eric Dumazet,
	Jakub Kicinski, netdev, virtualization

On Fri, 26 Apr 2024 13:00:08 +0200, Paolo Abeni <pabeni@redhat.com> wrote:
> On Wed, 2024-04-24 at 16:16 +0800, Xuan Zhuo wrote:
> > Actually, for the virtio drivers, we can enable premapped mode whatever
> > the value of use_dma_api. Because we provide the virtio dma apis.
> > So the driver can enable premapped mode unconditionally.
> >
> > This patch set makes the big mode of virtio-net to support premapped mode.
> > And enable premapped mode for rx by default.
> >
> > Based on the following points, we do not use page pool to manage these
> >     pages:
> >
> >     1. virtio-net uses the DMA APIs wrapped by virtio core. Therefore,
> >        we can only prevent the page pool from performing DMA operations, and
> >        let the driver perform DMA operations on the allocated pages.
> >     2. But when the page pool releases the page, we have no chance to
> >        execute dma unmap.
> >     3. A solution to #2 is to execute dma unmap every time before putting
> >        the page back to the page pool. (This is actually a waste, we don't
> >        execute unmap so frequently.)
> >     4. But there is another problem, we still need to use page.dma_addr to
> >        save the dma address. Using page.dma_addr while using page pool is
> >        unsafe behavior.
> >     5. And we need space the chain the pages submitted once to virtio core.
> >
> >     More:
> >         https://lore.kernel.org/all/CACGkMEu=Aok9z2imB_c5qVuujSh=vjj1kx12fy9N7hqyi+M5Ow@mail.gmail.com/
> >
> > Why we do not use the page space to store the dma?
> >     http://lore.kernel.org/all/CACGkMEuyeJ9mMgYnnB42=hw6umNuo=agn7VBqBqYPd7GN=+39Q@mail.gmail.com
> >
> > Please review.
> >
> > v3:
> >     1. big mode still use the mode that virtio core does the dma map/unmap
> >
> > v2:
> >     1. make gcc happy in page_chain_get_dma()
> >         http://lore.kernel.org/all/202404221325.SX5ChRGP-lkp@intel.com
> >
> > v1:
> >     1. discussed for using page pool
> >     2. use dma sync to replace the unmap for the first page
>
> Judging by the subj prefix, this is targeting the vhost tree, right?
>
> There are a few patches landing on virtio_net on net-next, I guess
> there will be some conflict while pushing to Linux (but I haven't
> double check yet!)
>
> Perhaps you could provide a stable git branch so that both vhost and
> netdev could pull this set?

This patch set is related to the virtio core, so I pushed it to
the vhost branch. And there is no new feature for virtio-net.
In similar situations in the past, we pushed to the vhost branch.

I am OK with either net-next or vhost. We can hear from others.

@Michael

Thanks.

>
> Thanks!
>
> Paolo
>

