virtualization.lists.linux-foundation.org archive mirror
* [PATCH 00/33] virtio-net: support AF_XDP zero copy
@ 2023-02-02 11:00 Xuan Zhuo
  2023-02-02 11:00 ` [PATCH 01/33] virtio_ring: virtqueue_add() support premapped Xuan Zhuo
                   ` (35 more replies)
  0 siblings, 36 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

XDP socket (AF_XDP) is an excellent kernel-bypass networking framework. The
zero-copy feature of xsk (XDP socket) needs support from the driver, and the
resulting performance is very good. mlx5 and Intel ixgbe already support this
feature. This patch set allows virtio-net to support xsk's zero-copy xmit
feature.

Virtio-net did not support per-queue reset, so it was impossible to support
XDP socket zero copy. The per-queue reset work has now been completed in both
the virtio spec and the kernel, so it is time for virtio-net to add support
for XDP socket zero copy.

Virtio-net cannot add queues at will, so xsk shares queues with the
kernel.

In addition, virtio-net does not support raising an interrupt manually, so
the TX wakeup path needs some tricks. If TX NAPI last ran on a different
CPU, an IPI is used to wake up NAPI on that remote CPU; if it last ran on
the local CPU, softirqd is woken up instead.

Please review.

Thanks.


Xuan Zhuo (33):
  virtio_ring: virtqueue_add() support premapped
  virtio_ring: split: virtqueue_add_split() support premapped
  virtio_ring: packed: virtqueue_add_packed() support premapped
  virtio_ring: introduce virtqueue_add_outbuf_premapped()
  virtio_ring: introduce virtqueue_add_inbuf_premapped()
  virtio_ring: introduce virtqueue_reset()
  virtio_ring: add api virtio_dma_map() for advance dma
  virtio_ring: introduce dma sync api for virtio
  xsk: xsk_buff_pool add callback for dma_sync
  xsk: support virtio DMA map
  virtio_net: rename free_old_xmit_skbs to free_old_xmit
  virtio_net: unify the code for recycling the xmit ptr
  virtio_net: virtnet_poll_tx support rescheduled
  virtio_net: independent directory
  virtio_net: move to virtio_net.h
  virtio_net: introduce virtnet_xdp_handler() to separate the logic of
    running xdp
  virtio_net: receive_small() use virtnet_xdp_handler()
  virtio_net: receive_mergeable() use virtnet_xdp_handler()
  virtio_net: introduce virtnet_tx_reset()
  virtio_net: xsk: introduce virtnet_rq_bind_xsk_pool()
  virtio_net: xsk: introduce virtnet_xsk_pool_enable()
  virtio_net: xsk: introduce xsk disable
  virtio_net: xsk: support xsk setup
  virtio_net: xsk: stop disable tx napi
  virtio_net: xsk: __free_old_xmit distinguishes xsk buffer
  virtio_net: virtnet_sq_free_unused_buf() check xsk buffer
  virtio_net: virtnet_rq_free_unused_buf() check xsk buffer
  net: introduce napi_tx_raise()
  virtio_net: xsk: tx: support tx
  virtio_net: xsk: tx: support wakeup
  virtio_net: xsk: tx: auto wakeup when free old xmit
  virtio_net: xsk: rx: introduce add_recvbuf_xsk()
  virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer

 MAINTAINERS                                 |   2 +-
 drivers/net/Kconfig                         |   8 +-
 drivers/net/Makefile                        |   2 +-
 drivers/net/virtio/Kconfig                  |  11 +
 drivers/net/virtio/Makefile                 |   8 +
 drivers/net/{virtio_net.c => virtio/main.c} | 564 +++++++-------------
 drivers/net/virtio/virtio_net.h             | 317 +++++++++++
 drivers/net/virtio/xsk.c                    | 524 ++++++++++++++++++
 drivers/net/virtio/xsk.h                    |  33 ++
 drivers/virtio/virtio_ring.c                | 376 +++++++++++--
 include/linux/netdevice.h                   |   7 +
 include/linux/virtio.h                      |  29 +
 include/net/xsk_buff_pool.h                 |   6 +
 net/core/dev.c                              |  11 +
 net/xdp/xsk_buff_pool.c                     |  79 ++-
 15 files changed, 1541 insertions(+), 436 deletions(-)
 create mode 100644 drivers/net/virtio/Kconfig
 create mode 100644 drivers/net/virtio/Makefile
 rename drivers/net/{virtio_net.c => virtio/main.c} (92%)
 create mode 100644 drivers/net/virtio/virtio_net.h
 create mode 100644 drivers/net/virtio/xsk.c
 create mode 100644 drivers/net/virtio/xsk.h

-- 
2.32.0.3.g01195cf9f

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 76+ messages in thread

* [PATCH 01/33] virtio_ring: virtqueue_add() support premapped
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-02 11:00 ` [PATCH 02/33] virtio_ring: split: virtqueue_add_split() " Xuan Zhuo
                   ` (34 subsequent siblings)
  35 siblings, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

Add a premapped parameter to virtqueue_add(). All existing callers pass
false.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_ring.c | 9 +++++----
 1 file changed, 5 insertions(+), 4 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 723c4e29e1d3..c9f194c86aec 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -2087,6 +2087,7 @@ static inline int virtqueue_add(struct virtqueue *_vq,
 				unsigned int in_sgs,
 				void *data,
 				void *ctx,
+				bool premapped,
 				gfp_t gfp)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
@@ -2128,7 +2129,7 @@ int virtqueue_add_sgs(struct virtqueue *_vq,
 			total_sg++;
 	}
 	return virtqueue_add(_vq, sgs, total_sg, out_sgs, in_sgs,
-			     data, NULL, gfp);
+			     data, NULL, false, gfp);
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_sgs);
 
@@ -2150,7 +2151,7 @@ int virtqueue_add_outbuf(struct virtqueue *vq,
 			 void *data,
 			 gfp_t gfp)
 {
-	return virtqueue_add(vq, &sg, num, 1, 0, data, NULL, gfp);
+	return virtqueue_add(vq, &sg, num, 1, 0, data, NULL, false, gfp);
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_outbuf);
 
@@ -2172,7 +2173,7 @@ int virtqueue_add_inbuf(struct virtqueue *vq,
 			void *data,
 			gfp_t gfp)
 {
-	return virtqueue_add(vq, &sg, num, 0, 1, data, NULL, gfp);
+	return virtqueue_add(vq, &sg, num, 0, 1, data, NULL, false, gfp);
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_inbuf);
 
@@ -2196,7 +2197,7 @@ int virtqueue_add_inbuf_ctx(struct virtqueue *vq,
 			void *ctx,
 			gfp_t gfp)
 {
-	return virtqueue_add(vq, &sg, num, 0, 1, data, ctx, gfp);
+	return virtqueue_add(vq, &sg, num, 0, 1, data, ctx, false, gfp);
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_inbuf_ctx);
 
-- 
2.32.0.3.g01195cf9f

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 02/33] virtio_ring: split: virtqueue_add_split() support premapped
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
  2023-02-02 11:00 ` [PATCH 01/33] virtio_ring: virtqueue_add() support premapped Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-02 11:00 ` [PATCH 03/33] virtio_ring: packed: virtqueue_add_packed() " Xuan Zhuo
                   ` (33 subsequent siblings)
  35 siblings, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

virtqueue_add_split() only supports virtual addresses; the DMA mapping is
done inside virtqueue_add_split() itself.

In some scenarios (such as AF_XDP), the memory is allocated and DMA-mapped
in advance, so virtqueue_add_split() needs to support being passed a DMA
address by the caller.

Record whether the buffer was premapped in desc_state, so that the unmap
can be skipped for premapped buffers when the descriptor is detached.
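The premapped branch can be sketched as a userspace toy model (illustrative
names only, not the kernel code): a premapped scatterlist entry already
carries a valid DMA address, so the ring takes sg->dma_address directly and
must also skip the unmap later.

```c
#include <assert.h>
#include <stdbool.h>
#include <stdint.h>

typedef uint64_t dma_addr_t;

/* Toy scatterlist entry: dma_address is valid only when premapped. */
struct toy_sg {
	dma_addr_t dma_address;
	uint64_t page;		/* stand-in for the page to be mapped */
};

/* Pretend IOMMU mapping: offset the "page" into a bus address. */
static dma_addr_t toy_map_one_sg(const struct toy_sg *sg)
{
	return sg->page + 0x100000;
}

/* Models the branch added to virtqueue_add_split(): take the caller's
 * DMA address when premapped, otherwise map the buffer ourselves. */
static dma_addr_t toy_pick_addr(const struct toy_sg *sg, bool premapped)
{
	return premapped ? sg->dma_address : toy_map_one_sg(sg);
}
```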

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_ring.c | 55 +++++++++++++++++++++++++++---------
 1 file changed, 42 insertions(+), 13 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index c9f194c86aec..ec622403cbd5 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -70,6 +70,7 @@
 struct vring_desc_state_split {
 	void *data;			/* Data for callback. */
 	struct vring_desc *indir_desc;	/* Indirect descriptor, if any. */
+	bool premapped;
 };
 
 struct vring_desc_state_packed {
@@ -434,7 +435,7 @@ static void vring_unmap_one_split_indirect(const struct vring_virtqueue *vq,
 }
 
 static unsigned int vring_unmap_one_split(const struct vring_virtqueue *vq,
-					  unsigned int i)
+					  unsigned int i, bool premapped)
 {
 	struct vring_desc_extra *extra = vq->split.desc_extra;
 	u16 flags;
@@ -451,6 +452,9 @@ static unsigned int vring_unmap_one_split(const struct vring_virtqueue *vq,
 				 (flags & VRING_DESC_F_WRITE) ?
 				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
 	} else {
+		if (premapped)
+			goto out;
+
 		dma_unmap_page(vring_dma_dev(vq),
 			       extra[i].addr,
 			       extra[i].len,
@@ -521,6 +525,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 				      unsigned int in_sgs,
 				      void *data,
 				      void *ctx,
+				      bool premapped,
 				      gfp_t gfp)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
@@ -582,9 +587,16 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 
 	for (n = 0; n < out_sgs; n++) {
 		for (sg = sgs[n]; sg; sg = sg_next(sg)) {
-			dma_addr_t addr = vring_map_one_sg(vq, sg, DMA_TO_DEVICE);
-			if (vring_mapping_error(vq, addr))
-				goto unmap_release;
+			dma_addr_t addr;
+
+			if (premapped) {
+				addr = sg_dma_address(sg);
+
+			} else {
+				addr = vring_map_one_sg(vq, sg, DMA_TO_DEVICE);
+				if (vring_mapping_error(vq, addr))
+					goto unmap_release;
+			}
 
 			prev = i;
 			/* Note that we trust indirect descriptor
@@ -597,9 +609,16 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 	}
 	for (; n < (out_sgs + in_sgs); n++) {
 		for (sg = sgs[n]; sg; sg = sg_next(sg)) {
-			dma_addr_t addr = vring_map_one_sg(vq, sg, DMA_FROM_DEVICE);
-			if (vring_mapping_error(vq, addr))
-				goto unmap_release;
+			dma_addr_t addr;
+
+			if (premapped) {
+				addr = sg_dma_address(sg);
+
+			} else {
+				addr = vring_map_one_sg(vq, sg, DMA_FROM_DEVICE);
+				if (vring_mapping_error(vq, addr))
+					goto unmap_release;
+			}
 
 			prev = i;
 			/* Note that we trust indirect descriptor
@@ -644,6 +663,7 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 
 	/* Store token and indirect buffer state. */
 	vq->split.desc_state[head].data = data;
+	vq->split.desc_state[head].premapped = premapped;
 	if (indirect)
 		vq->split.desc_state[head].indir_desc = desc;
 	else
@@ -673,6 +693,9 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 	return 0;
 
 unmap_release:
+	if (premapped)
+		goto unmap_free;
+
 	err_idx = i;
 
 	if (indirect)
@@ -687,9 +710,10 @@ static inline int virtqueue_add_split(struct virtqueue *_vq,
 			vring_unmap_one_split_indirect(vq, &desc[i]);
 			i = virtio16_to_cpu(_vq->vdev, desc[i].next);
 		} else
-			i = vring_unmap_one_split(vq, i);
+			i = vring_unmap_one_split(vq, i, false);
 	}
 
+unmap_free:
 	if (indirect)
 		kfree(desc);
 
@@ -733,20 +757,23 @@ static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head,
 {
 	unsigned int i, j;
 	__virtio16 nextflag = cpu_to_virtio16(vq->vq.vdev, VRING_DESC_F_NEXT);
+	bool premapped;
 
 	/* Clear data ptr. */
 	vq->split.desc_state[head].data = NULL;
 
+	premapped = vq->split.desc_state[head].premapped;
+
 	/* Put back on free list: unmap first-level descriptors and find end */
 	i = head;
 
 	while (vq->split.vring.desc[i].flags & nextflag) {
-		vring_unmap_one_split(vq, i);
+		vring_unmap_one_split(vq, i, premapped);
 		i = vq->split.desc_extra[i].next;
 		vq->vq.num_free++;
 	}
 
-	vring_unmap_one_split(vq, i);
+	vring_unmap_one_split(vq, i, premapped);
 	vq->split.desc_extra[i].next = vq->free_head;
 	vq->free_head = head;
 
@@ -768,8 +795,10 @@ static void detach_buf_split(struct vring_virtqueue *vq, unsigned int head,
 				VRING_DESC_F_INDIRECT));
 		BUG_ON(len == 0 || len % sizeof(struct vring_desc));
 
-		for (j = 0; j < len / sizeof(struct vring_desc); j++)
-			vring_unmap_one_split_indirect(vq, &indir_desc[j]);
+		if (!premapped) {
+			for (j = 0; j < len / sizeof(struct vring_desc); j++)
+				vring_unmap_one_split_indirect(vq, &indir_desc[j]);
+		}
 
 		kfree(indir_desc);
 		vq->split.desc_state[head].indir_desc = NULL;
@@ -2095,7 +2124,7 @@ static inline int virtqueue_add(struct virtqueue *_vq,
 	return vq->packed_ring ? virtqueue_add_packed(_vq, sgs, total_sg,
 					out_sgs, in_sgs, data, ctx, gfp) :
 				 virtqueue_add_split(_vq, sgs, total_sg,
-					out_sgs, in_sgs, data, ctx, gfp);
+					out_sgs, in_sgs, data, ctx, premapped, gfp);
 }
 
 /**
-- 
2.32.0.3.g01195cf9f

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 03/33] virtio_ring: packed: virtqueue_add_packed() support premapped
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
  2023-02-02 11:00 ` [PATCH 01/33] virtio_ring: virtqueue_add() support premapped Xuan Zhuo
  2023-02-02 11:00 ` [PATCH 02/33] virtio_ring: split: virtqueue_add_split() " Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-03  9:16   ` Michael S. Tsirkin
  2023-02-02 11:00 ` [PATCH 04/33] virtio_ring: introduce virtqueue_add_outbuf_premapped() Xuan Zhuo
                   ` (32 subsequent siblings)
  35 siblings, 1 reply; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

virtqueue_add_packed() only supports virtual addresses; the DMA mapping is
done inside virtqueue_add_packed() itself.

In some scenarios (such as AF_XDP), the memory is allocated and DMA-mapped
in advance, so virtqueue_add_packed() needs to support being passed a DMA
address by the caller.

Record whether the buffer was premapped in desc_state, so that the unmap
can be skipped for premapped buffers when the descriptor is detached.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_ring.c | 71 +++++++++++++++++++++++++-----------
 1 file changed, 50 insertions(+), 21 deletions(-)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index ec622403cbd5..25027a35fcf8 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -78,6 +78,7 @@ struct vring_desc_state_packed {
 	struct vring_packed_desc *indir_desc; /* Indirect descriptor, if any. */
 	u16 num;			/* Descriptor list length. */
 	u16 last;			/* The last desc state in a list. */
+	bool premapped;
 };
 
 struct vring_desc_extra {
@@ -1200,7 +1201,8 @@ static inline u16 packed_last_used(u16 last_used_idx)
 }
 
 static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
-				     struct vring_desc_extra *extra)
+				     struct vring_desc_extra *extra,
+				     bool premapped)
 {
 	u16 flags;
 
@@ -1215,6 +1217,9 @@ static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
 				 (flags & VRING_DESC_F_WRITE) ?
 				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
 	} else {
+		if (premapped)
+			return;
+
 		dma_unmap_page(vring_dma_dev(vq),
 			       extra->addr, extra->len,
 			       (flags & VRING_DESC_F_WRITE) ?
@@ -1262,7 +1267,8 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 					 unsigned int out_sgs,
 					 unsigned int in_sgs,
 					 void *data,
-					 gfp_t gfp)
+					 gfp_t gfp,
+					 bool premapped)
 {
 	struct vring_packed_desc *desc;
 	struct scatterlist *sg;
@@ -1288,10 +1294,15 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 
 	for (n = 0; n < out_sgs + in_sgs; n++) {
 		for (sg = sgs[n]; sg; sg = sg_next(sg)) {
-			addr = vring_map_one_sg(vq, sg, n < out_sgs ?
-					DMA_TO_DEVICE : DMA_FROM_DEVICE);
-			if (vring_mapping_error(vq, addr))
-				goto unmap_release;
+			if (premapped) {
+				addr = sg_dma_address(sg);
+
+			} else {
+				addr = vring_map_one_sg(vq, sg, n < out_sgs ?
+							DMA_TO_DEVICE : DMA_FROM_DEVICE);
+				if (vring_mapping_error(vq, addr))
+					goto unmap_release;
+			}
 
 			desc[i].flags = cpu_to_le16(n < out_sgs ?
 						0 : VRING_DESC_F_WRITE);
@@ -1350,6 +1361,7 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 	vq->packed.desc_state[id].data = data;
 	vq->packed.desc_state[id].indir_desc = desc;
 	vq->packed.desc_state[id].last = id;
+	vq->packed.desc_state[id].premapped = premapped;
 
 	vq->num_added += 1;
 
@@ -1359,10 +1371,11 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
 	return 0;
 
 unmap_release:
-	err_idx = i;
-
-	for (i = 0; i < err_idx; i++)
-		vring_unmap_desc_packed(vq, &desc[i]);
+	if (!premapped) {
+		err_idx = i;
+		for (i = 0; i < err_idx; i++)
+			vring_unmap_desc_packed(vq, &desc[i]);
+	}
 
 	kfree(desc);
 
@@ -1377,6 +1390,7 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
 				       unsigned int in_sgs,
 				       void *data,
 				       void *ctx,
+				       bool premapped,
 				       gfp_t gfp)
 {
 	struct vring_virtqueue *vq = to_vvq(_vq);
@@ -1403,7 +1417,7 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
 
 	if (virtqueue_use_indirect(vq, total_sg)) {
 		err = virtqueue_add_indirect_packed(vq, sgs, total_sg, out_sgs,
-						    in_sgs, data, gfp);
+						    in_sgs, data, gfp, premapped);
 		if (err != -ENOMEM) {
 			END_USE(vq);
 			return err;
@@ -1435,10 +1449,17 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
 	c = 0;
 	for (n = 0; n < out_sgs + in_sgs; n++) {
 		for (sg = sgs[n]; sg; sg = sg_next(sg)) {
-			dma_addr_t addr = vring_map_one_sg(vq, sg, n < out_sgs ?
-					DMA_TO_DEVICE : DMA_FROM_DEVICE);
-			if (vring_mapping_error(vq, addr))
-				goto unmap_release;
+			dma_addr_t addr;
+
+			if (premapped) {
+				addr = sg_dma_address(sg);
+
+			} else {
+				addr = vring_map_one_sg(vq, sg, n < out_sgs ?
+							DMA_TO_DEVICE : DMA_FROM_DEVICE);
+				if (vring_mapping_error(vq, addr))
+					goto unmap_release;
+			}
 
 			flags = cpu_to_le16(vq->packed.avail_used_flags |
 				    (++c == total_sg ? 0 : VRING_DESC_F_NEXT) |
@@ -1485,6 +1506,7 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
 	vq->packed.desc_state[id].data = data;
 	vq->packed.desc_state[id].indir_desc = ctx;
 	vq->packed.desc_state[id].last = prev;
+	vq->packed.desc_state[id].premapped = premapped;
 
 	/*
 	 * A driver MUST NOT make the first descriptor in the list
@@ -1501,22 +1523,26 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
 	return 0;
 
 unmap_release:
+	vq->packed.avail_used_flags = avail_used_flags;
+
+	if (premapped)
+		goto unmap_free;
+
 	err_idx = i;
 	i = head;
 	curr = vq->free_head;
 
-	vq->packed.avail_used_flags = avail_used_flags;
-
 	for (n = 0; n < total_sg; n++) {
 		if (i == err_idx)
 			break;
-		vring_unmap_extra_packed(vq, &vq->packed.desc_extra[curr]);
+		vring_unmap_extra_packed(vq, &vq->packed.desc_extra[curr], false);
 		curr = vq->packed.desc_extra[curr].next;
 		i++;
 		if (i >= vq->packed.vring.num)
 			i = 0;
 	}
 
+unmap_free:
 	END_USE(vq);
 	return -EIO;
 }
@@ -1576,8 +1602,10 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
 	struct vring_desc_state_packed *state = NULL;
 	struct vring_packed_desc *desc;
 	unsigned int i, curr;
+	bool premapped;
 
 	state = &vq->packed.desc_state[id];
+	premapped = state->premapped;
 
 	/* Clear data ptr. */
 	state->data = NULL;
@@ -1590,7 +1618,8 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
 		curr = id;
 		for (i = 0; i < state->num; i++) {
 			vring_unmap_extra_packed(vq,
-						 &vq->packed.desc_extra[curr]);
+						 &vq->packed.desc_extra[curr],
+						 premapped);
 			curr = vq->packed.desc_extra[curr].next;
 		}
 	}
@@ -1603,7 +1632,7 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
 		if (!desc)
 			return;
 
-		if (vq->use_dma_api) {
+		if (vq->use_dma_api && !premapped) {
 			len = vq->packed.desc_extra[id].len;
 			for (i = 0; i < len / sizeof(struct vring_packed_desc);
 					i++)
@@ -2122,7 +2151,7 @@ static inline int virtqueue_add(struct virtqueue *_vq,
 	struct vring_virtqueue *vq = to_vvq(_vq);
 
 	return vq->packed_ring ? virtqueue_add_packed(_vq, sgs, total_sg,
-					out_sgs, in_sgs, data, ctx, gfp) :
+					out_sgs, in_sgs, data, ctx, premapped, gfp) :
 				 virtqueue_add_split(_vq, sgs, total_sg,
 					out_sgs, in_sgs, data, ctx, premapped, gfp);
 }
-- 
2.32.0.3.g01195cf9f

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 04/33] virtio_ring: introduce virtqueue_add_outbuf_premapped()
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (2 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 03/33] virtio_ring: packed: virtqueue_add_packed() " Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-02 11:00 ` [PATCH 05/33] virtio_ring: introduce virtqueue_add_inbuf_premapped() Xuan Zhuo
                   ` (31 subsequent siblings)
  35 siblings, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

Introduce virtqueue_add_outbuf_premapped() to submit premapped sgs.
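The wrapper layering can be sketched with a toy dispatcher (illustrative
names, not the kernel implementation): both outbuf variants funnel into one
common add routine, differing only in the premapped flag, just as the inbuf
variants do with out_sgs/in_sgs swapped.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy record of what the common add routine would receive. */
struct toy_add_args {
	unsigned int out_sgs, in_sgs;
	bool premapped;
};

static struct toy_add_args toy_add(unsigned int out_sgs,
				   unsigned int in_sgs, bool premapped)
{
	return (struct toy_add_args){ out_sgs, in_sgs, premapped };
}

/* Models virtqueue_add_outbuf(): one readable sg list, not premapped. */
static struct toy_add_args toy_add_outbuf(void)
{
	return toy_add(1, 0, false);
}

/* Models virtqueue_add_outbuf_premapped(): same shape, premapped. */
static struct toy_add_args toy_add_outbuf_premapped(void)
{
	return toy_add(1, 0, true);
}
```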

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_ring.c | 25 +++++++++++++++++++++++++
 include/linux/virtio.h       |  5 +++++
 2 files changed, 30 insertions(+)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 25027a35fcf8..6bfa5193b0d8 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -2213,6 +2213,31 @@ int virtqueue_add_outbuf(struct virtqueue *vq,
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_outbuf);
 
+/**
+ * virtqueue_add_outbuf_premapped - expose output buffers to other end
+ * @vq: the struct virtqueue we're talking about.
+ * @sg: scatterlist (must be well-formed and terminated!)
+ * @num: the number of entries in @sg readable by other side
+ * @data: the token identifying the buffer.
+ * @gfp: how to do memory allocations (if necessary).
+ *
+ * Caller must ensure we don't call this with other virtqueue operations
+ * at the same time (except where noted).
+ *
+ * The caller must have completed all DMA operations in advance, and
+ * must pass the address and length via sg->dma_address and sg->length.
+ *
+ * Returns zero or a negative error (ie. ENOSPC, ENOMEM, EIO).
+ */
+int virtqueue_add_outbuf_premapped(struct virtqueue *vq,
+				   struct scatterlist *sg, unsigned int num,
+				   void *data,
+				   gfp_t gfp)
+{
+	return virtqueue_add(vq, &sg, num, 1, 0, data, NULL, true, gfp);
+}
+EXPORT_SYMBOL_GPL(virtqueue_add_outbuf_premapped);
+
 /**
  * virtqueue_add_inbuf - expose input buffers to other end
  * @vq: the struct virtqueue we're talking about.
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index dcab9c7e8784..d8b472a7dcae 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -43,6 +43,11 @@ int virtqueue_add_outbuf(struct virtqueue *vq,
 			 void *data,
 			 gfp_t gfp);
 
+int virtqueue_add_outbuf_premapped(struct virtqueue *vq,
+				   struct scatterlist *sg, unsigned int num,
+				   void *data,
+				   gfp_t gfp);
+
 int virtqueue_add_inbuf(struct virtqueue *vq,
 			struct scatterlist sg[], unsigned int num,
 			void *data,
-- 
2.32.0.3.g01195cf9f

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 05/33] virtio_ring: introduce virtqueue_add_inbuf_premapped()
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (3 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 04/33] virtio_ring: introduce virtqueue_add_outbuf_premapped() Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-02 11:00 ` [PATCH 06/33] virtio_ring: introduce virtqueue_reset() Xuan Zhuo
                   ` (30 subsequent siblings)
  35 siblings, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

Introduce virtqueue_add_inbuf_premapped() to submit premapped sgs.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_ring.c | 25 +++++++++++++++++++++++++
 include/linux/virtio.h       |  5 +++++
 2 files changed, 30 insertions(+)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 6bfa5193b0d8..e32046fd15a5 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -2284,6 +2284,31 @@ int virtqueue_add_inbuf_ctx(struct virtqueue *vq,
 }
 EXPORT_SYMBOL_GPL(virtqueue_add_inbuf_ctx);
 
+/**
+ * virtqueue_add_inbuf_premapped - expose input buffers to other end
+ * @vq: the struct virtqueue we're talking about.
+ * @sg: scatterlist (must be well-formed and terminated!)
+ * @num: the number of entries in @sg writable by other side
+ * @data: the token identifying the buffer.
+ * @gfp: how to do memory allocations (if necessary).
+ *
+ * Caller must ensure we don't call this with other virtqueue operations
+ * at the same time (except where noted).
+ *
+ * The caller must have completed all DMA operations in advance, and
+ * must pass the address and length via sg->dma_address and sg->length.
+ *
+ * Returns zero or a negative error (ie. ENOSPC, ENOMEM, EIO).
+ */
+int virtqueue_add_inbuf_premapped(struct virtqueue *vq,
+				  struct scatterlist *sg, unsigned int num,
+				  void *data,
+				  gfp_t gfp)
+{
+	return virtqueue_add(vq, &sg, num, 0, 1, data, NULL, true, gfp);
+}
+EXPORT_SYMBOL_GPL(virtqueue_add_inbuf_premapped);
+
 /**
  * virtqueue_kick_prepare - first half of split virtqueue_kick call.
  * @_vq: the struct virtqueue
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index d8b472a7dcae..3ebb346ebb7c 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -59,6 +59,11 @@ int virtqueue_add_inbuf_ctx(struct virtqueue *vq,
 			    void *ctx,
 			    gfp_t gfp);
 
+int virtqueue_add_inbuf_premapped(struct virtqueue *vq,
+				  struct scatterlist *sg, unsigned int num,
+				  void *data,
+				  gfp_t gfp);
+
 int virtqueue_add_sgs(struct virtqueue *vq,
 		      struct scatterlist *sgs[],
 		      unsigned int out_sgs,
-- 
2.32.0.3.g01195cf9f

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 06/33] virtio_ring: introduce virtqueue_reset()
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (4 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 05/33] virtio_ring: introduce virtqueue_add_inbuf_premapped() Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-03  9:05   ` Michael S. Tsirkin
  2023-02-02 11:00 ` [PATCH 07/33] virtio_ring: add api virtio_dma_map() for advance dma Xuan Zhuo
                   ` (29 subsequent siblings)
  35 siblings, 1 reply; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

Introduce virtqueue_reset() to reset the vring and recycle all the
buffers inside the vq.
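The control flow of the reset can be sketched as a userspace toy model
(error values mirror the kernel ones; everything else is illustrative):
refuse when the ring is not owned, require both transport ops, then
disable, recycle every outstanding buffer, and re-enable.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct toy_vq {
	bool we_own_ring;
	int (*disable)(struct toy_vq *vq);	/* transport ops */
	int (*enable)(struct toy_vq *vq);
	int nbufs;				/* outstanding buffers */
	int recycled;				/* buffers handed back */
};

static int toy_ok(struct toy_vq *vq) { (void)vq; return 0; }

/* Models virtqueue_reset(): -1 ~ -EPERM, -2 ~ -ENOENT, -4 ~ -EBUSY. */
static int toy_reset(struct toy_vq *vq)
{
	if (!vq->we_own_ring)
		return -1;
	if (!vq->disable || !vq->enable)
		return -2;
	if (vq->disable(vq))
		return -3;
	while (vq->nbufs > 0) {		/* detach + recycle each buffer */
		vq->nbufs--;
		vq->recycled++;
	}
	if (vq->enable(vq))
		return -4;
	return 0;
}
```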

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_ring.c | 50 ++++++++++++++++++++++++++++++++++++
 include/linux/virtio.h       |  2 ++
 2 files changed, 52 insertions(+)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index e32046fd15a5..7dfce7001f9f 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -2735,6 +2735,56 @@ int virtqueue_resize(struct virtqueue *_vq, u32 num,
 }
 EXPORT_SYMBOL_GPL(virtqueue_resize);
 
+/**
+ * virtqueue_reset - reset the vring of vq
+ * @_vq: the struct virtqueue we're talking about.
+ * @recycle: callback for recycle the useless buffer
+ *
+ * Caller must ensure we don't call this with other virtqueue operations
+ * at the same time (except where noted).
+ *
+ * Returns zero or a negative error.
+ * 0: success.
+ * -EBUSY: Failed to sync with device, vq may not work properly
+ * -ENOENT: Transport or device not supported
+ * -EPERM: Operation not permitted
+ */
+int virtqueue_reset(struct virtqueue *_vq,
+		    void (*recycle)(struct virtqueue *vq, void *buf))
+{
+	struct vring_virtqueue *vq = to_vvq(_vq);
+	struct virtio_device *vdev = vq->vq.vdev;
+	void *buf;
+	int err;
+
+	if (!vq->we_own_ring)
+		return -EPERM;
+
+	if (!vdev->config->disable_vq_and_reset)
+		return -ENOENT;
+
+	if (!vdev->config->enable_vq_after_reset)
+		return -ENOENT;
+
+	err = vdev->config->disable_vq_and_reset(_vq);
+	if (err)
+		return err;
+
+	while ((buf = virtqueue_detach_unused_buf(_vq)) != NULL)
+		recycle(_vq, buf);
+
+	if (vq->packed_ring)
+		virtqueue_reinit_packed(vq);
+	else
+		virtqueue_reinit_split(vq);
+
+	if (vdev->config->enable_vq_after_reset(_vq))
+		return -EBUSY;
+
+	return 0;
+}
+EXPORT_SYMBOL_GPL(virtqueue_reset);
+
 /* Only available for split ring */
 struct virtqueue *vring_new_virtqueue(unsigned int index,
 				      unsigned int num,
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index 3ebb346ebb7c..3ca2edb1aef3 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -105,6 +105,8 @@ dma_addr_t virtqueue_get_used_addr(struct virtqueue *vq);
 
 int virtqueue_resize(struct virtqueue *vq, u32 num,
 		     void (*recycle)(struct virtqueue *vq, void *buf));
+int virtqueue_reset(struct virtqueue *vq,
+		    void (*recycle)(struct virtqueue *vq, void *buf));
 
 /**
  * struct virtio_device - representation of a device using virtio
-- 
2.32.0.3.g01195cf9f

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 07/33] virtio_ring: add api virtio_dma_map() for advance dma
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (5 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 06/33] virtio_ring: introduce virtqueue_reset() Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-03  9:07   ` Michael S. Tsirkin
  2023-02-02 11:00 ` [PATCH 08/33] virtio_ring: introduce dma sync api for virtio Xuan Zhuo
                   ` (28 subsequent siblings)
  35 siblings, 1 reply; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

Add virtio_dma_map() to map virtual memory for DMA in advance. The
function checks vring_use_dma_api() before performing the virtio DMA
operation and uses vdev->dev.parent as the device parameter of
dma_map_page().

Also add virtio_dma_unmap() to unmap the DMA address.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_ring.c | 80 ++++++++++++++++++++++++++++++++++++
 include/linux/virtio.h       |  9 ++++
 2 files changed, 89 insertions(+)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 7dfce7001f9f..67eda7bc23ea 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -3022,4 +3022,84 @@ const struct vring *virtqueue_get_vring(struct virtqueue *vq)
 }
 EXPORT_SYMBOL_GPL(virtqueue_get_vring);
 
+/**
+ * virtio_dma_map_page - get the DMA addr of the memory for virtio device
+ * @dev: virtio device
+ * @page: the page of the memory to DMA
+ * @offset: the offset of the memory inside page
+ * @length: memory length
+ * @dir: DMA direction
+ *
+ * Returns the DMA address; DMA_MAPPING_ERROR indicates a mapping failure.
+ */
+dma_addr_t virtio_dma_map_page(struct device *dev, struct page *page, size_t offset,
+			       unsigned int length, enum dma_data_direction dir)
+{
+	struct virtio_device *vdev = dev_to_virtio(dev);
+
+	if (!vring_use_dma_api(vdev))
+		return page_to_phys(page) + offset;
+
+	return dma_map_page(vdev->dev.parent, page, offset, length, dir);
+}
+
+/**
+ * virtio_dma_map - get the DMA addr of the memory for virtio device
+ * @dev: virtio device
+ * @addr: the addr to DMA
+ * @length: memory length
+ * @dir: DMA direction
+ *
+ * Returns the DMA addr.
+ */
+dma_addr_t virtio_dma_map(struct device *dev, void *addr, unsigned int length,
+			  enum dma_data_direction dir)
+{
+	struct page *page;
+	size_t offset;
+
+	page = virt_to_page(addr);
+	offset = offset_in_page(addr);
+
+	return virtio_dma_map_page(dev, page, offset, length, dir);
+}
+EXPORT_SYMBOL_GPL(virtio_dma_map);
+
+/**
+ * virtio_dma_mapping_error - check DMA address for a mapping error
+ * @dev: virtio device
+ * @addr: DMA address
+ *
+ * Returns 0 if the DMA address is valid; any other value means the address is invalid.
+ */
+int virtio_dma_mapping_error(struct device *dev, dma_addr_t addr)
+{
+	struct virtio_device *vdev = dev_to_virtio(dev);
+
+	if (!vring_use_dma_api(vdev))
+		return 0;
+
+	return dma_mapping_error(vdev->dev.parent, addr);
+}
+EXPORT_SYMBOL_GPL(virtio_dma_mapping_error);
+
+/**
+ * virtio_dma_unmap - unmap DMA addr
+ * @dev: virtio device
+ * @dma: DMA address
+ * @length: memory length
+ * @dir: DMA direction
+ */
+void virtio_dma_unmap(struct device *dev, dma_addr_t dma, unsigned int length,
+		      enum dma_data_direction dir)
+{
+	struct virtio_device *vdev = dev_to_virtio(dev);
+
+	if (!vring_use_dma_api(vdev))
+		return;
+
+	dma_unmap_page(vdev->dev.parent, dma, length, dir);
+}
+EXPORT_SYMBOL_GPL(virtio_dma_unmap);
+
 MODULE_LICENSE("GPL");
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index 3ca2edb1aef3..ce89126becc5 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -9,6 +9,7 @@
 #include <linux/device.h>
 #include <linux/mod_devicetable.h>
 #include <linux/gfp.h>
+#include <linux/dma-mapping.h>
 
 /**
  * struct virtqueue - a queue to register buffers for sending or receiving.
@@ -218,4 +219,12 @@ void unregister_virtio_driver(struct virtio_driver *drv);
 #define module_virtio_driver(__virtio_driver) \
 	module_driver(__virtio_driver, register_virtio_driver, \
 			unregister_virtio_driver)
+
+dma_addr_t virtio_dma_map_page(struct device *dev, struct page *page, size_t offset,
+			       unsigned int length, enum dma_data_direction dir);
+dma_addr_t virtio_dma_map(struct device *dev, void *addr, unsigned int length,
+			  enum dma_data_direction dir);
+int virtio_dma_mapping_error(struct device *dev, dma_addr_t addr);
+void virtio_dma_unmap(struct device *dev, dma_addr_t dma, unsigned int length,
+		      enum dma_data_direction dir);
 #endif /* _LINUX_VIRTIO_H */
-- 
2.32.0.3.g01195cf9f

* [PATCH 08/33] virtio_ring: introduce dma sync api for virtio
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (6 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 07/33] virtio_ring: add api virtio_dma_map() for advance dma Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-03  9:24   ` Michael S. Tsirkin
  2023-02-02 11:00 ` [PATCH 09/33] xsk: xsk_buff_pool add callback for dma_sync Xuan Zhuo
                   ` (27 subsequent siblings)
  35 siblings, 1 reply; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

DMA sync needs to take into account whether virtio uses the DMA API,
and it also needs access to vdev->dev.parent. So introduce these
wrapper APIs.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/virtio/virtio_ring.c | 61 ++++++++++++++++++++++++++++++++++++
 include/linux/virtio.h       |  8 +++++
 2 files changed, 69 insertions(+)

diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
index 67eda7bc23ea..7b393133fd27 100644
--- a/drivers/virtio/virtio_ring.c
+++ b/drivers/virtio/virtio_ring.c
@@ -3102,4 +3102,65 @@ void virtio_dma_unmap(struct device *dev, dma_addr_t dma, unsigned int length,
 }
 EXPORT_SYMBOL_GPL(virtio_dma_unmap);
 
+/**
+ * virtio_dma_need_sync - check whether a DMA address needs to be synced
+ * @dev: virtio device
+ * @addr: DMA address
+ */
+bool virtio_dma_need_sync(struct device *dev, dma_addr_t addr)
+{
+	struct virtio_device *vdev = dev_to_virtio(dev);
+
+	if (!vring_use_dma_api(vdev))
+		return 0;
+
+	return dma_need_sync(vdev->dev.parent, addr);
+}
+EXPORT_SYMBOL_GPL(virtio_dma_need_sync);
+
+/**
+ * virtio_dma_sync_signle_range_for_cpu - dma sync for cpu
+ * @dev: virtio device
+ * @addr: DMA address
+ * @offset: DMA address offset
+ * @size: mem size for sync
+ * @dir: DMA direction
+ *
+ * Before calling this function, use virtio_dma_need_sync() to confirm that the
+ * DMA address really needs to be synchronized.
+ */
+void virtio_dma_sync_signle_range_for_cpu(struct device *dev, dma_addr_t addr,
+					  unsigned long offset, size_t size,
+					  enum dma_data_direction dir)
+{
+	struct virtio_device *vdev = dev_to_virtio(dev);
+
+	dma_sync_single_range_for_cpu(vdev->dev.parent, addr, offset,
+				      size, dir);
+}
+EXPORT_SYMBOL_GPL(virtio_dma_sync_signle_range_for_cpu);
+
+/**
+ * virtio_dma_sync_signle_range_for_device - dma sync for device
+ * @dev: virtio device
+ * @addr: DMA address
+ * @offset: DMA address offset
+ * @size: mem size for sync
+ * @dir: DMA direction
+ *
+ * Before calling this function, use virtio_dma_need_sync() to confirm that the
+ * DMA address really needs to be synchronized.
+ */
+void virtio_dma_sync_signle_range_for_device(struct device *dev,
+					     dma_addr_t addr,
+					     unsigned long offset, size_t size,
+					     enum dma_data_direction dir)
+{
+	struct virtio_device *vdev = dev_to_virtio(dev);
+
+	dma_sync_single_range_for_device(vdev->dev.parent, addr, offset,
+					 size, dir);
+}
+EXPORT_SYMBOL_GPL(virtio_dma_sync_signle_range_for_device);
+
 MODULE_LICENSE("GPL");
diff --git a/include/linux/virtio.h b/include/linux/virtio.h
index ce89126becc5..8c2fae318b0c 100644
--- a/include/linux/virtio.h
+++ b/include/linux/virtio.h
@@ -227,4 +227,12 @@ dma_addr_t virtio_dma_map(struct device *dev, void *addr, unsigned int length,
 int virtio_dma_mapping_error(struct device *dev, dma_addr_t addr);
 void virtio_dma_unmap(struct device *dev, dma_addr_t dma, unsigned int length,
 		      enum dma_data_direction dir);
+bool virtio_dma_need_sync(struct device *dev, dma_addr_t addr);
+void virtio_dma_sync_signle_range_for_cpu(struct device *dev, dma_addr_t addr,
+					  unsigned long offset, size_t size,
+					  enum dma_data_direction dir);
+void virtio_dma_sync_signle_range_for_device(struct device *dev,
+					     dma_addr_t addr,
+					     unsigned long offset, size_t size,
+					     enum dma_data_direction dir);
 #endif /* _LINUX_VIRTIO_H */
-- 
2.32.0.3.g01195cf9f

* [PATCH 09/33] xsk: xsk_buff_pool add callback for dma_sync
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (7 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 08/33] virtio_ring: introduce dma sync api for virtio Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
       [not found]   ` <CAJ8uoz2+4+wUFYF1GjF51DFBV8ZsBRtTEVWpu_2fBmFUEQzOLQ@mail.gmail.com>
  2023-02-02 11:00 ` [PATCH 10/33] xsk: support virtio DMA map Xuan Zhuo
                   ` (26 subsequent siblings)
  35 siblings, 1 reply; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

Use callbacks to implement DMA sync, which simplifies the subsequent
addition of virtio DMA sync support.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 include/net/xsk_buff_pool.h |  6 ++++++
 net/xdp/xsk_buff_pool.c     | 24 ++++++++++++++++++++----
 2 files changed, 26 insertions(+), 4 deletions(-)

diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
index 3e952e569418..53b681120354 100644
--- a/include/net/xsk_buff_pool.h
+++ b/include/net/xsk_buff_pool.h
@@ -75,6 +75,12 @@ struct xsk_buff_pool {
 	u32 chunk_size;
 	u32 chunk_shift;
 	u32 frame_len;
+	void (*dma_sync_for_cpu)(struct device *dev, dma_addr_t addr,
+				 unsigned long offset, size_t size,
+				 enum dma_data_direction dir);
+	void (*dma_sync_for_device)(struct device *dev, dma_addr_t addr,
+				    unsigned long offset, size_t size,
+				    enum dma_data_direction dir);
 	u8 cached_need_wakeup;
 	bool uses_need_wakeup;
 	bool dma_need_sync;
diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index ed6c71826d31..78e325e195fa 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -403,6 +403,20 @@ static int xp_init_dma_info(struct xsk_buff_pool *pool, struct xsk_dma_map *dma_
 	return 0;
 }
 
+static void dma_sync_for_cpu(struct device *dev, dma_addr_t addr,
+			     unsigned long offset, size_t size,
+			     enum dma_data_direction dir)
+{
+	dma_sync_single_range_for_cpu(dev, addr, offset, size, dir);
+}
+
+static void dma_sync_for_device(struct device *dev, dma_addr_t addr,
+				unsigned long offset, size_t size,
+				enum dma_data_direction dir)
+{
+	dma_sync_single_range_for_device(dev, addr, offset, size, dir);
+}
+
 int xp_dma_map(struct xsk_buff_pool *pool, struct device *dev,
 	       unsigned long attrs, struct page **pages, u32 nr_pages)
 {
@@ -421,6 +435,9 @@ int xp_dma_map(struct xsk_buff_pool *pool, struct device *dev,
 		return 0;
 	}
 
+	pool->dma_sync_for_cpu = dma_sync_for_cpu;
+	pool->dma_sync_for_device = dma_sync_for_device;
+
 	dma_map = xp_create_dma_map(dev, pool->netdev, nr_pages, pool->umem);
 	if (!dma_map)
 		return -ENOMEM;
@@ -667,15 +684,14 @@ EXPORT_SYMBOL(xp_raw_get_dma);
 
 void xp_dma_sync_for_cpu_slow(struct xdp_buff_xsk *xskb)
 {
-	dma_sync_single_range_for_cpu(xskb->pool->dev, xskb->dma, 0,
-				      xskb->pool->frame_len, DMA_BIDIRECTIONAL);
+	xskb->pool->dma_sync_for_cpu(xskb->pool->dev, xskb->dma, 0,
+				     xskb->pool->frame_len, DMA_BIDIRECTIONAL);
 }
 EXPORT_SYMBOL(xp_dma_sync_for_cpu_slow);
 
 void xp_dma_sync_for_device_slow(struct xsk_buff_pool *pool, dma_addr_t dma,
 				 size_t size)
 {
-	dma_sync_single_range_for_device(pool->dev, dma, 0,
-					 size, DMA_BIDIRECTIONAL);
+	pool->dma_sync_for_device(pool->dev, dma, 0, size, DMA_BIDIRECTIONAL);
 }
 EXPORT_SYMBOL(xp_dma_sync_for_device_slow);
-- 
2.32.0.3.g01195cf9f

* [PATCH 10/33] xsk: support virtio DMA map
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (8 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 09/33] xsk: xsk_buff_pool add callback for dma_sync Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-05 22:04   ` kernel test robot
  2023-02-02 11:00 ` [PATCH 11/33] virtio_net: rename free_old_xmit_skbs to free_old_xmit Xuan Zhuo
                   ` (25 subsequent siblings)
  35 siblings, 1 reply; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

When the device is a virtio device, use virtio's DMA interfaces
instead of the generic DMA API.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 net/xdp/xsk_buff_pool.c | 59 +++++++++++++++++++++++++++++++----------
 1 file changed, 45 insertions(+), 14 deletions(-)

diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
index 78e325e195fa..e2785aca8396 100644
--- a/net/xdp/xsk_buff_pool.c
+++ b/net/xdp/xsk_buff_pool.c
@@ -3,6 +3,7 @@
 #include <net/xsk_buff_pool.h>
 #include <net/xdp_sock.h>
 #include <net/xdp_sock_drv.h>
+#include <linux/virtio.h>
 
 #include "xsk_queue.h"
 #include "xdp_umem.h"
@@ -334,8 +335,12 @@ static void __xp_dma_unmap(struct xsk_dma_map *dma_map, unsigned long attrs)
 		dma = &dma_map->dma_pages[i];
 		if (*dma) {
 			*dma &= ~XSK_NEXT_PG_CONTIG_MASK;
-			dma_unmap_page_attrs(dma_map->dev, *dma, PAGE_SIZE,
-					     DMA_BIDIRECTIONAL, attrs);
+			if (is_virtio_device(dma_map->dev))
+				virtio_dma_unmap(dma_map->dev, *dma, PAGE_SIZE,
+						 DMA_BIDIRECTIONAL);
+			else
+				dma_unmap_page_attrs(dma_map->dev, *dma, PAGE_SIZE,
+						     DMA_BIDIRECTIONAL, attrs);
 			*dma = 0;
 		}
 	}
@@ -435,22 +440,40 @@ int xp_dma_map(struct xsk_buff_pool *pool, struct device *dev,
 		return 0;
 	}
 
-	pool->dma_sync_for_cpu = dma_sync_for_cpu;
-	pool->dma_sync_for_device = dma_sync_for_device;
+	if (is_virtio_device(dev)) {
+		pool->dma_sync_for_cpu = virtio_dma_sync_signle_range_for_cpu;
+		pool->dma_sync_for_device = virtio_dma_sync_signle_range_for_device;
+
+	} else {
+		pool->dma_sync_for_cpu = dma_sync_for_cpu;
+		pool->dma_sync_for_device = dma_sync_for_device;
+	}
 
 	dma_map = xp_create_dma_map(dev, pool->netdev, nr_pages, pool->umem);
 	if (!dma_map)
 		return -ENOMEM;
 
 	for (i = 0; i < dma_map->dma_pages_cnt; i++) {
-		dma = dma_map_page_attrs(dev, pages[i], 0, PAGE_SIZE,
-					 DMA_BIDIRECTIONAL, attrs);
-		if (dma_mapping_error(dev, dma)) {
-			__xp_dma_unmap(dma_map, attrs);
-			return -ENOMEM;
+		if (is_virtio_device(dev)) {
+			dma = virtio_dma_map_page(dev, pages[i], 0, PAGE_SIZE,
+						  DMA_BIDIRECTIONAL);
+
+			if (virtio_dma_mapping_error(dev, dma))
+				goto err;
+
+			if (virtio_dma_need_sync(dev, dma))
+				dma_map->dma_need_sync = true;
+
+		} else {
+			dma = dma_map_page_attrs(dev, pages[i], 0, PAGE_SIZE,
+						 DMA_BIDIRECTIONAL, attrs);
+
+			if (dma_mapping_error(dev, dma))
+				goto err;
+
+			if (dma_need_sync(dev, dma))
+				dma_map->dma_need_sync = true;
 		}
-		if (dma_need_sync(dev, dma))
-			dma_map->dma_need_sync = true;
 		dma_map->dma_pages[i] = dma;
 	}
 
@@ -464,6 +487,9 @@ int xp_dma_map(struct xsk_buff_pool *pool, struct device *dev,
 	}
 
 	return 0;
+err:
+	__xp_dma_unmap(dma_map, attrs);
+	return -ENOMEM;
 }
 EXPORT_SYMBOL(xp_dma_map);
 
@@ -546,9 +572,14 @@ struct xdp_buff *xp_alloc(struct xsk_buff_pool *pool)
 	xskb->xdp.data_meta = xskb->xdp.data;
 
 	if (pool->dma_need_sync) {
-		dma_sync_single_range_for_device(pool->dev, xskb->dma, 0,
-						 pool->frame_len,
-						 DMA_BIDIRECTIONAL);
+		if (is_virtio_device(pool->dev))
+			virtio_dma_sync_signle_range_for_device(pool->dev, xskb->dma, 0,
+								pool->frame_len,
+								DMA_BIDIRECTIONAL);
+		else
+			dma_sync_single_range_for_device(pool->dev, xskb->dma, 0,
+							 pool->frame_len,
+							 DMA_BIDIRECTIONAL);
 	}
 	return &xskb->xdp;
 }
-- 
2.32.0.3.g01195cf9f

* [PATCH 11/33] virtio_net: rename free_old_xmit_skbs to free_old_xmit
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (9 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 10/33] xsk: support virtio DMA map Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-02 11:00 ` [PATCH 12/33] virtio_net: unify the code for recycling the xmit ptr Xuan Zhuo
                   ` (24 subsequent siblings)
  35 siblings, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

Since free_old_xmit_skbs() deals not only with skbs but also with XDP
frames and the xsk buffers to be added later, rename the function to
free_old_xmit().

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 10 +++++-----
 1 file changed, 5 insertions(+), 5 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 0f0036b1514d..bc7b9ccb325a 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1714,7 +1714,7 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
 	return stats.packets;
 }
 
-static void free_old_xmit_skbs(struct send_queue *sq, bool in_napi)
+static void free_old_xmit(struct send_queue *sq, bool in_napi)
 {
 	unsigned int len;
 	unsigned int packets = 0;
@@ -1778,7 +1778,7 @@ static void virtnet_poll_cleantx(struct receive_queue *rq)
 
 		do {
 			virtqueue_disable_cb(sq->vq);
-			free_old_xmit_skbs(sq, true);
+			free_old_xmit(sq, true);
 		} while (unlikely(!virtqueue_enable_cb_delayed(sq->vq)));
 
 		if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
@@ -1870,7 +1870,7 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
 	txq = netdev_get_tx_queue(vi->dev, index);
 	__netif_tx_lock(txq, raw_smp_processor_id());
 	virtqueue_disable_cb(sq->vq);
-	free_old_xmit_skbs(sq, true);
+	free_old_xmit(sq, true);
 
 	if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
 		netif_tx_wake_queue(txq);
@@ -1960,7 +1960,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
 		if (use_napi)
 			virtqueue_disable_cb(sq->vq);
 
-		free_old_xmit_skbs(sq, false);
+		free_old_xmit(sq, false);
 
 	} while (use_napi && kick &&
 	       unlikely(!virtqueue_enable_cb_delayed(sq->vq)));
@@ -2006,7 +2006,7 @@ static netdev_tx_t start_xmit(struct sk_buff *skb, struct net_device *dev)
 				virtqueue_napi_schedule(&sq->napi, sq->vq);
 		} else if (unlikely(!virtqueue_enable_cb_delayed(sq->vq))) {
 			/* More just got used, free them then recheck. */
-			free_old_xmit_skbs(sq, false);
+			free_old_xmit(sq, false);
 			if (sq->vq->num_free >= 2+MAX_SKB_FRAGS) {
 				netif_start_subqueue(dev, qnum);
 				virtqueue_disable_cb(sq->vq);
-- 
2.32.0.3.g01195cf9f

* [PATCH 12/33] virtio_net: unify the code for recycling the xmit ptr
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (10 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 11/33] virtio_net: rename free_old_xmit_skbs to free_old_xmit Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-02 11:00 ` [PATCH 13/33] virtio_net: virtnet_poll_tx support rescheduled Xuan Zhuo
                   ` (23 subsequent siblings)
  35 siblings, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

There are two nearly identical, independent implementations, which is
inconvenient for the subsequent addition of new buffer types. So
extract the common code into a helper and call it from both places to
recycle old xmit pointers.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 76 +++++++++++++++++-----------------------
 1 file changed, 33 insertions(+), 43 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index bc7b9ccb325a..058ec6ca7cfc 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -318,6 +318,30 @@ static struct xdp_frame *ptr_to_xdp(void *ptr)
 	return (struct xdp_frame *)((unsigned long)ptr & ~VIRTIO_XDP_FLAG);
 }
 
+static void __free_old_xmit(struct send_queue *sq, bool in_napi,
+			    struct virtnet_sq_stats *stats)
+{
+	unsigned int len;
+	void *ptr;
+
+	while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
+		if (!is_xdp_frame(ptr)) {
+			struct sk_buff *skb = ptr;
+
+			pr_debug("Sent skb %p\n", skb);
+
+			stats->bytes += skb->len;
+			napi_consume_skb(skb, in_napi);
+		} else {
+			struct xdp_frame *frame = ptr_to_xdp(ptr);
+
+			stats->bytes += xdp_get_frame_len(frame);
+			xdp_return_frame(frame);
+		}
+		stats->packets++;
+	}
+}
+
 /* Converting between virtqueue no. and kernel tx/rx queue no.
  * 0:rx0 1:tx0 2:rx1 3:tx1 ... 2N:rxN 2N+1:txN 2N+2:cvq
  */
@@ -635,15 +659,12 @@ static int virtnet_xdp_xmit(struct net_device *dev,
 			    int n, struct xdp_frame **frames, u32 flags)
 {
 	struct virtnet_info *vi = netdev_priv(dev);
+	struct virtnet_sq_stats stats = {};
 	struct receive_queue *rq = vi->rq;
 	struct bpf_prog *xdp_prog;
 	struct send_queue *sq;
-	unsigned int len;
-	int packets = 0;
-	int bytes = 0;
 	int nxmit = 0;
 	int kicks = 0;
-	void *ptr;
 	int ret;
 	int i;
 
@@ -662,20 +683,7 @@ static int virtnet_xdp_xmit(struct net_device *dev,
 	}
 
 	/* Free up any pending old buffers before queueing new ones. */
-	while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
-		if (likely(is_xdp_frame(ptr))) {
-			struct xdp_frame *frame = ptr_to_xdp(ptr);
-
-			bytes += xdp_get_frame_len(frame);
-			xdp_return_frame(frame);
-		} else {
-			struct sk_buff *skb = ptr;
-
-			bytes += skb->len;
-			napi_consume_skb(skb, false);
-		}
-		packets++;
-	}
+	__free_old_xmit(sq, false, &stats);
 
 	for (i = 0; i < n; i++) {
 		struct xdp_frame *xdpf = frames[i];
@@ -692,8 +700,8 @@ static int virtnet_xdp_xmit(struct net_device *dev,
 	}
 out:
 	u64_stats_update_begin(&sq->stats.syncp);
-	sq->stats.bytes += bytes;
-	sq->stats.packets += packets;
+	sq->stats.bytes += stats.bytes;
+	sq->stats.packets += stats.packets;
 	sq->stats.xdp_tx += n;
 	sq->stats.xdp_tx_drops += n - nxmit;
 	sq->stats.kicks += kicks;
@@ -1716,37 +1724,19 @@ static int virtnet_receive(struct receive_queue *rq, int budget,
 
 static void free_old_xmit(struct send_queue *sq, bool in_napi)
 {
-	unsigned int len;
-	unsigned int packets = 0;
-	unsigned int bytes = 0;
-	void *ptr;
+	struct virtnet_sq_stats stats = {};
 
-	while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
-		if (likely(!is_xdp_frame(ptr))) {
-			struct sk_buff *skb = ptr;
-
-			pr_debug("Sent skb %p\n", skb);
-
-			bytes += skb->len;
-			napi_consume_skb(skb, in_napi);
-		} else {
-			struct xdp_frame *frame = ptr_to_xdp(ptr);
-
-			bytes += xdp_get_frame_len(frame);
-			xdp_return_frame(frame);
-		}
-		packets++;
-	}
+	__free_old_xmit(sq, in_napi, &stats);
 
 	/* Avoid overhead when no packets have been processed
 	 * happens when called speculatively from start_xmit.
 	 */
-	if (!packets)
+	if (!stats.packets)
 		return;
 
 	u64_stats_update_begin(&sq->stats.syncp);
-	sq->stats.bytes += bytes;
-	sq->stats.packets += packets;
+	sq->stats.bytes += stats.bytes;
+	sq->stats.packets += stats.packets;
 	u64_stats_update_end(&sq->stats.syncp);
 }
 
-- 
2.32.0.3.g01195cf9f

* [PATCH 13/33] virtio_net: virtnet_poll_tx support rescheduled
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (11 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 12/33] virtio_net: unify the code for recycling the xmit ptr Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-02 11:00 ` [PATCH 14/33] virtio_net: independent directory Xuan Zhuo
                   ` (22 subsequent siblings)
  35 siblings, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

Let virtnet_poll_tx() return the full budget when it is still busy, so
that it gets rescheduled.

When the return value is less than the budget, napi_poll() in dev.c
exits directly and virtqueue_napi_complete() is called to complete
NAPI.

When the return value equals the budget, napi_poll() in dev.c re-adds
the NAPI instance to the poll list.

This prepares virtnet_poll_tx() for xsk xmit support in a subsequent
patch.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 058ec6ca7cfc..eb7f00194b5c 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1848,6 +1848,7 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
 	struct virtnet_info *vi = sq->vq->vdev->priv;
 	unsigned int index = vq2txq(sq->vq);
 	struct netdev_queue *txq;
+	int busy = 0;
 	int opaque;
 	bool done;
 
@@ -1865,6 +1866,11 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
 	if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
 		netif_tx_wake_queue(txq);
 
+	if (busy) {
+		__netif_tx_unlock(txq);
+		return budget;
+	}
+
 	opaque = virtqueue_enable_cb_prepare(sq->vq);
 
 	done = napi_complete_done(napi, 0);
-- 
2.32.0.3.g01195cf9f

* [PATCH 14/33] virtio_net: independent directory
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (12 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 13/33] virtio_net: virtnet_poll_tx support rescheduled Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-02 11:00 ` [PATCH 15/33] virtio_net: move to virtio_net.h Xuan Zhuo
                   ` (21 subsequent siblings)
  35 siblings, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

Create a separate directory for virtio-net. AF_XDP support will be
added later, along with a separate xsk.c file, so the driver should
live in its own directory.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 MAINTAINERS                                 |  2 +-
 drivers/net/Kconfig                         |  8 +-------
 drivers/net/Makefile                        |  2 +-
 drivers/net/virtio/Kconfig                  | 11 +++++++++++
 drivers/net/virtio/Makefile                 |  8 ++++++++
 drivers/net/{virtio_net.c => virtio/main.c} |  0
 6 files changed, 22 insertions(+), 9 deletions(-)
 create mode 100644 drivers/net/virtio/Kconfig
 create mode 100644 drivers/net/virtio/Makefile
 rename drivers/net/{virtio_net.c => virtio/main.c} (100%)

diff --git a/MAINTAINERS b/MAINTAINERS
index 8cdba0580cb8..4f2bb7013768 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -22052,7 +22052,7 @@ F:	Documentation/ABI/testing/sysfs-class-vduse
 F:	Documentation/devicetree/bindings/virtio/
 F:	drivers/block/virtio_blk.c
 F:	drivers/crypto/virtio/
-F:	drivers/net/virtio_net.c
+F:	drivers/net/virtio/
 F:	drivers/vdpa/
 F:	drivers/virtio/
 F:	include/linux/vdpa.h
diff --git a/drivers/net/Kconfig b/drivers/net/Kconfig
index 950a09f021dd..825a091c902a 100644
--- a/drivers/net/Kconfig
+++ b/drivers/net/Kconfig
@@ -408,13 +408,7 @@ config VETH
 	  When one end receives the packet it appears on its pair and vice
 	  versa.
 
-config VIRTIO_NET
-	tristate "Virtio network driver"
-	depends on VIRTIO
-	select NET_FAILOVER
-	help
-	  This is the virtual network driver for virtio.  It can be used with
-	  QEMU based VMMs (like KVM or Xen).  Say Y or M.
+source "drivers/net/virtio/Kconfig"
 
 config NLMON
 	tristate "Virtual netlink monitoring device"
diff --git a/drivers/net/Makefile b/drivers/net/Makefile
index e26f98f897c5..47537dd0f120 100644
--- a/drivers/net/Makefile
+++ b/drivers/net/Makefile
@@ -31,7 +31,7 @@ obj-$(CONFIG_NET_TEAM) += team/
 obj-$(CONFIG_TUN) += tun.o
 obj-$(CONFIG_TAP) += tap.o
 obj-$(CONFIG_VETH) += veth.o
-obj-$(CONFIG_VIRTIO_NET) += virtio_net.o
+obj-$(CONFIG_VIRTIO_NET) += virtio/
 obj-$(CONFIG_VXLAN) += vxlan/
 obj-$(CONFIG_GENEVE) += geneve.o
 obj-$(CONFIG_BAREUDP) += bareudp.o
diff --git a/drivers/net/virtio/Kconfig b/drivers/net/virtio/Kconfig
new file mode 100644
index 000000000000..9bc2a2fc6c3e
--- /dev/null
+++ b/drivers/net/virtio/Kconfig
@@ -0,0 +1,11 @@
+# SPDX-License-Identifier: GPL-2.0-only
+#
+# virtio-net device configuration
+#
+config VIRTIO_NET
+	tristate "Virtio network driver"
+	depends on VIRTIO
+	select NET_FAILOVER
+	help
+	  This is the virtual network driver for virtio.  It can be used with
+	  QEMU based VMMs (like KVM or Xen).  Say Y or M.
diff --git a/drivers/net/virtio/Makefile b/drivers/net/virtio/Makefile
new file mode 100644
index 000000000000..15ed7c97fd4f
--- /dev/null
+++ b/drivers/net/virtio/Makefile
@@ -0,0 +1,8 @@
+# SPDX-License-Identifier: GPL-2.0
+#
+# Makefile for the virtio network device drivers.
+#
+
+obj-$(CONFIG_VIRTIO_NET) += virtio_net.o
+
+virtio_net-y := main.o
diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio/main.c
similarity index 100%
rename from drivers/net/virtio_net.c
rename to drivers/net/virtio/main.c
-- 
2.32.0.3.g01195cf9f

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 15/33] virtio_net: move to virtio_net.h
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (13 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 14/33] virtio_net: independent directory Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-03  8:53   ` Michael S. Tsirkin
  2023-02-02 11:00 ` [PATCH 16/33] virtio_net: introduce virtnet_xdp_handler() to separate the logic of running XDP Xuan Zhuo
                   ` (20 subsequent siblings)
  35 siblings, 1 reply; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

Move some structure definitions and inline functions into the
virtio_net.h file.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/main.c       | 247 +----------------------------
 drivers/net/virtio/virtio_net.h | 265 ++++++++++++++++++++++++++++++++
 2 files changed, 267 insertions(+), 245 deletions(-)
 create mode 100644 drivers/net/virtio/virtio_net.h

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index eb7f00194b5c..5683cb576474 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -4,24 +4,8 @@
  * Copyright 2007 Rusty Russell <rusty@rustcorp.com.au> IBM Corporation
  */
 //#define DEBUG
-#include <linux/netdevice.h>
-#include <linux/etherdevice.h>
-#include <linux/ethtool.h>
-#include <linux/module.h>
-#include <linux/virtio.h>
-#include <linux/virtio_net.h>
-#include <linux/bpf.h>
-#include <linux/bpf_trace.h>
-#include <linux/scatterlist.h>
-#include <linux/if_vlan.h>
-#include <linux/slab.h>
-#include <linux/cpu.h>
-#include <linux/average.h>
-#include <linux/filter.h>
-#include <linux/kernel.h>
-#include <net/route.h>
-#include <net/xdp.h>
-#include <net/net_failover.h>
+
+#include "virtio_net.h"
 
 static int napi_weight = NAPI_POLL_WEIGHT;
 module_param(napi_weight, int, 0444);
@@ -44,15 +28,6 @@ module_param(napi_tx, bool, 0644);
 #define VIRTIO_XDP_TX		BIT(0)
 #define VIRTIO_XDP_REDIR	BIT(1)
 
-#define VIRTIO_XDP_FLAG	BIT(0)
-
-/* RX packet size EWMA. The average packet size is used to determine the packet
- * buffer size when refilling RX rings. As the entire RX ring may be refilled
- * at once, the weight is chosen so that the EWMA will be insensitive to short-
- * term, transient changes in packet size.
- */
-DECLARE_EWMA(pkt_len, 0, 64)
-
 #define VIRTNET_DRIVER_VERSION "1.0.0"
 
 static const unsigned long guest_offloads[] = {
@@ -72,36 +47,6 @@ static const unsigned long guest_offloads[] = {
 				(1ULL << VIRTIO_NET_F_GUEST_USO4) | \
 				(1ULL << VIRTIO_NET_F_GUEST_USO6))
 
-struct virtnet_stat_desc {
-	char desc[ETH_GSTRING_LEN];
-	size_t offset;
-};
-
-struct virtnet_sq_stats {
-	struct u64_stats_sync syncp;
-	u64 packets;
-	u64 bytes;
-	u64 xdp_tx;
-	u64 xdp_tx_drops;
-	u64 kicks;
-	u64 tx_timeouts;
-};
-
-struct virtnet_rq_stats {
-	struct u64_stats_sync syncp;
-	u64 packets;
-	u64 bytes;
-	u64 drops;
-	u64 xdp_packets;
-	u64 xdp_tx;
-	u64 xdp_redirects;
-	u64 xdp_drops;
-	u64 kicks;
-};
-
-#define VIRTNET_SQ_STAT(m)	offsetof(struct virtnet_sq_stats, m)
-#define VIRTNET_RQ_STAT(m)	offsetof(struct virtnet_rq_stats, m)
-
 static const struct virtnet_stat_desc virtnet_sq_stats_desc[] = {
 	{ "packets",		VIRTNET_SQ_STAT(packets) },
 	{ "bytes",		VIRTNET_SQ_STAT(bytes) },
@@ -125,57 +70,6 @@ static const struct virtnet_stat_desc virtnet_rq_stats_desc[] = {
 #define VIRTNET_SQ_STATS_LEN	ARRAY_SIZE(virtnet_sq_stats_desc)
 #define VIRTNET_RQ_STATS_LEN	ARRAY_SIZE(virtnet_rq_stats_desc)
 
-/* Internal representation of a send virtqueue */
-struct send_queue {
-	/* Virtqueue associated with this send _queue */
-	struct virtqueue *vq;
-
-	/* TX: fragments + linear part + virtio header */
-	struct scatterlist sg[MAX_SKB_FRAGS + 2];
-
-	/* Name of the send queue: output.$index */
-	char name[16];
-
-	struct virtnet_sq_stats stats;
-
-	struct napi_struct napi;
-
-	/* Record whether sq is in reset state. */
-	bool reset;
-};
-
-/* Internal representation of a receive virtqueue */
-struct receive_queue {
-	/* Virtqueue associated with this receive_queue */
-	struct virtqueue *vq;
-
-	struct napi_struct napi;
-
-	struct bpf_prog __rcu *xdp_prog;
-
-	struct virtnet_rq_stats stats;
-
-	/* Chain pages by the private ptr. */
-	struct page *pages;
-
-	/* Average packet length for mergeable receive buffers. */
-	struct ewma_pkt_len mrg_avg_pkt_len;
-
-	/* Page frag for packet buffer allocation. */
-	struct page_frag alloc_frag;
-
-	/* RX: fragments + linear part + virtio header */
-	struct scatterlist sg[MAX_SKB_FRAGS + 2];
-
-	/* Min single buffer size for mergeable buffers case. */
-	unsigned int min_buf_len;
-
-	/* Name of this receive queue: input.$index */
-	char name[16];
-
-	struct xdp_rxq_info xdp_rxq;
-};
-
 /* This structure can contain rss message with maximum settings for indirection table and keysize
  * Note, that default structure that describes RSS configuration virtio_net_rss_config
  * contains same info but can't handle table values.
@@ -206,90 +100,6 @@ struct control_buf {
 	struct virtio_net_ctrl_rss rss;
 };
 
-struct virtnet_info {
-	struct virtio_device *vdev;
-	struct virtqueue *cvq;
-	struct net_device *dev;
-	struct send_queue *sq;
-	struct receive_queue *rq;
-	unsigned int status;
-
-	/* Max # of queue pairs supported by the device */
-	u16 max_queue_pairs;
-
-	/* # of queue pairs currently used by the driver */
-	u16 curr_queue_pairs;
-
-	/* # of XDP queue pairs currently used by the driver */
-	u16 xdp_queue_pairs;
-
-	/* xdp_queue_pairs may be 0, when xdp is already loaded. So add this. */
-	bool xdp_enabled;
-
-	/* I like... big packets and I cannot lie! */
-	bool big_packets;
-
-	/* number of sg entries allocated for big packets */
-	unsigned int big_packets_num_skbfrags;
-
-	/* Host will merge rx buffers for big packets (shake it! shake it!) */
-	bool mergeable_rx_bufs;
-
-	/* Host supports rss and/or hash report */
-	bool has_rss;
-	bool has_rss_hash_report;
-	u8 rss_key_size;
-	u16 rss_indir_table_size;
-	u32 rss_hash_types_supported;
-	u32 rss_hash_types_saved;
-
-	/* Has control virtqueue */
-	bool has_cvq;
-
-	/* Host can handle any s/g split between our header and packet data */
-	bool any_header_sg;
-
-	/* Packet virtio header size */
-	u8 hdr_len;
-
-	/* Work struct for delayed refilling if we run low on memory. */
-	struct delayed_work refill;
-
-	/* Is delayed refill enabled? */
-	bool refill_enabled;
-
-	/* The lock to synchronize the access to refill_enabled */
-	spinlock_t refill_lock;
-
-	/* Work struct for config space updates */
-	struct work_struct config_work;
-
-	/* Does the affinity hint is set for virtqueues? */
-	bool affinity_hint_set;
-
-	/* CPU hotplug instances for online & dead */
-	struct hlist_node node;
-	struct hlist_node node_dead;
-
-	struct control_buf *ctrl;
-
-	/* Ethtool settings */
-	u8 duplex;
-	u32 speed;
-
-	/* Interrupt coalescing settings */
-	u32 tx_usecs;
-	u32 rx_usecs;
-	u32 tx_max_packets;
-	u32 rx_max_packets;
-
-	unsigned long guest_offloads;
-	unsigned long guest_offloads_capable;
-
-	/* failover when STANDBY feature enabled */
-	struct failover *failover;
-};
-
 struct padded_vnet_hdr {
 	struct virtio_net_hdr_v1_hash hdr;
 	/*
@@ -303,45 +113,11 @@ struct padded_vnet_hdr {
 static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
 static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
 
-static bool is_xdp_frame(void *ptr)
-{
-	return (unsigned long)ptr & VIRTIO_XDP_FLAG;
-}
-
 static void *xdp_to_ptr(struct xdp_frame *ptr)
 {
 	return (void *)((unsigned long)ptr | VIRTIO_XDP_FLAG);
 }
 
-static struct xdp_frame *ptr_to_xdp(void *ptr)
-{
-	return (struct xdp_frame *)((unsigned long)ptr & ~VIRTIO_XDP_FLAG);
-}
-
-static void __free_old_xmit(struct send_queue *sq, bool in_napi,
-			    struct virtnet_sq_stats *stats)
-{
-	unsigned int len;
-	void *ptr;
-
-	while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
-		if (!is_xdp_frame(ptr)) {
-			struct sk_buff *skb = ptr;
-
-			pr_debug("Sent skb %p\n", skb);
-
-			stats->bytes += skb->len;
-			napi_consume_skb(skb, in_napi);
-		} else {
-			struct xdp_frame *frame = ptr_to_xdp(ptr);
-
-			stats->bytes += xdp_get_frame_len(frame);
-			xdp_return_frame(frame);
-		}
-		stats->packets++;
-	}
-}
-
 /* Converting between virtqueue no. and kernel tx/rx queue no.
  * 0:rx0 1:tx0 2:rx1 3:tx1 ... 2N:rxN 2N+1:txN 2N+2:cvq
  */
@@ -411,15 +187,6 @@ static void disable_delayed_refill(struct virtnet_info *vi)
 	spin_unlock_bh(&vi->refill_lock);
 }
 
-static void virtqueue_napi_schedule(struct napi_struct *napi,
-				    struct virtqueue *vq)
-{
-	if (napi_schedule_prep(napi)) {
-		virtqueue_disable_cb(vq);
-		__napi_schedule(napi);
-	}
-}
-
 static void virtqueue_napi_complete(struct napi_struct *napi,
 				    struct virtqueue *vq, int processed)
 {
@@ -1740,16 +1507,6 @@ static void free_old_xmit(struct send_queue *sq, bool in_napi)
 	u64_stats_update_end(&sq->stats.syncp);
 }
 
-static bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q)
-{
-	if (q < (vi->curr_queue_pairs - vi->xdp_queue_pairs))
-		return false;
-	else if (q < vi->curr_queue_pairs)
-		return true;
-	else
-		return false;
-}
-
 static void virtnet_poll_cleantx(struct receive_queue *rq)
 {
 	struct virtnet_info *vi = rq->vq->vdev->priv;
diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
new file mode 100644
index 000000000000..8bf31429ae28
--- /dev/null
+++ b/drivers/net/virtio/virtio_net.h
@@ -0,0 +1,265 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+#ifndef __VIRTIO_NET_H__
+#define __VIRTIO_NET_H__
+#include <linux/netdevice.h>
+#include <linux/etherdevice.h>
+#include <linux/ethtool.h>
+#include <linux/module.h>
+#include <linux/virtio.h>
+#include <linux/virtio_net.h>
+#include <linux/bpf.h>
+#include <linux/bpf_trace.h>
+#include <linux/scatterlist.h>
+#include <linux/if_vlan.h>
+#include <linux/slab.h>
+#include <linux/cpu.h>
+#include <linux/average.h>
+#include <linux/filter.h>
+#include <linux/kernel.h>
+#include <net/route.h>
+#include <net/xdp.h>
+#include <net/net_failover.h>
+#include <net/xdp_sock_drv.h>
+
+#define VIRTIO_XDP_FLAG	BIT(0)
+
+struct virtnet_info {
+	struct virtio_device *vdev;
+	struct virtqueue *cvq;
+	struct net_device *dev;
+	struct send_queue *sq;
+	struct receive_queue *rq;
+	unsigned int status;
+
+	/* Max # of queue pairs supported by the device */
+	u16 max_queue_pairs;
+
+	/* # of queue pairs currently used by the driver */
+	u16 curr_queue_pairs;
+
+	/* # of XDP queue pairs currently used by the driver */
+	u16 xdp_queue_pairs;
+
+	/* xdp_queue_pairs may be 0, when xdp is already loaded. So add this. */
+	bool xdp_enabled;
+
+	/* I like... big packets and I cannot lie! */
+	bool big_packets;
+
+	/* number of sg entries allocated for big packets */
+	unsigned int big_packets_num_skbfrags;
+
+	/* Host will merge rx buffers for big packets (shake it! shake it!) */
+	bool mergeable_rx_bufs;
+
+	/* Host supports rss and/or hash report */
+	bool has_rss;
+	bool has_rss_hash_report;
+	u8 rss_key_size;
+	u16 rss_indir_table_size;
+	u32 rss_hash_types_supported;
+	u32 rss_hash_types_saved;
+
+	/* Has control virtqueue */
+	bool has_cvq;
+
+	/* Host can handle any s/g split between our header and packet data */
+	bool any_header_sg;
+
+	/* Packet virtio header size */
+	u8 hdr_len;
+
+	/* Work struct for delayed refilling if we run low on memory. */
+	struct delayed_work refill;
+
+	/* Is delayed refill enabled? */
+	bool refill_enabled;
+
+	/* The lock to synchronize the access to refill_enabled */
+	spinlock_t refill_lock;
+
+	/* Work struct for config space updates */
+	struct work_struct config_work;
+
+	/* Does the affinity hint is set for virtqueues? */
+	bool affinity_hint_set;
+
+	/* CPU hotplug instances for online & dead */
+	struct hlist_node node;
+	struct hlist_node node_dead;
+
+	struct control_buf *ctrl;
+
+	/* Ethtool settings */
+	u8 duplex;
+	u32 speed;
+
+	/* Interrupt coalescing settings */
+	u32 tx_usecs;
+	u32 rx_usecs;
+	u32 tx_max_packets;
+	u32 rx_max_packets;
+
+	unsigned long guest_offloads;
+	unsigned long guest_offloads_capable;
+
+	/* failover when STANDBY feature enabled */
+	struct failover *failover;
+};
+
+/* RX packet size EWMA. The average packet size is used to determine the packet
+ * buffer size when refilling RX rings. As the entire RX ring may be refilled
+ * at once, the weight is chosen so that the EWMA will be insensitive to short-
+ * term, transient changes in packet size.
+ */
+DECLARE_EWMA(pkt_len, 0, 64)
+
+struct virtnet_stat_desc {
+	char desc[ETH_GSTRING_LEN];
+	size_t offset;
+};
+
+struct virtnet_sq_stats {
+	struct u64_stats_sync syncp;
+	u64 packets;
+	u64 bytes;
+	u64 xdp_tx;
+	u64 xdp_tx_drops;
+	u64 kicks;
+	u64 tx_timeouts;
+};
+
+struct virtnet_rq_stats {
+	struct u64_stats_sync syncp;
+	u64 packets;
+	u64 bytes;
+	u64 drops;
+	u64 xdp_packets;
+	u64 xdp_tx;
+	u64 xdp_redirects;
+	u64 xdp_drops;
+	u64 kicks;
+};
+
+#define VIRTNET_SQ_STAT(m)	offsetof(struct virtnet_sq_stats, m)
+#define VIRTNET_RQ_STAT(m)	offsetof(struct virtnet_rq_stats, m)
+
+/* Internal representation of a send virtqueue */
+struct send_queue {
+	/* Virtqueue associated with this send _queue */
+	struct virtqueue *vq;
+
+	/* TX: fragments + linear part + virtio header */
+	struct scatterlist sg[MAX_SKB_FRAGS + 2];
+
+	/* Name of the send queue: output.$index */
+	char name[16];
+
+	struct virtnet_sq_stats stats;
+
+	struct napi_struct napi;
+
+	/* Record whether sq is in reset state. */
+	bool reset;
+};
+
+/* Internal representation of a receive virtqueue */
+struct receive_queue {
+	/* Virtqueue associated with this receive_queue */
+	struct virtqueue *vq;
+
+	struct napi_struct napi;
+
+	struct bpf_prog __rcu *xdp_prog;
+
+	struct virtnet_rq_stats stats;
+
+	/* Chain pages by the private ptr. */
+	struct page *pages;
+
+	/* Average packet length for mergeable receive buffers. */
+	struct ewma_pkt_len mrg_avg_pkt_len;
+
+	/* Page frag for packet buffer allocation. */
+	struct page_frag alloc_frag;
+
+	/* RX: fragments + linear part + virtio header */
+	struct scatterlist sg[MAX_SKB_FRAGS + 2];
+
+	/* Min single buffer size for mergeable buffers case. */
+	unsigned int min_buf_len;
+
+	/* Name of this receive queue: input.$index */
+	char name[16];
+
+	struct xdp_rxq_info xdp_rxq;
+};
+
+static inline bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q)
+{
+	if (q < (vi->curr_queue_pairs - vi->xdp_queue_pairs))
+		return false;
+	else if (q < vi->curr_queue_pairs)
+		return true;
+	else
+		return false;
+}
+
+static inline void virtnet_return_xdp_frame(struct send_queue *sq,
+					    struct xdp_frame *frame)
+{
+	struct virtnet_info *vi = sq->vq->vdev->priv;
+	dma_addr_t *p_addr, addr;
+
+	p_addr = frame->data - sizeof(*p_addr);
+	addr = *p_addr;
+
+	virtio_dma_unmap(&vi->vdev->dev, addr, frame->len, DMA_TO_DEVICE);
+
+	xdp_return_frame(frame);
+}
+
+static inline void virtqueue_napi_schedule(struct napi_struct *napi,
+					   struct virtqueue *vq)
+{
+	if (napi_schedule_prep(napi)) {
+		virtqueue_disable_cb(vq);
+		__napi_schedule(napi);
+	}
+}
+
+static inline bool is_xdp_frame(void *ptr)
+{
+	return (unsigned long)ptr & VIRTIO_XDP_FLAG;
+}
+
+static struct xdp_frame *ptr_to_xdp(void *ptr)
+{
+	return (struct xdp_frame *)((unsigned long)ptr & ~VIRTIO_XDP_FLAG);
+}
+
+static void __free_old_xmit(struct send_queue *sq, bool in_napi,
+			    struct virtnet_sq_stats *stats)
+{
+	unsigned int len;
+	void *ptr;
+
+	while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
+		if (!is_xdp_frame(ptr)) {
+			struct sk_buff *skb = ptr;
+
+			pr_debug("Sent skb %p\n", skb);
+
+			stats->bytes += skb->len;
+			napi_consume_skb(skb, in_napi);
+		} else {
+			struct xdp_frame *frame = ptr_to_xdp(ptr);
+
+			stats->bytes += xdp_get_frame_len(frame);
+			xdp_return_frame(frame);
+		}
+		stats->packets++;
+	}
+}
+#endif
-- 
2.32.0.3.g01195cf9f


^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 16/33] virtio_net: introduce virtnet_xdp_handler() to separate the logic of running XDP
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (14 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 15/33] virtio_net: move to virtio_net.h Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-03  8:55   ` Michael S. Tsirkin
  2023-02-02 11:00 ` [PATCH 17/33] virtio_net: receive_small() use virtnet_xdp_handler() Xuan Zhuo
                   ` (19 subsequent siblings)
  35 siblings, 1 reply; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

At present, we have two long, similar code paths for running XDP programs,
and the XSK implementation will need the same logic.

Therefore, this patch factors out the code that executes XDP, which eases
later maintenance and lets the subsequent XSK support reuse it.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/main.c       | 53 +++++++++++++++++++++++++++++++++
 drivers/net/virtio/virtio_net.h | 11 +++++++
 2 files changed, 64 insertions(+)

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index 5683cb576474..9d4b84b23ef7 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -478,6 +478,59 @@ static int virtnet_xdp_xmit(struct net_device *dev,
 	return ret;
 }
 
+int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
+			struct net_device *dev,
+			unsigned int *xdp_xmit,
+			struct virtnet_rq_stats *stats)
+{
+	struct xdp_frame *xdpf;
+	int err;
+	u32 act;
+
+	act = bpf_prog_run_xdp(xdp_prog, xdp);
+	stats->xdp_packets++;
+
+	switch (act) {
+	case XDP_PASS:
+		return VIRTNET_XDP_RES_PASS;
+
+	case XDP_TX:
+		stats->xdp_tx++;
+		xdpf = xdp_convert_buff_to_frame(xdp);
+		if (unlikely(!xdpf))
+			return VIRTNET_XDP_RES_DROP;
+
+		err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
+		if (unlikely(!err)) {
+			xdp_return_frame_rx_napi(xdpf);
+		} else if (unlikely(err < 0)) {
+			trace_xdp_exception(dev, xdp_prog, act);
+			return VIRTNET_XDP_RES_DROP;
+		}
+
+		*xdp_xmit |= VIRTIO_XDP_TX;
+		return VIRTNET_XDP_RES_CONSUMED;
+
+	case XDP_REDIRECT:
+		stats->xdp_redirects++;
+		err = xdp_do_redirect(dev, xdp, xdp_prog);
+		if (err)
+			return VIRTNET_XDP_RES_DROP;
+
+		*xdp_xmit |= VIRTIO_XDP_REDIR;
+		return VIRTNET_XDP_RES_CONSUMED;
+
+	default:
+		bpf_warn_invalid_xdp_action(dev, xdp_prog, act);
+		fallthrough;
+	case XDP_ABORTED:
+		trace_xdp_exception(dev, xdp_prog, act);
+		fallthrough;
+	case XDP_DROP:
+		return VIRTNET_XDP_RES_DROP;
+	}
+}
+
 static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
 {
 	return vi->xdp_enabled ? VIRTIO_XDP_HEADROOM : 0;
diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
index 8bf31429ae28..af3e7e817f9e 100644
--- a/drivers/net/virtio/virtio_net.h
+++ b/drivers/net/virtio/virtio_net.h
@@ -22,6 +22,12 @@
 #include <net/net_failover.h>
 #include <net/xdp_sock_drv.h>
 
+enum {
+	VIRTNET_XDP_RES_PASS,
+	VIRTNET_XDP_RES_DROP,
+	VIRTNET_XDP_RES_CONSUMED,
+};
+
 #define VIRTIO_XDP_FLAG	BIT(0)
 
 struct virtnet_info {
@@ -262,4 +268,9 @@ static void __free_old_xmit(struct send_queue *sq, bool in_napi,
 		stats->packets++;
 	}
 }
+
+int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
+			struct net_device *dev,
+			unsigned int *xdp_xmit,
+			struct virtnet_rq_stats *stats);
 #endif
-- 
2.32.0.3.g01195cf9f


^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 17/33] virtio_net: receive_small() use virtnet_xdp_handler()
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (15 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 16/33] virtio_net: introduce virtnet_xdp_handler() to separate the logic of running XDP Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-02 11:00 ` [PATCH 18/33] virtio_net: receive_mergeable() " Xuan Zhuo
                   ` (18 subsequent siblings)
  35 siblings, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

Make receive_small() use virtnet_xdp_handler().

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/main.c | 40 +++++++--------------------------------
 1 file changed, 7 insertions(+), 33 deletions(-)

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index 9d4b84b23ef7..d7a856bd8862 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -618,7 +618,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
 	struct page *page = virt_to_head_page(buf);
 	unsigned int delta = 0;
 	struct page *xdp_page;
-	int err;
 	unsigned int metasize = 0;
 
 	len -= vi->hdr_len;
@@ -640,7 +639,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
 	xdp_prog = rcu_dereference(rq->xdp_prog);
 	if (xdp_prog) {
 		struct virtio_net_hdr_mrg_rxbuf *hdr = buf + header_offset;
-		struct xdp_frame *xdpf;
 		struct xdp_buff xdp;
 		void *orig_data;
 		u32 act;
@@ -673,46 +671,22 @@ static struct sk_buff *receive_small(struct net_device *dev,
 		xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len,
 				 xdp_headroom, len, true);
 		orig_data = xdp.data;
-		act = bpf_prog_run_xdp(xdp_prog, &xdp);
-		stats->xdp_packets++;
+
+		act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
 
 		switch (act) {
-		case XDP_PASS:
+		case VIRTNET_XDP_RES_PASS:
 			/* Recalculate length in case bpf program changed it */
 			delta = orig_data - xdp.data;
 			len = xdp.data_end - xdp.data;
 			metasize = xdp.data - xdp.data_meta;
 			break;
-		case XDP_TX:
-			stats->xdp_tx++;
-			xdpf = xdp_convert_buff_to_frame(&xdp);
-			if (unlikely(!xdpf))
-				goto err_xdp;
-			err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
-			if (unlikely(!err)) {
-				xdp_return_frame_rx_napi(xdpf);
-			} else if (unlikely(err < 0)) {
-				trace_xdp_exception(vi->dev, xdp_prog, act);
-				goto err_xdp;
-			}
-			*xdp_xmit |= VIRTIO_XDP_TX;
-			rcu_read_unlock();
-			goto xdp_xmit;
-		case XDP_REDIRECT:
-			stats->xdp_redirects++;
-			err = xdp_do_redirect(dev, &xdp, xdp_prog);
-			if (err)
-				goto err_xdp;
-			*xdp_xmit |= VIRTIO_XDP_REDIR;
+
+		case VIRTNET_XDP_RES_CONSUMED:
 			rcu_read_unlock();
 			goto xdp_xmit;
-		default:
-			bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
-			fallthrough;
-		case XDP_ABORTED:
-			trace_xdp_exception(vi->dev, xdp_prog, act);
-			goto err_xdp;
-		case XDP_DROP:
+
+		case VIRTNET_XDP_RES_DROP:
 			goto err_xdp;
 		}
 	}
-- 
2.32.0.3.g01195cf9f


^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 18/33] virtio_net: receive_mergeable() use virtnet_xdp_handler()
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (16 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 17/33] virtio_net: receive_small() use virtnet_xdp_handler() Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-02 17:16   ` Michael S. Tsirkin
  2023-02-02 11:00 ` [PATCH 19/33] virtio_net: introduce virtnet_tx_reset() Xuan Zhuo
                   ` (17 subsequent siblings)
  35 siblings, 1 reply; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

Make receive_mergeable() use virtnet_xdp_handler().

Meanwhile, add support for multi-buffer XDP.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/main.c | 88 +++++++++++++++------------------------
 1 file changed, 33 insertions(+), 55 deletions(-)

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index d7a856bd8862..fb82035a0b7f 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -483,8 +483,10 @@ int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
 			unsigned int *xdp_xmit,
 			struct virtnet_rq_stats *stats)
 {
+	struct skb_shared_info *shinfo;
 	struct xdp_frame *xdpf;
-	int err;
+	struct page *xdp_page;
+	int err, i;
 	u32 act;
 
 	act = bpf_prog_run_xdp(xdp_prog, xdp);
@@ -527,6 +529,13 @@ int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
 		trace_xdp_exception(dev, xdp_prog, act);
 		fallthrough;
 	case XDP_DROP:
+		if (xdp_buff_has_frags(xdp)) {
+			shinfo = xdp_get_shared_info_from_buff(xdp);
+			for (i = 0; i < shinfo->nr_frags; i++) {
+				xdp_page = skb_frag_page(&shinfo->frags[i]);
+				put_page(xdp_page);
+			}
+		}
 		return VIRTNET_XDP_RES_DROP;
 	}
 }
@@ -809,7 +818,7 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
 	unsigned int xdp_frags_truesz = 0;
 	struct page *page;
 	skb_frag_t *frag;
-	int offset;
+	int offset, i;
 	void *ctx;
 
 	xdp_init_buff(xdp, frame_sz, &rq->xdp_rxq);
@@ -842,7 +851,7 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
 				 dev->name, *num_buf,
 				 virtio16_to_cpu(vi->vdev, hdr->num_buffers));
 			dev->stats.rx_length_errors++;
-			return -EINVAL;
+			goto err;
 		}
 
 		stats->bytes += len;
@@ -861,7 +870,7 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
 			pr_debug("%s: rx error: len %u exceeds truesize %lu\n",
 				 dev->name, len, (unsigned long)(truesize - room));
 			dev->stats.rx_length_errors++;
-			return -EINVAL;
+			goto err;
 		}
 
 		frag = &shinfo->frags[shinfo->nr_frags++];
@@ -876,6 +885,14 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
 
 	*xdp_frags_truesize = xdp_frags_truesz;
 	return 0;
+
+err:
+	for (i = 0; i < shinfo->nr_frags; i++) {
+		page = skb_frag_page(&shinfo->frags[i]);
+		put_page(page);
+	}
+
+	return -EINVAL;
 }
 
 static struct sk_buff *receive_mergeable(struct net_device *dev,
@@ -919,13 +936,10 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 	xdp_prog = rcu_dereference(rq->xdp_prog);
 	if (xdp_prog) {
 		unsigned int xdp_frags_truesz = 0;
-		struct skb_shared_info *shinfo;
-		struct xdp_frame *xdpf;
 		struct page *xdp_page;
 		struct xdp_buff xdp;
 		void *data;
 		u32 act;
-		int i;
 
 		/* Transient failure which in theory could occur if
 		 * in-flight packets from before XDP was enabled reach
@@ -983,69 +997,33 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 		err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz,
 						 &num_buf, &xdp_frags_truesz, stats);
 		if (unlikely(err))
-			goto err_xdp_frags;
+			goto err_xdp;
 
-		act = bpf_prog_run_xdp(xdp_prog, &xdp);
-		stats->xdp_packets++;
+		act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
 
 		switch (act) {
-		case XDP_PASS:
+		case VIRTNET_XDP_RES_PASS:
 			if (unlikely(xdp_page != page))
 				put_page(page);
+
 			head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
 			rcu_read_unlock();
 			return head_skb;
-		case XDP_TX:
-			stats->xdp_tx++;
-			xdpf = xdp_convert_buff_to_frame(&xdp);
-			if (unlikely(!xdpf)) {
-				netdev_dbg(dev, "convert buff to frame failed for xdp\n");
-				goto err_xdp_frags;
-			}
-			err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
-			if (unlikely(!err)) {
-				xdp_return_frame_rx_napi(xdpf);
-			} else if (unlikely(err < 0)) {
-				trace_xdp_exception(vi->dev, xdp_prog, act);
-				goto err_xdp_frags;
-			}
-			*xdp_xmit |= VIRTIO_XDP_TX;
-			if (unlikely(xdp_page != page))
-				put_page(page);
-			rcu_read_unlock();
-			goto xdp_xmit;
-		case XDP_REDIRECT:
-			stats->xdp_redirects++;
-			err = xdp_do_redirect(dev, &xdp, xdp_prog);
-			if (err)
-				goto err_xdp_frags;
-			*xdp_xmit |= VIRTIO_XDP_REDIR;
+
+		case VIRTNET_XDP_RES_CONSUMED:
 			if (unlikely(xdp_page != page))
 				put_page(page);
+
 			rcu_read_unlock();
 			goto xdp_xmit;
-		default:
-			bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
-			fallthrough;
-		case XDP_ABORTED:
-			trace_xdp_exception(vi->dev, xdp_prog, act);
-			fallthrough;
-		case XDP_DROP:
-			goto err_xdp_frags;
-		}
-err_xdp_frags:
-		if (unlikely(xdp_page != page))
-			__free_pages(xdp_page, 0);
 
-		if (xdp_buff_has_frags(&xdp)) {
-			shinfo = xdp_get_shared_info_from_buff(&xdp);
-			for (i = 0; i < shinfo->nr_frags; i++) {
-				xdp_page = skb_frag_page(&shinfo->frags[i]);
+		case VIRTNET_XDP_RES_DROP:
+			if (unlikely(xdp_page != page))
 				put_page(xdp_page);
-			}
-		}
 
-		goto err_xdp;
+			rcu_read_unlock();
+			goto err_xdp;
+		}
 	}
 	rcu_read_unlock();
 
-- 
2.32.0.3.g01195cf9f

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 19/33] virtio_net: introduce virtnet_tx_reset()
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (17 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 18/33] virtio_net: receive_merageable() " Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-02 17:23   ` Michael S. Tsirkin
  2023-02-02 11:00 ` [PATCH 20/33] virtio_net: xsk: introduce virtnet_rq_bind_xsk_pool() Xuan Zhuo
                   ` (16 subsequent siblings)
  35 siblings, 1 reply; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

Introduce virtnet_tx_reset() to release the buffers inside the virtio
ring.

This is needed for xsk disable: when disabling xsk, we need to release
the buffers that belong to xsk.

This patch reuses the logic of virtnet_tx_resize().

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/main.c       | 21 ++++++++++++++++++---
 drivers/net/virtio/virtio_net.h |  1 +
 2 files changed, 19 insertions(+), 3 deletions(-)

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index fb82035a0b7f..049a3bb9d88d 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -1806,8 +1806,8 @@ static int virtnet_rx_resize(struct virtnet_info *vi,
 	return err;
 }
 
-static int virtnet_tx_resize(struct virtnet_info *vi,
-			     struct send_queue *sq, u32 ring_num)
+static int __virtnet_tx_reset(struct virtnet_info *vi,
+			      struct send_queue *sq, u32 ring_num)
 {
 	bool running = netif_running(vi->dev);
 	struct netdev_queue *txq;
@@ -1833,7 +1833,11 @@ static int virtnet_tx_resize(struct virtnet_info *vi,
 
 	__netif_tx_unlock_bh(txq);
 
-	err = virtqueue_resize(sq->vq, ring_num, virtnet_sq_free_unused_buf);
+	if (ring_num)
+		err = virtqueue_resize(sq->vq, ring_num, virtnet_sq_free_unused_buf);
+	else
+		err = virtqueue_reset(sq->vq, virtnet_sq_free_unused_buf);
+
 	if (err)
 		netdev_err(vi->dev, "resize tx fail: tx queue index: %d err: %d\n", qindex, err);
 
@@ -1847,6 +1851,17 @@ static int virtnet_tx_resize(struct virtnet_info *vi,
 	return err;
 }
 
+static int virtnet_tx_resize(struct virtnet_info *vi,
+			     struct send_queue *sq, u32 ring_num)
+{
+	return __virtnet_tx_reset(vi, sq, ring_num);
+}
+
+int virtnet_tx_reset(struct virtnet_info *vi, struct send_queue *sq)
+{
+	return __virtnet_tx_reset(vi, sq, 0);
+}
+
 /*
  * Send command via the control virtqueue and check status.  Commands
  * supported by the hypervisor, as indicated by feature bits, should
diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
index af3e7e817f9e..b46f083a630a 100644
--- a/drivers/net/virtio/virtio_net.h
+++ b/drivers/net/virtio/virtio_net.h
@@ -273,4 +273,5 @@ int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
 			struct net_device *dev,
 			unsigned int *xdp_xmit,
 			struct virtnet_rq_stats *stats);
+int virtnet_tx_reset(struct virtnet_info *vi, struct send_queue *sq);
 #endif
-- 
2.32.0.3.g01195cf9f

* [PATCH 20/33] virtio_net: xsk: introduce virtnet_rq_bind_xsk_pool()
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (18 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 19/33] virtio_net: introduce virtnet_tx_reset() Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-03  8:48   ` Michael S. Tsirkin
  2023-02-02 11:00 ` [PATCH 21/33] virtio_net: xsk: introduce virtnet_xsk_pool_enable() Xuan Zhuo
                   ` (15 subsequent siblings)
  35 siblings, 1 reply; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

This function binds an xsk pool to a virtnet rq, or unbinds it.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/Makefile     |  2 +-
 drivers/net/virtio/main.c       |  8 ++---
 drivers/net/virtio/virtio_net.h | 16 ++++++++++
 drivers/net/virtio/xsk.c        | 56 +++++++++++++++++++++++++++++++++
 4 files changed, 76 insertions(+), 6 deletions(-)
 create mode 100644 drivers/net/virtio/xsk.c

diff --git a/drivers/net/virtio/Makefile b/drivers/net/virtio/Makefile
index 15ed7c97fd4f..8c2a884d2dba 100644
--- a/drivers/net/virtio/Makefile
+++ b/drivers/net/virtio/Makefile
@@ -5,4 +5,4 @@
 
 obj-$(CONFIG_VIRTIO_NET) += virtio_net.o
 
-virtio_net-y := main.o
+virtio_net-y := main.o xsk.o
diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index 049a3bb9d88d..0ee23468b795 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -110,7 +110,6 @@ struct padded_vnet_hdr {
 	char padding[12];
 };
 
-static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
 static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
 
 static void *xdp_to_ptr(struct xdp_frame *ptr)
@@ -1351,8 +1350,7 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi,
  * before we're receiving packets, or from refill_work which is
  * careful to disable receiving (using napi_disable).
  */
-static bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq,
-			  gfp_t gfp)
+bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq, gfp_t gfp)
 {
 	int err;
 	bool oom;
@@ -1388,7 +1386,7 @@ static void skb_recv_done(struct virtqueue *rvq)
 	virtqueue_napi_schedule(&rq->napi, rvq);
 }
 
-static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
+void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
 {
 	napi_enable(napi);
 
@@ -3284,7 +3282,7 @@ static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf)
 		xdp_return_frame(ptr_to_xdp(buf));
 }
 
-static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf)
+void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf)
 {
 	struct virtnet_info *vi = vq->vdev->priv;
 	int i = vq2rxq(vq);
diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
index b46f083a630a..4a7633714802 100644
--- a/drivers/net/virtio/virtio_net.h
+++ b/drivers/net/virtio/virtio_net.h
@@ -168,6 +168,12 @@ struct send_queue {
 
 	/* Record whether sq is in reset state. */
 	bool reset;
+
+	struct {
+		struct xsk_buff_pool __rcu *pool;
+
+		dma_addr_t hdr_dma_address;
+	} xsk;
 };
 
 /* Internal representation of a receive virtqueue */
@@ -200,6 +206,13 @@ struct receive_queue {
 	char name[16];
 
 	struct xdp_rxq_info xdp_rxq;
+
+	struct {
+		struct xsk_buff_pool __rcu *pool;
+
+		/* xdp rxq used by xsk */
+		struct xdp_rxq_info xdp_rxq;
+	} xsk;
 };
 
 static inline bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q)
@@ -274,4 +287,7 @@ int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
 			unsigned int *xdp_xmit,
 			struct virtnet_rq_stats *stats);
 int virtnet_tx_reset(struct virtnet_info *vi, struct send_queue *sq);
+bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq, gfp_t gfp);
+void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi);
+void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
 #endif
diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
new file mode 100644
index 000000000000..e01ff2abea11
--- /dev/null
+++ b/drivers/net/virtio/xsk.c
@@ -0,0 +1,56 @@
+// SPDX-License-Identifier: GPL-2.0-or-later
+/*
+ * virtio-net xsk
+ */
+
+#include "virtio_net.h"
+
+static int virtnet_rq_bind_xsk_pool(struct virtnet_info *vi, struct receive_queue *rq,
+				    struct xsk_buff_pool *pool, struct net_device *dev)
+{
+	bool running = netif_running(vi->dev);
+	int err, qindex;
+
+	qindex = rq - vi->rq;
+
+	if (pool) {
+		err = xdp_rxq_info_reg(&rq->xsk.xdp_rxq, dev, qindex, rq->napi.napi_id);
+		if (err < 0)
+			return err;
+
+		err = xdp_rxq_info_reg_mem_model(&rq->xsk.xdp_rxq,
+						 MEM_TYPE_XSK_BUFF_POOL, NULL);
+		if (err < 0) {
+			xdp_rxq_info_unreg(&rq->xsk.xdp_rxq);
+			return err;
+		}
+
+		xsk_pool_set_rxq_info(pool, &rq->xsk.xdp_rxq);
+	} else {
+		xdp_rxq_info_unreg(&rq->xsk.xdp_rxq);
+	}
+
+	if (running)
+		napi_disable(&rq->napi);
+
+	err = virtqueue_reset(rq->vq, virtnet_rq_free_unused_buf);
+	if (err)
+		netdev_err(vi->dev, "reset rx fail: rx queue index: %d err: %d\n", qindex, err);
+
+	if (pool) {
+		if (err)
+			xdp_rxq_info_unreg(&rq->xsk.xdp_rxq);
+		else
+			rcu_assign_pointer(rq->xsk.pool, pool);
+	} else {
+		rcu_assign_pointer(rq->xsk.pool, NULL);
+	}
+
+	if (!try_fill_recv(vi, rq, GFP_KERNEL))
+		schedule_delayed_work(&vi->refill, 0);
+
+	if (running)
+		virtnet_napi_enable(rq->vq, &rq->napi);
+
+	return err;
+}
-- 
2.32.0.3.g01195cf9f

* [PATCH 21/33] virtio_net: xsk: introduce virtnet_xsk_pool_enable()
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (19 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 20/33] virtio_net: xsk: introduce virtnet_rq_bind_xsk_pool() Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-02 11:00 ` [PATCH 22/33] virtio_net: xsk: introduce xsk disable Xuan Zhuo
                   ` (14 subsequent siblings)
  35 siblings, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

Introduce virtnet_xsk_pool_enable() for xsk setup.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/xsk.c | 59 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)

diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
index e01ff2abea11..a7e8005233d2 100644
--- a/drivers/net/virtio/xsk.c
+++ b/drivers/net/virtio/xsk.c
@@ -5,6 +5,8 @@
 
 #include "virtio_net.h"
 
+static struct virtio_net_hdr_mrg_rxbuf xsk_hdr;
+
 static int virtnet_rq_bind_xsk_pool(struct virtnet_info *vi, struct receive_queue *rq,
 				    struct xsk_buff_pool *pool, struct net_device *dev)
 {
@@ -54,3 +56,60 @@ static int virtnet_rq_bind_xsk_pool(struct virtnet_info *vi, struct receive_queu
 
 	return err;
 }
+
+static int virtnet_xsk_pool_enable(struct net_device *dev,
+				   struct xsk_buff_pool *pool,
+				   u16 qid)
+{
+	struct virtnet_info *vi = netdev_priv(dev);
+	struct receive_queue *rq;
+	struct send_queue *sq;
+	int err;
+
+	if (qid >= vi->curr_queue_pairs)
+		return -EINVAL;
+
+	sq = &vi->sq[qid];
+	rq = &vi->rq[qid];
+
+	/* xsk zerocopy depends on tx napi.
+	 *
+	 * All xsk packets are actually consumed and sent out from the xsk tx
+	 * queue under the tx napi mechanism.
+	 */
+	if (!sq->napi.weight)
+		return -EPERM;
+
+	/* In big_packets mode, xdp cannot work, so there is no need to
+	 * initialize xsk of rq.
+	 */
+	if (vi->big_packets && !vi->mergeable_rx_bufs)
+		return -ENOENT;
+
+	sq->xsk.hdr_dma_address = virtio_dma_map(&vi->vdev->dev, &xsk_hdr,
+						 vi->hdr_len, DMA_TO_DEVICE);
+	if (virtio_dma_mapping_error(&vi->vdev->dev, sq->xsk.hdr_dma_address))
+		return -ENOMEM;
+
+	err = xsk_pool_dma_map(pool, &vi->vdev->dev, 0);
+	if (err)
+		goto err_xsk_map;
+
+	err = virtnet_rq_bind_xsk_pool(vi, rq, pool, dev);
+	if (err)
+		goto err_rxq;
+
+	/* Here is already protected by rtnl_lock, so rcu_assign_pointer
+	 * is safe.
+	 */
+	rcu_assign_pointer(sq->xsk.pool, pool);
+
+	return 0;
+
+err_rxq:
+	xsk_pool_dma_unmap(pool, 0);
+err_xsk_map:
+	virtio_dma_unmap(&vi->vdev->dev, sq->xsk.hdr_dma_address, vi->hdr_len,
+			 DMA_TO_DEVICE);
+	return err;
+}
-- 
2.32.0.3.g01195cf9f

* [PATCH 22/33] virtio_net: xsk: introduce xsk disable
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (20 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 21/33] virtio_net: xsk: introduce virtnet_xsk_pool_enable() Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-02 23:02   ` kernel test robot
  2023-02-12  7:56   ` kernel test robot
  2023-02-02 11:00 ` [PATCH 23/33] virtio_net: xsk: support xsk setup Xuan Zhuo
                   ` (13 subsequent siblings)
  35 siblings, 2 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

Introduce virtnet_xsk_pool_disable() for xsk unbind.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/xsk.c | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
index a7e8005233d2..18c0c6e4d501 100644
--- a/drivers/net/virtio/xsk.c
+++ b/drivers/net/virtio/xsk.c
@@ -113,3 +113,32 @@ static int virtnet_xsk_pool_enable(struct net_device *dev,
 			 DMA_TO_DEVICE);
 	return err;
 }
+
+static int virtnet_xsk_pool_disable(struct net_device *dev, u16 qid)
+{
+	struct virtnet_info *vi = netdev_priv(dev);
+	struct receive_queue *rq;
+	struct send_queue *sq;
+	int err1, err2;
+
+	if (qid >= vi->curr_queue_pairs)
+		return -EINVAL;
+
+	sq = &vi->sq[qid];
+	rq = &vi->rq[qid];
+
+	virtio_dma_unmap(&vi->vdev->dev, sq->xsk.hdr_dma_address, vi->hdr_len,
+			 DMA_TO_DEVICE);
+
+	xsk_pool_dma_unmap(sq->xsk.pool, 0);
+
+	rcu_assign_pointer(sq->xsk.pool, NULL);
+
+	/* Sync with the XSK wakeup and with NAPI. */
+	synchronize_net();
+
+	err1 = virtnet_tx_reset(vi, sq);
+	err2 = virtnet_rq_bind_xsk_pool(vi, rq, NULL, NULL);
+
+	return err1 | err2;
+}
-- 
2.32.0.3.g01195cf9f

* [PATCH 23/33] virtio_net: xsk: support xsk setup
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (21 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 22/33] virtio_net: xsk: introduce xsk disable Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-02 11:00 ` [PATCH 24/33] virtio_net: xsk: stop disable tx napi Xuan Zhuo
                   ` (12 subsequent siblings)
  35 siblings, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

Support the XDP_SETUP_XSK_POOL command for xsk.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/main.c       | 2 ++
 drivers/net/virtio/virtio_net.h | 2 ++
 drivers/net/virtio/xsk.c        | 9 +++++++++
 drivers/net/virtio/xsk.h        | 7 +++++++
 4 files changed, 20 insertions(+)
 create mode 100644 drivers/net/virtio/xsk.h

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index 0ee23468b795..ed79e750bc6c 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -3094,6 +3094,8 @@ static int virtnet_xdp(struct net_device *dev, struct netdev_bpf *xdp)
 	switch (xdp->command) {
 	case XDP_SETUP_PROG:
 		return virtnet_xdp_set(dev, xdp->prog, xdp->extack);
+	case XDP_SETUP_XSK_POOL:
+		return virtnet_xsk_pool_setup(dev, xdp);
 	default:
 		return -EINVAL;
 	}
diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
index 4a7633714802..a12d85624fe9 100644
--- a/drivers/net/virtio/virtio_net.h
+++ b/drivers/net/virtio/virtio_net.h
@@ -248,6 +248,8 @@ static inline void virtqueue_napi_schedule(struct napi_struct *napi,
 	}
 }
 
+#include "xsk.h"
+
 static inline bool is_xdp_frame(void *ptr)
 {
 	return (unsigned long)ptr & VIRTIO_XDP_FLAG;
diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
index 18c0c6e4d501..b96af38a2608 100644
--- a/drivers/net/virtio/xsk.c
+++ b/drivers/net/virtio/xsk.c
@@ -142,3 +142,12 @@ static int virtnet_xsk_pool_disable(struct net_device *dev, u16 qid)
 
 	return err1 | err2;
 }
+
+int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp)
+{
+	if (xdp->xsk.pool)
+		return virtnet_xsk_pool_enable(dev, xdp->xsk.pool,
+					       xdp->xsk.queue_id);
+	else
+		return virtnet_xsk_pool_disable(dev, xdp->xsk.queue_id);
+}
diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h
new file mode 100644
index 000000000000..1918285c310c
--- /dev/null
+++ b/drivers/net/virtio/xsk.h
@@ -0,0 +1,7 @@
+/* SPDX-License-Identifier: GPL-2.0-or-later */
+
+#ifndef __XSK_H__
+#define __XSK_H__
+
+int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp);
+#endif
-- 
2.32.0.3.g01195cf9f

* [PATCH 24/33] virtio_net: xsk: stop disable tx napi
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (22 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 23/33] virtio_net: xsk: support xsk setup Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-02 17:25   ` Michael S. Tsirkin
  2023-02-02 11:00 ` [PATCH 25/33] virtio_net: xsk: __free_old_xmit distinguishes xsk buffer Xuan Zhuo
                   ` (11 subsequent siblings)
  35 siblings, 1 reply; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

Since xsk's TX queue is consumed by TX NAPI, if an sq is bound to xsk,
we must prevent tx napi from being disabled.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/main.c | 9 ++++++++-
 1 file changed, 8 insertions(+), 1 deletion(-)

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index ed79e750bc6c..232cf151abff 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -2728,8 +2728,15 @@ static int virtnet_set_coalesce(struct net_device *dev,
 		return ret;
 
 	if (update_napi) {
-		for (i = 0; i < vi->max_queue_pairs; i++)
+		for (i = 0; i < vi->max_queue_pairs; i++) {
+			/* xsk xmit depends on tx napi. So if xsk is active,
+			 * prevent modifications to tx napi.
+			 */
+			if (rtnl_dereference(vi->sq[i].xsk.pool))
+				continue;
+
 			vi->sq[i].napi.weight = napi_weight;
+		}
 	}
 
 	return ret;
-- 
2.32.0.3.g01195cf9f

* [PATCH 25/33] virtio_net: xsk: __free_old_xmit distinguishes xsk buffer
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (23 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 24/33] virtio_net: xsk: stop disable tx napi Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-02 11:00 ` [PATCH 26/33] virtio_net: virtnet_sq_free_unused_buf() check " Xuan Zhuo
                   ` (10 subsequent siblings)
  35 siblings, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

__free_old_xmit() distinguishes the three pointer types (skb, xdp frame,
xsk buffer) by the low two bits of the pointer.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/virtio_net.h | 18 ++++++++++++++++--
 drivers/net/virtio/xsk.h        | 16 ++++++++++++++++
 2 files changed, 32 insertions(+), 2 deletions(-)

diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
index a12d85624fe9..dd2f7890f8cd 100644
--- a/drivers/net/virtio/virtio_net.h
+++ b/drivers/net/virtio/virtio_net.h
@@ -29,6 +29,7 @@ enum {
 };
 
 #define VIRTIO_XDP_FLAG	BIT(0)
+#define VIRTIO_XSK_FLAG	BIT(1)
 
 struct virtnet_info {
 	struct virtio_device *vdev;
@@ -250,6 +251,11 @@ static inline void virtqueue_napi_schedule(struct napi_struct *napi,
 
 #include "xsk.h"
 
+static inline bool is_skb_ptr(void *ptr)
+{
+	return !((unsigned long)ptr & (VIRTIO_XDP_FLAG | VIRTIO_XSK_FLAG));
+}
+
 static inline bool is_xdp_frame(void *ptr)
 {
 	return (unsigned long)ptr & VIRTIO_XDP_FLAG;
@@ -263,25 +269,33 @@ static struct xdp_frame *ptr_to_xdp(void *ptr)
 static void __free_old_xmit(struct send_queue *sq, bool in_napi,
 			    struct virtnet_sq_stats *stats)
 {
+	unsigned int xsknum = 0;
 	unsigned int len;
 	void *ptr;
 
 	while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
-		if (!is_xdp_frame(ptr)) {
+		if (is_skb_ptr(ptr)) {
 			struct sk_buff *skb = ptr;
 
 			pr_debug("Sent skb %p\n", skb);
 
 			stats->bytes += skb->len;
 			napi_consume_skb(skb, in_napi);
-		} else {
+		} else if (is_xdp_frame(ptr)) {
 			struct xdp_frame *frame = ptr_to_xdp(ptr);
 
 			stats->bytes += xdp_get_frame_len(frame);
 			xdp_return_frame(frame);
+		} else {
+			stats->bytes += ptr_to_xsk(ptr);
+			++xsknum;
 		}
 		stats->packets++;
 	}
+
+	if (xsknum)
+		xsk_tx_completed(sq->xsk.pool, xsknum);
 }
 
 int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h
index 1918285c310c..ad684c812091 100644
--- a/drivers/net/virtio/xsk.h
+++ b/drivers/net/virtio/xsk.h
@@ -3,5 +3,21 @@
 #ifndef __XSK_H__
 #define __XSK_H__
 
+#define VIRTIO_XSK_FLAG_OFFSET	4
+
+static inline void *xsk_to_ptr(u32 len)
+{
+	unsigned long p;
+
+	p = len << VIRTIO_XSK_FLAG_OFFSET;
+
+	return (void *)(p | VIRTIO_XSK_FLAG);
+}
+
+static inline u32 ptr_to_xsk(void *ptr)
+{
+	return ((unsigned long)ptr) >> VIRTIO_XSK_FLAG_OFFSET;
+}
+
 int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp);
 #endif
-- 
2.32.0.3.g01195cf9f

* [PATCH 26/33] virtio_net: virtnet_sq_free_unused_buf() check xsk buffer
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (24 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 25/33] virtio_net: xsk: __free_old_xmit distinguishes xsk buffer Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-02 11:00 ` [PATCH 27/33] virtio_net: virtnet_rq_free_unused_buf() " Xuan Zhuo
                   ` (9 subsequent siblings)
  35 siblings, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

Teach virtnet_sq_free_unused_buf() to recognize xsk buffers.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/main.c | 6 ++++--
 1 file changed, 4 insertions(+), 2 deletions(-)

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index 232cf151abff..ced9a37f706b 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -3285,10 +3285,12 @@ static void free_receive_page_frags(struct virtnet_info *vi)
 
 static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf)
 {
-	if (!is_xdp_frame(buf))
+	if (is_skb_ptr(buf))
 		dev_kfree_skb(buf);
-	else
+	else if (is_xdp_frame(buf))
 		xdp_return_frame(ptr_to_xdp(buf));
+
+	/* xsk buffers do not need handling here. */
 }
 
 void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf)
-- 
2.32.0.3.g01195cf9f

* [PATCH 27/33] virtio_net: virtnet_rq_free_unused_buf() check xsk buffer
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (25 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 26/33] virtio_net: virtnet_sq_free_unused_buf() check " Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-02 11:00 ` [PATCH 28/33] net: introduce napi_tx_raise() Xuan Zhuo
                   ` (8 subsequent siblings)
  35 siblings, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

Since this function is also called in other circumstances (e.g. freeze),
we must check inside it whether the buffer belongs to xsk; this cannot
be determined by the caller.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/main.c | 13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index ced9a37f706b..43249c78484a 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -3296,8 +3296,21 @@ static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf)
 void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf)
 {
 	struct virtnet_info *vi = vq->vdev->priv;
+	struct xsk_buff_pool *pool;
 	int i = vq2rxq(vq);
 
+	rcu_read_lock();
+	pool = rcu_dereference(vi->rq[i].xsk.pool);
+	if (pool) {
+		struct xdp_buff *xdp;
+
+		xdp = (struct xdp_buff *)buf;
+		xsk_buff_free(xdp);
+		rcu_read_unlock();
+		return;
+	}
+	rcu_read_unlock();
+
 	if (vi->mergeable_rx_bufs)
 		put_page(virt_to_head_page(buf));
 	else if (vi->big_packets)
-- 
2.32.0.3.g01195cf9f

* [PATCH 28/33] net: introduce napi_tx_raise()
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (26 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 27/33] virtio_net: virtnet_rq_free_unused_buf() " Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-02 11:00 ` [PATCH 29/33] virtio_net: xsk: tx: support tx Xuan Zhuo
                   ` (7 subsequent siblings)
  35 siblings, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

Raise tx napi manually from outside softirq/irq context.

In some cases, we want to trigger TX NAPI from user context. Because the
trigger does not come from a softirq or an IRQ, the softirq will not be
executed on IRQ exit; napi_tx_raise() therefore wakes up ksoftirqd.

For example, in the AF_XDP zerocopy TX implementation we want TX NAPI to
process the packets in the XSK TX queue. But virtio-net does not support
generating an interrupt from the hardware manually, so we trigger TX
NAPI from user context instead.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 include/linux/netdevice.h |  7 +++++++
 net/core/dev.c            | 11 +++++++++++
 2 files changed, 18 insertions(+)

diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
index d5ef4c1fedd2..a3f8664fadd5 100644
--- a/include/linux/netdevice.h
+++ b/include/linux/netdevice.h
@@ -519,6 +519,13 @@ static inline bool napi_complete(struct napi_struct *n)
 	return napi_complete_done(n, 0);
 }
 
+/**
+ * napi_tx_raise - raise tx napi
+ *
+ * Raise napi tx manually without softirq/irq context.
+ */
+void napi_tx_raise(void);
+
 int dev_set_threaded(struct net_device *dev, bool threaded);
 
 /**
diff --git a/net/core/dev.c b/net/core/dev.c
index bb42150a38ec..ec19eff89c56 100644
--- a/net/core/dev.c
+++ b/net/core/dev.c
@@ -6092,6 +6092,17 @@ bool napi_complete_done(struct napi_struct *n, int work_done)
 }
 EXPORT_SYMBOL(napi_complete_done);
 
+/**
+ * napi_tx_raise - raise tx napi
+ *
+ * Raise napi tx manually without softirq/irq context.
+ */
+void napi_tx_raise(void)
+{
+	raise_softirq(NET_TX_SOFTIRQ);
+}
+EXPORT_SYMBOL(napi_tx_raise);
+
 /* must be called under rcu_read_lock(), as we dont take a reference */
 static struct napi_struct *napi_by_id(unsigned int napi_id)
 {
-- 
2.32.0.3.g01195cf9f

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply related	[flat|nested] 76+ messages in thread

* [PATCH 29/33] virtio_net: xsk: tx: support tx
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (27 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 28/33] net: introduce napi_tx_raise() Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
       [not found]   ` <Y9zIPdKmTvXqyuYS@boxer>
  2023-02-02 11:00 ` [PATCH 30/33] virtio_net: xsk: tx: support wakeup Xuan Zhuo
                   ` (6 subsequent siblings)
  35 siblings, 1 reply; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

The driver's TX NAPI is central to XSK: it is responsible for fetching
descriptors from the XSK TX queue and transmitting them.

To start the process, we first need to trigger TX NAPI.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/main.c |  12 +++-
 drivers/net/virtio/xsk.c  | 146 ++++++++++++++++++++++++++++++++++++++
 drivers/net/virtio/xsk.h  |   2 +
 3 files changed, 159 insertions(+), 1 deletion(-)

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index 43249c78484a..02d2f7d21bdf 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -1607,6 +1607,7 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
 	struct send_queue *sq = container_of(napi, struct send_queue, napi);
 	struct virtnet_info *vi = sq->vq->vdev->priv;
 	unsigned int index = vq2txq(sq->vq);
+	struct xsk_buff_pool *pool;
 	struct netdev_queue *txq;
 	int busy = 0;
 	int opaque;
@@ -1621,7 +1622,16 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
 	txq = netdev_get_tx_queue(vi->dev, index);
 	__netif_tx_lock(txq, raw_smp_processor_id());
 	virtqueue_disable_cb(sq->vq);
-	free_old_xmit(sq, true);
+
+	rcu_read_lock();
+	pool = rcu_dereference(sq->xsk.pool);
+	if (pool) {
+		busy |= virtnet_xsk_xmit(sq, pool, budget);
+		rcu_read_unlock();
+	} else {
+		rcu_read_unlock();
+		free_old_xmit(sq, true);
+	}
 
 	if (sq->vq->num_free >= 2 + MAX_SKB_FRAGS)
 		netif_tx_wake_queue(txq);
diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
index b96af38a2608..04db80244dbd 100644
--- a/drivers/net/virtio/xsk.c
+++ b/drivers/net/virtio/xsk.c
@@ -7,6 +7,152 @@
 
 static struct virtio_net_hdr_mrg_rxbuf xsk_hdr;
 
+static void sg_fill_dma(struct scatterlist *sg, dma_addr_t addr, u32 len)
+{
+	sg->dma_address = addr;
+	sg->length = len;
+}
+
+static void virtnet_xsk_check_queue(struct send_queue *sq)
+{
+	struct virtnet_info *vi = sq->vq->vdev->priv;
+	struct net_device *dev = vi->dev;
+	int qnum = sq - vi->sq;
+
+	/* A raw buffer queue does not check whether the queue is stopped
+	 * when sending, so there is no need to check the stopped state
+	 * for such a queue here.
+	 */
+	if (is_xdp_raw_buffer_queue(vi, qnum))
+		return;
+
+	/* If this sq is not the exclusive queue of the current cpu,
+	 * then it may be called by start_xmit, so check it running out
+	 * of space.
+	 *
+	 * Stop the queue to avoid getting packets that we are
+	 * then unable to transmit. Then wait the tx interrupt.
+	 */
+	if (sq->vq->num_free < 2 + MAX_SKB_FRAGS)
+		netif_stop_subqueue(dev, qnum);
+}
+
+static int virtnet_xsk_xmit_one(struct send_queue *sq,
+				struct xsk_buff_pool *pool,
+				struct xdp_desc *desc)
+{
+	struct virtnet_info *vi;
+	dma_addr_t addr;
+
+	vi = sq->vq->vdev->priv;
+
+	addr = xsk_buff_raw_get_dma(pool, desc->addr);
+	xsk_buff_raw_dma_sync_for_device(pool, addr, desc->len);
+
+	sg_init_table(sq->sg, 2);
+
+	sg_fill_dma(sq->sg, sq->xsk.hdr_dma_address, vi->hdr_len);
+	sg_fill_dma(sq->sg + 1, addr, desc->len);
+
+	return virtqueue_add_outbuf_premapped(sq->vq, sq->sg, 2,
+					      xsk_to_ptr(desc->len),
+					      GFP_ATOMIC);
+}
+
+enum {
+	XSK_XMIT_DONE,
+	XSK_XMIT_DEV_BUSY,
+	XSK_XMIT_NO_BUDGET
+};
+
+static int virtnet_xsk_xmit_batch(struct send_queue *sq,
+				  struct xsk_buff_pool *pool,
+				  unsigned int budget,
+				  struct virtnet_sq_stats *stats)
+{
+	int ret = XSK_XMIT_NO_BUDGET;
+	struct xdp_desc desc;
+	int err, packet = 0;
+
+	while (budget-- > 0) {
+		if (sq->vq->num_free < 2) {
+			__free_old_xmit(sq, true, stats);
+			if (sq->vq->num_free < 2) {
+				ret = XSK_XMIT_DEV_BUSY;
+				break;
+			}
+		}
+
+		if (!xsk_tx_peek_desc(pool, &desc)) {
+			ret = XSK_XMIT_DONE;
+			break;
+		}
+
+		err = virtnet_xsk_xmit_one(sq, pool, &desc);
+		if (unlikely(err)) {
+			ret = XSK_XMIT_DEV_BUSY;
+			break;
+		}
+
+		++packet;
+
+		if (virtqueue_kick_prepare(sq->vq) && virtqueue_notify(sq->vq))
+			++stats->kicks;
+	}
+
+	if (packet) {
+		stats->xdp_tx += packet;
+
+		xsk_tx_release(pool);
+	}
+
+	return ret;
+}
+
+bool virtnet_xsk_xmit(struct send_queue *sq, struct xsk_buff_pool *pool,
+		      int budget)
+{
+	struct virtnet_sq_stats stats = {};
+	bool busy;
+	int ret;
+
+	__free_old_xmit(sq, true, &stats);
+
+	if (xsk_uses_need_wakeup(pool))
+		xsk_set_tx_need_wakeup(pool);
+
+	ret = virtnet_xsk_xmit_batch(sq, pool, budget, &stats);
+	switch (ret) {
+	case XSK_XMIT_DONE:
+		/* The xsk tx queue has been fully consumed; complete napi. */
+		busy = false;
+		break;
+
+	case XSK_XMIT_NO_BUDGET:
+		/* Reached the budget limit; let napi run again. */
+		busy = true;
+		break;
+
+	case XSK_XMIT_DEV_BUSY:
+		/* The sq vring is full; complete napi and wait for tx napi
+		 * to be triggered by an interrupt.
+		 */
+		busy = false;
+		break;
+	}
+
+	virtnet_xsk_check_queue(sq);
+
+	u64_stats_update_begin(&sq->stats.syncp);
+	sq->stats.packets += stats.packets;
+	sq->stats.bytes += stats.bytes;
+	sq->stats.kicks += stats.kicks;
+	sq->stats.xdp_tx += stats.xdp_tx;
+	u64_stats_update_end(&sq->stats.syncp);
+
+	return busy;
+}
+
 static int virtnet_rq_bind_xsk_pool(struct virtnet_info *vi, struct receive_queue *rq,
 				    struct xsk_buff_pool *pool, struct net_device *dev)
 {
diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h
index ad684c812091..15f1540a5803 100644
--- a/drivers/net/virtio/xsk.h
+++ b/drivers/net/virtio/xsk.h
@@ -20,4 +20,6 @@ static inline u32 ptr_to_xsk(void *ptr)
 }
 
 int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp);
+bool virtnet_xsk_xmit(struct send_queue *sq, struct xsk_buff_pool *pool,
+		      int budget);
 #endif
-- 
2.32.0.3.g01195cf9f


* [PATCH 30/33] virtio_net: xsk: tx: support wakeup
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (28 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 29/33] virtio_net: xsk: tx: support tx Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-02 11:00 ` [PATCH 31/33] virtio_net: xsk: tx: auto wakeup when free old xmit Xuan Zhuo
                   ` (5 subsequent siblings)
  35 siblings, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

xsk wakeup is used by the xsk framework or the user to trigger the xsk
xmit logic.

Virtio-net does not support actively generating an interrupt, so we try
to trigger TX NAPI on the CPU that handled the TX interrupt.

Consider the cache effect: when the interrupt triggers, it is generally
fixed on one CPU, so it is better to start TX NAPI on that same CPU.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/main.c       |  3 ++
 drivers/net/virtio/virtio_net.h |  2 ++
 drivers/net/virtio/xsk.c        | 53 +++++++++++++++++++++++++++++++++
 drivers/net/virtio/xsk.h        |  1 +
 4 files changed, 59 insertions(+)

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index 02d2f7d21bdf..7259b27f5cba 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -1613,6 +1613,8 @@ static int virtnet_poll_tx(struct napi_struct *napi, int budget)
 	int opaque;
 	bool done;
 
+	sq->xsk.last_cpu = smp_processor_id();
+
 	if (unlikely(is_xdp_raw_buffer_queue(vi, index))) {
 		/* We don't need to enable cb for XDP */
 		napi_complete_done(napi, 0);
@@ -3197,6 +3199,7 @@ static const struct net_device_ops virtnet_netdev = {
 	.ndo_vlan_rx_kill_vid = virtnet_vlan_rx_kill_vid,
 	.ndo_bpf		= virtnet_xdp,
 	.ndo_xdp_xmit		= virtnet_xdp_xmit,
+	.ndo_xsk_wakeup         = virtnet_xsk_wakeup,
 	.ndo_features_check	= passthru_features_check,
 	.ndo_get_phys_port_name	= virtnet_get_phys_port_name,
 	.ndo_set_features	= virtnet_set_features,
diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
index dd2f7890f8cd..fc7c7a0f3c89 100644
--- a/drivers/net/virtio/virtio_net.h
+++ b/drivers/net/virtio/virtio_net.h
@@ -174,6 +174,8 @@ struct send_queue {
 		struct xsk_buff_pool __rcu *pool;
 
 		dma_addr_t hdr_dma_address;
+
+		u32 last_cpu;
 	} xsk;
 };
 
diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
index 04db80244dbd..27b7f0bb2d34 100644
--- a/drivers/net/virtio/xsk.c
+++ b/drivers/net/virtio/xsk.c
@@ -153,6 +153,59 @@ bool virtnet_xsk_xmit(struct send_queue *sq, struct xsk_buff_pool *pool,
 	return busy;
 }
 
+static void xsk_remote_trigger_napi(void *info)
+{
+	struct send_queue *sq = info;
+
+	virtqueue_napi_schedule(&sq->napi, sq->vq);
+}
+
+static void virtnet_xsk_wakeup_sq(struct send_queue *sq, bool in_napi)
+{
+	u32 last_cpu, cur_cpu;
+
+	if (napi_if_scheduled_mark_missed(&sq->napi))
+		return;
+
+	last_cpu = sq->xsk.last_cpu;
+
+	cur_cpu = get_cpu();
+
+	/* On a remote cpu, the softirq runs automatically on IPI irq exit.
+	 * On the local cpu, smp_call_function_single() does not raise an IPI,
+	 * so the softirq cannot be triggered automatically by irq exit.
+	 */
+	if (last_cpu == cur_cpu) {
+		virtqueue_napi_schedule(&sq->napi, sq->vq);
+
+		/* Not in softirq/irq context, we must raise napi tx manually. */
+		if (!in_napi)
+			napi_tx_raise();
+	} else {
+		smp_call_function_single(last_cpu, xsk_remote_trigger_napi, sq, true);
+	}
+
+	put_cpu();
+}
+
+int virtnet_xsk_wakeup(struct net_device *dev, u32 qid, u32 flag)
+{
+	struct virtnet_info *vi = netdev_priv(dev);
+	struct send_queue *sq;
+
+	if (!netif_running(dev))
+		return -ENETDOWN;
+
+	if (qid >= vi->curr_queue_pairs)
+		return -EINVAL;
+
+	sq = &vi->sq[qid];
+
+	virtnet_xsk_wakeup_sq(sq, false);
+
+	return 0;
+}
+
 static int virtnet_rq_bind_xsk_pool(struct virtnet_info *vi, struct receive_queue *rq,
 				    struct xsk_buff_pool *pool, struct net_device *dev)
 {
diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h
index 15f1540a5803..5eece0de3310 100644
--- a/drivers/net/virtio/xsk.h
+++ b/drivers/net/virtio/xsk.h
@@ -22,4 +22,5 @@ static inline u32 ptr_to_xsk(void *ptr)
 int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp);
 bool virtnet_xsk_xmit(struct send_queue *sq, struct xsk_buff_pool *pool,
 		      int budget);
+int virtnet_xsk_wakeup(struct net_device *dev, u32 qid, u32 flag);
 #endif
-- 
2.32.0.3.g01195cf9f


* [PATCH 31/33] virtio_net: xsk: tx: auto wakeup when free old xmit
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (29 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 30/33] virtio_net: xsk: tx: support wakeup Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-02 11:00 ` [PATCH 32/33] virtio_net: xsk: rx: introduce add_recvbuf_xsk() Xuan Zhuo
                   ` (4 subsequent siblings)
  35 siblings, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

If XSK xmit stops because the TX queue is full, it must then wait for
the TX interrupt to trigger the follow-up work.

But for virtio-net, recycling old buffers is not done only in TX NAPI;
it is also done in start_xmit(), RX poll, and other places.

So if xsk xmit stopped because the TX queue was full, __free_old_xmit()
will try to wake up TX NAPI.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/virtio_net.h |  5 +++--
 drivers/net/virtio/xsk.c        | 30 ++++++++++++++++++++++++++++++
 drivers/net/virtio/xsk.h        |  1 +
 3 files changed, 34 insertions(+), 2 deletions(-)

diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
index fc7c7a0f3c89..100ce48c6d55 100644
--- a/drivers/net/virtio/virtio_net.h
+++ b/drivers/net/virtio/virtio_net.h
@@ -176,6 +176,8 @@ struct send_queue {
 		dma_addr_t hdr_dma_address;
 
 		u32 last_cpu;
+
+		bool need_wakeup;
 	} xsk;
 };
 
@@ -296,8 +298,7 @@ static void __free_old_xmit(struct send_queue *sq, bool in_napi,
 		stats->packets++;
 	}
 
-	if (xsknum)
-		xsk_tx_completed(sq->xsk.pool, xsknum);
+	virtnet_xsk_complete(sq, xsknum, in_napi);
 }
 
 int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
index 27b7f0bb2d34..043b0bf2a5d7 100644
--- a/drivers/net/virtio/xsk.c
+++ b/drivers/net/virtio/xsk.c
@@ -116,6 +116,7 @@ bool virtnet_xsk_xmit(struct send_queue *sq, struct xsk_buff_pool *pool,
 	bool busy;
 	int ret;
 
+	sq->xsk.need_wakeup = false;
 	__free_old_xmit(sq, true, &stats);
 
 	if (xsk_uses_need_wakeup(pool))
@@ -138,6 +139,13 @@ bool virtnet_xsk_xmit(struct send_queue *sq, struct xsk_buff_pool *pool,
 		 * triggered by interrupt.
 		 */
 		busy = false;
+
+		/* tx poll may not be triggered by a tx interrupt, because
+		 * start_xmit() and rx poll also try to free old xmit buffers,
+		 * which can leave no completion to generate an interrupt. So
+		 * set need_wakeup so tx poll can be triggered by free_old_xmit.
+		 */
+		sq->xsk.need_wakeup = true;
 		break;
 	}
 
@@ -206,6 +214,26 @@ int virtnet_xsk_wakeup(struct net_device *dev, u32 qid, u32 flag)
 	return 0;
 }
 
+void virtnet_xsk_complete(struct send_queue *sq, u32 num, bool in_napi)
+{
+	struct xsk_buff_pool *pool;
+
+	rcu_read_lock();
+
+	pool = rcu_dereference(sq->xsk.pool);
+	if (pool) {
+		if (num)
+			xsk_tx_completed(pool, num);
+
+		if (sq->xsk.need_wakeup) {
+			sq->xsk.need_wakeup = false;
+			virtnet_xsk_wakeup_sq(sq, in_napi);
+		}
+	}
+
+	rcu_read_unlock();
+}
+
 static int virtnet_rq_bind_xsk_pool(struct virtnet_info *vi, struct receive_queue *rq,
 				    struct xsk_buff_pool *pool, struct net_device *dev)
 {
@@ -298,6 +326,8 @@ static int virtnet_xsk_pool_enable(struct net_device *dev,
 	if (err)
 		goto err_rxq;
 
+	sq->xsk.need_wakeup = false;
+
 	/* Here is already protected by rtnl_lock, so rcu_assign_pointer
 	 * is safe.
 	 */
diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h
index 5eece0de3310..f90c28972d72 100644
--- a/drivers/net/virtio/xsk.h
+++ b/drivers/net/virtio/xsk.h
@@ -19,6 +19,7 @@ static inline u32 ptr_to_xsk(void *ptr)
 	return ((unsigned long)ptr) >> VIRTIO_XSK_FLAG_OFFSET;
 }
 
+void virtnet_xsk_complete(struct send_queue *sq, u32 num, bool in_napi);
 int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp);
 bool virtnet_xsk_xmit(struct send_queue *sq, struct xsk_buff_pool *pool,
 		      int budget);
-- 
2.32.0.3.g01195cf9f


* [PATCH 32/33] virtio_net: xsk: rx: introduce add_recvbuf_xsk()
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (30 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 31/33] virtio_net: xsk: tx: auto wakeup when free old xmit Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
       [not found]   ` <Y9zJS+ugeY9qEMt9@boxer>
  2023-02-02 11:00 ` [PATCH 33/33] virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer Xuan Zhuo
                   ` (3 subsequent siblings)
  35 siblings, 1 reply; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

Implement the logic of filling the vq with XSK buffers.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/main.c | 11 +++++++++++
 drivers/net/virtio/xsk.c  | 26 ++++++++++++++++++++++++++
 drivers/net/virtio/xsk.h  |  2 ++
 3 files changed, 39 insertions(+)

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index 7259b27f5cba..2aff0eee35d3 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -1352,10 +1352,20 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi,
  */
 bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq, gfp_t gfp)
 {
+	struct xsk_buff_pool *pool;
 	int err;
 	bool oom;
 
 	do {
+		rcu_read_lock();
+		pool = rcu_dereference(rq->xsk.pool);
+		if (pool) {
+			err = add_recvbuf_xsk(vi, rq, pool, gfp);
+			rcu_read_unlock();
+			goto check;
+		}
+		rcu_read_unlock();
+
 		if (vi->mergeable_rx_bufs)
 			err = add_recvbuf_mergeable(vi, rq, gfp);
 		else if (vi->big_packets)
@@ -1363,6 +1373,7 @@ bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq, gfp_t gfp)
 		else
 			err = add_recvbuf_small(vi, rq, gfp);
 
+check:
 		oom = err == -ENOMEM;
 		if (err)
 			break;
diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
index 043b0bf2a5d7..a5e88f919c46 100644
--- a/drivers/net/virtio/xsk.c
+++ b/drivers/net/virtio/xsk.c
@@ -37,6 +37,32 @@ static void virtnet_xsk_check_queue(struct send_queue *sq)
 		netif_stop_subqueue(dev, qnum);
 }
 
+int add_recvbuf_xsk(struct virtnet_info *vi, struct receive_queue *rq,
+		    struct xsk_buff_pool *pool, gfp_t gfp)
+{
+	struct xdp_buff *xdp;
+	dma_addr_t addr;
+	u32 len;
+	int err;
+
+	xdp = xsk_buff_alloc(pool);
+	if (!xdp)
+		return -ENOMEM;
+
+	/* Use part of the XDP_PACKET_HEADROOM as the virtnet hdr space */
+	addr = xsk_buff_xdp_get_dma(xdp) - vi->hdr_len;
+	len = xsk_pool_get_rx_frame_size(pool) + vi->hdr_len;
+
+	sg_init_table(rq->sg, 1);
+	sg_fill_dma(rq->sg, addr, len);
+
+	err = virtqueue_add_inbuf_premapped(rq->vq, rq->sg, 1, xdp, gfp);
+	if (err)
+		xsk_buff_free(xdp);
+
+	return err;
+}
+
 static int virtnet_xsk_xmit_one(struct send_queue *sq,
 				struct xsk_buff_pool *pool,
 				struct xdp_desc *desc)
diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h
index f90c28972d72..5549143ef118 100644
--- a/drivers/net/virtio/xsk.h
+++ b/drivers/net/virtio/xsk.h
@@ -24,4 +24,6 @@ int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp);
 bool virtnet_xsk_xmit(struct send_queue *sq, struct xsk_buff_pool *pool,
 		      int budget);
 int virtnet_xsk_wakeup(struct net_device *dev, u32 qid, u32 flag);
+int add_recvbuf_xsk(struct virtnet_info *vi, struct receive_queue *rq,
+		    struct xsk_buff_pool *pool, gfp_t gfp);
 #endif
-- 
2.32.0.3.g01195cf9f


* [PATCH 33/33] virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (31 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 32/33] virtio_net: xsk: rx: introduce add_recvbuf_xsk() Xuan Zhuo
@ 2023-02-02 11:00 ` Xuan Zhuo
  2023-02-02 11:08 ` [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (2 subsequent siblings)
  35 siblings, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:00 UTC (permalink / raw)
  To: netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

Implement the logic of xsk RX. If XDP determines that a packet is not
for XSK, we need one copy to build an SKB. If it is for XSK, the packet
goes through the zerocopy receive path.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio/main.c       |  11 ++-
 drivers/net/virtio/virtio_net.h |   5 ++
 drivers/net/virtio/xsk.c        | 116 ++++++++++++++++++++++++++++++++
 drivers/net/virtio/xsk.h        |   4 ++
 4 files changed, 130 insertions(+), 6 deletions(-)

diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
index 2aff0eee35d3..c96183b99e46 100644
--- a/drivers/net/virtio/main.c
+++ b/drivers/net/virtio/main.c
@@ -140,11 +140,6 @@ static int rxq2vq(int rxq)
 	return rxq * 2;
 }
 
-static inline struct virtio_net_hdr_mrg_rxbuf *skb_vnet_hdr(struct sk_buff *skb)
-{
-	return (struct virtio_net_hdr_mrg_rxbuf *)skb->cb;
-}
-
 /*
  * private is used to chain pages for big packets, put the whole
  * most recent used list in the beginning for reuse
@@ -1161,13 +1156,17 @@ static void receive_buf(struct virtnet_info *vi, struct receive_queue *rq,
 		return;
 	}
 
-	if (vi->mergeable_rx_bufs)
+	rcu_read_lock();
+	if (rcu_dereference(rq->xsk.pool))
+		skb = receive_xsk(dev, vi, rq, buf, len, xdp_xmit, stats);
+	else if (vi->mergeable_rx_bufs)
 		skb = receive_mergeable(dev, vi, rq, buf, ctx, len, xdp_xmit,
 					stats);
 	else if (vi->big_packets)
 		skb = receive_big(dev, vi, rq, buf, len, stats);
 	else
 		skb = receive_small(dev, vi, rq, buf, ctx, len, xdp_xmit, stats);
+	rcu_read_unlock();
 
 	if (unlikely(!skb))
 		return;
diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
index 100ce48c6d55..d9d2e9dcc36c 100644
--- a/drivers/net/virtio/virtio_net.h
+++ b/drivers/net/virtio/virtio_net.h
@@ -220,6 +220,11 @@ struct receive_queue {
 	} xsk;
 };
 
+static inline struct virtio_net_hdr_mrg_rxbuf *skb_vnet_hdr(struct sk_buff *skb)
+{
+	return (struct virtio_net_hdr_mrg_rxbuf *)skb->cb;
+}
+
 static inline bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q)
 {
 	if (q < (vi->curr_queue_pairs - vi->xdp_queue_pairs))
diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
index a5e88f919c46..1287b25cb207 100644
--- a/drivers/net/virtio/xsk.c
+++ b/drivers/net/virtio/xsk.c
@@ -13,6 +13,18 @@ static void sg_fill_dma(struct scatterlist *sg, dma_addr_t addr, u32 len)
 	sg->length = len;
 }
 
+static unsigned int virtnet_receive_buf_num(struct virtnet_info *vi, char *buf)
+{
+	struct virtio_net_hdr_mrg_rxbuf *hdr;
+
+	if (vi->mergeable_rx_bufs) {
+		hdr = (struct virtio_net_hdr_mrg_rxbuf *)buf;
+		return virtio16_to_cpu(vi->vdev, hdr->num_buffers);
+	}
+
+	return 1;
+}
+
 static void virtnet_xsk_check_queue(struct send_queue *sq)
 {
 	struct virtnet_info *vi = sq->vq->vdev->priv;
@@ -37,6 +49,110 @@ static void virtnet_xsk_check_queue(struct send_queue *sq)
 		netif_stop_subqueue(dev, qnum);
 }
 
+static void merge_drop_follow_xdp(struct net_device *dev,
+				  struct receive_queue *rq,
+				  u32 num_buf,
+				  struct virtnet_rq_stats *stats)
+{
+	struct xdp_buff *xdp;
+	u32 len;
+
+	while (num_buf-- > 1) {
+		xdp = virtqueue_get_buf(rq->vq, &len);
+		if (unlikely(!xdp)) {
+			pr_debug("%s: rx error: %d buffers missing\n",
+				 dev->name, num_buf);
+			dev->stats.rx_length_errors++;
+			break;
+		}
+		stats->bytes += len;
+		xsk_buff_free(xdp);
+	}
+}
+
+static struct sk_buff *construct_skb(struct receive_queue *rq,
+				     struct xdp_buff *xdp)
+{
+	unsigned int metasize = xdp->data - xdp->data_meta;
+	struct sk_buff *skb;
+	unsigned int size;
+
+	size = xdp->data_end - xdp->data_hard_start;
+	skb = napi_alloc_skb(&rq->napi, size);
+	if (unlikely(!skb))
+		return NULL;
+
+	skb_reserve(skb, xdp->data_meta - xdp->data_hard_start);
+
+	size = xdp->data_end - xdp->data_meta;
+	memcpy(__skb_put(skb, size), xdp->data_meta, size);
+
+	if (metasize) {
+		__skb_pull(skb, metasize);
+		skb_metadata_set(skb, metasize);
+	}
+
+	return skb;
+}
+
+struct sk_buff *receive_xsk(struct net_device *dev, struct virtnet_info *vi,
+			    struct receive_queue *rq, void *buf,
+			    unsigned int len, unsigned int *xdp_xmit,
+			    struct virtnet_rq_stats *stats)
+{
+	struct virtio_net_hdr_mrg_rxbuf *hdr;
+	struct sk_buff *skb = NULL;
+	u32 ret, headroom, num_buf;
+	struct bpf_prog *prog;
+	struct xdp_buff *xdp;
+
+	xdp = (struct xdp_buff *)buf;
+
+	hdr = xdp->data - vi->hdr_len;
+
+	num_buf = virtnet_receive_buf_num(vi, (char *)hdr);
+	if (num_buf > 1)
+		goto drop;
+
+	len -= vi->hdr_len;
+	headroom = xdp->data - xdp->data_hard_start;
+
+	xdp_prepare_buff(xdp, xdp->data_hard_start, headroom, len, true);
+	xsk_buff_dma_sync_for_cpu(xdp, rq->xsk.pool);
+
+	ret = VIRTNET_XDP_RES_PASS;
+	rcu_read_lock();
+	prog = rcu_dereference(rq->xdp_prog);
+	if (prog)
+		ret = virtnet_xdp_handler(prog, xdp, dev, xdp_xmit, stats);
+	rcu_read_unlock();
+
+	switch (ret) {
+	case VIRTNET_XDP_RES_PASS:
+		skb = construct_skb(rq, xdp);
+		xsk_buff_free(xdp);
+		break;
+
+	case VIRTNET_XDP_RES_DROP:
+		goto drop;
+
+	case VIRTNET_XDP_RES_CONSUMED:
+		goto consumed;
+	}
+
+	return skb;
+
+drop:
+	stats->drops++;
+
+	xsk_buff_free(xdp);
+
+	if (num_buf > 1)
+		merge_drop_follow_xdp(dev, rq, num_buf, stats);
+consumed:
+	return NULL;
+}
+
 int add_recvbuf_xsk(struct virtnet_info *vi, struct receive_queue *rq,
 		    struct xsk_buff_pool *pool, gfp_t gfp)
 {
diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h
index 5549143ef118..ebdee62b9dc8 100644
--- a/drivers/net/virtio/xsk.h
+++ b/drivers/net/virtio/xsk.h
@@ -26,4 +26,8 @@ bool virtnet_xsk_xmit(struct send_queue *sq, struct xsk_buff_pool *pool,
 int virtnet_xsk_wakeup(struct net_device *dev, u32 qid, u32 flag);
 int add_recvbuf_xsk(struct virtnet_info *vi, struct receive_queue *rq,
 		    struct xsk_buff_pool *pool, gfp_t gfp);
+struct sk_buff *receive_xsk(struct net_device *dev, struct virtnet_info *vi,
+			    struct receive_queue *rq, void *buf,
+			    unsigned int len, unsigned int *xdp_xmit,
+			    struct virtnet_rq_stats *stats);
 #endif
-- 
2.32.0.3.g01195cf9f


* Re: [PATCH 00/33] virtio-net: support AF_XDP zero copy
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (32 preceding siblings ...)
  2023-02-02 11:00 ` [PATCH 33/33] virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer Xuan Zhuo
@ 2023-02-02 11:08 ` Xuan Zhuo
  2023-02-02 11:08 ` Michael S. Tsirkin
  2023-02-02 14:41 ` Paolo Abeni
  35 siblings, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:08 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	Eric Dumazet, Sebastian Andrzej Siewior, John Fastabend,
	Alexei Starovoitov, virtualization, netdev,
	Björn Töpel, Kuniyuki Iwashima, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, David S. Miller,
	Magnus Karlsson


Sorry, the subject is missing "net-next".


* Re: [PATCH 00/33] virtio-net: support AF_XDP zero copy
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (33 preceding siblings ...)
  2023-02-02 11:08 ` [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
@ 2023-02-02 11:08 ` Michael S. Tsirkin
  2023-02-02 11:11   ` Xuan Zhuo
  2023-02-02 11:44   ` Xuan Zhuo
  2023-02-02 14:41 ` Paolo Abeni
  35 siblings, 2 replies; 76+ messages in thread
From: Michael S. Tsirkin @ 2023-02-02 11:08 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Thu, Feb 02, 2023 at 07:00:25PM +0800, Xuan Zhuo wrote:
> XDP socket(AF_XDP) is an excellent bypass kernel network framework. The zero
> copy feature of xsk (XDP socket) needs to be supported by the driver. The
> performance of zero copy is very good.

Great! Any numbers to share?

> mlx5 and intel ixgbe already support
> this feature, This patch set allows virtio-net to support xsk's zerocopy xmit
> feature.
> 
> Virtio-net did not support per-queue reset, so it was impossible to support XDP
> Socket Zerocopy. At present, we have completed the work of Virtio Spec and
> Kernel in Per-Queue Reset. It is time for Virtio-Net to complete the support for
> the XDP Socket Zerocopy.
> 
> Virtio-net can not increase the queue at will, so xsk shares the queue with
> kernel.
> 
> On the other hand, Virtio-Net does not support generate interrupt manually, so
> when we wakeup tx xmit, we used some tips. If the CPU run by TX NAPI last time
> is other CPUs, use IPI to wake up NAPI on the remote CPU. If it is also the
> local CPU, then we wake up sofrirqd.
> 
> Please review.
> 
> Thanks.
> 
> 
> Xuan Zhuo (33):
>   virtio_ring: virtqueue_add() support premapped
>   virtio_ring: split: virtqueue_add_split() support premapped
>   virtio_ring: packed: virtqueue_add_packed() support premapped
>   virtio_ring: introduce virtqueue_add_outbuf_premapped()
>   virtio_ring: introduce virtqueue_add_inbuf_premapped()
>   virtio_ring: introduce virtqueue_reset()
>   virtio_ring: add api virtio_dma_map() for advance dma
>   virtio_ring: introduce dma sync api for virtio
>   xsk: xsk_buff_pool add callback for dma_sync
>   xsk: support virtio DMA map
>   virtio_net: rename free_old_xmit_skbs to free_old_xmit
>   virtio_net: unify the code for recycling the xmit ptr
>   virtio_net: virtnet_poll_tx support rescheduled
>   virtio_net: independent directory
>   virtio_net: move to virtio_net.h
>   virtio_net: introduce virtnet_xdp_handler() to seprate the logic of
>     run xdp
>   virtio_net: receive_small() use virtnet_xdp_handler()
>   virtio_net: receive_merageable() use virtnet_xdp_handler()
>   virtio_net: introduce virtnet_tx_reset()
>   virtio_net: xsk: introduce virtnet_rq_bind_xsk_pool()
>   virtio_net: xsk: introduce virtnet_xsk_pool_enable()
>   virtio_net: xsk: introduce xsk disable
>   virtio_net: xsk: support xsk setup
>   virtio_net: xsk: stop disable tx napi
>   virtio_net: xsk: __free_old_xmit distinguishes xsk buffer
>   virtio_net: virtnet_sq_free_unused_buf() check xsk buffer
>   virtio_net: virtnet_rq_free_unused_buf() check xsk buffer
>   net: introduce napi_tx_raise()
>   virtio_net: xsk: tx: support tx
>   virtio_net: xsk: tx: support wakeup
>   virtio_net: xsk: tx: auto wakeup when free old xmit
>   virtio_net: xsk: rx: introduce add_recvbuf_xsk()
>   virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
> 
>  MAINTAINERS                                 |   2 +-
>  drivers/net/Kconfig                         |   8 +-
>  drivers/net/Makefile                        |   2 +-
>  drivers/net/virtio/Kconfig                  |  11 +
>  drivers/net/virtio/Makefile                 |   8 +
>  drivers/net/{virtio_net.c => virtio/main.c} | 564 +++++++-------------
>  drivers/net/virtio/virtio_net.h             | 317 +++++++++++
>  drivers/net/virtio/xsk.c                    | 524 ++++++++++++++++++
>  drivers/net/virtio/xsk.h                    |  33 ++
>  drivers/virtio/virtio_ring.c                | 376 +++++++++++--
>  include/linux/netdevice.h                   |   7 +
>  include/linux/virtio.h                      |  29 +
>  include/net/xsk_buff_pool.h                 |   6 +
>  net/core/dev.c                              |  11 +
>  net/xdp/xsk_buff_pool.c                     |  79 ++-
>  15 files changed, 1541 insertions(+), 436 deletions(-)
>  create mode 100644 drivers/net/virtio/Kconfig
>  create mode 100644 drivers/net/virtio/Makefile
>  rename drivers/net/{virtio_net.c => virtio/main.c} (92%)
>  create mode 100644 drivers/net/virtio/virtio_net.h
>  create mode 100644 drivers/net/virtio/xsk.c
>  create mode 100644 drivers/net/virtio/xsk.h
> 
> -- 
> 2.32.0.3.g01195cf9f

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 00/33] virtio-net: support AF_XDP zero copy
  2023-02-02 11:08 ` Michael S. Tsirkin
@ 2023-02-02 11:11   ` Xuan Zhuo
  2023-02-02 11:44   ` Xuan Zhuo
  1 sibling, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:11 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Thu, 2 Feb 2023 06:08:30 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Thu, Feb 02, 2023 at 07:00:25PM +0800, Xuan Zhuo wrote:
> > AF_XDP (XDP socket) is an excellent kernel-bypass networking framework. The
> > zero-copy feature of xsk (XDP socket) needs driver support, and the
> > performance of zero copy is very good.
>
> Great! Any numbers to share?


ENV: Qemu with vhost.

                   vhost cpu | Guest APP CPU |Guest Softirq CPU | PPS
-----------------------------|---------------|------------------|------------
xmit by sockperf:     90%    |   100%        |                  |  318967
xmit by xsk:          100%   |   30%         |   33%            | 1192064
recv by sockperf:     100%   |   68%         |   100%           |  692288
recv by xsk:          100%   |   33%         |   43%            |  771670


Thanks.

>
> > mlx5 and Intel ixgbe already support
> > this feature. This patch set allows virtio-net to support xsk's zero-copy
> > xmit feature.
> >
> > Virtio-net did not support per-queue reset, so it was impossible to support
> > XDP socket zero copy. Now that the per-queue reset work in both the virtio
> > spec and the kernel is complete, it is time for virtio-net to add support
> > for XDP socket zero copy.
> >
> > Virtio-net cannot add queues at will, so xsk shares queues with the kernel.
> >
> > On the other hand, virtio-net does not support generating interrupts
> > manually, so to wake up TX processing we use a few tricks: if TX NAPI last
> > ran on a different CPU, we use an IPI to wake up NAPI on that remote CPU;
> > if it last ran on the local CPU, we wake up softirqd.
> >
> > Please review.
> >
> > Thanks.
> >
> >
> > Xuan Zhuo (33):
> >   virtio_ring: virtqueue_add() support premapped
> >   virtio_ring: split: virtqueue_add_split() support premapped
> >   virtio_ring: packed: virtqueue_add_packed() support premapped
> >   virtio_ring: introduce virtqueue_add_outbuf_premapped()
> >   virtio_ring: introduce virtqueue_add_inbuf_premapped()
> >   virtio_ring: introduce virtqueue_reset()
> >   virtio_ring: add api virtio_dma_map() for advance dma
> >   virtio_ring: introduce dma sync api for virtio
> >   xsk: xsk_buff_pool add callback for dma_sync
> >   xsk: support virtio DMA map
> >   virtio_net: rename free_old_xmit_skbs to free_old_xmit
> >   virtio_net: unify the code for recycling the xmit ptr
> >   virtio_net: virtnet_poll_tx support rescheduled
> >   virtio_net: independent directory
> >   virtio_net: move to virtio_net.h
> >   virtio_net: introduce virtnet_xdp_handler() to seprate the logic of
> >     run xdp
> >   virtio_net: receive_small() use virtnet_xdp_handler()
> >   virtio_net: receive_merageable() use virtnet_xdp_handler()
> >   virtio_net: introduce virtnet_tx_reset()
> >   virtio_net: xsk: introduce virtnet_rq_bind_xsk_pool()
> >   virtio_net: xsk: introduce virtnet_xsk_pool_enable()
> >   virtio_net: xsk: introduce xsk disable
> >   virtio_net: xsk: support xsk setup
> >   virtio_net: xsk: stop disable tx napi
> >   virtio_net: xsk: __free_old_xmit distinguishes xsk buffer
> >   virtio_net: virtnet_sq_free_unused_buf() check xsk buffer
> >   virtio_net: virtnet_rq_free_unused_buf() check xsk buffer
> >   net: introduce napi_tx_raise()
> >   virtio_net: xsk: tx: support tx
> >   virtio_net: xsk: tx: support wakeup
> >   virtio_net: xsk: tx: auto wakeup when free old xmit
> >   virtio_net: xsk: rx: introduce add_recvbuf_xsk()
> >   virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
> >
> >  MAINTAINERS                                 |   2 +-
> >  drivers/net/Kconfig                         |   8 +-
> >  drivers/net/Makefile                        |   2 +-
> >  drivers/net/virtio/Kconfig                  |  11 +
> >  drivers/net/virtio/Makefile                 |   8 +
> >  drivers/net/{virtio_net.c => virtio/main.c} | 564 +++++++-------------
> >  drivers/net/virtio/virtio_net.h             | 317 +++++++++++
> >  drivers/net/virtio/xsk.c                    | 524 ++++++++++++++++++
> >  drivers/net/virtio/xsk.h                    |  33 ++
> >  drivers/virtio/virtio_ring.c                | 376 +++++++++++--
> >  include/linux/netdevice.h                   |   7 +
> >  include/linux/virtio.h                      |  29 +
> >  include/net/xsk_buff_pool.h                 |   6 +
> >  net/core/dev.c                              |  11 +
> >  net/xdp/xsk_buff_pool.c                     |  79 ++-
> >  15 files changed, 1541 insertions(+), 436 deletions(-)
> >  create mode 100644 drivers/net/virtio/Kconfig
> >  create mode 100644 drivers/net/virtio/Makefile
> >  rename drivers/net/{virtio_net.c => virtio/main.c} (92%)
> >  create mode 100644 drivers/net/virtio/virtio_net.h
> >  create mode 100644 drivers/net/virtio/xsk.c
> >  create mode 100644 drivers/net/virtio/xsk.h
> >
> > --
> > 2.32.0.3.g01195cf9f
>


* Re: [PATCH 00/33] virtio-net: support AF_XDP zero copy
  2023-02-02 11:08 ` Michael S. Tsirkin
  2023-02-02 11:11   ` Xuan Zhuo
@ 2023-02-02 11:44   ` Xuan Zhuo
  2023-02-03  9:08     ` Michael S. Tsirkin
  1 sibling, 1 reply; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-02 11:44 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Thu, 2 Feb 2023 06:08:30 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Thu, Feb 02, 2023 at 07:00:25PM +0800, Xuan Zhuo wrote:
> > AF_XDP (XDP socket) is an excellent kernel-bypass networking framework. The
> > zero-copy feature of xsk (XDP socket) needs driver support, and the
> > performance of zero copy is very good.
>
> Great! Any numbers to share?

RESEND. The previous mail had an email formatting error.

ENV: Qemu with vhost.

                   vhost cpu | Guest APP CPU |Guest Softirq CPU | PPS
-----------------------------|---------------|------------------|------------
xmit by sockperf:     90%    |   100%        |                  |  318967
xmit by xsk:          100%   |   30%         |   33%            | 1192064
recv by sockperf:     100%   |   68%         |   100%           |  692288
recv by xsk:          100%   |   33%         |   43%            |  771670

Thanks.


>
> > mlx5 and Intel ixgbe already support
> > this feature. This patch set allows virtio-net to support xsk's zero-copy
> > xmit feature.
> >
> > Virtio-net did not support per-queue reset, so it was impossible to support
> > XDP socket zero copy. Now that the per-queue reset work in both the virtio
> > spec and the kernel is complete, it is time for virtio-net to add support
> > for XDP socket zero copy.
> >
> > Virtio-net cannot add queues at will, so xsk shares queues with the kernel.
> >
> > On the other hand, virtio-net does not support generating interrupts
> > manually, so to wake up TX processing we use a few tricks: if TX NAPI last
> > ran on a different CPU, we use an IPI to wake up NAPI on that remote CPU;
> > if it last ran on the local CPU, we wake up softirqd.
> >
> > Please review.
> >
> > Thanks.
> >
> >
> > Xuan Zhuo (33):
> >   virtio_ring: virtqueue_add() support premapped
> >   virtio_ring: split: virtqueue_add_split() support premapped
> >   virtio_ring: packed: virtqueue_add_packed() support premapped
> >   virtio_ring: introduce virtqueue_add_outbuf_premapped()
> >   virtio_ring: introduce virtqueue_add_inbuf_premapped()
> >   virtio_ring: introduce virtqueue_reset()
> >   virtio_ring: add api virtio_dma_map() for advance dma
> >   virtio_ring: introduce dma sync api for virtio
> >   xsk: xsk_buff_pool add callback for dma_sync
> >   xsk: support virtio DMA map
> >   virtio_net: rename free_old_xmit_skbs to free_old_xmit
> >   virtio_net: unify the code for recycling the xmit ptr
> >   virtio_net: virtnet_poll_tx support rescheduled
> >   virtio_net: independent directory
> >   virtio_net: move to virtio_net.h
> >   virtio_net: introduce virtnet_xdp_handler() to seprate the logic of
> >     run xdp
> >   virtio_net: receive_small() use virtnet_xdp_handler()
> >   virtio_net: receive_merageable() use virtnet_xdp_handler()
> >   virtio_net: introduce virtnet_tx_reset()
> >   virtio_net: xsk: introduce virtnet_rq_bind_xsk_pool()
> >   virtio_net: xsk: introduce virtnet_xsk_pool_enable()
> >   virtio_net: xsk: introduce xsk disable
> >   virtio_net: xsk: support xsk setup
> >   virtio_net: xsk: stop disable tx napi
> >   virtio_net: xsk: __free_old_xmit distinguishes xsk buffer
> >   virtio_net: virtnet_sq_free_unused_buf() check xsk buffer
> >   virtio_net: virtnet_rq_free_unused_buf() check xsk buffer
> >   net: introduce napi_tx_raise()
> >   virtio_net: xsk: tx: support tx
> >   virtio_net: xsk: tx: support wakeup
> >   virtio_net: xsk: tx: auto wakeup when free old xmit
> >   virtio_net: xsk: rx: introduce add_recvbuf_xsk()
> >   virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
> >
> >  MAINTAINERS                                 |   2 +-
> >  drivers/net/Kconfig                         |   8 +-
> >  drivers/net/Makefile                        |   2 +-
> >  drivers/net/virtio/Kconfig                  |  11 +
> >  drivers/net/virtio/Makefile                 |   8 +
> >  drivers/net/{virtio_net.c => virtio/main.c} | 564 +++++++-------------
> >  drivers/net/virtio/virtio_net.h             | 317 +++++++++++
> >  drivers/net/virtio/xsk.c                    | 524 ++++++++++++++++++
> >  drivers/net/virtio/xsk.h                    |  33 ++
> >  drivers/virtio/virtio_ring.c                | 376 +++++++++++--
> >  include/linux/netdevice.h                   |   7 +
> >  include/linux/virtio.h                      |  29 +
> >  include/net/xsk_buff_pool.h                 |   6 +
> >  net/core/dev.c                              |  11 +
> >  net/xdp/xsk_buff_pool.c                     |  79 ++-
> >  15 files changed, 1541 insertions(+), 436 deletions(-)
> >  create mode 100644 drivers/net/virtio/Kconfig
> >  create mode 100644 drivers/net/virtio/Makefile
> >  rename drivers/net/{virtio_net.c => virtio/main.c} (92%)
> >  create mode 100644 drivers/net/virtio/virtio_net.h
> >  create mode 100644 drivers/net/virtio/xsk.c
> >  create mode 100644 drivers/net/virtio/xsk.h
> >
> > --
> > 2.32.0.3.g01195cf9f
>


* Re: [PATCH 00/33] virtio-net: support AF_XDP zero copy
  2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
                   ` (34 preceding siblings ...)
  2023-02-02 11:08 ` Michael S. Tsirkin
@ 2023-02-02 14:41 ` Paolo Abeni
  2023-02-03  3:33   ` Xuan Zhuo
  35 siblings, 1 reply; 76+ messages in thread
From: Paolo Abeni @ 2023-02-02 14:41 UTC (permalink / raw)
  To: Xuan Zhuo, netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	John Fastabend, Björn Töpel, Alexei Starovoitov,
	Eric Dumazet, Kuniyuki Iwashima, Sebastian Andrzej Siewior,
	Jonathan Lemon, Jakub Kicinski, bpf, virtualization,
	David S. Miller, Magnus Karlsson

On Thu, 2023-02-02 at 19:00 +0800, Xuan Zhuo wrote:
> AF_XDP (XDP socket) is an excellent kernel-bypass networking framework. The
> zero-copy feature of xsk (XDP socket) needs driver support, and the
> performance of zero copy is very good. mlx5 and Intel ixgbe already support
> this feature. This patch set allows virtio-net to support xsk's zero-copy
> xmit feature.
> 
> Virtio-net did not support per-queue reset, so it was impossible to support
> XDP socket zero copy. Now that the per-queue reset work in both the virtio
> spec and the kernel is complete, it is time for virtio-net to add support
> for XDP socket zero copy.
> 
> Virtio-net cannot add queues at will, so xsk shares queues with the kernel.
> 
> On the other hand, virtio-net does not support generating interrupts
> manually, so to wake up TX processing we use a few tricks: if TX NAPI last
> ran on a different CPU, we use an IPI to wake up NAPI on that remote CPU;
> if it last ran on the local CPU, we wake up softirqd.

Thank you for the large effort.

Since this will likely need a few iterations, on next revision please
do split the work in multiple chunks to help the reviewer efforts -
from Documentation/process/maintainer-netdev.rst:

 - don't post large series (> 15 patches), break them up

In this case I guess you can split it in 1 (or even 2) pre-req series
and another one for the actual xsk zero copy support.

Thanks!

Paolo



* Re: [PATCH 18/33] virtio_net: receive_merageable() use virtnet_xdp_handler()
  2023-02-02 11:00 ` [PATCH 18/33] virtio_net: receive_merageable() " Xuan Zhuo
@ 2023-02-02 17:16   ` Michael S. Tsirkin
  0 siblings, 0 replies; 76+ messages in thread
From: Michael S. Tsirkin @ 2023-02-02 17:16 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Thu, Feb 02, 2023 at 07:00:43PM +0800, Xuan Zhuo wrote:
> receive_merageable() use virtnet_xdp_handler()
> 
> Meanwhile, support Multi Buffer XDP.
> 
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>

typo

> ---
>  drivers/net/virtio/main.c | 88 +++++++++++++++------------------------
>  1 file changed, 33 insertions(+), 55 deletions(-)
> 
> diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> index d7a856bd8862..fb82035a0b7f 100644
> --- a/drivers/net/virtio/main.c
> +++ b/drivers/net/virtio/main.c
> @@ -483,8 +483,10 @@ int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
>  			unsigned int *xdp_xmit,
>  			struct virtnet_rq_stats *stats)
>  {
> +	struct skb_shared_info *shinfo;
>  	struct xdp_frame *xdpf;
> -	int err;
> +	struct page *xdp_page;
> +	int err, i;
>  	u32 act;
>  
>  	act = bpf_prog_run_xdp(xdp_prog, xdp);
> @@ -527,6 +529,13 @@ int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
>  		trace_xdp_exception(dev, xdp_prog, act);
>  		fallthrough;
>  	case XDP_DROP:
> +		if (xdp_buff_has_frags(xdp)) {
> +			shinfo = xdp_get_shared_info_from_buff(xdp);
> +			for (i = 0; i < shinfo->nr_frags; i++) {
> +				xdp_page = skb_frag_page(&shinfo->frags[i]);
> +				put_page(xdp_page);
> +			}
> +		}
>  		return VIRTNET_XDP_RES_DROP;
>  	}
>  }
> @@ -809,7 +818,7 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
>  	unsigned int xdp_frags_truesz = 0;
>  	struct page *page;
>  	skb_frag_t *frag;
> -	int offset;
> +	int offset, i;
>  	void *ctx;
>  
>  	xdp_init_buff(xdp, frame_sz, &rq->xdp_rxq);
> @@ -842,7 +851,7 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
>  				 dev->name, *num_buf,
>  				 virtio16_to_cpu(vi->vdev, hdr->num_buffers));
>  			dev->stats.rx_length_errors++;
> -			return -EINVAL;
> +			goto err;
>  		}
>  
>  		stats->bytes += len;
> @@ -861,7 +870,7 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
>  			pr_debug("%s: rx error: len %u exceeds truesize %lu\n",
>  				 dev->name, len, (unsigned long)(truesize - room));
>  			dev->stats.rx_length_errors++;
> -			return -EINVAL;
> +			goto err;
>  		}
>  
>  		frag = &shinfo->frags[shinfo->nr_frags++];
> @@ -876,6 +885,14 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
>  
>  	*xdp_frags_truesize = xdp_frags_truesz;
>  	return 0;
> +
> +err:
> +	for (i = 0; i < shinfo->nr_frags; i++) {
> +		page = skb_frag_page(&shinfo->frags[i]);
> +		put_page(page);
> +	}
> +
> +	return -EINVAL;
>  }
>  
>  static struct sk_buff *receive_mergeable(struct net_device *dev,
> @@ -919,13 +936,10 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>  	xdp_prog = rcu_dereference(rq->xdp_prog);
>  	if (xdp_prog) {
>  		unsigned int xdp_frags_truesz = 0;
> -		struct skb_shared_info *shinfo;
> -		struct xdp_frame *xdpf;
>  		struct page *xdp_page;
>  		struct xdp_buff xdp;
>  		void *data;
>  		u32 act;
> -		int i;
>  
>  		/* Transient failure which in theory could occur if
>  		 * in-flight packets from before XDP was enabled reach
> @@ -983,69 +997,33 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>  		err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz,
>  						 &num_buf, &xdp_frags_truesz, stats);
>  		if (unlikely(err))
> -			goto err_xdp_frags;
> +			goto err_xdp;
>  
> -		act = bpf_prog_run_xdp(xdp_prog, &xdp);
> -		stats->xdp_packets++;
> +		act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
>  
>  		switch (act) {
> -		case XDP_PASS:
> +		case VIRTNET_XDP_RES_PASS:
>  			if (unlikely(xdp_page != page))
>  				put_page(page);
> +
>  			head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
>  			rcu_read_unlock();
>  			return head_skb;
> -		case XDP_TX:
> -			stats->xdp_tx++;
> -			xdpf = xdp_convert_buff_to_frame(&xdp);
> -			if (unlikely(!xdpf)) {
> -				netdev_dbg(dev, "convert buff to frame failed for xdp\n");
> -				goto err_xdp_frags;
> -			}
> -			err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> -			if (unlikely(!err)) {
> -				xdp_return_frame_rx_napi(xdpf);
> -			} else if (unlikely(err < 0)) {
> -				trace_xdp_exception(vi->dev, xdp_prog, act);
> -				goto err_xdp_frags;
> -			}
> -			*xdp_xmit |= VIRTIO_XDP_TX;
> -			if (unlikely(xdp_page != page))
> -				put_page(page);
> -			rcu_read_unlock();
> -			goto xdp_xmit;
> -		case XDP_REDIRECT:
> -			stats->xdp_redirects++;
> -			err = xdp_do_redirect(dev, &xdp, xdp_prog);
> -			if (err)
> -				goto err_xdp_frags;
> -			*xdp_xmit |= VIRTIO_XDP_REDIR;
> +
> +		case VIRTNET_XDP_RES_CONSUMED:
>  			if (unlikely(xdp_page != page))
>  				put_page(page);
> +
>  			rcu_read_unlock();
>  			goto xdp_xmit;
> -		default:
> -			bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
> -			fallthrough;
> -		case XDP_ABORTED:
> -			trace_xdp_exception(vi->dev, xdp_prog, act);
> -			fallthrough;
> -		case XDP_DROP:
> -			goto err_xdp_frags;
> -		}
> -err_xdp_frags:
> -		if (unlikely(xdp_page != page))
> -			__free_pages(xdp_page, 0);
>  
> -		if (xdp_buff_has_frags(&xdp)) {
> -			shinfo = xdp_get_shared_info_from_buff(&xdp);
> -			for (i = 0; i < shinfo->nr_frags; i++) {
> -				xdp_page = skb_frag_page(&shinfo->frags[i]);
> +		case VIRTNET_XDP_RES_DROP:
> +			if (unlikely(xdp_page != page))
>  				put_page(xdp_page);
> -			}
> -		}
>  
> -		goto err_xdp;
> +			rcu_read_unlock();
> +			goto err_xdp;
> +		}
>  	}
>  	rcu_read_unlock();
>  
> -- 
> 2.32.0.3.g01195cf9f



* Re: [PATCH 19/33] virtio_net: introduce virtnet_tx_reset()
  2023-02-02 11:00 ` [PATCH 19/33] virtio_net: introduce virtnet_tx_reset() Xuan Zhuo
@ 2023-02-02 17:23   ` Michael S. Tsirkin
  2023-02-03  4:35     ` Xuan Zhuo
  0 siblings, 1 reply; 76+ messages in thread
From: Michael S. Tsirkin @ 2023-02-02 17:23 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Thu, Feb 02, 2023 at 07:00:44PM +0800, Xuan Zhuo wrote:
> Introduce virtnet_tx_reset() to release the buffers inside virtio ring.
> 
> This is needed for xsk disable. When disable xsk, we need to relese the

typo

> buffer from xsk, so this function is needed.
> 
> This patch reuse the virtnet_tx_resize.

reuses

> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>


> ---
>  drivers/net/virtio/main.c       | 21 ++++++++++++++++++---
>  drivers/net/virtio/virtio_net.h |  1 +
>  2 files changed, 19 insertions(+), 3 deletions(-)
> 
> diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> index fb82035a0b7f..049a3bb9d88d 100644
> --- a/drivers/net/virtio/main.c
> +++ b/drivers/net/virtio/main.c
> @@ -1806,8 +1806,8 @@ static int virtnet_rx_resize(struct virtnet_info *vi,
>  	return err;
>  }
>  
> -static int virtnet_tx_resize(struct virtnet_info *vi,
> -			     struct send_queue *sq, u32 ring_num)
> +static int __virtnet_tx_reset(struct virtnet_info *vi,
> +			      struct send_queue *sq, u32 ring_num)
>  {
>  	bool running = netif_running(vi->dev);
>  	struct netdev_queue *txq;
> @@ -1833,7 +1833,11 @@ static int virtnet_tx_resize(struct virtnet_info *vi,
>  
>  	__netif_tx_unlock_bh(txq);
>  
> -	err = virtqueue_resize(sq->vq, ring_num, virtnet_sq_free_unused_buf);
> +	if (ring_num)
> +		err = virtqueue_resize(sq->vq, ring_num, virtnet_sq_free_unused_buf);
> +	else
> +		err = virtqueue_reset(sq->vq, virtnet_sq_free_unused_buf);
> +
>  	if (err)
>  		netdev_err(vi->dev, "resize tx fail: tx queue index: %d err: %d\n", qindex, err);
>

This __virtnet_tx_reset is a really weird API.

Suggest just splitting the common parts:

__virtnet_tx_pause
__virtnet_tx_resume

we can then implement virtnet_tx_resize and virtnet_tx_reset
using these two.
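
[Editorial sketch: the suggested split could look roughly like the following.
This is not code from the series; the helper names follow the suggestion
above, and the helper bodies are assumptions modeled on the quoted
virtnet_tx_resize() code.]

```c
/* Sketch only: factor the pause/resume steps out of virtnet_tx_resize()
 * so that resize and reset share them.
 */
static void __virtnet_tx_pause(struct virtnet_info *vi, struct send_queue *sq)
{
	/* disable TX NAPI and stop the netdev TX queue, as the quoted
	 * virtnet_tx_resize() does today before touching the virtqueue
	 */
}

static void __virtnet_tx_resume(struct virtnet_info *vi, struct send_queue *sq)
{
	/* restart the netdev TX queue and re-enable TX NAPI */
}

static int virtnet_tx_resize(struct virtnet_info *vi, struct send_queue *sq,
			     u32 ring_num)
{
	int err;

	__virtnet_tx_pause(vi, sq);
	err = virtqueue_resize(sq->vq, ring_num, virtnet_sq_free_unused_buf);
	__virtnet_tx_resume(vi, sq);
	return err;
}

static int virtnet_tx_reset(struct virtnet_info *vi, struct send_queue *sq)
{
	int err;

	__virtnet_tx_pause(vi, sq);
	err = virtqueue_reset(sq->vq, virtnet_sq_free_unused_buf);
	__virtnet_tx_resume(vi, sq);
	return err;
}
```

This avoids the magic "ring_num == 0 means reset" convention of
__virtnet_tx_reset() while still sharing the pause/resume logic.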

  
> @@ -1847,6 +1851,17 @@ static int virtnet_tx_resize(struct virtnet_info *vi,
>  	return err;
>  }
>  
> +static int virtnet_tx_resize(struct virtnet_info *vi,
> +			     struct send_queue *sq, u32 ring_num)
> +{
> +	return __virtnet_tx_reset(vi, sq, ring_num);
> +}
> +
> +int virtnet_tx_reset(struct virtnet_info *vi, struct send_queue *sq)
> +{
> +	return __virtnet_tx_reset(vi, sq, 0);
> +}
> +
>  /*
>   * Send command via the control virtqueue and check status.  Commands
>   * supported by the hypervisor, as indicated by feature bits, should
> diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
> index af3e7e817f9e..b46f083a630a 100644
> --- a/drivers/net/virtio/virtio_net.h
> +++ b/drivers/net/virtio/virtio_net.h
> @@ -273,4 +273,5 @@ int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
>  			struct net_device *dev,
>  			unsigned int *xdp_xmit,
>  			struct virtnet_rq_stats *stats);
> +int virtnet_tx_reset(struct virtnet_info *vi, struct send_queue *sq);
>  #endif
> -- 
> 2.32.0.3.g01195cf9f



* Re: [PATCH 24/33] virtio_net: xsk: stop disable tx napi
  2023-02-02 11:00 ` [PATCH 24/33] virtio_net: xsk: stop disable tx napi Xuan Zhuo
@ 2023-02-02 17:25   ` Michael S. Tsirkin
  2023-02-03  3:24     ` Xuan Zhuo
  0 siblings, 1 reply; 76+ messages in thread
From: Michael S. Tsirkin @ 2023-02-02 17:25 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Thu, Feb 02, 2023 at 07:00:49PM +0800, Xuan Zhuo wrote:
> Since xsk's TX queue is consumed by TX NAPI, if sq is bound to xsk, then
> we must prevent tx napi from being disabled.
> 
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/net/virtio/main.c | 9 ++++++++-
>  1 file changed, 8 insertions(+), 1 deletion(-)
> 
> diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> index ed79e750bc6c..232cf151abff 100644
> --- a/drivers/net/virtio/main.c
> +++ b/drivers/net/virtio/main.c
> @@ -2728,8 +2728,15 @@ static int virtnet_set_coalesce(struct net_device *dev,
>  		return ret;
>  
>  	if (update_napi) {
> -		for (i = 0; i < vi->max_queue_pairs; i++)
> +		for (i = 0; i < vi->max_queue_pairs; i++) {
> +			/* xsk xmit depend on the tx napi. So if xsk is active,

depends.

> +			 * prevent modifications to tx napi.
> +			 */
> +			if (rtnl_dereference(vi->sq[i].xsk.pool))
> +				continue;
> +
>  			vi->sq[i].napi.weight = napi_weight;

I don't get it.
changing napi weight does not work then.
why is this ok?


> +		}
>  	}
>  
>  	return ret;
> -- 
> 2.32.0.3.g01195cf9f



* Re: [PATCH 22/33] virtio_net: xsk: introduce xsk disable
  2023-02-02 11:00 ` [PATCH 22/33] virtio_net: xsk: introduce xsk disable Xuan Zhuo
@ 2023-02-02 23:02   ` kernel test robot
  2023-02-12  7:56   ` kernel test robot
  1 sibling, 0 replies; 76+ messages in thread
From: kernel test robot @ 2023-02-02 23:02 UTC (permalink / raw)
  To: Xuan Zhuo, netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	Jonathan Lemon, John Fastabend, Björn Töpel,
	Alexei Starovoitov, Eric Dumazet, Kuniyuki Iwashima,
	oe-kbuild-all, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	Sebastian Andrzej Siewior, Magnus Karlsson

Hi Xuan,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on net-next/master]
[also build test WARNING on next-20230202]
[cannot apply to net/master mst-vhost/linux-next linus/master v6.2-rc6]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Xuan-Zhuo/virtio_ring-virtqueue_add-support-premapped/20230202-190707
patch link:    https://lore.kernel.org/r/20230202110058.130695-23-xuanzhuo%40linux.alibaba.com
patch subject: [PATCH 22/33] virtio_net: xsk: introduce xsk disable
config: nios2-randconfig-s033-20230202 (https://download.01.org/0day-ci/archive/20230203/202302030652.8JBKpzat-lkp@intel.com/config)
compiler: nios2-linux-gcc (GCC) 12.1.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # apt-get install sparse
        # sparse version: v0.6.4-39-gce1a6720-dirty
        # https://github.com/intel-lab-lkp/linux/commit/3c385ac45368b585d2ca1a45263b4a0536cef0dd
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Xuan-Zhuo/virtio_ring-virtqueue_add-support-premapped/20230202-190707
        git checkout 3c385ac45368b585d2ca1a45263b4a0536cef0dd
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' O=build_dir ARCH=nios2 olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' O=build_dir ARCH=nios2 SHELL=/bin/bash drivers/net/virtio/

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>

sparse warnings: (new ones prefixed by >>)
>> drivers/net/virtio/xsk.c:133:35: sparse: sparse: incorrect type in argument 1 (different address spaces) @@     expected struct xsk_buff_pool *pool @@     got struct xsk_buff_pool [noderef] __rcu *pool @@
   drivers/net/virtio/xsk.c:133:35: sparse:     expected struct xsk_buff_pool *pool
   drivers/net/virtio/xsk.c:133:35: sparse:     got struct xsk_buff_pool [noderef] __rcu *pool

vim +133 drivers/net/virtio/xsk.c

   116	
   117	static int virtnet_xsk_pool_disable(struct net_device *dev, u16 qid)
   118	{
   119		struct virtnet_info *vi = netdev_priv(dev);
   120		struct receive_queue *rq;
   121		struct send_queue *sq;
   122		int err1, err2;
   123	
   124		if (qid >= vi->curr_queue_pairs)
   125			return -EINVAL;
   126	
   127		sq = &vi->sq[qid];
   128		rq = &vi->rq[qid];
   129	
   130		virtio_dma_unmap(&vi->vdev->dev, sq->xsk.hdr_dma_address, vi->hdr_len,
   131				 DMA_TO_DEVICE);
   132	
 > 133		xsk_pool_dma_unmap(sq->xsk.pool, 0);
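
The warning is an address-space mismatch: `sq->xsk.pool` is annotated `__rcu`, so it has to go through an RCU accessor before it can be passed as a plain pointer. One possible shape of a fix, assuming the disable path runs under RTNL as the XDP setup paths normally do (a sketch against the quoted code above, not the eventual fix):

```c
/* Sketch only: fetch the __rcu-annotated pool pointer through an RCU
 * accessor (rtnl_dereference(), assuming RTNL is held here), clear
 * the published pointer, and only then undo the DMA mapping. */
struct xsk_buff_pool *pool;

pool = rtnl_dereference(sq->xsk.pool);
rcu_assign_pointer(sq->xsk.pool, NULL);
synchronize_net();

xsk_pool_dma_unmap(pool, 0);
```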

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 24/33] virtio_net: xsk: stop disable tx napi
  2023-02-02 17:25   ` Michael S. Tsirkin
@ 2023-02-03  3:24     ` Xuan Zhuo
  2023-02-03  8:33       ` Michael S. Tsirkin
  0 siblings, 1 reply; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-03  3:24 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Thu, 2 Feb 2023 12:25:59 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Thu, Feb 02, 2023 at 07:00:49PM +0800, Xuan Zhuo wrote:
> > Since xsk's TX queue is consumed by TX NAPI, if sq is bound to xsk, then
> > we must stop tx napi from being disabled.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  drivers/net/virtio/main.c | 9 ++++++++-
> >  1 file changed, 8 insertions(+), 1 deletion(-)
> >
> > diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> > index ed79e750bc6c..232cf151abff 100644
> > --- a/drivers/net/virtio/main.c
> > +++ b/drivers/net/virtio/main.c
> > @@ -2728,8 +2728,15 @@ static int virtnet_set_coalesce(struct net_device *dev,
> >  		return ret;
> >
> >  	if (update_napi) {
> > -		for (i = 0; i < vi->max_queue_pairs; i++)
> > +		for (i = 0; i < vi->max_queue_pairs; i++) {
> > +			/* xsk xmit depend on the tx napi. So if xsk is active,
>
> depends.
>
> > +			 * prevent modifications to tx napi.
> > +			 */
> > +			if (rtnl_dereference(vi->sq[i].xsk.pool))
> > +				continue;
> > +
> >  			vi->sq[i].napi.weight = napi_weight;
>
> I don't get it.
> changing napi weight does not work then.
> why is this ok?


static void skb_xmit_done(struct virtqueue *vq)
{
	struct virtnet_info *vi = vq->vdev->priv;
	struct napi_struct *napi = &vi->sq[vq2txq(vq)].napi;

	/* Suppress further interrupts. */
	virtqueue_disable_cb(vq);

	if (napi->weight)
		virtqueue_napi_schedule(napi, vq);
	else
		/* We were probably waiting for more output buffers. */
		netif_wake_subqueue(vi->dev, vq2txq(vq));
}


If the weight is 0, tx napi will not be triggered again.

Thanks.

>
>
> > +		}
> >  	}
> >
> >  	return ret;
> > --
> > 2.32.0.3.g01195cf9f
>

* Re: [PATCH 00/33] virtio-net: support AF_XDP zero copy
  2023-02-02 14:41 ` Paolo Abeni
@ 2023-02-03  3:33   ` Xuan Zhuo
  2023-02-03  8:37     ` Michael S. Tsirkin
  2023-02-03  9:17     ` Michael S. Tsirkin
  0 siblings, 2 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-03  3:33 UTC (permalink / raw)
  To: Paolo Abeni
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	netdev, John Fastabend, Björn Töpel,
	Alexei Starovoitov, Eric Dumazet, Kuniyuki Iwashima,
	Sebastian Andrzej Siewior, Jonathan Lemon, Jakub Kicinski, bpf,
	virtualization, David S. Miller, Magnus Karlsson

On Thu, 02 Feb 2023 15:41:44 +0100, Paolo Abeni <pabeni@redhat.com> wrote:
> On Thu, 2023-02-02 at 19:00 +0800, Xuan Zhuo wrote:
> > XDP socket(AF_XDP) is an excellent bypass kernel network framework. The zero
> > copy feature of xsk (XDP socket) needs to be supported by the driver. The
> > performance of zero copy is very good. mlx5 and intel ixgbe already support
> > this feature, This patch set allows virtio-net to support xsk's zerocopy xmit
> > feature.
> >
> > Virtio-net did not support per-queue reset, so it was impossible to support XDP
> > Socket Zerocopy. At present, we have completed the work of Virtio Spec and
> > Kernel in Per-Queue Reset. It is time for Virtio-Net to complete the support for
> > the XDP Socket Zerocopy.
> >
> > Virtio-net can not increase the queue at will, so xsk shares the queue with
> > kernel.
> >
> > On the other hand, Virtio-Net does not support generate interrupt manually, so
> > when we wakeup tx xmit, we used some tips. If the CPU run by TX NAPI last time
> > is other CPUs, use IPI to wake up NAPI on the remote CPU. If it is also the
> > local CPU, then we wake up sofrirqd.
>
> Thank you for the large effort.
>
> Since this will likely need a few iterations, on next revision please
> do split the work in multiple chunks to help the reviewer efforts -
> from Documentation/process/maintainer-netdev.rst:
>
>  - don't post large series (> 15 patches), break them up
>
> In this case I guess you can split it in 1 (or even 2) pre-req series
> and another one for the actual xsk zero copy support.


OK.

I can split the patch set into multiple parts, such as:

* virtio core
* xsk
* virtio-net prepare
* virtio-net support xsk zerocopy

However, there is a problem: the virtio core part should go through Michael's
vhost branch. Which branch should I then post the follow-up patches to,
vhost or net-next?

Thanks.


>
> Thanks!
>
> Paolo
>

* Re: [PATCH 19/33] virtio_net: introduce virtnet_tx_reset()
  2023-02-02 17:23   ` Michael S. Tsirkin
@ 2023-02-03  4:35     ` Xuan Zhuo
  0 siblings, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-03  4:35 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Thu, 2 Feb 2023 12:23:56 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Thu, Feb 02, 2023 at 07:00:44PM +0800, Xuan Zhuo wrote:
> > Introduce virtnet_tx_reset() to release the buffers inside virtio ring.
> >
> > This is needed for xsk disable. When disable xsk, we need to relese the
>
> typo
>
> > buffer from xsk, so this function is needed.
> >
> > This patch reuse the virtnet_tx_resize.
>
> reuses
>
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
>
>
> > ---
> >  drivers/net/virtio/main.c       | 21 ++++++++++++++++++---
> >  drivers/net/virtio/virtio_net.h |  1 +
> >  2 files changed, 19 insertions(+), 3 deletions(-)
> >
> > diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> > index fb82035a0b7f..049a3bb9d88d 100644
> > --- a/drivers/net/virtio/main.c
> > +++ b/drivers/net/virtio/main.c
> > @@ -1806,8 +1806,8 @@ static int virtnet_rx_resize(struct virtnet_info *vi,
> >  	return err;
> >  }
> >
> > -static int virtnet_tx_resize(struct virtnet_info *vi,
> > -			     struct send_queue *sq, u32 ring_num)
> > +static int __virtnet_tx_reset(struct virtnet_info *vi,
> > +			      struct send_queue *sq, u32 ring_num)
> >  {
> >  	bool running = netif_running(vi->dev);
> >  	struct netdev_queue *txq;
> > @@ -1833,7 +1833,11 @@ static int virtnet_tx_resize(struct virtnet_info *vi,
> >
> >  	__netif_tx_unlock_bh(txq);
> >
> > -	err = virtqueue_resize(sq->vq, ring_num, virtnet_sq_free_unused_buf);
> > +	if (ring_num)
> > +		err = virtqueue_resize(sq->vq, ring_num, virtnet_sq_free_unused_buf);
> > +	else
> > +		err = virtqueue_reset(sq->vq, virtnet_sq_free_unused_buf);
> > +
> >  	if (err)
> >  		netdev_err(vi->dev, "resize tx fail: tx queue index: %d err: %d\n", qindex, err);
> >
>
> This __virtnet_tx_reset is a really weird API.
>
> Suggest just splitting the common parts:
>
> __virtnet_tx_pause
> __virtnet_tx_resume
>
> we can then implement virtnet_tx_resize and virtnet_tx_reset
> using these two.

Good idea.

Thanks.

>
>
> > @@ -1847,6 +1851,17 @@ static int virtnet_tx_resize(struct virtnet_info *vi,
> >  	return err;
> >  }
> >
> > +static int virtnet_tx_resize(struct virtnet_info *vi,
> > +			     struct send_queue *sq, u32 ring_num)
> > +{
> > +	return __virtnet_tx_reset(vi, sq, ring_num);
> > +}
> > +
> > +int virtnet_tx_reset(struct virtnet_info *vi, struct send_queue *sq)
> > +{
> > +	return __virtnet_tx_reset(vi, sq, 0);
> > +}
> > +
> >  /*
> >   * Send command via the control virtqueue and check status.  Commands
> >   * supported by the hypervisor, as indicated by feature bits, should
> > diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
> > index af3e7e817f9e..b46f083a630a 100644
> > --- a/drivers/net/virtio/virtio_net.h
> > +++ b/drivers/net/virtio/virtio_net.h
> > @@ -273,4 +273,5 @@ int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> >  			struct net_device *dev,
> >  			unsigned int *xdp_xmit,
> >  			struct virtnet_rq_stats *stats);
> > +int virtnet_tx_reset(struct virtnet_info *vi, struct send_queue *sq);
> >  #endif
> > --
> > 2.32.0.3.g01195cf9f
>
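
The split suggested above could be sketched as follows (hypothetical helper names taken from the review comment, with bodies elided where they would just move existing code; this is not the merged implementation):

```c
/* Sketch of the suggested refactor: the common stop/start logic from
 * virtnet_tx_resize() moves into a pause/resume pair, and resize vs.
 * reset differ only in the virtqueue operation in between. */
static void __virtnet_tx_pause(struct virtnet_info *vi, struct send_queue *sq)
{
	/* netif_tx_lock, napi_disable, netif_stop_subqueue, ... */
}

static void __virtnet_tx_resume(struct virtnet_info *vi, struct send_queue *sq)
{
	/* netif_start_subqueue, napi re-enable, unlock, ... */
}

static int virtnet_tx_resize(struct virtnet_info *vi,
			     struct send_queue *sq, u32 ring_num)
{
	int err;

	__virtnet_tx_pause(vi, sq);
	err = virtqueue_resize(sq->vq, ring_num, virtnet_sq_free_unused_buf);
	__virtnet_tx_resume(vi, sq);
	return err;
}

int virtnet_tx_reset(struct virtnet_info *vi, struct send_queue *sq)
{
	int err;

	__virtnet_tx_pause(vi, sq);
	err = virtqueue_reset(sq->vq, virtnet_sq_free_unused_buf);
	__virtnet_tx_resume(vi, sq);
	return err;
}
```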

* Re: [PATCH 09/33] xsk: xsk_buff_pool add callback for dma_sync
       [not found]   ` <CAJ8uoz2+4+wUFYF1GjF51DFBV8ZsBRtTEVWpu_2fBmFUEQzOLQ@mail.gmail.com>
@ 2023-02-03  7:01     ` Xuan Zhuo
  0 siblings, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-03  7:01 UTC (permalink / raw)
  To: Magnus Karlsson
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	netdev, John Fastabend, Björn Töpel,
	Alexei Starovoitov, Eric Dumazet, Kuniyuki Iwashima,
	Sebastian Andrzej Siewior, Jonathan Lemon, Jakub Kicinski, bpf,
	Paolo Abeni, virtualization, David S. Miller, Magnus Karlsson

On Thu, 2 Feb 2023 13:51:20 +0100, Magnus Karlsson <magnus.karlsson@gmail.com> wrote:
> On Thu, 2 Feb 2023 at 12:05, Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > Use callback to implement dma sync to simplify subsequent support for
> > virtio dma sync.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  include/net/xsk_buff_pool.h |  6 ++++++
> >  net/xdp/xsk_buff_pool.c     | 24 ++++++++++++++++++++----
> >  2 files changed, 26 insertions(+), 4 deletions(-)
> >
> > diff --git a/include/net/xsk_buff_pool.h b/include/net/xsk_buff_pool.h
> > index 3e952e569418..53b681120354 100644
> > --- a/include/net/xsk_buff_pool.h
> > +++ b/include/net/xsk_buff_pool.h
> > @@ -75,6 +75,12 @@ struct xsk_buff_pool {
> >         u32 chunk_size;
> >         u32 chunk_shift;
> >         u32 frame_len;
> > +       void (*dma_sync_for_cpu)(struct device *dev, dma_addr_t addr,
> > +                                unsigned long offset, size_t size,
> > +                                enum dma_data_direction dir);
> > +       void (*dma_sync_for_device)(struct device *dev, dma_addr_t addr,
> > +                                   unsigned long offset, size_t size,
> > +                                   enum dma_data_direction dir);
>
> If we put these two pointers here, the number of cache lines required
> in the data path for this struct will be increased from 2 to 3 which
> will likely affect performance negatively. These sync operations are
> also not used on most systems. So how about we put them in the first
> section of this struct labeled "Members only used in the control path
> first." instead. There is a 26-byte hole at the end of it that can be
> used.


Will fix.

Thanks.


>
> >         u8 cached_need_wakeup;
> >         bool uses_need_wakeup;
> >         bool dma_need_sync;
> > diff --git a/net/xdp/xsk_buff_pool.c b/net/xdp/xsk_buff_pool.c
> > index ed6c71826d31..78e325e195fa 100644
> > --- a/net/xdp/xsk_buff_pool.c
> > +++ b/net/xdp/xsk_buff_pool.c
> > @@ -403,6 +403,20 @@ static int xp_init_dma_info(struct xsk_buff_pool *pool, struct xsk_dma_map *dma_
> >         return 0;
> >  }
> >
> > +static void dma_sync_for_cpu(struct device *dev, dma_addr_t addr,
> > +                            unsigned long offset, size_t size,
> > +                            enum dma_data_direction dir)
> > +{
> > +       dma_sync_single_range_for_cpu(dev, addr, offset, size, dir);
> > +}
> > +
> > +static void dma_sync_for_device(struct device *dev, dma_addr_t addr,
> > +                               unsigned long offset, size_t size,
> > +                               enum dma_data_direction dir)
> > +{
> > +       dma_sync_single_range_for_device(dev, addr, offset, size, dir);
> > +}
> > +
> >  int xp_dma_map(struct xsk_buff_pool *pool, struct device *dev,
> >                unsigned long attrs, struct page **pages, u32 nr_pages)
> >  {
> > @@ -421,6 +435,9 @@ int xp_dma_map(struct xsk_buff_pool *pool, struct device *dev,
> >                 return 0;
> >         }
> >
> > +       pool->dma_sync_for_cpu = dma_sync_for_cpu;
> > +       pool->dma_sync_for_device = dma_sync_for_device;
> > +
> >         dma_map = xp_create_dma_map(dev, pool->netdev, nr_pages, pool->umem);
> >         if (!dma_map)
> >                 return -ENOMEM;
> > @@ -667,15 +684,14 @@ EXPORT_SYMBOL(xp_raw_get_dma);
> >
> >  void xp_dma_sync_for_cpu_slow(struct xdp_buff_xsk *xskb)
> >  {
> > -       dma_sync_single_range_for_cpu(xskb->pool->dev, xskb->dma, 0,
> > -                                     xskb->pool->frame_len, DMA_BIDIRECTIONAL);
> > +       xskb->pool->dma_sync_for_cpu(xskb->pool->dev, xskb->dma, 0,
> > +                                    xskb->pool->frame_len, DMA_BIDIRECTIONAL);
> >  }
> >  EXPORT_SYMBOL(xp_dma_sync_for_cpu_slow);
> >
> >  void xp_dma_sync_for_device_slow(struct xsk_buff_pool *pool, dma_addr_t dma,
> >                                  size_t size)
> >  {
> > -       dma_sync_single_range_for_device(pool->dev, dma, 0,
> > -                                        size, DMA_BIDIRECTIONAL);
> > +       pool->dma_sync_for_device(pool->dev, dma, 0, size, DMA_BIDIRECTIONAL);
> >  }
> >  EXPORT_SYMBOL(xp_dma_sync_for_device_slow);
> > --
> > 2.32.0.3.g01195cf9f
> >

* Re: [PATCH 24/33] virtio_net: xsk: stop disable tx napi
  2023-02-03  3:24     ` Xuan Zhuo
@ 2023-02-03  8:33       ` Michael S. Tsirkin
  2023-02-03  8:49         ` Xuan Zhuo
  0 siblings, 1 reply; 76+ messages in thread
From: Michael S. Tsirkin @ 2023-02-03  8:33 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Fri, Feb 03, 2023 at 11:24:42AM +0800, Xuan Zhuo wrote:
> On Thu, 2 Feb 2023 12:25:59 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > On Thu, Feb 02, 2023 at 07:00:49PM +0800, Xuan Zhuo wrote:
> > > Since xsk's TX queue is consumed by TX NAPI, if sq is bound to xsk, then
> > > we must stop tx napi from being disabled.
> > >
> > > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > > ---
> > >  drivers/net/virtio/main.c | 9 ++++++++-
> > >  1 file changed, 8 insertions(+), 1 deletion(-)
> > >
> > > diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> > > index ed79e750bc6c..232cf151abff 100644
> > > --- a/drivers/net/virtio/main.c
> > > +++ b/drivers/net/virtio/main.c
> > > @@ -2728,8 +2728,15 @@ static int virtnet_set_coalesce(struct net_device *dev,
> > >  		return ret;
> > >
> > >  	if (update_napi) {
> > > -		for (i = 0; i < vi->max_queue_pairs; i++)
> > > +		for (i = 0; i < vi->max_queue_pairs; i++) {
> > > +			/* xsk xmit depend on the tx napi. So if xsk is active,
> >
> > depends.
> >
> > > +			 * prevent modifications to tx napi.
> > > +			 */
> > > +			if (rtnl_dereference(vi->sq[i].xsk.pool))
> > > +				continue;
> > > +
> > >  			vi->sq[i].napi.weight = napi_weight;
> >
> > I don't get it.
> > changing napi weight does not work then.
> > why is this ok?
> 
> 
> static void skb_xmit_done(struct virtqueue *vq)
> {
> 	struct virtnet_info *vi = vq->vdev->priv;
> 	struct napi_struct *napi = &vi->sq[vq2txq(vq)].napi;
> 
> 	/* Suppress further interrupts. */
> 	virtqueue_disable_cb(vq);
> 
> 	if (napi->weight)
> 		virtqueue_napi_schedule(napi, vq);
> 	else
> 		/* We were probably waiting for more output buffers. */
> 		netif_wake_subqueue(vi->dev, vq2txq(vq));
> }
> 
> 
> If the weight is 0, tx napi will not be triggered again.
> 
> Thanks.

This needs more thought, then. First, ignoring what the user is requesting
is not nice. Second, what if napi is first disabled and then xsk is enabled?


> >
> >
> > > +		}
> > >  	}
> > >
> > >  	return ret;
> > > --
> > > 2.32.0.3.g01195cf9f
> >


* Re: [PATCH 00/33] virtio-net: support AF_XDP zero copy
  2023-02-03  3:33   ` Xuan Zhuo
@ 2023-02-03  8:37     ` Michael S. Tsirkin
       [not found]       ` <Y9zJ9j0GthvRSFHL@boxer>
  2023-02-03  9:17     ` Michael S. Tsirkin
  1 sibling, 1 reply; 76+ messages in thread
From: Michael S. Tsirkin @ 2023-02-03  8:37 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Fri, Feb 03, 2023 at 11:33:31AM +0800, Xuan Zhuo wrote:
> On Thu, 02 Feb 2023 15:41:44 +0100, Paolo Abeni <pabeni@redhat.com> wrote:
> > On Thu, 2023-02-02 at 19:00 +0800, Xuan Zhuo wrote:
> > > XDP socket(AF_XDP) is an excellent bypass kernel network framework. The zero
> > > copy feature of xsk (XDP socket) needs to be supported by the driver. The
> > > performance of zero copy is very good. mlx5 and intel ixgbe already support
> > > this feature, This patch set allows virtio-net to support xsk's zerocopy xmit
> > > feature.
> > >
> > > Virtio-net did not support per-queue reset, so it was impossible to support XDP
> > > Socket Zerocopy. At present, we have completed the work of Virtio Spec and
> > > Kernel in Per-Queue Reset. It is time for Virtio-Net to complete the support for
> > > the XDP Socket Zerocopy.
> > >
> > > Virtio-net can not increase the queue at will, so xsk shares the queue with
> > > kernel.
> > >
> > > On the other hand, Virtio-Net does not support generate interrupt manually, so
> > > when we wakeup tx xmit, we used some tips. If the CPU run by TX NAPI last time
> > > is other CPUs, use IPI to wake up NAPI on the remote CPU. If it is also the
> > > local CPU, then we wake up sofrirqd.
> >
> > Thank you for the large effort.
> >
> > Since this will likely need a few iterations, on next revision please
> > do split the work in multiple chunks to help the reviewer efforts -
> > from Documentation/process/maintainer-netdev.rst:
> >
> >  - don't post large series (> 15 patches), break them up
> >
> > In this case I guess you can split it in 1 (or even 2) pre-req series
> > and another one for the actual xsk zero copy support.
> 
> 
> OK.
> 
> I can split the patch set into multiple parts, such as:
> 
> * virtio core
> * xsk
> * virtio-net prepare
> * virtio-net support xsk zerocopy
> 
> However, there is a problem: the virtio core part should go through Michael's
> vhost branch. Which branch should I then post the follow-up patches to,
> vhost or net-next?
> 
> Thanks.

I personally think 33 patches is still manageable; no need to split.
Do try to be careful and track acks and changes: if someone sends an
ack, add it to the patch; if you change the patch, drop the acks and
log that fact in the changelog in the cover letter
so people know they need to re-review.


> 
> >
> > Thanks!
> >
> > Paolo
> >


* Re: [PATCH 20/33] virtio_net: xsk: introduce virtnet_rq_bind_xsk_pool()
  2023-02-02 11:00 ` [PATCH 20/33] virtio_net: xsk: introduce virtnet_rq_bind_xsk_pool() Xuan Zhuo
@ 2023-02-03  8:48   ` Michael S. Tsirkin
  2023-02-03  8:52     ` Xuan Zhuo
  0 siblings, 1 reply; 76+ messages in thread
From: Michael S. Tsirkin @ 2023-02-03  8:48 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Thu, Feb 02, 2023 at 07:00:45PM +0800, Xuan Zhuo wrote:
> This function is used to bind or unbind xsk pool to virtnet rq.
> 
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/net/virtio/Makefile     |  2 +-
>  drivers/net/virtio/main.c       |  8 ++---
>  drivers/net/virtio/virtio_net.h | 16 ++++++++++
>  drivers/net/virtio/xsk.c        | 56 +++++++++++++++++++++++++++++++++
>  4 files changed, 76 insertions(+), 6 deletions(-)
>  create mode 100644 drivers/net/virtio/xsk.c
> 
> diff --git a/drivers/net/virtio/Makefile b/drivers/net/virtio/Makefile
> index 15ed7c97fd4f..8c2a884d2dba 100644
> --- a/drivers/net/virtio/Makefile
> +++ b/drivers/net/virtio/Makefile
> @@ -5,4 +5,4 @@
>  
>  obj-$(CONFIG_VIRTIO_NET) += virtio_net.o
>  
> -virtio_net-y := main.o
> +virtio_net-y := main.o xsk.o
> diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> index 049a3bb9d88d..0ee23468b795 100644
> --- a/drivers/net/virtio/main.c
> +++ b/drivers/net/virtio/main.c
> @@ -110,7 +110,6 @@ struct padded_vnet_hdr {
>  	char padding[12];
>  };
>  
> -static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
>  static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
>  
>  static void *xdp_to_ptr(struct xdp_frame *ptr)
> @@ -1351,8 +1350,7 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi,
>   * before we're receiving packets, or from refill_work which is
>   * careful to disable receiving (using napi_disable).
>   */
> -static bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq,
> -			  gfp_t gfp)
> +bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq, gfp_t gfp)
>  {
>  	int err;
>  	bool oom;
> @@ -1388,7 +1386,7 @@ static void skb_recv_done(struct virtqueue *rvq)
>  	virtqueue_napi_schedule(&rq->napi, rvq);
>  }
>  
> -static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> +void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
>  {
>  	napi_enable(napi);
>  
> @@ -3284,7 +3282,7 @@ static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf)
>  		xdp_return_frame(ptr_to_xdp(buf));
>  }
>  
> -static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf)
> +void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf)

If you are making this an API now, you had better document
what it does. The same applies to other stuff you are
making non-static.
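
Documentation of the kind requested might look like this kernel-doc sketch (the wording here is illustrative, not the merged comment):

```c
/**
 * virtnet_rq_free_unused_buf - free a buffer left in an rx virtqueue
 * @vq: the virtqueue being drained (an rx queue of this device)
 * @buf: an unused buffer returned by virtqueue_detach_unused_buf()
 *
 * Called for each buffer still queued when an rx virtqueue is reset
 * or torn down, so the memory backing it can be released.
 */
```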


>  {
>  	struct virtnet_info *vi = vq->vdev->priv;
>  	int i = vq2rxq(vq);
> diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
> index b46f083a630a..4a7633714802 100644
> --- a/drivers/net/virtio/virtio_net.h
> +++ b/drivers/net/virtio/virtio_net.h
> @@ -168,6 +168,12 @@ struct send_queue {
>  
>  	/* Record whether sq is in reset state. */
>  	bool reset;
> +
> +	struct {
> +		struct xsk_buff_pool __rcu *pool;
> +
> +		dma_addr_t hdr_dma_address;
> +	} xsk;
>  };
>  
>  /* Internal representation of a receive virtqueue */
> @@ -200,6 +206,13 @@ struct receive_queue {
>  	char name[16];
>  
>  	struct xdp_rxq_info xdp_rxq;
> +
> +	struct {
> +		struct xsk_buff_pool __rcu *pool;
> +
> +		/* xdp rxq used by xsk */
> +		struct xdp_rxq_info xdp_rxq;
> +	} xsk;
>  };
>  
>  static inline bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q)
> @@ -274,4 +287,7 @@ int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
>  			unsigned int *xdp_xmit,
>  			struct virtnet_rq_stats *stats);
>  int virtnet_tx_reset(struct virtnet_info *vi, struct send_queue *sq);
> +bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq, gfp_t gfp);
> +void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi);
> +void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
>  #endif
> diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
> new file mode 100644
> index 000000000000..e01ff2abea11
> --- /dev/null
> +++ b/drivers/net/virtio/xsk.c
> @@ -0,0 +1,56 @@
> +// SPDX-License-Identifier: GPL-2.0-or-later
> +/*
> + * virtio-net xsk
> + */
> +
> +#include "virtio_net.h"
> +
> +static int virtnet_rq_bind_xsk_pool(struct virtnet_info *vi, struct receive_queue *rq,
> +				    struct xsk_buff_pool *pool, struct net_device *dev)

This static function is unused after this patch, so the compiler will
complain. Yes, it's just a warning, but it's still not nice.


> +{
> +	bool running = netif_running(vi->dev);
> +	int err, qindex;
> +
> +	qindex = rq - vi->rq;
> +
> +	if (pool) {
> +		err = xdp_rxq_info_reg(&rq->xsk.xdp_rxq, dev, qindex, rq->napi.napi_id);
> +		if (err < 0)
> +			return err;
> +
> +		err = xdp_rxq_info_reg_mem_model(&rq->xsk.xdp_rxq,
> +						 MEM_TYPE_XSK_BUFF_POOL, NULL);
> +		if (err < 0) {
> +			xdp_rxq_info_unreg(&rq->xsk.xdp_rxq);
> +			return err;
> +		}
> +
> +		xsk_pool_set_rxq_info(pool, &rq->xsk.xdp_rxq);
> +	} else {
> +		xdp_rxq_info_unreg(&rq->xsk.xdp_rxq);
> +	}
> +
> +	if (running)
> +		napi_disable(&rq->napi);
> +
> +	err = virtqueue_reset(rq->vq, virtnet_rq_free_unused_buf);
> +	if (err)
> +		netdev_err(vi->dev, "reset rx fail: rx queue index: %d err: %d\n", qindex, err);
> +
> +	if (pool) {
> +		if (err)
> +			xdp_rxq_info_unreg(&rq->xsk.xdp_rxq);
> +		else
> +			rcu_assign_pointer(rq->xsk.pool, pool);
> +	} else {
> +		rcu_assign_pointer(rq->xsk.pool, NULL);
> +	}
> +
> +	if (!try_fill_recv(vi, rq, GFP_KERNEL))
> +		schedule_delayed_work(&vi->refill, 0);
> +
> +	if (running)
> +		virtnet_napi_enable(rq->vq, &rq->napi);
> +
> +	return err;
> +}
> -- 
> 2.32.0.3.g01195cf9f


* Re: [PATCH 24/33] virtio_net: xsk: stop disable tx napi
  2023-02-03  8:33       ` Michael S. Tsirkin
@ 2023-02-03  8:49         ` Xuan Zhuo
  2023-02-03  9:29           ` Michael S. Tsirkin
  0 siblings, 1 reply; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-03  8:49 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Fri, 3 Feb 2023 03:33:41 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Fri, Feb 03, 2023 at 11:24:42AM +0800, Xuan Zhuo wrote:
> > On Thu, 2 Feb 2023 12:25:59 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > > On Thu, Feb 02, 2023 at 07:00:49PM +0800, Xuan Zhuo wrote:
> > > > Since xsk's TX queue is consumed by TX NAPI, if sq is bound to xsk, then
> > > > we must stop tx napi from being disabled.
> > > >
> > > > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > > > ---
> > > >  drivers/net/virtio/main.c | 9 ++++++++-
> > > >  1 file changed, 8 insertions(+), 1 deletion(-)
> > > >
> > > > diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> > > > index ed79e750bc6c..232cf151abff 100644
> > > > --- a/drivers/net/virtio/main.c
> > > > +++ b/drivers/net/virtio/main.c
> > > > @@ -2728,8 +2728,15 @@ static int virtnet_set_coalesce(struct net_device *dev,
> > > >  		return ret;
> > > >
> > > >  	if (update_napi) {
> > > > -		for (i = 0; i < vi->max_queue_pairs; i++)
> > > > +		for (i = 0; i < vi->max_queue_pairs; i++) {
> > > > +			/* xsk xmit depend on the tx napi. So if xsk is active,
> > >
> > > depends.
> > >
> > > > +			 * prevent modifications to tx napi.
> > > > +			 */
> > > > +			if (rtnl_dereference(vi->sq[i].xsk.pool))
> > > > +				continue;
> > > > +
> > > >  			vi->sq[i].napi.weight = napi_weight;
> > >
> > > I don't get it.
> > > changing napi weight does not work then.
> > > why is this ok?
> >
> >
> > static void skb_xmit_done(struct virtqueue *vq)
> > {
> > 	struct virtnet_info *vi = vq->vdev->priv;
> > 	struct napi_struct *napi = &vi->sq[vq2txq(vq)].napi;
> >
> > 	/* Suppress further interrupts. */
> > 	virtqueue_disable_cb(vq);
> >
> > 	if (napi->weight)
> > 		virtqueue_napi_schedule(napi, vq);
> > 	else
> > 		/* We were probably waiting for more output buffers. */
> > 		netif_wake_subqueue(vi->dev, vq2txq(vq));
> > }
> >
> >
> > If the weight is 0, tx napi will not be triggered again.
> >
> > Thanks.
>
> This needs more thought, then. First, ignoring what the user is requesting
> is not nice.

Maybe we should return an error.

> Second, what if napi is first disabled and then xsk is enabled?


static int virtnet_xsk_pool_enable(struct net_device *dev,
				   struct xsk_buff_pool *pool,
				   u16 qid)
{
	struct virtnet_info *vi = netdev_priv(dev);
	struct receive_queue *rq;
	struct send_queue *sq;
	int err;

	if (qid >= vi->curr_queue_pairs)
		return -EINVAL;

	sq = &vi->sq[qid];
	rq = &vi->rq[qid];

	/* xsk zerocopy depend on the tx napi.
	 *
	 * All xsk packets are actually consumed and sent out from the xsk tx
	 * queue under the tx napi mechanism.
	 */
->	if (!sq->napi.weight)
		return -EPERM;

Thanks.


>
>
> > >
> > >
> > > > +		}
> > > >  	}
> > > >
> > > >  	return ret;
> > > > --
> > > > 2.32.0.3.g01195cf9f
> > >
>

* Re: [PATCH 20/33] virtio_net: xsk: introduce virtnet_rq_bind_xsk_pool()
  2023-02-03  8:48   ` Michael S. Tsirkin
@ 2023-02-03  8:52     ` Xuan Zhuo
  2023-02-03  9:28       ` Michael S. Tsirkin
  0 siblings, 1 reply; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-03  8:52 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Fri, 3 Feb 2023 03:48:33 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Thu, Feb 02, 2023 at 07:00:45PM +0800, Xuan Zhuo wrote:
> > This function is used to bind or unbind an xsk pool to/from a virtnet rq.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  drivers/net/virtio/Makefile     |  2 +-
> >  drivers/net/virtio/main.c       |  8 ++---
> >  drivers/net/virtio/virtio_net.h | 16 ++++++++++
> >  drivers/net/virtio/xsk.c        | 56 +++++++++++++++++++++++++++++++++
> >  4 files changed, 76 insertions(+), 6 deletions(-)
> >  create mode 100644 drivers/net/virtio/xsk.c
> >
> > diff --git a/drivers/net/virtio/Makefile b/drivers/net/virtio/Makefile
> > index 15ed7c97fd4f..8c2a884d2dba 100644
> > --- a/drivers/net/virtio/Makefile
> > +++ b/drivers/net/virtio/Makefile
> > @@ -5,4 +5,4 @@
> >
> >  obj-$(CONFIG_VIRTIO_NET) += virtio_net.o
> >
> > -virtio_net-y := main.o
> > +virtio_net-y := main.o xsk.o
> > diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> > index 049a3bb9d88d..0ee23468b795 100644
> > --- a/drivers/net/virtio/main.c
> > +++ b/drivers/net/virtio/main.c
> > @@ -110,7 +110,6 @@ struct padded_vnet_hdr {
> >  	char padding[12];
> >  };
> >
> > -static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
> >  static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
> >
> >  static void *xdp_to_ptr(struct xdp_frame *ptr)
> > @@ -1351,8 +1350,7 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi,
> >   * before we're receiving packets, or from refill_work which is
> >   * careful to disable receiving (using napi_disable).
> >   */
> > -static bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq,
> > -			  gfp_t gfp)
> > +bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq, gfp_t gfp)
> >  {
> >  	int err;
> >  	bool oom;
> > @@ -1388,7 +1386,7 @@ static void skb_recv_done(struct virtqueue *rvq)
> >  	virtqueue_napi_schedule(&rq->napi, rvq);
> >  }
> >
> > -static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> > +void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> >  {
> >  	napi_enable(napi);
> >
> > @@ -3284,7 +3282,7 @@ static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf)
> >  		xdp_return_frame(ptr_to_xdp(buf));
> >  }
> >
> > -static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf)
> > +void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf)
>
> If you are making this an API now you better document
> what it does. Same applies to other stuff you are
> making non-static.

I agree.

>
>
> >  {
> >  	struct virtnet_info *vi = vq->vdev->priv;
> >  	int i = vq2rxq(vq);
> > diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
> > index b46f083a630a..4a7633714802 100644
> > --- a/drivers/net/virtio/virtio_net.h
> > +++ b/drivers/net/virtio/virtio_net.h
> > @@ -168,6 +168,12 @@ struct send_queue {
> >
> >  	/* Record whether sq is in reset state. */
> >  	bool reset;
> > +
> > +	struct {
> > +		struct xsk_buff_pool __rcu *pool;
> > +
> > +		dma_addr_t hdr_dma_address;
> > +	} xsk;
> >  };
> >
> >  /* Internal representation of a receive virtqueue */
> > @@ -200,6 +206,13 @@ struct receive_queue {
> >  	char name[16];
> >
> >  	struct xdp_rxq_info xdp_rxq;
> > +
> > +	struct {
> > +		struct xsk_buff_pool __rcu *pool;
> > +
> > +		/* xdp rxq used by xsk */
> > +		struct xdp_rxq_info xdp_rxq;
> > +	} xsk;
> >  };
> >
> >  static inline bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q)
> > @@ -274,4 +287,7 @@ int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> >  			unsigned int *xdp_xmit,
> >  			struct virtnet_rq_stats *stats);
> >  int virtnet_tx_reset(struct virtnet_info *vi, struct send_queue *sq);
> > +bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq, gfp_t gfp);
> > +void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi);
> > +void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
> >  #endif
> > diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
> > new file mode 100644
> > index 000000000000..e01ff2abea11
> > --- /dev/null
> > +++ b/drivers/net/virtio/xsk.c
> > @@ -0,0 +1,56 @@
> > +// SPDX-License-Identifier: GPL-2.0-or-later
> > +/*
> > + * virtio-net xsk
> > + */
> > +
> > +#include "virtio_net.h"
> > +
> > +static int virtnet_rq_bind_xsk_pool(struct virtnet_info *vi, struct receive_queue *rq,
> > +				    struct xsk_buff_pool *pool, struct net_device *dev)
>
> This static function is unused after this patch, so compiler will
> complain. Yes it's just a warning but still not nice.

Otherwise, we would need to merge some patches, which would make the review
more difficult.

Is there a better way to deal with this? Remove the static keyword?

Thanks.


>
>
> > +{
> > +	bool running = netif_running(vi->dev);
> > +	int err, qindex;
> > +
> > +	qindex = rq - vi->rq;
> > +
> > +	if (pool) {
> > +		err = xdp_rxq_info_reg(&rq->xsk.xdp_rxq, dev, qindex, rq->napi.napi_id);
> > +		if (err < 0)
> > +			return err;
> > +
> > +		err = xdp_rxq_info_reg_mem_model(&rq->xsk.xdp_rxq,
> > +						 MEM_TYPE_XSK_BUFF_POOL, NULL);
> > +		if (err < 0) {
> > +			xdp_rxq_info_unreg(&rq->xsk.xdp_rxq);
> > +			return err;
> > +		}
> > +
> > +		xsk_pool_set_rxq_info(pool, &rq->xsk.xdp_rxq);
> > +	} else {
> > +		xdp_rxq_info_unreg(&rq->xsk.xdp_rxq);
> > +	}
> > +
> > +	if (running)
> > +		napi_disable(&rq->napi);
> > +
> > +	err = virtqueue_reset(rq->vq, virtnet_rq_free_unused_buf);
> > +	if (err)
> > +		netdev_err(vi->dev, "reset rx fail: rx queue index: %d err: %d\n", qindex, err);
> > +
> > +	if (pool) {
> > +		if (err)
> > +			xdp_rxq_info_unreg(&rq->xsk.xdp_rxq);
> > +		else
> > +			rcu_assign_pointer(rq->xsk.pool, pool);
> > +	} else {
> > +		rcu_assign_pointer(rq->xsk.pool, NULL);
> > +	}
> > +
> > +	if (!try_fill_recv(vi, rq, GFP_KERNEL))
> > +		schedule_delayed_work(&vi->refill, 0);
> > +
> > +	if (running)
> > +		virtnet_napi_enable(rq->vq, &rq->napi);
> > +
> > +	return err;
> > +}
> > --
> > 2.32.0.3.g01195cf9f
>

* Re: [PATCH 15/33] virtio_net: move to virtio_net.h
  2023-02-02 11:00 ` [PATCH 15/33] virtio_net: move to virtio_net.h Xuan Zhuo
@ 2023-02-03  8:53   ` Michael S. Tsirkin
  2023-02-03  9:04     ` Xuan Zhuo
  0 siblings, 1 reply; 76+ messages in thread
From: Michael S. Tsirkin @ 2023-02-03  8:53 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Thu, Feb 02, 2023 at 07:00:40PM +0800, Xuan Zhuo wrote:
> Move some structure definitions and inline functions into the
> virtio_net.h file.
> 
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/net/virtio/main.c       | 247 +----------------------------
>  drivers/net/virtio/virtio_net.h | 265 ++++++++++++++++++++++++++++++++
>  2 files changed, 267 insertions(+), 245 deletions(-)
>  create mode 100644 drivers/net/virtio/virtio_net.h
> 
> diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> index eb7f00194b5c..5683cb576474 100644
> --- a/drivers/net/virtio/main.c
> +++ b/drivers/net/virtio/main.c
> @@ -4,24 +4,8 @@
>   * Copyright 2007 Rusty Russell <rusty@rustcorp.com.au> IBM Corporation
>   */
>  //#define DEBUG
> -#include <linux/netdevice.h>
> -#include <linux/etherdevice.h>
> -#include <linux/ethtool.h>
> -#include <linux/module.h>
> -#include <linux/virtio.h>
> -#include <linux/virtio_net.h>
> -#include <linux/bpf.h>
> -#include <linux/bpf_trace.h>
> -#include <linux/scatterlist.h>
> -#include <linux/if_vlan.h>
> -#include <linux/slab.h>
> -#include <linux/cpu.h>
> -#include <linux/average.h>
> -#include <linux/filter.h>
> -#include <linux/kernel.h>
> -#include <net/route.h>
> -#include <net/xdp.h>
> -#include <net/net_failover.h>
> +
> +#include "virtio_net.h"
>  
>  static int napi_weight = NAPI_POLL_WEIGHT;
>  module_param(napi_weight, int, 0444);


You should only move the headers that are actually needed not
everything.


> @@ -44,15 +28,6 @@ module_param(napi_tx, bool, 0644);
>  #define VIRTIO_XDP_TX		BIT(0)
>  #define VIRTIO_XDP_REDIR	BIT(1)
>  
> -#define VIRTIO_XDP_FLAG	BIT(0)
> -
> -/* RX packet size EWMA. The average packet size is used to determine the packet
> - * buffer size when refilling RX rings. As the entire RX ring may be refilled
> - * at once, the weight is chosen so that the EWMA will be insensitive to short-
> - * term, transient changes in packet size.
> - */
> -DECLARE_EWMA(pkt_len, 0, 64)
> -
>  #define VIRTNET_DRIVER_VERSION "1.0.0"
>  
>  static const unsigned long guest_offloads[] = {
> @@ -72,36 +47,6 @@ static const unsigned long guest_offloads[] = {
>  				(1ULL << VIRTIO_NET_F_GUEST_USO4) | \
>  				(1ULL << VIRTIO_NET_F_GUEST_USO6))
>  
> -struct virtnet_stat_desc {
> -	char desc[ETH_GSTRING_LEN];
> -	size_t offset;
> -};
> -
> -struct virtnet_sq_stats {
> -	struct u64_stats_sync syncp;
> -	u64 packets;
> -	u64 bytes;
> -	u64 xdp_tx;
> -	u64 xdp_tx_drops;
> -	u64 kicks;
> -	u64 tx_timeouts;
> -};
> -
> -struct virtnet_rq_stats {
> -	struct u64_stats_sync syncp;
> -	u64 packets;
> -	u64 bytes;
> -	u64 drops;
> -	u64 xdp_packets;
> -	u64 xdp_tx;
> -	u64 xdp_redirects;
> -	u64 xdp_drops;
> -	u64 kicks;
> -};
> -
> -#define VIRTNET_SQ_STAT(m)	offsetof(struct virtnet_sq_stats, m)
> -#define VIRTNET_RQ_STAT(m)	offsetof(struct virtnet_rq_stats, m)
> -
>  static const struct virtnet_stat_desc virtnet_sq_stats_desc[] = {
>  	{ "packets",		VIRTNET_SQ_STAT(packets) },
>  	{ "bytes",		VIRTNET_SQ_STAT(bytes) },
> @@ -125,57 +70,6 @@ static const struct virtnet_stat_desc virtnet_rq_stats_desc[] = {
>  #define VIRTNET_SQ_STATS_LEN	ARRAY_SIZE(virtnet_sq_stats_desc)
>  #define VIRTNET_RQ_STATS_LEN	ARRAY_SIZE(virtnet_rq_stats_desc)
>  
> -/* Internal representation of a send virtqueue */
> -struct send_queue {
> -	/* Virtqueue associated with this send _queue */
> -	struct virtqueue *vq;
> -
> -	/* TX: fragments + linear part + virtio header */
> -	struct scatterlist sg[MAX_SKB_FRAGS + 2];
> -
> -	/* Name of the send queue: output.$index */
> -	char name[16];
> -
> -	struct virtnet_sq_stats stats;
> -
> -	struct napi_struct napi;
> -
> -	/* Record whether sq is in reset state. */
> -	bool reset;
> -};
> -
> -/* Internal representation of a receive virtqueue */
> -struct receive_queue {
> -	/* Virtqueue associated with this receive_queue */
> -	struct virtqueue *vq;
> -
> -	struct napi_struct napi;
> -
> -	struct bpf_prog __rcu *xdp_prog;
> -
> -	struct virtnet_rq_stats stats;
> -
> -	/* Chain pages by the private ptr. */
> -	struct page *pages;
> -
> -	/* Average packet length for mergeable receive buffers. */
> -	struct ewma_pkt_len mrg_avg_pkt_len;
> -
> -	/* Page frag for packet buffer allocation. */
> -	struct page_frag alloc_frag;
> -
> -	/* RX: fragments + linear part + virtio header */
> -	struct scatterlist sg[MAX_SKB_FRAGS + 2];
> -
> -	/* Min single buffer size for mergeable buffers case. */
> -	unsigned int min_buf_len;
> -
> -	/* Name of this receive queue: input.$index */
> -	char name[16];
> -
> -	struct xdp_rxq_info xdp_rxq;
> -};
> -
>  /* This structure can contain rss message with maximum settings for indirection table and keysize
>   * Note, that default structure that describes RSS configuration virtio_net_rss_config
>   * contains same info but can't handle table values.
> @@ -206,90 +100,6 @@ struct control_buf {
>  	struct virtio_net_ctrl_rss rss;
>  };
>  
> -struct virtnet_info {
> -	struct virtio_device *vdev;
> -	struct virtqueue *cvq;
> -	struct net_device *dev;
> -	struct send_queue *sq;
> -	struct receive_queue *rq;
> -	unsigned int status;
> -
> -	/* Max # of queue pairs supported by the device */
> -	u16 max_queue_pairs;
> -
> -	/* # of queue pairs currently used by the driver */
> -	u16 curr_queue_pairs;
> -
> -	/* # of XDP queue pairs currently used by the driver */
> -	u16 xdp_queue_pairs;
> -
> -	/* xdp_queue_pairs may be 0, when xdp is already loaded. So add this. */
> -	bool xdp_enabled;
> -
> -	/* I like... big packets and I cannot lie! */
> -	bool big_packets;
> -
> -	/* number of sg entries allocated for big packets */
> -	unsigned int big_packets_num_skbfrags;
> -
> -	/* Host will merge rx buffers for big packets (shake it! shake it!) */
> -	bool mergeable_rx_bufs;
> -
> -	/* Host supports rss and/or hash report */
> -	bool has_rss;
> -	bool has_rss_hash_report;
> -	u8 rss_key_size;
> -	u16 rss_indir_table_size;
> -	u32 rss_hash_types_supported;
> -	u32 rss_hash_types_saved;
> -
> -	/* Has control virtqueue */
> -	bool has_cvq;
> -
> -	/* Host can handle any s/g split between our header and packet data */
> -	bool any_header_sg;
> -
> -	/* Packet virtio header size */
> -	u8 hdr_len;
> -
> -	/* Work struct for delayed refilling if we run low on memory. */
> -	struct delayed_work refill;
> -
> -	/* Is delayed refill enabled? */
> -	bool refill_enabled;
> -
> -	/* The lock to synchronize the access to refill_enabled */
> -	spinlock_t refill_lock;
> -
> -	/* Work struct for config space updates */
> -	struct work_struct config_work;
> -
> -	/* Does the affinity hint is set for virtqueues? */
> -	bool affinity_hint_set;
> -
> -	/* CPU hotplug instances for online & dead */
> -	struct hlist_node node;
> -	struct hlist_node node_dead;
> -
> -	struct control_buf *ctrl;
> -
> -	/* Ethtool settings */
> -	u8 duplex;
> -	u32 speed;
> -
> -	/* Interrupt coalescing settings */
> -	u32 tx_usecs;
> -	u32 rx_usecs;
> -	u32 tx_max_packets;
> -	u32 rx_max_packets;
> -
> -	unsigned long guest_offloads;
> -	unsigned long guest_offloads_capable;
> -
> -	/* failover when STANDBY feature enabled */
> -	struct failover *failover;
> -};
> -
>  struct padded_vnet_hdr {
>  	struct virtio_net_hdr_v1_hash hdr;
>  	/*
> @@ -303,45 +113,11 @@ struct padded_vnet_hdr {
>  static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
>  static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
>  
> -static bool is_xdp_frame(void *ptr)
> -{
> -	return (unsigned long)ptr & VIRTIO_XDP_FLAG;
> -}
> -
>  static void *xdp_to_ptr(struct xdp_frame *ptr)
>  {
>  	return (void *)((unsigned long)ptr | VIRTIO_XDP_FLAG);
>  }
>  
> -static struct xdp_frame *ptr_to_xdp(void *ptr)
> -{
> -	return (struct xdp_frame *)((unsigned long)ptr & ~VIRTIO_XDP_FLAG);
> -}
> -
> -static void __free_old_xmit(struct send_queue *sq, bool in_napi,
> -			    struct virtnet_sq_stats *stats)
> -{
> -	unsigned int len;
> -	void *ptr;
> -
> -	while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
> -		if (!is_xdp_frame(ptr)) {
> -			struct sk_buff *skb = ptr;
> -
> -			pr_debug("Sent skb %p\n", skb);
> -
> -			stats->bytes += skb->len;
> -			napi_consume_skb(skb, in_napi);
> -		} else {
> -			struct xdp_frame *frame = ptr_to_xdp(ptr);
> -
> -			stats->bytes += xdp_get_frame_len(frame);
> -			xdp_return_frame(frame);
> -		}
> -		stats->packets++;
> -	}
> -}
> -
>  /* Converting between virtqueue no. and kernel tx/rx queue no.
>   * 0:rx0 1:tx0 2:rx1 3:tx1 ... 2N:rxN 2N+1:txN 2N+2:cvq
>   */
> @@ -411,15 +187,6 @@ static void disable_delayed_refill(struct virtnet_info *vi)
>  	spin_unlock_bh(&vi->refill_lock);
>  }
>  
> -static void virtqueue_napi_schedule(struct napi_struct *napi,
> -				    struct virtqueue *vq)
> -{
> -	if (napi_schedule_prep(napi)) {
> -		virtqueue_disable_cb(vq);
> -		__napi_schedule(napi);
> -	}
> -}
> -
>  static void virtqueue_napi_complete(struct napi_struct *napi,
>  				    struct virtqueue *vq, int processed)
>  {
> @@ -1740,16 +1507,6 @@ static void free_old_xmit(struct send_queue *sq, bool in_napi)
>  	u64_stats_update_end(&sq->stats.syncp);
>  }
>  
> -static bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q)
> -{
> -	if (q < (vi->curr_queue_pairs - vi->xdp_queue_pairs))
> -		return false;
> -	else if (q < vi->curr_queue_pairs)
> -		return true;
> -	else
> -		return false;
> -}
> -
>  static void virtnet_poll_cleantx(struct receive_queue *rq)
>  {
>  	struct virtnet_info *vi = rq->vq->vdev->priv;
> diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
> new file mode 100644
> index 000000000000..8bf31429ae28
> --- /dev/null
> +++ b/drivers/net/virtio/virtio_net.h
> @@ -0,0 +1,265 @@
> +/* SPDX-License-Identifier: GPL-2.0-or-later */
> +
> +#ifndef __VIRTIO_NET_H__
> +#define __VIRTIO_NET_H__
> +#include <linux/netdevice.h>
> +#include <linux/etherdevice.h>
> +#include <linux/ethtool.h>
> +#include <linux/module.h>
> +#include <linux/virtio.h>
> +#include <linux/virtio_net.h>
> +#include <linux/bpf.h>
> +#include <linux/bpf_trace.h>
> +#include <linux/scatterlist.h>
> +#include <linux/if_vlan.h>
> +#include <linux/slab.h>
> +#include <linux/cpu.h>
> +#include <linux/average.h>
> +#include <linux/filter.h>
> +#include <linux/kernel.h>
> +#include <net/route.h>
> +#include <net/xdp.h>
> +#include <net/net_failover.h>
> +#include <net/xdp_sock_drv.h>
> +
> +#define VIRTIO_XDP_FLAG	BIT(0)
> +
> +struct virtnet_info {
> +	struct virtio_device *vdev;
> +	struct virtqueue *cvq;
> +	struct net_device *dev;
> +	struct send_queue *sq;
> +	struct receive_queue *rq;
> +	unsigned int status;
> +
> +	/* Max # of queue pairs supported by the device */
> +	u16 max_queue_pairs;
> +
> +	/* # of queue pairs currently used by the driver */
> +	u16 curr_queue_pairs;
> +
> +	/* # of XDP queue pairs currently used by the driver */
> +	u16 xdp_queue_pairs;
> +
> +	/* xdp_queue_pairs may be 0, when xdp is already loaded. So add this. */
> +	bool xdp_enabled;
> +
> +	/* I like... big packets and I cannot lie! */
> +	bool big_packets;
> +
> +	/* number of sg entries allocated for big packets */
> +	unsigned int big_packets_num_skbfrags;
> +
> +	/* Host will merge rx buffers for big packets (shake it! shake it!) */
> +	bool mergeable_rx_bufs;
> +
> +	/* Host supports rss and/or hash report */
> +	bool has_rss;
> +	bool has_rss_hash_report;
> +	u8 rss_key_size;
> +	u16 rss_indir_table_size;
> +	u32 rss_hash_types_supported;
> +	u32 rss_hash_types_saved;
> +
> +	/* Has control virtqueue */
> +	bool has_cvq;
> +
> +	/* Host can handle any s/g split between our header and packet data */
> +	bool any_header_sg;
> +
> +	/* Packet virtio header size */
> +	u8 hdr_len;
> +
> +	/* Work struct for delayed refilling if we run low on memory. */
> +	struct delayed_work refill;
> +
> +	/* Is delayed refill enabled? */
> +	bool refill_enabled;
> +
> +	/* The lock to synchronize the access to refill_enabled */
> +	spinlock_t refill_lock;
> +
> +	/* Work struct for config space updates */
> +	struct work_struct config_work;
> +
> +	/* Does the affinity hint is set for virtqueues? */
> +	bool affinity_hint_set;
> +
> +	/* CPU hotplug instances for online & dead */
> +	struct hlist_node node;
> +	struct hlist_node node_dead;
> +
> +	struct control_buf *ctrl;
> +
> +	/* Ethtool settings */
> +	u8 duplex;
> +	u32 speed;
> +
> +	/* Interrupt coalescing settings */
> +	u32 tx_usecs;
> +	u32 rx_usecs;
> +	u32 tx_max_packets;
> +	u32 rx_max_packets;
> +
> +	unsigned long guest_offloads;
> +	unsigned long guest_offloads_capable;
> +
> +	/* failover when STANDBY feature enabled */
> +	struct failover *failover;
> +};
> +
> +/* RX packet size EWMA. The average packet size is used to determine the packet
> + * buffer size when refilling RX rings. As the entire RX ring may be refilled
> + * at once, the weight is chosen so that the EWMA will be insensitive to short-
> + * term, transient changes in packet size.
> + */
> +DECLARE_EWMA(pkt_len, 0, 64)
> +
> +struct virtnet_stat_desc {
> +	char desc[ETH_GSTRING_LEN];
> +	size_t offset;
> +};
> +
> +struct virtnet_sq_stats {
> +	struct u64_stats_sync syncp;
> +	u64 packets;
> +	u64 bytes;
> +	u64 xdp_tx;
> +	u64 xdp_tx_drops;
> +	u64 kicks;
> +	u64 tx_timeouts;
> +};
> +
> +struct virtnet_rq_stats {
> +	struct u64_stats_sync syncp;
> +	u64 packets;
> +	u64 bytes;
> +	u64 drops;
> +	u64 xdp_packets;
> +	u64 xdp_tx;
> +	u64 xdp_redirects;
> +	u64 xdp_drops;
> +	u64 kicks;
> +};
> +
> +#define VIRTNET_SQ_STAT(m)	offsetof(struct virtnet_sq_stats, m)
> +#define VIRTNET_RQ_STAT(m)	offsetof(struct virtnet_rq_stats, m)
> +
> +/* Internal representation of a send virtqueue */
> +struct send_queue {
> +	/* Virtqueue associated with this send _queue */
> +	struct virtqueue *vq;
> +
> +	/* TX: fragments + linear part + virtio header */
> +	struct scatterlist sg[MAX_SKB_FRAGS + 2];
> +
> +	/* Name of the send queue: output.$index */
> +	char name[16];
> +
> +	struct virtnet_sq_stats stats;
> +
> +	struct napi_struct napi;
> +
> +	/* Record whether sq is in reset state. */
> +	bool reset;
> +};
> +
> +/* Internal representation of a receive virtqueue */
> +struct receive_queue {
> +	/* Virtqueue associated with this receive_queue */
> +	struct virtqueue *vq;
> +
> +	struct napi_struct napi;
> +
> +	struct bpf_prog __rcu *xdp_prog;
> +
> +	struct virtnet_rq_stats stats;
> +
> +	/* Chain pages by the private ptr. */
> +	struct page *pages;
> +
> +	/* Average packet length for mergeable receive buffers. */
> +	struct ewma_pkt_len mrg_avg_pkt_len;
> +
> +	/* Page frag for packet buffer allocation. */
> +	struct page_frag alloc_frag;
> +
> +	/* RX: fragments + linear part + virtio header */
> +	struct scatterlist sg[MAX_SKB_FRAGS + 2];
> +
> +	/* Min single buffer size for mergeable buffers case. */
> +	unsigned int min_buf_len;
> +
> +	/* Name of this receive queue: input.$index */
> +	char name[16];
> +
> +	struct xdp_rxq_info xdp_rxq;
> +};
> +
> +static inline bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q)
> +{
> +	if (q < (vi->curr_queue_pairs - vi->xdp_queue_pairs))
> +		return false;
> +	else if (q < vi->curr_queue_pairs)
> +		return true;
> +	else
> +		return false;
> +}
> +
> +static inline void virtnet_return_xdp_frame(struct send_queue *sq,
> +					    struct xdp_frame *frame)
> +{
> +	struct virtnet_info *vi = sq->vq->vdev->priv;
> +	dma_addr_t *p_addr, addr;
> +
> +	p_addr = frame->data - sizeof(*p_addr);
> +	addr = *p_addr;
> +
> +	virtio_dma_unmap(&vi->vdev->dev, addr, frame->len, DMA_TO_DEVICE);
> +
> +	xdp_return_frame(frame);
> +}
> +
> +static inline void virtqueue_napi_schedule(struct napi_struct *napi,
> +					   struct virtqueue *vq)
> +{
> +	if (napi_schedule_prep(napi)) {
> +		virtqueue_disable_cb(vq);
> +		__napi_schedule(napi);
> +	}
> +}
> +
> +static inline bool is_xdp_frame(void *ptr)
> +{
> +	return (unsigned long)ptr & VIRTIO_XDP_FLAG;
> +}
> +
> +static struct xdp_frame *ptr_to_xdp(void *ptr)
> +{
> +	return (struct xdp_frame *)((unsigned long)ptr & ~VIRTIO_XDP_FLAG);
> +}
> +
> +static void __free_old_xmit(struct send_queue *sq, bool in_napi,
> +			    struct virtnet_sq_stats *stats)
> +{
> +	unsigned int len;
> +	void *ptr;
> +
> +	while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
> +		if (!is_xdp_frame(ptr)) {
> +			struct sk_buff *skb = ptr;
> +
> +			pr_debug("Sent skb %p\n", skb);
> +
> +			stats->bytes += skb->len;
> +			napi_consume_skb(skb, in_napi);
> +		} else {
> +			struct xdp_frame *frame = ptr_to_xdp(ptr);
> +
> +			stats->bytes += xdp_get_frame_len(frame);
> +			xdp_return_frame(frame);
> +		}
> +		stats->packets++;
> +	}
> +}
> +#endif

All these APIs that are not prefixed with virtnet were OK as internal
static functions. That is no longer OK in a header.


> -- 
> 2.32.0.3.g01195cf9f


* Re: [PATCH 16/33] virtio_net: introduce virtnet_xdp_handler() to seprate the logic of run xdp
  2023-02-02 11:00 ` [PATCH 16/33] virtio_net: introduce virtnet_xdp_handler() to seprate the logic of run xdp Xuan Zhuo
@ 2023-02-03  8:55   ` Michael S. Tsirkin
  2023-02-03  9:01     ` Xuan Zhuo
  0 siblings, 1 reply; 76+ messages in thread
From: Michael S. Tsirkin @ 2023-02-03  8:55 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Thu, Feb 02, 2023 at 07:00:41PM +0800, Xuan Zhuo wrote:
> At present, we have two long, similar pieces of logic for running XDP
> programs, and the XSK implementation will need this as well.
> 
> Therefore, this patch separates the code that executes XDP, which helps
> later maintenance and lets the subsequent XSK code reuse it.
> 
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>

So you first add a new function then move users over.
This means that it's hard during review to make sure
nothing is lost in translation.
Do the refactoring in a single patch instead.

> ---
>  drivers/net/virtio/main.c       | 53 +++++++++++++++++++++++++++++++++
>  drivers/net/virtio/virtio_net.h | 11 +++++++
>  2 files changed, 64 insertions(+)
> 
> diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> index 5683cb576474..9d4b84b23ef7 100644
> --- a/drivers/net/virtio/main.c
> +++ b/drivers/net/virtio/main.c
> @@ -478,6 +478,59 @@ static int virtnet_xdp_xmit(struct net_device *dev,
>  	return ret;
>  }
>  
> +int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> +			struct net_device *dev,
> +			unsigned int *xdp_xmit,
> +			struct virtnet_rq_stats *stats)
> +{
> +	struct xdp_frame *xdpf;
> +	int err;
> +	u32 act;
> +
> +	act = bpf_prog_run_xdp(xdp_prog, xdp);
> +	stats->xdp_packets++;
> +
> +	switch (act) {
> +	case XDP_PASS:
> +		return VIRTNET_XDP_RES_PASS;
> +
> +	case XDP_TX:
> +		stats->xdp_tx++;
> +		xdpf = xdp_convert_buff_to_frame(xdp);
> +		if (unlikely(!xdpf))
> +			return VIRTNET_XDP_RES_DROP;
> +
> +		err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> +		if (unlikely(!err)) {
> +			xdp_return_frame_rx_napi(xdpf);
> +		} else if (unlikely(err < 0)) {
> +			trace_xdp_exception(dev, xdp_prog, act);
> +			return VIRTNET_XDP_RES_DROP;
> +		}
> +
> +		*xdp_xmit |= VIRTIO_XDP_TX;
> +		return VIRTNET_XDP_RES_CONSUMED;
> +
> +	case XDP_REDIRECT:
> +		stats->xdp_redirects++;
> +		err = xdp_do_redirect(dev, xdp, xdp_prog);
> +		if (err)
> +			return VIRTNET_XDP_RES_DROP;
> +
> +		*xdp_xmit |= VIRTIO_XDP_REDIR;
> +		return VIRTNET_XDP_RES_CONSUMED;
> +
> +	default:
> +		bpf_warn_invalid_xdp_action(dev, xdp_prog, act);
> +		fallthrough;
> +	case XDP_ABORTED:
> +		trace_xdp_exception(dev, xdp_prog, act);
> +		fallthrough;
> +	case XDP_DROP:
> +		return VIRTNET_XDP_RES_DROP;
> +	}
> +}
> +
>  static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
>  {
>  	return vi->xdp_enabled ? VIRTIO_XDP_HEADROOM : 0;
> diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
> index 8bf31429ae28..af3e7e817f9e 100644
> --- a/drivers/net/virtio/virtio_net.h
> +++ b/drivers/net/virtio/virtio_net.h
> @@ -22,6 +22,12 @@
>  #include <net/net_failover.h>
>  #include <net/xdp_sock_drv.h>
>  
> +enum {
> +	VIRTNET_XDP_RES_PASS,
> +	VIRTNET_XDP_RES_DROP,
> +	VIRTNET_XDP_RES_CONSUMED,
> +};
> +
>  #define VIRTIO_XDP_FLAG	BIT(0)
>  
>  struct virtnet_info {
> @@ -262,4 +268,9 @@ static void __free_old_xmit(struct send_queue *sq, bool in_napi,
>  		stats->packets++;
>  	}
>  }
> +
> +int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> +			struct net_device *dev,
> +			unsigned int *xdp_xmit,
> +			struct virtnet_rq_stats *stats);
>  #endif
> -- 
> 2.32.0.3.g01195cf9f


* Re: [PATCH 29/33] virtio_net: xsk: tx: support tx
       [not found]   ` <Y9zIPdKmTvXqyuYS@boxer>
@ 2023-02-03  8:55     ` Xuan Zhuo
  0 siblings, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-03  8:55 UTC (permalink / raw)
  To: Maciej Fijalkowski
  Cc: Petr Machata, Menglong Dong, Jesper Dangaard Brouer,
	Daniel Borkmann, Michael S. Tsirkin, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Fri, 3 Feb 2023 09:39:25 +0100, Maciej Fijalkowski <maciej.fijalkowski@intel.com> wrote:
> On Thu, Feb 02, 2023 at 07:00:54PM +0800, Xuan Zhuo wrote:
> > The driver's tx napi is very important for XSK. It is responsible for
> > obtaining data from the XSK queue and sending it out.
> >
> > At the beginning, we need to trigger tx napi.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  drivers/net/virtio/main.c |  12 +++-
> >  drivers/net/virtio/xsk.c  | 146 ++++++++++++++++++++++++++++++++++++++
> >  drivers/net/virtio/xsk.h  |   2 +
> >  3 files changed, 159 insertions(+), 1 deletion(-)
> >
>
> (...)
>
> > +static int virtnet_xsk_xmit_batch(struct send_queue *sq,
> > +				  struct xsk_buff_pool *pool,
> > +				  unsigned int budget,
> > +				  struct virtnet_sq_stats *stats)
> > +{
> > +	int ret = XSK_XMIT_NO_BUDGET;
> > +	struct xdp_desc desc;
> > +	int err, packet = 0;
> > +
> > +	while (budget-- > 0) {
> > +		if (sq->vq->num_free < 2) {
> > +			__free_old_xmit(sq, true, stats);
> > +			if (sq->vq->num_free < 2) {
> > +				ret = XSK_XMIT_DEV_BUSY;
> > +				break;
> > +			}
> > +		}
> > +
> > +		if (!xsk_tx_peek_desc(pool, &desc)) {
>
> is there anything that stopped you from using xsk_tx_peek_release_desc_batch()?


Great!

Will fix.

Thanks.


>
> > +			ret = XSK_XMIT_DONE;
> > +			break;
> > +		}
> > +
> > +		err = virtnet_xsk_xmit_one(sq, pool, &desc);
> > +		if (unlikely(err)) {
> > +			ret = XSK_XMIT_DEV_BUSY;
> > +			break;
> > +		}
> > +
> > +		++packet;
> > +
> > +		if (virtqueue_kick_prepare(sq->vq) && virtqueue_notify(sq->vq))
> > +			++stats->kicks;
> > +	}
> > +
> > +	if (packet) {
> > +		stats->xdp_tx += packet;
> > +
> > +		xsk_tx_release(pool);
> > +	}
> > +
> > +	return ret;
> > +}
> > +
> > +bool virtnet_xsk_xmit(struct send_queue *sq, struct xsk_buff_pool *pool,
> > +		      int budget)
> > +{
> > +	struct virtnet_sq_stats stats = {};
> > +	bool busy;
> > +	int ret;
> > +
> > +	__free_old_xmit(sq, true, &stats);
> > +
> > +	if (xsk_uses_need_wakeup(pool))
> > +		xsk_set_tx_need_wakeup(pool);
> > +
> > +	ret = virtnet_xsk_xmit_batch(sq, pool, budget, &stats);
> > +	switch (ret) {
> > +	case XSK_XMIT_DONE:
> > +		/* The xsk tx queue has been fully consumed; we should complete napi. */
> > +		busy = false;
> > +		break;
> > +
> > +	case XSK_XMIT_NO_BUDGET:
> > +		/* reached the budget limit; let napi run again. */
> > +		busy = true;
> > +		break;
> > +
> > +	case XSK_XMIT_DEV_BUSY:
> > +		/* sq vring is full; complete napi and wait for tx napi to be
> > +		 * re-triggered by the interrupt.
> > +		 */
> > +		busy = false;
> > +		break;
> > +	}
> > +
> > +	virtnet_xsk_check_queue(sq);
> > +
> > +	u64_stats_update_begin(&sq->stats.syncp);
> > +	sq->stats.packets += stats.packets;
> > +	sq->stats.bytes += stats.bytes;
> > +	sq->stats.kicks += stats.kicks;
> > +	sq->stats.xdp_tx += stats.xdp_tx;
> > +	u64_stats_update_end(&sq->stats.syncp);
> > +
> > +	return busy;
> > +}
> > +
> >  static int virtnet_rq_bind_xsk_pool(struct virtnet_info *vi, struct receive_queue *rq,
> >  				    struct xsk_buff_pool *pool, struct net_device *dev)
> >  {
> > diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h
> > index ad684c812091..15f1540a5803 100644
> > --- a/drivers/net/virtio/xsk.h
> > +++ b/drivers/net/virtio/xsk.h
> > @@ -20,4 +20,6 @@ static inline u32 ptr_to_xsk(void *ptr)
> >  }
> >
> >  int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp);
> > +bool virtnet_xsk_xmit(struct send_queue *sq, struct xsk_buff_pool *pool,
> > +		      int budget);
> >  #endif
> > --
> > 2.32.0.3.g01195cf9f
> >
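(The three-way return value of the xmit batch maps to a single NAPI decision. A toy model of that mapping — the DEMO_* names are stand-ins for the patch's XSK_XMIT_* values, not kernel API:)

```c
/* Stand-ins for the patch's XSK_XMIT_* return values. */
enum demo_xmit_ret {
	DEMO_XMIT_DONE,		/* xsk tx queue drained */
	DEMO_XMIT_NO_BUDGET,	/* napi budget exhausted, work still pending */
	DEMO_XMIT_DEV_BUSY,	/* sq vring full, wait for a tx interrupt */
};

/* Only the no-budget case keeps NAPI polling; the other two complete
 * NAPI and rely on a future event (new xsk descriptors or a device
 * interrupt) to reschedule it. */
static int demo_napi_busy(enum demo_xmit_ret ret)
{
	return ret == DEMO_XMIT_NO_BUDGET;
}
```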
_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 32/33] virtio_net: xsk: rx: introduce add_recvbuf_xsk()
       [not found]   ` <Y9zJS+ugeY9qEMt9@boxer>
@ 2023-02-03  8:56     ` Xuan Zhuo
  0 siblings, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-03  8:56 UTC (permalink / raw)
  To: Maciej Fijalkowski
  Cc: Petr Machata, Menglong Dong, Jesper Dangaard Brouer,
	Daniel Borkmann, Michael S. Tsirkin, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Fri, 3 Feb 2023 09:43:55 +0100, Maciej Fijalkowski <maciej.fijalkowski@intel.com> wrote:
> On Thu, Feb 02, 2023 at 07:00:57PM +0800, Xuan Zhuo wrote:
> > Implement the logic of filling vq with XSK buffer.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  drivers/net/virtio/main.c | 11 +++++++++++
> >  drivers/net/virtio/xsk.c  | 26 ++++++++++++++++++++++++++
> >  drivers/net/virtio/xsk.h  |  2 ++
> >  3 files changed, 39 insertions(+)
> >
> > diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> > index 7259b27f5cba..2aff0eee35d3 100644
> > --- a/drivers/net/virtio/main.c
> > +++ b/drivers/net/virtio/main.c
> > @@ -1352,10 +1352,20 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi,
> >   */
> >  bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq, gfp_t gfp)
> >  {
> > +	struct xsk_buff_pool *pool;
> >  	int err;
> >  	bool oom;
> >
> >  	do {
> > +		rcu_read_lock();
> > +		pool = rcu_dereference(rq->xsk.pool);
> > +		if (pool) {
> > +			err = add_recvbuf_xsk(vi, rq, pool, gfp);
> > +			rcu_read_unlock();
> > +			goto check;
> > +		}
> > +		rcu_read_unlock();
> > +
> >  		if (vi->mergeable_rx_bufs)
> >  			err = add_recvbuf_mergeable(vi, rq, gfp);
> >  		else if (vi->big_packets)
> > @@ -1363,6 +1373,7 @@ bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq, gfp_t gfp)
> >  		else
> >  			err = add_recvbuf_small(vi, rq, gfp);
> >
> > +check:
> >  		oom = err == -ENOMEM;
> >  		if (err)
> >  			break;
> > diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
> > index 043b0bf2a5d7..a5e88f919c46 100644
> > --- a/drivers/net/virtio/xsk.c
> > +++ b/drivers/net/virtio/xsk.c
> > @@ -37,6 +37,32 @@ static void virtnet_xsk_check_queue(struct send_queue *sq)
> >  		netif_stop_subqueue(dev, qnum);
> >  }
> >
> > +int add_recvbuf_xsk(struct virtnet_info *vi, struct receive_queue *rq,
> > +		    struct xsk_buff_pool *pool, gfp_t gfp)
> > +{
> > +	struct xdp_buff *xdp;
> > +	dma_addr_t addr;
> > +	u32 len;
> > +	int err;
> > +
> > +	xdp = xsk_buff_alloc(pool);
>
> same question as on the tx side - anything that stopped you from using the
> batch API - xsk_buff_alloc_batch()?

Will fix.

Just so you know, when I wrote the earliest version, these APIs did not exist yet.  ^_^

Thanks.



>
> > +	if (!xdp)
> > +		return -ENOMEM;
> > +
> > +	/* use the part of XDP_PACKET_HEADROOM as the virtnet hdr space */
> > +	addr = xsk_buff_xdp_get_dma(xdp) - vi->hdr_len;
> > +	len = xsk_pool_get_rx_frame_size(pool) + vi->hdr_len;
> > +
> > +	sg_init_table(rq->sg, 1);
> > +	sg_fill_dma(rq->sg, addr, len);
> > +
> > +	err = virtqueue_add_inbuf_premapped(rq->vq, rq->sg, 1, xdp, gfp);
> > +	if (err)
> > +		xsk_buff_free(xdp);
> > +
> > +	return err;
> > +}
> > +
> >  static int virtnet_xsk_xmit_one(struct send_queue *sq,
> >  				struct xsk_buff_pool *pool,
> >  				struct xdp_desc *desc)
> > diff --git a/drivers/net/virtio/xsk.h b/drivers/net/virtio/xsk.h
> > index f90c28972d72..5549143ef118 100644
> > --- a/drivers/net/virtio/xsk.h
> > +++ b/drivers/net/virtio/xsk.h
> > @@ -24,4 +24,6 @@ int virtnet_xsk_pool_setup(struct net_device *dev, struct netdev_bpf *xdp);
> >  bool virtnet_xsk_xmit(struct send_queue *sq, struct xsk_buff_pool *pool,
> >  		      int budget);
> >  int virtnet_xsk_wakeup(struct net_device *dev, u32 qid, u32 flag);
> > +int add_recvbuf_xsk(struct virtnet_info *vi, struct receive_queue *rq,
> > +		    struct xsk_buff_pool *pool, gfp_t gfp);
> >  #endif
> > --
> > 2.32.0.3.g01195cf9f
> >
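(On the rx side the batch API plays the same role: one call fills many slots instead of one. A rough user-space model of the two allocation shapes, with a stub pool standing in for the xsk buffer pool — none of these names are the real kernel API:)

```c
/* Illustrative stand-in for an xsk buffer pool; not the real kernel type. */
struct demo_pool {
	int free;	/* buffers still available in the pool */
};

/* Shaped like xsk_buff_alloc(): one buffer per call, or failure. */
static int demo_alloc_one(struct demo_pool *p)
{
	if (!p->free)
		return 0;
	p->free--;
	return 1;
}

/* Shaped like xsk_buff_alloc_batch(): up to max buffers in one call,
 * returning however many the pool could actually provide. */
static int demo_alloc_batch(struct demo_pool *p, int max)
{
	int n = max < p->free ? max : p->free;

	p->free -= n;
	return n;
}
```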

* Re: [PATCH 16/33] virtio_net: introduce virtnet_xdp_handler() to separate the logic of running xdp
  2023-02-03  8:55   ` Michael S. Tsirkin
@ 2023-02-03  9:01     ` Xuan Zhuo
  0 siblings, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-03  9:01 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Fri, 3 Feb 2023 03:55:26 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Thu, Feb 02, 2023 at 07:00:41PM +0800, Xuan Zhuo wrote:
> > At present, we have two long similar logic to perform XDP Progs. And in
> > the implementation of XSK, we will have this need.
> >
> > Therefore, this PATCH separates the code of executing XDP, which is
> > conducive to later maintenance and facilitates subsequent XSK for reuse.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
>
> So you first add a new function then move users over.
> This means that it's hard during review to make sure
> nothing is lost in translation.
> Do the refactoring in a single patch instead.

I agree.

Thanks.

>
> > ---
> >  drivers/net/virtio/main.c       | 53 +++++++++++++++++++++++++++++++++
> >  drivers/net/virtio/virtio_net.h | 11 +++++++
> >  2 files changed, 64 insertions(+)
> >
> > diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> > index 5683cb576474..9d4b84b23ef7 100644
> > --- a/drivers/net/virtio/main.c
> > +++ b/drivers/net/virtio/main.c
> > @@ -478,6 +478,59 @@ static int virtnet_xdp_xmit(struct net_device *dev,
> >  	return ret;
> >  }
> >
> > +int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> > +			struct net_device *dev,
> > +			unsigned int *xdp_xmit,
> > +			struct virtnet_rq_stats *stats)
> > +{
> > +	struct xdp_frame *xdpf;
> > +	int err;
> > +	u32 act;
> > +
> > +	act = bpf_prog_run_xdp(xdp_prog, xdp);
> > +	stats->xdp_packets++;
> > +
> > +	switch (act) {
> > +	case XDP_PASS:
> > +		return VIRTNET_XDP_RES_PASS;
> > +
> > +	case XDP_TX:
> > +		stats->xdp_tx++;
> > +		xdpf = xdp_convert_buff_to_frame(xdp);
> > +		if (unlikely(!xdpf))
> > +			return VIRTNET_XDP_RES_DROP;
> > +
> > +		err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > +		if (unlikely(!err)) {
> > +			xdp_return_frame_rx_napi(xdpf);
> > +		} else if (unlikely(err < 0)) {
> > +			trace_xdp_exception(dev, xdp_prog, act);
> > +			return VIRTNET_XDP_RES_DROP;
> > +		}
> > +
> > +		*xdp_xmit |= VIRTIO_XDP_TX;
> > +		return VIRTNET_XDP_RES_CONSUMED;
> > +
> > +	case XDP_REDIRECT:
> > +		stats->xdp_redirects++;
> > +		err = xdp_do_redirect(dev, xdp, xdp_prog);
> > +		if (err)
> > +			return VIRTNET_XDP_RES_DROP;
> > +
> > +		*xdp_xmit |= VIRTIO_XDP_REDIR;
> > +		return VIRTNET_XDP_RES_CONSUMED;
> > +
> > +	default:
> > +		bpf_warn_invalid_xdp_action(dev, xdp_prog, act);
> > +		fallthrough;
> > +	case XDP_ABORTED:
> > +		trace_xdp_exception(dev, xdp_prog, act);
> > +		fallthrough;
> > +	case XDP_DROP:
> > +		return VIRTNET_XDP_RES_DROP;
> > +	}
> > +}
> > +
> >  static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
> >  {
> >  	return vi->xdp_enabled ? VIRTIO_XDP_HEADROOM : 0;
> > diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
> > index 8bf31429ae28..af3e7e817f9e 100644
> > --- a/drivers/net/virtio/virtio_net.h
> > +++ b/drivers/net/virtio/virtio_net.h
> > @@ -22,6 +22,12 @@
> >  #include <net/net_failover.h>
> >  #include <net/xdp_sock_drv.h>
> >
> > +enum {
> > +	VIRTNET_XDP_RES_PASS,
> > +	VIRTNET_XDP_RES_DROP,
> > +	VIRTNET_XDP_RES_CONSUMED,
> > +};
> > +
> >  #define VIRTIO_XDP_FLAG	BIT(0)
> >
> >  struct virtnet_info {
> > @@ -262,4 +268,9 @@ static void __free_old_xmit(struct send_queue *sq, bool in_napi,
> >  		stats->packets++;
> >  	}
> >  }
> > +
> > +int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> > +			struct net_device *dev,
> > +			unsigned int *xdp_xmit,
> > +			struct virtnet_rq_stats *stats);
> >  #endif
> > --
> > 2.32.0.3.g01195cf9f
>

* Re: [PATCH 15/33] virtio_net: move to virtio_net.h
  2023-02-03  8:53   ` Michael S. Tsirkin
@ 2023-02-03  9:04     ` Xuan Zhuo
  2023-02-03  9:26       ` Michael S. Tsirkin
  0 siblings, 1 reply; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-03  9:04 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Fri, 3 Feb 2023 03:53:12 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Thu, Feb 02, 2023 at 07:00:40PM +0800, Xuan Zhuo wrote:
> > Move some structure definitions and inline functions into the
> > virtio_net.h file.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  drivers/net/virtio/main.c       | 247 +----------------------------
> >  drivers/net/virtio/virtio_net.h | 265 ++++++++++++++++++++++++++++++++
> >  2 files changed, 267 insertions(+), 245 deletions(-)
> >  create mode 100644 drivers/net/virtio/virtio_net.h
> >
> > diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> > index eb7f00194b5c..5683cb576474 100644
> > --- a/drivers/net/virtio/main.c
> > +++ b/drivers/net/virtio/main.c
> > @@ -4,24 +4,8 @@
> >   * Copyright 2007 Rusty Russell <rusty@rustcorp.com.au> IBM Corporation
> >   */
> >  //#define DEBUG
> > -#include <linux/netdevice.h>
> > -#include <linux/etherdevice.h>
> > -#include <linux/ethtool.h>
> > -#include <linux/module.h>
> > -#include <linux/virtio.h>
> > -#include <linux/virtio_net.h>
> > -#include <linux/bpf.h>
> > -#include <linux/bpf_trace.h>
> > -#include <linux/scatterlist.h>
> > -#include <linux/if_vlan.h>
> > -#include <linux/slab.h>
> > -#include <linux/cpu.h>
> > -#include <linux/average.h>
> > -#include <linux/filter.h>
> > -#include <linux/kernel.h>
> > -#include <net/route.h>
> > -#include <net/xdp.h>
> > -#include <net/net_failover.h>
> > +
> > +#include "virtio_net.h"
> >
> >  static int napi_weight = NAPI_POLL_WEIGHT;
> >  module_param(napi_weight, int, 0444);
>
>
> You should only move the headers that are actually needed not
> everything.

You mean the includes.

I think it is simpler to concentrate the includes in one header file, and
have the other .c files reference that header.

Do you agree?

Thanks.

>
>
> > @@ -44,15 +28,6 @@ module_param(napi_tx, bool, 0644);
> >  #define VIRTIO_XDP_TX		BIT(0)
> >  #define VIRTIO_XDP_REDIR	BIT(1)
> >
> > -#define VIRTIO_XDP_FLAG	BIT(0)
> > -
> > -/* RX packet size EWMA. The average packet size is used to determine the packet
> > - * buffer size when refilling RX rings. As the entire RX ring may be refilled
> > - * at once, the weight is chosen so that the EWMA will be insensitive to short-
> > - * term, transient changes in packet size.
> > - */
> > -DECLARE_EWMA(pkt_len, 0, 64)
> > -
> >  #define VIRTNET_DRIVER_VERSION "1.0.0"
> >
> >  static const unsigned long guest_offloads[] = {
> > @@ -72,36 +47,6 @@ static const unsigned long guest_offloads[] = {
> >  				(1ULL << VIRTIO_NET_F_GUEST_USO4) | \
> >  				(1ULL << VIRTIO_NET_F_GUEST_USO6))
> >
> > -struct virtnet_stat_desc {
> > -	char desc[ETH_GSTRING_LEN];
> > -	size_t offset;
> > -};
> > -
> > -struct virtnet_sq_stats {
> > -	struct u64_stats_sync syncp;
> > -	u64 packets;
> > -	u64 bytes;
> > -	u64 xdp_tx;
> > -	u64 xdp_tx_drops;
> > -	u64 kicks;
> > -	u64 tx_timeouts;
> > -};
> > -
> > -struct virtnet_rq_stats {
> > -	struct u64_stats_sync syncp;
> > -	u64 packets;
> > -	u64 bytes;
> > -	u64 drops;
> > -	u64 xdp_packets;
> > -	u64 xdp_tx;
> > -	u64 xdp_redirects;
> > -	u64 xdp_drops;
> > -	u64 kicks;
> > -};
> > -
> > -#define VIRTNET_SQ_STAT(m)	offsetof(struct virtnet_sq_stats, m)
> > -#define VIRTNET_RQ_STAT(m)	offsetof(struct virtnet_rq_stats, m)
> > -
> >  static const struct virtnet_stat_desc virtnet_sq_stats_desc[] = {
> >  	{ "packets",		VIRTNET_SQ_STAT(packets) },
> >  	{ "bytes",		VIRTNET_SQ_STAT(bytes) },
> > @@ -125,57 +70,6 @@ static const struct virtnet_stat_desc virtnet_rq_stats_desc[] = {
> >  #define VIRTNET_SQ_STATS_LEN	ARRAY_SIZE(virtnet_sq_stats_desc)
> >  #define VIRTNET_RQ_STATS_LEN	ARRAY_SIZE(virtnet_rq_stats_desc)
> >
> > -/* Internal representation of a send virtqueue */
> > -struct send_queue {
> > -	/* Virtqueue associated with this send _queue */
> > -	struct virtqueue *vq;
> > -
> > -	/* TX: fragments + linear part + virtio header */
> > -	struct scatterlist sg[MAX_SKB_FRAGS + 2];
> > -
> > -	/* Name of the send queue: output.$index */
> > -	char name[16];
> > -
> > -	struct virtnet_sq_stats stats;
> > -
> > -	struct napi_struct napi;
> > -
> > -	/* Record whether sq is in reset state. */
> > -	bool reset;
> > -};
> > -
> > -/* Internal representation of a receive virtqueue */
> > -struct receive_queue {
> > -	/* Virtqueue associated with this receive_queue */
> > -	struct virtqueue *vq;
> > -
> > -	struct napi_struct napi;
> > -
> > -	struct bpf_prog __rcu *xdp_prog;
> > -
> > -	struct virtnet_rq_stats stats;
> > -
> > -	/* Chain pages by the private ptr. */
> > -	struct page *pages;
> > -
> > -	/* Average packet length for mergeable receive buffers. */
> > -	struct ewma_pkt_len mrg_avg_pkt_len;
> > -
> > -	/* Page frag for packet buffer allocation. */
> > -	struct page_frag alloc_frag;
> > -
> > -	/* RX: fragments + linear part + virtio header */
> > -	struct scatterlist sg[MAX_SKB_FRAGS + 2];
> > -
> > -	/* Min single buffer size for mergeable buffers case. */
> > -	unsigned int min_buf_len;
> > -
> > -	/* Name of this receive queue: input.$index */
> > -	char name[16];
> > -
> > -	struct xdp_rxq_info xdp_rxq;
> > -};
> > -
> >  /* This structure can contain rss message with maximum settings for indirection table and keysize
> >   * Note, that default structure that describes RSS configuration virtio_net_rss_config
> >   * contains same info but can't handle table values.
> > @@ -206,90 +100,6 @@ struct control_buf {
> >  	struct virtio_net_ctrl_rss rss;
> >  };
> >
> > -struct virtnet_info {
> > -	struct virtio_device *vdev;
> > -	struct virtqueue *cvq;
> > -	struct net_device *dev;
> > -	struct send_queue *sq;
> > -	struct receive_queue *rq;
> > -	unsigned int status;
> > -
> > -	/* Max # of queue pairs supported by the device */
> > -	u16 max_queue_pairs;
> > -
> > -	/* # of queue pairs currently used by the driver */
> > -	u16 curr_queue_pairs;
> > -
> > -	/* # of XDP queue pairs currently used by the driver */
> > -	u16 xdp_queue_pairs;
> > -
> > -	/* xdp_queue_pairs may be 0, when xdp is already loaded. So add this. */
> > -	bool xdp_enabled;
> > -
> > -	/* I like... big packets and I cannot lie! */
> > -	bool big_packets;
> > -
> > -	/* number of sg entries allocated for big packets */
> > -	unsigned int big_packets_num_skbfrags;
> > -
> > -	/* Host will merge rx buffers for big packets (shake it! shake it!) */
> > -	bool mergeable_rx_bufs;
> > -
> > -	/* Host supports rss and/or hash report */
> > -	bool has_rss;
> > -	bool has_rss_hash_report;
> > -	u8 rss_key_size;
> > -	u16 rss_indir_table_size;
> > -	u32 rss_hash_types_supported;
> > -	u32 rss_hash_types_saved;
> > -
> > -	/* Has control virtqueue */
> > -	bool has_cvq;
> > -
> > -	/* Host can handle any s/g split between our header and packet data */
> > -	bool any_header_sg;
> > -
> > -	/* Packet virtio header size */
> > -	u8 hdr_len;
> > -
> > -	/* Work struct for delayed refilling if we run low on memory. */
> > -	struct delayed_work refill;
> > -
> > -	/* Is delayed refill enabled? */
> > -	bool refill_enabled;
> > -
> > -	/* The lock to synchronize the access to refill_enabled */
> > -	spinlock_t refill_lock;
> > -
> > -	/* Work struct for config space updates */
> > -	struct work_struct config_work;
> > -
> > -	/* Does the affinity hint is set for virtqueues? */
> > -	bool affinity_hint_set;
> > -
> > -	/* CPU hotplug instances for online & dead */
> > -	struct hlist_node node;
> > -	struct hlist_node node_dead;
> > -
> > -	struct control_buf *ctrl;
> > -
> > -	/* Ethtool settings */
> > -	u8 duplex;
> > -	u32 speed;
> > -
> > -	/* Interrupt coalescing settings */
> > -	u32 tx_usecs;
> > -	u32 rx_usecs;
> > -	u32 tx_max_packets;
> > -	u32 rx_max_packets;
> > -
> > -	unsigned long guest_offloads;
> > -	unsigned long guest_offloads_capable;
> > -
> > -	/* failover when STANDBY feature enabled */
> > -	struct failover *failover;
> > -};
> > -
> >  struct padded_vnet_hdr {
> >  	struct virtio_net_hdr_v1_hash hdr;
> >  	/*
> > @@ -303,45 +113,11 @@ struct padded_vnet_hdr {
> >  static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
> >  static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
> >
> > -static bool is_xdp_frame(void *ptr)
> > -{
> > -	return (unsigned long)ptr & VIRTIO_XDP_FLAG;
> > -}
> > -
> >  static void *xdp_to_ptr(struct xdp_frame *ptr)
> >  {
> >  	return (void *)((unsigned long)ptr | VIRTIO_XDP_FLAG);
> >  }
> >
> > -static struct xdp_frame *ptr_to_xdp(void *ptr)
> > -{
> > -	return (struct xdp_frame *)((unsigned long)ptr & ~VIRTIO_XDP_FLAG);
> > -}
> > -
> > -static void __free_old_xmit(struct send_queue *sq, bool in_napi,
> > -			    struct virtnet_sq_stats *stats)
> > -{
> > -	unsigned int len;
> > -	void *ptr;
> > -
> > -	while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
> > -		if (!is_xdp_frame(ptr)) {
> > -			struct sk_buff *skb = ptr;
> > -
> > -			pr_debug("Sent skb %p\n", skb);
> > -
> > -			stats->bytes += skb->len;
> > -			napi_consume_skb(skb, in_napi);
> > -		} else {
> > -			struct xdp_frame *frame = ptr_to_xdp(ptr);
> > -
> > -			stats->bytes += xdp_get_frame_len(frame);
> > -			xdp_return_frame(frame);
> > -		}
> > -		stats->packets++;
> > -	}
> > -}
> > -
> >  /* Converting between virtqueue no. and kernel tx/rx queue no.
> >   * 0:rx0 1:tx0 2:rx1 3:tx1 ... 2N:rxN 2N+1:txN 2N+2:cvq
> >   */
> > @@ -411,15 +187,6 @@ static void disable_delayed_refill(struct virtnet_info *vi)
> >  	spin_unlock_bh(&vi->refill_lock);
> >  }
> >
> > -static void virtqueue_napi_schedule(struct napi_struct *napi,
> > -				    struct virtqueue *vq)
> > -{
> > -	if (napi_schedule_prep(napi)) {
> > -		virtqueue_disable_cb(vq);
> > -		__napi_schedule(napi);
> > -	}
> > -}
> > -
> >  static void virtqueue_napi_complete(struct napi_struct *napi,
> >  				    struct virtqueue *vq, int processed)
> >  {
> > @@ -1740,16 +1507,6 @@ static void free_old_xmit(struct send_queue *sq, bool in_napi)
> >  	u64_stats_update_end(&sq->stats.syncp);
> >  }
> >
> > -static bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q)
> > -{
> > -	if (q < (vi->curr_queue_pairs - vi->xdp_queue_pairs))
> > -		return false;
> > -	else if (q < vi->curr_queue_pairs)
> > -		return true;
> > -	else
> > -		return false;
> > -}
> > -
> >  static void virtnet_poll_cleantx(struct receive_queue *rq)
> >  {
> >  	struct virtnet_info *vi = rq->vq->vdev->priv;
> > diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
> > new file mode 100644
> > index 000000000000..8bf31429ae28
> > --- /dev/null
> > +++ b/drivers/net/virtio/virtio_net.h
> > @@ -0,0 +1,265 @@
> > +/* SPDX-License-Identifier: GPL-2.0-or-later */
> > +
> > +#ifndef __VIRTIO_NET_H__
> > +#define __VIRTIO_NET_H__
> > +#include <linux/netdevice.h>
> > +#include <linux/etherdevice.h>
> > +#include <linux/ethtool.h>
> > +#include <linux/module.h>
> > +#include <linux/virtio.h>
> > +#include <linux/virtio_net.h>
> > +#include <linux/bpf.h>
> > +#include <linux/bpf_trace.h>
> > +#include <linux/scatterlist.h>
> > +#include <linux/if_vlan.h>
> > +#include <linux/slab.h>
> > +#include <linux/cpu.h>
> > +#include <linux/average.h>
> > +#include <linux/filter.h>
> > +#include <linux/kernel.h>
> > +#include <net/route.h>
> > +#include <net/xdp.h>
> > +#include <net/net_failover.h>
> > +#include <net/xdp_sock_drv.h>
> > +
> > +#define VIRTIO_XDP_FLAG	BIT(0)
> > +
> > +struct virtnet_info {
> > +	struct virtio_device *vdev;
> > +	struct virtqueue *cvq;
> > +	struct net_device *dev;
> > +	struct send_queue *sq;
> > +	struct receive_queue *rq;
> > +	unsigned int status;
> > +
> > +	/* Max # of queue pairs supported by the device */
> > +	u16 max_queue_pairs;
> > +
> > +	/* # of queue pairs currently used by the driver */
> > +	u16 curr_queue_pairs;
> > +
> > +	/* # of XDP queue pairs currently used by the driver */
> > +	u16 xdp_queue_pairs;
> > +
> > +	/* xdp_queue_pairs may be 0, when xdp is already loaded. So add this. */
> > +	bool xdp_enabled;
> > +
> > +	/* I like... big packets and I cannot lie! */
> > +	bool big_packets;
> > +
> > +	/* number of sg entries allocated for big packets */
> > +	unsigned int big_packets_num_skbfrags;
> > +
> > +	/* Host will merge rx buffers for big packets (shake it! shake it!) */
> > +	bool mergeable_rx_bufs;
> > +
> > +	/* Host supports rss and/or hash report */
> > +	bool has_rss;
> > +	bool has_rss_hash_report;
> > +	u8 rss_key_size;
> > +	u16 rss_indir_table_size;
> > +	u32 rss_hash_types_supported;
> > +	u32 rss_hash_types_saved;
> > +
> > +	/* Has control virtqueue */
> > +	bool has_cvq;
> > +
> > +	/* Host can handle any s/g split between our header and packet data */
> > +	bool any_header_sg;
> > +
> > +	/* Packet virtio header size */
> > +	u8 hdr_len;
> > +
> > +	/* Work struct for delayed refilling if we run low on memory. */
> > +	struct delayed_work refill;
> > +
> > +	/* Is delayed refill enabled? */
> > +	bool refill_enabled;
> > +
> > +	/* The lock to synchronize the access to refill_enabled */
> > +	spinlock_t refill_lock;
> > +
> > +	/* Work struct for config space updates */
> > +	struct work_struct config_work;
> > +
> > +	/* Does the affinity hint is set for virtqueues? */
> > +	bool affinity_hint_set;
> > +
> > +	/* CPU hotplug instances for online & dead */
> > +	struct hlist_node node;
> > +	struct hlist_node node_dead;
> > +
> > +	struct control_buf *ctrl;
> > +
> > +	/* Ethtool settings */
> > +	u8 duplex;
> > +	u32 speed;
> > +
> > +	/* Interrupt coalescing settings */
> > +	u32 tx_usecs;
> > +	u32 rx_usecs;
> > +	u32 tx_max_packets;
> > +	u32 rx_max_packets;
> > +
> > +	unsigned long guest_offloads;
> > +	unsigned long guest_offloads_capable;
> > +
> > +	/* failover when STANDBY feature enabled */
> > +	struct failover *failover;
> > +};
> > +
> > +/* RX packet size EWMA. The average packet size is used to determine the packet
> > + * buffer size when refilling RX rings. As the entire RX ring may be refilled
> > + * at once, the weight is chosen so that the EWMA will be insensitive to short-
> > + * term, transient changes in packet size.
> > + */
> > +DECLARE_EWMA(pkt_len, 0, 64)
> > +
> > +struct virtnet_stat_desc {
> > +	char desc[ETH_GSTRING_LEN];
> > +	size_t offset;
> > +};
> > +
> > +struct virtnet_sq_stats {
> > +	struct u64_stats_sync syncp;
> > +	u64 packets;
> > +	u64 bytes;
> > +	u64 xdp_tx;
> > +	u64 xdp_tx_drops;
> > +	u64 kicks;
> > +	u64 tx_timeouts;
> > +};
> > +
> > +struct virtnet_rq_stats {
> > +	struct u64_stats_sync syncp;
> > +	u64 packets;
> > +	u64 bytes;
> > +	u64 drops;
> > +	u64 xdp_packets;
> > +	u64 xdp_tx;
> > +	u64 xdp_redirects;
> > +	u64 xdp_drops;
> > +	u64 kicks;
> > +};
> > +
> > +#define VIRTNET_SQ_STAT(m)	offsetof(struct virtnet_sq_stats, m)
> > +#define VIRTNET_RQ_STAT(m)	offsetof(struct virtnet_rq_stats, m)
> > +
> > +/* Internal representation of a send virtqueue */
> > +struct send_queue {
> > +	/* Virtqueue associated with this send _queue */
> > +	struct virtqueue *vq;
> > +
> > +	/* TX: fragments + linear part + virtio header */
> > +	struct scatterlist sg[MAX_SKB_FRAGS + 2];
> > +
> > +	/* Name of the send queue: output.$index */
> > +	char name[16];
> > +
> > +	struct virtnet_sq_stats stats;
> > +
> > +	struct napi_struct napi;
> > +
> > +	/* Record whether sq is in reset state. */
> > +	bool reset;
> > +};
> > +
> > +/* Internal representation of a receive virtqueue */
> > +struct receive_queue {
> > +	/* Virtqueue associated with this receive_queue */
> > +	struct virtqueue *vq;
> > +
> > +	struct napi_struct napi;
> > +
> > +	struct bpf_prog __rcu *xdp_prog;
> > +
> > +	struct virtnet_rq_stats stats;
> > +
> > +	/* Chain pages by the private ptr. */
> > +	struct page *pages;
> > +
> > +	/* Average packet length for mergeable receive buffers. */
> > +	struct ewma_pkt_len mrg_avg_pkt_len;
> > +
> > +	/* Page frag for packet buffer allocation. */
> > +	struct page_frag alloc_frag;
> > +
> > +	/* RX: fragments + linear part + virtio header */
> > +	struct scatterlist sg[MAX_SKB_FRAGS + 2];
> > +
> > +	/* Min single buffer size for mergeable buffers case. */
> > +	unsigned int min_buf_len;
> > +
> > +	/* Name of this receive queue: input.$index */
> > +	char name[16];
> > +
> > +	struct xdp_rxq_info xdp_rxq;
> > +};
> > +
> > +static inline bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q)
> > +{
> > +	if (q < (vi->curr_queue_pairs - vi->xdp_queue_pairs))
> > +		return false;
> > +	else if (q < vi->curr_queue_pairs)
> > +		return true;
> > +	else
> > +		return false;
> > +}
> > +
> > +static inline void virtnet_return_xdp_frame(struct send_queue *sq,
> > +					    struct xdp_frame *frame)
> > +{
> > +	struct virtnet_info *vi = sq->vq->vdev->priv;
> > +	dma_addr_t *p_addr, addr;
> > +
> > +	p_addr = frame->data - sizeof(*p_addr);
> > +	addr = *p_addr;
> > +
> > +	virtio_dma_unmap(&vi->vdev->dev, addr, frame->len, DMA_TO_DEVICE);
> > +
> > +	xdp_return_frame(frame);
> > +}
> > +
> > +static inline void virtqueue_napi_schedule(struct napi_struct *napi,
> > +					   struct virtqueue *vq)
> > +{
> > +	if (napi_schedule_prep(napi)) {
> > +		virtqueue_disable_cb(vq);
> > +		__napi_schedule(napi);
> > +	}
> > +}
> > +
> > +static inline bool is_xdp_frame(void *ptr)
> > +{
> > +	return (unsigned long)ptr & VIRTIO_XDP_FLAG;
> > +}
> > +
> > +static struct xdp_frame *ptr_to_xdp(void *ptr)
> > +{
> > +	return (struct xdp_frame *)((unsigned long)ptr & ~VIRTIO_XDP_FLAG);
> > +}
> > +
> > +static void __free_old_xmit(struct send_queue *sq, bool in_napi,
> > +			    struct virtnet_sq_stats *stats)
> > +{
> > +	unsigned int len;
> > +	void *ptr;
> > +
> > +	while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
> > +		if (!is_xdp_frame(ptr)) {
> > +			struct sk_buff *skb = ptr;
> > +
> > +			pr_debug("Sent skb %p\n", skb);
> > +
> > +			stats->bytes += skb->len;
> > +			napi_consume_skb(skb, in_napi);
> > +		} else {
> > +			struct xdp_frame *frame = ptr_to_xdp(ptr);
> > +
> > +			stats->bytes += xdp_get_frame_len(frame);
> > +			xdp_return_frame(frame);
> > +		}
> > +		stats->packets++;
> > +	}
> > +}
> > +#endif
>
> All these APIs not prefixed with virtnet were ok as internal
> static functions. No longer ok in a header.

I agree. Will fix.

Thanks.
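(One detail of the moved helpers worth spelling out: the tx ring stores both skb and xdp_frame pointers in the same void* slot, distinguished by tagging the pointer's low bit — valid because both allocations are at least 2-byte aligned. A self-contained user-space sketch of that trick, with DEMO_* names standing in for the VIRTIO_XDP_FLAG helpers:)

```c
#include <stdint.h>

#define DEMO_XDP_FLAG 0x1UL

/* Tag an xdp_frame-style pointer by setting its low bit; this is safe
 * only because the pointee's alignment guarantees that bit is zero. */
static void *demo_xdp_to_ptr(void *frame)
{
	return (void *)((uintptr_t)frame | DEMO_XDP_FLAG);
}

/* An untagged pointer (low bit clear) is an skb; a tagged one is a frame. */
static int demo_is_xdp_frame(void *ptr)
{
	return (uintptr_t)ptr & DEMO_XDP_FLAG;
}

/* Strip the tag to recover the original frame pointer. */
static void *demo_ptr_to_xdp(void *ptr)
{
	return (void *)((uintptr_t)ptr & ~DEMO_XDP_FLAG);
}
```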

>
>
> > --
> > 2.32.0.3.g01195cf9f
>

* Re: [PATCH 06/33] virtio_ring: introduce virtqueue_reset()
  2023-02-02 11:00 ` [PATCH 06/33] virtio_ring: introduce virtqueue_reset() Xuan Zhuo
@ 2023-02-03  9:05   ` Michael S. Tsirkin
  2023-02-03  9:09     ` Xuan Zhuo
  0 siblings, 1 reply; 76+ messages in thread
From: Michael S. Tsirkin @ 2023-02-03  9:05 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Thu, Feb 02, 2023 at 07:00:31PM +0800, Xuan Zhuo wrote:
> Introduce virtqueue_reset() to release all buffer inside vq.
> 
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/virtio/virtio_ring.c | 50 ++++++++++++++++++++++++++++++++++++
>  include/linux/virtio.h       |  2 ++
>  2 files changed, 52 insertions(+)
> 
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index e32046fd15a5..7dfce7001f9f 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -2735,6 +2735,56 @@ int virtqueue_resize(struct virtqueue *_vq, u32 num,
>  }
>  EXPORT_SYMBOL_GPL(virtqueue_resize);
>  
> +/**
> + * virtqueue_reset - reset the vring of vq

..., detach and recycle all unused buffers

	after all this is why we are doing this reset, right?

> + * @_vq: the struct virtqueue we're talking about.
> + * @recycle: callback for recycle the useless buffer

not useless :) unused:

	callback to recycle unused buffers

I know we have the same confusion in virtqueue_resize, I will fix
that.

> + *
> + * Caller must ensure we don't call this with other virtqueue operations
> + * at the same time (except where noted).
> + *
> + * Returns zero or a negative error.
> + * 0: success.
> + * -EBUSY: Failed to sync with device, vq may not work properly
> + * -ENOENT: Transport or device not supported
> + * -EPERM: Operation not permitted
> + */
> +int virtqueue_reset(struct virtqueue *_vq,
> +		    void (*recycle)(struct virtqueue *vq, void *buf))
> +{
> +	struct vring_virtqueue *vq = to_vvq(_vq);
> +	struct virtio_device *vdev = vq->vq.vdev;
> +	void *buf;
> +	int err;
> +
> +	if (!vq->we_own_ring)
> +		return -EPERM;
> +
> +	if (!vdev->config->disable_vq_and_reset)
> +		return -ENOENT;
> +
> +	if (!vdev->config->enable_vq_after_reset)
> +		return -ENOENT;
> +
> +	err = vdev->config->disable_vq_and_reset(_vq);
> +	if (err)
> +		return err;
> +
> +	while ((buf = virtqueue_detach_unused_buf(_vq)) != NULL)
> +		recycle(_vq, buf);
> +
> +	if (vq->packed_ring)
> +		virtqueue_reinit_packed(vq);
> +	else
> +		virtqueue_reinit_split(vq);
> +
> +	if (vdev->config->enable_vq_after_reset(_vq))
> +		return -EBUSY;
> +
> +	return 0;
> +}
> +EXPORT_SYMBOL_GPL(virtqueue_reset);
> +
>  /* Only available for split ring */
>  struct virtqueue *vring_new_virtqueue(unsigned int index,
>  				      unsigned int num,
> diff --git a/include/linux/virtio.h b/include/linux/virtio.h
> index 3ebb346ebb7c..3ca2edb1aef3 100644
> --- a/include/linux/virtio.h
> +++ b/include/linux/virtio.h
> @@ -105,6 +105,8 @@ dma_addr_t virtqueue_get_used_addr(struct virtqueue *vq);
>  
>  int virtqueue_resize(struct virtqueue *vq, u32 num,
>  		     void (*recycle)(struct virtqueue *vq, void *buf));
> +int virtqueue_reset(struct virtqueue *vq,
> +		    void (*recycle)(struct virtqueue *vq, void *buf));
>  
>  /**
>   * struct virtio_device - representation of a device using virtio
> -- 
> 2.32.0.3.g01195cf9f
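
For readers following along, the detach-and-recycle contract documented above can be illustrated with a small userspace mock. Every name below (`mock_vq`, `mock_virtqueue_reset`, etc.) is a hypothetical stand-in, not the kernel API; a real driver's recycle callback would free or re-post its buffers.

```c
/* Userspace mock of the detach-and-recycle contract of virtqueue_reset():
 * every buffer still queued is handed back to the caller through the
 * recycle callback before the ring is reinitialized. All names are
 * hypothetical stand-ins, not the kernel API. */
#include <assert.h>
#include <stddef.h>

struct mock_vq {
	void *bufs[8];
	int n;
};

/* stands in for virtqueue_detach_unused_buf() */
static void *mock_detach_unused_buf(struct mock_vq *vq)
{
	return vq->n ? vq->bufs[--vq->n] : NULL;
}

static int recycled; /* counts buffers returned to the caller */

static void mock_recycle(struct mock_vq *vq, void *buf)
{
	(void)vq;
	(void)buf;
	recycled++; /* a real driver would free or re-post the buffer here */
}

static int mock_virtqueue_reset(struct mock_vq *vq,
				void (*recycle)(struct mock_vq *vq, void *buf))
{
	void *buf;

	/* drain every buffer the device never consumed */
	while ((buf = mock_detach_unused_buf(vq)) != NULL)
		recycle(vq, buf);

	vq->n = 0; /* reinit: the ring starts empty again */
	return 0;
}
```

The point of the callback is exactly what the review above asks the doc comment to say: the caller, not the core, decides what happens to each unused buffer.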

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 07/33] virtio_ring: add api virtio_dma_map() for advance dma
  2023-02-02 11:00 ` [PATCH 07/33] virtio_ring: add api virtio_dma_map() for advance dma Xuan Zhuo
@ 2023-02-03  9:07   ` Michael S. Tsirkin
  0 siblings, 0 replies; 76+ messages in thread
From: Michael S. Tsirkin @ 2023-02-03  9:07 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Thu, Feb 02, 2023 at 07:00:32PM +0800, Xuan Zhuo wrote:
> Added virtio_dma_map() to map DMA addresses for virtual memory in
> advance. The purpose of adding this function is to check
> vring_use_dma_api() for virtio dma operation and get vdev->dev.parent as
> the parameter of dma_map_page().

No, this looks like the implementation, not the purpose.
I am guessing the purpose is to keep memory mapped
across multiple add/get buf operations, right?

> Added virtio_dma_unmap() for unmap DMA address.
> 
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/virtio/virtio_ring.c | 80 ++++++++++++++++++++++++++++++++++++
>  include/linux/virtio.h       |  9 ++++
>  2 files changed, 89 insertions(+)
> 
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index 7dfce7001f9f..67eda7bc23ea 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -3022,4 +3022,84 @@ const struct vring *virtqueue_get_vring(struct virtqueue *vq)
>  }
>  EXPORT_SYMBOL_GPL(virtqueue_get_vring);
>  
> +/**
> + * virtio_dma_map_page - get the DMA addr of the memory for virtio device
> + * @dev: virtio device
> + * @page: the page of the memory to DMA
> + * @offset: the offset of the memory inside page
> + * @length: memory length
> + * @dir: DMA direction
> + *
> + * Returns the DMA addr. DMA_MAPPING_ERROR means error.
> + */
> +dma_addr_t virtio_dma_map_page(struct device *dev, struct page *page, size_t offset,
> +			       unsigned int length, enum dma_data_direction dir)
> +{
> +	struct virtio_device *vdev = dev_to_virtio(dev);
> +
> +	if (!vring_use_dma_api(vdev))
> +		return page_to_phys(page) + offset;
> +
> +	return dma_map_page(vdev->dev.parent, page, offset, length, dir);
> +}
> +
> +/**
> + * virtio_dma_map - get the DMA addr of the memory for virtio device
> + * @dev: virtio device
> + * @addr: the addr to DMA
> + * @length: memory length
> + * @dir: DMA direction
> + *
> + * Returns the DMA addr.
> + */
> +dma_addr_t virtio_dma_map(struct device *dev, void *addr, unsigned int length,
> +			  enum dma_data_direction dir)
> +{
> +	struct page *page;
> +	size_t offset;
> +
> +	page = virt_to_page(addr);
> +	offset = offset_in_page(addr);
> +
> +	return virtio_dma_map_page(dev, page, offset, length, dir);
> +}
> +EXPORT_SYMBOL_GPL(virtio_dma_map);
> +
> +/**
> + * virtio_dma_mapping_error - check dma address
> + * @dev: virtio device
> + * @addr: DMA address
> + *
> + * Returns 0 means dma valid. Other means invalid dma address.
> + */
> +int virtio_dma_mapping_error(struct device *dev, dma_addr_t addr)
> +{
> +	struct virtio_device *vdev = dev_to_virtio(dev);
> +
> +	if (!vring_use_dma_api(vdev))
> +		return 0;
> +
> +	return dma_mapping_error(vdev->dev.parent, addr);
> +}
> +EXPORT_SYMBOL_GPL(virtio_dma_mapping_error);
> +
> +/**
> + * virtio_dma_unmap - unmap DMA addr
> + * @dev: virtio device
> + * @dma: DMA address
> + * @length: memory length
> + * @dir: DMA direction
> + */
> +void virtio_dma_unmap(struct device *dev, dma_addr_t dma, unsigned int length,
> +		      enum dma_data_direction dir)
> +{
> +	struct virtio_device *vdev = dev_to_virtio(dev);
> +
> +	if (!vring_use_dma_api(vdev))
> +		return;
> +
> +	dma_unmap_page(vdev->dev.parent, dma, length, dir);
> +}
> +EXPORT_SYMBOL_GPL(virtio_dma_unmap);
> +
>  MODULE_LICENSE("GPL");
> diff --git a/include/linux/virtio.h b/include/linux/virtio.h
> index 3ca2edb1aef3..ce89126becc5 100644
> --- a/include/linux/virtio.h
> +++ b/include/linux/virtio.h
> @@ -9,6 +9,7 @@
>  #include <linux/device.h>
>  #include <linux/mod_devicetable.h>
>  #include <linux/gfp.h>
> +#include <linux/dma-mapping.h>
>  
>  /**
>   * struct virtqueue - a queue to register buffers for sending or receiving.
> @@ -218,4 +219,12 @@ void unregister_virtio_driver(struct virtio_driver *drv);
>  #define module_virtio_driver(__virtio_driver) \
>  	module_driver(__virtio_driver, register_virtio_driver, \
>  			unregister_virtio_driver)
> +
> +dma_addr_t virtio_dma_map_page(struct device *dev, struct page *page, size_t offset,
> +			       unsigned int length, enum dma_data_direction dir);
> +dma_addr_t virtio_dma_map(struct device *dev, void *addr, unsigned int length,
> +			  enum dma_data_direction dir);
> +int virtio_dma_mapping_error(struct device *dev, dma_addr_t addr);
> +void virtio_dma_unmap(struct device *dev, dma_addr_t dma, unsigned int length,
> +		      enum dma_data_direction dir);
>  #endif /* _LINUX_VIRTIO_H */
> -- 
> 2.32.0.3.g01195cf9f


* Re: [PATCH 00/33] virtio-net: support AF_XDP zero copy
  2023-02-02 11:44   ` Xuan Zhuo
@ 2023-02-03  9:08     ` Michael S. Tsirkin
  2023-02-03  9:09       ` Xuan Zhuo
  0 siblings, 1 reply; 76+ messages in thread
From: Michael S. Tsirkin @ 2023-02-03  9:08 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Thu, Feb 02, 2023 at 07:44:07PM +0800, Xuan Zhuo wrote:
> On Thu, 2 Feb 2023 06:08:30 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > On Thu, Feb 02, 2023 at 07:00:25PM +0800, Xuan Zhuo wrote:
> > > AF_XDP (XDP socket) is an excellent kernel-bypass networking framework. The
> > > zero-copy feature of xsk (XDP socket) must be supported by the driver, and
> > > its performance is very good.
> >
> > Great! Any numbers to share?
> 
> RESEND. Last mail has some email format error.
> 
> ENV: Qemu with vhost.
> 
>                    vhost cpu | Guest APP CPU |Guest Softirq CPU | PPS
> -----------------------------|---------------|------------------|------------
> xmit by sockperf:     90%    |   100%        |                  |  318967
> xmit by xsk:          100%   |   30%         |   33%            | 1192064
> recv by sockperf:     100%   |   68%         |   100%           |  692288
> recv by xsk:          100%   |   33%         |   43%            |  771670
> 
> Thanks.

Impressive, thanks a lot for this work!
Pls remember to retest and include up to date numbers on
subsequent versions.

> 
> >
> > > mlx5 and Intel ixgbe already support
> > > this feature. This patch set allows virtio-net to support xsk's zero-copy xmit
> > > feature.
> > >
> > > Virtio-net did not support per-queue reset, so it was impossible to support
> > > XDP socket zero copy. Now that the virtio spec and kernel work for per-queue
> > > reset is complete, it is time for virtio-net to add support for XDP socket
> > > zero copy.
> > >
> > > Virtio-net cannot add queues at will, so xsk shares a queue with the
> > > kernel.
> > >
> > > On the other hand, virtio-net does not support generating interrupts
> > > manually, so we use a few tricks when waking up TX xmit: if the CPU that
> > > last ran TX NAPI is a different CPU, we use an IPI to wake up NAPI on that
> > > remote CPU; if it is the local CPU, we wake up softirqd.
> > >
> > > Please review.
> > >
> > > Thanks.
> > >
> > >
> > > Xuan Zhuo (33):
> > >   virtio_ring: virtqueue_add() support premapped
> > >   virtio_ring: split: virtqueue_add_split() support premapped
> > >   virtio_ring: packed: virtqueue_add_packed() support premapped
> > >   virtio_ring: introduce virtqueue_add_outbuf_premapped()
> > >   virtio_ring: introduce virtqueue_add_inbuf_premapped()
> > >   virtio_ring: introduce virtqueue_reset()
> > >   virtio_ring: add api virtio_dma_map() for advance dma
> > >   virtio_ring: introduce dma sync api for virtio
> > >   xsk: xsk_buff_pool add callback for dma_sync
> > >   xsk: support virtio DMA map
> > >   virtio_net: rename free_old_xmit_skbs to free_old_xmit
> > >   virtio_net: unify the code for recycling the xmit ptr
> > >   virtio_net: virtnet_poll_tx support rescheduled
> > >   virtio_net: independent directory
> > >   virtio_net: move to virtio_net.h
> > >   virtio_net: introduce virtnet_xdp_handler() to seprate the logic of
> > >     run xdp
> > >   virtio_net: receive_small() use virtnet_xdp_handler()
> > >   virtio_net: receive_merageable() use virtnet_xdp_handler()
> > >   virtio_net: introduce virtnet_tx_reset()
> > >   virtio_net: xsk: introduce virtnet_rq_bind_xsk_pool()
> > >   virtio_net: xsk: introduce virtnet_xsk_pool_enable()
> > >   virtio_net: xsk: introduce xsk disable
> > >   virtio_net: xsk: support xsk setup
> > >   virtio_net: xsk: stop disable tx napi
> > >   virtio_net: xsk: __free_old_xmit distinguishes xsk buffer
> > >   virtio_net: virtnet_sq_free_unused_buf() check xsk buffer
> > >   virtio_net: virtnet_rq_free_unused_buf() check xsk buffer
> > >   net: introduce napi_tx_raise()
> > >   virtio_net: xsk: tx: support tx
> > >   virtio_net: xsk: tx: support wakeup
> > >   virtio_net: xsk: tx: auto wakeup when free old xmit
> > >   virtio_net: xsk: rx: introduce add_recvbuf_xsk()
> > >   virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
> > >
> > >  MAINTAINERS                                 |   2 +-
> > >  drivers/net/Kconfig                         |   8 +-
> > >  drivers/net/Makefile                        |   2 +-
> > >  drivers/net/virtio/Kconfig                  |  11 +
> > >  drivers/net/virtio/Makefile                 |   8 +
> > >  drivers/net/{virtio_net.c => virtio/main.c} | 564 +++++++-------------
> > >  drivers/net/virtio/virtio_net.h             | 317 +++++++++++
> > >  drivers/net/virtio/xsk.c                    | 524 ++++++++++++++++++
> > >  drivers/net/virtio/xsk.h                    |  33 ++
> > >  drivers/virtio/virtio_ring.c                | 376 +++++++++++--
> > >  include/linux/netdevice.h                   |   7 +
> > >  include/linux/virtio.h                      |  29 +
> > >  include/net/xsk_buff_pool.h                 |   6 +
> > >  net/core/dev.c                              |  11 +
> > >  net/xdp/xsk_buff_pool.c                     |  79 ++-
> > >  15 files changed, 1541 insertions(+), 436 deletions(-)
> > >  create mode 100644 drivers/net/virtio/Kconfig
> > >  create mode 100644 drivers/net/virtio/Makefile
> > >  rename drivers/net/{virtio_net.c => virtio/main.c} (92%)
> > >  create mode 100644 drivers/net/virtio/virtio_net.h
> > >  create mode 100644 drivers/net/virtio/xsk.c
> > >  create mode 100644 drivers/net/virtio/xsk.h
> > >
> > > --
> > > 2.32.0.3.g01195cf9f
> >


* Re: [PATCH 06/33] virtio_ring: introduce virtqueue_reset()
  2023-02-03  9:05   ` Michael S. Tsirkin
@ 2023-02-03  9:09     ` Xuan Zhuo
  2023-02-13 12:15       ` Michael S. Tsirkin
  0 siblings, 1 reply; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-03  9:09 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Fri, 3 Feb 2023 04:05:38 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Thu, Feb 02, 2023 at 07:00:31PM +0800, Xuan Zhuo wrote:
> > Introduce virtqueue_reset() to release all buffers inside the vq.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  drivers/virtio/virtio_ring.c | 50 ++++++++++++++++++++++++++++++++++++
> >  include/linux/virtio.h       |  2 ++
> >  2 files changed, 52 insertions(+)
> >
> > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > index e32046fd15a5..7dfce7001f9f 100644
> > --- a/drivers/virtio/virtio_ring.c
> > +++ b/drivers/virtio/virtio_ring.c
> > @@ -2735,6 +2735,56 @@ int virtqueue_resize(struct virtqueue *_vq, u32 num,
> >  }
> >  EXPORT_SYMBOL_GPL(virtqueue_resize);
> >
> > +/**
> > + * virtqueue_reset - reset the vring of vq
>
> ..., detach and recycle all unused buffers
>
> 	after all this is why we are doing this reset, right?
>
> > + * @_vq: the struct virtqueue we're talking about.
> > + * @recycle: callback for recycle the useless buffer
>
> not useless :) unused:
>
> 	callback to recycle unused buffers


I agree. Will fix.

Thanks.

>
> I know we have the same confusion in virtqueue_resize, I will fix
> that.
>
> > + *
> > + * Caller must ensure we don't call this with other virtqueue operations
> > + * at the same time (except where noted).
> > + *
> > + * Returns zero or a negative error.
> > + * 0: success.
> > + * -EBUSY: Failed to sync with device, vq may not work properly
> > + * -ENOENT: Transport or device not supported
> > + * -EPERM: Operation not permitted
> > + */
> > +int virtqueue_reset(struct virtqueue *_vq,
> > +		    void (*recycle)(struct virtqueue *vq, void *buf))
> > +{
> > +	struct vring_virtqueue *vq = to_vvq(_vq);
> > +	struct virtio_device *vdev = vq->vq.vdev;
> > +	void *buf;
> > +	int err;
> > +
> > +	if (!vq->we_own_ring)
> > +		return -EPERM;
> > +
> > +	if (!vdev->config->disable_vq_and_reset)
> > +		return -ENOENT;
> > +
> > +	if (!vdev->config->enable_vq_after_reset)
> > +		return -ENOENT;
> > +
> > +	err = vdev->config->disable_vq_and_reset(_vq);
> > +	if (err)
> > +		return err;
> > +
> > +	while ((buf = virtqueue_detach_unused_buf(_vq)) != NULL)
> > +		recycle(_vq, buf);
> > +
> > +	if (vq->packed_ring)
> > +		virtqueue_reinit_packed(vq);
> > +	else
> > +		virtqueue_reinit_split(vq);
> > +
> > +	if (vdev->config->enable_vq_after_reset(_vq))
> > +		return -EBUSY;
> > +
> > +	return 0;
> > +}
> > +EXPORT_SYMBOL_GPL(virtqueue_reset);
> > +
> >  /* Only available for split ring */
> >  struct virtqueue *vring_new_virtqueue(unsigned int index,
> >  				      unsigned int num,
> > diff --git a/include/linux/virtio.h b/include/linux/virtio.h
> > index 3ebb346ebb7c..3ca2edb1aef3 100644
> > --- a/include/linux/virtio.h
> > +++ b/include/linux/virtio.h
> > @@ -105,6 +105,8 @@ dma_addr_t virtqueue_get_used_addr(struct virtqueue *vq);
> >
> >  int virtqueue_resize(struct virtqueue *vq, u32 num,
> >  		     void (*recycle)(struct virtqueue *vq, void *buf));
> > +int virtqueue_reset(struct virtqueue *vq,
> > +		    void (*recycle)(struct virtqueue *vq, void *buf));
> >
> >  /**
> >   * struct virtio_device - representation of a device using virtio
> > --
> > 2.32.0.3.g01195cf9f
>

* Re: [PATCH 00/33] virtio-net: support AF_XDP zero copy
       [not found]       ` <Y9zJ9j0GthvRSFHL@boxer>
@ 2023-02-03  9:09         ` Michael S. Tsirkin
  0 siblings, 0 replies; 76+ messages in thread
From: Michael S. Tsirkin @ 2023-02-03  9:09 UTC (permalink / raw)
  To: Maciej Fijalkowski
  Cc: Petr Machata, Menglong Dong, Jesper Dangaard Brouer,
	Daniel Borkmann, netdev, John Fastabend, Björn Töpel,
	Alexei Starovoitov, Eric Dumazet, Kuniyuki Iwashima,
	Sebastian Andrzej Siewior, Jonathan Lemon, Jakub Kicinski, bpf,
	Paolo Abeni, virtualization, David S. Miller, Magnus Karlsson

On Fri, Feb 03, 2023 at 09:46:46AM +0100, Maciej Fijalkowski wrote:
> On Fri, Feb 03, 2023 at 03:37:32AM -0500, Michael S. Tsirkin wrote:
> > On Fri, Feb 03, 2023 at 11:33:31AM +0800, Xuan Zhuo wrote:
> > > On Thu, 02 Feb 2023 15:41:44 +0100, Paolo Abeni <pabeni@redhat.com> wrote:
> > > > On Thu, 2023-02-02 at 19:00 +0800, Xuan Zhuo wrote:
> > > > > AF_XDP (XDP socket) is an excellent kernel-bypass networking framework. The
> > > > > zero-copy feature of xsk (XDP socket) must be supported by the driver, and
> > > > > its performance is very good. mlx5 and Intel ixgbe already support this
> > > > > feature. This patch set allows virtio-net to support xsk's zero-copy xmit
> > > > > feature.
> > > > >
> > > > > Virtio-net did not support per-queue reset, so it was impossible to support
> > > > > XDP socket zero copy. Now that the virtio spec and kernel work for per-queue
> > > > > reset is complete, it is time for virtio-net to add support for XDP socket
> > > > > zero copy.
> > > > >
> > > > > Virtio-net cannot add queues at will, so xsk shares a queue with the
> > > > > kernel.
> > > > >
> > > > > On the other hand, virtio-net does not support generating interrupts
> > > > > manually, so we use a few tricks when waking up TX xmit: if the CPU that
> > > > > last ran TX NAPI is a different CPU, we use an IPI to wake up NAPI on that
> > > > > remote CPU; if it is the local CPU, we wake up softirqd.
> > > >
> > > > Thank you for the large effort.
> > > >
> > > > Since this will likely need a few iterations, on next revision please
> > > > do split the work in multiple chunks to help the reviewer efforts -
> > > > from Documentation/process/maintainer-netdev.rst:
> > > >
> > > >  - don't post large series (> 15 patches), break them up
> > > >
> > > > In this case I guess you can split it in 1 (or even 2) pre-req series
> > > > and another one for the actual xsk zero copy support.
> > > 
> > > 
> > > OK.
> > > 
> > > I can split patch into multiple parts such as
> > > 
> > > * virtio core
> > > * xsk
> > > * virtio-net prepare
> > > * virtio-net support xsk zerocopy
> > > 
> > > However, there is a problem, the virtio core part should enter the VHOST branch
> > > of Michael. Then, should I post follow-up patches to which branch vhost or
> > > next-next?
> > > 
> > > Thanks.
> > 
> > I personally think 33 patches is still manageable; no need to split.
> > Do try to be careful and track acks and changes: if someone sends an ack,
> > add it to the patch; if you change the patch, drop the acks
> > and log this fact in the changelog in the cover letter
> > so people know they need to re-review.
> 
> To me, some of the patches are too granular, but this is probably a matter of
> personal taste.

I agree here. Some unrelated refactoring can also be deferred.

> However, I would like to ask you to check how this series affects the
> existing ZC-enabled driver(s), since the xsk core is touched.
> 
> > 
> > 
> > > 
> > > >
> > > > Thanks!
> > > >
> > > > Paolo
> > > >
> > 


* Re: [PATCH 00/33] virtio-net: support AF_XDP zero copy
  2023-02-03  9:08     ` Michael S. Tsirkin
@ 2023-02-03  9:09       ` Xuan Zhuo
  0 siblings, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-03  9:09 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Fri, 3 Feb 2023 04:08:31 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Thu, Feb 02, 2023 at 07:44:07PM +0800, Xuan Zhuo wrote:
> > On Thu, 2 Feb 2023 06:08:30 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > > On Thu, Feb 02, 2023 at 07:00:25PM +0800, Xuan Zhuo wrote:
> > > > AF_XDP (XDP socket) is an excellent kernel-bypass networking framework. The
> > > > zero-copy feature of xsk (XDP socket) must be supported by the driver, and
> > > > its performance is very good.
> > >
> > > Great! Any numbers to share?
> >
> > RESEND. Last mail has some email format error.
> >
> > ENV: Qemu with vhost.
> >
> >                    vhost cpu | Guest APP CPU |Guest Softirq CPU | PPS
> > -----------------------------|---------------|------------------|------------
> > xmit by sockperf:     90%    |   100%        |                  |  318967
> > xmit by xsk:          100%   |   30%         |   33%            | 1192064
> > recv by sockperf:     100%   |   68%         |   100%           |  692288
> > recv by xsk:          100%   |   33%         |   43%            |  771670
> >
> > Thanks.
>
> Impressive, thanks a lot for this work!
> Pls remember to retest and include up to date numbers on
> subsequent versions.


OK.

Thanks.


>
> >
> > >
> > > > mlx5 and Intel ixgbe already support
> > > > this feature. This patch set allows virtio-net to support xsk's zero-copy xmit
> > > > feature.
> > > >
> > > > Virtio-net did not support per-queue reset, so it was impossible to support
> > > > XDP socket zero copy. Now that the virtio spec and kernel work for per-queue
> > > > reset is complete, it is time for virtio-net to add support for XDP socket
> > > > zero copy.
> > > >
> > > > Virtio-net cannot add queues at will, so xsk shares a queue with the
> > > > kernel.
> > > >
> > > > On the other hand, virtio-net does not support generating interrupts
> > > > manually, so we use a few tricks when waking up TX xmit: if the CPU that
> > > > last ran TX NAPI is a different CPU, we use an IPI to wake up NAPI on that
> > > > remote CPU; if it is the local CPU, we wake up softirqd.
> > > >
> > > > Please review.
> > > >
> > > > Thanks.
> > > >
> > > >
> > > > Xuan Zhuo (33):
> > > >   virtio_ring: virtqueue_add() support premapped
> > > >   virtio_ring: split: virtqueue_add_split() support premapped
> > > >   virtio_ring: packed: virtqueue_add_packed() support premapped
> > > >   virtio_ring: introduce virtqueue_add_outbuf_premapped()
> > > >   virtio_ring: introduce virtqueue_add_inbuf_premapped()
> > > >   virtio_ring: introduce virtqueue_reset()
> > > >   virtio_ring: add api virtio_dma_map() for advance dma
> > > >   virtio_ring: introduce dma sync api for virtio
> > > >   xsk: xsk_buff_pool add callback for dma_sync
> > > >   xsk: support virtio DMA map
> > > >   virtio_net: rename free_old_xmit_skbs to free_old_xmit
> > > >   virtio_net: unify the code for recycling the xmit ptr
> > > >   virtio_net: virtnet_poll_tx support rescheduled
> > > >   virtio_net: independent directory
> > > >   virtio_net: move to virtio_net.h
> > > >   virtio_net: introduce virtnet_xdp_handler() to seprate the logic of
> > > >     run xdp
> > > >   virtio_net: receive_small() use virtnet_xdp_handler()
> > > >   virtio_net: receive_merageable() use virtnet_xdp_handler()
> > > >   virtio_net: introduce virtnet_tx_reset()
> > > >   virtio_net: xsk: introduce virtnet_rq_bind_xsk_pool()
> > > >   virtio_net: xsk: introduce virtnet_xsk_pool_enable()
> > > >   virtio_net: xsk: introduce xsk disable
> > > >   virtio_net: xsk: support xsk setup
> > > >   virtio_net: xsk: stop disable tx napi
> > > >   virtio_net: xsk: __free_old_xmit distinguishes xsk buffer
> > > >   virtio_net: virtnet_sq_free_unused_buf() check xsk buffer
> > > >   virtio_net: virtnet_rq_free_unused_buf() check xsk buffer
> > > >   net: introduce napi_tx_raise()
> > > >   virtio_net: xsk: tx: support tx
> > > >   virtio_net: xsk: tx: support wakeup
> > > >   virtio_net: xsk: tx: auto wakeup when free old xmit
> > > >   virtio_net: xsk: rx: introduce add_recvbuf_xsk()
> > > >   virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer
> > > >
> > > >  MAINTAINERS                                 |   2 +-
> > > >  drivers/net/Kconfig                         |   8 +-
> > > >  drivers/net/Makefile                        |   2 +-
> > > >  drivers/net/virtio/Kconfig                  |  11 +
> > > >  drivers/net/virtio/Makefile                 |   8 +
> > > >  drivers/net/{virtio_net.c => virtio/main.c} | 564 +++++++-------------
> > > >  drivers/net/virtio/virtio_net.h             | 317 +++++++++++
> > > >  drivers/net/virtio/xsk.c                    | 524 ++++++++++++++++++
> > > >  drivers/net/virtio/xsk.h                    |  33 ++
> > > >  drivers/virtio/virtio_ring.c                | 376 +++++++++++--
> > > >  include/linux/netdevice.h                   |   7 +
> > > >  include/linux/virtio.h                      |  29 +
> > > >  include/net/xsk_buff_pool.h                 |   6 +
> > > >  net/core/dev.c                              |  11 +
> > > >  net/xdp/xsk_buff_pool.c                     |  79 ++-
> > > >  15 files changed, 1541 insertions(+), 436 deletions(-)
> > > >  create mode 100644 drivers/net/virtio/Kconfig
> > > >  create mode 100644 drivers/net/virtio/Makefile
> > > >  rename drivers/net/{virtio_net.c => virtio/main.c} (92%)
> > > >  create mode 100644 drivers/net/virtio/virtio_net.h
> > > >  create mode 100644 drivers/net/virtio/xsk.c
> > > >  create mode 100644 drivers/net/virtio/xsk.h
> > > >
> > > > --
> > > > 2.32.0.3.g01195cf9f
> > >
>

* Re: [PATCH 03/33] virtio_ring: packed: virtqueue_add_packed() support premapped
  2023-02-02 11:00 ` [PATCH 03/33] virtio_ring: packed: virtqueue_add_packed() " Xuan Zhuo
@ 2023-02-03  9:16   ` Michael S. Tsirkin
  0 siblings, 0 replies; 76+ messages in thread
From: Michael S. Tsirkin @ 2023-02-03  9:16 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Thu, Feb 02, 2023 at 07:00:28PM +0800, Xuan Zhuo wrote:
> virtqueue_add_packed() only supports virtual addresses, dma is completed
> in virtqueue_add_packed().
> 
> In some scenarios (such as the AF_XDP scenario), the memory is allocated
> and DMA is completed in advance, so it is necessary for us to support
> passing the DMA address to virtqueue_add_packed().
> 
> Record this information in desc_state, we can skip unmap based on this
> when executing dma unmap.
> 
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/virtio/virtio_ring.c | 71 +++++++++++++++++++++++++-----------
>  1 file changed, 50 insertions(+), 21 deletions(-)
> 
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index ec622403cbd5..25027a35fcf8 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -78,6 +78,7 @@ struct vring_desc_state_packed {
>  	struct vring_packed_desc *indir_desc; /* Indirect descriptor, if any. */
>  	u16 num;			/* Descriptor list length. */
>  	u16 last;			/* The last desc state in a list. */
> +	bool premapped;
>  };
>  
>  struct vring_desc_extra {


That's an extra cache line. 
> @@ -1200,7 +1201,8 @@ static inline u16 packed_last_used(u16 last_used_idx)
>  }
>  
>  static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
> -				     struct vring_desc_extra *extra)
> +				     struct vring_desc_extra *extra,
> +				     bool premapped)
>  {
>  	u16 flags;
>  
> @@ -1215,6 +1217,9 @@ static void vring_unmap_extra_packed(const struct vring_virtqueue *vq,
>  				 (flags & VRING_DESC_F_WRITE) ?
>  				 DMA_FROM_DEVICE : DMA_TO_DEVICE);
>  	} else {
> +		if (premapped)
> +			return;
> +
>  		dma_unmap_page(vring_dma_dev(vq),
>  			       extra->addr, extra->len,
>  			       (flags & VRING_DESC_F_WRITE) ?
> @@ -1262,7 +1267,8 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
>  					 unsigned int out_sgs,
>  					 unsigned int in_sgs,
>  					 void *data,
> -					 gfp_t gfp)
> +					 gfp_t gfp,
> +					 bool premapped)
>  {
>  	struct vring_packed_desc *desc;
>  	struct scatterlist *sg;
> @@ -1288,10 +1294,15 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
>  
>  	for (n = 0; n < out_sgs + in_sgs; n++) {
>  		for (sg = sgs[n]; sg; sg = sg_next(sg)) {
> -			addr = vring_map_one_sg(vq, sg, n < out_sgs ?
> -					DMA_TO_DEVICE : DMA_FROM_DEVICE);
> -			if (vring_mapping_error(vq, addr))
> -				goto unmap_release;
> +			if (premapped) {
> +				addr = sg_dma_address(sg);
> +
> +			} else {
> +				addr = vring_map_one_sg(vq, sg, n < out_sgs ?
> +							DMA_TO_DEVICE : DMA_FROM_DEVICE);
> +				if (vring_mapping_error(vq, addr))
> +					goto unmap_release;
> +			}
>  
>  			desc[i].flags = cpu_to_le16(n < out_sgs ?
>  						0 : VRING_DESC_F_WRITE);
> @@ -1350,6 +1361,7 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
>  	vq->packed.desc_state[id].data = data;
>  	vq->packed.desc_state[id].indir_desc = desc;
>  	vq->packed.desc_state[id].last = id;
> +	vq->packed.desc_state[id].premapped = premapped;
>  
>  	vq->num_added += 1;
>  
> @@ -1359,10 +1371,11 @@ static int virtqueue_add_indirect_packed(struct vring_virtqueue *vq,
>  	return 0;
>  
>  unmap_release:
> -	err_idx = i;
> -
> -	for (i = 0; i < err_idx; i++)
> -		vring_unmap_desc_packed(vq, &desc[i]);
> +	if (!premapped) {
> +		err_idx = i;
> +		for (i = 0; i < err_idx; i++)
> +			vring_unmap_desc_packed(vq, &desc[i]);
> +	}
>  
>  	kfree(desc);
>  
> @@ -1377,6 +1390,7 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
>  				       unsigned int in_sgs,
>  				       void *data,
>  				       void *ctx,
> +				       bool premapped,
>  				       gfp_t gfp)
>  {
>  	struct vring_virtqueue *vq = to_vvq(_vq);
> @@ -1403,7 +1417,7 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
>  
>  	if (virtqueue_use_indirect(vq, total_sg)) {
>  		err = virtqueue_add_indirect_packed(vq, sgs, total_sg, out_sgs,
> -						    in_sgs, data, gfp);
> +						    in_sgs, data, gfp, premapped);
>  		if (err != -ENOMEM) {
>  			END_USE(vq);
>  			return err;
> @@ -1435,10 +1449,17 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
>  	c = 0;
>  	for (n = 0; n < out_sgs + in_sgs; n++) {
>  		for (sg = sgs[n]; sg; sg = sg_next(sg)) {
> -			dma_addr_t addr = vring_map_one_sg(vq, sg, n < out_sgs ?
> -					DMA_TO_DEVICE : DMA_FROM_DEVICE);
> -			if (vring_mapping_error(vq, addr))
> -				goto unmap_release;
> +			dma_addr_t addr;
> +
> +			if (premapped) {
> +				addr = sg_dma_address(sg);
> +

Please drop this empty line.

> +			} else {
> +				addr = vring_map_one_sg(vq, sg, n < out_sgs ?
> +							DMA_TO_DEVICE : DMA_FROM_DEVICE);
> +				if (vring_mapping_error(vq, addr))
> +					goto unmap_release;
> +			}
>  
>  			flags = cpu_to_le16(vq->packed.avail_used_flags |
>  				    (++c == total_sg ? 0 : VRING_DESC_F_NEXT) |
> @@ -1485,6 +1506,7 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
>  	vq->packed.desc_state[id].data = data;
>  	vq->packed.desc_state[id].indir_desc = ctx;
>  	vq->packed.desc_state[id].last = prev;
> +	vq->packed.desc_state[id].premapped = premapped;
>  
>  	/*
>  	 * A driver MUST NOT make the first descriptor in the list
> @@ -1501,22 +1523,26 @@ static inline int virtqueue_add_packed(struct virtqueue *_vq,
>  	return 0;
>  
>  unmap_release:
> +	vq->packed.avail_used_flags = avail_used_flags;
> +
> +	if (premapped)
> +		goto unmap_free;
> +

This goto branching inside error handling is too much like spaghetti code.
See Documentation/process/coding-style.rst for when goto is OK -
basically exit/error handling. This is not error handling.
Please find a way to avoid it.

>  	err_idx = i;
>  	i = head;
>  	curr = vq->free_head;
>  
> -	vq->packed.avail_used_flags = avail_used_flags;
> -
>  	for (n = 0; n < total_sg; n++) {
>  		if (i == err_idx)
>  			break;
> -		vring_unmap_extra_packed(vq, &vq->packed.desc_extra[curr]);
> +		vring_unmap_extra_packed(vq, &vq->packed.desc_extra[curr], false);
>  		curr = vq->packed.desc_extra[curr].next;
>  		i++;
>  		if (i >= vq->packed.vring.num)
>  			i = 0;
>  	}
>  
> +unmap_free:
>  	END_USE(vq);
>  	return -EIO;
>  }
> @@ -1576,8 +1602,10 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
>  	struct vring_desc_state_packed *state = NULL;
>  	struct vring_packed_desc *desc;
>  	unsigned int i, curr;
> +	bool premapped;
>  
>  	state = &vq->packed.desc_state[id];
> +	premapped = state->premapped;
>  
>  	/* Clear data ptr. */
>  	state->data = NULL;
> @@ -1590,7 +1618,8 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
>  		curr = id;
>  		for (i = 0; i < state->num; i++) {
>  			vring_unmap_extra_packed(vq,
> -						 &vq->packed.desc_extra[curr]);
> +						 &vq->packed.desc_extra[curr],
> +						 premapped);
>  			curr = vq->packed.desc_extra[curr].next;
>  		}
>  	}
> @@ -1603,7 +1632,7 @@ static void detach_buf_packed(struct vring_virtqueue *vq,
>  		if (!desc)
>  			return;
>  
> -		if (vq->use_dma_api) {
> +		if (vq->use_dma_api && !premapped) {
>  			len = vq->packed.desc_extra[id].len;
>  			for (i = 0; i < len / sizeof(struct vring_packed_desc);
>  					i++)
> @@ -2122,7 +2151,7 @@ static inline int virtqueue_add(struct virtqueue *_vq,
>  	struct vring_virtqueue *vq = to_vvq(_vq);
>  
>  	return vq->packed_ring ? virtqueue_add_packed(_vq, sgs, total_sg,
> -					out_sgs, in_sgs, data, ctx, gfp) :
> +					out_sgs, in_sgs, data, ctx, premapped, gfp) :
>  				 virtqueue_add_split(_vq, sgs, total_sg,
>  					out_sgs, in_sgs, data, ctx, premapped, gfp);
>  }


There are too many "if (!premapped)" checks all over the place. Please
refactor so we get common code, and then have premapped and
non-premapped versions call that.
> -- 
> 2.32.0.3.g01195cf9f

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 00/33] virtio-net: support AF_XDP zero copy
  2023-02-03  3:33   ` Xuan Zhuo
  2023-02-03  8:37     ` Michael S. Tsirkin
@ 2023-02-03  9:17     ` Michael S. Tsirkin
  2023-02-06  2:41       ` Xuan Zhuo
  1 sibling, 1 reply; 76+ messages in thread
From: Michael S. Tsirkin @ 2023-02-03  9:17 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Fri, Feb 03, 2023 at 11:33:31AM +0800, Xuan Zhuo wrote:
> On Thu, 02 Feb 2023 15:41:44 +0100, Paolo Abeni <pabeni@redhat.com> wrote:
> > On Thu, 2023-02-02 at 19:00 +0800, Xuan Zhuo wrote:
> > > AF_XDP (XDP socket) is an excellent kernel-bypass network framework. The
> > > zero-copy feature of xsk (XDP socket) must be supported by the driver, and
> > > its performance is very good. mlx5 and Intel ixgbe already support this
> > > feature; this patch set allows virtio-net to support xsk's zero-copy xmit
> > > feature.
> > >
> > > Virtio-net previously did not support per-queue reset, so it was impossible
> > > to support XDP socket zero copy. The per-queue reset work in both the virtio
> > > spec and the kernel is now complete, so it is time for virtio-net to add
> > > support for XDP socket zero copy.
> > >
> > > Virtio-net cannot add queues at will, so the xsk socket shares a queue with
> > > the kernel.
> > >
> > > On the other hand, virtio-net does not support generating interrupts
> > > manually, so we use a couple of tricks to wake up TX xmit: if TX NAPI last
> > > ran on a different CPU, an IPI wakes up NAPI on that remote CPU; if it last
> > > ran on the local CPU, we wake up softirqd instead.
> >
> > Thank you for the large effort.
> >
> > Since this will likely need a few iterations, on next revision please
> > do split the work in multiple chunks to help the reviewer efforts -
> > from Documentation/process/maintainer-netdev.rst:
> >
> >  - don't post large series (> 15 patches), break them up
> >
> > In this case I guess you can split it in 1 (or even 2) pre-req series
> > and another one for the actual xsk zero copy support.
> 
> 
> OK.
> 
> I can split patch into multiple parts such as
> 
> * virtio core
> * xsk
> * virtio-net prepare
> * virtio-net support xsk zerocopy
> 
> However, there is a problem, the virtio core part should enter the VHOST branch
> of Michael. Then, should I post follow-up patches to which branch vhost or
> next-next?
> 
> Thanks.
> 

Here are some ideas on how to make the patchset smaller
and easier to merge:
- keep everything in virtio_net.c for now. We can split
  things out later, but this way your patchset will not
  conflict with every single change merged meanwhile.
  Also, the split-up needs to be done carefully with sane
  APIs between components, so let's not waste time
  on that now; do the split-up later.
- you have patches that add APIs and then other
  patches that use them. As long as it's only virtio-net, just
  add and use in a single patch; review is actually easier this way.
- we can try merging prerequisites earlier; then the patchset
  size will shrink.


> >
> > Thanks!
> >
> > Paolo
> >


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 08/33] virtio_ring: introduce dma sync api for virtio
  2023-02-02 11:00 ` [PATCH 08/33] virtio_ring: introduce dma sync api for virtio Xuan Zhuo
@ 2023-02-03  9:24   ` Michael S. Tsirkin
  0 siblings, 0 replies; 76+ messages in thread
From: Michael S. Tsirkin @ 2023-02-03  9:24 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Thu, Feb 02, 2023 at 07:00:33PM +0800, Xuan Zhuo wrote:
> The DMA sync process depends on whether virtio uses the DMA API, and it
> also needs to read vdev->dev.parent. So these APIs have been
> introduced.

You don't need to repeat the implementation here.
Just list the new APIs and how they will be used
with the premapped API.



> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/virtio/virtio_ring.c | 61 ++++++++++++++++++++++++++++++++++++
>  include/linux/virtio.h       |  8 +++++
>  2 files changed, 69 insertions(+)
> 
> diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> index 67eda7bc23ea..7b393133fd27 100644
> --- a/drivers/virtio/virtio_ring.c
> +++ b/drivers/virtio/virtio_ring.c
> @@ -3102,4 +3102,65 @@ void virtio_dma_unmap(struct device *dev, dma_addr_t dma, unsigned int length,
>  }
>  EXPORT_SYMBOL_GPL(virtio_dma_unmap);
>  
> +/**
> + * virtio_dma_need_sync - check dma address need sync

.... whether a dma address needs sync

> + * @dev: virtio device
> + * @addr: DMA address
> + */
> +bool virtio_dma_need_sync(struct device *dev, dma_addr_t addr)
> +{
> +	struct virtio_device *vdev = dev_to_virtio(dev);
> +
> +	if (!vring_use_dma_api(vdev))
> +		return 0;
> +
> +	return dma_need_sync(vdev->dev.parent, addr);
> +}
> +EXPORT_SYMBOL_GPL(virtio_dma_need_sync);
> +
> +/**
> + * virtio_dma_sync_signle_range_for_cpu - dma sync for cpu
> + * @dev: virtio device
> + * @addr: DMA address
> + * @offset: DMA address offset
> + * @size: mem size for sync
> + * @dir: DMA direction
> + *
> + * Before calling this function, use virtio_dma_need_sync() to confirm that the
> + * DMA address really needs to be synchronized
> + */
> +void virtio_dma_sync_signle_range_for_cpu(struct device *dev, dma_addr_t addr,
> +					  unsigned long offset, size_t size,
> +					  enum dma_data_direction dir)
> +{
> +	struct virtio_device *vdev = dev_to_virtio(dev);
> +
> +	dma_sync_single_range_for_cpu(vdev->dev.parent, addr, offset,
> +				      size, DMA_BIDIRECTIONAL);
> +}
> +EXPORT_SYMBOL_GPL(virtio_dma_sync_signle_range_for_cpu);
> +
> +/**
> + * virtio_dma_sync_signle_range_for_device - dma sync for device
> + * @dev: virtio device
> + * @addr: DMA address
> + * @offset: DMA address offset
> + * @size: mem size for sync
> + * @dir: DMA direction
> + *
> + * Before calling this function, use virtio_dma_need_sync() to confirm that the
> + * DMA address really needs to be synchronized
> + */
> +void virtio_dma_sync_signle_range_for_device(struct device *dev,
> +					     dma_addr_t addr,
> +					     unsigned long offset, size_t size,
> +					     enum dma_data_direction dir)
> +{
> +	struct virtio_device *vdev = dev_to_virtio(dev);
> +
> +	dma_sync_single_range_for_device(vdev->dev.parent, addr, offset,
> +					 size, DMA_BIDIRECTIONAL);
> +}
> +EXPORT_SYMBOL_GPL(virtio_dma_sync_signle_range_for_device);
> +


Please document that these APIs are only for premapped buffers;
for non-premapped buffers the virtio core handles the DMA API internally.
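
A usage sketch along these lines could go into that documentation (kernel-context pseudocode, so not compilable standalone; note also that the quoted implementation spells "signle" and hardcodes DMA_BIDIRECTIONAL instead of honoring the dir argument, both of which look unintended):

```c
/* Pseudocode: syncing a driver-premapped RX buffer around CPU access.
 * Only needed for buffers the driver mapped itself with
 * virtio_dma_map(); for everything else the virtio core handles the
 * DMA API internally. */
if (virtio_dma_need_sync(dev, addr))
	virtio_dma_sync_signle_range_for_cpu(dev, addr, offset, len,
					     DMA_FROM_DEVICE);

/* ... CPU consumes the buffer ... */

if (virtio_dma_need_sync(dev, addr))
	virtio_dma_sync_signle_range_for_device(dev, addr, offset, len,
						DMA_FROM_DEVICE);
```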


>  MODULE_LICENSE("GPL");
> diff --git a/include/linux/virtio.h b/include/linux/virtio.h
> index ce89126becc5..8c2fae318b0c 100644
> --- a/include/linux/virtio.h
> +++ b/include/linux/virtio.h
> @@ -227,4 +227,12 @@ dma_addr_t virtio_dma_map(struct device *dev, void *addr, unsigned int length,
>  int virtio_dma_mapping_error(struct device *dev, dma_addr_t addr);
>  void virtio_dma_unmap(struct device *dev, dma_addr_t dma, unsigned int length,
>  		      enum dma_data_direction dir);
> +bool virtio_dma_need_sync(struct device *dev, dma_addr_t addr);
> +void virtio_dma_sync_signle_range_for_cpu(struct device *dev, dma_addr_t addr,
> +					  unsigned long offset, size_t size,
> +					  enum dma_data_direction dir);
> +void virtio_dma_sync_signle_range_for_device(struct device *dev,
> +					     dma_addr_t addr,
> +					     unsigned long offset, size_t size,
> +					     enum dma_data_direction dir);
>  #endif /* _LINUX_VIRTIO_H */
> -- 
> 2.32.0.3.g01195cf9f


^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 15/33] virtio_net: move to virtio_net.h
  2023-02-03  9:04     ` Xuan Zhuo
@ 2023-02-03  9:26       ` Michael S. Tsirkin
  0 siblings, 0 replies; 76+ messages in thread
From: Michael S. Tsirkin @ 2023-02-03  9:26 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Fri, Feb 03, 2023 at 05:04:42PM +0800, Xuan Zhuo wrote:
> On Fri, 3 Feb 2023 03:53:12 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > On Thu, Feb 02, 2023 at 07:00:40PM +0800, Xuan Zhuo wrote:
> > > Move some structure definitions and inline functions into the
> > > virtio_net.h file.
> > >
> > > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > > ---
> > >  drivers/net/virtio/main.c       | 247 +----------------------------
> > >  drivers/net/virtio/virtio_net.h | 265 ++++++++++++++++++++++++++++++++
> > >  2 files changed, 267 insertions(+), 245 deletions(-)
> > >  create mode 100644 drivers/net/virtio/virtio_net.h
> > >
> > > diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> > > index eb7f00194b5c..5683cb576474 100644
> > > --- a/drivers/net/virtio/main.c
> > > +++ b/drivers/net/virtio/main.c
> > > @@ -4,24 +4,8 @@
> > >   * Copyright 2007 Rusty Russell <rusty@rustcorp.com.au> IBM Corporation
> > >   */
> > >  //#define DEBUG
> > > -#include <linux/netdevice.h>
> > > -#include <linux/etherdevice.h>
> > > -#include <linux/ethtool.h>
> > > -#include <linux/module.h>
> > > -#include <linux/virtio.h>
> > > -#include <linux/virtio_net.h>
> > > -#include <linux/bpf.h>
> > > -#include <linux/bpf_trace.h>
> > > -#include <linux/scatterlist.h>
> > > -#include <linux/if_vlan.h>
> > > -#include <linux/slab.h>
> > > -#include <linux/cpu.h>
> > > -#include <linux/average.h>
> > > -#include <linux/filter.h>
> > > -#include <linux/kernel.h>
> > > -#include <net/route.h>
> > > -#include <net/xdp.h>
> > > -#include <net/net_failover.h>
> > > +
> > > +#include "virtio_net.h"
> > >
> > >  static int napi_weight = NAPI_POLL_WEIGHT;
> > >  module_param(napi_weight, int, 0444);
> >
> >
> > You should only move the headers that are actually needed not
> > everything.
> 
> You mean the "include"s.
> 
> I think it is simpler to concentrate the includes in a single header
> file and have the other .c files reference that header.
> 
> Do you agree?
> 
> Thanks.

Not really; one ends up including unnecessary stuff in both C files,
making build times longer.

> >
> >
> > > @@ -44,15 +28,6 @@ module_param(napi_tx, bool, 0644);
> > >  #define VIRTIO_XDP_TX		BIT(0)
> > >  #define VIRTIO_XDP_REDIR	BIT(1)
> > >
> > > -#define VIRTIO_XDP_FLAG	BIT(0)
> > > -
> > > -/* RX packet size EWMA. The average packet size is used to determine the packet
> > > - * buffer size when refilling RX rings. As the entire RX ring may be refilled
> > > - * at once, the weight is chosen so that the EWMA will be insensitive to short-
> > > - * term, transient changes in packet size.
> > > - */
> > > -DECLARE_EWMA(pkt_len, 0, 64)
> > > -
> > >  #define VIRTNET_DRIVER_VERSION "1.0.0"
> > >
> > >  static const unsigned long guest_offloads[] = {
> > > @@ -72,36 +47,6 @@ static const unsigned long guest_offloads[] = {
> > >  				(1ULL << VIRTIO_NET_F_GUEST_USO4) | \
> > >  				(1ULL << VIRTIO_NET_F_GUEST_USO6))
> > >
> > > -struct virtnet_stat_desc {
> > > -	char desc[ETH_GSTRING_LEN];
> > > -	size_t offset;
> > > -};
> > > -
> > > -struct virtnet_sq_stats {
> > > -	struct u64_stats_sync syncp;
> > > -	u64 packets;
> > > -	u64 bytes;
> > > -	u64 xdp_tx;
> > > -	u64 xdp_tx_drops;
> > > -	u64 kicks;
> > > -	u64 tx_timeouts;
> > > -};
> > > -
> > > -struct virtnet_rq_stats {
> > > -	struct u64_stats_sync syncp;
> > > -	u64 packets;
> > > -	u64 bytes;
> > > -	u64 drops;
> > > -	u64 xdp_packets;
> > > -	u64 xdp_tx;
> > > -	u64 xdp_redirects;
> > > -	u64 xdp_drops;
> > > -	u64 kicks;
> > > -};
> > > -
> > > -#define VIRTNET_SQ_STAT(m)	offsetof(struct virtnet_sq_stats, m)
> > > -#define VIRTNET_RQ_STAT(m)	offsetof(struct virtnet_rq_stats, m)
> > > -
> > >  static const struct virtnet_stat_desc virtnet_sq_stats_desc[] = {
> > >  	{ "packets",		VIRTNET_SQ_STAT(packets) },
> > >  	{ "bytes",		VIRTNET_SQ_STAT(bytes) },
> > > @@ -125,57 +70,6 @@ static const struct virtnet_stat_desc virtnet_rq_stats_desc[] = {
> > >  #define VIRTNET_SQ_STATS_LEN	ARRAY_SIZE(virtnet_sq_stats_desc)
> > >  #define VIRTNET_RQ_STATS_LEN	ARRAY_SIZE(virtnet_rq_stats_desc)
> > >
> > > -/* Internal representation of a send virtqueue */
> > > -struct send_queue {
> > > -	/* Virtqueue associated with this send _queue */
> > > -	struct virtqueue *vq;
> > > -
> > > -	/* TX: fragments + linear part + virtio header */
> > > -	struct scatterlist sg[MAX_SKB_FRAGS + 2];
> > > -
> > > -	/* Name of the send queue: output.$index */
> > > -	char name[16];
> > > -
> > > -	struct virtnet_sq_stats stats;
> > > -
> > > -	struct napi_struct napi;
> > > -
> > > -	/* Record whether sq is in reset state. */
> > > -	bool reset;
> > > -};
> > > -
> > > -/* Internal representation of a receive virtqueue */
> > > -struct receive_queue {
> > > -	/* Virtqueue associated with this receive_queue */
> > > -	struct virtqueue *vq;
> > > -
> > > -	struct napi_struct napi;
> > > -
> > > -	struct bpf_prog __rcu *xdp_prog;
> > > -
> > > -	struct virtnet_rq_stats stats;
> > > -
> > > -	/* Chain pages by the private ptr. */
> > > -	struct page *pages;
> > > -
> > > -	/* Average packet length for mergeable receive buffers. */
> > > -	struct ewma_pkt_len mrg_avg_pkt_len;
> > > -
> > > -	/* Page frag for packet buffer allocation. */
> > > -	struct page_frag alloc_frag;
> > > -
> > > -	/* RX: fragments + linear part + virtio header */
> > > -	struct scatterlist sg[MAX_SKB_FRAGS + 2];
> > > -
> > > -	/* Min single buffer size for mergeable buffers case. */
> > > -	unsigned int min_buf_len;
> > > -
> > > -	/* Name of this receive queue: input.$index */
> > > -	char name[16];
> > > -
> > > -	struct xdp_rxq_info xdp_rxq;
> > > -};
> > > -
> > >  /* This structure can contain rss message with maximum settings for indirection table and keysize
> > >   * Note, that default structure that describes RSS configuration virtio_net_rss_config
> > >   * contains same info but can't handle table values.
> > > @@ -206,90 +100,6 @@ struct control_buf {
> > >  	struct virtio_net_ctrl_rss rss;
> > >  };
> > >
> > > -struct virtnet_info {
> > > -	struct virtio_device *vdev;
> > > -	struct virtqueue *cvq;
> > > -	struct net_device *dev;
> > > -	struct send_queue *sq;
> > > -	struct receive_queue *rq;
> > > -	unsigned int status;
> > > -
> > > -	/* Max # of queue pairs supported by the device */
> > > -	u16 max_queue_pairs;
> > > -
> > > -	/* # of queue pairs currently used by the driver */
> > > -	u16 curr_queue_pairs;
> > > -
> > > -	/* # of XDP queue pairs currently used by the driver */
> > > -	u16 xdp_queue_pairs;
> > > -
> > > -	/* xdp_queue_pairs may be 0, when xdp is already loaded. So add this. */
> > > -	bool xdp_enabled;
> > > -
> > > -	/* I like... big packets and I cannot lie! */
> > > -	bool big_packets;
> > > -
> > > -	/* number of sg entries allocated for big packets */
> > > -	unsigned int big_packets_num_skbfrags;
> > > -
> > > -	/* Host will merge rx buffers for big packets (shake it! shake it!) */
> > > -	bool mergeable_rx_bufs;
> > > -
> > > -	/* Host supports rss and/or hash report */
> > > -	bool has_rss;
> > > -	bool has_rss_hash_report;
> > > -	u8 rss_key_size;
> > > -	u16 rss_indir_table_size;
> > > -	u32 rss_hash_types_supported;
> > > -	u32 rss_hash_types_saved;
> > > -
> > > -	/* Has control virtqueue */
> > > -	bool has_cvq;
> > > -
> > > -	/* Host can handle any s/g split between our header and packet data */
> > > -	bool any_header_sg;
> > > -
> > > -	/* Packet virtio header size */
> > > -	u8 hdr_len;
> > > -
> > > -	/* Work struct for delayed refilling if we run low on memory. */
> > > -	struct delayed_work refill;
> > > -
> > > -	/* Is delayed refill enabled? */
> > > -	bool refill_enabled;
> > > -
> > > -	/* The lock to synchronize the access to refill_enabled */
> > > -	spinlock_t refill_lock;
> > > -
> > > -	/* Work struct for config space updates */
> > > -	struct work_struct config_work;
> > > -
> > > -	/* Does the affinity hint is set for virtqueues? */
> > > -	bool affinity_hint_set;
> > > -
> > > -	/* CPU hotplug instances for online & dead */
> > > -	struct hlist_node node;
> > > -	struct hlist_node node_dead;
> > > -
> > > -	struct control_buf *ctrl;
> > > -
> > > -	/* Ethtool settings */
> > > -	u8 duplex;
> > > -	u32 speed;
> > > -
> > > -	/* Interrupt coalescing settings */
> > > -	u32 tx_usecs;
> > > -	u32 rx_usecs;
> > > -	u32 tx_max_packets;
> > > -	u32 rx_max_packets;
> > > -
> > > -	unsigned long guest_offloads;
> > > -	unsigned long guest_offloads_capable;
> > > -
> > > -	/* failover when STANDBY feature enabled */
> > > -	struct failover *failover;
> > > -};
> > > -
> > >  struct padded_vnet_hdr {
> > >  	struct virtio_net_hdr_v1_hash hdr;
> > >  	/*
> > > @@ -303,45 +113,11 @@ struct padded_vnet_hdr {
> > >  static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
> > >  static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
> > >
> > > -static bool is_xdp_frame(void *ptr)
> > > -{
> > > -	return (unsigned long)ptr & VIRTIO_XDP_FLAG;
> > > -}
> > > -
> > >  static void *xdp_to_ptr(struct xdp_frame *ptr)
> > >  {
> > >  	return (void *)((unsigned long)ptr | VIRTIO_XDP_FLAG);
> > >  }
> > >
> > > -static struct xdp_frame *ptr_to_xdp(void *ptr)
> > > -{
> > > -	return (struct xdp_frame *)((unsigned long)ptr & ~VIRTIO_XDP_FLAG);
> > > -}
> > > -
> > > -static void __free_old_xmit(struct send_queue *sq, bool in_napi,
> > > -			    struct virtnet_sq_stats *stats)
> > > -{
> > > -	unsigned int len;
> > > -	void *ptr;
> > > -
> > > -	while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
> > > -		if (!is_xdp_frame(ptr)) {
> > > -			struct sk_buff *skb = ptr;
> > > -
> > > -			pr_debug("Sent skb %p\n", skb);
> > > -
> > > -			stats->bytes += skb->len;
> > > -			napi_consume_skb(skb, in_napi);
> > > -		} else {
> > > -			struct xdp_frame *frame = ptr_to_xdp(ptr);
> > > -
> > > -			stats->bytes += xdp_get_frame_len(frame);
> > > -			xdp_return_frame(frame);
> > > -		}
> > > -		stats->packets++;
> > > -	}
> > > -}
> > > -
> > >  /* Converting between virtqueue no. and kernel tx/rx queue no.
> > >   * 0:rx0 1:tx0 2:rx1 3:tx1 ... 2N:rxN 2N+1:txN 2N+2:cvq
> > >   */
> > > @@ -411,15 +187,6 @@ static void disable_delayed_refill(struct virtnet_info *vi)
> > >  	spin_unlock_bh(&vi->refill_lock);
> > >  }
> > >
> > > -static void virtqueue_napi_schedule(struct napi_struct *napi,
> > > -				    struct virtqueue *vq)
> > > -{
> > > -	if (napi_schedule_prep(napi)) {
> > > -		virtqueue_disable_cb(vq);
> > > -		__napi_schedule(napi);
> > > -	}
> > > -}
> > > -
> > >  static void virtqueue_napi_complete(struct napi_struct *napi,
> > >  				    struct virtqueue *vq, int processed)
> > >  {
> > > @@ -1740,16 +1507,6 @@ static void free_old_xmit(struct send_queue *sq, bool in_napi)
> > >  	u64_stats_update_end(&sq->stats.syncp);
> > >  }
> > >
> > > -static bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q)
> > > -{
> > > -	if (q < (vi->curr_queue_pairs - vi->xdp_queue_pairs))
> > > -		return false;
> > > -	else if (q < vi->curr_queue_pairs)
> > > -		return true;
> > > -	else
> > > -		return false;
> > > -}
> > > -
> > >  static void virtnet_poll_cleantx(struct receive_queue *rq)
> > >  {
> > >  	struct virtnet_info *vi = rq->vq->vdev->priv;
> > > diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
> > > new file mode 100644
> > > index 000000000000..8bf31429ae28
> > > --- /dev/null
> > > +++ b/drivers/net/virtio/virtio_net.h
> > > @@ -0,0 +1,265 @@
> > > +/* SPDX-License-Identifier: GPL-2.0-or-later */
> > > +
> > > +#ifndef __VIRTIO_NET_H__
> > > +#define __VIRTIO_NET_H__
> > > +#include <linux/netdevice.h>
> > > +#include <linux/etherdevice.h>
> > > +#include <linux/ethtool.h>
> > > +#include <linux/module.h>
> > > +#include <linux/virtio.h>
> > > +#include <linux/virtio_net.h>
> > > +#include <linux/bpf.h>
> > > +#include <linux/bpf_trace.h>
> > > +#include <linux/scatterlist.h>
> > > +#include <linux/if_vlan.h>
> > > +#include <linux/slab.h>
> > > +#include <linux/cpu.h>
> > > +#include <linux/average.h>
> > > +#include <linux/filter.h>
> > > +#include <linux/kernel.h>
> > > +#include <net/route.h>
> > > +#include <net/xdp.h>
> > > +#include <net/net_failover.h>
> > > +#include <net/xdp_sock_drv.h>
> > > +
> > > +#define VIRTIO_XDP_FLAG	BIT(0)
> > > +
> > > +struct virtnet_info {
> > > +	struct virtio_device *vdev;
> > > +	struct virtqueue *cvq;
> > > +	struct net_device *dev;
> > > +	struct send_queue *sq;
> > > +	struct receive_queue *rq;
> > > +	unsigned int status;
> > > +
> > > +	/* Max # of queue pairs supported by the device */
> > > +	u16 max_queue_pairs;
> > > +
> > > +	/* # of queue pairs currently used by the driver */
> > > +	u16 curr_queue_pairs;
> > > +
> > > +	/* # of XDP queue pairs currently used by the driver */
> > > +	u16 xdp_queue_pairs;
> > > +
> > > +	/* xdp_queue_pairs may be 0, when xdp is already loaded. So add this. */
> > > +	bool xdp_enabled;
> > > +
> > > +	/* I like... big packets and I cannot lie! */
> > > +	bool big_packets;
> > > +
> > > +	/* number of sg entries allocated for big packets */
> > > +	unsigned int big_packets_num_skbfrags;
> > > +
> > > +	/* Host will merge rx buffers for big packets (shake it! shake it!) */
> > > +	bool mergeable_rx_bufs;
> > > +
> > > +	/* Host supports rss and/or hash report */
> > > +	bool has_rss;
> > > +	bool has_rss_hash_report;
> > > +	u8 rss_key_size;
> > > +	u16 rss_indir_table_size;
> > > +	u32 rss_hash_types_supported;
> > > +	u32 rss_hash_types_saved;
> > > +
> > > +	/* Has control virtqueue */
> > > +	bool has_cvq;
> > > +
> > > +	/* Host can handle any s/g split between our header and packet data */
> > > +	bool any_header_sg;
> > > +
> > > +	/* Packet virtio header size */
> > > +	u8 hdr_len;
> > > +
> > > +	/* Work struct for delayed refilling if we run low on memory. */
> > > +	struct delayed_work refill;
> > > +
> > > +	/* Is delayed refill enabled? */
> > > +	bool refill_enabled;
> > > +
> > > +	/* The lock to synchronize the access to refill_enabled */
> > > +	spinlock_t refill_lock;
> > > +
> > > +	/* Work struct for config space updates */
> > > +	struct work_struct config_work;
> > > +
> > > +	/* Does the affinity hint is set for virtqueues? */
> > > +	bool affinity_hint_set;
> > > +
> > > +	/* CPU hotplug instances for online & dead */
> > > +	struct hlist_node node;
> > > +	struct hlist_node node_dead;
> > > +
> > > +	struct control_buf *ctrl;
> > > +
> > > +	/* Ethtool settings */
> > > +	u8 duplex;
> > > +	u32 speed;
> > > +
> > > +	/* Interrupt coalescing settings */
> > > +	u32 tx_usecs;
> > > +	u32 rx_usecs;
> > > +	u32 tx_max_packets;
> > > +	u32 rx_max_packets;
> > > +
> > > +	unsigned long guest_offloads;
> > > +	unsigned long guest_offloads_capable;
> > > +
> > > +	/* failover when STANDBY feature enabled */
> > > +	struct failover *failover;
> > > +};
> > > +
> > > +/* RX packet size EWMA. The average packet size is used to determine the packet
> > > + * buffer size when refilling RX rings. As the entire RX ring may be refilled
> > > + * at once, the weight is chosen so that the EWMA will be insensitive to short-
> > > + * term, transient changes in packet size.
> > > + */
> > > +DECLARE_EWMA(pkt_len, 0, 64)
> > > +
> > > +struct virtnet_stat_desc {
> > > +	char desc[ETH_GSTRING_LEN];
> > > +	size_t offset;
> > > +};
> > > +
> > > +struct virtnet_sq_stats {
> > > +	struct u64_stats_sync syncp;
> > > +	u64 packets;
> > > +	u64 bytes;
> > > +	u64 xdp_tx;
> > > +	u64 xdp_tx_drops;
> > > +	u64 kicks;
> > > +	u64 tx_timeouts;
> > > +};
> > > +
> > > +struct virtnet_rq_stats {
> > > +	struct u64_stats_sync syncp;
> > > +	u64 packets;
> > > +	u64 bytes;
> > > +	u64 drops;
> > > +	u64 xdp_packets;
> > > +	u64 xdp_tx;
> > > +	u64 xdp_redirects;
> > > +	u64 xdp_drops;
> > > +	u64 kicks;
> > > +};
> > > +
> > > +#define VIRTNET_SQ_STAT(m)	offsetof(struct virtnet_sq_stats, m)
> > > +#define VIRTNET_RQ_STAT(m)	offsetof(struct virtnet_rq_stats, m)
> > > +
> > > +/* Internal representation of a send virtqueue */
> > > +struct send_queue {
> > > +	/* Virtqueue associated with this send _queue */
> > > +	struct virtqueue *vq;
> > > +
> > > +	/* TX: fragments + linear part + virtio header */
> > > +	struct scatterlist sg[MAX_SKB_FRAGS + 2];
> > > +
> > > +	/* Name of the send queue: output.$index */
> > > +	char name[16];
> > > +
> > > +	struct virtnet_sq_stats stats;
> > > +
> > > +	struct napi_struct napi;
> > > +
> > > +	/* Record whether sq is in reset state. */
> > > +	bool reset;
> > > +};
> > > +
> > > +/* Internal representation of a receive virtqueue */
> > > +struct receive_queue {
> > > +	/* Virtqueue associated with this receive_queue */
> > > +	struct virtqueue *vq;
> > > +
> > > +	struct napi_struct napi;
> > > +
> > > +	struct bpf_prog __rcu *xdp_prog;
> > > +
> > > +	struct virtnet_rq_stats stats;
> > > +
> > > +	/* Chain pages by the private ptr. */
> > > +	struct page *pages;
> > > +
> > > +	/* Average packet length for mergeable receive buffers. */
> > > +	struct ewma_pkt_len mrg_avg_pkt_len;
> > > +
> > > +	/* Page frag for packet buffer allocation. */
> > > +	struct page_frag alloc_frag;
> > > +
> > > +	/* RX: fragments + linear part + virtio header */
> > > +	struct scatterlist sg[MAX_SKB_FRAGS + 2];
> > > +
> > > +	/* Min single buffer size for mergeable buffers case. */
> > > +	unsigned int min_buf_len;
> > > +
> > > +	/* Name of this receive queue: input.$index */
> > > +	char name[16];
> > > +
> > > +	struct xdp_rxq_info xdp_rxq;
> > > +};
> > > +
> > > +static inline bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q)
> > > +{
> > > +	if (q < (vi->curr_queue_pairs - vi->xdp_queue_pairs))
> > > +		return false;
> > > +	else if (q < vi->curr_queue_pairs)
> > > +		return true;
> > > +	else
> > > +		return false;
> > > +}
> > > +
> > > +static inline void virtnet_return_xdp_frame(struct send_queue *sq,
> > > +					    struct xdp_frame *frame)
> > > +{
> > > +	struct virtnet_info *vi = sq->vq->vdev->priv;
> > > +	dma_addr_t *p_addr, addr;
> > > +
> > > +	p_addr = frame->data - sizeof(*p_addr);
> > > +	addr = *p_addr;
> > > +
> > > +	virtio_dma_unmap(&vi->vdev->dev, addr, frame->len, DMA_TO_DEVICE);
> > > +
> > > +	xdp_return_frame(frame);
> > > +}
> > > +
> > > +static inline void virtqueue_napi_schedule(struct napi_struct *napi,
> > > +					   struct virtqueue *vq)
> > > +{
> > > +	if (napi_schedule_prep(napi)) {
> > > +		virtqueue_disable_cb(vq);
> > > +		__napi_schedule(napi);
> > > +	}
> > > +}
> > > +
> > > +static inline bool is_xdp_frame(void *ptr)
> > > +{
> > > +	return (unsigned long)ptr & VIRTIO_XDP_FLAG;
> > > +}
> > > +
> > > +static struct xdp_frame *ptr_to_xdp(void *ptr)
> > > +{
> > > +	return (struct xdp_frame *)((unsigned long)ptr & ~VIRTIO_XDP_FLAG);
> > > +}
> > > +
> > > +static void __free_old_xmit(struct send_queue *sq, bool in_napi,
> > > +			    struct virtnet_sq_stats *stats)
> > > +{
> > > +	unsigned int len;
> > > +	void *ptr;
> > > +
> > > +	while ((ptr = virtqueue_get_buf(sq->vq, &len)) != NULL) {
> > > +		if (!is_xdp_frame(ptr)) {
> > > +			struct sk_buff *skb = ptr;
> > > +
> > > +			pr_debug("Sent skb %p\n", skb);
> > > +
> > > +			stats->bytes += skb->len;
> > > +			napi_consume_skb(skb, in_napi);
> > > +		} else {
> > > +			struct xdp_frame *frame = ptr_to_xdp(ptr);
> > > +
> > > +			stats->bytes += xdp_get_frame_len(frame);
> > > +			xdp_return_frame(frame);
> > > +		}
> > > +		stats->packets++;
> > > +	}
> > > +}
> > > +#endif
> >
> > All these APIs not prefixed with virtnet were ok as internal
> > static functions. No longer ok in a header.
> 
> I agree. Will fix.
> 
> Thanks.
> 
> >
> >
> > > --
> > > 2.32.0.3.g01195cf9f
> >

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 20/33] virtio_net: xsk: introduce virtnet_rq_bind_xsk_pool()
  2023-02-03  8:52     ` Xuan Zhuo
@ 2023-02-03  9:28       ` Michael S. Tsirkin
  0 siblings, 0 replies; 76+ messages in thread
From: Michael S. Tsirkin @ 2023-02-03  9:28 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Fri, Feb 03, 2023 at 04:52:35PM +0800, Xuan Zhuo wrote:
> On Fri, 3 Feb 2023 03:48:33 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > On Thu, Feb 02, 2023 at 07:00:45PM +0800, Xuan Zhuo wrote:
> > > This function is used to bind or unbind an xsk pool to a virtnet rq.
> > >
> > > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > > ---
> > >  drivers/net/virtio/Makefile     |  2 +-
> > >  drivers/net/virtio/main.c       |  8 ++---
> > >  drivers/net/virtio/virtio_net.h | 16 ++++++++++
> > >  drivers/net/virtio/xsk.c        | 56 +++++++++++++++++++++++++++++++++
> > >  4 files changed, 76 insertions(+), 6 deletions(-)
> > >  create mode 100644 drivers/net/virtio/xsk.c
> > >
> > > diff --git a/drivers/net/virtio/Makefile b/drivers/net/virtio/Makefile
> > > index 15ed7c97fd4f..8c2a884d2dba 100644
> > > --- a/drivers/net/virtio/Makefile
> > > +++ b/drivers/net/virtio/Makefile
> > > @@ -5,4 +5,4 @@
> > >
> > >  obj-$(CONFIG_VIRTIO_NET) += virtio_net.o
> > >
> > > -virtio_net-y := main.o
> > > +virtio_net-y := main.o xsk.o
> > > diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> > > index 049a3bb9d88d..0ee23468b795 100644
> > > --- a/drivers/net/virtio/main.c
> > > +++ b/drivers/net/virtio/main.c
> > > @@ -110,7 +110,6 @@ struct padded_vnet_hdr {
> > >  	char padding[12];
> > >  };
> > >
> > > -static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
> > >  static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
> > >
> > >  static void *xdp_to_ptr(struct xdp_frame *ptr)
> > > @@ -1351,8 +1350,7 @@ static int add_recvbuf_mergeable(struct virtnet_info *vi,
> > >   * before we're receiving packets, or from refill_work which is
> > >   * careful to disable receiving (using napi_disable).
> > >   */
> > > -static bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq,
> > > -			  gfp_t gfp)
> > > +bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq, gfp_t gfp)
> > >  {
> > >  	int err;
> > >  	bool oom;
> > > @@ -1388,7 +1386,7 @@ static void skb_recv_done(struct virtqueue *rvq)
> > >  	virtqueue_napi_schedule(&rq->napi, rvq);
> > >  }
> > >
> > > -static void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> > > +void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi)
> > >  {
> > >  	napi_enable(napi);
> > >
> > > @@ -3284,7 +3282,7 @@ static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf)
> > >  		xdp_return_frame(ptr_to_xdp(buf));
> > >  }
> > >
> > > -static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf)
> > > +void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf)
> >
> > If you are making this an API now you better document
> > what it does. Same applies to other stuff you are
> > making non-static.
> 
> I agree.
> 
> >
> >
> > >  {
> > >  	struct virtnet_info *vi = vq->vdev->priv;
> > >  	int i = vq2rxq(vq);
> > > diff --git a/drivers/net/virtio/virtio_net.h b/drivers/net/virtio/virtio_net.h
> > > index b46f083a630a..4a7633714802 100644
> > > --- a/drivers/net/virtio/virtio_net.h
> > > +++ b/drivers/net/virtio/virtio_net.h
> > > @@ -168,6 +168,12 @@ struct send_queue {
> > >
> > >  	/* Record whether sq is in reset state. */
> > >  	bool reset;
> > > +
> > > +	struct {
> > > +		struct xsk_buff_pool __rcu *pool;
> > > +
> > > +		dma_addr_t hdr_dma_address;
> > > +	} xsk;
> > >  };
> > >
> > >  /* Internal representation of a receive virtqueue */
> > > @@ -200,6 +206,13 @@ struct receive_queue {
> > >  	char name[16];
> > >
> > >  	struct xdp_rxq_info xdp_rxq;
> > > +
> > > +	struct {
> > > +		struct xsk_buff_pool __rcu *pool;
> > > +
> > > +		/* xdp rxq used by xsk */
> > > +		struct xdp_rxq_info xdp_rxq;
> > > +	} xsk;
> > >  };
> > >
> > >  static inline bool is_xdp_raw_buffer_queue(struct virtnet_info *vi, int q)
> > > @@ -274,4 +287,7 @@ int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> > >  			unsigned int *xdp_xmit,
> > >  			struct virtnet_rq_stats *stats);
> > >  int virtnet_tx_reset(struct virtnet_info *vi, struct send_queue *sq);
> > > +bool try_fill_recv(struct virtnet_info *vi, struct receive_queue *rq, gfp_t gfp);
> > > +void virtnet_napi_enable(struct virtqueue *vq, struct napi_struct *napi);
> > > +void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
> > >  #endif
> > > diff --git a/drivers/net/virtio/xsk.c b/drivers/net/virtio/xsk.c
> > > new file mode 100644
> > > index 000000000000..e01ff2abea11
> > > --- /dev/null
> > > +++ b/drivers/net/virtio/xsk.c
> > > @@ -0,0 +1,56 @@
> > > +// SPDX-License-Identifier: GPL-2.0-or-later
> > > +/*
> > > + * virtio-net xsk
> > > + */
> > > +
> > > +#include "virtio_net.h"
> > > +
> > > +static int virtnet_rq_bind_xsk_pool(struct virtnet_info *vi, struct receive_queue *rq,
> > > +				    struct xsk_buff_pool *pool, struct net_device *dev)
> >
> > This static function is unused after this patch, so compiler will
> > complain. Yes it's just a warning but still not nice.
> 
> Otherwise, we need to merge some patches, which will increase the difficulty of
> review.
> 
> Is there a better way to deal with this? Remove the static?
> 
> Thanks.

In this case review is not made easier because the API does not make
much sense on its own and is undocumented anyway. To review, one has to
jump back and forth between multiple patches - that is harder, not easier,
than a single bigger patch. Others in this thread already commented that
the patches are too small.


> 
> >
> >
> > > +{
> > > +	bool running = netif_running(vi->dev);
> > > +	int err, qindex;
> > > +
> > > +	qindex = rq - vi->rq;
> > > +
> > > +	if (pool) {
> > > +		err = xdp_rxq_info_reg(&rq->xsk.xdp_rxq, dev, qindex, rq->napi.napi_id);
> > > +		if (err < 0)
> > > +			return err;
> > > +
> > > +		err = xdp_rxq_info_reg_mem_model(&rq->xsk.xdp_rxq,
> > > +						 MEM_TYPE_XSK_BUFF_POOL, NULL);
> > > +		if (err < 0) {
> > > +			xdp_rxq_info_unreg(&rq->xsk.xdp_rxq);
> > > +			return err;
> > > +		}
> > > +
> > > +		xsk_pool_set_rxq_info(pool, &rq->xsk.xdp_rxq);
> > > +	} else {
> > > +		xdp_rxq_info_unreg(&rq->xsk.xdp_rxq);
> > > +	}
> > > +
> > > +	if (running)
> > > +		napi_disable(&rq->napi);
> > > +
> > > +	err = virtqueue_reset(rq->vq, virtnet_rq_free_unused_buf);
> > > +	if (err)
> > > +		netdev_err(vi->dev, "reset rx fail: rx queue index: %d err: %d\n", qindex, err);
> > > +
> > > +	if (pool) {
> > > +		if (err)
> > > +			xdp_rxq_info_unreg(&rq->xsk.xdp_rxq);
> > > +		else
> > > +			rcu_assign_pointer(rq->xsk.pool, pool);
> > > +	} else {
> > > +		rcu_assign_pointer(rq->xsk.pool, NULL);
> > > +	}
> > > +
> > > +	if (!try_fill_recv(vi, rq, GFP_KERNEL))
> > > +		schedule_delayed_work(&vi->refill, 0);
> > > +
> > > +	if (running)
> > > +		virtnet_napi_enable(rq->vq, &rq->napi);
> > > +
> > > +	return err;
> > > +}
> > > --
> > > 2.32.0.3.g01195cf9f
> >


* Re: [PATCH 24/33] virtio_net: xsk: stop disable tx napi
  2023-02-03  8:49         ` Xuan Zhuo
@ 2023-02-03  9:29           ` Michael S. Tsirkin
  0 siblings, 0 replies; 76+ messages in thread
From: Michael S. Tsirkin @ 2023-02-03  9:29 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Fri, Feb 03, 2023 at 04:49:16PM +0800, Xuan Zhuo wrote:
> On Fri, 3 Feb 2023 03:33:41 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > On Fri, Feb 03, 2023 at 11:24:42AM +0800, Xuan Zhuo wrote:
> > > On Thu, 2 Feb 2023 12:25:59 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > > > On Thu, Feb 02, 2023 at 07:00:49PM +0800, Xuan Zhuo wrote:
> > > > > Since xsk's TX queue is consumed by TX NAPI, if sq is bound to xsk, then
> > > > > we must prevent tx napi from being disabled.
> > > > >
> > > > > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > > > > ---
> > > > >  drivers/net/virtio/main.c | 9 ++++++++-
> > > > >  1 file changed, 8 insertions(+), 1 deletion(-)
> > > > >
> > > > > diff --git a/drivers/net/virtio/main.c b/drivers/net/virtio/main.c
> > > > > index ed79e750bc6c..232cf151abff 100644
> > > > > --- a/drivers/net/virtio/main.c
> > > > > +++ b/drivers/net/virtio/main.c
> > > > > @@ -2728,8 +2728,15 @@ static int virtnet_set_coalesce(struct net_device *dev,
> > > > >  		return ret;
> > > > >
> > > > >  	if (update_napi) {
> > > > > -		for (i = 0; i < vi->max_queue_pairs; i++)
> > > > > +		for (i = 0; i < vi->max_queue_pairs; i++) {
> > > > > +			/* xsk xmit depend on the tx napi. So if xsk is active,
> > > >
> > > > depends.
> > > >
> > > > > +			 * prevent modifications to tx napi.
> > > > > +			 */
> > > > > +			if (rtnl_dereference(vi->sq[i].xsk.pool))
> > > > > +				continue;
> > > > > +
> > > > >  			vi->sq[i].napi.weight = napi_weight;
> > > >
> > > > I don't get it.
> > > > changing napi weight does not work then.
> > > > why is this ok?
> > >
> > >
> > > static void skb_xmit_done(struct virtqueue *vq)
> > > {
> > > 	struct virtnet_info *vi = vq->vdev->priv;
> > > 	struct napi_struct *napi = &vi->sq[vq2txq(vq)].napi;
> > >
> > > 	/* Suppress further interrupts. */
> > > 	virtqueue_disable_cb(vq);
> > >
> > > 	if (napi->weight)
> > > 		virtqueue_napi_schedule(napi, vq);
> > > 	else
> > > 		/* We were probably waiting for more output buffers. */
> > > 		netif_wake_subqueue(vi->dev, vq2txq(vq));
> > > }
> > >
> > >
> > > If the weight is 0, tx napi will not be triggered again.
> > >
> > > Thanks.
> >
> > This needs more thought then. First, ignoring what the user is requesting is
> > not nice.
> 
> Maybe we should return an error.

maybe


> > Second, what if napi is first disabled and then xsk is enabled?
> 
> 
> static int virtnet_xsk_pool_enable(struct net_device *dev,
> 				   struct xsk_buff_pool *pool,
> 				   u16 qid)
> {
> 	struct virtnet_info *vi = netdev_priv(dev);
> 	struct receive_queue *rq;
> 	struct send_queue *sq;
> 	int err;
> 
> 	if (qid >= vi->curr_queue_pairs)
> 		return -EINVAL;
> 
> 	sq = &vi->sq[qid];
> 	rq = &vi->rq[qid];
> 
> 	/* xsk zerocopy depend on the tx napi.
> 	 *
> 	 * All xsk packets are actually consumed and sent out from the xsk tx
> 	 * queue under the tx napi mechanism.
> 	 */
> ->	if (!sq->napi.weight)
> 		return -EPERM;
> 
> Thanks.
> 
> 
> >
> >
> > > >
> > > >
> > > > > +		}
> > > > >  	}
> > > > >
> > > > >  	return ret;
> > > > > --
> > > > > 2.32.0.3.g01195cf9f
> > > >
> >


* Re: [PATCH 10/33] xsk: support virtio DMA map
  2023-02-02 11:00 ` [PATCH 10/33] xsk: support virtio DMA map Xuan Zhuo
@ 2023-02-05 22:04   ` kernel test robot
  0 siblings, 0 replies; 76+ messages in thread
From: kernel test robot @ 2023-02-05 22:04 UTC (permalink / raw)
  To: Xuan Zhuo, netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	Jonathan Lemon, John Fastabend, Björn Töpel,
	Alexei Starovoitov, Eric Dumazet, Kuniyuki Iwashima,
	oe-kbuild-all, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	Sebastian Andrzej Siewior, Magnus Karlsson

Hi Xuan,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on net-next/master]
[also build test ERROR on mst-vhost/linux-next linus/master v6.2-rc6 next-20230203]
[cannot apply to net/master]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Xuan-Zhuo/virtio_ring-virtqueue_add-support-premapped/20230202-190707
patch link:    https://lore.kernel.org/r/20230202110058.130695-11-xuanzhuo%40linux.alibaba.com
patch subject: [PATCH 10/33] xsk: support virtio DMA map
config: i386-debian-10.3-kvm (https://download.01.org/0day-ci/archive/20230206/202302060542.IxBGSiKh-lkp@intel.com/config)
compiler: gcc-11 (Debian 11.3.0-8) 11.3.0
reproduce (this is a W=1 build):
        # https://github.com/intel-lab-lkp/linux/commit/370aefebcea755f7c4c14e16f8dcb5540769fd26
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Xuan-Zhuo/virtio_ring-virtqueue_add-support-premapped/20230202-190707
        git checkout 370aefebcea755f7c4c14e16f8dcb5540769fd26
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        make W=1 O=build_dir ARCH=i386 olddefconfig
        make W=1 O=build_dir ARCH=i386 SHELL=/bin/bash

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   ld: net/xdp/xsk_buff_pool.o: in function `xp_alloc':
>> net/xdp/xsk_buff_pool.c:575: undefined reference to `is_virtio_device'
>> ld: net/xdp/xsk_buff_pool.c:576: undefined reference to `virtio_dma_sync_signle_range_for_device'
   ld: net/xdp/xsk_buff_pool.o: in function `__xp_dma_unmap':
   net/xdp/xsk_buff_pool.c:338: undefined reference to `is_virtio_device'
>> ld: net/xdp/xsk_buff_pool.c:339: undefined reference to `virtio_dma_unmap'
   ld: net/xdp/xsk_buff_pool.o: in function `xp_dma_map':
   net/xdp/xsk_buff_pool.c:443: undefined reference to `is_virtio_device'
   ld: net/xdp/xsk_buff_pool.c:443: undefined reference to `virtio_dma_sync_signle_range_for_device'
>> ld: net/xdp/xsk_buff_pool.c:443: undefined reference to `virtio_dma_sync_signle_range_for_cpu'
>> ld: net/xdp/xsk_buff_pool.c:458: undefined reference to `virtio_dma_map_page'
>> ld: net/xdp/xsk_buff_pool.c:461: undefined reference to `virtio_dma_mapping_error'
>> ld: net/xdp/xsk_buff_pool.c:464: undefined reference to `virtio_dma_need_sync'
>> ld: net/xdp/xsk_buff_pool.c:457: undefined reference to `is_virtio_device'


vim +575 net/xdp/xsk_buff_pool.c

   424	
   425	int xp_dma_map(struct xsk_buff_pool *pool, struct device *dev,
   426		       unsigned long attrs, struct page **pages, u32 nr_pages)
   427	{
   428		struct xsk_dma_map *dma_map;
   429		dma_addr_t dma;
   430		int err;
   431		u32 i;
   432	
   433		dma_map = xp_find_dma_map(pool);
   434		if (dma_map) {
   435			err = xp_init_dma_info(pool, dma_map);
   436			if (err)
   437				return err;
   438	
   439			refcount_inc(&dma_map->users);
   440			return 0;
   441		}
   442	
 > 443		if (is_virtio_device(dev)) {
   444			pool->dma_sync_for_cpu = virtio_dma_sync_signle_range_for_cpu;
   445			pool->dma_sync_for_device = virtio_dma_sync_signle_range_for_device;
   446	
   447		} else {
   448			pool->dma_sync_for_cpu = dma_sync_for_cpu;
   449			pool->dma_sync_for_device = dma_sync_for_device;
   450		}
   451	
   452		dma_map = xp_create_dma_map(dev, pool->netdev, nr_pages, pool->umem);
   453		if (!dma_map)
   454			return -ENOMEM;
   455	
   456		for (i = 0; i < dma_map->dma_pages_cnt; i++) {
 > 457			if (is_virtio_device(dev)) {
 > 458				dma = virtio_dma_map_page(dev, pages[i], 0, PAGE_SIZE,
   459							  DMA_BIDIRECTIONAL);
   460	
 > 461				if (virtio_dma_mapping_error(dev, dma))
   462					goto err;
   463	
 > 464				if (virtio_dma_need_sync(dev, dma))
   465					dma_map->dma_need_sync = true;
   466	
   467			} else {
   468				dma = dma_map_page_attrs(dev, pages[i], 0, PAGE_SIZE,
   469							 DMA_BIDIRECTIONAL, attrs);
   470	
   471				if (dma_mapping_error(dev, dma))
   472					goto err;
   473	
   474				if (dma_need_sync(dev, dma))
   475					dma_map->dma_need_sync = true;
   476			}
   477			dma_map->dma_pages[i] = dma;
   478		}
   479	
   480		if (pool->unaligned)
   481			xp_check_dma_contiguity(dma_map);
   482	
   483		err = xp_init_dma_info(pool, dma_map);
   484		if (err) {
   485			__xp_dma_unmap(dma_map, attrs);
   486			return err;
   487		}
   488	
   489		return 0;
   490	err:
   491		__xp_dma_unmap(dma_map, attrs);
   492		return -ENOMEM;
   493	}
   494	EXPORT_SYMBOL(xp_dma_map);
   495	
   496	static bool xp_addr_crosses_non_contig_pg(struct xsk_buff_pool *pool,
   497						  u64 addr)
   498	{
   499		return xp_desc_crosses_non_contig_pg(pool, addr, pool->chunk_size);
   500	}
   501	
   502	static bool xp_check_unaligned(struct xsk_buff_pool *pool, u64 *addr)
   503	{
   504		*addr = xp_unaligned_extract_addr(*addr);
   505		if (*addr >= pool->addrs_cnt ||
   506		    *addr + pool->chunk_size > pool->addrs_cnt ||
   507		    xp_addr_crosses_non_contig_pg(pool, *addr))
   508			return false;
   509		return true;
   510	}
   511	
   512	static bool xp_check_aligned(struct xsk_buff_pool *pool, u64 *addr)
   513	{
   514		*addr = xp_aligned_extract_addr(pool, *addr);
   515		return *addr < pool->addrs_cnt;
   516	}
   517	
   518	static struct xdp_buff_xsk *__xp_alloc(struct xsk_buff_pool *pool)
   519	{
   520		struct xdp_buff_xsk *xskb;
   521		u64 addr;
   522		bool ok;
   523	
   524		if (pool->free_heads_cnt == 0)
   525			return NULL;
   526	
   527		for (;;) {
   528			if (!xskq_cons_peek_addr_unchecked(pool->fq, &addr)) {
   529				pool->fq->queue_empty_descs++;
   530				return NULL;
   531			}
   532	
   533			ok = pool->unaligned ? xp_check_unaligned(pool, &addr) :
   534			     xp_check_aligned(pool, &addr);
   535			if (!ok) {
   536				pool->fq->invalid_descs++;
   537				xskq_cons_release(pool->fq);
   538				continue;
   539			}
   540			break;
   541		}
   542	
   543		if (pool->unaligned) {
   544			xskb = pool->free_heads[--pool->free_heads_cnt];
   545			xp_init_xskb_addr(xskb, pool, addr);
   546			if (pool->dma_pages_cnt)
   547				xp_init_xskb_dma(xskb, pool, pool->dma_pages, addr);
   548		} else {
   549			xskb = &pool->heads[xp_aligned_extract_idx(pool, addr)];
   550		}
   551	
   552		xskq_cons_release(pool->fq);
   553		return xskb;
   554	}
   555	
   556	struct xdp_buff *xp_alloc(struct xsk_buff_pool *pool)
   557	{
   558		struct xdp_buff_xsk *xskb;
   559	
   560		if (!pool->free_list_cnt) {
   561			xskb = __xp_alloc(pool);
   562			if (!xskb)
   563				return NULL;
   564		} else {
   565			pool->free_list_cnt--;
   566			xskb = list_first_entry(&pool->free_list, struct xdp_buff_xsk,
   567						free_list_node);
   568			list_del_init(&xskb->free_list_node);
   569		}
   570	
   571		xskb->xdp.data = xskb->xdp.data_hard_start + XDP_PACKET_HEADROOM;
   572		xskb->xdp.data_meta = xskb->xdp.data;
   573	
   574		if (pool->dma_need_sync) {
 > 575			if (is_virtio_device(pool->dev))
 > 576				virtio_dma_sync_signle_range_for_device(pool->dev, xskb->dma, 0,
   577									pool->frame_len,
   578									DMA_BIDIRECTIONAL);
   579			else
   580				dma_sync_single_range_for_device(pool->dev, xskb->dma, 0,
   581								 pool->frame_len,
   582								 DMA_BIDIRECTIONAL);
   583		}
   584		return &xskb->xdp;
   585	}
   586	EXPORT_SYMBOL(xp_alloc);
   587	

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests

* Re: [PATCH 00/33] virtio-net: support AF_XDP zero copy
  2023-02-03  9:17     ` Michael S. Tsirkin
@ 2023-02-06  2:41       ` Xuan Zhuo
  2023-02-13 12:14         ` Michael S. Tsirkin
  0 siblings, 1 reply; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-06  2:41 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Fri, 3 Feb 2023 04:17:59 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Fri, Feb 03, 2023 at 11:33:31AM +0800, Xuan Zhuo wrote:
> > On Thu, 02 Feb 2023 15:41:44 +0100, Paolo Abeni <pabeni@redhat.com> wrote:
> > > On Thu, 2023-02-02 at 19:00 +0800, Xuan Zhuo wrote:
> > > > XDP socket (AF_XDP) is an excellent kernel-bypass network framework. The zero
> > > > copy feature of xsk (XDP socket) needs to be supported by the driver, and its
> > > > performance is very good. mlx5 and Intel ixgbe already support this feature.
> > > > This patch set allows virtio-net to support xsk's zerocopy xmit feature.
> > > >
> > > > Virtio-net did not support per-queue reset, so it was impossible to support XDP
> > > > Socket Zerocopy. At present, we have completed the Virtio Spec and kernel work
> > > > on Per-Queue Reset. It is time for Virtio-Net to complete its support for XDP
> > > > Socket Zerocopy.
> > > >
> > > > Virtio-net cannot add queues at will, so xsk shares queues with the kernel.
> > > >
> > > > On the other hand, Virtio-Net does not support generating interrupts manually,
> > > > so when we wake up tx xmit we use some tricks. If the CPU that last ran TX NAPI
> > > > is a different CPU, we use an IPI to wake up NAPI on that remote CPU; if it is
> > > > the local CPU, we wake up softirqd.
> > >
> > > Thank you for the large effort.
> > >
> > > Since this will likely need a few iterations, on next revision please
> > > do split the work in multiple chunks to help the reviewer efforts -
> > > from Documentation/process/maintainer-netdev.rst:
> > >
> > >  - don't post large series (> 15 patches), break them up
> > >
> > > In this case I guess you can split it in 1 (or even 2) pre-req series
> > > and another one for the actual xsk zero copy support.
> >
> >
> > OK.
> >
> > I can split the patchset into multiple parts, such as:
> >
> > * virtio core
> > * xsk
> > * virtio-net prepare
> > * virtio-net support xsk zerocopy
> >
> > However, there is a problem: the virtio core part should go through Michael's
> > vhost branch. Then, which branch should I post the follow-up patches to, vhost
> > or net-next?
> >
> > Thanks.
> >
>
> Here are some ideas on how to make the patchset smaller
> and easier to merge:
> - keep everything in virtio_net.c for now. We can split
>   things out later, but this way your patchset will not
>   conflict with every single change merged in the meantime.
>   Also, split up needs to be done carefully with sane
>   APIs between components, let's maybe not waste time
>   on that now, do the split-up later.
> - you have patches that add APIs and then other
>   patches that use them. As long as it's only virtio net, just
>   add and use in a single patch; review is actually easier this way.

I will try to merge #16-#18 and #20-#23.


> - we can try merging pre-requisites earlier, then patchset
>   size will shrink.

Do you mean the virtio core patches? Should we put these
patches into the vhost branch?

Thanks.

>
>
> > >
> > > Thanks!
> > >
> > > Paolo
> > >
>

* Re: [PATCH 22/33] virtio_net: xsk: introduce xsk disable
  2023-02-02 11:00 ` [PATCH 22/33] virtio_net: xsk: introduce xsk disable Xuan Zhuo
  2023-02-02 23:02   ` kernel test robot
@ 2023-02-12  7:56   ` kernel test robot
  1 sibling, 0 replies; 76+ messages in thread
From: kernel test robot @ 2023-02-12  7:56 UTC (permalink / raw)
  To: Xuan Zhuo, netdev
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, Michael S. Tsirkin,
	Jonathan Lemon, John Fastabend, Björn Töpel,
	Alexei Starovoitov, Eric Dumazet, Kuniyuki Iwashima,
	oe-kbuild-all, Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	Sebastian Andrzej Siewior, Magnus Karlsson

Hi Xuan,

Thank you for the patch! Perhaps something to improve:

[auto build test WARNING on net-next/master]
[also build test WARNING on next-20230210]
[cannot apply to net/master mst-vhost/linux-next linus/master v6.2-rc7]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch#_base_tree_information]

url:    https://github.com/intel-lab-lkp/linux/commits/Xuan-Zhuo/virtio_ring-virtqueue_add-support-premapped/20230202-190707
patch link:    https://lore.kernel.org/r/20230202110058.130695-23-xuanzhuo%40linux.alibaba.com
patch subject: [PATCH 22/33] virtio_net: xsk: introduce xsk disable
config: nios2-randconfig-s033-20230202 (https://download.01.org/0day-ci/archive/20230212/202302121555.BtDmbIKI-lkp@intel.com/config)
compiler: nios2-linux-gcc (GCC) 12.1.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # apt-get install sparse
        # sparse version: v0.6.4-39-gce1a6720-dirty
        # https://github.com/intel-lab-lkp/linux/commit/3c385ac45368b585d2ca1a45263b4a0536cef0dd
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Xuan-Zhuo/virtio_ring-virtqueue_add-support-premapped/20230202-190707
        git checkout 3c385ac45368b585d2ca1a45263b4a0536cef0dd
        # save the config file
        mkdir build_dir && cp config build_dir/.config
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' O=build_dir ARCH=nios2 olddefconfig
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-12.1.0 make.cross C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' O=build_dir ARCH=nios2 SHELL=/bin/bash drivers/net/virtio/

If you fix the issue, kindly add following tag where applicable
| Reported-by: kernel test robot <lkp@intel.com>
| Link: https://lore.kernel.org/oe-kbuild-all/202302121555.BtDmbIKI-lkp@intel.com/

sparse warnings: (new ones prefixed by >>)
>> drivers/net/virtio/xsk.c:133:35: sparse: sparse: incorrect type in argument 1 (different address spaces) @@     expected struct xsk_buff_pool *pool @@     got struct xsk_buff_pool [noderef] __rcu *pool @@
   drivers/net/virtio/xsk.c:133:35: sparse:     expected struct xsk_buff_pool *pool
   drivers/net/virtio/xsk.c:133:35: sparse:     got struct xsk_buff_pool [noderef] __rcu *pool

vim +133 drivers/net/virtio/xsk.c

   116	
   117	static int virtnet_xsk_pool_disable(struct net_device *dev, u16 qid)
   118	{
   119		struct virtnet_info *vi = netdev_priv(dev);
   120		struct receive_queue *rq;
   121		struct send_queue *sq;
   122		int err1, err2;
   123	
   124		if (qid >= vi->curr_queue_pairs)
   125			return -EINVAL;
   126	
   127		sq = &vi->sq[qid];
   128		rq = &vi->rq[qid];
   129	
   130		virtio_dma_unmap(&vi->vdev->dev, sq->xsk.hdr_dma_address, vi->hdr_len,
   131				 DMA_TO_DEVICE);
   132	
 > 133		xsk_pool_dma_unmap(sq->xsk.pool, 0);

-- 
0-DAY CI Kernel Test Service
https://github.com/intel/lkp-tests

* Re: [PATCH 00/33] virtio-net: support AF_XDP zero copy
  2023-02-06  2:41       ` Xuan Zhuo
@ 2023-02-13 12:14         ` Michael S. Tsirkin
  0 siblings, 0 replies; 76+ messages in thread
From: Michael S. Tsirkin @ 2023-02-13 12:14 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Mon, Feb 06, 2023 at 10:41:16AM +0800, Xuan Zhuo wrote:
> On Fri, 3 Feb 2023 04:17:59 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > On Fri, Feb 03, 2023 at 11:33:31AM +0800, Xuan Zhuo wrote:
> > > On Thu, 02 Feb 2023 15:41:44 +0100, Paolo Abeni <pabeni@redhat.com> wrote:
> > > > On Thu, 2023-02-02 at 19:00 +0800, Xuan Zhuo wrote:
> > > > > XDP socket (AF_XDP) is an excellent kernel-bypass network framework. The zero
> > > > > copy feature of xsk (XDP socket) needs to be supported by the driver, and its
> > > > > performance is very good. mlx5 and Intel ixgbe already support this feature.
> > > > > This patch set allows virtio-net to support xsk's zerocopy xmit feature.
> > > > >
> > > > > Virtio-net did not support per-queue reset, so it was impossible to support XDP
> > > > > Socket Zerocopy. At present, we have completed the Virtio Spec and kernel work
> > > > > on Per-Queue Reset. It is time for Virtio-Net to complete its support for XDP
> > > > > Socket Zerocopy.
> > > > >
> > > > > Virtio-net cannot add queues at will, so xsk shares queues with the kernel.
> > > > >
> > > > > On the other hand, Virtio-Net does not support generating interrupts manually,
> > > > > so when we wake up tx xmit we use some tricks. If the CPU that last ran TX NAPI
> > > > > is a different CPU, we use an IPI to wake up NAPI on that remote CPU; if it is
> > > > > the local CPU, we wake up softirqd.
> > > >
> > > > Thank you for the large effort.
> > > >
> > > > Since this will likely need a few iterations, on next revision please
> > > > do split the work in multiple chunks to help the reviewer efforts -
> > > > from Documentation/process/maintainer-netdev.rst:
> > > >
> > > >  - don't post large series (> 15 patches), break them up
> > > >
> > > > In this case I guess you can split it in 1 (or even 2) pre-req series
> > > > and another one for the actual xsk zero copy support.
> > >
> > >
> > > OK.
> > >
> > > I can split the patchset into multiple parts, such as:
> > >
> > > * virtio core
> > > * xsk
> > > * virtio-net prepare
> > > * virtio-net support xsk zerocopy
> > >
> > > However, there is a problem: the virtio core part should go through Michael's
> > > vhost branch. Then, which branch should I post the follow-up patches to, vhost
> > > or net-next?
> > >
> > > Thanks.
> > >
> >
> > Here are some ideas on how to make the patchset smaller
> > and easier to merge:
> > - keep everything in virtio_net.c for now. We can split
> >   things out later, but this way your patchset will not
> >   conflict with every single change merged in the meantime.
> >   Also, split up needs to be done carefully with sane
> >   APIs between components, let's maybe not waste time
> >   on that now, do the split-up later.
> > - you have patches that add APIs and then other patches that use them. As
> >   long as it's only virtio-net, just add and use them in a single patch;
> >   review is actually easier this way.
> 
> I will try to merge #16-#18 and #20-#23.

don't do the code reorg thing for now either.

leave this for later.

> 
> > - we can try merging pre-requisites earlier, then patchset
> >   size will shrink.
> 
> Do you mean the patches of virtio core? Should we put these
> patches to vhost branch?
> 
> Thanks.

I can merge patches 1-8, yes.
This patchset probably missed the merge window anyway.


> >
> >
> > > >
> > > > Thanks!
> > > >
> > > > Paolo
> > > >
> >

_______________________________________________
Virtualization mailing list
Virtualization@lists.linux-foundation.org
https://lists.linuxfoundation.org/mailman/listinfo/virtualization

^ permalink raw reply	[flat|nested] 76+ messages in thread

* Re: [PATCH 06/33] virtio_ring: introduce virtqueue_reset()
  2023-02-03  9:09     ` Xuan Zhuo
@ 2023-02-13 12:15       ` Michael S. Tsirkin
  2023-02-14  1:53         ` Xuan Zhuo
  0 siblings, 1 reply; 76+ messages in thread
From: Michael S. Tsirkin @ 2023-02-13 12:15 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Fri, Feb 03, 2023 at 05:09:12PM +0800, Xuan Zhuo wrote:
> On Fri, 3 Feb 2023 04:05:38 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > On Thu, Feb 02, 2023 at 07:00:31PM +0800, Xuan Zhuo wrote:
> > > Introduce virtqueue_reset() to release all buffers inside the vq.
> > >
> > > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > > ---
> > >  drivers/virtio/virtio_ring.c | 50 ++++++++++++++++++++++++++++++++++++
> > >  include/linux/virtio.h       |  2 ++
> > >  2 files changed, 52 insertions(+)
> > >
> > > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > > index e32046fd15a5..7dfce7001f9f 100644
> > > --- a/drivers/virtio/virtio_ring.c
> > > +++ b/drivers/virtio/virtio_ring.c
> > > @@ -2735,6 +2735,56 @@ int virtqueue_resize(struct virtqueue *_vq, u32 num,
> > >  }
> > >  EXPORT_SYMBOL_GPL(virtqueue_resize);
> > >
> > > +/**
> > > + * virtqueue_reset - reset the vring of vq
> >
> > ..., detach and recycle all unused buffers
> >
> > 	after all this is why we are doing this reset, right?
> >
> > > + * @_vq: the struct virtqueue we're talking about.
> > > + * @recycle: callback for recycle the useless buffer
> >
> > not useless :) unused:
> >
> > 	callback to recycle unused buffers
> 
> 
> I agree. Will fix.
> 
> Thanks.

Probably too late for this merge cycle then. Oh well.


> >
> > I know we have the same confusion in virtqueue_resize, I will fix
> > that.
> >
> > > + *
> > > + * Caller must ensure we don't call this with other virtqueue operations
> > > + * at the same time (except where noted).
> > > + *
> > > + * Returns zero or a negative error.
> > > + * 0: success.
> > > + * -EBUSY: Failed to sync with device, vq may not work properly
> > > + * -ENOENT: Transport or device not supported
> > > + * -EPERM: Operation not permitted
> > > + */
> > > +int virtqueue_reset(struct virtqueue *_vq,
> > > +		    void (*recycle)(struct virtqueue *vq, void *buf))
> > > +{
> > > +	struct vring_virtqueue *vq = to_vvq(_vq);
> > > +	struct virtio_device *vdev = vq->vq.vdev;
> > > +	void *buf;
> > > +	int err;
> > > +
> > > +	if (!vq->we_own_ring)
> > > +		return -EPERM;
> > > +
> > > +	if (!vdev->config->disable_vq_and_reset)
> > > +		return -ENOENT;
> > > +
> > > +	if (!vdev->config->enable_vq_after_reset)
> > > +		return -ENOENT;
> > > +
> > > +	err = vdev->config->disable_vq_and_reset(_vq);
> > > +	if (err)
> > > +		return err;
> > > +
> > > +	while ((buf = virtqueue_detach_unused_buf(_vq)) != NULL)
> > > +		recycle(_vq, buf);
> > > +
> > > +	if (vq->packed_ring)
> > > +		virtqueue_reinit_packed(vq);
> > > +	else
> > > +		virtqueue_reinit_split(vq);
> > > +
> > > +	if (vdev->config->enable_vq_after_reset(_vq))
> > > +		return -EBUSY;
> > > +
> > > +	return 0;
> > > +}
> > > +EXPORT_SYMBOL_GPL(virtqueue_reset);
> > > +
> > >  /* Only available for split ring */
> > >  struct virtqueue *vring_new_virtqueue(unsigned int index,
> > >  				      unsigned int num,
> > > diff --git a/include/linux/virtio.h b/include/linux/virtio.h
> > > index 3ebb346ebb7c..3ca2edb1aef3 100644
> > > --- a/include/linux/virtio.h
> > > +++ b/include/linux/virtio.h
> > > @@ -105,6 +105,8 @@ dma_addr_t virtqueue_get_used_addr(struct virtqueue *vq);
> > >
> > >  int virtqueue_resize(struct virtqueue *vq, u32 num,
> > >  		     void (*recycle)(struct virtqueue *vq, void *buf));
> > > +int virtqueue_reset(struct virtqueue *vq,
> > > +		    void (*recycle)(struct virtqueue *vq, void *buf));
> > >
> > >  /**
> > >   * struct virtio_device - representation of a device using virtio
> > > --
> > > 2.32.0.3.g01195cf9f
> >


* Re: [PATCH 06/33] virtio_ring: introduce virtqueue_reset()
  2023-02-13 12:15       ` Michael S. Tsirkin
@ 2023-02-14  1:53         ` Xuan Zhuo
  0 siblings, 0 replies; 76+ messages in thread
From: Xuan Zhuo @ 2023-02-14  1:53 UTC (permalink / raw)
  To: Michael S. Tsirkin
  Cc: Petr Machata, Menglong Dong, Maciej Fijalkowski,
	Jesper Dangaard Brouer, Daniel Borkmann, netdev, John Fastabend,
	Björn Töpel, Alexei Starovoitov, Eric Dumazet,
	Kuniyuki Iwashima, Sebastian Andrzej Siewior, Jonathan Lemon,
	Jakub Kicinski, bpf, Paolo Abeni, virtualization,
	David S. Miller, Magnus Karlsson

On Mon, 13 Feb 2023 07:15:02 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> On Fri, Feb 03, 2023 at 05:09:12PM +0800, Xuan Zhuo wrote:
> > On Fri, 3 Feb 2023 04:05:38 -0500, "Michael S. Tsirkin" <mst@redhat.com> wrote:
> > > On Thu, Feb 02, 2023 at 07:00:31PM +0800, Xuan Zhuo wrote:
> > > > Introduce virtqueue_reset() to release all buffers inside the vq.
> > > >
> > > > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > > > ---
> > > >  drivers/virtio/virtio_ring.c | 50 ++++++++++++++++++++++++++++++++++++
> > > >  include/linux/virtio.h       |  2 ++
> > > >  2 files changed, 52 insertions(+)
> > > >
> > > > diff --git a/drivers/virtio/virtio_ring.c b/drivers/virtio/virtio_ring.c
> > > > index e32046fd15a5..7dfce7001f9f 100644
> > > > --- a/drivers/virtio/virtio_ring.c
> > > > +++ b/drivers/virtio/virtio_ring.c
> > > > @@ -2735,6 +2735,56 @@ int virtqueue_resize(struct virtqueue *_vq, u32 num,
> > > >  }
> > > >  EXPORT_SYMBOL_GPL(virtqueue_resize);
> > > >
> > > > +/**
> > > > + * virtqueue_reset - reset the vring of vq
> > >
> > > ..., detach and recycle all unused buffers
> > >
> > > 	after all this is why we are doing this reset, right?
> > >
> > > > + * @_vq: the struct virtqueue we're talking about.
> > > > + * @recycle: callback for recycle the useless buffer
> > >
> > > not useless :) unused:
> > >
> > > 	callback to recycle unused buffers
> >
> >
> > I agree. Will fix.
> >
> > Thanks.
>
> Probably too late for this merge cycle then. Oh well.

It's ok for next.

I plan to push the virtio ring code to the vhost branch first.

Thanks.


>
>
> > >
> > > I know we have the same confusion in virtqueue_resize, I will fix
> > > that.
> > >
> > > > + *
> > > > + * Caller must ensure we don't call this with other virtqueue operations
> > > > + * at the same time (except where noted).
> > > > + *
> > > > + * Returns zero or a negative error.
> > > > + * 0: success.
> > > > + * -EBUSY: Failed to sync with device, vq may not work properly
> > > > + * -ENOENT: Transport or device not supported
> > > > + * -EPERM: Operation not permitted
> > > > + */
> > > > +int virtqueue_reset(struct virtqueue *_vq,
> > > > +		    void (*recycle)(struct virtqueue *vq, void *buf))
> > > > +{
> > > > +	struct vring_virtqueue *vq = to_vvq(_vq);
> > > > +	struct virtio_device *vdev = vq->vq.vdev;
> > > > +	void *buf;
> > > > +	int err;
> > > > +
> > > > +	if (!vq->we_own_ring)
> > > > +		return -EPERM;
> > > > +
> > > > +	if (!vdev->config->disable_vq_and_reset)
> > > > +		return -ENOENT;
> > > > +
> > > > +	if (!vdev->config->enable_vq_after_reset)
> > > > +		return -ENOENT;
> > > > +
> > > > +	err = vdev->config->disable_vq_and_reset(_vq);
> > > > +	if (err)
> > > > +		return err;
> > > > +
> > > > +	while ((buf = virtqueue_detach_unused_buf(_vq)) != NULL)
> > > > +		recycle(_vq, buf);
> > > > +
> > > > +	if (vq->packed_ring)
> > > > +		virtqueue_reinit_packed(vq);
> > > > +	else
> > > > +		virtqueue_reinit_split(vq);
> > > > +
> > > > +	if (vdev->config->enable_vq_after_reset(_vq))
> > > > +		return -EBUSY;
> > > > +
> > > > +	return 0;
> > > > +}
> > > > +EXPORT_SYMBOL_GPL(virtqueue_reset);
> > > > +
> > > >  /* Only available for split ring */
> > > >  struct virtqueue *vring_new_virtqueue(unsigned int index,
> > > >  				      unsigned int num,
> > > > diff --git a/include/linux/virtio.h b/include/linux/virtio.h
> > > > index 3ebb346ebb7c..3ca2edb1aef3 100644
> > > > --- a/include/linux/virtio.h
> > > > +++ b/include/linux/virtio.h
> > > > @@ -105,6 +105,8 @@ dma_addr_t virtqueue_get_used_addr(struct virtqueue *vq);
> > > >
> > > >  int virtqueue_resize(struct virtqueue *vq, u32 num,
> > > >  		     void (*recycle)(struct virtqueue *vq, void *buf));
> > > > +int virtqueue_reset(struct virtqueue *vq,
> > > > +		    void (*recycle)(struct virtqueue *vq, void *buf));
> > > >
> > > >  /**
> > > >   * struct virtio_device - representation of a device using virtio
> > > > --
> > > > 2.32.0.3.g01195cf9f
> > >
>

end of thread, other threads:[~2023-02-14  1:55 UTC | newest]

Thread overview: 76+ messages
2023-02-02 11:00 [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
2023-02-02 11:00 ` [PATCH 01/33] virtio_ring: virtqueue_add() support premapped Xuan Zhuo
2023-02-02 11:00 ` [PATCH 02/33] virtio_ring: split: virtqueue_add_split() " Xuan Zhuo
2023-02-02 11:00 ` [PATCH 03/33] virtio_ring: packed: virtqueue_add_packed() " Xuan Zhuo
2023-02-03  9:16   ` Michael S. Tsirkin
2023-02-02 11:00 ` [PATCH 04/33] virtio_ring: introduce virtqueue_add_outbuf_premapped() Xuan Zhuo
2023-02-02 11:00 ` [PATCH 05/33] virtio_ring: introduce virtqueue_add_inbuf_premapped() Xuan Zhuo
2023-02-02 11:00 ` [PATCH 06/33] virtio_ring: introduce virtqueue_reset() Xuan Zhuo
2023-02-03  9:05   ` Michael S. Tsirkin
2023-02-03  9:09     ` Xuan Zhuo
2023-02-13 12:15       ` Michael S. Tsirkin
2023-02-14  1:53         ` Xuan Zhuo
2023-02-02 11:00 ` [PATCH 07/33] virtio_ring: add api virtio_dma_map() for advance dma Xuan Zhuo
2023-02-03  9:07   ` Michael S. Tsirkin
2023-02-02 11:00 ` [PATCH 08/33] virtio_ring: introduce dma sync api for virtio Xuan Zhuo
2023-02-03  9:24   ` Michael S. Tsirkin
2023-02-02 11:00 ` [PATCH 09/33] xsk: xsk_buff_pool add callback for dma_sync Xuan Zhuo
     [not found]   ` <CAJ8uoz2+4+wUFYF1GjF51DFBV8ZsBRtTEVWpu_2fBmFUEQzOLQ@mail.gmail.com>
2023-02-03  7:01     ` Xuan Zhuo
2023-02-02 11:00 ` [PATCH 10/33] xsk: support virtio DMA map Xuan Zhuo
2023-02-05 22:04   ` kernel test robot
2023-02-02 11:00 ` [PATCH 11/33] virtio_net: rename free_old_xmit_skbs to free_old_xmit Xuan Zhuo
2023-02-02 11:00 ` [PATCH 12/33] virtio_net: unify the code for recycling the xmit ptr Xuan Zhuo
2023-02-02 11:00 ` [PATCH 13/33] virtio_net: virtnet_poll_tx support rescheduled Xuan Zhuo
2023-02-02 11:00 ` [PATCH 14/33] virtio_net: independent directory Xuan Zhuo
2023-02-02 11:00 ` [PATCH 15/33] virtio_net: move to virtio_net.h Xuan Zhuo
2023-02-03  8:53   ` Michael S. Tsirkin
2023-02-03  9:04     ` Xuan Zhuo
2023-02-03  9:26       ` Michael S. Tsirkin
2023-02-02 11:00 ` [PATCH 16/33] virtio_net: introduce virtnet_xdp_handler() to seprate the logic of run xdp Xuan Zhuo
2023-02-03  8:55   ` Michael S. Tsirkin
2023-02-03  9:01     ` Xuan Zhuo
2023-02-02 11:00 ` [PATCH 17/33] virtio_net: receive_small() use virtnet_xdp_handler() Xuan Zhuo
2023-02-02 11:00 ` [PATCH 18/33] virtio_net: receive_merageable() " Xuan Zhuo
2023-02-02 17:16   ` Michael S. Tsirkin
2023-02-02 11:00 ` [PATCH 19/33] virtio_net: introduce virtnet_tx_reset() Xuan Zhuo
2023-02-02 17:23   ` Michael S. Tsirkin
2023-02-03  4:35     ` Xuan Zhuo
2023-02-02 11:00 ` [PATCH 20/33] virtio_net: xsk: introduce virtnet_rq_bind_xsk_pool() Xuan Zhuo
2023-02-03  8:48   ` Michael S. Tsirkin
2023-02-03  8:52     ` Xuan Zhuo
2023-02-03  9:28       ` Michael S. Tsirkin
2023-02-02 11:00 ` [PATCH 21/33] virtio_net: xsk: introduce virtnet_xsk_pool_enable() Xuan Zhuo
2023-02-02 11:00 ` [PATCH 22/33] virtio_net: xsk: introduce xsk disable Xuan Zhuo
2023-02-02 23:02   ` kernel test robot
2023-02-12  7:56   ` kernel test robot
2023-02-02 11:00 ` [PATCH 23/33] virtio_net: xsk: support xsk setup Xuan Zhuo
2023-02-02 11:00 ` [PATCH 24/33] virtio_net: xsk: stop disable tx napi Xuan Zhuo
2023-02-02 17:25   ` Michael S. Tsirkin
2023-02-03  3:24     ` Xuan Zhuo
2023-02-03  8:33       ` Michael S. Tsirkin
2023-02-03  8:49         ` Xuan Zhuo
2023-02-03  9:29           ` Michael S. Tsirkin
2023-02-02 11:00 ` [PATCH 25/33] virtio_net: xsk: __free_old_xmit distinguishes xsk buffer Xuan Zhuo
2023-02-02 11:00 ` [PATCH 26/33] virtio_net: virtnet_sq_free_unused_buf() check " Xuan Zhuo
2023-02-02 11:00 ` [PATCH 27/33] virtio_net: virtnet_rq_free_unused_buf() " Xuan Zhuo
2023-02-02 11:00 ` [PATCH 28/33] net: introduce napi_tx_raise() Xuan Zhuo
2023-02-02 11:00 ` [PATCH 29/33] virtio_net: xsk: tx: support tx Xuan Zhuo
     [not found]   ` <Y9zIPdKmTvXqyuYS@boxer>
2023-02-03  8:55     ` Xuan Zhuo
2023-02-02 11:00 ` [PATCH 30/33] virtio_net: xsk: tx: support wakeup Xuan Zhuo
2023-02-02 11:00 ` [PATCH 31/33] virtio_net: xsk: tx: auto wakeup when free old xmit Xuan Zhuo
2023-02-02 11:00 ` [PATCH 32/33] virtio_net: xsk: rx: introduce add_recvbuf_xsk() Xuan Zhuo
     [not found]   ` <Y9zJS+ugeY9qEMt9@boxer>
2023-02-03  8:56     ` Xuan Zhuo
2023-02-02 11:00 ` [PATCH 33/33] virtio_net: xsk: rx: introduce receive_xsk() to recv xsk buffer Xuan Zhuo
2023-02-02 11:08 ` [PATCH 00/33] virtio-net: support AF_XDP zero copy Xuan Zhuo
2023-02-02 11:08 ` Michael S. Tsirkin
2023-02-02 11:11   ` Xuan Zhuo
2023-02-02 11:44   ` Xuan Zhuo
2023-02-03  9:08     ` Michael S. Tsirkin
2023-02-03  9:09       ` Xuan Zhuo
2023-02-02 14:41 ` Paolo Abeni
2023-02-03  3:33   ` Xuan Zhuo
2023-02-03  8:37     ` Michael S. Tsirkin
     [not found]       ` <Y9zJ9j0GthvRSFHL@boxer>
2023-02-03  9:09         ` Michael S. Tsirkin
2023-02-03  9:17     ` Michael S. Tsirkin
2023-02-06  2:41       ` Xuan Zhuo
2023-02-13 12:14         ` Michael S. Tsirkin
