bpf.vger.kernel.org archive mirror
* [PATCH net-next 0/8] virtio_net: refactor xdp codes
@ 2023-03-28 12:04 Xuan Zhuo
  2023-03-28 12:04 ` [PATCH net-next 1/8] virtio_net: mergeable xdp: put old page immediately Xuan Zhuo
                   ` (7 more replies)
  0 siblings, 8 replies; 41+ messages in thread
From: Xuan Zhuo @ 2023-03-28 12:04 UTC (permalink / raw)
  To: netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

For historical reasons, the implementation of XDP in virtio-net is relatively
chaotic. For example, the handling of XDP actions is duplicated in two similar
copies of code, and the same is true of the page/xdp_page handling.

The purpose of this patch set is to refactor this code and reduce the difficulty
of subsequent maintenance, so that subsequent developers will not introduce new
bugs because of the complex logical relationships.

In addition, the AF_XDP support that I want to submit later will also need to
reuse the XDP logic, such as the handling of actions. I do not want to
introduce new, similar code; this way, I can reuse the existing code in the
future.

Please review.

Thanks.

v1:
    1. fix some uninitialized variables

Xuan Zhuo (8):
  virtio_net: mergeable xdp: put old page immediately
  virtio_net: mergeable xdp: introduce mergeable_xdp_prepare
  virtio_net: introduce virtnet_xdp_handler() to separate the logic of
    running xdp
  virtio_net: separate the logic of freeing xdp shinfo
  virtio_net: separate the logic of freeing the rest mergeable buf
  virtio_net: auto release xdp shinfo
  virtio_net: introduce receive_mergeable_xdp()
  virtio_net: introduce receive_small_xdp()

 drivers/net/virtio_net.c | 618 +++++++++++++++++++++++----------------
 1 file changed, 360 insertions(+), 258 deletions(-)

--
2.32.0.3.g01195cf9f


^ permalink raw reply	[flat|nested] 41+ messages in thread

* [PATCH net-next 1/8] virtio_net: mergeable xdp: put old page immediately
  2023-03-28 12:04 [PATCH net-next 0/8] virtio_net: refactor xdp codes Xuan Zhuo
@ 2023-03-28 12:04 ` Xuan Zhuo
  2023-03-31  9:01   ` Jason Wang
  2023-03-28 12:04 ` [PATCH net-next 2/8] virtio_net: mergeable xdp: introduce mergeable_xdp_prepare Xuan Zhuo
                   ` (6 subsequent siblings)
  7 siblings, 1 reply; 41+ messages in thread
From: Xuan Zhuo @ 2023-03-28 12:04 UTC (permalink / raw)
  To: netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

In the XDP implementation for virtio-net mergeable buffers, the code always
has to check whether two pages are in use and select one page to release.
This complicates the handling of actions and requires care.

Throughout the process, the following principles apply:
* If xdp_page is used (PASS, TX, REDIRECT), we release the old
  page.
* In the drop case, we release both pages: the old page obtained from
  buf is released inside err_xdp, and xdp_page needs to be released by us.

But in fact, when we allocate a new page, we can release the old page
immediately. Then only one page is in use, and we only need to release the
new page in the drop case. On the drop path, err_xdp releases the variable
"page", so we only need to make "page" point to the new xdp_page in
advance.
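
The ownership rule above can be illustrated with a small userspace sketch. Everything here is hypothetical: `fake_page`, `prepare_xdp_page` and the `live_pages` counter are stand-ins for the kernel's page refcounting, not the driver's API. The point is that once the data is copied into a new page, the old one is dropped immediately, so every later path owns exactly one page.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

static int live_pages;          /* tracks outstanding allocations */

struct fake_page {
	int refcount;
	char data[64];
};

static struct fake_page *fake_page_alloc(void)
{
	struct fake_page *p = calloc(1, sizeof(*p));

	if (p) {
		p->refcount = 1;
		live_pages++;
	}
	return p;
}

static void fake_put_page(struct fake_page *p)
{
	if (--p->refcount == 0) {
		free(p);
		live_pages--;
	}
}

/* Returns the single page the caller now owns: either the original
 * (no copy needed) or a fresh copy, with the original already released. */
static struct fake_page *prepare_xdp_page(struct fake_page *page, int need_copy)
{
	struct fake_page *xdp_page;

	if (!need_copy)
		return page;

	xdp_page = fake_page_alloc();
	if (!xdp_page)
		return NULL;

	memcpy(xdp_page->data, page->data, sizeof(page->data));
	fake_put_page(page);    /* old page released right away */
	return xdp_page;
}
```

After `prepare_xdp_page()` returns, only one page is live regardless of which branch was taken, which is exactly why the drop path no longer needs the `xdp_page != page` checks.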

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 15 ++++++---------
 1 file changed, 6 insertions(+), 9 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index e2560b6f7980..4d2bf1ce0730 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1245,6 +1245,9 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			if (!xdp_page)
 				goto err_xdp;
 			offset = VIRTIO_XDP_HEADROOM;
+
+			put_page(page);
+			page = xdp_page;
 		} else if (unlikely(headroom < virtnet_get_headroom(vi))) {
 			xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
 						  sizeof(struct skb_shared_info));
@@ -1259,6 +1262,9 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			       page_address(page) + offset, len);
 			frame_sz = PAGE_SIZE;
 			offset = VIRTIO_XDP_HEADROOM;
+
+			put_page(page);
+			page = xdp_page;
 		} else {
 			xdp_page = page;
 		}
@@ -1278,8 +1284,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			if (unlikely(!head_skb))
 				goto err_xdp_frags;
 
-			if (unlikely(xdp_page != page))
-				put_page(page);
 			rcu_read_unlock();
 			return head_skb;
 		case XDP_TX:
@@ -1297,8 +1301,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 				goto err_xdp_frags;
 			}
 			*xdp_xmit |= VIRTIO_XDP_TX;
-			if (unlikely(xdp_page != page))
-				put_page(page);
 			rcu_read_unlock();
 			goto xdp_xmit;
 		case XDP_REDIRECT:
@@ -1307,8 +1309,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			if (err)
 				goto err_xdp_frags;
 			*xdp_xmit |= VIRTIO_XDP_REDIR;
-			if (unlikely(xdp_page != page))
-				put_page(page);
 			rcu_read_unlock();
 			goto xdp_xmit;
 		default:
@@ -1321,9 +1321,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			goto err_xdp_frags;
 		}
 err_xdp_frags:
-		if (unlikely(xdp_page != page))
-			__free_pages(xdp_page, 0);
-
 		if (xdp_buff_has_frags(&xdp)) {
 			shinfo = xdp_get_shared_info_from_buff(&xdp);
 			for (i = 0; i < shinfo->nr_frags; i++) {
-- 
2.32.0.3.g01195cf9f



* [PATCH net-next 2/8] virtio_net: mergeable xdp: introduce mergeable_xdp_prepare
  2023-03-28 12:04 [PATCH net-next 0/8] virtio_net: refactor xdp codes Xuan Zhuo
  2023-03-28 12:04 ` [PATCH net-next 1/8] virtio_net: mergeable xdp: put old page immediately Xuan Zhuo
@ 2023-03-28 12:04 ` Xuan Zhuo
  2023-03-31  9:14   ` Jason Wang
  2023-03-28 12:04 ` [PATCH net-next 3/8] virtio_net: introduce virtnet_xdp_handler() to separate the logic of running xdp Xuan Zhuo
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 41+ messages in thread
From: Xuan Zhuo @ 2023-03-28 12:04 UTC (permalink / raw)
  To: netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

Separate the logic of preparing for XDP out of receive_mergeable().

The purpose of this is to simplify the logic of executing XDP.

The main logic here is that when the headroom is insufficient, we need to
allocate a new page and calculate the offset. It should be noted that if
a new page is allocated, the variable "page" will refer to the new page.
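
A minimal userspace sketch of that contract follows. The helper `xdp_prepare` and the `HEADROOM` constant are hypothetical stand-ins (for mergeable_xdp_prepare() and VIRTIO_XDP_HEADROOM); the invariant shown is that the caller always ends up owning exactly whatever `*page` points to.

```c
#include <assert.h>
#include <stdlib.h>
#include <string.h>

#define HEADROOM 256

/* If the buffer lacks headroom, allocate a replacement, copy the payload
 * behind fresh headroom, and repoint *page at the new buffer. The old
 * buffer is released here, so the caller only ever frees *page. */
static char *xdp_prepare(char **page, size_t *off, size_t len, size_t headroom)
{
	char *new_page;

	if (headroom >= HEADROOM)
		return *page + *off;    /* enough room: use in place */

	new_page = malloc(HEADROOM + len);
	if (!new_page)
		return NULL;

	memcpy(new_page + HEADROOM, *page + *off, len);
	free(*page);                    /* old buffer released here */
	*page = new_page;
	*off = HEADROOM;
	return new_page + HEADROOM;
}
```

Whether or not a copy happened, the caller frees `*page` and nothing else, mirroring how receive_mergeable() treats `page` after the patch.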

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 135 ++++++++++++++++++++++-----------------
 1 file changed, 77 insertions(+), 58 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 4d2bf1ce0730..bb426958cdd4 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1162,6 +1162,79 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
 	return 0;
 }
 
+static void *mergeable_xdp_prepare(struct virtnet_info *vi,
+				   struct receive_queue *rq,
+				   struct bpf_prog *xdp_prog,
+				   void *ctx,
+				   unsigned int *frame_sz,
+				   int *num_buf,
+				   struct page **page,
+				   int offset,
+				   unsigned int *len,
+				   struct virtio_net_hdr_mrg_rxbuf *hdr)
+{
+	unsigned int truesize = mergeable_ctx_to_truesize(ctx);
+	unsigned int headroom = mergeable_ctx_to_headroom(ctx);
+	struct page *xdp_page;
+	unsigned int xdp_room;
+
+	/* Transient failure which in theory could occur if
+	 * in-flight packets from before XDP was enabled reach
+	 * the receive path after XDP is loaded.
+	 */
+	if (unlikely(hdr->hdr.gso_type))
+		return NULL;
+
+	/* Now XDP core assumes frag size is PAGE_SIZE, but buffers
+	 * with headroom may add hole in truesize, which
+	 * make their length exceed PAGE_SIZE. So we disabled the
+	 * hole mechanism for xdp. See add_recvbuf_mergeable().
+	 */
+	*frame_sz = truesize;
+
+	/* This happens when headroom is not enough because
+	 * of the buffer was prefilled before XDP is set.
+	 * This should only happen for the first several packets.
+	 * In fact, vq reset can be used here to help us clean up
+	 * the prefilled buffers, but many existing devices do not
+	 * support it, and we don't want to bother users who are
+	 * using xdp normally.
+	 */
+	if (!xdp_prog->aux->xdp_has_frags &&
+	    (*num_buf > 1 || headroom < virtnet_get_headroom(vi))) {
+		/* linearize data for XDP */
+		xdp_page = xdp_linearize_page(rq, num_buf,
+					      *page, offset,
+					      VIRTIO_XDP_HEADROOM,
+					      len);
+
+		if (!xdp_page)
+			return NULL;
+	} else if (unlikely(headroom < virtnet_get_headroom(vi))) {
+		xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
+					  sizeof(struct skb_shared_info));
+		if (*len + xdp_room > PAGE_SIZE)
+			return NULL;
+
+		xdp_page = alloc_page(GFP_ATOMIC);
+		if (!xdp_page)
+			return NULL;
+
+		memcpy(page_address(xdp_page) + VIRTIO_XDP_HEADROOM,
+		       page_address(*page) + offset, *len);
+	} else {
+		return page_address(*page) + offset;
+	}
+
+	*frame_sz = PAGE_SIZE;
+
+	put_page(*page);
+
+	*page = xdp_page;
+
+	return page_address(xdp_page) + VIRTIO_XDP_HEADROOM;
+}
+
 static struct sk_buff *receive_mergeable(struct net_device *dev,
 					 struct virtnet_info *vi,
 					 struct receive_queue *rq,
@@ -1181,7 +1254,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 	unsigned int headroom = mergeable_ctx_to_headroom(ctx);
 	unsigned int tailroom = headroom ? sizeof(struct skb_shared_info) : 0;
 	unsigned int room = SKB_DATA_ALIGN(headroom + tailroom);
-	unsigned int frame_sz, xdp_room;
+	unsigned int frame_sz;
 	int err;
 
 	head_skb = NULL;
@@ -1211,65 +1284,11 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 		u32 act;
 		int i;
 
-		/* Transient failure which in theory could occur if
-		 * in-flight packets from before XDP was enabled reach
-		 * the receive path after XDP is loaded.
-		 */
-		if (unlikely(hdr->hdr.gso_type))
+		data = mergeable_xdp_prepare(vi, rq, xdp_prog, ctx, &frame_sz, &num_buf, &page,
+					     offset, &len, hdr);
+		if (!data)
 			goto err_xdp;
 
-		/* Now XDP core assumes frag size is PAGE_SIZE, but buffers
-		 * with headroom may add hole in truesize, which
-		 * make their length exceed PAGE_SIZE. So we disabled the
-		 * hole mechanism for xdp. See add_recvbuf_mergeable().
-		 */
-		frame_sz = truesize;
-
-		/* This happens when headroom is not enough because
-		 * of the buffer was prefilled before XDP is set.
-		 * This should only happen for the first several packets.
-		 * In fact, vq reset can be used here to help us clean up
-		 * the prefilled buffers, but many existing devices do not
-		 * support it, and we don't want to bother users who are
-		 * using xdp normally.
-		 */
-		if (!xdp_prog->aux->xdp_has_frags &&
-		    (num_buf > 1 || headroom < virtnet_get_headroom(vi))) {
-			/* linearize data for XDP */
-			xdp_page = xdp_linearize_page(rq, &num_buf,
-						      page, offset,
-						      VIRTIO_XDP_HEADROOM,
-						      &len);
-			frame_sz = PAGE_SIZE;
-
-			if (!xdp_page)
-				goto err_xdp;
-			offset = VIRTIO_XDP_HEADROOM;
-
-			put_page(page);
-			page = xdp_page;
-		} else if (unlikely(headroom < virtnet_get_headroom(vi))) {
-			xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
-						  sizeof(struct skb_shared_info));
-			if (len + xdp_room > PAGE_SIZE)
-				goto err_xdp;
-
-			xdp_page = alloc_page(GFP_ATOMIC);
-			if (!xdp_page)
-				goto err_xdp;
-
-			memcpy(page_address(xdp_page) + VIRTIO_XDP_HEADROOM,
-			       page_address(page) + offset, len);
-			frame_sz = PAGE_SIZE;
-			offset = VIRTIO_XDP_HEADROOM;
-
-			put_page(page);
-			page = xdp_page;
-		} else {
-			xdp_page = page;
-		}
-
-		data = page_address(xdp_page) + offset;
 		err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz,
 						 &num_buf, &xdp_frags_truesz, stats);
 		if (unlikely(err))
-- 
2.32.0.3.g01195cf9f



* [PATCH net-next 3/8] virtio_net: introduce virtnet_xdp_handler() to separate the logic of running xdp
  2023-03-28 12:04 [PATCH net-next 0/8] virtio_net: refactor xdp codes Xuan Zhuo
  2023-03-28 12:04 ` [PATCH net-next 1/8] virtio_net: mergeable xdp: put old page immediately Xuan Zhuo
  2023-03-28 12:04 ` [PATCH net-next 2/8] virtio_net: mergeable xdp: introduce mergeable_xdp_prepare Xuan Zhuo
@ 2023-03-28 12:04 ` Xuan Zhuo
  2023-04-03  2:43   ` Jason Wang
  2023-03-28 12:04 ` [PATCH net-next 4/8] virtio_net: separate the logic of freeing xdp shinfo Xuan Zhuo
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 41+ messages in thread
From: Xuan Zhuo @ 2023-03-28 12:04 UTC (permalink / raw)
  To: netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

At present, we have two similar pieces of logic that run the XDP prog.

Therefore, this patch separates the code that executes XDP, which eases
later maintenance.
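
The effect of the shared handler can be sketched as a pure mapping from XDP verdicts to the tri-state result this patch introduces. The function below is a simplified, hypothetical stand-in for virtnet_xdp_handler() (the real XDP_TX/XDP_REDIRECT cases also transmit the frame and update stats); only the dispatch shape is being shown.

```c
#include <assert.h>

/* Result codes mirroring the patch */
enum {
	VIRTNET_XDP_RES_PASS,
	VIRTNET_XDP_RES_DROP,
	VIRTNET_XDP_RES_CONSUMED,
};

/* Simplified XDP action codes, matching the kernel's enum order */
enum { XDP_ABORTED, XDP_DROP, XDP_PASS, XDP_TX, XDP_REDIRECT };

/* xmit_ok models whether the transmit/redirect step succeeded */
static int xdp_handler_result(int act, int xmit_ok)
{
	switch (act) {
	case XDP_PASS:
		return VIRTNET_XDP_RES_PASS;
	case XDP_TX:
	case XDP_REDIRECT:
		/* consumed on success, dropped on transmit failure */
		return xmit_ok ? VIRTNET_XDP_RES_CONSUMED
		               : VIRTNET_XDP_RES_DROP;
	default:        /* XDP_ABORTED, XDP_DROP, unknown actions */
		return VIRTNET_XDP_RES_DROP;
	}
}
```

Both receive_small() and receive_mergeable() then switch over only these three outcomes instead of duplicating the per-action handling.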

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 142 +++++++++++++++++++++------------------
 1 file changed, 75 insertions(+), 67 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index bb426958cdd4..72b9d6ee4024 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -301,6 +301,15 @@ struct padded_vnet_hdr {
 	char padding[12];
 };
 
+enum {
+	/* xdp pass */
+	VIRTNET_XDP_RES_PASS,
+	/* drop packet. the caller needs to release the page. */
+	VIRTNET_XDP_RES_DROP,
+	/* packet is consumed by xdp. the caller needs to do nothing. */
+	VIRTNET_XDP_RES_CONSUMED,
+};
+
 static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
 static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
 
@@ -789,6 +798,59 @@ static int virtnet_xdp_xmit(struct net_device *dev,
 	return ret;
 }
 
+static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
+			       struct net_device *dev,
+			       unsigned int *xdp_xmit,
+			       struct virtnet_rq_stats *stats)
+{
+	struct xdp_frame *xdpf;
+	int err;
+	u32 act;
+
+	act = bpf_prog_run_xdp(xdp_prog, xdp);
+	stats->xdp_packets++;
+
+	switch (act) {
+	case XDP_PASS:
+		return VIRTNET_XDP_RES_PASS;
+
+	case XDP_TX:
+		stats->xdp_tx++;
+		xdpf = xdp_convert_buff_to_frame(xdp);
+		if (unlikely(!xdpf))
+			return VIRTNET_XDP_RES_DROP;
+
+		err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
+		if (unlikely(!err)) {
+			xdp_return_frame_rx_napi(xdpf);
+		} else if (unlikely(err < 0)) {
+			trace_xdp_exception(dev, xdp_prog, act);
+			return VIRTNET_XDP_RES_DROP;
+		}
+
+		*xdp_xmit |= VIRTIO_XDP_TX;
+		return VIRTNET_XDP_RES_CONSUMED;
+
+	case XDP_REDIRECT:
+		stats->xdp_redirects++;
+		err = xdp_do_redirect(dev, xdp, xdp_prog);
+		if (err)
+			return VIRTNET_XDP_RES_DROP;
+
+		*xdp_xmit |= VIRTIO_XDP_REDIR;
+		return VIRTNET_XDP_RES_CONSUMED;
+
+	default:
+		bpf_warn_invalid_xdp_action(dev, xdp_prog, act);
+		fallthrough;
+	case XDP_ABORTED:
+		trace_xdp_exception(dev, xdp_prog, act);
+		fallthrough;
+	case XDP_DROP:
+		return VIRTNET_XDP_RES_DROP;
+	}
+}
+
 static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
 {
 	return vi->xdp_enabled ? VIRTIO_XDP_HEADROOM : 0;
@@ -876,7 +938,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
 	struct page *page = virt_to_head_page(buf);
 	unsigned int delta = 0;
 	struct page *xdp_page;
-	int err;
 	unsigned int metasize = 0;
 
 	len -= vi->hdr_len;
@@ -898,7 +959,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
 	xdp_prog = rcu_dereference(rq->xdp_prog);
 	if (xdp_prog) {
 		struct virtio_net_hdr_mrg_rxbuf *hdr = buf + header_offset;
-		struct xdp_frame *xdpf;
 		struct xdp_buff xdp;
 		void *orig_data;
 		u32 act;
@@ -931,46 +991,22 @@ static struct sk_buff *receive_small(struct net_device *dev,
 		xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len,
 				 xdp_headroom, len, true);
 		orig_data = xdp.data;
-		act = bpf_prog_run_xdp(xdp_prog, &xdp);
-		stats->xdp_packets++;
+
+		act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
 
 		switch (act) {
-		case XDP_PASS:
+		case VIRTNET_XDP_RES_PASS:
 			/* Recalculate length in case bpf program changed it */
 			delta = orig_data - xdp.data;
 			len = xdp.data_end - xdp.data;
 			metasize = xdp.data - xdp.data_meta;
 			break;
-		case XDP_TX:
-			stats->xdp_tx++;
-			xdpf = xdp_convert_buff_to_frame(&xdp);
-			if (unlikely(!xdpf))
-				goto err_xdp;
-			err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
-			if (unlikely(!err)) {
-				xdp_return_frame_rx_napi(xdpf);
-			} else if (unlikely(err < 0)) {
-				trace_xdp_exception(vi->dev, xdp_prog, act);
-				goto err_xdp;
-			}
-			*xdp_xmit |= VIRTIO_XDP_TX;
-			rcu_read_unlock();
-			goto xdp_xmit;
-		case XDP_REDIRECT:
-			stats->xdp_redirects++;
-			err = xdp_do_redirect(dev, &xdp, xdp_prog);
-			if (err)
-				goto err_xdp;
-			*xdp_xmit |= VIRTIO_XDP_REDIR;
+
+		case VIRTNET_XDP_RES_CONSUMED:
 			rcu_read_unlock();
 			goto xdp_xmit;
-		default:
-			bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
-			fallthrough;
-		case XDP_ABORTED:
-			trace_xdp_exception(vi->dev, xdp_prog, act);
-			goto err_xdp;
-		case XDP_DROP:
+
+		case VIRTNET_XDP_RES_DROP:
 			goto err_xdp;
 		}
 	}
@@ -1277,7 +1313,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 	if (xdp_prog) {
 		unsigned int xdp_frags_truesz = 0;
 		struct skb_shared_info *shinfo;
-		struct xdp_frame *xdpf;
 		struct page *xdp_page;
 		struct xdp_buff xdp;
 		void *data;
@@ -1294,49 +1329,22 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 		if (unlikely(err))
 			goto err_xdp_frags;
 
-		act = bpf_prog_run_xdp(xdp_prog, &xdp);
-		stats->xdp_packets++;
+		act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
 
 		switch (act) {
-		case XDP_PASS:
+		case VIRTNET_XDP_RES_PASS:
 			head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
 			if (unlikely(!head_skb))
 				goto err_xdp_frags;
 
 			rcu_read_unlock();
 			return head_skb;
-		case XDP_TX:
-			stats->xdp_tx++;
-			xdpf = xdp_convert_buff_to_frame(&xdp);
-			if (unlikely(!xdpf)) {
-				netdev_dbg(dev, "convert buff to frame failed for xdp\n");
-				goto err_xdp_frags;
-			}
-			err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
-			if (unlikely(!err)) {
-				xdp_return_frame_rx_napi(xdpf);
-			} else if (unlikely(err < 0)) {
-				trace_xdp_exception(vi->dev, xdp_prog, act);
-				goto err_xdp_frags;
-			}
-			*xdp_xmit |= VIRTIO_XDP_TX;
-			rcu_read_unlock();
-			goto xdp_xmit;
-		case XDP_REDIRECT:
-			stats->xdp_redirects++;
-			err = xdp_do_redirect(dev, &xdp, xdp_prog);
-			if (err)
-				goto err_xdp_frags;
-			*xdp_xmit |= VIRTIO_XDP_REDIR;
+
+		case VIRTNET_XDP_RES_CONSUMED:
 			rcu_read_unlock();
 			goto xdp_xmit;
-		default:
-			bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
-			fallthrough;
-		case XDP_ABORTED:
-			trace_xdp_exception(vi->dev, xdp_prog, act);
-			fallthrough;
-		case XDP_DROP:
+
+		case VIRTNET_XDP_RES_DROP:
 			goto err_xdp_frags;
 		}
 err_xdp_frags:
-- 
2.32.0.3.g01195cf9f



* [PATCH net-next 4/8] virtio_net: separate the logic of freeing xdp shinfo
  2023-03-28 12:04 [PATCH net-next 0/8] virtio_net: refactor xdp codes Xuan Zhuo
                   ` (2 preceding siblings ...)
  2023-03-28 12:04 ` [PATCH net-next 3/8] virtio_net: introduce virtnet_xdp_handler() to separate the logic of running xdp Xuan Zhuo
@ 2023-03-28 12:04 ` Xuan Zhuo
  2023-04-03  2:44   ` Jason Wang
  2023-03-28 12:04 ` [PATCH net-next 5/8] virtio_net: separate the logic of freeing the rest mergeable buf Xuan Zhuo
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 41+ messages in thread
From: Xuan Zhuo @ 2023-03-28 12:04 UTC (permalink / raw)
  To: netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

This patch introduces a new function that releases the
xdp shinfo. A subsequent patch will reuse this function.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 27 ++++++++++++++++-----------
 1 file changed, 16 insertions(+), 11 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 72b9d6ee4024..09aed60e2f51 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -798,6 +798,21 @@ static int virtnet_xdp_xmit(struct net_device *dev,
 	return ret;
 }
 
+static void put_xdp_frags(struct xdp_buff *xdp)
+{
+	struct skb_shared_info *shinfo;
+	struct page *xdp_page;
+	int i;
+
+	if (xdp_buff_has_frags(xdp)) {
+		shinfo = xdp_get_shared_info_from_buff(xdp);
+		for (i = 0; i < shinfo->nr_frags; i++) {
+			xdp_page = skb_frag_page(&shinfo->frags[i]);
+			put_page(xdp_page);
+		}
+	}
+}
+
 static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
 			       struct net_device *dev,
 			       unsigned int *xdp_xmit,
@@ -1312,12 +1327,9 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 	xdp_prog = rcu_dereference(rq->xdp_prog);
 	if (xdp_prog) {
 		unsigned int xdp_frags_truesz = 0;
-		struct skb_shared_info *shinfo;
-		struct page *xdp_page;
 		struct xdp_buff xdp;
 		void *data;
 		u32 act;
-		int i;
 
 		data = mergeable_xdp_prepare(vi, rq, xdp_prog, ctx, &frame_sz, &num_buf, &page,
 					     offset, &len, hdr);
@@ -1348,14 +1360,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			goto err_xdp_frags;
 		}
 err_xdp_frags:
-		if (xdp_buff_has_frags(&xdp)) {
-			shinfo = xdp_get_shared_info_from_buff(&xdp);
-			for (i = 0; i < shinfo->nr_frags; i++) {
-				xdp_page = skb_frag_page(&shinfo->frags[i]);
-				put_page(xdp_page);
-			}
-		}
-
+		put_xdp_frags(&xdp);
 		goto err_xdp;
 	}
 	rcu_read_unlock();
-- 
2.32.0.3.g01195cf9f



* [PATCH net-next 5/8] virtio_net: separate the logic of freeing the rest mergeable buf
  2023-03-28 12:04 [PATCH net-next 0/8] virtio_net: refactor xdp codes Xuan Zhuo
                   ` (3 preceding siblings ...)
  2023-03-28 12:04 ` [PATCH net-next 4/8] virtio_net: separate the logic of freeing xdp shinfo Xuan Zhuo
@ 2023-03-28 12:04 ` Xuan Zhuo
  2023-04-03  2:46   ` Jason Wang
  2023-03-28 12:04 ` [PATCH net-next 6/8] virtio_net: auto release xdp shinfo Xuan Zhuo
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 41+ messages in thread
From: Xuan Zhuo @ 2023-03-28 12:04 UTC (permalink / raw)
  To: netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

This patch introduces a new function that frees the remaining mergeable
buffers. A subsequent patch will reuse this function.
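
The extracted drain loop can be modeled in userspace as below. The toy queue and `mergeable_buf_free_sketch` are hypothetical stand-ins for the virtqueue and the new helper; what carries over is the `num_buf-- > 1` shape, which frees the frame's remaining buffers (the head buffer is released separately by the caller) and bails out early if a buffer is missing.

```c
#include <assert.h>
#include <stdlib.h>

/* Toy queue standing in for virtqueue_get_buf() */
static void *queue[8];
static int q_head, q_tail;

static void *queue_get(void)
{
	return q_head < q_tail ? queue[q_head++] : NULL;
}

/* Returns the number of buffers actually freed */
static int mergeable_buf_free_sketch(int num_buf)
{
	int freed = 0;
	void *buf;

	while (num_buf-- > 1) {
		buf = queue_get();
		if (!buf)
			break;  /* rx error: buffers missing */
		free(buf);
		freed++;
	}
	return freed;
}
```

With `num_buf == n`, the loop frees at most `n - 1` buffers, matching the kernel code where the first buffer of the frame has already been consumed.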

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 36 ++++++++++++++++++++++++------------
 1 file changed, 24 insertions(+), 12 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 09aed60e2f51..a3f2bcb3db27 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1076,6 +1076,28 @@ static struct sk_buff *receive_big(struct net_device *dev,
 	return NULL;
 }
 
+static void mergeable_buf_free(struct receive_queue *rq, int num_buf,
+			       struct net_device *dev,
+			       struct virtnet_rq_stats *stats)
+{
+	struct page *page;
+	void *buf;
+	int len;
+
+	while (num_buf-- > 1) {
+		buf = virtqueue_get_buf(rq->vq, &len);
+		if (unlikely(!buf)) {
+			pr_debug("%s: rx error: %d buffers missing\n",
+				 dev->name, num_buf);
+			dev->stats.rx_length_errors++;
+			break;
+		}
+		stats->bytes += len;
+		page = virt_to_head_page(buf);
+		put_page(page);
+	}
+}
+
 /* Why not use xdp_build_skb_from_frame() ?
  * XDP core assumes that xdp frags are PAGE_SIZE in length, while in
  * virtio-net there are 2 points that do not match its requirements:
@@ -1436,18 +1458,8 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 	stats->xdp_drops++;
 err_skb:
 	put_page(page);
-	while (num_buf-- > 1) {
-		buf = virtqueue_get_buf(rq->vq, &len);
-		if (unlikely(!buf)) {
-			pr_debug("%s: rx error: %d buffers missing\n",
-				 dev->name, num_buf);
-			dev->stats.rx_length_errors++;
-			break;
-		}
-		stats->bytes += len;
-		page = virt_to_head_page(buf);
-		put_page(page);
-	}
+	mergeable_buf_free(rq, num_buf, dev, stats);
+
 err_buf:
 	stats->drops++;
 	dev_kfree_skb(head_skb);
-- 
2.32.0.3.g01195cf9f



* [PATCH net-next 6/8] virtio_net: auto release xdp shinfo
  2023-03-28 12:04 [PATCH net-next 0/8] virtio_net: refactor xdp codes Xuan Zhuo
                   ` (4 preceding siblings ...)
  2023-03-28 12:04 ` [PATCH net-next 5/8] virtio_net: separate the logic of freeing the rest mergeable buf Xuan Zhuo
@ 2023-03-28 12:04 ` Xuan Zhuo
  2023-04-03  3:18   ` Jason Wang
  2023-03-28 12:04 ` [PATCH net-next 7/8] virtio_net: introduce receive_mergeable_xdp() Xuan Zhuo
  2023-03-28 12:04 ` [PATCH net-next 8/8] virtio_net: introduce receive_small_xdp() Xuan Zhuo
  7 siblings, 1 reply; 41+ messages in thread
From: Xuan Zhuo @ 2023-03-28 12:04 UTC (permalink / raw)
  To: netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

Make virtnet_build_xdp_buff_mrg() and virtnet_xdp_handler() release the
xdp shinfo automatically, so the caller no longer needs to take care of
the xdp shinfo.
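
A userspace sketch of this "release inside the callee" pattern follows. The names (`frag_list`, `handle`, `put_frags`, `live_frags`) are hypothetical stand-ins for the xdp shinfo frags, virtnet_xdp_handler() and put_xdp_frags(); the point is that the drop path frees the frags itself, so the caller's error handling collapses to a bare goto with no cleanup label of its own.

```c
#include <assert.h>
#include <stdlib.h>

static int live_frags;          /* tracks outstanding frag allocations */

struct frag_list {
	int nr;
	void *frags[8];
};

static void add_frag(struct frag_list *fl)
{
	fl->frags[fl->nr++] = malloc(16);
	live_frags++;
}

static void put_frags(struct frag_list *fl)
{
	for (int i = 0; i < fl->nr; i++) {
		free(fl->frags[i]);
		live_frags--;
	}
	fl->nr = 0;
}

/* On the drop path the handler releases the frags itself, so the
 * caller just checks the return value. */
static int handle(struct frag_list *fl, int act_drop)
{
	if (act_drop)
		goto drop;
	return 0;               /* pass: caller keeps ownership */
drop:
	put_frags(fl);          /* auto release, caller does nothing */
	return -1;
}
```

This is why the patch can delete the err_xdp_frags label: no caller path is left that still owns the frags after a failure.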

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 29 +++++++++++++++++------------
 1 file changed, 17 insertions(+), 12 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index a3f2bcb3db27..136131a7868a 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -833,14 +833,14 @@ static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
 		stats->xdp_tx++;
 		xdpf = xdp_convert_buff_to_frame(xdp);
 		if (unlikely(!xdpf))
-			return VIRTNET_XDP_RES_DROP;
+			goto drop;
 
 		err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
 		if (unlikely(!err)) {
 			xdp_return_frame_rx_napi(xdpf);
 		} else if (unlikely(err < 0)) {
 			trace_xdp_exception(dev, xdp_prog, act);
-			return VIRTNET_XDP_RES_DROP;
+			goto drop;
 		}
 
 		*xdp_xmit |= VIRTIO_XDP_TX;
@@ -850,7 +850,7 @@ static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
 		stats->xdp_redirects++;
 		err = xdp_do_redirect(dev, xdp, xdp_prog);
 		if (err)
-			return VIRTNET_XDP_RES_DROP;
+			goto drop;
 
 		*xdp_xmit |= VIRTIO_XDP_REDIR;
 		return VIRTNET_XDP_RES_CONSUMED;
@@ -862,8 +862,12 @@ static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
 		trace_xdp_exception(dev, xdp_prog, act);
 		fallthrough;
 	case XDP_DROP:
-		return VIRTNET_XDP_RES_DROP;
+		goto drop;
 	}
+
+drop:
+	put_xdp_frags(xdp);
+	return VIRTNET_XDP_RES_DROP;
 }
 
 static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
@@ -1199,7 +1203,7 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
 				 dev->name, *num_buf,
 				 virtio16_to_cpu(vi->vdev, hdr->num_buffers));
 			dev->stats.rx_length_errors++;
-			return -EINVAL;
+			goto err;
 		}
 
 		stats->bytes += len;
@@ -1218,7 +1222,7 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
 			pr_debug("%s: rx error: len %u exceeds truesize %lu\n",
 				 dev->name, len, (unsigned long)(truesize - room));
 			dev->stats.rx_length_errors++;
-			return -EINVAL;
+			goto err;
 		}
 
 		frag = &shinfo->frags[shinfo->nr_frags++];
@@ -1233,6 +1237,10 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
 
 	*xdp_frags_truesize = xdp_frags_truesz;
 	return 0;
+
+err:
+	put_xdp_frags(xdp);
+	return -EINVAL;
 }
 
 static void *mergeable_xdp_prepare(struct virtnet_info *vi,
@@ -1361,7 +1369,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 		err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz,
 						 &num_buf, &xdp_frags_truesz, stats);
 		if (unlikely(err))
-			goto err_xdp_frags;
+			goto err_xdp;
 
 		act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
 
@@ -1369,7 +1377,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 		case VIRTNET_XDP_RES_PASS:
 			head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
 			if (unlikely(!head_skb))
-				goto err_xdp_frags;
+				goto err_xdp;
 
 			rcu_read_unlock();
 			return head_skb;
@@ -1379,11 +1387,8 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 			goto xdp_xmit;
 
 		case VIRTNET_XDP_RES_DROP:
-			goto err_xdp_frags;
+			goto err_xdp;
 		}
-err_xdp_frags:
-		put_xdp_frags(&xdp);
-		goto err_xdp;
 	}
 	rcu_read_unlock();
 
-- 
2.32.0.3.g01195cf9f



* [PATCH net-next 7/8] virtio_net: introduce receive_mergeable_xdp()
  2023-03-28 12:04 [PATCH net-next 0/8] virtio_net: refactor xdp codes Xuan Zhuo
                   ` (5 preceding siblings ...)
  2023-03-28 12:04 ` [PATCH net-next 6/8] virtio_net: auto release xdp shinfo Xuan Zhuo
@ 2023-03-28 12:04 ` Xuan Zhuo
  2023-03-30 10:18   ` Paolo Abeni
  2023-03-28 12:04 ` [PATCH net-next 8/8] virtio_net: introduce receive_small_xdp() Xuan Zhuo
  7 siblings, 1 reply; 41+ messages in thread
From: Xuan Zhuo @ 2023-03-28 12:04 UTC (permalink / raw)
  To: netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

The purpose of this patch is to simplify receive_mergeable() by
separating all the XDP logic into its own function.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 128 +++++++++++++++++++++++----------------
 1 file changed, 76 insertions(+), 52 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 136131a7868a..c8978d8d8adb 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1316,6 +1316,63 @@ static void *mergeable_xdp_prepare(struct virtnet_info *vi,
 	return page_address(xdp_page) + VIRTIO_XDP_HEADROOM;
 }
 
+static struct sk_buff *receive_mergeable_xdp(struct net_device *dev,
+					     struct virtnet_info *vi,
+					     struct receive_queue *rq,
+					     struct bpf_prog *xdp_prog,
+					     void *buf,
+					     void *ctx,
+					     unsigned int len,
+					     unsigned int *xdp_xmit,
+					     struct virtnet_rq_stats *stats)
+{
+	struct virtio_net_hdr_mrg_rxbuf *hdr = buf;
+	int num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers);
+	struct page *page = virt_to_head_page(buf);
+	int offset = buf - page_address(page);
+	unsigned int xdp_frags_truesz = 0;
+	struct sk_buff *head_skb;
+	unsigned int frame_sz;
+	struct xdp_buff xdp;
+	void *data;
+	u32 act;
+	int err;
+
+	data = mergeable_xdp_prepare(vi, rq, xdp_prog, ctx, &frame_sz, &num_buf, &page,
+				     offset, &len, hdr);
+	if (!data)
+		goto err_xdp;
+
+	err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz,
+					 &num_buf, &xdp_frags_truesz, stats);
+	if (unlikely(err))
+		goto err_xdp;
+
+	act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
+
+	switch (act) {
+	case VIRTNET_XDP_RES_PASS:
+		head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
+		if (unlikely(!head_skb))
+			goto err_xdp;
+		return head_skb;
+
+	case VIRTNET_XDP_RES_CONSUMED:
+		return NULL;
+
+	case VIRTNET_XDP_RES_DROP:
+		break;
+	}
+
+err_xdp:
+	put_page(page);
+	mergeable_buf_free(rq, num_buf, dev, stats);
+
+	stats->xdp_drops++;
+	stats->drops++;
+	return NULL;
+}
+
 static struct sk_buff *receive_mergeable(struct net_device *dev,
 					 struct virtnet_info *vi,
 					 struct receive_queue *rq,
@@ -1325,21 +1382,22 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 					 unsigned int *xdp_xmit,
 					 struct virtnet_rq_stats *stats)
 {
-	struct virtio_net_hdr_mrg_rxbuf *hdr = buf;
-	int num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers);
-	struct page *page = virt_to_head_page(buf);
-	int offset = buf - page_address(page);
-	struct sk_buff *head_skb, *curr_skb;
-	struct bpf_prog *xdp_prog;
 	unsigned int truesize = mergeable_ctx_to_truesize(ctx);
 	unsigned int headroom = mergeable_ctx_to_headroom(ctx);
 	unsigned int tailroom = headroom ? sizeof(struct skb_shared_info) : 0;
 	unsigned int room = SKB_DATA_ALIGN(headroom + tailroom);
-	unsigned int frame_sz;
-	int err;
+	struct virtio_net_hdr_mrg_rxbuf *hdr;
+	struct sk_buff *head_skb, *curr_skb;
+	struct bpf_prog *xdp_prog;
+	struct page *page;
+	int num_buf;
+	int offset;
 
 	head_skb = NULL;
 	stats->bytes += len - vi->hdr_len;
+	hdr = buf;
+	num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers);
+	page = virt_to_head_page(buf);
 
 	if (unlikely(len > truesize - room)) {
 		pr_debug("%s: rx error: len %u exceeds truesize %lu\n",
@@ -1348,51 +1406,21 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 		goto err_skb;
 	}
 
-	if (likely(!vi->xdp_enabled)) {
-		xdp_prog = NULL;
-		goto skip_xdp;
-	}
-
-	rcu_read_lock();
-	xdp_prog = rcu_dereference(rq->xdp_prog);
-	if (xdp_prog) {
-		unsigned int xdp_frags_truesz = 0;
-		struct xdp_buff xdp;
-		void *data;
-		u32 act;
-
-		data = mergeable_xdp_prepare(vi, rq, xdp_prog, ctx, &frame_sz, &num_buf, &page,
-					     offset, &len, hdr);
-		if (!data)
-			goto err_xdp;
-
-		err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz,
-						 &num_buf, &xdp_frags_truesz, stats);
-		if (unlikely(err))
-			goto err_xdp;
-
-		act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
-
-		switch (act) {
-		case VIRTNET_XDP_RES_PASS:
-			head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
-			if (unlikely(!head_skb))
-				goto err_xdp;
-
+	if (likely(vi->xdp_enabled)) {
+		rcu_read_lock();
+		xdp_prog = rcu_dereference(rq->xdp_prog);
+		if (xdp_prog) {
+			head_skb = receive_mergeable_xdp(dev, vi, rq, xdp_prog,
+							 buf, ctx, len, xdp_xmit,
+							 stats);
 			rcu_read_unlock();
 			return head_skb;
-
-		case VIRTNET_XDP_RES_CONSUMED:
-			rcu_read_unlock();
-			goto xdp_xmit;
-
-		case VIRTNET_XDP_RES_DROP:
-			goto err_xdp;
 		}
+		rcu_read_unlock();
 	}
-	rcu_read_unlock();
 
-skip_xdp:
+	offset = buf - page_address(page);
+
 	head_skb = page_to_skb(vi, rq, page, offset, len, truesize, headroom);
 	curr_skb = head_skb;
 
@@ -1458,9 +1486,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 	ewma_pkt_len_add(&rq->mrg_avg_pkt_len, head_skb->len);
 	return head_skb;
 
-err_xdp:
-	rcu_read_unlock();
-	stats->xdp_drops++;
 err_skb:
 	put_page(page);
 	mergeable_buf_free(rq, num_buf, dev, stats);
@@ -1468,7 +1493,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 err_buf:
 	stats->drops++;
 	dev_kfree_skb(head_skb);
-xdp_xmit:
 	return NULL;
 }
 
-- 
2.32.0.3.g01195cf9f


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* [PATCH net-next 8/8] virtio_net: introduce receive_small_xdp()
  2023-03-28 12:04 [PATCH net-next 0/8] virtio_net: refactor xdp codes Xuan Zhuo
                   ` (6 preceding siblings ...)
  2023-03-28 12:04 ` [PATCH net-next 7/8] virtio_net: introduce receive_mergeable_xdp() Xuan Zhuo
@ 2023-03-28 12:04 ` Xuan Zhuo
  2023-03-30 10:48   ` Paolo Abeni
  7 siblings, 1 reply; 41+ messages in thread
From: Xuan Zhuo @ 2023-03-28 12:04 UTC (permalink / raw)
  To: netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

The purpose of this patch is to simplify receive_small() by
separating all of the small-buffer XDP logic into its own function.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 168 +++++++++++++++++++++++----------------
 1 file changed, 100 insertions(+), 68 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index c8978d8d8adb..37cd0bf97a16 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -939,6 +939,99 @@ static struct page *xdp_linearize_page(struct receive_queue *rq,
 	return NULL;
 }
 
+static struct sk_buff *receive_small_xdp(struct net_device *dev,
+					 struct virtnet_info *vi,
+					 struct receive_queue *rq,
+					 struct bpf_prog *xdp_prog,
+					 void *buf,
+					 void *ctx,
+					 unsigned int len,
+					 unsigned int *xdp_xmit,
+					 struct virtnet_rq_stats *stats)
+{
+	unsigned int xdp_headroom = (unsigned long)ctx;
+	unsigned int header_offset = VIRTNET_RX_PAD + xdp_headroom;
+	unsigned int headroom = vi->hdr_len + header_offset;
+	struct virtio_net_hdr_mrg_rxbuf *hdr = buf + header_offset;
+	struct page *page = virt_to_head_page(buf);
+	struct page *xdp_page;
+	unsigned int buflen;
+	struct xdp_buff xdp;
+	struct sk_buff *skb;
+	unsigned int delta = 0;
+	unsigned int metasize = 0;
+	void *orig_data;
+	u32 act;
+
+	buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) +
+		SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+
+	if (unlikely(hdr->hdr.gso_type))
+		goto err_xdp;
+
+	if (unlikely(xdp_headroom < virtnet_get_headroom(vi))) {
+		int offset = buf - page_address(page) + header_offset;
+		unsigned int tlen = len + vi->hdr_len;
+		int num_buf = 1;
+
+		xdp_headroom = virtnet_get_headroom(vi);
+		header_offset = VIRTNET_RX_PAD + xdp_headroom;
+		headroom = vi->hdr_len + header_offset;
+		buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) +
+			SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+		xdp_page = xdp_linearize_page(rq, &num_buf, page,
+					      offset, header_offset,
+					      &tlen);
+		if (!xdp_page)
+			goto err_xdp;
+
+		buf = page_address(xdp_page);
+		put_page(page);
+		page = xdp_page;
+	}
+
+	xdp_init_buff(&xdp, buflen, &rq->xdp_rxq);
+	xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len,
+			 xdp_headroom, len, true);
+	orig_data = xdp.data;
+
+	act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
+
+	switch (act) {
+	case VIRTNET_XDP_RES_PASS:
+		/* Recalculate length in case bpf program changed it */
+		delta = orig_data - xdp.data;
+		len = xdp.data_end - xdp.data;
+		metasize = xdp.data - xdp.data_meta;
+		break;
+
+	case VIRTNET_XDP_RES_CONSUMED:
+		goto xdp_xmit;
+
+	case VIRTNET_XDP_RES_DROP:
+		goto err_xdp;
+	}
+
+	skb = build_skb(buf, buflen);
+	if (!skb)
+		goto err;
+
+	skb_reserve(skb, headroom - delta);
+	skb_put(skb, len);
+	if (metasize)
+		skb_metadata_set(skb, metasize);
+
+	return skb;
+
+err_xdp:
+	stats->xdp_drops++;
+err:
+	stats->drops++;
+	put_page(page);
+xdp_xmit:
+	return NULL;
+}
+
 static struct sk_buff *receive_small(struct net_device *dev,
 				     struct virtnet_info *vi,
 				     struct receive_queue *rq,
@@ -949,15 +1042,11 @@ static struct sk_buff *receive_small(struct net_device *dev,
 {
 	struct sk_buff *skb;
 	struct bpf_prog *xdp_prog;
-	unsigned int xdp_headroom = (unsigned long)ctx;
-	unsigned int header_offset = VIRTNET_RX_PAD + xdp_headroom;
+	unsigned int header_offset = VIRTNET_RX_PAD;
 	unsigned int headroom = vi->hdr_len + header_offset;
 	unsigned int buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) +
 			      SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
 	struct page *page = virt_to_head_page(buf);
-	unsigned int delta = 0;
-	struct page *xdp_page;
-	unsigned int metasize = 0;
 
 	len -= vi->hdr_len;
 	stats->bytes += len;
@@ -977,57 +1066,9 @@ static struct sk_buff *receive_small(struct net_device *dev,
 	rcu_read_lock();
 	xdp_prog = rcu_dereference(rq->xdp_prog);
 	if (xdp_prog) {
-		struct virtio_net_hdr_mrg_rxbuf *hdr = buf + header_offset;
-		struct xdp_buff xdp;
-		void *orig_data;
-		u32 act;
-
-		if (unlikely(hdr->hdr.gso_type))
-			goto err_xdp;
-
-		if (unlikely(xdp_headroom < virtnet_get_headroom(vi))) {
-			int offset = buf - page_address(page) + header_offset;
-			unsigned int tlen = len + vi->hdr_len;
-			int num_buf = 1;
-
-			xdp_headroom = virtnet_get_headroom(vi);
-			header_offset = VIRTNET_RX_PAD + xdp_headroom;
-			headroom = vi->hdr_len + header_offset;
-			buflen = SKB_DATA_ALIGN(GOOD_PACKET_LEN + headroom) +
-				 SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
-			xdp_page = xdp_linearize_page(rq, &num_buf, page,
-						      offset, header_offset,
-						      &tlen);
-			if (!xdp_page)
-				goto err_xdp;
-
-			buf = page_address(xdp_page);
-			put_page(page);
-			page = xdp_page;
-		}
-
-		xdp_init_buff(&xdp, buflen, &rq->xdp_rxq);
-		xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len,
-				 xdp_headroom, len, true);
-		orig_data = xdp.data;
-
-		act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
-
-		switch (act) {
-		case VIRTNET_XDP_RES_PASS:
-			/* Recalculate length in case bpf program changed it */
-			delta = orig_data - xdp.data;
-			len = xdp.data_end - xdp.data;
-			metasize = xdp.data - xdp.data_meta;
-			break;
-
-		case VIRTNET_XDP_RES_CONSUMED:
-			rcu_read_unlock();
-			goto xdp_xmit;
-
-		case VIRTNET_XDP_RES_DROP:
-			goto err_xdp;
-		}
+		skb = receive_small_xdp(dev, vi, rq, xdp_prog, buf, ctx, len, xdp_xmit, stats);
+		rcu_read_unlock();
+		return skb;
 	}
 	rcu_read_unlock();
 
@@ -1035,25 +1076,16 @@ static struct sk_buff *receive_small(struct net_device *dev,
 	skb = build_skb(buf, buflen);
 	if (!skb)
 		goto err;
-	skb_reserve(skb, headroom - delta);
+	skb_reserve(skb, headroom);
 	skb_put(skb, len);
-	if (!xdp_prog) {
-		buf += header_offset;
-		memcpy(skb_vnet_hdr(skb), buf, vi->hdr_len);
-	} /* keep zeroed vnet hdr since XDP is loaded */
-
-	if (metasize)
-		skb_metadata_set(skb, metasize);
 
+	buf += header_offset;
+	memcpy(skb_vnet_hdr(skb), buf, vi->hdr_len);
 	return skb;
 
-err_xdp:
-	rcu_read_unlock();
-	stats->xdp_drops++;
 err:
 	stats->drops++;
 	put_page(page);
-xdp_xmit:
 	return NULL;
 }
 
-- 
2.32.0.3.g01195cf9f


^ permalink raw reply related	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 7/8] virtio_net: introduce receive_mergeable_xdp()
  2023-03-28 12:04 ` [PATCH net-next 7/8] virtio_net: introduce receive_mergeable_xdp() Xuan Zhuo
@ 2023-03-30 10:18   ` Paolo Abeni
  2023-03-31  7:18     ` Xuan Zhuo
  0 siblings, 1 reply; 41+ messages in thread
From: Paolo Abeni @ 2023-03-30 10:18 UTC (permalink / raw)
  To: Xuan Zhuo, netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

Hi,

On Tue, 2023-03-28 at 20:04 +0800, Xuan Zhuo wrote:
> The purpose of this patch is to simplify receive_mergeable() by
> separating all of the XDP logic into its own function.
> 
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/net/virtio_net.c | 128 +++++++++++++++++++++++----------------
>  1 file changed, 76 insertions(+), 52 deletions(-)
> 
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 136131a7868a..c8978d8d8adb 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -1316,6 +1316,63 @@ static void *mergeable_xdp_prepare(struct virtnet_info *vi,
>  	return page_address(xdp_page) + VIRTIO_XDP_HEADROOM;
>  }
>  
> +static struct sk_buff *receive_mergeable_xdp(struct net_device *dev,
> +					     struct virtnet_info *vi,
> +					     struct receive_queue *rq,
> +					     struct bpf_prog *xdp_prog,
> +					     void *buf,
> +					     void *ctx,
> +					     unsigned int len,
> +					     unsigned int *xdp_xmit,
> +					     struct virtnet_rq_stats *stats)
> +{
> +	struct virtio_net_hdr_mrg_rxbuf *hdr = buf;
> +	int num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers);
> +	struct page *page = virt_to_head_page(buf);
> +	int offset = buf - page_address(page);
> +	unsigned int xdp_frags_truesz = 0;
> +	struct sk_buff *head_skb;
> +	unsigned int frame_sz;
> +	struct xdp_buff xdp;
> +	void *data;
> +	u32 act;
> +	int err;
> +
> +	data = mergeable_xdp_prepare(vi, rq, xdp_prog, ctx, &frame_sz, &num_buf, &page,
> +				     offset, &len, hdr);
> +	if (!data)
> +		goto err_xdp;
> +
> +	err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz,
> +					 &num_buf, &xdp_frags_truesz, stats);
> +	if (unlikely(err))
> +		goto err_xdp;
> +
> +	act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
> +
> +	switch (act) {
> +	case VIRTNET_XDP_RES_PASS:
> +		head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
> +		if (unlikely(!head_skb))
> +			goto err_xdp;
> +		return head_skb;
> +
> +	case VIRTNET_XDP_RES_CONSUMED:
> +		return NULL;
> +
> +	case VIRTNET_XDP_RES_DROP:
> +		break;
> +	}
> +
> +err_xdp:
> +	put_page(page);
> +	mergeable_buf_free(rq, num_buf, dev, stats);
> +
> +	stats->xdp_drops++;
> +	stats->drops++;
> +	return NULL;
> +}
> +
>  static struct sk_buff *receive_mergeable(struct net_device *dev,
>  					 struct virtnet_info *vi,
>  					 struct receive_queue *rq,
> @@ -1325,21 +1382,22 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>  					 unsigned int *xdp_xmit,
>  					 struct virtnet_rq_stats *stats)
>  {
> -	struct virtio_net_hdr_mrg_rxbuf *hdr = buf;
> -	int num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers);
> -	struct page *page = virt_to_head_page(buf);
> -	int offset = buf - page_address(page);
> -	struct sk_buff *head_skb, *curr_skb;
> -	struct bpf_prog *xdp_prog;
>  	unsigned int truesize = mergeable_ctx_to_truesize(ctx);
>  	unsigned int headroom = mergeable_ctx_to_headroom(ctx);
>  	unsigned int tailroom = headroom ? sizeof(struct skb_shared_info) : 0;
>  	unsigned int room = SKB_DATA_ALIGN(headroom + tailroom);
> -	unsigned int frame_sz;
> -	int err;
> +	struct virtio_net_hdr_mrg_rxbuf *hdr;
> +	struct sk_buff *head_skb, *curr_skb;
> +	struct bpf_prog *xdp_prog;
> +	struct page *page;
> +	int num_buf;
> +	int offset;
>  
>  	head_skb = NULL;
>  	stats->bytes += len - vi->hdr_len;
> +	hdr = buf;
> +	num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers);
> +	page = virt_to_head_page(buf);
>  
>  	if (unlikely(len > truesize - room)) {
>  		pr_debug("%s: rx error: len %u exceeds truesize %lu\n",
> @@ -1348,51 +1406,21 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>  		goto err_skb;
>  	}
>  
> -	if (likely(!vi->xdp_enabled)) {
> -		xdp_prog = NULL;
> -		goto skip_xdp;
> -	}
> -
> -	rcu_read_lock();
> -	xdp_prog = rcu_dereference(rq->xdp_prog);
> -	if (xdp_prog) {
> -		unsigned int xdp_frags_truesz = 0;
> -		struct xdp_buff xdp;
> -		void *data;
> -		u32 act;
> -
> -		data = mergeable_xdp_prepare(vi, rq, xdp_prog, ctx, &frame_sz, &num_buf, &page,
> -					     offset, &len, hdr);
> -		if (!data)
> -			goto err_xdp;
> -
> -		err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz,
> -						 &num_buf, &xdp_frags_truesz, stats);
> -		if (unlikely(err))
> -			goto err_xdp;
> -
> -		act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
> -
> -		switch (act) {
> -		case VIRTNET_XDP_RES_PASS:
> -			head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
> -			if (unlikely(!head_skb))
> -				goto err_xdp;
> -
> +	if (likely(vi->xdp_enabled)) {

This changes the branch prediction hint compared to the existing code,
which currently has:

	if (likely(!vi->xdp_enabled)) {

and I think it would be better to avoid such a change.

Thanks,

Paolo


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 8/8] virtio_net: introduce receive_small_xdp()
  2023-03-28 12:04 ` [PATCH net-next 8/8] virtio_net: introduce receive_small_xdp() Xuan Zhuo
@ 2023-03-30 10:48   ` Paolo Abeni
  2023-03-31  7:20     ` Xuan Zhuo
  0 siblings, 1 reply; 41+ messages in thread
From: Paolo Abeni @ 2023-03-30 10:48 UTC (permalink / raw)
  To: Xuan Zhuo, netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

On Tue, 2023-03-28 at 20:04 +0800, Xuan Zhuo wrote:
> @@ -949,15 +1042,11 @@ static struct sk_buff *receive_small(struct net_device *dev,
>  {
>  	struct sk_buff *skb;
>  	struct bpf_prog *xdp_prog;
> -	unsigned int xdp_headroom = (unsigned long)ctx;
> -	unsigned int header_offset = VIRTNET_RX_PAD + xdp_headroom;
> +	unsigned int header_offset = VIRTNET_RX_PAD;
>  	unsigned int headroom = vi->hdr_len + header_offset;

This changes (reduces) the headroom for non-XDP-pass skbs.

[...]
> +	buf += header_offset;
> +	memcpy(skb_vnet_hdr(skb), buf, vi->hdr_len);

AFAICS, that also means that receive_small(), for such packets, will
look for the virtio header in a different location. Is that expected?

Thanks.

Paolo


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 7/8] virtio_net: introduce receive_mergeable_xdp()
  2023-03-30 10:18   ` Paolo Abeni
@ 2023-03-31  7:18     ` Xuan Zhuo
  0 siblings, 0 replies; 41+ messages in thread
From: Xuan Zhuo @ 2023-03-31  7:18 UTC (permalink / raw)
  To: Paolo Abeni
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf,
	netdev

On Thu, 30 Mar 2023 12:18:15 +0200, Paolo Abeni <pabeni@redhat.com> wrote:
> Hi,
>
> On Tue, 2023-03-28 at 20:04 +0800, Xuan Zhuo wrote:
> > The purpose of this patch is to simplify receive_mergeable() by
> > separating all of the XDP logic into its own function.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  drivers/net/virtio_net.c | 128 +++++++++++++++++++++++----------------
> >  1 file changed, 76 insertions(+), 52 deletions(-)
> >
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index 136131a7868a..c8978d8d8adb 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -1316,6 +1316,63 @@ static void *mergeable_xdp_prepare(struct virtnet_info *vi,
> >  	return page_address(xdp_page) + VIRTIO_XDP_HEADROOM;
> >  }
> >
> > +static struct sk_buff *receive_mergeable_xdp(struct net_device *dev,
> > +					     struct virtnet_info *vi,
> > +					     struct receive_queue *rq,
> > +					     struct bpf_prog *xdp_prog,
> > +					     void *buf,
> > +					     void *ctx,
> > +					     unsigned int len,
> > +					     unsigned int *xdp_xmit,
> > +					     struct virtnet_rq_stats *stats)
> > +{
> > +	struct virtio_net_hdr_mrg_rxbuf *hdr = buf;
> > +	int num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers);
> > +	struct page *page = virt_to_head_page(buf);
> > +	int offset = buf - page_address(page);
> > +	unsigned int xdp_frags_truesz = 0;
> > +	struct sk_buff *head_skb;
> > +	unsigned int frame_sz;
> > +	struct xdp_buff xdp;
> > +	void *data;
> > +	u32 act;
> > +	int err;
> > +
> > +	data = mergeable_xdp_prepare(vi, rq, xdp_prog, ctx, &frame_sz, &num_buf, &page,
> > +				     offset, &len, hdr);
> > +	if (!data)
> > +		goto err_xdp;
> > +
> > +	err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz,
> > +					 &num_buf, &xdp_frags_truesz, stats);
> > +	if (unlikely(err))
> > +		goto err_xdp;
> > +
> > +	act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
> > +
> > +	switch (act) {
> > +	case VIRTNET_XDP_RES_PASS:
> > +		head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
> > +		if (unlikely(!head_skb))
> > +			goto err_xdp;
> > +		return head_skb;
> > +
> > +	case VIRTNET_XDP_RES_CONSUMED:
> > +		return NULL;
> > +
> > +	case VIRTNET_XDP_RES_DROP:
> > +		break;
> > +	}
> > +
> > +err_xdp:
> > +	put_page(page);
> > +	mergeable_buf_free(rq, num_buf, dev, stats);
> > +
> > +	stats->xdp_drops++;
> > +	stats->drops++;
> > +	return NULL;
> > +}
> > +
> >  static struct sk_buff *receive_mergeable(struct net_device *dev,
> >  					 struct virtnet_info *vi,
> >  					 struct receive_queue *rq,
> > @@ -1325,21 +1382,22 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> >  					 unsigned int *xdp_xmit,
> >  					 struct virtnet_rq_stats *stats)
> >  {
> > -	struct virtio_net_hdr_mrg_rxbuf *hdr = buf;
> > -	int num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers);
> > -	struct page *page = virt_to_head_page(buf);
> > -	int offset = buf - page_address(page);
> > -	struct sk_buff *head_skb, *curr_skb;
> > -	struct bpf_prog *xdp_prog;
> >  	unsigned int truesize = mergeable_ctx_to_truesize(ctx);
> >  	unsigned int headroom = mergeable_ctx_to_headroom(ctx);
> >  	unsigned int tailroom = headroom ? sizeof(struct skb_shared_info) : 0;
> >  	unsigned int room = SKB_DATA_ALIGN(headroom + tailroom);
> > -	unsigned int frame_sz;
> > -	int err;
> > +	struct virtio_net_hdr_mrg_rxbuf *hdr;
> > +	struct sk_buff *head_skb, *curr_skb;
> > +	struct bpf_prog *xdp_prog;
> > +	struct page *page;
> > +	int num_buf;
> > +	int offset;
> >
> >  	head_skb = NULL;
> >  	stats->bytes += len - vi->hdr_len;
> > +	hdr = buf;
> > +	num_buf = virtio16_to_cpu(vi->vdev, hdr->num_buffers);
> > +	page = virt_to_head_page(buf);
> >
> >  	if (unlikely(len > truesize - room)) {
> >  		pr_debug("%s: rx error: len %u exceeds truesize %lu\n",
> > @@ -1348,51 +1406,21 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> >  		goto err_skb;
> >  	}
> >
> > -	if (likely(!vi->xdp_enabled)) {
> > -		xdp_prog = NULL;
> > -		goto skip_xdp;
> > -	}
> > -
> > -	rcu_read_lock();
> > -	xdp_prog = rcu_dereference(rq->xdp_prog);
> > -	if (xdp_prog) {
> > -		unsigned int xdp_frags_truesz = 0;
> > -		struct xdp_buff xdp;
> > -		void *data;
> > -		u32 act;
> > -
> > -		data = mergeable_xdp_prepare(vi, rq, xdp_prog, ctx, &frame_sz, &num_buf, &page,
> > -					     offset, &len, hdr);
> > -		if (!data)
> > -			goto err_xdp;
> > -
> > -		err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz,
> > -						 &num_buf, &xdp_frags_truesz, stats);
> > -		if (unlikely(err))
> > -			goto err_xdp;
> > -
> > -		act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
> > -
> > -		switch (act) {
> > -		case VIRTNET_XDP_RES_PASS:
> > -			head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
> > -			if (unlikely(!head_skb))
> > -				goto err_xdp;
> > -
> > +	if (likely(vi->xdp_enabled)) {
>
> This changes the branch prediction hint compared to the existing code,
> which currently has:
>
> 	if (likely(!vi->xdp_enabled)) {
>
> and I think it would be better to avoid such a change.


Yes.

Will fix.

Thanks.


>
> Thanks,
>
> Paolo
>

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 8/8] virtio_net: introduce receive_small_xdp()
  2023-03-30 10:48   ` Paolo Abeni
@ 2023-03-31  7:20     ` Xuan Zhuo
  2023-03-31  7:24       ` Michael S. Tsirkin
  0 siblings, 1 reply; 41+ messages in thread
From: Xuan Zhuo @ 2023-03-31  7:20 UTC (permalink / raw)
  To: Paolo Abeni
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf,
	netdev

On Thu, 30 Mar 2023 12:48:22 +0200, Paolo Abeni <pabeni@redhat.com> wrote:
> On Tue, 2023-03-28 at 20:04 +0800, Xuan Zhuo wrote:
> > @@ -949,15 +1042,11 @@ static struct sk_buff *receive_small(struct net_device *dev,
> >  {
> >  	struct sk_buff *skb;
> >  	struct bpf_prog *xdp_prog;
> > -	unsigned int xdp_headroom = (unsigned long)ctx;
> > -	unsigned int header_offset = VIRTNET_RX_PAD + xdp_headroom;
> > +	unsigned int header_offset = VIRTNET_RX_PAD;
> >  	unsigned int headroom = vi->hdr_len + header_offset;
>
> > This changes (reduces) the headroom for non-XDP-pass skbs.
>
> [...]
> > +	buf += header_offset;
> > +	memcpy(skb_vnet_hdr(skb), buf, vi->hdr_len);
>
> AFAICS, that also means that receive_small(), for such packets, will
> look for the virtio header in a different location. Is that expected?


That is a mistake.

Will fix.

Thanks.

>
> Thanks.
>
> Paolo
>

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 8/8] virtio_net: introduce receive_small_xdp()
  2023-03-31  7:20     ` Xuan Zhuo
@ 2023-03-31  7:24       ` Michael S. Tsirkin
  0 siblings, 0 replies; 41+ messages in thread
From: Michael S. Tsirkin @ 2023-03-31  7:24 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: Paolo Abeni, Jason Wang, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf,
	netdev

On Fri, Mar 31, 2023 at 03:20:35PM +0800, Xuan Zhuo wrote:
> On Thu, 30 Mar 2023 12:48:22 +0200, Paolo Abeni <pabeni@redhat.com> wrote:
> > On Tue, 2023-03-28 at 20:04 +0800, Xuan Zhuo wrote:
> > > @@ -949,15 +1042,11 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > >  {
> > >  	struct sk_buff *skb;
> > >  	struct bpf_prog *xdp_prog;
> > > -	unsigned int xdp_headroom = (unsigned long)ctx;
> > > -	unsigned int header_offset = VIRTNET_RX_PAD + xdp_headroom;
> > > +	unsigned int header_offset = VIRTNET_RX_PAD;
> > >  	unsigned int headroom = vi->hdr_len + header_offset;
> >
> > This changes (reduces) the headroom for non-XDP-pass skbs.
> >
> > [...]
> > > +	buf += header_offset;
> > > +	memcpy(skb_vnet_hdr(skb), buf, vi->hdr_len);
> >
> > AFAICS, that also means that receive_small(), for such packets, will
> > look for the virtio header in a different location. Is that expected?
> 
> 
> That is a mistake.
> 
> Will fix.
> 
> Thanks.

Do try to test small and big packet configurations though, too.

> >
> > Thanks.
> >
> > Paolo
> >


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 1/8] virtio_net: mergeable xdp: put old page immediately
  2023-03-28 12:04 ` [PATCH net-next 1/8] virtio_net: mergeable xdp: put old page immediately Xuan Zhuo
@ 2023-03-31  9:01   ` Jason Wang
  2023-04-03  4:11     ` Xuan Zhuo
  0 siblings, 1 reply; 41+ messages in thread
From: Jason Wang @ 2023-03-31  9:01 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, Michael S. Tsirkin, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

On Tue, Mar 28, 2023 at 8:04 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> In the mergeable XDP implementation of virtio-net, the code always has
> to check whether two pages are in use and select one of them to
> release. This complicates the handling of XDP actions and is easy to
> get wrong.
>
> Throughout the process, we follow these principles:
> * If xdp_page is used (PASS, TX, Redirect), then we release the old
>   page.
> * In the drop case, we release both: the old page obtained from buf is
>   released inside err_xdp, and xdp_page needs to be released by us.
>
> But in fact, when we allocate a new page, we can release the old page
> immediately. Then only one page is in use, and we just need to release
> the new page in the drop case. On the drop path, err_xdp releases the
> variable "page", so we only need to let "page" point to the new
> xdp_page in advance.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/net/virtio_net.c | 15 ++++++---------
>  1 file changed, 6 insertions(+), 9 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index e2560b6f7980..4d2bf1ce0730 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -1245,6 +1245,9 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>                         if (!xdp_page)
>                                 goto err_xdp;
>                         offset = VIRTIO_XDP_HEADROOM;
> +
> +                       put_page(page);
> +                       page = xdp_page;
>                 } else if (unlikely(headroom < virtnet_get_headroom(vi))) {
>                         xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
>                                                   sizeof(struct skb_shared_info));
> @@ -1259,6 +1262,9 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>                                page_address(page) + offset, len);
>                         frame_sz = PAGE_SIZE;
>                         offset = VIRTIO_XDP_HEADROOM;
> +
> +                       put_page(page);
> +                       page = xdp_page;
>                 } else {
>                         xdp_page = page;
>                 }

While at this, I would go a little further and just remove the above
assignment; then we can use:

                data = page_address(page) + offset;

Which seems cleaner?

Thanks

> @@ -1278,8 +1284,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>                         if (unlikely(!head_skb))
>                                 goto err_xdp_frags;
>
> -                       if (unlikely(xdp_page != page))
> -                               put_page(page);
>                         rcu_read_unlock();
>                         return head_skb;
>                 case XDP_TX:
> @@ -1297,8 +1301,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>                                 goto err_xdp_frags;
>                         }
>                         *xdp_xmit |= VIRTIO_XDP_TX;
> -                       if (unlikely(xdp_page != page))
> -                               put_page(page);
>                         rcu_read_unlock();
>                         goto xdp_xmit;
>                 case XDP_REDIRECT:
> @@ -1307,8 +1309,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>                         if (err)
>                                 goto err_xdp_frags;
>                         *xdp_xmit |= VIRTIO_XDP_REDIR;
> -                       if (unlikely(xdp_page != page))
> -                               put_page(page);
>                         rcu_read_unlock();
>                         goto xdp_xmit;
>                 default:
> @@ -1321,9 +1321,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>                         goto err_xdp_frags;
>                 }
>  err_xdp_frags:
> -               if (unlikely(xdp_page != page))
> -                       __free_pages(xdp_page, 0);
> -
>                 if (xdp_buff_has_frags(&xdp)) {
>                         shinfo = xdp_get_shared_info_from_buff(&xdp);
>                         for (i = 0; i < shinfo->nr_frags; i++) {
> --
> 2.32.0.3.g01195cf9f
>


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 2/8] virtio_net: mergeable xdp: introduce mergeable_xdp_prepare
  2023-03-28 12:04 ` [PATCH net-next 2/8] virtio_net: mergeable xdp: introduce mergeable_xdp_prepare Xuan Zhuo
@ 2023-03-31  9:14   ` Jason Wang
  2023-04-03  4:11     ` Xuan Zhuo
  0 siblings, 1 reply; 41+ messages in thread
From: Jason Wang @ 2023-03-31  9:14 UTC (permalink / raw)
  To: Xuan Zhuo, netdev
  Cc: Michael S. Tsirkin, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf


在 2023/3/28 20:04, Xuan Zhuo 写道:
> Separate the XDP preparation logic out of receive_mergeable().
>
> The purpose of this is to simplify the XDP execution logic.
>
> The main logic here is that when the headroom is insufficient, we need
> to allocate a new page and recalculate the offset. Note that if a new
> page is allocated, the variable page will refer to it.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>   drivers/net/virtio_net.c | 135 ++++++++++++++++++++++-----------------
>   1 file changed, 77 insertions(+), 58 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 4d2bf1ce0730..bb426958cdd4 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -1162,6 +1162,79 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
>   	return 0;
>   }
>   
> +static void *mergeable_xdp_prepare(struct virtnet_info *vi,
> +				   struct receive_queue *rq,
> +				   struct bpf_prog *xdp_prog,
> +				   void *ctx,
> +				   unsigned int *frame_sz,
> +				   int *num_buf,
> +				   struct page **page,
> +				   int offset,
> +				   unsigned int *len,
> +				   struct virtio_net_hdr_mrg_rxbuf *hdr)
> +{
> +	unsigned int truesize = mergeable_ctx_to_truesize(ctx);
> +	unsigned int headroom = mergeable_ctx_to_headroom(ctx);
> +	struct page *xdp_page;
> +	unsigned int xdp_room;
> +
> +	/* Transient failure which in theory could occur if
> +	 * in-flight packets from before XDP was enabled reach
> +	 * the receive path after XDP is loaded.
> +	 */
> +	if (unlikely(hdr->hdr.gso_type))
> +		return NULL;
> +
> +	/* Now XDP core assumes frag size is PAGE_SIZE, but buffers
> +	 * with headroom may add hole in truesize, which
> +	 * make their length exceed PAGE_SIZE. So we disabled the
> +	 * hole mechanism for xdp. See add_recvbuf_mergeable().
> +	 */
> +	*frame_sz = truesize;
> +
> +	/* This happens when headroom is not enough because
> +	 * of the buffer was prefilled before XDP is set.
> +	 * This should only happen for the first several packets.
> +	 * In fact, vq reset can be used here to help us clean up
> +	 * the prefilled buffers, but many existing devices do not
> +	 * support it, and we don't want to bother users who are
> +	 * using xdp normally.
> +	 */
> +	if (!xdp_prog->aux->xdp_has_frags &&
> +	    (*num_buf > 1 || headroom < virtnet_get_headroom(vi))) {
> +		/* linearize data for XDP */
> +		xdp_page = xdp_linearize_page(rq, num_buf,
> +					      *page, offset,
> +					      VIRTIO_XDP_HEADROOM,
> +					      len);
> +
> +		if (!xdp_page)
> +			return NULL;
> +	} else if (unlikely(headroom < virtnet_get_headroom(vi))) {
> +		xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
> +					  sizeof(struct skb_shared_info));
> +		if (*len + xdp_room > PAGE_SIZE)
> +			return NULL;
> +
> +		xdp_page = alloc_page(GFP_ATOMIC);
> +		if (!xdp_page)
> +			return NULL;
> +
> +		memcpy(page_address(xdp_page) + VIRTIO_XDP_HEADROOM,
> +		       page_address(*page) + offset, *len);
> +	} else {
> +		return page_address(*page) + offset;


This makes the code a little harder to read than the original code.

Why not do a verbatim move without introducing new logic? (Or
introduce new logic on top?)

Thanks


> +	}
> +
> +	*frame_sz = PAGE_SIZE;
> +
> +	put_page(*page);
> +
> +	*page = xdp_page;
> +
> +	return page_address(xdp_page) + VIRTIO_XDP_HEADROOM;
> +}
> +
>   static struct sk_buff *receive_mergeable(struct net_device *dev,
>   					 struct virtnet_info *vi,
>   					 struct receive_queue *rq,
> @@ -1181,7 +1254,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>   	unsigned int headroom = mergeable_ctx_to_headroom(ctx);
>   	unsigned int tailroom = headroom ? sizeof(struct skb_shared_info) : 0;
>   	unsigned int room = SKB_DATA_ALIGN(headroom + tailroom);
> -	unsigned int frame_sz, xdp_room;
> +	unsigned int frame_sz;
>   	int err;
>   
>   	head_skb = NULL;
> @@ -1211,65 +1284,11 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>   		u32 act;
>   		int i;
>   
> -		/* Transient failure which in theory could occur if
> -		 * in-flight packets from before XDP was enabled reach
> -		 * the receive path after XDP is loaded.
> -		 */
> -		if (unlikely(hdr->hdr.gso_type))
> +		data = mergeable_xdp_prepare(vi, rq, xdp_prog, ctx, &frame_sz, &num_buf, &page,
> +					     offset, &len, hdr);
> +		if (!data)
>   			goto err_xdp;
>   
> -		/* Now XDP core assumes frag size is PAGE_SIZE, but buffers
> -		 * with headroom may add hole in truesize, which
> -		 * make their length exceed PAGE_SIZE. So we disabled the
> -		 * hole mechanism for xdp. See add_recvbuf_mergeable().
> -		 */
> -		frame_sz = truesize;
> -
> -		/* This happens when headroom is not enough because
> -		 * of the buffer was prefilled before XDP is set.
> -		 * This should only happen for the first several packets.
> -		 * In fact, vq reset can be used here to help us clean up
> -		 * the prefilled buffers, but many existing devices do not
> -		 * support it, and we don't want to bother users who are
> -		 * using xdp normally.
> -		 */
> -		if (!xdp_prog->aux->xdp_has_frags &&
> -		    (num_buf > 1 || headroom < virtnet_get_headroom(vi))) {
> -			/* linearize data for XDP */
> -			xdp_page = xdp_linearize_page(rq, &num_buf,
> -						      page, offset,
> -						      VIRTIO_XDP_HEADROOM,
> -						      &len);
> -			frame_sz = PAGE_SIZE;
> -
> -			if (!xdp_page)
> -				goto err_xdp;
> -			offset = VIRTIO_XDP_HEADROOM;
> -
> -			put_page(page);
> -			page = xdp_page;
> -		} else if (unlikely(headroom < virtnet_get_headroom(vi))) {
> -			xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
> -						  sizeof(struct skb_shared_info));
> -			if (len + xdp_room > PAGE_SIZE)
> -				goto err_xdp;
> -
> -			xdp_page = alloc_page(GFP_ATOMIC);
> -			if (!xdp_page)
> -				goto err_xdp;
> -
> -			memcpy(page_address(xdp_page) + VIRTIO_XDP_HEADROOM,
> -			       page_address(page) + offset, len);
> -			frame_sz = PAGE_SIZE;
> -			offset = VIRTIO_XDP_HEADROOM;
> -
> -			put_page(page);
> -			page = xdp_page;
> -		} else {
> -			xdp_page = page;
> -		}
> -
> -		data = page_address(xdp_page) + offset;
>   		err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz,
>   						 &num_buf, &xdp_frags_truesz, stats);
>   		if (unlikely(err))


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 3/8] virtio_net: introduce virtnet_xdp_handler() to seprate the logic of run xdp
  2023-03-28 12:04 ` [PATCH net-next 3/8] virtio_net: introduce virtnet_xdp_handler() to seprate the logic of run xdp Xuan Zhuo
@ 2023-04-03  2:43   ` Jason Wang
  2023-04-03  4:12     ` Xuan Zhuo
  0 siblings, 1 reply; 41+ messages in thread
From: Jason Wang @ 2023-04-03  2:43 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, Michael S. Tsirkin, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

On Tue, Mar 28, 2023 at 8:04 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> At present, we have two similar pieces of logic that run the XDP program.
>
> Therefore, this patch separates out the code that executes XDP, which
> eases later maintenance.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/net/virtio_net.c | 142 +++++++++++++++++++++------------------
>  1 file changed, 75 insertions(+), 67 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index bb426958cdd4..72b9d6ee4024 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -301,6 +301,15 @@ struct padded_vnet_hdr {
>         char padding[12];
>  };
>
> +enum {
> +       /* xdp pass */
> +       VIRTNET_XDP_RES_PASS,
> +       /* drop packet. the caller needs to release the page. */
> +       VIRTNET_XDP_RES_DROP,
> +       /* packet is consumed by xdp. the caller needs to do nothing. */
> +       VIRTNET_XDP_RES_CONSUMED,
> +};

I'd prefer this to be done on top unless it is a must. I don't see
any advantage in introducing this: it's a partial mapping of the XDP
actions, and it needs to be extended whenever the XDP actions are
extended. (And we already have VIRTIO_XDP_REDIR and VIRTIO_XDP_TX ...)

> +
>  static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
>  static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
>
> @@ -789,6 +798,59 @@ static int virtnet_xdp_xmit(struct net_device *dev,
>         return ret;
>  }
>
> +static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> +                              struct net_device *dev,
> +                              unsigned int *xdp_xmit,
> +                              struct virtnet_rq_stats *stats)
> +{
> +       struct xdp_frame *xdpf;
> +       int err;
> +       u32 act;
> +
> +       act = bpf_prog_run_xdp(xdp_prog, xdp);
> +       stats->xdp_packets++;
> +
> +       switch (act) {
> +       case XDP_PASS:
> +               return VIRTNET_XDP_RES_PASS;
> +
> +       case XDP_TX:
> +               stats->xdp_tx++;
> +               xdpf = xdp_convert_buff_to_frame(xdp);
> +               if (unlikely(!xdpf))
> +                       return VIRTNET_XDP_RES_DROP;
> +
> +               err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> +               if (unlikely(!err)) {
> +                       xdp_return_frame_rx_napi(xdpf);
> +               } else if (unlikely(err < 0)) {
> +                       trace_xdp_exception(dev, xdp_prog, act);
> +                       return VIRTNET_XDP_RES_DROP;
> +               }
> +
> +               *xdp_xmit |= VIRTIO_XDP_TX;
> +               return VIRTNET_XDP_RES_CONSUMED;
> +
> +       case XDP_REDIRECT:
> +               stats->xdp_redirects++;
> +               err = xdp_do_redirect(dev, xdp, xdp_prog);
> +               if (err)
> +                       return VIRTNET_XDP_RES_DROP;
> +
> +               *xdp_xmit |= VIRTIO_XDP_REDIR;
> +               return VIRTNET_XDP_RES_CONSUMED;
> +
> +       default:
> +               bpf_warn_invalid_xdp_action(dev, xdp_prog, act);
> +               fallthrough;
> +       case XDP_ABORTED:
> +               trace_xdp_exception(dev, xdp_prog, act);
> +               fallthrough;
> +       case XDP_DROP:
> +               return VIRTNET_XDP_RES_DROP;
> +       }
> +}
> +
>  static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
>  {
>         return vi->xdp_enabled ? VIRTIO_XDP_HEADROOM : 0;
> @@ -876,7 +938,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
>         struct page *page = virt_to_head_page(buf);
>         unsigned int delta = 0;
>         struct page *xdp_page;
> -       int err;
>         unsigned int metasize = 0;
>
>         len -= vi->hdr_len;
> @@ -898,7 +959,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
>         xdp_prog = rcu_dereference(rq->xdp_prog);
>         if (xdp_prog) {
>                 struct virtio_net_hdr_mrg_rxbuf *hdr = buf + header_offset;
> -               struct xdp_frame *xdpf;
>                 struct xdp_buff xdp;
>                 void *orig_data;
>                 u32 act;
> @@ -931,46 +991,22 @@ static struct sk_buff *receive_small(struct net_device *dev,
>                 xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len,
>                                  xdp_headroom, len, true);
>                 orig_data = xdp.data;
> -               act = bpf_prog_run_xdp(xdp_prog, &xdp);
> -               stats->xdp_packets++;
> +
> +               act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
>
>                 switch (act) {
> -               case XDP_PASS:
> +               case VIRTNET_XDP_RES_PASS:
>                         /* Recalculate length in case bpf program changed it */
>                         delta = orig_data - xdp.data;
>                         len = xdp.data_end - xdp.data;
>                         metasize = xdp.data - xdp.data_meta;
>                         break;
> -               case XDP_TX:
> -                       stats->xdp_tx++;
> -                       xdpf = xdp_convert_buff_to_frame(&xdp);
> -                       if (unlikely(!xdpf))
> -                               goto err_xdp;
> -                       err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> -                       if (unlikely(!err)) {
> -                               xdp_return_frame_rx_napi(xdpf);
> -                       } else if (unlikely(err < 0)) {
> -                               trace_xdp_exception(vi->dev, xdp_prog, act);
> -                               goto err_xdp;
> -                       }
> -                       *xdp_xmit |= VIRTIO_XDP_TX;
> -                       rcu_read_unlock();
> -                       goto xdp_xmit;
> -               case XDP_REDIRECT:
> -                       stats->xdp_redirects++;
> -                       err = xdp_do_redirect(dev, &xdp, xdp_prog);
> -                       if (err)
> -                               goto err_xdp;
> -                       *xdp_xmit |= VIRTIO_XDP_REDIR;
> +
> +               case VIRTNET_XDP_RES_CONSUMED:
>                         rcu_read_unlock();
>                         goto xdp_xmit;
> -               default:
> -                       bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
> -                       fallthrough;
> -               case XDP_ABORTED:
> -                       trace_xdp_exception(vi->dev, xdp_prog, act);
> -                       goto err_xdp;
> -               case XDP_DROP:
> +
> +               case VIRTNET_XDP_RES_DROP:
>                         goto err_xdp;
>                 }
>         }
> @@ -1277,7 +1313,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>         if (xdp_prog) {
>                 unsigned int xdp_frags_truesz = 0;
>                 struct skb_shared_info *shinfo;
> -               struct xdp_frame *xdpf;
>                 struct page *xdp_page;
>                 struct xdp_buff xdp;
>                 void *data;
> @@ -1294,49 +1329,22 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>                 if (unlikely(err))
>                         goto err_xdp_frags;
>
> -               act = bpf_prog_run_xdp(xdp_prog, &xdp);
> -               stats->xdp_packets++;
> +               act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
>
>                 switch (act) {
> -               case XDP_PASS:
> +               case VIRTNET_XDP_RES_PASS:
>                         head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
>                         if (unlikely(!head_skb))
>                                 goto err_xdp_frags;
>
>                         rcu_read_unlock();
>                         return head_skb;
> -               case XDP_TX:
> -                       stats->xdp_tx++;
> -                       xdpf = xdp_convert_buff_to_frame(&xdp);
> -                       if (unlikely(!xdpf)) {
> -                               netdev_dbg(dev, "convert buff to frame failed for xdp\n");

Nit: This debug is lost after the conversion.

Thanks

> -                               goto err_xdp_frags;
> -                       }
> -                       err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> -                       if (unlikely(!err)) {
> -                               xdp_return_frame_rx_napi(xdpf);
> -                       } else if (unlikely(err < 0)) {
> -                               trace_xdp_exception(vi->dev, xdp_prog, act);
> -                               goto err_xdp_frags;
> -                       }
> -                       *xdp_xmit |= VIRTIO_XDP_TX;
> -                       rcu_read_unlock();
> -                       goto xdp_xmit;
> -               case XDP_REDIRECT:
> -                       stats->xdp_redirects++;
> -                       err = xdp_do_redirect(dev, &xdp, xdp_prog);
> -                       if (err)
> -                               goto err_xdp_frags;
> -                       *xdp_xmit |= VIRTIO_XDP_REDIR;
> +
> +               case VIRTNET_XDP_RES_CONSUMED:
>                         rcu_read_unlock();
>                         goto xdp_xmit;
> -               default:
> -                       bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
> -                       fallthrough;
> -               case XDP_ABORTED:
> -                       trace_xdp_exception(vi->dev, xdp_prog, act);
> -                       fallthrough;
> -               case XDP_DROP:
> +
> +               case VIRTNET_XDP_RES_DROP:
>                         goto err_xdp_frags;
>                 }
>  err_xdp_frags:
> --
> 2.32.0.3.g01195cf9f
>


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 4/8] virtio_net: separate the logic of freeing xdp shinfo
  2023-03-28 12:04 ` [PATCH net-next 4/8] virtio_net: separate the logic of freeing xdp shinfo Xuan Zhuo
@ 2023-04-03  2:44   ` Jason Wang
  0 siblings, 0 replies; 41+ messages in thread
From: Jason Wang @ 2023-04-03  2:44 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, Michael S. Tsirkin, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

On Tue, Mar 28, 2023 at 8:04 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> This patch introduces a new function that releases the
> xdp shinfo. A subsequent patch will reuse this function.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks

> ---
>  drivers/net/virtio_net.c | 27 ++++++++++++++++-----------
>  1 file changed, 16 insertions(+), 11 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 72b9d6ee4024..09aed60e2f51 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -798,6 +798,21 @@ static int virtnet_xdp_xmit(struct net_device *dev,
>         return ret;
>  }
>
> +static void put_xdp_frags(struct xdp_buff *xdp)
> +{
> +       struct skb_shared_info *shinfo;
> +       struct page *xdp_page;
> +       int i;
> +
> +       if (xdp_buff_has_frags(xdp)) {
> +               shinfo = xdp_get_shared_info_from_buff(xdp);
> +               for (i = 0; i < shinfo->nr_frags; i++) {
> +                       xdp_page = skb_frag_page(&shinfo->frags[i]);
> +                       put_page(xdp_page);
> +               }
> +       }
> +}
> +
>  static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
>                                struct net_device *dev,
>                                unsigned int *xdp_xmit,
> @@ -1312,12 +1327,9 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>         xdp_prog = rcu_dereference(rq->xdp_prog);
>         if (xdp_prog) {
>                 unsigned int xdp_frags_truesz = 0;
> -               struct skb_shared_info *shinfo;
> -               struct page *xdp_page;
>                 struct xdp_buff xdp;
>                 void *data;
>                 u32 act;
> -               int i;
>
>                 data = mergeable_xdp_prepare(vi, rq, xdp_prog, ctx, &frame_sz, &num_buf, &page,
>                                              offset, &len, hdr);
> @@ -1348,14 +1360,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>                         goto err_xdp_frags;
>                 }
>  err_xdp_frags:
> -               if (xdp_buff_has_frags(&xdp)) {
> -                       shinfo = xdp_get_shared_info_from_buff(&xdp);
> -                       for (i = 0; i < shinfo->nr_frags; i++) {
> -                               xdp_page = skb_frag_page(&shinfo->frags[i]);
> -                               put_page(xdp_page);
> -                       }
> -               }
> -
> +               put_xdp_frags(&xdp);
>                 goto err_xdp;
>         }
>         rcu_read_unlock();
> --
> 2.32.0.3.g01195cf9f
>


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 5/8] virtio_net: separate the logic of freeing the rest mergeable buf
  2023-03-28 12:04 ` [PATCH net-next 5/8] virtio_net: separate the logic of freeing the rest mergeable buf Xuan Zhuo
@ 2023-04-03  2:46   ` Jason Wang
  0 siblings, 0 replies; 41+ messages in thread
From: Jason Wang @ 2023-04-03  2:46 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, Michael S. Tsirkin, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

On Tue, Mar 28, 2023 at 8:04 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> This patch introduces a new function that frees the remaining mergeable
> buffers. A subsequent patch will reuse this function.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>

Acked-by: Jason Wang <jasowang@redhat.com>

Thanks

> ---
>  drivers/net/virtio_net.c | 36 ++++++++++++++++++++++++------------
>  1 file changed, 24 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 09aed60e2f51..a3f2bcb3db27 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -1076,6 +1076,28 @@ static struct sk_buff *receive_big(struct net_device *dev,
>         return NULL;
>  }
>
> +static void mergeable_buf_free(struct receive_queue *rq, int num_buf,
> +                              struct net_device *dev,
> +                              struct virtnet_rq_stats *stats)
> +{
> +       struct page *page;
> +       void *buf;
> +       int len;
> +
> +       while (num_buf-- > 1) {
> +               buf = virtqueue_get_buf(rq->vq, &len);
> +               if (unlikely(!buf)) {
> +                       pr_debug("%s: rx error: %d buffers missing\n",
> +                                dev->name, num_buf);
> +                       dev->stats.rx_length_errors++;
> +                       break;
> +               }
> +               stats->bytes += len;
> +               page = virt_to_head_page(buf);
> +               put_page(page);
> +       }
> +}
> +
>  /* Why not use xdp_build_skb_from_frame() ?
>   * XDP core assumes that xdp frags are PAGE_SIZE in length, while in
>   * virtio-net there are 2 points that do not match its requirements:
> @@ -1436,18 +1458,8 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>         stats->xdp_drops++;
>  err_skb:
>         put_page(page);
> -       while (num_buf-- > 1) {
> -               buf = virtqueue_get_buf(rq->vq, &len);
> -               if (unlikely(!buf)) {
> -                       pr_debug("%s: rx error: %d buffers missing\n",
> -                                dev->name, num_buf);
> -                       dev->stats.rx_length_errors++;
> -                       break;
> -               }
> -               stats->bytes += len;
> -               page = virt_to_head_page(buf);
> -               put_page(page);
> -       }
> +       mergeable_buf_free(rq, num_buf, dev, stats);
> +
>  err_buf:
>         stats->drops++;
>         dev_kfree_skb(head_skb);
> --
> 2.32.0.3.g01195cf9f
>


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 6/8] virtio_net: auto release xdp shinfo
  2023-03-28 12:04 ` [PATCH net-next 6/8] virtio_net: auto release xdp shinfo Xuan Zhuo
@ 2023-04-03  3:18   ` Jason Wang
  2023-04-03  4:17     ` Xuan Zhuo
  0 siblings, 1 reply; 41+ messages in thread
From: Jason Wang @ 2023-04-03  3:18 UTC (permalink / raw)
  To: Xuan Zhuo, netdev
  Cc: Michael S. Tsirkin, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf


在 2023/3/28 20:04, Xuan Zhuo 写道:
> virtnet_build_xdp_buff_mrg() and virtnet_xdp_handler() now automatically


I think you meant virtnet_xdp_handler() actually?


> release the xdp shinfo, so the caller no longer needs to worry about it.
>
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>   drivers/net/virtio_net.c | 29 +++++++++++++++++------------
>   1 file changed, 17 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index a3f2bcb3db27..136131a7868a 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -833,14 +833,14 @@ static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
>   		stats->xdp_tx++;
>   		xdpf = xdp_convert_buff_to_frame(xdp);
>   		if (unlikely(!xdpf))
> -			return VIRTNET_XDP_RES_DROP;
> +			goto drop;
>   
>   		err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
>   		if (unlikely(!err)) {
>   			xdp_return_frame_rx_napi(xdpf);
>   		} else if (unlikely(err < 0)) {
>   			trace_xdp_exception(dev, xdp_prog, act);
> -			return VIRTNET_XDP_RES_DROP;
> +			goto drop;
>   		}
>   
>   		*xdp_xmit |= VIRTIO_XDP_TX;
> @@ -850,7 +850,7 @@ static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
>   		stats->xdp_redirects++;
>   		err = xdp_do_redirect(dev, xdp, xdp_prog);
>   		if (err)
> -			return VIRTNET_XDP_RES_DROP;
> +			goto drop;
>   
>   		*xdp_xmit |= VIRTIO_XDP_REDIR;
>   		return VIRTNET_XDP_RES_CONSUMED;
> @@ -862,8 +862,12 @@ static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
>   		trace_xdp_exception(dev, xdp_prog, act);
>   		fallthrough;
>   	case XDP_DROP:
> -		return VIRTNET_XDP_RES_DROP;
> +		goto drop;


This goto is kind of meaningless.

Thanks


>   	}
> +
> +drop:
> +	put_xdp_frags(xdp);
> +	return VIRTNET_XDP_RES_DROP;
>   }
>   
>   static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
> @@ -1199,7 +1203,7 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
>   				 dev->name, *num_buf,
>   				 virtio16_to_cpu(vi->vdev, hdr->num_buffers));
>   			dev->stats.rx_length_errors++;
> -			return -EINVAL;
> +			goto err;
>   		}
>   
>   		stats->bytes += len;
> @@ -1218,7 +1222,7 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
>   			pr_debug("%s: rx error: len %u exceeds truesize %lu\n",
>   				 dev->name, len, (unsigned long)(truesize - room));
>   			dev->stats.rx_length_errors++;
> -			return -EINVAL;
> +			goto err;
>   		}
>   
>   		frag = &shinfo->frags[shinfo->nr_frags++];
> @@ -1233,6 +1237,10 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
>   
>   	*xdp_frags_truesize = xdp_frags_truesz;
>   	return 0;
> +
> +err:
> +	put_xdp_frags(xdp);
> +	return -EINVAL;
>   }
>   
>   static void *mergeable_xdp_prepare(struct virtnet_info *vi,
> @@ -1361,7 +1369,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>   		err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz,
>   						 &num_buf, &xdp_frags_truesz, stats);
>   		if (unlikely(err))
> -			goto err_xdp_frags;
> +			goto err_xdp;
>   
>   		act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
>   
> @@ -1369,7 +1377,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>   		case VIRTNET_XDP_RES_PASS:
>   			head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
>   			if (unlikely(!head_skb))
> -				goto err_xdp_frags;
> +				goto err_xdp;
>   
>   			rcu_read_unlock();
>   			return head_skb;
> @@ -1379,11 +1387,8 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>   			goto xdp_xmit;
>   
>   		case VIRTNET_XDP_RES_DROP:
> -			goto err_xdp_frags;
> +			goto err_xdp;
>   		}
> -err_xdp_frags:
> -		put_xdp_frags(&xdp);
> -		goto err_xdp;
>   	}
>   	rcu_read_unlock();
>   


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 1/8] virtio_net: mergeable xdp: put old page immediately
  2023-03-31  9:01   ` Jason Wang
@ 2023-04-03  4:11     ` Xuan Zhuo
  0 siblings, 0 replies; 41+ messages in thread
From: Xuan Zhuo @ 2023-04-03  4:11 UTC (permalink / raw)
  To: Jason Wang
  Cc: netdev, Michael S. Tsirkin, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

On Fri, 31 Mar 2023 17:01:54 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Tue, Mar 28, 2023 at 8:04 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > In the mergeable XDP implementation of virtio-net, the code always has
> > to check whether two pages are in use and pick one of them to release.
> > This complicates the handling of each XDP action and is easy to get
> > wrong.
> >
> > Throughout the process, we follow these principles:
> > * If xdp_page is used (PASS, TX, Redirect), then we release the old
> >   page.
> > * In the drop case, we release both pages. The old page obtained from
> >   buf is released inside err_xdp, and xdp_page needs to be released by
> >   us.
> >
> > But in fact, when we allocate a new page, we can release the old page
> > immediately. Then only one page is in use, and we only need to release
> > the new page in the drop case. On the drop path, err_xdp releases the
> > variable "page", so we only need to make "page" point to the new
> > xdp_page in advance.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  drivers/net/virtio_net.c | 15 ++++++---------
> >  1 file changed, 6 insertions(+), 9 deletions(-)
> >
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index e2560b6f7980..4d2bf1ce0730 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -1245,6 +1245,9 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> >                         if (!xdp_page)
> >                                 goto err_xdp;
> >                         offset = VIRTIO_XDP_HEADROOM;
> > +
> > +                       put_page(page);
> > +                       page = xdp_page;
> >                 } else if (unlikely(headroom < virtnet_get_headroom(vi))) {
> >                         xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
> >                                                   sizeof(struct skb_shared_info));
> > @@ -1259,6 +1262,9 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> >                                page_address(page) + offset, len);
> >                         frame_sz = PAGE_SIZE;
> >                         offset = VIRTIO_XDP_HEADROOM;
> > +
> > +                       put_page(page);
> > +                       page = xdp_page;
> >                 } else {
> >                         xdp_page = page;
> >                 }
>
> While at it, I would go a little further: just remove the above
> assignment, then we can use:
>
>                 data = page_address(page) + offset;
>
> Which seems cleaner?

I will try.

Thanks.


>
> Thanks
>
> > @@ -1278,8 +1284,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> >                         if (unlikely(!head_skb))
> >                                 goto err_xdp_frags;
> >
> > -                       if (unlikely(xdp_page != page))
> > -                               put_page(page);
> >                         rcu_read_unlock();
> >                         return head_skb;
> >                 case XDP_TX:
> > @@ -1297,8 +1301,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> >                                 goto err_xdp_frags;
> >                         }
> >                         *xdp_xmit |= VIRTIO_XDP_TX;
> > -                       if (unlikely(xdp_page != page))
> > -                               put_page(page);
> >                         rcu_read_unlock();
> >                         goto xdp_xmit;
> >                 case XDP_REDIRECT:
> > @@ -1307,8 +1309,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> >                         if (err)
> >                                 goto err_xdp_frags;
> >                         *xdp_xmit |= VIRTIO_XDP_REDIR;
> > -                       if (unlikely(xdp_page != page))
> > -                               put_page(page);
> >                         rcu_read_unlock();
> >                         goto xdp_xmit;
> >                 default:
> > @@ -1321,9 +1321,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> >                         goto err_xdp_frags;
> >                 }
> >  err_xdp_frags:
> > -               if (unlikely(xdp_page != page))
> > -                       __free_pages(xdp_page, 0);
> > -
> >                 if (xdp_buff_has_frags(&xdp)) {
> >                         shinfo = xdp_get_shared_info_from_buff(&xdp);
> >                         for (i = 0; i < shinfo->nr_frags; i++) {
> > --
> > 2.32.0.3.g01195cf9f
> >
>

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 2/8] virtio_net: mergeable xdp: introduce mergeable_xdp_prepare
  2023-03-31  9:14   ` Jason Wang
@ 2023-04-03  4:11     ` Xuan Zhuo
  0 siblings, 0 replies; 41+ messages in thread
From: Xuan Zhuo @ 2023-04-03  4:11 UTC (permalink / raw)
  To: Jason Wang
  Cc: Michael S. Tsirkin, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf,
	netdev

On Fri, 31 Mar 2023 17:14:33 +0800, Jason Wang <jasowang@redhat.com> wrote:
>
> 在 2023/3/28 20:04, Xuan Zhuo 写道:
> > Separate the logic of preparing for XDP out of receive_mergeable().
> >
> > The purpose of this is to simplify the logic of execution of XDP.
> >
> > The main logic here is that when the headroom is insufficient, we need to
> > allocate a new page and recalculate the offset. Note that if a new page is
> > allocated, the variable page will refer to the new page.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >   drivers/net/virtio_net.c | 135 ++++++++++++++++++++++-----------------
> >   1 file changed, 77 insertions(+), 58 deletions(-)
> >
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index 4d2bf1ce0730..bb426958cdd4 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -1162,6 +1162,79 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
> >   	return 0;
> >   }
> >
> > +static void *mergeable_xdp_prepare(struct virtnet_info *vi,
> > +				   struct receive_queue *rq,
> > +				   struct bpf_prog *xdp_prog,
> > +				   void *ctx,
> > +				   unsigned int *frame_sz,
> > +				   int *num_buf,
> > +				   struct page **page,
> > +				   int offset,
> > +				   unsigned int *len,
> > +				   struct virtio_net_hdr_mrg_rxbuf *hdr)
> > +{
> > +	unsigned int truesize = mergeable_ctx_to_truesize(ctx);
> > +	unsigned int headroom = mergeable_ctx_to_headroom(ctx);
> > +	struct page *xdp_page;
> > +	unsigned int xdp_room;
> > +
> > +	/* Transient failure which in theory could occur if
> > +	 * in-flight packets from before XDP was enabled reach
> > +	 * the receive path after XDP is loaded.
> > +	 */
> > +	if (unlikely(hdr->hdr.gso_type))
> > +		return NULL;
> > +
> > +	/* Now XDP core assumes frag size is PAGE_SIZE, but buffers
> > +	 * with headroom may add hole in truesize, which
> > +	 * make their length exceed PAGE_SIZE. So we disabled the
> > +	 * hole mechanism for xdp. See add_recvbuf_mergeable().
> > +	 */
> > +	*frame_sz = truesize;
> > +
> > +	/* This happens when headroom is not enough because
> > +	 * of the buffer was prefilled before XDP is set.
> > +	 * This should only happen for the first several packets.
> > +	 * In fact, vq reset can be used here to help us clean up
> > +	 * the prefilled buffers, but many existing devices do not
> > +	 * support it, and we don't want to bother users who are
> > +	 * using xdp normally.
> > +	 */
> > +	if (!xdp_prog->aux->xdp_has_frags &&
> > +	    (*num_buf > 1 || headroom < virtnet_get_headroom(vi))) {
> > +		/* linearize data for XDP */
> > +		xdp_page = xdp_linearize_page(rq, num_buf,
> > +					      *page, offset,
> > +					      VIRTIO_XDP_HEADROOM,
> > +					      len);
> > +
> > +		if (!xdp_page)
> > +			return NULL;
> > +	} else if (unlikely(headroom < virtnet_get_headroom(vi))) {
> > +		xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
> > +					  sizeof(struct skb_shared_info));
> > +		if (*len + xdp_room > PAGE_SIZE)
> > +			return NULL;
> > +
> > +		xdp_page = alloc_page(GFP_ATOMIC);
> > +		if (!xdp_page)
> > +			return NULL;
> > +
> > +		memcpy(page_address(xdp_page) + VIRTIO_XDP_HEADROOM,
> > +		       page_address(*page) + offset, *len);
> > +	} else {
> > +		return page_address(*page) + offset;
>
>
> This makes the code a little harder to read than the original code.
>
> Why not do a verbatim moving without introducing new logic? (Or
> introducing new logic on top?)


Yes. Will fix.

Thanks.

>
> Thanks
>
>
> > +	}
> > +
> > +	*frame_sz = PAGE_SIZE;
> > +
> > +	put_page(*page);
> > +
> > +	*page = xdp_page;
> > +
> > +	return page_address(xdp_page) + VIRTIO_XDP_HEADROOM;
> > +}
> > +
> >   static struct sk_buff *receive_mergeable(struct net_device *dev,
> >   					 struct virtnet_info *vi,
> >   					 struct receive_queue *rq,
> > @@ -1181,7 +1254,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> >   	unsigned int headroom = mergeable_ctx_to_headroom(ctx);
> >   	unsigned int tailroom = headroom ? sizeof(struct skb_shared_info) : 0;
> >   	unsigned int room = SKB_DATA_ALIGN(headroom + tailroom);
> > -	unsigned int frame_sz, xdp_room;
> > +	unsigned int frame_sz;
> >   	int err;
> >
> >   	head_skb = NULL;
> > @@ -1211,65 +1284,11 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> >   		u32 act;
> >   		int i;
> >
> > -		/* Transient failure which in theory could occur if
> > -		 * in-flight packets from before XDP was enabled reach
> > -		 * the receive path after XDP is loaded.
> > -		 */
> > -		if (unlikely(hdr->hdr.gso_type))
> > +		data = mergeable_xdp_prepare(vi, rq, xdp_prog, ctx, &frame_sz, &num_buf, &page,
> > +					     offset, &len, hdr);
> > +		if (!data)
> >   			goto err_xdp;
> >
> > -		/* Now XDP core assumes frag size is PAGE_SIZE, but buffers
> > -		 * with headroom may add hole in truesize, which
> > -		 * make their length exceed PAGE_SIZE. So we disabled the
> > -		 * hole mechanism for xdp. See add_recvbuf_mergeable().
> > -		 */
> > -		frame_sz = truesize;
> > -
> > -		/* This happens when headroom is not enough because
> > -		 * of the buffer was prefilled before XDP is set.
> > -		 * This should only happen for the first several packets.
> > -		 * In fact, vq reset can be used here to help us clean up
> > -		 * the prefilled buffers, but many existing devices do not
> > -		 * support it, and we don't want to bother users who are
> > -		 * using xdp normally.
> > -		 */
> > -		if (!xdp_prog->aux->xdp_has_frags &&
> > -		    (num_buf > 1 || headroom < virtnet_get_headroom(vi))) {
> > -			/* linearize data for XDP */
> > -			xdp_page = xdp_linearize_page(rq, &num_buf,
> > -						      page, offset,
> > -						      VIRTIO_XDP_HEADROOM,
> > -						      &len);
> > -			frame_sz = PAGE_SIZE;
> > -
> > -			if (!xdp_page)
> > -				goto err_xdp;
> > -			offset = VIRTIO_XDP_HEADROOM;
> > -
> > -			put_page(page);
> > -			page = xdp_page;
> > -		} else if (unlikely(headroom < virtnet_get_headroom(vi))) {
> > -			xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
> > -						  sizeof(struct skb_shared_info));
> > -			if (len + xdp_room > PAGE_SIZE)
> > -				goto err_xdp;
> > -
> > -			xdp_page = alloc_page(GFP_ATOMIC);
> > -			if (!xdp_page)
> > -				goto err_xdp;
> > -
> > -			memcpy(page_address(xdp_page) + VIRTIO_XDP_HEADROOM,
> > -			       page_address(page) + offset, len);
> > -			frame_sz = PAGE_SIZE;
> > -			offset = VIRTIO_XDP_HEADROOM;
> > -
> > -			put_page(page);
> > -			page = xdp_page;
> > -		} else {
> > -			xdp_page = page;
> > -		}
> > -
> > -		data = page_address(xdp_page) + offset;
> >   		err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz,
> >   						 &num_buf, &xdp_frags_truesz, stats);
> >   		if (unlikely(err))
>

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 3/8] virtio_net: introduce virtnet_xdp_handler() to seprate the logic of run xdp
  2023-04-03  2:43   ` Jason Wang
@ 2023-04-03  4:12     ` Xuan Zhuo
  2023-04-04  5:04       ` Jason Wang
  0 siblings, 1 reply; 41+ messages in thread
From: Xuan Zhuo @ 2023-04-03  4:12 UTC (permalink / raw)
  To: Jason Wang
  Cc: netdev, Michael S. Tsirkin, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

On Mon, 3 Apr 2023 10:43:03 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Tue, Mar 28, 2023 at 8:04 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > At present, we have two similar pieces of logic that run the XDP prog.
> >
> > Therefore, this patch separates the code that executes XDP, which is
> > conducive to later maintenance.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  drivers/net/virtio_net.c | 142 +++++++++++++++++++++------------------
> >  1 file changed, 75 insertions(+), 67 deletions(-)
> >
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index bb426958cdd4..72b9d6ee4024 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -301,6 +301,15 @@ struct padded_vnet_hdr {
> >         char padding[12];
> >  };
> >
> > +enum {
> > +       /* xdp pass */
> > +       VIRTNET_XDP_RES_PASS,
> > +       /* drop packet. the caller needs to release the page. */
> > +       VIRTNET_XDP_RES_DROP,
> > +       /* packet is consumed by xdp. the caller needs to do nothing. */
> > +       VIRTNET_XDP_RES_CONSUMED,
> > +};
>
> I'd prefer this to be done on top unless it is a must. But I don't see
> any advantage of introducing this, it's partial mapping of XDP action
> and it needs to be extended when XDP action is extended. (And we've
> already had: VIRTIO_XDP_REDIR and VIRTIO_XDP_TX ...)

No, these are the three states of the buffer after XDP processing.

* PASS: go on to build an skb
* DROP: we should release the buffer
* CONSUMED: the xdp prog used the buffer, we do nothing

The latter two are not particularly related to the XDP actions. And this does
not need to be extended when the XDP actions are extended. At least I have not
thought of such a situation.


>
> > +
> >  static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
> >  static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
> >
> > @@ -789,6 +798,59 @@ static int virtnet_xdp_xmit(struct net_device *dev,
> >         return ret;
> >  }
> >
> > +static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> > +                              struct net_device *dev,
> > +                              unsigned int *xdp_xmit,
> > +                              struct virtnet_rq_stats *stats)
> > +{
> > +       struct xdp_frame *xdpf;
> > +       int err;
> > +       u32 act;
> > +
> > +       act = bpf_prog_run_xdp(xdp_prog, xdp);
> > +       stats->xdp_packets++;
> > +
> > +       switch (act) {
> > +       case XDP_PASS:
> > +               return VIRTNET_XDP_RES_PASS;
> > +
> > +       case XDP_TX:
> > +               stats->xdp_tx++;
> > +               xdpf = xdp_convert_buff_to_frame(xdp);
> > +               if (unlikely(!xdpf))
> > +                       return VIRTNET_XDP_RES_DROP;
> > +
> > +               err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > +               if (unlikely(!err)) {
> > +                       xdp_return_frame_rx_napi(xdpf);
> > +               } else if (unlikely(err < 0)) {
> > +                       trace_xdp_exception(dev, xdp_prog, act);
> > +                       return VIRTNET_XDP_RES_DROP;
> > +               }
> > +
> > +               *xdp_xmit |= VIRTIO_XDP_TX;
> > +               return VIRTNET_XDP_RES_CONSUMED;
> > +
> > +       case XDP_REDIRECT:
> > +               stats->xdp_redirects++;
> > +               err = xdp_do_redirect(dev, xdp, xdp_prog);
> > +               if (err)
> > +                       return VIRTNET_XDP_RES_DROP;
> > +
> > +               *xdp_xmit |= VIRTIO_XDP_REDIR;
> > +               return VIRTNET_XDP_RES_CONSUMED;
> > +
> > +       default:
> > +               bpf_warn_invalid_xdp_action(dev, xdp_prog, act);
> > +               fallthrough;
> > +       case XDP_ABORTED:
> > +               trace_xdp_exception(dev, xdp_prog, act);
> > +               fallthrough;
> > +       case XDP_DROP:
> > +               return VIRTNET_XDP_RES_DROP;
> > +       }
> > +}
> > +
> >  static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
> >  {
> >         return vi->xdp_enabled ? VIRTIO_XDP_HEADROOM : 0;
> > @@ -876,7 +938,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
> >         struct page *page = virt_to_head_page(buf);
> >         unsigned int delta = 0;
> >         struct page *xdp_page;
> > -       int err;
> >         unsigned int metasize = 0;
> >
> >         len -= vi->hdr_len;
> > @@ -898,7 +959,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
> >         xdp_prog = rcu_dereference(rq->xdp_prog);
> >         if (xdp_prog) {
> >                 struct virtio_net_hdr_mrg_rxbuf *hdr = buf + header_offset;
> > -               struct xdp_frame *xdpf;
> >                 struct xdp_buff xdp;
> >                 void *orig_data;
> >                 u32 act;
> > @@ -931,46 +991,22 @@ static struct sk_buff *receive_small(struct net_device *dev,
> >                 xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len,
> >                                  xdp_headroom, len, true);
> >                 orig_data = xdp.data;
> > -               act = bpf_prog_run_xdp(xdp_prog, &xdp);
> > -               stats->xdp_packets++;
> > +
> > +               act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
> >
> >                 switch (act) {
> > -               case XDP_PASS:
> > +               case VIRTNET_XDP_RES_PASS:
> >                         /* Recalculate length in case bpf program changed it */
> >                         delta = orig_data - xdp.data;
> >                         len = xdp.data_end - xdp.data;
> >                         metasize = xdp.data - xdp.data_meta;
> >                         break;
> > -               case XDP_TX:
> > -                       stats->xdp_tx++;
> > -                       xdpf = xdp_convert_buff_to_frame(&xdp);
> > -                       if (unlikely(!xdpf))
> > -                               goto err_xdp;
> > -                       err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > -                       if (unlikely(!err)) {
> > -                               xdp_return_frame_rx_napi(xdpf);
> > -                       } else if (unlikely(err < 0)) {
> > -                               trace_xdp_exception(vi->dev, xdp_prog, act);
> > -                               goto err_xdp;
> > -                       }
> > -                       *xdp_xmit |= VIRTIO_XDP_TX;
> > -                       rcu_read_unlock();
> > -                       goto xdp_xmit;
> > -               case XDP_REDIRECT:
> > -                       stats->xdp_redirects++;
> > -                       err = xdp_do_redirect(dev, &xdp, xdp_prog);
> > -                       if (err)
> > -                               goto err_xdp;
> > -                       *xdp_xmit |= VIRTIO_XDP_REDIR;
> > +
> > +               case VIRTNET_XDP_RES_CONSUMED:
> >                         rcu_read_unlock();
> >                         goto xdp_xmit;
> > -               default:
> > -                       bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
> > -                       fallthrough;
> > -               case XDP_ABORTED:
> > -                       trace_xdp_exception(vi->dev, xdp_prog, act);
> > -                       goto err_xdp;
> > -               case XDP_DROP:
> > +
> > +               case VIRTNET_XDP_RES_DROP:
> >                         goto err_xdp;
> >                 }
> >         }
> > @@ -1277,7 +1313,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> >         if (xdp_prog) {
> >                 unsigned int xdp_frags_truesz = 0;
> >                 struct skb_shared_info *shinfo;
> > -               struct xdp_frame *xdpf;
> >                 struct page *xdp_page;
> >                 struct xdp_buff xdp;
> >                 void *data;
> > @@ -1294,49 +1329,22 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> >                 if (unlikely(err))
> >                         goto err_xdp_frags;
> >
> > -               act = bpf_prog_run_xdp(xdp_prog, &xdp);
> > -               stats->xdp_packets++;
> > +               act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
> >
> >                 switch (act) {
> > -               case XDP_PASS:
> > +               case VIRTNET_XDP_RES_PASS:
> >                         head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
> >                         if (unlikely(!head_skb))
> >                                 goto err_xdp_frags;
> >
> >                         rcu_read_unlock();
> >                         return head_skb;
> > -               case XDP_TX:
> > -                       stats->xdp_tx++;
> > -                       xdpf = xdp_convert_buff_to_frame(&xdp);
> > -                       if (unlikely(!xdpf)) {
> > -                               netdev_dbg(dev, "convert buff to frame failed for xdp\n");
>
> Nit: This debug is lost after the conversion.

Will fix.

Thanks.

>
> Thanks
>
> > -                               goto err_xdp_frags;
> > -                       }
> > -                       err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > -                       if (unlikely(!err)) {
> > -                               xdp_return_frame_rx_napi(xdpf);
> > -                       } else if (unlikely(err < 0)) {
> > -                               trace_xdp_exception(vi->dev, xdp_prog, act);
> > -                               goto err_xdp_frags;
> > -                       }
> > -                       *xdp_xmit |= VIRTIO_XDP_TX;
> > -                       rcu_read_unlock();
> > -                       goto xdp_xmit;
> > -               case XDP_REDIRECT:
> > -                       stats->xdp_redirects++;
> > -                       err = xdp_do_redirect(dev, &xdp, xdp_prog);
> > -                       if (err)
> > -                               goto err_xdp_frags;
> > -                       *xdp_xmit |= VIRTIO_XDP_REDIR;
> > +
> > +               case VIRTNET_XDP_RES_CONSUMED:
> >                         rcu_read_unlock();
> >                         goto xdp_xmit;
> > -               default:
> > -                       bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
> > -                       fallthrough;
> > -               case XDP_ABORTED:
> > -                       trace_xdp_exception(vi->dev, xdp_prog, act);
> > -                       fallthrough;
> > -               case XDP_DROP:
> > +
> > +               case VIRTNET_XDP_RES_DROP:
> >                         goto err_xdp_frags;
> >                 }
> >  err_xdp_frags:
> > --
> > 2.32.0.3.g01195cf9f
> >
>

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 6/8] virtio_net: auto release xdp shinfo
  2023-04-03  3:18   ` Jason Wang
@ 2023-04-03  4:17     ` Xuan Zhuo
  0 siblings, 0 replies; 41+ messages in thread
From: Xuan Zhuo @ 2023-04-03  4:17 UTC (permalink / raw)
  To: Jason Wang
  Cc: Michael S. Tsirkin, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf,
	netdev

On Mon, 3 Apr 2023 11:18:02 +0800, Jason Wang <jasowang@redhat.com> wrote:
>
> 在 2023/3/28 20:04, Xuan Zhuo 写道:
> > virtnet_build_xdp_buff_mrg() and virtnet_xdp_handler() automatically
>
>
> I think you meant virtnet_xdp_handler() actually?
>
>
> > release the xdp shinfo, so the caller no longer needs to care about it.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >   drivers/net/virtio_net.c | 29 +++++++++++++++++------------
> >   1 file changed, 17 insertions(+), 12 deletions(-)
> >
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index a3f2bcb3db27..136131a7868a 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -833,14 +833,14 @@ static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> >   		stats->xdp_tx++;
> >   		xdpf = xdp_convert_buff_to_frame(xdp);
> >   		if (unlikely(!xdpf))
> > -			return VIRTNET_XDP_RES_DROP;
> > +			goto drop;
> >
> >   		err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> >   		if (unlikely(!err)) {
> >   			xdp_return_frame_rx_napi(xdpf);
> >   		} else if (unlikely(err < 0)) {
> >   			trace_xdp_exception(dev, xdp_prog, act);
> > -			return VIRTNET_XDP_RES_DROP;
> > +			goto drop;
> >   		}
> >
> >   		*xdp_xmit |= VIRTIO_XDP_TX;
> > @@ -850,7 +850,7 @@ static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> >   		stats->xdp_redirects++;
> >   		err = xdp_do_redirect(dev, xdp, xdp_prog);
> >   		if (err)
> > -			return VIRTNET_XDP_RES_DROP;
> > +			goto drop;
> >
> >   		*xdp_xmit |= VIRTIO_XDP_REDIR;
> >   		return VIRTNET_XDP_RES_CONSUMED;
> > @@ -862,8 +862,12 @@ static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> >   		trace_xdp_exception(dev, xdp_prog, act);
> >   		fallthrough;
> >   	case XDP_DROP:
> > -		return VIRTNET_XDP_RES_DROP;
> > +		goto drop;
>
>
> This goto is kind of meaningless.

Will fix.

Thanks.


>
> Thanks
>
>
> >   	}
> > +
> > +drop:
> > +	put_xdp_frags(xdp);
> > +	return VIRTNET_XDP_RES_DROP;
> >   }
> >
> >   static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
> > @@ -1199,7 +1203,7 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
> >   				 dev->name, *num_buf,
> >   				 virtio16_to_cpu(vi->vdev, hdr->num_buffers));
> >   			dev->stats.rx_length_errors++;
> > -			return -EINVAL;
> > +			goto err;
> >   		}
> >
> >   		stats->bytes += len;
> > @@ -1218,7 +1222,7 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
> >   			pr_debug("%s: rx error: len %u exceeds truesize %lu\n",
> >   				 dev->name, len, (unsigned long)(truesize - room));
> >   			dev->stats.rx_length_errors++;
> > -			return -EINVAL;
> > +			goto err;
> >   		}
> >
> >   		frag = &shinfo->frags[shinfo->nr_frags++];
> > @@ -1233,6 +1237,10 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
> >
> >   	*xdp_frags_truesize = xdp_frags_truesz;
> >   	return 0;
> > +
> > +err:
> > +	put_xdp_frags(xdp);
> > +	return -EINVAL;
> >   }
> >
> >   static void *mergeable_xdp_prepare(struct virtnet_info *vi,
> > @@ -1361,7 +1369,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> >   		err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz,
> >   						 &num_buf, &xdp_frags_truesz, stats);
> >   		if (unlikely(err))
> > -			goto err_xdp_frags;
> > +			goto err_xdp;
> >
> >   		act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
> >
> > @@ -1369,7 +1377,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> >   		case VIRTNET_XDP_RES_PASS:
> >   			head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
> >   			if (unlikely(!head_skb))
> > -				goto err_xdp_frags;
> > +				goto err_xdp;
> >
> >   			rcu_read_unlock();
> >   			return head_skb;
> > @@ -1379,11 +1387,8 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> >   			goto xdp_xmit;
> >
> >   		case VIRTNET_XDP_RES_DROP:
> > -			goto err_xdp_frags;
> > +			goto err_xdp;
> >   		}
> > -err_xdp_frags:
> > -		put_xdp_frags(&xdp);
> > -		goto err_xdp;
> >   	}
> >   	rcu_read_unlock();
> >
>

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 3/8] virtio_net: introduce virtnet_xdp_handler() to seprate the logic of run xdp
  2023-04-03  4:12     ` Xuan Zhuo
@ 2023-04-04  5:04       ` Jason Wang
  2023-04-04  6:11         ` Xuan Zhuo
  0 siblings, 1 reply; 41+ messages in thread
From: Jason Wang @ 2023-04-04  5:04 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, Michael S. Tsirkin, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

On Mon, Apr 3, 2023 at 12:17 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> On Mon, 3 Apr 2023 10:43:03 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > On Tue, Mar 28, 2023 at 8:04 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > >
> > > At present, we have two similar pieces of logic that run the XDP prog.
> > >
> > > Therefore, this patch separates the code that executes XDP, which is
> > > conducive to later maintenance.
> > >
> > > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > > ---
> > >  drivers/net/virtio_net.c | 142 +++++++++++++++++++++------------------
> > >  1 file changed, 75 insertions(+), 67 deletions(-)
> > >
> > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > index bb426958cdd4..72b9d6ee4024 100644
> > > --- a/drivers/net/virtio_net.c
> > > +++ b/drivers/net/virtio_net.c
> > > @@ -301,6 +301,15 @@ struct padded_vnet_hdr {
> > >         char padding[12];
> > >  };
> > >
> > > +enum {
> > > +       /* xdp pass */
> > > +       VIRTNET_XDP_RES_PASS,
> > > +       /* drop packet. the caller needs to release the page. */
> > > +       VIRTNET_XDP_RES_DROP,
> > > +       /* packet is consumed by xdp. the caller needs to do nothing. */
> > > +       VIRTNET_XDP_RES_CONSUMED,
> > > +};
> >
> > I'd prefer this to be done on top unless it is a must. But I don't see
> > any advantage of introducing this, it's partial mapping of XDP action
> > and it needs to be extended when XDP action is extended. (And we've
> > already had: VIRTIO_XDP_REDIR and VIRTIO_XDP_TX ...)
>
> No, these are the three states of the buffer after XDP processing.
>
> * PASS: go on to build an skb

XDP_PASS goes for this.

> * DROP: we should release the buffer

XDP_DROP and error conditions go with this.

> * CONSUMED: the xdp prog used the buffer, we do nothing

XDP_TX/XDP_REDIRECT goes for this.

So virtnet_xdp_handler() just maps the XDP action plus the error
conditions to the above three states.

We can simply map errors to XDP_DROP, like:

       case XDP_TX:
               stats->xdp_tx++;
               xdpf = xdp_convert_buff_to_frame(xdp);
               if (unlikely(!xdpf))
                       return XDP_DROP;

A good side effect is that the xdp_xmit pointer no longer needs to be
passed to the function.

>
> The latter two are not particularly related to the XDP actions. And this does
> not need to be extended when the XDP actions are extended. At least I have not
> thought of such a situation.

What are the advantages of such an indirection compared to using the XDP action directly?

Thanks

>
>
> >
> > > +
> > >  static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
> > >  static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
> > >
> > > @@ -789,6 +798,59 @@ static int virtnet_xdp_xmit(struct net_device *dev,
> > >         return ret;
> > >  }
> > >
> > > +static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> > > +                              struct net_device *dev,
> > > +                              unsigned int *xdp_xmit,
> > > +                              struct virtnet_rq_stats *stats)
> > > +{
> > > +       struct xdp_frame *xdpf;
> > > +       int err;
> > > +       u32 act;
> > > +
> > > +       act = bpf_prog_run_xdp(xdp_prog, xdp);
> > > +       stats->xdp_packets++;
> > > +
> > > +       switch (act) {
> > > +       case XDP_PASS:
> > > +               return VIRTNET_XDP_RES_PASS;
> > > +
> > > +       case XDP_TX:
> > > +               stats->xdp_tx++;
> > > +               xdpf = xdp_convert_buff_to_frame(xdp);
> > > +               if (unlikely(!xdpf))
> > > +                       return VIRTNET_XDP_RES_DROP;
> > > +
> > > +               err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > > +               if (unlikely(!err)) {
> > > +                       xdp_return_frame_rx_napi(xdpf);
> > > +               } else if (unlikely(err < 0)) {
> > > +                       trace_xdp_exception(dev, xdp_prog, act);
> > > +                       return VIRTNET_XDP_RES_DROP;
> > > +               }
> > > +
> > > +               *xdp_xmit |= VIRTIO_XDP_TX;
> > > +               return VIRTNET_XDP_RES_CONSUMED;
> > > +
> > > +       case XDP_REDIRECT:
> > > +               stats->xdp_redirects++;
> > > +               err = xdp_do_redirect(dev, xdp, xdp_prog);
> > > +               if (err)
> > > +                       return VIRTNET_XDP_RES_DROP;
> > > +
> > > +               *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > +               return VIRTNET_XDP_RES_CONSUMED;
> > > +
> > > +       default:
> > > +               bpf_warn_invalid_xdp_action(dev, xdp_prog, act);
> > > +               fallthrough;
> > > +       case XDP_ABORTED:
> > > +               trace_xdp_exception(dev, xdp_prog, act);
> > > +               fallthrough;
> > > +       case XDP_DROP:
> > > +               return VIRTNET_XDP_RES_DROP;
> > > +       }
> > > +}
> > > +
> > >  static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
> > >  {
> > >         return vi->xdp_enabled ? VIRTIO_XDP_HEADROOM : 0;
> > > @@ -876,7 +938,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > >         struct page *page = virt_to_head_page(buf);
> > >         unsigned int delta = 0;
> > >         struct page *xdp_page;
> > > -       int err;
> > >         unsigned int metasize = 0;
> > >
> > >         len -= vi->hdr_len;
> > > @@ -898,7 +959,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > >         xdp_prog = rcu_dereference(rq->xdp_prog);
> > >         if (xdp_prog) {
> > >                 struct virtio_net_hdr_mrg_rxbuf *hdr = buf + header_offset;
> > > -               struct xdp_frame *xdpf;
> > >                 struct xdp_buff xdp;
> > >                 void *orig_data;
> > >                 u32 act;
> > > @@ -931,46 +991,22 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > >                 xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len,
> > >                                  xdp_headroom, len, true);
> > >                 orig_data = xdp.data;
> > > -               act = bpf_prog_run_xdp(xdp_prog, &xdp);
> > > -               stats->xdp_packets++;
> > > +
> > > +               act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
> > >
> > >                 switch (act) {
> > > -               case XDP_PASS:
> > > +               case VIRTNET_XDP_RES_PASS:
> > >                         /* Recalculate length in case bpf program changed it */
> > >                         delta = orig_data - xdp.data;
> > >                         len = xdp.data_end - xdp.data;
> > >                         metasize = xdp.data - xdp.data_meta;
> > >                         break;
> > > -               case XDP_TX:
> > > -                       stats->xdp_tx++;
> > > -                       xdpf = xdp_convert_buff_to_frame(&xdp);
> > > -                       if (unlikely(!xdpf))
> > > -                               goto err_xdp;
> > > -                       err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > > -                       if (unlikely(!err)) {
> > > -                               xdp_return_frame_rx_napi(xdpf);
> > > -                       } else if (unlikely(err < 0)) {
> > > -                               trace_xdp_exception(vi->dev, xdp_prog, act);
> > > -                               goto err_xdp;
> > > -                       }
> > > -                       *xdp_xmit |= VIRTIO_XDP_TX;
> > > -                       rcu_read_unlock();
> > > -                       goto xdp_xmit;
> > > -               case XDP_REDIRECT:
> > > -                       stats->xdp_redirects++;
> > > -                       err = xdp_do_redirect(dev, &xdp, xdp_prog);
> > > -                       if (err)
> > > -                               goto err_xdp;
> > > -                       *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > +
> > > +               case VIRTNET_XDP_RES_CONSUMED:
> > >                         rcu_read_unlock();
> > >                         goto xdp_xmit;
> > > -               default:
> > > -                       bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
> > > -                       fallthrough;
> > > -               case XDP_ABORTED:
> > > -                       trace_xdp_exception(vi->dev, xdp_prog, act);
> > > -                       goto err_xdp;
> > > -               case XDP_DROP:
> > > +
> > > +               case VIRTNET_XDP_RES_DROP:
> > >                         goto err_xdp;
> > >                 }
> > >         }
> > > @@ -1277,7 +1313,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> > >         if (xdp_prog) {
> > >                 unsigned int xdp_frags_truesz = 0;
> > >                 struct skb_shared_info *shinfo;
> > > -               struct xdp_frame *xdpf;
> > >                 struct page *xdp_page;
> > >                 struct xdp_buff xdp;
> > >                 void *data;
> > > @@ -1294,49 +1329,22 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> > >                 if (unlikely(err))
> > >                         goto err_xdp_frags;
> > >
> > > -               act = bpf_prog_run_xdp(xdp_prog, &xdp);
> > > -               stats->xdp_packets++;
> > > +               act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
> > >
> > >                 switch (act) {
> > > -               case XDP_PASS:
> > > +               case VIRTNET_XDP_RES_PASS:
> > >                         head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
> > >                         if (unlikely(!head_skb))
> > >                                 goto err_xdp_frags;
> > >
> > >                         rcu_read_unlock();
> > >                         return head_skb;
> > > -               case XDP_TX:
> > > -                       stats->xdp_tx++;
> > > -                       xdpf = xdp_convert_buff_to_frame(&xdp);
> > > -                       if (unlikely(!xdpf)) {
> > > -                               netdev_dbg(dev, "convert buff to frame failed for xdp\n");
> >
> > Nit: This debug is lost after the conversion.
>
> Will fix.
>
> Thanks.
>
> >
> > Thanks
> >
> > > -                               goto err_xdp_frags;
> > > -                       }
> > > -                       err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > > -                       if (unlikely(!err)) {
> > > -                               xdp_return_frame_rx_napi(xdpf);
> > > -                       } else if (unlikely(err < 0)) {
> > > -                               trace_xdp_exception(vi->dev, xdp_prog, act);
> > > -                               goto err_xdp_frags;
> > > -                       }
> > > -                       *xdp_xmit |= VIRTIO_XDP_TX;
> > > -                       rcu_read_unlock();
> > > -                       goto xdp_xmit;
> > > -               case XDP_REDIRECT:
> > > -                       stats->xdp_redirects++;
> > > -                       err = xdp_do_redirect(dev, &xdp, xdp_prog);
> > > -                       if (err)
> > > -                               goto err_xdp_frags;
> > > -                       *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > +
> > > +               case VIRTNET_XDP_RES_CONSUMED:
> > >                         rcu_read_unlock();
> > >                         goto xdp_xmit;
> > > -               default:
> > > -                       bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
> > > -                       fallthrough;
> > > -               case XDP_ABORTED:
> > > -                       trace_xdp_exception(vi->dev, xdp_prog, act);
> > > -                       fallthrough;
> > > -               case XDP_DROP:
> > > +
> > > +               case VIRTNET_XDP_RES_DROP:
> > >                         goto err_xdp_frags;
> > >                 }
> > >  err_xdp_frags:
> > > --
> > > 2.32.0.3.g01195cf9f
> > >
> >
>


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 3/8] virtio_net: introduce virtnet_xdp_handler() to separate the logic of running xdp
  2023-04-04  5:04       ` Jason Wang
@ 2023-04-04  6:11         ` Xuan Zhuo
  2023-04-04  6:35           ` Jason Wang
  0 siblings, 1 reply; 41+ messages in thread
From: Xuan Zhuo @ 2023-04-04  6:11 UTC (permalink / raw)
  To: Jason Wang
  Cc: netdev, Michael S. Tsirkin, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

On Tue, 4 Apr 2023 13:04:02 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Mon, Apr 3, 2023 at 12:17 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > On Mon, 3 Apr 2023 10:43:03 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > On Tue, Mar 28, 2023 at 8:04 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > >
> > > > At present, we have two similar logic to perform the XDP prog.
> > > >
> > > > Therefore, this PATCH separates the code of executing XDP, which is
> > > > conducive to later maintenance.
> > > >
> > > > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > > > ---
> > > >  drivers/net/virtio_net.c | 142 +++++++++++++++++++++------------------
> > > >  1 file changed, 75 insertions(+), 67 deletions(-)
> > > >
> > > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > > index bb426958cdd4..72b9d6ee4024 100644
> > > > --- a/drivers/net/virtio_net.c
> > > > +++ b/drivers/net/virtio_net.c
> > > > @@ -301,6 +301,15 @@ struct padded_vnet_hdr {
> > > >         char padding[12];
> > > >  };
> > > >
> > > > +enum {
> > > > +       /* xdp pass */
> > > > +       VIRTNET_XDP_RES_PASS,
> > > > +       /* drop packet. the caller needs to release the page. */
> > > > +       VIRTNET_XDP_RES_DROP,
> > > > +       /* packet is consumed by xdp. the caller needs to do nothing. */
> > > > +       VIRTNET_XDP_RES_CONSUMED,
> > > > +};
> > >
> > > I'd prefer this to be done on top unless it is a must. But I don't see
> > > any advantage of introducing this, it's partial mapping of XDP action
> > > and it needs to be extended when XDP action is extended. (And we've
> > > already had: VIRTIO_XDP_REDIR and VIRTIO_XDP_TX ...)
> >
> > No, these are the three states of buffer after XDP processing.
> >
> > * PASS: goto make skb
>
> XDP_PASS goes for this.
>
> > * DROP: we should release buffer
>
> XDP_DROP and error conditions go with this.
>
> > * CONSUMED: xdp prog used the buffer, we do nothing
>
> XDP_TX/XDP_REDIRECT go for this.
>
> So the virtnet_xdp_handler() just maps the XDP action plus the error
> conditions to the above three states.
>
> We can simply map error to XDP_DROP like:
>
>        case XDP_TX:
>               stats->xdp_tx++;
>                xdpf = xdp_convert_buff_to_frame(xdp);
>                if (unlikely(!xdpf))
>                        return XDP_DROP;
>
> A good side effect is to avoid the xdp_xmit pointer to be passed to
> the function.


So, I guess you mean this:

	switch (act) {
	case XDP_PASS:
		/* handle pass */
		return skb;

	case XDP_TX:
		*xdp_xmit |= VIRTIO_XDP_TX;
		goto xmit;

	case XDP_REDIRECT:
		*xdp_xmit |= VIRTIO_XDP_REDIR;
		goto xmit;

	case XDP_DROP:
	default:
		goto err_xdp;
	}

I have to say there is no problem with this from a code-implementation
perspective.

But if a new action like XDP_TX/XDP_REDIRECT is added in the future, then
we must modify all the callers. This is the benefit of using CONSUMED.

I think putting xdp_xmit in virtnet_xdp_handler() is a real advantage,
since the caller then does not have to care about these details. If you are
concerned about increasing the number of parameters, I suggest putting it
in rq.

Thanks.



>
> >
> > The latter two are not particularly related to the XDP action, and they do not
> > need to be extended when the XDP actions are extended. At least I have not
> > thought of such a situation.
>
> What's the advantages of such indirection compared to using XDP action directly?
>
> Thanks
>
> >
> >
> > >
> > > > +
> > > >  static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
> > > >  static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
> > > >
> > > > @@ -789,6 +798,59 @@ static int virtnet_xdp_xmit(struct net_device *dev,
> > > >         return ret;
> > > >  }
> > > >
> > > > +static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> > > > +                              struct net_device *dev,
> > > > +                              unsigned int *xdp_xmit,
> > > > +                              struct virtnet_rq_stats *stats)
> > > > +{
> > > > +       struct xdp_frame *xdpf;
> > > > +       int err;
> > > > +       u32 act;
> > > > +
> > > > +       act = bpf_prog_run_xdp(xdp_prog, xdp);
> > > > +       stats->xdp_packets++;
> > > > +
> > > > +       switch (act) {
> > > > +       case XDP_PASS:
> > > > +               return VIRTNET_XDP_RES_PASS;
> > > > +
> > > > +       case XDP_TX:
> > > > +               stats->xdp_tx++;
> > > > +               xdpf = xdp_convert_buff_to_frame(xdp);
> > > > +               if (unlikely(!xdpf))
> > > > +                       return VIRTNET_XDP_RES_DROP;
> > > > +
> > > > +               err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > > > +               if (unlikely(!err)) {
> > > > +                       xdp_return_frame_rx_napi(xdpf);
> > > > +               } else if (unlikely(err < 0)) {
> > > > +                       trace_xdp_exception(dev, xdp_prog, act);
> > > > +                       return VIRTNET_XDP_RES_DROP;
> > > > +               }
> > > > +
> > > > +               *xdp_xmit |= VIRTIO_XDP_TX;
> > > > +               return VIRTNET_XDP_RES_CONSUMED;
> > > > +
> > > > +       case XDP_REDIRECT:
> > > > +               stats->xdp_redirects++;
> > > > +               err = xdp_do_redirect(dev, xdp, xdp_prog);
> > > > +               if (err)
> > > > +                       return VIRTNET_XDP_RES_DROP;
> > > > +
> > > > +               *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > > +               return VIRTNET_XDP_RES_CONSUMED;
> > > > +
> > > > +       default:
> > > > +               bpf_warn_invalid_xdp_action(dev, xdp_prog, act);
> > > > +               fallthrough;
> > > > +       case XDP_ABORTED:
> > > > +               trace_xdp_exception(dev, xdp_prog, act);
> > > > +               fallthrough;
> > > > +       case XDP_DROP:
> > > > +               return VIRTNET_XDP_RES_DROP;
> > > > +       }
> > > > +}
> > > > +
> > > >  static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
> > > >  {
> > > >         return vi->xdp_enabled ? VIRTIO_XDP_HEADROOM : 0;
> > > > @@ -876,7 +938,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > > >         struct page *page = virt_to_head_page(buf);
> > > >         unsigned int delta = 0;
> > > >         struct page *xdp_page;
> > > > -       int err;
> > > >         unsigned int metasize = 0;
> > > >
> > > >         len -= vi->hdr_len;
> > > > @@ -898,7 +959,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > > >         xdp_prog = rcu_dereference(rq->xdp_prog);
> > > >         if (xdp_prog) {
> > > >                 struct virtio_net_hdr_mrg_rxbuf *hdr = buf + header_offset;
> > > > -               struct xdp_frame *xdpf;
> > > >                 struct xdp_buff xdp;
> > > >                 void *orig_data;
> > > >                 u32 act;
> > > > @@ -931,46 +991,22 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > > >                 xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len,
> > > >                                  xdp_headroom, len, true);
> > > >                 orig_data = xdp.data;
> > > > -               act = bpf_prog_run_xdp(xdp_prog, &xdp);
> > > > -               stats->xdp_packets++;
> > > > +
> > > > +               act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
> > > >
> > > >                 switch (act) {
> > > > -               case XDP_PASS:
> > > > +               case VIRTNET_XDP_RES_PASS:
> > > >                         /* Recalculate length in case bpf program changed it */
> > > >                         delta = orig_data - xdp.data;
> > > >                         len = xdp.data_end - xdp.data;
> > > >                         metasize = xdp.data - xdp.data_meta;
> > > >                         break;
> > > > -               case XDP_TX:
> > > > -                       stats->xdp_tx++;
> > > > -                       xdpf = xdp_convert_buff_to_frame(&xdp);
> > > > -                       if (unlikely(!xdpf))
> > > > -                               goto err_xdp;
> > > > -                       err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > > > -                       if (unlikely(!err)) {
> > > > -                               xdp_return_frame_rx_napi(xdpf);
> > > > -                       } else if (unlikely(err < 0)) {
> > > > -                               trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > -                               goto err_xdp;
> > > > -                       }
> > > > -                       *xdp_xmit |= VIRTIO_XDP_TX;
> > > > -                       rcu_read_unlock();
> > > > -                       goto xdp_xmit;
> > > > -               case XDP_REDIRECT:
> > > > -                       stats->xdp_redirects++;
> > > > -                       err = xdp_do_redirect(dev, &xdp, xdp_prog);
> > > > -                       if (err)
> > > > -                               goto err_xdp;
> > > > -                       *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > > +
> > > > +               case VIRTNET_XDP_RES_CONSUMED:
> > > >                         rcu_read_unlock();
> > > >                         goto xdp_xmit;
> > > > -               default:
> > > > -                       bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
> > > > -                       fallthrough;
> > > > -               case XDP_ABORTED:
> > > > -                       trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > -                       goto err_xdp;
> > > > -               case XDP_DROP:
> > > > +
> > > > +               case VIRTNET_XDP_RES_DROP:
> > > >                         goto err_xdp;
> > > >                 }
> > > >         }
> > > > @@ -1277,7 +1313,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> > > >         if (xdp_prog) {
> > > >                 unsigned int xdp_frags_truesz = 0;
> > > >                 struct skb_shared_info *shinfo;
> > > > -               struct xdp_frame *xdpf;
> > > >                 struct page *xdp_page;
> > > >                 struct xdp_buff xdp;
> > > >                 void *data;
> > > > @@ -1294,49 +1329,22 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> > > >                 if (unlikely(err))
> > > >                         goto err_xdp_frags;
> > > >
> > > > -               act = bpf_prog_run_xdp(xdp_prog, &xdp);
> > > > -               stats->xdp_packets++;
> > > > +               act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
> > > >
> > > >                 switch (act) {
> > > > -               case XDP_PASS:
> > > > +               case VIRTNET_XDP_RES_PASS:
> > > >                         head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
> > > >                         if (unlikely(!head_skb))
> > > >                                 goto err_xdp_frags;
> > > >
> > > >                         rcu_read_unlock();
> > > >                         return head_skb;
> > > > -               case XDP_TX:
> > > > -                       stats->xdp_tx++;
> > > > -                       xdpf = xdp_convert_buff_to_frame(&xdp);
> > > > -                       if (unlikely(!xdpf)) {
> > > > -                               netdev_dbg(dev, "convert buff to frame failed for xdp\n");
> > >
> > > Nit: This debug is lost after the conversion.
> >
> > Will fix.
> >
> > Thanks.
> >
> > >
> > > Thanks
> > >
> > > > -                               goto err_xdp_frags;
> > > > -                       }
> > > > -                       err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > > > -                       if (unlikely(!err)) {
> > > > -                               xdp_return_frame_rx_napi(xdpf);
> > > > -                       } else if (unlikely(err < 0)) {
> > > > -                               trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > -                               goto err_xdp_frags;
> > > > -                       }
> > > > -                       *xdp_xmit |= VIRTIO_XDP_TX;
> > > > -                       rcu_read_unlock();
> > > > -                       goto xdp_xmit;
> > > > -               case XDP_REDIRECT:
> > > > -                       stats->xdp_redirects++;
> > > > -                       err = xdp_do_redirect(dev, &xdp, xdp_prog);
> > > > -                       if (err)
> > > > -                               goto err_xdp_frags;
> > > > -                       *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > > +
> > > > +               case VIRTNET_XDP_RES_CONSUMED:
> > > >                         rcu_read_unlock();
> > > >                         goto xdp_xmit;
> > > > -               default:
> > > > -                       bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
> > > > -                       fallthrough;
> > > > -               case XDP_ABORTED:
> > > > -                       trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > -                       fallthrough;
> > > > -               case XDP_DROP:
> > > > +
> > > > +               case VIRTNET_XDP_RES_DROP:
> > > >                         goto err_xdp_frags;
> > > >                 }
> > > >  err_xdp_frags:
> > > > --
> > > > 2.32.0.3.g01195cf9f
> > > >
> > >
> >
>

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 3/8] virtio_net: introduce virtnet_xdp_handler() to separate the logic of running xdp
  2023-04-04  6:11         ` Xuan Zhuo
@ 2023-04-04  6:35           ` Jason Wang
  2023-04-04  6:44             ` Xuan Zhuo
  0 siblings, 1 reply; 41+ messages in thread
From: Jason Wang @ 2023-04-04  6:35 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, Michael S. Tsirkin, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

On Tue, Apr 4, 2023 at 2:22 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> On Tue, 4 Apr 2023 13:04:02 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > On Mon, Apr 3, 2023 at 12:17 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > >
> > > On Mon, 3 Apr 2023 10:43:03 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > On Tue, Mar 28, 2023 at 8:04 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > >
> > > > > At present, we have two similar logic to perform the XDP prog.
> > > > >
> > > > > Therefore, this PATCH separates the code of executing XDP, which is
> > > > > conducive to later maintenance.
> > > > >
> > > > > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > > > > ---
> > > > >  drivers/net/virtio_net.c | 142 +++++++++++++++++++++------------------
> > > > >  1 file changed, 75 insertions(+), 67 deletions(-)
> > > > >
> > > > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > > > index bb426958cdd4..72b9d6ee4024 100644
> > > > > --- a/drivers/net/virtio_net.c
> > > > > +++ b/drivers/net/virtio_net.c
> > > > > @@ -301,6 +301,15 @@ struct padded_vnet_hdr {
> > > > >         char padding[12];
> > > > >  };
> > > > >
> > > > > +enum {
> > > > > +       /* xdp pass */
> > > > > +       VIRTNET_XDP_RES_PASS,
> > > > > +       /* drop packet. the caller needs to release the page. */
> > > > > +       VIRTNET_XDP_RES_DROP,
> > > > > +       /* packet is consumed by xdp. the caller needs to do nothing. */
> > > > > +       VIRTNET_XDP_RES_CONSUMED,
> > > > > +};
> > > >
> > > > I'd prefer this to be done on top unless it is a must. But I don't see
> > > > any advantage of introducing this, it's partial mapping of XDP action
> > > > and it needs to be extended when XDP action is extended. (And we've
> > > > already had: VIRTIO_XDP_REDIR and VIRTIO_XDP_TX ...)
> > >
> > > No, these are the three states of buffer after XDP processing.
> > >
> > > * PASS: goto make skb
> >
> > XDP_PASS goes for this.
> >
> > > * DROP: we should release buffer
> >
> > XDP_DROP and error conditions go with this.
> >
> > > * CONSUMED: xdp prog used the buffer, we do nothing
> >
> > XDP_TX/XDP_REDIRECT go for this.
> >
> > So the virtnet_xdp_handler() just maps the XDP action plus the error
> > conditions to the above three states.
> >
> > We can simply map error to XDP_DROP like:
> >
> >        case XDP_TX:
> >               stats->xdp_tx++;
> >                xdpf = xdp_convert_buff_to_frame(xdp);
> >                if (unlikely(!xdpf))
> >                        return XDP_DROP;
> >
> > A good side effect is to avoid the xdp_xmit pointer to be passed to
> > the function.
>
>
> So, I guess you mean this:
>
>         switch (act) {
>         case XDP_PASS:
>                 /* handle pass */
>                 return skb;
>
>         case XDP_TX:
>                 *xdp_xmit |= VIRTIO_XDP_TX;
>                 goto xmit;
>
>         case XDP_REDIRECT:
>                 *xdp_xmit |= VIRTIO_XDP_REDIR;
>                 goto xmit;
>
>         case XDP_DROP:
>         default:
>                 goto err_xdp;
>         }
>
> I have to say there is no problem with this from a code-implementation perspective.

Note that this is the current logic where it is determined in
receive_small() and receive_mergeable().

>
> But if a new action like XDP_TX/XDP_REDIRECT is added in the future, then
> we must modify all the callers.

This is fine since we only use a single type for XDP action.

> This is the benefit of using CONSUMED.

It's very hard to say, e.g. if we want to support cloning in the future.

>
> I think putting xdp_xmit in virtnet_xdp_handler() is a real advantage,
> since the caller then does not have to care about these details.

This part I don't understand: having xdp_xmit means the caller needs to
know whether the packet was xmitted or redirected. The point of the enum
is to hide the XDP actions, but that conflicts with xdp_xmit, which wants
to expose (part of) the XDP actions.

> If you are concerned about increasing the number of parameters, I suggest
> putting it in rq.

I don't have a strong opinion on introducing the enum; what I want to say
is: do that in a separate patch.

Thanks

>
> Thanks.
>
>
>
> >
> > >
> > > The latter two are not particularly related to the XDP action, and they do not
> > > need to be extended when the XDP actions are extended. At least I have not
> > > thought of such a situation.
> >
> > What's the advantages of such indirection compared to using XDP action directly?
> >
> > Thanks
> >
> > >
> > >
> > > >
> > > > > +
> > > > >  static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
> > > > >  static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
> > > > >
> > > > > @@ -789,6 +798,59 @@ static int virtnet_xdp_xmit(struct net_device *dev,
> > > > >         return ret;
> > > > >  }
> > > > >
> > > > > +static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> > > > > +                              struct net_device *dev,
> > > > > +                              unsigned int *xdp_xmit,
> > > > > +                              struct virtnet_rq_stats *stats)
> > > > > +{
> > > > > +       struct xdp_frame *xdpf;
> > > > > +       int err;
> > > > > +       u32 act;
> > > > > +
> > > > > +       act = bpf_prog_run_xdp(xdp_prog, xdp);
> > > > > +       stats->xdp_packets++;
> > > > > +
> > > > > +       switch (act) {
> > > > > +       case XDP_PASS:
> > > > > +               return VIRTNET_XDP_RES_PASS;
> > > > > +
> > > > > +       case XDP_TX:
> > > > > +               stats->xdp_tx++;
> > > > > +               xdpf = xdp_convert_buff_to_frame(xdp);
> > > > > +               if (unlikely(!xdpf))
> > > > > +                       return VIRTNET_XDP_RES_DROP;
> > > > > +
> > > > > +               err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > > > > +               if (unlikely(!err)) {
> > > > > +                       xdp_return_frame_rx_napi(xdpf);
> > > > > +               } else if (unlikely(err < 0)) {
> > > > > +                       trace_xdp_exception(dev, xdp_prog, act);
> > > > > +                       return VIRTNET_XDP_RES_DROP;
> > > > > +               }
> > > > > +
> > > > > +               *xdp_xmit |= VIRTIO_XDP_TX;
> > > > > +               return VIRTNET_XDP_RES_CONSUMED;
> > > > > +
> > > > > +       case XDP_REDIRECT:
> > > > > +               stats->xdp_redirects++;
> > > > > +               err = xdp_do_redirect(dev, xdp, xdp_prog);
> > > > > +               if (err)
> > > > > +                       return VIRTNET_XDP_RES_DROP;
> > > > > +
> > > > > +               *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > > > +               return VIRTNET_XDP_RES_CONSUMED;
> > > > > +
> > > > > +       default:
> > > > > +               bpf_warn_invalid_xdp_action(dev, xdp_prog, act);
> > > > > +               fallthrough;
> > > > > +       case XDP_ABORTED:
> > > > > +               trace_xdp_exception(dev, xdp_prog, act);
> > > > > +               fallthrough;
> > > > > +       case XDP_DROP:
> > > > > +               return VIRTNET_XDP_RES_DROP;
> > > > > +       }
> > > > > +}
> > > > > +
> > > > >  static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
> > > > >  {
> > > > >         return vi->xdp_enabled ? VIRTIO_XDP_HEADROOM : 0;
> > > > > @@ -876,7 +938,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > > > >         struct page *page = virt_to_head_page(buf);
> > > > >         unsigned int delta = 0;
> > > > >         struct page *xdp_page;
> > > > > -       int err;
> > > > >         unsigned int metasize = 0;
> > > > >
> > > > >         len -= vi->hdr_len;
> > > > > @@ -898,7 +959,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > > > >         xdp_prog = rcu_dereference(rq->xdp_prog);
> > > > >         if (xdp_prog) {
> > > > >                 struct virtio_net_hdr_mrg_rxbuf *hdr = buf + header_offset;
> > > > > -               struct xdp_frame *xdpf;
> > > > >                 struct xdp_buff xdp;
> > > > >                 void *orig_data;
> > > > >                 u32 act;
> > > > > @@ -931,46 +991,22 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > > > >                 xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len,
> > > > >                                  xdp_headroom, len, true);
> > > > >                 orig_data = xdp.data;
> > > > > -               act = bpf_prog_run_xdp(xdp_prog, &xdp);
> > > > > -               stats->xdp_packets++;
> > > > > +
> > > > > +               act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
> > > > >
> > > > >                 switch (act) {
> > > > > -               case XDP_PASS:
> > > > > +               case VIRTNET_XDP_RES_PASS:
> > > > >                         /* Recalculate length in case bpf program changed it */
> > > > >                         delta = orig_data - xdp.data;
> > > > >                         len = xdp.data_end - xdp.data;
> > > > >                         metasize = xdp.data - xdp.data_meta;
> > > > >                         break;
> > > > > -               case XDP_TX:
> > > > > -                       stats->xdp_tx++;
> > > > > -                       xdpf = xdp_convert_buff_to_frame(&xdp);
> > > > > -                       if (unlikely(!xdpf))
> > > > > -                               goto err_xdp;
> > > > > -                       err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > > > > -                       if (unlikely(!err)) {
> > > > > -                               xdp_return_frame_rx_napi(xdpf);
> > > > > -                       } else if (unlikely(err < 0)) {
> > > > > -                               trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > > -                               goto err_xdp;
> > > > > -                       }
> > > > > -                       *xdp_xmit |= VIRTIO_XDP_TX;
> > > > > -                       rcu_read_unlock();
> > > > > -                       goto xdp_xmit;
> > > > > -               case XDP_REDIRECT:
> > > > > -                       stats->xdp_redirects++;
> > > > > -                       err = xdp_do_redirect(dev, &xdp, xdp_prog);
> > > > > -                       if (err)
> > > > > -                               goto err_xdp;
> > > > > -                       *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > > > +
> > > > > +               case VIRTNET_XDP_RES_CONSUMED:
> > > > >                         rcu_read_unlock();
> > > > >                         goto xdp_xmit;
> > > > > -               default:
> > > > > -                       bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
> > > > > -                       fallthrough;
> > > > > -               case XDP_ABORTED:
> > > > > -                       trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > > -                       goto err_xdp;
> > > > > -               case XDP_DROP:
> > > > > +
> > > > > +               case VIRTNET_XDP_RES_DROP:
> > > > >                         goto err_xdp;
> > > > >                 }
> > > > >         }
> > > > > @@ -1277,7 +1313,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> > > > >         if (xdp_prog) {
> > > > >                 unsigned int xdp_frags_truesz = 0;
> > > > >                 struct skb_shared_info *shinfo;
> > > > > -               struct xdp_frame *xdpf;
> > > > >                 struct page *xdp_page;
> > > > >                 struct xdp_buff xdp;
> > > > >                 void *data;
> > > > > @@ -1294,49 +1329,22 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> > > > >                 if (unlikely(err))
> > > > >                         goto err_xdp_frags;
> > > > >
> > > > > -               act = bpf_prog_run_xdp(xdp_prog, &xdp);
> > > > > -               stats->xdp_packets++;
> > > > > +               act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
> > > > >
> > > > >                 switch (act) {
> > > > > -               case XDP_PASS:
> > > > > +               case VIRTNET_XDP_RES_PASS:
> > > > >                         head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
> > > > >                         if (unlikely(!head_skb))
> > > > >                                 goto err_xdp_frags;
> > > > >
> > > > >                         rcu_read_unlock();
> > > > >                         return head_skb;
> > > > > -               case XDP_TX:
> > > > > -                       stats->xdp_tx++;
> > > > > -                       xdpf = xdp_convert_buff_to_frame(&xdp);
> > > > > -                       if (unlikely(!xdpf)) {
> > > > > -                               netdev_dbg(dev, "convert buff to frame failed for xdp\n");
> > > >
> > > > Nit: This debug is lost after the conversion.
> > >
> > > Will fix.
> > >
> > > Thanks.
> > >
> > > >
> > > > Thanks
> > > >
> > > > > -                               goto err_xdp_frags;
> > > > > -                       }
> > > > > -                       err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > > > > -                       if (unlikely(!err)) {
> > > > > -                               xdp_return_frame_rx_napi(xdpf);
> > > > > -                       } else if (unlikely(err < 0)) {
> > > > > -                               trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > > -                               goto err_xdp_frags;
> > > > > -                       }
> > > > > -                       *xdp_xmit |= VIRTIO_XDP_TX;
> > > > > -                       rcu_read_unlock();
> > > > > -                       goto xdp_xmit;
> > > > > -               case XDP_REDIRECT:
> > > > > -                       stats->xdp_redirects++;
> > > > > -                       err = xdp_do_redirect(dev, &xdp, xdp_prog);
> > > > > -                       if (err)
> > > > > -                               goto err_xdp_frags;
> > > > > -                       *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > > > +
> > > > > +               case VIRTNET_XDP_RES_CONSUMED:
> > > > >                         rcu_read_unlock();
> > > > >                         goto xdp_xmit;
> > > > > -               default:
> > > > > -                       bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
> > > > > -                       fallthrough;
> > > > > -               case XDP_ABORTED:
> > > > > -                       trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > > -                       fallthrough;
> > > > > -               case XDP_DROP:
> > > > > +
> > > > > +               case VIRTNET_XDP_RES_DROP:
> > > > >                         goto err_xdp_frags;
> > > > >                 }
> > > > >  err_xdp_frags:
> > > > > --
> > > > > 2.32.0.3.g01195cf9f
> > > > >
> > > >
> > >
> >
>


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 3/8] virtio_net: introduce virtnet_xdp_handler() to seprate the logic of run xdp
  2023-04-04  6:35           ` Jason Wang
@ 2023-04-04  6:44             ` Xuan Zhuo
  2023-04-04  7:01               ` Jason Wang
  0 siblings, 1 reply; 41+ messages in thread
From: Xuan Zhuo @ 2023-04-04  6:44 UTC (permalink / raw)
  To: Jason Wang
  Cc: netdev, Michael S. Tsirkin, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

On Tue, 4 Apr 2023 14:35:05 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Tue, Apr 4, 2023 at 2:22 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > On Tue, 4 Apr 2023 13:04:02 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > On Mon, Apr 3, 2023 at 12:17 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > >
> > > > On Mon, 3 Apr 2023 10:43:03 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > On Tue, Mar 28, 2023 at 8:04 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > >
> > > > > > At present, we have two similar logic to perform the XDP prog.
> > > > > >
> > > > > > Therefore, this PATCH separates the code of executing XDP, which is
> > > > > > conducive to later maintenance.
> > > > > >
> > > > > > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > > > > > ---
> > > > > >  drivers/net/virtio_net.c | 142 +++++++++++++++++++++------------------
> > > > > >  1 file changed, 75 insertions(+), 67 deletions(-)
> > > > > >
> > > > > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > > > > index bb426958cdd4..72b9d6ee4024 100644
> > > > > > --- a/drivers/net/virtio_net.c
> > > > > > +++ b/drivers/net/virtio_net.c
> > > > > > @@ -301,6 +301,15 @@ struct padded_vnet_hdr {
> > > > > >         char padding[12];
> > > > > >  };
> > > > > >
> > > > > > +enum {
> > > > > > +       /* xdp pass */
> > > > > > +       VIRTNET_XDP_RES_PASS,
> > > > > > +       /* drop packet. the caller needs to release the page. */
> > > > > > +       VIRTNET_XDP_RES_DROP,
> > > > > > +       /* packet is consumed by xdp. the caller needs to do nothing. */
> > > > > > +       VIRTNET_XDP_RES_CONSUMED,
> > > > > > +};
> > > > >
> > > > > I'd prefer this to be done on top unless it is a must. But I don't see
> > > > > any advantage of introducing this, it's partial mapping of XDP action
> > > > > and it needs to be extended when XDP action is extended. (And we've
> > > > > already had: VIRTIO_XDP_REDIR and VIRTIO_XDP_TX ...)
> > > >
> > > > No, these are the three states of buffer after XDP processing.
> > > >
> > > > * PASS: goto make skb
> > >
> > > XDP_PASS goes for this.
> > >
> > > > * DROP: we should release buffer
> > >
> > > XDP_DROP and error conditions go with this.
> > >
> > > > * CONSUMED: the xdp prog used the buffer, we do nothing
> > >
> > > XDP_TX/XDP_REDIRECTION goes for this.
> > >
> > > So virtnet_xdp_handler() just maps the XDP action plus the error
> > > conditions to the above three states.
> > >
> > > We can simply map error to XDP_DROP like:
> > >
> > >        case XDP_TX:
> > >               stats->xdp_tx++;
> > >                xdpf = xdp_convert_buff_to_frame(xdp);
> > >                if (unlikely(!xdpf))
> > >                        return XDP_DROP;
> > >
> > > A good side effect is to avoid the xdp_xmit pointer to be passed to
> > > the function.
> >
> >
> > So, I guess you mean this:
> >
> >         switch (act) {
> >         case XDP_PASS:
> >                 /* handle pass */
> >                 return skb;
> >
> >         case XDP_TX:
> >                 *xdp_xmit |= VIRTIO_XDP_TX;
> >                 goto xmit;
> >
> >         case XDP_REDIRECT:
> >                 *xdp_xmit |= VIRTIO_XDP_REDIR;
> >                 goto xmit;
> >
> >         case XDP_DROP:
> >         default:
> >                 goto err_xdp;
> >         }
> >
> > I have to say there is no problem from the perspective of code implementation.
>
> Note that this is the current logic where it is determined in
> receive_small() and receive_mergeable().

Yes, but the purpose of these patches is to simplify the callers.

>
> >
> > But if a new action like XDP_TX/XDP_REDIRECT is added in the future, then
> > we must modify all the callers.
>
> This is fine since we only use a single type for XDP action.

a single type?

>
> > This is the benefit of using CONSUMED.
>
> It's very hard to say, e.g if we want to support cloning in the future.

Cloning? You mean cloning a new buffer?

It is true that no matter the implementation, the logic must be modified.

>
> >
> > I think it is a good advantage to put xdp_xmit in virtnet_xdp_handler(),
> > which makes the caller not care too much about these details.
>
> This part I don't understand, having xdp_xmit means the caller need to
> know whether it is xmited or redirected. The point of the enum is to
> hide the XDP actions, but it's conflict with what xdp_xmit who want to
> expose (part of) the XDP actions.

I mean, no matter what virtnet_xdp_handler() returns, an XDP action or a value I
defined, I want to hide the modification of xdp_xmit inside virtnet_xdp_handler().

Even if virtnet_xdp_handler() returns XDP_TX, we can still complete the
modification of *xdp_xmit within virtnet_xdp_handler().


>
> > If you take into
> > account the problem of increasing the number of parameters, I advise to put it
> > in rq.
>
> I don't have strong opinion to introduce the enum,

OK, I will drop these new enums.

> what I want to say
> is, use a separated patch to do that.

Does this part refer to putting xdp_xmit in rq?

Thanks.


>
> Thanks
>
> >
> > Thanks.
> >
> >
> >
> > >
> > > >
> > > > The latter two are not particularly related to XDP ACTION. And it does not need
> > > > to extend when XDP action is extended. At least I have not thought of this
> > > > situation.
> > >
> > > What's the advantages of such indirection compared to using XDP action directly?
> > >
> > > Thanks
> > >
> > > >
> > > >
> > > > >
> > > > > > +
> > > > > >  static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
> > > > > >  static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
> > > > > >
> > > > > > @@ -789,6 +798,59 @@ static int virtnet_xdp_xmit(struct net_device *dev,
> > > > > >         return ret;
> > > > > >  }
> > > > > >
> > > > > > +static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> > > > > > +                              struct net_device *dev,
> > > > > > +                              unsigned int *xdp_xmit,
> > > > > > +                              struct virtnet_rq_stats *stats)
> > > > > > +{
> > > > > > +       struct xdp_frame *xdpf;
> > > > > > +       int err;
> > > > > > +       u32 act;
> > > > > > +
> > > > > > +       act = bpf_prog_run_xdp(xdp_prog, xdp);
> > > > > > +       stats->xdp_packets++;
> > > > > > +
> > > > > > +       switch (act) {
> > > > > > +       case XDP_PASS:
> > > > > > +               return VIRTNET_XDP_RES_PASS;
> > > > > > +
> > > > > > +       case XDP_TX:
> > > > > > +               stats->xdp_tx++;
> > > > > > +               xdpf = xdp_convert_buff_to_frame(xdp);
> > > > > > +               if (unlikely(!xdpf))
> > > > > > +                       return VIRTNET_XDP_RES_DROP;
> > > > > > +
> > > > > > +               err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > > > > > +               if (unlikely(!err)) {
> > > > > > +                       xdp_return_frame_rx_napi(xdpf);
> > > > > > +               } else if (unlikely(err < 0)) {
> > > > > > +                       trace_xdp_exception(dev, xdp_prog, act);
> > > > > > +                       return VIRTNET_XDP_RES_DROP;
> > > > > > +               }
> > > > > > +
> > > > > > +               *xdp_xmit |= VIRTIO_XDP_TX;
> > > > > > +               return VIRTNET_XDP_RES_CONSUMED;
> > > > > > +
> > > > > > +       case XDP_REDIRECT:
> > > > > > +               stats->xdp_redirects++;
> > > > > > +               err = xdp_do_redirect(dev, xdp, xdp_prog);
> > > > > > +               if (err)
> > > > > > +                       return VIRTNET_XDP_RES_DROP;
> > > > > > +
> > > > > > +               *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > > > > +               return VIRTNET_XDP_RES_CONSUMED;
> > > > > > +
> > > > > > +       default:
> > > > > > +               bpf_warn_invalid_xdp_action(dev, xdp_prog, act);
> > > > > > +               fallthrough;
> > > > > > +       case XDP_ABORTED:
> > > > > > +               trace_xdp_exception(dev, xdp_prog, act);
> > > > > > +               fallthrough;
> > > > > > +       case XDP_DROP:
> > > > > > +               return VIRTNET_XDP_RES_DROP;
> > > > > > +       }
> > > > > > +}
> > > > > > +
> > > > > >  static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
> > > > > >  {
> > > > > >         return vi->xdp_enabled ? VIRTIO_XDP_HEADROOM : 0;
> > > > > > @@ -876,7 +938,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > > > > >         struct page *page = virt_to_head_page(buf);
> > > > > >         unsigned int delta = 0;
> > > > > >         struct page *xdp_page;
> > > > > > -       int err;
> > > > > >         unsigned int metasize = 0;
> > > > > >
> > > > > >         len -= vi->hdr_len;
> > > > > > @@ -898,7 +959,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > > > > >         xdp_prog = rcu_dereference(rq->xdp_prog);
> > > > > >         if (xdp_prog) {
> > > > > >                 struct virtio_net_hdr_mrg_rxbuf *hdr = buf + header_offset;
> > > > > > -               struct xdp_frame *xdpf;
> > > > > >                 struct xdp_buff xdp;
> > > > > >                 void *orig_data;
> > > > > >                 u32 act;
> > > > > > @@ -931,46 +991,22 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > > > > >                 xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len,
> > > > > >                                  xdp_headroom, len, true);
> > > > > >                 orig_data = xdp.data;
> > > > > > -               act = bpf_prog_run_xdp(xdp_prog, &xdp);
> > > > > > -               stats->xdp_packets++;
> > > > > > +
> > > > > > +               act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
> > > > > >
> > > > > >                 switch (act) {
> > > > > > -               case XDP_PASS:
> > > > > > +               case VIRTNET_XDP_RES_PASS:
> > > > > >                         /* Recalculate length in case bpf program changed it */
> > > > > >                         delta = orig_data - xdp.data;
> > > > > >                         len = xdp.data_end - xdp.data;
> > > > > >                         metasize = xdp.data - xdp.data_meta;
> > > > > >                         break;
> > > > > > -               case XDP_TX:
> > > > > > -                       stats->xdp_tx++;
> > > > > > -                       xdpf = xdp_convert_buff_to_frame(&xdp);
> > > > > > -                       if (unlikely(!xdpf))
> > > > > > -                               goto err_xdp;
> > > > > > -                       err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > > > > > -                       if (unlikely(!err)) {
> > > > > > -                               xdp_return_frame_rx_napi(xdpf);
> > > > > > -                       } else if (unlikely(err < 0)) {
> > > > > > -                               trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > > > -                               goto err_xdp;
> > > > > > -                       }
> > > > > > -                       *xdp_xmit |= VIRTIO_XDP_TX;
> > > > > > -                       rcu_read_unlock();
> > > > > > -                       goto xdp_xmit;
> > > > > > -               case XDP_REDIRECT:
> > > > > > -                       stats->xdp_redirects++;
> > > > > > -                       err = xdp_do_redirect(dev, &xdp, xdp_prog);
> > > > > > -                       if (err)
> > > > > > -                               goto err_xdp;
> > > > > > -                       *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > > > > +
> > > > > > +               case VIRTNET_XDP_RES_CONSUMED:
> > > > > >                         rcu_read_unlock();
> > > > > >                         goto xdp_xmit;
> > > > > > -               default:
> > > > > > -                       bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
> > > > > > -                       fallthrough;
> > > > > > -               case XDP_ABORTED:
> > > > > > -                       trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > > > -                       goto err_xdp;
> > > > > > -               case XDP_DROP:
> > > > > > +
> > > > > > +               case VIRTNET_XDP_RES_DROP:
> > > > > >                         goto err_xdp;
> > > > > >                 }
> > > > > >         }
> > > > > > @@ -1277,7 +1313,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> > > > > >         if (xdp_prog) {
> > > > > >                 unsigned int xdp_frags_truesz = 0;
> > > > > >                 struct skb_shared_info *shinfo;
> > > > > > -               struct xdp_frame *xdpf;
> > > > > >                 struct page *xdp_page;
> > > > > >                 struct xdp_buff xdp;
> > > > > >                 void *data;
> > > > > > @@ -1294,49 +1329,22 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> > > > > >                 if (unlikely(err))
> > > > > >                         goto err_xdp_frags;
> > > > > >
> > > > > > -               act = bpf_prog_run_xdp(xdp_prog, &xdp);
> > > > > > -               stats->xdp_packets++;
> > > > > > +               act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
> > > > > >
> > > > > >                 switch (act) {
> > > > > > -               case XDP_PASS:
> > > > > > +               case VIRTNET_XDP_RES_PASS:
> > > > > >                         head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
> > > > > >                         if (unlikely(!head_skb))
> > > > > >                                 goto err_xdp_frags;
> > > > > >
> > > > > >                         rcu_read_unlock();
> > > > > >                         return head_skb;
> > > > > > -               case XDP_TX:
> > > > > > -                       stats->xdp_tx++;
> > > > > > -                       xdpf = xdp_convert_buff_to_frame(&xdp);
> > > > > > -                       if (unlikely(!xdpf)) {
> > > > > > -                               netdev_dbg(dev, "convert buff to frame failed for xdp\n");
> > > > >
> > > > > Nit: This debug is lost after the conversion.
> > > >
> > > > Will fix.
> > > >
> > > > Thanks.
> > > >
> > > > >
> > > > > Thanks
> > > > >
> > > > > > -                               goto err_xdp_frags;
> > > > > > -                       }
> > > > > > -                       err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > > > > > -                       if (unlikely(!err)) {
> > > > > > -                               xdp_return_frame_rx_napi(xdpf);
> > > > > > -                       } else if (unlikely(err < 0)) {
> > > > > > -                               trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > > > -                               goto err_xdp_frags;
> > > > > > -                       }
> > > > > > -                       *xdp_xmit |= VIRTIO_XDP_TX;
> > > > > > -                       rcu_read_unlock();
> > > > > > -                       goto xdp_xmit;
> > > > > > -               case XDP_REDIRECT:
> > > > > > -                       stats->xdp_redirects++;
> > > > > > -                       err = xdp_do_redirect(dev, &xdp, xdp_prog);
> > > > > > -                       if (err)
> > > > > > -                               goto err_xdp_frags;
> > > > > > -                       *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > > > > +
> > > > > > +               case VIRTNET_XDP_RES_CONSUMED:
> > > > > >                         rcu_read_unlock();
> > > > > >                         goto xdp_xmit;
> > > > > > -               default:
> > > > > > -                       bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
> > > > > > -                       fallthrough;
> > > > > > -               case XDP_ABORTED:
> > > > > > -                       trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > > > -                       fallthrough;
> > > > > > -               case XDP_DROP:
> > > > > > +
> > > > > > +               case VIRTNET_XDP_RES_DROP:
> > > > > >                         goto err_xdp_frags;
> > > > > >                 }
> > > > > >  err_xdp_frags:
> > > > > > --
> > > > > > 2.32.0.3.g01195cf9f
> > > > > >
> > > > >
> > > >
> > >
> >
>

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 3/8] virtio_net: introduce virtnet_xdp_handler() to seprate the logic of run xdp
  2023-04-04  6:44             ` Xuan Zhuo
@ 2023-04-04  7:01               ` Jason Wang
  2023-04-04  7:06                 ` Xuan Zhuo
  0 siblings, 1 reply; 41+ messages in thread
From: Jason Wang @ 2023-04-04  7:01 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, Michael S. Tsirkin, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

On Tue, Apr 4, 2023 at 2:55 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> On Tue, 4 Apr 2023 14:35:05 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > On Tue, Apr 4, 2023 at 2:22 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > >
> > > On Tue, 4 Apr 2023 13:04:02 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > On Mon, Apr 3, 2023 at 12:17 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > >
> > > > > On Mon, 3 Apr 2023 10:43:03 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > On Tue, Mar 28, 2023 at 8:04 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > >
> > > > > > > At present, we have two similar logic to perform the XDP prog.
> > > > > > >
> > > > > > > Therefore, this PATCH separates the code of executing XDP, which is
> > > > > > > conducive to later maintenance.
> > > > > > >
> > > > > > > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > > > > > > ---
> > > > > > >  drivers/net/virtio_net.c | 142 +++++++++++++++++++++------------------
> > > > > > >  1 file changed, 75 insertions(+), 67 deletions(-)
> > > > > > >
> > > > > > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > > > > > index bb426958cdd4..72b9d6ee4024 100644
> > > > > > > --- a/drivers/net/virtio_net.c
> > > > > > > +++ b/drivers/net/virtio_net.c
> > > > > > > @@ -301,6 +301,15 @@ struct padded_vnet_hdr {
> > > > > > >         char padding[12];
> > > > > > >  };
> > > > > > >
> > > > > > > +enum {
> > > > > > > +       /* xdp pass */
> > > > > > > +       VIRTNET_XDP_RES_PASS,
> > > > > > > +       /* drop packet. the caller needs to release the page. */
> > > > > > > +       VIRTNET_XDP_RES_DROP,
> > > > > > > +       /* packet is consumed by xdp. the caller needs to do nothing. */
> > > > > > > +       VIRTNET_XDP_RES_CONSUMED,
> > > > > > > +};
> > > > > >
> > > > > > I'd prefer this to be done on top unless it is a must. But I don't see
> > > > > > any advantage of introducing this, it's partial mapping of XDP action
> > > > > > and it needs to be extended when XDP action is extended. (And we've
> > > > > > already had: VIRTIO_XDP_REDIR and VIRTIO_XDP_TX ...)
> > > > >
> > > > > No, these are the three states of buffer after XDP processing.
> > > > >
> > > > > * PASS: goto make skb
> > > >
> > > > XDP_PASS goes for this.
> > > >
> > > > > * DROP: we should release buffer
> > > >
> > > > XDP_DROP and error conditions go with this.
> > > >
> > > > > * CONSUMED: the xdp prog used the buffer, we do nothing
> > > >
> > > > XDP_TX/XDP_REDIRECTION goes for this.
> > > >
> > > > So virtnet_xdp_handler() just maps the XDP action plus the error
> > > > conditions to the above three states.
> > > >
> > > > We can simply map error to XDP_DROP like:
> > > >
> > > >        case XDP_TX:
> > > >               stats->xdp_tx++;
> > > >                xdpf = xdp_convert_buff_to_frame(xdp);
> > > >                if (unlikely(!xdpf))
> > > >                        return XDP_DROP;
> > > >
> > > > A good side effect is to avoid the xdp_xmit pointer to be passed to
> > > > the function.
> > >
> > >
> > > So, I guess you mean this:
> > >
> > >         switch (act) {
> > >         case XDP_PASS:
> > >                 /* handle pass */
> > >                 return skb;
> > >
> > >         case XDP_TX:
> > >                 *xdp_xmit |= VIRTIO_XDP_TX;
> > >                 goto xmit;
> > >
> > >         case XDP_REDIRECT:
> > >                 *xdp_xmit |= VIRTIO_XDP_REDIR;
> > >                 goto xmit;
> > >
> > >         case XDP_DROP:
> > >         default:
> > >                 goto err_xdp;
> > >         }
> > >
> > > I have to say there is no problem from the perspective of code implementation.
> >
> > Note that this is the current logic where it is determined in
> > receive_small() and receive_mergeable().
>
> Yes, but the purpose of these patches is to simplify the callers.

You mean simplifying receive_small()/receive_mergeable()?

>
> >
> > >
> > > But if a new action like XDP_TX/XDP_REDIRECT is added in the future, then
> > > we must modify all the callers.
> >
> > This is fine since we only use a single type for XDP action.
>
> a single type?

Instead of (partial) duplicating XDP actions in the new enums.

>
> >
> > > This is the benefit of using CONSUMED.
> >
> > It's very hard to say, e.g if we want to support cloning in the future.
>
> Cloning? You mean cloning a new buffer?
>
> It is true that no matter the implementation, the logic must be modified.

Yes.

>
> >
> > >
> > > I think it is a good advantage to put xdp_xmit in virtnet_xdp_handler(),
> > > which makes the caller not care too much about these details.
> >
> > This part I don't understand, having xdp_xmit means the caller need to
> > know whether it is xmited or redirected. The point of the enum is to
> > hide the XDP actions, but it's conflict with what xdp_xmit who want to
> > expose (part of) the XDP actions.
>
> I mean, no matter what virtnet_xdp_handler() returns, an XDP action or a value I
> defined, I want to hide the modification of xdp_xmit inside virtnet_xdp_handler().
>
> Even if virtnet_xdp_handler() returns XDP_TX, we can still complete the
> modification of *xdp_xmit within virtnet_xdp_handler().
>
>
> >
> > > If you take into
> > > account the problem of increasing the number of parameters, I advise to put it
> > > in rq.
> >
> > I don't have strong opinion to introduce the enum,
>
> OK, I will drop these new enums.

Just to make sure we are on the same page. I mean, if there is no
objection from others, I'm ok to have an enum, but we need to use a
separate patch to do that.

>
> > what I want to say
> > is, use a separated patch to do that.
>
> Does this part refer to putting xdp_xmit in rq?

I mean it's better to be done separately. But I don't see the
advantage of this other than reducing the parameters.

Thanks

>
> Thanks.
>
>
> >
> > Thanks
> >
> > >
> > > Thanks.
> > >
> > >
> > >
> > > >
> > > > >
> > > > > The latter two are not particularly related to XDP ACTION. And it does not need
> > > > > to extend when XDP action is extended. At least I have not thought of this
> > > > > situation.
> > > >
> > > > What's the advantages of such indirection compared to using XDP action directly?
> > > >
> > > > Thanks
> > > >
> > > > >
> > > > >
> > > > > >
> > > > > > > +
> > > > > > >  static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
> > > > > > >  static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
> > > > > > >
> > > > > > > @@ -789,6 +798,59 @@ static int virtnet_xdp_xmit(struct net_device *dev,
> > > > > > >         return ret;
> > > > > > >  }
> > > > > > >
> > > > > > > +static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> > > > > > > +                              struct net_device *dev,
> > > > > > > +                              unsigned int *xdp_xmit,
> > > > > > > +                              struct virtnet_rq_stats *stats)
> > > > > > > +{
> > > > > > > +       struct xdp_frame *xdpf;
> > > > > > > +       int err;
> > > > > > > +       u32 act;
> > > > > > > +
> > > > > > > +       act = bpf_prog_run_xdp(xdp_prog, xdp);
> > > > > > > +       stats->xdp_packets++;
> > > > > > > +
> > > > > > > +       switch (act) {
> > > > > > > +       case XDP_PASS:
> > > > > > > +               return VIRTNET_XDP_RES_PASS;
> > > > > > > +
> > > > > > > +       case XDP_TX:
> > > > > > > +               stats->xdp_tx++;
> > > > > > > +               xdpf = xdp_convert_buff_to_frame(xdp);
> > > > > > > +               if (unlikely(!xdpf))
> > > > > > > +                       return VIRTNET_XDP_RES_DROP;
> > > > > > > +
> > > > > > > +               err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > > > > > > +               if (unlikely(!err)) {
> > > > > > > +                       xdp_return_frame_rx_napi(xdpf);
> > > > > > > +               } else if (unlikely(err < 0)) {
> > > > > > > +                       trace_xdp_exception(dev, xdp_prog, act);
> > > > > > > +                       return VIRTNET_XDP_RES_DROP;
> > > > > > > +               }
> > > > > > > +
> > > > > > > +               *xdp_xmit |= VIRTIO_XDP_TX;
> > > > > > > +               return VIRTNET_XDP_RES_CONSUMED;
> > > > > > > +
> > > > > > > +       case XDP_REDIRECT:
> > > > > > > +               stats->xdp_redirects++;
> > > > > > > +               err = xdp_do_redirect(dev, xdp, xdp_prog);
> > > > > > > +               if (err)
> > > > > > > +                       return VIRTNET_XDP_RES_DROP;
> > > > > > > +
> > > > > > > +               *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > > > > > +               return VIRTNET_XDP_RES_CONSUMED;
> > > > > > > +
> > > > > > > +       default:
> > > > > > > +               bpf_warn_invalid_xdp_action(dev, xdp_prog, act);
> > > > > > > +               fallthrough;
> > > > > > > +       case XDP_ABORTED:
> > > > > > > +               trace_xdp_exception(dev, xdp_prog, act);
> > > > > > > +               fallthrough;
> > > > > > > +       case XDP_DROP:
> > > > > > > +               return VIRTNET_XDP_RES_DROP;
> > > > > > > +       }
> > > > > > > +}
> > > > > > > +
> > > > > > >  static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
> > > > > > >  {
> > > > > > >         return vi->xdp_enabled ? VIRTIO_XDP_HEADROOM : 0;
> > > > > > > @@ -876,7 +938,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > > > > > >         struct page *page = virt_to_head_page(buf);
> > > > > > >         unsigned int delta = 0;
> > > > > > >         struct page *xdp_page;
> > > > > > > -       int err;
> > > > > > >         unsigned int metasize = 0;
> > > > > > >
> > > > > > >         len -= vi->hdr_len;
> > > > > > > @@ -898,7 +959,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > > > > > >         xdp_prog = rcu_dereference(rq->xdp_prog);
> > > > > > >         if (xdp_prog) {
> > > > > > >                 struct virtio_net_hdr_mrg_rxbuf *hdr = buf + header_offset;
> > > > > > > -               struct xdp_frame *xdpf;
> > > > > > >                 struct xdp_buff xdp;
> > > > > > >                 void *orig_data;
> > > > > > >                 u32 act;
> > > > > > > @@ -931,46 +991,22 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > > > > > >                 xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len,
> > > > > > >                                  xdp_headroom, len, true);
> > > > > > >                 orig_data = xdp.data;
> > > > > > > -               act = bpf_prog_run_xdp(xdp_prog, &xdp);
> > > > > > > -               stats->xdp_packets++;
> > > > > > > +
> > > > > > > +               act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
> > > > > > >
> > > > > > >                 switch (act) {
> > > > > > > -               case XDP_PASS:
> > > > > > > +               case VIRTNET_XDP_RES_PASS:
> > > > > > >                         /* Recalculate length in case bpf program changed it */
> > > > > > >                         delta = orig_data - xdp.data;
> > > > > > >                         len = xdp.data_end - xdp.data;
> > > > > > >                         metasize = xdp.data - xdp.data_meta;
> > > > > > >                         break;
> > > > > > > -               case XDP_TX:
> > > > > > > -                       stats->xdp_tx++;
> > > > > > > -                       xdpf = xdp_convert_buff_to_frame(&xdp);
> > > > > > > -                       if (unlikely(!xdpf))
> > > > > > > -                               goto err_xdp;
> > > > > > > -                       err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > > > > > > -                       if (unlikely(!err)) {
> > > > > > > -                               xdp_return_frame_rx_napi(xdpf);
> > > > > > > -                       } else if (unlikely(err < 0)) {
> > > > > > > -                               trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > > > > -                               goto err_xdp;
> > > > > > > -                       }
> > > > > > > -                       *xdp_xmit |= VIRTIO_XDP_TX;
> > > > > > > -                       rcu_read_unlock();
> > > > > > > -                       goto xdp_xmit;
> > > > > > > -               case XDP_REDIRECT:
> > > > > > > -                       stats->xdp_redirects++;
> > > > > > > -                       err = xdp_do_redirect(dev, &xdp, xdp_prog);
> > > > > > > -                       if (err)
> > > > > > > -                               goto err_xdp;
> > > > > > > -                       *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > > > > > +
> > > > > > > +               case VIRTNET_XDP_RES_CONSUMED:
> > > > > > >                         rcu_read_unlock();
> > > > > > >                         goto xdp_xmit;
> > > > > > > -               default:
> > > > > > > -                       bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
> > > > > > > -                       fallthrough;
> > > > > > > -               case XDP_ABORTED:
> > > > > > > -                       trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > > > > -                       goto err_xdp;
> > > > > > > -               case XDP_DROP:
> > > > > > > +
> > > > > > > +               case VIRTNET_XDP_RES_DROP:
> > > > > > >                         goto err_xdp;
> > > > > > >                 }
> > > > > > >         }
> > > > > > > @@ -1277,7 +1313,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> > > > > > >         if (xdp_prog) {
> > > > > > >                 unsigned int xdp_frags_truesz = 0;
> > > > > > >                 struct skb_shared_info *shinfo;
> > > > > > > -               struct xdp_frame *xdpf;
> > > > > > >                 struct page *xdp_page;
> > > > > > >                 struct xdp_buff xdp;
> > > > > > >                 void *data;
> > > > > > > @@ -1294,49 +1329,22 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> > > > > > >                 if (unlikely(err))
> > > > > > >                         goto err_xdp_frags;
> > > > > > >
> > > > > > > -               act = bpf_prog_run_xdp(xdp_prog, &xdp);
> > > > > > > -               stats->xdp_packets++;
> > > > > > > +               act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
> > > > > > >
> > > > > > >                 switch (act) {
> > > > > > > -               case XDP_PASS:
> > > > > > > +               case VIRTNET_XDP_RES_PASS:
> > > > > > >                         head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
> > > > > > >                         if (unlikely(!head_skb))
> > > > > > >                                 goto err_xdp_frags;
> > > > > > >
> > > > > > >                         rcu_read_unlock();
> > > > > > >                         return head_skb;
> > > > > > > -               case XDP_TX:
> > > > > > > -                       stats->xdp_tx++;
> > > > > > > -                       xdpf = xdp_convert_buff_to_frame(&xdp);
> > > > > > > -                       if (unlikely(!xdpf)) {
> > > > > > > -                               netdev_dbg(dev, "convert buff to frame failed for xdp\n");
> > > > > >
> > > > > > Nit: This debug is lost after the conversion.
> > > > >
> > > > > Will fix.
> > > > >
> > > > > Thanks.
> > > > >
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > > -                               goto err_xdp_frags;
> > > > > > > -                       }
> > > > > > > -                       err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > > > > > > -                       if (unlikely(!err)) {
> > > > > > > -                               xdp_return_frame_rx_napi(xdpf);
> > > > > > > -                       } else if (unlikely(err < 0)) {
> > > > > > > -                               trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > > > > -                               goto err_xdp_frags;
> > > > > > > -                       }
> > > > > > > -                       *xdp_xmit |= VIRTIO_XDP_TX;
> > > > > > > -                       rcu_read_unlock();
> > > > > > > -                       goto xdp_xmit;
> > > > > > > -               case XDP_REDIRECT:
> > > > > > > -                       stats->xdp_redirects++;
> > > > > > > -                       err = xdp_do_redirect(dev, &xdp, xdp_prog);
> > > > > > > -                       if (err)
> > > > > > > -                               goto err_xdp_frags;
> > > > > > > -                       *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > > > > > +
> > > > > > > +               case VIRTNET_XDP_RES_CONSUMED:
> > > > > > >                         rcu_read_unlock();
> > > > > > >                         goto xdp_xmit;
> > > > > > > -               default:
> > > > > > > -                       bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
> > > > > > > -                       fallthrough;
> > > > > > > -               case XDP_ABORTED:
> > > > > > > -                       trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > > > > -                       fallthrough;
> > > > > > > -               case XDP_DROP:
> > > > > > > +
> > > > > > > +               case VIRTNET_XDP_RES_DROP:
> > > > > > >                         goto err_xdp_frags;
> > > > > > >                 }
> > > > > > >  err_xdp_frags:
> > > > > > > --
> > > > > > > 2.32.0.3.g01195cf9f
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>



* Re: [PATCH net-next 3/8] virtio_net: introduce virtnet_xdp_handler() to separate the logic of running xdp
  2023-04-04  7:01               ` Jason Wang
@ 2023-04-04  7:06                 ` Xuan Zhuo
  2023-04-04  8:03                   ` Jason Wang
  0 siblings, 1 reply; 41+ messages in thread
From: Xuan Zhuo @ 2023-04-04  7:06 UTC (permalink / raw)
  To: Jason Wang
  Cc: netdev, Michael S. Tsirkin, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

On Tue, 4 Apr 2023 15:01:36 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Tue, Apr 4, 2023 at 2:55 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > On Tue, 4 Apr 2023 14:35:05 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > On Tue, Apr 4, 2023 at 2:22 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > >
> > > > On Tue, 4 Apr 2023 13:04:02 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > On Mon, Apr 3, 2023 at 12:17 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > >
> > > > > > On Mon, 3 Apr 2023 10:43:03 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > On Tue, Mar 28, 2023 at 8:04 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > >
> > > > > > > > At present, we have two similar pieces of logic that run the XDP prog.
> > > > > > > >
> > > > > > > > Therefore, this patch separates out the code that executes XDP, which
> > > > > > > > will make later maintenance easier.
> > > > > > > >
> > > > > > > > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > > > > > > > ---
> > > > > > > >  drivers/net/virtio_net.c | 142 +++++++++++++++++++++------------------
> > > > > > > >  1 file changed, 75 insertions(+), 67 deletions(-)
> > > > > > > >
> > > > > > > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > > > > > > index bb426958cdd4..72b9d6ee4024 100644
> > > > > > > > --- a/drivers/net/virtio_net.c
> > > > > > > > +++ b/drivers/net/virtio_net.c
> > > > > > > > @@ -301,6 +301,15 @@ struct padded_vnet_hdr {
> > > > > > > >         char padding[12];
> > > > > > > >  };
> > > > > > > >
> > > > > > > > +enum {
> > > > > > > > +       /* xdp pass */
> > > > > > > > +       VIRTNET_XDP_RES_PASS,
> > > > > > > > +       /* drop packet. the caller needs to release the page. */
> > > > > > > > +       VIRTNET_XDP_RES_DROP,
> > > > > > > > +       /* packet is consumed by xdp. the caller needs to do nothing. */
> > > > > > > > +       VIRTNET_XDP_RES_CONSUMED,
> > > > > > > > +};
> > > > > > >
> > > > > > > I'd prefer this to be done on top unless it is a must. But I don't see
> > > > > > > any advantage in introducing this; it's a partial mapping of the XDP
> > > > > > > actions and it needs to be extended when the XDP actions are extended.
> > > > > > > (And we already have: VIRTIO_XDP_REDIR and VIRTIO_XDP_TX ...)
> > > > > >
> > > > > > No, these are the three states of the buffer after XDP processing.
> > > > > >
> > > > > > * PASS: goto make skb
> > > > >
> > > > > XDP_PASS goes for this.
> > > > >
> > > > > > * DROP: we should release buffer
> > > > >
> > > > > XDP_DROP and error conditions go with this.
> > > > >
> > > > > > * CONSUMED: the xdp prog used the buffer; we do nothing
> > > > >
> > > > > XDP_TX/XDP_REDIRECT go for this.
> > > > >
> > > > > So virtnet_xdp_handler() just maps the XDP action plus the error
> > > > > conditions to the above three states.
> > > > >
> > > > > We can simply map error to XDP_DROP like:
> > > > >
> > > > >        case XDP_TX:
> > > > >               stats->xdp_tx++;
> > > > >                xdpf = xdp_convert_buff_to_frame(xdp);
> > > > >                if (unlikely(!xdpf))
> > > > >                        return XDP_DROP;
> > > > >
> > > > > A good side effect is avoiding having to pass the xdp_xmit pointer to
> > > > > the function.
> > > >
> > > >
> > > > So, I guess you mean this:
> > > >
> > > >         switch (act) {
> > > >         case XDP_PASS:
> > > >                 /* handle pass */
> > > >                 return skb;
> > > >
> > > >         case XDP_TX:
> > > >                 *xdp_xmit |= VIRTIO_XDP_TX;
> > > >                 goto xmit;
> > > >
> > > >         case XDP_REDIRECT:
> > > >                 *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > >                 goto xmit;
> > > >
> > > >         case XDP_DROP:
> > > >         default:
> > > >                 goto err_xdp;
> > > >         }
> > > >
> > > > I have to say there is no problem from the perspective of code implementation.
> > >
> > > Note that this is the current logic where it is determined in
> > > receive_small() and receive_mergeable().
> >
> > Yes, but the purpose of these patches is to simplify the callers.
>
> You mean simplify the receive_small()/mergeable()?

YES.


>
> >
> > >
> > > >
> > > > But if a new action like XDP_TX or XDP_REDIRECT is added in the future, then
> > > > we must modify all the callers.
> > >
> > > This is fine since we only use a single type for XDP action.
> >
> > a single type?
>
> Instead of (partially) duplicating the XDP actions in the new enums.


I think there is a real misunderstanding here. So is this what you have in mind?

   VIRTNET_XDP_RES_PASS,
   VIRTNET_XDP_RES_TX_REDIRECT,
   VIRTNET_XDP_RES_DROP,



>
> >
> > >
> > > > This is the benefit of using CONSUMED.
> > >
> > > It's very hard to say, e.g. if we want to support cloning in the future.
> >
> > cloning? You mean clone one new buffer.
> >
> > It is true that no matter the implementation, the logic must be modified.
>
> Yes.
>
> >
> > >
> > > >
> > > > I think it is a good advantage to put xdp_xmit in virtnet_xdp_handler(),
> > > > which makes the caller not care too much about these details.
> > >
> > > This part I don't understand: having xdp_xmit means the caller needs to
> > > know whether the packet was xmited or redirected. The point of the enum is to
> > > hide the XDP actions, but that conflicts with xdp_xmit, which wants to
> > > expose (part of) the XDP actions.
> >
> > I mean, no matter what virtnet_xdp_handler() returns, an XDP action or one I
> > defined, I want to hide the modification of xdp_xmit inside virtnet_xdp_handler().
> >
> > Even if virtnet_xdp_handler() returns XDP_TX, we can still complete the
> > modification of xdp_xmit within virtnet_xdp_handler().
> >
> >
> > >
> > > > If you take into
> > > > account the problem of increasing the number of parameters, I advise to put it
> > > > in rq.
> > >
> > > I don't have strong opinion to introduce the enum,
> >
> > OK, I will drop these new enums.
>
> Just to make sure we are at the same page. I mean, if there is no
> objection from others, I'm ok to have an enum, but we need to use a
> separate patch to do that.

Do you mean introducing the enums alone, without virtnet_xdp_handler()?

>
> >
> > > what I want to say
> > > is, use a separated patch to do that.
> >
> > Does this part refer to putting xdp_xmit in rq?
>
> I mean it's better to be done separately. But I don't see the
> advantage of this other than reducing the parameters.

I think so also.

Thanks.


>
> Thanks
>
> >
> > Thanks.
> >
> >
> > >
> > > Thanks
> > >
> > > >
> > > > Thanks.
> > > >
> > > >
> > > >
> > > > >
> > > > > >
> > > > > > The latter two are not particularly tied to the XDP actions. And they do not need
> > > > > > to be extended when the XDP actions are extended. At least I have not thought of such a
> > > > > > situation.
> > > > >
> > > > > What are the advantages of such an indirection compared to using the XDP actions directly?
> > > > >
> > > > > Thanks
> > > > >
> > > > > >
> > > > > >
> > > > > > >
> > > > > > > > +
> > > > > > > >  static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
> > > > > > > >  static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
> > > > > > > >
> > > > > > > > @@ -789,6 +798,59 @@ static int virtnet_xdp_xmit(struct net_device *dev,
> > > > > > > >         return ret;
> > > > > > > >  }
> > > > > > > >
> > > > > > > > +static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> > > > > > > > +                              struct net_device *dev,
> > > > > > > > +                              unsigned int *xdp_xmit,
> > > > > > > > +                              struct virtnet_rq_stats *stats)
> > > > > > > > +{
> > > > > > > > +       struct xdp_frame *xdpf;
> > > > > > > > +       int err;
> > > > > > > > +       u32 act;
> > > > > > > > +
> > > > > > > > +       act = bpf_prog_run_xdp(xdp_prog, xdp);
> > > > > > > > +       stats->xdp_packets++;
> > > > > > > > +
> > > > > > > > +       switch (act) {
> > > > > > > > +       case XDP_PASS:
> > > > > > > > +               return VIRTNET_XDP_RES_PASS;
> > > > > > > > +
> > > > > > > > +       case XDP_TX:
> > > > > > > > +               stats->xdp_tx++;
> > > > > > > > +               xdpf = xdp_convert_buff_to_frame(xdp);
> > > > > > > > +               if (unlikely(!xdpf))
> > > > > > > > +                       return VIRTNET_XDP_RES_DROP;
> > > > > > > > +
> > > > > > > > +               err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > > > > > > > +               if (unlikely(!err)) {
> > > > > > > > +                       xdp_return_frame_rx_napi(xdpf);
> > > > > > > > +               } else if (unlikely(err < 0)) {
> > > > > > > > +                       trace_xdp_exception(dev, xdp_prog, act);
> > > > > > > > +                       return VIRTNET_XDP_RES_DROP;
> > > > > > > > +               }
> > > > > > > > +
> > > > > > > > +               *xdp_xmit |= VIRTIO_XDP_TX;
> > > > > > > > +               return VIRTNET_XDP_RES_CONSUMED;
> > > > > > > > +
> > > > > > > > +       case XDP_REDIRECT:
> > > > > > > > +               stats->xdp_redirects++;
> > > > > > > > +               err = xdp_do_redirect(dev, xdp, xdp_prog);
> > > > > > > > +               if (err)
> > > > > > > > +                       return VIRTNET_XDP_RES_DROP;
> > > > > > > > +
> > > > > > > > +               *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > > > > > > +               return VIRTNET_XDP_RES_CONSUMED;
> > > > > > > > +
> > > > > > > > +       default:
> > > > > > > > +               bpf_warn_invalid_xdp_action(dev, xdp_prog, act);
> > > > > > > > +               fallthrough;
> > > > > > > > +       case XDP_ABORTED:
> > > > > > > > +               trace_xdp_exception(dev, xdp_prog, act);
> > > > > > > > +               fallthrough;
> > > > > > > > +       case XDP_DROP:
> > > > > > > > +               return VIRTNET_XDP_RES_DROP;
> > > > > > > > +       }
> > > > > > > > +}
> > > > > > > > +
> > > > > > > >  static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
> > > > > > > >  {
> > > > > > > >         return vi->xdp_enabled ? VIRTIO_XDP_HEADROOM : 0;
> > > > > > > > @@ -876,7 +938,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > > > > > > >         struct page *page = virt_to_head_page(buf);
> > > > > > > >         unsigned int delta = 0;
> > > > > > > >         struct page *xdp_page;
> > > > > > > > -       int err;
> > > > > > > >         unsigned int metasize = 0;
> > > > > > > >
> > > > > > > >         len -= vi->hdr_len;
> > > > > > > > @@ -898,7 +959,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > > > > > > >         xdp_prog = rcu_dereference(rq->xdp_prog);
> > > > > > > >         if (xdp_prog) {
> > > > > > > >                 struct virtio_net_hdr_mrg_rxbuf *hdr = buf + header_offset;
> > > > > > > > -               struct xdp_frame *xdpf;
> > > > > > > >                 struct xdp_buff xdp;
> > > > > > > >                 void *orig_data;
> > > > > > > >                 u32 act;
> > > > > > > > @@ -931,46 +991,22 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > > > > > > >                 xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len,
> > > > > > > >                                  xdp_headroom, len, true);
> > > > > > > >                 orig_data = xdp.data;
> > > > > > > > -               act = bpf_prog_run_xdp(xdp_prog, &xdp);
> > > > > > > > -               stats->xdp_packets++;
> > > > > > > > +
> > > > > > > > +               act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
> > > > > > > >
> > > > > > > >                 switch (act) {
> > > > > > > > -               case XDP_PASS:
> > > > > > > > +               case VIRTNET_XDP_RES_PASS:
> > > > > > > >                         /* Recalculate length in case bpf program changed it */
> > > > > > > >                         delta = orig_data - xdp.data;
> > > > > > > >                         len = xdp.data_end - xdp.data;
> > > > > > > >                         metasize = xdp.data - xdp.data_meta;
> > > > > > > >                         break;
> > > > > > > > -               case XDP_TX:
> > > > > > > > -                       stats->xdp_tx++;
> > > > > > > > -                       xdpf = xdp_convert_buff_to_frame(&xdp);
> > > > > > > > -                       if (unlikely(!xdpf))
> > > > > > > > -                               goto err_xdp;
> > > > > > > > -                       err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > > > > > > > -                       if (unlikely(!err)) {
> > > > > > > > -                               xdp_return_frame_rx_napi(xdpf);
> > > > > > > > -                       } else if (unlikely(err < 0)) {
> > > > > > > > -                               trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > > > > > -                               goto err_xdp;
> > > > > > > > -                       }
> > > > > > > > -                       *xdp_xmit |= VIRTIO_XDP_TX;
> > > > > > > > -                       rcu_read_unlock();
> > > > > > > > -                       goto xdp_xmit;
> > > > > > > > -               case XDP_REDIRECT:
> > > > > > > > -                       stats->xdp_redirects++;
> > > > > > > > -                       err = xdp_do_redirect(dev, &xdp, xdp_prog);
> > > > > > > > -                       if (err)
> > > > > > > > -                               goto err_xdp;
> > > > > > > > -                       *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > > > > > > +
> > > > > > > > +               case VIRTNET_XDP_RES_CONSUMED:
> > > > > > > >                         rcu_read_unlock();
> > > > > > > >                         goto xdp_xmit;
> > > > > > > > -               default:
> > > > > > > > -                       bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
> > > > > > > > -                       fallthrough;
> > > > > > > > -               case XDP_ABORTED:
> > > > > > > > -                       trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > > > > > -                       goto err_xdp;
> > > > > > > > -               case XDP_DROP:
> > > > > > > > +
> > > > > > > > +               case VIRTNET_XDP_RES_DROP:
> > > > > > > >                         goto err_xdp;
> > > > > > > >                 }
> > > > > > > >         }
> > > > > > > > @@ -1277,7 +1313,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> > > > > > > >         if (xdp_prog) {
> > > > > > > >                 unsigned int xdp_frags_truesz = 0;
> > > > > > > >                 struct skb_shared_info *shinfo;
> > > > > > > > -               struct xdp_frame *xdpf;
> > > > > > > >                 struct page *xdp_page;
> > > > > > > >                 struct xdp_buff xdp;
> > > > > > > >                 void *data;
> > > > > > > > @@ -1294,49 +1329,22 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> > > > > > > >                 if (unlikely(err))
> > > > > > > >                         goto err_xdp_frags;
> > > > > > > >
> > > > > > > > -               act = bpf_prog_run_xdp(xdp_prog, &xdp);
> > > > > > > > -               stats->xdp_packets++;
> > > > > > > > +               act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
> > > > > > > >
> > > > > > > >                 switch (act) {
> > > > > > > > -               case XDP_PASS:
> > > > > > > > +               case VIRTNET_XDP_RES_PASS:
> > > > > > > >                         head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
> > > > > > > >                         if (unlikely(!head_skb))
> > > > > > > >                                 goto err_xdp_frags;
> > > > > > > >
> > > > > > > >                         rcu_read_unlock();
> > > > > > > >                         return head_skb;
> > > > > > > > -               case XDP_TX:
> > > > > > > > -                       stats->xdp_tx++;
> > > > > > > > -                       xdpf = xdp_convert_buff_to_frame(&xdp);
> > > > > > > > -                       if (unlikely(!xdpf)) {
> > > > > > > > -                               netdev_dbg(dev, "convert buff to frame failed for xdp\n");
> > > > > > >
> > > > > > > Nit: This debug is lost after the conversion.
> > > > > >
> > > > > > Will fix.
> > > > > >
> > > > > > Thanks.
> > > > > >
> > > > > > >
> > > > > > > Thanks
> > > > > > >
> > > > > > > > -                               goto err_xdp_frags;
> > > > > > > > -                       }
> > > > > > > > -                       err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > > > > > > > -                       if (unlikely(!err)) {
> > > > > > > > -                               xdp_return_frame_rx_napi(xdpf);
> > > > > > > > -                       } else if (unlikely(err < 0)) {
> > > > > > > > -                               trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > > > > > -                               goto err_xdp_frags;
> > > > > > > > -                       }
> > > > > > > > -                       *xdp_xmit |= VIRTIO_XDP_TX;
> > > > > > > > -                       rcu_read_unlock();
> > > > > > > > -                       goto xdp_xmit;
> > > > > > > > -               case XDP_REDIRECT:
> > > > > > > > -                       stats->xdp_redirects++;
> > > > > > > > -                       err = xdp_do_redirect(dev, &xdp, xdp_prog);
> > > > > > > > -                       if (err)
> > > > > > > > -                               goto err_xdp_frags;
> > > > > > > > -                       *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > > > > > > +
> > > > > > > > +               case VIRTNET_XDP_RES_CONSUMED:
> > > > > > > >                         rcu_read_unlock();
> > > > > > > >                         goto xdp_xmit;
> > > > > > > > -               default:
> > > > > > > > -                       bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
> > > > > > > > -                       fallthrough;
> > > > > > > > -               case XDP_ABORTED:
> > > > > > > > -                       trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > > > > > -                       fallthrough;
> > > > > > > > -               case XDP_DROP:
> > > > > > > > +
> > > > > > > > +               case VIRTNET_XDP_RES_DROP:
> > > > > > > >                         goto err_xdp_frags;
> > > > > > > >                 }
> > > > > > > >  err_xdp_frags:
> > > > > > > > --
> > > > > > > > 2.32.0.3.g01195cf9f
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>


* Re: [PATCH net-next 3/8] virtio_net: introduce virtnet_xdp_handler() to separate the logic of running xdp
  2023-04-04  7:06                 ` Xuan Zhuo
@ 2023-04-04  8:03                   ` Jason Wang
  2023-04-04  8:09                     ` Xuan Zhuo
  0 siblings, 1 reply; 41+ messages in thread
From: Jason Wang @ 2023-04-04  8:03 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: netdev, Michael S. Tsirkin, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

On Tue, Apr 4, 2023 at 3:12 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
>
> On Tue, 4 Apr 2023 15:01:36 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > On Tue, Apr 4, 2023 at 2:55 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > >
> > > On Tue, 4 Apr 2023 14:35:05 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > On Tue, Apr 4, 2023 at 2:22 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > >
> > > > > On Tue, 4 Apr 2023 13:04:02 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > On Mon, Apr 3, 2023 at 12:17 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > >
> > > > > > > On Mon, 3 Apr 2023 10:43:03 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > > On Tue, Mar 28, 2023 at 8:04 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > > >
> > > > > > > > > At present, we have two similar pieces of logic that run the XDP prog.
> > > > > > > > >
> > > > > > > > > Therefore, this patch separates out the code that executes XDP, which
> > > > > > > > > will make later maintenance easier.
> > > > > > > > >
> > > > > > > > > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > > > > > > > > ---
> > > > > > > > >  drivers/net/virtio_net.c | 142 +++++++++++++++++++++------------------
> > > > > > > > >  1 file changed, 75 insertions(+), 67 deletions(-)
> > > > > > > > >
> > > > > > > > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > > > > > > > index bb426958cdd4..72b9d6ee4024 100644
> > > > > > > > > --- a/drivers/net/virtio_net.c
> > > > > > > > > +++ b/drivers/net/virtio_net.c
> > > > > > > > > @@ -301,6 +301,15 @@ struct padded_vnet_hdr {
> > > > > > > > >         char padding[12];
> > > > > > > > >  };
> > > > > > > > >
> > > > > > > > > +enum {
> > > > > > > > > +       /* xdp pass */
> > > > > > > > > +       VIRTNET_XDP_RES_PASS,
> > > > > > > > > +       /* drop packet. the caller needs to release the page. */
> > > > > > > > > +       VIRTNET_XDP_RES_DROP,
> > > > > > > > > +       /* packet is consumed by xdp. the caller needs to do nothing. */
> > > > > > > > > +       VIRTNET_XDP_RES_CONSUMED,
> > > > > > > > > +};
> > > > > > > >
> > > > > > > > I'd prefer this to be done on top unless it is a must. But I don't see
> > > > > > > > any advantage in introducing this; it's a partial mapping of the XDP
> > > > > > > > actions and it needs to be extended when the XDP actions are extended.
> > > > > > > > (And we already have: VIRTIO_XDP_REDIR and VIRTIO_XDP_TX ...)
> > > > > > >
> > > > > > > No, these are the three states of the buffer after XDP processing.
> > > > > > >
> > > > > > > * PASS: goto make skb
> > > > > >
> > > > > > XDP_PASS goes for this.
> > > > > >
> > > > > > > * DROP: we should release buffer
> > > > > >
> > > > > > XDP_DROP and error conditions go with this.
> > > > > >
> > > > > > > * CONSUMED: the xdp prog used the buffer; we do nothing
> > > > > >
> > > > > > XDP_TX/XDP_REDIRECT go for this.
> > > > > >
> > > > > > So virtnet_xdp_handler() just maps the XDP action plus the error
> > > > > > conditions to the above three states.
> > > > > >
> > > > > > We can simply map error to XDP_DROP like:
> > > > > >
> > > > > >        case XDP_TX:
> > > > > >               stats->xdp_tx++;
> > > > > >                xdpf = xdp_convert_buff_to_frame(xdp);
> > > > > >                if (unlikely(!xdpf))
> > > > > >                        return XDP_DROP;
> > > > > >
> > > > > > A good side effect is avoiding having to pass the xdp_xmit pointer to
> > > > > > the function.
> > > > >
> > > > >
> > > > > So, I guess you mean this:
> > > > >
> > > > >         switch (act) {
> > > > >         case XDP_PASS:
> > > > >                 /* handle pass */
> > > > >                 return skb;
> > > > >
> > > > >         case XDP_TX:
> > > > >                 *xdp_xmit |= VIRTIO_XDP_TX;
> > > > >                 goto xmit;
> > > > >
> > > > >         case XDP_REDIRECT:
> > > > >                 *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > > >                 goto xmit;
> > > > >
> > > > >         case XDP_DROP:
> > > > >         default:
> > > > >                 goto err_xdp;
> > > > >         }
> > > > >
> > > > > I have to say there is no problem from the perspective of code implementation.
> > > >
> > > > Note that this is the current logic where it is determined in
> > > > receive_small() and receive_mergeable().
> > >
> > > Yes, but the purpose of these patches is to simplify the callers.
> >
> > You mean simplify the receive_small()/mergeable()?
>
> YES.
>
>
> >
> > >
> > > >
> > > > >
> > > > > But if a new action like XDP_TX or XDP_REDIRECT is added in the future, then
> > > > > we must modify all the callers.
> > > >
> > > > This is fine since we only use a single type for XDP action.
> > >
> > > a single type?
> >
> > Instead of (partially) duplicating the XDP actions in the new enums.
>
>
> I think there is a real misunderstanding here. So is this what you have in mind?
>
>    VIRTNET_XDP_RES_PASS,
>    VIRTNET_XDP_RES_TX_REDIRECT,
>    VIRTNET_XDP_RES_DROP,

No, I meant the enum you introduced.

>
>
>
> >
> > >
> > > >
> > > > > This is the benefit of using CONSUMED.
> > > >
> > > > It's very hard to say, e.g. if we want to support cloning in the future.
> > >
> > > cloning? You mean clone one new buffer.
> > >
> > > It is true that no matter the implementation, the logic must be modified.
> >
> > Yes.
> >
> > >
> > > >
> > > > >
> > > > > I think it is a good advantage to put xdp_xmit in virtnet_xdp_handler(),
> > > > > which makes the caller not care too much about these details.
> > > >
> > > > This part I don't understand: having xdp_xmit means the caller needs to
> > > > know whether the packet was xmited or redirected. The point of the enum is to
> > > > hide the XDP actions, but that conflicts with xdp_xmit, which wants to
> > > > expose (part of) the XDP actions.
> > >
> > > I mean, no matter what virtnet_xdp_handler() returns, an XDP action or one I
> > > defined, I want to hide the modification of xdp_xmit inside virtnet_xdp_handler().
> > >
> > > Even if virtnet_xdp_handler() returns XDP_TX, we can still complete the
> > > modification of xdp_xmit within virtnet_xdp_handler().
> > >
> > >
> > > >
> > > > > If you take into
> > > > > account the problem of increasing the number of parameters, I advise to put it
> > > > > in rq.
> > > >
> > > > I don't have strong opinion to introduce the enum,
> > >
> > > OK, I will drop these new enums.
> >
> > Just to make sure we are at the same page. I mean, if there is no
> > objection from others, I'm ok to have an enum, but we need to use a
> > separate patch to do that.
>
> Do you mean introducing the enums alone, without virtnet_xdp_handler()?

I meant having two patches:

1) split out virtnet_xdp_handler() without introducing any new enums
2) introduce the new enum to simplify the code

Thanks

>
> >
> > >
> > > > what I want to say
> > > > is, use a separated patch to do that.
> > >
> > > Does this part refer to putting xdp_xmit in rq?
> >
> > I mean it's better to be done separately. But I don't see the
> > advantage of this other than reducing the parameters.
>
> I think so also.
>
> Thanks.
>
>
> >
> > Thanks
> >
> > >
> > > Thanks.
> > >
> > >
> > > >
> > > > Thanks
> > > >
> > > > >
> > > > > Thanks.
> > > > >
> > > > >
> > > > >
> > > > > >
> > > > > > >
> > > > > > > The latter two are not particularly tied to the XDP actions. And they do not need
> > > > > > > to be extended when the XDP actions are extended. At least I have not thought of such a
> > > > > > > situation.
> > > > > >
> > > > > > What are the advantages of such an indirection compared to using the XDP actions directly?
> > > > > >
> > > > > > Thanks
> > > > > >
> > > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > > > +
> > > > > > > > >  static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
> > > > > > > > >  static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
> > > > > > > > >
> > > > > > > > > @@ -789,6 +798,59 @@ static int virtnet_xdp_xmit(struct net_device *dev,
> > > > > > > > >         return ret;
> > > > > > > > >  }
> > > > > > > > >
> > > > > > > > > +static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> > > > > > > > > +                              struct net_device *dev,
> > > > > > > > > +                              unsigned int *xdp_xmit,
> > > > > > > > > +                              struct virtnet_rq_stats *stats)
> > > > > > > > > +{
> > > > > > > > > +       struct xdp_frame *xdpf;
> > > > > > > > > +       int err;
> > > > > > > > > +       u32 act;
> > > > > > > > > +
> > > > > > > > > +       act = bpf_prog_run_xdp(xdp_prog, xdp);
> > > > > > > > > +       stats->xdp_packets++;
> > > > > > > > > +
> > > > > > > > > +       switch (act) {
> > > > > > > > > +       case XDP_PASS:
> > > > > > > > > +               return VIRTNET_XDP_RES_PASS;
> > > > > > > > > +
> > > > > > > > > +       case XDP_TX:
> > > > > > > > > +               stats->xdp_tx++;
> > > > > > > > > +               xdpf = xdp_convert_buff_to_frame(xdp);
> > > > > > > > > +               if (unlikely(!xdpf))
> > > > > > > > > +                       return VIRTNET_XDP_RES_DROP;
> > > > > > > > > +
> > > > > > > > > +               err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > > > > > > > > +               if (unlikely(!err)) {
> > > > > > > > > +                       xdp_return_frame_rx_napi(xdpf);
> > > > > > > > > +               } else if (unlikely(err < 0)) {
> > > > > > > > > +                       trace_xdp_exception(dev, xdp_prog, act);
> > > > > > > > > +                       return VIRTNET_XDP_RES_DROP;
> > > > > > > > > +               }
> > > > > > > > > +
> > > > > > > > > +               *xdp_xmit |= VIRTIO_XDP_TX;
> > > > > > > > > +               return VIRTNET_XDP_RES_CONSUMED;
> > > > > > > > > +
> > > > > > > > > +       case XDP_REDIRECT:
> > > > > > > > > +               stats->xdp_redirects++;
> > > > > > > > > +               err = xdp_do_redirect(dev, xdp, xdp_prog);
> > > > > > > > > +               if (err)
> > > > > > > > > +                       return VIRTNET_XDP_RES_DROP;
> > > > > > > > > +
> > > > > > > > > +               *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > > > > > > > +               return VIRTNET_XDP_RES_CONSUMED;
> > > > > > > > > +
> > > > > > > > > +       default:
> > > > > > > > > +               bpf_warn_invalid_xdp_action(dev, xdp_prog, act);
> > > > > > > > > +               fallthrough;
> > > > > > > > > +       case XDP_ABORTED:
> > > > > > > > > +               trace_xdp_exception(dev, xdp_prog, act);
> > > > > > > > > +               fallthrough;
> > > > > > > > > +       case XDP_DROP:
> > > > > > > > > +               return VIRTNET_XDP_RES_DROP;
> > > > > > > > > +       }
> > > > > > > > > +}
> > > > > > > > > +
> > > > > > > > >  static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
> > > > > > > > >  {
> > > > > > > > >         return vi->xdp_enabled ? VIRTIO_XDP_HEADROOM : 0;
> > > > > > > > > @@ -876,7 +938,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > > > > > > > >         struct page *page = virt_to_head_page(buf);
> > > > > > > > >         unsigned int delta = 0;
> > > > > > > > >         struct page *xdp_page;
> > > > > > > > > -       int err;
> > > > > > > > >         unsigned int metasize = 0;
> > > > > > > > >
> > > > > > > > >         len -= vi->hdr_len;
> > > > > > > > > @@ -898,7 +959,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > > > > > > > >         xdp_prog = rcu_dereference(rq->xdp_prog);
> > > > > > > > >         if (xdp_prog) {
> > > > > > > > >                 struct virtio_net_hdr_mrg_rxbuf *hdr = buf + header_offset;
> > > > > > > > > -               struct xdp_frame *xdpf;
> > > > > > > > >                 struct xdp_buff xdp;
> > > > > > > > >                 void *orig_data;
> > > > > > > > >                 u32 act;
> > > > > > > > > @@ -931,46 +991,22 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > > > > > > > >                 xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len,
> > > > > > > > >                                  xdp_headroom, len, true);
> > > > > > > > >                 orig_data = xdp.data;
> > > > > > > > > -               act = bpf_prog_run_xdp(xdp_prog, &xdp);
> > > > > > > > > -               stats->xdp_packets++;
> > > > > > > > > +
> > > > > > > > > +               act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
> > > > > > > > >
> > > > > > > > >                 switch (act) {
> > > > > > > > > -               case XDP_PASS:
> > > > > > > > > +               case VIRTNET_XDP_RES_PASS:
> > > > > > > > >                         /* Recalculate length in case bpf program changed it */
> > > > > > > > >                         delta = orig_data - xdp.data;
> > > > > > > > >                         len = xdp.data_end - xdp.data;
> > > > > > > > >                         metasize = xdp.data - xdp.data_meta;
> > > > > > > > >                         break;
> > > > > > > > > -               case XDP_TX:
> > > > > > > > > -                       stats->xdp_tx++;
> > > > > > > > > -                       xdpf = xdp_convert_buff_to_frame(&xdp);
> > > > > > > > > -                       if (unlikely(!xdpf))
> > > > > > > > > -                               goto err_xdp;
> > > > > > > > > -                       err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > > > > > > > > -                       if (unlikely(!err)) {
> > > > > > > > > -                               xdp_return_frame_rx_napi(xdpf);
> > > > > > > > > -                       } else if (unlikely(err < 0)) {
> > > > > > > > > -                               trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > > > > > > -                               goto err_xdp;
> > > > > > > > > -                       }
> > > > > > > > > -                       *xdp_xmit |= VIRTIO_XDP_TX;
> > > > > > > > > -                       rcu_read_unlock();
> > > > > > > > > -                       goto xdp_xmit;
> > > > > > > > > -               case XDP_REDIRECT:
> > > > > > > > > -                       stats->xdp_redirects++;
> > > > > > > > > -                       err = xdp_do_redirect(dev, &xdp, xdp_prog);
> > > > > > > > > -                       if (err)
> > > > > > > > > -                               goto err_xdp;
> > > > > > > > > -                       *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > > > > > > > +
> > > > > > > > > +               case VIRTNET_XDP_RES_CONSUMED:
> > > > > > > > >                         rcu_read_unlock();
> > > > > > > > >                         goto xdp_xmit;
> > > > > > > > > -               default:
> > > > > > > > > -                       bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
> > > > > > > > > -                       fallthrough;
> > > > > > > > > -               case XDP_ABORTED:
> > > > > > > > > -                       trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > > > > > > -                       goto err_xdp;
> > > > > > > > > -               case XDP_DROP:
> > > > > > > > > +
> > > > > > > > > +               case VIRTNET_XDP_RES_DROP:
> > > > > > > > >                         goto err_xdp;
> > > > > > > > >                 }
> > > > > > > > >         }
> > > > > > > > > @@ -1277,7 +1313,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> > > > > > > > >         if (xdp_prog) {
> > > > > > > > >                 unsigned int xdp_frags_truesz = 0;
> > > > > > > > >                 struct skb_shared_info *shinfo;
> > > > > > > > > -               struct xdp_frame *xdpf;
> > > > > > > > >                 struct page *xdp_page;
> > > > > > > > >                 struct xdp_buff xdp;
> > > > > > > > >                 void *data;
> > > > > > > > > @@ -1294,49 +1329,22 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> > > > > > > > >                 if (unlikely(err))
> > > > > > > > >                         goto err_xdp_frags;
> > > > > > > > >
> > > > > > > > > -               act = bpf_prog_run_xdp(xdp_prog, &xdp);
> > > > > > > > > -               stats->xdp_packets++;
> > > > > > > > > +               act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
> > > > > > > > >
> > > > > > > > >                 switch (act) {
> > > > > > > > > -               case XDP_PASS:
> > > > > > > > > +               case VIRTNET_XDP_RES_PASS:
> > > > > > > > >                         head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
> > > > > > > > >                         if (unlikely(!head_skb))
> > > > > > > > >                                 goto err_xdp_frags;
> > > > > > > > >
> > > > > > > > >                         rcu_read_unlock();
> > > > > > > > >                         return head_skb;
> > > > > > > > > -               case XDP_TX:
> > > > > > > > > -                       stats->xdp_tx++;
> > > > > > > > > -                       xdpf = xdp_convert_buff_to_frame(&xdp);
> > > > > > > > > -                       if (unlikely(!xdpf)) {
> > > > > > > > > -                               netdev_dbg(dev, "convert buff to frame failed for xdp\n");
> > > > > > > >
> > > > > > > > Nit: This debug is lost after the conversion.
> > > > > > >
> > > > > > > Will fix.
> > > > > > >
> > > > > > > Thanks.
> > > > > > >
> > > > > > > >
> > > > > > > > Thanks
> > > > > > > >
> > > > > > > > > -                               goto err_xdp_frags;
> > > > > > > > > -                       }
> > > > > > > > > -                       err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > > > > > > > > -                       if (unlikely(!err)) {
> > > > > > > > > -                               xdp_return_frame_rx_napi(xdpf);
> > > > > > > > > -                       } else if (unlikely(err < 0)) {
> > > > > > > > > -                               trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > > > > > > -                               goto err_xdp_frags;
> > > > > > > > > -                       }
> > > > > > > > > -                       *xdp_xmit |= VIRTIO_XDP_TX;
> > > > > > > > > -                       rcu_read_unlock();
> > > > > > > > > -                       goto xdp_xmit;
> > > > > > > > > -               case XDP_REDIRECT:
> > > > > > > > > -                       stats->xdp_redirects++;
> > > > > > > > > -                       err = xdp_do_redirect(dev, &xdp, xdp_prog);
> > > > > > > > > -                       if (err)
> > > > > > > > > -                               goto err_xdp_frags;
> > > > > > > > > -                       *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > > > > > > > +
> > > > > > > > > +               case VIRTNET_XDP_RES_CONSUMED:
> > > > > > > > >                         rcu_read_unlock();
> > > > > > > > >                         goto xdp_xmit;
> > > > > > > > > -               default:
> > > > > > > > > -                       bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
> > > > > > > > > -                       fallthrough;
> > > > > > > > > -               case XDP_ABORTED:
> > > > > > > > > -                       trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > > > > > > -                       fallthrough;
> > > > > > > > > -               case XDP_DROP:
> > > > > > > > > +
> > > > > > > > > +               case VIRTNET_XDP_RES_DROP:
> > > > > > > > >                         goto err_xdp_frags;
> > > > > > > > >                 }
> > > > > > > > >  err_xdp_frags:
> > > > > > > > > --
> > > > > > > > > 2.32.0.3.g01195cf9f
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 3/8] virtio_net: introduce virtnet_xdp_handler() to seprate the logic of run xdp
  2023-04-04  8:03                   ` Jason Wang
@ 2023-04-04  8:09                     ` Xuan Zhuo
  0 siblings, 0 replies; 41+ messages in thread
From: Xuan Zhuo @ 2023-04-04  8:09 UTC (permalink / raw)
  To: Jason Wang
  Cc: netdev, Michael S. Tsirkin, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

On Tue, 4 Apr 2023 16:03:49 +0800, Jason Wang <jasowang@redhat.com> wrote:
> On Tue, Apr 4, 2023 at 3:12 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> >
> > On Tue, 4 Apr 2023 15:01:36 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > On Tue, Apr 4, 2023 at 2:55 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > >
> > > > On Tue, 4 Apr 2023 14:35:05 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > On Tue, Apr 4, 2023 at 2:22 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > >
> > > > > > On Tue, 4 Apr 2023 13:04:02 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > On Mon, Apr 3, 2023 at 12:17 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > >
> > > > > > > > On Mon, 3 Apr 2023 10:43:03 +0800, Jason Wang <jasowang@redhat.com> wrote:
> > > > > > > > > On Tue, Mar 28, 2023 at 8:04 PM Xuan Zhuo <xuanzhuo@linux.alibaba.com> wrote:
> > > > > > > > > >
> > > > > > > > > > At present, we have two similar logic to perform the XDP prog.
> > > > > > > > > >
> > > > > > > > > > Therefore, this PATCH separates the code of executing XDP, which is
> > > > > > > > > > conducive to later maintenance.
> > > > > > > > > >
> > > > > > > > > > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > > > > > > > > > ---
> > > > > > > > > >  drivers/net/virtio_net.c | 142 +++++++++++++++++++++------------------
> > > > > > > > > >  1 file changed, 75 insertions(+), 67 deletions(-)
> > > > > > > > > >
> > > > > > > > > > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > > > > > > > > > index bb426958cdd4..72b9d6ee4024 100644
> > > > > > > > > > --- a/drivers/net/virtio_net.c
> > > > > > > > > > +++ b/drivers/net/virtio_net.c
> > > > > > > > > > @@ -301,6 +301,15 @@ struct padded_vnet_hdr {
> > > > > > > > > >         char padding[12];
> > > > > > > > > >  };
> > > > > > > > > >
> > > > > > > > > > +enum {
> > > > > > > > > > +       /* xdp pass */
> > > > > > > > > > +       VIRTNET_XDP_RES_PASS,
> > > > > > > > > > +       /* drop packet. the caller needs to release the page. */
> > > > > > > > > > +       VIRTNET_XDP_RES_DROP,
> > > > > > > > > > +       /* packet is consumed by xdp. the caller needs to do nothing. */
> > > > > > > > > > +       VIRTNET_XDP_RES_CONSUMED,
> > > > > > > > > > +};
> > > > > > > > >
> > > > > > > > > I'd prefer this to be done on top unless it is a must. But I don't see
> > > > > > > > > any advantage of introducing this, it's partial mapping of XDP action
> > > > > > > > > and it needs to be extended when XDP action is extended. (And we've
> > > > > > > > > already had: VIRTIO_XDP_REDIR and VIRTIO_XDP_TX ...)
> > > > > > > >
> > > > > > > > No, these are the three states of the buffer after XDP processing.
> > > > > > > >
> > > > > > > > * PASS: goto make skb
> > > > > > >
> > > > > > > XDP_PASS goes for this.
> > > > > > >
> > > > > > > > * DROP: we should release buffer
> > > > > > >
> > > > > > > XDP_DROP and error conditions go with this.
> > > > > > >
> > > > > > > > * CONSUMED: the xdp prog consumed the buffer, we do nothing
> > > > > > >
> > > > > > > XDP_TX/XDP_REDIRECTION goes for this.
> > > > > > >
> > > > > > > So virtnet_xdp_handler() just maps the XDP action plus the error
> > > > > > > conditions to the above three states.
> > > > > > >
> > > > > > > We can simply map error to XDP_DROP like:
> > > > > > >
> > > > > > >        case XDP_TX:
> > > > > > >               stats->xdp_tx++;
> > > > > > >                xdpf = xdp_convert_buff_to_frame(xdp);
> > > > > > >                if (unlikely(!xdpf))
> > > > > > >                        return XDP_DROP;
> > > > > > >
> > > > > > > A good side effect is to avoid the xdp_xmit pointer to be passed to
> > > > > > > the function.
> > > > > >
> > > > > >
> > > > > > So, I guess you mean this:
> > > > > >
> > > > > >         switch (act) {
> > > > > >         case XDP_PASS:
> > > > > >                 /* handle pass */
> > > > > >                 return skb;
> > > > > >
> > > > > >         case XDP_TX:
> > > > > >                 *xdp_xmit |= VIRTIO_XDP_TX;
> > > > > >                 goto xmit;
> > > > > >
> > > > > >         case XDP_REDIRECT:
> > > > > >                 *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > > > >                 goto xmit;
> > > > > >
> > > > > >         case XDP_DROP:
> > > > > >         default:
> > > > > >                 goto err_xdp;
> > > > > >         }
> > > > > >
> > > > > > I have to say there is no problem from the perspective of code implementation.
> > > > >
> > > > > Note that this is the current logic where it is determined in
> > > > > receive_small() and receive_mergeable().
> > > >
> > > > Yes, but the purpose of this patch set is to simplify the callers.
> > >
> > > You mean simplify the receive_small()/mergeable()?
> >
> > YES.
> >
> >
> > >
> > > >
> > > > >
> > > > > >
> > > > > > But if a new action like XDP_TX or XDP_REDIRECT is added in the future, then
> > > > > > we must modify all the callers.
> > > > >
> > > > > This is fine since we only use a single type for XDP action.
> > > >
> > > > a single type?
> > >
> > > Instead of (partially) duplicating the XDP actions in the new enums.
> >
> >
> > I think there is really a misunderstanding here. So you are thinking of these?
> >
> >    VIRTNET_XDP_RES_PASS,
> >    VIRTNET_XDP_RES_TX_REDIRECT,
> >    VIRTNET_XDP_RES_DROP,
>
> No, I meant the enum you introduced.
>
> >
> >
> >
> > >
> > > >
> > > > >
> > > > > > This is the benefit of using CONSUMED.
> > > > >
> > > > > It's very hard to say, e.g. if we want to support cloning in the future.
> > > >
> > > > Cloning? You mean cloning a new buffer.
> > > >
> > > > It is true that no matter the implementation, the logic must be modified.
> > >
> > > Yes.
> > >
> > > >
> > > > >
> > > > > >
> > > > > > I think it is a real advantage to handle xdp_xmit inside virtnet_xdp_handler(),
> > > > > > so that the caller does not need to care about these details.
> > > > >
> > > > > This part I don't understand: having xdp_xmit means the caller needs to
> > > > > know whether the packet was transmitted or redirected. The point of the
> > > > > enum is to hide the XDP actions, but that conflicts with xdp_xmit, which
> > > > > wants to expose (part of) the XDP actions.
> > > >
> > > > I mean, no matter what virtnet_xdp_handler() returns, an XDP action or one
> > > > of the values I defined, I want to hide the modification of xdp_xmit inside
> > > > virtnet_xdp_handler().
> > > >
> > > > Even if virtnet_xdp_handler() returns XDP_TX, we can still complete the
> > > > modification of xdp_xmit within virtnet_xdp_handler().
> > > >
> > > >
> > > > >
> > > > > > If you take into
> > > > > > account the problem of increasing the number of parameters, I advise putting it
> > > > > > in rq.
> > > > >
> > > > > I don't have strong opinion to introduce the enum,
> > > >
> > > > OK, I will drop these new enums.
> > >
> > > Just to make sure we are on the same page. I mean, if there is no
> > > objection from others, I'm ok to have an enum, but we need to use a
> > > separate patch to do that.
> >
> > Do you mean introducing the enums alone, without virtnet_xdp_handler()?
>
> I meant, having two patches
>
> 1) split out virtnet_xdp_handler() without introducing any new enums
> 2) introduce the new enum to simplify the codes

OK. I see.

Thanks.


>
> Thanks
>
> >
> > >
> > > >
> > > > > what I want to say
> > > > > is, use a separate patch to do that.
> > > >
> > > > Does this part refer to putting xdp_xmit in rq?
> > >
> > > I mean it's better to be done separately. But I don't see the
> > > advantage of this other than reducing the parameters.
> >
> > I think so also.
> >
> > Thanks.
> >
> >
> > >
> > > Thanks
> > >
> > > >
> > > > Thanks.
> > > >
> > > >
> > > > >
> > > > > Thanks
> > > > >
> > > > > >
> > > > > > Thanks.
> > > > > >
> > > > > >
> > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > > The latter two are not particularly related to the XDP action, and they do not
> > > > > > > > need to be extended when the XDP actions are extended. At least I have not
> > > > > > > > thought of such a situation.
> > > > > > >
> > > > > > > What's the advantages of such indirection compared to using XDP action directly?
> > > > > > >
> > > > > > > Thanks
> > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > >
> > > > > > > > > > +
> > > > > > > > > >  static void virtnet_rq_free_unused_buf(struct virtqueue *vq, void *buf);
> > > > > > > > > >  static void virtnet_sq_free_unused_buf(struct virtqueue *vq, void *buf);
> > > > > > > > > >
> > > > > > > > > > @@ -789,6 +798,59 @@ static int virtnet_xdp_xmit(struct net_device *dev,
> > > > > > > > > >         return ret;
> > > > > > > > > >  }
> > > > > > > > > >
> > > > > > > > > > +static int virtnet_xdp_handler(struct bpf_prog *xdp_prog, struct xdp_buff *xdp,
> > > > > > > > > > +                              struct net_device *dev,
> > > > > > > > > > +                              unsigned int *xdp_xmit,
> > > > > > > > > > +                              struct virtnet_rq_stats *stats)
> > > > > > > > > > +{
> > > > > > > > > > +       struct xdp_frame *xdpf;
> > > > > > > > > > +       int err;
> > > > > > > > > > +       u32 act;
> > > > > > > > > > +
> > > > > > > > > > +       act = bpf_prog_run_xdp(xdp_prog, xdp);
> > > > > > > > > > +       stats->xdp_packets++;
> > > > > > > > > > +
> > > > > > > > > > +       switch (act) {
> > > > > > > > > > +       case XDP_PASS:
> > > > > > > > > > +               return VIRTNET_XDP_RES_PASS;
> > > > > > > > > > +
> > > > > > > > > > +       case XDP_TX:
> > > > > > > > > > +               stats->xdp_tx++;
> > > > > > > > > > +               xdpf = xdp_convert_buff_to_frame(xdp);
> > > > > > > > > > +               if (unlikely(!xdpf))
> > > > > > > > > > +                       return VIRTNET_XDP_RES_DROP;
> > > > > > > > > > +
> > > > > > > > > > +               err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > > > > > > > > > +               if (unlikely(!err)) {
> > > > > > > > > > +                       xdp_return_frame_rx_napi(xdpf);
> > > > > > > > > > +               } else if (unlikely(err < 0)) {
> > > > > > > > > > +                       trace_xdp_exception(dev, xdp_prog, act);
> > > > > > > > > > +                       return VIRTNET_XDP_RES_DROP;
> > > > > > > > > > +               }
> > > > > > > > > > +
> > > > > > > > > > +               *xdp_xmit |= VIRTIO_XDP_TX;
> > > > > > > > > > +               return VIRTNET_XDP_RES_CONSUMED;
> > > > > > > > > > +
> > > > > > > > > > +       case XDP_REDIRECT:
> > > > > > > > > > +               stats->xdp_redirects++;
> > > > > > > > > > +               err = xdp_do_redirect(dev, xdp, xdp_prog);
> > > > > > > > > > +               if (err)
> > > > > > > > > > +                       return VIRTNET_XDP_RES_DROP;
> > > > > > > > > > +
> > > > > > > > > > +               *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > > > > > > > > +               return VIRTNET_XDP_RES_CONSUMED;
> > > > > > > > > > +
> > > > > > > > > > +       default:
> > > > > > > > > > +               bpf_warn_invalid_xdp_action(dev, xdp_prog, act);
> > > > > > > > > > +               fallthrough;
> > > > > > > > > > +       case XDP_ABORTED:
> > > > > > > > > > +               trace_xdp_exception(dev, xdp_prog, act);
> > > > > > > > > > +               fallthrough;
> > > > > > > > > > +       case XDP_DROP:
> > > > > > > > > > +               return VIRTNET_XDP_RES_DROP;
> > > > > > > > > > +       }
> > > > > > > > > > +}
> > > > > > > > > > +
> > > > > > > > > >  static unsigned int virtnet_get_headroom(struct virtnet_info *vi)
> > > > > > > > > >  {
> > > > > > > > > >         return vi->xdp_enabled ? VIRTIO_XDP_HEADROOM : 0;
> > > > > > > > > > @@ -876,7 +938,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > > > > > > > > >         struct page *page = virt_to_head_page(buf);
> > > > > > > > > >         unsigned int delta = 0;
> > > > > > > > > >         struct page *xdp_page;
> > > > > > > > > > -       int err;
> > > > > > > > > >         unsigned int metasize = 0;
> > > > > > > > > >
> > > > > > > > > >         len -= vi->hdr_len;
> > > > > > > > > > @@ -898,7 +959,6 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > > > > > > > > >         xdp_prog = rcu_dereference(rq->xdp_prog);
> > > > > > > > > >         if (xdp_prog) {
> > > > > > > > > >                 struct virtio_net_hdr_mrg_rxbuf *hdr = buf + header_offset;
> > > > > > > > > > -               struct xdp_frame *xdpf;
> > > > > > > > > >                 struct xdp_buff xdp;
> > > > > > > > > >                 void *orig_data;
> > > > > > > > > >                 u32 act;
> > > > > > > > > > @@ -931,46 +991,22 @@ static struct sk_buff *receive_small(struct net_device *dev,
> > > > > > > > > >                 xdp_prepare_buff(&xdp, buf + VIRTNET_RX_PAD + vi->hdr_len,
> > > > > > > > > >                                  xdp_headroom, len, true);
> > > > > > > > > >                 orig_data = xdp.data;
> > > > > > > > > > -               act = bpf_prog_run_xdp(xdp_prog, &xdp);
> > > > > > > > > > -               stats->xdp_packets++;
> > > > > > > > > > +
> > > > > > > > > > +               act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
> > > > > > > > > >
> > > > > > > > > >                 switch (act) {
> > > > > > > > > > -               case XDP_PASS:
> > > > > > > > > > +               case VIRTNET_XDP_RES_PASS:
> > > > > > > > > >                         /* Recalculate length in case bpf program changed it */
> > > > > > > > > >                         delta = orig_data - xdp.data;
> > > > > > > > > >                         len = xdp.data_end - xdp.data;
> > > > > > > > > >                         metasize = xdp.data - xdp.data_meta;
> > > > > > > > > >                         break;
> > > > > > > > > > -               case XDP_TX:
> > > > > > > > > > -                       stats->xdp_tx++;
> > > > > > > > > > -                       xdpf = xdp_convert_buff_to_frame(&xdp);
> > > > > > > > > > -                       if (unlikely(!xdpf))
> > > > > > > > > > -                               goto err_xdp;
> > > > > > > > > > -                       err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > > > > > > > > > -                       if (unlikely(!err)) {
> > > > > > > > > > -                               xdp_return_frame_rx_napi(xdpf);
> > > > > > > > > > -                       } else if (unlikely(err < 0)) {
> > > > > > > > > > -                               trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > > > > > > > -                               goto err_xdp;
> > > > > > > > > > -                       }
> > > > > > > > > > -                       *xdp_xmit |= VIRTIO_XDP_TX;
> > > > > > > > > > -                       rcu_read_unlock();
> > > > > > > > > > -                       goto xdp_xmit;
> > > > > > > > > > -               case XDP_REDIRECT:
> > > > > > > > > > -                       stats->xdp_redirects++;
> > > > > > > > > > -                       err = xdp_do_redirect(dev, &xdp, xdp_prog);
> > > > > > > > > > -                       if (err)
> > > > > > > > > > -                               goto err_xdp;
> > > > > > > > > > -                       *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > > > > > > > > +
> > > > > > > > > > +               case VIRTNET_XDP_RES_CONSUMED:
> > > > > > > > > >                         rcu_read_unlock();
> > > > > > > > > >                         goto xdp_xmit;
> > > > > > > > > > -               default:
> > > > > > > > > > -                       bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
> > > > > > > > > > -                       fallthrough;
> > > > > > > > > > -               case XDP_ABORTED:
> > > > > > > > > > -                       trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > > > > > > > -                       goto err_xdp;
> > > > > > > > > > -               case XDP_DROP:
> > > > > > > > > > +
> > > > > > > > > > +               case VIRTNET_XDP_RES_DROP:
> > > > > > > > > >                         goto err_xdp;
> > > > > > > > > >                 }
> > > > > > > > > >         }
> > > > > > > > > > @@ -1277,7 +1313,6 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> > > > > > > > > >         if (xdp_prog) {
> > > > > > > > > >                 unsigned int xdp_frags_truesz = 0;
> > > > > > > > > >                 struct skb_shared_info *shinfo;
> > > > > > > > > > -               struct xdp_frame *xdpf;
> > > > > > > > > >                 struct page *xdp_page;
> > > > > > > > > >                 struct xdp_buff xdp;
> > > > > > > > > >                 void *data;
> > > > > > > > > > @@ -1294,49 +1329,22 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> > > > > > > > > >                 if (unlikely(err))
> > > > > > > > > >                         goto err_xdp_frags;
> > > > > > > > > >
> > > > > > > > > > -               act = bpf_prog_run_xdp(xdp_prog, &xdp);
> > > > > > > > > > -               stats->xdp_packets++;
> > > > > > > > > > +               act = virtnet_xdp_handler(xdp_prog, &xdp, dev, xdp_xmit, stats);
> > > > > > > > > >
> > > > > > > > > >                 switch (act) {
> > > > > > > > > > -               case XDP_PASS:
> > > > > > > > > > +               case VIRTNET_XDP_RES_PASS:
> > > > > > > > > >                         head_skb = build_skb_from_xdp_buff(dev, vi, &xdp, xdp_frags_truesz);
> > > > > > > > > >                         if (unlikely(!head_skb))
> > > > > > > > > >                                 goto err_xdp_frags;
> > > > > > > > > >
> > > > > > > > > >                         rcu_read_unlock();
> > > > > > > > > >                         return head_skb;
> > > > > > > > > > -               case XDP_TX:
> > > > > > > > > > -                       stats->xdp_tx++;
> > > > > > > > > > -                       xdpf = xdp_convert_buff_to_frame(&xdp);
> > > > > > > > > > -                       if (unlikely(!xdpf)) {
> > > > > > > > > > -                               netdev_dbg(dev, "convert buff to frame failed for xdp\n");
> > > > > > > > >
> > > > > > > > > Nit: This debug is lost after the conversion.
> > > > > > > >
> > > > > > > > Will fix.
> > > > > > > >
> > > > > > > > Thanks.
> > > > > > > >
> > > > > > > > >
> > > > > > > > > Thanks
> > > > > > > > >
> > > > > > > > > > -                               goto err_xdp_frags;
> > > > > > > > > > -                       }
> > > > > > > > > > -                       err = virtnet_xdp_xmit(dev, 1, &xdpf, 0);
> > > > > > > > > > -                       if (unlikely(!err)) {
> > > > > > > > > > -                               xdp_return_frame_rx_napi(xdpf);
> > > > > > > > > > -                       } else if (unlikely(err < 0)) {
> > > > > > > > > > -                               trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > > > > > > > -                               goto err_xdp_frags;
> > > > > > > > > > -                       }
> > > > > > > > > > -                       *xdp_xmit |= VIRTIO_XDP_TX;
> > > > > > > > > > -                       rcu_read_unlock();
> > > > > > > > > > -                       goto xdp_xmit;
> > > > > > > > > > -               case XDP_REDIRECT:
> > > > > > > > > > -                       stats->xdp_redirects++;
> > > > > > > > > > -                       err = xdp_do_redirect(dev, &xdp, xdp_prog);
> > > > > > > > > > -                       if (err)
> > > > > > > > > > -                               goto err_xdp_frags;
> > > > > > > > > > -                       *xdp_xmit |= VIRTIO_XDP_REDIR;
> > > > > > > > > > +
> > > > > > > > > > +               case VIRTNET_XDP_RES_CONSUMED:
> > > > > > > > > >                         rcu_read_unlock();
> > > > > > > > > >                         goto xdp_xmit;
> > > > > > > > > > -               default:
> > > > > > > > > > -                       bpf_warn_invalid_xdp_action(vi->dev, xdp_prog, act);
> > > > > > > > > > -                       fallthrough;
> > > > > > > > > > -               case XDP_ABORTED:
> > > > > > > > > > -                       trace_xdp_exception(vi->dev, xdp_prog, act);
> > > > > > > > > > -                       fallthrough;
> > > > > > > > > > -               case XDP_DROP:
> > > > > > > > > > +
> > > > > > > > > > +               case VIRTNET_XDP_RES_DROP:
> > > > > > > > > >                         goto err_xdp_frags;
> > > > > > > > > >                 }
> > > > > > > > > >  err_xdp_frags:
> > > > > > > > > > --
> > > > > > > > > > 2.32.0.3.g01195cf9f
> > > > > > > > > >
> > > > > > > > >
> > > > > > > >
> > > > > > >
> > > > > >
> > > > >
> > > >
> > >
> >
>

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 2/8] virtio_net: mergeable xdp: introduce mergeable_xdp_prepare
  2023-03-23  7:24           ` Yunsheng Lin
@ 2023-03-23 11:04             ` Xuan Zhuo
  0 siblings, 0 replies; 41+ messages in thread
From: Xuan Zhuo @ 2023-03-23 11:04 UTC (permalink / raw)
  To: Yunsheng Lin
  Cc: Michael S. Tsirkin, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf,
	netdev, Jason Wang

On Thu, 23 Mar 2023 15:24:38 +0800, Yunsheng Lin <linyunsheng@huawei.com> wrote:
> On 2023/3/23 13:40, Jason Wang wrote:
> >>>
> >>>>
> >>>> Also, it seems better to split the xdp_linearize_page() to two functions
> >>>> as pskb_expand_head() and __skb_linearize() do, one to expand the headroom,
> >>>> the other one to do the linearizing.
> >>>
> >>> No skb here.
> >>
> >> I mean following the semantics of pskb_expand_head() and __skb_linearize(),
> >> not combining the headroom expanding and linearizing into one function as
> >> xdp_linearize_page() does now, if we want a better refactoring result.
> >
> > Not sure it's worth it, since the use is very specific unless we could
> > find a case that wants only one of them.
>
> It seems receive_small() only needs the headroom-expanding one.
> For receive_mergeable(), it seems we can split into the cases below:
> 1. The "(!xdp_prog->aux->xdp_has_frags && (num_buf > 1 || headroom < virtnet_get_headroom(vi)))"
>    case only needs linearizing.
> 2. Other cases only need headroom/tailroom expanding.
>
> Anyway, it is your call to decide whether you want to take this
> opportunity to do a better refactoring of virtio_net.

Compared to the chaotic state of virtio-net XDP, this is a minor point,
and I don't think it brings any practical optimization. If you think this
division is better, you can submit a new patch on top of this patch set.
I think that could make the code clearer.

Thanks.

>
> >
> > Thanks
> >
> >>
> >>>
> >>>
> >>>>
> >>
> >
> >
> > .
> >

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 2/8] virtio_net: mergeable xdp: introduce mergeable_xdp_prepare
  2023-03-23  4:45       ` Yunsheng Lin
  2023-03-23  4:52         ` Jakub Kicinski
  2023-03-23  5:40         ` Jason Wang
@ 2023-03-23 10:59         ` Xuan Zhuo
  2 siblings, 0 replies; 41+ messages in thread
From: Xuan Zhuo @ 2023-03-23 10:59 UTC (permalink / raw)
  To: Yunsheng Lin
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf,
	netdev

On Thu, 23 Mar 2023 12:45:41 +0800, Yunsheng Lin <linyunsheng@huawei.com> wrote:
> On 2023/3/23 9:45, Xuan Zhuo wrote:
> > On Wed, 22 Mar 2023 19:52:48 +0800, Yunsheng Lin <linyunsheng@huawei.com> wrote:
> >> On 2023/3/22 11:03, Xuan Zhuo wrote:
> >>> Separating the logic of preparation for xdp from receive_mergeable.
> >>>
> >>> The purpose of this is to simplify the logic of execution of XDP.
> >>>
> >>> The main logic here is that when headroom is insufficient, we need to
> >>> allocate a new page and calculate offset. It should be noted that if
> >>> there is new page, the variable page will refer to the new page.
> >>>
> >>> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> >>> ---
> >>>  drivers/net/virtio_net.c | 135 ++++++++++++++++++++++-----------------
> >>>  1 file changed, 77 insertions(+), 58 deletions(-)
> >>>
> >>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> >>> index 4d2bf1ce0730..bb426958cdd4 100644
> >>> --- a/drivers/net/virtio_net.c
> >>> +++ b/drivers/net/virtio_net.c
> >>> @@ -1162,6 +1162,79 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
> >>>  	return 0;
> >>>  }
> >>>
> >>> +static void *mergeable_xdp_prepare(struct virtnet_info *vi,
> >>> +				   struct receive_queue *rq,
> >>> +				   struct bpf_prog *xdp_prog,
> >>> +				   void *ctx,
> >>> +				   unsigned int *frame_sz,
> >>> +				   int *num_buf,
> >>> +				   struct page **page,
> >>> +				   int offset,
> >>> +				   unsigned int *len,
> >>> +				   struct virtio_net_hdr_mrg_rxbuf *hdr)
> >>
> >> The naming convention seems to be xdp_prepare_mergeable().
> >
> > What convention?
> >
> >
> >>
> >>> +{
> >>> +	unsigned int truesize = mergeable_ctx_to_truesize(ctx);
> >>> +	unsigned int headroom = mergeable_ctx_to_headroom(ctx);
> >>> +	struct page *xdp_page;
> >>> +	unsigned int xdp_room;
> >>> +
> >>> +	/* Transient failure which in theory could occur if
> >>> +	 * in-flight packets from before XDP was enabled reach
> >>> +	 * the receive path after XDP is loaded.
> >>> +	 */
> >>> +	if (unlikely(hdr->hdr.gso_type))
> >>> +		return NULL;
> >>> +
> >>> +	/* Now XDP core assumes frag size is PAGE_SIZE, but buffers
> >>> +	 * with headroom may add hole in truesize, which
> >>> +	 * make their length exceed PAGE_SIZE. So we disabled the
> >>> +	 * hole mechanism for xdp. See add_recvbuf_mergeable().
> >>> +	 */
> >>> +	*frame_sz = truesize;
> >>> +
> >>> +	/* This happens when headroom is not enough because
> >>> +	 * of the buffer was prefilled before XDP is set.
> >>> +	 * This should only happen for the first several packets.
> >>> +	 * In fact, vq reset can be used here to help us clean up
> >>> +	 * the prefilled buffers, but many existing devices do not
> >>> +	 * support it, and we don't want to bother users who are
> >>> +	 * using xdp normally.
> >>> +	 */
> >>> +	if (!xdp_prog->aux->xdp_has_frags &&
> >>> +	    (*num_buf > 1 || headroom < virtnet_get_headroom(vi))) {
> >>> +		/* linearize data for XDP */
> >>> +		xdp_page = xdp_linearize_page(rq, num_buf,
> >>> +					      *page, offset,
> >>> +					      VIRTIO_XDP_HEADROOM,
> >>> +					      len);
> >>> +
> >>> +		if (!xdp_page)
> >>> +			return NULL;
> >>> +	} else if (unlikely(headroom < virtnet_get_headroom(vi))) {
> >>> +		xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
> >>> +					  sizeof(struct skb_shared_info));
> >>> +		if (*len + xdp_room > PAGE_SIZE)
> >>> +			return NULL;
> >>> +
> >>> +		xdp_page = alloc_page(GFP_ATOMIC);
> >>> +		if (!xdp_page)
> >>> +			return NULL;
> >>> +
> >>> +		memcpy(page_address(xdp_page) + VIRTIO_XDP_HEADROOM,
> >>> +		       page_address(*page) + offset, *len);
> >>
> >> It seems the above 'else if' was not really tested even before this patch,
> >> as there is no "--*num_buf" if xdp_linearize_page() is not called, which
> >> may cause virtnet_build_xdp_buff_mrg() to consume one more buffer than
> >> expected?
> >
> > Why do you think so?
>
>
> In the first 'if' block, there is a "--*num_buf" before going to 'err_xdp'
> for a virtqueue_get_buf() failure in xdp_linearize_page().
>
> But here there is no "--*num_buf" before going to 'err_xdp' for
> an alloc_page() failure.

Inside err_xdp, we will get all the remaining bufs and free them until num_buf is 0.

Thanks.

>
> So one of them has to be wrong, right?
>
> >
> >>
> >> Also, it seems better to split the xdp_linearize_page() to two functions
> >> as pskb_expand_head() and __skb_linearize() do, one to expand the headroom,
> >> the other one to do the linearizing.
> >
> > No skb here.
>
> I mean following the semantics of pskb_expand_head() and __skb_linearize(),
> not combining the headroom expanding and linearizing into one function as
> xdp_linearize_page() does now, if we want a better refactoring result.
>
> >
> >
> >>

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 2/8] virtio_net: mergeable xdp: introduce mergeable_xdp_prepare
  2023-03-23  5:40         ` Jason Wang
@ 2023-03-23  7:24           ` Yunsheng Lin
  2023-03-23 11:04             ` Xuan Zhuo
  0 siblings, 1 reply; 41+ messages in thread
From: Yunsheng Lin @ 2023-03-23  7:24 UTC (permalink / raw)
  To: Jason Wang
  Cc: Xuan Zhuo, Michael S. Tsirkin, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf,
	netdev

On 2023/3/23 13:40, Jason Wang wrote:
>>>
>>>>
>>>> Also, it seems better to split the xdp_linearize_page() to two functions
>>>> as pskb_expand_head() and __skb_linearize() do, one to expand the headroom,
>>>> the other one to do the linearizing.
>>>
>>> No skb here.
>>
>> I mean following the semantics of pskb_expand_head() and __skb_linearize(),
>> not combining the headroom expanding and linearizing into one function as
>> xdp_linearize_page() does now, if we want a better refactoring result.
> 
> Not sure it's worth it, since the use is very specific unless we could
> find a case that wants only one of them.

It seems receive_small() only needs the headroom-expanding one.
For receive_mergeable(), it seems we can split into the cases below:
1. The "(!xdp_prog->aux->xdp_has_frags && (num_buf > 1 || headroom < virtnet_get_headroom(vi)))"
   case only needs linearizing.
2. Other cases only need headroom/tailroom expanding.

Anyway, it is your call to decide whether you want to take this
opportunity to do a better refactoring of virtio_net.

> 
> Thanks
> 
>>
>>>
>>>
>>>>
>>
> 
> 
> .
> 

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 2/8] virtio_net: mergeable xdp: introduce mergeable_xdp_prepare
  2023-03-23  4:45       ` Yunsheng Lin
  2023-03-23  4:52         ` Jakub Kicinski
@ 2023-03-23  5:40         ` Jason Wang
  2023-03-23  7:24           ` Yunsheng Lin
  2023-03-23 10:59         ` Xuan Zhuo
  2 siblings, 1 reply; 41+ messages in thread
From: Jason Wang @ 2023-03-23  5:40 UTC (permalink / raw)
  To: Yunsheng Lin
  Cc: Xuan Zhuo, Michael S. Tsirkin, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf,
	netdev

> >
> >>
> >> Also, it seems better to split the xdp_linearize_page() to two functions
> >> as pskb_expand_head() and __skb_linearize() do, one to expand the headroom,
> >> the other one to do the linearizing.
> >
> > No skb here.
>
> I mean following the semantics of pskb_expand_head() and __skb_linearize(),
> not combining the headroom expanding and linearizing into one function as
> xdp_linearize_page() does now, if we want a better refactoring result.

Not sure it's worth it, since the use is very specific unless we could
find a case that wants only one of them.

Thanks

>
> >
> >
> >>
>


^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 2/8] virtio_net: mergeable xdp: introduce mergeable_xdp_prepare
  2023-03-23  4:45       ` Yunsheng Lin
@ 2023-03-23  4:52         ` Jakub Kicinski
  2023-03-23  5:40         ` Jason Wang
  2023-03-23 10:59         ` Xuan Zhuo
  2 siblings, 0 replies; 41+ messages in thread
From: Jakub Kicinski @ 2023-03-23  4:52 UTC (permalink / raw)
  To: Yunsheng Lin
  Cc: Xuan Zhuo, Michael S. Tsirkin, Jason Wang, David S. Miller,
	Eric Dumazet, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf,
	netdev

On Thu, 23 Mar 2023 12:45:41 +0800 Yunsheng Lin wrote:
> >> Also, it seems better to split the xdp_linearize_page() to two functions
> >> as pskb_expand_head() and __skb_linearize() do, one to expand the headroom,
> >> the other one to do the linearizing.  
> > 
> > No skb here.  
> 
> I mean following the semantics of pskb_expand_head() and __skb_linearize(),
> not combining the headroom expanding and linearizing into one function as
> xdp_linearize_page() does now, if we want a better refactoring result.

It's a driver-local function; if I were reading the code and saw
xdp_prepare_mergeable(), I'd have thought it was a function from the XDP core.
If anything, the functions are missing a local driver prefix. But it's
not a huge deal (given Michael's ping yesterday asking us if we can
expedite merging..)

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 2/8] virtio_net: mergeable xdp: introduce mergeable_xdp_prepare
  2023-03-23  1:45     ` Xuan Zhuo
@ 2023-03-23  4:45       ` Yunsheng Lin
  2023-03-23  4:52         ` Jakub Kicinski
                           ` (2 more replies)
  0 siblings, 3 replies; 41+ messages in thread
From: Yunsheng Lin @ 2023-03-23  4:45 UTC (permalink / raw)
  To: Xuan Zhuo
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf,
	netdev

On 2023/3/23 9:45, Xuan Zhuo wrote:
> On Wed, 22 Mar 2023 19:52:48 +0800, Yunsheng Lin <linyunsheng@huawei.com> wrote:
>> On 2023/3/22 11:03, Xuan Zhuo wrote:
>>> Separating the logic of preparation for xdp from receive_mergeable.
>>>
>>> The purpose of this is to simplify the logic of execution of XDP.
>>>
>>> The main logic here is that when headroom is insufficient, we need to
>>> allocate a new page and calculate offset. It should be noted that if
>>> there is new page, the variable page will refer to the new page.
>>>
>>> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
>>> ---
>>>  drivers/net/virtio_net.c | 135 ++++++++++++++++++++++-----------------
>>>  1 file changed, 77 insertions(+), 58 deletions(-)
>>>
>>> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
>>> index 4d2bf1ce0730..bb426958cdd4 100644
>>> --- a/drivers/net/virtio_net.c
>>> +++ b/drivers/net/virtio_net.c
>>> @@ -1162,6 +1162,79 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
>>>  	return 0;
>>>  }
>>>
>>> +static void *mergeable_xdp_prepare(struct virtnet_info *vi,
>>> +				   struct receive_queue *rq,
>>> +				   struct bpf_prog *xdp_prog,
>>> +				   void *ctx,
>>> +				   unsigned int *frame_sz,
>>> +				   int *num_buf,
>>> +				   struct page **page,
>>> +				   int offset,
>>> +				   unsigned int *len,
>>> +				   struct virtio_net_hdr_mrg_rxbuf *hdr)
>>
>> The naming convention seems to be xdp_prepare_mergeable().
> 
> What convention?
> 
> 
>>
>>> +{
>>> +	unsigned int truesize = mergeable_ctx_to_truesize(ctx);
>>> +	unsigned int headroom = mergeable_ctx_to_headroom(ctx);
>>> +	struct page *xdp_page;
>>> +	unsigned int xdp_room;
>>> +
>>> +	/* Transient failure which in theory could occur if
>>> +	 * in-flight packets from before XDP was enabled reach
>>> +	 * the receive path after XDP is loaded.
>>> +	 */
>>> +	if (unlikely(hdr->hdr.gso_type))
>>> +		return NULL;
>>> +
>>> +	/* Now XDP core assumes frag size is PAGE_SIZE, but buffers
>>> +	 * with headroom may add hole in truesize, which
>>> +	 * make their length exceed PAGE_SIZE. So we disabled the
>>> +	 * hole mechanism for xdp. See add_recvbuf_mergeable().
>>> +	 */
>>> +	*frame_sz = truesize;
>>> +
>>> +	/* This happens when headroom is not enough because
>>> +	 * of the buffer was prefilled before XDP is set.
>>> +	 * This should only happen for the first several packets.
>>> +	 * In fact, vq reset can be used here to help us clean up
>>> +	 * the prefilled buffers, but many existing devices do not
>>> +	 * support it, and we don't want to bother users who are
>>> +	 * using xdp normally.
>>> +	 */
>>> +	if (!xdp_prog->aux->xdp_has_frags &&
>>> +	    (*num_buf > 1 || headroom < virtnet_get_headroom(vi))) {
>>> +		/* linearize data for XDP */
>>> +		xdp_page = xdp_linearize_page(rq, num_buf,
>>> +					      *page, offset,
>>> +					      VIRTIO_XDP_HEADROOM,
>>> +					      len);
>>> +
>>> +		if (!xdp_page)
>>> +			return NULL;
>>> +	} else if (unlikely(headroom < virtnet_get_headroom(vi))) {
>>> +		xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
>>> +					  sizeof(struct skb_shared_info));
>>> +		if (*len + xdp_room > PAGE_SIZE)
>>> +			return NULL;
>>> +
>>> +		xdp_page = alloc_page(GFP_ATOMIC);
>>> +		if (!xdp_page)
>>> +			return NULL;
>>> +
>>> +		memcpy(page_address(xdp_page) + VIRTIO_XDP_HEADROOM,
>>> +		       page_address(*page) + offset, *len);
>>
>> It seems the above 'else if' was not really tested even before this patch,
>> as there is no "--*num_buf" if xdp_linearize_page() is not called, which
>> may cause virtnet_build_xdp_buff_mrg() to consume one more buffer than
>> expected?
> 
> Why do you think so?


In the first 'if' block, there is a "--*num_buf" before going to 'err_xdp'
for a virtqueue_get_buf() failure in xdp_linearize_page().

But here there is no "--*num_buf" before going to 'err_xdp' for
an alloc_page() failure.

So one of them has to be wrong, right?

> 
>>
>> Also, it seems better to split the xdp_linearize_page() to two functions
>> as pskb_expand_head() and __skb_linearize() do, one to expand the headroom,
>> the other one to do the linearizing.
> 
> No skb here.

I mean following the semantics of pskb_expand_head() and __skb_linearize(),
not combining the headroom expanding and linearizing into one function as
xdp_linearize_page() does now, if we want a better refactoring result.

> 
> 
>>

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 2/8] virtio_net: mergeable xdp: introduce mergeable_xdp_prepare
  2023-03-22 11:52   ` Yunsheng Lin
@ 2023-03-23  1:45     ` Xuan Zhuo
  2023-03-23  4:45       ` Yunsheng Lin
  0 siblings, 1 reply; 41+ messages in thread
From: Xuan Zhuo @ 2023-03-23  1:45 UTC (permalink / raw)
  To: Yunsheng Lin
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf,
	netdev

On Wed, 22 Mar 2023 19:52:48 +0800, Yunsheng Lin <linyunsheng@huawei.com> wrote:
> On 2023/3/22 11:03, Xuan Zhuo wrote:
> > Separating the logic of preparation for xdp from receive_mergeable.
> >
> > The purpose of this is to simplify the logic of execution of XDP.
> >
> > The main logic here is that when headroom is insufficient, we need to
> > allocate a new page and calculate offset. It should be noted that if
> > there is new page, the variable page will refer to the new page.
> >
> > Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> > ---
> >  drivers/net/virtio_net.c | 135 ++++++++++++++++++++++-----------------
> >  1 file changed, 77 insertions(+), 58 deletions(-)
> >
> > diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> > index 4d2bf1ce0730..bb426958cdd4 100644
> > --- a/drivers/net/virtio_net.c
> > +++ b/drivers/net/virtio_net.c
> > @@ -1162,6 +1162,79 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
> >  	return 0;
> >  }
> >
> > +static void *mergeable_xdp_prepare(struct virtnet_info *vi,
> > +				   struct receive_queue *rq,
> > +				   struct bpf_prog *xdp_prog,
> > +				   void *ctx,
> > +				   unsigned int *frame_sz,
> > +				   int *num_buf,
> > +				   struct page **page,
> > +				   int offset,
> > +				   unsigned int *len,
> > +				   struct virtio_net_hdr_mrg_rxbuf *hdr)
>
> The naming convention seems to be xdp_prepare_mergeable().

What convention?


>
> > +{
> > +	unsigned int truesize = mergeable_ctx_to_truesize(ctx);
> > +	unsigned int headroom = mergeable_ctx_to_headroom(ctx);
> > +	struct page *xdp_page;
> > +	unsigned int xdp_room;
> > +
> > +	/* Transient failure which in theory could occur if
> > +	 * in-flight packets from before XDP was enabled reach
> > +	 * the receive path after XDP is loaded.
> > +	 */
> > +	if (unlikely(hdr->hdr.gso_type))
> > +		return NULL;
> > +
> > +	/* Now XDP core assumes frag size is PAGE_SIZE, but buffers
> > +	 * with headroom may add hole in truesize, which
> > +	 * make their length exceed PAGE_SIZE. So we disabled the
> > +	 * hole mechanism for xdp. See add_recvbuf_mergeable().
> > +	 */
> > +	*frame_sz = truesize;
> > +
> > +	/* This happens when headroom is not enough because
> > +	 * of the buffer was prefilled before XDP is set.
> > +	 * This should only happen for the first several packets.
> > +	 * In fact, vq reset can be used here to help us clean up
> > +	 * the prefilled buffers, but many existing devices do not
> > +	 * support it, and we don't want to bother users who are
> > +	 * using xdp normally.
> > +	 */
> > +	if (!xdp_prog->aux->xdp_has_frags &&
> > +	    (*num_buf > 1 || headroom < virtnet_get_headroom(vi))) {
> > +		/* linearize data for XDP */
> > +		xdp_page = xdp_linearize_page(rq, num_buf,
> > +					      *page, offset,
> > +					      VIRTIO_XDP_HEADROOM,
> > +					      len);
> > +
> > +		if (!xdp_page)
> > +			return NULL;
> > +	} else if (unlikely(headroom < virtnet_get_headroom(vi))) {
> > +		xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
> > +					  sizeof(struct skb_shared_info));
> > +		if (*len + xdp_room > PAGE_SIZE)
> > +			return NULL;
> > +
> > +		xdp_page = alloc_page(GFP_ATOMIC);
> > +		if (!xdp_page)
> > +			return NULL;
> > +
> > +		memcpy(page_address(xdp_page) + VIRTIO_XDP_HEADROOM,
> > +		       page_address(*page) + offset, *len);
>
> It seems the above 'else if' was not really tested even before this patch,
> as there is no "--*num_buf" if xdp_linearize_page() is not called, which
> may cause virtnet_build_xdp_buff_mrg() to consume one more buffer than
> expected?

Why do you think so?

>
> Also, it seems better to split the xdp_linearize_page() to two functions
> as pskb_expand_head() and __skb_linearize() do, one to expand the headroom,
> the other one to do the linearizing.

No skb here.


>
>
> > +	} else {
> > +		return page_address(*page) + offset;
> > +	}
> > +
> > +	*frame_sz = PAGE_SIZE;
> > +
> > +	put_page(*page);
> > +
> > +	*page = xdp_page;
> > +
> > +	return page_address(xdp_page) + VIRTIO_XDP_HEADROOM;
> > +}
> > +
> >  static struct sk_buff *receive_mergeable(struct net_device *dev,
> >  					 struct virtnet_info *vi,
> >  					 struct receive_queue *rq,
> > @@ -1181,7 +1254,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> >  	unsigned int headroom = mergeable_ctx_to_headroom(ctx);
> >  	unsigned int tailroom = headroom ? sizeof(struct skb_shared_info) : 0;
> >  	unsigned int room = SKB_DATA_ALIGN(headroom + tailroom);
> > -	unsigned int frame_sz, xdp_room;
> > +	unsigned int frame_sz;
> >  	int err;
> >
> >  	head_skb = NULL;
> > @@ -1211,65 +1284,11 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
> >  		u32 act;
> >  		int i;
> >
> > -		/* Transient failure which in theory could occur if
> > -		 * in-flight packets from before XDP was enabled reach
> > -		 * the receive path after XDP is loaded.
> > -		 */
> > -		if (unlikely(hdr->hdr.gso_type))
> > +		data = mergeable_xdp_prepare(vi, rq, xdp_prog, ctx, &frame_sz, &num_buf, &page,
> > +					     offset, &len, hdr);
> > +		if (!data)
>
> unlikely().


Thanks.

>
> >  			goto err_xdp;
> >
> > -		/* Now XDP core assumes frag size is PAGE_SIZE, but buffers
> > -		 * with headroom may add hole in truesize, which
> > -		 * make their length exceed PAGE_SIZE. So we disabled the
> > -		 * hole mechanism for xdp. See add_recvbuf_mergeable().
> > -		 */
> > -		frame_sz = truesize;
> > -
> > -		/* This happens when headroom is not enough because
> > -		 * of the buffer was prefilled before XDP is set.
> > -		 * This should only happen for the first several packets.
> > -		 * In fact, vq reset can be used here to help us clean up
> > -		 * the prefilled buffers, but many existing devices do not
> > -		 * support it, and we don't want to bother users who are
> > -		 * using xdp normally.
> > -		 */
> > -		if (!xdp_prog->aux->xdp_has_frags &&
> > -		    (num_buf > 1 || headroom < virtnet_get_headroom(vi))) {
> > -			/* linearize data for XDP */
> > -			xdp_page = xdp_linearize_page(rq, &num_buf,
> > -						      page, offset,
> > -						      VIRTIO_XDP_HEADROOM,
> > -						      &len);
> > -			frame_sz = PAGE_SIZE;
> > -
> > -			if (!xdp_page)
> > -				goto err_xdp;
> > -			offset = VIRTIO_XDP_HEADROOM;
> > -
> > -			put_page(page);
> > -			page = xdp_page;
> > -		} else if (unlikely(headroom < virtnet_get_headroom(vi))) {
> > -			xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
> > -						  sizeof(struct skb_shared_info));
> > -			if (len + xdp_room > PAGE_SIZE)
> > -				goto err_xdp;
> > -
> > -			xdp_page = alloc_page(GFP_ATOMIC);
> > -			if (!xdp_page)
> > -				goto err_xdp;
> > -
> > -			memcpy(page_address(xdp_page) + VIRTIO_XDP_HEADROOM,
> > -			       page_address(page) + offset, len);
> > -			frame_sz = PAGE_SIZE;
> > -			offset = VIRTIO_XDP_HEADROOM;
> > -
> > -			put_page(page);
> > -			page = xdp_page;
> > -		} else {
> > -			xdp_page = page;
> > -		}
> > -
> > -		data = page_address(xdp_page) + offset;
> >  		err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz,
> >  						 &num_buf, &xdp_frags_truesz, stats);
> >  		if (unlikely(err))
> >

^ permalink raw reply	[flat|nested] 41+ messages in thread

* Re: [PATCH net-next 2/8] virtio_net: mergeable xdp: introduce mergeable_xdp_prepare
  2023-03-22  3:03 ` [PATCH net-next 2/8] virtio_net: mergeable xdp: introduce mergeable_xdp_prepare Xuan Zhuo
@ 2023-03-22 11:52   ` Yunsheng Lin
  2023-03-23  1:45     ` Xuan Zhuo
  0 siblings, 1 reply; 41+ messages in thread
From: Yunsheng Lin @ 2023-03-22 11:52 UTC (permalink / raw)
  To: Xuan Zhuo, netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

On 2023/3/22 11:03, Xuan Zhuo wrote:
> Separating the logic of preparation for xdp from receive_mergeable.
> 
> The purpose of this is to simplify the logic of execution of XDP.
> 
> The main logic here is that when headroom is insufficient, we need to
> allocate a new page and calculate offset. It should be noted that if
> there is new page, the variable page will refer to the new page.
> 
> Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
> ---
>  drivers/net/virtio_net.c | 135 ++++++++++++++++++++++-----------------
>  1 file changed, 77 insertions(+), 58 deletions(-)
> 
> diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
> index 4d2bf1ce0730..bb426958cdd4 100644
> --- a/drivers/net/virtio_net.c
> +++ b/drivers/net/virtio_net.c
> @@ -1162,6 +1162,79 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
>  	return 0;
>  }
>  
> +static void *mergeable_xdp_prepare(struct virtnet_info *vi,
> +				   struct receive_queue *rq,
> +				   struct bpf_prog *xdp_prog,
> +				   void *ctx,
> +				   unsigned int *frame_sz,
> +				   int *num_buf,
> +				   struct page **page,
> +				   int offset,
> +				   unsigned int *len,
> +				   struct virtio_net_hdr_mrg_rxbuf *hdr)

The naming convention seems to be xdp_prepare_mergeable().

> +{
> +	unsigned int truesize = mergeable_ctx_to_truesize(ctx);
> +	unsigned int headroom = mergeable_ctx_to_headroom(ctx);
> +	struct page *xdp_page;
> +	unsigned int xdp_room;
> +
> +	/* Transient failure which in theory could occur if
> +	 * in-flight packets from before XDP was enabled reach
> +	 * the receive path after XDP is loaded.
> +	 */
> +	if (unlikely(hdr->hdr.gso_type))
> +		return NULL;
> +
> +	/* Now XDP core assumes frag size is PAGE_SIZE, but buffers
> +	 * with headroom may add hole in truesize, which
> +	 * make their length exceed PAGE_SIZE. So we disabled the
> +	 * hole mechanism for xdp. See add_recvbuf_mergeable().
> +	 */
> +	*frame_sz = truesize;
> +
> +	/* This happens when headroom is not enough because
> +	 * of the buffer was prefilled before XDP is set.
> +	 * This should only happen for the first several packets.
> +	 * In fact, vq reset can be used here to help us clean up
> +	 * the prefilled buffers, but many existing devices do not
> +	 * support it, and we don't want to bother users who are
> +	 * using xdp normally.
> +	 */
> +	if (!xdp_prog->aux->xdp_has_frags &&
> +	    (*num_buf > 1 || headroom < virtnet_get_headroom(vi))) {
> +		/* linearize data for XDP */
> +		xdp_page = xdp_linearize_page(rq, num_buf,
> +					      *page, offset,
> +					      VIRTIO_XDP_HEADROOM,
> +					      len);
> +
> +		if (!xdp_page)
> +			return NULL;
> +	} else if (unlikely(headroom < virtnet_get_headroom(vi))) {
> +		xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
> +					  sizeof(struct skb_shared_info));
> +		if (*len + xdp_room > PAGE_SIZE)
> +			return NULL;
> +
> +		xdp_page = alloc_page(GFP_ATOMIC);
> +		if (!xdp_page)
> +			return NULL;
> +
> +		memcpy(page_address(xdp_page) + VIRTIO_XDP_HEADROOM,
> +		       page_address(*page) + offset, *len);

It seems the above 'else if' was not really tested even before this patch,
as there is no "--*num_buf" if xdp_linearize_page() is not called, which
may cause virtnet_build_xdp_buff_mrg() to consume one more buffer than
expected?

Also, it seems better to split the xdp_linearize_page() to two functions
as pskb_expand_head() and __skb_linearize() do, one to expand the headroom,
the other one to do the linearizing.


> +	} else {
> +		return page_address(*page) + offset;
> +	}
> +
> +	*frame_sz = PAGE_SIZE;
> +
> +	put_page(*page);
> +
> +	*page = xdp_page;
> +
> +	return page_address(xdp_page) + VIRTIO_XDP_HEADROOM;
> +}
> +
>  static struct sk_buff *receive_mergeable(struct net_device *dev,
>  					 struct virtnet_info *vi,
>  					 struct receive_queue *rq,
> @@ -1181,7 +1254,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>  	unsigned int headroom = mergeable_ctx_to_headroom(ctx);
>  	unsigned int tailroom = headroom ? sizeof(struct skb_shared_info) : 0;
>  	unsigned int room = SKB_DATA_ALIGN(headroom + tailroom);
> -	unsigned int frame_sz, xdp_room;
> +	unsigned int frame_sz;
>  	int err;
>  
>  	head_skb = NULL;
> @@ -1211,65 +1284,11 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
>  		u32 act;
>  		int i;
>  
> -		/* Transient failure which in theory could occur if
> -		 * in-flight packets from before XDP was enabled reach
> -		 * the receive path after XDP is loaded.
> -		 */
> -		if (unlikely(hdr->hdr.gso_type))
> +		data = mergeable_xdp_prepare(vi, rq, xdp_prog, ctx, &frame_sz, &num_buf, &page,
> +					     offset, &len, hdr);
> +		if (!data)

unlikely().

>  			goto err_xdp;
>  
> -		/* Now XDP core assumes frag size is PAGE_SIZE, but buffers
> -		 * with headroom may add hole in truesize, which
> -		 * make their length exceed PAGE_SIZE. So we disabled the
> -		 * hole mechanism for xdp. See add_recvbuf_mergeable().
> -		 */
> -		frame_sz = truesize;
> -
> -		/* This happens when headroom is not enough because
> -		 * of the buffer was prefilled before XDP is set.
> -		 * This should only happen for the first several packets.
> -		 * In fact, vq reset can be used here to help us clean up
> -		 * the prefilled buffers, but many existing devices do not
> -		 * support it, and we don't want to bother users who are
> -		 * using xdp normally.
> -		 */
> -		if (!xdp_prog->aux->xdp_has_frags &&
> -		    (num_buf > 1 || headroom < virtnet_get_headroom(vi))) {
> -			/* linearize data for XDP */
> -			xdp_page = xdp_linearize_page(rq, &num_buf,
> -						      page, offset,
> -						      VIRTIO_XDP_HEADROOM,
> -						      &len);
> -			frame_sz = PAGE_SIZE;
> -
> -			if (!xdp_page)
> -				goto err_xdp;
> -			offset = VIRTIO_XDP_HEADROOM;
> -
> -			put_page(page);
> -			page = xdp_page;
> -		} else if (unlikely(headroom < virtnet_get_headroom(vi))) {
> -			xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
> -						  sizeof(struct skb_shared_info));
> -			if (len + xdp_room > PAGE_SIZE)
> -				goto err_xdp;
> -
> -			xdp_page = alloc_page(GFP_ATOMIC);
> -			if (!xdp_page)
> -				goto err_xdp;
> -
> -			memcpy(page_address(xdp_page) + VIRTIO_XDP_HEADROOM,
> -			       page_address(page) + offset, len);
> -			frame_sz = PAGE_SIZE;
> -			offset = VIRTIO_XDP_HEADROOM;
> -
> -			put_page(page);
> -			page = xdp_page;
> -		} else {
> -			xdp_page = page;
> -		}
> -
> -		data = page_address(xdp_page) + offset;
>  		err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz,
>  						 &num_buf, &xdp_frags_truesz, stats);
>  		if (unlikely(err))
> 

^ permalink raw reply	[flat|nested] 41+ messages in thread

* [PATCH net-next 2/8] virtio_net: mergeable xdp: introduce mergeable_xdp_prepare
  2023-03-22  3:03 [PATCH net-next 0/8] virtio_net: refactor xdp codes Xuan Zhuo
@ 2023-03-22  3:03 ` Xuan Zhuo
  2023-03-22 11:52   ` Yunsheng Lin
  0 siblings, 1 reply; 41+ messages in thread
From: Xuan Zhuo @ 2023-03-22  3:03 UTC (permalink / raw)
  To: netdev
  Cc: Michael S. Tsirkin, Jason Wang, David S. Miller, Eric Dumazet,
	Jakub Kicinski, Paolo Abeni, Alexei Starovoitov, Daniel Borkmann,
	Jesper Dangaard Brouer, John Fastabend, virtualization, bpf

Separate the logic that prepares buffers for XDP out of receive_mergeable().

The purpose of this is to simplify the XDP execution path.

The main logic here is that when the headroom is insufficient, we need to
allocate a new page and recalculate the offset. Note that if a new page is
allocated, the variable page is updated to refer to the new page.

Signed-off-by: Xuan Zhuo <xuanzhuo@linux.alibaba.com>
---
 drivers/net/virtio_net.c | 135 ++++++++++++++++++++++-----------------
 1 file changed, 77 insertions(+), 58 deletions(-)

diff --git a/drivers/net/virtio_net.c b/drivers/net/virtio_net.c
index 4d2bf1ce0730..bb426958cdd4 100644
--- a/drivers/net/virtio_net.c
+++ b/drivers/net/virtio_net.c
@@ -1162,6 +1162,79 @@ static int virtnet_build_xdp_buff_mrg(struct net_device *dev,
 	return 0;
 }
 
+static void *mergeable_xdp_prepare(struct virtnet_info *vi,
+				   struct receive_queue *rq,
+				   struct bpf_prog *xdp_prog,
+				   void *ctx,
+				   unsigned int *frame_sz,
+				   int *num_buf,
+				   struct page **page,
+				   int offset,
+				   unsigned int *len,
+				   struct virtio_net_hdr_mrg_rxbuf *hdr)
+{
+	unsigned int truesize = mergeable_ctx_to_truesize(ctx);
+	unsigned int headroom = mergeable_ctx_to_headroom(ctx);
+	struct page *xdp_page;
+	unsigned int xdp_room;
+
+	/* Transient failure which in theory could occur if
+	 * in-flight packets from before XDP was enabled reach
+	 * the receive path after XDP is loaded.
+	 */
+	if (unlikely(hdr->hdr.gso_type))
+		return NULL;
+
+	/* Now XDP core assumes frag size is PAGE_SIZE, but buffers
+	 * with headroom may add hole in truesize, which
+	 * make their length exceed PAGE_SIZE. So we disabled the
+	 * hole mechanism for xdp. See add_recvbuf_mergeable().
+	 */
+	*frame_sz = truesize;
+
+	/* This happens when headroom is not enough because
+	 * of the buffer was prefilled before XDP is set.
+	 * This should only happen for the first several packets.
+	 * In fact, vq reset can be used here to help us clean up
+	 * the prefilled buffers, but many existing devices do not
+	 * support it, and we don't want to bother users who are
+	 * using xdp normally.
+	 */
+	if (!xdp_prog->aux->xdp_has_frags &&
+	    (*num_buf > 1 || headroom < virtnet_get_headroom(vi))) {
+		/* linearize data for XDP */
+		xdp_page = xdp_linearize_page(rq, num_buf,
+					      *page, offset,
+					      VIRTIO_XDP_HEADROOM,
+					      len);
+
+		if (!xdp_page)
+			return NULL;
+	} else if (unlikely(headroom < virtnet_get_headroom(vi))) {
+		xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
+					  sizeof(struct skb_shared_info));
+		if (*len + xdp_room > PAGE_SIZE)
+			return NULL;
+
+		xdp_page = alloc_page(GFP_ATOMIC);
+		if (!xdp_page)
+			return NULL;
+
+		memcpy(page_address(xdp_page) + VIRTIO_XDP_HEADROOM,
+		       page_address(*page) + offset, *len);
+	} else {
+		return page_address(*page) + offset;
+	}
+
+	*frame_sz = PAGE_SIZE;
+
+	put_page(*page);
+
+	*page = xdp_page;
+
+	return page_address(xdp_page) + VIRTIO_XDP_HEADROOM;
+}
+
 static struct sk_buff *receive_mergeable(struct net_device *dev,
 					 struct virtnet_info *vi,
 					 struct receive_queue *rq,
@@ -1181,7 +1254,7 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 	unsigned int headroom = mergeable_ctx_to_headroom(ctx);
 	unsigned int tailroom = headroom ? sizeof(struct skb_shared_info) : 0;
 	unsigned int room = SKB_DATA_ALIGN(headroom + tailroom);
-	unsigned int frame_sz, xdp_room;
+	unsigned int frame_sz;
 	int err;
 
 	head_skb = NULL;
@@ -1211,65 +1284,11 @@ static struct sk_buff *receive_mergeable(struct net_device *dev,
 		u32 act;
 		int i;
 
-		/* Transient failure which in theory could occur if
-		 * in-flight packets from before XDP was enabled reach
-		 * the receive path after XDP is loaded.
-		 */
-		if (unlikely(hdr->hdr.gso_type))
+		data = mergeable_xdp_prepare(vi, rq, xdp_prog, ctx, &frame_sz, &num_buf, &page,
+					     offset, &len, hdr);
+		if (!data)
 			goto err_xdp;
 
-		/* Now XDP core assumes frag size is PAGE_SIZE, but buffers
-		 * with headroom may add hole in truesize, which
-		 * make their length exceed PAGE_SIZE. So we disabled the
-		 * hole mechanism for xdp. See add_recvbuf_mergeable().
-		 */
-		frame_sz = truesize;
-
-		/* This happens when headroom is not enough because
-		 * of the buffer was prefilled before XDP is set.
-		 * This should only happen for the first several packets.
-		 * In fact, vq reset can be used here to help us clean up
-		 * the prefilled buffers, but many existing devices do not
-		 * support it, and we don't want to bother users who are
-		 * using xdp normally.
-		 */
-		if (!xdp_prog->aux->xdp_has_frags &&
-		    (num_buf > 1 || headroom < virtnet_get_headroom(vi))) {
-			/* linearize data for XDP */
-			xdp_page = xdp_linearize_page(rq, &num_buf,
-						      page, offset,
-						      VIRTIO_XDP_HEADROOM,
-						      &len);
-			frame_sz = PAGE_SIZE;
-
-			if (!xdp_page)
-				goto err_xdp;
-			offset = VIRTIO_XDP_HEADROOM;
-
-			put_page(page);
-			page = xdp_page;
-		} else if (unlikely(headroom < virtnet_get_headroom(vi))) {
-			xdp_room = SKB_DATA_ALIGN(VIRTIO_XDP_HEADROOM +
-						  sizeof(struct skb_shared_info));
-			if (len + xdp_room > PAGE_SIZE)
-				goto err_xdp;
-
-			xdp_page = alloc_page(GFP_ATOMIC);
-			if (!xdp_page)
-				goto err_xdp;
-
-			memcpy(page_address(xdp_page) + VIRTIO_XDP_HEADROOM,
-			       page_address(page) + offset, len);
-			frame_sz = PAGE_SIZE;
-			offset = VIRTIO_XDP_HEADROOM;
-
-			put_page(page);
-			page = xdp_page;
-		} else {
-			xdp_page = page;
-		}
-
-		data = page_address(xdp_page) + offset;
 		err = virtnet_build_xdp_buff_mrg(dev, vi, rq, &xdp, data, len, frame_sz,
 						 &num_buf, &xdp_frags_truesz, stats);
 		if (unlikely(err))
-- 
2.32.0.3.g01195cf9f


^ permalink raw reply related	[flat|nested] 41+ messages in thread

end of thread, other threads:[~2023-04-04  8:10 UTC | newest]

Thread overview: 41+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-03-28 12:04 [PATCH net-next 0/8] virtio_net: refactor xdp codes Xuan Zhuo
2023-03-28 12:04 ` [PATCH net-next 1/8] virtio_net: mergeable xdp: put old page immediately Xuan Zhuo
2023-03-31  9:01   ` Jason Wang
2023-04-03  4:11     ` Xuan Zhuo
2023-03-28 12:04 ` [PATCH net-next 2/8] virtio_net: mergeable xdp: introduce mergeable_xdp_prepare Xuan Zhuo
2023-03-31  9:14   ` Jason Wang
2023-04-03  4:11     ` Xuan Zhuo
2023-03-28 12:04 ` [PATCH net-next 3/8] virtio_net: introduce virtnet_xdp_handler() to seprate the logic of run xdp Xuan Zhuo
2023-04-03  2:43   ` Jason Wang
2023-04-03  4:12     ` Xuan Zhuo
2023-04-04  5:04       ` Jason Wang
2023-04-04  6:11         ` Xuan Zhuo
2023-04-04  6:35           ` Jason Wang
2023-04-04  6:44             ` Xuan Zhuo
2023-04-04  7:01               ` Jason Wang
2023-04-04  7:06                 ` Xuan Zhuo
2023-04-04  8:03                   ` Jason Wang
2023-04-04  8:09                     ` Xuan Zhuo
2023-03-28 12:04 ` [PATCH net-next 4/8] virtio_net: separate the logic of freeing xdp shinfo Xuan Zhuo
2023-04-03  2:44   ` Jason Wang
2023-03-28 12:04 ` [PATCH net-next 5/8] virtio_net: separate the logic of freeing the rest mergeable buf Xuan Zhuo
2023-04-03  2:46   ` Jason Wang
2023-03-28 12:04 ` [PATCH net-next 6/8] virtio_net: auto release xdp shinfo Xuan Zhuo
2023-04-03  3:18   ` Jason Wang
2023-04-03  4:17     ` Xuan Zhuo
2023-03-28 12:04 ` [PATCH net-next 7/8] virtio_net: introduce receive_mergeable_xdp() Xuan Zhuo
2023-03-30 10:18   ` Paolo Abeni
2023-03-31  7:18     ` Xuan Zhuo
2023-03-28 12:04 ` [PATCH net-next 8/8] virtio_net: introduce receive_small_xdp() Xuan Zhuo
2023-03-30 10:48   ` Paolo Abeni
2023-03-31  7:20     ` Xuan Zhuo
2023-03-31  7:24       ` Michael S. Tsirkin
  -- strict thread matches above, loose matches on Subject: below --
2023-03-22  3:03 [PATCH net-next 0/8] virtio_net: refactor xdp codes Xuan Zhuo
2023-03-22  3:03 ` [PATCH net-next 2/8] virtio_net: mergeable xdp: introduce mergeable_xdp_prepare Xuan Zhuo
2023-03-22 11:52   ` Yunsheng Lin
2023-03-23  1:45     ` Xuan Zhuo
2023-03-23  4:45       ` Yunsheng Lin
2023-03-23  4:52         ` Jakub Kicinski
2023-03-23  5:40         ` Jason Wang
2023-03-23  7:24           ` Yunsheng Lin
2023-03-23 11:04             ` Xuan Zhuo
2023-03-23 10:59         ` Xuan Zhuo
