* [PATCH v5 bpf-next 0/3] introduce xdp frags support to veth driver
@ 2022-03-11  9:14 Lorenzo Bianconi
  2022-03-11  9:14 ` [PATCH v5 bpf-next 1/3] net: veth: account total xdp_frame len running ndo_xdp_xmit Lorenzo Bianconi
                   ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Lorenzo Bianconi @ 2022-03-11  9:14 UTC (permalink / raw)
  To: bpf, netdev
  Cc: davem, kuba, ast, daniel, brouer, toke, pabeni, echaudro,
	lorenzo.bianconi, toshiaki.makita1, andrii

Introduce xdp frags support to the veth driver in order to allow increasing the
MTU beyond the page boundary if the attached xdp program declares support for
xdp fragments.
This series has been tested by running the xdp_router_ipv4 sample available in
the kernel tree, redirecting TCP traffic from a veth pair into the mvneta driver.
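
As a rough illustration (not part of this series), a minimal XDP program
declaring frags support could look like the sketch below; it assumes a libbpf
recent enough to map the "xdp.frags" section name to BPF_F_XDP_HAS_FRAGS:

  /* Minimal sketch; the program name is an illustrative assumption. */
  #include <linux/bpf.h>
  #include <bpf/bpf_helpers.h>

  SEC("xdp.frags")
  int xdp_pass_frags(struct xdp_md *ctx)
  {
  	/* Paged data, if any, can be reached with bpf_xdp_load_bytes(). */
  	return XDP_PASS;
  }

  char _license[] SEC("license") = "GPL";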

Changes since v4:
- remove TSO support for the moment
- rename veth_convert_skb_from_xdp_buff to veth_convert_skb_to_xdp_buff

Changes since v3:
- introduce a check on max_mtu for xdp mode in veth_xdp_set()

Changes since v2:
- move the rcu_access_pointer() check into veth_skb_is_eligible_for_gro

Changes since v1:
- always consider the skb paged area as non-writable
- fix throughput issue with SCTP
- always use napi if we are running in xdp mode in veth_xmit

Lorenzo Bianconi (3):
  net: veth: account total xdp_frame len running ndo_xdp_xmit
  veth: rework veth_xdp_rcv_skb in order to accept non-linear skb
  veth: allow jumbo frames in xdp mode

 drivers/net/veth.c | 192 +++++++++++++++++++++++++++++++--------------
 include/net/xdp.h  |  14 ++++
 net/core/xdp.c     |   1 +
 3 files changed, 146 insertions(+), 61 deletions(-)

-- 
2.35.1



* [PATCH v5 bpf-next 1/3] net: veth: account total xdp_frame len running ndo_xdp_xmit
  2022-03-11  9:14 [PATCH v5 bpf-next 0/3] introduce xdp frags support to veth driver Lorenzo Bianconi
@ 2022-03-11  9:14 ` Lorenzo Bianconi
  2022-03-16  3:22   ` John Fastabend
  2022-03-11  9:14 ` [PATCH v5 bpf-next 2/3] veth: rework veth_xdp_rcv_skb in order to accept non-linear skb Lorenzo Bianconi
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 10+ messages in thread
From: Lorenzo Bianconi @ 2022-03-11  9:14 UTC (permalink / raw)
  To: bpf, netdev
  Cc: davem, kuba, ast, daniel, brouer, toke, pabeni, echaudro,
	lorenzo.bianconi, toshiaki.makita1, andrii

Even though this is a theoretical issue, since it is not currently possible to
perform XDP_REDIRECT on a non-linear xdp_frame, the veth driver does not
account for the paged area in its ndo_xdp_xmit callback.
Introduce the xdp_get_frame_len utility routine to get the full xdp_frame
length and use it to account for the total frame size when XDP_REDIRECT
forwards a non-linear xdp frame into a veth device.

Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
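As a hypothetical illustration (not part of the patch) of where the new helper
matters: any code that needs the full wire length of an xdp_frame should use
xdp_get_frame_len() rather than xdpf->len, which only covers the linear area.

  /* Sketch only: an MTU-style check over the full frame length. */
  static bool example_frame_fits(struct net_device *dev, struct xdp_frame *xdpf)
  {
  	unsigned int max_len = dev->mtu + dev->hard_header_len;

  	return xdp_get_frame_len(xdpf) <= max_len;
  }
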
 drivers/net/veth.c |  4 ++--
 include/net/xdp.h  | 14 ++++++++++++++
 2 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 58b20ea171dd..b77ce3fdcfe8 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -494,7 +494,7 @@ static int veth_xdp_xmit(struct net_device *dev, int n,
 		struct xdp_frame *frame = frames[i];
 		void *ptr = veth_xdp_to_ptr(frame);
 
-		if (unlikely(frame->len > max_len ||
+		if (unlikely(xdp_get_frame_len(frame) > max_len ||
 			     __ptr_ring_produce(&rq->xdp_ring, ptr)))
 			break;
 		nxmit++;
@@ -855,7 +855,7 @@ static int veth_xdp_rcv(struct veth_rq *rq, int budget,
 			/* ndo_xdp_xmit */
 			struct xdp_frame *frame = veth_ptr_to_xdp(ptr);
 
-			stats->xdp_bytes += frame->len;
+			stats->xdp_bytes += xdp_get_frame_len(frame);
 			frame = veth_xdp_rcv_one(rq, frame, bq, stats);
 			if (frame) {
 				/* XDP_PASS */
diff --git a/include/net/xdp.h b/include/net/xdp.h
index b7721c3e4d1f..04c852c7a77f 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -343,6 +343,20 @@ static inline void xdp_release_frame(struct xdp_frame *xdpf)
 	__xdp_release_frame(xdpf->data, mem);
 }
 
+static __always_inline unsigned int xdp_get_frame_len(struct xdp_frame *xdpf)
+{
+	struct skb_shared_info *sinfo;
+	unsigned int len = xdpf->len;
+
+	if (likely(!xdp_frame_has_frags(xdpf)))
+		goto out;
+
+	sinfo = xdp_get_shared_info_from_frame(xdpf);
+	len += sinfo->xdp_frags_size;
+out:
+	return len;
+}
+
 int __xdp_rxq_info_reg(struct xdp_rxq_info *xdp_rxq,
 		       struct net_device *dev, u32 queue_index,
 		       unsigned int napi_id, u32 frag_size);
-- 
2.35.1



* [PATCH v5 bpf-next 2/3] veth: rework veth_xdp_rcv_skb in order to accept non-linear skb
  2022-03-11  9:14 [PATCH v5 bpf-next 0/3] introduce xdp frags support to veth driver Lorenzo Bianconi
  2022-03-11  9:14 ` [PATCH v5 bpf-next 1/3] net: veth: account total xdp_frame len running ndo_xdp_xmit Lorenzo Bianconi
@ 2022-03-11  9:14 ` Lorenzo Bianconi
  2022-03-11 14:59   ` Toke Høiland-Jørgensen
  2022-03-16  3:49   ` John Fastabend
  2022-03-11  9:14 ` [PATCH v5 bpf-next 3/3] veth: allow jumbo frames in xdp mode Lorenzo Bianconi
  2022-03-17 19:40 ` [PATCH v5 bpf-next 0/3] introduce xdp frags support to veth driver patchwork-bot+netdevbpf
  3 siblings, 2 replies; 10+ messages in thread
From: Lorenzo Bianconi @ 2022-03-11  9:14 UTC (permalink / raw)
  To: bpf, netdev
  Cc: davem, kuba, ast, daniel, brouer, toke, pabeni, echaudro,
	lorenzo.bianconi, toshiaki.makita1, andrii

Introduce the veth_convert_skb_to_xdp_buff routine in order to
convert a non-linear skb into an xdp buffer. If the received skb
is cloned or shared, veth_convert_skb_to_xdp_buff copies it
into a new skb composed of order-0 pages for both the linear and the
fragmented area. Moreover, veth_convert_skb_to_xdp_buff guarantees
that we have enough headroom for xdp.
This is a preliminary patch to allow attaching xdp programs with frags
support to veth devices.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
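As a rough sketch of the size budget the conversion enforces (it mirrors the
checks in the hunk below; the standalone helper here is illustrative only):

  /* Sketch: the copied skb must fit its linear part in one page minus
   * VETH_XDP_HEADROOM and skb_shared_info, and the remainder in at most
   * MAX_SKB_FRAGS order-0 pages.
   */
  static bool example_skb_fits_xdp_buff(const struct sk_buff *skb)
  {
  	u32 max_head_size = SKB_WITH_OVERHEAD(PAGE_SIZE - VETH_XDP_HEADROOM);

  	return skb->len <= max_head_size + PAGE_SIZE * MAX_SKB_FRAGS;
  }
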
 drivers/net/veth.c | 177 +++++++++++++++++++++++++++++++--------------
 net/core/xdp.c     |   1 +
 2 files changed, 122 insertions(+), 56 deletions(-)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index b77ce3fdcfe8..bfae15ec902b 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -433,21 +433,6 @@ static void veth_set_multicast_list(struct net_device *dev)
 {
 }
 
-static struct sk_buff *veth_build_skb(void *head, int headroom, int len,
-				      int buflen)
-{
-	struct sk_buff *skb;
-
-	skb = build_skb(head, buflen);
-	if (!skb)
-		return NULL;
-
-	skb_reserve(skb, headroom);
-	skb_put(skb, len);
-
-	return skb;
-}
-
 static int veth_select_rxq(struct net_device *dev)
 {
 	return smp_processor_id() % dev->real_num_rx_queues;
@@ -695,72 +680,143 @@ static void veth_xdp_rcv_bulk_skb(struct veth_rq *rq, void **frames,
 	}
 }
 
-static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
-					struct sk_buff *skb,
-					struct veth_xdp_tx_bq *bq,
-					struct veth_stats *stats)
+static void veth_xdp_get(struct xdp_buff *xdp)
 {
-	u32 pktlen, headroom, act, metalen, frame_sz;
-	void *orig_data, *orig_data_end;
-	struct bpf_prog *xdp_prog;
-	int mac_len, delta, off;
-	struct xdp_buff xdp;
+	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
+	int i;
 
-	skb_prepare_for_gro(skb);
+	get_page(virt_to_page(xdp->data));
+	if (likely(!xdp_buff_has_frags(xdp)))
+		return;
 
-	rcu_read_lock();
-	xdp_prog = rcu_dereference(rq->xdp_prog);
-	if (unlikely(!xdp_prog)) {
-		rcu_read_unlock();
-		goto out;
-	}
+	for (i = 0; i < sinfo->nr_frags; i++)
+		__skb_frag_ref(&sinfo->frags[i]);
+}
 
-	mac_len = skb->data - skb_mac_header(skb);
-	pktlen = skb->len + mac_len;
-	headroom = skb_headroom(skb) - mac_len;
+static int veth_convert_skb_to_xdp_buff(struct veth_rq *rq,
+					struct xdp_buff *xdp,
+					struct sk_buff **pskb)
+{
+	struct sk_buff *skb = *pskb;
+	u32 frame_sz;
 
 	if (skb_shared(skb) || skb_head_is_locked(skb) ||
-	    skb_is_nonlinear(skb) || headroom < XDP_PACKET_HEADROOM) {
+	    skb_shinfo(skb)->nr_frags) {
+		u32 size, len, max_head_size, off;
 		struct sk_buff *nskb;
-		int size, head_off;
-		void *head, *start;
 		struct page *page;
+		int i, head_off;
 
-		size = SKB_DATA_ALIGN(VETH_XDP_HEADROOM + pktlen) +
-		       SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
-		if (size > PAGE_SIZE)
+		/* We need a private copy of the skb and data buffers since
+		 * the ebpf program can modify it. We segment the original skb
+		 * into order-0 pages without linearizing it.
+		 *
+		 * Make sure we have enough space for linear and paged area
+		 */
+		max_head_size = SKB_WITH_OVERHEAD(PAGE_SIZE -
+						  VETH_XDP_HEADROOM);
+		if (skb->len > PAGE_SIZE * MAX_SKB_FRAGS + max_head_size)
 			goto drop;
 
+		/* Allocate skb head */
 		page = alloc_page(GFP_ATOMIC | __GFP_NOWARN);
 		if (!page)
 			goto drop;
 
-		head = page_address(page);
-		start = head + VETH_XDP_HEADROOM;
-		if (skb_copy_bits(skb, -mac_len, start, pktlen)) {
-			page_frag_free(head);
+		nskb = build_skb(page_address(page), PAGE_SIZE);
+		if (!nskb) {
+			put_page(page);
 			goto drop;
 		}
 
-		nskb = veth_build_skb(head, VETH_XDP_HEADROOM + mac_len,
-				      skb->len, PAGE_SIZE);
-		if (!nskb) {
-			page_frag_free(head);
+		skb_reserve(nskb, VETH_XDP_HEADROOM);
+		size = min_t(u32, skb->len, max_head_size);
+		if (skb_copy_bits(skb, 0, nskb->data, size)) {
+			consume_skb(nskb);
 			goto drop;
 		}
+		skb_put(nskb, size);
 
 		skb_copy_header(nskb, skb);
 		head_off = skb_headroom(nskb) - skb_headroom(skb);
 		skb_headers_offset_update(nskb, head_off);
+
+		/* Allocate paged area of new skb */
+		off = size;
+		len = skb->len - off;
+
+		for (i = 0; i < MAX_SKB_FRAGS && off < skb->len; i++) {
+			page = alloc_page(GFP_ATOMIC | __GFP_NOWARN);
+			if (!page) {
+				consume_skb(nskb);
+				goto drop;
+			}
+
+			size = min_t(u32, len, PAGE_SIZE);
+			skb_add_rx_frag(nskb, i, page, 0, size, PAGE_SIZE);
+			if (skb_copy_bits(skb, off, page_address(page),
+					  size)) {
+				consume_skb(nskb);
+				goto drop;
+			}
+
+			len -= size;
+			off += size;
+		}
+
 		consume_skb(skb);
 		skb = nskb;
+	} else if (skb_headroom(skb) < XDP_PACKET_HEADROOM &&
+		   pskb_expand_head(skb, VETH_XDP_HEADROOM, 0, GFP_ATOMIC)) {
+		goto drop;
 	}
 
 	/* SKB "head" area always have tailroom for skb_shared_info */
 	frame_sz = skb_end_pointer(skb) - skb->head;
 	frame_sz += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
-	xdp_init_buff(&xdp, frame_sz, &rq->xdp_rxq);
-	xdp_prepare_buff(&xdp, skb->head, skb->mac_header, pktlen, true);
+	xdp_init_buff(xdp, frame_sz, &rq->xdp_rxq);
+	xdp_prepare_buff(xdp, skb->head, skb_headroom(skb),
+			 skb_headlen(skb), true);
+
+	if (skb_is_nonlinear(skb)) {
+		skb_shinfo(skb)->xdp_frags_size = skb->data_len;
+		xdp_buff_set_frags_flag(xdp);
+	} else {
+		xdp_buff_clear_frags_flag(xdp);
+	}
+	*pskb = skb;
+
+	return 0;
+drop:
+	consume_skb(skb);
+	*pskb = NULL;
+
+	return -ENOMEM;
+}
+
+static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
+					struct sk_buff *skb,
+					struct veth_xdp_tx_bq *bq,
+					struct veth_stats *stats)
+{
+	void *orig_data, *orig_data_end;
+	struct bpf_prog *xdp_prog;
+	struct xdp_buff xdp;
+	u32 act, metalen;
+	int off;
+
+	skb_prepare_for_gro(skb);
+
+	rcu_read_lock();
+	xdp_prog = rcu_dereference(rq->xdp_prog);
+	if (unlikely(!xdp_prog)) {
+		rcu_read_unlock();
+		goto out;
+	}
+
+	__skb_push(skb, skb->data - skb_mac_header(skb));
+	if (veth_convert_skb_to_xdp_buff(rq, &xdp, &skb))
+		goto drop;
 
 	orig_data = xdp.data;
 	orig_data_end = xdp.data_end;
@@ -771,7 +827,7 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
 	case XDP_PASS:
 		break;
 	case XDP_TX:
-		get_page(virt_to_page(xdp.data));
+		veth_xdp_get(&xdp);
 		consume_skb(skb);
 		xdp.rxq->mem = rq->xdp_mem;
 		if (unlikely(veth_xdp_tx(rq, &xdp, bq) < 0)) {
@@ -783,7 +839,7 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
 		rcu_read_unlock();
 		goto xdp_xmit;
 	case XDP_REDIRECT:
-		get_page(virt_to_page(xdp.data));
+		veth_xdp_get(&xdp);
 		consume_skb(skb);
 		xdp.rxq->mem = rq->xdp_mem;
 		if (xdp_do_redirect(rq->dev, &xdp, xdp_prog)) {
@@ -806,18 +862,27 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
 	rcu_read_unlock();
 
 	/* check if bpf_xdp_adjust_head was used */
-	delta = orig_data - xdp.data;
-	off = mac_len + delta;
+	off = orig_data - xdp.data;
 	if (off > 0)
 		__skb_push(skb, off);
 	else if (off < 0)
 		__skb_pull(skb, -off);
-	skb->mac_header -= delta;
+
+	skb_reset_mac_header(skb);
 
 	/* check if bpf_xdp_adjust_tail was used */
 	off = xdp.data_end - orig_data_end;
 	if (off != 0)
 		__skb_put(skb, off); /* positive on grow, negative on shrink */
+
+	/* XDP frag metadata (e.g. nr_frags) is updated by eBPF helpers
+	 * (e.g. bpf_xdp_adjust_tail), so we need to update data_len here.
+	 */
+	if (xdp_buff_has_frags(&xdp))
+		skb->data_len = skb_shinfo(skb)->xdp_frags_size;
+	else
+		skb->data_len = 0;
+
 	skb->protocol = eth_type_trans(skb, rq->dev);
 
 	metalen = xdp.data - xdp.data_meta;
@@ -833,7 +898,7 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
 	return NULL;
 err_xdp:
 	rcu_read_unlock();
-	page_frag_free(xdp.data);
+	xdp_return_buff(&xdp);
 xdp_xmit:
 	return NULL;
 }
diff --git a/net/core/xdp.c b/net/core/xdp.c
index 361df312ee7f..b5f2d428d856 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -528,6 +528,7 @@ void xdp_return_buff(struct xdp_buff *xdp)
 out:
 	__xdp_return(xdp->data, &xdp->rxq->mem, true, xdp);
 }
+EXPORT_SYMBOL_GPL(xdp_return_buff);
 
 /* Only called for MEM_TYPE_PAGE_POOL see xdp.h */
 void __xdp_release_frame(void *data, struct xdp_mem_info *mem)
-- 
2.35.1



* [PATCH v5 bpf-next 3/3] veth: allow jumbo frames in xdp mode
  2022-03-11  9:14 [PATCH v5 bpf-next 0/3] introduce xdp frags support to veth driver Lorenzo Bianconi
  2022-03-11  9:14 ` [PATCH v5 bpf-next 1/3] net: veth: account total xdp_frame len running ndo_xdp_xmit Lorenzo Bianconi
  2022-03-11  9:14 ` [PATCH v5 bpf-next 2/3] veth: rework veth_xdp_rcv_skb in order to accept non-linear skb Lorenzo Bianconi
@ 2022-03-11  9:14 ` Lorenzo Bianconi
  2022-03-11 14:59   ` Toke Høiland-Jørgensen
  2022-03-16  3:50   ` John Fastabend
  2022-03-17 19:40 ` [PATCH v5 bpf-next 0/3] introduce xdp frags support to veth driver patchwork-bot+netdevbpf
  3 siblings, 2 replies; 10+ messages in thread
From: Lorenzo Bianconi @ 2022-03-11  9:14 UTC (permalink / raw)
  To: bpf, netdev
  Cc: davem, kuba, ast, daniel, brouer, toke, pabeni, echaudro,
	lorenzo.bianconi, toshiaki.makita1, andrii

Allow increasing the MTU beyond the page boundary on veth devices
if the attached xdp program declares support for xdp fragments.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
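For a feel of the numbers, a back-of-the-envelope sketch (user space, not
kernel code) with assumed constants for a 4 KiB page x86_64 build; the real
values come from the kernel headers of the running system:

  #include <stdio.h>

  #define PAGE_SZ        4096  /* assumed PAGE_SIZE */
  #define XDP_HEADROOM   256   /* assumed VETH_XDP_HEADROOM */
  #define SHINFO         320   /* assumed SKB_DATA_ALIGN(sizeof(struct skb_shared_info)) */
  #define HARD_HDR_LEN   14    /* assumed Ethernet hard_header_len */
  #define MAX_FRAGS      17    /* assumed MAX_SKB_FRAGS */

  int main(void)
  {
  	int max_mtu = PAGE_SZ - XDP_HEADROOM - SHINFO - HARD_HDR_LEN;

  	printf("single-buffer max_mtu ~= %d\n", max_mtu);
  	printf("frags-capable max_mtu ~= %d\n", max_mtu + PAGE_SZ * MAX_FRAGS);
  	return 0;
  }
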
 drivers/net/veth.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index bfae15ec902b..1b5714926d81 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -1528,9 +1528,14 @@ static int veth_xdp_set(struct net_device *dev, struct bpf_prog *prog,
 			goto err;
 		}
 
-		max_mtu = PAGE_SIZE - VETH_XDP_HEADROOM -
-			  peer->hard_header_len -
-			  SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+		max_mtu = SKB_WITH_OVERHEAD(PAGE_SIZE - VETH_XDP_HEADROOM) -
+			  peer->hard_header_len;
+		/* Allow increasing the max_mtu if the program supports
+		 * XDP fragments.
+		 */
+		if (prog->aux->xdp_has_frags)
+			max_mtu += PAGE_SIZE * MAX_SKB_FRAGS;
+
 		if (peer->mtu > max_mtu) {
 			NL_SET_ERR_MSG_MOD(extack, "Peer MTU is too large to set XDP");
 			err = -ERANGE;
-- 
2.35.1



* Re: [PATCH v5 bpf-next 2/3] veth: rework veth_xdp_rcv_skb in order to accept non-linear skb
  2022-03-11  9:14 ` [PATCH v5 bpf-next 2/3] veth: rework veth_xdp_rcv_skb in order to accept non-linear skb Lorenzo Bianconi
@ 2022-03-11 14:59   ` Toke Høiland-Jørgensen
  2022-03-16  3:49   ` John Fastabend
  1 sibling, 0 replies; 10+ messages in thread
From: Toke Høiland-Jørgensen @ 2022-03-11 14:59 UTC (permalink / raw)
  To: Lorenzo Bianconi, bpf, netdev
  Cc: davem, kuba, ast, daniel, brouer, pabeni, echaudro,
	lorenzo.bianconi, toshiaki.makita1, andrii

Lorenzo Bianconi <lorenzo@kernel.org> writes:

> Introduce the veth_convert_skb_to_xdp_buff routine in order to
> convert a non-linear skb into an xdp buffer. If the received skb
> is cloned or shared, veth_convert_skb_to_xdp_buff copies it
> into a new skb composed of order-0 pages for both the linear and the
> fragmented area. Moreover, veth_convert_skb_to_xdp_buff guarantees
> that we have enough headroom for xdp.
> This is a preliminary patch to allow attaching xdp programs with frags
> support to veth devices.
>
> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>

Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>



* Re: [PATCH v5 bpf-next 3/3] veth: allow jumbo frames in xdp mode
  2022-03-11  9:14 ` [PATCH v5 bpf-next 3/3] veth: allow jumbo frames in xdp mode Lorenzo Bianconi
@ 2022-03-11 14:59   ` Toke Høiland-Jørgensen
  2022-03-16  3:50   ` John Fastabend
  1 sibling, 0 replies; 10+ messages in thread
From: Toke Høiland-Jørgensen @ 2022-03-11 14:59 UTC (permalink / raw)
  To: Lorenzo Bianconi, bpf, netdev
  Cc: davem, kuba, ast, daniel, brouer, pabeni, echaudro,
	lorenzo.bianconi, toshiaki.makita1, andrii

Lorenzo Bianconi <lorenzo@kernel.org> writes:

> Allow increasing the MTU beyond the page boundary on veth devices
> if the attached xdp program declares support for xdp fragments.
>
> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>

Acked-by: Toke Høiland-Jørgensen <toke@redhat.com>



* RE: [PATCH v5 bpf-next 1/3] net: veth: account total xdp_frame len running ndo_xdp_xmit
  2022-03-11  9:14 ` [PATCH v5 bpf-next 1/3] net: veth: account total xdp_frame len running ndo_xdp_xmit Lorenzo Bianconi
@ 2022-03-16  3:22   ` John Fastabend
  0 siblings, 0 replies; 10+ messages in thread
From: John Fastabend @ 2022-03-16  3:22 UTC (permalink / raw)
  To: Lorenzo Bianconi, bpf, netdev
  Cc: davem, kuba, ast, daniel, brouer, toke, pabeni, echaudro,
	lorenzo.bianconi, toshiaki.makita1, andrii

Lorenzo Bianconi wrote:
> Even though this is a theoretical issue, since it is not currently possible to
> perform XDP_REDIRECT on a non-linear xdp_frame, the veth driver does not
> account for the paged area in its ndo_xdp_xmit callback.
> Introduce the xdp_get_frame_len utility routine to get the full xdp_frame
> length and use it to account for the total frame size when XDP_REDIRECT
> forwards a non-linear xdp frame into a veth device.
> 
> Acked-by: Toke Hoiland-Jorgensen <toke@redhat.com>
> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> ---

Acked-by: John Fastabend <john.fastabend@gmail.com>


* RE: [PATCH v5 bpf-next 2/3] veth: rework veth_xdp_rcv_skb in order to accept non-linear skb
  2022-03-11  9:14 ` [PATCH v5 bpf-next 2/3] veth: rework veth_xdp_rcv_skb in order to accept non-linear skb Lorenzo Bianconi
  2022-03-11 14:59   ` Toke Høiland-Jørgensen
@ 2022-03-16  3:49   ` John Fastabend
  1 sibling, 0 replies; 10+ messages in thread
From: John Fastabend @ 2022-03-16  3:49 UTC (permalink / raw)
  To: Lorenzo Bianconi, bpf, netdev
  Cc: davem, kuba, ast, daniel, brouer, toke, pabeni, echaudro,
	lorenzo.bianconi, toshiaki.makita1, andrii

Lorenzo Bianconi wrote:
> Introduce the veth_convert_skb_to_xdp_buff routine in order to
> convert a non-linear skb into an xdp buffer. If the received skb
> is cloned or shared, veth_convert_skb_to_xdp_buff copies it
> into a new skb composed of order-0 pages for both the linear and the
> fragmented area. Moreover, veth_convert_skb_to_xdp_buff guarantees
> that we have enough headroom for xdp.
> This is a preliminary patch to allow attaching xdp programs with frags
> support to veth devices.
> 
> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> ---

Acked-by: John Fastabend <john.fastabend@gmail.com>


* RE: [PATCH v5 bpf-next 3/3] veth: allow jumbo frames in xdp mode
  2022-03-11  9:14 ` [PATCH v5 bpf-next 3/3] veth: allow jumbo frames in xdp mode Lorenzo Bianconi
  2022-03-11 14:59   ` Toke Høiland-Jørgensen
@ 2022-03-16  3:50   ` John Fastabend
  1 sibling, 0 replies; 10+ messages in thread
From: John Fastabend @ 2022-03-16  3:50 UTC (permalink / raw)
  To: Lorenzo Bianconi, bpf, netdev
  Cc: davem, kuba, ast, daniel, brouer, toke, pabeni, echaudro,
	lorenzo.bianconi, toshiaki.makita1, andrii

Lorenzo Bianconi wrote:
> Allow increasing the MTU beyond the page boundary on veth devices
> if the attached xdp program declares support for xdp fragments.
> 
> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> ---

Thanks!

Acked-by: John Fastabend <john.fastabend@gmail.com>


* Re: [PATCH v5 bpf-next 0/3] introduce xdp frags support to veth driver
  2022-03-11  9:14 [PATCH v5 bpf-next 0/3] introduce xdp frags support to veth driver Lorenzo Bianconi
                   ` (2 preceding siblings ...)
  2022-03-11  9:14 ` [PATCH v5 bpf-next 3/3] veth: allow jumbo frames in xdp mode Lorenzo Bianconi
@ 2022-03-17 19:40 ` patchwork-bot+netdevbpf
  3 siblings, 0 replies; 10+ messages in thread
From: patchwork-bot+netdevbpf @ 2022-03-17 19:40 UTC (permalink / raw)
  To: Lorenzo Bianconi
  Cc: bpf, netdev, davem, kuba, ast, daniel, brouer, toke, pabeni,
	echaudro, lorenzo.bianconi, toshiaki.makita1, andrii

Hello:

This series was applied to bpf/bpf-next.git (master)
by Daniel Borkmann <daniel@iogearbox.net>:

On Fri, 11 Mar 2022 10:14:17 +0100 you wrote:
> Introduce xdp frags support to the veth driver in order to allow increasing the
> MTU beyond the page boundary if the attached xdp program declares support for
> xdp fragments.
> This series has been tested by running the xdp_router_ipv4 sample available in
> the kernel tree, redirecting TCP traffic from a veth pair into the mvneta driver.
> 
> Changes since v4:
> - remove TSO support for the moment
> - rename veth_convert_skb_from_xdp_buff to veth_convert_skb_to_xdp_buff
> 
> [...]

Here is the summary with links:
  - [v5,bpf-next,1/3] net: veth: account total xdp_frame len running ndo_xdp_xmit
    https://git.kernel.org/bpf/bpf-next/c/5142239a2221
  - [v5,bpf-next,2/3] veth: rework veth_xdp_rcv_skb in order to accept non-linear skb
    https://git.kernel.org/bpf/bpf-next/c/718a18a0c8a6
  - [v5,bpf-next,3/3] veth: allow jumbo frames in xdp mode
    https://git.kernel.org/bpf/bpf-next/c/7cda76d858a4

You are awesome, thank you!
-- 
Deet-doot-dot, I am a bot.
https://korg.docs.kernel.org/patchwork/pwbot.html



