* [PATCH bpf-next 0/3] introduce xdp frags support to veth driver
@ 2022-02-11  1:20 Lorenzo Bianconi
  2022-02-11  1:20 ` [PATCH bpf-next 1/3] net: veth: account total xdp_frame len running ndo_xdp_xmit Lorenzo Bianconi
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Lorenzo Bianconi @ 2022-02-11  1:20 UTC (permalink / raw)
  To: bpf, netdev
  Cc: davem, kuba, ast, daniel, brouer, toke, pabeni, echaudro,
	lorenzo.bianconi, toshiaki.makita1, andrii

Introduce XDP frags support to the veth driver in order to allow
increasing the MTU over the page boundary when the attached XDP program
declares support for XDP frags. Enable NETIF_F_ALL_TSO when the device
is running in XDP mode.
This series has been tested by running the xdp_router_ipv4 sample
available in the kernel tree, redirecting TCP traffic from a veth pair
into the mvneta driver.

Lorenzo Bianconi (3):
  net: veth: account total xdp_frame len running ndo_xdp_xmit
  veth: rework veth_xdp_rcv_skb in order to accept non-linear skb
  veth: allow jumbo frames in xdp mode

 drivers/net/veth.c | 197 ++++++++++++++++++++++++++++-----------------
 include/net/xdp.h  |  14 ++++
 net/core/xdp.c     |   1 +
 3 files changed, 138 insertions(+), 74 deletions(-)

-- 
2.34.1


^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH bpf-next 1/3] net: veth: account total xdp_frame len running ndo_xdp_xmit
  2022-02-11  1:20 [PATCH bpf-next 0/3] introduce xdp frags support to veth driver Lorenzo Bianconi
@ 2022-02-11  1:20 ` Lorenzo Bianconi
  2022-02-11  1:20 ` [PATCH bpf-next 2/3] veth: rework veth_xdp_rcv_skb in order to accept non-linear skb Lorenzo Bianconi
  2022-02-11  1:20 ` [PATCH bpf-next 3/3] veth: allow jumbo frames in xdp mode Lorenzo Bianconi
  2 siblings, 0 replies; 8+ messages in thread
From: Lorenzo Bianconi @ 2022-02-11  1:20 UTC (permalink / raw)
  To: bpf, netdev
  Cc: davem, kuba, ast, daniel, brouer, toke, pabeni, echaudro,
	lorenzo.bianconi, toshiaki.makita1, andrii

Even if this is a theoretical issue for now, since it is not possible to
perform XDP_REDIRECT on a non-linear xdp_frame, the veth driver does not
account for the paged area in its ndo_xdp_xmit callback.
Introduce the xdp_get_frame_len() utility routine to get the full
xdp_frame length, and use it to account for the total frame size when
running XDP_REDIRECT of a non-linear xdp_frame into a veth device.
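The length accounting described above can be sketched as a standalone C
model (the model_* structs and helper below are hypothetical stand-ins
for the kernel types, not the real API):

```c
#include <assert.h>

/* Simplified stand-ins for the kernel's xdp_frame / skb_shared_info:
 * a frame has a linear area (len) and, when the frags flag is set,
 * extra paged data whose total size is tracked in the shared info. */
struct model_shared_info {
	unsigned int xdp_frags_size;	/* total bytes in the paged area */
};

struct model_xdp_frame {
	unsigned int len;		/* linear area length */
	int has_frags;			/* xdp_frame_has_frags() analogue */
	struct model_shared_info sinfo;
};

/* Mirrors the logic of xdp_get_frame_len(): the full frame length is
 * the linear length plus, for non-linear frames, the frags size. */
static unsigned int model_get_frame_len(const struct model_xdp_frame *f)
{
	unsigned int len = f->len;

	if (f->has_frags)
		len += f->sinfo.xdp_frags_size;
	return len;
}
```

For a linear 1500-byte frame this returns 1500; for a 1024-byte linear
area carrying 8192 bytes of frags it returns 9216, which is the total
the ndo_xdp_xmit length check and the xdp_bytes stats need to see.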

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/veth.c |  4 ++--
 include/net/xdp.h  | 14 ++++++++++++++
 2 files changed, 16 insertions(+), 2 deletions(-)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 354a963075c5..22ecaf8b8f98 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -493,7 +493,7 @@ static int veth_xdp_xmit(struct net_device *dev, int n,
 		struct xdp_frame *frame = frames[i];
 		void *ptr = veth_xdp_to_ptr(frame);
 
-		if (unlikely(frame->len > max_len ||
+		if (unlikely(xdp_get_frame_len(frame) > max_len ||
 			     __ptr_ring_produce(&rq->xdp_ring, ptr)))
 			break;
 		nxmit++;
@@ -854,7 +854,7 @@ static int veth_xdp_rcv(struct veth_rq *rq, int budget,
 			/* ndo_xdp_xmit */
 			struct xdp_frame *frame = veth_ptr_to_xdp(ptr);
 
-			stats->xdp_bytes += frame->len;
+			stats->xdp_bytes += xdp_get_frame_len(frame);
 			frame = veth_xdp_rcv_one(rq, frame, bq, stats);
 			if (frame) {
 				/* XDP_PASS */
diff --git a/include/net/xdp.h b/include/net/xdp.h
index b7721c3e4d1f..04c852c7a77f 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -343,6 +343,20 @@ static inline void xdp_release_frame(struct xdp_frame *xdpf)
 	__xdp_release_frame(xdpf->data, mem);
 }
 
+static __always_inline unsigned int xdp_get_frame_len(struct xdp_frame *xdpf)
+{
+	struct skb_shared_info *sinfo;
+	unsigned int len = xdpf->len;
+
+	if (likely(!xdp_frame_has_frags(xdpf)))
+		goto out;
+
+	sinfo = xdp_get_shared_info_from_frame(xdpf);
+	len += sinfo->xdp_frags_size;
+out:
+	return len;
+}
+
 int __xdp_rxq_info_reg(struct xdp_rxq_info *xdp_rxq,
 		       struct net_device *dev, u32 queue_index,
 		       unsigned int napi_id, u32 frag_size);
-- 
2.34.1



* [PATCH bpf-next 2/3] veth: rework veth_xdp_rcv_skb in order to accept non-linear skb
  2022-02-11  1:20 [PATCH bpf-next 0/3] introduce xdp frags support to veth driver Lorenzo Bianconi
  2022-02-11  1:20 ` [PATCH bpf-next 1/3] net: veth: account total xdp_frame len running ndo_xdp_xmit Lorenzo Bianconi
@ 2022-02-11  1:20 ` Lorenzo Bianconi
  2022-02-12  1:04   ` Jakub Kicinski
  2022-02-11  1:20 ` [PATCH bpf-next 3/3] veth: allow jumbo frames in xdp mode Lorenzo Bianconi
  2 siblings, 1 reply; 8+ messages in thread
From: Lorenzo Bianconi @ 2022-02-11  1:20 UTC (permalink / raw)
  To: bpf, netdev
  Cc: davem, kuba, ast, daniel, brouer, toke, pabeni, echaudro,
	lorenzo.bianconi, toshiaki.makita1, andrii

Introduce the veth_convert_xdp_buff_from_skb() routine in order to
convert a non-linear skb into an xdp buffer. If the received skb is
cloned or shared, veth_convert_xdp_buff_from_skb() copies it into a new
skb composed of order-0 pages for both the linear and the fragmented
areas. Moreover, veth_convert_xdp_buff_from_skb() guarantees we have
enough headroom for XDP.
This is a preliminary patch to allow attaching XDP programs with frags
support to veth devices.
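The page-by-page copy described above can be sketched as a small
standalone C model (the model_* names are hypothetical, and
MODEL_PAGE_SIZE / MODEL_MAX_SKB_FRAGS are assumed typical values, not
taken from a kernel build):

```c
#include <assert.h>

#define MODEL_PAGE_SIZE		4096u
#define MODEL_MAX_SKB_FRAGS	17

/* Model of the paged-area loop in veth_convert_xdp_buff_from_skb():
 * the non-linear part of the skb (data_len bytes) is segmented into
 * order-0 pages, copying min(remaining, PAGE_SIZE) bytes into each
 * fragment. Returns the number of fragments used, or -1 when the data
 * would not fit in MAX_SKB_FRAGS fragments (the drop path). */
static int model_segment_paged_area(unsigned int data_len,
				    unsigned int frag_sizes[MODEL_MAX_SKB_FRAGS])
{
	unsigned int n_pages = (data_len + MODEL_PAGE_SIZE - 1) / MODEL_PAGE_SIZE;
	unsigned int i;

	if (n_pages > MODEL_MAX_SKB_FRAGS)
		return -1;

	for (i = 0; i < n_pages; i++) {
		unsigned int size = data_len < MODEL_PAGE_SIZE ?
				    data_len : MODEL_PAGE_SIZE;

		frag_sizes[i] = size;	/* skb_add_rx_frag(nskb, i, ...) */
		data_len -= size;
	}
	return (int)n_pages;
}
```

A 10000-byte paged area, for instance, is split into fragments of 4096,
4096 and 1808 bytes, each backed by its own freshly allocated order-0
page.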

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/veth.c | 172 +++++++++++++++++++++++++++++----------------
 net/core/xdp.c     |   1 +
 2 files changed, 113 insertions(+), 60 deletions(-)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 22ecaf8b8f98..6419de5d1f49 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -432,21 +432,6 @@ static void veth_set_multicast_list(struct net_device *dev)
 {
 }
 
-static struct sk_buff *veth_build_skb(void *head, int headroom, int len,
-				      int buflen)
-{
-	struct sk_buff *skb;
-
-	skb = build_skb(head, buflen);
-	if (!skb)
-		return NULL;
-
-	skb_reserve(skb, headroom);
-	skb_put(skb, len);
-
-	return skb;
-}
-
 static int veth_select_rxq(struct net_device *dev)
 {
 	return smp_processor_id() % dev->real_num_rx_queues;
@@ -694,72 +679,133 @@ static void veth_xdp_rcv_bulk_skb(struct veth_rq *rq, void **frames,
 	}
 }
 
-static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
-					struct sk_buff *skb,
-					struct veth_xdp_tx_bq *bq,
-					struct veth_stats *stats)
+static void veth_xdp_get(struct xdp_buff *xdp)
 {
-	u32 pktlen, headroom, act, metalen, frame_sz;
-	void *orig_data, *orig_data_end;
-	struct bpf_prog *xdp_prog;
-	int mac_len, delta, off;
-	struct xdp_buff xdp;
+	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
+	int i;
 
-	skb_prepare_for_gro(skb);
+	get_page(virt_to_page(xdp->data));
+	if (likely(!xdp_buff_has_frags(xdp)))
+		return;
 
-	rcu_read_lock();
-	xdp_prog = rcu_dereference(rq->xdp_prog);
-	if (unlikely(!xdp_prog)) {
-		rcu_read_unlock();
-		goto out;
-	}
+	for (i = 0; i < sinfo->nr_frags; i++)
+		__skb_frag_ref(&sinfo->frags[i]);
+}
 
-	mac_len = skb->data - skb_mac_header(skb);
-	pktlen = skb->len + mac_len;
-	headroom = skb_headroom(skb) - mac_len;
+static int veth_convert_xdp_buff_from_skb(struct veth_rq *rq,
+					  struct xdp_buff *xdp,
+					  struct sk_buff **pskb)
+{
+	struct sk_buff *skb = *pskb;
+	u32 frame_sz;
 
-	if (skb_shared(skb) || skb_head_is_locked(skb) ||
-	    skb_is_nonlinear(skb) || headroom < XDP_PACKET_HEADROOM) {
+	if (skb_shared(skb) || skb_head_is_locked(skb)) {
+		int n_pages = (skb->data_len + (PAGE_SIZE - 1)) >> PAGE_SHIFT;
+		u32 size, data_len = skb->data_len;
 		struct sk_buff *nskb;
-		int size, head_off;
-		void *head, *start;
 		struct page *page;
+		int i, off;
 
-		size = SKB_DATA_ALIGN(VETH_XDP_HEADROOM + pktlen) +
+		/* We need a private copy of the skb and data buffers since
+		 * the ebpf program can modify it. We segment the original skb
+		 * into order-0 pages without linearize it
+		 */
+		size = SKB_DATA_ALIGN(VETH_XDP_HEADROOM + skb_headlen(skb)) +
 		       SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
-		if (size > PAGE_SIZE)
+
+		/* Make sure we have enough space for linear and paged area */
+		if (size > PAGE_SIZE || n_pages > MAX_SKB_FRAGS)
 			goto drop;
 
+		/* Allocate skb head */
 		page = alloc_page(GFP_ATOMIC | __GFP_NOWARN);
 		if (!page)
 			goto drop;
 
-		head = page_address(page);
-		start = head + VETH_XDP_HEADROOM;
-		if (skb_copy_bits(skb, -mac_len, start, pktlen)) {
-			page_frag_free(head);
-			goto drop;
-		}
-
-		nskb = veth_build_skb(head, VETH_XDP_HEADROOM + mac_len,
-				      skb->len, PAGE_SIZE);
+		nskb = build_skb(page_address(page), PAGE_SIZE);
 		if (!nskb) {
-			page_frag_free(head);
+			put_page(page);
 			goto drop;
 		}
 
+		skb_reserve(nskb, VETH_XDP_HEADROOM);
+		skb_copy_from_linear_data(skb, nskb->data, skb_headlen(skb));
+		skb_put(nskb, skb_headlen(skb));
+
 		skb_copy_header(nskb, skb);
-		head_off = skb_headroom(nskb) - skb_headroom(skb);
-		skb_headers_offset_update(nskb, head_off);
+		off = skb_headroom(nskb) - skb_headroom(skb);
+		skb_headers_offset_update(nskb, off);
+
+		/* Allocate paged area of new skb */
+		off = skb_headlen(skb);
+		for (i = 0; i < n_pages; i++) {
+			page = alloc_page(GFP_ATOMIC | __GFP_NOWARN);
+			if (!page) {
+				consume_skb(nskb);
+				goto drop;
+			}
+
+			size = min_t(u32, data_len, PAGE_SIZE);
+			skb_add_rx_frag(nskb, i, page, 0, size, PAGE_SIZE);
+			skb_copy_bits(skb, off, page_address(page), size);
+
+			data_len -= size;
+			off += size;
+		}
+
 		consume_skb(skb);
 		skb = nskb;
+	} else if (skb_headroom(skb) < XDP_PACKET_HEADROOM &&
+		   pskb_expand_head(skb, VETH_XDP_HEADROOM, 0, GFP_ATOMIC)) {
+		goto drop;
 	}
 
 	/* SKB "head" area always have tailroom for skb_shared_info */
 	frame_sz = skb_end_pointer(skb) - skb->head;
 	frame_sz += SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
-	xdp_init_buff(&xdp, frame_sz, &rq->xdp_rxq);
-	xdp_prepare_buff(&xdp, skb->head, skb->mac_header, pktlen, true);
+	xdp_init_buff(xdp, frame_sz, &rq->xdp_rxq);
+	xdp_prepare_buff(xdp, skb->head, skb_headroom(skb),
+			 skb_headlen(skb), true);
+
+	if (skb_is_nonlinear(skb)) {
+		skb_shinfo(skb)->xdp_frags_size = skb->data_len;
+		xdp_buff_set_frags_flag(xdp);
+	} else {
+		xdp_buff_clear_frags_flag(xdp);
+	}
+	*pskb = skb;
+
+	return 0;
+drop:
+	consume_skb(skb);
+	*pskb = NULL;
+
+	return -ENOMEM;
+}
+
+static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
+					struct sk_buff *skb,
+					struct veth_xdp_tx_bq *bq,
+					struct veth_stats *stats)
+{
+	void *orig_data, *orig_data_end;
+	struct bpf_prog *xdp_prog;
+	struct xdp_buff xdp;
+	u32 act, metalen;
+	int off;
+
+	skb_prepare_for_gro(skb);
+
+	rcu_read_lock();
+	xdp_prog = rcu_dereference(rq->xdp_prog);
+	if (unlikely(!xdp_prog)) {
+		rcu_read_unlock();
+		goto out;
+	}
+
+	__skb_push(skb, skb->data - skb_mac_header(skb));
+	if (veth_convert_xdp_buff_from_skb(rq, &xdp, &skb))
+		goto drop;
 
 	orig_data = xdp.data;
 	orig_data_end = xdp.data_end;
@@ -770,7 +816,7 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
 	case XDP_PASS:
 		break;
 	case XDP_TX:
-		get_page(virt_to_page(xdp.data));
+		veth_xdp_get(&xdp);
 		consume_skb(skb);
 		xdp.rxq->mem = rq->xdp_mem;
 		if (unlikely(veth_xdp_tx(rq, &xdp, bq) < 0)) {
@@ -782,7 +828,7 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
 		rcu_read_unlock();
 		goto xdp_xmit;
 	case XDP_REDIRECT:
-		get_page(virt_to_page(xdp.data));
+		veth_xdp_get(&xdp);
 		consume_skb(skb);
 		xdp.rxq->mem = rq->xdp_mem;
 		if (xdp_do_redirect(rq->dev, &xdp, xdp_prog)) {
@@ -805,18 +851,24 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
 	rcu_read_unlock();
 
 	/* check if bpf_xdp_adjust_head was used */
-	delta = orig_data - xdp.data;
-	off = mac_len + delta;
+	off = orig_data - xdp.data;
 	if (off > 0)
 		__skb_push(skb, off);
 	else if (off < 0)
 		__skb_pull(skb, -off);
-	skb->mac_header -= delta;
+
+	skb_reset_mac_header(skb);
 
 	/* check if bpf_xdp_adjust_tail was used */
 	off = xdp.data_end - orig_data_end;
 	if (off != 0)
 		__skb_put(skb, off); /* positive on grow, negative on shrink */
+
+	if (xdp_buff_has_frags(&xdp))
+		skb->data_len = skb_shinfo(skb)->xdp_frags_size;
+	else
+		skb->data_len = 0;
+
 	skb->protocol = eth_type_trans(skb, rq->dev);
 
 	metalen = xdp.data - xdp.data_meta;
@@ -832,7 +884,7 @@ static struct sk_buff *veth_xdp_rcv_skb(struct veth_rq *rq,
 	return NULL;
 err_xdp:
 	rcu_read_unlock();
-	page_frag_free(xdp.data);
+	xdp_return_buff(&xdp);
 xdp_xmit:
 	return NULL;
 }
diff --git a/net/core/xdp.c b/net/core/xdp.c
index 361df312ee7f..b5f2d428d856 100644
--- a/net/core/xdp.c
+++ b/net/core/xdp.c
@@ -528,6 +528,7 @@ void xdp_return_buff(struct xdp_buff *xdp)
 out:
 	__xdp_return(xdp->data, &xdp->rxq->mem, true, xdp);
 }
+EXPORT_SYMBOL_GPL(xdp_return_buff);
 
 /* Only called for MEM_TYPE_PAGE_POOL see xdp.h */
 void __xdp_release_frame(void *data, struct xdp_mem_info *mem)
-- 
2.34.1



* [PATCH bpf-next 3/3] veth: allow jumbo frames in xdp mode
  2022-02-11  1:20 [PATCH bpf-next 0/3] introduce xdp frags support to veth driver Lorenzo Bianconi
  2022-02-11  1:20 ` [PATCH bpf-next 1/3] net: veth: account total xdp_frame len running ndo_xdp_xmit Lorenzo Bianconi
  2022-02-11  1:20 ` [PATCH bpf-next 2/3] veth: rework veth_xdp_rcv_skb in order to accept non-linear skb Lorenzo Bianconi
@ 2022-02-11  1:20 ` Lorenzo Bianconi
  2 siblings, 0 replies; 8+ messages in thread
From: Lorenzo Bianconi @ 2022-02-11  1:20 UTC (permalink / raw)
  To: bpf, netdev
  Cc: davem, kuba, ast, daniel, brouer, toke, pabeni, echaudro,
	lorenzo.bianconi, toshiaki.makita1, andrii

Allow increasing the MTU over the page boundary on veth devices if the
attached XDP program declares support for XDP frags. Enable
NETIF_F_ALL_TSO when the device is running in XDP mode.
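The single-page MTU bound that still applies to programs without frags
support can be sketched numerically; the constants used in the example
(4K page, 256 bytes of XDP headroom, 14-byte Ethernet header, 320-byte
aligned skb_shared_info) are illustrative assumptions, not values read
from a running kernel:

```c
#include <assert.h>

/* Model of the max_mtu computation in veth_xdp_set(): when the XDP
 * program does not declare frags support, the headroom, the link-layer
 * header, the packet itself and the skb_shared_info tail must all fit
 * in a single page. */
static unsigned int model_max_mtu(unsigned int page_size,
				  unsigned int xdp_headroom,
				  unsigned int hard_header_len,
				  unsigned int shinfo_size)
{
	return page_size - xdp_headroom - hard_header_len - shinfo_size;
}
```

With the assumed values this yields 4096 - 256 - 14 - 320 = 3506 bytes;
MTUs beyond that bound now require a frags-aware program instead of
being rejected outright.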

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/veth.c | 21 +++++++++------------
 1 file changed, 9 insertions(+), 12 deletions(-)

diff --git a/drivers/net/veth.c b/drivers/net/veth.c
index 6419de5d1f49..d6ebbfe52969 100644
--- a/drivers/net/veth.c
+++ b/drivers/net/veth.c
@@ -1498,7 +1498,6 @@ static int veth_xdp_set(struct net_device *dev, struct bpf_prog *prog,
 	struct veth_priv *priv = netdev_priv(dev);
 	struct bpf_prog *old_prog;
 	struct net_device *peer;
-	unsigned int max_mtu;
 	int err;
 
 	old_prog = priv->_xdp_prog;
@@ -1506,6 +1505,8 @@ static int veth_xdp_set(struct net_device *dev, struct bpf_prog *prog,
 	peer = rtnl_dereference(priv->peer);
 
 	if (prog) {
+		unsigned int max_mtu;
+
 		if (!peer) {
 			NL_SET_ERR_MSG_MOD(extack, "Cannot set XDP when peer is detached");
 			err = -ENOTCONN;
@@ -1515,9 +1516,9 @@ static int veth_xdp_set(struct net_device *dev, struct bpf_prog *prog,
 		max_mtu = PAGE_SIZE - VETH_XDP_HEADROOM -
 			  peer->hard_header_len -
 			  SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
-		if (peer->mtu > max_mtu) {
-			NL_SET_ERR_MSG_MOD(extack, "Peer MTU is too large to set XDP");
-			err = -ERANGE;
+		if (!prog->aux->xdp_has_frags && peer->mtu > max_mtu) {
+			NL_SET_ERR_MSG_MOD(extack, "prog does not support XDP frags");
+			err = -EOPNOTSUPP;
 			goto err;
 		}
 
@@ -1535,10 +1536,8 @@ static int veth_xdp_set(struct net_device *dev, struct bpf_prog *prog,
 			}
 		}
 
-		if (!old_prog) {
-			peer->hw_features &= ~NETIF_F_GSO_SOFTWARE;
-			peer->max_mtu = max_mtu;
-		}
+		if (!old_prog)
+			peer->hw_features &= ~NETIF_F_GSO_FRAGLIST;
 	}
 
 	if (old_prog) {
@@ -1546,10 +1545,8 @@ static int veth_xdp_set(struct net_device *dev, struct bpf_prog *prog,
 			if (dev->flags & IFF_UP)
 				veth_disable_xdp(dev);
 
-			if (peer) {
-				peer->hw_features |= NETIF_F_GSO_SOFTWARE;
-				peer->max_mtu = ETH_MAX_MTU;
-			}
+			if (peer)
+				peer->hw_features |= NETIF_F_GSO_FRAGLIST;
 		}
 		bpf_prog_put(old_prog);
 	}
-- 
2.34.1



* Re: [PATCH bpf-next 2/3] veth: rework veth_xdp_rcv_skb in order to accept non-linear skb
  2022-02-11  1:20 ` [PATCH bpf-next 2/3] veth: rework veth_xdp_rcv_skb in order to accept non-linear skb Lorenzo Bianconi
@ 2022-02-12  1:04   ` Jakub Kicinski
  2022-02-12 11:03     ` Lorenzo Bianconi
  0 siblings, 1 reply; 8+ messages in thread
From: Jakub Kicinski @ 2022-02-12  1:04 UTC (permalink / raw)
  To: Lorenzo Bianconi
  Cc: bpf, netdev, davem, ast, daniel, brouer, toke, pabeni, echaudro,
	lorenzo.bianconi, toshiaki.makita1, andrii

On Fri, 11 Feb 2022 02:20:31 +0100 Lorenzo Bianconi wrote:
> +	if (skb_shared(skb) || skb_head_is_locked(skb)) {

Is this sufficient to guarantee that the frags can be written?
skb_cow_data() tells a different story.


* Re: [PATCH bpf-next 2/3] veth: rework veth_xdp_rcv_skb in order to accept non-linear skb
  2022-02-12  1:04   ` Jakub Kicinski
@ 2022-02-12 11:03     ` Lorenzo Bianconi
  2022-02-14 15:48       ` Jakub Kicinski
  0 siblings, 1 reply; 8+ messages in thread
From: Lorenzo Bianconi @ 2022-02-12 11:03 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: bpf, netdev, davem, ast, daniel, brouer, toke, pabeni, echaudro,
	lorenzo.bianconi, toshiaki.makita1, andrii


On Feb 11, Jakub Kicinski wrote:
> On Fri, 11 Feb 2022 02:20:31 +0100 Lorenzo Bianconi wrote:
> > +	if (skb_shared(skb) || skb_head_is_locked(skb)) {
> 
> Is this sufficient to guarantee that the frags can be written?
> skb_cow_data() tells a different story.

Do you mean we should consider the paged part of the skb as always not
writable? In other words, we should check something like:

	if (skb_shared(skb) || skb_head_is_locked(skb) ||
	    skb_shinfo(skb)->nr_frags) {
	    ...
	}

Regards,
Lorenzo



* Re: [PATCH bpf-next 2/3] veth: rework veth_xdp_rcv_skb in order to accept non-linear skb
  2022-02-12 11:03     ` Lorenzo Bianconi
@ 2022-02-14 15:48       ` Jakub Kicinski
  2022-02-14 21:03         ` Lorenzo Bianconi
  0 siblings, 1 reply; 8+ messages in thread
From: Jakub Kicinski @ 2022-02-14 15:48 UTC (permalink / raw)
  To: Lorenzo Bianconi
  Cc: bpf, netdev, davem, ast, daniel, brouer, toke, pabeni, echaudro,
	lorenzo.bianconi, toshiaki.makita1, andrii

On Sat, 12 Feb 2022 12:03:49 +0100 Lorenzo Bianconi wrote:
> On Feb 11, Jakub Kicinski wrote:
> > On Fri, 11 Feb 2022 02:20:31 +0100 Lorenzo Bianconi wrote:  
> > > +	if (skb_shared(skb) || skb_head_is_locked(skb)) {  
> > 
> > Is this sufficient to guarantee that the frags can be written?
> > skb_cow_data() tells a different story.  
> 
> Do you mean we should consider the paged part of the skb as always not
> writable? In other words, we should check something like:
> 
> 	if (skb_shared(skb) || skb_head_is_locked(skb) ||
> 	    skb_shinfo(skb)->nr_frags) {
> 	    ...
> 	}

Yes, we do have skb_has_shared_frag() but IDK if it guarantees frags
are writable :S


* Re: [PATCH bpf-next 2/3] veth: rework veth_xdp_rcv_skb in order to accept non-linear skb
  2022-02-14 15:48       ` Jakub Kicinski
@ 2022-02-14 21:03         ` Lorenzo Bianconi
  0 siblings, 0 replies; 8+ messages in thread
From: Lorenzo Bianconi @ 2022-02-14 21:03 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: bpf, netdev, davem, ast, daniel, brouer, toke, pabeni, echaudro,
	lorenzo.bianconi, toshiaki.makita1, andrii


> On Sat, 12 Feb 2022 12:03:49 +0100 Lorenzo Bianconi wrote:
> > On Feb 11, Jakub Kicinski wrote:
> > > On Fri, 11 Feb 2022 02:20:31 +0100 Lorenzo Bianconi wrote:  
> > > > +	if (skb_shared(skb) || skb_head_is_locked(skb)) {  
> > > 
> > > Is this sufficient to guarantee that the frags can be written?
> > > skb_cow_data() tells a different story.  
> > 
> > Do you mean we should consider the paged part of the skb as always not
> > writable? In other words, we should check something like:
> > 
> > 	if (skb_shared(skb) || skb_head_is_locked(skb) ||
> > 	    skb_shinfo(skb)->nr_frags) {
> > 	    ...
> > 	}
> 
> Yes, we do have skb_has_shared_frag() but IDK if it guarantees frags
> are writable :S

Ack, I will add the skb_shinfo(skb)->nr_frags check in v2.

Regards,
Lorenzo


