bpf.vger.kernel.org archive mirror
* [PATCH 0/6] rework mvneta napi_poll loop for XDP multi-buffers
@ 2020-07-09 15:57 Lorenzo Bianconi
  2020-07-09 15:57 ` [PATCH 1/6] xdp: introduce xdp_get_shared_info_from_{buff,frame} utility routines Lorenzo Bianconi
                   ` (5 more replies)
  0 siblings, 6 replies; 11+ messages in thread
From: Lorenzo Bianconi @ 2020-07-09 15:57 UTC (permalink / raw)
  To: netdev
  Cc: davem, bpf, kuba, ilias.apalodimas, lorenzo.bianconi, brouer,
	echaudro, sameehj

Rework the mvneta_rx_swbm routine to process all RX descriptors before
building the skb or running the XDP program attached to the interface.
Introduce xdp_get_shared_info_from_{buff,frame} utility routines to get the
skb_shared_info pointer from an xdp_buff or xdp_frame.
This is a preliminary series to enable multi-buffer and jumbo frame support
for XDP, following the design in [1].

[1] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org

Lorenzo Bianconi (6):
  xdp: introduce xdp_get_shared_info_from_{buff,frame} utility routines
  net: mvneta: move skb build after descriptors processing
  net: mvneta: move mvneta_run_xdp after descriptors processing
  net: mvneta: drop all fragments in XDP_DROP
  net: mvneta: get rid of skb in mvneta_rx_queue
  net: mvneta: move rxq->left_size on the stack

 drivers/net/ethernet/marvell/mvneta.c | 220 ++++++++++++++------------
 include/net/xdp.h                     |  15 ++
 2 files changed, 137 insertions(+), 98 deletions(-)

-- 
2.26.2


^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH 1/6] xdp: introduce xdp_get_shared_info_from_{buff,frame} utility routines
  2020-07-09 15:57 [PATCH 0/6] rework mvneta napi_poll loop for XDP multi-buffers Lorenzo Bianconi
@ 2020-07-09 15:57 ` Lorenzo Bianconi
  2020-07-09 15:57 ` [PATCH 2/6] net: mvneta: move skb build after descriptors processing Lorenzo Bianconi
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Lorenzo Bianconi @ 2020-07-09 15:57 UTC (permalink / raw)
  To: netdev
  Cc: davem, bpf, kuba, ilias.apalodimas, lorenzo.bianconi, brouer,
	echaudro, sameehj

Introduce xdp_get_shared_info_from_{buff,frame} utility routines to get
the skb_shared_info pointer from an xdp_buff or xdp_frame.
These helpers will be used to implement XDP multi-buffer support.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 include/net/xdp.h | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index 609f819ed08b..d3005bef812f 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -85,6 +85,12 @@ struct xdp_buff {
 	((xdp)->data_hard_start + (xdp)->frame_sz -	\
 	 SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
 
+static inline struct skb_shared_info *
+xdp_get_shared_info_from_buff(struct xdp_buff *xdp)
+{
+	return (struct skb_shared_info *)xdp_data_hard_end(xdp);
+}
+
 struct xdp_frame {
 	void *data;
 	u16 len;
@@ -98,6 +104,15 @@ struct xdp_frame {
 	struct net_device *dev_rx; /* used by cpumap */
 };
 
+static inline struct skb_shared_info *
+xdp_get_shared_info_from_frame(struct xdp_frame *frame)
+{
+	void *data_hard_start = frame->data - frame->headroom - sizeof(*frame);
+
+	return (struct skb_shared_info *)(data_hard_start + frame->frame_sz -
+				SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
+}
+
 /* Clear kernel pointers in xdp_frame */
 static inline void xdp_scrub_frame(struct xdp_frame *frame)
 {
-- 
2.26.2



* [PATCH 2/6] net: mvneta: move skb build after descriptors processing
  2020-07-09 15:57 [PATCH 0/6] rework mvneta napi_poll loop for XDP multi-buffers Lorenzo Bianconi
  2020-07-09 15:57 ` [PATCH 1/6] xdp: introduce xdp_get_shared_info_from_{buff,frame} utility routines Lorenzo Bianconi
@ 2020-07-09 15:57 ` Lorenzo Bianconi
  2020-07-15 19:58   ` Jakub Kicinski
  2020-07-09 15:57 ` [PATCH 3/6] net: mvneta: move mvneta_run_xdp " Lorenzo Bianconi
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 11+ messages in thread
From: Lorenzo Bianconi @ 2020-07-09 15:57 UTC (permalink / raw)
  To: netdev
  Cc: davem, bpf, kuba, ilias.apalodimas, lorenzo.bianconi, brouer,
	echaudro, sameehj

Move the skb build after all descriptor processing. This is a preliminary
patch to enable multi-buffer and jumbo frame support for XDP.
Introduce the mvneta_xdp_put_buff routine to release all pages used by an
XDP multi-buffer.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/marvell/mvneta.c | 157 ++++++++++++++++----------
 1 file changed, 100 insertions(+), 57 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 37b9c75a5a60..678f90ccf271 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2026,6 +2026,21 @@ int mvneta_rx_refill_queue(struct mvneta_port *pp, struct mvneta_rx_queue *rxq)
 	return i;
 }
 
+static void
+mvneta_xdp_put_buff(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
+		    struct xdp_buff *xdp, int sync_len, bool napi)
+{
+	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
+	struct page *page = virt_to_head_page(xdp->data);
+	int i, size = sync_len > 0 ? sync_len : -1;
+
+	page_pool_put_page(rxq->page_pool, page, size, napi);
+	for (i = 0; i < sinfo->nr_frags; i++) {
+		page = sinfo->frags[i].bv_page;
+		page_pool_put_page(rxq->page_pool, page, size, napi);
+	}
+}
+
 static int
 mvneta_xdp_submit_frame(struct mvneta_port *pp, struct mvneta_tx_queue *txq,
 			struct xdp_frame *xdpf, bool dma_map)
@@ -2229,6 +2244,7 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 	int data_len = -MVNETA_MH_SIZE, len;
 	struct net_device *dev = pp->dev;
 	enum dma_data_direction dma_dir;
+	struct skb_shared_info *sinfo;
 	int ret = 0;
 
 	if (MVNETA_SKB_SIZE(rx_desc->data_size) > PAGE_SIZE) {
@@ -2252,35 +2268,13 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 	xdp->data_end = xdp->data + data_len;
 	xdp_set_data_meta_invalid(xdp);
 
-	if (xdp_prog) {
-		ret = mvneta_run_xdp(pp, rxq, xdp_prog, xdp, stats);
-		if (ret)
-			goto out;
-	}
-
-	rxq->skb = build_skb(xdp->data_hard_start, PAGE_SIZE);
-	if (unlikely(!rxq->skb)) {
-		struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats);
-
-		netdev_err(dev, "Can't allocate skb on queue %d\n", rxq->id);
-
-		u64_stats_update_begin(&stats->syncp);
-		stats->es.skb_alloc_error++;
-		stats->rx_dropped++;
-		u64_stats_update_end(&stats->syncp);
+	sinfo = xdp_get_shared_info_from_buff(xdp);
+	sinfo->nr_frags = 0;
 
-		return -ENOMEM;
-	}
-	page_pool_release_page(rxq->page_pool, page);
-
-	skb_reserve(rxq->skb,
-		    xdp->data - xdp->data_hard_start);
-	skb_put(rxq->skb, xdp->data_end - xdp->data);
-	mvneta_rx_csum(pp, rx_desc->status, rxq->skb);
+	if (xdp_prog)
+		ret = mvneta_run_xdp(pp, rxq, xdp_prog, xdp, stats);
 
 	rxq->left_size = rx_desc->data_size - len;
-
-out:
 	rx_desc->buf_phys_addr = 0;
 
 	return ret;
@@ -2290,8 +2284,10 @@ static void
 mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 			    struct mvneta_rx_desc *rx_desc,
 			    struct mvneta_rx_queue *rxq,
+			    struct xdp_buff *xdp,
 			    struct page *page)
 {
+	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
 	struct net_device *dev = pp->dev;
 	enum dma_data_direction dma_dir;
 	int data_len, len;
@@ -2307,18 +2303,51 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 	dma_sync_single_for_cpu(dev->dev.parent,
 				rx_desc->buf_phys_addr,
 				len, dma_dir);
-	if (data_len > 0) {
-		/* refill descriptor with new buffer later */
-		skb_add_rx_frag(rxq->skb,
-				skb_shinfo(rxq->skb)->nr_frags,
-				page, pp->rx_offset_correction, data_len,
-				PAGE_SIZE);
-	}
-	page_pool_release_page(rxq->page_pool, page);
-	rx_desc->buf_phys_addr = 0;
+
+	if (data_len > 0 && sinfo->nr_frags < MAX_SKB_FRAGS) {
+		skb_frag_t *frag = &sinfo->frags[sinfo->nr_frags];
+
+		frag->bv_offset = pp->rx_offset_correction;
+		skb_frag_size_set(frag, data_len);
+		frag->bv_page = page;
+		sinfo->nr_frags++;
+
+		rx_desc->buf_phys_addr = 0;
+	}
 	rxq->left_size -= len;
 }
 
+static struct sk_buff *
+mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
+		      struct xdp_buff *xdp, u32 desc_status)
+{
+	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
+	int i, num_frags = sinfo->nr_frags;
+	skb_frag_t frags[MAX_SKB_FRAGS];
+	struct sk_buff *skb;
+
+	memcpy(frags, sinfo->frags, sizeof(skb_frag_t) * num_frags);
+
+	skb = build_skb(xdp->data_hard_start, PAGE_SIZE);
+	if (!skb)
+		return ERR_PTR(-ENOMEM);
+
+	page_pool_release_page(rxq->page_pool, virt_to_page(xdp->data));
+
+	skb_reserve(skb, xdp->data - xdp->data_hard_start);
+	skb_put(skb, xdp->data_end - xdp->data);
+	mvneta_rx_csum(pp, desc_status, skb);
+
+	for (i = 0; i < num_frags; i++) {
+		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
+				frags[i].bv_page, frags[i].bv_offset,
+				skb_frag_size(&frags[i]), PAGE_SIZE);
+		page_pool_release_page(rxq->page_pool, frags[i].bv_page);
+	}
+
+	return skb;
+}
+
 /* Main rx processing when using software buffer management */
 static int mvneta_rx_swbm(struct napi_struct *napi,
 			  struct mvneta_port *pp, int budget,
@@ -2326,22 +2355,25 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 {
 	int rx_proc = 0, rx_todo, refill;
 	struct net_device *dev = pp->dev;
+	struct xdp_buff xdp_buf = {
+		.frame_sz = PAGE_SIZE,
+		.rxq = &rxq->xdp_rxq,
+	};
 	struct mvneta_stats ps = {};
 	struct bpf_prog *xdp_prog;
-	struct xdp_buff xdp_buf;
+	u32 desc_status;
 
 	/* Get number of received packets */
 	rx_todo = mvneta_rxq_busy_desc_num_get(pp, rxq);
 
 	rcu_read_lock();
 	xdp_prog = READ_ONCE(pp->xdp_prog);
-	xdp_buf.rxq = &rxq->xdp_rxq;
-	xdp_buf.frame_sz = PAGE_SIZE;
 
 	/* Fairness NAPI loop */
 	while (rx_proc < budget && rx_proc < rx_todo) {
 		struct mvneta_rx_desc *rx_desc = mvneta_rxq_next_desc_get(rxq);
 		u32 rx_status, index;
+		struct sk_buff *skb;
 		struct page *page;
 
 		index = rx_desc - rxq->descs;
@@ -2357,21 +2389,21 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 			/* Check errors only for FIRST descriptor */
 			if (rx_status & MVNETA_RXD_ERR_SUMMARY) {
 				mvneta_rx_error(pp, rx_desc);
-				/* leave the descriptor untouched */
-				continue;
+				goto next;
 			}
 
 			err = mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf,
 						   xdp_prog, page, &ps);
 			if (err)
 				continue;
+
+			desc_status = rx_desc->status;
 		} else {
-			if (unlikely(!rxq->skb)) {
-				pr_debug("no skb for rx_status 0x%x\n",
-					 rx_status);
+			if (unlikely(!xdp_buf.data_hard_start))
 				continue;
-			}
-			mvneta_swbm_add_rx_fragment(pp, rx_desc, rxq, page);
+
+			mvneta_swbm_add_rx_fragment(pp, rx_desc, rxq, &xdp_buf,
+						    page);
 		} /* Middle or Last descriptor */
 
 		if (!(rx_status & MVNETA_RXD_LAST_DESC))
@@ -2379,27 +2411,38 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 			continue;
 
 		if (rxq->left_size) {
-			pr_err("get last desc, but left_size (%d) != 0\n",
-			       rxq->left_size);
-			dev_kfree_skb_any(rxq->skb);
 			rxq->left_size = 0;
-			rxq->skb = NULL;
-			continue;
+			mvneta_xdp_put_buff(pp, rxq, &xdp_buf, -1, true);
+			goto next;
 		}
 
-		ps.rx_bytes += rxq->skb->len;
-		ps.rx_packets++;
+		skb = mvneta_swbm_build_skb(pp, rxq, &xdp_buf, desc_status);
+		if (IS_ERR(skb)) {
+			struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats);
 
-		/* Linux processing */
-		rxq->skb->protocol = eth_type_trans(rxq->skb, dev);
+			mvneta_xdp_put_buff(pp, rxq, &xdp_buf, -1, true);
+
+			u64_stats_update_begin(&stats->syncp);
+			stats->es.skb_alloc_error++;
+			stats->rx_dropped++;
+			u64_stats_update_end(&stats->syncp);
+
+			goto next;
+		}
 
-		napi_gro_receive(napi, rxq->skb);
+		ps.rx_bytes += skb->len;
+		ps.rx_packets++;
 
-		/* clean uncomplete skb pointer in queue */
-		rxq->skb = NULL;
+		skb->protocol = eth_type_trans(skb, dev);
+		napi_gro_receive(napi, skb);
+next:
+		xdp_buf.data_hard_start = NULL;
 	}
 	rcu_read_unlock();
 
+	if (xdp_buf.data_hard_start)
+		mvneta_xdp_put_buff(pp, rxq, &xdp_buf, -1, true);
+
 	if (ps.xdp_redirect)
 		xdp_do_flush_map();
 
-- 
2.26.2



* [PATCH 3/6] net: mvneta: move mvneta_run_xdp after descriptors processing
  2020-07-09 15:57 [PATCH 0/6] rework mvneta napi_poll loop for XDP multi-buffers Lorenzo Bianconi
  2020-07-09 15:57 ` [PATCH 1/6] xdp: introduce xdp_get_shared_info_from_{buff,frame} utility routines Lorenzo Bianconi
  2020-07-09 15:57 ` [PATCH 2/6] net: mvneta: move skb build after descriptors processing Lorenzo Bianconi
@ 2020-07-09 15:57 ` Lorenzo Bianconi
  2020-07-09 15:57 ` [PATCH 4/6] net: mvneta: drop all fragments in XDP_DROP Lorenzo Bianconi
                   ` (2 subsequent siblings)
  5 siblings, 0 replies; 11+ messages in thread
From: Lorenzo Bianconi @ 2020-07-09 15:57 UTC (permalink / raw)
  To: netdev
  Cc: davem, bpf, kuba, ilias.apalodimas, lorenzo.bianconi, brouer,
	echaudro, sameehj

Move the mvneta_run_xdp call after all descriptor processing. This is a
preliminary patch to enable multi-buffer and jumbo frame support for XDP.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/marvell/mvneta.c | 22 +++++++---------------
 1 file changed, 7 insertions(+), 15 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 678f90ccf271..fbf88e524210 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2231,12 +2231,11 @@ mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	return ret;
 }
 
-static int
+static void
 mvneta_swbm_rx_frame(struct mvneta_port *pp,
 		     struct mvneta_rx_desc *rx_desc,
 		     struct mvneta_rx_queue *rxq,
 		     struct xdp_buff *xdp,
-		     struct bpf_prog *xdp_prog,
 		     struct page *page,
 		     struct mvneta_stats *stats)
 {
@@ -2245,7 +2244,6 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 	struct net_device *dev = pp->dev;
 	enum dma_data_direction dma_dir;
 	struct skb_shared_info *sinfo;
-	int ret = 0;
 
 	if (MVNETA_SKB_SIZE(rx_desc->data_size) > PAGE_SIZE) {
 		len = MVNETA_MAX_RX_BUF_SIZE;
@@ -2271,13 +2269,8 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 	sinfo = xdp_get_shared_info_from_buff(xdp);
 	sinfo->nr_frags = 0;
 
-	if (xdp_prog)
-		ret = mvneta_run_xdp(pp, rxq, xdp_prog, xdp, stats);
-
 	rxq->left_size = rx_desc->data_size - len;
 	rx_desc->buf_phys_addr = 0;
-
-	return ret;
 }
 
 static void
@@ -2384,20 +2377,15 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		rxq->refill_num++;
 
 		if (rx_status & MVNETA_RXD_FIRST_DESC) {
-			int err;
-
 			/* Check errors only for FIRST descriptor */
 			if (rx_status & MVNETA_RXD_ERR_SUMMARY) {
 				mvneta_rx_error(pp, rx_desc);
 				goto next;
 			}
 
-			err = mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf,
-						   xdp_prog, page, &ps);
-			if (err)
-				continue;
-
 			desc_status = rx_desc->status;
+			mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf, page,
+					     &ps);
 		} else {
 			if (unlikely(!xdp_buf.data_hard_start))
 				continue;
@@ -2416,6 +2404,10 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 			goto next;
 		}
 
+		if (xdp_prog &&
+		    mvneta_run_xdp(pp, rxq, xdp_prog, &xdp_buf, &ps))
+			goto next;
+
 		skb = mvneta_swbm_build_skb(pp, rxq, &xdp_buf, desc_status);
 		if (IS_ERR(skb)) {
 			struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats);
-- 
2.26.2



* [PATCH 4/6] net: mvneta: drop all fragments in XDP_DROP
  2020-07-09 15:57 [PATCH 0/6] rework mvneta napi_poll loop for XDP multi-buffers Lorenzo Bianconi
                   ` (2 preceding siblings ...)
  2020-07-09 15:57 ` [PATCH 3/6] net: mvneta: move mvneta_run_xdp " Lorenzo Bianconi
@ 2020-07-09 15:57 ` Lorenzo Bianconi
  2020-07-09 15:57 ` [PATCH 5/6] net: mvneta: get rid of skb in mvneta_rx_queue Lorenzo Bianconi
  2020-07-09 15:57 ` [PATCH 6/6] net: mvneta: move rxq->left_size on the stack Lorenzo Bianconi
  5 siblings, 0 replies; 11+ messages in thread
From: Lorenzo Bianconi @ 2020-07-09 15:57 UTC (permalink / raw)
  To: netdev
  Cc: davem, bpf, kuba, ilias.apalodimas, lorenzo.bianconi, brouer,
	echaudro, sameehj

Release all consumed pages if the eBPF program returns XDP_DROP for an
XDP multi-buffer.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/marvell/mvneta.c | 26 ++++++++++++--------------
 1 file changed, 12 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index fbf88e524210..f75a45ad586c 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2173,13 +2173,13 @@ mvneta_xdp_xmit(struct net_device *dev, int num_frame,
 static int
 mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	       struct bpf_prog *prog, struct xdp_buff *xdp,
-	       struct mvneta_stats *stats)
+	       u32 frame_sz, struct mvneta_stats *stats)
 {
-	unsigned int len, sync;
-	struct page *page;
+	unsigned int len, data_len, sync;
 	u32 ret, act;
 
 	len = xdp->data_end - xdp->data_hard_start - pp->rx_offset_correction;
+	data_len = xdp->data_end - xdp->data;
 	act = bpf_prog_run_xdp(prog, xdp);
 
 	/* Due xdp_adjust_tail: DMA sync for_device cover max len CPU touch */
@@ -2195,9 +2195,8 @@ mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 
 		err = xdp_do_redirect(pp->dev, xdp, prog);
 		if (unlikely(err)) {
+			mvneta_xdp_put_buff(pp, rxq, xdp, sync, true);
 			ret = MVNETA_XDP_DROPPED;
-			page = virt_to_head_page(xdp->data);
-			page_pool_put_page(rxq->page_pool, page, sync, true);
 		} else {
 			ret = MVNETA_XDP_REDIR;
 			stats->xdp_redirect++;
@@ -2206,10 +2205,8 @@ mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	}
 	case XDP_TX:
 		ret = mvneta_xdp_xmit_back(pp, xdp);
-		if (ret != MVNETA_XDP_TX) {
-			page = virt_to_head_page(xdp->data);
-			page_pool_put_page(rxq->page_pool, page, sync, true);
-		}
+		if (ret != MVNETA_XDP_TX)
+			mvneta_xdp_put_buff(pp, rxq, xdp, sync, true);
 		break;
 	default:
 		bpf_warn_invalid_xdp_action(act);
@@ -2218,14 +2215,13 @@ mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		trace_xdp_exception(pp->dev, prog, act);
 		/* fall through */
 	case XDP_DROP:
-		page = virt_to_head_page(xdp->data);
-		page_pool_put_page(rxq->page_pool, page, sync, true);
+		mvneta_xdp_put_buff(pp, rxq, xdp, sync, true);
 		ret = MVNETA_XDP_DROPPED;
 		stats->xdp_drop++;
 		break;
 	}
 
-	stats->rx_bytes += xdp->data_end - xdp->data;
+	stats->rx_bytes += frame_sz + xdp->data_end - xdp->data - data_len;
 	stats->rx_packets++;
 
 	return ret;
@@ -2354,7 +2350,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 	};
 	struct mvneta_stats ps = {};
 	struct bpf_prog *xdp_prog;
-	u32 desc_status;
+	u32 desc_status, frame_sz;
 
 	/* Get number of received packets */
 	rx_todo = mvneta_rxq_busy_desc_num_get(pp, rxq);
@@ -2383,7 +2379,9 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 				goto next;
 			}
 
+			frame_sz = rx_desc->data_size - ETH_FCS_LEN;
 			desc_status = rx_desc->status;
+
 			mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf, page,
 					     &ps);
 		} else {
@@ -2405,7 +2403,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		}
 
 		if (xdp_prog &&
-		    mvneta_run_xdp(pp, rxq, xdp_prog, &xdp_buf, &ps))
+		    mvneta_run_xdp(pp, rxq, xdp_prog, &xdp_buf, frame_sz, &ps))
 			goto next;
 
 		skb = mvneta_swbm_build_skb(pp, rxq, &xdp_buf, desc_status);
-- 
2.26.2



* [PATCH 5/6] net: mvneta: get rid of skb in mvneta_rx_queue
  2020-07-09 15:57 [PATCH 0/6] rework mvneta napi_poll loop for XDP multi-buffers Lorenzo Bianconi
                   ` (3 preceding siblings ...)
  2020-07-09 15:57 ` [PATCH 4/6] net: mvneta: drop all fragments in XDP_DROP Lorenzo Bianconi
@ 2020-07-09 15:57 ` Lorenzo Bianconi
  2020-07-09 15:57 ` [PATCH 6/6] net: mvneta: move rxq->left_size on the stack Lorenzo Bianconi
  5 siblings, 0 replies; 11+ messages in thread
From: Lorenzo Bianconi @ 2020-07-09 15:57 UTC (permalink / raw)
  To: netdev
  Cc: davem, bpf, kuba, ilias.apalodimas, lorenzo.bianconi, brouer,
	echaudro, sameehj

Remove the skb pointer from the mvneta_rx_queue data structure since it is
no longer used.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/marvell/mvneta.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index f75a45ad586c..1a0f34a9c01e 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -699,8 +699,6 @@ struct mvneta_rx_queue {
 	int first_to_refill;
 	u32 refill_num;
 
-	/* pointer to uncomplete skb buffer */
-	struct sk_buff *skb;
 	int left_size;
 };
 
@@ -3361,9 +3359,6 @@ static void mvneta_rxq_deinit(struct mvneta_port *pp,
 {
 	mvneta_rxq_drop_pkts(pp, rxq);
 
-	if (rxq->skb)
-		dev_kfree_skb_any(rxq->skb);
-
 	if (rxq->descs)
 		dma_free_coherent(pp->dev->dev.parent,
 				  rxq->size * MVNETA_DESC_ALIGNED_SIZE,
@@ -3376,7 +3371,6 @@ static void mvneta_rxq_deinit(struct mvneta_port *pp,
 	rxq->descs_phys        = 0;
 	rxq->first_to_refill   = 0;
 	rxq->refill_num        = 0;
-	rxq->skb               = NULL;
 	rxq->left_size         = 0;
 }
 
-- 
2.26.2



* [PATCH 6/6] net: mvneta: move rxq->left_size on the stack
  2020-07-09 15:57 [PATCH 0/6] rework mvneta napi_poll loop for XDP multi-buffers Lorenzo Bianconi
                   ` (4 preceding siblings ...)
  2020-07-09 15:57 ` [PATCH 5/6] net: mvneta: get rid of skb in mvneta_rx_queue Lorenzo Bianconi
@ 2020-07-09 15:57 ` Lorenzo Bianconi
  5 siblings, 0 replies; 11+ messages in thread
From: Lorenzo Bianconi @ 2020-07-09 15:57 UTC (permalink / raw)
  To: netdev
  Cc: davem, bpf, kuba, ilias.apalodimas, lorenzo.bianconi, brouer,
	echaudro, sameehj

Move rxq->left_size to the mvneta_rx_swbm stack since it is only used in
the SW BM napi_poll loop.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/marvell/mvneta.c | 29 ++++++++++++---------------
 1 file changed, 13 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 1a0f34a9c01e..bf98db56cd88 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -698,8 +698,6 @@ struct mvneta_rx_queue {
 	/* Index of first RX DMA descriptor to refill */
 	int first_to_refill;
 	u32 refill_num;
-
-	int left_size;
 };
 
 static enum cpuhp_state online_hpstate;
@@ -2229,7 +2227,7 @@ static void
 mvneta_swbm_rx_frame(struct mvneta_port *pp,
 		     struct mvneta_rx_desc *rx_desc,
 		     struct mvneta_rx_queue *rxq,
-		     struct xdp_buff *xdp,
+		     struct xdp_buff *xdp, int *size,
 		     struct page *page,
 		     struct mvneta_stats *stats)
 {
@@ -2263,7 +2261,7 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 	sinfo = xdp_get_shared_info_from_buff(xdp);
 	sinfo->nr_frags = 0;
 
-	rxq->left_size = rx_desc->data_size - len;
+	*size = rx_desc->data_size - len;
 	rx_desc->buf_phys_addr = 0;
 }
 
@@ -2271,7 +2269,7 @@ static void
 mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 			    struct mvneta_rx_desc *rx_desc,
 			    struct mvneta_rx_queue *rxq,
-			    struct xdp_buff *xdp,
+			    struct xdp_buff *xdp, int *size,
 			    struct page *page)
 {
 	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
@@ -2279,11 +2277,11 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 	enum dma_data_direction dma_dir;
 	int data_len, len;
 
-	if (rxq->left_size > MVNETA_MAX_RX_BUF_SIZE) {
+	if (*size > MVNETA_MAX_RX_BUF_SIZE) {
 		len = MVNETA_MAX_RX_BUF_SIZE;
 		data_len = len;
 	} else {
-		len = rxq->left_size;
+		len = *size;
 		data_len = len - ETH_FCS_LEN;
 	}
 	dma_dir = page_pool_get_dma_dir(rxq->page_pool);
@@ -2301,7 +2299,7 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 
 		rx_desc->buf_phys_addr = 0;
 	}
-	rxq->left_size -= len;
+	*size -= len;
 }
 
 static struct sk_buff *
@@ -2340,7 +2338,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 			  struct mvneta_port *pp, int budget,
 			  struct mvneta_rx_queue *rxq)
 {
-	int rx_proc = 0, rx_todo, refill;
+	int rx_proc = 0, rx_todo, refill, size = 0;
 	struct net_device *dev = pp->dev;
 	struct xdp_buff xdp_buf = {
 		.frame_sz = PAGE_SIZE,
@@ -2377,25 +2375,25 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 				goto next;
 			}
 
-			frame_sz = rx_desc->data_size - ETH_FCS_LEN;
+			size = rx_desc->data_size;
+			frame_sz = size - ETH_FCS_LEN;
 			desc_status = rx_desc->status;
 
-			mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf, page,
-					     &ps);
+			mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf,
+					     &size, page, &ps);
 		} else {
 			if (unlikely(!xdp_buf.data_hard_start))
 				continue;
 
 			mvneta_swbm_add_rx_fragment(pp, rx_desc, rxq, &xdp_buf,
-						    page);
+						    &size, page);
 		} /* Middle or Last descriptor */
 
 		if (!(rx_status & MVNETA_RXD_LAST_DESC))
 			/* no last descriptor this time */
 			continue;
 
-		if (rxq->left_size) {
-			rxq->left_size = 0;
+		if (size) {
 			mvneta_xdp_put_buff(pp, rxq, &xdp_buf, -1, true);
 			goto next;
 		}
@@ -3371,7 +3369,6 @@ static void mvneta_rxq_deinit(struct mvneta_port *pp,
 	rxq->descs_phys        = 0;
 	rxq->first_to_refill   = 0;
 	rxq->refill_num        = 0;
-	rxq->left_size         = 0;
 }
 
 static int mvneta_txq_sw_init(struct mvneta_port *pp,
-- 
2.26.2



* Re: [PATCH 2/6] net: mvneta: move skb build after descriptors processing
  2020-07-09 15:57 ` [PATCH 2/6] net: mvneta: move skb build after descriptors processing Lorenzo Bianconi
@ 2020-07-15 19:58   ` Jakub Kicinski
  2020-07-16 19:12     ` Lorenzo Bianconi
  0 siblings, 1 reply; 11+ messages in thread
From: Jakub Kicinski @ 2020-07-15 19:58 UTC (permalink / raw)
  To: Lorenzo Bianconi
  Cc: netdev, davem, bpf, ilias.apalodimas, lorenzo.bianconi, brouer,
	echaudro, sameehj

On Thu,  9 Jul 2020 17:57:19 +0200 Lorenzo Bianconi wrote:
> +		frag->bv_offset = pp->rx_offset_correction;
> +		skb_frag_size_set(frag, data_len);
> +		frag->bv_page = page;
> +		sinfo->nr_frags++;

nit: please use the skb_frag_* helpers, in case we have to rename those
     fields again. You should also consider adding a helper for the
     operation of appending a frag, I bet most drivers will need this.

> +static struct sk_buff *
> +mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
> +		      struct xdp_buff *xdp, u32 desc_status)
> +{
> +	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
> +	int i, num_frags = sinfo->nr_frags;
> +	skb_frag_t frags[MAX_SKB_FRAGS];
> +	struct sk_buff *skb;
> +
> +	memcpy(frags, sinfo->frags, sizeof(skb_frag_t) * num_frags);
> +
> +	skb = build_skb(xdp->data_hard_start, PAGE_SIZE);
> +	if (!skb)
> +		return ERR_PTR(-ENOMEM);
> +
> +	page_pool_release_page(rxq->page_pool, virt_to_page(xdp->data));
> +
> +	skb_reserve(skb, xdp->data - xdp->data_hard_start);
> +	skb_put(skb, xdp->data_end - xdp->data);
> +	mvneta_rx_csum(pp, desc_status, skb);
> +
> +	for (i = 0; i < num_frags; i++) {
> +		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
> +				frags[i].bv_page, frags[i].bv_offset,
> +				skb_frag_size(&frags[i]), PAGE_SIZE);
> +		page_pool_release_page(rxq->page_pool, frags[i].bv_page);
> +	}
> +
> +	return skb;
> +}

Here as well - is the plan to turn more of this function into common
code later on? Looks like most of this is not really driver specific.


* Re: [PATCH 2/6] net: mvneta: move skb build after descriptors processing
  2020-07-15 19:58   ` Jakub Kicinski
@ 2020-07-16 19:12     ` Lorenzo Bianconi
  2020-07-16 19:44       ` Jakub Kicinski
  0 siblings, 1 reply; 11+ messages in thread
From: Lorenzo Bianconi @ 2020-07-16 19:12 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Lorenzo Bianconi, netdev, davem, bpf, ilias.apalodimas, brouer,
	echaudro, sameehj


> On Thu,  9 Jul 2020 17:57:19 +0200 Lorenzo Bianconi wrote:
> > +		frag->bv_offset = pp->rx_offset_correction;
> > +		skb_frag_size_set(frag, data_len);
> > +		frag->bv_page = page;
> > +		sinfo->nr_frags++;
> 
> nit: please use the skb_frag_* helpers, in case we have to rename those
>      fields again. You should also consider adding a helper for the
>      operation of appending a frag, I bet most drivers will need this.

Hi Jakub,

thx for the review. Ack, I will fix them in v2.

> 
> > +static struct sk_buff *
> > +mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
> > +		      struct xdp_buff *xdp, u32 desc_status)
> > +{
> > +	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
> > +	int i, num_frags = sinfo->nr_frags;
> > +	skb_frag_t frags[MAX_SKB_FRAGS];
> > +	struct sk_buff *skb;
> > +
> > +	memcpy(frags, sinfo->frags, sizeof(skb_frag_t) * num_frags);
> > +
> > +	skb = build_skb(xdp->data_hard_start, PAGE_SIZE);
> > +	if (!skb)
> > +		return ERR_PTR(-ENOMEM);
> > +
> > +	page_pool_release_page(rxq->page_pool, virt_to_page(xdp->data));
> > +
> > +	skb_reserve(skb, xdp->data - xdp->data_hard_start);
> > +	skb_put(skb, xdp->data_end - xdp->data);
> > +	mvneta_rx_csum(pp, desc_status, skb);
> > +
> > +	for (i = 0; i < num_frags; i++) {
> > +		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
> > +				frags[i].bv_page, frags[i].bv_offset,
> > +				skb_frag_size(&frags[i]), PAGE_SIZE);
> > +		page_pool_release_page(rxq->page_pool, frags[i].bv_page);
> > +	}
> > +
> > +	return skb;
> > +}
> 
> Here as well - is the plan to turn more of this function into common
> code later on? Looks like most of this is not really driver specific.

I agree. What about adding it when other drivers add multi-buff support?
(Here we even have a page_pool dependency.)

Regards,
Lorenzo

> 



* Re: [PATCH 2/6] net: mvneta: move skb build after descriptors processing
  2020-07-16 19:12     ` Lorenzo Bianconi
@ 2020-07-16 19:44       ` Jakub Kicinski
  2020-07-16 20:09         ` Lorenzo Bianconi
  0 siblings, 1 reply; 11+ messages in thread
From: Jakub Kicinski @ 2020-07-16 19:44 UTC (permalink / raw)
  To: Lorenzo Bianconi
  Cc: Lorenzo Bianconi, netdev, davem, bpf, ilias.apalodimas, brouer,
	echaudro, sameehj

On Thu, 16 Jul 2020 21:12:51 +0200 Lorenzo Bianconi wrote:
> > > +static struct sk_buff *
> > > +mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
> > > +		      struct xdp_buff *xdp, u32 desc_status)
> > > +{
> > > +	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
> > > +	int i, num_frags = sinfo->nr_frags;
> > > +	skb_frag_t frags[MAX_SKB_FRAGS];
> > > +	struct sk_buff *skb;
> > > +
> > > +	memcpy(frags, sinfo->frags, sizeof(skb_frag_t) * num_frags);
> > > +
> > > +	skb = build_skb(xdp->data_hard_start, PAGE_SIZE);
> > > +	if (!skb)
> > > +		return ERR_PTR(-ENOMEM);
> > > +
> > > +	page_pool_release_page(rxq->page_pool, virt_to_page(xdp->data));
> > > +
> > > +	skb_reserve(skb, xdp->data - xdp->data_hard_start);
> > > +	skb_put(skb, xdp->data_end - xdp->data);
> > > +	mvneta_rx_csum(pp, desc_status, skb);
> > > +
> > > +	for (i = 0; i < num_frags; i++) {
> > > +		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
> > > +				frags[i].bv_page, frags[i].bv_offset,
> > > +				skb_frag_size(&frags[i]), PAGE_SIZE);
> > > +		page_pool_release_page(rxq->page_pool, frags[i].bv_page);
> > > +	}
> > > +
> > > +	return skb;
> > > +}  
> > 
> > Here as well - is the plan to turn more of this function into common
> > code later on? Looks like most of this is not really driver specific.  
> 
> > I agree. What about adding it when other drivers add multi-buff support?
> > (here we even have a page_pool dependency)

I guess that's okay on the condition that you're going to be the one
adding the support to the next driver, or at least review it very
closely to make sure it's done.

In general vendors prove rather resistant to factoring code out, 
the snowflakes they feel they are.

^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH 2/6] net: mvneta: move skb build after descriptors processing
  2020-07-16 19:44       ` Jakub Kicinski
@ 2020-07-16 20:09         ` Lorenzo Bianconi
  0 siblings, 0 replies; 11+ messages in thread
From: Lorenzo Bianconi @ 2020-07-16 20:09 UTC (permalink / raw)
  To: Jakub Kicinski
  Cc: Lorenzo Bianconi, netdev, davem, bpf, ilias.apalodimas, brouer,
	echaudro, sameehj


> On Thu, 16 Jul 2020 21:12:51 +0200 Lorenzo Bianconi wrote:
> > > > +static struct sk_buff *
> > > > +mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
> > > > +		      struct xdp_buff *xdp, u32 desc_status)
> > > > +{
> > > > +	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
> > > > +	int i, num_frags = sinfo->nr_frags;
> > > > +	skb_frag_t frags[MAX_SKB_FRAGS];
> > > > +	struct sk_buff *skb;
> > > > +
> > > > +	memcpy(frags, sinfo->frags, sizeof(skb_frag_t) * num_frags);
> > > > +
> > > > +	skb = build_skb(xdp->data_hard_start, PAGE_SIZE);
> > > > +	if (!skb)
> > > > +		return ERR_PTR(-ENOMEM);
> > > > +
> > > > +	page_pool_release_page(rxq->page_pool, virt_to_page(xdp->data));
> > > > +
> > > > +	skb_reserve(skb, xdp->data - xdp->data_hard_start);
> > > > +	skb_put(skb, xdp->data_end - xdp->data);
> > > > +	mvneta_rx_csum(pp, desc_status, skb);
> > > > +
> > > > +	for (i = 0; i < num_frags; i++) {
> > > > +		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
> > > > +				frags[i].bv_page, frags[i].bv_offset,
> > > > +				skb_frag_size(&frags[i]), PAGE_SIZE);
> > > > +		page_pool_release_page(rxq->page_pool, frags[i].bv_page);
> > > > +	}
> > > > +
> > > > +	return skb;
> > > > +}  
> > > 
> > > Here as well - is the plan to turn more of this function into common
> > > code later on? Looks like most of this is not really driver specific.  
> > 
> > I agree. What about adding it when other drivers add multi-buff support?
> > (here we even have a page_pool dependency)
> 
> I guess that's okay on the condition that you're going to be the one
> adding the support to the next driver, or at least review it very
> closely to make sure it's done.

I am completely fine working on it if I have the hw handy (or if someone else
can test it); otherwise I will review the code :)

Regards,
Lorenzo

> 
> In general vendors prove rather resistant to factoring code out, 
> the snowflakes they feel they are.
> 


^ permalink raw reply	[flat|nested] 11+ messages in thread

end of thread, other threads:[~2020-07-16 20:09 UTC | newest]

Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-07-09 15:57 [PATCH 0/6] rework mvneta napi_poll loop for XDP multi-buffers Lorenzo Bianconi
2020-07-09 15:57 ` [PATCH 1/6] xdp: introduce xdp_get_shared_info_from_{buff,frame} utility routines Lorenzo Bianconi
2020-07-09 15:57 ` [PATCH 2/6] net: mvneta: move skb build after descriptors processing Lorenzo Bianconi
2020-07-15 19:58   ` Jakub Kicinski
2020-07-16 19:12     ` Lorenzo Bianconi
2020-07-16 19:44       ` Jakub Kicinski
2020-07-16 20:09         ` Lorenzo Bianconi
2020-07-09 15:57 ` [PATCH 3/6] net: mvneta: move mvneta_run_xdp " Lorenzo Bianconi
2020-07-09 15:57 ` [PATCH 4/6] net: mvneta: drop all fragments in XDP_DROP Lorenzo Bianconi
2020-07-09 15:57 ` [PATCH 5/6] net: mvneta: get rid of skb in mvneta_rx_queue Lorenzo Bianconi
2020-07-09 15:57 ` [PATCH 6/6] net: mvneta: move rxq->left_size on the stack Lorenzo Bianconi

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).