bpf.vger.kernel.org archive mirror
* [PATCH v2 net-next 0/6] rework mvneta napi_poll loop for XDP multi-buffers
@ 2020-07-16 22:16 Lorenzo Bianconi
From: Lorenzo Bianconi @ 2020-07-16 22:16 UTC
  To: netdev
  Cc: davem, bpf, kuba, ilias.apalodimas, lorenzo.bianconi, brouer,
	echaudro, sameehj

Rework the mvneta_rx_swbm routine to process all rx descriptors before
building the skb or running the XDP program attached to the interface.
Introduce xdp_get_shared_info_from_{buff,frame} utility routines to get the
skb_shared_info pointer from an xdp_buff or xdp_frame.
This is a preliminary series to enable multi-buffer and jumbo frame support
for XDP according to [1].

[1] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org

Changes since v1:
- rely on skb_frag_* utility routines to access page/offset/len of the XDP multi-buffer

Lorenzo Bianconi (6):
  xdp: introduce xdp_get_shared_info_from_{buff,frame} utility routines
  net: mvneta: move skb build after descriptors processing
  net: mvneta: move mvneta_run_xdp after descriptors processing
  net: mvneta: drop all fragments in XDP_DROP
  net: mvneta: get rid of skb in mvneta_rx_queue
  net: mvneta: move rxq->left_size on the stack

 drivers/net/ethernet/marvell/mvneta.c | 221 ++++++++++++++------------
 include/net/xdp.h                     |  15 ++
 2 files changed, 138 insertions(+), 98 deletions(-)

-- 
2.26.2
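
To make the rework concrete, the receive loop this series converges on has
roughly the following shape. This is an illustrative sketch only: helper
names not found in the patches (collect_head, collect_frag) are
placeholders, while the mvneta_* calls refer to routines introduced below.

	/* Sketch: all descriptors of a frame are gathered into one
	 * xdp_buff first; XDP and the skb build then run once, on the
	 * complete frame.
	 */
	while (rx_proc < budget && rx_proc < rx_todo) {
		rx_desc = mvneta_rxq_next_desc_get(rxq);

		if (rx_desc->status & MVNETA_RXD_FIRST_DESC)
			collect_head(&xdp_buf, rx_desc);  /* placeholder */
		else
			collect_frag(&xdp_buf, rx_desc);  /* placeholder */

		if (!(rx_desc->status & MVNETA_RXD_LAST_DESC))
			continue;	/* frame not complete yet */

		if (xdp_prog && mvneta_run_xdp(pp, rxq, xdp_prog, &xdp_buf, &ps))
			continue;	/* frame consumed by the XDP program */

		skb = mvneta_swbm_build_skb(pp, rxq, &xdp_buf, desc_status);
		napi_gro_receive(napi, skb);
	}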


* [PATCH v2 net-next 1/6] xdp: introduce xdp_get_shared_info_from_{buff,frame} utility routines
@ 2020-07-16 22:16 ` Lorenzo Bianconi
From: Lorenzo Bianconi @ 2020-07-16 22:16 UTC
  To: netdev
  Cc: davem, bpf, kuba, ilias.apalodimas, lorenzo.bianconi, brouer,
	echaudro, sameehj

Introduce xdp_get_shared_info_from_{buff,frame} utility routines to get
the skb_shared_info pointer from an xdp_buff or xdp_frame. These helpers
will be used to implement XDP multi-buffer support.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 include/net/xdp.h | 15 +++++++++++++++
 1 file changed, 15 insertions(+)

diff --git a/include/net/xdp.h b/include/net/xdp.h
index 609f819ed08b..d3005bef812f 100644
--- a/include/net/xdp.h
+++ b/include/net/xdp.h
@@ -85,6 +85,12 @@ struct xdp_buff {
 	((xdp)->data_hard_start + (xdp)->frame_sz -	\
 	 SKB_DATA_ALIGN(sizeof(struct skb_shared_info)))
 
+static inline struct skb_shared_info *
+xdp_get_shared_info_from_buff(struct xdp_buff *xdp)
+{
+	return (struct skb_shared_info *)xdp_data_hard_end(xdp);
+}
+
 struct xdp_frame {
 	void *data;
 	u16 len;
@@ -98,6 +104,15 @@ struct xdp_frame {
 	struct net_device *dev_rx; /* used by cpumap */
 };
 
+static inline struct skb_shared_info *
+xdp_get_shared_info_from_frame(struct xdp_frame *frame)
+{
+	void *data_hard_start = frame->data - frame->headroom - sizeof(*frame);
+
+	return (struct skb_shared_info *)(data_hard_start + frame->frame_sz -
+				SKB_DATA_ALIGN(sizeof(struct skb_shared_info)));
+}
+
 /* Clear kernel pointers in xdp_frame */
 static inline void xdp_scrub_frame(struct xdp_frame *frame)
 {
-- 
2.26.2
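
As a usage illustration (the function below is hypothetical, not part of
the patch), a caller holding a multi-buffer xdp_buff whose tailroom
skb_shared_info has been initialized by the driver (see patch 2/6) could
compute the total frame length with the new helper:

	static unsigned int xdp_buff_total_len(struct xdp_buff *xdp)
	{
		struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
		unsigned int len = xdp->data_end - xdp->data;
		int i;

		/* fragments beyond the linear head live in sinfo->frags */
		for (i = 0; i < sinfo->nr_frags; i++)
			len += skb_frag_size(&sinfo->frags[i]);

		return len;
	}

Note how xdp_get_shared_info_from_frame() recomputes data_hard_start as
frame->data - frame->headroom - sizeof(*frame): an xdp_frame is stored at
the very top of its buffer's headroom, so the struct size has to be
subtracted as well.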


* [PATCH v2 net-next 2/6] net: mvneta: move skb build after descriptors processing
@ 2020-07-16 22:16 ` Lorenzo Bianconi
From: Lorenzo Bianconi @ 2020-07-16 22:16 UTC
  To: netdev
  Cc: davem, bpf, kuba, ilias.apalodimas, lorenzo.bianconi, brouer,
	echaudro, sameehj

Move the skb build after all descriptor processing. This is a preliminary
patch to enable multi-buffer and jumbo frame support for XDP.
Introduce the mvneta_xdp_put_buff routine to release all pages used by an
XDP multi-buffer.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/marvell/mvneta.c | 158 ++++++++++++++++----------
 1 file changed, 101 insertions(+), 57 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 7191902f6cfe..e9b3fb778d76 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2026,6 +2026,20 @@ int mvneta_rx_refill_queue(struct mvneta_port *pp, struct mvneta_rx_queue *rxq)
 	return i;
 }
 
+static void
+mvneta_xdp_put_buff(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
+		    struct xdp_buff *xdp, int sync_len, bool napi)
+{
+	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
+	int i;
+
+	page_pool_put_page(rxq->page_pool, virt_to_head_page(xdp->data),
+			   sync_len, napi);
+	for (i = 0; i < sinfo->nr_frags; i++)
+		page_pool_put_full_page(rxq->page_pool,
+					skb_frag_page(&sinfo->frags[i]), napi);
+}
+
 static int
 mvneta_xdp_submit_frame(struct mvneta_port *pp, struct mvneta_tx_queue *txq,
 			struct xdp_frame *xdpf, bool dma_map)
@@ -2229,6 +2243,7 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 	int data_len = -MVNETA_MH_SIZE, len;
 	struct net_device *dev = pp->dev;
 	enum dma_data_direction dma_dir;
+	struct skb_shared_info *sinfo;
 	int ret = 0;
 
 	if (MVNETA_SKB_SIZE(rx_desc->data_size) > PAGE_SIZE) {
@@ -2252,35 +2267,13 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 	xdp->data_end = xdp->data + data_len;
 	xdp_set_data_meta_invalid(xdp);
 
-	if (xdp_prog) {
-		ret = mvneta_run_xdp(pp, rxq, xdp_prog, xdp, stats);
-		if (ret)
-			goto out;
-	}
-
-	rxq->skb = build_skb(xdp->data_hard_start, PAGE_SIZE);
-	if (unlikely(!rxq->skb)) {
-		struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats);
+	sinfo = xdp_get_shared_info_from_buff(xdp);
+	sinfo->nr_frags = 0;
 
-		netdev_err(dev, "Can't allocate skb on queue %d\n", rxq->id);
-
-		u64_stats_update_begin(&stats->syncp);
-		stats->es.skb_alloc_error++;
-		stats->rx_dropped++;
-		u64_stats_update_end(&stats->syncp);
-
-		return -ENOMEM;
-	}
-	page_pool_release_page(rxq->page_pool, page);
-
-	skb_reserve(rxq->skb,
-		    xdp->data - xdp->data_hard_start);
-	skb_put(rxq->skb, xdp->data_end - xdp->data);
-	mvneta_rx_csum(pp, rx_desc->status, rxq->skb);
+	if (xdp_prog)
+		ret = mvneta_run_xdp(pp, rxq, xdp_prog, xdp, stats);
 
 	rxq->left_size = rx_desc->data_size - len;
-
-out:
 	rx_desc->buf_phys_addr = 0;
 
 	return ret;
@@ -2290,8 +2283,10 @@ static void
 mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 			    struct mvneta_rx_desc *rx_desc,
 			    struct mvneta_rx_queue *rxq,
+			    struct xdp_buff *xdp,
 			    struct page *page)
 {
+	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
 	struct net_device *dev = pp->dev;
 	enum dma_data_direction dma_dir;
 	int data_len, len;
@@ -2307,18 +2302,53 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 	dma_sync_single_for_cpu(dev->dev.parent,
 				rx_desc->buf_phys_addr,
 				len, dma_dir);
-	if (data_len > 0) {
-		/* refill descriptor with new buffer later */
-		skb_add_rx_frag(rxq->skb,
-				skb_shinfo(rxq->skb)->nr_frags,
-				page, pp->rx_offset_correction, data_len,
-				PAGE_SIZE);
-	}
-	page_pool_release_page(rxq->page_pool, page);
-	rx_desc->buf_phys_addr = 0;
+
+	if (data_len > 0 && sinfo->nr_frags < MAX_SKB_FRAGS) {
+		skb_frag_t *frag = &sinfo->frags[sinfo->nr_frags];
+
+		skb_frag_off_set(frag, pp->rx_offset_correction);
+		skb_frag_size_set(frag, data_len);
+		__skb_frag_set_page(frag, page);
+		sinfo->nr_frags++;
+
+		rx_desc->buf_phys_addr = 0;
+	}
 	rxq->left_size -= len;
 }
 
+static struct sk_buff *
+mvneta_swbm_build_skb(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
+		      struct xdp_buff *xdp, u32 desc_status)
+{
+	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
+	int i, num_frags = sinfo->nr_frags;
+	skb_frag_t frags[MAX_SKB_FRAGS];
+	struct sk_buff *skb;
+
+	memcpy(frags, sinfo->frags, sizeof(skb_frag_t) * num_frags);
+
+	skb = build_skb(xdp->data_hard_start, PAGE_SIZE);
+	if (!skb)
+		return ERR_PTR(-ENOMEM);
+
+	page_pool_release_page(rxq->page_pool, virt_to_page(xdp->data));
+
+	skb_reserve(skb, xdp->data - xdp->data_hard_start);
+	skb_put(skb, xdp->data_end - xdp->data);
+	mvneta_rx_csum(pp, desc_status, skb);
+
+	for (i = 0; i < num_frags; i++) {
+		struct page *page = skb_frag_page(&frags[i]);
+
+		skb_add_rx_frag(skb, skb_shinfo(skb)->nr_frags,
+				page, skb_frag_off(&frags[i]),
+				skb_frag_size(&frags[i]), PAGE_SIZE);
+		page_pool_release_page(rxq->page_pool, page);
+	}
+
+	return skb;
+}
+
 /* Main rx processing when using software buffer management */
 static int mvneta_rx_swbm(struct napi_struct *napi,
 			  struct mvneta_port *pp, int budget,
@@ -2326,22 +2356,25 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 {
 	int rx_proc = 0, rx_todo, refill;
 	struct net_device *dev = pp->dev;
+	struct xdp_buff xdp_buf = {
+		.frame_sz = PAGE_SIZE,
+		.rxq = &rxq->xdp_rxq,
+	};
 	struct mvneta_stats ps = {};
 	struct bpf_prog *xdp_prog;
-	struct xdp_buff xdp_buf;
+	u32 desc_status;
 
 	/* Get number of received packets */
 	rx_todo = mvneta_rxq_busy_desc_num_get(pp, rxq);
 
 	rcu_read_lock();
 	xdp_prog = READ_ONCE(pp->xdp_prog);
-	xdp_buf.rxq = &rxq->xdp_rxq;
-	xdp_buf.frame_sz = PAGE_SIZE;
 
 	/* Fairness NAPI loop */
 	while (rx_proc < budget && rx_proc < rx_todo) {
 		struct mvneta_rx_desc *rx_desc = mvneta_rxq_next_desc_get(rxq);
 		u32 rx_status, index;
+		struct sk_buff *skb;
 		struct page *page;
 
 		index = rx_desc - rxq->descs;
@@ -2357,21 +2390,21 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 			/* Check errors only for FIRST descriptor */
 			if (rx_status & MVNETA_RXD_ERR_SUMMARY) {
 				mvneta_rx_error(pp, rx_desc);
-				/* leave the descriptor untouched */
-				continue;
+				goto next;
 			}
 
 			err = mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf,
 						   xdp_prog, page, &ps);
 			if (err)
 				continue;
+
+			desc_status = rx_desc->status;
 		} else {
-			if (unlikely(!rxq->skb)) {
-				pr_debug("no skb for rx_status 0x%x\n",
-					 rx_status);
+			if (unlikely(!xdp_buf.data_hard_start))
 				continue;
-			}
-			mvneta_swbm_add_rx_fragment(pp, rx_desc, rxq, page);
+
+			mvneta_swbm_add_rx_fragment(pp, rx_desc, rxq, &xdp_buf,
+						    page);
 		} /* Middle or Last descriptor */
 
 		if (!(rx_status & MVNETA_RXD_LAST_DESC))
@@ -2379,27 +2412,38 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 			continue;
 
 		if (rxq->left_size) {
-			pr_err("get last desc, but left_size (%d) != 0\n",
-			       rxq->left_size);
-			dev_kfree_skb_any(rxq->skb);
 			rxq->left_size = 0;
-			rxq->skb = NULL;
-			continue;
+			mvneta_xdp_put_buff(pp, rxq, &xdp_buf, -1, true);
+			goto next;
 		}
 
-		ps.rx_bytes += rxq->skb->len;
-		ps.rx_packets++;
+		skb = mvneta_swbm_build_skb(pp, rxq, &xdp_buf, desc_status);
+		if (IS_ERR(skb)) {
+			struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats);
 
-		/* Linux processing */
-		rxq->skb->protocol = eth_type_trans(rxq->skb, dev);
+			mvneta_xdp_put_buff(pp, rxq, &xdp_buf, -1, true);
+
+			u64_stats_update_begin(&stats->syncp);
+			stats->es.skb_alloc_error++;
+			stats->rx_dropped++;
+			u64_stats_update_end(&stats->syncp);
+
+			goto next;
+		}
 
-		napi_gro_receive(napi, rxq->skb);
+		ps.rx_bytes += skb->len;
+		ps.rx_packets++;
 
-		/* clean uncomplete skb pointer in queue */
-		rxq->skb = NULL;
+		skb->protocol = eth_type_trans(skb, dev);
+		napi_gro_receive(napi, skb);
+next:
+		xdp_buf.data_hard_start = NULL;
 	}
 	rcu_read_unlock();
 
+	if (xdp_buf.data_hard_start)
+		mvneta_xdp_put_buff(pp, rxq, &xdp_buf, -1, true);
+
 	if (ps.xdp_redirect)
 		xdp_do_flush_map();
 
-- 
2.26.2
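
One subtlety in mvneta_swbm_build_skb() may be worth spelling out (our
reading of the code above): with frame_sz equal to PAGE_SIZE, build_skb()
places the skb's own skb_shared_info at exactly the address that
xdp_get_shared_info_from_buff() returns, so the fragment array must be
copied to the stack before build_skb() reinitializes that memory:

	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
	skb_frag_t frags[MAX_SKB_FRAGS];

	memcpy(frags, sinfo->frags, sizeof(skb_frag_t) * sinfo->nr_frags);

	/* from here on *sinfo is stale: build_skb() reuses the same
	 * tailroom for the new skb's shared info and reinitializes it
	 */
	skb = build_skb(xdp->data_hard_start, PAGE_SIZE);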


* [PATCH v2 net-next 3/6] net: mvneta: move mvneta_run_xdp after descriptors processing
@ 2020-07-16 22:16 ` Lorenzo Bianconi
From: Lorenzo Bianconi @ 2020-07-16 22:16 UTC
  To: netdev
  Cc: davem, bpf, kuba, ilias.apalodimas, lorenzo.bianconi, brouer,
	echaudro, sameehj

Move the mvneta_run_xdp routine after all descriptor processing. This is
a preliminary patch to enable multi-buffer and jumbo frame support for
XDP.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/marvell/mvneta.c | 22 +++++++---------------
 1 file changed, 7 insertions(+), 15 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index e9b3fb778d76..021419428ff8 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2230,12 +2230,11 @@ mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	return ret;
 }
 
-static int
+static void
 mvneta_swbm_rx_frame(struct mvneta_port *pp,
 		     struct mvneta_rx_desc *rx_desc,
 		     struct mvneta_rx_queue *rxq,
 		     struct xdp_buff *xdp,
-		     struct bpf_prog *xdp_prog,
 		     struct page *page,
 		     struct mvneta_stats *stats)
 {
@@ -2244,7 +2243,6 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 	struct net_device *dev = pp->dev;
 	enum dma_data_direction dma_dir;
 	struct skb_shared_info *sinfo;
-	int ret = 0;
 
 	if (MVNETA_SKB_SIZE(rx_desc->data_size) > PAGE_SIZE) {
 		len = MVNETA_MAX_RX_BUF_SIZE;
@@ -2270,13 +2268,8 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 	sinfo = xdp_get_shared_info_from_buff(xdp);
 	sinfo->nr_frags = 0;
 
-	if (xdp_prog)
-		ret = mvneta_run_xdp(pp, rxq, xdp_prog, xdp, stats);
-
 	rxq->left_size = rx_desc->data_size - len;
 	rx_desc->buf_phys_addr = 0;
-
-	return ret;
 }
 
 static void
@@ -2385,20 +2378,15 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		rxq->refill_num++;
 
 		if (rx_status & MVNETA_RXD_FIRST_DESC) {
-			int err;
-
 			/* Check errors only for FIRST descriptor */
 			if (rx_status & MVNETA_RXD_ERR_SUMMARY) {
 				mvneta_rx_error(pp, rx_desc);
 				goto next;
 			}
 
-			err = mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf,
-						   xdp_prog, page, &ps);
-			if (err)
-				continue;
-
 			desc_status = rx_desc->status;
+			mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf, page,
+					     &ps);
 		} else {
 			if (unlikely(!xdp_buf.data_hard_start))
 				continue;
@@ -2417,6 +2405,10 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 			goto next;
 		}
 
+		if (xdp_prog &&
+		    mvneta_run_xdp(pp, rxq, xdp_prog, &xdp_buf, &ps))
+			goto next;
+
 		skb = mvneta_swbm_build_skb(pp, rxq, &xdp_buf, desc_status);
 		if (IS_ERR(skb)) {
 			struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats);
-- 
2.26.2
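
Note that at this stage of the series the attached program still sees only
the linear head of a multi-buffer frame: xdp->data and xdp->data_end
delimit the first buffer, while the remaining fragments sit in the
shared_info area. A minimal program such as the following sketch
(hypothetical, for illustration) therefore parses the head only:

	#include <linux/bpf.h>
	#include <linux/if_ether.h>
	#include <bpf/bpf_helpers.h>

	SEC("xdp")
	int xdp_head_only(struct xdp_md *ctx)
	{
		void *data = (void *)(long)ctx->data;
		void *data_end = (void *)(long)ctx->data_end;
		struct ethhdr *eth = data;

		/* verifier-enforced bounds cover just the head buffer */
		if ((void *)(eth + 1) > data_end)
			return XDP_DROP;

		return XDP_PASS;
	}

	char _license[] SEC("license") = "GPL";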


* [PATCH v2 net-next 4/6] net: mvneta: drop all fragments in XDP_DROP
@ 2020-07-16 22:16 ` Lorenzo Bianconi
From: Lorenzo Bianconi @ 2020-07-16 22:16 UTC
  To: netdev
  Cc: davem, bpf, kuba, ilias.apalodimas, lorenzo.bianconi, brouer,
	echaudro, sameehj

Release all consumed pages if the eBPF program returns XDP_DROP for an
XDP multi-buffer.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/marvell/mvneta.c | 26 ++++++++++++--------------
 1 file changed, 12 insertions(+), 14 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 021419428ff8..7746693a3de1 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2172,13 +2172,13 @@ mvneta_xdp_xmit(struct net_device *dev, int num_frame,
 static int
 mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	       struct bpf_prog *prog, struct xdp_buff *xdp,
-	       struct mvneta_stats *stats)
+	       u32 frame_sz, struct mvneta_stats *stats)
 {
-	unsigned int len, sync;
-	struct page *page;
+	unsigned int len, data_len, sync;
 	u32 ret, act;
 
 	len = xdp->data_end - xdp->data_hard_start - pp->rx_offset_correction;
+	data_len = xdp->data_end - xdp->data;
 	act = bpf_prog_run_xdp(prog, xdp);
 
 	/* Due xdp_adjust_tail: DMA sync for_device cover max len CPU touch */
@@ -2194,9 +2194,8 @@ mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 
 		err = xdp_do_redirect(pp->dev, xdp, prog);
 		if (unlikely(err)) {
+			mvneta_xdp_put_buff(pp, rxq, xdp, sync, true);
 			ret = MVNETA_XDP_DROPPED;
-			page = virt_to_head_page(xdp->data);
-			page_pool_put_page(rxq->page_pool, page, sync, true);
 		} else {
 			ret = MVNETA_XDP_REDIR;
 			stats->xdp_redirect++;
@@ -2205,10 +2204,8 @@ mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 	}
 	case XDP_TX:
 		ret = mvneta_xdp_xmit_back(pp, xdp);
-		if (ret != MVNETA_XDP_TX) {
-			page = virt_to_head_page(xdp->data);
-			page_pool_put_page(rxq->page_pool, page, sync, true);
-		}
+		if (ret != MVNETA_XDP_TX)
+			mvneta_xdp_put_buff(pp, rxq, xdp, sync, true);
 		break;
 	default:
 		bpf_warn_invalid_xdp_action(act);
@@ -2217,14 +2214,13 @@ mvneta_run_xdp(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 		trace_xdp_exception(pp->dev, prog, act);
 		/* fall through */
 	case XDP_DROP:
-		page = virt_to_head_page(xdp->data);
-		page_pool_put_page(rxq->page_pool, page, sync, true);
+		mvneta_xdp_put_buff(pp, rxq, xdp, sync, true);
 		ret = MVNETA_XDP_DROPPED;
 		stats->xdp_drop++;
 		break;
 	}
 
-	stats->rx_bytes += xdp->data_end - xdp->data;
+	stats->rx_bytes += frame_sz + xdp->data_end - xdp->data - data_len;
 	stats->rx_packets++;
 
 	return ret;
@@ -2355,7 +2351,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 	};
 	struct mvneta_stats ps = {};
 	struct bpf_prog *xdp_prog;
-	u32 desc_status;
+	u32 desc_status, frame_sz;
 
 	/* Get number of received packets */
 	rx_todo = mvneta_rxq_busy_desc_num_get(pp, rxq);
@@ -2384,7 +2380,9 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 				goto next;
 			}
 
+			frame_sz = rx_desc->data_size - ETH_FCS_LEN;
 			desc_status = rx_desc->status;
+
 			mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf, page,
 					     &ps);
 		} else {
@@ -2406,7 +2404,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		}
 
 		if (xdp_prog &&
-		    mvneta_run_xdp(pp, rxq, xdp_prog, &xdp_buf, &ps))
+		    mvneta_run_xdp(pp, rxq, xdp_prog, &xdp_buf, frame_sz, &ps))
 			goto next;
 
 		skb = mvneta_swbm_build_skb(pp, rxq, &xdp_buf, desc_status);
-- 
2.26.2
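
The reworked rx_bytes accounting in mvneta_run_xdp() is terse, so a worked
example with hypothetical numbers may help: frame_sz is the full
multi-buffer frame length minus the FCS, data_len is the head payload
measured before bpf_prog_run_xdp(), and xdp->data_end - xdp->data is the
head payload afterwards, so the sum charges the whole frame adjusted by
whatever bpf_xdp_adjust_{head,tail}() changed:

	#include <stdio.h>

	#define ETH_FCS_LEN 4	/* as in <linux/if_ether.h> */

	int main(void)
	{
		/* hypothetical 3000-byte frame split over two rx buffers;
		 * the program trims 100 bytes off the head
		 */
		unsigned int frame_sz = 3000 - ETH_FCS_LEN;	/* 2996 */
		unsigned int head_before = 1498;	/* data_len, pre-program */
		unsigned int head_after = 1398;		/* after adjust_head */

		/* mirrors: rx_bytes += frame_sz + data_end - data - data_len */
		unsigned int rx_bytes = frame_sz + head_after - head_before;

		printf("rx_bytes accounted: %u\n", rx_bytes);	/* 2896 */
		return 0;
	}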


* [PATCH v2 net-next 5/6] net: mvneta: get rid of skb in mvneta_rx_queue
@ 2020-07-16 22:16 ` Lorenzo Bianconi
From: Lorenzo Bianconi @ 2020-07-16 22:16 UTC
  To: netdev
  Cc: davem, bpf, kuba, ilias.apalodimas, lorenzo.bianconi, brouer,
	echaudro, sameehj

Remove the skb pointer from the mvneta_rx_queue data structure since it
is no longer used.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/marvell/mvneta.c | 6 ------
 1 file changed, 6 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 7746693a3de1..8b7f6fcd4cca 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -699,8 +699,6 @@ struct mvneta_rx_queue {
 	int first_to_refill;
 	u32 refill_num;
 
-	/* pointer to uncomplete skb buffer */
-	struct sk_buff *skb;
 	int left_size;
 };
 
@@ -3362,9 +3360,6 @@ static void mvneta_rxq_deinit(struct mvneta_port *pp,
 {
 	mvneta_rxq_drop_pkts(pp, rxq);
 
-	if (rxq->skb)
-		dev_kfree_skb_any(rxq->skb);
-
 	if (rxq->descs)
 		dma_free_coherent(pp->dev->dev.parent,
 				  rxq->size * MVNETA_DESC_ALIGNED_SIZE,
@@ -3377,7 +3372,6 @@ static void mvneta_rxq_deinit(struct mvneta_port *pp,
 	rxq->descs_phys        = 0;
 	rxq->first_to_refill   = 0;
 	rxq->refill_num        = 0;
-	rxq->skb               = NULL;
 	rxq->left_size         = 0;
 }
 
-- 
2.26.2


* [PATCH v2 net-next 6/6] net: mvneta: move rxq->left_size on the stack
@ 2020-07-16 22:16 ` Lorenzo Bianconi
From: Lorenzo Bianconi @ 2020-07-16 22:16 UTC
  To: netdev
  Cc: davem, bpf, kuba, ilias.apalodimas, lorenzo.bianconi, brouer,
	echaudro, sameehj

Move rxq->left_size onto the mvneta_rx_swbm stack since it is only used
in the software buffer management napi_poll loop.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/marvell/mvneta.c | 29 ++++++++++++---------------
 1 file changed, 13 insertions(+), 16 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 8b7f6fcd4cca..2c9277e73cef 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -698,8 +698,6 @@ struct mvneta_rx_queue {
 	/* Index of first RX DMA descriptor to refill */
 	int first_to_refill;
 	u32 refill_num;
-
-	int left_size;
 };
 
 static enum cpuhp_state online_hpstate;
@@ -2228,7 +2226,7 @@ static void
 mvneta_swbm_rx_frame(struct mvneta_port *pp,
 		     struct mvneta_rx_desc *rx_desc,
 		     struct mvneta_rx_queue *rxq,
-		     struct xdp_buff *xdp,
+		     struct xdp_buff *xdp, int *size,
 		     struct page *page,
 		     struct mvneta_stats *stats)
 {
@@ -2262,7 +2260,7 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 	sinfo = xdp_get_shared_info_from_buff(xdp);
 	sinfo->nr_frags = 0;
 
-	rxq->left_size = rx_desc->data_size - len;
+	*size = rx_desc->data_size - len;
 	rx_desc->buf_phys_addr = 0;
 }
 
@@ -2270,7 +2268,7 @@ static void
 mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 			    struct mvneta_rx_desc *rx_desc,
 			    struct mvneta_rx_queue *rxq,
-			    struct xdp_buff *xdp,
+			    struct xdp_buff *xdp, int *size,
 			    struct page *page)
 {
 	struct skb_shared_info *sinfo = xdp_get_shared_info_from_buff(xdp);
@@ -2278,11 +2276,11 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 	enum dma_data_direction dma_dir;
 	int data_len, len;
 
-	if (rxq->left_size > MVNETA_MAX_RX_BUF_SIZE) {
+	if (*size > MVNETA_MAX_RX_BUF_SIZE) {
 		len = MVNETA_MAX_RX_BUF_SIZE;
 		data_len = len;
 	} else {
-		len = rxq->left_size;
+		len = *size;
 		data_len = len - ETH_FCS_LEN;
 	}
 	dma_dir = page_pool_get_dma_dir(rxq->page_pool);
@@ -2300,7 +2298,7 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 
 		rx_desc->buf_phys_addr = 0;
 	}
-	rxq->left_size -= len;
+	*size -= len;
 }
 
 static struct sk_buff *
@@ -2341,7 +2339,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 			  struct mvneta_port *pp, int budget,
 			  struct mvneta_rx_queue *rxq)
 {
-	int rx_proc = 0, rx_todo, refill;
+	int rx_proc = 0, rx_todo, refill, size = 0;
 	struct net_device *dev = pp->dev;
 	struct xdp_buff xdp_buf = {
 		.frame_sz = PAGE_SIZE,
@@ -2378,25 +2376,25 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 				goto next;
 			}
 
-			frame_sz = rx_desc->data_size - ETH_FCS_LEN;
+			size = rx_desc->data_size;
+			frame_sz = size - ETH_FCS_LEN;
 			desc_status = rx_desc->status;
 
-			mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf, page,
-					     &ps);
+			mvneta_swbm_rx_frame(pp, rx_desc, rxq, &xdp_buf,
+					     &size, page, &ps);
 		} else {
 			if (unlikely(!xdp_buf.data_hard_start))
 				continue;
 
 			mvneta_swbm_add_rx_fragment(pp, rx_desc, rxq, &xdp_buf,
-						    page);
+						    &size, page);
 		} /* Middle or Last descriptor */
 
 		if (!(rx_status & MVNETA_RXD_LAST_DESC))
 			/* no last descriptor this time */
 			continue;
 
-		if (rxq->left_size) {
-			rxq->left_size = 0;
+		if (size) {
 			mvneta_xdp_put_buff(pp, rxq, &xdp_buf, -1, true);
 			goto next;
 		}
@@ -3372,7 +3370,6 @@ static void mvneta_rxq_deinit(struct mvneta_port *pp,
 	rxq->descs_phys        = 0;
 	rxq->first_to_refill   = 0;
 	rxq->refill_num        = 0;
-	rxq->left_size         = 0;
 }
 
 static int mvneta_txq_sw_init(struct mvneta_port *pp,
-- 
2.26.2


* Re: [PATCH v2 net-next 0/6] rework mvneta napi_poll loop for XDP multi-buffers
@ 2020-07-20  1:28 ` David Miller
From: David Miller @ 2020-07-20  1:28 UTC
  To: lorenzo
  Cc: netdev, bpf, kuba, ilias.apalodimas, lorenzo.bianconi, brouer,
	echaudro, sameehj

From: Lorenzo Bianconi <lorenzo@kernel.org>
Date: Fri, 17 Jul 2020 00:16:28 +0200

> Rework the mvneta_rx_swbm routine to process all rx descriptors before
> building the skb or running the XDP program attached to the interface.
> Introduce xdp_get_shared_info_from_{buff,frame} utility routines to get the
> skb_shared_info pointer from an xdp_buff or xdp_frame.
> This is a preliminary series to enable multi-buffer and jumbo frame support
> for XDP according to [1].
> 
> [1] https://github.com/xdp-project/xdp-project/blob/master/areas/core/xdp-multi-buffer01-design.org
> 
> Changes since v1:
> - rely on skb_frag_* utility routines to access page/offset/len of the XDP multi-buffer

Series applied, thank you.
