From: Jesper Dangaard Brouer <brouer@redhat.com>
To: netdev@vger.kernel.org, "David S. Miller" <davem@davemloft.net>,
	Jesper Dangaard Brouer <brouer@redhat.com>
Cc: "Toke Høiland-Jørgensen" <toke@toke.dk>,
	ard.biesheuvel@linaro.org, "Jason Wang" <jasowang@redhat.com>,
	ilias.apalodimas@linaro.org, "Björn Töpel" <bjorn.topel@intel.com>,
	w@1wt.eu, "Saeed Mahameed" <saeedm@mellanox.com>,
	mykyta.iziumtsev@gmail.com,
	"Daniel Borkmann" <borkmann@iogearbox.net>,
	"Alexei Starovoitov" <alexei.starovoitov@gmail.com>,
	"Tariq Toukan" <tariqt@mellanox.com>
Subject: [net-next PATCH RFC 6/8] mvneta: activate page recycling via skb using page_pool
Date: Fri, 07 Dec 2018 00:25:57 +0100
Message-ID: <154413875746.21735.15285083274392253125.stgit@firesoul>
In-Reply-To: <154413868810.21735.572808840657728172.stgit@firesoul>

Previous mvneta patches have already started to use page_pool, but
primarily for the RX page alloc side and for DMA map/unmap handling.
Pages traveling through the netstack were unmapped and returned
through the normal page allocator.

It is now time to activate recycling of pages back into the
page_pool. This involves registering the page_pool with the XDP rxq
memory model API, even though the driver itself does not support XDP
yet, and then simply updating the SKB->mem_info field with the info
from the xdp_rxq_info.
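
Condensed, the setup and per-packet steps in the diff below are
roughly the following (a sketch only; error unwinding and the
surrounding driver code are omitted, all names as used in the diff):

	/* rxq setup (mvneta_rxq_fill): create the pool and register it
	 * as the memory model behind this RX queue's xdp_rxq_info
	 */
	err = mvneta_create_page_pool(pp, rxq, num);
	err = xdp_rxq_info_reg(&rxq->xdp_rxq, pp->dev, rxq->id);
	err = xdp_rxq_info_reg_mem_model(&rxq->xdp_rxq, MEM_TYPE_PAGE_POOL,
					 rxq->page_pool);

	/* per packet (mvneta_rx_swbm): tag the SKB so the netstack can
	 * return the page to this page_pool instead of the page allocator
	 */
	rxq->skb->mem_info = rxq->xdp_rxq.mem;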

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
---
 drivers/net/ethernet/marvell/mvneta.c |   29 +++++++++++++++++++++++++----
 1 file changed, 25 insertions(+), 4 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 78f1fcdc1f00..449c19829d67 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -628,6 +628,9 @@ struct mvneta_rx_queue {
 	/* page pool */
 	struct page_pool *page_pool;
 
+	/* XDP */
+	struct xdp_rxq_info xdp_rxq;
+
 	/* error counters */
 	u32 skb_alloc_err;
 	u32 refill_err;
@@ -1892,6 +1895,9 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
 		page_pool_put_page(rxq->page_pool, data, false);
 	}
 
+	if (xdp_rxq_info_is_reg(&rxq->xdp_rxq))
+		xdp_rxq_info_unreg(&rxq->xdp_rxq);
+
 	if (rxq->page_pool)
 		page_pool_destroy(rxq->page_pool);
 }
@@ -1978,11 +1984,11 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 
 			rx_desc->buf_phys_addr = 0;
 			frag_num = 0;
+			rxq->skb->mem_info = rxq->xdp_rxq.mem;
 			skb_reserve(rxq->skb, MVNETA_MH_SIZE + NET_SKB_PAD);
 			skb_put(rxq->skb, rx_bytes < PAGE_SIZE ? rx_bytes :
 				PAGE_SIZE);
 			mvneta_rx_csum(pp, rx_status, rxq->skb);
-			page_pool_unmap_page(rxq->page_pool, page);
 			rxq->left_size = rx_bytes < PAGE_SIZE ? 0 : rx_bytes -
 				PAGE_SIZE;
 		} else {
@@ -2001,7 +2007,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 				skb_add_rx_frag(rxq->skb, frag_num, page,
 						0, frag_size,
 						PAGE_SIZE);
-
+				/* skb frags[] are not recycled, unmap now */
 				page_pool_unmap_page(rxq->page_pool, page);
 
 				rxq->left_size -= frag_size;
@@ -2815,10 +2821,25 @@ static int mvneta_create_page_pool(struct mvneta_port *pp,
 static int mvneta_rxq_fill(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 			   int num)
 {
-	int i = 0;
+	int err, i = 0;
+
+	err = mvneta_create_page_pool(pp, rxq, num);
+	if (err)
+		goto out;
 
-	if (mvneta_create_page_pool(pp, rxq, num))
+	err = xdp_rxq_info_reg(&rxq->xdp_rxq, pp->dev, rxq->id);
+	if (err) {
+		page_pool_destroy(rxq->page_pool);
+		goto out;
+	}
+
+	err = xdp_rxq_info_reg_mem_model(&rxq->xdp_rxq, MEM_TYPE_PAGE_POOL,
+					 rxq->page_pool);
+	if (err) {
+		xdp_rxq_info_unreg(&rxq->xdp_rxq);
+		page_pool_destroy(rxq->page_pool);
 		goto out;
+	}
 
 	for (i = 0; i < num; i++) {
 		memset(rxq->descs + i, 0, sizeof(struct mvneta_rx_desc));

Thread overview: 30+ messages
2018-12-06 23:25 [net-next PATCH RFC 0/8] page_pool DMA handling and allow to recycles frames via SKB Jesper Dangaard Brouer
2018-12-06 23:25 ` [net-next PATCH RFC 1/8] page_pool: add helper functions for DMA Jesper Dangaard Brouer
2018-12-08  7:06   ` David Miller
2018-12-08  7:55     ` Ilias Apalodimas
2018-12-06 23:25 ` [net-next PATCH RFC 2/8] net: mvneta: use page pool API for sw buffer manager Jesper Dangaard Brouer
2018-12-06 23:25 ` [net-next PATCH RFC 3/8] xdp: reduce size of struct xdp_mem_info Jesper Dangaard Brouer
2018-12-06 23:25 ` [net-next PATCH RFC 4/8] net: core: add recycle capabilities on skbs via page_pool API Jesper Dangaard Brouer
2018-12-08  7:15   ` David Miller
2018-12-08  7:54     ` Ilias Apalodimas
2018-12-08  9:57   ` [net-next, RFC, " Florian Westphal
2018-12-08 11:36     ` Jesper Dangaard Brouer
2018-12-08 20:10       ` David Miller
2018-12-08 12:29     ` Eric Dumazet
2018-12-08 12:34       ` Eric Dumazet
2018-12-08 13:45       ` Jesper Dangaard Brouer
2018-12-08 14:57       ` Ilias Apalodimas
2018-12-08 17:07         ` Andrew Lunn
2018-12-08 19:26         ` Eric Dumazet
2018-12-08 20:11           ` Jesper Dangaard Brouer
2018-12-08 20:14             ` Ilias Apalodimas
2018-12-08 21:06               ` Willy Tarreau
2018-12-10  7:54                 ` Ilias Apalodimas
2018-12-08 22:36               ` Eric Dumazet
2018-12-08 20:21         ` David Miller
2018-12-08 20:29           ` Ilias Apalodimas
2018-12-10  9:51           ` Saeed Mahameed
2018-12-06 23:25 ` [net-next PATCH RFC 5/8] net: mvneta: remove copybreak, prefetch and use build_skb Jesper Dangaard Brouer
2018-12-06 23:25 ` Jesper Dangaard Brouer [this message]
2018-12-06 23:26 ` [net-next PATCH RFC 7/8] xdp: bpf: cpumap redirect must update skb->mem_info Jesper Dangaard Brouer
2018-12-06 23:26 ` [net-next PATCH RFC 8/8] veth: xdp_frames redirected into veth need to transfer xdp_mem_info Jesper Dangaard Brouer
