netdev.vger.kernel.org archive mirror
* [PATCH v2 net-next 0/8] add XDP support to mvneta driver
@ 2019-10-09 23:18 Lorenzo Bianconi
  2019-10-09 23:18 ` [PATCH v2 net-next 1/8] net: mvneta: introduce mvneta_update_stats routine Lorenzo Bianconi
                   ` (7 more replies)
  0 siblings, 8 replies; 17+ messages in thread
From: Lorenzo Bianconi @ 2019-10-09 23:18 UTC (permalink / raw)
  To: netdev
  Cc: lorenzo.bianconi, davem, thomas.petazzoni, brouer,
	ilias.apalodimas, matteo.croce, mw

Add XDP support to the mvneta driver for devices that rely on software
buffer management. Supported verdicts are:
- XDP_DROP
- XDP_PASS
- XDP_REDIRECT
- XDP_TX
Moreover, set the ndo_xdp_xmit net_device_ops function pointer in order to
support redirecting frames from other devices (e.g. virtio-net).
Convert the mvneta driver to the page_pool API.
This series is based on previous work done by Jesper and Ilias.
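
For testing, a trivial XDP program along these lines can be attached to the
interface (illustrative only, not part of this series; the file and interface
names are just examples):

/* xdp_drop.c - drop every frame, exercising the XDP_DROP verdict.
 * Returning XDP_PASS instead hands the frame to the regular stack.
 */
#include <linux/bpf.h>

__attribute__((section("xdp"), used))
int xdp_drop_all(struct xdp_md *ctx)
{
	return XDP_DROP;
}

char _license[] __attribute__((section("license"), used)) = "GPL";

compiled and loaded e.g. with:

$ clang -O2 -target bpf -c xdp_drop.c -o xdp_drop.o
# ip link set dev eth0 xdp obj xdp_drop.o sec xdp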

Changes since v1:
- sync dma buffers before refilling hw queues
- fix stats accounting

Changes since RFC:
- implement XDP_TX
- make tx pending buffer list agnostic
- code refactoring
- check if device is running in mvneta_xdp_setup

Lorenzo Bianconi (8):
  net: mvneta: introduce mvneta_update_stats routine
  net: mvneta: introduce page pool API for sw buffer manager
  net: mvneta: rely on build_skb in mvneta_rx_swbm poll routine
  net: mvneta: sync dma buffers before refilling hw queues
  net: mvneta: add basic XDP support
  net: mvneta: move header prefetch in mvneta_swbm_rx_frame
  net: mvneta: make tx buffer array agnostic
  net: mvneta: add XDP_TX support

 drivers/net/ethernet/marvell/Kconfig  |   1 +
 drivers/net/ethernet/marvell/mvneta.c | 630 +++++++++++++++++++-------
 2 files changed, 471 insertions(+), 160 deletions(-)

-- 
2.21.0


^ permalink raw reply	[flat|nested] 17+ messages in thread

* [PATCH v2 net-next 1/8] net: mvneta: introduce mvneta_update_stats routine
  2019-10-09 23:18 [PATCH v2 net-next 0/8] add XDP support to mvneta driver Lorenzo Bianconi
@ 2019-10-09 23:18 ` Lorenzo Bianconi
  2019-10-09 23:18 ` [PATCH v2 net-next 2/8] net: mvneta: introduce page pool API for sw buffer manager Lorenzo Bianconi
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 17+ messages in thread
From: Lorenzo Bianconi @ 2019-10-09 23:18 UTC (permalink / raw)
  To: netdev
  Cc: lorenzo.bianconi, davem, thomas.petazzoni, brouer,
	ilias.apalodimas, matteo.croce, mw

Introduce the mvneta_update_stats routine to collect {rx/tx} statistics
(packets and bytes). This is a preliminary patch to add XDP support to
the mvneta driver.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/marvell/mvneta.c | 41 +++++++++++++--------------
 1 file changed, 20 insertions(+), 21 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index e49820675c8c..128b9fded959 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -1900,6 +1900,23 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
 	}
 }
 
+static void
+mvneta_update_stats(struct mvneta_port *pp, u32 pkts,
+		    u32 len, bool tx)
+{
+	struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats);
+
+	u64_stats_update_begin(&stats->syncp);
+	if (tx) {
+		stats->tx_packets += pkts;
+		stats->tx_bytes += len;
+	} else {
+		stats->rx_packets += pkts;
+		stats->rx_bytes += len;
+	}
+	u64_stats_update_end(&stats->syncp);
+}
+
 static inline
 int mvneta_rx_refill_queue(struct mvneta_port *pp, struct mvneta_rx_queue *rxq)
 {
@@ -2075,14 +2092,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		rxq->left_size = 0;
 	}
 
-	if (rcvd_pkts) {
-		struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats);
-
-		u64_stats_update_begin(&stats->syncp);
-		stats->rx_packets += rcvd_pkts;
-		stats->rx_bytes   += rcvd_bytes;
-		u64_stats_update_end(&stats->syncp);
-	}
+	mvneta_update_stats(pp, rcvd_pkts, rcvd_bytes, false);
 
 	/* return some buffers to hardware queue, one at a time is too slow */
 	refill = mvneta_rx_refill_queue(pp, rxq);
@@ -2206,14 +2216,7 @@ static int mvneta_rx_hwbm(struct napi_struct *napi,
 		napi_gro_receive(napi, skb);
 	}
 
-	if (rcvd_pkts) {
-		struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats);
-
-		u64_stats_update_begin(&stats->syncp);
-		stats->rx_packets += rcvd_pkts;
-		stats->rx_bytes   += rcvd_bytes;
-		u64_stats_update_end(&stats->syncp);
-	}
+	mvneta_update_stats(pp, rcvd_pkts, rcvd_bytes, false);
 
 	/* Update rxq management counters */
 	mvneta_rxq_desc_num_update(pp, rxq, rx_done, rx_done);
@@ -2459,7 +2462,6 @@ static netdev_tx_t mvneta_tx(struct sk_buff *skb, struct net_device *dev)
 
 out:
 	if (frags > 0) {
-		struct mvneta_pcpu_stats *stats = this_cpu_ptr(pp->stats);
 		struct netdev_queue *nq = netdev_get_tx_queue(dev, txq_id);
 
 		netdev_tx_sent_queue(nq, len);
@@ -2474,10 +2476,7 @@ static netdev_tx_t mvneta_tx(struct sk_buff *skb, struct net_device *dev)
 		else
 			txq->pending += frags;
 
-		u64_stats_update_begin(&stats->syncp);
-		stats->tx_packets++;
-		stats->tx_bytes  += len;
-		u64_stats_update_end(&stats->syncp);
+		mvneta_update_stats(pp, 1, len, true);
 	} else {
 		dev->stats.tx_dropped++;
 		dev_kfree_skb_any(skb);
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 net-next 2/8] net: mvneta: introduce page pool API for sw buffer manager
  2019-10-09 23:18 [PATCH v2 net-next 0/8] add XDP support to mvneta driver Lorenzo Bianconi
  2019-10-09 23:18 ` [PATCH v2 net-next 1/8] net: mvneta: introduce mvneta_update_stats routine Lorenzo Bianconi
@ 2019-10-09 23:18 ` Lorenzo Bianconi
  2019-10-09 23:18 ` [PATCH v2 net-next 3/8] net: mvneta: rely on build_skb in mvneta_rx_swbm poll routine Lorenzo Bianconi
                   ` (5 subsequent siblings)
  7 siblings, 0 replies; 17+ messages in thread
From: Lorenzo Bianconi @ 2019-10-09 23:18 UTC (permalink / raw)
  To: netdev
  Cc: lorenzo.bianconi, davem, thomas.petazzoni, brouer,
	ilias.apalodimas, matteo.croce, mw

Use the page_pool API for allocations and DMA handling instead of
__dev_alloc_page()/dma_map_page() and __free_page()/dma_unmap_page().
Pages are unmapped using page_pool_release_page before packets
go into the network stack.

The page_pool API offers buffer recycling capabilities for XDP but
allocates one page per packet, unless the driver splits and manages
the allocated page.
This is a preliminary patch to add XDP support to the mvneta driver.
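
As a rough illustration (a condensed sketch, not the actual driver code; the
helper names below are placeholders), the pool creation/refill pattern this
patch adopts looks like:

#include <net/page_pool.h>

/* create one pool per rxq; the pool performs dma_map_page() for us */
static struct page_pool *sketch_create_pool(struct device *dma_dev, int size)
{
	struct page_pool_params params = {
		.order = 0,			/* one page per rx buffer */
		.flags = PP_FLAG_DMA_MAP,
		.pool_size = size,
		.nid = cpu_to_node(0),
		.dev = dma_dev,
		.dma_dir = DMA_FROM_DEVICE,
	};

	return page_pool_create(&params);	/* ERR_PTR() on failure */
}

/* refill: the returned page is already DMA mapped by the pool */
static int sketch_refill_one(struct page_pool *pool, dma_addr_t *dma)
{
	struct page *page = page_pool_alloc_pages(pool,
						  GFP_ATOMIC | __GFP_NOWARN);

	if (!page)
		return -ENOMEM;

	*dma = page_pool_get_dma_addr(page);
	return 0;
}

On receive, page_pool_release_page() drops the pool mapping before the skb is
passed up the stack; on queue teardown, pages still owned by the driver go
back via page_pool_put_page() before page_pool_destroy().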

Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/marvell/Kconfig  |  1 +
 drivers/net/ethernet/marvell/mvneta.c | 77 ++++++++++++++++++++-------
 2 files changed, 59 insertions(+), 19 deletions(-)

diff --git a/drivers/net/ethernet/marvell/Kconfig b/drivers/net/ethernet/marvell/Kconfig
index fb942167ee54..3d5caea096fb 100644
--- a/drivers/net/ethernet/marvell/Kconfig
+++ b/drivers/net/ethernet/marvell/Kconfig
@@ -61,6 +61,7 @@ config MVNETA
 	depends on ARCH_MVEBU || COMPILE_TEST
 	select MVMDIO
 	select PHYLINK
+	select PAGE_POOL
 	---help---
 	  This driver supports the network interface units in the
 	  Marvell ARMADA XP, ARMADA 370, ARMADA 38x and
diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 128b9fded959..31cecc1ed848 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -37,6 +37,7 @@
 #include <net/ip.h>
 #include <net/ipv6.h>
 #include <net/tso.h>
+#include <net/page_pool.h>
 
 /* Registers */
 #define MVNETA_RXQ_CONFIG_REG(q)                (0x1400 + ((q) << 2))
@@ -603,6 +604,10 @@ struct mvneta_rx_queue {
 	u32 pkts_coal;
 	u32 time_coal;
 
+	/* page_pool */
+	struct page_pool *page_pool;
+	struct xdp_rxq_info xdp_rxq;
+
 	/* Virtual address of the RX buffer */
 	void  **buf_virt_addr;
 
@@ -1815,20 +1820,14 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
 	dma_addr_t phys_addr;
 	struct page *page;
 
-	page = __dev_alloc_page(gfp_mask);
+	page = page_pool_alloc_pages(rxq->page_pool,
+				     gfp_mask | __GFP_NOWARN);
 	if (!page)
 		return -ENOMEM;
 
-	/* map page for use */
-	phys_addr = dma_map_page(pp->dev->dev.parent, page, 0, PAGE_SIZE,
-				 DMA_FROM_DEVICE);
-	if (unlikely(dma_mapping_error(pp->dev->dev.parent, phys_addr))) {
-		__free_page(page);
-		return -ENOMEM;
-	}
-
-	phys_addr += pp->rx_offset_correction;
+	phys_addr = page_pool_get_dma_addr(page) + pp->rx_offset_correction;
 	mvneta_rx_desc_fill(rx_desc, phys_addr, page, rxq);
+
 	return 0;
 }
 
@@ -1894,10 +1893,11 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
 		if (!data || !(rx_desc->buf_phys_addr))
 			continue;
 
-		dma_unmap_page(pp->dev->dev.parent, rx_desc->buf_phys_addr,
-			       PAGE_SIZE, DMA_FROM_DEVICE);
-		__free_page(data);
+		page_pool_put_page(rxq->page_pool, data, false);
 	}
+	if (xdp_rxq_info_is_reg(&rxq->xdp_rxq))
+		xdp_rxq_info_unreg(&rxq->xdp_rxq);
+	page_pool_destroy(rxq->page_pool);
 }
 
 static void
@@ -2029,8 +2029,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 				skb_add_rx_frag(rxq->skb, frag_num, page,
 						frag_offset, frag_size,
 						PAGE_SIZE);
-				dma_unmap_page(dev->dev.parent, phys_addr,
-					       PAGE_SIZE, DMA_FROM_DEVICE);
+				page_pool_release_page(rxq->page_pool, page);
 				rxq->left_size -= frag_size;
 			}
 		} else {
@@ -2060,9 +2059,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 						frag_offset, frag_size,
 						PAGE_SIZE);
 
-				dma_unmap_page(dev->dev.parent, phys_addr,
-					       PAGE_SIZE, DMA_FROM_DEVICE);
-
+				page_pool_release_page(rxq->page_pool, page);
 				rxq->left_size -= frag_size;
 			}
 		} /* Middle or Last descriptor */
@@ -2829,11 +2826,53 @@ static int mvneta_poll(struct napi_struct *napi, int budget)
 	return rx_done;
 }
 
+static int mvneta_create_page_pool(struct mvneta_port *pp,
+				   struct mvneta_rx_queue *rxq, int size)
+{
+	struct page_pool_params pp_params = {
+		.order = 0,
+		.flags = PP_FLAG_DMA_MAP,
+		.pool_size = size,
+		.nid = cpu_to_node(0),
+		.dev = pp->dev->dev.parent,
+		.dma_dir = DMA_FROM_DEVICE,
+	};
+	int err;
+
+	rxq->page_pool = page_pool_create(&pp_params);
+	if (IS_ERR(rxq->page_pool)) {
+		err = PTR_ERR(rxq->page_pool);
+		rxq->page_pool = NULL;
+		return err;
+	}
+
+	err = xdp_rxq_info_reg(&rxq->xdp_rxq, pp->dev, 0);
+	if (err < 0)
+		goto err_free_pp;
+
+	err = xdp_rxq_info_reg_mem_model(&rxq->xdp_rxq, MEM_TYPE_PAGE_POOL,
+					 rxq->page_pool);
+	if (err)
+		goto err_unregister_rxq;
+
+	return 0;
+
+err_unregister_rxq:
+	xdp_rxq_info_unreg(&rxq->xdp_rxq);
+err_free_pp:
+	page_pool_destroy(rxq->page_pool);
+	return err;
+}
+
 /* Handle rxq fill: allocates rxq skbs; called when initializing a port */
 static int mvneta_rxq_fill(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 			   int num)
 {
-	int i;
+	int i, err;
+
+	err = mvneta_create_page_pool(pp, rxq, num);
+	if (err < 0)
+		return err;
 
 	for (i = 0; i < num; i++) {
 		memset(rxq->descs + i, 0, sizeof(struct mvneta_rx_desc));
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 net-next 3/8] net: mvneta: rely on build_skb in mvneta_rx_swbm poll routine
  2019-10-09 23:18 [PATCH v2 net-next 0/8] add XDP support to mvneta driver Lorenzo Bianconi
  2019-10-09 23:18 ` [PATCH v2 net-next 1/8] net: mvneta: introduce mvneta_update_stats routine Lorenzo Bianconi
  2019-10-09 23:18 ` [PATCH v2 net-next 2/8] net: mvneta: introduce page pool API for sw buffer manager Lorenzo Bianconi
@ 2019-10-09 23:18 ` Lorenzo Bianconi
  2019-10-10  7:16   ` Ilias Apalodimas
  2019-10-09 23:18 ` [PATCH v2 net-next 4/8] net: mvneta: sync dma buffers before refilling hw queues Lorenzo Bianconi
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Lorenzo Bianconi @ 2019-10-09 23:18 UTC (permalink / raw)
  To: netdev
  Cc: lorenzo.bianconi, davem, thomas.petazzoni, brouer,
	ilias.apalodimas, matteo.croce, mw

Refactor the mvneta_rx_swbm code introducing the mvneta_swbm_rx_frame and
mvneta_swbm_add_rx_fragment routines. Rely on build_skb in order to
allocate the skb, since the previous patch introduced buffer recycling
using the page_pool API.
This patch also fixes an issue in the original driver where DMA buffers
are accessed before the DMA sync.
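
For reference, the buffer layout assumed by the new macros (a sketch; exact
values depend on PAGE_SIZE and struct skb_shared_info):

/*
 * page start                                                       page end
 * +-------------+--------------------------------+-------------------------+
 * | NET_SKB_PAD |  packet data                   | aligned skb_shared_info |
 * | (headroom)  |  (<= MVNETA_MAX_RX_BUF_SIZE)   | (used by build_skb)     |
 * +-------------+--------------------------------+-------------------------+
 *
 * MVNETA_SKB_PAD         = SKB_DATA_ALIGN(sizeof(struct skb_shared_info) +
 *                                         NET_SKB_PAD)
 * MVNETA_MAX_RX_BUF_SIZE = PAGE_SIZE - MVNETA_SKB_PAD
 */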

Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/marvell/mvneta.c | 198 ++++++++++++++------------
 1 file changed, 104 insertions(+), 94 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 31cecc1ed848..79a6bac0192b 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -323,6 +323,11 @@
 	      ETH_HLEN + ETH_FCS_LEN,			     \
 	      cache_line_size())
 
+#define MVNETA_SKB_PAD	(SKB_DATA_ALIGN(sizeof(struct skb_shared_info) + \
+			 NET_SKB_PAD))
+#define MVNETA_SKB_SIZE(len)	(SKB_DATA_ALIGN(len) + MVNETA_SKB_PAD)
+#define MVNETA_MAX_RX_BUF_SIZE	(PAGE_SIZE - MVNETA_SKB_PAD)
+
 #define IS_TSO_HEADER(txq, addr) \
 	((addr >= txq->tso_hdrs_phys) && \
 	 (addr < txq->tso_hdrs_phys + txq->size * TSO_HEADER_SIZE))
@@ -646,7 +651,6 @@ static int txq_number = 8;
 static int rxq_def;
 
 static int rx_copybreak __read_mostly = 256;
-static int rx_header_size __read_mostly = 128;
 
 /* HW BM need that each port be identify by a unique ID */
 static int global_port_id;
@@ -1942,30 +1946,102 @@ int mvneta_rx_refill_queue(struct mvneta_port *pp, struct mvneta_rx_queue *rxq)
 	return i;
 }
 
+static int
+mvneta_swbm_rx_frame(struct mvneta_port *pp,
+		     struct mvneta_rx_desc *rx_desc,
+		     struct mvneta_rx_queue *rxq,
+		     struct page *page)
+{
+	unsigned char *data = page_address(page);
+	int data_len = -MVNETA_MH_SIZE, len;
+	struct net_device *dev = pp->dev;
+	enum dma_data_direction dma_dir;
+
+	if (MVNETA_SKB_SIZE(rx_desc->data_size) > PAGE_SIZE) {
+		len = MVNETA_MAX_RX_BUF_SIZE;
+		data_len += len;
+	} else {
+		len = rx_desc->data_size;
+		data_len += len - ETH_FCS_LEN;
+	}
+
+	dma_dir = page_pool_get_dma_dir(rxq->page_pool);
+	dma_sync_single_range_for_cpu(dev->dev.parent,
+				      rx_desc->buf_phys_addr, 0,
+				      len, dma_dir);
+
+	rxq->skb = build_skb(data, PAGE_SIZE);
+	if (unlikely(!rxq->skb)) {
+		netdev_err(dev,
+			   "Can't allocate skb on queue %d\n",
+			   rxq->id);
+		dev->stats.rx_dropped++;
+		rxq->skb_alloc_err++;
+		return -ENOMEM;
+	}
+	page_pool_release_page(rxq->page_pool, page);
+
+	skb_reserve(rxq->skb, MVNETA_MH_SIZE + NET_SKB_PAD);
+	skb_put(rxq->skb, data_len);
+	mvneta_rx_csum(pp, rx_desc->status, rxq->skb);
+
+	rxq->left_size = rx_desc->data_size - len;
+	rx_desc->buf_phys_addr = 0;
+
+	return 0;
+}
+
+static void
+mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
+			    struct mvneta_rx_desc *rx_desc,
+			    struct mvneta_rx_queue *rxq,
+			    struct page *page)
+{
+	struct net_device *dev = pp->dev;
+	enum dma_data_direction dma_dir;
+	int data_len, len;
+
+	if (rxq->left_size > MVNETA_MAX_RX_BUF_SIZE) {
+		len = MVNETA_MAX_RX_BUF_SIZE;
+		data_len = len;
+	} else {
+		len = rxq->left_size;
+		data_len = len - ETH_FCS_LEN;
+	}
+	dma_dir = page_pool_get_dma_dir(rxq->page_pool);
+	dma_sync_single_range_for_cpu(dev->dev.parent,
+				      rx_desc->buf_phys_addr, 0,
+				      len, dma_dir);
+	if (data_len > 0) {
+		/* refill descriptor with new buffer later */
+		skb_add_rx_frag(rxq->skb,
+				skb_shinfo(rxq->skb)->nr_frags,
+				page, NET_SKB_PAD, data_len,
+				PAGE_SIZE);
+
+		page_pool_release_page(rxq->page_pool, page);
+		rx_desc->buf_phys_addr = 0;
+	}
+	rxq->left_size -= len;
+}
+
 /* Main rx processing when using software buffer management */
 static int mvneta_rx_swbm(struct napi_struct *napi,
 			  struct mvneta_port *pp, int budget,
 			  struct mvneta_rx_queue *rxq)
 {
-	struct net_device *dev = pp->dev;
-	int rx_todo, rx_proc;
-	int refill = 0;
-	u32 rcvd_pkts = 0;
-	u32 rcvd_bytes = 0;
+	int rcvd_pkts = 0, rcvd_bytes = 0;
+	int rx_pending, refill, done = 0;
 
 	/* Get number of received packets */
-	rx_todo = mvneta_rxq_busy_desc_num_get(pp, rxq);
-	rx_proc = 0;
+	rx_pending = mvneta_rxq_busy_desc_num_get(pp, rxq);
 
 	/* Fairness NAPI loop */
-	while ((rcvd_pkts < budget) && (rx_proc < rx_todo)) {
+	while (done < budget && done < rx_pending) {
 		struct mvneta_rx_desc *rx_desc = mvneta_rxq_next_desc_get(rxq);
 		unsigned char *data;
 		struct page *page;
-		dma_addr_t phys_addr;
-		u32 rx_status, index;
-		int rx_bytes, skb_size, copy_size;
-		int frag_num, frag_size, frag_offset;
+		int index;
 
 		index = rx_desc - rxq->descs;
 		page = (struct page *)rxq->buf_virt_addr[index];
@@ -1973,98 +2049,33 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		/* Prefetch header */
 		prefetch(data);
 
-		phys_addr = rx_desc->buf_phys_addr;
-		rx_status = rx_desc->status;
-		rx_proc++;
 		rxq->refill_num++;
+		done++;
+
+		if (rx_desc->status & MVNETA_RXD_FIRST_DESC) {
+			int err;
 
-		if (rx_status & MVNETA_RXD_FIRST_DESC) {
 			/* Check errors only for FIRST descriptor */
-			if (rx_status & MVNETA_RXD_ERR_SUMMARY) {
+			if (rx_desc->status & MVNETA_RXD_ERR_SUMMARY) {
 				mvneta_rx_error(pp, rx_desc);
-				dev->stats.rx_errors++;
+				pp->dev->stats.rx_errors++;
 				/* leave the descriptor untouched */
 				continue;
 			}
-			rx_bytes = rx_desc->data_size -
-				   (ETH_FCS_LEN + MVNETA_MH_SIZE);
 
-			/* Allocate small skb for each new packet */
-			skb_size = max(rx_copybreak, rx_header_size);
-			rxq->skb = netdev_alloc_skb_ip_align(dev, skb_size);
-			if (unlikely(!rxq->skb)) {
-				netdev_err(dev,
-					   "Can't allocate skb on queue %d\n",
-					   rxq->id);
-				dev->stats.rx_dropped++;
-				rxq->skb_alloc_err++;
+			err = mvneta_swbm_rx_frame(pp, rx_desc, rxq, page);
+			if (err)
 				continue;
-			}
-			copy_size = min(skb_size, rx_bytes);
-
-			/* Copy data from buffer to SKB, skip Marvell header */
-			memcpy(rxq->skb->data, data + MVNETA_MH_SIZE,
-			       copy_size);
-			skb_put(rxq->skb, copy_size);
-			rxq->left_size = rx_bytes - copy_size;
-
-			mvneta_rx_csum(pp, rx_status, rxq->skb);
-			if (rxq->left_size == 0) {
-				int size = copy_size + MVNETA_MH_SIZE;
-
-				dma_sync_single_range_for_cpu(dev->dev.parent,
-							      phys_addr, 0,
-							      size,
-							      DMA_FROM_DEVICE);
-
-				/* leave the descriptor and buffer untouched */
-			} else {
-				/* refill descriptor with new buffer later */
-				rx_desc->buf_phys_addr = 0;
-
-				frag_num = 0;
-				frag_offset = copy_size + MVNETA_MH_SIZE;
-				frag_size = min(rxq->left_size,
-						(int)(PAGE_SIZE - frag_offset));
-				skb_add_rx_frag(rxq->skb, frag_num, page,
-						frag_offset, frag_size,
-						PAGE_SIZE);
-				page_pool_release_page(rxq->page_pool, page);
-				rxq->left_size -= frag_size;
-			}
 		} else {
-			/* Middle or Last descriptor */
 			if (unlikely(!rxq->skb)) {
 				pr_debug("no skb for rx_status 0x%x\n",
-					 rx_status);
+					 rx_desc->status);
 				continue;
 			}
-			if (!rxq->left_size) {
-				/* last descriptor has only FCS */
-				/* and can be discarded */
-				dma_sync_single_range_for_cpu(dev->dev.parent,
-							      phys_addr, 0,
-							      ETH_FCS_LEN,
-							      DMA_FROM_DEVICE);
-				/* leave the descriptor and buffer untouched */
-			} else {
-				/* refill descriptor with new buffer later */
-				rx_desc->buf_phys_addr = 0;
-
-				frag_num = skb_shinfo(rxq->skb)->nr_frags;
-				frag_offset = 0;
-				frag_size = min(rxq->left_size,
-						(int)(PAGE_SIZE - frag_offset));
-				skb_add_rx_frag(rxq->skb, frag_num, page,
-						frag_offset, frag_size,
-						PAGE_SIZE);
-
-				page_pool_release_page(rxq->page_pool, page);
-				rxq->left_size -= frag_size;
-			}
+			mvneta_swbm_add_rx_fragment(pp, rx_desc, rxq, page);
 		} /* Middle or Last descriptor */
 
-		if (!(rx_status & MVNETA_RXD_LAST_DESC))
+		if (!(rx_desc->status & MVNETA_RXD_LAST_DESC))
 			/* no last descriptor this time */
 			continue;
 
@@ -2080,13 +2091,12 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		rcvd_bytes += rxq->skb->len;
 
 		/* Linux processing */
-		rxq->skb->protocol = eth_type_trans(rxq->skb, dev);
+		rxq->skb->protocol = eth_type_trans(rxq->skb, pp->dev);
 
 		napi_gro_receive(napi, rxq->skb);
 
 		/* clean uncomplete skb pointer in queue */
 		rxq->skb = NULL;
-		rxq->left_size = 0;
 	}
 
 	mvneta_update_stats(pp, rcvd_pkts, rcvd_bytes, false);
@@ -2095,7 +2105,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 	refill = mvneta_rx_refill_queue(pp, rxq);
 
 	/* Update rxq management counters */
-	mvneta_rxq_desc_num_update(pp, rxq, rx_proc, refill);
+	mvneta_rxq_desc_num_update(pp, rxq, done, refill);
 
 	return rcvd_pkts;
 }
@@ -2946,7 +2956,7 @@ static void mvneta_rxq_hw_init(struct mvneta_port *pp,
 		/* Set Offset */
 		mvneta_rxq_offset_set(pp, rxq, 0);
 		mvneta_rxq_buf_size_set(pp, rxq, PAGE_SIZE < SZ_64K ?
-					PAGE_SIZE :
+					MVNETA_MAX_RX_BUF_SIZE :
 					MVNETA_RX_BUF_SIZE(pp->pkt_size));
 		mvneta_rxq_bm_disable(pp, rxq);
 		mvneta_rxq_fill(pp, rxq, rxq->size);
@@ -4656,7 +4666,7 @@ static int mvneta_probe(struct platform_device *pdev)
 	SET_NETDEV_DEV(dev, &pdev->dev);
 
 	pp->id = global_port_id++;
-	pp->rx_offset_correction = 0; /* not relevant for SW BM */
+	pp->rx_offset_correction = NET_SKB_PAD;
 
 	/* Obtain access to BM resources if enabled and already initialized */
 	bm_node = of_parse_phandle(dn, "buffer-manager", 0);
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 net-next 4/8] net: mvneta: sync dma buffers before refilling hw queues
  2019-10-09 23:18 [PATCH v2 net-next 0/8] add XDP support to mvneta driver Lorenzo Bianconi
                   ` (2 preceding siblings ...)
  2019-10-09 23:18 ` [PATCH v2 net-next 3/8] net: mvneta: rely on build_skb in mvneta_rx_swbm poll routine Lorenzo Bianconi
@ 2019-10-09 23:18 ` Lorenzo Bianconi
  2019-10-10  7:08   ` Jesper Dangaard Brouer
  2019-10-09 23:18 ` [PATCH v2 net-next 5/8] net: mvneta: add basic XDP support Lorenzo Bianconi
                   ` (3 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Lorenzo Bianconi @ 2019-10-09 23:18 UTC (permalink / raw)
  To: netdev
  Cc: lorenzo.bianconi, davem, thomas.petazzoni, brouer,
	ilias.apalodimas, matteo.croce, mw

The mvneta driver can run on non-cache-coherent devices, so it is
necessary to sync DMA buffers before sending them to the device
in order to avoid memory corruption. This patch introduces a performance
penalty and it will be necessary to introduce a more sophisticated logic
in order to avoid the DMA sync as much as we can.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/marvell/mvneta.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 79a6bac0192b..ba4aa9bbc798 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -1821,6 +1821,7 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
 			    struct mvneta_rx_queue *rxq,
 			    gfp_t gfp_mask)
 {
+	enum dma_data_direction dma_dir;
 	dma_addr_t phys_addr;
 	struct page *page;
 
@@ -1830,6 +1831,9 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
 		return -ENOMEM;
 
 	phys_addr = page_pool_get_dma_addr(page) + pp->rx_offset_correction;
+	dma_dir = page_pool_get_dma_dir(rxq->page_pool);
+	dma_sync_single_for_device(pp->dev->dev.parent, phys_addr,
+				   MVNETA_MAX_RX_BUF_SIZE, dma_dir);
 	mvneta_rx_desc_fill(rx_desc, phys_addr, page, rxq);
 
 	return 0;
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 net-next 5/8] net: mvneta: add basic XDP support
  2019-10-09 23:18 [PATCH v2 net-next 0/8] add XDP support to mvneta driver Lorenzo Bianconi
                   ` (3 preceding siblings ...)
  2019-10-09 23:18 ` [PATCH v2 net-next 4/8] net: mvneta: sync dma buffers before refilling hw queues Lorenzo Bianconi
@ 2019-10-09 23:18 ` Lorenzo Bianconi
  2019-10-10  8:50   ` Jesper Dangaard Brouer
  2019-10-09 23:18 ` [PATCH v2 net-next 6/8] net: mvneta: move header prefetch in mvneta_swbm_rx_frame Lorenzo Bianconi
                   ` (2 subsequent siblings)
  7 siblings, 1 reply; 17+ messages in thread
From: Lorenzo Bianconi @ 2019-10-09 23:18 UTC (permalink / raw)
  To: netdev
  Cc: lorenzo.bianconi, davem, thomas.petazzoni, brouer,
	ilias.apalodimas, matteo.croce, mw

Add basic XDP support to the mvneta driver for devices that rely on software
buffer management. Currently supported verdicts are:
- XDP_DROP
- XDP_PASS
- XDP_REDIRECT
- XDP_ABORTED
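
For completeness, a program can be attached in native mode through the new
ndo_bpf hook, e.g. via libbpf (illustrative sketch; object file, interface
name and error handling are just examples):

#include <net/if.h>
#include <linux/if_link.h>
#include <bpf/libbpf.h>
#include <bpf/bpf.h>

int main(void)
{
	struct bpf_object *obj;
	int ifindex, prog_fd;

	ifindex = if_nametoindex("eth0");	/* example interface */
	if (!ifindex)
		return 1;

	if (bpf_prog_load("xdp_prog.o", BPF_PROG_TYPE_XDP, &obj, &prog_fd))
		return 1;

	/* XDP_FLAGS_DRV_MODE requests the native path added by this patch */
	return bpf_set_link_xdp_fd(ifindex, prog_fd, XDP_FLAGS_DRV_MODE) < 0;
}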

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/marvell/mvneta.c | 144 ++++++++++++++++++++++++--
 1 file changed, 135 insertions(+), 9 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index ba4aa9bbc798..e2795dddbcaf 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -38,6 +38,7 @@
 #include <net/ipv6.h>
 #include <net/tso.h>
 #include <net/page_pool.h>
+#include <linux/bpf_trace.h>
 
 /* Registers */
 #define MVNETA_RXQ_CONFIG_REG(q)                (0x1400 + ((q) << 2))
@@ -323,8 +324,10 @@
 	      ETH_HLEN + ETH_FCS_LEN,			     \
 	      cache_line_size())
 
+#define MVNETA_SKB_HEADROOM	(max(XDP_PACKET_HEADROOM, NET_SKB_PAD) + \
+				 NET_IP_ALIGN)
 #define MVNETA_SKB_PAD	(SKB_DATA_ALIGN(sizeof(struct skb_shared_info) + \
-			 NET_SKB_PAD))
+			 MVNETA_SKB_HEADROOM))
 #define MVNETA_SKB_SIZE(len)	(SKB_DATA_ALIGN(len) + MVNETA_SKB_PAD)
 #define MVNETA_MAX_RX_BUF_SIZE	(PAGE_SIZE - MVNETA_SKB_PAD)
 
@@ -352,6 +355,11 @@ struct mvneta_statistic {
 #define T_REG_64	64
 #define T_SW		1
 
+#define MVNETA_XDP_PASS		BIT(0)
+#define MVNETA_XDP_CONSUMED	BIT(1)
+#define MVNETA_XDP_TX		BIT(2)
+#define MVNETA_XDP_REDIR	BIT(3)
+
 static const struct mvneta_statistic mvneta_statistics[] = {
 	{ 0x3000, T_REG_64, "good_octets_received", },
 	{ 0x3010, T_REG_32, "good_frames_received", },
@@ -431,6 +439,8 @@ struct mvneta_port {
 	u32 cause_rx_tx;
 	struct napi_struct napi;
 
+	struct bpf_prog *xdp_prog;
+
 	/* Core clock */
 	struct clk *clk;
 	/* AXI clock */
@@ -1950,16 +1960,60 @@ int mvneta_rx_refill_queue(struct mvneta_port *pp, struct mvneta_rx_queue *rxq)
 	return i;
 }
 
+static int
+mvneta_run_xdp(struct mvneta_port *pp, struct bpf_prog *prog,
+	       struct xdp_buff *xdp)
+{
+	u32 ret, act = bpf_prog_run_xdp(prog, xdp);
+
+	switch (act) {
+	case XDP_PASS:
+		ret = MVNETA_XDP_PASS;
+		break;
+	case XDP_REDIRECT: {
+		int err;
+
+		err = xdp_do_redirect(pp->dev, xdp, prog);
+		if (err) {
+			ret = MVNETA_XDP_CONSUMED;
+			xdp_return_buff(xdp);
+		} else {
+			ret = MVNETA_XDP_REDIR;
+		}
+		break;
+	}
+	default:
+		bpf_warn_invalid_xdp_action(act);
+		/* fall through */
+	case XDP_ABORTED:
+		trace_xdp_exception(pp->dev, prog, act);
+		/* fall through */
+	case XDP_DROP:
+		ret = MVNETA_XDP_CONSUMED;
+		xdp_return_buff(xdp);
+		break;
+	}
+
+	return ret;
+}
+
 static int
 mvneta_swbm_rx_frame(struct mvneta_port *pp,
 		     struct mvneta_rx_desc *rx_desc,
 		     struct mvneta_rx_queue *rxq,
-		     struct page *page)
+		     struct bpf_prog *xdp_prog,
+		     struct page *page, u32 *xdp_ret)
 {
 	unsigned char *data = page_address(page);
 	int data_len = -MVNETA_MH_SIZE, len;
 	struct net_device *dev = pp->dev;
 	enum dma_data_direction dma_dir;
+	struct xdp_buff xdp = {
+		.data_hard_start = data,
+		.data = data + MVNETA_SKB_HEADROOM + MVNETA_MH_SIZE,
+		.rxq = &rxq->xdp_rxq,
+	};
+	xdp_set_data_meta_invalid(&xdp);
 
 	if (MVNETA_SKB_SIZE(rx_desc->data_size) > PAGE_SIZE) {
 		len = MVNETA_MAX_RX_BUF_SIZE;
@@ -1968,13 +2022,27 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 		len = rx_desc->data_size;
 		data_len += len - ETH_FCS_LEN;
 	}
+	xdp.data_end = xdp.data + data_len;
 
 	dma_dir = page_pool_get_dma_dir(rxq->page_pool);
 	dma_sync_single_range_for_cpu(dev->dev.parent,
 				      rx_desc->buf_phys_addr, 0,
 				      len, dma_dir);
 
-	rxq->skb = build_skb(data, PAGE_SIZE);
+	if (xdp_prog) {
+		u32 ret;
+
+		ret = mvneta_run_xdp(pp, xdp_prog, &xdp);
+		if (ret != MVNETA_XDP_PASS) {
+			mvneta_update_stats(pp, 1, xdp.data_end - xdp.data,
+					    false);
+			rx_desc->buf_phys_addr = 0;
+			*xdp_ret |= ret;
+			return ret;
+		}
+	}
+
+	rxq->skb = build_skb(xdp.data_hard_start, PAGE_SIZE);
 	if (unlikely(!rxq->skb)) {
 		netdev_err(dev,
 			   "Can't allocate skb on queue %d\n",
@@ -1985,8 +2053,9 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 	}
 	page_pool_release_page(rxq->page_pool, page);
 
-	skb_reserve(rxq->skb, MVNETA_MH_SIZE + NET_SKB_PAD);
-	skb_put(rxq->skb, data_len);
+	skb_reserve(rxq->skb,
+		    xdp.data - xdp.data_hard_start);
+	skb_put(rxq->skb, xdp.data_end - xdp.data);
 	mvneta_rx_csum(pp, rx_desc->status, rxq->skb);
 
 	rxq->left_size = rx_desc->data_size - len;
@@ -2020,7 +2089,7 @@ mvneta_swbm_add_rx_fragment(struct mvneta_port *pp,
 		/* refill descriptor with new buffer later */
 		skb_add_rx_frag(rxq->skb,
 				skb_shinfo(rxq->skb)->nr_frags,
-				page, NET_SKB_PAD, data_len,
+				page, MVNETA_SKB_HEADROOM, data_len,
 				PAGE_SIZE);
 
 		page_pool_release_page(rxq->page_pool, page);
@@ -2036,10 +2105,15 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 {
 	int rcvd_pkts = 0, rcvd_bytes = 0;
 	int rx_pending, refill, done = 0;
+	struct bpf_prog *xdp_prog;
+	u32 xdp_ret = 0;
 
 	/* Get number of received packets */
 	rx_pending = mvneta_rxq_busy_desc_num_get(pp, rxq);
 
+	rcu_read_lock();
+	xdp_prog = READ_ONCE(pp->xdp_prog);
+
 	/* Fairness NAPI loop */
 	while (done < budget && done < rx_pending) {
 		struct mvneta_rx_desc *rx_desc = mvneta_rxq_next_desc_get(rxq);
@@ -2067,7 +2141,8 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 				continue;
 			}
 
-			err = mvneta_swbm_rx_frame(pp, rx_desc, rxq, page);
+			err = mvneta_swbm_rx_frame(pp, rx_desc, rxq,
+						   xdp_prog, page, &xdp_ret);
 			if (err)
 				continue;
 		} else {
@@ -2102,6 +2177,10 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		/* clean uncomplete skb pointer in queue */
 		rxq->skb = NULL;
 	}
+	rcu_read_unlock();
+
+	if (xdp_ret & MVNETA_XDP_REDIR)
+		xdp_do_flush_map();
 
 	mvneta_update_stats(pp, rcvd_pkts, rcvd_bytes, false);
 
@@ -2843,13 +2922,14 @@ static int mvneta_poll(struct napi_struct *napi, int budget)
 static int mvneta_create_page_pool(struct mvneta_port *pp,
 				   struct mvneta_rx_queue *rxq, int size)
 {
+	struct bpf_prog *xdp_prog = READ_ONCE(pp->xdp_prog);
 	struct page_pool_params pp_params = {
 		.order = 0,
 		.flags = PP_FLAG_DMA_MAP,
 		.pool_size = size,
 		.nid = cpu_to_node(0),
 		.dev = pp->dev->dev.parent,
-		.dma_dir = DMA_FROM_DEVICE,
+		.dma_dir = xdp_prog ? DMA_BIDIRECTIONAL : DMA_FROM_DEVICE,
 	};
 	int err;
 
@@ -3315,6 +3395,11 @@ static int mvneta_change_mtu(struct net_device *dev, int mtu)
 		mtu = ALIGN(MVNETA_RX_PKT_SIZE(mtu), 8);
 	}
 
+	if (pp->xdp_prog && mtu > MVNETA_MAX_RX_BUF_SIZE) {
+		netdev_info(dev, "Illegal MTU value %d for XDP mode\n", mtu);
+		return -EINVAL;
+	}
+
 	dev->mtu = mtu;
 
 	if (!netif_running(dev)) {
@@ -3984,6 +4069,46 @@ static int mvneta_ioctl(struct net_device *dev, struct ifreq *ifr, int cmd)
 	return phylink_mii_ioctl(pp->phylink, ifr, cmd);
 }
 
+static int mvneta_xdp_setup(struct net_device *dev, struct bpf_prog *prog,
+			    struct netlink_ext_ack *extack)
+{
+	struct mvneta_port *pp = netdev_priv(dev);
+	struct bpf_prog *old_prog;
+
+	if (prog && dev->mtu > MVNETA_MAX_RX_BUF_SIZE) {
+		NL_SET_ERR_MSG_MOD(extack, "Jumbo frames not supported on XDP");
+		return -EOPNOTSUPP;
+	}
+
+	if (netif_running(dev))
+		mvneta_stop(dev);
+
+	old_prog = xchg(&pp->xdp_prog, prog);
+	if (old_prog)
+		bpf_prog_put(old_prog);
+
+	if (netif_running(dev))
+		mvneta_open(dev);
+
+	return 0;
+}
+
+static int mvneta_xdp(struct net_device *dev, struct netdev_bpf *xdp)
+{
+	struct mvneta_port *pp = netdev_priv(dev);
+
+	switch (xdp->command) {
+	case XDP_SETUP_PROG:
+		return mvneta_xdp_setup(dev, xdp->prog, xdp->extack);
+	case XDP_QUERY_PROG:
+		xdp->prog_id = pp->xdp_prog ? pp->xdp_prog->aux->id : 0;
+		return 0;
+	default:
+		NL_SET_ERR_MSG_MOD(xdp->extack, "unknown XDP command");
+		return -EINVAL;
+	}
+}
+
 /* Ethtool methods */
 
 /* Set link ksettings (phy address, speed) for ethtools */
@@ -4380,6 +4505,7 @@ static const struct net_device_ops mvneta_netdev_ops = {
 	.ndo_fix_features    = mvneta_fix_features,
 	.ndo_get_stats64     = mvneta_get_stats64,
 	.ndo_do_ioctl        = mvneta_ioctl,
+	.ndo_bpf	     = mvneta_xdp,
 };
 
 static const struct ethtool_ops mvneta_eth_tool_ops = {
@@ -4670,7 +4796,7 @@ static int mvneta_probe(struct platform_device *pdev)
 	SET_NETDEV_DEV(dev, &pdev->dev);
 
 	pp->id = global_port_id++;
-	pp->rx_offset_correction = NET_SKB_PAD;
+	pp->rx_offset_correction = MVNETA_SKB_HEADROOM;
 
 	/* Obtain access to BM resources if enabled and already initialized */
 	bm_node = of_parse_phandle(dn, "buffer-manager", 0);
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 net-next 6/8] net: mvneta: move header prefetch in mvneta_swbm_rx_frame
  2019-10-09 23:18 [PATCH v2 net-next 0/8] add XDP support to mvneta driver Lorenzo Bianconi
                   ` (4 preceding siblings ...)
  2019-10-09 23:18 ` [PATCH v2 net-next 5/8] net: mvneta: add basic XDP support Lorenzo Bianconi
@ 2019-10-09 23:18 ` Lorenzo Bianconi
  2019-10-09 23:18 ` [PATCH v2 net-next 7/8] net: mvneta: make tx buffer array agnostic Lorenzo Bianconi
  2019-10-09 23:18 ` [PATCH v2 net-next 8/8] net: mvneta: add XDP_TX support Lorenzo Bianconi
  7 siblings, 0 replies; 17+ messages in thread
From: Lorenzo Bianconi @ 2019-10-09 23:18 UTC (permalink / raw)
  To: netdev
  Cc: lorenzo.bianconi, davem, thomas.petazzoni, brouer,
	ilias.apalodimas, matteo.croce, mw

Move the data buffer prefetch in mvneta_swbm_rx_frame after
dma_sync_single_range_for_cpu().

Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/marvell/mvneta.c | 16 ++++++++--------
 1 file changed, 8 insertions(+), 8 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index e2795dddbcaf..91d8cb45104f 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -2009,11 +2009,8 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 	struct net_device *dev = pp->dev;
 	enum dma_data_direction dma_dir;
 	struct xdp_buff xdp = {
-		.data_hard_start = data,
-		.data = data + MVNETA_SKB_HEADROOM + MVNETA_MH_SIZE,
 		.rxq = &rxq->xdp_rxq,
 	};
-	xdp_set_data_meta_invalid(&xdp);
 
 	if (MVNETA_SKB_SIZE(rx_desc->data_size) > PAGE_SIZE) {
 		len = MVNETA_MAX_RX_BUF_SIZE;
@@ -2022,13 +2019,20 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
 		len = rx_desc->data_size;
 		data_len += len - ETH_FCS_LEN;
 	}
-	xdp.data_end = xdp.data + data_len;
 
 	dma_dir = page_pool_get_dma_dir(rxq->page_pool);
 	dma_sync_single_range_for_cpu(dev->dev.parent,
 				      rx_desc->buf_phys_addr, 0,
 				      len, dma_dir);
 
+	/* Prefetch header */
+	prefetch(data);
+
+	xdp.data_hard_start = data;
+	xdp.data = data + MVNETA_SKB_HEADROOM + MVNETA_MH_SIZE;
+	xdp.data_end = xdp.data + data_len;
+	xdp_set_data_meta_invalid(&xdp);
+
 	if (xdp_prog) {
 		u32 ret;
 
@@ -2117,15 +2121,11 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 	/* Fairness NAPI loop */
 	while (done < budget && done < rx_pending) {
 		struct mvneta_rx_desc *rx_desc = mvneta_rxq_next_desc_get(rxq);
-		unsigned char *data;
 		struct page *page;
 		int index;
 
 		index = rx_desc - rxq->descs;
 		page = (struct page *)rxq->buf_virt_addr[index];
-		data = page_address(page);
-		/* Prefetch header */
-		prefetch(data);
 
 		rxq->refill_num++;
 		done++;
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 net-next 7/8] net: mvneta: make tx buffer array agnostic
  2019-10-09 23:18 [PATCH v2 net-next 0/8] add XDP support to mvneta driver Lorenzo Bianconi
                   ` (5 preceding siblings ...)
  2019-10-09 23:18 ` [PATCH v2 net-next 6/8] net: mvneta: move header prefetch in mvneta_swbm_rx_frame Lorenzo Bianconi
@ 2019-10-09 23:18 ` Lorenzo Bianconi
  2019-10-09 23:18 ` [PATCH v2 net-next 8/8] net: mvneta: add XDP_TX support Lorenzo Bianconi
  7 siblings, 0 replies; 17+ messages in thread
From: Lorenzo Bianconi @ 2019-10-09 23:18 UTC (permalink / raw)
  To: netdev
  Cc: lorenzo.bianconi, davem, thomas.petazzoni, brouer,
	ilias.apalodimas, matteo.croce, mw

Allow the tx buffer array to contain both skb and xdp buffers in order to
enable xdp frame recycling when adding XDP_TX verdict support.

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/marvell/mvneta.c | 66 +++++++++++++++++----------
 1 file changed, 43 insertions(+), 23 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 91d8cb45104f..5a21d03e9c7b 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -561,6 +561,20 @@ struct mvneta_rx_desc {
 };
 #endif
 
+enum mvneta_tx_buf_type {
+	MVNETA_TYPE_SKB,
+	MVNETA_TYPE_XDP_TX,
+	MVNETA_TYPE_XDP_NDO,
+};
+
+struct mvneta_tx_buf {
+	enum mvneta_tx_buf_type type;
+	union {
+		struct xdp_frame *xdpf;
+		struct sk_buff *skb;
+	};
+};
+
 struct mvneta_tx_queue {
 	/* Number of this TX queue, in the range 0-7 */
 	u8 id;
@@ -576,8 +590,8 @@ struct mvneta_tx_queue {
 	int tx_stop_threshold;
 	int tx_wake_threshold;
 
-	/* Array of transmitted skb */
-	struct sk_buff **tx_skb;
+	/* Array of transmitted buffers */
+	struct mvneta_tx_buf *buf;
 
 	/* Index of last TX DMA descriptor that was inserted */
 	int txq_put_index;
@@ -1780,14 +1794,9 @@ static void mvneta_txq_bufs_free(struct mvneta_port *pp,
 	int i;
 
 	for (i = 0; i < num; i++) {
+		struct mvneta_tx_buf *buf = &txq->buf[txq->txq_get_index];
 		struct mvneta_tx_desc *tx_desc = txq->descs +
 			txq->txq_get_index;
-		struct sk_buff *skb = txq->tx_skb[txq->txq_get_index];
-
-		if (skb) {
-			bytes_compl += skb->len;
-			pkts_compl++;
-		}
 
 		mvneta_txq_inc_get(txq);
 
@@ -1795,9 +1804,12 @@ static void mvneta_txq_bufs_free(struct mvneta_port *pp,
 			dma_unmap_single(pp->dev->dev.parent,
 					 tx_desc->buf_phys_addr,
 					 tx_desc->data_size, DMA_TO_DEVICE);
-		if (!skb)
+		if (!buf->skb)
 			continue;
-		dev_kfree_skb_any(skb);
+
+		bytes_compl += buf->skb->len;
+		pkts_compl++;
+		dev_kfree_skb_any(buf->skb);
 	}
 
 	netdev_tx_completed_queue(nq, pkts_compl, bytes_compl);
@@ -2318,16 +2330,19 @@ static inline void
 mvneta_tso_put_hdr(struct sk_buff *skb,
 		   struct mvneta_port *pp, struct mvneta_tx_queue *txq)
 {
-	struct mvneta_tx_desc *tx_desc;
 	int hdr_len = skb_transport_offset(skb) + tcp_hdrlen(skb);
+	struct mvneta_tx_buf *buf = &txq->buf[txq->txq_put_index];
+	struct mvneta_tx_desc *tx_desc;
 
-	txq->tx_skb[txq->txq_put_index] = NULL;
 	tx_desc = mvneta_txq_next_desc_get(txq);
 	tx_desc->data_size = hdr_len;
 	tx_desc->command = mvneta_skb_tx_csum(pp, skb);
 	tx_desc->command |= MVNETA_TXD_F_DESC;
 	tx_desc->buf_phys_addr = txq->tso_hdrs_phys +
 				 txq->txq_put_index * TSO_HEADER_SIZE;
+	buf->type = MVNETA_TYPE_SKB;
+	buf->skb = NULL;
+
 	mvneta_txq_inc_put(txq);
 }
 
@@ -2336,6 +2351,7 @@ mvneta_tso_put_data(struct net_device *dev, struct mvneta_tx_queue *txq,
 		    struct sk_buff *skb, char *data, int size,
 		    bool last_tcp, bool is_last)
 {
+	struct mvneta_tx_buf *buf = &txq->buf[txq->txq_put_index];
 	struct mvneta_tx_desc *tx_desc;
 
 	tx_desc = mvneta_txq_next_desc_get(txq);
@@ -2349,7 +2365,8 @@ mvneta_tso_put_data(struct net_device *dev, struct mvneta_tx_queue *txq,
 	}
 
 	tx_desc->command = 0;
-	txq->tx_skb[txq->txq_put_index] = NULL;
+	buf->type = MVNETA_TYPE_SKB;
+	buf->skb = NULL;
 
 	if (last_tcp) {
 		/* last descriptor in the TCP packet */
@@ -2357,7 +2374,7 @@ mvneta_tso_put_data(struct net_device *dev, struct mvneta_tx_queue *txq,
 
 		/* last descriptor in SKB */
 		if (is_last)
-			txq->tx_skb[txq->txq_put_index] = skb;
+			buf->skb = skb;
 	}
 	mvneta_txq_inc_put(txq);
 	return 0;
@@ -2442,6 +2459,7 @@ static int mvneta_tx_frag_process(struct mvneta_port *pp, struct sk_buff *skb,
 	int i, nr_frags = skb_shinfo(skb)->nr_frags;
 
 	for (i = 0; i < nr_frags; i++) {
+		struct mvneta_tx_buf *buf = &txq->buf[txq->txq_put_index];
 		skb_frag_t *frag = &skb_shinfo(skb)->frags[i];
 		void *addr = skb_frag_address(frag);
 
@@ -2461,12 +2479,13 @@ static int mvneta_tx_frag_process(struct mvneta_port *pp, struct sk_buff *skb,
 		if (i == nr_frags - 1) {
 			/* Last descriptor */
 			tx_desc->command = MVNETA_TXD_L_DESC | MVNETA_TXD_Z_PAD;
-			txq->tx_skb[txq->txq_put_index] = skb;
+			buf->skb = skb;
 		} else {
 			/* Descriptor in the middle: Not First, Not Last */
 			tx_desc->command = 0;
-			txq->tx_skb[txq->txq_put_index] = NULL;
+			buf->skb = NULL;
 		}
+		buf->type = MVNETA_TYPE_SKB;
 		mvneta_txq_inc_put(txq);
 	}
 
@@ -2494,6 +2513,7 @@ static netdev_tx_t mvneta_tx(struct sk_buff *skb, struct net_device *dev)
 	struct mvneta_port *pp = netdev_priv(dev);
 	u16 txq_id = skb_get_queue_mapping(skb);
 	struct mvneta_tx_queue *txq = &pp->txqs[txq_id];
+	struct mvneta_tx_buf *buf = &txq->buf[txq->txq_put_index];
 	struct mvneta_tx_desc *tx_desc;
 	int len = skb->len;
 	int frags = 0;
@@ -2526,16 +2546,17 @@ static netdev_tx_t mvneta_tx(struct sk_buff *skb, struct net_device *dev)
 		goto out;
 	}
 
+	buf->type = MVNETA_TYPE_SKB;
 	if (frags == 1) {
 		/* First and Last descriptor */
 		tx_cmd |= MVNETA_TXD_FLZ_DESC;
 		tx_desc->command = tx_cmd;
-		txq->tx_skb[txq->txq_put_index] = skb;
+		buf->skb = skb;
 		mvneta_txq_inc_put(txq);
 	} else {
 		/* First but not Last */
 		tx_cmd |= MVNETA_TXD_F_DESC;
-		txq->tx_skb[txq->txq_put_index] = NULL;
+		buf->skb = NULL;
 		mvneta_txq_inc_put(txq);
 		tx_desc->command = tx_cmd;
 		/* Continue with other skb fragments */
@@ -3121,9 +3142,8 @@ static int mvneta_txq_sw_init(struct mvneta_port *pp,
 
 	txq->last_desc = txq->size - 1;
 
-	txq->tx_skb = kmalloc_array(txq->size, sizeof(*txq->tx_skb),
-				    GFP_KERNEL);
-	if (!txq->tx_skb) {
+	txq->buf = kmalloc_array(txq->size, sizeof(*txq->buf), GFP_KERNEL);
+	if (!txq->buf) {
 		dma_free_coherent(pp->dev->dev.parent,
 				  txq->size * MVNETA_DESC_ALIGNED_SIZE,
 				  txq->descs, txq->descs_phys);
@@ -3135,7 +3155,7 @@ static int mvneta_txq_sw_init(struct mvneta_port *pp,
 					   txq->size * TSO_HEADER_SIZE,
 					   &txq->tso_hdrs_phys, GFP_KERNEL);
 	if (!txq->tso_hdrs) {
-		kfree(txq->tx_skb);
+		kfree(txq->buf);
 		dma_free_coherent(pp->dev->dev.parent,
 				  txq->size * MVNETA_DESC_ALIGNED_SIZE,
 				  txq->descs, txq->descs_phys);
@@ -3188,7 +3208,7 @@ static void mvneta_txq_sw_deinit(struct mvneta_port *pp,
 {
 	struct netdev_queue *nq = netdev_get_tx_queue(pp->dev, txq->id);
 
-	kfree(txq->tx_skb);
+	kfree(txq->buf);
 
 	if (txq->tso_hdrs)
 		dma_free_coherent(pp->dev->dev.parent,
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* [PATCH v2 net-next 8/8] net: mvneta: add XDP_TX support
  2019-10-09 23:18 [PATCH v2 net-next 0/8] add XDP support to mvneta driver Lorenzo Bianconi
                   ` (6 preceding siblings ...)
  2019-10-09 23:18 ` [PATCH v2 net-next 7/8] net: mvneta: make tx buffer array agnostic Lorenzo Bianconi
@ 2019-10-09 23:18 ` Lorenzo Bianconi
  7 siblings, 0 replies; 17+ messages in thread
From: Lorenzo Bianconi @ 2019-10-09 23:18 UTC (permalink / raw)
  To: netdev
  Cc: lorenzo.bianconi, davem, thomas.petazzoni, brouer,
	ilias.apalodimas, matteo.croce, mw

Implement the XDP_TX verdict and the ndo_xdp_xmit net_device_ops function
pointer.
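
A minimal program exercising the new XDP_TX path could look like the classic
"reflector" below (illustrative only, not part of this patch): it swaps the
Ethernet addresses and bounces the frame back out of the same port.

#include <linux/bpf.h>
#include <linux/if_ether.h>

__attribute__((section("xdp"), used))
int xdp_reflect(struct xdp_md *ctx)
{
	void *data = (void *)(long)ctx->data;
	void *data_end = (void *)(long)ctx->data_end;
	struct ethhdr *eth = data;
	unsigned char tmp[ETH_ALEN];

	if (data + sizeof(*eth) > data_end)
		return XDP_PASS;

	__builtin_memcpy(tmp, eth->h_source, ETH_ALEN);
	__builtin_memcpy(eth->h_source, eth->h_dest, ETH_ALEN);
	__builtin_memcpy(eth->h_dest, tmp, ETH_ALEN);

	return XDP_TX;
}

char _license[] __attribute__((section("license"), used)) = "GPL";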

Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
---
 drivers/net/ethernet/marvell/mvneta.c | 126 ++++++++++++++++++++++++--
 1 file changed, 119 insertions(+), 7 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 5a21d03e9c7b..b68f9d03e939 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -1800,16 +1800,19 @@ static void mvneta_txq_bufs_free(struct mvneta_port *pp,
 
 		mvneta_txq_inc_get(txq);
 
-		if (!IS_TSO_HEADER(txq, tx_desc->buf_phys_addr))
+		if (!IS_TSO_HEADER(txq, tx_desc->buf_phys_addr) &&
+		    buf->type != MVNETA_TYPE_XDP_TX)
 			dma_unmap_single(pp->dev->dev.parent,
 					 tx_desc->buf_phys_addr,
 					 tx_desc->data_size, DMA_TO_DEVICE);
-		if (!buf->skb)
-			continue;
-
-		bytes_compl += buf->skb->len;
-		pkts_compl++;
-		dev_kfree_skb_any(buf->skb);
+		if (buf->type == MVNETA_TYPE_SKB && buf->skb) {
+			bytes_compl += buf->skb->len;
+			pkts_compl++;
+			dev_kfree_skb_any(buf->skb);
+		} else if (buf->type == MVNETA_TYPE_XDP_TX ||
+			   buf->type == MVNETA_TYPE_XDP_NDO) {
+			xdp_return_frame(buf->xdpf);
+		}
 	}
 
 	netdev_tx_completed_queue(nq, pkts_compl, bytes_compl);
@@ -1972,6 +1975,109 @@ int mvneta_rx_refill_queue(struct mvneta_port *pp, struct mvneta_rx_queue *rxq)
 	return i;
 }
 
+static int
+mvneta_xdp_submit_frame(struct mvneta_port *pp, struct mvneta_tx_queue *txq,
+			struct xdp_frame *xdpf, bool dma_map)
+{
+	struct mvneta_tx_desc *tx_desc;
+	struct mvneta_tx_buf *buf;
+	dma_addr_t dma_addr;
+
+	if (txq->count >= txq->tx_stop_threshold)
+		return MVNETA_XDP_CONSUMED;
+
+	tx_desc = mvneta_txq_next_desc_get(txq);
+
+	buf = &txq->buf[txq->txq_put_index];
+	if (dma_map) {
+		/* ndo_xdp_xmit */
+		dma_addr = dma_map_single(pp->dev->dev.parent, xdpf->data,
+					  xdpf->len, DMA_TO_DEVICE);
+		if (dma_mapping_error(pp->dev->dev.parent, dma_addr)) {
+			mvneta_txq_desc_put(txq);
+			return MVNETA_XDP_CONSUMED;
+		}
+		buf->type = MVNETA_TYPE_XDP_NDO;
+	} else {
+		struct page *page = virt_to_page(xdpf->data);
+
+		dma_addr = page_pool_get_dma_addr(page) +
+			   pp->rx_offset_correction + MVNETA_MH_SIZE;
+		dma_sync_single_for_device(pp->dev->dev.parent, dma_addr,
+					   xdpf->len, DMA_BIDIRECTIONAL);
+		buf->type = MVNETA_TYPE_XDP_TX;
+	}
+	buf->xdpf = xdpf;
+
+	tx_desc->command = MVNETA_TXD_FLZ_DESC;
+	tx_desc->buf_phys_addr = dma_addr;
+	tx_desc->data_size = xdpf->len;
+
+	mvneta_update_stats(pp, 1, xdpf->len, true);
+	mvneta_txq_inc_put(txq);
+	txq->pending++;
+	txq->count++;
+
+	return MVNETA_XDP_TX;
+}
+
+static int
+mvneta_xdp_xmit_back(struct mvneta_port *pp, struct xdp_buff *xdp)
+{
+	struct xdp_frame *xdpf = convert_to_xdp_frame(xdp);
+	int cpu = smp_processor_id();
+	struct mvneta_tx_queue *txq;
+	struct netdev_queue *nq;
+	u32 ret;
+
+	if (unlikely(!xdpf))
+		return MVNETA_XDP_CONSUMED;
+
+	txq = &pp->txqs[cpu % txq_number];
+	nq = netdev_get_tx_queue(pp->dev, txq->id);
+
+	__netif_tx_lock(nq, cpu);
+	ret = mvneta_xdp_submit_frame(pp, txq, xdpf, false);
+	if (ret == MVNETA_XDP_TX)
+		mvneta_txq_pend_desc_add(pp, txq, 0);
+	__netif_tx_unlock(nq);
+
+	return ret;
+}
+
+static int
+mvneta_xdp_xmit(struct net_device *dev, int num_frame,
+		struct xdp_frame **frames, u32 flags)
+{
+	struct mvneta_port *pp = netdev_priv(dev);
+	int cpu = smp_processor_id();
+	struct mvneta_tx_queue *txq;
+	struct netdev_queue *nq;
+	int i, drops = 0;
+	u32 ret;
+
+	if (unlikely(flags & ~XDP_XMIT_FLAGS_MASK))
+		return -EINVAL;
+
+	txq = &pp->txqs[cpu % txq_number];
+	nq = netdev_get_tx_queue(pp->dev, txq->id);
+
+	__netif_tx_lock(nq, cpu);
+	for (i = 0; i < num_frame; i++) {
+		ret = mvneta_xdp_submit_frame(pp, txq, frames[i], true);
+		if (ret != MVNETA_XDP_TX) {
+			xdp_return_frame_rx_napi(frames[i]);
+			drops++;
+		}
+	}
+
+	if (unlikely(flags & XDP_XMIT_FLUSH))
+		mvneta_txq_pend_desc_add(pp, txq, 0);
+	__netif_tx_unlock(nq);
+
+	return num_frame - drops;
+}
+
 static int
 mvneta_run_xdp(struct mvneta_port *pp, struct bpf_prog *prog,
 	       struct xdp_buff *xdp)
@@ -1994,6 +2100,11 @@ mvneta_run_xdp(struct mvneta_port *pp, struct bpf_prog *prog,
 		}
 		break;
 	}
+	case XDP_TX:
+		ret = mvneta_xdp_xmit_back(pp, xdp);
+		if (ret != MVNETA_XDP_TX)
+			xdp_return_buff(xdp);
+		break;
 	default:
 		bpf_warn_invalid_xdp_action(act);
 		/* fall through */
@@ -4526,6 +4637,7 @@ static const struct net_device_ops mvneta_netdev_ops = {
 	.ndo_get_stats64     = mvneta_get_stats64,
 	.ndo_do_ioctl        = mvneta_ioctl,
 	.ndo_bpf	     = mvneta_xdp,
+	.ndo_xdp_xmit        = mvneta_xdp_xmit,
 };
 
 static const struct ethtool_ops mvneta_eth_tool_ops = {
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 net-next 4/8] net: mvneta: sync dma buffers before refilling hw queues
  2019-10-09 23:18 ` [PATCH v2 net-next 4/8] net: mvneta: sync dma buffers before refilling hw queues Lorenzo Bianconi
@ 2019-10-10  7:08   ` Jesper Dangaard Brouer
  2019-10-10  7:21     ` Ilias Apalodimas
  2019-10-10  9:20     ` Lorenzo Bianconi
  0 siblings, 2 replies; 17+ messages in thread
From: Jesper Dangaard Brouer @ 2019-10-10  7:08 UTC (permalink / raw)
  To: Lorenzo Bianconi
  Cc: netdev, lorenzo.bianconi, davem, thomas.petazzoni,
	ilias.apalodimas, matteo.croce, mw, brouer

On Thu, 10 Oct 2019 01:18:34 +0200
Lorenzo Bianconi <lorenzo@kernel.org> wrote:

> The mvneta driver can run on non-cache-coherent devices, so it is
> necessary to sync DMA buffers before sending them to the device
> in order to avoid memory corruption. This patch introduces a performance
> penalty and it will be necessary to introduce a more sophisticated logic
> in order to avoid the DMA sync as much as we can.

Report with benchmarks here:
 https://github.com/xdp-project/xdp-project/blob/master/areas/arm64/board_espressobin08_bench_xdp.org

We are testing this on an Espressobin board, and do see a huge
performance cost associated with this DMA-sync.  Regardless, we still
want to get this patch merged, to move forward with XDP support for
this driver.

We promised each other (on IRC freenode #xdp) that we will follow up
with a solution/mitigation after this patchset is merged.  There are
several ideas that likely should get separate upstream review.

> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>

Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>

> ---
>  drivers/net/ethernet/marvell/mvneta.c | 4 ++++
>  1 file changed, 4 insertions(+)
> 
> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> index 79a6bac0192b..ba4aa9bbc798 100644
> --- a/drivers/net/ethernet/marvell/mvneta.c
> +++ b/drivers/net/ethernet/marvell/mvneta.c
> @@ -1821,6 +1821,7 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
>  			    struct mvneta_rx_queue *rxq,
>  			    gfp_t gfp_mask)
>  {
> +	enum dma_data_direction dma_dir;
>  	dma_addr_t phys_addr;
>  	struct page *page;
>  
> @@ -1830,6 +1831,9 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
>  		return -ENOMEM;
>  
>  	phys_addr = page_pool_get_dma_addr(page) + pp->rx_offset_correction;
> +	dma_dir = page_pool_get_dma_dir(rxq->page_pool);
> +	dma_sync_single_for_device(pp->dev->dev.parent, phys_addr,
> +				   MVNETA_MAX_RX_BUF_SIZE, dma_dir);
>  	mvneta_rx_desc_fill(rx_desc, phys_addr, page, rxq);
>  
>  	return 0;



-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 net-next 3/8] net: mvneta: rely on build_skb in mvneta_rx_swbm poll routine
  2019-10-09 23:18 ` [PATCH v2 net-next 3/8] net: mvneta: rely on build_skb in mvneta_rx_swbm poll routine Lorenzo Bianconi
@ 2019-10-10  7:16   ` Ilias Apalodimas
  2019-10-10  9:58     ` Lorenzo Bianconi
  0 siblings, 1 reply; 17+ messages in thread
From: Ilias Apalodimas @ 2019-10-10  7:16 UTC (permalink / raw)
  To: Lorenzo Bianconi
  Cc: netdev, lorenzo.bianconi, davem, thomas.petazzoni, brouer,
	matteo.croce, mw

Hi Lorenzo, 

On Thu, Oct 10, 2019 at 01:18:33AM +0200, Lorenzo Bianconi wrote:
> Refactor the mvneta_rx_swbm code introducing the mvneta_swbm_rx_frame and
> mvneta_swbm_add_rx_fragment routines. Rely on build_skb in order to
> allocate the skb, since the previous patch introduced buffer recycling
> using the page_pool API.
> This patch also fixes an issue in the original driver where DMA buffers
> are accessed before the DMA sync.
> 
> Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> ---
>  drivers/net/ethernet/marvell/mvneta.c | 198 ++++++++++++++------------
>  1 file changed, 104 insertions(+), 94 deletions(-)
> 
> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> index 31cecc1ed848..79a6bac0192b 100644
> --- a/drivers/net/ethernet/marvell/mvneta.c
> +++ b/drivers/net/ethernet/marvell/mvneta.c
> @@ -323,6 +323,11 @@
>  	      ETH_HLEN + ETH_FCS_LEN,			     \
>  	      cache_line_size())
>  
> +#define MVNETA_SKB_PAD	(SKB_DATA_ALIGN(sizeof(struct skb_shared_info) + \
> +			 NET_SKB_PAD))
> +#define MVNETA_SKB_SIZE(len)	(SKB_DATA_ALIGN(len) + MVNETA_SKB_PAD)
> +#define MVNETA_MAX_RX_BUF_SIZE	(PAGE_SIZE - MVNETA_SKB_PAD)
> +
>  #define IS_TSO_HEADER(txq, addr) \
>  	((addr >= txq->tso_hdrs_phys) && \
>  	 (addr < txq->tso_hdrs_phys + txq->size * TSO_HEADER_SIZE))
> @@ -646,7 +651,6 @@ static int txq_number = 8;
>  static int rxq_def;
>  
>  static int rx_copybreak __read_mostly = 256;
> -static int rx_header_size __read_mostly = 128;
>  
>  /* HW BM need that each port be identify by a unique ID */
>  static int global_port_id;
> +	if (rxq->left_size > MVNETA_MAX_RX_BUF_SIZE) {

[...]

> +		len = MVNETA_MAX_RX_BUF_SIZE;
> +		data_len = len;
> +	} else {
> +		len = rxq->left_size;
> +		data_len = len - ETH_FCS_LEN;
> +	}
> +	dma_dir = page_pool_get_dma_dir(rxq->page_pool);
> +	dma_sync_single_range_for_cpu(dev->dev.parent,
> +				      rx_desc->buf_phys_addr, 0,
> +				      len, dma_dir);
> +	if (data_len > 0) {
> +		/* refill descriptor with new buffer later */
> +		skb_add_rx_frag(rxq->skb,
> +				skb_shinfo(rxq->skb)->nr_frags,
> +				page, NET_SKB_PAD, data_len,
> +				PAGE_SIZE);
> +
> +		page_pool_release_page(rxq->page_pool, page);
> +		rx_desc->buf_phys_addr = 0;

Shouldn't we unmap and set the buf_phys_addr to 0 regardless of the data_len?

> +	}
> +	rxq->left_size -= len;
> +}
> +
>  		mvneta_rxq_buf_size_set(pp, rxq, PAGE_SIZE < SZ_64K ?

[...]

> -					PAGE_SIZE :
> +					MVNETA_MAX_RX_BUF_SIZE :
>  					MVNETA_RX_BUF_SIZE(pp->pkt_size));
>  		mvneta_rxq_bm_disable(pp, rxq);
>  		mvneta_rxq_fill(pp, rxq, rxq->size);
> @@ -4656,7 +4666,7 @@ static int mvneta_probe(struct platform_device *pdev)
>  	SET_NETDEV_DEV(dev, &pdev->dev);
>  
>  	pp->id = global_port_id++;
> -	pp->rx_offset_correction = 0; /* not relevant for SW BM */
> +	pp->rx_offset_correction = NET_SKB_PAD;
>  
>  	/* Obtain access to BM resources if enabled and already initialized */
>  	bm_node = of_parse_phandle(dn, "buffer-manager", 0);
> -- 
> 2.21.0
> 

Regards
/Ilias

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 net-next 4/8] net: mvneta: sync dma buffers before refilling hw queues
  2019-10-10  7:08   ` Jesper Dangaard Brouer
@ 2019-10-10  7:21     ` Ilias Apalodimas
  2019-10-10  9:18       ` Lorenzo Bianconi
  2019-10-10  9:20     ` Lorenzo Bianconi
  1 sibling, 1 reply; 17+ messages in thread
From: Ilias Apalodimas @ 2019-10-10  7:21 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: Lorenzo Bianconi, netdev, lorenzo.bianconi, davem,
	thomas.petazzoni, matteo.croce, mw

Hi Lorenzo, Jesper,

On Thu, Oct 10, 2019 at 09:08:31AM +0200, Jesper Dangaard Brouer wrote:
> On Thu, 10 Oct 2019 01:18:34 +0200
> Lorenzo Bianconi <lorenzo@kernel.org> wrote:
> 
> > The mvneta driver can run on non-cache-coherent devices, so it is
> > necessary to sync DMA buffers before sending them to the device
> > in order to avoid memory corruption. This patch introduces a performance
> > penalty and it will be necessary to introduce a more sophisticated logic
> > in order to avoid the DMA sync as much as we can.
> 
> Report with benchmarks here:
>  https://github.com/xdp-project/xdp-project/blob/master/areas/arm64/board_espressobin08_bench_xdp.org
> 
> We are testing this on an Espressobin board, and do see a huge
> performance cost associated with this DMA-sync. Regardless, we still
> want to get this patch merged, to move forward with XDP support for
> this driver.
> 
> We promised each other (on IRC freenode #xdp) that we will follow up
> with a solution/mitigation after this patchset is merged.  There are
> several ideas that likely should get separate upstream review.

I think mentioning that the patch *introduces* a performance penalty is a bit
misleading. 
The dma sync does have a performance penalty but it was always there. 
The initial driver was mapping the DMA with DMA_FROM_DEVICE, which implies
syncing as well. In page_pool we do not explicitly sync buffers on allocation
and leave it up to the driver writer (and allow him some tricks to avoid that),
thus this patch is needed.
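
To make that split of responsibilities concrete, here is a minimal sketch of
the kind of page_pool setup involved (not a quote from patch 2/8, so treat the
exact field values as assumptions). With PP_FLAG_DMA_MAP the pool only maps
the pages, it never syncs them, so syncing before handing a buffer back to
the NIC is the driver's job:

	struct page_pool_params pp_params = {
		.order = 0,
		/* the pool dma_map()s pages for us, but never syncs them */
		.flags = PP_FLAG_DMA_MAP,
		.pool_size = rxq->size,
		.nid = cpu_to_node(0),
		.dev = pp->dev->dev.parent,
		.dma_dir = DMA_FROM_DEVICE,
	};

	rxq->page_pool = page_pool_create(&pp_params);
	if (IS_ERR(rxq->page_pool))
		return PTR_ERR(rxq->page_pool);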

In any case what Jesper mentions is correct, we do have a plan :)

> 
> > Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> 
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> 
> > ---
> >  drivers/net/ethernet/marvell/mvneta.c | 4 ++++
> >  1 file changed, 4 insertions(+)
> > 
> > diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> > index 79a6bac0192b..ba4aa9bbc798 100644
> > --- a/drivers/net/ethernet/marvell/mvneta.c
> > +++ b/drivers/net/ethernet/marvell/mvneta.c
> > @@ -1821,6 +1821,7 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
> >  			    struct mvneta_rx_queue *rxq,
> >  			    gfp_t gfp_mask)
> >  {
> > +	enum dma_data_direction dma_dir;
> >  	dma_addr_t phys_addr;
> >  	struct page *page;
> >  
> > @@ -1830,6 +1831,9 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
> >  		return -ENOMEM;
> >  
> >  	phys_addr = page_pool_get_dma_addr(page) + pp->rx_offset_correction;
> > +	dma_dir = page_pool_get_dma_dir(rxq->page_pool);
> > +	dma_sync_single_for_device(pp->dev->dev.parent, phys_addr,
> > +				   MVNETA_MAX_RX_BUF_SIZE, dma_dir);
> >  	mvneta_rx_desc_fill(rx_desc, phys_addr, page, rxq);
> >  
> >  	return 0;
> 
> 
> 
> -- 
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer

Thanks!
/Ilias

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 net-next 5/8] net: mvneta: add basic XDP support
  2019-10-09 23:18 ` [PATCH v2 net-next 5/8] net: mvneta: add basic XDP support Lorenzo Bianconi
@ 2019-10-10  8:50   ` Jesper Dangaard Brouer
  2019-10-10  9:57     ` Lorenzo Bianconi
  0 siblings, 1 reply; 17+ messages in thread
From: Jesper Dangaard Brouer @ 2019-10-10  8:50 UTC (permalink / raw)
  To: Lorenzo Bianconi
  Cc: netdev, lorenzo.bianconi, davem, thomas.petazzoni,
	ilias.apalodimas, matteo.croce, mw, brouer,
	Björn Töpel

On Thu, 10 Oct 2019 01:18:35 +0200
Lorenzo Bianconi <lorenzo@kernel.org> wrote:

> Add basic XDP support to mvneta driver for devices that rely on software
> buffer management. Currently supported verdicts are:
> - XDP_DROP
> - XDP_PASS
> - XDP_REDIRECT
> - XDP_ABORTED
> 
> Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> ---
>  drivers/net/ethernet/marvell/mvneta.c | 144 ++++++++++++++++++++++++--
>  1 file changed, 135 insertions(+), 9 deletions(-)
> 
> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> index ba4aa9bbc798..e2795dddbcaf 100644
> --- a/drivers/net/ethernet/marvell/mvneta.c
> +++ b/drivers/net/ethernet/marvell/mvneta.c
[...]

> @@ -1950,16 +1960,60 @@ int mvneta_rx_refill_queue(struct mvneta_port *pp, struct mvneta_rx_queue *rxq)
>  	return i;
>  }
>  
> +static int
> +mvneta_run_xdp(struct mvneta_port *pp, struct bpf_prog *prog,
> +	       struct xdp_buff *xdp)
> +{
> +	u32 ret, act = bpf_prog_run_xdp(prog, xdp);
> +
> +	switch (act) {
> +	case XDP_PASS:
> +		ret = MVNETA_XDP_PASS;
> +		break;
> +	case XDP_REDIRECT: {
> +		int err;
> +
> +		err = xdp_do_redirect(pp->dev, xdp, prog);
> +		if (err) {
> +			ret = MVNETA_XDP_CONSUMED;
> +			xdp_return_buff(xdp);

> +		} else {
> +			ret = MVNETA_XDP_REDIR;
> +		}
> +		break;
> +	}
> +	default:
> +		bpf_warn_invalid_xdp_action(act);
> +		/* fall through */
> +	case XDP_ABORTED:
> +		trace_xdp_exception(pp->dev, prog, act);
> +		/* fall through */
> +	case XDP_DROP:
> +		ret = MVNETA_XDP_CONSUMED;
> +		xdp_return_buff(xdp);

Using xdp_return_buff() here is actually not optimal for performance.
I can see that others (socionext/netsec.c and AF_XDP) also use
xdp_return_buff().

I do think code-wise it looks a lot nicer to use xdp_return_buff(), so
maybe we should optimize xdp_return_buff() instead of using
page_pool_recycle_direct() here?  (That would also help AF_XDP?)

The problem with xdp_return_buff() is that it does a "full" lookup from
the mem.id (xdp_buff->xdp_rxq_info->mem.id) to find the "allocator"
pointer, in this case the page_pool pointer.  Here in the driver we
already have access to the stable allocator page_pool pointer via
struct mvneta_rx_queue *rxq->page_pool.
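
To illustrate, a rough and unbenchmarked sketch; it assumes mvneta_run_xdp()
also gets the rxq (or at least rxq->page_pool) passed down as an extra
argument:

	case XDP_DROP:
		ret = MVNETA_XDP_CONSUMED;
		/* we already hold the stable page_pool pointer, so skip the
		 * mem.id -> allocator lookup done by xdp_return_buff() and
		 * recycle straight into the pool cache (safe here, as we
		 * run in NAPI/softirq context)
		 */
		page_pool_recycle_direct(rxq->page_pool,
					 virt_to_head_page(xdp->data));
		break;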


> +		break;
> +	}
> +
> +	return ret;
> +}
> +
>  static int
>  mvneta_swbm_rx_frame(struct mvneta_port *pp,
>  		     struct mvneta_rx_desc *rx_desc,
>  		     struct mvneta_rx_queue *rxq,
> -		     struct page *page)
> +		     struct bpf_prog *xdp_prog,
> +		     struct page *page, u32 *xdp_ret)
>  {
>  	unsigned char *data = page_address(page);
>  	int data_len = -MVNETA_MH_SIZE, len;
>  	struct net_device *dev = pp->dev;
>  	enum dma_data_direction dma_dir;
> +	struct xdp_buff xdp = {
> +		.data_hard_start = data,
> +		.data = data + MVNETA_SKB_HEADROOM + MVNETA_MH_SIZE,
> +		.rxq = &rxq->xdp_rxq,
> +	};

Creating the struct xdp_buff (on call-stack) this way is not optimal
for performance (IMHO it looks nicer code-wise, but too bad).

This kind of initialization of only some of the members causes GCC to
zero out the other members (I observed this on Intel, which uses an
expensive rep-stos operation).  Thus, this causes extra unnecessary
memory writes.

A further optimization is that you can avoid re-assigning:
 .rxq = &rxq->xdp_rxq
for each frame, as this actually stays the same for all the frames in
this NAPI cycle.  Do this by allocating the xdp_buff on the caller's
stack and passing xdp_buff down as a pointer.
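
Roughly like this (only a sketch; it assumes mvneta_swbm_rx_frame() grows a
struct xdp_buff *xdp argument):

	/* in mvneta_rx_swbm(), once per NAPI poll */
	struct xdp_buff xdp_buf;

	xdp_buf.rxq = &rxq->xdp_rxq;

	/* in mvneta_swbm_rx_frame(), per frame, via the xdp pointer;
	 * assign only the members that change, so nothing else gets
	 * zeroed behind our back
	 */
	xdp->data_hard_start = data;
	xdp->data = data + MVNETA_SKB_HEADROOM + MVNETA_MH_SIZE;
	xdp_set_data_meta_invalid(xdp);
	/* xdp->data_end is set further down, once data_len is known */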


> +	xdp_set_data_meta_invalid(&xdp);
>  
>  	if (MVNETA_SKB_SIZE(rx_desc->data_size) > PAGE_SIZE) {
>  		len = MVNETA_MAX_RX_BUF_SIZE;
> @@ -1968,13 +2022,27 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
>  		len = rx_desc->data_size;
>  		data_len += len - ETH_FCS_LEN;
>  	}
> +	xdp.data_end = xdp.data + data_len;
>  
>  	dma_dir = page_pool_get_dma_dir(rxq->page_pool);
>  	dma_sync_single_range_for_cpu(dev->dev.parent,
>  				      rx_desc->buf_phys_addr, 0,
>  				      len, dma_dir);
>  
> -	rxq->skb = build_skb(data, PAGE_SIZE);
> +	if (xdp_prog) {
> +		u32 ret;
> +
> +		ret = mvneta_run_xdp(pp, xdp_prog, &xdp);
> +		if (ret != MVNETA_XDP_PASS) {
> +			mvneta_update_stats(pp, 1, xdp.data_end - xdp.data,
> +					    false);
> +			rx_desc->buf_phys_addr = 0;
> +			*xdp_ret |= ret;
> +			return ret;
> +		}
> +	}
> +
> +	rxq->skb = build_skb(xdp.data_hard_start, PAGE_SIZE);
>  	if (unlikely(!rxq->skb)) {
>  		netdev_err(dev,
>  			   "Can't allocate skb on queue %d\n",
[...]

-- 
Best regards,
  Jesper Dangaard Brouer
  MSc.CS, Principal Kernel Engineer at Red Hat
  LinkedIn: http://www.linkedin.com/in/brouer

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 net-next 4/8] net: mvneta: sync dma buffers before refilling hw queues
  2019-10-10  7:21     ` Ilias Apalodimas
@ 2019-10-10  9:18       ` Lorenzo Bianconi
  0 siblings, 0 replies; 17+ messages in thread
From: Lorenzo Bianconi @ 2019-10-10  9:18 UTC (permalink / raw)
  To: Ilias Apalodimas
  Cc: Jesper Dangaard Brouer, netdev, lorenzo.bianconi, davem,
	thomas.petazzoni, matteo.croce, mw

> Hi Lorenzo, Jesper,
> 
> On Thu, Oct 10, 2019 at 09:08:31AM +0200, Jesper Dangaard Brouer wrote:
> > On Thu, 10 Oct 2019 01:18:34 +0200
> > Lorenzo Bianconi <lorenzo@kernel.org> wrote:
> > 
> > > mvneta driver can run on non-cache-coherent devices, so it is
> > > necessary to sync dma buffers before sending them to the device
> > > in order to avoid memory corruption. This patch introduces a performance
> > > penalty and it will be necessary to introduce more sophisticated logic
> > > in order to avoid the dma sync as much as we can
> > 
> > Report with benchmarks here:
> >  https://github.com/xdp-project/xdp-project/blob/master/areas/arm64/board_espressobin08_bench_xdp.org
> > 
> > We are testing this on an Espressobin board, and do see a huge
> > performance cost associated with this DMA-sync. Regardless, we still
> > want to get this patch merged, to move forward with XDP support for
> > this driver.
> > 
> > We promised each other (on IRC freenode #xdp) that we will follow up
> > with a solution/mitigation after this patchset is merged.  There are
> > several ideas that likely should get separate upstream review.
> 
> I think mentioning that the patch *introduces* a performance penalty is a bit
> misleading. 
> The dma sync does have a performance penalty but it was always there. 
> The initial driver was mapping the DMA with DMA_FROM_DEVICE, which implies
> syncing as well. In page_pool we do not explicitly sync buffers on allocation
> and leave it up to the driver writer (and allow him some tricks to avoid that),
> thus this patch is needed.

Reviewing the commit log I definitely agree, I will rewrite it in v3. Thx

Regards,
Lorenzo

> 
> In any case what Jesper mentions is correct, we do have a plan :)
> 
> > 
> > > Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> > 
> > Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> > 
> > > ---
> > >  drivers/net/ethernet/marvell/mvneta.c | 4 ++++
> > >  1 file changed, 4 insertions(+)
> > > 
> > > diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> > > index 79a6bac0192b..ba4aa9bbc798 100644
> > > --- a/drivers/net/ethernet/marvell/mvneta.c
> > > +++ b/drivers/net/ethernet/marvell/mvneta.c
> > > @@ -1821,6 +1821,7 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
> > >  			    struct mvneta_rx_queue *rxq,
> > >  			    gfp_t gfp_mask)
> > >  {
> > > +	enum dma_data_direction dma_dir;
> > >  	dma_addr_t phys_addr;
> > >  	struct page *page;
> > >  
> > > @@ -1830,6 +1831,9 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
> > >  		return -ENOMEM;
> > >  
> > >  	phys_addr = page_pool_get_dma_addr(page) + pp->rx_offset_correction;
> > > +	dma_dir = page_pool_get_dma_dir(rxq->page_pool);
> > > +	dma_sync_single_for_device(pp->dev->dev.parent, phys_addr,
> > > +				   MVNETA_MAX_RX_BUF_SIZE, dma_dir);
> > >  	mvneta_rx_desc_fill(rx_desc, phys_addr, page, rxq);
> > >  
> > >  	return 0;
> > 
> > 
> > 
> > -- 
> > Best regards,
> >   Jesper Dangaard Brouer
> >   MSc.CS, Principal Kernel Engineer at Red Hat
> >   LinkedIn: http://www.linkedin.com/in/brouer
> 
> Thanks!
> /Ilias

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 net-next 4/8] net: mvneta: sync dma buffers before refilling hw queues
  2019-10-10  7:08   ` Jesper Dangaard Brouer
  2019-10-10  7:21     ` Ilias Apalodimas
@ 2019-10-10  9:20     ` Lorenzo Bianconi
  1 sibling, 0 replies; 17+ messages in thread
From: Lorenzo Bianconi @ 2019-10-10  9:20 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: netdev, lorenzo.bianconi, davem, thomas.petazzoni,
	ilias.apalodimas, matteo.croce, mw

> On Thu, 10 Oct 2019 01:18:34 +0200
> Lorenzo Bianconi <lorenzo@kernel.org> wrote:
> 
> > mvneta driver can run on non-cache-coherent devices, so it is
> > necessary to sync dma buffers before sending them to the device
> > in order to avoid memory corruption. This patch introduces a performance
> > penalty and it will be necessary to introduce more sophisticated logic
> > in order to avoid the dma sync as much as we can
> 
> Report with benchmarks here:
>  https://github.com/xdp-project/xdp-project/blob/master/areas/arm64/board_espressobin08_bench_xdp.org

Thx a lot Jesper for the detailed report, I will include this info in the commit log.

Regards,
Lorenzo

> 
> We are testing this on an Espressobin board, and do see a huge
> performance cost associated with this DMA-sync. Regardless, we still
> want to get this patch merged, to move forward with XDP support for
> this driver.
> 
> We promised each other (on IRC freenode #xdp) that we will follow up
> with a solution/mitigation after this patchset is merged.  There are
> several ideas that likely should get separate upstream review.
> 
> > Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> 
> Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> 
> > ---
> >  drivers/net/ethernet/marvell/mvneta.c | 4 ++++
> >  1 file changed, 4 insertions(+)
> > 
> > diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> > index 79a6bac0192b..ba4aa9bbc798 100644
> > --- a/drivers/net/ethernet/marvell/mvneta.c
> > +++ b/drivers/net/ethernet/marvell/mvneta.c
> > @@ -1821,6 +1821,7 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
> >  			    struct mvneta_rx_queue *rxq,
> >  			    gfp_t gfp_mask)
> >  {
> > +	enum dma_data_direction dma_dir;
> >  	dma_addr_t phys_addr;
> >  	struct page *page;
> >  
> > @@ -1830,6 +1831,9 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
> >  		return -ENOMEM;
> >  
> >  	phys_addr = page_pool_get_dma_addr(page) + pp->rx_offset_correction;
> > +	dma_dir = page_pool_get_dma_dir(rxq->page_pool);
> > +	dma_sync_single_for_device(pp->dev->dev.parent, phys_addr,
> > +				   MVNETA_MAX_RX_BUF_SIZE, dma_dir);
> >  	mvneta_rx_desc_fill(rx_desc, phys_addr, page, rxq);
> >  
> >  	return 0;
> 
> 
> 
> -- 
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 net-next 5/8] net: mvneta: add basic XDP support
  2019-10-10  8:50   ` Jesper Dangaard Brouer
@ 2019-10-10  9:57     ` Lorenzo Bianconi
  0 siblings, 0 replies; 17+ messages in thread
From: Lorenzo Bianconi @ 2019-10-10  9:57 UTC (permalink / raw)
  To: Jesper Dangaard Brouer
  Cc: netdev, lorenzo.bianconi, davem, thomas.petazzoni,
	ilias.apalodimas, matteo.croce, mw, Björn Töpel

> On Thu, 10 Oct 2019 01:18:35 +0200
> Lorenzo Bianconi <lorenzo@kernel.org> wrote:
> 
> > Add basic XDP support to mvneta driver for devices that rely on software
> > buffer management. Currently supported verdicts are:
> > - XDP_DROP
> > - XDP_PASS
> > - XDP_REDIRECT
> > - XDP_ABORTED
> > 
> > Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> > ---
> >  drivers/net/ethernet/marvell/mvneta.c | 144 ++++++++++++++++++++++++--
> >  1 file changed, 135 insertions(+), 9 deletions(-)
> > 
> > diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> > index ba4aa9bbc798..e2795dddbcaf 100644
> > --- a/drivers/net/ethernet/marvell/mvneta.c
> > +++ b/drivers/net/ethernet/marvell/mvneta.c
> [...]
> 
> > @@ -1950,16 +1960,60 @@ int mvneta_rx_refill_queue(struct mvneta_port *pp, struct mvneta_rx_queue *rxq)
> >  	return i;
> >  }
> >  
> > +static int
> > +mvneta_run_xdp(struct mvneta_port *pp, struct bpf_prog *prog,
> > +	       struct xdp_buff *xdp)
> > +{
> > +	u32 ret, act = bpf_prog_run_xdp(prog, xdp);
> > +
> > +	switch (act) {
> > +	case XDP_PASS:
> > +		ret = MVNETA_XDP_PASS;
> > +		break;
> > +	case XDP_REDIRECT: {
> > +		int err;
> > +
> > +		err = xdp_do_redirect(pp->dev, xdp, prog);
> > +		if (err) {
> > +			ret = MVNETA_XDP_CONSUMED;
> > +			xdp_return_buff(xdp);
> 
> > +		} else {
> > +			ret = MVNETA_XDP_REDIR;
> > +		}
> > +		break;
> > +	}
> > +	default:
> > +		bpf_warn_invalid_xdp_action(act);
> > +		/* fall through */
> > +	case XDP_ABORTED:
> > +		trace_xdp_exception(pp->dev, prog, act);
> > +		/* fall through */
> > +	case XDP_DROP:
> > +		ret = MVNETA_XDP_CONSUMED;
> > +		xdp_return_buff(xdp);
> 
> Using xdp_return_buff() here is actually not optimal for performance.
> I can see that others (socionext/netsec.c and AF_XDP) also use
> xdp_return_buff().
> 
> I do think code-wise it looks a lot nicer to use xdp_return_buff(), so
> maybe we should optimize xdp_return_buff() instead of using
> page_pool_recycle_direct() here?  (That would also help AF_XDP?)
> 
> The problem with xdp_return_buff() is that it does a "full" lookup from
> the mem.id (xdp_buff->xdp_rxq_info->mem.id) to find the "allocator"
> pointer, in this case the page_pool pointer.  Here in the driver we
> already have access to the stable allocator page_pool pointer via
> struct mvneta_rx_queue *rxq->page_pool.

ack, right. Thx for pointing it out. I will fix it in v3

> 
> 
> > +		break;
> > +	}
> > +
> > +	return ret;
> > +}
> > +
> >  static int
> >  mvneta_swbm_rx_frame(struct mvneta_port *pp,
> >  		     struct mvneta_rx_desc *rx_desc,
> >  		     struct mvneta_rx_queue *rxq,
> > -		     struct page *page)
> > +		     struct bpf_prog *xdp_prog,
> > +		     struct page *page, u32 *xdp_ret)
> >  {
> >  	unsigned char *data = page_address(page);
> >  	int data_len = -MVNETA_MH_SIZE, len;
> >  	struct net_device *dev = pp->dev;
> >  	enum dma_data_direction dma_dir;
> > +	struct xdp_buff xdp = {
> > +		.data_hard_start = data,
> > +		.data = data + MVNETA_SKB_HEADROOM + MVNETA_MH_SIZE,
> > +		.rxq = &rxq->xdp_rxq,
> > +	};
> 
> Creating the struct xdp_buff (on call-stack) this way is not optimal
> for performance (IMHO it looks nicer code-wise, but too bad).
> 
> This kind of initialization of only some of the members causes GCC to
> zero out the other members (I observed this on Intel, which uses an
> expensive rep-stos operation).  Thus, this causes extra unnecessary
> memory writes.
> 
> A further optimization is that you can avoid re-assigning:
>  .rxq = &rxq->xdp_rxq
> for each frame, as this actually stays the same for all the frames in
> this NAPI cycle.  Do this by allocating the xdp_buff on the caller's
> stack and passing xdp_buff down as a pointer.

ack, will fix it in v3.

Regards,
Lorenzo

> 
> 
> > +	xdp_set_data_meta_invalid(&xdp);
> >  
> >  	if (MVNETA_SKB_SIZE(rx_desc->data_size) > PAGE_SIZE) {
> >  		len = MVNETA_MAX_RX_BUF_SIZE;
> > @@ -1968,13 +2022,27 @@ mvneta_swbm_rx_frame(struct mvneta_port *pp,
> >  		len = rx_desc->data_size;
> >  		data_len += len - ETH_FCS_LEN;
> >  	}
> > +	xdp.data_end = xdp.data + data_len;
> >  
> >  	dma_dir = page_pool_get_dma_dir(rxq->page_pool);
> >  	dma_sync_single_range_for_cpu(dev->dev.parent,
> >  				      rx_desc->buf_phys_addr, 0,
> >  				      len, dma_dir);
> >  
> > -	rxq->skb = build_skb(data, PAGE_SIZE);
> > +	if (xdp_prog) {
> > +		u32 ret;
> > +
> > +		ret = mvneta_run_xdp(pp, xdp_prog, &xdp);
> > +		if (ret != MVNETA_XDP_PASS) {
> > +			mvneta_update_stats(pp, 1, xdp.data_end - xdp.data,
> > +					    false);
> > +			rx_desc->buf_phys_addr = 0;
> > +			*xdp_ret |= ret;
> > +			return ret;
> > +		}
> > +	}
> > +
> > +	rxq->skb = build_skb(xdp.data_hard_start, PAGE_SIZE);
> >  	if (unlikely(!rxq->skb)) {
> >  		netdev_err(dev,
> >  			   "Can't allocate skb on queue %d\n",
> [...]
> 
> -- 
> Best regards,
>   Jesper Dangaard Brouer
>   MSc.CS, Principal Kernel Engineer at Red Hat
>   LinkedIn: http://www.linkedin.com/in/brouer

^ permalink raw reply	[flat|nested] 17+ messages in thread

* Re: [PATCH v2 net-next 3/8] net: mvneta: rely on build_skb in mvneta_rx_swbm poll routine
  2019-10-10  7:16   ` Ilias Apalodimas
@ 2019-10-10  9:58     ` Lorenzo Bianconi
  0 siblings, 0 replies; 17+ messages in thread
From: Lorenzo Bianconi @ 2019-10-10  9:58 UTC (permalink / raw)
  To: Ilias Apalodimas
  Cc: netdev, lorenzo.bianconi, davem, thomas.petazzoni, brouer,
	matteo.croce, mw

> Hi Lorenzo, 
> 
> On Thu, Oct 10, 2019 at 01:18:33AM +0200, Lorenzo Bianconi wrote:
> > Refactor mvneta_rx_swbm code introducing mvneta_swbm_rx_frame and
> > mvneta_swbm_add_rx_fragment routines. Rely on build_skb in order to
> > allocate the skb, since the previous patch introduced buffer recycling
> > using the page_pool API.
> > This patch also fixes an issue in the original driver where dma buffers
> > are accessed before the dma sync
> > 
> > Signed-off-by: Ilias Apalodimas <ilias.apalodimas@linaro.org>
> > Signed-off-by: Jesper Dangaard Brouer <brouer@redhat.com>
> > Signed-off-by: Lorenzo Bianconi <lorenzo@kernel.org>
> > ---
> >  drivers/net/ethernet/marvell/mvneta.c | 198 ++++++++++++++------------
> >  1 file changed, 104 insertions(+), 94 deletions(-)
> > 
> > diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> > index 31cecc1ed848..79a6bac0192b 100644
> > --- a/drivers/net/ethernet/marvell/mvneta.c
> > +++ b/drivers/net/ethernet/marvell/mvneta.c
> > @@ -323,6 +323,11 @@
> >  	      ETH_HLEN + ETH_FCS_LEN,			     \
> >  	      cache_line_size())
> >  
> > +#define MVNETA_SKB_PAD	(SKB_DATA_ALIGN(sizeof(struct skb_shared_info) + \
> > +			 NET_SKB_PAD))
> > +#define MVNETA_SKB_SIZE(len)	(SKB_DATA_ALIGN(len) + MVNETA_SKB_PAD)
> > +#define MVNETA_MAX_RX_BUF_SIZE	(PAGE_SIZE - MVNETA_SKB_PAD)
> > +
> >  #define IS_TSO_HEADER(txq, addr) \
> >  	((addr >= txq->tso_hdrs_phys) && \
> >  	 (addr < txq->tso_hdrs_phys + txq->size * TSO_HEADER_SIZE))
> > @@ -646,7 +651,6 @@ static int txq_number = 8;
> >  static int rxq_def;
> >  
> >  static int rx_copybreak __read_mostly = 256;
> > -static int rx_header_size __read_mostly = 128;
> >  
> >  /* HW BM need that each port be identify by a unique ID */
> >  static int global_port_id;
> > +	if (rxq->left_size > MVNETA_MAX_RX_BUF_SIZE) {
> 
> [...]
> 
> > +		len = MVNETA_MAX_RX_BUF_SIZE;
> > +		data_len = len;
> > +	} else {
> > +		len = rxq->left_size;
> > +		data_len = len - ETH_FCS_LEN;
> > +	}
> > +	dma_dir = page_pool_get_dma_dir(rxq->page_pool);
> > +	dma_sync_single_range_for_cpu(dev->dev.parent,
> > +				      rx_desc->buf_phys_addr, 0,
> > +				      len, dma_dir);
> > +	if (data_len > 0) {
> > +		/* refill descriptor with new buffer later */
> > +		skb_add_rx_frag(rxq->skb,
> > +				skb_shinfo(rxq->skb)->nr_frags,
> > +				page, NET_SKB_PAD, data_len,
> > +				PAGE_SIZE);
> > +
> > +		page_pool_release_page(rxq->page_pool, page);
> > +		rx_desc->buf_phys_addr = 0;
> 
> Shouldn't we unmap and set the buf_phys_addr to 0 regardless of the data_len?

ack, right. I will fix it in v3.

Regards,
Lorenzo

> 
> > +	}
> > +	rxq->left_size -= len;
> > +}
> > +
> >  		mvneta_rxq_buf_size_set(pp, rxq, PAGE_SIZE < SZ_64K ?
> 
> [...]
> 
> > -					PAGE_SIZE :
> > +					MVNETA_MAX_RX_BUF_SIZE :
> >  					MVNETA_RX_BUF_SIZE(pp->pkt_size));
> >  		mvneta_rxq_bm_disable(pp, rxq);
> >  		mvneta_rxq_fill(pp, rxq, rxq->size);
> > @@ -4656,7 +4666,7 @@ static int mvneta_probe(struct platform_device *pdev)
> >  	SET_NETDEV_DEV(dev, &pdev->dev);
> >  
> >  	pp->id = global_port_id++;
> > -	pp->rx_offset_correction = 0; /* not relevant for SW BM */
> > +	pp->rx_offset_correction = NET_SKB_PAD;
> >  
> >  	/* Obtain access to BM resources if enabled and already initialized */
> >  	bm_node = of_parse_phandle(dn, "buffer-manager", 0);
> > -- 
> > 2.21.0
> > 
> 
> Regards
> /Ilias

^ permalink raw reply	[flat|nested] 17+ messages in thread

end of thread, other threads:[~2019-10-10  9:58 UTC | newest]

Thread overview: 17+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-10-09 23:18 [PATCH v2 net-next 0/8] add XDP support to mvneta driver Lorenzo Bianconi
2019-10-09 23:18 ` [PATCH v2 net-next 1/8] net: mvneta: introduce mvneta_update_stats routine Lorenzo Bianconi
2019-10-09 23:18 ` [PATCH v2 net-next 2/8] net: mvneta: introduce page pool API for sw buffer manager Lorenzo Bianconi
2019-10-09 23:18 ` [PATCH v2 net-next 3/8] net: mvneta: rely on build_skb in mvneta_rx_swbm poll routine Lorenzo Bianconi
2019-10-10  7:16   ` Ilias Apalodimas
2019-10-10  9:58     ` Lorenzo Bianconi
2019-10-09 23:18 ` [PATCH v2 net-next 4/8] net: mvneta: sync dma buffers before refilling hw queues Lorenzo Bianconi
2019-10-10  7:08   ` Jesper Dangaard Brouer
2019-10-10  7:21     ` Ilias Apalodimas
2019-10-10  9:18       ` Lorenzo Bianconi
2019-10-10  9:20     ` Lorenzo Bianconi
2019-10-09 23:18 ` [PATCH v2 net-next 5/8] net: mvneta: add basic XDP support Lorenzo Bianconi
2019-10-10  8:50   ` Jesper Dangaard Brouer
2019-10-10  9:57     ` Lorenzo Bianconi
2019-10-09 23:18 ` [PATCH v2 net-next 6/8] net: mvneta: move header prefetch in mvneta_swbm_rx_frame Lorenzo Bianconi
2019-10-09 23:18 ` [PATCH v2 net-next 7/8] net: mvneta: make tx buffer array agnostic Lorenzo Bianconi
2019-10-09 23:18 ` [PATCH v2 net-next 8/8] net: mvneta: add XDP_TX support Lorenzo Bianconi
