linux-kernel.vger.kernel.org archive mirror
* [PATCH net-next 0/6] Few improvements on mvneta
@ 2018-07-06 13:19 Gregory CLEMENT
  2018-07-06 13:19 ` [PATCH net-next 1/6] net: mvneta: remove data pointer usage from device_node structure Gregory CLEMENT
                   ` (5 more replies)
  0 siblings, 6 replies; 10+ messages in thread
From: Gregory CLEMENT @ 2018-07-06 13:19 UTC (permalink / raw)
  To: David S. Miller, linux-kernel, netdev
  Cc: Thomas Petazzoni, linux-arm-kernel, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, Gregory CLEMENT, Yelena Krivosheev,
	Nadav Haklai, Marcin Wojtas, Dmitri Epshtein, Antoine Tenart,
	Miquèl Raynal, Maxime Chevallier

Hello,

This series brings some improvements for the mvneta driver.

The main one is the last patch, which improves the driver's behavior
on systems with low memory.

Patches 2 and 4 also help improve performance, while the three other
patches are mostly cleanups.

The series has been tested on ARM32 (Clearfog) and ARM64
(EspressoBin), with and without hardware buffer management.

Gregory

Gregory CLEMENT (2):
  net: mvneta: remove data pointer usage from device_node structure
  net: mvneta: Allocate page for the descriptor

Yelena Krivosheev (4):
  net: mvneta: increase number of buffers in RX and TX queue
  net: mvneta: discriminate error cause for missed packet
  net: mvneta: Verify hardware checksum only when offload checksum
    feature is set
  net: mvneta: Improve the buffer allocation method for SWBM

 drivers/net/ethernet/marvell/mvneta.c    | 395 ++++++++++++++---------
 drivers/net/ethernet/marvell/mvneta_bm.c |  13 +
 drivers/net/ethernet/marvell/mvneta_bm.h |   8 +-
 3 files changed, 258 insertions(+), 158 deletions(-)

-- 
2.17.1



* [PATCH net-next 1/6] net: mvneta: remove data pointer usage from device_node structure
  2018-07-06 13:19 [PATCH net-next 0/6] Few improvements on mvneta Gregory CLEMENT
@ 2018-07-06 13:19 ` Gregory CLEMENT
  2018-07-07 10:45   ` kbuild test robot
  2018-07-06 13:19 ` [PATCH net-next 2/6] net: mvneta: increase number of buffers in RX and TX queue Gregory CLEMENT
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 10+ messages in thread
From: Gregory CLEMENT @ 2018-07-06 13:19 UTC (permalink / raw)
  To: David S. Miller, linux-kernel, netdev
  Cc: Thomas Petazzoni, linux-arm-kernel, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, Gregory CLEMENT, Yelena Krivosheev,
	Nadav Haklai, Marcin Wojtas, Dmitri Epshtein, Antoine Tenart,
	Miquèl Raynal, Maxime Chevallier

One year ago, Rob Herring wanted to remove the data pointer from the
device_node structure [1]. The mvneta driver seemed to be the only one
using (abusing?) it. However, Rob's proposal to remove this pointer
from the driver introduced a regression; I tested and fixed an
alternative approach, but it was never submitted as a proper patch.

Now here it is: instead of using the ->data pointer of the device_node
structure, store the BM private data as the driver data of the BM
platform_device. The core mvneta code can then retrieve it by looking
up which platform_device corresponds to the BM device tree node with
of_find_device_by_node() and reading its driver data.

[1] https://www.spinics.net/lists/netdev/msg445197.html
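
For reference, below is a minimal sketch of the lookup pattern behind
the new mvneta_bm_get()/mvneta_bm_put() helpers. It is an illustration
only (the example_* names are made up), not the hunks of this patch;
see the diff below for the real code. It relies on the BM probe
already publishing its private data with platform_set_drvdata(), and
on the fact that of_find_device_by_node() takes a reference on the
platform device, which is dropped again with platform_device_put().

/* BM driver side: publish the private data on the platform device */
static int example_bm_probe(struct platform_device *pdev)
{
	struct mvneta_bm *priv;

	priv = devm_kzalloc(&pdev->dev, sizeof(*priv), GFP_KERNEL);
	if (!priv)
		return -ENOMEM;
	priv->pdev = pdev;
	platform_set_drvdata(pdev, priv);
	return 0;
}

/* mvneta side: resolve the "buffer-manager" phandle to that data */
static struct mvneta_bm *example_bm_get(struct device_node *node)
{
	/* takes a reference on the underlying platform device */
	struct platform_device *pdev = of_find_device_by_node(node);

	return pdev ? platform_get_drvdata(pdev) : NULL;
}

static void example_bm_put(struct mvneta_bm *priv)
{
	/* drop the reference taken by of_find_device_by_node() */
	platform_device_put(priv->pdev);
}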

Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
---
 drivers/net/ethernet/marvell/mvneta.c    | 18 ++++++++++++------
 drivers/net/ethernet/marvell/mvneta_bm.c | 13 +++++++++++++
 drivers/net/ethernet/marvell/mvneta_bm.h |  5 +++++
 3 files changed, 30 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 0ad2f3f7da85..26d68add184f 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -4460,12 +4460,16 @@ static int mvneta_probe(struct platform_device *pdev)
 
 	/* Obtain access to BM resources if enabled and already initialized */
 	bm_node = of_parse_phandle(dn, "buffer-manager", 0);
-	if (bm_node && bm_node->data) {
-		pp->bm_priv = bm_node->data;
-		err = mvneta_bm_port_init(pdev, pp);
-		if (err < 0) {
-			dev_info(&pdev->dev, "use SW buffer management\n");
-			pp->bm_priv = NULL;
+	if (bm_node) {
+		pp->bm_priv = mvneta_bm_get(bm_node);
+		if (pp->bm_priv) {
+			err = mvneta_bm_port_init(pdev, pp);
+			if (err < 0) {
+				dev_info(&pdev->dev,
+					 "use SW buffer management\n");
+				mvneta_bm_put(pp->bm_priv);
+				pp->bm_priv = NULL;
+			}
 		}
 	}
 	of_node_put(bm_node);
@@ -4526,6 +4530,7 @@ static int mvneta_probe(struct platform_device *pdev)
 		mvneta_bm_pool_destroy(pp->bm_priv, pp->pool_long, 1 << pp->id);
 		mvneta_bm_pool_destroy(pp->bm_priv, pp->pool_short,
 				       1 << pp->id);
+		mvneta_bm_put(pp->bm_priv);
 	}
 err_free_stats:
 	free_percpu(pp->stats);
@@ -4563,6 +4568,7 @@ static int mvneta_remove(struct platform_device *pdev)
 		mvneta_bm_pool_destroy(pp->bm_priv, pp->pool_long, 1 << pp->id);
 		mvneta_bm_pool_destroy(pp->bm_priv, pp->pool_short,
 				       1 << pp->id);
+		mvneta_bm_put(pp->bm_priv);
 	}
 
 	return 0;
diff --git a/drivers/net/ethernet/marvell/mvneta_bm.c b/drivers/net/ethernet/marvell/mvneta_bm.c
index 466939f8f0cf..01e3152e76c8 100644
--- a/drivers/net/ethernet/marvell/mvneta_bm.c
+++ b/drivers/net/ethernet/marvell/mvneta_bm.c
@@ -18,6 +18,7 @@
 #include <linux/module.h>
 #include <linux/netdevice.h>
 #include <linux/of.h>
+#include <linux/of_platform.h>
 #include <linux/platform_device.h>
 #include <linux/skbuff.h>
 #include <net/hwbm.h>
@@ -392,6 +393,18 @@ static void mvneta_bm_put_sram(struct mvneta_bm *priv)
 		      MVNETA_BM_BPPI_SIZE);
 }
 
+struct mvneta_bm *mvneta_bm_get(struct device_node *node)
+{
+	struct platform_device *pdev = of_find_device_by_node(node);
+
+	return pdev ? platform_get_drvdata(pdev) : NULL;
+}
+
+void mvneta_bm_put(struct mvneta_bm *priv)
+{
+	platform_device_put(priv->pdev);
+}
+
 static int mvneta_bm_probe(struct platform_device *pdev)
 {
 	struct device_node *dn = pdev->dev.of_node;
diff --git a/drivers/net/ethernet/marvell/mvneta_bm.h b/drivers/net/ethernet/marvell/mvneta_bm.h
index a32de432800c..9358626e51ec 100644
--- a/drivers/net/ethernet/marvell/mvneta_bm.h
+++ b/drivers/net/ethernet/marvell/mvneta_bm.h
@@ -134,6 +134,9 @@ void *mvneta_frag_alloc(unsigned int frag_size);
 void mvneta_frag_free(unsigned int frag_size, void *data);
 
 #if IS_ENABLED(CONFIG_MVNETA_BM)
+struct mvneta_bm *mvneta_bm_get(struct device_node *node);
+void mvneta_bm_put(struct mvneta_bm *priv);
+
 void mvneta_bm_pool_destroy(struct mvneta_bm *priv,
 			    struct mvneta_bm_pool *bm_pool, u8 port_map);
 void mvneta_bm_bufs_free(struct mvneta_bm *priv, struct mvneta_bm_pool *bm_pool,
@@ -178,5 +181,7 @@ static inline void mvneta_bm_pool_put_bp(struct mvneta_bm *priv,
 static inline u32 mvneta_bm_pool_get_bp(struct mvneta_bm *priv,
 					struct mvneta_bm_pool *bm_pool)
 { return 0; }
+struct mvneta_bm *mvneta_bm_get(struct device_node *node) { return NULL; }
+void mvneta_bm_put(struct mvneta_bm *priv) {}
 #endif /* CONFIG_MVNETA_BM */
 #endif
-- 
2.17.1



* [PATCH net-next 2/6] net: mvneta: increase number of buffers in RX and TX queue
  2018-07-06 13:19 [PATCH net-next 0/6] Few improvements on mvneta Gregory CLEMENT
  2018-07-06 13:19 ` [PATCH net-next 1/6] net: mvneta: remove data pointer usage from device_node structure Gregory CLEMENT
@ 2018-07-06 13:19 ` Gregory CLEMENT
  2018-07-06 13:19 ` [PATCH net-next 3/6] net: mvneta: discriminate error cause for missed packet Gregory CLEMENT
                   ` (3 subsequent siblings)
  5 siblings, 0 replies; 10+ messages in thread
From: Gregory CLEMENT @ 2018-07-06 13:19 UTC (permalink / raw)
  To: David S. Miller, linux-kernel, netdev
  Cc: Thomas Petazzoni, linux-arm-kernel, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, Gregory CLEMENT, Yelena Krivosheev,
	Nadav Haklai, Marcin Wojtas, Dmitri Epshtein, Antoine Tenart,
	Miquèl Raynal, Maxime Chevallier

From: Yelena Krivosheev <yelena@marvell.com>

The initial values were too small, leading to poor performance when
using software buffer management.

Signed-off-by: Yelena Krivosheev <yelena@marvell.com>
[gregory: extract from a larger patch]
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
---
 drivers/net/ethernet/marvell/mvneta.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 26d68add184f..531227e2e48e 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -295,10 +295,10 @@
 #define MVNETA_RSS_LU_TABLE_SIZE	1
 
 /* Max number of Rx descriptors */
-#define MVNETA_MAX_RXD 128
+#define MVNETA_MAX_RXD 512
 
 /* Max number of Tx descriptors */
-#define MVNETA_MAX_TXD 532
+#define MVNETA_MAX_TXD 1024
 
 /* Max number of allowed TCP segments for software TSO */
 #define MVNETA_MAX_TSO_SEGS 100
-- 
2.17.1



* [PATCH net-next 3/6] net: mvneta: discriminate error cause for missed packet
  2018-07-06 13:19 [PATCH net-next 0/6] Few improvements on mvneta Gregory CLEMENT
  2018-07-06 13:19 ` [PATCH net-next 1/6] net: mvneta: remove data pointer usage from device_node structure Gregory CLEMENT
  2018-07-06 13:19 ` [PATCH net-next 2/6] net: mvneta: increase number of buffers in RX and TX queue Gregory CLEMENT
@ 2018-07-06 13:19 ` Gregory CLEMENT
  2018-07-07  2:09   ` David Miller
  2018-07-06 13:19 ` [PATCH net-next 4/6] net: mvneta: Allocate page for the descriptor Gregory CLEMENT
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 10+ messages in thread
From: Gregory CLEMENT @ 2018-07-06 13:19 UTC (permalink / raw)
  To: David S. Miller, linux-kernel, netdev
  Cc: Thomas Petazzoni, linux-arm-kernel, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, Gregory CLEMENT, Yelena Krivosheev,
	Nadav Haklai, Marcin Wojtas, Dmitri Epshtein, Antoine Tenart,
	Miquèl Raynal, Maxime Chevallier

From: Yelena Krivosheev <yelena@marvell.com>

In order to improve diagnostics in case of error, distinguish between
a refill error and an skb allocation error.

Signed-off-by: Yelena Krivosheev <yelena@marvell.com>
[gregory: extract from a larger patch]
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
---
 drivers/net/ethernet/marvell/mvneta.c | 18 ++++++++++++------
 1 file changed, 12 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 531227e2e48e..2c575c6732ce 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -589,9 +589,6 @@ struct mvneta_rx_queue {
 	/* num of rx descriptors in the rx descriptor ring */
 	int size;
 
-	/* counter of times when mvneta_refill() failed */
-	int missed;
-
 	u32 pkts_coal;
 	u32 time_coal;
 
@@ -609,6 +606,10 @@ struct mvneta_rx_queue {
 
 	/* Index of the next RX DMA descriptor to process */
 	int next_desc_to_proc;
+
+	/* error counters */
+	u32 skb_alloc_err;
+	u32 refill_err;
 };
 
 static enum cpuhp_state online_hpstate;
@@ -1946,8 +1947,13 @@ static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
 		if (rx_bytes <= rx_copybreak) {
 		/* better copy a small frame and not unmap the DMA region */
 			skb = netdev_alloc_skb_ip_align(dev, rx_bytes);
-			if (unlikely(!skb))
+			if (unlikely(!skb)) {
+				netdev_err(dev,
+					   "Can't allocate skb on queue %d\n",
+					   rxq->id);
+				rxq->skb_alloc_err++;
 				goto err_drop_frame;
+			}
 
 			dma_sync_single_range_for_cpu(dev->dev.parent,
 						      phys_addr,
@@ -1972,7 +1978,7 @@ static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
 		err = mvneta_rx_refill(pp, rx_desc, rxq);
 		if (err) {
 			netdev_err(dev, "Linux processing - Can't refill\n");
-			rxq->missed++;
+			rxq->refill_err++;
 			goto err_drop_frame;
 		}
 
@@ -2102,7 +2108,7 @@ static int mvneta_rx_hwbm(struct mvneta_port *pp, int rx_todo,
 		err = hwbm_pool_refill(&bm_pool->hwbm_pool, GFP_ATOMIC);
 		if (err) {
 			netdev_err(dev, "Linux processing - Can't refill\n");
-			rxq->missed++;
+			rxq->refill_err++;
 			goto err_drop_frame_ret_pool;
 		}
 
-- 
2.17.1



* [PATCH net-next 4/6] net: mvneta: Allocate page for the descriptor
  2018-07-06 13:19 [PATCH net-next 0/6] Few improvements on mvneta Gregory CLEMENT
                   ` (2 preceding siblings ...)
  2018-07-06 13:19 ` [PATCH net-next 3/6] net: mvneta: discriminate error cause for missed packet Gregory CLEMENT
@ 2018-07-06 13:19 ` Gregory CLEMENT
  2018-07-06 13:19 ` [PATCH net-next 5/6] net: mvneta: Verify hardware checksum only when offload checksum feature is set Gregory CLEMENT
  2018-07-06 13:19 ` [PATCH net-next 6/6] net: mvneta: Improve the buffer allocation method for SWBM Gregory CLEMENT
  5 siblings, 0 replies; 10+ messages in thread
From: Gregory CLEMENT @ 2018-07-06 13:19 UTC (permalink / raw)
  To: David S. Miller, linux-kernel, netdev
  Cc: Thomas Petazzoni, linux-arm-kernel, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, Gregory CLEMENT, Yelena Krivosheev,
	Nadav Haklai, Marcin Wojtas, Dmitri Epshtein, Antoine Tenart,
	Miquèl Raynal, Maxime Chevallier

Instead of trying to allocate the exact amount of memory for each
descriptor, use a page for each of them. This simplifies allocation
management and increases the performance of the driver.

Based on the work of Yelena Krivosheev <yelena@marvell.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
---
 drivers/net/ethernet/marvell/mvneta.c    | 66 ++++++++++--------------
 drivers/net/ethernet/marvell/mvneta_bm.h |  3 --
 2 files changed, 26 insertions(+), 43 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 2c575c6732ce..12739fb60732 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -1791,47 +1791,30 @@ static void mvneta_txq_done(struct mvneta_port *pp,
 	}
 }
 
-void *mvneta_frag_alloc(unsigned int frag_size)
-{
-	if (likely(frag_size <= PAGE_SIZE))
-		return netdev_alloc_frag(frag_size);
-	else
-		return kmalloc(frag_size, GFP_ATOMIC);
-}
-EXPORT_SYMBOL_GPL(mvneta_frag_alloc);
-
-void mvneta_frag_free(unsigned int frag_size, void *data)
-{
-	if (likely(frag_size <= PAGE_SIZE))
-		skb_free_frag(data);
-	else
-		kfree(data);
-}
-EXPORT_SYMBOL_GPL(mvneta_frag_free);
-
 /* Refill processing for SW buffer management */
-static int mvneta_rx_refill(struct mvneta_port *pp,
-			    struct mvneta_rx_desc *rx_desc,
-			    struct mvneta_rx_queue *rxq)
-
+/* Allocate page per descriptor */
+static inline int mvneta_rx_refill(struct mvneta_port *pp,
+				   struct mvneta_rx_desc *rx_desc,
+				   struct mvneta_rx_queue *rxq,
+				   gfp_t gfp_mask)
 {
 	dma_addr_t phys_addr;
-	void *data;
+	struct page *page;
 
-	data = mvneta_frag_alloc(pp->frag_size);
-	if (!data)
+	page = __dev_alloc_page(gfp_mask);
+	if (!page)
 		return -ENOMEM;
 
-	phys_addr = dma_map_single(pp->dev->dev.parent, data,
-				   MVNETA_RX_BUF_SIZE(pp->pkt_size),
-				   DMA_FROM_DEVICE);
+	/* map page for use */
+	phys_addr = dma_map_page(pp->dev->dev.parent, page, 0, PAGE_SIZE,
+				 DMA_FROM_DEVICE);
 	if (unlikely(dma_mapping_error(pp->dev->dev.parent, phys_addr))) {
-		mvneta_frag_free(pp->frag_size, data);
+		__free_page(page);
 		return -ENOMEM;
 	}
 
 	phys_addr += pp->rx_offset_correction;
-	mvneta_rx_desc_fill(rx_desc, phys_addr, data, rxq);
+	mvneta_rx_desc_fill(rx_desc, phys_addr, page, rxq);
 	return 0;
 }
 
@@ -1897,7 +1880,7 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
 
 		dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
 				 MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);
-		mvneta_frag_free(pp->frag_size, data);
+		__free_page(data);
 	}
 }
 
@@ -1924,6 +1907,7 @@ static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
 		struct mvneta_rx_desc *rx_desc = mvneta_rxq_next_desc_get(rxq);
 		struct sk_buff *skb;
 		unsigned char *data;
+		struct page *page;
 		dma_addr_t phys_addr;
 		u32 rx_status, frag_size;
 		int rx_bytes, err, index;
@@ -1932,7 +1916,10 @@ static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
 		rx_status = rx_desc->status;
 		rx_bytes = rx_desc->data_size - (ETH_FCS_LEN + MVNETA_MH_SIZE);
 		index = rx_desc - rxq->descs;
-		data = rxq->buf_virt_addr[index];
+		page = (struct page *)rxq->buf_virt_addr[index];
+		data = page_address(page);
+		/* Prefetch header */
+		prefetch(data);
 		phys_addr = rx_desc->buf_phys_addr - pp->rx_offset_correction;
 
 		if (!mvneta_rxq_desc_is_first_last(rx_status) ||
@@ -1975,7 +1962,7 @@ static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
 		}
 
 		/* Refill processing */
-		err = mvneta_rx_refill(pp, rx_desc, rxq);
+		err = mvneta_rx_refill(pp, rx_desc, rxq, GFP_KERNEL);
 		if (err) {
 			netdev_err(dev, "Linux processing - Can't refill\n");
 			rxq->refill_err++;
@@ -2767,9 +2754,11 @@ static int mvneta_rxq_fill(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 
 	for (i = 0; i < num; i++) {
 		memset(rxq->descs + i, 0, sizeof(struct mvneta_rx_desc));
-		if (mvneta_rx_refill(pp, rxq->descs + i, rxq) != 0) {
-			netdev_err(pp->dev, "%s:rxq %d, %d of %d buffs  filled\n",
-				__func__, rxq->id, i, num);
+		if (mvneta_rx_refill(pp, rxq->descs + i, rxq,
+				     GFP_KERNEL) != 0) {
+			netdev_err(pp->dev,
+				   "%s:rxq %d, %d of %d buffs  filled\n",
+				   __func__, rxq->id, i, num);
 			break;
 		}
 	}
@@ -3183,8 +3172,6 @@ static int mvneta_change_mtu(struct net_device *dev, int mtu)
 		mvneta_bm_update_mtu(pp, mtu);
 
 	pp->pkt_size = MVNETA_RX_PKT_SIZE(dev->mtu);
-	pp->frag_size = SKB_DATA_ALIGN(MVNETA_RX_BUF_SIZE(pp->pkt_size)) +
-	                SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
 
 	ret = mvneta_setup_rxqs(pp);
 	if (ret) {
@@ -3672,8 +3659,7 @@ static int mvneta_open(struct net_device *dev)
 	int ret;
 
 	pp->pkt_size = MVNETA_RX_PKT_SIZE(pp->dev->mtu);
-	pp->frag_size = SKB_DATA_ALIGN(MVNETA_RX_BUF_SIZE(pp->pkt_size)) +
-	                SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+	pp->frag_size = PAGE_SIZE;
 
 	ret = mvneta_setup_rxqs(pp);
 	if (ret)
diff --git a/drivers/net/ethernet/marvell/mvneta_bm.h b/drivers/net/ethernet/marvell/mvneta_bm.h
index 9358626e51ec..c8425d35c049 100644
--- a/drivers/net/ethernet/marvell/mvneta_bm.h
+++ b/drivers/net/ethernet/marvell/mvneta_bm.h
@@ -130,9 +130,6 @@ struct mvneta_bm_pool {
 };
 
 /* Declarations and definitions */
-void *mvneta_frag_alloc(unsigned int frag_size);
-void mvneta_frag_free(unsigned int frag_size, void *data);
-
 #if IS_ENABLED(CONFIG_MVNETA_BM)
 struct mvneta_bm *mvneta_bm_get(struct device_node *node);
 void mvneta_bm_put(struct mvneta_bm *priv);
-- 
2.17.1



* [PATCH net-next 5/6] net: mvneta: Verify hardware checksum only when offload checksum feature is set
  2018-07-06 13:19 [PATCH net-next 0/6] Few improvements on mvneta Gregory CLEMENT
                   ` (3 preceding siblings ...)
  2018-07-06 13:19 ` [PATCH net-next 4/6] net: mvneta: Allocate page for the descriptor Gregory CLEMENT
@ 2018-07-06 13:19 ` Gregory CLEMENT
  2018-07-07 14:58   ` Marcin Wojtas
  2018-07-06 13:19 ` [PATCH net-next 6/6] net: mvneta: Improve the buffer allocation method for SWBM Gregory CLEMENT
  5 siblings, 1 reply; 10+ messages in thread
From: Gregory CLEMENT @ 2018-07-06 13:19 UTC (permalink / raw)
  To: David S. Miller, linux-kernel, netdev
  Cc: Thomas Petazzoni, linux-arm-kernel, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, Gregory CLEMENT, Yelena Krivosheev,
	Nadav Haklai, Marcin Wojtas, Dmitri Epshtein, Antoine Tenart,
	Miquèl Raynal, Maxime Chevallier

From: Yelena Krivosheev <yelena@marvell.com>

If the checksum offload feature is not set, then there is no point in
checking the hardware checksum status.

[gregory: extract from a larger patch]
Signed-off-by: Yelena Krivosheev <yelena@marvell.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
---
 drivers/net/ethernet/marvell/mvneta.c | 12 +++++++-----
 1 file changed, 7 insertions(+), 5 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 12739fb60732..8fc4be238083 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -1716,11 +1716,13 @@ static void mvneta_rx_error(struct mvneta_port *pp,
 static void mvneta_rx_csum(struct mvneta_port *pp, u32 status,
 			   struct sk_buff *skb)
 {
-	if ((status & MVNETA_RXD_L3_IP4) &&
-	    (status & MVNETA_RXD_L4_CSUM_OK)) {
-		skb->csum = 0;
-		skb->ip_summed = CHECKSUM_UNNECESSARY;
-		return;
+	if (pp->dev->features & NETIF_F_RXCSUM) {
+		if ((status & MVNETA_RXD_L3_IP4) &&
+		    (status & MVNETA_RXD_L4_CSUM_OK)) {
+			skb->csum = 0;
+			skb->ip_summed = CHECKSUM_UNNECESSARY;
+			return;
+		}
 	}
 
 	skb->ip_summed = CHECKSUM_NONE;
-- 
2.17.1



* [PATCH net-next 6/6] net: mvneta: Improve the buffer allocation method for SWBM
  2018-07-06 13:19 [PATCH net-next 0/6] Few improvements on mvneta Gregory CLEMENT
                   ` (4 preceding siblings ...)
  2018-07-06 13:19 ` [PATCH net-next 5/6] net: mvneta: Verify hardware checksum only when offload checksum feature is set Gregory CLEMENT
@ 2018-07-06 13:19 ` Gregory CLEMENT
  5 siblings, 0 replies; 10+ messages in thread
From: Gregory CLEMENT @ 2018-07-06 13:19 UTC (permalink / raw)
  To: David S. Miller, linux-kernel, netdev
  Cc: Thomas Petazzoni, linux-arm-kernel, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, Gregory CLEMENT, Yelena Krivosheev,
	Nadav Haklai, Marcin Wojtas, Dmitri Epshtein, Antoine Tenart,
	Miquèl Raynal, Maxime Chevallier

From: Yelena Krivosheev <yelena@marvell.com>

On systems with a small amount of memory (around 256MB), the state
"cannot allocate memory to refill with new buffer" is reached pretty
quickly.

This patch changes the buffer allocation method to handle this use
case better and avoid memory allocation issues.

Signed-off-by: Yelena Krivosheev <yelena@marvell.com>
[gregory: extract from a larger patch]
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
---
 drivers/net/ethernet/marvell/mvneta.c | 281 +++++++++++++++++---------
 1 file changed, 183 insertions(+), 98 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 8fc4be238083..806e78700e50 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -479,7 +479,10 @@ struct mvneta_port {
 #define MVNETA_RXD_ERR_RESOURCE		(BIT(17) | BIT(18))
 #define MVNETA_RXD_ERR_CODE_MASK	(BIT(17) | BIT(18))
 #define MVNETA_RXD_L3_IP4		BIT(25)
-#define MVNETA_RXD_FIRST_LAST_DESC	(BIT(26) | BIT(27))
+#define MVNETA_RXD_LAST_DESC		BIT(26)
+#define MVNETA_RXD_FIRST_DESC		BIT(27)
+#define MVNETA_RXD_FIRST_LAST_DESC	(MVNETA_RXD_FIRST_DESC | \
+					 MVNETA_RXD_LAST_DESC)
 #define MVNETA_RXD_L4_CSUM_OK		BIT(30)
 
 #if defined(__LITTLE_ENDIAN)
@@ -607,6 +610,14 @@ struct mvneta_rx_queue {
 	/* Index of the next RX DMA descriptor to process */
 	int next_desc_to_proc;
 
+	/* Index of first RX DMA descriptor to refill */
+	int first_to_refill;
+	u32 refill_num;
+
+	/* pointer to uncomplete skb buffer */
+	struct sk_buff *skb;
+	int left_size;
+
 	/* error counters */
 	u32 skb_alloc_err;
 	u32 refill_err;
@@ -622,6 +633,7 @@ static int txq_number = 8;
 static int rxq_def;
 
 static int rx_copybreak __read_mostly = 256;
+static int rx_header_size __read_mostly = 128;
 
 /* HW BM need that each port be identify by a unique ID */
 static int global_port_id;
@@ -1685,13 +1697,6 @@ static void mvneta_rx_error(struct mvneta_port *pp,
 {
 	u32 status = rx_desc->status;
 
-	if (!mvneta_rxq_desc_is_first_last(status)) {
-		netdev_err(pp->dev,
-			   "bad rx status %08x (buffer oversize), size=%d\n",
-			   status, rx_desc->data_size);
-		return;
-	}
-
 	switch (status & MVNETA_RXD_ERR_CODE_MASK) {
 	case MVNETA_RXD_ERR_CRC:
 		netdev_err(pp->dev, "bad rx status %08x (crc error), size=%d\n",
@@ -1879,6 +1884,8 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
 	for (i = 0; i < rxq->size; i++) {
 		struct mvneta_rx_desc *rx_desc = rxq->descs + i;
 		void *data = rxq->buf_virt_addr[i];
+		if (!data || !(rx_desc->buf_phys_addr))
+			continue;
 
 		dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
 				 MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);
@@ -1886,117 +1893,183 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
 	}
 }
 
+static inline
+int mvneta_rx_refill_queue(struct mvneta_port *pp, struct mvneta_rx_queue *rxq)
+{
+	struct mvneta_rx_desc *rx_desc;
+	int curr_desc = rxq->first_to_refill;
+	int i;
+
+	for (i = 0; (i < rxq->refill_num) && (i < 64); i++) {
+		rx_desc = rxq->descs + curr_desc;
+		if (!(rx_desc->buf_phys_addr)) {
+			if (mvneta_rx_refill(pp, rx_desc, rxq, GFP_ATOMIC)) {
+				pr_err("Can't refill queue %d. Done %d from %d\n",
+				       rxq->id, i, rxq->refill_num);
+				rxq->refill_err++;
+				break;
+			}
+		}
+		curr_desc = MVNETA_QUEUE_NEXT_DESC(rxq, curr_desc);
+	}
+	rxq->refill_num -= i;
+	rxq->first_to_refill = curr_desc;
+
+	return i;
+}
+
 /* Main rx processing when using software buffer management */
-static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
+static int mvneta_rx_swbm(struct mvneta_port *pp, int budget,
 			  struct mvneta_rx_queue *rxq)
 {
 	struct mvneta_pcpu_port *port = this_cpu_ptr(pp->ports);
 	struct net_device *dev = pp->dev;
-	int rx_done;
+	int rx_todo, rx_proc;
+	int refill = 0;
 	u32 rcvd_pkts = 0;
 	u32 rcvd_bytes = 0;
 
 	/* Get number of received packets */
-	rx_done = mvneta_rxq_busy_desc_num_get(pp, rxq);
-
-	if (rx_todo > rx_done)
-		rx_todo = rx_done;
-
-	rx_done = 0;
+	rx_todo = mvneta_rxq_busy_desc_num_get(pp, rxq);
+	rx_proc = 0;
 
 	/* Fairness NAPI loop */
-	while (rx_done < rx_todo) {
+	while ((rcvd_pkts < budget) && (rx_proc < rx_todo)) {
 		struct mvneta_rx_desc *rx_desc = mvneta_rxq_next_desc_get(rxq);
-		struct sk_buff *skb;
 		unsigned char *data;
 		struct page *page;
 		dma_addr_t phys_addr;
-		u32 rx_status, frag_size;
-		int rx_bytes, err, index;
+		u32 rx_status, index;
+		int rx_bytes, skb_size, copy_size;
+		int frag_num, frag_size, frag_offset;
 
-		rx_done++;
-		rx_status = rx_desc->status;
-		rx_bytes = rx_desc->data_size - (ETH_FCS_LEN + MVNETA_MH_SIZE);
 		index = rx_desc - rxq->descs;
 		page = (struct page *)rxq->buf_virt_addr[index];
 		data = page_address(page);
 		/* Prefetch header */
 		prefetch(data);
-		phys_addr = rx_desc->buf_phys_addr - pp->rx_offset_correction;
 
-		if (!mvneta_rxq_desc_is_first_last(rx_status) ||
-		    (rx_status & MVNETA_RXD_ERR_SUMMARY)) {
-			mvneta_rx_error(pp, rx_desc);
-err_drop_frame:
-			dev->stats.rx_errors++;
-			/* leave the descriptor untouched */
-			continue;
-		}
+		phys_addr = rx_desc->buf_phys_addr;
+		rx_status = rx_desc->status;
+		rx_proc++;
+		rxq->refill_num++;
+
+		if (rx_status & MVNETA_RXD_FIRST_DESC) {
+			/* Check errors only for FIRST descriptor */
+			if (rx_status & MVNETA_RXD_ERR_SUMMARY) {
+				mvneta_rx_error(pp, rx_desc);
+				dev->stats.rx_errors++;
+				/* leave the descriptor untouched */
+				continue;
+			}
+			rx_bytes = rx_desc->data_size -
+				   (ETH_FCS_LEN + MVNETA_MH_SIZE);
 
-		if (rx_bytes <= rx_copybreak) {
-		/* better copy a small frame and not unmap the DMA region */
-			skb = netdev_alloc_skb_ip_align(dev, rx_bytes);
-			if (unlikely(!skb)) {
+			/* Allocate small skb for each new packet */
+			skb_size = max(rx_copybreak, rx_header_size);
+			rxq->skb = netdev_alloc_skb_ip_align(dev, skb_size);
+			if (unlikely(!rxq->skb)) {
 				netdev_err(dev,
 					   "Can't allocate skb on queue %d\n",
 					   rxq->id);
+				dev->stats.rx_dropped++;
 				rxq->skb_alloc_err++;
-				goto err_drop_frame;
+				continue;
 			}
+			copy_size = min(skb_size, rx_bytes);
+
+			/* Copy data from buffer to SKB, skip Marvell header */
+			memcpy(rxq->skb->data, data + MVNETA_MH_SIZE,
+			       copy_size);
+			skb_put(rxq->skb, copy_size);
+			rxq->left_size = rx_bytes - copy_size;
+
+			mvneta_rx_csum(pp, rx_status, rxq->skb);
+			if (rxq->left_size == 0) {
+				int size = copy_size + MVNETA_MH_SIZE;
+
+				dma_sync_single_range_for_cpu(dev->dev.parent,
+							      phys_addr, 0,
+							      size,
+							      DMA_FROM_DEVICE);
+
+				/* leave the descriptor and buffer untouched */
+			} else {
+				/* refill descriptor with new buffer later */
+				rx_desc->buf_phys_addr = 0;
+
+				frag_num = 0;
+				frag_offset = copy_size + MVNETA_MH_SIZE;
+				frag_size = min(rxq->left_size,
+						(int)(PAGE_SIZE - frag_offset));
+				skb_add_rx_frag(rxq->skb, frag_num, page,
+						frag_offset, frag_size,
+						PAGE_SIZE);
+				dma_unmap_single(dev->dev.parent, phys_addr,
+						 PAGE_SIZE, DMA_FROM_DEVICE);
+				rxq->left_size -= frag_size;
+			}
+		} else {
+			/* Middle or Last descriptor */
+			if (unlikely(!rxq->skb)) {
+				pr_debug("no skb for rx_status 0x%x\n",
+					 rx_status);
+				continue;
+			}
+			if (!rxq->left_size) {
+				/* last descriptor has only FCS */
+				/* and can be discarded */
+				dma_sync_single_range_for_cpu(dev->dev.parent,
+							      phys_addr, 0,
+							      ETH_FCS_LEN,
+							      DMA_FROM_DEVICE);
+				/* leave the descriptor and buffer untouched */
+			} else {
+				/* refill descriptor with new buffer later */
+				rx_desc->buf_phys_addr = 0;
+
+				frag_num = skb_shinfo(rxq->skb)->nr_frags;
+				frag_offset = 0;
+				frag_size = min(rxq->left_size,
+						(int)(PAGE_SIZE - frag_offset));
+				skb_add_rx_frag(rxq->skb, frag_num, page,
+						frag_offset, frag_size,
+						PAGE_SIZE);
+
+				dma_unmap_single(dev->dev.parent, phys_addr,
+						 PAGE_SIZE,
+						 DMA_FROM_DEVICE);
+
+				rxq->left_size -= frag_size;
+			}
+		} /* Middle or Last descriptor */
 
-			dma_sync_single_range_for_cpu(dev->dev.parent,
-						      phys_addr,
-						      MVNETA_MH_SIZE + NET_SKB_PAD,
-						      rx_bytes,
-						      DMA_FROM_DEVICE);
-			skb_put_data(skb, data + MVNETA_MH_SIZE + NET_SKB_PAD,
-				     rx_bytes);
-
-			skb->protocol = eth_type_trans(skb, dev);
-			mvneta_rx_csum(pp, rx_status, skb);
-			napi_gro_receive(&port->napi, skb);
-
-			rcvd_pkts++;
-			rcvd_bytes += rx_bytes;
-
-			/* leave the descriptor and buffer untouched */
+		if (!(rx_status & MVNETA_RXD_LAST_DESC))
+			/* no last descriptor this time */
 			continue;
-		}
 
-		/* Refill processing */
-		err = mvneta_rx_refill(pp, rx_desc, rxq, GFP_KERNEL);
-		if (err) {
-			netdev_err(dev, "Linux processing - Can't refill\n");
-			rxq->refill_err++;
-			goto err_drop_frame;
+		if (rxq->left_size) {
+			pr_err("get last desc, but left_size (%d) != 0\n",
+			       rxq->left_size);
+			dev_kfree_skb_any(rxq->skb);
+			rxq->left_size = 0;
+			rxq->skb = NULL;
+			continue;
 		}
-
-		frag_size = pp->frag_size;
-
-		skb = build_skb(data, frag_size > PAGE_SIZE ? 0 : frag_size);
-
-		/* After refill old buffer has to be unmapped regardless
-		 * the skb is successfully built or not.
-		 */
-		dma_unmap_single(dev->dev.parent, phys_addr,
-				 MVNETA_RX_BUF_SIZE(pp->pkt_size),
-				 DMA_FROM_DEVICE);
-
-		if (!skb)
-			goto err_drop_frame;
-
 		rcvd_pkts++;
-		rcvd_bytes += rx_bytes;
+		rcvd_bytes += rxq->skb->len;
 
 		/* Linux processing */
-		skb_reserve(skb, MVNETA_MH_SIZE + NET_SKB_PAD);
-		skb_put(skb, rx_bytes);
-
-		skb->protocol = eth_type_trans(skb, dev);
+		rxq->skb->protocol = eth_type_trans(rxq->skb, dev);
 
-		mvneta_rx_csum(pp, rx_status, skb);
+		if (dev->features & NETIF_F_GRO)
+			napi_gro_receive(&port->napi, rxq->skb);
+		else
+			netif_receive_skb(rxq->skb);
 
-		napi_gro_receive(&port->napi, skb);
+		/* clean uncomplete skb pointer in queue */
+		rxq->skb = NULL;
+		rxq->left_size = 0;
 	}
 
 	if (rcvd_pkts) {
@@ -2008,10 +2081,13 @@ static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
 		u64_stats_update_end(&stats->syncp);
 	}
 
+	/* return some buffers to hardware queue, one at a time is too slow */
+	refill = mvneta_rx_refill_queue(pp, rxq);
+
 	/* Update rxq management counters */
-	mvneta_rxq_desc_num_update(pp, rxq, rx_done, rx_done);
+	mvneta_rxq_desc_num_update(pp, rxq, rx_proc, refill);
 
-	return rx_done;
+	return rcvd_pkts;
 }
 
 /* Main rx processing when using hardware buffer management */
@@ -2818,21 +2894,23 @@ static void mvneta_rxq_hw_init(struct mvneta_port *pp,
 	mvreg_write(pp, MVNETA_RXQ_BASE_ADDR_REG(rxq->id), rxq->descs_phys);
 	mvreg_write(pp, MVNETA_RXQ_SIZE_REG(rxq->id), rxq->size);
 
-	/* Set Offset */
-	mvneta_rxq_offset_set(pp, rxq, NET_SKB_PAD - pp->rx_offset_correction);
-
 	/* Set coalescing pkts and time */
 	mvneta_rx_pkts_coal_set(pp, rxq, rxq->pkts_coal);
 	mvneta_rx_time_coal_set(pp, rxq, rxq->time_coal);
 
 	if (!pp->bm_priv) {
-		/* Fill RXQ with buffers from RX pool */
-		mvneta_rxq_buf_size_set(pp, rxq,
-					MVNETA_RX_BUF_SIZE(pp->pkt_size));
+		/* Set Offset */
+		mvneta_rxq_offset_set(pp, rxq, 0);
+		mvneta_rxq_buf_size_set(pp, rxq, pp->frag_size);
 		mvneta_rxq_bm_disable(pp, rxq);
 		mvneta_rxq_fill(pp, rxq, rxq->size);
 	} else {
+		/* Set Offset */
+		mvneta_rxq_offset_set(pp, rxq,
+				      NET_SKB_PAD - pp->rx_offset_correction);
+
 		mvneta_rxq_bm_enable(pp, rxq);
+		/* Fill RXQ with buffers from RX pool */
 		mvneta_rxq_long_pool_set(pp, rxq);
 		mvneta_rxq_short_pool_set(pp, rxq);
 		mvneta_rxq_non_occup_desc_add(pp, rxq, rxq->size);
@@ -2861,6 +2939,9 @@ static void mvneta_rxq_deinit(struct mvneta_port *pp,
 {
 	mvneta_rxq_drop_pkts(pp, rxq);
 
+	if (rxq->skb)
+		dev_kfree_skb_any(rxq->skb);
+
 	if (rxq->descs)
 		dma_free_coherent(pp->dev->dev.parent,
 				  rxq->size * MVNETA_DESC_ALIGNED_SIZE,
@@ -2871,6 +2952,10 @@ static void mvneta_rxq_deinit(struct mvneta_port *pp,
 	rxq->last_desc         = 0;
 	rxq->next_desc_to_proc = 0;
 	rxq->descs_phys        = 0;
+	rxq->first_to_refill   = 0;
+	rxq->refill_num        = 0;
+	rxq->skb               = NULL;
+	rxq->left_size         = 0;
 }
 
 static int mvneta_txq_sw_init(struct mvneta_port *pp,
@@ -4356,14 +4441,6 @@ static int mvneta_probe(struct platform_device *pdev)
 	pp->dn = dn;
 
 	pp->rxq_def = rxq_def;
-
-	/* Set RX packet offset correction for platforms, whose
-	 * NET_SKB_PAD, exceeds 64B. It should be 64B for 64-bit
-	 * platforms and 0B for 32-bit ones.
-	 */
-	pp->rx_offset_correction =
-		max(0, NET_SKB_PAD - MVNETA_RX_PKT_OFFSET_CORRECTION);
-
 	pp->indir[0] = rxq_def;
 
 	/* Get special SoC configurations */
@@ -4451,6 +4528,7 @@ static int mvneta_probe(struct platform_device *pdev)
 	SET_NETDEV_DEV(dev, &pdev->dev);
 
 	pp->id = global_port_id++;
+	pp->rx_offset_correction = 0; /* not relevant for SW BM */
 
 	/* Obtain access to BM resources if enabled and already initialized */
 	bm_node = of_parse_phandle(dn, "buffer-manager", 0);
@@ -4465,6 +4543,13 @@ static int mvneta_probe(struct platform_device *pdev)
 				pp->bm_priv = NULL;
 			}
 		}
+		/* Set RX packet offset correction for platforms, whose
+		 * NET_SKB_PAD, exceeds 64B. It should be 64B for 64-bit
+		 * platforms and 0B for 32-bit ones.
+		 */
+		pp->rx_offset_correction = max(0,
+					       NET_SKB_PAD -
+					       MVNETA_RX_PKT_OFFSET_CORRECTION);
 	}
 	of_node_put(bm_node);
 
-- 
2.17.1



* Re: [PATCH net-next 3/6] net: mvneta: discriminate error cause for missed packet
  2018-07-06 13:19 ` [PATCH net-next 3/6] net: mvneta: discriminate error cause for missed packet Gregory CLEMENT
@ 2018-07-07  2:09   ` David Miller
  0 siblings, 0 replies; 10+ messages in thread
From: David Miller @ 2018-07-07  2:09 UTC (permalink / raw)
  To: gregory.clement
  Cc: linux-kernel, netdev, thomas.petazzoni, linux-arm-kernel, jason,
	andrew, sebastian.hesselbarth, yelena, nadavh, mw, dima,
	antoine.tenart, miquel.raynal, maxime.chevallier

From: Gregory CLEMENT <gregory.clement@bootlin.com>
Date: Fri,  6 Jul 2018 15:19:46 +0200

> @@ -609,6 +606,10 @@ struct mvneta_rx_queue {
>  
>  	/* Index of the next RX DMA descriptor to process */
>  	int next_desc_to_proc;
> +
> +	/* error counters */
> +	u32 skb_alloc_err;
> +	u32 refill_err;
>  };

These counters aren't exported anywhere, making them not so useful.

Either remove this stuff or add them to the driver's ethtool state.

Thank you.
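
For illustration, exporting per-queue counters like these through
ethtool typically means wiring them into the driver's get_strings /
get_sset_count / get_ethtool_stats callbacks. The hypothetical sketch
below is not mvneta's existing ethtool code: the example_* names and
the flat two-entry layout are made up, and it assumes the driver's
rxq_number and pp->rxqs fields for the per-queue loop.

static const char example_rxq_stat_strings[][ETH_GSTRING_LEN] = {
	"rxq_skb_alloc_err",
	"rxq_refill_err",
};

static void example_get_strings(struct net_device *dev, u32 sset, u8 *data)
{
	if (sset == ETH_SS_STATS)
		memcpy(data, example_rxq_stat_strings,
		       sizeof(example_rxq_stat_strings));
}

static int example_get_sset_count(struct net_device *dev, int sset)
{
	if (sset == ETH_SS_STATS)
		return ARRAY_SIZE(example_rxq_stat_strings);
	return -EOPNOTSUPP;
}

static void example_get_ethtool_stats(struct net_device *dev,
				      struct ethtool_stats *stats, u64 *data)
{
	struct mvneta_port *pp = netdev_priv(dev);
	u64 skb_alloc_err = 0, refill_err = 0;
	int i;

	/* sum the new per-rxq error counters over all RX queues */
	for (i = 0; i < rxq_number; i++) {
		skb_alloc_err += pp->rxqs[i].skb_alloc_err;
		refill_err += pp->rxqs[i].refill_err;
	}
	data[0] = skb_alloc_err;
	data[1] = refill_err;
}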


* Re: [PATCH net-next 1/6] net: mvneta: remove data pointer usage from device_node structure
  2018-07-06 13:19 ` [PATCH net-next 1/6] net: mvneta: remove data pointer usage from device_node structure Gregory CLEMENT
@ 2018-07-07 10:45   ` kbuild test robot
  0 siblings, 0 replies; 10+ messages in thread
From: kbuild test robot @ 2018-07-07 10:45 UTC (permalink / raw)
  To: Gregory CLEMENT
  Cc: kbuild-all, David S. Miller, linux-kernel, netdev,
	Thomas Petazzoni, linux-arm-kernel, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, Gregory CLEMENT, Yelena Krivosheev,
	Nadav Haklai, Marcin Wojtas, Dmitri Epshtein, Antoine Tenart,
	Miquèl Raynal, Maxime Chevallier

[-- Attachment #1: Type: text/plain, Size: 747 bytes --]

Hi Gregory,

I love your patch! Yet something to improve:

[auto build test ERROR on net-next/master]

url:    https://github.com/0day-ci/linux/commits/Gregory-CLEMENT/Few-improvements-on-mvneta/20180707-034934
config: i386-allmodconfig (attached as .config)
compiler: gcc-7 (Debian 7.3.0-16) 7.3.0
reproduce:
        # save the attached .config to linux build tree
        make ARCH=i386 

All errors (new ones prefixed by >>):

>> ERROR: "mvneta_bm_get" [drivers/net/ethernet/marvell/mvneta.ko] undefined!
>> ERROR: "mvneta_bm_put" [drivers/net/ethernet/marvell/mvneta.ko] undefined!

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 64043 bytes --]
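
These link errors most likely point at the two new helpers: with
CONFIG_MVNETA_BM=m, mvneta_bm_get() and mvneta_bm_put() presumably
need to be exported from mvneta_bm.c, and the fallback definitions in
mvneta_bm.h for the !CONFIG_MVNETA_BM case should probably be static
inline like the other stubs there. A possible fix (untested sketch):

/* mvneta_bm.c: export the helpers so mvneta.ko can link against them */
EXPORT_SYMBOL_GPL(mvneta_bm_get);
EXPORT_SYMBOL_GPL(mvneta_bm_put);

/* mvneta_bm.h, !CONFIG_MVNETA_BM branch: make the fallbacks static
 * inline, matching the other stubs in this header
 */
static inline struct mvneta_bm *mvneta_bm_get(struct device_node *node)
{ return NULL; }
static inline void mvneta_bm_put(struct mvneta_bm *priv) {}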


* Re: [PATCH net-next 5/6] net: mvneta: Verify hardware checksum only when offload checksum feature is set
  2018-07-06 13:19 ` [PATCH net-next 5/6] net: mvneta: Verify hardware checksum only when offload checksum feature is set Gregory CLEMENT
@ 2018-07-07 14:58   ` Marcin Wojtas
  0 siblings, 0 replies; 10+ messages in thread
From: Marcin Wojtas @ 2018-07-07 14:58 UTC (permalink / raw)
  To: Gregory CLEMENT
  Cc: David S. Miller, Linux Kernel Mailing List, netdev,
	Thomas Petazzoni, linux-arm-kernel, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, Yelena Krivosheev, Nadav Haklai,
	Dmitri Epshtein, Antoine Tenart, Miquèl Raynal,
	Maxime Chevallier

Hi Gregory,

2018-07-06 15:19 GMT+02:00 Gregory CLEMENT <gregory.clement@bootlin.com>:
>
> From: Yelena Krivosheev <yelena@marvell.com>
>
> If the checksum offload feature is not set, then there is no point to
> check the status of the hardware.
>
> [gregory: extract from a larger patch]
> Signed-off-by: Yelena Krivosheev <yelena@marvell.com>
> Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
> ---
>  drivers/net/ethernet/marvell/mvneta.c | 12 +++++++-----
>  1 file changed, 7 insertions(+), 5 deletions(-)
>
> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> index 12739fb60732..8fc4be238083 100644
> --- a/drivers/net/ethernet/marvell/mvneta.c
> +++ b/drivers/net/ethernet/marvell/mvneta.c
> @@ -1716,11 +1716,13 @@ static void mvneta_rx_error(struct mvneta_port *pp,
>  static void mvneta_rx_csum(struct mvneta_port *pp, u32 status,
>                            struct sk_buff *skb)
>  {
> -       if ((status & MVNETA_RXD_L3_IP4) &&
> -           (status & MVNETA_RXD_L4_CSUM_OK)) {
> -               skb->csum = 0;
> -               skb->ip_summed = CHECKSUM_UNNECESSARY;
> -               return;
> +       if (pp->dev->features & NETIF_F_RXCSUM) {
> +               if ((status & MVNETA_RXD_L3_IP4) &&
> +                   (status & MVNETA_RXD_L4_CSUM_OK)) {

Small nit - wouldn't it be nicer to check conditions in a single 'if':

       if ((pp->dev->features & NETIF_F_RXCSUM) &&
           (status & MVNETA_RXD_L3_IP4) &&
           (status & MVNETA_RXD_L4_CSUM_OK)) {
?

Best regards,
Marcin

> +                       skb->csum = 0;
> +                       skb->ip_summed = CHECKSUM_UNNECESSARY;
> +                       return;
> +               }
>         }
>
>         skb->ip_summed = CHECKSUM_NONE;
> --
> 2.17.1
>


