* [PATCH net-next v3 0/8] A fix and a few improvements on mvneta
From: Gregory CLEMENT @ 2018-07-18 16:10 UTC (permalink / raw)
  To: David S. Miller, linux-kernel, netdev
  Cc: Thomas Petazzoni, linux-arm-kernel, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, Gregory CLEMENT, Yelena Krivosheev,
	Nadav Haklai, Marcin Wojtas, Dmitri Epshtein, Antoine Tenart,
	Miquèl Raynal, Maxime Chevallier

Hello,

This series brings some improvements for the mvneta driver and also
adds a fix.

Compared to v2, the main change is an additional patch fixing a bug in
the MTU change path.

Gregory

Changelog:
v1 -> v2

 - In patch 2, use EXPORT_SYMBOL_GPL for mvneta_bm_get and
   mvneta_bm_put so they can be used from a module, as reported by the
   kbuild test robot.

 - In patch 4, expose the counter through the driver's ethtool
   statistics, as suggested by David Miller.

 - In patch 6, use a single if statement, as suggested by Marcin Wojtas.

v2 -> v3

 - Add a patch fixing the MTU change issue.

 - Remove the inline keyword from mvneta_rx_refill() and let the
   compiler decide, as suggested by David Miller.

Andrew Lunn (1):
  net: ethernet: mvneta: Fix napi structure mixup on armada 3700

Gregory CLEMENT (3):
  net: mvneta: remove data pointer usage from device_node structure
  net: mvneta: discriminate error cause for missed packet
  net: mvneta: Allocate page for the descriptor

Yelena Krivosheev (4):
  net: mvneta: fix mtu change on port without link
  net: mvneta: increase number of buffers in RX and TX queue
  net: mvneta: Verify hardware checksum only when offload checksum
    feature is set
  net: mvneta: Improve the buffer allocation method for SWBM

 drivers/net/ethernet/marvell/mvneta.c    | 409 ++++++++++++++---------
 drivers/net/ethernet/marvell/mvneta_bm.c |  15 +
 drivers/net/ethernet/marvell/mvneta_bm.h |   8 +-
 3 files changed, 272 insertions(+), 160 deletions(-)

-- 
2.18.0



* [PATCH net-next v3 1/8] net: ethernet: mvneta: Fix napi structure mixup on armada 3700
From: Gregory CLEMENT @ 2018-07-18 16:10 UTC (permalink / raw)
  To: David S. Miller, linux-kernel, netdev
  Cc: Thomas Petazzoni, linux-arm-kernel, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, Gregory CLEMENT, Yelena Krivosheev,
	Nadav Haklai, Marcin Wojtas, Dmitri Epshtein, Antoine Tenart,
	Miquèl Raynal, Maxime Chevallier

From: Andrew Lunn <andrew@lunn.ch>

The mvneta Ethernet driver is used on a few different Marvell SoCs.
Some SoCs have per-CPU interrupts for Ethernet events, while others
have a single interrupt, independent of the CPU. The driver handles
this by having a per-CPU napi structure when there are per-CPU
interrupts, and a global napi structure when there is a single
interrupt.

When the napi core calls mvneta_poll(), it passes the napi
instance. This was not being propagated through the call chain;
instead, the per-CPU napi instance was passed to the
napi_gro_receive() call. This breaks when there is a single global
napi instance.

Signed-off-by: Andrew Lunn <andrew@lunn.ch>
Fixes: 2636ac3cc2b4 ("net: mvneta: Add network support for Armada 3700 SoC")
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
---
 drivers/net/ethernet/marvell/mvneta.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 0ad2f3f7da85..ec84db47d82d 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -1901,10 +1901,10 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
 }
 
 /* Main rx processing when using software buffer management */
-static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
+static int mvneta_rx_swbm(struct napi_struct *napi,
+			  struct mvneta_port *pp, int rx_todo,
 			  struct mvneta_rx_queue *rxq)
 {
-	struct mvneta_pcpu_port *port = this_cpu_ptr(pp->ports);
 	struct net_device *dev = pp->dev;
 	int rx_done;
 	u32 rcvd_pkts = 0;
@@ -1959,7 +1959,7 @@ static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
 
 			skb->protocol = eth_type_trans(skb, dev);
 			mvneta_rx_csum(pp, rx_status, skb);
-			napi_gro_receive(&port->napi, skb);
+			napi_gro_receive(napi, skb);
 
 			rcvd_pkts++;
 			rcvd_bytes += rx_bytes;
@@ -2001,7 +2001,7 @@ static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
 
 		mvneta_rx_csum(pp, rx_status, skb);
 
-		napi_gro_receive(&port->napi, skb);
+		napi_gro_receive(napi, skb);
 	}
 
 	if (rcvd_pkts) {
@@ -2020,10 +2020,10 @@ static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
 }
 
 /* Main rx processing when using hardware buffer management */
-static int mvneta_rx_hwbm(struct mvneta_port *pp, int rx_todo,
+static int mvneta_rx_hwbm(struct napi_struct *napi,
+			  struct mvneta_port *pp, int rx_todo,
 			  struct mvneta_rx_queue *rxq)
 {
-	struct mvneta_pcpu_port *port = this_cpu_ptr(pp->ports);
 	struct net_device *dev = pp->dev;
 	int rx_done;
 	u32 rcvd_pkts = 0;
@@ -2085,7 +2085,7 @@ static int mvneta_rx_hwbm(struct mvneta_port *pp, int rx_todo,
 
 			skb->protocol = eth_type_trans(skb, dev);
 			mvneta_rx_csum(pp, rx_status, skb);
-			napi_gro_receive(&port->napi, skb);
+			napi_gro_receive(napi, skb);
 
 			rcvd_pkts++;
 			rcvd_bytes += rx_bytes;
@@ -2129,7 +2129,7 @@ static int mvneta_rx_hwbm(struct mvneta_port *pp, int rx_todo,
 
 		mvneta_rx_csum(pp, rx_status, skb);
 
-		napi_gro_receive(&port->napi, skb);
+		napi_gro_receive(napi, skb);
 	}
 
 	if (rcvd_pkts) {
@@ -2722,9 +2722,11 @@ static int mvneta_poll(struct napi_struct *napi, int budget)
 	if (rx_queue) {
 		rx_queue = rx_queue - 1;
 		if (pp->bm_priv)
-			rx_done = mvneta_rx_hwbm(pp, budget, &pp->rxqs[rx_queue]);
+			rx_done = mvneta_rx_hwbm(napi, pp, budget,
+						 &pp->rxqs[rx_queue]);
 		else
-			rx_done = mvneta_rx_swbm(pp, budget, &pp->rxqs[rx_queue]);
+			rx_done = mvneta_rx_swbm(napi, pp, budget,
+						 &pp->rxqs[rx_queue]);
 	}
 
 	if (rx_done < budget) {
-- 
2.18.0



* [PATCH net-next v3 2/8] net: mvneta: fix mtu change on port without link
From: Gregory CLEMENT @ 2018-07-18 16:10 UTC (permalink / raw)
  To: David S. Miller, linux-kernel, netdev
  Cc: Thomas Petazzoni, linux-arm-kernel, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, Gregory CLEMENT, Yelena Krivosheev,
	Nadav Haklai, Marcin Wojtas, Dmitri Epshtein, Antoine Tenart,
	Miquèl Raynal, Maxime Chevallier

From: Yelena Krivosheev <yelena@marvell.com>

It is incorrect to enable the TX/RX queues (done through a call to
mvneta_port_up()) for a port without link. Indeed, changing the MTU on
an interface without link causes the TX queues to get stuck.

Fixes: c5aff18204da ("net: mvneta: driver for Marvell Armada 370/XP network unit")
Signed-off-by: Yelena Krivosheev <yelena@marvell.com>
[gregory.clement: adding Fixes tags and rewording commit log]
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
---
 drivers/net/ethernet/marvell/mvneta.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index ec84db47d82d..3d1f4335b0d3 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -3196,7 +3196,6 @@ static int mvneta_change_mtu(struct net_device *dev, int mtu)
 
 	on_each_cpu(mvneta_percpu_enable, pp, true);
 	mvneta_start_dev(pp);
-	mvneta_port_up(pp);
 
 	netdev_update_features(dev);
 
-- 
2.18.0



* [PATCH net-next v3 3/8] net: mvneta: remove data pointer usage from device_node structure
From: Gregory CLEMENT @ 2018-07-18 16:10 UTC (permalink / raw)
  To: David S. Miller, linux-kernel, netdev
  Cc: Thomas Petazzoni, linux-arm-kernel, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, Gregory CLEMENT, Yelena Krivosheev,
	Nadav Haklai, Marcin Wojtas, Dmitri Epshtein, Antoine Tenart,
	Miquèl Raynal, Maxime Chevallier

One year ago, Rob Herring wanted to remove the data pointer from the
device_node structure[1]. The mvneta driver seemed to be the only one
using (abusing?) it. However, Rob's proposal to remove this pointer
from the driver introduced a regression; I tested and fixed an
alternative approach, but it was never submitted as a proper patch.

Now here it is: instead of using the device_node structure ->data
pointer, we store the BM private data as the driver data of the BM
platform_device. The core mvneta code can retrieve it by looking up
which platform_device corresponds to the BM device tree node using
of_find_device_by_node(), and then reading its driver data.
[1]https://www.spinics.net/lists/netdev/msg445197.html
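
As an editorial illustration (not part of this patch), the lookup
boils down to resolving the phandle to a platform_device and reading
its driver data. The helper name below is hypothetical; the driver
wraps this logic in mvneta_bm_get()/mvneta_bm_put():

  #include <linux/of.h>
  #include <linux/of_platform.h>
  #include <linux/platform_device.h>

  /* Sketch only: resolve a "buffer-manager" phandle to the BM driver
   * data. of_find_device_by_node() takes a reference on the device;
   * the caller must drop it later with platform_device_put(), which
   * is what the new mvneta_bm_put() does.
   */
  static void *bm_drvdata_lookup(struct device_node *port_node)
  {
          struct device_node *bm_node;
          struct platform_device *bm_pdev;
          void *drvdata = NULL;

          bm_node = of_parse_phandle(port_node, "buffer-manager", 0);
          if (!bm_node)
                  return NULL;

          bm_pdev = of_find_device_by_node(bm_node);
          if (bm_pdev)
                  drvdata = platform_get_drvdata(bm_pdev);

          of_node_put(bm_node);
          return drvdata;
  }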

Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
---
 drivers/net/ethernet/marvell/mvneta.c    | 18 ++++++++++++------
 drivers/net/ethernet/marvell/mvneta_bm.c | 15 +++++++++++++++
 drivers/net/ethernet/marvell/mvneta_bm.h |  5 +++++
 3 files changed, 32 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 3d1f4335b0d3..011794148ec8 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -4461,12 +4461,16 @@ static int mvneta_probe(struct platform_device *pdev)
 
 	/* Obtain access to BM resources if enabled and already initialized */
 	bm_node = of_parse_phandle(dn, "buffer-manager", 0);
-	if (bm_node && bm_node->data) {
-		pp->bm_priv = bm_node->data;
-		err = mvneta_bm_port_init(pdev, pp);
-		if (err < 0) {
-			dev_info(&pdev->dev, "use SW buffer management\n");
-			pp->bm_priv = NULL;
+	if (bm_node) {
+		pp->bm_priv = mvneta_bm_get(bm_node);
+		if (pp->bm_priv) {
+			err = mvneta_bm_port_init(pdev, pp);
+			if (err < 0) {
+				dev_info(&pdev->dev,
+					 "use SW buffer management\n");
+				mvneta_bm_put(pp->bm_priv);
+				pp->bm_priv = NULL;
+			}
 		}
 	}
 	of_node_put(bm_node);
@@ -4527,6 +4531,7 @@ static int mvneta_probe(struct platform_device *pdev)
 		mvneta_bm_pool_destroy(pp->bm_priv, pp->pool_long, 1 << pp->id);
 		mvneta_bm_pool_destroy(pp->bm_priv, pp->pool_short,
 				       1 << pp->id);
+		mvneta_bm_put(pp->bm_priv);
 	}
 err_free_stats:
 	free_percpu(pp->stats);
@@ -4564,6 +4569,7 @@ static int mvneta_remove(struct platform_device *pdev)
 		mvneta_bm_pool_destroy(pp->bm_priv, pp->pool_long, 1 << pp->id);
 		mvneta_bm_pool_destroy(pp->bm_priv, pp->pool_short,
 				       1 << pp->id);
+		mvneta_bm_put(pp->bm_priv);
 	}
 
 	return 0;
diff --git a/drivers/net/ethernet/marvell/mvneta_bm.c b/drivers/net/ethernet/marvell/mvneta_bm.c
index 466939f8f0cf..de468e1bdba9 100644
--- a/drivers/net/ethernet/marvell/mvneta_bm.c
+++ b/drivers/net/ethernet/marvell/mvneta_bm.c
@@ -18,6 +18,7 @@
 #include <linux/module.h>
 #include <linux/netdevice.h>
 #include <linux/of.h>
+#include <linux/of_platform.h>
 #include <linux/platform_device.h>
 #include <linux/skbuff.h>
 #include <net/hwbm.h>
@@ -392,6 +393,20 @@ static void mvneta_bm_put_sram(struct mvneta_bm *priv)
 		      MVNETA_BM_BPPI_SIZE);
 }
 
+struct mvneta_bm *mvneta_bm_get(struct device_node *node)
+{
+	struct platform_device *pdev = of_find_device_by_node(node);
+
+	return pdev ? platform_get_drvdata(pdev) : NULL;
+}
+EXPORT_SYMBOL_GPL(mvneta_bm_get);
+
+void mvneta_bm_put(struct mvneta_bm *priv)
+{
+	platform_device_put(priv->pdev);
+}
+EXPORT_SYMBOL_GPL(mvneta_bm_put);
+
 static int mvneta_bm_probe(struct platform_device *pdev)
 {
 	struct device_node *dn = pdev->dev.of_node;
diff --git a/drivers/net/ethernet/marvell/mvneta_bm.h b/drivers/net/ethernet/marvell/mvneta_bm.h
index a32de432800c..9358626e51ec 100644
--- a/drivers/net/ethernet/marvell/mvneta_bm.h
+++ b/drivers/net/ethernet/marvell/mvneta_bm.h
@@ -134,6 +134,9 @@ void *mvneta_frag_alloc(unsigned int frag_size);
 void mvneta_frag_free(unsigned int frag_size, void *data);
 
 #if IS_ENABLED(CONFIG_MVNETA_BM)
+struct mvneta_bm *mvneta_bm_get(struct device_node *node);
+void mvneta_bm_put(struct mvneta_bm *priv);
+
 void mvneta_bm_pool_destroy(struct mvneta_bm *priv,
 			    struct mvneta_bm_pool *bm_pool, u8 port_map);
 void mvneta_bm_bufs_free(struct mvneta_bm *priv, struct mvneta_bm_pool *bm_pool,
@@ -178,5 +181,7 @@ static inline void mvneta_bm_pool_put_bp(struct mvneta_bm *priv,
 static inline u32 mvneta_bm_pool_get_bp(struct mvneta_bm *priv,
 					struct mvneta_bm_pool *bm_pool)
 { return 0; }
+static inline struct mvneta_bm *mvneta_bm_get(struct device_node *node) { return NULL; }
+static inline void mvneta_bm_put(struct mvneta_bm *priv) {}
 #endif /* CONFIG_MVNETA_BM */
 #endif
-- 
2.18.0



* [PATCH net-next v3 4/8] net: mvneta: increase number of buffers in RX and TX queue
From: Gregory CLEMENT @ 2018-07-18 16:10 UTC (permalink / raw)
  To: David S. Miller, linux-kernel, netdev
  Cc: Thomas Petazzoni, linux-arm-kernel, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, Gregory CLEMENT, Yelena Krivosheev,
	Nadav Haklai, Marcin Wojtas, Dmitri Epshtein, Antoine Tenart,
	Miquèl Raynal, Maxime Chevallier

From: Yelena Krivosheev <yelena@marvell.com>

The initial values were too small, leading to poor performance when
using software buffer management.

Signed-off-by: Yelena Krivosheev <yelena@marvell.com>
[gregory: extract from a larger patch]
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
---
 drivers/net/ethernet/marvell/mvneta.c | 4 ++--
 1 file changed, 2 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 011794148ec8..e519439bac97 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -295,10 +295,10 @@
 #define MVNETA_RSS_LU_TABLE_SIZE	1
 
 /* Max number of Rx descriptors */
-#define MVNETA_MAX_RXD 128
+#define MVNETA_MAX_RXD 512
 
 /* Max number of Tx descriptors */
-#define MVNETA_MAX_TXD 532
+#define MVNETA_MAX_TXD 1024
 
 /* Max number of allowed TCP segments for software TSO */
 #define MVNETA_MAX_TSO_SEGS 100
-- 
2.18.0



* [PATCH net-next v3 5/8] net: mvneta: discriminate error cause for missed packet
From: Gregory CLEMENT @ 2018-07-18 16:10 UTC (permalink / raw)
  To: David S. Miller, linux-kernel, netdev
  Cc: Thomas Petazzoni, linux-arm-kernel, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, Gregory CLEMENT, Yelena Krivosheev,
	Nadav Haklai, Marcin Wojtas, Dmitri Epshtein, Antoine Tenart,
	Miquèl Raynal, Maxime Chevallier

In order to improve diagnostics in case of error, distinguish between
refill errors and skb allocation errors, and expose the information
through the ethtool statistics.

Based on the work of Yelena Krivosheev <yelena@marvell.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
---
 drivers/net/ethernet/marvell/mvneta.c | 28 +++++++++++++++++++++------
 1 file changed, 22 insertions(+), 6 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index e519439bac97..da2d56826295 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -328,6 +328,8 @@
 
 enum {
 	ETHTOOL_STAT_EEE_WAKEUP,
+	ETHTOOL_STAT_SKB_ALLOC_ERR,
+	ETHTOOL_STAT_REFILL_ERR,
 	ETHTOOL_MAX_STATS,
 };
 
@@ -375,6 +377,8 @@ static const struct mvneta_statistic mvneta_statistics[] = {
 	{ 0x3054, T_REG_32, "fc_sent", },
 	{ 0x300c, T_REG_32, "internal_mac_transmit_err", },
 	{ ETHTOOL_STAT_EEE_WAKEUP, T_SW, "eee_wakeup_errors", },
+	{ ETHTOOL_STAT_SKB_ALLOC_ERR, T_SW, "skb_alloc_errors", },
+	{ ETHTOOL_STAT_REFILL_ERR, T_SW, "refill_errors", },
 };
 
 struct mvneta_pcpu_stats {
@@ -589,9 +593,6 @@ struct mvneta_rx_queue {
 	/* num of rx descriptors in the rx descriptor ring */
 	int size;
 
-	/* counter of times when mvneta_refill() failed */
-	int missed;
-
 	u32 pkts_coal;
 	u32 time_coal;
 
@@ -609,6 +610,10 @@ struct mvneta_rx_queue {
 
 	/* Index of the next RX DMA descriptor to process */
 	int next_desc_to_proc;
+
+	/* error counters */
+	u32 skb_alloc_err;
+	u32 refill_err;
 };
 
 static enum cpuhp_state online_hpstate;
@@ -1946,8 +1951,13 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		if (rx_bytes <= rx_copybreak) {
 		/* better copy a small frame and not unmap the DMA region */
 			skb = netdev_alloc_skb_ip_align(dev, rx_bytes);
-			if (unlikely(!skb))
+			if (unlikely(!skb)) {
+				netdev_err(dev,
+					   "Can't allocate skb on queue %d\n",
+					   rxq->id);
+				rxq->skb_alloc_err++;
 				goto err_drop_frame;
+			}
 
 			dma_sync_single_range_for_cpu(dev->dev.parent,
 						      phys_addr,
@@ -1972,7 +1982,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		err = mvneta_rx_refill(pp, rx_desc, rxq);
 		if (err) {
 			netdev_err(dev, "Linux processing - Can't refill\n");
-			rxq->missed++;
+			rxq->refill_err++;
 			goto err_drop_frame;
 		}
 
@@ -2102,7 +2112,7 @@ static int mvneta_rx_hwbm(struct napi_struct *napi,
 		err = hwbm_pool_refill(&bm_pool->hwbm_pool, GFP_ATOMIC);
 		if (err) {
 			netdev_err(dev, "Linux processing - Can't refill\n");
-			rxq->missed++;
+			rxq->refill_err++;
 			goto err_drop_frame_ret_pool;
 		}
 
@@ -3963,6 +3973,12 @@ static void mvneta_ethtool_update_stats(struct mvneta_port *pp)
 			case ETHTOOL_STAT_EEE_WAKEUP:
 				val = phylink_get_eee_err(pp->phylink);
 				break;
+			case ETHTOOL_STAT_SKB_ALLOC_ERR:
+				val = pp->rxqs[0].skb_alloc_err;
+				break;
+			case ETHTOOL_STAT_REFILL_ERR:
+				val = pp->rxqs[0].refill_err;
+				break;
 			}
 			break;
 		}
-- 
2.18.0



* [PATCH net-next v3 6/8] net: mvneta: Allocate page for the descriptor
From: Gregory CLEMENT @ 2018-07-18 16:10 UTC (permalink / raw)
  To: David S. Miller, linux-kernel, netdev
  Cc: Thomas Petazzoni, linux-arm-kernel, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, Gregory CLEMENT, Yelena Krivosheev,
	Nadav Haklai, Marcin Wojtas, Dmitri Epshtein, Antoine Tenart,
	Miquèl Raynal, Maxime Chevallier

Instead of trying to allocate the exact amount of memory for each
descriptor, use a page for each of them. This simplifies the
allocation management and increases the performance of the driver.

Based on the work of Yelena Krivosheev <yelena@marvell.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
---
 drivers/net/ethernet/marvell/mvneta.c    | 62 +++++++++---------------
 drivers/net/ethernet/marvell/mvneta_bm.h |  3 --
 2 files changed, 24 insertions(+), 41 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index da2d56826295..6af583ef2af2 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -1795,47 +1795,30 @@ static void mvneta_txq_done(struct mvneta_port *pp,
 	}
 }
 
-void *mvneta_frag_alloc(unsigned int frag_size)
-{
-	if (likely(frag_size <= PAGE_SIZE))
-		return netdev_alloc_frag(frag_size);
-	else
-		return kmalloc(frag_size, GFP_ATOMIC);
-}
-EXPORT_SYMBOL_GPL(mvneta_frag_alloc);
-
-void mvneta_frag_free(unsigned int frag_size, void *data)
-{
-	if (likely(frag_size <= PAGE_SIZE))
-		skb_free_frag(data);
-	else
-		kfree(data);
-}
-EXPORT_SYMBOL_GPL(mvneta_frag_free);
-
 /* Refill processing for SW buffer management */
+/* Allocate page per descriptor */
 static int mvneta_rx_refill(struct mvneta_port *pp,
 			    struct mvneta_rx_desc *rx_desc,
-			    struct mvneta_rx_queue *rxq)
-
+			    struct mvneta_rx_queue *rxq,
+			    gfp_t gfp_mask)
 {
 	dma_addr_t phys_addr;
-	void *data;
+	struct page *page;
 
-	data = mvneta_frag_alloc(pp->frag_size);
-	if (!data)
+	page = __dev_alloc_page(gfp_mask);
+	if (!page)
 		return -ENOMEM;
 
-	phys_addr = dma_map_single(pp->dev->dev.parent, data,
-				   MVNETA_RX_BUF_SIZE(pp->pkt_size),
-				   DMA_FROM_DEVICE);
+	/* map page for use */
+	phys_addr = dma_map_page(pp->dev->dev.parent, page, 0, PAGE_SIZE,
+				 DMA_FROM_DEVICE);
 	if (unlikely(dma_mapping_error(pp->dev->dev.parent, phys_addr))) {
-		mvneta_frag_free(pp->frag_size, data);
+		__free_page(page);
 		return -ENOMEM;
 	}
 
 	phys_addr += pp->rx_offset_correction;
-	mvneta_rx_desc_fill(rx_desc, phys_addr, data, rxq);
+	mvneta_rx_desc_fill(rx_desc, phys_addr, page, rxq);
 	return 0;
 }
 
@@ -1901,7 +1884,7 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
 
 		dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
 				 MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);
-		mvneta_frag_free(pp->frag_size, data);
+		__free_page(data);
 	}
 }
 
@@ -1928,6 +1911,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		struct mvneta_rx_desc *rx_desc = mvneta_rxq_next_desc_get(rxq);
 		struct sk_buff *skb;
 		unsigned char *data;
+		struct page *page;
 		dma_addr_t phys_addr;
 		u32 rx_status, frag_size;
 		int rx_bytes, err, index;
@@ -1936,7 +1920,10 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		rx_status = rx_desc->status;
 		rx_bytes = rx_desc->data_size - (ETH_FCS_LEN + MVNETA_MH_SIZE);
 		index = rx_desc - rxq->descs;
-		data = rxq->buf_virt_addr[index];
+		page = (struct page *)rxq->buf_virt_addr[index];
+		data = page_address(page);
+		/* Prefetch header */
+		prefetch(data);
 		phys_addr = rx_desc->buf_phys_addr - pp->rx_offset_correction;
 
 		if (!mvneta_rxq_desc_is_first_last(rx_status) ||
@@ -1979,7 +1966,7 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		}
 
 		/* Refill processing */
-		err = mvneta_rx_refill(pp, rx_desc, rxq);
+		err = mvneta_rx_refill(pp, rx_desc, rxq, GFP_KERNEL);
 		if (err) {
 			netdev_err(dev, "Linux processing - Can't refill\n");
 			rxq->refill_err++;
@@ -2773,9 +2760,11 @@ static int mvneta_rxq_fill(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 
 	for (i = 0; i < num; i++) {
 		memset(rxq->descs + i, 0, sizeof(struct mvneta_rx_desc));
-		if (mvneta_rx_refill(pp, rxq->descs + i, rxq) != 0) {
-			netdev_err(pp->dev, "%s:rxq %d, %d of %d buffs  filled\n",
-				__func__, rxq->id, i, num);
+		if (mvneta_rx_refill(pp, rxq->descs + i, rxq,
+				     GFP_KERNEL) != 0) {
+			netdev_err(pp->dev,
+				   "%s:rxq %d, %d of %d buffs  filled\n",
+				   __func__, rxq->id, i, num);
 			break;
 		}
 	}
@@ -3189,8 +3178,6 @@ static int mvneta_change_mtu(struct net_device *dev, int mtu)
 		mvneta_bm_update_mtu(pp, mtu);
 
 	pp->pkt_size = MVNETA_RX_PKT_SIZE(dev->mtu);
-	pp->frag_size = SKB_DATA_ALIGN(MVNETA_RX_BUF_SIZE(pp->pkt_size)) +
-	                SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
 
 	ret = mvneta_setup_rxqs(pp);
 	if (ret) {
@@ -3677,8 +3664,7 @@ static int mvneta_open(struct net_device *dev)
 	int ret;
 
 	pp->pkt_size = MVNETA_RX_PKT_SIZE(pp->dev->mtu);
-	pp->frag_size = SKB_DATA_ALIGN(MVNETA_RX_BUF_SIZE(pp->pkt_size)) +
-	                SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
+	pp->frag_size = PAGE_SIZE;
 
 	ret = mvneta_setup_rxqs(pp);
 	if (ret)
diff --git a/drivers/net/ethernet/marvell/mvneta_bm.h b/drivers/net/ethernet/marvell/mvneta_bm.h
index 9358626e51ec..c8425d35c049 100644
--- a/drivers/net/ethernet/marvell/mvneta_bm.h
+++ b/drivers/net/ethernet/marvell/mvneta_bm.h
@@ -130,9 +130,6 @@ struct mvneta_bm_pool {
 };
 
 /* Declarations and definitions */
-void *mvneta_frag_alloc(unsigned int frag_size);
-void mvneta_frag_free(unsigned int frag_size, void *data);
-
 #if IS_ENABLED(CONFIG_MVNETA_BM)
 struct mvneta_bm *mvneta_bm_get(struct device_node *node);
 void mvneta_bm_put(struct mvneta_bm *priv);
-- 
2.18.0



* [PATCH net-next v3 7/8] net: mvneta: Verify hardware checksum only when offload checksum feature is set
From: Gregory CLEMENT @ 2018-07-18 16:10 UTC (permalink / raw)
  To: David S. Miller, linux-kernel, netdev
  Cc: Thomas Petazzoni, linux-arm-kernel, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, Gregory CLEMENT, Yelena Krivosheev,
	Nadav Haklai, Marcin Wojtas, Dmitri Epshtein, Antoine Tenart,
	Miquèl Raynal, Maxime Chevallier

From: Yelena Krivosheev <yelena@marvell.com>

If the RX checksum offload feature is not set, there is no point in
checking the hardware checksum status.

[gregory: extract from a larger patch]
Signed-off-by: Yelena Krivosheev <yelena@marvell.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
---
 drivers/net/ethernet/marvell/mvneta.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 6af583ef2af2..079b515c54c1 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -1720,7 +1720,8 @@ static void mvneta_rx_error(struct mvneta_port *pp,
 static void mvneta_rx_csum(struct mvneta_port *pp, u32 status,
 			   struct sk_buff *skb)
 {
-	if ((status & MVNETA_RXD_L3_IP4) &&
+	if ((pp->dev->features & NETIF_F_RXCSUM) &&
+	    (status & MVNETA_RXD_L3_IP4) &&
 	    (status & MVNETA_RXD_L4_CSUM_OK)) {
 		skb->csum = 0;
 		skb->ip_summed = CHECKSUM_UNNECESSARY;
-- 
2.18.0



* [PATCH net-next v3 8/8] net: mvneta: Improve the buffer allocation method for SWBM
From: Gregory CLEMENT @ 2018-07-18 16:10 UTC (permalink / raw)
  To: David S. Miller, linux-kernel, netdev
  Cc: Thomas Petazzoni, linux-arm-kernel, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, Gregory CLEMENT, Yelena Krivosheev,
	Nadav Haklai, Marcin Wojtas, Dmitri Epshtein, Antoine Tenart,
	Miquèl Raynal, Maxime Chevallier

From: Yelena Krivosheev <yelena@marvell.com>

On systems with a small amount of memory (around 256MB), the state
"cannot allocate memory to refill with new buffer" is reached pretty
quickly.

This patch changes the buffer allocation method to better handle this
use case by avoiding memory allocation issues.

Signed-off-by: Yelena Krivosheev <yelena@marvell.com>
[gregory: extract from a larger patch]
Signed-off-by: Gregory CLEMENT <gregory.clement@bootlin.com>
---
 drivers/net/ethernet/marvell/mvneta.c | 281 +++++++++++++++++---------
 1 file changed, 183 insertions(+), 98 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 079b515c54c1..55c2a56c5dae 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -483,7 +483,10 @@ struct mvneta_port {
 #define MVNETA_RXD_ERR_RESOURCE		(BIT(17) | BIT(18))
 #define MVNETA_RXD_ERR_CODE_MASK	(BIT(17) | BIT(18))
 #define MVNETA_RXD_L3_IP4		BIT(25)
-#define MVNETA_RXD_FIRST_LAST_DESC	(BIT(26) | BIT(27))
+#define MVNETA_RXD_LAST_DESC		BIT(26)
+#define MVNETA_RXD_FIRST_DESC		BIT(27)
+#define MVNETA_RXD_FIRST_LAST_DESC	(MVNETA_RXD_FIRST_DESC | \
+					 MVNETA_RXD_LAST_DESC)
 #define MVNETA_RXD_L4_CSUM_OK		BIT(30)
 
 #if defined(__LITTLE_ENDIAN)
@@ -611,6 +614,14 @@ struct mvneta_rx_queue {
 	/* Index of the next RX DMA descriptor to process */
 	int next_desc_to_proc;
 
+	/* Index of first RX DMA descriptor to refill */
+	int first_to_refill;
+	u32 refill_num;
+
+	/* pointer to uncomplete skb buffer */
+	struct sk_buff *skb;
+	int left_size;
+
 	/* error counters */
 	u32 skb_alloc_err;
 	u32 refill_err;
@@ -626,6 +637,7 @@ static int txq_number = 8;
 static int rxq_def;
 
 static int rx_copybreak __read_mostly = 256;
+static int rx_header_size __read_mostly = 128;
 
 /* HW BM need that each port be identify by a unique ID */
 static int global_port_id;
@@ -1689,13 +1701,6 @@ static void mvneta_rx_error(struct mvneta_port *pp,
 {
 	u32 status = rx_desc->status;
 
-	if (!mvneta_rxq_desc_is_first_last(status)) {
-		netdev_err(pp->dev,
-			   "bad rx status %08x (buffer oversize), size=%d\n",
-			   status, rx_desc->data_size);
-		return;
-	}
-
 	switch (status & MVNETA_RXD_ERR_CODE_MASK) {
 	case MVNETA_RXD_ERR_CRC:
 		netdev_err(pp->dev, "bad rx status %08x (crc error), size=%d\n",
@@ -1882,6 +1887,8 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
 	for (i = 0; i < rxq->size; i++) {
 		struct mvneta_rx_desc *rx_desc = rxq->descs + i;
 		void *data = rxq->buf_virt_addr[i];
+		if (!data || !(rx_desc->buf_phys_addr))
+			continue;
 
 		dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
 				 MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);
@@ -1889,117 +1896,183 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
 	}
 }
 
+static inline
+int mvneta_rx_refill_queue(struct mvneta_port *pp, struct mvneta_rx_queue *rxq)
+{
+	struct mvneta_rx_desc *rx_desc;
+	int curr_desc = rxq->first_to_refill;
+	int i;
+
+	for (i = 0; (i < rxq->refill_num) && (i < 64); i++) {
+		rx_desc = rxq->descs + curr_desc;
+		if (!(rx_desc->buf_phys_addr)) {
+			if (mvneta_rx_refill(pp, rx_desc, rxq, GFP_ATOMIC)) {
+				pr_err("Can't refill queue %d. Done %d from %d\n",
+				       rxq->id, i, rxq->refill_num);
+				rxq->refill_err++;
+				break;
+			}
+		}
+		curr_desc = MVNETA_QUEUE_NEXT_DESC(rxq, curr_desc);
+	}
+	rxq->refill_num -= i;
+	rxq->first_to_refill = curr_desc;
+
+	return i;
+}
+
 /* Main rx processing when using software buffer management */
 static int mvneta_rx_swbm(struct napi_struct *napi,
-			  struct mvneta_port *pp, int rx_todo,
+			  struct mvneta_port *pp, int budget,
 			  struct mvneta_rx_queue *rxq)
 {
 	struct net_device *dev = pp->dev;
-	int rx_done;
+	int rx_todo, rx_proc;
+	int refill = 0;
 	u32 rcvd_pkts = 0;
 	u32 rcvd_bytes = 0;
 
 	/* Get number of received packets */
-	rx_done = mvneta_rxq_busy_desc_num_get(pp, rxq);
-
-	if (rx_todo > rx_done)
-		rx_todo = rx_done;
-
-	rx_done = 0;
+	rx_todo = mvneta_rxq_busy_desc_num_get(pp, rxq);
+	rx_proc = 0;
 
 	/* Fairness NAPI loop */
-	while (rx_done < rx_todo) {
+	while ((rcvd_pkts < budget) && (rx_proc < rx_todo)) {
 		struct mvneta_rx_desc *rx_desc = mvneta_rxq_next_desc_get(rxq);
-		struct sk_buff *skb;
 		unsigned char *data;
 		struct page *page;
 		dma_addr_t phys_addr;
-		u32 rx_status, frag_size;
-		int rx_bytes, err, index;
+		u32 rx_status, index;
+		int rx_bytes, skb_size, copy_size;
+		int frag_num, frag_size, frag_offset;
 
-		rx_done++;
-		rx_status = rx_desc->status;
-		rx_bytes = rx_desc->data_size - (ETH_FCS_LEN + MVNETA_MH_SIZE);
 		index = rx_desc - rxq->descs;
 		page = (struct page *)rxq->buf_virt_addr[index];
 		data = page_address(page);
 		/* Prefetch header */
 		prefetch(data);
-		phys_addr = rx_desc->buf_phys_addr - pp->rx_offset_correction;
 
-		if (!mvneta_rxq_desc_is_first_last(rx_status) ||
-		    (rx_status & MVNETA_RXD_ERR_SUMMARY)) {
-			mvneta_rx_error(pp, rx_desc);
-err_drop_frame:
-			dev->stats.rx_errors++;
-			/* leave the descriptor untouched */
-			continue;
-		}
+		phys_addr = rx_desc->buf_phys_addr;
+		rx_status = rx_desc->status;
+		rx_proc++;
+		rxq->refill_num++;
+
+		if (rx_status & MVNETA_RXD_FIRST_DESC) {
+			/* Check errors only for FIRST descriptor */
+			if (rx_status & MVNETA_RXD_ERR_SUMMARY) {
+				mvneta_rx_error(pp, rx_desc);
+				dev->stats.rx_errors++;
+				/* leave the descriptor untouched */
+				continue;
+			}
+			rx_bytes = rx_desc->data_size -
+				   (ETH_FCS_LEN + MVNETA_MH_SIZE);
 
-		if (rx_bytes <= rx_copybreak) {
-		/* better copy a small frame and not unmap the DMA region */
-			skb = netdev_alloc_skb_ip_align(dev, rx_bytes);
-			if (unlikely(!skb)) {
+			/* Allocate small skb for each new packet */
+			skb_size = max(rx_copybreak, rx_header_size);
+			rxq->skb = netdev_alloc_skb_ip_align(dev, skb_size);
+			if (unlikely(!rxq->skb)) {
 				netdev_err(dev,
 					   "Can't allocate skb on queue %d\n",
 					   rxq->id);
+				dev->stats.rx_dropped++;
 				rxq->skb_alloc_err++;
-				goto err_drop_frame;
+				continue;
 			}
+			copy_size = min(skb_size, rx_bytes);
+
+			/* Copy data from buffer to SKB, skip Marvell header */
+			memcpy(rxq->skb->data, data + MVNETA_MH_SIZE,
+			       copy_size);
+			skb_put(rxq->skb, copy_size);
+			rxq->left_size = rx_bytes - copy_size;
+
+			mvneta_rx_csum(pp, rx_status, rxq->skb);
+			if (rxq->left_size == 0) {
+				int size = copy_size + MVNETA_MH_SIZE;
+
+				dma_sync_single_range_for_cpu(dev->dev.parent,
+							      phys_addr, 0,
+							      size,
+							      DMA_FROM_DEVICE);
+
+				/* leave the descriptor and buffer untouched */
+			} else {
+				/* refill descriptor with new buffer later */
+				rx_desc->buf_phys_addr = 0;
+
+				frag_num = 0;
+				frag_offset = copy_size + MVNETA_MH_SIZE;
+				frag_size = min(rxq->left_size,
+						(int)(PAGE_SIZE - frag_offset));
+				skb_add_rx_frag(rxq->skb, frag_num, page,
+						frag_offset, frag_size,
+						PAGE_SIZE);
+				dma_unmap_single(dev->dev.parent, phys_addr,
+						 PAGE_SIZE, DMA_FROM_DEVICE);
+				rxq->left_size -= frag_size;
+			}
+		} else {
+			/* Middle or Last descriptor */
+			if (unlikely(!rxq->skb)) {
+				pr_debug("no skb for rx_status 0x%x\n",
+					 rx_status);
+				continue;
+			}
+			if (!rxq->left_size) {
+				/* last descriptor has only FCS */
+				/* and can be discarded */
+				dma_sync_single_range_for_cpu(dev->dev.parent,
+							      phys_addr, 0,
+							      ETH_FCS_LEN,
+							      DMA_FROM_DEVICE);
+				/* leave the descriptor and buffer untouched */
+			} else {
+				/* refill descriptor with new buffer later */
+				rx_desc->buf_phys_addr = 0;
+
+				frag_num = skb_shinfo(rxq->skb)->nr_frags;
+				frag_offset = 0;
+				frag_size = min(rxq->left_size,
+						(int)(PAGE_SIZE - frag_offset));
+				skb_add_rx_frag(rxq->skb, frag_num, page,
+						frag_offset, frag_size,
+						PAGE_SIZE);
+
+				dma_unmap_single(dev->dev.parent, phys_addr,
+						 PAGE_SIZE,
+						 DMA_FROM_DEVICE);
+
+				rxq->left_size -= frag_size;
+			}
+		} /* Middle or Last descriptor */
 
-			dma_sync_single_range_for_cpu(dev->dev.parent,
-						      phys_addr,
-						      MVNETA_MH_SIZE + NET_SKB_PAD,
-						      rx_bytes,
-						      DMA_FROM_DEVICE);
-			skb_put_data(skb, data + MVNETA_MH_SIZE + NET_SKB_PAD,
-				     rx_bytes);
-
-			skb->protocol = eth_type_trans(skb, dev);
-			mvneta_rx_csum(pp, rx_status, skb);
-			napi_gro_receive(napi, skb);
-
-			rcvd_pkts++;
-			rcvd_bytes += rx_bytes;
-
-			/* leave the descriptor and buffer untouched */
+		if (!(rx_status & MVNETA_RXD_LAST_DESC))
+			/* no last descriptor this time */
 			continue;
-		}
 
-		/* Refill processing */
-		err = mvneta_rx_refill(pp, rx_desc, rxq, GFP_KERNEL);
-		if (err) {
-			netdev_err(dev, "Linux processing - Can't refill\n");
-			rxq->refill_err++;
-			goto err_drop_frame;
+		if (rxq->left_size) {
+			pr_err("get last desc, but left_size (%d) != 0\n",
+			       rxq->left_size);
+			dev_kfree_skb_any(rxq->skb);
+			rxq->left_size = 0;
+			rxq->skb = NULL;
+			continue;
 		}
-
-		frag_size = pp->frag_size;
-
-		skb = build_skb(data, frag_size > PAGE_SIZE ? 0 : frag_size);
-
-		/* After refill old buffer has to be unmapped regardless
-		 * the skb is successfully built or not.
-		 */
-		dma_unmap_single(dev->dev.parent, phys_addr,
-				 MVNETA_RX_BUF_SIZE(pp->pkt_size),
-				 DMA_FROM_DEVICE);
-
-		if (!skb)
-			goto err_drop_frame;
-
 		rcvd_pkts++;
-		rcvd_bytes += rx_bytes;
+		rcvd_bytes += rxq->skb->len;
 
 		/* Linux processing */
-		skb_reserve(skb, MVNETA_MH_SIZE + NET_SKB_PAD);
-		skb_put(skb, rx_bytes);
-
-		skb->protocol = eth_type_trans(skb, dev);
+		rxq->skb->protocol = eth_type_trans(rxq->skb, dev);
 
-		mvneta_rx_csum(pp, rx_status, skb);
+		if (dev->features & NETIF_F_GRO)
+			napi_gro_receive(napi, rxq->skb);
+		else
+			netif_receive_skb(rxq->skb);
 
-		napi_gro_receive(napi, skb);
+		/* clean uncomplete skb pointer in queue */
+		rxq->skb = NULL;
+		rxq->left_size = 0;
 	}
 
 	if (rcvd_pkts) {
@@ -2011,10 +2084,13 @@ static int mvneta_rx_swbm(struct napi_struct *napi,
 		u64_stats_update_end(&stats->syncp);
 	}
 
+	/* return some buffers to hardware queue, one at a time is too slow */
+	refill = mvneta_rx_refill_queue(pp, rxq);
+
 	/* Update rxq management counters */
-	mvneta_rxq_desc_num_update(pp, rxq, rx_done, rx_done);
+	mvneta_rxq_desc_num_update(pp, rxq, rx_proc, refill);
 
-	return rx_done;
+	return rcvd_pkts;
 }
 
 /* Main rx processing when using hardware buffer management */
@@ -2823,21 +2899,23 @@ static void mvneta_rxq_hw_init(struct mvneta_port *pp,
 	mvreg_write(pp, MVNETA_RXQ_BASE_ADDR_REG(rxq->id), rxq->descs_phys);
 	mvreg_write(pp, MVNETA_RXQ_SIZE_REG(rxq->id), rxq->size);
 
-	/* Set Offset */
-	mvneta_rxq_offset_set(pp, rxq, NET_SKB_PAD - pp->rx_offset_correction);
-
 	/* Set coalescing pkts and time */
 	mvneta_rx_pkts_coal_set(pp, rxq, rxq->pkts_coal);
 	mvneta_rx_time_coal_set(pp, rxq, rxq->time_coal);
 
 	if (!pp->bm_priv) {
-		/* Fill RXQ with buffers from RX pool */
-		mvneta_rxq_buf_size_set(pp, rxq,
-					MVNETA_RX_BUF_SIZE(pp->pkt_size));
+		/* Set Offset */
+		mvneta_rxq_offset_set(pp, rxq, 0);
+		mvneta_rxq_buf_size_set(pp, rxq, pp->frag_size);
 		mvneta_rxq_bm_disable(pp, rxq);
 		mvneta_rxq_fill(pp, rxq, rxq->size);
 	} else {
+		/* Set Offset */
+		mvneta_rxq_offset_set(pp, rxq,
+				      NET_SKB_PAD - pp->rx_offset_correction);
+
 		mvneta_rxq_bm_enable(pp, rxq);
+		/* Fill RXQ with buffers from RX pool */
 		mvneta_rxq_long_pool_set(pp, rxq);
 		mvneta_rxq_short_pool_set(pp, rxq);
 		mvneta_rxq_non_occup_desc_add(pp, rxq, rxq->size);
@@ -2866,6 +2944,9 @@ static void mvneta_rxq_deinit(struct mvneta_port *pp,
 {
 	mvneta_rxq_drop_pkts(pp, rxq);
 
+	if (rxq->skb)
+		dev_kfree_skb_any(rxq->skb);
+
 	if (rxq->descs)
 		dma_free_coherent(pp->dev->dev.parent,
 				  rxq->size * MVNETA_DESC_ALIGNED_SIZE,
@@ -2876,6 +2957,10 @@ static void mvneta_rxq_deinit(struct mvneta_port *pp,
 	rxq->last_desc         = 0;
 	rxq->next_desc_to_proc = 0;
 	rxq->descs_phys        = 0;
+	rxq->first_to_refill   = 0;
+	rxq->refill_num        = 0;
+	rxq->skb               = NULL;
+	rxq->left_size         = 0;
 }
 
 static int mvneta_txq_sw_init(struct mvneta_port *pp,
@@ -4366,14 +4451,6 @@ static int mvneta_probe(struct platform_device *pdev)
 	pp->dn = dn;
 
 	pp->rxq_def = rxq_def;
-
-	/* Set RX packet offset correction for platforms, whose
-	 * NET_SKB_PAD, exceeds 64B. It should be 64B for 64-bit
-	 * platforms and 0B for 32-bit ones.
-	 */
-	pp->rx_offset_correction =
-		max(0, NET_SKB_PAD - MVNETA_RX_PKT_OFFSET_CORRECTION);
-
 	pp->indir[0] = rxq_def;
 
 	/* Get special SoC configurations */
@@ -4461,6 +4538,7 @@ static int mvneta_probe(struct platform_device *pdev)
 	SET_NETDEV_DEV(dev, &pdev->dev);
 
 	pp->id = global_port_id++;
+	pp->rx_offset_correction = 0; /* not relevant for SW BM */
 
 	/* Obtain access to BM resources if enabled and already initialized */
 	bm_node = of_parse_phandle(dn, "buffer-manager", 0);
@@ -4475,6 +4553,13 @@ static int mvneta_probe(struct platform_device *pdev)
 				pp->bm_priv = NULL;
 			}
 		}
+		/* Set RX packet offset correction for platforms, whose
+		 * NET_SKB_PAD, exceeds 64B. It should be 64B for 64-bit
+		 * platforms and 0B for 32-bit ones.
+		 */
+		pp->rx_offset_correction = max(0,
+					       NET_SKB_PAD -
+					       MVNETA_RX_PKT_OFFSET_CORRECTION);
 	}
 	of_node_put(bm_node);
 
-- 
2.18.0


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [PATCH net-next v3 0/8] A fix and a few improvements on mvneta
  2018-07-18 16:10 ` Gregory CLEMENT
@ 2018-07-24 13:16   ` Gregory CLEMENT
  -1 siblings, 0 replies; 25+ messages in thread
From: Gregory CLEMENT @ 2018-07-24 13:16 UTC (permalink / raw)
  To: David S. Miller
  Cc: linux-kernel, netdev, Thomas Petazzoni, linux-arm-kernel,
	Jason Cooper, Andrew Lunn, Sebastian Hesselbarth,
	Yelena Krivosheev, Nadav Haklai, Marcin Wojtas, Dmitri Epshtein,
	Antoine Tenart, Miquèl Raynal, Maxime Chevallier

Hi David,
 
 On Wed, Jul 18 2018, Gregory CLEMENT <gregory.clement@bootlin.com> wrote:

> Hello,
>
> This series brings some improvements for the mvneta driver and also
> adds a fix.
>
> Compared to the v2, the main change is another patch fixing a bug
> in mtu_change.
>

I saw in patchwork that the status was "Accepted" (for example the 1st
patch [1]). However, the series is not in your git branches, and you
usually also reply to the series to say that you have applied it. So I
would like to know what the status of the series is.

Thanks,

Gregory


[1]: https://patchwork.ozlabs.org/patch/945807/

> Gregory
>
> Changelog:
> v1 -> v2
>
>  - In patch 2, use EXPORT_SYMBOL_GPL for mvneta_bm_get and
>    mvneta_bm_put to be used in module, reported by kbuild test robot.
>
>  - In patch 4, add the counter to the driver's ethtool state,
>    suggested by David Miller.
>
>  - In patch 6, use a single if, suggested by Marcin Wojtas
>
> v2 -> v3
>
>  - Adding a patch fixing the mtu change issue
>
>  - Removing the inline keyword for mvneta_rx_refill() and let the
>    comiler decided, suggested by David Miller.
>
> Andrew Lunn (1):
>   net: ethernet: mvneta: Fix napi structure mixup on armada 3700
>
> Gregory CLEMENT (3):
>   net: mvneta: remove data pointer usage from device_node structure
>   net: mvneta: discriminate error cause for missed packet
>   net: mvneta: Allocate page for the descriptor
>
> Yelena Krivosheev (4):
>   net: mvneta: fix mtu change on port without link
>   net: mvneta: increase number of buffers in RX and TX queue
>   net: mvneta: Verify hardware checksum only when offload checksum
>     feature is set
>   net: mvneta: Improve the buffer allocation method for SWBM
>
>  drivers/net/ethernet/marvell/mvneta.c    | 409 ++++++++++++++---------
>  drivers/net/ethernet/marvell/mvneta_bm.c |  15 +
>  drivers/net/ethernet/marvell/mvneta_bm.h |   8 +-
>  3 files changed, 272 insertions(+), 160 deletions(-)
>
> -- 
> 2.18.0
>

-- 
Gregory Clement, Bootlin (formerly Free Electrons)
Embedded Linux and Kernel engineering
http://bootlin.com

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH net-next v3 0/8] A fix and a few improvements on mvneta
  2018-07-24 13:16   ` Gregory CLEMENT
@ 2018-07-24 16:50     ` David Miller
  -1 siblings, 0 replies; 25+ messages in thread
From: David Miller @ 2018-07-24 16:50 UTC (permalink / raw)
  To: gregory.clement
  Cc: linux-kernel, netdev, thomas.petazzoni, linux-arm-kernel, jason,
	andrew, sebastian.hesselbarth, yelena, nadavh, mw, dima,
	antoine.tenart, miquel.raynal, maxime.chevallier

From: Gregory CLEMENT <gregory.clement@bootlin.com>
Date: Tue, 24 Jul 2018 15:16:52 +0200

> I saw in patchwork, the status was "Accepted" (for example the 1st patch
> [1]). However the series is not in your git branches and usually you
> also answered to the series to inform that you have applied it. So I
> would like to know what is the status of the series.

Sorry about that, I'll take a look and try to figure out what happened
and fix it up.

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH net-next v3 0/8] A fix and a few improvements on mvneta
  2018-07-24 13:16   ` Gregory CLEMENT
  (?)
@ 2018-07-29  5:14     ` David Miller
  -1 siblings, 0 replies; 25+ messages in thread
From: David Miller @ 2018-07-29  5:14 UTC (permalink / raw)
  To: gregory.clement
  Cc: linux-kernel, netdev, thomas.petazzoni, linux-arm-kernel, jason,
	andrew, sebastian.hesselbarth, yelena, nadavh, mw, dima,
	antoine.tenart, miquel.raynal, maxime.chevallier

From: Gregory CLEMENT <gregory.clement@bootlin.com>
Date: Tue, 24 Jul 2018 15:16:52 +0200

> Hi David,
>  
>  On mer., juil. 18 2018, Gregory CLEMENT <gregory.clement@bootlin.com> wrote:
> 
>> Hello,
>>
>> This series brings some improvements for the mvneta driver and also
>> adds a fix.
>>
>> Compared to the v2, the main change is another patch fixing a bug
>> in mtu_change.
>>
> 
> I saw in patchwork, the status was "Accepted" (for example the 1st patch
> [1]). However the series is not in your git branches and usually you
> also answered to the series to inform that you have applied it. So I
> would like to know what is the status of the series.

This series should now really be in net-next.

Sorry about that!

^ permalink raw reply	[flat|nested] 25+ messages in thread

Thread overview: 25+ messages
2018-07-18 16:10 [PATCH net-next v3 0/8] A fix and a few improvements on mvneta Gregory CLEMENT
2018-07-18 16:10 ` Gregory CLEMENT
2018-07-18 16:10 ` [PATCH net-next v3 1/8] net: ethernet: mvneta: Fix napi structure mixup on armada 3700 Gregory CLEMENT
2018-07-18 16:10   ` Gregory CLEMENT
2018-07-18 16:10 ` [PATCH net-next v3 2/8] net: mvneta: fix mtu change on port without link Gregory CLEMENT
2018-07-18 16:10   ` Gregory CLEMENT
2018-07-18 16:10 ` [PATCH net-next v3 3/8] net: mvneta: remove data pointer usage from device_node structure Gregory CLEMENT
2018-07-18 16:10   ` Gregory CLEMENT
2018-07-18 16:10 ` [PATCH net-next v3 4/8] net: mvneta: increase number of buffers in RX and TX queue Gregory CLEMENT
2018-07-18 16:10   ` Gregory CLEMENT
2018-07-18 16:10 ` [PATCH net-next v3 5/8] net: mvneta: discriminate error cause for missed packet Gregory CLEMENT
2018-07-18 16:10   ` Gregory CLEMENT
2018-07-18 16:10 ` [PATCH net-next v3 6/8] net: mvneta: Allocate page for the descriptor Gregory CLEMENT
2018-07-18 16:10   ` Gregory CLEMENT
2018-07-18 16:10 ` [PATCH net-next v3 7/8] net: mvneta: Verify hardware checksum only when offload checksum feature is set Gregory CLEMENT
2018-07-18 16:10   ` Gregory CLEMENT
2018-07-18 16:10 ` [PATCH net-next v3 8/8] net: mvneta: Improve the buffer allocation method for SWBM Gregory CLEMENT
2018-07-18 16:10   ` Gregory CLEMENT
2018-07-24 13:16 ` [PATCH net-next v3 0/8] A fix and a few improvements on mvneta Gregory CLEMENT
2018-07-24 13:16   ` Gregory CLEMENT
2018-07-24 16:50   ` David Miller
2018-07-24 16:50     ` David Miller
2018-07-29  5:14   ` David Miller
2018-07-29  5:14     ` David Miller
2018-07-29  5:14     ` David Miller
