* [PATCH v3 net-next 0/6] Support Armada 37xx SoC (ARMv8 64-bits) in mvneta driver
@ 2016-11-29  9:37 Gregory CLEMENT
  2016-11-29  9:37 ` [PATCH v3 net-next 1/6] net: mvneta: Optimize rx path for small frame Gregory CLEMENT
                   ` (6 more replies)
  0 siblings, 7 replies; 14+ messages in thread
From: Gregory CLEMENT @ 2016-11-29  9:37 UTC (permalink / raw)
  To: David S. Miller, linux-kernel, netdev
  Cc: Jisheng Zhang, Arnd Bergmann, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, Gregory CLEMENT, Thomas Petazzoni,
	linux-arm-kernel, Nadav Haklai, Marcin Wojtas, Dmitri Epshtein,
	Yelena Krivosheev

Hi,

The Armada 37xx is a new ARMv8 SoC from Marvell using the same network
controller as the older Armada 370/38x/XP SoCs. This series adapts the
driver so that it can be used on this new SoC. The main changes
are:

- 64-bit support: the first patches allow using the driver on a 64-bit
  architecture.

- MBUS support: the mbus configuration is different on Armada 37xx
  from the older SoCs.

- per-CPU interrupts: the Armada 37xx does not support per-CPU interrupts
  for the NETA IP, so the non-per-CPU behavior was added back.

The first item is solved by patches 1 to 3.
The last two items are solved by patch 4.
In patch 5 the DT support is added.

Besides Armada 37xx, the series has been tested on Armada XP and
Armada 38x (with Hardware Buffer Management and with Software Buffer
Management).

Thanks,

Gregory

Gregory CLEMENT (4):
  net: mvneta: Optimize rx path for small frame
  net: mvneta: Use cacheable memory to store the rx buffer virtual address
  net: mvneta: Only disable mvneta_bm for 64-bits
  ARM64: dts: marvell: Add network support for Armada 3700

Marcin Wojtas (2):
  net: mvneta: Convert to be 64 bits compatible
  net: mvneta: Add network support for Armada 3700 SoC

 Documentation/devicetree/bindings/net/marvell-armada-370-neta.txt |   7 +-
 arch/arm64/boot/dts/marvell/armada-3720-db.dts                    |  23 ++++-
 arch/arm64/boot/dts/marvell/armada-37xx.dtsi                      |  23 ++++-
 drivers/net/ethernet/marvell/Kconfig                              |  10 +-
 drivers/net/ethernet/marvell/mvneta.c                             | 400 ++++++++++++++++++++++++++++++++++++++++++++++++++++++------------------
 5 files changed, 361 insertions(+), 102 deletions(-)

base-commit: 436accebb53021ef7c63535f60bda410aa87c136
-- 
git-series 0.8.10


* [PATCH v3 net-next 1/6] net: mvneta: Optimize rx path for small frame
  2016-11-29  9:37 [PATCH v3 net-next 0/6] Support Armada 37xx SoC (ARMv8 64-bits) in mvneta driver Gregory CLEMENT
@ 2016-11-29  9:37 ` Gregory CLEMENT
  2016-11-29  9:37 ` [PATCH v3 net-next 2/6] net: mvneta: Use cacheable memory to store the rx buffer virtual address Gregory CLEMENT
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 14+ messages in thread
From: Gregory CLEMENT @ 2016-11-29  9:37 UTC (permalink / raw)
  To: David S. Miller, linux-kernel, netdev
  Cc: Jisheng Zhang, Arnd Bergmann, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, Gregory CLEMENT, Thomas Petazzoni,
	linux-arm-kernel, Nadav Haklai, Marcin Wojtas, Dmitri Epshtein,
	Yelena Krivosheev

For small frames, reuse the phys_addr variable instead of re-reading the
uncacheable value from the rx descriptor.
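
The rx descriptor ring lives in DMA-coherent memory, so every access to
rx_desc->buf_phys_addr is an uncached read, while phys_addr is an
ordinary local variable already holding the same value. A minimal sketch
of the pattern (illustration only, not the driver code itself):

	/* phys_addr was loaded once at the top of the rx loop: */
	dma_addr_t phys_addr = rx_desc->buf_phys_addr;	/* one uncached read */

	/* Small-frame copy path: reuse the cached local instead of
	 * reading the uncacheable descriptor field a second time.
	 */
	dma_sync_single_range_for_cpu(dev->dev.parent, phys_addr,
				      MVNETA_MH_SIZE + NET_SKB_PAD,
				      rx_bytes, DMA_FROM_DEVICE);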

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
---
 drivers/net/ethernet/marvell/mvneta.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 87274d4ab102..1b84f746d748 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -1918,7 +1918,7 @@ static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
 				goto err_drop_frame;
 
 			dma_sync_single_range_for_cpu(dev->dev.parent,
-						      rx_desc->buf_phys_addr,
+						      phys_addr,
 						      MVNETA_MH_SIZE + NET_SKB_PAD,
 						      rx_bytes,
 						      DMA_FROM_DEVICE);
-- 
git-series 0.8.10


* [PATCH v3 net-next 2/6] net: mvneta: Use cacheable memory to store the rx buffer virtual address
  2016-11-29  9:37 [PATCH v3 net-next 0/6] Support Armada 37xx SoC (ARMv8 64-bits) in mvneta driver Gregory CLEMENT
  2016-11-29  9:37 ` [PATCH v3 net-next 1/6] net: mvneta: Optimize rx path for small frame Gregory CLEMENT
@ 2016-11-29  9:37 ` Gregory CLEMENT
  2016-11-29  9:50   ` Marcin Wojtas
  2016-11-29  9:59   ` Marcin Wojtas
  2016-11-29  9:37 ` [PATCH v3 net-next 3/6] net: mvneta: Convert to be 64 bits compatible Gregory CLEMENT
                   ` (4 subsequent siblings)
  6 siblings, 2 replies; 14+ messages in thread
From: Gregory CLEMENT @ 2016-11-29  9:37 UTC (permalink / raw)
  To: David S. Miller, linux-kernel, netdev
  Cc: Jisheng Zhang, Arnd Bergmann, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, Gregory CLEMENT, Thomas Petazzoni,
	linux-arm-kernel, Nadav Haklai, Marcin Wojtas, Dmitri Epshtein,
	Yelena Krivosheev

Until now the virtual address of the received buffer was stored in the
cookie field of the rx descriptor. However, this field is only 32 bits
wide, which prevents the driver from being used on 64-bit architectures.

With this patch the virtual address is stored in an array not shared
with the hardware (so the DMA API is no longer needed for it). Thanks to
this, the array can be accessed through the cache, unlike the rx
descriptor members.

The change is done in the swbm path only because the hwbm uses the
cookie field; this also means that the hwbm is currently not usable on
64-bit platforms.
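
A standalone sketch of the indexing scheme (names follow the patch; the
pointer arithmetic is the whole trick):

	/* The descriptor's position in the ring doubles as an index into
	 * a driver-private array, so the full-width virtual address never
	 * has to fit into the 32-bit cookie field.
	 */
	int i = rx_desc - rxq->descs;		/* ring index of this descriptor */

	rxq->buf_virt_addr[i] = virt_addr;	/* store, on the refill side */

	/* and on the rx side: */
	data = rxq->buf_virt_addr[rx_desc - rxq->descs];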

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
---
 drivers/net/ethernet/marvell/mvneta.c | 93 ++++++++++++++++++++++++----
 1 file changed, 81 insertions(+), 12 deletions(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 1b84f746d748..32b142d0e44e 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -561,6 +561,9 @@ struct mvneta_rx_queue {
 	u32 pkts_coal;
 	u32 time_coal;
 
+	/* Virtual address of the RX buffer */
+	void  **buf_virt_addr;
+
 	/* Virtual address of the RX DMA descriptors array */
 	struct mvneta_rx_desc *descs;
 
@@ -1573,10 +1576,14 @@ static void mvneta_tx_done_pkts_coal_set(struct mvneta_port *pp,
 
 /* Handle rx descriptor fill by setting buf_cookie and buf_phys_addr */
 static void mvneta_rx_desc_fill(struct mvneta_rx_desc *rx_desc,
-				u32 phys_addr, u32 cookie)
+				u32 phys_addr, void *virt_addr,
+				struct mvneta_rx_queue *rxq)
 {
-	rx_desc->buf_cookie = cookie;
+	int i;
+
 	rx_desc->buf_phys_addr = phys_addr;
+	i = rx_desc - rxq->descs;
+	rxq->buf_virt_addr[i] = virt_addr;
 }
 
 /* Decrement sent descriptors counter */
@@ -1781,7 +1788,8 @@ EXPORT_SYMBOL_GPL(mvneta_frag_free);
 
 /* Refill processing for SW buffer management */
 static int mvneta_rx_refill(struct mvneta_port *pp,
-			    struct mvneta_rx_desc *rx_desc)
+			    struct mvneta_rx_desc *rx_desc,
+			    struct mvneta_rx_queue *rxq)
 
 {
 	dma_addr_t phys_addr;
@@ -1799,7 +1807,7 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
 		return -ENOMEM;
 	}
 
-	mvneta_rx_desc_fill(rx_desc, phys_addr, (u32)data);
+	mvneta_rx_desc_fill(rx_desc, phys_addr, data, rxq);
 	return 0;
 }
 
@@ -1861,7 +1869,12 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
 
 	for (i = 0; i < rxq->size; i++) {
 		struct mvneta_rx_desc *rx_desc = rxq->descs + i;
-		void *data = (void *)rx_desc->buf_cookie;
+		void *data;
+
+		if (!pp->bm_priv)
+			data = rxq->buf_virt_addr[i];
+		else
+			data = (void *)(uintptr_t)rx_desc->buf_cookie;
 
 		dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
 				 MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);
@@ -1894,12 +1907,13 @@ static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
 		unsigned char *data;
 		dma_addr_t phys_addr;
 		u32 rx_status, frag_size;
-		int rx_bytes, err;
+		int rx_bytes, err, index;
 
 		rx_done++;
 		rx_status = rx_desc->status;
 		rx_bytes = rx_desc->data_size - (ETH_FCS_LEN + MVNETA_MH_SIZE);
-		data = (unsigned char *)rx_desc->buf_cookie;
+		index = rx_desc - rxq->descs;
+		data = (unsigned char *)rxq->buf_virt_addr[index];
 		phys_addr = rx_desc->buf_phys_addr;
 
 		if (!mvneta_rxq_desc_is_first_last(rx_status) ||
@@ -1938,7 +1952,7 @@ static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
 		}
 
 		/* Refill processing */
-		err = mvneta_rx_refill(pp, rx_desc);
+		err = mvneta_rx_refill(pp, rx_desc, rxq);
 		if (err) {
 			netdev_err(dev, "Linux processing - Can't refill\n");
 			rxq->missed++;
@@ -2020,7 +2034,7 @@ static int mvneta_rx_hwbm(struct mvneta_port *pp, int rx_todo,
 		rx_done++;
 		rx_status = rx_desc->status;
 		rx_bytes = rx_desc->data_size - (ETH_FCS_LEN + MVNETA_MH_SIZE);
-		data = (unsigned char *)rx_desc->buf_cookie;
+		data = (u8 *)(uintptr_t)rx_desc->buf_cookie;
 		phys_addr = rx_desc->buf_phys_addr;
 		pool_id = MVNETA_RX_GET_BM_POOL_ID(rx_desc);
 		bm_pool = &pp->bm_priv->bm_pools[pool_id];
@@ -2708,6 +2722,56 @@ static int mvneta_poll(struct napi_struct *napi, int budget)
 	return rx_done;
 }
 
+/* Refill processing for HW buffer management */
+static int mvneta_rx_hwbm_refill(struct mvneta_port *pp,
+				 struct mvneta_rx_desc *rx_desc)
+
+{
+	dma_addr_t phys_addr;
+	void *data;
+
+	data = mvneta_frag_alloc(pp->frag_size);
+	if (!data)
+		return -ENOMEM;
+
+	phys_addr = dma_map_single(pp->dev->dev.parent, data,
+				   MVNETA_RX_BUF_SIZE(pp->pkt_size),
+				   DMA_FROM_DEVICE);
+	if (unlikely(dma_mapping_error(pp->dev->dev.parent, phys_addr))) {
+		mvneta_frag_free(pp->frag_size, data);
+		return -ENOMEM;
+	}
+
+	rx_desc->buf_phys_addr = phys_addr;
+	rx_desc->buf_cookie = (uintptr_t)data;
+
+	return 0;
+}
+
+/* Handle rxq fill: allocates rxq skbs; called when initializing a port */
+static int mvneta_rxq_bm_fill(struct mvneta_port *pp,
+			      struct mvneta_rx_queue *rxq,
+			      int num)
+{
+	int i;
+
+	for (i = 0; i < num; i++) {
+		memset(rxq->descs + i, 0, sizeof(struct mvneta_rx_desc));
+		if (mvneta_rx_hwbm_refill(pp, rxq->descs + i) != 0) {
+			netdev_err(pp->dev, "%s:rxq %d, %d of %d buffs  filled\n",
+				   __func__, rxq->id, i, num);
+			break;
+		}
+	}
+
+	/* Add this number of RX descriptors as non occupied (ready to
+	 * get packets)
+	 */
+	mvneta_rxq_non_occup_desc_add(pp, rxq, i);
+
+	return i;
+}
+
 /* Handle rxq fill: allocates rxq skbs; called when initializing a port */
 static int mvneta_rxq_fill(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 			   int num)
@@ -2716,7 +2780,7 @@ static int mvneta_rxq_fill(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
 
 	for (i = 0; i < num; i++) {
 		memset(rxq->descs + i, 0, sizeof(struct mvneta_rx_desc));
-		if (mvneta_rx_refill(pp, rxq->descs + i) != 0) {
+		if (mvneta_rx_refill(pp, rxq->descs + i, rxq) != 0) {
 			netdev_err(pp->dev, "%s:rxq %d, %d of %d buffs  filled\n",
 				__func__, rxq->id, i, num);
 			break;
@@ -2784,14 +2848,14 @@ static int mvneta_rxq_init(struct mvneta_port *pp,
 		mvneta_rxq_buf_size_set(pp, rxq,
 					MVNETA_RX_BUF_SIZE(pp->pkt_size));
 		mvneta_rxq_bm_disable(pp, rxq);
+		mvneta_rxq_fill(pp, rxq, rxq->size);
 	} else {
 		mvneta_rxq_bm_enable(pp, rxq);
 		mvneta_rxq_long_pool_set(pp, rxq);
 		mvneta_rxq_short_pool_set(pp, rxq);
+		mvneta_rxq_bm_fill(pp, rxq, rxq->size);
 	}
 
-	mvneta_rxq_fill(pp, rxq, rxq->size);
-
 	return 0;
 }
 
@@ -3865,6 +3929,11 @@ static int mvneta_init(struct device *dev, struct mvneta_port *pp)
 		rxq->size = pp->rx_ring_size;
 		rxq->pkts_coal = MVNETA_RX_COAL_PKTS;
 		rxq->time_coal = MVNETA_RX_COAL_USEC;
+		rxq->buf_virt_addr = devm_kmalloc(pp->dev->dev.parent,
+						  rxq->size * sizeof(void *),
+						  GFP_KERNEL);
+		if (!rxq->buf_virt_addr)
+			return -ENOMEM;
 	}
 
 	return 0;
-- 
git-series 0.8.10


* [PATCH v3 net-next 3/6] net: mvneta: Convert to be 64 bits compatible
  2016-11-29  9:37 [PATCH v3 net-next 0/6] Support Armada 37xx SoC (ARMv8 64-bits) in mvneta driver Gregory CLEMENT
  2016-11-29  9:37 ` [PATCH v3 net-next 1/6] net: mvneta: Optimize rx path for small frame Gregory CLEMENT
  2016-11-29  9:37 ` [PATCH v3 net-next 2/6] net: mvneta: Use cacheable memory to store the rx buffer virtual address Gregory CLEMENT
@ 2016-11-29  9:37 ` Gregory CLEMENT
  2016-11-29  9:37 ` [PATCH v3 net-next 4/6] net: mvneta: Only disable mvneta_bm for 64-bits Gregory CLEMENT
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 14+ messages in thread
From: Gregory CLEMENT @ 2016-11-29  9:37 UTC (permalink / raw)
  To: David S. Miller, linux-kernel, netdev
  Cc: Jisheng Zhang, Arnd Bergmann, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, Gregory CLEMENT, Thomas Petazzoni,
	linux-arm-kernel, Nadav Haklai, Marcin Wojtas, Dmitri Epshtein,
	Yelena Krivosheev

From: Marcin Wojtas <mw@semihalf.com>

Prepare the mvneta driver so that it can be used on 64-bit platforms
such as the Armada 3700.

[gregory.clement@free-electrons.com]: this patch was extracted from a
larger one to ease review and maintenance.
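
A worked example of the offset correction introduced below (assuming a
NET_SKB_PAD of 128 bytes, as on a 64-bit build with 128-byte cache
lines; the values are illustrative):

	/* The RXQ offset register cannot encode more than 64B of packet
	 * offset, so the excess padding is folded into the DMA address:
	 */
	pp->rx_offset_correction = max(0, 128 - MVNETA_RX_PKT_OFFSET_CORRECTION);
						/* = 128 - 64 = 64 */

	/* refill:   rx_desc->buf_phys_addr = phys_addr + 64;           */
	/* rxq init: mvneta_rxq_offset_set(pp, rxq, 128 - 64);          */
	/* The HW then writes at (phys + 64) + 64 = phys + NET_SKB_PAD, */
	/* exactly as before.                                           */

On 32-bit platforms NET_SKB_PAD does not exceed 64B, so the correction
evaluates to 0 and the behavior is unchanged.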

Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
---
 drivers/net/ethernet/marvell/mvneta.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index 32b142d0e44e..a8bd0d83028f 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -296,6 +296,12 @@
 /* descriptor aligned size */
 #define MVNETA_DESC_ALIGNED_SIZE	32
 
+/* Number of bytes to be taken into account by HW when putting incoming data
+ * to the buffers. It is needed in case NET_SKB_PAD exceeds maximum packet
+ * offset supported in MVNETA_RXQ_CONFIG_REG(q) registers.
+ */
+#define MVNETA_RX_PKT_OFFSET_CORRECTION		64
+
 #define MVNETA_RX_PKT_SIZE(mtu) \
 	ALIGN((mtu) + MVNETA_MH_SIZE + MVNETA_VLAN_TAG_LEN + \
 	      ETH_HLEN + ETH_FCS_LEN,			     \
@@ -416,6 +422,7 @@ struct mvneta_port {
 	u64 ethtool_stats[ARRAY_SIZE(mvneta_statistics)];
 
 	u32 indir[MVNETA_RSS_LU_TABLE_SIZE];
+	u16 rx_offset_correction;
 };
 
 /* The mvneta_tx_desc and mvneta_rx_desc structures describe the
@@ -1807,6 +1814,7 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
 		return -ENOMEM;
 	}
 
+	phys_addr += pp->rx_offset_correction;
 	mvneta_rx_desc_fill(rx_desc, phys_addr, data, rxq);
 	return 0;
 }
@@ -2742,6 +2750,7 @@ static int mvneta_rx_hwbm_refill(struct mvneta_port *pp,
 		return -ENOMEM;
 	}
 
+	phys_addr += pp->rx_offset_correction;
 	rx_desc->buf_phys_addr = phys_addr;
 	rx_desc->buf_cookie = (uintptr_t)data;
 
@@ -2837,7 +2846,7 @@ static int mvneta_rxq_init(struct mvneta_port *pp,
 	mvreg_write(pp, MVNETA_RXQ_SIZE_REG(rxq->id), rxq->size);
 
 	/* Set Offset */
-	mvneta_rxq_offset_set(pp, rxq, NET_SKB_PAD);
+	mvneta_rxq_offset_set(pp, rxq, NET_SKB_PAD - pp->rx_offset_correction);
 
 	/* Set coalescing pkts and time */
 	mvneta_rx_pkts_coal_set(pp, rxq, rxq->pkts_coal);
@@ -4088,6 +4097,13 @@ static int mvneta_probe(struct platform_device *pdev)
 
 	pp->rxq_def = rxq_def;
 
+	/* Set RX packet offset correction for platforms, whose
+	 * NET_SKB_PAD, exceeds 64B. It should be 64B for 64-bit
+	 * platforms and 0B for 32-bit ones.
+	 */
+	pp->rx_offset_correction =
+		max(0, NET_SKB_PAD - MVNETA_RX_PKT_OFFSET_CORRECTION);
+
 	pp->indir[0] = rxq_def;
 
 	pp->clk = devm_clk_get(&pdev->dev, "core");
-- 
git-series 0.8.10


* [PATCH v3 net-next 4/6] net: mvneta: Only disable mvneta_bm for 64-bits
  2016-11-29  9:37 [PATCH v3 net-next 0/6] Support Armada 37xx SoC (ARMv8 64-bits) in mvneta driver Gregory CLEMENT
                   ` (2 preceding siblings ...)
  2016-11-29  9:37 ` [PATCH v3 net-next 3/6] net: mvneta: Convert to be 64 bits compatible Gregory CLEMENT
@ 2016-11-29  9:37 ` Gregory CLEMENT
  2016-11-29  9:37 ` [PATCH v3 net-next 5/6] net: mvneta: Add network support for Armada 3700 SoC Gregory CLEMENT
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 14+ messages in thread
From: Gregory CLEMENT @ 2016-11-29  9:37 UTC (permalink / raw)
  To: David S. Miller, linux-kernel, netdev
  Cc: Jisheng Zhang, Arnd Bergmann, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, Gregory CLEMENT, Thomas Petazzoni,
	linux-arm-kernel, Nadav Haklai, Marcin Wojtas, Dmitri Epshtein,
	Yelena Krivosheev

In fact, only the mvneta_bm support is not 64-bit compatible. The mvneta
code itself can run on a 64-bit architecture.

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
---
 drivers/net/ethernet/marvell/Kconfig | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/marvell/Kconfig b/drivers/net/ethernet/marvell/Kconfig
index 66fd9dbb2ca7..2ccea9dd9248 100644
--- a/drivers/net/ethernet/marvell/Kconfig
+++ b/drivers/net/ethernet/marvell/Kconfig
@@ -44,6 +44,7 @@ config MVMDIO
 config MVNETA_BM_ENABLE
 	tristate "Marvell Armada 38x/XP network interface BM support"
 	depends on MVNETA
+	depends on !64BIT
 	---help---
 	  This driver supports auxiliary block of the network
 	  interface units in the Marvell ARMADA XP and ARMADA 38x SoC
@@ -58,7 +59,6 @@ config MVNETA
 	tristate "Marvell Armada 370/38x/XP network interface support"
 	depends on PLAT_ORION || COMPILE_TEST
 	depends on HAS_DMA
-	depends on !64BIT
 	select MVMDIO
 	select FIXED_PHY
 	---help---
@@ -71,6 +71,7 @@ config MVNETA
 
 config MVNETA_BM
 	tristate
+	depends on !64BIT
 	default y if MVNETA=y && MVNETA_BM_ENABLE!=n
 	default MVNETA_BM_ENABLE
 	select HWBM
-- 
git-series 0.8.10


* [PATCH v3 net-next 5/6] net: mvneta: Add network support for Armada 3700 SoC
  2016-11-29  9:37 [PATCH v3 net-next 0/6] Support Armada 37xx SoC (ARMv8 64-bits) in mvneta driver Gregory CLEMENT
                   ` (3 preceding siblings ...)
  2016-11-29  9:37 ` [PATCH v3 net-next 4/6] net: mvneta: Only disable mvneta_bm for 64-bits Gregory CLEMENT
@ 2016-11-29  9:37 ` Gregory CLEMENT
  2016-11-29  9:37 ` [PATCH v3 net-next 6/6] ARM64: dts: marvell: Add network support for Armada 3700 Gregory CLEMENT
  2016-11-29 10:05 ` [PATCH v3 net-next 0/6] Support Armada 37xx SoC (ARMv8 64-bits) in mvneta driver Gregory CLEMENT
  6 siblings, 0 replies; 14+ messages in thread
From: Gregory CLEMENT @ 2016-11-29  9:37 UTC (permalink / raw)
  To: David S. Miller, linux-kernel, netdev
  Cc: Jisheng Zhang, Arnd Bergmann, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, Gregory CLEMENT, Thomas Petazzoni,
	linux-arm-kernel, Nadav Haklai, Marcin Wojtas, Dmitri Epshtein,
	Yelena Krivosheev

From: Marcin Wojtas <mw@semihalf.com>

Armada 3700 is a new ARMv8 SoC from Marvell using the same network
controller as the older Armada 370/38x/XP. There are however some
differences that need to be taken into account when adding support
for it:

* open default MBUS window to 4GB of DRAM - the Armada 3700 SoC's Mbus
  configuration for the network controller has to be done on two
  levels: global and per-port. The first one is inherited from the
  bootloader. The latter can be opened in a default way, leaving
  arbitration to the bus controller. Hence a filled
  mbus_dram_target_info structure is not needed.

* make per-CPU operation optional - recent patches adding RSS and XPS
  support for Armada 38x/XP enabled per-CPU operation of the controller
  by default. Contrary to the older SoCs, the Armada 3700 SoC's network
  controller is not capable of per-CPU processing due to the interrupt
  lines' connectivity. This patch restores non-per-CPU operation, which
  is now optional and depends on the neta_armada3700 flag value in the
  mvneta_port structure. In order not to complicate the code, a
  separate interrupt subroutine is implemented (see the sketch below).

For now, RSS is disabled on the Armada 3700 as the current
implementation depends on per-CPU interrupts.
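
In outline, the dispatch looks like this (condensed from the hunks
below):

	/* One flag selects between a single shared IRQ plus one NAPI
	 * instance embedded in mvneta_port (Armada 3700), and the
	 * existing per-CPU IRQ plus per-CPU NAPI (older SoCs):
	 */
	if (pp->neta_armada3700)
		ret = request_irq(pp->dev->irq, mvneta_isr, 0,
				  dev->name, pp);
	else
		ret = request_percpu_irq(pp->dev->irq, mvneta_percpu_isr,
					 dev->name, pp->ports);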

[gregory.clement@free-electrons.com: extracted from a larger patch,
replaced some ifdefs and ported to net-next for v4.10]

Signed-off-by: Marcin Wojtas <mw@semihalf.com>
Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
---
 Documentation/devicetree/bindings/net/marvell-armada-370-neta.txt |   7 +-
 drivers/net/ethernet/marvell/Kconfig                              |   7 +-
 drivers/net/ethernet/marvell/mvneta.c                             | 287 +++++++++++++++++++++++++++++++++++++++++++++++++++---------------------
 3 files changed, 214 insertions(+), 87 deletions(-)

diff --git a/Documentation/devicetree/bindings/net/marvell-armada-370-neta.txt b/Documentation/devicetree/bindings/net/marvell-armada-370-neta.txt
index 73be8970815e..7aa840c8768d 100644
--- a/Documentation/devicetree/bindings/net/marvell-armada-370-neta.txt
+++ b/Documentation/devicetree/bindings/net/marvell-armada-370-neta.txt
@@ -1,7 +1,10 @@
-* Marvell Armada 370 / Armada XP Ethernet Controller (NETA)
+* Marvell Armada 370 / Armada XP / Armada 3700 Ethernet Controller (NETA)
 
 Required properties:
-- compatible: "marvell,armada-370-neta" or "marvell,armada-xp-neta".
+- compatible: could be one of the followings
+	"marvell,armada-370-neta"
+	"marvell,armada-xp-neta"
+	"marvell,armada-3700-neta"
 - reg: address and length of the register set for the device.
 - interrupts: interrupt for the device
 - phy: See ethernet.txt file in the same directory.
diff --git a/drivers/net/ethernet/marvell/Kconfig b/drivers/net/ethernet/marvell/Kconfig
index 2ccea9dd9248..3b8f11fe5e13 100644
--- a/drivers/net/ethernet/marvell/Kconfig
+++ b/drivers/net/ethernet/marvell/Kconfig
@@ -56,14 +56,15 @@ config MVNETA_BM_ENABLE
 	  buffer management.
 
 config MVNETA
-	tristate "Marvell Armada 370/38x/XP network interface support"
-	depends on PLAT_ORION || COMPILE_TEST
+	tristate "Marvell Armada 370/38x/XP/37xx network interface support"
+	depends on ARCH_MVEBU || COMPILE_TEST
 	depends on HAS_DMA
 	select MVMDIO
 	select FIXED_PHY
 	---help---
 	  This driver supports the network interface units in the
-	  Marvell ARMADA XP, ARMADA 370 and ARMADA 38x SoC family.
+	  Marvell ARMADA XP, ARMADA 370, ARMADA 38x and
+	  ARMADA 37xx SoC family.
 
 	  Note that this driver is distinct from the mv643xx_eth
 	  driver, which should be used for the older Marvell SoCs
diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
index a8bd0d83028f..99cee88d5052 100644
--- a/drivers/net/ethernet/marvell/mvneta.c
+++ b/drivers/net/ethernet/marvell/mvneta.c
@@ -397,6 +397,9 @@ struct mvneta_port {
 	spinlock_t lock;
 	bool is_stopped;
 
+	u32 cause_rx_tx;
+	struct napi_struct napi;
+
 	/* Core clock */
 	struct clk *clk;
 	/* AXI clock */
@@ -422,6 +425,9 @@ struct mvneta_port {
 	u64 ethtool_stats[ARRAY_SIZE(mvneta_statistics)];
 
 	u32 indir[MVNETA_RSS_LU_TABLE_SIZE];
+
+	/* Flags for special SoC configurations */
+	bool neta_armada3700;
 	u16 rx_offset_correction;
 };
 
@@ -965,14 +971,9 @@ static int mvneta_mbus_io_win_set(struct mvneta_port *pp, u32 base, u32 wsize,
 	return 0;
 }
 
-/* Assign and initialize pools for port. In case of fail
- * buffer manager will remain disabled for current port.
- */
-static int mvneta_bm_port_init(struct platform_device *pdev,
-			       struct mvneta_port *pp)
+static  int mvneta_bm_port_mbus_init(struct mvneta_port *pp)
 {
-	struct device_node *dn = pdev->dev.of_node;
-	u32 long_pool_id, short_pool_id, wsize;
+	u32 wsize;
 	u8 target, attr;
 	int err;
 
@@ -991,6 +992,25 @@ static int mvneta_bm_port_init(struct platform_device *pdev,
 		netdev_info(pp->dev, "fail to configure mbus window to BM\n");
 		return err;
 	}
+	return 0;
+}
+
+/* Assign and initialize pools for port. In case of fail
+ * buffer manager will remain disabled for current port.
+ */
+static int mvneta_bm_port_init(struct platform_device *pdev,
+			       struct mvneta_port *pp)
+{
+	struct device_node *dn = pdev->dev.of_node;
+	u32 long_pool_id, short_pool_id;
+
+	if (!pp->neta_armada3700) {
+		int ret;
+
+		ret = mvneta_bm_port_mbus_init(pp);
+		if (ret)
+			return ret;
+	}
 
 	if (of_property_read_u32(dn, "bm,pool-long", &long_pool_id)) {
 		netdev_info(pp->dev, "missing long pool id\n");
@@ -1359,22 +1379,27 @@ static void mvneta_defaults_set(struct mvneta_port *pp)
 	for_each_present_cpu(cpu) {
 		int rxq_map = 0, txq_map = 0;
 		int rxq, txq;
+		if (!pp->neta_armada3700) {
+			for (rxq = 0; rxq < rxq_number; rxq++)
+				if ((rxq % max_cpu) == cpu)
+					rxq_map |= MVNETA_CPU_RXQ_ACCESS(rxq);
+
+			for (txq = 0; txq < txq_number; txq++)
+				if ((txq % max_cpu) == cpu)
+					txq_map |= MVNETA_CPU_TXQ_ACCESS(txq);
+
+			/* With only one TX queue we configure a special case
+			 * which will allow to get all the irq on a single
+			 * CPU
+			 */
+			if (txq_number == 1)
+				txq_map = (cpu == pp->rxq_def) ?
+					MVNETA_CPU_TXQ_ACCESS(1) : 0;
 
-		for (rxq = 0; rxq < rxq_number; rxq++)
-			if ((rxq % max_cpu) == cpu)
-				rxq_map |= MVNETA_CPU_RXQ_ACCESS(rxq);
-
-		for (txq = 0; txq < txq_number; txq++)
-			if ((txq % max_cpu) == cpu)
-				txq_map |= MVNETA_CPU_TXQ_ACCESS(txq);
-
-		/* With only one TX queue we configure a special case
-		 * which will allow to get all the irq on a single
-		 * CPU
-		 */
-		if (txq_number == 1)
-			txq_map = (cpu == pp->rxq_def) ?
-				MVNETA_CPU_TXQ_ACCESS(1) : 0;
+		} else {
+			txq_map = MVNETA_CPU_TXQ_ACCESS_ALL_MASK;
+			rxq_map = MVNETA_CPU_RXQ_ACCESS_ALL_MASK;
+		}
 
 		mvreg_write(pp, MVNETA_CPU_MAP(cpu), rxq_map | txq_map);
 	}
@@ -2632,6 +2657,17 @@ static void mvneta_set_rx_mode(struct net_device *dev)
 /* Interrupt handling - the callback for request_irq() */
 static irqreturn_t mvneta_isr(int irq, void *dev_id)
 {
+	struct mvneta_port *pp = (struct mvneta_port *)dev_id;
+
+	mvreg_write(pp, MVNETA_INTR_NEW_MASK, 0);
+	napi_schedule(&pp->napi);
+
+	return IRQ_HANDLED;
+}
+
+/* Interrupt handling - the callback for request_percpu_irq() */
+static irqreturn_t mvneta_percpu_isr(int irq, void *dev_id)
+{
 	struct mvneta_pcpu_port *port = (struct mvneta_pcpu_port *)dev_id;
 
 	disable_percpu_irq(port->pp->dev->irq);
@@ -2679,7 +2715,7 @@ static int mvneta_poll(struct napi_struct *napi, int budget)
 	struct mvneta_pcpu_port *port = this_cpu_ptr(pp->ports);
 
 	if (!netif_running(pp->dev)) {
-		napi_complete(&port->napi);
+		napi_complete(napi);
 		return rx_done;
 	}
 
@@ -2708,7 +2744,8 @@ static int mvneta_poll(struct napi_struct *napi, int budget)
 	 */
 	rx_queue = fls(((cause_rx_tx >> 8) & 0xff));
 
-	cause_rx_tx |= port->cause_rx_tx;
+	cause_rx_tx |= pp->neta_armada3700 ? pp->cause_rx_tx :
+		port->cause_rx_tx;
 
 	if (rx_queue) {
 		rx_queue = rx_queue - 1;
@@ -2722,11 +2759,27 @@ static int mvneta_poll(struct napi_struct *napi, int budget)
 
 	if (budget > 0) {
 		cause_rx_tx = 0;
-		napi_complete(&port->napi);
-		enable_percpu_irq(pp->dev->irq, 0);
+		napi_complete(napi);
+
+		if (pp->neta_armada3700) {
+			unsigned long flags;
+
+			local_irq_save(flags);
+			mvreg_write(pp, MVNETA_INTR_NEW_MASK,
+				    MVNETA_RX_INTR_MASK(rxq_number) |
+				    MVNETA_TX_INTR_MASK(txq_number) |
+				    MVNETA_MISCINTR_INTR_MASK);
+			local_irq_restore(flags);
+		} else {
+			enable_percpu_irq(pp->dev->irq, 0);
+		}
 	}
 
-	port->cause_rx_tx = cause_rx_tx;
+	if (pp->neta_armada3700)
+		pp->cause_rx_tx = cause_rx_tx;
+	else
+		port->cause_rx_tx = cause_rx_tx;
+
 	return rx_done;
 }
 
@@ -3047,11 +3100,16 @@ static void mvneta_start_dev(struct mvneta_port *pp)
 	/* start the Rx/Tx activity */
 	mvneta_port_enable(pp);
 
-	/* Enable polling on the port */
-	for_each_online_cpu(cpu) {
-		struct mvneta_pcpu_port *port = per_cpu_ptr(pp->ports, cpu);
+	if (!pp->neta_armada3700) {
+		/* Enable polling on the port */
+		for_each_online_cpu(cpu) {
+			struct mvneta_pcpu_port *port =
+				per_cpu_ptr(pp->ports, cpu);
 
-		napi_enable(&port->napi);
+			napi_enable(&port->napi);
+		}
+	} else {
+		napi_enable(&pp->napi);
 	}
 
 	/* Unmask interrupts. It has to be done from each CPU */
@@ -3073,10 +3131,15 @@ static void mvneta_stop_dev(struct mvneta_port *pp)
 
 	phy_stop(ndev->phydev);
 
-	for_each_online_cpu(cpu) {
-		struct mvneta_pcpu_port *port = per_cpu_ptr(pp->ports, cpu);
+	if (!pp->neta_armada3700) {
+		for_each_online_cpu(cpu) {
+			struct mvneta_pcpu_port *port =
+				per_cpu_ptr(pp->ports, cpu);
 
-		napi_disable(&port->napi);
+			napi_disable(&port->napi);
+		}
+	} else {
+		napi_disable(&pp->napi);
 	}
 
 	netif_carrier_off(pp->dev);
@@ -3486,31 +3549,37 @@ static int mvneta_open(struct net_device *dev)
 		goto err_cleanup_rxqs;
 
 	/* Connect to port interrupt line */
-	ret = request_percpu_irq(pp->dev->irq, mvneta_isr,
-				 MVNETA_DRIVER_NAME, pp->ports);
+	if (pp->neta_armada3700)
+		ret = request_irq(pp->dev->irq, mvneta_isr, 0,
+				  dev->name, pp);
+	else
+		ret = request_percpu_irq(pp->dev->irq, mvneta_percpu_isr,
+					 dev->name, pp->ports);
 	if (ret) {
 		netdev_err(pp->dev, "cannot request irq %d\n", pp->dev->irq);
 		goto err_cleanup_txqs;
 	}
 
-	/* Enable per-CPU interrupt on all the CPU to handle our RX
-	 * queue interrupts
-	 */
-	on_each_cpu(mvneta_percpu_enable, pp, true);
+	if (!pp->neta_armada3700) {
+		/* Enable per-CPU interrupt on all the CPU to handle our RX
+		 * queue interrupts
+		 */
+		on_each_cpu(mvneta_percpu_enable, pp, true);
 
-	pp->is_stopped = false;
-	/* Register a CPU notifier to handle the case where our CPU
-	 * might be taken offline.
-	 */
-	ret = cpuhp_state_add_instance_nocalls(online_hpstate,
-					       &pp->node_online);
-	if (ret)
-		goto err_free_irq;
+		pp->is_stopped = false;
+		/* Register a CPU notifier to handle the case where our CPU
+		 * might be taken offline.
+		 */
+		ret = cpuhp_state_add_instance_nocalls(online_hpstate,
+						       &pp->node_online);
+		if (ret)
+			goto err_free_irq;
 
-	ret = cpuhp_state_add_instance_nocalls(CPUHP_NET_MVNETA_DEAD,
-					       &pp->node_dead);
-	if (ret)
-		goto err_free_online_hp;
+		ret = cpuhp_state_add_instance_nocalls(CPUHP_NET_MVNETA_DEAD,
+						       &pp->node_dead);
+		if (ret)
+			goto err_free_online_hp;
+	}
 
 	/* In default link is down */
 	netif_carrier_off(pp->dev);
@@ -3526,13 +3595,20 @@ static int mvneta_open(struct net_device *dev)
 	return 0;
 
 err_free_dead_hp:
-	cpuhp_state_remove_instance_nocalls(CPUHP_NET_MVNETA_DEAD,
-					    &pp->node_dead);
+	if (!pp->neta_armada3700)
+		cpuhp_state_remove_instance_nocalls(CPUHP_NET_MVNETA_DEAD,
+						    &pp->node_dead);
 err_free_online_hp:
-	cpuhp_state_remove_instance_nocalls(online_hpstate, &pp->node_online);
+	if (!pp->neta_armada3700)
+		cpuhp_state_remove_instance_nocalls(online_hpstate,
+						    &pp->node_online);
 err_free_irq:
-	on_each_cpu(mvneta_percpu_disable, pp, true);
-	free_percpu_irq(pp->dev->irq, pp->ports);
+	if (pp->neta_armada3700) {
+		free_irq(pp->dev->irq, pp);
+	} else {
+		on_each_cpu(mvneta_percpu_disable, pp, true);
+		free_percpu_irq(pp->dev->irq, pp->ports);
+	}
 err_cleanup_txqs:
 	mvneta_cleanup_txqs(pp);
 err_cleanup_rxqs:
@@ -3545,23 +3621,30 @@ static int mvneta_stop(struct net_device *dev)
 {
 	struct mvneta_port *pp = netdev_priv(dev);
 
-	/* Inform that we are stopping so we don't want to setup the
-	 * driver for new CPUs in the notifiers. The code of the
-	 * notifier for CPU online is protected by the same spinlock,
-	 * so when we get the lock, the notifer work is done.
-	 */
-	spin_lock(&pp->lock);
-	pp->is_stopped = true;
-	spin_unlock(&pp->lock);
+	if (!pp->neta_armada3700) {
+		/* Inform that we are stopping so we don't want to setup the
+		 * driver for new CPUs in the notifiers. The code of the
+		 * notifier for CPU online is protected by the same spinlock,
+		 * so when we get the lock, the notifer work is done.
+		 */
+		spin_lock(&pp->lock);
+		pp->is_stopped = true;
+		spin_unlock(&pp->lock);
 
-	mvneta_stop_dev(pp);
-	mvneta_mdio_remove(pp);
+		mvneta_stop_dev(pp);
+		mvneta_mdio_remove(pp);
 
 	cpuhp_state_remove_instance_nocalls(online_hpstate, &pp->node_online);
 	cpuhp_state_remove_instance_nocalls(CPUHP_NET_MVNETA_DEAD,
 					    &pp->node_dead);
-	on_each_cpu(mvneta_percpu_disable, pp, true);
-	free_percpu_irq(dev->irq, pp->ports);
+		on_each_cpu(mvneta_percpu_disable, pp, true);
+		free_percpu_irq(dev->irq, pp->ports);
+	} else {
+		mvneta_stop_dev(pp);
+		mvneta_mdio_remove(pp);
+		free_irq(dev->irq, pp);
+	}
+
 	mvneta_cleanup_rxqs(pp);
 	mvneta_cleanup_txqs(pp);
 
@@ -3840,6 +3923,11 @@ static int mvneta_ethtool_set_rxfh(struct net_device *dev, const u32 *indir,
 				   const u8 *key, const u8 hfunc)
 {
 	struct mvneta_port *pp = netdev_priv(dev);
+
+	/* Current code for Armada 3700 doesn't support RSS features yet */
+	if (pp->neta_armada3700)
+		return -EOPNOTSUPP;
+
 	/* We require at least one supported parameter to be changed
 	 * and no change in any of the unsupported parameters
 	 */
@@ -3860,6 +3948,10 @@ static int mvneta_ethtool_get_rxfh(struct net_device *dev, u32 *indir, u8 *key,
 {
 	struct mvneta_port *pp = netdev_priv(dev);
 
+	/* Current code for Armada 3700 doesn't support RSS features yet */
+	if (pp->neta_armada3700)
+		return -EOPNOTSUPP;
+
 	if (hfunc)
 		*hfunc = ETH_RSS_HASH_TOP;
 
@@ -3967,16 +4059,29 @@ static void mvneta_conf_mbus_windows(struct mvneta_port *pp,
 	win_enable = 0x3f;
 	win_protect = 0;
 
-	for (i = 0; i < dram->num_cs; i++) {
-		const struct mbus_dram_window *cs = dram->cs + i;
-		mvreg_write(pp, MVNETA_WIN_BASE(i), (cs->base & 0xffff0000) |
-			    (cs->mbus_attr << 8) | dram->mbus_dram_target_id);
+	if (dram) {
+		for (i = 0; i < dram->num_cs; i++) {
+			const struct mbus_dram_window *cs = dram->cs + i;
+
+			mvreg_write(pp, MVNETA_WIN_BASE(i),
+				    (cs->base & 0xffff0000) |
+				    (cs->mbus_attr << 8) |
+				    dram->mbus_dram_target_id);
 
-		mvreg_write(pp, MVNETA_WIN_SIZE(i),
-			    (cs->size - 1) & 0xffff0000);
+			mvreg_write(pp, MVNETA_WIN_SIZE(i),
+				    (cs->size - 1) & 0xffff0000);
 
-		win_enable &= ~(1 << i);
-		win_protect |= 3 << (2 * i);
+			win_enable &= ~(1 << i);
+			win_protect |= 3 << (2 * i);
+		}
+	} else {
+		/* For Armada3700 open default 4GB Mbus window, leaving
+		 * arbitration of target/attribute to a different layer
+		 * of configuration.
+		 */
+		mvreg_write(pp, MVNETA_WIN_SIZE(0), 0xffff0000);
+		win_enable &= ~BIT(0);
+		win_protect = 3;
 	}
 
 	mvreg_write(pp, MVNETA_BASE_ADDR_ENABLE, win_enable);
@@ -4106,6 +4211,10 @@ static int mvneta_probe(struct platform_device *pdev)
 
 	pp->indir[0] = rxq_def;
 
+	/* Get special SoC configurations */
+	if (of_device_is_compatible(dn, "marvell,armada-3700-neta"))
+		pp->neta_armada3700 = true;
+
 	pp->clk = devm_clk_get(&pdev->dev, "core");
 	if (IS_ERR(pp->clk))
 		pp->clk = devm_clk_get(&pdev->dev, NULL);
@@ -4173,7 +4282,11 @@ static int mvneta_probe(struct platform_device *pdev)
 	pp->tx_csum_limit = tx_csum_limit;
 
 	dram_target_info = mv_mbus_dram_info();
-	if (dram_target_info)
+	/* Armada3700 requires setting default configuration of Mbus
+	 * windows, however without using filled mbus_dram_target_info
+	 * structure.
+	 */
+	if (dram_target_info || pp->neta_armada3700)
 		mvneta_conf_mbus_windows(pp, dram_target_info);
 
 	pp->tx_ring_size = MVNETA_MAX_TXD;
@@ -4206,11 +4319,20 @@ static int mvneta_probe(struct platform_device *pdev)
 		goto err_netdev;
 	}
 
-	for_each_present_cpu(cpu) {
-		struct mvneta_pcpu_port *port = per_cpu_ptr(pp->ports, cpu);
+	/* Armada3700 network controller does not support per-cpu
+	 * operation, so only single NAPI should be initialized.
+	 */
+	if (pp->neta_armada3700) {
+		netif_napi_add(dev, &pp->napi, mvneta_poll, NAPI_POLL_WEIGHT);
+	} else {
+		for_each_present_cpu(cpu) {
+			struct mvneta_pcpu_port *port =
+				per_cpu_ptr(pp->ports, cpu);
 
-		netif_napi_add(dev, &port->napi, mvneta_poll, NAPI_POLL_WEIGHT);
-		port->pp = pp;
+			netif_napi_add(dev, &port->napi, mvneta_poll,
+				       NAPI_POLL_WEIGHT);
+			port->pp = pp;
+		}
 	}
 
 	dev->features = NETIF_F_SG | NETIF_F_IP_CSUM | NETIF_F_TSO;
@@ -4295,6 +4417,7 @@ static int mvneta_remove(struct platform_device *pdev)
 static const struct of_device_id mvneta_match[] = {
 	{ .compatible = "marvell,armada-370-neta" },
 	{ .compatible = "marvell,armada-xp-neta" },
+	{ .compatible = "marvell,armada-3700-neta" },
 	{ }
 };
 MODULE_DEVICE_TABLE(of, mvneta_match);
-- 
git-series 0.8.10


* [PATCH v3 net-next 6/6] ARM64: dts: marvell: Add network support for Armada 3700
  2016-11-29  9:37 [PATCH v3 net-next 0/6] Support Armada 37xx SoC (ARMv8 64-bits) in mvneta driver Gregory CLEMENT
                   ` (4 preceding siblings ...)
  2016-11-29  9:37 ` [PATCH v3 net-next 5/6] net: mvneta: Add network support for Armada 3700 SoC Gregory CLEMENT
@ 2016-11-29  9:37 ` Gregory CLEMENT
  2016-11-29 10:05 ` [PATCH v3 net-next 0/6] Support Armada 37xx SoC (ARMv8 64-bits) in mvneta driver Gregory CLEMENT
  6 siblings, 0 replies; 14+ messages in thread
From: Gregory CLEMENT @ 2016-11-29  9:37 UTC (permalink / raw)
  To: David S. Miller, linux-kernel, netdev
  Cc: Jisheng Zhang, Arnd Bergmann, Jason Cooper, Andrew Lunn,
	Sebastian Hesselbarth, Gregory CLEMENT, Thomas Petazzoni,
	linux-arm-kernel, Nadav Haklai, Marcin Wojtas, Dmitri Epshtein,
	Yelena Krivosheev

Add NETA nodes for network support in the device trees of both the SoC
and the board.

Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
---
 arch/arm64/boot/dts/marvell/armada-3720-db.dts | 23 +++++++++++++++++++-
 arch/arm64/boot/dts/marvell/armada-37xx.dtsi   | 23 +++++++++++++++++++-
 2 files changed, 46 insertions(+), 0 deletions(-)

diff --git a/arch/arm64/boot/dts/marvell/armada-3720-db.dts b/arch/arm64/boot/dts/marvell/armada-3720-db.dts
index 1372e9a6aaa4..c8b82e4145de 100644
--- a/arch/arm64/boot/dts/marvell/armada-3720-db.dts
+++ b/arch/arm64/boot/dts/marvell/armada-3720-db.dts
@@ -81,3 +81,26 @@
 &pcie0 {
 	status = "okay";
 };
+
+&mdio {
+	status = "okay";
+	phy0: ethernet-phy@0 {
+		reg = <0>;
+	};
+
+	phy1: ethernet-phy@1 {
+		reg = <1>;
+	};
+};
+
+&eth0 {
+	phy-mode = "rgmii-id";
+	phy = <&phy0>;
+	status = "okay";
+};
+
+&eth1 {
+	phy-mode = "rgmii-id";
+	phy = <&phy1>;
+	status = "okay";
+};
diff --git a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
index e9bd58793464..3b8eb45bdc76 100644
--- a/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
+++ b/arch/arm64/boot/dts/marvell/armada-37xx.dtsi
@@ -140,6 +140,29 @@
 				};
 			};
 
+			eth0: ethernet@30000 {
+				   compatible = "marvell,armada-3700-neta";
+				   reg = <0x30000 0x4000>;
+				   interrupts = <GIC_SPI 42 IRQ_TYPE_LEVEL_HIGH>;
+				   clocks = <&sb_periph_clk 8>;
+				   status = "disabled";
+			};
+
+			mdio: mdio@32004 {
+				#address-cells = <1>;
+				#size-cells = <0>;
+				compatible = "marvell,orion-mdio";
+				reg = <0x32004 0x4>;
+			};
+
+			eth1: ethernet@40000 {
+				compatible = "marvell,armada-3700-neta";
+				reg = <0x40000 0x4000>;
+				interrupts = <GIC_SPI 45 IRQ_TYPE_LEVEL_HIGH>;
+				clocks = <&sb_periph_clk 7>;
+				status = "disabled";
+			};
+
 			usb3: usb@58000 {
 				compatible = "marvell,armada3700-xhci",
 				"generic-xhci";
-- 
git-series 0.8.10


* Re: [PATCH v3 net-next 2/6] net: mvneta: Use cacheable memory to store the rx buffer virtual address
  2016-11-29  9:37 ` [PATCH v3 net-next 2/6] net: mvneta: Use cacheable memory to store the rx buffer virtual address Gregory CLEMENT
@ 2016-11-29  9:50   ` Marcin Wojtas
  2016-11-29 10:17     ` Gregory CLEMENT
  2016-11-29  9:59   ` Marcin Wojtas
  1 sibling, 1 reply; 14+ messages in thread
From: Marcin Wojtas @ 2016-11-29  9:50 UTC (permalink / raw)
  To: Gregory CLEMENT
  Cc: David S. Miller, linux-kernel, netdev, Jisheng Zhang,
	Arnd Bergmann, Jason Cooper, Andrew Lunn, Sebastian Hesselbarth,
	Thomas Petazzoni, linux-arm-kernel, Nadav Haklai,
	Dmitri Epshtein, Yelena Krivosheev

Hi Gregory,

Apparently HWBM has a mistake in its implementation, please see below.

2016-11-29 10:37 GMT+01:00 Gregory CLEMENT <gregory.clement@free-electrons.com>:
> Until now the virtual address of the received buffer was stored in the
> cookie field of the rx descriptor. However, this field is only 32 bits
> wide, which prevents the driver from being used on 64-bit architectures.
>
> With this patch the virtual address is stored in an array not shared
> with the hardware (so the DMA API is no longer needed for it). Thanks to
> this, the array can be accessed through the cache, unlike the rx
> descriptor members.
>
> The change is done in the swbm path only because the hwbm uses the
> cookie field; this also means that the hwbm is currently not usable on
> 64-bit platforms.
>
> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
> ---
>  drivers/net/ethernet/marvell/mvneta.c | 93 ++++++++++++++++++++++++----
>  1 file changed, 81 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> index 1b84f746d748..32b142d0e44e 100644
> --- a/drivers/net/ethernet/marvell/mvneta.c
> +++ b/drivers/net/ethernet/marvell/mvneta.c
> @@ -561,6 +561,9 @@ struct mvneta_rx_queue {
>         u32 pkts_coal;
>         u32 time_coal;
>
> +       /* Virtual address of the RX buffer */
> +       void  **buf_virt_addr;
> +
>         /* Virtual address of the RX DMA descriptors array */
>         struct mvneta_rx_desc *descs;
>
> @@ -1573,10 +1576,14 @@ static void mvneta_tx_done_pkts_coal_set(struct mvneta_port *pp,
>
>  /* Handle rx descriptor fill by setting buf_cookie and buf_phys_addr */
>  static void mvneta_rx_desc_fill(struct mvneta_rx_desc *rx_desc,
> -                               u32 phys_addr, u32 cookie)
> +                               u32 phys_addr, void *virt_addr,
> +                               struct mvneta_rx_queue *rxq)
>  {
> -       rx_desc->buf_cookie = cookie;
> +       int i;
> +
>         rx_desc->buf_phys_addr = phys_addr;
> +       i = rx_desc - rxq->descs;
> +       rxq->buf_virt_addr[i] = virt_addr;
>  }
>
>  /* Decrement sent descriptors counter */
> @@ -1781,7 +1788,8 @@ EXPORT_SYMBOL_GPL(mvneta_frag_free);
>
>  /* Refill processing for SW buffer management */
>  static int mvneta_rx_refill(struct mvneta_port *pp,
> -                           struct mvneta_rx_desc *rx_desc)
> +                           struct mvneta_rx_desc *rx_desc,
> +                           struct mvneta_rx_queue *rxq)
>
>  {
>         dma_addr_t phys_addr;
> @@ -1799,7 +1807,7 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
>                 return -ENOMEM;
>         }
>
> -       mvneta_rx_desc_fill(rx_desc, phys_addr, (u32)data);
> +       mvneta_rx_desc_fill(rx_desc, phys_addr, data, rxq);
>         return 0;
>  }
>
> @@ -1861,7 +1869,12 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
>
>         for (i = 0; i < rxq->size; i++) {
>                 struct mvneta_rx_desc *rx_desc = rxq->descs + i;
> -               void *data = (void *)rx_desc->buf_cookie;
> +               void *data;
> +
> +               if (!pp->bm_priv)
> +                       data = rxq->buf_virt_addr[i];
> +               else
> +                       data = (void *)(uintptr_t)rx_desc->buf_cookie;
>
>                 dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
>                                  MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);
> @@ -1894,12 +1907,13 @@ static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
>                 unsigned char *data;
>                 dma_addr_t phys_addr;
>                 u32 rx_status, frag_size;
> -               int rx_bytes, err;
> +               int rx_bytes, err, index;
>
>                 rx_done++;
>                 rx_status = rx_desc->status;
>                 rx_bytes = rx_desc->data_size - (ETH_FCS_LEN + MVNETA_MH_SIZE);
> -               data = (unsigned char *)rx_desc->buf_cookie;
> +               index = rx_desc - rxq->descs;
> +               data = (unsigned char *)rxq->buf_virt_addr[index];
>                 phys_addr = rx_desc->buf_phys_addr;
>
>                 if (!mvneta_rxq_desc_is_first_last(rx_status) ||
> @@ -1938,7 +1952,7 @@ static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
>                 }
>
>                 /* Refill processing */
> -               err = mvneta_rx_refill(pp, rx_desc);
> +               err = mvneta_rx_refill(pp, rx_desc, rxq);
>                 if (err) {
>                         netdev_err(dev, "Linux processing - Can't refill\n");
>                         rxq->missed++;
> @@ -2020,7 +2034,7 @@ static int mvneta_rx_hwbm(struct mvneta_port *pp, int rx_todo,
>                 rx_done++;
>                 rx_status = rx_desc->status;
>                 rx_bytes = rx_desc->data_size - (ETH_FCS_LEN + MVNETA_MH_SIZE);
> -               data = (unsigned char *)rx_desc->buf_cookie;
> +               data = (u8 *)(uintptr_t)rx_desc->buf_cookie;
>                 phys_addr = rx_desc->buf_phys_addr;
>                 pool_id = MVNETA_RX_GET_BM_POOL_ID(rx_desc);
>                 bm_pool = &pp->bm_priv->bm_pools[pool_id];
> @@ -2708,6 +2722,56 @@ static int mvneta_poll(struct napi_struct *napi, int budget)
>         return rx_done;
>  }
>
> +/* Refill processing for HW buffer management */
> +static int mvneta_rx_hwbm_refill(struct mvneta_port *pp,
> +                                struct mvneta_rx_desc *rx_desc)
> +
> +{
> +       dma_addr_t phys_addr;
> +       void *data;
> +
> +       data = mvneta_frag_alloc(pp->frag_size);
> +       if (!data)
> +               return -ENOMEM;
> +
> +       phys_addr = dma_map_single(pp->dev->dev.parent, data,
> +                                  MVNETA_RX_BUF_SIZE(pp->pkt_size),
> +                                  DMA_FROM_DEVICE);
> +       if (unlikely(dma_mapping_error(pp->dev->dev.parent, phys_addr))) {
> +               mvneta_frag_free(pp->frag_size, data);
> +               return -ENOMEM;
> +       }
> +
> +       rx_desc->buf_phys_addr = phys_addr;
> +       rx_desc->buf_cookie = (uintptr_t)data;
> +
> +       return 0;
> +}
> +
> +/* Handle rxq fill: allocates rxq skbs; called when initializing a port */
> +static int mvneta_rxq_bm_fill(struct mvneta_port *pp,
> +                             struct mvneta_rx_queue *rxq,
> +                             int num)
> +{
> +       int i;
> +
> +       for (i = 0; i < num; i++) {
> +               memset(rxq->descs + i, 0, sizeof(struct mvneta_rx_desc));
> +               if (mvneta_rx_hwbm_refill(pp, rxq->descs + i) != 0) {
> +                       netdev_err(pp->dev, "%s:rxq %d, %d of %d buffs  filled\n",
> +                                  __func__, rxq->id, i, num);
> +                       break;
> +               }
> +       }
> +
> +       /* Add this number of RX descriptors as non occupied (ready to
> +        * get packets)
> +        */
> +       mvneta_rxq_non_occup_desc_add(pp, rxq, i);
> +
> +       return i;
> +}
> +
>  /* Handle rxq fill: allocates rxq skbs; called when initializing a port */
>  static int mvneta_rxq_fill(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
>                            int num)
> @@ -2716,7 +2780,7 @@ static int mvneta_rxq_fill(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
>
>         for (i = 0; i < num; i++) {
>                 memset(rxq->descs + i, 0, sizeof(struct mvneta_rx_desc));
> -               if (mvneta_rx_refill(pp, rxq->descs + i) != 0) {
> +               if (mvneta_rx_refill(pp, rxq->descs + i, rxq) != 0) {
>                         netdev_err(pp->dev, "%s:rxq %d, %d of %d buffs  filled\n",
>                                 __func__, rxq->id, i, num);
>                         break;
> @@ -2784,14 +2848,14 @@ static int mvneta_rxq_init(struct mvneta_port *pp,
>                 mvneta_rxq_buf_size_set(pp, rxq,
>                                         MVNETA_RX_BUF_SIZE(pp->pkt_size));
>                 mvneta_rxq_bm_disable(pp, rxq);
> +               mvneta_rxq_fill(pp, rxq, rxq->size);
>         } else {
>                 mvneta_rxq_bm_enable(pp, rxq);
>                 mvneta_rxq_long_pool_set(pp, rxq);
>                 mvneta_rxq_short_pool_set(pp, rxq);
> +               mvneta_rxq_bm_fill(pp, rxq, rxq->size);

Manually filling the descriptors with new buffers is redundant. For
HWBM, all buffers are allocated in mvneta_bm_construct() and at runtime
they are put into the descriptors by the hardware. I think it's enough
to add here:
mvneta_rxq_non_occup_desc_add(pp, rxq, rxq->size);

And remove mvneta_rxq_bm_fill() and mvneta_rx_hwbm_refill().
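
In other words, the HWBM branch of mvneta_rxq_init() would reduce to
something like this (a sketch of the suggested change, untested):

	} else {
		mvneta_rxq_bm_enable(pp, rxq);
		mvneta_rxq_long_pool_set(pp, rxq);
		mvneta_rxq_short_pool_set(pp, rxq);
		/* Buffers were already handed to the pools by
		 * mvneta_bm_construct(); just mark the whole ring as
		 * ready for the hardware to fill.
		 */
		mvneta_rxq_non_occup_desc_add(pp, rxq, rxq->size);
	}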

Best regards,
Marcin


* Re: [PATCH v3 net-next 2/6] net: mvneta: Use cacheable memory to store the rx buffer virtual address
  2016-11-29  9:37 ` [PATCH v3 net-next 2/6] net: mvneta: Use cacheable memory to store the rx buffer virtual address Gregory CLEMENT
  2016-11-29  9:50   ` Marcin Wojtas
@ 2016-11-29  9:59   ` Marcin Wojtas
  2016-11-29 10:19     ` Gregory CLEMENT
  1 sibling, 1 reply; 14+ messages in thread
From: Marcin Wojtas @ 2016-11-29  9:59 UTC (permalink / raw)
  To: Gregory CLEMENT
  Cc: David S. Miller, linux-kernel, netdev, Jisheng Zhang,
	Arnd Bergmann, Jason Cooper, Andrew Lunn, Sebastian Hesselbarth,
	Thomas Petazzoni, linux-arm-kernel, Nadav Haklai,
	Dmitri Epshtein, Yelena Krivosheev

Hi Gregory,

Another remark below, sorry for the noise.

2016-11-29 10:37 GMT+01:00 Gregory CLEMENT <gregory.clement@free-electrons.com>:
> Until now the virtual address of the received buffer was stored in the
> cookie field of the rx descriptor. However, this field is only 32 bits
> wide, which prevents the driver from being used on 64-bit architectures.
>
> With this patch the virtual address is stored in an array not shared
> with the hardware (so the DMA API is no longer needed for it). Thanks to
> this, the array can be accessed through the cache, unlike the rx
> descriptor members.
>
> The change is done in the swbm path only because the hwbm uses the
> cookie field; this also means that the hwbm is currently not usable on
> 64-bit platforms.
>
> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
> ---
>  drivers/net/ethernet/marvell/mvneta.c | 93 ++++++++++++++++++++++++----
>  1 file changed, 81 insertions(+), 12 deletions(-)
>
> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
> index 1b84f746d748..32b142d0e44e 100644
> --- a/drivers/net/ethernet/marvell/mvneta.c
> +++ b/drivers/net/ethernet/marvell/mvneta.c
> @@ -561,6 +561,9 @@ struct mvneta_rx_queue {
>         u32 pkts_coal;
>         u32 time_coal;
>
> +       /* Virtual address of the RX buffer */
> +       void  **buf_virt_addr;
> +
>         /* Virtual address of the RX DMA descriptors array */
>         struct mvneta_rx_desc *descs;
>
> @@ -1573,10 +1576,14 @@ static void mvneta_tx_done_pkts_coal_set(struct mvneta_port *pp,
>
>  /* Handle rx descriptor fill by setting buf_cookie and buf_phys_addr */
>  static void mvneta_rx_desc_fill(struct mvneta_rx_desc *rx_desc,
> -                               u32 phys_addr, u32 cookie)
> +                               u32 phys_addr, void *virt_addr,
> +                               struct mvneta_rx_queue *rxq)
>  {
> -       rx_desc->buf_cookie = cookie;
> +       int i;
> +
>         rx_desc->buf_phys_addr = phys_addr;
> +       i = rx_desc - rxq->descs;
> +       rxq->buf_virt_addr[i] = virt_addr;
>  }
>
>  /* Decrement sent descriptors counter */
> @@ -1781,7 +1788,8 @@ EXPORT_SYMBOL_GPL(mvneta_frag_free);
>
>  /* Refill processing for SW buffer management */
>  static int mvneta_rx_refill(struct mvneta_port *pp,
> -                           struct mvneta_rx_desc *rx_desc)
> +                           struct mvneta_rx_desc *rx_desc,
> +                           struct mvneta_rx_queue *rxq)
>
>  {
>         dma_addr_t phys_addr;
> @@ -1799,7 +1807,7 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
>                 return -ENOMEM;
>         }
>
> -       mvneta_rx_desc_fill(rx_desc, phys_addr, (u32)data);
> +       mvneta_rx_desc_fill(rx_desc, phys_addr, data, rxq);
>         return 0;
>  }
>
> @@ -1861,7 +1869,12 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
>
>         for (i = 0; i < rxq->size; i++) {
>                 struct mvneta_rx_desc *rx_desc = rxq->descs + i;
> -               void *data = (void *)rx_desc->buf_cookie;
> +               void *data;
> +
> +               if (!pp->bm_priv)
> +                       data = rxq->buf_virt_addr[i];
> +               else
> +                       data = (void *)(uintptr_t)rx_desc->buf_cookie;

Dropping packets for HWBM (in fact, returning the dropped buffers to the
pool) is done a couple of lines above, so this point will never be
reached with HWBM enabled (and the HWBM branch here is also incorrect).
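
That is, the loop touched by this hunk only ever runs for SWBM, so it
could simply read (a sketch, assuming the early HWBM return above it):

	for (i = 0; i < rxq->size; i++) {
		struct mvneta_rx_desc *rx_desc = rxq->descs + i;
		void *data = rxq->buf_virt_addr[i];	/* SWBM only here */

		dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
				 MVNETA_RX_BUF_SIZE(pp->pkt_size),
				 DMA_FROM_DEVICE);
		mvneta_frag_free(pp->frag_size, data);
	}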

Best regards,
Marcin


* Re: [PATCH v3 net-next 0/6] Support Armada 37xx SoC (ARMv8 64-bits) in mvneta driver
  2016-11-29  9:37 [PATCH v3 net-next 0/6] Support Armada 37xx SoC (ARMv8 64-bits) in mvneta driver Gregory CLEMENT
                   ` (5 preceding siblings ...)
  2016-11-29  9:37 ` [PATCH v3 net-next 6/6] ARM64: dts: marvell: Add network support for Armada 3700 Gregory CLEMENT
@ 2016-11-29 10:05 ` Gregory CLEMENT
  6 siblings, 0 replies; 14+ messages in thread
From: Gregory CLEMENT @ 2016-11-29 10:05 UTC (permalink / raw)
  To: David S. Miller
  Cc: linux-kernel, netdev, Jisheng Zhang, Arnd Bergmann, Jason Cooper,
	Andrew Lunn, Sebastian Hesselbarth, Thomas Petazzoni,
	linux-arm-kernel, Nadav Haklai, Marcin Wojtas, Dmitri Epshtein,
	Yelena Krivosheev

Hi,
 
 On Tue, Nov 29 2016, Gregory CLEMENT <gregory.clement@free-electrons.com> wrote:

> Hi,
>
> The Armada 37xx is a new ARMv8 SoC from Marvell using the same network
> controller as the older Armada 370/38x/XP SoCs. This series adapts the
> driver so that it can be used on this new SoC. The main changes
> are:
>
> - 64-bit support: the first patches allow using the driver on a 64-bit
>   architecture.
>
> - MBUS support: the mbus configuration is different on Armada 37xx
>   from the older SoCs.
>
> - per-CPU interrupts: the Armada 37xx does not support per-CPU interrupts
>   for the NETA IP, so the non-per-CPU behavior was added back.
>
> The first item is solved by patches 1 to 3.
> The last two items are solved by patch 4.
> In patch 5 the DT support is added.
>
> Besides Armada 37xx, the series has been tested on Armada XP and
> Armada 38x (with Hardware Buffer Management and with Software Buffer
> Management).
>
> Thanks,
>

I forgot to commit my cover letter with git-series, so the old one was
sent. It should have been the following:

Hi,

The Armada 37xx is a new ARMv8 SoC from Marvell using the same network
controller as the older Armada 370/38x/XP SoCs. This series adapts the
driver so that it can be used on this new SoC. The main changes
are:

- 64-bit support: the first patches allow using the driver on a 64-bit
  architecture.

- MBUS support: the mbus configuration is different on Armada 37xx
  from the older SoCs.

- per-CPU interrupts: the Armada 37xx does not support per-CPU interrupts
  for the NETA IP, so the non-per-CPU behavior was added back.

The first patch is an optimization of the rx path in SWBM mode.
The first item is solved by patches 2 and 3.
The last two items are solved by patch 5.
In patch 6 the DT support is added.

Besides Armada 37xx, this new series has again been tested on Armada
XP and Armada 38x (with Hardware Buffer Management and with Software
Buffer Management).

This is the 3rd version of the series:

- 1st version:
http://lists.infradead.org/pipermail/linux-arm-kernel/2016-November/469588.html

- 2nd version:
http://lists.infradead.org/pipermail/linux-arm-kernel/2016-November/470476.html

Changelog:
v2 -> v3:

 - Add patch 1 "Optimize rx path for small frame"

 - Fix the kbuild error by moving the "phys_addr += pp->rx_offset_correction;"
  line from patch 2 to patch 3 where rx_offset_correction is introduced.

 - Move the memory allocation of the rxq's buf_virt_addr so that it is
   done by the probe function, in order to avoid a memory leak.


> Gregory
>
> Gregory CLEMENT (4):
>   net: mvneta: Optimize rx path for small frame
>   net: mvneta: Use cacheable memory to store the rx buffer virtual address
>   net: mvneta: Only disable mvneta_bm for 64-bits
>   ARM64: dts: marvell: Add network support for Armada 3700
>
> Marcin Wojtas (2):
>   net: mvneta: Convert to be 64 bits compatible
>   net: mvneta: Add network support for Armada 3700 SoC
>
>  Documentation/devicetree/bindings/net/marvell-armada-370-neta.txt |   7 +-
>  arch/arm64/boot/dts/marvell/armada-3720-db.dts                    |  23 ++++-
>  arch/arm64/boot/dts/marvell/armada-37xx.dtsi                      |  23 ++++-
>  drivers/net/ethernet/marvell/Kconfig                              |  10 +-
>  drivers/net/ethernet/marvell/mvneta.c                             | 400 ++++++++++++++++++++++++++++++++++++++++++++++++++++++------------------
>  5 files changed, 361 insertions(+), 102 deletions(-)
>
> base-commit: 436accebb53021ef7c63535f60bda410aa87c136
> -- 
> git-series 0.8.10

-- 
Gregory Clement, Free Electrons
Kernel, drivers, real-time and embedded Linux
development, consulting, training and support.
http://free-electrons.com

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v3 net-next 2/6] net: mvneta: Use cacheable memory to store the rx buffer virtual address
  2016-11-29  9:50   ` Marcin Wojtas
@ 2016-11-29 10:17     ` Gregory CLEMENT
  0 siblings, 0 replies; 14+ messages in thread
From: Gregory CLEMENT @ 2016-11-29 10:17 UTC (permalink / raw)
  To: Marcin Wojtas
  Cc: David S. Miller, linux-kernel, netdev, Jisheng Zhang,
	Arnd Bergmann, Jason Cooper, Andrew Lunn, Sebastian Hesselbarth,
	Thomas Petazzoni, linux-arm-kernel, Nadav Haklai,
	Dmitri Epshtein, Yelena Krivosheev

Hi Marcin,
 
 On Tue, Nov 29 2016, Marcin Wojtas <mw@semihalf.com> wrote:

> Hi Gregory,
>
> Apparently HWBM had a mistake in implementation, please see below.
>
> 2016-11-29 10:37 GMT+01:00 Gregory CLEMENT <gregory.clement@free-electrons.com>:
>> Until now the virtual address of the received buffer were stored in the
>> cookie field of the rx descriptor. However, this field is 32-bits only
>> which prevents to use the driver on a 64-bits architecture.
>>
>> With this patch the virtual address is stored in an array not shared with
>> the hardware (no more need to use the DMA API). Thanks to this, it is
>> possible to use cache contrary to the access of the rx descriptor member.
>>
>> The change is done in the swbm path only because the hwbm uses the cookie
>> field, this also means that currently the hwbm is not usable in 64-bits.
>>
>> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
>> ---
>>  drivers/net/ethernet/marvell/mvneta.c | 93 ++++++++++++++++++++++++----
>>  1 file changed, 81 insertions(+), 12 deletions(-)
>>
>> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
>> index 1b84f746d748..32b142d0e44e 100644
>> --- a/drivers/net/ethernet/marvell/mvneta.c
>> +++ b/drivers/net/ethernet/marvell/mvneta.c
>> @@ -561,6 +561,9 @@ struct mvneta_rx_queue {
>>         u32 pkts_coal;
>>         u32 time_coal;
>>
>> +       /* Virtual address of the RX buffer */
>> +       void  **buf_virt_addr;
>> +
>>         /* Virtual address of the RX DMA descriptors array */
>>         struct mvneta_rx_desc *descs;
>>
>> @@ -1573,10 +1576,14 @@ static void mvneta_tx_done_pkts_coal_set(struct mvneta_port *pp,
>>
>>  /* Handle rx descriptor fill by setting buf_cookie and buf_phys_addr */
>>  static void mvneta_rx_desc_fill(struct mvneta_rx_desc *rx_desc,
>> -                               u32 phys_addr, u32 cookie)
>> +                               u32 phys_addr, void *virt_addr,
>> +                               struct mvneta_rx_queue *rxq)
>>  {
>> -       rx_desc->buf_cookie = cookie;
>> +       int i;
>> +
>>         rx_desc->buf_phys_addr = phys_addr;
>> +       i = rx_desc - rxq->descs;
>> +       rxq->buf_virt_addr[i] = virt_addr;
>>  }
>>
>>  /* Decrement sent descriptors counter */
>> @@ -1781,7 +1788,8 @@ EXPORT_SYMBOL_GPL(mvneta_frag_free);
>>
>>  /* Refill processing for SW buffer management */
>>  static int mvneta_rx_refill(struct mvneta_port *pp,
>> -                           struct mvneta_rx_desc *rx_desc)
>> +                           struct mvneta_rx_desc *rx_desc,
>> +                           struct mvneta_rx_queue *rxq)
>>
>>  {
>>         dma_addr_t phys_addr;
>> @@ -1799,7 +1807,7 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
>>                 return -ENOMEM;
>>         }
>>
>> -       mvneta_rx_desc_fill(rx_desc, phys_addr, (u32)data);
>> +       mvneta_rx_desc_fill(rx_desc, phys_addr, data, rxq);
>>         return 0;
>>  }
>>
>> @@ -1861,7 +1869,12 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
>>
>>         for (i = 0; i < rxq->size; i++) {
>>                 struct mvneta_rx_desc *rx_desc = rxq->descs + i;
>> -               void *data = (void *)rx_desc->buf_cookie;
>> +               void *data;
>> +
>> +               if (!pp->bm_priv)
>> +                       data = rxq->buf_virt_addr[i];
>> +               else
>> +                       data = (void *)(uintptr_t)rx_desc->buf_cookie;
>>
>>                 dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
>>                                  MVNETA_RX_BUF_SIZE(pp->pkt_size), DMA_FROM_DEVICE);
>> @@ -1894,12 +1907,13 @@ static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
>>                 unsigned char *data;
>>                 dma_addr_t phys_addr;
>>                 u32 rx_status, frag_size;
>> -               int rx_bytes, err;
>> +               int rx_bytes, err, index;
>>
>>                 rx_done++;
>>                 rx_status = rx_desc->status;
>>                 rx_bytes = rx_desc->data_size - (ETH_FCS_LEN + MVNETA_MH_SIZE);
>> -               data = (unsigned char *)rx_desc->buf_cookie;
>> +               index = rx_desc - rxq->descs;
>> +               data = (unsigned char *)rxq->buf_virt_addr[index];
>>                 phys_addr = rx_desc->buf_phys_addr;
>>
>>                 if (!mvneta_rxq_desc_is_first_last(rx_status) ||
>> @@ -1938,7 +1952,7 @@ static int mvneta_rx_swbm(struct mvneta_port *pp, int rx_todo,
>>                 }
>>
>>                 /* Refill processing */
>> -               err = mvneta_rx_refill(pp, rx_desc);
>> +               err = mvneta_rx_refill(pp, rx_desc, rxq);
>>                 if (err) {
>>                         netdev_err(dev, "Linux processing - Can't refill\n");
>>                         rxq->missed++;
>> @@ -2020,7 +2034,7 @@ static int mvneta_rx_hwbm(struct mvneta_port *pp, int rx_todo,
>>                 rx_done++;
>>                 rx_status = rx_desc->status;
>>                 rx_bytes = rx_desc->data_size - (ETH_FCS_LEN + MVNETA_MH_SIZE);
>> -               data = (unsigned char *)rx_desc->buf_cookie;
>> +               data = (u8 *)(uintptr_t)rx_desc->buf_cookie;
>>                 phys_addr = rx_desc->buf_phys_addr;
>>                 pool_id = MVNETA_RX_GET_BM_POOL_ID(rx_desc);
>>                 bm_pool = &pp->bm_priv->bm_pools[pool_id];
>> @@ -2708,6 +2722,56 @@ static int mvneta_poll(struct napi_struct *napi, int budget)
>>         return rx_done;
>>  }
>>
>> +/* Refill processing for HW buffer management */
>> +static int mvneta_rx_hwbm_refill(struct mvneta_port *pp,
>> +                                struct mvneta_rx_desc *rx_desc)
>> +
>> +{
>> +       dma_addr_t phys_addr;
>> +       void *data;
>> +
>> +       data = mvneta_frag_alloc(pp->frag_size);
>> +       if (!data)
>> +               return -ENOMEM;
>> +
>> +       phys_addr = dma_map_single(pp->dev->dev.parent, data,
>> +                                  MVNETA_RX_BUF_SIZE(pp->pkt_size),
>> +                                  DMA_FROM_DEVICE);
>> +       if (unlikely(dma_mapping_error(pp->dev->dev.parent, phys_addr))) {
>> +               mvneta_frag_free(pp->frag_size, data);
>> +               return -ENOMEM;
>> +       }
>> +
>> +       rx_desc->buf_phys_addr = phys_addr;
>> +       rx_desc->buf_cookie = (uintptr_t)data;
>> +
>> +       return 0;
>> +}
>> +
>> +/* Handle rxq fill: allocates rxq skbs; called when initializing a port */
>> +static int mvneta_rxq_bm_fill(struct mvneta_port *pp,
>> +                             struct mvneta_rx_queue *rxq,
>> +                             int num)
>> +{
>> +       int i;
>> +
>> +       for (i = 0; i < num; i++) {
>> +               memset(rxq->descs + i, 0, sizeof(struct mvneta_rx_desc));
>> +               if (mvneta_rx_hwbm_refill(pp, rxq->descs + i) != 0) {
>> +                       netdev_err(pp->dev, "%s:rxq %d, %d of %d buffs  filled\n",
>> +                                  __func__, rxq->id, i, num);
>> +                       break;
>> +               }
>> +       }
>> +
>> +       /* Add this number of RX descriptors as non occupied (ready to
>> +        * get packets)
>> +        */
>> +       mvneta_rxq_non_occup_desc_add(pp, rxq, i);
>> +
>> +       return i;
>> +}
>> +
>>  /* Handle rxq fill: allocates rxq skbs; called when initializing a port */
>>  static int mvneta_rxq_fill(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
>>                            int num)
>> @@ -2716,7 +2780,7 @@ static int mvneta_rxq_fill(struct mvneta_port *pp, struct mvneta_rx_queue *rxq,
>>
>>         for (i = 0; i < num; i++) {
>>                 memset(rxq->descs + i, 0, sizeof(struct mvneta_rx_desc));
>> -               if (mvneta_rx_refill(pp, rxq->descs + i) != 0) {
>> +               if (mvneta_rx_refill(pp, rxq->descs + i, rxq) != 0) {
>>                         netdev_err(pp->dev, "%s:rxq %d, %d of %d buffs  filled\n",
>>                                 __func__, rxq->id, i, num);
>>                         break;
>> @@ -2784,14 +2848,14 @@ static int mvneta_rxq_init(struct mvneta_port *pp,
>>                 mvneta_rxq_buf_size_set(pp, rxq,
>>                                         MVNETA_RX_BUF_SIZE(pp->pkt_size));
>>                 mvneta_rxq_bm_disable(pp, rxq);
>> +               mvneta_rxq_fill(pp, rxq, rxq->size);
>>         } else {
>>                 mvneta_rxq_bm_enable(pp, rxq);
>>                 mvneta_rxq_long_pool_set(pp, rxq);
>>                 mvneta_rxq_short_pool_set(pp, rxq);
>> +               mvneta_rxq_bm_fill(pp, rxq, rxq->size);
>
> Manual filling descriptors with new buffers is redundant. For HWBM,
> all buffers are allocated in mvneta_bm_construct() and in runtime they
> are put into descriptors by hardware. I think it's enough to add here:
> mvneta_rxq_non_occup_desc_add(pp, rxq, rxq->size);
>
> And remove mvneta_rxq_bm_fill and mvneta_rx_hwbm_refill.

You're right, I will do it; it will simplify the code.
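
For the record, the HWBM branch of mvneta_rxq_init() would then reduce
to something like the following (a sketch of your suggestion, not the
final patch):

	} else {
		mvneta_rxq_bm_enable(pp, rxq);
		mvneta_rxq_long_pool_set(pp, rxq);
		mvneta_rxq_short_pool_set(pp, rxq);
		/* The buffers were already allocated and handed to the
		 * pools by mvneta_bm_construct(), so only advertise the
		 * descriptors as non occupied (ready to get packets).
		 */
		mvneta_rxq_non_occup_desc_add(pp, rxq, rxq->size);
	}

with mvneta_rxq_bm_fill() and mvneta_rx_hwbm_refill() removed entirely.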

Thanks,

Gregory

>
> Best regards,
> Marcin

-- 
Gregory Clement, Free Electrons
Kernel, drivers, real-time and embedded Linux
development, consulting, training and support.
http://free-electrons.com

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v3 net-next 2/6] net: mvneta: Use cacheable memory to store the rx buffer virtual address
  2016-11-29  9:59   ` Marcin Wojtas
@ 2016-11-29 10:19     ` Gregory CLEMENT
  2016-11-29 10:34       ` Marcin Wojtas
  0 siblings, 1 reply; 14+ messages in thread
From: Gregory CLEMENT @ 2016-11-29 10:19 UTC (permalink / raw)
  To: Marcin Wojtas
  Cc: David S. Miller, linux-kernel, netdev, Jisheng Zhang,
	Arnd Bergmann, Jason Cooper, Andrew Lunn, Sebastian Hesselbarth,
	Thomas Petazzoni, linux-arm-kernel, Nadav Haklai,
	Dmitri Epshtein, Yelena Krivosheev

Hi Marcin,
 
 On Tue, Nov 29 2016, Marcin Wojtas <mw@semihalf.com> wrote:

> Hi Gregory,
>
> Another remark below, sorry for noise.
>
> 2016-11-29 10:37 GMT+01:00 Gregory CLEMENT <gregory.clement@free-electrons.com>:
>> Until now the virtual address of the received buffer were stored in the
>> cookie field of the rx descriptor. However, this field is 32-bits only
>> which prevents to use the driver on a 64-bits architecture.
>>
>> With this patch the virtual address is stored in an array not shared with
>> the hardware (no more need to use the DMA API). Thanks to this, it is
>> possible to use cache contrary to the access of the rx descriptor member.
>>
>> The change is done in the swbm path only because the hwbm uses the cookie
>> field, this also means that currently the hwbm is not usable in 64-bits.
>>
>> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
>> ---
>>  drivers/net/ethernet/marvell/mvneta.c | 93 ++++++++++++++++++++++++----
>>  1 file changed, 81 insertions(+), 12 deletions(-)
>>
>> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
>> index 1b84f746d748..32b142d0e44e 100644
>> --- a/drivers/net/ethernet/marvell/mvneta.c
>> +++ b/drivers/net/ethernet/marvell/mvneta.c
>> @@ -561,6 +561,9 @@ struct mvneta_rx_queue {
>>         u32 pkts_coal;
>>         u32 time_coal;
>>
>> +       /* Virtual address of the RX buffer */
>> +       void  **buf_virt_addr;
>> +
>>         /* Virtual address of the RX DMA descriptors array */
>>         struct mvneta_rx_desc *descs;
>>
>> @@ -1573,10 +1576,14 @@ static void mvneta_tx_done_pkts_coal_set(struct mvneta_port *pp,
>>
>>  /* Handle rx descriptor fill by setting buf_cookie and buf_phys_addr */
>>  static void mvneta_rx_desc_fill(struct mvneta_rx_desc *rx_desc,
>> -                               u32 phys_addr, u32 cookie)
>> +                               u32 phys_addr, void *virt_addr,
>> +                               struct mvneta_rx_queue *rxq)
>>  {
>> -       rx_desc->buf_cookie = cookie;
>> +       int i;
>> +
>>         rx_desc->buf_phys_addr = phys_addr;
>> +       i = rx_desc - rxq->descs;
>> +       rxq->buf_virt_addr[i] = virt_addr;
>>  }
>>
>>  /* Decrement sent descriptors counter */
>> @@ -1781,7 +1788,8 @@ EXPORT_SYMBOL_GPL(mvneta_frag_free);
>>
>>  /* Refill processing for SW buffer management */
>>  static int mvneta_rx_refill(struct mvneta_port *pp,
>> -                           struct mvneta_rx_desc *rx_desc)
>> +                           struct mvneta_rx_desc *rx_desc,
>> +                           struct mvneta_rx_queue *rxq)
>>
>>  {
>>         dma_addr_t phys_addr;
>> @@ -1799,7 +1807,7 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
>>                 return -ENOMEM;
>>         }
>>
>> -       mvneta_rx_desc_fill(rx_desc, phys_addr, (u32)data);
>> +       mvneta_rx_desc_fill(rx_desc, phys_addr, data, rxq);
>>         return 0;
>>  }
>>
>> @@ -1861,7 +1869,12 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
>>
>>         for (i = 0; i < rxq->size; i++) {
>>                 struct mvneta_rx_desc *rx_desc = rxq->descs + i;
>> -               void *data = (void *)rx_desc->buf_cookie;
>> +               void *data;
>> +
>> +               if (!pp->bm_priv)
>> +                       data = rxq->buf_virt_addr[i];
>> +               else
>> +                       data = (void *)(uintptr_t)rx_desc->buf_cookie;
>
> Dropping packets for HWBM (in fact returning dropped buffers to the
> pool) is done a couple of lines above. This point will never be

Indeed, I changed the code at every place buf_cookie was used and
missed the fact that for HWBM this code is never reached.

> reached with HWBM enabled (and it's also incorrect).

What is incorrect?

Gregory


>
> Best regards,
> Marcin

-- 
Gregory Clement, Free Electrons
Kernel, drivers, real-time and embedded Linux
development, consulting, training and support.
http://free-electrons.com

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v3 net-next 2/6] net: mvneta: Use cacheable memory to store the rx buffer virtual address
  2016-11-29 10:19     ` Gregory CLEMENT
@ 2016-11-29 10:34       ` Marcin Wojtas
  2016-11-29 10:39         ` Gregory CLEMENT
  0 siblings, 1 reply; 14+ messages in thread
From: Marcin Wojtas @ 2016-11-29 10:34 UTC (permalink / raw)
  To: Gregory CLEMENT
  Cc: David S. Miller, linux-kernel, netdev, Jisheng Zhang,
	Arnd Bergmann, Jason Cooper, Andrew Lunn, Sebastian Hesselbarth,
	Thomas Petazzoni, linux-arm-kernel, Nadav Haklai,
	Dmitri Epshtein, Yelena Krivosheev

Gregory,

2016-11-29 11:19 GMT+01:00 Gregory CLEMENT <gregory.clement@free-electrons.com>:
> Hi Marcin,
>
>  On mar., nov. 29 2016, Marcin Wojtas <mw@semihalf.com> wrote:
>
>> Hi Gregory,
>>
>> Another remark below, sorry for noise.
>>
>> 2016-11-29 10:37 GMT+01:00 Gregory CLEMENT <gregory.clement@free-electrons.com>:
>>> Until now the virtual address of the received buffer were stored in the
>>> cookie field of the rx descriptor. However, this field is 32-bits only
>>> which prevents to use the driver on a 64-bits architecture.
>>>
>>> With this patch the virtual address is stored in an array not shared with
>>> the hardware (no more need to use the DMA API). Thanks to this, it is
>>> possible to use cache contrary to the access of the rx descriptor member.
>>>
>>> The change is done in the swbm path only because the hwbm uses the cookie
>>> field, this also means that currently the hwbm is not usable in 64-bits.
>>>
>>> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
>>> ---
>>>  drivers/net/ethernet/marvell/mvneta.c | 93 ++++++++++++++++++++++++----
>>>  1 file changed, 81 insertions(+), 12 deletions(-)
>>>
>>> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
>>> index 1b84f746d748..32b142d0e44e 100644
>>> --- a/drivers/net/ethernet/marvell/mvneta.c
>>> +++ b/drivers/net/ethernet/marvell/mvneta.c
>>> @@ -561,6 +561,9 @@ struct mvneta_rx_queue {
>>>         u32 pkts_coal;
>>>         u32 time_coal;
>>>
>>> +       /* Virtual address of the RX buffer */
>>> +       void  **buf_virt_addr;
>>> +
>>>         /* Virtual address of the RX DMA descriptors array */
>>>         struct mvneta_rx_desc *descs;
>>>
>>> @@ -1573,10 +1576,14 @@ static void mvneta_tx_done_pkts_coal_set(struct mvneta_port *pp,
>>>
>>>  /* Handle rx descriptor fill by setting buf_cookie and buf_phys_addr */
>>>  static void mvneta_rx_desc_fill(struct mvneta_rx_desc *rx_desc,
>>> -                               u32 phys_addr, u32 cookie)
>>> +                               u32 phys_addr, void *virt_addr,
>>> +                               struct mvneta_rx_queue *rxq)
>>>  {
>>> -       rx_desc->buf_cookie = cookie;
>>> +       int i;
>>> +
>>>         rx_desc->buf_phys_addr = phys_addr;
>>> +       i = rx_desc - rxq->descs;
>>> +       rxq->buf_virt_addr[i] = virt_addr;
>>>  }
>>>
>>>  /* Decrement sent descriptors counter */
>>> @@ -1781,7 +1788,8 @@ EXPORT_SYMBOL_GPL(mvneta_frag_free);
>>>
>>>  /* Refill processing for SW buffer management */
>>>  static int mvneta_rx_refill(struct mvneta_port *pp,
>>> -                           struct mvneta_rx_desc *rx_desc)
>>> +                           struct mvneta_rx_desc *rx_desc,
>>> +                           struct mvneta_rx_queue *rxq)
>>>
>>>  {
>>>         dma_addr_t phys_addr;
>>> @@ -1799,7 +1807,7 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
>>>                 return -ENOMEM;
>>>         }
>>>
>>> -       mvneta_rx_desc_fill(rx_desc, phys_addr, (u32)data);
>>> +       mvneta_rx_desc_fill(rx_desc, phys_addr, data, rxq);
>>>         return 0;
>>>  }
>>>
>>> @@ -1861,7 +1869,12 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
>>>
>>>         for (i = 0; i < rxq->size; i++) {
>>>                 struct mvneta_rx_desc *rx_desc = rxq->descs + i;
>>> -               void *data = (void *)rx_desc->buf_cookie;
>>> +               void *data;
>>> +
>>> +               if (!pp->bm_priv)
>>> +                       data = rxq->buf_virt_addr[i];
>>> +               else
>>> +                       data = (void *)(uintptr_t)rx_desc->buf_cookie;
>>
>> Dropping packets for HWBM (in fact returning dropped buffers to the
>> pool) is done a couple of lines above. This point will never be
>
> indeed I changed the code at every place the buf_cookie was used and
> missed the fact that for HWBM this code was never reached.
>
>> reached with HWBM enabled (and it's also incorrect).
>
> What is incorrect?
>

Possibly calling dma_unmap_single() + mvneta_frag_free() on buffers
owned by the HWBM pool when dropping packets.

Thanks,
Marcin

^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH v3 net-next 2/6] net: mvneta: Use cacheable memory to store the rx buffer virtual address
  2016-11-29 10:34       ` Marcin Wojtas
@ 2016-11-29 10:39         ` Gregory CLEMENT
  0 siblings, 0 replies; 14+ messages in thread
From: Gregory CLEMENT @ 2016-11-29 10:39 UTC (permalink / raw)
  To: Marcin Wojtas
  Cc: David S. Miller, linux-kernel, netdev, Jisheng Zhang,
	Arnd Bergmann, Jason Cooper, Andrew Lunn, Sebastian Hesselbarth,
	Thomas Petazzoni, linux-arm-kernel, Nadav Haklai,
	Dmitri Epshtein, Yelena Krivosheev

Hi Marcin,
 
 On Tue, Nov 29 2016, Marcin Wojtas <mw@semihalf.com> wrote:

> Gregory,
>
> 2016-11-29 11:19 GMT+01:00 Gregory CLEMENT <gregory.clement@free-electrons.com>:
>> Hi Marcin,
>>
>>  On mar., nov. 29 2016, Marcin Wojtas <mw@semihalf.com> wrote:
>>
>>> Hi Gregory,
>>>
>>> Another remark below, sorry for noise.
>>>
>>> 2016-11-29 10:37 GMT+01:00 Gregory CLEMENT <gregory.clement@free-electrons.com>:
>>>> Until now the virtual address of the received buffer were stored in the
>>>> cookie field of the rx descriptor. However, this field is 32-bits only
>>>> which prevents to use the driver on a 64-bits architecture.
>>>>
>>>> With this patch the virtual address is stored in an array not shared with
>>>> the hardware (no more need to use the DMA API). Thanks to this, it is
>>>> possible to use cache contrary to the access of the rx descriptor member.
>>>>
>>>> The change is done in the swbm path only because the hwbm uses the cookie
>>>> field, this also means that currently the hwbm is not usable in 64-bits.
>>>>
>>>> Signed-off-by: Gregory CLEMENT <gregory.clement@free-electrons.com>
>>>> ---
>>>>  drivers/net/ethernet/marvell/mvneta.c | 93 ++++++++++++++++++++++++----
>>>>  1 file changed, 81 insertions(+), 12 deletions(-)
>>>>
>>>> diff --git a/drivers/net/ethernet/marvell/mvneta.c b/drivers/net/ethernet/marvell/mvneta.c
>>>> index 1b84f746d748..32b142d0e44e 100644
>>>> --- a/drivers/net/ethernet/marvell/mvneta.c
>>>> +++ b/drivers/net/ethernet/marvell/mvneta.c
>>>> @@ -561,6 +561,9 @@ struct mvneta_rx_queue {
>>>>         u32 pkts_coal;
>>>>         u32 time_coal;
>>>>
>>>> +       /* Virtual address of the RX buffer */
>>>> +       void  **buf_virt_addr;
>>>> +
>>>>         /* Virtual address of the RX DMA descriptors array */
>>>>         struct mvneta_rx_desc *descs;
>>>>
>>>> @@ -1573,10 +1576,14 @@ static void mvneta_tx_done_pkts_coal_set(struct mvneta_port *pp,
>>>>
>>>>  /* Handle rx descriptor fill by setting buf_cookie and buf_phys_addr */
>>>>  static void mvneta_rx_desc_fill(struct mvneta_rx_desc *rx_desc,
>>>> -                               u32 phys_addr, u32 cookie)
>>>> +                               u32 phys_addr, void *virt_addr,
>>>> +                               struct mvneta_rx_queue *rxq)
>>>>  {
>>>> -       rx_desc->buf_cookie = cookie;
>>>> +       int i;
>>>> +
>>>>         rx_desc->buf_phys_addr = phys_addr;
>>>> +       i = rx_desc - rxq->descs;
>>>> +       rxq->buf_virt_addr[i] = virt_addr;
>>>>  }
>>>>
>>>>  /* Decrement sent descriptors counter */
>>>> @@ -1781,7 +1788,8 @@ EXPORT_SYMBOL_GPL(mvneta_frag_free);
>>>>
>>>>  /* Refill processing for SW buffer management */
>>>>  static int mvneta_rx_refill(struct mvneta_port *pp,
>>>> -                           struct mvneta_rx_desc *rx_desc)
>>>> +                           struct mvneta_rx_desc *rx_desc,
>>>> +                           struct mvneta_rx_queue *rxq)
>>>>
>>>>  {
>>>>         dma_addr_t phys_addr;
>>>> @@ -1799,7 +1807,7 @@ static int mvneta_rx_refill(struct mvneta_port *pp,
>>>>                 return -ENOMEM;
>>>>         }
>>>>
>>>> -       mvneta_rx_desc_fill(rx_desc, phys_addr, (u32)data);
>>>> +       mvneta_rx_desc_fill(rx_desc, phys_addr, data, rxq);
>>>>         return 0;
>>>>  }
>>>>
>>>> @@ -1861,7 +1869,12 @@ static void mvneta_rxq_drop_pkts(struct mvneta_port *pp,
>>>>
>>>>         for (i = 0; i < rxq->size; i++) {
>>>>                 struct mvneta_rx_desc *rx_desc = rxq->descs + i;
>>>> -               void *data = (void *)rx_desc->buf_cookie;
>>>> +               void *data;
>>>> +
>>>> +               if (!pp->bm_priv)
>>>> +                       data = rxq->buf_virt_addr[i];
>>>> +               else
>>>> +                       data = (void *)(uintptr_t)rx_desc->buf_cookie;
>>>
>>> Dropping packets for HWBM (in fact returning dropped buffers to the
>>> pool) is done a couple of lines above. This point will never be
>>
>> indeed I changed the code at every place the buf_cookie was used and
>> missed the fact that for HWBM this code was never reached.
>>
>>> reached with HWBM enabled (and it's also incorrect).
>>
>> What is incorrect?
>>
>
> Possible dma_unmapping + mvneta_frag_free for buffers in HWBM, when
> dropping packets.

Yes, sure, but as you mentioned, this code is never reached when HWBM
is enabled. I thought there was another part of the code to fix.
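
To make that explicit rather than accidental, the SWBM-only unmap/free
loop in mvneta_rxq_drop_pkts() could be skipped for HWBM up front, e.g.
(a rough sketch on top of this patch, untested):

	if (pp->bm_priv)
		return; /* HWBM buffers already went back to the pools */

	for (i = 0; i < rxq->size; i++) {
		struct mvneta_rx_desc *rx_desc = rxq->descs + i;
		void *data = rxq->buf_virt_addr[i];

		dma_unmap_single(pp->dev->dev.parent, rx_desc->buf_phys_addr,
				 MVNETA_RX_BUF_SIZE(pp->pkt_size),
				 DMA_FROM_DEVICE);
		mvneta_frag_free(pp->frag_size, data);
	}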

Thanks,

Gregory

>
> Thanks,
> Marcin

-- 
Gregory Clement, Free Electrons
Kernel, drivers, real-time and embedded Linux
development, consulting, training and support.
http://free-electrons.com

^ permalink raw reply	[flat|nested] 14+ messages in thread

end of thread, other threads:[~2016-11-29 10:39 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-11-29  9:37 [PATCH v3 net-next 0/6] Support Armada 37xx SoC (ARMv8 64-bits) in mvneta driver Gregory CLEMENT
2016-11-29  9:37 ` [PATCH v3 net-next 1/6] net: mvneta: Optimize rx path for small frame Gregory CLEMENT
2016-11-29  9:37 ` [PATCH v3 net-next 2/6] net: mvneta: Use cacheable memory to store the rx buffer virtual address Gregory CLEMENT
2016-11-29  9:50   ` Marcin Wojtas
2016-11-29 10:17     ` Gregory CLEMENT
2016-11-29  9:59   ` Marcin Wojtas
2016-11-29 10:19     ` Gregory CLEMENT
2016-11-29 10:34       ` Marcin Wojtas
2016-11-29 10:39         ` Gregory CLEMENT
2016-11-29  9:37 ` [PATCH v3 net-next 3/6] net: mvneta: Convert to be 64 bits compatible Gregory CLEMENT
2016-11-29  9:37 ` [PATCH v3 net-next 4/6] net: mvneta: Only disable mvneta_bm for 64-bits Gregory CLEMENT
2016-11-29  9:37 ` [PATCH v3 net-next 5/6] net: mvneta: Add network support for Armada 3700 SoC Gregory CLEMENT
2016-11-29  9:37 ` [PATCH v3 net-next 6/6] ARM64: dts: marvell: Add network support for Armada 3700 Gregory CLEMENT
2016-11-29 10:05 ` [PATCH v3 net-next 0/6] Support Armada 37xx SoC (ARMv8 64-bits) in mvneta driver Gregory CLEMENT

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).