netdev.vger.kernel.org archive mirror
* [PATCH net-next 0/6] DPAA Ethernet changes
@ 2019-10-21 12:27 Madalin-cristian Bucur
  2019-10-21 12:27 ` [PATCH net-next 1/6] fsl/fman: don't touch liodn base regs reserved on non-PAMU SoCs Madalin-cristian Bucur
                   ` (6 more replies)
  0 siblings, 7 replies; 15+ messages in thread
From: Madalin-cristian Bucur @ 2019-10-21 12:27 UTC (permalink / raw)
  To: davem, netdev; +Cc: Roy Pledge, Laurentiu Tudor, Madalin-cristian Bucur

Here's a series of changes for the DPAA Ethernet driver, addressing
minor or latent issues in the codebase, adding probe ordering based on
recently added DPAA QMan APIs, and removing some redundant code.

Laurentiu Tudor (3):
  fsl/fman: don't touch liodn base regs reserved on non-PAMU SoCs
  dpaa_eth: defer probing after qbman
  fsl/fman: add API to get the device behind a fman port

Madalin Bucur (3):
  dpaa_eth: remove redundant code
  dpaa_eth: change DMA device
  fsl/fman: remove unused struct member

 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c  | 128 +++++++++++++++---------
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.h  |   8 +-
 drivers/net/ethernet/freescale/fman/fman.c      |   6 +-
 drivers/net/ethernet/freescale/fman/fman_port.c |  17 +++-
 drivers/net/ethernet/freescale/fman/fman_port.h |   2 +
 5 files changed, 108 insertions(+), 53 deletions(-)

-- 
2.1.0


* [PATCH net-next 1/6] fsl/fman: don't touch liodn base regs reserved on non-PAMU SoCs
  2019-10-21 12:27 [PATCH net-next 0/6] DPAA Ethernet changes Madalin-cristian Bucur
@ 2019-10-21 12:27 ` Madalin-cristian Bucur
  2019-10-21 12:27 ` [PATCH net-next 2/6] dpaa_eth: defer probing after qbman Madalin-cristian Bucur
                   ` (5 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Madalin-cristian Bucur @ 2019-10-21 12:27 UTC (permalink / raw)
  To: davem, netdev; +Cc: Roy Pledge, Laurentiu Tudor, Madalin-cristian Bucur

From: Laurentiu Tudor <laurentiu.tudor@nxp.com>

The liodn base registers are specific to PAMU-based NXP systems and are
reserved on SMMU-based ones. Don't access them unless PAMU is compiled in.

Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
---
 drivers/net/ethernet/freescale/fman/fman.c | 6 +++++-
 1 file changed, 5 insertions(+), 1 deletion(-)

diff --git a/drivers/net/ethernet/freescale/fman/fman.c b/drivers/net/ethernet/freescale/fman/fman.c
index 210749bf1eac..934111def0be 100644
--- a/drivers/net/ethernet/freescale/fman/fman.c
+++ b/drivers/net/ethernet/freescale/fman/fman.c
@@ -634,6 +634,9 @@ static void set_port_liodn(struct fman *fman, u8 port_id,
 {
 	u32 tmp;
 
+	iowrite32be(liodn_ofst, &fman->bmi_regs->fmbm_spliodn[port_id - 1]);
+	if (!IS_ENABLED(CONFIG_FSL_PAMU))
+		return;
 	/* set LIODN base for this port */
 	tmp = ioread32be(&fman->dma_regs->fmdmplr[port_id / 2]);
 	if (port_id % 2) {
@@ -644,7 +647,6 @@ static void set_port_liodn(struct fman *fman, u8 port_id,
 		tmp |= liodn_base << DMA_LIODN_SHIFT;
 	}
 	iowrite32be(tmp, &fman->dma_regs->fmdmplr[port_id / 2]);
-	iowrite32be(liodn_ofst, &fman->bmi_regs->fmbm_spliodn[port_id - 1]);
 }
 
 static void enable_rams_ecc(struct fman_fpm_regs __iomem *fpm_rg)
@@ -1942,6 +1944,8 @@ static int fman_init(struct fman *fman)
 
 		fman->liodn_offset[i] =
 			ioread32be(&fman->bmi_regs->fmbm_spliodn[i - 1]);
+		if (!IS_ENABLED(CONFIG_FSL_PAMU))
+			continue;
 		liodn_base = ioread32be(&fman->dma_regs->fmdmplr[i / 2]);
 		if (i % 2) {
 			/* FMDM_PLR LSB holds LIODN base for odd ports */
-- 
2.1.0
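The guard pattern in the hunks above can be sketched outside the kernel as follows. This is an illustrative userspace model, not the driver code: IS_ENABLED(CONFIG_FSL_PAMU) is emulated with a plain constant, and the register writes are replaced by counters.

```c
#include <assert.h>

/* Stand-in for the kernel's IS_ENABLED(CONFIG_FSL_PAMU); flip to 1 to
 * model a PAMU-enabled build. In the kernel this is resolved at compile
 * time from the kernel configuration. */
#define FSL_PAMU_ENABLED 0

static int spliodn_writes;    /* models writes to the always-valid fmbm_spliodn reg */
static int liodn_base_writes; /* models accesses to the PAMU-only fmdmplr regs */

/* Mirrors the reordered set_port_liodn(): the per-port fmbm_spliodn
 * write happens unconditionally, while the LIODN base read-modify-write
 * is skipped entirely on SMMU-based SoCs where the register is reserved. */
static void set_port_liodn_sketch(void)
{
	spliodn_writes++;        /* iowrite32be(liodn_ofst, ...fmbm_spliodn[...]) */
	if (!FSL_PAMU_ENABLED)
		return;          /* reserved regs: don't touch */
	liodn_base_writes++;     /* ioread/iowrite of fmdmplr[...] */
}
```

On a non-PAMU build the sketch performs only the unconditional write, matching the patch's intent of never touching the reserved registers.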


* [PATCH net-next 2/6] dpaa_eth: defer probing after qbman
  2019-10-21 12:27 [PATCH net-next 0/6] DPAA Ethernet changes Madalin-cristian Bucur
  2019-10-21 12:27 ` [PATCH net-next 1/6] fsl/fman: don't touch liodn base regs reserved on non-PAMU SoCs Madalin-cristian Bucur
@ 2019-10-21 12:27 ` Madalin-cristian Bucur
  2019-10-21 12:27 ` [PATCH net-next 3/6] dpaa_eth: remove redundant code Madalin-cristian Bucur
                   ` (4 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Madalin-cristian Bucur @ 2019-10-21 12:27 UTC (permalink / raw)
  To: davem, netdev; +Cc: Roy Pledge, Laurentiu Tudor, Madalin-cristian Bucur

From: Laurentiu Tudor <laurentiu.tudor@nxp.com>

If the DPAA 1 Ethernet driver gets probed before the QBMan driver, it
will cause a crash at boot. Make the probe order predictable by
deferring the Ethernet driver probe until after the QBMan driver and
portals have probed, using the recently introduced QBMan APIs.

Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
---
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c | 31 ++++++++++++++++++++++++++
 1 file changed, 31 insertions(+)

diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
index b4b82b9c5cd6..75eeb2ef409f 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ -2773,6 +2773,37 @@ static int dpaa_eth_probe(struct platform_device *pdev)
 	int err = 0, i, channel;
 	struct device *dev;
 
+	err = bman_is_probed();
+	if (!err)
+		return -EPROBE_DEFER;
+	if (err < 0) {
+		dev_err(&pdev->dev, "failing probe due to bman probe error\n");
+		return -ENODEV;
+	}
+	err = qman_is_probed();
+	if (!err)
+		return -EPROBE_DEFER;
+	if (err < 0) {
+		dev_err(&pdev->dev, "failing probe due to qman probe error\n");
+		return -ENODEV;
+	}
+	err = bman_portals_probed();
+	if (!err)
+		return -EPROBE_DEFER;
+	if (err < 0) {
+		dev_err(&pdev->dev,
+			"failing probe due to bman portals probe error\n");
+		return -ENODEV;
+	}
+	err = qman_portals_probed();
+	if (!err)
+		return -EPROBE_DEFER;
+	if (err < 0) {
+		dev_err(&pdev->dev,
+			"failing probe due to qman portals probe error\n");
+		return -ENODEV;
+	}
+
 	/* device used for DMA mapping */
 	dev = pdev->dev.parent;
 	err = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(40));
-- 
2.1.0
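The four checks added above all follow the same three-way convention, assumed here from how the return values are used: 0 means the dependency has not probed yet, a negative value means its probe failed, and a positive value means it is ready. A minimal sketch of that mapping (error numbers are the Linux values; everything else is illustrative):

```c
#include <assert.h>

#define EPROBE_DEFER 517 /* Linux: dependency not ready, retry probe later */
#define ENODEV        19 /* Linux: no such device */

/* Maps a bman/qman *_is_probed() / *_portals_probed() style status onto
 * the driver's probe return value: defer when the dependency has not
 * probed yet, fail hard when its probe failed, continue when ready. */
static int check_dependency(int probe_status)
{
	if (!probe_status)
		return -EPROBE_DEFER;
	if (probe_status < 0)
		return -ENODEV;
	return 0;
}
```

dpaa_eth_probe() applies this mapping once per dependency (BMan, QMan, and their portals) before touching any hardware.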


* [PATCH net-next 3/6] dpaa_eth: remove redundant code
  2019-10-21 12:27 [PATCH net-next 0/6] DPAA Ethernet changes Madalin-cristian Bucur
  2019-10-21 12:27 ` [PATCH net-next 1/6] fsl/fman: don't touch liodn base regs reserved on non-PAMU SoCs Madalin-cristian Bucur
  2019-10-21 12:27 ` [PATCH net-next 2/6] dpaa_eth: defer probing after qbman Madalin-cristian Bucur
@ 2019-10-21 12:27 ` Madalin-cristian Bucur
  2019-10-21 12:28 ` [PATCH net-next 4/6] fsl/fman: add API to get the device behind a fman port Madalin-cristian Bucur
                   ` (3 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Madalin-cristian Bucur @ 2019-10-21 12:27 UTC (permalink / raw)
  To: davem, netdev; +Cc: Roy Pledge, Laurentiu Tudor, Madalin-cristian Bucur

The condition was already checked a few lines above; remove the
duplicate check.

Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
---
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c | 4 ----
 1 file changed, 4 deletions(-)

diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
index 75eeb2ef409f..8d5686d88d30 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ -2304,10 +2304,6 @@ static enum qman_cb_dqrr_result rx_default_dqrr(struct qman_portal *portal,
 		return qman_cb_dqrr_consume;
 	}
 
-	dpaa_bp = dpaa_bpid2pool(fd->bpid);
-	if (!dpaa_bp)
-		return qman_cb_dqrr_consume;
-
 	dma_unmap_single(dpaa_bp->dev, addr, dpaa_bp->size, DMA_FROM_DEVICE);
 
 	/* prefetch the first 64 bytes of the frame or the SGT start */
-- 
2.1.0


* [PATCH net-next 4/6] fsl/fman: add API to get the device behind a fman port
  2019-10-21 12:27 [PATCH net-next 0/6] DPAA Ethernet changes Madalin-cristian Bucur
                   ` (2 preceding siblings ...)
  2019-10-21 12:27 ` [PATCH net-next 3/6] dpaa_eth: remove redundant code Madalin-cristian Bucur
@ 2019-10-21 12:28 ` Madalin-cristian Bucur
  2019-10-21 12:28 ` [PATCH net-next 5/6] dpaa_eth: change DMA device Madalin-cristian Bucur
                   ` (2 subsequent siblings)
  6 siblings, 0 replies; 15+ messages in thread
From: Madalin-cristian Bucur @ 2019-10-21 12:28 UTC (permalink / raw)
  To: davem, netdev; +Cc: Roy Pledge, Laurentiu Tudor, Madalin-cristian Bucur

From: Laurentiu Tudor <laurentiu.tudor@nxp.com>

Add an API that retrieves the 'struct device' that the specified FMan
port was probed against. The new API will be used in a subsequent patch
that corrects the DMA devices used by the dpaa_eth driver.

Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
---
 drivers/net/ethernet/freescale/fman/fman_port.c | 14 ++++++++++++++
 drivers/net/ethernet/freescale/fman/fman_port.h |  2 ++
 2 files changed, 16 insertions(+)

diff --git a/drivers/net/ethernet/freescale/fman/fman_port.c b/drivers/net/ethernet/freescale/fman/fman_port.c
index ee82ee1384eb..bd76c9730692 100644
--- a/drivers/net/ethernet/freescale/fman/fman_port.c
+++ b/drivers/net/ethernet/freescale/fman/fman_port.c
@@ -1728,6 +1728,20 @@ u32 fman_port_get_qman_channel_id(struct fman_port *port)
 }
 EXPORT_SYMBOL(fman_port_get_qman_channel_id);
 
+/**
+ * fman_port_get_device
 + * @port:	Pointer to the FMan port device
+ *
 + * Get the 'struct device' associated with the specified FMan port device
+ *
+ * Return: pointer to associated 'struct device'
+ */
+struct device *fman_port_get_device(struct fman_port *port)
+{
+	return port->dev;
+}
+EXPORT_SYMBOL(fman_port_get_device);
+
 int fman_port_get_hash_result_offset(struct fman_port *port, u32 *offset)
 {
 	if (port->buffer_offsets.hash_result_offset == ILLEGAL_BASE)
diff --git a/drivers/net/ethernet/freescale/fman/fman_port.h b/drivers/net/ethernet/freescale/fman/fman_port.h
index 9dbb69f40121..82f12661a46d 100644
--- a/drivers/net/ethernet/freescale/fman/fman_port.h
+++ b/drivers/net/ethernet/freescale/fman/fman_port.h
@@ -157,4 +157,6 @@ int fman_port_get_tstamp(struct fman_port *port, const void *data, u64 *tstamp);
 
 struct fman_port *fman_port_bind(struct device *dev);
 
+struct device *fman_port_get_device(struct fman_port *port);
+
 #endif /* __FMAN_PORT_H */
-- 
2.1.0


* [PATCH net-next 5/6] dpaa_eth: change DMA device
  2019-10-21 12:27 [PATCH net-next 0/6] DPAA Ethernet changes Madalin-cristian Bucur
                   ` (3 preceding siblings ...)
  2019-10-21 12:28 ` [PATCH net-next 4/6] fsl/fman: add API to get the device behind a fman port Madalin-cristian Bucur
@ 2019-10-21 12:28 ` Madalin-cristian Bucur
  2019-10-22  4:22   ` Jakub Kicinski
  2019-10-22  8:49   ` Laurentiu Tudor
  2019-10-21 12:28 ` [PATCH net-next 6/6] fsl/fman: remove unused struct member Madalin-cristian Bucur
  2019-10-22  4:26 ` [PATCH net-next 0/6] DPAA Ethernet changes Jakub Kicinski
  6 siblings, 2 replies; 15+ messages in thread
From: Madalin-cristian Bucur @ 2019-10-21 12:28 UTC (permalink / raw)
  To: davem, netdev; +Cc: Roy Pledge, Laurentiu Tudor, Madalin-cristian Bucur

The DPAA Ethernet driver is using the FMan MAC as the device for DMA
mapping. This is not actually correct, as the real DMA device is the
FMan port (the FMan Rx port for reception and the FMan Tx port for
transmission). Change the device used for DMA mapping to the FMan
Rx and Tx port devices.

Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
---
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.c | 105 +++++++++++++------------
 drivers/net/ethernet/freescale/dpaa/dpaa_eth.h |   8 +-
 2 files changed, 62 insertions(+), 51 deletions(-)

diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
index 8d5686d88d30..639cafaa59b8 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
@@ -1335,15 +1335,15 @@ static void dpaa_fd_release(const struct net_device *net_dev,
 		vaddr = phys_to_virt(qm_fd_addr(fd));
 		sgt = vaddr + qm_fd_get_offset(fd);
 
-		dma_unmap_single(dpaa_bp->dev, qm_fd_addr(fd), dpaa_bp->size,
-				 DMA_FROM_DEVICE);
+		dma_unmap_single(dpaa_bp->priv->rx_dma_dev, qm_fd_addr(fd),
+				 dpaa_bp->size, DMA_FROM_DEVICE);
 
 		dpaa_release_sgt_members(sgt);
 
-		addr = dma_map_single(dpaa_bp->dev, vaddr, dpaa_bp->size,
-				      DMA_FROM_DEVICE);
-		if (dma_mapping_error(dpaa_bp->dev, addr)) {
-			dev_err(dpaa_bp->dev, "DMA mapping failed");
+		addr = dma_map_single(dpaa_bp->priv->rx_dma_dev, vaddr,
+				      dpaa_bp->size, DMA_FROM_DEVICE);
+		if (dma_mapping_error(dpaa_bp->priv->rx_dma_dev, addr)) {
+			netdev_err(net_dev, "DMA mapping failed");
 			return;
 		}
 		bm_buffer_set64(&bmb, addr);
@@ -1488,7 +1488,7 @@ static int dpaa_enable_tx_csum(struct dpaa_priv *priv,
 
 static int dpaa_bp_add_8_bufs(const struct dpaa_bp *dpaa_bp)
 {
-	struct device *dev = dpaa_bp->dev;
+	struct net_device *net_dev = dpaa_bp->priv->net_dev;
 	struct bm_buffer bmb[8];
 	dma_addr_t addr;
 	void *new_buf;
@@ -1497,16 +1497,18 @@ static int dpaa_bp_add_8_bufs(const struct dpaa_bp *dpaa_bp)
 	for (i = 0; i < 8; i++) {
 		new_buf = netdev_alloc_frag(dpaa_bp->raw_size);
 		if (unlikely(!new_buf)) {
-			dev_err(dev, "netdev_alloc_frag() failed, size %zu\n",
-				dpaa_bp->raw_size);
+			netdev_err(net_dev,
+				   "netdev_alloc_frag() failed, size %zu\n",
+				   dpaa_bp->raw_size);
 			goto release_previous_buffs;
 		}
 		new_buf = PTR_ALIGN(new_buf, SMP_CACHE_BYTES);
 
-		addr = dma_map_single(dev, new_buf,
+		addr = dma_map_single(dpaa_bp->priv->rx_dma_dev, new_buf,
 				      dpaa_bp->size, DMA_FROM_DEVICE);
-		if (unlikely(dma_mapping_error(dev, addr))) {
-			dev_err(dpaa_bp->dev, "DMA map failed");
+		if (unlikely(dma_mapping_error(dpaa_bp->priv->rx_dma_dev,
+					       addr))) {
+			netdev_err(net_dev, "DMA map failed");
 			goto release_previous_buffs;
 		}
 
@@ -1634,7 +1636,7 @@ static struct sk_buff *dpaa_cleanup_tx_fd(const struct dpaa_priv *priv,
 
 	if (unlikely(qm_fd_get_format(fd) == qm_fd_sg)) {
 		nr_frags = skb_shinfo(skb)->nr_frags;
-		dma_unmap_single(dev, addr,
+		dma_unmap_single(priv->tx_dma_dev, addr,
 				 qm_fd_get_offset(fd) + DPAA_SGT_SIZE,
 				 dma_dir);
 
@@ -1644,21 +1646,21 @@ static struct sk_buff *dpaa_cleanup_tx_fd(const struct dpaa_priv *priv,
 		sgt = phys_to_virt(addr + qm_fd_get_offset(fd));
 
 		/* sgt[0] is from lowmem, was dma_map_single()-ed */
-		dma_unmap_single(dev, qm_sg_addr(&sgt[0]),
+		dma_unmap_single(priv->tx_dma_dev, qm_sg_addr(&sgt[0]),
 				 qm_sg_entry_get_len(&sgt[0]), dma_dir);
 
 		/* remaining pages were mapped with skb_frag_dma_map() */
 		for (i = 1; i <= nr_frags; i++) {
 			WARN_ON(qm_sg_entry_is_ext(&sgt[i]));
 
-			dma_unmap_page(dev, qm_sg_addr(&sgt[i]),
+			dma_unmap_page(priv->tx_dma_dev, qm_sg_addr(&sgt[i]),
 				       qm_sg_entry_get_len(&sgt[i]), dma_dir);
 		}
 
 		/* Free the page frag that we allocated on Tx */
 		skb_free_frag(phys_to_virt(addr));
 	} else {
-		dma_unmap_single(dev, addr,
+		dma_unmap_single(priv->tx_dma_dev, addr,
 				 skb_tail_pointer(skb) - (u8 *)skbh, dma_dir);
 	}
 
@@ -1762,8 +1764,8 @@ static struct sk_buff *sg_fd_to_skb(const struct dpaa_priv *priv,
 			goto free_buffers;
 
 		count_ptr = this_cpu_ptr(dpaa_bp->percpu_count);
-		dma_unmap_single(dpaa_bp->dev, sg_addr, dpaa_bp->size,
-				 DMA_FROM_DEVICE);
+		dma_unmap_single(dpaa_bp->priv->rx_dma_dev, sg_addr,
+				 dpaa_bp->size, DMA_FROM_DEVICE);
 		if (!skb) {
 			sz = dpaa_bp->size +
 				SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
@@ -1853,7 +1855,6 @@ static int skb_to_contig_fd(struct dpaa_priv *priv,
 			    int *offset)
 {
 	struct net_device *net_dev = priv->net_dev;
-	struct device *dev = net_dev->dev.parent;
 	enum dma_data_direction dma_dir;
 	unsigned char *buffer_start;
 	struct sk_buff **skbh;
@@ -1889,9 +1890,9 @@ static int skb_to_contig_fd(struct dpaa_priv *priv,
 	fd->cmd |= cpu_to_be32(FM_FD_CMD_FCO);
 
 	/* Map the entire buffer size that may be seen by FMan, but no more */
-	addr = dma_map_single(dev, skbh,
+	addr = dma_map_single(priv->tx_dma_dev, skbh,
 			      skb_tail_pointer(skb) - buffer_start, dma_dir);
-	if (unlikely(dma_mapping_error(dev, addr))) {
+	if (unlikely(dma_mapping_error(priv->tx_dma_dev, addr))) {
 		if (net_ratelimit())
 			netif_err(priv, tx_err, net_dev, "dma_map_single() failed\n");
 		return -EINVAL;
@@ -1907,7 +1908,6 @@ static int skb_to_sg_fd(struct dpaa_priv *priv,
 	const enum dma_data_direction dma_dir = DMA_TO_DEVICE;
 	const int nr_frags = skb_shinfo(skb)->nr_frags;
 	struct net_device *net_dev = priv->net_dev;
-	struct device *dev = net_dev->dev.parent;
 	struct qm_sg_entry *sgt;
 	struct sk_buff **skbh;
 	int i, j, err, sz;
@@ -1946,10 +1946,10 @@ static int skb_to_sg_fd(struct dpaa_priv *priv,
 	qm_sg_entry_set_len(&sgt[0], frag_len);
 	sgt[0].bpid = FSL_DPAA_BPID_INV;
 	sgt[0].offset = 0;
-	addr = dma_map_single(dev, skb->data,
+	addr = dma_map_single(priv->tx_dma_dev, skb->data,
 			      skb_headlen(skb), dma_dir);
-	if (unlikely(dma_mapping_error(dev, addr))) {
-		dev_err(dev, "DMA mapping failed");
+	if (unlikely(dma_mapping_error(priv->tx_dma_dev, addr))) {
+		netdev_err(priv->net_dev, "DMA mapping failed");
 		err = -EINVAL;
 		goto sg0_map_failed;
 	}
@@ -1960,10 +1960,10 @@ static int skb_to_sg_fd(struct dpaa_priv *priv,
 		frag = &skb_shinfo(skb)->frags[i];
 		frag_len = skb_frag_size(frag);
 		WARN_ON(!skb_frag_page(frag));
-		addr = skb_frag_dma_map(dev, frag, 0,
+		addr = skb_frag_dma_map(priv->tx_dma_dev, frag, 0,
 					frag_len, dma_dir);
-		if (unlikely(dma_mapping_error(dev, addr))) {
-			dev_err(dev, "DMA mapping failed");
+		if (unlikely(dma_mapping_error(priv->tx_dma_dev, addr))) {
+			netdev_err(priv->net_dev, "DMA mapping failed");
 			err = -EINVAL;
 			goto sg_map_failed;
 		}
@@ -1986,10 +1986,10 @@ static int skb_to_sg_fd(struct dpaa_priv *priv,
 	skbh = (struct sk_buff **)buffer_start;
 	*skbh = skb;
 
-	addr = dma_map_single(dev, buffer_start,
+	addr = dma_map_single(priv->tx_dma_dev, buffer_start,
 			      priv->tx_headroom + DPAA_SGT_SIZE, dma_dir);
-	if (unlikely(dma_mapping_error(dev, addr))) {
-		dev_err(dev, "DMA mapping failed");
+	if (unlikely(dma_mapping_error(priv->tx_dma_dev, addr))) {
+		netdev_err(priv->net_dev, "DMA mapping failed");
 		err = -EINVAL;
 		goto sgt_map_failed;
 	}
@@ -2003,7 +2003,7 @@ static int skb_to_sg_fd(struct dpaa_priv *priv,
 sgt_map_failed:
 sg_map_failed:
 	for (j = 0; j < i; j++)
-		dma_unmap_page(dev, qm_sg_addr(&sgt[j]),
+		dma_unmap_page(priv->tx_dma_dev, qm_sg_addr(&sgt[j]),
 			       qm_sg_entry_get_len(&sgt[j]), dma_dir);
 sg0_map_failed:
 csum_failed:
@@ -2304,7 +2304,8 @@ static enum qman_cb_dqrr_result rx_default_dqrr(struct qman_portal *portal,
 		return qman_cb_dqrr_consume;
 	}
 
-	dma_unmap_single(dpaa_bp->dev, addr, dpaa_bp->size, DMA_FROM_DEVICE);
+	dma_unmap_single(dpaa_bp->priv->rx_dma_dev, addr, dpaa_bp->size,
+			 DMA_FROM_DEVICE);
 
 	/* prefetch the first 64 bytes of the frame or the SGT start */
 	vaddr = phys_to_virt(addr);
@@ -2659,7 +2660,7 @@ static inline void dpaa_bp_free_pf(const struct dpaa_bp *bp,
 {
 	dma_addr_t addr = bm_buf_addr(bmb);
 
-	dma_unmap_single(bp->dev, addr, bp->size, DMA_FROM_DEVICE);
+	dma_unmap_single(bp->priv->rx_dma_dev, addr, bp->size, DMA_FROM_DEVICE);
 
 	skb_free_frag(phys_to_virt(addr));
 }
@@ -2769,25 +2770,27 @@ static int dpaa_eth_probe(struct platform_device *pdev)
 	int err = 0, i, channel;
 	struct device *dev;
 
+	dev = &pdev->dev;
+
 	err = bman_is_probed();
 	if (!err)
 		return -EPROBE_DEFER;
 	if (err < 0) {
-		dev_err(&pdev->dev, "failing probe due to bman probe error\n");
+		dev_err(dev, "failing probe due to bman probe error\n");
 		return -ENODEV;
 	}
 	err = qman_is_probed();
 	if (!err)
 		return -EPROBE_DEFER;
 	if (err < 0) {
-		dev_err(&pdev->dev, "failing probe due to qman probe error\n");
+		dev_err(dev, "failing probe due to qman probe error\n");
 		return -ENODEV;
 	}
 	err = bman_portals_probed();
 	if (!err)
 		return -EPROBE_DEFER;
 	if (err < 0) {
-		dev_err(&pdev->dev,
+		dev_err(dev,
 			"failing probe due to bman portals probe error\n");
 		return -ENODEV;
 	}
@@ -2795,19 +2798,11 @@ static int dpaa_eth_probe(struct platform_device *pdev)
 	if (!err)
 		return -EPROBE_DEFER;
 	if (err < 0) {
-		dev_err(&pdev->dev,
+		dev_err(dev,
 			"failing probe due to qman portals probe error\n");
 		return -ENODEV;
 	}
 
-	/* device used for DMA mapping */
-	dev = pdev->dev.parent;
-	err = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(40));
-	if (err) {
-		dev_err(dev, "dma_coerce_mask_and_coherent() failed\n");
-		return err;
-	}
-
 	/* Allocate this early, so we can store relevant information in
 	 * the private area
 	 */
@@ -2828,11 +2823,23 @@ static int dpaa_eth_probe(struct platform_device *pdev)
 
 	mac_dev = dpaa_mac_dev_get(pdev);
 	if (IS_ERR(mac_dev)) {
-		dev_err(dev, "dpaa_mac_dev_get() failed\n");
+		netdev_err(net_dev, "dpaa_mac_dev_get() failed\n");
 		err = PTR_ERR(mac_dev);
 		goto free_netdev;
 	}
 
+	/* Devices used for DMA mapping */
+	priv->rx_dma_dev = fman_port_get_device(mac_dev->port[RX]);
+	priv->tx_dma_dev = fman_port_get_device(mac_dev->port[TX]);
+	err = dma_coerce_mask_and_coherent(priv->rx_dma_dev, DMA_BIT_MASK(40));
+	if (!err)
+		err = dma_coerce_mask_and_coherent(priv->tx_dma_dev,
+						   DMA_BIT_MASK(40));
+	if (err) {
+		netdev_err(net_dev, "dma_coerce_mask_and_coherent() failed\n");
+		return err;
+	}
+
 	/* If fsl_fm_max_frm is set to a higher value than the all-common 1500,
 	 * we choose conservatively and let the user explicitly set a higher
 	 * MTU via ifconfig. Otherwise, the user may end up with different MTUs
@@ -2859,7 +2866,7 @@ static int dpaa_eth_probe(struct platform_device *pdev)
 		dpaa_bps[i]->raw_size = bpool_buffer_raw_size(i, DPAA_BPS_NUM);
 		/* avoid runtime computations by keeping the usable size here */
 		dpaa_bps[i]->size = dpaa_bp_size(dpaa_bps[i]->raw_size);
-		dpaa_bps[i]->dev = dev;
+		dpaa_bps[i]->priv = priv;
 
 		err = dpaa_bp_alloc_pool(dpaa_bps[i]);
 		if (err < 0)
@@ -2982,7 +2989,7 @@ static int dpaa_remove(struct platform_device *pdev)
 	struct device *dev;
 	int err;
 
-	dev = pdev->dev.parent;
+	dev = &pdev->dev;
 	net_dev = dev_get_drvdata(dev);
 
 	priv = netdev_priv(net_dev);
diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
index f7e59e8db075..1bdfead1d334 100644
--- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
+++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.h
@@ -80,9 +80,11 @@ struct dpaa_fq_cbs {
 	struct qman_fq egress_ern;
 };
 
+struct dpaa_priv;
+
 struct dpaa_bp {
-	/* device used in the DMA mapping operations */
-	struct device *dev;
+	/* used in the DMA mapping operations */
+	struct dpaa_priv *priv;
 	/* current number of buffers in the buffer pool alloted to each CPU */
 	int __percpu *percpu_count;
 	/* all buffers allocated for this pool have this raw size */
@@ -153,6 +155,8 @@ struct dpaa_priv {
 	u16 tx_headroom;
 	struct net_device *net_dev;
 	struct mac_device *mac_dev;
+	struct device *rx_dma_dev;
+	struct device *tx_dma_dev;
 	struct qman_fq *egress_fqs[DPAA_ETH_TXQ_NUM];
 	struct qman_fq *conf_fqs[DPAA_ETH_TXQ_NUM];
 
-- 
2.1.0
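The 40-bit mask coerced onto both port devices above comes from the kernel's DMA_BIT_MASK() macro. For reference, a standalone sketch: the macro body mirrors the kernel definition, while the surrounding file is illustrative only.

```c
#include <assert.h>

/* Mirrors the kernel's DMA_BIT_MASK(): a mask with the low n bits set.
 * n == 64 is special-cased because shifting a 64-bit value by 64 is
 * undefined behavior in C. */
#define DMA_BIT_MASK(n) (((n) == 64) ? ~0ULL : ((1ULL << (n)) - 1))

/* Models the probe-time constraint: both the Rx and Tx port devices are
 * limited to 40-bit DMA addressing. */
static unsigned long long dpaa_dma_mask(void)
{
	return DMA_BIT_MASK(40); /* 0xffffffffff: low 40 bits set */
}
```

With this mask, any bus address handed to the FMan ports fits in 40 bits, matching what dma_coerce_mask_and_coherent() enforces in the probe path above.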


* [PATCH net-next 6/6] fsl/fman: remove unused struct member
  2019-10-21 12:27 [PATCH net-next 0/6] DPAA Ethernet changes Madalin-cristian Bucur
                   ` (4 preceding siblings ...)
  2019-10-21 12:28 ` [PATCH net-next 5/6] dpaa_eth: change DMA device Madalin-cristian Bucur
@ 2019-10-21 12:28 ` Madalin-cristian Bucur
  2019-10-22  4:26 ` [PATCH net-next 0/6] DPAA Ethernet changes Jakub Kicinski
  6 siblings, 0 replies; 15+ messages in thread
From: Madalin-cristian Bucur @ 2019-10-21 12:28 UTC (permalink / raw)
  To: davem, netdev; +Cc: Roy Pledge, Laurentiu Tudor, Madalin-cristian Bucur

Remove the unused struct member second_largest_buf_size. Also, an
out-of-bounds access would have occurred in the removed code if only
one buffer pool was in use.

Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
---
 drivers/net/ethernet/freescale/fman/fman_port.c | 3 ---
 1 file changed, 3 deletions(-)

diff --git a/drivers/net/ethernet/freescale/fman/fman_port.c b/drivers/net/ethernet/freescale/fman/fman_port.c
index bd76c9730692..87b26f063cc8 100644
--- a/drivers/net/ethernet/freescale/fman/fman_port.c
+++ b/drivers/net/ethernet/freescale/fman/fman_port.c
@@ -435,7 +435,6 @@ struct fman_port_cfg {
 
 struct fman_port_rx_pools_params {
 	u8 num_of_pools;
-	u16 second_largest_buf_size;
 	u16 largest_buf_size;
 };
 
@@ -946,8 +945,6 @@ static int set_ext_buffer_pools(struct fman_port *port)
 	port->rx_pools_params.num_of_pools = ext_buf_pools->num_of_pools_used;
 	port->rx_pools_params.largest_buf_size =
 	    sizes_array[ordered_array[ext_buf_pools->num_of_pools_used - 1]];
-	port->rx_pools_params.second_largest_buf_size =
-	    sizes_array[ordered_array[ext_buf_pools->num_of_pools_used - 2]];
 
 	/* FMBM_RMPD reg. - pool depletion */
 	if (buf_pool_depletion->pools_grp_mode_enable) {
-- 
2.1.0
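The out-of-bounds case mentioned in the commit message is easy to see in isolation. A sketch, with the removed lookup reduced to its index arithmetic (names are illustrative):

```c
#include <assert.h>

/* The removed code indexed ordered_array[num_of_pools_used - 2] to find
 * the second-largest buffer size. With a single pool in use that index
 * evaluates to -1, i.e. a read before the start of the array. */
static int second_largest_slot(int num_of_pools_used)
{
	return num_of_pools_used - 2;
}
```

Since second_largest_buf_size had no users, deleting the lookup removes the latent out-of-bounds read along with the dead member.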


* Re: [PATCH net-next 5/6] dpaa_eth: change DMA device
  2019-10-21 12:28 ` [PATCH net-next 5/6] dpaa_eth: change DMA device Madalin-cristian Bucur
@ 2019-10-22  4:22   ` Jakub Kicinski
  2019-10-22  6:47     ` Madalin-cristian Bucur
  2019-10-22  8:49   ` Laurentiu Tudor
  1 sibling, 1 reply; 15+ messages in thread
From: Jakub Kicinski @ 2019-10-22  4:22 UTC (permalink / raw)
  To: Madalin-cristian Bucur; +Cc: davem, netdev, Roy Pledge, Laurentiu Tudor

On Mon, 21 Oct 2019 12:28:02 +0000, Madalin-cristian Bucur wrote:
> The DPAA Ethernet driver is using the FMan MAC as the device for DMA
> mapping. This is not actually correct, as the real DMA device is the
> FMan port (the FMan Rx port for reception and the FMan Tx port for
> transmission). Changing the device used for DMA mapping to the Fman
> Rx and Tx port devices.
> 
> Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
> Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>

Curious, we also have a patch for fixing this for IXP400.
Is there something in recent kernels that uncovers this bug?

* Re: [PATCH net-next 0/6] DPAA Ethernet changes
  2019-10-21 12:27 [PATCH net-next 0/6] DPAA Ethernet changes Madalin-cristian Bucur
                   ` (5 preceding siblings ...)
  2019-10-21 12:28 ` [PATCH net-next 6/6] fsl/fman: remove unused struct member Madalin-cristian Bucur
@ 2019-10-22  4:26 ` Jakub Kicinski
  2019-10-22  6:43   ` Madalin-cristian Bucur
  6 siblings, 1 reply; 15+ messages in thread
From: Jakub Kicinski @ 2019-10-22  4:26 UTC (permalink / raw)
  To: Madalin-cristian Bucur; +Cc: davem, netdev, Roy Pledge, Laurentiu Tudor

On Mon, 21 Oct 2019 12:27:51 +0000, Madalin-cristian Bucur wrote:
> Here's a series of changes for the DPAA Ethernet, addressing minor
> or unapparent issues in the codebase, adding probe ordering based on
> a recently added DPAA QMan API, removing some redundant code.

Hi Madalin!

Patch 2 looks like it may be a bug fix but I gather it has a dependency
in net-next so it can't go to net?

More importantly - I think your From: line on this posting is 

Madalin-cristian Bucur <madalin.bucur@nxp.com>

While the sign-off on the patches you wrote is:

Madalin Bucur <madalin.bucur@nxp.com>

I think these gotta be identical otherwise the bots which ensure the
author added his sign-off may scream at us.

* RE: [PATCH net-next 0/6] DPAA Ethernet changes
  2019-10-22  4:26 ` [PATCH net-next 0/6] DPAA Ethernet changes Jakub Kicinski
@ 2019-10-22  6:43   ` Madalin-cristian Bucur
  0 siblings, 0 replies; 15+ messages in thread
From: Madalin-cristian Bucur @ 2019-10-22  6:43 UTC (permalink / raw)
  To: Jakub Kicinski; +Cc: davem, netdev, Roy Pledge, Laurentiu Tudor

> -----Original Message-----
> From: Jakub Kicinski <jakub.kicinski@netronome.com>
> Sent: Tuesday, October 22, 2019 7:26 AM
> To: Madalin-cristian Bucur <madalin.bucur@nxp.com>
> Cc: davem@davemloft.net; netdev@vger.kernel.org; Roy Pledge
> <roy.pledge@nxp.com>; Laurentiu Tudor <laurentiu.tudor@nxp.com>
> Subject: Re: [PATCH net-next 0/6] DPAA Ethernet changes
> 
> On Mon, 21 Oct 2019 12:27:51 +0000, Madalin-cristian Bucur wrote:
> > Here's a series of changes for the DPAA Ethernet, addressing minor
> > or unapparent issues in the codebase, adding probe ordering based on
> > a recently added DPAA QMan API, removing some redundant code.
> 
> Hi Madalin!
> 
> Patch 2 looks like it may be a bug fix but I gather it has a dependency
> in net-next so it can't go to net?

It's a fix for a theoretical issue that does not reproduce with the
current code base. Future changes related to IOMMU support may make
this issue visible.

> More importantly - I think your From: line on this posting is
> 
> Madalin-cristian Bucur <madalin.bucur@nxp.com>
> 
> While the sign-off on the patches you wrote is:
> 
> Madalin Bucur <madalin.bucur@nxp.com>
> 
> I think these gotta be identical otherwise the bots which ensure the
> author added his sign-off may scream at us.

The formatted patches look like this:

From 55a524a41099fa9b2f5fbbb9f3a87108437942bb Mon Sep 17 00:00:00 2001
From: Madalin Bucur <madalin.bucur@nxp.com>
Date: Mon, 21 Oct 2019 15:21:26 +0300
Subject: [PATCH net-next 0/6] DPAA Ethernet changes
Content-Type: text/plain; charset="us-ascii"
Reply-to: madalin.bucur@nxp.com

but then there are some MS servers trying to be helpful and the message
ends up like this:

From: Madalin-cristian Bucur <madalin.bucur@nxp.com>
To: "davem@davemloft.net" <davem@davemloft.net>, "netdev@vger.kernel.org"
	<netdev@vger.kernel.org>
CC: Roy Pledge <roy.pledge@nxp.com>, Laurentiu Tudor
	<laurentiu.tudor@nxp.com>, Madalin-cristian Bucur <madalin.bucur@nxp.com>
Subject: [PATCH net-next 0/6] DPAA Ethernet changes
Thread-Topic: [PATCH net-next 0/6] DPAA Ethernet changes
Thread-Index: AQHViAr0ehww7MPYPUqedXDa00qudg==
X-MS-Exchange-MessageSentRepresentingType: 1
Date: Mon, 21 Oct 2019 12:27:51 +0000
Message-ID: <1571660862-18313-1-git-send-email-madalin.bucur@nxp.com>
Reply-To: Madalin-cristian Bucur <madalin.bucur@nxp.com>
<snip>
Return-Path: madalin.bucur@nxp.com

It's probably a good time to think about pull requests...

* RE: [PATCH net-next 5/6] dpaa_eth: change DMA device
  2019-10-22  4:22   ` Jakub Kicinski
@ 2019-10-22  6:47     ` Madalin-cristian Bucur
  0 siblings, 0 replies; 15+ messages in thread
From: Madalin-cristian Bucur @ 2019-10-22  6:47 UTC (permalink / raw)
  To: Jakub Kicinski; +Cc: davem, netdev, Roy Pledge, Laurentiu Tudor

> -----Original Message-----
> From: Jakub Kicinski <jakub.kicinski@netronome.com>
> Sent: Tuesday, October 22, 2019 7:23 AM
> To: Madalin-cristian Bucur <madalin.bucur@nxp.com>
> Cc: davem@davemloft.net; netdev@vger.kernel.org; Roy Pledge
> <roy.pledge@nxp.com>; Laurentiu Tudor <laurentiu.tudor@nxp.com>
> Subject: Re: [PATCH net-next 5/6] dpaa_eth: change DMA device
> 
> On Mon, 21 Oct 2019 12:28:02 +0000, Madalin-cristian Bucur wrote:
> > The DPAA Ethernet driver is using the FMan MAC as the device for DMA
> > mapping. This is not actually correct, as the real DMA device is the
> > FMan port (the FMan Rx port for reception and the FMan Tx port for
> > transmission). Changing the device used for DMA mapping to the Fman
> > Rx and Tx port devices.
> >
> > Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
> > Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
> 
> Curious, we also have a patch for fixing this for IXP400.
> Is there something in recent kernels that uncovers this bug?

Hi Jakub, it's related to the IOMMU on ARM64; this change is just one
of the many required to get the IOMMU working on DPAA 1 platforms. The
device used for DMA mapping was the MAC, but in the DPAA architecture
another HW block, the port (be it Rx or Tx), is what actually performs
the DMA transfers. This fix just makes the code correct; to actually
enable the IOMMU there are some more changes, some of which are under
discussion on other threads [1]. I'm pushing these changes now so I can
make other modifications to the DPAA driver while the IOMMU related
open items are resolved.

Regards,
Madalin

1. Laurentiu's IOMMU related changes:
https://lkml.org/lkml/2019/4/22/357
https://lore.kernel.org/patchwork/cover/997994/
https://lore.kernel.org/patchwork/project/lkml/list/?series=396215




^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH net-next 5/6] dpaa_eth: change DMA device
  2019-10-21 12:28 ` [PATCH net-next 5/6] dpaa_eth: change DMA device Madalin-cristian Bucur
  2019-10-22  4:22   ` Jakub Kicinski
@ 2019-10-22  8:49   ` Laurentiu Tudor
  2019-10-22  9:10     ` Madalin-cristian Bucur
  1 sibling, 1 reply; 15+ messages in thread
From: Laurentiu Tudor @ 2019-10-22  8:49 UTC (permalink / raw)
  To: Madalin-cristian Bucur, davem, netdev; +Cc: Roy Pledge

Hello,

On 21.10.2019 15:28, Madalin-cristian Bucur wrote:
> The DPAA Ethernet driver is using the FMan MAC as the device for DMA
> mapping. This is not actually correct, as the real DMA device is the
> FMan port (the FMan Rx port for reception and the FMan Tx port for
> transmission). Changing the device used for DMA mapping to the Fman
> Rx and Tx port devices.
> 
> Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
> Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
> ---
>   drivers/net/ethernet/freescale/dpaa/dpaa_eth.c | 105 +++++++++++++------------
>   drivers/net/ethernet/freescale/dpaa/dpaa_eth.h |   8 +-
>   2 files changed, 62 insertions(+), 51 deletions(-)
> 
> diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
> index 8d5686d88d30..639cafaa59b8 100644
> --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
> +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
> @@ -1335,15 +1335,15 @@ static void dpaa_fd_release(const struct net_device *net_dev,
>   		vaddr = phys_to_virt(qm_fd_addr(fd));
>   		sgt = vaddr + qm_fd_get_offset(fd);
>   
> -		dma_unmap_single(dpaa_bp->dev, qm_fd_addr(fd), dpaa_bp->size,
> -				 DMA_FROM_DEVICE);
> +		dma_unmap_single(dpaa_bp->priv->rx_dma_dev, qm_fd_addr(fd),
> +				 dpaa_bp->size, DMA_FROM_DEVICE);
>   
>   		dpaa_release_sgt_members(sgt);
>   
> -		addr = dma_map_single(dpaa_bp->dev, vaddr, dpaa_bp->size,
> -				      DMA_FROM_DEVICE);
> -		if (dma_mapping_error(dpaa_bp->dev, addr)) {
> -			dev_err(dpaa_bp->dev, "DMA mapping failed");
> +		addr = dma_map_single(dpaa_bp->priv->rx_dma_dev, vaddr,
> +				      dpaa_bp->size, DMA_FROM_DEVICE);
> +		if (dma_mapping_error(dpaa_bp->priv->rx_dma_dev, addr)) {
> +			netdev_err(net_dev, "DMA mapping failed");
>   			return;
>   		}
>   		bm_buffer_set64(&bmb, addr);
> @@ -1488,7 +1488,7 @@ static int dpaa_enable_tx_csum(struct dpaa_priv *priv,
>   
>   static int dpaa_bp_add_8_bufs(const struct dpaa_bp *dpaa_bp)
>   {
> -	struct device *dev = dpaa_bp->dev;
> +	struct net_device *net_dev = dpaa_bp->priv->net_dev;
>   	struct bm_buffer bmb[8];
>   	dma_addr_t addr;
>   	void *new_buf;
> @@ -1497,16 +1497,18 @@ static int dpaa_bp_add_8_bufs(const struct dpaa_bp *dpaa_bp)
>   	for (i = 0; i < 8; i++) {
>   		new_buf = netdev_alloc_frag(dpaa_bp->raw_size);
>   		if (unlikely(!new_buf)) {
> -			dev_err(dev, "netdev_alloc_frag() failed, size %zu\n",
> -				dpaa_bp->raw_size);
> +			netdev_err(net_dev,
> +				   "netdev_alloc_frag() failed, size %zu\n",
> +				   dpaa_bp->raw_size);
>   			goto release_previous_buffs;
>   		}
>   		new_buf = PTR_ALIGN(new_buf, SMP_CACHE_BYTES);
>   
> -		addr = dma_map_single(dev, new_buf,
> +		addr = dma_map_single(dpaa_bp->priv->rx_dma_dev, new_buf,
>   				      dpaa_bp->size, DMA_FROM_DEVICE);
> -		if (unlikely(dma_mapping_error(dev, addr))) {
> -			dev_err(dpaa_bp->dev, "DMA map failed");
> +		if (unlikely(dma_mapping_error(dpaa_bp->priv->rx_dma_dev,
> +					       addr))) {
> +			netdev_err(net_dev, "DMA map failed");
>   			goto release_previous_buffs;
>   		}
>   
> @@ -1634,7 +1636,7 @@ static struct sk_buff *dpaa_cleanup_tx_fd(const struct dpaa_priv *priv,
>   
>   	if (unlikely(qm_fd_get_format(fd) == qm_fd_sg)) {
>   		nr_frags = skb_shinfo(skb)->nr_frags;
> -		dma_unmap_single(dev, addr,
> +		dma_unmap_single(priv->tx_dma_dev, addr,
>   				 qm_fd_get_offset(fd) + DPAA_SGT_SIZE,
>   				 dma_dir);
>   
> @@ -1644,21 +1646,21 @@ static struct sk_buff *dpaa_cleanup_tx_fd(const struct dpaa_priv *priv,
>   		sgt = phys_to_virt(addr + qm_fd_get_offset(fd));
>   
>   		/* sgt[0] is from lowmem, was dma_map_single()-ed */
> -		dma_unmap_single(dev, qm_sg_addr(&sgt[0]),
> +		dma_unmap_single(priv->tx_dma_dev, qm_sg_addr(&sgt[0]),
>   				 qm_sg_entry_get_len(&sgt[0]), dma_dir);
>   
>   		/* remaining pages were mapped with skb_frag_dma_map() */
>   		for (i = 1; i <= nr_frags; i++) {
>   			WARN_ON(qm_sg_entry_is_ext(&sgt[i]));
>   
> -			dma_unmap_page(dev, qm_sg_addr(&sgt[i]),
> +			dma_unmap_page(priv->tx_dma_dev, qm_sg_addr(&sgt[i]),
>   				       qm_sg_entry_get_len(&sgt[i]), dma_dir);
>   		}
>   
>   		/* Free the page frag that we allocated on Tx */
>   		skb_free_frag(phys_to_virt(addr));
>   	} else {
> -		dma_unmap_single(dev, addr,
> +		dma_unmap_single(priv->tx_dma_dev, addr,
>   				 skb_tail_pointer(skb) - (u8 *)skbh, dma_dir);
>   	}
>   
> @@ -1762,8 +1764,8 @@ static struct sk_buff *sg_fd_to_skb(const struct dpaa_priv *priv,
>   			goto free_buffers;
>   
>   		count_ptr = this_cpu_ptr(dpaa_bp->percpu_count);
> -		dma_unmap_single(dpaa_bp->dev, sg_addr, dpaa_bp->size,
> -				 DMA_FROM_DEVICE);
> +		dma_unmap_single(dpaa_bp->priv->rx_dma_dev, sg_addr,
> +				 dpaa_bp->size, DMA_FROM_DEVICE);
>   		if (!skb) {
>   			sz = dpaa_bp->size +
>   				SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
> @@ -1853,7 +1855,6 @@ static int skb_to_contig_fd(struct dpaa_priv *priv,
>   			    int *offset)
>   {
>   	struct net_device *net_dev = priv->net_dev;
> -	struct device *dev = net_dev->dev.parent;
>   	enum dma_data_direction dma_dir;
>   	unsigned char *buffer_start;
>   	struct sk_buff **skbh;
> @@ -1889,9 +1890,9 @@ static int skb_to_contig_fd(struct dpaa_priv *priv,
>   	fd->cmd |= cpu_to_be32(FM_FD_CMD_FCO);
>   
>   	/* Map the entire buffer size that may be seen by FMan, but no more */
> -	addr = dma_map_single(dev, skbh,
> +	addr = dma_map_single(priv->tx_dma_dev, skbh,
>   			      skb_tail_pointer(skb) - buffer_start, dma_dir);
> -	if (unlikely(dma_mapping_error(dev, addr))) {
> +	if (unlikely(dma_mapping_error(priv->tx_dma_dev, addr))) {
>   		if (net_ratelimit())
>   			netif_err(priv, tx_err, net_dev, "dma_map_single() failed\n");
>   		return -EINVAL;
> @@ -1907,7 +1908,6 @@ static int skb_to_sg_fd(struct dpaa_priv *priv,
>   	const enum dma_data_direction dma_dir = DMA_TO_DEVICE;
>   	const int nr_frags = skb_shinfo(skb)->nr_frags;
>   	struct net_device *net_dev = priv->net_dev;
> -	struct device *dev = net_dev->dev.parent;
>   	struct qm_sg_entry *sgt;
>   	struct sk_buff **skbh;
>   	int i, j, err, sz;
> @@ -1946,10 +1946,10 @@ static int skb_to_sg_fd(struct dpaa_priv *priv,
>   	qm_sg_entry_set_len(&sgt[0], frag_len);
>   	sgt[0].bpid = FSL_DPAA_BPID_INV;
>   	sgt[0].offset = 0;
> -	addr = dma_map_single(dev, skb->data,
> +	addr = dma_map_single(priv->tx_dma_dev, skb->data,
>   			      skb_headlen(skb), dma_dir);
> -	if (unlikely(dma_mapping_error(dev, addr))) {
> -		dev_err(dev, "DMA mapping failed");
> +	if (unlikely(dma_mapping_error(priv->tx_dma_dev, addr))) {
> +		netdev_err(priv->net_dev, "DMA mapping failed");
>   		err = -EINVAL;
>   		goto sg0_map_failed;
>   	}
> @@ -1960,10 +1960,10 @@ static int skb_to_sg_fd(struct dpaa_priv *priv,
>   		frag = &skb_shinfo(skb)->frags[i];
>   		frag_len = skb_frag_size(frag);
>   		WARN_ON(!skb_frag_page(frag));
> -		addr = skb_frag_dma_map(dev, frag, 0,
> +		addr = skb_frag_dma_map(priv->tx_dma_dev, frag, 0,
>   					frag_len, dma_dir);
> -		if (unlikely(dma_mapping_error(dev, addr))) {
> -			dev_err(dev, "DMA mapping failed");
> +		if (unlikely(dma_mapping_error(priv->tx_dma_dev, addr))) {
> +			netdev_err(priv->net_dev, "DMA mapping failed");
>   			err = -EINVAL;
>   			goto sg_map_failed;
>   		}
> @@ -1986,10 +1986,10 @@ static int skb_to_sg_fd(struct dpaa_priv *priv,
>   	skbh = (struct sk_buff **)buffer_start;
>   	*skbh = skb;
>   
> -	addr = dma_map_single(dev, buffer_start,
> +	addr = dma_map_single(priv->tx_dma_dev, buffer_start,
>   			      priv->tx_headroom + DPAA_SGT_SIZE, dma_dir);
> -	if (unlikely(dma_mapping_error(dev, addr))) {
> -		dev_err(dev, "DMA mapping failed");
> +	if (unlikely(dma_mapping_error(priv->tx_dma_dev, addr))) {
> +		netdev_err(priv->net_dev, "DMA mapping failed");
>   		err = -EINVAL;
>   		goto sgt_map_failed;
>   	}
> @@ -2003,7 +2003,7 @@ static int skb_to_sg_fd(struct dpaa_priv *priv,
>   sgt_map_failed:
>   sg_map_failed:
>   	for (j = 0; j < i; j++)
> -		dma_unmap_page(dev, qm_sg_addr(&sgt[j]),
> +		dma_unmap_page(priv->tx_dma_dev, qm_sg_addr(&sgt[j]),
>   			       qm_sg_entry_get_len(&sgt[j]), dma_dir);
>   sg0_map_failed:
>   csum_failed:
> @@ -2304,7 +2304,8 @@ static enum qman_cb_dqrr_result rx_default_dqrr(struct qman_portal *portal,
>   		return qman_cb_dqrr_consume;
>   	}
>   
> -	dma_unmap_single(dpaa_bp->dev, addr, dpaa_bp->size, DMA_FROM_DEVICE);
> +	dma_unmap_single(dpaa_bp->priv->rx_dma_dev, addr, dpaa_bp->size,
> +			 DMA_FROM_DEVICE);
>   
>   	/* prefetch the first 64 bytes of the frame or the SGT start */
>   	vaddr = phys_to_virt(addr);
> @@ -2659,7 +2660,7 @@ static inline void dpaa_bp_free_pf(const struct dpaa_bp *bp,
>   {
>   	dma_addr_t addr = bm_buf_addr(bmb);
>   
> -	dma_unmap_single(bp->dev, addr, bp->size, DMA_FROM_DEVICE);
> +	dma_unmap_single(bp->priv->rx_dma_dev, addr, bp->size, DMA_FROM_DEVICE);
>   
>   	skb_free_frag(phys_to_virt(addr));
>   }
> @@ -2769,25 +2770,27 @@ static int dpaa_eth_probe(struct platform_device *pdev)
>   	int err = 0, i, channel;
>   	struct device *dev;
>   
> +	dev = &pdev->dev;
> +
>   	err = bman_is_probed();
>   	if (!err)
>   		return -EPROBE_DEFER;
>   	if (err < 0) {
> -		dev_err(&pdev->dev, "failing probe due to bman probe error\n");
> +		dev_err(dev, "failing probe due to bman probe error\n");

These changes seem unrelated.

>   		return -ENODEV;
>   	}
>   	err = qman_is_probed();
>   	if (!err)
>   		return -EPROBE_DEFER;
>   	if (err < 0) {
> -		dev_err(&pdev->dev, "failing probe due to qman probe error\n");
> +		dev_err(dev, "failing probe due to qman probe error\n");
>   		return -ENODEV;
>   	}
>   	err = bman_portals_probed();
>   	if (!err)
>   		return -EPROBE_DEFER;
>   	if (err < 0) {
> -		dev_err(&pdev->dev,
> +		dev_err(dev,
>   			"failing probe due to bman portals probe error\n");
>   		return -ENODEV;
>   	}
> @@ -2795,19 +2798,11 @@ static int dpaa_eth_probe(struct platform_device *pdev)
>   	if (!err)
>   		return -EPROBE_DEFER;
>   	if (err < 0) {
> -		dev_err(&pdev->dev,
> +		dev_err(dev,
>   			"failing probe due to qman portals probe error\n");
>   		return -ENODEV;
>   	}
>   
> -	/* device used for DMA mapping */
> -	dev = pdev->dev.parent;
> -	err = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(40));
> -	if (err) {
> -		dev_err(dev, "dma_coerce_mask_and_coherent() failed\n");
> -		return err;
> -	}

Why are we dropping this explicit setting of the dma mask?

---
Best Regards, Laurentiu

^ permalink raw reply	[flat|nested] 15+ messages in thread

* RE: [PATCH net-next 5/6] dpaa_eth: change DMA device
  2019-10-22  8:49   ` Laurentiu Tudor
@ 2019-10-22  9:10     ` Madalin-cristian Bucur
  2019-10-22 10:34       ` Laurentiu Tudor
  0 siblings, 1 reply; 15+ messages in thread
From: Madalin-cristian Bucur @ 2019-10-22  9:10 UTC (permalink / raw)
  To: Laurentiu Tudor, davem, netdev; +Cc: Roy Pledge

> -----Original Message-----
> From: Laurentiu Tudor
> Sent: Tuesday, October 22, 2019 11:50 AM
> To: Madalin-cristian Bucur <madalin.bucur@nxp.com>; davem@davemloft.net;
> netdev@vger.kernel.org
> Cc: Roy Pledge <roy.pledge@nxp.com>
> Subject: Re: [PATCH net-next 5/6] dpaa_eth: change DMA device
> 
> Hello,
> 
> On 21.10.2019 15:28, Madalin-cristian Bucur wrote:
> > The DPAA Ethernet driver is using the FMan MAC as the device for DMA
> > mapping. This is not actually correct, as the real DMA device is the
> > FMan port (the FMan Rx port for reception and the FMan Tx port for
> > transmission). Changing the device used for DMA mapping to the Fman
> > Rx and Tx port devices.
> >
> > Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
> > Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
> > ---
> >   drivers/net/ethernet/freescale/dpaa/dpaa_eth.c | 105 +++++++++++++----
> --------
> >   drivers/net/ethernet/freescale/dpaa/dpaa_eth.h |   8 +-
> >   2 files changed, 62 insertions(+), 51 deletions(-)
> >
> > diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
> b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
> > index 8d5686d88d30..639cafaa59b8 100644
> > --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
> > +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
> > @@ -1335,15 +1335,15 @@ static void dpaa_fd_release(const struct
> net_device *net_dev,
> >   		vaddr = phys_to_virt(qm_fd_addr(fd));
> >   		sgt = vaddr + qm_fd_get_offset(fd);
> >
> > -		dma_unmap_single(dpaa_bp->dev, qm_fd_addr(fd), dpaa_bp->size,
> > -				 DMA_FROM_DEVICE);
> > +		dma_unmap_single(dpaa_bp->priv->rx_dma_dev, qm_fd_addr(fd),
> > +				 dpaa_bp->size, DMA_FROM_DEVICE);
> >
> >   		dpaa_release_sgt_members(sgt);
> >
> > -		addr = dma_map_single(dpaa_bp->dev, vaddr, dpaa_bp->size,
> > -				      DMA_FROM_DEVICE);
> > -		if (dma_mapping_error(dpaa_bp->dev, addr)) {
> > -			dev_err(dpaa_bp->dev, "DMA mapping failed");
> > +		addr = dma_map_single(dpaa_bp->priv->rx_dma_dev, vaddr,
> > +				      dpaa_bp->size, DMA_FROM_DEVICE);
> > +		if (dma_mapping_error(dpaa_bp->priv->rx_dma_dev, addr)) {
> > +			netdev_err(net_dev, "DMA mapping failed");
> >   			return;
> >   		}
> >   		bm_buffer_set64(&bmb, addr);
> > @@ -1488,7 +1488,7 @@ static int dpaa_enable_tx_csum(struct dpaa_priv
> *priv,
> >
> >   static int dpaa_bp_add_8_bufs(const struct dpaa_bp *dpaa_bp)
> >   {
> > -	struct device *dev = dpaa_bp->dev;
> > +	struct net_device *net_dev = dpaa_bp->priv->net_dev;
> >   	struct bm_buffer bmb[8];
> >   	dma_addr_t addr;
> >   	void *new_buf;
> > @@ -1497,16 +1497,18 @@ static int dpaa_bp_add_8_bufs(const struct
> dpaa_bp *dpaa_bp)
> >   	for (i = 0; i < 8; i++) {
> >   		new_buf = netdev_alloc_frag(dpaa_bp->raw_size);
> >   		if (unlikely(!new_buf)) {
> > -			dev_err(dev, "netdev_alloc_frag() failed, size %zu\n",
> > -				dpaa_bp->raw_size);
> > +			netdev_err(net_dev,
> > +				   "netdev_alloc_frag() failed, size %zu\n",
> > +				   dpaa_bp->raw_size);
> >   			goto release_previous_buffs;
> >   		}
> >   		new_buf = PTR_ALIGN(new_buf, SMP_CACHE_BYTES);
> >
> > -		addr = dma_map_single(dev, new_buf,
> > +		addr = dma_map_single(dpaa_bp->priv->rx_dma_dev, new_buf,
> >   				      dpaa_bp->size, DMA_FROM_DEVICE);
> > -		if (unlikely(dma_mapping_error(dev, addr))) {
> > -			dev_err(dpaa_bp->dev, "DMA map failed");
> > +		if (unlikely(dma_mapping_error(dpaa_bp->priv->rx_dma_dev,
> > +					       addr))) {
> > +			netdev_err(net_dev, "DMA map failed");
> >   			goto release_previous_buffs;
> >   		}
> >
> > @@ -1634,7 +1636,7 @@ static struct sk_buff *dpaa_cleanup_tx_fd(const
> struct dpaa_priv *priv,
> >
> >   	if (unlikely(qm_fd_get_format(fd) == qm_fd_sg)) {
> >   		nr_frags = skb_shinfo(skb)->nr_frags;
> > -		dma_unmap_single(dev, addr,
> > +		dma_unmap_single(priv->tx_dma_dev, addr,
> >   				 qm_fd_get_offset(fd) + DPAA_SGT_SIZE,
> >   				 dma_dir);
> >
> > @@ -1644,21 +1646,21 @@ static struct sk_buff *dpaa_cleanup_tx_fd(const
> struct dpaa_priv *priv,
> >   		sgt = phys_to_virt(addr + qm_fd_get_offset(fd));
> >
> >   		/* sgt[0] is from lowmem, was dma_map_single()-ed */
> > -		dma_unmap_single(dev, qm_sg_addr(&sgt[0]),
> > +		dma_unmap_single(priv->tx_dma_dev, qm_sg_addr(&sgt[0]),
> >   				 qm_sg_entry_get_len(&sgt[0]), dma_dir);
> >
> >   		/* remaining pages were mapped with skb_frag_dma_map() */
> >   		for (i = 1; i <= nr_frags; i++) {
> >   			WARN_ON(qm_sg_entry_is_ext(&sgt[i]));
> >
> > -			dma_unmap_page(dev, qm_sg_addr(&sgt[i]),
> > +			dma_unmap_page(priv->tx_dma_dev, qm_sg_addr(&sgt[i]),
> >   				       qm_sg_entry_get_len(&sgt[i]), dma_dir);
> >   		}
> >
> >   		/* Free the page frag that we allocated on Tx */
> >   		skb_free_frag(phys_to_virt(addr));
> >   	} else {
> > -		dma_unmap_single(dev, addr,
> > +		dma_unmap_single(priv->tx_dma_dev, addr,
> >   				 skb_tail_pointer(skb) - (u8 *)skbh, dma_dir);
> >   	}
> >
> > @@ -1762,8 +1764,8 @@ static struct sk_buff *sg_fd_to_skb(const struct
> dpaa_priv *priv,
> >   			goto free_buffers;
> >
> >   		count_ptr = this_cpu_ptr(dpaa_bp->percpu_count);
> > -		dma_unmap_single(dpaa_bp->dev, sg_addr, dpaa_bp->size,
> > -				 DMA_FROM_DEVICE);
> > +		dma_unmap_single(dpaa_bp->priv->rx_dma_dev, sg_addr,
> > +				 dpaa_bp->size, DMA_FROM_DEVICE);
> >   		if (!skb) {
> >   			sz = dpaa_bp->size +
> >   				SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
> > @@ -1853,7 +1855,6 @@ static int skb_to_contig_fd(struct dpaa_priv
> *priv,
> >   			    int *offset)
> >   {
> >   	struct net_device *net_dev = priv->net_dev;
> > -	struct device *dev = net_dev->dev.parent;
> >   	enum dma_data_direction dma_dir;
> >   	unsigned char *buffer_start;
> >   	struct sk_buff **skbh;
> > @@ -1889,9 +1890,9 @@ static int skb_to_contig_fd(struct dpaa_priv
> *priv,
> >   	fd->cmd |= cpu_to_be32(FM_FD_CMD_FCO);
> >
> >   	/* Map the entire buffer size that may be seen by FMan, but no more
> */
> > -	addr = dma_map_single(dev, skbh,
> > +	addr = dma_map_single(priv->tx_dma_dev, skbh,
> >   			      skb_tail_pointer(skb) - buffer_start, dma_dir);
> > -	if (unlikely(dma_mapping_error(dev, addr))) {
> > +	if (unlikely(dma_mapping_error(priv->tx_dma_dev, addr))) {
> >   		if (net_ratelimit())
> >   			netif_err(priv, tx_err, net_dev, "dma_map_single()
> failed\n");
> >   		return -EINVAL;
> > @@ -1907,7 +1908,6 @@ static int skb_to_sg_fd(struct dpaa_priv *priv,
> >   	const enum dma_data_direction dma_dir = DMA_TO_DEVICE;
> >   	const int nr_frags = skb_shinfo(skb)->nr_frags;
> >   	struct net_device *net_dev = priv->net_dev;
> > -	struct device *dev = net_dev->dev.parent;
> >   	struct qm_sg_entry *sgt;
> >   	struct sk_buff **skbh;
> >   	int i, j, err, sz;
> > @@ -1946,10 +1946,10 @@ static int skb_to_sg_fd(struct dpaa_priv *priv,
> >   	qm_sg_entry_set_len(&sgt[0], frag_len);
> >   	sgt[0].bpid = FSL_DPAA_BPID_INV;
> >   	sgt[0].offset = 0;
> > -	addr = dma_map_single(dev, skb->data,
> > +	addr = dma_map_single(priv->tx_dma_dev, skb->data,
> >   			      skb_headlen(skb), dma_dir);
> > -	if (unlikely(dma_mapping_error(dev, addr))) {
> > -		dev_err(dev, "DMA mapping failed");
> > +	if (unlikely(dma_mapping_error(priv->tx_dma_dev, addr))) {
> > +		netdev_err(priv->net_dev, "DMA mapping failed");
> >   		err = -EINVAL;
> >   		goto sg0_map_failed;
> >   	}
> > @@ -1960,10 +1960,10 @@ static int skb_to_sg_fd(struct dpaa_priv *priv,
> >   		frag = &skb_shinfo(skb)->frags[i];
> >   		frag_len = skb_frag_size(frag);
> >   		WARN_ON(!skb_frag_page(frag));
> > -		addr = skb_frag_dma_map(dev, frag, 0,
> > +		addr = skb_frag_dma_map(priv->tx_dma_dev, frag, 0,
> >   					frag_len, dma_dir);
> > -		if (unlikely(dma_mapping_error(dev, addr))) {
> > -			dev_err(dev, "DMA mapping failed");
> > +		if (unlikely(dma_mapping_error(priv->tx_dma_dev, addr))) {
> > +			netdev_err(priv->net_dev, "DMA mapping failed");
> >   			err = -EINVAL;
> >   			goto sg_map_failed;
> >   		}
> > @@ -1986,10 +1986,10 @@ static int skb_to_sg_fd(struct dpaa_priv *priv,
> >   	skbh = (struct sk_buff **)buffer_start;
> >   	*skbh = skb;
> >
> > -	addr = dma_map_single(dev, buffer_start,
> > +	addr = dma_map_single(priv->tx_dma_dev, buffer_start,
> >   			      priv->tx_headroom + DPAA_SGT_SIZE, dma_dir);
> > -	if (unlikely(dma_mapping_error(dev, addr))) {
> > -		dev_err(dev, "DMA mapping failed");
> > +	if (unlikely(dma_mapping_error(priv->tx_dma_dev, addr))) {
> > +		netdev_err(priv->net_dev, "DMA mapping failed");
> >   		err = -EINVAL;
> >   		goto sgt_map_failed;
> >   	}
> > @@ -2003,7 +2003,7 @@ static int skb_to_sg_fd(struct dpaa_priv *priv,
> >   sgt_map_failed:
> >   sg_map_failed:
> >   	for (j = 0; j < i; j++)
> > -		dma_unmap_page(dev, qm_sg_addr(&sgt[j]),
> > +		dma_unmap_page(priv->tx_dma_dev, qm_sg_addr(&sgt[j]),
> >   			       qm_sg_entry_get_len(&sgt[j]), dma_dir);
> >   sg0_map_failed:
> >   csum_failed:
> > @@ -2304,7 +2304,8 @@ static enum qman_cb_dqrr_result
> rx_default_dqrr(struct qman_portal *portal,
> >   		return qman_cb_dqrr_consume;
> >   	}
> >
> > -	dma_unmap_single(dpaa_bp->dev, addr, dpaa_bp->size,
> DMA_FROM_DEVICE);
> > +	dma_unmap_single(dpaa_bp->priv->rx_dma_dev, addr, dpaa_bp->size,
> > +			 DMA_FROM_DEVICE);
> >
> >   	/* prefetch the first 64 bytes of the frame or the SGT start */
> >   	vaddr = phys_to_virt(addr);
> > @@ -2659,7 +2660,7 @@ static inline void dpaa_bp_free_pf(const struct
> dpaa_bp *bp,
> >   {
> >   	dma_addr_t addr = bm_buf_addr(bmb);
> >
> > -	dma_unmap_single(bp->dev, addr, bp->size, DMA_FROM_DEVICE);
> > +	dma_unmap_single(bp->priv->rx_dma_dev, addr, bp->size,
> DMA_FROM_DEVICE);
> >
> >   	skb_free_frag(phys_to_virt(addr));
> >   }
> > @@ -2769,25 +2770,27 @@ static int dpaa_eth_probe(struct platform_device
> *pdev)
> >   	int err = 0, i, channel;
> >   	struct device *dev;
> >
> > +	dev = &pdev->dev;
> > +
> >   	err = bman_is_probed();
> >   	if (!err)
> >   		return -EPROBE_DEFER;
> >   	if (err < 0) {
> > -		dev_err(&pdev->dev, "failing probe due to bman probe
> error\n");
> > +		dev_err(dev, "failing probe due to bman probe error\n");
> 
> These changes seem unrelated.

The &pdev->dev to dev replacement is not directly related to the
incorrect DMA mapping device, but a device had to be used for the prints
and repeating &pdev->dev everywhere did not look like a good idea.
Initially I had a separate patch for this, but it seemed superfluous to
add code in one patch only to remove it in another while doing almost
nothing.

> >   		return -ENODEV;
> >   	}
> >   	err = qman_is_probed();
> >   	if (!err)
> >   		return -EPROBE_DEFER;
> >   	if (err < 0) {
> > -		dev_err(&pdev->dev, "failing probe due to qman probe
> error\n");
> > +		dev_err(dev, "failing probe due to qman probe error\n");
> >   		return -ENODEV;
> >   	}
> >   	err = bman_portals_probed();
> >   	if (!err)
> >   		return -EPROBE_DEFER;
> >   	if (err < 0) {
> > -		dev_err(&pdev->dev,
> > +		dev_err(dev,
> >   			"failing probe due to bman portals probe error\n");
> >   		return -ENODEV;
> >   	}
> > @@ -2795,19 +2798,11 @@ static int dpaa_eth_probe(struct platform_device
> *pdev)
> >   	if (!err)
> >   		return -EPROBE_DEFER;
> >   	if (err < 0) {
> > -		dev_err(&pdev->dev,
> > +		dev_err(dev,
> >   			"failing probe due to qman portals probe error\n");
> >   		return -ENODEV;
> >   	}
> >
> > -	/* device used for DMA mapping */
> > -	dev = pdev->dev.parent;
> > -	err = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(40));
> > -	if (err) {
> > -		dev_err(dev, "dma_coerce_mask_and_coherent() failed\n");
> > -		return err;
> > -	}
> 
> Why are we dropping this explicit setting of the dma mask?
> 
> ---
> Best Regards, Laurentiu

Hi Laurentiu, you are probably reviewing these changes with your initial
patch in mind, which (incorrectly) used the same (Rx) port for DMA
mapping of both receive and transmit traffic. Please take a second look
at the changes in this patch.

Regards,
Madalin

^ permalink raw reply	[flat|nested] 15+ messages in thread

* Re: [PATCH net-next 5/6] dpaa_eth: change DMA device
  2019-10-22  9:10     ` Madalin-cristian Bucur
@ 2019-10-22 10:34       ` Laurentiu Tudor
  2019-10-22 10:49         ` Madalin-cristian Bucur
  0 siblings, 1 reply; 15+ messages in thread
From: Laurentiu Tudor @ 2019-10-22 10:34 UTC (permalink / raw)
  To: Madalin-cristian Bucur, davem, netdev; +Cc: Roy Pledge



On 22.10.2019 12:10, Madalin-cristian Bucur wrote:
>> -----Original Message-----
>> From: Laurentiu Tudor
>> Sent: Tuesday, October 22, 2019 11:50 AM
>> To: Madalin-cristian Bucur <madalin.bucur@nxp.com>; davem@davemloft.net;
>> netdev@vger.kernel.org
>> Cc: Roy Pledge <roy.pledge@nxp.com>
>> Subject: Re: [PATCH net-next 5/6] dpaa_eth: change DMA device
>>
>> Hello,
>>
>> On 21.10.2019 15:28, Madalin-cristian Bucur wrote:
>>> The DPAA Ethernet driver is using the FMan MAC as the device for DMA
>>> mapping. This is not actually correct, as the real DMA device is the
>>> FMan port (the FMan Rx port for reception and the FMan Tx port for
>>> transmission). Changing the device used for DMA mapping to the Fman
>>> Rx and Tx port devices.
>>>
>>> Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
>>> Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
>>> ---
>>>    drivers/net/ethernet/freescale/dpaa/dpaa_eth.c | 105 +++++++++++++----
>> --------
>>>    drivers/net/ethernet/freescale/dpaa/dpaa_eth.h |   8 +-
>>>    2 files changed, 62 insertions(+), 51 deletions(-)
>>>
>>> diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
>> b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
>>> index 8d5686d88d30..639cafaa59b8 100644
>>> --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
>>> +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
>>> @@ -1335,15 +1335,15 @@ static void dpaa_fd_release(const struct
>> net_device *net_dev,
>>>    		vaddr = phys_to_virt(qm_fd_addr(fd));
>>>    		sgt = vaddr + qm_fd_get_offset(fd);
>>>
>>> -		dma_unmap_single(dpaa_bp->dev, qm_fd_addr(fd), dpaa_bp->size,
>>> -				 DMA_FROM_DEVICE);
>>> +		dma_unmap_single(dpaa_bp->priv->rx_dma_dev, qm_fd_addr(fd),
>>> +				 dpaa_bp->size, DMA_FROM_DEVICE);
>>>
>>>    		dpaa_release_sgt_members(sgt);
>>>
>>> -		addr = dma_map_single(dpaa_bp->dev, vaddr, dpaa_bp->size,
>>> -				      DMA_FROM_DEVICE);
>>> -		if (dma_mapping_error(dpaa_bp->dev, addr)) {
>>> -			dev_err(dpaa_bp->dev, "DMA mapping failed");
>>> +		addr = dma_map_single(dpaa_bp->priv->rx_dma_dev, vaddr,
>>> +				      dpaa_bp->size, DMA_FROM_DEVICE);
>>> +		if (dma_mapping_error(dpaa_bp->priv->rx_dma_dev, addr)) {
>>> +			netdev_err(net_dev, "DMA mapping failed");
>>>    			return;
>>>    		}
>>>    		bm_buffer_set64(&bmb, addr);
>>> @@ -1488,7 +1488,7 @@ static int dpaa_enable_tx_csum(struct dpaa_priv
>> *priv,
>>>
>>>    static int dpaa_bp_add_8_bufs(const struct dpaa_bp *dpaa_bp)
>>>    {
>>> -	struct device *dev = dpaa_bp->dev;
>>> +	struct net_device *net_dev = dpaa_bp->priv->net_dev;
>>>    	struct bm_buffer bmb[8];
>>>    	dma_addr_t addr;
>>>    	void *new_buf;
>>> @@ -1497,16 +1497,18 @@ static int dpaa_bp_add_8_bufs(const struct
>> dpaa_bp *dpaa_bp)
>>>    	for (i = 0; i < 8; i++) {
>>>    		new_buf = netdev_alloc_frag(dpaa_bp->raw_size);
>>>    		if (unlikely(!new_buf)) {
>>> -			dev_err(dev, "netdev_alloc_frag() failed, size %zu\n",
>>> -				dpaa_bp->raw_size);
>>> +			netdev_err(net_dev,
>>> +				   "netdev_alloc_frag() failed, size %zu\n",
>>> +				   dpaa_bp->raw_size);
>>>    			goto release_previous_buffs;
>>>    		}
>>>    		new_buf = PTR_ALIGN(new_buf, SMP_CACHE_BYTES);
>>>
>>> -		addr = dma_map_single(dev, new_buf,
>>> +		addr = dma_map_single(dpaa_bp->priv->rx_dma_dev, new_buf,
>>>    				      dpaa_bp->size, DMA_FROM_DEVICE);
>>> -		if (unlikely(dma_mapping_error(dev, addr))) {
>>> -			dev_err(dpaa_bp->dev, "DMA map failed");
>>> +		if (unlikely(dma_mapping_error(dpaa_bp->priv->rx_dma_dev,
>>> +					       addr))) {
>>> +			netdev_err(net_dev, "DMA map failed");
>>>    			goto release_previous_buffs;
>>>    		}
>>>
>>> @@ -1634,7 +1636,7 @@ static struct sk_buff *dpaa_cleanup_tx_fd(const
>> struct dpaa_priv *priv,
>>>
>>>    	if (unlikely(qm_fd_get_format(fd) == qm_fd_sg)) {
>>>    		nr_frags = skb_shinfo(skb)->nr_frags;
>>> -		dma_unmap_single(dev, addr,
>>> +		dma_unmap_single(priv->tx_dma_dev, addr,
>>>    				 qm_fd_get_offset(fd) + DPAA_SGT_SIZE,
>>>    				 dma_dir);
>>>
>>> @@ -1644,21 +1646,21 @@ static struct sk_buff *dpaa_cleanup_tx_fd(const
>> struct dpaa_priv *priv,
>>>    		sgt = phys_to_virt(addr + qm_fd_get_offset(fd));
>>>
>>>    		/* sgt[0] is from lowmem, was dma_map_single()-ed */
>>> -		dma_unmap_single(dev, qm_sg_addr(&sgt[0]),
>>> +		dma_unmap_single(priv->tx_dma_dev, qm_sg_addr(&sgt[0]),
>>>    				 qm_sg_entry_get_len(&sgt[0]), dma_dir);
>>>
>>>    		/* remaining pages were mapped with skb_frag_dma_map() */
>>>    		for (i = 1; i <= nr_frags; i++) {
>>>    			WARN_ON(qm_sg_entry_is_ext(&sgt[i]));
>>>
>>> -			dma_unmap_page(dev, qm_sg_addr(&sgt[i]),
>>> +			dma_unmap_page(priv->tx_dma_dev, qm_sg_addr(&sgt[i]),
>>>    				       qm_sg_entry_get_len(&sgt[i]), dma_dir);
>>>    		}
>>>
>>>    		/* Free the page frag that we allocated on Tx */
>>>    		skb_free_frag(phys_to_virt(addr));
>>>    	} else {
>>> -		dma_unmap_single(dev, addr,
>>> +		dma_unmap_single(priv->tx_dma_dev, addr,
>>>    				 skb_tail_pointer(skb) - (u8 *)skbh, dma_dir);
>>>    	}
>>>
>>> @@ -1762,8 +1764,8 @@ static struct sk_buff *sg_fd_to_skb(const struct
>> dpaa_priv *priv,
>>>    			goto free_buffers;
>>>
>>>    		count_ptr = this_cpu_ptr(dpaa_bp->percpu_count);
>>> -		dma_unmap_single(dpaa_bp->dev, sg_addr, dpaa_bp->size,
>>> -				 DMA_FROM_DEVICE);
>>> +		dma_unmap_single(dpaa_bp->priv->rx_dma_dev, sg_addr,
>>> +				 dpaa_bp->size, DMA_FROM_DEVICE);
>>>    		if (!skb) {
>>>    			sz = dpaa_bp->size +
>>>    				SKB_DATA_ALIGN(sizeof(struct skb_shared_info));
>>> @@ -1853,7 +1855,6 @@ static int skb_to_contig_fd(struct dpaa_priv
>> *priv,
>>>    			    int *offset)
>>>    {
>>>    	struct net_device *net_dev = priv->net_dev;
>>> -	struct device *dev = net_dev->dev.parent;
>>>    	enum dma_data_direction dma_dir;
>>>    	unsigned char *buffer_start;
>>>    	struct sk_buff **skbh;
>>> @@ -1889,9 +1890,9 @@ static int skb_to_contig_fd(struct dpaa_priv
>> *priv,
>>>    	fd->cmd |= cpu_to_be32(FM_FD_CMD_FCO);
>>>
>>>    	/* Map the entire buffer size that may be seen by FMan, but no more
>> */
>>> -	addr = dma_map_single(dev, skbh,
>>> +	addr = dma_map_single(priv->tx_dma_dev, skbh,
>>>    			      skb_tail_pointer(skb) - buffer_start, dma_dir);
>>> -	if (unlikely(dma_mapping_error(dev, addr))) {
>>> +	if (unlikely(dma_mapping_error(priv->tx_dma_dev, addr))) {
>>>    		if (net_ratelimit())
>>>    			netif_err(priv, tx_err, net_dev, "dma_map_single()
>> failed\n");
>>>    		return -EINVAL;
>>> @@ -1907,7 +1908,6 @@ static int skb_to_sg_fd(struct dpaa_priv *priv,
>>>    	const enum dma_data_direction dma_dir = DMA_TO_DEVICE;
>>>    	const int nr_frags = skb_shinfo(skb)->nr_frags;
>>>    	struct net_device *net_dev = priv->net_dev;
>>> -	struct device *dev = net_dev->dev.parent;
>>>    	struct qm_sg_entry *sgt;
>>>    	struct sk_buff **skbh;
>>>    	int i, j, err, sz;
>>> @@ -1946,10 +1946,10 @@ static int skb_to_sg_fd(struct dpaa_priv *priv,
>>>    	qm_sg_entry_set_len(&sgt[0], frag_len);
>>>    	sgt[0].bpid = FSL_DPAA_BPID_INV;
>>>    	sgt[0].offset = 0;
>>> -	addr = dma_map_single(dev, skb->data,
>>> +	addr = dma_map_single(priv->tx_dma_dev, skb->data,
>>>    			      skb_headlen(skb), dma_dir);
>>> -	if (unlikely(dma_mapping_error(dev, addr))) {
>>> -		dev_err(dev, "DMA mapping failed");
>>> +	if (unlikely(dma_mapping_error(priv->tx_dma_dev, addr))) {
>>> +		netdev_err(priv->net_dev, "DMA mapping failed");
>>>    		err = -EINVAL;
>>>    		goto sg0_map_failed;
>>>    	}
>>> @@ -1960,10 +1960,10 @@ static int skb_to_sg_fd(struct dpaa_priv *priv,
>>>    		frag = &skb_shinfo(skb)->frags[i];
>>>    		frag_len = skb_frag_size(frag);
>>>    		WARN_ON(!skb_frag_page(frag));
>>> -		addr = skb_frag_dma_map(dev, frag, 0,
>>> +		addr = skb_frag_dma_map(priv->tx_dma_dev, frag, 0,
>>>    					frag_len, dma_dir);
>>> -		if (unlikely(dma_mapping_error(dev, addr))) {
>>> -			dev_err(dev, "DMA mapping failed");
>>> +		if (unlikely(dma_mapping_error(priv->tx_dma_dev, addr))) {
>>> +			netdev_err(priv->net_dev, "DMA mapping failed");
>>>    			err = -EINVAL;
>>>    			goto sg_map_failed;
>>>    		}
>>> @@ -1986,10 +1986,10 @@ static int skb_to_sg_fd(struct dpaa_priv *priv,
>>>    	skbh = (struct sk_buff **)buffer_start;
>>>    	*skbh = skb;
>>>
>>> -	addr = dma_map_single(dev, buffer_start,
>>> +	addr = dma_map_single(priv->tx_dma_dev, buffer_start,
>>>    			      priv->tx_headroom + DPAA_SGT_SIZE, dma_dir);
>>> -	if (unlikely(dma_mapping_error(dev, addr))) {
>>> -		dev_err(dev, "DMA mapping failed");
>>> +	if (unlikely(dma_mapping_error(priv->tx_dma_dev, addr))) {
>>> +		netdev_err(priv->net_dev, "DMA mapping failed");
>>>    		err = -EINVAL;
>>>    		goto sgt_map_failed;
>>>    	}
>>> @@ -2003,7 +2003,7 @@ static int skb_to_sg_fd(struct dpaa_priv *priv,
>>>    sgt_map_failed:
>>>    sg_map_failed:
>>>    	for (j = 0; j < i; j++)
>>> -		dma_unmap_page(dev, qm_sg_addr(&sgt[j]),
>>> +		dma_unmap_page(priv->tx_dma_dev, qm_sg_addr(&sgt[j]),
>>>    			       qm_sg_entry_get_len(&sgt[j]), dma_dir);
>>>    sg0_map_failed:
>>>    csum_failed:
>>> @@ -2304,7 +2304,8 @@ static enum qman_cb_dqrr_result
>> rx_default_dqrr(struct qman_portal *portal,
>>>    		return qman_cb_dqrr_consume;
>>>    	}
>>>
>>> -	dma_unmap_single(dpaa_bp->dev, addr, dpaa_bp->size,
>> DMA_FROM_DEVICE);
>>> +	dma_unmap_single(dpaa_bp->priv->rx_dma_dev, addr, dpaa_bp->size,
>>> +			 DMA_FROM_DEVICE);
>>>
>>>    	/* prefetch the first 64 bytes of the frame or the SGT start */
>>>    	vaddr = phys_to_virt(addr);
>>> @@ -2659,7 +2660,7 @@ static inline void dpaa_bp_free_pf(const struct
>> dpaa_bp *bp,
>>>    {
>>>    	dma_addr_t addr = bm_buf_addr(bmb);
>>>
>>> -	dma_unmap_single(bp->dev, addr, bp->size, DMA_FROM_DEVICE);
>>> +	dma_unmap_single(bp->priv->rx_dma_dev, addr, bp->size,
>> DMA_FROM_DEVICE);
>>>
>>>    	skb_free_frag(phys_to_virt(addr));
>>>    }
>>> @@ -2769,25 +2770,27 @@ static int dpaa_eth_probe(struct platform_device
>> *pdev)
>>>    	int err = 0, i, channel;
>>>    	struct device *dev;
>>>
>>> +	dev = &pdev->dev;
>>> +
>>>    	err = bman_is_probed();
>>>    	if (!err)
>>>    		return -EPROBE_DEFER;
>>>    	if (err < 0) {
>>> -		dev_err(&pdev->dev, "failing probe due to bman probe
>> error\n");
>>> +		dev_err(dev, "failing probe due to bman probe error\n");
>>
>> These changes seem unrelated.
> 
> The &pdev->dev to dev replacement is not directly related to the incorrect
> DMA mapping device, but a device had to be used for the prints and propagating
> &pdev->dev did not look like a good idea. Initially I had a separate patch for
> this, but it seemed superfluous to add code in one patch only to remove it in
> another, for almost no gain.

Fair enough.

>>>    		return -ENODEV;
>>>    	}
>>>    	err = qman_is_probed();
>>>    	if (!err)
>>>    		return -EPROBE_DEFER;
>>>    	if (err < 0) {
>>> -		dev_err(&pdev->dev, "failing probe due to qman probe
>> error\n");
>>> +		dev_err(dev, "failing probe due to qman probe error\n");
>>>    		return -ENODEV;
>>>    	}
>>>    	err = bman_portals_probed();
>>>    	if (!err)
>>>    		return -EPROBE_DEFER;
>>>    	if (err < 0) {
>>> -		dev_err(&pdev->dev,
>>> +		dev_err(dev,
>>>    			"failing probe due to bman portals probe error\n");
>>>    		return -ENODEV;
>>>    	}
>>> @@ -2795,19 +2798,11 @@ static int dpaa_eth_probe(struct platform_device
>> *pdev)
>>>    	if (!err)
>>>    		return -EPROBE_DEFER;
>>>    	if (err < 0) {
>>> -		dev_err(&pdev->dev,
>>> +		dev_err(dev,
>>>    			"failing probe due to qman portals probe error\n");
>>>    		return -ENODEV;
>>>    	}
>>>
>>> -	/* device used for DMA mapping */
>>> -	dev = pdev->dev.parent;
>>> -	err = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(40));
>>> -	if (err) {
>>> -		dev_err(dev, "dma_coerce_mask_and_coherent() failed\n");
>>> -		return err;
>>> -	}
>>
>> Why are we dropping this explicit setting of the dma mask?
>>
>> ---
>> Best Regards, Laurentiu
> 
> Hi Laurentiu, you are probably reviewing these changes with your initial
> patch in mind, which was (incorrectly) using the same (Rx) port for DMA
> mapping of both receive and transmit traffic. Please take a second look at
> the changes in this patch.

My bad, didn't notice you're setting it below. Sorry for the noise.

---
Best Regards, Laurentiu

^ permalink raw reply	[flat|nested] 15+ messages in thread

* RE: [PATCH net-next 5/6] dpaa_eth: change DMA device
  2019-10-22 10:34       ` Laurentiu Tudor
@ 2019-10-22 10:49         ` Madalin-cristian Bucur
  0 siblings, 0 replies; 15+ messages in thread
From: Madalin-cristian Bucur @ 2019-10-22 10:49 UTC (permalink / raw)
  To: Laurentiu Tudor, davem, netdev; +Cc: Roy Pledge, Jakub Kicinski

> -----Original Message-----
> From: Laurentiu Tudor
> Sent: Tuesday, October 22, 2019 1:35 PM
> To: Madalin-cristian Bucur <madalin.bucur@nxp.com>; davem@davemloft.net;
> netdev@vger.kernel.org
> Cc: Roy Pledge <roy.pledge@nxp.com>
> Subject: Re: [PATCH net-next 5/6] dpaa_eth: change DMA device
> 
> 
> 
> On 22.10.2019 12:10, Madalin-cristian Bucur wrote:
> >> -----Original Message-----
> >> From: Laurentiu Tudor
> >> Sent: Tuesday, October 22, 2019 11:50 AM
> >> To: Madalin-cristian Bucur <madalin.bucur@nxp.com>;
> davem@davemloft.net;
> >> netdev@vger.kernel.org
> >> Cc: Roy Pledge <roy.pledge@nxp.com>
> >> Subject: Re: [PATCH net-next 5/6] dpaa_eth: change DMA device
> >>
> >> Hello,
> >>
> >> On 21.10.2019 15:28, Madalin-cristian Bucur wrote:
> >>> The DPAA Ethernet driver is using the FMan MAC as the device for DMA
> >>> mapping. This is not actually correct, as the real DMA device is the
> >>> FMan port (the FMan Rx port for reception and the FMan Tx port for
> >>> transmission). Changing the device used for DMA mapping to the Fman
> >>> Rx and Tx port devices.
> >>>
> >>> Signed-off-by: Madalin Bucur <madalin.bucur@nxp.com>
> >>> Signed-off-by: Laurentiu Tudor <laurentiu.tudor@nxp.com>
> >>> ---
> >>>    drivers/net/ethernet/freescale/dpaa/dpaa_eth.c | 105 +++++++++++++------------
> >>>    drivers/net/ethernet/freescale/dpaa/dpaa_eth.h |   8 +-
> >>>    2 files changed, 62 insertions(+), 51 deletions(-)
> >>>
> >>> diff --git a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
> >> b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
> >>> index 8d5686d88d30..639cafaa59b8 100644
> >>> --- a/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
> >>> +++ b/drivers/net/ethernet/freescale/dpaa/dpaa_eth.c
> >>> @@ -1335,15 +1335,15 @@ static void dpaa_fd_release(const struct
> >> net_device *net_dev,
> >>>    		vaddr = phys_to_virt(qm_fd_addr(fd));
> >>>    		sgt = vaddr + qm_fd_get_offset(fd);
> >>>
> >>> -		dma_unmap_single(dpaa_bp->dev, qm_fd_addr(fd), dpaa_bp->size,
> >>> -				 DMA_FROM_DEVICE);
> >>> +		dma_unmap_single(dpaa_bp->priv->rx_dma_dev, qm_fd_addr(fd),
> >>> +				 dpaa_bp->size, DMA_FROM_DEVICE);
> >>>
> >>>    		dpaa_release_sgt_members(sgt);
> >>>
> >>> -		addr = dma_map_single(dpaa_bp->dev, vaddr, dpaa_bp->size,
> >>> -				      DMA_FROM_DEVICE);
> >>> -		if (dma_mapping_error(dpaa_bp->dev, addr)) {
> >>> -			dev_err(dpaa_bp->dev, "DMA mapping failed");
> >>> +		addr = dma_map_single(dpaa_bp->priv->rx_dma_dev, vaddr,
> >>> +				      dpaa_bp->size, DMA_FROM_DEVICE);
> >>> +		if (dma_mapping_error(dpaa_bp->priv->rx_dma_dev, addr)) {
> >>> +			netdev_err(net_dev, "DMA mapping failed");
> >>>    			return;
> >>>    		}
> >>>    		bm_buffer_set64(&bmb, addr);
> >>> @@ -1488,7 +1488,7 @@ static int dpaa_enable_tx_csum(struct dpaa_priv
> >> *priv,
> >>>
> >>>    static int dpaa_bp_add_8_bufs(const struct dpaa_bp *dpaa_bp)
> >>>    {
> >>> -	struct device *dev = dpaa_bp->dev;
> >>> +	struct net_device *net_dev = dpaa_bp->priv->net_dev;
> >>>    	struct bm_buffer bmb[8];
> >>>    	dma_addr_t addr;
> >>>    	void *new_buf;
> >>> @@ -1497,16 +1497,18 @@ static int dpaa_bp_add_8_bufs(const struct
> >> dpaa_bp *dpaa_bp)
> >>>    	for (i = 0; i < 8; i++) {
> >>>    		new_buf = netdev_alloc_frag(dpaa_bp->raw_size);
> >>>    		if (unlikely(!new_buf)) {
> >>> -			dev_err(dev, "netdev_alloc_frag() failed, size %zu\n",
> >>> -				dpaa_bp->raw_size);
> >>> +			netdev_err(net_dev,
> >>> +				   "netdev_alloc_frag() failed, size %zu\n",
> >>> +				   dpaa_bp->raw_size);
> >>>    			goto release_previous_buffs;
> >>>    		}
> >>>    		new_buf = PTR_ALIGN(new_buf, SMP_CACHE_BYTES);
> >>>
> >>> -		addr = dma_map_single(dev, new_buf,
> >>> +		addr = dma_map_single(dpaa_bp->priv->rx_dma_dev, new_buf,
> >>>    				      dpaa_bp->size, DMA_FROM_DEVICE);
> >>> -		if (unlikely(dma_mapping_error(dev, addr))) {
> >>> -			dev_err(dpaa_bp->dev, "DMA map failed");
> >>> +		if (unlikely(dma_mapping_error(dpaa_bp->priv->rx_dma_dev,
> >>> +					       addr))) {
> >>> +			netdev_err(net_dev, "DMA map failed");
> >>>    			goto release_previous_buffs;
> >>>    		}
> >>>
> >>> @@ -1634,7 +1636,7 @@ static struct sk_buff *dpaa_cleanup_tx_fd(const
> >> struct dpaa_priv *priv,
> >>>
> >>>    	if (unlikely(qm_fd_get_format(fd) == qm_fd_sg)) {
> >>>    		nr_frags = skb_shinfo(skb)->nr_frags;
> >>> -		dma_unmap_single(dev, addr,
> >>> +		dma_unmap_single(priv->tx_dma_dev, addr,
> >>>    				 qm_fd_get_offset(fd) + DPAA_SGT_SIZE,
> >>>    				 dma_dir);
> >>>
> >>> @@ -1644,21 +1646,21 @@ static struct sk_buff
> *dpaa_cleanup_tx_fd(const
> >> struct dpaa_priv *priv,
> >>>    		sgt = phys_to_virt(addr + qm_fd_get_offset(fd));
> >>>
> >>>    		/* sgt[0] is from lowmem, was dma_map_single()-ed */
> >>> -		dma_unmap_single(dev, qm_sg_addr(&sgt[0]),
> >>> +		dma_unmap_single(priv->tx_dma_dev, qm_sg_addr(&sgt[0]),
> >>>    				 qm_sg_entry_get_len(&sgt[0]), dma_dir);
> >>>
> >>>    		/* remaining pages were mapped with skb_frag_dma_map()
> */
> >>>    		for (i = 1; i <= nr_frags; i++) {
> >>>    			WARN_ON(qm_sg_entry_is_ext(&sgt[i]));
> >>>
> >>> -			dma_unmap_page(dev, qm_sg_addr(&sgt[i]),
> >>> +			dma_unmap_page(priv->tx_dma_dev, qm_sg_addr(&sgt[i]),
> >>>    				       qm_sg_entry_get_len(&sgt[i]),
> dma_dir);
> >>>    		}
> >>>
> >>>    		/* Free the page frag that we allocated on Tx */
> >>>    		skb_free_frag(phys_to_virt(addr));
> >>>    	} else {
> >>> -		dma_unmap_single(dev, addr,
> >>> +		dma_unmap_single(priv->tx_dma_dev, addr,
> >>>    				 skb_tail_pointer(skb) - (u8 *)skbh,
> dma_dir);
> >>>    	}
> >>>
> >>> @@ -1762,8 +1764,8 @@ static struct sk_buff *sg_fd_to_skb(const struct
> >> dpaa_priv *priv,
> >>>    			goto free_buffers;
> >>>
> >>>    		count_ptr = this_cpu_ptr(dpaa_bp->percpu_count);
> >>> -		dma_unmap_single(dpaa_bp->dev, sg_addr, dpaa_bp->size,
> >>> -				 DMA_FROM_DEVICE);
> >>> +		dma_unmap_single(dpaa_bp->priv->rx_dma_dev, sg_addr,
> >>> +				 dpaa_bp->size, DMA_FROM_DEVICE);
> >>>    		if (!skb) {
> >>>    			sz = dpaa_bp->size +
> >>>    				SKB_DATA_ALIGN(sizeof(struct
> skb_shared_info));
> >>> @@ -1853,7 +1855,6 @@ static int skb_to_contig_fd(struct dpaa_priv
> >> *priv,
> >>>    			    int *offset)
> >>>    {
> >>>    	struct net_device *net_dev = priv->net_dev;
> >>> -	struct device *dev = net_dev->dev.parent;
> >>>    	enum dma_data_direction dma_dir;
> >>>    	unsigned char *buffer_start;
> >>>    	struct sk_buff **skbh;
> >>> @@ -1889,9 +1890,9 @@ static int skb_to_contig_fd(struct dpaa_priv
> >> *priv,
> >>>    	fd->cmd |= cpu_to_be32(FM_FD_CMD_FCO);
> >>>
> >>>    	/* Map the entire buffer size that may be seen by FMan, but no
> more
> >> */
> >>> -	addr = dma_map_single(dev, skbh,
> >>> +	addr = dma_map_single(priv->tx_dma_dev, skbh,
> >>>    			      skb_tail_pointer(skb) - buffer_start,
> dma_dir);
> >>> -	if (unlikely(dma_mapping_error(dev, addr))) {
> >>> +	if (unlikely(dma_mapping_error(priv->tx_dma_dev, addr))) {
> >>>    		if (net_ratelimit())
> >>>    			netif_err(priv, tx_err, net_dev, "dma_map_single()
> >> failed\n");
> >>>    		return -EINVAL;
> >>> @@ -1907,7 +1908,6 @@ static int skb_to_sg_fd(struct dpaa_priv *priv,
> >>>    	const enum dma_data_direction dma_dir = DMA_TO_DEVICE;
> >>>    	const int nr_frags = skb_shinfo(skb)->nr_frags;
> >>>    	struct net_device *net_dev = priv->net_dev;
> >>> -	struct device *dev = net_dev->dev.parent;
> >>>    	struct qm_sg_entry *sgt;
> >>>    	struct sk_buff **skbh;
> >>>    	int i, j, err, sz;
> >>> @@ -1946,10 +1946,10 @@ static int skb_to_sg_fd(struct dpaa_priv
> *priv,
> >>>    	qm_sg_entry_set_len(&sgt[0], frag_len);
> >>>    	sgt[0].bpid = FSL_DPAA_BPID_INV;
> >>>    	sgt[0].offset = 0;
> >>> -	addr = dma_map_single(dev, skb->data,
> >>> +	addr = dma_map_single(priv->tx_dma_dev, skb->data,
> >>>    			      skb_headlen(skb), dma_dir);
> >>> -	if (unlikely(dma_mapping_error(dev, addr))) {
> >>> -		dev_err(dev, "DMA mapping failed");
> >>> +	if (unlikely(dma_mapping_error(priv->tx_dma_dev, addr))) {
> >>> +		netdev_err(priv->net_dev, "DMA mapping failed");
> >>>    		err = -EINVAL;
> >>>    		goto sg0_map_failed;
> >>>    	}
> >>> @@ -1960,10 +1960,10 @@ static int skb_to_sg_fd(struct dpaa_priv
> *priv,
> >>>    		frag = &skb_shinfo(skb)->frags[i];
> >>>    		frag_len = skb_frag_size(frag);
> >>>    		WARN_ON(!skb_frag_page(frag));
> >>> -		addr = skb_frag_dma_map(dev, frag, 0,
> >>> +		addr = skb_frag_dma_map(priv->tx_dma_dev, frag, 0,
> >>>    					frag_len, dma_dir);
> >>> -		if (unlikely(dma_mapping_error(dev, addr))) {
> >>> -			dev_err(dev, "DMA mapping failed");
> >>> +		if (unlikely(dma_mapping_error(priv->tx_dma_dev, addr))) {
> >>> +			netdev_err(priv->net_dev, "DMA mapping failed");
> >>>    			err = -EINVAL;
> >>>    			goto sg_map_failed;
> >>>    		}
> >>> @@ -1986,10 +1986,10 @@ static int skb_to_sg_fd(struct dpaa_priv
> *priv,
> >>>    	skbh = (struct sk_buff **)buffer_start;
> >>>    	*skbh = skb;
> >>>
> >>> -	addr = dma_map_single(dev, buffer_start,
> >>> +	addr = dma_map_single(priv->tx_dma_dev, buffer_start,
> >>>    			      priv->tx_headroom + DPAA_SGT_SIZE, dma_dir);
> >>> -	if (unlikely(dma_mapping_error(dev, addr))) {
> >>> -		dev_err(dev, "DMA mapping failed");
> >>> +	if (unlikely(dma_mapping_error(priv->tx_dma_dev, addr))) {
> >>> +		netdev_err(priv->net_dev, "DMA mapping failed");
> >>>    		err = -EINVAL;
> >>>    		goto sgt_map_failed;
> >>>    	}
> >>> @@ -2003,7 +2003,7 @@ static int skb_to_sg_fd(struct dpaa_priv *priv,
> >>>    sgt_map_failed:
> >>>    sg_map_failed:
> >>>    	for (j = 0; j < i; j++)
> >>> -		dma_unmap_page(dev, qm_sg_addr(&sgt[j]),
> >>> +		dma_unmap_page(priv->tx_dma_dev, qm_sg_addr(&sgt[j]),
> >>>    			       qm_sg_entry_get_len(&sgt[j]), dma_dir);
> >>>    sg0_map_failed:
> >>>    csum_failed:
> >>> @@ -2304,7 +2304,8 @@ static enum qman_cb_dqrr_result
> >> rx_default_dqrr(struct qman_portal *portal,
> >>>    		return qman_cb_dqrr_consume;
> >>>    	}
> >>>
> >>> -	dma_unmap_single(dpaa_bp->dev, addr, dpaa_bp->size,
> >> DMA_FROM_DEVICE);
> >>> +	dma_unmap_single(dpaa_bp->priv->rx_dma_dev, addr, dpaa_bp->size,
> >>> +			 DMA_FROM_DEVICE);
> >>>
> >>>    	/* prefetch the first 64 bytes of the frame or the SGT start
> */
> >>>    	vaddr = phys_to_virt(addr);
> >>> @@ -2659,7 +2660,7 @@ static inline void dpaa_bp_free_pf(const struct
> >> dpaa_bp *bp,
> >>>    {
> >>>    	dma_addr_t addr = bm_buf_addr(bmb);
> >>>
> >>> -	dma_unmap_single(bp->dev, addr, bp->size, DMA_FROM_DEVICE);
> >>> +	dma_unmap_single(bp->priv->rx_dma_dev, addr, bp->size,
> >> DMA_FROM_DEVICE);
> >>>
> >>>    	skb_free_frag(phys_to_virt(addr));
> >>>    }
> >>> @@ -2769,25 +2770,27 @@ static int dpaa_eth_probe(struct
> platform_device
> >> *pdev)
> >>>    	int err = 0, i, channel;
> >>>    	struct device *dev;
> >>>
> >>> +	dev = &pdev->dev;
> >>> +
> >>>    	err = bman_is_probed();
> >>>    	if (!err)
> >>>    		return -EPROBE_DEFER;
> >>>    	if (err < 0) {
> >>> -		dev_err(&pdev->dev, "failing probe due to bman probe
> >> error\n");
> >>> +		dev_err(dev, "failing probe due to bman probe error\n");
> >>
> >> These changes seem unrelated.
> >
> > The &pdev->dev to dev replacement is not directly related to the incorrect
> > DMA mapping device, but a device had to be used for the prints and
> > propagating &pdev->dev did not look like a good idea. Initially I had a
> > separate patch for this, but it seemed superfluous to add code in one
> > patch only to remove it in another, for almost no gain.
> 
> Fair enough.
> 
> >>>    		return -ENODEV;
> >>>    	}
> >>>    	err = qman_is_probed();
> >>>    	if (!err)
> >>>    		return -EPROBE_DEFER;
> >>>    	if (err < 0) {
> >>> -		dev_err(&pdev->dev, "failing probe due to qman probe
> >> error\n");
> >>> +		dev_err(dev, "failing probe due to qman probe error\n");
> >>>    		return -ENODEV;
> >>>    	}
> >>>    	err = bman_portals_probed();
> >>>    	if (!err)
> >>>    		return -EPROBE_DEFER;
> >>>    	if (err < 0) {
> >>> -		dev_err(&pdev->dev,
> >>> +		dev_err(dev,
> >>>    			"failing probe due to bman portals probe
> error\n");
> >>>    		return -ENODEV;
> >>>    	}
> >>> @@ -2795,19 +2798,11 @@ static int dpaa_eth_probe(struct
> platform_device
> >> *pdev)
> >>>    	if (!err)
> >>>    		return -EPROBE_DEFER;
> >>>    	if (err < 0) {
> >>> -		dev_err(&pdev->dev,
> >>> +		dev_err(dev,
> >>>    			"failing probe due to qman portals probe
> error\n");
> >>>    		return -ENODEV;
> >>>    	}
> >>>
> >>> -	/* device used for DMA mapping */
> >>> -	dev = pdev->dev.parent;
> >>> -	err = dma_coerce_mask_and_coherent(dev, DMA_BIT_MASK(40));
> >>> -	if (err) {
> >>> -		dev_err(dev, "dma_coerce_mask_and_coherent() failed\n");
> >>> -		return err;
> >>> -	}
> >>
> >> Why are we dropping this explicit setting of the dma mask?
> >>
> >> ---
> >> Best Regards, Laurentiu
> >
> > Hi Laurentiu, you are probably reviewing these changes with your initial
> > patch in mind, which was (incorrectly) using the same (Rx) port for DMA
> > mapping of both receive and transmit traffic. Please take a second look
> > at the changes in this patch.
> 
> My bad, didn't notice you're setting it below. Sorry for the noise.
> 
> ---
> Best Regards, Laurentiu

Never mind, I need to send everything again just to get the sender name right...

Madalin

^ permalink raw reply	[flat|nested] 15+ messages in thread

end of thread, other threads:[~2019-10-22 10:49 UTC | newest]

Thread overview: 15+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-10-21 12:27 [PATCH net-next 0/6] DPAA Ethernet changes Madalin-cristian Bucur
2019-10-21 12:27 ` [PATCH net-next 1/6] fsl/fman: don't touch liodn base regs reserved on non-PAMU SoCs Madalin-cristian Bucur
2019-10-21 12:27 ` [PATCH net-next 2/6] dpaa_eth: defer probing after qbman Madalin-cristian Bucur
2019-10-21 12:27 ` [PATCH net-next 3/6] dpaa_eth: remove redundant code Madalin-cristian Bucur
2019-10-21 12:28 ` [PATCH net-next 4/6] fsl/fman: add API to get the device behind a fman port Madalin-cristian Bucur
2019-10-21 12:28 ` [PATCH net-next 5/6] dpaa_eth: change DMA device Madalin-cristian Bucur
2019-10-22  4:22   ` Jakub Kicinski
2019-10-22  6:47     ` Madalin-cristian Bucur
2019-10-22  8:49   ` Laurentiu Tudor
2019-10-22  9:10     ` Madalin-cristian Bucur
2019-10-22 10:34       ` Laurentiu Tudor
2019-10-22 10:49         ` Madalin-cristian Bucur
2019-10-21 12:28 ` [PATCH net-next 6/6] fsl/fman: remove unused struct member Madalin-cristian Bucur
2019-10-22  4:26 ` [PATCH net-next 0/6] DPAA Ethernet changes Jakub Kicinski
2019-10-22  6:43   ` Madalin-cristian Bucur
