linux-kernel.vger.kernel.org archive mirror
* [PATCH net-next V3 0/3] net: axienet: Introduce dmaengine
@ 2023-05-10  8:50 Sarath Babu Naidu Gaddam
  2023-05-10  8:50 ` [PATCH net-next V3 1/3] dt-bindings: net: xilinx_axienet: Introduce dmaengine binding support Sarath Babu Naidu Gaddam
                   ` (2 more replies)
  0 siblings, 3 replies; 14+ messages in thread
From: Sarath Babu Naidu Gaddam @ 2023-05-10  8:50 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, robh+dt, krzysztof.kozlowski+dt
  Cc: linux, michal.simek, radhey.shyam.pandey, netdev, devicetree,
	linux-arm-kernel, linux-kernel, anirudha.sarangi, harini.katakam,
	sarath.babu.naidu.gaddam, git

The axiethernet driver can use the dmaengine framework to communicate
with the Xilinx DMA engine drivers (AXIDMA, MCDMA). The motivation behind
this dmaengine adoption is to reuse the in-kernel Xilinx DMA engine
driver[1] and to remove the redundant DMA programming sequence[2] from
the ethernet driver. This simplifies the ethernet driver and also makes
it generic enough to be hooked to any compliant DMA IP, i.e. AXIDMA or
MCDMA, without any modification.

The dmaengine framework was extended with metadata API support during
the axidma RFC[3] discussion. However, it still needs further enhancements
to make it well suited for ethernet use cases.

Backward compatibility support:
To support backward compatibility, we plan to use the following approach:
1) Treat the "dmas" property as optional for now, to differentiate between
   the dmaengine-based ethernet driver and the built-in DMA ethernet
   driver (see the sketch below). This property will be made required
   later.
2) After some time, introduce a new compatible string to support the
   dmaengine method. This new compatible string will use different
   APIs for init and data transfer.
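
For illustration, a minimal sketch of the detection logic in probe,
matching what patch 2/3 does (simplified and untested; field and property
names as used in this series):

	/* The presence of "dmas" selects the dmaengine path. */
	if (of_find_property(pdev->dev.of_node, "dmas", NULL)) {
		/* dmaengine path: channels are requested later via
		 * dma_request_chan(lp->dev, "tx_chan0") and
		 * dma_request_chan(lp->dev, "rx_chan0").
		 */
		lp->has_dmas = 1;
	} else {
		/* Legacy path: map the built-in AXI DMA registers and IRQs. */
	}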

Comments, suggestions, thoughts to implement remaining functional features
are very welcome!

[1]: https://github.com/torvalds/linux/blob/master/drivers/dma/xilinx/xilinx_dma.c
[2]: https://github.com/torvalds/linux/blob/master/drivers/net/ethernet/xilinx/xilinx_axienet_main.c#L238
[3]: http://lkml.iu.edu/hypermail/linux/kernel/1804.0/00367.html

Changes in V3:
1) Moved RFC to PATCH.
2) Removed ethtool get/set coalesce; it will be added back later.
3) Added backward compatibility support.
4) Split the dmaengine support patch of V2 into two patches (2/3 and 3/3).
https://lore.kernel.org/all/20220920055703.13246-4-sarath.babu.naidu.gaddam@amd.com/

Changes in V2:
1) Add ethtool get/set coalesce and DMA reset using the dmaengine framework.
2) Add performance numbers.
3) Remove .txt and change the file name to xlnx,axiethernet.yaml.
4) Fix DT check warning ('device_type' does not match any of the regexes:
   'pinctrl-[0-9]+', from schema: Documentation/devicetree/bindings/net/
   xilinx_axienet.yaml).

Radhey Shyam Pandey (1):
  dt-bindings: net: xilinx_axienet: Introduce dmaengine binding support

Sarath Babu Naidu Gaddam (2):
  net: axienet: Preparatory changes for dmaengine support
  net: axienet: Introduce dmaengine support

 .../bindings/net/xlnx,axi-ethernet.yaml       |  12 +
 drivers/net/ethernet/xilinx/xilinx_axienet.h  |   8 +
 .../net/ethernet/xilinx/xilinx_axienet_main.c | 636 ++++++++++++++----
 3 files changed, 533 insertions(+), 123 deletions(-)

-- 
2.25.1



* [PATCH net-next V3 1/3] dt-bindings: net: xilinx_axienet: Introduce dmaengine binding support
  2023-05-10  8:50 [PATCH net-next V3 0/3] net: axienet: Introduce dmaengine Sarath Babu Naidu Gaddam
@ 2023-05-10  8:50 ` Sarath Babu Naidu Gaddam
  2023-05-10 10:08   ` Krzysztof Kozlowski
  2023-05-10  8:50 ` [PATCH net-next V3 2/3] net: axienet: Preparatory changes for dmaengine support Sarath Babu Naidu Gaddam
  2023-05-10  8:50 ` [PATCH net-next V3 3/3] net: axienet: Introduce " Sarath Babu Naidu Gaddam
  2 siblings, 1 reply; 14+ messages in thread
From: Sarath Babu Naidu Gaddam @ 2023-05-10  8:50 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, robh+dt, krzysztof.kozlowski+dt
  Cc: linux, michal.simek, radhey.shyam.pandey, netdev, devicetree,
	linux-arm-kernel, linux-kernel, anirudha.sarangi, harini.katakam,
	sarath.babu.naidu.gaddam, git

From: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>

The axiethernet driver will use the dmaengine framework to communicate
with the DMA controller IP instead of the built-in DMA programming
sequence.

To request the DMA transmit and receive channels, the axiethernet driver
uses the generic "dmas" and "dma-names" properties.

Also, to support backward compatibility, the presence of the "dmas"
property is used to identify whether the driver should use the dmaengine
framework or the legacy driver (built-in DMA programming).

At this point using the dmaengine framework is recommended but optional.
Once the solution is stable, "dmas" will be made a required property.
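
For reference, the driver side (patch 3/3) requests the channels by these
names; roughly (a sketch, not the exact patch code):

	struct dma_chan *tx_chan, *rx_chan;

	tx_chan = dma_request_chan(lp->dev, "tx_chan0"); /* first "dmas" entry */
	rx_chan = dma_request_chan(lp->dev, "rx_chan0"); /* second "dmas" entry */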

Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Sarath Babu Naidu Gaddam <sarath.babu.naidu.gaddam@amd.com>
---
These changes are on top of below txt to yaml conversion discussion
https://lore.kernel.org/all/20230308061223.1358637-1-sarath.babu.naidu.gaddam@amd.com/#Z2e.:20230308061223.1358637-1-sarath.babu.naidu.gaddam::40amd.com:1bindings:net:xlnx::2caxi-ethernet.yaml

Changes in V3:
1) Reverted the reg and interrupts properties to support backward
   compatibility.
2) Moved the dmas and dma-names properties out of the required properties.

Changes in V2:
- None.
---
 .../devicetree/bindings/net/xlnx,axi-ethernet.yaml   | 12 ++++++++++++
 1 file changed, 12 insertions(+)

diff --git a/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml b/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
index 80843c177029..9dfa1976e260 100644
--- a/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
+++ b/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
@@ -122,6 +122,16 @@ properties:
       modes, where "pcs-handle" should be used to point to the PCS/PMA PHY,
       and "phy-handle" should point to an external PHY if exists.
 
+  dmas:
+    items:
+      - description: TX DMA Channel phandle and DMA request line number
+      - description: RX DMA Channel phandle and DMA request line number
+
+  dma-names:
+    items:
+      - const: tx_chan0
+      - const: rx_chan0
+
 required:
   - compatible
   - interrupts
@@ -157,6 +167,8 @@ examples:
         clocks = <&axi_clk>, <&axi_clk>, <&pl_enet_ref_clk>, <&mgt_clk>;
         phy-mode = "mii";
         reg = <0x40c00000 0x40000>,<0x50c00000 0x40000>;
+        dmas = <&xilinx_dma 0>, <&xilinx_dma 1>;
+        dma-names = "tx_chan0", "rx_chan0";
         xlnx,rxcsum = <0x2>;
         xlnx,rxmem = <0x800>;
         xlnx,txcsum = <0x2>;
-- 
2.25.1



* [PATCH net-next V3 2/3] net: axienet: Preparatory changes for dmaengine support
  2023-05-10  8:50 [PATCH net-next V3 0/3] net: axienet: Introduce dmaengine Sarath Babu Naidu Gaddam
  2023-05-10  8:50 ` [PATCH net-next V3 1/3] dt-bindings: net: xilinx_axienet: Introduce dmaengine binding support Sarath Babu Naidu Gaddam
@ 2023-05-10  8:50 ` Sarath Babu Naidu Gaddam
  2023-05-10 13:19   ` [EXT] " Subbaraya Sundeep Bhatta
  2023-05-10  8:50 ` [PATCH net-next V3 3/3] net: axienet: Introduce " Sarath Babu Naidu Gaddam
  2 siblings, 1 reply; 14+ messages in thread
From: Sarath Babu Naidu Gaddam @ 2023-05-10  8:50 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, robh+dt, krzysztof.kozlowski+dt
  Cc: linux, michal.simek, radhey.shyam.pandey, netdev, devicetree,
	linux-arm-kernel, linux-kernel, anirudha.sarangi, harini.katakam,
	sarath.babu.naidu.gaddam, git

The axiethernet driver has in-built DMA programming. The aim is to remove
the axiethernet AXI DMA programming after some time and instead use the
dmaengine framework to communicate with the existing Xilinx DMA engine
controller (xilinx_dma) driver.

Keep the AXI DMA programming code under the AXIENET_USE_DMA check so that
the dmaengine changes can be added later.

Perform minor code reordering to minimize the conditional AXIENET_USE_DMA
checks; there is no functional change.

The driver uses the "dmas" property to identify whether it should use the
dmaengine framework or the axiethernet AXI DMA programming.
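
The resulting gating pattern throughout the driver then looks like this
(illustrative only; AXIENET_USE_DMA and has_dmas are the names introduced
by this patch):

	#define AXIENET_USE_DMA(lp) ((lp)->has_dmas)

	if (!AXIENET_USE_DMA(lp)) {
		/* legacy in-built AXI DMA path */
	} else {
		/* dmaengine path, wired up in the next patch */
	}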

Signed-off-by: Sarath Babu Naidu Gaddam <sarath.babu.naidu.gaddam@amd.com>
---
Changes in V3:
1) New Patch.
---
 drivers/net/ethernet/xilinx/xilinx_axienet.h  |   2 +
 .../net/ethernet/xilinx/xilinx_axienet_main.c | 317 +++++++++++-------
 2 files changed, 192 insertions(+), 127 deletions(-)

diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet.h b/drivers/net/ethernet/xilinx/xilinx_axienet.h
index 575ff9de8985..10917d997d27 100644
--- a/drivers/net/ethernet/xilinx/xilinx_axienet.h
+++ b/drivers/net/ethernet/xilinx/xilinx_axienet.h
@@ -435,6 +435,7 @@ struct axidma_bd {
  * @coalesce_usec_rx:	IRQ coalesce delay for RX
  * @coalesce_count_tx:	Store the irq coalesce on TX side.
  * @coalesce_usec_tx:	IRQ coalesce delay for TX
+ * @has_dmas:	flag to check dmaengine framework usage.
  */
 struct axienet_local {
 	struct net_device *ndev;
@@ -499,6 +500,7 @@ struct axienet_local {
 	u32 coalesce_usec_rx;
 	u32 coalesce_count_tx;
 	u32 coalesce_usec_tx;
+	u8  has_dmas;
 };
 
 /**
diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
index 3e310b55bce2..8678fc09245a 100644
--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
@@ -54,6 +54,8 @@
 
 #define AXIENET_REGS_N		40
 
+#define AXIENET_USE_DMA(lp) ((lp)->has_dmas)
+
 /* Match table for of_platform binding */
 static const struct of_device_id axienet_of_match[] = {
 	{ .compatible = "xlnx,axi-ethernet-1.00.a", },
@@ -588,10 +590,6 @@ static int axienet_device_reset(struct net_device *ndev)
 	struct axienet_local *lp = netdev_priv(ndev);
 	int ret;
 
-	ret = __axienet_device_reset(lp);
-	if (ret)
-		return ret;
-
 	lp->max_frm_size = XAE_MAX_VLAN_FRAME_SIZE;
 	lp->options |= XAE_OPTION_VLAN;
 	lp->options &= (~XAE_OPTION_JUMBO);
@@ -605,11 +603,17 @@ static int axienet_device_reset(struct net_device *ndev)
 			lp->options |= XAE_OPTION_JUMBO;
 	}
 
-	ret = axienet_dma_bd_init(ndev);
-	if (ret) {
-		netdev_err(ndev, "%s: descriptor allocation failed\n",
-			   __func__);
-		return ret;
+	if (!AXIENET_USE_DMA(lp)) {
+		ret = __axienet_device_reset(lp);
+		if (ret)
+			return ret;
+
+		ret = axienet_dma_bd_init(ndev);
+		if (ret) {
+			netdev_err(ndev, "%s: descriptor allocation failed\n",
+				   __func__);
+			return ret;
+		}
 	}
 
 	axienet_status = axienet_ior(lp, XAE_RCW1_OFFSET);
@@ -775,7 +779,7 @@ static int axienet_tx_poll(struct napi_struct *napi, int budget)
 }
 
 /**
- * axienet_start_xmit - Starts the transmission.
+ * axienet_start_xmit_legacy - Starts the transmission.
  * @skb:	sk_buff pointer that contains data to be Txed.
  * @ndev:	Pointer to net_device structure.
  *
@@ -788,7 +792,7 @@ static int axienet_tx_poll(struct napi_struct *napi, int budget)
  * it populates AXI Stream Control fields with appropriate values.
  */
 static netdev_tx_t
-axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+axienet_start_xmit_legacy(struct sk_buff *skb, struct net_device *ndev)
 {
 	u32 ii;
 	u32 num_frag;
@@ -890,6 +894,27 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 	return NETDEV_TX_OK;
 }
 
+/**
+ * axienet_start_xmit - Starts the transmission.
+ * @skb:        sk_buff pointer that contains data to be Txed.
+ * @ndev:       Pointer to net_device structure.
+ *
+ * Return: NETDEV_TX_OK, on success
+ *          NETDEV_TX_BUSY, if any of the descriptors are not free
+ *
+ * This function is invoked from upper layers to initiate transmission
+ */
+static netdev_tx_t
+axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
+{
+	struct axienet_local *lp = netdev_priv(ndev);
+
+	if (!AXIENET_USE_DMA(lp))
+		return axienet_start_xmit_legacy(skb, ndev);
+	else
+		return NETDEV_TX_BUSY;
+}
+
 /**
  * axienet_rx_poll - Triggered by RX ISR to complete the BD processing.
  * @napi:	Pointer to NAPI structure.
@@ -1124,41 +1149,22 @@ static irqreturn_t axienet_eth_irq(int irq, void *_ndev)
 static void axienet_dma_err_handler(struct work_struct *work);
 
 /**
- * axienet_open - Driver open routine.
- * @ndev:	Pointer to net_device structure
+ * axienet_init_legacy_dma - init the dma legacy code.
+ * @ndev:       Pointer to net_device structure
  *
  * Return: 0, on success.
- *	    non-zero error value on failure
+ *          non-zero error value on failure
+ *
+ * This is the dma  initialization code. It also allocates interrupt
+ * service routines, enables the interrupt lines and ISR handling.
  *
- * This is the driver open routine. It calls phylink_start to start the
- * PHY device.
- * It also allocates interrupt service routines, enables the interrupt lines
- * and ISR handling. Axi Ethernet core is reset through Axi DMA core. Buffer
- * descriptors are initialized.
  */
-static int axienet_open(struct net_device *ndev)
+
+static inline int axienet_init_legacy_dma(struct net_device *ndev)
 {
 	int ret;
 	struct axienet_local *lp = netdev_priv(ndev);
 
-	dev_dbg(&ndev->dev, "axienet_open()\n");
-
-	/* When we do an Axi Ethernet reset, it resets the complete core
-	 * including the MDIO. MDIO must be disabled before resetting.
-	 * Hold MDIO bus lock to avoid MDIO accesses during the reset.
-	 */
-	axienet_lock_mii(lp);
-	ret = axienet_device_reset(ndev);
-	axienet_unlock_mii(lp);
-
-	ret = phylink_of_phy_connect(lp->phylink, lp->dev->of_node, 0);
-	if (ret) {
-		dev_err(lp->dev, "phylink_of_phy_connect() failed: %d\n", ret);
-		return ret;
-	}
-
-	phylink_start(lp->phylink);
-
 	/* Enable worker thread for Axi DMA error handling */
 	INIT_WORK(&lp->dma_err_task, axienet_dma_err_handler);
 
@@ -1192,13 +1198,62 @@ static int axienet_open(struct net_device *ndev)
 err_tx_irq:
 	napi_disable(&lp->napi_tx);
 	napi_disable(&lp->napi_rx);
-	phylink_stop(lp->phylink);
-	phylink_disconnect_phy(lp->phylink);
 	cancel_work_sync(&lp->dma_err_task);
 	dev_err(lp->dev, "request_irq() failed\n");
 	return ret;
 }
 
+/**
+ * axienet_open - Driver open routine.
+ * @ndev:	Pointer to net_device structure
+ *
+ * Return: 0, on success.
+ *	    non-zero error value on failure
+ *
+ * This is the driver open routine. It calls phylink_start to start the
+ * PHY device.
+ * It also allocates interrupt service routines, enables the interrupt lines
+ * and ISR handling. Axi Ethernet core is reset through Axi DMA core. Buffer
+ * descriptors are initialized.
+ */
+static int axienet_open(struct net_device *ndev)
+{
+	int ret;
+	struct axienet_local *lp = netdev_priv(ndev);
+
+	dev_dbg(&ndev->dev, "%s\n", __func__);
+
+	/* When we do an Axi Ethernet reset, it resets the complete core
+	 * including the MDIO. MDIO must be disabled before resetting.
+	 * Hold MDIO bus lock to avoid MDIO accesses during the reset.
+	 */
+	axienet_lock_mii(lp);
+	ret = axienet_device_reset(ndev);
+	axienet_unlock_mii(lp);
+
+	ret = phylink_of_phy_connect(lp->phylink, lp->dev->of_node, 0);
+	if (ret) {
+		dev_err(lp->dev, "phylink_of_phy_connect() failed: %d\n", ret);
+		return ret;
+	}
+
+	phylink_start(lp->phylink);
+
+	if (!AXIENET_USE_DMA(lp)) {
+		ret = axienet_init_legacy_dma(ndev);
+		if (ret)
+			goto error_code;
+	}
+
+	return 0;
+
+error_code:
+	phylink_stop(lp->phylink);
+	phylink_disconnect_phy(lp->phylink);
+
+	return ret;
+}
+
 /**
  * axienet_stop - Driver stop routine.
  * @ndev:	Pointer to net_device structure
@@ -1215,8 +1270,10 @@ static int axienet_stop(struct net_device *ndev)
 
 	dev_dbg(&ndev->dev, "axienet_close()\n");
 
-	napi_disable(&lp->napi_tx);
-	napi_disable(&lp->napi_rx);
+	if (!AXIENET_USE_DMA(lp)) {
+		napi_disable(&lp->napi_tx);
+		napi_disable(&lp->napi_rx);
+	}
 
 	phylink_stop(lp->phylink);
 	phylink_disconnect_phy(lp->phylink);
@@ -1224,18 +1281,18 @@ static int axienet_stop(struct net_device *ndev)
 	axienet_setoptions(ndev, lp->options &
 			   ~(XAE_OPTION_TXEN | XAE_OPTION_RXEN));
 
-	axienet_dma_stop(lp);
+	if (!AXIENET_USE_DMA(lp)) {
+		axienet_dma_stop(lp);
+		cancel_work_sync(&lp->dma_err_task);
+		free_irq(lp->tx_irq, ndev);
+		free_irq(lp->rx_irq, ndev);
+		axienet_dma_bd_release(ndev);
+	}
 
 	axienet_iow(lp, XAE_IE_OFFSET, 0);
 
-	cancel_work_sync(&lp->dma_err_task);
-
 	if (lp->eth_irq > 0)
 		free_irq(lp->eth_irq, ndev);
-	free_irq(lp->tx_irq, ndev);
-	free_irq(lp->rx_irq, ndev);
-
-	axienet_dma_bd_release(ndev);
 	return 0;
 }
 
@@ -1411,14 +1468,16 @@ static void axienet_ethtools_get_regs(struct net_device *ndev,
 	data[29] = axienet_ior(lp, XAE_FMI_OFFSET);
 	data[30] = axienet_ior(lp, XAE_AF0_OFFSET);
 	data[31] = axienet_ior(lp, XAE_AF1_OFFSET);
-	data[32] = axienet_dma_in32(lp, XAXIDMA_TX_CR_OFFSET);
-	data[33] = axienet_dma_in32(lp, XAXIDMA_TX_SR_OFFSET);
-	data[34] = axienet_dma_in32(lp, XAXIDMA_TX_CDESC_OFFSET);
-	data[35] = axienet_dma_in32(lp, XAXIDMA_TX_TDESC_OFFSET);
-	data[36] = axienet_dma_in32(lp, XAXIDMA_RX_CR_OFFSET);
-	data[37] = axienet_dma_in32(lp, XAXIDMA_RX_SR_OFFSET);
-	data[38] = axienet_dma_in32(lp, XAXIDMA_RX_CDESC_OFFSET);
-	data[39] = axienet_dma_in32(lp, XAXIDMA_RX_TDESC_OFFSET);
+	if (!AXIENET_USE_DMA(lp)) {
+		data[32] = axienet_dma_in32(lp, XAXIDMA_TX_CR_OFFSET);
+		data[33] = axienet_dma_in32(lp, XAXIDMA_TX_SR_OFFSET);
+		data[34] = axienet_dma_in32(lp, XAXIDMA_TX_CDESC_OFFSET);
+		data[35] = axienet_dma_in32(lp, XAXIDMA_TX_TDESC_OFFSET);
+		data[36] = axienet_dma_in32(lp, XAXIDMA_RX_CR_OFFSET);
+		data[37] = axienet_dma_in32(lp, XAXIDMA_RX_SR_OFFSET);
+		data[38] = axienet_dma_in32(lp, XAXIDMA_RX_CDESC_OFFSET);
+		data[39] = axienet_dma_in32(lp, XAXIDMA_RX_TDESC_OFFSET);
+	}
 }
 
 static void
@@ -1878,9 +1937,6 @@ static int axienet_probe(struct platform_device *pdev)
 	u64_stats_init(&lp->rx_stat_sync);
 	u64_stats_init(&lp->tx_stat_sync);
 
-	netif_napi_add(ndev, &lp->napi_rx, axienet_rx_poll);
-	netif_napi_add(ndev, &lp->napi_tx, axienet_tx_poll);
-
 	lp->axi_clk = devm_clk_get_optional(&pdev->dev, "s_axi_lite_clk");
 	if (!lp->axi_clk) {
 		/* For backward compatibility, if named AXI clock is not present,
@@ -2006,75 +2062,80 @@ static int axienet_probe(struct platform_device *pdev)
 		goto cleanup_clk;
 	}
 
-	/* Find the DMA node, map the DMA registers, and decode the DMA IRQs */
-	np = of_parse_phandle(pdev->dev.of_node, "axistream-connected", 0);
-	if (np) {
-		struct resource dmares;
+	if (!of_find_property(pdev->dev.of_node, "dmas", NULL)) {
+		/* Find the DMA node, map the DMA registers, and decode the DMA IRQs */
+		np = of_parse_phandle(pdev->dev.of_node, "axistream-connected", 0);
 
-		ret = of_address_to_resource(np, 0, &dmares);
-		if (ret) {
-			dev_err(&pdev->dev,
-				"unable to get DMA resource\n");
+		if (np) {
+			struct resource dmares;
+
+			ret = of_address_to_resource(np, 0, &dmares);
+			if (ret) {
+				dev_err(&pdev->dev,
+					"unable to get DMA resource\n");
+				of_node_put(np);
+				goto cleanup_clk;
+			}
+			lp->dma_regs = devm_ioremap_resource(&pdev->dev,
+							     &dmares);
+			lp->rx_irq = irq_of_parse_and_map(np, 1);
+			lp->tx_irq = irq_of_parse_and_map(np, 0);
 			of_node_put(np);
+			lp->eth_irq = platform_get_irq_optional(pdev, 0);
+		} else {
+			/* Check for these resources directly on the Ethernet node. */
+			lp->dma_regs = devm_platform_get_and_ioremap_resource(pdev, 1, NULL);
+			lp->rx_irq = platform_get_irq(pdev, 1);
+			lp->tx_irq = platform_get_irq(pdev, 0);
+			lp->eth_irq = platform_get_irq_optional(pdev, 2);
+		}
+		if (IS_ERR(lp->dma_regs)) {
+			dev_err(&pdev->dev, "could not map DMA regs\n");
+			ret = PTR_ERR(lp->dma_regs);
+			goto cleanup_clk;
+		}
+		if (lp->rx_irq <= 0 || lp->tx_irq <= 0) {
+			dev_err(&pdev->dev, "could not determine irqs\n");
+			ret = -ENOMEM;
 			goto cleanup_clk;
 		}
-		lp->dma_regs = devm_ioremap_resource(&pdev->dev,
-						     &dmares);
-		lp->rx_irq = irq_of_parse_and_map(np, 1);
-		lp->tx_irq = irq_of_parse_and_map(np, 0);
-		of_node_put(np);
-		lp->eth_irq = platform_get_irq_optional(pdev, 0);
-	} else {
-		/* Check for these resources directly on the Ethernet node. */
-		lp->dma_regs = devm_platform_get_and_ioremap_resource(pdev, 1, NULL);
-		lp->rx_irq = platform_get_irq(pdev, 1);
-		lp->tx_irq = platform_get_irq(pdev, 0);
-		lp->eth_irq = platform_get_irq_optional(pdev, 2);
-	}
-	if (IS_ERR(lp->dma_regs)) {
-		dev_err(&pdev->dev, "could not map DMA regs\n");
-		ret = PTR_ERR(lp->dma_regs);
-		goto cleanup_clk;
-	}
-	if ((lp->rx_irq <= 0) || (lp->tx_irq <= 0)) {
-		dev_err(&pdev->dev, "could not determine irqs\n");
-		ret = -ENOMEM;
-		goto cleanup_clk;
-	}
 
-	/* Autodetect the need for 64-bit DMA pointers.
-	 * When the IP is configured for a bus width bigger than 32 bits,
-	 * writing the MSB registers is mandatory, even if they are all 0.
-	 * We can detect this case by writing all 1's to one such register
-	 * and see if that sticks: when the IP is configured for 32 bits
-	 * only, those registers are RES0.
-	 * Those MSB registers were introduced in IP v7.1, which we check first.
-	 */
-	if ((axienet_ior(lp, XAE_ID_OFFSET) >> 24) >= 0x9) {
-		void __iomem *desc = lp->dma_regs + XAXIDMA_TX_CDESC_OFFSET + 4;
-
-		iowrite32(0x0, desc);
-		if (ioread32(desc) == 0) {	/* sanity check */
-			iowrite32(0xffffffff, desc);
-			if (ioread32(desc) > 0) {
-				lp->features |= XAE_FEATURE_DMA_64BIT;
-				addr_width = 64;
-				dev_info(&pdev->dev,
-					 "autodetected 64-bit DMA range\n");
-			}
+		/* Autodetect the need for 64-bit DMA pointers.
+		 * When the IP is configured for a bus width bigger than 32 bits,
+		 * writing the MSB registers is mandatory, even if they are all 0.
+		 * We can detect this case by writing all 1's to one such register
+		 * and see if that sticks: when the IP is configured for 32 bits
+		 * only, those registers are RES0.
+		 * Those MSB registers were introduced in IP v7.1, which we check first.
+		 */
+		if ((axienet_ior(lp, XAE_ID_OFFSET) >> 24) >= 0x9) {
+			void __iomem *desc = lp->dma_regs + XAXIDMA_TX_CDESC_OFFSET + 4;
+
 			iowrite32(0x0, desc);
+			if (ioread32(desc) == 0) {	/* sanity check */
+				iowrite32(0xffffffff, desc);
+				if (ioread32(desc) > 0) {
+					lp->features |= XAE_FEATURE_DMA_64BIT;
+					addr_width = 64;
+					dev_info(&pdev->dev,
+						 "autodetected 64-bit DMA range\n");
+				}
+				iowrite32(0x0, desc);
+			}
+		}
+		if (!IS_ENABLED(CONFIG_64BIT) && lp->features & XAE_FEATURE_DMA_64BIT) {
+			dev_err(&pdev->dev, "64-bit addressable DMA is not compatible with 32-bit architecture\n");
+			ret = -EINVAL;
+			goto cleanup_clk;
 		}
-	}
-	if (!IS_ENABLED(CONFIG_64BIT) && lp->features & XAE_FEATURE_DMA_64BIT) {
-	dev_err(&pdev->dev, "64-bit addressable DMA is not compatible with 32-bit architecture\n");
-		ret = -EINVAL;
-		goto cleanup_clk;
-	}
 
-	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(addr_width));
-	if (ret) {
-		dev_err(&pdev->dev, "No suitable DMA available\n");
-		goto cleanup_clk;
+		ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(addr_width));
+		if (ret) {
+			dev_err(&pdev->dev, "No suitable DMA available\n");
+			goto cleanup_clk;
+		}
+		netif_napi_add(ndev, &lp->napi_rx, axienet_rx_poll);
+		netif_napi_add(ndev, &lp->napi_tx, axienet_tx_poll);
 	}
 
 	/* Check for Ethernet core IRQ (optional) */
@@ -2092,14 +2153,16 @@ static int axienet_probe(struct platform_device *pdev)
 	}
 
 	lp->coalesce_count_rx = XAXIDMA_DFT_RX_THRESHOLD;
-	lp->coalesce_usec_rx = XAXIDMA_DFT_RX_USEC;
 	lp->coalesce_count_tx = XAXIDMA_DFT_TX_THRESHOLD;
-	lp->coalesce_usec_tx = XAXIDMA_DFT_TX_USEC;
 
-	/* Reset core now that clocks are enabled, prior to accessing MDIO */
-	ret = __axienet_device_reset(lp);
-	if (ret)
-		goto cleanup_clk;
+	if (!AXIENET_USE_DMA(lp)) {
+		lp->coalesce_usec_rx = XAXIDMA_DFT_RX_USEC;
+		lp->coalesce_usec_tx = XAXIDMA_DFT_TX_USEC;
+		/* Reset core now that clocks are enabled, prior to accessing MDIO */
+		ret = __axienet_device_reset(lp);
+		if (ret)
+			goto cleanup_clk;
+	}
 
 	ret = axienet_mdio_setup(lp);
 	if (ret)
-- 
2.25.1



* [PATCH net-next V3 3/3] net: axienet: Introduce dmaengine support
  2023-05-10  8:50 [PATCH net-next V3 0/3] net: axienet: Introduce dmaengine Sarath Babu Naidu Gaddam
  2023-05-10  8:50 ` [PATCH net-next V3 1/3] dt-bindings: net: xilinx_axienet: Introduce dmaengine binding support Sarath Babu Naidu Gaddam
  2023-05-10  8:50 ` [PATCH net-next V3 2/3] net: axienet: Preparatory changes for dmaengine support Sarath Babu Naidu Gaddam
@ 2023-05-10  8:50 ` Sarath Babu Naidu Gaddam
  2023-05-10 10:10   ` Krzysztof Kozlowski
  2023-05-10 13:44   ` [EXT] " Subbaraya Sundeep Bhatta
  2 siblings, 2 replies; 14+ messages in thread
From: Sarath Babu Naidu Gaddam @ 2023-05-10  8:50 UTC (permalink / raw)
  To: davem, edumazet, kuba, pabeni, robh+dt, krzysztof.kozlowski+dt
  Cc: linux, michal.simek, radhey.shyam.pandey, netdev, devicetree,
	linux-arm-kernel, linux-kernel, anirudha.sarangi, harini.katakam,
	sarath.babu.naidu.gaddam, git

Add dmaengine framework support to communicate with the Xilinx DMA
engine driver (AXIDMA).

The AXI Ethernet driver uses separate channels for transmit and receive.
Add support for these channels to handle TX and RX with skbs and the
appropriate callbacks. Also add the AXI Ethernet core interrupt for
dmaengine framework support.

The dmaengine framework was extended with metadata API support during the
axidma RFC[1] discussion. However, it still needs further enhancements to
make it well suited for ethernet use cases. Ethernet features such as
ethtool set/get of DMA IP properties, ndo_poll_controller, and triggering
a reset of the DMA IP from the ethernet driver are not supported
(mentioned in TODO) and require follow-up discussion and DMA framework
enhancements.

[1]: https://lore.kernel.org/lkml/1522665546-10035-1-git-send-email-
radheys@xilinx.com
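
For context, the RX completion callback in this patch uses the metadata
API to read the receive length from the DMA application words;
schematically (simplified from the patch below, with skb and axi_skb as
in the callback):

	size_t meta_len, meta_max_len, rx_len;
	u32 *app;

	app = dmaengine_desc_get_metadata_ptr(axi_skb->desc, &meta_len,
					      &meta_max_len);
	rx_len = app[LEN_APP] & 0xFFFF;	/* RX length written back by the MAC */
	skb_put(skb, rx_len);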

Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
Signed-off-by: Sarath Babu Naidu Gaddam <sarath.babu.naidu.gaddam@amd.com>
---
Performance numbers (Mbps):

         |    | TCP | UDP |
         ------------------
         | Tx | 920 | 800 |
         | Rx | 620 | 910 |

Changes in V3:
1) New patch for dmaengine framework support.
---
 drivers/net/ethernet/xilinx/xilinx_axienet.h  |   6 +
 .../net/ethernet/xilinx/xilinx_axienet_main.c | 331 +++++++++++++++++-
 2 files changed, 335 insertions(+), 2 deletions(-)

diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet.h b/drivers/net/ethernet/xilinx/xilinx_axienet.h
index 10917d997d27..fbe00c5390d5 100644
--- a/drivers/net/ethernet/xilinx/xilinx_axienet.h
+++ b/drivers/net/ethernet/xilinx/xilinx_axienet.h
@@ -436,6 +436,9 @@ struct axidma_bd {
  * @coalesce_count_tx:	Store the irq coalesce on TX side.
  * @coalesce_usec_tx:	IRQ coalesce delay for TX
  * @has_dmas:	flag to check dmaengine framework usage.
+ * @tx_chan:	TX DMA channel.
+ * @rx_chan:	RX DMA channel.
+ * @skb_cache:	Custom skb slab allocator
  */
 struct axienet_local {
 	struct net_device *ndev;
@@ -501,6 +504,9 @@ struct axienet_local {
 	u32 coalesce_count_tx;
 	u32 coalesce_usec_tx;
 	u8  has_dmas;
+	struct dma_chan *tx_chan;
+	struct dma_chan *rx_chan;
+	struct kmem_cache *skb_cache;
 };
 
 /**
diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
index 8678fc09245a..662c77ff0e99 100644
--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
+++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
@@ -37,6 +37,9 @@
 #include <linux/phy.h>
 #include <linux/mii.h>
 #include <linux/ethtool.h>
+#include <linux/dmaengine.h>
+#include <linux/dma-mapping.h>
+#include <linux/dma/xilinx_dma.h>
 
 #include "xilinx_axienet.h"
 
@@ -46,6 +49,9 @@
 #define TX_BD_NUM_MIN			(MAX_SKB_FRAGS + 1)
 #define TX_BD_NUM_MAX			4096
 #define RX_BD_NUM_MAX			4096
+#define DMA_NUM_APP_WORDS		5
+#define LEN_APP				4
+#define RX_BUF_NUM_DEFAULT		128
 
 /* Must be shorter than length of ethtool_drvinfo.driver field to fit */
 #define DRIVER_NAME		"xaxienet"
@@ -56,6 +62,16 @@
 
 #define AXIENET_USE_DMA(lp) ((lp)->has_dmas)
 
+struct axi_skbuff {
+	struct scatterlist sgl[MAX_SKB_FRAGS + 1];
+	struct dma_async_tx_descriptor *desc;
+	dma_addr_t dma_address;
+	struct sk_buff *skb;
+	int sg_len;
+} __packed;
+
+static int axienet_rx_submit_desc(struct net_device *ndev);
+
 /* Match table for of_platform binding */
 static const struct of_device_id axienet_of_match[] = {
 	{ .compatible = "xlnx,axi-ethernet-1.00.a", },
@@ -728,6 +744,108 @@ static inline int axienet_check_tx_bd_space(struct axienet_local *lp,
 	return 0;
 }
 
+/**
+ * axienet_dma_tx_cb - DMA engine callback for TX channel.
+ * @data:       Pointer to the axi_skbuff structure
+ * @result:     error reporting through dmaengine_result.
+ * This function is called by dmaengine driver for TX channel to notify
+ * that the transmit is done.
+ */
+static void axienet_dma_tx_cb(void *data, const struct dmaengine_result *result)
+{
+	struct axi_skbuff *axi_skb = data;
+
+	struct net_device *netdev = axi_skb->skb->dev;
+	struct axienet_local *lp = netdev_priv(netdev);
+
+	u64_stats_update_begin(&lp->tx_stat_sync);
+	u64_stats_add(&lp->tx_bytes, axi_skb->skb->len);
+	u64_stats_add(&lp->tx_packets, 1);
+	u64_stats_update_end(&lp->tx_stat_sync);
+
+	dma_unmap_sg(lp->dev, axi_skb->sgl, axi_skb->sg_len, DMA_MEM_TO_DEV);
+	dev_kfree_skb_any(axi_skb->skb);
+	kmem_cache_free(lp->skb_cache, axi_skb);
+}
+
+/**
+ * axienet_start_xmit_dmaengine - Starts the transmission.
+ * @skb:        sk_buff pointer that contains data to be Txed.
+ * @ndev:       Pointer to net_device structure.
+ *
+ * Return: NETDEV_TX_OK, on success
+ *          NETDEV_TX_BUSY, if any memory failure or SG error.
+ *
+ * This function is invoked from xmit to initiate transmission. It sets
+ * up the skb, the callback, the SG list, etc.
+ * Additionally if checksum offloading is supported,
+ * it populates AXI Stream Control fields with appropriate values.
+ */
+static netdev_tx_t
+axienet_start_xmit_dmaengine(struct sk_buff *skb, struct net_device *ndev)
+{
+	struct dma_async_tx_descriptor *dma_tx_desc = NULL;
+	struct axienet_local *lp = netdev_priv(ndev);
+	u32 app[DMA_NUM_APP_WORDS] = {0};
+	struct axi_skbuff *axi_skb;
+	u32 csum_start_off;
+	u32 csum_index_off;
+	int sg_len;
+	int ret;
+
+	sg_len = skb_shinfo(skb)->nr_frags + 1;
+	axi_skb = kmem_cache_zalloc(lp->skb_cache, GFP_KERNEL);
+	if (!axi_skb)
+		return NETDEV_TX_BUSY;
+
+	sg_init_table(axi_skb->sgl, sg_len);
+	ret = skb_to_sgvec(skb, axi_skb->sgl, 0, skb->len);
+	if (unlikely(ret < 0))
+		goto xmit_error_skb_sgvec;
+
+	ret = dma_map_sg(lp->dev, axi_skb->sgl, sg_len, DMA_TO_DEVICE);
+	if (ret == 0)
+		goto xmit_error_skb_sgvec;
+
+	/*Fill up app fields for checksum */
+	if (skb->ip_summed == CHECKSUM_PARTIAL) {
+		if (lp->features & XAE_FEATURE_FULL_TX_CSUM) {
+			/* Tx Full Checksum Offload Enabled */
+			app[0] |= 2;
+		} else if (lp->features & XAE_FEATURE_PARTIAL_RX_CSUM) {
+			csum_start_off = skb_transport_offset(skb);
+			csum_index_off = csum_start_off + skb->csum_offset;
+			/* Tx Partial Checksum Offload Enabled */
+			app[0] |= 1;
+			app[1] = (csum_start_off << 16) | csum_index_off;
+		}
+	} else if (skb->ip_summed == CHECKSUM_UNNECESSARY) {
+		app[0] |= 2; /* Tx Full Checksum Offload Enabled */
+	}
+
+	dma_tx_desc = lp->tx_chan->device->device_prep_slave_sg(lp->tx_chan, axi_skb->sgl,
+			sg_len, DMA_MEM_TO_DEV,
+			DMA_PREP_INTERRUPT, (void *)app);
+
+	if (!dma_tx_desc)
+		goto xmit_error_prep;
+
+	axi_skb->skb = skb;
+	axi_skb->sg_len = sg_len;
+	dma_tx_desc->callback_param =  axi_skb;
+	dma_tx_desc->callback_result = axienet_dma_tx_cb;
+	dmaengine_submit(dma_tx_desc);
+	dma_async_issue_pending(lp->tx_chan);
+
+	return NETDEV_TX_OK;
+
+xmit_error_prep:
+	dma_unmap_sg(lp->dev, axi_skb->sgl, sg_len, DMA_TO_DEVICE);
+xmit_error_skb_sgvec:
+	kmem_cache_free(lp->skb_cache, axi_skb);
+	return NETDEV_TX_BUSY;
+}
+
 /**
  * axienet_tx_poll - Invoked once a transmit is completed by the
  * Axi DMA Tx channel.
@@ -912,7 +1030,42 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
 	if (!AXIENET_USE_DMA(lp))
 		return axienet_start_xmit_legacy(skb, ndev);
 	else
-		return NETDEV_TX_BUSY;
+		return axienet_start_xmit_dmaengine(skb, ndev);
+}
+
+/**
+ * axienet_dma_rx_cb - DMA engine callback for RX channel.
+ * @data:       Pointer to the axi_skbuff structure
+ * @result:     error reporting through dmaengine_result.
+ * This function is called by dmaengine driver for RX channel to notify
+ * that the packet is received.
+ */
+static void axienet_dma_rx_cb(void *data, const struct dmaengine_result *result)
+{
+	struct axi_skbuff *axi_skb = data;
+	struct sk_buff *skb = axi_skb->skb;
+	struct net_device *netdev = skb->dev;
+	struct axienet_local *lp = netdev_priv(netdev);
+	size_t meta_len, meta_max_len, rx_len;
+	u32 *app;
+
+	app  = dmaengine_desc_get_metadata_ptr(axi_skb->desc, &meta_len, &meta_max_len);
+	dma_unmap_single(lp->dev, axi_skb->dma_address, lp->max_frm_size,
+			 DMA_FROM_DEVICE);
+	/* TODO: Derive app word index programmatically */
+	rx_len = (app[LEN_APP] & 0xFFFF);
+	skb_put(skb, rx_len);
+	skb->protocol = eth_type_trans(skb, netdev);
+	skb->ip_summed = CHECKSUM_NONE;
+
+	netif_rx(skb);
+	kmem_cache_free(lp->skb_cache, axi_skb);
+	u64_stats_update_begin(&lp->rx_stat_sync);
+	u64_stats_add(&lp->rx_packets, 1);
+	u64_stats_add(&lp->rx_bytes, rx_len);
+	u64_stats_update_end(&lp->rx_stat_sync);
+	axienet_rx_submit_desc(netdev);
+	dma_async_issue_pending(lp->rx_chan);
 }
 
 /**
@@ -1148,6 +1301,134 @@ static irqreturn_t axienet_eth_irq(int irq, void *_ndev)
 
 static void axienet_dma_err_handler(struct work_struct *work);
 
+/**
+ * axienet_rx_submit_desc - Submit the RX descriptors with the required
+ * data, such as the callback and skb buffer, to dmaengine.
+ *
+ * @ndev:	net_device pointer
+ *
+ * Return: 0, on success.
+ *         non-zero error value on failure
+ */
+static int axienet_rx_submit_desc(struct net_device *ndev)
+{
+	struct dma_async_tx_descriptor *dma_rx_desc = NULL;
+	struct axienet_local *lp = netdev_priv(ndev);
+	struct axi_skbuff *axi_skb;
+	struct sk_buff *skb;
+	dma_addr_t addr;
+	int ret;
+
+	axi_skb = kmem_cache_alloc(lp->skb_cache, GFP_KERNEL);
+
+	if (!axi_skb)
+		return -ENOMEM;
+	skb = netdev_alloc_skb(ndev, lp->max_frm_size);
+	if (!skb) {
+		ret = -ENOMEM;
+		goto rx_bd_init_skb;
+	}
+
+	sg_init_table(axi_skb->sgl, 1);
+	addr = dma_map_single(lp->dev, skb->data, lp->max_frm_size, DMA_FROM_DEVICE);
+	sg_dma_address(axi_skb->sgl) = addr;
+	sg_dma_len(axi_skb->sgl) = lp->max_frm_size;
+	dma_rx_desc = dmaengine_prep_slave_sg(lp->rx_chan, axi_skb->sgl,
+					      1, DMA_DEV_TO_MEM,
+					      DMA_PREP_INTERRUPT);
+	if (!dma_rx_desc) {
+		ret = -EINVAL;
+		goto rx_bd_init_prep_sg;
+	}
+
+	axi_skb->skb = skb;
+	axi_skb->dma_address = sg_dma_address(axi_skb->sgl);
+	axi_skb->desc = dma_rx_desc;
+	dma_rx_desc->callback_param =  axi_skb;
+	dma_rx_desc->callback_result = axienet_dma_rx_cb;
+	dmaengine_submit(dma_rx_desc);
+
+	return 0;
+
+rx_bd_init_prep_sg:
+	dma_unmap_single(lp->dev, addr, lp->max_frm_size, DMA_FROM_DEVICE);
+	dev_kfree_skb(skb);
+rx_bd_init_skb:
+	kmem_cache_free(lp->skb_cache, axi_skb);
+	return ret;
+}
+
+/**
+ * axienet_setup_dma_chan - request the dma channels.
+ * @ndev:       Pointer to net_device structure
+ *
+ * Return: 0, on success.
+ *          non-zero error value on failure
+ *
+ * This function requests the TX and RX channels. It also submits the
+ * allocated skb buffers and call back APIs to dmaengine.
+ *
+ */
+static int axienet_setup_dma_chan(struct net_device *ndev)
+{
+	struct axienet_local *lp = netdev_priv(ndev);
+	int i, ret;
+
+	lp->tx_chan = dma_request_chan(lp->dev, "tx_chan0");
+	if (IS_ERR(lp->tx_chan)) {
+		ret = PTR_ERR(lp->tx_chan);
+		if (ret != -EPROBE_DEFER)
+			netdev_err(ndev, "No Ethernet DMA (TX) channel found\n");
+		return ret;
+	}
+
+	lp->rx_chan = dma_request_chan(lp->dev, "rx_chan0");
+	if (IS_ERR(lp->rx_chan)) {
+		ret = PTR_ERR(lp->rx_chan);
+		if (ret != -EPROBE_DEFER)
+			netdev_err(ndev, "No Ethernet DMA (RX) channel found\n");
+		goto err_dma_request_rx;
+	}
+	lp->skb_cache = kmem_cache_create("ethernet", sizeof(struct axi_skbuff),
+					  0, 0, NULL);
+	if (!lp->skb_cache) {
+		ret =  -ENOMEM;
+		goto err_kmem;
+	}
+	/* TODO: Instead of BD_NUM_DEFAULT use runtime support*/
+	for (i = 0; i < RX_BUF_NUM_DEFAULT; i++)
+		axienet_rx_submit_desc(ndev);
+	dma_async_issue_pending(lp->rx_chan);
+
+	return 0;
+err_kmem:
+	dma_release_channel(lp->rx_chan);
+err_dma_request_rx:
+	dma_release_channel(lp->tx_chan);
+	return ret;
+}
+
+/**
+ * axienet_init_dmaengine - init the dmaengine code.
+ * @ndev:       Pointer to net_device structure
+ *
+ * Return: 0, on success.
+ *          non-zero error value on failure
+ *
+ * This is the dmaengine initialization code.
+ */
+static inline int axienet_init_dmaengine(struct net_device *ndev)
+{
+	int ret;
+
+	ret = axienet_setup_dma_chan(ndev);
+
+	if (ret < 0)
+		return ret;
+
+	return 0;
+}
+
 /**
  * axienet_init_legacy_dma - init the dma legacy code.
  * @ndev:       Pointer to net_device structure
@@ -1239,7 +1520,20 @@ static int axienet_open(struct net_device *ndev)
 
 	phylink_start(lp->phylink);
 
-	if (!AXIENET_USE_DMA(lp)) {
+	if (AXIENET_USE_DMA(lp)) {
+		ret = axienet_init_dmaengine(ndev);
+		if (ret < 0)
+			goto error_code;
+
+		/* Enable interrupts for Axi Ethernet core (if defined) */
+		if (lp->eth_irq > 0) {
+			ret = request_irq(lp->eth_irq, axienet_eth_irq, IRQF_SHARED,
+					  ndev->name, ndev);
+			if (ret)
+				goto error_code;
+		}
+
+	} else {
 		ret = axienet_init_legacy_dma(ndev);
 		if (ret)
 			goto error_code;
@@ -1287,6 +1581,12 @@ static int axienet_stop(struct net_device *ndev)
 		free_irq(lp->tx_irq, ndev);
 		free_irq(lp->rx_irq, ndev);
 		axienet_dma_bd_release(ndev);
+	} else {
+		dmaengine_terminate_all(lp->tx_chan);
+		dmaengine_terminate_all(lp->rx_chan);
+
+		dma_release_channel(lp->rx_chan);
+		dma_release_channel(lp->tx_chan);
 	}
 
 	axienet_iow(lp, XAE_IE_OFFSET, 0);
@@ -2136,6 +2436,33 @@ static int axienet_probe(struct platform_device *pdev)
 		}
 		netif_napi_add(ndev, &lp->napi_rx, axienet_rx_poll);
 		netif_napi_add(ndev, &lp->napi_tx, axienet_tx_poll);
+	} else {
+		struct xilinx_vdma_config cfg;
+		struct dma_chan *tx_chan;
+
+		lp->eth_irq = platform_get_irq_optional(pdev, 0);
+		tx_chan = dma_request_chan(lp->dev, "tx_chan0");
+
+		if (IS_ERR(tx_chan)) {
+			ret = PTR_ERR(tx_chan);
+			if (ret != -EPROBE_DEFER)
+				dev_err(&pdev->dev, "No Ethernet DMA (TX) channel found\n");
+			goto cleanup_clk;
+		}
+
+		cfg.reset = 1;
+	/* The name says VDMA, but it also supports DMA channel reset. */
+		ret = xilinx_vdma_channel_set_config(tx_chan, &cfg);
+
+		if (ret < 0) {
+			dev_err(&pdev->dev, "Reset channel failed\n");
+			dma_release_channel(tx_chan);
+			goto cleanup_clk;
+		} else {
+			lp->has_dmas = 1;
+		}
+
+		dma_release_channel(tx_chan);
 	}
 
 	/* Check for Ethernet core IRQ (optional) */
-- 
2.25.1



* Re: [PATCH net-next V3 1/3] dt-bindings: net: xilinx_axienet: Introduce dmaengine binding support
  2023-05-10  8:50 ` [PATCH net-next V3 1/3] dt-bindings: net: xilinx_axienet: Introduce dmaengine binding support Sarath Babu Naidu Gaddam
@ 2023-05-10 10:08   ` Krzysztof Kozlowski
  2023-05-11 11:32     ` Gaddam, Sarath Babu Naidu
  0 siblings, 1 reply; 14+ messages in thread
From: Krzysztof Kozlowski @ 2023-05-10 10:08 UTC (permalink / raw)
  To: Sarath Babu Naidu Gaddam, davem, edumazet, kuba, pabeni, robh+dt,
	krzysztof.kozlowski+dt
  Cc: linux, michal.simek, radhey.shyam.pandey, netdev, devicetree,
	linux-arm-kernel, linux-kernel, anirudha.sarangi, harini.katakam,
	git

On 10/05/2023 10:50, Sarath Babu Naidu Gaddam wrote:
> From: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
> 
> The axiethernet driver will use the dmaengine framework to communicate
> with the DMA controller IP instead of the built-in DMA programming
> sequence.

Subject: drop second/last, redundant "bindings". The "dt-bindings"
prefix is already stating that these are bindings.

Actually, also drop "dmaengine" as it is a Linuxism. Focus on hardware,
e.g. "Add DMA support".

> 
> To request the DMA transmit and receive channels, the axiethernet driver
> uses the generic "dmas" and "dma-names" properties.
> 
> Also, to support backward compatibility, the presence of the "dmas"
> property is used to identify whether the driver should use the dmaengine
> framework or the legacy driver (built-in DMA programming).
> 
> At this point using the dmaengine framework is recommended but optional.
> Once the solution is stable, "dmas" will be made a required property.
> 
> Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
> Signed-off-by: Sarath Babu Naidu Gaddam <sarath.babu.naidu.gaddam@amd.com>
> ---
> These changes are on top of below txt to yaml conversion discussion
> https://lore.kernel.org/all/20230308061223.1358637-1-sarath.babu.naidu.gaddam@amd.com/#Z2e.:20230308061223.1358637-1-sarath.babu.naidu.gaddam::40amd.com:1bindings:net:xlnx::2caxi-ethernet.yaml
> 
> Changes in V3:
> 1) Reverted reg and interrupts property to  support backward compatibility.
> 2) Moved dmas and dma-names properties from Required properties.
> 
> Changes in V2:
> - None.
> ---
>  .../devicetree/bindings/net/xlnx,axi-ethernet.yaml   | 12 ++++++++++++
>  1 file changed, 12 insertions(+)
> 
> diff --git a/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml b/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
> index 80843c177029..9dfa1976e260 100644
> --- a/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
> +++ b/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
> @@ -122,6 +122,16 @@ properties:
>        modes, where "pcs-handle" should be used to point to the PCS/PMA PHY,
>        and "phy-handle" should point to an external PHY if exists.
>  
> +  dmas:
> +    items:
> +      - description: TX DMA Channel phandle and DMA request line number
> +      - description: RX DMA Channel phandle and DMA request line number
> +
> +  dma-names:
> +    items:
> +      - const: tx_chan0

tx

> +      - const: rx_chan0

rx

Why are these named differently than on all other devices?


Best regards,
Krzysztof



* Re: [PATCH net-next V3 3/3] net: axienet: Introduce dmaengine support
  2023-05-10  8:50 ` [PATCH net-next V3 3/3] net: axienet: Introduce " Sarath Babu Naidu Gaddam
@ 2023-05-10 10:10   ` Krzysztof Kozlowski
  2023-05-10 13:44   ` [EXT] " Subbaraya Sundeep Bhatta
  1 sibling, 0 replies; 14+ messages in thread
From: Krzysztof Kozlowski @ 2023-05-10 10:10 UTC (permalink / raw)
  To: Sarath Babu Naidu Gaddam, davem, edumazet, kuba, pabeni, robh+dt,
	krzysztof.kozlowski+dt
  Cc: linux, michal.simek, radhey.shyam.pandey, netdev, devicetree,
	linux-arm-kernel, linux-kernel, anirudha.sarangi, harini.katakam,
	git

On 10/05/2023 10:50, Sarath Babu Naidu Gaddam wrote:
> Add dmaengine framework support to communicate with the Xilinx DMA
> engine driver (AXIDMA).
> 
> The AXI Ethernet driver uses separate channels for transmit and receive.
> Add support for these channels to handle TX and RX with skbs and the
> appropriate callbacks. Also add the AXI Ethernet core interrupt for



> +/**
> + * axienet_setup_dma_chan - request the dma channels.
> + * @ndev:       Pointer to net_device structure
> + *
> + * Return: 0, on success.
> + *          non-zero error value on failure
> + *
> + * This function requests the TX and RX channels. It also submits the
> + * allocated skb buffers and call back APIs to dmaengine.
> + *
> + */
> +static int axienet_setup_dma_chan(struct net_device *ndev)
> +{
> +	struct axienet_local *lp = netdev_priv(ndev);
> +	int i, ret;
> +
> +	lp->tx_chan = dma_request_chan(lp->dev, "tx_chan0");
> +	if (IS_ERR(lp->tx_chan)) {
> +		ret = PTR_ERR(lp->tx_chan);
> +		if (ret != -EPROBE_DEFER)
> +			netdev_err(ndev, "No Ethernet DMA (TX) channel found\n");

dev_err_probe seems suitable here.
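
i.e. something like this (untested sketch):

	lp->tx_chan = dma_request_chan(lp->dev, "tx_chan0");
	if (IS_ERR(lp->tx_chan))
		return dev_err_probe(lp->dev, PTR_ERR(lp->tx_chan),
				     "No Ethernet DMA (TX) channel found\n");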


> +		return ret;
> +	}
> +
> +	lp->rx_chan = dma_request_chan(lp->dev, "rx_chan0");
> +	if (IS_ERR(lp->rx_chan)) {
> +		ret = PTR_ERR(lp->rx_chan);
> +		if (ret != -EPROBE_DEFER)
> +			netdev_err(ndev, "No Ethernet DMA (RX) channel found\n");

dev_err_probe

> +		goto err_dma_request_rx;
> +	}
> +	lp->skb_cache = kmem_cache_create("ethernet", sizeof(struct axi_skbuff),
> +					  0, 0, NULL);
> +	if (!lp->skb_cache) {
> +		ret =  -ENOMEM;
> +		goto err_kmem;
> +	}
> +	/* TODO: Instead of BD_NUM_DEFAULT use runtime support*/
> +	for (i = 0; i < RX_BUF_NUM_DEFAULT; i++)
> +		axienet_rx_submit_desc(ndev);
> +	dma_async_issue_pending(lp->rx_chan);
> +
> +	return 0;
> +err_kmem:
> +	dma_release_channel(lp->rx_chan);
> +err_dma_request_rx:
> +	dma_release_channel(lp->tx_chan);
> +	return ret;
> +}
> +
> +/**
> + * axienet_init_dmaengine - init the dmaengine code.
> + * @ndev:       Pointer to net_device structure
> + *
> + * Return: 0, on success.
> + *          non-zero error value on failure
> + *
> + * This is the dmaengine initialization code.
> + */
> +static inline int axienet_init_dmaengine(struct net_device *ndev)
> +{
> +	int ret;
> +
> +	ret = axienet_setup_dma_chan(ndev);
> +
> +	if (ret < 0)
> +		return ret;
> +
> +	return 0;
> +}
> +
>  /**
>   * axienet_init_legacy_dma - init the dma legacy code.
>   * @ndev:       Pointer to net_device structure
> @@ -1239,7 +1520,20 @@ static int axienet_open(struct net_device *ndev)
>  
>  	phylink_start(lp->phylink);
>  
> -	if (!AXIENET_USE_DMA(lp)) {
> +	if (AXIENET_USE_DMA(lp)) {
> +		ret = axienet_init_dmaengine(ndev);
> +		if (ret < 0)
> +			goto error_code;
> +
> +		/* Enable interrupts for Axi Ethernet core (if defined) */
> +		if (lp->eth_irq > 0) {
> +			ret = request_irq(lp->eth_irq, axienet_eth_irq, IRQF_SHARED,
> +					  ndev->name, ndev);
> +			if (ret)
> +				goto error_code;
> +		}
> +
> +	} else {
>  		ret = axienet_init_legacy_dma(ndev);
>  		if (ret)
>  			goto error_code;
> @@ -1287,6 +1581,12 @@ static int axienet_stop(struct net_device *ndev)
>  		free_irq(lp->tx_irq, ndev);
>  		free_irq(lp->rx_irq, ndev);
>  		axienet_dma_bd_release(ndev);
> +	} else {
> +		dmaengine_terminate_all(lp->tx_chan);
> +		dmaengine_terminate_all(lp->rx_chan);
> +
> +		dma_release_channel(lp->rx_chan);
> +		dma_release_channel(lp->tx_chan);
>  	}
>  
>  	axienet_iow(lp, XAE_IE_OFFSET, 0);
> @@ -2136,6 +2436,33 @@ static int axienet_probe(struct platform_device *pdev)
>  		}
>  		netif_napi_add(ndev, &lp->napi_rx, axienet_rx_poll);
>  		netif_napi_add(ndev, &lp->napi_tx, axienet_tx_poll);
> +	} else {
> +		struct xilinx_vdma_config cfg;
> +		struct dma_chan *tx_chan;
> +
> +		lp->eth_irq = platform_get_irq_optional(pdev, 0);
> +		tx_chan = dma_request_chan(lp->dev, "tx_chan0");
> +
> +		if (IS_ERR(tx_chan)) {
> +			ret = PTR_ERR(tx_chan);
> +			if (ret != -EPROBE_DEFER)
> +				dev_err(&pdev->dev, "No Ethernet DMA (TX) channel found\n");

dev_err_probe

Best regards,
Krzysztof



* RE: [EXT] [PATCH net-next V3 2/3] net: axienet: Preparatory changes for dmaengine support
  2023-05-10  8:50 ` [PATCH net-next V3 2/3] net: axienet: Preparatory changes for dmaengine support Sarath Babu Naidu Gaddam
@ 2023-05-10 13:19   ` Subbaraya Sundeep Bhatta
  0 siblings, 0 replies; 14+ messages in thread
From: Subbaraya Sundeep Bhatta @ 2023-05-10 13:19 UTC (permalink / raw)
  To: Sarath Babu Naidu Gaddam, davem, edumazet, kuba, pabeni, robh+dt,
	krzysztof.kozlowski+dt
  Cc: linux, michal.simek, radhey.shyam.pandey, netdev, devicetree,
	linux-arm-kernel, linux-kernel, anirudha.sarangi, harini.katakam,
	git

Hi,

>-----Original Message-----
>From: Sarath Babu Naidu Gaddam <sarath.babu.naidu.gaddam@amd.com>
>Sent: Wednesday, May 10, 2023 2:21 PM
>To: davem@davemloft.net; edumazet@google.com; kuba@kernel.org;
>pabeni@redhat.com; robh+dt@kernel.org; krzysztof.kozlowski+dt@linaro.org
>Cc: linux@armlinux.org.uk; michal.simek@amd.com;
>radhey.shyam.pandey@amd.com; netdev@vger.kernel.org;
>devicetree@vger.kernel.org; linux-arm-kernel@lists.infradead.org; linux-
>kernel@vger.kernel.org; anirudha.sarangi@amd.com;
>harini.katakam@amd.com; sarath.babu.naidu.gaddam@amd.com;
>git@amd.com
>Subject: [PATCH net-next V3 2/3] net: axienet: Preparatory changes for
>dmaengine support
>
>The axiethernet driver has in-built DMA programming. The aim is to remove
>the axiethernet AXI DMA programming after some time and instead use the
>dmaengine framework to communicate with the existing Xilinx DMA engine
>controller (xilinx_dma) driver.
>
>Keep the AXI DMA programming code under the AXIENET_USE_DMA check so that
>the dmaengine changes can be added later.
>
>Perform minor code reordering to minimize the conditional AXIENET_USE_DMA
>checks; there is no functional change.
>
>The driver uses the "dmas" property to identify whether it should use the
>dmaengine framework or the axiethernet AXI DMA programming.
>
>Signed-off-by: Sarath Babu Naidu Gaddam
><sarath.babu.naidu.gaddam@amd.com>
>---
>Changes in V3:
>1) New Patch.
>---
> drivers/net/ethernet/xilinx/xilinx_axienet.h  |   2 +
> .../net/ethernet/xilinx/xilinx_axienet_main.c | 317 +++++++++++-------
> 2 files changed, 192 insertions(+), 127 deletions(-)
>
>diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet.h
>b/drivers/net/ethernet/xilinx/xilinx_axienet.h
>index 575ff9de8985..10917d997d27 100644
>--- a/drivers/net/ethernet/xilinx/xilinx_axienet.h
>+++ b/drivers/net/ethernet/xilinx/xilinx_axienet.h
>@@ -435,6 +435,7 @@ struct axidma_bd {
>  * @coalesce_usec_rx:	IRQ coalesce delay for RX
>  * @coalesce_count_tx:	Store the irq coalesce on TX side.
>  * @coalesce_usec_tx:	IRQ coalesce delay for TX
>+ * @has_dmas:	flag to check dmaengine framework usage.
>  */
> struct axienet_local {
> 	struct net_device *ndev;
>@@ -499,6 +500,7 @@ struct axienet_local {
> 	u32 coalesce_usec_rx;
> 	u32 coalesce_count_tx;
> 	u32 coalesce_usec_tx;
>+	u8  has_dmas;

The hardware always has DMA. You are just switching to the dmaengine
software framework. use_dmaengine might be a better name.

> };
>
> /**
>diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
>b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
>index 3e310b55bce2..8678fc09245a 100644
>--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
>+++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
>@@ -54,6 +54,8 @@
>
> #define AXIENET_REGS_N		40
>
>+#define AXIENET_USE_DMA(lp) ((lp)->has_dmas)
>+
IMO lp->has_dmas is already simple enough and there is no need for a macro.
Up to you whether to remove or keep it.
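
With the rename suggested above, the checks could then simply read
(illustrative):

	if (!lp->use_dmaengine) {
		...
	}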

Thanks,
Sundeep

> /* Match table for of_platform binding */  static const struct of_device_id
>axienet_of_match[] = {
> 	{ .compatible = "xlnx,axi-ethernet-1.00.a", }, @@ -588,10 +590,6 @@
>static int axienet_device_reset(struct net_device *ndev)
> 	struct axienet_local *lp = netdev_priv(ndev);
> 	int ret;
>
>-	ret = __axienet_device_reset(lp);
>-	if (ret)
>-		return ret;
>-
> 	lp->max_frm_size = XAE_MAX_VLAN_FRAME_SIZE;
> 	lp->options |= XAE_OPTION_VLAN;
> 	lp->options &= (~XAE_OPTION_JUMBO);
>@@ -605,11 +603,17 @@ static int axienet_device_reset(struct net_device
>*ndev)
> 			lp->options |= XAE_OPTION_JUMBO;
> 	}
>
>-	ret = axienet_dma_bd_init(ndev);
>-	if (ret) {
>-		netdev_err(ndev, "%s: descriptor allocation failed\n",
>-			   __func__);
>-		return ret;
>+	if (!AXIENET_USE_DMA(lp)) {
>+		ret = __axienet_device_reset(lp);
>+		if (ret)
>+			return ret;
>+
>+		ret = axienet_dma_bd_init(ndev);
>+		if (ret) {
>+			netdev_err(ndev, "%s: descriptor allocation failed\n",
>+				   __func__);
>+			return ret;
>+		}
> 	}
>
> 	axienet_status = axienet_ior(lp, XAE_RCW1_OFFSET); @@ -775,7 +779,7
>@@ static int axienet_tx_poll(struct napi_struct *napi, int budget)  }
>
> /**
>- * axienet_start_xmit - Starts the transmission.
>+ * axienet_start_xmit_legacy - Starts the transmission.
>  * @skb:	sk_buff pointer that contains data to be Txed.
>  * @ndev:	Pointer to net_device structure.
>  *
>@@ -788,7 +792,7 @@ static int axienet_tx_poll(struct napi_struct *napi, int
>budget)
>  * it populates AXI Stream Control fields with appropriate values.
>  */
> static netdev_tx_t
>-axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
>+axienet_start_xmit_legacy(struct sk_buff *skb, struct net_device *ndev)
> {
> 	u32 ii;
> 	u32 num_frag;
>@@ -890,6 +894,27 @@ axienet_start_xmit(struct sk_buff *skb, struct
>net_device *ndev)
> 	return NETDEV_TX_OK;
> }
>
>+/**
>+ * axienet_start_xmit - Starts the transmission.
>+ * @skb:        sk_buff pointer that contains data to be Txed.
>+ * @ndev:       Pointer to net_device structure.
>+ *
>+ * Return: NETDEV_TX_OK, on success
>+ *          NETDEV_TX_BUSY, if any of the descriptors are not free
>+ *
>+ * This function is invoked from upper layers to initiate transmission
>+*/ static netdev_tx_t axienet_start_xmit(struct sk_buff *skb, struct
>+net_device *ndev) {
>+	struct axienet_local *lp = netdev_priv(ndev);
>+
>+	if (!AXIENET_USE_DMA(lp))
>+		return axienet_start_xmit_legacy(skb, ndev);
>+	else
>+		return NETDEV_TX_BUSY;
>+}
>+
> /**
>  * axienet_rx_poll - Triggered by RX ISR to complete the BD processing.
>  * @napi:	Pointer to NAPI structure.
>@@ -1124,41 +1149,22 @@ static irqreturn_t axienet_eth_irq(int irq, void
>*_ndev)  static void axienet_dma_err_handler(struct work_struct *work);
>
> /**
>- * axienet_open - Driver open routine.
>- * @ndev:	Pointer to net_device structure
>+ * axienet_init_legacy_dma - init the dma legacy code.
>+ * @ndev:       Pointer to net_device structure
>  *
>  * Return: 0, on success.
>- *	    non-zero error value on failure
>+ *          non-zero error value on failure
>+ *
>+ * This is the dma  initialization code. It also allocates interrupt
>+ * service routines, enables the interrupt lines and ISR handling.
>  *
>- * This is the driver open routine. It calls phylink_start to start the
>- * PHY device.
>- * It also allocates interrupt service routines, enables the interrupt lines
>- * and ISR handling. Axi Ethernet core is reset through Axi DMA core. Buffer
>- * descriptors are initialized.
>  */
>-static int axienet_open(struct net_device *ndev)
>+
>+static inline int axienet_init_legacy_dma(struct net_device *ndev)
> {
> 	int ret;
> 	struct axienet_local *lp = netdev_priv(ndev);
>
>-	dev_dbg(&ndev->dev, "axienet_open()\n");
>-
>-	/* When we do an Axi Ethernet reset, it resets the complete core
>-	 * including the MDIO. MDIO must be disabled before resetting.
>-	 * Hold MDIO bus lock to avoid MDIO accesses during the reset.
>-	 */
>-	axienet_lock_mii(lp);
>-	ret = axienet_device_reset(ndev);
>-	axienet_unlock_mii(lp);
>-
>-	ret = phylink_of_phy_connect(lp->phylink, lp->dev->of_node, 0);
>-	if (ret) {
>-		dev_err(lp->dev, "phylink_of_phy_connect() failed: %d\n", ret);
>-		return ret;
>-	}
>-
>-	phylink_start(lp->phylink);
>-
> 	/* Enable worker thread for Axi DMA error handling */
> 	INIT_WORK(&lp->dma_err_task, axienet_dma_err_handler);
>
>@@ -1192,13 +1198,62 @@ static int axienet_open(struct net_device *ndev)
> err_tx_irq:
> 	napi_disable(&lp->napi_tx);
> 	napi_disable(&lp->napi_rx);
>-	phylink_stop(lp->phylink);
>-	phylink_disconnect_phy(lp->phylink);
> 	cancel_work_sync(&lp->dma_err_task);
> 	dev_err(lp->dev, "request_irq() failed\n");
> 	return ret;
> }
>
>+/**
>+ * axienet_open - Driver open routine.
>+ * @ndev:	Pointer to net_device structure
>+ *
>+ * Return: 0, on success.
>+ *	    non-zero error value on failure
>+ *
>+ * This is the driver open routine. It calls phylink_start to start the
>+ * PHY device.
>+ * It also allocates interrupt service routines, enables the interrupt
>+lines
>+ * and ISR handling. Axi Ethernet core is reset through Axi DMA core.
>+Buffer
>+ * descriptors are initialized.
>+ */
>+static int axienet_open(struct net_device *ndev) {
>+	int ret;
>+	struct axienet_local *lp = netdev_priv(ndev);
>+
>+	dev_dbg(&ndev->dev, "%s\n", __func__);
>+
>+	/* When we do an Axi Ethernet reset, it resets the complete core
>+	 * including the MDIO. MDIO must be disabled before resetting.
>+	 * Hold MDIO bus lock to avoid MDIO accesses during the reset.
>+	 */
>+	axienet_lock_mii(lp);
>+	ret = axienet_device_reset(ndev);
>+	axienet_unlock_mii(lp);
>+
>+	ret = phylink_of_phy_connect(lp->phylink, lp->dev->of_node, 0);
>+	if (ret) {
>+		dev_err(lp->dev, "phylink_of_phy_connect() failed: %d\n", ret);
>+		return ret;
>+	}
>+
>+	phylink_start(lp->phylink);
>+
>+	if (!AXIENET_USE_DMA(lp)) {
>+		ret = axienet_init_legacy_dma(ndev);
>+		if (ret)
>+			goto error_code;
>+	}
>+
>+	return 0;
>+
>+error_code:
>+	phylink_stop(lp->phylink);
>+	phylink_disconnect_phy(lp->phylink);
>+
>+	return ret;
>+}
>+
> /**
>  * axienet_stop - Driver stop routine.
>  * @ndev:	Pointer to net_device structure
>@@ -1215,8 +1270,10 @@ static int axienet_stop(struct net_device *ndev)
>
> 	dev_dbg(&ndev->dev, "axienet_close()\n");
>
>-	napi_disable(&lp->napi_tx);
>-	napi_disable(&lp->napi_rx);
>+	if (!AXIENET_USE_DMA(lp)) {
>+		napi_disable(&lp->napi_tx);
>+		napi_disable(&lp->napi_rx);
>+	}
>
> 	phylink_stop(lp->phylink);
> 	phylink_disconnect_phy(lp->phylink);
>@@ -1224,18 +1281,18 @@ static int axienet_stop(struct net_device *ndev)
> 	axienet_setoptions(ndev, lp->options &
> 			   ~(XAE_OPTION_TXEN | XAE_OPTION_RXEN));
>
>-	axienet_dma_stop(lp);
>+	if (!AXIENET_USE_DMA(lp)) {
>+		axienet_dma_stop(lp);
>+		cancel_work_sync(&lp->dma_err_task);
>+		free_irq(lp->tx_irq, ndev);
>+		free_irq(lp->rx_irq, ndev);
>+		axienet_dma_bd_release(ndev);
>+	}
>
> 	axienet_iow(lp, XAE_IE_OFFSET, 0);
>
>-	cancel_work_sync(&lp->dma_err_task);
>-
> 	if (lp->eth_irq > 0)
> 		free_irq(lp->eth_irq, ndev);
>-	free_irq(lp->tx_irq, ndev);
>-	free_irq(lp->rx_irq, ndev);
>-
>-	axienet_dma_bd_release(ndev);
> 	return 0;
> }
>
>@@ -1411,14 +1468,16 @@ static void axienet_ethtools_get_regs(struct net_device *ndev,
> 	data[29] = axienet_ior(lp, XAE_FMI_OFFSET);
> 	data[30] = axienet_ior(lp, XAE_AF0_OFFSET);
> 	data[31] = axienet_ior(lp, XAE_AF1_OFFSET);
>-	data[32] = axienet_dma_in32(lp, XAXIDMA_TX_CR_OFFSET);
>-	data[33] = axienet_dma_in32(lp, XAXIDMA_TX_SR_OFFSET);
>-	data[34] = axienet_dma_in32(lp, XAXIDMA_TX_CDESC_OFFSET);
>-	data[35] = axienet_dma_in32(lp, XAXIDMA_TX_TDESC_OFFSET);
>-	data[36] = axienet_dma_in32(lp, XAXIDMA_RX_CR_OFFSET);
>-	data[37] = axienet_dma_in32(lp, XAXIDMA_RX_SR_OFFSET);
>-	data[38] = axienet_dma_in32(lp, XAXIDMA_RX_CDESC_OFFSET);
>-	data[39] = axienet_dma_in32(lp, XAXIDMA_RX_TDESC_OFFSET);
>+	if (!AXIENET_USE_DMA(lp)) {
>+		data[32] = axienet_dma_in32(lp, XAXIDMA_TX_CR_OFFSET);
>+		data[33] = axienet_dma_in32(lp, XAXIDMA_TX_SR_OFFSET);
>+		data[34] = axienet_dma_in32(lp, XAXIDMA_TX_CDESC_OFFSET);
>+		data[35] = axienet_dma_in32(lp, XAXIDMA_TX_TDESC_OFFSET);
>+		data[36] = axienet_dma_in32(lp, XAXIDMA_RX_CR_OFFSET);
>+		data[37] = axienet_dma_in32(lp, XAXIDMA_RX_SR_OFFSET);
>+		data[38] = axienet_dma_in32(lp, XAXIDMA_RX_CDESC_OFFSET);
>+		data[39] = axienet_dma_in32(lp, XAXIDMA_RX_TDESC_OFFSET);
>+	}
> }
>
> static void
>@@ -1878,9 +1937,6 @@ static int axienet_probe(struct platform_device *pdev)
> 	u64_stats_init(&lp->rx_stat_sync);
> 	u64_stats_init(&lp->tx_stat_sync);
>
>-	netif_napi_add(ndev, &lp->napi_rx, axienet_rx_poll);
>-	netif_napi_add(ndev, &lp->napi_tx, axienet_tx_poll);
>-
> 	lp->axi_clk = devm_clk_get_optional(&pdev->dev, "s_axi_lite_clk");
> 	if (!lp->axi_clk) {
> 		/* For backward compatibility, if named AXI clock is not present,
>@@ -2006,75 +2062,80 @@ static int axienet_probe(struct platform_device *pdev)
> 		goto cleanup_clk;
> 	}
>
>-	/* Find the DMA node, map the DMA registers, and decode the DMA IRQs */
>-	np = of_parse_phandle(pdev->dev.of_node, "axistream-connected", 0);
>-	if (np) {
>-		struct resource dmares;
>+	if (!of_find_property(pdev->dev.of_node, "dmas", NULL)) {
>+		/* Find the DMA node, map the DMA registers, and decode the DMA IRQs */
>+		np = of_parse_phandle(pdev->dev.of_node, "axistream-connected", 0);
>
>-		ret = of_address_to_resource(np, 0, &dmares);
>-		if (ret) {
>-			dev_err(&pdev->dev,
>-				"unable to get DMA resource\n");
>+		if (np) {
>+			struct resource dmares;
>+
>+			ret = of_address_to_resource(np, 0, &dmares);
>+			if (ret) {
>+				dev_err(&pdev->dev,
>+					"unable to get DMA resource\n");
>+				of_node_put(np);
>+				goto cleanup_clk;
>+			}
>+			lp->dma_regs = devm_ioremap_resource(&pdev->dev,
>+							     &dmares);
>+			lp->rx_irq = irq_of_parse_and_map(np, 1);
>+			lp->tx_irq = irq_of_parse_and_map(np, 0);
> 			of_node_put(np);
>+			lp->eth_irq = platform_get_irq_optional(pdev, 0);
>+		} else {
>+			/* Check for these resources directly on the Ethernet node. */
>+			lp->dma_regs = devm_platform_get_and_ioremap_resource(pdev, 1, NULL);
>+			lp->rx_irq = platform_get_irq(pdev, 1);
>+			lp->tx_irq = platform_get_irq(pdev, 0);
>+			lp->eth_irq = platform_get_irq_optional(pdev, 2);
>+		}
>+		if (IS_ERR(lp->dma_regs)) {
>+			dev_err(&pdev->dev, "could not map DMA regs\n");
>+			ret = PTR_ERR(lp->dma_regs);
>+			goto cleanup_clk;
>+		}
>+		if (lp->rx_irq <= 0 || lp->tx_irq <= 0) {
>+			dev_err(&pdev->dev, "could not determine irqs\n");
>+			ret = -ENOMEM;
> 			goto cleanup_clk;
> 		}
>-		lp->dma_regs = devm_ioremap_resource(&pdev->dev,
>-						     &dmares);
>-		lp->rx_irq = irq_of_parse_and_map(np, 1);
>-		lp->tx_irq = irq_of_parse_and_map(np, 0);
>-		of_node_put(np);
>-		lp->eth_irq = platform_get_irq_optional(pdev, 0);
>-	} else {
>-		/* Check for these resources directly on the Ethernet node. */
>-		lp->dma_regs = devm_platform_get_and_ioremap_resource(pdev, 1, NULL);
>-		lp->rx_irq = platform_get_irq(pdev, 1);
>-		lp->tx_irq = platform_get_irq(pdev, 0);
>-		lp->eth_irq = platform_get_irq_optional(pdev, 2);
>-	}
>-	if (IS_ERR(lp->dma_regs)) {
>-		dev_err(&pdev->dev, "could not map DMA regs\n");
>-		ret = PTR_ERR(lp->dma_regs);
>-		goto cleanup_clk;
>-	}
>-	if ((lp->rx_irq <= 0) || (lp->tx_irq <= 0)) {
>-		dev_err(&pdev->dev, "could not determine irqs\n");
>-		ret = -ENOMEM;
>-		goto cleanup_clk;
>-	}
>
>-	/* Autodetect the need for 64-bit DMA pointers.
>-	 * When the IP is configured for a bus width bigger than 32 bits,
>-	 * writing the MSB registers is mandatory, even if they are all 0.
>-	 * We can detect this case by writing all 1's to one such register
>-	 * and see if that sticks: when the IP is configured for 32 bits
>-	 * only, those registers are RES0.
>-	 * Those MSB registers were introduced in IP v7.1, which we check first.
>-	 */
>-	if ((axienet_ior(lp, XAE_ID_OFFSET) >> 24) >= 0x9) {
>-		void __iomem *desc = lp->dma_regs + XAXIDMA_TX_CDESC_OFFSET + 4;
>-
>-		iowrite32(0x0, desc);
>-		if (ioread32(desc) == 0) {	/* sanity check */
>-			iowrite32(0xffffffff, desc);
>-			if (ioread32(desc) > 0) {
>-				lp->features |= XAE_FEATURE_DMA_64BIT;
>-				addr_width = 64;
>-				dev_info(&pdev->dev,
>-					 "autodetected 64-bit DMA range\n");
>-			}
>+		/* Autodetect the need for 64-bit DMA pointers.
>+		 * When the IP is configured for a bus width bigger than 32 bits,
>+		 * writing the MSB registers is mandatory, even if they are all 0.
>+		 * We can detect this case by writing all 1's to one such register
>+		 * and see if that sticks: when the IP is configured for 32 bits
>+		 * only, those registers are RES0.
>+		 * Those MSB registers were introduced in IP v7.1, which we check first.
>+		 */
>+		if ((axienet_ior(lp, XAE_ID_OFFSET) >> 24) >= 0x9) {
>+			void __iomem *desc = lp->dma_regs + XAXIDMA_TX_CDESC_OFFSET + 4;
>+
> 			iowrite32(0x0, desc);
>+			if (ioread32(desc) == 0) {	/* sanity check */
>+				iowrite32(0xffffffff, desc);
>+				if (ioread32(desc) > 0) {
>+					lp->features |= XAE_FEATURE_DMA_64BIT;
>+					addr_width = 64;
>+					dev_info(&pdev->dev,
>+						 "autodetected 64-bit DMA range\n");
>+				}
>+				iowrite32(0x0, desc);
>+			}
>+		}
>+		if (!IS_ENABLED(CONFIG_64BIT) && lp->features & XAE_FEATURE_DMA_64BIT) {
>+			dev_err(&pdev->dev, "64-bit addressable DMA is not compatible with 32-bit archecture\n");
>+			ret = -EINVAL;
>+			goto cleanup_clk;
> 		}
>-	}
>-	if (!IS_ENABLED(CONFIG_64BIT) && lp->features & XAE_FEATURE_DMA_64BIT) {
>-		dev_err(&pdev->dev, "64-bit addressable DMA is not compatible with 32-bit archecture\n");
>-		ret = -EINVAL;
>-		goto cleanup_clk;
>-	}
>
>-	ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(addr_width));
>-	if (ret) {
>-		dev_err(&pdev->dev, "No suitable DMA available\n");
>-		goto cleanup_clk;
>+		ret = dma_set_mask_and_coherent(&pdev->dev, DMA_BIT_MASK(addr_width));
>+		if (ret) {
>+			dev_err(&pdev->dev, "No suitable DMA available\n");
>+			goto cleanup_clk;
>+		}
>+		netif_napi_add(ndev, &lp->napi_rx, axienet_rx_poll);
>+		netif_napi_add(ndev, &lp->napi_tx, axienet_tx_poll);
> 	}
>
> 	/* Check for Ethernet core IRQ (optional) */
>@@ -2092,14 +2153,16 @@ static int axienet_probe(struct platform_device *pdev)
> 	}
>
> 	lp->coalesce_count_rx = XAXIDMA_DFT_RX_THRESHOLD;
>-	lp->coalesce_usec_rx = XAXIDMA_DFT_RX_USEC;
> 	lp->coalesce_count_tx = XAXIDMA_DFT_TX_THRESHOLD;
>-	lp->coalesce_usec_tx = XAXIDMA_DFT_TX_USEC;
>
>-	/* Reset core now that clocks are enabled, prior to accessing MDIO */
>-	ret = __axienet_device_reset(lp);
>-	if (ret)
>-		goto cleanup_clk;
>+	if (!AXIENET_USE_DMA(lp)) {
>+		lp->coalesce_usec_rx = XAXIDMA_DFT_RX_USEC;
>+		lp->coalesce_usec_tx = XAXIDMA_DFT_TX_USEC;
>+		/* Reset core now that clocks are enabled, prior to accessing MDIO */
>+		ret = __axienet_device_reset(lp);
>+		if (ret)
>+			goto cleanup_clk;
>+	}
>
> 	ret = axienet_mdio_setup(lp);
> 	if (ret)
>--
>2.25.1
>


^ permalink raw reply	[flat|nested] 14+ messages in thread

* RE: [EXT] [PATCH net-next V3 3/3] net: axienet: Introduce dmaengine support
  2023-05-10  8:50 ` [PATCH net-next V3 3/3] net: axienet: Introduce " Sarath Babu Naidu Gaddam
  2023-05-10 10:10   ` Krzysztof Kozlowski
@ 2023-05-10 13:44   ` Subbaraya Sundeep Bhatta
  2023-05-12  4:36     ` Gaddam, Sarath Babu Naidu
  1 sibling, 1 reply; 14+ messages in thread
From: Subbaraya Sundeep Bhatta @ 2023-05-10 13:44 UTC (permalink / raw)
  To: Sarath Babu Naidu Gaddam, davem, edumazet, kuba, pabeni, robh+dt,
	krzysztof.kozlowski+dt
  Cc: linux, michal.simek, radhey.shyam.pandey, netdev, devicetree,
	linux-arm-kernel, linux-kernel, anirudha.sarangi, harini.katakam,
	git

Hi,

>-----Original Message-----
>From: Sarath Babu Naidu Gaddam <sarath.babu.naidu.gaddam@amd.com>
>Sent: Wednesday, May 10, 2023 2:21 PM
>To: davem@davemloft.net; edumazet@google.com; kuba@kernel.org;
>pabeni@redhat.com; robh+dt@kernel.org; krzysztof.kozlowski+dt@linaro.org
>Cc: linux@armlinux.org.uk; michal.simek@amd.com;
>radhey.shyam.pandey@amd.com; netdev@vger.kernel.org;
>devicetree@vger.kernel.org; linux-arm-kernel@lists.infradead.org; linux-
>kernel@vger.kernel.org; anirudha.sarangi@amd.com;
>harini.katakam@amd.com; sarath.babu.naidu.gaddam@amd.com;
>git@amd.com
>Subject: [PATCH net-next V3 3/3] net: axienet: Introduce dmaengine
>support
>
>Add dmaengine framework to communicate with the xilinx DMAengine
>driver(AXIDMA).
>
>Axi ethernet driver uses separate channels for transmit and receive.
>Add support for these channels to handle TX and RX with skb and
>appropriate callbacks. Also add axi ethernet core interrupt for
>dmaengine framework support.
>
>The dmaengine framework was extended for metadata API support during the
>axidma RFC[1] discussion. However it still needs further enhancements to
>make it well suited for ethernet usecases. The ethernet features i.e
>ethtool set/get of DMA IP properties, ndo_poll_controller, trigger
>reset of DMA IP from ethernet are not supported (mentioned in TODO)
>and it requires follow-up discussion and dma framework enhancement.
>
>[1]: https://lore.kernel.org/lkml/1522665546-10035-1-git-send-email-radheys@xilinx.com
>
>Signed-off-by: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
>Signed-off-by: Sarath Babu Naidu Gaddam
><sarath.babu.naidu.gaddam@amd.com>
>---
>Performance numbers(Mbps):
>
>              | TCP | UDP |
>         -----------------
>         | Tx | 920 | 800 |
>         -----------------
>         | Rx | 620 | 910 |
>
>Changes in V3:
>1) New patch for dmaengine framework support.
>---
> drivers/net/ethernet/xilinx/xilinx_axienet.h  |   6 +
> .../net/ethernet/xilinx/xilinx_axienet_main.c | 331 +++++++++++++++++-
> 2 files changed, 335 insertions(+), 2 deletions(-)
>
>diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet.h b/drivers/net/ethernet/xilinx/xilinx_axienet.h
>index 10917d997d27..fbe00c5390d5 100644
>--- a/drivers/net/ethernet/xilinx/xilinx_axienet.h
>+++ b/drivers/net/ethernet/xilinx/xilinx_axienet.h
>@@ -436,6 +436,9 @@ struct axidma_bd {
>  * @coalesce_count_tx:	Store the irq coalesce on TX side.
>  * @coalesce_usec_tx:	IRQ coalesce delay for TX
>  * @has_dmas:	flag to check dmaengine framework usage.
>+ * @tx_chan:	TX DMA channel.
>+ * @rx_chan:	RX DMA channel.
>+ * @skb_cache:	Custom skb slab allocator
>  */
> struct axienet_local {
> 	struct net_device *ndev;
>@@ -501,6 +504,9 @@ struct axienet_local {
> 	u32 coalesce_count_tx;
> 	u32 coalesce_usec_tx;
> 	u8  has_dmas;
>+	struct dma_chan *tx_chan;
>+	struct dma_chan *rx_chan;
>+	struct kmem_cache *skb_cache;
> };
>
> /**
>diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
>index 8678fc09245a..662c77ff0e99 100644
>--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
>+++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
>@@ -37,6 +37,9 @@
> #include <linux/phy.h>
> #include <linux/mii.h>
> #include <linux/ethtool.h>
>+#include <linux/dmaengine.h>
>+#include <linux/dma-mapping.h>
>+#include <linux/dma/xilinx_dma.h>
>
> #include "xilinx_axienet.h"
>
>@@ -46,6 +49,9 @@
> #define TX_BD_NUM_MIN			(MAX_SKB_FRAGS + 1)
> #define TX_BD_NUM_MAX			4096
> #define RX_BD_NUM_MAX			4096
>+#define DMA_NUM_APP_WORDS		5
>+#define LEN_APP				4
>+#define RX_BUF_NUM_DEFAULT		128
>
> /* Must be shorter than length of ethtool_drvinfo.driver field to fit */
> #define DRIVER_NAME		"xaxienet"
>@@ -56,6 +62,16 @@
>
> #define AXIENET_USE_DMA(lp) ((lp)->has_dmas)
>
>+struct axi_skbuff {
>+	struct scatterlist sgl[MAX_SKB_FRAGS + 1];
>+	struct dma_async_tx_descriptor *desc;
>+	dma_addr_t dma_address;
>+	struct sk_buff *skb;
>+	int sg_len;
>+} __packed;
>+
>+static int axienet_rx_submit_desc(struct net_device *ndev);
>+
> /* Match table for of_platform binding */
> static const struct of_device_id axienet_of_match[] = {
> 	{ .compatible = "xlnx,axi-ethernet-1.00.a", },
>@@ -728,6 +744,108 @@ static inline int axienet_check_tx_bd_space(struct axienet_local *lp,
> 	return 0;
> }
>
>+/**
>+ * axienet_dma_tx_cb - DMA engine callback for TX channel.
>+ * @data:       Pointer to the axi_skbuff structure
>+ * @result:     error reporting through dmaengine_result.
>+ * This function is called by dmaengine driver for TX channel to notify
>+ * that the transmit is done.
>+ */
>+static void axienet_dma_tx_cb(void *data, const struct dmaengine_result *result)
>+{
>+	struct axi_skbuff *axi_skb = data;
>+
>+	struct net_device *netdev = axi_skb->skb->dev;
>+	struct axienet_local *lp = netdev_priv(netdev);
>+
>+	u64_stats_update_begin(&lp->tx_stat_sync);
>+	u64_stats_add(&lp->tx_bytes, axi_skb->skb->len);
>+	u64_stats_add(&lp->tx_packets, 1);
>+	u64_stats_update_end(&lp->tx_stat_sync);
>+
>+	dma_unmap_sg(lp->dev, axi_skb->sgl, axi_skb->sg_len, DMA_MEM_TO_DEV);
>+	dev_kfree_skb_any(axi_skb->skb);
>+	kmem_cache_free(lp->skb_cache, axi_skb);
>+}
>+
>+/**
>+ * axienet_start_xmit_dmaengine - Starts the transmission.
>+ * @skb:        sk_buff pointer that contains data to be Txed.
>+ * @ndev:       Pointer to net_device structure.
>+ *
>+ * Return: NETDEV_TX_OK, on success
>+ *          NETDEV_TX_BUSY, if any memory failure or SG error.
>+ *
>+ * This function is invoked from xmit to initiate transmission. The
>+ * function sets the skbs, callback API, SG, etc.
>+ * Additionally if checksum offloading is supported,
>+ * it populates AXI Stream Control fields with appropriate values.
>+ */
>+static netdev_tx_t
>+axienet_start_xmit_dmaengine(struct sk_buff *skb, struct net_device *ndev)
>+{
>+	struct dma_async_tx_descriptor *dma_tx_desc = NULL;
>+	struct axienet_local *lp = netdev_priv(ndev);
>+	u32 app[DMA_NUM_APP_WORDS] = {0};
>+	struct axi_skbuff *axi_skb;
>+	u32 csum_start_off;
>+	u32 csum_index_off;
>+	int sg_len;
>+	int ret;
>+
>+	sg_len = skb_shinfo(skb)->nr_frags + 1;
>+	axi_skb = kmem_cache_zalloc(lp->skb_cache, GFP_KERNEL);
>+	if (!axi_skb)
>+		return NETDEV_TX_BUSY;
>+
>+	sg_init_table(axi_skb->sgl, sg_len);
>+	ret = skb_to_sgvec(skb, axi_skb->sgl, 0, skb->len);
>+	if (unlikely(ret < 0))
>+		goto xmit_error_skb_sgvec;
>+
>+	ret = dma_map_sg(lp->dev, axi_skb->sgl, sg_len, DMA_TO_DEVICE);
>+	if (ret == 0)
>+		goto xmit_error_skb_sgvec;
>+
>+	/*Fill up app fields for checksum */
>+	if (skb->ip_summed == CHECKSUM_PARTIAL) {
>+		if (lp->features & XAE_FEATURE_FULL_TX_CSUM) {
>+			/* Tx Full Checksum Offload Enabled */
>+			app[0] |= 2;
>+		} else if (lp->features & XAE_FEATURE_PARTIAL_RX_CSUM) {
>+			csum_start_off = skb_transport_offset(skb);
>+			csum_index_off = csum_start_off + skb->csum_offset;
>+			/* Tx Partial Checksum Offload Enabled */
>+			app[0] |= 1;
>+			app[1] = (csum_start_off << 16) | csum_index_off;
>+		}
>+	} else if (skb->ip_summed == CHECKSUM_UNNECESSARY) {
>+		app[0] |= 2; /* Tx Full Checksum Offload Enabled */
>+	}
>+
>+	dma_tx_desc = lp->tx_chan->device->device_prep_slave_sg(lp->tx_chan, axi_skb->sgl,
>+			sg_len, DMA_MEM_TO_DEV,
>+			DMA_PREP_INTERRUPT, (void *)app);
>+
>+	if (!dma_tx_desc)
>+		goto xmit_error_prep;
>+
>+	axi_skb->skb = skb;
>+	axi_skb->sg_len = sg_len;
>+	dma_tx_desc->callback_param =  axi_skb;
>+	dma_tx_desc->callback_result = axienet_dma_tx_cb;
>+	dmaengine_submit(dma_tx_desc);
>+	dma_async_issue_pending(lp->tx_chan);
>+
>+	return NETDEV_TX_OK;
>+
>+xmit_error_prep:
>+	dma_unmap_sg(lp->dev, axi_skb->sgl, sg_len, DMA_TO_DEVICE);
>+xmit_error_skb_sgvec:
>+	kmem_cache_free(lp->skb_cache, axi_skb);
>+	return NETDEV_TX_BUSY;
>+}
>+
> /**
>  * axienet_tx_poll - Invoked once a transmit is completed by the
>  * Axi DMA Tx channel.
>@@ -912,7 +1030,42 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
> 	if (!AXIENET_USE_DMA(lp))
> 		return axienet_start_xmit_legacy(skb, ndev);
> 	else
>-		return NETDEV_TX_BUSY;
>+		return axienet_start_xmit_dmaengine(skb, ndev);

You can avoid this if/else with:

 if (AXIENET_USE_DMA(lp))
         return axienet_start_xmit_dmaengine(skb, ndev);

and then there is no need to define the axienet_start_xmit_legacy function
in patch 2/3. _legacy may not be the right name anyway, since you support
both the in-built DMA and the dma engine just by turning DT properties
on/off. Also, does this driver roll back to using the in-built DMA if
DMA_ENGINE is not compiled in?
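
For reference, the early-return shape spelled out in full (a sketch only,
keeping the helper names used in this series):

static netdev_tx_t
axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
{
	struct axienet_local *lp = netdev_priv(ndev);

	/* Early return for the dmaengine path; no else branch needed */
	if (AXIENET_USE_DMA(lp))
		return axienet_start_xmit_dmaengine(skb, ndev);

	/* ... or fold the descriptor-based body back in here instead */
	return axienet_start_xmit_legacy(skb, ndev);
}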

>+}
>+
>+/**
>+ * axienet_dma_rx_cb - DMA engine callback for RX channel.
>+ * @data:       Pointer to the axi_skbuff structure
>+ * @result:     error reporting through dmaengine_result.
>+ * This function is called by dmaengine driver for RX channel to notify
>+ * that the packet is received.
>+ */
>+static void axienet_dma_rx_cb(void *data, const struct dmaengine_result *result)
>+{
>+	struct axi_skbuff *axi_skb = data;
>+	struct sk_buff *skb = axi_skb->skb;
>+	struct net_device *netdev = skb->dev;
>+	struct axienet_local *lp = netdev_priv(netdev);
>+	size_t meta_len, meta_max_len, rx_len;
>+	u32 *app;
>+
>+	app  = dmaengine_desc_get_metadata_ptr(axi_skb->desc, &meta_len, &meta_max_len);
>+	dma_unmap_single(lp->dev, axi_skb->dma_address, lp->max_frm_size,
>+			 DMA_FROM_DEVICE);
>+	/* TODO: Derive app word index programmatically */
>+	rx_len = (app[LEN_APP] & 0xFFFF);
>+	skb_put(skb, rx_len);
>+	skb->protocol = eth_type_trans(skb, netdev);
>+	skb->ip_summed = CHECKSUM_NONE;
>+
>+	netif_rx(skb);
>+	kmem_cache_free(lp->skb_cache, axi_skb);
>+	u64_stats_update_begin(&lp->rx_stat_sync);
>+	u64_stats_add(&lp->rx_packets, 1);
>+	u64_stats_add(&lp->rx_bytes, rx_len);
>+	u64_stats_update_end(&lp->rx_stat_sync);
>+	axienet_rx_submit_desc(netdev);
>+	dma_async_issue_pending(lp->rx_chan);
> }
>
> /**
>@@ -1148,6 +1301,134 @@ static irqreturn_t axienet_eth_irq(int irq, void
>*_ndev)
>
> static void axienet_dma_err_handler(struct work_struct *work);
>
>+/**
>+ * axienet_rx_submit_desc - Submit the descriptors with required data
>+ * like the callback API, skb buffer, etc. to dmaengine.
>+ *
>+ * @ndev:	net_device pointer
>+ *
>+ * Return: 0, on success.
>+ *          non-zero error value on failure
>+ */
>+static int axienet_rx_submit_desc(struct net_device *ndev)
>+{
>+	struct dma_async_tx_descriptor *dma_rx_desc = NULL;
>+	struct axienet_local *lp = netdev_priv(ndev);
>+	struct axi_skbuff *axi_skb;
>+	struct sk_buff *skb;
>+	dma_addr_t addr;
>+	int ret;
>+
>+	axi_skb = kmem_cache_alloc(lp->skb_cache, GFP_KERNEL);
>+
>+	if (!axi_skb)
>+		return -ENOMEM;
>+	skb = netdev_alloc_skb(ndev, lp->max_frm_size);
>+	if (!skb) {
>+		ret = -ENOMEM;
>+		goto rx_bd_init_skb;
>+	}
>+
>+	sg_init_table(axi_skb->sgl, 1);
>+	addr = dma_map_single(lp->dev, skb->data, lp->max_frm_size, DMA_FROM_DEVICE);
>+	sg_dma_address(axi_skb->sgl) = addr;
>+	sg_dma_len(axi_skb->sgl) = lp->max_frm_size;
>+	dma_rx_desc = dmaengine_prep_slave_sg(lp->rx_chan, axi_skb->sgl,
>+					      1, DMA_DEV_TO_MEM,
>+					      DMA_PREP_INTERRUPT);
>+	if (!dma_rx_desc) {
>+		ret = -EINVAL;
>+		goto rx_bd_init_prep_sg;
>+	}
>+
>+	axi_skb->skb = skb;
>+	axi_skb->dma_address = sg_dma_address(axi_skb->sgl);
>+	axi_skb->desc = dma_rx_desc;
>+	dma_rx_desc->callback_param =  axi_skb;
>+	dma_rx_desc->callback_result = axienet_dma_rx_cb;
>+	dmaengine_submit(dma_rx_desc);
>+
>+	return 0;
>+
>+rx_bd_init_prep_sg:
>+	dma_unmap_single(lp->dev, addr, lp->max_frm_size, DMA_FROM_DEVICE);
>+	dev_kfree_skb(skb);
>+rx_bd_init_skb:
>+	kmem_cache_free(lp->skb_cache, axi_skb);
>+	return ret;
>+}
>+
>+/**
>+ * axienet_setup_dma_chan - request the dma channels.
>+ * @ndev:       Pointer to net_device structure
>+ *
>+ * Return: 0, on success.
>+ *          non-zero error value on failure
>+ *
>+ * This function requests the TX and RX channels. It also submits the
>+ * allocated skb buffers and call back APIs to dmaengine.
>+ *
>+ */
>+static int axienet_setup_dma_chan(struct net_device *ndev)
>+{
>+	struct axienet_local *lp = netdev_priv(ndev);
>+	int i, ret;
>+
>+	lp->tx_chan = dma_request_chan(lp->dev, "tx_chan0");
>+	if (IS_ERR(lp->tx_chan)) {
>+		ret = PTR_ERR(lp->tx_chan);
>+		if (ret != -EPROBE_DEFER)
>+			netdev_err(ndev, "No Ethernet DMA (TX) channel found\n");
>+		return ret;
>+	}
>+
>+	lp->rx_chan = dma_request_chan(lp->dev, "rx_chan0");
>+	if (IS_ERR(lp->rx_chan)) {
>+		ret = PTR_ERR(lp->rx_chan);
>+		if (ret != -EPROBE_DEFER)
>+			netdev_err(ndev, "No Ethernet DMA (RX) channel found\n");
>+		goto err_dma_request_rx;
>+	}
>+	lp->skb_cache = kmem_cache_create("ethernet", sizeof(struct axi_skbuff),
>+					  0, 0, NULL);

I do not see kmem_cache_destroy?
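
Something like this at the end of the dmaengine branch in axienet_stop()
seems to be missing (a sketch, assuming skb_cache stays in axienet_local):

	dmaengine_terminate_all(lp->tx_chan);
	dmaengine_terminate_all(lp->rx_chan);
	dma_release_channel(lp->rx_chan);
	dma_release_channel(lp->tx_chan);
	kmem_cache_destroy(lp->skb_cache);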

Thanks,
Sundeep

>+	if (!lp->skb_cache) {
>+		ret =  -ENOMEM;
>+		goto err_kmem;
>+	}
>+	/* TODO: Instead of BD_NUM_DEFAULT use runtime support*/
>+	for (i = 0; i < RX_BUF_NUM_DEFAULT; i++)
>+		axienet_rx_submit_desc(ndev);
>+	dma_async_issue_pending(lp->rx_chan);
>+
>+	return 0;
>+err_kmem:
>+	dma_release_channel(lp->rx_chan);
>+err_dma_request_rx:
>+	dma_release_channel(lp->tx_chan);
>+	return ret;
>+}
>+
>+/**
>+ * axienet_init_dmaengine - init the dmaengine code.
>+ * @ndev:       Pointer to net_device structure
>+ *
>+ * Return: 0, on success.
>+ *          non-zero error value on failure
>+ *
>+ * This is the dmaengine initialization code.
>+ */
>+static inline int axienet_init_dmaengine(struct net_device *ndev)
>+{
>+	int ret;
>+
>+	ret = axienet_setup_dma_chan(ndev);
>+
>+	if (ret < 0)
>+		return ret;
>+
>+	return 0;
>+}
>+
> /**
>  * axienet_init_legacy_dma - init the dma legacy code.
>  * @ndev:       Pointer to net_device structure
>@@ -1239,7 +1520,20 @@ static int axienet_open(struct net_device *ndev)
>
> 	phylink_start(lp->phylink);
>
>-	if (!AXIENET_USE_DMA(lp)) {
>+	if (AXIENET_USE_DMA(lp)) {
>+		ret = axienet_init_dmaengine(ndev);
>+		if (ret < 0)
>+			goto error_code;
>+
>+		/* Enable interrupts for Axi Ethernet core (if defined) */
>+		if (lp->eth_irq > 0) {
>+			ret = request_irq(lp->eth_irq, axienet_eth_irq, IRQF_SHARED,
>+					  ndev->name, ndev);
>+			if (ret)
>+				goto error_code;
>+		}
>+
>+	} else {
> 		ret = axienet_init_legacy_dma(ndev);
> 		if (ret)
> 			goto error_code;
>@@ -1287,6 +1581,12 @@ static int axienet_stop(struct net_device *ndev)
> 		free_irq(lp->tx_irq, ndev);
> 		free_irq(lp->rx_irq, ndev);
> 		axienet_dma_bd_release(ndev);
>+	} else {
>+		dmaengine_terminate_all(lp->tx_chan);
>+		dmaengine_terminate_all(lp->rx_chan);
>+
>+		dma_release_channel(lp->rx_chan);
>+		dma_release_channel(lp->tx_chan);
> 	}
>
> 	axienet_iow(lp, XAE_IE_OFFSET, 0);
>@@ -2136,6 +2436,33 @@ static int axienet_probe(struct platform_device
>*pdev)
> 		}
> 		netif_napi_add(ndev, &lp->napi_rx, axienet_rx_poll);
> 		netif_napi_add(ndev, &lp->napi_tx, axienet_tx_poll);
>+	} else {
>+		struct xilinx_vdma_config cfg;
>+		struct dma_chan *tx_chan;
>+
>+		lp->eth_irq = platform_get_irq_optional(pdev, 0);
>+		tx_chan = dma_request_chan(lp->dev, "tx_chan0");
>+
>+		if (IS_ERR(tx_chan)) {
>+			ret = PTR_ERR(tx_chan);
>+			if (ret != -EPROBE_DEFER)
>+				dev_err(&pdev->dev, "No Ethernet DMA (TX) channel found\n");
>+			goto cleanup_clk;
>+		}
>+
>+		cfg.reset = 1;
>+		/* As name says VDMA but it has support for DMA channel reset */
>+		ret = xilinx_vdma_channel_set_config(tx_chan, &cfg);
>+
>+		if (ret < 0) {
>+			dev_err(&pdev->dev, "Reset channel failed\n");
>+			dma_release_channel(tx_chan);
>+			goto cleanup_clk;
>+		} else {
>+			lp->has_dmas = 1;
>+		}
>+
>+		dma_release_channel(tx_chan);
> 	}
>
> 	/* Check for Ethernet core IRQ (optional) */
>--
>2.25.1
>


^ permalink raw reply	[flat|nested] 14+ messages in thread

* RE: [PATCH net-next V3 1/3] dt-bindings: net: xilinx_axienet: Introduce dmaengine binding support
  2023-05-10 10:08   ` Krzysztof Kozlowski
@ 2023-05-11 11:32     ` Gaddam, Sarath Babu Naidu
  2023-05-12  6:27       ` Krzysztof Kozlowski
  0 siblings, 1 reply; 14+ messages in thread
From: Gaddam, Sarath Babu Naidu @ 2023-05-11 11:32 UTC (permalink / raw)
  To: Krzysztof Kozlowski, davem, edumazet, kuba, pabeni, robh+dt,
	krzysztof.kozlowski+dt
  Cc: linux, Simek, Michal, Pandey, Radhey Shyam, netdev, devicetree,
	linux-arm-kernel, linux-kernel, Sarangi, Anirudha, Katakam,
	Harini, git (AMD-Xilinx)



> -----Original Message-----
> From: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
> Sent: Wednesday, May 10, 2023 3:39 PM
> To: Gaddam, Sarath Babu Naidu
> <sarath.babu.naidu.gaddam@amd.com>; davem@davemloft.net;
> edumazet@google.com; kuba@kernel.org; pabeni@redhat.com;
> robh+dt@kernel.org; krzysztof.kozlowski+dt@linaro.org
> Cc: linux@armlinux.org.uk; Simek, Michal <michal.simek@amd.com>;
> Pandey, Radhey Shyam <radhey.shyam.pandey@amd.com>;
> netdev@vger.kernel.org; devicetree@vger.kernel.org; linux-arm-
> kernel@lists.infradead.org; linux-kernel@vger.kernel.org; Sarangi,
> Anirudha <anirudha.sarangi@amd.com>; Katakam, Harini
> <harini.katakam@amd.com>; git (AMD-Xilinx) <git@amd.com>
> Subject: Re: [PATCH net-next V3 1/3] dt-bindings: net: xilinx_axienet:
> Introduce dmaengine binding support
> 
> On 10/05/2023 10:50, Sarath Babu Naidu Gaddam wrote:
> > From: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
> >
> > The axiethernet driver will use dmaengine framework to communicate
> > with dma controller IP instead of built-in dma programming sequence.
> 
> Subject: drop second/last, redundant "bindings". The "dt-bindings"
> prefix is already stating that these are bindings.
> 
> Actually also drop "dmaenging" as it is Linuxism. Focus on hardware, e.g.
> "Add DMA support".
> 
> >
> > To request dma transmit and receive channels the axiethernet driver
> > uses generic dmas, dma-names properties.
> >
> > Also to support the backward compatibility, use "dmas" property to
> > identify as it should use dmaengine framework or legacy
> > driver(built-in dma programming).
> >
> > At this point it is recommended to use dmaengine framework but it's
> > optional. Once the solution is stable will make dmas as required
> > properties.
> >
> > Signed-off-by: Radhey Shyam Pandey
> <radhey.shyam.pandey@xilinx.com>
> > Signed-off-by: Sarath Babu Naidu Gaddam
> > <sarath.babu.naidu.gaddam@amd.com>
> > ---
> > These changes are on top of below txt to yaml conversion discussion
> > https://lore.kernel.org/all/20230308061223.1358637-1-sarath.babu.naidu.gaddam@amd.com/#Z2e.:20230308061223.1358637-1-sarath.babu.naidu.gaddam::40amd.com:1bindings:net:xlnx::2caxi-ethernet.yaml
> >
> > Changes in V3:
> > 1) Reverted reg and interrupts property to  support backward
> compatibility.
> > 2) Moved dmas and dma-names properties from Required properties.
> >
> > Changes in V2:
> > - None.
> > ---
> >  .../devicetree/bindings/net/xlnx,axi-ethernet.yaml   | 12 ++++++++++++
> >  1 file changed, 12 insertions(+)
> >
> > diff --git
> > a/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
> > b/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
> > index 80843c177029..9dfa1976e260 100644
> > --- a/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
> > +++ b/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
> > @@ -122,6 +122,16 @@ properties:
> >        modes, where "pcs-handle" should be used to point to the PCS/PMA
> PHY,
> >        and "phy-handle" should point to an external PHY if exists.
> >
> > +  dmas:
> > +    items:
> > +      - description: TX DMA Channel phandle and DMA request line number
> > +      - description: RX DMA Channel phandle and DMA request line number
> > +
> > +  dma-names:
> > +    items:
> > +      - const: tx_chan0
> 
> tx
> 
> > +      - const: rx_chan0
> 
> rx

We want to support more channels in the future. Currently we support
AXI DMA, which has only one TX and one RX channel. In the future we want
to extend this to the multichannel DMA (MCDMA), which has 16 TX and
16 RX channels. To uniquely identify each channel, we are using the chan
suffix. Depending on the use case, the AXI ethernet driver can request
any combination of multichannel DMA channels:

dma-names = "tx_chan0", "tx_chan1", "rx_chan0", "rx_chan1";

Will update the commit message with the same.
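
For example, with the current single-channel AXI DMA the ethernet node
would carry something like this (the phandle name is only illustrative):

	dmas = <&axi_dma_0 0>, <&axi_dma_0 1>;
	dma-names = "tx_chan0", "rx_chan0";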
 
> Why doing these differently than all other devices?

To make the axi ethernet driver generic enough to be hooked to any
compliant DMA IP, i.e. AXIDMA or AXIMCDMA, without any modification. The
inspiration behind this dmaengine adoption is to reuse the in-kernel
xilinx dma engine driver and remove the redundant dma programming
sequence from the ethernet driver.

Above information is explained in the cover letter
https://lore.kernel.org/all/20230510085031.1116327-1-sarath.babu.naidu.gaddam@amd.com/

Thanks,
Sarath


^ permalink raw reply	[flat|nested] 14+ messages in thread

* RE: [EXT] [PATCH net-next V3 3/3] net: axienet: Introduce dmaengine support
  2023-05-10 13:44   ` [EXT] " Subbaraya Sundeep Bhatta
@ 2023-05-12  4:36     ` Gaddam, Sarath Babu Naidu
  0 siblings, 0 replies; 14+ messages in thread
From: Gaddam, Sarath Babu Naidu @ 2023-05-12  4:36 UTC (permalink / raw)
  To: Subbaraya Sundeep Bhatta, davem, edumazet, kuba, pabeni, robh+dt,
	krzysztof.kozlowski+dt
  Cc: linux, Simek, Michal, Pandey, Radhey Shyam, netdev, devicetree,
	linux-arm-kernel, linux-kernel, Sarangi, Anirudha, Katakam,
	Harini, git (AMD-Xilinx)



> -----Original Message-----
> From: Subbaraya Sundeep Bhatta <sbhatta@marvell.com>
> Sent: Wednesday, May 10, 2023 7:14 PM
> To: Gaddam, Sarath Babu Naidu
> <sarath.babu.naidu.gaddam@amd.com>; davem@davemloft.net;
> edumazet@google.com; kuba@kernel.org; pabeni@redhat.com;
> robh+dt@kernel.org; krzysztof.kozlowski+dt@linaro.org
> Cc: linux@armlinux.org.uk; Simek, Michal <michal.simek@amd.com>;
> Pandey, Radhey Shyam <radhey.shyam.pandey@amd.com>;
> netdev@vger.kernel.org; devicetree@vger.kernel.org; linux-arm-
> kernel@lists.infradead.org; linux-kernel@vger.kernel.org; Sarangi,
> Anirudha <anirudha.sarangi@amd.com>; Katakam, Harini
> <harini.katakam@amd.com>; git (AMD-Xilinx) <git@amd.com>
> Subject: RE: [EXT] [PATCH net-next V3 3/3] net: axienet: Introduce
> dmaengine support
> 
> Hi,
> 
> >-----Original Message-----
> >From: Sarath Babu Naidu Gaddam
> <sarath.babu.naidu.gaddam@amd.com>
> >Sent: Wednesday, May 10, 2023 2:21 PM
> >To: davem@davemloft.net; edumazet@google.com; kuba@kernel.org;
> >pabeni@redhat.com; robh+dt@kernel.org;
> >krzysztof.kozlowski+dt@linaro.org
> >Cc: linux@armlinux.org.uk; michal.simek@amd.com;
> >radhey.shyam.pandey@amd.com; netdev@vger.kernel.org;
> >devicetree@vger.kernel.org; linux-arm-kernel@lists.infradead.org;
> >linux- kernel@vger.kernel.org; anirudha.sarangi@amd.com;
> >harini.katakam@amd.com; sarath.babu.naidu.gaddam@amd.com;
> git@amd.com
> >Subject: [PATCH net-next V3 3/3] net: axienet: Introduce dmaengine
> >support
> >
> >Add dmaengine framework to communicate with the xilinx DMAengine
> >driver(AXIDMA).
> >
> >Axi ethernet driver uses separate channels for transmit and receive.
> >Add support for these channels to handle TX and RX with skb and
> >appropriate callbacks. Also add axi ethernet core interrupt for
> >dmaengine framework support.
> >
> >The dmaengine framework was extended for metadata API support during
> >the axidma RFC[1] discussion. However it still needs further
> >enhancements to make it well suited for ethernet usecases. The ethernet
> >features i.e ethtool set/get of DMA IP properties, ndo_poll_controller,
> >trigger reset of DMA IP from ethernet are not supported (mentioned in
> >TODO) and it requires follow-up discussion and dma framework
> enhancement.
> >
> >[1]: https://lore.kernel.org/lkml/1522665546-10035-1-git-send-email-radheys@xilinx.com
> >
> >Signed-off-by: Radhey Shyam Pandey
> <radhey.shyam.pandey@xilinx.com>
> >Signed-off-by: Sarath Babu Naidu Gaddam
> ><sarath.babu.naidu.gaddam@amd.com>
> >---
> >Performance numbers(Mbps):
> >
> >              | TCP | UDP |
> >         -----------------
> >         | Tx | 920 | 800 |
> >         -----------------
> >         | Rx | 620 | 910 |
> >
> >Changes in V3:
> >1) New patch for dmaengine framework support.
> >---
> > drivers/net/ethernet/xilinx/xilinx_axienet.h  |   6 +
> > .../net/ethernet/xilinx/xilinx_axienet_main.c | 331 +++++++++++++++++-
> > 2 files changed, 335 insertions(+), 2 deletions(-)
> >
> >diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet.h
> >b/drivers/net/ethernet/xilinx/xilinx_axienet.h
> >index 10917d997d27..fbe00c5390d5 100644
> >--- a/drivers/net/ethernet/xilinx/xilinx_axienet.h
> >+++ b/drivers/net/ethernet/xilinx/xilinx_axienet.h
> >@@ -436,6 +436,9 @@ struct axidma_bd {
> >  * @coalesce_count_tx:	Store the irq coalesce on TX side.
> >  * @coalesce_usec_tx:	IRQ coalesce delay for TX
> >  * @has_dmas:	flag to check dmaengine framework usage.
> >+ * @tx_chan:	TX DMA channel.
> >+ * @rx_chan:	RX DMA channel.
> >+ * @skb_cache:	Custom skb slab allocator
> >  */
> > struct axienet_local {
> > 	struct net_device *ndev;
> >@@ -501,6 +504,9 @@ struct axienet_local {
> > 	u32 coalesce_count_tx;
> > 	u32 coalesce_usec_tx;
> > 	u8  has_dmas;
> >+	struct dma_chan *tx_chan;
> >+	struct dma_chan *rx_chan;
> >+	struct kmem_cache *skb_cache;
> > };
> >
> > /**
> >diff --git a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
> >b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
> >index 8678fc09245a..662c77ff0e99 100644
> >--- a/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
> >+++ b/drivers/net/ethernet/xilinx/xilinx_axienet_main.c
> >@@ -37,6 +37,9 @@
> > #include <linux/phy.h>
> > #include <linux/mii.h>
> > #include <linux/ethtool.h>
> >+#include <linux/dmaengine.h>
> >+#include <linux/dma-mapping.h>
> >+#include <linux/dma/xilinx_dma.h>
> >
> > #include "xilinx_axienet.h"
> >
> >@@ -46,6 +49,9 @@
> > #define TX_BD_NUM_MIN			(MAX_SKB_FRAGS + 1)
> > #define TX_BD_NUM_MAX			4096
> > #define RX_BD_NUM_MAX			4096
> >+#define DMA_NUM_APP_WORDS		5
> >+#define LEN_APP				4
> >+#define RX_BUF_NUM_DEFAULT		128
> >
> > /* Must be shorter than length of ethtool_drvinfo.driver field to fit */
> > #define DRIVER_NAME		"xaxienet"
> >@@ -56,6 +62,16 @@
> >
> > #define AXIENET_USE_DMA(lp) ((lp)->has_dmas)
> >
> >+struct axi_skbuff {
> >+	struct scatterlist sgl[MAX_SKB_FRAGS + 1];
> >+	struct dma_async_tx_descriptor *desc;
> >+	dma_addr_t dma_address;
> >+	struct sk_buff *skb;
> >+	int sg_len;
> >+} __packed;
> >+
> >+static int axienet_rx_submit_desc(struct net_device *ndev);
> >+
> > /* Match table for of_platform binding */
> > static const struct of_device_id axienet_of_match[] = {
> > 	{ .compatible = "xlnx,axi-ethernet-1.00.a", },
> >@@ -728,6 +744,108 @@ static inline int axienet_check_tx_bd_space(struct axienet_local *lp,
> > 	return 0;
> > }
> >
> >+/**
> >+ * axienet_dma_tx_cb - DMA engine callback for TX channel.
> >+ * @data:       Pointer to the axi_skbuff structure
> >+ * @result:     error reporting through dmaengine_result.
> >+ * This function is called by dmaengine driver for TX channel to notify
> >+ * that the transmit is done.
> >+ */
> >+static void axienet_dma_tx_cb(void *data, const struct dmaengine_result *result)
> >+{
> >+	struct axi_skbuff *axi_skb = data;
> >+
> >+	struct net_device *netdev = axi_skb->skb->dev;
> >+	struct axienet_local *lp = netdev_priv(netdev);
> >+
> >+	u64_stats_update_begin(&lp->tx_stat_sync);
> >+	u64_stats_add(&lp->tx_bytes, axi_skb->skb->len);
> >+	u64_stats_add(&lp->tx_packets, 1);
> >+	u64_stats_update_end(&lp->tx_stat_sync);
> >+
> >+	dma_unmap_sg(lp->dev, axi_skb->sgl, axi_skb->sg_len, DMA_MEM_TO_DEV);
> >+	dev_kfree_skb_any(axi_skb->skb);
> >+	kmem_cache_free(lp->skb_cache, axi_skb);
> >+}
> >+
> >+/**
> >+ * axienet_start_xmit_dmaengine - Starts the transmission.
> >+ * @skb:        sk_buff pointer that contains data to be Txed.
> >+ * @ndev:       Pointer to net_device structure.
> >+ *
> >+ * Return: NETDEV_TX_OK, on success
> >+ *          NETDEV_TX_BUSY, if any memory failure or SG error.
> >+ *
> >+ * This function is invoked from xmit to initiate transmission. The
> >+ * function sets the skbs , call back API, SG etc.
> >+ * Additionally if checksum offloading is supported,
> >+ * it populates AXI Stream Control fields with appropriate values.
> >+ */
> >+static netdev_tx_t
> >+axienet_start_xmit_dmaengine(struct sk_buff *skb, struct net_device *ndev)
> >+{
> >+	struct dma_async_tx_descriptor *dma_tx_desc = NULL;
> >+	struct axienet_local *lp = netdev_priv(ndev);
> >+	u32 app[DMA_NUM_APP_WORDS] = {0};
> >+	struct axi_skbuff *axi_skb;
> >+	u32 csum_start_off;
> >+	u32 csum_index_off;
> >+	int sg_len;
> >+	int ret;
> >+
> >+	sg_len = skb_shinfo(skb)->nr_frags + 1;
> >+	axi_skb = kmem_cache_zalloc(lp->skb_cache, GFP_KERNEL);
> >+	if (!axi_skb)
> >+		return NETDEV_TX_BUSY;
> >+
> >+	sg_init_table(axi_skb->sgl, sg_len);
> >+	ret = skb_to_sgvec(skb, axi_skb->sgl, 0, skb->len);
> >+	if (unlikely(ret < 0))
> >+		goto xmit_error_skb_sgvec;
> >+
> >+	ret = dma_map_sg(lp->dev, axi_skb->sgl, sg_len, DMA_TO_DEVICE);
> >+	if (ret == 0)
> >+		goto xmit_error_skb_sgvec;
> >+
> >+	/*Fill up app fields for checksum */
> >+	if (skb->ip_summed == CHECKSUM_PARTIAL) {
> >+		if (lp->features & XAE_FEATURE_FULL_TX_CSUM) {
> >+			/* Tx Full Checksum Offload Enabled */
> >+			app[0] |= 2;
> >+		} else if (lp->features & XAE_FEATURE_PARTIAL_RX_CSUM) {
> >+			csum_start_off = skb_transport_offset(skb);
> >+			csum_index_off = csum_start_off + skb->csum_offset;
> >+			/* Tx Partial Checksum Offload Enabled */
> >+			app[0] |= 1;
> >+			app[1] = (csum_start_off << 16) | csum_index_off;
> >+		}
> >+	} else if (skb->ip_summed == CHECKSUM_UNNECESSARY) {
> >+		app[0] |= 2; /* Tx Full Checksum Offload Enabled */
> >+	}
> >+
> >+	dma_tx_desc = lp->tx_chan->device->device_prep_slave_sg(lp->tx_chan, axi_skb->sgl,
> >+			sg_len, DMA_MEM_TO_DEV,
> >+			DMA_PREP_INTERRUPT, (void *)app);
> >+
> >+	if (!dma_tx_desc)
> >+		goto xmit_error_prep;
> >+
> >+	axi_skb->skb = skb;
> >+	axi_skb->sg_len = sg_len;
> >+	dma_tx_desc->callback_param =  axi_skb;
> >+	dma_tx_desc->callback_result = axienet_dma_tx_cb;
> >+	dmaengine_submit(dma_tx_desc);
> >+	dma_async_issue_pending(lp->tx_chan);
> >+
> >+	return NETDEV_TX_OK;
> >+
> >+xmit_error_prep:
> >+	dma_unmap_sg(lp->dev, axi_skb->sgl, sg_len, DMA_TO_DEVICE);
> >+xmit_error_skb_sgvec:
> >+	kmem_cache_free(lp->skb_cache, axi_skb);
> >+	return NETDEV_TX_BUSY;
> >+}
> >+
> > /**
> >  * axienet_tx_poll - Invoked once a transmit is completed by the
> >  * Axi DMA Tx channel.
> >@@ -912,7 +1030,42 @@ axienet_start_xmit(struct sk_buff *skb, struct net_device *ndev)
> > 	if (!AXIENET_USE_DMA(lp))
> > 		return axienet_start_xmit_legacy(skb, ndev);
> > 	else
> >-		return NETDEV_TX_BUSY;
> >+		return axienet_start_xmit_dmaengine(skb, ndev);
> 
> You can avoid this if else by
>  if (AXIENET_USE_DMA(lp))
>          return axienet_start_xmit_dmaengine(skb, ndev);
> 
> and no need of defining axienet_start_xmit_legacy function in patch 2/3.
> _legacy may not be correct since you support both in-built dma or with
> dma engine just by turning on/off dt properties. Also does this driver roll
> back to using in-built dma if DMA_ENGINE is not compiled in?

We wanted to keep it as a separate wrapper to make future maintenance or
deprecation easier. It is only a preference and either approach is OK.

There is no fallback to the built-in DMA. The intention is to make
dmaengine the preferred approach and eventually deprecate the built-in
DMA. If DMA_ENGINE is not present, the DMA channel request fails through
the usual exit path.
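
In that case dma_request_chan() returns an error and probe bails out
through the existing cleanup, i.e. (sketch of the probe-time check already
in this patch):

	tx_chan = dma_request_chan(lp->dev, "tx_chan0");
	if (IS_ERR(tx_chan)) {
		/* e.g. -ENODEV when no dmaengine provider is present */
		ret = PTR_ERR(tx_chan);
		goto cleanup_clk;
	}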

Will address the remaining review comments on 2/3 and 3/3 in the next
version.

Thanks,
Sarath



^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH net-next V3 1/3] dt-bindings: net: xilinx_axienet: Introduce dmaengine binding support
  2023-05-11 11:32     ` Gaddam, Sarath Babu Naidu
@ 2023-05-12  6:27       ` Krzysztof Kozlowski
  2023-05-17 12:06         ` Gaddam, Sarath Babu Naidu
  0 siblings, 1 reply; 14+ messages in thread
From: Krzysztof Kozlowski @ 2023-05-12  6:27 UTC (permalink / raw)
  To: Gaddam, Sarath Babu Naidu, davem, edumazet, kuba, pabeni,
	robh+dt, krzysztof.kozlowski+dt
  Cc: linux, Simek, Michal, Pandey, Radhey Shyam, netdev, devicetree,
	linux-arm-kernel, linux-kernel, Sarangi, Anirudha, Katakam,
	Harini, git (AMD-Xilinx)

On 11/05/2023 13:32, Gaddam, Sarath Babu Naidu wrote:
> 
> 
>> -----Original Message-----
>> From: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
>> Sent: Wednesday, May 10, 2023 3:39 PM
>> To: Gaddam, Sarath Babu Naidu
>> <sarath.babu.naidu.gaddam@amd.com>; davem@davemloft.net;
>> edumazet@google.com; kuba@kernel.org; pabeni@redhat.com;
>> robh+dt@kernel.org; krzysztof.kozlowski+dt@linaro.org
>> Cc: linux@armlinux.org.uk; Simek, Michal <michal.simek@amd.com>;
>> Pandey, Radhey Shyam <radhey.shyam.pandey@amd.com>;
>> netdev@vger.kernel.org; devicetree@vger.kernel.org; linux-arm-
>> kernel@lists.infradead.org; linux-kernel@vger.kernel.org; Sarangi,
>> Anirudha <anirudha.sarangi@amd.com>; Katakam, Harini
>> <harini.katakam@amd.com>; git (AMD-Xilinx) <git@amd.com>
>> Subject: Re: [PATCH net-next V3 1/3] dt-bindings: net: xilinx_axienet:
>> Introduce dmaengine binding support
>>
>> On 10/05/2023 10:50, Sarath Babu Naidu Gaddam wrote:
>>> From: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
>>>
>>> The axiethernet driver will use dmaengine framework to communicate
>>> with dma controller IP instead of built-in dma programming sequence.
>>
>> Subject: drop second/last, redundant "bindings". The "dt-bindings"
>> prefix is already stating that these are bindings.
>>
>> Actually also drop "dmaenging" as it is Linuxism. Focus on hardware, e.g.
>> "Add DMA support".
>>
>>>
>>> To request dma transmit and receive channels the axiethernet driver
>>> uses generic dmas, dma-names properties.
>>>
>>> Also to support the backward compatibility, use "dmas" property to
>>> identify as it should use dmaengine framework or legacy
>>> driver(built-in dma programming).
>>>
>>> At this point it is recommended to use dmaengine framework but it's
>>> optional. Once the solution is stable will make dmas as required
>>> properties.
>>>
>>> Signed-off-by: Radhey Shyam Pandey
>> <radhey.shyam.pandey@xilinx.com>
>>> Signed-off-by: Sarath Babu Naidu Gaddam
>>> <sarath.babu.naidu.gaddam@amd.com>
>>> ---
>>> These changes are on top of below txt to yaml conversion discussion
>>> https://lore.kernel.org/all/20230308061223.1358637-1-sarath.babu.naidu.gaddam@amd.com/#Z2e.:20230308061223.1358637-1-sarath.babu.naidu.gaddam::40amd.com:1bindings:net:xlnx::2caxi-ethernet.yaml
>>>
>>> Changes in V3:
>>> 1) Reverted reg and interrupts property to  support backward
>> compatibility.
>>> 2) Moved dmas and dma-names properties from Required properties.
>>>
>>> Changes in V2:
>>> - None.
>>> ---
>>>  .../devicetree/bindings/net/xlnx,axi-ethernet.yaml   | 12 ++++++++++++
>>>  1 file changed, 12 insertions(+)
>>>
>>> diff --git
>>> a/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
>>> b/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
>>> index 80843c177029..9dfa1976e260 100644
>>> --- a/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
>>> +++ b/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
>>> @@ -122,6 +122,16 @@ properties:
>>>        modes, where "pcs-handle" should be used to point to the PCS/PMA
>> PHY,
>>>        and "phy-handle" should point to an external PHY if exists.
>>>
>>> +  dmas:
>>> +    items:
>>> +      - description: TX DMA Channel phandle and DMA request line number
>>> +      - description: RX DMA Channel phandle and DMA request line number
>>> +
>>> +  dma-names:
>>> +    items:
>>> +      - const: tx_chan0
>>
>> tx
>>
>>> +      - const: rx_chan0
>>
>> rx
> 
> We want to support more channels in the future, currently we support
> AXI DMA which has only one tx and rx channel. In future we want to 
> extend support for multichannel DMA (MCDMA) which has 16 TX and
> 16 RX channels. To uniquely identify each channel, we are using chan
> suffix. Depending on the usecase AXI ethernet driver can request any
> combination of multichannel DMA  channels.
> 
> dma-names = tx_chan0, tx_chan1, rx_chan0, rx_chan1;
> 
> will update the commit message with same.

I expect the binding to be complete, otherwise you get comments like
this. Add missing parts to the binding and resend.

Best regards,
Krzysztof


^ permalink raw reply	[flat|nested] 14+ messages in thread

* RE: [PATCH net-next V3 1/3] dt-bindings: net: xilinx_axienet: Introduce dmaengine binding support
  2023-05-12  6:27       ` Krzysztof Kozlowski
@ 2023-05-17 12:06         ` Gaddam, Sarath Babu Naidu
  2023-05-17 14:48           ` Krzysztof Kozlowski
  0 siblings, 1 reply; 14+ messages in thread
From: Gaddam, Sarath Babu Naidu @ 2023-05-17 12:06 UTC (permalink / raw)
  To: Krzysztof Kozlowski, davem, edumazet, kuba, pabeni, robh+dt,
	krzysztof.kozlowski+dt
  Cc: linux, Simek, Michal, Pandey, Radhey Shyam, netdev, devicetree,
	linux-arm-kernel, linux-kernel, Sarangi, Anirudha, Katakam,
	Harini, git (AMD-Xilinx)



> -----Original Message-----
> From: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
> Sent: Friday, May 12, 2023 11:57 AM
> To: Gaddam, Sarath Babu Naidu
> <sarath.babu.naidu.gaddam@amd.com>; davem@davemloft.net;
> edumazet@google.com; kuba@kernel.org; pabeni@redhat.com;
> robh+dt@kernel.org; krzysztof.kozlowski+dt@linaro.org
> Cc: linux@armlinux.org.uk; Simek, Michal <michal.simek@amd.com>;
> Pandey, Radhey Shyam <radhey.shyam.pandey@amd.com>;
> netdev@vger.kernel.org; devicetree@vger.kernel.org; linux-arm-
> kernel@lists.infradead.org; linux-kernel@vger.kernel.org; Sarangi,
> Anirudha <anirudha.sarangi@amd.com>; Katakam, Harini
> <harini.katakam@amd.com>; git (AMD-Xilinx) <git@amd.com>
> Subject: Re: [PATCH net-next V3 1/3] dt-bindings: net: xilinx_axienet:
> Introduce dmaengine binding support
> 
> On 11/05/2023 13:32, Gaddam, Sarath Babu Naidu wrote:
> >
> >
> >> -----Original Message-----
> >> From: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
> >> Sent: Wednesday, May 10, 2023 3:39 PM
> >> To: Gaddam, Sarath Babu Naidu
> >> <sarath.babu.naidu.gaddam@amd.com>; davem@davemloft.net;
> >> edumazet@google.com; kuba@kernel.org; pabeni@redhat.com;
> >> robh+dt@kernel.org; krzysztof.kozlowski+dt@linaro.org
> >> Cc: linux@armlinux.org.uk; Simek, Michal <michal.simek@amd.com>;
> >> Pandey, Radhey Shyam <radhey.shyam.pandey@amd.com>;
> >> netdev@vger.kernel.org; devicetree@vger.kernel.org; linux-arm-
> >> kernel@lists.infradead.org; linux-kernel@vger.kernel.org; Sarangi,
> >> Anirudha <anirudha.sarangi@amd.com>; Katakam, Harini
> >> <harini.katakam@amd.com>; git (AMD-Xilinx) <git@amd.com>
> >> Subject: Re: [PATCH net-next V3 1/3] dt-bindings: net: xilinx_axienet:
> >> Introduce dmaengine binding support
> >>
> >> On 10/05/2023 10:50, Sarath Babu Naidu Gaddam wrote:
> >>> From: Radhey Shyam Pandey <radhey.shyam.pandey@xilinx.com>
> >>>
> >>> The axiethernet driver will use dmaengine framework to communicate
> >>> with dma controller IP instead of built-in dma programming sequence.
> >>
> >> Subject: drop second/last, redundant "bindings". The "dt-bindings"
> >> prefix is already stating that these are bindings.
> >>
> >> Actually also drop "dmaenging" as it is Linuxism. Focus on hardware,
> e.g.
> >> "Add DMA support".
> >>
> >>>
> >>> To request dma transmit and receive channels the axiethernet driver
> >>> uses generic dmas, dma-names properties.
> >>>
> >>> Also to support the backward compatibility, use "dmas" property to
> >>> identify as it should use dmaengine framework or legacy
> >>> driver(built-in dma programming).
> >>>
> >>> At this point it is recommended to use dmaengine framework but it's
> >>> optional. Once the solution is stable will make dmas as required
> >>> properties.
> >>>
> >>> Signed-off-by: Radhey Shyam Pandey
> >> <radhey.shyam.pandey@xilinx.com>
> >>> Signed-off-by: Sarath Babu Naidu Gaddam
> >>> <sarath.babu.naidu.gaddam@amd.com>
> >>> ---
> >>> These changes are on top of below txt to yaml conversion discussion
> >>> https://lore.kernel.org/all/20230308061223.1358637-1-sarath.babu.naidu.gaddam@amd.com/#Z2e.:20230308061223.1358637-1-sarath.babu.naidu.gaddam::40amd.com:1bindings:net:xlnx::2caxi-ethernet.yaml
> >>>
> >>> Changes in V3:
> >>> 1) Reverted reg and interrupts property to  support backward
> >> compatibility.
> >>> 2) Moved dmas and dma-names properties from Required properties.
> >>>
> >>> Changes in V2:
> >>> - None.
> >>> ---
> >>>  .../devicetree/bindings/net/xlnx,axi-ethernet.yaml   | 12 ++++++++++++
> >>>  1 file changed, 12 insertions(+)
> >>>
> >>> diff --git
> >>> a/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
> >>> b/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
> >>> index 80843c177029..9dfa1976e260 100644
> >>> --- a/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
> >>> +++ b/Documentation/devicetree/bindings/net/xlnx,axi-ethernet.yaml
> >>> @@ -122,6 +122,16 @@ properties:
> >>>        modes, where "pcs-handle" should be used to point to the
> >>> PCS/PMA
> >> PHY,
> >>>        and "phy-handle" should point to an external PHY if exists.
> >>>
> >>> +  dmas:
> >>> +    items:
> >>> +      - description: TX DMA Channel phandle and DMA request line number
> >>> +      - description: RX DMA Channel phandle and DMA request line number
> >>> +
> >>> +  dma-names:
> >>> +    items:
> >>> +      - const: tx_chan0
> >>
> >> tx
> >>
> >>> +      - const: rx_chan0
> >>
> >> rx
> >
> > We want to support more channels in the future, currently we support
> > AXI DMA which has only one tx and rx channel. In future we want to
> > extend support for multichannel DMA (MCDMA) which has 16 TX and
> > 16 RX channels. To uniquely identify each channel, we are using chan
> > suffix. Depending on the usecase AXI ethernet driver can request any
> > combination of multichannel DMA  channels.
> >
> > dma-names = tx_chan0, tx_chan1, rx_chan0, rx_chan1;
> >
> > will update the commit message with same.
> 
> I expect the binding to be complete, otherwise you get comments like this.
> Add missing parts to the binding and resend.

The binding is complete for the currently supported DMA (single channel).
We will extend it when we add MCDMA.

We will describe the reason for using the chan suffix in the description,
as below.

   dma-names:
     items:
       - const: tx_chan0
       - const: rx_chan0
     description: |
       The chan suffix is used to identify each channel uniquely. The
       current DMA has only one TX and one RX channel, but this will be
       extended to support the multichannel DMA (MCDMA), which has
       16 TX and 16 RX channels. Depending on the use case, the AXI
       ethernet driver can request any combination of multichannel
       DMA channels.
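
If it is preferred to make the binding complete now, the names could
instead be constrained with a pattern (a sketch only, assuming the
16-channel MCDMA naming above):

   dmas:
     minItems: 2
     maxItems: 32

   dma-names:
     minItems: 2
     maxItems: 32
     items:
       pattern: "^[tr]x_chan([0-9]|1[0-5])$"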

Thanks,
Sarath




^ permalink raw reply	[flat|nested] 14+ messages in thread

* Re: [PATCH net-next V3 1/3] dt-bindings: net: xilinx_axienet: Introduce dmaengine binding support
  2023-05-17 12:06         ` Gaddam, Sarath Babu Naidu
@ 2023-05-17 14:48           ` Krzysztof Kozlowski
  2023-05-18  8:51             ` Gaddam, Sarath Babu Naidu
  0 siblings, 1 reply; 14+ messages in thread
From: Krzysztof Kozlowski @ 2023-05-17 14:48 UTC (permalink / raw)
  To: Gaddam, Sarath Babu Naidu, davem, edumazet, kuba, pabeni,
	robh+dt, krzysztof.kozlowski+dt
  Cc: linux, Simek, Michal, Pandey, Radhey Shyam, netdev, devicetree,
	linux-arm-kernel, linux-kernel, Sarangi, Anirudha, Katakam,
	Harini, git (AMD-Xilinx)

On 17/05/2023 14:06, Gaddam, Sarath Babu Naidu wrote:
>>>>> +  dma-names:
>>>>> +    items:
>>>>> +      - const: tx_chan0
>>>>
>>>> tx
>>>>
>>>>> +      - const: rx_chan0
>>>>
>>>> rx
>>>
>>> We want to support more channels in the future, currently we support
>>> AXI DMA which has only one tx and rx channel. In future we want to
>>> extend support for multichannel DMA (MCDMA) which has 16 TX and
>>> 16 RX channels. To uniquely identify each channel, we are using chan
>>> suffix. Depending on the usecase AXI ethernet driver can request any
>>> combination of multichannel DMA  channels.
>>>
>>> dma-names = tx_chan0, tx_chan1, rx_chan0, rx_chan1;
>>>
>>> will update the commit message with same.
>>
>> I expect the binding to be complete, otherwise you get comments like this.
>> Add missing parts to the binding and resend.
> 
> Binding is complete for current supported DMA (single channel).  We will
> extend when we add MCDMA.

What does it mean, "current supported DMA"? By driver, or by hardware? If
the former, then how does it matter for the bindings?

If the latter, then your hardware is going to change? Then you will have a
different set of compatibles and can use different names.

> 
> We will describe the reason for using channel suffix in the description as 
> below. 
> 
>    dma-names:
>       items:
>         - const: tx_chan0
>         - const: rx_chan0
>      description: |
>            Chan suffix is used for identifying each channel uniquely.
>            Current DMA has only one Tx and Rx channel but it will be 
>            extended to support for multichannel DMA (MCDMA) which
>            has 16 TX and 16 RX channels. Depending on the usecase AXI
>            ethernet driver can request any combination of multichannel
>            DMA  channels.

No, because I don't understand what "will be extended" means. Bindings
should be complete. If they are going to be extended, it means they are
not complete. If they cannot be complete, which happens, please provide
a reason. There has been no reason so far, except your claim that it is
complete.

Best regards,
Krzysztof


^ permalink raw reply	[flat|nested] 14+ messages in thread

* RE: [PATCH net-next V3 1/3] dt-bindings: net: xilinx_axienet: Introduce dmaengine binding support
  2023-05-17 14:48           ` Krzysztof Kozlowski
@ 2023-05-18  8:51             ` Gaddam, Sarath Babu Naidu
  0 siblings, 0 replies; 14+ messages in thread
From: Gaddam, Sarath Babu Naidu @ 2023-05-18  8:51 UTC (permalink / raw)
  To: Krzysztof Kozlowski, davem, edumazet, kuba, pabeni, robh+dt,
	krzysztof.kozlowski+dt
  Cc: linux, Simek, Michal, Pandey, Radhey Shyam, netdev, devicetree,
	linux-arm-kernel, linux-kernel, Sarangi, Anirudha, Katakam,
	Harini, git (AMD-Xilinx)



> -----Original Message-----
> From: Krzysztof Kozlowski <krzysztof.kozlowski@linaro.org>
> Sent: Wednesday, May 17, 2023 8:19 PM
> To: Gaddam, Sarath Babu Naidu
> <sarath.babu.naidu.gaddam@amd.com>; davem@davemloft.net;
> edumazet@google.com; kuba@kernel.org; pabeni@redhat.com;
> robh+dt@kernel.org; krzysztof.kozlowski+dt@linaro.org
> Cc: linux@armlinux.org.uk; Simek, Michal <michal.simek@amd.com>;
> Pandey, Radhey Shyam <radhey.shyam.pandey@amd.com>;
> netdev@vger.kernel.org; devicetree@vger.kernel.org; linux-arm-
> kernel@lists.infradead.org; linux-kernel@vger.kernel.org; Sarangi,
> Anirudha <anirudha.sarangi@amd.com>; Katakam, Harini
> <harini.katakam@amd.com>; git (AMD-Xilinx) <git@amd.com>
> Subject: Re: [PATCH net-next V3 1/3] dt-bindings: net: xilinx_axienet:
> Introduce dmaengine binding support
> 
> On 17/05/2023 14:06, Gaddam, Sarath Babu Naidu wrote:
> >>>>> +  dma-names:
> >>>>> +    items:
> >>>>> +      - const: tx_chan0
> >>>>
> >>>> tx
> >>>>
> >>>>> +      - const: rx_chan0
> >>>>
> >>>> rx
> >>>
> >>> We want to support more channels in the future. Currently we
> >>> support AXI DMA, which has only one TX and one RX channel. In the
> >>> future we want to extend support to multichannel DMA (MCDMA),
> >>> which has 16 TX and 16 RX channels. To uniquely identify each
> >>> channel, we are using a chan suffix. Depending on the use case,
> >>> the AXI Ethernet driver can request any combination of MCDMA
> >>> channels.
> >>>
> >>> dma-names = "tx_chan0", "tx_chan1", "rx_chan0", "rx_chan1";
> >>>
> >>> We will update the commit message accordingly.
> >>
> >> I expect the binding to be complete; otherwise you get comments
> >> like this.
> >> Add missing parts to the binding and resend.
> >
> > The binding is complete for the currently supported DMA (single
> > channel). We will extend it when we add MCDMA.
> 
> What does "currently supported DMA" mean? Supported by the driver, or
> by the hardware? If the former, then how does it matter for the
> bindings?
> 
> If the latter, then is your hardware going to change? Then you will
> have a different set of compatibles and can then use different names.
> 
> >
> > We will describe the reason for using the channel suffix in the
> > description, as below.
> >
> >    dma-names:
> >      items:
> >        - const: tx_chan0
> >        - const: rx_chan0
> >      description: |
> >        The chan suffix identifies each channel uniquely. The current
> >        DMA has only one TX and one RX channel, but the binding will
> >        be extended to support multichannel DMA (MCDMA), which has 16
> >        TX and 16 RX channels. Depending on the use case, the AXI
> >        Ethernet driver can request any combination of MCDMA channels.
> 
> No, because I don't understand what "will be extended" means. Bindings
> should be complete. If they are going to be extended, it means they
> are not complete. If they cannot be complete, which happens, please
> provide a reason. So far there has been no reason, only your claim
> that it is complete.

We will re-spin the series with complete bindings, including MCDMA
support.

Thanks,
Sarath


^ permalink raw reply	[flat|nested] 14+ messages in thread
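
One possible shape for the promised "complete bindings, including MCDMA
support", sketched in dt-schema terms; the pattern and limits below are
assumptions for illustration, not the respun binding itself:

  dmas:
    minItems: 2
    maxItems: 32

  dma-names:
    minItems: 2
    maxItems: 32
    items:
      # Accepts tx_chan0..tx_chan15 and rx_chan0..rx_chan15.
      pattern: "^[tr]x_chan([0-9]|1[0-5])$"
    description:
      TX and RX DMA channel names, one entry per dmas phandle; any
      combination of MCDMA channels may be requested.

A pattern keeps the schema complete for the 16-channel MCDMA case
without enumerating all thirty-two constants.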

end of thread, other threads:[~2023-05-18  8:51 UTC | newest]

Thread overview: 14+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-05-10  8:50 [PATCH net-next V3 0/3] net: axienet: Introduce dmaengine Sarath Babu Naidu Gaddam
2023-05-10  8:50 ` [PATCH net-next V3 1/3] dt-bindings: net: xilinx_axienet: Introduce dmaengine binding support Sarath Babu Naidu Gaddam
2023-05-10 10:08   ` Krzysztof Kozlowski
2023-05-11 11:32     ` Gaddam, Sarath Babu Naidu
2023-05-12  6:27       ` Krzysztof Kozlowski
2023-05-17 12:06         ` Gaddam, Sarath Babu Naidu
2023-05-17 14:48           ` Krzysztof Kozlowski
2023-05-18  8:51             ` Gaddam, Sarath Babu Naidu
2023-05-10  8:50 ` [PATCH net-next V3 2/3] net: axienet: Preparatory changes for dmaengine support Sarath Babu Naidu Gaddam
2023-05-10 13:19   ` [EXT] " Subbaraya Sundeep Bhatta
2023-05-10  8:50 ` [PATCH net-next V3 3/3] net: axienet: Introduce " Sarath Babu Naidu Gaddam
2023-05-10 10:10   ` Krzysztof Kozlowski
2023-05-10 13:44   ` [EXT] " Subbaraya Sundeep Bhatta
2023-05-12  4:36     ` Gaddam, Sarath Babu Naidu
