* [PATCH v2 00/11] spi: dw-dma: Add max SG entries burst capability support
From: Serge Semin @ 2020-09-20 11:23 UTC (permalink / raw)
  To: Mark Brown
  Cc: Serge Semin, Serge Semin, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi,
	Andy Shevchenko, Andy Shevchenko, Feng Tang, Vinod Koul,
	linux-spi, linux-kernel

Mainly this series is about fixing a very nasty problem discovered in
the DW APB SSI driver working in tandem with a DW DMAC which doesn't
have multi-block capability support (a.k.a. accelerated SG-list
entries traversal/burst, automatic LLP entries reload, etc.).

The DW DMA IP-core and its DMA channels can be tuned at the synthesis
stage by setting a vast number of model parameters, some of which are
provided to create a more optimized/performant controller. In
particular, two of those parameters are DMAH_CHx_MULTI_BLK_EN and
DMAH_CHx_HC_LLP. If at least one of them is set to zero (false), then
the target DMA controller will be smaller and faster, but will lack
DMA block chaining support. In the kernel notation that means the
controller won't be able to automatically reload the next SG-list
entry when the current one is finished. Since the Linux kernel DMA
subsystem interface requires SG-list execution to be supported, the
DW DMA engine driver resubmits each SG-list entry one-by-one by
software means: the completion of each SG-list entry is signalled by
a DW DMA controller interrupt, whose handler schedules a tasklet,
which then re-charges the DW DMA channel with the next SG-list entry
if one is available. Such an implementation is a normal design and
works well for most of the DW DMAC client devices. But it may cause
problems for devices which send and receive data by means of internal
FIFOs and therefore require several DMA channels working
synchronously in order to prevent FIFO overflow.
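
Just to illustrate the mechanism, here is a much simplified sketch of
such a software-driven re-charge flow. It's illustrative only, not the
actual DW DMA engine driver code: the toy_chan type and all the toy_*
helpers below are made up.

static void toy_chan_block_done_irq(struct toy_chan *chan)
{
	/* One hardware block (one SG entry) has completed. Defer the
	 * channel re-charge to tasklet context.
	 */
	tasklet_schedule(&chan->tasklet);
}

static void toy_chan_recharge_tasklet(unsigned long data)
{
	struct toy_chan *chan = (struct toy_chan *)data;
	struct scatterlist *sg = sg_next(chan->cur_sg);

	if (!sg) {
		toy_chan_complete_desc(chan);	/* whole SG-list is done */
		return;
	}

	/*
	 * An arbitrary latency passes between the IRQ above and this
	 * point, while the SPI bus keeps running at full speed.
	 */
	chan->cur_sg = sg;
	toy_chan_program_block(chan, sg_dma_address(sg), sg_dma_len(sg));
	toy_chan_start(chan);
}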

A bright example of such a device is the DW APB SSI controller. It has
Tx and Rx FIFOs, which first need to be filled with data before the
data can be sent out or received. But those FIFOs are inter-dependent
because of the SPI nature and its shift-register design: each byte
pushed out of the Tx FIFO immediately causes a byte to be received
from the SPI bus into the Rx FIFO. That makes the strategy of working
with the SPI controller a bit tricky. The more data we push into the
Tx FIFO and the faster the SPI bus is, the more careful we have to be
in pulling data from the Rx FIFO. If software or the DMA engine misses
the moment when the Rx FIFO is full (for instance, due to being busy
with some other activity, being blocked because the system bus is
occupied with something else, or just being too slow to keep up with
the incoming data), then the Rx FIFO will overflow, which consequently
causes data loss. Needless to say, such a situation is fatal and
mustn't be tolerated for a bus like SPI.
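
In other words, sending is what fills the Rx FIFO up. A toy model of
that coupling (made-up code, not a driver excerpt):

struct toy_fifo {
	unsigned int level;	/* bytes currently queued */
	unsigned int depth;	/* FIFO capacity */
};

/*
 * Clocking nbytes out onto the bus simultaneously clocks nbytes off
 * the bus into the Rx FIFO, so the Rx side fills at exactly the Tx
 * rate and overflows unless it's drained at least that fast.
 */
static int toy_spi_clock_out(struct toy_fifo *rx, unsigned int nbytes)
{
	rx->level += nbytes;

	return rx->level > rx->depth ? -1 /* data lost */ : 0;
}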

In application to the DW APB SSI driver the problem above may happen
when the DW DMAC is synthesized with no multi-block capability support
and is used by the DW APB SSI driver for full-duplex transfers. The DW
APB SSI driver allocates two DW DMAC channels to perform the Tx and Rx
SPI transfers, initializes them, submits the Tx and Rx SG-lists for
execution and then waits until the DMA transfers are finished. The
issue happens when the Rx SG-list consists of more than one entry.
Since the multi-block capability is unsupported, the DW DMAC driver
will use the software-based SG-list traversal implementation, which by
design may cause non-deterministic latency during the Rx SG-list
entries re-charge. During the re-charge procedure the DMA Tx channel
will keep pushing data into the SPI Tx FIFO. The DW APB SSI controller
in its turn will keep pushing data out from the Tx FIFO to the SPI
bus, immediately filling the Rx FIFO in. So if the latency is big
enough, we'll eventually end up with an Rx FIFO overflow.

One possible solution to the problem denoted above is to feed the DMA
engine with the Tx and Rx SG-list entries one-by-one. This patch
series provides an implementation of that approach. First it moves the
requested DMA channels configuration into the dma_setup() callback,
which is where it should have been in the first place. Then the DMA
transfers submission is moved into the DMA-preparation methods so all
the setups are collected in a single place. After that the current
straightforward SG-list DMA transfer implementation is detached into a
dedicated method, dw_spi_dma_transfer_all(), since we are about to
introduce an alternative DMA-based SPI transfer approach. Since the
DMA transfers completion is now detected locally, we can simplify the
driver code a bit and move the DMAC register cleanup to a single place
in the dw_spi_dma_transfer_all() method. In order to re-use the
DMA-{wait, submit Tx, submit Rx} methods we have to alter their
prototypes so they accept SG-lists instead of the SPI-transfer
structure. Finally we introduce a new DMA-based SPI-transfer method,
which traverses the SG-list entries in a loop and synchronously
submits each of them to the Tx and Rx DMA channels, so the DMA engine
never needs to activate the SG-list entries re-charge implementation,
which is prone to errors in our case. That new method is utilized only
for full-duplex SPI transfers and only for DMA controllers which can't
handle all the Tx and Rx SG-list entries in a single DMA transaction
without software intervention.
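
Here is a small standalone demo (user-space code with made-up example
lengths, not driver code) of that splitting rule, where each
sub-transfer length is the minimum of the remaining Tx and Rx chunk
lengths:

#include <stdio.h>

int main(void)
{
	/* An assumed 9-byte transfer: Tx SG entries {3, 4, 2} and Rx
	 * SG entries {1, 4, 4} get split into sub-transfers of
	 * 1, 2, 2, 2 and 2 bytes.
	 */
	unsigned int tx[] = { 3, 4, 2 }, rx[] = { 1, 4, 4 };
	unsigned int i = 0, j = 0, tx_len = tx[0], rx_len = rx[0];

	do {
		unsigned int len = tx_len < rx_len ? tx_len : rx_len;

		printf("DMA transaction of %u byte(s)\n", len);

		tx_len -= len;
		rx_len -= len;
		if (!tx_len && ++i < 3)
			tx_len = tx[i];
		if (!rx_len && ++j < 3)
			rx_len = rx[j];
	} while (tx_len && rx_len);

	return 0;
}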

Link: https://lore.kernel.org/linux-spi/20200731075953.14416-1-Sergey.Semin@baikalelectronics.ru
Changelog v2:
- Replace negative conditional statements with the positive ones in the
  dw_spi_dma_submit_{tx,rx}() methods.
- Terminate the prepared DMA Tx-descriptors on submission errors.
- Split the patch "spi: dw-dma: Move DMA transfers submission to the
  channels prep methods" up into a series of simpler commits.

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
Cc: Alexey Malahov <Alexey.Malahov@baikalelectronics.ru>
Cc: Georgy Vlasov <Georgy.Vlasov@baikalelectronics.ru>
Cc: Ramil Zaripov <Ramil.Zaripov@baikalelectronics.ru>
Cc: Pavel Parkhomenko <Pavel.Parkhomenko@baikalelectronics.ru>
Cc: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Feng Tang <feng.tang@intel.com>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: linux-spi@vger.kernel.org
Cc: linux-kernel@vger.kernel.org

Serge Semin (11):
  spi: dw-dma: Set DMA Level registers on init
  spi: dw-dma: Fail DMA-based transfer if no Tx-buffer specified
  spi: dw-dma: Configure the DMA channels in dma_setup
  spi: dw-dma: Check rx_buf availability in the xfer method
  spi: dw-dma: Move DMA transfers submission to the channels prep
    methods
  spi: dw-dma: Check DMA Tx-desc submission status
  spi: dw-dma: Remove DMA Tx-desc passing around
  spi: dw-dma: Detach DMA transfer into a dedicated method
  spi: dw-dma: Move DMAC register cleanup to DMA transfer method
  spi: dw-dma: Pass exact data to the DMA submit and wait methods
  spi: dw-dma: Add one-by-one SG list entries transfer

 drivers/spi/spi-dw-dma.c | 316 ++++++++++++++++++++++++++++++---------
 drivers/spi/spi-dw.h     |   1 +
 2 files changed, 245 insertions(+), 72 deletions(-)

-- 
2.27.0


* [PATCH v2 01/11] spi: dw-dma: Set DMA Level registers on init
From: Serge Semin @ 2020-09-20 11:23 UTC (permalink / raw)
  To: Mark Brown
  Cc: Serge Semin, Serge Semin, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi,
	Andy Shevchenko, Andy Shevchenko, Feng Tang, Vinod Koul,
	linux-spi, linux-kernel

Indeed, the registers' content doesn't get cleared when the SPI
controller is disabled and re-enabled. The max burst lengths don't
change either, since the Rx and Tx DMA channels are requested at the
init stage and kept acquired until the device is removed. Obviously,
the SPI controller FIFO depth can't be changed either. Due to all of
that we can safely move the DMA Transmit and Receive data level
registers initialization to the SPI controller DMA init stage (when
the SPI controller is being probed) instead of doing it for each SPI
transfer when dma_setup is called. This shall speed up the DMA-based
SPI transfer initialization a bit, particularly if the APB bus is
relatively slow.

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
---
 drivers/spi/spi-dw-dma.c | 28 +++++++++++++---------------
 1 file changed, 13 insertions(+), 15 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index bb390ff67d1d..a7ff1e357f8b 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -49,6 +49,7 @@ static void dw_spi_dma_maxburst_init(struct dw_spi *dws)
 		max_burst = RX_BURST_LEVEL;
 
 	dws->rxburst = min(max_burst, def_burst);
+	dw_writel(dws, DW_SPI_DMARDLR, dws->rxburst - 1);
 
 	ret = dma_get_slave_caps(dws->txchan, &caps);
 	if (!ret && caps.max_burst)
@@ -56,7 +57,19 @@ static void dw_spi_dma_maxburst_init(struct dw_spi *dws)
 	else
 		max_burst = TX_BURST_LEVEL;
 
+	/*
+	 * Having a Rx DMA channel serviced with higher priority than a Tx DMA
+	 * channel might not be enough to provide a well balanced DMA-based
+	 * SPI transfer interface. There might still be moments when the Tx DMA
+	 * channel is occasionally handled faster than the Rx DMA channel.
+	 * That in its turn will eventually cause the SPI Rx FIFO overflow if
+	 * SPI bus speed is high enough to fill the SPI Rx FIFO in before it's
+	 * cleared by the Rx DMA channel. In order to fix the problem the Tx
+	 * DMA activity is intentionally slowed down by limiting the SPI Tx
+	 * FIFO depth with a value twice bigger than the Tx burst length.
+	 */
 	dws->txburst = min(max_burst, def_burst);
+	dw_writel(dws, DW_SPI_DMATDLR, dws->txburst);
 }
 
 static int dw_spi_dma_init_mfld(struct device *dev, struct dw_spi *dws)
@@ -372,21 +385,6 @@ static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
 {
 	u16 imr = 0, dma_ctrl = 0;
 
-	/*
-	 * Having a Rx DMA channel serviced with higher priority than a Tx DMA
-	 * channel might not be enough to provide a well balanced DMA-based
-	 * SPI transfer interface. There might still be moments when the Tx DMA
-	 * channel is occasionally handled faster than the Rx DMA channel.
-	 * That in its turn will eventually cause the SPI Rx FIFO overflow if
-	 * SPI bus speed is high enough to fill the SPI Rx FIFO in before it's
-	 * cleared by the Rx DMA channel. In order to fix the problem the Tx
-	 * DMA activity is intentionally slowed down by limiting the SPI Tx
-	 * FIFO depth with a value twice bigger than the Tx burst length
-	 * calculated earlier by the dw_spi_dma_maxburst_init() method.
-	 */
-	dw_writel(dws, DW_SPI_DMARDLR, dws->rxburst - 1);
-	dw_writel(dws, DW_SPI_DMATDLR, dws->txburst);
-
 	if (xfer->tx_buf)
 		dma_ctrl |= SPI_DMA_TDMAE;
 	if (xfer->rx_buf)
-- 
2.27.0


* [PATCH v2 02/11] spi: dw-dma: Fail DMA-based transfer if no Tx-buffer specified
From: Serge Semin @ 2020-09-20 11:23 UTC (permalink / raw)
  To: Mark Brown
  Cc: Serge Semin, Serge Semin, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi,
	Andy Shevchenko, Andy Shevchenko, Feng Tang, Vinod Koul,
	linux-spi, linux-kernel

Since commit 46164fde6b78 ("spi: dw: Fix Rx-only DMA transfers"), if
the DMA interface is enabled, a Tx buffer must be available in each
SPI transfer. It's required since, in order to activate the incoming
data reception, either the DMA engine or the CPU must be pushing data
out to the SPI bus. But the DW APB SSI DMA driver code is still left
in a state as if the Tx buffer were optional, which is no longer true.
Let's fix that so an error is returned if no Tx buffer is detected,
and the DMA Tx is always enabled.

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
---
 drivers/spi/spi-dw-dma.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index a7ff1e357f8b..1b013ac94a3f 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -262,9 +262,6 @@ dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 	struct dma_slave_config txconf;
 	struct dma_async_tx_descriptor *txdesc;
 
-	if (!xfer->tx_buf)
-		return NULL;
-
 	memset(&txconf, 0, sizeof(txconf));
 	txconf.direction = DMA_MEM_TO_DEV;
 	txconf.dst_addr = dws->dma_addr;
@@ -383,17 +380,19 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
 
 static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
 {
-	u16 imr = 0, dma_ctrl = 0;
+	u16 imr, dma_ctrl;
 
-	if (xfer->tx_buf)
-		dma_ctrl |= SPI_DMA_TDMAE;
+	if (!xfer->tx_buf)
+		return -EINVAL;
+
+	/* Set the DMA handshaking interface */
+	dma_ctrl = SPI_DMA_TDMAE;
 	if (xfer->rx_buf)
 		dma_ctrl |= SPI_DMA_RDMAE;
 	dw_writel(dws, DW_SPI_DMACR, dma_ctrl);
 
 	/* Set the interrupt mask */
-	if (xfer->tx_buf)
-		imr |= SPI_INT_TXOI;
+	imr = SPI_INT_TXOI;
 	if (xfer->rx_buf)
 		imr |= SPI_INT_RXUI | SPI_INT_RXOI;
 	spi_umask_intr(dws, imr);
@@ -412,6 +411,8 @@ static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 
 	/* Prepare the TX dma transfer */
 	txdesc = dw_spi_dma_prepare_tx(dws, xfer);
+	if (!txdesc)
+		return -EINVAL;
 
 	/* Prepare the RX dma transfer */
 	rxdesc = dw_spi_dma_prepare_rx(dws, xfer);
@@ -423,17 +424,15 @@ static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 		dma_async_issue_pending(dws->rxchan);
 	}
 
-	if (txdesc) {
-		set_bit(TX_BUSY, &dws->dma_chan_busy);
-		dmaengine_submit(txdesc);
-		dma_async_issue_pending(dws->txchan);
-	}
+	set_bit(TX_BUSY, &dws->dma_chan_busy);
+	dmaengine_submit(txdesc);
+	dma_async_issue_pending(dws->txchan);
 
 	ret = dw_spi_dma_wait(dws, xfer);
 	if (ret)
 		return ret;
 
-	if (txdesc && dws->master->cur_msg->status == -EINPROGRESS) {
+	if (dws->master->cur_msg->status == -EINPROGRESS) {
 		ret = dw_spi_dma_wait_tx_done(dws, xfer);
 		if (ret)
 			return ret;
-- 
2.27.0


* [PATCH v2 03/11] spi: dw-dma: Configure the DMA channels in dma_setup
From: Serge Semin @ 2020-09-20 11:23 UTC (permalink / raw)
  To: Mark Brown
  Cc: Serge Semin, Serge Semin, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi,
	Andy Shevchenko, Andy Shevchenko, Feng Tang, Vinod Koul,
	linux-spi, linux-kernel

Mainly this is a preparation patch before adding the one-by-one DMA SG
entries transmission. But logically the Tx and Rx DMA channels setup
should be performed in the dma_setup() callback anyway. So we'll move
the DMA slave channels' src/dst burst lengths, address and address
width configuration from the Tx/Rx channels preparation methods into
dedicated functions, and then make sure they are called at the DMA
setup stage.

Note we now make sure the return value of the dmaengine_slave_config()
method doesn't indicate an error. That has been unnecessary as long as
the DW DMAC was utilized as the DMA engine, since its device_config()
callback always returns zero (though that might change in the future).
But since the DW APB SSI driver now supports any DMA back-end, we must
make sure the DMA device configuration has been successful before
proceeding with further setups.

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
---
 drivers/spi/spi-dw-dma.c | 42 +++++++++++++++++++++++++++++-----------
 1 file changed, 31 insertions(+), 11 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index 1b013ac94a3f..da17897b8acb 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -256,11 +256,9 @@ static void dw_spi_dma_tx_done(void *arg)
 	complete(&dws->dma_completion);
 }
 
-static struct dma_async_tx_descriptor *
-dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_config_tx(struct dw_spi *dws)
 {
 	struct dma_slave_config txconf;
-	struct dma_async_tx_descriptor *txdesc;
 
 	memset(&txconf, 0, sizeof(txconf));
 	txconf.direction = DMA_MEM_TO_DEV;
@@ -270,7 +268,13 @@ dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 	txconf.dst_addr_width = dw_spi_dma_convert_width(dws->n_bytes);
 	txconf.device_fc = false;
 
-	dmaengine_slave_config(dws->txchan, &txconf);
+	return dmaengine_slave_config(dws->txchan, &txconf);
+}
+
+static struct dma_async_tx_descriptor *
+dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
+{
+	struct dma_async_tx_descriptor *txdesc;
 
 	txdesc = dmaengine_prep_slave_sg(dws->txchan,
 				xfer->tx_sg.sgl,
@@ -345,14 +349,9 @@ static void dw_spi_dma_rx_done(void *arg)
 	complete(&dws->dma_completion);
 }
 
-static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
-		struct spi_transfer *xfer)
+static int dw_spi_dma_config_rx(struct dw_spi *dws)
 {
 	struct dma_slave_config rxconf;
-	struct dma_async_tx_descriptor *rxdesc;
-
-	if (!xfer->rx_buf)
-		return NULL;
 
 	memset(&rxconf, 0, sizeof(rxconf));
 	rxconf.direction = DMA_DEV_TO_MEM;
@@ -362,7 +361,16 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
 	rxconf.src_addr_width = dw_spi_dma_convert_width(dws->n_bytes);
 	rxconf.device_fc = false;
 
-	dmaengine_slave_config(dws->rxchan, &rxconf);
+	return dmaengine_slave_config(dws->rxchan, &rxconf);
+}
+
+static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
+		struct spi_transfer *xfer)
+{
+	struct dma_async_tx_descriptor *rxdesc;
+
+	if (!xfer->rx_buf)
+		return NULL;
 
 	rxdesc = dmaengine_prep_slave_sg(dws->rxchan,
 				xfer->rx_sg.sgl,
@@ -381,10 +389,22 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
 static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
 {
 	u16 imr, dma_ctrl;
+	int ret;
 
 	if (!xfer->tx_buf)
 		return -EINVAL;
 
+	/* Setup DMA channels */
+	ret = dw_spi_dma_config_tx(dws);
+	if (ret)
+		return ret;
+
+	if (xfer->rx_buf) {
+		ret = dw_spi_dma_config_rx(dws);
+		if (ret)
+			return ret;
+	}
+
 	/* Set the DMA handshaking interface */
 	dma_ctrl = SPI_DMA_TDMAE;
 	if (xfer->rx_buf)
-- 
2.27.0


* [PATCH v2 04/11] spi: dw-dma: Check rx_buf availability in the xfer method
From: Serge Semin @ 2020-09-20 11:23 UTC (permalink / raw)
  To: Mark Brown
  Cc: Serge Semin, Serge Semin, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi,
	Andy Shevchenko, Andy Shevchenko, Feng Tang, Vinod Koul,
	linux-spi, linux-kernel

Checking rx_buf for being NULL and returning NULL from the Rx-channel
preparation method doesn't let us distinguish that situation from
errors happening during the Rx SG-list preparation. So it's better to
make sure that rx_buf is not NULL and that full-duplex communication
is requested prior to calling the Rx preparation method.

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
---
 drivers/spi/spi-dw-dma.c | 13 ++++++-------
 1 file changed, 6 insertions(+), 7 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index da17897b8acb..d2a67dee1a66 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -369,9 +369,6 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
 {
 	struct dma_async_tx_descriptor *rxdesc;
 
-	if (!xfer->rx_buf)
-		return NULL;
-
 	rxdesc = dmaengine_prep_slave_sg(dws->rxchan,
 				xfer->rx_sg.sgl,
 				xfer->rx_sg.nents,
@@ -435,10 +432,12 @@ static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 		return -EINVAL;
 
 	/* Prepare the RX dma transfer */
-	rxdesc = dw_spi_dma_prepare_rx(dws, xfer);
+	if (xfer->rx_buf) {
+		rxdesc = dw_spi_dma_prepare_rx(dws, xfer);
+		if (!rxdesc)
+			return -EINVAL;
 
-	/* rx must be started before tx due to spi instinct */
-	if (rxdesc) {
+		/* rx must be started before tx due to spi instinct */
 		set_bit(RX_BUSY, &dws->dma_chan_busy);
 		dmaengine_submit(rxdesc);
 		dma_async_issue_pending(dws->rxchan);
@@ -458,7 +457,7 @@ static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 			return ret;
 	}
 
-	if (rxdesc && dws->master->cur_msg->status == -EINPROGRESS)
+	if (xfer->rx_buf && dws->master->cur_msg->status == -EINPROGRESS)
 		ret = dw_spi_dma_wait_rx_done(dws);
 
 	return ret;
-- 
2.27.0


* [PATCH v2 05/11] spi: dw-dma: Move DMA transfers submission to the channels prep methods
From: Serge Semin @ 2020-09-20 11:23 UTC (permalink / raw)
  To: Mark Brown
  Cc: Serge Semin, Serge Semin, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi,
	Andy Shevchenko, Andy Shevchenko, Feng Tang, Vinod Koul,
	linux-spi, linux-kernel

Indeed we can freely move the dmaengine_submit() method invocation and
the Tx and Rx busy flag setting into the DMA Tx/Rx prepare methods.
Since the Tx/Rx preparation methods are now mainly used for the DMA
transfers submission, here we suggest renaming them to have the
_submit_{r,t}x suffix instead.

This alteration serves several purposes. First, it is another code
preparation step before adding the one-by-one DMA SG entries
transmission. Second, the dma_async_tx_descriptor descriptor is now
used locally only in the new DMA transfers submission methods (this
will be cleaned up a bit later). Third, the generic transfer method
becomes more readable: the submission, execution and wait procedures
are now transparently split up, instead of having preparation,
intermixed submission/execution, and wait all in one flow.

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
---
 drivers/spi/spi-dw-dma.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index d2a67dee1a66..769d10ca74b4 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -272,7 +272,7 @@ static int dw_spi_dma_config_tx(struct dw_spi *dws)
 }
 
 static struct dma_async_tx_descriptor *
-dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
+dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 {
 	struct dma_async_tx_descriptor *txdesc;
 
@@ -287,6 +287,9 @@ dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 	txdesc->callback = dw_spi_dma_tx_done;
 	txdesc->callback_param = dws;
 
+	dmaengine_submit(txdesc);
+	set_bit(TX_BUSY, &dws->dma_chan_busy);
+
 	return txdesc;
 }
 
@@ -364,7 +367,7 @@ static int dw_spi_dma_config_rx(struct dw_spi *dws)
 	return dmaengine_slave_config(dws->rxchan, &rxconf);
 }
 
-static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
+static struct dma_async_tx_descriptor *dw_spi_dma_submit_rx(struct dw_spi *dws,
 		struct spi_transfer *xfer)
 {
 	struct dma_async_tx_descriptor *rxdesc;
@@ -380,6 +383,9 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
 	rxdesc->callback = dw_spi_dma_rx_done;
 	rxdesc->callback_param = dws;
 
+	dmaengine_submit(rxdesc);
+	set_bit(RX_BUSY, &dws->dma_chan_busy);
+
 	return rxdesc;
 }
 
@@ -426,25 +432,21 @@ static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 	struct dma_async_tx_descriptor *txdesc, *rxdesc;
 	int ret;
 
-	/* Prepare the TX dma transfer */
-	txdesc = dw_spi_dma_prepare_tx(dws, xfer);
+	/* Submit the DMA Tx transfer */
+	txdesc = dw_spi_dma_submit_tx(dws, xfer);
 	if (!txdesc)
 		return -EINVAL;
 
-	/* Prepare the RX dma transfer */
+	/* Submit the DMA Rx transfer if required */
 	if (xfer->rx_buf) {
-		rxdesc = dw_spi_dma_prepare_rx(dws, xfer);
+		rxdesc = dw_spi_dma_submit_rx(dws, xfer);
 		if (!rxdesc)
 			return -EINVAL;
 
 		/* rx must be started before tx due to spi instinct */
-		set_bit(RX_BUSY, &dws->dma_chan_busy);
-		dmaengine_submit(rxdesc);
 		dma_async_issue_pending(dws->rxchan);
 	}
 
-	set_bit(TX_BUSY, &dws->dma_chan_busy);
-	dmaengine_submit(txdesc);
 	dma_async_issue_pending(dws->txchan);
 
 	ret = dw_spi_dma_wait(dws, xfer);
-- 
2.27.0


* [PATCH v2 06/11] spi: dw-dma: Check DMA Tx-desc submission status
From: Serge Semin @ 2020-09-20 11:23 UTC (permalink / raw)
  To: Mark Brown
  Cc: Serge Semin, Serge Semin, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi,
	Andy Shevchenko, Andy Shevchenko, Feng Tang, Vinod Koul,
	linux-spi, linux-kernel

We suggest adding a test of the dmaengine_submit() return value for
errors. It has been unnecessary while the driver was expected to be
utilized in tandem with the DW DMAC only. But since the driver can now
be used with any DMA engine, it might be useful to track the errors on
DMA submissions so as not to miss them and end up with unpredictable
driver behaviour.

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>

---

Changelog v2:
- Replace negative conditional statements with the positive ones.
- Terminate the prepared descriptors on submission errors.
---
 drivers/spi/spi-dw-dma.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index 769d10ca74b4..aa3900809126 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -275,6 +275,8 @@ static struct dma_async_tx_descriptor *
 dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 {
 	struct dma_async_tx_descriptor *txdesc;
+	dma_cookie_t cookie;
+	int ret;
 
 	txdesc = dmaengine_prep_slave_sg(dws->txchan,
 				xfer->tx_sg.sgl,
@@ -287,7 +289,13 @@ dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 	txdesc->callback = dw_spi_dma_tx_done;
 	txdesc->callback_param = dws;
 
-	dmaengine_submit(txdesc);
+	cookie = dmaengine_submit(txdesc);
+	ret = dma_submit_error(cookie);
+	if (ret) {
+		dmaengine_terminate_sync(dws->txchan);
+		return NULL;
+	}
+
 	set_bit(TX_BUSY, &dws->dma_chan_busy);
 
 	return txdesc;
@@ -371,6 +379,8 @@ static struct dma_async_tx_descriptor *dw_spi_dma_submit_rx(struct dw_spi *dws,
 		struct spi_transfer *xfer)
 {
 	struct dma_async_tx_descriptor *rxdesc;
+	dma_cookie_t cookie;
+	int ret;
 
 	rxdesc = dmaengine_prep_slave_sg(dws->rxchan,
 				xfer->rx_sg.sgl,
@@ -383,7 +393,13 @@ static struct dma_async_tx_descriptor *dw_spi_dma_submit_rx(struct dw_spi *dws,
 	rxdesc->callback = dw_spi_dma_rx_done;
 	rxdesc->callback_param = dws;
 
-	dmaengine_submit(rxdesc);
+	cookie = dmaengine_submit(rxdesc);
+	ret = dma_submit_error(cookie);
+	if (ret) {
+		dmaengine_terminate_sync(dws->rxchan);
+		return NULL;
+	}
+
 	set_bit(RX_BUSY, &dws->dma_chan_busy);
 
 	return rxdesc;
-- 
2.27.0


* [PATCH v2 07/11] spi: dw-dma: Remove DMA Tx-desc passing around
From: Serge Semin @ 2020-09-20 11:23 UTC (permalink / raw)
  To: Mark Brown
  Cc: Serge Semin, Serge Semin, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi,
	Andy Shevchenko, Andy Shevchenko, Feng Tang, Vinod Koul,
	linux-spi, linux-kernel

It's pointless to pass the Rx and Tx transfers' DMA descriptors
around, since they are used in the Tx/Rx submit methods only. Instead
just return the submission status from these methods. This alteration
will make the code less complex.

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
---
 drivers/spi/spi-dw-dma.c | 31 ++++++++++++++-----------------
 1 file changed, 14 insertions(+), 17 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index aa3900809126..9f70818acce6 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -271,8 +271,7 @@ static int dw_spi_dma_config_tx(struct dw_spi *dws)
 	return dmaengine_slave_config(dws->txchan, &txconf);
 }
 
-static struct dma_async_tx_descriptor *
-dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 {
 	struct dma_async_tx_descriptor *txdesc;
 	dma_cookie_t cookie;
@@ -284,7 +283,7 @@ dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 				DMA_MEM_TO_DEV,
 				DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
 	if (!txdesc)
-		return NULL;
+		return -ENOMEM;
 
 	txdesc->callback = dw_spi_dma_tx_done;
 	txdesc->callback_param = dws;
@@ -293,12 +292,12 @@ dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 	ret = dma_submit_error(cookie);
 	if (ret) {
 		dmaengine_terminate_sync(dws->txchan);
-		return NULL;
+		return ret;
 	}
 
 	set_bit(TX_BUSY, &dws->dma_chan_busy);
 
-	return txdesc;
+	return 0;
 }
 
 static inline bool dw_spi_dma_rx_busy(struct dw_spi *dws)
@@ -375,8 +374,7 @@ static int dw_spi_dma_config_rx(struct dw_spi *dws)
 	return dmaengine_slave_config(dws->rxchan, &rxconf);
 }
 
-static struct dma_async_tx_descriptor *dw_spi_dma_submit_rx(struct dw_spi *dws,
-		struct spi_transfer *xfer)
+static int dw_spi_dma_submit_rx(struct dw_spi *dws, struct spi_transfer *xfer)
 {
 	struct dma_async_tx_descriptor *rxdesc;
 	dma_cookie_t cookie;
@@ -388,7 +386,7 @@ static struct dma_async_tx_descriptor *dw_spi_dma_submit_rx(struct dw_spi *dws,
 				DMA_DEV_TO_MEM,
 				DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
 	if (!rxdesc)
-		return NULL;
+		return -ENOMEM;
 
 	rxdesc->callback = dw_spi_dma_rx_done;
 	rxdesc->callback_param = dws;
@@ -397,12 +395,12 @@ static struct dma_async_tx_descriptor *dw_spi_dma_submit_rx(struct dw_spi *dws,
 	ret = dma_submit_error(cookie);
 	if (ret) {
 		dmaengine_terminate_sync(dws->rxchan);
-		return NULL;
+		return ret;
 	}
 
 	set_bit(RX_BUSY, &dws->dma_chan_busy);
 
-	return rxdesc;
+	return 0;
 }
 
 static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
@@ -445,19 +443,18 @@ static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
 
 static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 {
-	struct dma_async_tx_descriptor *txdesc, *rxdesc;
 	int ret;
 
 	/* Submit the DMA Tx transfer */
-	txdesc = dw_spi_dma_submit_tx(dws, xfer);
-	if (!txdesc)
-		return -EINVAL;
+	ret = dw_spi_dma_submit_tx(dws, xfer);
+	if (ret)
+		return ret;
 
 	/* Submit the DMA Rx transfer if required */
 	if (xfer->rx_buf) {
-		rxdesc = dw_spi_dma_submit_rx(dws, xfer);
-		if (!rxdesc)
-			return -EINVAL;
+		ret = dw_spi_dma_submit_rx(dws, xfer);
+		if (ret)
+			return ret;
 
 		/* rx must be started before tx due to spi instinct */
 		dma_async_issue_pending(dws->rxchan);
-- 
2.27.0


* [PATCH v2 08/11] spi: dw-dma: Detach DMA transfer into a dedicated method
From: Serge Semin @ 2020-09-20 11:23 UTC (permalink / raw)
  To: Mark Brown
  Cc: Serge Semin, Serge Semin, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi,
	Andy Shevchenko, Andy Shevchenko, Feng Tang, Vinod Koul,
	linux-spi, linux-kernel

In order to add an alternative method of DMA-based SPI transfer, first
we need to detach the currently available one from the common code.
Here we move the normal DMA-based SPI transfer execution functionality
into a dedicated method. It will be utilized if either the DMA engine
supports an unlimited number of SG entries or a Tx-only SPI transfer
is requested. But for now just use it for any SPI transfer.

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
---
 drivers/spi/spi-dw-dma.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index 9f70818acce6..f2baefcae9ae 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -441,7 +441,8 @@ static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
 	return 0;
 }
 
-static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_transfer_all(struct dw_spi *dws,
+				   struct spi_transfer *xfer)
 {
 	int ret;
 
@@ -462,7 +463,14 @@ static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 
 	dma_async_issue_pending(dws->txchan);
 
-	ret = dw_spi_dma_wait(dws, xfer);
+	return dw_spi_dma_wait(dws, xfer);
+}
+
+static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
+{
+	int ret;
+
+	ret = dw_spi_dma_transfer_all(dws, xfer);
 	if (ret)
 		return ret;
 
-- 
2.27.0


* [PATCH v2 09/11] spi: dw-dma: Move DMAC register cleanup to DMA transfer method
From: Serge Semin @ 2020-09-20 11:23 UTC (permalink / raw)
  To: Mark Brown
  Cc: Serge Semin, Serge Semin, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi,
	Andy Shevchenko, Andy Shevchenko, Feng Tang, Vinod Koul,
	linux-spi, linux-kernel

DW APB SSI DMA driver doesn't use the native SPI core wait API since
commit bdbdf0f06337 ("spi: dw: Locally wait for the DMA transfers
completion"). Due to that the driver can now clear the DMAC register
in a single place synchronously with the DMA transactions completion
or failure. After that all the possible code paths are still covered:
1) DMA completion callbacks are executed in case if the corresponding DMA
transactions are finished. When they are, one of them will eventually wake
the SPI messages pump kernel thread and dw_spi_dma_transfer_all() method
will clean the DMAC register as implied by this patch.
2) dma_stop is called when the SPI core detects an error either returned
from the transfer_one() callback or set in the SPI message status field.
Both types of errors will be noticed by the dw_spi_dma_transfer_all()
method.
3) dma_exit is called when either SPI controller driver or the
corresponding device is removed. In any case the SPI core will first
flush the SPI messages pump kernel thread, so any pending or in-flight
SPI transfers will be finished before that.

Due to all of that let's simplify the DW APB SSI DMA driver a bit and
move the DMAC register cleanup to a single place in the
dw_spi_dma_transfer_all() method.

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
---
 drivers/spi/spi-dw-dma.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index f2baefcae9ae..935f073a3523 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -152,8 +152,6 @@ static void dw_spi_dma_exit(struct dw_spi *dws)
 		dmaengine_terminate_sync(dws->rxchan);
 		dma_release_channel(dws->rxchan);
 	}
-
-	dw_writel(dws, DW_SPI_DMACR, 0);
 }
 
 static irqreturn_t dw_spi_dma_transfer_handler(struct dw_spi *dws)
@@ -252,7 +250,6 @@ static void dw_spi_dma_tx_done(void *arg)
 	if (test_bit(RX_BUSY, &dws->dma_chan_busy))
 		return;
 
-	dw_writel(dws, DW_SPI_DMACR, 0);
 	complete(&dws->dma_completion);
 }
 
@@ -355,7 +352,6 @@ static void dw_spi_dma_rx_done(void *arg)
 	if (test_bit(TX_BUSY, &dws->dma_chan_busy))
 		return;
 
-	dw_writel(dws, DW_SPI_DMACR, 0);
 	complete(&dws->dma_completion);
 }
 
@@ -449,13 +445,13 @@ static int dw_spi_dma_transfer_all(struct dw_spi *dws,
 	/* Submit the DMA Tx transfer */
 	ret = dw_spi_dma_submit_tx(dws, xfer);
 	if (ret)
-		return ret;
+		goto err_clear_dmac;
 
 	/* Submit the DMA Rx transfer if required */
 	if (xfer->rx_buf) {
 		ret = dw_spi_dma_submit_rx(dws, xfer);
 		if (ret)
-			return ret;
+			goto err_clear_dmac;
 
 		/* rx must be started before tx due to spi instinct */
 		dma_async_issue_pending(dws->rxchan);
@@ -463,7 +459,12 @@ static int dw_spi_dma_transfer_all(struct dw_spi *dws,
 
 	dma_async_issue_pending(dws->txchan);
 
-	return dw_spi_dma_wait(dws, xfer);
+	ret = dw_spi_dma_wait(dws, xfer);
+
+err_clear_dmac:
+	dw_writel(dws, DW_SPI_DMACR, 0);
+
+	return ret;
 }
 
 static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
@@ -496,8 +497,6 @@ static void dw_spi_dma_stop(struct dw_spi *dws)
 		dmaengine_terminate_sync(dws->rxchan);
 		clear_bit(RX_BUSY, &dws->dma_chan_busy);
 	}
-
-	dw_writel(dws, DW_SPI_DMACR, 0);
 }
 
 static const struct dw_spi_dma_ops dw_spi_dma_mfld_ops = {
-- 
2.27.0


* [PATCH v2 10/11] spi: dw-dma: Pass exact data to the DMA submit and wait methods
From: Serge Semin @ 2020-09-20 11:23 UTC (permalink / raw)
  To: Mark Brown
  Cc: Serge Semin, Serge Semin, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi,
	Andy Shevchenko, Andy Shevchenko, Feng Tang, Vinod Koul,
	linux-spi, linux-kernel

In order to use the DMA submission and waiting methods in both the
generic DMA-based SPI transfer and the one-by-one DMA SG entries
transmission functions, we need to modify the dw_spi_dma_wait() and
dw_spi_dma_submit_tx()/dw_spi_dma_submit_rx() prototypes. So instead
of getting the SPI transfer object as the second argument, they must
accept the exact data they are intended to use: the current transfer
length and the SPI bus frequency in the case of dw_spi_dma_wait(), and
the SG list together with the number of its entries in the case of the
DMA Tx/Rx submission methods.

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
---
 drivers/spi/spi-dw-dma.c | 35 +++++++++++++++++------------------
 1 file changed, 17 insertions(+), 18 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index 935f073a3523..f333c2e23bf6 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -188,12 +188,12 @@ static enum dma_slave_buswidth dw_spi_dma_convert_width(u8 n_bytes)
 	return DMA_SLAVE_BUSWIDTH_UNDEFINED;
 }
 
-static int dw_spi_dma_wait(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_wait(struct dw_spi *dws, unsigned int len, u32 speed)
 {
 	unsigned long long ms;
 
-	ms = xfer->len * MSEC_PER_SEC * BITS_PER_BYTE;
-	do_div(ms, xfer->effective_speed_hz);
+	ms = len * MSEC_PER_SEC * BITS_PER_BYTE;
+	do_div(ms, speed);
 	ms += ms + 200;
 
 	if (ms > UINT_MAX)
@@ -268,17 +268,16 @@ static int dw_spi_dma_config_tx(struct dw_spi *dws)
 	return dmaengine_slave_config(dws->txchan, &txconf);
 }
 
-static int dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_submit_tx(struct dw_spi *dws, struct scatterlist *sgl,
+				unsigned int nents)
 {
 	struct dma_async_tx_descriptor *txdesc;
 	dma_cookie_t cookie;
 	int ret;
 
-	txdesc = dmaengine_prep_slave_sg(dws->txchan,
-				xfer->tx_sg.sgl,
-				xfer->tx_sg.nents,
-				DMA_MEM_TO_DEV,
-				DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+	txdesc = dmaengine_prep_slave_sg(dws->txchan, sgl, nents,
+					 DMA_MEM_TO_DEV,
+					 DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
 	if (!txdesc)
 		return -ENOMEM;
 
@@ -370,17 +369,16 @@ static int dw_spi_dma_config_rx(struct dw_spi *dws)
 	return dmaengine_slave_config(dws->rxchan, &rxconf);
 }
 
-static int dw_spi_dma_submit_rx(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_submit_rx(struct dw_spi *dws, struct scatterlist *sgl,
+				unsigned int nents)
 {
 	struct dma_async_tx_descriptor *rxdesc;
 	dma_cookie_t cookie;
 	int ret;
 
-	rxdesc = dmaengine_prep_slave_sg(dws->rxchan,
-				xfer->rx_sg.sgl,
-				xfer->rx_sg.nents,
-				DMA_DEV_TO_MEM,
-				DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+	rxdesc = dmaengine_prep_slave_sg(dws->rxchan, sgl, nents,
+					 DMA_DEV_TO_MEM,
+					 DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
 	if (!rxdesc)
 		return -ENOMEM;
 
@@ -443,13 +441,14 @@ static int dw_spi_dma_transfer_all(struct dw_spi *dws,
 	int ret;
 
 	/* Submit the DMA Tx transfer */
-	ret = dw_spi_dma_submit_tx(dws, xfer);
+	ret = dw_spi_dma_submit_tx(dws, xfer->tx_sg.sgl, xfer->tx_sg.nents);
 	if (ret)
 		goto err_clear_dmac;
 
 	/* Submit the DMA Rx transfer if required */
 	if (xfer->rx_buf) {
-		ret = dw_spi_dma_submit_rx(dws, xfer);
+		ret = dw_spi_dma_submit_rx(dws, xfer->rx_sg.sgl,
+					   xfer->rx_sg.nents);
 		if (ret)
 			goto err_clear_dmac;
 
@@ -459,7 +458,7 @@ static int dw_spi_dma_transfer_all(struct dw_spi *dws,
 
 	dma_async_issue_pending(dws->txchan);
 
-	ret = dw_spi_dma_wait(dws, xfer);
+	ret = dw_spi_dma_wait(dws, xfer->len, xfer->effective_speed_hz);
 
 err_clear_dmac:
 	dw_writel(dws, DW_SPI_DMACR, 0);
-- 
2.27.0


* [PATCH v2 11/11] spi: dw-dma: Add one-by-one SG list entries transfer
From: Serge Semin @ 2020-09-20 11:23 UTC (permalink / raw)
  To: Mark Brown
  Cc: Serge Semin, Serge Semin, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi,
	Andy Shevchenko, Andy Shevchenko, Feng Tang, Vinod Koul,
	linux-spi, linux-kernel

If at least one of the requested DMA engine channels doesn't support
hardware accelerated SG list entries traversal, the DMA driver will
most likely work around that by performing IRQ-based SG list entries
resubmission. That might and will cause a problem if the DMA Tx
channel is recharged and re-executed before the Rx DMA channel. Due to
the non-deterministic IRQ-handler execution latency, the DMA Tx
channel will start pushing data to the SPI bus before the Rx DMA
channel is even reinitialized with the next inbound SG list entry. By
doing so the DMA Tx channel will implicitly start filling the DW APB
SSI Rx FIFO up, which, while the DMA Rx channel is being recharged and
re-executed, will eventually overflow.

In order to solve the problem we have to feed the DMA engine with the
SG list entries one-by-one. That shall keep the DW APB SSI Tx and Rx
FIFOs synchronized and prevent the Rx FIFO overflow. Since in general
the SPI tx_sg and rx_sg lists may have different numbers of entries of
different lengths (though the total lengths should match), we
virtually split the SG-lists into a set of DMA transfers, each of
which has the length of the minimum of the ordered SG-entries'
lengths.

The solution described above is only executed if a full-duplex SPI
transfer is requested and the DMA engine hasn't provided channels with
the hardware accelerated SG list traversal capability to handle both
SG lists at once.

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
Suggested-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
---
 drivers/spi/spi-dw-dma.c | 137 ++++++++++++++++++++++++++++++++++++++-
 drivers/spi/spi-dw.h     |   1 +
 2 files changed, 137 insertions(+), 1 deletion(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index f333c2e23bf6..1cbb5a9efbba 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -72,6 +72,23 @@ static void dw_spi_dma_maxburst_init(struct dw_spi *dws)
 	dw_writel(dws, DW_SPI_DMATDLR, dws->txburst);
 }
 
+static void dw_spi_dma_sg_burst_init(struct dw_spi *dws)
+{
+	struct dma_slave_caps tx = {0}, rx = {0};
+
+	dma_get_slave_caps(dws->txchan, &tx);
+	dma_get_slave_caps(dws->rxchan, &rx);
+
+	if (tx.max_sg_burst > 0 && rx.max_sg_burst > 0)
+		dws->dma_sg_burst = min(tx.max_sg_burst, rx.max_sg_burst);
+	else if (tx.max_sg_burst > 0)
+		dws->dma_sg_burst = tx.max_sg_burst;
+	else if (rx.max_sg_burst > 0)
+		dws->dma_sg_burst = rx.max_sg_burst;
+	else
+		dws->dma_sg_burst = 0;
+}
+
 static int dw_spi_dma_init_mfld(struct device *dev, struct dw_spi *dws)
 {
 	struct dw_dma_slave dma_tx = { .dst_id = 1 }, *tx = &dma_tx;
@@ -109,6 +126,8 @@ static int dw_spi_dma_init_mfld(struct device *dev, struct dw_spi *dws)
 
 	dw_spi_dma_maxburst_init(dws);
 
+	dw_spi_dma_sg_burst_init(dws);
+
 	return 0;
 
 free_rxchan:
@@ -138,6 +157,8 @@ static int dw_spi_dma_init_generic(struct device *dev, struct dw_spi *dws)
 
 	dw_spi_dma_maxburst_init(dws);
 
+	dw_spi_dma_sg_burst_init(dws);
+
 	return 0;
 }
 
@@ -466,11 +487,125 @@ static int dw_spi_dma_transfer_all(struct dw_spi *dws,
 	return ret;
 }
 
+/*
+ * In case if at least one of the requested DMA channels doesn't support the
+ * hardware accelerated SG list entries traverse, the DMA driver will most
+ * likely work that around by performing the IRQ-based SG list entries
+ * resubmission. That might and will cause a problem if the DMA Tx channel is
+ * recharged and re-executed before the Rx DMA channel. Due to
+ * non-deterministic IRQ-handler execution latency the DMA Tx channel will
+ * start pushing data to the SPI bus before the Rx DMA channel is even
+ * reinitialized with the next inbound SG list entry. By doing so the DMA Tx
+ * channel will implicitly start filling the DW APB SSI Rx FIFO up, which while
+ * the DMA Rx channel being recharged and re-executed will eventually be
+ * overflown.
+ *
+ * In order to solve the problem we have to feed the DMA engine with SG list
+ * entries one-by-one. It shall keep the DW APB SSI Tx and Rx FIFOs
+ * synchronized and prevent the Rx FIFO overflow. Since in general the tx_sg
+ * and rx_sg lists may have different number of entries of different lengths
+ * (though total length should match) let's virtually split the SG-lists to the
+ * set of DMA transfers, which length is a minimum of the ordered SG-entries
+ * lengths. An ASCII-sketch of the implemented algo is following:
+ *                  xfer->len
+ *                |___________|
+ * tx_sg list:    |___|____|__|
+ * rx_sg list:    |_|____|____|
+ * DMA transfers: |_|_|__|_|__|
+ *
+ * Note in order to have this workaround solving the denoted problem the DMA
+ * engine driver should properly initialize the max_sg_burst capability and set
+ * the DMA device max segment size parameter with maximum data block size the
+ * DMA engine supports.
+ */
+
+static int dw_spi_dma_transfer_one(struct dw_spi *dws,
+				   struct spi_transfer *xfer)
+{
+	struct scatterlist *tx_sg = NULL, *rx_sg = NULL, tx_tmp, rx_tmp;
+	unsigned int tx_len = 0, rx_len = 0;
+	unsigned int base, len;
+	int ret;
+
+	sg_init_table(&tx_tmp, 1);
+	sg_init_table(&rx_tmp, 1);
+
+	for (base = 0, len = 0; base < xfer->len; base += len) {
+		/* Fetch next Tx DMA data chunk */
+		if (!tx_len) {
+			tx_sg = !tx_sg ? &xfer->tx_sg.sgl[0] : sg_next(tx_sg);
+			sg_dma_address(&tx_tmp) = sg_dma_address(tx_sg);
+			tx_len = sg_dma_len(tx_sg);
+		}
+
+		/* Fetch next Rx DMA data chunk */
+		if (!rx_len) {
+			rx_sg = !rx_sg ? &xfer->rx_sg.sgl[0] : sg_next(rx_sg);
+			sg_dma_address(&rx_tmp) = sg_dma_address(rx_sg);
+			rx_len = sg_dma_len(rx_sg);
+		}
+
+		len = min(tx_len, rx_len);
+
+		sg_dma_len(&tx_tmp) = len;
+		sg_dma_len(&rx_tmp) = len;
+
+		/* Submit DMA Tx transfer */
+		ret = dw_spi_dma_submit_tx(dws, &tx_tmp, 1);
+		if (ret)
+			break;
+
+		/* Submit DMA Rx transfer */
+		ret = dw_spi_dma_submit_rx(dws, &rx_tmp, 1);
+		if (ret)
+			break;
+
+		/* Rx must be started before Tx due to SPI instinct */
+		dma_async_issue_pending(dws->rxchan);
+
+		dma_async_issue_pending(dws->txchan);
+
+		/*
+		 * Here we only need to wait for the DMA transfer to be
+		 * finished since SPI controller is kept enabled during the
+		 * procedure this loop implements and there is no risk to lose
+		 * data left in the Tx/Rx FIFOs.
+		 */
+		ret = dw_spi_dma_wait(dws, len, xfer->effective_speed_hz);
+		if (ret)
+			break;
+
+		reinit_completion(&dws->dma_completion);
+
+		sg_dma_address(&tx_tmp) += len;
+		sg_dma_address(&rx_tmp) += len;
+		tx_len -= len;
+		rx_len -= len;
+	}
+
+	dw_writel(dws, DW_SPI_DMACR, 0);
+
+	return ret;
+}
+
 static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 {
+	unsigned int nents;
 	int ret;
 
-	ret = dw_spi_dma_transfer_all(dws, xfer);
+	nents = max(xfer->tx_sg.nents, xfer->rx_sg.nents);
+
+	/*
+	 * Execute normal DMA-based transfer (which submits the Rx and Tx SG
+	 * lists directly to the DMA engine at once) if either full hardware
+	 * accelerated SG list traverse is supported by both channels, or the
+	 * Tx-only SPI transfer is requested, or the DMA engine is capable to
+	 * handle both SG lists on hardware accelerated basis.
+	 */
+	if (!dws->dma_sg_burst || !xfer->rx_buf || nents <= dws->dma_sg_burst)
+		ret = dw_spi_dma_transfer_all(dws, xfer);
+	else
+		ret = dw_spi_dma_transfer_one(dws, xfer);
 	if (ret)
 		return ret;
 
diff --git a/drivers/spi/spi-dw.h b/drivers/spi/spi-dw.h
index 151ba316619e..1d201c62d292 100644
--- a/drivers/spi/spi-dw.h
+++ b/drivers/spi/spi-dw.h
@@ -146,6 +146,7 @@ struct dw_spi {
 	u32			txburst;
 	struct dma_chan		*rxchan;
 	u32			rxburst;
+	u32			dma_sg_burst;
 	unsigned long		dma_chan_busy;
 	dma_addr_t		dma_addr; /* phy address of the Data register */
 	const struct dw_spi_dma_ops *dma_ops;
-- 
2.27.0


* Re: [PATCH v2 00/11] spi: dw-dma: Add max SG entries burst capability support
From: Mark Brown @ 2020-09-29 16:23 UTC (permalink / raw)
  To: Serge Semin
  Cc: Pavel Parkhomenko, Ramil Zaripov, Feng Tang, Andy Shevchenko,
	Vinod Koul, Alexey Malahov, Andy Shevchenko, linux-spi,
	linux-kernel, Peter Ujfalusi, Georgy Vlasov, Serge Semin

On Sun, 20 Sep 2020 14:23:11 +0300, Serge Semin wrote:
> Mainly this series is about fixing a very nasty problem discovered in
> the DW APB SSI driver working in tandem with a DW DMAC which doesn't
> have multi-block capability support (a.k.a. accelerated SG-list
> entries traversal/burst, automatic LLP entries reload, etc.).
> 
> The DW DMA IP-core and its DMA channels can be tuned at the synthesis
> stage by setting a vast number of model parameters, some of which are
> provided to create a more optimized/performant controller. In
> particular, two of those parameters are DMAH_CHx_MULTI_BLK_EN and
> DMAH_CHx_HC_LLP. If at least one of them is set to zero (false), then
> the target DMA controller will be smaller and faster, but will lack
> DMA block chaining support. In the kernel notation that means the
> controller won't be able to automatically reload the next SG-list
> entry when the current one is finished. Since the Linux kernel DMA
> subsystem interface requires SG-list execution to be supported, the
> DW DMA engine driver resubmits each SG-list entry one-by-one by
> software means: the completion of each SG-list entry is signalled by
> a DW DMA controller interrupt, whose handler schedules a tasklet,
> which then re-charges the DW DMA channel with the next SG-list entry
> if one is available. Such an implementation is a normal design and
> works well for most of the DW DMAC client devices. But it may cause
> problems for devices which send and receive data by means of internal
> FIFOs and therefore require several DMA channels working
> synchronously in order to prevent FIFO overflow.
> 
> [...]

Applied to

   https://git.kernel.org/pub/scm/linux/kernel/git/broonie/spi.git for-next

Thanks!

[01/11] spi: dw-dma: Set DMA Level registers on init
        commit: 01ddbbb0b0af255d93b279f83c4ff91d494397d9
[02/11] spi: dw-dma: Fail DMA-based transfer if no Tx-buffer specified
        commit: 7ef30385b05fa8bc13f473c9b0b3ecc7dfb2b208
[03/11] spi: dw-dma: Configure the DMA channels in dma_setup
        commit: a874d811f0c2d285fd7409b5fc569c454c05c835
[04/11] spi: dw-dma: Check rx_buf availability in the xfer method
        commit: be3034d9f9f3ea28588932d10bba6d06b71489a7
[05/11] spi: dw-dma: Move DMA transfers submission to the channels prep methods
        commit: ab7a4d758b278fe44ded648e731c0638b6fa7fd3
[06/11] spi: dw-dma: Check DMA Tx-desc submission status
        commit: 9a6471a1a2c24964838a5bfa4d374e644e9daf07
[07/11] spi: dw-dma: Remove DMA Tx-desc passing around
        commit: 7a4d61f1dc94871154b2d06d671a5c20aea16ff2
[08/11] spi: dw-dma: Detach DMA transfer into a dedicated method
        commit: b86fed121fe6bf5bcac1c258472791ca352f47cf
[09/11] spi: dw-dma: Move DMAC register cleanup to DMA transfer method
        commit: 945b5b60f7110a81d1fd8145b197793edef3282d
[10/11] spi: dw-dma: Pass exact data to the DMA submit and wait methods
        commit: 917ce29ef559630cfeaea5b05f93d8744a6e9d97
[11/11] spi: dw-dma: Add one-by-one SG list entries transfer
        commit: ad4fe1264b396e94b78d91c49ecea425a593b28d

All being well this means that it will be integrated into the linux-next
tree (usually sometime in the next 24 hours) and sent to Linus during
the next merge window (or sooner if it is a bug fix), however if
problems are discovered then the patch may be dropped or reverted.

You may get further e-mails resulting from automated or manual testing
and review of the tree, please engage with people reporting problems and
send followup patches addressing any issues that are reported if needed.

If any updates are required or you are submitting further changes they
should be sent as incremental updates against current git, existing
patches will not be replaced.

Please add any relevant lists and maintainers to the CCs when replying
to this mail.

Thanks,
Mark
