linux-spi.vger.kernel.org archive mirror
* [PATCH 0/8] spi: dw-dma: Add max SG entries burst capability support
From: Serge Semin @ 2020-07-31  7:59 UTC
  To: Mark Brown
  Cc: Serge Semin, Serge Semin, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi,
	Andy Shevchenko, Andy Shevchenko, Feng Tang, Vinod Koul,
	linux-spi, linux-kernel

This series is mainly about fixing a very nasty problem discovered in the
DW APB SSI driver when it works in tandem with a DW DMAC that lacks the
multi-block capability support (a.k.a. accelerated SG list entries
traversal/burst, automatic LLP entries reload, etc.).

The DW DMA IP-core and its DMA channels can be tuned at the synthesis
stage by setting a vast number of the model parameters, some of which
exist to create a smaller or more performant controller. In particular
two of those parameters are DMAH_CHx_MULTI_BLK_EN and DMAH_CHx_HC_LLP. If
at least one of them is set to zero (false), then the target DMA
controller will be smaller and faster, but will lack the DMA blocks
chaining support. In the kernel notation that means the controller won't
be able to automatically reload the next SG-list entry when the current
one is finished. Since the Linux kernel DMA subsystem interface requires
the SG-lists execution to be supported, the DW DMA engine driver works
that limitation around by resubmitting the SG-list entries one-by-one in
software: the completion of each SG-list entry is signalled by a DW DMA
controller interrupt, whose handler schedules a tasklet, which in its
turn re-charges the DW DMA channel with the next SG-list entry if one is
available. Such an implementation is a perfectly normal design and works
well for most of the DW DMAC client devices. But it may cause problems
for devices which send and receive data by means of internal FIFOs and
therefore need several DMA channels working synchronously to prevent the
FIFOs overflow.
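
To illustrate, the software-based traversal boils down to the following
scheme (a simplified sketch with made-up names, not the actual
drivers/dma/dw code):

	/* Tasklet scheduled from the DW DMAC block-done IRQ handler */
	static void dwc_recharge_tasklet(unsigned long data)
	{
		struct dwc_chan_sketch *dwc = (struct dwc_chan_sketch *)data;

		/*
		 * The channel has been sitting idle since the IRQ was
		 * raised. That idle period is the non-deterministic
		 * latency discussed below.
		 */
		if (dwc->next_sg) {
			dwc_load_block(dwc, dwc->next_sg);	/* re-charge */
			dwc->next_sg = sg_next(dwc->next_sg);
			dwc_start_channel(dwc);
		} else {
			dwc_complete_descriptor(dwc);		/* all done */
		}
	}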

A prime example of such a device is the DW APB SSI controller. It has Tx
and Rx FIFOs which first need to be filled in with data before the data
is sent out or received in. But those FIFOs are inter-dependent because
of the SPI nature and its shift-register design: each byte pushed out of
the Tx FIFO immediately causes a byte to be clocked in from the SPI bus
into the Rx FIFO. That makes the strategy of working with the SPI
controller a bit tricky. The more data we push into the Tx FIFO and the
faster the SPI bus is, the more careful we have to be in pulling data
from the Rx FIFO. If software or the DMA engine misses the moment when
the Rx FIFO is full (for instance, because it is busy with some other
activity, blocked by a system bus occupied with something else, or simply
too slow to keep up with the incoming data), then the Rx FIFO will
overflow, which consequently causes data loss. Needless to say, such a
situation is fatal and mustn't be tolerated on a bus like SPI.
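
In short, the Rx FIFO overflows as soon as at some moment of the transfer

	(bytes clocked in by the Tx activity) - (bytes drained from Rx FIFO) > Rx FIFO depth

so the only reliable way to avoid the data loss is to bound how far the
Tx side may run ahead of the Rx side.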

As applied to the DW APB SSI driver, the problem above may happen when
the DW DMAC is synthesized with no multi-block capability support and is
used by the DW APB SSI for full-duplex transfers. The DW APB SSI driver
allocates two DW DMAC channels to perform the Tx and Rx SPI transfers,
initializes them, submits the Tx and Rx SG-lists for execution and then
waits until the DMA transfers are finished. The issue happens when the Rx
SG-list consists of more than one entry. Since the multi-block capability
is unsupported, the DW DMAC driver falls back to the software-based
SG-list entries traversal described above, which by design may cause a
non-deterministic latency while the Rx SG-list entries are re-charged.
During the entries re-charge procedure the DMA Tx channel keeps pushing
data into the SPI Tx FIFO. The DW APB SSI controller in its turn keeps
pushing that data out of the Tx FIFO onto the SPI bus and immediately
filling the Rx FIFO in. So if the latency is big enough, we'll eventually
end up with an Rx FIFO overflow.

One possible solution to the problem denoted above is to feed the DMA
engine with the Tx and Rx SG-list entries one-by-one. This series
provides an implementation of that approach. First it moves the requested
DMA channels configuration into the dma_setup() callback, which is where
it should have been in the first place. Then the DMA transfers submission
is moved into the DMA-preparation methods to collect all the setups in a
single place. After that the current implementation of a straightforward
SG-lists DMA transfer is split out into a dedicated method,
dw_spi_dma_transfer_all(), since we are about to introduce an alternative
DMA-based SPI transfer approach. Since the DMA transfers completion is
now detected locally, we can simplify the driver code a bit and move the
DMAC register cleanup to a single place in the dw_spi_dma_transfer_all()
method. In order to re-use the DMA-{wait, submit Tx, submit Rx} methods
we have to alter their prototypes, so they accept SG-lists instead of the
SPI-transfer structure. Finally we introduce a new DMA-based SPI-transfer
method, which traverses the SG-list entries in a loop and synchronously
submits each of them to the Tx and Rx DMA channels, so the DMA engine
never needs to activate the SG-list entries re-charge implementation,
which is prone to errors in our case. The new method is utilized only for
the full-duplex SPI transfers whose Tx and Rx SG-lists the DMA controller
can't handle in a single DMA transaction without software intervention.
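
In a simplified form (the error handling and the SG-entries bookkeeping
are omitted here; see the last patch of the series for the complete
implementation) the new method is the following loop:

	for (base = 0, len = 0; base < xfer->len; base += len) {
		/* Clamp the step to the shorter of the current Tx and
		 * Rx SG-entries remainders so the FIFOs stay in sync.
		 */
		len = min(tx_len, rx_len);

		dw_spi_dma_submit_tx(dws, &tx_tmp, 1);
		dw_spi_dma_submit_rx(dws, &rx_tmp, 1);

		/* Rx must be started before Tx due to the SPI instinct */
		dma_async_issue_pending(dws->rxchan);
		dma_async_issue_pending(dws->txchan);

		dw_spi_dma_wait(dws, len, xfer->effective_speed_hz);
	}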

Note since the DMA-engine subsystem in kernel 5.8-rcX doesn't support the
max_sg_burst capability yet, this series is intended to be applied only
after the "next" branch of the DMA-engine repository is merged into the
mainline repo. Alternatively the series could be merged in through the
DMA-engine repo.

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
Cc: Alexey Malahov <Alexey.Malahov@baikalelectronics.ru>
Cc: Georgy Vlasov <Georgy.Vlasov@baikalelectronics.ru>
Cc: Ramil Zaripov <Ramil.Zaripov@baikalelectronics.ru>
Cc: Pavel Parkhomenko <Pavel.Parkhomenko@baikalelectronics.ru>
Cc: Peter Ujfalusi <peter.ujfalusi@ti.com>
Cc: Andy Shevchenko <andy.shevchenko@gmail.com>
Cc: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
Cc: Feng Tang <feng.tang@intel.com>
Cc: Vinod Koul <vkoul@kernel.org>
Cc: linux-spi@vger.kernel.org
Cc: linux-kernel@vger.kernel.org

Serge Semin (8):
  spi: dw-dma: Set DMA Level registers on init
  spi: dw-dma: Fail DMA-based transfer if no Tx-buffer specified
  spi: dw-dma: Configure the DMA channels in dma_setup
  spi: dw-dma: Move DMA transfers submission to the channels prep
    methods
  spi: dw-dma: Detach DMA transfer into a dedicated method
  spi: dw-dma: Move DMAC register cleanup to DMA transfer method
  spi: dw-dma: Pass exact data to the DMA submit and wait methods
  spi: dw-dma: Add one-by-one SG list entries transfer

 drivers/spi/spi-dw-dma.c | 309 ++++++++++++++++++++++++++++++---------
 drivers/spi/spi-dw.h     |   1 +
 2 files changed, 238 insertions(+), 72 deletions(-)

-- 
2.27.0



* [PATCH 1/8] spi: dw-dma: Set DMA Level registers on init
From: Serge Semin @ 2020-07-31  7:59 UTC
  To: Mark Brown
  Cc: Serge Semin, Serge Semin, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi,
	Andy Shevchenko, Andy Shevchenko, Feng Tang, Vinod Koul,
	linux-spi, linux-kernel

Indeed the registers content doesn't get cleared when the SPI controller
is disabled and re-enabled. The max burst lengths don't change either,
since the Rx and Tx DMA channels are requested at the init stage and are
kept acquired until the device is removed. Obviously the SPI controller
FIFO depth can't be changed. Due to all of that we can safely move the
DMA Transmit and Receive data level registers initialization to the SPI
controller DMA init stage (when the SPI controller is being probed)
instead of doing it for each SPI transfer when dma_setup is called. This
shall speed up the DMA-based SPI transfer initialization a bit,
particularly if the APB bus is relatively slow.

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
---
 drivers/spi/spi-dw-dma.c | 29 ++++++++++++++---------------
 1 file changed, 14 insertions(+), 15 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index bb390ff67d1d..440679fa0764 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -49,6 +49,7 @@ static void dw_spi_dma_maxburst_init(struct dw_spi *dws)
 		max_burst = RX_BURST_LEVEL;
 
 	dws->rxburst = min(max_burst, def_burst);
+	dw_writel(dws, DW_SPI_DMARDLR, dws->rxburst - 1);
 
 	ret = dma_get_slave_caps(dws->txchan, &caps);
 	if (!ret && caps.max_burst)
@@ -56,7 +57,20 @@ static void dw_spi_dma_maxburst_init(struct dw_spi *dws)
 	else
 		max_burst = TX_BURST_LEVEL;
 
+	/*
+	 * Having a Rx DMA channel serviced with higher priority than a Tx DMA
+	 * channel might not be enough to provide a well balanced DMA-based
+	 * SPI transfer interface. There might still be moments when the Tx DMA
+	 * channel is occasionally handled faster than the Rx DMA channel.
+	 * That in its turn will eventually cause the SPI Rx FIFO overflow if
+	 * SPI bus speed is high enough to fill the SPI Rx FIFO in before it's
+	 * cleared by the Rx DMA channel. In order to fix the problem the Tx
+	 * DMA activity is intentionally slowed down by limiting the SPI Tx
+	 * FIFO depth with a value twice bigger than the Tx burst length
+	 * calculated earlier by the dw_spi_dma_maxburst_init() method.
+	 */
 	dws->txburst = min(max_burst, def_burst);
+	dw_writel(dws, DW_SPI_DMATDLR, dws->txburst);
 }
 
 static int dw_spi_dma_init_mfld(struct device *dev, struct dw_spi *dws)
@@ -372,21 +386,6 @@ static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
 {
 	u16 imr = 0, dma_ctrl = 0;
 
-	/*
-	 * Having a Rx DMA channel serviced with higher priority than a Tx DMA
-	 * channel might not be enough to provide a well balanced DMA-based
-	 * SPI transfer interface. There might still be moments when the Tx DMA
-	 * channel is occasionally handled faster than the Rx DMA channel.
-	 * That in its turn will eventually cause the SPI Rx FIFO overflow if
-	 * SPI bus speed is high enough to fill the SPI Rx FIFO in before it's
-	 * cleared by the Rx DMA channel. In order to fix the problem the Tx
-	 * DMA activity is intentionally slowed down by limiting the SPI Tx
-	 * FIFO depth with a value twice bigger than the Tx burst length
-	 * calculated earlier by the dw_spi_dma_maxburst_init() method.
-	 */
-	dw_writel(dws, DW_SPI_DMARDLR, dws->rxburst - 1);
-	dw_writel(dws, DW_SPI_DMATDLR, dws->txburst);
-
 	if (xfer->tx_buf)
 		dma_ctrl |= SPI_DMA_TDMAE;
 	if (xfer->rx_buf)
-- 
2.27.0



* [PATCH 2/8] spi: dw-dma: Fail DMA-based transfer if no Tx-buffer specified
From: Serge Semin @ 2020-07-31  7:59 UTC
  To: Mark Brown
  Cc: Serge Semin, Serge Semin, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi,
	Andy Shevchenko, Andy Shevchenko, Feng Tang, Vinod Koul,
	linux-spi, linux-kernel

Since commit 46164fde6b78 ("spi: dw: Fix Rx-only DMA transfers"), if the
DMA interface is enabled, a Tx-buffer must be available in each SPI
transfer. It's required since in order to activate the incoming data
reception either DMA or CPU must be pushing data out to the SPI bus. But
the DW APB SSI DMA driver code is still written as if the Tx-buffer might
be optional, which is no longer true. Let's fix it so an error is
returned if no Tx-buffer is detected, and DMA Tx is always enabled.

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
---
 drivers/spi/spi-dw-dma.c | 27 +++++++++++++--------------
 1 file changed, 13 insertions(+), 14 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index 440679fa0764..ec721af61663 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -263,9 +263,6 @@ dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 	struct dma_slave_config txconf;
 	struct dma_async_tx_descriptor *txdesc;
 
-	if (!xfer->tx_buf)
-		return NULL;
-
 	memset(&txconf, 0, sizeof(txconf));
 	txconf.direction = DMA_MEM_TO_DEV;
 	txconf.dst_addr = dws->dma_addr;
@@ -384,17 +381,19 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
 
 static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
 {
-	u16 imr = 0, dma_ctrl = 0;
+	u16 imr, dma_ctrl;
 
-	if (xfer->tx_buf)
-		dma_ctrl |= SPI_DMA_TDMAE;
+	if (!xfer->tx_buf)
+		return -EINVAL;
+
+	/* Set the DMA handshaking interface */
+	dma_ctrl = SPI_DMA_TDMAE;
 	if (xfer->rx_buf)
 		dma_ctrl |= SPI_DMA_RDMAE;
 	dw_writel(dws, DW_SPI_DMACR, dma_ctrl);
 
 	/* Set the interrupt mask */
-	if (xfer->tx_buf)
-		imr |= SPI_INT_TXOI;
+	imr = SPI_INT_TXOI;
 	if (xfer->rx_buf)
 		imr |= SPI_INT_RXUI | SPI_INT_RXOI;
 	spi_umask_intr(dws, imr);
@@ -413,6 +412,8 @@ static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 
 	/* Prepare the TX dma transfer */
 	txdesc = dw_spi_dma_prepare_tx(dws, xfer);
+	if (!txdesc)
+		return -EINVAL;
 
 	/* Prepare the RX dma transfer */
 	rxdesc = dw_spi_dma_prepare_rx(dws, xfer);
@@ -424,17 +425,15 @@ static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 		dma_async_issue_pending(dws->rxchan);
 	}
 
-	if (txdesc) {
-		set_bit(TX_BUSY, &dws->dma_chan_busy);
-		dmaengine_submit(txdesc);
-		dma_async_issue_pending(dws->txchan);
-	}
+	set_bit(TX_BUSY, &dws->dma_chan_busy);
+	dmaengine_submit(txdesc);
+	dma_async_issue_pending(dws->txchan);
 
 	ret = dw_spi_dma_wait(dws, xfer);
 	if (ret)
 		return ret;
 
-	if (txdesc && dws->master->cur_msg->status == -EINPROGRESS) {
+	if (dws->master->cur_msg->status == -EINPROGRESS) {
 		ret = dw_spi_dma_wait_tx_done(dws, xfer);
 		if (ret)
 			return ret;
-- 
2.27.0



* [PATCH 3/8] spi: dw-dma: Configure the DMA channels in dma_setup
From: Serge Semin @ 2020-07-31  7:59 UTC
  To: Mark Brown
  Cc: Serge Semin, Serge Semin, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi,
	Andy Shevchenko, Andy Shevchenko, Feng Tang, Vinod Koul,
	linux-spi, linux-kernel

Mainly this is a preparation patch before adding the one-by-one DMA SG
entries transmission. But logically the Tx and Rx DMA channels setup
should be performed in the dma_setup() callback anyway. So let's move the
DMA slave channels src/dst burst lengths, address and address width
configuration to the DMA setup stage. While at it make sure the return
value of the dmaengine_slave_config() method is checked. It has been
unnecessary while the DW DMAC was the only DMA engine used, since its
device_config() callback always returns zero (though that might change in
the future). But since the DW APB SSI driver now supports any DMA
back-end, we must make sure the DMA device configuration has been
successful before proceeding with further setups.

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
---
 drivers/spi/spi-dw-dma.c | 42 +++++++++++++++++++++++++++++-----------
 1 file changed, 31 insertions(+), 11 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index ec721af61663..56496b659d62 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -257,11 +257,9 @@ static void dw_spi_dma_tx_done(void *arg)
 	complete(&dws->dma_completion);
 }
 
-static struct dma_async_tx_descriptor *
-dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_config_tx(struct dw_spi *dws)
 {
 	struct dma_slave_config txconf;
-	struct dma_async_tx_descriptor *txdesc;
 
 	memset(&txconf, 0, sizeof(txconf));
 	txconf.direction = DMA_MEM_TO_DEV;
@@ -271,7 +269,13 @@ dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 	txconf.dst_addr_width = dw_spi_dma_convert_width(dws->n_bytes);
 	txconf.device_fc = false;
 
-	dmaengine_slave_config(dws->txchan, &txconf);
+	return dmaengine_slave_config(dws->txchan, &txconf);
+}
+
+static struct dma_async_tx_descriptor *
+dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
+{
+	struct dma_async_tx_descriptor *txdesc;
 
 	txdesc = dmaengine_prep_slave_sg(dws->txchan,
 				xfer->tx_sg.sgl,
@@ -346,14 +350,9 @@ static void dw_spi_dma_rx_done(void *arg)
 	complete(&dws->dma_completion);
 }
 
-static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
-		struct spi_transfer *xfer)
+static int dw_spi_dma_config_rx(struct dw_spi *dws)
 {
 	struct dma_slave_config rxconf;
-	struct dma_async_tx_descriptor *rxdesc;
-
-	if (!xfer->rx_buf)
-		return NULL;
 
 	memset(&rxconf, 0, sizeof(rxconf));
 	rxconf.direction = DMA_DEV_TO_MEM;
@@ -363,7 +362,16 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
 	rxconf.src_addr_width = dw_spi_dma_convert_width(dws->n_bytes);
 	rxconf.device_fc = false;
 
-	dmaengine_slave_config(dws->rxchan, &rxconf);
+	return dmaengine_slave_config(dws->rxchan, &rxconf);
+}
+
+static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
+		struct spi_transfer *xfer)
+{
+	struct dma_async_tx_descriptor *rxdesc;
+
+	if (!xfer->rx_buf)
+		return NULL;
 
 	rxdesc = dmaengine_prep_slave_sg(dws->rxchan,
 				xfer->rx_sg.sgl,
@@ -382,10 +390,22 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
 static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
 {
 	u16 imr, dma_ctrl;
+	int ret;
 
 	if (!xfer->tx_buf)
 		return -EINVAL;
 
+	/* Setup DMA channels */
+	ret = dw_spi_dma_config_tx(dws);
+	if (ret)
+		return ret;
+
+	if (xfer->rx_buf) {
+		ret = dw_spi_dma_config_rx(dws);
+		if (ret)
+			return ret;
+	}
+
 	/* Set the DMA handshaking interface */
 	dma_ctrl = SPI_DMA_TDMAE;
 	if (xfer->rx_buf)
-- 
2.27.0



* [PATCH 4/8] spi: dw-dma: Move DMA transfers submission to the channels prep methods
From: Serge Semin @ 2020-07-31  7:59 UTC
  To: Mark Brown
  Cc: Serge Semin, Serge Semin, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi,
	Andy Shevchenko, Andy Shevchenko, Feng Tang, Vinod Koul,
	linux-spi, linux-kernel

Indeed we can freely move the dmaengine_submit() method invocation and
the Tx and Rx busy flag setting into the DMA Tx/Rx prepare methods. By
doing so, first, we implement another preparation step before adding the
one-by-one DMA SG entries transmission; second, the
dma_async_tx_descriptor descriptor is now used locally only in the new
DMA transfer submission methods, which makes the code less complex since
the DMA Tx descriptors no longer get passed around; third, the generic
transfer method becomes more readable, with the submission, execution and
wait procedures now transparently split up instead of having a
preparation, an intermixed submission/execution and a wait procedure.
While at it we also add a dmaengine_submit() return value test. It has
been unnecessary for the DW DMAC, but should be done to support the
generic DMA interface.

Note since the DMA channels preparation methods are now responsible for
the DMA transactions submission, we also rename them to
dw_spi_dma_submit_{tx,rx}().

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
---
 drivers/spi/spi-dw-dma.c | 56 ++++++++++++++++++++++------------------
 1 file changed, 31 insertions(+), 25 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index 56496b659d62..366978086781 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -272,10 +272,11 @@ static int dw_spi_dma_config_tx(struct dw_spi *dws)
 	return dmaengine_slave_config(dws->txchan, &txconf);
 }
 
-static struct dma_async_tx_descriptor *
-dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 {
 	struct dma_async_tx_descriptor *txdesc;
+	dma_cookie_t cookie;
+	int ret;
 
 	txdesc = dmaengine_prep_slave_sg(dws->txchan,
 				xfer->tx_sg.sgl,
@@ -283,12 +284,17 @@ dw_spi_dma_prepare_tx(struct dw_spi *dws, struct spi_transfer *xfer)
 				DMA_MEM_TO_DEV,
 				DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
 	if (!txdesc)
-		return NULL;
+		return -ENOMEM;
 
 	txdesc->callback = dw_spi_dma_tx_done;
 	txdesc->callback_param = dws;
 
-	return txdesc;
+	cookie = dmaengine_submit(txdesc);
+	ret = dma_submit_error(cookie);
+	if (!ret)
+		set_bit(TX_BUSY, &dws->dma_chan_busy);
+
+	return ret;
 }
 
 static inline bool dw_spi_dma_rx_busy(struct dw_spi *dws)
@@ -365,13 +371,11 @@ static int dw_spi_dma_config_rx(struct dw_spi *dws)
 	return dmaengine_slave_config(dws->rxchan, &rxconf);
 }
 
-static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
-		struct spi_transfer *xfer)
+static int dw_spi_dma_submit_rx(struct dw_spi *dws, struct spi_transfer *xfer)
 {
 	struct dma_async_tx_descriptor *rxdesc;
-
-	if (!xfer->rx_buf)
-		return NULL;
+	dma_cookie_t cookie;
+	int ret;
 
 	rxdesc = dmaengine_prep_slave_sg(dws->rxchan,
 				xfer->rx_sg.sgl,
@@ -379,12 +383,17 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_rx(struct dw_spi *dws,
 				DMA_DEV_TO_MEM,
 				DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
 	if (!rxdesc)
-		return NULL;
+		return -ENOMEM;
 
 	rxdesc->callback = dw_spi_dma_rx_done;
 	rxdesc->callback_param = dws;
 
-	return rxdesc;
+	cookie = dmaengine_submit(rxdesc);
+	ret = dma_submit_error(cookie);
+	if (!ret)
+		set_bit(RX_BUSY, &dws->dma_chan_busy);
+
+	return ret;
 }
 
 static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
@@ -427,26 +436,23 @@ static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
 
 static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 {
-	struct dma_async_tx_descriptor *txdesc, *rxdesc;
 	int ret;
 
-	/* Prepare the TX dma transfer */
-	txdesc = dw_spi_dma_prepare_tx(dws, xfer);
-	if (!txdesc)
-		return -EINVAL;
+	/* Submit DMA Tx transfer */
+	ret = dw_spi_dma_submit_tx(dws, xfer);
+	if (ret)
+		return ret;
 
-	/* Prepare the RX dma transfer */
-	rxdesc = dw_spi_dma_prepare_rx(dws, xfer);
+	/* Submit DMA Rx transfer if required */
+	if (xfer->rx_buf) {
+		ret = dw_spi_dma_submit_rx(dws, xfer);
+		if (ret)
+			return ret;
 
-	/* rx must be started before tx due to spi instinct */
-	if (rxdesc) {
-		set_bit(RX_BUSY, &dws->dma_chan_busy);
-		dmaengine_submit(rxdesc);
+		/* rx must be started before tx due to spi instinct */
 		dma_async_issue_pending(dws->rxchan);
 	}
 
-	set_bit(TX_BUSY, &dws->dma_chan_busy);
-	dmaengine_submit(txdesc);
 	dma_async_issue_pending(dws->txchan);
 
 	ret = dw_spi_dma_wait(dws, xfer);
@@ -459,7 +465,7 @@ static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 			return ret;
 	}
 
-	if (rxdesc && dws->master->cur_msg->status == -EINPROGRESS)
+	if (xfer->rx_buf && dws->master->cur_msg->status == -EINPROGRESS)
 		ret = dw_spi_dma_wait_rx_done(dws);
 
 	return ret;
-- 
2.27.0



* [PATCH 5/8] spi: dw-dma: Detach DMA transfer into a dedicated method
From: Serge Semin @ 2020-07-31  7:59 UTC
  To: Mark Brown
  Cc: Serge Semin, Serge Semin, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi,
	Andy Shevchenko, Andy Shevchenko, Feng Tang, Vinod Koul,
	linux-spi, linux-kernel

In order to add an alternative method of DMA-based SPI transfer we first
need to detach the currently available one from the common code. Here we
move the normal DMA-based SPI transfer execution functionality into a
dedicated method. It will be utilized if either the DMA engine supports
an unlimited number of SG entries or a Tx-only SPI transfer is requested.
But for now it is simply used for any SPI transfer.

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
---
 drivers/spi/spi-dw-dma.c | 12 ++++++++++--
 1 file changed, 10 insertions(+), 2 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index 366978086781..32ef7913a73d 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -434,7 +434,8 @@ static int dw_spi_dma_setup(struct dw_spi *dws, struct spi_transfer *xfer)
 	return 0;
 }
 
-static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_transfer_all(struct dw_spi *dws,
+				   struct spi_transfer *xfer)
 {
 	int ret;
 
@@ -455,7 +456,14 @@ static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 
 	dma_async_issue_pending(dws->txchan);
 
-	ret = dw_spi_dma_wait(dws, xfer);
+	return dw_spi_dma_wait(dws, xfer);
+}
+
+static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
+{
+	int ret;
+
+	ret = dw_spi_dma_transfer_all(dws, xfer);
 	if (ret)
 		return ret;
 
-- 
2.27.0



* [PATCH 6/8] spi: dw-dma: Move DMAC register cleanup to DMA transfer method
From: Serge Semin @ 2020-07-31  7:59 UTC
  To: Mark Brown
  Cc: Serge Semin, Serge Semin, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi,
	Andy Shevchenko, Andy Shevchenko, Feng Tang, Vinod Koul,
	linux-spi, linux-kernel

The DW APB SSI DMA driver hasn't used the native SPI core wait API since
commit bdbdf0f06337 ("spi: dw: Locally wait for the DMA transfers
completion"). Due to that the driver can now clear the DMAC register in a
single place, synchronously with the DMA transactions completion or
failure. After that all the possible code paths are still covered:
1) The DMA completion callbacks are executed when the corresponding DMA
transactions are finished. When they are, one of them will eventually
wake the SPI messages pump kernel thread and the dw_spi_dma_transfer_all()
method will clear the DMAC register as implied by this patch.
2) dma_stop is called when the SPI core detects an error either returned
from the transfer_one() callback or set in the SPI message status field.
Both types of errors will be noticed by the dw_spi_dma_transfer_all()
method.
3) dma_exit is called when either the SPI controller driver or the
corresponding device is removed. In any case the SPI core will first
flush the SPI messages pump kernel thread, so any pending or in-flight
SPI transfers will be finished before that.

Due to all of that let's simplify the DW APB SSI DMA driver a bit and
move the DMAC register cleanup to a single place in the
dw_spi_dma_transfer_all() method.

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
---
 drivers/spi/spi-dw-dma.c | 17 ++++++++---------
 1 file changed, 8 insertions(+), 9 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index 32ef7913a73d..f8508c0c7978 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -153,8 +153,6 @@ static void dw_spi_dma_exit(struct dw_spi *dws)
 		dmaengine_terminate_sync(dws->rxchan);
 		dma_release_channel(dws->rxchan);
 	}
-
-	dw_writel(dws, DW_SPI_DMACR, 0);
 }
 
 static irqreturn_t dw_spi_dma_transfer_handler(struct dw_spi *dws)
@@ -253,7 +251,6 @@ static void dw_spi_dma_tx_done(void *arg)
 	if (test_bit(RX_BUSY, &dws->dma_chan_busy))
 		return;
 
-	dw_writel(dws, DW_SPI_DMACR, 0);
 	complete(&dws->dma_completion);
 }
 
@@ -352,7 +349,6 @@ static void dw_spi_dma_rx_done(void *arg)
 	if (test_bit(TX_BUSY, &dws->dma_chan_busy))
 		return;
 
-	dw_writel(dws, DW_SPI_DMACR, 0);
 	complete(&dws->dma_completion);
 }
 
@@ -442,13 +438,13 @@ static int dw_spi_dma_transfer_all(struct dw_spi *dws,
 	/* Submit DMA Tx transfer */
 	ret = dw_spi_dma_submit_tx(dws, xfer);
 	if (ret)
-		return ret;
+		goto err_clear_dmac;
 
 	/* Submit DMA Rx transfer if required */
 	if (xfer->rx_buf) {
 		ret = dw_spi_dma_submit_rx(dws, xfer);
 		if (ret)
-			return ret;
+			goto err_clear_dmac;
 
 		/* rx must be started before tx due to spi instinct */
 		dma_async_issue_pending(dws->rxchan);
@@ -456,7 +452,12 @@ static int dw_spi_dma_transfer_all(struct dw_spi *dws,
 
 	dma_async_issue_pending(dws->txchan);
 
-	return dw_spi_dma_wait(dws, xfer);
+	ret = dw_spi_dma_wait(dws, xfer);
+
+err_clear_dmac:
+	dw_writel(dws, DW_SPI_DMACR, 0);
+
+	return ret;
 }
 
 static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
@@ -489,8 +490,6 @@ static void dw_spi_dma_stop(struct dw_spi *dws)
 		dmaengine_terminate_sync(dws->rxchan);
 		clear_bit(RX_BUSY, &dws->dma_chan_busy);
 	}
-
-	dw_writel(dws, DW_SPI_DMACR, 0);
 }
 
 static const struct dw_spi_dma_ops dw_spi_dma_mfld_ops = {
-- 
2.27.0



* [PATCH 7/8] spi: dw-dma: Pass exact data to the DMA submit and wait methods
From: Serge Semin @ 2020-07-31  7:59 UTC
  To: Mark Brown
  Cc: Serge Semin, Serge Semin, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi,
	Andy Shevchenko, Andy Shevchenko, Feng Tang, Vinod Koul,
	linux-spi, linux-kernel

In order to use the DMA submission and waiting methods in both the
generic DMA-based SPI transfer and the one-by-one DMA SG entries
transmission functions, we need to modify the dw_spi_dma_wait() and
dw_spi_dma_submit_tx()/dw_spi_dma_submit_rx() prototypes. So instead of
getting the SPI transfer object as the second argument, they must accept
the exact data structure instances they actually use. Those are the
current transfer length and the SPI bus frequency in the case of
dw_spi_dma_wait(), and an SG list together with the number of its entries
in the case of the DMA Tx/Rx submission methods.
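
For instance, with the new prototype a 4 KiB transfer clocked at 10 MHz
gives ms = 4096 * 1000 * 8 / 10000000 = 3 (integer division), which after
the doubling and the 200 ms margin applied by dw_spi_dma_wait() turns
into a 2 * 3 + 200 = 206 ms wait timeout.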

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
---
 drivers/spi/spi-dw-dma.c | 35 +++++++++++++++++------------------
 1 file changed, 17 insertions(+), 18 deletions(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index f8508c0c7978..2b42b42b6cf2 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -189,12 +189,12 @@ static enum dma_slave_buswidth dw_spi_dma_convert_width(u8 n_bytes)
 	return DMA_SLAVE_BUSWIDTH_UNDEFINED;
 }
 
-static int dw_spi_dma_wait(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_wait(struct dw_spi *dws, unsigned int len, u32 speed)
 {
 	unsigned long long ms;
 
-	ms = xfer->len * MSEC_PER_SEC * BITS_PER_BYTE;
-	do_div(ms, xfer->effective_speed_hz);
+	ms = len * MSEC_PER_SEC * BITS_PER_BYTE;
+	do_div(ms, speed);
 	ms += ms + 200;
 
 	if (ms > UINT_MAX)
@@ -269,17 +269,16 @@ static int dw_spi_dma_config_tx(struct dw_spi *dws)
 	return dmaengine_slave_config(dws->txchan, &txconf);
 }
 
-static int dw_spi_dma_submit_tx(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_submit_tx(struct dw_spi *dws, struct scatterlist *sgl,
+				unsigned int nents)
 {
 	struct dma_async_tx_descriptor *txdesc;
 	dma_cookie_t cookie;
 	int ret;
 
-	txdesc = dmaengine_prep_slave_sg(dws->txchan,
-				xfer->tx_sg.sgl,
-				xfer->tx_sg.nents,
-				DMA_MEM_TO_DEV,
-				DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+	txdesc = dmaengine_prep_slave_sg(dws->txchan, sgl, nents,
+					 DMA_MEM_TO_DEV,
+					 DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
 	if (!txdesc)
 		return -ENOMEM;
 
@@ -367,17 +366,16 @@ static int dw_spi_dma_config_rx(struct dw_spi *dws)
 	return dmaengine_slave_config(dws->rxchan, &rxconf);
 }
 
-static int dw_spi_dma_submit_rx(struct dw_spi *dws, struct spi_transfer *xfer)
+static int dw_spi_dma_submit_rx(struct dw_spi *dws, struct scatterlist *sgl,
+				unsigned int nents)
 {
 	struct dma_async_tx_descriptor *rxdesc;
 	dma_cookie_t cookie;
 	int ret;
 
-	rxdesc = dmaengine_prep_slave_sg(dws->rxchan,
-				xfer->rx_sg.sgl,
-				xfer->rx_sg.nents,
-				DMA_DEV_TO_MEM,
-				DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
+	rxdesc = dmaengine_prep_slave_sg(dws->rxchan, sgl, nents,
+					 DMA_DEV_TO_MEM,
+					 DMA_PREP_INTERRUPT | DMA_CTRL_ACK);
 	if (!rxdesc)
 		return -ENOMEM;
 
@@ -436,13 +434,14 @@ static int dw_spi_dma_transfer_all(struct dw_spi *dws,
 	int ret;
 
 	/* Submit DMA Tx transfer */
-	ret = dw_spi_dma_submit_tx(dws, xfer);
+	ret = dw_spi_dma_submit_tx(dws, xfer->tx_sg.sgl, xfer->tx_sg.nents);
 	if (ret)
 		goto err_clear_dmac;
 
 	/* Submit DMA Rx transfer if required */
 	if (xfer->rx_buf) {
-		ret = dw_spi_dma_submit_rx(dws, xfer);
+		ret = dw_spi_dma_submit_rx(dws, xfer->rx_sg.sgl,
+					   xfer->rx_sg.nents);
 		if (ret)
 			goto err_clear_dmac;
 
@@ -452,7 +451,7 @@ static int dw_spi_dma_transfer_all(struct dw_spi *dws,
 
 	dma_async_issue_pending(dws->txchan);
 
-	ret = dw_spi_dma_wait(dws, xfer);
+	ret = dw_spi_dma_wait(dws, xfer->len, xfer->effective_speed_hz);
 
 err_clear_dmac:
 	dw_writel(dws, DW_SPI_DMACR, 0);
-- 
2.27.0



* [PATCH 8/8] spi: dw-dma: Add one-by-one SG list entries transfer
From: Serge Semin @ 2020-07-31  7:59 UTC
  To: Mark Brown
  Cc: Serge Semin, Serge Semin, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi,
	Andy Shevchenko, Andy Shevchenko, Feng Tang, Vinod Koul,
	linux-spi, linux-kernel

If at least one of the requested DMA engine channels doesn't support the
hardware accelerated SG list entries traversal, the DMA driver will most
likely work that around by performing an IRQ-based SG list entries
resubmission. That might and will cause a problem if the DMA Tx channel
is re-charged and re-executed before the Rx DMA channel. Due to the
non-deterministic IRQ-handler execution latency the DMA Tx channel will
start pushing data to the SPI bus before the Rx DMA channel is even
reinitialized with the next inbound SG list entry. By doing so the DMA Tx
channel will implicitly start filling the DW APB SSI Rx FIFO up, which,
while the DMA Rx channel is being re-charged and re-executed, will
eventually overflow.

In order to solve the problem we have to feed the DMA engine with the SG
list entries one-by-one. That shall keep the DW APB SSI Tx and Rx FIFOs
synchronized and prevent the Rx FIFO overflow. Since in general the SPI
tx_sg and rx_sg lists may have different numbers of entries of different
lengths (though the total lengths should match), we virtually split the
SG-lists into a set of DMA transfers, each of which has the length of the
minimum of the overlapping SG-entries lengths.

The solution described above is only executed if a full-duplex SPI
transfer is requested and the DMA engine hasn't provided channels with
the hardware accelerated SG list traversal capability to handle both SG
lists at once.

Signed-off-by: Serge Semin <Sergey.Semin@baikalelectronics.ru>
Suggested-by: Andy Shevchenko <andriy.shevchenko@linux.intel.com>
---
 drivers/spi/spi-dw-dma.c | 137 ++++++++++++++++++++++++++++++++++++++-
 drivers/spi/spi-dw.h     |   1 +
 2 files changed, 137 insertions(+), 1 deletion(-)

diff --git a/drivers/spi/spi-dw-dma.c b/drivers/spi/spi-dw-dma.c
index 2b42b42b6cf2..b3eeef3d5cf7 100644
--- a/drivers/spi/spi-dw-dma.c
+++ b/drivers/spi/spi-dw-dma.c
@@ -73,6 +73,23 @@ static void dw_spi_dma_maxburst_init(struct dw_spi *dws)
 	dw_writel(dws, DW_SPI_DMATDLR, dws->txburst);
 }
 
+static void dw_spi_dma_sg_burst_init(struct dw_spi *dws)
+{
+	struct dma_slave_caps tx = {0}, rx = {0};
+
+	dma_get_slave_caps(dws->txchan, &tx);
+	dma_get_slave_caps(dws->rxchan, &rx);
+
+	if (tx.max_sg_burst > 0 && rx.max_sg_burst > 0)
+		dws->dma_sg_burst = min(tx.max_sg_burst, rx.max_sg_burst);
+	else if (tx.max_sg_burst > 0)
+		dws->dma_sg_burst = tx.max_sg_burst;
+	else if (rx.max_sg_burst > 0)
+		dws->dma_sg_burst = rx.max_sg_burst;
+	else
+		dws->dma_sg_burst = 0;
+}
+
 static int dw_spi_dma_init_mfld(struct device *dev, struct dw_spi *dws)
 {
 	struct dw_dma_slave dma_tx = { .dst_id = 1 }, *tx = &dma_tx;
@@ -110,6 +127,8 @@ static int dw_spi_dma_init_mfld(struct device *dev, struct dw_spi *dws)
 
 	dw_spi_dma_maxburst_init(dws);
 
+	dw_spi_dma_sg_burst_init(dws);
+
 	return 0;
 
 free_rxchan:
@@ -139,6 +158,8 @@ static int dw_spi_dma_init_generic(struct device *dev, struct dw_spi *dws)
 
 	dw_spi_dma_maxburst_init(dws);
 
+	dw_spi_dma_sg_burst_init(dws);
+
 	return 0;
 }
 
@@ -459,11 +480,125 @@ static int dw_spi_dma_transfer_all(struct dw_spi *dws,
 	return ret;
 }
 
+static int dw_spi_dma_transfer_one(struct dw_spi *dws,
+				   struct spi_transfer *xfer)
+{
+	struct scatterlist *tx_sg = NULL, *rx_sg = NULL, tx_tmp, rx_tmp;
+	unsigned int tx_len = 0, rx_len = 0;
+	unsigned int base, len;
+	int ret;
+
+	sg_init_table(&tx_tmp, 1);
+	sg_init_table(&rx_tmp, 1);
+
+	/*
+	 * In case if at least one of the requested DMA channels doesn't
+	 * support the hardware accelerated SG list entries traverse, the DMA
+	 * driver will most likely work that around by performing the IRQ-based
+	 * SG list entries resubmission. That might and will cause a problem
+	 * if the DMA Tx channel is recharged and re-executed before the Rx DMA
+	 * channel. Due to non-deterministic IRQ-handler execution latency the
+	 * DMA Tx channel will start pushing data to the SPI bus before the
+	 * Rx DMA channel is even reinitialized with the next inbound SG list
+	 * entry. By doing so the DMA Tx channel will implicitly start filling
+	 * the DW APB SSI Rx FIFO up, which while the DMA Rx channel being
+	 * recharged and re-executed will eventually be overflown.
+	 *
+	 * In order to solve the problem we have to feed the DMA engine with SG
+	 * list entries one-by-one. It shall keep the DW APB SSI Tx and Rx
+	 * FIFOs synchronized and prevent the Rx FIFO overflow. Since in
+	 * general the tx_sg and rx_sg lists may have different number of
+	 * entries of different lengths (though total length should match)
+	 * let's virtually split the SG-lists to the set of DMA transfers,
+	 * which length is a minimum of the ordered SG-entries lengths.
+	 * An ASCII-sketch of the implemented algo is following:
+	 *                  xfer->len
+	 *                |___________|
+	 * tx_sg list:    |___|____|__|
+	 * rx_sg list:    |_|____|____|
+	 * DMA transfers: |_|_|__|_|__|
+	 *
+	 * Note in order to have this workaround solving the denoted problem
+	 * the DMA engine driver should properly initialize the max_sg_burst
+	 * capability and set the DMA device max segment size parameter with
+	 * maximum data block size the DMA engine supports.
+	 */
+	for (base = 0, len = 0; base < xfer->len; base += len) {
+		/* Fetch next Tx DMA data chunk */
+		if (!tx_len) {
+			tx_sg = !tx_sg ? &xfer->tx_sg.sgl[0] : sg_next(tx_sg);
+			sg_dma_address(&tx_tmp) = sg_dma_address(tx_sg);
+			tx_len = sg_dma_len(tx_sg);
+		}
+
+		/* Fetch next Rx DMA data chunk */
+		if (!rx_len) {
+			rx_sg = !rx_sg ? &xfer->rx_sg.sgl[0] : sg_next(rx_sg);
+			sg_dma_address(&rx_tmp) = sg_dma_address(rx_sg);
+			rx_len = sg_dma_len(rx_sg);
+		}
+
+		len = min(tx_len, rx_len);
+
+		sg_dma_len(&tx_tmp) = len;
+		sg_dma_len(&rx_tmp) = len;
+
+		/* Submit DMA Tx transfer */
+		ret = dw_spi_dma_submit_tx(dws, &tx_tmp, 1);
+		if (ret)
+			break;
+
+		/* Submit DMA Rx transfer */
+		ret = dw_spi_dma_submit_rx(dws, &rx_tmp, 1);
+		if (ret)
+			break;
+
+		/* Rx must be started before Tx due to SPI instinct */
+		dma_async_issue_pending(dws->rxchan);
+
+		dma_async_issue_pending(dws->txchan);
+
+		/*
+		 * Here we only need to wait for the DMA transfer to be
+		 * finished since SPI controller is kept enabled during the
+		 * procedure this loop implements and there is no risk to lose
+		 * data left in the Tx/Rx FIFOs.
+		 */
+		ret = dw_spi_dma_wait(dws, len, xfer->effective_speed_hz);
+		if (ret)
+			break;
+
+		reinit_completion(&dws->dma_completion);
+
+		sg_dma_address(&tx_tmp) += len;
+		sg_dma_address(&rx_tmp) += len;
+		tx_len -= len;
+		rx_len -= len;
+	}
+
+	dw_writel(dws, DW_SPI_DMACR, 0);
+
+	return ret;
+}
+
 static int dw_spi_dma_transfer(struct dw_spi *dws, struct spi_transfer *xfer)
 {
+	unsigned int nents;
 	int ret;
 
-	ret = dw_spi_dma_transfer_all(dws, xfer);
+	nents = max(xfer->tx_sg.nents, xfer->rx_sg.nents);
+
+	/*
+	 * Execute normal DMA-based transfer (which submits the Rx and Tx SG
+	 * lists directly to the DMA engine at once) if either full hardware
+	 * accelerated SG list traverse is supported by both channels, or the
+	 * Tx-only SPI transfer is requested, or the DMA engine is capable to
+	 * handle both SG lists on hardware accelerated basis.
+	 */
+	if (!dws->dma_sg_burst || !xfer->rx_buf || nents <= dws->dma_sg_burst)
+		ret = dw_spi_dma_transfer_all(dws, xfer);
+	else
+		ret = dw_spi_dma_transfer_one(dws, xfer);
 	if (ret)
 		return ret;
 
diff --git a/drivers/spi/spi-dw.h b/drivers/spi/spi-dw.h
index 151ba316619e..1d201c62d292 100644
--- a/drivers/spi/spi-dw.h
+++ b/drivers/spi/spi-dw.h
@@ -146,6 +146,7 @@ struct dw_spi {
 	u32			txburst;
 	struct dma_chan		*rxchan;
 	u32			rxburst;
+	u32			dma_sg_burst;
 	unsigned long		dma_chan_busy;
 	dma_addr_t		dma_addr; /* phy address of the Data register */
 	const struct dw_spi_dma_ops *dma_ops;
-- 
2.27.0



* Re: [PATCH 4/8] spi: dw-dma: Move DMA transfers submission to the channels prep methods
From: Andy Shevchenko @ 2020-07-31  9:15 UTC
  To: Serge Semin
  Cc: Mark Brown, Serge Semin, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi, Feng Tang,
	Vinod Koul, linux-spi, linux-kernel

On Fri, Jul 31, 2020 at 10:59:49AM +0300, Serge Semin wrote:
> Indeed we can freely move the dmaengine_submit() method invocation and
> the Tx and Rx busy flag setting into the DMA Tx/Rx prepare methods. By
> doing so, first, we implement another preparation step before adding the
> one-by-one DMA SG entries transmission; second, the
> dma_async_tx_descriptor descriptor is now used locally only in the new
> DMA transfer submission methods, which makes the code less complex since
> the DMA Tx descriptors no longer get passed around; third, the generic
> transfer method becomes more readable, with the submission, execution and
> wait procedures now transparently split up instead of having a
> preparation, an intermixed submission/execution and a wait procedure.
> While at it we also add a dmaengine_submit() return value test. It has
> been unnecessary for the DW DMAC, but should be done to support the
> generic DMA interface.
> 
> Note since the DMA channels preparation methods are now responsible for
> the DMA transactions submission, we also rename them to
> dw_spi_dma_submit_{tx,rx}().

...

> +	cookie = dmaengine_submit(txdesc);
> +	ret = dma_submit_error(cookie);
> +	if (!ret)

Use traditional pattern
	if (ret)
		return ret;

Same for below.

> +		set_bit(TX_BUSY, &dws->dma_chan_busy);
> +
> +	return ret;
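
I.e. something like this (sketch):

	cookie = dmaengine_submit(txdesc);
	ret = dma_submit_error(cookie);
	if (ret)
		return ret;

	set_bit(TX_BUSY, &dws->dma_chan_busy);

	return 0;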

...

> -	if (!xfer->rx_buf)
> -		return NULL;

This seems not related.

-- 
With Best Regards,
Andy Shevchenko




* Re: [PATCH 3/8] spi: dw-dma: Configure the DMA channels in dma_setup
From: Andy Shevchenko @ 2020-07-31  9:16 UTC
  To: Serge Semin
  Cc: Mark Brown, Serge Semin, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi, Feng Tang,
	Vinod Koul, linux-spi, linux-kernel

On Fri, Jul 31, 2020 at 10:59:48AM +0300, Serge Semin wrote:
> Mainly this is a preparation patch before adding the one-by-one DMA SG
> entries transmission. But logically the Tx and Rx DMA channels setup
> should be performed in the dma_setup() callback anyway. So let's move the
> DMA slave channels src/dst burst lengths, address and address width
> configuration to the DMA setup stage. While at it make sure the return
> value of the dmaengine_slave_config() method is checked. It has been
> unnecessary while the DW DMAC was the only DMA engine used, since its
> device_config() callback always returns zero (though that might change in
> the future). But since the DW APB SSI driver now supports any DMA
> back-end, we must make sure the DMA device configuration has been
> successful before proceeding with further setups.

...

> +	if (!xfer->rx_buf)
> +		return NULL;

...

> +	if (xfer->rx_buf) {

> +	}

This looks like a separate change to drop one of them and not hide in the next patch.

-- 
With Best Regards,
Andy Shevchenko




* Re: [PATCH 0/8] spi: dw-dma: Add max SG entries burst capability support
From: Andy Shevchenko @ 2020-07-31  9:26 UTC
  To: Serge Semin
  Cc: Mark Brown, Serge Semin, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi, Feng Tang,
	Vinod Koul, linux-spi, linux-kernel

On Fri, Jul 31, 2020 at 10:59:45AM +0300, Serge Semin wrote:

...

> Note since the DMA-engine subsystem in kernel 5.8-rcX doesn't support the
> max_sg_burst capability yet, this series is intended to be applied only
> after the "next" branch of the DMA-engine repository is merged into the
> mainline repo. Alternatively the series could be merged in through the
> DMA-engine repo.

This needs to be thought through...

I gave some simple comments (and on top of that, just try not to modify
the same lines two or more times inside one series, e.g. ret = func() ->
return func() -> ret = func() in one case).

-- 
With Best Regards,
Andy Shevchenko




* Re: [PATCH 3/8] spi: dw-dma: Configure the DMA channels in dma_setup
From: Serge Semin @ 2020-07-31 12:32 UTC
  To: Andy Shevchenko
  Cc: Mark Brown, Alexey Malahov, Georgy Vlasov, Ramil Zaripov,
	Pavel Parkhomenko, Peter Ujfalusi, Feng Tang, Vinod Koul,
	linux-spi, linux-kernel

On Fri, Jul 31, 2020 at 12:16:38PM +0300, Andy Shevchenko wrote:
> On Fri, Jul 31, 2020 at 10:59:48AM +0300, Serge Semin wrote:
> > Mainly this is a preparation patch before adding the one-by-one DMA SG
> > entries transmission. But logically the Tx and Rx DMA channels setup
> > should be performed in the dma_setup() callback anyway. So let's move the
> > DMA slave channels src/dst burst lengths, address and address width
> > configuration to the DMA setup stage. While at it make sure the return
> > value of the dmaengine_slave_config() method is checked. It has been
> > unnecessary while the DW DMAC was the only DMA engine used, since its
> > device_config() callback always returns zero (though that might change in
> > the future). But since the DW APB SSI driver now supports any DMA
> > back-end, we must make sure the DMA device configuration has been
> > successful before proceeding with further setups.
> 
> ...
> 

Part 1:
> > +	if (!xfer->rx_buf)
> > +		return NULL;
> 
> ...
> 

Part 2:
> > +	if (xfer->rx_buf) {
> 
> > +	}
> 
> This looks like a separate change to drop one of them and not hide in the next patch.

Both of these changes are parts of a single alteration introduced to
detach the dw_spi_dma_{config,prepare}_{rx,tx}() methods from each other.
Part 1 is a statement which belongs to the dw_spi_dma_prepare_rx() method
and is left there after dw_spi_dma_config_rx() has been detached from it.
Part 2 is the logic which must be present in dw_spi_dma_setup(), since we
don't need to configure the Rx DMA channel if rx_buf isn't specified.

Please read the commit log more carefully. I didn't introduce anything
other than the changes described there.

-Sergey

> 
> -- 
> With Best Regards,
> Andy Shevchenko
> 
> 


* Re: [PATCH 4/8] spi: dw-dma: Move DMA transfers submission to the channels prep methods
From: Serge Semin @ 2020-07-31 12:46 UTC
  To: Andy Shevchenko
  Cc: Serge Semin, Mark Brown, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi, Feng Tang,
	Vinod Koul, linux-spi, linux-kernel

On Fri, Jul 31, 2020 at 12:15:28PM +0300, Andy Shevchenko wrote:
> On Fri, Jul 31, 2020 at 10:59:49AM +0300, Serge Semin wrote:
> > Indeed we can freely move the dmaengine_submit() method invocation and the
> > Tx and Rx busy flag setting into the DMA Tx/Rx prepare methods. By doing
> > so first we implement another preparation before adding the one-by-one DMA
> > SG entries transmission, second we now have the dma_async_tx_descriptor
> > descriptor used locally only in the new DMA transfers submitition methods,
> > which makes the code less complex with no passing around the DMA Tx
> > descriptors, third we make the generic transfer method more readable, where
> > now the functionality of submission, execution and wait procedures is
> > transparently split up instead of having a preparation, intermixed
> > submission/execution and wait procedures. While at it we also add the
> > dmaengine_submit() return value test. It has been unnecessary for
> > DW DMAC, but should be done to support the generic DMA interface.
> > 
> > Note since the DMA channels preparation methods are now responsible for
> > the DMA transactions submission, we also rename them to
> > dw_spi_dma_submit_{tx,rx}().
> 
> ...
> 
> > +	cookie = dmaengine_submit(txdesc);
> > +	ret = dma_submit_error(cookie);
> > +	if (!ret)
> 

> Use traditional pattern
> 	if (ret)
> 		return ret;
> 
> Same for below.

Ok.

> 
> > +		set_bit(TX_BUSY, &dws->dma_chan_busy);
> > +
> > +	return ret;
> 
> ...
> 

> > -	if (!xfer->rx_buf)
> > -		return NULL;
> 
> This seems not related.

I moved it to the upper level for better maintainability of the methods.

-Sergey

> 
> -- 
> With Best Regards,
> Andy Shevchenko
> 
> 


* Re: [PATCH 0/8] spi: dw-dma: Add max SG entries burst capability support
From: Serge Semin @ 2020-07-31 12:59 UTC
  To: Andy Shevchenko
  Cc: Serge Semin, Mark Brown, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi, Feng Tang,
	Vinod Koul, linux-spi, linux-kernel

On Fri, Jul 31, 2020 at 12:26:12PM +0300, Andy Shevchenko wrote:
> On Fri, Jul 31, 2020 at 10:59:45AM +0300, Serge Semin wrote:
> 
> ...
> 
> > Note since the DMA-engine subsystem in kernel 5.8-rcX doesn't support the
> > max_sg_burst capability yet, this series is intended to be applied only
> > after the "next" branch of the DMA-engine repository is merged into the
> > mainline repo. Alternatively the series could be merged in through the
> > DMA-engine repo.
> 

> This needs to be thought through...

There is nothing to think about: either Mark completes review/accepts
the series and Vinod agrees to merge it in through the DMA-engine repo,
or we'll have to wait until the next merge window is closed and then
merge the series in the traditional way through the SPI repo.

-Sergey

> 
> I gave some simple comments (and on top of that, just try not to modify
> the same lines two or more times inside one series, e.g. ret = func() ->
> return func() -> ret = func() in one case).
> 
> -- 
> With Best Regards,
> Andy Shevchenko
> 
> 


* Re: [PATCH 0/8] spi: dw-dma: Add max SG entries burst capability support
From: Mark Brown @ 2020-08-04 21:14 UTC
  To: Serge Semin
  Cc: Andy Shevchenko, Serge Semin, Alexey Malahov, Georgy Vlasov,
	Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi, Feng Tang,
	Vinod Koul, linux-spi, linux-kernel


On Fri, Jul 31, 2020 at 03:59:54PM +0300, Serge Semin wrote:
> On Fri, Jul 31, 2020 at 12:26:12PM +0300, Andy Shevchenko wrote:
> > On Fri, Jul 31, 2020 at 10:59:45AM +0300, Serge Semin wrote:

> > > Note since the DMA-engine subsystem in kernel 5.8-rcX doesn't support the
> > > max_sg_burst capability yet, this series is intended to be applied only
> > > after the "next" branch of the DMA-engine repository is merged into the
> > > mainline repo. Alternatively the series could be merged in through the
> > > DMA-engine repo.

> > This needs to be thought through...

> > There is nothing to think about: either Mark completes review/accepts
> > the series and Vinod agrees to merge it in through the DMA-engine repo,
> > or we'll have to wait until the next merge window is closed and then
> > merge the series in the traditional way through the SPI repo.

Well, the merge window is open now so this won't get applied till -rc1
anyway.



* Re: [PATCH 0/8] spi: dw-dma: Add max SG entries burst capability support
From: Vinod Koul @ 2020-08-05 11:31 UTC
  To: Mark Brown
  Cc: Serge Semin, Andy Shevchenko, Serge Semin, Alexey Malahov,
	Georgy Vlasov, Ramil Zaripov, Pavel Parkhomenko, Peter Ujfalusi,
	Feng Tang, linux-spi, linux-kernel

On 04-08-20, 22:14, Mark Brown wrote:
> On Fri, Jul 31, 2020 at 03:59:54PM +0300, Serge Semin wrote:
> > On Fri, Jul 31, 2020 at 12:26:12PM +0300, Andy Shevchenko wrote:
> > > On Fri, Jul 31, 2020 at 10:59:45AM +0300, Serge Semin wrote:
> 
> > > > Note since the DMA-engine subsystem in kernel 5.8-rcX doesn't support the
> > > > max_sg_burst capability yet, this series is intended to be applied only
> > > > after the "next" branch of the DMA-engine repository is merged into the
> > > > mainline repo. Alternatively the series could be merged in through the
> > > > DMA-engine repo.
> 
> > > This needs to be thought through...
> 
> > > There is nothing to think about: either Mark completes review/accepts
> > > the series and Vinod agrees to merge it in through the DMA-engine repo,
> > > or we'll have to wait until the next merge window is closed and then
> > > merge the series in the traditional way through the SPI repo.
> 
> Well, the merge window is open now so this won't get applied till -rc1
> anyway.

Yes, and the max_sg_burst capability is in my next branch and should be
in the PR to Linus this week, so rc1 should have it.

-- 
~Vinod

