Date: Thu, 21 May 2020 11:09:24 +0800
From: Feng Tang
To: Serge Semin
Cc: Mark Brown, Grant Likely, Vinod Koul, Alan Cox, Linus Walleij,
	Serge Semin, Georgy Vlasov, Ramil Zaripov, Alexey Malahov,
	Thomas Bogendoerfer, Paul Burton, Ralf Baechle, Arnd Bergmann,
	Andy Shevchenko, Rob Herring, linux-mips@vger.kernel.org,
	devicetree@vger.kernel.org, Jarkko Nikula, Thomas Gleixner,
	Wan Ahmad Zainie, Linus Walleij, Clement Leger,
	linux-spi@vger.kernel.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 01/16] spi: dw: Add Tx/Rx finish wait methods to the MID DMA
Message-ID: <20200521030924.GA12568@shbuild999.sh.intel.com>
References: <20200521012206.14472-1-Sergey.Semin@baikalelectronics.ru>
	<20200521012206.14472-2-Sergey.Semin@baikalelectronics.ru>
In-Reply-To: <20200521012206.14472-2-Sergey.Semin@baikalelectronics.ru>

Hi Serge,

On Thu, May 21, 2020 at 04:21:51AM +0300, Serge Semin wrote:
> Since DMA transfers are performed asynchronously with the actual SPI
> transaction, the DMA transfers being finished doesn't mean all the data
> has actually been pushed to the SPI bus. Some data might still be in the
> controller FIFO. This is specifically true for Tx-only transfers. In
> this case, if the next SPI transfer is started while a tail of the
> previous one is still in the FIFO, we'll lose that tail data. In order
> to fix this, let's add a wait for Tx/Rx SPI transfer completion after
> the corresponding DMA transactions are finished.
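
If I read the log right, the problem is that the DMA engine signals
completion as soon as the last word has been written into the
controller's Tx FIFO, while the shift register may still be clocking the
tail of the transfer onto the wire, so the driver has to drain the FIFO
before declaring the transfer done. A minimal sketch of that kind of
wait is below; it reuses dw_readl(), DW_SPI_SR and SR_TF_EMPT from
spi-dw.h, but the helper name, the fixed retry count and the 1 us poll
interval are only illustrative (the patch derives the delay from the SPI
clock instead):

#include <linux/delay.h>	/* udelay() */

#include "spi-dw.h"		/* struct dw_spi, dw_readl(), DW_SPI_SR, SR_TF_EMPT */

/* Illustrative only: poll until the Tx FIFO reports empty, giving up after
 * a bounded number of retries so a wedged controller can't stall the DMA
 * completion path forever. */
static bool example_tx_fifo_drained(struct dw_spi *dws)
{
	int retries = 5;

	while (retries--) {
		if (dw_readl(dws, DW_SPI_SR) & SR_TF_EMPT)
			return true;	/* every queued word has left the FIFO */

		udelay(1);		/* sized from the SPI clock in the real code */
	}

	return false;			/* FIFO never drained; caller reports -EIO */
}
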
> 
> Co-developed-by: Georgy Vlasov
> Signed-off-by: Georgy Vlasov
> Signed-off-by: Serge Semin
> Fixes: 7063c0d942a1 ("spi/dw_spi: add DMA support")
> Cc: Ramil Zaripov
> Cc: Alexey Malahov
> Cc: Thomas Bogendoerfer
> Cc: Paul Burton
> Cc: Ralf Baechle
> Cc: Arnd Bergmann
> Cc: Andy Shevchenko
> Cc: Rob Herring
> Cc: linux-mips@vger.kernel.org
> Cc: devicetree@vger.kernel.org
> 
> ---
> 
> Changelog v2:
> - Use conditional statement instead of the ternary operator in the ref
>   clock getter.
> - Move the patch to the head of the series so one could be picked up to
>   the stable kernels as a fix.
> 
> Changelog v3:
> - Use spi_delay_exec() method to wait for the current operation completion.
> ---
>  drivers/spi/spi-dw-mid.c | 69 ++++++++++++++++++++++++++++++++++++++++
>  drivers/spi/spi-dw.h     | 10 ++++++
>  2 files changed, 79 insertions(+)
> 
> diff --git a/drivers/spi/spi-dw-mid.c b/drivers/spi/spi-dw-mid.c
> index f9757a370699..3526b196a7fc 100644
> --- a/drivers/spi/spi-dw-mid.c
> +++ b/drivers/spi/spi-dw-mid.c
> @@ -17,6 +17,7 @@
>  #include
>  #include
> 
> +#define WAIT_RETRIES	5
>  #define RX_BUSY		0
>  #define TX_BUSY		1
> 
> @@ -143,6 +144,47 @@ static enum dma_slave_buswidth convert_dma_width(u32 dma_width) {
>  	return DMA_SLAVE_BUSWIDTH_UNDEFINED;
>  }
> 
> +static void dw_spi_dma_calc_delay(struct dw_spi *dws, u32 nents,
> +				  struct spi_delay *delay)
> +{
> +	unsigned long ns, us;
> +
> +	ns = (NSEC_PER_SEC / spi_get_clk(dws)) * nents * dws->n_bytes *
> +	     BITS_PER_BYTE;
> +
> +	if (ns <= NSEC_PER_USEC) {
> +		delay->unit = SPI_DELAY_UNIT_NSECS;
> +		delay->value = ns;
> +	} else {
> +		us = DIV_ROUND_UP(ns, NSEC_PER_USEC);
> +		delay->unit = SPI_DELAY_UNIT_USECS;
> +		delay->value = clamp_val(us, 0, USHRT_MAX);
> +	}
> +}
> +
> +static inline bool dw_spi_dma_tx_busy(struct dw_spi *dws)
> +{
> +	return !(dw_readl(dws, DW_SPI_SR) & SR_TF_EMPT);
> +}
> +
> +static void dw_spi_dma_wait_tx_done(struct dw_spi *dws)
> +{
> +	int retry = WAIT_RETRIES;
> +	struct spi_delay delay;
> +	u32 nents;
> +
> +	nents = dw_readl(dws, DW_SPI_TXFLR);
> +	dw_spi_dma_calc_delay(dws, nents, &delay);
> +
> +	while (dw_spi_dma_tx_busy(dws) && retry--)
> +		spi_delay_exec(&delay, NULL);
> +
> +	if (retry < 0) {
> +		dev_err(&dws->master->dev, "Tx hanged up\n");
> +		dws->master->cur_msg->status = -EIO;
> +	}
> +}
> +
>  /*
>   * dws->dma_chan_busy is set before the dma transfer starts, callback for tx
>   * channel will clear a corresponding bit.
> @@ -151,6 +193,8 @@ static void dw_spi_dma_tx_done(void *arg)
>  {
>  	struct dw_spi *dws = arg;
> 
> +	dw_spi_dma_wait_tx_done(dws);
> +
>  	clear_bit(TX_BUSY, &dws->dma_chan_busy);
>  	if (test_bit(RX_BUSY, &dws->dma_chan_busy))
>  		return;
> @@ -192,6 +236,29 @@ static struct dma_async_tx_descriptor *dw_spi_dma_prepare_tx(struct dw_spi *dws,
>  	return txdesc;
>  }
> 
> +static inline bool dw_spi_dma_rx_busy(struct dw_spi *dws)
> +{
> +	return !!(dw_readl(dws, DW_SPI_SR) & SR_RF_NOT_EMPT);
> +}
> +
> +static void dw_spi_dma_wait_rx_done(struct dw_spi *dws)
> +{
> +	int retry = WAIT_RETRIES;
> +	struct spi_delay delay;
> +	u32 nents;
> +
> +	nents = dw_readl(dws, DW_SPI_RXFLR);
> +	dw_spi_dma_calc_delay(dws, nents, &delay);
> +
> +	while (dw_spi_dma_rx_busy(dws) && retry--)
> +		spi_delay_exec(&delay, NULL);
> +
> +	if (retry < 0) {
> +		dev_err(&dws->master->dev, "Rx hanged up\n");
> +		dws->master->cur_msg->status = -EIO;
> +	}
> +}
> +
>  /*
>   * dws->dma_chan_busy is set before the dma transfer starts, callback for rx
>   * channel will clear a corresponding bit.
> @@ -200,6 +267,8 @@ static void dw_spi_dma_rx_done(void *arg)
>  {
>  	struct dw_spi *dws = arg;
> 
> +	dw_spi_dma_wait_rx_done(dws);

I can understand the problem for Tx, but I don't see how Rx would get
hurt. Can you elaborate?

Thanks,
Feng

> +
>  	clear_bit(RX_BUSY, &dws->dma_chan_busy);
>  	if (test_bit(TX_BUSY, &dws->dma_chan_busy))
>  		return;
> diff --git a/drivers/spi/spi-dw.h b/drivers/spi/spi-dw.h
> index e92d43b9a9e6..81364f501b7e 100644
> --- a/drivers/spi/spi-dw.h
> +++ b/drivers/spi/spi-dw.h
> @@ -210,6 +210,16 @@ static inline void spi_set_clk(struct dw_spi *dws, u16 div)
>  	dw_writel(dws, DW_SPI_BAUDR, div);
>  }
> 
> +static inline u32 spi_get_clk(struct dw_spi *dws)
> +{
> +	u32 div = dw_readl(dws, DW_SPI_BAUDR);
> +
> +	if (!div)
> +		return 0;
> +
> +	return dws->max_freq / div;
> +}
> +
>  /* Disable IRQ bits */
>  static inline void spi_mask_intr(struct dw_spi *dws, u32 mask)
>  {
> -- 
> 2.25.1
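
As a quick sanity check of the arithmetic in dw_spi_dma_calc_delay()
combined with the new spi_get_clk() helper, the stand-alone sketch below
spells out the kernel macros in plain C. The numbers (25 MHz ref clock,
BAUDR divider of 4, 16 FIFO entries, 1 byte per frame) are assumptions
picked for illustration, not values taken from the patch:

#include <stdio.h>

#define NSEC_PER_SEC	1000000000UL
#define NSEC_PER_USEC	1000UL
#define BITS_PER_BYTE	8UL

int main(void)
{
	unsigned long max_freq = 25000000UL;	/* assumed controller ref clock */
	unsigned long baudr = 4;		/* assumed DW_SPI_BAUDR divider */
	unsigned long nents = 16;		/* assumed DW_SPI_TXFLR reading */
	unsigned long n_bytes = 1;		/* 8-bit frames */

	/* spi_get_clk(): bus clock is the ref clock divided by BAUDR */
	unsigned long clk = max_freq / baudr;

	/* dw_spi_dma_calc_delay(): time needed to shift the queued bits out */
	unsigned long ns = (NSEC_PER_SEC / clk) * nents * n_bytes * BITS_PER_BYTE;

	if (ns <= NSEC_PER_USEC)
		printf("wait %lu ns per retry\n", ns);
	else	/* DIV_ROUND_UP(ns, NSEC_PER_USEC) */
		printf("wait %lu us per retry\n",
		       (ns + NSEC_PER_USEC - 1) / NSEC_PER_USEC);

	return 0;
}

With these numbers the bus runs at 6.25 MHz, one bit takes 160 ns, and the
128 queued bits take 20480 ns, which the helper rounds up to a 21 us delay
unit; WAIT_RETRIES (5) polls of that bound the wait at roughly 105 us.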