linux-kernel.vger.kernel.org archive mirror
* [PATCH RFC 0/4] serial,dma: use DMA cyclic for IMX UART driver
@ 2016-06-09 12:16 Nandor Han
  2016-06-09 12:16 ` [PATCH RFC 1/4] dma: imx-sdma - reduce transfer latency for DMA cyclic clients Nandor Han
                   ` (4 more replies)
  0 siblings, 5 replies; 14+ messages in thread
From: Nandor Han @ 2016-06-09 12:16 UTC (permalink / raw)
  To: Dan Williams, Vinod Koul, Greg Kroah-Hartman, Jiri Slaby
  Cc: Nandor Han, dmaengine, linux-serial, linux-kernel

Update the IMX UART driver to use cyclic DMA in order
to avoid losing serial data because of DMA start latency.

Support variable data length transfers for cyclic channels in the
IMX SDMA (Smart DMA) driver and, as specified in
"Documentation/dmaengine/provider.txt" section "General Design Notes",
run the SDMA channel callback directly from the SDMA interrupt context.

Each commit message explains its changes.

Impacted devices:
  IMX6i, IMX53 CPUs:
        - clients of the IMX UART that have DMA enabled.
        - cyclic SDMA clients
            (Note: not able to find any other than the IMX UART driver)

Functionality was tested by connecting the m53evk to my laptop using a
USB-serial converter and sending, from the laptop, 800-byte packets
at 8 ms intervals.

On the m53evk, with interrupt load generated by connecting/disconnecting,
at a 3 s interval, a USB hub with some devices (~4) attached,
I receive serial data in a loop and check that no overruns
occur, using the command:

    `watch -n1 cat /proc/tty/driver/IMX-uart`

I also ran tests where I checked how the data is received when sending
packets bigger than a buffer descriptor (1024 bytes). In these tests I
sent 4 packets (1028, 2048, 2048, 2048 bytes) and checked that 7172 bytes
were received, using the command:
    `watch -n1 cat /proc/tty/driver/IMX-uart`

    Result:
        serinfo:1.0 driver revision:
        2: uart:IMX mmio:0x5000C000 irq:49 tx:0 rx:7172 RTS|CTS|DTR|DSR|CD
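
For reference, a minimal sketch of the kind of laptop-side sender used
for this test (the device node, fill pattern and exact timing below are
assumptions; the actual test program is not part of this series):

    /* Sketch only: send 800-byte packets every 8 ms at 115200 8N1. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <termios.h>
    #include <unistd.h>

    int main(void)
    {
            unsigned char packet[800];
            struct termios tio;
            int fd;

            memset(packet, 0x55, sizeof(packet));

            fd = open("/dev/ttyUSB0", O_WRONLY | O_NOCTTY);  /* assumed node */
            if (fd < 0) {
                    perror("open");
                    return 1;
            }

            memset(&tio, 0, sizeof(tio));
            cfmakeraw(&tio);                 /* raw mode, 8 data bits */
            tio.c_cflag |= CLOCAL | CREAD;   /* ignore modem control lines */
            cfsetospeed(&tio, B115200);
            cfsetispeed(&tio, B115200);
            tcsetattr(fd, TCSANOW, &tio);

            for (;;) {
                    if (write(fd, packet, sizeof(packet)) < 0)
                            break;
                    tcdrain(fd);             /* wait until the packet is out */
                    usleep(8000);            /* 8 ms gap before the next one */
            }

            close(fd);
            return 0;
    }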


Nandor Han (4):
  dma: imx-sdma - reduce transfer latency for DMA cyclic clients
  dma: imx-sdma - update the residue calculation for cyclic channels
  serial: imx-serial - update UART IMX driver to use cyclic DMA
  serial: imx-serial - update RX error counters when DMA is used

 drivers/dma/imx-sdma.c   |  56 +++++++++------
 drivers/tty/serial/imx.c | 173 ++++++++++++++++++++++++++++++-----------------
 2 files changed, 144 insertions(+), 85 deletions(-)

-- 
2.8.3


* [PATCH RFC 1/4] dma: imx-sdma - reduce transfer latency for DMA cyclic clients
  2016-06-09 12:16 [PATCH RFC 0/4] serial,dma: use DMA cyclic for IMX UART driver Nandor Han
@ 2016-06-09 12:16 ` Nandor Han
  2016-06-28 14:34   ` Vinod Koul
  2016-07-14  8:32   ` Peter Senna Tschudin
  2016-06-09 12:16 ` [PATCH RFC 2/4] dma: imx-sdma - update the residue calculation for cyclic channels Nandor Han
                   ` (3 subsequent siblings)
  4 siblings, 2 replies; 14+ messages in thread
From: Nandor Han @ 2016-06-09 12:16 UTC (permalink / raw)
  To: Dan Williams, Vinod Koul, Greg Kroah-Hartman, Jiri Slaby
  Cc: Nandor Han, dmaengine, linux-serial, linux-kernel

Having the SDMA driver use a tasklet for running the clients'
callbacks introduces some issues:
  - a probability of desynchronized data, because of the race
    condition created since the DMA transaction status is
    retrieved only when the callback is executed, leaving
    plenty of time for the transaction status to get altered.
  - inter-transfer latency, which can leave channels idle.

Move the callback execution, for cyclic channels, to the SDMA
interrupt (as advised in `Documentation/dmaengine/provider.txt`)
to (a) reduce the inter-transfer latency and (b) eliminate the
possibility of a race condition where the DMA transaction status
might have changed by the time it is read.

The responsibility for the SDMA interrupt latency is moved to
the SDMA clients, which should, case by case, defer the work to
bottom halves when needed.
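
To illustrate the last point, a minimal sketch (not part of this patch)
of how a cyclic SDMA client, whose callback now runs in hard-IRQ
context, could defer its heavier work to a bottom half; `struct my_dev`,
`rx_work` and `my_dev_process_rx` are hypothetical names:

    #include <linux/dmaengine.h>
    #include <linux/kernel.h>
    #include <linux/workqueue.h>

    struct my_dev {
            struct dma_chan *rx_chan;
            struct work_struct rx_work;   /* INIT_WORK()ed at probe time */
    };

    /* DMA callback: runs in SDMA interrupt context after this patch. */
    static void my_dev_dma_callback(void *data)
    {
            struct my_dev *dev = data;

            /* Keep the IRQ path short, defer the heavy lifting. */
            schedule_work(&dev->rx_work);
    }

    /* Bottom half: copying and pushing the data upstream happens here. */
    static void my_dev_process_rx(struct work_struct *work)
    {
            struct my_dev *dev = container_of(work, struct my_dev, rx_work);

            (void)dev;   /* real processing would go here */
    }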

Signed-off-by: Nandor Han <nandor.han@ge.com>
---
 drivers/dma/imx-sdma.c | 36 ++++++++++++++++--------------------
 1 file changed, 16 insertions(+), 20 deletions(-)

diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
index 0f6fd42..e497847 100644
--- a/drivers/dma/imx-sdma.c
+++ b/drivers/dma/imx-sdma.c
@@ -654,12 +654,6 @@ static void sdma_event_disable(struct sdma_channel *sdmac, unsigned int event)
 	writel_relaxed(val, sdma->regs + chnenbl);
 }
 
-static void sdma_handle_channel_loop(struct sdma_channel *sdmac)
-{
-	if (sdmac->desc.callback)
-		sdmac->desc.callback(sdmac->desc.callback_param);
-}
-
 static void sdma_update_channel_loop(struct sdma_channel *sdmac)
 {
 	struct sdma_buffer_descriptor *bd;
@@ -678,13 +672,25 @@ static void sdma_update_channel_loop(struct sdma_channel *sdmac)
 			sdmac->status = DMA_ERROR;
 
 		bd->mode.status |= BD_DONE;
+
+		/*
+		 * The callback is called from the interrupt context in order
+		 * to reduce latency and to avoid the risk of altering the
+		 * SDMA transaction status by the time the client tasklet is
+		 * executed.
+		 */
+
+		if (sdmac->desc.callback)
+			sdmac->desc.callback(sdmac->desc.callback_param);
+
 		sdmac->buf_tail++;
 		sdmac->buf_tail %= sdmac->num_bd;
 	}
 }
 
-static void mxc_sdma_handle_channel_normal(struct sdma_channel *sdmac)
+static void mxc_sdma_handle_channel_normal(unsigned long data)
 {
+	struct sdma_channel *sdmac = (struct sdma_channel *) data;
 	struct sdma_buffer_descriptor *bd;
 	int i, error = 0;
 
@@ -711,16 +717,6 @@ static void mxc_sdma_handle_channel_normal(struct sdma_channel *sdmac)
 		sdmac->desc.callback(sdmac->desc.callback_param);
 }
 
-static void sdma_tasklet(unsigned long data)
-{
-	struct sdma_channel *sdmac = (struct sdma_channel *) data;
-
-	if (sdmac->flags & IMX_DMA_SG_LOOP)
-		sdma_handle_channel_loop(sdmac);
-	else
-		mxc_sdma_handle_channel_normal(sdmac);
-}
-
 static irqreturn_t sdma_int_handler(int irq, void *dev_id)
 {
 	struct sdma_engine *sdma = dev_id;
@@ -737,8 +733,8 @@ static irqreturn_t sdma_int_handler(int irq, void *dev_id)
 
 		if (sdmac->flags & IMX_DMA_SG_LOOP)
 			sdma_update_channel_loop(sdmac);
-
-		tasklet_schedule(&sdmac->tasklet);
+		else
+			tasklet_schedule(&sdmac->tasklet);
 
 		__clear_bit(channel, &stat);
 	}
@@ -1739,7 +1735,7 @@ static int sdma_probe(struct platform_device *pdev)
 		dma_cookie_init(&sdmac->chan);
 		sdmac->channel = i;
 
-		tasklet_init(&sdmac->tasklet, sdma_tasklet,
+		tasklet_init(&sdmac->tasklet, mxc_sdma_handle_channel_normal,
 			     (unsigned long) sdmac);
 		/*
 		 * Add the channel to the DMAC list. Do not add channel 0 though
-- 
2.8.3


* [PATCH RFC 2/4] dma: imx-sdma - update the residue calculation for cyclic channels
  2016-06-09 12:16 [PATCH RFC 0/4] serial,dma: use DMA cyclic for IMX UART driver Nandor Han
  2016-06-09 12:16 ` [PATCH RFC 1/4] dma: imx-sdma - reduce transfer latency for DMA cyclic clients Nandor Han
@ 2016-06-09 12:16 ` Nandor Han
  2016-06-09 12:16 ` [PATCH RFC 3/4] serial: imx-serial - update UART IMX driver to use cyclic DMA Nandor Han
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 14+ messages in thread
From: Nandor Han @ 2016-06-09 12:16 UTC (permalink / raw)
  To: Dan Williams, Vinod Koul, Greg Kroah-Hartman, Jiri Slaby
  Cc: Nandor Han, dmaengine, linux-serial, linux-kernel

The calculation of the DMA transaction residue supports only fixed
size data transfers. This implementation does not cover all
operations (e.g. data receiving) where we need to know the exact
number of bytes transferred.

The handling of loop channels was changed to clear the buffer
descriptor errors and to use bd->mode.count to calculate the
residue.
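
As a worked example of the new residue formula (illustrative numbers
only, assuming a cyclic transfer with 4 buffer descriptors of 1024
bytes each, evaluated while the callback for the second descriptor
runs, i.e. before buf_tail is incremented):

    /*
     * num_bd = 4, period_len = 1024, buf_tail = 1,
     * chn_real_count = 72 (bytes in the current buffer descriptor)
     */
    residue = (sdmac->num_bd - sdmac->buf_tail) *
               sdmac->period_len - sdmac->chn_real_count;
    /* = (4 - 1) * 1024 - 72 = 3000 */

A client can then derive the number of bytes received so far in the
current cycle as total buffer length - residue (4096 - 3000 = 1096
here).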

Signed-off-by: Nandor Han <nandor.han@ge.com>
---
 drivers/dma/imx-sdma.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
index e497847..9da258a 100644
--- a/drivers/dma/imx-sdma.c
+++ b/drivers/dma/imx-sdma.c
@@ -657,6 +657,8 @@ static void sdma_event_disable(struct sdma_channel *sdmac, unsigned int event)
 static void sdma_update_channel_loop(struct sdma_channel *sdmac)
 {
 	struct sdma_buffer_descriptor *bd;
+	int error = 0;
+	enum dma_status	old_status = sdmac->status;
 
 	/*
 	 * loop mode. Iterate over descriptors, re-setup them and
@@ -668,10 +670,20 @@ static void sdma_update_channel_loop(struct sdma_channel *sdmac)
 		if (bd->mode.status & BD_DONE)
 			break;
 
-		if (bd->mode.status & BD_RROR)
+		if (bd->mode.status & BD_RROR) {
+			bd->mode.status &= ~BD_RROR;
 			sdmac->status = DMA_ERROR;
+			error = -EIO;
+		}
 
+	       /*
+		* We use bd->mode.count to calculate the residue, since it holds
+		* the number of bytes present in the current buffer descriptor.
+		*/
+
+		sdmac->chn_real_count = bd->mode.count;
 		bd->mode.status |= BD_DONE;
+		bd->mode.count = sdmac->period_len;
 
 		/*
 		 * The callback is called from the interrupt context in order
@@ -685,6 +697,9 @@ static void sdma_update_channel_loop(struct sdma_channel *sdmac)
 
 		sdmac->buf_tail++;
 		sdmac->buf_tail %= sdmac->num_bd;
+
+		if (error)
+			sdmac->status = old_status;
 	}
 }
 
@@ -1358,7 +1373,8 @@ static enum dma_status sdma_tx_status(struct dma_chan *chan,
 	u32 residue;
 
 	if (sdmac->flags & IMX_DMA_SG_LOOP)
-		residue = (sdmac->num_bd - sdmac->buf_tail) * sdmac->period_len;
+		residue = (sdmac->num_bd - sdmac->buf_tail) *
+			   sdmac->period_len - sdmac->chn_real_count;
 	else
 		residue = sdmac->chn_count - sdmac->chn_real_count;
 
-- 
2.8.3


* [PATCH RFC 3/4] serial: imx-serial - update UART IMX driver to use cyclic DMA
  2016-06-09 12:16 [PATCH RFC 0/4] serial,dma: use DMA cyclic for IMX UART driver Nandor Han
  2016-06-09 12:16 ` [PATCH RFC 1/4] dma: imx-sdma - reduce transfer latency for DMA cyclic clients Nandor Han
  2016-06-09 12:16 ` [PATCH RFC 2/4] dma: imx-sdma - update the residue calculation for cyclic channels Nandor Han
@ 2016-06-09 12:16 ` Nandor Han
  2016-06-09 12:16 ` [PATCH RFC 4/4] serial: imx-serial - update RX error counters when DMA is used Nandor Han
  2016-08-08 12:38 ` [PATCH 0/4] serial,dmaengine: use DMA cyclic for IMX UART driver Nandor Han
  4 siblings, 0 replies; 14+ messages in thread
From: Nandor Han @ 2016-06-09 12:16 UTC (permalink / raw)
  To: Dan Williams, Vinod Koul, Greg Kroah-Hartman, Jiri Slaby
  Cc: Nandor Han, dmaengine, linux-serial, linux-kernel

The IMX UART has a 32-byte HW buffer which can be filled up in
2777 us at 115200 baud or 80 us at 4 Mbaud (supported by IMX53).
Taking this into consideration, there is a good probability of losing
data because of the DMA startup latency.
Our tests (explained below) indicate a latency of up to 4400 us when
creating interrupt load and ~70 us without. When creating interrupt
load I was able to see continuous overrun errors by checking the serial
driver statistics using the command:
`cat /proc/tty/driver/IMX-uart`.

Replace the manual restart of DMA with cyclic DMA to eliminate data loss
due to DMA engine startup latency (a similar approach to the
atmel_serial.c driver). As a result, the DMA engine starts on the first
serial data transfer and stops only when the serial port is closed.

Test environment:
 Using the m53evk board, I used a GPIO for profiling the IMX
 serial driver.
  - The RX line and the GPIO were connected to an oscilloscope.
  - A small test program runs on the m53evk board that only opens
    and reads data from the ttymxc2 port.
  - The ttymxc2 port is connected to my laptop using a USB serial
    converter, where another test program is running, able to send
    configurable packet lengths and intervals.
  - Serial ports configured at 115200 8N1.
  - Interrupt load created by disconnecting/connecting (3 s interval)
    a USB hub, using a digital switch, with 4 USB devices (USB-serial
    converter, USB SD card, etc.) connected
    (around 160 interrupts/second generated).
  - The GPIO was toggled high in `imx_int` when the USR1_RRDY or
    USR1_AGTIM events are received and toggled back low, once the DMA
    configuration is finalized, at the end of `imx_dma_rxint`.

Measurements:
The measurements were done from the end of the last byte (RX line) until
the GPIO was toggled back LOW.

Note: The GPIO toggling was done using `gpiod_set_value` method.

Tests performed:
   1. Sending 9-byte packets at 8 ms intervals. The 9-byte packets
      will activate the RRDY threshold event, so the IMX serial
      interrupt is called.
      Results:
        - DMA start latency (interrupt start latency +
           DMA configuration) consistently 70 us when the system is not loaded.
        - DMA start latency up to 4400 us when the system is loaded.
   2. Sending 40-byte packets at 8 ms intervals.
      Results with load:
        - Able to observe overruns by running:
           `watch -n1 cat /proc/tty/driver/IMX-uart`
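
For clarity, a worked example of the ring bookkeeping done in
dma_rx_callback() (assuming RX_BUF_SIZE = PAGE_SIZE = 4096, so 4
periods of 1024 bytes each; the residue value is made up):

    sg_dma_len(sgl) = 4096
    bd_size         = 4096 / rx_periods = 1024

    state.residue   = 3000                          (reported by SDMA)
    rx_ring->head   = 4096 - 3000 = 1096
    rx_ring->tail   = ((1096 - 1) / 1024) * 1024 = 1024
    r_bytes         = 1096 - 1024 = 72

i.e. the 72 new bytes at the start of the second buffer descriptor are
pushed to the tty layer with
tty_insert_flip_string(port, sport->rx_buf + 1024, 72).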

Signed-off-by: Nandor Han <nandor.han@ge.com>
---
 drivers/tty/serial/imx.c | 141 ++++++++++++++++++++++++++---------------------
 1 file changed, 78 insertions(+), 63 deletions(-)

diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
index 0df2b1c..1912136 100644
--- a/drivers/tty/serial/imx.c
+++ b/drivers/tty/serial/imx.c
@@ -222,6 +222,9 @@ struct imx_port {
 	struct dma_chan		*dma_chan_rx, *dma_chan_tx;
 	struct scatterlist	rx_sgl, tx_sgl[2];
 	void			*rx_buf;
+	struct circ_buf		rx_ring;
+	unsigned int		rx_periods;
+	dma_cookie_t		rx_cookie;
 	unsigned int		tx_bytes;
 	unsigned int		dma_tx_nents;
 	wait_queue_head_t	dma_wait;
@@ -932,30 +935,6 @@ static void imx_timeout(unsigned long data)
 }
 
 #define RX_BUF_SIZE	(PAGE_SIZE)
-static void imx_rx_dma_done(struct imx_port *sport)
-{
-	unsigned long temp;
-	unsigned long flags;
-
-	spin_lock_irqsave(&sport->port.lock, flags);
-
-	/* re-enable interrupts to get notified when new symbols are incoming */
-	temp = readl(sport->port.membase + UCR1);
-	temp |= UCR1_RRDYEN;
-	writel(temp, sport->port.membase + UCR1);
-
-	temp = readl(sport->port.membase + UCR2);
-	temp |= UCR2_ATEN;
-	writel(temp, sport->port.membase + UCR2);
-
-	sport->dma_is_rxing = 0;
-
-	/* Is the shutdown waiting for us? */
-	if (waitqueue_active(&sport->dma_wait))
-		wake_up(&sport->dma_wait);
-
-	spin_unlock_irqrestore(&sport->port.lock, flags);
-}
 
 /*
  * There are two kinds of RX DMA interrupts(such as in the MX6Q):
@@ -972,43 +951,75 @@ static void dma_rx_callback(void *data)
 	struct scatterlist *sgl = &sport->rx_sgl;
 	struct tty_port *port = &sport->port.state->port;
 	struct dma_tx_state state;
+	struct circ_buf *rx_ring = &sport->rx_ring;
 	enum dma_status status;
-	unsigned int count;
-
-	/* unmap it first */
-	dma_unmap_sg(sport->port.dev, sgl, 1, DMA_FROM_DEVICE);
+	unsigned int w_bytes = 0;
+	unsigned int r_bytes;
+	unsigned int bd_size;
 
 	status = dmaengine_tx_status(chan, (dma_cookie_t)0, &state);
-	count = RX_BUF_SIZE - state.residue;
 
-	dev_dbg(sport->port.dev, "We get %d bytes.\n", count);
+	if (status == DMA_ERROR) {
+		dev_err(sport->port.dev, "DMA transaction error.\n");
+		return;
+	}
+
+	if (!(sport->port.ignore_status_mask & URXD_DUMMY_READ)) {
+
+		/*
+		 * The state-residue variable represents the empty space
+		 * relative to the entire buffer. Taking this into consideration
+		 * the head is always calculated based on the buffer total
+		 * length - DMA transaction residue. The UART script from the
+		 * SDMA firmware will jump to the next buffer descriptor,
+		 * once a DMA transaction is finalized (IMX53 RM - A.4.1.2.4).
+		 * Taking this into consideration the tail is always at the
+		 * beginning of the buffer descriptor that contains the head.
+		 */
+
+		/* Calculate the head */
+		rx_ring->head = sg_dma_len(sgl) - state.residue;
+
+		/* Calculate the tail. */
+		bd_size = sg_dma_len(sgl) / sport->rx_periods;
+		rx_ring->tail = ((rx_ring->head-1) / bd_size) * bd_size;
+
+		if (rx_ring->head <= sg_dma_len(sgl) &&
+		    rx_ring->head > rx_ring->tail) {
+
+			/* Move data from tail to head */
+			r_bytes = rx_ring->head - rx_ring->tail;
+
+			/* CPU claims ownership of RX DMA buffer */
+			dma_sync_sg_for_cpu(sport->port.dev, sgl, 1,
+				DMA_FROM_DEVICE);
 
-	if (count) {
-		if (!(sport->port.ignore_status_mask & URXD_DUMMY_READ)) {
-			int bytes = tty_insert_flip_string(port, sport->rx_buf,
-					count);
+			w_bytes = tty_insert_flip_string(port,
+				sport->rx_buf + rx_ring->tail, r_bytes);
 
-			if (bytes != count)
+			/* UART retrieves ownership of RX DMA buffer */
+			dma_sync_sg_for_device(sport->port.dev, sgl, 1,
+				DMA_FROM_DEVICE);
+
+			if (w_bytes != r_bytes)
 				sport->port.icount.buf_overrun++;
+
+			sport->port.icount.rx += w_bytes;
+		} else	{
+			WARN_ON(rx_ring->head > sg_dma_len(sgl));
+			WARN_ON(rx_ring->head <= rx_ring->tail);
 		}
-		tty_flip_buffer_push(port);
-		sport->port.icount.rx += count;
 	}
 
-	/*
-	 * Restart RX DMA directly if more data is available in order to skip
-	 * the roundtrip through the IRQ handler. If there is some data already
-	 * in the FIFO, DMA needs to be restarted soon anyways.
-	 *
-	 * Otherwise stop the DMA and reactivate FIFO IRQs to restart DMA once
-	 * data starts to arrive again.
-	 */
-	if (readl(sport->port.membase + USR2) & USR2_RDR)
-		start_rx_dma(sport);
-	else
-		imx_rx_dma_done(sport);
+	if (w_bytes) {
+		tty_flip_buffer_push(port);
+		dev_dbg(sport->port.dev, "We get %d bytes.\n", w_bytes);
+	}
 }
 
+/* RX DMA buffer periods */
+#define RX_DMA_PERIODS 4
+
 static int start_rx_dma(struct imx_port *sport)
 {
 	struct scatterlist *sgl = &sport->rx_sgl;
@@ -1017,14 +1028,21 @@ static int start_rx_dma(struct imx_port *sport)
 	struct dma_async_tx_descriptor *desc;
 	int ret;
 
+	sport->rx_ring.head = 0;
+	sport->rx_ring.tail = 0;
+	sport->rx_periods = RX_DMA_PERIODS;
+
 	sg_init_one(sgl, sport->rx_buf, RX_BUF_SIZE);
 	ret = dma_map_sg(dev, sgl, 1, DMA_FROM_DEVICE);
 	if (ret == 0) {
 		dev_err(dev, "DMA mapping error for RX.\n");
 		return -EINVAL;
 	}
-	desc = dmaengine_prep_slave_sg(chan, sgl, 1, DMA_DEV_TO_MEM,
-					DMA_PREP_INTERRUPT);
+
+	desc = dmaengine_prep_dma_cyclic(chan, sg_dma_address(sgl),
+		sg_dma_len(sgl), sg_dma_len(sgl) / sport->rx_periods,
+		DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);
+
 	if (!desc) {
 		dma_unmap_sg(dev, sgl, 1, DMA_FROM_DEVICE);
 		dev_err(dev, "We cannot prepare for the RX slave dma!\n");
@@ -1034,7 +1052,7 @@ static int start_rx_dma(struct imx_port *sport)
 	desc->callback_param = sport;
 
 	dev_dbg(dev, "RX: prepare for the DMA.\n");
-	dmaengine_submit(desc);
+	sport->rx_cookie = dmaengine_submit(desc);
 	dma_async_issue_pending(chan);
 	return 0;
 }
@@ -1058,14 +1076,16 @@ static void imx_setup_ufcr(struct imx_port *sport,
 static void imx_uart_dma_exit(struct imx_port *sport)
 {
 	if (sport->dma_chan_rx) {
+		dmaengine_terminate_all(sport->dma_chan_rx);
 		dma_release_channel(sport->dma_chan_rx);
 		sport->dma_chan_rx = NULL;
-
+		sport->rx_cookie = -EINVAL;
 		kfree(sport->rx_buf);
 		sport->rx_buf = NULL;
 	}
 
 	if (sport->dma_chan_tx) {
+		dmaengine_terminate_all(sport->dma_chan_tx);
 		dma_release_channel(sport->dma_chan_tx);
 		sport->dma_chan_tx = NULL;
 	}
@@ -1103,6 +1123,7 @@ static int imx_uart_dma_init(struct imx_port *sport)
 		ret = -ENOMEM;
 		goto err;
 	}
+	sport->rx_ring.buf = sport->rx_buf;
 
 	/* Prepare for TX : */
 	sport->dma_chan_tx = dma_request_slave_channel(dev, "tx");
@@ -1283,17 +1304,11 @@ static void imx_shutdown(struct uart_port *port)
 	unsigned long flags;
 
 	if (sport->dma_is_enabled) {
-		int ret;
+		sport->dma_is_rxing = 0;
+		sport->dma_is_txing = 0;
+		dmaengine_terminate_all(sport->dma_chan_tx);
+		dmaengine_terminate_all(sport->dma_chan_rx);
 
-		/* We have to wait for the DMA to finish. */
-		ret = wait_event_interruptible(sport->dma_wait,
-			!sport->dma_is_rxing && !sport->dma_is_txing);
-		if (ret != 0) {
-			sport->dma_is_rxing = 0;
-			sport->dma_is_txing = 0;
-			dmaengine_terminate_all(sport->dma_chan_tx);
-			dmaengine_terminate_all(sport->dma_chan_rx);
-		}
 		spin_lock_irqsave(&sport->port.lock, flags);
 		imx_stop_tx(port);
 		imx_stop_rx(port);
-- 
2.8.3


* [PATCH RFC 4/4] serial: imx-serial - update RX error counters when DMA is used
  2016-06-09 12:16 [PATCH RFC 0/4] serial,dma: use DMA cyclic for IMX UART driver Nandor Han
                   ` (2 preceding siblings ...)
  2016-06-09 12:16 ` [PATCH RFC 3/4] serial: imx-serial - update UART IMX driver to use cyclic DMA Nandor Han
@ 2016-06-09 12:16 ` Nandor Han
  2016-08-08 12:38 ` [PATCH 0/4] serial,dmaengine: use DMA cyclic for IMX UART driver Nandor Han
  4 siblings, 0 replies; 14+ messages in thread
From: Nandor Han @ 2016-06-09 12:16 UTC (permalink / raw)
  To: Dan Williams, Vinod Koul, Greg Kroah-Hartman, Jiri Slaby
  Cc: Nandor Han, dmaengine, linux-serial, linux-kernel

Update the error counters when DMA is used for receiving data. Do
this by using the DMA transaction error event instead of the error
interrupts, in order to reduce the interrupt load.
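
For verification, a minimal user-space sketch (not part of this patch)
that reads back the counters updated here through the TIOCGICOUNT
ioctl; the device node is an assumption:

    #include <fcntl.h>
    #include <stdio.h>
    #include <sys/ioctl.h>
    #include <linux/serial.h>
    #include <unistd.h>

    int main(void)
    {
            struct serial_icounter_struct ic;
            int fd = open("/dev/ttymxc2", O_RDONLY | O_NOCTTY | O_NONBLOCK);

            if (fd < 0) {
                    perror("open");
                    return 1;
            }
            if (ioctl(fd, TIOCGICOUNT, &ic) < 0) {
                    perror("TIOCGICOUNT");
                    close(fd);
                    return 1;
            }

            printf("frame:%d parity:%d brk:%d overrun:%d buf_overrun:%d\n",
                   ic.frame, ic.parity, ic.brk, ic.overrun, ic.buf_overrun);

            close(fd);
            return 0;
    }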

Signed-off-by: Nandor Han <nandor.han@ge.com>
---
 drivers/tty/serial/imx.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
index 1912136..08ccfe1 100644
--- a/drivers/tty/serial/imx.c
+++ b/drivers/tty/serial/imx.c
@@ -704,6 +704,7 @@ out:
 	return IRQ_HANDLED;
 }
 
+static void clear_rx_errors(struct imx_port *sport);
 static int start_rx_dma(struct imx_port *sport);
 /*
  * If the RXFIFO is filled with some data, and then we
@@ -729,6 +730,11 @@ static void imx_dma_rxint(struct imx_port *sport)
 		temp &= ~(UCR2_ATEN);
 		writel(temp, sport->port.membase + UCR2);
 
+		/* disable the rx errors interrupts */
+		temp = readl(sport->port.membase + UCR4);
+		temp &= ~UCR4_OREN;
+		writel(temp, sport->port.membase + UCR4);
+
 		/* tell the DMA to receive the data. */
 		start_rx_dma(sport);
 	}
@@ -961,6 +967,7 @@ static void dma_rx_callback(void *data)
 
 	if (status == DMA_ERROR) {
 		dev_err(sport->port.dev, "DMA transaction error.\n");
+		clear_rx_errors(sport);
 		return;
 	}
 
@@ -1057,6 +1064,31 @@ static int start_rx_dma(struct imx_port *sport)
 	return 0;
 }
 
+static void clear_rx_errors(struct imx_port *sport)
+{
+	unsigned int status_usr1, status_usr2;
+
+	status_usr1 = readl(sport->port.membase + USR1);
+	status_usr2 = readl(sport->port.membase + USR2);
+
+	if (status_usr2 & USR2_BRCD) {
+		sport->port.icount.brk++;
+		writel(USR2_BRCD, sport->port.membase + USR2);
+	} else if (status_usr1 & USR1_FRAMERR) {
+		sport->port.icount.frame++;
+		writel(USR1_FRAMERR, sport->port.membase + USR1);
+	} else if (status_usr1 & USR1_PARITYERR) {
+		sport->port.icount.parity++;
+		writel(USR1_PARITYERR, sport->port.membase + USR1);
+	}
+
+	if (status_usr2 & USR2_ORE) {
+		sport->port.icount.overrun++;
+		writel(USR2_ORE, sport->port.membase + USR2);
+	}
+
+}
+
 #define TXTL_DEFAULT 2 /* reset default */
 #define RXTL_DEFAULT 1 /* reset default */
 #define TXTL_DMA 8 /* DMA burst setting */
-- 
2.8.3


* Re: [PATCH RFC 1/4] dma: imx-sdma - reduce transfer latency for DMA cyclic clients
  2016-06-09 12:16 ` [PATCH RFC 1/4] dma: imx-sdma - reduce transfer latency for DMA cyclic clients Nandor Han
@ 2016-06-28 14:34   ` Vinod Koul
  2016-07-01 14:59     ` EXT: " Nandor Han
  2016-07-14  8:32   ` Peter Senna Tschudin
  1 sibling, 1 reply; 14+ messages in thread
From: Vinod Koul @ 2016-06-28 14:34 UTC (permalink / raw)
  To: Nandor Han
  Cc: Dan Williams, Greg Kroah-Hartman, Jiri Slaby, dmaengine,
	linux-serial, linux-kernel

On Thu, Jun 09, 2016 at 03:16:30PM +0300, Nandor Han wrote:
> Having the SDMA driver use a tasklet for running the clients
> callback introduce some issues:
>   - probability to have desynchronized data because of the
>     race condition created since the DMA transaction status
>     is retrieved only when the callback is executed, leaving
>     plenty of time for transaction status to get altered.
>   - inter-transfer latency which can leave channels idle.
> 
> Move the callback execution, for cyclic channels, to SDMA
> interrupt (as advised in `Documentation/dmaengine/provider.txt`)
> to (a)reduce the inter-transfer latency and (b) eliminate the
> race condition possibility where DMA transaction status might
> be changed by the time is read.
> 
> The responsibility of the SDMA interrupt latency
> is moved to the SDMA clients which case by case should defer
> the work to bottom-halves when needed.

Both of these look fine. Please change the patch titles to dmaengine: xxxx

Are these going to be merged through the dmaengine tree or the serial one?

> 
> Signed-off-by: Nandor Han <nandor.han@ge.com>
> ---
>  drivers/dma/imx-sdma.c | 36 ++++++++++++++++--------------------
>  1 file changed, 16 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
> index 0f6fd42..e497847 100644
> --- a/drivers/dma/imx-sdma.c
> +++ b/drivers/dma/imx-sdma.c
> @@ -654,12 +654,6 @@ static void sdma_event_disable(struct sdma_channel *sdmac, unsigned int event)
>  	writel_relaxed(val, sdma->regs + chnenbl);
>  }
>  
> -static void sdma_handle_channel_loop(struct sdma_channel *sdmac)
> -{
> -	if (sdmac->desc.callback)
> -		sdmac->desc.callback(sdmac->desc.callback_param);
> -}
> -
>  static void sdma_update_channel_loop(struct sdma_channel *sdmac)
>  {
>  	struct sdma_buffer_descriptor *bd;
> @@ -678,13 +672,25 @@ static void sdma_update_channel_loop(struct sdma_channel *sdmac)
>  			sdmac->status = DMA_ERROR;
>  
>  		bd->mode.status |= BD_DONE;
> +
> +		/*
> +		 * The callback is called from the interrupt context in order
> +		 * to reduce latency and to avoid the risk of altering the
> +		 * SDMA transaction status by the time the client tasklet is
> +		 * executed.
> +		 */
> +
> +		if (sdmac->desc.callback)
> +			sdmac->desc.callback(sdmac->desc.callback_param);
> +
>  		sdmac->buf_tail++;
>  		sdmac->buf_tail %= sdmac->num_bd;
>  	}
>  }
>  
> -static void mxc_sdma_handle_channel_normal(struct sdma_channel *sdmac)
> +static void mxc_sdma_handle_channel_normal(unsigned long data)
>  {
> +	struct sdma_channel *sdmac = (struct sdma_channel *) data;
>  	struct sdma_buffer_descriptor *bd;
>  	int i, error = 0;
>  
> @@ -711,16 +717,6 @@ static void mxc_sdma_handle_channel_normal(struct sdma_channel *sdmac)
>  		sdmac->desc.callback(sdmac->desc.callback_param);
>  }
>  
> -static void sdma_tasklet(unsigned long data)
> -{
> -	struct sdma_channel *sdmac = (struct sdma_channel *) data;
> -
> -	if (sdmac->flags & IMX_DMA_SG_LOOP)
> -		sdma_handle_channel_loop(sdmac);
> -	else
> -		mxc_sdma_handle_channel_normal(sdmac);
> -}
> -
>  static irqreturn_t sdma_int_handler(int irq, void *dev_id)
>  {
>  	struct sdma_engine *sdma = dev_id;
> @@ -737,8 +733,8 @@ static irqreturn_t sdma_int_handler(int irq, void *dev_id)
>  
>  		if (sdmac->flags & IMX_DMA_SG_LOOP)
>  			sdma_update_channel_loop(sdmac);
> -
> -		tasklet_schedule(&sdmac->tasklet);
> +		else
> +			tasklet_schedule(&sdmac->tasklet);
>  
>  		__clear_bit(channel, &stat);
>  	}
> @@ -1739,7 +1735,7 @@ static int sdma_probe(struct platform_device *pdev)
>  		dma_cookie_init(&sdmac->chan);
>  		sdmac->channel = i;
>  
> -		tasklet_init(&sdmac->tasklet, sdma_tasklet,
> +		tasklet_init(&sdmac->tasklet, mxc_sdma_handle_channel_normal,
>  			     (unsigned long) sdmac);
>  		/*
>  		 * Add the channel to the DMAC list. Do not add channel 0 though
> -- 
> 2.8.3
> 

-- 
~Vinod


* Re: EXT: Re: [PATCH RFC 1/4] dma: imx-sdma - reduce transfer latency for DMA cyclic clients
  2016-06-28 14:34   ` Vinod Koul
@ 2016-07-01 14:59     ` Nandor Han
  2016-07-02 17:27       ` Vinod Koul
  0 siblings, 1 reply; 14+ messages in thread
From: Nandor Han @ 2016-07-01 14:59 UTC (permalink / raw)
  To: Vinod Koul
  Cc: Dan Williams, Greg Kroah-Hartman, Jiri Slaby, dmaengine,
	linux-serial, linux-kernel



On 28/06/16 17:34, Vinod Koul wrote:
> On Thu, Jun 09, 2016 at 03:16:30PM +0300, Nandor Han wrote:
>> Having the SDMA driver use a tasklet for running the clients
>> callback introduce some issues:
>>    - probability to have desynchronized data because of the
>>      race condition created since the DMA transaction status
>>      is retrieved only when the callback is executed, leaving
>>      plenty of time for transaction status to get altered.
>>    - inter-transfer latency which can leave channels idle.
>>
>> Move the callback execution, for cyclic channels, to SDMA
>> interrupt (as advised in `Documentation/dmaengine/provider.txt`)
>> to (a)reduce the inter-transfer latency and (b) eliminate the
>> race condition possibility where DMA transaction status might
>> be changed by the time is read.
>>
>> The responsibility of the SDMA interrupt latency
>> is moved to the SDMA clients which case by case should defer
>> the work to bottom-halves when needed.
>
> Both of these look fine. Please change the patch titles to dmaengine: xxxx
>
> Are these going to be merged thru dmaengine tree or serial one?
>

I will soon send a V2 where I will fix the titles. If you are OK with
the whole patchset it can be merged through the dmaengine tree, otherwise
it probably goes through the serial one.

Let me know which is the best option.

Thanks,
    Nandor

>>
>> Signed-off-by: Nandor Han <nandor.han@ge.com>
>> ---
>>   drivers/dma/imx-sdma.c | 36 ++++++++++++++++--------------------
>>   1 file changed, 16 insertions(+), 20 deletions(-)
>>
>> diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
>> index 0f6fd42..e497847 100644
>> --- a/drivers/dma/imx-sdma.c
>> +++ b/drivers/dma/imx-sdma.c
>> @@ -654,12 +654,6 @@ static void sdma_event_disable(struct sdma_channel *sdmac, unsigned int event)
>>   	writel_relaxed(val, sdma->regs + chnenbl);
>>   }
>>
>> -static void sdma_handle_channel_loop(struct sdma_channel *sdmac)
>> -{
>> -	if (sdmac->desc.callback)
>> -		sdmac->desc.callback(sdmac->desc.callback_param);
>> -}
>> -
>>   static void sdma_update_channel_loop(struct sdma_channel *sdmac)
>>   {
>>   	struct sdma_buffer_descriptor *bd;
>> @@ -678,13 +672,25 @@ static void sdma_update_channel_loop(struct sdma_channel *sdmac)
>>   			sdmac->status = DMA_ERROR;
>>
>>   		bd->mode.status |= BD_DONE;
>> +
>> +		/*
>> +		 * The callback is called from the interrupt context in order
>> +		 * to reduce latency and to avoid the risk of altering the
>> +		 * SDMA transaction status by the time the client tasklet is
>> +		 * executed.
>> +		 */
>> +
>> +		if (sdmac->desc.callback)
>> +			sdmac->desc.callback(sdmac->desc.callback_param);
>> +
>>   		sdmac->buf_tail++;
>>   		sdmac->buf_tail %= sdmac->num_bd;
>>   	}
>>   }
>>
>> -static void mxc_sdma_handle_channel_normal(struct sdma_channel *sdmac)
>> +static void mxc_sdma_handle_channel_normal(unsigned long data)
>>   {
>> +	struct sdma_channel *sdmac = (struct sdma_channel *) data;
>>   	struct sdma_buffer_descriptor *bd;
>>   	int i, error = 0;
>>
>> @@ -711,16 +717,6 @@ static void mxc_sdma_handle_channel_normal(struct sdma_channel *sdmac)
>>   		sdmac->desc.callback(sdmac->desc.callback_param);
>>   }
>>
>> -static void sdma_tasklet(unsigned long data)
>> -{
>> -	struct sdma_channel *sdmac = (struct sdma_channel *) data;
>> -
>> -	if (sdmac->flags & IMX_DMA_SG_LOOP)
>> -		sdma_handle_channel_loop(sdmac);
>> -	else
>> -		mxc_sdma_handle_channel_normal(sdmac);
>> -}
>> -
>>   static irqreturn_t sdma_int_handler(int irq, void *dev_id)
>>   {
>>   	struct sdma_engine *sdma = dev_id;
>> @@ -737,8 +733,8 @@ static irqreturn_t sdma_int_handler(int irq, void *dev_id)
>>
>>   		if (sdmac->flags & IMX_DMA_SG_LOOP)
>>   			sdma_update_channel_loop(sdmac);
>> -
>> -		tasklet_schedule(&sdmac->tasklet);
>> +		else
>> +			tasklet_schedule(&sdmac->tasklet);
>>
>>   		__clear_bit(channel, &stat);
>>   	}
>> @@ -1739,7 +1735,7 @@ static int sdma_probe(struct platform_device *pdev)
>>   		dma_cookie_init(&sdmac->chan);
>>   		sdmac->channel = i;
>>
>> -		tasklet_init(&sdmac->tasklet, sdma_tasklet,
>> +		tasklet_init(&sdmac->tasklet, mxc_sdma_handle_channel_normal,
>>   			     (unsigned long) sdmac);
>>   		/*
>>   		 * Add the channel to the DMAC list. Do not add channel 0 though
>> --
>> 2.8.3
>>
>


* Re: EXT: Re: [PATCH RFC 1/4] dma: imx-sdma - reduce transfer latency for DMA cyclic clients
  2016-07-01 14:59     ` EXT: " Nandor Han
@ 2016-07-02 17:27       ` Vinod Koul
  0 siblings, 0 replies; 14+ messages in thread
From: Vinod Koul @ 2016-07-02 17:27 UTC (permalink / raw)
  To: Nandor Han
  Cc: Dan Williams, Greg Kroah-Hartman, Jiri Slaby, dmaengine,
	linux-serial, linux-kernel

On Fri, Jul 01, 2016 at 05:59:30PM +0300, Nandor Han wrote:
> 
> 
> On 28/06/16 17:34, Vinod Koul wrote:
> >On Thu, Jun 09, 2016 at 03:16:30PM +0300, Nandor Han wrote:
> >>Having the SDMA driver use a tasklet for running the clients
> >>callback introduce some issues:
> >>   - probability to have desynchronized data because of the
> >>     race condition created since the DMA transaction status
> >>     is retrieved only when the callback is executed, leaving
> >>     plenty of time for transaction status to get altered.
> >>   - inter-transfer latency which can leave channels idle.
> >>
> >>Move the callback execution, for cyclic channels, to SDMA
> >>interrupt (as advised in `Documentation/dmaengine/provider.txt`)
> >>to (a)reduce the inter-transfer latency and (b) eliminate the
> >>race condition possibility where DMA transaction status might
> >>be changed by the time is read.
> >>
> >>The responsibility of the SDMA interrupt latency
> >>is moved to the SDMA clients which case by case should defer
> >>the work to bottom-halves when needed.
> >
> >Both of these look fine. Please change the patch titles to dmaengine: xxxx
> >
> >Are these going to be merged thru dmaengine tree or serial one?
> >
> 
> I will send soon a V2 where I will fix the titles. If you are OK
> with all the patchset it can be merged to dmaengine tree, otherwise
> probably goes to serial one.

Sure, I can merge them all, provided there are ACKs on the other patches.


-- 
~Vinod


* Re: [PATCH RFC 1/4] dma: imx-sdma - reduce transfer latency for DMA cyclic clients
  2016-06-09 12:16 ` [PATCH RFC 1/4] dma: imx-sdma - reduce transfer latency for DMA cyclic clients Nandor Han
  2016-06-28 14:34   ` Vinod Koul
@ 2016-07-14  8:32   ` Peter Senna Tschudin
  1 sibling, 0 replies; 14+ messages in thread
From: Peter Senna Tschudin @ 2016-07-14  8:32 UTC (permalink / raw)
  To: Nandor Han
  Cc: Dan Williams, Vinod Koul, Greg Kroah-Hartman, Jiri Slaby,
	dmaengine, linux-serial, linux-kernel

On Thu, Jun 09, 2016 at 03:16:30PM +0300, Nandor Han wrote:
> Having the SDMA driver use a tasklet for running the clients
> callback introduce some issues:
>   - probability to have desynchronized data because of the
>     race condition created since the DMA transaction status
>     is retrieved only when the callback is executed, leaving
>     plenty of time for transaction status to get altered.
>   - inter-transfer latency which can leave channels idle.
> 
> Move the callback execution, for cyclic channels, to SDMA
> interrupt (as advised in `Documentation/dmaengine/provider.txt`)
> to (a)reduce the inter-transfer latency and (b) eliminate the
> race condition possibility where DMA transaction status might
> be changed by the time is read.
> 
> The responsibility of the SDMA interrupt latency
> is moved to the SDMA clients which case by case should defer
> the work to bottom-halves when needed.
> 
> Signed-off-by: Nandor Han <nandor.han@ge.com>
For the whole series:
Tested-by: Peter Senna Tschudin <peter.senna@collabora.com>
Acked-by: Peter Senna Tschudin <peter.senna@collabora.com>
> ---
>  drivers/dma/imx-sdma.c | 36 ++++++++++++++++--------------------
>  1 file changed, 16 insertions(+), 20 deletions(-)
> 
> diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
> index 0f6fd42..e497847 100644
> --- a/drivers/dma/imx-sdma.c
> +++ b/drivers/dma/imx-sdma.c
> @@ -654,12 +654,6 @@ static void sdma_event_disable(struct sdma_channel *sdmac, unsigned int event)
>  	writel_relaxed(val, sdma->regs + chnenbl);
>  }
>  
> -static void sdma_handle_channel_loop(struct sdma_channel *sdmac)
> -{
> -	if (sdmac->desc.callback)
> -		sdmac->desc.callback(sdmac->desc.callback_param);
> -}
> -
>  static void sdma_update_channel_loop(struct sdma_channel *sdmac)
>  {
>  	struct sdma_buffer_descriptor *bd;
> @@ -678,13 +672,25 @@ static void sdma_update_channel_loop(struct sdma_channel *sdmac)
>  			sdmac->status = DMA_ERROR;
>  
>  		bd->mode.status |= BD_DONE;
> +
> +		/*
> +		 * The callback is called from the interrupt context in order
> +		 * to reduce latency and to avoid the risk of altering the
> +		 * SDMA transaction status by the time the client tasklet is
> +		 * executed.
> +		 */
> +
> +		if (sdmac->desc.callback)
> +			sdmac->desc.callback(sdmac->desc.callback_param);
> +
>  		sdmac->buf_tail++;
>  		sdmac->buf_tail %= sdmac->num_bd;
>  	}
>  }
>  
> -static void mxc_sdma_handle_channel_normal(struct sdma_channel *sdmac)
> +static void mxc_sdma_handle_channel_normal(unsigned long data)
>  {
> +	struct sdma_channel *sdmac = (struct sdma_channel *) data;
>  	struct sdma_buffer_descriptor *bd;
>  	int i, error = 0;
>  
> @@ -711,16 +717,6 @@ static void mxc_sdma_handle_channel_normal(struct sdma_channel *sdmac)
>  		sdmac->desc.callback(sdmac->desc.callback_param);
>  }
>  
> -static void sdma_tasklet(unsigned long data)
> -{
> -	struct sdma_channel *sdmac = (struct sdma_channel *) data;
> -
> -	if (sdmac->flags & IMX_DMA_SG_LOOP)
> -		sdma_handle_channel_loop(sdmac);
> -	else
> -		mxc_sdma_handle_channel_normal(sdmac);
> -}
> -
>  static irqreturn_t sdma_int_handler(int irq, void *dev_id)
>  {
>  	struct sdma_engine *sdma = dev_id;
> @@ -737,8 +733,8 @@ static irqreturn_t sdma_int_handler(int irq, void *dev_id)
>  
>  		if (sdmac->flags & IMX_DMA_SG_LOOP)
>  			sdma_update_channel_loop(sdmac);
> -
> -		tasklet_schedule(&sdmac->tasklet);
> +		else
> +			tasklet_schedule(&sdmac->tasklet);
>  
>  		__clear_bit(channel, &stat);
>  	}
> @@ -1739,7 +1735,7 @@ static int sdma_probe(struct platform_device *pdev)
>  		dma_cookie_init(&sdmac->chan);
>  		sdmac->channel = i;
>  
> -		tasklet_init(&sdmac->tasklet, sdma_tasklet,
> +		tasklet_init(&sdmac->tasklet, mxc_sdma_handle_channel_normal,
>  			     (unsigned long) sdmac);
>  		/*
>  		 * Add the channel to the DMAC list. Do not add channel 0 though
> -- 
> 2.8.3
> 
> 


* [PATCH 0/4] serial,dmaengine: use DMA cyclic for IMX UART driver
  2016-06-09 12:16 [PATCH RFC 0/4] serial,dma: use DMA cyclic for IMX UART driver Nandor Han
                   ` (3 preceding siblings ...)
  2016-06-09 12:16 ` [PATCH RFC 4/4] serial: imx-serial - update RX error counters when DMA is used Nandor Han
@ 2016-08-08 12:38 ` Nandor Han
  2016-08-08 12:38   ` [PATCH 1/4] dmaengine: imx-sdma - reduce transfer latency for DMA cyclic clients Nandor Han
                     ` (3 more replies)
  4 siblings, 4 replies; 14+ messages in thread
From: Nandor Han @ 2016-08-08 12:38 UTC (permalink / raw)
  To: Dan Williams, Vinod Koul, Greg Kroah-Hartman, Jiri Slaby
  Cc: Nandor Han, dmaengine, linux-serial, linux-kernel, Peter Senna Tschudin

Update the IMX UART driver to use cyclic DMA in order
to avoid losing serial data because of DMA start latency.

Support variable data length transfers for cyclic channels in the
IMX SDMA (Smart DMA) driver and, as specified in
"Documentation/dmaengine/provider.txt" section "General Design Notes",
run the SDMA channel callback directly from the SDMA interrupt context.

Each commit message explains its changes.

Impacted devices:
  IMX6i, IMX53 CPUs:
        - clients of the IMX UART that have DMA enabled.
        - cyclic SDMA clients
            (Note: not able to find any other than the IMX UART driver)

Functionality was tested by connecting the m53evk to my laptop using a
USB-serial converter and sending, from the laptop, 800-byte packets
at 8 ms intervals.

On the m53evk, with interrupt load generated by connecting/disconnecting,
at a 3 s interval, a USB hub with some devices (~4) attached,
I receive serial data in a loop and check that no overruns
occur, using the command:

    `watch -n1 cat /proc/tty/driver/IMX-uart`

I also ran tests where I checked how the data is received when sending
packets bigger than a buffer descriptor (1024 bytes). In these tests I
sent 4 packets (1028, 2048, 2048, 2048 bytes) and checked that 7172 bytes
were received, using the command:
    `watch -n1 cat /proc/tty/driver/IMX-uart`

    Result:
        serinfo:1.0 driver revision:
        2: uart:IMX mmio:0x5000C000 irq:49 tx:0 rx:7172 RTS|CTS|DTR|DSR|CD


Nandor Han (4):
  dmaengine: imx-sdma - reduce transfer latency for DMA cyclic clients
  dmaengine: imx-sdma - update the residue calculation for cyclic
    channels
  serial: imx-serial - update UART IMX driver to use cyclic DMA
  serial: imx-serial - update RX error counters when DMA is used

 drivers/dma/imx-sdma.c   |  56 +++++++++------
 drivers/tty/serial/imx.c | 173 ++++++++++++++++++++++++++++++-----------------
 2 files changed, 144 insertions(+), 85 deletions(-)

-- 
2.8.3


* [PATCH 1/4] dmaengine: imx-sdma - reduce transfer latency for DMA cyclic clients
  2016-08-08 12:38 ` [PATCH 0/4] serial,dmaengine: use DMA cyclic for IMX UART driver Nandor Han
@ 2016-08-08 12:38   ` Nandor Han
  2016-08-08 12:38   ` [PATCH 2/4] dmaengine: imx-sdma - update the residue calculation for cyclic channels Nandor Han
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 14+ messages in thread
From: Nandor Han @ 2016-08-08 12:38 UTC (permalink / raw)
  To: Dan Williams, Vinod Koul, Greg Kroah-Hartman, Jiri Slaby
  Cc: Nandor Han, dmaengine, linux-serial, linux-kernel, Peter Senna Tschudin

Having the SDMA driver use a tasklet for running the clients'
callbacks introduces some issues:
  - a probability of desynchronized data, because of the race
    condition created since the DMA transaction status is
    retrieved only when the callback is executed, leaving
    plenty of time for the transaction status to get altered.
  - inter-transfer latency, which can leave channels idle.

Move the callback execution, for cyclic channels, to the SDMA
interrupt (as advised in `Documentation/dmaengine/provider.txt`)
to (a) reduce the inter-transfer latency and (b) eliminate the
possibility of a race condition where the DMA transaction status
might have changed by the time it is read.

The responsibility for the SDMA interrupt latency is moved to
the SDMA clients, which should, case by case, defer the work to
bottom halves when needed.

Tested-by: Peter Senna Tschudin <peter.senna@collabora.com>
Acked-by: Peter Senna Tschudin <peter.senna@collabora.com>
Reviewed-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Nandor Han <nandor.han@ge.com>
---
 drivers/dma/imx-sdma.c | 36 ++++++++++++++++--------------------
 1 file changed, 16 insertions(+), 20 deletions(-)

diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
index 03ec76f..aa35a77 100644
--- a/drivers/dma/imx-sdma.c
+++ b/drivers/dma/imx-sdma.c
@@ -648,12 +648,6 @@ static void sdma_event_disable(struct sdma_channel *sdmac, unsigned int event)
 	writel_relaxed(val, sdma->regs + chnenbl);
 }
 
-static void sdma_handle_channel_loop(struct sdma_channel *sdmac)
-{
-	if (sdmac->desc.callback)
-		sdmac->desc.callback(sdmac->desc.callback_param);
-}
-
 static void sdma_update_channel_loop(struct sdma_channel *sdmac)
 {
 	struct sdma_buffer_descriptor *bd;
@@ -672,13 +666,25 @@ static void sdma_update_channel_loop(struct sdma_channel *sdmac)
 			sdmac->status = DMA_ERROR;
 
 		bd->mode.status |= BD_DONE;
+
+		/*
+		 * The callback is called from the interrupt context in order
+		 * to reduce latency and to avoid the risk of altering the
+		 * SDMA transaction status by the time the client tasklet is
+		 * executed.
+		 */
+
+		if (sdmac->desc.callback)
+			sdmac->desc.callback(sdmac->desc.callback_param);
+
 		sdmac->buf_tail++;
 		sdmac->buf_tail %= sdmac->num_bd;
 	}
 }
 
-static void mxc_sdma_handle_channel_normal(struct sdma_channel *sdmac)
+static void mxc_sdma_handle_channel_normal(unsigned long data)
 {
+	struct sdma_channel *sdmac = (struct sdma_channel *) data;
 	struct sdma_buffer_descriptor *bd;
 	int i, error = 0;
 
@@ -705,16 +711,6 @@ static void mxc_sdma_handle_channel_normal(struct sdma_channel *sdmac)
 		sdmac->desc.callback(sdmac->desc.callback_param);
 }
 
-static void sdma_tasklet(unsigned long data)
-{
-	struct sdma_channel *sdmac = (struct sdma_channel *) data;
-
-	if (sdmac->flags & IMX_DMA_SG_LOOP)
-		sdma_handle_channel_loop(sdmac);
-	else
-		mxc_sdma_handle_channel_normal(sdmac);
-}
-
 static irqreturn_t sdma_int_handler(int irq, void *dev_id)
 {
 	struct sdma_engine *sdma = dev_id;
@@ -731,8 +727,8 @@ static irqreturn_t sdma_int_handler(int irq, void *dev_id)
 
 		if (sdmac->flags & IMX_DMA_SG_LOOP)
 			sdma_update_channel_loop(sdmac);
-
-		tasklet_schedule(&sdmac->tasklet);
+		else
+			tasklet_schedule(&sdmac->tasklet);
 
 		__clear_bit(channel, &stat);
 	}
@@ -1732,7 +1728,7 @@ static int sdma_probe(struct platform_device *pdev)
 		dma_cookie_init(&sdmac->chan);
 		sdmac->channel = i;
 
-		tasklet_init(&sdmac->tasklet, sdma_tasklet,
+		tasklet_init(&sdmac->tasklet, mxc_sdma_handle_channel_normal,
 			     (unsigned long) sdmac);
 		/*
 		 * Add the channel to the DMAC list. Do not add channel 0 though
-- 
2.8.3


* [PATCH 2/4] dmaengine: imx-sdma - update the residue calculation for cyclic channels
  2016-08-08 12:38 ` [PATCH 0/4] serial,dmaengine: use DMA cyclic for IMX UART driver Nandor Han
  2016-08-08 12:38   ` [PATCH 1/4] dmaengine: imx-sdma - reduce transfer latency for DMA cyclic clients Nandor Han
@ 2016-08-08 12:38   ` Nandor Han
  2016-08-08 12:38   ` [PATCH 3/4] serial: imx-serial - update UART IMX driver to use cyclic DMA Nandor Han
  2016-08-08 12:38   ` [PATCH 4/4] serial: imx-serial - update RX error counters when DMA is used Nandor Han
  3 siblings, 0 replies; 14+ messages in thread
From: Nandor Han @ 2016-08-08 12:38 UTC (permalink / raw)
  To: Dan Williams, Vinod Koul, Greg Kroah-Hartman, Jiri Slaby
  Cc: Nandor Han, dmaengine, linux-serial, linux-kernel, Peter Senna Tschudin

The calculation of the DMA transaction residue supports only fixed
size data transfers. This implementation does not cover all
operations (e.g. data receiving) where we need to know the exact
number of bytes transferred.

The handling of loop channels was changed to clear the buffer
descriptor errors and to use bd->mode.count to calculate the
residue.

Tested-by: Peter Senna Tschudin <peter.senna@collabora.com>
Acked-by: Peter Senna Tschudin <peter.senna@collabora.com>
Reviewed-by: Vinod Koul <vinod.koul@intel.com>
Signed-off-by: Nandor Han <nandor.han@ge.com>
---
 drivers/dma/imx-sdma.c | 20 ++++++++++++++++++--
 1 file changed, 18 insertions(+), 2 deletions(-)

diff --git a/drivers/dma/imx-sdma.c b/drivers/dma/imx-sdma.c
index aa35a77..3cb4738 100644
--- a/drivers/dma/imx-sdma.c
+++ b/drivers/dma/imx-sdma.c
@@ -651,6 +651,8 @@ static void sdma_event_disable(struct sdma_channel *sdmac, unsigned int event)
 static void sdma_update_channel_loop(struct sdma_channel *sdmac)
 {
 	struct sdma_buffer_descriptor *bd;
+	int error = 0;
+	enum dma_status	old_status = sdmac->status;
 
 	/*
 	 * loop mode. Iterate over descriptors, re-setup them and
@@ -662,10 +664,20 @@ static void sdma_update_channel_loop(struct sdma_channel *sdmac)
 		if (bd->mode.status & BD_DONE)
 			break;
 
-		if (bd->mode.status & BD_RROR)
+		if (bd->mode.status & BD_RROR) {
+			bd->mode.status &= ~BD_RROR;
 			sdmac->status = DMA_ERROR;
+			error = -EIO;
+		}
 
+	       /*
+		* We use bd->mode.count to calculate the residue, since it holds
+		* the number of bytes present in the current buffer descriptor.
+		*/
+
+		sdmac->chn_real_count = bd->mode.count;
 		bd->mode.status |= BD_DONE;
+		bd->mode.count = sdmac->period_len;
 
 		/*
 		 * The callback is called from the interrupt context in order
@@ -679,6 +691,9 @@ static void sdma_update_channel_loop(struct sdma_channel *sdmac)
 
 		sdmac->buf_tail++;
 		sdmac->buf_tail %= sdmac->num_bd;
+
+		if (error)
+			sdmac->status = old_status;
 	}
 }
 
@@ -1349,7 +1364,8 @@ static enum dma_status sdma_tx_status(struct dma_chan *chan,
 	u32 residue;
 
 	if (sdmac->flags & IMX_DMA_SG_LOOP)
-		residue = (sdmac->num_bd - sdmac->buf_tail) * sdmac->period_len;
+		residue = (sdmac->num_bd - sdmac->buf_tail) *
+			   sdmac->period_len - sdmac->chn_real_count;
 	else
 		residue = sdmac->chn_count - sdmac->chn_real_count;
 
-- 
2.8.3


* [PATCH 3/4] serial: imx-serial - update UART IMX driver to use cyclic DMA
  2016-08-08 12:38 ` [PATCH 0/4] serial,dmaengine: use DMA cyclic for IMX UART driver Nandor Han
  2016-08-08 12:38   ` [PATCH 1/4] dmaengine: imx-sdma - reduce transfer latency for DMA cyclic clients Nandor Han
  2016-08-08 12:38   ` [PATCH 2/4] dmaengine: imx-sdma - update the residue calculation for cyclic channels Nandor Han
@ 2016-08-08 12:38   ` Nandor Han
  2016-08-08 12:38   ` [PATCH 4/4] serial: imx-serial - update RX error counters when DMA is used Nandor Han
  3 siblings, 0 replies; 14+ messages in thread
From: Nandor Han @ 2016-08-08 12:38 UTC (permalink / raw)
  To: Dan Williams, Vinod Koul, Greg Kroah-Hartman, Jiri Slaby
  Cc: Nandor Han, dmaengine, linux-serial, linux-kernel, Peter Senna Tschudin

The IMX UART has a 32-byte HW buffer which can be filled up in
2777 us at 115200 baud or 80 us at 4 Mbaud (supported by IMX53).
Taking this into consideration, there is a good probability of losing
data because of the DMA startup latency.
Our tests (explained below) indicate a latency of up to 4400 us when
creating interrupt load and ~70 us without. When creating interrupt
load I was able to see continuous overrun errors by checking the serial
driver statistics using the command:
`cat /proc/tty/driver/IMX-uart`.

Replace the manual restart of DMA with cyclic DMA to eliminate data loss
due to DMA engine startup latency (a similar approach to the
atmel_serial.c driver). As a result, the DMA engine starts on the first
serial data transfer and stops only when the serial port is closed.

Test environment:
 Using the m53evk board, I used a GPIO for profiling the IMX
 serial driver.
  - The RX line and the GPIO were connected to an oscilloscope.
  - A small test program runs on the m53evk board that only opens
    and reads data from the ttymxc2 port.
  - The ttymxc2 port is connected to my laptop using a USB serial
    converter, where another test program is running, able to send
    configurable packet lengths and intervals.
  - Serial ports configured at 115200 8N1.
  - Interrupt load created by disconnecting/connecting (3 s interval)
    a USB hub, using a digital switch, with 4 USB devices (USB-serial
    converter, USB SD card, etc.) connected
    (around 160 interrupts/second generated).
  - The GPIO was toggled high in `imx_int` when the USR1_RRDY or
    USR1_AGTIM events are received and toggled back low, once the DMA
    configuration is finalized, at the end of `imx_dma_rxint`.

Measurements:
The measurements were done from the end of the last byte (RX line) until
the GPIO was toggled back LOW.

Note: The GPIO toggling was done using `gpiod_set_value` method.

Tests performed:
   1. Sending 9-byte packets at 8 ms intervals. The 9-byte packets
      will activate the RRDY threshold event, so the IMX serial
      interrupt is called.
      Results:
        - DMA start latency (interrupt start latency +
           DMA configuration) consistently 70 us when the system is not loaded.
        - DMA start latency up to 4400 us when the system is loaded.
   2. Sending 40-byte packets at 8 ms intervals.
      Results with load:
        - Able to observe overruns by running:
           `watch -n1 cat /proc/tty/driver/IMX-uart`

Tested-by: Peter Senna Tschudin <peter.senna@collabora.com>
Acked-by: Peter Senna Tschudin <peter.senna@collabora.com>
Signed-off-by: Nandor Han <nandor.han@ge.com>
---
 drivers/tty/serial/imx.c | 141 ++++++++++++++++++++++++++---------------------
 1 file changed, 78 insertions(+), 63 deletions(-)

diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
index 0df2b1c..1912136 100644
--- a/drivers/tty/serial/imx.c
+++ b/drivers/tty/serial/imx.c
@@ -222,6 +222,9 @@ struct imx_port {
 	struct dma_chan		*dma_chan_rx, *dma_chan_tx;
 	struct scatterlist	rx_sgl, tx_sgl[2];
 	void			*rx_buf;
+	struct circ_buf		rx_ring;
+	unsigned int		rx_periods;
+	dma_cookie_t		rx_cookie;
 	unsigned int		tx_bytes;
 	unsigned int		dma_tx_nents;
 	wait_queue_head_t	dma_wait;
@@ -932,30 +935,6 @@ static void imx_timeout(unsigned long data)
 }
 
 #define RX_BUF_SIZE	(PAGE_SIZE)
-static void imx_rx_dma_done(struct imx_port *sport)
-{
-	unsigned long temp;
-	unsigned long flags;
-
-	spin_lock_irqsave(&sport->port.lock, flags);
-
-	/* re-enable interrupts to get notified when new symbols are incoming */
-	temp = readl(sport->port.membase + UCR1);
-	temp |= UCR1_RRDYEN;
-	writel(temp, sport->port.membase + UCR1);
-
-	temp = readl(sport->port.membase + UCR2);
-	temp |= UCR2_ATEN;
-	writel(temp, sport->port.membase + UCR2);
-
-	sport->dma_is_rxing = 0;
-
-	/* Is the shutdown waiting for us? */
-	if (waitqueue_active(&sport->dma_wait))
-		wake_up(&sport->dma_wait);
-
-	spin_unlock_irqrestore(&sport->port.lock, flags);
-}
 
 /*
  * There are two kinds of RX DMA interrupts(such as in the MX6Q):
@@ -972,43 +951,75 @@ static void dma_rx_callback(void *data)
 	struct scatterlist *sgl = &sport->rx_sgl;
 	struct tty_port *port = &sport->port.state->port;
 	struct dma_tx_state state;
+	struct circ_buf *rx_ring = &sport->rx_ring;
 	enum dma_status status;
-	unsigned int count;
-
-	/* unmap it first */
-	dma_unmap_sg(sport->port.dev, sgl, 1, DMA_FROM_DEVICE);
+	unsigned int w_bytes = 0;
+	unsigned int r_bytes;
+	unsigned int bd_size;
 
 	status = dmaengine_tx_status(chan, (dma_cookie_t)0, &state);
-	count = RX_BUF_SIZE - state.residue;
 
-	dev_dbg(sport->port.dev, "We get %d bytes.\n", count);
+	if (status == DMA_ERROR) {
+		dev_err(sport->port.dev, "DMA transaction error.\n");
+		return;
+	}
+
+	if (!(sport->port.ignore_status_mask & URXD_DUMMY_READ)) {
+
+		/*
+		 * relative to the entire buffer. Taking this into consideration
+		 * the head is always calculated based on the buffer total
+		 * length - DMA transaction residue. The UART script from the
+		 * SDMA firmware will jump to the next buffer descriptor,
+		 * once a DMA transaction is finalized (IMX53 RM - A.4.1.2.4).
+		 * Taking this into consideration the tail is always at the
+		 * Taking this in consideration the tail is always at the
+		 * beginning of the buffer descriptor that contains the head.
+		 */
+
+		/* Calculate the head */
+		rx_ring->head = sg_dma_len(sgl) - state.residue;
+
+		/* Calculate the tail. */
+		bd_size = sg_dma_len(sgl) / sport->rx_periods;
+		rx_ring->tail = ((rx_ring->head-1) / bd_size) * bd_size;
+
+		if (rx_ring->head <= sg_dma_len(sgl) &&
+		    rx_ring->head > rx_ring->tail) {
+
+			/* Move data from tail to head */
+			r_bytes = rx_ring->head - rx_ring->tail;
+
+			/* CPU claims ownership of RX DMA buffer */
+			dma_sync_sg_for_cpu(sport->port.dev, sgl, 1,
+				DMA_FROM_DEVICE);
 
-	if (count) {
-		if (!(sport->port.ignore_status_mask & URXD_DUMMY_READ)) {
-			int bytes = tty_insert_flip_string(port, sport->rx_buf,
-					count);
+			w_bytes = tty_insert_flip_string(port,
+				sport->rx_buf + rx_ring->tail, r_bytes);
 
-			if (bytes != count)
+			/* UART retrieves ownership of RX DMA buffer */
+			dma_sync_sg_for_device(sport->port.dev, sgl, 1,
+				DMA_FROM_DEVICE);
+
+			if (w_bytes != r_bytes)
 				sport->port.icount.buf_overrun++;
+
+			sport->port.icount.rx += w_bytes;
+		} else	{
+			WARN_ON(rx_ring->head > sg_dma_len(sgl));
+			WARN_ON(rx_ring->head <= rx_ring->tail);
 		}
-		tty_flip_buffer_push(port);
-		sport->port.icount.rx += count;
 	}
 
-	/*
-	 * Restart RX DMA directly if more data is available in order to skip
-	 * the roundtrip through the IRQ handler. If there is some data already
-	 * in the FIFO, DMA needs to be restarted soon anyways.
-	 *
-	 * Otherwise stop the DMA and reactivate FIFO IRQs to restart DMA once
-	 * data starts to arrive again.
-	 */
-	if (readl(sport->port.membase + USR2) & USR2_RDR)
-		start_rx_dma(sport);
-	else
-		imx_rx_dma_done(sport);
+	if (w_bytes) {
+		tty_flip_buffer_push(port);
+		dev_dbg(sport->port.dev, "We get %d bytes.\n", w_bytes);
+	}
 }
 
+/* RX DMA buffer periods */
+#define RX_DMA_PERIODS 4
+
 static int start_rx_dma(struct imx_port *sport)
 {
 	struct scatterlist *sgl = &sport->rx_sgl;
@@ -1017,14 +1028,21 @@ static int start_rx_dma(struct imx_port *sport)
 	struct dma_async_tx_descriptor *desc;
 	int ret;
 
+	sport->rx_ring.head = 0;
+	sport->rx_ring.tail = 0;
+	sport->rx_periods = RX_DMA_PERIODS;
+
 	sg_init_one(sgl, sport->rx_buf, RX_BUF_SIZE);
 	ret = dma_map_sg(dev, sgl, 1, DMA_FROM_DEVICE);
 	if (ret == 0) {
 		dev_err(dev, "DMA mapping error for RX.\n");
 		return -EINVAL;
 	}
-	desc = dmaengine_prep_slave_sg(chan, sgl, 1, DMA_DEV_TO_MEM,
-					DMA_PREP_INTERRUPT);
+
+	desc = dmaengine_prep_dma_cyclic(chan, sg_dma_address(sgl),
+		sg_dma_len(sgl), sg_dma_len(sgl) / sport->rx_periods,
+		DMA_DEV_TO_MEM, DMA_PREP_INTERRUPT);
+
 	if (!desc) {
 		dma_unmap_sg(dev, sgl, 1, DMA_FROM_DEVICE);
 		dev_err(dev, "We cannot prepare for the RX slave dma!\n");
@@ -1034,7 +1052,7 @@ static int start_rx_dma(struct imx_port *sport)
 	desc->callback_param = sport;
 
 	dev_dbg(dev, "RX: prepare for the DMA.\n");
-	dmaengine_submit(desc);
+	sport->rx_cookie = dmaengine_submit(desc);
 	dma_async_issue_pending(chan);
 	return 0;
 }
@@ -1058,14 +1076,16 @@ static void imx_setup_ufcr(struct imx_port *sport,
 static void imx_uart_dma_exit(struct imx_port *sport)
 {
 	if (sport->dma_chan_rx) {
+		dmaengine_terminate_all(sport->dma_chan_rx);
 		dma_release_channel(sport->dma_chan_rx);
 		sport->dma_chan_rx = NULL;
-
+		sport->rx_cookie = -EINVAL;
 		kfree(sport->rx_buf);
 		sport->rx_buf = NULL;
 	}
 
 	if (sport->dma_chan_tx) {
+		dmaengine_terminate_all(sport->dma_chan_tx);
 		dma_release_channel(sport->dma_chan_tx);
 		sport->dma_chan_tx = NULL;
 	}
@@ -1103,6 +1123,7 @@ static int imx_uart_dma_init(struct imx_port *sport)
 		ret = -ENOMEM;
 		goto err;
 	}
+	sport->rx_ring.buf = sport->rx_buf;
 
 	/* Prepare for TX : */
 	sport->dma_chan_tx = dma_request_slave_channel(dev, "tx");
@@ -1283,17 +1304,11 @@ static void imx_shutdown(struct uart_port *port)
 	unsigned long flags;
 
 	if (sport->dma_is_enabled) {
-		int ret;
+		sport->dma_is_rxing = 0;
+		sport->dma_is_txing = 0;
+		dmaengine_terminate_all(sport->dma_chan_tx);
+		dmaengine_terminate_all(sport->dma_chan_rx);
 
-		/* We have to wait for the DMA to finish. */
-		ret = wait_event_interruptible(sport->dma_wait,
-			!sport->dma_is_rxing && !sport->dma_is_txing);
-		if (ret != 0) {
-			sport->dma_is_rxing = 0;
-			sport->dma_is_txing = 0;
-			dmaengine_terminate_all(sport->dma_chan_tx);
-			dmaengine_terminate_all(sport->dma_chan_rx);
-		}
 		spin_lock_irqsave(&sport->port.lock, flags);
 		imx_stop_tx(port);
 		imx_stop_rx(port);
-- 
2.8.3

^ permalink raw reply related	[flat|nested] 14+ messages in thread
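
To make the head/tail arithmetic in dma_rx_callback() above easier to
follow, here is a minimal user-space sketch of the same calculation. The
buffer length, period count and residue below are made-up example
values, not numbers taken from the driver:

/*
 * Sketch of the cyclic RX ring bookkeeping: head is the total length
 * minus the DMA residue, tail is the start of the buffer descriptor
 * (period) that contains the head.
 */
#include <stdio.h>

int main(void)
{
	unsigned int buf_len = 4096;	/* sg_dma_len(sgl), example value */
	unsigned int periods = 4;	/* RX_DMA_PERIODS */
	unsigned int residue = 2560;	/* state.residue, example value */

	unsigned int head = buf_len - residue;
	unsigned int bd_size = buf_len / periods;
	unsigned int tail = ((head - 1) / bd_size) * bd_size;

	/* With these inputs: head=1536, tail=1024, 512 bytes pushed. */
	printf("head=%u tail=%u bytes=%u\n", head, tail, head - tail);
	return 0;
}

With these example numbers the callback would push the 512 bytes between
offsets 1024 and 1536, i.e. the data accumulated so far in the buffer
descriptor that is currently being filled.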

* [PATCH 4/4] serial: imx-serial - update RX error counters when DMA is used
  2016-08-08 12:38 ` [PATCH 0/4] serial,dmaengine: use DMA cyclic for IMX UART driver Nandor Han
                     ` (2 preceding siblings ...)
  2016-08-08 12:38   ` [PATCH 3/4] serial: imx-serial - update UART IMX driver to use cyclic DMA Nandor Han
@ 2016-08-08 12:38   ` Nandor Han
  3 siblings, 0 replies; 14+ messages in thread
From: Nandor Han @ 2016-08-08 12:38 UTC (permalink / raw)
  To: =Dan Williams, Vinod Koul, Greg Kroah-Hartman, Jiri Slaby
  Cc: Nandor Han, dmaengine, linux-serial, linux-kernel, Peter Senna Tschudin

Update the error counters when DMA is used for receiving data. Do
this by using the DMA transaction error event instead of the error
interrupts, which reduces the interrupt load.

Tested-by: Peter Senna Tschudin <peter.senna@collabora.com>
Acked-by: Peter Senna Tschudin <peter.senna@collabora.com>
Signed-off-by: Nandor Han <nandor.han@ge.com>
---
 drivers/tty/serial/imx.c | 32 ++++++++++++++++++++++++++++++++
 1 file changed, 32 insertions(+)

diff --git a/drivers/tty/serial/imx.c b/drivers/tty/serial/imx.c
index 1912136..08ccfe1 100644
--- a/drivers/tty/serial/imx.c
+++ b/drivers/tty/serial/imx.c
@@ -704,6 +704,7 @@ out:
 	return IRQ_HANDLED;
 }
 
+static void clear_rx_errors(struct imx_port *sport);
 static int start_rx_dma(struct imx_port *sport);
 /*
  * If the RXFIFO is filled with some data, and then we
@@ -729,6 +730,11 @@ static void imx_dma_rxint(struct imx_port *sport)
 		temp &= ~(UCR2_ATEN);
 		writel(temp, sport->port.membase + UCR2);
 
+		/* disable the rx error interrupts */
+		temp = readl(sport->port.membase + UCR4);
+		temp &= ~UCR4_OREN;
+		writel(temp, sport->port.membase + UCR4);
+
 		/* tell the DMA to receive the data. */
 		start_rx_dma(sport);
 	}
@@ -961,6 +967,7 @@ static void dma_rx_callback(void *data)
 
 	if (status == DMA_ERROR) {
 		dev_err(sport->port.dev, "DMA transaction error.\n");
+		clear_rx_errors(sport);
 		return;
 	}
 
@@ -1057,6 +1064,31 @@ static int start_rx_dma(struct imx_port *sport)
 	return 0;
 }
 
+static void clear_rx_errors(struct imx_port *sport)
+{
+	unsigned int status_usr1, status_usr2;
+
+	status_usr1 = readl(sport->port.membase + USR1);
+	status_usr2 = readl(sport->port.membase + USR2);
+
+	if (status_usr2 & USR2_BRCD) {
+		sport->port.icount.brk++;
+		writel(USR2_BRCD, sport->port.membase + USR2);
+	} else if (status_usr1 & USR1_FRAMERR) {
+		sport->port.icount.frame++;
+		writel(USR1_FRAMERR, sport->port.membase + USR1);
+	} else if (status_usr1 & USR1_PARITYERR) {
+		sport->port.icount.parity++;
+		writel(USR1_PARITYERR, sport->port.membase + USR1);
+	}
+
+	if (status_usr2 & USR2_ORE) {
+		sport->port.icount.overrun++;
+		writel(USR2_ORE, sport->port.membase + USR2);
+	}
+
+}
+
 #define TXTL_DEFAULT 2 /* reset default */
 #define RXTL_DEFAULT 1 /* reset default */
 #define TXTL_DMA 8 /* DMA burst setting */
-- 
2.8.3

^ permalink raw reply related	[flat|nested] 14+ messages in thread
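
The hunks above are easier to read as one condensed flow: when the
cyclic callback sees DMA_ERROR it calls clear_rx_errors(), which reads
USR1/USR2, bumps the matching icount field and acknowledges the flag by
writing the bit back (write-one-to-clear). The user-space sketch below
only mirrors that decision tree; the bit positions are placeholders,
not the real i.MX register definitions:

#include <stdio.h>

/* Placeholder bit positions for the sketch only. */
#define USR2_BRCD	(1u << 2)
#define USR1_FRAMERR	(1u << 10)
#define USR1_PARITYERR	(1u << 15)
#define USR2_ORE	(1u << 1)

struct icount { unsigned int brk, frame, parity, overrun; };

/*
 * Break, frame and parity errors are counted mutually exclusively per
 * call, overrun is checked independently - same structure as the patch.
 * The real driver would also write each set bit back to USR1/USR2 to
 * clear it.
 */
static void count_rx_errors(unsigned int usr1, unsigned int usr2,
			    struct icount *ic)
{
	if (usr2 & USR2_BRCD)
		ic->brk++;
	else if (usr1 & USR1_FRAMERR)
		ic->frame++;
	else if (usr1 & USR1_PARITYERR)
		ic->parity++;

	if (usr2 & USR2_ORE)
		ic->overrun++;
}

int main(void)
{
	struct icount ic = { 0 };

	/* Example: a frame error and an overrun pending at the same time. */
	count_rx_errors(USR1_FRAMERR, USR2_ORE, &ic);
	printf("brk=%u frame=%u parity=%u overrun=%u\n",
	       ic.brk, ic.frame, ic.parity, ic.overrun);
	return 0;
}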

end of thread, other threads:[~2016-08-08 14:11 UTC | newest]

Thread overview: 14+ messages
2016-06-09 12:16 [PATCH RFC 0/4] serial,dma: use DMA cyclic for IMX UART driver Nandor Han
2016-06-09 12:16 ` [PATCH RFC 1/4] dma: imx-sdma - reduce transfer latency for DMA cyclic clients Nandor Han
2016-06-28 14:34   ` Vinod Koul
2016-07-01 14:59     ` EXT: " Nandor Han
2016-07-02 17:27       ` Vinod Koul
2016-07-14  8:32   ` Peter Senna Tschudin
2016-06-09 12:16 ` [PATCH RFC 2/4] dma: imx-sdma - update the residue calculation for cyclic channels Nandor Han
2016-06-09 12:16 ` [PATCH RFC 3/4] serial: imx-serial - update UART IMX driver to use cyclic DMA Nandor Han
2016-06-09 12:16 ` [PATCH RFC 4/4] serial: imx-serial - update RX error counters when DMA is used Nandor Han
2016-08-08 12:38 ` [PATCH 0/4] serial,dmaengine: use DMA cyclic for IMX UART driver Nandor Han
2016-08-08 12:38   ` [PATCH 1/4] dmaengine: imx-sdma - reduce transfer latency for DMA cyclic clients Nandor Han
2016-08-08 12:38   ` [PATCH 2/4] dmaengine: imx-sdma - update the residue calculation for cyclic channels Nandor Han
2016-08-08 12:38   ` [PATCH 3/4] serial: imx-serial - update UART IMX driver to use cyclic DMA Nandor Han
2016-08-08 12:38   ` [PATCH 4/4] serial: imx-serial - update RX error counters when DMA is used Nandor Han
