From: Jose Abreu <Jose.Abreu@synopsys.com>
To: Kedareswara rao Appana <appana.durga.rao@xilinx.com>,
	<dan.j.williams@intel.com>, <vinod.koul@intel.com>,
	<michal.simek@xilinx.com>, <soren.brinkmann@xilinx.com>,
	<appanad@xilinx.com>, <moritz.fischer@ettus.com>,
	<laurent.pinchart@ideasonboard.com>, <luis@debethencourt.com>,
	<svemula@xilinx.com>, <anirudh@xilinx.com>,
	<Jose.Abreu@synopsys.com>
Cc: <dmaengine@vger.kernel.org>,
	<linux-arm-kernel@lists.infradead.org>,
	<linux-kernel@vger.kernel.org>
Subject: Re: [PATCH 2/3] dmaeninge: xilinx_dma: Fix bug in multiple frame stores scenario in vdma
Date: Thu, 15 Dec 2016 16:10:20 +0000	[thread overview]
Message-ID: <9d92984b-e04a-cd29-e933-d8ea4d610c94@synopsys.com> (raw)
In-Reply-To: <1481814682-31780-3-git-send-email-appanad@xilinx.com>

Hi Kedar,


On 15-12-2016 15:11, Kedareswara rao Appana wrote:
> When VDMA is configured for more than one frame in the h/w,
> for example when the h/w is configured for n frames and the user
> submits n frames and triggers the DMA using the issue_pending API,
> the current driver flow submits one frame at a time, but it should
> submit all n frames at once, since the h/w is configured for n frames.
>
> This patch fixes this issue.
>
> Signed-off-by: Kedareswara rao Appana <appanad@xilinx.com>
> ---
>  drivers/dma/xilinx/xilinx_dma.c | 43 +++++++++++++++++++++++++----------------
>  1 file changed, 26 insertions(+), 17 deletions(-)
>
> diff --git a/drivers/dma/xilinx/xilinx_dma.c b/drivers/dma/xilinx/xilinx_dma.c
> index 736c2a3..4f3fa94 100644
> --- a/drivers/dma/xilinx/xilinx_dma.c
> +++ b/drivers/dma/xilinx/xilinx_dma.c
> @@ -1087,23 +1087,33 @@ static void xilinx_vdma_start_transfer(struct xilinx_dma_chan *chan)
>  				tail_segment->phys);
>  	} else {
>  		struct xilinx_vdma_tx_segment *segment, *last = NULL;
> -		int i = 0;
> +		int i = 0, j = 0;
>  
>  		if (chan->desc_submitcount < chan->num_frms)
>  			i = chan->desc_submitcount;
>  
> -		list_for_each_entry(segment, &desc->segments, node) {
> -			if (chan->ext_addr)
> -				vdma_desc_write_64(chan,
> -					XILINX_VDMA_REG_START_ADDRESS_64(i++),
> -					segment->hw.buf_addr,
> -					segment->hw.buf_addr_msb);
> -			else
> -				vdma_desc_write(chan,
> -					XILINX_VDMA_REG_START_ADDRESS(i++),
> -					segment->hw.buf_addr);
> -
> -			last = segment;
> +		for (j = 0; j < chan->num_frms; ) {
> +			list_for_each_entry(segment, &desc->segments, node) {
> +				if (chan->ext_addr)
> +					vdma_desc_write_64(chan,
> +					  XILINX_VDMA_REG_START_ADDRESS_64(i++),
> +					  segment->hw.buf_addr,
> +					  segment->hw.buf_addr_msb);
> +				else
> +					vdma_desc_write(chan,
> +					    XILINX_VDMA_REG_START_ADDRESS(i++),
> +					    segment->hw.buf_addr);
> +
> +				last = segment;

Hmm, is it possible for a descriptor to carry more than one
segment? If so, then i and j will get out of sync.
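
A rough, untested sketch of what I mean, keeping the names from the
patch: advance j once per frame store written, i.e. per segment,
rather than once per descriptor, so the outer bound on num_frms
stays consistent with i:

	list_for_each_entry(segment, &desc->segments, node) {
		/* ... program XILINX_VDMA_REG_START_ADDRESS(i) ... */
		last = segment;
		j++;	/* one frame store consumed per segment */
	}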

> +			}
> +			list_del(&desc->node);
> +			list_add_tail(&desc->node, &chan->active_list);
> +			j++;

But if i is non-zero and pending_list has more than num_frms
descriptors, then i will not wrap around as it should and will
write to an invalid framebuffer location, right?
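
Untested sketch: wrapping the index after each write would keep it
inside the range of frame stores the h/w actually has, e.g.:

	if (++i >= chan->num_frms)
		i = 0;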

> +			if (list_empty(&chan->pending_list))
> +				break;
> +			desc = list_first_entry(&chan->pending_list,
> +						struct xilinx_dma_tx_descriptor,
> +						node);
>  		}
>  
>  		if (!last)
> @@ -1114,14 +1124,13 @@ static void xilinx_vdma_start_transfer(struct xilinx_dma_chan *chan)
>  		vdma_desc_write(chan, XILINX_DMA_REG_FRMDLY_STRIDE,
>  				last->hw.stride);
>  		vdma_desc_write(chan, XILINX_DMA_REG_VSIZE, last->hw.vsize);

Maybe a check that all framebuffers contain valid addresses
should be done before programming vsize so that VDMA does not try
to write to invalid addresses.
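
Just to illustrate the idea (the driver keeps no such record today,
so frm_addr below is a hypothetical per-channel shadow of the
programmed start addresses):

	for (k = 0; k < chan->num_frms; k++)
		if (!chan->frm_addr[k])	/* hypothetical shadow array */
			return;	/* a frame store is still unprogrammed */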

> +
> +		chan->desc_submitcount += j;
> +		chan->desc_pendingcount -= j;
>  	}
>  
>  	chan->idle = false;
>  	if (!chan->has_sg) {
> -		list_del(&desc->node);
> -		list_add_tail(&desc->node, &chan->active_list);
> -		chan->desc_submitcount++;
> -		chan->desc_pendingcount--;
>  		if (chan->desc_submitcount == chan->num_frms)
>  			chan->desc_submitcount = 0;

"desc_submitcount >= chan->num_frms would be safer here.

>  	} else {

Best regards,
Jose Miguel Abreu


Thread overview:
2016-12-15 15:11 [PATCH 0/3] dmaengine: xilinx_dma: Bug fixes Kedareswara rao Appana
2016-12-15 15:11 ` [PATCH 1/3] dmaengine: xilinx_dma: Check for channel idle state before submitting dma descriptor Kedareswara rao Appana
2016-12-15 15:45   ` Jose Abreu
2016-12-15 18:49     ` Appana Durga Kedareswara Rao
2016-12-16 15:35   ` Laurent Pinchart
2016-12-19 15:39     ` Appana Durga Kedareswara Rao
2016-12-19 17:18       ` Laurent Pinchart
2016-12-23  8:49         ` Appana Durga Kedareswara Rao
2016-12-15 15:11 ` [PATCH 2/3] dmaeninge: xilinx_dma: Fix bug in multiple frame stores scenario in vdma Kedareswara rao Appana
2016-12-15 16:10   ` Jose Abreu [this message]
2016-12-15 19:09     ` Appana Durga Kedareswara Rao
2016-12-16 10:11       ` Jose Abreu
2016-12-19 15:40         ` Appana Durga Kedareswara Rao
2016-12-16 15:54   ` Laurent Pinchart
2016-12-19 15:41     ` Appana Durga Kedareswara Rao
2016-12-15 15:11 ` [PATCH 3/3] dmaengine: xilinx_dma: Fix race condition in the driver for multiple descriptor scenario Kedareswara rao Appana
