dmaengine Archive on lore.kernel.org
From: Vinod Koul <vkoul@kernel.org>
To: Jon Hunter <jonathanh@nvidia.com>
Cc: Sameer Pujar <spujar@nvidia.com>,
	Peter Ujfalusi <peter.ujfalusi@ti.com>,
	dan.j.williams@intel.com, tiwai@suse.com,
	dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org,
	sharadg@nvidia.com, rlokhande@nvidia.com, dramesh@nvidia.com,
	mkumard@nvidia.com
Subject: Re: [PATCH] [RFC] dmaengine: add fifo_size member
Date: Thu, 8 Aug 2019 18:08:33 +0530
Message-ID: <20190808123833.GX12733@vkoul-mobl.Dlink>
In-Reply-To: <c0f4de86-423a-35df-3744-40db89f2fdfe@nvidia.com>

On 02-08-19, 09:51, Jon Hunter wrote:
> 
> On 31/07/2019 16:16, Vinod Koul wrote:
> > On 31-07-19, 10:48, Jon Hunter wrote:
> >>
> >> On 29/07/2019 07:10, Vinod Koul wrote:
> >>> On 23-07-19, 11:24, Sameer Pujar wrote:
> >>>>
> >>>> On 7/19/2019 10:34 AM, Vinod Koul wrote:
> >>>>> On 05-07-19, 11:45, Sameer Pujar wrote:
> >>>>>> Hi Vinod,
> >>>>>>
> >>>>>> What are your final thoughts regarding this?
> >>>>> Hi sameer,
> >>>>>
> >>>>> Sorry for the delay in replying
> >>>>>
> >>> On this, I am inclined to think that the DMA driver should not be
> >>> involved. The ADMAIF needs this configuration, so we should take the
> >>> dma_router path for this piece and add features like this to it.
> >>>>
> >>>> Hi Vinod,
> >>>>
> >>>> The configuration is needed by both the ADMA and the ADMAIF. The size
> >>>> is configurable on the ADMAIF side; the ADMA needs to know this info
> >>>> and program itself accordingly.
> >>>
> >>> Well, I would say the client decides the settings for both the DMA and
> >>> the DMAIF, and sets up the peripheral accordingly as well, so the client
> >>> communicates the two sets of info to the two sets of drivers.
> >>
> >> That may be, but I still don't see how the information is passed from the
> >> client in the first place. The current problem is that there is no means
> >> to pass both a max-burst size and a FIFO size to the DMA driver from the
> >> client.
> > 
> > So one thing not clear to me is why the ADMA needs the FIFO size; I
> > thought it was to program the ADMAIF, and if we have the client program
> > the max-burst size into the ADMA and the FIFO size into the ADMAIF we
> > won't need that. Can you please confirm if my assumption is valid?
> 
> Let me see if I can clarify ...
> 
> 1. The FIFO we are discussing here resides in the ADMAIF module, which is
>    a separate hardware block from the ADMA (although the naming makes this
>    unclear).
> 
> 2. The size of the FIFO in the ADMAIF is configurable, and this is
>    configured via the ADMAIF registers. This allows different channels
>    to use different FIFO sizes. Think of it as a shared memory that is
>    divided into n FIFOs shared between all channels.
> 
> 3. The ADMA, not the ADMAIF, manages the flow to the FIFO. This is
>    because the ADMAIF only tells the ADMA when a word has been
>    read/written (depending on direction); the ADMAIF does not indicate
>    whether the FIFO is full, empty, etc. Hence, the ADMA needs to know
>    the total FIFO size.
> 
> So the ADMA needs to know the FIFO size so that it does not overrun the
> FIFO and we can also set a burst size (less than the total FIFO size)
> indicating how many words to transfer at a time. Hence, the two parameters.

Thanks, I confirm this is my understanding as well.

To compare with a regular case, for example SPI over DMA: the SPI driver
will calculate the FIFO size and burst to be used, and program the DMA
(burst size) and its own FIFOs accordingly.

So, in your case, why should the peripheral driver not calculate the FIFO
size for both the ADMA and the ADMAIF (and, if required, its own FIFO) and
program the two (ADMA and ADMAIF)?

It is not clear to me what the limiting factor in this flow is.

> Even if we were to use some sort of router between the ADMA and ADMAIF,
> the client still needs to indicate to the ADMA what FIFO size and burst
> size to use, if I am following you correctly.
> 
> Let me know if this is clearer.
> 
> Thanks
> Jon
> 
> -- 
> nvpublic

-- 
~Vinod


