From: Sameer Pujar <spujar@nvidia.com>
To: Vinod Koul <vkoul@kernel.org>, Jon Hunter <jonathanh@nvidia.com>
Cc: Peter Ujfalusi <peter.ujfalusi@ti.com>,
	<dan.j.williams@intel.com>, <tiwai@suse.com>,
	<dmaengine@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
	<sharadg@nvidia.com>, <rlokhande@nvidia.com>,
	<dramesh@nvidia.com>, <mkumard@nvidia.com>
Subject: Re: [PATCH] [RFC] dmaengine: add fifo_size member
Date: Mon, 16 Sep 2019 14:32:30 +0530
Message-ID: <14c52eb7-8676-652a-ae7a-8713ba536f05@nvidia.com>
In-Reply-To: <20190820110510.GQ12733@vkoul-mobl.Dlink>

Sorry for the delay in replying.

On 8/20/2019 4:35 PM, Vinod Koul wrote:
> On 19-08-19, 16:56, Jon Hunter wrote:
>>>>>>>>> On this, I am inclined to think that the DMA driver should not be involved.
>>>>>>>>> The ADMAIF needs this configuration, and we should take the dma_router path
>>>>>>>>> for this piece and add features like this to it.
>>>>>>>> Hi Vinod,
>>>>>>>>
>>>>>>>> The configuration is needed by both the ADMA and the ADMAIF. The size is
>>>>>>>> configurable on the ADMAIF side. The ADMA needs to know this info and
>>>>>>>> program its registers accordingly.
>>>>>>> Well, I would say the client decides the settings for both the DMA and the
>>>>>>> DMAIF and sets the peripheral accordingly as well, so the client communicates
>>>>>>> the two sets of info to the two sets of drivers.
>>>>>> That may be, but I still don't see how the information is passed from the
>>>>>> client in the first place. The current problem is that there is no means
>>>>>> to pass both a max-burst size and a fifo-size to the DMA driver from the
>>>>>> client.
>>>>> So one thing that is not clear to me is why the ADMA needs the fifo-size; I
>>>>> thought it was to program the ADMAIF. If we have the client program the
>>>>> max-burst size into the ADMA and the fifo-size into the ADMAIF, we won't need
>>>>> that. Can you please confirm whether my assumption is valid?
>>>> Let me see if I can clarify ...
>>>>
>>>> 1. The FIFO we are discussing here resides in the ADMAIF module, which is
>>>>     a separate hardware block from the ADMA (although the naming makes this
>>>>     unclear).
>>>>
>>>> 2. The size of the FIFO in the ADMAIF is configurable and this is
>>>>     configured via the ADMAIF registers. This allows different channels
>>>>     to use different FIFO sizes. Think of this as a shared memory that is
>>>>     divided into n FIFOs shared between all channels.
>>>>
>>>> 3. The ADMA, not the ADMAIF, manages the flow to the FIFO. This is
>>>>     because the ADMAIF only tells the ADMA when a word has been
>>>>     read/written (depending on direction); the ADMAIF does not indicate
>>>>     whether the FIFO is full, empty, etc. Hence, the ADMA needs to know the
>>>>     total FIFO size.
>>>>
>>>> So the ADMA needs to know the FIFO size so that it does not overrun the
>>>> FIFO, and we can also set a burst size (less than the total FIFO size)
>>>> indicating how many words to transfer at a time. Hence, the two parameters.
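
(As an illustration of those two parameters: a minimal sketch of how the client
driver could hand both values to the ADMA channel, assuming struct
dma_slave_config gains the fifo_size member this RFC proposes. The helper name,
the FIFO address and the numbers below are placeholders, not actual driver code.)

  #include <linux/dmaengine.h>

  /* Hypothetical client-side (ADMAIF/PCM driver) setup, playback as an example. */
  static int admaif_setup_adma(struct dma_chan *chan, dma_addr_t fifo_addr)
  {
          struct dma_slave_config cfg = { };

          cfg.direction      = DMA_MEM_TO_DEV;
          cfg.dst_addr       = fifo_addr;                   /* mapped ADMAIF TX FIFO address */
          cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
          cfg.dst_maxburst   = 8;    /* words per burst; must be less than the FIFO size */
          cfg.fifo_size      = 64;   /* proposed member: size of the mapped ADMAIF channel FIFO */

          return dmaengine_slave_config(chan, &cfg);
  }
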
>>> Thanks, I confirm this is my understanding as well.
>>>
>>> To compare with a regular case, for example SPI over DMA: the SPI driver
>>> will calculate the FIFO size and burst to be used and program the DMA
>>> (burst size) and its own FIFOs accordingly.
>>>
>>> So, in your case, why should the peripheral driver not calculate the FIFO
>>> size for both the ADMA and the ADMAIF (and, if required, its own FIFO) and
>>> program the two (ADMA and ADMAIF)?
>>>
>>> It is not clear to me what the limiting factor in this flow is.
>> The FIFO size that is configured by the ADMAIF driver needs to be given
>> to the ADMA driver so that it can program its registers accordingly. The
>> difference here is that both the ADMA and ADMAIF need the FIFO size.
> Can you please help by describing what it is programming using the FIFO
> size of the ADMAIF?
**The ADMA channel register is programmed with the same FIFO_SIZE as the
ADMAIF channel to which it is mapped.**
As previously mentioned, the hardware (on the ADMA side) uses this value to
understand the FIFO depth and to know when a space of BURST_SIZE becomes
available. The ADMAIF is an interface to the AHUB; as data moves forward to
other clients via the AHUB, the ADMAIF signals the ADMA on a per-word basis.
The ADMA counts these signals to track the available space and initiates a
transfer when sufficient space is available.
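
In other words, the accounting the ADMA does for one playback channel could be
modelled roughly as below. This is only a conceptual sketch of the flow control;
none of these names exist in hardware or in the driver, and the values are
illustrative.

  /* mirrors of the programmed ADMA channel settings (illustrative values) */
  static unsigned int fifo_size  = 64;  /* same FIFO_SIZE as the mapped ADMAIF channel */
  static unsigned int burst_size = 8;   /* BURST_SIZE programmed into the ADMA channel */
  static unsigned int in_fifo;          /* words the ADMA believes are still in the ADMAIF FIFO */

  /* per-word signal from the ADMAIF each time it forwards a word towards the AHUB */
  static void admaif_word_consumed(void)
  {
          if (in_fifo)
                  in_fifo--;
  }

  /* the ADMA starts the next burst only when burst_size words of space are known to be free */
  static int adma_space_for_burst(void)
  {
          return (fifo_size - in_fifo) >= burst_size;
  }

  /* ... and after issuing a burst it accounts for it: in_fifo += burst_size; */
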
> Thanks

Thread overview: 60+ messages
2019-04-30 11:30 [RFC] dmaengine: add fifo_size member Sameer Pujar
2019-04-30 11:30 ` [PATCH] " Sameer Pujar
2019-05-02  6:04 ` Vinod Koul
2019-05-02  6:04   ` [PATCH] " Vinod Koul
2019-05-02 10:53   ` Sameer Pujar
2019-05-02 12:25     ` Vinod Koul
2019-05-02 13:29       ` Sameer Pujar
2019-05-03 19:10         ` Peter Ujfalusi
2019-05-04 10:23         ` Vinod Koul
2019-05-06 13:04           ` Sameer Pujar
2019-05-06 15:50             ` Vinod Koul
2019-06-06  3:49               ` Sameer Pujar
2019-06-06  6:00                 ` Peter Ujfalusi
2019-06-06  6:41                   ` Sameer Pujar
2019-06-06  7:14                     ` Jon Hunter
2019-06-06 10:22                       ` Peter Ujfalusi
2019-06-06 10:49                         ` Jon Hunter
2019-06-06 11:54                           ` Peter Ujfalusi
2019-06-06 12:37                             ` Jon Hunter
2019-06-06 13:45                               ` Dmitry Osipenko
2019-06-06 13:55                                 ` Dmitry Osipenko
2019-06-06 14:26                                   ` Jon Hunter
2019-06-06 14:36                                     ` Jon Hunter
2019-06-06 14:36                                     ` Dmitry Osipenko
2019-06-06 14:47                                       ` Jon Hunter
2019-06-06 14:25                                 ` Jon Hunter
2019-06-06 15:18                                   ` Dmitry Osipenko
2019-06-06 16:32                                     ` Jon Hunter
2019-06-06 16:44                                       ` Dmitry Osipenko
2019-06-06 16:53                                         ` Jon Hunter
2019-06-06 17:25                                           ` Dmitry Osipenko
2019-06-06 17:56                                             ` Dmitry Osipenko
2019-06-07  9:24                                             ` Jon Hunter
2019-06-07  5:50                               ` Peter Ujfalusi
2019-06-07  9:18                                 ` Jon Hunter
2019-06-07 10:27                                   ` Jon Hunter
2019-06-07 12:17                                     ` Peter Ujfalusi
2019-06-07 12:58                                       ` Jon Hunter
2019-06-07 13:35                                         ` Peter Ujfalusi
2019-06-07 20:53                                           ` Dmitry Osipenko
2019-06-10  8:01                                             ` Jon Hunter
2019-06-10  7:59                                           ` Jon Hunter
2019-06-13  4:43                 ` Vinod Koul
2019-06-17  7:07                   ` Sameer Pujar
2019-06-18  4:33                     ` Vinod Koul
2019-06-20 10:29                       ` Sameer Pujar
2019-06-24  6:26                         ` Vinod Koul
2019-06-25  2:57                           ` Sameer Pujar
2019-07-05  6:15                             ` Sameer Pujar
2019-07-15 15:42                               ` Sameer Pujar
2019-07-19  5:04                               ` Vinod Koul
2019-07-23  5:54                                 ` Sameer Pujar
2019-07-29  6:10                                   ` Vinod Koul
2019-07-31  9:48                                     ` Jon Hunter
2019-07-31 15:16                                       ` Vinod Koul
2019-08-02  8:51                                         ` Jon Hunter
2019-08-08 12:38                                           ` Vinod Koul
2019-08-19 15:56                                             ` Jon Hunter
2019-08-20 11:05                                               ` Vinod Koul
2019-09-16  9:02                                                 ` Sameer Pujar [this message]
