From: Sameer Pujar <spujar@nvidia.com>
To: Vinod Koul <vkoul@kernel.org>
Cc: <dan.j.williams@intel.com>, <tiwai@suse.com>,
	<jonathanh@nvidia.com>, <dmaengine@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <sharadg@nvidia.com>,
	<rlokhande@nvidia.com>, <dramesh@nvidia.com>,
	<mkumard@nvidia.com>
Subject: Re: [PATCH] [RFC] dmaengine: add fifo_size member
Date: Thu, 6 Jun 2019 09:19:12 +0530
Message-ID: <b7e28e73-7214-f1dc-866f-102410c88323@nvidia.com>
In-Reply-To: <20190506155046.GH3845@vkoul-mobl.Dlink>

Sorry for the late reply.
[Resending the reply since delivery failed for a few recipients]

On 5/6/2019 9:20 PM, Vinod Koul wrote:
> On 06-05-19, 18:34, Sameer Pujar wrote:
>> On 5/4/2019 3:53 PM, Vinod Koul wrote:
>>> On 02-05-19, 18:59, Sameer Pujar wrote:
>>>> On 5/2/2019 5:55 PM, Vinod Koul wrote:
>>>>> On 02-05-19, 16:23, Sameer Pujar wrote:
>>>>>> On 5/2/2019 11:34 AM, Vinod Koul wrote:
>>>>>>> On 30-04-19, 17:00, Sameer Pujar wrote:
>>>>>>>> During DMA transfers from memory to I/O, it was observed that transfers
>>>>>>>> were inconsistent and resulted in glitches during audio playback. This happened
>>>>>>>> because the FIFO size on the DMA side did not match the slave channel configuration.
>>>>>>>>
>>>>>>>> Currently the 'dma_slave_config' structure does not have a field for FIFO size,
>>>>>>>> so the platform PCM driver cannot pass the FIFO size as part of the slave config.
>>>>>>>> Note that the 'snd_dmaengine_dai_dma_data' structure has a fifo_size field, but it
>>>>>>>> cannot be used to pass the size info. This patch introduces a fifo_size field
>>>>>>>> which can be populated on the slave side. Users can set the required size
>>>>>>>> for the slave peripheral (multiple channels can run independently with
>>>>>>>> different FIFO sizes) and the corresponding sizes are programmed through
>>>>>>>> dma_slave_config on the DMA side.
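
[For context, a rough sketch of what the proposed addition to 'dma_slave_config'
could look like; the exact type, unit and placement of the new member here are
assumptions for illustration, not the final patch:]

  struct dma_slave_config {
          /* ... existing members elided ... */
          u32 src_maxburst;
          u32 dst_maxburst;
          u32 fifo_size;  /* proposed: slave FIFO size (unit assumed to be bytes) */
          bool device_fc;
          /* ... */
  };
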
>>>>>>> FIFO size is a hardware property, so I am not sure why you would want an
>>>>>>> interface to program that?
>>>>>>>
>>>>>>> On a mismatch, I guess you need to take care of src/dst_maxburst..
>>>>>> Yes, FIFO size is a HW property, but it is SW configurable (at least in my
>>>>>> case) on the slave side and can be set to different sizes. The src/dst_maxburst is
>>>>> Are you sure? Have you talked to the HW folks about that? IIUC you are
>>>>> programming the data to be used in the FIFO, not the FIFO length!
>>>> Yes, I was referring to the FIFO length.
>>>>
>>>> 1. The MAX FIFO size is fixed in HW, but there is a way to limit the usage
>>>>    per channel in multiples of 64 bytes.
>>>> 2. Having a separate member would give independent control over MAX BURST
>>>>    SIZE and FIFO SIZE.
>>>>>> programmed to specific values; I think this depends on a few factors related
>>>>>> to the bandwidth needs of the client, the DMA needs of the system, etc.
>>>>> Precisely
>>>>>
>>>>>> In such cases, how does the DMA know the actual FIFO depth of the slave peripheral?
>>>>> Why should the DMA know? Its job is to push/pull data as configured by the
>>>>> peripheral driver. The peripheral driver knows and configures the DMA
>>>>> accordingly.
>>>> I am not sure if there is any HW logic that requires the DMA to know the
>>>> configured FIFO depth on the slave side. I will speak to the HW folks and
>>>> update here.
>>> I still do not understand why the DMA would care about the slave side
>>> configuration. In the absence of a patch which uses this, I am not sure
>>> what you are trying to do...
>> I am using the DMA HW in cyclic mode for data transfers to the audio sub-system.
>> In such cases flow control on the DMA transfers is essential, since the I/O is
> Right, and people use the burst size for precisely that!
>
>> consuming/producing the data at a slower rate. The DMA transfer is enabled/
>> disabled during start/stop of audio playback/capture sessions through ALSA
>> callbacks, and the DMA runs in cyclic mode. Hence the DMA is the one doing
>> flow control, and it needs to know the peripheral FIFO depth to avoid
>> overruns/underruns.
> Not really, knowing that doesn't help in the way you have described! The DMA
> pushes/pulls data, and that is controlled by the burst size configured by the
> slave (so it knows what to expect and programs things accordingly).
>
> You are really going the other way around about the whole picture. FWIW that
> is how *other* folks do audio with dmaengine!
I discussed this internally with the HW folks, and below is the reason why the
DMA needs to know the FIFO size.

- The FIFOs reside in the peripheral device (ADMAIF), which is the ADMA
  interface to the audio sub-system.
- ADMAIF has multiple channels which share a FIFO buffer for their individual
  operations. There is a provision to allocate a specific FIFO size for each
  individual ADMAIF channel from the shared buffer.
- The Tegra Audio DMA (ADMA) architecture is different from that of the usual
  DMA engines you described earlier.
- The flow control logic is placed inside ADMA. The slave peripheral device
  (ADMAIF) signals ADMA whenever a read or write happens on the FIFO (on a
  per-WORD basis). Please note that the signaling is per channel. There is no
  other signaling from ADMAIF to ADMA.
- ADMA keeps a counter based on the above signaling. Whenever sufficient space
  is available, it initiates a transfer. But the question is, how does it know
  when to transfer? This is the reason why ADMA has to be aware of the FIFO
  depth of the ADMAIF channel. From the counters and the FIFO depth, it knows
  exactly when free space is available in the context of a specific channel.
  On ADMA, FIFO_SIZE is just a value which should match the actual
  FIFO_DEPTH/SIZE of the ADMAIF channel.
- Now consider two cases based on the above logic:
  * Case 1: DMA_FIFO_SIZE > SLAVE_FIFO_SIZE
    In this case ADMA thinks there is enough space available for a transfer
    when the FIFO data on the slave has actually not been consumed yet. This
    results in an OVERRUN.
  * Case 2: DMA_FIFO_SIZE < SLAVE_FIFO_SIZE
    This is the case where ADMA won't transfer even though sufficient space is
    available, resulting in an UNDERRUN.
- The guideline is to program DMA_FIFO_SIZE (on the ADMA side) =
  SLAVE_FIFO_SIZE (on the ADMAIF side), and hence we need a way to communicate
  the FIFO size info to ADMA. A rough model of this accounting, and of how the
  value could be passed, is sketched below.
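
The following is only an illustrative software model of the per-channel flow
control described above; the structure and names (adma_chan, words_pushed,
words_consumed, adma_can_push) are made up for this example, the real logic
lives in the ADMA hardware:

  struct adma_chan {
          unsigned int fifo_size;      /* FIFO size value programmed on the ADMA side */
          unsigned int words_pushed;   /* words ADMA has pushed towards ADMAIF */
          unsigned int words_consumed; /* words ADMAIF has signaled as read */
  };

  static bool adma_can_push(const struct adma_chan *c, unsigned int burst)
  {
          unsigned int in_flight = c->words_pushed - c->words_consumed;

          /*
           * If fifo_size is programmed larger than the real ADMAIF FIFO depth,
           * this check passes too early -> OVERRUN (Case 1). If it is smaller,
           * the check fails even when the FIFO has room -> UNDERRUN (Case 2).
           * Hence fifo_size must match the ADMAIF channel's actual depth.
           */
          return (c->fifo_size - in_flight) >= burst;
  }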

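And a sketch of how the ADMAIF driver could then pass this value, assuming the
proposed fifo_size member is accepted; the function name, the burst value and
the unit (bytes) used here are assumptions for illustration only:

  #include <linux/dmaengine.h>

  static int admaif_configure_dma(struct dma_chan *chan, dma_addr_t fifo_addr,
                                  unsigned int fifo_size_bytes)
  {
          struct dma_slave_config cfg = { };

          cfg.direction = DMA_MEM_TO_DEV;
          cfg.dst_addr = fifo_addr;
          cfg.dst_addr_width = DMA_SLAVE_BUSWIDTH_4_BYTES;
          cfg.dst_maxburst = 8;            /* example burst size, in words */
          cfg.fifo_size = fifo_size_bytes; /* proposed member from this RFC */

          return dmaengine_slave_config(chan, &cfg);
  }
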
Thanks,
Sameer.
>> Also please note that the peripheral device has multiple channels which share
>> a fixed MAX FIFO buffer, but SW can program different FIFO sizes for the
>> individual channels.
> Yeah, the peripheral driver, yes. The DMA driver, nope!
>


Thread overview: 60+ messages
2019-04-30 11:30 [RFC] dmaengine: add fifo_size member Sameer Pujar
2019-04-30 11:30 ` [PATCH] " Sameer Pujar
2019-05-02  6:04 ` Vinod Koul
2019-05-02  6:04   ` [PATCH] " Vinod Koul
2019-05-02 10:53   ` Sameer Pujar
2019-05-02 12:25     ` Vinod Koul
2019-05-02 13:29       ` Sameer Pujar
2019-05-03 19:10         ` Peter Ujfalusi
2019-05-04 10:23         ` Vinod Koul
2019-05-06 13:04           ` Sameer Pujar
2019-05-06 15:50             ` Vinod Koul
2019-06-06  3:49               ` Sameer Pujar [this message]
2019-06-06  6:00                 ` Peter Ujfalusi
2019-06-06  6:41                   ` Sameer Pujar
2019-06-06  7:14                     ` Jon Hunter
2019-06-06 10:22                       ` Peter Ujfalusi
2019-06-06 10:49                         ` Jon Hunter
2019-06-06 11:54                           ` Peter Ujfalusi
2019-06-06 12:37                             ` Jon Hunter
2019-06-06 13:45                               ` Dmitry Osipenko
2019-06-06 13:55                                 ` Dmitry Osipenko
2019-06-06 14:26                                   ` Jon Hunter
2019-06-06 14:36                                     ` Jon Hunter
2019-06-06 14:36                                     ` Dmitry Osipenko
2019-06-06 14:47                                       ` Jon Hunter
2019-06-06 14:25                                 ` Jon Hunter
2019-06-06 15:18                                   ` Dmitry Osipenko
2019-06-06 16:32                                     ` Jon Hunter
2019-06-06 16:44                                       ` Dmitry Osipenko
2019-06-06 16:53                                         ` Jon Hunter
2019-06-06 17:25                                           ` Dmitry Osipenko
2019-06-06 17:56                                             ` Dmitry Osipenko
2019-06-07  9:24                                             ` Jon Hunter
2019-06-07  5:50                               ` Peter Ujfalusi
2019-06-07  9:18                                 ` Jon Hunter
2019-06-07 10:27                                   ` Jon Hunter
2019-06-07 12:17                                     ` Peter Ujfalusi
2019-06-07 12:58                                       ` Jon Hunter
2019-06-07 13:35                                         ` Peter Ujfalusi
2019-06-07 20:53                                           ` Dmitry Osipenko
2019-06-10  8:01                                             ` Jon Hunter
2019-06-10  7:59                                           ` Jon Hunter
2019-06-13  4:43                 ` Vinod Koul
2019-06-17  7:07                   ` Sameer Pujar
2019-06-18  4:33                     ` Vinod Koul
2019-06-20 10:29                       ` Sameer Pujar
2019-06-24  6:26                         ` Vinod Koul
2019-06-25  2:57                           ` Sameer Pujar
2019-07-05  6:15                             ` Sameer Pujar
2019-07-15 15:42                               ` Sameer Pujar
2019-07-19  5:04                               ` Vinod Koul
2019-07-23  5:54                                 ` Sameer Pujar
2019-07-29  6:10                                   ` Vinod Koul
2019-07-31  9:48                                     ` Jon Hunter
2019-07-31 15:16                                       ` Vinod Koul
2019-08-02  8:51                                         ` Jon Hunter
2019-08-08 12:38                                           ` Vinod Koul
2019-08-19 15:56                                             ` Jon Hunter
2019-08-20 11:05                                               ` Vinod Koul
2019-09-16  9:02                                                 ` Sameer Pujar
