dmaengine Archive on lore.kernel.org
From: Vinod Koul <vkoul@kernel.org>
To: Sameer Pujar <spujar@nvidia.com>
Cc: dan.j.williams@intel.com, tiwai@suse.com, jonathanh@nvidia.com,
	dmaengine@vger.kernel.org, linux-kernel@vger.kernel.org,
	sharadg@nvidia.com, rlokhande@nvidia.com, dramesh@nvidia.com,
	mkumard@nvidia.com
Subject: Re: [PATCH] [RFC] dmaengine: add fifo_size member
Date: Mon, 6 May 2019 21:20:46 +0530
Message-ID: <20190506155046.GH3845@vkoul-mobl.Dlink> (raw)
In-Reply-To: <ce0e9c0b-b909-54ae-9086-a1f0f6be903c@nvidia.com>

On 06-05-19, 18:34, Sameer Pujar wrote:
> 
> On 5/4/2019 3:53 PM, Vinod Koul wrote:
> > On 02-05-19, 18:59, Sameer Pujar wrote:
> > > On 5/2/2019 5:55 PM, Vinod Koul wrote:
> > > > On 02-05-19, 16:23, Sameer Pujar wrote:
> > > > > On 5/2/2019 11:34 AM, Vinod Koul wrote:
> > > > > > On 30-04-19, 17:00, Sameer Pujar wrote:
> > > > > > > During DMA transfers from memory to I/O, it was observed that transfers
> > > > > > > were inconsistent and resulted in glitches during audio playback. This
> > > > > > > happened because the FIFO size on the DMA side did not match the slave
> > > > > > > channel configuration.
> > > > > > > 
> > > > > > > Currently the 'dma_slave_config' structure does not have a field for
> > > > > > > FIFO size, so the platform PCM driver cannot pass the FIFO size as
> > > > > > > part of the slave config. Note that the 'snd_dmaengine_dai_dma_data'
> > > > > > > structure has a fifo_size field, but it cannot be used to pass the
> > > > > > > size info. This patch introduces a fifo_size field which can be
> > > > > > > populated on the slave side. Users can set the required size for the
> > > > > > > slave peripheral (multiple channels can run independently with
> > > > > > > different FIFO sizes) and the corresponding sizes are programmed
> > > > > > > through dma_slave_config on the DMA side.
> > > > > > FIFO size is a hardware property not sure why you would want an
> > > > > > interface to program that?
> > > > > > 
> > > > > > On mismatch, I guess you need to take care of src/dst_maxburst..
> > > > > Yes, FIFO size is a HW property. But it is SW-configurable (at least
> > > > > in my case) on the slave side and can be set to different sizes. The
> > > > > src/dst_maxburst is
> > > > Are you sure? Have you talked to HW folks about that? IIUC you are
> > > > programming the amount of data to be used in the FIFO, not the FIFO
> > > > length!
> > > Yes, I mentioned about FIFO length.
> > > 
> > > 1. MAX FIFO size is fixed in HW. But there is a way to limit the usage
> > >    per channel in multiples of 64 bytes.
> > > 2. Having a separate member would give independent control over MAX
> > >    BURST SIZE and FIFO SIZE.
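Assuming the per-channel limit really is programmed in 64-byte units against a
fixed shared maximum (the maximum below is a made-up figure, and the helper
name is hypothetical), the peripheral driver's bookkeeping reduces to a
round-up-and-cap step:

```c
#include <assert.h>

#define FIFO_UNIT      64U    /* per-channel granularity, per point 1 above */
#define FIFO_MAX_BYTES 1024U  /* assumed fixed MAX FIFO size; HW-specific */

/* Round a requested per-channel depth up to a 64-byte unit,
 * capped at the shared hardware maximum. */
static unsigned int fifo_quantize(unsigned int req_bytes)
{
	unsigned int n = (req_bytes + FIFO_UNIT - 1) / FIFO_UNIT * FIFO_UNIT;
	return n > FIFO_MAX_BYTES ? FIFO_MAX_BYTES : n;
}
```

For example, a request for 100 bytes would come back as 128, and anything
beyond the shared maximum would be capped at 1024.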
> > > > > programmed
> > > > > for specific values; I think this depends on a few factors, such as
> > > > > the bandwidth needs of the client, the DMA needs of the system, etc.
> > > > Precisely
> > > > 
> > > > > In such cases how does the DMA know the actual FIFO depth of the
> > > > > slave peripheral?
> > > > Why should the DMA know? Its job is to push/pull data as configured by
> > > > the peripheral driver. The peripheral driver knows and configures the
> > > > DMA accordingly.
> > > I am not sure if there is any HW logic that mandates the DMA to know the
> > > configured FIFO depth on the slave side. I will speak to the HW folks
> > > and update here.
> > I still do not comprehend why the DMA would care about the slave-side
> > configuration. In the absence of a patch which uses this, I am not sure
> > what you are trying to do...
> 
> I am using the DMA HW in cyclic mode for data transfers to the audio
> sub-system. In such cases flow control on DMA transfers is essential, since
> I/O is

Right, and people use the burst size for precisely that!

> consuming/producing the data at a slower rate. The DMA transfer is enabled/
> disabled during the start/stop of audio playback/capture sessions through
> ALSA callbacks, and the DMA runs in cyclic mode. Hence the DMA is the one
> doing flow control, and it is necessary for it to know the peripheral FIFO
> depth to avoid overruns/underruns.

Not really, knowing that doesn't help in the way you have described! The DMA
pushes/pulls data, and that is controlled by the burst size configured by the
slave (so it knows what to expect and programs things accordingly).

You really have the whole picture the other way around. FWIW, that is how
*other* folks do audio with dmaengine!
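The burst-paced model being described can be mimicked in a small userspace
sketch: the engine moves one burst per peripheral request and wraps at the
buffer end, the way a transfer prepared with dmaengine_prep_dma_cyclic()
behaves. All sizes here are illustrative, not taken from any real driver:

```c
#include <assert.h>

#define BUF_BYTES    512U  /* cyclic ring buffer size (illustrative) */
#define PERIOD_BYTES 128U  /* ALSA period size (illustrative) */
#define BURST_BYTES  32U   /* dst_addr_width * dst_maxburst */

/* Advance the ring position by one burst, wrapping at the buffer end. */
static unsigned int cyclic_advance(unsigned int pos)
{
	return (pos + BURST_BYTES) % BUF_BYTES;
}

/* True when pos sits on a period boundary, i.e. the point where a DMA
 * driver would raise the period-elapsed callback toward ALSA. */
static int period_elapsed(unsigned int pos)
{
	return pos % PERIOD_BYTES == 0;
}
```

In this model the pacing input the DMA needs is the burst size, not the slave
FIFO depth: each burst is triggered by the peripheral's request line, so the
FIFO can never be overrun as long as the burst fits.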

> Also please note that the peripheral device has multiple channels which
> share a fixed MAX FIFO buffer. But SW can program different FIFO sizes for
> individual channels.

Yeah, the peripheral driver, yes. The DMA driver, nope!

-- 
~Vinod
