From: Jon Hunter <jonathanh@nvidia.com>
To: Dmitry Osipenko <digetx@gmail.com>,
Peter Ujfalusi <peter.ujfalusi@ti.com>,
Sameer Pujar <spujar@nvidia.com>, Vinod Koul <vkoul@kernel.org>
Cc: <dan.j.williams@intel.com>, <tiwai@suse.com>,
<dmaengine@vger.kernel.org>, <linux-kernel@vger.kernel.org>,
<sharadg@nvidia.com>, <rlokhande@nvidia.com>,
<dramesh@nvidia.com>, <mkumard@nvidia.com>,
linux-tegra <linux-tegra@vger.kernel.org>
Subject: Re: [PATCH] [RFC] dmaengine: add fifo_size member
Date: Thu, 6 Jun 2019 15:36:03 +0100 [thread overview]
Message-ID: <e24a26d8-6355-bac0-eeba-8d8ce8d5f985@nvidia.com> (raw)
In-Reply-To: <2eab4777-79b8-0aea-c22f-ac9d11284889@nvidia.com>
On 06/06/2019 15:26, Jon Hunter wrote:
...
>>> If I understood everything correctly, the FIFO buffer is shared among
>>> all of the ADMA clients and hence it should be up to the ADMA driver to
>>> manage the quotas of the clients. So if there is only one client that
>>> uses ADMA at a time, then this client will get a whole FIFO buffer, but
>>> once another client starts to use ADMA, then the ADMA driver will have
>>> to reconfigure hardware to split the quotas.
>>>
>>
>> You could also simply hardcode the quotas per client in the ADMA driver
>> if the quotas are going to be static anyway.
>
> Essentially this is what we have done so far, but Sameer is looking for
> a way to make this more programmable/flexible. We can always fall back
> to that if there is no other option. However, this seems like a good
> time to see if there is a better way.
My thoughts on resolving this, in order of preference, would be ...
1. Add a new 'fifo_size' variable as Sameer is proposing.
2. Update the ADMA driver to use src/dst_maxburst as the FIFO size, and
   then have the ADMA driver itself pick a suitable burst size.
3. Resort to a static configuration.
I can see that #1 only makes sense if others would find it useful;
otherwise #2 may give us enough flexibility for now.
Cheers
Jon
--
nvpublic