From: Nicolin Chen <nicoleotsuka@gmail.com>
To: Christoph Hellwig <hch@lst.de>
Cc: m.szyprowski@samsung.com, robin.murphy@arm.com,
vdumpa@nvidia.com, iommu@lists.linux-foundation.org,
linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] dma-direct: do not allocate a single page from CMA area
Date: Tue, 5 Feb 2019 15:05:30 -0800 [thread overview]
Message-ID: <20190205230514.GA21515@Asurada-Nvidia.nvidia.com> (raw)
In-Reply-To: <20190204082307.GA5916@lst.de>
Hi Christoph
On Mon, Feb 04, 2019 at 09:23:07AM +0100, Christoph Hellwig wrote:
> On Tue, Jan 15, 2019 at 01:51:40PM -0800, Nicolin Chen wrote:
> > The addresses within a single page are always contiguous, so
> > there is no real need to allocate a single page from the CMA
> > area. Since the CMA area has a limited, predefined size, it
> > might run out of space under heavy use, where quite a lot of
> > CMA pages end up being allocated for single-page requests.
> >
> > This patch skips CMA allocations of single pages and lets them
> > go through the normal page allocator, unless the allocation has
> > the DMA_ATTR_FORCE_CONTIGUOUS attribute. This saves space in the
> > CMA area for further CMA allocations, and it also reduces CMA
> > fragmentation resulting from trivial allocations.
>
> That DMA_ATTR_FORCE_CONTIGUOUS flag does not make sense. A single
> page allocation is by definition always contiguous.
>
> > again:
> > - /* CMA can be used only in the context which permits sleeping */
> > - if (gfpflags_allow_blocking(gfp)) {
> > + /*
> > + * CMA can be used only in the context which permits sleeping.
> > + * Since addresses within one PAGE are always contiguous, skip
> > + * CMA allocation for a single page to save CMA reserved space
> > + * unless DMA_ATTR_FORCE_CONTIGUOUS is flagged.
> > + */
> > + if (gfpflags_allow_blocking(gfp) &&
> > + (count > 1 || attrs & DMA_ATTR_FORCE_CONTIGUOUS)) {
>
> And my other concern is that this skips allocating from the per-device
> pool, which drivers might rely on.
Actually, Robin raised the same concern on v1 and suggested that we
could always use DMA_ATTR_FORCE_CONTIGUOUS to force allocation from
the per-device pool.
> To be honest I'm not sure there is
> much of a point in the per-device CMA pool vs the traditional per-device
> coherent pool, but I'd rather change that behavior in a clearly documented
> commit with intentions rather as a side effect from a random optimization.
Hmm.. sorry, I don't quite follow this suggestion. Could you clarify
what change you would like me to make?
Thanks
Nicolin
Thread overview: 6+ messages
2019-01-15 21:51 [PATCH v2] dma-direct: do not allocate a single page from CMA area Nicolin Chen
2019-02-04 8:23 ` Christoph Hellwig
2019-02-05 23:05 ` Nicolin Chen [this message]
2019-02-06 7:07 ` Christoph Hellwig
2019-02-07 2:28 ` Nicolin Chen
2019-02-07 5:37 ` Christoph Hellwig