From: Nicolin Chen <nicoleotsuka@gmail.com>
To: Christoph Hellwig <hch@lst.de>
Cc: m.szyprowski@samsung.com, robin.murphy@arm.com,
	vdumpa@nvidia.com, iommu@lists.linux-foundation.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] dma-direct: do not allocate a single page from CMA area
Date: Wed, 6 Feb 2019 18:28:49 -0800	[thread overview]
Message-ID: <20190207022848.GA5581@Asurada-Nvidia.nvidia.com> (raw)
In-Reply-To: <20190206070726.GE23392@lst.de>

Hi Christoph,

On Wed, Feb 06, 2019 at 08:07:26AM +0100, Christoph Hellwig wrote:
> On Tue, Feb 05, 2019 at 03:05:30PM -0800, Nicolin Chen wrote:
> > > And my other concern is that this skips allocating from the per-device
> > > pool, which drivers might rely on.
> > 
> > Actually Robin had the same concern at v1 and suggested that we could
> > always use DMA_ATTR_FORCE_CONTIGUOUS to enforce into per-device pool.
> 
> That is both against the documented behavior of DMA_ATTR_FORCE_CONTIGUOUS
> and doesn't help existing drivers that specify their CMA area in DT.

OK. I will drop it.
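
For reference, what the v1 discussion had in mind was that a driver which
really needs the per-device area even for a single page would ask for it
explicitly. A hypothetical driver-side call would look roughly like the
snippet below (the helper name and the PAGE_SIZE allocation are made up
purely for illustration, not part of the patch):

#include <linux/dma-mapping.h>
#include <linux/gfp.h>

/*
 * Hypothetical example only: force a contiguous (CMA) allocation even
 * for a single page by passing DMA_ATTR_FORCE_CONTIGUOUS. This is the
 * v1 idea being dropped here.
 */
static void *alloc_one_page_contiguous(struct device *dev, dma_addr_t *dma)
{
	return dma_alloc_attrs(dev, PAGE_SIZE, dma, GFP_KERNEL,
			       DMA_ATTR_FORCE_CONTIGUOUS);
}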

> > > To be honest I'm not sure there is
> > > much of a point in the per-device CMA pool vs the traditional per-device
> > > coherent pool, but I'd rather change that behavior in a clearly documented
> > > commit that states the intention, rather than as a side effect of a random
> > > optimization.
> > 
> > Hmm.. sorry, I don't quite follow this suggestion. Could you clarify
> > what exactly I should do for this change?
> 
> Something like this (plus proper comments):
> 
> diff --git a/kernel/dma/contiguous.c b/kernel/dma/contiguous.c
> index b2a87905846d..789d734f0f77 100644
> --- a/kernel/dma/contiguous.c
> +++ b/kernel/dma/contiguous.c
> @@ -192,10 +192,19 @@ int __init dma_contiguous_reserve_area(phys_addr_t size, phys_addr_t base,
>  struct page *dma_alloc_from_contiguous(struct device *dev, size_t count,
>  				       unsigned int align, bool no_warn)
>  {
> +	struct cma *cma;
> +
>  	if (align > CONFIG_CMA_ALIGNMENT)
>  		align = CONFIG_CMA_ALIGNMENT;
>  
> -	return cma_alloc(dev_get_cma_area(dev), count, align, no_warn);
> +	if (dev && dev->cma_area)
> +		cma = dev->cma_area;
> +	else if (count > PAGE_SIZE)
> +		cma = dma_contiguous_default_area;
> +	else
> +		return NULL;

So we will keep allocating single pages from dev->cma_area whenever it is
present, in order to address your earlier concern about the per-device pool?
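
Just to double-check my reading, below is how I imagine the whole function
ends up after your change. The tail of your hunk is cut off in my quote, so
the closing cma_alloc() call is my assumption; I also wrote the single-page
check against the page count rather than PAGE_SIZE, since count is in pages
here. Please correct me if you meant something else:

/*
 * Sketch only, for kernel/dma/contiguous.c (which already pulls in the
 * needed headers). The final cma_alloc() call and the "count <= 1"
 * reading of the single-page check are assumptions on my side.
 */
struct page *dma_alloc_from_contiguous(struct device *dev, size_t count,
				       unsigned int align, bool no_warn)
{
	struct cma *cma;

	if (align > CONFIG_CMA_ALIGNMENT)
		align = CONFIG_CMA_ALIGNMENT;

	if (dev && dev->cma_area) {
		/* A per-device CMA area is always honored, even for one page. */
		cma = dev->cma_area;
	} else if (count > 1) {
		/* Multi-page requests still come from the default CMA area. */
		cma = dma_contiguous_default_area;
	} else {
		/* Single pages fall back to the normal page allocator. */
		return NULL;
	}

	return cma_alloc(cma, count, align, no_warn);
}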

Thanks
Nicolin

Thread overview: 6+ messages
2019-01-15 21:51 [PATCH v2] dma-direct: do not allocate a single page from CMA area Nicolin Chen
2019-02-04  8:23 ` Christoph Hellwig
2019-02-05 23:05   ` Nicolin Chen
2019-02-06  7:07     ` Christoph Hellwig
2019-02-07  2:28       ` Nicolin Chen [this message]
2019-02-07  5:37         ` Christoph Hellwig
