From: David Rientjes via iommu <iommu@lists.linux-foundation.org>
To: Guenter Roeck <linux@roeck-us.net>
Cc: Linux Kernel Mailing List <linux-kernel@vger.kernel.org>,
	Linux IOMMU <iommu@lists.linux-foundation.org>,
	Geert Uytterhoeven <geert@linux-m68k.org>,
	Robin Murphy <robin.murphy@arm.com>,
	Christoph Hellwig <hch@lst.de>
Subject: Re: [patch] dma-pool: warn when coherent pool is depleted
Date: Sat, 27 Jun 2020 21:25:21 -0700 (PDT)	[thread overview]
Message-ID: <alpine.DEB.2.22.394.2006272124470.591864@chino.kir.corp.google.com> (raw)
In-Reply-To: <20200621211200.GA158319@roeck-us.net>

On Sun, 21 Jun 2020, Guenter Roeck wrote:

> > When a DMA coherent pool is depleted, allocation failures may or may not
> > get reported in the kernel log depending on the allocator.
> > 
> > The admin does have a workaround, however, by using coherent_pool= on the
> > kernel command line.
> > 
> > Provide some guidance on the failure and a recommended minimum size for
> > the pools (double the size).
> > 
> > Signed-off-by: David Rientjes <rientjes@google.com>
> 
> Tested-by: Guenter Roeck <linux@roeck-us.net>
> 
> Also confirmed that coherent_pool=256k works around the crash
> I had observed.
> 
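
[For context, the coherent_pool= workaround Guenter used is consumed at
early boot by a handler along these lines -- a paraphrase of the
5.8-era kernel/dma/pool.c, so details may differ.  memparse() accepts
size suffixes, which is why coherent_pool=256k yields a 256 KiB pool:

	/*
	 * Paraphrase of the 5.8-era coherent_pool= handler in
	 * kernel/dma/pool.c (details may differ).  memparse()
	 * understands suffixes such as "k" and "M", so
	 * coherent_pool=256k overrides the default atomic pool
	 * size with 256 KiB at early boot.
	 */
	static int __init early_coherent_pool(char *p)
	{
		atomic_pool_size = memparse(p, &p);
		return 0;
	}
	early_param("coherent_pool", early_coherent_pool);
]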

Thanks Guenter.  Christoph, does it make sense to apply this patch, 
given that the caller may leave no artifact behind in the kernel log 
when the allocation fails?
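
[The failure path in question, sketched from a 5.8-era dma-direct
caller; the guard condition here is illustrative rather than verbatim:

	/*
	 * Sketch of a dma_alloc_from_pool() call site, modeled on
	 * 5.8-era kernel/dma/direct.c; the guard condition is
	 * illustrative, not verbatim.  Without the patch below there
	 * is no printk/WARN on the failure path: the caller just
	 * returns NULL, so a depleted pool leaves nothing in the
	 * kernel log.
	 */
	if (dma_alloc_need_uncached(dev, attrs) &&
	    !gfpflags_allow_blocking(gfp)) {
		ret = dma_alloc_from_pool(dev, PAGE_ALIGN(size), &page, gfp);
		if (!ret)
			return NULL;	/* fails silently without the WARN_ONCE */
		goto done;
	}
]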

> Guenter
> 
> > ---
> >  kernel/dma/pool.c | 6 +++++-
> >  1 file changed, 5 insertions(+), 1 deletion(-)
> > 
> > diff --git a/kernel/dma/pool.c b/kernel/dma/pool.c
> > --- a/kernel/dma/pool.c
> > +++ b/kernel/dma/pool.c
> > @@ -239,12 +239,16 @@ void *dma_alloc_from_pool(struct device *dev, size_t size,
> >  	}
> >  
> >  	val = gen_pool_alloc(pool, size);
> > -	if (val) {
> > +	if (likely(val)) {
> >  		phys_addr_t phys = gen_pool_virt_to_phys(pool, val);
> >  
> >  		*ret_page = pfn_to_page(__phys_to_pfn(phys));
> >  		ptr = (void *)val;
> >  		memset(ptr, 0, size);
> > +	} else {
> > +		WARN_ONCE(1, "DMA coherent pool depleted, increase size "
> > +			     "(recommended min coherent_pool=%zuK)\n",
> > +			  gen_pool_size(pool) >> 9);
> >  	}
> >  	if (gen_pool_avail(pool) < atomic_pool_size)
> >  		schedule_work(&atomic_pool_work);
> 
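
[A note on the arithmetic in the hunk above: gen_pool_size() returns
the pool size in bytes, so shifting right by 9 (dividing by 512)
reports twice the current size in KiB, matching the "double the size"
recommendation in the changelog.  A standalone illustration, using a
hypothetical 128 KiB starting size:

	#include <stdio.h>
	#include <stddef.h>

	int main(void)
	{
		/* Hypothetical example: assume the depleted pool is 128 KiB. */
		size_t pool_bytes = 128 * 1024;

		/* bytes >> 9 == (bytes / 1024) * 2: twice the size, in KiB. */
		size_t recommended_kib = pool_bytes >> 9;

		/* Prints: recommended min coherent_pool=256K */
		printf("recommended min coherent_pool=%zuK\n", recommended_kib);
		return 0;
	}

With a 128 KiB pool the warning would suggest coherent_pool=256K,
which lines up with the coherent_pool=256k workaround Guenter
confirmed above.]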

Thread overview: 4+ messages
2020-06-21 21:12 [patch] dma-pool: warn when coherent pool is depleted Guenter Roeck
2020-06-28  4:25 ` David Rientjes via iommu [this message]
2020-06-29  8:05   ` Christoph Hellwig
  -- strict thread matches above, loose matches on Subject: below --
2020-06-21 20:43 David Rientjes via iommu
