From: David Rientjes via iommu <iommu@lists.linux-foundation.org>
To: Christoph Hellwig <hch@lst.de>
Cc: Jens Axboe <axboe@kernel.dk>,
	Tom Lendacky <thomas.lendacky@amd.com>,
	Brijesh Singh <brijesh.singh@amd.com>,
	x86@kernel.org, linux-kernel@vger.kernel.org,
	Ming Lei <ming.lei@redhat.com>,
	iommu@lists.linux-foundation.org, Peter Gonda <pgonda@google.com>,
	Jianxiong Gao <jxgao@google.com>
Subject: Re: [bug] __blk_mq_run_hw_queue suspicious rcu usage
Date: Mon, 16 Sep 2019 16:45:24 -0700 (PDT)
Message-ID: <alpine.DEB.2.21.1909161641320.9200@chino.kir.corp.google.com>
In-Reply-To: <alpine.DEB.2.21.1909051534050.245316@chino.kir.corp.google.com>

On Thu, 5 Sep 2019, David Rientjes wrote:

> > > Hi Christoph, Jens, and Ming,
> > > 
> > > While booting a 5.2 SEV-enabled guest we have encountered the following 
> > > WARNING that is followed up by a BUG because we are in atomic context 
> > > while trying to call set_memory_decrypted:
> > 
> > Well, this really is an x86 / DMA API issue, unfortunately.  Drivers
> > are allowed to do GFP_ATOMIC dma allocation under locks / rcu critical
> > sections and from interrupts, and it seems like the SEV case can't
> > handle that.  We have some semi-generic code in kernel/dma for a
> > fixed-size pool on non-coherent platforms that have similar issues,
> > which we could try to wire up, but I wonder if there is a better way
> > to handle the issue, so I've added Tom and the x86 maintainers.
> > 
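
For anyone following along, the pattern being described is roughly the 
following; this is a hypothetical driver context with made-up names, not 
the exact nvme path:

#include <linux/dma-mapping.h>
#include <linux/spinlock.h>

/*
 * A GFP_ATOMIC coherent allocation issued under a spinlock.  The DMA API
 * permits this, but in an SEV guest the direct-mapping path ends up in
 * set_memory_decrypted(), which can sleep, hence the WARNING and BUG
 * quoted at the top of the thread.
 */
static void *example_alloc_desc(struct device *dev, spinlock_t *lock,
                                size_t size, dma_addr_t *dma)
{
        unsigned long flags;
        void *desc;

        spin_lock_irqsave(lock, flags);         /* atomic context */
        desc = dma_alloc_coherent(dev, size, dma, GFP_ATOMIC);
        spin_unlock_irqrestore(lock, flags);

        return desc;
}
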
> > Now, independent of that issue, using DMA coherent memory for the nvme
> > PRPs/SGLs doesn't actually feel very optimal.  We could do this with
> > normal kmalloc allocations and just sync them to the device and back.
> > I wonder if we should create some general mempool-like helpers for that.
> > 
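
To make that second suggestion concrete, a streaming-mapping sketch of 
the "kmalloc + sync" idea might look like the following (hypothetical, 
not a patch against the nvme driver; as I understand it, in an SEV guest 
such mappings bounce through the already-decrypted swiotlb buffer, so 
nothing in the I/O path has to change page encryption attributes):

#include <linux/dma-mapping.h>
#include <linux/slab.h>

/* The descriptor is built by the CPU in ordinary kmalloc'ed memory. */
static dma_addr_t example_map_desc(struct device *dev, void *desc,
                                   size_t size)
{
        /* Hands ownership, and any needed sync, over to the device. */
        dma_addr_t dma = dma_map_single(dev, desc, size, DMA_TO_DEVICE);

        if (dma_mapping_error(dev, dma))
                return DMA_MAPPING_ERROR;
        return dma;
}

static void example_unmap_desc(struct device *dev, void *desc,
                               dma_addr_t dma, size_t size)
{
        /* On completion, return ownership to the CPU and free the buffer. */
        dma_unmap_single(dev, dma, size, DMA_TO_DEVICE);
        kfree(desc);
}

dma_sync_single_for_cpu()/dma_sync_single_for_device() would cover any 
CPU updates between map and unmap, which is presumably what the 
mempool-like helpers would end up wrapping.
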
> 
> Thanks for looking into this.  I assume it's a non-starter to try to 
> address this in _vm_unmap_aliases() itself, i.e. rely on a purge spinlock 
> to do all synchronization (or trylock if not forced) for 
> purge_vmap_area_lazy() rather than only the vmap_area_lock within it.  In 
> other words, no mutex.
> 
> If that's the case, and set_memory_encrypted() can't be fixed to avoid 
> sleeping by changing _vm_unmap_aliases() locking, then I assume dmapool is 
> our only alternative?  I have no idea how large this should be.
> 

Brijesh and Tom, we currently hit this any time we boot an SEV-enabled 
Ubuntu 18.04 guest; I assume that guest kernels, especially those of 
major distributions, are expected to work without warnings and BUGs when 
certain drivers are enabled.

If the vmap purge lock is to remain a mutex (is there any other reason 
that unmapping aliases can block?), then it appears that allocating a 
dmapool is the only alternative.  Is this something that you'll be 
addressing generically, or do we need to get buy-in from the maintainers 
of this specific driver?
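
For what it's worth, if "dmapool" here ends up meaning the existing 
struct dma_pool API from mm/dmapool.c, I would picture something like the 
following sketch (names and sizes made up): create the pool in a context 
that may sleep, then carve blocks out of it atomically in the I/O path.

#include <linux/dmapool.h>

/* At probe time; backing pages are added to the pool on demand. */
static struct dma_pool *example_create_pool(struct device *dev)
{
        return dma_pool_create("example-descs", dev, PAGE_SIZE, PAGE_SIZE, 0);
}

/* Later, possibly in atomic context, carve a block out of the pool. */
static void *example_get_desc(struct dma_pool *pool, dma_addr_t *dma)
{
        return dma_pool_alloc(pool, GFP_ATOMIC, dma);
}

The caveat, as far as I can tell, is that dma_pool_alloc() still falls 
back to dma_alloc_coherent() when the pool has no free blocks, so this 
only narrows the window unless the pool is sized such that that never 
happens in the I/O path, which circles back to the sizing question above.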