From: Jason Gunthorpe <jgg@ziepe.ca>
To: Christoph Hellwig <hch@lst.de>
Cc: "Michal Hocko" <mhocko@suse.com>,
	"Ralph Campbell" <rcampbell@nvidia.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	nouveau@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	"Jérôme Glisse" <jglisse@redhat.com>,
	"Ben Skeggs" <bskeggs@redhat.com>
Subject: Re: [PATCH] mm/hmm: replace hmm_update with mmu_notifier_range
Date: Wed, 24 Jul 2019 15:01:53 -0300	[thread overview]
Message-ID: <20190724180153.GE28493@ziepe.ca> (raw)
In-Reply-To: <20190724153305.GA10681@lst.de>

On Wed, Jul 24, 2019 at 05:33:05PM +0200, Christoph Hellwig wrote:
> On Wed, Jul 24, 2019 at 12:28:58PM -0300, Jason Gunthorpe wrote:
> > Humm. Actually having looked this some more, I wonder if this is a
> > problem:
> 
> What a mess.
> 
> Question: do we even care for the non-blocking events?  The only place
> where mmu_notifier_invalidate_range_start_nonblock is called is the oom
> killer, which means the process is about to die and the pagetable will
> get torn down ASAP.  Should we just skip them unconditionally?  nouveau
> already does so, but amdgpu currently tries to handle the non-blocking
> notifications.

I think the issue is that the pages need to be freed to make memory
available without becoming entangled in risky locks and deadlocking.

Presumably if we rely on the 'torn down ASAP' path, the risk that the
whole thing deadlocks goes up?

I'm guessing a bit, but I *think* non-blocking here really means
something closer to WQ_MEM_RECLAIM, i.e. you can't do anything that
could become entangled with locks held by threads that are stuck in
the allocator waiting on the OOM path??

(if so, we really should call this 'reclaim', not 'non-blocking')
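
If that is the right model, then in the non-blockable case a notifier
can only do trylock-style work, or refuse outright the way nouveau
does. Roughly this shape (struct/function names made up, just to
illustrate, not any driver's actual code):

	struct foo_mirror {
		struct mmu_notifier mn;
		struct mutex lock;
	};

	static int foo_invalidate_range_start(struct mmu_notifier *mn,
			const struct mmu_notifier_range *range)
	{
		struct foo_mirror *mirror =
			container_of(mn, struct foo_mirror, mn);

		if (mmu_notifier_range_blockable(range)) {
			mutex_lock(&mirror->lock);
		} else if (!mutex_trylock(&mirror->lock)) {
			/* Reclaim/OOM context: must not wait on a lock
			 * whose holder may itself be stuck in the
			 * allocator, so tell the caller to back off. */
			return -EAGAIN;
		}

		/* ... drop device mappings for range->start..range->end ... */

		mutex_unlock(&mirror->lock);
		return 0;
	}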

i.e. for ODP, umem_rwsem is held by threads while they call into
kmalloc, so when we go to do the exit_mmap we still issue the
invalidate_range_start and can still end up blocked on a lock that is
held by a thread waiting for kmalloc to return.
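
Schematically, the cycle I'm worried about looks like this (made-up
names, not the actual ODP code):

	struct foo_per_mm {
		struct mmu_notifier mn;
		struct rw_semaphore umem_rwsem;
	};

	/* Thread A: registration/fault path */
	static int foo_add_umem(struct foo_per_mm *per_mm)
	{
		void *meta;

		down_write(&per_mm->umem_rwsem);
		/* GFP_KERNEL can enter direct reclaim and end up waiting
		 * for the OOM killer to free memory */
		meta = kmalloc(PAGE_SIZE, GFP_KERNEL);
		if (!meta) {
			up_write(&per_mm->umem_rwsem);
			return -ENOMEM;
		}
		/* ... record meta in the per-mm tracking structure ... */
		up_write(&per_mm->umem_rwsem);
		return 0;
	}

	/* Thread B: OOM reaper / exit_mmap invalidating the same mm */
	static int foo_odp_invalidate_range_start(struct mmu_notifier *mn,
			const struct mmu_notifier_range *range)
	{
		struct foo_per_mm *per_mm =
			container_of(mn, struct foo_per_mm, mn);

		/* Blocks behind thread A, which is itself waiting for
		 * the OOM path (i.e. for this invalidation) to make
		 * memory available: deadlock. */
		down_read(&per_mm->umem_rwsem);
		/* ... invalidate range->start..range->end ... */
		up_read(&per_mm->umem_rwsem);
		return 0;
	}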

Would be nice to know if this guess is right or not.

Jason

Thread overview: 15+ messages
2019-07-23 21:05 [PATCH] mm/hmm: replace hmm_update with mmu_notifier_range Ralph Campbell
2019-07-24  7:05 ` Christoph Hellwig
2019-07-24 15:28   ` Jason Gunthorpe
2019-07-24 15:33     ` Christoph Hellwig
2019-07-24 18:00       ` Michal Hocko
2019-07-24 18:01       ` Jason Gunthorpe [this message]
2019-07-24 17:58     ` Michal Hocko
2019-07-24 18:08       ` Jason Gunthorpe
2019-07-24 18:56         ` Michal Hocko
2019-07-24 18:59           ` Michal Hocko
2019-07-24 19:21             ` Jason Gunthorpe
2019-07-24 19:48               ` Christoph Hellwig
2019-07-24 20:18                 ` Jason Gunthorpe
2019-07-25  1:14 ` Jason Gunthorpe
2019-07-25 17:53   ` Ralph Campbell
