From: Jerome Glisse <jglisse@redhat.com>
To: John Hubbard <jhubbard@nvidia.com>
Cc: linux-mm@kvack.org, Andrew Morton <akpm@linux-foundation.org>,
	linux-kernel@vger.kernel.org,
	Ralph Campbell <rcampbell@nvidia.com>,
	stable@vger.kernel.org, Evgeny Baskakov <ebaskakov@nvidia.com>,
	Mark Hairgrove <mhairgrove@nvidia.com>
Subject: Re: [PATCH 03/15] mm/hmm: HMM should have a callback before MM is destroyed v2
Date: Wed, 21 Mar 2018 21:32:33 -0400	[thread overview]
Message-ID: <20180322013233.GM3214@redhat.com> (raw)
In-Reply-To: <c9607860-4d93-c81e-3f63-1ebcba46b321@nvidia.com>

On Wed, Mar 21, 2018 at 05:11:10PM -0700, John Hubbard wrote:
> On 03/21/2018 04:37 PM, Jerome Glisse wrote:
> > On Wed, Mar 21, 2018 at 04:10:32PM -0700, John Hubbard wrote:
> >> On 03/21/2018 03:46 PM, Jerome Glisse wrote:
> >>> On Wed, Mar 21, 2018 at 03:16:04PM -0700, John Hubbard wrote:
> >>>> On 03/21/2018 11:03 AM, Jerome Glisse wrote:
> >>>>> On Tue, Mar 20, 2018 at 09:14:34PM -0700, John Hubbard wrote:
> >>>>>> On 03/19/2018 07:00 PM, jglisse@redhat.com wrote:
> >>>>>>> From: Ralph Campbell <rcampbell@nvidia.com>
> > 
> > [...]
> > 
> >>>>> That is just illegal: the release callback is not allowed to trigger
> >>>>> invalidation. All it does is kill all of the device's threads and stop
> >>>>> device page faults from happening. So there are no deadlock issues. I
> >>>>> can reinforce the comment some more (see [1] for an example of what it
> >>>>> should be).
> >>>>
> >>>> That rule is fine, and it is true that the .release callback will not 
> >>>> directly trigger any invalidations. However, the problem is in letting 
> >>>> any *existing* outstanding operations finish up. We have to let 
> >>>> existing operations "drain", in order to meet the requirement that 
> >>>> everything is done when .release returns.
> >>>>
> >>>> For example, if a device driver thread is in the middle of working through
> >>>> its fault buffer, it will call migrate_vma(), which will in turn unmap
> >>>> pages. That will cause an hmm_invalidate_range() callback, which tries
> >>>> to take hmm->mirrors_sem, and we deadlock.
> >>>>
> >>>> There's no way to "kill" such a thread while it's in the middle of
> >>>> migrate_vma(); you have to let it finish up.
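
To make that chain concrete, here is a minimal sketch of the worker being
described (illustrative only, not actual nouveau code; the example_* name
is made up and the migrate_vma() arguments are omitted):

/* Hypothetical device worker still draining its fault buffer. */
static void example_fault_worker(struct vm_area_struct *vma,
				 unsigned long start, unsigned long end)
{
	/*
	 * migrate_vma() unmaps CPU pages, which fires the mmu_notifier
	 * invalidation, which ends up in hmm_invalidate_range(), which
	 * takes hmm->mirrors_sem for read.  If .release holds
	 * mirrors_sem on another CPU while waiting for this worker to
	 * finish, both sides block: that is the deadlock.
	 */
	migrate_vma(/* ops, vma, start, end, src, dst, private */);
}
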
> >>>>
> >>>>> Also, it is illegal for the sync callback to trigger any mmu_notifier
> >>>>> callback. I thought this was obvious. The sync callback should only
> >>>>> update the device page table and do _nothing else_. There is no way to
> >>>>> make this re-entrant.
> >>>>
> >>>> That is obvious, yes. I am not trying to say there is any problem with
> >>>> that rule. It's the "drain outstanding operations during .release", 
> >>>> above, that is the real problem.
> >>>
> >>> Maybe just relax the release callback wording: it should stop any
> >>> further processing of the fault buffer but not wait for it to finish.
> >>> In the nouveau code I kill things but I do not wait, hence I don't
> >>> deadlock.
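
Concretely, "kill but do not wait" means something along these lines (a
minimal sketch; struct example_drv, its dead flag and
example_drv_kill_channels() are made-up driver pieces, only the
hmm_mirror release signature comes from the patch under discussion):

/* Hypothetical driver wrapper around its HMM mirror. */
struct example_drv {
	struct hmm_mirror mirror;
	bool dead;
};

static void example_mirror_release(struct hmm_mirror *mirror)
{
	struct example_drv *drv =
		container_of(mirror, struct example_drv, mirror);

	/*
	 * Stop servicing new device faults and kill the device threads,
	 * but do not wait here for work already in flight -- waiting is
	 * what creates the deadlock discussed above.
	 */
	WRITE_ONCE(drv->dead, true);
	example_drv_kill_channels(drv);
}
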
> >>
> >> But you may crash, because that approach allows .release to finish
> >> up, thus removing the mm entirely, out from under (for example)
> >> a migrate_vma call--or any other call that refers to the mm.
> > 
> > No, you can not crash on the mm, as it will not vanish before you are
> > done with it: the mm will not be freed before you call hmm_unregister(),
> > and you should not call that from release, nor should you call it before
> > everything is flushed. However, the vma struct might vanish ... I might
> > have assumed wrongly about the down_write() always happening in
> > exit_mmap(). This might be a solution to force serialization.
> > 
>  
> OK. My details on mm destruction were inaccurate, but we do agree now
> that the whole virtual address space is being torn down at the same
> time as we're trying to use it, so I think we're on the same page now.
> 
> >>
> >> It doesn't seem too hard to avoid the problem, though: maybe we
> >> can just drop the lock while doing the mirror->ops->release callback.
> >> There are a few ways to do this, but one example is: 
> >>
> >>     -- take the lock,
> >>     -- copy the list to a local list, deleting entries as you go,
> >>     -- drop the lock,
> >>     -- iterate through the local list copy, and
> >>     -- issue the mirror->ops->release callbacks.
> >>
> >> At this point, more items could have been added to the list, so repeat
> >> the above until the original list is empty. 
> >>
> >> This is subject to a limited starvation case if mirrors keep getting
> >> registered, but I think we can ignore that, because it only lasts as
> >> long as mirrors keep getting added, and then it finishes up.
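
For reference, that loop would look roughly like this (a sketch only; the
hmm->mirrors, hmm->mirrors_sem, mirror->list and mirror->ops fields match
mm/hmm.c and this patch series, while the function name
hmm_release_all_mirrors() is invented):

static void hmm_release_all_mirrors(struct hmm *hmm)
{
	struct hmm_mirror *mirror;
	LIST_HEAD(local);

	down_write(&hmm->mirrors_sem);
	while (!list_empty(&hmm->mirrors)) {
		/* Move everything onto a private list... */
		list_splice_init(&hmm->mirrors, &local);
		up_write(&hmm->mirrors_sem);

		/* ...and issue the callbacks without the lock held. */
		while (!list_empty(&local)) {
			mirror = list_first_entry(&local, struct hmm_mirror,
						  list);
			list_del_init(&mirror->list);
			if (mirror->ops->release)
				mirror->ops->release(mirror);
		}

		/* New mirrors may have registered meanwhile, so repeat. */
		down_write(&hmm->mirrors_sem);
	}
	up_write(&hmm->mirrors_sem);
}
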
> > 
> > The down_write is a better solution, and easier: just 2 lines of code.
> 
> OK. I'll have a better idea when I see it.
> 
> > 
> >>
> >>>
> >>> What matters is to stop any further processing. Yes, some faults might
> >>> be in flight, but they will serialize on various locks.
> >>
> >> Those faults in flight could already be at a point where they have taken
> >> whatever locks they need, so we don't dare let the mm get destroyed while
> >> such fault handling is in progress.
> > 
> > The mm can not vanish until hmm_unregister() is called; the vma will
> > vanish before that.
> 
> OK, yes. And we agree that vma vanishing is a problem. 
> 
> > 
> >>> So just do not wait in the release callback, kill things. I might have
> >>> a bug where I still fill in the GPU page table in nouveau; I will check
> >>> the nouveau code for that.
> >>
> >> Again, we can't "kill" a thread of execution (this would often be an
> >> interrupt bottom half context, btw) while it is, for example,
> >> in the middle of migrate_vma.
> > 
> > You should not call migrate from a bottom half! Only call this from a
> > work queue, like nouveau does.
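
That is, the top half only stashes the fault information and kicks a
worker; the actual handling, including migrate_vma(), runs in process
context. A generic sketch of that split (not nouveau's actual code; the
example_* names are invented, and fault_work is assumed to have been set
up with INIT_WORK() at probe time):

/* Hypothetical per-device state. */
struct example_drv {
	struct work_struct fault_work;
	/* ... fault buffer, mirrored mm, etc. ... */
};

static void example_fault_work(struct work_struct *work)
{
	struct example_drv *drv =
		container_of(work, struct example_drv, fault_work);

	/*
	 * Process context: safe to take mmap_sem, fault pages,
	 * call migrate_vma(), and so on.
	 */
	example_drv_handle_faults(drv);
}

static irqreturn_t example_irq(int irq, void *data)
{
	struct example_drv *drv = data;

	/* Top half: do not touch the mm here, just defer. */
	schedule_work(&drv->fault_work);
	return IRQ_HANDLED;
}
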
> 
> By "bottom half", I mean the kthread that we have running to handle work
> that was handed off from the top half ISR. So we are in process context.
> And we will need to do migrate_vma() from there.
> 
> > 
> >>
> >> I really don't believe there is a safe way to do this without draining
> >> the existing operations before .release returns, and for that, we'll need to 
> >> issue the .release callbacks while not holding locks.
> > 
> > down_write on mmap_sem would force serialization. I am not sure we want
> > to do this change now. It can wait, as it is definitely not an issue for
> > nouveau yet. Taking mmap_sem in write mode (see the oom handling in
> > exit_mmap()) in release makes me nervous.
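
To spell out the 2 lines, the idea is only this (a hypothetical sketch of
the mmu_notifier release path; whether it is safe against the oom path
taking mmap_sem in exit_mmap() is exactly what makes me nervous):

static void hmm_release(struct mmu_notifier *mn, struct mm_struct *mm)
{
	/*
	 * Device fault handling still in flight holds mmap_sem for read
	 * (around hmm_vma_fault(), migrate_vma(), ...), so taking it for
	 * write here waits for those users to drain before the mirrors
	 * are torn down.
	 */
	down_write(&mm->mmap_sem);
	up_write(&mm->mmap_sem);

	/* ...then walk the mirror list and call ->release... */
}
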
> > 
> 
> I'm not going to lose any sleep over when the various fixes are made, as
> long as we agree on the problems and solution approaches, and fix them at
> some point. I will note that our downstream driver will not be...well,
> completely usable, until we fix this, though.
> 

So I posted updated patches for 3 and 4 that should address your concern.
Testing was done with them and nouveau seems to work ok. I am hoping this
addresses all your concerns.

Cheers,
Jérôme


Thread overview: 47+ messages
2018-03-20  2:00 [PATCH 00/15] hmm: fixes and documentations v3 jglisse
2018-03-20  2:00 ` [PATCH 01/15] mm/hmm: documentation editorial update to HMM documentation jglisse
2018-03-20  2:00 ` [PATCH 02/15] mm/hmm: fix header file if/else/endif maze v2 jglisse
2018-03-20  2:00 ` [PATCH 03/15] mm/hmm: HMM should have a callback before MM is destroyed v2 jglisse
2018-03-21  4:14   ` John Hubbard
2018-03-21 18:03     ` Jerome Glisse
2018-03-21 22:16       ` John Hubbard
2018-03-21 22:46         ` Jerome Glisse
2018-03-21 23:10           ` John Hubbard
2018-03-21 23:37             ` Jerome Glisse
2018-03-22  0:11               ` John Hubbard
2018-03-22  1:32                 ` Jerome Glisse [this message]
2018-03-22  1:28   ` [PATCH 03/15] mm/hmm: HMM should have a callback before MM is destroyed v3 jglisse
2018-03-22  6:58     ` John Hubbard
2018-03-20  2:00 ` [PATCH 04/15] mm/hmm: unregister mmu_notifier when last HMM client quit jglisse
2018-03-21  4:24   ` John Hubbard
2018-03-21 18:12     ` Jerome Glisse
2018-03-21 18:16   ` [PATCH 04/15] mm/hmm: unregister mmu_notifier when last HMM client quit v2 jglisse
2018-03-21 23:22     ` John Hubbard
2018-03-21 23:41       ` Jerome Glisse
2018-03-22 22:47         ` John Hubbard
2018-03-22 23:37           ` Jerome Glisse
2018-03-23  0:13             ` John Hubbard
2018-03-23  0:50               ` Jerome Glisse
2018-03-23  0:56                 ` John Hubbard
2018-03-22  1:30     ` [PATCH 04/15] mm/hmm: unregister mmu_notifier when last HMM client quit v3 jglisse
2018-03-22 22:36       ` Andrew Morton
2018-03-20  2:00 ` [PATCH 05/15] mm/hmm: hmm_pfns_bad() was accessing wrong struct jglisse
2018-03-20  2:00 ` [PATCH 06/15] mm/hmm: use struct for hmm_vma_fault(), hmm_vma_get_pfns() parameters v2 jglisse
2018-03-20  2:00 ` [PATCH 07/15] mm/hmm: remove HMM_PFN_READ flag and ignore peculiar architecture v2 jglisse
2018-03-20  2:00 ` [PATCH 08/15] mm/hmm: use uint64_t for HMM pfn instead of defining hmm_pfn_t to ulong v2 jglisse
2018-03-20  2:00 ` [PATCH 09/15] mm/hmm: cleanup special vma handling (VM_SPECIAL) jglisse
2018-03-20  2:00 ` [PATCH 10/15] mm/hmm: do not differentiate between empty entry or missing directory v2 jglisse
2018-03-21  5:24   ` John Hubbard
2018-03-21 14:48     ` Jerome Glisse
2018-03-21 23:16       ` John Hubbard
2018-03-20  2:00 ` [PATCH 11/15] mm/hmm: rename HMM_PFN_DEVICE_UNADDRESSABLE to HMM_PFN_DEVICE_PRIVATE jglisse
2018-03-20  2:00 ` [PATCH 12/15] mm/hmm: move hmm_pfns_clear() closer to where it is use jglisse
2018-03-20  2:00 ` [PATCH 13/15] mm/hmm: factor out pte and pmd handling to simplify hmm_vma_walk_pmd() jglisse
2018-03-21  5:07   ` John Hubbard
2018-03-21 15:08     ` Jerome Glisse
2018-03-21 22:36       ` John Hubbard
2018-03-20  2:00 ` [PATCH 14/15] mm/hmm: change hmm_vma_fault() to allow write fault on page basis jglisse
2018-03-20  2:00 ` [PATCH 15/15] mm/hmm: use device driver encoding for HMM pfn v2 jglisse
2018-03-21  4:39   ` John Hubbard
2018-03-21 15:52     ` Jerome Glisse
2018-03-21 23:19       ` John Hubbard
