From: Daniel Vetter <daniel@ffwll.ch>
To: Jerome Glisse <jglisse@redhat.com>
Cc: "Jason Gunthorpe" <jgg@ziepe.ca>,
"Thomas Hellström (Intel)" <thomas_os@shipmail.org>,
"DRI Development" <dri-devel@lists.freedesktop.org>,
linux-rdma <linux-rdma@vger.kernel.org>,
"Intel Graphics Development" <intel-gfx@lists.freedesktop.org>,
"Maarten Lankhorst" <maarten.lankhorst@linux.intel.com>,
LKML <linux-kernel@vger.kernel.org>,
"amd-gfx list" <amd-gfx@lists.freedesktop.org>,
"moderated list:DMA BUFFER SHARING FRAMEWORK"
<linaro-mm-sig@lists.linaro.org>,
"Thomas Hellstrom" <thomas.hellstrom@intel.com>,
"Daniel Vetter" <daniel.vetter@intel.com>,
"open list:DMA BUFFER SHARING FRAMEWORK"
<linux-media@vger.kernel.org>,
"Christian König" <christian.koenig@amd.com>,
"Mika Kuoppala" <mika.kuoppala@intel.com>
Subject: Re: [Linaro-mm-sig] [PATCH 04/18] dma-fence: prime lockdep annotations
Date: Fri, 19 Jun 2020 22:43:20 +0200 [thread overview]
Message-ID: <CAKMK7uFZgQH3bP4iC9MPArpngeSHESK62KFEeJvYyV9NSJ_GRw@mail.gmail.com> (raw)
In-Reply-To: <20200619201011.GB13117@redhat.com>
On Fri, Jun 19, 2020 at 10:10 PM Jerome Glisse <jglisse@redhat.com> wrote:
>
> On Fri, Jun 19, 2020 at 03:18:49PM -0300, Jason Gunthorpe wrote:
> > On Fri, Jun 19, 2020 at 02:09:35PM -0400, Jerome Glisse wrote:
> > > On Fri, Jun 19, 2020 at 02:23:08PM -0300, Jason Gunthorpe wrote:
> > > > On Fri, Jun 19, 2020 at 06:19:41PM +0200, Daniel Vetter wrote:
> > > >
> > > > > The madness is only that device B's mmu notifier might need to wait
> > > > > for fence_B so that the dma operation finishes. Which in turn has to
> > > > > wait for device A to finish first.
> > > >
> > > > So, it sounds like, fundamentally, you've got this graph of operations across
> > > > an unknown set of drivers, and the kernel cannot insert itself into
> > > > dma_fence hand-offs to re-validate any of the buffers involved?
> > > > Buffers which by definition cannot be touched by the hardware yet.
> > > >
> > > > That really is a pretty horrible place to end up..
> > > >
> > > > Pinning really is the right answer for this kind of workflow. I think
> > > > converting pinning to notifiers should not be done unless notifier
> > > > invalidation is relatively bounded.
> > > >
> > > > I know people like notifiers because they give a bit nicer performance
> > > > in some happy cases, but this cripples all the bad cases..
> > > >
> > > > If pinning doesn't work for some reason maybe we should address that?
> > >
> > > Note that the dma fence is only relevant for user-ptr buffers, which predate
> > > any HMM work and thus were using mmu notifiers already. You need the
> > > mmu notifier there because of fork and other corner cases.
> >
> > I wonder if we should try to fix the fork case more directly - RDMA
> > has this same problem and added MADV_DONTFORK a long time ago as a
> > hacky way to deal with it.
> >
> > Some crazy page pin that resolved COW in a way that always kept the
> > physical memory with the mm that initiated the pin?
>
> Just no way to deal with it easily. I thought about forcing the
> anon_vma (page->mapping for anonymous pages) to the anon_vma that
> belongs to the vma against which the GUP was done, but it would
> break things if the page is already in another branch of a fork tree.
> Also this forbids fast GUP.
>
> Quite frankly the fork was not the main motivating factor. GPUs
> can pin potentially gigabytes of memory, thus we wanted to be able
> to release it, but since Michal's changes to the reclaim code this is
> no longer effective.
What, where, how? My patch to annotate reclaim paths with the mmu notifier
possibility just landed in -mm, so if direct reclaim can't reclaim mmu
notifier'ed stuff anymore we need to know.
Also this would resolve the entire pain we're discussing in this
thread about dma_fence_wait deadlocking against anything that's not
GFP_ATOMIC ...
-Daniel
>
> User buffers should never end up in those weird corner cases; IIRC
> the first usage was for Xorg EXA texture upload, then it was generalized
> to texture upload in Mesa and later on to more upload cases
> (vertices, ...). At least this is what I remember today. So in
> those cases we do not expect fork, splice, mremap, mprotect, ...
>
> Maybe we can audit how user-ptr buffers are used today and see if
> we can define a usage pattern that would allow us to cut corners in the
> kernel. For instance we could use the mmu notifier just to block CPU
> pte updates while we do GUP, and thus never wait on a dma fence.
>
> Then the GPU driver just keeps the GUP pin around until it is done
> with the pages. It can also use the mmu notifier to keep a flag
> so that the driver knows if it needs to redo a GUP, ie:
>
> The notifier path:
> GPU_mmu_notifier_start_callback(range)
> gpu_lock_cpu_pagetable(range)
> for_each_bo_in(bo, range) {
> bo->need_gup = true;
> }
> gpu_unlock_cpu_pagetable(range)
>
> GPU_validate_buffer_pages(bo)
> if (!bo->need_gup)
> return;
> put_pages(bo->pages);
> range = bo_vaddr_range(bo)
> gpu_lock_cpu_pagetable(range)
> GUP(bo->pages, range)
> gpu_unlock_cpu_pagetable(range)
>
>
> Depending on how user_ptr is used today, this could work.
>
>
> > (isn't this broken for O_DIRECT as well anyhow?)
>
> Yes it can in theory, if you have an application that does O_DIRECT
> and fork concurrently (ie O_DIRECT in one thread and fork in another).
> Note that O_DIRECT after fork is fine; it is an issue only if GUP_fast
> was able to look up a page with write permission before fork had the
> chance to update it to read-only for COW.
>
> But doing O_DIRECT (or anything that uses GUP_fast) in one thread and
> fork in another is inherently broken, ie there is no way to fix it.
>
> See 17839856fd588f4ab6b789f482ed3ffd7c403e1f
>
> >
> > How does mmu_notifiers help the fork case anyhow? Block fork from
> > progressing?
>
> It enforces ordering between fork and GUP: if fork is first it blocks
> GUP, and if fork is last then fork waits on GUP and then the user buffer
> gets invalidated.
>
> >
> > > I probably need to warn AMD folks again that using HMM means that you
> > > must be able to update the GPU page table asynchronously without
> > > fence wait.
> >
> > It is kind of unrelated to HMM, it just shouldn't be using mmu
> > notifiers to replace page pinning..
>
> Well my POV is that if you abide by the rules HMM defined then you do
> not need to pin pages. The rule is asynchronous device page table
> update.
>
> Pinning pages is problematic: it blocks many core mm features and
> is just bad all around. Also it is inherently broken in the face
> of fork/mremap/splice/...
>
> Cheers,
> Jérôme
>
--
Daniel Vetter
Software Engineer, Intel Corporation
http://blog.ffwll.ch