From: "Koenig, Christian" <Christian.Koenig@amd.com>
To: Jason Gunthorpe <jgg@mellanox.com>
Cc: "Yang, Philip" <Philip.Yang@amd.com>,
	Andrea Arcangeli <aarcange@redhat.com>,
	Ralph Campbell <rcampbell@nvidia.com>,
	"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>,
	John Hubbard <jhubbard@nvidia.com>,
	"Kuehling, Felix" <Felix.Kuehling@amd.com>,
	"amd-gfx@lists.freedesktop.org" <amd-gfx@lists.freedesktop.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	Jerome Glisse <jglisse@redhat.com>,
	"dri-devel@lists.freedesktop.org"
	<dri-devel@lists.freedesktop.org>,
	Ben Skeggs <bskeggs@redhat.com>
Subject: Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
Date: Mon, 21 Oct 2019 14:28:46 +0000	[thread overview]
Message-ID: <e07092c3-8ccd-9814-835c-6c462017aff8@amd.com> (raw)
In-Reply-To: <20191021135744.GA25164@mellanox.com>

Am 21.10.19 um 15:57 schrieb Jason Gunthorpe:
> On Sun, Oct 20, 2019 at 02:21:42PM +0000, Koenig, Christian wrote:
>> Am 18.10.19 um 22:36 schrieb Jason Gunthorpe:
>>> On Thu, Oct 17, 2019 at 04:47:20PM +0000, Koenig, Christian wrote:
>>> [SNIP]
>>>    
>>>> So again how are they serialized?
>>> The 'driver lock' thing does it, read the hmm documentation, the hmm
>>> approach is basically the only approach that was correct of all the
>>> drivers..
>> Well that's what I did, but what HMM does still doesn't look correct
>> to me.
> It has a bug, but the basic flow seems to work.
>
> https://patchwork.kernel.org/patch/11191

Maybe wrong link? That link looks like an unrelated discussion on kernel 
image relocation.

>>> So long as the 'driver lock' is held the range cannot become
>>> invalidated as the 'driver lock' prevents progress of invalidation.
>> Correct, but the problem is it doesn't wait for ongoing operations to
>> complete.
>>
>> See I'm talking about the following case:
>>
>> Thread A                        Thread B
>> invalidate_range_start()
>>                                 mmu_range_read_begin()
>>                                 get_user_pages()/hmm_range_fault()
>>                                 grab_driver_lock()
>> Updating the ptes
>> invalidate_range_end()
>>
>> As far as I can see in invalidate_range_start() the driver lock is taken
>> to make sure that we can't start any invalidation while the driver is
>> using the pages for a command submission.
> Again, this uses the seqlock like scheme *and* the driver lock.
>
> In this case after grab_driver_lock() mmu_range_read_retry() will
> return false if Thread A has progressed to 'updating the ptes'.
>
> For instance here is how the concurrency resolves for retry:
>
>         CPU1                                CPU2
>                                    seq = mmu_range_read_begin()
> invalidate_range_start()
>    invalidate_seq++

How that ordering works was what confused me. But I've read up on the
code in mmu_range_read_begin() and found the lines I was looking for:

+    if (is_invalidating)
+        wait_event(mmn_mm->wq,
+               READ_ONCE(mmn_mm->invalidate_seq) != seq);
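
So if I'm reading this right, the driver side boils down to the usual
collision-retry loop. Just to make sure we are talking about the same
thing, here is how I picture it; this is only a sketch written from the
documentation in this series, so the exact names and signatures may be
a bit off:

again:
	seq = mmu_range_read_begin(mrn);   /* waits while an invalidation is in flight */

	/* fault/pin the pages without holding the driver lock */
	ret = hmm_range_fault(&range);     /* or get_user_pages() */
	if (ret)
		return ret;

	mutex_lock(&driver_lock);          /* same lock taken in invalidate_range_start() */
	if (mmu_range_read_retry(mrn, seq)) {
		/* collided with an invalidation, drop the pages and try again */
		mutex_unlock(&driver_lock);
		goto again;
	}

	/* the pages stay valid until the driver lock is dropped, program the hardware here */
	mutex_unlock(&driver_lock);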

[SNIP]

> For the above I've simplified the mechanics of the invalidate_seq, you
> need to look through the patch to see how it actually works.

Yeah, that you also allow multiple write sides is pretty neat.

>> Well we don't update the seqlock after the update to the protected data
>> structure (the page table) happened, but rather before that.
> ??? This is what mn_itree_inv_end() does, it is called by
> invalidate_range_end
>
>> That doesn't look like the normal pattern for a seqlock to me, and as
>> far as I can see that is quite a bug in the HMM design/logic.
> Well, hmm has a bug because it doesn't use a seqlock pattern, see the
> above URL.
>
> One of the motivations for this work is to squash that bug by adding a
> seqlock like pattern. But the basic hmm flow and collision-retry
> approach seems sound.
>
> Do you see a problem with this patch?

No, not any more.

Essentially you are doing the same thing I tried to do with the original
amdgpu implementation. The difference is that you don't try to use a
per-range sequence (which is a good idea; we never got that fully
working) and you allow multiple writers at the same time.
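
For my own understanding, the write side then conceptually looks
something like the following. This is just my mental model; apart from
invalidate_seq and wq (which show up in the snippet above) the field
names are made up, and the real mechanics are in patch #2:

	/* invalidate_range_start() path, conceptually */
	spin_lock(&mmn_mm->lock);
	if (mmn_mm->active_invalidations++ == 0)
		mmn_mm->invalidate_seq |= 1;   /* odd: at least one invalidation running */
	spin_unlock(&mmn_mm->lock);
	/* ... then call the notifiers, which serialize against the driver lock ... */

	/* invalidate_range_end() -> mn_itree_inv_end(), conceptually */
	spin_lock(&mmn_mm->lock);
	if (--mmn_mm->active_invalidations == 0) {
		mmn_mm->invalidate_seq++;      /* even again: readers may make progress */
		wake_up_all(&mmn_mm->wq);
	}
	spin_unlock(&mmn_mm->lock);

That would also explain why the sequence only moves forward in
mn_itree_inv_end() once the last concurrent invalidation has finished.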

Feel free to stitch an Acked-by: Christian König
<christian.koenig@amd.com> on patch #2, but you are still doing a bunch
of things in there which are way beyond my understanding (e.g. where are
all the SMP barriers?).

Cheers,
Christian.

>
> Jason

