From: "Christian König" <ckoenig.leichtzumerken@gmail.com>
To: Jason Gunthorpe <jgg@mellanox.com>,
	"Koenig, Christian" <Christian.Koenig@amd.com>,
	Andrea Arcangeli <aarcange@redhat.com>,
	"Yang, Philip" <Philip.Yang@amd.com>,
	Ralph Campbell <rcampbell@nvidia.com>,
	"linux-rdma@vger.kernel.org" <linux-rdma@vger.kernel.org>,
	John Hubbard <jhubbard@nvidia.com>,
	"Kuehling, Felix" <Felix.Kuehling@amd.com>,
	"amd-gfx@lists.freedesktop.org" <amd-gfx@lists.freedesktop.org>,
	"linux-mm@kvack.org" <linux-mm@kvack.org>,
	Jerome Glisse <jglisse@redhat.com>,
	"dri-devel@lists.freedesktop.org"
	<dri-devel@lists.freedesktop.org>,
	Ben Skeggs <bskeggs@redhat.com>
Subject: Re: [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking
Date: Wed, 23 Oct 2019 11:32:16 +0200	[thread overview]
Message-ID: <13edf841-421e-3522-fcec-ef919c2013ef@gmail.com> (raw)
In-Reply-To: <20191023090858.GV11828@phenom.ffwll.local>

On 23.10.19 11:08, Daniel Vetter wrote:
> On Tue, Oct 22, 2019 at 03:01:13PM +0000, Jason Gunthorpe wrote:
>> On Tue, Oct 22, 2019 at 09:57:35AM +0200, Daniel Vetter wrote:
>>
>>>> The unusual bit in all of this is using a lock's critical region to
>>>> 'protect' data for read, but updating that same data before the lock's
>>>> critical section, i.e. relying on the unlock barrier to 'release' program
>>>> ordered stores done before the lock's own critical region, and the
>>>> lock side barrier to 'acquire' those stores.
>>> I think this unusual use of locks as barriers for other unlocked accesses
>>> deserves comments even more than just normal barriers. Can you pls add
>>> them? I think the design seems sound ...
>>>
>>> Also the comment on the driver's lock hopefully prevents driver
>>> maintainers from moving the driver_lock around in a way that would very
>>> subtly break the scheme, so I think having the acquire barrier commented
>>> in each place would be really good.
>> There is already a lot of documentation; I think it would be helpful
>> if you could suggest some specific places where you think an addition
>> would help? I think the perspective of someone less familiar with this
>> design would really improve the documentation
> Hm I just meant the usual recommendation that "barriers must have comments
> explaining what they order, and where the other side of the barrier is".
> Using unlock/lock as a barrier imo just makes that an even better idea.
> Usually what I do is something like "we need to order $this against $that
> below, and the other side of this barrier is in function()." With maybe a
> bit more if it's not obvious how things go wrong if the ordering is broken.
>
> Ofc seqlock.h itself skimps on that rule and doesn't bother explaining its
> barriers :-/
>
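For illustration, I'd expect the comment Daniel asks for to look roughly like
this. This is only a sketch of the pairing Jason describes above; the field
and function names are made up and not the actual mmu_range_notifier API:

	/*
	 * Invalidate side: publish the new sequence number *before*
	 * entering the driver lock's critical section.  The
	 * mutex_unlock() below is the RELEASE that makes this store
	 * visible; the matching ACQUIRE is the mutex_lock() in the
	 * driver's fault path (second fragment).
	 */
	WRITE_ONCE(mrn->invalidate_seq, cur_seq);
	mutex_lock(&svmm->mutex);
	/* ... drop the device mappings for the range ... */
	mutex_unlock(&svmm->mutex);

	/*
	 * Fault side: seq was sampled earlier, before setting up the
	 * mapping.  Taking the lock pairs with the unlock above, so
	 * the updated invalidate_seq is guaranteed to be visible here
	 * and a collision forces a retry.
	 */
	mutex_lock(&svmm->mutex);
	if (READ_ONCE(mrn->invalidate_seq) != seq) {
		mutex_unlock(&svmm->mutex);
		goto again;	/* the range was invalidated, retry */
	}
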
>> I've been tempted to force the driver to store the seq number directly
>> under the driver lock - this makes the scheme much clearer, i.e.
>> something like this:
>>
>> diff --git a/drivers/gpu/drm/nouveau/nouveau_svm.c b/drivers/gpu/drm/nouveau/nouveau_svm.c
>> index 712c99918551bc..738fa670dcfb19 100644
>> --- a/drivers/gpu/drm/nouveau/nouveau_svm.c
>> +++ b/drivers/gpu/drm/nouveau/nouveau_svm.c
>> @@ -488,7 +488,8 @@ struct svm_notifier {
>>   };
>>   
>>   static bool nouveau_svm_range_invalidate(struct mmu_range_notifier *mrn,
>> -                                        const struct mmu_notifier_range *range)
>> +                                        const struct mmu_notifier_range *range,
>> +                                        unsigned long seq)
>>   {
>>          struct svm_notifier *sn =
>>                  container_of(mrn, struct svm_notifier, notifier);
>> @@ -504,6 +505,7 @@ static bool nouveau_svm_range_invalidate(struct mmu_range_notifier *mrn,
>>                  mutex_lock(&sn->svmm->mutex);
>>          else if (!mutex_trylock(&sn->svmm->mutex))
>>                  return false;
>> +       mmu_range_notifier_update_seq(mrn, seq);
>>          mutex_unlock(&sn->svmm->mutex);
>>          return true;
>>   }
>>
>>
>> At the cost of making the driver a bit more complex, what do you
>> think?
> Hm, spinning this further ... could we initialize the mmu range notifier
> with a pointer to the driver lock, so that we could put a
> lockdep_assert_held into mmu_range_notifier_update_seq? I think that would
> make this scheme substantially more driver-hacker proof :-)
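
Something like the following is how I read Daniel's suggestion (only a sketch;
the driver_lock back-pointer is invented here and the update_seq() signature
is taken from Jason's diff above):

	struct mmu_range_notifier {
		/* ... */
		/*
		 * Lock the driver promises to hold while it samples and
		 * acts on invalidate_seq; only used for lockdep here.
		 */
		struct mutex *driver_lock;
		unsigned long invalidate_seq;
	};

	static void mmu_range_notifier_update_seq(struct mmu_range_notifier *mrn,
						  unsigned long seq)
	{
		lockdep_assert_held(mrn->driver_lock);
		mrn->invalidate_seq = seq;
	}

That would catch a driver calling update_seq() outside its own lock, which is
exactly the kind of subtle breakage Daniel worries about.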

Going another step further... what prevents us from putting the lock into the
mmu range notifier itself and having _lock()/_unlock() helpers?

I mean, having the lock in the driver only makes sense when the driver uses
the same lock for multiple things, e.g. multiple MMU range notifiers under
the same lock. But I really don't see that use case here.
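
Very rough sketch of what I mean (names invented):

	struct mmu_range_notifier {
		/* ... */
		struct mutex lock;	/* embedded instead of a driver lock */
		unsigned long invalidate_seq;
	};

	static inline void mmu_range_notifier_lock(struct mmu_range_notifier *mrn)
	{
		mutex_lock(&mrn->lock);
	}

	static inline void mmu_range_notifier_unlock(struct mmu_range_notifier *mrn)
	{
		mutex_unlock(&mrn->lock);
	}

The invalidate callback and the driver's fault path would both go through
these helpers, and mmu_range_notifier_update_seq() could then assert that the
embedded lock is held without any driver cooperation.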

Regards,
Christian.

>
> Cheers, Daniel


Thread overview: 138+ messages

2019-10-15 18:12 [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking Jason Gunthorpe
2019-10-15 18:12 ` [PATCH hmm 01/15] mm/mmu_notifier: define the header pre-processor parts even if disabled Jason Gunthorpe
2019-10-21 18:32   ` Jerome Glisse
2019-10-15 18:12 ` [PATCH hmm 02/15] mm/mmu_notifier: add an interval tree notifier Jason Gunthorpe
2019-10-21 18:30   ` Jerome Glisse
2019-10-21 18:54     ` Jason Gunthorpe
2019-10-21 19:11       ` Jerome Glisse
2019-10-21 19:24         ` Jason Gunthorpe
2019-10-21 19:47           ` Jerome Glisse
2019-10-27 23:15   ` Jason Gunthorpe
2019-10-15 18:12 ` [PATCH hmm 03/15] mm/hmm: allow hmm_range to be used with a mmu_range_notifier or hmm_mirror Jason Gunthorpe
2019-10-21 18:33   ` Jerome Glisse
2019-10-15 18:12 ` [PATCH hmm 04/15] mm/hmm: define the pre-processor related parts of hmm.h even if disabled Jason Gunthorpe
2019-10-21 18:31   ` Jerome Glisse
2019-10-15 18:12 ` [PATCH hmm 05/15] RDMA/odp: Use mmu_range_notifier_insert() Jason Gunthorpe
2019-11-04 20:25   ` Jason Gunthorpe
2019-10-15 18:12 ` [PATCH hmm 06/15] RDMA/hfi1: Use mmu_range_notifier_inset for user_exp_rcv Jason Gunthorpe
2019-10-29 12:15   ` Dennis Dalessandro
2019-10-15 18:12 ` [PATCH hmm 07/15] drm/radeon: use mmu_range_notifier_insert Jason Gunthorpe
2019-10-15 18:12 ` [PATCH hmm 08/15] xen/gntdev: Use select for DMA_SHARED_BUFFER Jason Gunthorpe
2019-10-16  5:11   ` Jürgen Groß
2019-10-16  6:35     ` Oleksandr Andrushchenko
2019-10-21 19:12       ` Jason Gunthorpe
2019-10-28  6:25         ` Oleksandr Andrushchenko
2019-10-15 18:12 ` [PATCH hmm 09/15] xen/gntdev: use mmu_range_notifier_insert Jason Gunthorpe
2019-10-15 18:12 ` [PATCH hmm 10/15] nouveau: use mmu_notifier directly for invalidate_range_start Jason Gunthorpe
2019-10-15 18:12 ` [PATCH hmm 11/15] nouveau: use mmu_range_notifier instead of hmm_mirror Jason Gunthorpe
2019-10-15 18:12 ` [PATCH hmm 12/15] drm/amdgpu: Call find_vma under mmap_sem Jason Gunthorpe
2019-10-15 18:12 ` [PATCH hmm 13/15] drm/amdgpu: Use mmu_range_insert instead of hmm_mirror Jason Gunthorpe
2019-10-15 18:12 ` [PATCH hmm 14/15] drm/amdgpu: Use mmu_range_notifier " Jason Gunthorpe
2019-10-15 18:12 ` [PATCH hmm 15/15] mm/hmm: remove hmm_mirror and related Jason Gunthorpe
2019-10-21 18:38   ` Jerome Glisse
2019-10-21 18:57     ` Jason Gunthorpe
2019-10-21 19:19       ` Jerome Glisse
2019-10-16  8:58 ` [PATCH hmm 00/15] Consolidate the mmu notifier interval_tree and locking Christian König
2019-10-16 16:04   ` Jason Gunthorpe
2019-10-17  8:54     ` Christian König
2019-10-17 16:26       ` Yang, Philip
2019-10-17 16:47         ` Koenig, Christian
2019-10-18 20:36           ` Jason Gunthorpe
2019-10-20 14:21             ` Koenig, Christian
2019-10-21 13:57               ` Jason Gunthorpe
2019-10-21 14:28                 ` Koenig, Christian
2019-10-21 15:12                   ` Jason Gunthorpe
2019-10-22  7:57                     ` Daniel Vetter
2019-10-22 15:01                       ` Jason Gunthorpe
2019-10-23  9:08                         ` Daniel Vetter
2019-10-23  9:32                           ` Christian König [this message]
2019-10-23 16:52                             ` Jerome Glisse
2019-10-23 17:24                               ` Jason Gunthorpe
2019-10-24  2:16                                 ` Christoph Hellwig
2019-10-21 15:55 ` Dennis Dalessandro
2019-10-21 16:58   ` Jason Gunthorpe
2019-10-22 11:56     ` Dennis Dalessandro
2019-10-22 14:37       ` Jason Gunthorpe
2019-10-21 18:40 ` Jerome Glisse
2019-10-21 19:06   ` Jason Gunthorpe
2019-10-23 20:26     ` Jerome Glisse
2019-10-17 16:44 Koenig, Christian
