From: Peter Xu <peterx@redhat.com>
To: Alistair Popple <apopple@nvidia.com>
Cc: linux-mm@kvack.org, akpm@linux-foundation.org,
nouveau@lists.freedesktop.org, bskeggs@redhat.com,
rcampbell@nvidia.com, linux-doc@vger.kernel.org,
jhubbard@nvidia.com, bsingharora@gmail.com,
linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org,
hch@infradead.org, jglisse@redhat.com, willy@infradead.org,
jgg@nvidia.com, hughd@google.com
Subject: Re: [PATCH v9 06/10] mm/memory.c: Allow different return codes for copy_nonpresent_pte()
Date: Wed, 26 May 2021 21:44:22 -0400 [thread overview]
Message-ID: <YK75dpdwU9AIKJ6i@t490s> (raw)
In-Reply-To: <2005328.bFqPmhE5MS@nvdebian>
On Thu, May 27, 2021 at 11:20:36AM +1000, Alistair Popple wrote:
> On Thursday, 27 May 2021 5:50:05 AM AEST Peter Xu wrote:
> > On Mon, May 24, 2021 at 11:27:21PM +1000, Alistair Popple wrote:
> > > Currently if copy_nonpresent_pte() returns a non-zero value it is
> > > assumed to be a swap entry which requires further processing outside the
> > > loop in copy_pte_range() after dropping locks. This prevents other
> > > values being returned to signal conditions such as failure which a
> > > subsequent change requires.
> > >
> > > Instead make copy_nonpresent_pte() return an error code if further
> > > processing is required and read the value for the swap entry in the main
> > > loop under the ptl.
> > >
> > > Signed-off-by: Alistair Popple <apopple@nvidia.com>
> > >
> > > ---
> > >
> > > v9:
> > >
> > > New for v9 to allow device exclusive handling to occur in
> > > copy_nonpresent_pte().
> > > ---
> > >
> > > mm/memory.c | 12 +++++++-----
> > > 1 file changed, 7 insertions(+), 5 deletions(-)
> > >
> > > diff --git a/mm/memory.c b/mm/memory.c
> > > index 2fb455c365c2..e061cfa18c11 100644
> > > --- a/mm/memory.c
> > > +++ b/mm/memory.c
> > > @@ -718,7 +718,7 @@ copy_nonpresent_pte(struct mm_struct *dst_mm, struct mm_struct *src_mm,
> > >
> > >  	if (likely(!non_swap_entry(entry))) {
> > >  		if (swap_duplicate(entry) < 0)
> > > -			return entry.val;
> > > +			return -EAGAIN;
> > >
> > >  		/* make sure dst_mm is on swapoff's mmlist. */
> > >  		if (unlikely(list_empty(&dst_mm->mmlist))) {
> > >
> > > @@ -974,11 +974,13 @@ copy_pte_range(struct vm_area_struct *dst_vma, struct vm_area_struct *src_vma,
> > >  			continue;
> > >  		}
> > >  		if (unlikely(!pte_present(*src_pte))) {
> > > -			entry.val = copy_nonpresent_pte(dst_mm, src_mm,
> > > -							dst_pte, src_pte,
> > > -							src_vma, addr, rss);
> > > -			if (entry.val)
> > > +			ret = copy_nonpresent_pte(dst_mm, src_mm,
> > > +						  dst_pte, src_pte,
> > > +						  src_vma, addr, rss);
> > > +			if (ret == -EAGAIN) {
> > > +				entry = pte_to_swp_entry(*src_pte);
> > >  				break;
> > > +			}
> > >  			progress += 8;
> > >  			continue;
> > >  		}
> >
> > Note that -EAGAIN was previously used by copy_present_page() for the early
> > CoW case. Here, later on, although we check entry.val first:
> >
> > 	if (entry.val) {
> > 		if (add_swap_count_continuation(entry, GFP_KERNEL) < 0) {
> > 			ret = -ENOMEM;
> > 			goto out;
> > 		}
> > 		entry.val = 0;
> > 	} else if (ret) {
> > 		WARN_ON_ONCE(ret != -EAGAIN);
> > 		prealloc = page_copy_prealloc(src_mm, src_vma, addr);
> > 		if (!prealloc)
> > 			return -ENOMEM;
> > 		/* We've captured and resolved the error. Reset, try again. */
> > 		ret = 0;
> > 	}
> >
> > We didn't reset "ret" in the entry.val case (maybe we should?). Then in the
> > next round of "goto again", if "ret" is unluckily untouched, it could reach
> > the 2nd if check, and I think it could cause an unexpected
> > page_copy_prealloc().
>
> Thanks, I had considered that but saw "ret" was always set either by
> copy_nonpresent_pte() or copy_present_pte(). However, I missed the "unlucky"
> case at the start of the loop:
>
> if (progress >= 32) {
> progress = 0;
> if (need_resched() ||
> 	    spin_needbreak(src_ptl) || spin_needbreak(dst_ptl))
> break;
>
> Looking at this again, though, checking different variables to figure out
> what to do outside the locks and reusing error codes seems error prone. I
> reused -EAGAIN for copy_nonpresent_pte() simply because that seemed the most
> sensible error code, but I don't think that aids readability and it might be
> better to use a unique error code for each case needing extra handling.
>
> So it might be better if I update this patch to:
> 1) Use unique error codes for each case requiring special handling outside the
> lock.
> 2) Only check "ret" to determine what to do outside locks (ie. not entry.val)
> 3) Document these.
> 4) Always reset ret after handling.
>
> Thoughts?
Looks good to me. Thanks,
--
Peter Xu