From: Alistair Popple <apopple@nvidia.com>
To: Peter Xu <peterx@redhat.com>
Cc: Balbir Singh <bsingharora@gmail.com>,
	John Hubbard <jhubbard@nvidia.com>,
	Andrew Morton <akpm@linux-foundation.org>, <linux-mm@kvack.org>,
	<nouveau@lists.freedesktop.org>, <bskeggs@redhat.com>,
	<rcampbell@nvidia.com>, <linux-doc@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <dri-devel@lists.freedesktop.org>,
	<hch@infradead.org>, <jglisse@redhat.com>, <willy@infradead.org>,
	<jgg@nvidia.com>, <hughd@google.com>,
	Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH v9 07/10] mm: Device exclusive memory access
Date: Thu, 3 Jun 2021 21:39:32 +1000
Message-ID: <3853054.AI2YdRgKcH@nvdebian>
In-Reply-To: <YLeXqp/U0DgylI/u@t490s>

On Thursday, 3 June 2021 12:37:30 AM AEST Peter Xu wrote:
> On Wed, Jun 02, 2021 at 06:50:37PM +1000, Balbir Singh wrote:
> > On Wed, May 26, 2021 at 12:17:18AM -0700, John Hubbard wrote:
> > > On 5/25/21 4:51 AM, Balbir Singh wrote:
> > > ...
> > > 
> > > > > How beneficial is this code to nouveau users?  I see that it permits a
> > > > > part of OpenCL to be implemented, but how useful/important is this in
> > > > > the real world?
> > > > 
> > > > That is a very good question! I've not reviewed the code, but a sample
> > > > program with the described use case would make things easy to parse.
> > > > I suspect that is not easy to build at the moment?
> > > 
> > > The cover letter says this:
> > > 
> > > This has been tested with upstream Mesa 21.1.0 and a simple OpenCL program
> > > which checks that GPU atomic accesses to system memory are atomic. Without
> > > this series the test fails as there is no way of write-protecting the page
> > > mapping which results in the device clobbering CPU writes. For reference
> > > the test is available at https://ozlabs.org/~apopple/opencl_svm_atomics/
> > > 
> > > Further testing has been performed by adding support for testing exclusive
> > > access to the hmm-tests kselftests.
> > > 
> > > ...so that seems to cover the "sample program" request, at least.
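
For a sense of what that sample program exercises, here is a rough sketch of
the kind of OpenCL check the cover letter describes (the real test at the URL
above may differ in detail):

    /* OpenCL C: every work-item atomically increments a counter that
     * lives in ordinary (fine-grained SVM) system memory.
     */
    __kernel void inc_counter(__global volatile atomic_int *counter)
    {
        atomic_fetch_add_explicit(counter, 1, memory_order_relaxed,
                                  memory_scope_all_svm_devices);
    }

The CPU increments the same integer concurrently. If GPU atomics to system
memory are truly atomic, the final value equals the total number of CPU and
GPU increments; without write protection of the CPU mapping the device
clobbers the CPU's writes and the count comes up short.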
> > 
> > Thanks, I'll take a look
> > 
> > > > I wonder how we co-ordinate all the work the mm is doing (page migration,
> > > > reclaim) with device exclusive access? Do we have any numbers for the
> > > > worst-case page fault latency when something is marked away for exclusive
> > > > access?
> > > 
> > > CPU page fault latency is approximately "terrible", if a page is
> > > resident on the GPU. We have to spin up a DMA engine on the GPU and
> > > have it copy the page over the PCIe bus, after all.
> > > 
> > > > I presume for now this is anonymous memory only? SWP_DEVICE_EXCLUSIVE would
> > > 
> > > Yes, for now.
> > > 
> > > > only impact the address space of programs using the GPU. Should the
> > > > exclusively marked range live on the unreclaimable list and be recycled
> > > > back to active/inactive to account for the fact that
> > > > 
> > > > 1. It is not reclaimable, and reclaim will only hurt via page faults?
> > > > 2. It ages the page correctly, or at least allows for that possibility,
> > > >    when the page is used by the GPU.
> > > 
> > > I'm not sure that that is *necessarily* something we can conclude. It
> > > depends upon the access patterns of each program. For example, a
> > > "reduction" parallel program sends over lots of data to the GPU, and
> > > only a tiny bit of (reduced!) data comes back to the CPU. In that case,
> > > freeing the physical page on the CPU is actually the best decision for
> > > the OS to make (if the OS is sufficiently prescient).
> > With a shared device or a device exclusive range, it would be good to get
> > the device usage pattern and update the mm with that knowledge, so that
> > the LRU can be better maintained. With your comment you seem to suggest
> > that a page used by the GPU might be a good candidate for reclaim based on
> > the CPU's understanding of the page's age, which does not account for use
> > by the device (are GPU workloads access-once-and-discard?)
> 
> Hmm, besides the aging info, this reminded me: do we need to isolate the
> page from the LRU too when marking it for device exclusive access?
> 
> Afaict the current patch didn't do that, so I think it's reclaimable.  If we
> still had the rmap then we'd get an MMU notifier CLEAR event when unmapping
> that special pte, so the device driver would be able to drop its ownership.
> However we dropped the rmap when marking exclusive.  Now I don't know whether
> and how it'll work if page reclaim runs with the page exclusively owned but
> not isolated from the LRU.

Reclaim won't run on the page due to the extra references from the special 
swap entries.
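
To illustrate the mechanism, a rough sketch in pseudo-kernel C (the helper
names approximate this series and are an assumption, not a quote of the
patch):

    /*
     * Marking a page for exclusive device access replaces the CPU pte
     * with a special non-present swap entry.  The page reference taken
     * when the page was looked up is deliberately not dropped - the
     * entry keeps holding it until a CPU fault restores the pte.
     */
    swp_entry_t entry = make_writable_device_exclusive_entry(page_to_pfn(page));
    set_pte_at(mm, address, ptep, swp_entry_to_pte(entry));
    /* no put_page() here: the swap entry owns the extra reference */

Since reclaim will only free a page whose reference count it can fully
account for, the surplus reference makes that check fail and the page is
rotated back onto the LRU rather than freed.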

> --
> Peter Xu

Thread overview: 123+ messages in thread
2021-05-24 13:27 [PATCH v9 00/10] Add support for SVM atomics in Nouveau Alistair Popple
2021-05-24 13:27 ` [PATCH v9 01/10] mm: Remove special swap entry functions Alistair Popple
2021-05-24 13:27 ` [PATCH v9 02/10] mm/swapops: Rework swap entry manipulation code Alistair Popple
2021-05-24 13:27 ` [PATCH v9 03/10] mm/rmap: Split try_to_munlock from try_to_unmap Alistair Popple
2021-05-25 18:39   ` Liam Howlett
2021-05-25 23:45     ` Shakeel Butt
2021-06-04 20:49       ` Liam Howlett
2021-06-05  0:41         ` Shakeel Butt
2021-06-05  3:39           ` Liam Howlett
2021-06-05  4:19             ` Shakeel Butt
2021-06-07  4:51           ` Alistair Popple
2021-05-24 13:27 ` [PATCH v9 04/10] mm/rmap: Split migration into its own function Alistair Popple
2021-05-24 13:27 ` [PATCH v9 05/10] mm: Rename migrate_pgmap_owner Alistair Popple
2021-05-26 19:41   ` Peter Xu
2021-05-24 13:27 ` [PATCH v9 06/10] mm/memory.c: Allow different return codes for copy_nonpresent_pte() Alistair Popple
2021-05-26 19:50   ` Peter Xu
2021-05-27  1:20     ` Alistair Popple
2021-05-27  1:44       ` Peter Xu
2021-05-24 13:27 ` [PATCH v9 07/10] mm: Device exclusive memory access Alistair Popple
2021-05-24 22:11   ` Andrew Morton
2021-05-25  1:31     ` John Hubbard
2021-05-25  9:21       ` Alistair Popple
2021-05-25 11:51     ` Balbir Singh
2021-05-26  7:17       ` John Hubbard
2021-05-26 13:30         ` Alistair Popple
2021-06-02  8:50         ` Balbir Singh
2021-06-02 14:37           ` Peter Xu
2021-06-03 11:39             ` Alistair Popple [this message]
2021-06-03 14:47               ` Peter Xu
2021-06-04  1:07                 ` Alistair Popple
2021-06-04 15:20                   ` Peter Xu
2021-06-03  8:37           ` John Hubbard
2021-05-26 19:28   ` Peter Xu
2021-05-27  3:35     ` Alistair Popple
2021-05-27 13:04       ` Peter Xu
2021-05-28  1:48         ` Alistair Popple
2021-05-28 13:11           ` Peter Xu
2021-05-24 13:27 ` [PATCH v9 08/10] mm: Selftests for exclusive device memory Alistair Popple
2021-05-24 13:27 ` [PATCH v9 09/10] nouveau/svm: Refactor nouveau_range_fault Alistair Popple
2021-05-24 13:27 ` [PATCH v9 10/10] nouveau/svm: Implement atomic SVM access Alistair Popple
