From: Peter Xu <peterx@redhat.com>
To: Balbir Singh <bsingharora@gmail.com>
Cc: John Hubbard <jhubbard@nvidia.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Alistair Popple <apopple@nvidia.com>,
	linux-mm@kvack.org, nouveau@lists.freedesktop.org,
	bskeggs@redhat.com, rcampbell@nvidia.com,
	linux-doc@vger.kernel.org, linux-kernel@vger.kernel.org,
	dri-devel@lists.freedesktop.org, hch@infradead.org,
	jglisse@redhat.com, willy@infradead.org, jgg@nvidia.com,
	hughd@google.com, Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH v9 07/10] mm: Device exclusive memory access
Date: Wed, 2 Jun 2021 10:37:30 -0400	[thread overview]
Message-ID: <YLeXqp/U0DgylI/u@t490s> (raw)
In-Reply-To: <YLdGXSw0zdiovn4i@balbir-desktop>

On Wed, Jun 02, 2021 at 06:50:37PM +1000, Balbir Singh wrote:
> On Wed, May 26, 2021 at 12:17:18AM -0700, John Hubbard wrote:
> > On 5/25/21 4:51 AM, Balbir Singh wrote:
> > ...
> > > > How beneficial is this code to nouveau users?  I see that it permits a
> > > > part of OpenCL to be implemented, but how useful/important is this in
> > > > the real world?
> > > 
> > > That is a very good question! I've not reviewed the code, but a sample
> > > program with the described use case would make things easier to evaluate.
> > > I suspect that is not easy to build at the moment?
> > > 
> > 
> > The cover letter says this:
> > 
> > This has been tested with upstream Mesa 21.1.0 and a simple OpenCL program
> > which checks that GPU atomic accesses to system memory are atomic. Without
> > this series the test fails as there is no way of write-protecting the page
> > mapping which results in the device clobbering CPU writes. For reference
> > the test is available at https://ozlabs.org/~apopple/opencl_svm_atomics/
> > 
> > Further testing has been performed by adding support for testing exclusive
> > access to the hmm-tests kselftests.
> > 
> > ...so that seems to cover the "sample program" request, at least.
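
For context, the shape of such a test is roughly the below (a hedged
sketch only, not the actual program at the URL above; the kernel name
and the host-side details are made up):

  /* Device side (OpenCL C, -cl-std=CL2.0): atomically increment a
   * counter that lives in SVM-backed system memory.
   */
  __kernel void svm_atomic_inc(__global atomic_int *counter, int iters)
  {
          for (int i = 0; i < iters; i++)
                  atomic_fetch_add_explicit(counter, 1,
                                            memory_order_relaxed,
                                            memory_scope_all_svm_devices);
  }

  /* Host side (sketch): launch the kernel while CPU threads do
   * __atomic_fetch_add() on the same counter.  If GPU atomics to
   * system memory are not atomic w.r.t. the CPU, the final count
   * comes up short and the test fails.
   */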
> 
> Thanks, I'll take a look
> 
> > 
> > > I wonder how we coordinate all the work the mm is doing (page migration,
> > > reclaim) with device exclusive access? Do we have any numbers for the
> > > worst-case page fault latency when something is marked for exclusive access?
> > 
> > CPU page fault latency is approximately "terrible", if a page is resident on
> > the GPU. We have to spin up a DMA engine on the GPU and have it copy the page
> > over the PCIe bus, after all.
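
A rough estimate (assuming PCIe 3.0 x16 at ~12 GB/s effective): the
4 KiB copy itself is only ~0.3 us, so the end-to-end latency is
dominated by the fault round-trip and DMA engine setup rather than the
transfer itself, easily tens of microseconds versus well under a
microsecond for an ordinary minor fault.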
> > 
> > > I presume for now this is anonymous memory only? SWP_DEVICE_EXCLUSIVE would
> > 
> > Yes, for now.
> > 
> > > only impact the address space of programs using the GPU. Should the exclusively
> > > marked range live on the unreclaimable list and be recycled back to active/inactive
> > > to account for the fact that
> > > 
> > > 1. It is not reclaimable, and reclaim will only hurt via page faults?
> > > 2. It ages the page correctly, or at least allows for that possibility when the
> > >     page is used by the GPU.
> > 
> > I'm not sure that that is *necessarily* something we can conclude. It depends upon
> > the access patterns of each program. For example, a "reduction" parallel program sends
> > over lots of data to the GPU, and only a tiny bit of (reduced!) data comes back
> > to the CPU. In that case, freeing the physical page on the CPU is actually the
> > best decision for the OS to make (if the OS is sufficiently prescient).
> >
> 
> With a shared device or a device exclusive range, it would be good to get the device
> usage pattern and update the mm with that knowledge, so that the LRU can be better
> maintained. With your comment you seem to suggest that a page used by the GPU might
> be a good candidate for reclaim based on the CPU's view of its age, and that the
> aging should not account for use by the device
> (are GPU workloads access-once-and-discard?)

Hmm, besides the aging info, this reminded me: do we need to isolate the page
from the LRU too when marking it for device exclusive access?
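
Something like the below is what I had in mind (a rough sketch only;
locking and error handling omitted, and whether it belongs in
make_device_exclusive_range() is an open question):

  /* Hypothetical helper, called with the page locked and held while
   * installing the device exclusive swap entry.
   */
  static int page_make_device_exclusive(struct page *page)
  {
          /* Take the page off the LRU so reclaim never sees it. */
          if (isolate_lru_page(page))
                  return -EBUSY;  /* racing with reclaim/migration */

          /* ... install the device exclusive swap entry here ... */
          return 0;
  }

  /* And when the exclusive entry is torn down again: */
  putback_lru_page(page);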

Afaict the current patch doesn't do that, so I think the page is still
reclaimable.  If we still had the rmap we'd get an mmu notify CLEAR when
unmapping that special pte, so the device driver would be able to drop its
ownership.  However, we dropped the rmap when marking the page exclusive, so
now I don't know whether, or how, it will work if page reclaim runs while the
page is exclusively owned but not isolated from the LRU...
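
For reference, the notifier side I'd expect on the driver looks roughly
like this (hedged sketch; struct my_svm and gpu_revoke_atomic_access()
are made-up names):

  static bool my_invalidate(struct mmu_interval_notifier *mni,
                            const struct mmu_notifier_range *range,
                            unsigned long cur_seq)
  {
          struct my_svm *svm = container_of(mni, struct my_svm, notifier);

          if (!mmu_notifier_range_blockable(range))
                  return false;

          mmu_interval_set_seq(mni, cur_seq);
          /* CPU is unmapping the special pte: give up GPU atomics. */
          if (range->event == MMU_NOTIFY_CLEAR)
                  gpu_revoke_atomic_access(svm, range->start, range->end);
          return true;
  }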

-- 
Peter Xu


