From: Alistair Popple <apopple@nvidia.com>
To: John Hubbard <jhubbard@nvidia.com>
Cc: Balbir Singh <bsingharora@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>, <linux-mm@kvack.org>,
	<nouveau@lists.freedesktop.org>, <bskeggs@redhat.com>,
	<rcampbell@nvidia.com>, <linux-doc@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <dri-devel@lists.freedesktop.org>,
	<hch@infradead.org>, <jglisse@redhat.com>, <willy@infradead.org>,
	<jgg@nvidia.com>, <peterx@redhat.com>, <hughd@google.com>,
	Christoph Hellwig <hch@lst.de>
Subject: Re: [PATCH v9 07/10] mm: Device exclusive memory access
Date: Wed, 26 May 2021 23:30:06 +1000
Message-ID: <1743144.c4ng0vEeQp@nvdebian>
In-Reply-To: <8844f8c1-d78c-e0f9-c046-592bd75d4c07@nvidia.com>

On Wednesday, 26 May 2021 5:17:18 PM AEST John Hubbard wrote:
> On 5/25/21 4:51 AM, Balbir Singh wrote:
> ...
> 
> >> How beneficial is this code to nouveau users?  I see that it permits a
> >> part of OpenCL to be implemented, but how useful/important is this in
> >> the real world?
> > 
> > That is a very good question! I've not reviewed the code, but a sample
> > program with the described use case would make things easy to parse.
> > I suspect that is not easy to build at the moment?
> 
> The cover letter says this:
> 
> This has been tested with upstream Mesa 21.1.0 and a simple OpenCL program
> which checks that GPU atomic accesses to system memory are atomic. Without
> this series the test fails as there is no way of write-protecting the page
> mapping, which results in the device clobbering CPU writes. For reference
> the test is available at https://ozlabs.org/~apopple/opencl_svm_atomics/
> 
> Further testing has been performed by adding support for testing exclusive
> access to the hmm-tests kselftests.
> 
> ...so that seems to cover the "sample program" request, at least.

It is also sufficiently easy to build, assuming of course you have the 
appropriate Mesa/LLVM/OpenCL libraries installed :-)
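
For a flavour of the test without building anything, the core of it is just
the CPU and the GPU atomically incrementing the same SVM allocation and
checking the total afterwards. A rough sketch of the device side in OpenCL C
(illustrative names only, not the actual test from the link above; it needs a
device reporting CL_DEVICE_SVM_FINE_GRAIN_SYSTEM and CL_DEVICE_SVM_ATOMICS,
built with -cl-std=CL2.0):

__kernel void svm_atomic_inc(volatile __global atomic_int *counter,
			     int iterations)
{
	/* Every work-item hammers a counter that lives in ordinary
	 * system memory. */
	for (int i = 0; i < iterations; i++)
		atomic_fetch_add_explicit(counter, 1,
					  memory_order_relaxed,
					  memory_scope_all_svm_devices);
}

The host does the same with __atomic_fetch_add() on the CPU while the kernel
runs. If the final count equals the sum of CPU and GPU increments the
accesses were atomic; without this series the GPU's non-coherent updates
clobber the CPU's and the count comes up short.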

If you are interested I have some scripts which may help with building Mesa,
etc. Not that it is especially hard either; it's just that there are a couple
of different dependencies required.

> > I wonder how we co-ordinate all the work the mm is doing (page migration,
> > reclaim) with device exclusive access? Do we have any numbers for the
> > worst-case page fault latency when something is marked for exclusive
> > access?
>
> CPU page fault latency is approximately "terrible", if a page is resident on
> the GPU. We have to spin up a DMA engine on the GPU and have it copy the
> page over the PCIe bus, after all.

Although for clarity that describes the latency for CPU faults to device
private pages, which are always resident on the GPU. A CPU fault to a page
being exclusively accessed will be slightly less terrible, as it only
requires the GPU MMU/TLB mappings to be taken down in much the same way as
for any other MMU notifier callback, because the page is merely mapped by the
GPU rather than resident there.
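
To make that concrete, the driver side looks roughly like the sketch below.
make_device_exclusive_range() is the helper this patch adds; everything else
(gpu_vma, gpu_teardown_mapping, etc.) is a made-up driver name, and the
notifier plumbing is the stock mmu_interval_notifier API:

#include <linux/mm.h>
#include <linux/mmu_notifier.h>
#include <linux/rmap.h>	/* make_device_exclusive_range() */

/* Hypothetical per-VMA driver state. */
struct gpu_vma {
	struct mmu_interval_notifier notifier;
	struct mutex lock;
	/* ... GPU page table state ... */
};

/* GPU wants to perform an atomic on 'addr': mark the page for exclusive
 * device access. This write-protects the CPU mapping, so any CPU access
 * faults and ends the exclusive period. */
static int gpu_map_atomic_page(struct gpu_vma *gvma, unsigned long addr)
{
	unsigned long start = addr & PAGE_MASK;
	struct page *page;
	int npages;

	npages = make_device_exclusive_range(current->mm, start,
					     start + PAGE_SIZE, &page, gvma);
	if (npages != 1)
		return npages < 0 ? npages : -EBUSY;

	/* ... program the GPU PTE from 'page', then unlock/put it ... */
	return 0;
}

/* A CPU fault on the exclusively-held page (or any other invalidation of
 * the range) lands here. Note there is no page copy over PCIe; we only
 * tear down the GPU's mapping. */
static bool gpu_notifier_invalidate(struct mmu_interval_notifier *mni,
				    const struct mmu_notifier_range *range,
				    unsigned long cur_seq)
{
	struct gpu_vma *gvma = container_of(mni, struct gpu_vma, notifier);

	/* make_device_exclusive_range() fires its own notification; the
	 * owner field lets us recognise and skip it rather than
	 * immediately unmapping ourselves. */
	if (range->event == MMU_NOTIFY_EXCLUSIVE && range->owner == gvma)
		return true;

	if (!mmu_notifier_range_blockable(range))
		return false;

	mutex_lock(&gvma->lock);
	mmu_interval_set_seq(mni, cur_seq);
	gpu_teardown_mapping(gvma, range->start, range->end); /* hypothetical */
	mutex_unlock(&gvma->lock);
	return true;
}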

> > I presume for now this is anonymous memory only? SWP_DEVICE_EXCLUSIVE
> > would
> 
> Yes, for now.
> 
> > only impact the address space of programs using the GPU. Should the
> > exclusively marked range live on the unreclaimable list and be recycled
> > back to active/inactive to account for the fact that
> > 
> > 1. It is not reclaimable and reclaim will only hurt via page faults?
> > 2. It ages the page correctly, or at least allows for that possibility,
> >    when the page is used by the GPU.
> 
> I'm not sure that that is *necessarily* something we can conclude. It
> depends upon access patterns of each program. For example, a "reduction"
> parallel program sends over lots of data to the GPU, and only a tiny bit of
> (reduced!) data comes back to the CPU. In that case, freeing the physical
> page on the CPU is actually the best decision for the OS to make (if the OS
> is sufficiently prescient).
> 
> thanks,
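
As an aside, the reduction example is a good one precisely because the data
flow is so one-sided. Sketched in OpenCL C (again with illustrative names),
each work-group folds a slice of the input down to a single partial sum, so
megabytes go to the GPU and only a handful of integers ever come back:

__kernel void reduce_sum(__global const int *in, __global int *out,
			 __local int *scratch, int n)
{
	int gid = get_global_id(0);
	int lid = get_local_id(0);

	scratch[lid] = gid < n ? in[gid] : 0;
	barrier(CLK_LOCAL_MEM_FENCE);

	/* Pairwise tree reduction within the work-group. */
	for (int s = (int)get_local_size(0) / 2; s > 0; s >>= 1) {
		if (lid < s)
			scratch[lid] += scratch[lid + s];
		barrier(CLK_LOCAL_MEM_FENCE);
	}
	if (lid == 0)
		out[get_group_id(0)] = scratch[0];
}

Under that pattern the CPU copy of the input pages really is dead weight, so
reclaiming them is exactly the right call.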
