linux-mm.kvack.org archive mirror
 help / color / mirror / Atom feed
From: Felix Kuehling <felix.kuehling@amd.com>
To: Dave Chinner <david@fromorbit.com>
Cc: Christoph Hellwig <hch@lst.de>,
	"Sierra Guiza, Alejandro (Alex)" <alex.sierra@amd.com>,
	akpm@linux-foundation.org, linux-mm@kvack.org,
	rcampbell@nvidia.com, linux-ext4@vger.kernel.org,
	linux-xfs@vger.kernel.org, amd-gfx@lists.freedesktop.org,
	dri-devel@lists.freedesktop.org, jgg@nvidia.com,
	jglisse@redhat.com
Subject: Re: [PATCH v1 03/14] mm: add iomem vma selection for memory migration
Date: Wed, 1 Sep 2021 19:07:34 -0400	[thread overview]
Message-ID: <e6a9e3cc-9dca-946a-c3fc-f86753fe8fd4@amd.com> (raw)
In-Reply-To: <20210901220308.GL2566745@dread.disaster.area>

On 2021-09-01 6:03 p.m., Dave Chinner wrote:
> On Wed, Sep 01, 2021 at 11:40:43AM -0400, Felix Kuehling wrote:
>> Am 2021-09-01 um 4:29 a.m. schrieb Christoph Hellwig:
>>> On Mon, Aug 30, 2021 at 01:04:43PM -0400, Felix Kuehling wrote:
>>>>>> driver code is not really involved in updating the CPU mappings. Maybe
>>>>>> it's something we need to do in the migration helpers.
>>>>> It looks like I'm totally misunderstanding what you are adding here
>>>>> then.  Why do we need any special treatment at all for memory that
>>>>> has normal struct pages and is part of the direct kernel map?
>>>> The pages are like normal memory for purposes of mapping them in CPU
>>>> page tables and for coherent access from the CPU.
>>> That's the user page tables.  What about the kernel direct map?
>>> If there is a normal kernel struct page backing there really should
>>> be no need for the pgmap.
>> I'm not sure. The physical address ranges are in the UEFI system address
>> map as special-purpose memory. Does Linux create the struct pages and
>> kernel direct map for that without a pgmap call? I didn't see that last
>> time I went digging through that code.
>>
>>
>>>>  From an application
>>>> perspective, we want file-backed and anonymous mappings to be able to
>>>> use DEVICE_PUBLIC pages with coherent CPU access. The goal is to
>>>> optimize performance for GPU heavy workloads while minimizing the need
>>>> to migrate data back-and-forth between system memory and device memory.
>>> I don't really understand that part.  file backed pages are always
>>> allocated by the file system using the pagecache helpers, that is
>>> using the page allocator.  Anonymouns memory also always comes from
>>> the page allocator.
>> I'm coming at this from my experience with DEVICE_PRIVATE. Both
>> anonymous and file-backed pages should be migrateable to DEVICE_PRIVATE
>> memory by the migrate_vma_* helpers for more efficient access by our
>> GPU. (*) It's part of the basic premise of HMM as I understand it. I
>> would expect the same thing to work for DEVICE_PUBLIC memory.
>>
>> (*) I believe migrating file-backed pages to DEVICE_PRIVATE doesn't
>> currently work, but that's something I'm hoping to fix at some point.
> FWIW, I'd love to see the architecture documents that define how
> filesystems are supposed to interact with this device private
> memory. This whole "hand filesystem controlled memory to other
> devices" is a minefield that is trivial to get wrong iand very
> difficult to fix - just look at the historical mess that RDMA
> to/from file backed and/or DAX pages has been.
>
> So, really, from my perspective as a filesystem engineer, I want to
> see an actual specification for how this new memory type is going to
> interact with filesystem and the page cache so everyone has some
> idea of how this is going to work and can point out how it doesn't
> work before code that simply doesn't work is pushed out into
> production systems and then merged....

OK. To be clear, that's not part of this patch series. And I have no 
authority to push anything in this part of the kernel, so you have 
nothing to fear. ;)

FWIW, we already have the ability to map file-backed system memory pages 
into device page tables with HMM and interval notifiers. But we cannot 
currently migrate them to ZONE_DEVICE pages. Beyond that, my 
understanding of how filesystems and page cache work is rather 
superficial at this point. I'll keep your name in mind for when I am 
ready to discuss this in more detail.

Cheers,
   Felix


>
> Cheers,
>
> Dave.


  reply	other threads:[~2021-09-01 23:07 UTC|newest]

Thread overview: 42+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2021-08-25  3:48 [PATCH v1 00/14] Add MEMORY_DEVICE_PUBLIC for CPU-accessible coherent device memory Alex Sierra
2021-08-25  3:48 ` [PATCH v1 01/14] ext4/xfs: add page refcount helper Alex Sierra
2021-08-25  7:35   ` Christoph Hellwig
2021-08-25 15:09   ` Theodore Ts'o
2021-08-25 15:49   ` Darrick J. Wong
2021-08-25  3:48 ` [PATCH v1 02/14] mm: remove extra ZONE_DEVICE struct page refcount Alex Sierra
2021-08-25  7:35   ` Christoph Hellwig
2021-08-25 11:15   ` Vlastimil Babka
2021-08-25 17:49     ` Ralph Campbell
2021-08-27 11:26       ` Vlastimil Babka
2021-08-25  3:48 ` [PATCH v1 03/14] mm: add iomem vma selection for memory migration Alex Sierra
2021-08-25  7:40   ` Christoph Hellwig
2021-08-25  7:46   ` Christoph Hellwig
2021-08-25 18:24     ` Sierra Guiza, Alejandro (Alex)
2021-08-26 22:27       ` Felix Kuehling
2021-08-30  8:28         ` Christoph Hellwig
2021-08-30 17:04           ` Felix Kuehling
2021-09-01  8:29             ` Christoph Hellwig
2021-09-01 15:40               ` Felix Kuehling
2021-09-01 22:03                 ` Dave Chinner
2021-09-01 23:07                   ` Felix Kuehling [this message]
2021-09-02  1:14                     ` Dave Chinner
2021-09-09  4:55                       ` Felix Kuehling
2021-09-02  8:18                 ` Christoph Hellwig
2021-09-02 18:07                   ` Dan Williams
2021-09-09  4:02                   ` Felix Kuehling
2021-08-25 14:24   ` Felix Kuehling
2021-08-25  3:48 ` [PATCH v1 04/14] mm: add zone device public type memory support Alex Sierra
2021-08-25  3:48 ` [PATCH v1 05/14] drm/amdkfd: ref count init for device pages Alex Sierra
2021-08-25 14:34   ` Felix Kuehling
2021-08-25  3:48 ` [PATCH v1 06/14] drm/amdkfd: add SPM support for SVM Alex Sierra
2021-08-25 14:45   ` Felix Kuehling
2021-08-25  3:48 ` [PATCH v1 07/14] drm/amdkfd: public type as sys mem on migration to ram Alex Sierra
2021-08-25  3:48 ` [PATCH v1 08/14] mm: add public type support to migrate_vma helpers Alex Sierra
2021-08-25  7:47   ` Christoph Hellwig
2021-08-25  3:48 ` [PATCH v1 09/14] mm: call pgmap->ops->page_free for DEVICE_PUBLIC pages Alex Sierra
2021-08-25  7:46   ` Christoph Hellwig
2021-08-25  3:48 ` [PATCH v1 10/14] lib: test_hmm add ioctl to get zone device type Alex Sierra
2021-08-25  3:48 ` [PATCH v1 11/14] lib: test_hmm add module param for " Alex Sierra
2021-08-25  3:48 ` [PATCH v1 12/14] lib: add support for device public type in test_hmm Alex Sierra
2021-08-25  3:48 ` [PATCH v1 13/14] tools: update hmm-test to support device public type Alex Sierra
2021-08-25  3:48 ` [PATCH v1 14/14] tools: update test_hmm script to support SP config Alex Sierra

Reply instructions:

You may reply publicly to this message via plain-text email
using any one of the following methods:

* Save the following mbox file, import it into your mail client,
  and reply-to-all from there: mbox

  Avoid top-posting and favor interleaved quoting:
  https://en.wikipedia.org/wiki/Posting_style#Interleaved_style

* Reply using the --to, --cc, and --in-reply-to
  switches of git-send-email(1):

  git send-email \
    --in-reply-to=e6a9e3cc-9dca-946a-c3fc-f86753fe8fd4@amd.com \
    --to=felix.kuehling@amd.com \
    --cc=akpm@linux-foundation.org \
    --cc=alex.sierra@amd.com \
    --cc=amd-gfx@lists.freedesktop.org \
    --cc=david@fromorbit.com \
    --cc=dri-devel@lists.freedesktop.org \
    --cc=hch@lst.de \
    --cc=jgg@nvidia.com \
    --cc=jglisse@redhat.com \
    --cc=linux-ext4@vger.kernel.org \
    --cc=linux-mm@kvack.org \
    --cc=linux-xfs@vger.kernel.org \
    --cc=rcampbell@nvidia.com \
    /path/to/YOUR_REPLY

  https://kernel.org/pub/software/scm/git/docs/git-send-email.html

* If your mail client supports setting the In-Reply-To header
  via mailto: links, try the mailto: link
Be sure your reply has a Subject: header at the top and a blank line before the message body.
This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).