From: Felix Kuehling <felix.kuehling@amd.com>
To: David Hildenbrand <david@redhat.com>,
	Alex Sierra <alex.sierra@amd.com>,
	akpm@linux-foundation.org, linux-mm@kvack.org,
	rcampbell@nvidia.com, linux-ext4@vger.kernel.org,
	linux-xfs@vger.kernel.org
Cc: amd-gfx@lists.freedesktop.org, dri-devel@lists.freedesktop.org,
	hch@lst.de, jgg@nvidia.com, jglisse@redhat.com,
	apopple@nvidia.com, willy@infradead.org
Subject: Re: [PATCH v3 00/10] Add MEMORY_DEVICE_COHERENT for coherent device memory mapping
Date: Wed, 12 Jan 2022 11:08:53 -0500
Message-ID: <f0d6b6d6-806e-4c6f-cbb7-677ef32dfcad@amd.com>
In-Reply-To: <8c4df8e4-ef99-c3fd-dcca-759e92739d4c@redhat.com>

On 2022-01-12 at 6:16 a.m., David Hildenbrand wrote:
> On 10.01.22 23:31, Alex Sierra wrote:
>> This patch series introduces MEMORY_DEVICE_COHERENT, a type of memory
>> owned by a device that can be mapped into CPU page tables like
>> MEMORY_DEVICE_GENERIC and can also be migrated like
>> MEMORY_DEVICE_PRIVATE.
>>
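For illustration, a minimal sketch of how the new type could slot into
the zone-device memory types in include/linux/memremap.h (ordering and
comments here are illustrative, not the literal hunk from patch 01):

    /* include/linux/memremap.h (sketch) */
    enum memory_type {
            /* 0 is reserved to catch uninitialized types */
            MEMORY_DEVICE_PRIVATE = 1,      /* not CPU-addressable */
            MEMORY_DEVICE_COHERENT,         /* new: CPU-mappable and migratable */
            MEMORY_DEVICE_FS_DAX,
            MEMORY_DEVICE_GENERIC,
            MEMORY_DEVICE_PCI_P2PDMA,
    };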
>> Christoph, the suggestion to incorporate Ralph Campbell’s refcount
>> cleanup patch into our hardware page migration patchset originally came
>> from you, but it proved impractical to do things in that order because
>> the refcount cleanup introduced a bug with wide-ranging structural
>> implications. Instead, we amended Ralph’s patch so that it could be
>> applied after merging the migration work. As we saw from the recent
>> discussion, merging the refcount work is going to take some time and
>> cooperation between multiple development groups, while the migration
>> work is ready now and is needed now. So we propose to merge this
>> patchset first and continue to work with Ralph and others to merge the
>> refcount cleanup separately, when it is ready.
>>
>> This patch series is mostly self-contained except for a few places where
>> it needs to update other subsystems to handle the new memory type.
>> System stability and performance are not affected according to our
>> ongoing testing, including xfstests.
>>
>> How it works: The system BIOS advertises the GPU device memory
>> (aka VRAM) as SPM (special purpose memory) in the UEFI system address
>> map.
>>
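For reference, such ranges carry the EFI_MEMORY_SP attribute in the EFI
memory map. A hedged sketch of how firmware-advertised SPM could be
recognized (the iteration helper for_each_efi_memory_desc() and the
EFI_MEMORY_SP attribute are real kernel interfaces; the logging function
itself is made up for illustration):

    #include <linux/efi.h>

    /* Walk the EFI memory map and log special-purpose memory ranges. */
    static void log_spm_ranges(void)
    {
            efi_memory_desc_t *md;

            for_each_efi_memory_desc(md) {
                    if (md->attribute & EFI_MEMORY_SP)
                            pr_info("SPM: 0x%llx-0x%llx\n", md->phys_addr,
                                    md->phys_addr +
                                    (md->num_pages << EFI_PAGE_SHIFT) - 1);
            }
    }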
>> The amdgpu driver registers the memory with devmap as
>> MEMORY_DEVICE_COHERENT using devm_memremap_pages. The initial user for
>> this hardware page migration capability is the Frontier supercomputer
>> project. This functionality is not AMD-specific. We expect other GPU
>> vendors to find this functionality useful, and possibly other hardware
>> types in the future.
>>
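A hedged driver-side sketch of that registration step (the function name,
pagemap ops, and owner cookie are placeholders rather than the actual
amdgpu code; only devm_memremap_pages() and struct dev_pagemap are real
kernel interfaces):

    #include <linux/memremap.h>
    #include <linux/err.h>

    /* Register a BIOS-advertised SPM range as device-coherent memory. */
    static int register_coherent_vram(struct device *dev, struct resource *res,
                                      struct dev_pagemap *pgmap)
    {
            void *addr;

            pgmap->type = MEMORY_DEVICE_COHERENT;
            pgmap->range.start = res->start;
            pgmap->range.end = res->end;
            pgmap->nr_range = 1;
            pgmap->ops = &my_pgmap_ops;     /* placeholder page_free/migrate ops */
            pgmap->owner = dev;             /* placeholder owner cookie */

            /* Creates struct pages for the VRAM; teardown is devm-managed. */
            addr = devm_memremap_pages(dev, pgmap);
            return PTR_ERR_OR_ZERO(addr);
    }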
>> Our test nodes in the lab are similar to the Frontier configuration,
>> with 0.5 TB of system memory plus 256 GB of device memory split across
>> 4 GPUs, all in a single coherent address space. Page migration is
>> expected to improve application efficiency significantly. We will
>> report empirical results as they become available.
> Hi,
>
> might be a dumb question because I'm not too familiar with
> MEMORY_DEVICE_COHERENT, but who's in charge of migrating *to* that
> memory? Or how does a process ever get hold of such pages?

Device memory management and migration to device memory work the same
way as with MEMORY_DEVICE_PRIVATE. The device driver is in charge of managing the
memory and migrating data to it in response to application requests
(e.g. hipMemPrefetchAsync) or device page faults.
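From user space this is just an explicit prefetch. A hedged sketch
(buffer size, device id, and error handling are made up; only
hipMallocManaged() and hipMemPrefetchAsync() are the real HIP entry
points):

    #include <hip/hip_runtime.h>

    /* Allocate migratable memory and ask the driver to move it to GPU 0. */
    static float *alloc_on_gpu(size_t bytes, hipStream_t stream)
    {
            float *buf = NULL;

            hipMallocManaged((void **)&buf, bytes, hipMemAttachGlobal);
            /* Triggers the driver-side migration described above. */
            hipMemPrefetchAsync(buf, bytes, 0 /* device id */, stream);
            hipStreamSynchronize(stream);
            /* The CPU (or a NIC) can still access buf coherently from here. */
            return buf;
    }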

The nice thing about MEMORY_DEVICE_COHERENT is that the CPU, or a
third-party device (e.g. a NIC), can access the memory without migrations
disrupting the execution of high-performance application code on the GPU.


>
> And where does migration come into play? I assume migration is only
> required to migrate off of that device memory to ordinary system RAM
> when required because the device memory has to be freed up, correct?

That's one case. For example, memory pressure can force the GPU driver to
evict some device-coherent memory back to system memory. Applications can
also request migration to system memory explicitly (again with something
like hipMemPrefetchAsync), as in the sketch below.
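Continuing the hedged user-space sketch from above (buf, bytes, and
stream reuse the illustrative names introduced there), the explicit
migration back to system memory is one call, with hipCpuDeviceId naming
the CPU as the prefetch target:

    /* Migrate the buffer back to ordinary system RAM. */
    hipMemPrefetchAsync(buf, bytes, hipCpuDeviceId, stream);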

Regards,
  Felix


>
> (a high level description on how this is exploited from users space
> would be great)
>



Thread overview: 24+ messages
2022-01-10 22:31 [PATCH v3 00/10] Add MEMORY_DEVICE_COHERENT for coherent device memory mapping Alex Sierra
2022-01-10 22:31 ` [PATCH v3 01/10] mm: add zone device coherent type memory support Alex Sierra
2022-01-20  4:08   ` Alistair Popple
2022-01-10 22:31 ` [PATCH v3 02/10] mm: add device coherent vma selection for memory migration Alex Sierra
2022-01-10 22:31 ` [PATCH v3 03/10] mm/gup: fail get_user_pages for LONGTERM dev coherent type Alex Sierra
2022-01-20 12:36   ` Joao Martins
2022-01-20 13:18     ` Alistair Popple
2022-01-10 22:31 ` [PATCH v3 04/10] drm/amdkfd: add SPM support for SVM Alex Sierra
2022-01-10 22:31 ` [PATCH v3 05/10] drm/amdkfd: coherent type as sys mem on migration to ram Alex Sierra
2022-01-10 22:31 ` [PATCH v3 06/10] lib: test_hmm add ioctl to get zone device type Alex Sierra
2022-01-20  5:01   ` Alistair Popple
2022-01-10 22:31 ` [PATCH v3 07/10] lib: test_hmm add module param for " Alex Sierra
2022-01-20  5:23   ` Alistair Popple
2022-01-10 22:31 ` [PATCH v3 08/10] lib: add support for device coherent type in test_hmm Alex Sierra
2022-01-20  6:00   ` Alistair Popple
2022-01-10 22:32 ` [PATCH v3 09/10] tools: update hmm-test to support device coherent type Alex Sierra
2022-01-20  6:14   ` Alistair Popple
2022-01-27  3:22     ` Sierra Guiza, Alejandro (Alex)
2022-01-10 22:32 ` [PATCH v3 10/10] tools: update test_hmm script to support SP config Alex Sierra
2022-01-20  6:17   ` Alistair Popple
2022-01-12 11:06 ` [PATCH v3 00/10] Add MEMORY_DEVICE_COHERENT for coherent device memory mapping Alistair Popple
2022-01-20  6:33   ` Alistair Popple
2022-01-12 11:16 ` David Hildenbrand
2022-01-12 16:08   ` Felix Kuehling [this message]
