From: Felix Kuehling <>
To: Jason Gunthorpe <>,
	Felix Kuehling <>
Subject: Re: [RFC PATCH 0/5] Support DEVICE_GENERIC memory in migrate_vma_*
Date: Fri, 28 May 2021 11:56:36 -0400	[thread overview]
Message-ID: <> (raw)
In-Reply-To: <>

On 2021-05-28 at 9:08 a.m., Jason Gunthorpe wrote:
> On Thu, May 27, 2021 at 07:08:04PM -0400, Felix Kuehling wrote:
>> Now we're trying to migrate data to and from that memory using the
>> migrate_vma_* helpers so we can support page-based migration in our
>> unified memory allocations, while also supporting CPU access to those
>> pages.
> So you have completely coherent and indistinguishable GPU and CPU
> memory, and the need for migration is basically a lot like a NUMA
> policy choice - get better access locality?

Yes. For a typical GPU compute application it means the GPU gets the
best bandwidth/latency, and the CPU can coherently access the results
without page faults and migrations. That's especially valuable for
applications with persistent compute kernels that want to exploit
concurrency between CPU and GPU.
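
For context, the driver-side flow the series builds on looks roughly
like the sketch below (kernel C, not standalone; the migrate_vma_*
API is as of ~v5.13, while devmem_alloc_page() and
devmem_copy_to_vram() are hypothetical driver helpers):

/*
 * Hedged sketch only: migrate a page range of a VMA into device
 * memory with the migrate_vma_* helpers. Error handling and the
 * actual DMA copy are elided.
 */
static int migrate_range_to_vram(struct vm_area_struct *vma,
				 unsigned long start, unsigned long end,
				 void *pgmap_owner)
{
	unsigned long npages = (end - start) >> PAGE_SHIFT;
	unsigned long *src, *dst;
	struct migrate_vma mig = {};
	unsigned long i;
	int ret;

	src = kvcalloc(npages, sizeof(*src), GFP_KERNEL);
	dst = kvcalloc(npages, sizeof(*dst), GFP_KERNEL);
	if (!src || !dst) {
		ret = -ENOMEM;
		goto out;
	}

	mig.vma = vma;
	mig.start = start;
	mig.end = end;
	mig.src = src;
	mig.dst = dst;
	mig.pgmap_owner = pgmap_owner;
	mig.flags = MIGRATE_VMA_SELECT_SYSTEM;

	ret = migrate_vma_setup(&mig);	/* collect and isolate src pages */
	if (ret)
		goto out;

	for (i = 0; i < npages; i++) {
		struct page *dpage;

		if (!(src[i] & MIGRATE_PFN_MIGRATE))
			continue;	/* page could not be isolated */
		dpage = devmem_alloc_page();	/* hypothetical: VRAM page */
		devmem_copy_to_vram(dpage, migrate_pfn_to_page(src[i]));
		dst[i] = migrate_pfn(page_to_pfn(dpage));
	}

	migrate_vma_pages(&mig);	/* install the new PTEs */
	migrate_vma_finalize(&mig);	/* unlock and release pages */
out:
	kvfree(src);
	kvfree(dst);
	return ret;
}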

>> This patch series makes a few changes to make MEMORY_DEVICE_GENERIC pages
>> behave correctly in the migrate_vma_* helpers. We are looking for feedback
>> about this approach. If we're close, what's needed to make our patches
>> acceptable upstream? If we're not close, any suggestions how else to
>> achieve what we are trying to do (i.e. page migration and coherent CPU
>> access to VRAM)?
> I'm not an expert in migrate, but it doesn't look outrageous.
> Have you thought about allowing MEMORY_DEVICE_GENERIC to work with
> hmm_range_fault() so you can have nice uniform RDMA?

Yes. That's our plan for RDMA to unified memory on this system. My
understanding was that DEVICE_GENERIC pages should already work with
hmm_range_fault. But maybe I'm missing something.
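
The pattern I have in mind is the standard mirroring loop from
Documentation/vm/hmm.rst, roughly as sketched below (kernel C, not
standalone; API as of ~v5.13, with DMA mapping of the resulting PFNs
and the driver's own locking abbreviated to comments):

/*
 * Hedged sketch only: fault in and mirror a user address range with
 * hmm_range_fault() for device access such as RDMA.
 */
static int mirror_range(struct mm_struct *mm,
			struct mmu_interval_notifier *ni,
			unsigned long start, unsigned long end,
			unsigned long *pfns)
{
	struct hmm_range range = {
		.notifier = ni,
		.start = start,
		.end = end,
		.hmm_pfns = pfns,
		.default_flags = HMM_PFN_REQ_FAULT | HMM_PFN_REQ_WRITE,
	};
	int ret;

again:
	range.notifier_seq = mmu_interval_read_begin(ni);

	mmap_read_lock(mm);
	ret = hmm_range_fault(&range);
	mmap_read_unlock(mm);
	if (ret == -EBUSY)
		goto again;	/* raced with an invalidation, retry */
	if (ret)
		return ret;

	/* Under the driver lock: recheck the sequence before use. */
	if (mmu_interval_read_retry(ni, range.notifier_seq))
		goto again;

	/* pfns[] now describes the range; DMA-map and program device. */
	return 0;
}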

> People have wanted to do that with MEMORY_DEVICE_PRIVATE but nobody
> finished the work

Yeah, for DEVICE_PRIVATE it seems more tricky because the peer device is
not the owner of the pages and would need help from the actual owner to
get proper DMA addresses.


> Jason

Thread overview: 15+ messages
2021-05-27 23:08 Felix Kuehling
2021-05-27 23:08 ` [RFC PATCH 1/5] drm/amdkfd: add SPM support for SVM Felix Kuehling
2021-05-29  6:38   ` Christoph Hellwig
2021-05-29 18:42     ` Felix Kuehling
2021-05-27 23:08 ` [RFC PATCH 2/5] drm/amdkfd: generic type as sys mem on migration to ram Felix Kuehling
2021-05-27 23:08 ` [RFC PATCH 3/5] include/linux/mm.h: helper to check zone device generic type Felix Kuehling
2021-05-27 23:08 ` [RFC PATCH 4/5] mm: add generic type support for device zone page migration Felix Kuehling
2021-05-29  6:40   ` Christoph Hellwig
2021-05-27 23:08 ` [RFC PATCH 5/5] mm: changes to unref pages with Generic type Felix Kuehling
2021-05-29  6:42   ` Christoph Hellwig
2021-05-29 18:44     ` Felix Kuehling
2021-05-28 13:08 ` [RFC PATCH 0/5] Support DEVICE_GENERIC memory in migrate_vma_* Jason Gunthorpe
2021-05-28 15:56   ` Felix Kuehling [this message]
2021-05-29  6:41     ` Christoph Hellwig
2021-05-29 18:37       ` Felix Kuehling
