linux-kernel.vger.kernel.org archive mirror
From: Jerome Glisse <jglisse@redhat.com>
To: David Nellans <dnellans@nvidia.com>
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
	linux-mm@kvack.org, John Hubbard <jhubbard@nvidia.com>,
	Evgeny Baskakov <ebaskakov@nvidia.com>,
	Mark Hairgrove <mhairgrove@nvidia.com>,
	Sherry Cheung <SCheung@nvidia.com>,
	Subhash Gutti <sgutti@nvidia.com>,
	Cameron Buschardt <cabuschardt@nvidia.com>,
	Zi Yan <zi.yan@cs.rutgers.edu>,
	Anshuman Khandual <khandual@linux.vnet.ibm.com>
Subject: Re: [HMM v15 13/16] mm/hmm/migrate: new memory migration helper for use with device memory v2
Date: Tue, 10 Jan 2017 11:58:36 -0500	[thread overview]
Message-ID: <20170110165835.GA3342@redhat.com> (raw)
In-Reply-To: <9642114e-3093-cff0-e177-1071b478f27f@nvidia.com>

On Tue, Jan 10, 2017 at 09:30:30AM -0600, David Nellans wrote:
> 
> > You are mischaracterizing patches 11-14. Patches 11-12 add new flags and
> > modify existing functions so that they can be shared. Patch 13 implements
> > the new migration helper while patch 14 optimizes this new migration helper.
> >
> > hmm_migrate() is different from existing migration code because it works
> > on the virtual address range of a process. Existing migration code works
> > from pages. The only difference with existing code is that we collect
> > pages from virtual addresses and we allow use of a DMA engine to perform
> > the copy.
> You're right, but why not just introduce a new general migration interface
> that works on a vma range first, then special-case the normal migration paths
> for HMM and then DMA?  Being able to migrate based on a vma range certainly
> makes user-level control of memory placement/migration less complicated
> than page interfaces.

Special casing for HMM and DMA is already what those patches do. They share
as much code as possible with the existing path. There is one thing to consider
here: because we are working on a vma range we can easily optimize the unmap
step. This is why I do not share any of the outer loop with existing code.

Sharing more code than this would be counter-productive from an optimization
point of view.

> 
> > There is nothing that ties hmm_migrate() to HMM. If that makes you feel better
> > I can drop the hmm_ prefix but I would need another name than migrate() as
> > it is already taken. I can probably name it vma_range_dma_migrate() or
> > something like that.
> >
> > The only thing that is HMM specific in this code is understanding HMM special
> > page table entries and handling those. Such entries can only be migrated by DMA
> > and not by memcpy, which is why I do not modify existing code to support them.
> I'd be happier if there was a vma_migrate proposed independently; I think
> it would find users outside the HMM sandbox. In the IBM migration case,
> they might want the vma interface but choose to use CPU-based migration
> rather than this DMA interface. It certainly would make testing of the
> vma_migrate interface easier.

Like I said, that code is not in the HMM sandbox; it sits behind its own kernel
option and does not rely on any HMM thing besides hmm_pfn_t, which is a pfn with
a bunch of flags. The only difference with existing code is that it does
understand HMM CPU ptes. It can easily be renamed without the hmm_ prefix if that
is what people want. The hmm_pfn_t is harder to replace as there isn't anything
that matches the requirements (we need a few flags: DEVICE, MIGRATE, EMPTY,
UNADDRESSABLE).

The DMA copy is done through a callback function that the caller of hmm_migrate()
provides, so you can easily provide a callback that just does memcpy (well,
copy_highpage()). There is no need to make any change. I can even provide a
default CPU copy callback.

Cheers,
Jérôme


Thread overview: 30+ messages
2017-01-06 16:46 [HMM v15 00/16] HMM (Heterogeneous Memory Management) v15 Jérôme Glisse
2017-01-06 16:46 ` [HMM v15 01/16] mm/free_hot_cold_page: catch ZONE_DEVICE pages Jérôme Glisse
2017-01-09  9:19   ` Balbir Singh
2017-01-09 16:21     ` Dave Hansen
2017-01-09 16:57       ` Jerome Glisse
2017-01-09 17:00         ` Dave Hansen
2017-01-09 17:58           ` Jerome Glisse
2017-01-06 16:46 ` [HMM v15 02/16] mm/memory/hotplug: convert device bool to int to allow for more flags v2 Jérôme Glisse
2017-01-06 16:46 ` [HMM v15 03/16] mm/ZONE_DEVICE/devmem_pages_remove: allow early removal of device memory Jérôme Glisse
2017-01-06 17:58   ` Dan Williams
2017-01-06 16:46 ` [HMM v15 04/16] mm/ZONE_DEVICE/free-page: callback when page is freed Jérôme Glisse
2017-01-06 16:46 ` [HMM v15 05/16] mm/ZONE_DEVICE/unaddressable: add support for un-addressable device memory v2 Jérôme Glisse
2017-01-06 16:46 ` [HMM v15 06/16] mm/ZONE_DEVICE/x86: add support for un-addressable device memory Jérôme Glisse
2017-01-06 16:46 ` [HMM v15 07/16] mm/hmm: heterogeneous memory management (HMM for short) Jérôme Glisse
2017-01-06 16:46 ` [HMM v15 08/16] mm/hmm/mirror: mirror process address space on device with HMM helpers Jérôme Glisse
2017-01-06 16:46 ` [HMM v15 09/16] mm/hmm/mirror: helper to snapshot CPU page table Jérôme Glisse
2017-01-06 16:46 ` [HMM v15 10/16] mm/hmm/mirror: device page fault handler Jérôme Glisse
2017-01-06 16:46 ` [HMM v15 11/16] mm/hmm/migrate: support un-addressable ZONE_DEVICE page in migration Jérôme Glisse
2017-01-06 16:46 ` [HMM v15 12/16] mm/hmm/migrate: add new boolean copy flag to migratepage() callback Jérôme Glisse
2017-01-06 16:46 ` [HMM v15 13/16] mm/hmm/migrate: new memory migration helper for use with device memory v2 Jérôme Glisse
2017-01-06 16:46   ` David Nellans
2017-01-06 17:13     ` Jerome Glisse
2017-01-10 15:30       ` David Nellans
2017-01-10 16:58         ` Jerome Glisse [this message]
2017-01-06 16:46 ` [HMM v15 14/16] mm/hmm/migrate: optimize page map once in vma being migrated Jérôme Glisse
2017-01-06 16:46 ` [HMM v15 15/16] mm/hmm/devmem: device driver helper to hotplug ZONE_DEVICE memory Jérôme Glisse
2017-01-06 16:46 ` [HMM v15 16/16] mm/hmm/devmem: dummy HMM device as an helper for " Jérôme Glisse
2017-01-06 20:54 ` [HMM v15 00/16] HMM (Heterogeneous Memory Management) v15 Dave Hansen
2017-01-06 21:16   ` Jerome Glisse
2017-01-06 21:36     ` Jerome Glisse
