From: Jerome Glisse <jglisse@redhat.com>
To: Anshuman Khandual <khandual@linux.vnet.ibm.com>
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org,
linux-mm@kvack.org, John Hubbard <jhubbard@nvidia.com>,
Jatin Kumar <jakumar@nvidia.com>,
Mark Hairgrove <mhairgrove@nvidia.com>,
Sherry Cheung <SCheung@nvidia.com>,
Subhash Gutti <sgutti@nvidia.com>
Subject: Re: [HMM v13 08/18] mm/hmm: heterogeneous memory management (HMM for short)
Date: Mon, 28 Nov 2016 04:41:30 -0500 (EST)
Message-ID: <249830138.374473.1480326090692.JavaMail.zimbra@redhat.com>
In-Reply-To: <583B9D64.7020005@linux.vnet.ibm.com>
> On 11/27/2016 06:40 PM, Jerome Glisse wrote:
> > On Wed, Nov 23, 2016 at 09:33:35AM +0530, Anshuman Khandual wrote:
> >> On 11/18/2016 11:48 PM, Jérôme Glisse wrote:
> >
> > [...]
> >
> >>> + *
> >>> + * hmm_vma_migrate(vma, start, end, ops);
> >>> + *
> >>> + * With the ops struct providing 2 callbacks: alloc_and_copy(), which
> >>> + * allocates the destination memory and initializes it from the source
> >>> + * memory. Migration can fail after this step, and thus the last
> >>> + * callback, finalize_and_map(), allows the device driver to know which
> >>> + * pages were successfully migrated and which were not.
> >>
> >> So we have page->pgmap->free_devpage() to release an individual page back
> >> into device driver management during migration, and we also have this
> >> ops-based finalize_and_map() to check on the failed instances inside a
> >> single migration context, which can contain a set of pages at a time.
> >>
> >>> + *
> >>> + * This can easily be used outside of HMM's intended use case.
> >>
> >> Where do you think this can be used outside of HMM?
> >
> > Well, on the radar is the new memory hierarchy that seems to be on every
> > CPU designer's roadmap, where you have a small, fast, HBM-like memory
> > packaged with the CPU, and then you have the regular memory.
> >
> > In the embedded world they want to migrate active processes to fast CPU
> > memory and shut down the regular memory to save power.
> >
> > In the HPC world they want to migrate the hot data of hot processes to
> > this fast memory.
> >
> > In both cases we are talking about process-based memory migration, and in
> > the embedded case they also have a DMA engine they can use to offload the
> > copy operation itself.
> >
> > These are the useful cases I have in mind, but other people might see this
> > code and realize they could also use it for their own specific corner
> > cases.
>
> If there are plans for HBM or a specialized type of memory which will be
> packaged inside the CPU (without any other device accessing it, as in the
> case of a GPU or network card), then I think using HMM is not ideal. The
> CPU will be the only thing accessing this memory, and there is never going
> to be any other device or context that can access it outside of the CPU.
> Hence the role of a device driver is redundant; it should be initialized
> and used as a basic platform component.
AFAIK no CPU can saturate the bandwidth of this memory, and thus it only
makes sense when there is something like a GPU on die. So in my mind this
kind of memory is always used preferentially by a GPU, but could still be
used by the CPU. In that context you also always have a DMA engine to
offload copies from the CPU. I was more selling the HMM migration code in
that context :)
> In that case what we need is core-VM-managed memory with certain kinds of
> restrictions around allocation, and a way of explicitly allocating into it
> if required. Representing this memory as a CPU-less, restrictive, coherent
> device memory node is a better solution IMHO. The RFCs I have posted
> regarding CDM representation are efforts in this direction.
>
> [RFC Specialized Zonelists] https://lkml.org/lkml/2016/10/24/19
> [RFC Restrictive mems_allowed] https://lkml.org/lkml/2016/11/22/339
>
> I believe both HMM and CDM have their own use cases and will complement
> each other.
Yes, how this memory is represented is probably better handled by something
like CDM.
Cheers,
Jérôme