From: Ross Zwisler <ross.zwisler@linux.intel.com>
To: "Jérôme Glisse" <jglisse@redhat.com>
Cc: akpm@linux-foundation.org, linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	Dan Williams <dan.j.williams@intel.com>,
	"Kirill A . Shutemov" <kirill.shutemov@linux.intel.com>,
	John Hubbard <jhubbard@nvidia.com>,
	Ross Zwisler <ross.zwisler@linux.intel.com>
Subject: Re: [HMM 07/15] mm/ZONE_DEVICE: new type of ZONE_DEVICE for unaddressable memory v3
Date: Tue, 30 May 2017 10:43:55 -0600
Message-ID: <20170530164355.GA25891@linux.intel.com> (raw)
In-Reply-To: <20170524172024.30810-8-jglisse@redhat.com>

On Wed, May 24, 2017 at 01:20:16PM -0400, Jérôme Glisse wrote:
> HMM (heterogeneous memory management) needs struct pages to support migration
> from system main memory to device memory. The reasons for HMM and for
> migration to device memory are explained in the HMM core patch.
>
> This patch deals with device memory that is un-addressable (i.e. the CPU
> cannot access it). Hence we do not want those struct pages to be managed
> like regular memory. That is why we extend ZONE_DEVICE to support different
> types of memory.
>
> A persistent memory type is defined for existing users of ZONE_DEVICE, and a
> new device un-addressable type is added for un-addressable memory. There is
> a clear separation between what is expected from each memory type; existing
> users of ZONE_DEVICE are unaffected by the new requirements and the new use
> of the un-addressable type. All type-specific code paths are protected by a
> test against the memory type.
>
> Because the memory is un-addressable, we use a new special swap type for when
> a page is migrated to device memory (this reduces the maximum number of
> swap files).
>
> The two main additions to ZONE_DEVICE, besides the memory type, are two
> callbacks. The first one, page_free(), is called whenever the page refcount
> reaches 1 (which means the page is free, as a ZONE_DEVICE page never reaches
> a refcount of 0). This allows the device driver to manage its memory and the
> associated struct pages.
>
> The second callback, page_fault(), happens when there is a CPU access to an
> address that is backed by a device page (which is un-addressable by the
> CPU). This callback is responsible for migrating the page back to system
> main memory. The device driver cannot block migration back to system memory;
> HMM makes sure that such pages cannot be pinned in device memory.
>
> If the device is in some error condition and cannot migrate memory back,
> then a CPU page fault to device memory should end with SIGBUS.
>
> Changed since v2:
>   - s/DEVICE_UNADDRESSABLE/DEVICE_PRIVATE
> Changed since v1:
>   - rename to device private memory (from device unaddressable)
>
> Signed-off-by: Jérôme Glisse <jglisse@redhat.com>
> Acked-by: Dan Williams <dan.j.williams@intel.com>
> Cc: Ross Zwisler <ross.zwisler@linux.intel.com>
> ---

<>

> @@ -35,18 +37,88 @@ static inline struct vmem_altmap *to_vmem_altmap(unsigned long memmap_start)
>  }
>  #endif
>
> +/*
> + * Specialize ZONE_DEVICE memory into multiple types, each having a
> + * different usage.
> + *
> + * MEMORY_DEVICE_PUBLIC:
> + * Persistent device memory (pmem): struct page might be allocated in different
> + * memory and architecture might want to perform special actions. It is similar
> + * to regular memory, in that the CPU can access it transparently. However,
> + * it is likely to have different bandwidth and latency than regular memory.
> + * See Documentation/nvdimm/nvdimm.txt for more information.
> + *
> + * MEMORY_DEVICE_PRIVATE:
> + * Device memory that is not directly addressable by the CPU: CPU can neither
> + * read nor write _UNADDRESSABLE memory. In this case, we do still have struct
                     _PRIVATE

Just noticed that one holdover from the DEVICE_UNADDRESSABLE naming.

> + * pages backing the device memory. Doing so simplifies the implementation, but
> + * it is important to remember that there are certain points at which the struct
> + * page must be treated as an opaque object, rather than a "normal" struct page.
> + * A more complete discussion of unaddressable memory may be found in
> + * include/linux/hmm.h and Documentation/vm/hmm.txt.
> + */
> +enum memory_type {
> +	MEMORY_DEVICE_PUBLIC = 0,
> +	MEMORY_DEVICE_PRIVATE,
> +};