From: Ira Weiny <ira.weiny@intel.com> To: Christoph Hellwig <hch@lst.de> Cc: linux-nvdimm@lists.01.org, linux-pci@vger.kernel.org, linux-kernel@vger.kernel.org, dri-devel@lists.freedesktop.org, linux-mm@kvack.org, "Jérôme Glisse" <jglisse@redhat.com>, "Jason Gunthorpe" <jgg@mellanox.com>, "Ben Skeggs" <bskeggs@redhat.com>, nouveau@lists.freedesktop.org Subject: Re: [PATCH 14/25] memremap: replace the altmap_valid field with a PGMAP_ALTMAP_VALID flag Date: Wed, 26 Jun 2019 14:13:00 -0700 [thread overview] Message-ID: <20190626211300.GF4605@iweiny-DESK2.sc.intel.com> (raw) In-Reply-To: <20190626122724.13313-15-hch@lst.de> On Wed, Jun 26, 2019 at 02:27:13PM +0200, Christoph Hellwig wrote: > Add a flags field to struct dev_pagemap to replace the altmap_valid > boolean to be a little more extensible. Also add a pgmap_altmap() helper > to find the optional altmap and clean up the code using the altmap using > it. > > Signed-off-by: Christoph Hellwig <hch@lst.de> Reviewed-by: Ira Weiny <ira.weiny@intel.com> > --- > arch/powerpc/mm/mem.c | 10 +--------- > arch/x86/mm/init_64.c | 8 ++------ > drivers/nvdimm/pfn_devs.c | 3 +-- > drivers/nvdimm/pmem.c | 1 - > include/linux/memremap.h | 12 +++++++++++- > kernel/memremap.c | 26 ++++++++++---------------- > mm/hmm.c | 1 - > mm/memory_hotplug.c | 6 ++---- > mm/page_alloc.c | 5 ++--- > 9 files changed, 29 insertions(+), 43 deletions(-) > > diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c > index cba29131bccc..f774d80df025 100644 > --- a/arch/powerpc/mm/mem.c > +++ b/arch/powerpc/mm/mem.c > @@ -131,17 +131,9 @@ void __ref arch_remove_memory(int nid, u64 start, u64 size, > { > unsigned long start_pfn = start >> PAGE_SHIFT; > unsigned long nr_pages = size >> PAGE_SHIFT; > - struct page *page; > + struct page *page = pfn_to_page(start_pfn) + vmem_altmap_offset(altmap); > int ret; > > - /* > - * If we have an altmap then we need to skip over any reserved PFNs > - * when querying the zone. 
> - */ > - page = pfn_to_page(start_pfn); > - if (altmap) > - page += vmem_altmap_offset(altmap); > - > __remove_pages(page_zone(page), start_pfn, nr_pages, altmap); > > /* Remove htab bolted mappings for this section of memory */ > diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c > index 693aaf28d5fe..3139e992ef9d 100644 > --- a/arch/x86/mm/init_64.c > +++ b/arch/x86/mm/init_64.c > @@ -1211,13 +1211,9 @@ void __ref arch_remove_memory(int nid, u64 start, u64 size, > { > unsigned long start_pfn = start >> PAGE_SHIFT; > unsigned long nr_pages = size >> PAGE_SHIFT; > - struct page *page = pfn_to_page(start_pfn); > - struct zone *zone; > + struct page *page = pfn_to_page(start_pfn) + vmem_altmap_offset(altmap); > + struct zone *zone = page_zone(page); > > - /* With altmap the first mapped page is offset from @start */ > - if (altmap) > - page += vmem_altmap_offset(altmap); > - zone = page_zone(page); > __remove_pages(zone, start_pfn, nr_pages, altmap); > kernel_physical_mapping_remove(start, start + size); > } > diff --git a/drivers/nvdimm/pfn_devs.c b/drivers/nvdimm/pfn_devs.c > index 0f81fc56bbfd..55fb6b7433ed 100644 > --- a/drivers/nvdimm/pfn_devs.c > +++ b/drivers/nvdimm/pfn_devs.c > @@ -622,7 +622,6 @@ static int __nvdimm_setup_pfn(struct nd_pfn *nd_pfn, struct dev_pagemap *pgmap) > if (offset < reserve) > return -EINVAL; > nd_pfn->npfns = le64_to_cpu(pfn_sb->npfns); > - pgmap->altmap_valid = false; > } else if (nd_pfn->mode == PFN_MODE_PMEM) { > nd_pfn->npfns = PFN_SECTION_ALIGN_UP((resource_size(res) > - offset) / PAGE_SIZE); > @@ -634,7 +633,7 @@ static int __nvdimm_setup_pfn(struct nd_pfn *nd_pfn, struct dev_pagemap *pgmap) > memcpy(altmap, &__altmap, sizeof(*altmap)); > altmap->free = PHYS_PFN(offset - reserve); > altmap->alloc = 0; > - pgmap->altmap_valid = true; > + pgmap->flags |= PGMAP_ALTMAP_VALID; > } else > return -ENXIO; > > diff --git a/drivers/nvdimm/pmem.c b/drivers/nvdimm/pmem.c > index 093408ce40ad..e7d8cc9f41e8 100644 > --- 
a/drivers/nvdimm/pmem.c > +++ b/drivers/nvdimm/pmem.c > @@ -412,7 +412,6 @@ static int pmem_attach_disk(struct device *dev, > bb_res.start += pmem->data_offset; > } else if (pmem_should_map_pages(dev)) { > memcpy(&pmem->pgmap.res, &nsio->res, sizeof(pmem->pgmap.res)); > - pmem->pgmap.altmap_valid = false; > pmem->pgmap.type = MEMORY_DEVICE_FS_DAX; > pmem->pgmap.ops = &fsdax_pagemap_ops; > addr = devm_memremap_pages(dev, &pmem->pgmap); > diff --git a/include/linux/memremap.h b/include/linux/memremap.h > index 336eca601dad..e25685b878e9 100644 > --- a/include/linux/memremap.h > +++ b/include/linux/memremap.h > @@ -88,6 +88,8 @@ struct dev_pagemap_ops { > vm_fault_t (*migrate_to_ram)(struct vm_fault *vmf); > }; > > +#define PGMAP_ALTMAP_VALID (1 << 0) > + > /** > * struct dev_pagemap - metadata for ZONE_DEVICE mappings > * @altmap: pre-allocated/reserved memory for vmemmap allocations > @@ -96,19 +98,27 @@ struct dev_pagemap_ops { > * @dev: host device of the mapping for debug > * @data: private data pointer for page_free() > * @type: memory type: see MEMORY_* in memory_hotplug.h > + * @flags: PGMAP_* flags to specify defailed behavior > * @ops: method table > */ > struct dev_pagemap { > struct vmem_altmap altmap; > - bool altmap_valid; > struct resource res; > struct percpu_ref *ref; > struct device *dev; > enum memory_type type; > + unsigned int flags; > u64 pci_p2pdma_bus_offset; > const struct dev_pagemap_ops *ops; > }; > > +static inline struct vmem_altmap *pgmap_altmap(struct dev_pagemap *pgmap) > +{ > + if (pgmap->flags & PGMAP_ALTMAP_VALID) > + return &pgmap->altmap; > + return NULL; > +} > + > #ifdef CONFIG_ZONE_DEVICE > void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap); > void devm_memunmap_pages(struct device *dev, struct dev_pagemap *pgmap); > diff --git a/kernel/memremap.c b/kernel/memremap.c > index 6c3dbb692037..eee490e7d7e1 100644 > --- a/kernel/memremap.c > +++ b/kernel/memremap.c > @@ -54,14 +54,8 @@ static void 
pgmap_array_delete(struct resource *res) > > static unsigned long pfn_first(struct dev_pagemap *pgmap) > { > - const struct resource *res = &pgmap->res; > - struct vmem_altmap *altmap = &pgmap->altmap; > - unsigned long pfn; > - > - pfn = res->start >> PAGE_SHIFT; > - if (pgmap->altmap_valid) > - pfn += vmem_altmap_offset(altmap); > - return pfn; > + return (pgmap->res.start >> PAGE_SHIFT) + > + vmem_altmap_offset(pgmap_altmap(pgmap)); > } > > static unsigned long pfn_end(struct dev_pagemap *pgmap) > @@ -109,7 +103,7 @@ static void devm_memremap_pages_release(void *data) > align_size >> PAGE_SHIFT, NULL); > } else { > arch_remove_memory(nid, align_start, align_size, > - pgmap->altmap_valid ? &pgmap->altmap : NULL); > + pgmap_altmap(pgmap)); > kasan_remove_zero_shadow(__va(align_start), align_size); > } > mem_hotplug_done(); > @@ -129,8 +123,8 @@ static void devm_memremap_pages_release(void *data) > * 1/ At a minimum the res, ref and type and ops members of @pgmap must be > * initialized by the caller before passing it to this function > * > - * 2/ The altmap field may optionally be initialized, in which case altmap_valid > - * must be set to true > + * 2/ The altmap field may optionally be initialized, in which case > + * PGMAP_ALTMAP_VALID must be set in pgmap->flags. > * > * 3/ pgmap->ref must be 'live' on entry and will be killed and reaped > * at devm_memremap_pages_release() time, or if this routine fails. > @@ -142,15 +136,13 @@ static void devm_memremap_pages_release(void *data) > void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap) > { > resource_size_t align_start, align_size, align_end; > - struct vmem_altmap *altmap = pgmap->altmap_valid ? 
> - &pgmap->altmap : NULL; > struct resource *res = &pgmap->res; > struct dev_pagemap *conflict_pgmap; > struct mhp_restrictions restrictions = { > /* > * We do not want any optional features only our own memmap > */ > - .altmap = altmap, > + .altmap = pgmap_altmap(pgmap), > }; > pgprot_t pgprot = PAGE_KERNEL; > int error, nid, is_ram; > @@ -274,7 +266,7 @@ void *devm_memremap_pages(struct device *dev, struct dev_pagemap *pgmap) > > zone = &NODE_DATA(nid)->node_zones[ZONE_DEVICE]; > move_pfn_range_to_zone(zone, align_start >> PAGE_SHIFT, > - align_size >> PAGE_SHIFT, altmap); > + align_size >> PAGE_SHIFT, pgmap_altmap(pgmap)); > } > > mem_hotplug_done(); > @@ -319,7 +311,9 @@ EXPORT_SYMBOL_GPL(devm_memunmap_pages); > unsigned long vmem_altmap_offset(struct vmem_altmap *altmap) > { > /* number of pfns from base where pfn_to_page() is valid */ > - return altmap->reserve + altmap->free; > + if (altmap) > + return altmap->reserve + altmap->free; > + return 0; > } > > void vmem_altmap_free(struct vmem_altmap *altmap, unsigned long nr_pfns) > diff --git a/mm/hmm.c b/mm/hmm.c > index 36e25cdbdac1..e4470462298f 100644 > --- a/mm/hmm.c > +++ b/mm/hmm.c > @@ -1442,7 +1442,6 @@ struct hmm_devmem *hmm_devmem_add(const struct hmm_devmem_ops *ops, > devmem->pagemap.type = MEMORY_DEVICE_PRIVATE; > devmem->pagemap.res = *devmem->resource; > devmem->pagemap.ops = &hmm_pagemap_ops; > - devmem->pagemap.altmap_valid = false; > devmem->pagemap.ref = &devmem->ref; > > result = devm_memremap_pages(devmem->device, &devmem->pagemap); > diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c > index e096c987d261..6166ba5a15f3 100644 > --- a/mm/memory_hotplug.c > +++ b/mm/memory_hotplug.c > @@ -557,10 +557,8 @@ void __remove_pages(struct zone *zone, unsigned long phys_start_pfn, > int sections_to_remove; > > /* In the ZONE_DEVICE case device driver owns the memory region */ > - if (is_dev_zone(zone)) { > - if (altmap) > - map_offset = vmem_altmap_offset(altmap); > - } > + if 
(is_dev_zone(zone)) > + map_offset = vmem_altmap_offset(altmap); > > clear_zone_contiguous(zone); > > diff --git a/mm/page_alloc.c b/mm/page_alloc.c > index d66bc8abe0af..17a39d40a556 100644 > --- a/mm/page_alloc.c > +++ b/mm/page_alloc.c > @@ -5853,6 +5853,7 @@ void __ref memmap_init_zone_device(struct zone *zone, > { > unsigned long pfn, end_pfn = start_pfn + size; > struct pglist_data *pgdat = zone->zone_pgdat; > + struct vmem_altmap *altmap = pgmap_altmap(pgmap); > unsigned long zone_idx = zone_idx(zone); > unsigned long start = jiffies; > int nid = pgdat->node_id; > @@ -5865,9 +5866,7 @@ void __ref memmap_init_zone_device(struct zone *zone, > * of the pages reserved for the memmap, so we can just jump to > * the end of that region and start processing the device pages. > */ > - if (pgmap->altmap_valid) { > - struct vmem_altmap *altmap = &pgmap->altmap; > - > + if (altmap) { > start_pfn = altmap->base_pfn + vmem_altmap_offset(altmap); > size = end_pfn - start_pfn; > } > -- > 2.20.1 > > _______________________________________________ > Linux-nvdimm mailing list > Linux-nvdimm@lists.01.org > https://lists.01.org/mailman/listinfo/linux-nvdimm _______________________________________________ Linux-nvdimm mailing list Linux-nvdimm@lists.01.org https://lists.01.org/mailman/listinfo/linux-nvdimm