* [to-be-updated] mm-enable-section-unaligned-devm_memremap_pages.patch removed from -mm tree
@ 2017-02-15 21:52 akpm
From: akpm @ 2017-02-15 21:52 UTC (permalink / raw)
To: dan.j.williams, logang, mhocko, stephen.bates, toshi.kani, mm-commits
The patch titled
     Subject: mm: enable section-unaligned devm_memremap_pages()
has been removed from the -mm tree.  Its filename was
     mm-enable-section-unaligned-devm_memremap_pages.patch

This patch was dropped because an updated version will be merged
------------------------------------------------------
From: Dan Williams <dan.j.williams@intel.com>
Subject: mm: enable section-unaligned devm_memremap_pages()
Teach devm_memremap_pages() about the new sub-section capabilities of
arch_{add,remove}_memory().
Link: http://lkml.kernel.org/r/148486366055.19694.17199008017867229383.stgit@dwillia2-desk3.amr.corp.intel.com
Signed-off-by: Dan Williams <dan.j.williams@intel.com>
Cc: Michal Hocko <mhocko@suse.com>
Cc: Toshi Kani <toshi.kani@hpe.com>
Cc: Logan Gunthorpe <logang@deltatee.com>
Cc: Stephen Bates <stephen.bates@microsemi.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
kernel/memremap.c | 22 +++++++---------------
1 file changed, 7 insertions(+), 15 deletions(-)
diff -puN kernel/memremap.c~mm-enable-section-unaligned-devm_memremap_pages kernel/memremap.c
--- a/kernel/memremap.c~mm-enable-section-unaligned-devm_memremap_pages
+++ a/kernel/memremap.c
@@ -256,7 +256,6 @@ static void devm_memremap_pages_release(
{
struct page_map *page_map = data;
struct resource *res = &page_map->res;
- resource_size_t align_start, align_size;
struct dev_pagemap *pgmap = &page_map->pgmap;
if (percpu_ref_tryget_live(pgmap->ref)) {
@@ -265,12 +264,10 @@ static void devm_memremap_pages_release(
}
/* pages are dead and unused, undo the arch mapping */
- align_start = res->start & PA_SECTION_MASK;
- align_size = ALIGN(resource_size(res), PA_SECTION_SIZE);
mem_hotplug_begin();
- arch_remove_memory(align_start, align_size);
+ arch_remove_memory(res->start, resource_size(res));
mem_hotplug_done();
- untrack_pfn(NULL, PHYS_PFN(align_start), align_size);
+ untrack_pfn(NULL, PHYS_PFN(res->start), resource_size(res));
pgmap_radix_release(res);
dev_WARN_ONCE(dev, pgmap->altmap && pgmap->altmap->alloc,
"%s: failed to free all reserved pages\n", __func__);
@@ -305,17 +302,13 @@ struct dev_pagemap *find_dev_pagemap(res
void *devm_memremap_pages(struct device *dev, struct resource *res,
struct percpu_ref *ref, struct vmem_altmap *altmap)
{
- resource_size_t align_start, align_size, align_end;
unsigned long pfn, pgoff, order;
pgprot_t pgprot = PAGE_KERNEL;
struct dev_pagemap *pgmap;
struct page_map *page_map;
int error, nid, is_ram;
- align_start = res->start & PA_SECTION_MASK;
- align_size = ALIGN(res->start + resource_size(res), PA_SECTION_SIZE)
- - align_start;
- is_ram = region_intersects(align_start, align_size,
+ is_ram = region_intersects(res->start, resource_size(res),
IORESOURCE_SYSTEM_RAM, IORES_DESC_NONE);
if (is_ram == REGION_MIXED) {
@@ -348,7 +341,6 @@ void *devm_memremap_pages(struct device
mutex_lock(&pgmap_lock);
error = 0;
- align_end = align_start + align_size - 1;
foreach_order_pgoff(res, order, pgoff) {
struct dev_pagemap *dup;
@@ -377,13 +369,13 @@ void *devm_memremap_pages(struct device
if (nid < 0)
nid = numa_mem_id();
- error = track_pfn_remap(NULL, &pgprot, PHYS_PFN(align_start), 0,
- align_size);
+ error = track_pfn_remap(NULL, &pgprot, PHYS_PFN(res->start), 0,
+ resource_size(res));
if (error)
goto err_pfn_remap;
mem_hotplug_begin();
- error = arch_add_memory(nid, align_start, align_size, true);
+ error = arch_add_memory(nid, res->start, resource_size(res), true);
mem_hotplug_done();
if (error)
goto err_add_memory;
@@ -404,7 +396,7 @@ void *devm_memremap_pages(struct device
return __va(res->start);
err_add_memory:
- untrack_pfn(NULL, PHYS_PFN(align_start), align_size);
+ untrack_pfn(NULL, PHYS_PFN(res->start), resource_size(res));
err_pfn_remap:
err_radix:
pgmap_radix_release(res);
_
Patches currently in -mm which might be from dan.j.williams@intel.com are
libnvdimm-pfn-dax-stop-padding-pmem-namespaces-to-section-alignment.patch
mm-fix-get_user_pages-vs-device-dax-pud-mappings.patch