linux-mm.kvack.org archive mirror
* [RFC PATCH v2 0/4] Allocate memmap from hotadded memory (per device)
@ 2020-11-25 11:20 Oscar Salvador
  2020-11-25 11:20 ` [RFC PATCH v2 1/4] mm,memory_hotplug: Introduce MHP_MEMMAP_ON_MEMORY Oscar Salvador
                   ` (3 more replies)
  0 siblings, 4 replies; 11+ messages in thread
From: Oscar Salvador @ 2020-11-25 11:20 UTC (permalink / raw)
  To: david
  Cc: mhocko, linux-kernel, linux-mm, vbabka, pasha.tatashin, Oscar Salvador

This is v2 of [1]:

Changes from v1 -> v2:
 - Addressed feedback provided by David
 - Add an arch_support_memmap_on_memory() to be called
   from mhp_supports_memmap_on_memory(), as at the moment
   only arm64, powerpc and x86_64 have altmap support.


Original cover letter:

----
The primary goal of this patchset is to reduce memory overhead of the
hot-added memory (at least for SPARSEMEM_VMEMMAP memory model).
The current way we populate the memmap (struct page array) has three main drawbacks:

a) it consumes additional memory until the hotadded memory itself is
   onlined,
b) the memmap might end up on a different NUMA node, which is especially
   true for the movable_node configuration, and
c) due to fragmentation, we might end up populating the memmap with base
   pages.

One way to mitigate all these issues is to simply allocate the memmap array
(which is the largest memory footprint of physical memory hotplug)
from the hot-added memory itself. The SPARSEMEM_VMEMMAP memory model allows
us to map any pfn range, so the memory doesn't need to be online to be
usable for the array. See patch 3 for more details.
This feature is only usable when CONFIG_SPARSEMEM_VMEMMAP is set.
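
To get a feel for the numbers, a back-of-the-envelope calculation
(assuming 4K base pages and a 64 byte struct page, as on x86_64):

  128 MiB section = 32768 base pages
  memmap size     = 32768 * 64 B  = 2 MiB
  vmemmap pages   = 2 MiB / 4 KiB = 512 pages

so the self-hosted memmap occupies the first 512 pfns of each
hot-added 128 MiB range, and everything above is handed to the buddy.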

[Overall design]:

Implementation-wise, we reuse the vmem_altmap infrastructure to override
the default allocator used by vmemmap_populate. Once the memmap is
allocated, we need a way to mark the altmap pfns used for the allocation.
If the MHP_MEMMAP_ON_MEMORY flag was passed, we set up the layout of the
altmap structure in add_memory_resource(), and then we call
mark_vmemmap_pages() to mark the vmemmap pages.

memory_block gained a new field called nr_vmemmap_pages.
This works out well for two reasons:

 1) {offline/online}_pages know the difference between start_pfn and
    valid_start_pfn, which is start_pfn + nr_vmemmap_pages.
    This way, all isolation/migration/initialization operations are
    done on the right range of memory, without the vmemmap pages getting
    involved. This allows for much cleaner handling.

 2) In try_remove_memory, we construct a new vmem_altmap struct with the
    right info, so we end up calling vmem_altmap_free instead of
    free_pagetable when removing the memory.
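
For reference, a simplified sketch of how the existing vmem_altmap
allocator carves the memmap out of the hot-added range itself (the
real helper in mm/sparse-vmemmap.c also accounts for an 'align'
field, omitted here for brevity):

	/* next free pfn inside the hot-added range */
	static unsigned long vmem_altmap_next_pfn(struct vmem_altmap *altmap)
	{
		return altmap->base_pfn + altmap->reserve + altmap->alloc;
	}

'alloc' grows as vmemmap_populate consumes pages from the range, and
that count is what ends up stored per memory_block as nr_vmemmap_pages.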

[1] https://patchwork.kernel.org/project/linux-mm/cover/20201022125835.26396-1-osalvador@suse.de/

Oscar Salvador (4):
  mm,memory_hotplug: Introduce MHP_MEMMAP_ON_MEMORY
  mm,memory_hotplug: Allocate memmap from the added memory range
  mm,memory_hotplug: Add mhp_supports_memmap_on_memory
  mm,memory_hotplug: Enable MHP_MEMMAP_ON_MEMORY when supported

 arch/arm64/mm/mmu.c                       |   5 +
 arch/powerpc/mm/mem.c                     |   5 +
 arch/powerpc/platforms/powernv/memtrace.c |   2 +-
 arch/x86/mm/init_64.c                     |   5 +
 drivers/acpi/acpi_memhotplug.c            |   5 +-
 drivers/base/memory.c                     |  21 ++--
 include/linux/memory.h                    |   7 +-
 include/linux/memory_hotplug.h            |  21 +++-
 include/linux/memremap.h                  |   2 +-
 mm/memory_hotplug.c                       | 119 ++++++++++++++++++----
 mm/page_alloc.c                           |   4 +-
 11 files changed, 158 insertions(+), 38 deletions(-)

-- 
2.26.2




* [RFC PATCH v2 1/4] mm,memory_hotplug: Introduce MHP_MEMMAP_ON_MEMORY
  2020-11-25 11:20 [RFC PATCH v2 0/4] Allocate memmap from hotadded memory (per device) Oscar Salvador
@ 2020-11-25 11:20 ` Oscar Salvador
  2020-11-27 14:59   ` Michal Hocko
  2020-11-25 11:20 ` [RFC PATCH v2 2/4] mm,memory_hotplug: Allocate memmap from the added memory range Oscar Salvador
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 11+ messages in thread
From: Oscar Salvador @ 2020-11-25 11:20 UTC (permalink / raw)
  To: david
  Cc: mhocko, linux-kernel, linux-mm, vbabka, pasha.tatashin, Oscar Salvador

MHP_MEMMAP_ON_MEMORY tells the system that we want the memmap
pagetables to be built from the hot-added range.
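
For context, a caller is expected to use the flag roughly like this
(patch 4 ends up doing exactly this from the ACPI driver):

	if (mhp_supports_memmap_on_memory(size))
		mhp_flags |= MHP_MEMMAP_ON_MEMORY;
	ret = __add_memory(nid, start, size, mhp_flags);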

Signed-off-by: Oscar Salvador <osalvador@suse.de>
---
 include/linux/memory_hotplug.h | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 551093b74596..5b1ea1f8d1ab 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -70,6 +70,14 @@ typedef int __bitwise mhp_t;
  */
 #define MEMHP_MERGE_RESOURCE	((__force mhp_t)BIT(0))
 
+/*
+ * We want memmap (struct page array) to be self contained.
+ * To do so, we will use the beginning of the hot-added range to build
+ * the page tables for the memmap array that describes the entire range.
+ * Only selected architectures support it with SPARSE_VMEMMAP.
+ */
+#define MHP_MEMMAP_ON_MEMORY	((__force mhp_t)BIT(1))
+
 /*
  * Extended parameters for memory hotplug:
  * altmap: alternative allocator for memmap array (optional)
-- 
2.26.2




* [RFC PATCH v2 2/4] mm,memory_hotplug: Allocate memmap from the added memory range
  2020-11-25 11:20 [RFC PATCH v2 0/4] Allocate memmap from hotadded memory (per device) Oscar Salvador
  2020-11-25 11:20 ` [RFC PATCH v2 1/4] mm,memory_hotplug: Introduce MHP_MEMMAP_ON_MEMORY Oscar Salvador
@ 2020-11-25 11:20 ` Oscar Salvador
  2020-11-27 15:15   ` Michal Hocko
  2020-11-25 11:20 ` [RFC PATCH v2 3/4] mm,memory_hotplug: Add mhp_supports_memmap_on_memory Oscar Salvador
  2020-11-25 11:20 ` [RFC PATCH v2 4/4] mm,memory_hotplug: Enable MHP_MEMMAP_ON_MEMORY when supported Oscar Salvador
  3 siblings, 1 reply; 11+ messages in thread
From: Oscar Salvador @ 2020-11-25 11:20 UTC (permalink / raw)
  To: david
  Cc: mhocko, linux-kernel, linux-mm, vbabka, pasha.tatashin, Oscar Salvador

Physical memory hotadd has to allocate a memmap (struct page array) for
the newly added memory section. Currently, alloc_pages_node() is used
for those allocations.

This has some disadvantages:
 a) existing memory is consumed for that purpose
    (eg: ~2MB per 128MB memory section on x86_64)
 b) if the whole node is movable then we have off-node struct pages
    which have performance drawbacks.
 c) it might be that there are no PMD_ALIGNED chunks, so the memmap
    array gets populated with base pages.

This can be improved when CONFIG_SPARSEMEM_VMEMMAP is enabled.

Vmemmap page tables can map arbitrary memory.
That means that we can simply use the beginning of each memory section and
map struct pages there.
struct pages which back the allocated space then just need to be treated
carefully.

Implementation-wise, we will reuse the vmem_altmap infrastructure to override
the default allocator used by __populate_section_memmap. Once the memmap is
allocated, we are going to need a way to mark the altmap pfns used for the
allocation.
If the MHP_MEMMAP_ON_MEMORY flag was passed, we will set up the layout of the
altmap structure in add_memory_resource(), and then we will call
mhp_mark_vmemmap_pages() to properly mark those pages.

Online/Offline:

 In the memory_block structure, a new field is created in order to
 store the number of vmemmap_pages.
 Having that around simplifies things a lot, since in {online/offline}_pages
 we know how far to skip forward until the first non-vmemmap page, for
 operations like isolation/migration/initialization.
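
 Schematically, both paths split the block once up front:

	buddy_start_pfn = start_pfn + nr_vmemmap_pages;
	buddy_nr_pages  = nr_pages - nr_vmemmap_pages;

 and then operate on [buddy_start_pfn, start_pfn + nr_pages), leaving
 the vmemmap pages at the front of the block untouched.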

Hot-remove:

 If the range was using memmap on memory (aka vmemmap pages),
 we construct an altmap structure so free_hugepage_table does
 the right thing and calls vmem_altmap_free instead of
 free_pagetable.
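
 Schematically (see try_remove_memory() below, where nr_vmemmap_pages
 is collected from the memory blocks being removed):

	struct vmem_altmap mhp_altmap = { .alloc = nr_vmemmap_pages };

	arch_remove_memory(nid, start, size, &mhp_altmap);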

Signed-off-by: Oscar Salvador <osalvador@suse.de>
---
 arch/powerpc/platforms/powernv/memtrace.c |  2 +-
 drivers/base/memory.c                     | 21 +++--
 include/linux/memory.h                    |  7 +-
 include/linux/memory_hotplug.h            | 13 +++-
 include/linux/memremap.h                  |  2 +-
 mm/memory_hotplug.c                       | 95 ++++++++++++++++++-----
 mm/page_alloc.c                           |  4 +-
 7 files changed, 107 insertions(+), 37 deletions(-)

diff --git a/arch/powerpc/platforms/powernv/memtrace.c b/arch/powerpc/platforms/powernv/memtrace.c
index 6828108486f8..8d45b8a5a9d0 100644
--- a/arch/powerpc/platforms/powernv/memtrace.c
+++ b/arch/powerpc/platforms/powernv/memtrace.c
@@ -79,7 +79,7 @@ static bool memtrace_offline_pages(u32 nid, u64 start_pfn, u64 nr_pages)
 	walk_memory_blocks(start, size, (void *)MEM_GOING_OFFLINE,
 			   change_memblock_state);
 
-	if (offline_pages(start_pfn, nr_pages)) {
+	if (offline_pages(start_pfn, nr_pages, 0)) {
 		walk_memory_blocks(start, size, (void *)MEM_ONLINE,
 				   change_memblock_state);
 		return false;
diff --git a/drivers/base/memory.c b/drivers/base/memory.c
index eef4ffb6122c..675974e39a84 100644
--- a/drivers/base/memory.c
+++ b/drivers/base/memory.c
@@ -175,7 +175,7 @@ int memory_notify(unsigned long val, void *v)
  */
 static int
 memory_block_action(unsigned long start_section_nr, unsigned long action,
-		    int online_type, int nid)
+		    int online_type, int nid, unsigned long nr_vmemmap_pages)
 {
 	unsigned long start_pfn;
 	unsigned long nr_pages = PAGES_PER_SECTION * sections_per_block;
@@ -185,10 +185,11 @@ memory_block_action(unsigned long start_section_nr, unsigned long action,
 
 	switch (action) {
 	case MEM_ONLINE:
-		ret = online_pages(start_pfn, nr_pages, online_type, nid);
+		ret = online_pages(start_pfn, nr_pages, nr_vmemmap_pages,
+				   online_type, nid);
 		break;
 	case MEM_OFFLINE:
-		ret = offline_pages(start_pfn, nr_pages);
+		ret = offline_pages(start_pfn, nr_pages, nr_vmemmap_pages);
 		break;
 	default:
 		WARN(1, KERN_WARNING "%s(%ld, %ld) unknown action: "
@@ -211,7 +212,7 @@ static int memory_block_change_state(struct memory_block *mem,
 		mem->state = MEM_GOING_OFFLINE;
 
 	ret = memory_block_action(mem->start_section_nr, to_state,
-				  mem->online_type, mem->nid);
+				  mem->online_type, mem->nid, mem->nr_vmemmap_pages);
 
 	mem->state = ret ? from_state_req : to_state;
 
@@ -571,7 +572,8 @@ int register_memory(struct memory_block *memory)
 	return ret;
 }
 
-static int init_memory_block(unsigned long block_id, unsigned long state)
+static int init_memory_block(unsigned long block_id, unsigned long state,
+			     unsigned long nr_vmemmap_pages)
 {
 	struct memory_block *mem;
 	unsigned long start_pfn;
@@ -591,6 +593,7 @@ static int init_memory_block(unsigned long block_id, unsigned long state)
 	start_pfn = section_nr_to_pfn(mem->start_section_nr);
 	mem->phys_device = arch_get_memory_phys_device(start_pfn);
 	mem->nid = NUMA_NO_NODE;
+	mem->nr_vmemmap_pages = nr_vmemmap_pages;
 
 	ret = register_memory(mem);
 
@@ -610,7 +613,7 @@ static int add_memory_block(unsigned long base_section_nr)
 	if (section_count == 0)
 		return 0;
 	return init_memory_block(memory_block_id(base_section_nr),
-				 MEM_ONLINE);
+				 MEM_ONLINE, 0);
 }
 
 static void unregister_memory(struct memory_block *memory)
@@ -632,7 +635,8 @@ static void unregister_memory(struct memory_block *memory)
  *
  * Called under device_hotplug_lock.
  */
-int create_memory_block_devices(unsigned long start, unsigned long size)
+int create_memory_block_devices(unsigned long start, unsigned long size,
+				unsigned long vmemmap_pages)
 {
 	const unsigned long start_block_id = pfn_to_block_id(PFN_DOWN(start));
 	unsigned long end_block_id = pfn_to_block_id(PFN_DOWN(start + size));
@@ -645,9 +649,10 @@ int create_memory_block_devices(unsigned long start, unsigned long size)
 		return -EINVAL;
 
 	for (block_id = start_block_id; block_id != end_block_id; block_id++) {
-		ret = init_memory_block(block_id, MEM_OFFLINE);
+		ret = init_memory_block(block_id, MEM_OFFLINE, vmemmap_pages);
 		if (ret)
 			break;
+		vmemmap_pages = 0;
 	}
 	if (ret) {
 		end_block_id = block_id;
diff --git a/include/linux/memory.h b/include/linux/memory.h
index 439a89e758d8..b910d2aea879 100644
--- a/include/linux/memory.h
+++ b/include/linux/memory.h
@@ -30,6 +30,11 @@ struct memory_block {
 	int phys_device;		/* to which fru does this belong? */
 	struct device dev;
 	int nid;			/* NID for this memory block */
+	unsigned long nr_vmemmap_pages;	/*
+					 * Number of vmemmap pages. These pages
+					 * lay at the beginning of the memory
+					 * block.
+					 */
 };
 
 int arch_get_memory_phys_device(unsigned long start_pfn);
@@ -81,7 +86,7 @@ static inline int memory_notify(unsigned long val, void *v)
 #else
 extern int register_memory_notifier(struct notifier_block *nb);
 extern void unregister_memory_notifier(struct notifier_block *nb);
-int create_memory_block_devices(unsigned long start, unsigned long size);
+int create_memory_block_devices(unsigned long start, unsigned long size, unsigned long vmemmap_pages);
 void remove_memory_block_devices(unsigned long start, unsigned long size);
 extern void memory_dev_init(void);
 extern int memory_notify(unsigned long val, void *v);
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 5b1ea1f8d1ab..757452279965 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -121,11 +121,13 @@ extern int zone_grow_waitqueues(struct zone *zone, unsigned long nr_pages);
 extern int add_one_highpage(struct page *page, int pfn, int bad_ppro);
 /* VM interface that may be used by firmware interface */
 extern int online_pages(unsigned long pfn, unsigned long nr_pages,
-			int online_type, int nid);
+			unsigned long nr_vmemmap_pages, int online_type,
+			int nid);
 extern struct zone *test_pages_in_a_zone(unsigned long start_pfn,
 					 unsigned long end_pfn);
 extern void __offline_isolated_pages(unsigned long start_pfn,
-				     unsigned long end_pfn);
+				     unsigned long end_pfn,
+				     unsigned long buddy_start_pfn);
 
 typedef void (*online_page_callback_t)(struct page *page, unsigned int order);
 
@@ -320,7 +322,8 @@ static inline void pgdat_resize_init(struct pglist_data *pgdat) {}
 #ifdef CONFIG_MEMORY_HOTREMOVE
 
 extern void try_offline_node(int nid);
-extern int offline_pages(unsigned long start_pfn, unsigned long nr_pages);
+extern int offline_pages(unsigned long start_pfn, unsigned long nr_pages,
+			 unsigned long nr_vmemmap_pages);
 extern int remove_memory(int nid, u64 start, u64 size);
 extern void __remove_memory(int nid, u64 start, u64 size);
 extern int offline_and_remove_memory(int nid, u64 start, u64 size);
@@ -328,7 +331,8 @@ extern int offline_and_remove_memory(int nid, u64 start, u64 size);
 #else
 static inline void try_offline_node(int nid) {}
 
-static inline int offline_pages(unsigned long start_pfn, unsigned long nr_pages)
+static inline int offline_pages(unsigned long start_pfn, unsigned long nr_pages,
+				unsigned long nr_vmemmap_pages)
 {
 	return -EINVAL;
 }
@@ -369,6 +373,7 @@ extern struct page *sparse_decode_mem_map(unsigned long coded_mem_map,
 					  unsigned long pnum);
 extern struct zone *zone_for_pfn_range(int online_type, int nid, unsigned start_pfn,
 		unsigned long nr_pages);
+extern bool mhp_supports_memmap_on_memory(unsigned long size);
 #endif /* CONFIG_MEMORY_HOTPLUG */
 
 #endif /* __LINUX_MEMORY_HOTPLUG_H */
diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index 79c49e7f5c30..3465681cf664 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -17,7 +17,7 @@ struct device;
  * @alloc: track pages consumed, private to vmemmap_populate()
  */
 struct vmem_altmap {
-	const unsigned long base_pfn;
+	unsigned long base_pfn;
 	const unsigned long end_pfn;
 	const unsigned long reserve;
 	unsigned long free;
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 63b2e46b6555..87fbc2cc0d90 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -607,10 +607,12 @@ void generic_online_page(struct page *page, unsigned int order)
 }
 EXPORT_SYMBOL_GPL(generic_online_page);
 
-static void online_pages_range(unsigned long start_pfn, unsigned long nr_pages)
+static void online_pages_range(unsigned long start_pfn, unsigned long nr_pages,
+			       unsigned long buddy_start_pfn)
 {
 	const unsigned long end_pfn = start_pfn + nr_pages;
 	unsigned long pfn;
+	unsigned int order;
 
 	/*
 	 * Online the pages in MAX_ORDER - 1 aligned chunks. The callback might
@@ -618,8 +620,12 @@ static void online_pages_range(unsigned long start_pfn, unsigned long nr_pages)
 	 * later). We account all pages as being online and belonging to this
 	 * zone ("present").
 	 */
-	for (pfn = start_pfn; pfn < end_pfn; pfn += MAX_ORDER_NR_PAGES)
-		(*online_page_callback)(pfn_to_page(pfn), MAX_ORDER - 1);
+	for (pfn = buddy_start_pfn; pfn < end_pfn; pfn += (1 << order)) {
+		order = MAX_ORDER - 1;
+		while (pfn & ((1 << order) - 1))
+			order--;
+		(*online_page_callback)(pfn_to_page(pfn), order);
+	}
 
 	/* mark all involved sections as online */
 	online_mem_sections(start_pfn, end_pfn);
@@ -679,6 +685,7 @@ static void __meminit resize_pgdat_range(struct pglist_data *pgdat, unsigned lon
 	pgdat->node_spanned_pages = max(start_pfn + nr_pages, old_end_pfn) - pgdat->node_start_pfn;
 
 }
+
 /*
  * Associate the pfn range with the given zone, initializing the memmaps
  * and resizing the pgdat/zone data to span the added pages. After this
@@ -778,9 +785,9 @@ struct zone * zone_for_pfn_range(int online_type, int nid, unsigned start_pfn,
 }
 
 int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
-		       int online_type, int nid)
+		       unsigned long nr_vmemmap_pages, int online_type, int nid)
 {
-	unsigned long flags;
+	unsigned long flags, buddy_start_pfn, buddy_nr_pages;
 	struct zone *zone;
 	int need_zonelists_rebuild = 0;
 	int ret;
@@ -791,11 +798,18 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
 			 !IS_ALIGNED(pfn | nr_pages, PAGES_PER_SECTION)))
 		return -EINVAL;
 
+	buddy_start_pfn = pfn + nr_vmemmap_pages;
+	buddy_nr_pages = nr_pages - nr_vmemmap_pages;
+
 	mem_hotplug_begin();
 
 	/* associate pfn range with the zone */
 	zone = zone_for_pfn_range(online_type, nid, pfn, nr_pages);
-	move_pfn_range_to_zone(zone, pfn, nr_pages, NULL, MIGRATE_ISOLATE);
+	if (nr_vmemmap_pages)
+		move_pfn_range_to_zone(zone, pfn, nr_vmemmap_pages, NULL,
+				       MIGRATE_UNMOVABLE);
+	move_pfn_range_to_zone(zone, buddy_start_pfn, buddy_nr_pages, NULL,
+			       MIGRATE_ISOLATE);
 
 	arg.start_pfn = pfn;
 	arg.nr_pages = nr_pages;
@@ -811,7 +825,7 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
 	 * onlining, such that undo_isolate_page_range() works correctly.
 	 */
 	spin_lock_irqsave(&zone->lock, flags);
-	zone->nr_isolate_pageblock += nr_pages / pageblock_nr_pages;
+	zone->nr_isolate_pageblock += buddy_nr_pages / pageblock_nr_pages;
 	spin_unlock_irqrestore(&zone->lock, flags);
 
 	/*
@@ -824,7 +838,7 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
 		setup_zone_pageset(zone);
 	}
 
-	online_pages_range(pfn, nr_pages);
+	online_pages_range(pfn, nr_pages, buddy_start_pfn);
 	zone->present_pages += nr_pages;
 
 	pgdat_resize_lock(zone->zone_pgdat, &flags);
@@ -837,7 +851,9 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
 	zone_pcp_update(zone);
 
 	/* Basic onlining is complete, allow allocation of onlined pages. */
-	undo_isolate_page_range(pfn, pfn + nr_pages, MIGRATE_MOVABLE);
+	undo_isolate_page_range(buddy_start_pfn,
+				buddy_start_pfn + buddy_nr_pages,
+				MIGRATE_MOVABLE);
 
 	/*
 	 * Freshly onlined pages aren't shuffled (e.g., all pages are placed to
@@ -1021,6 +1037,7 @@ static int online_memory_block(struct memory_block *mem, void *arg)
 int __ref add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
 {
 	struct mhp_params params = { .pgprot = PAGE_KERNEL };
+	struct vmem_altmap mhp_altmap = {};
 	u64 start, size;
 	bool new_node = false;
 	int ret;
@@ -1047,13 +1064,22 @@ int __ref add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
 		goto error;
 	new_node = ret;
 
+	/*
+	 * Self hosted memmap array
+	 */
+	if (mhp_flags & MHP_MEMMAP_ON_MEMORY) {
+		mhp_altmap.free = size >> PAGE_SHIFT;
+		mhp_altmap.base_pfn = start >> PAGE_SHIFT;
+		params.altmap = &mhp_altmap;
+	}
+
 	/* call arch's memory hotadd */
 	ret = arch_add_memory(nid, start, size, &params);
 	if (ret < 0)
 		goto error;
 
 	/* create memory block devices after memory was added */
-	ret = create_memory_block_devices(start, size);
+	ret = create_memory_block_devices(start, size, mhp_altmap.alloc);
 	if (ret) {
 		arch_remove_memory(nid, start, size, NULL);
 		goto error;
@@ -1449,10 +1475,11 @@ static int count_system_ram_pages_cb(unsigned long start_pfn,
 	return 0;
 }
 
-int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
+int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages,
+			unsigned long nr_vmemmap_pages)
 {
 	const unsigned long end_pfn = start_pfn + nr_pages;
-	unsigned long pfn, system_ram_pages = 0;
+	unsigned long pfn, buddy_start_pfn, buddy_nr_pages, system_ram_pages = 0;
 	unsigned long flags;
 	struct zone *zone;
 	struct memory_notify arg;
@@ -1464,6 +1491,9 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
 			 !IS_ALIGNED(start_pfn | nr_pages, PAGES_PER_SECTION)))
 		return -EINVAL;
 
+	buddy_start_pfn = start_pfn + nr_vmemmap_pages;
+	buddy_nr_pages = nr_pages - nr_vmemmap_pages;
+
 	mem_hotplug_begin();
 
 	/*
@@ -1493,8 +1523,7 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
 	node = zone_to_nid(zone);
 
 	/* set above range as isolated */
-	ret = start_isolate_page_range(start_pfn, end_pfn,
-				       MIGRATE_MOVABLE,
+	ret = start_isolate_page_range(buddy_start_pfn, end_pfn, MIGRATE_MOVABLE,
 				       MEMORY_OFFLINE | REPORT_FAILURE);
 	if (ret) {
 		reason = "failure to isolate range";
@@ -1513,7 +1542,7 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
 	}
 
 	do {
-		pfn = start_pfn;
+		pfn = buddy_start_pfn;
 		do {
 			if (signal_pending(current)) {
 				ret = -EINTR;
@@ -1544,7 +1573,7 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
 		 * offlining actually in order to make hugetlbfs's object
 		 * counting consistent.
 		 */
-		ret = dissolve_free_huge_pages(start_pfn, end_pfn);
+		ret = dissolve_free_huge_pages(buddy_start_pfn, end_pfn);
 		if (ret) {
 			reason = "failure to dissolve huge pages";
 			goto failed_removal_isolated;
@@ -1562,13 +1591,13 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
 		 * because has_unmovable_pages explicitly checks for
 		 * PageBuddy on freed pages on other zones.
 		 */
-		ret = test_pages_isolated(start_pfn, end_pfn, MEMORY_OFFLINE);
+		ret = test_pages_isolated(buddy_start_pfn, end_pfn, MEMORY_OFFLINE);
 		if (ret)
 			drain_all_pages(zone);
 	} while (ret);
 
 	/* Mark all sections offline and remove free pages from the buddy. */
-	__offline_isolated_pages(start_pfn, end_pfn);
+	__offline_isolated_pages(start_pfn, end_pfn, buddy_start_pfn);
 	pr_info("Offlined Pages %ld\n", nr_pages);
 
 	/*
@@ -1577,11 +1606,11 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
 	 * of isolated pageblocks, memory onlining will properly revert this.
 	 */
 	spin_lock_irqsave(&zone->lock, flags);
-	zone->nr_isolate_pageblock -= nr_pages / pageblock_nr_pages;
+	zone->nr_isolate_pageblock -= buddy_nr_pages / pageblock_nr_pages;
 	spin_unlock_irqrestore(&zone->lock, flags);
 
 	/* removal success */
-	adjust_managed_page_count(pfn_to_page(start_pfn), -nr_pages);
+	adjust_managed_page_count(pfn_to_page(start_pfn), -buddy_nr_pages);
 	zone->present_pages -= nr_pages;
 
 	pgdat_resize_lock(zone->zone_pgdat, &flags);
@@ -1639,6 +1668,16 @@ static int check_memblock_offlined_cb(struct memory_block *mem, void *arg)
 	return 0;
 }
 
+static int get_memblock_vmemmap_pages_cb(struct memory_block *mem, void *arg)
+{
+	unsigned long *nr_vmemmap_pages = (unsigned long *)arg;
+	int ret = !mem->nr_vmemmap_pages;
+
+	if (!ret)
+		*nr_vmemmap_pages += mem->nr_vmemmap_pages;
+	return ret;
+}
+
 static int check_cpu_on_node(pg_data_t *pgdat)
 {
 	int cpu;
@@ -1713,6 +1752,9 @@ EXPORT_SYMBOL(try_offline_node);
 static int __ref try_remove_memory(int nid, u64 start, u64 size)
 {
 	int rc = 0;
+	struct vmem_altmap mhp_altmap = {};
+	struct vmem_altmap *altmap = NULL;
+	unsigned long nr_vmemmap_pages = 0;
 
 	BUG_ON(check_hotplug_memory_range(start, size));
 
@@ -1725,6 +1767,17 @@ static int __ref try_remove_memory(int nid, u64 start, u64 size)
 	if (rc)
 		return rc;
 
+	/*
+	 * Prepare a vmem_altmap struct if we used it at hot-add, so
+	 * remove_pmd_table->free_hugepage_table does the right thing.
+	 */
+	(void)walk_memory_blocks(start, size, &nr_vmemmap_pages,
+	     get_memblock_vmemmap_pages_cb);
+	if (nr_vmemmap_pages) {
+		mhp_altmap.alloc = nr_vmemmap_pages;
+		altmap = &mhp_altmap;
+	}
+
 	/* remove memmap entry */
 	firmware_map_remove(start, start + size, "System RAM");
 
@@ -1736,7 +1789,7 @@ static int __ref try_remove_memory(int nid, u64 start, u64 size)
 
 	mem_hotplug_begin();
 
-	arch_remove_memory(nid, start, size, NULL);
+	arch_remove_memory(nid, start, size, altmap);
 
 	if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK)) {
 		memblock_free(start, size);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index eaa227a479e4..aac60d02e04b 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -8753,7 +8753,8 @@ void zone_pcp_reset(struct zone *zone)
  * All pages in the range must be in a single zone, must not contain holes,
  * must span full sections, and must be isolated before calling this function.
  */
-void __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
+void __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn,
+			      unsigned long buddy_start_pfn)
 {
 	unsigned long pfn = start_pfn;
 	struct page *page;
@@ -8764,6 +8765,7 @@ void __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
 	offline_mem_sections(pfn, end_pfn);
 	zone = page_zone(pfn_to_page(pfn));
 	spin_lock_irqsave(&zone->lock, flags);
+	pfn = buddy_start_pfn;
 	while (pfn < end_pfn) {
 		page = pfn_to_page(pfn);
 		/*
-- 
2.26.2




* [RFC PATCH v2 3/4] mm,memory_hotplug: Add mhp_supports_memmap_on_memory
  2020-11-25 11:20 [RFC PATCH v2 0/4] Allocate memmap from hotadded memory (per device) Oscar Salvador
  2020-11-25 11:20 ` [RFC PATCH v2 1/4] mm,memory_hotplug: Introduce MHP_MEMMAP_ON_MEMORY Oscar Salvador
  2020-11-25 11:20 ` [RFC PATCH v2 2/4] mm,memory_hotplug: Allocate memmap from the added memory range Oscar Salvador
@ 2020-11-25 11:20 ` Oscar Salvador
  2020-11-27 15:02   ` Michal Hocko
  2020-11-25 11:20 ` [RFC PATCH v2 4/4] mm,memory_hotplug: Enable MHP_MEMMAP_ON_MEMORY when supported Oscar Salvador
  3 siblings, 1 reply; 11+ messages in thread
From: Oscar Salvador @ 2020-11-25 11:20 UTC (permalink / raw)
  To: david
  Cc: mhocko, linux-kernel, linux-mm, vbabka, pasha.tatashin, Oscar Salvador

mhp_supports_memmap_on_memory is meant to be used by the caller prior
to hot-adding memory in order to figure out whether it can enable
MHP_MEMMAP_ON_MEMORY or not.

Enabling MHP_MEMMAP_ON_MEMORY requires:

 - CONFIG_SPARSEMEM_VMEMMAP
 - architecture support for altmap
 - hot-added range spans a single memory block

At the moment, only three architectures support passing an altmap when
building the page tables: x86_64, powerpc and arm64.
Define an arch_support_memmap_on_memory() function that returns true on
those architectures, and a __weak variant of it, returning false, to be
used by all the others.
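
The __weak annotation emits the generic stub as a weak symbol, so an
architecture that provides a regular (strong) definition simply wins at
link time. A minimal sketch of the pattern:

	/* generic code: weak default, used when no arch overrides it */
	bool __weak arch_support_memmap_on_memory(void)
	{
		return false;
	}

	/* arch code: strong definition, overrides the weak one */
	bool arch_support_memmap_on_memory(void)
	{
		return true;
	}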

Signed-off-by: Oscar Salvador <osalvador@suse.de>
---
 arch/arm64/mm/mmu.c   |  5 +++++
 arch/powerpc/mm/mem.c |  5 +++++
 arch/x86/mm/init_64.c |  5 +++++
 mm/memory_hotplug.c   | 24 ++++++++++++++++++++++++
 4 files changed, 39 insertions(+)

diff --git a/arch/arm64/mm/mmu.c b/arch/arm64/mm/mmu.c
index ca692a815731..0da4e4f8794f 100644
--- a/arch/arm64/mm/mmu.c
+++ b/arch/arm64/mm/mmu.c
@@ -1456,6 +1456,11 @@ static bool inside_linear_region(u64 start, u64 size)
 	       (start + size - 1) <= __pa(PAGE_END - 1);
 }
 
+bool arch_support_memmap_on_memory(void)
+{
+	return true;
+}
+
 int arch_add_memory(int nid, u64 start, u64 size,
 		    struct mhp_params *params)
 {
diff --git a/arch/powerpc/mm/mem.c b/arch/powerpc/mm/mem.c
index 3fc325bebe4d..18e7e28fe713 100644
--- a/arch/powerpc/mm/mem.c
+++ b/arch/powerpc/mm/mem.c
@@ -121,6 +121,11 @@ static void flush_dcache_range_chunked(unsigned long start, unsigned long stop,
 	}
 }
 
+bool arch_support_memmap_on_memory(void)
+{
+	return true;
+}
+
 int __ref arch_add_memory(int nid, u64 start, u64 size,
 			  struct mhp_params *params)
 {
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index b5a3fa4033d3..ffb9d87c77e8 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -860,6 +860,11 @@ int add_pages(int nid, unsigned long start_pfn, unsigned long nr_pages,
 	return ret;
 }
 
+bool arch_support_memmap_on_memory(void)
+{
+	return true;
+}
+
 int arch_add_memory(int nid, u64 start, u64 size,
 		    struct mhp_params *params)
 {
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 87fbc2cc0d90..10255606ff85 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1028,6 +1028,20 @@ static int online_memory_block(struct memory_block *mem, void *arg)
 	return device_online(&mem->dev);
 }
 
+bool __weak arch_support_memmap_on_memory(void)
+{
+	return false;
+}
+
+bool mhp_supports_memmap_on_memory(unsigned long size)
+{
+	if (!arch_support_memmap_on_memory() ||
+	    !IS_ENABLED(CONFIG_SPARSEMEM_VMEMMAP) ||
+	    size > memory_block_size_bytes())
+		return false;
+	return true;
+}
+
 /*
  * NOTE: The caller must call lock_device_hotplug() to serialize hotplug
  * and online/offline operations (triggered e.g. by sysfs).
@@ -1064,6 +1078,16 @@ int __ref add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
 		goto error;
 	new_node = ret;
 
+	/*
+	 * Return -EINVAL if caller specified MHP_MEMMAP_ON_MEMORY and we do
+	 * not support it.
+	 */
+	if ((mhp_flags & MHP_MEMMAP_ON_MEMORY) &&
+	    !mhp_supports_memmap_on_memory(size)) {
+		ret = -EINVAL;
+		goto error;
+	}
+
 	/*
 	 * Self hosted memmap array
 	 */
-- 
2.26.2




* [RFC PATCH v2 4/4] mm,memory_hotplug: Enable MHP_MEMMAP_ON_MEMORY when supported
  2020-11-25 11:20 [RFC PATCH v2 0/4] Allocate memmap from hotadded memory (per device) Oscar Salvador
                   ` (2 preceding siblings ...)
  2020-11-25 11:20 ` [RFC PATCH v2 3/4] mm,memory_hotplug: Add mhp_supports_memmap_on_memory Oscar Salvador
@ 2020-11-25 11:20 ` Oscar Salvador
  2020-11-27 11:55   ` Oscar Salvador
  3 siblings, 1 reply; 11+ messages in thread
From: Oscar Salvador @ 2020-11-25 11:20 UTC (permalink / raw)
  To: david
  Cc: mhocko, linux-kernel, linux-mm, vbabka, pasha.tatashin, Oscar Salvador

Let callers use mhp_supports_memmap_on_memory() to find out whether they
can enable MHP_MEMMAP_ON_MEMORY.
Currently, MHP_MEMMAP_ON_MEMORY is only supported when
CONFIG_SPARSEMEM_VMEMMAP is enabled, the architecture supports altmap,
and the range to be added spans a single memory block.

Signed-off-by: Oscar Salvador <osalvador@suse.de>
---
 drivers/acpi/acpi_memhotplug.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/acpi/acpi_memhotplug.c b/drivers/acpi/acpi_memhotplug.c
index b02fd51e5589..1fe645ef0b6c 100644
--- a/drivers/acpi/acpi_memhotplug.c
+++ b/drivers/acpi/acpi_memhotplug.c
@@ -172,6 +172,7 @@ static int acpi_memory_enable_device(struct acpi_memory_device *mem_device)
 	int result, num_enabled = 0;
 	struct acpi_memory_info *info;
 	int node;
+	mhp_t mhp_flags = MHP_NONE;
 
 	node = acpi_get_node(handle);
 	/*
@@ -194,8 +195,10 @@ static int acpi_memory_enable_device(struct acpi_memory_device *mem_device)
 		if (node < 0)
 			node = memory_add_physaddr_to_nid(info->start_addr);
 
+		if (mhp_supports_memmap_on_memory(info->length))
+			mhp_flags |= MHP_MEMMAP_ON_MEMORY;
 		result = __add_memory(node, info->start_addr, info->length,
-				      MHP_NONE);
+				      mhp_flags);
 
 		/*
 		 * If the memory block has been used by the kernel, add_memory()
-- 
2.26.2




* Re: [RFC PATCH v2 4/4] mm,memory_hotplug: Enable MHP_MEMMAP_ON_MEMORY when supported
  2020-11-25 11:20 ` [RFC PATCH v2 4/4] mm,memory_hotplug: Enable MHP_MEMMAP_ON_MEMORY when supported Oscar Salvador
@ 2020-11-27 11:55   ` Oscar Salvador
  0 siblings, 0 replies; 11+ messages in thread
From: Oscar Salvador @ 2020-11-27 11:55 UTC (permalink / raw)
  To: david; +Cc: mhocko, linux-kernel, linux-mm, vbabka, pasha.tatashin

On Wed, Nov 25, 2020 at 12:20:48PM +0100, Oscar Salvador wrote:
> diff --git a/drivers/acpi/acpi_memhotplug.c b/drivers/acpi/acpi_memhotplug.c
> index b02fd51e5589..1fe645ef0b6c 100644
> --- a/drivers/acpi/acpi_memhotplug.c
> +++ b/drivers/acpi/acpi_memhotplug.c
> @@ -172,6 +172,7 @@ static int acpi_memory_enable_device(struct acpi_memory_device *mem_device)
>  	int result, num_enabled = 0;
>  	struct acpi_memory_info *info;
>  	int node;
> +	mhp_t mhp_flags = MHP_NONE;
>  
>  	node = acpi_get_node(handle);
>  	/*
> @@ -194,8 +195,10 @@ static int acpi_memory_enable_device(struct acpi_memory_device *mem_device)
>  		if (node < 0)
>  			node = memory_add_physaddr_to_nid(info->start_addr);
>  
> +		if (mhp_supports_memmap_on_memory(info->length))
> +			mhp_flags |= MHP_MEMMAP_ON_MEMORY;
>  		result = __add_memory(node, info->start_addr, info->length,
> -				      MHP_NONE);
> +				      mhp_flags);
>  
>  		/*
>  		 * If the memory block has been used by the kernel, add_memory()

I forgot the following chunk:

diff --git a/arch/powerpc/platforms/pseries/hotplug-memory.c b/arch/powerpc/platforms/pseries/hotplug-memory.c
index 7efe6ec5d14a..3aba817e729e 100644
--- a/arch/powerpc/platforms/pseries/hotplug-memory.c
+++ b/arch/powerpc/platforms/pseries/hotplug-memory.c
@@ -612,6 +612,7 @@ static int dlpar_add_lmb(struct drmem_lmb *lmb)
 {
        unsigned long block_sz;
        int nid, rc;
+       mhp_t mhp_flags = MHP_NONE;
 
        if (lmb->flags & DRCONF_MEM_ASSIGNED)
                return -EINVAL;
@@ -629,8 +630,10 @@ static int dlpar_add_lmb(struct drmem_lmb *lmb)
        if (nid < 0 || !node_possible(nid))
                nid = first_online_node;
 
+       if (mhp_supports_memmap_on_memory(block_sz))
+               mhp_flags |= MHP_MEMMAP_ON_MEMORY;
        /* Add the memory */
-       rc = __add_memory(nid, lmb->base_addr, block_sz, MHP_NONE);
+       rc = __add_memory(nid, lmb->base_addr, block_sz, mhp_flags);
        if (rc) {
                invalidate_lmb_associativity_index(lmb);
                return rc;

I will add it for the next spin once I get more feedback. 

-- 
Oscar Salvador
SUSE L3



* Re: [RFC PATCH v2 1/4] mm,memory_hotplug: Introduce MHP_MEMMAP_ON_MEMORY
  2020-11-25 11:20 ` [RFC PATCH v2 1/4] mm,memory_hotplug: Introduce MHP_MEMMAP_ON_MEMORY Oscar Salvador
@ 2020-11-27 14:59   ` Michal Hocko
  0 siblings, 0 replies; 11+ messages in thread
From: Michal Hocko @ 2020-11-27 14:59 UTC (permalink / raw)
  To: Oscar Salvador; +Cc: david, linux-kernel, linux-mm, vbabka, pasha.tatashin

On Wed 25-11-20 12:20:45, Oscar Salvador wrote:
> MHP_MEMMAP_ON_MEMORY tells the system that we want the memmap
> pagetables to be built from the hot-added range.
> 
> Signed-off-by: Oscar Salvador <osalvador@suse.de>

This should be folded into the patch which uses the flag.

> ---
>  include/linux/memory_hotplug.h | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
> index 551093b74596..5b1ea1f8d1ab 100644
> --- a/include/linux/memory_hotplug.h
> +++ b/include/linux/memory_hotplug.h
> @@ -70,6 +70,14 @@ typedef int __bitwise mhp_t;
>   */
>  #define MEMHP_MERGE_RESOURCE	((__force mhp_t)BIT(0))
>  
> +/*
> + * We want memmap (struct page array) to be self contained.
> + * To do so, we will use the beginning of the hot-added range to build
> + * the page tables for the memmap array that describes the entire range.
> + * Only selected architectures support it with SPARSE_VMEMMAP.
> + */
> +#define MHP_MEMMAP_ON_MEMORY	((__force mhp_t)BIT(1))
> +
>  /*
>   * Extended parameters for memory hotplug:
>   * altmap: alternative allocator for memmap array (optional)
> -- 
> 2.26.2

-- 
Michal Hocko
SUSE Labs



* Re: [RFC PATCH v2 3/4] mm,memory_hotplug: Add mhp_supports_memmap_on_memory
  2020-11-25 11:20 ` [RFC PATCH v2 3/4] mm,memory_hotplug: Add mhp_supports_memmap_on_memory Oscar Salvador
@ 2020-11-27 15:02   ` Michal Hocko
  2020-11-30  8:50     ` Oscar Salvador
  0 siblings, 1 reply; 11+ messages in thread
From: Michal Hocko @ 2020-11-27 15:02 UTC (permalink / raw)
  To: Oscar Salvador; +Cc: david, linux-kernel, linux-mm, vbabka, pasha.tatashin

On Wed 25-11-20 12:20:47, Oscar Salvador wrote:
> mhp_supports_memmap_on_memory is meant to be used by the caller prior
> to hot-adding memory in order to figure out whether it can enable
> MHP_MEMMAP_ON_MEMORY or not.
> 
> Enabling MHP_MEMMAP_ON_MEMORY requires:
> 
>  - CONFIG_SPARSEMEM_VMEMMAP
>  - architecture support for altmap
>  - hot-added range spans a single memory block

It should also require a tunable (a kernel parameter for now, but maybe we
will need more fine-grained control later) to enable this explicitly.
Earlier discussions have pointed out that allocating vmemmap from each
section can lead to a sparse memory layout unsuitable for very large pages.
So I believe this should be an opt-in.
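
Such a knob could be as simple as the following sketch (all names
hypothetical, just to illustrate the shape of the opt-in):

	static bool memmap_on_memory __ro_after_init;

	static int __init setup_memmap_on_memory(char *str)
	{
		return kstrtobool(str, &memmap_on_memory);
	}
	early_param("memmap_on_memory", setup_memmap_on_memory);

with mhp_supports_memmap_on_memory() additionally returning false when
the parameter was not set.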
 
Also is there any reason why this cannot be a preparatory patch for the
actual implementation? It would look more natural that way to me.

-- 
Michal Hocko
SUSE Labs



* Re: [RFC PATCH v2 2/4] mm,memory_hotplug: Allocate memmap from the added memory range
  2020-11-25 11:20 ` [RFC PATCH v2 2/4] mm,memory_hotplug: Allocate memmap from the added memory range Oscar Salvador
@ 2020-11-27 15:15   ` Michal Hocko
  2020-11-30  9:12     ` Oscar Salvador
  0 siblings, 1 reply; 11+ messages in thread
From: Michal Hocko @ 2020-11-27 15:15 UTC (permalink / raw)
  To: Oscar Salvador; +Cc: david, linux-kernel, linux-mm, vbabka, pasha.tatashin

On Wed 25-11-20 12:20:46, Oscar Salvador wrote:
> Physical memory hotadd has to allocate a memmap (struct page array) for
> the newly added memory section. Currently, alloc_pages_node() is used
> for those allocations.
> 
> This has some disadvantages:
>  a) existing memory is consumed for that purpose
>     (eg: ~2MB per 128MB memory section on x86_64)
>  b) if the whole node is movable then we have off-node struct pages
>     which have performance drawbacks.
>  c) it might be that there are no PMD_ALIGNED chunks, so the memmap
>     array gets populated with base pages.
> 
> This can be improved when CONFIG_SPARSEMEM_VMEMMAP is enabled.
> 
> Vmemmap page tables can map arbitrary memory.
> That means that we can simply use the beginning of each memory section and
> map struct pages there.

Did you mean each memory block rather than section?

> struct pages which back the allocated space then just need to be treated
> carefully.
> 
> Implementation-wise, we will reuse the vmem_altmap infrastructure to override
> the default allocator used by __populate_section_memmap. Once the memmap is
> allocated, we are going to need a way to mark the altmap pfns used for the
> allocation.
> If the MHP_MEMMAP_ON_MEMORY flag was passed, we will set up the layout of the
> altmap structure in add_memory_resource(), and then we will call
> mhp_mark_vmemmap_pages() to properly mark those pages.
> 
> Online/Offline:
> 
>  In the memory_block structure, a new field is created in order to
>  store the number of vmemmap_pages.

Is this really needed? We know how many pfns are required for a block of
a specific size, right?
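
(i.e., assuming the whole block's memmap is self-hosted, something like

	nr_vmemmap_pages = PHYS_PFN(memory_block_size_bytes()) *
			   sizeof(struct page) / PAGE_SIZE;

could be computed on the fly rather than stored.)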

I have only glanced through the patch so I might be missing something
but I am really wondering why you haven't chosen to use altmap directly
here.

>  Having that around simplifies things a lot, since in {online/offline}_pages
>  we know how far to skip forward until the first non-vmemmap page, for
>  operations like isolation/migration/initialization.
> 
> Hot-remove:
> 
>  If the range was using memmap on memory (aka vmemmap pages),
>  we construct an altmap structure so free_hugepage_table does
>  the right thing and calls vmem_altmap_free instead of
>  free_pagetable.

It would also be good to describe how a pfn walker recognizes such a
page. Most of them will simply ignore it, but e.g. the hotplug walker will
need to skip over those, because they do not prevent offlining; they
go away together with the memory block.

Some basic description of testing done would be suitable as well.

Thanks!

[...]
> +void __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn,
> +			      unsigned long buddy_start_pfn)
>  {
>  	unsigned long pfn = start_pfn;
>  	struct page *page;
> @@ -8764,6 +8765,7 @@ void __offline_isolated_pages(unsigned long start_pfn, unsigned long end_pfn)
>  	offline_mem_sections(pfn, end_pfn);
>  	zone = page_zone(pfn_to_page(pfn));
>  	spin_lock_irqsave(&zone->lock, flags);
> +	pfn = buddy_start_pfn;
>  	while (pfn < end_pfn) {
>  		page = pfn_to_page(pfn);
>  		/*
> -- 
> 2.26.2

-- 
Michal Hocko
SUSE Labs



* Re: [RFC PATCH v2 3/4] mm,memory_hotplug: Add mhp_supports_memmap_on_memory
  2020-11-27 15:02   ` Michal Hocko
@ 2020-11-30  8:50     ` Oscar Salvador
  0 siblings, 0 replies; 11+ messages in thread
From: Oscar Salvador @ 2020-11-30  8:50 UTC (permalink / raw)
  To: Michal Hocko; +Cc: david, linux-kernel, linux-mm, vbabka, pasha.tatashin

On Fri, Nov 27, 2020 at 04:02:53PM +0100, Michal Hocko wrote:
> It should also require a tunable (kernel parameter for now but maybe we
> will need a more fine grained control later) to enable this explicitly.
> Earlier discussions have pointed out that allocating vmemmap from each
> section can lead to a sparse memory unsuitable for very large pages.
> So I believe this should be an opt in.

Yeah, I already had that in mind; I just did not get to implement it in this
RFC yet, as I was more focused on the core implementation.
I thought about a tunable in /sys/devices/system/memory/[file], but a kernel
command line boot option would also work, and for a first implementation it
would be less of a hassle.
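
Just to sketch the boot-option variant (a minimal example; the parameter
name "memmap_on_memory" and the flag are made up here, nothing in this
series defines them yet):

	static bool memmap_on_memory __ro_after_init;

	static int __init parse_memmap_on_memory(char *str)
	{
		/* opt-in only: presence of the parameter enables the feature */
		memmap_on_memory = true;
		return 0;
	}
	early_param("memmap_on_memory", parse_memmap_on_memory);

mhp_supports_memmap_on_memory() would then simply bail out early when the
flag is not set.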

> Also is there any reason why this cannot be a preparatory patch for the
> actual implementation? It would look more natural that way to me.

I guess you are right, will re-order it in a future submission.

-- 
Oscar Salvador
SUSE L3



* Re: [RFC PATCH v2 2/4] mm,memory_hotplug: Allocate memmap from the added memory range
  2020-11-27 15:15   ` Michal Hocko
@ 2020-11-30  9:12     ` Oscar Salvador
  0 siblings, 0 replies; 11+ messages in thread
From: Oscar Salvador @ 2020-11-30  9:12 UTC (permalink / raw)
  To: Michal Hocko; +Cc: david, linux-kernel, linux-mm, vbabka, pasha.tatashin

On Fri, Nov 27, 2020 at 04:15:36PM +0100, Michal Hocko wrote:
> > Vmemmap page tables can map arbitrary memory.
> > That means that we can simply use the beginning of each memory section and
> > map struct pages there.
> 
> Did you mean each memory block rather than section?

Yes, sorry, I did not update that part.

> > struct pages which back the allocated space then just need to be treated
> > carefully.
> > 
> > Implementation wise we will reuse vmem_altmap infrastructure to override
> > the default allocator used by __populate_section_memmap. Once the memmap is
> > allocated, we are going to need a way to mark altmap pfns used for the allocation.
> > If MHP_MEMMAP_ON_MEMORY flag was passed, we will set up the layout of the
> > altmap structure in add_memory_resource(), and then we will call
> > mhp_mark_vmemmap_pages() to properly mark those pages.
> > 
> > Online/Offline:
> > 
> >  In the memory_block structure, a new field is created in order to
> >  store the number of vmemmap_pages.
> 
> Is this really needed? We know how many pfns are required for a block of
> a specific size, right?
> 
> I have only glanced through the patch so I might be missing something
> but I am really wondering why you haven't chosen to use altmap directly
> here.

Well, this is my bad: I did not update the changelog wrt. the previous
version, so it might be confusing.
I will make sure to update it for the next submission, but let me explain it
here to shed some light.

We no longer need to use mhp_mark_vmemmap_pages() to mark vmemmap pages.
Prior to online_pages(), the whole range is offline, so no one should be
messing with any pages within that range.
The initialization of the pages takes place in online_pages():

We have:

start_pfn = first_pfn_of_the_range
buddy_start_pfn = first_pfn_of_the_range + nr_vmemmap_pages


We do have:

+	if (nr_vmemmap_pages)
+		move_pfn_range_to_zone(zone, pfn, nr_vmemmap_pages, NULL,
+				       MIGRATE_UNMOVABLE);
+	move_pfn_range_to_zone(zone, buddy_start_pfn, buddy_nr_pages, NULL,
+			       MIGRATE_ISOLATE);

Now, the whole range is initialized and marked PageReserved, but we only
release [buddy_start_pfn, end_pfn) to the buddy allocator.

And so, [start_pfn, buddy_start_pfn) remains PageReserved.
And we know that pfn walkers skip Reserved pages.
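
As a worked example (assuming x86_64 defaults; the numbers are illustrative):
a 128 MiB memory block spans 32768 4 KiB pages; at 64 bytes per struct page
the memmap needs 2 MiB, i.e. nr_vmemmap_pages = 512, so buddy_start_pfn =
start_pfn + 512 and the first 512 pfns of the block stay Reserved.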

About the altmap part: the altmap is used in the hot-add phase, in
add_memory_resource().

The thing is, we could avoid adding the nr_vmemmap_pages field to
memory_block, but then we would have to mark the vmemmap pages as we used to
do in previous implementations (see [1]).
I find this way cleaner, and it adds much less code (the previous
implementation can be seen in [2]), so as a starting point I find it much
better.
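
To make the mechanism easier to follow, the effect of the altmap on the
vmemmap allocation can be sketched roughly like this (conceptual code only,
not what mm/sparse-vmemmap.c literally does; altmap_alloc_pfn() is a made-up
name):

	/* Carve the next memmap page out of the hot-added range itself
	 * instead of asking the page allocator for one.
	 */
	static unsigned long altmap_alloc_pfn(struct vmem_altmap *altmap)
	{
		if (altmap->alloc >= altmap->free)
			return ULONG_MAX;	/* device range exhausted */

		/* hand out the next free pfn of the hot-added range */
		return altmap->base_pfn + altmap->reserve + altmap->alloc++;
	}

mhp_altmap.alloc therefore ends up holding exactly the number of pfns that
were consumed for the memmap, which is why try_remove_memory() only needs
to reconstruct that one field before calling arch_remove_memory().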

> It would be also good to describe how does a pfn walker recognize such a
> page? Most of them will simply ignore it but e.g. hotplug walker will
> need to skip over those because they are not preventing offlining as
> they will go away with the memory block together.

Wrt. the hotplug walker, same as above: we only care to migrate
[buddy_start_pfn, end_pfn), so the first pfn to migrate and isolate is set
to buddy_start_pfn.

Other pfn walkers should merely skip vmemmap pages because they are Reserved.
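
As a rough illustration (not actual kernel code), such a walker would do
something like:

	unsigned long pfn;

	for (pfn = start_pfn; pfn < end_pfn; pfn++) {
		struct page *page = pfn_to_page(pfn);

		/* self-hosted vmemmap pages stay Reserved, so skip them */
		if (PageReserved(page))
			continue;
		/* ... operate on the page ... */
	}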

> Some basic description of testing done would be suitable as well.

Well, the testing done so far is:

 - Hot-add memory to a specific numa node
 - Online the memory
 - Check that numactl -H and /proc/zoneinfo reflect the right values, and
   that nr_vmemmap_pages is accounted where it has to be
 - Start a memory stress program and bind it to the numa node we added
   memory to, so we make sure the new memory gets exercised
 - Wait for a while, and once the node's free pages have decreased
   considerably, offline the memory
 - Check that the memory went offline, and check /proc/zoneinfo and
   numactl -H again
 - Hot-remove the range


[1] https://patchwork.kernel.org/project/linux-mm/patch/20201022125835.26396-3-osalvador@suse.de/
[2] https://patchwork.kernel.org/project/linux-mm/cover/20201022125835.26396-1-osalvador@suse.de/
 
-- 
Oscar Salvador
SUSE L3



Thread overview: 11+ messages
2020-11-25 11:20 [RFC PATCH v2 0/4] Allocate memmap from hotadded memory (per device) Oscar Salvador
2020-11-25 11:20 ` [RFC PATCH v2 1/4] mm,memory_hotplug: Introduce MHP_MEMMAP_ON_MEMORY Oscar Salvador
2020-11-27 14:59   ` Michal Hocko
2020-11-25 11:20 ` [RFC PATCH v2 2/4] mm,memory_hotplug: Allocate memmap from the added memory range Oscar Salvador
2020-11-27 15:15   ` Michal Hocko
2020-11-30  9:12     ` Oscar Salvador
2020-11-25 11:20 ` [RFC PATCH v2 3/4] mm,memory_hotplug: Add mhp_supports_memmap_on_memory Oscar Salvador
2020-11-27 15:02   ` Michal Hocko
2020-11-30  8:50     ` Oscar Salvador
2020-11-25 11:20 ` [RFC PATCH v2 4/4] mm,memory_hotplug: Enable MHP_MEMMAP_ON_MEMORY when supported Oscar Salvador
2020-11-27 11:55   ` Oscar Salvador
