* [PATCH v6 0/8] Allocate memmap from hotadded memory (per device)
@ 2021-04-06 11:11 Oscar Salvador
  2021-04-06 11:11 ` [PATCH v6 1/8] drivers/base/memory: Introduce memory_block_{online,offline} Oscar Salvador
                   ` (7 more replies)
  0 siblings, 8 replies; 23+ messages in thread
From: Oscar Salvador @ 2021-04-06 11:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Hildenbrand, Michal Hocko, Anshuman Khandual,
	Pavel Tatashin, Vlastimil Babka, linux-mm, linux-kernel,
	Oscar Salvador

Changes from v5 -> v6:
 - Addressed feedback from Michal
 - Create memory_block_{online,offline} functions
 - Create vmemmap_* functions to deal with the vmemmap pages, so
   {online,offline}_pages remain untouched
 - Add David's adjust_present_page_count patch
 - Relax check in {offline,online}_pages
 - Rework changelogs

Changes from v4 -> v5:
 - Addressed feedback from David (patch#1)
 - Tested on x86_64 with different struct page sizes and on large/small memory
   blocks
 - Tested on arm64 with 4K, 64K (with and without THP) and with different struct
   page sizes

 NOTE: We might need to make this feature and the hugetlb-vmemmap feature [1]
       mutually exclusive. I raised the issue I see in [2].
       The hugetlb-vmemmap feature has been withdrawn for the time being due to
       the need for further changes wrt. the locking/freeing context.
       I will keep an eye on it, and when the time comes again I will see how the
       two features play together and how either one can be disabled when needed.

Changes from v3 -> v4:
 - Addressed feedback from David
 - Wrap the memmap_on_memory module parameter with #ifdef
   CONFIG_MHP_MEMMAP_ON_MEMORY
 - Move the "depends on MEMORY_HOTPLUG" to MHP_MEMMAP_ON_MEMORY
   in generic mm/Kconfig
 - Collect David's Reviewed-bys

Changes from v2 -> v3:
 - Addressed feedback from David
 - Squash former patch#4 and patch#5 into patch#1
 - Fix config dependency CONFIG_SPARSE_VMEMMAP vs CONFIG_SPARSE_VMEMMAP_ENABLE
 - Simplify module parameter functions

Changes from v1 -> v2:
 - Addressed feedback from David
 - Fence off the feature in case the struct page size is not a
   multiple of PMD size or pageblock alignment cannot be guaranteed
 - Tested on x86_64 small and large memory_blocks
 - Tested on arm64 with 4KB and 64KB page sizes (for some reason I cannot
   boot my VM with 16KB page size).

 Arm64 with 4KB page size behaves like x86_64 after [1], which made section
 size smaller.
 With 64KB, the feature gets fenced off due to pageblock alignment.

Changes from RFCv3 -> v1:
 - Addressed feedback from David
 - Re-order patches

Changes from v2 -> v3 (RFC):
 - Re-order patches (Michal)
 - Fold "mm,memory_hotplug: Introduce MHP_MEMMAP_ON_MEMORY" in patch#1
 - Add kernel boot option to enable this feature (Michal)

Changes from v1 -> v2 (RFC):
 - Addressed feedback provided by David
 - Add an arch_support_memmap_on_memory to be called
   from mhp_supports_memmap_on_memory, as at the moment
   only ARM, powerpc and x86_64 have altmap support.

[1] https://lore.kernel.org/lkml/cover.1611206601.git.sudaraja@codeaurora.org

Original cover letter:

----

 The primary goal of this patchset is to reduce the memory overhead of
 hot-added memory (at least for the SPARSEMEM_VMEMMAP memory model).
 The current way of populating the memmap (struct page array) has three main
 drawbacks:

 a) it consumes additional memory until the hotadded memory itself is
    onlined
 b) the memmap might end up on a different NUMA node, which is especially
    true for the movable_node configuration
 c) due to fragmentation, we might end up populating the memmap with base
    pages

 One way to mitigate all these issues is to simply allocate the memmap array
 (which is the biggest memory footprint of physical memory hotplug)
 from the hot-added memory itself. The SPARSEMEM_VMEMMAP memory model allows
 us to map any pfn range, so the memory doesn't need to be online to be
 usable for the array. See patch 4 for more details.
 This feature is only usable when CONFIG_SPARSEMEM_VMEMMAP is set.

 [Overall design]:

 Implementation-wise, we reuse the vmem_altmap infrastructure to override
 the default allocator used by vmemmap_populate.
 The memory_block structure gains a new field called nr_vmemmap_pages,
 which accounts for the number of vmemmap pages used by that memory_block.
 E.g.: on x86_64, that is 512 vmemmap pages for small memory blocks and 4096
       for large memory blocks (1GB).
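 (For reference, assuming 4KB base pages and a 64-byte struct page: a 128MB
  block holds 32768 pages, whose memmap takes 32768 * 64B = 2MB, i.e. 512
  pages; a 1GB block scales that to 16MB, i.e. 4096 pages.)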

 We also introduce two new functions: memory_block_{online,offline}.
 These functions take care of initializing/uninitializing vmemmap pages
 prior to calling {online,offline}_pages, so the latter functions can
 remain untouched.

 More details can be found in the respective changelogs.
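
 As a quick illustration (a sketch only, not part of the series itself), a
 hotplug driver opting in would do roughly what the ACPI driver does in
 patch#5, using the MHP_MEMMAP_ON_MEMORY flag and the
 mhp_supports_memmap_on_memory() helper introduced in patch#4:

	mhp_t mhp_flags = MHP_NONE;

	/* Use a self-hosted memmap only when the range qualifies */
	if (mhp_supports_memmap_on_memory(size))
		mhp_flags |= MHP_MEMMAP_ON_MEMORY;

	ret = __add_memory(nid, start, size, mhp_flags);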

David Hildenbrand (1):
  mm,memory_hotplug: Factor out adjusting present pages into
    adjust_present_page_count()

Oscar Salvador (7):
  drivers/base/memory: Introduce memory_block_{online,offline}
  mm,memory_hotplug: Relax fully spanned sections check
  mm,memory_hotplug: Allocate memmap from the added memory range
  acpi,memhotplug: Enable MHP_MEMMAP_ON_MEMORY when supported
  mm,memory_hotplug: Add kernel boot option to enable memmap_on_memory
  x86/Kconfig: Introduce ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE
  arm64/Kconfig: Introduce ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE

 Documentation/admin-guide/kernel-parameters.txt |  17 ++
 arch/arm64/Kconfig                              |   3 +
 arch/x86/Kconfig                                |   3 +
 drivers/acpi/acpi_memhotplug.c                  |   5 +-
 drivers/base/memory.c                           |  93 +++++++++--
 include/linux/memory.h                          |   8 +-
 include/linux/memory_hotplug.h                  |  13 ++
 include/linux/memremap.h                        |   2 +-
 include/linux/mmzone.h                          |   5 +
 mm/Kconfig                                      |   5 +
 mm/Makefile                                     |   5 +-
 mm/memory_hotplug.c                             | 199 +++++++++++++++++++++---
 12 files changed, 320 insertions(+), 38 deletions(-)

-- 
2.16.3


^ permalink raw reply	[flat|nested] 23+ messages in thread

* [PATCH v6 1/8] drivers/base/memory: Introduce memory_block_{online,offline}
  2021-04-06 11:11 [PATCH v6 0/8] Allocate memmap from hotadded memory (per device) Oscar Salvador
@ 2021-04-06 11:11 ` Oscar Salvador
  2021-04-06 11:11 ` [PATCH v6 2/8] mm,memory_hotplug: Relax fully spanned sections check Oscar Salvador
                   ` (6 subsequent siblings)
  7 siblings, 0 replies; 23+ messages in thread
From: Oscar Salvador @ 2021-04-06 11:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Hildenbrand, Michal Hocko, Anshuman Khandual,
	Pavel Tatashin, Vlastimil Babka, linux-mm, linux-kernel,
	Oscar Salvador

This is a preparatory patch that introduces two new functions:
memory_block_online() and memory_block_offline().

For now, these functions will only call online_pages() and offline_pages()
respectively, but they will later be in charge of preparing the vmemmap
pages, carrying out the initialization and proper accounting of such
pages.

Since the memory_block struct contains all the information, pass this struct
down the chain to the end functions.

Signed-off-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
 drivers/base/memory.c | 33 +++++++++++++++++++++------------
 1 file changed, 21 insertions(+), 12 deletions(-)

diff --git a/drivers/base/memory.c b/drivers/base/memory.c
index f35298425575..f209925a5d4e 100644
--- a/drivers/base/memory.c
+++ b/drivers/base/memory.c
@@ -169,30 +169,41 @@ int memory_notify(unsigned long val, void *v)
 	return blocking_notifier_call_chain(&memory_chain, val, v);
 }
 
+static int memory_block_online(struct memory_block *mem)
+{
+	unsigned long start_pfn = section_nr_to_pfn(mem->start_section_nr);
+	unsigned long nr_pages = PAGES_PER_SECTION * sections_per_block;
+
+	return online_pages(start_pfn, nr_pages, mem->online_type, mem->nid);
+}
+
+static int memory_block_offline(struct memory_block *mem)
+{
+	unsigned long start_pfn = section_nr_to_pfn(mem->start_section_nr);
+	unsigned long nr_pages = PAGES_PER_SECTION * sections_per_block;
+
+	return offline_pages(start_pfn, nr_pages);
+}
+
 /*
  * MEMORY_HOTPLUG depends on SPARSEMEM in mm/Kconfig, so it is
  * OK to have direct references to sparsemem variables in here.
  */
 static int
-memory_block_action(unsigned long start_section_nr, unsigned long action,
-		    int online_type, int nid)
+memory_block_action(struct memory_block *mem, unsigned long action)
 {
-	unsigned long start_pfn;
-	unsigned long nr_pages = PAGES_PER_SECTION * sections_per_block;
 	int ret;
 
-	start_pfn = section_nr_to_pfn(start_section_nr);
-
 	switch (action) {
 	case MEM_ONLINE:
-		ret = online_pages(start_pfn, nr_pages, online_type, nid);
+		ret = memory_block_online(mem);
 		break;
 	case MEM_OFFLINE:
-		ret = offline_pages(start_pfn, nr_pages);
+		ret = memory_block_offline(mem);
 		break;
 	default:
 		WARN(1, KERN_WARNING "%s(%ld, %ld) unknown action: "
-		     "%ld\n", __func__, start_section_nr, action, action);
+		     "%ld\n", __func__, mem->start_section_nr, action, action);
 		ret = -EINVAL;
 	}
 
@@ -210,9 +221,7 @@ static int memory_block_change_state(struct memory_block *mem,
 	if (to_state == MEM_OFFLINE)
 		mem->state = MEM_GOING_OFFLINE;
 
-	ret = memory_block_action(mem->start_section_nr, to_state,
-				  mem->online_type, mem->nid);
-
+	ret = memory_block_action(mem, to_state);
 	mem->state = ret ? from_state_req : to_state;
 
 	return ret;
-- 
2.16.3


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v6 2/8] mm,memory_hotplug: Relax fully spanned sections check
  2021-04-06 11:11 [PATCH v6 0/8] Allocate memmap from hotadded memory (per device) Oscar Salvador
  2021-04-06 11:11 ` [PATCH v6 1/8] drivers/base/memory: Introduce memory_block_{online,offline} Oscar Salvador
@ 2021-04-06 11:11 ` Oscar Salvador
  2021-04-06 15:32   ` David Hildenbrand
  2021-04-06 11:11 ` [PATCH v6 3/8] mm,memory_hotplug: Factor out adjusting present pages into adjust_present_page_count() Oscar Salvador
                   ` (5 subsequent siblings)
  7 siblings, 1 reply; 23+ messages in thread
From: Oscar Salvador @ 2021-04-06 11:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Hildenbrand, Michal Hocko, Anshuman Khandual,
	Pavel Tatashin, Vlastimil Babka, linux-mm, linux-kernel,
	Oscar Salvador

When using self-hosted vmemmap pages, the number of pages passed to
{online,offline}_pages might not fully span sections, but they always
fully span pageblocks.
Relax the check to account for that case.
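
For example, on x86_64 a 128MB memory block spans a single section (32768
4KB pages); with 512 of those pages used as self-hosted vmemmap,
{online,offline}_pages is handed 32256 pages, which is pageblock-aligned
(512-page pageblocks) but no longer section-aligned.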

Signed-off-by: Oscar Salvador <osalvador@suse.de>
---
 mm/memory_hotplug.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 0cdbbfbc5757..5fe3e3942b19 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -838,9 +838,13 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
 	int ret;
 	struct memory_notify arg;
 
-	/* We can only online full sections (e.g., SECTION_IS_ONLINE) */
+	/* We can only online full sections (e.g., SECTION_IS_ONLINE).
+	 * However, when using e.g: memmap_on_memory, some pages are initialized
+	 * prior to calling in here. The remaining amount of pages must be
+	 * pageblock aligned.
+	 */
 	if (WARN_ON_ONCE(!nr_pages ||
-			 !IS_ALIGNED(pfn | nr_pages, PAGES_PER_SECTION)))
+			 !IS_ALIGNED(pfn | nr_pages, pageblock_nr_pages)))
 		return -EINVAL;
 
 	mem_hotplug_begin();
@@ -1573,9 +1577,13 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
 	int ret, node;
 	char *reason;
 
-	/* We can only offline full sections (e.g., SECTION_IS_ONLINE) */
+	/* We can only offline full sections (e.g., SECTION_IS_ONLINE).
+	 * However, when using e.g: memmap_on_memory, some pages are initialized
+	 * prior to calling in here. The remaining amount of pages must be
+	 * pageblock aligned.
+	 */
 	if (WARN_ON_ONCE(!nr_pages ||
-			 !IS_ALIGNED(start_pfn | nr_pages, PAGES_PER_SECTION)))
+			 !IS_ALIGNED(start_pfn | nr_pages, pageblock_nr_pages)))
 		return -EINVAL;
 
 	mem_hotplug_begin();
-- 
2.16.3


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v6 3/8] mm,memory_hotplug: Factor out adjusting present pages into adjust_present_page_count()
  2021-04-06 11:11 [PATCH v6 0/8] Allocate memmap from hotadded memory (per device) Oscar Salvador
  2021-04-06 11:11 ` [PATCH v6 1/8] drivers/base/memory: Introduce memory_block_{online,offline} Oscar Salvador
  2021-04-06 11:11 ` [PATCH v6 2/8] mm,memory_hotplug: Relax fully spanned sections check Oscar Salvador
@ 2021-04-06 11:11 ` Oscar Salvador
  2021-04-06 15:33   ` David Hildenbrand
  2021-04-06 11:11 ` [PATCH v6 4/8] mm,memory_hotplug: Allocate memmap from the added memory range Oscar Salvador
                   ` (4 subsequent siblings)
  7 siblings, 1 reply; 23+ messages in thread
From: Oscar Salvador @ 2021-04-06 11:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Hildenbrand, Michal Hocko, Anshuman Khandual,
	Pavel Tatashin, Vlastimil Babka, linux-mm, linux-kernel,
	Oscar Salvador

From: David Hildenbrand <david@redhat.com>

Let's have a single place (inspired by adjust_managed_page_count()) where
we adjust present pages, to prepare for additional bookkeeping.
In contrast to adjust_managed_page_count(), only memory onlining/offlining
is allowed to modify the number of present pages.

Signed-off-by: David Hildenbrand <david@redhat.com>
Signed-off-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
---
 mm/memory_hotplug.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 5fe3e3942b19..7411f6b5287d 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -829,6 +829,16 @@ struct zone * zone_for_pfn_range(int online_type, int nid, unsigned start_pfn,
 	return default_zone_for_pfn(nid, start_pfn, nr_pages);
 }
 
+static void adjust_present_page_count(struct zone *zone, long nr_pages)
+{
+	unsigned long flags;
+
+	zone->present_pages += nr_pages;
+	pgdat_resize_lock(zone->zone_pgdat, &flags);
+	zone->zone_pgdat->node_present_pages += nr_pages;
+	pgdat_resize_unlock(zone->zone_pgdat, &flags);
+}
+
 int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
 		       int online_type, int nid)
 {
@@ -881,11 +891,7 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
 	}
 
 	online_pages_range(pfn, nr_pages);
-	zone->present_pages += nr_pages;
-
-	pgdat_resize_lock(zone->zone_pgdat, &flags);
-	zone->zone_pgdat->node_present_pages += nr_pages;
-	pgdat_resize_unlock(zone->zone_pgdat, &flags);
+	adjust_present_page_count(zone, nr_pages);
 
 	node_states_set_node(nid, &arg);
 	if (need_zonelists_rebuild)
@@ -1699,11 +1705,7 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
 
 	/* removal success */
 	adjust_managed_page_count(pfn_to_page(start_pfn), -nr_pages);
-	zone->present_pages -= nr_pages;
-
-	pgdat_resize_lock(zone->zone_pgdat, &flags);
-	zone->zone_pgdat->node_present_pages -= nr_pages;
-	pgdat_resize_unlock(zone->zone_pgdat, &flags);
+	adjust_present_page_count(zone, -nr_pages);
 
 	init_per_zone_wmark_min();
 
-- 
2.16.3


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v6 4/8] mm,memory_hotplug: Allocate memmap from the added memory range
  2021-04-06 11:11 [PATCH v6 0/8] Allocate memmap from hotadded memory (per device) Oscar Salvador
                   ` (2 preceding siblings ...)
  2021-04-06 11:11 ` [PATCH v6 3/8] mm,memory_hotplug: Factor out adjusting present pages into adjust_present_page_count() Oscar Salvador
@ 2021-04-06 11:11 ` Oscar Salvador
  2021-04-06 17:10     ` [PATCH v6 4/8] mm, memory_hotplug: " kernel test robot
  2021-04-06 20:28   ` [PATCH v6 4/8] mm,memory_hotplug: " Oscar Salvador
  2021-04-06 11:11 ` [PATCH v6 5/8] acpi,memhotplug: Enable MHP_MEMMAP_ON_MEMORY when supported Oscar Salvador
                   ` (3 subsequent siblings)
  7 siblings, 2 replies; 23+ messages in thread
From: Oscar Salvador @ 2021-04-06 11:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Hildenbrand, Michal Hocko, Anshuman Khandual,
	Pavel Tatashin, Vlastimil Babka, linux-mm, linux-kernel,
	Oscar Salvador

Physical memory hotadd has to allocate a memmap (struct page array) for
the newly added memory section. Currently, alloc_pages_node() is used
for those allocations.

This has some disadvantages:
 a) existing memory is consumed for that purpose
    (e.g., ~2MB per 128MB memory section on x86_64)
 b) if the whole node is movable then we have off-node struct pages
    which have performance drawbacks.
 c) it might be that there are no PMD_ALIGNED chunks, so the memmap array
    gets populated with base pages.

This can be improved when CONFIG_SPARSEMEM_VMEMMAP is enabled.

Vmemmap page tables can map arbitrary memory.
That means that we can simply use the beginning of each memory section and
map struct pages there.
struct pages which back the allocated space then just need to be treated
carefully.

Implementation-wise, we will reuse the vmem_altmap infrastructure to override
the default allocator used by __populate_section_memmap.
Part of the implementation also relies on the memory_block structure gaining
a new field which specifies the number of vmemmap_pages at the beginning.
This patch also introduces the following functions:

 - vmemmap_init_space: Initializes vmemmap pages by calling move_pfn_range_to_zone(),
		       calls kasan_add_zero_shadow() for the vmemmap range and marks
		       as online as many sections as the vmemmap pages fully span.
 - vmemmap_adjust_pages: Adds/subtracts vmemmap_pages to/from the node and zone
			 present_pages.
 - vmemmap_deinit_space: Undoes what vmemmap_init_space does.

The new function memory_block_online() calls vmemmap_init_space() before
doing the actual online_pages(). Should online_pages() fail, we clean up
by calling vmemmap_adjust_pages() and vmemmap_deinit_space().

On offline, memory_block_offline() calls vmemmap_adjust_pages() prior to calling
offline_pages(), because offline_pages() performs the tearing-down of kthreads
and the rebuilding of the zonelists if the node/zone become empty.
If offline_pages() fails, we account back vmemmap pages by vmemmap_adjust_pages().
If it succeeds, we call vmemmap_deinit_space().
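
In short, the resulting flow looks roughly like this (a sketch only; the
exact code, including the nr_vmemmap_pages == 0 and error-handling details,
is in the diff below):

 memory_block_online():
	vmemmap_init_space(start_pfn, nr_vmemmap_pages, nid, online_type);
	ret = online_pages(start_pfn + nr_vmemmap_pages,
			   nr_pages - nr_vmemmap_pages, online_type, nid);
	if (ret) {
		vmemmap_adjust_pages(start_pfn, -nr_vmemmap_pages);
		vmemmap_deinit_space(start_pfn, nr_vmemmap_pages);
	}

 memory_block_offline():
	vmemmap_adjust_pages(start_pfn, -nr_vmemmap_pages);
	ret = offline_pages(start_pfn + nr_vmemmap_pages,
			    nr_pages - nr_vmemmap_pages);
	if (ret)
		vmemmap_adjust_pages(start_pfn, nr_vmemmap_pages);
	else
		vmemmap_deinit_space(start_pfn, nr_vmemmap_pages);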

Hot-remove:

 We need to be careful when removing memory, as adding and
 removing memory needs to be done with the same granularity.
 To check that this assumption is not violated, we check the
 memory range we want to remove and if a) any memory block has
 vmemmap pages and b) the range spans more than a single memory
 block, we scream out loud and refuse to proceed.

 If all is good and the range was using memmap on memory (aka vmemmap pages),
 we construct an altmap structure so free_hugepage_table does the right
 thing and calls vmem_altmap_free instead of free_pagetable.

Signed-off-by: Oscar Salvador <osalvador@suse.de>
---
 drivers/base/memory.c          |  64 +++++++++++++++--
 include/linux/memory.h         |   8 ++-
 include/linux/memory_hotplug.h |  13 ++++
 include/linux/memremap.h       |   2 +-
 include/linux/mmzone.h         |   5 ++
 mm/Kconfig                     |   5 ++
 mm/memory_hotplug.c            | 153 +++++++++++++++++++++++++++++++++++++++--
 7 files changed, 238 insertions(+), 12 deletions(-)

diff --git a/drivers/base/memory.c b/drivers/base/memory.c
index f209925a5d4e..a5e536a3e9a4 100644
--- a/drivers/base/memory.c
+++ b/drivers/base/memory.c
@@ -173,16 +173,65 @@ static int memory_block_online(struct memory_block *mem)
 {
 	unsigned long start_pfn = section_nr_to_pfn(mem->start_section_nr);
 	unsigned long nr_pages = PAGES_PER_SECTION * sections_per_block;
+	unsigned long nr_vmemmap_pages = mem->nr_vmemmap_pages;
+	int ret;
+
+	/*
+	 * Although vmemmap pages have a different lifecycle than the pages
+	 * they describe (they remain until the memory is unplugged), doing
+	 * its initialization and accounting at hot-{online,offline} stage
+	 * simplifies things a lot
+	 */
+	if (nr_vmemmap_pages) {
+		ret = vmemmap_init_space(start_pfn, nr_vmemmap_pages, mem->nid,
+					 mem->online_type);
+		if (ret)
+			return ret;
+	}
 
-	return online_pages(start_pfn, nr_pages, mem->online_type, mem->nid);
+	ret = online_pages(start_pfn + nr_vmemmap_pages,
+			   nr_pages - nr_vmemmap_pages, mem->online_type,
+			   mem->nid);
+
+	/*
+	 * Undo the work if online_pages() fails.
+	 */
+	if (ret && nr_vmemmap_pages) {
+		vmemmap_adjust_pages(start_pfn, -nr_vmemmap_pages);
+		vmemmap_deinit_space(start_pfn, nr_vmemmap_pages);
+	}
+
+	return ret;
 }
 
 static int memory_block_offline(struct memory_block *mem)
 {
 	unsigned long start_pfn = section_nr_to_pfn(mem->start_section_nr);
 	unsigned long nr_pages = PAGES_PER_SECTION * sections_per_block;
+	unsigned long nr_vmemmap_pages = mem->nr_vmemmap_pages;
+	int ret;
+
+	/*
+	 * offline_pages() relies on present_pages in order to perform the
+	 * tearing-down of kthreads and the rebuilding of the zonelists.
+	 */
+	if (nr_vmemmap_pages)
+		vmemmap_adjust_pages(start_pfn, -nr_vmemmap_pages);
+
+	ret = offline_pages(start_pfn + nr_vmemmap_pages,
+			    nr_pages - nr_vmemmap_pages);
 
-	return offline_pages(start_pfn, nr_pages);
+	/*
+	 * Re-adjust present pages if offline_pages() fails.
+	 */
+	if (nr_vmemmap_pages) {
+		if (ret)
+			vmemmap_adjust_pages(start_pfn, nr_vmemmap_pages);
+		else
+			vmemmap_deinit_space(start_pfn, nr_vmemmap_pages);
+	}
+
+	return ret;
 }
 
 /*
@@ -576,7 +625,8 @@ int register_memory(struct memory_block *memory)
 	return ret;
 }
 
-static int init_memory_block(unsigned long block_id, unsigned long state)
+static int init_memory_block(unsigned long block_id, unsigned long state,
+			     unsigned long nr_vmemmap_pages)
 {
 	struct memory_block *mem;
 	int ret = 0;
@@ -593,6 +643,7 @@ static int init_memory_block(unsigned long block_id, unsigned long state)
 	mem->start_section_nr = block_id * sections_per_block;
 	mem->state = state;
 	mem->nid = NUMA_NO_NODE;
+	mem->nr_vmemmap_pages = nr_vmemmap_pages;
 
 	ret = register_memory(mem);
 
@@ -612,7 +663,7 @@ static int add_memory_block(unsigned long base_section_nr)
 	if (section_count == 0)
 		return 0;
 	return init_memory_block(memory_block_id(base_section_nr),
-				 MEM_ONLINE);
+				 MEM_ONLINE, 0);
 }
 
 static void unregister_memory(struct memory_block *memory)
@@ -634,7 +685,8 @@ static void unregister_memory(struct memory_block *memory)
  *
  * Called under device_hotplug_lock.
  */
-int create_memory_block_devices(unsigned long start, unsigned long size)
+int create_memory_block_devices(unsigned long start, unsigned long size,
+				unsigned long vmemmap_pages)
 {
 	const unsigned long start_block_id = pfn_to_block_id(PFN_DOWN(start));
 	unsigned long end_block_id = pfn_to_block_id(PFN_DOWN(start + size));
@@ -647,7 +699,7 @@ int create_memory_block_devices(unsigned long start, unsigned long size)
 		return -EINVAL;
 
 	for (block_id = start_block_id; block_id != end_block_id; block_id++) {
-		ret = init_memory_block(block_id, MEM_OFFLINE);
+		ret = init_memory_block(block_id, MEM_OFFLINE, vmemmap_pages);
 		if (ret)
 			break;
 	}
diff --git a/include/linux/memory.h b/include/linux/memory.h
index 4da95e684e20..97e92e8b556a 100644
--- a/include/linux/memory.h
+++ b/include/linux/memory.h
@@ -29,6 +29,11 @@ struct memory_block {
 	int online_type;		/* for passing data to online routine */
 	int nid;			/* NID for this memory block */
 	struct device dev;
+	/*
+	 * Number of vmemmap pages. These pages
+	 * lay at the beginning of the memory block.
+	 */
+	unsigned long nr_vmemmap_pages;
 };
 
 int arch_get_memory_phys_device(unsigned long start_pfn);
@@ -80,7 +85,8 @@ static inline int memory_notify(unsigned long val, void *v)
 #else
 extern int register_memory_notifier(struct notifier_block *nb);
 extern void unregister_memory_notifier(struct notifier_block *nb);
-int create_memory_block_devices(unsigned long start, unsigned long size);
+int create_memory_block_devices(unsigned long start, unsigned long size,
+				unsigned long vmemmap_pages);
 void remove_memory_block_devices(unsigned long start, unsigned long size);
 extern void memory_dev_init(void);
 extern int memory_notify(unsigned long val, void *v);
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 7288aa5ef73b..cc3dbc0abf02 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -55,6 +55,14 @@ typedef int __bitwise mhp_t;
  */
 #define MHP_MERGE_RESOURCE	((__force mhp_t)BIT(0))
 
+/*
+ * We want memmap (struct page array) to be self contained.
+ * To do so, we will use the beginning of the hot-added range to build
+ * the page tables for the memmap array that describes the entire range.
+ * Only selected architectures support it with SPARSE_VMEMMAP.
+ */
+#define MHP_MEMMAP_ON_MEMORY   ((__force mhp_t)BIT(1))
+
 /*
  * Extended parameters for memory hotplug:
  * altmap: alternative allocator for memmap array (optional)
@@ -100,6 +108,9 @@ extern int zone_grow_free_lists(struct zone *zone, unsigned long new_nr_pages);
 extern int zone_grow_waitqueues(struct zone *zone, unsigned long nr_pages);
 extern int add_one_highpage(struct page *page, int pfn, int bad_ppro);
 /* VM interface that may be used by firmware interface */
+extern void vmemmap_adjust_pages(unsigned long pfn, long nr_pages);
+extern int vmemmap_init_space(unsigned long pfn, unsigned long nr_pages,
+			      int nid, int online_type);
 extern int online_pages(unsigned long pfn, unsigned long nr_pages,
 			int online_type, int nid);
 extern struct zone *test_pages_in_a_zone(unsigned long start_pfn,
@@ -306,6 +317,7 @@ static inline void pgdat_resize_init(struct pglist_data *pgdat) {}
 
 #ifdef CONFIG_MEMORY_HOTREMOVE
 
+extern void vmemmap_deinit_space(unsigned long pfn, unsigned long nr_pages);
 extern void try_offline_node(int nid);
 extern int offline_pages(unsigned long start_pfn, unsigned long nr_pages);
 extern int remove_memory(int nid, u64 start, u64 size);
@@ -359,6 +371,7 @@ extern struct zone *zone_for_pfn_range(int online_type, int nid, unsigned start_
 extern int arch_create_linear_mapping(int nid, u64 start, u64 size,
 				      struct mhp_params *params);
 void arch_remove_linear_mapping(u64 start, u64 size);
+extern bool mhp_supports_memmap_on_memory(unsigned long size);
 #endif /* CONFIG_MEMORY_HOTPLUG */
 
 #endif /* __LINUX_MEMORY_HOTPLUG_H */
diff --git a/include/linux/memremap.h b/include/linux/memremap.h
index f5b464daeeca..45a79da89c5f 100644
--- a/include/linux/memremap.h
+++ b/include/linux/memremap.h
@@ -17,7 +17,7 @@ struct device;
  * @alloc: track pages consumed, private to vmemmap_populate()
  */
 struct vmem_altmap {
-	const unsigned long base_pfn;
+	unsigned long base_pfn;
 	const unsigned long end_pfn;
 	const unsigned long reserve;
 	unsigned long free;
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 47946cec7584..747e1c21d8e2 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -427,6 +427,11 @@ enum zone_type {
 	 *    techniques might use alloc_contig_range() to hide previously
 	 *    exposed pages from the buddy again (e.g., to implement some sort
 	 *    of memory unplug in virtio-mem).
+	 * 6. Memory-hotplug: when using memmap_on_memory and onlining the memory
+	 *    to the MOVABLE zone, the vmemmap pages are also placed in such
+	 *    zone. Such pages cannot be really moved around as they are
+	 *    self-stored in the range, but they are treated as movable when
+	 *    the range they describe is about to be offlined.
 	 *
 	 * In general, no unmovable allocations that degrade memory offlining
 	 * should end up in ZONE_MOVABLE. Allocators (like alloc_contig_range())
diff --git a/mm/Kconfig b/mm/Kconfig
index 24c045b24b95..febf805000f8 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -183,6 +183,11 @@ config MEMORY_HOTREMOVE
 	depends on MEMORY_HOTPLUG && ARCH_ENABLE_MEMORY_HOTREMOVE
 	depends on MIGRATION
 
+config MHP_MEMMAP_ON_MEMORY
+	def_bool y
+	depends on MEMORY_HOTPLUG && SPARSEMEM_VMEMMAP
+	depends on ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE
+
 # Heavily threaded applications may benefit from splitting the mm-wide
 # page_table_lock, so that faults on different parts of the user address
 # space can be handled with less contention: split it at this NR_CPUS.
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 7411f6b5287d..43f0daf922e6 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -42,6 +42,8 @@
 #include "internal.h"
 #include "shuffle.h"
 
+static bool memmap_on_memory;
+
 /*
  * online_page_callback contains pointer to current page onlining function.
  * Initially it is generic_online_page(). If it is required it could be
@@ -641,7 +643,12 @@ EXPORT_SYMBOL_GPL(generic_online_page);
 static void online_pages_range(unsigned long start_pfn, unsigned long nr_pages)
 {
 	const unsigned long end_pfn = start_pfn + nr_pages;
-	unsigned long pfn;
+	unsigned long pfn = start_pfn;
+
+	while (!IS_ALIGNED(pfn, MAX_ORDER_NR_PAGES)) {
+		(*online_page_callback)(pfn_to_page(pfn), pageblock_order);
+		pfn += pageblock_nr_pages;
+	}
 
 	/*
 	 * Online the pages in MAX_ORDER - 1 aligned chunks. The callback might
@@ -649,7 +656,7 @@ static void online_pages_range(unsigned long start_pfn, unsigned long nr_pages)
 	 * later). We account all pages as being online and belonging to this
 	 * zone ("present").
 	 */
-	for (pfn = start_pfn; pfn < end_pfn; pfn += MAX_ORDER_NR_PAGES)
+	for (; pfn < end_pfn; pfn += MAX_ORDER_NR_PAGES)
 		(*online_page_callback)(pfn_to_page(pfn), MAX_ORDER - 1);
 
 	/* mark all involved sections as online */
@@ -839,6 +846,45 @@ static void adjust_present_page_count(struct zone *zone, long nr_pages)
 	pgdat_resize_unlock(zone->zone_pgdat, &flags);
 }
 
+void vmemmap_adjust_pages(unsigned long pfn, long nr_pages)
+{
+	struct zone *zone = page_zone(pfn_to_page(pfn));
+
+	adjust_present_page_count(zone, nr_pages);
+}
+
+int vmemmap_init_space(unsigned long pfn, unsigned long nr_pages, int nid,
+		       int online_type)
+{
+	unsigned long end_pfn = pfn + nr_pages;
+	struct zone *zone;
+	int ret;
+
+	/*
+	 * Initialize vmemmap pages with the corresponding node, zone links set.
+	 */
+	zone = zone_for_pfn_range(online_type, nid, pfn, nr_pages);
+	move_pfn_range_to_zone(zone, pfn, nr_pages, NULL, MIGRATE_UNMOVABLE);
+
+	ret = kasan_add_zero_shadow(__va(PFN_PHYS(pfn)), PFN_PHYS(nr_pages));
+	if (ret) {
+		remove_pfn_range_from_zone(zone, pfn, nr_pages);
+		return ret;
+	}
+
+	vmemmap_adjust_pages(pfn, nr_pages);
+
+	/*
+	 * It might be that the vmemmap_pages fully span sections. If that is
+	 * the case, mark those sections online here as otherwise they will be
+	 * left offline.
+	 */
+	if (nr_pages >= PAGES_PER_SECTION)
+	        online_mem_sections(pfn, ALIGN_DOWN(end_pfn, PAGES_PER_SECTION));
+
+	return ret;
+}
+
 int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
 		       int online_type, int nid)
 {
@@ -1074,6 +1120,45 @@ static int online_memory_block(struct memory_block *mem, void *arg)
 	return device_online(&mem->dev);
 }
 
+bool mhp_supports_memmap_on_memory(unsigned long size)
+{
+	unsigned long nr_vmemmap_pages = size / PAGE_SIZE;
+	unsigned long vmemmap_size = nr_vmemmap_pages * sizeof(struct page);
+	unsigned long remaining_size = size - vmemmap_size;
+
+	/*
+	 * Besides having arch support and the feature enabled at runtime, we
+	 * need a few more assumptions to hold true:
+	 *
+	 * a) We span a single memory block: memory onlining/offlining happens
+	 *    in memory block granularity. We don't want the vmemmap of online
+	 *    memory blocks to reside on offline memory blocks. In the future,
+	 *    we might want to support variable-sized memory blocks to make the
+	 *    feature more versatile.
+	 *
+	 * b) The vmemmap pages span complete PMDs: We don't want vmemmap code
+	 *    to populate memory from the altmap for unrelated parts (i.e.,
+	 *    other memory blocks)
+	 *
+	 * c) The vmemmap pages (and thereby the pages that will be exposed to
+	 *    the buddy) have to cover full pageblocks: memory onlining/offlining
+	 *    code requires applicable ranges to be page-aligned, for example, to
+	 *    set the migratetypes properly.
+	 *
+	 * TODO: Although we have a check here to make sure that vmemmap pages
+	 *       fully populate a PMD, it is not the right place to check for
+	 *       this. A much better solution involves improving vmemmap code
+	 *       to fallback to base pages when trying to populate vmemmap using
+	 *       altmap as an alternative source of memory, and we do not exactly
+	 *       populate a single PMD.
+	 */
+	return memmap_on_memory &&
+	       IS_ENABLED(CONFIG_MHP_MEMMAP_ON_MEMORY) &&
+	       size == memory_block_size_bytes() &&
+	       IS_ALIGNED(vmemmap_size, PMD_SIZE) &&
+	       IS_ALIGNED(remaining_size, (pageblock_nr_pages << PAGE_SHIFT));
+}
+
 /*
  * NOTE: The caller must call lock_device_hotplug() to serialize hotplug
  * and online/offline operations (triggered e.g. by sysfs).
@@ -1083,6 +1168,7 @@ static int online_memory_block(struct memory_block *mem, void *arg)
 int __ref add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
 {
 	struct mhp_params params = { .pgprot = pgprot_mhp(PAGE_KERNEL) };
+	struct vmem_altmap mhp_altmap = {};
 	u64 start, size;
 	bool new_node = false;
 	int ret;
@@ -1109,13 +1195,26 @@ int __ref add_memory_resource(int nid, struct resource *res, mhp_t mhp_flags)
 		goto error;
 	new_node = ret;
 
+	/*
+	 * Self hosted memmap array
+	 */
+	if (mhp_flags & MHP_MEMMAP_ON_MEMORY) {
+		if (!mhp_supports_memmap_on_memory(size)) {
+			ret = -EINVAL;
+			goto error;
+		}
+		mhp_altmap.free = PHYS_PFN(size);
+		mhp_altmap.base_pfn = PHYS_PFN(start);
+		params.altmap = &mhp_altmap;
+	}
+
 	/* call arch's memory hotadd */
 	ret = arch_add_memory(nid, start, size, &params);
 	if (ret < 0)
 		goto error;
 
 	/* create memory block devices after memory was added */
-	ret = create_memory_block_devices(start, size);
+	ret = create_memory_block_devices(start, size, mhp_altmap.alloc);
 	if (ret) {
 		arch_remove_memory(nid, start, size, NULL);
 		goto error;
@@ -1573,6 +1672,16 @@ static int count_system_ram_pages_cb(unsigned long start_pfn,
 	return 0;
 }
 
+void vmemmap_deinit_space(unsigned long pfn, unsigned long nr_pages)
+{
+	/*
+	 * The pages associated with this vmemmap have been offlined, so
+	 * we can reset its state here in case we have page_init_poison.
+	 */
+	remove_pfn_range_from_zone(page_zone(pfn_to_page(pfn)), pfn, nr_pages);
+	kasan_remove_zero_shadow(__va(PFN_PHYS(pfn)), PFN_PHYS(nr_pages));
+}
+
 int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
 {
 	const unsigned long end_pfn = start_pfn + nr_pages;
@@ -1760,6 +1869,14 @@ static int check_memblock_offlined_cb(struct memory_block *mem, void *arg)
 	return 0;
 }
 
+static int get_nr_vmemmap_pages_cb(struct memory_block *mem, void *arg)
+{
+	/*
+	 * If not set, continue with the next block.
+	 */
+	return mem->nr_vmemmap_pages;
+}
+
 static int check_cpu_on_node(pg_data_t *pgdat)
 {
 	int cpu;
@@ -1834,6 +1951,9 @@ EXPORT_SYMBOL(try_offline_node);
 static int __ref try_remove_memory(int nid, u64 start, u64 size)
 {
 	int rc = 0;
+	struct vmem_altmap mhp_altmap = {};
+	struct vmem_altmap *altmap = NULL;
+	unsigned long nr_vmemmap_pages;
 
 	BUG_ON(check_hotplug_memory_range(start, size));
 
@@ -1846,6 +1966,31 @@ static int __ref try_remove_memory(int nid, u64 start, u64 size)
 	if (rc)
 		return rc;
 
+	/*
+	 * We only support removing memory added with MHP_MEMMAP_ON_MEMORY in
+	 * the same granularity it was added - a single memory block.
+	 */
+	if (memmap_on_memory) {
+		nr_vmemmap_pages = walk_memory_blocks(start, size, NULL,
+						      get_nr_vmemmap_pages_cb);
+		if (nr_vmemmap_pages) {
+			if (size != memory_block_size_bytes()) {
+				pr_warn("Refuse to remove %#llx - %#llx, "
+					"wrong granularity\n",
+					start, start + size);
+				return -EINVAL;
+			}
+
+			/*
+			 * Let remove_pmd_table->free_hugepage_table do the
+			 * right thing if we used vmem_altmap when hot-adding
+			 * the range.
+			 */
+			mhp_altmap.alloc = nr_vmemmap_pages;
+			altmap = &mhp_altmap;
+		}
+	}
+
 	/* remove memmap entry */
 	firmware_map_remove(start, start + size, "System RAM");
 
@@ -1857,7 +2002,7 @@ static int __ref try_remove_memory(int nid, u64 start, u64 size)
 
 	mem_hotplug_begin();
 
-	arch_remove_memory(nid, start, size, NULL);
+	arch_remove_memory(nid, start, size, altmap);
 
 	if (IS_ENABLED(CONFIG_ARCH_KEEP_MEMBLOCK)) {
 		memblock_free(start, size);
-- 
2.16.3


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v6 5/8] acpi,memhotplug: Enable MHP_MEMMAP_ON_MEMORY when supported
  2021-04-06 11:11 [PATCH v6 0/8] Allocate memmap from hotadded memory (per device) Oscar Salvador
                   ` (3 preceding siblings ...)
  2021-04-06 11:11 ` [PATCH v6 4/8] mm,memory_hotplug: Allocate memmap from the added memory range Oscar Salvador
@ 2021-04-06 11:11 ` Oscar Salvador
  2021-04-06 11:11 ` [PATCH v6 6/8] mm,memory_hotplug: Add kernel boot option to enable memmap_on_memory Oscar Salvador
                   ` (2 subsequent siblings)
  7 siblings, 0 replies; 23+ messages in thread
From: Oscar Salvador @ 2021-04-06 11:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Hildenbrand, Michal Hocko, Anshuman Khandual,
	Pavel Tatashin, Vlastimil Babka, linux-mm, linux-kernel,
	Oscar Salvador, Michal Hocko

Let the caller check whether it can pass MHP_MEMMAP_ON_MEMORY by
checking mhp_supports_memmap_on_memory().
MHP_MEMMAP_ON_MEMORY can only be set in case
ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE is enabled, the architecture supports
altmap, and the range to be added spans a single memory block.

Signed-off-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
---
 drivers/acpi/acpi_memhotplug.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/drivers/acpi/acpi_memhotplug.c b/drivers/acpi/acpi_memhotplug.c
index b02fd51e5589..8cc195c4c861 100644
--- a/drivers/acpi/acpi_memhotplug.c
+++ b/drivers/acpi/acpi_memhotplug.c
@@ -171,6 +171,7 @@ static int acpi_memory_enable_device(struct acpi_memory_device *mem_device)
 	acpi_handle handle = mem_device->device->handle;
 	int result, num_enabled = 0;
 	struct acpi_memory_info *info;
+	mhp_t mhp_flags = MHP_NONE;
 	int node;
 
 	node = acpi_get_node(handle);
@@ -194,8 +195,10 @@ static int acpi_memory_enable_device(struct acpi_memory_device *mem_device)
 		if (node < 0)
 			node = memory_add_physaddr_to_nid(info->start_addr);
 
+		if (mhp_supports_memmap_on_memory(info->length))
+			mhp_flags |= MHP_MEMMAP_ON_MEMORY;
 		result = __add_memory(node, info->start_addr, info->length,
-				      MHP_NONE);
+				      mhp_flags);
 
 		/*
 		 * If the memory block has been used by the kernel, add_memory()
-- 
2.16.3


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v6 6/8] mm,memory_hotplug: Add kernel boot option to enable memmap_on_memory
  2021-04-06 11:11 [PATCH v6 0/8] Allocate memmap from hotadded memory (per device) Oscar Salvador
                   ` (4 preceding siblings ...)
  2021-04-06 11:11 ` [PATCH v6 5/8] acpi,memhotplug: Enable MHP_MEMMAP_ON_MEMORY when supported Oscar Salvador
@ 2021-04-06 11:11 ` Oscar Salvador
  2021-04-06 11:11 ` [PATCH v6 7/8] x86/Kconfig: Introduce ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE Oscar Salvador
  2021-04-06 11:11 ` [PATCH v6 8/8] arm64/Kconfig: " Oscar Salvador
  7 siblings, 0 replies; 23+ messages in thread
From: Oscar Salvador @ 2021-04-06 11:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Hildenbrand, Michal Hocko, Anshuman Khandual,
	Pavel Tatashin, Vlastimil Babka, linux-mm, linux-kernel,
	Oscar Salvador, Michal Hocko

A self-stored memmap leads to a sparse memory situation which is unsuitable
for workloads that require large contiguous memory chunks, so make this
an opt-in which needs to be explicitly enabled.

To control this, let memory_hotplug have its own module-parameter namespace,
as suggested by David, so we can add the memory_hotplug.memmap_on_memory
parameter.
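
For example, booting with memory_hotplug.memmap_on_memory=on enables the
feature, and its current state can be read back from
/sys/module/memory_hotplug/parameters/memmap_on_memory.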

Signed-off-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: David Hildenbrand <david@redhat.com>
Acked-by: Michal Hocko <mhocko@suse.com>
---
 Documentation/admin-guide/kernel-parameters.txt | 17 +++++++++++++++++
 mm/Makefile                                     |  5 ++++-
 mm/memory_hotplug.c                             | 10 +++++++++-
 3 files changed, 30 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 04545725f187..af32c17cd4eb 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -2794,6 +2794,23 @@
 			seconds.  Use this parameter to check at some
 			other rate.  0 disables periodic checking.
 
+	memory_hotplug.memmap_on_memory
+			[KNL,X86,ARM] Boolean flag to enable this feature.
+			Format: {on | off (default)}
+			When enabled, runtime hotplugged memory will
+			allocate its internal metadata (struct pages)
+			from the hotadded memory, which allows
+			hotadding a lot of memory without requiring
+			additional memory to do so.
+			This feature is disabled by default because it
+			has some implication on large (e.g. GB)
+			allocations in some configurations (e.g. small
+			memory blocks).
+			The state of the flag can be read in
+			/sys/module/memory_hotplug/parameters/memmap_on_memory.
+			Note that even when enabled, there are a few cases where
+			the feature is not effective.
+
 	memtest=	[KNL,X86,ARM,PPC] Enable memtest
 			Format: <integer>
 			default : 0 <disable>
diff --git a/mm/Makefile b/mm/Makefile
index 72227b24a616..82ae9482f5e3 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -58,9 +58,13 @@ obj-y			:= filemap.o mempool.o oom_kill.o fadvise.o \
 page-alloc-y := page_alloc.o
 page-alloc-$(CONFIG_SHUFFLE_PAGE_ALLOCATOR) += shuffle.o
 
+# Give 'memory_hotplug' its own module-parameter namespace
+memory-hotplug-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
+
 obj-y += page-alloc.o
 obj-y += init-mm.o
 obj-y += memblock.o
+obj-y += $(memory-hotplug-y)
 
 ifdef CONFIG_MMU
 	obj-$(CONFIG_ADVISE_SYSCALLS)	+= madvise.o
@@ -83,7 +87,6 @@ obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KASAN)	+= kasan/
 obj-$(CONFIG_KFENCE) += kfence/
 obj-$(CONFIG_FAILSLAB) += failslab.o
-obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_MEMTEST)		+= memtest.o
 obj-$(CONFIG_MIGRATION) += migrate.o
 obj-$(CONFIG_TRANSPARENT_HUGEPAGE) += huge_memory.o khugepaged.o
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 43f0daf922e6..5cecc3a7d5a5 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -42,7 +42,15 @@
 #include "internal.h"
 #include "shuffle.h"
 
-static bool memmap_on_memory;
+
+/*
+ * memory_hotplug.memmap_on_memory parameter
+ */
+static bool memmap_on_memory __ro_after_init;
+#ifdef CONFIG_MHP_MEMMAP_ON_MEMORY
+module_param(memmap_on_memory, bool, 0444);
+MODULE_PARM_DESC(memmap_on_memory, "Enable memmap on memory for memory hotplug");
+#endif
 
 /*
  * online_page_callback contains pointer to current page onlining function.
-- 
2.16.3


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v6 7/8] x86/Kconfig: Introduce ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE
  2021-04-06 11:11 [PATCH v6 0/8] Allocate memmap from hotadded memory (per device) Oscar Salvador
                   ` (5 preceding siblings ...)
  2021-04-06 11:11 ` [PATCH v6 6/8] mm,memory_hotplug: Add kernel boot option to enable memmap_on_memory Oscar Salvador
@ 2021-04-06 11:11 ` Oscar Salvador
  2021-04-06 11:11 ` [PATCH v6 8/8] arm64/Kconfig: " Oscar Salvador
  7 siblings, 0 replies; 23+ messages in thread
From: Oscar Salvador @ 2021-04-06 11:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Hildenbrand, Michal Hocko, Anshuman Khandual,
	Pavel Tatashin, Vlastimil Babka, linux-mm, linux-kernel,
	Oscar Salvador

Enable x86_64 platform to use the MHP_MEMMAP_ON_MEMORY feature.

Signed-off-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
 arch/x86/Kconfig | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index 2792879d398e..9f0211df1746 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -2433,6 +2433,9 @@ config ARCH_ENABLE_MEMORY_HOTREMOVE
 	def_bool y
 	depends on MEMORY_HOTPLUG
 
+config ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE
+	def_bool y
+
 config USE_PERCPU_NUMA_NODE_ID
 	def_bool y
 	depends on NUMA
-- 
2.16.3


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* [PATCH v6 8/8] arm64/Kconfig: Introduce ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE
  2021-04-06 11:11 [PATCH v6 0/8] Allocate memmap from hotadded memory (per device) Oscar Salvador
                   ` (6 preceding siblings ...)
  2021-04-06 11:11 ` [PATCH v6 7/8] x86/Kconfig: Introduce ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE Oscar Salvador
@ 2021-04-06 11:11 ` Oscar Salvador
  7 siblings, 0 replies; 23+ messages in thread
From: Oscar Salvador @ 2021-04-06 11:11 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Hildenbrand, Michal Hocko, Anshuman Khandual,
	Pavel Tatashin, Vlastimil Babka, linux-mm, linux-kernel,
	Oscar Salvador

Enable arm64 platform to use the MHP_MEMMAP_ON_MEMORY feature.

Signed-off-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
 arch/arm64/Kconfig | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 5656e7aacd69..0e23acd173f0 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -309,6 +309,9 @@ config ARCH_ENABLE_MEMORY_HOTPLUG
 config ARCH_ENABLE_MEMORY_HOTREMOVE
 	def_bool y
 
+config ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE
+	def_bool y
+
 config SMP
 	def_bool y
 
-- 
2.16.3


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 2/8] mm,memory_hotplug: Relax fully spanned sections check
  2021-04-06 11:11 ` [PATCH v6 2/8] mm,memory_hotplug: Relax fully spanned sections check Oscar Salvador
@ 2021-04-06 15:32   ` David Hildenbrand
  2021-04-06 16:33     ` Oscar Salvador
  0 siblings, 1 reply; 23+ messages in thread
From: David Hildenbrand @ 2021-04-06 15:32 UTC (permalink / raw)
  To: Oscar Salvador, Andrew Morton
  Cc: Michal Hocko, Anshuman Khandual, Pavel Tatashin, Vlastimil Babka,
	linux-mm, linux-kernel

On 06.04.21 13:11, Oscar Salvador wrote:
> When using self-hosted vmemmap pages, the number of pages passed to
> {online,offline}_pages might not fully span sections, but they always
> fully span pageblocks.
> Relax the check to account for that case.
> 
> Signed-off-by: Oscar Salvador <osalvador@suse.de>
> ---
>   mm/memory_hotplug.c | 16 ++++++++++++----
>   1 file changed, 12 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 0cdbbfbc5757..5fe3e3942b19 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -838,9 +838,13 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
>   	int ret;
>   	struct memory_notify arg;
>   
> -	/* We can only online full sections (e.g., SECTION_IS_ONLINE) */
> +	/* We can only online full sections (e.g., SECTION_IS_ONLINE).
> +	 * However, when using e.g: memmap_on_memory, some pages are initialized
> +	 * prior to calling in here. The remaining amount of pages must be
> +	 * pageblock aligned.
> +	 */
>   	if (WARN_ON_ONCE(!nr_pages ||
> -			 !IS_ALIGNED(pfn | nr_pages, PAGES_PER_SECTION)))
> +			 !IS_ALIGNED(pfn | nr_pages, pageblock_nr_pages)))
>   		return -EINVAL;
>   
>   	mem_hotplug_begin();
> @@ -1573,9 +1577,13 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
>   	int ret, node;
>   	char *reason;
>   
> -	/* We can only offline full sections (e.g., SECTION_IS_ONLINE) */
> +	/* We can only offline full sections (e.g., SECTION_IS_ONLINE).
> +	 * However, when using e.g: memmap_on_memory, some pages are initialized
> +	 * prior to calling in here. The remaining amount of pages must be
> +	 * pageblock aligned.
> +	 */
>   	if (WARN_ON_ONCE(!nr_pages ||
> -			 !IS_ALIGNED(start_pfn | nr_pages, PAGES_PER_SECTION)))
> +			 !IS_ALIGNED(start_pfn | nr_pages, pageblock_nr_pages)))
>   		return -EINVAL;
>   
>   	mem_hotplug_begin();
> 

I'd only relax start_pfn. That way the function is pretty much 
impossible to abuse for sub-section onlining/offlining.

if (WARN_ON_ONCE(!nr_pages ||
		 !IS_ALIGNED(start_pfn, pageblock_nr_pages))
		 !IS_ALIGNED(start_pfn + nr_pages, PAGES_PER_SECTION)))

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 3/8] mm,memory_hotplug: Factor out adjusting present pages into adjust_present_page_count()
  2021-04-06 11:11 ` [PATCH v6 3/8] mm,memory_hotplug: Factor out adjusting present pages into adjust_present_page_count() Oscar Salvador
@ 2021-04-06 15:33   ` David Hildenbrand
  2021-04-07  7:11     ` Oscar Salvador
  0 siblings, 1 reply; 23+ messages in thread
From: David Hildenbrand @ 2021-04-06 15:33 UTC (permalink / raw)
  To: Oscar Salvador, Andrew Morton
  Cc: Michal Hocko, Anshuman Khandual, Pavel Tatashin, Vlastimil Babka,
	linux-mm, linux-kernel

On 06.04.21 13:11, Oscar Salvador wrote:
> From: David Hildenbrand <david@redhat.com>
> 
> Let's have a single place (inspired by adjust_managed_page_count()) where
> we adjust present pages, to prepare for additional bookkeeping.

Maybe in the context of this series, remove the "additional bookkeeping" 
part.

> In contrast to adjust_managed_page_count(), only memory onlining/offlining
> is allowed to modify the number of present pages.
> 
> Signed-off-by: David Hildenbrand <david@redhat.com>
> Signed-off-by: Oscar Salvador <osalvador@suse.de>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> ---
>   mm/memory_hotplug.c | 22 ++++++++++++----------
>   1 file changed, 12 insertions(+), 10 deletions(-)
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 5fe3e3942b19..7411f6b5287d 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -829,6 +829,16 @@ struct zone * zone_for_pfn_range(int online_type, int nid, unsigned start_pfn,
>   	return default_zone_for_pfn(nid, start_pfn, nr_pages);
>   }
>   
> +static void adjust_present_page_count(struct zone *zone, long nr_pages)
> +{
> +	unsigned long flags;
> +
> +	zone->present_pages += nr_pages;
> +	pgdat_resize_lock(zone->zone_pgdat, &flags);
> +	zone->zone_pgdat->node_present_pages += nr_pages;
> +	pgdat_resize_unlock(zone->zone_pgdat, &flags);
> +}
> +
>   int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
>   		       int online_type, int nid)
>   {
> @@ -881,11 +891,7 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
>   	}
>   
>   	online_pages_range(pfn, nr_pages);
> -	zone->present_pages += nr_pages;
> -
> -	pgdat_resize_lock(zone->zone_pgdat, &flags);
> -	zone->zone_pgdat->node_present_pages += nr_pages;
> -	pgdat_resize_unlock(zone->zone_pgdat, &flags);
> +	adjust_present_page_count(zone, nr_pages);
>   
>   	node_states_set_node(nid, &arg);
>   	if (need_zonelists_rebuild)
> @@ -1699,11 +1705,7 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
>   
>   	/* removal success */
>   	adjust_managed_page_count(pfn_to_page(start_pfn), -nr_pages);
> -	zone->present_pages -= nr_pages;
> -
> -	pgdat_resize_lock(zone->zone_pgdat, &flags);
> -	zone->zone_pgdat->node_present_pages -= nr_pages;
> -	pgdat_resize_unlock(zone->zone_pgdat, &flags);
> +	adjust_present_page_count(zone, -nr_pages);
>   
>   	init_per_zone_wmark_min();
>   
> 


-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 2/8] mm,memory_hotplug: Relax fully spanned sections check
  2021-04-06 15:32   ` David Hildenbrand
@ 2021-04-06 16:33     ` Oscar Salvador
  2021-04-06 20:43       ` David Hildenbrand
  0 siblings, 1 reply; 23+ messages in thread
From: Oscar Salvador @ 2021-04-06 16:33 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Andrew Morton, Michal Hocko, Anshuman Khandual, Pavel Tatashin,
	Vlastimil Babka, linux-mm, linux-kernel

On 2021-04-06 17:32, David Hildenbrand wrote:
> I'd only relax start_pfn. That way the function is pretty much
> impossible to abuse for sub-section onlining/offlining.
> 
> if (WARN_ON_ONCE(!nr_pages ||
> 		 !IS_ALIGNED(start_pfn, pageblock_nr_pages))
> 		 !IS_ALIGNED(start_pfn + nr_pages, PAGES_PER_SECTION)))

But this is not going to work.
When using memmap_on_memory, the nr of pages that online_pages() and 
offline_pages() get might be less than PAGES_PER_SECTION, so this check 
will always blow us up.

-- 
Oscar Salvador
SUSE L3

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 4/8] mm,memory_hotplug: Allocate memmap from the added memory range
  2021-04-06 11:11 ` [PATCH v6 4/8] mm,memory_hotplug: Allocate memmap from the added memory range Oscar Salvador
@ 2021-04-06 17:10     ` kernel test robot
  2021-04-06 20:28   ` [PATCH v6 4/8] mm,memory_hotplug: " Oscar Salvador
  1 sibling, 0 replies; 23+ messages in thread
From: kernel test robot @ 2021-04-06 17:10 UTC (permalink / raw)
  To: Oscar Salvador, Andrew Morton
  Cc: kbuild-all, clang-built-linux, Linux Memory Management List,
	David Hildenbrand, Michal Hocko, Anshuman Khandual,
	Pavel Tatashin, Vlastimil Babka, linux-kernel, Oscar Salvador

[-- Attachment #1: Type: text/plain, Size: 3937 bytes --]

Hi Oscar,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on driver-core/driver-core-testing]
[also build test ERROR on pm/linux-next tip/x86/core arm64/for-next/core linus/master v5.12-rc6]
[cannot apply to hnaz-linux-mm/master next-20210406]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/0day-ci/linux/commits/Oscar-Salvador/Allocate-memmap-from-hotadded-memory-per-device/20210406-191333
base:   https://git.kernel.org/pub/scm/linux/kernel/git/gregkh/driver-core.git 6e11b376fd74356e32d842be588e12dc9bf6e197
config: x86_64-randconfig-a015-20210406 (attached as .config)
compiler: clang version 13.0.0 (https://github.com/llvm/llvm-project a46f59a747a7273cc439efaf3b4f98d8b63d2f20)
reproduce (this is a W=1 build):
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # install x86_64 cross compiling tool for clang build
        # apt-get install binutils-x86-64-linux-gnu
        # https://github.com/0day-ci/linux/commit/4d4265dd4e598c7b0971d57894685136229f5d07
        git remote add linux-review https://github.com/0day-ci/linux
        git fetch --no-tags linux-review Oscar-Salvador/Allocate-memmap-from-hotadded-memory-per-device/20210406-191333
        git checkout 4d4265dd4e598c7b0971d57894685136229f5d07
        # save the attached .config to linux build tree
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=clang make.cross ARCH=x86_64 

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

>> drivers/base/memory.c:201:3: error: implicit declaration of function 'vmemmap_deinit_space' [-Werror,-Wimplicit-function-declaration]
                   vmemmap_deinit_space(start_pfn, nr_vmemmap_pages);
                   ^
   drivers/base/memory.c:201:3: note: did you mean 'vmemmap_init_space'?
   include/linux/memory_hotplug.h:112:12: note: 'vmemmap_init_space' declared here
   extern int vmemmap_init_space(unsigned long pfn, unsigned long nr_pages,
              ^
   drivers/base/memory.c:231:4: error: implicit declaration of function 'vmemmap_deinit_space' [-Werror,-Wimplicit-function-declaration]
                           vmemmap_deinit_space(start_pfn, nr_pages);
                           ^
   2 errors generated.


vim +/vmemmap_deinit_space +201 drivers/base/memory.c

   171	
   172	static int memory_block_online(struct memory_block *mem)
   173	{
   174		unsigned long start_pfn = section_nr_to_pfn(mem->start_section_nr);
   175		unsigned long nr_pages = PAGES_PER_SECTION * sections_per_block;
   176		unsigned long nr_vmemmap_pages = mem->nr_vmemmap_pages;
   177		int ret;
   178	
   179		/*
   180		 * Although vmemmap pages have a different lifecycle than the pages
   181		 * they describe (they remain until the memory is unplugged), doing
   182		 * their initialization and accounting at the hot-{online,offline} stage
   183		 * simplifies things a lot
   184		 */
   185		if (nr_vmemmap_pages) {
   186			ret = vmemmap_init_space(start_pfn, nr_vmemmap_pages, mem->nid,
   187						 mem->online_type);
   188			if (ret)
   189				return ret;
   190		}
   191	
   192		ret = online_pages(start_pfn + nr_vmemmap_pages,
   193				   nr_pages - nr_vmemmap_pages, mem->online_type,
   194				   mem->nid);
   195	
   196		/*
   197		 * Undo the work if online_pages() fails.
   198		 */
   199		if (ret && nr_vmemmap_pages) {
   200			vmemmap_adjust_pages(start_pfn, -nr_vmemmap_pages);
 > 201			vmemmap_deinit_space(start_pfn, nr_vmemmap_pages);
   202		}
   203	
   204		return ret;
   205	}
   206	

---
0-DAY CI Kernel Test Service, Intel Corporation
https://lists.01.org/hyperkitty/list/kbuild-all@lists.01.org

[-- Attachment #2: .config.gz --]
[-- Type: application/gzip, Size: 41796 bytes --]

^ permalink raw reply	[flat|nested] 23+ messages in thread


* Re: [PATCH v6 4/8] mm,memory_hotplug: Allocate memmap from the added memory range
  2021-04-06 11:11 ` [PATCH v6 4/8] mm,memory_hotplug: Allocate memmap from the added memory range Oscar Salvador
  2021-04-06 17:10     ` [PATCH v6 4/8] mm, memory_hotplug: " kernel test robot
@ 2021-04-06 20:28   ` Oscar Salvador
  2021-04-07 20:38     ` Oscar Salvador
  1 sibling, 1 reply; 23+ messages in thread
From: Oscar Salvador @ 2021-04-06 20:28 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Hildenbrand, Michal Hocko, Anshuman Khandual,
	Pavel Tatashin, Vlastimil Babka, linux-mm, linux-kernel

On Tue, Apr 06, 2021 at 01:11:11PM +0200, Oscar Salvador wrote:
> Physical memory hotadd has to allocate a memmap (struct page array) for
> the newly added memory section. Currently, alloc_pages_node() is used
> for those allocations.
> 
> This has some disadvantages:
>  a) existing memory is consumed for that purpose
>     (e.g.: ~2MB per 128MB memory section on x86_64)
>  b) if the whole node is movable then we have off-node struct pages,
>     which have performance drawbacks.
>  c) it might be that there are no PMD_ALIGNED chunks, so the memmap array
>     gets populated with base pages.
> 
> This can be improved when CONFIG_SPARSEMEM_VMEMMAP is enabled.
> 
> Vmemmap page tables can map arbitrary memory.
> That means that we can simply use the beginning of each memory section and
> map struct pages there.
> struct pages which back the allocated space then just need to be treated
> carefully.
> 
> Implementation-wise, we will reuse the vmem_altmap infrastructure to override
> the default allocator used by __populate_section_memmap.
> Part of the implementation also relies on the memory_block structure gaining
> a new field which specifies the number of vmemmap_pages at the beginning.
> This patch also introduces the following functions:
> 
>  - vmemmap_init_space: Initializes the vmemmap pages by calling move_pfn_range_to_zone(),
> 		       calls kasan_add_zero_shadow() for the vmemmap range and marks
> 		       as online as many sections as the vmemmap pages fully span.
>  - vmemmap_adjust_pages: Accounts/subtracts vmemmap_pages to/from the node and zone
> 			 present_pages.
>  - vmemmap_deinit_space: Undoes what vmemmap_init_space does.
> 
> The new function memory_block_online() calls vmemmap_init_space() before
> doing the actual online_pages(). Should online_pages() fail, we clean up
> by calling vmemmap_adjust_pages() and vmemmap_deinit_space().
> 
> On offline, memory_block_offline() calls vmemmap_adjust_pages() prior to calling
> offline_pages(), because offline_pages() performs the tearing-down of kthreads
> and the rebuilding of the zonelists if the node/zone becomes empty.
> If offline_pages() fails, we account the vmemmap pages back via vmemmap_adjust_pages().
> If it succeeds, we call vmemmap_deinit_space().
> 
> Hot-remove:
> 
>  We need to be careful when removing memory, as adding and
>  removing memory needs to be done with the same granularity.
>  To check that this assumption is not violated, we check the
>  memory range we want to remove and if a) any memory block has
>  vmemmap pages and b) the range spans more than a single memory
>  block, we scream out loud and refuse to proceed.
> 
>  If all is good and the range was using memmap on memory (aka vmemmap pages),
>  we construct an altmap structure so free_hugepage_table does the right
>  thing and calls vmem_altmap_free instead of free_pagetable.
> 
> Signed-off-by: Oscar Salvador <osalvador@suse.de>
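
To picture the hot-remove side described at the end of the changelog, the
granularity check and the altmap construction would sit in try_remove_memory(),
roughly along these lines (this is only an illustrative sketch, not the hunk
from the patch; get_nr_vmemmap_pages_cb is assumed to be a small
walk_memory_blocks() callback that returns mem->nr_vmemmap_pages):

	struct vmem_altmap mhp_altmap = {};
	struct vmem_altmap *altmap = NULL;
	unsigned long nr_vmemmap_pages;

	/*
	 * Memory added with MHP_MEMMAP_ON_MEMORY has to be removed with the
	 * same granularity it was added with, i.e. a single memory block.
	 */
	nr_vmemmap_pages = walk_memory_blocks(start, size, NULL,
					      get_nr_vmemmap_pages_cb);
	if (nr_vmemmap_pages) {
		if (size != memory_block_size_bytes()) {
			pr_warn("Refuse to remove %#llx - %#llx, wrong granularity\n",
				start, start + size);
			return -EINVAL;
		}

		/*
		 * Prefill the altmap with the vmemmap pages, so that
		 * free_hugepage_table() ends up calling vmem_altmap_free()
		 * instead of free_pagetable() for them.
		 */
		mhp_altmap.alloc = nr_vmemmap_pages;
		altmap = &mhp_altmap;
	}

	/*
	 * ... rest of try_remove_memory(), eventually handing altmap down to
	 * arch_remove_memory(nid, start, size, altmap) ...
	 */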

Heh, it seems I spaced out today.

We need a few things on top:

- In case of !CONFIG_MEMORY_HOTREMOVE, we still need vmemmap_deinit_space,
  as we call it from memory_block_online() too in case online_pages()
  fails. So we need to move its declaration out of #ifdef
  CONFIG_MEMORY_HOTREMOVE, with the others.
- If the vmemmap pages fully span sections, we need to mark those sections
  as online, since online_pages() will not do it for us.
  In vmemmap_deinit_space(), we need to mark them back offline.
  Since vmemmap_deinit_space() might get called from
  memory_block_online(), we also need to move offline_mem_sections()
  out of the #ifdef CONFIG_MEMORY_HOTREMOVE code.

So, we would need the following on top:

diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index cc3dbc0abf02..c7669d2accfd 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -111,6 +111,7 @@ extern int add_one_highpage(struct page *page, int pfn, int bad_ppro);
 extern void vmemmap_adjust_pages(unsigned long pfn, long nr_pages);
 extern int vmemmap_init_space(unsigned long pfn, unsigned long nr_pages,
 			      int nid, int online_type);
+extern void vmemmap_deinit_space(unsigned long pfn, unsigned long nr_pages);
 extern int online_pages(unsigned long pfn, unsigned long nr_pages,
 			int online_type, int nid);
 extern struct zone *test_pages_in_a_zone(unsigned long start_pfn,
@@ -317,7 +318,6 @@ static inline void pgdat_resize_init(struct pglist_data *pgdat) {}

 #ifdef CONFIG_MEMORY_HOTREMOVE

-extern void vmemmap_deinit_space(unsigned long pfn, unsigned long nr_pages);
 extern void try_offline_node(int nid);
 extern int offline_pages(unsigned long start_pfn, unsigned long nr_pages);
 extern int remove_memory(int nid, u64 start, u64 size);
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 747e1c21d8e2..76f4ca5ed230 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -1383,10 +1383,8 @@ static inline int online_section_nr(unsigned long nr)

 #ifdef CONFIG_MEMORY_HOTPLUG
 void online_mem_sections(unsigned long start_pfn, unsigned long end_pfn);
-#ifdef CONFIG_MEMORY_HOTREMOVE
 void offline_mem_sections(unsigned long start_pfn, unsigned long end_pfn);
 #endif
-#endif

 static inline struct mem_section *__pfn_to_section(unsigned long pfn)
 {
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 43f0daf922e6..68f11751cd82 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -885,6 +885,25 @@ int vmemmap_init_space(unsigned long pfn, unsigned long nr_pages, int nid,
 	return ret;
 }

+void vmemmap_deinit_space(unsigned long pfn, unsigned long nr_pages)
+{
+	unsigned long end_pfn = pfn + nr_pages;
+	/*
+	 * The pages associated with this vmemmap have been offlined, so
+	 * we can reset its state here in case we have page_init_poison.
+	 */
+	remove_pfn_range_from_zone(page_zone(pfn_to_page(pfn)), pfn, nr_pages);
+	kasan_remove_zero_shadow(__va(PFN_PHYS(pfn)), PFN_PHYS(nr_pages));
+
+	/*
+	 * It might be that the vmemmap_pages fully span sections. If that is
+	 * the case, mark those sections offline here as otherwise they will be
+	 * left online.
+	 */
+	if (nr_pages >= PAGES_PER_SECTION)
+		offline_mem_sections(pfn, ALIGN_DOWN(end_pfn, PAGES_PER_SECTION));
+}
+
 int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
 		       int online_type, int nid)
 {
@@ -1672,16 +1691,6 @@ static int count_system_ram_pages_cb(unsigned long start_pfn,
 	return 0;
 }

-void vmemmap_deinit_space(unsigned long pfn, unsigned long nr_pages)
-{
-	/*
-	 * The pages associated with this vmemmap have been offlined, so
-	 * we can reset its state here in case we have page_init_poison.
-	 */
-	remove_pfn_range_from_zone(page_zone(pfn_to_page(pfn)), pfn, nr_pages);
-	kasan_remove_zero_shadow(__va(PFN_PHYS(pfn)), PFN_PHYS(nr_pages));
-}
-
 int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
 {
 	const unsigned long end_pfn = start_pfn + nr_pages;
diff --git a/mm/sparse.c b/mm/sparse.c
index 7bd23f9d6cef..8e96cf00536b 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -623,7 +623,6 @@ void online_mem_sections(unsigned long start_pfn, unsigned long end_pfn)
 	}
 }

-#ifdef CONFIG_MEMORY_HOTREMOVE
 /* Mark all memory sections within the pfn range as offline */
 void offline_mem_sections(unsigned long start_pfn, unsigned long end_pfn)
 {
@@ -644,7 +643,6 @@ void offline_mem_sections(unsigned long start_pfn, unsigned long end_pfn)
 		ms->section_mem_map &= ~SECTION_IS_ONLINE;
 	}
 }
-#endif

 #ifdef CONFIG_SPARSEMEM_VMEMMAP
 static struct page * __meminit populate_section_memmap(unsigned long pfn,


-- 
Oscar Salvador
SUSE L3

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 2/8] mm,memory_hotplug: Relax fully spanned sections check
  2021-04-06 16:33     ` Oscar Salvador
@ 2021-04-06 20:43       ` David Hildenbrand
  2021-04-07  7:42         ` Oscar Salvador
  0 siblings, 1 reply; 23+ messages in thread
From: David Hildenbrand @ 2021-04-06 20:43 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: David Hildenbrand, Andrew Morton, Michal Hocko,
	Anshuman Khandual, Pavel Tatashin, Vlastimil Babka, linux-mm,
	linux-kernel


> Am 06.04.2021 um 18:34 schrieb Oscar Salvador <osalvador@suse.de>:
> 
> On 2021-04-06 17:32, David Hildenbrand wrote:
>> I'd only relax start_pfn. That way the function is pretty much
>> impossible to abuse for sub-section onlining/offlining.
>> if (WARN_ON_ONCE(!nr_pages ||
>>         !IS_ALIGNED(start_pfn, pageblock_nr_pages) ||
>>         !IS_ALIGNED(start_pfn + nr_pages, PAGES_PER_SECTION)))
> 
> But this is not going to work.
> When using memmap_on_memory, the nr of pages that online_pages() and offline_pages() get might be less than PAGES_PER_SECTION, so this check will always blow us up.

But not end_pfn as given in my version or what am I missing?


> -- 
> Oscar Salvador
> SUSE L3
> 


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 3/8] mm,memory_hotplug: Factor out adjusting present pages into adjust_present_page_count()
  2021-04-06 15:33   ` David Hildenbrand
@ 2021-04-07  7:11     ` Oscar Salvador
  0 siblings, 0 replies; 23+ messages in thread
From: Oscar Salvador @ 2021-04-07  7:11 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Andrew Morton, Michal Hocko, Anshuman Khandual, Pavel Tatashin,
	Vlastimil Babka, linux-mm, linux-kernel

On Tue, Apr 06, 2021 at 05:33:46PM +0200, David Hildenbrand wrote:
> On 06.04.21 13:11, Oscar Salvador wrote:
> > From: David Hildenbrand <david@redhat.com>
> > 
> > Let's have a single place (inspired by adjust_managed_page_count()) where
> > we adjust present pages, to prepare for additional bookkeeping.
> 
> Maybe in the context of this series, remove the "additional bookkeeping"
> part.

Definitely, it was a slip.

Thanks

-- 
Oscar Salvador
SUSE L3

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 2/8] mm,memory_hotplug: Relax fully spanned sections check
  2021-04-06 20:43       ` David Hildenbrand
@ 2021-04-07  7:42         ` Oscar Salvador
  2021-04-07  7:43           ` David Hildenbrand
  0 siblings, 1 reply; 23+ messages in thread
From: Oscar Salvador @ 2021-04-07  7:42 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Andrew Morton, Michal Hocko, Anshuman Khandual, Pavel Tatashin,
	Vlastimil Babka, linux-mm, linux-kernel

On Tue, Apr 06, 2021 at 10:43:01PM +0200, David Hildenbrand wrote:
> But not end_pfn as given in my version or what am I missing?

Nah, it was my fault; I somehow read it as:

 if (WARN_ON_ONCE(!nr_pages ||
         !IS_ALIGNED(start_pfn, pageblock_nr_pages))
         !IS_ALIGNED(start_pfn | nr_pages, PAGES_PER_SECTION)))

which would not work.

I agree that keeping the PAGES_PER_SECTION check makes this safer, so this all
should have been:

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 0cdbbfbc5757..25e59d5dc13c 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -838,9 +838,14 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
        int ret;
        struct memory_notify arg;
 
-       /* We can only online full sections (e.g., SECTION_IS_ONLINE) */
+       /* We can only online full sections (e.g., SECTION_IS_ONLINE).
+        * However, when using e.g. memmap_on_memory, some pages are initialized
+        * prior to calling in here. The remaining amount of pages must be
+        * pageblock aligned.
+        */
        if (WARN_ON_ONCE(!nr_pages ||
-                        !IS_ALIGNED(pfn | nr_pages, PAGES_PER_SECTION)))
+                        !IS_ALIGNED(pfn, pageblock_nr_pages) ||
+                        !IS_ALIGNED(pfn + nr_pages, PAGES_PER_SECTION)))
                return -EINVAL;
 
        mem_hotplug_begin();
@@ -1573,9 +1578,14 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
        int ret, node;
        char *reason;
 
-       /* We can only offline full sections (e.g., SECTION_IS_ONLINE) */
+       /* We can only offline full sections (e.g., SECTION_IS_ONLINE).
+        * However, when using e.g. memmap_on_memory, some pages are initialized
+        * prior to calling in here. The remaining amount of pages must be
+        * pageblock aligned.
+        */
        if (WARN_ON_ONCE(!nr_pages ||
-                        !IS_ALIGNED(start_pfn | nr_pages, PAGES_PER_SECTION)))
+                        !IS_ALIGNED(start_pfn, pageblock_nr_pages) ||
+                        !IS_ALIGNED(start_pfn + nr_pages, PAGES_PER_SECTION)))
                return -EINVAL;
 
        mem_hotplug_begin();
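
To make the relaxed check a bit more concrete, a quick worked example (the
numbers assume x86_64 with 4K base pages, 128MB memory blocks/sections and a
64-byte struct page, so the memmap of one block takes 2MB):

	PAGES_PER_SECTION  = 32768   (128MB / 4KB)
	pageblock_nr_pages = 512     (2MB / 4KB)
	nr_vmemmap_pages   = 512     (2MB worth of struct pages)

	memory_block_online() then calls:
		online_pages(start_pfn + 512, 32768 - 512, ...)

	The old check, IS_ALIGNED(pfn | nr_pages, PAGES_PER_SECTION), fails,
	as neither start_pfn + 512 nor 32256 is section aligned.
	With the check above, pfn is pageblock aligned and pfn + nr_pages
	lands on start_pfn + 32768, which is section aligned, so the
	partially populated block is accepted, while ranges that are not
	pageblock aligned or do not end on a section boundary keep being
	rejected.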

 

-- 
Oscar Salvador
SUSE L3

^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 2/8] mm,memory_hotplug: Relax fully spanned sections check
  2021-04-07  7:42         ` Oscar Salvador
@ 2021-04-07  7:43           ` David Hildenbrand
  0 siblings, 0 replies; 23+ messages in thread
From: David Hildenbrand @ 2021-04-07  7:43 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Andrew Morton, Michal Hocko, Anshuman Khandual, Pavel Tatashin,
	Vlastimil Babka, linux-mm, linux-kernel

On 07.04.21 09:42, Oscar Salvador wrote:
> On Tue, Apr 06, 2021 at 10:43:01PM +0200, David Hildenbrand wrote:
>> But not end_pfn as given in my version or what am I missing?
> 
> Nah, it was my fault; I somehow read it as:
> 
>   if (WARN_ON_ONCE(!nr_pages ||
>           !IS_ALIGNED(start_pfn, pageblock_nr_pages))
>           !IS_ALIGNED(start_pfn | nr_pages, PAGES_PER_SECTION)))
> 
> which would not work.
> 
> I agree that keeping the PAGES_PER_SECTION check makes this safer, so this all
> should have been:
> 
> diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
> index 0cdbbfbc5757..25e59d5dc13c 100644
> --- a/mm/memory_hotplug.c
> +++ b/mm/memory_hotplug.c
> @@ -838,9 +838,14 @@ int __ref online_pages(unsigned long pfn, unsigned long nr_pages,
>          int ret;
>          struct memory_notify arg;
>   
> -       /* We can only online full sections (e.g., SECTION_IS_ONLINE) */
> +       /* We can only online full sections (e.g., SECTION_IS_ONLINE).
> +        * However, when using e.g. memmap_on_memory, some pages are initialized
> +        * prior to calling in here. The remaining amount of pages must be
> +        * pageblock aligned.
> +        */
>          if (WARN_ON_ONCE(!nr_pages ||
> -                        !IS_ALIGNED(pfn | nr_pages, PAGES_PER_SECTION)))
> +                        !IS_ALIGNED(pfn, pageblock_nr_pages) ||
> +                        !IS_ALIGNED(pfn + nr_pages, PAGES_PER_SECTION)))
>                  return -EINVAL;
>   
>          mem_hotplug_begin();
> @@ -1573,9 +1578,14 @@ int __ref offline_pages(unsigned long start_pfn, unsigned long nr_pages)
>          int ret, node;
>          char *reason;
>   
> -       /* We can only offline full sections (e.g., SECTION_IS_ONLINE) */
> +       /* We can only offline full sections (e.g., SECTION_IS_ONLINE).
> +        * However, when using e.g. memmap_on_memory, some pages are initialized
> +        * prior to calling in here. The remaining amount of pages must be
> +        * pageblock aligned.
> +        */
>          if (WARN_ON_ONCE(!nr_pages ||
> -                        !IS_ALIGNED(start_pfn | nr_pages, PAGES_PER_SECTION)))
> +                        !IS_ALIGNED(start_pfn, pageblock_nr_pages) ||
> +                        !IS_ALIGNED(start_pfn + nr_pages, PAGES_PER_SECTION)))
>                  return -EINVAL;
>   
>          mem_hotplug_begin();
> 
>   
> 

Reviewed-by: David Hildenbrand <david@redhat.com>

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 4/8] mm,memory_hotplug: Allocate memmap from the added memory range
  2021-04-06 20:28   ` [PATCH v6 4/8] mm,memory_hotplug: " Oscar Salvador
@ 2021-04-07 20:38     ` Oscar Salvador
  2021-04-09  5:05       ` Andrew Morton
  0 siblings, 1 reply; 23+ messages in thread
From: Oscar Salvador @ 2021-04-07 20:38 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Hildenbrand, Michal Hocko, Anshuman Khandual,
	Pavel Tatashin, Vlastimil Babka, linux-mm, linux-kernel

On 2021-04-06 22:28, Oscar Salvador wrote:
> Heh, it seems I spaced out today.
> 
> We need a few things on top:
> 

Should I send a new version with the fixup included?
I think it would ease the review but I do not want to spam.


-- 
Oscar Salvador
SUSE L3

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 4/8] mm,memory_hotplug: Allocate memmap from the added memory range
  2021-04-07 20:38     ` Oscar Salvador
@ 2021-04-09  5:05       ` Andrew Morton
  2021-04-09  5:10         ` Oscar Salvador
  0 siblings, 1 reply; 23+ messages in thread
From: Andrew Morton @ 2021-04-09  5:05 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: David Hildenbrand, Michal Hocko, Anshuman Khandual,
	Pavel Tatashin, Vlastimil Babka, linux-mm, linux-kernel

On Wed, 07 Apr 2021 22:38:37 +0200 Oscar Salvador <osalvador@suse.de> wrote:

> On 2021-04-06 22:28, Oscar Salvador wrote:
> > Heh, it seems I spaced out today.
> > 
> > We need a few things on top:
> > 
> 

Yes please.

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 4/8] mm,memory_hotplug: Allocate memmap from the added memory range
  2021-04-09  5:05       ` Andrew Morton
@ 2021-04-09  5:10         ` Oscar Salvador
  2021-04-09  8:10           ` David Hildenbrand
  0 siblings, 1 reply; 23+ messages in thread
From: Oscar Salvador @ 2021-04-09  5:10 UTC (permalink / raw)
  To: Andrew Morton
  Cc: David Hildenbrand, Michal Hocko, Anshuman Khandual,
	Pavel Tatashin, Vlastimil Babka, linux-mm, linux-kernel

On Thu, Apr 08, 2021 at 10:05:18PM -0700, Andrew Morton wrote:
> Yes please.

I just sent v7 with that yesterday [1].

Hope David/Michal find some time to review patch#4, as that is the only
missing piece atm.

[1] https://lkml.org/lkml/2021/4/8/546


-- 
Oscar Salvador
SUSE L3

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH v6 4/8] mm,memory_hotplug: Allocate memmap from the added memory range
  2021-04-09  5:10         ` Oscar Salvador
@ 2021-04-09  8:10           ` David Hildenbrand
  0 siblings, 0 replies; 23+ messages in thread
From: David Hildenbrand @ 2021-04-09  8:10 UTC (permalink / raw)
  To: Oscar Salvador, Andrew Morton
  Cc: Michal Hocko, Anshuman Khandual, Pavel Tatashin, Vlastimil Babka,
	linux-mm, linux-kernel

On 09.04.21 07:10, Oscar Salvador wrote:
> On Thu, Apr 08, 2021 at 10:05:18PM -0700, Andrew Morton wrote:
>> Yes please.
> 
> I just sent v7 with that yesterday [1]
> 
> Hope David/Michal finds some time to review patch#4 as that is the only
> missing piece atm.

I will have a look next week. Still have to wrap my head around the
present_pages stuff and whether that couldn't be handled in a cleaner way.

-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 23+ messages in thread

end of thread, other threads:[~2021-04-09  8:11 UTC | newest]

Thread overview: 23+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-04-06 11:11 [PATCH v6 0/8] Allocate memmap from hotadded memory (per device) Oscar Salvador
2021-04-06 11:11 ` [PATCH v6 1/8] drivers/base/memory: Introduce memory_block_{online,offline} Oscar Salvador
2021-04-06 11:11 ` [PATCH v6 2/8] mm,memory_hotplug: Relax fully spanned sections check Oscar Salvador
2021-04-06 15:32   ` David Hildenbrand
2021-04-06 16:33     ` Oscar Salvador
2021-04-06 20:43       ` David Hildenbrand
2021-04-07  7:42         ` Oscar Salvador
2021-04-07  7:43           ` David Hildenbrand
2021-04-06 11:11 ` [PATCH v6 3/8] mm,memory_hotplug: Factor out adjusting present pages into adjust_present_page_count() Oscar Salvador
2021-04-06 15:33   ` David Hildenbrand
2021-04-07  7:11     ` Oscar Salvador
2021-04-06 11:11 ` [PATCH v6 4/8] mm,memory_hotplug: Allocate memmap from the added memory range Oscar Salvador
2021-04-06 17:10   ` kernel test robot
2021-04-06 20:28   ` [PATCH v6 4/8] mm,memory_hotplug: " Oscar Salvador
2021-04-07 20:38     ` Oscar Salvador
2021-04-09  5:05       ` Andrew Morton
2021-04-09  5:10         ` Oscar Salvador
2021-04-09  8:10           ` David Hildenbrand
2021-04-06 11:11 ` [PATCH v6 5/8] acpi,memhotplug: Enable MHP_MEMMAP_ON_MEMORY when supported Oscar Salvador
2021-04-06 11:11 ` [PATCH v6 6/8] mm,memory_hotplug: Add kernel boot option to enable memmap_on_memory Oscar Salvador
2021-04-06 11:11 ` [PATCH v6 7/8] x86/Kconfig: Introduce ARCH_MHP_MEMMAP_ON_MEMORY_ENABLE Oscar Salvador
2021-04-06 11:11 ` [PATCH v6 8/8] arm64/Kconfig: " Oscar Salvador

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.