linux-kernel.vger.kernel.org archive mirror
* [RFC 0/4] mm/memblock: Skip prep and initialization of struct pages freed later by HVO
@ 2023-07-24 13:46 Usama Arif
  2023-07-24 13:46 ` [RFC 1/4] mm/hugetlb: Skip prep of tail pages when HVO is enabled Usama Arif
                   ` (4 more replies)
  0 siblings, 5 replies; 10+ messages in thread
From: Usama Arif @ 2023-07-24 13:46 UTC (permalink / raw)
  To: linux-mm, muchun.song, mike.kravetz, rppt
  Cc: linux-kernel, fam.zheng, liangma, simon.evans, punit.agrawal, Usama Arif

If the region is for gigantic hugepages and if HVO is enabled, then those
struct pages which will be freed later by HVO don't need to be prepared and
initialized. This can save significant time when a large number of hugepages
are allocated at boot time.

For a 1G hugepage, this series avoids the initialization and preparation of
262144 - 64 = 262080 struct pages per hugepage.
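
For reference, assuming 4K base pages and a 64-byte struct page (as on x86-64),
the numbers work out as:

  base pages per 1G hugepage  = 1G / 4K = 262144
  struct pages kept by HVO    = HUGETLB_VMEMMAP_RESERVE_SIZE / sizeof(struct page)
                              = 4096 / 64 = 64   (one vmemmap page)
  struct pages skipped        = 262144 - 64 = 262080 per 1G hugepage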

When tested on a 512G system (which can allocate at most 500 1G hugepages), the
kexec boot time to running init, with HVO and DEFERRED_STRUCT_PAGE_INIT enabled
but without this patch series, is 3.9 seconds. With this series it is 1.2 seconds.
This represents an approximately 70% reduction in boot time and will
significantly reduce server downtime when using a large number of
gigantic pages.

Thanks,
Usama

Usama Arif (4):
  mm/hugetlb: Skip prep of tail pages when HVO is enabled
  mm/memblock: Add hugepage_size member to struct memblock_region
  mm/hugetlb_vmemmap: Use nid of the head page to reallocate it
  mm/memblock: Skip initialization of struct pages freed later by HVO

 arch/arm64/mm/kasan_init.c                   |  2 +-
 arch/powerpc/platforms/pasemi/iommu.c        |  2 +-
 arch/powerpc/platforms/pseries/setup.c       |  4 +-
 arch/powerpc/sysdev/dart_iommu.c             |  2 +-
 include/linux/memblock.h                     |  8 +-
 mm/cma.c                                     |  4 +-
 mm/hugetlb.c                                 | 36 +++++---
 mm/hugetlb_vmemmap.c                         |  6 +-
 mm/hugetlb_vmemmap.h                         |  4 +
 mm/memblock.c                                | 87 +++++++++++++-------
 mm/mm_init.c                                 |  2 +-
 mm/sparse-vmemmap.c                          |  2 +-
 tools/testing/memblock/tests/alloc_nid_api.c |  2 +-
 13 files changed, 106 insertions(+), 55 deletions(-)

-- 
2.25.1



* [RFC 1/4] mm/hugetlb: Skip prep of tail pages when HVO is enabled
  2023-07-24 13:46 [RFC 0/4] mm/memblock: Skip prep and initialization of struct pages freed later by HVO Usama Arif
@ 2023-07-24 13:46 ` Usama Arif
  2023-07-24 13:46 ` [RFC 2/4] mm/memblock: Add hugepage_size member to struct memblock_region Usama Arif
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 10+ messages in thread
From: Usama Arif @ 2023-07-24 13:46 UTC (permalink / raw)
  To: linux-mm, muchun.song, mike.kravetz, rppt
  Cc: linux-kernel, fam.zheng, liangma, simon.evans, punit.agrawal, Usama Arif

When the vmemmap is optimizable, hugetlb_vmemmap_optimize will free
all the duplicated tail pages while preparing the new hugepage. Hence,
there is no need to prepare them in the first place.

For 1G x86 hugepages, it avoids preparing
262144 - 64 = 262080 struct pages per hugepage.

Signed-off-by: Usama Arif <usama.arif@bytedance.com>
---
 mm/hugetlb.c         | 30 +++++++++++++++++++++---------
 mm/hugetlb_vmemmap.c |  2 +-
 mm/hugetlb_vmemmap.h |  1 +
 3 files changed, 23 insertions(+), 10 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 64a3239b6407..24352abbb9e5 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1943,13 +1943,22 @@ static void prep_new_hugetlb_folio(struct hstate *h, struct folio *folio, int ni
 }
 
 static bool __prep_compound_gigantic_folio(struct folio *folio,
-					unsigned int order, bool demote)
+					unsigned int order, bool demote,
+					bool hugetlb_vmemmap_optimizable)
 {
 	int i, j;
 	int nr_pages = 1 << order;
 	struct page *p;
 
 	__folio_clear_reserved(folio);
+
+	/*
+	 * No need to prep pages that will be freed later by hugetlb_vmemmap_optimize
+	 * in prep_new_huge_page. Hence, reduce nr_pages to the pages that will be kept.
+	 */
+	if (hugetlb_vmemmap_optimizable)
+		nr_pages = HUGETLB_VMEMMAP_RESERVE_SIZE / sizeof(struct page);
+
 	for (i = 0; i < nr_pages; i++) {
 		p = folio_page(folio, i);
 
@@ -2020,15 +2029,15 @@ static bool __prep_compound_gigantic_folio(struct folio *folio,
 }
 
 static bool prep_compound_gigantic_folio(struct folio *folio,
-							unsigned int order)
+							unsigned int order, bool hugetlb_vmemmap_optimizable)
 {
-	return __prep_compound_gigantic_folio(folio, order, false);
+	return __prep_compound_gigantic_folio(folio, order, false, hugetlb_vmemmap_optimizable);
 }
 
 static bool prep_compound_gigantic_folio_for_demote(struct folio *folio,
-							unsigned int order)
+							unsigned int order, bool hugetlb_vmemmap_optimizable)
 {
-	return __prep_compound_gigantic_folio(folio, order, true);
+	return __prep_compound_gigantic_folio(folio, order, true, hugetlb_vmemmap_optimizable);
 }
 
 /*
@@ -2185,7 +2194,8 @@ static struct folio *alloc_fresh_hugetlb_folio(struct hstate *h,
 	if (!folio)
 		return NULL;
 	if (hstate_is_gigantic(h)) {
-		if (!prep_compound_gigantic_folio(folio, huge_page_order(h))) {
+		if (!prep_compound_gigantic_folio(folio, huge_page_order(h),
+							 vmemmap_should_optimize(h, &folio->page))) {
 			/*
 			 * Rare failure to convert pages to compound page.
 			 * Free pages and try again - ONCE!
@@ -3201,7 +3211,8 @@ static void __init gather_bootmem_prealloc(void)
 
 		VM_BUG_ON(!hstate_is_gigantic(h));
 		WARN_ON(folio_ref_count(folio) != 1);
-		if (prep_compound_gigantic_folio(folio, huge_page_order(h))) {
+		if (prep_compound_gigantic_folio(folio, huge_page_order(h),
+						vmemmap_should_optimize(h, page))) {
 			WARN_ON(folio_test_reserved(folio));
 			prep_new_hugetlb_folio(h, folio, folio_nid(folio));
 			free_huge_page(page); /* add to the hugepage allocator */
@@ -3624,8 +3635,9 @@ static int demote_free_hugetlb_folio(struct hstate *h, struct folio *folio)
 		subpage = folio_page(folio, i);
 		inner_folio = page_folio(subpage);
 		if (hstate_is_gigantic(target_hstate))
-			prep_compound_gigantic_folio_for_demote(inner_folio,
-							target_hstate->order);
+			prep_compound_gigantic_folio_for_demote(folio,
+							target_hstate->order,
+							vmemmap_should_optimize(target_hstate, subpage));
 		else
 			prep_compound_page(subpage, target_hstate->order);
 		folio_change_private(inner_folio, NULL);
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index c2007ef5e9b0..b721e87de2b3 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -486,7 +486,7 @@ int hugetlb_vmemmap_restore(const struct hstate *h, struct page *head)
 }
 
 /* Return true iff a HugeTLB whose vmemmap should and can be optimized. */
-static bool vmemmap_should_optimize(const struct hstate *h, const struct page *head)
+bool vmemmap_should_optimize(const struct hstate *h, const struct page *head)
 {
 	if (!READ_ONCE(vmemmap_optimize_enabled))
 		return false;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 25bd0e002431..3525c514c061 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -57,4 +57,5 @@ static inline bool hugetlb_vmemmap_optimizable(const struct hstate *h)
 {
 	return hugetlb_vmemmap_optimizable_size(h) != 0;
 }
+bool vmemmap_should_optimize(const struct hstate *h, const struct page *head);
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
-- 
2.25.1



* [RFC 2/4] mm/memblock: Add hugepage_size member to struct memblock_region
  2023-07-24 13:46 [RFC 0/4] mm/memblock: Skip prep and initialization of struct pages freed later by HVO Usama Arif
  2023-07-24 13:46 ` [RFC 1/4] mm/hugetlb: Skip prep of tail pages when HVO is enabled Usama Arif
@ 2023-07-24 13:46 ` Usama Arif
  2023-07-26 11:01   ` Mike Rapoport
  2023-07-24 13:46 ` [RFC 3/4] mm/hugetlb_vmemmap: Use nid of the head page to reallocate it Usama Arif
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 10+ messages in thread
From: Usama Arif @ 2023-07-24 13:46 UTC (permalink / raw)
  To: linux-mm, muchun.song, mike.kravetz, rppt
  Cc: linux-kernel, fam.zheng, liangma, simon.evans, punit.agrawal, Usama Arif

This propagates the hugepage size from the memblock APIs
(memblock_alloc_try_nid_raw and memblock_alloc_range_nid)
so that it can be stored in struct memblock_region. This does not
introduce any functional change and hugepage_size is not used in
this commit. It is just a setup for the next commit, where hugepage_size
is used to skip initialization of struct pages that will be freed later
when HVO is enabled.

Signed-off-by: Usama Arif <usama.arif@bytedance.com>
---
 arch/arm64/mm/kasan_init.c                   |  2 +-
 arch/powerpc/platforms/pasemi/iommu.c        |  2 +-
 arch/powerpc/platforms/pseries/setup.c       |  4 +-
 arch/powerpc/sysdev/dart_iommu.c             |  2 +-
 include/linux/memblock.h                     |  8 ++-
 mm/cma.c                                     |  4 +-
 mm/hugetlb.c                                 |  6 +-
 mm/memblock.c                                | 60 ++++++++++++--------
 mm/mm_init.c                                 |  2 +-
 mm/sparse-vmemmap.c                          |  2 +-
 tools/testing/memblock/tests/alloc_nid_api.c |  2 +-
 11 files changed, 56 insertions(+), 38 deletions(-)

diff --git a/arch/arm64/mm/kasan_init.c b/arch/arm64/mm/kasan_init.c
index f17d066e85eb..39992a418891 100644
--- a/arch/arm64/mm/kasan_init.c
+++ b/arch/arm64/mm/kasan_init.c
@@ -50,7 +50,7 @@ static phys_addr_t __init kasan_alloc_raw_page(int node)
 	void *p = memblock_alloc_try_nid_raw(PAGE_SIZE, PAGE_SIZE,
 						__pa(MAX_DMA_ADDRESS),
 						MEMBLOCK_ALLOC_NOLEAKTRACE,
-						node);
+						node, 0);
 	if (!p)
 		panic("%s: Failed to allocate %lu bytes align=0x%lx nid=%d from=%llx\n",
 		      __func__, PAGE_SIZE, PAGE_SIZE, node,
diff --git a/arch/powerpc/platforms/pasemi/iommu.c b/arch/powerpc/platforms/pasemi/iommu.c
index 375487cba874..6963cdf76bce 100644
--- a/arch/powerpc/platforms/pasemi/iommu.c
+++ b/arch/powerpc/platforms/pasemi/iommu.c
@@ -201,7 +201,7 @@ static int __init iob_init(struct device_node *dn)
 	/* For 2G space, 8x64 pages (2^21 bytes) is max total l2 size */
 	iob_l2_base = memblock_alloc_try_nid_raw(1UL << 21, 1UL << 21,
 					MEMBLOCK_LOW_LIMIT, 0x80000000,
-					NUMA_NO_NODE);
+					NUMA_NO_NODE, 0);
 	if (!iob_l2_base)
 		panic("%s: Failed to allocate %lu bytes align=0x%lx max_addr=%x\n",
 		      __func__, 1UL << 21, 1UL << 21, 0x80000000);
diff --git a/arch/powerpc/platforms/pseries/setup.c b/arch/powerpc/platforms/pseries/setup.c
index e2a57cfa6c83..cec7198b59d2 100644
--- a/arch/powerpc/platforms/pseries/setup.c
+++ b/arch/powerpc/platforms/pseries/setup.c
@@ -160,7 +160,7 @@ static void __init fwnmi_init(void)
 	 */
 	mce_data_buf = memblock_alloc_try_nid_raw(RTAS_ERROR_LOG_MAX * nr_cpus,
 					RTAS_ERROR_LOG_MAX, MEMBLOCK_LOW_LIMIT,
-					ppc64_rma_size, NUMA_NO_NODE);
+					ppc64_rma_size, NUMA_NO_NODE, 0);
 	if (!mce_data_buf)
 		panic("Failed to allocate %d bytes below %pa for MCE buffer\n",
 		      RTAS_ERROR_LOG_MAX * nr_cpus, &ppc64_rma_size);
@@ -176,7 +176,7 @@ static void __init fwnmi_init(void)
 		size = sizeof(struct slb_entry) * mmu_slb_size * nr_cpus;
 		slb_ptr = memblock_alloc_try_nid_raw(size,
 				sizeof(struct slb_entry), MEMBLOCK_LOW_LIMIT,
-				ppc64_rma_size, NUMA_NO_NODE);
+				ppc64_rma_size, NUMA_NO_NODE, 0);
 		if (!slb_ptr)
 			panic("Failed to allocate %zu bytes below %pa for slb area\n",
 			      size, &ppc64_rma_size);
diff --git a/arch/powerpc/sysdev/dart_iommu.c b/arch/powerpc/sysdev/dart_iommu.c
index 98096bbfd62e..86c676b61899 100644
--- a/arch/powerpc/sysdev/dart_iommu.c
+++ b/arch/powerpc/sysdev/dart_iommu.c
@@ -239,7 +239,7 @@ static void __init allocate_dart(void)
 	 */
 	dart_tablebase = memblock_alloc_try_nid_raw(SZ_16M, SZ_16M,
 					MEMBLOCK_LOW_LIMIT, SZ_2G,
-					NUMA_NO_NODE);
+					NUMA_NO_NODE, 0);
 	if (!dart_tablebase)
 		panic("Failed to allocate 16MB below 2GB for DART table\n");
 
diff --git a/include/linux/memblock.h b/include/linux/memblock.h
index f71ff9f0ec81..bb8019540d73 100644
--- a/include/linux/memblock.h
+++ b/include/linux/memblock.h
@@ -63,6 +63,7 @@ struct memblock_region {
 #ifdef CONFIG_NUMA
 	int nid;
 #endif
+	phys_addr_t hugepage_size;
 };
 
 /**
@@ -400,7 +401,8 @@ phys_addr_t memblock_phys_alloc_range(phys_addr_t size, phys_addr_t align,
 				      phys_addr_t start, phys_addr_t end);
 phys_addr_t memblock_alloc_range_nid(phys_addr_t size,
 				      phys_addr_t align, phys_addr_t start,
-				      phys_addr_t end, int nid, bool exact_nid);
+				      phys_addr_t end, int nid, bool exact_nid,
+				      phys_addr_t hugepage_size);
 phys_addr_t memblock_phys_alloc_try_nid(phys_addr_t size, phys_addr_t align, int nid);
 
 static __always_inline phys_addr_t memblock_phys_alloc(phys_addr_t size,
@@ -415,7 +417,7 @@ void *memblock_alloc_exact_nid_raw(phys_addr_t size, phys_addr_t align,
 				 int nid);
 void *memblock_alloc_try_nid_raw(phys_addr_t size, phys_addr_t align,
 				 phys_addr_t min_addr, phys_addr_t max_addr,
-				 int nid);
+				 int nid, phys_addr_t hugepage_size);
 void *memblock_alloc_try_nid(phys_addr_t size, phys_addr_t align,
 			     phys_addr_t min_addr, phys_addr_t max_addr,
 			     int nid);
@@ -431,7 +433,7 @@ static inline void *memblock_alloc_raw(phys_addr_t size,
 {
 	return memblock_alloc_try_nid_raw(size, align, MEMBLOCK_LOW_LIMIT,
 					  MEMBLOCK_ALLOC_ACCESSIBLE,
-					  NUMA_NO_NODE);
+					  NUMA_NO_NODE, 0);
 }
 
 static inline void *memblock_alloc_from(phys_addr_t size,
diff --git a/mm/cma.c b/mm/cma.c
index a4cfe995e11e..a270905aa7f2 100644
--- a/mm/cma.c
+++ b/mm/cma.c
@@ -334,7 +334,7 @@ int __init cma_declare_contiguous_nid(phys_addr_t base,
 		if (!memblock_bottom_up() && memblock_end >= SZ_4G + size) {
 			memblock_set_bottom_up(true);
 			addr = memblock_alloc_range_nid(size, alignment, SZ_4G,
-							limit, nid, true);
+							limit, nid, true, 0);
 			memblock_set_bottom_up(false);
 		}
 #endif
@@ -353,7 +353,7 @@ int __init cma_declare_contiguous_nid(phys_addr_t base,
 
 		if (!addr) {
 			addr = memblock_alloc_range_nid(size, alignment, base,
-					limit, nid, true);
+					limit, nid, true, 0);
 			if (!addr) {
 				ret = -ENOMEM;
 				goto err;
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 24352abbb9e5..5ba7fd702458 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3168,7 +3168,8 @@ int __alloc_bootmem_huge_page(struct hstate *h, int nid)
 	/* do node specific alloc */
 	if (nid != NUMA_NO_NODE) {
 		m = memblock_alloc_try_nid_raw(huge_page_size(h), huge_page_size(h),
-				0, MEMBLOCK_ALLOC_ACCESSIBLE, nid);
+				0, MEMBLOCK_ALLOC_ACCESSIBLE, nid,
+				hugetlb_vmemmap_optimizable(h) ? huge_page_size(h) : 0);
 		if (!m)
 			return 0;
 		goto found;
@@ -3177,7 +3178,8 @@ int __alloc_bootmem_huge_page(struct hstate *h, int nid)
 	for_each_node_mask_to_alloc(h, nr_nodes, node, &node_states[N_MEMORY]) {
 		m = memblock_alloc_try_nid_raw(
 				huge_page_size(h), huge_page_size(h),
-				0, MEMBLOCK_ALLOC_ACCESSIBLE, node);
+				0, MEMBLOCK_ALLOC_ACCESSIBLE, node,
+				hugetlb_vmemmap_optimizable(h) ? huge_page_size(h) : 0);
 		/*
 		 * Use the beginning of the huge page to store the
 		 * huge_bootmem_page struct (until gather_bootmem
diff --git a/mm/memblock.c b/mm/memblock.c
index f9e61e565a53..e92d437bcb51 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -549,7 +549,8 @@ static void __init_memblock memblock_insert_region(struct memblock_type *type,
 						   int idx, phys_addr_t base,
 						   phys_addr_t size,
 						   int nid,
-						   enum memblock_flags flags)
+						   enum memblock_flags flags,
+						   phys_addr_t hugepage_size)
 {
 	struct memblock_region *rgn = &type->regions[idx];
 
@@ -558,6 +559,7 @@ static void __init_memblock memblock_insert_region(struct memblock_type *type,
 	rgn->base = base;
 	rgn->size = size;
 	rgn->flags = flags;
+	rgn->hugepage_size = hugepage_size;
 	memblock_set_region_node(rgn, nid);
 	type->cnt++;
 	type->total_size += size;
@@ -581,7 +583,7 @@ static void __init_memblock memblock_insert_region(struct memblock_type *type,
  */
 static int __init_memblock memblock_add_range(struct memblock_type *type,
 				phys_addr_t base, phys_addr_t size,
-				int nid, enum memblock_flags flags)
+				int nid, enum memblock_flags flags, phys_addr_t hugepage_size)
 {
 	bool insert = false;
 	phys_addr_t obase = base;
@@ -598,6 +600,7 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
 		type->regions[0].base = base;
 		type->regions[0].size = size;
 		type->regions[0].flags = flags;
+		type->regions[0].hugepage_size = hugepage_size;
 		memblock_set_region_node(&type->regions[0], nid);
 		type->total_size = size;
 		return 0;
@@ -646,7 +649,7 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
 				end_rgn = idx + 1;
 				memblock_insert_region(type, idx++, base,
 						       rbase - base, nid,
-						       flags);
+						       flags, hugepage_size);
 			}
 		}
 		/* area below @rend is dealt with, forget about it */
@@ -661,7 +664,7 @@ static int __init_memblock memblock_add_range(struct memblock_type *type,
 				start_rgn = idx;
 			end_rgn = idx + 1;
 			memblock_insert_region(type, idx, base, end - base,
-					       nid, flags);
+					       nid, flags, hugepage_size);
 		}
 	}
 
@@ -705,7 +708,7 @@ int __init_memblock memblock_add_node(phys_addr_t base, phys_addr_t size,
 	memblock_dbg("%s: [%pa-%pa] nid=%d flags=%x %pS\n", __func__,
 		     &base, &end, nid, flags, (void *)_RET_IP_);
 
-	return memblock_add_range(&memblock.memory, base, size, nid, flags);
+	return memblock_add_range(&memblock.memory, base, size, nid, flags, 0);
 }
 
 /**
@@ -726,7 +729,7 @@ int __init_memblock memblock_add(phys_addr_t base, phys_addr_t size)
 	memblock_dbg("%s: [%pa-%pa] %pS\n", __func__,
 		     &base, &end, (void *)_RET_IP_);
 
-	return memblock_add_range(&memblock.memory, base, size, MAX_NUMNODES, 0);
+	return memblock_add_range(&memblock.memory, base, size, MAX_NUMNODES, 0, 0);
 }
 
 /**
@@ -782,7 +785,7 @@ static int __init_memblock memblock_isolate_range(struct memblock_type *type,
 			type->total_size -= base - rbase;
 			memblock_insert_region(type, idx, rbase, base - rbase,
 					       memblock_get_region_node(rgn),
-					       rgn->flags);
+					       rgn->flags, 0);
 		} else if (rend > end) {
 			/*
 			 * @rgn intersects from above.  Split and redo the
@@ -793,7 +796,7 @@ static int __init_memblock memblock_isolate_range(struct memblock_type *type,
 			type->total_size -= end - rbase;
 			memblock_insert_region(type, idx--, rbase, end - rbase,
 					       memblock_get_region_node(rgn),
-					       rgn->flags);
+					       rgn->flags, 0);
 		} else {
 			/* @rgn is fully contained, record it */
 			if (!*end_rgn)
@@ -863,14 +866,20 @@ int __init_memblock memblock_phys_free(phys_addr_t base, phys_addr_t size)
 	return memblock_remove_range(&memblock.reserved, base, size);
 }
 
-int __init_memblock memblock_reserve(phys_addr_t base, phys_addr_t size)
+int __init_memblock memblock_reserve_huge(phys_addr_t base, phys_addr_t size,
+					   phys_addr_t hugepage_size)
 {
 	phys_addr_t end = base + size - 1;
 
 	memblock_dbg("%s: [%pa-%pa] %pS\n", __func__,
 		     &base, &end, (void *)_RET_IP_);
 
-	return memblock_add_range(&memblock.reserved, base, size, MAX_NUMNODES, 0);
+	return memblock_add_range(&memblock.reserved, base, size, MAX_NUMNODES, 0, hugepage_size);
+}
+
+int __init_memblock memblock_reserve(phys_addr_t base, phys_addr_t size)
+{
+	return memblock_reserve_huge(base, size, 0);
 }
 
 #ifdef CONFIG_HAVE_MEMBLOCK_PHYS_MAP
@@ -881,7 +890,7 @@ int __init_memblock memblock_physmem_add(phys_addr_t base, phys_addr_t size)
 	memblock_dbg("%s: [%pa-%pa] %pS\n", __func__,
 		     &base, &end, (void *)_RET_IP_);
 
-	return memblock_add_range(&physmem, base, size, MAX_NUMNODES, 0);
+	return memblock_add_range(&physmem, base, size, MAX_NUMNODES, 0, 0);
 }
 #endif
 
@@ -1365,6 +1374,7 @@ __next_mem_pfn_range_in_zone(u64 *idx, struct zone *zone,
  * @end: the upper bound of the memory region to allocate (phys address)
  * @nid: nid of the free area to find, %NUMA_NO_NODE for any node
  * @exact_nid: control the allocation fall back to other nodes
+ * @hugepage_size: size of the hugepages in bytes
  *
  * The allocation is performed from memory region limited by
  * memblock.current_limit if @end == %MEMBLOCK_ALLOC_ACCESSIBLE.
@@ -1385,7 +1395,7 @@ __next_mem_pfn_range_in_zone(u64 *idx, struct zone *zone,
 phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
 					phys_addr_t align, phys_addr_t start,
 					phys_addr_t end, int nid,
-					bool exact_nid)
+					bool exact_nid, phys_addr_t hugepage_size)
 {
 	enum memblock_flags flags = choose_memblock_flags();
 	phys_addr_t found;
@@ -1402,14 +1412,14 @@ phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
 again:
 	found = memblock_find_in_range_node(size, align, start, end, nid,
 					    flags);
-	if (found && !memblock_reserve(found, size))
+	if (found && !memblock_reserve_huge(found, size, hugepage_size))
 		goto done;
 
 	if (nid != NUMA_NO_NODE && !exact_nid) {
 		found = memblock_find_in_range_node(size, align, start,
 						    end, NUMA_NO_NODE,
 						    flags);
-		if (found && !memblock_reserve(found, size))
+		if (found && !memblock_reserve_huge(found, size, hugepage_size))
 			goto done;
 	}
 
@@ -1469,7 +1479,7 @@ phys_addr_t __init memblock_phys_alloc_range(phys_addr_t size,
 		     __func__, (u64)size, (u64)align, &start, &end,
 		     (void *)_RET_IP_);
 	return memblock_alloc_range_nid(size, align, start, end, NUMA_NO_NODE,
-					false);
+					false, 0);
 }
 
 /**
@@ -1488,7 +1498,7 @@ phys_addr_t __init memblock_phys_alloc_range(phys_addr_t size,
 phys_addr_t __init memblock_phys_alloc_try_nid(phys_addr_t size, phys_addr_t align, int nid)
 {
 	return memblock_alloc_range_nid(size, align, 0,
-					MEMBLOCK_ALLOC_ACCESSIBLE, nid, false);
+					MEMBLOCK_ALLOC_ACCESSIBLE, nid, false, 0);
 }
 
 /**
@@ -1514,7 +1524,7 @@ phys_addr_t __init memblock_phys_alloc_try_nid(phys_addr_t size, phys_addr_t ali
 static void * __init memblock_alloc_internal(
 				phys_addr_t size, phys_addr_t align,
 				phys_addr_t min_addr, phys_addr_t max_addr,
-				int nid, bool exact_nid)
+				int nid, bool exact_nid, phys_addr_t hugepage_size)
 {
 	phys_addr_t alloc;
 
@@ -1530,12 +1540,12 @@ static void * __init memblock_alloc_internal(
 		max_addr = memblock.current_limit;
 
 	alloc = memblock_alloc_range_nid(size, align, min_addr, max_addr, nid,
-					exact_nid);
+					exact_nid, hugepage_size);
 
 	/* retry allocation without lower limit */
 	if (!alloc && min_addr)
 		alloc = memblock_alloc_range_nid(size, align, 0, max_addr, nid,
-						exact_nid);
+						exact_nid, hugepage_size);
 
 	if (!alloc)
 		return NULL;
@@ -1571,7 +1581,7 @@ void * __init memblock_alloc_exact_nid_raw(
 		     &max_addr, (void *)_RET_IP_);
 
 	return memblock_alloc_internal(size, align, min_addr, max_addr, nid,
-				       true);
+				       true, 0);
 }
 
 /**
@@ -1585,25 +1595,29 @@ void * __init memblock_alloc_exact_nid_raw(
  *	      is preferred (phys address), or %MEMBLOCK_ALLOC_ACCESSIBLE to
  *	      allocate only from memory limited by memblock.current_limit value
  * @nid: nid of the free area to find, %NUMA_NO_NODE for any node
+ * @hugepage_size: size of the hugepages in bytes
  *
  * Public function, provides additional debug information (including caller
  * info), if enabled. Does not zero allocated memory, does not panic if request
  * cannot be satisfied.
  *
+ * If hugepage_size is not 0 and HVO is enabled, then only the struct pages
+ * that are not freed by HVO are initialized using the hugepage_size parameter.
+ *
  * Return:
  * Virtual address of allocated memory block on success, NULL on failure.
  */
 void * __init memblock_alloc_try_nid_raw(
 			phys_addr_t size, phys_addr_t align,
 			phys_addr_t min_addr, phys_addr_t max_addr,
-			int nid)
+			int nid, phys_addr_t hugepage_size)
 {
 	memblock_dbg("%s: %llu bytes align=0x%llx nid=%d from=%pa max_addr=%pa %pS\n",
 		     __func__, (u64)size, (u64)align, nid, &min_addr,
 		     &max_addr, (void *)_RET_IP_);
 
 	return memblock_alloc_internal(size, align, min_addr, max_addr, nid,
-				       false);
+				       false, hugepage_size);
 }
 
 /**
@@ -1634,7 +1648,7 @@ void * __init memblock_alloc_try_nid(
 		     __func__, (u64)size, (u64)align, nid, &min_addr,
 		     &max_addr, (void *)_RET_IP_);
 	ptr = memblock_alloc_internal(size, align,
-					   min_addr, max_addr, nid, false);
+					   min_addr, max_addr, nid, false, 0);
 	if (ptr)
 		memset(ptr, 0, size);
 
diff --git a/mm/mm_init.c b/mm/mm_init.c
index a1963c3322af..c36d768bb671 100644
--- a/mm/mm_init.c
+++ b/mm/mm_init.c
@@ -1615,7 +1615,7 @@ void __init *memmap_alloc(phys_addr_t size, phys_addr_t align,
 	else
 		ptr = memblock_alloc_try_nid_raw(size, align, min_addr,
 						 MEMBLOCK_ALLOC_ACCESSIBLE,
-						 nid);
+						 nid, 0);
 
 	if (ptr && size > 0)
 		page_init_poison(ptr, size);
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index a044a130405b..56b8b8e684df 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -43,7 +43,7 @@ static void * __ref __earlyonly_bootmem_alloc(int node,
 				unsigned long goal)
 {
 	return memblock_alloc_try_nid_raw(size, align, goal,
-					       MEMBLOCK_ALLOC_ACCESSIBLE, node);
+					       MEMBLOCK_ALLOC_ACCESSIBLE, node, 0);
 }
 
 void * __meminit vmemmap_alloc_block(unsigned long size, int node)
diff --git a/tools/testing/memblock/tests/alloc_nid_api.c b/tools/testing/memblock/tests/alloc_nid_api.c
index 49bb416d34ff..225044366fbb 100644
--- a/tools/testing/memblock/tests/alloc_nid_api.c
+++ b/tools/testing/memblock/tests/alloc_nid_api.c
@@ -43,7 +43,7 @@ static inline void *run_memblock_alloc_nid(phys_addr_t size,
 						    max_addr, nid);
 	if (alloc_nid_test_flags & TEST_F_RAW)
 		return memblock_alloc_try_nid_raw(size, align, min_addr,
-						  max_addr, nid);
+						  max_addr, nid, 0);
 	return memblock_alloc_try_nid(size, align, min_addr, max_addr, nid);
 }
 
-- 
2.25.1



* [RFC 3/4] mm/hugetlb_vmemmap: Use nid of the head page to reallocate it
  2023-07-24 13:46 [RFC 0/4] mm/memblock: Skip prep and initialization of struct pages freed later by HVO Usama Arif
  2023-07-24 13:46 ` [RFC 1/4] mm/hugetlb: Skip prep of tail pages when HVO is enabled Usama Arif
  2023-07-24 13:46 ` [RFC 2/4] mm/memblock: Add hugepage_size member to struct memblock_region Usama Arif
@ 2023-07-24 13:46 ` Usama Arif
  2023-07-24 13:46 ` [RFC 4/4] mm/memblock: Skip initialization of struct pages freed later by HVO Usama Arif
  2023-07-26 10:34 ` [RFC 0/4] mm/memblock: Skip prep and " Usama Arif
  4 siblings, 0 replies; 10+ messages in thread
From: Usama Arif @ 2023-07-24 13:46 UTC (permalink / raw)
  To: linux-mm, muchun.song, mike.kravetz, rppt
  Cc: linux-kernel, fam.zheng, liangma, simon.evans, punit.agrawal, Usama Arif

If tail page prep and initialization are skipped, then the "start"
page will not contain the correct nid. Use the nid from the first
vmemmap page instead.

Signed-off-by: Usama Arif <usama.arif@bytedance.com>
---
 mm/hugetlb_vmemmap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index b721e87de2b3..bdf750a4786b 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -324,7 +324,7 @@ static int vmemmap_remap_free(unsigned long start, unsigned long end,
 		.reuse_addr	= reuse,
 		.vmemmap_pages	= &vmemmap_pages,
 	};
-	int nid = page_to_nid((struct page *)start);
+	int nid = page_to_nid((struct page *)reuse);
 	gfp_t gfp_mask = GFP_KERNEL | __GFP_THISNODE | __GFP_NORETRY |
 			__GFP_NOWARN;
 
-- 
2.25.1



* [RFC 4/4] mm/memblock: Skip initialization of struct pages freed later by HVO
  2023-07-24 13:46 [RFC 0/4] mm/memblock: Skip prep and initialization of struct pages freed later by HVO Usama Arif
                   ` (2 preceding siblings ...)
  2023-07-24 13:46 ` [RFC 3/4] mm/hugetlb_vmemmap: Use nid of the head page to reallocate it Usama Arif
@ 2023-07-24 13:46 ` Usama Arif
  2023-07-26 10:34 ` [RFC 0/4] mm/memblock: Skip prep and " Usama Arif
  4 siblings, 0 replies; 10+ messages in thread
From: Usama Arif @ 2023-07-24 13:46 UTC (permalink / raw)
  To: linux-mm, muchun.song, mike.kravetz, rppt
  Cc: linux-kernel, fam.zheng, liangma, simon.evans, punit.agrawal, Usama Arif

If the region is for hugepages and if HVO is enabled, then those
struct pages which will be freed later don't need to be initialized.
This can save significant time when a large number of hugepages are
allocated at boot time. As memmap_init_reserved_pages is only called at
boot time, we don't need to worry about memory hotplug.

Hugepage regions are kept separate from non-hugepage regions in
memblock_merge_regions so that initialization for unused struct pages
can be skipped for the entire region.

Signed-off-by: Usama Arif <usama.arif@bytedance.com>
---
 mm/hugetlb_vmemmap.c |  2 +-
 mm/hugetlb_vmemmap.h |  3 +++
 mm/memblock.c        | 27 ++++++++++++++++++++++-----
 3 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index bdf750a4786b..b5b7834e0f42 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -443,7 +443,7 @@ static int vmemmap_remap_alloc(unsigned long start, unsigned long end,
 DEFINE_STATIC_KEY_FALSE(hugetlb_optimize_vmemmap_key);
 EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 
-static bool vmemmap_optimize_enabled = IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
+bool vmemmap_optimize_enabled = IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
 core_param(hugetlb_free_vmemmap, vmemmap_optimize_enabled, bool, 0);
 
 /**
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 3525c514c061..8b9a1563f7b9 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -58,4 +58,7 @@ static inline bool hugetlb_vmemmap_optimizable(const struct hstate *h)
 	return hugetlb_vmemmap_optimizable_size(h) != 0;
 }
 bool vmemmap_should_optimize(const struct hstate *h, const struct page *head);
+
+extern bool vmemmap_optimize_enabled;
+
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
diff --git a/mm/memblock.c b/mm/memblock.c
index e92d437bcb51..62072a0226de 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -21,6 +21,7 @@
 #include <linux/io.h>
 
 #include "internal.h"
+#include "hugetlb_vmemmap.h"
 
 #define INIT_MEMBLOCK_REGIONS			128
 #define INIT_PHYSMEM_REGIONS			4
@@ -519,7 +520,8 @@ static void __init_memblock memblock_merge_regions(struct memblock_type *type,
 		if (this->base + this->size != next->base ||
 		    memblock_get_region_node(this) !=
 		    memblock_get_region_node(next) ||
-		    this->flags != next->flags) {
+		    this->flags != next->flags ||
+		    this->hugepage_size != next->hugepage_size) {
 			BUG_ON(this->base + this->size > next->base);
 			i++;
 			continue;
@@ -2125,10 +2127,25 @@ static void __init memmap_init_reserved_pages(void)
 	/* initialize struct pages for the reserved regions */
 	for_each_reserved_mem_region(region) {
 		nid = memblock_get_region_node(region);
-		start = region->base;
-		end = start + region->size;
-
-		reserve_bootmem_region(start, end, nid);
+		/*
+		 * If the region is for hugepages and if HVO is enabled, then those
+		 * struct pages which will be freed later don't need to be initialized.
+		 * This can save significant time when a large number of hugepages are
+		 * allocated at boot time. As this is at boot time, we don't need to
+		 * worry about memory hotplug.
+		 */
+		if (region->hugepage_size && vmemmap_optimize_enabled) {
+			for (start = region->base;
+			    start < region->base + region->size;
+			    start += region->hugepage_size) {
+				end = start + HUGETLB_VMEMMAP_RESERVE_SIZE * sizeof(struct page);
+				reserve_bootmem_region(start, end, nid);
+			}
+		} else {
+			start = region->base;
+			end = start + region->size;
+			reserve_bootmem_region(start, end, nid);
+		}
 	}
 }
 
-- 
2.25.1



* Re: [RFC 0/4] mm/memblock: Skip prep and initialization of struct pages freed later by HVO
  2023-07-24 13:46 [RFC 0/4] mm/memblock: Skip prep and initialization of struct pages freed later by HVO Usama Arif
                   ` (3 preceding siblings ...)
  2023-07-24 13:46 ` [RFC 4/4] mm/memblock: Skip initialization of struct pages freed later by HVO Usama Arif
@ 2023-07-26 10:34 ` Usama Arif
  4 siblings, 0 replies; 10+ messages in thread
From: Usama Arif @ 2023-07-26 10:34 UTC (permalink / raw)
  To: linux-mm, muchun.song, mike.kravetz, rppt
  Cc: linux-kernel, fam.zheng, liangma, simon.evans, punit.agrawal



On 24/07/2023 14:46, Usama Arif wrote:
> If the region is for gigantic hugepages and if HVO is enabled, then those
> struct pages which will be freed later by HVO don't need to be prepared and
> initialized. This can save significant time when a large number of hugepages
> are allocated at boot time.
> 
> For a 1G hugepage, this series avoid initialization and preparation of
> 262144 - 64 = 262080 struct pages per hugepage.
> 
> When tested on a 512G system (which can allocate max 500 1G hugepages), the
> kexec-boot time with HVO and DEFERRED_STRUCT_PAGE_INIT enabled without this
> patchseries to running init is 3.9 seconds. With this patch it is 1.2 seconds.
> This represents an approximately 70% reduction in boot time and will
> significantly reduce server downtime when using a large number of
> gigantic pages.

There were a few errors reported by kernel-bot when different config 
options were changed (CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP disabled, 
CONFIG_CMA enabled). I will fix these in the next revision if the 
general approach in the patches looks good enough to the community to start review.
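
(The CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP=n failure is presumably just the
missing stub for the now non-static vmemmap_should_optimize(); a minimal,
untested sketch of such a stub in the existing #else branch of
mm/hugetlb_vmemmap.h would be:

	#else /* !CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP */
	/* sketch only: HVO compiled out, so never optimize */
	static inline bool vmemmap_should_optimize(const struct hstate *h,
						   const struct page *head)
	{
		return false;
	}
	#endif
)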

Thanks,
Usama

> 
> Thanks,
> Usama
> 
> Usama Arif (4):
>    mm/hugetlb: Skip prep of tail pages when HVO is enabled
>    mm/memblock: Add hugepage_size member to struct memblock_region
>    mm/hugetlb_vmemmap: Use nid of the head page to reallocate it
>    mm/memblock: Skip initialization of struct pages freed later by HVO
> 
>   arch/arm64/mm/kasan_init.c                   |  2 +-
>   arch/powerpc/platforms/pasemi/iommu.c        |  2 +-
>   arch/powerpc/platforms/pseries/setup.c       |  4 +-
>   arch/powerpc/sysdev/dart_iommu.c             |  2 +-
>   include/linux/memblock.h                     |  8 +-
>   mm/cma.c                                     |  4 +-
>   mm/hugetlb.c                                 | 36 +++++---
>   mm/hugetlb_vmemmap.c                         |  6 +-
>   mm/hugetlb_vmemmap.h                         |  4 +
>   mm/memblock.c                                | 87 +++++++++++++-------
>   mm/mm_init.c                                 |  2 +-
>   mm/sparse-vmemmap.c                          |  2 +-
>   tools/testing/memblock/tests/alloc_nid_api.c |  2 +-
>   13 files changed, 106 insertions(+), 55 deletions(-)
> 


* Re: [RFC 2/4] mm/memblock: Add hugepage_size member to struct memblock_region
  2023-07-24 13:46 ` [RFC 2/4] mm/memblock: Add hugepage_size member to struct memblock_region Usama Arif
@ 2023-07-26 11:01   ` Mike Rapoport
  2023-07-26 15:02     ` [External] " Usama Arif
  0 siblings, 1 reply; 10+ messages in thread
From: Mike Rapoport @ 2023-07-26 11:01 UTC (permalink / raw)
  To: Usama Arif
  Cc: linux-mm, muchun.song, mike.kravetz, linux-kernel, fam.zheng,
	liangma, simon.evans, punit.agrawal

On Mon, Jul 24, 2023 at 02:46:42PM +0100, Usama Arif wrote:
> This propagates the hugepage size from the memblock APIs
> (memblock_alloc_try_nid_raw and memblock_alloc_range_nid)
> so that it can be stored in struct memblock region. This does not
> introduce any functional change and hugepage_size is not used in
> this commit. It is just a setup for the next commit where huge_pagesize
> is used to skip initialization of struct pages that will be freed later
> when HVO is enabled.
> 
> Signed-off-by: Usama Arif <usama.arif@bytedance.com>
> ---
>  arch/arm64/mm/kasan_init.c                   |  2 +-
>  arch/powerpc/platforms/pasemi/iommu.c        |  2 +-
>  arch/powerpc/platforms/pseries/setup.c       |  4 +-
>  arch/powerpc/sysdev/dart_iommu.c             |  2 +-
>  include/linux/memblock.h                     |  8 ++-
>  mm/cma.c                                     |  4 +-
>  mm/hugetlb.c                                 |  6 +-
>  mm/memblock.c                                | 60 ++++++++++++--------
>  mm/mm_init.c                                 |  2 +-
>  mm/sparse-vmemmap.c                          |  2 +-
>  tools/testing/memblock/tests/alloc_nid_api.c |  2 +-
>  11 files changed, 56 insertions(+), 38 deletions(-)
> 

[ snip ]

> diff --git a/include/linux/memblock.h b/include/linux/memblock.h
> index f71ff9f0ec81..bb8019540d73 100644
> --- a/include/linux/memblock.h
> +++ b/include/linux/memblock.h
> @@ -63,6 +63,7 @@ struct memblock_region {
>  #ifdef CONFIG_NUMA
>  	int nid;
>  #endif
> +	phys_addr_t hugepage_size;
>  };
>  
>  /**
> @@ -400,7 +401,8 @@ phys_addr_t memblock_phys_alloc_range(phys_addr_t size, phys_addr_t align,
>  				      phys_addr_t start, phys_addr_t end);
>  phys_addr_t memblock_alloc_range_nid(phys_addr_t size,
>  				      phys_addr_t align, phys_addr_t start,
> -				      phys_addr_t end, int nid, bool exact_nid);
> +				      phys_addr_t end, int nid, bool exact_nid,
> +				      phys_addr_t hugepage_size);

Rather than adding yet another parameter to memblock_phys_alloc_range() we
can have an API that sets a flag on the reserved regions.
With this the hugetlb reservation code can set a flag when HVO is
enabled and memmap_init_reserved_pages() will skip regions with this flag
set.
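
Roughly (flag name made up here just for illustration):

	/* memmap_init_reserved_pages(), sketch */
	for_each_reserved_mem_region(region) {
		/* hugetlb takes care of the struct pages it keeps */
		if (region->flags & MEMBLOCK_RSRV_NOINIT)
			continue;
		nid = memblock_get_region_node(region);
		reserve_bootmem_region(region->base,
				       region->base + region->size, nid);
	}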

>  phys_addr_t memblock_phys_alloc_try_nid(phys_addr_t size, phys_addr_t align, int nid);
>  
>  static __always_inline phys_addr_t memblock_phys_alloc(phys_addr_t size,
> @@ -415,7 +417,7 @@ void *memblock_alloc_exact_nid_raw(phys_addr_t size, phys_addr_t align,
>  				 int nid);
>  void *memblock_alloc_try_nid_raw(phys_addr_t size, phys_addr_t align,
>  				 phys_addr_t min_addr, phys_addr_t max_addr,
> -				 int nid);
> +				 int nid, phys_addr_t hugepage_size);
>  void *memblock_alloc_try_nid(phys_addr_t size, phys_addr_t align,
>  			     phys_addr_t min_addr, phys_addr_t max_addr,
>  			     int nid);
> @@ -431,7 +433,7 @@ static inline void *memblock_alloc_raw(phys_addr_t size,
>  {
>  	return memblock_alloc_try_nid_raw(size, align, MEMBLOCK_LOW_LIMIT,
>  					  MEMBLOCK_ALLOC_ACCESSIBLE,
> -					  NUMA_NO_NODE);
> +					  NUMA_NO_NODE, 0);
>  }
>  
>  static inline void *memblock_alloc_from(phys_addr_t size,

-- 
Sincerely yours,
Mike.


* Re: [External] Re: [RFC 2/4] mm/memblock: Add hugepage_size member to struct memblock_region
  2023-07-26 11:01   ` Mike Rapoport
@ 2023-07-26 15:02     ` Usama Arif
  2023-07-27  4:30       ` Mike Rapoport
  0 siblings, 1 reply; 10+ messages in thread
From: Usama Arif @ 2023-07-26 15:02 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: linux-mm, muchun.song, mike.kravetz, linux-kernel, fam.zheng,
	liangma, simon.evans, punit.agrawal



On 26/07/2023 12:01, Mike Rapoport wrote:
> On Mon, Jul 24, 2023 at 02:46:42PM +0100, Usama Arif wrote:
>> This propagates the hugepage size from the memblock APIs
>> (memblock_alloc_try_nid_raw and memblock_alloc_range_nid)
>> so that it can be stored in struct memblock region. This does not
>> introduce any functional change and hugepage_size is not used in
>> this commit. It is just a setup for the next commit where huge_pagesize
>> is used to skip initialization of struct pages that will be freed later
>> when HVO is enabled.
>>
>> Signed-off-by: Usama Arif <usama.arif@bytedance.com>
>> ---
>>   arch/arm64/mm/kasan_init.c                   |  2 +-
>>   arch/powerpc/platforms/pasemi/iommu.c        |  2 +-
>>   arch/powerpc/platforms/pseries/setup.c       |  4 +-
>>   arch/powerpc/sysdev/dart_iommu.c             |  2 +-
>>   include/linux/memblock.h                     |  8 ++-
>>   mm/cma.c                                     |  4 +-
>>   mm/hugetlb.c                                 |  6 +-
>>   mm/memblock.c                                | 60 ++++++++++++--------
>>   mm/mm_init.c                                 |  2 +-
>>   mm/sparse-vmemmap.c                          |  2 +-
>>   tools/testing/memblock/tests/alloc_nid_api.c |  2 +-
>>   11 files changed, 56 insertions(+), 38 deletions(-)
>>
> 
> [ snip ]
> 
>> diff --git a/include/linux/memblock.h b/include/linux/memblock.h
>> index f71ff9f0ec81..bb8019540d73 100644
>> --- a/include/linux/memblock.h
>> +++ b/include/linux/memblock.h
>> @@ -63,6 +63,7 @@ struct memblock_region {
>>   #ifdef CONFIG_NUMA
>>   	int nid;
>>   #endif
>> +	phys_addr_t hugepage_size;
>>   };
>>   
>>   /**
>> @@ -400,7 +401,8 @@ phys_addr_t memblock_phys_alloc_range(phys_addr_t size, phys_addr_t align,
>>   				      phys_addr_t start, phys_addr_t end);
>>   phys_addr_t memblock_alloc_range_nid(phys_addr_t size,
>>   				      phys_addr_t align, phys_addr_t start,
>> -				      phys_addr_t end, int nid, bool exact_nid);
>> +				      phys_addr_t end, int nid, bool exact_nid,
>> +				      phys_addr_t hugepage_size);
> 
> Rather than adding yet another parameter to memblock_phys_alloc_range() we
> can have an API that sets a flag on the reserved regions.
> With this the hugetlb reservation code can set a flag when HVO is
> enabled and memmap_init_reserved_pages() will skip regions with this flag
> set.
> 

Hi,

Thanks for the review.

I think you meant memblock_alloc_range_nid/memblock_alloc_try_nid_raw 
and not memblock_phys_alloc_range?

My initial approach was to use flags, but I think it looks worse than 
what I have done in this RFC (I have pushed the flags prototype at 
https://github.com/uarif1/linux/commits/flags_skip_prep_init_gigantic_HVO, 
top 4 commits, for reference; the main differences are in patches 2 and 4 
of the RFC). The major points are (the bigger issue is in patch 4):

- (RFC vs flags patch 2 comparison) In the RFC, hugepage_size is 
propagated from memblock_alloc_try_nid_raw through function calls. When 
using flags, the "no_init" boolean is propagated from 
memblock_alloc_try_nid_raw through function calls until the region flags 
are available in memblock_add_range and the new MEMBLOCK_NOINIT flag is 
set. I think it's a bit more tricky to introduce a new function to set 
the flag in the region AFTER the call to memblock_alloc_try_nid_raw has 
finished, as the memblock_region cannot be found.
So something (hugepage_size/flag information) still has to be propagated 
through function calls and a new argument needs to be added.

- (RFC vs flags patch 4 comparison) We can't skip initialization of the 
whole region, only the tail pages. We still need to initialize the 
HUGETLB_VMEMMAP_RESERVE_SIZE (PAGE_SIZE) struct pages for each gigantic 
page.
In the RFC, hugepage_size from patch 2 was used in the for loop in 
memmap_init_reserved_pages in patch 4 to reserve 
HUGETLB_VMEMMAP_RESERVE_SIZE struct pages for every hugepage_size. This 
looks very simple and not hacky.
If we use a flag, there are 2 ways to initialize the 
HUGETLB_VMEMMAP_RESERVE_SIZE struct pages per hugepage:

1. (implemented in github link patch 4) memmap_init_reserved_pages skips 
the region for initialization as you suggested, and then we initialize 
HUGETLB_VMEMMAP_RESERVE_SIZE struct pages per hugepage somewhere later 
(I did it in gather_bootmem_prealloc). When calling 
reserve_bootmem_region in gather_bootmem_prealloc, we need to skip 
early_page_uninitialised and this makes it look a bit hacky.

2. We initialize the HUGETLB_VMEMMAP_RESERVE_SIZE struct pages per 
hugepage in memmap_init_reserved_pages itself. As we have used a flag 
and havent passed hugepage_size, we need to get the gigantic page size 
somehow. There doesnt seem to be a nice way to determine the gigantic 
page size in that function which is architecture dependent. I think 
gigantic page size can be given by PAGE_SIZE << (PUD_SHIFT - 
PAGE_SHIFT), but not sure if this is ok for all architectures? If we can 
use PAGE_SIZE << (PUD_SHIFT - PAGE_SHIFT) it will look much better than 
point 1.

Both the RFC patches and the github flags implementation work, but I 
think the RFC patches look much cleaner. If there is a strong preference 
for the github patches I can send them to the mailing list.

Thanks,
Usama


>>   phys_addr_t memblock_phys_alloc_try_nid(phys_addr_t size, phys_addr_t align, int nid);
>>   
>>   static __always_inline phys_addr_t memblock_phys_alloc(phys_addr_t size,
>> @@ -415,7 +417,7 @@ void *memblock_alloc_exact_nid_raw(phys_addr_t size, phys_addr_t align,
>>   				 int nid);
>>   void *memblock_alloc_try_nid_raw(phys_addr_t size, phys_addr_t align,
>>   				 phys_addr_t min_addr, phys_addr_t max_addr,
>> -				 int nid);
>> +				 int nid, phys_addr_t hugepage_size);
>>   void *memblock_alloc_try_nid(phys_addr_t size, phys_addr_t align,
>>   			     phys_addr_t min_addr, phys_addr_t max_addr,
>>   			     int nid);
>> @@ -431,7 +433,7 @@ static inline void *memblock_alloc_raw(phys_addr_t size,
>>   {
>>   	return memblock_alloc_try_nid_raw(size, align, MEMBLOCK_LOW_LIMIT,
>>   					  MEMBLOCK_ALLOC_ACCESSIBLE,
>> -					  NUMA_NO_NODE);
>> +					  NUMA_NO_NODE, 0);
>>   }
>>   
>>   static inline void *memblock_alloc_from(phys_addr_t size,
> 


* Re: [External] Re: [RFC 2/4] mm/memblock: Add hugepage_size member to struct memblock_region
  2023-07-26 15:02     ` [External] " Usama Arif
@ 2023-07-27  4:30       ` Mike Rapoport
  2023-07-27 20:56         ` Usama Arif
  0 siblings, 1 reply; 10+ messages in thread
From: Mike Rapoport @ 2023-07-27  4:30 UTC (permalink / raw)
  To: Usama Arif
  Cc: linux-mm, muchun.song, mike.kravetz, linux-kernel, fam.zheng,
	liangma, simon.evans, punit.agrawal

On Wed, Jul 26, 2023 at 04:02:21PM +0100, Usama Arif wrote:
> 
> On 26/07/2023 12:01, Mike Rapoport wrote:
> > On Mon, Jul 24, 2023 at 02:46:42PM +0100, Usama Arif wrote:
> > > This propagates the hugepage size from the memblock APIs
> > > (memblock_alloc_try_nid_raw and memblock_alloc_range_nid)
> > > so that it can be stored in struct memblock region. This does not
> > > introduce any functional change and hugepage_size is not used in
> > > this commit. It is just a setup for the next commit where huge_pagesize
> > > is used to skip initialization of struct pages that will be freed later
> > > when HVO is enabled.
> > > 
> > > Signed-off-by: Usama Arif <usama.arif@bytedance.com>
> > > ---
> > >   arch/arm64/mm/kasan_init.c                   |  2 +-
> > >   arch/powerpc/platforms/pasemi/iommu.c        |  2 +-
> > >   arch/powerpc/platforms/pseries/setup.c       |  4 +-
> > >   arch/powerpc/sysdev/dart_iommu.c             |  2 +-
> > >   include/linux/memblock.h                     |  8 ++-
> > >   mm/cma.c                                     |  4 +-
> > >   mm/hugetlb.c                                 |  6 +-
> > >   mm/memblock.c                                | 60 ++++++++++++--------
> > >   mm/mm_init.c                                 |  2 +-
> > >   mm/sparse-vmemmap.c                          |  2 +-
> > >   tools/testing/memblock/tests/alloc_nid_api.c |  2 +-
> > >   11 files changed, 56 insertions(+), 38 deletions(-)
> > > 
> > 
> > [ snip ]
> > 
> > > diff --git a/include/linux/memblock.h b/include/linux/memblock.h
> > > index f71ff9f0ec81..bb8019540d73 100644
> > > --- a/include/linux/memblock.h
> > > +++ b/include/linux/memblock.h
> > > @@ -63,6 +63,7 @@ struct memblock_region {
> > >   #ifdef CONFIG_NUMA
> > >   	int nid;
> > >   #endif
> > > +	phys_addr_t hugepage_size;
> > >   };
> > >   /**
> > > @@ -400,7 +401,8 @@ phys_addr_t memblock_phys_alloc_range(phys_addr_t size, phys_addr_t align,
> > >   				      phys_addr_t start, phys_addr_t end);
> > >   phys_addr_t memblock_alloc_range_nid(phys_addr_t size,
> > >   				      phys_addr_t align, phys_addr_t start,
> > > -				      phys_addr_t end, int nid, bool exact_nid);
> > > +				      phys_addr_t end, int nid, bool exact_nid,
> > > +				      phys_addr_t hugepage_size);
> > 
> > Rather than adding yet another parameter to memblock_phys_alloc_range() we
> > can have an API that sets a flag on the reserved regions.
> > With this the hugetlb reservation code can set a flag when HVO is
> > enabled and memmap_init_reserved_pages() will skip regions with this flag
> > set.
> > 
> 
> Hi,
> 
> Thanks for the review.
> 
> I think you meant memblock_alloc_range_nid/memblock_alloc_try_nid_raw and
> not memblock_phys_alloc_range?

Yes.
 
> My initial approach was to use flags, but I think it looks worse than what I
> have done in this RFC (I have pushed the flags prototype at
> https://github.com/uarif1/linux/commits/flags_skip_prep_init_gigantic_HVO,
> top 4 commits for reference (the main difference is patch 2 and 4 from
> RFC)). The major points are (the bigger issue is in patch 4):
> 
> - (RFC vs flags patch 2 comparison) In the RFC, hugepage_size is propagated
> from memblock_alloc_try_nid_raw through function calls. When using flags,
> the "no_init" boolean is propogated from memblock_alloc_try_nid_raw through
> function calls until the region flags are available in memblock_add_range
> and the new MEMBLOCK_NOINIT flag is set. I think its a bit more tricky to
> introduce a new function to set the flag in the region AFTER the call to
> memblock_alloc_try_nid_raw has finished as the memblock_region can not be
> found.
> So something (hugepage_size/flag information) still has to be propagated
> through function calls and a new argument needs to be added.

Sorry if I wasn't clear. I didn't mean to add a flags parameter, I meant to
add a flag and a function that sets this flag for a range. So for
MEMBLOCK_NOINIT there would be:

int memblock_mark_noinit(phys_addr_t base, phys_addr_t size);

I'd just name this flag MEMBLOCK_RSRV_NOINIT to make it clear it controls
the reserved regions.

This won't require updating all call sites of memblock_alloc_range_nid()
and memblock_alloc_try_nid_raw() but only a small refactoring of
memblock_setclr_flag() and its callers.
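
E.g. something along these lines (rough, untested sketch; assumes
memblock_setclr_flag() is refactored to take the memblock_type it operates
on, and the flag value is only illustrative):

	/* include/linux/memblock.h: new enum memblock_flags entry, sketch */
	MEMBLOCK_RSRV_NOINIT	= 0x10,	/* skip memmap init of this reserved region */

	/* mm/memblock.c, sketch */
	int __init_memblock memblock_mark_noinit(phys_addr_t base, phys_addr_t size)
	{
		return memblock_setclr_flag(&memblock.reserved, base, size,
					    1, MEMBLOCK_RSRV_NOINIT);
	}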

> - (RFC vs flags patch 4 comparison) We can't skip initialization of the
> whole region, only the tail pages. We still need to initialize the
> HUGETLB_VMEMMAP_RESERVE_SIZE (PAGE_SIZE) struct pages for each gigantic
> page.
> In the RFC, hugepage_size from patch 2 was used in the for loop in
> memmap_init_reserved_pages in patch 4 to reserve
> HUGETLB_VMEMMAP_RESERVE_SIZE struct pages for every hugepage_size. This
> looks very simple and not hacky.

But this requires having hugetlb details in memblock which feels backwards
to me.

With memblock_mark_noinit() you can decide what parts of a gigantic page
should be initialized in __alloc_bootmem_huge_page() and mark as NOINIT
only the relevant range.
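
E.g. (again only a sketch; the exact split of what must stay initialized
needs care, and memblock_mark_noinit() is the proposed, not yet existing,
helper):

	/* __alloc_bootmem_huge_page(), after memblock_alloc_try_nid_raw() succeeds */
	if (hugetlb_vmemmap_optimizable(h)) {
		/* physical range whose struct pages HVO keeps (one vmemmap page) */
		phys_addr_t kept = (HUGETLB_VMEMMAP_RESERVE_SIZE / sizeof(struct page)) * PAGE_SIZE;

		/* skip memmap init only for the struct pages HVO will free later */
		memblock_mark_noinit(virt_to_phys(m) + kept,
				     huge_page_size(h) - kept);
	}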

> If we use a flag, there are 2 ways to initialize the
> HUGETLB_VMEMMAP_RESERVE_SIZE struct pages per hugepage:
> 
> 1. (implemented in github link patch 4) memmap_init_reserved_pages skips the
> region for initialization as you suggested, and then we initialize
> HUGETLB_VMEMMAP_RESERVE_SIZE struct pages per hugepage somewhere later (I
> did it in gather_bootmem_prealloc). When calling reserve_bootmem_region in
> gather_bootmem_prealloc, we need to skip early_page_uninitialised and this
> makes it look a bit hacky.
> 
> 2. We initialize the HUGETLB_VMEMMAP_RESERVE_SIZE struct pages per hugepage
> in memmap_init_reserved_pages itself. As we have used a flag and havent
> passed hugepage_size, we need to get the gigantic page size somehow. There
> doesnt seem to be a nice way to determine the gigantic page size in that
> function which is architecture dependent. I think gigantic page size can be
> given by PAGE_SIZE << (PUD_SHIFT - PAGE_SHIFT), but not sure if this is ok
> for all architectures? If we can use PAGE_SIZE << (PUD_SHIFT - PAGE_SHIFT)
> it will look much better than point 1.
> 
> Both the RFC patches and the github flags implementation work, but I think
> RFC patches look much cleaner. If there is a strong preference for the the
> github patches I can send it to mailing list?
> 
> Thanks,
> Usama

-- 
Sincerely yours,
Mike.


* Re: [External] Re: [RFC 2/4] mm/memblock: Add hugepage_size member to struct memblock_region
  2023-07-27  4:30       ` Mike Rapoport
@ 2023-07-27 20:56         ` Usama Arif
  0 siblings, 0 replies; 10+ messages in thread
From: Usama Arif @ 2023-07-27 20:56 UTC (permalink / raw)
  To: Mike Rapoport
  Cc: linux-mm, muchun.song, mike.kravetz, linux-kernel, fam.zheng,
	liangma, simon.evans, punit.agrawal



On 27/07/2023 05:30, Mike Rapoport wrote:
> On Wed, Jul 26, 2023 at 04:02:21PM +0100, Usama Arif wrote:
>>
>> On 26/07/2023 12:01, Mike Rapoport wrote:
>>> On Mon, Jul 24, 2023 at 02:46:42PM +0100, Usama Arif wrote:
>>>> This propagates the hugepage size from the memblock APIs
>>>> (memblock_alloc_try_nid_raw and memblock_alloc_range_nid)
>>>> so that it can be stored in struct memblock region. This does not
>>>> introduce any functional change and hugepage_size is not used in
>>>> this commit. It is just a setup for the next commit where huge_pagesize
>>>> is used to skip initialization of struct pages that will be freed later
>>>> when HVO is enabled.
>>>>
>>>> Signed-off-by: Usama Arif <usama.arif@bytedance.com>
>>>> ---
>>>>    arch/arm64/mm/kasan_init.c                   |  2 +-
>>>>    arch/powerpc/platforms/pasemi/iommu.c        |  2 +-
>>>>    arch/powerpc/platforms/pseries/setup.c       |  4 +-
>>>>    arch/powerpc/sysdev/dart_iommu.c             |  2 +-
>>>>    include/linux/memblock.h                     |  8 ++-
>>>>    mm/cma.c                                     |  4 +-
>>>>    mm/hugetlb.c                                 |  6 +-
>>>>    mm/memblock.c                                | 60 ++++++++++++--------
>>>>    mm/mm_init.c                                 |  2 +-
>>>>    mm/sparse-vmemmap.c                          |  2 +-
>>>>    tools/testing/memblock/tests/alloc_nid_api.c |  2 +-
>>>>    11 files changed, 56 insertions(+), 38 deletions(-)
>>>>
>>>
>>> [ snip ]
>>>
>>>> diff --git a/include/linux/memblock.h b/include/linux/memblock.h
>>>> index f71ff9f0ec81..bb8019540d73 100644
>>>> --- a/include/linux/memblock.h
>>>> +++ b/include/linux/memblock.h
>>>> @@ -63,6 +63,7 @@ struct memblock_region {
>>>>    #ifdef CONFIG_NUMA
>>>>    	int nid;
>>>>    #endif
>>>> +	phys_addr_t hugepage_size;
>>>>    };
>>>>    /**
>>>> @@ -400,7 +401,8 @@ phys_addr_t memblock_phys_alloc_range(phys_addr_t size, phys_addr_t align,
>>>>    				      phys_addr_t start, phys_addr_t end);
>>>>    phys_addr_t memblock_alloc_range_nid(phys_addr_t size,
>>>>    				      phys_addr_t align, phys_addr_t start,
>>>> -				      phys_addr_t end, int nid, bool exact_nid);
>>>> +				      phys_addr_t end, int nid, bool exact_nid,
>>>> +				      phys_addr_t hugepage_size);
>>>
>>> Rather than adding yet another parameter to memblock_phys_alloc_range() we
>>> can have an API that sets a flag on the reserved regions.
>>> With this the hugetlb reservation code can set a flag when HVO is
>>> enabled and memmap_init_reserved_pages() will skip regions with this flag
>>> set.
>>>
>>
>> Hi,
>>
>> Thanks for the review.
>>
>> I think you meant memblock_alloc_range_nid/memblock_alloc_try_nid_raw and
>> not memblock_phys_alloc_range?
> 
> Yes.
>   
>> My initial approach was to use flags, but I think it looks worse than what I
>> have done in this RFC (I have pushed the flags prototype at
>> https://github.com/uarif1/linux/commits/flags_skip_prep_init_gigantic_HVO,
>> top 4 commits for reference (the main difference is patch 2 and 4 from
>> RFC)). The major points are (the bigger issue is in patch 4):
>>
>> - (RFC vs flags patch 2 comparison) In the RFC, hugepage_size is propagated
>> from memblock_alloc_try_nid_raw through function calls. When using flags,
>> the "no_init" boolean is propogated from memblock_alloc_try_nid_raw through
>> function calls until the region flags are available in memblock_add_range
>> and the new MEMBLOCK_NOINIT flag is set. I think its a bit more tricky to
>> introduce a new function to set the flag in the region AFTER the call to
>> memblock_alloc_try_nid_raw has finished as the memblock_region can not be
>> found.
>> So something (hugepage_size/flag information) still has to be propagated
>> through function calls and a new argument needs to be added.
> 
> Sorry if I wasn't clear. I didn't mean to add flags parameter, I meant to
> add a flag and a function that sets this flag for a range. So for
> MEMBLOCK_NOINIT there would be
> 
> int memblock_mark_noinit(phys_addr_t base, phys_addr_t size);
> 
> I'd just name this flag MEMBLOCK_RSRV_NOINIT to make it clear it controls
> the reserved regions.
> 
> This won't require updating all call sites of memblock_alloc_range_nid()
> and memblock_alloc_try_nid_raw() but only a small refactoring of
> memblock_setclr_flag() and its callers.
> 

Thanks for this, it's much cleaner done the way you described. I have 
sent v1 implementing this: 
https://lore.kernel.org/all/20230727204624.1942372-1-usama.arif@bytedance.com/

Regards,
Usama




Thread overview: 10+ messages
2023-07-24 13:46 [RFC 0/4] mm/memblock: Skip prep and initialization of struct pages freed later by HVO Usama Arif
2023-07-24 13:46 ` [RFC 1/4] mm/hugetlb: Skip prep of tail pages when HVO is enabled Usama Arif
2023-07-24 13:46 ` [RFC 2/4] mm/memblock: Add hugepage_size member to struct memblock_region Usama Arif
2023-07-26 11:01   ` Mike Rapoport
2023-07-26 15:02     ` [External] " Usama Arif
2023-07-27  4:30       ` Mike Rapoport
2023-07-27 20:56         ` Usama Arif
2023-07-24 13:46 ` [RFC 3/4] mm/hugetlb_vmemmap: Use nid of the head page to reallocate it Usama Arif
2023-07-24 13:46 ` [RFC 4/4] mm/memblock: Skip initialization of struct pages freed later by HVO Usama Arif
2023-07-26 10:34 ` [RFC 0/4] mm/memblock: Skip prep and " Usama Arif
