linux-mm.kvack.org archive mirror
* [PATCH v6 0/4] add hugetlb_free_vmemmap sysctl
@ 2022-03-30 15:37 Muchun Song
  2022-03-30 15:37 ` [PATCH v6 1/4] mm: hugetlb_vmemmap: introduce STRUCT_PAGE_SIZE_IS_POWER_OF_2 Muchun Song
                   ` (3 more replies)
  0 siblings, 4 replies; 12+ messages in thread
From: Muchun Song @ 2022-03-30 15:37 UTC (permalink / raw)
  To: corbet, mike.kravetz, akpm, mcgrof, keescook, yzaikin, osalvador,
	david, masahiroy
  Cc: linux-doc, linux-kernel, linux-mm, duanxiongchun, smuchun, Muchun Song

This series is based on next-20220310.

This series aims to add a hugetlb_free_vmemmap sysctl to enable or disable
the feature of freeing vmemmap pages of HugeTLB pages at runtime.

v6:
  - Remove "make syncconfig" from Kbuild.

v5:
  - Fix not working properly if one is working off of a very clean build,
    as reported by Luis Chamberlain.
  - Add Suggested-by for Luis Chamberlain.

Thanks.

v4:
  - Introduce STRUCT_PAGE_SIZE_IS_POWER_OF_2 inspired by Luis.

v3:
  - Add pr_warn_once() (Mike).
  - Handle the transition from enabling to disabling (Luis).

v2:
  - Fix compilation when !CONFIG_MHP_MEMMAP_ON_MEMORY reported by kernel
    test robot <lkp@intel.com>.
  - Move sysctl code from kernel/sysctl.c to mm/hugetlb_vmemmap.c.

Muchun Song (4):
  mm: hugetlb_vmemmap: introduce STRUCT_PAGE_SIZE_IS_POWER_OF_2
  mm: memory_hotplug: override memmap_on_memory when
    hugetlb_free_vmemmap=on
  sysctl: allow to set extra1 to SYSCTL_ONE
  mm: hugetlb_vmemmap: add hugetlb_free_vmemmap sysctl

 Documentation/admin-guide/sysctl/vm.rst |  14 ++++
 Kbuild                                  |  15 ++++-
 include/linux/memory_hotplug.h          |   9 +++
 include/linux/mm_types.h                |   2 +
 include/linux/page-flags.h              |   3 +-
 kernel/sysctl.c                         |   2 +-
 mm/hugetlb_vmemmap.c                    | 109 ++++++++++++++++++++++++--------
 mm/hugetlb_vmemmap.h                    |   8 ++-
 mm/memory_hotplug.c                     |  27 ++++++--
 mm/struct_page_size.c                   |  20 ++++++
 10 files changed, 171 insertions(+), 38 deletions(-)
 create mode 100644 mm/struct_page_size.c

-- 
2.11.0




* [PATCH v6 1/4] mm: hugetlb_vmemmap: introduce STRUCT_PAGE_SIZE_IS_POWER_OF_2
  2022-03-30 15:37 [PATCH v6 0/4] add hugetlb_free_vmemmap sysctl Muchun Song
@ 2022-03-30 15:37 ` Muchun Song
  2022-03-31  2:28   ` Andrew Morton
  2022-03-31 12:39   ` kernel test robot
  2022-03-30 15:37 ` [PATCH v6 2/4] mm: memory_hotplug: override memmap_on_memory when hugetlb_free_vmemmap=on Muchun Song
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 12+ messages in thread
From: Muchun Song @ 2022-03-30 15:37 UTC (permalink / raw)
  To: corbet, mike.kravetz, akpm, mcgrof, keescook, yzaikin, osalvador,
	david, masahiroy
  Cc: linux-doc, linux-kernel, linux-mm, duanxiongchun, smuchun, Muchun Song

If the size of "struct page" is not a power of two and this
feature is enabled, then the vmemmap pages of HugeTLB will be
corrupted after remapping (a panic is expected in theory).  But
this can only happen with !CONFIG_MEMCG && !CONFIG_SLUB on
x86_64, which is not a conventional configuration nowadays.  So
it is not a real world issue, just the result of a code review.
Still, we have to prevent anyone from creating that combined
configuration.  In order to avoid scattering checks like
"is_power_of_2(sizeof(struct page))" throughout
mm/hugetlb_vmemmap.c, introduce STRUCT_PAGE_SIZE_IS_POWER_OF_2
to detect whether the size of struct page is a power of 2, and
make this feature depend on the new macro.  Then nobody can end
up with that unexpected configuration.
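
As a rough illustration (a sketch of the expected result, not verbatim
build output), on a configuration where sizeof(struct page) is a power
of two, the generated include/generated/struct_page_size.h would be
expected to look roughly like this:

  /* include/generated/struct_page_size.h (sketch) */
  #ifndef __LINUX_STRUCT_PAGE_SIZE_H__
  #define __LINUX_STRUCT_PAGE_SIZE_H__

  #define STRUCT_PAGE_SIZE_IS_POWER_OF_2 1 /* is_power_of_2(sizeof(struct page)) */

  #endif

On a configuration where the size is not a power of two, the DEFINE()
is skipped, the macro is absent from the generated header, and the new
STRUCT_PAGE_SIZE_IS_POWER_OF_2 guards compile the optimization out.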

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Suggested-by: Luis Chamberlain <mcgrof@kernel.org>
---
 Kbuild                     | 15 ++++++++++++++-
 include/linux/mm_types.h   |  2 ++
 include/linux/page-flags.h |  3 ++-
 mm/hugetlb_vmemmap.c       |  8 ++------
 mm/hugetlb_vmemmap.h       |  4 ++--
 mm/struct_page_size.c      | 20 ++++++++++++++++++++
 6 files changed, 42 insertions(+), 10 deletions(-)
 create mode 100644 mm/struct_page_size.c

diff --git a/Kbuild b/Kbuild
index fa441b98c9f6..7f90ba21dd51 100644
--- a/Kbuild
+++ b/Kbuild
@@ -24,6 +24,19 @@ $(timeconst-file): kernel/time/timeconst.bc FORCE
 	$(call filechk,gentimeconst)
 
 #####
+# Generate struct_page_size.h.
+
+struct_page_size-file := include/generated/struct_page_size.h
+
+always-y += $(struct_page_size-file)
+targets += mm/struct_page_size.s
+
+mm/struct_page_size.s: $(timeconst-file) $(bounds-file)
+
+$(struct_page_size-file): mm/struct_page_size.s FORCE
+	$(call filechk,offsets,__LINUX_STRUCT_PAGE_SIZE_H__)
+
+#####
 # Generate asm-offsets.h
 
 offsets-file := include/generated/asm-offsets.h
@@ -31,7 +44,7 @@ offsets-file := include/generated/asm-offsets.h
 always-y += $(offsets-file)
 targets += arch/$(SRCARCH)/kernel/asm-offsets.s
 
-arch/$(SRCARCH)/kernel/asm-offsets.s: $(timeconst-file) $(bounds-file)
+arch/$(SRCARCH)/kernel/asm-offsets.s: $(timeconst-file) $(bounds-file) $(struct_page_size-file)
 
 $(offsets-file): arch/$(SRCARCH)/kernel/asm-offsets.s FORCE
 	$(call filechk,offsets,__ASM_OFFSETS_H__)
diff --git a/include/linux/mm_types.h b/include/linux/mm_types.h
index 8834e38c06a4..5fbff44a4310 100644
--- a/include/linux/mm_types.h
+++ b/include/linux/mm_types.h
@@ -223,6 +223,7 @@ struct page {
 #endif
 } _struct_page_alignment;
 
+#ifndef __GENERATING_STRUCT_PAGE_SIZE_IS_POWER_OF_2_H
 /**
  * struct folio - Represents a contiguous set of bytes.
  * @flags: Identical to the page flags.
@@ -844,5 +845,6 @@ enum fault_flag {
 	FAULT_FLAG_INSTRUCTION =	1 << 8,
 	FAULT_FLAG_INTERRUPTIBLE =	1 << 9,
 };
+#endif /* !__GENERATING_STRUCT_PAGE_SIZE_IS_POWER_OF_2_H */
 
 #endif /* _LINUX_MM_TYPES_H */
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index fc4f294cc8d7..15fcdff0e7ee 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -12,6 +12,7 @@
 #ifndef __GENERATING_BOUNDS_H
 #include <linux/mm_types.h>
 #include <generated/bounds.h>
+#include <generated/struct_page_size.h>
 #endif /* !__GENERATING_BOUNDS_H */
 
 /*
@@ -190,7 +191,7 @@ enum pageflags {
 
 #ifndef __GENERATING_BOUNDS_H
 
-#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+#if defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP) && defined(STRUCT_PAGE_SIZE_IS_POWER_OF_2)
 DECLARE_STATIC_KEY_MAYBE(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP_DEFAULT_ON,
 			 hugetlb_free_vmemmap_enabled_key);
 
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 791626983c2e..951cf83010c7 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -178,6 +178,7 @@
 
 #include "hugetlb_vmemmap.h"
 
+#ifdef STRUCT_PAGE_SIZE_IS_POWER_OF_2
 /*
  * There are a lot of struct page structures associated with each HugeTLB page.
  * For tail pages, the value of compound_head is the same. So we can reuse first
@@ -194,12 +195,6 @@ EXPORT_SYMBOL(hugetlb_free_vmemmap_enabled_key);
 
 static int __init early_hugetlb_free_vmemmap_param(char *buf)
 {
-	/* We cannot optimize if a "struct page" crosses page boundaries. */
-	if (!is_power_of_2(sizeof(struct page))) {
-		pr_warn("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n");
-		return 0;
-	}
-
 	if (!buf)
 		return -EINVAL;
 
@@ -302,3 +297,4 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	pr_info("can free %d vmemmap pages for %s\n", h->nr_free_vmemmap_pages,
 		h->name);
 }
+#endif /* STRUCT_PAGE_SIZE_IS_POWER_OF_2 */
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index cb2bef8f9e73..b137fd8b6ba4 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -10,7 +10,7 @@
 #define _LINUX_HUGETLB_VMEMMAP_H
 #include <linux/hugetlb.h>
 
-#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+#if defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP) && defined(STRUCT_PAGE_SIZE_IS_POWER_OF_2)
 int alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
 void hugetlb_vmemmap_init(struct hstate *h);
@@ -41,5 +41,5 @@ static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return 0;
 }
-#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
+#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP && STRUCT_PAGE_SIZE_IS_POWER_OF_2 */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
diff --git a/mm/struct_page_size.c b/mm/struct_page_size.c
new file mode 100644
index 000000000000..6fc29c1227a0
--- /dev/null
+++ b/mm/struct_page_size.c
@@ -0,0 +1,20 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Generate definitions needed by the preprocessor.
+ * This code generates raw asm output which is post-processed
+ * to extract and format the required data.
+ */
+
+#define __GENERATING_STRUCT_PAGE_SIZE_IS_POWER_OF_2_H
+/* Include headers that define the enum constants of interest */
+#include <linux/mm_types.h>
+#include <linux/kbuild.h>
+#include <linux/log2.h>
+
+int main(void)
+{
+	if (is_power_of_2(sizeof(struct page)))
+		DEFINE(STRUCT_PAGE_SIZE_IS_POWER_OF_2, is_power_of_2(sizeof(struct page)));
+
+	return 0;
+}
-- 
2.11.0




* [PATCH v6 2/4] mm: memory_hotplug: override memmap_on_memory when hugetlb_free_vmemmap=on
  2022-03-30 15:37 [PATCH v6 0/4] add hugetlb_free_vmemmap sysctl Muchun Song
  2022-03-30 15:37 ` [PATCH v6 1/4] mm: hugetlb_vmemmap: introduce STRUCT_PAGE_SIZE_IS_POWER_OF_2 Muchun Song
@ 2022-03-30 15:37 ` Muchun Song
  2022-03-30 15:37 ` [PATCH v6 3/4] sysctl: allow to set extra1 to SYSCTL_ONE Muchun Song
  2022-03-30 15:37 ` [PATCH v6 4/4] mm: hugetlb_vmemmap: add hugetlb_free_vmemmap sysctl Muchun Song
  3 siblings, 0 replies; 12+ messages in thread
From: Muchun Song @ 2022-03-30 15:37 UTC (permalink / raw)
  To: corbet, mike.kravetz, akpm, mcgrof, keescook, yzaikin, osalvador,
	david, masahiroy
  Cc: linux-doc, linux-kernel, linux-mm, duanxiongchun, smuchun, Muchun Song

When "hugetlb_free_vmemmap=on" and "memory_hotplug.memmap_on_memory"
are both passed to boot cmdline, the variable of "memmap_on_memory"
will be set to 1 even if the vmemmap pages will not be allocated from
the hotadded memory since the former takes precedence over the latter.
In the next patch, we want to enable or disable the feature of freeing
vmemmap pages of HugeTLB via sysctl.  We need a way to know if the
feature of memory_hotplug.memmap_on_memory is enabled when enabling
the feature of freeing vmemmap pages since those two features are not
compatible, however, the variable of "memmap_on_memory" cannot indicate
this nowadays.  Do not set "memmap_on_memory" to 1 when both parameters
are passed to cmdline, in this case, "memmap_on_memory" could indicate
if this feature is enabled by the users.

Also introduce mhp_memmap_on_memory() helper to move the definition of
"memmap_on_memory" to the scope of CONFIG_MHP_MEMMAP_ON_MEMORY.  In the
next patch, mhp_memmap_on_memory() will also be exported to be used in
hugetlb_vmemmap.c.
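
With this change, a boot that passes both options would be expected to
leave the parameter reporting as disabled (for example via
/sys/module/memory_hotplug/parameters/memmap_on_memory), matching the
fact that the vmemmap pages are not actually placed on the hotadded
memory.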

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/memory_hotplug.c | 32 ++++++++++++++++++++++++++------
 1 file changed, 26 insertions(+), 6 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 416b38ca8def..da594b382829 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -42,14 +42,36 @@
 #include "internal.h"
 #include "shuffle.h"
 
+#ifdef CONFIG_MHP_MEMMAP_ON_MEMORY
+static int memmap_on_memory_set(const char *val, const struct kernel_param *kp)
+{
+	if (hugetlb_free_vmemmap_enabled())
+		return 0;
+	return param_set_bool(val, kp);
+}
+
+static const struct kernel_param_ops memmap_on_memory_ops = {
+	.flags	= KERNEL_PARAM_OPS_FL_NOARG,
+	.set	= memmap_on_memory_set,
+	.get	= param_get_bool,
+};
 
 /*
  * memory_hotplug.memmap_on_memory parameter
  */
 static bool memmap_on_memory __ro_after_init;
-#ifdef CONFIG_MHP_MEMMAP_ON_MEMORY
-module_param(memmap_on_memory, bool, 0444);
+module_param_cb(memmap_on_memory, &memmap_on_memory_ops, &memmap_on_memory, 0444);
 MODULE_PARM_DESC(memmap_on_memory, "Enable memmap on memory for memory hotplug");
+
+static inline bool mhp_memmap_on_memory(void)
+{
+	return memmap_on_memory;
+}
+#else
+static inline bool mhp_memmap_on_memory(void)
+{
+	return false;
+}
 #endif
 
 enum {
@@ -1288,9 +1310,7 @@ bool mhp_supports_memmap_on_memory(unsigned long size)
 	 *       altmap as an alternative source of memory, and we do not exactly
 	 *       populate a single PMD.
 	 */
-	return memmap_on_memory &&
-	       !hugetlb_free_vmemmap_enabled() &&
-	       IS_ENABLED(CONFIG_MHP_MEMMAP_ON_MEMORY) &&
+	return mhp_memmap_on_memory() &&
 	       size == memory_block_size_bytes() &&
 	       IS_ALIGNED(vmemmap_size, PMD_SIZE) &&
 	       IS_ALIGNED(remaining_size, (pageblock_nr_pages << PAGE_SHIFT));
@@ -2074,7 +2094,7 @@ static int __ref try_remove_memory(u64 start, u64 size)
 	 * We only support removing memory added with MHP_MEMMAP_ON_MEMORY in
 	 * the same granularity it was added - a single memory block.
 	 */
-	if (memmap_on_memory) {
+	if (mhp_memmap_on_memory()) {
 		nr_vmemmap_pages = walk_memory_blocks(start, size, NULL,
 						      get_nr_vmemmap_pages_cb);
 		if (nr_vmemmap_pages) {
-- 
2.11.0




* [PATCH v6 3/4] sysctl: allow to set extra1 to SYSCTL_ONE
  2022-03-30 15:37 [PATCH v6 0/4] add hugetlb_free_vmemmap sysctl Muchun Song
  2022-03-30 15:37 ` [PATCH v6 1/4] mm: hugetlb_vmemmap: introduce STRUCT_PAGE_SIZE_IS_POWER_OF_2 Muchun Song
  2022-03-30 15:37 ` [PATCH v6 2/4] mm: memory_hotplug: override memmap_on_memory when hugetlb_free_vmemmap=on Muchun Song
@ 2022-03-30 15:37 ` Muchun Song
  2022-03-30 15:37 ` [PATCH v6 4/4] mm: hugetlb_vmemmap: add hugetlb_free_vmemmap sysctl Muchun Song
  3 siblings, 0 replies; 12+ messages in thread
From: Muchun Song @ 2022-03-30 15:37 UTC (permalink / raw)
  To: corbet, mike.kravetz, akpm, mcgrof, keescook, yzaikin, osalvador,
	david, masahiroy
  Cc: linux-doc, linux-kernel, linux-mm, duanxiongchun, smuchun, Muchun Song

proc_do_static_key() does not handle the situation where a sysctl is only
allowed to be enabled and cannot be disabled under certain circumstances,
because it sets "->extra1" to SYSCTL_ZERO unconditionally.  This patch adds
the ability to set "->extra1" accordingly.
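
As a hedged illustration of the caller side (the names below are
hypothetical and not part of this patch), a table entry could then mark
a static key as enable-only by pointing "->extra1" at SYSCTL_ONE:

static struct ctl_table example_table[] = {
	{
		.procname	= "example_feature",
		.data		= &example_feature_enabled_key.key,
		.mode		= 0644,
		/* range 1..1: a write may enable the key, never disable it */
		.extra1		= SYSCTL_ONE,
		.proc_handler	= proc_do_static_key,
	},
	{ }
};

Patch 4 uses a dynamic variant of this: its handler sets "->extra1" to
SYSCTL_ONE or SYSCTL_ZERO at write time depending on whether any
optimized HugeTLB pages currently exist.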

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 kernel/sysctl.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/kernel/sysctl.c b/kernel/sysctl.c
index 770d5f7c7ae4..1e89c3e428ad 100644
--- a/kernel/sysctl.c
+++ b/kernel/sysctl.c
@@ -1638,7 +1638,7 @@ int proc_do_static_key(struct ctl_table *table, int write,
 		.data   = &val,
 		.maxlen = sizeof(val),
 		.mode   = table->mode,
-		.extra1 = SYSCTL_ZERO,
+		.extra1 = table->extra1 == SYSCTL_ONE ? SYSCTL_ONE : SYSCTL_ZERO,
 		.extra2 = SYSCTL_ONE,
 	};
 
-- 
2.11.0




* [PATCH v6 4/4] mm: hugetlb_vmemmap: add hugetlb_free_vmemmap sysctl
  2022-03-30 15:37 [PATCH v6 0/4] add hugetlb_free_vmemmap sysctl Muchun Song
                   ` (2 preceding siblings ...)
  2022-03-30 15:37 ` [PATCH v6 3/4] sysctl: allow to set extra1 to SYSCTL_ONE Muchun Song
@ 2022-03-30 15:37 ` Muchun Song
  2022-03-31  2:36   ` Andrew Morton
  3 siblings, 1 reply; 12+ messages in thread
From: Muchun Song @ 2022-03-30 15:37 UTC (permalink / raw)
  To: corbet, mike.kravetz, akpm, mcgrof, keescook, yzaikin, osalvador,
	david, masahiroy
  Cc: linux-doc, linux-kernel, linux-mm, duanxiongchun, smuchun, Muchun Song

Currently we must add "hugetlb_free_vmemmap=on" to the boot cmdline and
reboot the server to enable the feature of freeing vmemmap pages of
HugeTLB pages.  Rebooting usually takes a long time.  Add a sysctl to
enable or disable the feature at runtime without rebooting.

Disabling requires that there are no optimized HugeTLB pages in the
system.  If disabling fails, you can set "nr_hugepages" to 0 and then
retry.
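
Assuming the interface lands as proposed here, the optimization can then
be turned on at runtime with "sysctl -w vm.hugetlb_free_vmemmap=1", and
turned off again with "sysctl -w vm.hugetlb_free_vmemmap=0" once
"nr_hugepages" has been written back to 0 so that no optimized HugeTLB
pages remain.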

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 Documentation/admin-guide/sysctl/vm.rst |  14 +++++
 include/linux/memory_hotplug.h          |   9 +++
 mm/hugetlb_vmemmap.c                    | 101 +++++++++++++++++++++++++-------
 mm/hugetlb_vmemmap.h                    |   4 +-
 mm/memory_hotplug.c                     |   7 +--
 5 files changed, 108 insertions(+), 27 deletions(-)

diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
index f4804ce37c58..9e0e153ed935 100644
--- a/Documentation/admin-guide/sysctl/vm.rst
+++ b/Documentation/admin-guide/sysctl/vm.rst
@@ -561,6 +561,20 @@ Change the minimum size of the hugepage pool.
 See Documentation/admin-guide/mm/hugetlbpage.rst
 
 
+hugetlb_free_vmemmap
+====================
+
+Enable (set to 1) or disable (set to 0) the optimization of vmemmap pages
+associated with each HugeTLB page.  Once enabled, the vmemmap pages of
+HugeTLB pages subsequently allocated from the buddy system are optimized,
+whereas already allocated HugeTLB pages will not be optimized.  Disabling
+is only allowed once there are no optimized HugeTLB pages left in the
+system; if you fail to disable this feature, you can set "nr_hugepages"
+to 0 and then retry.
+
+See Documentation/admin-guide/mm/hugetlbpage.rst
+
+
 nr_hugepages_mempolicy
 ======================
 
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 1ce6f8044f1e..9b015b254e86 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -348,4 +348,13 @@ void arch_remove_linear_mapping(u64 start, u64 size);
 extern bool mhp_supports_memmap_on_memory(unsigned long size);
 #endif /* CONFIG_MEMORY_HOTPLUG */
 
+#ifdef CONFIG_MHP_MEMMAP_ON_MEMORY
+bool mhp_memmap_on_memory(void);
+#else
+static inline bool mhp_memmap_on_memory(void)
+{
+	return false;
+}
+#endif
+
 #endif /* __LINUX_MEMORY_HOTPLUG_H */
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 951cf83010c7..5df41b148e79 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -176,6 +176,7 @@
  */
 #define pr_fmt(fmt)	"HugeTLB: " fmt
 
+#include <linux/memory_hotplug.h>
 #include "hugetlb_vmemmap.h"
 
 #ifdef STRUCT_PAGE_SIZE_IS_POWER_OF_2
@@ -193,6 +194,10 @@ DEFINE_STATIC_KEY_MAYBE(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP_DEFAULT_ON,
 			hugetlb_free_vmemmap_enabled_key);
 EXPORT_SYMBOL(hugetlb_free_vmemmap_enabled_key);
 
+/* How many HugeTLB pages with vmemmap pages optimized. */
+static atomic_long_t optimized_pages = ATOMIC_LONG_INIT(0);
+static DECLARE_RWSEM(sysctl_rwsem);
+
 static int __init early_hugetlb_free_vmemmap_param(char *buf)
 {
 	if (!buf)
@@ -209,11 +214,6 @@ static int __init early_hugetlb_free_vmemmap_param(char *buf)
 }
 early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
 
-static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
-{
-	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
-}
-
 /*
  * Previously discarded vmemmap pages will be allocated and remapping
  * after this function returns zero.
@@ -222,14 +222,18 @@ int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 	int ret;
 	unsigned long vmemmap_addr = (unsigned long)head;
-	unsigned long vmemmap_end, vmemmap_reuse;
+	unsigned long vmemmap_end, vmemmap_reuse, vmemmap_pages;
 
 	if (!HPageVmemmapOptimized(head))
 		return 0;
 
-	vmemmap_addr += RESERVE_VMEMMAP_SIZE;
-	vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h);
-	vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
+	vmemmap_addr	+= RESERVE_VMEMMAP_SIZE;
+	vmemmap_pages	= free_vmemmap_pages_per_hpage(h);
+	vmemmap_end	= vmemmap_addr + (vmemmap_pages << PAGE_SHIFT);
+	vmemmap_reuse	= vmemmap_addr - PAGE_SIZE;
+
+	VM_BUG_ON_PAGE(!vmemmap_pages, head);
+
 	/*
 	 * The pages which the vmemmap virtual address range [@vmemmap_addr,
 	 * @vmemmap_end) are mapped to are freed to the buddy allocator, and
@@ -239,8 +243,14 @@ int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
 	 */
 	ret = vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse,
 				  GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE);
-	if (!ret)
+	if (!ret) {
 		ClearHPageVmemmapOptimized(head);
+		/*
+		 * Paired with acquire semantic in
+		 * hugetlb_free_vmemmap_handler().
+		 */
+		atomic_long_dec_return_release(&optimized_pages);
+	}
 
 	return ret;
 }
@@ -248,22 +258,28 @@ int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
 void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 	unsigned long vmemmap_addr = (unsigned long)head;
-	unsigned long vmemmap_end, vmemmap_reuse;
+	unsigned long vmemmap_end, vmemmap_reuse, vmemmap_pages;
 
-	if (!free_vmemmap_pages_per_hpage(h))
-		return;
+	down_read(&sysctl_rwsem);
+	vmemmap_pages = free_vmemmap_pages_per_hpage(h);
+	if (!vmemmap_pages)
+		goto out;
 
-	vmemmap_addr += RESERVE_VMEMMAP_SIZE;
-	vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h);
-	vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
+	vmemmap_addr	+= RESERVE_VMEMMAP_SIZE;
+	vmemmap_end	= vmemmap_addr + (vmemmap_pages << PAGE_SHIFT);
+	vmemmap_reuse	= vmemmap_addr - PAGE_SIZE;
 
 	/*
 	 * Remap the vmemmap virtual address range [@vmemmap_addr, @vmemmap_end)
 	 * to the page which @vmemmap_reuse is mapped to, then free the pages
 	 * which the range [@vmemmap_addr, @vmemmap_end] is mapped to.
 	 */
-	if (!vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse))
+	if (!vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse)) {
 		SetHPageVmemmapOptimized(head);
+		atomic_long_inc(&optimized_pages);
+	}
+out:
+	up_read(&sysctl_rwsem);
 }
 
 void __init hugetlb_vmemmap_init(struct hstate *h)
@@ -279,9 +295,6 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	BUILD_BUG_ON(__NR_USED_SUBPAGE >=
 		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
 
-	if (!hugetlb_free_vmemmap_enabled())
-		return;
-
 	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
 	/*
 	 * The head page is not to be freed to buddy allocator, the other tail
@@ -297,4 +310,52 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	pr_info("can free %d vmemmap pages for %s\n", h->nr_free_vmemmap_pages,
 		h->name);
 }
+
+static int hugetlb_free_vmemmap_handler(struct ctl_table *table, int write,
+					void *buffer, size_t *length,
+					loff_t *ppos)
+{
+	int ret;
+
+	down_write(&sysctl_rwsem);
+	/*
+	 * Cannot be disabled when there is at least one optimized
+	 * HugeTLB in the system.
+	 *
+	 * The acquire semantic is paired with release semantic in
+	 * alloc_huge_page_vmemmap(). If we saw the @optimized_pages
+	 * with 0, all the operations of vmemmap pages remapping from
+	 * alloc_huge_page_vmemmap() are visible too so that we can
+	 * safely disable static key.
+	 */
+	table->extra1 = atomic_long_read_acquire(&optimized_pages) ?
+			SYSCTL_ONE : SYSCTL_ZERO;
+	ret = proc_do_static_key(table, write, buffer, length, ppos);
+	up_write(&sysctl_rwsem);
+
+	return ret;
+}
+
+static struct ctl_table hugetlb_vmemmap_sysctls[] = {
+	{
+		.procname	= "hugetlb_free_vmemmap",
+		.data		= &hugetlb_free_vmemmap_enabled_key.key,
+		.mode		= 0644,
+		.proc_handler	= hugetlb_free_vmemmap_handler,
+	},
+	{ }
+};
+
+static __init int hugetlb_vmemmap_sysctls_init(void)
+{
+	/*
+	 * The vmemmap pages cannot be optimized if
+	 * "memory_hotplug.memmap_on_memory" is enabled.
+	 */
+	if (!mhp_memmap_on_memory())
+		register_sysctl_init("vm", hugetlb_vmemmap_sysctls);
+
+	return 0;
+}
+late_initcall(hugetlb_vmemmap_sysctls_init);
 #endif /* STRUCT_PAGE_SIZE_IS_POWER_OF_2 */
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index b137fd8b6ba4..89cd85fb0d26 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -21,7 +21,9 @@ void hugetlb_vmemmap_init(struct hstate *h);
  */
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
-	return h->nr_free_vmemmap_pages;
+	if (hugetlb_free_vmemmap_enabled())
+		return h->nr_free_vmemmap_pages;
+	return 0;
 }
 #else
 static inline int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index da594b382829..793c04cfe46f 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -63,15 +63,10 @@ static bool memmap_on_memory __ro_after_init;
 module_param_cb(memmap_on_memory, &memmap_on_memory_ops, &memmap_on_memory, 0444);
 MODULE_PARM_DESC(memmap_on_memory, "Enable memmap on memory for memory hotplug");
 
-static inline bool mhp_memmap_on_memory(void)
+bool mhp_memmap_on_memory(void)
 {
 	return memmap_on_memory;
 }
-#else
-static inline bool mhp_memmap_on_memory(void)
-{
-	return false;
-}
 #endif
 
 enum {
-- 
2.11.0




* Re: [PATCH v6 1/4] mm: hugetlb_vmemmap: introduce STRUCT_PAGE_SIZE_IS_POWER_OF_2
  2022-03-30 15:37 ` [PATCH v6 1/4] mm: hugetlb_vmemmap: introduce STRUCT_PAGE_SIZE_IS_POWER_OF_2 Muchun Song
@ 2022-03-31  2:28   ` Andrew Morton
  2022-03-31  2:52     ` Muchun Song
  2022-03-31 12:39   ` kernel test robot
  1 sibling, 1 reply; 12+ messages in thread
From: Andrew Morton @ 2022-03-31  2:28 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, mcgrof, keescook, yzaikin, osalvador,
	david, masahiroy, linux-doc, linux-kernel, linux-mm,
	duanxiongchun, smuchun

On Wed, 30 Mar 2022 23:37:42 +0800 Muchun Song <songmuchun@bytedance.com> wrote:

> If the size of "struct page" is not the power of two and this
> feature is enabled,

What is "this feature"?   Let's spell it out?

> then the vmemmap pages of HugeTLB will be
> corrupted after remapping (panic is about to happen in theory).
> But this only exists when !CONFIG_MEMCG && !CONFIG_SLUB on
> x86_64.  However, it is not a conventional configuration nowadays.
> So it is not a real word issue, just the result of a code review.
> But we have to prevent anyone from configuring that combined
> configuration.  In order to avoid many checks like "is_power_of_2
> (sizeof(struct page))" through mm/hugetlb_vmemmap.c.  Introduce
> STRUCT_PAGE_SIZE_IS_POWER_OF_2 to detect if the size of struct
> page is power of 2 and make this feature depends on this new
> macro.  Then we could prevent anyone do any unexpected
> configuration.
> 
> ...
>
> --- /dev/null
> +++ b/mm/struct_page_size.c
> @@ -0,0 +1,20 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Generate definitions needed by the preprocessor.
> + * This code generates raw asm output which is post-processed
> + * to extract and format the required data.
> + */
> +
> +#define __GENERATING_STRUCT_PAGE_SIZE_IS_POWER_OF_2_H
> +/* Include headers that define the enum constants of interest */
> +#include <linux/mm_types.h>
> +#include <linux/kbuild.h>
> +#include <linux/log2.h>
> +
> +int main(void)
> +{
> +	if (is_power_of_2(sizeof(struct page)))
> +		DEFINE(STRUCT_PAGE_SIZE_IS_POWER_OF_2, is_power_of_2(sizeof(struct page)));

Why not

	DEFINE(STRUCT_PAGE_SIZE_IS_POWER_OF_2, 1);

?

> +	return 0;
> +}
> -- 
> 2.11.0



* Re: [PATCH v6 4/4] mm: hugetlb_vmemmap: add hugetlb_free_vmemmap sysctl
  2022-03-30 15:37 ` [PATCH v6 4/4] mm: hugetlb_vmemmap: add hugetlb_free_vmemmap sysctl Muchun Song
@ 2022-03-31  2:36   ` Andrew Morton
  2022-03-31  3:45     ` Muchun Song
  0 siblings, 1 reply; 12+ messages in thread
From: Andrew Morton @ 2022-03-31  2:36 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, mcgrof, keescook, yzaikin, osalvador,
	david, masahiroy, linux-doc, linux-kernel, linux-mm,
	duanxiongchun, smuchun

On Wed, 30 Mar 2022 23:37:45 +0800 Muchun Song <songmuchun@bytedance.com> wrote:

> We must add "hugetlb_free_vmemmap=on" to boot cmdline and reboot the
> server to enable the feature of freeing vmemmap pages of HugeTLB
> pages.  Rebooting usually takes a long time.  Add a sysctl to enable
> or disable the feature at runtime without rebooting.

I forget, why did we add the hugetlb_free_vmemmap option in the first
place?  Why not just leave the feature enabled in all cases?

Furthermore, why would anyone want to tweak this at runtime?  What is
the use case?  Where is the end-user value in all of this?

> Disabling requires there is no any optimized HugeTLB page in the
> system.  If you fail to disable it, you can set "nr_hugepages" to 0
> and then retry.
> 
> --- a/Documentation/admin-guide/sysctl/vm.rst
> +++ b/Documentation/admin-guide/sysctl/vm.rst
> @@ -561,6 +561,20 @@ Change the minimum size of the hugepage pool.
>  See Documentation/admin-guide/mm/hugetlbpage.rst
>  
>  
> +hugetlb_free_vmemmap
> +====================
> +
> +Enable (set to 1) or disable (set to 0) the feature of optimizing vmemmap
> +pages associated with each HugeTLB page.  Once true, the vmemmap pages of
> +subsequent allocation of HugeTLB pages from buddy system will be optimized,
> +whereas already allocated HugeTLB pages will not be optimized.  If you fail
> +to disable this feature, you can set "nr_hugepages" to 0 and then retry
> +since it is only allowed to be disabled after there is no any optimized
> +HugeTLB page in the system.
> +

Pity the poor user who is looking at this and wondering whether it will
improve or worsen things.  If we don't tell them, who will?  Are they
supposed to just experiment?

What can we add here to help them understand whether this might be
beneficial?

> +See Documentation/admin-guide/mm/hugetlbpage.rst





* Re: [PATCH v6 1/4] mm: hugetlb_vmemmap: introduce STRUCT_PAGE_SIZE_IS_POWER_OF_2
  2022-03-31  2:28   ` Andrew Morton
@ 2022-03-31  2:52     ` Muchun Song
  2022-03-31  2:57       ` Andrew Morton
  0 siblings, 1 reply; 12+ messages in thread
From: Muchun Song @ 2022-03-31  2:52 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Jonathan Corbet, Mike Kravetz, Luis Chamberlain, Kees Cook,
	Iurii Zaikin, Oscar Salvador, David Hildenbrand, Masahiro Yamada,
	Linux Doc Mailing List, LKML, Linux Memory Management List,
	Xiongchun duan, Muchun Song

On Thu, Mar 31, 2022 at 10:28 AM Andrew Morton
<akpm@linux-foundation.org> wrote:
>
> On Wed, 30 Mar 2022 23:37:42 +0800 Muchun Song <songmuchun@bytedance.com> wrote:
>
> > If the size of "struct page" is not the power of two and this
> > feature is enabled,
>
> What is "this feature"?   Let's spell it out?

Will do.

>
> > then the vmemmap pages of HugeTLB will be
> > corrupted after remapping (panic is about to happen in theory).
> > But this only exists when !CONFIG_MEMCG && !CONFIG_SLUB on
> > x86_64.  However, it is not a conventional configuration nowadays.
> > So it is not a real word issue, just the result of a code review.
> > But we have to prevent anyone from configuring that combined
> > configuration.  In order to avoid many checks like "is_power_of_2
> > (sizeof(struct page))" through mm/hugetlb_vmemmap.c.  Introduce
> > STRUCT_PAGE_SIZE_IS_POWER_OF_2 to detect if the size of struct
> > page is power of 2 and make this feature depends on this new
> > macro.  Then we could prevent anyone do any unexpected
> > configuration.
> >
> > ...
> >
> > --- /dev/null
> > +++ b/mm/struct_page_size.c
> > @@ -0,0 +1,20 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Generate definitions needed by the preprocessor.
> > + * This code generates raw asm output which is post-processed
> > + * to extract and format the required data.
> > + */
> > +
> > +#define __GENERATING_STRUCT_PAGE_SIZE_IS_POWER_OF_2_H
> > +/* Include headers that define the enum constants of interest */
> > +#include <linux/mm_types.h>
> > +#include <linux/kbuild.h>
> > +#include <linux/log2.h>
> > +
> > +int main(void)
> > +{
> > +     if (is_power_of_2(sizeof(struct page)))
> > +             DEFINE(STRUCT_PAGE_SIZE_IS_POWER_OF_2, is_power_of_2(sizeof(struct page)));
>
> Why not
>
>         DEFINE(STRUCT_PAGE_SIZE_IS_POWER_OF_2, 1);
>

Yep, that is simpler.  But the 2nd parameter of DEFINE() goes
into the generated comment, and I want to make that clearer to
someone reading the definition of this macro.  The two different
statements will generate the following two different comments.
Which one do you think is better?

#define STRUCT_PAGE_SIZE_IS_POWER_OF_2 1 /* is_power_of_2(sizeof(struct page)) */
#define STRUCT_PAGE_SIZE_IS_POWER_OF_2 1 /* 1 */

Thanks.



* Re: [PATCH v6 1/4] mm: hugetlb_vmemmap: introduce STRUCT_PAGE_SIZE_IS_POWER_OF_2
  2022-03-31  2:52     ` Muchun Song
@ 2022-03-31  2:57       ` Andrew Morton
  0 siblings, 0 replies; 12+ messages in thread
From: Andrew Morton @ 2022-03-31  2:57 UTC (permalink / raw)
  To: Muchun Song
  Cc: Jonathan Corbet, Mike Kravetz, Luis Chamberlain, Kees Cook,
	Iurii Zaikin, Oscar Salvador, David Hildenbrand, Masahiro Yamada,
	Linux Doc Mailing List, LKML, Linux Memory Management List,
	Xiongchun duan, Muchun Song

On Thu, 31 Mar 2022 10:52:58 +0800 Muchun Song <songmuchun@bytedance.com> wrote:

> > > +int main(void)
> > > +{
> > > +     if (is_power_of_2(sizeof(struct page)))
> > > +             DEFINE(STRUCT_PAGE_SIZE_IS_POWER_OF_2, is_power_of_2(sizeof(struct page)));
> >
> > Why not
> >
> >         DEFINE(STRUCT_PAGE_SIZE_IS_POWER_OF_2, 1);
> >
> 
> Yep, this is more simple.  But the 2nd parameter of DEFINE() will
> go into the comments.  I want to make it more clear when someone
> reads the code of this macro.  The two different sentences will
> generate the following two different comments.  Which one do
> you think is better?
> 
> #define STRUCT_PAGE_SIZE_IS_POWER_OF_2 1 /*
> is_power_of_2(sizeof(struct page)) */
> #define STRUCT_PAGE_SIZE_IS_POWER_OF_2 1 /* 1 */

The former ;)



* Re: [PATCH v6 4/4] mm: hugetlb_vmemmap: add hugetlb_free_vmemmap sysctl
  2022-03-31  2:36   ` Andrew Morton
@ 2022-03-31  3:45     ` Muchun Song
  0 siblings, 0 replies; 12+ messages in thread
From: Muchun Song @ 2022-03-31  3:45 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Jonathan Corbet, Mike Kravetz, Luis Chamberlain, Kees Cook,
	Iurii Zaikin, Oscar Salvador, David Hildenbrand, Masahiro Yamada,
	Linux Doc Mailing List, LKML, Linux Memory Management List,
	Xiongchun duan, Muchun Song

On Thu, Mar 31, 2022 at 10:37 AM Andrew Morton
<akpm@linux-foundation.org> wrote:
>
> On Wed, 30 Mar 2022 23:37:45 +0800 Muchun Song <songmuchun@bytedance.com> wrote:
>
> > We must add "hugetlb_free_vmemmap=on" to boot cmdline and reboot the
> > server to enable the feature of freeing vmemmap pages of HugeTLB
> > pages.  Rebooting usually takes a long time.  Add a sysctl to enable
> > or disable the feature at runtime without rebooting.
>
> I forget, why did we add the hugetlb_free_vmemmap option in the first
> place? Why not just leave the feature enabled in all cases?

The 1st reason is that we disable the PMD/huge page mapping of
vmemmap pages (in the original version), which increases the number
of page table pages.  So if a user/sysadmin only uses a small number
of HugeTLB pages (as a percentage of system memory), they could
end up using more memory with hugetlb_free_vmemmap on as
opposed to off.  Now this tradeoff is gone.

The 2nd reason is that this feature adds more overhead to the path
of HugeTLB allocation/freeing from/to the buddy system, as Mike said
in the link [1]:
"
There are still some instances where huge pages
are allocated 'on the fly' instead of being pulled from the pool.  Michal
pointed out the case of page migration.  It is also possible for someone to
use hugetlbfs without pre-allocating huge pages to the pool.  I remember the
use case pointed out in commit 099730d67417.  It says, "I have a hugetlbfs
user which is never explicitly allocating huge pages with 'nr_hugepages'.
They only set 'nr_overcommit_hugepages' and then let the pages be allocated
from the buddy allocator at fault time."  In this case, I suspect they were
using 'page fault' allocation for initialization much like someone using
/proc/sys/vm/nr_hugepages.  So, the overhead may not be as noticeable.
"

For those different workloads, we introduce hugetlb_free_vmemmap and
expect users to make decisions based on their workloads.

[1] https://patchwork.kernel.org/comment/23752641/

>
> Furthermore, why would anyone want to tweak this at runtime?  What is
> the use case?  Where is the end-user value in all of this?

If the workload on a server changes in the future, the users need to be
able to adapt this at runtime without rebooting the server.

>
> > Disabling requires there is no any optimized HugeTLB page in the
> > system.  If you fail to disable it, you can set "nr_hugepages" to 0
> > and then retry.
> >
> > --- a/Documentation/admin-guide/sysctl/vm.rst
> > +++ b/Documentation/admin-guide/sysctl/vm.rst
> > @@ -561,6 +561,20 @@ Change the minimum size of the hugepage pool.
> >  See Documentation/admin-guide/mm/hugetlbpage.rst
> >
> >
> > +hugetlb_free_vmemmap
> > +====================
> > +
> > +Enable (set to 1) or disable (set to 0) the feature of optimizing vmemmap
> > +pages associated with each HugeTLB page.  Once true, the vmemmap pages of
> > +subsequent allocation of HugeTLB pages from buddy system will be optimized,
> > +whereas already allocated HugeTLB pages will not be optimized.  If you fail
> > +to disable this feature, you can set "nr_hugepages" to 0 and then retry
> > +since it is only allowed to be disabled after there is no any optimized
> > +HugeTLB page in the system.
> > +
>
> Pity the poor user who is looking at this and wondering whether it will
> improve or worsen things.  If we don't tell them, who will?  Are they
> supposed to just experiment?
>
> What can we add here to help them understand whether this might be
> beneficial?
>

My bad.  I should explain this in more detail so that users can make
better decisions.

Thanks.



* Re: [PATCH v6 1/4] mm: hugetlb_vmemmap: introduce STRUCT_PAGE_SIZE_IS_POWER_OF_2
  2022-03-30 15:37 ` [PATCH v6 1/4] mm: hugetlb_vmemmap: introduce STRUCT_PAGE_SIZE_IS_POWER_OF_2 Muchun Song
  2022-03-31  2:28   ` Andrew Morton
@ 2022-03-31 12:39   ` kernel test robot
  2022-03-31 15:26     ` Muchun Song
  1 sibling, 1 reply; 12+ messages in thread
From: kernel test robot @ 2022-03-31 12:39 UTC (permalink / raw)
  To: Muchun Song, corbet, mike.kravetz, akpm, mcgrof, keescook,
	yzaikin, osalvador, david, masahiroy
  Cc: kbuild-all, linux-doc, linux-kernel, linux-mm, duanxiongchun,
	smuchun, Muchun Song

Hi Muchun,

Thank you for the patch! Yet something to improve:

[auto build test ERROR on hnaz-mm/master]
[also build test ERROR on mcgrof/sysctl-next linus/master next-20220331]
[cannot apply to v5.17]
[If your patch is applied to the wrong git tree, kindly drop us a note.
And when submitting patch, we suggest to use '--base' as documented in
https://git-scm.com/docs/git-format-patch]

url:    https://github.com/intel-lab-lkp/linux/commits/Muchun-Song/add-hugetlb_free_vmemmap-sysctl/20220330-234018
base:   https://github.com/hnaz/linux-mm master
config: ia64-randconfig-s031-20220331 (https://download.01.org/0day-ci/archive/20220331/202203312010.ct30oFE6-lkp@intel.com/config)
compiler: ia64-linux-gcc (GCC) 11.2.0
reproduce:
        wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # apt-get install sparse
        # sparse version: v0.6.4-dirty
        # https://github.com/intel-lab-lkp/linux/commit/5164c566d4fbdb808689ee4552ed95eab421522e
        git remote add linux-review https://github.com/intel-lab-lkp/linux
        git fetch --no-tags linux-review Muchun-Song/add-hugetlb_free_vmemmap-sysctl/20220330-234018
        git checkout 5164c566d4fbdb808689ee4552ed95eab421522e
        # save the config file to linux build tree
        mkdir build_dir
        COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' O=build_dir ARCH=ia64 prepare

If you fix the issue, kindly add following tag as appropriate
Reported-by: kernel test robot <lkp@intel.com>

All errors (new ones prefixed by >>):

   In file included from arch/ia64/include/asm/thread_info.h:10,
                    from include/linux/thread_info.h:60,
                    from include/asm-generic/preempt.h:5,
                    from ./arch/ia64/include/generated/asm/preempt.h:1,
                    from include/linux/preempt.h:78,
                    from include/linux/spinlock.h:55,
                    from include/linux/kref.h:16,
                    from include/linux/mm_types.h:8,
                    from mm/struct_page_size.c:10:
>> arch/ia64/include/asm/asm-offsets.h:1:10: fatal error: generated/asm-offsets.h: No such file or directory
       1 | #include <generated/asm-offsets.h>
         |          ^~~~~~~~~~~~~~~~~~~~~~~~~
   compilation terminated.
   make[2]: *** [scripts/Makefile.build:127: mm/struct_page_size.s] Error 1
   make[2]: Target '__build' not remade because of errors.
   make[1]: *** [Makefile:1261: prepare0] Error 2
   make[1]: Target 'prepare' not remade because of errors.
   make: *** [Makefile:226: __sub-make] Error 2
   make: Target 'prepare' not remade because of errors.


vim +1 arch/ia64/include/asm/asm-offsets.h

559df2e0210352f Sam Ravnborg 2009-04-19 @1  #include <generated/asm-offsets.h>

-- 
0-DAY CI Kernel Test Service
https://01.org/lkp



* Re: [PATCH v6 1/4] mm: hugetlb_vmemmap: introduce STRUCT_PAGE_SIZE_IS_POWER_OF_2
  2022-03-31 12:39   ` kernel test robot
@ 2022-03-31 15:26     ` Muchun Song
  0 siblings, 0 replies; 12+ messages in thread
From: Muchun Song @ 2022-03-31 15:26 UTC (permalink / raw)
  To: kernel test robot
  Cc: Jonathan Corbet, Mike Kravetz, Andrew Morton, Luis Chamberlain,
	Kees Cook, Iurii Zaikin, Oscar Salvador, David Hildenbrand,
	Masahiro Yamada, kbuild-all, Linux Doc Mailing List, LKML,
	Linux Memory Management List, Xiongchun duan, Muchun Song

On Thu, Mar 31, 2022 at 8:40 PM kernel test robot <lkp@intel.com> wrote:
>
> Hi Muchun,
>
> Thank you for the patch! Yet something to improve:
>
> [auto build test ERROR on hnaz-mm/master]
> [also build test ERROR on mcgrof/sysctl-next linus/master next-20220331]
> [cannot apply to v5.17]
> [If your patch is applied to the wrong git tree, kindly drop us a note.
> And when submitting patch, we suggest to use '--base' as documented in
> https://git-scm.com/docs/git-format-patch]
>
> url:    https://github.com/intel-lab-lkp/linux/commits/Muchun-Song/add-hugetlb_free_vmemmap-sysctl/20220330-234018
> base:   https://github.com/hnaz/linux-mm master
> config: ia64-randconfig-s031-20220331 (https://download.01.org/0day-ci/archive/20220331/202203312010.ct30oFE6-lkp@intel.com/config)
> compiler: ia64-linux-gcc (GCC) 11.2.0
> reproduce:
>         wget https://raw.githubusercontent.com/intel/lkp-tests/master/sbin/make.cross -O ~/bin/make.cross
>         chmod +x ~/bin/make.cross
>         # apt-get install sparse
>         # sparse version: v0.6.4-dirty
>         # https://github.com/intel-lab-lkp/linux/commit/5164c566d4fbdb808689ee4552ed95eab421522e
>         git remote add linux-review https://github.com/intel-lab-lkp/linux
>         git fetch --no-tags linux-review Muchun-Song/add-hugetlb_free_vmemmap-sysctl/20220330-234018
>         git checkout 5164c566d4fbdb808689ee4552ed95eab421522e
>         # save the config file to linux build tree
>         mkdir build_dir
>         COMPILER_INSTALL_PATH=$HOME/0day COMPILER=gcc-11.2.0 make.cross C=1 CF='-fdiagnostic-prefix -D__CHECK_ENDIAN__' O=build_dir ARCH=ia64 prepare
>
> If you fix the issue, kindly add following tag as appropriate
> Reported-by: kernel test robot <lkp@intel.com>
>
> All errors (new ones prefixed by >>):
>
>    In file included from arch/ia64/include/asm/thread_info.h:10,
>                     from include/linux/thread_info.h:60,
>                     from include/asm-generic/preempt.h:5,
>                     from ./arch/ia64/include/generated/asm/preempt.h:1,
>                     from include/linux/preempt.h:78,
>                     from include/linux/spinlock.h:55,
>                     from include/linux/kref.h:16,
>                     from include/linux/mm_types.h:8,
>                     from mm/struct_page_size.c:10:
> >> arch/ia64/include/asm/asm-offsets.h:1:10: fatal error: generated/asm-offsets.h: No such file or directory
>        1 | #include <generated/asm-offsets.h>
>          |          ^~~~~~~~~~~~~~~~~~~~~~~~~
>    compilation terminated.
>    make[2]: *** [scripts/Makefile.build:127: mm/struct_page_size.s] Error 1
>    make[2]: Target '__build' not remade because of errors.
>    make[1]: *** [Makefile:1261: prepare0] Error 2
>    make[1]: Target 'prepare' not remade because of errors.
>    make: *** [Makefile:226: __sub-make] Error 2
>    make: Target 'prepare' not remade because of errors.
>

It is a circular dependency issue; I'll fix it in the next version.
Thanks for your report.



Thread overview: 12+ messages
2022-03-30 15:37 [PATCH v6 0/4] add hugetlb_free_vmemmap sysctl Muchun Song
2022-03-30 15:37 ` [PATCH v6 1/4] mm: hugetlb_vmemmap: introduce STRUCT_PAGE_SIZE_IS_POWER_OF_2 Muchun Song
2022-03-31  2:28   ` Andrew Morton
2022-03-31  2:52     ` Muchun Song
2022-03-31  2:57       ` Andrew Morton
2022-03-31 12:39   ` kernel test robot
2022-03-31 15:26     ` Muchun Song
2022-03-30 15:37 ` [PATCH v6 2/4] mm: memory_hotplug: override memmap_on_memory when hugetlb_free_vmemmap=on Muchun Song
2022-03-30 15:37 ` [PATCH v6 3/4] sysctl: allow to set extra1 to SYSCTL_ONE Muchun Song
2022-03-30 15:37 ` [PATCH v6 4/4] mm: hugetlb_vmemmap: add hugetlb_free_vmemmap sysctl Muchun Song
2022-03-31  2:36   ` Andrew Morton
2022-03-31  3:45     ` Muchun Song
