* [PATCH v8 0/4] add hugetlb_optimize_vmemmap sysctl
@ 2022-04-13 14:47 Muchun Song
  2022-04-13 14:47 ` [PATCH v8 1/4] mm: hugetlb_vmemmap: introduce CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP Muchun Song
                   ` (3 more replies)
  0 siblings, 4 replies; 13+ messages in thread
From: Muchun Song @ 2022-04-13 14:47 UTC (permalink / raw)
  To: corbet, mike.kravetz, akpm, mcgrof, keescook, yzaikin, osalvador,
	david, masahiroy
  Cc: linux-doc, linux-kernel, linux-mm, duanxiongchun, smuchun, Muchun Song

This series is based on next-20220407.

This series aims to add a hugetlb_optimize_vmemmap sysctl to enable or disable
the feature of optimizing vmemmap pages associated with HugeTLB pages.

v8:
  - Fix a compilation error (scripts/selinux/mdp/mdp.c) when
    CONFIG_SECURITY_SELINUX is selected.

v7:
  - Fix circular dependency issue reported by kernel test robot.
  - Introduce CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP instead of
    STRUCT_PAGE_SIZE_IS_POWER_OF_2.
  - Add more comments into vm.rst to explain hugetlb_optimize_vmemmap (Andrew).
  - Drop the patch "sysctl: allow to set extra1 to SYSCTL_ONE".
  - Add a new patch "use kstrtobool for hugetlb_vmemmap param parsing".
  - Reuse the static_key's refcount to count the number of HugeTLB pages
    whose vmemmap pages are optimized, to simplify the locking scheme.

v6:
  - Remove "make syncconfig" from Kbuild.

v5:
  - Fix the feature not working properly when working off of a very clean
    build, reported by Luis Chamberlain.
  - Add Suggested-by for Luis Chamberlain.

v4:
  - Introduce STRUCT_PAGE_SIZE_IS_POWER_OF_2 inspired by Luis.

v3:
  - Add pr_warn_once() (Mike).
  - Handle the transition from enabling to disabling (Luis).

v2:
  - Fix compilation failure when !CONFIG_MHP_MEMMAP_ON_MEMORY, reported by
    kernel test robot <lkp@intel.com>.
  - Move sysctl code from kernel/sysctl.c to mm/hugetlb_vmemmap.c.

Muchun Song (4):
  mm: hugetlb_vmemmap: introduce
    CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
  mm: memory_hotplug: override memmap_on_memory when
    hugetlb_free_vmemmap=on
  mm: hugetlb_vmemmap: use kstrtobool for hugetlb_vmemmap param parsing
  mm: hugetlb_vmemmap: add hugetlb_optimize_vmemmap sysctl

 Documentation/admin-guide/kernel-parameters.txt |   6 +-
 Documentation/admin-guide/sysctl/vm.rst         |  27 +++++++
 Kbuild                                          |  19 +++++
 arch/x86/mm/init_64.c                           |   2 +-
 include/linux/hugetlb.h                         |   2 +-
 include/linux/kconfig.h                         |   4 +
 include/linux/memory_hotplug.h                  |   9 +++
 include/linux/mm.h                              |   2 +-
 include/linux/page-flags.h                      |   2 +-
 kernel/autoconf_ext.c                           |  26 ++++++
 mm/hugetlb_vmemmap.c                            | 102 ++++++++++++++++++++----
 mm/hugetlb_vmemmap.h                            |   8 +-
 mm/memory_hotplug.c                             |  27 +++++--
 mm/sparse-vmemmap.c                             |   4 +-
 scripts/mod/Makefile                            |   2 +
 15 files changed, 207 insertions(+), 35 deletions(-)
 create mode 100644 kernel/autoconf_ext.c

-- 
2.11.0



* [PATCH v8 1/4] mm: hugetlb_vmemmap: introduce CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
  2022-04-13 14:47 [PATCH v8 0/4] add hugetlb_optimize_vmemmap sysctl Muchun Song
@ 2022-04-13 14:47 ` Muchun Song
  2022-04-13 19:08   ` Andrew Morton
  2022-04-20 17:11   ` Masahiro Yamada
  2022-04-13 14:47 ` [PATCH v8 2/4] mm: memory_hotplug: override memmap_on_memory when hugetlb_free_vmemmap=on Muchun Song
                   ` (2 subsequent siblings)
  3 siblings, 2 replies; 13+ messages in thread
From: Muchun Song @ 2022-04-13 14:47 UTC (permalink / raw)
  To: corbet, mike.kravetz, akpm, mcgrof, keescook, yzaikin, osalvador,
	david, masahiroy
  Cc: linux-doc, linux-kernel, linux-mm, duanxiongchun, smuchun, Muchun Song

If the size of "struct page" is not a power of two while the feature
of minimizing the overhead of struct page associated with each HugeTLB
page is enabled, then the vmemmap pages of HugeTLB will be corrupted
after remapping (in theory a panic is about to happen).  But this can
only happen when !CONFIG_MEMCG && !CONFIG_SLUB on x86_64, which is not a
conventional configuration nowadays.  So it is not a real world issue,
just the result of a code review.  But we have to prevent anyone from
building that combination of configurations.  In order to avoid scattering
checks like "is_power_of_2(sizeof(struct page))" throughout
mm/hugetlb_vmemmap.c, introduce a new macro
CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP to indicate that the size of
struct page is a power of two and CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP is
enabled, and make the code of this feature depend on this new macro.
This prevents anyone from ending up with an unexpected configuration.
A new autoconf_ext.h is introduced as well, which serves as an extension
of autoconf.h: since these special configurations (e.g.
CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP here) rely on autoconf.h
(generated from Kconfig), they cannot be embedded into Kconfig itself.
After this change, it will be easy for anyone who wants to do something
similar (add a new CONFIG) in the future.
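
For illustration, assuming CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP=y and a
power-of-two struct page, the include/generated/autoconf_ext.h produced by
the Kbuild rule in this patch would look roughly like the sketch below (the
exact formatting comes from the filechk offsets helper, so this is an
approximation, not output copied from a real build):

#ifndef __LINUX_AUTOCONF_EXT_H__
#define __LINUX_AUTOCONF_EXT_H__
/*
 * DO NOT MODIFY.
 *
 * This file was generated by Kbuild
 */

#define CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP 1 /* IS_ENABLED(...) && is_power_of_2(sizeof(struct page)) */

#endif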

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Suggested-by: Luis Chamberlain <mcgrof@kernel.org>
---
 Kbuild                     | 19 +++++++++++++++++++
 arch/x86/mm/init_64.c      |  2 +-
 include/linux/hugetlb.h    |  2 +-
 include/linux/kconfig.h    |  4 ++++
 include/linux/mm.h         |  2 +-
 include/linux/page-flags.h |  2 +-
 kernel/autoconf_ext.c      | 26 ++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.c       |  8 ++------
 mm/hugetlb_vmemmap.h       |  4 ++--
 mm/sparse-vmemmap.c        |  4 ++--
 scripts/mod/Makefile       |  2 ++
 11 files changed, 61 insertions(+), 14 deletions(-)
 create mode 100644 kernel/autoconf_ext.c

diff --git a/Kbuild b/Kbuild
index fa441b98c9f6..83c0d5a418d1 100644
--- a/Kbuild
+++ b/Kbuild
@@ -2,6 +2,12 @@
 #
 # Kbuild for top-level directory of the kernel
 
+# autoconf_ext.h is generated last since it depends on other generated headers,
+# however those other generated headers may include autoconf_ext.h. Use the
+# following macro to avoid circular dependency.
+
+KBUILD_CFLAGS_KERNEL += -D__EXCLUDE_AUTOCONF_EXT_H
+
 #####
 # Generate bounds.h
 
@@ -37,6 +43,19 @@ $(offsets-file): arch/$(SRCARCH)/kernel/asm-offsets.s FORCE
 	$(call filechk,offsets,__ASM_OFFSETS_H__)
 
 #####
+# Generate autoconf_ext.h.
+
+autoconf_ext-file := include/generated/autoconf_ext.h
+
+always-y += $(autoconf_ext-file)
+targets += kernel/autoconf_ext.s
+
+kernel/autoconf_ext.s: $(bounds-file) $(timeconst-file) $(offsets-file)
+
+$(autoconf_ext-file): kernel/autoconf_ext.s FORCE
+	$(call filechk,offsets,__LINUX_AUTOCONF_EXT_H__)
+
+#####
 # Check for missing system calls
 
 always-y += missing-syscalls
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 4b9e0012bbbf..9b8dfa6e4da8 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1268,7 +1268,7 @@ static struct kcore_list kcore_vsyscall;
 
 static void __init register_page_bootmem_info(void)
 {
-#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP)
+#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP)
 	int i;
 
 	for_each_online_node(i)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index ac2ece9e9c79..d42de8abd2b6 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -623,7 +623,7 @@ struct hstate {
 	unsigned int nr_huge_pages_node[MAX_NUMNODES];
 	unsigned int free_huge_pages_node[MAX_NUMNODES];
 	unsigned int surplus_huge_pages_node[MAX_NUMNODES];
-#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
+#ifdef CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
 	unsigned int optimize_vmemmap_pages;
 #endif
 #ifdef CONFIG_CGROUP_HUGETLB
diff --git a/include/linux/kconfig.h b/include/linux/kconfig.h
index 20d1079e92b4..00796794f177 100644
--- a/include/linux/kconfig.h
+++ b/include/linux/kconfig.h
@@ -4,6 +4,10 @@
 
 #include <generated/autoconf.h>
 
+#if defined(__KERNEL__) && !defined(__EXCLUDE_AUTOCONF_EXT_H)
+#include <generated/autoconf_ext.h>
+#endif
+
 #ifdef CONFIG_CPU_BIG_ENDIAN
 #define __BIG_ENDIAN 4321
 #else
diff --git a/include/linux/mm.h b/include/linux/mm.h
index e0ad13486035..4c36f77a5745 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3186,7 +3186,7 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
 }
 #endif
 
-#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
+#ifdef CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
 int vmemmap_remap_free(unsigned long start, unsigned long end,
 		       unsigned long reuse);
 int vmemmap_remap_alloc(unsigned long start, unsigned long end,
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index b70124b9c7c1..e409b10cd677 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -199,7 +199,7 @@ enum pageflags {
 
 #ifndef __GENERATING_BOUNDS_H
 
-#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
+#ifdef CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
 DECLARE_STATIC_KEY_MAYBE(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON,
 			 hugetlb_optimize_vmemmap_key);
 
diff --git a/kernel/autoconf_ext.c b/kernel/autoconf_ext.c
new file mode 100644
index 000000000000..8475735c6fc9
--- /dev/null
+++ b/kernel/autoconf_ext.c
@@ -0,0 +1,26 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Generate definitions needed by the preprocessor.
+ * This code generates raw asm output which is post-processed
+ * to extract and format the required data.
+ */
+#include <linux/mm_types.h>
+#include <linux/kbuild.h>
+#include <linux/log2.h>
+
+int main(void)
+{
+	if (IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP) &&
+	    is_power_of_2(sizeof(struct page))) {
+		/*
+		 * The 2nd parameter of DEFINE() will go into the comments. Do
+		 * not pass 1 directly to it to make the generated macro more
+		 * clear for the readers.
+		 */
+		DEFINE(CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP,
+		       IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP) &&
+		       is_power_of_2(sizeof(struct page)));
+	}
+
+	return 0;
+}
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 2655434a946b..be73782cc1cf 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -178,6 +178,7 @@
 
 #include "hugetlb_vmemmap.h"
 
+#ifdef CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
 /*
  * There are a lot of struct page structures associated with each HugeTLB page.
  * For tail pages, the value of compound_head is the same. So we can reuse first
@@ -194,12 +195,6 @@ EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 
 static int __init hugetlb_vmemmap_early_param(char *buf)
 {
-	/* We cannot optimize if a "struct page" crosses page boundaries. */
-	if (!is_power_of_2(sizeof(struct page))) {
-		pr_warn("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n");
-		return 0;
-	}
-
 	if (!buf)
 		return -EINVAL;
 
@@ -300,3 +295,4 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	pr_info("can optimize %d vmemmap pages for %s\n",
 		h->optimize_vmemmap_pages, h->name);
 }
+#endif /* CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP */
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 109b0a53b6fe..3afae3ff37fa 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -10,7 +10,7 @@
 #define _LINUX_HUGETLB_VMEMMAP_H
 #include <linux/hugetlb.h>
 
-#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
+#ifdef CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
 int hugetlb_vmemmap_alloc(struct hstate *h, struct page *head);
 void hugetlb_vmemmap_free(struct hstate *h, struct page *head);
 void hugetlb_vmemmap_init(struct hstate *h);
@@ -41,5 +41,5 @@ static inline unsigned int hugetlb_optimize_vmemmap_pages(struct hstate *h)
 {
 	return 0;
 }
-#endif /* CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP */
+#endif /* CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 52f36527bab3..6c7f1a9ce2dd 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -34,7 +34,7 @@
 #include <asm/pgalloc.h>
 #include <asm/tlbflush.h>
 
-#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
+#ifdef CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
 /**
  * struct vmemmap_remap_walk - walk vmemmap page table
  *
@@ -420,7 +420,7 @@ int vmemmap_remap_alloc(unsigned long start, unsigned long end,
 
 	return 0;
 }
-#endif /* CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP */
+#endif /* CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP */
 
 /*
  * Allocate a block of memory to be used to back the virtual memory map
diff --git a/scripts/mod/Makefile b/scripts/mod/Makefile
index c9e38ad937fd..f82ab128c086 100644
--- a/scripts/mod/Makefile
+++ b/scripts/mod/Makefile
@@ -1,6 +1,8 @@
 # SPDX-License-Identifier: GPL-2.0
 OBJECT_FILES_NON_STANDARD := y
 CFLAGS_REMOVE_empty.o += $(CC_FLAGS_LTO)
+# See comments in Kbuild
+KBUILD_CFLAGS_KERNEL += -D__EXCLUDE_AUTOCONF_EXT_H
 
 hostprogs-always-y	+= modpost mk_elfconfig
 always-y		+= empty.o
-- 
2.11.0



* [PATCH v8 2/4] mm: memory_hotplug: override memmap_on_memory when hugetlb_free_vmemmap=on
  2022-04-13 14:47 [PATCH v8 0/4] add hugetlb_optimize_vmemmap sysctl Muchun Song
  2022-04-13 14:47 ` [PATCH v8 1/4] mm: hugetlb_vmemmap: introduce CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP Muchun Song
@ 2022-04-13 14:47 ` Muchun Song
  2022-04-13 14:47 ` [PATCH v8 3/4] mm: hugetlb_vmemmap: use kstrtobool for hugetlb_vmemmap param parsing Muchun Song
  2022-04-13 14:47 ` [PATCH v8 4/4] mm: hugetlb_vmemmap: add hugetlb_optimize_vmemmap sysctl Muchun Song
  3 siblings, 0 replies; 13+ messages in thread
From: Muchun Song @ 2022-04-13 14:47 UTC (permalink / raw)
  To: corbet, mike.kravetz, akpm, mcgrof, keescook, yzaikin, osalvador,
	david, masahiroy
  Cc: linux-doc, linux-kernel, linux-mm, duanxiongchun, smuchun, Muchun Song

When "hugetlb_free_vmemmap=on" and "memory_hotplug.memmap_on_memory"
are both passed to boot cmdline, the variable of "memmap_on_memory"
will be set to 1 even if the vmemmap pages will not be allocated from
the hotadded memory since the former takes precedence over the latter.
In the next patch, we want to enable or disable the feature of freeing
vmemmap pages of HugeTLB via sysctl.  We need a way to know if the
feature of memory_hotplug.memmap_on_memory is enabled when enabling
the feature of freeing vmemmap pages since those two features are not
compatible, however, the variable of "memmap_on_memory" cannot indicate
this nowadays.  Do not set "memmap_on_memory" to 1 when both parameters
are passed to cmdline, in this case, "memmap_on_memory" could indicate
if this feature is enabled by the users.

Also introduce mhp_memmap_on_memory() helper to move the definition of
"memmap_on_memory" to the scope of CONFIG_MHP_MEMMAP_ON_MEMORY.  In the
next patch, mhp_memmap_on_memory() will also be exported to be used in
hugetlb_vmemmap.c.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/memory_hotplug.c | 32 ++++++++++++++++++++++++++------
 1 file changed, 26 insertions(+), 6 deletions(-)

diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 74430f88853d..f6eab03397d3 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -42,14 +42,36 @@
 #include "internal.h"
 #include "shuffle.h"
 
+#ifdef CONFIG_MHP_MEMMAP_ON_MEMORY
+static int memmap_on_memory_set(const char *val, const struct kernel_param *kp)
+{
+	if (hugetlb_optimize_vmemmap_enabled())
+		return 0;
+	return param_set_bool(val, kp);
+}
+
+static const struct kernel_param_ops memmap_on_memory_ops = {
+	.flags	= KERNEL_PARAM_OPS_FL_NOARG,
+	.set	= memmap_on_memory_set,
+	.get	= param_get_bool,
+};
 
 /*
  * memory_hotplug.memmap_on_memory parameter
  */
 static bool memmap_on_memory __ro_after_init;
-#ifdef CONFIG_MHP_MEMMAP_ON_MEMORY
-module_param(memmap_on_memory, bool, 0444);
+module_param_cb(memmap_on_memory, &memmap_on_memory_ops, &memmap_on_memory, 0444);
 MODULE_PARM_DESC(memmap_on_memory, "Enable memmap on memory for memory hotplug");
+
+static inline bool mhp_memmap_on_memory(void)
+{
+	return memmap_on_memory;
+}
+#else
+static inline bool mhp_memmap_on_memory(void)
+{
+	return false;
+}
 #endif
 
 enum {
@@ -1272,9 +1294,7 @@ bool mhp_supports_memmap_on_memory(unsigned long size)
 	 *       altmap as an alternative source of memory, and we do not exactly
 	 *       populate a single PMD.
 	 */
-	return memmap_on_memory &&
-	       !hugetlb_optimize_vmemmap_enabled() &&
-	       IS_ENABLED(CONFIG_MHP_MEMMAP_ON_MEMORY) &&
+	return mhp_memmap_on_memory() &&
 	       size == memory_block_size_bytes() &&
 	       IS_ALIGNED(vmemmap_size, PMD_SIZE) &&
 	       IS_ALIGNED(remaining_size, (pageblock_nr_pages << PAGE_SHIFT));
@@ -2081,7 +2101,7 @@ static int __ref try_remove_memory(u64 start, u64 size)
 	 * We only support removing memory added with MHP_MEMMAP_ON_MEMORY in
 	 * the same granularity it was added - a single memory block.
 	 */
-	if (memmap_on_memory) {
+	if (mhp_memmap_on_memory()) {
 		nr_vmemmap_pages = walk_memory_blocks(start, size, NULL,
 						      get_nr_vmemmap_pages_cb);
 		if (nr_vmemmap_pages) {
-- 
2.11.0



* [PATCH v8 3/4] mm: hugetlb_vmemmap: use kstrtobool for hugetlb_vmemmap param parsing
  2022-04-13 14:47 [PATCH v8 0/4] add hugetlb_optimize_vmemmap sysctl Muchun Song
  2022-04-13 14:47 ` [PATCH v8 1/4] mm: hugetlb_vmemmap: introduce CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP Muchun Song
  2022-04-13 14:47 ` [PATCH v8 2/4] mm: memory_hotplug: override memmap_on_memory when hugetlb_free_vmemmap=on Muchun Song
@ 2022-04-13 14:47 ` Muchun Song
  2022-04-13 14:47 ` [PATCH v8 4/4] mm: hugetlb_vmemmap: add hugetlb_optimize_vmemmap sysctl Muchun Song
  3 siblings, 0 replies; 13+ messages in thread
From: Muchun Song @ 2022-04-13 14:47 UTC (permalink / raw)
  To: corbet, mike.kravetz, akpm, mcgrof, keescook, yzaikin, osalvador,
	david, masahiroy
  Cc: linux-doc, linux-kernel, linux-mm, duanxiongchun, smuchun, Muchun Song

Use kstrtobool rather than open coding "on" and "off" parsing in
mm/hugetlb_vmemmap.c, which is more flexible and handles all of the
common variants like 'Yy1Nn0' or [oO][NnFf] for "on" and "off".
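
For illustration only (a small userspace sketch, not the kernel's
lib/kstrtox.c implementation), these are the kinds of strings such a
boolean parser accepts:

#include <stdbool.h>
#include <stdio.h>

/* Simplified stand-in for kstrtobool(); illustration only. */
static int parse_bool(const char *s, bool *res)
{
	if (!s || !s[0])
		return -1;

	switch (s[0]) {
	case 'y': case 'Y': case '1':
		*res = true;
		return 0;
	case 'n': case 'N': case '0':
		*res = false;
		return 0;
	case 'o': case 'O':
		/* "on" / "off" in any mix of case */
		if (s[1] == 'n' || s[1] == 'N') {
			*res = true;
			return 0;
		}
		if (s[1] == 'f' || s[1] == 'F') {
			*res = false;
			return 0;
		}
		break;
	}
	return -1;
}

int main(void)
{
	const char *samples[] = { "on", "OFF", "Y", "0", "bogus" };

	for (int i = 0; i < 5; i++) {
		bool v;

		if (parse_bool(samples[i], &v) == 0)
			printf("%-5s -> %d\n", samples[i], v);
		else
			printf("%-5s -> invalid\n", samples[i]);
	}
	return 0;
}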

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 Documentation/admin-guide/kernel-parameters.txt |  6 +++---
 mm/hugetlb_vmemmap.c                            | 10 +++++-----
 2 files changed, 8 insertions(+), 8 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index f3cf9f21f6eb..6ea428023d51 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1669,10 +1669,10 @@
 			enabled.
 			Allows heavy hugetlb users to free up some more
 			memory (7 * PAGE_SIZE for each 2MB hugetlb page).
-			Format: { on | off (default) }
+			Format: { [oO][Nn]/Y/y/1 | [oO][Ff]/N/n/0 (default) }
 
-			on:  enable the feature
-			off: disable the feature
+			[oO][Nn]/Y/y/1: enable the feature
+			[oO][Ff]/N/n/0: disable the feature
 
 			Built with CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON=y,
 			the default is on.
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index be73782cc1cf..4b6a5cf16f11 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -195,15 +195,15 @@ EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 
 static int __init hugetlb_vmemmap_early_param(char *buf)
 {
-	if (!buf)
+	bool enable;
+
+	if (kstrtobool(buf, &enable))
 		return -EINVAL;
 
-	if (!strcmp(buf, "on"))
+	if (enable)
 		static_branch_enable(&hugetlb_optimize_vmemmap_key);
-	else if (!strcmp(buf, "off"))
-		static_branch_disable(&hugetlb_optimize_vmemmap_key);
 	else
-		return -EINVAL;
+		static_branch_disable(&hugetlb_optimize_vmemmap_key);
 
 	return 0;
 }
-- 
2.11.0



* [PATCH v8 4/4] mm: hugetlb_vmemmap: add hugetlb_optimize_vmemmap sysctl
  2022-04-13 14:47 [PATCH v8 0/4] add hugetlb_optimize_vmemmap sysctl Muchun Song
                   ` (2 preceding siblings ...)
  2022-04-13 14:47 ` [PATCH v8 3/4] mm: hugetlb_vmemmap: use kstrtobool for hugetlb_vmemmap param parsing Muchun Song
@ 2022-04-13 14:47 ` Muchun Song
  2022-04-13 19:10   ` Andrew Morton
  3 siblings, 1 reply; 13+ messages in thread
From: Muchun Song @ 2022-04-13 14:47 UTC (permalink / raw)
  To: corbet, mike.kravetz, akpm, mcgrof, keescook, yzaikin, osalvador,
	david, masahiroy
  Cc: linux-doc, linux-kernel, linux-mm, duanxiongchun, smuchun, Muchun Song

We must add hugetlb_free_vmemmap=on (or "off") to the boot cmdline and
reboot the server to enable or disable the feature of optimizing vmemmap
pages associated with HugeTLB pages.  However, rebooting usually takes a
long time.  So add a sysctl to enable or disable the feature at runtime
without rebooting.

Once enabled, the vmemmap pages of subsequent allocations of HugeTLB pages
from the buddy allocator will be optimized (7 pages per 2MB HugeTLB page
and 4095 pages per 1GB HugeTLB page), whereas already allocated HugeTLB
pages will not be optimized.  When those optimized HugeTLB pages are freed
from the HugeTLB pool back to the buddy allocator, the vmemmap pages
representing that range need to be remapped again and the vmemmap pages
discarded earlier need to be reallocated.  If your use case is that
HugeTLB pages are allocated 'on the fly' instead of being pulled from the
HugeTLB pool, you should weigh the benefit of memory savings against the
extra overhead of allocating or freeing HugeTLB pages between the HugeTLB
pool and the buddy allocator.  Another behavior to note is that if the
system is under heavy memory pressure, it could prevent the user from
freeing HugeTLB pages from the HugeTLB pool to the buddy allocator, since
the allocation of vmemmap pages could fail; you have to retry later if
your system encounters this situation.

Once disabled, the vmemmap pages of subsequent allocations of HugeTLB
pages from the buddy allocator will not be optimized, whereas already
optimized HugeTLB pages will not be affected.  If you want to make sure
there are no optimized HugeTLB pages, you can set "nr_hugepages" to 0
first and then disable this.
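
As an illustration (a sketch, not part of this patch), once the sysctl is
registered it can be toggled at runtime from userspace, assuming it shows
up as /proc/sys/vm/hugetlb_optimize_vmemmap:

#include <stdio.h>

/* Write "1" (enable) or "0" (disable) to the proposed sysctl; sketch only. */
static int set_hugetlb_optimize_vmemmap(int enable)
{
	const char *path = "/proc/sys/vm/hugetlb_optimize_vmemmap";
	FILE *f = fopen(path, "w");

	if (!f) {
		perror(path);
		return -1;
	}
	fprintf(f, "%d\n", enable ? 1 : 0);
	return fclose(f);
}

int main(void)
{
	return set_hugetlb_optimize_vmemmap(1) ? 1 : 0;
}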

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 Documentation/admin-guide/sysctl/vm.rst | 27 ++++++++++
 include/linux/memory_hotplug.h          |  9 ++++
 mm/hugetlb_vmemmap.c                    | 90 +++++++++++++++++++++++++++++----
 mm/hugetlb_vmemmap.h                    |  4 +-
 mm/memory_hotplug.c                     |  7 +--
 5 files changed, 121 insertions(+), 16 deletions(-)

diff --git a/Documentation/admin-guide/sysctl/vm.rst b/Documentation/admin-guide/sysctl/vm.rst
index 747e325ebcd0..5c804ada85c5 100644
--- a/Documentation/admin-guide/sysctl/vm.rst
+++ b/Documentation/admin-guide/sysctl/vm.rst
@@ -562,6 +562,33 @@ Change the minimum size of the hugepage pool.
 See Documentation/admin-guide/mm/hugetlbpage.rst
 
 
+hugetlb_optimize_vmemmap
+========================
+
+Enable (set to 1) or disable (set to 0) the feature of optimizing vmemmap pages
+associated with each HugeTLB page.
+
+Once enabled, the vmemmap pages of subsequent allocations of HugeTLB pages
+from the buddy allocator will be optimized (7 pages per 2MB HugeTLB page and
+4095 pages per 1GB HugeTLB page), whereas already allocated HugeTLB pages will
+not be optimized.  When those optimized HugeTLB pages are freed from the
+HugeTLB pool back to the buddy allocator, the vmemmap pages representing that
+range need to be remapped again and the vmemmap pages discarded earlier need
+to be reallocated.  If your use case is that HugeTLB pages are allocated 'on
+the fly' instead of being pulled from the HugeTLB pool, you should weigh the
+benefit of memory savings against the extra overhead of allocating or freeing
+HugeTLB pages between the HugeTLB pool and the buddy allocator.  Another
+behavior to note is that if the system is under heavy memory pressure, it
+could prevent the user from freeing HugeTLB pages from the HugeTLB pool to the
+buddy allocator, since the allocation of vmemmap pages could fail; you have to
+retry later if your system encounters this situation.
+
+Once disabled, the vmemmap pages of subsequent allocations of HugeTLB pages
+from the buddy allocator will not be optimized, whereas already optimized
+HugeTLB pages will not be affected.  If you want to make sure there are no
+optimized HugeTLB pages, set "nr_hugepages" to 0 first and then disable this.
+
+
 nr_hugepages_mempolicy
 ======================
 
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 7ab15d6fb227..93f2cbea0f9b 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -348,4 +348,13 @@ void arch_remove_linear_mapping(u64 start, u64 size);
 extern bool mhp_supports_memmap_on_memory(unsigned long size);
 #endif /* CONFIG_MEMORY_HOTPLUG */
 
+#ifdef CONFIG_MHP_MEMMAP_ON_MEMORY
+bool mhp_memmap_on_memory(void);
+#else
+static inline bool mhp_memmap_on_memory(void)
+{
+	return false;
+}
+#endif
+
 #endif /* __LINUX_MEMORY_HOTPLUG_H */
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 4b6a5cf16f11..94571d63916d 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -176,6 +176,7 @@
  */
 #define pr_fmt(fmt)	"HugeTLB: " fmt
 
+#include <linux/memory_hotplug.h>
 #include "hugetlb_vmemmap.h"
 
 #ifdef CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
@@ -189,21 +190,40 @@
 #define RESERVE_VMEMMAP_NR		1U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
+enum vmemmap_optimize_mode {
+	VMEMMAP_OPTIMIZE_OFF,
+	VMEMMAP_OPTIMIZE_ON,
+};
+
 DEFINE_STATIC_KEY_MAYBE(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON,
 			hugetlb_optimize_vmemmap_key);
 EXPORT_SYMBOL(hugetlb_optimize_vmemmap_key);
 
+static enum vmemmap_optimize_mode vmemmap_optimize_mode =
+	IS_ENABLED(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP_DEFAULT_ON);
+
+static void vmemmap_optimize_mode_switch(enum vmemmap_optimize_mode to)
+{
+	if (vmemmap_optimize_mode == to)
+		return;
+
+	if (to == VMEMMAP_OPTIMIZE_OFF)
+		static_branch_dec(&hugetlb_optimize_vmemmap_key);
+	else
+		static_branch_inc(&hugetlb_optimize_vmemmap_key);
+	vmemmap_optimize_mode = to;
+}
+
 static int __init hugetlb_vmemmap_early_param(char *buf)
 {
 	bool enable;
+	enum vmemmap_optimize_mode mode;
 
 	if (kstrtobool(buf, &enable))
 		return -EINVAL;
 
-	if (enable)
-		static_branch_enable(&hugetlb_optimize_vmemmap_key);
-	else
-		static_branch_disable(&hugetlb_optimize_vmemmap_key);
+	mode = enable ? VMEMMAP_OPTIMIZE_ON : VMEMMAP_OPTIMIZE_OFF;
+	vmemmap_optimize_mode_switch(mode);
 
 	return 0;
 }
@@ -236,8 +256,10 @@ int hugetlb_vmemmap_alloc(struct hstate *h, struct page *head)
 	 */
 	ret = vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse,
 				  GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE);
-	if (!ret)
+	if (!ret) {
 		ClearHPageVmemmapOptimized(head);
+		static_branch_dec(&hugetlb_optimize_vmemmap_key);
+	}
 
 	return ret;
 }
@@ -251,6 +273,8 @@ void hugetlb_vmemmap_free(struct hstate *h, struct page *head)
 	if (!vmemmap_pages)
 		return;
 
+	static_branch_inc(&hugetlb_optimize_vmemmap_key);
+
 	vmemmap_addr	+= RESERVE_VMEMMAP_SIZE;
 	vmemmap_end	= vmemmap_addr + (vmemmap_pages << PAGE_SHIFT);
 	vmemmap_reuse	= vmemmap_addr - PAGE_SIZE;
@@ -260,7 +284,9 @@ void hugetlb_vmemmap_free(struct hstate *h, struct page *head)
 	 * to the page which @vmemmap_reuse is mapped to, then free the pages
 	 * which the range [@vmemmap_addr, @vmemmap_end] is mapped to.
 	 */
-	if (!vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse))
+	if (vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse))
+		static_branch_dec(&hugetlb_optimize_vmemmap_key);
+	else
 		SetHPageVmemmapOptimized(head);
 }
 
@@ -277,9 +303,6 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	BUILD_BUG_ON(__NR_USED_SUBPAGE >=
 		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
 
-	if (!hugetlb_optimize_vmemmap_enabled())
-		return;
-
 	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
 	/*
 	 * The head page is not to be freed to buddy allocator, the other tail
@@ -295,4 +318,53 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	pr_info("can optimize %d vmemmap pages for %s\n",
 		h->optimize_vmemmap_pages, h->name);
 }
+
+#ifdef CONFIG_PROC_SYSCTL
+static int hugetlb_optimize_vmemmap_handler(struct ctl_table *table, int write,
+					    void *buffer, size_t *length,
+					    loff_t *ppos)
+{
+	int ret;
+	enum vmemmap_optimize_mode mode;
+	static DEFINE_MUTEX(sysctl_mutex);
+
+	if (write && !capable(CAP_SYS_ADMIN))
+		return -EPERM;
+
+	mutex_lock(&sysctl_mutex);
+	mode = vmemmap_optimize_mode;
+	table->data = &mode;
+	ret = proc_dointvec_minmax(table, write, buffer, length, ppos);
+	if (write && !ret)
+		vmemmap_optimize_mode_switch(mode);
+	mutex_unlock(&sysctl_mutex);
+
+	return ret;
+}
+
+static struct ctl_table hugetlb_vmemmap_sysctls[] = {
+	{
+		.procname	= "hugetlb_optimize_vmemmap",
+		.maxlen		= sizeof(enum vmemmap_optimize_mode),
+		.mode		= 0644,
+		.proc_handler	= hugetlb_optimize_vmemmap_handler,
+		.extra1		= SYSCTL_ZERO,
+		.extra2		= SYSCTL_ONE,
+	},
+	{ }
+};
+
+static __init int hugetlb_vmemmap_sysctls_init(void)
+{
+	/*
+	 * The vmemmap pages cannot be optimized if
+	 * "memory_hotplug.memmap_on_memory" is enabled.
+	 */
+	if (!mhp_memmap_on_memory())
+		register_sysctl_init("vm", hugetlb_vmemmap_sysctls);
+
+	return 0;
+}
+late_initcall(hugetlb_vmemmap_sysctls_init);
+#endif /* CONFIG_PROC_SYSCTL */
 #endif /* CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP */
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 3afae3ff37fa..63ae2766ffe0 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -21,7 +21,9 @@ void hugetlb_vmemmap_init(struct hstate *h);
  */
 static inline unsigned int hugetlb_optimize_vmemmap_pages(struct hstate *h)
 {
-	return h->optimize_vmemmap_pages;
+	if (hugetlb_optimize_vmemmap_enabled())
+		return h->optimize_vmemmap_pages;
+	return 0;
 }
 #else
 static inline int hugetlb_vmemmap_alloc(struct hstate *h, struct page *head)
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index f6eab03397d3..af844a01e956 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -63,15 +63,10 @@ static bool memmap_on_memory __ro_after_init;
 module_param_cb(memmap_on_memory, &memmap_on_memory_ops, &memmap_on_memory, 0444);
 MODULE_PARM_DESC(memmap_on_memory, "Enable memmap on memory for memory hotplug");
 
-static inline bool mhp_memmap_on_memory(void)
+bool mhp_memmap_on_memory(void)
 {
 	return memmap_on_memory;
 }
-#else
-static inline bool mhp_memmap_on_memory(void)
-{
-	return false;
-}
 #endif
 
 enum {
-- 
2.11.0



* Re: [PATCH v8 1/4] mm: hugetlb_vmemmap: introduce CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
  2022-04-13 14:47 ` [PATCH v8 1/4] mm: hugetlb_vmemmap: introduce CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP Muchun Song
@ 2022-04-13 19:08   ` Andrew Morton
  2022-04-14  3:10     ` Muchun Song
  2022-04-20 17:11   ` Masahiro Yamada
  1 sibling, 1 reply; 13+ messages in thread
From: Andrew Morton @ 2022-04-13 19:08 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, mcgrof, keescook, yzaikin, osalvador,
	david, masahiroy, linux-doc, linux-kernel, linux-mm,
	duanxiongchun, smuchun

On Wed, 13 Apr 2022 22:47:45 +0800 Muchun Song <songmuchun@bytedance.com> wrote:

> If the size of "struct page" is not a power of two while the feature
> of minimizing the overhead of struct page associated with each HugeTLB
> page is enabled, then the vmemmap pages of HugeTLB will be corrupted
> after remapping (in theory a panic is about to happen).  But this can
> only happen when !CONFIG_MEMCG && !CONFIG_SLUB on x86_64, which is not a
> conventional configuration nowadays.  So it is not a real world issue,
> just the result of a code review.

The patch does add a whole bunch of tricky junk to address something
which won't happen.  How about we simply disable
CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP if (!CONFIG_MEMCG &&
!CONFIG_SLUB)?



* Re: [PATCH v8 4/4] mm: hugetlb_vmemmap: add hugetlb_optimize_vmemmap sysctl
  2022-04-13 14:47 ` [PATCH v8 4/4] mm: hugetlb_vmemmap: add hugetlb_optimize_vmemmap sysctl Muchun Song
@ 2022-04-13 19:10   ` Andrew Morton
  2022-04-14  4:04     ` Muchun Song
  0 siblings, 1 reply; 13+ messages in thread
From: Andrew Morton @ 2022-04-13 19:10 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, mcgrof, keescook, yzaikin, osalvador,
	david, masahiroy, linux-doc, linux-kernel, linux-mm,
	duanxiongchun, smuchun

On Wed, 13 Apr 2022 22:47:48 +0800 Muchun Song <songmuchun@bytedance.com> wrote:

> We must add hugetlb_free_vmemmap=on (or "off") to the boot cmdline and
> reboot the server to enable or disable the feature of optimizing vmemmap
> pages associated with HugeTLB pages.  However, rebooting usually takes a
> long time.  So add a sysctl to enable or disable the feature at runtime
> without rebooting.

Do we really need this feature?  Really?  What's the use case and what
is the end-user value?

Presumably CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP worsens things for some
setups/workloads?  Please tell us much more about that.  What is the
magnitude of the deoptimization?


* Re: [PATCH v8 1/4] mm: hugetlb_vmemmap: introduce CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
  2022-04-13 19:08   ` Andrew Morton
@ 2022-04-14  3:10     ` Muchun Song
  2022-04-19  6:23       ` Muchun Song
  0 siblings, 1 reply; 13+ messages in thread
From: Muchun Song @ 2022-04-14  3:10 UTC (permalink / raw)
  To: Andrew Morton
  Cc: corbet, mike.kravetz, mcgrof, keescook, yzaikin, osalvador,
	david, masahiroy, linux-doc, linux-kernel, linux-mm,
	duanxiongchun, smuchun

On Wed, Apr 13, 2022 at 12:08:04PM -0700, Andrew Morton wrote:
> On Wed, 13 Apr 2022 22:47:45 +0800 Muchun Song <songmuchun@bytedance.com> wrote:
> 
> > If the size of "struct page" is not a power of two while the feature
> > of minimizing the overhead of struct page associated with each HugeTLB
> > page is enabled, then the vmemmap pages of HugeTLB will be corrupted
> > after remapping (in theory a panic is about to happen).  But this can
> > only happen when !CONFIG_MEMCG && !CONFIG_SLUB on x86_64, which is not a
> > conventional configuration nowadays.  So it is not a real world issue,
> > just the result of a code review.
> 
> The patch does add a whole bunch of tricky junk to address something
> which won't happen.  How about we simply disable
> CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP if (!CONFIG_MEMCG &&
> !CONFIG_SLUB)?
>
 
I'm afraid not. The size of 'struct page' also depends on
LAST_CPUPID_NOT_IN_PAGE_FLAGS which could be defined
when CONFIG_NODES_SHIFT or CONFIG_KASAN_SW_TAGS
or CONFIG_NR_CPUS is configured with a large value.  Then
the size would be more than 64 bytes.

Seems like the approach [1] is simpler and more feasible,
and could also prevent users from doing unexpected
configurations; however, Masahiro objected to it.
Shall we look back at that approach again?

Thanks.



* Re: [PATCH v8 4/4] mm: hugetlb_vmemmap: add hugetlb_optimize_vmemmap sysctl
  2022-04-13 19:10   ` Andrew Morton
@ 2022-04-14  4:04     ` Muchun Song
  0 siblings, 0 replies; 13+ messages in thread
From: Muchun Song @ 2022-04-14  4:04 UTC (permalink / raw)
  To: Andrew Morton
  Cc: corbet, mike.kravetz, mcgrof, keescook, yzaikin, osalvador,
	david, masahiroy, linux-doc, linux-kernel, linux-mm,
	duanxiongchun, smuchun

On Wed, Apr 13, 2022 at 12:10:51PM -0700, Andrew Morton wrote:
> On Wed, 13 Apr 2022 22:47:48 +0800 Muchun Song <songmuchun@bytedance.com> wrote:
> 
> > We must add hugetlb_free_vmemmap=on (or "off") to the boot cmdline and
> > reboot the server to enable or disable the feature of optimizing vmemmap
> > pages associated with HugeTLB pages.  However, rebooting usually takes a
> > long time.  So add a sysctl to enable or disable the feature at runtime
> > without rebooting.
> 
> Do we really need this feature?  Really?  What's the use case and what
> is the end-user value?
>

We know that this feature is disabled by default unless
"hugetlb_free_vmemmap=on" is passed on the boot cmdline. When we
(ByteDance) deliver servers to users who want to enable this feature,
they have to configure grub (change the boot cmdline) and reboot the
servers, and rebooting usually takes a long time (we have thousands
of servers). It's a very bad experience for the users. So we need an
approach to enable this feature at runtime without rebooting. This is
a real use case from our production environment.

> Presumably CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP worsens things for some
> setups/workloads?  Please tell us much more about that.  What is the
> magnitude of the deoptimization?
>

Most workloads allocate HugeTLB pages from the pool, not from the buddy
allocator; those are the users of this feature.  However, if the use
case is that HugeTLB pages are allocated 'on the fly' instead of being
pulled from the HugeTLB pool, those workloads would be affected with
this feature enabled.  Such workloads can be identified by the fact
that they never explicitly allocate huge pages with 'nr_hugepages' but
only set 'nr_overcommit_hugepages' and then let the pages be allocated
from the buddy allocator at fault time.  I don't know what the specific
workload is, but I can confirm it is a real use case from commit
099730d67417.  For those workloads, the page fault time could be ~2x
slower than before.  I suspect those users would want to disable this
feature if the system has enabled it and they don't think the memory
savings benefit is enough to make up for the performance drop.

Do you think it's OK if I put that information in the commit log or
vm.rst?
Thanks.



* Re: [PATCH v8 1/4] mm: hugetlb_vmemmap: introduce CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
  2022-04-14  3:10     ` Muchun Song
@ 2022-04-19  6:23       ` Muchun Song
  0 siblings, 0 replies; 13+ messages in thread
From: Muchun Song @ 2022-04-19  6:23 UTC (permalink / raw)
  To: Andrew Morton
  Cc: corbet, mike.kravetz, mcgrof, keescook, yzaikin, osalvador,
	david, masahiroy, linux-doc, linux-kernel, linux-mm,
	duanxiongchun, smuchun

On Thu, Apr 14, 2022 at 11:10:01AM +0800, Muchun Song wrote:
> On Wed, Apr 13, 2022 at 12:08:04PM -0700, Andrew Morton wrote:
> > On Wed, 13 Apr 2022 22:47:45 +0800 Muchun Song <songmuchun@bytedance.com> wrote:
> > 
> > > If the size of "struct page" is not a power of two while the feature
> > > of minimizing the overhead of struct page associated with each HugeTLB
> > > page is enabled, then the vmemmap pages of HugeTLB will be corrupted
> > > after remapping (in theory a panic is about to happen).  But this can
> > > only happen when !CONFIG_MEMCG && !CONFIG_SLUB on x86_64, which is not a
> > > conventional configuration nowadays.  So it is not a real world issue,
> > > just the result of a code review.
> > 
> > The patch does add a whole bunch of tricky junk to address something
> > which won't happen.  How about we simply disable
> > CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP if (!CONFIG_MEMCG &&
> > !CONFIG_SLUB)?
> >
>  
> I'm afraid not. The size of 'struct page' also depends on
> LAST_CPUPID_NOT_IN_PAGE_FLAGS which could be defined
> when CONFIG_NODES_SHIFT or CONFIG_KASAN_SW_TAGS
> or CONFIG_NR_CPUS is configured with a large value.  Then
> the size would be more than 64 bytes.
> 
> Seems like the approach [1] is simpler and more feasible,

Sorry, forgot to post the Link.

[1] https://lore.kernel.org/all/20220323125523.79254-2-songmuchun@bytedance.com/

> and could also prevent users from doing unexpected
> configurations; however, Masahiro objected to it.
> Shall we look back at that approach again?
>

Hi all,

Friendly ping.

I have implemented 3 approaches to address this issue:

  1) V8 adds a lot of tricky code.
  2) V5 adds feedback from Kbuild to Kconfig, which, as Masahiro
     said, is terrible.
  3) V1 [2] adds an is_power_of_2() check to hugetlb_vmemmap.c.

Having iterated and explored through 8 versions, v1 seems to be the
easiest way to address this.  I think reusing v1 may be the best choice
now.  What do you think?

[2] https://lore.kernel.org/all/20220228071022.26143-2-songmuchun@bytedance.com/

Thanks.



* Re: [PATCH v8 1/4] mm: hugetlb_vmemmap: introduce CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
  2022-04-13 14:47 ` [PATCH v8 1/4] mm: hugetlb_vmemmap: introduce CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP Muchun Song
  2022-04-13 19:08   ` Andrew Morton
@ 2022-04-20 17:11   ` Masahiro Yamada
  2022-04-20 23:30     ` Mike Kravetz
  1 sibling, 1 reply; 13+ messages in thread
From: Masahiro Yamada @ 2022-04-20 17:11 UTC (permalink / raw)
  To: Muchun Song
  Cc: Jonathan Corbet, mike.kravetz, Andrew Morton, Luis R. Rodriguez,
	Kees Cook, Iurii Zaikin, Oscar Salvador, David Hildenbrand,
	open list:DOCUMENTATION, Linux Kernel Mailing List,
	Linux Memory Management List, duanxiongchun, smuchun

On Wed, Apr 13, 2022 at 11:48 PM Muchun Song <songmuchun@bytedance.com> wrote:
>
> If the size of "struct page" is not a power of two while the feature
> of minimizing the overhead of struct page associated with each HugeTLB
> page is enabled, then the vmemmap pages of HugeTLB will be corrupted
> after remapping (in theory a panic is about to happen).  But this can
> only happen when !CONFIG_MEMCG && !CONFIG_SLUB on x86_64, which is not a
> conventional configuration nowadays.  So it is not a real world issue,
> just the result of a code review.  But we have to prevent anyone from
> building that combination of configurations.  In order to avoid scattering
> checks like "is_power_of_2(sizeof(struct page))" throughout
> mm/hugetlb_vmemmap.c, introduce a new macro
> CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP to indicate that the size of
> struct page is a power of two and CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP is
> enabled, and make the code of this feature depend on this new macro.
> This prevents anyone from ending up with an unexpected configuration.
> A new autoconf_ext.h is introduced as well, which serves as an extension
> of autoconf.h: since these special configurations (e.g.
> CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP here) rely on autoconf.h
> (generated from Kconfig), they cannot be embedded into Kconfig itself.
> After this change, it will be easy for anyone who wants to do something
> similar (add a new CONFIG) in the future.
>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> Suggested-by: Luis Chamberlain <mcgrof@kernel.org>
> ---
>  Kbuild                     | 19 +++++++++++++++++++
>  arch/x86/mm/init_64.c      |  2 +-
>  include/linux/hugetlb.h    |  2 +-
>  include/linux/kconfig.h    |  4 ++++
>  include/linux/mm.h         |  2 +-
>  include/linux/page-flags.h |  2 +-
>  kernel/autoconf_ext.c      | 26 ++++++++++++++++++++++++++
>  mm/hugetlb_vmemmap.c       |  8 ++------
>  mm/hugetlb_vmemmap.h       |  4 ++--
>  mm/sparse-vmemmap.c        |  4 ++--
>  scripts/mod/Makefile       |  2 ++
>  11 files changed, 61 insertions(+), 14 deletions(-)
>  create mode 100644 kernel/autoconf_ext.c
>
> diff --git a/Kbuild b/Kbuild
> index fa441b98c9f6..83c0d5a418d1 100644
> --- a/Kbuild
> +++ b/Kbuild
> @@ -2,6 +2,12 @@
>  #
>  # Kbuild for top-level directory of the kernel
>
> +# autoconf_ext.h is generated last since it depends on other generated headers,
> +# however those other generated headers may include autoconf_ext.h. Use the
> +# following macro to avoid circular dependency.
> +
> +KBUILD_CFLAGS_KERNEL += -D__EXCLUDE_AUTOCONF_EXT_H
> +
>  #####
>  # Generate bounds.h
>
> @@ -37,6 +43,19 @@ $(offsets-file): arch/$(SRCARCH)/kernel/asm-offsets.s FORCE
>         $(call filechk,offsets,__ASM_OFFSETS_H__)
>
>  #####
> +# Generate autoconf_ext.h.
> +
> +autoconf_ext-file := include/generated/autoconf_ext.h
> +
> +always-y += $(autoconf_ext-file)
> +targets += kernel/autoconf_ext.s
> +
> +kernel/autoconf_ext.s: $(bounds-file) $(timeconst-file) $(offsets-file)
> +
> +$(autoconf_ext-file): kernel/autoconf_ext.s FORCE
> +       $(call filechk,offsets,__LINUX_AUTOCONF_EXT_H__)
> +
> +#####
>  # Check for missing system calls
>
>  always-y += missing-syscalls
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index 4b9e0012bbbf..9b8dfa6e4da8 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -1268,7 +1268,7 @@ static struct kcore_list kcore_vsyscall;
>
>  static void __init register_page_bootmem_info(void)
>  {
> -#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP)
> +#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP)
>         int i;
>
>         for_each_online_node(i)
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index ac2ece9e9c79..d42de8abd2b6 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -623,7 +623,7 @@ struct hstate {
>         unsigned int nr_huge_pages_node[MAX_NUMNODES];
>         unsigned int free_huge_pages_node[MAX_NUMNODES];
>         unsigned int surplus_huge_pages_node[MAX_NUMNODES];
> -#ifdef CONFIG_HUGETLB_PAGE_OPTIMIZE_VMEMMAP
> +#ifdef CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
>         unsigned int optimize_vmemmap_pages;
>  #endif
>  #ifdef CONFIG_CGROUP_HUGETLB
> diff --git a/include/linux/kconfig.h b/include/linux/kconfig.h
> index 20d1079e92b4..00796794f177 100644
> --- a/include/linux/kconfig.h
> +++ b/include/linux/kconfig.h
> @@ -4,6 +4,10 @@
>
>  #include <generated/autoconf.h>
>
> +#if defined(__KERNEL__) && !defined(__EXCLUDE_AUTOCONF_EXT_H)
> +#include <generated/autoconf_ext.h>
> +#endif
> +


Please do not do this either.

When autoconf_ext.h is updated, the kernel tree
would be rebuilt entirely.





-- 
Best Regards
Masahiro Yamada


* Re: [PATCH v8 1/4] mm: hugetlb_vmemmap: introduce CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
  2022-04-20 17:11   ` Masahiro Yamada
@ 2022-04-20 23:30     ` Mike Kravetz
  2022-04-21  3:18       ` Muchun Song
  0 siblings, 1 reply; 13+ messages in thread
From: Mike Kravetz @ 2022-04-20 23:30 UTC (permalink / raw)
  To: Masahiro Yamada, Muchun Song
  Cc: Jonathan Corbet, Andrew Morton, Luis R. Rodriguez, Kees Cook,
	Iurii Zaikin, Oscar Salvador, David Hildenbrand,
	open list:DOCUMENTATION, Linux Kernel Mailing List,
	Linux Memory Management List, duanxiongchun, smuchun

On 4/20/22 10:11, Masahiro Yamada wrote:
> On Wed, Apr 13, 2022 at 11:48 PM Muchun Song <songmuchun@bytedance.com> wrote:
>>
>> If the size of "struct page" is not a power of two while the feature
>> of minimizing the overhead of struct page associated with each HugeTLB
>> page is enabled, then the vmemmap pages of HugeTLB will be corrupted
>> after remapping (in theory a panic is about to happen).  But this can
>> only happen when !CONFIG_MEMCG && !CONFIG_SLUB on x86_64, which is not a
>> conventional configuration nowadays.  So it is not a real world issue,
>> just the result of a code review.  But we have to prevent anyone from
>> building that combination of configurations.  In order to avoid scattering
>> checks like "is_power_of_2(sizeof(struct page))" throughout
>> mm/hugetlb_vmemmap.c, introduce a new macro

Sorry for jumping in so late.  I am far from an expert in Kconfig, so I did
not pay much attention to all those discussions.

Why not just add one (or a few) simple runtime checks for struct page not
being a power of two before enabling hugetlb vmemmap optimization in the
code?  Sure, it would be ideal to never build/include the vmemmap
optimization code if this can be detected at config time.  However, it
seems this is a very rare combination and the checks for it at config time
are very complex.

Would we really need many checks throughout the code as you mention above?
Or, do we only need a check or two before enabling
hugetlb_optimize_vmemmap_key?
-- 
Mike Kravetz


* Re: [PATCH v8 1/4] mm: hugetlb_vmemmap: introduce CONFIG_HUGETLB_PAGE_HAS_OPTIMIZE_VMEMMAP
  2022-04-20 23:30     ` Mike Kravetz
@ 2022-04-21  3:18       ` Muchun Song
  0 siblings, 0 replies; 13+ messages in thread
From: Muchun Song @ 2022-04-21  3:18 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: Masahiro Yamada, Jonathan Corbet, Andrew Morton,
	Luis R. Rodriguez, Kees Cook, Iurii Zaikin, Oscar Salvador,
	David Hildenbrand, open list:DOCUMENTATION,
	Linux Kernel Mailing List, Linux Memory Management List,
	duanxiongchun, smuchun

On Wed, Apr 20, 2022 at 04:30:02PM -0700, Mike Kravetz wrote:
> On 4/20/22 10:11, Masahiro Yamada wrote:
> > On Wed, Apr 13, 2022 at 11:48 PM Muchun Song <songmuchun@bytedance.com> wrote:
> >>
> >> If the size of "struct page" is not a power of two while the feature
> >> of minimizing the overhead of struct page associated with each HugeTLB
> >> page is enabled, then the vmemmap pages of HugeTLB will be corrupted
> >> after remapping (in theory a panic is about to happen).  But this can
> >> only happen when !CONFIG_MEMCG && !CONFIG_SLUB on x86_64, which is not a
> >> conventional configuration nowadays.  So it is not a real world issue,
> >> just the result of a code review.  But we have to prevent anyone from
> >> building that combination of configurations.  In order to avoid scattering
> >> checks like "is_power_of_2(sizeof(struct page))" throughout
> >> mm/hugetlb_vmemmap.c, introduce a new macro
> 
> Sorry for jumping in so late.  I am far from expert in Kconfig so did not pay
> much attention to all those discussions.
> 
> Why not just add one (or a few) simple runtime checks for struct page not a
> power of two before enabling hugetlb vmemmap optimization in the code?  Sure,
> it would be ideal to never build/include the vmemmap optimization code if
> this can be detected at config time.  However, it seems this is a very rare
> combination and the checks for it at config time are very complex.

Right. Having iterated and explored through 8 versions, I realized that
checking it at config time is very complex.
 
> Would we really need many checks throughout the code as you mention above?
> Or, do we only need a check or two before enabling
> hugetlb_optimize_vmemmap_key?

Yep, now there is only one place that needs to check that size.
I think I should go back to v1; it is simpler.
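
For reference, a sketch of the kind of single runtime check being discussed,
essentially the check that patch 1 of this series removed from
hugetlb_vmemmap_early_param() (the placement and the helper name here are
assumptions, not code from the series):

#include <linux/log2.h>		/* is_power_of_2() */
#include <linux/mm_types.h>	/* struct page */
#include <linux/printk.h>

/*
 * Sketch only: refuse to enable the optimization when a "struct page"
 * would cross page boundaries after remapping.
 */
static bool vmemmap_optimizable(void)
{
	if (!is_power_of_2(sizeof(struct page))) {
		pr_warn_once("cannot optimize vmemmap pages because \"struct page\" crosses page boundaries\n");
		return false;
	}
	return true;
}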

Thanks Mike.

