linux-fsdevel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH v2 00/19] Free some vmemmap pages of hugetlb page
@ 2020-10-26 14:50 Muchun Song
  2020-10-26 14:50 ` [PATCH v2 01/19] mm/memory_hotplug: Move bootmem info registration API to bootmem_info.c Muchun Song
                   ` (20 more replies)
  0 siblings, 21 replies; 43+ messages in thread
From: Muchun Song @ 2020-10-26 14:50 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Hi all,

This patch series will free some vmemmap pages(struct page structures)
associated with each hugetlbpage when preallocated to save memory.

Nowadays we track the status of physical page frames using `struct page`
arranged in one or more arrays. And here exists one-to-one mapping between
the physical page frame and the corresponding `struct page`.

The hugetlbpage support is built on top of multiple page size support
that is provided by most modern architectures. For example, x86 CPUs
normally support 4K and 2M (1G if architecturally supported) page sizes.
Every hugetlbpage has more than one `struct page`. The 2M hugetlbpage
has 512 `struct page` and 1G hugetlbpage has 4096 `struct page`. But
in the core of hugetlbpage only uses the first 4 `struct page` to store
metadata associated with each hugetlbpage. The rest of the `struct page`
are usually read the compound_head field which are all the same value.
If we can free some struct page memory to buddy system so that we can
save a lot of memory.

When the system boot up, every 2M hugetlbpage has 512 `struct page` which
is 8 pages(sizeof(struct page) * 512 / PAGE_SIZE).

   hugetlbpage                  struct pages(8 pages)          page frame(8 pages)
  +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
  |           |                     |     0     | -------------> |     0     |
  |           |                     |     1     | -------------> |     1     |
  |           |                     |     2     | -------------> |     2     |
  |           |                     |     3     | -------------> |     3     |
  |           |                     |     4     | -------------> |     4     |
  |     2M    |                     |     5     | -------------> |     5     |
  |           |                     |     6     | -------------> |     6     |
  |           |                     |     7     | -------------> |     7     |
  |           |                     +-----------+                +-----------+
  |           |
  |           |
  +-----------+


When a hugetlbpage is preallocated, we can change the mapping from above to
bellow.

   hugetlbpage                  struct pages(8 pages)          page frame(8 pages)
  +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
  |           |                     |     0     | -------------> |     0     |
  |           |                     |     1     | -------------> |     1     |
  |           |                     |     2     | -------------> +-----------+
  |           |                     |     3     | -----------------^ ^ ^ ^ ^
  |           |                     |     4     | -------------------+ | | |
  |     2M    |                     |     5     | ---------------------+ | |
  |           |                     |     6     | -----------------------+ |
  |           |                     |     7     | -------------------------+
  |           |                     +-----------+
  |           |
  |           |
  +-----------+

For tail pages, the value of compound_dtor is the same. So we can reuse
first page of tail page structs. We map the virtual addresses of the
remaining 6 pages of tail page structs to the first tail page struct,
and then free these 6 pages. Therefore, we need to reserve at least 2
pages as vmemmap areas.

When a hugetlbpage is freed to the buddy system, we should allocate 6
pages for vmemmap pages and restore the previous mapping relationship.

If we uses the 1G hugetlbpage, we can save 4095 pages. This is a very
substantial gain. On our server, run some SPDK/QEMU applications which
will use 1000GB hugetlbpage. With this feature enabled, we can save
~16GB(1G hugepage)/~11GB(2MB hugepage) memory.

  changelog in v2:
  1. Fix do not call dissolve_compound_page in alloc_huge_page_vmemmap().
  2. Fix some typo and code style problems.
  3. Remove unused handle_vmemmap_fault().
  4. Merge some commits to one commit suggested by Mike.

Muchun Song (19):
  mm/memory_hotplug: Move bootmem info registration API to
    bootmem_info.c
  mm/memory_hotplug: Move {get,put}_page_bootmem() to bootmem_info.c
  mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  mm/hugetlb: Introduce pgtable allocation/freeing helpers
  mm/bootmem_info: Introduce {free,prepare}_vmemmap_page()
  mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
  mm/hugetlb: Defer freeing of hugetlb pages
  mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb
    page
  mm/hugetlb: Introduce remap_huge_page_pmd_vmemmap helper
  mm/hugetlb: Use PG_slab to indicate split pmd
  mm/hugetlb: Support freeing vmemmap pages of gigantic page
  mm/hugetlb: Add a BUILD_BUG_ON to check if struct page size is a power
    of two
  mm/hugetlb: Clear PageHWPoison on the non-error memory page
  mm/hugetlb: Flush work when dissolving hugetlb page
  mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
  mm/hugetlb: Merge pte to huge pmd only for gigantic page
  mm/hugetlb: Gather discrete indexes of tail page
  mm/hugetlb: Add BUILD_BUG_ON to catch invalid usage of tail struct
    page

 .../admin-guide/kernel-parameters.txt         |   9 +
 Documentation/admin-guide/mm/hugetlbpage.rst  |   3 +
 arch/x86/include/asm/hugetlb.h                |  20 +
 arch/x86/include/asm/pgtable_64_types.h       |   8 +
 arch/x86/mm/init_64.c                         |   5 +-
 fs/Kconfig                                    |  16 +
 include/linux/bootmem_info.h                  |  65 ++
 include/linux/hugetlb.h                       |  50 ++
 include/linux/hugetlb_cgroup.h                |  15 +-
 include/linux/memory_hotplug.h                |  27 -
 mm/Makefile                                   |   1 +
 mm/bootmem_info.c                             | 125 +++
 mm/hugetlb.c                                  | 795 +++++++++++++++++-
 mm/memory_hotplug.c                           | 116 ---
 mm/sparse.c                                   |   1 +
 15 files changed, 1091 insertions(+), 165 deletions(-)
 create mode 100644 include/linux/bootmem_info.h
 create mode 100644 mm/bootmem_info.c

-- 
2.20.1


^ permalink raw reply	[flat|nested] 43+ messages in thread

* [PATCH v2 01/19] mm/memory_hotplug: Move bootmem info registration API to bootmem_info.c
  2020-10-26 14:50 [PATCH v2 00/19] Free some vmemmap pages of hugetlb page Muchun Song
@ 2020-10-26 14:50 ` Muchun Song
  2020-10-26 14:50 ` [PATCH v2 02/19] mm/memory_hotplug: Move {get,put}_page_bootmem() " Muchun Song
                   ` (19 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-10-26 14:50 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Move bootmem info registration common API to individual bootmem_info.c
for later patch use. This is just code movement without any functional
change.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 arch/x86/mm/init_64.c          |  1 +
 include/linux/bootmem_info.h   | 27 ++++++++++
 include/linux/memory_hotplug.h | 23 --------
 mm/Makefile                    |  1 +
 mm/bootmem_info.c              | 99 ++++++++++++++++++++++++++++++++++
 mm/memory_hotplug.c            | 91 +------------------------------
 6 files changed, 129 insertions(+), 113 deletions(-)
 create mode 100644 include/linux/bootmem_info.h
 create mode 100644 mm/bootmem_info.c

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index b5a3fa4033d3..c7f7ad55b625 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -33,6 +33,7 @@
 #include <linux/nmi.h>
 #include <linux/gfp.h>
 #include <linux/kcore.h>
+#include <linux/bootmem_info.h>
 
 #include <asm/processor.h>
 #include <asm/bios_ebda.h>
diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
new file mode 100644
index 000000000000..65bb9b23140f
--- /dev/null
+++ b/include/linux/bootmem_info.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __LINUX_BOOTMEM_INFO_H
+#define __LINUX_BOOTMEM_INFO_H
+
+#include <linux/mmzone.h>
+
+/*
+ * Types for free bootmem stored in page->lru.next. These have to be in
+ * some random range in unsigned long space for debugging purposes.
+ */
+enum {
+	MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
+	SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
+	MIX_SECTION_INFO,
+	NODE_INFO,
+	MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
+};
+
+#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
+void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
+#else
+static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
+{
+}
+#endif
+
+#endif /* __LINUX_BOOTMEM_INFO_H */
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 51a877fec8da..19e5d067294c 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -33,18 +33,6 @@ struct vmem_altmap;
 	___page;						   \
 })
 
-/*
- * Types for free bootmem stored in page->lru.next. These have to be in
- * some random range in unsigned long space for debugging purposes.
- */
-enum {
-	MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
-	SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
-	MIX_SECTION_INFO,
-	NODE_INFO,
-	MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
-};
-
 /* Types for control the zone type of onlined and offlined memory */
 enum {
 	/* Offline the memory. */
@@ -209,13 +197,6 @@ static inline void arch_refresh_nodedata(int nid, pg_data_t *pgdat)
 #endif /* CONFIG_NUMA */
 #endif /* CONFIG_HAVE_ARCH_NODEDATA_EXTENSION */
 
-#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
-extern void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
-#else
-static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-}
-#endif
 extern void put_page_bootmem(struct page *page);
 extern void get_page_bootmem(unsigned long ingo, struct page *page,
 			     unsigned long type);
@@ -254,10 +235,6 @@ static inline int mhp_notimplemented(const char *func)
 	return -ENOSYS;
 }
 
-static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-}
-
 static inline int try_online_node(int nid)
 {
 	return 0;
diff --git a/mm/Makefile b/mm/Makefile
index d5649f1c12c0..752111587c99 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -82,6 +82,7 @@ obj-$(CONFIG_SLAB) += slab.o
 obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KASAN)	+= kasan/
 obj-$(CONFIG_FAILSLAB) += failslab.o
+obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_MEMTEST)		+= memtest.o
 obj-$(CONFIG_MIGRATION) += migrate.o
diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
new file mode 100644
index 000000000000..39fa8fc120bc
--- /dev/null
+++ b/mm/bootmem_info.c
@@ -0,0 +1,99 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ *  linux/mm/bootmem_info.c
+ *
+ *  Copyright (C)
+ */
+#include <linux/mm.h>
+#include <linux/compiler.h>
+#include <linux/memblock.h>
+#include <linux/bootmem_info.h>
+#include <linux/memory_hotplug.h>
+
+#ifndef CONFIG_SPARSEMEM_VMEMMAP
+static void register_page_bootmem_info_section(unsigned long start_pfn)
+{
+	unsigned long mapsize, section_nr, i;
+	struct mem_section *ms;
+	struct page *page, *memmap;
+	struct mem_section_usage *usage;
+
+	section_nr = pfn_to_section_nr(start_pfn);
+	ms = __nr_to_section(section_nr);
+
+	/* Get section's memmap address */
+	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+
+	/*
+	 * Get page for the memmap's phys address
+	 * XXX: need more consideration for sparse_vmemmap...
+	 */
+	page = virt_to_page(memmap);
+	mapsize = sizeof(struct page) * PAGES_PER_SECTION;
+	mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
+
+	/* remember memmap's page */
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, SECTION_INFO);
+
+	usage = ms->usage;
+	page = virt_to_page(usage);
+
+	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
+
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
+
+}
+#else /* CONFIG_SPARSEMEM_VMEMMAP */
+static void register_page_bootmem_info_section(unsigned long start_pfn)
+{
+	unsigned long mapsize, section_nr, i;
+	struct mem_section *ms;
+	struct page *page, *memmap;
+	struct mem_section_usage *usage;
+
+	section_nr = pfn_to_section_nr(start_pfn);
+	ms = __nr_to_section(section_nr);
+
+	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+
+	register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
+
+	usage = ms->usage;
+	page = virt_to_page(usage);
+
+	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
+
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
+}
+#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
+
+void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
+{
+	unsigned long i, pfn, end_pfn, nr_pages;
+	int node = pgdat->node_id;
+	struct page *page;
+
+	nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
+	page = virt_to_page(pgdat);
+
+	for (i = 0; i < nr_pages; i++, page++)
+		get_page_bootmem(node, page, NODE_INFO);
+
+	pfn = pgdat->node_start_pfn;
+	end_pfn = pgdat_end_pfn(pgdat);
+
+	/* register section info */
+	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
+		/*
+		 * Some platforms can assign the same pfn to multiple nodes - on
+		 * node0 as well as nodeN.  To avoid registering a pfn against
+		 * multiple nodes we check that this pfn does not already
+		 * reside in some other nodes.
+		 */
+		if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node))
+			register_page_bootmem_info_section(pfn);
+	}
+}
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index baded53b9ff9..2da4ad071456 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -21,6 +21,7 @@
 #include <linux/memory.h>
 #include <linux/memremap.h>
 #include <linux/memory_hotplug.h>
+#include <linux/bootmem_info.h>
 #include <linux/highmem.h>
 #include <linux/vmalloc.h>
 #include <linux/ioport.h>
@@ -167,96 +168,6 @@ void put_page_bootmem(struct page *page)
 	}
 }
 
-#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
-#ifndef CONFIG_SPARSEMEM_VMEMMAP
-static void register_page_bootmem_info_section(unsigned long start_pfn)
-{
-	unsigned long mapsize, section_nr, i;
-	struct mem_section *ms;
-	struct page *page, *memmap;
-	struct mem_section_usage *usage;
-
-	section_nr = pfn_to_section_nr(start_pfn);
-	ms = __nr_to_section(section_nr);
-
-	/* Get section's memmap address */
-	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-	/*
-	 * Get page for the memmap's phys address
-	 * XXX: need more consideration for sparse_vmemmap...
-	 */
-	page = virt_to_page(memmap);
-	mapsize = sizeof(struct page) * PAGES_PER_SECTION;
-	mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
-
-	/* remember memmap's page */
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, SECTION_INFO);
-
-	usage = ms->usage;
-	page = virt_to_page(usage);
-
-	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-
-}
-#else /* CONFIG_SPARSEMEM_VMEMMAP */
-static void register_page_bootmem_info_section(unsigned long start_pfn)
-{
-	unsigned long mapsize, section_nr, i;
-	struct mem_section *ms;
-	struct page *page, *memmap;
-	struct mem_section_usage *usage;
-
-	section_nr = pfn_to_section_nr(start_pfn);
-	ms = __nr_to_section(section_nr);
-
-	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-	register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
-
-	usage = ms->usage;
-	page = virt_to_page(usage);
-
-	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-}
-#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
-
-void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-	unsigned long i, pfn, end_pfn, nr_pages;
-	int node = pgdat->node_id;
-	struct page *page;
-
-	nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
-	page = virt_to_page(pgdat);
-
-	for (i = 0; i < nr_pages; i++, page++)
-		get_page_bootmem(node, page, NODE_INFO);
-
-	pfn = pgdat->node_start_pfn;
-	end_pfn = pgdat_end_pfn(pgdat);
-
-	/* register section info */
-	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
-		/*
-		 * Some platforms can assign the same pfn to multiple nodes - on
-		 * node0 as well as nodeN.  To avoid registering a pfn against
-		 * multiple nodes we check that this pfn does not already
-		 * reside in some other nodes.
-		 */
-		if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node))
-			register_page_bootmem_info_section(pfn);
-	}
-}
-#endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */
-
 static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
 		const char *reason)
 {
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 02/19] mm/memory_hotplug: Move {get,put}_page_bootmem() to bootmem_info.c
  2020-10-26 14:50 [PATCH v2 00/19] Free some vmemmap pages of hugetlb page Muchun Song
  2020-10-26 14:50 ` [PATCH v2 01/19] mm/memory_hotplug: Move bootmem info registration API to bootmem_info.c Muchun Song
@ 2020-10-26 14:50 ` Muchun Song
  2020-10-26 14:50 ` [PATCH v2 03/19] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP Muchun Song
                   ` (18 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-10-26 14:50 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

In the later patch, we will use {get,put}_page_bootmem() to initialize
the page for vmemmap or free vmemmap page to buddy. So move them out of
CONFIG_MEMORY_HOTPLUG_SPARSE. This is just code movement without any
functional change.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 arch/x86/mm/init_64.c          |  2 +-
 include/linux/bootmem_info.h   | 13 +++++++++++++
 include/linux/memory_hotplug.h |  4 ----
 mm/bootmem_info.c              | 26 ++++++++++++++++++++++++++
 mm/memory_hotplug.c            | 27 ---------------------------
 mm/sparse.c                    |  1 +
 6 files changed, 41 insertions(+), 32 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index c7f7ad55b625..0a45f062826e 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1572,7 +1572,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 	return err;
 }
 
-#if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_HAVE_BOOTMEM_INFO_NODE)
+#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
 void register_page_bootmem_memmap(unsigned long section_nr,
 				  struct page *start_page, unsigned long nr_pages)
 {
diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
index 65bb9b23140f..4ed6dee1adc9 100644
--- a/include/linux/bootmem_info.h
+++ b/include/linux/bootmem_info.h
@@ -18,10 +18,23 @@ enum {
 
 #ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
 void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
+
+void get_page_bootmem(unsigned long info, struct page *page,
+		      unsigned long type);
+void put_page_bootmem(struct page *page);
 #else
 static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
 {
 }
+
+static inline void put_page_bootmem(struct page *page)
+{
+}
+
+static inline void get_page_bootmem(unsigned long info, struct page *page,
+				    unsigned long type)
+{
+}
 #endif
 
 #endif /* __LINUX_BOOTMEM_INFO_H */
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 19e5d067294c..c9f3361fe84b 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -197,10 +197,6 @@ static inline void arch_refresh_nodedata(int nid, pg_data_t *pgdat)
 #endif /* CONFIG_NUMA */
 #endif /* CONFIG_HAVE_ARCH_NODEDATA_EXTENSION */
 
-extern void put_page_bootmem(struct page *page);
-extern void get_page_bootmem(unsigned long ingo, struct page *page,
-			     unsigned long type);
-
 void get_online_mems(void);
 void put_online_mems(void);
 
diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
index 39fa8fc120bc..d276e96e487f 100644
--- a/mm/bootmem_info.c
+++ b/mm/bootmem_info.c
@@ -10,6 +10,32 @@
 #include <linux/bootmem_info.h>
 #include <linux/memory_hotplug.h>
 
+void get_page_bootmem(unsigned long info,  struct page *page,
+		      unsigned long type)
+{
+	page->freelist = (void *)type;
+	SetPagePrivate(page);
+	set_page_private(page, info);
+	page_ref_inc(page);
+}
+
+void put_page_bootmem(struct page *page)
+{
+	unsigned long type;
+
+	type = (unsigned long) page->freelist;
+	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
+	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
+
+	if (page_ref_dec_return(page) == 1) {
+		page->freelist = NULL;
+		ClearPagePrivate(page);
+		set_page_private(page, 0);
+		INIT_LIST_HEAD(&page->lru);
+		free_reserved_page(page);
+	}
+}
+
 #ifndef CONFIG_SPARSEMEM_VMEMMAP
 static void register_page_bootmem_info_section(unsigned long start_pfn)
 {
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 2da4ad071456..ae57eedc341f 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -21,7 +21,6 @@
 #include <linux/memory.h>
 #include <linux/memremap.h>
 #include <linux/memory_hotplug.h>
-#include <linux/bootmem_info.h>
 #include <linux/highmem.h>
 #include <linux/vmalloc.h>
 #include <linux/ioport.h>
@@ -142,32 +141,6 @@ static void release_memory_resource(struct resource *res)
 }
 
 #ifdef CONFIG_MEMORY_HOTPLUG_SPARSE
-void get_page_bootmem(unsigned long info,  struct page *page,
-		      unsigned long type)
-{
-	page->freelist = (void *)type;
-	SetPagePrivate(page);
-	set_page_private(page, info);
-	page_ref_inc(page);
-}
-
-void put_page_bootmem(struct page *page)
-{
-	unsigned long type;
-
-	type = (unsigned long) page->freelist;
-	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
-	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
-
-	if (page_ref_dec_return(page) == 1) {
-		page->freelist = NULL;
-		ClearPagePrivate(page);
-		set_page_private(page, 0);
-		INIT_LIST_HEAD(&page->lru);
-		free_reserved_page(page);
-	}
-}
-
 static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
 		const char *reason)
 {
diff --git a/mm/sparse.c b/mm/sparse.c
index b25ad8e64839..a4138410d890 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -13,6 +13,7 @@
 #include <linux/vmalloc.h>
 #include <linux/swap.h>
 #include <linux/swapops.h>
+#include <linux/bootmem_info.h>
 
 #include "internal.h"
 #include <asm/dma.h>
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 03/19] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  2020-10-26 14:50 [PATCH v2 00/19] Free some vmemmap pages of hugetlb page Muchun Song
  2020-10-26 14:50 ` [PATCH v2 01/19] mm/memory_hotplug: Move bootmem info registration API to bootmem_info.c Muchun Song
  2020-10-26 14:50 ` [PATCH v2 02/19] mm/memory_hotplug: Move {get,put}_page_bootmem() " Muchun Song
@ 2020-10-26 14:50 ` Muchun Song
  2020-10-29 10:29   ` Oscar Salvador
  2020-10-26 14:50 ` [PATCH v2 04/19] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
                   ` (17 subsequent siblings)
  20 siblings, 1 reply; 43+ messages in thread
From: Muchun Song @ 2020-10-26 14:50 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

The purpose of introducing HUGETLB_PAGE_FREE_VMEMMAP is to configure
whether to enable the feature of freeing unused vmemmap associated
with HugeTLB pages. Now only support x86.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 arch/x86/mm/init_64.c |  2 +-
 fs/Kconfig            | 16 ++++++++++++++++
 2 files changed, 17 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0a45f062826e..0435bee2e172 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1225,7 +1225,7 @@ static struct kcore_list kcore_vsyscall;
 
 static void __init register_page_bootmem_info(void)
 {
-#ifdef CONFIG_NUMA
+#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)
 	int i;
 
 	for_each_online_node(i)
diff --git a/fs/Kconfig b/fs/Kconfig
index 976e8b9033c4..5a4265ff2a86 100644
--- a/fs/Kconfig
+++ b/fs/Kconfig
@@ -245,6 +245,22 @@ config HUGETLBFS
 config HUGETLB_PAGE
 	def_bool HUGETLBFS
 
+config HUGETLB_PAGE_FREE_VMEMMAP
+	bool "Free unused vmemmap associated with HugeTLB pages"
+	default n
+	depends on X86
+	depends on HUGETLB_PAGE
+	depends on SPARSEMEM_VMEMMAP
+	depends on HAVE_BOOTMEM_INFO_NODE
+	help
+	  There are many struct page structures associated with each HugeTLB
+	  page. But we only use a few struct page structures. In this case,
+	  it wastes some memory. It is better to free the unused struct page
+	  structures to buddy system which can save some memory. For
+	  architectures that support it, say Y here.
+
+	  If unsure, say N.
+
 config MEMFD_CREATE
 	def_bool TMPFS || HUGETLBFS
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 04/19] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  2020-10-26 14:50 [PATCH v2 00/19] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (2 preceding siblings ...)
  2020-10-26 14:50 ` [PATCH v2 03/19] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP Muchun Song
@ 2020-10-26 14:50 ` Muchun Song
  2020-10-27 22:03   ` Mike Kravetz
  2020-10-29 13:26   ` Oscar Salvador
  2020-10-26 14:51 ` [PATCH v2 05/19] mm/hugetlb: Introduce pgtable allocation/freeing helpers Muchun Song
                   ` (16 subsequent siblings)
  20 siblings, 2 replies; 43+ messages in thread
From: Muchun Song @ 2020-10-26 14:50 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

If the size of hugetlb page is 2MB, we need 512 struct page structures
(8 pages) to be associated with it. As far as I know, we only use the
first 4 struct page structures.

For tail pages, the value of compound_dtor is the same. So we can reuse
first page of tail page structs. We map the virtual addresses of the
remaining 6 pages of tail page structs to the first tail page struct,
and then free these 6 pages. Therefore, we need to reserve at least 2
pages as vmemmap areas.

So we introduce a new nr_free_vmemmap_pages field in the hstate to
indicate how many vmemmap pages associated with a hugetlb page that we
can free to buddy system.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/hugetlb.h |  3 +++
 mm/hugetlb.c            | 35 +++++++++++++++++++++++++++++++++++
 2 files changed, 38 insertions(+)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index d5cc5f802dd4..eed3dd3bd626 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -492,6 +492,9 @@ struct hstate {
 	unsigned int nr_huge_pages_node[MAX_NUMNODES];
 	unsigned int free_huge_pages_node[MAX_NUMNODES];
 	unsigned int surplus_huge_pages_node[MAX_NUMNODES];
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	unsigned int nr_free_vmemmap_pages;
+#endif
 #ifdef CONFIG_CGROUP_HUGETLB
 	/* cgroup control files */
 	struct cftype cgroup_files_dfl[7];
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 81a41aa080a5..f1b2b733b49b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1292,6 +1292,39 @@ static inline void destroy_compound_gigantic_page(struct page *page,
 						unsigned int order) { }
 #endif
 
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+#define RESERVE_VMEMMAP_NR	2U
+
+static inline unsigned int nr_free_vmemmap(struct hstate *h)
+{
+	return h->nr_free_vmemmap_pages;
+}
+
+static void __init hugetlb_vmemmap_init(struct hstate *h)
+{
+	unsigned int order = huge_page_order(h);
+	unsigned int vmemmap_pages;
+
+	vmemmap_pages = ((1 << order) * sizeof(struct page)) >> PAGE_SHIFT;
+	/*
+	 * The head page and the first tail page not free to buddy system,
+	 * the others page will map to the first tail page. So there are
+	 * (@vmemmap_pages - RESERVE_VMEMMAP_NR) pages can be freed.
+	 */
+	if (vmemmap_pages > RESERVE_VMEMMAP_NR)
+		h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
+	else
+		h->nr_free_vmemmap_pages = 0;
+
+	pr_info("HugeTLB: can free %d vmemmap pages for %s\n",
+		h->nr_free_vmemmap_pages, h->name);
+}
+#else
+static inline void hugetlb_vmemmap_init(struct hstate *h)
+{
+}
+#endif
+
 static void update_and_free_page(struct hstate *h, struct page *page)
 {
 	int i;
@@ -3285,6 +3318,8 @@ void __init hugetlb_add_hstate(unsigned int order)
 	snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB",
 					huge_page_size(h)/1024);
 
+	hugetlb_vmemmap_init(h);
+
 	parsed_hstate = h;
 }
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 05/19] mm/hugetlb: Introduce pgtable allocation/freeing helpers
  2020-10-26 14:50 [PATCH v2 00/19] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (3 preceding siblings ...)
  2020-10-26 14:50 ` [PATCH v2 04/19] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
@ 2020-10-26 14:51 ` Muchun Song
  2020-10-28  0:32   ` Mike Kravetz
  2020-11-05 13:23   ` Oscar Salvador
  2020-10-26 14:51 ` [PATCH v2 06/19] mm/bootmem_info: Introduce {free,prepare}_vmemmap_page() Muchun Song
                   ` (15 subsequent siblings)
  20 siblings, 2 replies; 43+ messages in thread
From: Muchun Song @ 2020-10-26 14:51 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

On some architectures, the vmemmap areas use huge page mapping.
If we want to free the unused vmemmap pages, we have to split
the huge pmd firstly. So we should pre-allocate pgtable to split
huge pmd.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 arch/x86/include/asm/hugetlb.h |   5 ++
 include/linux/hugetlb.h        |  17 +++++
 mm/hugetlb.c                   | 117 +++++++++++++++++++++++++++++++++
 3 files changed, 139 insertions(+)

diff --git a/arch/x86/include/asm/hugetlb.h b/arch/x86/include/asm/hugetlb.h
index 1721b1aadeb1..f5e882f999cd 100644
--- a/arch/x86/include/asm/hugetlb.h
+++ b/arch/x86/include/asm/hugetlb.h
@@ -5,6 +5,11 @@
 #include <asm/page.h>
 #include <asm-generic/hugetlb.h>
 
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+#define VMEMMAP_HPAGE_SHIFT			PMD_SHIFT
+#define arch_vmemmap_support_huge_mapping()	boot_cpu_has(X86_FEATURE_PSE)
+#endif
+
 #define hugepages_supported() boot_cpu_has(X86_FEATURE_PSE)
 
 #endif /* _ASM_X86_HUGETLB_H */
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index eed3dd3bd626..ace304a6196c 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -593,6 +593,23 @@ static inline unsigned int blocks_per_huge_page(struct hstate *h)
 
 #include <asm/hugetlb.h>
 
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+#ifndef arch_vmemmap_support_huge_mapping
+static inline bool arch_vmemmap_support_huge_mapping(void)
+{
+	return false;
+}
+#endif
+
+#ifndef VMEMMAP_HPAGE_SHIFT
+#define VMEMMAP_HPAGE_SHIFT		PMD_SHIFT
+#endif
+#define VMEMMAP_HPAGE_ORDER		(VMEMMAP_HPAGE_SHIFT - PAGE_SHIFT)
+#define VMEMMAP_HPAGE_NR		(1 << VMEMMAP_HPAGE_ORDER)
+#define VMEMMAP_HPAGE_SIZE		((1UL) << VMEMMAP_HPAGE_SHIFT)
+#define VMEMMAP_HPAGE_MASK		(~(VMEMMAP_HPAGE_SIZE - 1))
+#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
+
 #ifndef is_hugepage_only_range
 static inline int is_hugepage_only_range(struct mm_struct *mm,
 					unsigned long addr, unsigned long len)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f1b2b733b49b..d6ae9b6876be 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1295,11 +1295,108 @@ static inline void destroy_compound_gigantic_page(struct page *page,
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 #define RESERVE_VMEMMAP_NR	2U
 
+#define page_huge_pte(page)	((page)->pmd_huge_pte)
+
 static inline unsigned int nr_free_vmemmap(struct hstate *h)
 {
 	return h->nr_free_vmemmap_pages;
 }
 
+static inline unsigned int nr_vmemmap(struct hstate *h)
+{
+	return nr_free_vmemmap(h) + RESERVE_VMEMMAP_NR;
+}
+
+static inline unsigned long nr_vmemmap_size(struct hstate *h)
+{
+	return (unsigned long)nr_vmemmap(h) << PAGE_SHIFT;
+}
+
+static inline unsigned int nr_pgtable(struct hstate *h)
+{
+	unsigned long vmemmap_size = nr_vmemmap_size(h);
+
+	if (!arch_vmemmap_support_huge_mapping())
+		return 0;
+
+	/*
+	 * No need pre-allocate page tabels when there is no vmemmap pages
+	 * to free.
+	 */
+	if (!nr_free_vmemmap(h))
+		return 0;
+
+	return ALIGN(vmemmap_size, VMEMMAP_HPAGE_SIZE) >> VMEMMAP_HPAGE_SHIFT;
+}
+
+static inline void vmemmap_pgtable_init(struct page *page)
+{
+	page_huge_pte(page) = NULL;
+}
+
+static void vmemmap_pgtable_deposit(struct page *page, pte_t *pte_p)
+{
+	pgtable_t pgtable = virt_to_page(pte_p);
+
+	/* FIFO */
+	if (!page_huge_pte(page))
+		INIT_LIST_HEAD(&pgtable->lru);
+	else
+		list_add(&pgtable->lru, &page_huge_pte(page)->lru);
+	page_huge_pte(page) = pgtable;
+}
+
+static pte_t *vmemmap_pgtable_withdraw(struct page *page)
+{
+	pgtable_t pgtable;
+
+	/* FIFO */
+	pgtable = page_huge_pte(page);
+	if (unlikely(!pgtable))
+		return NULL;
+	page_huge_pte(page) = list_first_entry_or_null(&pgtable->lru,
+						       struct page, lru);
+	if (page_huge_pte(page))
+		list_del(&pgtable->lru);
+	return page_to_virt(pgtable);
+}
+
+static int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
+{
+	int i;
+	pte_t *pte_p;
+	unsigned int nr = nr_pgtable(h);
+
+	if (!nr)
+		return 0;
+
+	vmemmap_pgtable_init(page);
+
+	for (i = 0; i < nr; i++) {
+		pte_p = pte_alloc_one_kernel(&init_mm);
+		if (!pte_p)
+			goto out;
+		vmemmap_pgtable_deposit(page, pte_p);
+	}
+
+	return 0;
+out:
+	while (i-- && (pte_p = vmemmap_pgtable_withdraw(page)))
+		pte_free_kernel(&init_mm, pte_p);
+	return -ENOMEM;
+}
+
+static inline void vmemmap_pgtable_free(struct hstate *h, struct page *page)
+{
+	pte_t *pte_p;
+
+	if (!nr_pgtable(h))
+		return;
+
+	while ((pte_p = vmemmap_pgtable_withdraw(page)))
+		pte_free_kernel(&init_mm, pte_p);
+}
+
 static void __init hugetlb_vmemmap_init(struct hstate *h)
 {
 	unsigned int order = huge_page_order(h);
@@ -1323,6 +1420,15 @@ static void __init hugetlb_vmemmap_init(struct hstate *h)
 static inline void hugetlb_vmemmap_init(struct hstate *h)
 {
 }
+
+static inline int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
+{
+	return 0;
+}
+
+static inline void vmemmap_pgtable_free(struct hstate *h, struct page *page)
+{
+}
 #endif
 
 static void update_and_free_page(struct hstate *h, struct page *page)
@@ -1531,6 +1637,9 @@ void free_huge_page(struct page *page)
 
 static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 {
+	/* Must be called before the initialization of @page->lru */
+	vmemmap_pgtable_free(h, page);
+
 	INIT_LIST_HEAD(&page->lru);
 	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
 	set_hugetlb_cgroup(page, NULL);
@@ -1783,6 +1892,14 @@ static struct page *alloc_fresh_huge_page(struct hstate *h,
 	if (!page)
 		return NULL;
 
+	if (vmemmap_pgtable_prealloc(h, page)) {
+		if (hstate_is_gigantic(h))
+			free_gigantic_page(page, huge_page_order(h));
+		else
+			put_page(page);
+		return NULL;
+	}
+
 	if (hstate_is_gigantic(h))
 		prep_compound_gigantic_page(page, huge_page_order(h));
 	prep_new_huge_page(h, page, page_to_nid(page));
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 06/19] mm/bootmem_info: Introduce {free,prepare}_vmemmap_page()
  2020-10-26 14:50 [PATCH v2 00/19] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (4 preceding siblings ...)
  2020-10-26 14:51 ` [PATCH v2 05/19] mm/hugetlb: Introduce pgtable allocation/freeing helpers Muchun Song
@ 2020-10-26 14:51 ` Muchun Song
  2020-10-26 14:51 ` [PATCH v2 07/19] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page Muchun Song
                   ` (14 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-10-26 14:51 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

In the later patch, we can use the free_vmemmap_page() to free the
unused vmemmap pages and initialize a page for vmemmap page using
via prepare_vmemmap_page().

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/bootmem_info.h | 25 +++++++++++++++++++++++++
 1 file changed, 25 insertions(+)

diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
index 4ed6dee1adc9..ce9d8c97369d 100644
--- a/include/linux/bootmem_info.h
+++ b/include/linux/bootmem_info.h
@@ -3,6 +3,7 @@
 #define __LINUX_BOOTMEM_INFO_H
 
 #include <linux/mmzone.h>
+#include <linux/mm.h>
 
 /*
  * Types for free bootmem stored in page->lru.next. These have to be in
@@ -22,6 +23,30 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
 void get_page_bootmem(unsigned long info, struct page *page,
 		      unsigned long type);
 void put_page_bootmem(struct page *page);
+
+static inline void free_vmemmap_page(struct page *page)
+{
+	VM_WARN_ON(!PageReserved(page) || page_ref_count(page) != 2);
+
+	/* bootmem page has reserved flag in the reserve_bootmem_region */
+	if (PageReserved(page)) {
+		unsigned long magic = (unsigned long)page->freelist;
+
+		if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
+			put_page_bootmem(page);
+		else
+			WARN_ON(1);
+	}
+}
+
+static inline void prepare_vmemmap_page(struct page *page)
+{
+	unsigned long section_nr = pfn_to_section_nr(page_to_pfn(page));
+
+	get_page_bootmem(section_nr, page, SECTION_INFO);
+	__SetPageReserved(page);
+	adjust_managed_page_count(page, -1);
+}
 #else
 static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
 {
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 07/19] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
  2020-10-26 14:50 [PATCH v2 00/19] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (5 preceding siblings ...)
  2020-10-26 14:51 ` [PATCH v2 06/19] mm/bootmem_info: Introduce {free,prepare}_vmemmap_page() Muchun Song
@ 2020-10-26 14:51 ` Muchun Song
  2020-10-26 16:01   ` Matthew Wilcox
  2020-10-28 23:42   ` Mike Kravetz
  2020-10-26 14:51 ` [PATCH v2 08/19] mm/hugetlb: Defer freeing of hugetlb pages Muchun Song
                   ` (13 subsequent siblings)
  20 siblings, 2 replies; 43+ messages in thread
From: Muchun Song @ 2020-10-26 14:51 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

When we allocate a hugetlb page from the buddy, we should free the
unused vmemmap pages associated with it. We can do that in the
prep_new_huge_page().

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 arch/x86/include/asm/hugetlb.h          |   7 +
 arch/x86/include/asm/pgtable_64_types.h |   8 +
 include/linux/hugetlb.h                 |   7 +
 mm/hugetlb.c                            | 190 ++++++++++++++++++++++++
 4 files changed, 212 insertions(+)

diff --git a/arch/x86/include/asm/hugetlb.h b/arch/x86/include/asm/hugetlb.h
index f5e882f999cd..7c3eb60c2198 100644
--- a/arch/x86/include/asm/hugetlb.h
+++ b/arch/x86/include/asm/hugetlb.h
@@ -4,10 +4,17 @@
 
 #include <asm/page.h>
 #include <asm-generic/hugetlb.h>
+#include <asm/pgtable.h>
 
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 #define VMEMMAP_HPAGE_SHIFT			PMD_SHIFT
 #define arch_vmemmap_support_huge_mapping()	boot_cpu_has(X86_FEATURE_PSE)
+
+#define vmemmap_pmd_huge vmemmap_pmd_huge
+static inline bool vmemmap_pmd_huge(pmd_t *pmd)
+{
+	return pmd_large(*pmd);
+}
 #endif
 
 #define hugepages_supported() boot_cpu_has(X86_FEATURE_PSE)
diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index 52e5f5f2240d..bedbd2e7d06c 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -139,6 +139,14 @@ extern unsigned int ptrs_per_p4d;
 # define VMEMMAP_START		__VMEMMAP_BASE_L4
 #endif /* CONFIG_DYNAMIC_MEMORY_LAYOUT */
 
+/*
+ * VMEMMAP_SIZE - allows the whole linear region to be covered by
+ *                a struct page array.
+ */
+#define VMEMMAP_SIZE		(1UL << (__VIRTUAL_MASK_SHIFT - PAGE_SHIFT - \
+					 1 + ilog2(sizeof(struct page))))
+#define VMEMMAP_END		(VMEMMAP_START + VMEMMAP_SIZE)
+
 #define VMALLOC_END		(VMALLOC_START + (VMALLOC_SIZE_TB << 40) - 1)
 
 #define MODULES_VADDR		(__START_KERNEL_map + KERNEL_IMAGE_SIZE)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index ace304a6196c..919f47d77117 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -601,6 +601,13 @@ static inline bool arch_vmemmap_support_huge_mapping(void)
 }
 #endif
 
+#ifndef vmemmap_pmd_huge
+static inline bool vmemmap_pmd_huge(pmd_t *pmd)
+{
+	return pmd_huge(*pmd);
+}
+#endif
+
 #ifndef VMEMMAP_HPAGE_SHIFT
 #define VMEMMAP_HPAGE_SHIFT		PMD_SHIFT
 #endif
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d6ae9b6876be..aa012d603e06 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1293,10 +1293,20 @@ static inline void destroy_compound_gigantic_page(struct page *page,
 #endif
 
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+#include <linux/bootmem_info.h>
+
 #define RESERVE_VMEMMAP_NR	2U
+#define RESERVE_VMEMMAP_SIZE	(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
 #define page_huge_pte(page)	((page)->pmd_huge_pte)
 
+#define vmemmap_hpage_addr_end(addr, end)				\
+({									\
+	unsigned long __boundary;					\
+	__boundary = ((addr) + VMEMMAP_HPAGE_SIZE) & VMEMMAP_HPAGE_MASK;\
+	(__boundary - 1 < (end) - 1) ? __boundary : (end);		\
+})
+
 static inline unsigned int nr_free_vmemmap(struct hstate *h)
 {
 	return h->nr_free_vmemmap_pages;
@@ -1416,6 +1426,181 @@ static void __init hugetlb_vmemmap_init(struct hstate *h)
 	pr_info("HugeTLB: can free %d vmemmap pages for %s\n",
 		h->nr_free_vmemmap_pages, h->name);
 }
+
+static inline spinlock_t *vmemmap_pmd_lockptr(pmd_t *pmd)
+{
+	static DEFINE_SPINLOCK(pgtable_lock);
+
+	return &pgtable_lock;
+}
+
+/*
+ * Walk a vmemmap address to the pmd it maps.
+ */
+static pmd_t *vmemmap_to_pmd(const void *page)
+{
+	unsigned long addr = (unsigned long)page;
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+
+	if (addr < VMEMMAP_START || addr >= VMEMMAP_END)
+		return NULL;
+
+	pgd = pgd_offset_k(addr);
+	if (pgd_none(*pgd))
+		return NULL;
+	p4d = p4d_offset(pgd, addr);
+	if (p4d_none(*p4d))
+		return NULL;
+	pud = pud_offset(p4d, addr);
+
+	WARN_ON_ONCE(pud_bad(*pud));
+	if (pud_none(*pud) || pud_bad(*pud))
+		return NULL;
+	pmd = pmd_offset(pud, addr);
+
+	return pmd;
+}
+
+static inline int freed_vmemmap_hpage(struct page *page)
+{
+	return atomic_read(&page->_mapcount) + 1;
+}
+
+static inline int freed_vmemmap_hpage_inc(struct page *page)
+{
+	return atomic_inc_return_relaxed(&page->_mapcount) + 1;
+}
+
+static inline int freed_vmemmap_hpage_dec(struct page *page)
+{
+	return atomic_dec_return_relaxed(&page->_mapcount) + 1;
+}
+
+static inline void free_vmemmap_page_list(struct list_head *list)
+{
+	struct page *page, *next;
+
+	list_for_each_entry_safe(page, next, list, lru) {
+		list_del(&page->lru);
+		free_vmemmap_page(page);
+	}
+}
+
+static void __free_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
+					 unsigned long start,
+					 unsigned int nr_free,
+					 struct list_head *free_pages)
+{
+	pte_t entry = mk_pte(reuse, PAGE_KERNEL);
+	unsigned long addr;
+	unsigned long end = start + (nr_free << PAGE_SHIFT);
+
+	for (addr = start; addr < end; addr += PAGE_SIZE, ptep++) {
+		struct page *page;
+		pte_t old = *ptep;
+
+		VM_WARN_ON(!pte_present(old));
+		page = pte_page(old);
+		list_add(&page->lru, free_pages);
+
+		set_pte_at(&init_mm, addr, ptep, entry);
+	}
+}
+
+static void __free_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
+					 unsigned long addr,
+					 struct list_head *free_pages)
+{
+	unsigned long next;
+	unsigned long start = addr + RESERVE_VMEMMAP_NR * PAGE_SIZE;
+	unsigned long end = addr + nr_vmemmap_size(h);
+	struct page *reuse = NULL;
+
+	addr = start;
+	do {
+		unsigned int nr_pages;
+		pte_t *ptep;
+
+		ptep = pte_offset_kernel(pmd, addr);
+		if (!reuse)
+			reuse = pte_page(ptep[-1]);
+
+		next = vmemmap_hpage_addr_end(addr, end);
+		nr_pages = (next - addr) >> PAGE_SHIFT;
+		__free_huge_page_pte_vmemmap(reuse, ptep, addr, nr_pages,
+					     free_pages);
+	} while (pmd++, addr = next, addr != end);
+
+	flush_tlb_kernel_range(start, end);
+}
+
+static void split_vmemmap_pmd(pmd_t *pmd, pte_t *pte_p, unsigned long addr)
+{
+	struct mm_struct *mm = &init_mm;
+	struct page *page;
+	pmd_t old_pmd, _pmd;
+	int i;
+
+	old_pmd = READ_ONCE(*pmd);
+	page = pmd_page(old_pmd);
+	pmd_populate_kernel(mm, &_pmd, pte_p);
+
+	for (i = 0; i < VMEMMAP_HPAGE_NR; i++, addr += PAGE_SIZE) {
+		pte_t entry, *pte;
+
+		entry = mk_pte(page + i, PAGE_KERNEL);
+		pte = pte_offset_kernel(&_pmd, addr);
+		VM_BUG_ON(!pte_none(*pte));
+		set_pte_at(mm, addr, pte, entry);
+	}
+
+	/* make pte visible before pmd */
+	smp_wmb();
+	pmd_populate_kernel(mm, pmd, pte_p);
+}
+
+static void split_vmemmap_huge_page(struct page *head, pmd_t *pmd)
+{
+	pte_t *pte_p;
+	unsigned long start = (unsigned long)head & VMEMMAP_HPAGE_MASK;
+	unsigned long addr = start;
+
+	while ((pte_p = vmemmap_pgtable_withdraw(head))) {
+		VM_BUG_ON(freed_vmemmap_hpage(virt_to_page(pte_p)));
+		split_vmemmap_pmd(pmd++, pte_p, addr);
+		addr += VMEMMAP_HPAGE_SIZE;
+	}
+
+	flush_tlb_kernel_range(start, addr);
+}
+
+static void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	pmd_t *pmd;
+	spinlock_t *ptl;
+	LIST_HEAD(free_pages);
+
+	if (!nr_free_vmemmap(h))
+		return;
+
+	pmd = vmemmap_to_pmd(head);
+	ptl = vmemmap_pmd_lockptr(pmd);
+
+	spin_lock(ptl);
+	if (vmemmap_pmd_huge(pmd)) {
+		VM_BUG_ON(!nr_pgtable(h));
+		split_vmemmap_huge_page(head, pmd);
+	}
+
+	__free_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &free_pages);
+	freed_vmemmap_hpage_inc(pmd_page(*pmd));
+	spin_unlock(ptl);
+
+	free_vmemmap_page_list(&free_pages);
+}
 #else
 static inline void hugetlb_vmemmap_init(struct hstate *h)
 {
@@ -1429,6 +1614,10 @@ static inline int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
 static inline void vmemmap_pgtable_free(struct hstate *h, struct page *page)
 {
 }
+
+static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
 #endif
 
 static void update_and_free_page(struct hstate *h, struct page *page)
@@ -1637,6 +1826,7 @@ void free_huge_page(struct page *page)
 
 static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 {
+	free_huge_page_vmemmap(h, page);
 	/* Must be called before the initialization of @page->lru */
 	vmemmap_pgtable_free(h, page);
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 08/19] mm/hugetlb: Defer freeing of hugetlb pages
  2020-10-26 14:50 [PATCH v2 00/19] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (6 preceding siblings ...)
  2020-10-26 14:51 ` [PATCH v2 07/19] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page Muchun Song
@ 2020-10-26 14:51 ` Muchun Song
  2020-10-26 14:51 ` [PATCH v2 09/19] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page Muchun Song
                   ` (12 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-10-26 14:51 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

In the subsequent patch, we will allocate the vmemmap pages when free
huge pages. But update_and_free_page() is be called from a non-task
context(and hold hugetlb_lock), we can defer the actual freeing in
a workqueue to prevent use GFP_ATOMIC to allocate the vmemmap pages.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c | 94 +++++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 85 insertions(+), 9 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index aa012d603e06..a5500c79e2df 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1292,6 +1292,8 @@ static inline void destroy_compound_gigantic_page(struct page *page,
 						unsigned int order) { }
 #endif
 
+static void __free_hugepage(struct hstate *h, struct page *page);
+
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 #include <linux/bootmem_info.h>
 
@@ -1601,6 +1603,64 @@ static void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 
 	free_vmemmap_page_list(&free_pages);
 }
+
+/*
+ * As update_and_free_page() is be called from a non-task context(and hold
+ * hugetlb_lock), we can defer the actual freeing in a workqueue to prevent
+ * use GFP_ATOMIC to allocate a lot of vmemmap pages.
+ *
+ * update_hpage_vmemmap_workfn() locklessly retrieves the linked list of
+ * pages to be freed and frees them one-by-one. As the page->mapping pointer
+ * is going to be cleared in update_hpage_vmemmap_workfn() anyway, it is
+ * reused as the llist_node structure of a lockless linked list of huge
+ * pages to be freed.
+ */
+static LLIST_HEAD(hpage_update_freelist);
+
+static void update_hpage_vmemmap_workfn(struct work_struct *work)
+{
+	struct llist_node *node;
+	struct page *page;
+
+	node = llist_del_all(&hpage_update_freelist);
+
+	while (node) {
+		page = container_of((struct address_space **)node,
+				     struct page, mapping);
+		node = node->next;
+		page->mapping = NULL;
+		__free_hugepage(page_hstate(page), page);
+
+		cond_resched();
+	}
+}
+static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn);
+
+static inline void __update_and_free_page(struct hstate *h, struct page *page)
+{
+	/* No need to allocate vmemmap pages */
+	if (!nr_free_vmemmap(h)) {
+		__free_hugepage(h, page);
+		return;
+	}
+
+	/*
+	 * Defer freeing to avoid using GFP_ATOMIC to allocate vmemmap
+	 * pages.
+	 *
+	 * Only call schedule_work() if hpage_update_freelist is previously
+	 * empty. Otherwise, schedule_work() had been called but the workfn
+	 * hasn't retrieved the list yet.
+	 */
+	if (llist_add((struct llist_node *)&page->mapping,
+		      &hpage_update_freelist))
+		schedule_work(&hpage_update_work);
+}
+
+static inline void free_gigantic_page_comm(struct hstate *h, struct page *page)
+{
+	free_gigantic_page(page, huge_page_order(h));
+}
 #else
 static inline void hugetlb_vmemmap_init(struct hstate *h)
 {
@@ -1618,17 +1678,39 @@ static inline void vmemmap_pgtable_free(struct hstate *h, struct page *page)
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
+
+static inline void __update_and_free_page(struct hstate *h, struct page *page)
+{
+	__free_hugepage(h, page);
+}
+
+static inline void free_gigantic_page_comm(struct hstate *h, struct page *page)
+{
+	/*
+	 * Temporarily drop the hugetlb_lock, because
+	 * we might block in free_gigantic_page().
+	 */
+	spin_unlock(&hugetlb_lock);
+	free_gigantic_page(page, huge_page_order(h));
+	spin_lock(&hugetlb_lock);
+}
 #endif
 
 static void update_and_free_page(struct hstate *h, struct page *page)
 {
-	int i;
-
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
 		return;
 
 	h->nr_huge_pages--;
 	h->nr_huge_pages_node[page_to_nid(page)]--;
+
+	__update_and_free_page(h, page);
+}
+
+static void __free_hugepage(struct hstate *h, struct page *page)
+{
+	int i;
+
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
 				1 << PG_referenced | 1 << PG_dirty |
@@ -1640,14 +1722,8 @@ static void update_and_free_page(struct hstate *h, struct page *page)
 	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
 	set_page_refcounted(page);
 	if (hstate_is_gigantic(h)) {
-		/*
-		 * Temporarily drop the hugetlb_lock, because
-		 * we might block in free_gigantic_page().
-		 */
-		spin_unlock(&hugetlb_lock);
 		destroy_compound_gigantic_page(page, huge_page_order(h));
-		free_gigantic_page(page, huge_page_order(h));
-		spin_lock(&hugetlb_lock);
+		free_gigantic_page_comm(h, page);
 	} else {
 		__free_pages(page, huge_page_order(h));
 	}
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 09/19] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page
  2020-10-26 14:50 [PATCH v2 00/19] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (7 preceding siblings ...)
  2020-10-26 14:51 ` [PATCH v2 08/19] mm/hugetlb: Defer freeing of hugetlb pages Muchun Song
@ 2020-10-26 14:51 ` Muchun Song
  2020-10-26 14:51 ` [PATCH v2 10/19] mm/hugetlb: Introduce remap_huge_page_pmd_vmemmap helper Muchun Song
                   ` (11 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-10-26 14:51 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

When we free a hugetlb page to the buddy, we should allocate the vmemmap
pages associated with it. We can do that in the __free_hugepage().

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c | 108 +++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 108 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a5500c79e2df..cea580058a16 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1299,6 +1299,7 @@ static void __free_hugepage(struct hstate *h, struct page *page);
 
 #define RESERVE_VMEMMAP_NR	2U
 #define RESERVE_VMEMMAP_SIZE	(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
+#define GFP_VMEMMAP_PAGE	(GFP_KERNEL | __GFP_NOFAIL | __GFP_MEMALLOC)
 
 #define page_huge_pte(page)	((page)->pmd_huge_pte)
 
@@ -1604,6 +1605,107 @@ static void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 	free_vmemmap_page_list(&free_pages);
 }
 
+static void __remap_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
+					  unsigned long start,
+					  unsigned int nr_remap,
+					  struct list_head *remap_pages)
+{
+	void *from = (void *)page_private(reuse);
+	unsigned long addr, end = start + (nr_remap << PAGE_SHIFT);
+
+	for (addr = start; addr < end; addr += PAGE_SIZE) {
+		void *to;
+		struct page *page;
+		pte_t entry, old = *ptep;
+
+		page = list_first_entry_or_null(remap_pages, struct page, lru);
+		list_del(&page->lru);
+		to = page_to_virt(page);
+		copy_page(to, from);
+
+		/*
+		 * Make sure that any data that writes to the @to is made
+		 * visible to the physical page.
+		 */
+		flush_kernel_vmap_range(to, PAGE_SIZE);
+
+		prepare_vmemmap_page(page);
+
+		entry = mk_pte(page, PAGE_KERNEL);
+		set_pte_at(&init_mm, addr, ptep++, entry);
+
+		VM_BUG_ON(!pte_present(old) || pte_page(old) != reuse);
+	}
+}
+
+static void __remap_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
+					  unsigned long addr,
+					  struct list_head *remap_pages)
+{
+	unsigned long next;
+	unsigned long start = addr + RESERVE_VMEMMAP_NR * PAGE_SIZE;
+	unsigned long end = addr + nr_vmemmap_size(h);
+	struct page *reuse = NULL;
+
+	addr = start;
+	do {
+		unsigned int nr_pages;
+		pte_t *ptep;
+
+		ptep = pte_offset_kernel(pmd, addr);
+		if (!reuse) {
+			reuse = pte_page(ptep[-1]);
+			set_page_private(reuse, addr - PAGE_SIZE);
+		}
+
+		next = vmemmap_hpage_addr_end(addr, end);
+		nr_pages = (next - addr) >> PAGE_SHIFT;
+		__remap_huge_page_pte_vmemmap(reuse, ptep, addr, nr_pages,
+					      remap_pages);
+	} while (pmd++, addr = next, addr != end);
+
+	flush_tlb_kernel_range(start, end);
+}
+
+static inline void alloc_vmemmap_pages(struct hstate *h, struct list_head *list)
+{
+	int i;
+
+	for (i = 0; i < nr_free_vmemmap(h); i++) {
+		struct page *page;
+
+		/* This should not fail */
+		page = alloc_page(GFP_VMEMMAP_PAGE);
+		list_add_tail(&page->lru, list);
+	}
+}
+
+static void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	pmd_t *pmd;
+	spinlock_t *ptl;
+	LIST_HEAD(remap_pages);
+
+	if (!nr_free_vmemmap(h))
+		return;
+
+	alloc_vmemmap_pages(h, &remap_pages);
+
+	pmd = vmemmap_to_pmd(head);
+	ptl = vmemmap_pmd_lockptr(pmd);
+
+	spin_lock(ptl);
+	__remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head,
+				      &remap_pages);
+	if (!freed_vmemmap_hpage_dec(pmd_page(*pmd))) {
+		/*
+		 * Todo:
+		 * Merge pte to huge pmd if it has ever been split.
+		 */
+	}
+	spin_unlock(ptl);
+}
+
 /*
  * As update_and_free_page() is be called from a non-task context(and hold
  * hugetlb_lock), we can defer the actual freeing in a workqueue to prevent
@@ -1679,6 +1781,10 @@ static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
 
+static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
+
 static inline void __update_and_free_page(struct hstate *h, struct page *page)
 {
 	__free_hugepage(h, page);
@@ -1711,6 +1817,8 @@ static void __free_hugepage(struct hstate *h, struct page *page)
 {
 	int i;
 
+	alloc_huge_page_vmemmap(h, page);
+
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
 				1 << PG_referenced | 1 << PG_dirty |
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 10/19] mm/hugetlb: Introduce remap_huge_page_pmd_vmemmap helper
  2020-10-26 14:50 [PATCH v2 00/19] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (8 preceding siblings ...)
  2020-10-26 14:51 ` [PATCH v2 09/19] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page Muchun Song
@ 2020-10-26 14:51 ` Muchun Song
  2020-10-26 14:51 ` [PATCH v2 11/19] mm/hugetlb: Use PG_slab to indicate split pmd Muchun Song
                   ` (10 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-10-26 14:51 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

The __free_huge_page_pmd_vmemmap and __remap_huge_page_pmd_vmemmap are
almost the same code. So introduce remap_free_huge_page_pmd_vmemmap
helper to simplify the code.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c | 98 +++++++++++++++++++++-------------------------------
 1 file changed, 39 insertions(+), 59 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index cea580058a16..bd0c4e7fd994 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1482,6 +1482,41 @@ static inline int freed_vmemmap_hpage_dec(struct page *page)
 	return atomic_dec_return_relaxed(&page->_mapcount) + 1;
 }
 
+typedef void (*remap_pte_fn)(struct page *reuse, pte_t *ptep,
+			     unsigned long start, unsigned int nr_pages,
+			     struct list_head *pages);
+
+static void remap_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
+					unsigned long addr,
+					struct list_head *pages,
+					remap_pte_fn remap_fn)
+{
+	unsigned long next;
+	unsigned long start = addr + RESERVE_VMEMMAP_SIZE;
+	unsigned long end = addr + nr_vmemmap_size(h);
+	struct page *reuse = NULL;
+
+	flush_cache_vunmap(start, end);
+
+	addr = start;
+	do {
+		unsigned int nr_pages;
+		pte_t *ptep;
+
+		ptep = pte_offset_kernel(pmd, addr);
+		if (!reuse) {
+			reuse = pte_page(ptep[-1]);
+			set_page_private(reuse, addr - PAGE_SIZE);
+		}
+
+		next = vmemmap_hpage_addr_end(addr, end);
+		nr_pages = (next - addr) >> PAGE_SHIFT;
+		remap_fn(reuse, ptep, addr, nr_pages, pages);
+	} while (pmd++, addr = next, addr != end);
+
+	flush_tlb_kernel_range(start, end);
+}
+
 static inline void free_vmemmap_page_list(struct list_head *list)
 {
 	struct page *page, *next;
@@ -1513,33 +1548,6 @@ static void __free_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
 	}
 }
 
-static void __free_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
-					 unsigned long addr,
-					 struct list_head *free_pages)
-{
-	unsigned long next;
-	unsigned long start = addr + RESERVE_VMEMMAP_NR * PAGE_SIZE;
-	unsigned long end = addr + nr_vmemmap_size(h);
-	struct page *reuse = NULL;
-
-	addr = start;
-	do {
-		unsigned int nr_pages;
-		pte_t *ptep;
-
-		ptep = pte_offset_kernel(pmd, addr);
-		if (!reuse)
-			reuse = pte_page(ptep[-1]);
-
-		next = vmemmap_hpage_addr_end(addr, end);
-		nr_pages = (next - addr) >> PAGE_SHIFT;
-		__free_huge_page_pte_vmemmap(reuse, ptep, addr, nr_pages,
-					     free_pages);
-	} while (pmd++, addr = next, addr != end);
-
-	flush_tlb_kernel_range(start, end);
-}
-
 static void split_vmemmap_pmd(pmd_t *pmd, pte_t *pte_p, unsigned long addr)
 {
 	struct mm_struct *mm = &init_mm;
@@ -1598,7 +1606,8 @@ static void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 		split_vmemmap_huge_page(head, pmd);
 	}
 
-	__free_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &free_pages);
+	remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &free_pages,
+				    __free_huge_page_pte_vmemmap);
 	freed_vmemmap_hpage_inc(pmd_page(*pmd));
 	spin_unlock(ptl);
 
@@ -1638,35 +1647,6 @@ static void __remap_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
 	}
 }
 
-static void __remap_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
-					  unsigned long addr,
-					  struct list_head *remap_pages)
-{
-	unsigned long next;
-	unsigned long start = addr + RESERVE_VMEMMAP_NR * PAGE_SIZE;
-	unsigned long end = addr + nr_vmemmap_size(h);
-	struct page *reuse = NULL;
-
-	addr = start;
-	do {
-		unsigned int nr_pages;
-		pte_t *ptep;
-
-		ptep = pte_offset_kernel(pmd, addr);
-		if (!reuse) {
-			reuse = pte_page(ptep[-1]);
-			set_page_private(reuse, addr - PAGE_SIZE);
-		}
-
-		next = vmemmap_hpage_addr_end(addr, end);
-		nr_pages = (next - addr) >> PAGE_SHIFT;
-		__remap_huge_page_pte_vmemmap(reuse, ptep, addr, nr_pages,
-					      remap_pages);
-	} while (pmd++, addr = next, addr != end);
-
-	flush_tlb_kernel_range(start, end);
-}
-
 static inline void alloc_vmemmap_pages(struct hstate *h, struct list_head *list)
 {
 	int i;
@@ -1695,8 +1675,8 @@ static void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
 	ptl = vmemmap_pmd_lockptr(pmd);
 
 	spin_lock(ptl);
-	__remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head,
-				      &remap_pages);
+	remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &remap_pages,
+				    __remap_huge_page_pte_vmemmap);
 	if (!freed_vmemmap_hpage_dec(pmd_page(*pmd))) {
 		/*
 		 * Todo:
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 11/19] mm/hugetlb: Use PG_slab to indicate split pmd
  2020-10-26 14:50 [PATCH v2 00/19] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (9 preceding siblings ...)
  2020-10-26 14:51 ` [PATCH v2 10/19] mm/hugetlb: Introduce remap_huge_page_pmd_vmemmap helper Muchun Song
@ 2020-10-26 14:51 ` Muchun Song
  2020-10-26 14:51 ` [PATCH v2 12/19] mm/hugetlb: Support freeing vmemmap pages of gigantic page Muchun Song
                   ` (9 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-10-26 14:51 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

When we allocate hugetlb page from buddy, we may need split huge pmd
to pte. When we free the hugetlb page, we can merge pte to pmd. So
we need to distinguish whether the previous pmd has been split. The
page table is not allocated from slab. So we can reuse the PG_slab
to indicate that the pmd has been split.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c | 23 ++++++++++++++++++++++-
 1 file changed, 22 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index bd0c4e7fd994..f75b93fb4c07 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1588,6 +1588,25 @@ static void split_vmemmap_huge_page(struct page *head, pmd_t *pmd)
 	flush_tlb_kernel_range(start, addr);
 }
 
+static inline bool pmd_split(pmd_t *pmd)
+{
+	return PageSlab(pmd_page(*pmd));
+}
+
+static inline void set_pmd_split(pmd_t *pmd)
+{
+	/*
+	 * We should not use slab for page table allocation. So we can set
+	 * PG_slab to indicate that the pmd has been split.
+	 */
+	__SetPageSlab(pmd_page(*pmd));
+}
+
+static inline void clear_pmd_split(pmd_t *pmd)
+{
+	__ClearPageSlab(pmd_page(*pmd));
+}
+
 static void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 	pmd_t *pmd;
@@ -1604,6 +1623,7 @@ static void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 	if (vmemmap_pmd_huge(pmd)) {
 		VM_BUG_ON(!nr_pgtable(h));
 		split_vmemmap_huge_page(head, pmd);
+		set_pmd_split(pmd);
 	}
 
 	remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &free_pages,
@@ -1677,11 +1697,12 @@ static void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
 	spin_lock(ptl);
 	remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &remap_pages,
 				    __remap_huge_page_pte_vmemmap);
-	if (!freed_vmemmap_hpage_dec(pmd_page(*pmd))) {
+	if (!freed_vmemmap_hpage_dec(pmd_page(*pmd)) && pmd_split(pmd)) {
 		/*
 		 * Todo:
 		 * Merge pte to huge pmd if it has ever been split.
 		 */
+		clear_pmd_split(pmd);
 	}
 	spin_unlock(ptl);
 }
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 12/19] mm/hugetlb: Support freeing vmemmap pages of gigantic page
  2020-10-26 14:50 [PATCH v2 00/19] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (10 preceding siblings ...)
  2020-10-26 14:51 ` [PATCH v2 11/19] mm/hugetlb: Use PG_slab to indicate split pmd Muchun Song
@ 2020-10-26 14:51 ` Muchun Song
  2020-10-26 14:51 ` [PATCH v2 13/19] mm/hugetlb: Add a BUILD_BUG_ON to check if struct page size is a power of two Muchun Song
                   ` (8 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-10-26 14:51 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

The gigantic page is allocated by bootmem, if we want to free the
unused vmemmap pages. We also should allocate the page table. So
we also allocate page tables from bootmem.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/hugetlb.h |  3 +++
 mm/hugetlb.c            | 57 +++++++++++++++++++++++++++++++++++++++++
 2 files changed, 60 insertions(+)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 919f47d77117..695d3041ae7d 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -506,6 +506,9 @@ struct hstate {
 struct huge_bootmem_page {
 	struct list_head list;
 	struct hstate *hstate;
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	pte_t *vmemmap_pgtable;
+#endif
 };
 
 struct page *alloc_huge_page(struct vm_area_struct *vma,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f75b93fb4c07..d98b55ad1a90 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1410,6 +1410,48 @@ static inline void vmemmap_pgtable_free(struct hstate *h, struct page *page)
 		pte_free_kernel(&init_mm, pte_p);
 }
 
+static unsigned long __init gather_vmemmap_pgtable_prealloc(void)
+{
+	struct huge_bootmem_page *m, *tmp;
+	unsigned long nr_free = 0;
+
+	list_for_each_entry_safe(m, tmp, &huge_boot_pages, list) {
+		struct hstate *h = m->hstate;
+		unsigned int pgtable_size = nr_pgtable(h) << PAGE_SHIFT;
+
+		if (!pgtable_size)
+			continue;
+
+		m->vmemmap_pgtable = memblock_alloc_try_nid(pgtable_size,
+				PAGE_SIZE, 0, MEMBLOCK_ALLOC_ACCESSIBLE,
+				NUMA_NO_NODE);
+		if (!m->vmemmap_pgtable) {
+			nr_free++;
+			list_del(&m->list);
+			memblock_free_early(__pa(m), huge_page_size(h));
+		}
+	}
+
+	return nr_free;
+}
+
+static void __init gather_vmemmap_pgtable_init(struct huge_bootmem_page *m,
+					       struct page *page)
+{
+	int i;
+	struct hstate *h = m->hstate;
+	unsigned long pgtable = (unsigned long)m->vmemmap_pgtable;
+	unsigned int nr = nr_pgtable(h);
+
+	if (!nr)
+		return;
+
+	vmemmap_pgtable_init(page);
+
+	for (i = 0; i < nr; i++, pgtable += PAGE_SIZE)
+		vmemmap_pgtable_deposit(page, (pte_t *)pgtable);
+}
+
 static void __init hugetlb_vmemmap_init(struct hstate *h)
 {
 	unsigned int order = huge_page_order(h);
@@ -1778,6 +1820,16 @@ static inline void vmemmap_pgtable_free(struct hstate *h, struct page *page)
 {
 }
 
+static inline unsigned long gather_vmemmap_pgtable_prealloc(void)
+{
+	return 0;
+}
+
+static inline void gather_vmemmap_pgtable_init(struct huge_bootmem_page *m,
+					       struct page *page)
+{
+}
+
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
@@ -3039,6 +3091,7 @@ static void __init gather_bootmem_prealloc(void)
 		WARN_ON(page_count(page) != 1);
 		prep_compound_huge_page(page, h->order);
 		WARN_ON(PageReserved(page));
+		gather_vmemmap_pgtable_init(m, page);
 		prep_new_huge_page(h, page, page_to_nid(page));
 		put_page(page); /* free it into the hugepage allocator */
 
@@ -3091,6 +3144,10 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 			break;
 		cond_resched();
 	}
+
+	if (hstate_is_gigantic(h))
+		i -= gather_vmemmap_pgtable_prealloc();
+
 	if (i < h->max_huge_pages) {
 		char buf[32];
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 13/19] mm/hugetlb: Add a BUILD_BUG_ON to check if struct page size is a power of two
  2020-10-26 14:50 [PATCH v2 00/19] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (11 preceding siblings ...)
  2020-10-26 14:51 ` [PATCH v2 12/19] mm/hugetlb: Support freeing vmemmap pages of gigantic page Muchun Song
@ 2020-10-26 14:51 ` Muchun Song
  2020-10-26 14:51 ` [PATCH v2 14/19] mm/hugetlb: Clear PageHWPoison on the non-error memory page Muchun Song
                   ` (7 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-10-26 14:51 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

We only can free the unused vmemmap to the buddy system when the
size of struct page is a power of two. So add a BUILD_BUG_ON to
check the illegal case.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c | 4 ++++
 1 file changed, 4 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index d98b55ad1a90..e3209fd2e6b2 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3776,6 +3776,10 @@ static int __init hugetlb_init(void)
 {
 	int i;
 
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	BUILD_BUG_ON_NOT_POWER_OF_2(sizeof(struct page));
+#endif
+
 	if (!hugepages_supported()) {
 		if (hugetlb_max_hstate || default_hstate_max_huge_pages)
 			pr_warn("HugeTLB: huge pages not supported, ignoring associated command-line parameters\n");
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 14/19] mm/hugetlb: Clear PageHWPoison on the non-error memory page
  2020-10-26 14:50 [PATCH v2 00/19] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (12 preceding siblings ...)
  2020-10-26 14:51 ` [PATCH v2 13/19] mm/hugetlb: Add a BUILD_BUG_ON to check if struct page size is a power of two Muchun Song
@ 2020-10-26 14:51 ` Muchun Song
  2020-10-26 14:51 ` [PATCH v2 15/19] mm/hugetlb: Flush work when dissolving hugetlb page Muchun Song
                   ` (6 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-10-26 14:51 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Because we reuse the first tail page, if we set PageHWPosion on a
tail page. It indicates that we may set PageHWPoison on a series
of pages. So we need to clear PageHWPoison on the non-error pages.
We use the head[3].mapping to record the real error page index and
clear non-error page PageHWPoison later.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c | 33 +++++++++++++++++++++++++++++++++
 1 file changed, 33 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e3209fd2e6b2..7198bd9bdce5 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1806,6 +1806,21 @@ static inline void free_gigantic_page_comm(struct hstate *h, struct page *page)
 {
 	free_gigantic_page(page, huge_page_order(h));
 }
+
+static inline bool subpage_hwpoison(struct page *head, struct page *page)
+{
+	return page_private(head + 4) == page - head;
+}
+
+static inline void set_subpage_hwpoison(struct page *head, struct page *page)
+{
+	set_page_private(head + 4, page - head);
+}
+
+static inline void clear_subpage_hwpoison(struct page *head)
+{
+	set_page_private(head + 4, 0);
+}
 #else
 static inline void hugetlb_vmemmap_init(struct hstate *h)
 {
@@ -1853,6 +1868,19 @@ static inline void free_gigantic_page_comm(struct hstate *h, struct page *page)
 	free_gigantic_page(page, huge_page_order(h));
 	spin_lock(&hugetlb_lock);
 }
+
+static inline bool subpage_hwpoison(struct page *head, struct page *page)
+{
+	return true;
+}
+
+static inline void set_subpage_hwpoison(struct page *head, struct page *page)
+{
+}
+
+static inline void clear_subpage_hwpoison(struct page *head)
+{
+}
 #endif
 
 static void update_and_free_page(struct hstate *h, struct page *page)
@@ -1877,6 +1905,9 @@ static void __free_hugepage(struct hstate *h, struct page *page)
 				1 << PG_referenced | 1 << PG_dirty |
 				1 << PG_active | 1 << PG_private |
 				1 << PG_writeback);
+
+		if (PageHWPoison(page + i) && !subpage_hwpoison(page, page + i))
+			ClearPageHWPoison(page + i);
 	}
 	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page);
 	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page_rsvd(page), page);
@@ -2066,6 +2097,7 @@ static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 	free_huge_page_vmemmap(h, page);
 	/* Must be called before the initialization of @page->lru */
 	vmemmap_pgtable_free(h, page);
+	clear_subpage_hwpoison(page);
 
 	INIT_LIST_HEAD(&page->lru);
 	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
@@ -2436,6 +2468,7 @@ int dissolve_free_huge_page(struct page *page)
 			SetPageHWPoison(page);
 			ClearPageHWPoison(head);
 		}
+		set_subpage_hwpoison(head, page);
 		list_del(&head->lru);
 		h->free_huge_pages--;
 		h->free_huge_pages_node[nid]--;
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 15/19] mm/hugetlb: Flush work when dissolving hugetlb page
  2020-10-26 14:50 [PATCH v2 00/19] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (13 preceding siblings ...)
  2020-10-26 14:51 ` [PATCH v2 14/19] mm/hugetlb: Clear PageHWPoison on the non-error memory page Muchun Song
@ 2020-10-26 14:51 ` Muchun Song
  2020-10-26 14:51 ` [PATCH v2 16/19] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap Muchun Song
                   ` (5 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-10-26 14:51 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

We should flush work when dissolving a hugetlb page to make sure that
the hugetlb page is freed to the buddy.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 7198bd9bdce5..509de0732d9f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1807,6 +1807,11 @@ static inline void free_gigantic_page_comm(struct hstate *h, struct page *page)
 	free_gigantic_page(page, huge_page_order(h));
 }
 
+static inline void flush_free_huge_page_work(void)
+{
+	flush_work(&hpage_update_work);
+}
+
 static inline bool subpage_hwpoison(struct page *head, struct page *page)
 {
 	return page_private(head + 4) == page - head;
@@ -1869,6 +1874,10 @@ static inline void free_gigantic_page_comm(struct hstate *h, struct page *page)
 	spin_lock(&hugetlb_lock);
 }
 
+static inline void flush_free_huge_page_work(void)
+{
+}
+
 static inline bool subpage_hwpoison(struct page *head, struct page *page)
 {
 	return true;
@@ -2443,6 +2452,7 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
 int dissolve_free_huge_page(struct page *page)
 {
 	int rc = -EBUSY;
+	bool need_flush = false;
 
 	/* Not to disrupt normal path by vainly holding hugetlb_lock */
 	if (!PageHuge(page))
@@ -2474,10 +2484,19 @@ int dissolve_free_huge_page(struct page *page)
 		h->free_huge_pages_node[nid]--;
 		h->max_huge_pages--;
 		update_and_free_page(h, head);
+		need_flush = true;
 		rc = 0;
 	}
 out:
 	spin_unlock(&hugetlb_lock);
+
+	/*
+	 * We should flush work before return to make sure that
+	 * the hugetlb page is freed to the buddy.
+	 */
+	if (need_flush)
+		flush_free_huge_page_work();
+
 	return rc;
 }
 
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 16/19] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
  2020-10-26 14:50 [PATCH v2 00/19] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (14 preceding siblings ...)
  2020-10-26 14:51 ` [PATCH v2 15/19] mm/hugetlb: Flush work when dissolving hugetlb page Muchun Song
@ 2020-10-26 14:51 ` Muchun Song
  2020-10-26 14:51 ` [PATCH v2 17/19] mm/hugetlb: Merge pte to huge pmd only for gigantic page Muchun Song
                   ` (4 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-10-26 14:51 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Add a kernel parameter hugetlb_free_vmemmap to disable the feature of
freeing unused vmemmap pages associated with each hugetlb page on boot.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 .../admin-guide/kernel-parameters.txt         |  9 ++++++++
 Documentation/admin-guide/mm/hugetlbpage.rst  |  3 +++
 mm/hugetlb.c                                  | 23 +++++++++++++++++++
 3 files changed, 35 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 5debfe238027..ccf07293cb63 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1551,6 +1551,15 @@
 			Documentation/admin-guide/mm/hugetlbpage.rst.
 			Format: size[KMG]
 
+	hugetlb_free_vmemmap=
+			[KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
+			this controls freeing unused vmemmap pages associated
+			with each HugeTLB page.
+			Format: { on (default) | off }
+
+			on:  enable the feature
+			off: disable the feature
+
 	hung_task_panic=
 			[KNL] Should the hung task detector generate panics.
 			Format: 0 | 1
diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index f7b1c7462991..7d6129ee97dd 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -145,6 +145,9 @@ default_hugepagesz
 
 	will all result in 256 2M huge pages being allocated.  Valid default
 	huge page size is architecture dependent.
+hugetlb_free_vmemmap
+	When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this disables freeing
+	unused vmemmap pages associated each HugeTLB page.
 
 When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
 indicates the current number of pre-allocated huge pages of the default size.
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 509de0732d9f..82467d573fee 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1310,6 +1310,8 @@ static void __free_hugepage(struct hstate *h, struct page *page);
 	(__boundary - 1 < (end) - 1) ? __boundary : (end);		\
 })
 
+static bool hugetlb_free_vmemmap_disabled __initdata;
+
 static inline unsigned int nr_free_vmemmap(struct hstate *h)
 {
 	return h->nr_free_vmemmap_pages;
@@ -1457,6 +1459,13 @@ static void __init hugetlb_vmemmap_init(struct hstate *h)
 	unsigned int order = huge_page_order(h);
 	unsigned int vmemmap_pages;
 
+	if (hugetlb_free_vmemmap_disabled) {
+		h->nr_free_vmemmap_pages = 0;
+		pr_info("HugeTLB: disable free vmemmap pages for %s\n",
+			h->name);
+		return;
+	}
+
 	vmemmap_pages = ((1 << order) * sizeof(struct page)) >> PAGE_SHIFT;
 	/*
 	 * The head page and the first tail page not free to buddy system,
@@ -1826,6 +1835,20 @@ static inline void clear_subpage_hwpoison(struct page *head)
 {
 	set_page_private(head + 4, 0);
 }
+
+static int __init early_hugetlb_free_vmemmap_param(char *buf)
+{
+	if (!buf)
+		return -EINVAL;
+
+	if (!strcmp(buf, "off"))
+		hugetlb_free_vmemmap_disabled = true;
+	else if (strcmp(buf, "on"))
+		return -EINVAL;
+
+	return 0;
+}
+early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
 #else
 static inline void hugetlb_vmemmap_init(struct hstate *h)
 {
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 17/19] mm/hugetlb: Merge pte to huge pmd only for gigantic page
  2020-10-26 14:50 [PATCH v2 00/19] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (15 preceding siblings ...)
  2020-10-26 14:51 ` [PATCH v2 16/19] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap Muchun Song
@ 2020-10-26 14:51 ` Muchun Song
  2020-10-26 14:51 ` [PATCH v2 18/19] mm/hugetlb: Gather discrete indexes of tail page Muchun Song
                   ` (3 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-10-26 14:51 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Merge pte to huge pmd if it has ever been split. Now only support
gigantic page which's vmemmap pages size is an integer multiple of
PMD_SIZE. This is the simplest case to handle.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 arch/x86/include/asm/hugetlb.h |   8 +++
 include/linux/hugetlb.h        |   7 +++
 mm/hugetlb.c                   | 106 ++++++++++++++++++++++++++++++++-
 3 files changed, 119 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/hugetlb.h b/arch/x86/include/asm/hugetlb.h
index 7c3eb60c2198..9f9e19dd0578 100644
--- a/arch/x86/include/asm/hugetlb.h
+++ b/arch/x86/include/asm/hugetlb.h
@@ -15,6 +15,14 @@ static inline bool vmemmap_pmd_huge(pmd_t *pmd)
 {
 	return pmd_large(*pmd);
 }
+
+#define vmemmap_pmd_mkhuge vmemmap_pmd_mkhuge
+static inline pmd_t vmemmap_pmd_mkhuge(struct page *page)
+{
+	pte_t entry = pfn_pte(page_to_pfn(page), PAGE_KERNEL_LARGE);
+
+	return __pmd(pte_val(entry));
+}
 #endif
 
 #define hugepages_supported() boot_cpu_has(X86_FEATURE_PSE)
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 695d3041ae7d..3a45199cc5c1 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -611,6 +611,13 @@ static inline bool vmemmap_pmd_huge(pmd_t *pmd)
 }
 #endif
 
+#ifndef vmemmap_pmd_mkhuge
+static inline pmd_t vmemmap_pmd_mkhuge(struct page *page)
+{
+	return pmd_mkhuge(mk_pmd(page, PAGE_KERNEL));
+}
+#endif
+
 #ifndef VMEMMAP_HPAGE_SHIFT
 #define VMEMMAP_HPAGE_SHIFT		PMD_SHIFT
 #endif
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 82467d573fee..a526bcdb137b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1718,6 +1718,62 @@ static void __remap_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
 	}
 }
 
+static void __replace_huge_page_pte_vmemmap(pte_t *ptep, unsigned long start,
+					    unsigned int nr, struct page *huge,
+					    struct list_head *free_pages)
+{
+	unsigned long addr;
+	unsigned long end = start + (nr << PAGE_SHIFT);
+
+	for (addr = start; addr < end; addr += PAGE_SIZE, ptep++) {
+		struct page *page;
+		pte_t old = *ptep;
+		pte_t entry;
+
+		prepare_vmemmap_page(huge);
+
+		entry = mk_pte(huge++, PAGE_KERNEL);
+		VM_WARN_ON(!pte_present(old));
+		page = pte_page(old);
+		list_add(&page->lru, free_pages);
+
+		set_pte_at(&init_mm, addr, ptep, entry);
+	}
+}
+
+static void replace_huge_page_pmd_vmemmap(pmd_t *pmd, unsigned long start,
+					  struct page *huge,
+					  struct list_head *free_pages)
+{
+	unsigned long end = start + VMEMMAP_HPAGE_SIZE;
+
+	flush_cache_vunmap(start, end);
+	__replace_huge_page_pte_vmemmap(pte_offset_kernel(pmd, start), start,
+					VMEMMAP_HPAGE_NR, huge, free_pages);
+	flush_tlb_kernel_range(start, end);
+}
+
+static pte_t *merge_vmemmap_pte(pmd_t *pmdp, unsigned long addr)
+{
+	pte_t *pte;
+	struct page *page;
+
+	pte = pte_offset_kernel(pmdp, addr);
+	page = pte_page(*pte);
+	set_pmd(pmdp, vmemmap_pmd_mkhuge(page));
+
+	return pte;
+}
+
+static void merge_huge_page_pmd_vmemmap(pmd_t *pmd, unsigned long start,
+					struct page *huge,
+					struct list_head *free_pages)
+{
+	replace_huge_page_pmd_vmemmap(pmd, start, huge, free_pages);
+	pte_free_kernel(&init_mm, merge_vmemmap_pte(pmd, start));
+	flush_tlb_kernel_range(start, start + VMEMMAP_HPAGE_SIZE);
+}
+
 static inline void alloc_vmemmap_pages(struct hstate *h, struct list_head *list)
 {
 	int i;
@@ -1731,6 +1787,15 @@ static inline void alloc_vmemmap_pages(struct hstate *h, struct list_head *list)
 	}
 }
 
+static inline void dissolve_compound_page(struct page *page, unsigned int order)
+{
+	int i;
+	unsigned int nr_pages = 1 << order;
+
+	for (i = 1; i < nr_pages; i++)
+		set_page_refcounted(page + i);
+}
+
 static void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 	pmd_t *pmd;
@@ -1750,10 +1815,47 @@ static void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
 				    __remap_huge_page_pte_vmemmap);
 	if (!freed_vmemmap_hpage_dec(pmd_page(*pmd)) && pmd_split(pmd)) {
 		/*
-		 * Todo:
-		 * Merge pte to huge pmd if it has ever been split.
+		 * Merge pte to huge pmd if it has ever been split. Now only
+		 * support gigantic page which's vmemmap pages size is an
+		 * integer multiple of PMD_SIZE. This is the simplest case
+		 * to handle.
 		 */
 		clear_pmd_split(pmd);
+
+		if (IS_ALIGNED(nr_vmemmap(h), VMEMMAP_HPAGE_NR)) {
+			unsigned long addr = (unsigned long)head;
+			unsigned long end = addr + nr_vmemmap_size(h);
+
+			spin_unlock(ptl);
+
+			for (; addr < end; addr += VMEMMAP_HPAGE_SIZE) {
+				void *to;
+				struct page *page;
+
+				page = alloc_pages(GFP_VMEMMAP_PAGE & ~__GFP_NOFAIL,
+						   VMEMMAP_HPAGE_ORDER);
+				dissolve_compound_page(page,
+						       VMEMMAP_HPAGE_ORDER);
+				if (!page)
+					goto out;
+
+				to = page_to_virt(page);
+				memcpy(to, (void *)addr, VMEMMAP_HPAGE_SIZE);
+
+				/*
+				 * Make sure that any data that writes to the
+				 * @to is made visible to the physical page.
+				 */
+				flush_kernel_vmap_range(to, VMEMMAP_HPAGE_SIZE);
+
+				merge_huge_page_pmd_vmemmap(pmd++, addr, page,
+							    &remap_pages);
+			}
+
+out:
+			free_vmemmap_page_list(&remap_pages);
+			return;
+		}
 	}
 	spin_unlock(ptl);
 }
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 18/19] mm/hugetlb: Gather discrete indexes of tail page
  2020-10-26 14:50 [PATCH v2 00/19] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (16 preceding siblings ...)
  2020-10-26 14:51 ` [PATCH v2 17/19] mm/hugetlb: Merge pte to huge pmd only for gigantic page Muchun Song
@ 2020-10-26 14:51 ` Muchun Song
  2020-10-26 14:51 ` [PATCH v2 19/19] mm/hugetlb: Add BUILD_BUG_ON to catch invalid usage of tail struct page Muchun Song
                   ` (2 subsequent siblings)
  20 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-10-26 14:51 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

For hugetlb page, there are more metadata to save in the struct
page. But the head struct page cannot meet our needs, so we have
to abuse other tail struct page to store the metadata. In order
to avoid conflicts caused by subsequent use of more tail struct
pages, we can gather these discrete indexes of tail struct page
In this case, it will be easier to add a new tail page index later.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/hugetlb.h        | 13 +++++++++++++
 include/linux/hugetlb_cgroup.h | 15 +++++++++------
 mm/hugetlb.c                   | 18 +++++++++---------
 3 files changed, 31 insertions(+), 15 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 3a45199cc5c1..6d377b6099f0 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -28,6 +28,19 @@ typedef struct { unsigned long pd; } hugepd_t;
 #include <linux/shm.h>
 #include <asm/tlbflush.h>
 
+enum {
+	SUBPAGE_INDEX_ACTIVE = 1,	/* reuse page flags of PG_private */
+	SUBPAGE_INDEX_TEMPORARY,	/* reuse page->mapping */
+#ifdef CONFIG_CGROUP_HUGETLB
+	SUBPAGE_INDEX_CGROUP = SUBPAGE_INDEX_TEMPORARY,/* reuse page->private */
+	SUBPAGE_INDEX_CGROUP_RSVD,	/* reuse page->private */
+#endif
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	SUBPAGE_INDEX_HWPOISON,		/* reuse page->private */
+#endif
+	NR_USED_SUBPAGE,
+};
+
 struct hugepage_subpool {
 	spinlock_t lock;
 	long count;
diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index 2ad6e92f124a..3d3c1c49efe4 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -24,8 +24,9 @@ struct file_region;
 /*
  * Minimum page order trackable by hugetlb cgroup.
  * At least 4 pages are necessary for all the tracking information.
- * The second tail page (hpage[2]) is the fault usage cgroup.
- * The third tail page (hpage[3]) is the reservation usage cgroup.
+ * The second tail page (hpage[SUBPAGE_INDEX_CGROUP]) is the fault
+ * usage cgroup. The third tail page (hpage[SUBPAGE_INDEX_CGROUP_RSVD])
+ * is the reservation usage cgroup.
  */
 #define HUGETLB_CGROUP_MIN_ORDER	2
 
@@ -66,9 +67,9 @@ __hugetlb_cgroup_from_page(struct page *page, bool rsvd)
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return NULL;
 	if (rsvd)
-		return (struct hugetlb_cgroup *)page[3].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP_RSVD);
 	else
-		return (struct hugetlb_cgroup *)page[2].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP);
 }
 
 static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
@@ -90,9 +91,11 @@ static inline int __set_hugetlb_cgroup(struct page *page,
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return -1;
 	if (rsvd)
-		page[3].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP_RSVD,
+				 (unsigned long)h_cg);
 	else
-		page[2].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP,
+				 (unsigned long)h_cg);
 	return 0;
 }
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a526bcdb137b..029b00ed52ed 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1925,17 +1925,17 @@ static inline void flush_free_huge_page_work(void)
 
 static inline bool subpage_hwpoison(struct page *head, struct page *page)
 {
-	return page_private(head + 4) == page - head;
+	return page_private(head + SUBPAGE_INDEX_HWPOISON) == page - head;
 }
 
 static inline void set_subpage_hwpoison(struct page *head, struct page *page)
 {
-	set_page_private(head + 4, page - head);
+	set_page_private(head + SUBPAGE_INDEX_HWPOISON, page - head);
 }
 
 static inline void clear_subpage_hwpoison(struct page *head)
 {
-	set_page_private(head + 4, 0);
+	set_page_private(head + SUBPAGE_INDEX_HWPOISON, 0);
 }
 
 static int __init early_hugetlb_free_vmemmap_param(char *buf)
@@ -2075,20 +2075,20 @@ struct hstate *size_to_hstate(unsigned long size)
 bool page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHuge(page), page);
-	return PageHead(page) && PagePrivate(&page[1]);
+	return PageHead(page) && PagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /* never called for tail page */
 static void set_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	SetPagePrivate(&page[1]);
+	SetPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 static void clear_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	ClearPagePrivate(&page[1]);
+	ClearPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /*
@@ -2100,17 +2100,17 @@ static inline bool PageHugeTemporary(struct page *page)
 	if (!PageHuge(page))
 		return false;
 
-	return (unsigned long)page[2].mapping == -1U;
+	return (unsigned long)page[SUBPAGE_INDEX_TEMPORARY].mapping == -1U;
 }
 
 static inline void SetPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = (void *)-1U;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = (void *)-1U;
 }
 
 static inline void ClearPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = NULL;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = NULL;
 }
 
 static void __free_huge_page(struct page *page)
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v2 19/19] mm/hugetlb: Add BUILD_BUG_ON to catch invalid usage of tail struct page
  2020-10-26 14:50 [PATCH v2 00/19] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (17 preceding siblings ...)
  2020-10-26 14:51 ` [PATCH v2 18/19] mm/hugetlb: Gather discrete indexes of tail page Muchun Song
@ 2020-10-26 14:51 ` Muchun Song
  2020-10-26 15:53 ` [PATCH v2 00/19] Free some vmemmap pages of hugetlb page Matthew Wilcox
  2020-10-30  9:14 ` Michal Hocko
  20 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-10-26 14:51 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

There are only `RESERVE_VMEMMAP_SIZE / sizeof(struct page)` struct pages
can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP, so add a BUILD_BUG_ON
to catch this invalid usage of tail struct page.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 029b00ed52ed..b196373a2a39 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3955,6 +3955,8 @@ static int __init hugetlb_init(void)
 
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 	BUILD_BUG_ON_NOT_POWER_OF_2(sizeof(struct page));
+	BUILD_BUG_ON(NR_USED_SUBPAGE >=
+		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
 #endif
 
 	if (!hugepages_supported()) {
-- 
2.20.1


^ permalink raw reply related	[flat|nested] 43+ messages in thread

* Re: [PATCH v2 00/19] Free some vmemmap pages of hugetlb page
  2020-10-26 14:50 [PATCH v2 00/19] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (18 preceding siblings ...)
  2020-10-26 14:51 ` [PATCH v2 19/19] mm/hugetlb: Add BUILD_BUG_ON to catch invalid usage of tail struct page Muchun Song
@ 2020-10-26 15:53 ` Matthew Wilcox
  2020-10-27  2:54   ` [External] " Muchun Song
  2020-10-30  9:14 ` Michal Hocko
  20 siblings, 1 reply; 43+ messages in thread
From: Matthew Wilcox @ 2020-10-26 15:53 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, duanxiongchun, linux-doc, linux-kernel,
	linux-mm, linux-fsdevel

On Mon, Oct 26, 2020 at 10:50:55PM +0800, Muchun Song wrote:
> For tail pages, the value of compound_dtor is the same. So we can reuse

compound_dtor is only set on the first tail page.  compound_head is
what you mean here, I think.


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v2 07/19] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
  2020-10-26 14:51 ` [PATCH v2 07/19] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page Muchun Song
@ 2020-10-26 16:01   ` Matthew Wilcox
  2020-10-27  2:58     ` [External] " Muchun Song
  2020-10-28 23:42   ` Mike Kravetz
  1 sibling, 1 reply; 43+ messages in thread
From: Matthew Wilcox @ 2020-10-26 16:01 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, duanxiongchun, linux-doc, linux-kernel,
	linux-mm, linux-fsdevel

On Mon, Oct 26, 2020 at 10:51:02PM +0800, Muchun Song wrote:
> +static void split_vmemmap_pmd(pmd_t *pmd, pte_t *pte_p, unsigned long addr)
> +{
> +	struct mm_struct *mm = &init_mm;
> +	struct page *page;
> +	pmd_t old_pmd, _pmd;
> +	int i;
> +
> +	old_pmd = READ_ONCE(*pmd);
> +	page = pmd_page(old_pmd);
> +	pmd_populate_kernel(mm, &_pmd, pte_p);
> +
> +	for (i = 0; i < VMEMMAP_HPAGE_NR; i++, addr += PAGE_SIZE) {
> +		pte_t entry, *pte;
> +
> +		entry = mk_pte(page + i, PAGE_KERNEL);

I'd be happier if that were:

	pgprot_t pgprot = PAGE_KERNEL;
...
	for (i = 0; i < VMEMMAP_HPAGE_NR; i++, addr += PAGE_SIZE) {
		pte_t entry, *pte;

		entry = mk_pte(page + i, pgprot);
		pgprot = PAGE_KERNEL_RO;

so that all subsequent tail pages are mapped read-only.


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v2 00/19] Free some vmemmap pages of hugetlb page
  2020-10-26 15:53 ` [PATCH v2 00/19] Free some vmemmap pages of hugetlb page Matthew Wilcox
@ 2020-10-27  2:54   ` Muchun Song
  0 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-10-27  2:54 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Mon, Oct 26, 2020 at 11:53 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Mon, Oct 26, 2020 at 10:50:55PM +0800, Muchun Song wrote:
> > For tail pages, the value of compound_dtor is the same. So we can reuse
>
> compound_dtor is only set on the first tail page.  compound_head is
> what you mean here, I think.
>

Yes, that's right.  Sorry for the confusion. Thanks.

-- 
Yours,
Muchun

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v2 07/19] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
  2020-10-26 16:01   ` Matthew Wilcox
@ 2020-10-27  2:58     ` Muchun Song
  0 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-10-27  2:58 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Tue, Oct 27, 2020 at 12:01 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Mon, Oct 26, 2020 at 10:51:02PM +0800, Muchun Song wrote:
> > +static void split_vmemmap_pmd(pmd_t *pmd, pte_t *pte_p, unsigned long addr)
> > +{
> > +     struct mm_struct *mm = &init_mm;
> > +     struct page *page;
> > +     pmd_t old_pmd, _pmd;
> > +     int i;
> > +
> > +     old_pmd = READ_ONCE(*pmd);
> > +     page = pmd_page(old_pmd);
> > +     pmd_populate_kernel(mm, &_pmd, pte_p);
> > +
> > +     for (i = 0; i < VMEMMAP_HPAGE_NR; i++, addr += PAGE_SIZE) {
> > +             pte_t entry, *pte;
> > +
> > +             entry = mk_pte(page + i, PAGE_KERNEL);
>
> I'd be happier if that were:
>
>         pgprot_t pgprot = PAGE_KERNEL;
> ...
>         for (i = 0; i < VMEMMAP_HPAGE_NR; i++, addr += PAGE_SIZE) {
>                 pte_t entry, *pte;
>
>                 entry = mk_pte(page + i, pgprot);
>                 pgprot = PAGE_KERNEL_RO;
>
> so that all subsequent tail pages are mapped read-only.
>

Good idea, do this can catch some illegal operations. Thanks.


-- 
Yours,
Muchun

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v2 04/19] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  2020-10-26 14:50 ` [PATCH v2 04/19] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
@ 2020-10-27 22:03   ` Mike Kravetz
  2020-10-29 13:26   ` Oscar Salvador
  1 sibling, 0 replies; 43+ messages in thread
From: Mike Kravetz @ 2020-10-27 22:03 UTC (permalink / raw)
  To: Muchun Song, corbet, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On 10/26/20 7:50 AM, Muchun Song wrote:
> If the size of hugetlb page is 2MB, we need 512 struct page structures
> (8 pages) to be associated with it. As far as I know, we only use the
> first 4 struct page structures.

Use of first 4 struct page structures comes from HUGETLB_CGROUP_MIN_ORDER.
You could point that out here.

I thought about creating a HUGETLB_MIN_ORDER definition that could be used
to calculate RESERVE_VMEMMAP_NR.  However, I think a hard coded value of
2U as in the patch is OK.

> For tail pages, the value of compound_dtor is the same. So we can reuse
> first page of tail page structs. We map the virtual addresses of the
> remaining 6 pages of tail page structs to the first tail page struct,
> and then free these 6 pages. Therefore, we need to reserve at least 2
> pages as vmemmap areas.
> 
> So we introduce a new nr_free_vmemmap_pages field in the hstate to
> indicate how many vmemmap pages associated with a hugetlb page that we
> can free to buddy system.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  include/linux/hugetlb.h |  3 +++
>  mm/hugetlb.c            | 35 +++++++++++++++++++++++++++++++++++
>  2 files changed, 38 insertions(+)

Patch looks fine with updated commit message.
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>

-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v2 05/19] mm/hugetlb: Introduce pgtable allocation/freeing helpers
  2020-10-26 14:51 ` [PATCH v2 05/19] mm/hugetlb: Introduce pgtable allocation/freeing helpers Muchun Song
@ 2020-10-28  0:32   ` Mike Kravetz
  2020-10-28  7:26     ` [External] " Muchun Song
  2020-11-05 13:23   ` Oscar Salvador
  1 sibling, 1 reply; 43+ messages in thread
From: Mike Kravetz @ 2020-10-28  0:32 UTC (permalink / raw)
  To: Muchun Song, corbet, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On 10/26/20 7:51 AM, Muchun Song wrote:
> On some architectures, the vmemmap areas use huge page mapping.
> If we want to free the unused vmemmap pages, we have to split
> the huge pmd firstly. So we should pre-allocate pgtable to split
> huge pmd.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  arch/x86/include/asm/hugetlb.h |   5 ++
>  include/linux/hugetlb.h        |  17 +++++
>  mm/hugetlb.c                   | 117 +++++++++++++++++++++++++++++++++
>  3 files changed, 139 insertions(+)
> 
> diff --git a/arch/x86/include/asm/hugetlb.h b/arch/x86/include/asm/hugetlb.h
> index 1721b1aadeb1..f5e882f999cd 100644
> --- a/arch/x86/include/asm/hugetlb.h
> +++ b/arch/x86/include/asm/hugetlb.h
> @@ -5,6 +5,11 @@
>  #include <asm/page.h>
>  #include <asm-generic/hugetlb.h>
>  
> +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> +#define VMEMMAP_HPAGE_SHIFT			PMD_SHIFT
> +#define arch_vmemmap_support_huge_mapping()	boot_cpu_has(X86_FEATURE_PSE)
> +#endif
> +
>  #define hugepages_supported() boot_cpu_has(X86_FEATURE_PSE)
>  
>  #endif /* _ASM_X86_HUGETLB_H */
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index eed3dd3bd626..ace304a6196c 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -593,6 +593,23 @@ static inline unsigned int blocks_per_huge_page(struct hstate *h)
>  
>  #include <asm/hugetlb.h>
>  
> +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> +#ifndef arch_vmemmap_support_huge_mapping
> +static inline bool arch_vmemmap_support_huge_mapping(void)
> +{
> +	return false;
> +}
> +#endif
> +
> +#ifndef VMEMMAP_HPAGE_SHIFT
> +#define VMEMMAP_HPAGE_SHIFT		PMD_SHIFT
> +#endif
> +#define VMEMMAP_HPAGE_ORDER		(VMEMMAP_HPAGE_SHIFT - PAGE_SHIFT)
> +#define VMEMMAP_HPAGE_NR		(1 << VMEMMAP_HPAGE_ORDER)
> +#define VMEMMAP_HPAGE_SIZE		((1UL) << VMEMMAP_HPAGE_SHIFT)
> +#define VMEMMAP_HPAGE_MASK		(~(VMEMMAP_HPAGE_SIZE - 1))
> +#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
> +
>  #ifndef is_hugepage_only_range
>  static inline int is_hugepage_only_range(struct mm_struct *mm,
>  					unsigned long addr, unsigned long len)
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index f1b2b733b49b..d6ae9b6876be 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1295,11 +1295,108 @@ static inline void destroy_compound_gigantic_page(struct page *page,
>  #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
>  #define RESERVE_VMEMMAP_NR	2U
>  
> +#define page_huge_pte(page)	((page)->pmd_huge_pte)
> +

I am not good at function names.  The following suggestions may be too
verbose.  However, they helped me understand purpose of routines.

>  static inline unsigned int nr_free_vmemmap(struct hstate *h)

	perhaps?	 	free_vmemmap_pages_per_hpage()

>  {
>  	return h->nr_free_vmemmap_pages;
>  }
>  
> +static inline unsigned int nr_vmemmap(struct hstate *h)

	perhaps?		vmemmap_pages_per_hpage()

> +{
> +	return nr_free_vmemmap(h) + RESERVE_VMEMMAP_NR;
> +}
> +
> +static inline unsigned long nr_vmemmap_size(struct hstate *h)

	perhaps?		vmemmap_pages_size_per_hpage()

> +{
> +	return (unsigned long)nr_vmemmap(h) << PAGE_SHIFT;
> +}
> +
> +static inline unsigned int nr_pgtable(struct hstate *h)

	perhaps?	pgtable_pages_to_prealloc_per_hpage()

> +{
> +	unsigned long vmemmap_size = nr_vmemmap_size(h);
> +
> +	if (!arch_vmemmap_support_huge_mapping())
> +		return 0;
> +
> +	/*
> +	 * No need pre-allocate page tabels when there is no vmemmap pages
> +	 * to free.
> +	 */
> +	if (!nr_free_vmemmap(h))
> +		return 0;
> +
> +	return ALIGN(vmemmap_size, VMEMMAP_HPAGE_SIZE) >> VMEMMAP_HPAGE_SHIFT;
> +}
> +
> +static inline void vmemmap_pgtable_init(struct page *page)
> +{
> +	page_huge_pte(page) = NULL;
> +}
> +

I see the following routines follow the pattern for vmemmap manipulation
in dax.

> +static void vmemmap_pgtable_deposit(struct page *page, pte_t *pte_p)
> +{
> +	pgtable_t pgtable = virt_to_page(pte_p);
> +
> +	/* FIFO */
> +	if (!page_huge_pte(page))
> +		INIT_LIST_HEAD(&pgtable->lru);
> +	else
> +		list_add(&pgtable->lru, &page_huge_pte(page)->lru);
> +	page_huge_pte(page) = pgtable;
> +}
> +
> +static pte_t *vmemmap_pgtable_withdraw(struct page *page)
> +{
> +	pgtable_t pgtable;
> +
> +	/* FIFO */
> +	pgtable = page_huge_pte(page);
> +	if (unlikely(!pgtable))
> +		return NULL;
> +	page_huge_pte(page) = list_first_entry_or_null(&pgtable->lru,
> +						       struct page, lru);
> +	if (page_huge_pte(page))
> +		list_del(&pgtable->lru);
> +	return page_to_virt(pgtable);
> +}
> +
> +static int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
> +{
> +	int i;
> +	pte_t *pte_p;
> +	unsigned int nr = nr_pgtable(h);
> +
> +	if (!nr)
> +		return 0;
> +
> +	vmemmap_pgtable_init(page);
> +
> +	for (i = 0; i < nr; i++) {
> +		pte_p = pte_alloc_one_kernel(&init_mm);
> +		if (!pte_p)
> +			goto out;
> +		vmemmap_pgtable_deposit(page, pte_p);
> +	}
> +
> +	return 0;
> +out:
> +	while (i-- && (pte_p = vmemmap_pgtable_withdraw(page)))
> +		pte_free_kernel(&init_mm, pte_p);
> +	return -ENOMEM;
> +}
> +
> +static inline void vmemmap_pgtable_free(struct hstate *h, struct page *page)
> +{
> +	pte_t *pte_p;
> +
> +	if (!nr_pgtable(h))
> +		return;
> +
> +	while ((pte_p = vmemmap_pgtable_withdraw(page)))
> +		pte_free_kernel(&init_mm, pte_p);
> +}
> +
>  static void __init hugetlb_vmemmap_init(struct hstate *h)
>  {
>  	unsigned int order = huge_page_order(h);
> @@ -1323,6 +1420,15 @@ static void __init hugetlb_vmemmap_init(struct hstate *h)
>  static inline void hugetlb_vmemmap_init(struct hstate *h)
>  {
>  }
> +
> +static inline int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
> +{
> +	return 0;
> +}
> +
> +static inline void vmemmap_pgtable_free(struct hstate *h, struct page *page)
> +{
> +}
>  #endif
>  
>  static void update_and_free_page(struct hstate *h, struct page *page)
> @@ -1531,6 +1637,9 @@ void free_huge_page(struct page *page)
>  
>  static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
>  {
> +	/* Must be called before the initialization of @page->lru */
> +	vmemmap_pgtable_free(h, page);
> +
>  	INIT_LIST_HEAD(&page->lru);
>  	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
>  	set_hugetlb_cgroup(page, NULL);
> @@ -1783,6 +1892,14 @@ static struct page *alloc_fresh_huge_page(struct hstate *h,
>  	if (!page)
>  		return NULL;
>  
> +	if (vmemmap_pgtable_prealloc(h, page)) {
> +		if (hstate_is_gigantic(h))
> +			free_gigantic_page(page, huge_page_order(h));
> +		else
> +			put_page(page);
> +		return NULL;
> +	}
> +

It seems a bit strange that we will fail a huge page allocation if
vmemmap_pgtable_prealloc fails.  Not sure, but it almost seems like we shold
allow the allocation and log a warning?  It is somewhat unfortunate that
we need to allocate a page to free pages.

>  	if (hstate_is_gigantic(h))
>  		prep_compound_gigantic_page(page, huge_page_order(h));
>  	prep_new_huge_page(h, page, page_to_nid(page));
> 


-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v2 05/19] mm/hugetlb: Introduce pgtable allocation/freeing helpers
  2020-10-28  0:32   ` Mike Kravetz
@ 2020-10-28  7:26     ` Muchun Song
  2020-10-28 23:42       ` Mike Kravetz
  0 siblings, 1 reply; 43+ messages in thread
From: Muchun Song @ 2020-10-28  7:26 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: Jonathan Corbet, Thomas Gleixner, mingo, bp, x86, hpa,
	dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton, paulmck,
	mchehab+huawei, pawan.kumar.gupta, Randy Dunlap, oneukum,
	anshuman.khandual, jroedel, Mina Almasry, David Rientjes,
	Matthew Wilcox, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Wed, Oct 28, 2020 at 8:33 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 10/26/20 7:51 AM, Muchun Song wrote:
> > On some architectures, the vmemmap areas use huge page mapping.
> > If we want to free the unused vmemmap pages, we have to split
> > the huge pmd firstly. So we should pre-allocate pgtable to split
> > huge pmd.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> >  arch/x86/include/asm/hugetlb.h |   5 ++
> >  include/linux/hugetlb.h        |  17 +++++
> >  mm/hugetlb.c                   | 117 +++++++++++++++++++++++++++++++++
> >  3 files changed, 139 insertions(+)
> >
> > diff --git a/arch/x86/include/asm/hugetlb.h b/arch/x86/include/asm/hugetlb.h
> > index 1721b1aadeb1..f5e882f999cd 100644
> > --- a/arch/x86/include/asm/hugetlb.h
> > +++ b/arch/x86/include/asm/hugetlb.h
> > @@ -5,6 +5,11 @@
> >  #include <asm/page.h>
> >  #include <asm-generic/hugetlb.h>
> >
> > +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> > +#define VMEMMAP_HPAGE_SHIFT                  PMD_SHIFT
> > +#define arch_vmemmap_support_huge_mapping()  boot_cpu_has(X86_FEATURE_PSE)
> > +#endif
> > +
> >  #define hugepages_supported() boot_cpu_has(X86_FEATURE_PSE)
> >
> >  #endif /* _ASM_X86_HUGETLB_H */
> > diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> > index eed3dd3bd626..ace304a6196c 100644
> > --- a/include/linux/hugetlb.h
> > +++ b/include/linux/hugetlb.h
> > @@ -593,6 +593,23 @@ static inline unsigned int blocks_per_huge_page(struct hstate *h)
> >
> >  #include <asm/hugetlb.h>
> >
> > +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> > +#ifndef arch_vmemmap_support_huge_mapping
> > +static inline bool arch_vmemmap_support_huge_mapping(void)
> > +{
> > +     return false;
> > +}
> > +#endif
> > +
> > +#ifndef VMEMMAP_HPAGE_SHIFT
> > +#define VMEMMAP_HPAGE_SHIFT          PMD_SHIFT
> > +#endif
> > +#define VMEMMAP_HPAGE_ORDER          (VMEMMAP_HPAGE_SHIFT - PAGE_SHIFT)
> > +#define VMEMMAP_HPAGE_NR             (1 << VMEMMAP_HPAGE_ORDER)
> > +#define VMEMMAP_HPAGE_SIZE           ((1UL) << VMEMMAP_HPAGE_SHIFT)
> > +#define VMEMMAP_HPAGE_MASK           (~(VMEMMAP_HPAGE_SIZE - 1))
> > +#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
> > +
> >  #ifndef is_hugepage_only_range
> >  static inline int is_hugepage_only_range(struct mm_struct *mm,
> >                                       unsigned long addr, unsigned long len)
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index f1b2b733b49b..d6ae9b6876be 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1295,11 +1295,108 @@ static inline void destroy_compound_gigantic_page(struct page *page,
> >  #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> >  #define RESERVE_VMEMMAP_NR   2U
> >
> > +#define page_huge_pte(page)  ((page)->pmd_huge_pte)
> > +
>
> I am not good at function names.  The following suggestions may be too
> verbose.  However, they helped me understand purpose of routines.
>
> >  static inline unsigned int nr_free_vmemmap(struct hstate *h)
>
>         perhaps?                free_vmemmap_pages_per_hpage()
>
> >  {
> >       return h->nr_free_vmemmap_pages;
> >  }
> >
> > +static inline unsigned int nr_vmemmap(struct hstate *h)
>
>         perhaps?                vmemmap_pages_per_hpage()
>
> > +{
> > +     return nr_free_vmemmap(h) + RESERVE_VMEMMAP_NR;
> > +}
> > +
> > +static inline unsigned long nr_vmemmap_size(struct hstate *h)
>
>         perhaps?                vmemmap_pages_size_per_hpage()
>
> > +{
> > +     return (unsigned long)nr_vmemmap(h) << PAGE_SHIFT;
> > +}
> > +
> > +static inline unsigned int nr_pgtable(struct hstate *h)
>
>         perhaps?        pgtable_pages_to_prealloc_per_hpage()

Good suggestions. Thanks. I will apply this.

>
> > +{
> > +     unsigned long vmemmap_size = nr_vmemmap_size(h);
> > +
> > +     if (!arch_vmemmap_support_huge_mapping())
> > +             return 0;
> > +
> > +     /*
> > +      * No need pre-allocate page tabels when there is no vmemmap pages
> > +      * to free.
> > +      */
> > +     if (!nr_free_vmemmap(h))
> > +             return 0;
> > +
> > +     return ALIGN(vmemmap_size, VMEMMAP_HPAGE_SIZE) >> VMEMMAP_HPAGE_SHIFT;
> > +}
> > +
> > +static inline void vmemmap_pgtable_init(struct page *page)
> > +{
> > +     page_huge_pte(page) = NULL;
> > +}
> > +
>
> I see the following routines follow the pattern for vmemmap manipulation
> in dax.

Did you mean move those functions to mm/sparse-vmemmap.c?

>
> > +static void vmemmap_pgtable_deposit(struct page *page, pte_t *pte_p)
> > +{
> > +     pgtable_t pgtable = virt_to_page(pte_p);
> > +
> > +     /* FIFO */
> > +     if (!page_huge_pte(page))
> > +             INIT_LIST_HEAD(&pgtable->lru);
> > +     else
> > +             list_add(&pgtable->lru, &page_huge_pte(page)->lru);
> > +     page_huge_pte(page) = pgtable;
> > +}
> > +
> > +static pte_t *vmemmap_pgtable_withdraw(struct page *page)
> > +{
> > +     pgtable_t pgtable;
> > +
> > +     /* FIFO */
> > +     pgtable = page_huge_pte(page);
> > +     if (unlikely(!pgtable))
> > +             return NULL;
> > +     page_huge_pte(page) = list_first_entry_or_null(&pgtable->lru,
> > +                                                    struct page, lru);
> > +     if (page_huge_pte(page))
> > +             list_del(&pgtable->lru);
> > +     return page_to_virt(pgtable);
> > +}
> > +
> > +static int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
> > +{
> > +     int i;
> > +     pte_t *pte_p;
> > +     unsigned int nr = nr_pgtable(h);
> > +
> > +     if (!nr)
> > +             return 0;
> > +
> > +     vmemmap_pgtable_init(page);
> > +
> > +     for (i = 0; i < nr; i++) {
> > +             pte_p = pte_alloc_one_kernel(&init_mm);
> > +             if (!pte_p)
> > +                     goto out;
> > +             vmemmap_pgtable_deposit(page, pte_p);
> > +     }
> > +
> > +     return 0;
> > +out:
> > +     while (i-- && (pte_p = vmemmap_pgtable_withdraw(page)))
> > +             pte_free_kernel(&init_mm, pte_p);
> > +     return -ENOMEM;
> > +}
> > +
> > +static inline void vmemmap_pgtable_free(struct hstate *h, struct page *page)
> > +{
> > +     pte_t *pte_p;
> > +
> > +     if (!nr_pgtable(h))
> > +             return;
> > +
> > +     while ((pte_p = vmemmap_pgtable_withdraw(page)))
> > +             pte_free_kernel(&init_mm, pte_p);
> > +}
> > +
> >  static void __init hugetlb_vmemmap_init(struct hstate *h)
> >  {
> >       unsigned int order = huge_page_order(h);
> > @@ -1323,6 +1420,15 @@ static void __init hugetlb_vmemmap_init(struct hstate *h)
> >  static inline void hugetlb_vmemmap_init(struct hstate *h)
> >  {
> >  }
> > +
> > +static inline int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
> > +{
> > +     return 0;
> > +}
> > +
> > +static inline void vmemmap_pgtable_free(struct hstate *h, struct page *page)
> > +{
> > +}
> >  #endif
> >
> >  static void update_and_free_page(struct hstate *h, struct page *page)
> > @@ -1531,6 +1637,9 @@ void free_huge_page(struct page *page)
> >
> >  static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
> >  {
> > +     /* Must be called before the initialization of @page->lru */
> > +     vmemmap_pgtable_free(h, page);
> > +
> >       INIT_LIST_HEAD(&page->lru);
> >       set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
> >       set_hugetlb_cgroup(page, NULL);
> > @@ -1783,6 +1892,14 @@ static struct page *alloc_fresh_huge_page(struct hstate *h,
> >       if (!page)
> >               return NULL;
> >
> > +     if (vmemmap_pgtable_prealloc(h, page)) {
> > +             if (hstate_is_gigantic(h))
> > +                     free_gigantic_page(page, huge_page_order(h));
> > +             else
> > +                     put_page(page);
> > +             return NULL;
> > +     }
> > +
>
> It seems a bit strange that we will fail a huge page allocation if
> vmemmap_pgtable_prealloc fails.  Not sure, but it almost seems like we shold
> allow the allocation and log a warning?  It is somewhat unfortunate that
> we need to allocate a page to free pages.

Yeah, it seems unfortunate. But if we allocate success, we can free some
vmemmap pages later. Like a compromise :) . If we can successfully allocate
a huge page, I also prefer to be able to successfully allocate another one page.
If we allow the allocation when vmemmap_pgtable_prealloc fails, we also
need to mark this page that vmemmap has not been released. Seems
increase complexity.

Thanks.

>
> >       if (hstate_is_gigantic(h))
> >               prep_compound_gigantic_page(page, huge_page_order(h));
> >       prep_new_huge_page(h, page, page_to_nid(page));
> >
>
>
> --
> Mike Kravetz



-- 
Yours,
Muchun

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v2 05/19] mm/hugetlb: Introduce pgtable allocation/freeing helpers
  2020-10-28  7:26     ` [External] " Muchun Song
@ 2020-10-28 23:42       ` Mike Kravetz
  0 siblings, 0 replies; 43+ messages in thread
From: Mike Kravetz @ 2020-10-28 23:42 UTC (permalink / raw)
  To: Muchun Song
  Cc: Jonathan Corbet, Thomas Gleixner, mingo, bp, x86, hpa,
	dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton, paulmck,
	mchehab+huawei, pawan.kumar.gupta, Randy Dunlap, oneukum,
	anshuman.khandual, jroedel, Mina Almasry, David Rientjes,
	Matthew Wilcox, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On 10/28/20 12:26 AM, Muchun Song wrote:
> On Wed, Oct 28, 2020 at 8:33 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>> On 10/26/20 7:51 AM, Muchun Song wrote:
>>
>> I see the following routines follow the pattern for vmemmap manipulation
>> in dax.
> 
> Did you mean move those functions to mm/sparse-vmemmap.c?

No.  Sorry, that was mostly a not to myself.

>>> +static void vmemmap_pgtable_deposit(struct page *page, pte_t *pte_p)
>>> +{
>>> +     pgtable_t pgtable = virt_to_page(pte_p);
>>> +
>>> +     /* FIFO */
>>> +     if (!page_huge_pte(page))
>>> +             INIT_LIST_HEAD(&pgtable->lru);
>>> +     else
>>> +             list_add(&pgtable->lru, &page_huge_pte(page)->lru);
>>> +     page_huge_pte(page) = pgtable;
>>> +}
>>> +
>>> +static pte_t *vmemmap_pgtable_withdraw(struct page *page)
>>> +{
>>> +     pgtable_t pgtable;
>>> +
>>> +     /* FIFO */
>>> +     pgtable = page_huge_pte(page);
>>> +     if (unlikely(!pgtable))
>>> +             return NULL;
>>> +     page_huge_pte(page) = list_first_entry_or_null(&pgtable->lru,
>>> +                                                    struct page, lru);
>>> +     if (page_huge_pte(page))
>>> +             list_del(&pgtable->lru);
>>> +     return page_to_virt(pgtable);
>>> +}
>>> +
...
>>> @@ -1783,6 +1892,14 @@ static struct page *alloc_fresh_huge_page(struct hstate *h,
>>>       if (!page)
>>>               return NULL;
>>>
>>> +     if (vmemmap_pgtable_prealloc(h, page)) {
>>> +             if (hstate_is_gigantic(h))
>>> +                     free_gigantic_page(page, huge_page_order(h));
>>> +             else
>>> +                     put_page(page);
>>> +             return NULL;
>>> +     }
>>> +
>>
>> It seems a bit strange that we will fail a huge page allocation if
>> vmemmap_pgtable_prealloc fails.  Not sure, but it almost seems like we shold
>> allow the allocation and log a warning?  It is somewhat unfortunate that
>> we need to allocate a page to free pages.
> 
> Yeah, it seems unfortunate. But if we allocate success, we can free some
> vmemmap pages later. Like a compromise :) . If we can successfully allocate
> a huge page, I also prefer to be able to successfully allocate another one page.
> If we allow the allocation when vmemmap_pgtable_prealloc fails, we also
> need to mark this page that vmemmap has not been released. Seems
> increase complexity.

Yes, I think it is better to leave code as it is and avoid complexity.

-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v2 07/19] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
  2020-10-26 14:51 ` [PATCH v2 07/19] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page Muchun Song
  2020-10-26 16:01   ` Matthew Wilcox
@ 2020-10-28 23:42   ` Mike Kravetz
  2020-10-29  6:13     ` [External] " Muchun Song
  1 sibling, 1 reply; 43+ messages in thread
From: Mike Kravetz @ 2020-10-28 23:42 UTC (permalink / raw)
  To: Muchun Song, corbet, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On 10/26/20 7:51 AM, Muchun Song wrote:
> When we allocate a hugetlb page from the buddy, we should free the
> unused vmemmap pages associated with it. We can do that in the
> prep_new_huge_page().
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  arch/x86/include/asm/hugetlb.h          |   7 +
>  arch/x86/include/asm/pgtable_64_types.h |   8 +
>  include/linux/hugetlb.h                 |   7 +
>  mm/hugetlb.c                            | 190 ++++++++++++++++++++++++
>  4 files changed, 212 insertions(+)
> 
> diff --git a/arch/x86/include/asm/hugetlb.h b/arch/x86/include/asm/hugetlb.h
> index f5e882f999cd..7c3eb60c2198 100644
> --- a/arch/x86/include/asm/hugetlb.h
> +++ b/arch/x86/include/asm/hugetlb.h
> @@ -4,10 +4,17 @@
>  
>  #include <asm/page.h>
>  #include <asm-generic/hugetlb.h>
> +#include <asm/pgtable.h>
>  
>  #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
>  #define VMEMMAP_HPAGE_SHIFT			PMD_SHIFT
>  #define arch_vmemmap_support_huge_mapping()	boot_cpu_has(X86_FEATURE_PSE)
> +
> +#define vmemmap_pmd_huge vmemmap_pmd_huge
> +static inline bool vmemmap_pmd_huge(pmd_t *pmd)
> +{
> +	return pmd_large(*pmd);
> +}
>  #endif
>  
>  #define hugepages_supported() boot_cpu_has(X86_FEATURE_PSE)
> diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
> index 52e5f5f2240d..bedbd2e7d06c 100644
> --- a/arch/x86/include/asm/pgtable_64_types.h
> +++ b/arch/x86/include/asm/pgtable_64_types.h
> @@ -139,6 +139,14 @@ extern unsigned int ptrs_per_p4d;
>  # define VMEMMAP_START		__VMEMMAP_BASE_L4
>  #endif /* CONFIG_DYNAMIC_MEMORY_LAYOUT */
>  
> +/*
> + * VMEMMAP_SIZE - allows the whole linear region to be covered by
> + *                a struct page array.
> + */
> +#define VMEMMAP_SIZE		(1UL << (__VIRTUAL_MASK_SHIFT - PAGE_SHIFT - \
> +					 1 + ilog2(sizeof(struct page))))
> +#define VMEMMAP_END		(VMEMMAP_START + VMEMMAP_SIZE)
> +
>  #define VMALLOC_END		(VMALLOC_START + (VMALLOC_SIZE_TB << 40) - 1)
>  
>  #define MODULES_VADDR		(__START_KERNEL_map + KERNEL_IMAGE_SIZE)
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index ace304a6196c..919f47d77117 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -601,6 +601,13 @@ static inline bool arch_vmemmap_support_huge_mapping(void)
>  }
>  #endif
>  
> +#ifndef vmemmap_pmd_huge

Let's add
#define vmemmap_pmd_huge vmemmap_pmd_huge
just in case code gets moved around in header file.

> +static inline bool vmemmap_pmd_huge(pmd_t *pmd)
> +{
> +	return pmd_huge(*pmd);
> +}
> +#endif
> +
>  #ifndef VMEMMAP_HPAGE_SHIFT
>  #define VMEMMAP_HPAGE_SHIFT		PMD_SHIFT
>  #endif
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index d6ae9b6876be..aa012d603e06 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1293,10 +1293,20 @@ static inline void destroy_compound_gigantic_page(struct page *page,
>  #endif
>  
>  #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> +#include <linux/bootmem_info.h>
> +
>  #define RESERVE_VMEMMAP_NR	2U
> +#define RESERVE_VMEMMAP_SIZE	(RESERVE_VMEMMAP_NR << PAGE_SHIFT)

Since RESERVE_VMEMMAP_SIZE is not used here, perhaps it should be added
in the patch where it is first used.

>  
>  #define page_huge_pte(page)	((page)->pmd_huge_pte)
>  
> +#define vmemmap_hpage_addr_end(addr, end)				\
> +({									\
> +	unsigned long __boundary;					\
> +	__boundary = ((addr) + VMEMMAP_HPAGE_SIZE) & VMEMMAP_HPAGE_MASK;\
> +	(__boundary - 1 < (end) - 1) ? __boundary : (end);		\
> +})
> +
>  static inline unsigned int nr_free_vmemmap(struct hstate *h)
>  {
>  	return h->nr_free_vmemmap_pages;
> @@ -1416,6 +1426,181 @@ static void __init hugetlb_vmemmap_init(struct hstate *h)
>  	pr_info("HugeTLB: can free %d vmemmap pages for %s\n",
>  		h->nr_free_vmemmap_pages, h->name);
>  }
> +
> +static inline spinlock_t *vmemmap_pmd_lockptr(pmd_t *pmd)
> +{
> +	static DEFINE_SPINLOCK(pgtable_lock);
> +
> +	return &pgtable_lock;
> +}

This is just a global lock.  Correct?  And hugetlb specific?

It should be OK as the page table entries for huegtlb pages will not
overlap with other entries.

> +
> +/*
> + * Walk a vmemmap address to the pmd it maps.
> + */
> +static pmd_t *vmemmap_to_pmd(const void *page)
> +{
> +	unsigned long addr = (unsigned long)page;
> +	pgd_t *pgd;
> +	p4d_t *p4d;
> +	pud_t *pud;
> +	pmd_t *pmd;
> +
> +	if (addr < VMEMMAP_START || addr >= VMEMMAP_END)
> +		return NULL;
> +
> +	pgd = pgd_offset_k(addr);
> +	if (pgd_none(*pgd))
> +		return NULL;
> +	p4d = p4d_offset(pgd, addr);
> +	if (p4d_none(*p4d))
> +		return NULL;
> +	pud = pud_offset(p4d, addr);
> +
> +	WARN_ON_ONCE(pud_bad(*pud));
> +	if (pud_none(*pud) || pud_bad(*pud))
> +		return NULL;
> +	pmd = pmd_offset(pud, addr);
> +
> +	return pmd;
> +}

That routine is not really hugetlb specific.  Perhaps we could move it
to sparse-vmemmap.c?  Or elsewhere?

-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v2 07/19] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
  2020-10-28 23:42   ` Mike Kravetz
@ 2020-10-29  6:13     ` Muchun Song
  2020-10-29 21:59       ` Mike Kravetz
  0 siblings, 1 reply; 43+ messages in thread
From: Muchun Song @ 2020-10-29  6:13 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: Jonathan Corbet, Thomas Gleixner, mingo, bp, x86, hpa,
	dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton, paulmck,
	mchehab+huawei, pawan.kumar.gupta, Randy Dunlap, oneukum,
	anshuman.khandual, jroedel, Mina Almasry, David Rientjes,
	Matthew Wilcox, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Thu, Oct 29, 2020 at 7:42 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 10/26/20 7:51 AM, Muchun Song wrote:
> > When we allocate a hugetlb page from the buddy, we should free the
> > unused vmemmap pages associated with it. We can do that in the
> > prep_new_huge_page().
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> >  arch/x86/include/asm/hugetlb.h          |   7 +
> >  arch/x86/include/asm/pgtable_64_types.h |   8 +
> >  include/linux/hugetlb.h                 |   7 +
> >  mm/hugetlb.c                            | 190 ++++++++++++++++++++++++
> >  4 files changed, 212 insertions(+)
> >
> > diff --git a/arch/x86/include/asm/hugetlb.h b/arch/x86/include/asm/hugetlb.h
> > index f5e882f999cd..7c3eb60c2198 100644
> > --- a/arch/x86/include/asm/hugetlb.h
> > +++ b/arch/x86/include/asm/hugetlb.h
> > @@ -4,10 +4,17 @@
> >
> >  #include <asm/page.h>
> >  #include <asm-generic/hugetlb.h>
> > +#include <asm/pgtable.h>
> >
> >  #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> >  #define VMEMMAP_HPAGE_SHIFT                  PMD_SHIFT
> >  #define arch_vmemmap_support_huge_mapping()  boot_cpu_has(X86_FEATURE_PSE)
> > +
> > +#define vmemmap_pmd_huge vmemmap_pmd_huge
> > +static inline bool vmemmap_pmd_huge(pmd_t *pmd)
> > +{
> > +     return pmd_large(*pmd);
> > +}
> >  #endif
> >
> >  #define hugepages_supported() boot_cpu_has(X86_FEATURE_PSE)
> > diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
> > index 52e5f5f2240d..bedbd2e7d06c 100644
> > --- a/arch/x86/include/asm/pgtable_64_types.h
> > +++ b/arch/x86/include/asm/pgtable_64_types.h
> > @@ -139,6 +139,14 @@ extern unsigned int ptrs_per_p4d;
> >  # define VMEMMAP_START               __VMEMMAP_BASE_L4
> >  #endif /* CONFIG_DYNAMIC_MEMORY_LAYOUT */
> >
> > +/*
> > + * VMEMMAP_SIZE - allows the whole linear region to be covered by
> > + *                a struct page array.
> > + */
> > +#define VMEMMAP_SIZE         (1UL << (__VIRTUAL_MASK_SHIFT - PAGE_SHIFT - \
> > +                                      1 + ilog2(sizeof(struct page))))
> > +#define VMEMMAP_END          (VMEMMAP_START + VMEMMAP_SIZE)
> > +
> >  #define VMALLOC_END          (VMALLOC_START + (VMALLOC_SIZE_TB << 40) - 1)
> >
> >  #define MODULES_VADDR                (__START_KERNEL_map + KERNEL_IMAGE_SIZE)
> > diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> > index ace304a6196c..919f47d77117 100644
> > --- a/include/linux/hugetlb.h
> > +++ b/include/linux/hugetlb.h
> > @@ -601,6 +601,13 @@ static inline bool arch_vmemmap_support_huge_mapping(void)
> >  }
> >  #endif
> >
> > +#ifndef vmemmap_pmd_huge
>
> Let's add
> #define vmemmap_pmd_huge vmemmap_pmd_huge
> just in case code gets moved around in header file.

OK, will do.

>
> > +static inline bool vmemmap_pmd_huge(pmd_t *pmd)
> > +{
> > +     return pmd_huge(*pmd);
> > +}
> > +#endif
> > +
> >  #ifndef VMEMMAP_HPAGE_SHIFT
> >  #define VMEMMAP_HPAGE_SHIFT          PMD_SHIFT
> >  #endif
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index d6ae9b6876be..aa012d603e06 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1293,10 +1293,20 @@ static inline void destroy_compound_gigantic_page(struct page *page,
> >  #endif
> >
> >  #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> > +#include <linux/bootmem_info.h>
> > +
> >  #define RESERVE_VMEMMAP_NR   2U
> > +#define RESERVE_VMEMMAP_SIZE (RESERVE_VMEMMAP_NR << PAGE_SHIFT)
>
> Since RESERVE_VMEMMAP_SIZE is not used here, perhaps it should be added
> in the patch where it is first used.

Will do.

>
> >
> >  #define page_huge_pte(page)  ((page)->pmd_huge_pte)
> >
> > +#define vmemmap_hpage_addr_end(addr, end)                            \
> > +({                                                                   \
> > +     unsigned long __boundary;                                       \
> > +     __boundary = ((addr) + VMEMMAP_HPAGE_SIZE) & VMEMMAP_HPAGE_MASK;\
> > +     (__boundary - 1 < (end) - 1) ? __boundary : (end);              \
> > +})
> > +
> >  static inline unsigned int nr_free_vmemmap(struct hstate *h)
> >  {
> >       return h->nr_free_vmemmap_pages;
> > @@ -1416,6 +1426,181 @@ static void __init hugetlb_vmemmap_init(struct hstate *h)
> >       pr_info("HugeTLB: can free %d vmemmap pages for %s\n",
> >               h->nr_free_vmemmap_pages, h->name);
> >  }
> > +
> > +static inline spinlock_t *vmemmap_pmd_lockptr(pmd_t *pmd)
> > +{
> > +     static DEFINE_SPINLOCK(pgtable_lock);
> > +
> > +     return &pgtable_lock;
> > +}
>
> This is just a global lock.  Correct?  And hugetlb specific?

Yes, it is a global lock. Originally, I wanted to use the pmd lock(e.g.
pmd_lockptr()). But we need to allocate memory for the spinlock and
initialize it when ALLOC_SPLIT_PTLOCKS. It may increase the
complexity.

And I think that here alloc/free hugetlb pages is not a frequent operation.
So I finally use a global lock. Maybe it is enough.

>
> It should be OK as the page table entries for huegtlb pages will not
> overlap with other entries.

Does "hugetlb specific" mean the pmd lock? or per hugetlb lock?
If it is pmd lock, this is fine to me. If not, it may not be enough.
Because the lock also guards the splitting of pmd pgtable.

Thanks.
>
> > +
> > +/*
> > + * Walk a vmemmap address to the pmd it maps.
> > + */
> > +static pmd_t *vmemmap_to_pmd(const void *page)
> > +{
> > +     unsigned long addr = (unsigned long)page;
> > +     pgd_t *pgd;
> > +     p4d_t *p4d;
> > +     pud_t *pud;
> > +     pmd_t *pmd;
> > +
> > +     if (addr < VMEMMAP_START || addr >= VMEMMAP_END)
> > +             return NULL;
> > +
> > +     pgd = pgd_offset_k(addr);
> > +     if (pgd_none(*pgd))
> > +             return NULL;
> > +     p4d = p4d_offset(pgd, addr);
> > +     if (p4d_none(*p4d))
> > +             return NULL;
> > +     pud = pud_offset(p4d, addr);
> > +
> > +     WARN_ON_ONCE(pud_bad(*pud));
> > +     if (pud_none(*pud) || pud_bad(*pud))
> > +             return NULL;
> > +     pmd = pmd_offset(pud, addr);
> > +
> > +     return pmd;
> > +}
>
> That routine is not really hugetlb specific.  Perhaps we could move it
> to sparse-vmemmap.c?  Or elsewhere?

Yeah, we can move it to sparse-vmemmap.c, maybe better.

>
> --
> Mike Kravetz



-- 
Yours,
Muchun

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v2 03/19] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  2020-10-26 14:50 ` [PATCH v2 03/19] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP Muchun Song
@ 2020-10-29 10:29   ` Oscar Salvador
  2020-10-29 13:34     ` [External] " Muchun Song
  0 siblings, 1 reply; 43+ messages in thread
From: Oscar Salvador @ 2020-10-29 10:29 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel

On Mon, Oct 26, 2020 at 10:50:58PM +0800, Muchun Song wrote:
> The purpose of introducing HUGETLB_PAGE_FREE_VMEMMAP is to configure
> whether to enable the feature of freeing unused vmemmap associated
> with HugeTLB pages. Now only support x86.

Why this needs to be a config thing?
If this space-memory-optimization does not come with a trade-off,
why does the user have to set this instead of coming by default?


-- 
Oscar Salvador
SUSE L3

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v2 04/19] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  2020-10-26 14:50 ` [PATCH v2 04/19] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
  2020-10-27 22:03   ` Mike Kravetz
@ 2020-10-29 13:26   ` Oscar Salvador
  2020-10-29 13:41     ` [External] " Muchun Song
  1 sibling, 1 reply; 43+ messages in thread
From: Oscar Salvador @ 2020-10-29 13:26 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel

On Mon, Oct 26, 2020 at 10:50:59PM +0800, Muchun Song wrote:
> If the size of hugetlb page is 2MB, we need 512 struct page structures
> (8 pages) to be associated with it. As far as I know, we only use the
> first 4 struct page structures.

As Mike pointed out, better describe what those "4" mean.
 
> For tail pages, the value of compound_dtor is the same. So we can reuse

I might be missing something, but HUGETLB_PAGE_DTOR is only set on the
first tail, right?

> +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> +#define RESERVE_VMEMMAP_NR	2U

Although you can get that from the changelog, maybe a brief comment explaining
why RESERVE_VMEMMAP_NR == 2.
> +
> +static inline unsigned int nr_free_vmemmap(struct hstate *h)
> +{
> +	return h->nr_free_vmemmap_pages;
> +}

Better add this in the patch that is used?

> +	if (vmemmap_pages > RESERVE_VMEMMAP_NR)
> +		h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
> +	else
> +		h->nr_free_vmemmap_pages = 0;

Can we really have an scenario where we end up with vmemmap_pages < RESERVE_VMEMMAP_NR?

> +
> +	pr_info("HugeTLB: can free %d vmemmap pages for %s\n",
> +		h->nr_free_vmemmap_pages, h->name);

I do not think this is useful unless debugging situations, so I would either
scratch that or make it pr_debug.


-- 
Oscar Salvador
SUSE L3

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v2 03/19] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  2020-10-29 10:29   ` Oscar Salvador
@ 2020-10-29 13:34     ` Muchun Song
  0 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-10-29 13:34 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Thu, Oct 29, 2020 at 6:29 PM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Mon, Oct 26, 2020 at 10:50:58PM +0800, Muchun Song wrote:
> > The purpose of introducing HUGETLB_PAGE_FREE_VMEMMAP is to configure
> > whether to enable the feature of freeing unused vmemmap associated
> > with HugeTLB pages. Now only support x86.
>
> Why this needs to be a config thing?
> If this space-memory-optimization does not come with a trade-off,
> why does the user have to set this instead of coming by default?

Now we only support x86_64. If we want to support other archs. We
need some arch special code to support this feature. In the future,
if this patch series is merged to mainline, I will implement this
optimization for other archs. At that time we can remove the
HUGETLB_PAGE_FREE_VMEMMAP.

Thanks.

>
>
> --
> Oscar Salvador
> SUSE L3



-- 
Yours,
Muchun

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v2 04/19] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  2020-10-29 13:26   ` Oscar Salvador
@ 2020-10-29 13:41     ` Muchun Song
  0 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-10-29 13:41 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Thu, Oct 29, 2020 at 9:26 PM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Mon, Oct 26, 2020 at 10:50:59PM +0800, Muchun Song wrote:
> > If the size of hugetlb page is 2MB, we need 512 struct page structures
> > (8 pages) to be associated with it. As far as I know, we only use the
> > first 4 struct page structures.
>
> As Mike pointed out, better describe what those "4" mean.

Yeah, thanks.

>
> > For tail pages, the value of compound_dtor is the same. So we can reuse
>
> I might be missing something, but HUGETLB_PAGE_DTOR is only set on the
> first tail, right?

Sorry for confusion. Here I mean the `compound_head` is the same.

>
> > +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> > +#define RESERVE_VMEMMAP_NR   2U
>
> Although you can get that from the changelog, maybe a brief comment explaining
> why RESERVE_VMEMMAP_NR == 2.
> > +
> > +static inline unsigned int nr_free_vmemmap(struct hstate *h)
> > +{
> > +     return h->nr_free_vmemmap_pages;
> > +}
>
> Better add this in the patch that is used?

OK, I will do it. thanks.

>
> > +     if (vmemmap_pages > RESERVE_VMEMMAP_NR)
> > +             h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
> > +     else
> > +             h->nr_free_vmemmap_pages = 0;
>
> Can we really have an scenario where we end up with vmemmap_pages < RESERVE_VMEMMAP_NR?

I think that this is impossible. On the safe side, I do this comparison.
Do you think we should remove this comparison? Is that right?

>
> > +
> > +     pr_info("HugeTLB: can free %d vmemmap pages for %s\n",
> > +             h->nr_free_vmemmap_pages, h->name);
>
> I do not think this is useful unless debugging situations, so I would either
> scratch that or make it pr_debug.

Thanks for your suggestions.

>
>
> --
> Oscar Salvador
> SUSE L3



-- 
Yours,
Muchun

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v2 07/19] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
  2020-10-29  6:13     ` [External] " Muchun Song
@ 2020-10-29 21:59       ` Mike Kravetz
  2020-10-30  2:58         ` Muchun Song
  0 siblings, 1 reply; 43+ messages in thread
From: Mike Kravetz @ 2020-10-29 21:59 UTC (permalink / raw)
  To: Muchun Song
  Cc: Jonathan Corbet, Thomas Gleixner, mingo, bp, x86, hpa,
	dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton, paulmck,
	mchehab+huawei, pawan.kumar.gupta, Randy Dunlap, oneukum,
	anshuman.khandual, jroedel, Mina Almasry, David Rientjes,
	Matthew Wilcox, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On 10/28/20 11:13 PM, Muchun Song wrote:
> On Thu, Oct 29, 2020 at 7:42 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>>
>> On 10/26/20 7:51 AM, Muchun Song wrote:
>>> +
>>> +static inline spinlock_t *vmemmap_pmd_lockptr(pmd_t *pmd)
>>> +{
>>> +     static DEFINE_SPINLOCK(pgtable_lock);
>>> +
>>> +     return &pgtable_lock;
>>> +}
>>
>> This is just a global lock.  Correct?  And hugetlb specific?
> 
> Yes, it is a global lock. Originally, I wanted to use the pmd lock(e.g.
> pmd_lockptr()). But we need to allocate memory for the spinlock and
> initialize it when ALLOC_SPLIT_PTLOCKS. It may increase the
> complexity.
> 
> And I think that here alloc/free hugetlb pages is not a frequent operation.
> So I finally use a global lock. Maybe it is enough.
> 
>>
>> It should be OK as the page table entries for huegtlb pages will not
>> overlap with other entries.
> 
> Does "hugetlb specific" mean the pmd lock? or per hugetlb lock?
> If it is pmd lock, this is fine to me. If not, it may not be enough.
> Because the lock also guards the splitting of pmd pgtable.

By "hugetlb specific", I was trying to say that only hugetlb code would
use this lock.  It is not a concern now.  However, there has been talk
about other code doing something similar to remove struct pages.  If that
ever happens then we will need a different locking scheme.

Disregard my statement about there being no overlap.  I was confusing
page tables for huge pages with page tables for mappings mmap entries
of huge pages.
-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v2 07/19] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
  2020-10-29 21:59       ` Mike Kravetz
@ 2020-10-30  2:58         ` Muchun Song
  0 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-10-30  2:58 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: Jonathan Corbet, Thomas Gleixner, mingo, bp, x86, hpa,
	dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton, paulmck,
	mchehab+huawei, pawan.kumar.gupta, Randy Dunlap, oneukum,
	anshuman.khandual, jroedel, Mina Almasry, David Rientjes,
	Matthew Wilcox, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Fri, Oct 30, 2020 at 6:00 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 10/28/20 11:13 PM, Muchun Song wrote:
> > On Thu, Oct 29, 2020 at 7:42 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
> >>
> >> On 10/26/20 7:51 AM, Muchun Song wrote:
> >>> +
> >>> +static inline spinlock_t *vmemmap_pmd_lockptr(pmd_t *pmd)
> >>> +{
> >>> +     static DEFINE_SPINLOCK(pgtable_lock);
> >>> +
> >>> +     return &pgtable_lock;
> >>> +}
> >>
> >> This is just a global lock.  Correct?  And hugetlb specific?
> >
> > Yes, it is a global lock. Originally, I wanted to use the pmd lock(e.g.
> > pmd_lockptr()). But we need to allocate memory for the spinlock and
> > initialize it when ALLOC_SPLIT_PTLOCKS. It may increase the
> > complexity.
> >
> > And I think that here alloc/free hugetlb pages is not a frequent operation.
> > So I finally use a global lock. Maybe it is enough.
> >
> >>
> >> It should be OK as the page table entries for huegtlb pages will not
> >> overlap with other entries.
> >
> > Does "hugetlb specific" mean the pmd lock? or per hugetlb lock?
> > If it is pmd lock, this is fine to me. If not, it may not be enough.
> > Because the lock also guards the splitting of pmd pgtable.
>
> By "hugetlb specific", I was trying to say that only hugetlb code would
> use this lock.  It is not a concern now.  However, there has been talk
> about other code doing something similar to remove struct pages.  If that
> ever happens then we will need a different locking scheme.

Agreed, It is not a concern now :)

>
> Disregard my statement about there being no overlap.  I was confusing
> page tables for huge pages with page tables for mappings mmap entries
> of huge pages.
> --
> Mike Kravetz



-- 
Yours,
Muchun

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v2 00/19] Free some vmemmap pages of hugetlb page
  2020-10-26 14:50 [PATCH v2 00/19] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (19 preceding siblings ...)
  2020-10-26 15:53 ` [PATCH v2 00/19] Free some vmemmap pages of hugetlb page Matthew Wilcox
@ 2020-10-30  9:14 ` Michal Hocko
  2020-10-30 10:24   ` [External] " Muchun Song
  20 siblings, 1 reply; 43+ messages in thread
From: Michal Hocko @ 2020-10-30  9:14 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel

On Mon 26-10-20 22:50:55, Muchun Song wrote:
> If we uses the 1G hugetlbpage, we can save 4095 pages. This is a very
> substantial gain. On our server, run some SPDK/QEMU applications which
> will use 1000GB hugetlbpage. With this feature enabled, we can save
> ~16GB(1G hugepage)/~11GB(2MB hugepage) memory.
[...]
>  15 files changed, 1091 insertions(+), 165 deletions(-)
>  create mode 100644 include/linux/bootmem_info.h
>  create mode 100644 mm/bootmem_info.c

This is a neat idea but the code footprint is really non trivial. To a
very tricky code which hugetlb is unfortunately.

Saving 1,6% of memory is definitely interesting especially for 1GB pages
which tend to be more static and where the savings are more visible.

Anyway, I haven't seen any runtime overhead analysis here. What is the
price to modify the vmemmap page tables and make them pte rather than
pmd based (especially for 2MB hugetlb). Also, how expensive is the
vmemmap page tables reconstruction on the freeing path?

Thanks!
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v2 00/19] Free some vmemmap pages of hugetlb page
  2020-10-30  9:14 ` Michal Hocko
@ 2020-10-30 10:24   ` Muchun Song
  2020-10-30 15:19     ` Michal Hocko
  0 siblings, 1 reply; 43+ messages in thread
From: Muchun Song @ 2020-10-30 10:24 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Fri, Oct 30, 2020 at 5:14 PM Michal Hocko <mhocko@suse.com> wrote:
>
> On Mon 26-10-20 22:50:55, Muchun Song wrote:
> > If we uses the 1G hugetlbpage, we can save 4095 pages. This is a very
> > substantial gain. On our server, run some SPDK/QEMU applications which
> > will use 1000GB hugetlbpage. With this feature enabled, we can save
> > ~16GB(1G hugepage)/~11GB(2MB hugepage) memory.
> [...]
> >  15 files changed, 1091 insertions(+), 165 deletions(-)
> >  create mode 100644 include/linux/bootmem_info.h
> >  create mode 100644 mm/bootmem_info.c
>
> This is a neat idea but the code footprint is really non trivial. To a
> very tricky code which hugetlb is unfortunately.
>
> Saving 1,6% of memory is definitely interesting especially for 1GB pages
> which tend to be more static and where the savings are more visible.
>
> Anyway, I haven't seen any runtime overhead analysis here. What is the
> price to modify the vmemmap page tables and make them pte rather than
> pmd based (especially for 2MB hugetlb). Also, how expensive is the
> vmemmap page tables reconstruction on the freeing path?

Yeah, I haven't tested the remapping overhead of reserving a hugetlb
page. I can do that. But the overhead is not on the allocation/freeing of
each hugetlb page, it is only once when we reserve some hugetlb pages
through /proc/sys/vm/nr_hugepages. Once the reservation is successful,
the subsequent allocation, freeing and using are the same as before
(not patched). So I think that the overhead is acceptable.

Thanks.

>
> Thanks!
> --
> Michal Hocko
> SUSE Labs



-- 
Yours,
Muchun

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v2 00/19] Free some vmemmap pages of hugetlb page
  2020-10-30 10:24   ` [External] " Muchun Song
@ 2020-10-30 15:19     ` Michal Hocko
  0 siblings, 0 replies; 43+ messages in thread
From: Michal Hocko @ 2020-10-30 15:19 UTC (permalink / raw)
  To: Muchun Song
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Fri 30-10-20 18:24:25, Muchun Song wrote:
> On Fri, Oct 30, 2020 at 5:14 PM Michal Hocko <mhocko@suse.com> wrote:
> >
> > On Mon 26-10-20 22:50:55, Muchun Song wrote:
> > > If we uses the 1G hugetlbpage, we can save 4095 pages. This is a very
> > > substantial gain. On our server, run some SPDK/QEMU applications which
> > > will use 1000GB hugetlbpage. With this feature enabled, we can save
> > > ~16GB(1G hugepage)/~11GB(2MB hugepage) memory.
> > [...]
> > >  15 files changed, 1091 insertions(+), 165 deletions(-)
> > >  create mode 100644 include/linux/bootmem_info.h
> > >  create mode 100644 mm/bootmem_info.c
> >
> > This is a neat idea but the code footprint is really non trivial. To a
> > very tricky code which hugetlb is unfortunately.
> >
> > Saving 1,6% of memory is definitely interesting especially for 1GB pages
> > which tend to be more static and where the savings are more visible.
> >
> > Anyway, I haven't seen any runtime overhead analysis here. What is the
> > price to modify the vmemmap page tables and make them pte rather than
> > pmd based (especially for 2MB hugetlb). Also, how expensive is the
> > vmemmap page tables reconstruction on the freeing path?
> 
> Yeah, I haven't tested the remapping overhead of reserving a hugetlb
> page. I can do that. But the overhead is not on the allocation/freeing of
> each hugetlb page, it is only once when we reserve some hugetlb pages
> through /proc/sys/vm/nr_hugepages. Once the reservation is successful,
> the subsequent allocation, freeing and using are the same as before
> (not patched).

Yes, that is quite clear. Except for the hugetlb overcommit and
migration if the pool is depeleted. Maybe few other cases.

> So I think that the overhead is acceptable.

Having some numbers for a such a large feature is really needed.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v2 05/19] mm/hugetlb: Introduce pgtable allocation/freeing helpers
  2020-10-26 14:51 ` [PATCH v2 05/19] mm/hugetlb: Introduce pgtable allocation/freeing helpers Muchun Song
  2020-10-28  0:32   ` Mike Kravetz
@ 2020-11-05 13:23   ` Oscar Salvador
  2020-11-05 16:08     ` [External] " Muchun Song
  1 sibling, 1 reply; 43+ messages in thread
From: Oscar Salvador @ 2020-11-05 13:23 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel

On Mon, Oct 26, 2020 at 10:51:00PM +0800, Muchun Song wrote:
> +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> +#define VMEMMAP_HPAGE_SHIFT			PMD_SHIFT
> +#define arch_vmemmap_support_huge_mapping()	boot_cpu_has(X86_FEATURE_PSE)

I do not think you need this.
We already have hugepages_supported().

> +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> +#ifndef arch_vmemmap_support_huge_mapping
> +static inline bool arch_vmemmap_support_huge_mapping(void)
> +{
> +	return false;
> +}

Same as above

>  static inline unsigned int nr_free_vmemmap(struct hstate *h)
>  {
>  	return h->nr_free_vmemmap_pages;
>  }
>  
> +static inline unsigned int nr_vmemmap(struct hstate *h)
> +{
> +	return nr_free_vmemmap(h) + RESERVE_VMEMMAP_NR;
> +}
> +
> +static inline unsigned long nr_vmemmap_size(struct hstate *h)
> +{
> +	return (unsigned long)nr_vmemmap(h) << PAGE_SHIFT;
> +}
> +
> +static inline unsigned int nr_pgtable(struct hstate *h)
> +{
> +	unsigned long vmemmap_size = nr_vmemmap_size(h);
> +
> +	if (!arch_vmemmap_support_huge_mapping())
> +		return 0;
> +
> +	/*
> +	 * No need pre-allocate page tabels when there is no vmemmap pages
> +	 * to free.
> +	 */
> +	if (!nr_free_vmemmap(h))
> +		return 0;
> +
> +	return ALIGN(vmemmap_size, VMEMMAP_HPAGE_SIZE) >> VMEMMAP_HPAGE_SHIFT;
> +}

IMHO, Mike's naming suggestion fit much better.

> +static void vmemmap_pgtable_deposit(struct page *page, pte_t *pte_p)
> +{
> +	pgtable_t pgtable = virt_to_page(pte_p);
> +
> +	/* FIFO */
> +	if (!page_huge_pte(page))
> +		INIT_LIST_HEAD(&pgtable->lru);
> +	else
> +		list_add(&pgtable->lru, &page_huge_pte(page)->lru);
> +	page_huge_pte(page) = pgtable;
> +}

I think it would make more sense if this took a pgtable argument
instead of a pte_t *.

> +static pte_t *vmemmap_pgtable_withdraw(struct page *page)
> +{
> +	pgtable_t pgtable;
> +
> +	/* FIFO */
> +	pgtable = page_huge_pte(page);
> +	if (unlikely(!pgtable))
> +		return NULL;

AFAICS, above check only needs to be run once.
It think we can move it to vmemmap_pgtable_free, can't we?

> +	page_huge_pte(page) = list_first_entry_or_null(&pgtable->lru,
> +						       struct page, lru);
> +	if (page_huge_pte(page))
> +		list_del(&pgtable->lru);
> +	return page_to_virt(pgtable);
> +}

At the risk of adding more code, I think it would be nice to return a
pagetable_t?
So it is more coherent with the name of the function and with what
we are doing.

It is a pitty we cannot converge these and pgtable_trans_huge_*.
They share some code but it is different enough.

> +static int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
> +{
> +	int i;
> +	pte_t *pte_p;
> +	unsigned int nr = nr_pgtable(h);
> +
> +	if (!nr)
> +		return 0;
> +
> +	vmemmap_pgtable_init(page);

Maybe just open code this one?

> +static inline void vmemmap_pgtable_free(struct hstate *h, struct page *page)
> +{
> +	pte_t *pte_p;
> +
> +	if (!nr_pgtable(h))
> +		return;
> +
> +	while ((pte_p = vmemmap_pgtable_withdraw(page)))
> +		pte_free_kernel(&init_mm, pte_p);

As mentioned above, move the pgtable_t check from vmemmap_pgtable_withdraw
in here.

  
>  static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
>  {
> +	/* Must be called before the initialization of @page->lru */
> +	vmemmap_pgtable_free(h, page);
> +
>  	INIT_LIST_HEAD(&page->lru);
>  	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
>  	set_hugetlb_cgroup(page, NULL);
> @@ -1783,6 +1892,14 @@ static struct page *alloc_fresh_huge_page(struct hstate *h,
>  	if (!page)
>  		return NULL;
>  
> +	if (vmemmap_pgtable_prealloc(h, page)) {
> +		if (hstate_is_gigantic(h))
> +			free_gigantic_page(page, huge_page_order(h));
> +		else
> +			put_page(page);
> +		return NULL;
> +	}
> +

I must confess I am bit puzzled.

IIUC, in this patch we are just adding the helpers to create/tear the page
tables.
I do not think we actually need to call vmemmap_pgtable_prealloc/vmemmap_pgtable_free, do we?
In the end, we are just allocating pages for pagetables and then free them shortly.

I think it would make more sense to add the calls when they need to be?


-- 
Oscar Salvador
SUSE L3

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v2 05/19] mm/hugetlb: Introduce pgtable allocation/freeing helpers
  2020-11-05 13:23   ` Oscar Salvador
@ 2020-11-05 16:08     ` Muchun Song
  2020-11-06  9:46       ` Oscar Salvador
  0 siblings, 1 reply; 43+ messages in thread
From: Muchun Song @ 2020-11-05 16:08 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Thu, Nov 5, 2020 at 9:23 PM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Mon, Oct 26, 2020 at 10:51:00PM +0800, Muchun Song wrote:
> > +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> > +#define VMEMMAP_HPAGE_SHIFT                  PMD_SHIFT
> > +#define arch_vmemmap_support_huge_mapping()  boot_cpu_has(X86_FEATURE_PSE)
>
> I do not think you need this.
> We already have hugepages_supported().

Maybe some architectures support hugepage, but the vmemmap do not
use the hugepage map. In  this case, we need it. But I am not sure if it
exists in the real world. At least, x86 can reuse hugepages_supported.

>
> > +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> > +#ifndef arch_vmemmap_support_huge_mapping
> > +static inline bool arch_vmemmap_support_huge_mapping(void)
> > +{
> > +     return false;
> > +}
>
> Same as above
>
> >  static inline unsigned int nr_free_vmemmap(struct hstate *h)
> >  {
> >       return h->nr_free_vmemmap_pages;
> >  }
> >
> > +static inline unsigned int nr_vmemmap(struct hstate *h)
> > +{
> > +     return nr_free_vmemmap(h) + RESERVE_VMEMMAP_NR;
> > +}
> > +
> > +static inline unsigned long nr_vmemmap_size(struct hstate *h)
> > +{
> > +     return (unsigned long)nr_vmemmap(h) << PAGE_SHIFT;
> > +}
> > +
> > +static inline unsigned int nr_pgtable(struct hstate *h)
> > +{
> > +     unsigned long vmemmap_size = nr_vmemmap_size(h);
> > +
> > +     if (!arch_vmemmap_support_huge_mapping())
> > +             return 0;
> > +
> > +     /*
> > +      * No need pre-allocate page tabels when there is no vmemmap pages
> > +      * to free.
> > +      */
> > +     if (!nr_free_vmemmap(h))
> > +             return 0;
> > +
> > +     return ALIGN(vmemmap_size, VMEMMAP_HPAGE_SIZE) >> VMEMMAP_HPAGE_SHIFT;
> > +}
>
> IMHO, Mike's naming suggestion fit much better.

I will do that.

>
> > +static void vmemmap_pgtable_deposit(struct page *page, pte_t *pte_p)
> > +{
> > +     pgtable_t pgtable = virt_to_page(pte_p);
> > +
> > +     /* FIFO */
> > +     if (!page_huge_pte(page))
> > +             INIT_LIST_HEAD(&pgtable->lru);
> > +     else
> > +             list_add(&pgtable->lru, &page_huge_pte(page)->lru);
> > +     page_huge_pte(page) = pgtable;
> > +}
>
> I think it would make more sense if this took a pgtable argument
> instead of a pte_t *.

Will do. Thanks for your suggestions.

>
> > +static pte_t *vmemmap_pgtable_withdraw(struct page *page)
> > +{
> > +     pgtable_t pgtable;
> > +
> > +     /* FIFO */
> > +     pgtable = page_huge_pte(page);
> > +     if (unlikely(!pgtable))
> > +             return NULL;
>
> AFAICS, above check only needs to be run once.
> It think we can move it to vmemmap_pgtable_free, can't we?

Yeah, we can. Thanks.

>
> > +     page_huge_pte(page) = list_first_entry_or_null(&pgtable->lru,
> > +                                                    struct page, lru);
> > +     if (page_huge_pte(page))
> > +             list_del(&pgtable->lru);
> > +     return page_to_virt(pgtable);
> > +}
>
> At the risk of adding more code, I think it would be nice to return a
> pagetable_t?
> So it is more coherent with the name of the function and with what
> we are doing.

Yeah.

>
> It is a pity we cannot converge these and pgtable_trans_huge_*.
> They share some code but it is different enough.
>
> > +static int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
> > +{
> > +     int i;
> > +     pte_t *pte_p;
> > +     unsigned int nr = nr_pgtable(h);
> > +
> > +     if (!nr)
> > +             return 0;
> > +
> > +     vmemmap_pgtable_init(page);
>
> Maybe just open code this one?

Sorry. I don't quite understand what it means. Could you explain?


>
> > +static inline void vmemmap_pgtable_free(struct hstate *h, struct page *page)
> > +{
> > +     pte_t *pte_p;
> > +
> > +     if (!nr_pgtable(h))
> > +             return;
> > +
> > +     while ((pte_p = vmemmap_pgtable_withdraw(page)))
> > +             pte_free_kernel(&init_mm, pte_p);
>
> As mentioned above, move the pgtable_t check from vmemmap_pgtable_withdraw
> in here.

OK.

>
>
> >  static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
> >  {
> > +     /* Must be called before the initialization of @page->lru */
> > +     vmemmap_pgtable_free(h, page);
> > +
> >       INIT_LIST_HEAD(&page->lru);
> >       set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
> >       set_hugetlb_cgroup(page, NULL);
> > @@ -1783,6 +1892,14 @@ static struct page *alloc_fresh_huge_page(struct hstate *h,
> >       if (!page)
> >               return NULL;
> >
> > +     if (vmemmap_pgtable_prealloc(h, page)) {
> > +             if (hstate_is_gigantic(h))
> > +                     free_gigantic_page(page, huge_page_order(h));
> > +             else
> > +                     put_page(page);
> > +             return NULL;
> > +     }
> > +
>
> I must confess I am bit puzzled.
>
> IIUC, in this patch we are just adding the helpers to create/tear the page
> tables.
> I do not think we actually need to call vmemmap_pgtable_prealloc/vmemmap_pgtable_free, do we?
> In the end, we are just allocating pages for pagetables and then free them shortly.
>
> I think it would make more sense to add the calls when they need to be?

OK,  will do. Thanks very much.

>
>
> --
> Oscar Salvador
> SUSE L3



--
Yours,
Muchun

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v2 05/19] mm/hugetlb: Introduce pgtable allocation/freeing helpers
  2020-11-05 16:08     ` [External] " Muchun Song
@ 2020-11-06  9:46       ` Oscar Salvador
  2020-11-06 16:43         ` Muchun Song
  0 siblings, 1 reply; 43+ messages in thread
From: Oscar Salvador @ 2020-11-06  9:46 UTC (permalink / raw)
  To: Muchun Song
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Fri, Nov 06, 2020 at 12:08:22AM +0800, Muchun Song wrote:
> > I do not think you need this.
> > We already have hugepages_supported().
> 
> Maybe some architectures support hugepage, but the vmemmap do not
> use the hugepage map. In  this case, we need it. But I am not sure if it
> exists in the real world. At least, x86 can reuse hugepages_supported.

Yes, but that is the point.
IIUC, this patchset will enable HugeTLB vmemmap pages only for x86_64.
Then, let us make the patchset specific to that architecture.

If at some point this grows more users (powerpc, arm, ...), then we
can add the missing code, but for now it makes sense to only include
the bits to make this work on x86_64.

And also according to this the changelog is a bit "misleading".

"On some architectures, the vmemmap areas use huge page mapping.
If we want to free the unused vmemmap pages, we have to split
the huge pmd firstly. So we should pre-allocate pgtable to split
huge pmd."

On x86_64, vmemmap is always PMD mapped if the machine has hugepages
support and if we have 2MB contiguos pages and PMD aligned.
e.g: I have seen cases where after the system has ran for a period
of time hotplug operations were mapping the vmemmap representing
the hot-added range on page base, because we could not find
enough contiguos and aligned memory.

Something that [1] tries to solve:

[1] https://patchwork.kernel.org/project/linux-mm/cover/20201022125835.26396-1-osalvador@suse.de/

But anyway, my point is that let us make it clear in the changelog that
this is aimed for x86_64 at the moment.
Saying "on some architures" might make think people that this is not
x86_64 specific.

>> > > +     vmemmap_pgtable_init(page);
> >
> > Maybe just open code this one?
> 
> Sorry. I don't quite understand what it means. Could you explain?

I meant doing 

page_huge_pte(page) = NULL

But no strong feelings.

-- 
Oscar Salvador
SUSE L3

^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v2 05/19] mm/hugetlb: Introduce pgtable allocation/freeing helpers
  2020-11-06  9:46       ` Oscar Salvador
@ 2020-11-06 16:43         ` Muchun Song
  0 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-11-06 16:43 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Fri, Nov 6, 2020 at 5:46 PM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Fri, Nov 06, 2020 at 12:08:22AM +0800, Muchun Song wrote:
> > > I do not think you need this.
> > > We already have hugepages_supported().
> >
> > Maybe some architectures support hugepage, but the vmemmap do not
> > use the hugepage map. In  this case, we need it. But I am not sure if it
> > exists in the real world. At least, x86 can reuse hugepages_supported.
>
> Yes, but that is the point.
> IIUC, this patchset will enable HugeTLB vmemmap pages only for x86_64.
> Then, let us make the patchset specific to that architecture.
>
> If at some point this grows more users (powerpc, arm, ...), then we
> can add the missing code, but for now it makes sense to only include
> the bits to make this work on x86_64.
>
> And also according to this the changelog is a bit "misleading".
>
> "On some architectures, the vmemmap areas use huge page mapping.
> If we want to free the unused vmemmap pages, we have to split
> the huge pmd firstly. So we should pre-allocate pgtable to split
> huge pmd."

Thanks for your suggestions. I will rewrite the commit log.

>
> On x86_64, vmemmap is always PMD mapped if the machine has hugepages
> support and if we have 2MB contiguos pages and PMD aligned.
> e.g: I have seen cases where after the system has ran for a period
> of time hotplug operations were mapping the vmemmap representing
> the hot-added range on page base, because we could not find
> enough contiguos and aligned memory.
>
> Something that [1] tries to solve:
>
> [1] https://patchwork.kernel.org/project/linux-mm/cover/20201022125835.26396-1-osalvador@suse.de/
>
> But anyway, my point is that let us make it clear in the changelog that
> this is aimed for x86_64 at the moment.
> Saying "on some architures" might make think people that this is not
> x86_64 specific.
>
> >> > > +     vmemmap_pgtable_init(page);
> > >
> > > Maybe just open code this one?
> >
> > Sorry. I don't quite understand what it means. Could you explain?
>
> I meant doing
>
> page_huge_pte(page) = NULL
>
> But no strong feelings.
>
> --
> Oscar Salvador
> SUSE L3



-- 
Yours,
Muchun

^ permalink raw reply	[flat|nested] 43+ messages in thread

end of thread, other threads:[~2020-11-06 16:43 UTC | newest]

Thread overview: 43+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-10-26 14:50 [PATCH v2 00/19] Free some vmemmap pages of hugetlb page Muchun Song
2020-10-26 14:50 ` [PATCH v2 01/19] mm/memory_hotplug: Move bootmem info registration API to bootmem_info.c Muchun Song
2020-10-26 14:50 ` [PATCH v2 02/19] mm/memory_hotplug: Move {get,put}_page_bootmem() " Muchun Song
2020-10-26 14:50 ` [PATCH v2 03/19] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP Muchun Song
2020-10-29 10:29   ` Oscar Salvador
2020-10-29 13:34     ` [External] " Muchun Song
2020-10-26 14:50 ` [PATCH v2 04/19] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
2020-10-27 22:03   ` Mike Kravetz
2020-10-29 13:26   ` Oscar Salvador
2020-10-29 13:41     ` [External] " Muchun Song
2020-10-26 14:51 ` [PATCH v2 05/19] mm/hugetlb: Introduce pgtable allocation/freeing helpers Muchun Song
2020-10-28  0:32   ` Mike Kravetz
2020-10-28  7:26     ` [External] " Muchun Song
2020-10-28 23:42       ` Mike Kravetz
2020-11-05 13:23   ` Oscar Salvador
2020-11-05 16:08     ` [External] " Muchun Song
2020-11-06  9:46       ` Oscar Salvador
2020-11-06 16:43         ` Muchun Song
2020-10-26 14:51 ` [PATCH v2 06/19] mm/bootmem_info: Introduce {free,prepare}_vmemmap_page() Muchun Song
2020-10-26 14:51 ` [PATCH v2 07/19] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page Muchun Song
2020-10-26 16:01   ` Matthew Wilcox
2020-10-27  2:58     ` [External] " Muchun Song
2020-10-28 23:42   ` Mike Kravetz
2020-10-29  6:13     ` [External] " Muchun Song
2020-10-29 21:59       ` Mike Kravetz
2020-10-30  2:58         ` Muchun Song
2020-10-26 14:51 ` [PATCH v2 08/19] mm/hugetlb: Defer freeing of hugetlb pages Muchun Song
2020-10-26 14:51 ` [PATCH v2 09/19] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page Muchun Song
2020-10-26 14:51 ` [PATCH v2 10/19] mm/hugetlb: Introduce remap_huge_page_pmd_vmemmap helper Muchun Song
2020-10-26 14:51 ` [PATCH v2 11/19] mm/hugetlb: Use PG_slab to indicate split pmd Muchun Song
2020-10-26 14:51 ` [PATCH v2 12/19] mm/hugetlb: Support freeing vmemmap pages of gigantic page Muchun Song
2020-10-26 14:51 ` [PATCH v2 13/19] mm/hugetlb: Add a BUILD_BUG_ON to check if struct page size is a power of two Muchun Song
2020-10-26 14:51 ` [PATCH v2 14/19] mm/hugetlb: Clear PageHWPoison on the non-error memory page Muchun Song
2020-10-26 14:51 ` [PATCH v2 15/19] mm/hugetlb: Flush work when dissolving hugetlb page Muchun Song
2020-10-26 14:51 ` [PATCH v2 16/19] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap Muchun Song
2020-10-26 14:51 ` [PATCH v2 17/19] mm/hugetlb: Merge pte to huge pmd only for gigantic page Muchun Song
2020-10-26 14:51 ` [PATCH v2 18/19] mm/hugetlb: Gather discrete indexes of tail page Muchun Song
2020-10-26 14:51 ` [PATCH v2 19/19] mm/hugetlb: Add BUILD_BUG_ON to catch invalid usage of tail struct page Muchun Song
2020-10-26 15:53 ` [PATCH v2 00/19] Free some vmemmap pages of hugetlb page Matthew Wilcox
2020-10-27  2:54   ` [External] " Muchun Song
2020-10-30  9:14 ` Michal Hocko
2020-10-30 10:24   ` [External] " Muchun Song
2020-10-30 15:19     ` Michal Hocko

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).