* [PATCH v4 00/21] Free some vmemmap pages of hugetlb page
@ 2020-11-13 10:59 Muchun Song
From: Muchun Song @ 2020-11-13 10:59 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Hi all,

This patch series frees some vmemmap pages (struct page structures)
associated with each HugeTLB page when it is preallocated, in order to
save memory.

Nowadays we track the status of physical page frames using struct page
structures arranged in one or more arrays, and there is a one-to-one
mapping between each physical page frame and its corresponding struct
page structure.
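
With SPARSEMEM_VMEMMAP, that mapping is plain pointer arithmetic into a
virtually contiguous array, which is what makes the remapping trick below
possible (from include/asm-generic/memory_model.h):

   /* memmap is virtually contiguous: the pfn <-> struct page conversion
    * is just an offset into the vmemmap array. */
   #define __pfn_to_page(pfn)	(vmemmap + (pfn))
   #define __page_to_pfn(page)	(unsigned long)((page) - vmemmap)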

HugeTLB support is built on top of the multiple page size support
provided by most modern architectures. For example, x86 CPUs normally
support 4K and 2M (and 1G, if architecturally supported) page sizes.
Every HugeTLB page is backed by more than one struct page structure: a 2M
HugeTLB page has 512 struct page structures and a 1G HugeTLB page has
4096. However, the HugeTLB core only uses the first 4 struct page
structures to store metadata associated with each HugeTLB page (the use
of the first 4 comes from HUGETLB_CGROUP_MIN_ORDER). The remaining struct
page structures are usually only read through their compound_head field,
which holds the same value in all of them. If we can free some of that
struct page memory back to the buddy system, we can save a lot of memory.
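
As a minimal sketch of the invariant this relies on (illustrative code,
not part of the series): every tail struct page of a compound page points
back at its head, so any reader that only follows compound_head sees the
same value in all of the tail pages:

   #include <linux/mm.h>

   /*
    * Sketch: all tail struct pages of a compound page resolve to the same
    * head page via compound_head(), which is why most of the vmemmap
    * pages backing a HugeTLB page can be collapsed onto one page frame.
    */
   static bool tails_resolve_to_head(struct page *head, unsigned long nr)
   {
   	unsigned long i;

   	for (i = 1; i < nr; i++)
   		if (compound_head(head + i) != head)
   			return false;
   	return true;
   }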

When the system boots up, every 2M HugeTLB page has 512 struct page
structures, which occupy 8 pages (sizeof(struct page) * 512 / PAGE_SIZE).

   hugetlbpage                  struct pages(8 pages)          page frame(8 pages)
  +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
  |           |                     |     0     | -------------> |     0     |
  |           |                     |     1     | -------------> |     1     |
  |           |                     |     2     | -------------> |     2     |
  |           |                     |     3     | -------------> |     3     |
  |           |                     |     4     | -------------> |     4     |
  |     2M    |                     |     5     | -------------> |     5     |
  |           |                     |     6     | -------------> |     6     |
  |           |                     |     7     | -------------> |     7     |
  |           |                     +-----------+                +-----------+
  |           |
  |           |
  +-----------+


When a HugeTLB page is preallocated, we can change the mapping above to
the one below.

   hugetlbpage                  struct pages(8 pages)          page frame(8 pages)
  +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
  |           |                     |     0     | -------------> |     0     |
  |           |                     |     1     | -------------> |     1     |
  |           |                     |     2     | -------------> +-----------+
  |           |                     |     3     | -----------------^ ^ ^ ^ ^
  |           |                     |     4     | -------------------+ | | |
  |     2M    |                     |     5     | ---------------------+ | |
  |           |                     |     6     | -----------------------+ |
  |           |                     |     7     | -------------------------+
  |           |                     +-----------+
  |           |
  |           |
  +-----------+

For tail pages, the value of compound_head is the same. So we can reuse
the first page of tail page structs: we map the virtual addresses of the
remaining 6 pages of tail page structs to the first tail page struct, and
then free those 6 page frames. Therefore, we need to reserve at least 2
pages as vmemmap areas.

When a HugeTLB page is freed to the buddy system, we must allocate six
pages for the vmemmap and restore the previous mapping relationship.

If we use 1G HugeTLB pages, we can save 4088 pages per hugepage (there
are 4096 pages of struct page structures; we reserve 2 pages for vmemmap
and 8 pages for page tables, so we can save 4088 pages). This is a very
substantial gain. On our servers we run SPDK/QEMU applications that use
1024GB of HugeTLB pages. With this feature enabled, we can save ~16GB
(1G hugepages) / ~11GB (2MB hugepages) of memory.
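
As a rough sanity check of those figures (assuming 4K base pages and
sizeof(struct page) == 64; attributing the remaining gap to page tables
is my reading, not a measured breakdown):

   2MB HugeTLB: 512 * 64B = 32KB of vmemmap = 8 pages, 6 freed per hugepage
                1024GB / 2MB = 524288 hugepages * 6 * 4KB ~= 12GB gross,
                in the neighbourhood of the quoted ~11GB net
   1GB HugeTLB: 262144 * 64B = 16MB of vmemmap = 4096 pages, 4088 freed
                1024GB / 1GB = 1024 hugepages * 4088 * 4KB ~= 16GB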

Because the vmemmap page tables are reconstructed on the freeing and
allocating paths, some overhead is added. Here is some overhead analysis.

1) Allocating 10240 2MB hugetlb pages.

   a) With this patch series applied:
   # time echo 10240 > /proc/sys/vm/nr_hugepages

   real     0m0.166s
   user     0m0.000s
   sys      0m0.166s

   # bpftrace -e 'kprobe:alloc_fresh_huge_page { @start[tid] = nsecs; } kretprobe:alloc_fresh_huge_page /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [8K, 16K)           8360 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [16K, 32K)          1868 |@@@@@@@@@@@                                         |
   [32K, 64K)            10 |                                                    |
   [64K, 128K)            2 |                                                    |

   b) Without this patch series:
   # time echo 10240 > /proc/sys/vm/nr_hugepages

   real     0m0.066s
   user     0m0.000s
   sys      0m0.066s

   # bpftrace -e 'kprobe:alloc_fresh_huge_page { @start[tid] = nsecs; } kretprobe:alloc_fresh_huge_page /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [4K, 8K)           10176 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [8K, 16K)             62 |                                                    |
   [16K, 32K)             2 |                                                    |

   Summary: allocation with this feature is about ~2x slower than before.

2) Freeing 10240 2MB hugetlb pages.

   a) With this patch series applied:
   # time echo 0 > /proc/sys/vm/nr_hugepages

   real     0m0.004s
   user     0m0.000s
   sys      0m0.002s

   # bpftrace -e 'kprobe:__free_hugepage { @start[tid] = nsecs; } kretprobe:__free_hugepage /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [16K, 32K)         10240 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|

   b) Without this patch series:
   # time echo 0 > /proc/sys/vm/nr_hugepages

   real     0m0.077s
   user     0m0.001s
   sys      0m0.075s

   # bpftrace -e 'kprobe:__free_hugepage { @start[tid] = nsecs; } kretprobe:__free_hugepage /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [4K, 8K)            9950 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [8K, 16K)            287 |@                                                   |
   [16K, 32K)             3 |                                                    |

   Summary: __free_hugepage is about ~2-4x slower than before, although
            judging by the allocation test above, it is probably also
            about ~2x slower.

            Why is the 'real' time with the patches applied smaller than
            before? Because in this patch series, freeing a hugetlb page
            is asynchronous (done via a kworker).

Although the overhead has increased, it is not significant. As Mike
said, "However, remember that the majority of use cases create hugetlb pages at
or shortly after boot time and add them to the pool. So, additional overhead is
at pool creation time. There is no change to 'normal run time' operations of
getting a page from or returning a page to the pool (think page fault/unmap)".

  changelog in v4:
  1. Move all the vmemmap functions to hugetlb_vmemmap.c.
  2. Make CONFIG_HUGETLB_PAGE_FREE_VMEMMAP default to y; if we want to
     disable this feature, we can do so via a boot/kernel command line
     parameter.
  3. Remove vmemmap_pgtable_{init, deposit, withdraw}() helper functions.
  4. Initialize page table lock for vmemmap through core_initcall mechanism.

  Thanks to Mike and Oscar for their suggestions.

  changelog in v3:
  1. Rename some helper functions. Thanks Mike.
  2. Rework some code. Thanks Mike and Oscar.
  3. Remap the tail vmemmap page with PAGE_KERNEL_RO instead of
     PAGE_KERNEL. Thanks Matthew.
  4. Add some overhead analysis in the cover letter.
  5. Use the vmemmap PMD table lock instead of a hugetlb-specific global lock.

  changelog in v2:
  1. Fix: do not call dissolve_compound_page() in alloc_huge_page_vmemmap().
  2. Fix some typos and code style problems.
  3. Remove unused handle_vmemmap_fault().
  4. Merge some commits to one commit suggested by Mike.

Muchun Song (21):
  mm/memory_hotplug: Move bootmem info registration API to
    bootmem_info.c
  mm/memory_hotplug: Move {get,put}_page_bootmem() to bootmem_info.c
  mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  mm/hugetlb: Introduce pgtable allocation/freeing helpers
  mm/bootmem_info: Introduce {free,prepare}_vmemmap_page()
  mm/bootmem_info: Combine bootmem info and type into page->freelist
  mm/hugetlb: Initialize page table lock for vmemmap
  mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
  mm/hugetlb: Defer freeing of hugetlb pages
  mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb
    page
  mm/hugetlb: Introduce remap_huge_page_pmd_vmemmap helper
  mm/hugetlb: Use PG_slab to indicate split pmd
  mm/hugetlb: Support freeing vmemmap pages of gigantic page
  mm/hugetlb: Set the PageHWPoison to the raw error page
  mm/hugetlb: Flush work when dissolving hugetlb page
  mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
  mm/hugetlb: Merge pte to huge pmd only for gigantic page
  mm/hugetlb: Gather discrete indexes of tail page
  mm/hugetlb: Add BUILD_BUG_ON to catch invalid usage of tail struct
    page
  mm/hugetlb: Disable freeing vmemmap if struct page size is not power
    of two

 Documentation/admin-guide/kernel-parameters.txt |   9 +
 Documentation/admin-guide/mm/hugetlbpage.rst    |   3 +
 arch/x86/include/asm/hugetlb.h                  |  17 +
 arch/x86/include/asm/pgtable_64_types.h         |   8 +
 arch/x86/mm/init_64.c                           |   7 +-
 fs/Kconfig                                      |  14 +
 include/linux/bootmem_info.h                    |  78 +++
 include/linux/hugetlb.h                         |  19 +
 include/linux/hugetlb_cgroup.h                  |  15 +-
 include/linux/memory_hotplug.h                  |  27 -
 mm/Makefile                                     |   2 +
 mm/bootmem_info.c                               | 124 ++++
 mm/hugetlb.c                                    | 163 +++++-
 mm/hugetlb_vmemmap.c                            | 732 ++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h                            | 104 ++++
 mm/memory_hotplug.c                             | 116 ----
 mm/sparse.c                                     |   5 +-
 17 files changed, 1263 insertions(+), 180 deletions(-)
 create mode 100644 include/linux/bootmem_info.h
 create mode 100644 mm/bootmem_info.c
 create mode 100644 mm/hugetlb_vmemmap.c
 create mode 100644 mm/hugetlb_vmemmap.h

-- 
2.11.0



* [PATCH v4 01/21] mm/memory_hotplug: Move bootmem info registration API to bootmem_info.c
From: Muchun Song @ 2020-11-13 10:59 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Move the common bootmem info registration API into a separate
bootmem_info.c for use by later patches. This is just code movement
without any functional change.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 arch/x86/mm/init_64.c          |  1 +
 include/linux/bootmem_info.h   | 27 ++++++++++++
 include/linux/memory_hotplug.h | 23 ----------
 mm/Makefile                    |  1 +
 mm/bootmem_info.c              | 99 ++++++++++++++++++++++++++++++++++++++++++
 mm/memory_hotplug.c            | 91 +-------------------------------------
 6 files changed, 129 insertions(+), 113 deletions(-)
 create mode 100644 include/linux/bootmem_info.h
 create mode 100644 mm/bootmem_info.c

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index b5a3fa4033d3..c7f7ad55b625 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -33,6 +33,7 @@
 #include <linux/nmi.h>
 #include <linux/gfp.h>
 #include <linux/kcore.h>
+#include <linux/bootmem_info.h>
 
 #include <asm/processor.h>
 #include <asm/bios_ebda.h>
diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
new file mode 100644
index 000000000000..65bb9b23140f
--- /dev/null
+++ b/include/linux/bootmem_info.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __LINUX_BOOTMEM_INFO_H
+#define __LINUX_BOOTMEM_INFO_H
+
+#include <linux/mmzone.h>
+
+/*
+ * Types for free bootmem stored in page->lru.next. These have to be in
+ * some random range in unsigned long space for debugging purposes.
+ */
+enum {
+	MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
+	SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
+	MIX_SECTION_INFO,
+	NODE_INFO,
+	MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
+};
+
+#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
+void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
+#else
+static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
+{
+}
+#endif
+
+#endif /* __LINUX_BOOTMEM_INFO_H */
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 51a877fec8da..19e5d067294c 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -33,18 +33,6 @@ struct vmem_altmap;
 	___page;						   \
 })
 
-/*
- * Types for free bootmem stored in page->lru.next. These have to be in
- * some random range in unsigned long space for debugging purposes.
- */
-enum {
-	MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
-	SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
-	MIX_SECTION_INFO,
-	NODE_INFO,
-	MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
-};
-
 /* Types for control the zone type of onlined and offlined memory */
 enum {
 	/* Offline the memory. */
@@ -209,13 +197,6 @@ static inline void arch_refresh_nodedata(int nid, pg_data_t *pgdat)
 #endif /* CONFIG_NUMA */
 #endif /* CONFIG_HAVE_ARCH_NODEDATA_EXTENSION */
 
-#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
-extern void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
-#else
-static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-}
-#endif
 extern void put_page_bootmem(struct page *page);
 extern void get_page_bootmem(unsigned long ingo, struct page *page,
 			     unsigned long type);
@@ -254,10 +235,6 @@ static inline int mhp_notimplemented(const char *func)
 	return -ENOSYS;
 }
 
-static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-}
-
 static inline int try_online_node(int nid)
 {
 	return 0;
diff --git a/mm/Makefile b/mm/Makefile
index d5649f1c12c0..752111587c99 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -82,6 +82,7 @@ obj-$(CONFIG_SLAB) += slab.o
 obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KASAN)	+= kasan/
 obj-$(CONFIG_FAILSLAB) += failslab.o
+obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_MEMTEST)		+= memtest.o
 obj-$(CONFIG_MIGRATION) += migrate.o
diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
new file mode 100644
index 000000000000..39fa8fc120bc
--- /dev/null
+++ b/mm/bootmem_info.c
@@ -0,0 +1,99 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ *  linux/mm/bootmem_info.c
+ *
+ *  Copyright (C)
+ */
+#include <linux/mm.h>
+#include <linux/compiler.h>
+#include <linux/memblock.h>
+#include <linux/bootmem_info.h>
+#include <linux/memory_hotplug.h>
+
+#ifndef CONFIG_SPARSEMEM_VMEMMAP
+static void register_page_bootmem_info_section(unsigned long start_pfn)
+{
+	unsigned long mapsize, section_nr, i;
+	struct mem_section *ms;
+	struct page *page, *memmap;
+	struct mem_section_usage *usage;
+
+	section_nr = pfn_to_section_nr(start_pfn);
+	ms = __nr_to_section(section_nr);
+
+	/* Get section's memmap address */
+	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+
+	/*
+	 * Get page for the memmap's phys address
+	 * XXX: need more consideration for sparse_vmemmap...
+	 */
+	page = virt_to_page(memmap);
+	mapsize = sizeof(struct page) * PAGES_PER_SECTION;
+	mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
+
+	/* remember memmap's page */
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, SECTION_INFO);
+
+	usage = ms->usage;
+	page = virt_to_page(usage);
+
+	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
+
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
+
+}
+#else /* CONFIG_SPARSEMEM_VMEMMAP */
+static void register_page_bootmem_info_section(unsigned long start_pfn)
+{
+	unsigned long mapsize, section_nr, i;
+	struct mem_section *ms;
+	struct page *page, *memmap;
+	struct mem_section_usage *usage;
+
+	section_nr = pfn_to_section_nr(start_pfn);
+	ms = __nr_to_section(section_nr);
+
+	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+
+	register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
+
+	usage = ms->usage;
+	page = virt_to_page(usage);
+
+	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
+
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
+}
+#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
+
+void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
+{
+	unsigned long i, pfn, end_pfn, nr_pages;
+	int node = pgdat->node_id;
+	struct page *page;
+
+	nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
+	page = virt_to_page(pgdat);
+
+	for (i = 0; i < nr_pages; i++, page++)
+		get_page_bootmem(node, page, NODE_INFO);
+
+	pfn = pgdat->node_start_pfn;
+	end_pfn = pgdat_end_pfn(pgdat);
+
+	/* register section info */
+	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
+		/*
+		 * Some platforms can assign the same pfn to multiple nodes - on
+		 * node0 as well as nodeN.  To avoid registering a pfn against
+		 * multiple nodes we check that this pfn does not already
+		 * reside in some other nodes.
+		 */
+		if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node))
+			register_page_bootmem_info_section(pfn);
+	}
+}
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index baded53b9ff9..2da4ad071456 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -21,6 +21,7 @@
 #include <linux/memory.h>
 #include <linux/memremap.h>
 #include <linux/memory_hotplug.h>
+#include <linux/bootmem_info.h>
 #include <linux/highmem.h>
 #include <linux/vmalloc.h>
 #include <linux/ioport.h>
@@ -167,96 +168,6 @@ void put_page_bootmem(struct page *page)
 	}
 }
 
-#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
-#ifndef CONFIG_SPARSEMEM_VMEMMAP
-static void register_page_bootmem_info_section(unsigned long start_pfn)
-{
-	unsigned long mapsize, section_nr, i;
-	struct mem_section *ms;
-	struct page *page, *memmap;
-	struct mem_section_usage *usage;
-
-	section_nr = pfn_to_section_nr(start_pfn);
-	ms = __nr_to_section(section_nr);
-
-	/* Get section's memmap address */
-	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-	/*
-	 * Get page for the memmap's phys address
-	 * XXX: need more consideration for sparse_vmemmap...
-	 */
-	page = virt_to_page(memmap);
-	mapsize = sizeof(struct page) * PAGES_PER_SECTION;
-	mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
-
-	/* remember memmap's page */
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, SECTION_INFO);
-
-	usage = ms->usage;
-	page = virt_to_page(usage);
-
-	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-
-}
-#else /* CONFIG_SPARSEMEM_VMEMMAP */
-static void register_page_bootmem_info_section(unsigned long start_pfn)
-{
-	unsigned long mapsize, section_nr, i;
-	struct mem_section *ms;
-	struct page *page, *memmap;
-	struct mem_section_usage *usage;
-
-	section_nr = pfn_to_section_nr(start_pfn);
-	ms = __nr_to_section(section_nr);
-
-	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-	register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
-
-	usage = ms->usage;
-	page = virt_to_page(usage);
-
-	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-}
-#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
-
-void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-	unsigned long i, pfn, end_pfn, nr_pages;
-	int node = pgdat->node_id;
-	struct page *page;
-
-	nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
-	page = virt_to_page(pgdat);
-
-	for (i = 0; i < nr_pages; i++, page++)
-		get_page_bootmem(node, page, NODE_INFO);
-
-	pfn = pgdat->node_start_pfn;
-	end_pfn = pgdat_end_pfn(pgdat);
-
-	/* register section info */
-	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
-		/*
-		 * Some platforms can assign the same pfn to multiple nodes - on
-		 * node0 as well as nodeN.  To avoid registering a pfn against
-		 * multiple nodes we check that this pfn does not already
-		 * reside in some other nodes.
-		 */
-		if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node))
-			register_page_bootmem_info_section(pfn);
-	}
-}
-#endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */
-
 static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
 		const char *reason)
 {
-- 
2.11.0



* [PATCH v4 02/21] mm/memory_hotplug: Move {get,put}_page_bootmem() to bootmem_info.c
From: Muchun Song @ 2020-11-13 10:59 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

In a later patch, we will use {get,put}_page_bootmem() to initialize a
page for the vmemmap or to free a vmemmap page to the buddy system. So
move them out of CONFIG_MEMORY_HOTPLUG_SPARSE. This is just code movement
without any functional change.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 arch/x86/mm/init_64.c          |  2 +-
 include/linux/bootmem_info.h   | 13 +++++++++++++
 include/linux/memory_hotplug.h |  4 ----
 mm/bootmem_info.c              | 25 +++++++++++++++++++++++++
 mm/memory_hotplug.c            | 27 ---------------------------
 mm/sparse.c                    |  1 +
 6 files changed, 40 insertions(+), 32 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index c7f7ad55b625..0a45f062826e 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1572,7 +1572,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 	return err;
 }
 
-#if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_HAVE_BOOTMEM_INFO_NODE)
+#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
 void register_page_bootmem_memmap(unsigned long section_nr,
 				  struct page *start_page, unsigned long nr_pages)
 {
diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
index 65bb9b23140f..4ed6dee1adc9 100644
--- a/include/linux/bootmem_info.h
+++ b/include/linux/bootmem_info.h
@@ -18,10 +18,23 @@ enum {
 
 #ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
 void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
+
+void get_page_bootmem(unsigned long info, struct page *page,
+		      unsigned long type);
+void put_page_bootmem(struct page *page);
 #else
 static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
 {
 }
+
+static inline void put_page_bootmem(struct page *page)
+{
+}
+
+static inline void get_page_bootmem(unsigned long info, struct page *page,
+				    unsigned long type)
+{
+}
 #endif
 
 #endif /* __LINUX_BOOTMEM_INFO_H */
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 19e5d067294c..c9f3361fe84b 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -197,10 +197,6 @@ static inline void arch_refresh_nodedata(int nid, pg_data_t *pgdat)
 #endif /* CONFIG_NUMA */
 #endif /* CONFIG_HAVE_ARCH_NODEDATA_EXTENSION */
 
-extern void put_page_bootmem(struct page *page);
-extern void get_page_bootmem(unsigned long ingo, struct page *page,
-			     unsigned long type);
-
 void get_online_mems(void);
 void put_online_mems(void);
 
diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
index 39fa8fc120bc..fcab5a3f8cc0 100644
--- a/mm/bootmem_info.c
+++ b/mm/bootmem_info.c
@@ -10,6 +10,31 @@
 #include <linux/bootmem_info.h>
 #include <linux/memory_hotplug.h>
 
+void get_page_bootmem(unsigned long info, struct page *page, unsigned long type)
+{
+	page->freelist = (void *)type;
+	SetPagePrivate(page);
+	set_page_private(page, info);
+	page_ref_inc(page);
+}
+
+void put_page_bootmem(struct page *page)
+{
+	unsigned long type;
+
+	type = (unsigned long) page->freelist;
+	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
+	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
+
+	if (page_ref_dec_return(page) == 1) {
+		page->freelist = NULL;
+		ClearPagePrivate(page);
+		set_page_private(page, 0);
+		INIT_LIST_HEAD(&page->lru);
+		free_reserved_page(page);
+	}
+}
+
 #ifndef CONFIG_SPARSEMEM_VMEMMAP
 static void register_page_bootmem_info_section(unsigned long start_pfn)
 {
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 2da4ad071456..ae57eedc341f 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -21,7 +21,6 @@
 #include <linux/memory.h>
 #include <linux/memremap.h>
 #include <linux/memory_hotplug.h>
-#include <linux/bootmem_info.h>
 #include <linux/highmem.h>
 #include <linux/vmalloc.h>
 #include <linux/ioport.h>
@@ -142,32 +141,6 @@ static void release_memory_resource(struct resource *res)
 }
 
 #ifdef CONFIG_MEMORY_HOTPLUG_SPARSE
-void get_page_bootmem(unsigned long info,  struct page *page,
-		      unsigned long type)
-{
-	page->freelist = (void *)type;
-	SetPagePrivate(page);
-	set_page_private(page, info);
-	page_ref_inc(page);
-}
-
-void put_page_bootmem(struct page *page)
-{
-	unsigned long type;
-
-	type = (unsigned long) page->freelist;
-	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
-	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
-
-	if (page_ref_dec_return(page) == 1) {
-		page->freelist = NULL;
-		ClearPagePrivate(page);
-		set_page_private(page, 0);
-		INIT_LIST_HEAD(&page->lru);
-		free_reserved_page(page);
-	}
-}
-
 static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
 		const char *reason)
 {
diff --git a/mm/sparse.c b/mm/sparse.c
index b25ad8e64839..a4138410d890 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -13,6 +13,7 @@
 #include <linux/vmalloc.h>
 #include <linux/swap.h>
 #include <linux/swapops.h>
+#include <linux/bootmem_info.h>
 
 #include "internal.h"
 #include <asm/dma.h>
-- 
2.11.0



* [PATCH v4 03/21] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
From: Muchun Song @ 2020-11-13 10:59 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Introduce HUGETLB_PAGE_FREE_VMEMMAP to configure whether to enable the
feature of freeing unused vmemmap pages associated with HugeTLB pages.
For now, only x86 is supported.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 arch/x86/mm/init_64.c |  2 +-
 fs/Kconfig            | 14 ++++++++++++++
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0a45f062826e..0435bee2e172 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1225,7 +1225,7 @@ static struct kcore_list kcore_vsyscall;
 
 static void __init register_page_bootmem_info(void)
 {
-#ifdef CONFIG_NUMA
+#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)
 	int i;
 
 	for_each_online_node(i)
diff --git a/fs/Kconfig b/fs/Kconfig
index 976e8b9033c4..67e1bc99574f 100644
--- a/fs/Kconfig
+++ b/fs/Kconfig
@@ -245,6 +245,20 @@ config HUGETLBFS
 config HUGETLB_PAGE
 	def_bool HUGETLBFS
 
+config HUGETLB_PAGE_FREE_VMEMMAP
+	def_bool HUGETLB_PAGE
+	depends on X86
+	depends on SPARSEMEM_VMEMMAP
+	depends on HAVE_BOOTMEM_INFO_NODE
+	help
+	  When using SPARSEMEM_VMEMMAP, the system can save up some memory
+	  from pre-allocated HugeTLB pages when they are not used. 6 pages
+	  per 2MB HugeTLB page and 4094 per 1GB HugeTLB page.
+
+	  When the pages are going to be used or freed up, the vmemmap array
+	  representing that range needs to be remapped again and the pages
+	  we discarded earlier need to be rellocated again.
+
 config MEMFD_CREATE
 	def_bool TMPFS || HUGETLBFS
 
-- 
2.11.0



* [PATCH v4 04/21] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
From: Muchun Song @ 2020-11-13 10:59 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

If the size of a HugeTLB page is 2MB, we need 512 struct page structures
(8 pages) to be associated with it. As far as I know, we only use the
first 4 struct page structures; use of the first 4 comes from
HUGETLB_CGROUP_MIN_ORDER.

For tail pages, the value of compound_head is the same. So we can reuse
the first page of tail page structs: we map the virtual addresses of the
remaining 6 pages of tail page structs to the first tail page struct, and
then free those 6 pages. Therefore, we need to reserve at least 2 pages
as vmemmap areas.

So we introduce a new nr_free_vmemmap_pages field in the hstate to
indicate how many vmemmap pages associated with a HugeTLB page can be
freed to the buddy system.
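
As a worked example of the computation below (assuming 4K base pages and
sizeof(struct page) == 64; the concrete values are illustrative):

   2MB hstate: order = 9,  vmemmap_pages = (512 * 64) >> PAGE_SHIFT = 8,
               nr_free_vmemmap_pages = 8 - 2 = 6
   1GB hstate: order = 18, vmemmap_pages = (262144 * 64) >> PAGE_SHIFT = 4096,
               nr_free_vmemmap_pages = 4096 - 2 = 4094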

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 include/linux/hugetlb.h |   3 ++
 mm/Makefile             |   1 +
 mm/hugetlb.c            |   3 ++
 mm/hugetlb_vmemmap.c    | 108 ++++++++++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h    |  20 +++++++++
 5 files changed, 135 insertions(+)
 create mode 100644 mm/hugetlb_vmemmap.c
 create mode 100644 mm/hugetlb_vmemmap.h

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index d5cc5f802dd4..eed3dd3bd626 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -492,6 +492,9 @@ struct hstate {
 	unsigned int nr_huge_pages_node[MAX_NUMNODES];
 	unsigned int free_huge_pages_node[MAX_NUMNODES];
 	unsigned int surplus_huge_pages_node[MAX_NUMNODES];
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	unsigned int nr_free_vmemmap_pages;
+#endif
 #ifdef CONFIG_CGROUP_HUGETLB
 	/* cgroup control files */
 	struct cftype cgroup_files_dfl[7];
diff --git a/mm/Makefile b/mm/Makefile
index 752111587c99..2a734576bbc0 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -71,6 +71,7 @@ obj-$(CONFIG_FRONTSWAP)	+= frontswap.o
 obj-$(CONFIG_ZSWAP)	+= zswap.o
 obj-$(CONFIG_HAS_DMA)	+= dmapool.o
 obj-$(CONFIG_HUGETLBFS)	+= hugetlb.o
+obj-$(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)	+= hugetlb_vmemmap.o
 obj-$(CONFIG_NUMA) 	+= mempolicy.o
 obj-$(CONFIG_SPARSEMEM)	+= sparse.o
 obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 81a41aa080a5..f88032c24667 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -42,6 +42,7 @@
 #include <linux/userfaultfd_k.h>
 #include <linux/page_owner.h>
 #include "internal.h"
+#include "hugetlb_vmemmap.h"
 
 int hugetlb_max_hstate __read_mostly;
 unsigned int default_hstate_idx;
@@ -3285,6 +3286,8 @@ void __init hugetlb_add_hstate(unsigned int order)
 	snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB",
 					huge_page_size(h)/1024);
 
+	hugetlb_vmemmap_init(h);
+
 	parsed_hstate = h;
 }
 
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
new file mode 100644
index 000000000000..a6c9948302e2
--- /dev/null
+++ b/mm/hugetlb_vmemmap.c
@@ -0,0 +1,108 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ *     Author: Muchun Song <songmuchun@bytedance.com>
+ *
+ * Nowadays we track the status of physical page frames using struct page
+ * structures arranged in one or more arrays. And here exists one-to-one
+ * mapping between the physical page frame and the corresponding struct page
+ * structure.
+ *
+ * The HugeTLB support is built on top of multiple page size support that
+ * is provided by most modern architectures. For example, x86 CPUs normally
+ * support 4K and 2M (1G if architecturally supported) page sizes. Every
+ * HugeTLB has more than one struct page structure. The 2M HugeTLB has 512
+ * struct page structure and 1G HugeTLB has 4096 struct page structures. But
+ * in the core of HugeTLB only uses the first 4 (Use of first 4 struct page
+ * structures comes from HUGETLB_CGROUP_MIN_ORDER.) struct page structures to
+ * store metadata associated with each HugeTLB. The rest of the struct page
+ * structures are usually read the compound_head field which are all the same
+ * value. If we can free some struct page memory to buddy system so that we
+ * can save a lot of memory.
+ *
+ * When the system boot up, every 2M HugeTLB has 512 struct page structures
+ * which size is 8 pages(sizeof(struct page) * 512 / PAGE_SIZE).
+ *
+ *    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ * |           |                     |     0     | -------------> |     0     |
+ * |           |                     |     1     | -------------> |     1     |
+ * |           |                     |     2     | -------------> |     2     |
+ * |           |                     |     3     | -------------> |     3     |
+ * |           |                     |     4     | -------------> |     4     |
+ * |     2M    |                     |     5     | -------------> |     5     |
+ * |           |                     |     6     | -------------> |     6     |
+ * |           |                     |     7     | -------------> |     7     |
+ * |           |                     +-----------+                +-----------+
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ *
+ * When a HugeTLB is preallocated, we can change the mapping from above to
+ * bellow.
+ *
+ *    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ * |           |                     |     0     | -------------> |     0     |
+ * |           |                     |     1     | -------------> |     1     |
+ * |           |                     |     2     | -------------> +-----------+
+ * |           |                     |     3     | -----------------^ ^ ^ ^ ^
+ * |           |                     |     4     | -------------------+ | | |
+ * |     2M    |                     |     5     | ---------------------+ | |
+ * |           |                     |     6     | -----------------------+ |
+ * |           |                     |     7     | -------------------------+
+ * |           |                     +-----------+
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ * For tail pages, the value of compound_head is the same. So we can reuse
+ * first page of tail page structures. We map the virtual addresses of the
+ * remaining 6 pages of tail page structures to the first tail page structures,
+ * and then free these 6 page frames. Therefore, we need to reserve at least 2
+ * pages as vmemmap areas.
+ *
+ * When a HugeTLB is freed to the buddy system, we should allocate 6 pages for
+ * vmemmap pages and restore the previous mapping relationship.
+ */
+#define pr_fmt(fmt)	"HugeTLB Vmemmap: " fmt
+
+#include "hugetlb_vmemmap.h"
+
+/*
+ * There are 512 struct page structures(8 pages) associated with each 2MB
+ * hugetlb page. For tail pages, the value of compound_head is the same.
+ * So we can reuse first page of tail page structures. We map the virtual
+ * addresses of the remaining 6 pages of tail page structures to the first
+ * tail page struct, and then free these 6 pages. Therefore, we need to
+ * reserve at least 2 pages as vmemmap areas.
+ */
+#define RESERVE_VMEMMAP_NR		2U
+
+void __init hugetlb_vmemmap_init(struct hstate *h)
+{
+	unsigned int order = huge_page_order(h);
+	unsigned int vmemmap_pages;
+
+	vmemmap_pages = ((1 << order) * sizeof(struct page)) >> PAGE_SHIFT;
+	/*
+	 * The head page and the first tail page are not to be freed to buddy
+	 * system, the others page will map to the first tail page. So there
+	 * are (@vmemmap_pages - RESERVE_VMEMMAP_NR) pages can be freed.
+	 *
+	 * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? This is
+	 * not expected to happen unless the system is corrupted. So on the
+	 * safe side, it is only a safety net.
+	 */
+	if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR))
+		h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
+	else
+		h->nr_free_vmemmap_pages = 0;
+
+	pr_debug("can free %d vmemmap pages for %s\n", h->nr_free_vmemmap_pages,
+		 h->name);
+}
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
new file mode 100644
index 000000000000..40c0c7dfb60d
--- /dev/null
+++ b/mm/hugetlb_vmemmap.h
@@ -0,0 +1,20 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ *     Author: Muchun Song <songmuchun@bytedance.com>
+ */
+#ifndef _LINUX_HUGETLB_VMEMMAP_H
+#define _LINUX_HUGETLB_VMEMMAP_H
+#include <linux/hugetlb.h>
+
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+void __init hugetlb_vmemmap_init(struct hstate *h);
+#else
+static inline void hugetlb_vmemmap_init(struct hstate *h)
+{
+}
+#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
+#endif /* _LINUX_HUGETLB_VMEMMAP_H */
-- 
2.11.0



* [PATCH v4 05/21] mm/hugetlb: Introduce pgtable allocation/freeing helpers
From: Muchun Song @ 2020-11-13 10:59 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

On x86_64, the vmemmap is always PMD mapped if the machine has hugepage
support and the range is 2MB contiguous and PMD aligned. If we want to
free the unused vmemmap pages, we have to split the huge PMD first, so we
pre-allocate the page tables needed to split a PMD into PTEs.
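
For scale (assuming 4K base pages, a 64-byte struct page, and a 2MB
vmemmap huge page): a 2MB HugeTLB page has 32KB (8 pages) of vmemmap, so
pgtable_pages_to_prealloc_per_hpage() below works out to
ALIGN(32KB, 2MB) >> VMEMMAP_HPAGE_SHIFT = 1 page table page preallocated
per huge page.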

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb_vmemmap.c | 73 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h | 12 +++++++++
 2 files changed, 85 insertions(+)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index a6c9948302e2..b7dfa97b4ea9 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -71,6 +71,8 @@
  */
 #define pr_fmt(fmt)	"HugeTLB Vmemmap: " fmt
 
+#include <linux/list.h>
+#include <asm/pgalloc.h>
 #include "hugetlb_vmemmap.h"
 
 /*
@@ -83,6 +85,77 @@
  */
 #define RESERVE_VMEMMAP_NR		2U
 
+#ifndef VMEMMAP_HPAGE_SHIFT
+#define VMEMMAP_HPAGE_SHIFT		HPAGE_SHIFT
+#endif
+#define VMEMMAP_HPAGE_ORDER		(VMEMMAP_HPAGE_SHIFT - PAGE_SHIFT)
+#define VMEMMAP_HPAGE_NR		(1 << VMEMMAP_HPAGE_ORDER)
+#define VMEMMAP_HPAGE_SIZE		((1UL) << VMEMMAP_HPAGE_SHIFT)
+#define VMEMMAP_HPAGE_MASK		(~(VMEMMAP_HPAGE_SIZE - 1))
+
+#define page_huge_pte(page)		((page)->pmd_huge_pte)
+
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return h->nr_free_vmemmap_pages;
+}
+
+static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return free_vmemmap_pages_per_hpage(h) + RESERVE_VMEMMAP_NR;
+}
+
+static inline unsigned long vmemmap_pages_size_per_hpage(struct hstate *h)
+{
+	return (unsigned long)vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
+}
+
+static inline unsigned int pgtable_pages_to_prealloc_per_hpage(struct hstate *h)
+{
+	unsigned long vmemmap_size = vmemmap_pages_size_per_hpage(h);
+
+	/*
+	 * No need pre-allocate page tables when there is no vmemmap pages
+	 * to free.
+	 */
+	if (!free_vmemmap_pages_per_hpage(h))
+		return 0;
+
+	return ALIGN(vmemmap_size, VMEMMAP_HPAGE_SIZE) >> VMEMMAP_HPAGE_SHIFT;
+}
+
+void vmemmap_pgtable_free(struct page *page)
+{
+	struct page *pte_page, *t_page;
+
+	list_for_each_entry_safe(pte_page, t_page, &page->lru, lru) {
+		list_del(&pte_page->lru);
+		pte_free_kernel(&init_mm, page_to_virt(pte_page));
+	}
+}
+
+int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
+{
+	unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);
+
+	/* Store preallocated pages on huge page lru list */
+	INIT_LIST_HEAD(&page->lru);
+
+	while (nr--) {
+		pte_t *pte_p;
+
+		pte_p = pte_alloc_one_kernel(&init_mm);
+		if (!pte_p)
+			goto out;
+		list_add(&virt_to_page(pte_p)->lru, &page->lru);
+	}
+
+	return 0;
+out:
+	vmemmap_pgtable_free(page);
+	return -ENOMEM;
+}
+
 void __init hugetlb_vmemmap_init(struct hstate *h)
 {
 	unsigned int order = huge_page_order(h);
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 40c0c7dfb60d..2a72d2f62411 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -9,12 +9,24 @@
 #ifndef _LINUX_HUGETLB_VMEMMAP_H
 #define _LINUX_HUGETLB_VMEMMAP_H
 #include <linux/hugetlb.h>
+#include <linux/mm.h>
 
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 void __init hugetlb_vmemmap_init(struct hstate *h);
+int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page);
+void vmemmap_pgtable_free(struct page *page);
 #else
 static inline void hugetlb_vmemmap_init(struct hstate *h)
 {
 }
+
+static inline int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
+{
+	return 0;
+}
+
+static inline void vmemmap_pgtable_free(struct page *page)
+{
+}
 #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
-- 
2.11.0



* [PATCH v4 06/21] mm/bootmem_info: Introduce {free,prepare}_vmemmap_page()
From: Muchun Song @ 2020-11-13 10:59 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

In a later patch, we will use free_vmemmap_page() to free the unused
vmemmap pages and prepare_vmemmap_page() to initialize a page for use as
a vmemmap page.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/bootmem_info.h | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
index 4ed6dee1adc9..239e3cc8f86c 100644
--- a/include/linux/bootmem_info.h
+++ b/include/linux/bootmem_info.h
@@ -3,6 +3,7 @@
 #define __LINUX_BOOTMEM_INFO_H
 
 #include <linux/mmzone.h>
+#include <linux/mm.h>
 
 /*
  * Types for free bootmem stored in page->lru.next. These have to be in
@@ -22,6 +23,29 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
 void get_page_bootmem(unsigned long info, struct page *page,
 		      unsigned long type);
 void put_page_bootmem(struct page *page);
+
+static inline void free_vmemmap_page(struct page *page)
+{
+	VM_WARN_ON(!PageReserved(page) || page_ref_count(page) != 2);
+
+	/* bootmem page has reserved flag in the reserve_bootmem_region */
+	if (PageReserved(page)) {
+		unsigned long magic = (unsigned long)page->freelist;
+
+		if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
+			put_page_bootmem(page);
+		else
+			WARN_ON(1);
+	}
+}
+
+static inline void prepare_vmemmap_page(struct page *page)
+{
+	unsigned long section_nr = pfn_to_section_nr(page_to_pfn(page));
+
+	get_page_bootmem(section_nr, page, SECTION_INFO);
+	mark_page_reserved(page);
+}
 #else
 static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
 {
-- 
2.11.0



* [PATCH v4 07/21] mm/bootmem_info: Combine bootmem info and type into page->freelist
From: Muchun Song @ 2020-11-13 10:59 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

page->private shares storage with page->ptl. In a later patch, we will
use page->ptl, so here we combine the bootmem info and type into
page->freelist so that we do not need to use page->private.
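
A sketch of the resulting encoding (with MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE
== NODE_INFO == 14, BOOTMEM_TYPE_BITS works out to ilog2(14) + 1 == 4):

   unsigned long info = section_nr;	/* e.g. a section number */
   unsigned long type = SECTION_INFO;	/* 12 */

   /* Pack both values into the single freelist word... */
   page->freelist = (void *)((info << BOOTMEM_TYPE_BITS) | type);

   /* ...and unpack them again. */
   type = (unsigned long)page->freelist & BOOTMEM_TYPE_MAX;   /* 12 */
   info = (unsigned long)page->freelist >> BOOTMEM_TYPE_BITS; /* section_nr */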

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 arch/x86/mm/init_64.c        |  2 +-
 include/linux/bootmem_info.h | 18 ++++++++++++++++--
 mm/bootmem_info.c            | 12 ++++++------
 mm/sparse.c                  |  4 ++--
 4 files changed, 25 insertions(+), 11 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0435bee2e172..9b738c6cb659 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -883,7 +883,7 @@ static void __meminit free_pagetable(struct page *page, int order)
 	if (PageReserved(page)) {
 		__ClearPageReserved(page);
 
-		magic = (unsigned long)page->freelist;
+		magic = page_bootmem_type(page);
 		if (magic == SECTION_INFO || magic == MIX_SECTION_INFO) {
 			while (nr_pages--)
 				put_page_bootmem(page++);
diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
index 239e3cc8f86c..95ae80838680 100644
--- a/include/linux/bootmem_info.h
+++ b/include/linux/bootmem_info.h
@@ -6,7 +6,7 @@
 #include <linux/mm.h>
 
 /*
- * Types for free bootmem stored in page->lru.next. These have to be in
+ * Types for free bootmem stored in page->freelist. These have to be in
  * some random range in unsigned long space for debugging purposes.
  */
 enum {
@@ -17,6 +17,20 @@ enum {
 	MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
 };
 
+#define BOOTMEM_TYPE_BITS	(ilog2(MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE) + 1)
+#define BOOTMEM_TYPE_MAX	((1UL << BOOTMEM_TYPE_BITS) - 1)
+#define BOOTMEM_INFO_MAX	(ULONG_MAX >> BOOTMEM_TYPE_BITS)
+
+static inline unsigned long page_bootmem_type(struct page *page)
+{
+	return (unsigned long)page->freelist & BOOTMEM_TYPE_MAX;
+}
+
+static inline unsigned long page_bootmem_info(struct page *page)
+{
+	return (unsigned long)page->freelist >> BOOTMEM_TYPE_BITS;
+}
+
 #ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
 void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
 
@@ -30,7 +44,7 @@ static inline void free_vmemmap_page(struct page *page)
 
 	/* bootmem page has reserved flag in the reserve_bootmem_region */
 	if (PageReserved(page)) {
-		unsigned long magic = (unsigned long)page->freelist;
+		unsigned long magic = page_bootmem_type(page);
 
 		if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
 			put_page_bootmem(page);
diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
index fcab5a3f8cc0..9baf163965fd 100644
--- a/mm/bootmem_info.c
+++ b/mm/bootmem_info.c
@@ -12,9 +12,9 @@
 
 void get_page_bootmem(unsigned long info, struct page *page, unsigned long type)
 {
-	page->freelist = (void *)type;
-	SetPagePrivate(page);
-	set_page_private(page, info);
+	BUG_ON(info > BOOTMEM_INFO_MAX);
+	BUG_ON(type > BOOTMEM_TYPE_MAX);
+	page->freelist = (void *)((info << BOOTMEM_TYPE_BITS) | type);
 	page_ref_inc(page);
 }
 
@@ -22,14 +22,12 @@ void put_page_bootmem(struct page *page)
 {
 	unsigned long type;
 
-	type = (unsigned long) page->freelist;
+	type = page_bootmem_type(page);
 	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
 	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
 
 	if (page_ref_dec_return(page) == 1) {
 		page->freelist = NULL;
-		ClearPagePrivate(page);
-		set_page_private(page, 0);
 		INIT_LIST_HEAD(&page->lru);
 		free_reserved_page(page);
 	}
@@ -101,6 +99,8 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
 	int node = pgdat->node_id;
 	struct page *page;
 
+	BUILD_BUG_ON(MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE > BOOTMEM_TYPE_MAX);
+
 	nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
 	page = virt_to_page(pgdat);
 
diff --git a/mm/sparse.c b/mm/sparse.c
index a4138410d890..fca5fa38c2bc 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -740,12 +740,12 @@ static void free_map_bootmem(struct page *memmap)
 		>> PAGE_SHIFT;
 
 	for (i = 0; i < nr_pages; i++, page++) {
-		magic = (unsigned long) page->freelist;
+		magic = page_bootmem_type(page);
 
 		BUG_ON(magic == NODE_INFO);
 
 		maps_section_nr = pfn_to_section_nr(page_to_pfn(page));
-		removing_section_nr = page_private(page);
+		removing_section_nr = page_bootmem_info(page);
 
 		/*
 		 * When this function is called, the removing section is
-- 
2.11.0



* [PATCH v4 08/21] mm/hugetlb: Initialize page table lock for vmemmap
From: Muchun Song @ 2020-11-13 10:59 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

In a later patch, we will use the vmemmap page table lock to guard the
splitting of the vmemmap PMD. So initialize the vmemmap page table lock
here.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb_vmemmap.c | 69 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 69 insertions(+)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index b7dfa97b4ea9..332c131c01a8 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -71,6 +71,8 @@
  */
 #define pr_fmt(fmt)	"HugeTLB Vmemmap: " fmt
 
+#include <linux/pagewalk.h>
+#include <linux/mmzone.h>
 #include <linux/list.h>
 #include <asm/pgalloc.h>
 #include "hugetlb_vmemmap.h"
@@ -179,3 +181,70 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	pr_debug("can free %d vmemmap pages for %s\n", h->nr_free_vmemmap_pages,
 		 h->name);
 }
+
+static int __init vmemmap_pud_entry(pud_t *pud, unsigned long addr,
+				    unsigned long next, struct mm_walk *walk)
+{
+	struct page *page = pud_page(*pud);
+
+	/*
+	 * The page->private shares storage with page->ptl. So make sure
+	 * that the PG_private is not set and initialize page->private to
+	 * zero.
+	 */
+	VM_BUG_ON_PAGE(PagePrivate(page), page);
+	set_page_private(page, 0);
+
+	BUG_ON(!pmd_ptlock_init(page));
+
+	return 0;
+}
+
+static void __init vmemmap_ptlock_init_section(unsigned long start_pfn)
+{
+	unsigned long section_nr;
+	struct mem_section *ms;
+	struct page *memmap, *memmap_end;
+	struct mm_struct *mm = &init_mm;
+
+	const struct mm_walk_ops ops = {
+		.pud_entry	= vmemmap_pud_entry,
+	};
+
+	section_nr = pfn_to_section_nr(start_pfn);
+	ms = __nr_to_section(section_nr);
+	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+	memmap_end = memmap + PAGES_PER_SECTION;
+
+	mmap_read_lock(mm);
+	BUG_ON(walk_page_range_novma(mm, (unsigned long)memmap,
+				     (unsigned long)memmap_end,
+				     &ops, NULL, NULL));
+	mmap_read_unlock(mm);
+}
+
+static void __init vmemmap_ptlock_init_node(int nid)
+{
+	unsigned long pfn, end_pfn;
+	struct pglist_data *pgdat = NODE_DATA(nid);
+
+	pfn = pgdat->node_start_pfn;
+	end_pfn = pgdat_end_pfn(pgdat);
+
+	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION)
+		vmemmap_ptlock_init_section(pfn);
+}
+
+static int __init vmemmap_ptlock_init(void)
+{
+	int nid;
+
+	if (!hugepages_supported())
+		return 0;
+
+	for_each_online_node(nid)
+		vmemmap_ptlock_init_node(nid);
+
+	return 0;
+}
+core_initcall(vmemmap_ptlock_init);
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v4 09/21] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
  2020-11-13 10:59 [PATCH v4 00/21] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (7 preceding siblings ...)
  2020-11-13 10:59 ` [PATCH v4 08/21] mm/hugetlb: Initialize page table lock for vmemmap Muchun Song
@ 2020-11-13 10:59 ` Muchun Song
  2020-11-17  9:54   ` Song Bao Hua (Barry Song)
  2020-11-13 10:59 ` [PATCH v4 10/21] mm/hugetlb: Defer freeing of hugetlb pages Muchun Song
                   ` (12 subsequent siblings)
  21 siblings, 1 reply; 49+ messages in thread
From: Muchun Song @ 2020-11-13 10:59 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

When we allocate a hugetlb page from the buddy, we should free the
unused vmemmap pages associated with it. We can do that in
prep_new_huge_page().
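
As a back-of-envelope check of the savings (assuming x86-64 with 4K
base pages and a 64-byte struct page):

	2M HugeTLB page: 512 struct pages * 64 B = 32 KB = 8 vmemmap pages
	reserved:        RESERVE_VMEMMAP_NR = 2 pages
	freed:           8 - 2 = 6 pages, i.e. 24 KB per 2M HugeTLB page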

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 arch/x86/include/asm/hugetlb.h          |   9 ++
 arch/x86/include/asm/pgtable_64_types.h |   8 ++
 mm/hugetlb.c                            |  16 +++
 mm/hugetlb_vmemmap.c                    | 188 ++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h                    |   5 +
 5 files changed, 226 insertions(+)

diff --git a/arch/x86/include/asm/hugetlb.h b/arch/x86/include/asm/hugetlb.h
index 1721b1aadeb1..c601fe042832 100644
--- a/arch/x86/include/asm/hugetlb.h
+++ b/arch/x86/include/asm/hugetlb.h
@@ -4,6 +4,15 @@
 
 #include <asm/page.h>
 #include <asm-generic/hugetlb.h>
+#include <asm/pgtable.h>
+
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+#define vmemmap_pmd_huge vmemmap_pmd_huge
+static inline bool vmemmap_pmd_huge(pmd_t *pmd)
+{
+	return pmd_large(*pmd);
+}
+#endif
 
 #define hugepages_supported() boot_cpu_has(X86_FEATURE_PSE)
 
diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
index 52e5f5f2240d..bedbd2e7d06c 100644
--- a/arch/x86/include/asm/pgtable_64_types.h
+++ b/arch/x86/include/asm/pgtable_64_types.h
@@ -139,6 +139,14 @@ extern unsigned int ptrs_per_p4d;
 # define VMEMMAP_START		__VMEMMAP_BASE_L4
 #endif /* CONFIG_DYNAMIC_MEMORY_LAYOUT */
 
+/*
+ * VMEMMAP_SIZE - allows the whole linear region to be covered by
+ *                a struct page array.
+ */
+#define VMEMMAP_SIZE		(1UL << (__VIRTUAL_MASK_SHIFT - PAGE_SHIFT - \
+					 1 + ilog2(sizeof(struct page))))
+#define VMEMMAP_END		(VMEMMAP_START + VMEMMAP_SIZE)
+
 #define VMALLOC_END		(VMALLOC_START + (VMALLOC_SIZE_TB << 40) - 1)
 
 #define MODULES_VADDR		(__START_KERNEL_map + KERNEL_IMAGE_SIZE)
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f88032c24667..a0ce6f33a717 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1499,6 +1499,14 @@ void free_huge_page(struct page *page)
 
 static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 {
+	free_huge_page_vmemmap(h, page);
+	/*
+	 * Because we store preallocated pages on @page->lru,
+	 * vmemmap_pgtable_free() must be called before the
+	 * initialization of @page->lru in INIT_LIST_HEAD().
+	 */
+	vmemmap_pgtable_free(page);
+
 	INIT_LIST_HEAD(&page->lru);
 	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
 	set_hugetlb_cgroup(page, NULL);
@@ -1751,6 +1759,14 @@ static struct page *alloc_fresh_huge_page(struct hstate *h,
 	if (!page)
 		return NULL;
 
+	if (vmemmap_pgtable_prealloc(h, page)) {
+		if (hstate_is_gigantic(h))
+			free_gigantic_page(page, huge_page_order(h));
+		else
+			put_page(page);
+		return NULL;
+	}
+
 	if (hstate_is_gigantic(h))
 		prep_compound_gigantic_page(page, huge_page_order(h));
 	prep_new_huge_page(h, page, page_to_nid(page));
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 332c131c01a8..937562a15f1e 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -74,6 +74,7 @@
 #include <linux/pagewalk.h>
 #include <linux/mmzone.h>
 #include <linux/list.h>
+#include <linux/bootmem_info.h>
 #include <asm/pgalloc.h>
 #include "hugetlb_vmemmap.h"
 
@@ -86,6 +87,8 @@
  * reserve at least 2 pages as vmemmap areas.
  */
 #define RESERVE_VMEMMAP_NR		2U
+#define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
+#define TAIL_PAGE_REUSE			-1
 
 #ifndef VMEMMAP_HPAGE_SHIFT
 #define VMEMMAP_HPAGE_SHIFT		HPAGE_SHIFT
@@ -97,6 +100,21 @@
 
 #define page_huge_pte(page)		((page)->pmd_huge_pte)
 
+#define vmemmap_hpage_addr_end(addr, end)				 \
+({									 \
+	unsigned long __boundary;					 \
+	__boundary = ((addr) + VMEMMAP_HPAGE_SIZE) & VMEMMAP_HPAGE_MASK; \
+	(__boundary - 1 < (end) - 1) ? __boundary : (end);		 \
+})
+
+#ifndef vmemmap_pmd_huge
+#define vmemmap_pmd_huge vmemmap_pmd_huge
+static inline bool vmemmap_pmd_huge(pmd_t *pmd)
+{
+	return pmd_huge(*pmd);
+}
+#endif
+
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return h->nr_free_vmemmap_pages;
@@ -158,6 +176,176 @@ int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
 	return -ENOMEM;
 }
 
+/*
+ * Walk a vmemmap address to the pmd that maps it.
+ */
+static pmd_t *vmemmap_to_pmd(unsigned long page)
+{
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+
+	if (page < VMEMMAP_START || page >= VMEMMAP_END)
+		return NULL;
+
+	pgd = pgd_offset_k(page);
+	if (pgd_none(*pgd))
+		return NULL;
+	p4d = p4d_offset(pgd, page);
+	if (p4d_none(*p4d))
+		return NULL;
+	pud = pud_offset(p4d, page);
+
+	if (pud_none(*pud) || pud_bad(*pud))
+		return NULL;
+	pmd = pmd_offset(pud, page);
+
+	return pmd;
+}
+
+static inline spinlock_t *vmemmap_pmd_lock(pmd_t *pmd)
+{
+	return pmd_lock(&init_mm, pmd);
+}
+
+static inline int freed_vmemmap_hpage(struct page *page)
+{
+	return atomic_read(&page->_mapcount) + 1;
+}
+
+static inline int freed_vmemmap_hpage_inc(struct page *page)
+{
+	return atomic_inc_return_relaxed(&page->_mapcount) + 1;
+}
+
+static inline int freed_vmemmap_hpage_dec(struct page *page)
+{
+	return atomic_dec_return_relaxed(&page->_mapcount) + 1;
+}
+
+static inline void free_vmemmap_page_list(struct list_head *list)
+{
+	struct page *page, *next;
+
+	list_for_each_entry_safe(page, next, list, lru) {
+		list_del(&page->lru);
+		free_vmemmap_page(page);
+	}
+}
+
+static void __free_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
+					 unsigned long start,
+					 unsigned long end,
+					 struct list_head *free_pages)
+{
+	/* Make sure that the tail pages are mapped read-only. */
+	pgprot_t pgprot = PAGE_KERNEL_RO;
+	pte_t entry = mk_pte(reuse, pgprot);
+	unsigned long addr;
+
+	for (addr = start; addr < end; addr += PAGE_SIZE, ptep++) {
+		struct page *page;
+		pte_t old = *ptep;
+
+		VM_WARN_ON(!pte_present(old));
+		page = pte_page(old);
+		list_add(&page->lru, free_pages);
+
+		set_pte_at(&init_mm, addr, ptep, entry);
+	}
+}
+
+static void __free_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
+					 unsigned long addr,
+					 struct list_head *free_pages)
+{
+	unsigned long next;
+	unsigned long start = addr + RESERVE_VMEMMAP_SIZE;
+	unsigned long end = addr + vmemmap_pages_size_per_hpage(h);
+	struct page *reuse = NULL;
+
+	addr = start;
+	do {
+		pte_t *ptep;
+
+		ptep = pte_offset_kernel(pmd, addr);
+		if (!reuse)
+			reuse = pte_page(ptep[TAIL_PAGE_REUSE]);
+
+		next = vmemmap_hpage_addr_end(addr, end);
+		__free_huge_page_pte_vmemmap(reuse, ptep, addr, next,
+					     free_pages);
+	} while (pmd++, addr = next, addr != end);
+
+	flush_tlb_kernel_range(start, end);
+}
+
+static void split_vmemmap_pmd(pmd_t *pmd, pte_t *pte_p, unsigned long addr)
+{
+	int i;
+	pgprot_t pgprot = PAGE_KERNEL;
+	struct mm_struct *mm = &init_mm;
+	struct page *page;
+	pmd_t old_pmd, _pmd;
+
+	old_pmd = READ_ONCE(*pmd);
+	page = pmd_page(old_pmd);
+	pmd_populate_kernel(mm, &_pmd, pte_p);
+
+	for (i = 0; i < VMEMMAP_HPAGE_NR; i++, addr += PAGE_SIZE) {
+		pte_t entry, *pte;
+
+		entry = mk_pte(page + i, pgprot);
+		pte = pte_offset_kernel(&_pmd, addr);
+		VM_BUG_ON(!pte_none(*pte));
+		set_pte_at(mm, addr, pte, entry);
+	}
+
+	/* make pte visible before pmd */
+	smp_wmb();
+	pmd_populate_kernel(mm, pmd, pte_p);
+}
+
+static void split_vmemmap_huge_page(struct page *head, pmd_t *pmd)
+{
+	struct page *pte_page, *t_page;
+	unsigned long start = (unsigned long)head & VMEMMAP_HPAGE_MASK;
+	unsigned long addr = start;
+
+	list_for_each_entry_safe(pte_page, t_page, &head->lru, lru) {
+		list_del(&pte_page->lru);
+		VM_BUG_ON(freed_vmemmap_hpage(pte_page));
+		split_vmemmap_pmd(pmd++, page_to_virt(pte_page), addr);
+		addr += VMEMMAP_HPAGE_SIZE;
+	}
+
+	flush_tlb_kernel_range(start, addr);
+}
+
+void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	pmd_t *pmd;
+	spinlock_t *ptl;
+	LIST_HEAD(free_pages);
+
+	if (!free_vmemmap_pages_per_hpage(h))
+		return;
+
+	pmd = vmemmap_to_pmd((unsigned long)head);
+	BUG_ON(!pmd);
+
+	ptl = vmemmap_pmd_lock(pmd);
+	if (vmemmap_pmd_huge(pmd))
+		split_vmemmap_huge_page(head, pmd);
+
+	__free_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &free_pages);
+	freed_vmemmap_hpage_inc(pmd_page(*pmd));
+	spin_unlock(ptl);
+
+	free_vmemmap_page_list(&free_pages);
+}
+
 void __init hugetlb_vmemmap_init(struct hstate *h)
 {
 	unsigned int order = huge_page_order(h);
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 2a72d2f62411..fb8b77659ed5 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -15,6 +15,7 @@
 void __init hugetlb_vmemmap_init(struct hstate *h);
 int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page);
 void vmemmap_pgtable_free(struct page *page);
+void free_huge_page_vmemmap(struct hstate *h, struct page *head);
 #else
 static inline void hugetlb_vmemmap_init(struct hstate *h)
 {
@@ -28,5 +29,9 @@ static inline int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
 static inline void vmemmap_pgtable_free(struct page *page)
 {
 }
+
+static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
 #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v4 10/21] mm/hugetlb: Defer freeing of hugetlb pages
  2020-11-13 10:59 [PATCH v4 00/21] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (8 preceding siblings ...)
  2020-11-13 10:59 ` [PATCH v4 09/21] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page Muchun Song
@ 2020-11-13 10:59 ` Muchun Song
  2020-11-13 10:59 ` [PATCH v4 11/21] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page Muchun Song
                   ` (11 subsequent siblings)
  21 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-13 10:59 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

In a subsequent patch, we will allocate the vmemmap pages when freeing
huge pages. But update_and_free_page() can be called from a non-task
context (while holding hugetlb_lock), so we defer the actual freeing to
a workqueue to avoid having to allocate the vmemmap pages with
GFP_ATOMIC.
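
The deferral pattern, reduced to a minimal sketch (the names below are
illustrative; the patch reuses page->mapping as the llist_node instead
of a dedicated field):

	static LLIST_HEAD(example_freelist);

	static void example_workfn(struct work_struct *work)
	{
		struct llist_node *node = llist_del_all(&example_freelist);

		while (node) {
			struct llist_node *next = node->next;

			/* ... free the object embedding @node ... */
			node = next;
		}
	}
	static DECLARE_WORK(example_work, example_workfn);

	static void example_defer(struct llist_node *node)
	{
		/* Kick the work only if the list was previously empty. */
		if (llist_add(node, &example_freelist))
			schedule_work(&example_work);
	}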

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c         | 98 +++++++++++++++++++++++++++++++++++++++++++++-------
 mm/hugetlb_vmemmap.c |  5 ---
 mm/hugetlb_vmemmap.h | 10 ++++++
 3 files changed, 96 insertions(+), 17 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a0ce6f33a717..4aabf12aca9b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1221,7 +1221,7 @@ static void destroy_compound_gigantic_page(struct page *page,
 	__ClearPageHead(page);
 }
 
-static void free_gigantic_page(struct page *page, unsigned int order)
+static void __free_gigantic_page(struct page *page, unsigned int order)
 {
 	/*
 	 * If the page isn't allocated using the cma allocator,
@@ -1288,20 +1288,100 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 {
 	return NULL;
 }
-static inline void free_gigantic_page(struct page *page, unsigned int order) { }
+static inline void __free_gigantic_page(struct page *page,
+					unsigned int order) { }
 static inline void destroy_compound_gigantic_page(struct page *page,
 						unsigned int order) { }
 #endif
 
-static void update_and_free_page(struct hstate *h, struct page *page)
+static void __free_hugepage(struct hstate *h, struct page *page);
+
+/*
+ * As update_and_free_page() can be called from a non-task context (while
+ * holding hugetlb_lock), we can defer the actual freeing to a workqueue to
+ * avoid using GFP_ATOMIC to allocate a lot of vmemmap pages.
+ *
+ * update_hpage_vmemmap_workfn() locklessly retrieves the linked list of
+ * pages to be freed and frees them one-by-one. As the page->mapping pointer
+ * is going to be cleared in update_hpage_vmemmap_workfn() anyway, it is
+ * reused as the llist_node structure of a lockless linked list of huge
+ * pages to be freed.
+ */
+static LLIST_HEAD(hpage_update_freelist);
+
+static void update_hpage_vmemmap_workfn(struct work_struct *work)
 {
-	int i;
+	struct llist_node *node;
+	struct page *page;
+
+	node = llist_del_all(&hpage_update_freelist);
+
+	while (node) {
+		page = container_of((struct address_space **)node,
+				     struct page, mapping);
+		node = node->next;
+		page->mapping = NULL;
+		__free_hugepage(page_hstate(page), page);
 
+		cond_resched();
+	}
+}
+static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn);
+
+static inline void __update_and_free_page(struct hstate *h, struct page *page)
+{
+	/* No need to allocate vmemmap pages */
+	if (!free_vmemmap_pages_per_hpage(h)) {
+		__free_hugepage(h, page);
+		return;
+	}
+
+	/*
+	 * Defer freeing to avoid using GFP_ATOMIC to allocate vmemmap
+	 * pages.
+	 *
+	 * Only call schedule_work() if hpage_update_freelist was previously
+	 * empty. Otherwise, schedule_work() has already been called and the
+	 * workfn hasn't retrieved the list yet.
+	 */
+	if (llist_add((struct llist_node *)&page->mapping,
+		      &hpage_update_freelist))
+		schedule_work(&hpage_update_work);
+}
+
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+static inline void free_gigantic_page(struct hstate *h, struct page *page)
+{
+	__free_gigantic_page(page, huge_page_order(h));
+}
+#else
+static inline void free_gigantic_page(struct hstate *h, struct page *page)
+{
+	/*
+	 * Temporarily drop the hugetlb_lock, because
+	 * we might block in __free_gigantic_page().
+	 */
+	spin_unlock(&hugetlb_lock);
+	__free_gigantic_page(page, huge_page_order(h));
+	spin_lock(&hugetlb_lock);
+}
+#endif
+
+static void update_and_free_page(struct hstate *h, struct page *page)
+{
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
 		return;
 
 	h->nr_huge_pages--;
 	h->nr_huge_pages_node[page_to_nid(page)]--;
+
+	__update_and_free_page(h, page);
+}
+
+static void __free_hugepage(struct hstate *h, struct page *page)
+{
+	int i;
+
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
 				1 << PG_referenced | 1 << PG_dirty |
@@ -1313,14 +1393,8 @@ static void update_and_free_page(struct hstate *h, struct page *page)
 	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
 	set_page_refcounted(page);
 	if (hstate_is_gigantic(h)) {
-		/*
-		 * Temporarily drop the hugetlb_lock, because
-		 * we might block in free_gigantic_page().
-		 */
-		spin_unlock(&hugetlb_lock);
 		destroy_compound_gigantic_page(page, huge_page_order(h));
-		free_gigantic_page(page, huge_page_order(h));
-		spin_lock(&hugetlb_lock);
+		free_gigantic_page(h, page);
 	} else {
 		__free_pages(page, huge_page_order(h));
 	}
@@ -1761,7 +1835,7 @@ static struct page *alloc_fresh_huge_page(struct hstate *h,
 
 	if (vmemmap_pgtable_prealloc(h, page)) {
 		if (hstate_is_gigantic(h))
-			free_gigantic_page(page, huge_page_order(h));
+			free_gigantic_page(h, page);
 		else
 			put_page(page);
 		return NULL;
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 937562a15f1e..e6fca02b57b2 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -115,11 +115,6 @@ static inline bool vmemmap_pmd_huge(pmd_t *pmd)
 }
 #endif
 
-static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
-{
-	return h->nr_free_vmemmap_pages;
-}
-
 static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return free_vmemmap_pages_per_hpage(h) + RESERVE_VMEMMAP_NR;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index fb8b77659ed5..a23fb1375859 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -16,6 +16,11 @@ void __init hugetlb_vmemmap_init(struct hstate *h);
 int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page);
 void vmemmap_pgtable_free(struct page *page);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return h->nr_free_vmemmap_pages;
+}
 #else
 static inline void hugetlb_vmemmap_init(struct hstate *h)
 {
@@ -33,5 +38,10 @@ static inline void vmemmap_pgtable_free(struct page *page)
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
+
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
 #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v4 11/21] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page
  2020-11-13 10:59 [PATCH v4 00/21] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (9 preceding siblings ...)
  2020-11-13 10:59 ` [PATCH v4 10/21] mm/hugetlb: Defer freeing of hugetlb pages Muchun Song
@ 2020-11-13 10:59 ` Muchun Song
  2020-11-13 10:59 ` [PATCH v4 12/21] mm/hugetlb: Introduce remap_huge_page_pmd_vmemmap helper Muchun Song
                   ` (10 subsequent siblings)
  21 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-13 10:59 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

When we free a hugetlb page to the buddy, we should re-allocate the
vmemmap pages associated with it. We can do that in __free_hugepage().
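
Sketched in isolation (the function name is illustrative), the
allocation step is a plain loop; __GFP_NOFAIL keeps it free of error
handling at the cost of potentially blocking:

	static void example_alloc_vmemmap(unsigned int nr, struct list_head *list)
	{
		while (nr--) {
			struct page *page = alloc_page(GFP_KERNEL | __GFP_NOFAIL);

			list_add_tail(&page->lru, list);
		}
	}

The patch additionally passes __GFP_MEMALLOC so the allocation may dip
into memory reserves, which is acceptable here because freeing the
HugeTLB page is itself a memory-freeing operation.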

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c         |   2 ++
 mm/hugetlb_vmemmap.c | 100 +++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h |   5 +++
 3 files changed, 107 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4aabf12aca9b..ba927ae7f9bd 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1382,6 +1382,8 @@ static void __free_hugepage(struct hstate *h, struct page *page)
 {
 	int i;
 
+	alloc_huge_page_vmemmap(h, page);
+
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
 				1 << PG_referenced | 1 << PG_dirty |
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index e6fca02b57b2..9918dc63c062 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -89,6 +89,8 @@
 #define RESERVE_VMEMMAP_NR		2U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 #define TAIL_PAGE_REUSE			-1
+#define GFP_VMEMMAP_PAGE		\
+	(GFP_KERNEL | __GFP_NOFAIL | __GFP_MEMALLOC)
 
 #ifndef VMEMMAP_HPAGE_SHIFT
 #define VMEMMAP_HPAGE_SHIFT		HPAGE_SHIFT
@@ -219,6 +221,104 @@ static inline int freed_vmemmap_hpage_dec(struct page *page)
 	return atomic_dec_return_relaxed(&page->_mapcount) + 1;
 }
 
+static void __remap_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
+					  unsigned long start,
+					  unsigned long end,
+					  struct list_head *remap_pages)
+{
+	pgprot_t pgprot = PAGE_KERNEL;
+	void *from = page_to_virt(reuse);
+	unsigned long addr;
+
+	for (addr = start; addr < end; addr += PAGE_SIZE) {
+		void *to;
+		struct page *page;
+		pte_t entry, old = *ptep;
+
+		page = list_first_entry_or_null(remap_pages, struct page, lru);
+		list_del(&page->lru);
+		to = page_to_virt(page);
+		copy_page(to, from);
+
+		/*
+		 * Make sure that any data written to @to is made
+		 * visible to the physical page.
+		 */
+		flush_kernel_vmap_range(to, PAGE_SIZE);
+
+		prepare_vmemmap_page(page);
+
+		entry = mk_pte(page, pgprot);
+		set_pte_at(&init_mm, addr, ptep++, entry);
+
+		VM_BUG_ON(!pte_present(old) || pte_page(old) != reuse);
+	}
+}
+
+static void __remap_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
+					  unsigned long addr,
+					  struct list_head *remap_pages)
+{
+	unsigned long next;
+	unsigned long start = addr + RESERVE_VMEMMAP_SIZE;
+	unsigned long end = addr + vmemmap_pages_size_per_hpage(h);
+	struct page *reuse = NULL;
+
+	addr = start;
+	do {
+		pte_t *ptep;
+
+		ptep = pte_offset_kernel(pmd, addr);
+		if (!reuse)
+			reuse = pte_page(ptep[TAIL_PAGE_REUSE]);
+
+		next = vmemmap_hpage_addr_end(addr, end);
+		__remap_huge_page_pte_vmemmap(reuse, ptep, addr, next,
+					      remap_pages);
+	} while (pmd++, addr = next, addr != end);
+
+	flush_tlb_kernel_range(start, end);
+}
+
+static inline void alloc_vmemmap_pages(struct hstate *h, struct list_head *list)
+{
+	int i;
+
+	for (i = 0; i < free_vmemmap_pages_per_hpage(h); i++) {
+		struct page *page;
+
+		/* This should not fail */
+		page = alloc_page(GFP_VMEMMAP_PAGE);
+		list_add_tail(&page->lru, list);
+	}
+}
+
+void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	pmd_t *pmd;
+	spinlock_t *ptl;
+	LIST_HEAD(remap_pages);
+
+	if (!free_vmemmap_pages_per_hpage(h))
+		return;
+
+	alloc_vmemmap_pages(h, &remap_pages);
+
+	pmd = vmemmap_to_pmd((unsigned long)head);
+	BUG_ON(!pmd);
+
+	ptl = vmemmap_pmd_lock(pmd);
+	__remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head,
+				      &remap_pages);
+	if (!freed_vmemmap_hpage_dec(pmd_page(*pmd))) {
+		/*
+		 * Todo:
+		 * Merge pte to huge pmd if it has ever been split.
+		 */
+	}
+	spin_unlock(ptl);
+}
+
 static inline void free_vmemmap_page_list(struct list_head *list)
 {
 	struct page *page, *next;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index a23fb1375859..a5054f310528 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -15,6 +15,7 @@
 void __init hugetlb_vmemmap_init(struct hstate *h);
 int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page);
 void vmemmap_pgtable_free(struct page *page);
+void alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
 
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
@@ -35,6 +36,10 @@ static inline void vmemmap_pgtable_free(struct page *page)
 {
 }
 
+static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
+
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v4 12/21] mm/hugetlb: Introduce remap_huge_page_pmd_vmemmap helper
  2020-11-13 10:59 [PATCH v4 00/21] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (10 preceding siblings ...)
  2020-11-13 10:59 ` [PATCH v4 11/21] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page Muchun Song
@ 2020-11-13 10:59 ` Muchun Song
  2020-11-13 10:59 ` [PATCH v4 13/21] mm/hugetlb: Use PG_slab to indicate split pmd Muchun Song
                   ` (9 subsequent siblings)
  21 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-13 10:59 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

__free_huge_page_pmd_vmemmap() and __remap_huge_page_pmd_vmemmap() are
almost the same code, so introduce the remap_huge_page_pmd_vmemmap()
helper to deduplicate them.
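
The refactoring pattern in miniature (names are illustrative; the real
helper also tracks the reuse page and flushes the TLB):

	typedef void (*pte_action_fn)(pte_t *ptep, unsigned long addr);

	static void example_walk_ptes(pmd_t *pmd, unsigned long addr,
				      unsigned long end, pte_action_fn fn)
	{
		for (; addr < end; addr += PAGE_SIZE)
			fn(pte_offset_kernel(pmd, addr), addr);
	}

Both callers then share the walk and differ only in the callback they
pass.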

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb_vmemmap.c | 108 +++++++++++++++++++++------------------------------
 1 file changed, 45 insertions(+), 63 deletions(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 9918dc63c062..ae9dbfb682ab 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -221,6 +221,47 @@ static inline int freed_vmemmap_hpage_dec(struct page *page)
 	return atomic_dec_return_relaxed(&page->_mapcount) + 1;
 }
 
+static inline void free_vmemmap_page_list(struct list_head *list)
+{
+	struct page *page, *next;
+
+	list_for_each_entry_safe(page, next, list, lru) {
+		list_del(&page->lru);
+		free_vmemmap_page(page);
+	}
+}
+
+typedef void (*remap_pte_fn)(struct page *reuse, pte_t *ptep,
+			     unsigned long start, unsigned long end,
+			     struct list_head *pages);
+
+static void remap_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
+					unsigned long addr,
+					struct list_head *pages,
+					remap_pte_fn remap_fn)
+{
+	unsigned long next;
+	unsigned long start = addr + RESERVE_VMEMMAP_SIZE;
+	unsigned long end = addr + vmemmap_pages_size_per_hpage(h);
+	struct page *reuse = NULL;
+
+	flush_cache_vunmap(start, end);
+
+	addr = start;
+	do {
+		pte_t *ptep;
+
+		ptep = pte_offset_kernel(pmd, addr);
+		if (!reuse)
+			reuse = pte_page(ptep[TAIL_PAGE_REUSE]);
+
+		next = vmemmap_hpage_addr_end(addr, end);
+		remap_fn(reuse, ptep, addr, next, pages);
+	} while (pmd++, addr = next, addr != end);
+
+	flush_tlb_kernel_range(start, end);
+}
+
 static void __remap_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
 					  unsigned long start,
 					  unsigned long end,
@@ -255,31 +296,6 @@ static void __remap_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
 	}
 }
 
-static void __remap_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
-					  unsigned long addr,
-					  struct list_head *remap_pages)
-{
-	unsigned long next;
-	unsigned long start = addr + RESERVE_VMEMMAP_SIZE;
-	unsigned long end = addr + vmemmap_pages_size_per_hpage(h);
-	struct page *reuse = NULL;
-
-	addr = start;
-	do {
-		pte_t *ptep;
-
-		ptep = pte_offset_kernel(pmd, addr);
-		if (!reuse)
-			reuse = pte_page(ptep[TAIL_PAGE_REUSE]);
-
-		next = vmemmap_hpage_addr_end(addr, end);
-		__remap_huge_page_pte_vmemmap(reuse, ptep, addr, next,
-					      remap_pages);
-	} while (pmd++, addr = next, addr != end);
-
-	flush_tlb_kernel_range(start, end);
-}
-
 static inline void alloc_vmemmap_pages(struct hstate *h, struct list_head *list)
 {
 	int i;
@@ -308,8 +324,8 @@ void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
 	BUG_ON(!pmd);
 
 	ptl = vmemmap_pmd_lock(pmd);
-	__remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head,
-				      &remap_pages);
+	remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &remap_pages,
+				    __remap_huge_page_pte_vmemmap);
 	if (!freed_vmemmap_hpage_dec(pmd_page(*pmd))) {
 		/*
 		 * Todo:
@@ -319,16 +335,6 @@ void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
 	spin_unlock(ptl);
 }
 
-static inline void free_vmemmap_page_list(struct list_head *list)
-{
-	struct page *page, *next;
-
-	list_for_each_entry_safe(page, next, list, lru) {
-		list_del(&page->lru);
-		free_vmemmap_page(page);
-	}
-}
-
 static void __free_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
 					 unsigned long start,
 					 unsigned long end,
@@ -351,31 +357,6 @@ static void __free_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
 	}
 }
 
-static void __free_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
-					 unsigned long addr,
-					 struct list_head *free_pages)
-{
-	unsigned long next;
-	unsigned long start = addr + RESERVE_VMEMMAP_SIZE;
-	unsigned long end = addr + vmemmap_pages_size_per_hpage(h);
-	struct page *reuse = NULL;
-
-	addr = start;
-	do {
-		pte_t *ptep;
-
-		ptep = pte_offset_kernel(pmd, addr);
-		if (!reuse)
-			reuse = pte_page(ptep[TAIL_PAGE_REUSE]);
-
-		next = vmemmap_hpage_addr_end(addr, end);
-		__free_huge_page_pte_vmemmap(reuse, ptep, addr, next,
-					     free_pages);
-	} while (pmd++, addr = next, addr != end);
-
-	flush_tlb_kernel_range(start, end);
-}
-
 static void split_vmemmap_pmd(pmd_t *pmd, pte_t *pte_p, unsigned long addr)
 {
 	int i;
@@ -434,7 +415,8 @@ void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 	if (vmemmap_pmd_huge(pmd))
 		split_vmemmap_huge_page(head, pmd);
 
-	__free_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &free_pages);
+	remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &free_pages,
+				    __free_huge_page_pte_vmemmap);
 	freed_vmemmap_hpage_inc(pmd_page(*pmd));
 	spin_unlock(ptl);
 
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v4 13/21] mm/hugetlb: Use PG_slab to indicate split pmd
  2020-11-13 10:59 [PATCH v4 00/21] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (11 preceding siblings ...)
  2020-11-13 10:59 ` [PATCH v4 12/21] mm/hugetlb: Introduce remap_huge_page_pmd_vmemmap helper Muchun Song
@ 2020-11-13 10:59 ` Muchun Song
  2020-11-13 10:59 ` [PATCH v4 14/21] mm/hugetlb: Support freeing vmemmap pages of gigantic page Muchun Song
                   ` (8 subsequent siblings)
  21 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-13 10:59 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

When we allocate a hugetlb page from the buddy, we may need to split a
huge PMD into PTEs. When we free the hugetlb page, we can merge the
PTEs back into a huge PMD. So we need to record whether the PMD has
ever been split. Page tables are not allocated from the slab allocator,
so we can reuse PG_slab to indicate that the PMD has been split.
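
Boiled down, the idea is the following (a sketch with illustrative
names; the patch adds the equivalent pmd_split()/set_pmd_split()/
clear_pmd_split() helpers): the page referenced by a split PMD is a
page-table page, never a slab page, so PG_slab is free to borrow.

	static inline void example_mark_pmd_split(pmd_t *pmd)
	{
		__SetPageSlab(pmd_page(*pmd));
	}

	static inline bool example_pmd_is_split(pmd_t *pmd)
	{
		return PageSlab(pmd_page(*pmd));
	}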

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb_vmemmap.c | 26 ++++++++++++++++++++++++--
 1 file changed, 24 insertions(+), 2 deletions(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index ae9dbfb682ab..58bff13a2301 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -262,6 +262,25 @@ static void remap_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
 	flush_tlb_kernel_range(start, end);
 }
 
+static inline bool pmd_split(pmd_t *pmd)
+{
+	return PageSlab(pmd_page(*pmd));
+}
+
+static inline void set_pmd_split(pmd_t *pmd)
+{
+	/*
+	 * We should not use slab for page table allocation. So we can set
+	 * PG_slab to indicate that the pmd has been split.
+	 */
+	__SetPageSlab(pmd_page(*pmd));
+}
+
+static inline void clear_pmd_split(pmd_t *pmd)
+{
+	__ClearPageSlab(pmd_page(*pmd));
+}
+
 static void __remap_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
 					  unsigned long start,
 					  unsigned long end,
@@ -326,11 +345,12 @@ void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
 	ptl = vmemmap_pmd_lock(pmd);
 	remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &remap_pages,
 				    __remap_huge_page_pte_vmemmap);
-	if (!freed_vmemmap_hpage_dec(pmd_page(*pmd))) {
+	if (!freed_vmemmap_hpage_dec(pmd_page(*pmd)) && pmd_split(pmd)) {
 		/*
 		 * Todo:
 		 * Merge pte to huge pmd if it has ever been split.
 		 */
+		clear_pmd_split(pmd);
 	}
 	spin_unlock(ptl);
 }
@@ -412,8 +432,10 @@ void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 	BUG_ON(!pmd);
 
 	ptl = vmemmap_pmd_lock(pmd);
-	if (vmemmap_pmd_huge(pmd))
+	if (vmemmap_pmd_huge(pmd)) {
 		split_vmemmap_huge_page(head, pmd);
+		set_pmd_split(pmd);
+	}
 
 	remap_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &free_pages,
 				    __free_huge_page_pte_vmemmap);
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v4 14/21] mm/hugetlb: Support freeing vmemmap pages of gigantic page
  2020-11-13 10:59 [PATCH v4 00/21] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (12 preceding siblings ...)
  2020-11-13 10:59 ` [PATCH v4 13/21] mm/hugetlb: Use PG_slab to indicate split pmd Muchun Song
@ 2020-11-13 10:59 ` Muchun Song
  2020-11-13 10:59 ` [PATCH v4 15/21] mm/hugetlb: Set the PageHWPoison to the raw error page Muchun Song
                   ` (7 subsequent siblings)
  21 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-13 10:59 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Gigantic pages are allocated from bootmem. If we want to free their
unused vmemmap pages, we also need page tables for the remapping, so we
allocate those page tables from bootmem as well.
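
A minimal sketch of the boot-time allocation (an illustrative wrapper;
the patch performs this per huge_bootmem_page entry):

	static void * __init example_prealloc_ptes(unsigned int nr_pages)
	{
		return memblock_alloc_try_nid(nr_pages << PAGE_SHIFT, PAGE_SIZE,
					      0, MEMBLOCK_ALLOC_ACCESSIBLE,
					      NUMA_NO_NODE);
	}

If the allocation fails, the patch releases the corresponding gigantic
page back to memblock rather than keeping a HugeTLB page whose vmemmap
cannot be freed.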

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/hugetlb.h |  3 +++
 mm/hugetlb.c            |  5 +++++
 mm/hugetlb_vmemmap.c    | 55 +++++++++++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h    | 13 ++++++++++++
 4 files changed, 76 insertions(+)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index eed3dd3bd626..da18fc9ed152 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -506,6 +506,9 @@ struct hstate {
 struct huge_bootmem_page {
 	struct list_head list;
 	struct hstate *hstate;
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	pte_t *vmemmap_pte;
+#endif
 };
 
 struct page *alloc_huge_page(struct vm_area_struct *vma,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ba927ae7f9bd..055604d07046 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -2607,6 +2607,7 @@ static void __init gather_bootmem_prealloc(void)
 		WARN_ON(page_count(page) != 1);
 		prep_compound_huge_page(page, h->order);
 		WARN_ON(PageReserved(page));
+		gather_vmemmap_pgtable_init(m, page);
 		prep_new_huge_page(h, page, page_to_nid(page));
 		put_page(page); /* free it into the hugepage allocator */
 
@@ -2659,6 +2660,10 @@ static void __init hugetlb_hstate_alloc_pages(struct hstate *h)
 			break;
 		cond_resched();
 	}
+
+	if (hstate_is_gigantic(h))
+		i -= gather_vmemmap_pgtable_prealloc();
+
 	if (i < h->max_huge_pages) {
 		char buf[32];
 
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 58bff13a2301..47f81e0b3832 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -75,6 +75,7 @@
 #include <linux/mmzone.h>
 #include <linux/list.h>
 #include <linux/bootmem_info.h>
+#include <linux/memblock.h>
 #include <asm/pgalloc.h>
 #include "hugetlb_vmemmap.h"
 
@@ -173,6 +174,60 @@ int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
 	return -ENOMEM;
 }
 
+unsigned long __init gather_vmemmap_pgtable_prealloc(void)
+{
+	struct huge_bootmem_page *m, *tmp;
+	unsigned long nr_free = 0;
+
+	list_for_each_entry_safe(m, tmp, &huge_boot_pages, list) {
+		struct hstate *h = m->hstate;
+		unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);
+		unsigned int pgtable_size;
+
+		if (!nr)
+			continue;
+
+		pgtable_size = nr << PAGE_SHIFT;
+		m->vmemmap_pte = memblock_alloc_try_nid(pgtable_size,
+				PAGE_SIZE, 0, MEMBLOCK_ALLOC_ACCESSIBLE,
+				NUMA_NO_NODE);
+		if (!m->vmemmap_pte) {
+			nr_free++;
+			list_del(&m->list);
+			memblock_free_early(__pa(m), huge_page_size(h));
+		}
+	}
+
+	return nr_free;
+}
+
+void __init gather_vmemmap_pgtable_init(struct huge_bootmem_page *m,
+					struct page *page)
+{
+	struct hstate *h = m->hstate;
+	unsigned long pte = (unsigned long)m->vmemmap_pte;
+	unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);
+
+	/* Store preallocated pages on huge page lru list */
+	INIT_LIST_HEAD(&page->lru);
+
+	while (nr--) {
+		struct page *pte_page = virt_to_page(pte);
+
+		__ClearPageReserved(pte_page);
+		list_add(&pte_page->lru, &page->lru);
+		pte += PAGE_SIZE;
+	}
+
+	/*
+	 * If we had gigantic hugepages allocated at boot time, we need
+	 * to restore the 'stolen' pages to totalram_pages in order to
+	 * fix confusing memory reports from free(1) and other
+	 * side effects, like CommitLimit going negative.
+	 */
+	adjust_managed_page_count(page, nr);
+}
+
 /*
  * Walk a vmemmap address to the pmd it maps.
  */
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index a5054f310528..79f330bb0714 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -15,6 +15,9 @@
 void __init hugetlb_vmemmap_init(struct hstate *h);
 int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page);
 void vmemmap_pgtable_free(struct page *page);
+unsigned long __init gather_vmemmap_pgtable_prealloc(void);
+void __init gather_vmemmap_pgtable_init(struct huge_bootmem_page *m,
+					struct page *page);
 void alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
 
@@ -36,6 +39,16 @@ static inline void vmemmap_pgtable_free(struct page *page)
 {
 }
 
+static inline unsigned long gather_vmemmap_pgtable_prealloc(void)
+{
+	return 0;
+}
+
+static inline void gather_vmemmap_pgtable_init(struct huge_bootmem_page *m,
+					       struct page *page)
+{
+}
+
 static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v4 15/21] mm/hugetlb: Set the PageHWPoison to the raw error page
  2020-11-13 10:59 [PATCH v4 00/21] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (13 preceding siblings ...)
  2020-11-13 10:59 ` [PATCH v4 14/21] mm/hugetlb: Support freeing vmemmap pages of gigantic page Muchun Song
@ 2020-11-13 10:59 ` Muchun Song
  2020-11-13 10:59 ` [PATCH v4 16/21] mm/hugetlb: Flush work when dissolving hugetlb page Muchun Song
                   ` (6 subsequent siblings)
  21 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-13 10:59 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Because we reuse the first tail page, setting PageHWPoison on a tail
page would effectively mark a whole series of pages. So we can use
head[4].private to record the index of the real error page and set
PageHWPoison on the raw error page later.
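
A worked example (the index 39 is illustrative): if subpage 39 of a 2M
HugeTLB page is the raw error page, dissolve_free_huge_page() records
head[4].private = 39; when the page is later freed back to the buddy,
__free_hugepage() moves PageHWPoison from the head to head + 39, so
all other subpages remain reusable.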

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c         | 11 +++--------
 mm/hugetlb_vmemmap.h | 39 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 42 insertions(+), 8 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 055604d07046..b853aacd5c16 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1383,6 +1383,7 @@ static void __free_hugepage(struct hstate *h, struct page *page)
 	int i;
 
 	alloc_huge_page_vmemmap(h, page);
+	subpage_hwpoison_deliver(page);
 
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
@@ -1944,14 +1945,8 @@ int dissolve_free_huge_page(struct page *page)
 		int nid = page_to_nid(head);
 		if (h->free_huge_pages - h->resv_huge_pages == 0)
 			goto out;
-		/*
-		 * Move PageHWPoison flag from head page to the raw error page,
-		 * which makes any subpages rather than the error page reusable.
-		 */
-		if (PageHWPoison(head) && page != head) {
-			SetPageHWPoison(page);
-			ClearPageHWPoison(head);
-		}
+
+		set_subpage_hwpoison(head, page);
 		list_del(&head->lru);
 		h->free_huge_pages--;
 		h->free_huge_pages_node[nid]--;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 79f330bb0714..b09fd658ce20 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -21,6 +21,29 @@ void __init gather_vmemmap_pgtable_init(struct huge_bootmem_page *m,
 void alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
 
+static inline void subpage_hwpoison_deliver(struct page *head)
+{
+	struct page *page = head;
+
+	if (PageHWPoison(head))
+		page = head + page_private(head + 4);
+
+	/*
+	 * Move PageHWPoison flag from head page to the raw error page,
+	 * which makes any subpages rather than the error page reusable.
+	 */
+	if (page != head) {
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+
+static inline void set_subpage_hwpoison(struct page *head, struct page *page)
+{
+	if (PageHWPoison(head))
+		set_page_private(head + 4, page - head);
+}
+
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return h->nr_free_vmemmap_pages;
@@ -57,6 +80,22 @@ static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
 
+static inline void subpage_hwpoison_deliver(struct page *head)
+{
+}
+
+static inline void set_subpage_hwpoison(struct page *head, struct page *page)
+{
+	/*
+	 * Move PageHWPoison flag from head page to the raw error page,
+	 * which makes any subpages rather than the error page reusable.
+	 */
+	if (PageHWPoison(head) && page != head) {
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return 0;
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v4 16/21] mm/hugetlb: Flush work when dissolving hugetlb page
  2020-11-13 10:59 [PATCH v4 00/21] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (14 preceding siblings ...)
  2020-11-13 10:59 ` [PATCH v4 15/21] mm/hugetlb: Set the PageHWPoison to the raw error page Muchun Song
@ 2020-11-13 10:59 ` Muchun Song
  2020-11-13 10:59 ` [PATCH v4 17/21] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap Muchun Song
                   ` (5 subsequent siblings)
  21 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-13 10:59 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

We should flush the work when dissolving a hugetlb page to make sure
that the hugetlb page has actually been freed to the buddy.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b853aacd5c16..9aad0b63d369 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1328,6 +1328,12 @@ static void update_hpage_vmemmap_workfn(struct work_struct *work)
 }
 static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn);
 
+static inline void flush_hpage_update_work(struct hstate *h)
+{
+	if (free_vmemmap_pages_per_hpage(h))
+		flush_work(&hpage_update_work);
+}
+
 static inline void __update_and_free_page(struct hstate *h, struct page *page)
 {
 	/* No need to allocate vmemmap pages */
@@ -1928,6 +1934,7 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
 int dissolve_free_huge_page(struct page *page)
 {
 	int rc = -EBUSY;
+	struct hstate *h = NULL;
 
 	/* Not to disrupt normal path by vainly holding hugetlb_lock */
 	if (!PageHuge(page))
@@ -1941,8 +1948,9 @@ int dissolve_free_huge_page(struct page *page)
 
 	if (!page_count(page)) {
 		struct page *head = compound_head(page);
-		struct hstate *h = page_hstate(head);
 		int nid = page_to_nid(head);
+
+		h = page_hstate(head);
 		if (h->free_huge_pages - h->resv_huge_pages == 0)
 			goto out;
 
@@ -1956,6 +1964,14 @@ int dissolve_free_huge_page(struct page *page)
 	}
 out:
 	spin_unlock(&hugetlb_lock);
+
+	/*
+	 * We should flush the work before returning to make sure
+	 * that the HugeTLB page has been freed to the buddy.
+	 */
+	if (!rc && h)
+		flush_hpage_update_work(h);
+
 	return rc;
 }
 
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v4 17/21] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
  2020-11-13 10:59 [PATCH v4 00/21] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (15 preceding siblings ...)
  2020-11-13 10:59 ` [PATCH v4 16/21] mm/hugetlb: Flush work when dissolving hugetlb page Muchun Song
@ 2020-11-13 10:59 ` Muchun Song
  2020-11-13 10:59 ` [PATCH v4 18/21] mm/hugetlb: Merge pte to huge pmd only for gigantic page Muchun Song
                   ` (4 subsequent siblings)
  21 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-13 10:59 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Add a kernel parameter hugetlb_free_vmemmap to disable the feature of
freeing unused vmemmap pages associated with each hugetlb page on boot.
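
For example, booting with the feature disabled only requires appending
the parameter to the kernel command line:

	hugetlb_free_vmemmap=off

Omitting the parameter (or passing "on") leaves the feature enabled.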

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 Documentation/admin-guide/kernel-parameters.txt |  9 +++++++++
 Documentation/admin-guide/mm/hugetlbpage.rst    |  3 +++
 mm/hugetlb_vmemmap.c                            | 22 ++++++++++++++++++++++
 3 files changed, 34 insertions(+)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 5debfe238027..ccf07293cb63 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1551,6 +1551,15 @@
 			Documentation/admin-guide/mm/hugetlbpage.rst.
 			Format: size[KMG]
 
+	hugetlb_free_vmemmap=
+			[KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
+			this controls freeing unused vmemmap pages associated
+			with each HugeTLB page.
+			Format: { on (default) | off }
+
+			on:  enable the feature
+			off: disable the feature
+
 	hung_task_panic=
 			[KNL] Should the hung task detector generate panics.
 			Format: 0 | 1
diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index f7b1c7462991..7d6129ee97dd 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -145,6 +145,9 @@ default_hugepagesz
 
 	will all result in 256 2M huge pages being allocated.  Valid default
 	huge page size is architecture dependent.
+hugetlb_free_vmemmap
+	When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this controls freeing
+	of unused vmemmap pages associated with each HugeTLB page.
 
 When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
 indicates the current number of pre-allocated huge pages of the default size.
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 47f81e0b3832..1528b156920c 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -118,6 +118,22 @@ static inline bool vmemmap_pmd_huge(pmd_t *pmd)
 }
 #endif
 
+static bool hugetlb_free_vmemmap_disabled __initdata;
+
+static int __init early_hugetlb_free_vmemmap_param(char *buf)
+{
+	if (!buf)
+		return -EINVAL;
+
+	if (!strcmp(buf, "off"))
+		hugetlb_free_vmemmap_disabled = true;
+	else if (strcmp(buf, "on"))
+		return -EINVAL;
+
+	return 0;
+}
+early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
+
 static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return free_vmemmap_pages_per_hpage(h) + RESERVE_VMEMMAP_NR;
@@ -505,6 +521,12 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	unsigned int order = huge_page_order(h);
 	unsigned int vmemmap_pages;
 
+	if (hugetlb_free_vmemmap_disabled) {
+		h->nr_free_vmemmap_pages = 0;
+		pr_info("disable free vmemmap pages for %s\n", h->name);
+		return;
+	}
+
 	vmemmap_pages = ((1 << order) * sizeof(struct page)) >> PAGE_SHIFT;
 	/*
 	 * The head page and the first tail page are not to be freed to buddy
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v4 18/21] mm/hugetlb: Merge pte to huge pmd only for gigantic page
  2020-11-13 10:59 [PATCH v4 00/21] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (16 preceding siblings ...)
  2020-11-13 10:59 ` [PATCH v4 17/21] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap Muchun Song
@ 2020-11-13 10:59 ` Muchun Song
  2020-11-13 10:59 ` [PATCH v4 19/21] mm/hugetlb: Gather discrete indexes of tail page Muchun Song
                   ` (3 subsequent siblings)
  21 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-13 10:59 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Merge PTEs back into a huge PMD if the PMD has ever been split. For
now, only gigantic pages whose vmemmap size is an integer multiple of
PMD_SIZE are supported; this is the simplest case to handle.
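
A worked example of the alignment condition (assuming x86-64 with 4K
base pages, a 64-byte struct page and VMEMMAP_HPAGE_NR == 512): a 1G
gigantic page has 262144 struct pages, i.e. 16 MB of vmemmap, i.e.
4096 base pages. 4096 is an exact multiple of 512, so the IS_ALIGNED()
check added below passes and the merge path runs.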

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 arch/x86/include/asm/hugetlb.h |   8 +++
 mm/hugetlb_vmemmap.c           | 118 ++++++++++++++++++++++++++++++++++++++++-
 2 files changed, 124 insertions(+), 2 deletions(-)

diff --git a/arch/x86/include/asm/hugetlb.h b/arch/x86/include/asm/hugetlb.h
index c601fe042832..1de1c519a84a 100644
--- a/arch/x86/include/asm/hugetlb.h
+++ b/arch/x86/include/asm/hugetlb.h
@@ -12,6 +12,14 @@ static inline bool vmemmap_pmd_huge(pmd_t *pmd)
 {
 	return pmd_large(*pmd);
 }
+
+#define vmemmap_pmd_mkhuge vmemmap_pmd_mkhuge
+static inline pmd_t vmemmap_pmd_mkhuge(struct page *page)
+{
+	pte_t entry = pfn_pte(page_to_pfn(page), PAGE_KERNEL_LARGE);
+
+	return __pmd(pte_val(entry));
+}
 #endif
 
 #define hugepages_supported() boot_cpu_has(X86_FEATURE_PSE)
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 1528b156920c..5c00826a98b3 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -118,6 +118,14 @@ static inline bool vmemmap_pmd_huge(pmd_t *pmd)
 }
 #endif
 
+#ifndef vmemmap_pmd_mkhuge
+#define vmemmap_pmd_mkhuge vmemmap_pmd_mkhuge
+static inline pmd_t vmemmap_pmd_mkhuge(struct page *page)
+{
+	return pmd_mkhuge(mk_pmd(page, PAGE_KERNEL));
+}
+#endif
+
 static bool hugetlb_free_vmemmap_disabled __initdata;
 
 static int __init early_hugetlb_free_vmemmap_param(char *buf)
@@ -386,6 +394,104 @@ static void __remap_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
 	}
 }
 
+static void __replace_huge_page_pte_vmemmap(pte_t *ptep, unsigned long start,
+					    unsigned int nr, struct page *huge,
+					    struct list_head *free_pages)
+{
+	unsigned long addr;
+	unsigned long end = start + (nr << PAGE_SHIFT);
+	pgprot_t pgprot = PAGE_KERNEL;
+
+	for (addr = start; addr < end; addr += PAGE_SIZE, ptep++) {
+		struct page *page;
+		pte_t old = *ptep;
+		pte_t entry;
+
+		prepare_vmemmap_page(huge);
+
+		entry = mk_pte(huge++, pgprot);
+		VM_WARN_ON(!pte_present(old));
+		page = pte_page(old);
+		list_add(&page->lru, free_pages);
+
+		set_pte_at(&init_mm, addr, ptep, entry);
+	}
+}
+
+static void replace_huge_page_pmd_vmemmap(pmd_t *pmd, unsigned long start,
+					  struct page *huge,
+					  struct list_head *free_pages)
+{
+	unsigned long end = start + VMEMMAP_HPAGE_SIZE;
+
+	flush_cache_vunmap(start, end);
+	__replace_huge_page_pte_vmemmap(pte_offset_kernel(pmd, start), start,
+					VMEMMAP_HPAGE_NR, huge, free_pages);
+	flush_tlb_kernel_range(start, end);
+}
+
+static pte_t *merge_vmemmap_pte(pmd_t *pmdp, unsigned long addr)
+{
+	pte_t *pte;
+	struct page *page;
+
+	pte = pte_offset_kernel(pmdp, addr);
+	page = pte_page(*pte);
+	set_pmd(pmdp, vmemmap_pmd_mkhuge(page));
+
+	return pte;
+}
+
+static void merge_huge_page_pmd_vmemmap(pmd_t *pmd, unsigned long start,
+					struct page *huge,
+					struct list_head *free_pages)
+{
+	replace_huge_page_pmd_vmemmap(pmd, start, huge, free_pages);
+	pte_free_kernel(&init_mm, merge_vmemmap_pte(pmd, start));
+	flush_tlb_kernel_range(start, start + VMEMMAP_HPAGE_SIZE);
+}
+
+static inline void dissolve_compound_page(struct page *page, unsigned int order)
+{
+	int i;
+	unsigned int nr_pages = 1 << order;
+
+	for (i = 1; i < nr_pages; i++)
+		set_page_count(page + i, 1);
+}
+
+static void merge_gigantic_page_vmemmap(struct hstate *h, struct page *head,
+					pmd_t *pmd)
+{
+	LIST_HEAD(free_pages);
+	unsigned long addr = (unsigned long)head;
+	unsigned long end = addr + vmemmap_pages_size_per_hpage(h);
+
+	for (; addr < end; addr += VMEMMAP_HPAGE_SIZE) {
+		void *to;
+		struct page *page;
+
+		page = alloc_pages(GFP_VMEMMAP_PAGE & ~__GFP_NOFAIL,
+				   VMEMMAP_HPAGE_ORDER);
+		if (!page)
+			goto out;
+
+		dissolve_compound_page(page, VMEMMAP_HPAGE_ORDER);
+		to = page_to_virt(page);
+		memcpy(to, (void *)addr, VMEMMAP_HPAGE_SIZE);
+
+		/*
+		 * Make sure that any data that writes to the
+		 * @to is made visible to the physical page.
+		 */
+		flush_kernel_vmap_range(to, VMEMMAP_HPAGE_SIZE);
+
+		merge_huge_page_pmd_vmemmap(pmd++, addr, page, &free_pages);
+	}
+out:
+	free_vmemmap_page_list(&free_pages);
+}
+
 static inline void alloc_vmemmap_pages(struct hstate *h, struct list_head *list)
 {
 	int i;
@@ -418,10 +524,18 @@ void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
 				    __remap_huge_page_pte_vmemmap);
 	if (!freed_vmemmap_hpage_dec(pmd_page(*pmd)) && pmd_split(pmd)) {
 		/*
-		 * Todo:
-		 * Merge pte to huge pmd if it has ever been split.
+		 * Merge PTEs back into a huge PMD if it has ever been
+		 * split. For now only gigantic pages whose vmemmap size
+		 * is an integer multiple of PMD_SIZE are supported; this
+		 * is the simplest case to handle.
 		 */
 		clear_pmd_split(pmd);
+
+		if (IS_ALIGNED(vmemmap_pages_per_hpage(h), VMEMMAP_HPAGE_NR)) {
+			spin_unlock(ptl);
+			merge_gigantic_page_vmemmap(h, head, pmd);
+			return;
+		}
 	}
 	spin_unlock(ptl);
 }
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v4 19/21] mm/hugetlb: Gather discrete indexes of tail page
  2020-11-13 10:59 [PATCH v4 00/21] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (17 preceding siblings ...)
  2020-11-13 10:59 ` [PATCH v4 18/21] mm/hugetlb: Merge pte to huge pmd only for gigantic page Muchun Song
@ 2020-11-13 10:59 ` Muchun Song
  2020-11-13 10:59 ` [PATCH v4 20/21] mm/hugetlb: Add BUILD_BUG_ON to catch invalid usage of tail struct page Muchun Song
                   ` (2 subsequent siblings)
  21 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-13 10:59 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

For a HugeTLB page, there is more metadata to save than the head
struct page alone can hold, so we have to use other tail struct
pages to store it. In order to avoid conflicts caused by subsequent
use of more tail struct pages, gather these discrete tail page
indexes into a single enum. This way, it will be easier to add a
new tail page index later.
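
A minimal sketch of what the change buys (the "before"/"after" lines
mirror the hugetlb_cgroup hunks below; SUBPAGE_INDEX_CGROUP comes from
the enum added to include/linux/hugetlb.h):

	/* Before: magic tail-page offsets scattered across callers. */
	h_cg = (struct hugetlb_cgroup *)page[2].private;

	/* After: the index has a name, so adding another consumer means
	 * extending the enum rather than auditing every literal. */
	h_cg = (void *)page_private(page + SUBPAGE_INDEX_CGROUP);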

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/hugetlb.h        | 13 +++++++++++++
 include/linux/hugetlb_cgroup.h | 15 +++++++++------
 mm/hugetlb.c                   | 12 ++++++------
 mm/hugetlb_vmemmap.h           |  4 ++--
 4 files changed, 30 insertions(+), 14 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index da18fc9ed152..fa9d38a3ac6f 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -28,6 +28,19 @@ typedef struct { unsigned long pd; } hugepd_t;
 #include <linux/shm.h>
 #include <asm/tlbflush.h>
 
+enum {
+	SUBPAGE_INDEX_ACTIVE = 1,	/* reuse page flags of PG_private */
+	SUBPAGE_INDEX_TEMPORARY,	/* reuse page->mapping */
+#ifdef CONFIG_CGROUP_HUGETLB
+	SUBPAGE_INDEX_CGROUP = SUBPAGE_INDEX_TEMPORARY,/* reuse page->private */
+	SUBPAGE_INDEX_CGROUP_RSVD,	/* reuse page->private */
+#endif
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	SUBPAGE_INDEX_HWPOISON,		/* reuse page->private */
+#endif
+	NR_USED_SUBPAGE,
+};
+
 struct hugepage_subpool {
 	spinlock_t lock;
 	long count;
diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index 2ad6e92f124a..3d3c1c49efe4 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -24,8 +24,9 @@ struct file_region;
 /*
  * Minimum page order trackable by hugetlb cgroup.
  * At least 4 pages are necessary for all the tracking information.
- * The second tail page (hpage[2]) is the fault usage cgroup.
- * The third tail page (hpage[3]) is the reservation usage cgroup.
+ * The second tail page (hpage[SUBPAGE_INDEX_CGROUP]) is the fault
+ * usage cgroup. The third tail page (hpage[SUBPAGE_INDEX_CGROUP_RSVD])
+ * is the reservation usage cgroup.
  */
 #define HUGETLB_CGROUP_MIN_ORDER	2
 
@@ -66,9 +67,9 @@ __hugetlb_cgroup_from_page(struct page *page, bool rsvd)
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return NULL;
 	if (rsvd)
-		return (struct hugetlb_cgroup *)page[3].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP_RSVD);
 	else
-		return (struct hugetlb_cgroup *)page[2].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP);
 }
 
 static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
@@ -90,9 +91,11 @@ static inline int __set_hugetlb_cgroup(struct page *page,
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return -1;
 	if (rsvd)
-		page[3].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP_RSVD,
+				 (unsigned long)h_cg);
 	else
-		page[2].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP,
+				 (unsigned long)h_cg);
 	return 0;
 }
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 9aad0b63d369..dfa982f4b525 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1429,20 +1429,20 @@ struct hstate *size_to_hstate(unsigned long size)
 bool page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHuge(page), page);
-	return PageHead(page) && PagePrivate(&page[1]);
+	return PageHead(page) && PagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /* never called for tail page */
 static void set_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	SetPagePrivate(&page[1]);
+	SetPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 static void clear_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	ClearPagePrivate(&page[1]);
+	ClearPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /*
@@ -1454,17 +1454,17 @@ static inline bool PageHugeTemporary(struct page *page)
 	if (!PageHuge(page))
 		return false;
 
-	return (unsigned long)page[2].mapping == -1U;
+	return (unsigned long)page[SUBPAGE_INDEX_TEMPORARY].mapping == -1U;
 }
 
 static inline void SetPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = (void *)-1U;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = (void *)-1U;
 }
 
 static inline void ClearPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = NULL;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = NULL;
 }
 
 static void __free_huge_page(struct page *page)
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index b09fd658ce20..86d80c7f1dc7 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -26,7 +26,7 @@ static inline void subpage_hwpoison_deliver(struct page *head)
 	struct page *page = head;
 
 	if (PageHWPoison(head))
-		page = head + page_private(head + 4);
+		page = head + page_private(head + SUBPAGE_INDEX_HWPOISON);
 
 	/*
 	 * Move PageHWPoison flag from head page to the raw error page,
@@ -41,7 +41,7 @@ static inline void subpage_hwpoison_deliver(struct page *head)
 static inline void set_subpage_hwpoison(struct page *head, struct page *page)
 {
 	if (PageHWPoison(head))
-		set_page_private(head + 4, page - head);
+		set_page_private(head + SUBPAGE_INDEX_HWPOISON, page - head);
 }
 
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v4 20/21] mm/hugetlb: Add BUILD_BUG_ON to catch invalid usage of tail struct page
  2020-11-13 10:59 [PATCH v4 00/21] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (18 preceding siblings ...)
  2020-11-13 10:59 ` [PATCH v4 19/21] mm/hugetlb: Gather discrete indexes of tail page Muchun Song
@ 2020-11-13 10:59 ` Muchun Song
  2020-11-13 10:59 ` [PATCH v4 21/21] mm/hugetlb: Disable freeing vmemmap if struct page size is not power of two Muchun Song
  2020-11-17 10:15 ` [PATCH v4 00/21] Free some vmemmap pages of hugetlb page Song Bao Hua (Barry Song)
  21 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-13 10:59 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Only `RESERVE_VMEMMAP_SIZE / sizeof(struct page)` struct pages can be
used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is enabled, so add a
BUILD_BUG_ON to catch invalid usage of the tail struct pages.
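
A quick sanity check of the bound (illustrative numbers only, assuming
4K base pages and a 64-byte struct page on x86-64):

	RESERVE_VMEMMAP_SIZE = RESERVE_VMEMMAP_NR << PAGE_SHIFT
	                     = 2 << 12 = 8192 bytes
	8192 / sizeof(struct page) = 8192 / 64 = 128 usable struct pages

so the BUILD_BUG_ON below only fires if NR_USED_SUBPAGE ever grows past
the struct pages backed by the reserved vmemmap area.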

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb_vmemmap.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 5c00826a98b3..f67aec6e3bb1 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -717,6 +717,9 @@ static int __init vmemmap_ptlock_init(void)
 {
 	int nid;
 
+	BUILD_BUG_ON(NR_USED_SUBPAGE >=
+		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
+
 	if (!hugepages_supported())
 		return 0;
 
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* [PATCH v4 21/21] mm/hugetlb: Disable freeing vmemmap if struct page size is not power of two
  2020-11-13 10:59 [PATCH v4 00/21] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (19 preceding siblings ...)
  2020-11-13 10:59 ` [PATCH v4 20/21] mm/hugetlb: Add BUILD_BUG_ON to catch invalid usage of tail struct page Muchun Song
@ 2020-11-13 10:59 ` Muchun Song
  2020-11-17 10:15 ` [PATCH v4 00/21] Free some vmemmap pages of hugetlb page Song Bao Hua (Barry Song)
  21 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-13 10:59 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

We can only free the unused vmemmap pages to the buddy system when
the size of struct page is a power of two.
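
One way to see why (a sketch with assumed numbers; is_power_of_2() is
the stock helper from <linux/log2.h>):

	/*
	 * A power-of-two sizeof(struct page) (e.g. 64) divides PAGE_SIZE
	 * evenly, so no struct page straddles a page boundary. If it were
	 * e.g. 56 bytes, remapping an individual vmemmap page would
	 * corrupt the struct pages spanning its edges, so the feature
	 * must stay disabled.
	 */
	if (!is_power_of_2(sizeof(struct page)))
		return;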

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb_vmemmap.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index f67aec6e3bb1..a0a5df9dba6b 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -635,7 +635,8 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	unsigned int order = huge_page_order(h);
 	unsigned int vmemmap_pages;
 
-	if (hugetlb_free_vmemmap_disabled) {
+	if (hugetlb_free_vmemmap_disabled ||
+	    !is_power_of_2(sizeof(struct page))) {
 		h->nr_free_vmemmap_pages = 0;
 		pr_info("disable free vmemmap pages for %s\n", h->name);
 		return;
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 49+ messages in thread

* Re: [PATCH v4 04/21] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  2020-11-13 10:59 ` [PATCH v4 04/21] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
@ 2020-11-16 13:33   ` Oscar Salvador
  2020-11-16 15:40     ` [External] " Muchun Song
  2020-11-18 22:54     ` Mike Kravetz
  2020-11-18 23:48   ` Mike Kravetz
  1 sibling, 2 replies; 49+ messages in thread
From: Oscar Salvador @ 2020-11-16 13:33 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel

On Fri, Nov 13, 2020 at 06:59:35PM +0800, Muchun Song wrote:
> If the size of HugeTLB page is 2MB, we need 512 struct page structures
> (8 pages) to be associated with it. As far as I know, we only use the
> first 4 struct page structures. Use of first 4 struct page structures
> comes from HUGETLB_CGROUP_MIN_ORDER.

Once you mention the 2MB HugeTLB page and its specifics, I would also
mention 1GB HugeTLB pages, maybe something along those lines.
I would suppress "As far as I know"; we __know__ that we only use
the first 4 struct page structures to track metadata information.

> +/*
> + * There are 512 struct page structures(8 pages) associated with each 2MB
> + * hugetlb page. For tail pages, the value of compound_head is the same.
> + * So we can reuse first page of tail page structures. We map the virtual
> + * addresses of the remaining 6 pages of tail page structures to the first
> + * tail page struct, and then free these 6 pages. Therefore, we need to
> + * reserve at least 2 pages as vmemmap areas.
> + */
> +#define RESERVE_VMEMMAP_NR		2U

Either I would include the 1GB specifics there as well, or I would not add
any specifics at all and just say that the first two pages are used, and
the rest can be remapped to the first page that contains the tails.
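
Worked numbers for the 2MB case, assuming 4K base pages and a 64-byte
struct page as elsewhere in this series:

	512 * 64 / 4096 = 8 vmemmap pages per 2MB hugepage
	vmemmap pages 0-1 stay resident (RESERVE_VMEMMAP_NR = 2)
	vmemmap pages 2-7 are remapped to page 1 and freed
	=> 6 pages, i.e. 24KB, returned to buddy per 2MB hugepage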


> +void __init hugetlb_vmemmap_init(struct hstate *h)
> +{
> +	unsigned int order = huge_page_order(h);
> +	unsigned int vmemmap_pages;
> +
> +	vmemmap_pages = ((1 << order) * sizeof(struct page)) >> PAGE_SHIFT;
> +	/*
> +	 * The head page and the first tail page are not to be freed to buddy
> +	 * system, the others page will map to the first tail page. So there
"the remaining pages" might be more clear.

> +	 * are (@vmemmap_pages - RESERVE_VMEMMAP_NR) pages can be freed.
"that can be freed"

> +	 *
> +	 * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? This is
> +	 * not expected to happen unless the system is corrupted. So on the
> +	 * safe side, it is only a safety net.
> +	 */
> +	if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR))
> +		h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
> +	else
> +		h->nr_free_vmemmap_pages = 0;

This made me think of something.
Since struct hstate hstates[] is global, all of its fields are already
zero-initialized.
So, the following assignments in hugetlb_add_hstate:

        h->nr_huge_pages = 0;
        h->free_huge_pages = 0;

should not be needed.
Actually, we do not initialize other values like resv_huge_pages
or surplus_huge_pages.

If that is the case, the "else" could go.
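
For illustration only: because hstates[] has static storage duration, C
guarantees zero-initialization, so the simplified form below would
behave identically:

	if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR))
		h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
	/* no "else" needed: the field is already 0 in .bss */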

Mike?

The changes themselves look good to me.
I think that putting all the vmemmap stuff into hugetlb_vmemmap.* was
the right choice.


-- 
Oscar Salvador
SUSE L3

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v4 01/21] mm/memory_hotplug: Move bootmem info registration API to bootmem_info.c
  2020-11-13 10:59 ` [PATCH v4 01/21] mm/memory_hotplug: Move bootmem info registration API to bootmem_info.c Muchun Song
@ 2020-11-16 13:50   ` Oscar Salvador
  0 siblings, 0 replies; 49+ messages in thread
From: Oscar Salvador @ 2020-11-16 13:50 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel

On Fri, Nov 13, 2020 at 06:59:32PM +0800, Muchun Song wrote:
> Move bootmem info registration common API to individual bootmem_info.c
> for later patch use. This is just code movement without any functional
> change.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> Acked-by: Mike Kravetz <mike.kravetz@oracle.com>

Reviewed-by: Oscar Salvador <osalvador@suse.de>

-- 
Oscar Salvador
SUSE L3

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v4 02/21] mm/memory_hotplug: Move {get,put}_page_bootmem() to bootmem_info.c
  2020-11-13 10:59 ` [PATCH v4 02/21] mm/memory_hotplug: Move {get,put}_page_bootmem() " Muchun Song
@ 2020-11-16 13:52   ` Oscar Salvador
  0 siblings, 0 replies; 49+ messages in thread
From: Oscar Salvador @ 2020-11-16 13:52 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel

On Fri, Nov 13, 2020 at 06:59:33PM +0800, Muchun Song wrote:
> In the later patch, we will use {get,put}_page_bootmem() to initialize
> the page for vmemmap or free vmemmap page to buddy. So move them out of
> CONFIG_MEMORY_HOTPLUG_SPARSE. This is just code movement without any
> functional change.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> Acked-by: Mike Kravetz <mike.kravetz@oracle.com>

The change itself makes sense.
I tried to check the possibles scenarios with different CONFIG_ options
toggled, but I will trust you that you ran the checks :-)

Reviewed-by: Oscar Salvador <osalvador@suse.de>


-- 
Oscar Salvador
SUSE L3

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [External] Re: [PATCH v4 04/21] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  2020-11-16 13:33   ` Oscar Salvador
@ 2020-11-16 15:40     ` Muchun Song
  2020-11-18 22:54     ` Mike Kravetz
  1 sibling, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-16 15:40 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko, Xiongchun duan,
	linux-doc, LKML, Linux Memory Management List, linux-fsdevel

On Mon, Nov 16, 2020 at 9:33 PM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Fri, Nov 13, 2020 at 06:59:35PM +0800, Muchun Song wrote:
> > If the size of HugeTLB page is 2MB, we need 512 struct page structures
> > (8 pages) to be associated with it. As far as I know, we only use the
> > first 4 struct page structures. Use of first 4 struct page structures
> > comes from HUGETLB_CGROUP_MIN_ORDER.
>
> Once you mention the 2MB HugeTLB page and its specifics, I would also
> mention 1GB HugeTLB pages, maybe something along those lines.
> I would suppress "As far as I know"; we __know__ that we only use
> the first 4 struct page structures to track metadata information.

Thanks. Will do.

>
> > +/*
> > + * There are 512 struct page structures(8 pages) associated with each 2MB
> > + * hugetlb page. For tail pages, the value of compound_head is the same.
> > + * So we can reuse first page of tail page structures. We map the virtual
> > + * addresses of the remaining 6 pages of tail page structures to the first
> > + * tail page struct, and then free these 6 pages. Therefore, we need to
> > + * reserve at least 2 pages as vmemmap areas.
> > + */
> > +#define RESERVE_VMEMMAP_NR           2U
>
> Either I would include the 1GB specifics there as well, or I would not add
> any specifics at all and just say that the first two pages are used, and
> the rest can be remapped to the first page that contains the tails.

Thanks. Will do.

>
>
> > +void __init hugetlb_vmemmap_init(struct hstate *h)
> > +{
> > +     unsigned int order = huge_page_order(h);
> > +     unsigned int vmemmap_pages;
> > +
> > +     vmemmap_pages = ((1 << order) * sizeof(struct page)) >> PAGE_SHIFT;
> > +     /*
> > +      * The head page and the first tail page are not to be freed to buddy
> > +      * system, the others page will map to the first tail page. So there
> "the remaining pages" might be more clear.

Thanks.

>
> > +      * are (@vmemmap_pages - RESERVE_VMEMMAP_NR) pages can be freed.
> "that can be freed"

Thanks.

>
> > +      *
> > +      * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? This is
> > +      * not expected to happen unless the system is corrupted. So on the
> > +      * safe side, it is only a safety net.
> > +      */
> > +     if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR))
> > +             h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
> > +     else
> > +             h->nr_free_vmemmap_pages = 0;
>
> This made me think of something.
> Since struct hstate hstates[] is global, all of its fields are already
> zero-initialized.
> So, the following assignments in hugetlb_add_hstate:
>
>         h->nr_huge_pages = 0;
>         h->free_huge_pages = 0;
>
> should not be needed.
> Actually, we do not initialize other values like resv_huge_pages
> or surplus_huge_pages.
>
> If that is the case, the "else" could go.

Yeah, I agree with you.

>
> Mike?
>
> The changes itself look good to me.
> I think that putting all the vemmap stuff into hugetlb-vmemmap.* was
> the right choice.
>
>
> --
> Oscar Salvador
> SUSE L3



-- 
Yours,
Muchun

^ permalink raw reply	[flat|nested] 49+ messages in thread

* RE: [PATCH v4 09/21] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
  2020-11-13 10:59 ` [PATCH v4 09/21] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page Muchun Song
@ 2020-11-17  9:54   ` Song Bao Hua (Barry Song)
  2020-11-17 10:26     ` [External] " Muchun Song
  0 siblings, 1 reply; 49+ messages in thread
From: Song Bao Hua (Barry Song) @ 2020-11-17  9:54 UTC (permalink / raw)
  To: Muchun Song, corbet, mike.kravetz, tglx, mingo, bp, x86, hpa,
	dave.hansen, luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel



> -----Original Message-----
> From: owner-linux-mm@kvack.org [mailto:owner-linux-mm@kvack.org] On
> Behalf Of Muchun Song
> Sent: Saturday, November 14, 2020 12:00 AM
> To: corbet@lwn.net; mike.kravetz@oracle.com; tglx@linutronix.de;
> mingo@redhat.com; bp@alien8.de; x86@kernel.org; hpa@zytor.com;
> dave.hansen@linux.intel.com; luto@kernel.org; peterz@infradead.org;
> viro@zeniv.linux.org.uk; akpm@linux-foundation.org; paulmck@kernel.org;
> mchehab+huawei@kernel.org; pawan.kumar.gupta@linux.intel.com;
> rdunlap@infradead.org; oneukum@suse.com; anshuman.khandual@arm.com;
> jroedel@suse.de; almasrymina@google.com; rientjes@google.com;
> willy@infradead.org; osalvador@suse.de; mhocko@suse.com
> Cc: duanxiongchun@bytedance.com; linux-doc@vger.kernel.org;
> linux-kernel@vger.kernel.org; linux-mm@kvack.org;
> linux-fsdevel@vger.kernel.org; Muchun Song <songmuchun@bytedance.com>
> Subject: [PATCH v4 09/21] mm/hugetlb: Free the vmemmap pages associated
> with each hugetlb page
> 
> When we allocate a hugetlb page from the buddy, we should free the
> unused vmemmap pages associated with it. We can do that in the
> prep_new_huge_page().
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  arch/x86/include/asm/hugetlb.h          |   9 ++
>  arch/x86/include/asm/pgtable_64_types.h |   8 ++
>  mm/hugetlb.c                            |  16 +++
>  mm/hugetlb_vmemmap.c                    | 188 ++++++++++++++++++++++++++++++++
>  mm/hugetlb_vmemmap.h                    |   5 +
>  5 files changed, 226 insertions(+)
> 
> diff --git a/arch/x86/include/asm/hugetlb.h b/arch/x86/include/asm/hugetlb.h
> index 1721b1aadeb1..c601fe042832 100644
> --- a/arch/x86/include/asm/hugetlb.h
> +++ b/arch/x86/include/asm/hugetlb.h
> @@ -4,6 +4,15 @@
> 
>  #include <asm/page.h>
>  #include <asm-generic/hugetlb.h>
> +#include <asm/pgtable.h>
> +
> +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> +#define vmemmap_pmd_huge vmemmap_pmd_huge
> +static inline bool vmemmap_pmd_huge(pmd_t *pmd)
> +{
> +	return pmd_large(*pmd);
> +}
> +#endif
> 
>  #define hugepages_supported() boot_cpu_has(X86_FEATURE_PSE)
> 
> diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
> index 52e5f5f2240d..bedbd2e7d06c 100644
> --- a/arch/x86/include/asm/pgtable_64_types.h
> +++ b/arch/x86/include/asm/pgtable_64_types.h
> @@ -139,6 +139,14 @@ extern unsigned int ptrs_per_p4d;
>  # define VMEMMAP_START		__VMEMMAP_BASE_L4
>  #endif /* CONFIG_DYNAMIC_MEMORY_LAYOUT */
> 
> +/*
> + * VMEMMAP_SIZE - allows the whole linear region to be covered by
> + *                a struct page array.
> + */
> +#define VMEMMAP_SIZE		(1UL << (__VIRTUAL_MASK_SHIFT - PAGE_SHIFT - \
> +					 1 + ilog2(sizeof(struct page))))
> +#define VMEMMAP_END		(VMEMMAP_START + VMEMMAP_SIZE)
> +
>  #define VMALLOC_END		(VMALLOC_START + (VMALLOC_SIZE_TB << 40) - 1)
> 
>  #define MODULES_VADDR		(__START_KERNEL_map + KERNEL_IMAGE_SIZE)
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index f88032c24667..a0ce6f33a717 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1499,6 +1499,14 @@ void free_huge_page(struct page *page)
> 
>  static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
>  {
> +	free_huge_page_vmemmap(h, page);
> +	/*
> +	 * Because we store preallocated pages on @page->lru,
> +	 * vmemmap_pgtable_free() must be called before the
> +	 * initialization of @page->lru in INIT_LIST_HEAD().
> +	 */
> +	vmemmap_pgtable_free(page);
> +
>  	INIT_LIST_HEAD(&page->lru);
>  	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
>  	set_hugetlb_cgroup(page, NULL);
> @@ -1751,6 +1759,14 @@ static struct page *alloc_fresh_huge_page(struct hstate *h,
>  	if (!page)
>  		return NULL;
> 
> +	if (vmemmap_pgtable_prealloc(h, page)) {
> +		if (hstate_is_gigantic(h))
> +			free_gigantic_page(page, huge_page_order(h));
> +		else
> +			put_page(page);
> +		return NULL;
> +	}
> +
>  	if (hstate_is_gigantic(h))
>  		prep_compound_gigantic_page(page, huge_page_order(h));
>  	prep_new_huge_page(h, page, page_to_nid(page));
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index 332c131c01a8..937562a15f1e 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -74,6 +74,7 @@
>  #include <linux/pagewalk.h>
>  #include <linux/mmzone.h>
>  #include <linux/list.h>
> +#include <linux/bootmem_info.h>
>  #include <asm/pgalloc.h>
>  #include "hugetlb_vmemmap.h"
> 
> @@ -86,6 +87,8 @@
>   * reserve at least 2 pages as vmemmap areas.
>   */
>  #define RESERVE_VMEMMAP_NR		2U
> +#define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
> +#define TAIL_PAGE_REUSE			-1
> 
>  #ifndef VMEMMAP_HPAGE_SHIFT
>  #define VMEMMAP_HPAGE_SHIFT		HPAGE_SHIFT
> @@ -97,6 +100,21 @@
> 
>  #define page_huge_pte(page)		((page)->pmd_huge_pte)
> 
> +#define vmemmap_hpage_addr_end(addr, end)				 \
> +({									 \
> +	unsigned long __boundary;					 \
> +	__boundary = ((addr) + VMEMMAP_HPAGE_SIZE) & VMEMMAP_HPAGE_MASK; \
> +	(__boundary - 1 < (end) - 1) ? __boundary : (end);		 \
> +})
> +
> +#ifndef vmemmap_pmd_huge
> +#define vmemmap_pmd_huge vmemmap_pmd_huge
> +static inline bool vmemmap_pmd_huge(pmd_t *pmd)
> +{
> +	return pmd_huge(*pmd);
> +}
> +#endif
> +
>  static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
>  {
>  	return h->nr_free_vmemmap_pages;
> @@ -158,6 +176,176 @@ int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
>  	return -ENOMEM;
>  }
> 
> +/*
> + * Walk a vmemmap address to the pmd it maps.
> + */
> +static pmd_t *vmemmap_to_pmd(unsigned long page)
> +{
> +	pgd_t *pgd;
> +	p4d_t *p4d;
> +	pud_t *pud;
> +	pmd_t *pmd;
> +
> +	if (page < VMEMMAP_START || page >= VMEMMAP_END)
> +		return NULL;
> +
> +	pgd = pgd_offset_k(page);
> +	if (pgd_none(*pgd))
> +		return NULL;
> +	p4d = p4d_offset(pgd, page);
> +	if (p4d_none(*p4d))
> +		return NULL;
> +	pud = pud_offset(p4d, page);
> +
> +	if (pud_none(*pud) || pud_bad(*pud))
> +		return NULL;
> +	pmd = pmd_offset(pud, page);
> +
> +	return pmd;
> +}
> +
> +static inline spinlock_t *vmemmap_pmd_lock(pmd_t *pmd)
> +{
> +	return pmd_lock(&init_mm, pmd);
> +}
> +
> +static inline int freed_vmemmap_hpage(struct page *page)
> +{
> +	return atomic_read(&page->_mapcount) + 1;
> +}
> +
> +static inline int freed_vmemmap_hpage_inc(struct page *page)
> +{
> +	return atomic_inc_return_relaxed(&page->_mapcount) + 1;
> +}
> +
> +static inline int freed_vmemmap_hpage_dec(struct page *page)
> +{
> +	return atomic_dec_return_relaxed(&page->_mapcount) + 1;
> +}
> +
> +static inline void free_vmemmap_page_list(struct list_head *list)
> +{
> +	struct page *page, *next;
> +
> +	list_for_each_entry_safe(page, next, list, lru) {
> +		list_del(&page->lru);
> +		free_vmemmap_page(page);
> +	}
> +}
> +
> +static void __free_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
> +					 unsigned long start,
> +					 unsigned long end,
> +					 struct list_head *free_pages)
> +{
> +	/* Make the tail pages are mapped read-only. */
> +	pgprot_t pgprot = PAGE_KERNEL_RO;
> +	pte_t entry = mk_pte(reuse, pgprot);
> +	unsigned long addr;
> +
> +	for (addr = start; addr < end; addr += PAGE_SIZE, ptep++) {
> +		struct page *page;
> +		pte_t old = *ptep;
> +
> +		VM_WARN_ON(!pte_present(old));
> +		page = pte_page(old);
> +		list_add(&page->lru, free_pages);
> +
> +		set_pte_at(&init_mm, addr, ptep, entry);
> +	}
> +}
> +
> +static void __free_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
> +					 unsigned long addr,
> +					 struct list_head *free_pages)
> +{
> +	unsigned long next;
> +	unsigned long start = addr + RESERVE_VMEMMAP_SIZE;
> +	unsigned long end = addr + vmemmap_pages_size_per_hpage(h);
> +	struct page *reuse = NULL;
> +
> +	addr = start;
> +	do {
> +		pte_t *ptep;
> +
> +		ptep = pte_offset_kernel(pmd, addr);
> +		if (!reuse)
> +			reuse = pte_page(ptep[TAIL_PAGE_REUSE]);
> +
> +		next = vmemmap_hpage_addr_end(addr, end);
> +		__free_huge_page_pte_vmemmap(reuse, ptep, addr, next,
> +					     free_pages);
> +	} while (pmd++, addr = next, addr != end);
> +
> +	flush_tlb_kernel_range(start, end);
> +}
> +
> +static void split_vmemmap_pmd(pmd_t *pmd, pte_t *pte_p, unsigned long addr)

Hi Muchun,

Are you going to restore the pmd mapping after you free the hugetlb? I mean,
when you free a contiguous 128MB worth of 2MB hugetlb pages, will you
redo the PMD vmemmap, since one 2MB PMD can hold exactly the page structs
of 128MB of memory?

If not, wouldn't it be simpler to only use base pages while populating the
vmemmap? I mean, once we enable the Kconfig option you add for VMEMMAP_FREE,
we would only use base pages to place the "page struct"s and never split a
PMD into base pages afterwards.

One negative side effect might be that base pages would also be used for
those pages which won't be hugetlb later. But if most pages of the host will
be hugetlb for guests and SPDK, it shouldn't hurt too much.

Or at least this could be done for hugetlb reserved on the cmdline?
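
For concreteness, a sketch of what that could look like on x86
(hypothetical wiring, not part of this series; vmemmap_populate_basepages()
and the x86 hugepage populator already exist in the tree):

	int __meminit vmemmap_populate(unsigned long start, unsigned long end,
				       int node, struct vmem_altmap *altmap)
	{
		/* Skip the PMD mapping up front so nothing has to be split
		 * (or merged back) when hugetlb pages come and go. */
		if (IS_ENABLED(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP))
			return vmemmap_populate_basepages(start, end, node, altmap);

		return vmemmap_populate_hugepages(start, end, node, altmap);
	}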

> +{
> +	int i;
> +	pgprot_t pgprot = PAGE_KERNEL;
> +	struct mm_struct *mm = &init_mm;
> +	struct page *page;
> +	pmd_t old_pmd, _pmd;
> +
> +	old_pmd = READ_ONCE(*pmd);
> +	page = pmd_page(old_pmd);
> +	pmd_populate_kernel(mm, &_pmd, pte_p);
> +
> +	for (i = 0; i < VMEMMAP_HPAGE_NR; i++, addr += PAGE_SIZE) {
> +		pte_t entry, *pte;
> +
> +		entry = mk_pte(page + i, pgprot);
> +		pte = pte_offset_kernel(&_pmd, addr);
> +		VM_BUG_ON(!pte_none(*pte));
> +		set_pte_at(mm, addr, pte, entry);
> +	}
> +
> +	/* make pte visible before pmd */
> +	smp_wmb();
> +	pmd_populate_kernel(mm, pmd, pte_p);
> +}
> +
> +static void split_vmemmap_huge_page(struct page *head, pmd_t *pmd)
> +{
> +	struct page *pte_page, *t_page;
> +	unsigned long start = (unsigned long)head & VMEMMAP_HPAGE_MASK;
> +	unsigned long addr = start;
> +
> +	list_for_each_entry_safe(pte_page, t_page, &head->lru, lru) {
> +		list_del(&pte_page->lru);
> +		VM_BUG_ON(freed_vmemmap_hpage(pte_page));
> +		split_vmemmap_pmd(pmd++, page_to_virt(pte_page), addr);
> +		addr += VMEMMAP_HPAGE_SIZE;
> +	}
> +
> +	flush_tlb_kernel_range(start, addr);
> +}
> +
> +void free_huge_page_vmemmap(struct hstate *h, struct page *head)
> +{
> +	pmd_t *pmd;
> +	spinlock_t *ptl;
> +	LIST_HEAD(free_pages);
> +
> +	if (!free_vmemmap_pages_per_hpage(h))
> +		return;
> +
> +	pmd = vmemmap_to_pmd((unsigned long)head);
> +	BUG_ON(!pmd);
> +
> +	ptl = vmemmap_pmd_lock(pmd);
> +	if (vmemmap_pmd_huge(pmd))
> +		split_vmemmap_huge_page(head, pmd);
> +
> +	__free_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &free_pages);
> +	freed_vmemmap_hpage_inc(pmd_page(*pmd));
> +	spin_unlock(ptl);
> +
> +	free_vmemmap_page_list(&free_pages);
> +}
> +
>  void __init hugetlb_vmemmap_init(struct hstate *h)
>  {
>  	unsigned int order = huge_page_order(h);
> diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
> index 2a72d2f62411..fb8b77659ed5 100644
> --- a/mm/hugetlb_vmemmap.h
> +++ b/mm/hugetlb_vmemmap.h
> @@ -15,6 +15,7 @@
>  void __init hugetlb_vmemmap_init(struct hstate *h);
>  int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page);
>  void vmemmap_pgtable_free(struct page *page);
> +void free_huge_page_vmemmap(struct hstate *h, struct page *head);
>  #else
>  static inline void hugetlb_vmemmap_init(struct hstate *h)
>  {
> @@ -28,5 +29,9 @@ static inline int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
>  static inline void vmemmap_pgtable_free(struct page *page)
>  {
>  }
> +
> +static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
> +{
> +}
>  #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
>  #endif /* _LINUX_HUGETLB_VMEMMAP_H */
> --
> 2.11.0
> 

Thanks
Barry


^ permalink raw reply	[flat|nested] 49+ messages in thread

* RE: [PATCH v4 00/21] Free some vmemmap pages of hugetlb page
  2020-11-13 10:59 [PATCH v4 00/21] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (20 preceding siblings ...)
  2020-11-13 10:59 ` [PATCH v4 21/21] mm/hugetlb: Disable freeing vmemmap if struct page size is not power of two Muchun Song
@ 2020-11-17 10:15 ` Song Bao Hua (Barry Song)
  2020-11-17 10:49   ` [External] " Muchun Song
  21 siblings, 1 reply; 49+ messages in thread
From: Song Bao Hua (Barry Song) @ 2020-11-17 10:15 UTC (permalink / raw)
  To: Muchun Song, corbet, mike.kravetz, tglx, mingo, bp, x86, hpa,
	dave.hansen, luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel



> -----Original Message-----
> From: owner-linux-mm@kvack.org [mailto:owner-linux-mm@kvack.org] On
> Behalf Of Muchun Song
> Sent: Saturday, November 14, 2020 12:00 AM
> To: corbet@lwn.net; mike.kravetz@oracle.com; tglx@linutronix.de;
> mingo@redhat.com; bp@alien8.de; x86@kernel.org; hpa@zytor.com;
> dave.hansen@linux.intel.com; luto@kernel.org; peterz@infradead.org;
> viro@zeniv.linux.org.uk; akpm@linux-foundation.org; paulmck@kernel.org;
> mchehab+huawei@kernel.org; pawan.kumar.gupta@linux.intel.com;
> rdunlap@infradead.org; oneukum@suse.com; anshuman.khandual@arm.com;
> jroedel@suse.de; almasrymina@google.com; rientjes@google.com;
> willy@infradead.org; osalvador@suse.de; mhocko@suse.com
> Cc: duanxiongchun@bytedance.com; linux-doc@vger.kernel.org;
> linux-kernel@vger.kernel.org; linux-mm@kvack.org;
> linux-fsdevel@vger.kernel.org; Muchun Song <songmuchun@bytedance.com>
> Subject: [PATCH v4 00/21] Free some vmemmap pages of hugetlb page
> 
> Hi all,
> 
> This patch series will free some vmemmap pages(struct page structures)
> associated with each hugetlbpage when preallocated to save memory.
> 
> Nowadays we track the status of physical page frames using struct page
> structures arranged in one or more arrays. And here exists one-to-one
> mapping between the physical page frame and the corresponding struct page
> structure.
> 
> The HugeTLB support is built on top of multiple page size support that
> is provided by most modern architectures. For example, x86 CPUs normally
> support 4K and 2M (1G if architecturally supported) page sizes. Every
> HugeTLB has more than one struct page structure. The 2M HugeTLB has 512
> struct page structure and 1G HugeTLB has 4096 struct page structures. But
> in the core of HugeTLB only uses the first 4 (Use of first 4 struct page
> structures comes from HUGETLB_CGROUP_MIN_ORDER.) struct page
> structures to
> store metadata associated with each HugeTLB. The rest of the struct page
> structures are usually read the compound_head field which are all the same
> value. If we can free some struct page memory to buddy system so that we
> can save a lot of memory.
> 
> When the system boot up, every 2M HugeTLB has 512 struct page structures
> which size is 8 pages(sizeof(struct page) * 512 / PAGE_SIZE).
> 
>    hugetlbpage                  struct pages(8 pages)          page frame(8 pages)
>   +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
>   |           |                     |     0     | -------------> |     0     |
>   |           |                     |     1     | -------------> |     1     |
>   |           |                     |     2     | -------------> |     2     |
>   |           |                     |     3     | -------------> |     3     |
>   |           |                     |     4     | -------------> |     4     |
>   |     2M    |                     |     5     | -------------> |     5     |
>   |           |                     |     6     | -------------> |     6     |
>   |           |                     |     7     | -------------> |     7     |
>   |           |                     +-----------+                +-----------+
>   |           |
>   |           |
>   +-----------+
> 
> 
> When a hugetlbpage is preallocated, we can change the mapping from above
> to
> bellow.
> 
>    hugetlbpage                  struct pages(8 pages)          page frame(8 pages)
>   +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
>   |           |                     |     0     | -------------> |     0     |
>   |           |                     |     1     | -------------> |     1     |
>   |           |                     |     2     | -------------> +-----------+
>   |           |                     |     3     | -----------------^ ^ ^ ^ ^
>   |           |                     |     4     | -------------------+ | | |
>   |     2M    |                     |     5     | ---------------------+ | |
>   |           |                     |     6     | -----------------------+ |
>   |           |                     |     7     | -------------------------+
>   |           |                     +-----------+
>   |           |
>   |           |
>   +-----------+
> 
> For tail pages, the value of compound_head is the same. So we can reuse
> first page of tail page structs. We map the virtual addresses of the
> remaining 6 pages of tail page structs to the first tail page struct,
> and then free these 6 pages. Therefore, we need to reserve at least 2
> pages as vmemmap areas.
> 
> When a hugetlbpage is freed to the buddy system, we should allocate six
> pages for vmemmap pages and restore the previous mapping relationship.
> 
> If we use the 1G hugetlbpage, we can save 4088 pages (there are 4096 pages
> for struct page structures; we reserve 2 pages for vmemmap and 8 pages for
> page tables, so we can save 4088 pages). This is a very substantial gain.
> On our server, we run some SPDK/QEMU applications which will use 1024GB of
> hugetlbpages. With this feature enabled, we can save ~16GB (1G hugepage) /
> ~11GB (2MB hugepage)

Hi Muchun,

Do we really save 11GB for 2MB hugepages?
How much do we save if we only get one 2MB hugetlb page from one 128MB
mem_section? It seems we need at least one page for the PTEs, since we
are splitting the PMD of the vmemmap into PTEs?

> memory.
> 
> Because there are vmemmap page tables reconstruction on the
> freeing/allocating
> path, it increases some overhead. Here are some overhead analysis.
> 
> 1) Allocating 10240 2MB hugetlb pages.
> 
>    a) With this patch series applied:
>    # time echo 10240 > /proc/sys/vm/nr_hugepages
> 
>    real     0m0.166s
>    user     0m0.000s
>    sys      0m0.166s
> 
>    # bpftrace -e 'kprobe:alloc_fresh_huge_page { @start[tid] = nsecs; }
> kretprobe:alloc_fresh_huge_page /@start[tid]/ { @latency = hist(nsecs -
> @start[tid]); delete(@start[tid]); }'
>    Attaching 2 probes...
> 
>    @latency:
>    [8K, 16K)           8360 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
>    [16K, 32K)          1868 |@@@@@@@@@@@                                         |
>    [32K, 64K)            10 |                                                    |
>    [64K, 128K)            2 |                                                    |
> 
>    b) Without this patch series:
>    # time echo 10240 > /proc/sys/vm/nr_hugepages
> 
>    real     0m0.066s
>    user     0m0.000s
>    sys      0m0.066s
> 
>    # bpftrace -e 'kprobe:alloc_fresh_huge_page { @start[tid] = nsecs; }
> kretprobe:alloc_fresh_huge_page /@start[tid]/ { @latency = hist(nsecs -
> @start[tid]); delete(@start[tid]); }'
>    Attaching 2 probes...
> 
>    @latency:
>    [4K, 8K)           10176 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
>    [8K, 16K)             62 |                                                    |
>    [16K, 32K)             2 |                                                    |
> 
>    Summarize: this feature is about ~2x slower than before.
> 
> 2) Freeing 10240 2MB hugetlb pages.
> 
>    a) With this patch series applied:
>    # time echo 0 > /proc/sys/vm/nr_hugepages
> 
>    real     0m0.004s
>    user     0m0.000s
>    sys      0m0.002s
> 

Something is wrong here, it is faster than the case without this patchset:
0.004s vs. 0m0.077s

>    # bpftrace -e 'kprobe:__free_hugepage { @start[tid] = nsecs; }
> kretprobe:__free_hugepage /@start[tid]/ { @latency = hist(nsecs - @start[tid]);
> delete(@start[tid]); }'
>    Attaching 2 probes...
> 
>    @latency:
>    [16K, 32K)         10240 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
> 
>    b) Without this patch series:
>    # time echo 0 > /proc/sys/vm/nr_hugepages
> 
>    real     0m0.077s
>    user     0m0.001s
>    sys      0m0.075s
> 
>    # bpftrace -e 'kprobe:__free_hugepage { @start[tid] = nsecs; }
> kretprobe:__free_hugepage /@start[tid]/ { @latency = hist(nsecs - @start[tid]);
> delete(@start[tid]); }'
>    Attaching 2 probes...
> 
>    @latency:
>    [4K, 8K)            9950 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
>    [8K, 16K)            287 |@                                                   |
>    [16K, 32K)             3 |                                                    |
> 
>    Summarize: The overhead of __free_hugepage is about ~2-4x slower than
>               before. But according to the allocation test above, I think
>               that it is also ~2x slower than before here.
>
>               So why is the 'real' time of the patched kernel smaller than
>               before? Because in this patch series, the freeing of hugetlb
>               pages is asynchronous (through a kworker).
> 
> Although the overhead has increased, the overhead is not significant. Like Mike
> said, "However, remember that the majority of use cases create hugetlb pages at
> or shortly after boot time and add them to the pool. So, additional overhead is
> at pool creation time. There is no change to 'normal run time' operations of
> getting a page from or returning a page to the pool (think page fault/unmap)".
> 

That seems true. At runtime, people normally don't change hugetlb settings.

>   changelog in v4:
>   1. Move all the vmemmap functions to hugetlb_vmemmap.c.
>   2. Make the CONFIG_HUGETLB_PAGE_FREE_VMEMMAP default to y, if we
> want to
>      disable this feature, we should disable it by a boot/kernel command line.
>   3. Remove vmemmap_pgtable_{init, deposit, withdraw}() helper functions.
>   4. Initialize page table lock for vmemmap through core_initcall mechanism.
> 
>   Thanks for Mike and Oscar's suggestions.
> 
>   changelog in v3:
>   1. Rename some helps function name. Thanks Mike.
>   2. Rework some code. Thanks Mike and Oscar.
>   3. Remap the tail vmemmap page with PAGE_KERNEL_RO instead of
>      PAGE_KERNEL. Thanks Matthew.
>   4. Add some overhead analysis in the cover letter.
>   5. Use vmemap pmd table lock instead of a hugetlb specific global lock.
> 
>   changelog in v2:
>   1. Fix do not call dissolve_compound_page in alloc_huge_page_vmemmap().
>   2. Fix some typo and code style problems.
>   3. Remove unused handle_vmemmap_fault().
>   4. Merge some commits to one commit suggested by Mike.
> 
> Muchun Song (21):
>   mm/memory_hotplug: Move bootmem info registration API to
>     bootmem_info.c
>   mm/memory_hotplug: Move {get,put}_page_bootmem() to bootmem_info.c
>   mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
>   mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
>   mm/hugetlb: Introduce pgtable allocation/freeing helpers
>   mm/bootmem_info: Introduce {free,prepare}_vmemmap_page()
>   mm/bootmem_info: Combine bootmem info and type into page->freelist
>   mm/hugetlb: Initialize page table lock for vmemmap
>   mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
>   mm/hugetlb: Defer freeing of hugetlb pages
>   mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb
>     page
>   mm/hugetlb: Introduce remap_huge_page_pmd_vmemmap helper
>   mm/hugetlb: Use PG_slab to indicate split pmd
>   mm/hugetlb: Support freeing vmemmap pages of gigantic page
>   mm/hugetlb: Set the PageHWPoison to the raw error page
>   mm/hugetlb: Flush work when dissolving hugetlb page
>   mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
>   mm/hugetlb: Merge pte to huge pmd only for gigantic page
>   mm/hugetlb: Gather discrete indexes of tail page
>   mm/hugetlb: Add BUILD_BUG_ON to catch invalid usage of tail struct
>     page
>   mm/hugetlb: Disable freeing vmemmap if struct page size is not power
>     of two
> 
>  Documentation/admin-guide/kernel-parameters.txt |   9 +
>  Documentation/admin-guide/mm/hugetlbpage.rst    |   3 +
>  arch/x86/include/asm/hugetlb.h                  |  17 +
>  arch/x86/include/asm/pgtable_64_types.h         |   8 +
>  arch/x86/mm/init_64.c                           |   7 +-
>  fs/Kconfig                                      |  14 +
>  include/linux/bootmem_info.h                    |  78 +++
>  include/linux/hugetlb.h                         |  19 +
>  include/linux/hugetlb_cgroup.h                  |  15 +-
>  include/linux/memory_hotplug.h                  |  27 -
>  mm/Makefile                                     |   2 +
>  mm/bootmem_info.c                               | 124 ++++
>  mm/hugetlb.c                                    | 163 +++++-
>  mm/hugetlb_vmemmap.c                            | 732
> ++++++++++++++++++++++++
>  mm/hugetlb_vmemmap.h                            | 104 ++++
>  mm/memory_hotplug.c                             | 116 ----
>  mm/sparse.c                                     |   5 +-
>  17 files changed, 1263 insertions(+), 180 deletions(-)
>  create mode 100644 include/linux/bootmem_info.h
>  create mode 100644 mm/bootmem_info.c
>  create mode 100644 mm/hugetlb_vmemmap.c
>  create mode 100644 mm/hugetlb_vmemmap.h
> 

Thanks
Barry


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [External] RE: [PATCH v4 09/21] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
  2020-11-17  9:54   ` Song Bao Hua (Barry Song)
@ 2020-11-17 10:26     ` Muchun Song
  2020-11-18  3:21       ` Song Bao Hua (Barry Song)
  0 siblings, 1 reply; 49+ messages in thread
From: Muchun Song @ 2020-11-17 10:26 UTC (permalink / raw)
  To: Song Bao Hua (Barry Song)
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, duanxiongchun,
	linux-doc, linux-kernel, linux-mm, linux-fsdevel

On Tue, Nov 17, 2020 at 5:55 PM Song Bao Hua (Barry Song)
<song.bao.hua@hisilicon.com> wrote:
>
>
>
> > -----Original Message-----
> > From: owner-linux-mm@kvack.org [mailto:owner-linux-mm@kvack.org] On
> > Behalf Of Muchun Song
> > Sent: Saturday, November 14, 2020 12:00 AM
> > To: corbet@lwn.net; mike.kravetz@oracle.com; tglx@linutronix.de;
> > mingo@redhat.com; bp@alien8.de; x86@kernel.org; hpa@zytor.com;
> > dave.hansen@linux.intel.com; luto@kernel.org; peterz@infradead.org;
> > viro@zeniv.linux.org.uk; akpm@linux-foundation.org; paulmck@kernel.org;
> > mchehab+huawei@kernel.org; pawan.kumar.gupta@linux.intel.com;
> > rdunlap@infradead.org; oneukum@suse.com; anshuman.khandual@arm.com;
> > jroedel@suse.de; almasrymina@google.com; rientjes@google.com;
> > willy@infradead.org; osalvador@suse.de; mhocko@suse.com
> > Cc: duanxiongchun@bytedance.com; linux-doc@vger.kernel.org;
> > linux-kernel@vger.kernel.org; linux-mm@kvack.org;
> > linux-fsdevel@vger.kernel.org; Muchun Song <songmuchun@bytedance.com>
> > Subject: [PATCH v4 09/21] mm/hugetlb: Free the vmemmap pages associated
> > with each hugetlb page
> >
> > When we allocate a hugetlb page from the buddy, we should free the
> > unused vmemmap pages associated with it. We can do that in the
> > prep_new_huge_page().
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> >  arch/x86/include/asm/hugetlb.h          |   9 ++
> >  arch/x86/include/asm/pgtable_64_types.h |   8 ++
> >  mm/hugetlb.c                            |  16 +++
> >  mm/hugetlb_vmemmap.c                    | 188 ++++++++++++++++++++++++++++++++
> >  mm/hugetlb_vmemmap.h                    |   5 +
> >  5 files changed, 226 insertions(+)
> >
> > diff --git a/arch/x86/include/asm/hugetlb.h b/arch/x86/include/asm/hugetlb.h
> > index 1721b1aadeb1..c601fe042832 100644
> > --- a/arch/x86/include/asm/hugetlb.h
> > +++ b/arch/x86/include/asm/hugetlb.h
> > @@ -4,6 +4,15 @@
> >
> >  #include <asm/page.h>
> >  #include <asm-generic/hugetlb.h>
> > +#include <asm/pgtable.h>
> > +
> > +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> > +#define vmemmap_pmd_huge vmemmap_pmd_huge
> > +static inline bool vmemmap_pmd_huge(pmd_t *pmd)
> > +{
> > +     return pmd_large(*pmd);
> > +}
> > +#endif
> >
> >  #define hugepages_supported() boot_cpu_has(X86_FEATURE_PSE)
> >
> > diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
> > index 52e5f5f2240d..bedbd2e7d06c 100644
> > --- a/arch/x86/include/asm/pgtable_64_types.h
> > +++ b/arch/x86/include/asm/pgtable_64_types.h
> > @@ -139,6 +139,14 @@ extern unsigned int ptrs_per_p4d;
> >  # define VMEMMAP_START               __VMEMMAP_BASE_L4
> >  #endif /* CONFIG_DYNAMIC_MEMORY_LAYOUT */
> >
> > +/*
> > + * VMEMMAP_SIZE - allows the whole linear region to be covered by
> > + *                a struct page array.
> > + */
> > +#define VMEMMAP_SIZE         (1UL << (__VIRTUAL_MASK_SHIFT - PAGE_SHIFT - \
> > +                                      1 + ilog2(sizeof(struct page))))
> > +#define VMEMMAP_END          (VMEMMAP_START + VMEMMAP_SIZE)
> > +
> >  #define VMALLOC_END          (VMALLOC_START + (VMALLOC_SIZE_TB << 40) - 1)
> >
> >  #define MODULES_VADDR                (__START_KERNEL_map + KERNEL_IMAGE_SIZE)
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index f88032c24667..a0ce6f33a717 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1499,6 +1499,14 @@ void free_huge_page(struct page *page)
> >
> >  static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
> >  {
> > +     free_huge_page_vmemmap(h, page);
> > +     /*
> > +      * Because we store preallocated pages on @page->lru,
> > +      * vmemmap_pgtable_free() must be called before the
> > +      * initialization of @page->lru in INIT_LIST_HEAD().
> > +      */
> > +     vmemmap_pgtable_free(page);
> > +
> >       INIT_LIST_HEAD(&page->lru);
> >       set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
> >       set_hugetlb_cgroup(page, NULL);
> > @@ -1751,6 +1759,14 @@ static struct page *alloc_fresh_huge_page(struct hstate *h,
> >       if (!page)
> >               return NULL;
> >
> > +     if (vmemmap_pgtable_prealloc(h, page)) {
> > +             if (hstate_is_gigantic(h))
> > +                     free_gigantic_page(page, huge_page_order(h));
> > +             else
> > +                     put_page(page);
> > +             return NULL;
> > +     }
> > +
> >       if (hstate_is_gigantic(h))
> >               prep_compound_gigantic_page(page, huge_page_order(h));
> >       prep_new_huge_page(h, page, page_to_nid(page));
> > diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> > index 332c131c01a8..937562a15f1e 100644
> > --- a/mm/hugetlb_vmemmap.c
> > +++ b/mm/hugetlb_vmemmap.c
> > @@ -74,6 +74,7 @@
> >  #include <linux/pagewalk.h>
> >  #include <linux/mmzone.h>
> >  #include <linux/list.h>
> > +#include <linux/bootmem_info.h>
> >  #include <asm/pgalloc.h>
> >  #include "hugetlb_vmemmap.h"
> >
> > @@ -86,6 +87,8 @@
> >   * reserve at least 2 pages as vmemmap areas.
> >   */
> >  #define RESERVE_VMEMMAP_NR           2U
> > +#define RESERVE_VMEMMAP_SIZE         (RESERVE_VMEMMAP_NR << PAGE_SHIFT)
> > +#define TAIL_PAGE_REUSE                      -1
> >
> >  #ifndef VMEMMAP_HPAGE_SHIFT
> >  #define VMEMMAP_HPAGE_SHIFT          HPAGE_SHIFT
> > @@ -97,6 +100,21 @@
> >
> >  #define page_huge_pte(page)          ((page)->pmd_huge_pte)
> >
> > +#define vmemmap_hpage_addr_end(addr, end)                             \
> > +({                                                                    \
> > +     unsigned long __boundary;                                        \
> > +     __boundary = ((addr) + VMEMMAP_HPAGE_SIZE) & VMEMMAP_HPAGE_MASK; \
> > +     (__boundary - 1 < (end) - 1) ? __boundary : (end);               \
> > +})
> > +
> > +#ifndef vmemmap_pmd_huge
> > +#define vmemmap_pmd_huge vmemmap_pmd_huge
> > +static inline bool vmemmap_pmd_huge(pmd_t *pmd)
> > +{
> > +     return pmd_huge(*pmd);
> > +}
> > +#endif
> > +
> >  static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
> >  {
> >       return h->nr_free_vmemmap_pages;
> > @@ -158,6 +176,176 @@ int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
> >       return -ENOMEM;
> >  }
> >
> > +/*
> > + * Walk a vmemmap address to the pmd it maps.
> > + */
> > +static pmd_t *vmemmap_to_pmd(unsigned long page)
> > +{
> > +     pgd_t *pgd;
> > +     p4d_t *p4d;
> > +     pud_t *pud;
> > +     pmd_t *pmd;
> > +
> > +     if (page < VMEMMAP_START || page >= VMEMMAP_END)
> > +             return NULL;
> > +
> > +     pgd = pgd_offset_k(page);
> > +     if (pgd_none(*pgd))
> > +             return NULL;
> > +     p4d = p4d_offset(pgd, page);
> > +     if (p4d_none(*p4d))
> > +             return NULL;
> > +     pud = pud_offset(p4d, page);
> > +
> > +     if (pud_none(*pud) || pud_bad(*pud))
> > +             return NULL;
> > +     pmd = pmd_offset(pud, page);
> > +
> > +     return pmd;
> > +}
> > +
> > +static inline spinlock_t *vmemmap_pmd_lock(pmd_t *pmd)
> > +{
> > +     return pmd_lock(&init_mm, pmd);
> > +}
> > +
> > +static inline int freed_vmemmap_hpage(struct page *page)
> > +{
> > +     return atomic_read(&page->_mapcount) + 1;
> > +}
> > +
> > +static inline int freed_vmemmap_hpage_inc(struct page *page)
> > +{
> > +     return atomic_inc_return_relaxed(&page->_mapcount) + 1;
> > +}
> > +
> > +static inline int freed_vmemmap_hpage_dec(struct page *page)
> > +{
> > +     return atomic_dec_return_relaxed(&page->_mapcount) + 1;
> > +}
> > +
> > +static inline void free_vmemmap_page_list(struct list_head *list)
> > +{
> > +     struct page *page, *next;
> > +
> > +     list_for_each_entry_safe(page, next, list, lru) {
> > +             list_del(&page->lru);
> > +             free_vmemmap_page(page);
> > +     }
> > +}
> > +
> > +static void __free_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
> > +                                      unsigned long start,
> > +                                      unsigned long end,
> > +                                      struct list_head *free_pages)
> > +{
> > +     /* Make the tail pages are mapped read-only. */
> > +     pgprot_t pgprot = PAGE_KERNEL_RO;
> > +     pte_t entry = mk_pte(reuse, pgprot);
> > +     unsigned long addr;
> > +
> > +     for (addr = start; addr < end; addr += PAGE_SIZE, ptep++) {
> > +             struct page *page;
> > +             pte_t old = *ptep;
> > +
> > +             VM_WARN_ON(!pte_present(old));
> > +             page = pte_page(old);
> > +             list_add(&page->lru, free_pages);
> > +
> > +             set_pte_at(&init_mm, addr, ptep, entry);
> > +     }
> > +}
> > +
> > +static void __free_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
> > +                                      unsigned long addr,
> > +                                      struct list_head *free_pages)
> > +{
> > +     unsigned long next;
> > +     unsigned long start = addr + RESERVE_VMEMMAP_SIZE;
> > +     unsigned long end = addr + vmemmap_pages_size_per_hpage(h);
> > +     struct page *reuse = NULL;
> > +
> > +     addr = start;
> > +     do {
> > +             pte_t *ptep;
> > +
> > +             ptep = pte_offset_kernel(pmd, addr);
> > +             if (!reuse)
> > +                     reuse = pte_page(ptep[TAIL_PAGE_REUSE]);
> > +
> > +             next = vmemmap_hpage_addr_end(addr, end);
> > +             __free_huge_page_pte_vmemmap(reuse, ptep, addr, next,
> > +                                          free_pages);
> > +     } while (pmd++, addr = next, addr != end);
> > +
> > +     flush_tlb_kernel_range(start, end);
> > +}
> > +
> > +static void split_vmemmap_pmd(pmd_t *pmd, pte_t *pte_p, unsigned long addr)
>
> Hi Muchun,
>
> Are you going to restore the pmd mapping after you free the hugetlb? I mean,
> when you free a contiguous 128MB worth of 2MB hugetlb pages, will you
> redo the PMD vmemmap, since one 2MB PMD can hold exactly the page structs
> of 128MB of memory?

Now we only restore the pmd mapping for the 1GB HugeTLB page. For the
2MB HugeTLB page, we do not (I haven't figured out how to handle it
gracefully yet).

>
> If not, wouldn't it be simpler to only use base pages while populating the
> vmemmap? I mean, once we enable the Kconfig option you add for VMEMMAP_FREE,
> we would only use base pages to place the "page struct"s and never split a
> PMD into base pages afterwards.
>
> One negative side effect might be that base pages would also be used for
> those pages which won't be hugetlb later. But if most pages of the host will
> be hugetlb for guests and SPDK, it shouldn't hurt too much.

Yeah, I agree with you. If the user uses a lot of HugeTLB pages (e.g.
SPDK/guest), it shouldn't hurt too much. And using base pages while
populating the vmemmap would also avoid the overhead of splitting the
PMD. In the end, if we don't come up with a more suitable solution for
the 2MB HugeTLB page (mentioned above), maybe this is also an option.

Thanks.

>
> Or at least this could be done for hugetlb reserved on the cmdline?
>
> > +{
> > +     int i;
> > +     pgprot_t pgprot = PAGE_KERNEL;
> > +     struct mm_struct *mm = &init_mm;
> > +     struct page *page;
> > +     pmd_t old_pmd, _pmd;
> > +
> > +     old_pmd = READ_ONCE(*pmd);
> > +     page = pmd_page(old_pmd);
> > +     pmd_populate_kernel(mm, &_pmd, pte_p);
> > +
> > +     for (i = 0; i < VMEMMAP_HPAGE_NR; i++, addr += PAGE_SIZE) {
> > +             pte_t entry, *pte;
> > +
> > +             entry = mk_pte(page + i, pgprot);
> > +             pte = pte_offset_kernel(&_pmd, addr);
> > +             VM_BUG_ON(!pte_none(*pte));
> > +             set_pte_at(mm, addr, pte, entry);
> > +     }
> > +
> > +     /* make pte visible before pmd */
> > +     smp_wmb();
> > +     pmd_populate_kernel(mm, pmd, pte_p);
> > +}
> > +
> > +static void split_vmemmap_huge_page(struct page *head, pmd_t *pmd)
> > +{
> > +     struct page *pte_page, *t_page;
> > +     unsigned long start = (unsigned long)head & VMEMMAP_HPAGE_MASK;
> > +     unsigned long addr = start;
> > +
> > +     list_for_each_entry_safe(pte_page, t_page, &head->lru, lru) {
> > +             list_del(&pte_page->lru);
> > +             VM_BUG_ON(freed_vmemmap_hpage(pte_page));
> > +             split_vmemmap_pmd(pmd++, page_to_virt(pte_page), addr);
> > +             addr += VMEMMAP_HPAGE_SIZE;
> > +     }
> > +
> > +     flush_tlb_kernel_range(start, addr);
> > +}
> > +
> > +void free_huge_page_vmemmap(struct hstate *h, struct page *head)
> > +{
> > +     pmd_t *pmd;
> > +     spinlock_t *ptl;
> > +     LIST_HEAD(free_pages);
> > +
> > +     if (!free_vmemmap_pages_per_hpage(h))
> > +             return;
> > +
> > +     pmd = vmemmap_to_pmd((unsigned long)head);
> > +     BUG_ON(!pmd);
> > +
> > +     ptl = vmemmap_pmd_lock(pmd);
> > +     if (vmemmap_pmd_huge(pmd))
> > +             split_vmemmap_huge_page(head, pmd);
> > +
> > +     __free_huge_page_pmd_vmemmap(h, pmd, (unsigned long)head, &free_pages);
> > +     freed_vmemmap_hpage_inc(pmd_page(*pmd));
> > +     spin_unlock(ptl);
> > +
> > +     free_vmemmap_page_list(&free_pages);
> > +}
> > +
> >  void __init hugetlb_vmemmap_init(struct hstate *h)
> >  {
> >       unsigned int order = huge_page_order(h);
> > diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
> > index 2a72d2f62411..fb8b77659ed5 100644
> > --- a/mm/hugetlb_vmemmap.h
> > +++ b/mm/hugetlb_vmemmap.h
> > @@ -15,6 +15,7 @@
> >  void __init hugetlb_vmemmap_init(struct hstate *h);
> >  int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page);
> >  void vmemmap_pgtable_free(struct page *page);
> > +void free_huge_page_vmemmap(struct hstate *h, struct page *head);
> >  #else
> >  static inline void hugetlb_vmemmap_init(struct hstate *h)
> >  {
> > @@ -28,5 +29,9 @@ static inline int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
> >  static inline void vmemmap_pgtable_free(struct page *page)
> >  {
> >  }
> > +
> > +static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
> > +{
> > +}
> >  #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
> >  #endif /* _LINUX_HUGETLB_VMEMMAP_H */
> > --
> > 2.11.0
> >
>
> Thanks
> Barry
>


-- 
Yours,
Muchun

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [External] RE: [PATCH v4 00/21] Free some vmemmap pages of hugetlb page
  2020-11-17 10:15 ` [PATCH v4 00/21] Free some vmemmap pages of hugetlb page Song Bao Hua (Barry Song)
@ 2020-11-17 10:49   ` Muchun Song
  2020-11-17 11:07     ` Song Bao Hua (Barry Song)
  0 siblings, 1 reply; 49+ messages in thread
From: Muchun Song @ 2020-11-17 10:49 UTC (permalink / raw)
  To: Song Bao Hua (Barry Song)
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, duanxiongchun,
	linux-doc, linux-kernel, linux-mm, linux-fsdevel

On Tue, Nov 17, 2020 at 6:16 PM Song Bao Hua (Barry Song)
<song.bao.hua@hisilicon.com> wrote:
>
>
>
> > -----Original Message-----
> > From: owner-linux-mm@kvack.org [mailto:owner-linux-mm@kvack.org] On
> > Behalf Of Muchun Song
> > Sent: Saturday, November 14, 2020 12:00 AM
> > To: [maintainer and list addresses snipped]
> > Subject: [PATCH v4 00/21] Free some vmemmap pages of hugetlb page
> >
> > [... cover letter intro and the two mapping diagrams snipped; see the top of this thread ...]
> >
> > For tail pages, the value of compound_head is the same. So we can reuse
> > the first page of tail page structs. We map the virtual addresses of the
> > remaining 6 pages of tail page structs to the first tail page struct,
> > and then free these 6 pages. Therefore, we need to reserve at least 2
> > pages as vmemmap areas.
> >
> > When a hugetlbpage is freed to the buddy system, we should allocate six
> > pages for vmemmap pages and restore the previous mapping relationship.
> >
> > If we use the 1G hugetlbpage, we can save 4088 pages (there are 4096 pages
> > for struct page structures; we reserve 2 pages for vmemmap and 8 pages for
> > page tables, so we can save 4088 pages). This is a very substantial gain.
> > On our server, we run some SPDK/QEMU applications which will use 1024GB of
> > hugetlbpages. With this feature enabled, we can save ~16GB (1G hugepage) /
> > ~11GB (2MB hugepage)
>
> Hi Muchun,
>
> Do we really save 11GB for 2MB hugepage?
> How much do we save if we only get one 2MB hugetlb from one 128MB mem_section?
> It seems we need to get at least one page for the PTEs since we are
> splitting the vmemmap PMD into PTEs?

There are 524288 (1024GB/2MB) 2MB HugeTLB pages. We can save 6 pages for
each 2MB HugeTLB page, so we can save 3145728 pages. But we need to split
the PMD page table for every 128MB mem_section, and every section needs one
page as a PTE page table. So we need 8192 (1024GB/128MB) pages as PTE page
tables. Finally, we can save 3137536 (3145728 - 8192) pages, which is
11.97GB.
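Spelled out:

  524288 hugepages * 6 freed vmemmap pages  = 3145728 pages
  8192 mem_sections * 1 PTE page table page =    8192 pages
  (3145728 - 8192) * 4KB = 3137536 * 4KB    ~= 11.97GB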

Thanks Barry.

>
> > memory.
> >
> > Because there is vmemmap page table reconstruction on the
> > freeing/allocating path, it adds some overhead. Here is some overhead
> > analysis.
> >
> > 1) Allocating 10240 2MB hugetlb pages.
> >
> >    a) With this patch series applied:
> >    # time echo 10240 > /proc/sys/vm/nr_hugepages
> >
> >    real     0m0.166s
> >    user     0m0.000s
> >    sys      0m0.166s
> >
> >    # bpftrace -e 'kprobe:alloc_fresh_huge_page { @start[tid] = nsecs; }
> >      kretprobe:alloc_fresh_huge_page /@start[tid]/ { @latency = hist(nsecs -
> >      @start[tid]); delete(@start[tid]); }'
> >    Attaching 2 probes...
> >
> >    @latency:
> >    [8K, 16K)           8360 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
> >    [16K, 32K)          1868 |@@@@@@@@@@@                                         |
> >    [32K, 64K)            10 |                                                    |
> >    [64K, 128K)            2 |                                                    |
> >
> >    b) Without this patch series:
> >    # time echo 10240 > /proc/sys/vm/nr_hugepages
> >
> >    real     0m0.066s
> >    user     0m0.000s
> >    sys      0m0.066s
> >
> >    # bpftrace -e 'kprobe:alloc_fresh_huge_page { @start[tid] = nsecs; }
> >      kretprobe:alloc_fresh_huge_page /@start[tid]/ { @latency = hist(nsecs -
> >      @start[tid]); delete(@start[tid]); }'
> >    Attaching 2 probes...
> >
> >    @latency:
> >    [4K, 8K)           10176 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
> >    [8K, 16K)             62 |                                                    |
> >    [16K, 32K)             2 |                                                    |
> >
> >    Summary: allocation with this feature is about ~2x slower than before.
> >
> > 2) Freeing 10240 2MB hugetlb pages.
> >
> >    a) With this patch series applied:
> >    # time echo 0 > /proc/sys/vm/nr_hugepages
> >
> >    real     0m0.004s
> >    user     0m0.000s
> >    sys      0m0.002s
> >
>
> Something is wrong here, it is faster than the case without this patchset:
> 0m0.004s vs. 0m0.077s

Yeah, it is faster. Why is the 'real' time of the patched kernel smaller
than before? Because in this patch series, the freeing of HugeTLB pages is
asynchronous (done through a worker).

>
> >    # bpftrace -e 'kprobe:__free_hugepage { @start[tid] = nsecs; }
> >      kretprobe:__free_hugepage /@start[tid]/ { @latency = hist(nsecs - @start[tid]);
> >      delete(@start[tid]); }'
> >    Attaching 2 probes...
> >
> >    @latency:
> >    [16K, 32K)         10240 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
> >
> >    b) Without this patch series:
> >    # time echo 0 > /proc/sys/vm/nr_hugepages
> >
> >    real     0m0.077s
> >    user     0m0.001s
> >    sys      0m0.075s
> >
> >    # bpftrace -e 'kprobe:__free_hugepage { @start[tid] = nsecs; }
> >      kretprobe:__free_hugepage /@start[tid]/ { @latency = hist(nsecs - @start[tid]);
> >      delete(@start[tid]); }'
> >    Attaching 2 probes...
> >
> >    @latency:
> >    [4K, 8K)            9950 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
> >    [8K, 16K)            287 |@                                                   |
> >    [16K, 32K)             3 |                                                    |
> >
> >    Summary: The overhead of __free_hugepage is about ~2-4x slower than
> >             before. But according to the allocation test above, I think
> >             that here it is also ~2x slower than before.
> >
> >             But why is the 'real' time of the patched kernel smaller than
> >             before? Because in this patch series, the freeing of hugetlb
> >             pages is asynchronous (done through a kworker).
> >
> > Although the overhead has increased, the overhead is not significant. Like Mike
> > said, "However, remember that the majority of use cases create hugetlb pages
> > at
> > or shortly after boot time and add them to the pool. So, additional overhead is
> > at pool creation time. There is no change to 'normal run time' operations of
> > getting a page from or returning a page to the pool (think page fault/unmap)".
> >
>
> It seems it is true. At runtime, people normally don't change hugetlb.
>
> >   changelog in v4:
> >   1. Move all the vmemmap functions to hugetlb_vmemmap.c.
> >   2. Make the CONFIG_HUGETLB_PAGE_FREE_VMEMMAP default to y; if we want
> >      to disable this feature, we should disable it via a boot/kernel
> >      command line parameter.
> >   3. Remove vmemmap_pgtable_{init, deposit, withdraw}() helper functions.
> >   4. Initialize page table lock for vmemmap through core_initcall mechanism.
> >
> >   Thanks for Mike and Oscar's suggestions.
> >
> >   changelog in v3:
> >   1. Rename some helper function names. Thanks Mike.
> >   2. Rework some code. Thanks Mike and Oscar.
> >   3. Remap the tail vmemmap page with PAGE_KERNEL_RO instead of
> >      PAGE_KERNEL. Thanks Matthew.
> >   4. Add some overhead analysis in the cover letter.
> >   5. Use the vmemmap pmd table lock instead of a hugetlb-specific global lock.
> >
> >   changelog in v2:
> >   1. Fix: do not call dissolve_compound_page in alloc_huge_page_vmemmap().
> >   2. Fix some typo and code style problems.
> >   3. Remove unused handle_vmemmap_fault().
> >   4. Merge some commits to one commit suggested by Mike.
> >
> > Muchun Song (21):
> >   mm/memory_hotplug: Move bootmem info registration API to
> >     bootmem_info.c
> >   mm/memory_hotplug: Move {get,put}_page_bootmem() to bootmem_info.c
> >   mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
> >   mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
> >   mm/hugetlb: Introduce pgtable allocation/freeing helpers
> >   mm/bootmem_info: Introduce {free,prepare}_vmemmap_page()
> >   mm/bootmem_info: Combine bootmem info and type into page->freelist
> >   mm/hugetlb: Initialize page table lock for vmemmap
> >   mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
> >   mm/hugetlb: Defer freeing of hugetlb pages
> >   mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb
> >     page
> >   mm/hugetlb: Introduce remap_huge_page_pmd_vmemmap helper
> >   mm/hugetlb: Use PG_slab to indicate split pmd
> >   mm/hugetlb: Support freeing vmemmap pages of gigantic page
> >   mm/hugetlb: Set the PageHWPoison to the raw error page
> >   mm/hugetlb: Flush work when dissolving hugetlb page
> >   mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
> >   mm/hugetlb: Merge pte to huge pmd only for gigantic page
> >   mm/hugetlb: Gather discrete indexes of tail page
> >   mm/hugetlb: Add BUILD_BUG_ON to catch invalid usage of tail struct
> >     page
> >   mm/hugetlb: Disable freeing vmemmap if struct page size is not power
> >     of two
> >
> >  Documentation/admin-guide/kernel-parameters.txt |   9 +
> >  Documentation/admin-guide/mm/hugetlbpage.rst    |   3 +
> >  arch/x86/include/asm/hugetlb.h                  |  17 +
> >  arch/x86/include/asm/pgtable_64_types.h         |   8 +
> >  arch/x86/mm/init_64.c                           |   7 +-
> >  fs/Kconfig                                      |  14 +
> >  include/linux/bootmem_info.h                    |  78 +++
> >  include/linux/hugetlb.h                         |  19 +
> >  include/linux/hugetlb_cgroup.h                  |  15 +-
> >  include/linux/memory_hotplug.h                  |  27 -
> >  mm/Makefile                                     |   2 +
> >  mm/bootmem_info.c                               | 124 ++++
> >  mm/hugetlb.c                                    | 163 +++++-
> >  mm/hugetlb_vmemmap.c                            | 732 ++++++++++++++++++++++++
> >  mm/hugetlb_vmemmap.h                            | 104 ++++
> >  mm/memory_hotplug.c                             | 116 ----
> >  mm/sparse.c                                     |   5 +-
> >  17 files changed, 1263 insertions(+), 180 deletions(-)
> >  create mode 100644 include/linux/bootmem_info.h
> >  create mode 100644 mm/bootmem_info.c
> >  create mode 100644 mm/hugetlb_vmemmap.c
> >  create mode 100644 mm/hugetlb_vmemmap.h
> >
>
> Thanks
> Barry
>


-- 
Yours,
Muchun

^ permalink raw reply	[flat|nested] 49+ messages in thread

* RE: [External] RE: [PATCH v4 00/21] Free some vmemmap pages of hugetlb page
  2020-11-17 10:49   ` [External] " Muchun Song
@ 2020-11-17 11:07     ` Song Bao Hua (Barry Song)
  2020-11-17 16:29       ` Muchun Song
  0 siblings, 1 reply; 49+ messages in thread
From: Song Bao Hua (Barry Song) @ 2020-11-17 11:07 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, duanxiongchun,
	linux-doc, linux-kernel, linux-mm, linux-fsdevel



> -----Original Message-----
> From: Muchun Song [mailto:songmuchun@bytedance.com]
> Sent: Tuesday, November 17, 2020 11:50 PM
> To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> Cc: [maintainer and list addresses snipped]
> Subject: Re: [External] RE: [PATCH v4 00/21] Free some vmemmap pages of
> hugetlb page
> 
> On Tue, Nov 17, 2020 at 6:16 PM Song Bao Hua (Barry Song)
> <song.bao.hua@hisilicon.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: owner-linux-mm@kvack.org [mailto:owner-linux-mm@kvack.org] On
> > > Behalf Of Muchun Song
> > > Sent: Saturday, November 14, 2020 12:00 AM
> > > To: [maintainer and list addresses snipped]
> > > Subject: [PATCH v4 00/21] Free some vmemmap pages of hugetlb page
> > >
> > > [... cover letter snipped ...]
> >
> > Hi Muchun,
> >
> > Do we really save 11GB for 2MB hugepage?
> > How much do we save if we only get one 2MB hugetlb from one 128MB
> > mem_section?
> > It seems we need to get at least one page for the PTEs since we are
> > splitting the vmemmap PMD into PTEs?
> 
> There are 524288 (1024GB/2MB) 2MB HugeTLB pages. We can save 6 pages for
> each 2MB HugeTLB page, so we can save 3145728 pages. But we need to split
> the PMD page table for every 128MB mem_section, and every section needs
> one page as a PTE page table. So we need 8192 (1024GB/128MB) pages as PTE
> page tables. Finally, we can save 3137536 (3145728 - 8192) pages, which is
> 11.97GB.

The worst case I can see is this:
if we get 100 hugetlb pages of 2MB size, but the 100 hugetlb pages come from
different mem_sections, each one needs its own PTE page table page, so we
won't save 11.97GB; we only save 5/8 * 16GB = 10GB.

Anyway, 11GB is between 10GB and 11.97GB,
so it sounds sensible :-)

Ideally, we should be able to free the PageTail pages too if we change struct
page in some way. Then we would save much more for 2MB hugetlb, but it seems
it is not easy.
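Spelled out, assuming every 2MB hugetlb sits in its own mem_section:

  per 2MB hugetlb: 8 vmemmap pages, 6 freed, 1 consumed as a PTE page -> net 5
  total vmemmap for 1024GB of 2MB hugetlb: 8 * 524288 * 4KB = 16GB
  worst-case saving: 5/8 * 16GB = 10GB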

Thanks
Barry

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v4 05/21] mm/hugetlb: Introduce pgtable allocation/freeing helpers
  2020-11-13 10:59 ` [PATCH v4 05/21] mm/hugetlb: Introduce pgtable allocation/freeing helpers Muchun Song
@ 2020-11-17 15:06   ` Oscar Salvador
  2020-11-17 15:29     ` [External] " Muchun Song
  2020-11-19  6:17     ` Muchun Song
  2020-11-19 23:37   ` Mike Kravetz
  1 sibling, 2 replies; 49+ messages in thread
From: Oscar Salvador @ 2020-11-17 15:06 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel

On Fri, Nov 13, 2020 at 06:59:36PM +0800, Muchun Song wrote:
> +#define page_huge_pte(page)		((page)->pmd_huge_pte)

Seems you do not need this one anymore.

> +void vmemmap_pgtable_free(struct page *page)
> +{
> +	struct page *pte_page, *t_page;
> +
> +	list_for_each_entry_safe(pte_page, t_page, &page->lru, lru) {
> +		list_del(&pte_page->lru);
> +		pte_free_kernel(&init_mm, page_to_virt(pte_page));
> +	}
> +}
> +
> +int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
> +{
> +	unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);
> +
> +	/* Store preallocated pages on huge page lru list */
> +	INIT_LIST_HEAD(&page->lru);
> +
> +	while (nr--) {
> +		pte_t *pte_p;
> +
> +		pte_p = pte_alloc_one_kernel(&init_mm);
> +		if (!pte_p)
> +			goto out;
> +		list_add(&virt_to_page(pte_p)->lru, &page->lru);
> +	}

Definitely this looks better and easier to handle.
Btw, did you explore Matthew's hint about instead of allocating a new page,
using one of the ones you are going to free to store the ptes?
I am not sure whether it is feasible at all though.


> --- a/mm/hugetlb_vmemmap.h
> +++ b/mm/hugetlb_vmemmap.h
> @@ -9,12 +9,24 @@
>  #ifndef _LINUX_HUGETLB_VMEMMAP_H
>  #define _LINUX_HUGETLB_VMEMMAP_H
>  #include <linux/hugetlb.h>
> +#include <linux/mm.h>

why do we need this here?

-- 
Oscar Salvador
SUSE L3

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [External] Re: [PATCH v4 05/21] mm/hugetlb: Introduce pgtable allocation/freeing helpers
  2020-11-17 15:06   ` Oscar Salvador
@ 2020-11-17 15:29     ` Muchun Song
  2020-11-19  6:17     ` Muchun Song
  1 sibling, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-17 15:29 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko, Xiongchun duan,
	linux-doc, LKML, Linux Memory Management List, linux-fsdevel

On Tue, Nov 17, 2020 at 11:06 PM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Fri, Nov 13, 2020 at 06:59:36PM +0800, Muchun Song wrote:
> > +#define page_huge_pte(page)          ((page)->pmd_huge_pte)
>
> Seems you do not need this one anymore.

Yeah, I forgot to remove it. Thanks.
>
> > +void vmemmap_pgtable_free(struct page *page)
> > +{
> > +     struct page *pte_page, *t_page;
> > +
> > +     list_for_each_entry_safe(pte_page, t_page, &page->lru, lru) {
> > +             list_del(&pte_page->lru);
> > +             pte_free_kernel(&init_mm, page_to_virt(pte_page));
> > +     }
> > +}
> > +
> > +int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
> > +{
> > +     unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);
> > +
> > +     /* Store preallocated pages on huge page lru list */
> > +     INIT_LIST_HEAD(&page->lru);
> > +
> > +     while (nr--) {
> > +             pte_t *pte_p;
> > +
> > +             pte_p = pte_alloc_one_kernel(&init_mm);
> > +             if (!pte_p)
> > +                     goto out;
> > +             list_add(&virt_to_page(pte_p)->lru, &page->lru);
> > +     }
>
> Definitely this looks better and easier to handle.
> Btw, did you explore Matthew's hint about instead of allocating a new page,
> using one of the ones you are going to free to store the ptes?

Oh, sorry for missing his reply. It is a good idea. I will start an
investigation. Thanks for reminding me.
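Something like this is probably what I would try first (untested sketch,
function name made up): when freeing the vmemmap pages of a huge page, pop
one of the pages already queued for freeing and reuse it as the PTE page
table for the remapping, instead of preallocating one:

static pte_t *vmemmap_pte_from_freed_page(struct list_head *free_pages)
{
	struct page *page;

	/* Take one page that was about to go back to the buddy system. */
	page = list_first_entry(free_pages, struct page, lru);
	list_del(&page->lru);

	/* Reinitialize it as an empty kernel PTE page table. */
	memset(page_to_virt(page), 0, PAGE_SIZE);
	return (pte_t *)page_to_virt(page);
}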

> I am not sure whether it is feasible at all though.
>
>
> > --- a/mm/hugetlb_vmemmap.h
> > +++ b/mm/hugetlb_vmemmap.h
> > @@ -9,12 +9,24 @@
> >  #ifndef _LINUX_HUGETLB_VMEMMAP_H
> >  #define _LINUX_HUGETLB_VMEMMAP_H
> >  #include <linux/hugetlb.h>
> > +#include <linux/mm.h>
>
> why do we need this here?

Yeah, that one can also be removed :).


>
> --
> Oscar Salvador
> SUSE L3



--
Yours,
Muchun

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [External] RE: [PATCH v4 00/21] Free some vmemmap pages of hugetlb page
  2020-11-17 11:07     ` Song Bao Hua (Barry Song)
@ 2020-11-17 16:29       ` Muchun Song
  2020-11-17 19:22         ` Matthew Wilcox
  2020-11-17 19:45         ` Oscar Salvador
  0 siblings, 2 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-17 16:29 UTC (permalink / raw)
  To: Song Bao Hua (Barry Song)
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, duanxiongchun,
	linux-doc, linux-kernel, linux-mm, linux-fsdevel

On Tue, Nov 17, 2020 at 7:08 PM Song Bao Hua (Barry Song)
<song.bao.hua@hisilicon.com> wrote:
>
>
>
> > -----Original Message-----
> > From: Muchun Song [mailto:songmuchun@bytedance.com]
> > Sent: Tuesday, November 17, 2020 11:50 PM
> > To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> > Cc: [maintainer and list addresses snipped]
> > Subject: Re: [External] RE: [PATCH v4 00/21] Free some vmemmap pages of
> > hugetlb page
> >
> > On Tue, Nov 17, 2020 at 6:16 PM Song Bao Hua (Barry Song)
> > <song.bao.hua@hisilicon.com> wrote:
> > >
> > >
> > >
> > > > -----Original Message-----
> > > > From: owner-linux-mm@kvack.org [mailto:owner-linux-mm@kvack.org] On
> > > > Behalf Of Muchun Song
> > > > Sent: Saturday, November 14, 2020 12:00 AM
> > > > To: [maintainer and list addresses snipped]
> > > > Subject: [PATCH v4 00/21] Free some vmemmap pages of hugetlb page
> > > >
> > > > [... cover letter snipped ...]
> > >
> > > Hi Muchun,
> > >
> > > Do we really save 11GB for 2MB hugepage?
> > > How much do we save if we only get one 2MB hugetlb from one 128MB
> > > mem_section?
> > > It seems we need to get at least one page for the PTEs since we are
> > > splitting the vmemmap PMD into PTEs?
> >
> > There are 524288 (1024GB/2MB) 2MB HugeTLB pages. We can save 6 pages for
> > each 2MB HugeTLB page, so we can save 3145728 pages. But we need to split
> > the PMD page table for every 128MB mem_section, and every section needs
> > one page as a PTE page table. So we need 8192 (1024GB/128MB) pages as PTE
> > page tables. Finally, we can save 3137536 (3145728 - 8192) pages, which is
> > 11.97GB.
>
> The worst case I can see is this:
> if we get 100 hugetlb pages of 2MB size, but the 100 hugetlb pages come from
> different mem_sections, each one needs its own PTE page table page, so we
> won't save 11.97GB; we only save 5/8 * 16GB = 10GB.
>
> Anyway, 11GB is between 10GB and 11.97GB,
> so it sounds sensible :-)
>
> Ideally, we should be able to free the PageTail pages too if we change struct
> page in some way. Then we would save much more for 2MB hugetlb, but it seems
> it is not easy.

Now for the 2MB HugeTLB page, we only free 6 vmemmap pages.
But your words woke me up. Maybe we really can free 7 vmemmap
pages. In this case, we would see that 8 of the 512 struct page structures
have the PG_head flag set. If we can adjust compound_head()
slightly and make compound_head() return the real head struct
page when the parameter is a tail struct page that nevertheless has the
PG_head flag set, this could work. I will start an investigation and a test.
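For example (assuming a 64-byte struct page and 4KB base pages, i.e. 64
struct pages per vmemmap page): if all 7 tail vmemmap pages are remapped
to the first one, the struct page that carries PG_head reappears at
indices 0, 64, 128, ..., 448 of the 512, and only the one at index 0 is
the real head.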

Thanks.

>
> Thanks
> Barry



-- 
Yours,
Muchun

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [External] RE: [PATCH v4 00/21] Free some vmemmap pages of hugetlb page
  2020-11-17 16:29       ` Muchun Song
@ 2020-11-17 19:22         ` Matthew Wilcox
  2020-11-18  2:43           ` Muchun Song
  2020-11-17 19:45         ` Oscar Salvador
  1 sibling, 1 reply; 49+ messages in thread
From: Matthew Wilcox @ 2020-11-17 19:22 UTC (permalink / raw)
  To: Muchun Song
  Cc: Song Bao Hua (Barry Song),
	corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, osalvador, mhocko, duanxiongchun,
	linux-doc, linux-kernel, linux-mm, linux-fsdevel

On Wed, Nov 18, 2020 at 12:29:07AM +0800, Muchun Song wrote:
> > Ideally, we should be able to free the PageTail pages too if we change struct
> > page in some way. Then we would save much more for 2MB hugetlb, but it seems
> > it is not easy.
> 
> Now for the 2MB HugeTLB page, we only free 6 vmemmap pages.
> But your words woke me up. Maybe we really can free 7 vmemmap
> pages. In this case, we would see that 8 of the 512 struct page structures
> have the PG_head flag set. If we can adjust compound_head()
> slightly and make compound_head() return the real head struct
> page when the parameter is a tail struct page that nevertheless has the
> PG_head flag set, this could work. I will start an investigation and a test.

What are you thinking?

static inline struct page *compound_head(struct page *page)
{
        unsigned long head = READ_ONCE(page->compound_head);

        if (unlikely(head & 1))
                return (struct page *) (head - 1);
+	if (unlikely(test_bit(PG_head, &page->flags)))
+		return (struct page *)(page[1].compound_head - 1);
        return page;
}

... because if it's that, there are code paths which also just test
PageHead, and so we'd actually need to change PageHead to be something
like:

static inline bool PageHead(struct page *page)
{
	return test_bit(PG_head, &page->flags) &&
		(page[1].compound_head == (unsigned long)page + 1);
}

I'm not sure if that's worth doing -- there may be other things I
haven't thought of.

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [External] RE: [PATCH v4 00/21] Free some vmemmap pages of hugetlb page
  2020-11-17 16:29       ` Muchun Song
  2020-11-17 19:22         ` Matthew Wilcox
@ 2020-11-17 19:45         ` Oscar Salvador
  2020-11-18  3:27           ` Muchun Song
  2020-11-18  3:27           ` Song Bao Hua (Barry Song)
  1 sibling, 2 replies; 49+ messages in thread
From: Oscar Salvador @ 2020-11-17 19:45 UTC (permalink / raw)
  To: Muchun Song
  Cc: Song Bao Hua (Barry Song),
	corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel

On 2020-11-17 17:29, Muchun Song wrote:
> Now for the 2MB HugeTLB page, we only free 6 vmemmap pages.
> But your words woke me up. Maybe we really can free 7 vmemmap
> pages. In this case, we would see that 8 of the 512 struct page structures
> have the PG_head flag set. If we can adjust compound_head()
> slightly and make compound_head() return the real head struct
> page when the parameter is a tail struct page that nevertheless has the
> PG_head flag set, this could work. I will start an investigation and a test.

I would not overcomplicate things at this stage, but rather keep it
simple, as the code is already tricky enough (without counting the LOC
that it adds).
We can always build on top later on in order to improve things.

-- 
Oscar Salvador
SUSE L3

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [External] RE: [PATCH v4 00/21] Free some vmemmap pages of hugetlb page
  2020-11-17 19:22         ` Matthew Wilcox
@ 2020-11-18  2:43           ` Muchun Song
  0 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-18  2:43 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Song Bao Hua (Barry Song),
	corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, osalvador, mhocko, duanxiongchun,
	linux-doc, linux-kernel, linux-mm, linux-fsdevel

On Wed, Nov 18, 2020 at 3:22 AM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Wed, Nov 18, 2020 at 12:29:07AM +0800, Muchun Song wrote:
> > > Ideally, we should be able to free the PageTail pages too if we change struct
> > > page in some way. Then we would save much more for 2MB hugetlb, but it seems
> > > it is not easy.
> >
> > Now for the 2MB HugeTLB page, we only free 6 vmemmap pages.
> > But your words woke me up. Maybe we really can free 7 vmemmap
> > pages. In this case, we would see that 8 of the 512 struct page structures
> > have the PG_head flag set. If we can adjust compound_head()
> > slightly and make compound_head() return the real head struct
> > page when the parameter is a tail struct page that nevertheless has the
> > PG_head flag set, this could work. I will start an investigation and a test.
>
> What are you thinking?
>
> static inline struct page *compound_head(struct page *page)
> {
>         unsigned long head = READ_ONCE(page->compound_head);
>
>         if (unlikely(head & 1))
>                 return (struct page *) (head - 1);
> +       if (unlikely(test_bit(PG_head, &page->flags)))
> +               return (struct page *)(page[1].compound_head - 1);

Yeah, I think so too. Maybe adding an align check is better.

+         if (test_bit(PG_head, &page->flags) &&
+             IS_ALIGNED((unsigned long)page, PAGE_SIZE))

>         return page;
> }
>
> ... because if it's that, there are code paths which also just test
> PageHead, and so we'd actually need to change PageHead to be something
> like:

Yeah, I also think that reworking compound_head() and PageHead() is enough.

Thanks.

>
> static inline bool PageHead(struct page *page)
> {
>         return test_bit(PG_head, &page->flags) &&
>                 (page[1].compound_head == (unsigned long)page + 1);
> }
>
> I'm not sure if that's worth doing -- there may be other things I
> haven't thought of.



-- 
Yours,
Muchun

^ permalink raw reply	[flat|nested] 49+ messages in thread

* RE: [External] RE: [PATCH v4 09/21] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
  2020-11-17 10:26     ` [External] " Muchun Song
@ 2020-11-18  3:21       ` Song Bao Hua (Barry Song)
  0 siblings, 0 replies; 49+ messages in thread
From: Song Bao Hua (Barry Song) @ 2020-11-18  3:21 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, duanxiongchun,
	linux-doc, linux-kernel, linux-mm, linux-fsdevel



> -----Original Message-----
> From: Muchun Song [mailto:songmuchun@bytedance.com]
> Sent: Tuesday, November 17, 2020 11:27 PM
> To: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>
> Cc: [maintainer and list addresses snipped]
> Subject: Re: [External] RE: [PATCH v4 09/21] mm/hugetlb: Free the vmemmap
> pages associated with each hugetlb page
> 
> On Tue, Nov 17, 2020 at 5:55 PM Song Bao Hua (Barry Song)
> <song.bao.hua@hisilicon.com> wrote:
> >
> >
> >
> > > -----Original Message-----
> > > From: owner-linux-mm@kvack.org [mailto:owner-linux-mm@kvack.org] On
> > > Behalf Of Muchun Song
> > > Sent: Saturday, November 14, 2020 12:00 AM
> > > To: [maintainer and list addresses snipped]
> > > Subject: [PATCH v4 09/21] mm/hugetlb: Free the vmemmap pages associated
> > > with each hugetlb page
> > >
> > > When we allocate a hugetlb page from the buddy, we should free the
> > > unused vmemmap pages associated with it. We can do that in the
> > > prep_new_huge_page().
> > >
> > > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > > ---
> > >  arch/x86/include/asm/hugetlb.h          |   9 ++
> > >  arch/x86/include/asm/pgtable_64_types.h |   8 ++
> > >  mm/hugetlb.c                            |  16 +++
> > >  mm/hugetlb_vmemmap.c                    | 188 ++++++++++++++++++++++++++++++++
> > >  mm/hugetlb_vmemmap.h                    |   5 +
> > >  5 files changed, 226 insertions(+)
> > >
> > > diff --git a/arch/x86/include/asm/hugetlb.h b/arch/x86/include/asm/hugetlb.h
> > > index 1721b1aadeb1..c601fe042832 100644
> > > --- a/arch/x86/include/asm/hugetlb.h
> > > +++ b/arch/x86/include/asm/hugetlb.h
> > > @@ -4,6 +4,15 @@
> > >
> > >  #include <asm/page.h>
> > >  #include <asm-generic/hugetlb.h>
> > > +#include <asm/pgtable.h>
> > > +
> > > +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> > > +#define vmemmap_pmd_huge vmemmap_pmd_huge
> > > +static inline bool vmemmap_pmd_huge(pmd_t *pmd)
> > > +{
> > > +     return pmd_large(*pmd);
> > > +}
> > > +#endif
> > >
> > >  #define hugepages_supported() boot_cpu_has(X86_FEATURE_PSE)
> > >
> > > diff --git a/arch/x86/include/asm/pgtable_64_types.h b/arch/x86/include/asm/pgtable_64_types.h
> > > index 52e5f5f2240d..bedbd2e7d06c 100644
> > > --- a/arch/x86/include/asm/pgtable_64_types.h
> > > +++ b/arch/x86/include/asm/pgtable_64_types.h
> > > @@ -139,6 +139,14 @@ extern unsigned int ptrs_per_p4d;
> > >  # define VMEMMAP_START               __VMEMMAP_BASE_L4
> > >  #endif /* CONFIG_DYNAMIC_MEMORY_LAYOUT */
> > >
> > > +/*
> > > + * VMEMMAP_SIZE - allows the whole linear region to be covered by
> > > + *                a struct page array.
> > > + */
> > > +#define VMEMMAP_SIZE         (1UL << (__VIRTUAL_MASK_SHIFT - PAGE_SHIFT - \
> > > +                                      1 + ilog2(sizeof(struct page))))
> > > +#define VMEMMAP_END          (VMEMMAP_START + VMEMMAP_SIZE)
> > > +
> > >  #define VMALLOC_END          (VMALLOC_START + (VMALLOC_SIZE_TB << 40) - 1)
> > >
> > >  #define MODULES_VADDR                (__START_KERNEL_map + KERNEL_IMAGE_SIZE)
> > > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > > index f88032c24667..a0ce6f33a717 100644
> > > --- a/mm/hugetlb.c
> > > +++ b/mm/hugetlb.c
> > > @@ -1499,6 +1499,14 @@ void free_huge_page(struct page *page)
> > >
> > >  static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
> > >  {
> > > +     free_huge_page_vmemmap(h, page);
> > > +     /*
> > > +      * Because we store preallocated pages on @page->lru,
> > > +      * vmemmap_pgtable_free() must be called before the
> > > +      * initialization of @page->lru in INIT_LIST_HEAD().
> > > +      */
> > > +     vmemmap_pgtable_free(page);
> > > +
> > >       INIT_LIST_HEAD(&page->lru);
> > >       set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
> > >       set_hugetlb_cgroup(page, NULL);
> > > @@ -1751,6 +1759,14 @@ static struct page *alloc_fresh_huge_page(struct hstate *h,
> > >       if (!page)
> > >               return NULL;
> > >
> > > +     if (vmemmap_pgtable_prealloc(h, page)) {
> > > +             if (hstate_is_gigantic(h))
> > > +                     free_gigantic_page(page, huge_page_order(h));
> > > +             else
> > > +                     put_page(page);
> > > +             return NULL;
> > > +     }
> > > +
> > >       if (hstate_is_gigantic(h))
> > >               prep_compound_gigantic_page(page, huge_page_order(h));
> > >       prep_new_huge_page(h, page, page_to_nid(page));
> > > diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> > > index 332c131c01a8..937562a15f1e 100644
> > > --- a/mm/hugetlb_vmemmap.c
> > > +++ b/mm/hugetlb_vmemmap.c
> > > @@ -74,6 +74,7 @@
> > >  #include <linux/pagewalk.h>
> > >  #include <linux/mmzone.h>
> > >  #include <linux/list.h>
> > > +#include <linux/bootmem_info.h>
> > >  #include <asm/pgalloc.h>
> > >  #include "hugetlb_vmemmap.h"
> > >
> > > @@ -86,6 +87,8 @@
> > >   * reserve at least 2 pages as vmemmap areas.
> > >   */
> > >  #define RESERVE_VMEMMAP_NR           2U
> > > +#define RESERVE_VMEMMAP_SIZE         (RESERVE_VMEMMAP_NR << PAGE_SHIFT)
> > > +#define TAIL_PAGE_REUSE                      -1
> > >
> > >  #ifndef VMEMMAP_HPAGE_SHIFT
> > >  #define VMEMMAP_HPAGE_SHIFT          HPAGE_SHIFT
> > > @@ -97,6 +100,21 @@
> > >
> > >  #define page_huge_pte(page)          ((page)->pmd_huge_pte)
> > >
> > > +#define vmemmap_hpage_addr_end(addr, end)                                \
> > > +({                                                                       \
> > > +     unsigned long __boundary;                                           \
> > > +     __boundary = ((addr) + VMEMMAP_HPAGE_SIZE) & VMEMMAP_HPAGE_MASK;    \
> > > +     (__boundary - 1 < (end) - 1) ? __boundary : (end);                  \
> > > +})
> > > +
> > > +#ifndef vmemmap_pmd_huge
> > > +#define vmemmap_pmd_huge vmemmap_pmd_huge
> > > +static inline bool vmemmap_pmd_huge(pmd_t *pmd)
> > > +{
> > > +     return pmd_huge(*pmd);
> > > +}
> > > +#endif
> > > +
> > >  static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
> > >  {
> > >       return h->nr_free_vmemmap_pages;
> > > @@ -158,6 +176,176 @@ int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
> > >       return -ENOMEM;
> > >  }
> > >
> > > +/*
> > > + * Walk a vmemmap address to the pmd it maps.
> > > + */
> > > +static pmd_t *vmemmap_to_pmd(unsigned long page)
> > > +{
> > > +     pgd_t *pgd;
> > > +     p4d_t *p4d;
> > > +     pud_t *pud;
> > > +     pmd_t *pmd;
> > > +
> > > +     if (page < VMEMMAP_START || page >= VMEMMAP_END)
> > > +             return NULL;
> > > +
> > > +     pgd = pgd_offset_k(page);
> > > +     if (pgd_none(*pgd))
> > > +             return NULL;
> > > +     p4d = p4d_offset(pgd, page);
> > > +     if (p4d_none(*p4d))
> > > +             return NULL;
> > > +     pud = pud_offset(p4d, page);
> > > +
> > > +     if (pud_none(*pud) || pud_bad(*pud))
> > > +             return NULL;
> > > +     pmd = pmd_offset(pud, page);
> > > +
> > > +     return pmd;
> > > +}
> > > +
> > > +static inline spinlock_t *vmemmap_pmd_lock(pmd_t *pmd)
> > > +{
> > > +     return pmd_lock(&init_mm, pmd);
> > > +}
> > > +
> > > +static inline int freed_vmemmap_hpage(struct page *page)
> > > +{
> > > +     return atomic_read(&page->_mapcount) + 1;
> > > +}
> > > +
> > > +static inline int freed_vmemmap_hpage_inc(struct page *page)
> > > +{
> > > +     return atomic_inc_return_relaxed(&page->_mapcount) + 1;
> > > +}
> > > +
> > > +static inline int freed_vmemmap_hpage_dec(struct page *page)
> > > +{
> > > +     return atomic_dec_return_relaxed(&page->_mapcount) + 1;
> > > +}
> > > +
> > > +static inline void free_vmemmap_page_list(struct list_head *list)
> > > +{
> > > +     struct page *page, *next;
> > > +
> > > +     list_for_each_entry_safe(page, next, list, lru) {
> > > +             list_del(&page->lru);
> > > +             free_vmemmap_page(page);
> > > +     }
> > > +}
> > > +
> > > +static void __free_huge_page_pte_vmemmap(struct page *reuse, pte_t *ptep,
> > > +                                      unsigned long start,
> > > +                                      unsigned long end,
> > > +                                      struct list_head *free_pages)
> > > +{
> > > +     /* Make the tail pages mapped read-only. */
> > > +     pgprot_t pgprot = PAGE_KERNEL_RO;
> > > +     pte_t entry = mk_pte(reuse, pgprot);
> > > +     unsigned long addr;
> > > +
> > > +     for (addr = start; addr < end; addr += PAGE_SIZE, ptep++) {
> > > +             struct page *page;
> > > +             pte_t old = *ptep;
> > > +
> > > +             VM_WARN_ON(!pte_present(old));
> > > +             page = pte_page(old);
> > > +             list_add(&page->lru, free_pages);
> > > +
> > > +             set_pte_at(&init_mm, addr, ptep, entry);
> > > +     }
> > > +}
> > > +
> > > +static void __free_huge_page_pmd_vmemmap(struct hstate *h, pmd_t *pmd,
> > > +                                      unsigned long addr,
> > > +                                      struct list_head *free_pages)
> > > +{
> > > +     unsigned long next;
> > > +     unsigned long start = addr + RESERVE_VMEMMAP_SIZE;
> > > +     unsigned long end = addr + vmemmap_pages_size_per_hpage(h);
> > > +     struct page *reuse = NULL;
> > > +
> > > +     addr = start;
> > > +     do {
> > > +             pte_t *ptep;
> > > +
> > > +             ptep = pte_offset_kernel(pmd, addr);
> > > +             if (!reuse)
> > > +                     reuse = pte_page(ptep[TAIL_PAGE_REUSE]);
> > > +
> > > +             next = vmemmap_hpage_addr_end(addr, end);
> > > +             __free_huge_page_pte_vmemmap(reuse, ptep, addr, next,
> > > +                                          free_pages);
> > > +     } while (pmd++, addr = next, addr != end);
> > > +
> > > +     flush_tlb_kernel_range(start, end);
> > > +}
> > > +
> > > +static void split_vmemmap_pmd(pmd_t *pmd, pte_t *pte_p, unsigned long addr)
> >
> > Hi Muchun,
> >
> > Are you going to restore the pmd mapping after you free the hugetlb?
> > I mean, when you free contiguous 128MB of hugetlb pages with 2MB size,
> > will you redo the PMD vmemmap, since a 2MB PMD can contain exactly the
> > page structs of 128MB of memory?
> 
> Now we only restore the pmd mapping for the 1GB HugeTLB page. For the
> 2MB HugeTLB page, we do not (I haven't figured out how to handle it
> gracefully).

Actually I don't expect the PMD mapping to be restored at this stage. If users
don't change their hugetlb pool much, that means they won't return hugetlb
pages to the kernel, so whether or not the original mapping is restored
wouldn't be a big problem.

> 
> >
> > If no, wouldn't it be simpler to only use base pages while populating
> > vmemmap? I mean, once we enable the Kconfig option you add for
> > VMEMMAP_FREE, we only use base pages to place "page struct" and never
> > split PMDs into base pages afterwards.
> >
> > One negative side effect might be that base pages are also used for
> > those pages which won't be hugetlb later. But if most pages of the host
> > will be hugetlb for guests and SPDK, it shouldn't hurt too much.
> 
> Yeah, I agree with you. If the user uses a lot of HugeTLB pages (e.g. for
> SPDK or guests), it shouldn't hurt too much. And using base pages while
> populating vmemmap also avoids the overhead of splitting the PMD. In the
> end, if we don't come up with a more suitable solution for the 2MB HugeTLB
> case mentioned above, maybe this is also an option.

On ARM64, if and only if we enable "ARM64_SWAPPER_USES_SECTION_MAP",
ARM64 will try to use PMD mapping for vmemmap. Otherwise, all mem_sections
are populated with base pages.

Right now, the patchset is very big. If we can make the first step simpler
by only using base pages, it would make the whole patchset much smaller.

PMD mapping of vmemmap saves the PTE page tables that would otherwise be
needed to map vmemmap with base pages, e.g. 32MB for each 1TB of
non-hugetlb memory.
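
(Rough arithmetic behind that figure, assuming 4KB base pages and a 64-byte
struct page: 1TB of memory needs 1TB / 4KB * 64B = 16GB of vmemmap; mapping
16GB of vmemmap with base pages takes one 4KB PTE table per 2MB of vmemmap,
i.e. 16GB / 2MB = 8192 tables = 32MB of page tables.)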

Thanks
Barry

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [External] RE: [PATCH v4 00/21] Free some vmemmap pages of hugetlb page
  2020-11-17 19:45         ` Oscar Salvador
@ 2020-11-18  3:27           ` Muchun Song
  2020-11-18  3:27           ` Song Bao Hua (Barry Song)
  1 sibling, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-18  3:27 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Song Bao Hua (Barry Song),
	Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko, Xiongchun duan,
	linux-doc, LKML, Linux Memory Management List, linux-fsdevel

On Wed, Nov 18, 2020 at 3:45 AM Oscar Salvador <osalvador@suse.de> wrote:
>
> On 2020-11-17 17:29, Muchun Song wrote:
> > Now for the 2MB HugeTLB page, we only free 6 vmemmap pages.
> > But your words woke me up. Maybe we really can free 7 vmemmap
> > pages. In this case, 8 of the 512 struct page structures would
> > appear to have the PG_head flag set. We could adjust compound_head()
> > slightly so that it returns the real head struct page even when
> > the parameter is a tail struct page that has the PG_head flag
> > set. I will start an investigation and a test.
>
> I would not overcomplicate things at this stage, but rather keep it
> simple as the code is already tricky enough (without counting the LOC
> that it adds).
> We can always build on top later on in order to improve things.

I think that this improvement can be a separate patch. That way, the
evolution of the code stays clearer.
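
For illustration only, here is a rough sketch of what such an adjustment
could look like (this is my assumption of the shape, not the final patch;
it presumes all tail vmemmap pages are remapped read-only to the head
vmemmap page, so the struct page following a "fake" head always reads as a
tail whose compound_head points at the real head):

	static inline struct page *compound_head(struct page *page)
	{
		unsigned long head = READ_ONCE(page->compound_head);

		if (unlikely(head & 1))
			return (struct page *)(head - 1);
		if (unlikely(PageHead(page))) {
			/*
			 * Sketch assumption: page[1] mirrors the head's
			 * first tail page, so its compound_head points at
			 * the real head.  For a genuine head page this
			 * also returns the head itself.
			 */
			unsigned long tail = READ_ONCE(page[1].compound_head);

			if (tail & 1)
				return (struct page *)(tail - 1);
		}
		return page;
	}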

Thanks.


>
> --
> Oscar Salvador
> SUSE L3



-- 
Yours,
Muchun

^ permalink raw reply	[flat|nested] 49+ messages in thread

* RE: [External] RE: [PATCH v4 00/21] Free some vmemmap pages of hugetlb page
  2020-11-17 19:45         ` Oscar Salvador
  2020-11-18  3:27           ` Muchun Song
@ 2020-11-18  3:27           ` Song Bao Hua (Barry Song)
  1 sibling, 0 replies; 49+ messages in thread
From: Song Bao Hua (Barry Song) @ 2020-11-18  3:27 UTC (permalink / raw)
  To: Oscar Salvador, Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel



> -----Original Message-----
> From: Oscar Salvador [mailto:osalvador@suse.de]
> Sent: Wednesday, November 18, 2020 8:45 AM
> To: Muchun Song <songmuchun@bytedance.com>
> Cc: Song Bao Hua (Barry Song) <song.bao.hua@hisilicon.com>;
> corbet@lwn.net; mike.kravetz@oracle.com; tglx@linutronix.de;
> mingo@redhat.com; bp@alien8.de; x86@kernel.org; hpa@zytor.com;
> dave.hansen@linux.intel.com; luto@kernel.org; peterz@infradead.org;
> viro@zeniv.linux.org.uk; akpm@linux-foundation.org; paulmck@kernel.org;
> mchehab+huawei@kernel.org; pawan.kumar.gupta@linux.intel.com;
> rdunlap@infradead.org; oneukum@suse.com; anshuman.khandual@arm.com;
> jroedel@suse.de; almasrymina@google.com; rientjes@google.com;
> willy@infradead.org; mhocko@suse.com; duanxiongchun@bytedance.com;
> linux-doc@vger.kernel.org; linux-kernel@vger.kernel.org; linux-mm@kvack.org;
> linux-fsdevel@vger.kernel.org
> Subject: Re: [External] RE: [PATCH v4 00/21] Free some vmemmap pages of
> hugetlb page
> 
> On 2020-11-17 17:29, Muchun Song wrote:
> > Now for the 2MB HugeTLB page, we only free 6 vmemmap pages.
> > But your words woke me up. Maybe we really can free 7 vmemmap
> > pages. In this case, 8 of the 512 struct page structures would
> > appear to have the PG_head flag set. We could adjust compound_head()
> > slightly so that it returns the real head struct page even when
> > the parameter is a tail struct page that has the PG_head flag
> > set. I will start an investigation and a test.
> 
> I would not overcomplicate things at this stage, but rather keep it
> simple as the code is already tricky enough (without counting the LOC
> that it adds).
> We can always build on top later on in order to improve things.

Yep. I am not expecting the freeing of the tail page to be done at this
stage. This could be something on the todo list after the first patchset
is solid.

> 
> --
> Oscar Salvador
> SUSE L3

Thanks
Barry


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v4 03/21] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  2020-11-13 10:59 ` [PATCH v4 03/21] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP Muchun Song
@ 2020-11-18 22:38   ` Mike Kravetz
  2020-11-19  2:57     ` [External] " Muchun Song
  0 siblings, 1 reply; 49+ messages in thread
From: Mike Kravetz @ 2020-11-18 22:38 UTC (permalink / raw)
  To: Muchun Song, corbet, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On 11/13/20 2:59 AM, Muchun Song wrote:
> The purpose of introducing HUGETLB_PAGE_FREE_VMEMMAP is to configure
> whether to enable the feature of freeing unused vmemmap associated
> with HugeTLB pages. Now only support x86.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  arch/x86/mm/init_64.c |  2 +-
>  fs/Kconfig            | 14 ++++++++++++++
>  2 files changed, 15 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index 0a45f062826e..0435bee2e172 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -1225,7 +1225,7 @@ static struct kcore_list kcore_vsyscall;
>  
>  static void __init register_page_bootmem_info(void)
>  {
> -#ifdef CONFIG_NUMA
> +#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)
>  	int i;
>  
>  	for_each_online_node(i)
> diff --git a/fs/Kconfig b/fs/Kconfig
> index 976e8b9033c4..67e1bc99574f 100644
> --- a/fs/Kconfig
> +++ b/fs/Kconfig
> @@ -245,6 +245,20 @@ config HUGETLBFS
>  config HUGETLB_PAGE
>  	def_bool HUGETLBFS
>  
> +config HUGETLB_PAGE_FREE_VMEMMAP
> +	def_bool HUGETLB_PAGE
> +	depends on X86
> +	depends on SPARSEMEM_VMEMMAP
> +	depends on HAVE_BOOTMEM_INFO_NODE
> +	help
> +	  When using SPARSEMEM_VMEMMAP, the system can save up some memory

Should that read,

	When using HUGETLB_PAGE_FREE_VMEMMAP, ...

as the help message is for this config option.

-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v4 04/21] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  2020-11-16 13:33   ` Oscar Salvador
  2020-11-16 15:40     ` [External] " Muchun Song
@ 2020-11-18 22:54     ` Mike Kravetz
  1 sibling, 0 replies; 49+ messages in thread
From: Mike Kravetz @ 2020-11-18 22:54 UTC (permalink / raw)
  To: Oscar Salvador, Muchun Song
  Cc: corbet, tglx, mingo, bp, x86, hpa, dave.hansen, luto, peterz,
	viro, akpm, paulmck, mchehab+huawei, pawan.kumar.gupta, rdunlap,
	oneukum, anshuman.khandual, jroedel, almasrymina, rientjes,
	willy, mhocko, duanxiongchun, linux-doc, linux-kernel, linux-mm,
	linux-fsdevel

On 11/16/20 5:33 AM, Oscar Salvador wrote:
> On Fri, Nov 13, 2020 at 06:59:35PM +0800, Muchun Song wrote:
>> +void __init hugetlb_vmemmap_init(struct hstate *h)
>> +{
>> +	unsigned int order = huge_page_order(h);
>> +	unsigned int vmemmap_pages;
>> +
>> +	vmemmap_pages = ((1 << order) * sizeof(struct page)) >> PAGE_SHIFT;
>> +	/*
>> +	 * The head page and the first tail page are not to be freed to buddy
>> +	 * system, the others page will map to the first tail page. So there
> "the remaining pages" might be more clear.
> 
>> +	 * are (@vmemmap_pages - RESERVE_VMEMMAP_NR) pages can be freed.
> "that can be freed"
> 
>> +	 *
>> +	 * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? This is
>> +	 * not expected to happen unless the system is corrupted. So on the
>> +	 * safe side, it is only a safety net.
>> +	 */
>> +	if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR))
>> +		h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
>> +	else
>> +		h->nr_free_vmemmap_pages = 0;
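
(For concreteness: on x86 with a 64-byte struct page, a 2MB HugeTLB has
order 9, so vmemmap_pages = (512 * 64) >> 12 = 8; with the head page and
the first tail page reserved, that leaves nr_free_vmemmap_pages = 6.  A
1GB HugeTLB gives vmemmap_pages = 4096 and nr_free_vmemmap_pages = 4094.)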
> 
> This made me think of something.
> Since struct hstate hstates is global, all of its fields are already
> zero-initialized.
> So, the following assignments in hugetlb_add_hstate:
> 
>         h->nr_huge_pages = 0;
>         h->free_huge_pages = 0;
> 
> should not be needed.
> Actually, we do not initialize other values like resv_huge_pages
> or surplus_huge_pages.
> 
> If that is the case, the "else" could go.
> 
> Mike?

Correct.  Those assignments have been in the code for a very long time.

> The changes themselves look good to me.
> I think that putting all the vmemmap stuff into hugetlb_vmemmap.* was
> the right choice.

Agree!
-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v4 04/21] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  2020-11-13 10:59 ` [PATCH v4 04/21] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
  2020-11-16 13:33   ` Oscar Salvador
@ 2020-11-18 23:48   ` Mike Kravetz
  2020-11-19  3:00     ` [External] " Muchun Song
  1 sibling, 1 reply; 49+ messages in thread
From: Mike Kravetz @ 2020-11-18 23:48 UTC (permalink / raw)
  To: Muchun Song, corbet, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On 11/13/20 2:59 AM, Muchun Song wrote:
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> new file mode 100644
> index 000000000000..a6c9948302e2
> --- /dev/null
> +++ b/mm/hugetlb_vmemmap.c
> @@ -0,0 +1,108 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Free some vmemmap pages of HugeTLB
> + *
> + * Copyright (c) 2020, Bytedance. All rights reserved.
> + *
> + *     Author: Muchun Song <songmuchun@bytedance.com>
> + *

Oscar has already made some suggestions to change comments.  I would suggest
changing the below text to something like the following.

> + * Nowadays we track the status of physical page frames using struct page
> + * structures arranged in one or more arrays. And here exists one-to-one
> + * mapping between the physical page frame and the corresponding struct page
> + * structure.
> + *
> + * The HugeTLB support is built on top of multiple page size support that
> + * is provided by most modern architectures. For example, x86 CPUs normally
> + * support 4K and 2M (1G if architecturally supported) page sizes. Every
> + * HugeTLB has more than one struct page structure. The 2M HugeTLB has 512
> + * struct page structure and 1G HugeTLB has 4096 struct page structures. But
> + * in the core of HugeTLB only uses the first 4 (Use of first 4 struct page
> + * structures comes from HUGETLB_CGROUP_MIN_ORDER.) struct page structures to
> + * store metadata associated with each HugeTLB. The rest of the struct page
> + * structures are usually read the compound_head field which are all the same
> + * value. If we can free some struct page memory to buddy system so that we
> + * can save a lot of memory.
> + *

struct page structures (page structs) are used to describe a physical page
frame.  By default, there is a one-to-one mapping from a page frame to
its corresponding page struct.

HugeTLB pages consist of multiple base page size pages and are supported by
many architectures. See hugetlbpage.rst in the Documentation directory for
more details.  On the x86 architecture, HugeTLB pages of size 2MB and 1GB
are currently supported.  Since the base page size on x86 is 4KB, a 2MB
HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
4096 base pages.  For each base page, there is a corresponding page struct.

Within the HugeTLB subsystem, only the first 4 page structs are used to
contain unique information about a HugeTLB page.  HUGETLB_CGROUP_MIN_ORDER
provides this upper limit.  The only 'useful' information in the remaining
page structs is the compound_head field, and this field is the same for all
tail pages.

By removing redundant page structs for HugeTLB pages, memory can be returned
to the buddy allocator for other uses.
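
(As a ballpark, assuming 4KB base pages and 64-byte page structs: freeing
6 of the 8 vmemmap pages saves 24KB per 2MB HugeTLB page, roughly 1.2% of
the memory covered; for 1GB HugeTLB pages the saving is about 16MB per
page, roughly 1.6%.)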

> + * When the system boot up, every 2M HugeTLB has 512 struct page structures
> + * which size is 8 pages(sizeof(struct page) * 512 / PAGE_SIZE).
> + *
> + *    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
> + * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
> + * |           |                     |     0     | -------------> |     0     |
> + * |           |                     |     1     | -------------> |     1     |
> + * |           |                     |     2     | -------------> |     2     |
> + * |           |                     |     3     | -------------> |     3     |
> + * |           |                     |     4     | -------------> |     4     |
> + * |     2M    |                     |     5     | -------------> |     5     |
> + * |           |                     |     6     | -------------> |     6     |
> + * |           |                     |     7     | -------------> |     7     |
> + * |           |                     +-----------+                +-----------+
> + * |           |
> + * |           |
> + * +-----------+
> + *
> + *

I think we want the description before the next diagram.

Reworded description here:

The value of compound_head is the same for all tail pages.  The first page of
page structs (page 0) associated with the HugeTLB page contains the 4 page
structs necessary to describe the HugeTLB.  The only use of the remaining pages
of page structs (page 1 to page 7) is to point to compound_head.  Therefore,
we can remap pages 2 to 7 to page 1.  Only 2 pages of page structs will be used
for each HugeTLB page.  This will allow us to free the remaining 6 pages to 
the buddy allocator.  

Here is how things look after remapping.

> + *
> + *    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
> + * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
> + * |           |                     |     0     | -------------> |     0     |
> + * |           |                     |     1     | -------------> |     1     |
> + * |           |                     |     2     | -------------> +-----------+
> + * |           |                     |     3     | -----------------^ ^ ^ ^ ^
> + * |           |                     |     4     | -------------------+ | | |
> + * |     2M    |                     |     5     | ---------------------+ | |
> + * |           |                     |     6     | -----------------------+ |
> + * |           |                     |     7     | -------------------------+
> + * |           |                     +-----------+
> + * |           |
> + * |           |
> + * +-----------+

-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [External] Re: [PATCH v4 03/21] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  2020-11-18 22:38   ` Mike Kravetz
@ 2020-11-19  2:57     ` Muchun Song
  0 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-19  2:57 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: Jonathan Corbet, Thomas Gleixner, mingo, bp, x86, hpa,
	dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton, paulmck,
	mchehab+huawei, pawan.kumar.gupta, Randy Dunlap, oneukum,
	anshuman.khandual, jroedel, Mina Almasry, David Rientjes,
	Matthew Wilcox, Oscar Salvador, Michal Hocko, Xiongchun duan,
	linux-doc, LKML, Linux Memory Management List, linux-fsdevel

On Thu, Nov 19, 2020 at 6:39 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 11/13/20 2:59 AM, Muchun Song wrote:
> > The purpose of introducing HUGETLB_PAGE_FREE_VMEMMAP is to configure
> > whether to enable the feature of freeing unused vmemmap associated
> > with HugeTLB pages. Now only support x86.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> >  arch/x86/mm/init_64.c |  2 +-
> >  fs/Kconfig            | 14 ++++++++++++++
> >  2 files changed, 15 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> > index 0a45f062826e..0435bee2e172 100644
> > --- a/arch/x86/mm/init_64.c
> > +++ b/arch/x86/mm/init_64.c
> > @@ -1225,7 +1225,7 @@ static struct kcore_list kcore_vsyscall;
> >
> >  static void __init register_page_bootmem_info(void)
> >  {
> > -#ifdef CONFIG_NUMA
> > +#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)
> >       int i;
> >
> >       for_each_online_node(i)
> > diff --git a/fs/Kconfig b/fs/Kconfig
> > index 976e8b9033c4..67e1bc99574f 100644
> > --- a/fs/Kconfig
> > +++ b/fs/Kconfig
> > @@ -245,6 +245,20 @@ config HUGETLBFS
> >  config HUGETLB_PAGE
> >       def_bool HUGETLBFS
> >
> > +config HUGETLB_PAGE_FREE_VMEMMAP
> > +     def_bool HUGETLB_PAGE
> > +     depends on X86
> > +     depends on SPARSEMEM_VMEMMAP
> > +     depends on HAVE_BOOTMEM_INFO_NODE
> > +     help
> > +       When using SPARSEMEM_VMEMMAP, the system can save up some memory
>
> Should that read,
>
>         When using HUGETLB_PAGE_FREE_VMEMMAP, ...
>
> as the help message is for this config option.

Got it. Thanks

>
> --
> Mike Kravetz



-- 
Yours,
Muchun

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [External] Re: [PATCH v4 04/21] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  2020-11-18 23:48   ` Mike Kravetz
@ 2020-11-19  3:00     ` Muchun Song
  0 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-19  3:00 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: Jonathan Corbet, Thomas Gleixner, mingo, bp, x86, hpa,
	dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton, paulmck,
	mchehab+huawei, pawan.kumar.gupta, Randy Dunlap, oneukum,
	anshuman.khandual, jroedel, Mina Almasry, David Rientjes,
	Matthew Wilcox, Oscar Salvador, Michal Hocko, Xiongchun duan,
	linux-doc, LKML, Linux Memory Management List, linux-fsdevel

On Thu, Nov 19, 2020 at 7:48 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 11/13/20 2:59 AM, Muchun Song wrote:
> > diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> > new file mode 100644
> > index 000000000000..a6c9948302e2
> > --- /dev/null
> > +++ b/mm/hugetlb_vmemmap.c
> > @@ -0,0 +1,108 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Free some vmemmap pages of HugeTLB
> > + *
> > + * Copyright (c) 2020, Bytedance. All rights reserved.
> > + *
> > + *     Author: Muchun Song <songmuchun@bytedance.com>
> > + *
>
> Oscar has already made some suggestions to change comments.  I would suggest
> changing the below text to something like the following.

Thanks Mike. I will update the comments below.

>
> > + * Nowadays we track the status of physical page frames using struct page
> > + * structures arranged in one or more arrays. And here exists one-to-one
> > + * mapping between the physical page frame and the corresponding struct page
> > + * structure.
> > + *
> > + * The HugeTLB support is built on top of multiple page size support that
> > + * is provided by most modern architectures. For example, x86 CPUs normally
> > + * support 4K and 2M (1G if architecturally supported) page sizes. Every
> > + * HugeTLB has more than one struct page structure. The 2M HugeTLB has 512
> > + * struct page structure and 1G HugeTLB has 4096 struct page structures. But
> > + * in the core of HugeTLB only uses the first 4 (Use of first 4 struct page
> > + * structures comes from HUGETLB_CGROUP_MIN_ORDER.) struct page structures to
> > + * store metadata associated with each HugeTLB. The rest of the struct page
> > + * structures are usually read the compound_head field which are all the same
> > + * value. If we can free some struct page memory to buddy system so that we
> > + * can save a lot of memory.
> > + *
>
> struct page structures (page structs) are used to describe a physical page
> frame.  By default, there is a one-to-one mapping from a page frame to
> its corresponding page struct.
>
> HugeTLB pages consist of multiple base page size pages and are supported by
> many architectures. See hugetlbpage.rst in the Documentation directory for
> more details.  On the x86 architecture, HugeTLB pages of size 2MB and 1GB
> are currently supported.  Since the base page size on x86 is 4KB, a 2MB
> HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
> 4096 base pages.  For each base page, there is a corresponding page struct.
>
> Within the HugeTLB subsystem, only the first 4 page structs are used to
> contain unique information about a HugeTLB page.  HUGETLB_CGROUP_MIN_ORDER
> provides this upper limit.  The only 'useful' information in the remaining
> page structs is the compound_head field, and this field is the same for all
> tail pages.
>
> By removing redundant page structs for HugeTLB pages, memory can be returned
> to the buddy allocator for other uses.
>
> > + * When the system boot up, every 2M HugeTLB has 512 struct page structures
> > + * which size is 8 pages(sizeof(struct page) * 512 / PAGE_SIZE).
> > + *
> > + *    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
> > + * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
> > + * |           |                     |     0     | -------------> |     0     |
> > + * |           |                     |     1     | -------------> |     1     |
> > + * |           |                     |     2     | -------------> |     2     |
> > + * |           |                     |     3     | -------------> |     3     |
> > + * |           |                     |     4     | -------------> |     4     |
> > + * |     2M    |                     |     5     | -------------> |     5     |
> > + * |           |                     |     6     | -------------> |     6     |
> > + * |           |                     |     7     | -------------> |     7     |
> > + * |           |                     +-----------+                +-----------+
> > + * |           |
> > + * |           |
> > + * +-----------+
> > + *
> > + *
>
> I think we want the description before the next diagram.
>
> Reworded description here:
>
> The value of compound_head is the same for all tail pages.  The first page of
> page structs (page 0) associated with the HugeTLB page contains the 4 page
> structs necessary to describe the HugeTLB.  The only use of the remaining pages
> of page structs (page 1 to page 7) is to point to compound_head.  Therefore,
> we can remap pages 2 to 7 to page 1.  Only 2 pages of page structs will be used
> for each HugeTLB page.  This will allow us to free the remaining 6 pages to
> the buddy allocator.
>
> Here is how things look after remapping.
>
> > + *
> > + *    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
> > + * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
> > + * |           |                     |     0     | -------------> |     0     |
> > + * |           |                     |     1     | -------------> |     1     |
> > + * |           |                     |     2     | -------------> +-----------+
> > + * |           |                     |     3     | -----------------^ ^ ^ ^ ^
> > + * |           |                     |     4     | -------------------+ | | |
> > + * |     2M    |                     |     5     | ---------------------+ | |
> > + * |           |                     |     6     | -----------------------+ |
> > + * |           |                     |     7     | -------------------------+
> > + * |           |                     +-----------+
> > + * |           |
> > + * |           |
> > + * +-----------+
>
> --
> Mike Kravetz



-- 
Yours,
Muchun

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [External] Re: [PATCH v4 05/21] mm/hugetlb: Introduce pgtable allocation/freeing helpers
  2020-11-17 15:06   ` Oscar Salvador
  2020-11-17 15:29     ` [External] " Muchun Song
@ 2020-11-19  6:17     ` Muchun Song
  2020-11-19 23:21       ` Mike Kravetz
  1 sibling, 1 reply; 49+ messages in thread
From: Muchun Song @ 2020-11-19  6:17 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko, Xiongchun duan,
	linux-doc, LKML, Linux Memory Management List, linux-fsdevel

On Tue, Nov 17, 2020 at 11:06 PM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Fri, Nov 13, 2020 at 06:59:36PM +0800, Muchun Song wrote:
> > +#define page_huge_pte(page)          ((page)->pmd_huge_pte)
>
> Seems you do not need this one anymore.
>
> > +void vmemmap_pgtable_free(struct page *page)
> > +{
> > +     struct page *pte_page, *t_page;
> > +
> > +     list_for_each_entry_safe(pte_page, t_page, &page->lru, lru) {
> > +             list_del(&pte_page->lru);
> > +             pte_free_kernel(&init_mm, page_to_virt(pte_page));
> > +     }
> > +}
> > +
> > +int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
> > +{
> > +     unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);
> > +
> > +     /* Store preallocated pages on huge page lru list */
> > +     INIT_LIST_HEAD(&page->lru);
> > +
> > +     while (nr--) {
> > +             pte_t *pte_p;
> > +
> > +             pte_p = pte_alloc_one_kernel(&init_mm);
> > +             if (!pte_p)
> > +                     goto out;
> > +             list_add(&virt_to_page(pte_p)->lru, &page->lru);
> > +     }
>
> Definitely this looks better and easier to handle.
> Btw, did you explore Matthew's hint about instead of allocating a new page,
> using one of the ones you are going to free to store the ptes?
> I am not sure whether it is feasible at all though.

Hi Oscar and Matthew,

I have investigated this. In the end, I think it may not be feasible.
If we use a vmemmap page frame as a page table, then when we first
split the PMD we need to write 512 pte entries into that vmemmap page
frame. If someone reads a tail struct page of the HugeTLB at that
moment, it can see an arbitrary value (I am not sure this actually
happens in practice; maybe the memory compaction module can do it). So
on the safe side, I think that allocating a new page is a good choice.
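
To make the race window concrete, here is a rough sketch of the rejected
approach (the helper name is made up for illustration; this is not code
from the series):

	/* Rejected idea, sketched for illustration only. */
	pte_page = pick_tail_vmemmap_page(head);  /* hypothetical helper */
	pte = page_to_virt(pte_page);
	/*
	 * From here until the remapping completes, the 512 struct pages
	 * that lived in pte_page read back as page-table entries, i.e.
	 * as arbitrary values to any concurrent pfn walker (e.g. memory
	 * compaction).
	 */
	memset(pte, 0, PAGE_SIZE);
	/* ... fill the 512 pte entries to point at the reuse page ... */
	pmd_populate_kernel(&init_mm, pmd, pte);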

Thanks.

>
>
> > --- a/mm/hugetlb_vmemmap.h
> > +++ b/mm/hugetlb_vmemmap.h
> > @@ -9,12 +9,24 @@
> >  #ifndef _LINUX_HUGETLB_VMEMMAP_H
> >  #define _LINUX_HUGETLB_VMEMMAP_H
> >  #include <linux/hugetlb.h>
> > +#include <linux/mm.h>
>
> why do we need this here?
>
> --
> Oscar Salvador
> SUSE L3



-- 
Yours,
Muchun

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [External] Re: [PATCH v4 05/21] mm/hugetlb: Introduce pgtable allocation/freeing helpers
  2020-11-19  6:17     ` Muchun Song
@ 2020-11-19 23:21       ` Mike Kravetz
  2020-11-20  2:52         ` Muchun Song
  0 siblings, 1 reply; 49+ messages in thread
From: Mike Kravetz @ 2020-11-19 23:21 UTC (permalink / raw)
  To: Muchun Song, Oscar Salvador
  Cc: Jonathan Corbet, Thomas Gleixner, mingo, bp, x86, hpa,
	dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton, paulmck,
	mchehab+huawei, pawan.kumar.gupta, Randy Dunlap, oneukum,
	anshuman.khandual, jroedel, Mina Almasry, David Rientjes,
	Matthew Wilcox, Michal Hocko, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On 11/18/20 10:17 PM, Muchun Song wrote:
> On Tue, Nov 17, 2020 at 11:06 PM Oscar Salvador <osalvador@suse.de> wrote:
>>
>> On Fri, Nov 13, 2020 at 06:59:36PM +0800, Muchun Song wrote:
>>> +#define page_huge_pte(page)          ((page)->pmd_huge_pte)
>>
>> Seems you do not need this one anymore.
>>
>>> +void vmemmap_pgtable_free(struct page *page)
>>> +{
>>> +     struct page *pte_page, *t_page;
>>> +
>>> +     list_for_each_entry_safe(pte_page, t_page, &page->lru, lru) {
>>> +             list_del(&pte_page->lru);
>>> +             pte_free_kernel(&init_mm, page_to_virt(pte_page));
>>> +     }
>>> +}
>>> +
>>> +int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
>>> +{
>>> +     unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);
>>> +
>>> +     /* Store preallocated pages on huge page lru list */
>>> +     INIT_LIST_HEAD(&page->lru);
>>> +
>>> +     while (nr--) {
>>> +             pte_t *pte_p;
>>> +
>>> +             pte_p = pte_alloc_one_kernel(&init_mm);
>>> +             if (!pte_p)
>>> +                     goto out;
>>> +             list_add(&virt_to_page(pte_p)->lru, &page->lru);
>>> +     }
>>
>> Definitely this looks better and easier to handle.
>> Btw, did you explore Matthew's hint about instead of allocating a new page,
>> using one of the ones you are going to free to store the ptes?
>> I am not sure whether it is feasible at all though.
> 
> Hi Oscar and Matthew,
> 
> I have investigated this. In the end, I think it may not be feasible.
> If we use a vmemmap page frame as a page table, then when we first
> split the PMD we need to write 512 pte entries into that vmemmap page
> frame. If someone reads a tail struct page of the HugeTLB at that
> moment, it can see an arbitrary value (I am not sure this actually
> happens in practice; maybe the memory compaction module can do it). So
> on the safe side, I think that allocating a new page is a good choice.

Thanks for looking into this.

If I understand correctly, the issue is that you need the pte page to set
up the new mappings.  In your current code, this is done before removing
the pages of struct pages.  This keeps everything 'consistent' as things
are remapped.

If you want to use one of the 'pages of struct pages' for the new pte
page, then there will be a period of time when things are inconsistent.
Before setting up the mapping, some code could potentially access those
pages of struct pages.

I tend to agree that allocating a new page is the safest thing to do
here.  Or, perhaps someone can think of a way to make this safe.
-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v4 05/21] mm/hugetlb: Introduce pgtable allocation/freeing helpers
  2020-11-13 10:59 ` [PATCH v4 05/21] mm/hugetlb: Introduce pgtable allocation/freeing helpers Muchun Song
  2020-11-17 15:06   ` Oscar Salvador
@ 2020-11-19 23:37   ` Mike Kravetz
  1 sibling, 0 replies; 49+ messages in thread
From: Mike Kravetz @ 2020-11-19 23:37 UTC (permalink / raw)
  To: Muchun Song, corbet, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On 11/13/20 2:59 AM, Muchun Song wrote:
> On x86_64, vmemmap is always PMD mapped if the machine has hugepages
> support and if we have 2MB contiguos pages and PMD aligned. If we want
                             contiguous              alignment
> to free the unused vmemmap pages, we have to split the huge pmd firstly.
> So we should pre-allocate pgtable to split PMD to PTE.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  mm/hugetlb_vmemmap.c | 73 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>  mm/hugetlb_vmemmap.h | 12 +++++++++
>  2 files changed, 85 insertions(+)

Thanks for the cleanup.

Oscar made some other comments.  I only have one additional minor comment
below.

With those minor cleanups,
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>

> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
...
> +int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
> +{
> +	unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);
> +
> +	/* Store preallocated pages on huge page lru list */

Let's expand the above comment to something like this:

	/*
	 * Use the huge page lru list to temporarily store the preallocated
	 * pages.  The preallocated pages are used and the list is emptied
	 * before the huge page is put into use.  When the huge page is put
	 * into use by prep_new_huge_page() the list will be reinitialized.
	 */

> +	INIT_LIST_HEAD(&page->lru);
> +
> +	while (nr--) {
> +		pte_t *pte_p;
> +
> +		pte_p = pte_alloc_one_kernel(&init_mm);
> +		if (!pte_p)
> +			goto out;
> +		list_add(&virt_to_page(pte_p)->lru, &page->lru);
> +	}
> +
> +	return 0;
> +out:
> +	vmemmap_pgtable_free(page);
> +	return -ENOMEM;
> +}

-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [External] Re: [PATCH v4 05/21] mm/hugetlb: Introduce pgtable allocation/freeing helpers
  2020-11-19 23:21       ` Mike Kravetz
@ 2020-11-20  2:52         ` Muchun Song
  0 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-20  2:52 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: Oscar Salvador, Jonathan Corbet, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko, Xiongchun duan,
	linux-doc, LKML, Linux Memory Management List, linux-fsdevel

On Fri, Nov 20, 2020 at 7:22 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 11/18/20 10:17 PM, Muchun Song wrote:
> > On Tue, Nov 17, 2020 at 11:06 PM Oscar Salvador <osalvador@suse.de> wrote:
> >>
> >> On Fri, Nov 13, 2020 at 06:59:36PM +0800, Muchun Song wrote:
> >>> +#define page_huge_pte(page)          ((page)->pmd_huge_pte)
> >>
> >> Seems you do not need this one anymore.
> >>
> >>> +void vmemmap_pgtable_free(struct page *page)
> >>> +{
> >>> +     struct page *pte_page, *t_page;
> >>> +
> >>> +     list_for_each_entry_safe(pte_page, t_page, &page->lru, lru) {
> >>> +             list_del(&pte_page->lru);
> >>> +             pte_free_kernel(&init_mm, page_to_virt(pte_page));
> >>> +     }
> >>> +}
> >>> +
> >>> +int vmemmap_pgtable_prealloc(struct hstate *h, struct page *page)
> >>> +{
> >>> +     unsigned int nr = pgtable_pages_to_prealloc_per_hpage(h);
> >>> +
> >>> +     /* Store preallocated pages on huge page lru list */
> >>> +     INIT_LIST_HEAD(&page->lru);
> >>> +
> >>> +     while (nr--) {
> >>> +             pte_t *pte_p;
> >>> +
> >>> +             pte_p = pte_alloc_one_kernel(&init_mm);
> >>> +             if (!pte_p)
> >>> +                     goto out;
> >>> +             list_add(&virt_to_page(pte_p)->lru, &page->lru);
> >>> +     }
> >>
> >> Definitely this looks better and easier to handle.
> >> Btw, did you explore Matthew's hint about instead of allocating a new page,
> >> using one of the ones you are going to free to store the ptes?
> >> I am not sure whether it is feasible at all though.
> >
> > Hi Oscar and Matthew,
> >
> > I have investigated this. In the end, I think it may not be feasible.
> > If we use a vmemmap page frame as a page table, then when we first
> > split the PMD we need to write 512 pte entries into that vmemmap page
> > frame. If someone reads a tail struct page of the HugeTLB at that
> > moment, it can see an arbitrary value (I am not sure this actually
> > happens in practice; maybe the memory compaction module can do it). So
> > on the safe side, I think that allocating a new page is a good choice.
>
> Thanks for looking into this.
>
> If I understand correctly, the issue is that you need the pte page to set
> up the new mappings.  In your current code, this is done before removing
> the pages of struct pages.  This keeps everything 'consistent' as things
> are remapped.
>
> If you want to use one of the 'pages of struct pages' for the new pte
> page, then there will be a period of time when things are inconsistent.
> Before setting up the mapping, some code could potentially access those
> pages of struct pages.

Yeah, you are right.

>
> I tend to agree that allocating a new page is the safest thing to do
> here.  Or, perhaps someone can think of a way to make this safe.
> --
> Mike Kravetz



-- 
Yours,
Muchun

^ permalink raw reply	[flat|nested] 49+ messages in thread

end of thread, other threads:[~2020-11-20  2:52 UTC | newest]

Thread overview: 49+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-11-13 10:59 [PATCH v4 00/21] Free some vmemmap pages of hugetlb page Muchun Song
2020-11-13 10:59 ` [PATCH v4 01/21] mm/memory_hotplug: Move bootmem info registration API to bootmem_info.c Muchun Song
2020-11-16 13:50   ` Oscar Salvador
2020-11-13 10:59 ` [PATCH v4 02/21] mm/memory_hotplug: Move {get,put}_page_bootmem() " Muchun Song
2020-11-16 13:52   ` Oscar Salvador
2020-11-13 10:59 ` [PATCH v4 03/21] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP Muchun Song
2020-11-18 22:38   ` Mike Kravetz
2020-11-19  2:57     ` [External] " Muchun Song
2020-11-13 10:59 ` [PATCH v4 04/21] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
2020-11-16 13:33   ` Oscar Salvador
2020-11-16 15:40     ` [External] " Muchun Song
2020-11-18 22:54     ` Mike Kravetz
2020-11-18 23:48   ` Mike Kravetz
2020-11-19  3:00     ` [External] " Muchun Song
2020-11-13 10:59 ` [PATCH v4 05/21] mm/hugetlb: Introduce pgtable allocation/freeing helpers Muchun Song
2020-11-17 15:06   ` Oscar Salvador
2020-11-17 15:29     ` [External] " Muchun Song
2020-11-19  6:17     ` Muchun Song
2020-11-19 23:21       ` Mike Kravetz
2020-11-20  2:52         ` Muchun Song
2020-11-19 23:37   ` Mike Kravetz
2020-11-13 10:59 ` [PATCH v4 06/21] mm/bootmem_info: Introduce {free,prepare}_vmemmap_page() Muchun Song
2020-11-13 10:59 ` [PATCH v4 07/21] mm/bootmem_info: Combine bootmem info and type into page->freelist Muchun Song
2020-11-13 10:59 ` [PATCH v4 08/21] mm/hugetlb: Initialize page table lock for vmemmap Muchun Song
2020-11-13 10:59 ` [PATCH v4 09/21] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page Muchun Song
2020-11-17  9:54   ` Song Bao Hua (Barry Song)
2020-11-17 10:26     ` [External] " Muchun Song
2020-11-18  3:21       ` Song Bao Hua (Barry Song)
2020-11-13 10:59 ` [PATCH v4 10/21] mm/hugetlb: Defer freeing of hugetlb pages Muchun Song
2020-11-13 10:59 ` [PATCH v4 11/21] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page Muchun Song
2020-11-13 10:59 ` [PATCH v4 12/21] mm/hugetlb: Introduce remap_huge_page_pmd_vmemmap helper Muchun Song
2020-11-13 10:59 ` [PATCH v4 13/21] mm/hugetlb: Use PG_slab to indicate split pmd Muchun Song
2020-11-13 10:59 ` [PATCH v4 14/21] mm/hugetlb: Support freeing vmemmap pages of gigantic page Muchun Song
2020-11-13 10:59 ` [PATCH v4 15/21] mm/hugetlb: Set the PageHWPoison to the raw error page Muchun Song
2020-11-13 10:59 ` [PATCH v4 16/21] mm/hugetlb: Flush work when dissolving hugetlb page Muchun Song
2020-11-13 10:59 ` [PATCH v4 17/21] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap Muchun Song
2020-11-13 10:59 ` [PATCH v4 18/21] mm/hugetlb: Merge pte to huge pmd only for gigantic page Muchun Song
2020-11-13 10:59 ` [PATCH v4 19/21] mm/hugetlb: Gather discrete indexes of tail page Muchun Song
2020-11-13 10:59 ` [PATCH v4 20/21] mm/hugetlb: Add BUILD_BUG_ON to catch invalid usage of tail struct page Muchun Song
2020-11-13 10:59 ` [PATCH v4 21/21] mm/hugetlb: Disable freeing vmemmap if struct page size is not power of two Muchun Song
2020-11-17 10:15 ` [PATCH v4 00/21] Free some vmemmap pages of hugetlb page Song Bao Hua (Barry Song)
2020-11-17 10:49   ` [External] " Muchun Song
2020-11-17 11:07     ` Song Bao Hua (Barry Song)
2020-11-17 16:29       ` Muchun Song
2020-11-17 19:22         ` Matthew Wilcox
2020-11-18  2:43           ` Muchun Song
2020-11-17 19:45         ` Oscar Salvador
2020-11-18  3:27           ` Muchun Song
2020-11-18  3:27           ` Song Bao Hua (Barry Song)

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).