* [PATCH v9 00/11] Free some vmemmap pages of HugeTLB page
@ 2020-12-13 15:45 Muchun Song
  2020-12-13 15:45 ` [PATCH v9 01/11] mm/memory_hotplug: Factor out bootmem core functions to bootmem_info.c Muchun Song
                   ` (10 more replies)
  0 siblings, 11 replies; 43+ messages in thread
From: Muchun Song @ 2020-12-13 15:45 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Hi all,

This patch series will free some vmemmap pages (struct page structures)
associated with each HugeTLB page when it is preallocated, in order to
save memory.

In order to reduce the difficulty of code review for the first version,
from this version on we disable the PMD/huge page mapping of vmemmap when
this feature is enabled. This actually eliminates a bunch of complex code
doing page table manipulation. Once this patch series is solid, we can add
the vmemmap page table manipulation code in the future.

The struct page structures (page structs) are used to describe a physical
page frame. By default, there is a one-to-one mapping from a page frame to
its corresponding page struct.

HugeTLB pages consist of multiple base pages and are supported by many
architectures. See hugetlbpage.rst in the Documentation directory for more
details. On the x86 architecture, HugeTLB pages of size 2MB and 1GB are
currently supported. Since the base page size on x86 is 4KB, a 2MB HugeTLB
page consists of 512 base pages and a 1GB HugeTLB page consists of 4096
base pages. For each base page, there is a corresponding page struct.

Within the HugeTLB subsystem, only the first 4 page structs are used to
contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
provides this upper limit. The only 'useful' information in the remaining
page structs is the compound_head field, and this field is the same for all
tail pages.
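
For reference, compound_head() resolves a tail page to its head page
roughly as follows (a simplified sketch of the existing helper, shown only
to illustrate why the tail page structs carry no other unique information):

    static inline struct page *compound_head(struct page *page)
    {
            unsigned long head = READ_ONCE(page->compound_head);

            /* Bit 0 set: this is a tail page, the rest is the head pointer. */
            if (unlikely(head & 1))
                    return (struct page *)(head - 1);
            return page;
    }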

By removing redundant page structs for HugeTLB pages, memory can be
returned to the buddy allocator for other uses.

When the system boots up, every 2MB HugeTLB page has 512 struct page
structs which occupy 8 pages (sizeof(struct page) * 512 / PAGE_SIZE).

    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
 +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
 |           |                     |     0     | -------------> |     0     |
 |           |                     +-----------+                +-----------+
 |           |                     |     1     | -------------> |     1     |
 |           |                     +-----------+                +-----------+
 |           |                     |     2     | -------------> |     2     |
 |           |                     +-----------+                +-----------+
 |           |                     |     3     | -------------> |     3     |
 |           |                     +-----------+                +-----------+
 |           |                     |     4     | -------------> |     4     |
 |    2MB    |                     +-----------+                +-----------+
 |           |                     |     5     | -------------> |     5     |
 |           |                     +-----------+                +-----------+
 |           |                     |     6     | -------------> |     6     |
 |           |                     +-----------+                +-----------+
 |           |                     |     7     | -------------> |     7     |
 |           |                     +-----------+                +-----------+
 |           |
 |           |
 |           |
 +-----------+

The value of page->compound_head is the same for all tail pages. The first
page of page structs (page 0) associated with the HugeTLB page contains the 4
page structs necessary to describe the HugeTLB. The only use of the remaining
pages of page structs (page 1 to page 7) is to point to page->compound_head.
Therefore, we can remap pages 2 to 7 to page 1. Only 2 pages of page structs
will be used for each HugeTLB page. This will allow us to free the remaining
6 pages to the buddy allocator.

Here is how things look after remapping.

    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
 +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
 |           |                     |     0     | -------------> |     0     |
 |           |                     +-----------+                +-----------+
 |           |                     |     1     | -------------> |     1     |
 |           |                     +-----------+                +-----------+
 |           |                     |     2     | ----------------^ ^ ^ ^ ^ ^
 |           |                     +-----------+                   | | | | |
 |           |                     |     3     | ------------------+ | | | |
 |           |                     +-----------+                     | | | |
 |           |                     |     4     | --------------------+ | | |
 |    2MB    |                     +-----------+                       | | |
 |           |                     |     5     | ----------------------+ | |
 |           |                     +-----------+                         | |
 |           |                     |     6     | ------------------------+ |
 |           |                     +-----------+                           |
 |           |                     |     7     | --------------------------+
 |           |                     +-----------+
 |           |
 |           |
 |           |
 +-----------+

When a HugeTLB page is freed to the buddy system, we should allocate 6
vmemmap pages and restore the previous mapping relationship.

Apart from the 2MB HugeTLB page, we also have the 1GB HugeTLB page. It is
similar to the 2MB HugeTLB page, and we can use the same approach to free
its vmemmap pages.

In this case, for a 1GB HugeTLB page, we can save 4088 pages (there are
4096 pages of struct page structs; we reserve 2 pages for the vmemmap and
8 pages for page tables, so 4088 pages can be saved). This is a very
substantial gain. On our servers we run SPDK/QEMU applications which use
1024GB of HugeTLB pages. With this feature enabled, we can save ~16GB
(1GB hugepages) / ~11GB (2MB hugepages, where the worst case is 10GB and
the best is 12GB) of memory.
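
For reference, a rough back-of-the-envelope check of those numbers
(assuming 4KB base pages; the 2MB figure corresponds to the best case):

    1GB HugeTLB:  1024GB / 1GB = 1024 huge pages
                  1024 * 4088 pages * 4KB   ~= 16GB saved

    2MB HugeTLB:  1024GB / 2MB = 524288 huge pages
                  524288 * 6 pages * 4KB    ~= 12GB saved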

Because the vmemmap page tables are reconstructed on the freeing/allocating
path, some overhead is added. Here is an analysis of that overhead.

1) Allocating 10240 2MB hugetlb pages.

   a) With this patch series applied:
   # time echo 10240 > /proc/sys/vm/nr_hugepages

   real     0m0.166s
   user     0m0.000s
   sys      0m0.166s

   # bpftrace -e 'kprobe:alloc_fresh_huge_page { @start[tid] = nsecs; } kretprobe:alloc_fresh_huge_page /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [8K, 16K)           8360 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [16K, 32K)          1868 |@@@@@@@@@@@                                         |
   [32K, 64K)            10 |                                                    |
   [64K, 128K)            2 |                                                    |

   b) Without this patch series:
   # time echo 10240 > /proc/sys/vm/nr_hugepages

   real     0m0.066s
   user     0m0.000s
   sys      0m0.066s

   # bpftrace -e 'kprobe:alloc_fresh_huge_page { @start[tid] = nsecs; } kretprobe:alloc_fresh_huge_page /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [4K, 8K)           10176 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [8K, 16K)             62 |                                                    |
   [16K, 32K)             2 |                                                    |

   Summary: with this feature, allocation is about ~2x slower than before.

2) Freeing 10240 2MB hugetlb pages.

   a) With this patch series applied:
   # time echo 0 > /proc/sys/vm/nr_hugepages

   real     0m0.004s
   user     0m0.000s
   sys      0m0.002s

   # bpftrace -e 'kprobe:__free_hugepage { @start[tid] = nsecs; } kretprobe:__free_hugepage /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [16K, 32K)         10240 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|

   b) Without this patch series:
   # time echo 0 > /proc/sys/vm/nr_hugepages

   real     0m0.077s
   user     0m0.001s
   sys      0m0.075s

   # bpftrace -e 'kprobe:__free_hugepage { @start[tid] = nsecs; } kretprobe:__free_hugepage /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [4K, 8K)            9950 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [8K, 16K)            287 |@                                                   |
   [16K, 32K)             3 |                                                    |

   Summary: The overhead of __free_hugepage() is about ~2-4x higher than
            before. But according to the allocation test above, I think it
            is also roughly ~2x slower than before.

            Why is the 'real' time with the patches smaller than before?
            Because in this patch series the freeing of HugeTLB pages is
            asynchronous (done via a kworker).

Although the overhead has increased, the overhead is not significant. Like Mike
said, "However, remember that the majority of use cases create hugetlb pages at
or shortly after boot time and add them to the pool. So, additional overhead is
at pool creation time. There is no change to 'normal run time' operations of
getting a page from or returning a page to the pool (think page fault/unmap)".

Todo:
  - Free all of the tail vmemmap pages
    Currently, for a 2MB HugeTLB page, we only free 6 vmemmap pages, but we
    could really free 7. In that case, 8 of the 512 struct page structures
    would have the PG_head flag set, so compound_head() would need to be
    adjusted slightly to return the real head struct page when its parameter
    is a tail struct page that nevertheless has the PG_head flag set (see
    the illustrative sketch after this list).

    In order to keep the code evolution route clear, this can be a separate
    patch after this patchset is solid.

  - Support for other architectures (e.g. aarch64).
  - Enable PMD/huge page mapping of vmemmap even when this feature is enabled.
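
    For illustration only, here is one possible shape of the compound_head()
    adjustment mentioned in the first Todo item above. It is purely
    hypothetical and not part of this series; the "fake head" detection is
    an assumption about how it could be done:

    static inline struct page *compound_head(struct page *page)
    {
            unsigned long head = READ_ONCE(page->compound_head);

            if (unlikely(head & 1))
                    return (struct page *)(head - 1);

            /*
             * Hypothetical: if all tail vmemmap pages were remapped to the
             * head vmemmap page, some tail struct pages would alias the
             * head struct page and thus appear to have PG_head set. Peek at
             * the next struct page: if it carries a tail pointer, use that
             * to find the real head.
             */
            if (PageHead(page)) {
                    unsigned long fake = READ_ONCE(page[1].compound_head);

                    if (fake & 1)
                            return (struct page *)(fake - 1);
            }
            return page;
    }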

Changelog in v8 -> v9:
  - Rework some code. Many thanks to Oscar.
  - Put all the non-hugetlb vmemmap functions into sparse-vmemmap.c.

Changelog in v7 -> v8:
  - Adjust the order of patches.

  Many thanks to David and Oscar; your suggestions are very valuable.

Changelog in v6 -> v7:
  - Rebase to linux-next 20201130
  - Do not use basepage mapping for vmemmap when this feature is disabled.
  - Rework some patches.
    [PATCH v6 08/16] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
    [PATCH v6 10/16] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page

  Thanks to Oscar and Barry.

Changelog in v5 -> v6:
  - Disable PMD/huge page mapping of vmemmap if this feature was enabled.
  - Simplify the first version code.

Changelog in v4 -> v5:
  - Rework some comments and code in [PATCH v4 04/21] and [PATCH v4 05/21].

  Thanks to Mike and Oscar for their suggestions.

Changelog in v3 -> v4:
  - Move all the vmemmap functions to hugetlb_vmemmap.c.
  - Make CONFIG_HUGETLB_PAGE_FREE_VMEMMAP default to y; if we want to
    disable this feature, we can do so via a boot/kernel command line option.
  - Remove vmemmap_pgtable_{init, deposit, withdraw}() helper functions.
  - Initialize page table lock for vmemmap through core_initcall mechanism.

  Thanks to Mike and Oscar for their suggestions.

Changelog in v2 -> v3:
  - Rename some helper function names. Thanks Mike.
  - Rework some code. Thanks Mike and Oscar.
  - Remap the tail vmemmap page with PAGE_KERNEL_RO instead of PAGE_KERNEL.
    Thanks Matthew.
  - Add some overhead analysis in the cover letter.
  - Use the vmemmap pmd table lock instead of a hugetlb-specific global lock.

Changelog in v1 -> v2:
  - Fix: do not call dissolve_compound_page() in alloc_huge_page_vmemmap().
  - Fix some typos and code style problems.
  - Remove unused handle_vmemmap_fault().
  - Merge some commits to one commit suggested by Mike.

Muchun Song (11):
  mm/memory_hotplug: Factor out bootmem core functions to bootmem_info.c
  mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page
  mm/hugetlb: Defer freeing of HugeTLB pages
  mm/hugetlb: Allocate the vmemmap pages associated with each HugeTLB
    page
  mm/hugetlb: Set the PageHWPoison to the raw error page
  mm/hugetlb: Flush work when dissolving hugetlb page
  mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
  mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  mm/hugetlb: Gather discrete indexes of tail page
  mm/hugetlb: Optimize the code with the help of the compiler

 Documentation/admin-guide/kernel-parameters.txt |   9 +
 Documentation/admin-guide/mm/hugetlbpage.rst    |   3 +
 arch/x86/mm/init_64.c                           |  13 +-
 fs/Kconfig                                      |  15 ++
 include/linux/bootmem_info.h                    |  65 ++++++
 include/linux/hugetlb.h                         |  36 ++++
 include/linux/hugetlb_cgroup.h                  |  15 +-
 include/linux/memory_hotplug.h                  |  27 ---
 include/linux/mm.h                              |   3 +
 mm/Makefile                                     |   2 +
 mm/bootmem_info.c                               | 124 +++++++++++
 mm/hugetlb.c                                    | 161 ++++++++++++--
 mm/hugetlb_vmemmap.c                            | 268 ++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h                            |  45 ++++
 mm/memory_hotplug.c                             | 116 ----------
 mm/sparse-vmemmap.c                             | 237 +++++++++++++++++++++
 mm/sparse.c                                     |   1 +
 17 files changed, 966 insertions(+), 174 deletions(-)
 create mode 100644 include/linux/bootmem_info.h
 create mode 100644 mm/bootmem_info.c
 create mode 100644 mm/hugetlb_vmemmap.c
 create mode 100644 mm/hugetlb_vmemmap.h

-- 
2.11.0




* [PATCH v9 01/11] mm/memory_hotplug: Factor out bootmem core functions to bootmem_info.c
  2020-12-13 15:45 [PATCH v9 00/11] Free some vmemmap pages of HugeTLB page Muchun Song
@ 2020-12-13 15:45 ` Muchun Song
  2020-12-13 15:45 ` [PATCH v9 02/11] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP Muchun Song
                   ` (9 subsequent siblings)
  10 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-12-13 15:45 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Move the common bootmem info registration API into a separate file,
bootmem_info.c. A later patch will use {get,put}_page_bootmem() to
initialize the page structs for the vmemmap pages or to free the vmemmap
pages to the buddy allocator, so move them out of
CONFIG_MEMORY_HOTPLUG_SPARSE. This is just code movement without any
functional change.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
 arch/x86/mm/init_64.c          |   3 +-
 include/linux/bootmem_info.h   |  40 +++++++++++++
 include/linux/memory_hotplug.h |  27 ---------
 mm/Makefile                    |   1 +
 mm/bootmem_info.c              | 124 +++++++++++++++++++++++++++++++++++++++++
 mm/memory_hotplug.c            | 116 --------------------------------------
 mm/sparse.c                    |   1 +
 7 files changed, 168 insertions(+), 144 deletions(-)
 create mode 100644 include/linux/bootmem_info.h
 create mode 100644 mm/bootmem_info.c

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index b5a3fa4033d3..0a45f062826e 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -33,6 +33,7 @@
 #include <linux/nmi.h>
 #include <linux/gfp.h>
 #include <linux/kcore.h>
+#include <linux/bootmem_info.h>
 
 #include <asm/processor.h>
 #include <asm/bios_ebda.h>
@@ -1571,7 +1572,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 	return err;
 }
 
-#if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_HAVE_BOOTMEM_INFO_NODE)
+#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
 void register_page_bootmem_memmap(unsigned long section_nr,
 				  struct page *start_page, unsigned long nr_pages)
 {
diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
new file mode 100644
index 000000000000..4ed6dee1adc9
--- /dev/null
+++ b/include/linux/bootmem_info.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __LINUX_BOOTMEM_INFO_H
+#define __LINUX_BOOTMEM_INFO_H
+
+#include <linux/mmzone.h>
+
+/*
+ * Types for free bootmem stored in page->lru.next. These have to be in
+ * some random range in unsigned long space for debugging purposes.
+ */
+enum {
+	MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
+	SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
+	MIX_SECTION_INFO,
+	NODE_INFO,
+	MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
+};
+
+#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
+void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
+
+void get_page_bootmem(unsigned long info, struct page *page,
+		      unsigned long type);
+void put_page_bootmem(struct page *page);
+#else
+static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
+{
+}
+
+static inline void put_page_bootmem(struct page *page)
+{
+}
+
+static inline void get_page_bootmem(unsigned long info, struct page *page,
+				    unsigned long type)
+{
+}
+#endif
+
+#endif /* __LINUX_BOOTMEM_INFO_H */
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 15acce5ab106..84590964ad35 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -33,18 +33,6 @@ struct vmem_altmap;
 	___page;						   \
 })
 
-/*
- * Types for free bootmem stored in page->lru.next. These have to be in
- * some random range in unsigned long space for debugging purposes.
- */
-enum {
-	MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
-	SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
-	MIX_SECTION_INFO,
-	NODE_INFO,
-	MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
-};
-
 /* Types for control the zone type of onlined and offlined memory */
 enum {
 	/* Offline the memory. */
@@ -222,17 +210,6 @@ static inline void arch_refresh_nodedata(int nid, pg_data_t *pgdat)
 #endif /* CONFIG_NUMA */
 #endif /* CONFIG_HAVE_ARCH_NODEDATA_EXTENSION */
 
-#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
-extern void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
-#else
-static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-}
-#endif
-extern void put_page_bootmem(struct page *page);
-extern void get_page_bootmem(unsigned long ingo, struct page *page,
-			     unsigned long type);
-
 void get_online_mems(void);
 void put_online_mems(void);
 
@@ -260,10 +237,6 @@ static inline void zone_span_writelock(struct zone *zone) {}
 static inline void zone_span_writeunlock(struct zone *zone) {}
 static inline void zone_seqlock_init(struct zone *zone) {}
 
-static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-}
-
 static inline int try_online_node(int nid)
 {
 	return 0;
diff --git a/mm/Makefile b/mm/Makefile
index a1af02ba8f3f..ed4b88fa0f5e 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -83,6 +83,7 @@ obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KASAN)	+= kasan/
 obj-$(CONFIG_KFENCE) += kfence/
 obj-$(CONFIG_FAILSLAB) += failslab.o
+obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_MEMTEST)		+= memtest.o
 obj-$(CONFIG_MIGRATION) += migrate.o
diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
new file mode 100644
index 000000000000..fcab5a3f8cc0
--- /dev/null
+++ b/mm/bootmem_info.c
@@ -0,0 +1,124 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ *  linux/mm/bootmem_info.c
+ *
+ *  Copyright (C)
+ */
+#include <linux/mm.h>
+#include <linux/compiler.h>
+#include <linux/memblock.h>
+#include <linux/bootmem_info.h>
+#include <linux/memory_hotplug.h>
+
+void get_page_bootmem(unsigned long info, struct page *page, unsigned long type)
+{
+	page->freelist = (void *)type;
+	SetPagePrivate(page);
+	set_page_private(page, info);
+	page_ref_inc(page);
+}
+
+void put_page_bootmem(struct page *page)
+{
+	unsigned long type;
+
+	type = (unsigned long) page->freelist;
+	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
+	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
+
+	if (page_ref_dec_return(page) == 1) {
+		page->freelist = NULL;
+		ClearPagePrivate(page);
+		set_page_private(page, 0);
+		INIT_LIST_HEAD(&page->lru);
+		free_reserved_page(page);
+	}
+}
+
+#ifndef CONFIG_SPARSEMEM_VMEMMAP
+static void register_page_bootmem_info_section(unsigned long start_pfn)
+{
+	unsigned long mapsize, section_nr, i;
+	struct mem_section *ms;
+	struct page *page, *memmap;
+	struct mem_section_usage *usage;
+
+	section_nr = pfn_to_section_nr(start_pfn);
+	ms = __nr_to_section(section_nr);
+
+	/* Get section's memmap address */
+	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+
+	/*
+	 * Get page for the memmap's phys address
+	 * XXX: need more consideration for sparse_vmemmap...
+	 */
+	page = virt_to_page(memmap);
+	mapsize = sizeof(struct page) * PAGES_PER_SECTION;
+	mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
+
+	/* remember memmap's page */
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, SECTION_INFO);
+
+	usage = ms->usage;
+	page = virt_to_page(usage);
+
+	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
+
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
+
+}
+#else /* CONFIG_SPARSEMEM_VMEMMAP */
+static void register_page_bootmem_info_section(unsigned long start_pfn)
+{
+	unsigned long mapsize, section_nr, i;
+	struct mem_section *ms;
+	struct page *page, *memmap;
+	struct mem_section_usage *usage;
+
+	section_nr = pfn_to_section_nr(start_pfn);
+	ms = __nr_to_section(section_nr);
+
+	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+
+	register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
+
+	usage = ms->usage;
+	page = virt_to_page(usage);
+
+	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
+
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
+}
+#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
+
+void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
+{
+	unsigned long i, pfn, end_pfn, nr_pages;
+	int node = pgdat->node_id;
+	struct page *page;
+
+	nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
+	page = virt_to_page(pgdat);
+
+	for (i = 0; i < nr_pages; i++, page++)
+		get_page_bootmem(node, page, NODE_INFO);
+
+	pfn = pgdat->node_start_pfn;
+	end_pfn = pgdat_end_pfn(pgdat);
+
+	/* register section info */
+	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
+		/*
+		 * Some platforms can assign the same pfn to multiple nodes - on
+		 * node0 as well as nodeN.  To avoid registering a pfn against
+		 * multiple nodes we check that this pfn does not already
+		 * reside in some other nodes.
+		 */
+		if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node))
+			register_page_bootmem_info_section(pfn);
+	}
+}
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index a8cef4955907..4c4ca99745b7 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -141,122 +141,6 @@ static void release_memory_resource(struct resource *res)
 }
 
 #ifdef CONFIG_MEMORY_HOTPLUG_SPARSE
-void get_page_bootmem(unsigned long info,  struct page *page,
-		      unsigned long type)
-{
-	page->freelist = (void *)type;
-	SetPagePrivate(page);
-	set_page_private(page, info);
-	page_ref_inc(page);
-}
-
-void put_page_bootmem(struct page *page)
-{
-	unsigned long type;
-
-	type = (unsigned long) page->freelist;
-	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
-	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
-
-	if (page_ref_dec_return(page) == 1) {
-		page->freelist = NULL;
-		ClearPagePrivate(page);
-		set_page_private(page, 0);
-		INIT_LIST_HEAD(&page->lru);
-		free_reserved_page(page);
-	}
-}
-
-#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
-#ifndef CONFIG_SPARSEMEM_VMEMMAP
-static void register_page_bootmem_info_section(unsigned long start_pfn)
-{
-	unsigned long mapsize, section_nr, i;
-	struct mem_section *ms;
-	struct page *page, *memmap;
-	struct mem_section_usage *usage;
-
-	section_nr = pfn_to_section_nr(start_pfn);
-	ms = __nr_to_section(section_nr);
-
-	/* Get section's memmap address */
-	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-	/*
-	 * Get page for the memmap's phys address
-	 * XXX: need more consideration for sparse_vmemmap...
-	 */
-	page = virt_to_page(memmap);
-	mapsize = sizeof(struct page) * PAGES_PER_SECTION;
-	mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
-
-	/* remember memmap's page */
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, SECTION_INFO);
-
-	usage = ms->usage;
-	page = virt_to_page(usage);
-
-	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-
-}
-#else /* CONFIG_SPARSEMEM_VMEMMAP */
-static void register_page_bootmem_info_section(unsigned long start_pfn)
-{
-	unsigned long mapsize, section_nr, i;
-	struct mem_section *ms;
-	struct page *page, *memmap;
-	struct mem_section_usage *usage;
-
-	section_nr = pfn_to_section_nr(start_pfn);
-	ms = __nr_to_section(section_nr);
-
-	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-	register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
-
-	usage = ms->usage;
-	page = virt_to_page(usage);
-
-	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-}
-#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
-
-void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-	unsigned long i, pfn, end_pfn, nr_pages;
-	int node = pgdat->node_id;
-	struct page *page;
-
-	nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
-	page = virt_to_page(pgdat);
-
-	for (i = 0; i < nr_pages; i++, page++)
-		get_page_bootmem(node, page, NODE_INFO);
-
-	pfn = pgdat->node_start_pfn;
-	end_pfn = pgdat_end_pfn(pgdat);
-
-	/* register section info */
-	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
-		/*
-		 * Some platforms can assign the same pfn to multiple nodes - on
-		 * node0 as well as nodeN.  To avoid registering a pfn against
-		 * multiple nodes we check that this pfn does not already
-		 * reside in some other nodes.
-		 */
-		if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node))
-			register_page_bootmem_info_section(pfn);
-	}
-}
-#endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */
-
 static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
 		const char *reason)
 {
diff --git a/mm/sparse.c b/mm/sparse.c
index 7bd23f9d6cef..87676bf3af40 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -13,6 +13,7 @@
 #include <linux/vmalloc.h>
 #include <linux/swap.h>
 #include <linux/swapops.h>
+#include <linux/bootmem_info.h>
 
 #include "internal.h"
 #include <asm/dma.h>
-- 
2.11.0




* [PATCH v9 02/11] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  2020-12-13 15:45 [PATCH v9 00/11] Free some vmemmap pages of HugeTLB page Muchun Song
  2020-12-13 15:45 ` [PATCH v9 01/11] mm/memory_hotplug: Factor out bootmem core functions to bootmem_info.c Muchun Song
@ 2020-12-13 15:45 ` Muchun Song
  2020-12-16  1:03   ` Mike Kravetz
  2020-12-13 15:45 ` [PATCH v9 03/11] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page Muchun Song
                   ` (8 subsequent siblings)
  10 siblings, 1 reply; 43+ messages in thread
From: Muchun Song @ 2020-12-13 15:45 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

The purpose of introducing HUGETLB_PAGE_FREE_VMEMMAP is to configure
whether to enable the feature of freeing unused vmemmap pages associated
with HugeTLB pages. It is just used for the dependency check. For now,
only x86-64 is supported.

Because this config depends on HAVE_BOOTMEM_INFO_NODE, and the purpose of
register_page_bootmem_info() is to register bootmem info, we should
register bootmem info when this config is enabled.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 arch/x86/mm/init_64.c |  2 +-
 fs/Kconfig            | 15 +++++++++++++++
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0a45f062826e..0435bee2e172 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1225,7 +1225,7 @@ static struct kcore_list kcore_vsyscall;
 
 static void __init register_page_bootmem_info(void)
 {
-#ifdef CONFIG_NUMA
+#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)
 	int i;
 
 	for_each_online_node(i)
diff --git a/fs/Kconfig b/fs/Kconfig
index 976e8b9033c4..4c3a9c614983 100644
--- a/fs/Kconfig
+++ b/fs/Kconfig
@@ -245,6 +245,21 @@ config HUGETLBFS
 config HUGETLB_PAGE
 	def_bool HUGETLBFS
 
+config HUGETLB_PAGE_FREE_VMEMMAP
+	def_bool HUGETLB_PAGE
+	depends on X86_64
+	depends on SPARSEMEM_VMEMMAP
+	depends on HAVE_BOOTMEM_INFO_NODE
+	help
+	  When using HUGETLB_PAGE_FREE_VMEMMAP, the system can save some
+	  memory from pre-allocated HugeTLB pages when they are not used:
+	  6 pages per HugeTLB page of the pmd level mapping and (PAGE_SIZE - 2)
+	  pages per HugeTLB page of the pud level mapping.
+
+	  When the pages are going to be used or freed up, the vmemmap array
+	  representing that range needs to be remapped again and the pages
+	  we discarded earlier need to be reallocated again.
+
 config MEMFD_CREATE
 	def_bool TMPFS || HUGETLBFS
 
-- 
2.11.0




* [PATCH v9 03/11] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page
  2020-12-13 15:45 [PATCH v9 00/11] Free some vmemmap pages of HugeTLB page Muchun Song
  2020-12-13 15:45 ` [PATCH v9 01/11] mm/memory_hotplug: Factor out bootmem core functions to bootmem_info.c Muchun Song
  2020-12-13 15:45 ` [PATCH v9 02/11] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP Muchun Song
@ 2020-12-13 15:45 ` Muchun Song
  2020-12-16 13:06   ` Oscar Salvador
  2020-12-16 22:08   ` Mike Kravetz
  2020-12-13 15:45 ` [PATCH v9 04/11] mm/hugetlb: Defer freeing of HugeTLB pages Muchun Song
                   ` (7 subsequent siblings)
  10 siblings, 2 replies; 43+ messages in thread
From: Muchun Song @ 2020-12-13 15:45 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Every HugeTLB page has more than one struct page structure. We __know__
that we only use the first 4 (HUGETLB_CGROUP_MIN_ORDER) struct page
structures to store metadata associated with each HugeTLB page.

There are a lot of struct page structures associated with each HugeTLB
page. For tail pages, the value of compound_head is the same, so we can
reuse the first page of the tail page structures. We map the virtual
addresses of the remaining pages of tail page structures to the first
tail page struct, and then free these page frames. Therefore, we need to
reserve two pages as vmemmap areas.

When we allocate a HugeTLB page from the buddy allocator, we can free some
of the vmemmap pages associated with it. It is more appropriate to do this
in prep_new_huge_page().

The free_vmemmap_pages_per_hpage(), which indicates how many vmemmap
pages associated with a HugeTLB page can be freed, returns zero for
now, which means the feature is disabled. We will enable it once all
the infrastructure is there.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/bootmem_info.h |  27 +++++-
 include/linux/mm.h           |   2 +
 mm/Makefile                  |   1 +
 mm/hugetlb.c                 |   3 +
 mm/hugetlb_vmemmap.c         | 209 +++++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h         |  20 +++++
 mm/sparse-vmemmap.c          | 170 +++++++++++++++++++++++++++++++++++
 7 files changed, 431 insertions(+), 1 deletion(-)
 create mode 100644 mm/hugetlb_vmemmap.c
 create mode 100644 mm/hugetlb_vmemmap.h

diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
index 4ed6dee1adc9..4c80b7be1771 100644
--- a/include/linux/bootmem_info.h
+++ b/include/linux/bootmem_info.h
@@ -2,7 +2,7 @@
 #ifndef __LINUX_BOOTMEM_INFO_H
 #define __LINUX_BOOTMEM_INFO_H
 
-#include <linux/mmzone.h>
+#include <linux/mm.h>
 
 /*
  * Types for free bootmem stored in page->lru.next. These have to be in
@@ -22,6 +22,27 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
 void get_page_bootmem(unsigned long info, struct page *page,
 		      unsigned long type);
 void put_page_bootmem(struct page *page);
+
+/*
+ * Any memory allocated via the memblock allocator and not via the
+ * buddy will be marked reserved already in the memmap. For those
+ * pages, we can call this function to free it to buddy allocator.
+ */
+static inline void free_bootmem_page(struct page *page)
+{
+	unsigned long magic = (unsigned long)page->freelist;
+
+	/*
+	 * The reserve_bootmem_region sets the reserved flag on bootmem
+	 * pages.
+	 */
+	VM_WARN_ON(page_ref_count(page) != 2);
+
+	if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
+		put_page_bootmem(page);
+	else
+		VM_WARN_ON(1);
+}
 #else
 static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
 {
@@ -35,6 +56,10 @@ static inline void get_page_bootmem(unsigned long info, struct page *page,
 				    unsigned long type)
 {
 }
+
+static inline void free_bootmem_page(struct page *page)
+{
+}
 #endif
 
 #endif /* __LINUX_BOOTMEM_INFO_H */
diff --git a/include/linux/mm.h b/include/linux/mm.h
index eabe7d9f80d8..ab02e405a979 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3005,6 +3005,8 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
 }
 #endif
 
+void vmemmap_remap_reuse(unsigned long start, unsigned long size);
+
 void *sparse_buffer_alloc(unsigned long size);
 struct page * __populate_section_memmap(unsigned long pfn,
 		unsigned long nr_pages, int nid, struct vmem_altmap *altmap);
diff --git a/mm/Makefile b/mm/Makefile
index ed4b88fa0f5e..056801d8daae 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -71,6 +71,7 @@ obj-$(CONFIG_FRONTSWAP)	+= frontswap.o
 obj-$(CONFIG_ZSWAP)	+= zswap.o
 obj-$(CONFIG_HAS_DMA)	+= dmapool.o
 obj-$(CONFIG_HUGETLBFS)	+= hugetlb.o
+obj-$(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)	+= hugetlb_vmemmap.o
 obj-$(CONFIG_NUMA) 	+= mempolicy.o
 obj-$(CONFIG_SPARSEMEM)	+= sparse.o
 obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1f3bf1710b66..140135fc8113 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -42,6 +42,7 @@
 #include <linux/userfaultfd_k.h>
 #include <linux/page_owner.h>
 #include "internal.h"
+#include "hugetlb_vmemmap.h"
 
 int hugetlb_max_hstate __read_mostly;
 unsigned int default_hstate_idx;
@@ -1497,6 +1498,8 @@ void free_huge_page(struct page *page)
 
 static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 {
+	free_huge_page_vmemmap(h, page);
+
 	INIT_LIST_HEAD(&page->lru);
 	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
 	set_hugetlb_cgroup(page, NULL);
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
new file mode 100644
index 000000000000..5a714bd60d6b
--- /dev/null
+++ b/mm/hugetlb_vmemmap.c
@@ -0,0 +1,209 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ *     Author: Muchun Song <songmuchun@bytedance.com>
+ *
+ * The struct page structures (page structs) are used to describe a physical
+ * page frame. By default, there is a one-to-one mapping from a page frame to
+ * its corresponding page struct.
+ *
+ * HugeTLB pages consist of multiple base pages and are supported
+ * by many architectures. See hugetlbpage.rst in the Documentation directory
+ * for more details. On the x86-64 architecture, HugeTLB pages of size 2MB and
+ * 1GB are currently supported. Since the base page size on x86 is 4KB, a 2MB
+ * HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
+ * 4096 base pages. For each base page, there is a corresponding page struct.
+ *
+ * Within the HugeTLB subsystem, only the first 4 page structs are used to
+ * contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
+ * provides this upper limit. The only 'useful' information in the remaining
+ * page structs is the compound_head field, and this field is the same for all
+ * tail pages.
+ *
+ * By removing redundant page structs for HugeTLB pages, memory can returned to
+ * the buddy allocator for other uses.
+ *
+ * Different architectures support different HugeTLB pages. For example, the
+ * following table is the HugeTLB page size supported by x86 and arm64
+ * architectures. Because arm64 supports 4k, 16k, and 64k base pages and
+ * supports contiguous entries, it supports many different HugeTLB page
+ * sizes.
+ *
+ * +--------------+-----------+-----------------------------------------------+
+ * | Architecture | Page Size |                HugeTLB Page Size              |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ * |    x86-64    |    4KB    |    2MB    |    1GB    |           |           |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ * |              |    4KB    |   64KB    |    2MB    |    32MB   |    1GB    |
+ * |              +-----------+-----------+-----------+-----------+-----------+
+ * |    arm64     |   16KB    |    2MB    |   32MB    |     1GB   |           |
+ * |              +-----------+-----------+-----------+-----------+-----------+
+ * |              |   64KB    |    2MB    |  512MB    |    16GB   |           |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ *
+ * When the system boots up, every HugeTLB page has more than one struct page
+ * struct; their total size is (unit: pages):
+ *
+ *    struct_size = HugeTLB_Size / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE
+ *
+ * Where HugeTLB_Size is the size of the HugeTLB page. We know that the size
+ * of the HugeTLB page is always n times PAGE_SIZE. So we can get the following
+ * relationship.
+ *
+ *    HugeTLB_Size = n * PAGE_SIZE
+ *
+ * Then,
+ *
+ *    struct_size = n * PAGE_SIZE / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE
+ *                = n * sizeof(struct page) / PAGE_SIZE
+ *
+ * We can use huge mapping at the pud/pmd level for the HugeTLB page.
+ *
+ * For the HugeTLB page of the pmd level mapping, then
+ *
+ *    struct_size = n * sizeof(struct page) / PAGE_SIZE
+ *                = PAGE_SIZE / sizeof(pte_t) * sizeof(struct page) / PAGE_SIZE
+ *                = sizeof(struct page) / sizeof(pte_t)
+ *                = 64 / 8
+ *                = 8 (pages)
+ *
+ * Where n is how many pte entries one page can contain. So the value of
+ * n is (PAGE_SIZE / sizeof(pte_t)).
+ *
+ * This optimization only supports 64-bit systems, so the value of sizeof(pte_t)
+ * is 8. And this optimization is also applicable only when the size of struct page
+ * is a power of two. In most cases, the size of struct page is 64 (e.g. x86-64
+ * and arm64). So if we use pmd level mapping for a HugeTLB page, the size of
+ * struct page structs of it is 8 pages whose size depends on the size of the
+ * base page.
+ *
+ * For the HugeTLB page of the pud level mapping, then
+ *
+ *    struct_size = PAGE_SIZE / sizeof(pmd_t) * struct_size(pmd)
+ *                = PAGE_SIZE / 8 * 8 (pages)
+ *                = PAGE_SIZE (pages)
+ *
+ * Where the struct_size(pmd) is the size of the struct page structs of a
+ * HugeTLB page of the pmd level mapping.
+ *
+ * Next, we take the pmd level mapping of the HugeTLB page as an example to
+ * show the internal implementation of this optimization. There are 8 pages
+ * struct page structs associated with a HugeTLB page which is pmd mapped.
+ *
+ * Here is how things look before optimization.
+ *
+ *    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ * |           |                     |     0     | -------------> |     0     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     1     | -------------> |     1     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     2     | -------------> |     2     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     3     | -------------> |     3     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     4     | -------------> |     4     |
+ * |    PMD    |                     +-----------+                +-----------+
+ * |   level   |                     |     5     | -------------> |     5     |
+ * |  mapping  |                     +-----------+                +-----------+
+ * |           |                     |     6     | -------------> |     6     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     7     | -------------> |     7     |
+ * |           |                     +-----------+                +-----------+
+ * |           |
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ * The value of page->compound_head is the same for all tail pages. The first
+ * page of page structs (page 0) associated with the HugeTLB page contains the 4
+ * page structs necessary to describe the HugeTLB. The only use of the remaining
+ * pages of page structs (page 1 to page 7) is to point to page->compound_head.
+ * Therefore, we can remap pages 2 to 7 to page 1. Only 2 pages of page structs
+ * will be used for each HugeTLB page. This will allow us to free the remaining
+ * 6 pages to the buddy allocator.
+ *
+ * Here is how things look after remapping.
+ *
+ *    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ * |           |                     |     0     | -------------> |     0     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     1     | -------------> |     1     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     2     | ----------------^ ^ ^ ^ ^ ^
+ * |           |                     +-----------+                   | | | | |
+ * |           |                     |     3     | ------------------+ | | | |
+ * |           |                     +-----------+                     | | | |
+ * |           |                     |     4     | --------------------+ | | |
+ * |    PMD    |                     +-----------+                       | | |
+ * |   level   |                     |     5     | ----------------------+ | |
+ * |  mapping  |                     +-----------+                         | |
+ * |           |                     |     6     | ------------------------+ |
+ * |           |                     +-----------+                           |
+ * |           |                     |     7     | --------------------------+
+ * |           |                     +-----------+
+ * |           |
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ * When a HugeTLB is freed to the buddy system, we should allocate 6 pages for
+ * vmemmap pages and restore the previous mapping relationship.
+ *
+ * The HugeTLB page of the pud level mapping is similar to the former.
+ * We can also use this approach to free (PAGE_SIZE - 2) vmemmap pages.
+ *
+ * Apart from the HugeTLB page of the pmd/pud level mapping, some architectures
+ * (e.g. aarch64) provide a contiguous bit in the translation table entries
+ * that hints to the MMU to indicate that it is one of a contiguous set of
+ * entries that can be cached in a single TLB entry.
+ *
+ * The contiguous bit is used to increase the mapping size at the pmd and pte
+ * (last) level. So this type of HugeTLB page can be optimized only when its
+ * size of the struct page structs is greater than 2 pages.
+ */
+#define pr_fmt(fmt)	"HugeTLB vmemmap: " fmt
+
+#include "hugetlb_vmemmap.h"
+
+/*
+ * There are a lot of struct page structures associated with each HugeTLB page.
+ * For tail pages, the value of compound_head is the same. So we can reuse the first
+ * page of tail page structures. We map the virtual addresses of the remaining
+ * pages of tail page structures to the first tail page struct, and then free
+ * these page frames. Therefore, we need to reserve two pages as vmemmap areas.
+ */
+#define RESERVE_VMEMMAP_NR		2U
+#define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
+
+/*
+ * How many vmemmap pages associated with a HugeTLB page that can be freed
+ * to the buddy allocator.
+ *
+ * Todo: Returns zero for now, which means the feature is disabled. We will
+ * enable it once all the infrastructure is there.
+ */
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
+
+static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
+{
+	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
+}
+
+void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	unsigned long vmemmap_addr = (unsigned long)head;
+
+	if (!free_vmemmap_pages_per_hpage(h))
+		return;
+
+	vmemmap_remap_reuse(vmemmap_addr + RESERVE_VMEMMAP_SIZE,
+			    free_vmemmap_pages_size_per_hpage(h));
+}
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
new file mode 100644
index 000000000000..6923f03534d5
--- /dev/null
+++ b/mm/hugetlb_vmemmap.h
@@ -0,0 +1,20 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ *     Author: Muchun Song <songmuchun@bytedance.com>
+ */
+#ifndef _LINUX_HUGETLB_VMEMMAP_H
+#define _LINUX_HUGETLB_VMEMMAP_H
+#include <linux/hugetlb.h>
+
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+#else
+static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
+#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
+#endif /* _LINUX_HUGETLB_VMEMMAP_H */
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 16183d85a7d5..78c527617e8d 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -27,8 +27,178 @@
 #include <linux/spinlock.h>
 #include <linux/vmalloc.h>
 #include <linux/sched.h>
+#include <linux/pgtable.h>
+#include <linux/bootmem_info.h>
+
 #include <asm/dma.h>
 #include <asm/pgalloc.h>
+#include <asm/tlbflush.h>
+
+/*
+ * vmemmap_rmap_walk - walk vmemmap page table
+ *
+ * @rmap_pte:		called for each non-empty PTE (lowest-level) entry.
+ * @reuse:		the page which is reused for the tail vmemmap pages.
+ * @vmemmap_pages:	the list head of the vmemmap pages that can be freed.
+ */
+struct vmemmap_rmap_walk {
+	void (*rmap_pte)(pte_t *pte, unsigned long addr,
+			 struct vmemmap_rmap_walk *walk);
+	struct page *reuse;
+	struct list_head *vmemmap_pages;
+};
+
+/*
+ * The index of the pte page table which is mapped to the tail of the
+ * vmemmap page.
+ */
+#define VMEMMAP_TAIL_PAGE_REUSE		-1
+
+static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
+			      unsigned long end, struct vmemmap_rmap_walk *walk)
+{
+	pte_t *pte;
+
+	pte = pte_offset_kernel(pmd, addr);
+	do {
+		BUG_ON(pte_none(*pte));
+
+		if (!walk->reuse)
+			walk->reuse = pte_page(pte[VMEMMAP_TAIL_PAGE_REUSE]);
+
+		if (walk->rmap_pte)
+			walk->rmap_pte(pte, addr, walk);
+	} while (pte++, addr += PAGE_SIZE, addr != end);
+}
+
+static void vmemmap_pmd_range(pud_t *pud, unsigned long addr,
+			      unsigned long end, struct vmemmap_rmap_walk *walk)
+{
+	pmd_t *pmd;
+	unsigned long next;
+
+	pmd = pmd_offset(pud, addr);
+	do {
+		BUG_ON(pmd_none(*pmd));
+
+		next = pmd_addr_end(addr, end);
+		vmemmap_pte_range(pmd, addr, next, walk);
+	} while (pmd++, addr = next, addr != end);
+}
+
+static void vmemmap_pud_range(p4d_t *p4d, unsigned long addr,
+			      unsigned long end, struct vmemmap_rmap_walk *walk)
+{
+	pud_t *pud;
+	unsigned long next;
+
+	pud = pud_offset(p4d, addr);
+	do {
+		BUG_ON(pud_none(*pud));
+
+		next = pud_addr_end(addr, end);
+		vmemmap_pmd_range(pud, addr, next, walk);
+	} while (pud++, addr = next, addr != end);
+}
+
+static void vmemmap_p4d_range(pgd_t *pgd, unsigned long addr,
+			      unsigned long end, struct vmemmap_rmap_walk *walk)
+{
+	p4d_t *p4d;
+	unsigned long next;
+
+	p4d = p4d_offset(pgd, addr);
+	do {
+		BUG_ON(p4d_none(*p4d));
+
+		next = p4d_addr_end(addr, end);
+		vmemmap_pud_range(p4d, addr, next, walk);
+	} while (p4d++, addr = next, addr != end);
+}
+
+static void vmemmap_remap_range(unsigned long start, unsigned long end,
+				struct vmemmap_rmap_walk *walk)
+{
+	unsigned long addr = start;
+	unsigned long next;
+	pgd_t *pgd;
+
+	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
+	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
+
+	pgd = pgd_offset_k(addr);
+	do {
+		BUG_ON(pgd_none(*pgd));
+
+		next = pgd_addr_end(addr, end);
+		vmemmap_p4d_range(pgd, addr, next, walk);
+	} while (pgd++, addr = next, addr != end);
+
+	flush_tlb_kernel_range(start, end);
+}
+
+/*
+ * Free a vmemmap page. A vmemmap page can be allocated from the memblock
+ * allocator or buddy allocator. If the PG_reserved flag is set, it means
+ * that it was allocated from the memblock allocator, so just free it via
+ * free_bootmem_page(). Otherwise, use __free_page().
+ */
+static inline void free_vmemmap_page(struct page *page)
+{
+	if (PageReserved(page))
+		free_bootmem_page(page);
+	else
+		__free_page(page);
+}
+
+/* Free a list of the vmemmap pages */
+static void free_vmemmap_page_list(struct list_head *list)
+{
+	struct page *page, *next;
+
+	list_for_each_entry_safe(page, next, list, lru) {
+		list_del(&page->lru);
+		free_vmemmap_page(page);
+	}
+}
+
+static void vmemmap_remap_reuse_pte(pte_t *pte, unsigned long addr,
+				    struct vmemmap_rmap_walk *walk)
+{
+	/*
+	 * Make sure the tail pages are mapped read-only to catch
+	 * illegal write operations to the tail pages.
+	 */
+	pgprot_t pgprot = PAGE_KERNEL_RO;
+	pte_t entry = mk_pte(walk->reuse, pgprot);
+	struct page *page;
+
+	page = pte_page(*pte);
+	list_add(&page->lru, walk->vmemmap_pages);
+
+	set_pte_at(&init_mm, addr, pte, entry);
+}
+
+/**
+ * vmemmap_remap_reuse - remap the vmemmap virtual address range
+ *                       [start, start + size) to the page to which
+ *                       [start - PAGE_SIZE, start) is mapped.
+ * @start:	start address of the vmemmap virtual address range
+ * @size:	size of the vmemmap virtual address range
+ */
+void vmemmap_remap_reuse(unsigned long start, unsigned long size)
+{
+	unsigned long end = start + size;
+	LIST_HEAD(vmemmap_pages);
+
+	struct vmemmap_rmap_walk walk = {
+		.rmap_pte	= vmemmap_remap_reuse_pte,
+		.vmemmap_pages	= &vmemmap_pages,
+	};
+
+	vmemmap_remap_range(start, end, &walk);
+	free_vmemmap_page_list(&vmemmap_pages);
+}
 
 /*
  * Allocate a block of memory to be used to back the virtual memory map
-- 
2.11.0




* [PATCH v9 04/11] mm/hugetlb: Defer freeing of HugeTLB pages
  2020-12-13 15:45 [PATCH v9 00/11] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (2 preceding siblings ...)
  2020-12-13 15:45 ` [PATCH v9 03/11] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page Muchun Song
@ 2020-12-13 15:45 ` Muchun Song
  2020-12-16 23:48   ` Mike Kravetz
  2020-12-13 15:45 ` [PATCH v9 05/11] mm/hugetlb: Allocate the vmemmap pages associated with each HugeTLB page Muchun Song
                   ` (6 subsequent siblings)
  10 siblings, 1 reply; 43+ messages in thread
From: Muchun Song @ 2020-12-13 15:45 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

In a subsequent patch, we will allocate the vmemmap pages when freeing
HugeTLB pages. But update_and_free_page() may be called from a non-task
context (and with hugetlb_lock held), so we defer the actual freeing to a
workqueue to avoid having to use GFP_ATOMIC to allocate the vmemmap pages.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c         | 77 ++++++++++++++++++++++++++++++++++++++++++++++++----
 mm/hugetlb_vmemmap.c | 12 --------
 mm/hugetlb_vmemmap.h | 17 ++++++++++++
 3 files changed, 88 insertions(+), 18 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 140135fc8113..0ff9b90e524f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1292,15 +1292,76 @@ static inline void destroy_compound_gigantic_page(struct page *page,
 						unsigned int order) { }
 #endif
 
-static void update_and_free_page(struct hstate *h, struct page *page)
+static void __free_hugepage(struct hstate *h, struct page *page);
+
+/*
+ * As update_and_free_page() may be called from a non-task context (with
+ * hugetlb_lock held), we defer the actual freeing to a workqueue to avoid
+ * using GFP_ATOMIC to allocate a lot of vmemmap pages.
+ *
+ * update_hpage_vmemmap_workfn() locklessly retrieves the linked list of
+ * pages to be freed and frees them one-by-one. As the page->mapping pointer
+ * is going to be cleared in update_hpage_vmemmap_workfn() anyway, it is
+ * reused as the llist_node structure of a lockless linked list of huge
+ * pages to be freed.
+ */
+static LLIST_HEAD(hpage_update_freelist);
+
+static void update_hpage_vmemmap_workfn(struct work_struct *work)
 {
-	int i;
+	struct llist_node *node;
+	struct page *page;
+
+	node = llist_del_all(&hpage_update_freelist);
 
+	while (node) {
+		page = container_of((struct address_space **)node,
+				     struct page, mapping);
+		node = node->next;
+		page->mapping = NULL;
+		__free_hugepage(page_hstate(page), page);
+
+		cond_resched();
+	}
+}
+static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn);
+
+static inline void __update_and_free_page(struct hstate *h, struct page *page)
+{
+	/* No need to allocate vmemmap pages */
+	if (!free_vmemmap_pages_per_hpage(h)) {
+		__free_hugepage(h, page);
+		return;
+	}
+
+	/*
+	 * Defer freeing to avoid using GFP_ATOMIC to allocate vmemmap
+	 * pages.
+	 *
+	 * Only call schedule_work() if hpage_update_freelist is previously
+	 * empty. Otherwise, schedule_work() had been called but the workfn
+	 * hasn't retrieved the list yet.
+	 */
+	if (llist_add((struct llist_node *)&page->mapping,
+		      &hpage_update_freelist))
+		schedule_work(&hpage_update_work);
+}
+
+static void update_and_free_page(struct hstate *h, struct page *page)
+{
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
 		return;
 
 	h->nr_huge_pages--;
 	h->nr_huge_pages_node[page_to_nid(page)]--;
+
+	__update_and_free_page(h, page);
+}
+
+static void __free_hugepage(struct hstate *h, struct page *page)
+{
+	int i;
+
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
 				1 << PG_referenced | 1 << PG_dirty |
@@ -1313,13 +1374,17 @@ static void update_and_free_page(struct hstate *h, struct page *page)
 	set_page_refcounted(page);
 	if (hstate_is_gigantic(h)) {
 		/*
-		 * Temporarily drop the hugetlb_lock, because
-		 * we might block in free_gigantic_page().
+		 * Temporarily drop the hugetlb_lock only when this type of
+		 * HugeTLB page does not support vmemmap optimization (when it
+		 * does, this context does not hold the hugetlb_lock), because
+		 * we might block in free_gigantic_page().
 		 */
-		spin_unlock(&hugetlb_lock);
+		if (!free_vmemmap_pages_per_hpage(h))
+			spin_unlock(&hugetlb_lock);
 		destroy_compound_gigantic_page(page, huge_page_order(h));
 		free_gigantic_page(page, huge_page_order(h));
-		spin_lock(&hugetlb_lock);
+		if (!free_vmemmap_pages_per_hpage(h))
+			spin_lock(&hugetlb_lock);
 	} else {
 		__free_pages(page, huge_page_order(h));
 	}
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 5a714bd60d6b..6d4e77a2b6c7 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -180,18 +180,6 @@
 #define RESERVE_VMEMMAP_NR		2U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
-/*
- * How many vmemmap pages associated with a HugeTLB page that can be freed
- * to the buddy allocator.
- *
- * Todo: Returns zero for now, which means the feature is disabled. We will
- * enable it once all the infrastructure is there.
- */
-static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
-{
-	return 0;
-}
-
 static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
 {
 	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 6923f03534d5..01f8637adbe0 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -12,9 +12,26 @@
 
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+
+/*
+ * How many vmemmap pages associated with a HugeTLB page that can be freed
+ * to the buddy allocator.
+ *
+ * Todo: Returns zero for now, which means the feature is disabled. We will
+ * enable it once all the infrastructure is there.
+ */
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
 #else
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
+
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
 #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v9 05/11] mm/hugetlb: Allocate the vmemmap pages associated with each HugeTLB page
  2020-12-13 15:45 [PATCH v9 00/11] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (3 preceding siblings ...)
  2020-12-13 15:45 ` [PATCH v9 04/11] mm/hugetlb: Defer freeing of HugeTLB pages Muchun Song
@ 2020-12-13 15:45 ` Muchun Song
  2020-12-17  1:17   ` Mike Kravetz
  2020-12-13 15:45 ` [PATCH v9 06/11] mm/hugetlb: Set the PageHWPoison to the raw error page Muchun Song
                   ` (5 subsequent siblings)
  10 siblings, 1 reply; 43+ messages in thread
From: Muchun Song @ 2020-12-13 15:45 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

When we free a HugeTLB page to the buddy allocator, we should allocate the
vmemmap pages associated with it again. We can do that in __free_hugepage()
before handing the page back to the buddy allocator.
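
Put differently, the restore path does two things: it first allocates one
page for every vmemmap page to be restored (retrying indefinitely, since an
allocation failure cannot be handled at that point), and then walks the
vmemmap page table and, for each PTE, copies the contents of the shared
(reused) page into a fresh page and re-points the PTE at it. A stripped-down
sketch of that per-PTE step (illustrative only; the real code is
vmemmap_remap_restore_pte() in the diff):

	static void restore_one_pte(pte_t *pte, unsigned long addr,
				    struct page *reuse, struct page *fresh)
	{
		/* preserve the current struct page contents ... */
		copy_page(page_to_virt(fresh), page_to_virt(reuse));
		/* ... and map the fresh page writable again */
		set_pte_at(&init_mm, addr, pte, mk_pte(fresh, PAGE_KERNEL));
	}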

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/mm.h   |  1 +
 mm/hugetlb.c         |  2 ++
 mm/hugetlb_vmemmap.c | 11 +++++++++
 mm/hugetlb_vmemmap.h |  5 ++++
 mm/sparse-vmemmap.c  | 69 +++++++++++++++++++++++++++++++++++++++++++++++++++-
 5 files changed, 87 insertions(+), 1 deletion(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ab02e405a979..5b8dc36e4d20 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3006,6 +3006,7 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
 #endif
 
 void vmemmap_remap_reuse(unsigned long start, unsigned long size);
+void vmemmap_remap_restore(unsigned long start, unsigned long size);
 
 void *sparse_buffer_alloc(unsigned long size);
 struct page * __populate_section_memmap(unsigned long pfn,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0ff9b90e524f..542e6cb81321 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1362,6 +1362,8 @@ static void __free_hugepage(struct hstate *h, struct page *page)
 {
 	int i;
 
+	alloc_huge_page_vmemmap(h, page);
+
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
 				1 << PG_referenced | 1 << PG_dirty |
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 6d4e77a2b6c7..02201c2e3dfa 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -185,6 +185,17 @@ static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
 	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
 }
 
+void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	unsigned long vmemmap_addr = (unsigned long)head;
+
+	if (!free_vmemmap_pages_per_hpage(h))
+		return;
+
+	vmemmap_remap_restore(vmemmap_addr + RESERVE_VMEMMAP_SIZE,
+			      free_vmemmap_pages_size_per_hpage(h));
+}
+
 void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 	unsigned long vmemmap_addr = (unsigned long)head;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 01f8637adbe0..b2c8d2f11d48 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -11,6 +11,7 @@
 #include <linux/hugetlb.h>
 
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+void alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
 
 /*
@@ -25,6 +26,10 @@ static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 	return 0;
 }
 #else
+static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
+
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 78c527617e8d..ffcf092c92ed 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -29,6 +29,7 @@
 #include <linux/sched.h>
 #include <linux/pgtable.h>
 #include <linux/bootmem_info.h>
+#include <linux/delay.h>
 
 #include <asm/dma.h>
 #include <asm/pgalloc.h>
@@ -39,7 +40,8 @@
  *
  * @rmap_pte:		called for each non-empty PTE (lowest-level) entry.
  * @reuse:		the page which is reused for the tail vmemmap pages.
- * @vmemmap_pages:	the list head of the vmemmap pages that can be freed.
+ * @vmemmap_pages:	the list head of the vmemmap pages that can be freed
+ *			(when unmapping) or that are remapped (when restoring).
  */
 struct vmemmap_rmap_walk {
 	void (*rmap_pte)(pte_t *pte, unsigned long addr,
@@ -54,6 +56,9 @@ struct vmemmap_rmap_walk {
  */
 #define VMEMMAP_TAIL_PAGE_REUSE		-1
 
+/* The gfp mask of allocating vmemmap page */
+#define GFP_VMEMMAP_PAGE	(GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN)
+
 static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
 			      unsigned long end, struct vmemmap_rmap_walk *walk)
 {
@@ -200,6 +205,68 @@ void vmemmap_remap_reuse(unsigned long start, unsigned long size)
 	free_vmemmap_page_list(&vmemmap_pages);
 }
 
+static void vmemmap_remap_restore_pte(pte_t *pte, unsigned long addr,
+				      struct vmemmap_rmap_walk *walk)
+{
+	pgprot_t pgprot = PAGE_KERNEL;
+	struct page *page;
+	void *to;
+
+	BUG_ON(pte_page(*pte) != walk->reuse);
+
+	page = list_first_entry(walk->vmemmap_pages, struct page, lru);
+	list_del(&page->lru);
+	to = page_to_virt(page);
+	copy_page(to, page_to_virt(walk->reuse));
+
+	set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
+}
+
+static void alloc_vmemmap_page_list(struct list_head *list,
+				    unsigned long nr_pages)
+{
+	while (nr_pages--) {
+		struct page *page;
+
+retry:
+		page = alloc_page(GFP_VMEMMAP_PAGE);
+		if (unlikely(!page)) {
+			msleep(100);
+			/*
+			 * We retry indefinitely because we cannot handle
+			 * allocation failures here. Only once all the
+			 * vmemmap pages are allocated can the HugeTLB
+			 * page be freed.
+			 */
+			goto retry;
+		}
+		list_add_tail(&page->lru, list);
+	}
+}
+
+/**
+ * vmemmap_remap_restore - remap the vmemmap virtual address range
+ *                         [start, start + size) to the pages taken from
+ *                         the @vmemmap_pages list
+ * @start:	start address of the vmemmap virtual address range
+ * @size:	size of the vmemmap virtual address range
+ */
+void vmemmap_remap_restore(unsigned long start, unsigned long size)
+{
+	LIST_HEAD(vmemmap_pages);
+	unsigned long end = start + size;
+
+	struct vmemmap_rmap_walk walk = {
+		.rmap_pte	= vmemmap_remap_restore_pte,
+		.vmemmap_pages	= &vmemmap_pages,
+	};
+
+	might_sleep();
+
+	alloc_vmemmap_page_list(&vmemmap_pages, size >> PAGE_SHIFT);
+	vmemmap_remap_range(start, end, &walk);
+}
+
 /*
  * Allocate a block of memory to be used to back the virtual memory map
  * or to back the page tables that are used to create the mapping.
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v9 06/11] mm/hugetlb: Set the PageHWPoison to the raw error page
  2020-12-13 15:45 [PATCH v9 00/11] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (4 preceding siblings ...)
  2020-12-13 15:45 ` [PATCH v9 05/11] mm/hugetlb: Allocate the vmemmap pages associated with each HugeTLB page Muchun Song
@ 2020-12-13 15:45 ` Muchun Song
  2020-12-16 13:28   ` Oscar Salvador
  2020-12-16 13:30   ` Oscar Salvador
  2020-12-13 15:45 ` [PATCH v9 07/11] mm/hugetlb: Flush work when dissolving hugetlb page Muchun Song
                   ` (4 subsequent siblings)
  10 siblings, 2 replies; 43+ messages in thread
From: Muchun Song @ 2020-12-13 15:45 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Because we reuse the first tail vmemmap page frame and remap it
read-only, we cannot set the PageHWPoison flag on a tail page.
So we use head[4].private to record the real error page index
and set PageHWPoison on the raw error page later.
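
As a concrete (made-up) example: if the 7th subpage of a HugeTLB page is
hwpoisoned while the vmemmap is still read-only, dissolve_free_huge_page()
only records the offset:

	set_page_private(head + 4, page - head);	/* head[4].private = 7 */

and __free_hugepage(), once the vmemmap has been re-populated and is
writable again, moves the flag to the raw error page:

	page = head + page_private(head + 4);		/* head + 7 */
	SetPageHWPoison(page);
	ClearPageHWPoison(head);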

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c | 48 ++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 40 insertions(+), 8 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 542e6cb81321..29de425f879a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1347,6 +1347,43 @@ static inline void __update_and_free_page(struct hstate *h, struct page *page)
 		schedule_work(&hpage_update_work);
 }
 
+static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
+{
+	struct page *page;
+
+	if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
+		return;
+
+	page = head + page_private(head + 4);
+
+	/*
+	 * Move PageHWPoison flag from head page to the raw error page,
+	 * which makes any subpages rather than the error page reusable.
+	 */
+	if (page != head) {
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+
+static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
+					struct page *page)
+{
+	if (!PageHWPoison(head))
+		return;
+
+	if (free_vmemmap_pages_per_hpage(h)) {
+		set_page_private(head + 4, page - head);
+	} else if (page != head) {
+		/*
+		 * Move PageHWPoison flag from head page to the raw error page,
+		 * which makes any subpages rather than the error page reusable.
+		 */
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+
 static void update_and_free_page(struct hstate *h, struct page *page)
 {
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
@@ -1363,6 +1400,7 @@ static void __free_hugepage(struct hstate *h, struct page *page)
 	int i;
 
 	alloc_huge_page_vmemmap(h, page);
+	hwpoison_subpage_deliver(h, page);
 
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
@@ -1840,14 +1878,8 @@ int dissolve_free_huge_page(struct page *page)
 		int nid = page_to_nid(head);
 		if (h->free_huge_pages - h->resv_huge_pages == 0)
 			goto out;
-		/*
-		 * Move PageHWPoison flag from head page to the raw error page,
-		 * which makes any subpages rather than the error page reusable.
-		 */
-		if (PageHWPoison(head) && page != head) {
-			SetPageHWPoison(page);
-			ClearPageHWPoison(head);
-		}
+
+		hwpoison_subpage_set(h, head, page);
 		list_del(&head->lru);
 		h->free_huge_pages--;
 		h->free_huge_pages_node[nid]--;
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v9 07/11] mm/hugetlb: Flush work when dissolving hugetlb page
  2020-12-13 15:45 [PATCH v9 00/11] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (5 preceding siblings ...)
  2020-12-13 15:45 ` [PATCH v9 06/11] mm/hugetlb: Set the PageHWPoison to the raw error page Muchun Song
@ 2020-12-13 15:45 ` Muchun Song
  2020-12-13 15:45 ` [PATCH v9 08/11] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap Muchun Song
                   ` (3 subsequent siblings)
  10 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-12-13 15:45 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

We should flush the deferred-free work when dissolving a hugetlb page to
make sure that the hugetlb page has actually been freed to the buddy
allocator.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 29de425f879a..b0847b2ce01d 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1326,6 +1326,12 @@ static void update_hpage_vmemmap_workfn(struct work_struct *work)
 }
 static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn);
 
+static inline void flush_hpage_update_work(struct hstate *h)
+{
+	if (free_vmemmap_pages_per_hpage(h))
+		flush_work(&hpage_update_work);
+}
+
 static inline void __update_and_free_page(struct hstate *h, struct page *page)
 {
 	/* No need to allocate vmemmap pages */
@@ -1861,6 +1867,7 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
 int dissolve_free_huge_page(struct page *page)
 {
 	int rc = -EBUSY;
+	struct hstate *h = NULL;
 
 	/* Not to disrupt normal path by vainly holding hugetlb_lock */
 	if (!PageHuge(page))
@@ -1874,8 +1881,9 @@ int dissolve_free_huge_page(struct page *page)
 
 	if (!page_count(page)) {
 		struct page *head = compound_head(page);
-		struct hstate *h = page_hstate(head);
 		int nid = page_to_nid(head);
+
+		h = page_hstate(head);
 		if (h->free_huge_pages - h->resv_huge_pages == 0)
 			goto out;
 
@@ -1889,6 +1897,14 @@ int dissolve_free_huge_page(struct page *page)
 	}
 out:
 	spin_unlock(&hugetlb_lock);
+
+	/*
+	 * We should flush the work before returning to make sure that
+	 * the HugeTLB page is freed to the buddy allocator.
+	 */
+	if (!rc && h)
+		flush_hpage_update_work(h);
+
 	return rc;
 }
 
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v9 08/11] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
  2020-12-13 15:45 [PATCH v9 00/11] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (6 preceding siblings ...)
  2020-12-13 15:45 ` [PATCH v9 07/11] mm/hugetlb: Flush work when dissolving hugetlb page Muchun Song
@ 2020-12-13 15:45 ` Muchun Song
  2020-12-16 14:40   ` Oscar Salvador
  2020-12-13 15:45 ` [PATCH v9 09/11] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
                   ` (2 subsequent siblings)
  10 siblings, 1 reply; 43+ messages in thread
From: Muchun Song @ 2020-12-13 15:45 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Add a kernel parameter hugetlb_free_vmemmap to enable or disable, at boot
time, the feature of freeing unused vmemmap pages associated with each
hugetlb page. The feature is off by default.
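
For example (illustrative kernel command line):

	hugetlb_free_vmemmap=on hugepagesz=2M hugepages=1024

enables the optimization for the pre-allocated 2MB pages, while omitting the
parameter (or passing hugetlb_free_vmemmap=off) leaves the feature disabled.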

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 Documentation/admin-guide/kernel-parameters.txt |  9 +++++++++
 Documentation/admin-guide/mm/hugetlbpage.rst    |  3 +++
 arch/x86/mm/init_64.c                           |  8 ++++++--
 include/linux/hugetlb.h                         | 19 +++++++++++++++++++
 mm/hugetlb_vmemmap.c                            | 16 ++++++++++++++++
 5 files changed, 53 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 3ae25630a223..9e6854f21d55 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1551,6 +1551,15 @@
 			Documentation/admin-guide/mm/hugetlbpage.rst.
 			Format: size[KMG]
 
+	hugetlb_free_vmemmap=
+			[KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
+			this controls freeing unused vmemmap pages associated
+			with each HugeTLB page.
+			Format: { on | off (default) }
+
+			on:  enable the feature
+			off: disable the feature
+
 	hung_task_panic=
 			[KNL] Should the hung task detector generate panics.
 			Format: 0 | 1
diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index f7b1c7462991..3a23c2377acc 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -145,6 +145,9 @@ default_hugepagesz
 
 	will all result in 256 2M huge pages being allocated.  Valid default
 	huge page size is architecture dependent.
+hugetlb_free_vmemmap
+	When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
+	unused vmemmap pages associated with each HugeTLB page.
 
 When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
 indicates the current number of pre-allocated huge pages of the default size.
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0435bee2e172..1bce5f20e6ca 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -34,6 +34,7 @@
 #include <linux/gfp.h>
 #include <linux/kcore.h>
 #include <linux/bootmem_info.h>
+#include <linux/hugetlb.h>
 
 #include <asm/processor.h>
 #include <asm/bios_ebda.h>
@@ -1557,7 +1558,8 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 {
 	int err;
 
-	if (end - start < PAGES_PER_SECTION * sizeof(struct page))
+	if (is_hugetlb_free_vmemmap_enabled() ||
+	    end - start < PAGES_PER_SECTION * sizeof(struct page))
 		err = vmemmap_populate_basepages(start, end, node, NULL);
 	else if (boot_cpu_has(X86_FEATURE_PSE))
 		err = vmemmap_populate_hugepages(start, end, node, altmap);
@@ -1585,6 +1587,8 @@ void register_page_bootmem_memmap(unsigned long section_nr,
 	pmd_t *pmd;
 	unsigned int nr_pmd_pages;
 	struct page *page;
+	bool base_mapping = !boot_cpu_has(X86_FEATURE_PSE) ||
+			    is_hugetlb_free_vmemmap_enabled();
 
 	for (; addr < end; addr = next) {
 		pte_t *pte = NULL;
@@ -1610,7 +1614,7 @@ void register_page_bootmem_memmap(unsigned long section_nr,
 		}
 		get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);
 
-		if (!boot_cpu_has(X86_FEATURE_PSE)) {
+		if (base_mapping) {
 			next = (addr + PAGE_SIZE) & PAGE_MASK;
 			pmd = pmd_offset(pud, addr);
 			if (pmd_none(*pmd))
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index ebca2ef02212..7f47f0eeca3b 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -770,6 +770,20 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
 }
 #endif
 
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+extern bool hugetlb_free_vmemmap_enabled;
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return hugetlb_free_vmemmap_enabled;
+}
+#else
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return false;
+}
+#endif
+
 #else	/* CONFIG_HUGETLB_PAGE */
 struct hstate {};
 
@@ -923,6 +937,11 @@ static inline void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr
 					pte_t *ptep, pte_t pte, unsigned long sz)
 {
 }
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return false;
+}
 #endif	/* CONFIG_HUGETLB_PAGE */
 
 static inline spinlock_t *huge_pte_lock(struct hstate *h,
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 02201c2e3dfa..64ad929cac61 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -180,6 +180,22 @@
 #define RESERVE_VMEMMAP_NR		2U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
+bool hugetlb_free_vmemmap_enabled;
+
+static int __init early_hugetlb_free_vmemmap_param(char *buf)
+{
+	if (!buf)
+		return -EINVAL;
+
+	if (!strcmp(buf, "on"))
+		hugetlb_free_vmemmap_enabled = true;
+	else if (strcmp(buf, "off"))
+		return -EINVAL;
+
+	return 0;
+}
+early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
+
 static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
 {
 	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v9 09/11] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  2020-12-13 15:45 [PATCH v9 00/11] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (7 preceding siblings ...)
  2020-12-13 15:45 ` [PATCH v9 08/11] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap Muchun Song
@ 2020-12-13 15:45 ` Muchun Song
  2020-12-16 13:43   ` Oscar Salvador
  2020-12-13 15:45 ` [PATCH v9 10/11] mm/hugetlb: Gather discrete indexes of tail page Muchun Song
  2020-12-13 15:45 ` [PATCH v9 11/11] mm/hugetlb: Optimize the code with the help of the compiler Muchun Song
  10 siblings, 1 reply; 43+ messages in thread
From: Muchun Song @ 2020-12-13 15:45 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

All the infrastructure is ready, so we introduce the nr_free_vmemmap_pages
field in the hstate to indicate how many vmemmap pages associated with a
HugeTLB page can be freed to the buddy allocator, and initialize it in
hugetlb_vmemmap_init(). This patch is the actual enablement of the feature.
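
As a quick sanity check of the arithmetic in hugetlb_vmemmap_init(), assuming
sizeof(struct page) == 64 and a 4KB base page size on x86-64:

	2MB HugeTLB page:  vmemmap_pages = 512 * 64 / 4096           = 8
	                   nr_free_vmemmap_pages = 8 - 2 (reserved)  = 6
	1GB HugeTLB page:  vmemmap_pages = 262144 * 64 / 4096        = 4096
	                   nr_free_vmemmap_pages = 4096 - 2          = 4094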

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 include/linux/hugetlb.h |  3 +++
 mm/hugetlb.c            |  1 +
 mm/hugetlb_vmemmap.c    | 29 +++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h    | 10 ++++++----
 4 files changed, 39 insertions(+), 4 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 7f47f0eeca3b..66d82ae7b712 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -492,6 +492,9 @@ struct hstate {
 	unsigned int nr_huge_pages_node[MAX_NUMNODES];
 	unsigned int free_huge_pages_node[MAX_NUMNODES];
 	unsigned int surplus_huge_pages_node[MAX_NUMNODES];
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	unsigned int nr_free_vmemmap_pages;
+#endif
 #ifdef CONFIG_CGROUP_HUGETLB
 	/* cgroup control files */
 	struct cftype cgroup_files_dfl[7];
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index b0847b2ce01d..2b45235a70e9 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3323,6 +3323,7 @@ void __init hugetlb_add_hstate(unsigned int order)
 	h->next_nid_to_free = first_memory_node;
 	snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB",
 					huge_page_size(h)/1024);
+	hugetlb_vmemmap_init(h);
 
 	parsed_hstate = h;
 }
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 64ad929cac61..d3b4c39f67c0 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -184,6 +184,10 @@ bool hugetlb_free_vmemmap_enabled;
 
 static int __init early_hugetlb_free_vmemmap_param(char *buf)
 {
+	/* We cannot optimize if a "struct page" crosses page boundaries. */
+	if (!is_power_of_2(sizeof(struct page)))
+		return 0;
+
 	if (!buf)
 		return -EINVAL;
 
@@ -222,3 +226,28 @@ void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 	vmemmap_remap_reuse(vmemmap_addr + RESERVE_VMEMMAP_SIZE,
 			    free_vmemmap_pages_size_per_hpage(h));
 }
+
+void __init hugetlb_vmemmap_init(struct hstate *h)
+{
+	unsigned int nr_pages = pages_per_huge_page(h);
+	unsigned int vmemmap_pages;
+
+	if (!hugetlb_free_vmemmap_enabled)
+		return;
+
+	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
+	/*
+	 * The head page and the first tail page are not to be freed to the
+	 * buddy allocator; the other tail pages are remapped to the first
+	 * tail page. So only the remaining pages can be freed.
+	 *
+	 * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? It is true
+	 * on some architectures (e.g. aarch64). See Documentation/arm64/
+	 * hugetlbpage.rst for more details.
+	 */
+	if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR))
+		h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
+
+	pr_info("can free %d vmemmap pages for %s\n", h->nr_free_vmemmap_pages,
+		h->name);
+}
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index b2c8d2f11d48..8fd9ae113dbd 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -13,17 +13,15 @@
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 void alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+void hugetlb_vmemmap_init(struct hstate *h);
 
 /*
  * How many vmemmap pages associated with a HugeTLB page that can be freed
  * to the buddy allocator.
- *
- * Todo: Returns zero for now, which means the feature is disabled. We will
- * enable it once all the infrastructure is there.
  */
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
-	return 0;
+	return h->nr_free_vmemmap_pages;
 }
 #else
 static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
@@ -38,5 +36,9 @@ static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return 0;
 }
+
+static inline void hugetlb_vmemmap_init(struct hstate *h)
+{
+}
 #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v9 10/11] mm/hugetlb: Gather discrete indexes of tail page
  2020-12-13 15:45 [PATCH v9 00/11] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (8 preceding siblings ...)
  2020-12-13 15:45 ` [PATCH v9 09/11] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
@ 2020-12-13 15:45 ` Muchun Song
  2020-12-16 14:03   ` Oscar Salvador
  2020-12-13 15:45 ` [PATCH v9 11/11] mm/hugetlb: Optimize the code with the help of the compiler Muchun Song
  10 siblings, 1 reply; 43+ messages in thread
From: Muchun Song @ 2020-12-13 15:45 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

For a HugeTLB page, there is more metadata to store than the head struct
page can hold, so we have to abuse other tail struct pages to store it.
In order to avoid conflicts caused by subsequent use of more tail struct
pages, gather these discrete tail page indexes into a single enum. This
makes it easier to add a new tail page index later.

Only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct page structs can
be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is enabled, so add a
BUILD_BUG_ON to catch invalid usage of the tail struct pages.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/hugetlb.h        | 13 +++++++++++++
 include/linux/hugetlb_cgroup.h | 15 +++++++++------
 mm/hugetlb.c                   | 16 ++++++++--------
 mm/hugetlb_vmemmap.c           |  8 ++++++++
 4 files changed, 38 insertions(+), 14 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 66d82ae7b712..7295f6b3d55e 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -28,6 +28,19 @@ typedef struct { unsigned long pd; } hugepd_t;
 #include <linux/shm.h>
 #include <asm/tlbflush.h>
 
+enum {
+	SUBPAGE_INDEX_ACTIVE = 1,	/* reuse page flags of PG_private */
+	SUBPAGE_INDEX_TEMPORARY,	/* reuse page->mapping */
+#ifdef CONFIG_CGROUP_HUGETLB
+	SUBPAGE_INDEX_CGROUP = SUBPAGE_INDEX_TEMPORARY,/* reuse page->private */
+	SUBPAGE_INDEX_CGROUP_RSVD,	/* reuse page->private */
+#endif
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	SUBPAGE_INDEX_HWPOISON,		/* reuse page->private */
+#endif
+	NR_USED_SUBPAGE,
+};
+
 struct hugepage_subpool {
 	spinlock_t lock;
 	long count;
diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index 2ad6e92f124a..3d3c1c49efe4 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -24,8 +24,9 @@ struct file_region;
 /*
  * Minimum page order trackable by hugetlb cgroup.
  * At least 4 pages are necessary for all the tracking information.
- * The second tail page (hpage[2]) is the fault usage cgroup.
- * The third tail page (hpage[3]) is the reservation usage cgroup.
+ * The second tail page (hpage[SUBPAGE_INDEX_CGROUP]) is the fault
+ * usage cgroup. The third tail page (hpage[SUBPAGE_INDEX_CGROUP_RSVD])
+ * is the reservation usage cgroup.
  */
 #define HUGETLB_CGROUP_MIN_ORDER	2
 
@@ -66,9 +67,9 @@ __hugetlb_cgroup_from_page(struct page *page, bool rsvd)
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return NULL;
 	if (rsvd)
-		return (struct hugetlb_cgroup *)page[3].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP_RSVD);
 	else
-		return (struct hugetlb_cgroup *)page[2].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP);
 }
 
 static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
@@ -90,9 +91,11 @@ static inline int __set_hugetlb_cgroup(struct page *page,
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return -1;
 	if (rsvd)
-		page[3].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP_RSVD,
+				 (unsigned long)h_cg);
 	else
-		page[2].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP,
+				 (unsigned long)h_cg);
 	return 0;
 }
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2b45235a70e9..0e8f13184de0 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1360,7 +1360,7 @@ static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
 	if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
 		return;
 
-	page = head + page_private(head + 4);
+	page = head + page_private(head + SUBPAGE_INDEX_HWPOISON);
 
 	/*
 	 * Move PageHWPoison flag from head page to the raw error page,
@@ -1379,7 +1379,7 @@ static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
 		return;
 
 	if (free_vmemmap_pages_per_hpage(h)) {
-		set_page_private(head + 4, page - head);
+		set_page_private(head + SUBPAGE_INDEX_HWPOISON, page - head);
 	} else if (page != head) {
 		/*
 		 * Move PageHWPoison flag from head page to the raw error page,
@@ -1456,20 +1456,20 @@ struct hstate *size_to_hstate(unsigned long size)
 bool page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHuge(page), page);
-	return PageHead(page) && PagePrivate(&page[1]);
+	return PageHead(page) && PagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /* never called for tail page */
 static void set_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	SetPagePrivate(&page[1]);
+	SetPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 static void clear_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	ClearPagePrivate(&page[1]);
+	ClearPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /*
@@ -1481,17 +1481,17 @@ static inline bool PageHugeTemporary(struct page *page)
 	if (!PageHuge(page))
 		return false;
 
-	return (unsigned long)page[2].mapping == -1U;
+	return (unsigned long)page[SUBPAGE_INDEX_TEMPORARY].mapping == -1U;
 }
 
 static inline void SetPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = (void *)-1U;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = (void *)-1U;
 }
 
 static inline void ClearPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = NULL;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = NULL;
 }
 
 static void __free_huge_page(struct page *page)
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index d3b4c39f67c0..bbcefd5fb7d1 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -232,6 +232,14 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	unsigned int nr_pages = pages_per_huge_page(h);
 	unsigned int vmemmap_pages;
 
+	/*
+	 * Only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct page structs
+	 * can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is enabled, so add
+	 * a BUILD_BUG_ON to catch invalid usage of the tail struct pages.
+	 */
+	BUILD_BUG_ON(NR_USED_SUBPAGE >=
+		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
+
 	if (!hugetlb_free_vmemmap_enabled)
 		return;
 
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 43+ messages in thread

* [PATCH v9 11/11] mm/hugetlb: Optimize the code with the help of the compiler
  2020-12-13 15:45 [PATCH v9 00/11] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (9 preceding siblings ...)
  2020-12-13 15:45 ` [PATCH v9 10/11] mm/hugetlb: Gather discrete indexes of tail page Muchun Song
@ 2020-12-13 15:45 ` Muchun Song
  2020-12-17 10:31   ` Oscar Salvador
  10 siblings, 1 reply; 43+ messages in thread
From: Muchun Song @ 2020-12-13 15:45 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

We cannot apply the vmemmap optimization if a "struct page" crosses page
boundaries (i.e. its size is not a power of 2). In that case, let the
compiler optimize the code away: when free_vmemmap_pages_per_hpage() is a
compile-time constant zero, most of the related functions are eliminated
by the compiler.
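
Because sizeof(struct page) is a compile-time constant, the
is_power_of_2(sizeof(struct page)) check folds to a constant 0 or 1 and the
code it guards becomes dead and is dropped by the compiler. A minimal
illustration of the idea (not the patch itself):

	static inline unsigned int freeable_vmemmap_pages(struct hstate *h)
	{
		/* folds to "return 0" when struct page is not a power of 2 */
		if (!is_power_of_2(sizeof(struct page)))
			return 0;
		return h->nr_free_vmemmap_pages;
	}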

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/hugetlb.h | 3 ++-
 mm/hugetlb_vmemmap.c    | 7 +++++++
 mm/hugetlb_vmemmap.h    | 5 +++--
 3 files changed, 12 insertions(+), 3 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 7295f6b3d55e..adc17765e0e9 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -791,7 +791,8 @@ extern bool hugetlb_free_vmemmap_enabled;
 
 static inline bool is_hugetlb_free_vmemmap_enabled(void)
 {
-	return hugetlb_free_vmemmap_enabled;
+	return hugetlb_free_vmemmap_enabled &&
+	       is_power_of_2(sizeof(struct page));
 }
 #else
 static inline bool is_hugetlb_free_vmemmap_enabled(void)
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index bbcefd5fb7d1..e83c48c63a7b 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -240,6 +240,13 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	BUILD_BUG_ON(NR_USED_SUBPAGE >=
 		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
 
+	/*
+	 * The compiler can optimize this function away entirely
+	 * when the size of struct page is not a power of 2.
+	 */
+	if (!is_power_of_2(sizeof(struct page)))
+		return;
+
 	if (!hugetlb_free_vmemmap_enabled)
 		return;
 
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 8fd9ae113dbd..1a29a80f9fe1 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -17,11 +17,12 @@ void hugetlb_vmemmap_init(struct hstate *h);
 
 /*
  * How many vmemmap pages associated with a HugeTLB page that can be freed
- * to the buddy allocator.
+ * to the buddy allocator. The is_power_of_2() check lets the compiler
+ * optimize the code away when the feature cannot be used.
  */
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
-	return h->nr_free_vmemmap_pages;
+	return is_power_of_2(sizeof(struct page)) ? h->nr_free_vmemmap_pages : 0;
 }
 #else
 static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 43+ messages in thread

* Re: [PATCH v9 02/11] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  2020-12-13 15:45 ` [PATCH v9 02/11] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP Muchun Song
@ 2020-12-16  1:03   ` Mike Kravetz
  2020-12-16  3:24     ` [External] " Muchun Song
  2020-12-16  3:45     ` Mike Kravetz
  0 siblings, 2 replies; 43+ messages in thread
From: Mike Kravetz @ 2020-12-16  1:03 UTC (permalink / raw)
  To: Muchun Song, corbet, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On 12/13/20 7:45 AM, Muchun Song wrote:
> The purpose of introducing HUGETLB_PAGE_FREE_VMEMMAP is to configure
> whether to enable the feature of freeing unused vmemmap associated with
> HugeTLB pages. And this is just for dependency check. Now only support
> x86-64.
> 
> Because this config depends on HAVE_BOOTMEM_INFO_NODE. And the function
> of the register_page_bootmem_info() is aimed to register bootmem info.
> So we should register bootmem info when this config is enabled.

Suggested commit message rewording?

The HUGETLB_PAGE_FREE_VMEMMAP option is used to enable the freeing of
unnecessary vmemmap associated with HugeTLB pages.  The config option is
introduced early so that supporting code can be written to depend on the
option.  The initial version of the code only provides support for x86-64.

Like other code which frees vmemmap, this config option depends on
HAVE_BOOTMEM_INFO_NODE.  The routine register_page_bootmem_info() is used
to register bootmem info.  Therefore, make sure register_page_bootmem_info
is enabled if HUGETLB_PAGE_FREE_VMEMMAP is defined.

> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  arch/x86/mm/init_64.c |  2 +-
>  fs/Kconfig            | 15 +++++++++++++++
>  2 files changed, 16 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index 0a45f062826e..0435bee2e172 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -1225,7 +1225,7 @@ static struct kcore_list kcore_vsyscall;
>  
>  static void __init register_page_bootmem_info(void)
>  {
> -#ifdef CONFIG_NUMA
> +#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)
>  	int i;
>  
>  	for_each_online_node(i)
> diff --git a/fs/Kconfig b/fs/Kconfig
> index 976e8b9033c4..4c3a9c614983 100644
> --- a/fs/Kconfig
> +++ b/fs/Kconfig
> @@ -245,6 +245,21 @@ config HUGETLBFS
>  config HUGETLB_PAGE
>  	def_bool HUGETLBFS
>  
> +config HUGETLB_PAGE_FREE_VMEMMAP
> +	def_bool HUGETLB_PAGE
> +	depends on X86_64
> +	depends on SPARSEMEM_VMEMMAP
> +	depends on HAVE_BOOTMEM_INFO_NODE
> +	help
> +	  When using HUGETLB_PAGE_FREE_VMEMMAP, the system can save up some
> +	  memory from pre-allocated HugeTLB pages when they are not used.
> +	  6 pages per HugeTLB page of the pmd level mapping and (PAGE_SIZE - 2)
> +	  pages per HugeTLB page of the pud level mapping.
> +
> +	  When the pages are going to be used or freed up, the vmemmap array
> +	  representing that range needs to be remapped again and the pages
> +	  we discarded earlier need to be reallocated again.

I see the previous discussion with David about wording here.  How about
leaving the functionality description general, and provide a specific
example for x86_64?  As mentioned we can always update when new arch support
is added.  Suggested text?

	The option HUGETLB_PAGE_FREE_VMEMMAP allows for the freeing of
	some vmemmap pages associated with pre-allocated HugeTLB pages.
	For example, on X86_64 6 vmemmap pages of size 4KB each can be
	saved for each 2MB HugeTLB page.  4094 vmemmap pages of size 4KB
	each can be saved for each 1GB HugeTLB page.

	When a HugeTLB page is allocated or freed, the vmemmap array
	representing the range associated with the page will need to be
	remapped.  When a page is allocated, vmemmap pages are freed
	after remapping.  When a page is freed, previously discarded
	vmemmap pages must be allocated before remapping.

-- 
Mike Kravetz
	
> +
>  config MEMFD_CREATE
>  	def_bool TMPFS || HUGETLBFS
>  
> 


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v9 02/11] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  2020-12-16  1:03   ` Mike Kravetz
@ 2020-12-16  3:24     ` Muchun Song
  2020-12-16  3:45     ` Mike Kravetz
  1 sibling, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-12-16  3:24 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: Jonathan Corbet, Thomas Gleixner, mingo, bp, x86, hpa,
	dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton, paulmck,
	mchehab+huawei, pawan.kumar.gupta, Randy Dunlap, oneukum,
	anshuman.khandual, jroedel, Mina Almasry, David Rientjes,
	Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Wed, Dec 16, 2020 at 9:04 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 12/13/20 7:45 AM, Muchun Song wrote:
> > The purpose of introducing HUGETLB_PAGE_FREE_VMEMMAP is to configure
> > whether to enable the feature of freeing unused vmemmap associated with
> > HugeTLB pages. And this is just for dependency check. Now only support
> > x86-64.
> >
> > Because this config depends on HAVE_BOOTMEM_INFO_NODE. And the function
> > of the register_page_bootmem_info() is aimed to register bootmem info.
> > So we should register bootmem info when this config is enabled.
>
> Suggested commit message rewording?
>
> The HUGETLB_PAGE_FREE_VMEMMAP option is used to enable the freeing of
> unnecessary vmemmap associated with HugeTLB pages.  The config option is
> introduced early so that supporting code can be written to depend on the
> option.  The initial version of the code only provides support for x86-64.
>
> Like other code which frees vmemmap, this config option depends on
> HAVE_BOOTMEM_INFO_NODE.  The routine register_page_bootmem_info() is used
> to register bootmem info.  Therefore, make sure register_page_bootmem_info
> is enabled if HUGETLB_PAGE_FREE_VMEMMAP is defined.

Thank Mike. Will update.

>
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> >  arch/x86/mm/init_64.c |  2 +-
> >  fs/Kconfig            | 15 +++++++++++++++
> >  2 files changed, 16 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> > index 0a45f062826e..0435bee2e172 100644
> > --- a/arch/x86/mm/init_64.c
> > +++ b/arch/x86/mm/init_64.c
> > @@ -1225,7 +1225,7 @@ static struct kcore_list kcore_vsyscall;
> >
> >  static void __init register_page_bootmem_info(void)
> >  {
> > -#ifdef CONFIG_NUMA
> > +#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)
> >       int i;
> >
> >       for_each_online_node(i)
> > diff --git a/fs/Kconfig b/fs/Kconfig
> > index 976e8b9033c4..4c3a9c614983 100644
> > --- a/fs/Kconfig
> > +++ b/fs/Kconfig
> > @@ -245,6 +245,21 @@ config HUGETLBFS
> >  config HUGETLB_PAGE
> >       def_bool HUGETLBFS
> >
> > +config HUGETLB_PAGE_FREE_VMEMMAP
> > +     def_bool HUGETLB_PAGE
> > +     depends on X86_64
> > +     depends on SPARSEMEM_VMEMMAP
> > +     depends on HAVE_BOOTMEM_INFO_NODE
> > +     help
> > +       When using HUGETLB_PAGE_FREE_VMEMMAP, the system can save up some
> > +       memory from pre-allocated HugeTLB pages when they are not used.
> > +       6 pages per HugeTLB page of the pmd level mapping and (PAGE_SIZE - 2)
> > +       pages per HugeTLB page of the pud level mapping.
> > +
> > +       When the pages are going to be used or freed up, the vmemmap array
> > +       representing that range needs to be remapped again and the pages
> > +       we discarded earlier need to be reallocated again.
>
> I see the previous discussion with David about wording here.  How about
> leaving the functionality description general, and provide a specific
> example for x86_64?  As mentioned we can always update when new arch support
> is added.  Suggested text?

Good suggestion. Thanks.

>
>         The option HUGETLB_PAGE_FREE_VMEMMAP allows for the freeing of
>         some vmemmap pages associated with pre-allocated HugeTLB pages.
>         For example, on X86_64 6 vmemmap pages of size 4KB each can be
>         saved for each 2MB HugeTLB page.  4094 vmemmap pages of size 4KB
>         each can be saved for each 1GB HugeTLB page.
>
>         When a HugeTLB page is allocated or freed, the vmemmap array
>         representing the range associated with the page will need to be
>         remapped.  When a page is allocated, vmemmap pages are freed
>         after remapping.  When a page is freed, previously discarded
>         vmemmap pages must be allocated before remapping.
>
> --
> Mike Kravetz
>
> > +
> >  config MEMFD_CREATE
> >       def_bool TMPFS || HUGETLBFS
> >
> >



-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v9 02/11] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  2020-12-16  1:03   ` Mike Kravetz
  2020-12-16  3:24     ` [External] " Muchun Song
@ 2020-12-16  3:45     ` Mike Kravetz
  2020-12-16  3:52       ` [External] " Muchun Song
  1 sibling, 1 reply; 43+ messages in thread
From: Mike Kravetz @ 2020-12-16  3:45 UTC (permalink / raw)
  To: Muchun Song, corbet, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On 12/15/20 5:03 PM, Mike Kravetz wrote:
> On 12/13/20 7:45 AM, Muchun Song wrote:
>> diff --git a/fs/Kconfig b/fs/Kconfig
>> index 976e8b9033c4..4c3a9c614983 100644
>> --- a/fs/Kconfig
>> +++ b/fs/Kconfig
>> @@ -245,6 +245,21 @@ config HUGETLBFS
>>  config HUGETLB_PAGE
>>  	def_bool HUGETLBFS
>>  
>> +config HUGETLB_PAGE_FREE_VMEMMAP
>> +	def_bool HUGETLB_PAGE
>> +	depends on X86_64
>> +	depends on SPARSEMEM_VMEMMAP
>> +	depends on HAVE_BOOTMEM_INFO_NODE
>> +	help
>> +	  When using HUGETLB_PAGE_FREE_VMEMMAP, the system can save up some
>> +	  memory from pre-allocated HugeTLB pages when they are not used.
>> +	  6 pages per HugeTLB page of the pmd level mapping and (PAGE_SIZE - 2)
>> +	  pages per HugeTLB page of the pud level mapping.
>> +
>> +	  When the pages are going to be used or freed up, the vmemmap array
>> +	  representing that range needs to be remapped again and the pages
>> +	  we discarded earlier need to be reallocated again.
> 
> I see the previous discussion with David about wording here.  How about
> leaving the functionality description general, and provide a specific
> example for x86_64?  As mentioned we can always update when new arch support
> is added.  Suggested text?
> 
> 	The option HUGETLB_PAGE_FREE_VMEMMAP allows for the freeing of
> 	some vmemmap pages associated with pre-allocated HugeTLB pages.
> 	For example, on X86_64 6 vmemmap pages of size 4KB each can be
> 	saved for each 2MB HugeTLB page.  4094 vmemmap pages of size 4KB
> 	each can be saved for each 1GB HugeTLB page.
> 
> 	When a HugeTLB page is allocated or freed, the vmemmap array
> 	representing the range associated with the page will need to be
> 	remapped.  When a page is allocated, vmemmap pages are freed
> 	after remapping.  When a page is freed, previously discarded
> 	vmemmap pages must be allocated before remapping.

Sorry, I am slowly coming up to speed with discussions when I was away.

It appears vmemmap is not being mapped with huge pages if the boot option
hugetlb_free_vmemmap is on.   Is that correct?

If that is correct, we should document the trade off of increased page
table pages needed to map vmemmap vs the savings from freeing struct page
pages.  If a user/sysadmin only uses a small number of hugetlb pages (as
a percentage of system memory) they could end up using more memory with
hugetlb_free_vmemmap on as opposed to off.  Perhaps, it should be part of
the documentation for hugetlb_free_vmemmap?  If this is true, and people
think this should be documented, I can try to come up with something.

-- 
Mike Kravetz


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v9 02/11] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  2020-12-16  3:45     ` Mike Kravetz
@ 2020-12-16  3:52       ` Muchun Song
  0 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-12-16  3:52 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: Jonathan Corbet, Thomas Gleixner, mingo, bp, x86, hpa,
	dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton, paulmck,
	mchehab+huawei, pawan.kumar.gupta, Randy Dunlap, oneukum,
	anshuman.khandual, jroedel, Mina Almasry, David Rientjes,
	Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Wed, Dec 16, 2020 at 11:45 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 12/15/20 5:03 PM, Mike Kravetz wrote:
> > On 12/13/20 7:45 AM, Muchun Song wrote:
> >> diff --git a/fs/Kconfig b/fs/Kconfig
> >> index 976e8b9033c4..4c3a9c614983 100644
> >> --- a/fs/Kconfig
> >> +++ b/fs/Kconfig
> >> @@ -245,6 +245,21 @@ config HUGETLBFS
> >>  config HUGETLB_PAGE
> >>      def_bool HUGETLBFS
> >>
> >> +config HUGETLB_PAGE_FREE_VMEMMAP
> >> +    def_bool HUGETLB_PAGE
> >> +    depends on X86_64
> >> +    depends on SPARSEMEM_VMEMMAP
> >> +    depends on HAVE_BOOTMEM_INFO_NODE
> >> +    help
> >> +      When using HUGETLB_PAGE_FREE_VMEMMAP, the system can save up some
> >> +      memory from pre-allocated HugeTLB pages when they are not used.
> >> +      6 pages per HugeTLB page of the pmd level mapping and (PAGE_SIZE - 2)
> >> +      pages per HugeTLB page of the pud level mapping.
> >> +
> >> +      When the pages are going to be used or freed up, the vmemmap array
> >> +      representing that range needs to be remapped again and the pages
> >> +      we discarded earlier need to be reallocated again.
> >
> > I see the previous discussion with David about wording here.  How about
> > leaving the functionality description general, and provide a specific
> > example for x86_64?  As mentioned we can always update when new arch support
> > is added.  Suggested text?
> >
> >       The option HUGETLB_PAGE_FREE_VMEMMAP allows for the freeing of
> >       some vmemmap pages associated with pre-allocated HugeTLB pages.
> >       For example, on X86_64 6 vmemmap pages of size 4KB each can be
> >       saved for each 2MB HugeTLB page.  4094 vmemmap pages of size 4KB
> >       each can be saved for each 1GB HugeTLB page.
> >
> >       When a HugeTLB page is allocated or freed, the vmemmap array
> >       representing the range associated with the page will need to be
> >       remapped.  When a page is allocated, vmemmap pages are freed
> >       after remapping.  When a page is freed, previously discarded
> >       vmemmap pages must be allocated before remapping.
>
> Sorry, I am slowly coming up to speed with discussions when I was away.
>
> It appears vmemmap is not being mapped with huge pages if the boot option
> hugetlb_free_vmemmap is on.   Is that correct?

Right.

>
> If that is correct, we should document the trade off of increased page
> table pages needed to map vmemmap vs the savings from freeing struct page
> pages.  If a user/sysadmin only uses a small number of hugetlb pages (as
> a percentage of system memory) they could end up using more memory with
> hugetlb_free_vmemmap on as opposed to off.  Perhaps, it should be part of
> the documentation for hugetlb_free_vmemmap?  If this is true, and people

Right, it is better to document it around hugetlb_free_vmemmap.
This should be a part of patch #8. Thanks.


> think this should be documented, I can try to come up with something.
>
> --
> Mike Kravetz



-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v9 03/11] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page
  2020-12-13 15:45 ` [PATCH v9 03/11] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page Muchun Song
@ 2020-12-16 13:06   ` Oscar Salvador
  2020-12-16 13:15     ` [External] " Muchun Song
  2020-12-16 22:08   ` Mike Kravetz
  1 sibling, 1 reply; 43+ messages in thread
From: Oscar Salvador @ 2020-12-16 13:06 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, song.bao.hua, david,
	duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On Sun, Dec 13, 2020 at 11:45:26PM +0800, Muchun Song wrote:
> +
> +/*
> + * vmemmap_rmap_walk - walk vmemmap page table
> + *
> + * @rmap_pte:		called for each non-empty PTE (lowest-level) entry.
> + * @reuse:		the page which is reused for the tail vmemmap pages.
> + * @vmemmap_pages:	the list head of the vmemmap pages that can be freed.
> + */
> +struct vmemmap_rmap_walk {
> +	void (*rmap_pte)(pte_t *pte, unsigned long addr,
> +			 struct vmemmap_rmap_walk *walk);
> +	struct page *reuse;
> +	struct list_head *vmemmap_pages;
> +};

Why did you choose this approach in this version?
Earlier versions of this patchset had a single vmemmap_to_pmd() function
which returned the PMD, and now we have several vmemmap_{levels}_range
and a vmemmap_rmap_walk.
A brief explanation about why this change was introduced would have been nice.

I guess it is because earlier versions were too oriented toward the use case
this patchset presents, while the new version tries to be more broad
about future re-uses of the interface?


-- 
Oscar Salvador
SUSE L3


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v9 03/11] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page
  2020-12-16 13:06   ` Oscar Salvador
@ 2020-12-16 13:15     ` Muchun Song
  0 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-12-16 13:15 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Wed, Dec 16, 2020 at 9:06 PM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Sun, Dec 13, 2020 at 11:45:26PM +0800, Muchun Song wrote:
> > +
> > +/*
> > + * vmemmap_rmap_walk - walk vmemmap page table
> > + *
> > + * @rmap_pte:                called for each non-empty PTE (lowest-level) entry.
> > + * @reuse:           the page which is reused for the tail vmemmap pages.
> > + * @vmemmap_pages:   the list head of the vmemmap pages that can be freed.
> > + */
> > +struct vmemmap_rmap_walk {
> > +     void (*rmap_pte)(pte_t *pte, unsigned long addr,
> > +                      struct vmemmap_rmap_walk *walk);
> > +     struct page *reuse;
> > +     struct list_head *vmemmap_pages;
> > +};
>
> Why did you choose this approach in this version?
> Earlier versions of this patchset had a single vmemmap_to_pmd() function
> which returned the PMD, and now we have several vmemmap_{levels}_range
> and a vmemmap_rmap_walk.

This approach will be more universal. :-)

> A brief explanation about why this change was introduced would have been nice.
>
> I guess it is because earlier versions were too oriented to the use case
> this patchset presents, while the new versions try to be broader
> about future re-uses of the interface?

Yeah, you are right. I plan to reuse those interfaces in the future.

Thanks.

>
>
> --
> Oscar Salvador
> SUSE L3



-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v9 06/11] mm/hugetlb: Set the PageHWPoison to the raw error page
  2020-12-13 15:45 ` [PATCH v9 06/11] mm/hugetlb: Set the PageHWPoison to the raw error page Muchun Song
@ 2020-12-16 13:28   ` Oscar Salvador
  2020-12-16 13:51     ` [External] " Muchun Song
  2020-12-16 13:30   ` Oscar Salvador
  1 sibling, 1 reply; 43+ messages in thread
From: Oscar Salvador @ 2020-12-16 13:28 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, song.bao.hua, david,
	duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On Sun, Dec 13, 2020 at 11:45:29PM +0800, Muchun Song wrote:
> Because we reuse the first tail vmemmap page frame and remap it
> with read-only, we cannot set the PageHWPosion on a tail page.
> So we can use the head[4].private to record the real error page
> index and set the raw error page PageHWPoison later.

Maybe the following is better?

"Since the first page of tail page structs is remapped read-only,
 we cannot modify any tail struct page, and so we cannot set
 the HWPoison flag on a tail page.
 We can make use of head[4].private to record the real hwpoisoned
 page index.
 Right before freeing the page the real raw page will be retrieved
 and marked as HWPoison.
"

I think it is slightly clearer, but whatever.

> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

I do not quite like the name hwpoison_subpage_deliver, but I cannot
come up with a better one myself, so:

Reviewed-by: Oscar Salvador <osalvador@suse.de>

-- 
Oscar Salvador
SUSE L3


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v9 06/11] mm/hugetlb: Set the PageHWPoison to the raw error page
  2020-12-13 15:45 ` [PATCH v9 06/11] mm/hugetlb: Set the PageHWPoison to the raw error page Muchun Song
  2020-12-16 13:28   ` Oscar Salvador
@ 2020-12-16 13:30   ` Oscar Salvador
  1 sibling, 0 replies; 43+ messages in thread
From: Oscar Salvador @ 2020-12-16 13:30 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, song.bao.hua, david,
	duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	naoya.horiguchi

On Sun, Dec 13, 2020 at 11:45:29PM +0800, Muchun Song wrote:
> Because we reuse the first tail vmemmap page frame and remap it
> with read-only, we cannot set the PageHWPosion on a tail page.
> So we can use the head[4].private to record the real error page
> index and set the raw error page PageHWPoison later.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

+CC Naoya

> ---
>  mm/hugetlb.c | 48 ++++++++++++++++++++++++++++++++++++++++--------
>  1 file changed, 40 insertions(+), 8 deletions(-)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 542e6cb81321..29de425f879a 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1347,6 +1347,43 @@ static inline void __update_and_free_page(struct hstate *h, struct page *page)
>  		schedule_work(&hpage_update_work);
>  }
>  
> +static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
> +{
> +	struct page *page;
> +
> +	if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
> +		return;
> +
> +	page = head + page_private(head + 4);
> +
> +	/*
> +	 * Move PageHWPoison flag from head page to the raw error page,
> +	 * which makes any subpages rather than the error page reusable.
> +	 */
> +	if (page != head) {
> +		SetPageHWPoison(page);
> +		ClearPageHWPoison(head);
> +	}
> +}
> +
> +static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
> +					struct page *page)
> +{
> +	if (!PageHWPoison(head))
> +		return;
> +
> +	if (free_vmemmap_pages_per_hpage(h)) {
> +		set_page_private(head + 4, page - head);
> +	} else if (page != head) {
> +		/*
> +		 * Move PageHWPoison flag from head page to the raw error page,
> +		 * which makes any subpages rather than the error page reusable.
> +		 */
> +		SetPageHWPoison(page);
> +		ClearPageHWPoison(head);
> +	}
> +}
> +
>  static void update_and_free_page(struct hstate *h, struct page *page)
>  {
>  	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
> @@ -1363,6 +1400,7 @@ static void __free_hugepage(struct hstate *h, struct page *page)
>  	int i;
>  
>  	alloc_huge_page_vmemmap(h, page);
> +	hwpoison_subpage_deliver(h, page);
>  
>  	for (i = 0; i < pages_per_huge_page(h); i++) {
>  		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
> @@ -1840,14 +1878,8 @@ int dissolve_free_huge_page(struct page *page)
>  		int nid = page_to_nid(head);
>  		if (h->free_huge_pages - h->resv_huge_pages == 0)
>  			goto out;
> -		/*
> -		 * Move PageHWPoison flag from head page to the raw error page,
> -		 * which makes any subpages rather than the error page reusable.
> -		 */
> -		if (PageHWPoison(head) && page != head) {
> -			SetPageHWPoison(page);
> -			ClearPageHWPoison(head);
> -		}
> +
> +		hwpoison_subpage_set(h, head, page);
>  		list_del(&head->lru);
>  		h->free_huge_pages--;
>  		h->free_huge_pages_node[nid]--;
> -- 
> 2.11.0
> 

-- 
Oscar Salvador
SUSE L3


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v9 09/11] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  2020-12-13 15:45 ` [PATCH v9 09/11] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
@ 2020-12-16 13:43   ` Oscar Salvador
  2020-12-16 13:56     ` [External] " Muchun Song
  2020-12-17  8:34     ` Muchun Song
  0 siblings, 2 replies; 43+ messages in thread
From: Oscar Salvador @ 2020-12-16 13:43 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, song.bao.hua, david,
	duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On Sun, Dec 13, 2020 at 11:45:32PM +0800, Muchun Song wrote:
> All the infrastructure is ready, so we introduce nr_free_vmemmap_pages
> field in the hstate to indicate how many vmemmap pages associated with
> a HugeTLB page that we can free to buddy allocator. And initialize it
"can be freed to buddy allocator"

> in the hugetlb_vmemmap_init(). This patch is actual enablement of the
> feature.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> Acked-by: Mike Kravetz <mike.kravetz@oracle.com>

With below nits addressed you can add:

Reviewed-by: Oscar Salvador <osalvador@suse.de>

>  static int __init early_hugetlb_free_vmemmap_param(char *buf)
>  {
> +	/* We cannot optimize if a "struct page" crosses page boundaries. */
> +	if (!is_power_of_2(sizeof(struct page)))
> +		return 0;
> +

I wonder if we should report a warning in case someone wants to enable this
feature and the struct page size is not a power of 2.
In case someone wonders why it does not work for him/her.
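
A minimal sketch of what that could look like (the message wording is made up;
only the power-of-2 check and the parser are from the patch):

static int __init early_hugetlb_free_vmemmap_param(char *buf)
{
	/* We cannot optimize if a "struct page" crosses page boundaries. */
	if (!is_power_of_2(sizeof(struct page))) {
		pr_warn("HugeTLB: hugetlb_free_vmemmap ignored, struct page size is not a power of 2\n");
		return 0;
	}

	if (!buf)
		return -EINVAL;

	if (!strcmp(buf, "on"))
		hugetlb_free_vmemmap_enabled = true;
	else if (strcmp(buf, "off"))
		return -EINVAL;

	return 0;
}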

> +void __init hugetlb_vmemmap_init(struct hstate *h)
> +{
> +	unsigned int nr_pages = pages_per_huge_page(h);
> +	unsigned int vmemmap_pages;
> +
> +	if (!hugetlb_free_vmemmap_enabled)
> +		return;
> +
> +	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
> +	/*
> +	 * The head page and the first tail page are not to be freed to buddy
> +	 * system, the others page will map to the first tail page. So there
> +	 * are the remaining pages that can be freed.
"the other pages will map to the first tail page, so they can be freed."
> +	 *
> +	 * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? It is true
> +	 * on some architectures (e.g. aarch64). See Documentation/arm64/
> +	 * hugetlbpage.rst for more details.
> +	 */
> +	if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR))
> +		h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
> +
> +	pr_info("can free %d vmemmap pages for %s\n", h->nr_free_vmemmap_pages,
> +		h->name);

Maybe specify this is hugetlb code:

pr_info("%s: blabla", __func__, ...)
or
pr_info("hugetlb: blalala", ...);

although I am not sure whether we need that at all, or maybe just use
pr_debug().
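
Concretely, applied to the pr_info quoted above, the prefixed version would be
something along the lines of:

	pr_info("HugeTLB: can free %d vmemmap pages for %s\n",
		h->nr_free_vmemmap_pages, h->name);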

-- 
Oscar Salvador
SUSE L3


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v9 06/11] mm/hugetlb: Set the PageHWPoison to the raw error page
  2020-12-16 13:28   ` Oscar Salvador
@ 2020-12-16 13:51     ` Muchun Song
  0 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-12-16 13:51 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Wed, Dec 16, 2020 at 9:28 PM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Sun, Dec 13, 2020 at 11:45:29PM +0800, Muchun Song wrote:
> > Because we reuse the first tail vmemmap page frame and remap it
> > with read-only, we cannot set the PageHWPosion on a tail page.
> > So we can use the head[4].private to record the real error page
> > index and set the raw error page PageHWPoison later.
>
> Maybe the following is better?
>
> "Since the first page of tail page structs is remapped read-only,
>  we cannot modify any tail struct page, and so we cannot set
>  the HWPoison flag on a tail page.
>  We can make use of head[4].private to record the real hwpoisoned
>  page index.
>  Right before freeing the page the real raw page will be retrieved
>  and marked as HWPoison.
> "
>
> I think it is slightly clearer, but whatever.

Thank you.

>
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>
> I do not quite like the name hwpoison_subpage_deliver, but I cannot
> come up with a better one myself, so:
>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>

Thanks for your review.

>
> --
> Oscar Salvador
> SUSE L3



-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v9 09/11] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  2020-12-16 13:43   ` Oscar Salvador
@ 2020-12-16 13:56     ` Muchun Song
  2020-12-16 22:12       ` Oscar Salvador
  2020-12-17  8:34     ` Muchun Song
  1 sibling, 1 reply; 43+ messages in thread
From: Muchun Song @ 2020-12-16 13:56 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Wed, Dec 16, 2020 at 9:44 PM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Sun, Dec 13, 2020 at 11:45:32PM +0800, Muchun Song wrote:
> > All the infrastructure is ready, so we introduce nr_free_vmemmap_pages
> > field in the hstate to indicate how many vmemmap pages associated with
> > a HugeTLB page that we can free to buddy allocator. And initialize it
> "can be freed to buddy allocator"
>
> > in the hugetlb_vmemmap_init(). This patch is actual enablement of the
> > feature.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
>
> With below nits addressed you can add:
>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>

Thanks.

>
> >  static int __init early_hugetlb_free_vmemmap_param(char *buf)
> >  {
> > +     /* We cannot optimize if a "struct page" crosses page boundaries. */
> > +     if (!is_power_of_2(sizeof(struct page)))
> > +             return 0;
> > +
>
> I wonder if we should report a warning in case someone wants to enable this
> feature and the struct page size is not a power of 2.
> In case someone wonders why it does not work for him/her.
>
> > +void __init hugetlb_vmemmap_init(struct hstate *h)
> > +{
> > +     unsigned int nr_pages = pages_per_huge_page(h);
> > +     unsigned int vmemmap_pages;
> > +
> > +     if (!hugetlb_free_vmemmap_enabled)
> > +             return;
> > +
> > +     vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
> > +     /*
> > +      * The head page and the first tail page are not to be freed to buddy
> > +      * system, the others page will map to the first tail page. So there
> > +      * are the remaining pages that can be freed.
> "the other pages will map to the first tail page, so they can be freed."
> > +      *
> > +      * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? It is true
> > +      * on some architectures (e.g. aarch64). See Documentation/arm64/
> > +      * hugetlbpage.rst for more details.
> > +      */
> > +     if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR))
> > +             h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
> > +
> > +     pr_info("can free %d vmemmap pages for %s\n", h->nr_free_vmemmap_pages,
> > +             h->name);
>
> Maybe specify this is hugetlb code:
>
> pr_info("%s: blabla", __func__, ...)
> or
> pr_info("hugetlb: blalala", ...);
>
> although I am not sure whether we need that at all, or maybe just use
> pr_debug().

The pr_info can tell the user whether the feature is enabled. From this
point of view, it makes sense. Right?

Thanks.

>
> --
> Oscar Salvador
> SUSE L3



-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v9 10/11] mm/hugetlb: Gather discrete indexes of tail page
  2020-12-13 15:45 ` [PATCH v9 10/11] mm/hugetlb: Gather discrete indexes of tail page Muchun Song
@ 2020-12-16 14:03   ` Oscar Salvador
  2020-12-16 14:26     ` [External] " Muchun Song
  0 siblings, 1 reply; 43+ messages in thread
From: Oscar Salvador @ 2020-12-16 14:03 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, song.bao.hua, david,
	duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On Sun, Dec 13, 2020 at 11:45:33PM +0800, Muchun Song wrote:
> For HugeTLB page, there are more metadata to save in the struct page.
> But the head struct page cannot meet our needs, so we have to abuse
> other tail struct page to store the metadata. In order to avoid
> conflicts caused by subsequent use of more tail struct pages, we can
> gather these discrete indexes of tail struct page. In this case, it
> will be easier to add a new tail page index later.
> 
> There are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct
> page structs can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP, so
"that can be..."

> add a BUILD_BUG_ON to catch invalid usage of the tail struct page.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

I think this makes the current situation with metadata usage in sub-pages
easier to track.
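
To make the idea concrete, a sketch of what gathering the indexes might look
like (the names below are illustrative, not necessarily the ones in the patch):

enum {
	SUBPAGE_INDEX_HWPOISON = 4,	/* the head[4].private slot from patch 6 */
	NR_USED_SUBPAGE,
};

static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
					struct page *page)
{
	/* Any new index must stay within the vmemmap pages that remain mapped. */
	BUILD_BUG_ON(NR_USED_SUBPAGE >=
		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));

	if (!PageHWPoison(head))
		return;

	if (free_vmemmap_pages_per_hpage(h)) {
		set_page_private(head + SUBPAGE_INDEX_HWPOISON, page - head);
	} else if (page != head) {
		SetPageHWPoison(page);
		ClearPageHWPoison(head);
	}
}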

Reviewed-by: Oscar Salvador <osalvador@suse.de>

-- 
Oscar Salvador
SUSE L3


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v9 10/11] mm/hugetlb: Gather discrete indexes of tail page
  2020-12-16 14:03   ` Oscar Salvador
@ 2020-12-16 14:26     ` Muchun Song
  0 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-12-16 14:26 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Wed, Dec 16, 2020 at 10:03 PM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Sun, Dec 13, 2020 at 11:45:33PM +0800, Muchun Song wrote:
> > For HugeTLB page, there are more metadata to save in the struct page.
> > But the head struct page cannot meet our needs, so we have to abuse
> > other tail struct page to store the metadata. In order to avoid
> > conflicts caused by subsequent use of more tail struct pages, we can
> > gather these discrete indexes of tail struct page. In this case, it
> > will be easier to add a new tail page index later.
> >
> > There are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct
> > page structs can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP, so
> "that can be..."

Thanks.

>
> > add a BUILD_BUG_ON to catch invalid usage of the tail struct page.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>
> I think this makes the current situation with metadata usage in sub-pages
> easier to track.

Agree.

>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>

Thank you.

>
> --
> Oscar Salvador
> SUSE L3



-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v9 08/11] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
  2020-12-13 15:45 ` [PATCH v9 08/11] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap Muchun Song
@ 2020-12-16 14:40   ` Oscar Salvador
  2020-12-16 16:04     ` [External] " Muchun Song
  0 siblings, 1 reply; 43+ messages in thread
From: Oscar Salvador @ 2020-12-16 14:40 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, song.bao.hua, david,
	duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On Sun, Dec 13, 2020 at 11:45:31PM +0800, Muchun Song wrote:
> Add a kernel parameter hugetlb_free_vmemmap to disable the feature of
> freeing unused vmemmap pages associated with each hugetlb page on boot.
I guess this should read "to enable the feature"?
AFAICS, it is disabled by default.
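
For reference, turning it on is just a matter of appending the parameter to the
kernel command line together with the usual HugeTLB options, e.g. (illustrative
values):

	hugetlb_free_vmemmap=on hugepagesz=2M hugepages=1024

With patch #9 applied, the boot log should then also contain the corresponding
"can free ... vmemmap pages" pr_info from hugetlb_vmemmap_init().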

> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

Reviewed-by: Oscar Salvador <osalvador@suse.de>

> ---
>  Documentation/admin-guide/kernel-parameters.txt |  9 +++++++++
>  Documentation/admin-guide/mm/hugetlbpage.rst    |  3 +++
>  arch/x86/mm/init_64.c                           |  8 ++++++--
>  include/linux/hugetlb.h                         | 19 +++++++++++++++++++
>  mm/hugetlb_vmemmap.c                            | 16 ++++++++++++++++
>  5 files changed, 53 insertions(+), 2 deletions(-)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index 3ae25630a223..9e6854f21d55 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -1551,6 +1551,15 @@
>  			Documentation/admin-guide/mm/hugetlbpage.rst.
>  			Format: size[KMG]
>  
> +	hugetlb_free_vmemmap=
> +			[KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
> +			this controls freeing unused vmemmap pages associated
> +			with each HugeTLB page.
> +			Format: { on | off (default) }
> +
> +			on:  enable the feature
> +			off: disable the feature
> +
>  	hung_task_panic=
>  			[KNL] Should the hung task detector generate panics.
>  			Format: 0 | 1
> diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
> index f7b1c7462991..3a23c2377acc 100644
> --- a/Documentation/admin-guide/mm/hugetlbpage.rst
> +++ b/Documentation/admin-guide/mm/hugetlbpage.rst
> @@ -145,6 +145,9 @@ default_hugepagesz
>  
>  	will all result in 256 2M huge pages being allocated.  Valid default
>  	huge page size is architecture dependent.
> +hugetlb_free_vmemmap
> +	When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
> +	unused vmemmap pages associated with each HugeTLB page.
>  
>  When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
>  indicates the current number of pre-allocated huge pages of the default size.
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index 0435bee2e172..1bce5f20e6ca 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -34,6 +34,7 @@
>  #include <linux/gfp.h>
>  #include <linux/kcore.h>
>  #include <linux/bootmem_info.h>
> +#include <linux/hugetlb.h>
>  
>  #include <asm/processor.h>
>  #include <asm/bios_ebda.h>
> @@ -1557,7 +1558,8 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
>  {
>  	int err;
>  
> -	if (end - start < PAGES_PER_SECTION * sizeof(struct page))
> +	if (is_hugetlb_free_vmemmap_enabled() ||
> +	    end - start < PAGES_PER_SECTION * sizeof(struct page))
>  		err = vmemmap_populate_basepages(start, end, node, NULL);
>  	else if (boot_cpu_has(X86_FEATURE_PSE))
>  		err = vmemmap_populate_hugepages(start, end, node, altmap);
> @@ -1585,6 +1587,8 @@ void register_page_bootmem_memmap(unsigned long section_nr,
>  	pmd_t *pmd;
>  	unsigned int nr_pmd_pages;
>  	struct page *page;
> +	bool base_mapping = !boot_cpu_has(X86_FEATURE_PSE) ||
> +			    is_hugetlb_free_vmemmap_enabled();
>  
>  	for (; addr < end; addr = next) {
>  		pte_t *pte = NULL;
> @@ -1610,7 +1614,7 @@ void register_page_bootmem_memmap(unsigned long section_nr,
>  		}
>  		get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);
>  
> -		if (!boot_cpu_has(X86_FEATURE_PSE)) {
> +		if (base_mapping) {
>  			next = (addr + PAGE_SIZE) & PAGE_MASK;
>  			pmd = pmd_offset(pud, addr);
>  			if (pmd_none(*pmd))
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index ebca2ef02212..7f47f0eeca3b 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -770,6 +770,20 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
>  }
>  #endif
>  
> +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> +extern bool hugetlb_free_vmemmap_enabled;
> +
> +static inline bool is_hugetlb_free_vmemmap_enabled(void)
> +{
> +	return hugetlb_free_vmemmap_enabled;
> +}
> +#else
> +static inline bool is_hugetlb_free_vmemmap_enabled(void)
> +{
> +	return false;
> +}
> +#endif
> +
>  #else	/* CONFIG_HUGETLB_PAGE */
>  struct hstate {};
>  
> @@ -923,6 +937,11 @@ static inline void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr
>  					pte_t *ptep, pte_t pte, unsigned long sz)
>  {
>  }
> +
> +static inline bool is_hugetlb_free_vmemmap_enabled(void)
> +{
> +	return false;
> +}
>  #endif	/* CONFIG_HUGETLB_PAGE */
>  
>  static inline spinlock_t *huge_pte_lock(struct hstate *h,
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index 02201c2e3dfa..64ad929cac61 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -180,6 +180,22 @@
>  #define RESERVE_VMEMMAP_NR		2U
>  #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
>  
> +bool hugetlb_free_vmemmap_enabled;
> +
> +static int __init early_hugetlb_free_vmemmap_param(char *buf)
> +{
> +	if (!buf)
> +		return -EINVAL;
> +
> +	if (!strcmp(buf, "on"))
> +		hugetlb_free_vmemmap_enabled = true;
> +	else if (strcmp(buf, "off"))
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
> +
>  static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
>  {
>  	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
> -- 
> 2.11.0
> 

-- 
Oscar Salvador
SUSE L3


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v9 08/11] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
  2020-12-16 14:40   ` Oscar Salvador
@ 2020-12-16 16:04     ` Muchun Song
  2020-12-16 22:10       ` Oscar Salvador
  0 siblings, 1 reply; 43+ messages in thread
From: Muchun Song @ 2020-12-16 16:04 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Wed, Dec 16, 2020 at 10:40 PM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Sun, Dec 13, 2020 at 11:45:31PM +0800, Muchun Song wrote:
> > Add a kernel parameter hugetlb_free_vmemmap to disable the feature of
> > freeing unused vmemmap pages associated with each hugetlb page on boot.
> I guess this should read "to enable the feature"?
> AFAICS, it is disabled by default.
>
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>

Thanks Oscar.

>
> > ---
> >  Documentation/admin-guide/kernel-parameters.txt |  9 +++++++++
> >  Documentation/admin-guide/mm/hugetlbpage.rst    |  3 +++
> >  arch/x86/mm/init_64.c                           |  8 ++++++--
> >  include/linux/hugetlb.h                         | 19 +++++++++++++++++++
> >  mm/hugetlb_vmemmap.c                            | 16 ++++++++++++++++
> >  5 files changed, 53 insertions(+), 2 deletions(-)
> >
> > diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> > index 3ae25630a223..9e6854f21d55 100644
> > --- a/Documentation/admin-guide/kernel-parameters.txt
> > +++ b/Documentation/admin-guide/kernel-parameters.txt
> > @@ -1551,6 +1551,15 @@
> >                       Documentation/admin-guide/mm/hugetlbpage.rst.
> >                       Format: size[KMG]
> >
> > +     hugetlb_free_vmemmap=
> > +                     [KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
> > +                     this controls freeing unused vmemmap pages associated
> > +                     with each HugeTLB page.
> > +                     Format: { on | off (default) }
> > +
> > +                     on:  enable the feature
> > +                     off: disable the feature
> > +
> >       hung_task_panic=
> >                       [KNL] Should the hung task detector generate panics.
> >                       Format: 0 | 1
> > diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
> > index f7b1c7462991..3a23c2377acc 100644
> > --- a/Documentation/admin-guide/mm/hugetlbpage.rst
> > +++ b/Documentation/admin-guide/mm/hugetlbpage.rst
> > @@ -145,6 +145,9 @@ default_hugepagesz
> >
> >       will all result in 256 2M huge pages being allocated.  Valid default
> >       huge page size is architecture dependent.
> > +hugetlb_free_vmemmap
> > +     When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
> > +     unused vmemmap pages associated with each HugeTLB page.
> >
> >  When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
> >  indicates the current number of pre-allocated huge pages of the default size.
> > diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> > index 0435bee2e172..1bce5f20e6ca 100644
> > --- a/arch/x86/mm/init_64.c
> > +++ b/arch/x86/mm/init_64.c
> > @@ -34,6 +34,7 @@
> >  #include <linux/gfp.h>
> >  #include <linux/kcore.h>
> >  #include <linux/bootmem_info.h>
> > +#include <linux/hugetlb.h>
> >
> >  #include <asm/processor.h>
> >  #include <asm/bios_ebda.h>
> > @@ -1557,7 +1558,8 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
> >  {
> >       int err;
> >
> > -     if (end - start < PAGES_PER_SECTION * sizeof(struct page))
> > +     if (is_hugetlb_free_vmemmap_enabled() ||
> > +         end - start < PAGES_PER_SECTION * sizeof(struct page))
> >               err = vmemmap_populate_basepages(start, end, node, NULL);
> >       else if (boot_cpu_has(X86_FEATURE_PSE))
> >               err = vmemmap_populate_hugepages(start, end, node, altmap);
> > @@ -1585,6 +1587,8 @@ void register_page_bootmem_memmap(unsigned long section_nr,
> >       pmd_t *pmd;
> >       unsigned int nr_pmd_pages;
> >       struct page *page;
> > +     bool base_mapping = !boot_cpu_has(X86_FEATURE_PSE) ||
> > +                         is_hugetlb_free_vmemmap_enabled();
> >
> >       for (; addr < end; addr = next) {
> >               pte_t *pte = NULL;
> > @@ -1610,7 +1614,7 @@ void register_page_bootmem_memmap(unsigned long section_nr,
> >               }
> >               get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);
> >
> > -             if (!boot_cpu_has(X86_FEATURE_PSE)) {
> > +             if (base_mapping) {
> >                       next = (addr + PAGE_SIZE) & PAGE_MASK;
> >                       pmd = pmd_offset(pud, addr);
> >                       if (pmd_none(*pmd))
> > diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> > index ebca2ef02212..7f47f0eeca3b 100644
> > --- a/include/linux/hugetlb.h
> > +++ b/include/linux/hugetlb.h
> > @@ -770,6 +770,20 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
> >  }
> >  #endif
> >
> > +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> > +extern bool hugetlb_free_vmemmap_enabled;
> > +
> > +static inline bool is_hugetlb_free_vmemmap_enabled(void)
> > +{
> > +     return hugetlb_free_vmemmap_enabled;
> > +}
> > +#else
> > +static inline bool is_hugetlb_free_vmemmap_enabled(void)
> > +{
> > +     return false;
> > +}
> > +#endif
> > +
> >  #else        /* CONFIG_HUGETLB_PAGE */
> >  struct hstate {};
> >
> > @@ -923,6 +937,11 @@ static inline void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr
> >                                       pte_t *ptep, pte_t pte, unsigned long sz)
> >  {
> >  }
> > +
> > +static inline bool is_hugetlb_free_vmemmap_enabled(void)
> > +{
> > +     return false;
> > +}
> >  #endif       /* CONFIG_HUGETLB_PAGE */
> >
> >  static inline spinlock_t *huge_pte_lock(struct hstate *h,
> > diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> > index 02201c2e3dfa..64ad929cac61 100644
> > --- a/mm/hugetlb_vmemmap.c
> > +++ b/mm/hugetlb_vmemmap.c
> > @@ -180,6 +180,22 @@
> >  #define RESERVE_VMEMMAP_NR           2U
> >  #define RESERVE_VMEMMAP_SIZE         (RESERVE_VMEMMAP_NR << PAGE_SHIFT)
> >
> > +bool hugetlb_free_vmemmap_enabled;
> > +
> > +static int __init early_hugetlb_free_vmemmap_param(char *buf)
> > +{
> > +     if (!buf)
> > +             return -EINVAL;
> > +
> > +     if (!strcmp(buf, "on"))
> > +             hugetlb_free_vmemmap_enabled = true;
> > +     else if (strcmp(buf, "off"))
> > +             return -EINVAL;
> > +
> > +     return 0;
> > +}
> > +early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
> > +
> >  static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
> >  {
> >       return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
> > --
> > 2.11.0
> >
>
> --
> Oscar Salvador
> SUSE L3



-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v9 03/11] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page
  2020-12-13 15:45 ` [PATCH v9 03/11] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page Muchun Song
  2020-12-16 13:06   ` Oscar Salvador
@ 2020-12-16 22:08   ` Mike Kravetz
  2020-12-16 22:25     ` Oscar Salvador
  2020-12-17  4:06     ` Muchun Song
  1 sibling, 2 replies; 43+ messages in thread
From: Mike Kravetz @ 2020-12-16 22:08 UTC (permalink / raw)
  To: Muchun Song, corbet, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On 12/13/20 7:45 AM, Muchun Song wrote:
> Every HugeTLB has more than one struct page structure. We __know__ that
> we only use the first 4(HUGETLB_CGROUP_MIN_ORDER) struct page structures
> to store metadata associated with each HugeTLB.
> 
> There are a lot of struct page structures associated with each HugeTLB
> page. For tail pages, the value of compound_head is the same. So we can
> reuse first page of tail page structures. We map the virtual addresses
> of the remaining pages of tail page structures to the first tail page
> struct, and then free these page frames. Therefore, we need to reserve
> two pages as vmemmap areas.
> 
> When we allocate a HugeTLB page from the buddy, we can free some vmemmap
> pages associated with each HugeTLB page. It is more appropriate to do it
> in the prep_new_huge_page().
> 
> The free_vmemmap_pages_per_hpage(), which indicates how many vmemmap
> pages associated with a HugeTLB page can be freed, returns zero for
> now, which means the feature is disabled. We will enable it once all
> the infrastructure is there.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  include/linux/bootmem_info.h |  27 +++++-
>  include/linux/mm.h           |   2 +
>  mm/Makefile                  |   1 +
>  mm/hugetlb.c                 |   3 +
>  mm/hugetlb_vmemmap.c         | 209 +++++++++++++++++++++++++++++++++++++++++++
>  mm/hugetlb_vmemmap.h         |  20 +++++
>  mm/sparse-vmemmap.c          | 170 +++++++++++++++++++++++++++++++++++
>  7 files changed, 431 insertions(+), 1 deletion(-)
>  create mode 100644 mm/hugetlb_vmemmap.c
>  create mode 100644 mm/hugetlb_vmemmap.h

> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index 16183d85a7d5..78c527617e8d 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -27,8 +27,178 @@
>  #include <linux/spinlock.h>
>  #include <linux/vmalloc.h>
>  #include <linux/sched.h>
> +#include <linux/pgtable.h>
> +#include <linux/bootmem_info.h>
> +
>  #include <asm/dma.h>
>  #include <asm/pgalloc.h>
> +#include <asm/tlbflush.h>
> +
> +/*
> + * vmemmap_rmap_walk - walk vmemmap page table

I am not sure if 'rmap' should be part of these names.  rmap today is mostly
about reverse mapping lookup.  Did you use rmap for 'remap', or because this
code is patterned after the page table walking rmap code?  Just think the
naming could cause some confusion.

> + *
> + * @rmap_pte:		called for each non-empty PTE (lowest-level) entry.
> + * @reuse:		the page which is reused for the tail vmemmap pages.
> + * @vmemmap_pages:	the list head of the vmemmap pages that can be freed.
> + */
> +struct vmemmap_rmap_walk {
> +	void (*rmap_pte)(pte_t *pte, unsigned long addr,
> +			 struct vmemmap_rmap_walk *walk);
> +	struct page *reuse;
> +	struct list_head *vmemmap_pages;
> +};
> +
> +/*
> + * The index of the pte page table which is mapped to the tail of the
> + * vmemmap page.
> + */
> +#define VMEMMAP_TAIL_PAGE_REUSE		-1

That is the index/offset from the range to be remapped.  See comments below.

> +
> +static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
> +			      unsigned long end, struct vmemmap_rmap_walk *walk)
> +{
> +	pte_t *pte;
> +
> +	pte = pte_offset_kernel(pmd, addr);
> +	do {
> +		BUG_ON(pte_none(*pte));
> +
> +		if (!walk->reuse)
> +			walk->reuse = pte_page(pte[VMEMMAP_TAIL_PAGE_REUSE]);

It may be just me, but I don't like the pte[-1] here.  It certainly does work
as designed because we want to remap all pages in the range to the page before
the range (at offset -1).  But, we do not really validate this 'reuse' page.
There is the BUG_ON(pte_none(*pte)) as a sanity check, but we do nothing similar
for pte[-1].  Based on the usage for HugeTLB pages, we can be confident that
pte[-1] is actually a pte.  In discussions with Oscar, you mentioned another
possible use for these routines.

Don't change anything based on my opinion only.  I would like to see what
others think as well.

> +
> +		if (walk->rmap_pte)
> +			walk->rmap_pte(pte, addr, walk);
> +	} while (pte++, addr += PAGE_SIZE, addr != end);
> +}
> +
> +static void vmemmap_pmd_range(pud_t *pud, unsigned long addr,
> +			      unsigned long end, struct vmemmap_rmap_walk *walk)
> +{
> +	pmd_t *pmd;
> +	unsigned long next;
> +
> +	pmd = pmd_offset(pud, addr);
> +	do {
> +		BUG_ON(pmd_none(*pmd));
> +
> +		next = pmd_addr_end(addr, end);
> +		vmemmap_pte_range(pmd, addr, next, walk);
> +	} while (pmd++, addr = next, addr != end);
> +}
> +
> +static void vmemmap_pud_range(p4d_t *p4d, unsigned long addr,
> +			      unsigned long end, struct vmemmap_rmap_walk *walk)
> +{
> +	pud_t *pud;
> +	unsigned long next;
> +
> +	pud = pud_offset(p4d, addr);
> +	do {
> +		BUG_ON(pud_none(*pud));
> +
> +		next = pud_addr_end(addr, end);
> +		vmemmap_pmd_range(pud, addr, next, walk);
> +	} while (pud++, addr = next, addr != end);
> +}
> +
> +static void vmemmap_p4d_range(pgd_t *pgd, unsigned long addr,
> +			      unsigned long end, struct vmemmap_rmap_walk *walk)
> +{
> +	p4d_t *p4d;
> +	unsigned long next;
> +
> +	p4d = p4d_offset(pgd, addr);
> +	do {
> +		BUG_ON(p4d_none(*p4d));
> +
> +		next = p4d_addr_end(addr, end);
> +		vmemmap_pud_range(p4d, addr, next, walk);
> +	} while (p4d++, addr = next, addr != end);
> +}
> +
> +static void vmemmap_remap_range(unsigned long start, unsigned long end,
> +				struct vmemmap_rmap_walk *walk)
> +{
> +	unsigned long addr = start;
> +	unsigned long next;
> +	pgd_t *pgd;
> +
> +	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
> +	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
> +
> +	pgd = pgd_offset_k(addr);
> +	do {
> +		BUG_ON(pgd_none(*pgd));
> +
> +		next = pgd_addr_end(addr, end);
> +		vmemmap_p4d_range(pgd, addr, next, walk);
> +	} while (pgd++, addr = next, addr != end);
> +
> +	flush_tlb_kernel_range(start, end);
> +}
> +
> +/*
> + * Free a vmemmap page. A vmemmap page can be allocated from the memblock
> + * allocator or buddy allocator. If the PG_reserved flag is set, it means
> + * that it allocated from the memblock allocator, just free it via the
> + * free_bootmem_page(). Otherwise, use __free_page().
> + */
> +static inline void free_vmemmap_page(struct page *page)
> +{
> +	if (PageReserved(page))
> +		free_bootmem_page(page);
> +	else
> +		__free_page(page);
> +}
> +
> +/* Free a list of the vmemmap pages */
> +static void free_vmemmap_page_list(struct list_head *list)
> +{
> +	struct page *page, *next;
> +
> +	list_for_each_entry_safe(page, next, list, lru) {
> +		list_del(&page->lru);
> +		free_vmemmap_page(page);
> +	}
> +}
> +
> +static void vmemmap_remap_reuse_pte(pte_t *pte, unsigned long addr,
> +				    struct vmemmap_rmap_walk *walk)

See the vmemmap_remap_reuse rename suggestion below.  I would suggest 'reuse'
be dropped from the name here and the function just be called 'vmemmap_remap_pte'.

> +{
> +	/*
> +	 * Make the tail pages are mapped with read-only to catch
> +	 * illegal write operation to the tail pages.
> +	 */
> +	pgprot_t pgprot = PAGE_KERNEL_RO;
> +	pte_t entry = mk_pte(walk->reuse, pgprot);
> +	struct page *page;
> +
> +	page = pte_page(*pte);
> +	list_add(&page->lru, walk->vmemmap_pages);
> +
> +	set_pte_at(&init_mm, addr, pte, entry);
> +}
> +
> +/**
> + * vmemmap_remap_reuse - remap the vmemmap virtual address range

My original commnet here was:

Not sure if the word '_reuse' is best in this function name.  To me, the name
implies this routine will reuse vmemmap pages.  Perhaps, it makes more sense
to rename as 'vmemmap_remap_free'?  It will first remap, then free vmemmap.

But, then I looked at the code above and perhaps you are using the word
'_reuse' because the page before the range will be reused?  The vmemmap
page at offset VMEMMAP_TAIL_PAGE_REUSE (-1).

> + *                       [start, start + size) to the page which
> + *                       [start - PAGE_SIZE, start) is mapped.
> + * @start:	start address of the vmemmap virtual address range
> + * @end:	size of the vmemmap virtual address range

      ^^^^ should be @size:

-- 
Mike Kravetz

> + */
> +void vmemmap_remap_reuse(unsigned long start, unsigned long size)
> +{
> +	unsigned long end = start + size;
> +	LIST_HEAD(vmemmap_pages);
> +
> +	struct vmemmap_rmap_walk walk = {
> +		.rmap_pte	= vmemmap_remap_reuse_pte,
> +		.vmemmap_pages	= &vmemmap_pages,
> +	};
> +
> +	vmemmap_remap_range(start, end, &walk);
> +	free_vmemmap_page_list(&vmemmap_pages);
> +}
>  
>  /*
>   * Allocate a block of memory to be used to back the virtual memory map
> 


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v9 08/11] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
  2020-12-16 16:04     ` [External] " Muchun Song
@ 2020-12-16 22:10       ` Oscar Salvador
  2020-12-17  2:45         ` Muchun Song
  0 siblings, 1 reply; 43+ messages in thread
From: Oscar Salvador @ 2020-12-16 22:10 UTC (permalink / raw)
  To: Muchun Song
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Thu, Dec 17, 2020 at 12:04:11AM +0800, Muchun Song wrote:
> On Wed, Dec 16, 2020 at 10:40 PM Oscar Salvador <osalvador@suse.de> wrote:
> >
> > On Sun, Dec 13, 2020 at 11:45:31PM +0800, Muchun Song wrote:
> > > Add a kernel parameter hugetlb_free_vmemmap to disable the feature of
> > > freeing unused vmemmap pages associated with each hugetlb page on boot.
> > I guess this should read "to enable the feature"?
> > AFAICS, it is disabled by default.

It still would be great to have an answer for that.

Thanks


> > >  Documentation/admin-guide/kernel-parameters.txt |  9 +++++++++
> > >  Documentation/admin-guide/mm/hugetlbpage.rst    |  3 +++
> > >  arch/x86/mm/init_64.c                           |  8 ++++++--
> > >  include/linux/hugetlb.h                         | 19 +++++++++++++++++++
> > >  mm/hugetlb_vmemmap.c                            | 16 ++++++++++++++++
> > >  5 files changed, 53 insertions(+), 2 deletions(-)
> > >
> > > diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> > > index 3ae25630a223..9e6854f21d55 100644
> > > --- a/Documentation/admin-guide/kernel-parameters.txt
> > > +++ b/Documentation/admin-guide/kernel-parameters.txt
> > > @@ -1551,6 +1551,15 @@
> > >                       Documentation/admin-guide/mm/hugetlbpage.rst.
> > >                       Format: size[KMG]
> > >
> > > +     hugetlb_free_vmemmap=
> > > +                     [KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
> > > +                     this controls freeing unused vmemmap pages associated
> > > +                     with each HugeTLB page.
> > > +                     Format: { on | off (default) }
> > > +
> > > +                     on:  enable the feature
> > > +                     off: disable the feature
> > > +
> > >       hung_task_panic=
> > >                       [KNL] Should the hung task detector generate panics.
> > >                       Format: 0 | 1
> > > diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
> > > index f7b1c7462991..3a23c2377acc 100644
> > > --- a/Documentation/admin-guide/mm/hugetlbpage.rst
> > > +++ b/Documentation/admin-guide/mm/hugetlbpage.rst
> > > @@ -145,6 +145,9 @@ default_hugepagesz
> > >
> > >       will all result in 256 2M huge pages being allocated.  Valid default
> > >       huge page size is architecture dependent.
> > > +hugetlb_free_vmemmap
> > > +     When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
> > > +     unused vmemmap pages associated with each HugeTLB page.
> > >
> > >  When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
> > >  indicates the current number of pre-allocated huge pages of the default size.
> > > diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> > > index 0435bee2e172..1bce5f20e6ca 100644
> > > --- a/arch/x86/mm/init_64.c
> > > +++ b/arch/x86/mm/init_64.c
> > > @@ -34,6 +34,7 @@
> > >  #include <linux/gfp.h>
> > >  #include <linux/kcore.h>
> > >  #include <linux/bootmem_info.h>
> > > +#include <linux/hugetlb.h>
> > >
> > >  #include <asm/processor.h>
> > >  #include <asm/bios_ebda.h>
> > > @@ -1557,7 +1558,8 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
> > >  {
> > >       int err;
> > >
> > > -     if (end - start < PAGES_PER_SECTION * sizeof(struct page))
> > > +     if (is_hugetlb_free_vmemmap_enabled() ||
> > > +         end - start < PAGES_PER_SECTION * sizeof(struct page))
> > >               err = vmemmap_populate_basepages(start, end, node, NULL);
> > >       else if (boot_cpu_has(X86_FEATURE_PSE))
> > >               err = vmemmap_populate_hugepages(start, end, node, altmap);
> > > @@ -1585,6 +1587,8 @@ void register_page_bootmem_memmap(unsigned long section_nr,
> > >       pmd_t *pmd;
> > >       unsigned int nr_pmd_pages;
> > >       struct page *page;
> > > +     bool base_mapping = !boot_cpu_has(X86_FEATURE_PSE) ||
> > > +                         is_hugetlb_free_vmemmap_enabled();
> > >
> > >       for (; addr < end; addr = next) {
> > >               pte_t *pte = NULL;
> > > @@ -1610,7 +1614,7 @@ void register_page_bootmem_memmap(unsigned long section_nr,
> > >               }
> > >               get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);
> > >
> > > -             if (!boot_cpu_has(X86_FEATURE_PSE)) {
> > > +             if (base_mapping) {
> > >                       next = (addr + PAGE_SIZE) & PAGE_MASK;
> > >                       pmd = pmd_offset(pud, addr);
> > >                       if (pmd_none(*pmd))
> > > diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> > > index ebca2ef02212..7f47f0eeca3b 100644
> > > --- a/include/linux/hugetlb.h
> > > +++ b/include/linux/hugetlb.h
> > > @@ -770,6 +770,20 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
> > >  }
> > >  #endif
> > >
> > > +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> > > +extern bool hugetlb_free_vmemmap_enabled;
> > > +
> > > +static inline bool is_hugetlb_free_vmemmap_enabled(void)
> > > +{
> > > +     return hugetlb_free_vmemmap_enabled;
> > > +}
> > > +#else
> > > +static inline bool is_hugetlb_free_vmemmap_enabled(void)
> > > +{
> > > +     return false;
> > > +}
> > > +#endif
> > > +
> > >  #else        /* CONFIG_HUGETLB_PAGE */
> > >  struct hstate {};
> > >
> > > @@ -923,6 +937,11 @@ static inline void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr
> > >                                       pte_t *ptep, pte_t pte, unsigned long sz)
> > >  {
> > >  }
> > > +
> > > +static inline bool is_hugetlb_free_vmemmap_enabled(void)
> > > +{
> > > +     return false;
> > > +}
> > >  #endif       /* CONFIG_HUGETLB_PAGE */
> > >
> > >  static inline spinlock_t *huge_pte_lock(struct hstate *h,
> > > diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> > > index 02201c2e3dfa..64ad929cac61 100644
> > > --- a/mm/hugetlb_vmemmap.c
> > > +++ b/mm/hugetlb_vmemmap.c
> > > @@ -180,6 +180,22 @@
> > >  #define RESERVE_VMEMMAP_NR           2U
> > >  #define RESERVE_VMEMMAP_SIZE         (RESERVE_VMEMMAP_NR << PAGE_SHIFT)
> > >
> > > +bool hugetlb_free_vmemmap_enabled;
> > > +
> > > +static int __init early_hugetlb_free_vmemmap_param(char *buf)
> > > +{
> > > +     if (!buf)
> > > +             return -EINVAL;
> > > +
> > > +     if (!strcmp(buf, "on"))
> > > +             hugetlb_free_vmemmap_enabled = true;
> > > +     else if (strcmp(buf, "off"))
> > > +             return -EINVAL;
> > > +
> > > +     return 0;
> > > +}
> > > +early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
> > > +
> > >  static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
> > >  {
> > >       return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
> > > --
> > > 2.11.0
> > >
> >
> > --
> > Oscar Salvador
> > SUSE L3
> 
> 
> 
> -- 
> Yours,
> Muchun

-- 
Oscar Salvador
SUSE L3


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v9 09/11] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  2020-12-16 13:56     ` [External] " Muchun Song
@ 2020-12-16 22:12       ` Oscar Salvador
  0 siblings, 0 replies; 43+ messages in thread
From: Oscar Salvador @ 2020-12-16 22:12 UTC (permalink / raw)
  To: Muchun Song
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Wed, Dec 16, 2020 at 09:56:47PM +0800, Muchun Song wrote:
> The pr_info can tell the user whether the feature is enabled. From this
> point of view, it makes sense. Right?

Well, I guess so.
Anyway, it is not that we are going to flood the logs, so it is ok.


-- 
Oscar Salvador
SUSE L3


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v9 03/11] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page
  2020-12-16 22:08   ` Mike Kravetz
@ 2020-12-16 22:25     ` Oscar Salvador
  2020-12-16 22:49       ` Mike Kravetz
  2020-12-17  4:06     ` Muchun Song
  1 sibling, 1 reply; 43+ messages in thread
From: Oscar Salvador @ 2020-12-16 22:25 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: Muchun Song, corbet, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, song.bao.hua, david,
	duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On Wed, Dec 16, 2020 at 02:08:30PM -0800, Mike Kravetz wrote:
> > + * vmemmap_rmap_walk - walk vmemmap page table
> 
> I am not sure if 'rmap' should be part of these names.  rmap today is mostly
> about reverse mapping lookup.  Did you use rmap for 'remap', or because this
> code is patterned after the page table walking rmap code?  Just think the
> naming could cause some confusion.

I also had the same feeling about the 'rmap' usage.

> > +
> > +static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
> > +			      unsigned long end, struct vmemmap_rmap_walk *walk)
> > +{
> > +	pte_t *pte;
> > +
> > +	pte = pte_offset_kernel(pmd, addr);
> > +	do {
> > +		BUG_ON(pte_none(*pte));
> > +
> > +		if (!walk->reuse)
> > +			walk->reuse = pte_page(pte[VMEMMAP_TAIL_PAGE_REUSE]);
> 
> It may be just me, but I don't like the pte[-1] here.  It certainly does work
> as designed because we want to remap all pages in the range to the page before
> the range (at offset -1).  But, we do not really validate this 'reuse' page.
> There is the BUG_ON(pte_none(*pte)) as a sanity check, but we do nothing similar
> for pte[-1].  Based on the usage for HugeTLB pages, we can be confident that
> pte[-1] is actually a pte.  In discussions with Oscar, you mentioned another
> possible use for these routines.

Without giving it much of a thought, I guess we could duplicate the
BUG_ON for the pte outside the loop, and add a new one for pte[-1].
Also, since walk->reuse seems to not change once it is set, we can take
it outside the loop? e.g:

	pte_t *pte;

	pte = pte_offset_kernel(pmd, addr);
	BUG_ON(pte_none(*pte));
	BUG_ON(pte_none(pte[VMEMMAP_TAIL_PAGE_REUSE]));
	walk->reuse = pte_page(pte[VMEMMAP_TAIL_PAGE_REUSE]);
	do {
		....
	} while...

Or I am not sure whether we want to keep it inside the loop in case
future cases change walk->reuse during the operation.
But to be honest, I do not think it is realistic to anticipate all future
possible uses of this, so I would rather keep it simple for now.

-- 
Oscar Salvador
SUSE L3


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v9 03/11] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page
  2020-12-16 22:25     ` Oscar Salvador
@ 2020-12-16 22:49       ` Mike Kravetz
  2020-12-17  6:54         ` [External] " Muchun Song
  0 siblings, 1 reply; 43+ messages in thread
From: Mike Kravetz @ 2020-12-16 22:49 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Muchun Song, corbet, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, song.bao.hua, david,
	duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On 12/16/20 2:25 PM, Oscar Salvador wrote:
> On Wed, Dec 16, 2020 at 02:08:30PM -0800, Mike Kravetz wrote:
>>> + * vmemmap_rmap_walk - walk vmemmap page table
>>> +
>>> +static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
>>> +			      unsigned long end, struct vmemmap_rmap_walk *walk)
>>> +{
>>> +	pte_t *pte;
>>> +
>>> +	pte = pte_offset_kernel(pmd, addr);
>>> +	do {
>>> +		BUG_ON(pte_none(*pte));
>>> +
>>> +		if (!walk->reuse)
>>> +			walk->reuse = pte_page(pte[VMEMMAP_TAIL_PAGE_REUSE]);
>>
>> It may be just me, but I don't like the pte[-1] here.  It certainly does work
>> as designed because we want to remap all pages in the range to the page before
>> the range (at offset -1).  But, we do not really validate this 'reuse' page.
>> There is the BUG_ON(pte_none(*pte)) as a sanity check, but we do nothing similar
>> for pte[-1].  Based on the usage for HugeTLB pages, we can be confident that
>> pte[-1] is actually a pte.  In discussions with Oscar, you mentioned another
>> possible use for these routines.
> 
> Without giving it much of a thought, I guess we could duplicate the
> BUG_ON for the pte outside the loop, and add a new one for pte[-1].
> Also, since walk->reuse seems to not change once it is set, we can take
> it outside the loop? e.g:
> 
> 	pte_t *pte;
> 
> 	pte = pte_offset_kernel(pmd, addr);
> 	BUG_ON(pte_none(*pte));
> 	BUG_ON(pte_none(pte[VMEMMAP_TAIL_PAGE_REUSE]));
> 	walk->reuse = pte_page(pte[VMEMMAP_TAIL_PAGE_REUSE]);
> 	do {
> 		....
> 	} while...
> 
> Or I am not sure whether we want to keep it inside the loop in case
> future cases change walk->reuse during the operation.
> But to be honest, I do not think it is realistic to anticipate all future
> possible uses of this, so I would rather keep it simple for now.

I was thinking about possibly passing the 'reuse' address as another parameter
to vmemmap_remap_reuse().  We could add this addr to the vmemmap_rmap_walk
struct and set walk->reuse when we get to the pte for that address.  Of
course this would imply that the addr would need to be part of the range.
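
A rough sketch of that variant (the reuse_addr field and its handling are
illustrative, not code from this series):

struct vmemmap_rmap_walk {
	void (*rmap_pte)(pte_t *pte, unsigned long addr,
			 struct vmemmap_rmap_walk *walk);
	unsigned long reuse_addr;	/* address whose page is reused, inside the range */
	struct page *reuse;
	struct list_head *vmemmap_pages;
};

static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
			      unsigned long end, struct vmemmap_rmap_walk *walk)
{
	pte_t *pte = pte_offset_kernel(pmd, addr);

	do {
		BUG_ON(pte_none(*pte));

		if (addr == walk->reuse_addr)
			walk->reuse = pte_page(*pte);	/* no pte[-1] peeking */
		else if (walk->rmap_pte)
			walk->rmap_pte(pte, addr, walk);
	} while (pte++, addr += PAGE_SIZE, addr != end);
}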

Ideally, we would walk the page table to get to the reuse page.  My concern
was not explicitly about adding the BUG_ON.  In more general use, *pte could
be the first entry on a pte page.  And, then pte[-1] may not even be a pte.

Again, I don't think this matters for the current HugeTLB use case.  Just a
little concerned if code is put to use for other purposes.
-- 
Mike Kravetz


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v9 04/11] mm/hugetlb: Defer freeing of HugeTLB pages
  2020-12-13 15:45 ` [PATCH v9 04/11] mm/hugetlb: Defer freeing of HugeTLB pages Muchun Song
@ 2020-12-16 23:48   ` Mike Kravetz
  2020-12-17  3:19     ` [External] " Muchun Song
  0 siblings, 1 reply; 43+ messages in thread
From: Mike Kravetz @ 2020-12-16 23:48 UTC (permalink / raw)
  To: Muchun Song, corbet, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On 12/13/20 7:45 AM, Muchun Song wrote:
> In the subsequent patch, we will allocate the vmemmap pages when free
> HugeTLB pages. But update_and_free_page() is called from a non-task
> context(and hold hugetlb_lock), so we can defer the actual freeing in
> a workqueue to prevent use GFP_ATOMIC to allocate the vmemmap pages.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

It is unfortunate we need to add this complexity, but I cannot think
of another way.  One small comment (no required change) below.

Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>

> ---
>  mm/hugetlb.c         | 77 ++++++++++++++++++++++++++++++++++++++++++++++++----
>  mm/hugetlb_vmemmap.c | 12 --------
>  mm/hugetlb_vmemmap.h | 17 ++++++++++++
>  3 files changed, 88 insertions(+), 18 deletions(-)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 140135fc8113..0ff9b90e524f 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1292,15 +1292,76 @@ static inline void destroy_compound_gigantic_page(struct page *page,
>  						unsigned int order) { }
>  #endif
>  
> -static void update_and_free_page(struct hstate *h, struct page *page)
> +static void __free_hugepage(struct hstate *h, struct page *page);
> +
> +/*
> + * As update_and_free_page() is be called from a non-task context(and hold
> + * hugetlb_lock), we can defer the actual freeing in a workqueue to prevent
> + * use GFP_ATOMIC to allocate a lot of vmemmap pages.
> + *
> + * update_hpage_vmemmap_workfn() locklessly retrieves the linked list of
> + * pages to be freed and frees them one-by-one. As the page->mapping pointer
> + * is going to be cleared in update_hpage_vmemmap_workfn() anyway, it is
> + * reused as the llist_node structure of a lockless linked list of huge
> + * pages to be freed.
> + */
> +static LLIST_HEAD(hpage_update_freelist);
> +
> +static void update_hpage_vmemmap_workfn(struct work_struct *work)
>  {
> -	int i;
> +	struct llist_node *node;
> +	struct page *page;
> +
> +	node = llist_del_all(&hpage_update_freelist);
>  
> +	while (node) {
> +		page = container_of((struct address_space **)node,
> +				     struct page, mapping);
> +		node = node->next;
> +		page->mapping = NULL;
> +		__free_hugepage(page_hstate(page), page);
> +
> +		cond_resched();
> +	}
> +}
> +static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn);
> +
> +static inline void __update_and_free_page(struct hstate *h, struct page *page)
> +{
> +	/* No need to allocate vmemmap pages */
> +	if (!free_vmemmap_pages_per_hpage(h)) {
> +		__free_hugepage(h, page);
> +		return;
> +	}
> +
> +	/*
> +	 * Defer freeing to avoid using GFP_ATOMIC to allocate vmemmap
> +	 * pages.
> +	 *
> +	 * Only call schedule_work() if hpage_update_freelist is previously
> +	 * empty. Otherwise, schedule_work() had been called but the workfn
> +	 * hasn't retrieved the list yet.
> +	 */
> +	if (llist_add((struct llist_node *)&page->mapping,
> +		      &hpage_update_freelist))
> +		schedule_work(&hpage_update_work);
> +}
> +
> +static void update_and_free_page(struct hstate *h, struct page *page)
> +{
>  	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
>  		return;
>  
>  	h->nr_huge_pages--;
>  	h->nr_huge_pages_node[page_to_nid(page)]--;
> +
> +	__update_and_free_page(h, page);
> +}
> +
> +static void __free_hugepage(struct hstate *h, struct page *page)
> +{
> +	int i;
> +

Can we add a comment here saying that this is where the call to allocate
vmemmap pages will be inserted in a later patch?  Such a comment would
help a bit to understand the restructuring of the code.

-- 
Mike Kravetz

>  	for (i = 0; i < pages_per_huge_page(h); i++) {
>  		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
>  				1 << PG_referenced | 1 << PG_dirty |
> @@ -1313,13 +1374,17 @@ static void update_and_free_page(struct hstate *h, struct page *page)
>  	set_page_refcounted(page);
>  	if (hstate_is_gigantic(h)) {
>  		/*
> -		 * Temporarily drop the hugetlb_lock, because
> -		 * we might block in free_gigantic_page().
> +		 * Temporarily drop the hugetlb_lock only when this type of
> +		 * HugeTLB page does not support vmemmap optimization (which
> +		 * contex do not hold the hugetlb_lock), because we might block
> +		 * in free_gigantic_page().
>  		 */
> -		spin_unlock(&hugetlb_lock);
> +		if (!free_vmemmap_pages_per_hpage(h))
> +			spin_unlock(&hugetlb_lock);
>  		destroy_compound_gigantic_page(page, huge_page_order(h));
>  		free_gigantic_page(page, huge_page_order(h));
> -		spin_lock(&hugetlb_lock);
> +		if (!free_vmemmap_pages_per_hpage(h))
> +			spin_lock(&hugetlb_lock);
>  	} else {
>  		__free_pages(page, huge_page_order(h));
>  	}


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v9 05/11] mm/hugetlb: Allocate the vmemmap pages associated with each HugeTLB page
  2020-12-13 15:45 ` [PATCH v9 05/11] mm/hugetlb: Allocate the vmemmap pages associated with each HugeTLB page Muchun Song
@ 2020-12-17  1:17   ` Mike Kravetz
  2020-12-17  3:22     ` [External] " Muchun Song
  0 siblings, 1 reply; 43+ messages in thread
From: Mike Kravetz @ 2020-12-17  1:17 UTC (permalink / raw)
  To: Muchun Song, corbet, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On 12/13/20 7:45 AM, Muchun Song wrote:
> When we free a HugeTLB page to the buddy allocator, we should allocate the
> vmemmap pages associated with it. We can do that in the __free_hugepage()
> before freeing it to buddy.

...

> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index 78c527617e8d..ffcf092c92ed 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -29,6 +29,7 @@
>  #include <linux/sched.h>
>  #include <linux/pgtable.h>
>  #include <linux/bootmem_info.h>
> +#include <linux/delay.h>
>  
>  #include <asm/dma.h>
>  #include <asm/pgalloc.h>
> @@ -39,7 +40,8 @@
>   *
>   * @rmap_pte:		called for each non-empty PTE (lowest-level) entry.
>   * @reuse:		the page which is reused for the tail vmemmap pages.
> - * @vmemmap_pages:	the list head of the vmemmap pages that can be freed.
> + * @vmemmap_pages:	the list head of the vmemmap pages that can be freed
> + *			or is mapped from.
>   */
>  struct vmemmap_rmap_walk {
>  	void (*rmap_pte)(pte_t *pte, unsigned long addr,
> @@ -54,6 +56,9 @@ struct vmemmap_rmap_walk {
>   */
>  #define VMEMMAP_TAIL_PAGE_REUSE		-1
>  
> +/* The gfp mask of allocating vmemmap page */
> +#define GFP_VMEMMAP_PAGE	(GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN)
> +
>  static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
>  			      unsigned long end, struct vmemmap_rmap_walk *walk)
>  {
> @@ -200,6 +205,68 @@ void vmemmap_remap_reuse(unsigned long start, unsigned long size)
>  	free_vmemmap_page_list(&vmemmap_pages);
>  }
>  
> +static void vmemmap_remap_restore_pte(pte_t *pte, unsigned long addr,
> +				      struct vmemmap_rmap_walk *walk)
> +{
> +	pgprot_t pgprot = PAGE_KERNEL;
> +	struct page *page;
> +	void *to;
> +
> +	BUG_ON(pte_page(*pte) != walk->reuse);
> +
> +	page = list_first_entry(walk->vmemmap_pages, struct page, lru);
> +	list_del(&page->lru);
> +	to = page_to_virt(page);
> +	copy_page(to, page_to_virt(walk->reuse));
> +
> +	set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
> +}
> +
> +static void alloc_vmemmap_page_list(struct list_head *list,
> +				    unsigned long nr_pages)
> +{
> +	while (nr_pages--) {
> +		struct page *page;
> +
> +retry:
> +		page = alloc_page(GFP_VMEMMAP_PAGE);

Should we try (or require) that the vmemmap pages be on the same node as
the pages they describe?  I imagine performance would be impacted if a
struct page and the page it describes are on different NUMA nodes.

> +		if (unlikely(!page)) {
> +			msleep(100);
> +			/*
> +			 * We should retry infinitely, because we cannot
> +			 * handle allocation failures. Once we allocate
> +			 * vmemmap pages successfully, then we can free
> +			 * a HugeTLB page.
> +			 */
> +			goto retry;
> +		}
> +		list_add_tail(&page->lru, list);
> +	}
> +}
> +

-- 
Mike Kravetz


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v9 08/11] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
  2020-12-16 22:10       ` Oscar Salvador
@ 2020-12-17  2:45         ` Muchun Song
  0 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-12-17  2:45 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Thu, Dec 17, 2020 at 6:10 AM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Thu, Dec 17, 2020 at 12:04:11AM +0800, Muchun Song wrote:
> > On Wed, Dec 16, 2020 at 10:40 PM Oscar Salvador <osalvador@suse.de> wrote:
> > >
> > > On Sun, Dec 13, 2020 at 11:45:31PM +0800, Muchun Song wrote:
> > > > Add a kernel parameter hugetlb_free_vmemmap to disable the feature of
> > > > freeing unused vmemmap pages associated with each hugetlb page on boot.
> > > I guess this should read "to enable the feature"?
> > > AFAICS, it is disabled by default.

Hi Oscar,

Yeah, you are right. It is disabled by default. I forgot to update the
commit log. Thanks a lot for pointing this out.

>
> It still would be great to have an answer for that.
>
> Thanks
>
>
> > > >  Documentation/admin-guide/kernel-parameters.txt |  9 +++++++++
> > > >  Documentation/admin-guide/mm/hugetlbpage.rst    |  3 +++
> > > >  arch/x86/mm/init_64.c                           |  8 ++++++--
> > > >  include/linux/hugetlb.h                         | 19 +++++++++++++++++++
> > > >  mm/hugetlb_vmemmap.c                            | 16 ++++++++++++++++
> > > >  5 files changed, 53 insertions(+), 2 deletions(-)
> > > >
> > > > diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> > > > index 3ae25630a223..9e6854f21d55 100644
> > > > --- a/Documentation/admin-guide/kernel-parameters.txt
> > > > +++ b/Documentation/admin-guide/kernel-parameters.txt
> > > > @@ -1551,6 +1551,15 @@
> > > >                       Documentation/admin-guide/mm/hugetlbpage.rst.
> > > >                       Format: size[KMG]
> > > >
> > > > +     hugetlb_free_vmemmap=
> > > > +                     [KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
> > > > +                     this controls freeing unused vmemmap pages associated
> > > > +                     with each HugeTLB page.
> > > > +                     Format: { on | off (default) }
> > > > +
> > > > +                     on:  enable the feature
> > > > +                     off: disable the feature
> > > > +
> > > >       hung_task_panic=
> > > >                       [KNL] Should the hung task detector generate panics.
> > > >                       Format: 0 | 1
> > > > diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
> > > > index f7b1c7462991..3a23c2377acc 100644
> > > > --- a/Documentation/admin-guide/mm/hugetlbpage.rst
> > > > +++ b/Documentation/admin-guide/mm/hugetlbpage.rst
> > > > @@ -145,6 +145,9 @@ default_hugepagesz
> > > >
> > > >       will all result in 256 2M huge pages being allocated.  Valid default
> > > >       huge page size is architecture dependent.
> > > > +hugetlb_free_vmemmap
> > > > +     When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
> > > > +     unused vmemmap pages associated with each HugeTLB page.
> > > >
> > > >  When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
> > > >  indicates the current number of pre-allocated huge pages of the default size.
> > > > diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> > > > index 0435bee2e172..1bce5f20e6ca 100644
> > > > --- a/arch/x86/mm/init_64.c
> > > > +++ b/arch/x86/mm/init_64.c
> > > > @@ -34,6 +34,7 @@
> > > >  #include <linux/gfp.h>
> > > >  #include <linux/kcore.h>
> > > >  #include <linux/bootmem_info.h>
> > > > +#include <linux/hugetlb.h>
> > > >
> > > >  #include <asm/processor.h>
> > > >  #include <asm/bios_ebda.h>
> > > > @@ -1557,7 +1558,8 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
> > > >  {
> > > >       int err;
> > > >
> > > > -     if (end - start < PAGES_PER_SECTION * sizeof(struct page))
> > > > +     if (is_hugetlb_free_vmemmap_enabled() ||
> > > > +         end - start < PAGES_PER_SECTION * sizeof(struct page))
> > > >               err = vmemmap_populate_basepages(start, end, node, NULL);
> > > >       else if (boot_cpu_has(X86_FEATURE_PSE))
> > > >               err = vmemmap_populate_hugepages(start, end, node, altmap);
> > > > @@ -1585,6 +1587,8 @@ void register_page_bootmem_memmap(unsigned long section_nr,
> > > >       pmd_t *pmd;
> > > >       unsigned int nr_pmd_pages;
> > > >       struct page *page;
> > > > +     bool base_mapping = !boot_cpu_has(X86_FEATURE_PSE) ||
> > > > +                         is_hugetlb_free_vmemmap_enabled();
> > > >
> > > >       for (; addr < end; addr = next) {
> > > >               pte_t *pte = NULL;
> > > > @@ -1610,7 +1614,7 @@ void register_page_bootmem_memmap(unsigned long section_nr,
> > > >               }
> > > >               get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);
> > > >
> > > > -             if (!boot_cpu_has(X86_FEATURE_PSE)) {
> > > > +             if (base_mapping) {
> > > >                       next = (addr + PAGE_SIZE) & PAGE_MASK;
> > > >                       pmd = pmd_offset(pud, addr);
> > > >                       if (pmd_none(*pmd))
> > > > diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> > > > index ebca2ef02212..7f47f0eeca3b 100644
> > > > --- a/include/linux/hugetlb.h
> > > > +++ b/include/linux/hugetlb.h
> > > > @@ -770,6 +770,20 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
> > > >  }
> > > >  #endif
> > > >
> > > > +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> > > > +extern bool hugetlb_free_vmemmap_enabled;
> > > > +
> > > > +static inline bool is_hugetlb_free_vmemmap_enabled(void)
> > > > +{
> > > > +     return hugetlb_free_vmemmap_enabled;
> > > > +}
> > > > +#else
> > > > +static inline bool is_hugetlb_free_vmemmap_enabled(void)
> > > > +{
> > > > +     return false;
> > > > +}
> > > > +#endif
> > > > +
> > > >  #else        /* CONFIG_HUGETLB_PAGE */
> > > >  struct hstate {};
> > > >
> > > > @@ -923,6 +937,11 @@ static inline void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr
> > > >                                       pte_t *ptep, pte_t pte, unsigned long sz)
> > > >  {
> > > >  }
> > > > +
> > > > +static inline bool is_hugetlb_free_vmemmap_enabled(void)
> > > > +{
> > > > +     return false;
> > > > +}
> > > >  #endif       /* CONFIG_HUGETLB_PAGE */
> > > >
> > > >  static inline spinlock_t *huge_pte_lock(struct hstate *h,
> > > > diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> > > > index 02201c2e3dfa..64ad929cac61 100644
> > > > --- a/mm/hugetlb_vmemmap.c
> > > > +++ b/mm/hugetlb_vmemmap.c
> > > > @@ -180,6 +180,22 @@
> > > >  #define RESERVE_VMEMMAP_NR           2U
> > > >  #define RESERVE_VMEMMAP_SIZE         (RESERVE_VMEMMAP_NR << PAGE_SHIFT)
> > > >
> > > > +bool hugetlb_free_vmemmap_enabled;
> > > > +
> > > > +static int __init early_hugetlb_free_vmemmap_param(char *buf)
> > > > +{
> > > > +     if (!buf)
> > > > +             return -EINVAL;
> > > > +
> > > > +     if (!strcmp(buf, "on"))
> > > > +             hugetlb_free_vmemmap_enabled = true;
> > > > +     else if (strcmp(buf, "off"))
> > > > +             return -EINVAL;
> > > > +
> > > > +     return 0;
> > > > +}
> > > > +early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
> > > > +
> > > >  static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
> > > >  {
> > > >       return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
> > > > --
> > > > 2.11.0
> > > >
> > >
> > > --
> > > Oscar Salvador
> > > SUSE L3
> >
> >
> >
> > --
> > Yours,
> > Muchun
>
> --
> Oscar Salvador
> SUSE L3



-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v9 04/11] mm/hugetlb: Defer freeing of HugeTLB pages
  2020-12-16 23:48   ` Mike Kravetz
@ 2020-12-17  3:19     ` Muchun Song
  0 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-12-17  3:19 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: Jonathan Corbet, Thomas Gleixner, mingo, bp, x86, hpa,
	dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton, paulmck,
	mchehab+huawei, pawan.kumar.gupta, Randy Dunlap, oneukum,
	anshuman.khandual, jroedel, Mina Almasry, David Rientjes,
	Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Thu, Dec 17, 2020 at 7:48 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 12/13/20 7:45 AM, Muchun Song wrote:
> > In the subsequent patch, we will allocate the vmemmap pages when free
> > HugeTLB pages. But update_and_free_page() is called from a non-task
> > context(and hold hugetlb_lock), so we can defer the actual freeing in
> > a workqueue to prevent use GFP_ATOMIC to allocate the vmemmap pages.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>
> It is unfortunate we need to add this complexitty, but I can not think
> of another way.  One small comment (no required change) below.
>
> Reviewed-by: Mike Kravetz <mike.kravetz@oracle.com>

Thank you.

>
> > ---
> >  mm/hugetlb.c         | 77 ++++++++++++++++++++++++++++++++++++++++++++++++----
> >  mm/hugetlb_vmemmap.c | 12 --------
> >  mm/hugetlb_vmemmap.h | 17 ++++++++++++
> >  3 files changed, 88 insertions(+), 18 deletions(-)
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 140135fc8113..0ff9b90e524f 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1292,15 +1292,76 @@ static inline void destroy_compound_gigantic_page(struct page *page,
> >                                               unsigned int order) { }
> >  #endif
> >
> > -static void update_and_free_page(struct hstate *h, struct page *page)
> > +static void __free_hugepage(struct hstate *h, struct page *page);
> > +
> > +/*
> > + * As update_and_free_page() is be called from a non-task context(and hold
> > + * hugetlb_lock), we can defer the actual freeing in a workqueue to prevent
> > + * use GFP_ATOMIC to allocate a lot of vmemmap pages.
> > + *
> > + * update_hpage_vmemmap_workfn() locklessly retrieves the linked list of
> > + * pages to be freed and frees them one-by-one. As the page->mapping pointer
> > + * is going to be cleared in update_hpage_vmemmap_workfn() anyway, it is
> > + * reused as the llist_node structure of a lockless linked list of huge
> > + * pages to be freed.
> > + */
> > +static LLIST_HEAD(hpage_update_freelist);
> > +
> > +static void update_hpage_vmemmap_workfn(struct work_struct *work)
> >  {
> > -     int i;
> > +     struct llist_node *node;
> > +     struct page *page;
> > +
> > +     node = llist_del_all(&hpage_update_freelist);
> >
> > +     while (node) {
> > +             page = container_of((struct address_space **)node,
> > +                                  struct page, mapping);
> > +             node = node->next;
> > +             page->mapping = NULL;
> > +             __free_hugepage(page_hstate(page), page);
> > +
> > +             cond_resched();
> > +     }
> > +}
> > +static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn);
> > +
> > +static inline void __update_and_free_page(struct hstate *h, struct page *page)
> > +{
> > +     /* No need to allocate vmemmap pages */
> > +     if (!free_vmemmap_pages_per_hpage(h)) {
> > +             __free_hugepage(h, page);
> > +             return;
> > +     }
> > +
> > +     /*
> > +      * Defer freeing to avoid using GFP_ATOMIC to allocate vmemmap
> > +      * pages.
> > +      *
> > +      * Only call schedule_work() if hpage_update_freelist is previously
> > +      * empty. Otherwise, schedule_work() had been called but the workfn
> > +      * hasn't retrieved the list yet.
> > +      */
> > +     if (llist_add((struct llist_node *)&page->mapping,
> > +                   &hpage_update_freelist))
> > +             schedule_work(&hpage_update_work);
> > +}
> > +
> > +static void update_and_free_page(struct hstate *h, struct page *page)
> > +{
> >       if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
> >               return;
> >
> >       h->nr_huge_pages--;
> >       h->nr_huge_pages_node[page_to_nid(page)]--;
> > +
> > +     __update_and_free_page(h, page);
> > +}
> > +
> > +static void __free_hugepage(struct hstate *h, struct page *page)
> > +{
> > +     int i;
> > +
>
> Can we add a comment here saying that this is where the call to allocate
> vmemmmap pages will be inserted in a later patch.  Such a comment would
> help a bit to understand the restructuring of the code.

OK. Will do. Thanks.

>
> --
> Mike Kravetz
>
> >       for (i = 0; i < pages_per_huge_page(h); i++) {
> >               page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
> >                               1 << PG_referenced | 1 << PG_dirty |
> > @@ -1313,13 +1374,17 @@ static void update_and_free_page(struct hstate *h, struct page *page)
> >       set_page_refcounted(page);
> >       if (hstate_is_gigantic(h)) {
> >               /*
> > -              * Temporarily drop the hugetlb_lock, because
> > -              * we might block in free_gigantic_page().
> > +              * Temporarily drop the hugetlb_lock only when this type of
> > +              * HugeTLB page does not support vmemmap optimization (which
> > +              * contex do not hold the hugetlb_lock), because we might block
> > +              * in free_gigantic_page().
> >                */
> > -             spin_unlock(&hugetlb_lock);
> > +             if (!free_vmemmap_pages_per_hpage(h))
> > +                     spin_unlock(&hugetlb_lock);
> >               destroy_compound_gigantic_page(page, huge_page_order(h));
> >               free_gigantic_page(page, huge_page_order(h));
> > -             spin_lock(&hugetlb_lock);
> > +             if (!free_vmemmap_pages_per_hpage(h))
> > +                     spin_lock(&hugetlb_lock);
> >       } else {
> >               __free_pages(page, huge_page_order(h));
> >       }



-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v9 05/11] mm/hugetlb: Allocate the vmemmap pages associated with each HugeTLB page
  2020-12-17  1:17   ` Mike Kravetz
@ 2020-12-17  3:22     ` Muchun Song
  0 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-12-17  3:22 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: Jonathan Corbet, Thomas Gleixner, mingo, bp, x86, hpa,
	dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton, paulmck,
	mchehab+huawei, pawan.kumar.gupta, Randy Dunlap, oneukum,
	anshuman.khandual, jroedel, Mina Almasry, David Rientjes,
	Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Thu, Dec 17, 2020 at 9:17 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 12/13/20 7:45 AM, Muchun Song wrote:
> > When we free a HugeTLB page to the buddy allocator, we should allocate the
> > vmemmap pages associated with it. We can do that in the __free_hugepage()
> > before freeing it to buddy.
>
> ...
>
> > diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> > index 78c527617e8d..ffcf092c92ed 100644
> > --- a/mm/sparse-vmemmap.c
> > +++ b/mm/sparse-vmemmap.c
> > @@ -29,6 +29,7 @@
> >  #include <linux/sched.h>
> >  #include <linux/pgtable.h>
> >  #include <linux/bootmem_info.h>
> > +#include <linux/delay.h>
> >
> >  #include <asm/dma.h>
> >  #include <asm/pgalloc.h>
> > @@ -39,7 +40,8 @@
> >   *
> >   * @rmap_pte:                called for each non-empty PTE (lowest-level) entry.
> >   * @reuse:           the page which is reused for the tail vmemmap pages.
> > - * @vmemmap_pages:   the list head of the vmemmap pages that can be freed.
> > + * @vmemmap_pages:   the list head of the vmemmap pages that can be freed
> > + *                   or is mapped from.
> >   */
> >  struct vmemmap_rmap_walk {
> >       void (*rmap_pte)(pte_t *pte, unsigned long addr,
> > @@ -54,6 +56,9 @@ struct vmemmap_rmap_walk {
> >   */
> >  #define VMEMMAP_TAIL_PAGE_REUSE              -1
> >
> > +/* The gfp mask of allocating vmemmap page */
> > +#define GFP_VMEMMAP_PAGE     (GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_NOWARN)
> > +
> >  static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
> >                             unsigned long end, struct vmemmap_rmap_walk *walk)
> >  {
> > @@ -200,6 +205,68 @@ void vmemmap_remap_reuse(unsigned long start, unsigned long size)
> >       free_vmemmap_page_list(&vmemmap_pages);
> >  }
> >
> > +static void vmemmap_remap_restore_pte(pte_t *pte, unsigned long addr,
> > +                                   struct vmemmap_rmap_walk *walk)
> > +{
> > +     pgprot_t pgprot = PAGE_KERNEL;
> > +     struct page *page;
> > +     void *to;
> > +
> > +     BUG_ON(pte_page(*pte) != walk->reuse);
> > +
> > +     page = list_first_entry(walk->vmemmap_pages, struct page, lru);
> > +     list_del(&page->lru);
> > +     to = page_to_virt(page);
> > +     copy_page(to, page_to_virt(walk->reuse));
> > +
> > +     set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
> > +}
> > +
> > +static void alloc_vmemmap_page_list(struct list_head *list,
> > +                                 unsigned long nr_pages)
> > +{
> > +     while (nr_pages--) {
> > +             struct page *page;
> > +
> > +retry:
> > +             page = alloc_page(GFP_VMEMMAP_PAGE);
>
> Should we try (or require) the vmemmap page be on the same node as the
> pages they describe?  I imagine performance would be impacted if a
> struct page and the page it describes are on different numa nodes.

Yeah, it is a good idea. I also think that we should do this. I will do that in
the next version. Thanks.
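
A minimal sketch of what that could look like (illustrative only, not the
actual next version; the nid would presumably come from page_to_nid() of
the HugeTLB page being freed):

static void alloc_vmemmap_page_list(struct list_head *list, int nid,
				    unsigned long nr_pages)
{
	while (nr_pages--) {
		struct page *page;

retry:
		/* Prefer the given node, fall back to others on pressure. */
		page = alloc_pages_node(nid, GFP_VMEMMAP_PAGE, 0);
		if (unlikely(!page)) {
			msleep(100);
			/* Retry until it succeeds, as in the posted patch. */
			goto retry;
		}
		list_add_tail(&page->lru, list);
	}
}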

>
> > +             if (unlikely(!page)) {
> > +                     msleep(100);
> > +                     /*
> > +                      * We should retry infinitely, because we cannot
> > +                      * handle allocation failures. Once we allocate
> > +                      * vmemmap pages successfully, then we can free
> > +                      * a HugeTLB page.
> > +                      */
> > +                     goto retry;
> > +             }
> > +             list_add_tail(&page->lru, list);
> > +     }
> > +}
> > +
>
> --
> Mike Kravetz



-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v9 03/11] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page
  2020-12-16 22:08   ` Mike Kravetz
  2020-12-16 22:25     ` Oscar Salvador
@ 2020-12-17  4:06     ` Muchun Song
  1 sibling, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-12-17  4:06 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: Jonathan Corbet, Thomas Gleixner, mingo, bp, x86, hpa,
	dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton, paulmck,
	mchehab+huawei, pawan.kumar.gupta, Randy Dunlap, oneukum,
	anshuman.khandual, jroedel, Mina Almasry, David Rientjes,
	Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Thu, Dec 17, 2020 at 6:08 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 12/13/20 7:45 AM, Muchun Song wrote:
> > Every HugeTLB has more than one struct page structure. We __know__ that
> > we only use the first 4(HUGETLB_CGROUP_MIN_ORDER) struct page structures
> > to store metadata associated with each HugeTLB.
> >
> > There are a lot of struct page structures associated with each HugeTLB
> > page. For tail pages, the value of compound_head is the same. So we can
> > reuse first page of tail page structures. We map the virtual addresses
> > of the remaining pages of tail page structures to the first tail page
> > struct, and then free these page frames. Therefore, we need to reserve
> > two pages as vmemmap areas.
> >
> > When we allocate a HugeTLB page from the buddy, we can free some vmemmap
> > pages associated with each HugeTLB page. It is more appropriate to do it
> > in the prep_new_huge_page().
> >
> > The free_vmemmap_pages_per_hpage(), which indicates how many vmemmap
> > pages associated with a HugeTLB page can be freed, returns zero for
> > now, which means the feature is disabled. We will enable it once all
> > the infrastructure is there.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> >  include/linux/bootmem_info.h |  27 +++++-
> >  include/linux/mm.h           |   2 +
> >  mm/Makefile                  |   1 +
> >  mm/hugetlb.c                 |   3 +
> >  mm/hugetlb_vmemmap.c         | 209 +++++++++++++++++++++++++++++++++++++++++++
> >  mm/hugetlb_vmemmap.h         |  20 +++++
> >  mm/sparse-vmemmap.c          | 170 +++++++++++++++++++++++++++++++++++
> >  7 files changed, 431 insertions(+), 1 deletion(-)
> >  create mode 100644 mm/hugetlb_vmemmap.c
> >  create mode 100644 mm/hugetlb_vmemmap.h
>
> > diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> > index 16183d85a7d5..78c527617e8d 100644
> > --- a/mm/sparse-vmemmap.c
> > +++ b/mm/sparse-vmemmap.c
> > @@ -27,8 +27,178 @@
> >  #include <linux/spinlock.h>
> >  #include <linux/vmalloc.h>
> >  #include <linux/sched.h>
> > +#include <linux/pgtable.h>
> > +#include <linux/bootmem_info.h>
> > +
> >  #include <asm/dma.h>
> >  #include <asm/pgalloc.h>
> > +#include <asm/tlbflush.h>
> > +
> > +/*
> > + * vmemmap_rmap_walk - walk vmemmap page table
>
> I am not sure if 'rmap' should be part of these names.  rmap today is mostly
> about reverse mapping lookup.  Did you use rmap for 'remap', or because this
> code is patterned after the page table walking rmap code?  Just think the
> naming could cause some confusion.

Yeah. I should use "remap" to avoid confusion.

>
> > + *
> > + * @rmap_pte:                called for each non-empty PTE (lowest-level) entry.
> > + * @reuse:           the page which is reused for the tail vmemmap pages.
> > + * @vmemmap_pages:   the list head of the vmemmap pages that can be freed.
> > + */
> > +struct vmemmap_rmap_walk {
> > +     void (*rmap_pte)(pte_t *pte, unsigned long addr,
> > +                      struct vmemmap_rmap_walk *walk);
> > +     struct page *reuse;
> > +     struct list_head *vmemmap_pages;
> > +};
> > +
> > +/*
> > + * The index of the pte page table which is mapped to the tail of the
> > + * vmemmap page.
> > + */
> > +#define VMEMMAP_TAIL_PAGE_REUSE              -1
>
> That is the index/offset from the range to be remapped.  See comments below.

You are right. I need to update the comment.

>
> > +
> > +static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
> > +                           unsigned long end, struct vmemmap_rmap_walk *walk)
> > +{
> > +     pte_t *pte;
> > +
> > +     pte = pte_offset_kernel(pmd, addr);
> > +     do {
> > +             BUG_ON(pte_none(*pte));
> > +
> > +             if (!walk->reuse)
> > +                     walk->reuse = pte_page(pte[VMEMMAP_TAIL_PAGE_REUSE]);
>
> It may be just me, but I don't like the pte[-1] here.  It certainly does work
> as designed because we want to remap all pages in the range to the page before
> the range (at offset -1).  But, we do not really validate this 'reuse' page.
> There is the BUG_ON(pte_none(*pte)) as a sanity check, but we do nothing similar
> for pte[-1].  Based on the usage for HugeTLB pages, we can be confident that
> pte[-1] is actually a pte.  In discussions with Oscar, you mentioned another
> possible use for these routines.

Yeah, we should add a BUG_ON for pte[-1].

>
> Don't change anything based on my opinion only.  I would like to see what
> others think as well.
>
> > +
> > +             if (walk->rmap_pte)
> > +                     walk->rmap_pte(pte, addr, walk);
> > +     } while (pte++, addr += PAGE_SIZE, addr != end);
> > +}
> > +
> > +static void vmemmap_pmd_range(pud_t *pud, unsigned long addr,
> > +                           unsigned long end, struct vmemmap_rmap_walk *walk)
> > +{
> > +     pmd_t *pmd;
> > +     unsigned long next;
> > +
> > +     pmd = pmd_offset(pud, addr);
> > +     do {
> > +             BUG_ON(pmd_none(*pmd));
> > +
> > +             next = pmd_addr_end(addr, end);
> > +             vmemmap_pte_range(pmd, addr, next, walk);
> > +     } while (pmd++, addr = next, addr != end);
> > +}
> > +
> > +static void vmemmap_pud_range(p4d_t *p4d, unsigned long addr,
> > +                           unsigned long end, struct vmemmap_rmap_walk *walk)
> > +{
> > +     pud_t *pud;
> > +     unsigned long next;
> > +
> > +     pud = pud_offset(p4d, addr);
> > +     do {
> > +             BUG_ON(pud_none(*pud));
> > +
> > +             next = pud_addr_end(addr, end);
> > +             vmemmap_pmd_range(pud, addr, next, walk);
> > +     } while (pud++, addr = next, addr != end);
> > +}
> > +
> > +static void vmemmap_p4d_range(pgd_t *pgd, unsigned long addr,
> > +                           unsigned long end, struct vmemmap_rmap_walk *walk)
> > +{
> > +     p4d_t *p4d;
> > +     unsigned long next;
> > +
> > +     p4d = p4d_offset(pgd, addr);
> > +     do {
> > +             BUG_ON(p4d_none(*p4d));
> > +
> > +             next = p4d_addr_end(addr, end);
> > +             vmemmap_pud_range(p4d, addr, next, walk);
> > +     } while (p4d++, addr = next, addr != end);
> > +}
> > +
> > +static void vmemmap_remap_range(unsigned long start, unsigned long end,
> > +                             struct vmemmap_rmap_walk *walk)
> > +{
> > +     unsigned long addr = start;
> > +     unsigned long next;
> > +     pgd_t *pgd;
> > +
> > +     VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
> > +     VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
> > +
> > +     pgd = pgd_offset_k(addr);
> > +     do {
> > +             BUG_ON(pgd_none(*pgd));
> > +
> > +             next = pgd_addr_end(addr, end);
> > +             vmemmap_p4d_range(pgd, addr, next, walk);
> > +     } while (pgd++, addr = next, addr != end);
> > +
> > +     flush_tlb_kernel_range(start, end);
> > +}
> > +
> > +/*
> > + * Free a vmemmap page. A vmemmap page can be allocated from the memblock
> > + * allocator or buddy allocator. If the PG_reserved flag is set, it means
> > + * that it allocated from the memblock allocator, just free it via the
> > + * free_bootmem_page(). Otherwise, use __free_page().
> > + */
> > +static inline void free_vmemmap_page(struct page *page)
> > +{
> > +     if (PageReserved(page))
> > +             free_bootmem_page(page);
> > +     else
> > +             __free_page(page);
> > +}
> > +
> > +/* Free a list of the vmemmap pages */
> > +static void free_vmemmap_page_list(struct list_head *list)
> > +{
> > +     struct page *page, *next;
> > +
> > +     list_for_each_entry_safe(page, next, list, lru) {
> > +             list_del(&page->lru);
> > +             free_vmemmap_page(page);
> > +     }
> > +}
> > +
> > +static void vmemmap_remap_reuse_pte(pte_t *pte, unsigned long addr,
> > +                                 struct vmemmap_rmap_walk *walk)
>
> See vmemmap_remap_reuse rename suggestion below.  I would suggest reuse
> be dropped from the name here and just be called 'vmemmap_remap_pte'.

OK. Will do that.

>
> > +{
> > +     /*
> > +      * Make the tail pages are mapped with read-only to catch
> > +      * illegal write operation to the tail pages.
> > +      */
> > +     pgprot_t pgprot = PAGE_KERNEL_RO;
> > +     pte_t entry = mk_pte(walk->reuse, pgprot);
> > +     struct page *page;
> > +
> > +     page = pte_page(*pte);
> > +     list_add(&page->lru, walk->vmemmap_pages);
> > +
> > +     set_pte_at(&init_mm, addr, pte, entry);
> > +}
> > +
> > +/**
> > + * vmemmap_remap_reuse - remap the vmemmap virtual address range
>
> My original commnet here was:
>
> Not sure if the word '_reuse' is best in this function name.  To me, the name
> implies this routine will reuse vmemmap pages.  Perhaps, it makes more sense
> to rename as 'vmemmap_remap_free'?  It will first remap, then free vmemmap.

vmemmap_remap_free also sounds like a good name to me.
In the next patch, we can use vmemmap_remap_alloc for
allocating vmemmap pages. Looks very symmetrical. :-)
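
For instance, the pair could end up looking like this (prototypes only,
just to show the naming symmetry; not taken from a posted patch):

void vmemmap_remap_free(unsigned long start, unsigned long size);
void vmemmap_remap_alloc(unsigned long start, unsigned long size);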

Thanks Mike.

>
> But, then I looked at the code above and perhaps you are using the word
> '_reuse' because the page before the range will be reused?  The vmemmap

Yeah. You are right.

> page at offset VMEMMAP_TAIL_PAGE_REUSE (-1).
>
> > + *                       [start, start + size) to the page which
> > + *                       [start - PAGE_SIZE, start) is mapped.
> > + * @start:   start address of the vmemmap virtual address range
> > + * @end:     size of the vmemmap virtual address range
>
>       ^^^^ should be @size:

Oh, Yeah. Forgot to update it. Thanks.

>
> --
> Mike Kravetz
>
> > + */
> > +void vmemmap_remap_reuse(unsigned long start, unsigned long size)
> > +{
> > +     unsigned long end = start + size;
> > +     LIST_HEAD(vmemmap_pages);
> > +
> > +     struct vmemmap_rmap_walk walk = {
> > +             .rmap_pte       = vmemmap_remap_reuse_pte,
> > +             .vmemmap_pages  = &vmemmap_pages,
> > +     };
> > +
> > +     vmemmap_remap_range(start, end, &walk);
> > +     free_vmemmap_page_list(&vmemmap_pages);
> > +}
> >
> >  /*
> >   * Allocate a block of memory to be used to back the virtual memory map
> >



-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v9 03/11] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page
  2020-12-16 22:49       ` Mike Kravetz
@ 2020-12-17  6:54         ` Muchun Song
  2020-12-17  9:05           ` Muchun Song
  0 siblings, 1 reply; 43+ messages in thread
From: Muchun Song @ 2020-12-17  6:54 UTC (permalink / raw)
  To: Mike Kravetz, Oscar Salvador
  Cc: Jonathan Corbet, Thomas Gleixner, mingo, bp, x86, hpa,
	dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton, paulmck,
	mchehab+huawei, pawan.kumar.gupta, Randy Dunlap, oneukum,
	anshuman.khandual, jroedel, Mina Almasry, David Rientjes,
	Matthew Wilcox, Michal Hocko, Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Thu, Dec 17, 2020 at 6:52 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 12/16/20 2:25 PM, Oscar Salvador wrote:
> > On Wed, Dec 16, 2020 at 02:08:30PM -0800, Mike Kravetz wrote:
> >>> + * vmemmap_rmap_walk - walk vmemmap page table
> >>> +
> >>> +static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
> >>> +                         unsigned long end, struct vmemmap_rmap_walk *walk)
> >>> +{
> >>> +   pte_t *pte;
> >>> +
> >>> +   pte = pte_offset_kernel(pmd, addr);
> >>> +   do {
> >>> +           BUG_ON(pte_none(*pte));
> >>> +
> >>> +           if (!walk->reuse)
> >>> +                   walk->reuse = pte_page(pte[VMEMMAP_TAIL_PAGE_REUSE]);
> >>
> >> It may be just me, but I don't like the pte[-1] here.  It certainly does work
> >> as designed because we want to remap all pages in the range to the page before
> >> the range (at offset -1).  But, we do not really validate this 'reuse' page.
> >> There is the BUG_ON(pte_none(*pte)) as a sanity check, but we do nothing similar
> >> for pte[-1].  Based on the usage for HugeTLB pages, we can be confident that
> >> pte[-1] is actually a pte.  In discussions with Oscar, you mentioned another
> >> possible use for these routines.
> >
> > Without giving it much of a thought, I guess we could duplicate the
> > BUG_ON for the pte outside the loop, and add a new one for pte[-1].
> > Also, since walk->reuse seems to not change once it is set, we can take
> > it outside the loop? e.g:
> >
> >       pte *pte;
> >
> >       pte = pte_offset_kernel(pmd, addr);
> >       BUG_ON(pte_none(*pte));
> >       BUG_ON(pte_none(pte[VMEMMAP_TAIL_PAGE_REUSE]));
> >       walk->reuse = pte_page(pte[VMEMMAP_TAIL_PAGE_REUSE]);
> >       do {
> >               ....
> >       } while...
> >
> > Or I am not sure whether we want to keep it inside the loop in case
> > future cases change walk->reuse during the operation.
> > But to be honest, I do not think it is realistic of all future possible
> > uses of this, so I would rather keep it simple for now.
>
> I was thinking about possibly passing the 'reuse' address as another parameter
> to vmemmap_remap_reuse().  We could add this addr to the vmemmap_rmap_walk
> struct and set walk->reuse when we get to the pte for that address.  Of
> course this would imply that the addr would need to be part of the range.

Maybe adding another parameter is unnecessary.  How about doing
this in vmemmap_remap_reuse()?

The 'reuse' address is just start + PAGE_SIZE.

void vmemmap_remap_free(unsigned long start, unsigned long size)
{
         unsigned long end = start + size;
         unsigned long reuse_addr = start + PAGE_SIZE;
         LIST_HEAD(vmemmap_pages);

         struct vmemmap_remap_walk walk = {
                  .remap_pte = vmemmap_remap_pte,
                  .vmemmap_pages = &vmemmap_pages,
                  .reuse_addr = reuse_addr,
         };

}

>
> Ideally, we would walk the page table to get to the reuse page.  My concern
> was not explicitly about adding the BUG_ON.  In more general use, *pte could
> be the first entry on a pte page.  And, then pte[-1] may not even be a pte.
>
> Again, I don't think this matters for the current HugeTLB use case.  Just a
> little concerned if code is put to use for other purposes.
> --
> Mike Kravetz



-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v9 09/11] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  2020-12-16 13:43   ` Oscar Salvador
  2020-12-16 13:56     ` [External] " Muchun Song
@ 2020-12-17  8:34     ` Muchun Song
  1 sibling, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-12-17  8:34 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Wed, Dec 16, 2020 at 9:44 PM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Sun, Dec 13, 2020 at 11:45:32PM +0800, Muchun Song wrote:
> > All the infrastructure is ready, so we introduce nr_free_vmemmap_pages
> > field in the hstate to indicate how many vmemmap pages associated with
> > a HugeTLB page that we can free to buddy allocator. And initialize it
> "can be freed to buddy allocator"
>
> > in the hugetlb_vmemmap_init(). This patch is actual enablement of the
> > feature.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
>
> With below nits addressed you can add:
>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
>
> >  static int __init early_hugetlb_free_vmemmap_param(char *buf)
> >  {
> > +     /* We cannot optimize if a "struct page" crosses page boundaries. */
> > +     if (!is_power_of_2(sizeof(struct page)))
> > +             return 0;
> > +
>
> I wonder if we should report a warning in case someone wants to enable this
> feature and stuct page size it not power of 2.
> In case someone wonders why it does not work for him/her.

Agree. I think that we should add a warning message here.
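
For example, something along these lines (a sketch only; the exact message
is up for discussion):

static int __init early_hugetlb_free_vmemmap_param(char *buf)
{
	/* We cannot optimize if a "struct page" crosses page boundaries. */
	if (!is_power_of_2(sizeof(struct page))) {
		pr_warn("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n");
		return 0;
	}

	if (!buf)
		return -EINVAL;

	if (!strcmp(buf, "on"))
		hugetlb_free_vmemmap_enabled = true;
	else if (strcmp(buf, "off"))
		return -EINVAL;

	return 0;
}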

>
> > +void __init hugetlb_vmemmap_init(struct hstate *h)
> > +{
> > +     unsigned int nr_pages = pages_per_huge_page(h);
> > +     unsigned int vmemmap_pages;
> > +
> > +     if (!hugetlb_free_vmemmap_enabled)
> > +             return;
> > +
> > +     vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
> > +     /*
> > +      * The head page and the first tail page are not to be freed to buddy
> > +      * system, the others page will map to the first tail page. So there
> > +      * are the remaining pages that can be freed.
> "the other pages will map to the first tail page, so they can be freed."
> > +      *
> > +      * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? It is true
> > +      * on some architectures (e.g. aarch64). See Documentation/arm64/
> > +      * hugetlbpage.rst for more details.
> > +      */
> > +     if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR))
> > +             h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
> > +
> > +     pr_info("can free %d vmemmap pages for %s\n", h->nr_free_vmemmap_pages,
> > +             h->name);
>
> Maybe specify this is hugetlb code:
>
> pr_info("%s: blabla", __func__, ...)
> or
> pr_info("hugetlb: blalala", ...);
>
> although I am not sure whether we need that at all, or maybe just use
> pr_debug().
>
> --
> Oscar Salvador
> SUSE L3



-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v9 03/11] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page
  2020-12-17  6:54         ` [External] " Muchun Song
@ 2020-12-17  9:05           ` Muchun Song
  0 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-12-17  9:05 UTC (permalink / raw)
  To: Mike Kravetz, Oscar Salvador
  Cc: Jonathan Corbet, Thomas Gleixner, mingo, bp, x86, hpa,
	dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton, paulmck,
	mchehab+huawei, pawan.kumar.gupta, Randy Dunlap, oneukum,
	anshuman.khandual, jroedel, Mina Almasry, David Rientjes,
	Matthew Wilcox, Michal Hocko, Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Thu, Dec 17, 2020 at 2:54 PM Muchun Song <songmuchun@bytedance.com> wrote:
>
> On Thu, Dec 17, 2020 at 6:52 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
> >
> > On 12/16/20 2:25 PM, Oscar Salvador wrote:
> > > On Wed, Dec 16, 2020 at 02:08:30PM -0800, Mike Kravetz wrote:
> > >>> + * vmemmap_rmap_walk - walk vmemmap page table
> > >>> +
> > >>> +static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
> > >>> +                         unsigned long end, struct vmemmap_rmap_walk *walk)
> > >>> +{
> > >>> +   pte_t *pte;
> > >>> +
> > >>> +   pte = pte_offset_kernel(pmd, addr);
> > >>> +   do {
> > >>> +           BUG_ON(pte_none(*pte));
> > >>> +
> > >>> +           if (!walk->reuse)
> > >>> +                   walk->reuse = pte_page(pte[VMEMMAP_TAIL_PAGE_REUSE]);
> > >>
> > >> It may be just me, but I don't like the pte[-1] here.  It certainly does work
> > >> as designed because we want to remap all pages in the range to the page before
> > >> the range (at offset -1).  But, we do not really validate this 'reuse' page.
> > >> There is the BUG_ON(pte_none(*pte)) as a sanity check, but we do nothing similar
> > >> for pte[-1].  Based on the usage for HugeTLB pages, we can be confident that
> > >> pte[-1] is actually a pte.  In discussions with Oscar, you mentioned another
> > >> possible use for these routines.
> > >
> > > Without giving it much of a thought, I guess we could duplicate the
> > > BUG_ON for the pte outside the loop, and add a new one for pte[-1].
> > > Also, since walk->reuse seems to not change once it is set, we can take
> > > it outside the loop? e.g:
> > >
> > >       pte *pte;
> > >
> > >       pte = pte_offset_kernel(pmd, addr);
> > >       BUG_ON(pte_none(*pte));
> > >       BUG_ON(pte_none(pte[VMEMMAP_TAIL_PAGE_REUSE]));
> > >       walk->reuse = pte_page(pte[VMEMMAP_TAIL_PAGE_REUSE]);
> > >       do {
> > >               ....
> > >       } while...
> > >
> > > Or I am not sure whether we want to keep it inside the loop in case
> > > future cases change walk->reuse during the operation.
> > > But to be honest, I do not think it is realistic of all future possible
> > > uses of this, so I would rather keep it simple for now.
> >
> > I was thinking about possibly passing the 'reuse' address as another parameter
> > to vmemmap_remap_reuse().  We could add this addr to the vmemmap_rmap_walk
> > struct and set walk->reuse when we get to the pte for that address.  Of
> > course this would imply that the addr would need to be part of the range.
>
> Maybe adding another one parameter is unnecessary.  How about doing
> this in the vmemmap_remap_reuse?
>
> The 'reuse' address just is start + PAGE_SIZE.
>
> void vmemmap_remap_free(unsigned long start, unsigned long size)
> {
>          unsigned long end = start + size;
>          unsigned long reuse_addr = start + PAGE_SIZE;
                                           ^^^
                                        Here is "-"
Sorry.

>          LIST_HEAD(vmemmap_pages);
>
>          struct vmemmap_remap_walk walk = {
>                   .remap_pte = vmemmap_remap_pte,
>                   .vmemmap_pages = &vmemmap_pages,
>                   .reuse_addr = reuse_addr.
>          };
>
> }
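
Putting the correction together, the sketch would read roughly like this
(still only a sketch; starting the walk at reuse_addr so it can latch the
reuse page is an assumption, not something settled in this thread):

void vmemmap_remap_free(unsigned long start, unsigned long size)
{
	unsigned long end = start + size;
	unsigned long reuse_addr = start - PAGE_SIZE;
	LIST_HEAD(vmemmap_pages);

	struct vmemmap_remap_walk walk = {
		.remap_pte	= vmemmap_remap_pte,
		.vmemmap_pages	= &vmemmap_pages,
		.reuse_addr	= reuse_addr,
	};

	vmemmap_remap_range(reuse_addr, end, &walk);
	free_vmemmap_page_list(&vmemmap_pages);
}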
>
> >
> > Ideally, we would walk the page table to get to the reuse page.  My concern
> > was not explicitly about adding the BUG_ON.  In more general use, *pte could
> > be the first entry on a pte page.  And, then pte[-1] may not even be a pte.
> >
> > Again, I don't think this matters for the current HugeTLB use case.  Just a
> > little concerned if code is put to use for other purposes.
> > --
> > Mike Kravetz
>
>
>
> --
> Yours,
> Muchun



-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [PATCH v9 11/11] mm/hugetlb: Optimize the code with the help of the compiler
  2020-12-13 15:45 ` [PATCH v9 11/11] mm/hugetlb: Optimize the code with the help of the compiler Muchun Song
@ 2020-12-17 10:31   ` Oscar Salvador
  2020-12-17 10:42     ` [External] " Muchun Song
  0 siblings, 1 reply; 43+ messages in thread
From: Oscar Salvador @ 2020-12-17 10:31 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, song.bao.hua, david,
	duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On Sun, Dec 13, 2020 at 11:45:34PM +0800, Muchun Song wrote:
>  static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
>  {
> -	return h->nr_free_vmemmap_pages;
> +	return h->nr_free_vmemmap_pages && is_power_of_2(sizeof(struct page));

This is wrong as it will return either true or false, but not what we want:

	static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
	{
	        return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
	}

the above will compute to 4096, which is wrong for obvious reasons.

-- 
Oscar Salvador
SUSE L3


^ permalink raw reply	[flat|nested] 43+ messages in thread

* Re: [External] Re: [PATCH v9 11/11] mm/hugetlb: Optimize the code with the help of the compiler
  2020-12-17 10:31   ` Oscar Salvador
@ 2020-12-17 10:42     ` Muchun Song
  0 siblings, 0 replies; 43+ messages in thread
From: Muchun Song @ 2020-12-17 10:42 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Thu, Dec 17, 2020 at 6:32 PM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Sun, Dec 13, 2020 at 11:45:34PM +0800, Muchun Song wrote:
> >  static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
> >  {
> > -     return h->nr_free_vmemmap_pages;
> > +     return h->nr_free_vmemmap_pages && is_power_of_2(sizeof(struct page));
>
> This is wrong as it will return either true or false, but not what we want:

Yeah, thanks very much for pointing that out.

>
>         static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
>         {
>                 return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
>         }
>
> the above will compute to 4096, which is wrong for obvious reasons.

You are right. It is my mistake. Thanks Oscar.
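
One possible way to keep the helper returning a page count while still
letting the compiler elide the code when the struct page size is not a
power of 2 (a sketch only, not necessarily what the next version will do):

static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
{
	if (!is_power_of_2(sizeof(struct page)))
		return 0;

	return h->nr_free_vmemmap_pages;
}

Then free_vmemmap_pages_size_per_hpage() keeps computing the real size
instead of a constant 4096.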

>
> --
> Oscar Salvador
> SUSE L3



-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 43+ messages in thread

end of thread, other threads:[~2020-12-17 10:42 UTC | newest]

Thread overview: 43+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-12-13 15:45 [PATCH v9 00/11] Free some vmemmap pages of HugeTLB page Muchun Song
2020-12-13 15:45 ` [PATCH v9 01/11] mm/memory_hotplug: Factor out bootmem core functions to bootmem_info.c Muchun Song
2020-12-13 15:45 ` [PATCH v9 02/11] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP Muchun Song
2020-12-16  1:03   ` Mike Kravetz
2020-12-16  3:24     ` [External] " Muchun Song
2020-12-16  3:45     ` Mike Kravetz
2020-12-16  3:52       ` [External] " Muchun Song
2020-12-13 15:45 ` [PATCH v9 03/11] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page Muchun Song
2020-12-16 13:06   ` Oscar Salvador
2020-12-16 13:15     ` [External] " Muchun Song
2020-12-16 22:08   ` Mike Kravetz
2020-12-16 22:25     ` Oscar Salvador
2020-12-16 22:49       ` Mike Kravetz
2020-12-17  6:54         ` [External] " Muchun Song
2020-12-17  9:05           ` Muchun Song
2020-12-17  4:06     ` Muchun Song
2020-12-13 15:45 ` [PATCH v9 04/11] mm/hugetlb: Defer freeing of HugeTLB pages Muchun Song
2020-12-16 23:48   ` Mike Kravetz
2020-12-17  3:19     ` [External] " Muchun Song
2020-12-13 15:45 ` [PATCH v9 05/11] mm/hugetlb: Allocate the vmemmap pages associated with each HugeTLB page Muchun Song
2020-12-17  1:17   ` Mike Kravetz
2020-12-17  3:22     ` [External] " Muchun Song
2020-12-13 15:45 ` [PATCH v9 06/11] mm/hugetlb: Set the PageHWPoison to the raw error page Muchun Song
2020-12-16 13:28   ` Oscar Salvador
2020-12-16 13:51     ` [External] " Muchun Song
2020-12-16 13:30   ` Oscar Salvador
2020-12-13 15:45 ` [PATCH v9 07/11] mm/hugetlb: Flush work when dissolving hugetlb page Muchun Song
2020-12-13 15:45 ` [PATCH v9 08/11] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap Muchun Song
2020-12-16 14:40   ` Oscar Salvador
2020-12-16 16:04     ` [External] " Muchun Song
2020-12-16 22:10       ` Oscar Salvador
2020-12-17  2:45         ` Muchun Song
2020-12-13 15:45 ` [PATCH v9 09/11] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
2020-12-16 13:43   ` Oscar Salvador
2020-12-16 13:56     ` [External] " Muchun Song
2020-12-16 22:12       ` Oscar Salvador
2020-12-17  8:34     ` Muchun Song
2020-12-13 15:45 ` [PATCH v9 10/11] mm/hugetlb: Gather discrete indexes of tail page Muchun Song
2020-12-16 14:03   ` Oscar Salvador
2020-12-16 14:26     ` [External] " Muchun Song
2020-12-13 15:45 ` [PATCH v9 11/11] mm/hugetlb: Optimize the code with the help of the compiler Muchun Song
2020-12-17 10:31   ` Oscar Salvador
2020-12-17 10:42     ` [External] " Muchun Song
