linux-mm.kvack.org archive mirror
 help / color / mirror / Atom feed
* [PATCH v8 00/12] Free some vmemmap pages of HugeTLB page
@ 2020-12-10  3:55 Muchun Song
  2020-12-10  3:55 ` [PATCH v8 01/12] mm/memory_hotplug: Factor out bootmem core functions to bootmem_info.c Muchun Song
                   ` (12 more replies)
  0 siblings, 13 replies; 36+ messages in thread
From: Muchun Song @ 2020-12-10  3:55 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Hi all,

This patch series will free some vmemmap pages(struct page structures)
associated with each hugetlbpage when preallocated to save memory.

In order to reduce the difficulty of the first version of code review.
From this version, we disable PMD/huge page mapping of vmemmap if this
feature was enabled. This accutualy eliminate a bunch of the complex code
doing page table manipulation. When this patch series is solid, we cam add
the code of vmemmap page table manipulation in the future.

The struct page structures (page structs) are used to describe a physical
page frame. By default, there is a one-to-one mapping from a page frame to
it's corresponding page struct.

The HugeTLB pages consist of multiple base page size pages and is supported
by many architectures. See hugetlbpage.rst in the Documentation directory
for more details. On the x86 architecture, HugeTLB pages of size 2MB and 1GB
are currently supported. Since the base page size on x86 is 4KB, a 2MB
HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
4096 base pages. For each base page, there is a corresponding page struct.

Within the HugeTLB subsystem, only the first 4 page structs are used to
contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
provides this upper limit. The only 'useful' information in the remaining
page structs is the compound_head field, and this field is the same for all
tail pages.

By removing redundant page structs for HugeTLB pages, memory can returned to
the buddy allocator for other uses.

When the system boot up, every 2M HugeTLB has 512 struct page structs which
size is 8 pages(sizeof(struct page) * 512 / PAGE_SIZE).

    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
 +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
 |           |                     |     0     | -------------> |     0     |
 |           |                     +-----------+                +-----------+
 |           |                     |     1     | -------------> |     1     |
 |           |                     +-----------+                +-----------+
 |           |                     |     2     | -------------> |     2     |
 |           |                     +-----------+                +-----------+
 |           |                     |     3     | -------------> |     3     |
 |           |                     +-----------+                +-----------+
 |           |                     |     4     | -------------> |     4     |
 |    2MB    |                     +-----------+                +-----------+
 |           |                     |     5     | -------------> |     5     |
 |           |                     +-----------+                +-----------+
 |           |                     |     6     | -------------> |     6     |
 |           |                     +-----------+                +-----------+
 |           |                     |     7     | -------------> |     7     |
 |           |                     +-----------+                +-----------+
 |           |
 |           |
 |           |
 +-----------+

The value of page->compound_head is the same for all tail pages. The first
page of page structs (page 0) associated with the HugeTLB page contains the 4
page structs necessary to describe the HugeTLB. The only use of the remaining
pages of page structs (page 1 to page 7) is to point to page->compound_head.
Therefore, we can remap pages 2 to 7 to page 1. Only 2 pages of page structs
will be used for each HugeTLB page. This will allow us to free the remaining
6 pages to the buddy allocator.

Here is how things look after remapping.

    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
 +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
 |           |                     |     0     | -------------> |     0     |
 |           |                     +-----------+                +-----------+
 |           |                     |     1     | -------------> |     1     |
 |           |                     +-----------+                +-----------+
 |           |                     |     2     | ----------------^ ^ ^ ^ ^ ^
 |           |                     +-----------+                   | | | | |
 |           |                     |     3     | ------------------+ | | | |
 |           |                     +-----------+                     | | | |
 |           |                     |     4     | --------------------+ | | |
 |    2MB    |                     +-----------+                       | | |
 |           |                     |     5     | ----------------------+ | |
 |           |                     +-----------+                         | |
 |           |                     |     6     | ------------------------+ |
 |           |                     +-----------+                           |
 |           |                     |     7     | --------------------------+
 |           |                     +-----------+
 |           |
 |           |
 |           |
 +-----------+

When a HugeTLB is freed to the buddy system, we should allocate 6 pages for
vmemmap pages and restore the previous mapping relationship.

Apart from 2MB HugeTLB page, we also have 1GB HugeTLB page. It is similar
to the 2MB HugeTLB page. We also can use this approach to free the vmemmap
pages.

In this case, for the 1GB HugeTLB page, we can save 4088 pages(There are
4096 pages for struct page structs, we reserve 2 pages for vmemmap and 8
pages for page tables. So we can save 4088 pages). This is a very substantial
gain. On our server, run some SPDK/QEMU applications which will use 1024GB
hugetlbpage. With this feature enabled, we can save ~16GB(1G hugepage)/~11GB
(2MB hugepage, the worst case is 10GB while the best is 12GB) memory.

Because there are vmemmap page tables reconstruction on the freeing/allocating
path, it increases some overhead. Here are some overhead analysis.

1) Allocating 10240 2MB hugetlb pages.

   a) With this patch series applied:
   # time echo 10240 > /proc/sys/vm/nr_hugepages

   real     0m0.166s
   user     0m0.000s
   sys      0m0.166s

   # bpftrace -e 'kprobe:alloc_fresh_huge_page { @start[tid] = nsecs; } kretprobe:alloc_fresh_huge_page /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [8K, 16K)           8360 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [16K, 32K)          1868 |@@@@@@@@@@@                                         |
   [32K, 64K)            10 |                                                    |
   [64K, 128K)            2 |                                                    |

   b) Without this patch series:
   # time echo 10240 > /proc/sys/vm/nr_hugepages

   real     0m0.066s
   user     0m0.000s
   sys      0m0.066s

   # bpftrace -e 'kprobe:alloc_fresh_huge_page { @start[tid] = nsecs; } kretprobe:alloc_fresh_huge_page /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [4K, 8K)           10176 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [8K, 16K)             62 |                                                    |
   [16K, 32K)             2 |                                                    |

   Summarize: this feature is about ~2x slower than before.

2) Freeing 10240 2MB hugetlb pages.

   a) With this patch series applied:
   # time echo 0 > /proc/sys/vm/nr_hugepages

   real     0m0.004s
   user     0m0.000s
   sys      0m0.002s

   # bpftrace -e 'kprobe:__free_hugepage { @start[tid] = nsecs; } kretprobe:__free_hugepage /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [16K, 32K)         10240 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|

   b) Without this patch series:
   # time echo 0 > /proc/sys/vm/nr_hugepages

   real     0m0.077s
   user     0m0.001s
   sys      0m0.075s

   # bpftrace -e 'kprobe:__free_hugepage { @start[tid] = nsecs; } kretprobe:__free_hugepage /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [4K, 8K)            9950 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [8K, 16K)            287 |@                                                   |
   [16K, 32K)             3 |                                                    |

   Summarize: The overhead of __free_hugepage is about ~2-4x slower than before.
              But according to the allocation test above, I think that here is
	      also ~2x slower than before.

              But why the 'real' time of patched is smaller than before? Because
	      In this patch series, the freeing hugetlb is asynchronous(through
	      kwoker).

Although the overhead has increased, the overhead is not significant. Like Mike
said, "However, remember that the majority of use cases create hugetlb pages at
or shortly after boot time and add them to the pool. So, additional overhead is
at pool creation time. There is no change to 'normal run time' operations of
getting a page from or returning a page to the pool (think page fault/unmap)".

Todo:
  - Free all of the tail vmemmap pages
    Now for the 2MB HugrTLB page, we only free 6 vmemmap pages. we really can
    free 7 vmemmap pages. In this case, we can see 8 of the 512 struct page
    structures has beed set PG_head flag. If we can adjust compound_head()
    slightly and make compound_head() return the real head struct page when
    the parameter is the tail struct page but with PG_head flag set.

    In order to make the code evolution route clearer. This feature can can be
    a separate patch after this patchset is solid.

  - Support for other architectures (e.g. aarch64).
  - Enable PMD/huge page mapping of vmemmap even if this feature was enabled.

Changelog in v7 -> v8:
  - Adjust the order of patches.

  Very thanks to David and Oscar. Your suggestions are very valuable.

Changelog in v6 -> v7:
  - Rebase to linux-next 20201130
  - Do not use basepage mapping for vmemmap when this feature is disabled.
  - Rework some patchs.
    [PATCH v6 08/16] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
    [PATCH v6 10/16] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page

  Thanks to Oscar and Barry.

Changelog in v5 -> v6:
  - Disable PMD/huge page mapping of vmemmap if this feature was enabled.
  - Simplify the first version code.

Changelog in v4 -> v5:
  - Rework somme comments and code in the [PATCH v4 04/21] and [PATCH v4 05/21].

  Thanks to Mike and Oscar's suggestions.

Changelog in v3 -> v4:
  - Move all the vmemmap functions to hugetlb_vmemmap.c.
  - Make the CONFIG_HUGETLB_PAGE_FREE_VMEMMAP default to y, if we want to
    disable this feature, we should disable it by a boot/kernel command line.
  - Remove vmemmap_pgtable_{init, deposit, withdraw}() helper functions.
  - Initialize page table lock for vmemmap through core_initcall mechanism.

  Thanks for Mike and Oscar's suggestions.

Changelog in v2 -> v3:
  - Rename some helps function name. Thanks Mike.
  - Rework some code. Thanks Mike and Oscar.
  - Remap the tail vmemmap page with PAGE_KERNEL_RO instead of PAGE_KERNEL.
    Thanks Matthew.
  - Add some overhead analysis in the cover letter.
  - Use vmemap pmd table lock instead of a hugetlb specific global lock.

Changelog in v1 -> v2:
  - Fix do not call dissolve_compound_page in alloc_huge_page_vmemmap().
  - Fix some typo and code style problems.
  - Remove unused handle_vmemmap_fault().
  - Merge some commits to one commit suggested by Mike.

Muchun Song (12):
  mm/memory_hotplug: Factor out bootmem core functions to bootmem_info.c
  mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  mm/bootmem_info: Introduce free_bootmem_page helper
  mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page
  mm/hugetlb: Defer freeing of HugeTLB pages
  mm/hugetlb: Allocate the vmemmap pages associated with each HugeTLB
    page
  mm/hugetlb: Set the PageHWPoison to the raw error page
  mm/hugetlb: Flush work when dissolving hugetlb page
  mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
  mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  mm/hugetlb: Gather discrete indexes of tail page
  mm/hugetlb: Optimize the code with the help of the compiler

 Documentation/admin-guide/kernel-parameters.txt |   9 +
 Documentation/admin-guide/mm/hugetlbpage.rst    |   3 +
 arch/x86/mm/init_64.c                           |  13 +-
 fs/Kconfig                                      |  15 +
 include/linux/bootmem_info.h                    |  59 +++
 include/linux/hugetlb.h                         |  36 ++
 include/linux/hugetlb_cgroup.h                  |  15 +-
 include/linux/memory_hotplug.h                  |  27 --
 mm/Makefile                                     |   2 +
 mm/bootmem_info.c                               | 124 +++++++
 mm/hugetlb.c                                    | 165 +++++++--
 mm/hugetlb_vmemmap.c                            | 463 ++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h                            |  44 +++
 mm/memory_hotplug.c                             | 116 ------
 mm/sparse.c                                     |   1 +
 15 files changed, 918 insertions(+), 174 deletions(-)
 create mode 100644 include/linux/bootmem_info.h
 create mode 100644 mm/bootmem_info.c
 create mode 100644 mm/hugetlb_vmemmap.c
 create mode 100644 mm/hugetlb_vmemmap.h

-- 
2.11.0



^ permalink raw reply	[flat|nested] 36+ messages in thread

* [PATCH v8 01/12] mm/memory_hotplug: Factor out bootmem core functions to bootmem_info.c
  2020-12-10  3:55 [PATCH v8 00/12] Free some vmemmap pages of HugeTLB page Muchun Song
@ 2020-12-10  3:55 ` Muchun Song
  2020-12-10  3:55 ` [PATCH v8 02/12] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP Muchun Song
                   ` (11 subsequent siblings)
  12 siblings, 0 replies; 36+ messages in thread
From: Muchun Song @ 2020-12-10  3:55 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Move bootmem info registration common API to individual bootmem_info.c.
And we will use {get,put}_page_bootmem() to initialize the page for the
vmemmap pages or free the vmemmap pages to buddy in the later patch.
So move them out of CONFIG_MEMORY_HOTPLUG_SPARSE. This is just code
movement without any functional change.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: David Hildenbrand <david@redhat.com>
---
 arch/x86/mm/init_64.c          |   3 +-
 include/linux/bootmem_info.h   |  40 +++++++++++++
 include/linux/memory_hotplug.h |  27 ---------
 mm/Makefile                    |   1 +
 mm/bootmem_info.c              | 124 +++++++++++++++++++++++++++++++++++++++++
 mm/memory_hotplug.c            | 116 --------------------------------------
 mm/sparse.c                    |   1 +
 7 files changed, 168 insertions(+), 144 deletions(-)
 create mode 100644 include/linux/bootmem_info.h
 create mode 100644 mm/bootmem_info.c

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index b5a3fa4033d3..0a45f062826e 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -33,6 +33,7 @@
 #include <linux/nmi.h>
 #include <linux/gfp.h>
 #include <linux/kcore.h>
+#include <linux/bootmem_info.h>
 
 #include <asm/processor.h>
 #include <asm/bios_ebda.h>
@@ -1571,7 +1572,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 	return err;
 }
 
-#if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_HAVE_BOOTMEM_INFO_NODE)
+#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
 void register_page_bootmem_memmap(unsigned long section_nr,
 				  struct page *start_page, unsigned long nr_pages)
 {
diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
new file mode 100644
index 000000000000..4ed6dee1adc9
--- /dev/null
+++ b/include/linux/bootmem_info.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __LINUX_BOOTMEM_INFO_H
+#define __LINUX_BOOTMEM_INFO_H
+
+#include <linux/mmzone.h>
+
+/*
+ * Types for free bootmem stored in page->lru.next. These have to be in
+ * some random range in unsigned long space for debugging purposes.
+ */
+enum {
+	MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
+	SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
+	MIX_SECTION_INFO,
+	NODE_INFO,
+	MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
+};
+
+#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
+void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
+
+void get_page_bootmem(unsigned long info, struct page *page,
+		      unsigned long type);
+void put_page_bootmem(struct page *page);
+#else
+static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
+{
+}
+
+static inline void put_page_bootmem(struct page *page)
+{
+}
+
+static inline void get_page_bootmem(unsigned long info, struct page *page,
+				    unsigned long type)
+{
+}
+#endif
+
+#endif /* __LINUX_BOOTMEM_INFO_H */
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 15acce5ab106..84590964ad35 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -33,18 +33,6 @@ struct vmem_altmap;
 	___page;						   \
 })
 
-/*
- * Types for free bootmem stored in page->lru.next. These have to be in
- * some random range in unsigned long space for debugging purposes.
- */
-enum {
-	MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
-	SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
-	MIX_SECTION_INFO,
-	NODE_INFO,
-	MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
-};
-
 /* Types for control the zone type of onlined and offlined memory */
 enum {
 	/* Offline the memory. */
@@ -222,17 +210,6 @@ static inline void arch_refresh_nodedata(int nid, pg_data_t *pgdat)
 #endif /* CONFIG_NUMA */
 #endif /* CONFIG_HAVE_ARCH_NODEDATA_EXTENSION */
 
-#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
-extern void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
-#else
-static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-}
-#endif
-extern void put_page_bootmem(struct page *page);
-extern void get_page_bootmem(unsigned long ingo, struct page *page,
-			     unsigned long type);
-
 void get_online_mems(void);
 void put_online_mems(void);
 
@@ -260,10 +237,6 @@ static inline void zone_span_writelock(struct zone *zone) {}
 static inline void zone_span_writeunlock(struct zone *zone) {}
 static inline void zone_seqlock_init(struct zone *zone) {}
 
-static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-}
-
 static inline int try_online_node(int nid)
 {
 	return 0;
diff --git a/mm/Makefile b/mm/Makefile
index a1af02ba8f3f..ed4b88fa0f5e 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -83,6 +83,7 @@ obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KASAN)	+= kasan/
 obj-$(CONFIG_KFENCE) += kfence/
 obj-$(CONFIG_FAILSLAB) += failslab.o
+obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_MEMTEST)		+= memtest.o
 obj-$(CONFIG_MIGRATION) += migrate.o
diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
new file mode 100644
index 000000000000..fcab5a3f8cc0
--- /dev/null
+++ b/mm/bootmem_info.c
@@ -0,0 +1,124 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ *  linux/mm/bootmem_info.c
+ *
+ *  Copyright (C)
+ */
+#include <linux/mm.h>
+#include <linux/compiler.h>
+#include <linux/memblock.h>
+#include <linux/bootmem_info.h>
+#include <linux/memory_hotplug.h>
+
+void get_page_bootmem(unsigned long info, struct page *page, unsigned long type)
+{
+	page->freelist = (void *)type;
+	SetPagePrivate(page);
+	set_page_private(page, info);
+	page_ref_inc(page);
+}
+
+void put_page_bootmem(struct page *page)
+{
+	unsigned long type;
+
+	type = (unsigned long) page->freelist;
+	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
+	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
+
+	if (page_ref_dec_return(page) == 1) {
+		page->freelist = NULL;
+		ClearPagePrivate(page);
+		set_page_private(page, 0);
+		INIT_LIST_HEAD(&page->lru);
+		free_reserved_page(page);
+	}
+}
+
+#ifndef CONFIG_SPARSEMEM_VMEMMAP
+static void register_page_bootmem_info_section(unsigned long start_pfn)
+{
+	unsigned long mapsize, section_nr, i;
+	struct mem_section *ms;
+	struct page *page, *memmap;
+	struct mem_section_usage *usage;
+
+	section_nr = pfn_to_section_nr(start_pfn);
+	ms = __nr_to_section(section_nr);
+
+	/* Get section's memmap address */
+	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+
+	/*
+	 * Get page for the memmap's phys address
+	 * XXX: need more consideration for sparse_vmemmap...
+	 */
+	page = virt_to_page(memmap);
+	mapsize = sizeof(struct page) * PAGES_PER_SECTION;
+	mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
+
+	/* remember memmap's page */
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, SECTION_INFO);
+
+	usage = ms->usage;
+	page = virt_to_page(usage);
+
+	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
+
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
+
+}
+#else /* CONFIG_SPARSEMEM_VMEMMAP */
+static void register_page_bootmem_info_section(unsigned long start_pfn)
+{
+	unsigned long mapsize, section_nr, i;
+	struct mem_section *ms;
+	struct page *page, *memmap;
+	struct mem_section_usage *usage;
+
+	section_nr = pfn_to_section_nr(start_pfn);
+	ms = __nr_to_section(section_nr);
+
+	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+
+	register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
+
+	usage = ms->usage;
+	page = virt_to_page(usage);
+
+	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
+
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
+}
+#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
+
+void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
+{
+	unsigned long i, pfn, end_pfn, nr_pages;
+	int node = pgdat->node_id;
+	struct page *page;
+
+	nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
+	page = virt_to_page(pgdat);
+
+	for (i = 0; i < nr_pages; i++, page++)
+		get_page_bootmem(node, page, NODE_INFO);
+
+	pfn = pgdat->node_start_pfn;
+	end_pfn = pgdat_end_pfn(pgdat);
+
+	/* register section info */
+	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
+		/*
+		 * Some platforms can assign the same pfn to multiple nodes - on
+		 * node0 as well as nodeN.  To avoid registering a pfn against
+		 * multiple nodes we check that this pfn does not already
+		 * reside in some other nodes.
+		 */
+		if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node))
+			register_page_bootmem_info_section(pfn);
+	}
+}
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index a8cef4955907..4c4ca99745b7 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -141,122 +141,6 @@ static void release_memory_resource(struct resource *res)
 }
 
 #ifdef CONFIG_MEMORY_HOTPLUG_SPARSE
-void get_page_bootmem(unsigned long info,  struct page *page,
-		      unsigned long type)
-{
-	page->freelist = (void *)type;
-	SetPagePrivate(page);
-	set_page_private(page, info);
-	page_ref_inc(page);
-}
-
-void put_page_bootmem(struct page *page)
-{
-	unsigned long type;
-
-	type = (unsigned long) page->freelist;
-	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
-	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
-
-	if (page_ref_dec_return(page) == 1) {
-		page->freelist = NULL;
-		ClearPagePrivate(page);
-		set_page_private(page, 0);
-		INIT_LIST_HEAD(&page->lru);
-		free_reserved_page(page);
-	}
-}
-
-#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
-#ifndef CONFIG_SPARSEMEM_VMEMMAP
-static void register_page_bootmem_info_section(unsigned long start_pfn)
-{
-	unsigned long mapsize, section_nr, i;
-	struct mem_section *ms;
-	struct page *page, *memmap;
-	struct mem_section_usage *usage;
-
-	section_nr = pfn_to_section_nr(start_pfn);
-	ms = __nr_to_section(section_nr);
-
-	/* Get section's memmap address */
-	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-	/*
-	 * Get page for the memmap's phys address
-	 * XXX: need more consideration for sparse_vmemmap...
-	 */
-	page = virt_to_page(memmap);
-	mapsize = sizeof(struct page) * PAGES_PER_SECTION;
-	mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
-
-	/* remember memmap's page */
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, SECTION_INFO);
-
-	usage = ms->usage;
-	page = virt_to_page(usage);
-
-	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-
-}
-#else /* CONFIG_SPARSEMEM_VMEMMAP */
-static void register_page_bootmem_info_section(unsigned long start_pfn)
-{
-	unsigned long mapsize, section_nr, i;
-	struct mem_section *ms;
-	struct page *page, *memmap;
-	struct mem_section_usage *usage;
-
-	section_nr = pfn_to_section_nr(start_pfn);
-	ms = __nr_to_section(section_nr);
-
-	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-	register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
-
-	usage = ms->usage;
-	page = virt_to_page(usage);
-
-	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-}
-#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
-
-void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-	unsigned long i, pfn, end_pfn, nr_pages;
-	int node = pgdat->node_id;
-	struct page *page;
-
-	nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
-	page = virt_to_page(pgdat);
-
-	for (i = 0; i < nr_pages; i++, page++)
-		get_page_bootmem(node, page, NODE_INFO);
-
-	pfn = pgdat->node_start_pfn;
-	end_pfn = pgdat_end_pfn(pgdat);
-
-	/* register section info */
-	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
-		/*
-		 * Some platforms can assign the same pfn to multiple nodes - on
-		 * node0 as well as nodeN.  To avoid registering a pfn against
-		 * multiple nodes we check that this pfn does not already
-		 * reside in some other nodes.
-		 */
-		if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node))
-			register_page_bootmem_info_section(pfn);
-	}
-}
-#endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */
-
 static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
 		const char *reason)
 {
diff --git a/mm/sparse.c b/mm/sparse.c
index 7bd23f9d6cef..87676bf3af40 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -13,6 +13,7 @@
 #include <linux/vmalloc.h>
 #include <linux/swap.h>
 #include <linux/swapops.h>
+#include <linux/bootmem_info.h>
 
 #include "internal.h"
 #include <asm/dma.h>
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v8 02/12] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  2020-12-10  3:55 [PATCH v8 00/12] Free some vmemmap pages of HugeTLB page Muchun Song
  2020-12-10  3:55 ` [PATCH v8 01/12] mm/memory_hotplug: Factor out bootmem core functions to bootmem_info.c Muchun Song
@ 2020-12-10  3:55 ` Muchun Song
  2020-12-10  3:55 ` [PATCH v8 03/12] mm/bootmem_info: Introduce free_bootmem_page helper Muchun Song
                   ` (10 subsequent siblings)
  12 siblings, 0 replies; 36+ messages in thread
From: Muchun Song @ 2020-12-10  3:55 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

The purpose of introducing HUGETLB_PAGE_FREE_VMEMMAP is to configure
whether to enable the feature of freeing unused vmemmap associated with
HugeTLB pages. And this is just for dependency check. Now only support
x86-64.

Because this config depends on HAVE_BOOTMEM_INFO_NODE. And the function
of the register_page_bootmem_info() is aimed to register bootmem info.
So we should register bootmem info when this config is enabled.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 arch/x86/mm/init_64.c |  2 +-
 fs/Kconfig            | 15 +++++++++++++++
 2 files changed, 16 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0a45f062826e..0435bee2e172 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1225,7 +1225,7 @@ static struct kcore_list kcore_vsyscall;
 
 static void __init register_page_bootmem_info(void)
 {
-#ifdef CONFIG_NUMA
+#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)
 	int i;
 
 	for_each_online_node(i)
diff --git a/fs/Kconfig b/fs/Kconfig
index 976e8b9033c4..4c3a9c614983 100644
--- a/fs/Kconfig
+++ b/fs/Kconfig
@@ -245,6 +245,21 @@ config HUGETLBFS
 config HUGETLB_PAGE
 	def_bool HUGETLBFS
 
+config HUGETLB_PAGE_FREE_VMEMMAP
+	def_bool HUGETLB_PAGE
+	depends on X86_64
+	depends on SPARSEMEM_VMEMMAP
+	depends on HAVE_BOOTMEM_INFO_NODE
+	help
+	  When using HUGETLB_PAGE_FREE_VMEMMAP, the system can save up some
+	  memory from pre-allocated HugeTLB pages when they are not used.
+	  6 pages per HugeTLB page of the pmd level mapping and (PAGE_SIZE - 2)
+	  pages per HugeTLB page of the pud level mapping.
+
+	  When the pages are going to be used or freed up, the vmemmap array
+	  representing that range needs to be remapped again and the pages
+	  we discarded earlier need to be rellocated again.
+
 config MEMFD_CREATE
 	def_bool TMPFS || HUGETLBFS
 
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v8 03/12] mm/bootmem_info: Introduce free_bootmem_page helper
  2020-12-10  3:55 [PATCH v8 00/12] Free some vmemmap pages of HugeTLB page Muchun Song
  2020-12-10  3:55 ` [PATCH v8 01/12] mm/memory_hotplug: Factor out bootmem core functions to bootmem_info.c Muchun Song
  2020-12-10  3:55 ` [PATCH v8 02/12] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP Muchun Song
@ 2020-12-10  3:55 ` Muchun Song
  2020-12-10 14:15   ` Oscar Salvador
  2020-12-10  3:55 ` [PATCH v8 04/12] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page Muchun Song
                   ` (9 subsequent siblings)
  12 siblings, 1 reply; 36+ messages in thread
From: Muchun Song @ 2020-12-10  3:55 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Any memory allocated via the memblock allocator and not via the buddy
will be makred reserved already in the memmap. For those pages, we can
call free_bootmem_page() to free it to buddy allocator.

Becasue we wan to free some vmemmap pages of the HugeTLB to the buddy
allocator, we can use this helper to do that in the later patchs.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/bootmem_info.h | 19 +++++++++++++++++++
 1 file changed, 19 insertions(+)

diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
index 4ed6dee1adc9..20a8b0df0c39 100644
--- a/include/linux/bootmem_info.h
+++ b/include/linux/bootmem_info.h
@@ -3,6 +3,7 @@
 #define __LINUX_BOOTMEM_INFO_H
 
 #include <linux/mmzone.h>
+#include <linux/mm.h>
 
 /*
  * Types for free bootmem stored in page->lru.next. These have to be in
@@ -22,6 +23,24 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
 void get_page_bootmem(unsigned long info, struct page *page,
 		      unsigned long type);
 void put_page_bootmem(struct page *page);
+
+/*
+ * Any memory allocated via the memblock allocator and not via the
+ * buddy will be makred reserved already in the memmap. For those
+ * pages, we can call this function to free it to buddy allocator.
+ */
+static inline void free_bootmem_page(struct page *page)
+{
+	unsigned long magic = (unsigned long)page->freelist;
+
+	/* bootmem page has reserved flag in the reserve_bootmem_region */
+	VM_WARN_ON(!PageReserved(page) || page_ref_count(page) != 2);
+
+	if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
+		put_page_bootmem(page);
+	else
+		WARN_ON(1);
+}
 #else
 static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
 {
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v8 04/12] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page
  2020-12-10  3:55 [PATCH v8 00/12] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (2 preceding siblings ...)
  2020-12-10  3:55 ` [PATCH v8 03/12] mm/bootmem_info: Introduce free_bootmem_page helper Muchun Song
@ 2020-12-10  3:55 ` Muchun Song
  2020-12-10 14:42   ` Oscar Salvador
  2020-12-10  3:55 ` [PATCH v8 05/12] mm/hugetlb: Defer freeing of HugeTLB pages Muchun Song
                   ` (8 subsequent siblings)
  12 siblings, 1 reply; 36+ messages in thread
From: Muchun Song @ 2020-12-10  3:55 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Every HugeTLB has more than one struct page structure. We __know__ that
we only use the first 4(HUGETLB_CGROUP_MIN_ORDER) struct page structures
to store metadata associated with each HugeTLB.

There are a lot of struct page structures associated with each HugeTLB
page. For tail pages, the value of compound_head is the same. So we can
reuse first page of tail page structures. We map the virtual addresses
of the remaining pages of tail page structures to the first tail page
struct, and then free these page frames. Therefore, we need to reserve
two pages as vmemmap areas.

When we allocate a HugeTLB page from the buddy, we can free some vmemmap
pages associated with each HugeTLB page. It is more appropriate to do it
in the prep_new_huge_page().

The free_vmemmap_pages_per_hpage() which indicate that how many vmemmap
pages associated with a HugeTLB page that can be freed to the buddy
allocator just returns zero now, because all infrastructure is not
ready. Once all the infrastructure is ready, we will rework this
function to support the feature.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/Makefile          |   1 +
 mm/hugetlb.c         |   3 +
 mm/hugetlb_vmemmap.c | 340 +++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h |  20 +++
 4 files changed, 364 insertions(+)
 create mode 100644 mm/hugetlb_vmemmap.c
 create mode 100644 mm/hugetlb_vmemmap.h

diff --git a/mm/Makefile b/mm/Makefile
index ed4b88fa0f5e..056801d8daae 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -71,6 +71,7 @@ obj-$(CONFIG_FRONTSWAP)	+= frontswap.o
 obj-$(CONFIG_ZSWAP)	+= zswap.o
 obj-$(CONFIG_HAS_DMA)	+= dmapool.o
 obj-$(CONFIG_HUGETLBFS)	+= hugetlb.o
+obj-$(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)	+= hugetlb_vmemmap.o
 obj-$(CONFIG_NUMA) 	+= mempolicy.o
 obj-$(CONFIG_SPARSEMEM)	+= sparse.o
 obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1f3bf1710b66..140135fc8113 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -42,6 +42,7 @@
 #include <linux/userfaultfd_k.h>
 #include <linux/page_owner.h>
 #include "internal.h"
+#include "hugetlb_vmemmap.h"
 
 int hugetlb_max_hstate __read_mostly;
 unsigned int default_hstate_idx;
@@ -1497,6 +1498,8 @@ void free_huge_page(struct page *page)
 
 static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 {
+	free_huge_page_vmemmap(h, page);
+
 	INIT_LIST_HEAD(&page->lru);
 	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
 	set_hugetlb_cgroup(page, NULL);
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
new file mode 100644
index 000000000000..c464c5db8967
--- /dev/null
+++ b/mm/hugetlb_vmemmap.c
@@ -0,0 +1,340 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ *     Author: Muchun Song <songmuchun@bytedance.com>
+ *
+ * The struct page structures (page structs) are used to describe a physical
+ * page frame. By default, there is a one-to-one mapping from a page frame to
+ * it's corresponding page struct.
+ *
+ * The HugeTLB pages consist of multiple base page size pages and is supported
+ * by many architectures. See hugetlbpage.rst in the Documentation directory
+ * for more details. On the x86-64 architecture, HugeTLB pages of size 2MB and
+ * 1GB are currently supported. Since the base page size on x86 is 4KB, a 2MB
+ * HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
+ * 4096 base pages. For each base page, there is a corresponding page struct.
+ *
+ * Within the HugeTLB subsystem, only the first 4 page structs are used to
+ * contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
+ * provides this upper limit. The only 'useful' information in the remaining
+ * page structs is the compound_head field, and this field is the same for all
+ * tail pages.
+ *
+ * By removing redundant page structs for HugeTLB pages, memory can returned to
+ * the buddy allocator for other uses.
+ *
+ * Different architectures support different HugeTLB pages. For example, the
+ * following table is the HugeTLB page size supported by x86 and arm64
+ * architectures. Becasue arm64 supports 4k, 16k, and 64k base pages and
+ * supports contiguous entries, so it supports many kinds of sizes of HugeTLB
+ * page.
+ *
+ * +--------------+-----------+-----------------------------------------------+
+ * | Architecture | Page Size |                HugeTLB Page Size              |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ * |    x86-64    |    4KB    |    2MB    |    1GB    |           |           |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ * |              |    4KB    |   64KB    |    2MB    |    32MB   |    1GB    |
+ * |              +-----------+-----------+-----------+-----------+-----------+
+ * |    arm64     |   16KB    |    2MB    |   32MB    |     1GB   |           |
+ * |              +-----------+-----------+-----------+-----------+-----------+
+ * |              |   64KB    |    2MB    |  512MB    |    16GB   |           |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ *
+ * When the system boot up, every HugeTLB page has more than one struct page
+ * structs whose size is (unit: pages):
+ *
+ *    struct_size = HugeTLB_Size / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE
+ *
+ * Where HugeTLB_Size is the size of the HugeTLB page. We know that the size
+ * of the HugeTLB page is always n times PAGE_SIZE. So we can get the following
+ * relationship.
+ *
+ *    HugeTLB_Size = n * PAGE_SIZE
+ *
+ * Then,
+ *
+ *    struct_size = n * PAGE_SIZE / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE
+ *                = n * sizeof(struct page) / PAGE_SIZE
+ *
+ * We can use huge mapping at the pud/pmd level for the HugeTLB page.
+ *
+ * For the pmd level mapping of the HugeTLB page, then
+ *
+ *    struct_size = n * sizeof(struct page) / PAGE_SIZE
+ *                = PAGE_SIZE / sizeof(pte_t) * sizeof(struct page) / PAGE_SIZE
+ *                = sizeof(struct page) / sizeof(pte_t)
+ *                = 64 / 8
+ *                = 8 (pages)
+ *
+ * Where n is how many pte entries which one page can contains. So the value of
+ * n is (PAGE_SIZE / sizeof(pte_t)).
+ *
+ * This optimization only supports 64-bit system, so the value of sizeof(pte_t)
+ * is 8. And this optimization also applicable only when the size of struct page
+ * is a power of two. In most cases, the size of struct page is 64 (e.g. x86-64
+ * and arm64). So if we use pmd level mapping for a HugeTLB page, the size of
+ * struct page structs of it is 8 pages whose size depends on the size of the
+ * base page.
+ *
+ * For the pud level mapping of the HugeTLB page, then
+ *
+ *    struct_size = PAGE_SIZE / sizeof(pmd_t) * struct_size(pmd)
+ *                = PAGE_SIZE / 8 * 8 (pages)
+ *                = PAGE_SIZE (pages)
+ *
+ * Where the struct_size(pmd) is the size of the struct page structs of a pmd
+ * level mapping of the HugeTLB page.
+ *
+ * Next, we take the pmd level mapping of the HugeTLB page as an example to
+ * show the internal implementation of this optimization. There are 8 pages
+ * struct page structs associated with a HugeTLB page which is pmd mapped.
+ *
+ * Here is how things look before optimization.
+ *
+ *    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ * |           |                     |     0     | -------------> |     0     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     1     | -------------> |     1     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     2     | -------------> |     2     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     3     | -------------> |     3     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     4     | -------------> |     4     |
+ * |    PMD    |                     +-----------+                +-----------+
+ * |   level   |                     |     5     | -------------> |     5     |
+ * |  mapping  |                     +-----------+                +-----------+
+ * |           |                     |     6     | -------------> |     6     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     7     | -------------> |     7     |
+ * |           |                     +-----------+                +-----------+
+ * |           |
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ * The value of page->compound_head is the same for all tail pages. The first
+ * page of page structs (page 0) associated with the HugeTLB page contains the 4
+ * page structs necessary to describe the HugeTLB. The only use of the remaining
+ * pages of page structs (page 1 to page 7) is to point to page->compound_head.
+ * Therefore, we can remap pages 2 to 7 to page 1. Only 2 pages of page structs
+ * will be used for each HugeTLB page. This will allow us to free the remaining
+ * 6 pages to the buddy allocator.
+ *
+ * Here is how things look after remapping.
+ *
+ *    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ * |           |                     |     0     | -------------> |     0     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     1     | -------------> |     1     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     2     | ----------------^ ^ ^ ^ ^ ^
+ * |           |                     +-----------+                   | | | | |
+ * |           |                     |     3     | ------------------+ | | | |
+ * |           |                     +-----------+                     | | | |
+ * |           |                     |     4     | --------------------+ | | |
+ * |    PMD    |                     +-----------+                       | | |
+ * |   level   |                     |     5     | ----------------------+ | |
+ * |  mapping  |                     +-----------+                         | |
+ * |           |                     |     6     | ------------------------+ |
+ * |           |                     +-----------+                           |
+ * |           |                     |     7     | --------------------------+
+ * |           |                     +-----------+
+ * |           |
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ * When a HugeTLB is freed to the buddy system, we should allocate 6 pages for
+ * vmemmap pages and restore the previous mapping relationship.
+ *
+ * For the pud level mapping of the HugeTLB page. It is similar to the former.
+ * We also can use this approach to free (PAGE_SIZE - 2) vmemmap pages.
+ *
+ * Apart from the pmd/pud level mapping of the HugeTLB page, some architectures
+ * (e.g. aarch64) provides a contiguous bit in the translation table entries
+ * that hints to the MMU to indicate that it is one of a contiguous set of
+ * entries that can be cached in a single TLB entry.
+ *
+ * The contiguous bit is used to increase the mapping size at the pmd and pte
+ * (last) level. So this type of HugeTLB page can be optimized only when its
+ * size of the struct page structs is greater than 2 pages.
+ */
+#define pr_fmt(fmt)	"HugeTLB vmemmap: " fmt
+
+#include <linux/bootmem_info.h>
+#include "hugetlb_vmemmap.h"
+
+/*
+ * There are a lot of struct page structures associated with each HugeTLB page.
+ * For tail pages, the value of compound_head is the same. So we can reuse first
+ * page of tail page structures. We map the virtual addresses of the remaining
+ * pages of tail page structures to the first tail page struct, and then free
+ * these page frames. Therefore, we need to reserve two pages as vmemmap areas.
+ */
+#define RESERVE_VMEMMAP_NR		2U
+#define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
+#define VMEMMAP_TAIL_PAGE_REUSE		-1
+
+#ifndef VMEMMAP_HPAGE_SHIFT
+#define VMEMMAP_HPAGE_SHIFT		HPAGE_SHIFT
+#endif
+#define VMEMMAP_HPAGE_ORDER		(VMEMMAP_HPAGE_SHIFT - PAGE_SHIFT)
+#define VMEMMAP_HPAGE_NR		(1 << VMEMMAP_HPAGE_ORDER)
+#define VMEMMAP_HPAGE_SIZE		(1UL << VMEMMAP_HPAGE_SHIFT)
+#define VMEMMAP_HPAGE_MASK		(~(VMEMMAP_HPAGE_SIZE - 1))
+
+#define vmemmap_hpage_addr_end(addr, end)				 \
+({									 \
+	unsigned long __boundary;					 \
+	__boundary = ((addr) + VMEMMAP_HPAGE_SIZE) & VMEMMAP_HPAGE_MASK; \
+	(__boundary - 1 < (end) - 1) ? __boundary : (end);		 \
+})
+
+/*
+ * How many vmemmap pages associated with a HugeTLB page that can be freed
+ * to the buddy allocator.
+ *
+ * Todo: Now it is zero, because all infrastructure is not ready. Once all the
+ * infrastructure is ready, we will rework this function to support the feature.
+ */
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
+
+static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return free_vmemmap_pages_per_hpage(h) + RESERVE_VMEMMAP_NR;
+}
+
+static inline unsigned long vmemmap_pages_size_per_hpage(struct hstate *h)
+{
+	return (unsigned long)vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
+}
+
+/*
+ * Walk a vmemmap address to the pmd it maps.
+ */
+static pmd_t *vmemmap_to_pmd(unsigned long addr)
+{
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+
+	pgd = pgd_offset_k(addr);
+	if (pgd_none(*pgd))
+		return NULL;
+
+	p4d = p4d_offset(pgd, addr);
+	if (p4d_none(*p4d))
+		return NULL;
+
+	pud = pud_offset(p4d, addr);
+	if (pud_none(*pud))
+		return NULL;
+
+	pmd = pmd_offset(pud, addr);
+	if (pmd_none(*pmd))
+		return NULL;
+
+	return pmd;
+}
+
+static void vmemmap_reuse_pte_range(struct page *reuse, pte_t *pte,
+				    unsigned long start, unsigned long end,
+				    struct list_head *vmemmap_pages)
+{
+	/*
+	 * Make the tail pages are mapped with read-only to catch
+	 * illegal write operation to the tail pages.
+	 */
+	pgprot_t pgprot = PAGE_KERNEL_RO;
+	pte_t entry = mk_pte(reuse, pgprot);
+	unsigned long addr;
+
+	for (addr = start; addr < end; addr += PAGE_SIZE, pte++) {
+		struct page *page;
+
+		VM_BUG_ON(pte_none(*pte));
+
+		page = pte_page(*pte);
+		list_add(&page->lru, vmemmap_pages);
+
+		set_pte_at(&init_mm, addr, pte, entry);
+	}
+}
+
+static void vmemmap_remap_range(unsigned long start, unsigned long end,
+				struct list_head *vmemmap_pages)
+{
+	pmd_t *pmd;
+	unsigned long next, addr = start;
+	struct page *reuse = NULL;
+
+	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
+	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
+	VM_BUG_ON((start >> PUD_SHIFT) != (end >> PUD_SHIFT));
+
+	pmd = vmemmap_to_pmd(addr);
+	BUG_ON(!pmd);
+
+	do {
+		pte_t *pte = pte_offset_kernel(pmd, addr);
+
+		if (!reuse)
+			reuse = pte_page(pte[VMEMMAP_TAIL_PAGE_REUSE]);
+
+		next = vmemmap_hpage_addr_end(addr, end);
+		vmemmap_reuse_pte_range(reuse, pte, addr, next, vmemmap_pages);
+	} while (pmd++, addr = next, addr != end);
+
+	flush_tlb_kernel_range(start, end);
+}
+
+/*
+ * Free a vmemmap page. A vmemmap page can be allocated from the memblock
+ * allocator or buddy allocator. If the PG_reserved flag is set, it means
+ * that it allocated from the memblock allocator, just free it via the
+ * free_bootmem_page(). Otherwise, use __free_page().
+ */
+static inline void free_vmemmap_page(struct page *page)
+{
+	if (PageReserved(page))
+		free_bootmem_page(page);
+	else
+		__free_page(page);
+}
+
+static inline void free_vmemmap_page_list(struct list_head *list)
+{
+	struct page *page, *next;
+
+	list_for_each_entry_safe(page, next, list, lru) {
+		list_del(&page->lru);
+		free_vmemmap_page(page);
+	}
+}
+
+void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	unsigned long start, end;
+	unsigned long vmemmap_addr = (unsigned long)head;
+	LIST_HEAD(vmemmap_pages);
+
+	if (!free_vmemmap_pages_per_hpage(h))
+		return;
+
+	start = vmemmap_addr + RESERVE_VMEMMAP_SIZE;
+	end = vmemmap_addr + vmemmap_pages_size_per_hpage(h);
+	vmemmap_remap_range(start, end, &vmemmap_pages);
+
+	free_vmemmap_page_list(&vmemmap_pages);
+}
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
new file mode 100644
index 000000000000..6923f03534d5
--- /dev/null
+++ b/mm/hugetlb_vmemmap.h
@@ -0,0 +1,20 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ *     Author: Muchun Song <songmuchun@bytedance.com>
+ */
+#ifndef _LINUX_HUGETLB_VMEMMAP_H
+#define _LINUX_HUGETLB_VMEMMAP_H
+#include <linux/hugetlb.h>
+
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+#else
+static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
+#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
+#endif /* _LINUX_HUGETLB_VMEMMAP_H */
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v8 05/12] mm/hugetlb: Defer freeing of HugeTLB pages
  2020-12-10  3:55 [PATCH v8 00/12] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (3 preceding siblings ...)
  2020-12-10  3:55 ` [PATCH v8 04/12] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page Muchun Song
@ 2020-12-10  3:55 ` Muchun Song
  2020-12-10  3:55 ` [PATCH v8 06/12] mm/hugetlb: Allocate the vmemmap pages associated with each HugeTLB page Muchun Song
                   ` (7 subsequent siblings)
  12 siblings, 0 replies; 36+ messages in thread
From: Muchun Song @ 2020-12-10  3:55 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

In the subsequent patch, we will allocate the vmemmap pages when free
HugeTLB pages. But update_and_free_page() is called from a non-task
context(and hold hugetlb_lock), so we can defer the actual freeing in
a workqueue to prevent use GFP_ATOMIC to allocate the vmemmap pages.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c         | 77 ++++++++++++++++++++++++++++++++++++++++++++++++----
 mm/hugetlb_vmemmap.c | 12 --------
 mm/hugetlb_vmemmap.h | 17 ++++++++++++
 3 files changed, 88 insertions(+), 18 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 140135fc8113..0ff9b90e524f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1292,15 +1292,76 @@ static inline void destroy_compound_gigantic_page(struct page *page,
 						unsigned int order) { }
 #endif
 
-static void update_and_free_page(struct hstate *h, struct page *page)
+static void __free_hugepage(struct hstate *h, struct page *page);
+
+/*
+ * As update_and_free_page() is be called from a non-task context(and hold
+ * hugetlb_lock), we can defer the actual freeing in a workqueue to prevent
+ * use GFP_ATOMIC to allocate a lot of vmemmap pages.
+ *
+ * update_hpage_vmemmap_workfn() locklessly retrieves the linked list of
+ * pages to be freed and frees them one-by-one. As the page->mapping pointer
+ * is going to be cleared in update_hpage_vmemmap_workfn() anyway, it is
+ * reused as the llist_node structure of a lockless linked list of huge
+ * pages to be freed.
+ */
+static LLIST_HEAD(hpage_update_freelist);
+
+static void update_hpage_vmemmap_workfn(struct work_struct *work)
 {
-	int i;
+	struct llist_node *node;
+	struct page *page;
+
+	node = llist_del_all(&hpage_update_freelist);
 
+	while (node) {
+		page = container_of((struct address_space **)node,
+				     struct page, mapping);
+		node = node->next;
+		page->mapping = NULL;
+		__free_hugepage(page_hstate(page), page);
+
+		cond_resched();
+	}
+}
+static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn);
+
+static inline void __update_and_free_page(struct hstate *h, struct page *page)
+{
+	/* No need to allocate vmemmap pages */
+	if (!free_vmemmap_pages_per_hpage(h)) {
+		__free_hugepage(h, page);
+		return;
+	}
+
+	/*
+	 * Defer freeing to avoid using GFP_ATOMIC to allocate vmemmap
+	 * pages.
+	 *
+	 * Only call schedule_work() if hpage_update_freelist is previously
+	 * empty. Otherwise, schedule_work() had been called but the workfn
+	 * hasn't retrieved the list yet.
+	 */
+	if (llist_add((struct llist_node *)&page->mapping,
+		      &hpage_update_freelist))
+		schedule_work(&hpage_update_work);
+}
+
+static void update_and_free_page(struct hstate *h, struct page *page)
+{
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
 		return;
 
 	h->nr_huge_pages--;
 	h->nr_huge_pages_node[page_to_nid(page)]--;
+
+	__update_and_free_page(h, page);
+}
+
+static void __free_hugepage(struct hstate *h, struct page *page)
+{
+	int i;
+
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
 				1 << PG_referenced | 1 << PG_dirty |
@@ -1313,13 +1374,17 @@ static void update_and_free_page(struct hstate *h, struct page *page)
 	set_page_refcounted(page);
 	if (hstate_is_gigantic(h)) {
 		/*
-		 * Temporarily drop the hugetlb_lock, because
-		 * we might block in free_gigantic_page().
+		 * Temporarily drop the hugetlb_lock only when this type of
+		 * HugeTLB page does not support vmemmap optimization (which
+		 * contex do not hold the hugetlb_lock), because we might block
+		 * in free_gigantic_page().
 		 */
-		spin_unlock(&hugetlb_lock);
+		if (!free_vmemmap_pages_per_hpage(h))
+			spin_unlock(&hugetlb_lock);
 		destroy_compound_gigantic_page(page, huge_page_order(h));
 		free_gigantic_page(page, huge_page_order(h));
-		spin_lock(&hugetlb_lock);
+		if (!free_vmemmap_pages_per_hpage(h))
+			spin_lock(&hugetlb_lock);
 	} else {
 		__free_pages(page, huge_page_order(h));
 	}
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index c464c5db8967..d080488cde16 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -197,18 +197,6 @@
 	(__boundary - 1 < (end) - 1) ? __boundary : (end);		 \
 })
 
-/*
- * How many vmemmap pages associated with a HugeTLB page that can be freed
- * to the buddy allocator.
- *
- * Todo: Now it is zero, because all infrastructure is not ready. Once all the
- * infrastructure is ready, we will rework this function to support the feature.
- */
-static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
-{
-	return 0;
-}
-
 static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return free_vmemmap_pages_per_hpage(h) + RESERVE_VMEMMAP_NR;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 6923f03534d5..bf22cd003acb 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -12,9 +12,26 @@
 
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+
+/*
+ * How many vmemmap pages associated with a HugeTLB page that can be freed
+ * to the buddy allocator.
+ *
+ * Todo: Now it is zero, because all infrastructure is not ready. Once all the
+ * infrastructure is ready, we will rework this function to support the feature.
+ */
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
 #else
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
+
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
 #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v8 06/12] mm/hugetlb: Allocate the vmemmap pages associated with each HugeTLB page
  2020-12-10  3:55 [PATCH v8 00/12] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (4 preceding siblings ...)
  2020-12-10  3:55 ` [PATCH v8 05/12] mm/hugetlb: Defer freeing of HugeTLB pages Muchun Song
@ 2020-12-10  3:55 ` Muchun Song
  2020-12-11  9:35   ` Oscar Salvador
  2020-12-10  3:55 ` [PATCH v8 07/12] mm/hugetlb: Set the PageHWPoison to the raw error page Muchun Song
                   ` (6 subsequent siblings)
  12 siblings, 1 reply; 36+ messages in thread
From: Muchun Song @ 2020-12-10  3:55 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

When we free a HugeTLB page to the buddy allocator, we should allocate the
vmemmap pages associated with it. We can do that in the __free_hugepage()
before freeing it to buddy.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c         |  2 ++
 mm/hugetlb_vmemmap.c | 89 +++++++++++++++++++++++++++++++++++++++++++++++++---
 mm/hugetlb_vmemmap.h |  5 +++
 3 files changed, 91 insertions(+), 5 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 0ff9b90e524f..542e6cb81321 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1362,6 +1362,8 @@ static void __free_hugepage(struct hstate *h, struct page *page)
 {
 	int i;
 
+	alloc_huge_page_vmemmap(h, page);
+
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
 				1 << PG_referenced | 1 << PG_dirty |
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index d080488cde16..4587a0062808 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -169,6 +169,7 @@
 #define pr_fmt(fmt)	"HugeTLB vmemmap: " fmt
 
 #include <linux/bootmem_info.h>
+#include <linux/delay.h>
 #include "hugetlb_vmemmap.h"
 
 /*
@@ -181,6 +182,8 @@
 #define RESERVE_VMEMMAP_NR		2U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 #define VMEMMAP_TAIL_PAGE_REUSE		-1
+#define GFP_VMEMMAP_PAGE		\
+	(GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_HIGH | __GFP_NOWARN)
 
 #ifndef VMEMMAP_HPAGE_SHIFT
 #define VMEMMAP_HPAGE_SHIFT		HPAGE_SHIFT
@@ -197,6 +200,11 @@
 	(__boundary - 1 < (end) - 1) ? __boundary : (end);		 \
 })
 
+typedef void (*vmemmap_remap_pte_func_t)(struct page *reuse, pte_t *pte,
+					 unsigned long start, unsigned long end,
+					 void *priv);
+
+
 static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return free_vmemmap_pages_per_hpage(h) + RESERVE_VMEMMAP_NR;
@@ -236,9 +244,39 @@ static pmd_t *vmemmap_to_pmd(unsigned long addr)
 	return pmd;
 }
 
+static void vmemmap_restore_pte_range(struct page *reuse, pte_t *pte,
+				      unsigned long start, unsigned long end,
+				      void *priv)
+{
+	pgprot_t pgprot = PAGE_KERNEL;
+	void *from = page_to_virt(reuse);
+	unsigned long addr;
+	struct list_head *pages = priv;
+
+	for (addr = start; addr < end; addr += PAGE_SIZE) {
+		void *to;
+		struct page *page;
+
+		VM_BUG_ON(pte_none(*pte) || pte_page(*pte) != reuse);
+
+		page = list_first_entry(pages, struct page, lru);
+		list_del(&page->lru);
+		to = page_to_virt(page);
+		copy_page(to, from);
+
+		/*
+		 * Make sure that any data that writes to the @to is made
+		 * visible to the physical page.
+		 */
+		flush_kernel_vmap_range(to, PAGE_SIZE);
+
+		set_pte_at(&init_mm, addr, pte++, mk_pte(page, pgprot));
+	}
+}
+
 static void vmemmap_reuse_pte_range(struct page *reuse, pte_t *pte,
 				    unsigned long start, unsigned long end,
-				    struct list_head *vmemmap_pages)
+				    void *priv)
 {
 	/*
 	 * Make the tail pages are mapped with read-only to catch
@@ -247,6 +285,7 @@ static void vmemmap_reuse_pte_range(struct page *reuse, pte_t *pte,
 	pgprot_t pgprot = PAGE_KERNEL_RO;
 	pte_t entry = mk_pte(reuse, pgprot);
 	unsigned long addr;
+	struct list_head *pages = priv;
 
 	for (addr = start; addr < end; addr += PAGE_SIZE, pte++) {
 		struct page *page;
@@ -254,14 +293,14 @@ static void vmemmap_reuse_pte_range(struct page *reuse, pte_t *pte,
 		VM_BUG_ON(pte_none(*pte));
 
 		page = pte_page(*pte);
-		list_add(&page->lru, vmemmap_pages);
+		list_add(&page->lru, pages);
 
 		set_pte_at(&init_mm, addr, pte, entry);
 	}
 }
 
 static void vmemmap_remap_range(unsigned long start, unsigned long end,
-				struct list_head *vmemmap_pages)
+				vmemmap_remap_pte_func_t func, void *priv)
 {
 	pmd_t *pmd;
 	unsigned long next, addr = start;
@@ -281,12 +320,52 @@ static void vmemmap_remap_range(unsigned long start, unsigned long end,
 			reuse = pte_page(pte[VMEMMAP_TAIL_PAGE_REUSE]);
 
 		next = vmemmap_hpage_addr_end(addr, end);
-		vmemmap_reuse_pte_range(reuse, pte, addr, next, vmemmap_pages);
+		func(reuse, pte, addr, next, priv);
 	} while (pmd++, addr = next, addr != end);
 
 	flush_tlb_kernel_range(start, end);
 }
 
+static inline void alloc_vmemmap_pages(struct hstate *h, struct list_head *list)
+{
+	unsigned int nr = free_vmemmap_pages_per_hpage(h);
+
+	while (nr--) {
+		struct page *page;
+
+retry:
+		page = alloc_page(GFP_VMEMMAP_PAGE);
+		if (unlikely(!page)) {
+			msleep(100);
+			/*
+			 * We should retry infinitely, because we cannot
+			 * handle allocation failures. Once we allocate
+			 * vmemmap pages successfully, then we can free
+			 * a HugeTLB page.
+			 */
+			goto retry;
+		}
+		list_add_tail(&page->lru, list);
+	}
+}
+
+void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	unsigned long start, end;
+	unsigned long vmemmap_addr = (unsigned long)head;
+	LIST_HEAD(vmemmap_pages);
+
+	if (!free_vmemmap_pages_per_hpage(h))
+		return;
+
+	alloc_vmemmap_pages(h, &vmemmap_pages);
+
+	start = vmemmap_addr + RESERVE_VMEMMAP_SIZE;
+	end = vmemmap_addr + vmemmap_pages_size_per_hpage(h);
+	vmemmap_remap_range(start, end, vmemmap_restore_pte_range,
+			    &vmemmap_pages);
+}
+
 /*
  * Free a vmemmap page. A vmemmap page can be allocated from the memblock
  * allocator or buddy allocator. If the PG_reserved flag is set, it means
@@ -322,7 +401,7 @@ void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 
 	start = vmemmap_addr + RESERVE_VMEMMAP_SIZE;
 	end = vmemmap_addr + vmemmap_pages_size_per_hpage(h);
-	vmemmap_remap_range(start, end, &vmemmap_pages);
+	vmemmap_remap_range(start, end, vmemmap_reuse_pte_range, &vmemmap_pages);
 
 	free_vmemmap_page_list(&vmemmap_pages);
 }
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index bf22cd003acb..8fd57c49e230 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -11,6 +11,7 @@
 #include <linux/hugetlb.h>
 
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+void alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
 
 /*
@@ -25,6 +26,10 @@ static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 	return 0;
 }
 #else
+static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
+
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v8 07/12] mm/hugetlb: Set the PageHWPoison to the raw error page
  2020-12-10  3:55 [PATCH v8 00/12] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (5 preceding siblings ...)
  2020-12-10  3:55 ` [PATCH v8 06/12] mm/hugetlb: Allocate the vmemmap pages associated with each HugeTLB page Muchun Song
@ 2020-12-10  3:55 ` Muchun Song
  2020-12-10 11:11   ` Muchun Song
  2020-12-11 13:36   ` Oscar Salvador
  2020-12-10  3:55 ` [PATCH v8 08/12] mm/hugetlb: Flush work when dissolving hugetlb page Muchun Song
                   ` (5 subsequent siblings)
  12 siblings, 2 replies; 36+ messages in thread
From: Muchun Song @ 2020-12-10  3:55 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Because we reuse the first tail vmemmap page frame and remap it
with read-only, we cannot set the PageHWPosion on a tail page.
So we can use the head[4].mapping to record the real error page
index and set the raw error page PageHWPoison later.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c | 52 ++++++++++++++++++++++++++++++++++++++++++++--------
 1 file changed, 44 insertions(+), 8 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 542e6cb81321..06157df08d8e 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1347,6 +1347,47 @@ static inline void __update_and_free_page(struct hstate *h, struct page *page)
 		schedule_work(&hpage_update_work);
 }
 
+static inline void subpage_hwpoison_deliver(struct hstate *h, struct page *head)
+{
+	struct page *page = head;
+
+	if (!free_vmemmap_pages_per_hpage(h))
+		return;
+
+	if (PageHWPoison(head))
+		page = head + page_private(head + 4);
+
+	/*
+	 * Move PageHWPoison flag from head page to the raw error page,
+	 * which makes any subpages rather than the error page reusable.
+	 */
+	if (page != head) {
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+
+static inline void set_subpage_hwpoison(struct hstate *h, struct page *head,
+					struct page *page)
+{
+	if (!PageHWPoison(head))
+		return;
+
+	if (free_vmemmap_pages_per_hpage(h)) {
+		set_page_private(head + 4, page - head);
+		return;
+	}
+
+	/*
+	 * Move PageHWPoison flag from head page to the raw error page,
+	 * which makes any subpages rather than the error page reusable.
+	 */
+	if (page != head) {
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+
 static void update_and_free_page(struct hstate *h, struct page *page)
 {
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
@@ -1363,6 +1404,7 @@ static void __free_hugepage(struct hstate *h, struct page *page)
 	int i;
 
 	alloc_huge_page_vmemmap(h, page);
+	subpage_hwpoison_deliver(h, page);
 
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
@@ -1840,14 +1882,8 @@ int dissolve_free_huge_page(struct page *page)
 		int nid = page_to_nid(head);
 		if (h->free_huge_pages - h->resv_huge_pages == 0)
 			goto out;
-		/*
-		 * Move PageHWPoison flag from head page to the raw error page,
-		 * which makes any subpages rather than the error page reusable.
-		 */
-		if (PageHWPoison(head) && page != head) {
-			SetPageHWPoison(page);
-			ClearPageHWPoison(head);
-		}
+
+		set_subpage_hwpoison(h, head, page);
 		list_del(&head->lru);
 		h->free_huge_pages--;
 		h->free_huge_pages_node[nid]--;
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v8 08/12] mm/hugetlb: Flush work when dissolving hugetlb page
  2020-12-10  3:55 [PATCH v8 00/12] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (6 preceding siblings ...)
  2020-12-10  3:55 ` [PATCH v8 07/12] mm/hugetlb: Set the PageHWPoison to the raw error page Muchun Song
@ 2020-12-10  3:55 ` Muchun Song
  2020-12-10  3:55 ` [PATCH v8 09/12] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap Muchun Song
                   ` (4 subsequent siblings)
  12 siblings, 0 replies; 36+ messages in thread
From: Muchun Song @ 2020-12-10  3:55 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

We should flush work when dissolving a hugetlb page to make sure that
the hugetlb page is freed to the buddy.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 06157df08d8e..2e7a59b44364 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1326,6 +1326,12 @@ static void update_hpage_vmemmap_workfn(struct work_struct *work)
 }
 static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn);
 
+static inline void flush_hpage_update_work(struct hstate *h)
+{
+	if (free_vmemmap_pages_per_hpage(h))
+		flush_work(&hpage_update_work);
+}
+
 static inline void __update_and_free_page(struct hstate *h, struct page *page)
 {
 	/* No need to allocate vmemmap pages */
@@ -1865,6 +1871,7 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
 int dissolve_free_huge_page(struct page *page)
 {
 	int rc = -EBUSY;
+	struct hstate *h = NULL;
 
 	/* Not to disrupt normal path by vainly holding hugetlb_lock */
 	if (!PageHuge(page))
@@ -1878,8 +1885,9 @@ int dissolve_free_huge_page(struct page *page)
 
 	if (!page_count(page)) {
 		struct page *head = compound_head(page);
-		struct hstate *h = page_hstate(head);
 		int nid = page_to_nid(head);
+
+		h = page_hstate(head);
 		if (h->free_huge_pages - h->resv_huge_pages == 0)
 			goto out;
 
@@ -1893,6 +1901,14 @@ int dissolve_free_huge_page(struct page *page)
 	}
 out:
 	spin_unlock(&hugetlb_lock);
+
+	/*
+	 * We should flush work before return to make sure that
+	 * the HugeTLB page is freed to the buddy.
+	 */
+	if (!rc && h)
+		flush_hpage_update_work(h);
+
 	return rc;
 }
 
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v8 09/12] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
  2020-12-10  3:55 [PATCH v8 00/12] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (7 preceding siblings ...)
  2020-12-10  3:55 ` [PATCH v8 08/12] mm/hugetlb: Flush work when dissolving hugetlb page Muchun Song
@ 2020-12-10  3:55 ` Muchun Song
  2020-12-10 10:04   ` Oscar Salvador
  2020-12-10  3:55 ` [PATCH v8 10/12] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
                   ` (3 subsequent siblings)
  12 siblings, 1 reply; 36+ messages in thread
From: Muchun Song @ 2020-12-10  3:55 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Add a kernel parameter hugetlb_free_vmemmap to disable the feature of
freeing unused vmemmap pages associated with each hugetlb page on boot.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 Documentation/admin-guide/kernel-parameters.txt |  9 +++++++++
 Documentation/admin-guide/mm/hugetlbpage.rst    |  3 +++
 arch/x86/mm/init_64.c                           |  8 ++++++--
 include/linux/hugetlb.h                         | 19 +++++++++++++++++++
 mm/hugetlb_vmemmap.c                            | 16 ++++++++++++++++
 5 files changed, 53 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 3ae25630a223..9e6854f21d55 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1551,6 +1551,15 @@
 			Documentation/admin-guide/mm/hugetlbpage.rst.
 			Format: size[KMG]
 
+	hugetlb_free_vmemmap=
+			[KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
+			this controls freeing unused vmemmap pages associated
+			with each HugeTLB page.
+			Format: { on | off (default) }
+
+			on:  enable the feature
+			off: disable the feature
+
 	hung_task_panic=
 			[KNL] Should the hung task detector generate panics.
 			Format: 0 | 1
diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index f7b1c7462991..6a8b57f6d3b7 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -145,6 +145,9 @@ default_hugepagesz
 
 	will all result in 256 2M huge pages being allocated.  Valid default
 	huge page size is architecture dependent.
+hugetlb_free_vmemmap
+	When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
+	unused vmemmap pages associated each HugeTLB page.
 
 When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
 indicates the current number of pre-allocated huge pages of the default size.
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0435bee2e172..fcdc020904a8 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -34,6 +34,7 @@
 #include <linux/gfp.h>
 #include <linux/kcore.h>
 #include <linux/bootmem_info.h>
+#include <linux/hugetlb.h>
 
 #include <asm/processor.h>
 #include <asm/bios_ebda.h>
@@ -1557,7 +1558,9 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 {
 	int err;
 
-	if (end - start < PAGES_PER_SECTION * sizeof(struct page))
+	if (is_hugetlb_free_vmemmap_enabled())
+		err = vmemmap_populate_basepages(start, end, node, NULL);
+	else if (end - start < PAGES_PER_SECTION * sizeof(struct page))
 		err = vmemmap_populate_basepages(start, end, node, NULL);
 	else if (boot_cpu_has(X86_FEATURE_PSE))
 		err = vmemmap_populate_hugepages(start, end, node, altmap);
@@ -1610,7 +1613,8 @@ void register_page_bootmem_memmap(unsigned long section_nr,
 		}
 		get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);
 
-		if (!boot_cpu_has(X86_FEATURE_PSE)) {
+		if (!boot_cpu_has(X86_FEATURE_PSE) ||
+		    is_hugetlb_free_vmemmap_enabled()) {
 			next = (addr + PAGE_SIZE) & PAGE_MASK;
 			pmd = pmd_offset(pud, addr);
 			if (pmd_none(*pmd))
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index ebca2ef02212..7f47f0eeca3b 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -770,6 +770,20 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
 }
 #endif
 
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+extern bool hugetlb_free_vmemmap_enabled;
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return hugetlb_free_vmemmap_enabled;
+}
+#else
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return false;
+}
+#endif
+
 #else	/* CONFIG_HUGETLB_PAGE */
 struct hstate {};
 
@@ -923,6 +937,11 @@ static inline void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr
 					pte_t *ptep, pte_t pte, unsigned long sz)
 {
 }
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return false;
+}
 #endif	/* CONFIG_HUGETLB_PAGE */
 
 static inline spinlock_t *huge_pte_lock(struct hstate *h,
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 4587a0062808..f0926b382338 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -204,6 +204,22 @@ typedef void (*vmemmap_remap_pte_func_t)(struct page *reuse, pte_t *pte,
 					 unsigned long start, unsigned long end,
 					 void *priv);
 
+bool hugetlb_free_vmemmap_enabled;
+
+static int __init early_hugetlb_free_vmemmap_param(char *buf)
+{
+	if (!buf)
+		return -EINVAL;
+
+	/* We cannot optimize if a "struct page" crosses page boundaries. */
+	if (!strcmp(buf, "on"))
+		hugetlb_free_vmemmap_enabled = true;
+	else if (strcmp(buf, "off"))
+		return -EINVAL;
+
+	return 0;
+}
+early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
 
 static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h)
 {
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v8 10/12] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  2020-12-10  3:55 [PATCH v8 00/12] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (8 preceding siblings ...)
  2020-12-10  3:55 ` [PATCH v8 09/12] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap Muchun Song
@ 2020-12-10  3:55 ` Muchun Song
  2020-12-10 10:15   ` Oscar Salvador
  2020-12-10  3:55 ` [PATCH v8 11/12] mm/hugetlb: Gather discrete indexes of tail page Muchun Song
                   ` (2 subsequent siblings)
  12 siblings, 1 reply; 36+ messages in thread
From: Muchun Song @ 2020-12-10  3:55 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

All the infrastructure is ready, so we introduce nr_free_vmemmap_pages
field in the hstate to indicate how many vmemmap pages associated with
a HugeTLB page that we can free to buddy allocator. And initialize it
in the hugetlb_vmemmap_init(). This patch is actual enablement of the
feature.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 include/linux/hugetlb.h |  3 +++
 mm/hugetlb.c            |  1 +
 mm/hugetlb_vmemmap.c    | 29 +++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h    | 10 ++++++----
 4 files changed, 39 insertions(+), 4 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 7f47f0eeca3b..66d82ae7b712 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -492,6 +492,9 @@ struct hstate {
 	unsigned int nr_huge_pages_node[MAX_NUMNODES];
 	unsigned int free_huge_pages_node[MAX_NUMNODES];
 	unsigned int surplus_huge_pages_node[MAX_NUMNODES];
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	unsigned int nr_free_vmemmap_pages;
+#endif
 #ifdef CONFIG_CGROUP_HUGETLB
 	/* cgroup control files */
 	struct cftype cgroup_files_dfl[7];
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 2e7a59b44364..6440367a71b6 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3327,6 +3327,7 @@ void __init hugetlb_add_hstate(unsigned int order)
 	h->next_nid_to_free = first_memory_node;
 	snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB",
 					huge_page_size(h)/1024);
+	hugetlb_vmemmap_init(h);
 
 	parsed_hstate = h;
 }
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index f0926b382338..36a2e2db7913 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -421,3 +421,32 @@ void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 
 	free_vmemmap_page_list(&vmemmap_pages);
 }
+
+void __init hugetlb_vmemmap_init(struct hstate *h)
+{
+	unsigned int nr_pages = pages_per_huge_page(h);
+	unsigned int vmemmap_pages;
+
+	/* We cannot optimize if a "struct page" crosses page boundaries. */
+	if (!is_power_of_2(sizeof(struct page)))
+		return;
+
+	if (!hugetlb_free_vmemmap_enabled)
+		return;
+
+	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
+	/*
+	 * The head page and the first tail page are not to be freed to buddy
+	 * system, the others page will map to the first tail page. So there
+	 * are the remaining pages that can be freed.
+	 *
+	 * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? It is true
+	 * on some architectures (e.g. aarch64). See Documentation/arm64/
+	 * hugetlbpage.rst for more details.
+	 */
+	if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR))
+		h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
+
+	pr_info("can free %d vmemmap pages for %s\n", h->nr_free_vmemmap_pages,
+		h->name);
+}
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 8fd57c49e230..0a1c0d33a316 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -11,21 +11,23 @@
 #include <linux/hugetlb.h>
 
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+void hugetlb_vmemmap_init(struct hstate *h);
 void alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
 
 /*
  * How many vmemmap pages associated with a HugeTLB page that can be freed
  * to the buddy allocator.
- *
- * Todo: Now it is zero, because all infrastructure is not ready. Once all the
- * infrastructure is ready, we will rework this function to support the feature.
  */
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
-	return 0;
+	return h->nr_free_vmemmap_pages;
 }
 #else
+static inline void hugetlb_vmemmap_init(struct hstate *h)
+{
+}
+
 static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v8 11/12] mm/hugetlb: Gather discrete indexes of tail page
  2020-12-10  3:55 [PATCH v8 00/12] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (9 preceding siblings ...)
  2020-12-10  3:55 ` [PATCH v8 10/12] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
@ 2020-12-10  3:55 ` Muchun Song
  2020-12-10  3:55 ` [PATCH v8 12/12] mm/hugetlb: Optimize the code with the help of the compiler Muchun Song
  2020-12-10  9:18 ` [PATCH v8 00/12] Free some vmemmap pages of HugeTLB page Oscar Salvador
  12 siblings, 0 replies; 36+ messages in thread
From: Muchun Song @ 2020-12-10  3:55 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

For HugeTLB page, there are more metadata to save in the struct page.
But the head struct page cannot meet our needs, so we have to abuse
other tail struct page to store the metadata. In order to avoid
conflicts caused by subsequent use of more tail struct pages, we can
gather these discrete indexes of tail struct page. In this case, it
will be easier to add a new tail page index later.

There are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct
page structs can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP, so
add a BUILD_BUG_ON to catch invalid usage of the tail struct page.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/hugetlb.h        | 13 +++++++++++++
 include/linux/hugetlb_cgroup.h | 15 +++++++++------
 mm/hugetlb.c                   | 16 ++++++++--------
 mm/hugetlb_vmemmap.c           |  8 ++++++++
 4 files changed, 38 insertions(+), 14 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 66d82ae7b712..7295f6b3d55e 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -28,6 +28,19 @@ typedef struct { unsigned long pd; } hugepd_t;
 #include <linux/shm.h>
 #include <asm/tlbflush.h>
 
+enum {
+	SUBPAGE_INDEX_ACTIVE = 1,	/* reuse page flags of PG_private */
+	SUBPAGE_INDEX_TEMPORARY,	/* reuse page->mapping */
+#ifdef CONFIG_CGROUP_HUGETLB
+	SUBPAGE_INDEX_CGROUP = SUBPAGE_INDEX_TEMPORARY,/* reuse page->private */
+	SUBPAGE_INDEX_CGROUP_RSVD,	/* reuse page->private */
+#endif
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	SUBPAGE_INDEX_HWPOISON,		/* reuse page->private */
+#endif
+	NR_USED_SUBPAGE,
+};
+
 struct hugepage_subpool {
 	spinlock_t lock;
 	long count;
diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index 2ad6e92f124a..3d3c1c49efe4 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -24,8 +24,9 @@ struct file_region;
 /*
  * Minimum page order trackable by hugetlb cgroup.
  * At least 4 pages are necessary for all the tracking information.
- * The second tail page (hpage[2]) is the fault usage cgroup.
- * The third tail page (hpage[3]) is the reservation usage cgroup.
+ * The second tail page (hpage[SUBPAGE_INDEX_CGROUP]) is the fault
+ * usage cgroup. The third tail page (hpage[SUBPAGE_INDEX_CGROUP_RSVD])
+ * is the reservation usage cgroup.
  */
 #define HUGETLB_CGROUP_MIN_ORDER	2
 
@@ -66,9 +67,9 @@ __hugetlb_cgroup_from_page(struct page *page, bool rsvd)
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return NULL;
 	if (rsvd)
-		return (struct hugetlb_cgroup *)page[3].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP_RSVD);
 	else
-		return (struct hugetlb_cgroup *)page[2].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP);
 }
 
 static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
@@ -90,9 +91,11 @@ static inline int __set_hugetlb_cgroup(struct page *page,
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return -1;
 	if (rsvd)
-		page[3].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP_RSVD,
+				 (unsigned long)h_cg);
 	else
-		page[2].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP,
+				 (unsigned long)h_cg);
 	return 0;
 }
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 6440367a71b6..e38fee45afd3 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1361,7 +1361,7 @@ static inline void subpage_hwpoison_deliver(struct hstate *h, struct page *head)
 		return;
 
 	if (PageHWPoison(head))
-		page = head + page_private(head + 4);
+		page = head + page_private(head + SUBPAGE_INDEX_HWPOISON);
 
 	/*
 	 * Move PageHWPoison flag from head page to the raw error page,
@@ -1380,7 +1380,7 @@ static inline void set_subpage_hwpoison(struct hstate *h, struct page *head,
 		return;
 
 	if (free_vmemmap_pages_per_hpage(h)) {
-		set_page_private(head + 4, page - head);
+		set_page_private(head + SUBPAGE_INDEX_HWPOISON, page - head);
 		return;
 	}
 
@@ -1460,20 +1460,20 @@ struct hstate *size_to_hstate(unsigned long size)
 bool page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHuge(page), page);
-	return PageHead(page) && PagePrivate(&page[1]);
+	return PageHead(page) && PagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /* never called for tail page */
 static void set_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	SetPagePrivate(&page[1]);
+	SetPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 static void clear_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	ClearPagePrivate(&page[1]);
+	ClearPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /*
@@ -1485,17 +1485,17 @@ static inline bool PageHugeTemporary(struct page *page)
 	if (!PageHuge(page))
 		return false;
 
-	return (unsigned long)page[2].mapping == -1U;
+	return (unsigned long)page[SUBPAGE_INDEX_TEMPORARY].mapping == -1U;
 }
 
 static inline void SetPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = (void *)-1U;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = (void *)-1U;
 }
 
 static inline void ClearPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = NULL;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = NULL;
 }
 
 static void __free_huge_page(struct page *page)
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 36a2e2db7913..7f0b9e002be4 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -427,6 +427,14 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	unsigned int nr_pages = pages_per_huge_page(h);
 	unsigned int vmemmap_pages;
 
+	/*
+	 * There are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct
+	 * page structs can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP, so
+	 * add a BUILD_BUG_ON to catch invalid usage of the tail struct page.
+	 */
+	BUILD_BUG_ON(NR_USED_SUBPAGE >=
+		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
+
 	/* We cannot optimize if a "struct page" crosses page boundaries. */
 	if (!is_power_of_2(sizeof(struct page)))
 		return;
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* [PATCH v8 12/12] mm/hugetlb: Optimize the code with the help of the compiler
  2020-12-10  3:55 [PATCH v8 00/12] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (10 preceding siblings ...)
  2020-12-10  3:55 ` [PATCH v8 11/12] mm/hugetlb: Gather discrete indexes of tail page Muchun Song
@ 2020-12-10  3:55 ` Muchun Song
  2020-12-10 10:25   ` Oscar Salvador
  2020-12-10  9:18 ` [PATCH v8 00/12] Free some vmemmap pages of HugeTLB page Oscar Salvador
  12 siblings, 1 reply; 36+ messages in thread
From: Muchun Song @ 2020-12-10  3:55 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

We cannot optimize if a "struct page" crosses page boundaries. If
it is true, we can optimize the code with the help of a compiler.
When free_vmemmap_pages_per_hpage() returns zero, most functions are
optimized by the compiler.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/hugetlb.h | 3 ++-
 mm/hugetlb_vmemmap.c    | 3 +++
 mm/hugetlb_vmemmap.h    | 2 +-
 3 files changed, 6 insertions(+), 2 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 7295f6b3d55e..adc17765e0e9 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -791,7 +791,8 @@ extern bool hugetlb_free_vmemmap_enabled;
 
 static inline bool is_hugetlb_free_vmemmap_enabled(void)
 {
-	return hugetlb_free_vmemmap_enabled;
+	return hugetlb_free_vmemmap_enabled &&
+	       is_power_of_2(sizeof(struct page));
 }
 #else
 static inline bool is_hugetlb_free_vmemmap_enabled(void)
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 7f0b9e002be4..819ab9bb9298 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -208,6 +208,9 @@ bool hugetlb_free_vmemmap_enabled;
 
 static int __init early_hugetlb_free_vmemmap_param(char *buf)
 {
+	if (!is_power_of_2(sizeof(struct page)))
+		return 0;
+
 	if (!buf)
 		return -EINVAL;
 
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 0a1c0d33a316..5f5e90c81cd2 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -21,7 +21,7 @@ void free_huge_page_vmemmap(struct hstate *h, struct page *head);
  */
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
-	return h->nr_free_vmemmap_pages;
+	return h->nr_free_vmemmap_pages && is_power_of_2(sizeof(struct page));
 }
 #else
 static inline void hugetlb_vmemmap_init(struct hstate *h)
-- 
2.11.0



^ permalink raw reply related	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 00/12] Free some vmemmap pages of HugeTLB page
  2020-12-10  3:55 [PATCH v8 00/12] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (11 preceding siblings ...)
  2020-12-10  3:55 ` [PATCH v8 12/12] mm/hugetlb: Optimize the code with the help of the compiler Muchun Song
@ 2020-12-10  9:18 ` Oscar Salvador
  12 siblings, 0 replies; 36+ messages in thread
From: Oscar Salvador @ 2020-12-10  9:18 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, song.bao.hua, david,
	duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On Thu, Dec 10, 2020 at 11:55:14AM +0800, Muchun Song wrote:
> Muchun Song (12):
>   mm/memory_hotplug: Factor out bootmem core functions to bootmem_info.c
>   mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
>   mm/bootmem_info: Introduce free_bootmem_page helper
>   mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page
>   mm/hugetlb: Defer freeing of HugeTLB pages
>   mm/hugetlb: Allocate the vmemmap pages associated with each HugeTLB
>     page
>   mm/hugetlb: Set the PageHWPoison to the raw error page
>   mm/hugetlb: Flush work when dissolving hugetlb page
>   mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
>   mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
>   mm/hugetlb: Gather discrete indexes of tail page
>   mm/hugetlb: Optimize the code with the help of the compiler

Well, we went from 24 patches down to 12 patches.
Not bad at all :-)

I will have a look later

Thanks


-- 
Oscar Salvador
SUSE L3


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 09/12] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
  2020-12-10  3:55 ` [PATCH v8 09/12] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap Muchun Song
@ 2020-12-10 10:04   ` Oscar Salvador
  2020-12-10 12:26     ` [External] " Muchun Song
  0 siblings, 1 reply; 36+ messages in thread
From: Oscar Salvador @ 2020-12-10 10:04 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, song.bao.hua, david,
	duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On Thu, Dec 10, 2020 at 11:55:23AM +0800, Muchun Song wrote:
> +hugetlb_free_vmemmap
> +	When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
> +	unused vmemmap pages associated each HugeTLB page.
                                      ^^^ with

> -	if (end - start < PAGES_PER_SECTION * sizeof(struct page))
> +	if (is_hugetlb_free_vmemmap_enabled())
> +		err = vmemmap_populate_basepages(start, end, node, NULL);
> +	else if (end - start < PAGES_PER_SECTION * sizeof(struct page))
>  		err = vmemmap_populate_basepages(start, end, node, NULL);

Not sure if joining those in an OR makes se.se

>  	else if (boot_cpu_has(X86_FEATURE_PSE))
>  		err = vmemmap_populate_hugepages(start, end, node, altmap);
> @@ -1610,7 +1613,8 @@ void register_page_bootmem_memmap(unsigned long section_nr,
>  		}
>  		get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);
>  
> -		if (!boot_cpu_has(X86_FEATURE_PSE)) {
> +		if (!boot_cpu_has(X86_FEATURE_PSE) ||
> +		    is_hugetlb_free_vmemmap_enabled()) {

I would add a variable at the beginning called "basepages_populated"
that holds the result of those two conditions.
I am not sure if it slightly improves the code as the conditions do
not need to be rechecked, but the readibility a bit.

> +bool hugetlb_free_vmemmap_enabled;
> +
> +static int __init early_hugetlb_free_vmemmap_param(char *buf)
> +{
> +	if (!buf)
> +		return -EINVAL;
> +
> +	/* We cannot optimize if a "struct page" crosses page boundaries. */

I think this comment belongs to the last patch.


-- 
Oscar Salvador
SUSE L3


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 10/12] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  2020-12-10  3:55 ` [PATCH v8 10/12] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
@ 2020-12-10 10:15   ` Oscar Salvador
  2020-12-10 12:32     ` [External] " Muchun Song
  0 siblings, 1 reply; 36+ messages in thread
From: Oscar Salvador @ 2020-12-10 10:15 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, song.bao.hua, david,
	duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On Thu, Dec 10, 2020 at 11:55:24AM +0800, Muchun Song wrote:
> +void __init hugetlb_vmemmap_init(struct hstate *h)
> +{
> +	unsigned int nr_pages = pages_per_huge_page(h);
> +	unsigned int vmemmap_pages;
> +
> +	/* We cannot optimize if a "struct page" crosses page boundaries. */
> +	if (!is_power_of_2(sizeof(struct page)))
> +		return;
> +
> +	if (!hugetlb_free_vmemmap_enabled)
> +		return;

I think it would make sense to squash the last patch and this one.
As per the last patch, if "struct page" is not power of 2,
early_hugetlb_free_vmemmap_param() does not set
hugetlb_free_vmemmap_enabled, so the "!is_power_of_2" check from above
would become useless here.
We know that in order for hugetlb_free_vmemmap_enabled to become true,
the is_power_of_2 must have succeed early on when calling the early_
function.

-- 
Oscar Salvador
SUSE L3


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 12/12] mm/hugetlb: Optimize the code with the help of the compiler
  2020-12-10  3:55 ` [PATCH v8 12/12] mm/hugetlb: Optimize the code with the help of the compiler Muchun Song
@ 2020-12-10 10:25   ` Oscar Salvador
  2020-12-10 12:14     ` [External] " Muchun Song
  0 siblings, 1 reply; 36+ messages in thread
From: Oscar Salvador @ 2020-12-10 10:25 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, song.bao.hua, david,
	duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On 2020-12-10 04:55, Muchun Song wrote:
> We cannot optimize if a "struct page" crosses page boundaries. If
> it is true, we can optimize the code with the help of a compiler.
> When free_vmemmap_pages_per_hpage() returns zero, most functions are
> optimized by the compiler.

As I said earlier, I would squash this patch with patch#10 and
remove the !is_power_of_2 check in hugetlb_vmemmap_init and leave
only the check for the boot parameter.
That should be enough.

>  static inline bool is_hugetlb_free_vmemmap_enabled(void)
>  {
> -	return hugetlb_free_vmemmap_enabled;
> +	return hugetlb_free_vmemmap_enabled &&
> +	       is_power_of_2(sizeof(struct page));

Why? hugetlb_free_vmemmap_enabled can only become true
if the is_power_of_2 check succeeds in early_hugetlb_free_vmemmap_param.
The "is_power_of_2" check here can go.

> diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
> index 0a1c0d33a316..5f5e90c81cd2 100644
> --- a/mm/hugetlb_vmemmap.h
> +++ b/mm/hugetlb_vmemmap.h
> @@ -21,7 +21,7 @@ void free_huge_page_vmemmap(struct hstate *h, struct
> page *head);
>   */
>  static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate 
> *h)
>  {
> -	return h->nr_free_vmemmap_pages;
> +	return h->nr_free_vmemmap_pages && is_power_of_2(sizeof(struct 
> page));

If hugetlb_free_vmemmap_enabled is false, hugetlb_vmemmap_init() leaves
h->nr_free_vmemmap_pages unset to 0, so no need for the is_power_of_2 
check here.


-- 
Oscar Salvador
SUSE L3


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 07/12] mm/hugetlb: Set the PageHWPoison to the raw error page
  2020-12-10  3:55 ` [PATCH v8 07/12] mm/hugetlb: Set the PageHWPoison to the raw error page Muchun Song
@ 2020-12-10 11:11   ` Muchun Song
  2020-12-11 13:36   ` Oscar Salvador
  1 sibling, 0 replies; 36+ messages in thread
From: Muchun Song @ 2020-12-10 11:11 UTC (permalink / raw)
  To: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand
  Cc: Xiongchun duan, linux-doc, LKML, Linux Memory Management List,
	linux-fsdevel

On Thu, Dec 10, 2020 at 11:58 AM Muchun Song <songmuchun@bytedance.com> wrote:
>
> Because we reuse the first tail vmemmap page frame and remap it
> with read-only, we cannot set the PageHWPosion on a tail page.
> So we can use the head[4].mapping to record the real error page
                              ^^^
                             private

A typo. Will update the next version. Thanks.

> index and set the raw error page PageHWPoison later.
>
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  mm/hugetlb.c | 52 ++++++++++++++++++++++++++++++++++++++++++++--------
>  1 file changed, 44 insertions(+), 8 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 542e6cb81321..06157df08d8e 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1347,6 +1347,47 @@ static inline void __update_and_free_page(struct hstate *h, struct page *page)
>                 schedule_work(&hpage_update_work);
>  }
>
> +static inline void subpage_hwpoison_deliver(struct hstate *h, struct page *head)
> +{
> +       struct page *page = head;
> +
> +       if (!free_vmemmap_pages_per_hpage(h))
> +               return;
> +
> +       if (PageHWPoison(head))
> +               page = head + page_private(head + 4);
> +
> +       /*
> +        * Move PageHWPoison flag from head page to the raw error page,
> +        * which makes any subpages rather than the error page reusable.
> +        */
> +       if (page != head) {
> +               SetPageHWPoison(page);
> +               ClearPageHWPoison(head);
> +       }
> +}
> +
> +static inline void set_subpage_hwpoison(struct hstate *h, struct page *head,
> +                                       struct page *page)
> +{
> +       if (!PageHWPoison(head))
> +               return;
> +
> +       if (free_vmemmap_pages_per_hpage(h)) {
> +               set_page_private(head + 4, page - head);
> +               return;
> +       }
> +
> +       /*
> +        * Move PageHWPoison flag from head page to the raw error page,
> +        * which makes any subpages rather than the error page reusable.
> +        */
> +       if (page != head) {
> +               SetPageHWPoison(page);
> +               ClearPageHWPoison(head);
> +       }
> +}
> +
>  static void update_and_free_page(struct hstate *h, struct page *page)
>  {
>         if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
> @@ -1363,6 +1404,7 @@ static void __free_hugepage(struct hstate *h, struct page *page)
>         int i;
>
>         alloc_huge_page_vmemmap(h, page);
> +       subpage_hwpoison_deliver(h, page);
>
>         for (i = 0; i < pages_per_huge_page(h); i++) {
>                 page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
> @@ -1840,14 +1882,8 @@ int dissolve_free_huge_page(struct page *page)
>                 int nid = page_to_nid(head);
>                 if (h->free_huge_pages - h->resv_huge_pages == 0)
>                         goto out;
> -               /*
> -                * Move PageHWPoison flag from head page to the raw error page,
> -                * which makes any subpages rather than the error page reusable.
> -                */
> -               if (PageHWPoison(head) && page != head) {
> -                       SetPageHWPoison(page);
> -                       ClearPageHWPoison(head);
> -               }
> +
> +               set_subpage_hwpoison(h, head, page);
>                 list_del(&head->lru);
>                 h->free_huge_pages--;
>                 h->free_huge_pages_node[nid]--;
> --
> 2.11.0
>


-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [External] Re: [PATCH v8 12/12] mm/hugetlb: Optimize the code with the help of the compiler
  2020-12-10 10:25   ` Oscar Salvador
@ 2020-12-10 12:14     ` Muchun Song
  2020-12-10 13:16       ` Oscar Salvador
  0 siblings, 1 reply; 36+ messages in thread
From: Muchun Song @ 2020-12-10 12:14 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Thu, Dec 10, 2020 at 7:39 PM Oscar Salvador <osalvador@suse.de> wrote:
>
> On 2020-12-10 04:55, Muchun Song wrote:
> > We cannot optimize if a "struct page" crosses page boundaries. If
> > it is true, we can optimize the code with the help of a compiler.
> > When free_vmemmap_pages_per_hpage() returns zero, most functions are
> > optimized by the compiler.
>
> As I said earlier, I would squash this patch with patch#10 and
> remove the !is_power_of_2 check in hugetlb_vmemmap_init and leave
> only the check for the boot parameter.
> That should be enough.

Yeah, you are right. I just want the compiler to do optimization.

>
> >  static inline bool is_hugetlb_free_vmemmap_enabled(void)
> >  {
> > -     return hugetlb_free_vmemmap_enabled;
> > +     return hugetlb_free_vmemmap_enabled &&
> > +            is_power_of_2(sizeof(struct page));
>
> Why? hugetlb_free_vmemmap_enabled can only become true
> if the is_power_of_2 check succeeds in early_hugetlb_free_vmemmap_param.
> The "is_power_of_2" check here can go.
>
> > diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
> > index 0a1c0d33a316..5f5e90c81cd2 100644
> > --- a/mm/hugetlb_vmemmap.h
> > +++ b/mm/hugetlb_vmemmap.h
> > @@ -21,7 +21,7 @@ void free_huge_page_vmemmap(struct hstate *h, struct
> > page *head);
> >   */
> >  static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate
> > *h)
> >  {
> > -     return h->nr_free_vmemmap_pages;
> > +     return h->nr_free_vmemmap_pages && is_power_of_2(sizeof(struct
> > page));
>
> If hugetlb_free_vmemmap_enabled is false, hugetlb_vmemmap_init() leaves
> h->nr_free_vmemmap_pages unset to 0, so no need for the is_power_of_2
> check here.

Yeah, you are right. But if we do this check can make the code simple.

For example, here is a code snippet.

void func(void)
{
        if (free_vmemmap_pages_per_hpage())
                return;
        /* Do something */
}

With this patch, the func will be optimized to null when is_power_of_2
returns false.

void func(void)
{
}

Without this patch, the compiler cannot do this optimization.

Thanks.

>
>
> --
> Oscar Salvador
> SUSE L3



-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [External] Re: [PATCH v8 09/12] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
  2020-12-10 10:04   ` Oscar Salvador
@ 2020-12-10 12:26     ` Muchun Song
  0 siblings, 0 replies; 36+ messages in thread
From: Muchun Song @ 2020-12-10 12:26 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Thu, Dec 10, 2020 at 7:41 PM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Thu, Dec 10, 2020 at 11:55:23AM +0800, Muchun Song wrote:
> > +hugetlb_free_vmemmap
> > +     When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
> > +     unused vmemmap pages associated each HugeTLB page.
>                                       ^^^ with

Thanks.

>
> > -     if (end - start < PAGES_PER_SECTION * sizeof(struct page))
> > +     if (is_hugetlb_free_vmemmap_enabled())
> > +             err = vmemmap_populate_basepages(start, end, node, NULL);
> > +     else if (end - start < PAGES_PER_SECTION * sizeof(struct page))
> >               err = vmemmap_populate_basepages(start, end, node, NULL);
>
> Not sure if joining those in an OR makes se.se

Well, I can do it.

>
> >       else if (boot_cpu_has(X86_FEATURE_PSE))
> >               err = vmemmap_populate_hugepages(start, end, node, altmap);
> > @@ -1610,7 +1613,8 @@ void register_page_bootmem_memmap(unsigned long section_nr,
> >               }
> >               get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);
> >
> > -             if (!boot_cpu_has(X86_FEATURE_PSE)) {
> > +             if (!boot_cpu_has(X86_FEATURE_PSE) ||
> > +                 is_hugetlb_free_vmemmap_enabled()) {
>
> I would add a variable at the beginning called "basepages_populated"
> that holds the result of those two conditions.
> I am not sure if it slightly improves the code as the conditions do
> not need to be rechecked, but the readibility a bit.

Agree. The condition does not need to be rechecked.
Will  do in the next version. Thanks.

>
> > +bool hugetlb_free_vmemmap_enabled;
> > +
> > +static int __init early_hugetlb_free_vmemmap_param(char *buf)
> > +{
> > +     if (!buf)
> > +             return -EINVAL;
> > +
> > +     /* We cannot optimize if a "struct page" crosses page boundaries. */
>
> I think this comment belongs to the last patch.
>

Thanks.

>
> --
> Oscar Salvador
> SUSE L3



-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [External] Re: [PATCH v8 10/12] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  2020-12-10 10:15   ` Oscar Salvador
@ 2020-12-10 12:32     ` Muchun Song
  0 siblings, 0 replies; 36+ messages in thread
From: Muchun Song @ 2020-12-10 12:32 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Thu, Dec 10, 2020 at 7:40 PM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Thu, Dec 10, 2020 at 11:55:24AM +0800, Muchun Song wrote:
> > +void __init hugetlb_vmemmap_init(struct hstate *h)
> > +{
> > +     unsigned int nr_pages = pages_per_huge_page(h);
> > +     unsigned int vmemmap_pages;
> > +
> > +     /* We cannot optimize if a "struct page" crosses page boundaries. */
> > +     if (!is_power_of_2(sizeof(struct page)))
> > +             return;
> > +
> > +     if (!hugetlb_free_vmemmap_enabled)
> > +             return;
>
> I think it would make sense to squash the last patch and this one.
> As per the last patch, if "struct page" is not power of 2,
> early_hugetlb_free_vmemmap_param() does not set
> hugetlb_free_vmemmap_enabled, so the "!is_power_of_2" check from above
> would become useless here.
> We know that in order for hugetlb_free_vmemmap_enabled to become true,
> the is_power_of_2 must have succeed early on when calling the early_
> function.

Yeah, you are right. But if is_power_of_2 returns false. The compiler
can optimize this function to null. If we remove the check, it prevents
the compiler from optimizing the code of hugetlb_vmemmap_init().
So I think leaving it here makes sense. Right?

>
> --
> Oscar Salvador
> SUSE L3



-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [External] Re: [PATCH v8 12/12] mm/hugetlb: Optimize the code with the help of the compiler
  2020-12-10 12:14     ` [External] " Muchun Song
@ 2020-12-10 13:16       ` Oscar Salvador
  2020-12-10 13:29         ` Muchun Song
  0 siblings, 1 reply; 36+ messages in thread
From: Oscar Salvador @ 2020-12-10 13:16 UTC (permalink / raw)
  To: Muchun Song
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Thu, Dec 10, 2020 at 08:14:18PM +0800, Muchun Song wrote:
> Yeah, you are right. But if we do this check can make the code simple.
> 
> For example, here is a code snippet.
> 
> void func(void)
> {
>         if (free_vmemmap_pages_per_hpage())
>                 return;
>         /* Do something */
> }
> 
> With this patch, the func will be optimized to null when is_power_of_2
> returns false.
> 
> void func(void)
> {
> }
> 
> Without this patch, the compiler cannot do this optimization.

Ok, I misread the changelog.

So, then is_hugetlb_free_vmemmap_enabled, free_huge_page_vmemmap, 
free_vmemmap_pages_per_hpage and hugetlb_vmemmap_init are optimized
out, right?

-- 
Oscar Salvador
SUSE L3


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [External] Re: [PATCH v8 12/12] mm/hugetlb: Optimize the code with the help of the compiler
  2020-12-10 13:16       ` Oscar Salvador
@ 2020-12-10 13:29         ` Muchun Song
  2020-12-10 16:19           ` Muchun Song
  0 siblings, 1 reply; 36+ messages in thread
From: Muchun Song @ 2020-12-10 13:29 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Thu, Dec 10, 2020 at 9:16 PM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Thu, Dec 10, 2020 at 08:14:18PM +0800, Muchun Song wrote:
> > Yeah, you are right. But if we do this check can make the code simple.
> >
> > For example, here is a code snippet.
> >
> > void func(void)
> > {
> >         if (free_vmemmap_pages_per_hpage())
> >                 return;
> >         /* Do something */
> > }
> >
> > With this patch, the func will be optimized to null when is_power_of_2
> > returns false.
> >
> > void func(void)
> > {
> > }
> >
> > Without this patch, the compiler cannot do this optimization.
>
> Ok, I misread the changelog.
>
> So, then is_hugetlb_free_vmemmap_enabled, free_huge_page_vmemmap,
> free_vmemmap_pages_per_hpage and hugetlb_vmemmap_init are optimized
> out, right?

Yes, that's right. I have disassembled to make sure of this. Thanks.

>
> --
> Oscar Salvador
> SUSE L3



-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 03/12] mm/bootmem_info: Introduce free_bootmem_page helper
  2020-12-10  3:55 ` [PATCH v8 03/12] mm/bootmem_info: Introduce free_bootmem_page helper Muchun Song
@ 2020-12-10 14:15   ` Oscar Salvador
  2020-12-10 15:22     ` [External] " Muchun Song
  0 siblings, 1 reply; 36+ messages in thread
From: Oscar Salvador @ 2020-12-10 14:15 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, song.bao.hua, david,
	duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On Thu, Dec 10, 2020 at 11:55:17AM +0800, Muchun Song wrote:
> Any memory allocated via the memblock allocator and not via the buddy
> will be makred reserved already in the memmap. For those pages, we can
         marked
> call free_bootmem_page() to free it to buddy allocator.
> 
> Becasue we wan to free some vmemmap pages of the HugeTLB to the buddy
Because     want
> allocator, we can use this helper to do that in the later patchs.
                                                           patches

To be honest, I think if would be best to introduce this along with
patch#4, so we get to see where it gets used.

> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  include/linux/bootmem_info.h | 19 +++++++++++++++++++
>  1 file changed, 19 insertions(+)
> 
> diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
> index 4ed6dee1adc9..20a8b0df0c39 100644
> --- a/include/linux/bootmem_info.h
> +++ b/include/linux/bootmem_info.h
> @@ -3,6 +3,7 @@
>  #define __LINUX_BOOTMEM_INFO_H
>  
>  #include <linux/mmzone.h>
> +#include <linux/mm.h>

<linux/mm.h> already includes <linux/mmzone.h>

> +static inline void free_bootmem_page(struct page *page)
> +{
> +	unsigned long magic = (unsigned long)page->freelist;
> +
> +	/* bootmem page has reserved flag in the reserve_bootmem_region */
reserve_bootmem_region sets the reserved flag on bootmem pages?

> +	VM_WARN_ON(!PageReserved(page) || page_ref_count(page) != 2);

We do check for PageReserved in patch#4 before calling in here.
Do we need yet another check here? IOW, do we need to be this paranoid?

> +	if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
> +		put_page_bootmem(page);
> +	else
> +		WARN_ON(1);

Lately, some people have been complaining about using WARN_ON as some
systems come with panic_on_warn set.

I would say that in this case it does not matter much as if the vmemmap
pages are not either SECTION_INFO or MIX_SECTION_INFO it means that a
larger corruption happened elsewhere.

But I think I would align the checks here.
It does not make sense to me to only scream under DEBUG_VM if page's
refcount differs from 2, and have a WARN_ON if the page we are trying
to free was not used for the memmap array.
Both things imply a corruption, so I would set the checks under the same
configurations.

-- 
Oscar Salvador
SUSE L3


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 04/12] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page
  2020-12-10  3:55 ` [PATCH v8 04/12] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page Muchun Song
@ 2020-12-10 14:42   ` Oscar Salvador
  2020-12-10 14:44     ` Oscar Salvador
  2020-12-10 15:57     ` Muchun Song
  0 siblings, 2 replies; 36+ messages in thread
From: Oscar Salvador @ 2020-12-10 14:42 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, song.bao.hua, david,
	duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On Thu, Dec 10, 2020 at 11:55:18AM +0800, Muchun Song wrote:
> The free_vmemmap_pages_per_hpage() which indicate that how many vmemmap
> pages associated with a HugeTLB page that can be freed to the buddy
> allocator just returns zero now, because all infrastructure is not
> ready. Once all the infrastructure is ready, we will rework this
> function to support the feature.

I would reword the above to:

"free_vmemmap_pages_per_hpage(), which indicates how many vmemmap
 pages associated with a HugeTLB page can be freed, returns zero for
 now, which means the feature is disabled.
 We will enable it once all the infrastructure is there."

 Or something along those lines.

> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

Overall this looks good to me, and it has seen a considerable
simplification, which is good.
Some nits/questions below:


> +#define vmemmap_hpage_addr_end(addr, end)				 \
> +({									 \
> +	unsigned long __boundary;					 \
> +	__boundary = ((addr) + VMEMMAP_HPAGE_SIZE) & VMEMMAP_HPAGE_MASK; \
> +	(__boundary - 1 < (end) - 1) ? __boundary : (end);		 \
> +})

Maybe add a little comment explaining what are you trying to get here.

> +/*
> + * Walk a vmemmap address to the pmd it maps.
> + */
> +static pmd_t *vmemmap_to_pmd(unsigned long addr)
> +{
> +	pgd_t *pgd;
> +	p4d_t *p4d;
> +	pud_t *pud;
> +	pmd_t *pmd;
> +
> +	pgd = pgd_offset_k(addr);
> +	if (pgd_none(*pgd))
> +		return NULL;
> +
> +	p4d = p4d_offset(pgd, addr);
> +	if (p4d_none(*p4d))
> +		return NULL;
> +
> +	pud = pud_offset(p4d, addr);
> +	if (pud_none(*pud))
> +		return NULL;
> +
> +	pmd = pmd_offset(pud, addr);
> +	if (pmd_none(*pmd))
> +		return NULL;
> +
> +	return pmd;
> +}

I saw that some people suggested to put all the non-hugetlb vmemmap
functions under sparsemem-vmemmap.c, which makes some sense if some
feature is going to re-use this code somehow. (I am not sure if the
recent patches that take advantage of this feature for ZONE_DEVICE needs
something like this).

I do not have a strong opinion on this though.

> +static void vmemmap_reuse_pte_range(struct page *reuse, pte_t *pte,
> +				    unsigned long start, unsigned long end,
> +				    struct list_head *vmemmap_pages)
> +{
> +	/*
> +	 * Make the tail pages are mapped with read-only to catch
> +	 * illegal write operation to the tail pages.
> +	 */
> +	pgprot_t pgprot = PAGE_KERNEL_RO;
> +	pte_t entry = mk_pte(reuse, pgprot);
> +	unsigned long addr;
> +
> +	for (addr = start; addr < end; addr += PAGE_SIZE, pte++) {
> +		struct page *page;
> +
> +		VM_BUG_ON(pte_none(*pte));

If it is none, page will be NULL and we will crash in the list_add
below?

> +static void vmemmap_remap_range(unsigned long start, unsigned long end,
> +				struct list_head *vmemmap_pages)
> +{
> +	pmd_t *pmd;
> +	unsigned long next, addr = start;
> +	struct page *reuse = NULL;
> +
> +	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
> +	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
> +	VM_BUG_ON((start >> PUD_SHIFT) != (end >> PUD_SHIFT));
This last VM_BUG_ON, is to see if both fall under the same PUD table?

> +
> +	pmd = vmemmap_to_pmd(addr);
> +	BUG_ON(!pmd);

Which is the criteria you followed to make this BUG_ON and VM_BUG_ON
in the check from vmemmap_reuse_pte_range? 

-- 
Oscar Salvador
SUSE L3


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 04/12] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page
  2020-12-10 14:42   ` Oscar Salvador
@ 2020-12-10 14:44     ` Oscar Salvador
  2020-12-10 15:58       ` [External] " Muchun Song
  2020-12-10 15:57     ` Muchun Song
  1 sibling, 1 reply; 36+ messages in thread
From: Oscar Salvador @ 2020-12-10 14:44 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, song.bao.hua, david,
	duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On Thu, Dec 10, 2020 at 03:42:56PM +0100, Oscar Salvador wrote:
> On Thu, Dec 10, 2020 at 11:55:18AM +0800, Muchun Song wrote:
> > The free_vmemmap_pages_per_hpage() which indicate that how many vmemmap
> > pages associated with a HugeTLB page that can be freed to the buddy
> > allocator just returns zero now, because all infrastructure is not
> > ready. Once all the infrastructure is ready, we will rework this
> > function to support the feature.
> 
> I would reword the above to:
> 
> "free_vmemmap_pages_per_hpage(), which indicates how many vmemmap
>  pages associated with a HugeTLB page can be freed, returns zero for
>  now, which means the feature is disabled.
>  We will enable it once all the infrastructure is there."
> 
>  Or something along those lines.
> 
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> 
> Overall this looks good to me, and it has seen a considerable
> simplification, which is good.
> Some nits/questions below:

And as I said, I would merge patch#3 with this one.


-- 
Oscar Salvador
SUSE L3


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [External] Re: [PATCH v8 03/12] mm/bootmem_info: Introduce free_bootmem_page helper
  2020-12-10 14:15   ` Oscar Salvador
@ 2020-12-10 15:22     ` Muchun Song
  2020-12-10 15:26       ` Muchun Song
  0 siblings, 1 reply; 36+ messages in thread
From: Muchun Song @ 2020-12-10 15:22 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Thu, Dec 10, 2020 at 10:16 PM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Thu, Dec 10, 2020 at 11:55:17AM +0800, Muchun Song wrote:
> > Any memory allocated via the memblock allocator and not via the buddy
> > will be makred reserved already in the memmap. For those pages, we can
>          marked

Thanks.

> > call free_bootmem_page() to free it to buddy allocator.
> >
> > Becasue we wan to free some vmemmap pages of the HugeTLB to the buddy
> Because     want
> > allocator, we can use this helper to do that in the later patchs.
>                                                            patches
>

Thanks.

> To be honest, I think if would be best to introduce this along with
> patch#4, so we get to see where it gets used.
>
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> >  include/linux/bootmem_info.h | 19 +++++++++++++++++++
> >  1 file changed, 19 insertions(+)
> >
> > diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
> > index 4ed6dee1adc9..20a8b0df0c39 100644
> > --- a/include/linux/bootmem_info.h
> > +++ b/include/linux/bootmem_info.h
> > @@ -3,6 +3,7 @@
> >  #define __LINUX_BOOTMEM_INFO_H
> >
> >  #include <linux/mmzone.h>
> > +#include <linux/mm.h>
>
> <linux/mm.h> already includes <linux/mmzone.h>

Yeah. Can remove this.

>
> > +static inline void free_bootmem_page(struct page *page)
> > +{
> > +     unsigned long magic = (unsigned long)page->freelist;
> > +
> > +     /* bootmem page has reserved flag in the reserve_bootmem_region */
> reserve_bootmem_region sets the reserved flag on bootmem pages?

Right.

>
> > +     VM_WARN_ON(!PageReserved(page) || page_ref_count(page) != 2);
>
> We do check for PageReserved in patch#4 before calling in here.
> Do we need yet another check here? IOW, do we need to be this paranoid?

Yeah, do not need to check again. We can remove it.

>
> > +     if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
> > +             put_page_bootmem(page);
> > +     else
> > +             WARN_ON(1);
>
> Lately, some people have been complaining about using WARN_ON as some
> systems come with panic_on_warn set.
>
> I would say that in this case it does not matter much as if the vmemmap
> pages are not either SECTION_INFO or MIX_SECTION_INFO it means that a
> larger corruption happened elsewhere.
>
> But I think I would align the checks here.
> It does not make sense to me to only scream under DEBUG_VM if page's
> refcount differs from 2, and have a WARN_ON if the page we are trying
> to free was not used for the memmap array.
> Both things imply a corruption, so I would set the checks under the same
> configurations.

Do you suggest changing them all to VM_DEBUG_ON?

>
> --
> Oscar Salvador
> SUSE L3



-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [External] Re: [PATCH v8 03/12] mm/bootmem_info: Introduce free_bootmem_page helper
  2020-12-10 15:22     ` [External] " Muchun Song
@ 2020-12-10 15:26       ` Muchun Song
  0 siblings, 0 replies; 36+ messages in thread
From: Muchun Song @ 2020-12-10 15:26 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Thu, Dec 10, 2020 at 11:22 PM Muchun Song <songmuchun@bytedance.com> wrote:
>
> On Thu, Dec 10, 2020 at 10:16 PM Oscar Salvador <osalvador@suse.de> wrote:
> >
> > On Thu, Dec 10, 2020 at 11:55:17AM +0800, Muchun Song wrote:
> > > Any memory allocated via the memblock allocator and not via the buddy
> > > will be makred reserved already in the memmap. For those pages, we can
> >          marked
>
> Thanks.
>
> > > call free_bootmem_page() to free it to buddy allocator.
> > >
> > > Becasue we wan to free some vmemmap pages of the HugeTLB to the buddy
> > Because     want
> > > allocator, we can use this helper to do that in the later patchs.
> >                                                            patches
> >
>
> Thanks.
>
> > To be honest, I think if would be best to introduce this along with
> > patch#4, so we get to see where it gets used.
> >
> > > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > > ---
> > >  include/linux/bootmem_info.h | 19 +++++++++++++++++++
> > >  1 file changed, 19 insertions(+)
> > >
> > > diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
> > > index 4ed6dee1adc9..20a8b0df0c39 100644
> > > --- a/include/linux/bootmem_info.h
> > > +++ b/include/linux/bootmem_info.h
> > > @@ -3,6 +3,7 @@
> > >  #define __LINUX_BOOTMEM_INFO_H
> > >
> > >  #include <linux/mmzone.h>
> > > +#include <linux/mm.h>
> >
> > <linux/mm.h> already includes <linux/mmzone.h>
>
> Yeah. Can remove this.
>
> >
> > > +static inline void free_bootmem_page(struct page *page)
> > > +{
> > > +     unsigned long magic = (unsigned long)page->freelist;
> > > +
> > > +     /* bootmem page has reserved flag in the reserve_bootmem_region */
> > reserve_bootmem_region sets the reserved flag on bootmem pages?
>
> Right.
>
> >
> > > +     VM_WARN_ON(!PageReserved(page) || page_ref_count(page) != 2);
> >
> > We do check for PageReserved in patch#4 before calling in here.
> > Do we need yet another check here? IOW, do we need to be this paranoid?
>
> Yeah, do not need to check again. We can remove it.
>
> >
> > > +     if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
> > > +             put_page_bootmem(page);
> > > +     else
> > > +             WARN_ON(1);
> >
> > Lately, some people have been complaining about using WARN_ON as some
> > systems come with panic_on_warn set.
> >
> > I would say that in this case it does not matter much as if the vmemmap
> > pages are not either SECTION_INFO or MIX_SECTION_INFO it means that a
> > larger corruption happened elsewhere.
> >
> > But I think I would align the checks here.
> > It does not make sense to me to only scream under DEBUG_VM if page's
> > refcount differs from 2, and have a WARN_ON if the page we are trying
> > to free was not used for the memmap array.
> > Both things imply a corruption, so I would set the checks under the same
> > configurations.
>
> Do you suggest changing them all to VM_DEBUG_ON?

Or VM_WARN_ON?

>
> >
> > --
> > Oscar Salvador
> > SUSE L3
>
>
>
> --
> Yours,
> Muchun



-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [External] Re: [PATCH v8 04/12] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page
  2020-12-10 14:42   ` Oscar Salvador
  2020-12-10 14:44     ` Oscar Salvador
@ 2020-12-10 15:57     ` Muchun Song
  1 sibling, 0 replies; 36+ messages in thread
From: Muchun Song @ 2020-12-10 15:57 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Thu, Dec 10, 2020 at 10:43 PM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Thu, Dec 10, 2020 at 11:55:18AM +0800, Muchun Song wrote:
> > The free_vmemmap_pages_per_hpage() which indicate that how many vmemmap
> > pages associated with a HugeTLB page that can be freed to the buddy
> > allocator just returns zero now, because all infrastructure is not
> > ready. Once all the infrastructure is ready, we will rework this
> > function to support the feature.
>
> I would reword the above to:
>
> "free_vmemmap_pages_per_hpage(), which indicates how many vmemmap
>  pages associated with a HugeTLB page can be freed, returns zero for
>  now, which means the feature is disabled.
>  We will enable it once all the infrastructure is there."

Thanks for your suggestion.

>
>  Or something along those lines.
>
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>
> Overall this looks good to me, and it has seen a considerable
> simplification, which is good.
> Some nits/questions below:
>
>
> > +#define vmemmap_hpage_addr_end(addr, end)                             \
> > +({                                                                    \
> > +     unsigned long __boundary;                                        \
> > +     __boundary = ((addr) + VMEMMAP_HPAGE_SIZE) & VMEMMAP_HPAGE_MASK; \
> > +     (__boundary - 1 < (end) - 1) ? __boundary : (end);               \
> > +})
>
> Maybe add a little comment explaining what are you trying to get here.

OK. Will do.

>
> > +/*
> > + * Walk a vmemmap address to the pmd it maps.
> > + */
> > +static pmd_t *vmemmap_to_pmd(unsigned long addr)
> > +{
> > +     pgd_t *pgd;
> > +     p4d_t *p4d;
> > +     pud_t *pud;
> > +     pmd_t *pmd;
> > +
> > +     pgd = pgd_offset_k(addr);
> > +     if (pgd_none(*pgd))
> > +             return NULL;
> > +
> > +     p4d = p4d_offset(pgd, addr);
> > +     if (p4d_none(*p4d))
> > +             return NULL;
> > +
> > +     pud = pud_offset(p4d, addr);
> > +     if (pud_none(*pud))
> > +             return NULL;
> > +
> > +     pmd = pmd_offset(pud, addr);
> > +     if (pmd_none(*pmd))
> > +             return NULL;
> > +
> > +     return pmd;
> > +}
>
> I saw that some people suggested to put all the non-hugetlb vmemmap
> functions under sparsemem-vmemmap.c, which makes some sense if some
> feature is going to re-use this code somehow. (I am not sure if the
> recent patches that take advantage of this feature for ZONE_DEVICE needs
> something like this).
>
> I do not have a strong opinion on this though.

Yeah, I also thought about this. I prefer moving the common code to
the sparsemem-vmemmap.c. If more people agree with this, I can do
this in the next version. :)

>
> > +static void vmemmap_reuse_pte_range(struct page *reuse, pte_t *pte,
> > +                                 unsigned long start, unsigned long end,
> > +                                 struct list_head *vmemmap_pages)
> > +{
> > +     /*
> > +      * Make the tail pages are mapped with read-only to catch
> > +      * illegal write operation to the tail pages.
> > +      */
> > +     pgprot_t pgprot = PAGE_KERNEL_RO;
> > +     pte_t entry = mk_pte(reuse, pgprot);
> > +     unsigned long addr;
> > +
> > +     for (addr = start; addr < end; addr += PAGE_SIZE, pte++) {
> > +             struct page *page;
> > +
> > +             VM_BUG_ON(pte_none(*pte));
>
> If it is none, page will be NULL and we will crash in the list_add
> below?

Yeah, I think that here should be a BUG_ON.

>
> > +static void vmemmap_remap_range(unsigned long start, unsigned long end,
> > +                             struct list_head *vmemmap_pages)
> > +{
> > +     pmd_t *pmd;
> > +     unsigned long next, addr = start;
> > +     struct page *reuse = NULL;
> > +
> > +     VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
> > +     VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
> > +     VM_BUG_ON((start >> PUD_SHIFT) != (end >> PUD_SHIFT));
> This last VM_BUG_ON, is to see if both fall under the same PUD table?

Right.

>
> > +
> > +     pmd = vmemmap_to_pmd(addr);
> > +     BUG_ON(!pmd);
>
> Which is the criteria you followed to make this BUG_ON and VM_BUG_ON
> in the check from vmemmap_reuse_pte_range?

Indeed, I am somewhat confused. Should be unified. I should use
BUG_ON here and in vmemmap_reuse_pte_range.

>
> --
> Oscar Salvador
> SUSE L3



-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [External] Re: [PATCH v8 04/12] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page
  2020-12-10 14:44     ` Oscar Salvador
@ 2020-12-10 15:58       ` Muchun Song
  0 siblings, 0 replies; 36+ messages in thread
From: Muchun Song @ 2020-12-10 15:58 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Thu, Dec 10, 2020 at 10:44 PM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Thu, Dec 10, 2020 at 03:42:56PM +0100, Oscar Salvador wrote:
> > On Thu, Dec 10, 2020 at 11:55:18AM +0800, Muchun Song wrote:
> > > The free_vmemmap_pages_per_hpage() which indicate that how many vmemmap
> > > pages associated with a HugeTLB page that can be freed to the buddy
> > > allocator just returns zero now, because all infrastructure is not
> > > ready. Once all the infrastructure is ready, we will rework this
> > > function to support the feature.
> >
> > I would reword the above to:
> >
> > "free_vmemmap_pages_per_hpage(), which indicates how many vmemmap
> >  pages associated with a HugeTLB page can be freed, returns zero for
> >  now, which means the feature is disabled.
> >  We will enable it once all the infrastructure is there."
> >
> >  Or something along those lines.
> >
> > > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> >
> > Overall this looks good to me, and it has seen a considerable
> > simplification, which is good.
> > Some nits/questions below:
>
> And as I said, I would merge patch#3 with this one.

Will do. Thanks.

>
>
> --
> Oscar Salvador
> SUSE L3



-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [External] Re: [PATCH v8 12/12] mm/hugetlb: Optimize the code with the help of the compiler
  2020-12-10 13:29         ` Muchun Song
@ 2020-12-10 16:19           ` Muchun Song
  0 siblings, 0 replies; 36+ messages in thread
From: Muchun Song @ 2020-12-10 16:19 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Thu, Dec 10, 2020 at 9:29 PM Muchun Song <songmuchun@bytedance.com> wrote:
>
> On Thu, Dec 10, 2020 at 9:16 PM Oscar Salvador <osalvador@suse.de> wrote:
> >
> > On Thu, Dec 10, 2020 at 08:14:18PM +0800, Muchun Song wrote:
> > > Yeah, you are right. But if we do this check can make the code simple.
> > >
> > > For example, here is a code snippet.
> > >
> > > void func(void)
> > > {
> > >         if (free_vmemmap_pages_per_hpage())
> > >                 return;
> > >         /* Do something */
> > > }
> > >
> > > With this patch, the func will be optimized to null when is_power_of_2
> > > returns false.
> > >
> > > void func(void)
> > > {
> > > }
> > >
> > > Without this patch, the compiler cannot do this optimization.
> >
> > Ok, I misread the changelog.
> >
> > So, then is_hugetlb_free_vmemmap_enabled, free_huge_page_vmemmap,
> > free_vmemmap_pages_per_hpage and hugetlb_vmemmap_init are optimized
> > out, right?
>
> Yes, that's right. I have disassembled to make sure of this. Thanks.

Hi Oscar,

Because this is an optimization for code, I leave it in this
separate patch. Do you still suggest squash this with
patch#10? Thanks.

>
> >
> > --
> > Oscar Salvador
> > SUSE L3
>
>
>
> --
> Yours,
> Muchun



-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 06/12] mm/hugetlb: Allocate the vmemmap pages associated with each HugeTLB page
  2020-12-10  3:55 ` [PATCH v8 06/12] mm/hugetlb: Allocate the vmemmap pages associated with each HugeTLB page Muchun Song
@ 2020-12-11  9:35   ` Oscar Salvador
  2020-12-11 10:52     ` David Hildenbrand
  2020-12-11 13:01     ` [External] " Muchun Song
  0 siblings, 2 replies; 36+ messages in thread
From: Oscar Salvador @ 2020-12-11  9:35 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, song.bao.hua, david,
	duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On Thu, Dec 10, 2020 at 11:55:20AM +0800, Muchun Song wrote:
> When we free a HugeTLB page to the buddy allocator, we should allocate the
> vmemmap pages associated with it. We can do that in the __free_hugepage()
"vmemmap pages that describe the range" would look better to me, but it is ok.

> +#define GFP_VMEMMAP_PAGE		\
> +	(GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_HIGH | __GFP_NOWARN)
>  
>  #ifndef VMEMMAP_HPAGE_SHIFT
>  #define VMEMMAP_HPAGE_SHIFT		HPAGE_SHIFT
> @@ -197,6 +200,11 @@
>  	(__boundary - 1 < (end) - 1) ? __boundary : (end);		 \
>  })
>  
> +typedef void (*vmemmap_remap_pte_func_t)(struct page *reuse, pte_t *pte,
> +					 unsigned long start, unsigned long end,
> +					 void *priv);

Any reason to not have defined GFP_VMEMMAP_PAGE and the new typedef into
hugetlb_vmemmap.h?

  
> +static void vmemmap_restore_pte_range(struct page *reuse, pte_t *pte,
> +				      unsigned long start, unsigned long end,
> +				      void *priv)
> +{
> +	pgprot_t pgprot = PAGE_KERNEL;
> +	void *from = page_to_virt(reuse);
> +	unsigned long addr;
> +	struct list_head *pages = priv;
[...]
> +
> +		/*
> +		 * Make sure that any data that writes to the @to is made
> +		 * visible to the physical page.
> +		 */
> +		flush_kernel_vmap_range(to, PAGE_SIZE);

Correct me if I am wrong, but flush_kernel_vmap_range is a NOOP under arches which
do not have ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE.
Since we only enable support for x86_64, and x86_64 is one of those arches,
could we remove this, and introduced later on in case we enable this feature
on an arch that needs it?

I am not sure if you need to flush the range somehow, as you did in
vmemmap_remap_range.

> +retry:
> +		page = alloc_page(GFP_VMEMMAP_PAGE);
> +		if (unlikely(!page)) {
> +			msleep(100);
> +			/*
> +			 * We should retry infinitely, because we cannot
> +			 * handle allocation failures. Once we allocate
> +			 * vmemmap pages successfully, then we can free
> +			 * a HugeTLB page.
> +			 */
> +			goto retry;

I think this is the trickiest part.
With 2MB HugeTLB pages we only need 6 pages, but with 1GB, the number of pages
we need to allocate increases significantly (4088 pages IIRC).
And you are using __GFP_HIGH, which will allow us to use more memory (by
cutting down the watermark), but it might lead to putting the system
on its knees wrt. memory.
And yes, I know that once we allocate the 4088 pages, 1GB gets freed, but
still.

I would like to hear Michal's thoughts on this one, but I wonder if it makes
sense to not let 1GB-HugeTLB pages be freed.

-- 
Oscar Salvador
SUSE L3


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 06/12] mm/hugetlb: Allocate the vmemmap pages associated with each HugeTLB page
  2020-12-11  9:35   ` Oscar Salvador
@ 2020-12-11 10:52     ` David Hildenbrand
  2020-12-11 13:01     ` [External] " Muchun Song
  1 sibling, 0 replies; 36+ messages in thread
From: David Hildenbrand @ 2020-12-11 10:52 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Muchun Song, corbet, mike.kravetz, tglx, mingo, bp, x86, hpa,
	dave.hansen, luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, song.bao.hua, david,
	duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel


> Am 11.12.2020 um 10:35 schrieb Oscar Salvador <osalvador@suse.de>:
> 
> On Thu, Dec 10, 2020 at 11:55:20AM +0800, Muchun Song wrote:
>> When we free a HugeTLB page to the buddy allocator, we should allocate the
>> vmemmap pages associated with it. We can do that in the __free_hugepage()
> "vmemmap pages that describe the range" would look better to me, but it is ok.
> 
>> +#define GFP_VMEMMAP_PAGE        \
>> +    (GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_HIGH | __GFP_NOWARN)
>> 
>> #ifndef VMEMMAP_HPAGE_SHIFT
>> #define VMEMMAP_HPAGE_SHIFT        HPAGE_SHIFT
>> @@ -197,6 +200,11 @@
>>    (__boundary - 1 < (end) - 1) ? __boundary : (end);         \
>> })
>> 
>> +typedef void (*vmemmap_remap_pte_func_t)(struct page *reuse, pte_t *pte,
>> +                     unsigned long start, unsigned long end,
>> +                     void *priv);
> 
> Any reason to not have defined GFP_VMEMMAP_PAGE and the new typedef into
> hugetlb_vmemmap.h?
> 
> 
>> +static void vmemmap_restore_pte_range(struct page *reuse, pte_t *pte,
>> +                      unsigned long start, unsigned long end,
>> +                      void *priv)
>> +{
>> +    pgprot_t pgprot = PAGE_KERNEL;
>> +    void *from = page_to_virt(reuse);
>> +    unsigned long addr;
>> +    struct list_head *pages = priv;
> [...]
>> +
>> +        /*
>> +         * Make sure that any data that writes to the @to is made
>> +         * visible to the physical page.
>> +         */
>> +        flush_kernel_vmap_range(to, PAGE_SIZE);
> 
> Correct me if I am wrong, but flush_kernel_vmap_range is a NOOP under arches which
> do not have ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE.
> Since we only enable support for x86_64, and x86_64 is one of those arches,
> could we remove this, and introduced later on in case we enable this feature
> on an arch that needs it?
> 
> I am not sure if you need to flush the range somehow, as you did in
> vmemmap_remap_range.
> 
>> +retry:
>> +        page = alloc_page(GFP_VMEMMAP_PAGE);
>> +        if (unlikely(!page)) {
>> +            msleep(100);
>> +            /*
>> +             * We should retry infinitely, because we cannot
>> +             * handle allocation failures. Once we allocate
>> +             * vmemmap pages successfully, then we can free
>> +             * a HugeTLB page.
>> +             */
>> +            goto retry;
> 
> I think this is the trickiest part.
> With 2MB HugeTLB pages we only need 6 pages, but with 1GB, the number of pages
> we need to allocate increases significantly (4088 pages IIRC).
> And you are using __GFP_HIGH, which will allow us to use more memory (by
> cutting down the watermark), but it might lead to putting the system
> on its knees wrt. memory.
> And yes, I know that once we allocate the 4088 pages, 1GB gets freed, but
> still.

Similar to memory hotplug, no? I don‘t think this is really an issue that cannot be mitigated. Yeah, we might want to tweak allocation flags.

> 
> I would like to hear Michal's thoughts on this one, but I wonder if it makes
> sense to not let 1GB-HugeTLB pages be freed.
> 
> -- 
> Oscar Salvador
> SUSE L3
> 



^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [External] Re: [PATCH v8 06/12] mm/hugetlb: Allocate the vmemmap pages associated with each HugeTLB page
  2020-12-11  9:35   ` Oscar Salvador
  2020-12-11 10:52     ` David Hildenbrand
@ 2020-12-11 13:01     ` Muchun Song
  1 sibling, 0 replies; 36+ messages in thread
From: Muchun Song @ 2020-12-11 13:01 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Fri, Dec 11, 2020 at 5:35 PM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Thu, Dec 10, 2020 at 11:55:20AM +0800, Muchun Song wrote:
> > When we free a HugeTLB page to the buddy allocator, we should allocate the
> > vmemmap pages associated with it. We can do that in the __free_hugepage()
> "vmemmap pages that describe the range" would look better to me, but it is ok.

Thanks.

>
> > +#define GFP_VMEMMAP_PAGE             \
> > +     (GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_HIGH | __GFP_NOWARN)
> >
> >  #ifndef VMEMMAP_HPAGE_SHIFT
> >  #define VMEMMAP_HPAGE_SHIFT          HPAGE_SHIFT
> > @@ -197,6 +200,11 @@
> >       (__boundary - 1 < (end) - 1) ? __boundary : (end);               \
> >  })
> >
> > +typedef void (*vmemmap_remap_pte_func_t)(struct page *reuse, pte_t *pte,
> > +                                      unsigned long start, unsigned long end,
> > +                                      void *priv);
>
> Any reason to not have defined GFP_VMEMMAP_PAGE and the new typedef into
> hugetlb_vmemmap.h?

Because they can only be used in this hugetlb_vmemmap.c.

>
>
> > +static void vmemmap_restore_pte_range(struct page *reuse, pte_t *pte,
> > +                                   unsigned long start, unsigned long end,
> > +                                   void *priv)
> > +{
> > +     pgprot_t pgprot = PAGE_KERNEL;
> > +     void *from = page_to_virt(reuse);
> > +     unsigned long addr;
> > +     struct list_head *pages = priv;
> [...]
> > +
> > +             /*
> > +              * Make sure that any data that writes to the @to is made
> > +              * visible to the physical page.
> > +              */
> > +             flush_kernel_vmap_range(to, PAGE_SIZE);
>
> Correct me if I am wrong, but flush_kernel_vmap_range is a NOOP under arches which
> do not have ARCH_HAS_FLUSH_KERNEL_DCACHE_PAGE.
> Since we only enable support for x86_64, and x86_64 is one of those arches,
> could we remove this, and introduced later on in case we enable this feature
> on an arch that needs it?

OK. Will remove.

>
> I am not sure if you need to flush the range somehow, as you did in
> vmemmap_remap_range.
>
> > +retry:
> > +             page = alloc_page(GFP_VMEMMAP_PAGE);
> > +             if (unlikely(!page)) {
> > +                     msleep(100);
> > +                     /*
> > +                      * We should retry infinitely, because we cannot
> > +                      * handle allocation failures. Once we allocate
> > +                      * vmemmap pages successfully, then we can free
> > +                      * a HugeTLB page.
> > +                      */
> > +                     goto retry;
>
> I think this is the trickiest part.
> With 2MB HugeTLB pages we only need 6 pages, but with 1GB, the number of pages
> we need to allocate increases significantly (4088 pages IIRC).
> And you are using __GFP_HIGH, which will allow us to use more memory (by
> cutting down the watermark), but it might lead to putting the system
> on its knees wrt. memory.
> And yes, I know that once we allocate the 4088 pages, 1GB gets freed, but
> still.

Yeah, it is a problem. How about removing __GFP_HIGH only for
1GB HugeTLB page?

>
> I would like to hear Michal's thoughts on this one, but I wonder if it makes
> sense to not let 1GB-HugeTLB pages be freed.
>
> --
> Oscar Salvador
> SUSE L3



-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [PATCH v8 07/12] mm/hugetlb: Set the PageHWPoison to the raw error page
  2020-12-10  3:55 ` [PATCH v8 07/12] mm/hugetlb: Set the PageHWPoison to the raw error page Muchun Song
  2020-12-10 11:11   ` Muchun Song
@ 2020-12-11 13:36   ` Oscar Salvador
  2020-12-11 14:08     ` [External] " Muchun Song
  1 sibling, 1 reply; 36+ messages in thread
From: Oscar Salvador @ 2020-12-11 13:36 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, song.bao.hua, david,
	duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On Thu, Dec 10, 2020 at 11:55:21AM +0800, Muchun Song wrote:
> +static inline void subpage_hwpoison_deliver(struct hstate *h, struct page *head)
> +{
> +	struct page *page = head;
> +
> +	if (!free_vmemmap_pages_per_hpage(h))
> +		return;
> +
> +	if (PageHWPoison(head))
> +		page = head + page_private(head + 4);
> +
> +	/*
> +	 * Move PageHWPoison flag from head page to the raw error page,
> +	 * which makes any subpages rather than the error page reusable.
> +	 */
> +	if (page != head) {
> +		SetPageHWPoison(page);
> +		ClearPageHWPoison(head);
> +	}
> +}

I would make the names coherent.
I am not definitely goot at names, but something like:
hwpoison_subpage_{foo,bar} looks better.

Also, could not subpage_hwpoison_deliver be rewritten like:

  static inline void subpage_hwpoison_deliver(struct hstate *h, struct page *head)
  {
       struct page *page;
  
       if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
               return;
  
       page = head + page_private(head + 4);
       /*
        * Move PageHWPoison flag from head page to the raw error page,
        * which makes any subpages rather than the error page reusable.
        */
       if (page != head) {
               SetPageHWPoison(page);
               ClearPageHWPoison(head);
       }
  }

I think it is better code-wise.

> +	 * Move PageHWPoison flag from head page to the raw error page,
> +	 * which makes any subpages rather than the error page reusable.
> +	 */
> +	if (page != head) {
> +		SetPageHWPoison(page);
> +		ClearPageHWPoison(head);
> +	}

I would put this in an else-if above:

	if (free_vmemmap_pages_per_hpage(h)) {
		set_page_private(head + 4, page - head);
	        return;
	} else if (page != head) {
		SetPageHWPoison(page);
		ClearPageHWPoison(head);
	}

or will we lose the optimization in case free_vmemmap_pages_per_hpage gets compiled out?


-- 
Oscar Salvador
SUSE L3


^ permalink raw reply	[flat|nested] 36+ messages in thread

* Re: [External] Re: [PATCH v8 07/12] mm/hugetlb: Set the PageHWPoison to the raw error page
  2020-12-11 13:36   ` Oscar Salvador
@ 2020-12-11 14:08     ` Muchun Song
  0 siblings, 0 replies; 36+ messages in thread
From: Muchun Song @ 2020-12-11 14:08 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Fri, Dec 11, 2020 at 9:36 PM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Thu, Dec 10, 2020 at 11:55:21AM +0800, Muchun Song wrote:
> > +static inline void subpage_hwpoison_deliver(struct hstate *h, struct page *head)
> > +{
> > +     struct page *page = head;
> > +
> > +     if (!free_vmemmap_pages_per_hpage(h))
> > +             return;
> > +
> > +     if (PageHWPoison(head))
> > +             page = head + page_private(head + 4);
> > +
> > +     /*
> > +      * Move PageHWPoison flag from head page to the raw error page,
> > +      * which makes any subpages rather than the error page reusable.
> > +      */
> > +     if (page != head) {
> > +             SetPageHWPoison(page);
> > +             ClearPageHWPoison(head);
> > +     }
> > +}
>
> I would make the names coherent.
> I am not definitely goot at names, but something like:
> hwpoison_subpage_{foo,bar} looks better.

It's better than mine. Thank you.

>
> Also, could not subpage_hwpoison_deliver be rewritten like:
>
>   static inline void subpage_hwpoison_deliver(struct hstate *h, struct page *head)
>   {
>        struct page *page;
>
>        if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
>                return;
>
>        page = head + page_private(head + 4);
>        /*
>         * Move PageHWPoison flag from head page to the raw error page,
>         * which makes any subpages rather than the error page reusable.
>         */
>        if (page != head) {
>                SetPageHWPoison(page);
>                ClearPageHWPoison(head);
>        }
>   }
>
> I think it is better code-wise.

Will do. Thank you.

>
> > +      * Move PageHWPoison flag from head page to the raw error page,
> > +      * which makes any subpages rather than the error page reusable.
> > +      */
> > +     if (page != head) {
> > +             SetPageHWPoison(page);
> > +             ClearPageHWPoison(head);
> > +     }
>
> I would put this in an else-if above:
>
>         if (free_vmemmap_pages_per_hpage(h)) {
>                 set_page_private(head + 4, page - head);
>                 return;
>         } else if (page != head) {
>                 SetPageHWPoison(page);
>                 ClearPageHWPoison(head);
>         }
>
> or will we lose the optimization in case free_vmemmap_pages_per_hpage gets compiled out?
>

Either is OK. The compiler will help us optimize the code when
free_vmemmap_pages_per_hpage always returns false.

Thanks for your suggestions. :-)

>
> --
> Oscar Salvador
> SUSE L3



-- 
Yours,
Muchun


^ permalink raw reply	[flat|nested] 36+ messages in thread

end of thread, other threads:[~2020-12-11 15:13 UTC | newest]

Thread overview: 36+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-12-10  3:55 [PATCH v8 00/12] Free some vmemmap pages of HugeTLB page Muchun Song
2020-12-10  3:55 ` [PATCH v8 01/12] mm/memory_hotplug: Factor out bootmem core functions to bootmem_info.c Muchun Song
2020-12-10  3:55 ` [PATCH v8 02/12] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP Muchun Song
2020-12-10  3:55 ` [PATCH v8 03/12] mm/bootmem_info: Introduce free_bootmem_page helper Muchun Song
2020-12-10 14:15   ` Oscar Salvador
2020-12-10 15:22     ` [External] " Muchun Song
2020-12-10 15:26       ` Muchun Song
2020-12-10  3:55 ` [PATCH v8 04/12] mm/hugetlb: Free the vmemmap pages associated with each HugeTLB page Muchun Song
2020-12-10 14:42   ` Oscar Salvador
2020-12-10 14:44     ` Oscar Salvador
2020-12-10 15:58       ` [External] " Muchun Song
2020-12-10 15:57     ` Muchun Song
2020-12-10  3:55 ` [PATCH v8 05/12] mm/hugetlb: Defer freeing of HugeTLB pages Muchun Song
2020-12-10  3:55 ` [PATCH v8 06/12] mm/hugetlb: Allocate the vmemmap pages associated with each HugeTLB page Muchun Song
2020-12-11  9:35   ` Oscar Salvador
2020-12-11 10:52     ` David Hildenbrand
2020-12-11 13:01     ` [External] " Muchun Song
2020-12-10  3:55 ` [PATCH v8 07/12] mm/hugetlb: Set the PageHWPoison to the raw error page Muchun Song
2020-12-10 11:11   ` Muchun Song
2020-12-11 13:36   ` Oscar Salvador
2020-12-11 14:08     ` [External] " Muchun Song
2020-12-10  3:55 ` [PATCH v8 08/12] mm/hugetlb: Flush work when dissolving hugetlb page Muchun Song
2020-12-10  3:55 ` [PATCH v8 09/12] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap Muchun Song
2020-12-10 10:04   ` Oscar Salvador
2020-12-10 12:26     ` [External] " Muchun Song
2020-12-10  3:55 ` [PATCH v8 10/12] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
2020-12-10 10:15   ` Oscar Salvador
2020-12-10 12:32     ` [External] " Muchun Song
2020-12-10  3:55 ` [PATCH v8 11/12] mm/hugetlb: Gather discrete indexes of tail page Muchun Song
2020-12-10  3:55 ` [PATCH v8 12/12] mm/hugetlb: Optimize the code with the help of the compiler Muchun Song
2020-12-10 10:25   ` Oscar Salvador
2020-12-10 12:14     ` [External] " Muchun Song
2020-12-10 13:16       ` Oscar Salvador
2020-12-10 13:29         ` Muchun Song
2020-12-10 16:19           ` Muchun Song
2020-12-10  9:18 ` [PATCH v8 00/12] Free some vmemmap pages of HugeTLB page Oscar Salvador

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).