* [PATCH v7 00/15] Free some vmemmap pages of hugetlb page
@ 2020-11-30 15:18 Muchun Song
  2020-11-30 15:18 ` [PATCH v7 01/15] mm/memory_hotplug: Move bootmem info registration API to bootmem_info.c Muchun Song
                   ` (15 more replies)
  0 siblings, 16 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-30 15:18 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Hi all,

This patch series frees some of the vmemmap pages (struct page structures)
associated with each HugeTLB page when it is preallocated, in order to save
memory.

In order to reduce the difficulty of code review, from this version onward we
disable the PMD/huge page mapping of vmemmap when this feature is enabled.
This actually eliminates a bunch of complex page table manipulation code.
Once this patch series is solid, we can add the vmemmap page table
manipulation code back in the future.

The struct page structures (page structs) are used to describe a physical
page frame. By default, there is a one-to-one mapping from a page frame to
its corresponding page struct.

HugeTLB pages consist of multiple base pages and are supported by many
architectures. See hugetlbpage.rst in the Documentation directory for more
details. On the x86 architecture, HugeTLB pages of size 2MB and 1GB are
currently supported. Since the base page size on x86 is 4KB, a 2MB HugeTLB
page consists of 512 base pages and a 1GB HugeTLB page consists of 262144
base pages. For each base page, there is a corresponding page struct.

Within the HugeTLB subsystem, only the first 4 page structs are used to
contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
provides this upper limit. The only 'useful' information in the remaining
page structs is the compound_head field, and this field is the same for all
tail pages.

By removing redundant page structs for HugeTLB pages, memory can be returned
to the buddy allocator for other uses.

When the system boots up, every 2MB HugeTLB page has 512 struct pages, which
occupy 8 pages of memory (sizeof(struct page) * 512 / PAGE_SIZE).

    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
 +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
 |           |                     |     0     | -------------> |     0     |
 |           |                     +-----------+                +-----------+
 |           |                     |     1     | -------------> |     1     |
 |           |                     +-----------+                +-----------+
 |           |                     |     2     | -------------> |     2     |
 |           |                     +-----------+                +-----------+
 |           |                     |     3     | -------------> |     3     |
 |           |                     +-----------+                +-----------+
 |           |                     |     4     | -------------> |     4     |
 |    2MB    |                     +-----------+                +-----------+
 |           |                     |     5     | -------------> |     5     |
 |           |                     +-----------+                +-----------+
 |           |                     |     6     | -------------> |     6     |
 |           |                     +-----------+                +-----------+
 |           |                     |     7     | -------------> |     7     |
 |           |                     +-----------+                +-----------+
 |           |
 |           |
 |           |
 +-----------+

The value of page->compound_head is the same for all tail pages. The first
page of page structs (page 0) associated with the HugeTLB page contains the 4
page structs necessary to describe the HugeTLB. The only use of the remaining
pages of page structs (page 1 to page 7) is to point to page->compound_head.
Therefore, we can remap pages 2 to 7 to page 1. Only 2 pages of page structs
will be used for each HugeTLB page. This will allow us to free the remaining
6 pages to the buddy allocator.

Here is how things look after remapping.

    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
 +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
 |           |                     |     0     | -------------> |     0     |
 |           |                     +-----------+                +-----------+
 |           |                     |     1     | -------------> |     1     |
 |           |                     +-----------+                +-----------+
 |           |                     |     2     | ----------------^ ^ ^ ^ ^ ^
 |           |                     +-----------+                   | | | | |
 |           |                     |     3     | ------------------+ | | | |
 |           |                     +-----------+                     | | | |
 |           |                     |     4     | --------------------+ | | |
 |    2MB    |                     +-----------+                       | | |
 |           |                     |     5     | ----------------------+ | |
 |           |                     +-----------+                         | |
 |           |                     |     6     | ------------------------+ |
 |           |                     +-----------+                           |
 |           |                     |     7     | --------------------------+
 |           |                     +-----------+
 |           |
 |           |
 |           |
 +-----------+

When a HugeTLB page is freed to the buddy system, we must allocate 6 pages
for the vmemmap and restore the previous mapping relationship.

Apart from the 2MB HugeTLB page, we also have the 1GB HugeTLB page. It is
similar to the 2MB case, and we can use the same approach to free its
vmemmap pages.

In this case, for a 1GB HugeTLB page, we can save 4088 pages (there are 4096
pages of struct page structs; we reserve 2 pages for the vmemmap and 8 pages
for page tables). This is a very substantial gain. On our server, we run some
SPDK/QEMU applications which use 1024GB of HugeTLB pages. With this feature
enabled, we can save ~16GB (1GB hugepages) / ~11GB (2MB hugepages; the worst
case is 10GB while the best is 12GB) of memory.

Because the vmemmap page tables are reconstructed on the freeing/allocating
path, some overhead is added. Here is an analysis of that overhead.

1) Allocating 10240 2MB hugetlb pages.

   a) With this patch series applied:
   # time echo 10240 > /proc/sys/vm/nr_hugepages

   real     0m0.166s
   user     0m0.000s
   sys      0m0.166s

   # bpftrace -e 'kprobe:alloc_fresh_huge_page { @start[tid] = nsecs; } kretprobe:alloc_fresh_huge_page /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [8K, 16K)           8360 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [16K, 32K)          1868 |@@@@@@@@@@@                                         |
   [32K, 64K)            10 |                                                    |
   [64K, 128K)            2 |                                                    |

   b) Without this patch series:
   # time echo 10240 > /proc/sys/vm/nr_hugepages

   real     0m0.066s
   user     0m0.000s
   sys      0m0.066s

   # bpftrace -e 'kprobe:alloc_fresh_huge_page { @start[tid] = nsecs; } kretprobe:alloc_fresh_huge_page /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [4K, 8K)           10176 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [8K, 16K)             62 |                                                    |
   [16K, 32K)             2 |                                                    |

   Summary: allocation with this feature is about 2x slower than before.

2) Freeing 10240 2MB hugetlb pages.

   a) With this patch series applied:
   # time echo 0 > /proc/sys/vm/nr_hugepages

   real     0m0.004s
   user     0m0.000s
   sys      0m0.002s

   # bpftrace -e 'kprobe:__free_hugepage { @start[tid] = nsecs; } kretprobe:__free_hugepage /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [16K, 32K)         10240 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|

   b) Without this patch series:
   # time echo 0 > /proc/sys/vm/nr_hugepages

   real     0m0.077s
   user     0m0.001s
   sys      0m0.075s

   # bpftrace -e 'kprobe:__free_hugepage { @start[tid] = nsecs; } kretprobe:__free_hugepage /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [4K, 8K)            9950 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [8K, 16K)            287 |@                                                   |
   [16K, 32K)             3 |                                                    |

   Summary: __free_hugepage is about 2-4x slower than before. But according
            to the allocation test above, I think the real slowdown here is
            also about 2x.

            Why is the 'real' time smaller with the patches applied? Because
            in this patch series the freeing of HugeTLB pages is asynchronous
            (done via a kworker).

Although the overhead has increased, it is not significant. As Mike said,
"However, remember that the majority of use cases create hugetlb pages at
or shortly after boot time and add them to the pool. So, additional overhead is
at pool creation time. There is no change to 'normal run time' operations of
getting a page from or returning a page to the pool (think page fault/unmap)".

Todo:
  1. Free all of the tail vmemmap pages
     Currently, for a 2MB HugeTLB page, we only free 6 vmemmap pages, but we
     could really free 7. In that case, 8 of the 512 struct page structures
     would have the PG_head flag set, so we would need to adjust
     compound_head() slightly to make it return the real head struct page
     when the parameter is a tail struct page that has PG_head set.

     To keep the code evolution route clear, this feature can be a separate
     patch after this patchset is solid.
  2. Support for other architectures (e.g. aarch64).
  3. Enable PMD/huge page mapping of vmemmap even when this feature is
     enabled.

Changelog in v7:
  1. Rebase to linux-next 20201130
  2. Do not use base page mapping for vmemmap when this feature is disabled.
     Thanks to Oscar and Barry.
  3. Reworked some patches.
     [PATCH v6 08/16] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
     [PATCH v6 10/16] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page

Changelog in v6:
  1. Disable PMD/huge page mapping of vmemmap if this feature was enabled.
  2. Simplify the first version code.

Changelog in v5:
  1. Reworked some comments and code in [PATCH v4 04/21] and [PATCH v4 05/21].
     Thanks to Mike and Oscar's suggestions.

Changelog in v4:
  1. Move all the vmemmap functions to hugetlb_vmemmap.c.
  2. Make the CONFIG_HUGETLB_PAGE_FREE_VMEMMAP default to y, if we want to
     disable this feature, we should disable it by a boot/kernel command line.
  3. Remove vmemmap_pgtable_{init, deposit, withdraw}() helper functions.
  4. Initialize page table lock for vmemmap through core_initcall mechanism.

  Thanks for Mike and Oscar's suggestions.

Changelog in v3:
  1. Renamed some helper functions. Thanks Mike.
  2. Reworked some code. Thanks Mike and Oscar.
  3. Remap the tail vmemmap page with PAGE_KERNEL_RO instead of
     PAGE_KERNEL. Thanks Matthew.
  4. Add some overhead analysis in the cover letter.
  5. Use the vmemmap PMD table lock instead of a hugetlb-specific global lock.

Changelog in v2:
  1. Fix: do not call dissolve_compound_page() in alloc_huge_page_vmemmap().
  2. Fix some typos and code style problems.
  3. Remove unused handle_vmemmap_fault().
  4. Merge some commits to one commit suggested by Mike.

Muchun Song (15):
  mm/memory_hotplug: Move bootmem info registration API to
    bootmem_info.c
  mm/memory_hotplug: Move {get,put}_page_bootmem() to bootmem_info.c
  mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  mm/bootmem_info: Introduce {free,prepare}_vmemmap_page()
  mm/hugetlb: Disable freeing vmemmap if struct page size is not power
    of two
  x86/mm/64: Disable PMD page mapping of vmemmap
  mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
  mm/hugetlb: Defer freeing of HugeTLB pages
  mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb
    page
  mm/hugetlb: Set the PageHWPoison to the raw error page
  mm/hugetlb: Flush work when dissolving hugetlb page
  mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
  mm/hugetlb: Gather discrete indexes of tail page
  mm/hugetlb: Add BUILD_BUG_ON to catch invalid usage of tail struct
    page

 Documentation/admin-guide/kernel-parameters.txt |   9 +
 Documentation/admin-guide/mm/hugetlbpage.rst    |   3 +
 arch/x86/mm/init_64.c                           |  13 +-
 fs/Kconfig                                      |  14 +
 include/linux/bootmem_info.h                    |  64 +++++
 include/linux/hugetlb.h                         |  35 +++
 include/linux/hugetlb_cgroup.h                  |  15 +-
 include/linux/memory_hotplug.h                  |  27 --
 mm/Makefile                                     |   2 +
 mm/bootmem_info.c                               | 124 ++++++++
 mm/hugetlb.c                                    | 144 ++++++++--
 mm/hugetlb_vmemmap.c                            | 367 ++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h                            |  79 +++++
 mm/memory_hotplug.c                             | 116 --------
 mm/sparse.c                                     |   1 +
 15 files changed, 834 insertions(+), 179 deletions(-)
 create mode 100644 include/linux/bootmem_info.h
 create mode 100644 mm/bootmem_info.c
 create mode 100644 mm/hugetlb_vmemmap.c
 create mode 100644 mm/hugetlb_vmemmap.h

-- 
2.11.0


^ permalink raw reply	[flat|nested] 49+ messages in thread

* [PATCH v7 01/15] mm/memory_hotplug: Move bootmem info registration API to bootmem_info.c
  2020-11-30 15:18 [PATCH v7 00/15] Free some vmemmap pages of hugetlb page Muchun Song
@ 2020-11-30 15:18 ` Muchun Song
  2020-12-07 12:12   ` David Hildenbrand
  2020-11-30 15:18 ` [PATCH v7 02/15] mm/memory_hotplug: Move {get,put}_page_bootmem() " Muchun Song
                   ` (14 subsequent siblings)
  15 siblings, 1 reply; 49+ messages in thread
From: Muchun Song @ 2020-11-30 15:18 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Move the common bootmem info registration API into a separate file,
bootmem_info.c, for use by later patches. This is just code movement without
any functional change.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
---
 arch/x86/mm/init_64.c          |  1 +
 include/linux/bootmem_info.h   | 27 ++++++++++++
 include/linux/memory_hotplug.h | 23 ----------
 mm/Makefile                    |  1 +
 mm/bootmem_info.c              | 99 ++++++++++++++++++++++++++++++++++++++++++
 mm/memory_hotplug.c            | 91 +-------------------------------------
 6 files changed, 129 insertions(+), 113 deletions(-)
 create mode 100644 include/linux/bootmem_info.h
 create mode 100644 mm/bootmem_info.c

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index b5a3fa4033d3..c7f7ad55b625 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -33,6 +33,7 @@
 #include <linux/nmi.h>
 #include <linux/gfp.h>
 #include <linux/kcore.h>
+#include <linux/bootmem_info.h>
 
 #include <asm/processor.h>
 #include <asm/bios_ebda.h>
diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
new file mode 100644
index 000000000000..65bb9b23140f
--- /dev/null
+++ b/include/linux/bootmem_info.h
@@ -0,0 +1,27 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __LINUX_BOOTMEM_INFO_H
+#define __LINUX_BOOTMEM_INFO_H
+
+#include <linux/mmzone.h>
+
+/*
+ * Types for free bootmem stored in page->lru.next. These have to be in
+ * some random range in unsigned long space for debugging purposes.
+ */
+enum {
+	MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
+	SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
+	MIX_SECTION_INFO,
+	NODE_INFO,
+	MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
+};
+
+#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
+void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
+#else
+static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
+{
+}
+#endif
+
+#endif /* __LINUX_BOOTMEM_INFO_H */
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 15acce5ab106..aec3c3158d6b 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -33,18 +33,6 @@ struct vmem_altmap;
 	___page;						   \
 })
 
-/*
- * Types for free bootmem stored in page->lru.next. These have to be in
- * some random range in unsigned long space for debugging purposes.
- */
-enum {
-	MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
-	SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
-	MIX_SECTION_INFO,
-	NODE_INFO,
-	MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
-};
-
 /* Types for control the zone type of onlined and offlined memory */
 enum {
 	/* Offline the memory. */
@@ -222,13 +210,6 @@ static inline void arch_refresh_nodedata(int nid, pg_data_t *pgdat)
 #endif /* CONFIG_NUMA */
 #endif /* CONFIG_HAVE_ARCH_NODEDATA_EXTENSION */
 
-#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
-extern void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
-#else
-static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-}
-#endif
 extern void put_page_bootmem(struct page *page);
 extern void get_page_bootmem(unsigned long ingo, struct page *page,
 			     unsigned long type);
@@ -260,10 +241,6 @@ static inline void zone_span_writelock(struct zone *zone) {}
 static inline void zone_span_writeunlock(struct zone *zone) {}
 static inline void zone_seqlock_init(struct zone *zone) {}
 
-static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-}
-
 static inline int try_online_node(int nid)
 {
 	return 0;
diff --git a/mm/Makefile b/mm/Makefile
index a1af02ba8f3f..ed4b88fa0f5e 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -83,6 +83,7 @@ obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KASAN)	+= kasan/
 obj-$(CONFIG_KFENCE) += kfence/
 obj-$(CONFIG_FAILSLAB) += failslab.o
+obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_MEMTEST)		+= memtest.o
 obj-$(CONFIG_MIGRATION) += migrate.o
diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
new file mode 100644
index 000000000000..39fa8fc120bc
--- /dev/null
+++ b/mm/bootmem_info.c
@@ -0,0 +1,99 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ *  linux/mm/bootmem_info.c
+ *
+ *  Copyright (C)
+ */
+#include <linux/mm.h>
+#include <linux/compiler.h>
+#include <linux/memblock.h>
+#include <linux/bootmem_info.h>
+#include <linux/memory_hotplug.h>
+
+#ifndef CONFIG_SPARSEMEM_VMEMMAP
+static void register_page_bootmem_info_section(unsigned long start_pfn)
+{
+	unsigned long mapsize, section_nr, i;
+	struct mem_section *ms;
+	struct page *page, *memmap;
+	struct mem_section_usage *usage;
+
+	section_nr = pfn_to_section_nr(start_pfn);
+	ms = __nr_to_section(section_nr);
+
+	/* Get section's memmap address */
+	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+
+	/*
+	 * Get page for the memmap's phys address
+	 * XXX: need more consideration for sparse_vmemmap...
+	 */
+	page = virt_to_page(memmap);
+	mapsize = sizeof(struct page) * PAGES_PER_SECTION;
+	mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
+
+	/* remember memmap's page */
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, SECTION_INFO);
+
+	usage = ms->usage;
+	page = virt_to_page(usage);
+
+	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
+
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
+
+}
+#else /* CONFIG_SPARSEMEM_VMEMMAP */
+static void register_page_bootmem_info_section(unsigned long start_pfn)
+{
+	unsigned long mapsize, section_nr, i;
+	struct mem_section *ms;
+	struct page *page, *memmap;
+	struct mem_section_usage *usage;
+
+	section_nr = pfn_to_section_nr(start_pfn);
+	ms = __nr_to_section(section_nr);
+
+	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+
+	register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
+
+	usage = ms->usage;
+	page = virt_to_page(usage);
+
+	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
+
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
+}
+#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
+
+void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
+{
+	unsigned long i, pfn, end_pfn, nr_pages;
+	int node = pgdat->node_id;
+	struct page *page;
+
+	nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
+	page = virt_to_page(pgdat);
+
+	for (i = 0; i < nr_pages; i++, page++)
+		get_page_bootmem(node, page, NODE_INFO);
+
+	pfn = pgdat->node_start_pfn;
+	end_pfn = pgdat_end_pfn(pgdat);
+
+	/* register section info */
+	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
+		/*
+		 * Some platforms can assign the same pfn to multiple nodes - on
+		 * node0 as well as nodeN.  To avoid registering a pfn against
+		 * multiple nodes we check that this pfn does not already
+		 * reside in some other nodes.
+		 */
+		if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node))
+			register_page_bootmem_info_section(pfn);
+	}
+}
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index a8cef4955907..66fb1daf2252 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -21,6 +21,7 @@
 #include <linux/memory.h>
 #include <linux/memremap.h>
 #include <linux/memory_hotplug.h>
+#include <linux/bootmem_info.h>
 #include <linux/highmem.h>
 #include <linux/vmalloc.h>
 #include <linux/ioport.h>
@@ -167,96 +168,6 @@ void put_page_bootmem(struct page *page)
 	}
 }
 
-#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
-#ifndef CONFIG_SPARSEMEM_VMEMMAP
-static void register_page_bootmem_info_section(unsigned long start_pfn)
-{
-	unsigned long mapsize, section_nr, i;
-	struct mem_section *ms;
-	struct page *page, *memmap;
-	struct mem_section_usage *usage;
-
-	section_nr = pfn_to_section_nr(start_pfn);
-	ms = __nr_to_section(section_nr);
-
-	/* Get section's memmap address */
-	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-	/*
-	 * Get page for the memmap's phys address
-	 * XXX: need more consideration for sparse_vmemmap...
-	 */
-	page = virt_to_page(memmap);
-	mapsize = sizeof(struct page) * PAGES_PER_SECTION;
-	mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
-
-	/* remember memmap's page */
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, SECTION_INFO);
-
-	usage = ms->usage;
-	page = virt_to_page(usage);
-
-	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-
-}
-#else /* CONFIG_SPARSEMEM_VMEMMAP */
-static void register_page_bootmem_info_section(unsigned long start_pfn)
-{
-	unsigned long mapsize, section_nr, i;
-	struct mem_section *ms;
-	struct page *page, *memmap;
-	struct mem_section_usage *usage;
-
-	section_nr = pfn_to_section_nr(start_pfn);
-	ms = __nr_to_section(section_nr);
-
-	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-	register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
-
-	usage = ms->usage;
-	page = virt_to_page(usage);
-
-	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-}
-#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
-
-void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-	unsigned long i, pfn, end_pfn, nr_pages;
-	int node = pgdat->node_id;
-	struct page *page;
-
-	nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
-	page = virt_to_page(pgdat);
-
-	for (i = 0; i < nr_pages; i++, page++)
-		get_page_bootmem(node, page, NODE_INFO);
-
-	pfn = pgdat->node_start_pfn;
-	end_pfn = pgdat_end_pfn(pgdat);
-
-	/* register section info */
-	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
-		/*
-		 * Some platforms can assign the same pfn to multiple nodes - on
-		 * node0 as well as nodeN.  To avoid registering a pfn against
-		 * multiple nodes we check that this pfn does not already
-		 * reside in some other nodes.
-		 */
-		if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node))
-			register_page_bootmem_info_section(pfn);
-	}
-}
-#endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */
-
 static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
 		const char *reason)
 {
-- 
2.11.0



* [PATCH v7 02/15] mm/memory_hotplug: Move {get,put}_page_bootmem() to bootmem_info.c
  2020-11-30 15:18 [PATCH v7 00/15] Free some vmemmap pages of hugetlb page Muchun Song
  2020-11-30 15:18 ` [PATCH v7 01/15] mm/memory_hotplug: Move bootmem info registration API to bootmem_info.c Muchun Song
@ 2020-11-30 15:18 ` Muchun Song
  2020-12-07 12:14   ` David Hildenbrand
  2020-11-30 15:18 ` [PATCH v7 03/15] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP Muchun Song
                   ` (13 subsequent siblings)
  15 siblings, 1 reply; 49+ messages in thread
From: Muchun Song @ 2020-11-30 15:18 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

In a later patch, we will use {get,put}_page_bootmem() to initialize pages
for the vmemmap or to free vmemmap pages to the buddy allocator. So move them
out of CONFIG_MEMORY_HOTPLUG_SPARSE. This is just code movement without any
functional change.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
---
 arch/x86/mm/init_64.c          |  2 +-
 include/linux/bootmem_info.h   | 13 +++++++++++++
 include/linux/memory_hotplug.h |  4 ----
 mm/bootmem_info.c              | 25 +++++++++++++++++++++++++
 mm/memory_hotplug.c            | 27 ---------------------------
 mm/sparse.c                    |  1 +
 6 files changed, 40 insertions(+), 32 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index c7f7ad55b625..0a45f062826e 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1572,7 +1572,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 	return err;
 }
 
-#if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_HAVE_BOOTMEM_INFO_NODE)
+#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
 void register_page_bootmem_memmap(unsigned long section_nr,
 				  struct page *start_page, unsigned long nr_pages)
 {
diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
index 65bb9b23140f..4ed6dee1adc9 100644
--- a/include/linux/bootmem_info.h
+++ b/include/linux/bootmem_info.h
@@ -18,10 +18,23 @@ enum {
 
 #ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
 void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
+
+void get_page_bootmem(unsigned long info, struct page *page,
+		      unsigned long type);
+void put_page_bootmem(struct page *page);
 #else
 static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
 {
 }
+
+static inline void put_page_bootmem(struct page *page)
+{
+}
+
+static inline void get_page_bootmem(unsigned long info, struct page *page,
+				    unsigned long type)
+{
+}
 #endif
 
 #endif /* __LINUX_BOOTMEM_INFO_H */
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index aec3c3158d6b..84590964ad35 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -210,10 +210,6 @@ static inline void arch_refresh_nodedata(int nid, pg_data_t *pgdat)
 #endif /* CONFIG_NUMA */
 #endif /* CONFIG_HAVE_ARCH_NODEDATA_EXTENSION */
 
-extern void put_page_bootmem(struct page *page);
-extern void get_page_bootmem(unsigned long ingo, struct page *page,
-			     unsigned long type);
-
 void get_online_mems(void);
 void put_online_mems(void);
 
diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
index 39fa8fc120bc..fcab5a3f8cc0 100644
--- a/mm/bootmem_info.c
+++ b/mm/bootmem_info.c
@@ -10,6 +10,31 @@
 #include <linux/bootmem_info.h>
 #include <linux/memory_hotplug.h>
 
+void get_page_bootmem(unsigned long info, struct page *page, unsigned long type)
+{
+	page->freelist = (void *)type;
+	SetPagePrivate(page);
+	set_page_private(page, info);
+	page_ref_inc(page);
+}
+
+void put_page_bootmem(struct page *page)
+{
+	unsigned long type;
+
+	type = (unsigned long) page->freelist;
+	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
+	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
+
+	if (page_ref_dec_return(page) == 1) {
+		page->freelist = NULL;
+		ClearPagePrivate(page);
+		set_page_private(page, 0);
+		INIT_LIST_HEAD(&page->lru);
+		free_reserved_page(page);
+	}
+}
+
 #ifndef CONFIG_SPARSEMEM_VMEMMAP
 static void register_page_bootmem_info_section(unsigned long start_pfn)
 {
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 66fb1daf2252..4c4ca99745b7 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -21,7 +21,6 @@
 #include <linux/memory.h>
 #include <linux/memremap.h>
 #include <linux/memory_hotplug.h>
-#include <linux/bootmem_info.h>
 #include <linux/highmem.h>
 #include <linux/vmalloc.h>
 #include <linux/ioport.h>
@@ -142,32 +141,6 @@ static void release_memory_resource(struct resource *res)
 }
 
 #ifdef CONFIG_MEMORY_HOTPLUG_SPARSE
-void get_page_bootmem(unsigned long info,  struct page *page,
-		      unsigned long type)
-{
-	page->freelist = (void *)type;
-	SetPagePrivate(page);
-	set_page_private(page, info);
-	page_ref_inc(page);
-}
-
-void put_page_bootmem(struct page *page)
-{
-	unsigned long type;
-
-	type = (unsigned long) page->freelist;
-	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
-	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
-
-	if (page_ref_dec_return(page) == 1) {
-		page->freelist = NULL;
-		ClearPagePrivate(page);
-		set_page_private(page, 0);
-		INIT_LIST_HEAD(&page->lru);
-		free_reserved_page(page);
-	}
-}
-
 static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
 		const char *reason)
 {
diff --git a/mm/sparse.c b/mm/sparse.c
index 7bd23f9d6cef..87676bf3af40 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -13,6 +13,7 @@
 #include <linux/vmalloc.h>
 #include <linux/swap.h>
 #include <linux/swapops.h>
+#include <linux/bootmem_info.h>
 
 #include "internal.h"
 #include <asm/dma.h>
-- 
2.11.0



* [PATCH v7 03/15] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  2020-11-30 15:18 [PATCH v7 00/15] Free some vmemmap pages of hugetlb page Muchun Song
  2020-11-30 15:18 ` [PATCH v7 01/15] mm/memory_hotplug: Move bootmem info registration API to bootmem_info.c Muchun Song
  2020-11-30 15:18 ` [PATCH v7 02/15] mm/memory_hotplug: Move {get,put}_page_bootmem() " Muchun Song
@ 2020-11-30 15:18 ` Muchun Song
  2020-12-07 12:19   ` David Hildenbrand
  2020-11-30 15:18 ` [PATCH v7 04/15] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
                   ` (12 subsequent siblings)
  15 siblings, 1 reply; 49+ messages in thread
From: Muchun Song @ 2020-11-30 15:18 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

The purpose of introducing HUGETLB_PAGE_FREE_VMEMMAP is to configure
whether to enable the feature of freeing unused vmemmap pages
associated with HugeTLB pages. This option is also used for dependency
checking. For now, only x86 is supported.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 arch/x86/mm/init_64.c |  2 +-
 fs/Kconfig            | 14 ++++++++++++++
 2 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0a45f062826e..0435bee2e172 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1225,7 +1225,7 @@ static struct kcore_list kcore_vsyscall;
 
 static void __init register_page_bootmem_info(void)
 {
-#ifdef CONFIG_NUMA
+#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)
 	int i;
 
 	for_each_online_node(i)
diff --git a/fs/Kconfig b/fs/Kconfig
index 976e8b9033c4..4961dd488444 100644
--- a/fs/Kconfig
+++ b/fs/Kconfig
@@ -245,6 +245,20 @@ config HUGETLBFS
 config HUGETLB_PAGE
 	def_bool HUGETLBFS
 
+config HUGETLB_PAGE_FREE_VMEMMAP
+	def_bool HUGETLB_PAGE
+	depends on X86
+	depends on SPARSEMEM_VMEMMAP
+	depends on HAVE_BOOTMEM_INFO_NODE
+	help
+	  When using HUGETLB_PAGE_FREE_VMEMMAP, the system can save some
+	  memory from pre-allocated HugeTLB pages when they are not used:
+	  6 pages per 2MB HugeTLB page and 4094 pages per 1GB HugeTLB page.
+
+	  When the pages are going to be used or freed up, the vmemmap array
+	  representing that range needs to be remapped again and the pages
+	  we discarded earlier need to be reallocated again.
+
 config MEMFD_CREATE
 	def_bool TMPFS || HUGETLBFS
 
-- 
2.11.0


^ permalink raw reply	[flat|nested] 49+ messages in thread

* [PATCH v7 04/15] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  2020-11-30 15:18 [PATCH v7 00/15] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (2 preceding siblings ...)
  2020-11-30 15:18 ` [PATCH v7 03/15] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP Muchun Song
@ 2020-11-30 15:18 ` Muchun Song
  2020-12-07 12:36   ` David Hildenbrand
  2020-11-30 15:18 ` [PATCH v7 05/15] mm/bootmem_info: Introduce {free,prepare}_vmemmap_page() Muchun Song
                   ` (11 subsequent siblings)
  15 siblings, 1 reply; 49+ messages in thread
From: Muchun Song @ 2020-11-30 15:18 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Every HugeTLB page has more than one struct page structure. A 2MB
HugeTLB page has 512 struct page structures and a 1GB HugeTLB page has
262144 struct page structures. We __know__ that we only use the first
4 (HUGETLB_CGROUP_MIN_ORDER) struct page structures to store metadata
associated with each HugeTLB page.

There are many struct page structures (8 page frames for a 2MB HugeTLB
page and 4096 page frames for a 1GB HugeTLB page) associated with each
HugeTLB page. For tail pages, the value of compound_head is the same,
so we can reuse the first page of the tail page structures. We map the
virtual addresses of the remaining pages of tail page structures to the
first tail page struct, and then free those page frames. Therefore, we
need to reserve two pages as vmemmap areas.

We introduce a new nr_free_vmemmap_pages field in the hstate to
indicate how many vmemmap pages associated with a HugeTLB page can be
freed to the buddy system.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
---
 include/linux/hugetlb.h |   3 ++
 mm/Makefile             |   1 +
 mm/hugetlb.c            |   3 ++
 mm/hugetlb_vmemmap.c    | 129 ++++++++++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h    |  20 ++++++++
 5 files changed, 156 insertions(+)
 create mode 100644 mm/hugetlb_vmemmap.c
 create mode 100644 mm/hugetlb_vmemmap.h

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index ebca2ef02212..4efeccb7192c 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -492,6 +492,9 @@ struct hstate {
 	unsigned int nr_huge_pages_node[MAX_NUMNODES];
 	unsigned int free_huge_pages_node[MAX_NUMNODES];
 	unsigned int surplus_huge_pages_node[MAX_NUMNODES];
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	unsigned int nr_free_vmemmap_pages;
+#endif
 #ifdef CONFIG_CGROUP_HUGETLB
 	/* cgroup control files */
 	struct cftype cgroup_files_dfl[7];
diff --git a/mm/Makefile b/mm/Makefile
index ed4b88fa0f5e..056801d8daae 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -71,6 +71,7 @@ obj-$(CONFIG_FRONTSWAP)	+= frontswap.o
 obj-$(CONFIG_ZSWAP)	+= zswap.o
 obj-$(CONFIG_HAS_DMA)	+= dmapool.o
 obj-$(CONFIG_HUGETLBFS)	+= hugetlb.o
+obj-$(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)	+= hugetlb_vmemmap.o
 obj-$(CONFIG_NUMA) 	+= mempolicy.o
 obj-$(CONFIG_SPARSEMEM)	+= sparse.o
 obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 1f3bf1710b66..25f9e8e9fc4a 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -42,6 +42,7 @@
 #include <linux/userfaultfd_k.h>
 #include <linux/page_owner.h>
 #include "internal.h"
+#include "hugetlb_vmemmap.h"
 
 int hugetlb_max_hstate __read_mostly;
 unsigned int default_hstate_idx;
@@ -3206,6 +3207,8 @@ void __init hugetlb_add_hstate(unsigned int order)
 	snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB",
 					huge_page_size(h)/1024);
 
+	hugetlb_vmemmap_init(h);
+
 	parsed_hstate = h;
 }
 
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
new file mode 100644
index 000000000000..51152e258f39
--- /dev/null
+++ b/mm/hugetlb_vmemmap.c
@@ -0,0 +1,129 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ *     Author: Muchun Song <songmuchun@bytedance.com>
+ *
+ * The struct page structures (page structs) are used to describe a physical
+ * page frame. By default, there is a one-to-one mapping from a page frame to
+ * its corresponding page struct.
+ *
+ * The HugeTLB pages consist of multiple base page size pages and are supported
+ * by many architectures. See hugetlbpage.rst in the Documentation directory
+ * for more details. On the x86 architecture, HugeTLB pages of size 2MB and 1GB
+ * are currently supported. Since the base page size on x86 is 4KB, a 2MB
+ * HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
+ * 262144 base pages. For each base page, there is a corresponding page struct.
+ *
+ * Within the HugeTLB subsystem, only the first 4 page structs are used to
+ * contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
+ * provides this upper limit. The only 'useful' information in the remaining
+ * page structs is the compound_head field, and this field is the same for all
+ * tail pages.
+ *
+ * By removing redundant page structs for HugeTLB pages, memory can be returned
+ * the buddy allocator for other uses.
+ *
+ * When the system boots up, every 2MB HugeTLB page has 512 struct page structs
+ * which occupy 8 pages (sizeof(struct page) * 512 / PAGE_SIZE).
+ *
+ *    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ * |           |                     |     0     | -------------> |     0     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     1     | -------------> |     1     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     2     | -------------> |     2     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     3     | -------------> |     3     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     4     | -------------> |     4     |
+ * |    2MB    |                     +-----------+                +-----------+
+ * |           |                     |     5     | -------------> |     5     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     6     | -------------> |     6     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     7     | -------------> |     7     |
+ * |           |                     +-----------+                +-----------+
+ * |           |
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ * The value of page->compound_head is the same for all tail pages. The first
+ * page of page structs (page 0) associated with the HugeTLB page contains the 4
+ * page structs necessary to describe the HugeTLB. The only use of the remaining
+ * pages of page structs (page 1 to page 7) is to point to page->compound_head.
+ * Therefore, we can remap pages 2 to 7 to page 1. Only 2 pages of page structs
+ * will be used for each HugeTLB page. This will allow us to free the remaining
+ * 6 pages to the buddy allocator.
+ *
+ * Here is how things look after remapping.
+ *
+ *    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ * |           |                     |     0     | -------------> |     0     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     1     | -------------> |     1     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     2     | ----------------^ ^ ^ ^ ^ ^
+ * |           |                     +-----------+                   | | | | |
+ * |           |                     |     3     | ------------------+ | | | |
+ * |           |                     +-----------+                     | | | |
+ * |           |                     |     4     | --------------------+ | | |
+ * |    2MB    |                     +-----------+                       | | |
+ * |           |                     |     5     | ----------------------+ | |
+ * |           |                     +-----------+                         | |
+ * |           |                     |     6     | ------------------------+ |
+ * |           |                     +-----------+                           |
+ * |           |                     |     7     | --------------------------+
+ * |           |                     +-----------+
+ * |           |
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ * When a HugeTLB page is freed to the buddy system, we should allocate 6
+ * pages for vmemmap pages and restore the previous mapping relationship.
+ *
+ * Apart from the 2MB HugeTLB page, we also have the 1GB HugeTLB page. It is
+ * similar to the 2MB HugeTLB page, and we can also use this approach to free
+ * the vmemmap pages.
+ */
+#define pr_fmt(fmt)	"HugeTLB vmemmap: " fmt
+
+#include "hugetlb_vmemmap.h"
+
+/*
+ * There are many struct page structures (8 page frames for a 2MB HugeTLB page
+ * and 4096 page frames for a 1GB HugeTLB page) associated with each HugeTLB
+ * page. For tail pages, the value of compound_head is the same, so we can
+ * reuse the first page of the tail page structures. We map the virtual addresses
+ * of the remaining pages of tail page structures to the first tail page struct,
+ * and then free those page frames. Therefore, we reserve two pages as vmemmap.
+ */
+#define RESERVE_VMEMMAP_NR		2U
+
+void __init hugetlb_vmemmap_init(struct hstate *h)
+{
+	unsigned int nr_pages = pages_per_huge_page(h);
+	unsigned int vmemmap_pages;
+
+	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
+	/*
+	 * The head page and the first tail page are not freed to the buddy
+	 * system; the other pages will map to the first tail page. The
+	 * remaining pages are the ones that can be freed.
+	 *
+	 * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? It is true
+	 * on some architectures (e.g. aarch64). See Documentation/arm64/
+	 * hugetlbpage.rst for more details.
+	 */
+	if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR))
+		h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
+
+	pr_debug("can free %u vmemmap pages for %s\n", h->nr_free_vmemmap_pages,
+		 h->name);
+}
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
new file mode 100644
index 000000000000..40c0c7dfb60d
--- /dev/null
+++ b/mm/hugetlb_vmemmap.h
@@ -0,0 +1,20 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ *     Author: Muchun Song <songmuchun@bytedance.com>
+ */
+#ifndef _LINUX_HUGETLB_VMEMMAP_H
+#define _LINUX_HUGETLB_VMEMMAP_H
+#include <linux/hugetlb.h>
+
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+void __init hugetlb_vmemmap_init(struct hstate *h);
+#else
+static inline void hugetlb_vmemmap_init(struct hstate *h)
+{
+}
+#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
+#endif /* _LINUX_HUGETLB_VMEMMAP_H */
-- 
2.11.0


^ permalink raw reply	[flat|nested] 49+ messages in thread

* [PATCH v7 05/15] mm/bootmem_info: Introduce {free,prepare}_vmemmap_page()
  2020-11-30 15:18 [PATCH v7 00/15] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (3 preceding siblings ...)
  2020-11-30 15:18 ` [PATCH v7 04/15] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
@ 2020-11-30 15:18 ` Muchun Song
  2020-12-07 12:39   ` David Hildenbrand
  2020-11-30 15:18 ` [PATCH v7 06/15] mm/hugetlb: Disable freeing vmemmap if struct page size is not power of two Muchun Song
                   ` (10 subsequent siblings)
  15 siblings, 1 reply; 49+ messages in thread
From: Muchun Song @ 2020-11-30 15:18 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

In a later patch, we will use free_vmemmap_page() to free the unused
vmemmap pages and prepare_vmemmap_page() to initialize a page for use
as a vmemmap page.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/bootmem_info.h | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
index 4ed6dee1adc9..239e3cc8f86c 100644
--- a/include/linux/bootmem_info.h
+++ b/include/linux/bootmem_info.h
@@ -3,6 +3,7 @@
 #define __LINUX_BOOTMEM_INFO_H
 
 #include <linux/mmzone.h>
+#include <linux/mm.h>
 
 /*
  * Types for free bootmem stored in page->lru.next. These have to be in
@@ -22,6 +23,29 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
 void get_page_bootmem(unsigned long info, struct page *page,
 		      unsigned long type);
 void put_page_bootmem(struct page *page);
+
+static inline void free_vmemmap_page(struct page *page)
+{
+	VM_WARN_ON(!PageReserved(page) || page_ref_count(page) != 2);
+
+	/* bootmem page has reserved flag in the reserve_bootmem_region */
+	if (PageReserved(page)) {
+		unsigned long magic = (unsigned long)page->freelist;
+
+		if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
+			put_page_bootmem(page);
+		else
+			WARN_ON(1);
+	}
+}
+
+static inline void prepare_vmemmap_page(struct page *page)
+{
+	unsigned long section_nr = pfn_to_section_nr(page_to_pfn(page));
+
+	get_page_bootmem(section_nr, page, SECTION_INFO);
+	mark_page_reserved(page);
+}
 #else
 static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
 {
-- 
2.11.0


^ permalink raw reply	[flat|nested] 49+ messages in thread

* [PATCH v7 06/15] mm/hugetlb: Disable freeing vmemmap if struct page size is not power of two
  2020-11-30 15:18 [PATCH v7 00/15] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (4 preceding siblings ...)
  2020-11-30 15:18 ` [PATCH v7 05/15] mm/bootmem_info: Introduce {free,prepare}_vmemmap_page() Muchun Song
@ 2020-11-30 15:18 ` Muchun Song
  2020-12-09  9:57   ` David Hildenbrand
  2020-11-30 15:18 ` [PATCH v7 07/15] x86/mm/64: Disable PMD page mapping of vmemmap Muchun Song
                   ` (9 subsequent siblings)
  15 siblings, 1 reply; 49+ messages in thread
From: Muchun Song @ 2020-11-30 15:18 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

We can only free the tail vmemmap pages of a HugeTLB page to the buddy
allocator when the size of struct page is a power of two.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb_vmemmap.c | 5 +++++
 1 file changed, 5 insertions(+)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 51152e258f39..ad8fc61ea273 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -111,6 +111,11 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	unsigned int nr_pages = pages_per_huge_page(h);
 	unsigned int vmemmap_pages;
 
+	if (!is_power_of_2(sizeof(struct page))) {
+		pr_info("disable freeing vmemmap pages for %s\n", h->name);
+		return;
+	}
+
 	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
 	/*
 	 * The head page and the first tail page are not to be freed to buddy
-- 
2.11.0


^ permalink raw reply	[flat|nested] 49+ messages in thread

* [PATCH v7 07/15] x86/mm/64: Disable PMD page mapping of vmemmap
  2020-11-30 15:18 [PATCH v7 00/15] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (5 preceding siblings ...)
  2020-11-30 15:18 ` [PATCH v7 06/15] mm/hugetlb: Disable freeing vmemmap if struct page size is not power of two Muchun Song
@ 2020-11-30 15:18 ` Muchun Song
  2020-11-30 15:18 ` [PATCH v7 08/15] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page Muchun Song
                   ` (8 subsequent siblings)
  15 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-30 15:18 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

If CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is enabled, we can simply disable
PMD page mapping of the vmemmap to simplify the code. In this case, we
do not need complex code to manipulate the vmemmap page tables. This
keeps the first version of this patch series simple. In the future, we
can add the page table manipulation code.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 arch/x86/mm/init_64.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0435bee2e172..155cb06a6961 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1557,7 +1557,9 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 {
 	int err;
 
-	if (end - start < PAGES_PER_SECTION * sizeof(struct page))
+	if (IS_ENABLED(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP))
+		err = vmemmap_populate_basepages(start, end, node, NULL);
+	else if (end - start < PAGES_PER_SECTION * sizeof(struct page))
 		err = vmemmap_populate_basepages(start, end, node, NULL);
 	else if (boot_cpu_has(X86_FEATURE_PSE))
 		err = vmemmap_populate_hugepages(start, end, node, altmap);
@@ -1610,7 +1612,8 @@ void register_page_bootmem_memmap(unsigned long section_nr,
 		}
 		get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);
 
-		if (!boot_cpu_has(X86_FEATURE_PSE)) {
+		if (!boot_cpu_has(X86_FEATURE_PSE) ||
+		    IS_ENABLED(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)) {
 			next = (addr + PAGE_SIZE) & PAGE_MASK;
 			pmd = pmd_offset(pud, addr);
 			if (pmd_none(*pmd))
-- 
2.11.0


^ permalink raw reply	[flat|nested] 49+ messages in thread

* [PATCH v7 08/15] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
  2020-11-30 15:18 [PATCH v7 00/15] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (6 preceding siblings ...)
  2020-11-30 15:18 ` [PATCH v7 07/15] x86/mm/64: Disable PMD page mapping of vmemmap Muchun Song
@ 2020-11-30 15:18 ` Muchun Song
  2020-11-30 15:18 ` [PATCH v7 09/15] mm/hugetlb: Defer freeing of HugeTLB pages Muchun Song
                   ` (7 subsequent siblings)
  15 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-30 15:18 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

When we allocate a hugetlb page from the buddy, we should free the
unused vmemmap pages associated with it. We can do that in the
prep_new_huge_page().

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c         |   2 +
 mm/hugetlb_vmemmap.c | 139 +++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h |   5 ++
 3 files changed, 146 insertions(+)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 25f9e8e9fc4a..93dee37ceb6d 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1498,6 +1498,8 @@ void free_huge_page(struct page *page)
 
 static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 {
+	free_huge_page_vmemmap(h, page);
+
 	INIT_LIST_HEAD(&page->lru);
 	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
 	set_hugetlb_cgroup(page, NULL);
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index ad8fc61ea273..2c997b5de3b6 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -94,6 +94,7 @@
  */
 #define pr_fmt(fmt)	"HugeTLB vmemmap: " fmt
 
+#include <linux/bootmem_info.h>
 #include "hugetlb_vmemmap.h"
 
 /*
@@ -105,6 +106,144 @@
  * these page frames. Therefore, we need to reserve two pages as vmemmap areas.
  */
 #define RESERVE_VMEMMAP_NR		2U
+#define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
+#define VMEMMAP_TAIL_PAGE_REUSE		-1
+
+#ifndef VMEMMAP_HPAGE_SHIFT
+#define VMEMMAP_HPAGE_SHIFT		HPAGE_SHIFT
+#endif
+#define VMEMMAP_HPAGE_ORDER		(VMEMMAP_HPAGE_SHIFT - PAGE_SHIFT)
+#define VMEMMAP_HPAGE_NR		(1 << VMEMMAP_HPAGE_ORDER)
+#define VMEMMAP_HPAGE_SIZE		(1UL << VMEMMAP_HPAGE_SHIFT)
+#define VMEMMAP_HPAGE_MASK		(~(VMEMMAP_HPAGE_SIZE - 1))
+
+#define vmemmap_hpage_addr_end(addr, end)				 \
+({									 \
+	unsigned long __boundary;					 \
+	__boundary = ((addr) + VMEMMAP_HPAGE_SIZE) & VMEMMAP_HPAGE_MASK; \
+	(__boundary - 1 < (end) - 1) ? __boundary : (end);		 \
+})
+
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return h->nr_free_vmemmap_pages;
+}
+
+static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return free_vmemmap_pages_per_hpage(h) + RESERVE_VMEMMAP_NR;
+}
+
+static inline unsigned long vmemmap_pages_size_per_hpage(struct hstate *h)
+{
+	return (unsigned long)vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
+}
+
+/*
+ * Walk a vmemmap address to the pmd it maps.
+ */
+static pmd_t *vmemmap_to_pmd(unsigned long addr)
+{
+	pgd_t *pgd;
+	p4d_t *p4d;
+	pud_t *pud;
+	pmd_t *pmd;
+
+	pgd = pgd_offset_k(addr);
+	if (pgd_none(*pgd))
+		return NULL;
+
+	p4d = p4d_offset(pgd, addr);
+	if (p4d_none(*p4d))
+		return NULL;
+
+	pud = pud_offset(p4d, addr);
+	if (pud_none(*pud))
+		return NULL;
+
+	pmd = pmd_offset(pud, addr);
+	if (pmd_none(*pmd))
+		return NULL;
+
+	return pmd;
+}
+
+static void vmemmap_reuse_pte_range(struct page *reuse, pte_t *pte,
+				    unsigned long start, unsigned long end,
+				    struct list_head *vmemmap_pages)
+{
+	/*
+	 * Map the tail pages read-only to catch illegal write operations
+	 * to the tail pages.
+	 */
+	pgprot_t pgprot = PAGE_KERNEL_RO;
+	pte_t entry = mk_pte(reuse, pgprot);
+	unsigned long addr;
+
+	for (addr = start; addr < end; addr += PAGE_SIZE, pte++) {
+		struct page *page;
+
+		VM_BUG_ON(pte_none(*pte));
+
+		page = pte_page(*pte);
+		list_add(&page->lru, vmemmap_pages);
+
+		set_pte_at(&init_mm, addr, pte, entry);
+	}
+}
+
+static void vmemmap_remap_range(unsigned long start, unsigned long end,
+				struct list_head *vmemmap_pages)
+{
+	pmd_t *pmd;
+	unsigned long next, addr = start;
+	struct page *reuse = NULL;
+
+	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
+	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
+	VM_BUG_ON((start >> PUD_SHIFT) != (end >> PUD_SHIFT));
+
+	pmd = vmemmap_to_pmd(addr);
+	BUG_ON(!pmd);
+
+	do {
+		pte_t *pte = pte_offset_kernel(pmd, addr);
+
+		if (!reuse)
+			reuse = pte_page(pte[VMEMMAP_TAIL_PAGE_REUSE]);
+
+		next = vmemmap_hpage_addr_end(addr, end);
+		vmemmap_reuse_pte_range(reuse, pte, addr, next, vmemmap_pages);
+	} while (pmd++, addr = next, addr != end);
+
+	flush_tlb_kernel_range(start, end);
+}
+
+static inline void free_vmemmap_page_list(struct list_head *list)
+{
+	struct page *page, *next;
+
+	list_for_each_entry_safe(page, next, list, lru) {
+		list_del(&page->lru);
+		free_vmemmap_page(page);
+	}
+}
+
+void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	unsigned long start, end;
+	unsigned long vmemmap_addr = (unsigned long)head;
+	LIST_HEAD(vmemmap_pages);
+
+	if (!free_vmemmap_pages_per_hpage(h))
+		return;
+
+	start = vmemmap_addr + RESERVE_VMEMMAP_SIZE;
+	end = vmemmap_addr + vmemmap_pages_size_per_hpage(h);
+	vmemmap_remap_range(start, end, &vmemmap_pages);
+
+	free_vmemmap_page_list(&vmemmap_pages);
+}
 
 void __init hugetlb_vmemmap_init(struct hstate *h)
 {
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 40c0c7dfb60d..67113b67495f 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -12,9 +12,14 @@
 
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 void __init hugetlb_vmemmap_init(struct hstate *h);
+void free_huge_page_vmemmap(struct hstate *h, struct page *head);
 #else
 static inline void hugetlb_vmemmap_init(struct hstate *h)
 {
 }
+
+static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
 #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
-- 
2.11.0


^ permalink raw reply	[flat|nested] 49+ messages in thread

* [PATCH v7 09/15] mm/hugetlb: Defer freeing of HugeTLB pages
  2020-11-30 15:18 [PATCH v7 00/15] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (7 preceding siblings ...)
  2020-11-30 15:18 ` [PATCH v7 08/15] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page Muchun Song
@ 2020-11-30 15:18 ` Muchun Song
  2020-11-30 15:18 ` [PATCH v7 10/15] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page Muchun Song
                   ` (6 subsequent siblings)
  15 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-30 15:18 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

In a subsequent patch, we will allocate the vmemmap pages when freeing
HugeTLB pages. But update_and_free_page() is called from a non-task
context (and holds hugetlb_lock), so we defer the actual freeing to a
workqueue to avoid using GFP_ATOMIC to allocate the vmemmap pages.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c         | 96 ++++++++++++++++++++++++++++++++++++++++++++++------
 mm/hugetlb_vmemmap.c |  5 ---
 mm/hugetlb_vmemmap.h | 10 ++++++
 3 files changed, 95 insertions(+), 16 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 93dee37ceb6d..5131ae3d2245 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1220,7 +1220,7 @@ static void destroy_compound_gigantic_page(struct page *page,
 	__ClearPageHead(page);
 }
 
-static void free_gigantic_page(struct page *page, unsigned int order)
+static void __free_gigantic_page(struct page *page, unsigned int order)
 {
 	/*
 	 * If the page isn't allocated using the cma allocator,
@@ -1287,20 +1287,100 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
 {
 	return NULL;
 }
-static inline void free_gigantic_page(struct page *page, unsigned int order) { }
+static inline void __free_gigantic_page(struct page *page,
+					unsigned int order) { }
 static inline void destroy_compound_gigantic_page(struct page *page,
 						unsigned int order) { }
 #endif
 
-static void update_and_free_page(struct hstate *h, struct page *page)
+static void __free_hugepage(struct hstate *h, struct page *page);
+
+/*
+ * As update_and_free_page() can be called from a non-task context (and
+ * holds hugetlb_lock), we defer the actual freeing to a workqueue to
+ * avoid using GFP_ATOMIC to allocate a lot of vmemmap pages.
+ *
+ * update_hpage_vmemmap_workfn() locklessly retrieves the linked list of
+ * pages to be freed and frees them one-by-one. As the page->mapping pointer
+ * is going to be cleared in update_hpage_vmemmap_workfn() anyway, it is
+ * reused as the llist_node structure of a lockless linked list of huge
+ * pages to be freed.
+ */
+static LLIST_HEAD(hpage_update_freelist);
+
+static void update_hpage_vmemmap_workfn(struct work_struct *work)
 {
-	int i;
+	struct llist_node *node;
+	struct page *page;
+
+	node = llist_del_all(&hpage_update_freelist);
+
+	while (node) {
+		page = container_of((struct address_space **)node,
+				     struct page, mapping);
+		node = node->next;
+		page->mapping = NULL;
+		__free_hugepage(page_hstate(page), page);
 
+		cond_resched();
+	}
+}
+static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn);
+
+static inline void __update_and_free_page(struct hstate *h, struct page *page)
+{
+	/* No need to allocate vmemmap pages */
+	if (!free_vmemmap_pages_per_hpage(h)) {
+		__free_hugepage(h, page);
+		return;
+	}
+
+	/*
+	 * Defer freeing to avoid using GFP_ATOMIC to allocate vmemmap
+	 * pages.
+	 *
+	 * Only call schedule_work() if hpage_update_freelist is previously
+	 * empty. Otherwise, schedule_work() had been called but the workfn
+	 * hasn't retrieved the list yet.
+	 */
+	if (llist_add((struct llist_node *)&page->mapping,
+		      &hpage_update_freelist))
+		schedule_work(&hpage_update_work);
+}
+
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+static inline void free_gigantic_page(struct hstate *h, struct page *page)
+{
+	__free_gigantic_page(page, huge_page_order(h));
+}
+#else
+static inline void free_gigantic_page(struct hstate *h, struct page *page)
+{
+	/*
+	 * Temporarily drop the hugetlb_lock, because
+	 * we might block in __free_gigantic_page().
+	 */
+	spin_unlock(&hugetlb_lock);
+	__free_gigantic_page(page, huge_page_order(h));
+	spin_lock(&hugetlb_lock);
+}
+#endif
+
+static void update_and_free_page(struct hstate *h, struct page *page)
+{
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
 		return;
 
 	h->nr_huge_pages--;
 	h->nr_huge_pages_node[page_to_nid(page)]--;
+
+	__update_and_free_page(h, page);
+}
+
+static void __free_hugepage(struct hstate *h, struct page *page)
+{
+	int i;
+
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
 				1 << PG_referenced | 1 << PG_dirty |
@@ -1312,14 +1392,8 @@ static void update_and_free_page(struct hstate *h, struct page *page)
 	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
 	set_page_refcounted(page);
 	if (hstate_is_gigantic(h)) {
-		/*
-		 * Temporarily drop the hugetlb_lock, because
-		 * we might block in free_gigantic_page().
-		 */
-		spin_unlock(&hugetlb_lock);
 		destroy_compound_gigantic_page(page, huge_page_order(h));
-		free_gigantic_page(page, huge_page_order(h));
-		spin_lock(&hugetlb_lock);
+		free_gigantic_page(h, page);
 	} else {
 		__free_pages(page, huge_page_order(h));
 	}
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 2c997b5de3b6..af42fad1f131 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -124,11 +124,6 @@
 	(__boundary - 1 < (end) - 1) ? __boundary : (end);		 \
 })
 
-static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
-{
-	return h->nr_free_vmemmap_pages;
-}
-
 static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return free_vmemmap_pages_per_hpage(h) + RESERVE_VMEMMAP_NR;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 67113b67495f..293897b9f1d8 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -13,6 +13,11 @@
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 void __init hugetlb_vmemmap_init(struct hstate *h);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return h->nr_free_vmemmap_pages;
+}
 #else
 static inline void hugetlb_vmemmap_init(struct hstate *h)
 {
@@ -21,5 +26,10 @@ static inline void hugetlb_vmemmap_init(struct hstate *h)
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
+
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
 #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
-- 
2.11.0


^ permalink raw reply	[flat|nested] 49+ messages in thread

* [PATCH v7 10/15] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page
  2020-11-30 15:18 [PATCH v7 00/15] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (8 preceding siblings ...)
  2020-11-30 15:18 ` [PATCH v7 09/15] mm/hugetlb: Defer freeing of HugeTLB pages Muchun Song
@ 2020-11-30 15:18 ` Muchun Song
  2020-11-30 15:18 ` [PATCH v7 11/15] mm/hugetlb: Set the PageHWPoison to the raw error page Muchun Song
                   ` (5 subsequent siblings)
  15 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-30 15:18 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

When we free a HugeTLB page to the buddy allocator, we should allocate the
vmemmap pages associated with it. We can do that in __free_hugepage().

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c         |  2 ++
 mm/hugetlb_vmemmap.c | 90 +++++++++++++++++++++++++++++++++++++++++++++++++---
 mm/hugetlb_vmemmap.h |  5 +++
 3 files changed, 92 insertions(+), 5 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 5131ae3d2245..ebe35532d432 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1381,6 +1381,8 @@ static void __free_hugepage(struct hstate *h, struct page *page)
 {
 	int i;
 
+	alloc_huge_page_vmemmap(h, page);
+
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
 				1 << PG_referenced | 1 << PG_dirty |
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index af42fad1f131..a3714db7f400 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -95,6 +95,7 @@
 #define pr_fmt(fmt)	"HugeTLB vmemmap: " fmt
 
 #include <linux/bootmem_info.h>
+#include <linux/delay.h>
 #include "hugetlb_vmemmap.h"
 
 /*
@@ -108,6 +109,8 @@
 #define RESERVE_VMEMMAP_NR		2U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 #define VMEMMAP_TAIL_PAGE_REUSE		-1
+#define GFP_VMEMMAP_PAGE		\
+	(GFP_KERNEL | __GFP_RETRY_MAYFAIL | __GFP_HIGH)
 
 #ifndef VMEMMAP_HPAGE_SHIFT
 #define VMEMMAP_HPAGE_SHIFT		HPAGE_SHIFT
@@ -124,6 +127,11 @@
 	(__boundary - 1 < (end) - 1) ? __boundary : (end);		 \
 })
 
+typedef void (*vmemmap_remap_pte_func_t)(struct page *reuse, pte_t *pte,
+					 unsigned long start, unsigned long end,
+					 void *priv);
+
 static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return free_vmemmap_pages_per_hpage(h) + RESERVE_VMEMMAP_NR;
@@ -163,9 +171,40 @@ static pmd_t *vmemmap_to_pmd(unsigned long addr)
 	return pmd;
 }
 
+static void vmemmap_restore_pte_range(struct page *reuse, pte_t *pte,
+				      unsigned long start, unsigned long end,
+				      void *priv)
+{
+	pgprot_t pgprot = PAGE_KERNEL;
+	void *from = page_to_virt(reuse);
+	unsigned long addr;
+	struct list_head *pages = priv;
+
+	for (addr = start; addr < end; addr += PAGE_SIZE) {
+		void *to;
+		struct page *page;
+
+		VM_BUG_ON(pte_none(*pte) || pte_page(*pte) != reuse);
+
+		page = list_first_entry(pages, struct page, lru);
+		list_del(&page->lru);
+		to = page_to_virt(page);
+		copy_page(to, from);
+
+		/*
+		 * Make sure that any data written to @to is made
+		 * visible to the physical page.
+		 */
+		flush_kernel_vmap_range(to, PAGE_SIZE);
+
+		prepare_vmemmap_page(page);
+		set_pte_at(&init_mm, addr, pte++, mk_pte(page, pgprot));
+	}
+}
+
 static void vmemmap_reuse_pte_range(struct page *reuse, pte_t *pte,
 				    unsigned long start, unsigned long end,
-				    struct list_head *vmemmap_pages)
+				    void *priv)
 {
 	/*
 	 * Make sure the tail pages are mapped read-only to catch
@@ -174,6 +213,7 @@ static void vmemmap_reuse_pte_range(struct page *reuse, pte_t *pte,
 	pgprot_t pgprot = PAGE_KERNEL_RO;
 	pte_t entry = mk_pte(reuse, pgprot);
 	unsigned long addr;
+	struct list_head *pages = priv;
 
 	for (addr = start; addr < end; addr += PAGE_SIZE, pte++) {
 		struct page *page;
@@ -181,14 +221,14 @@ static void vmemmap_reuse_pte_range(struct page *reuse, pte_t *pte,
 		VM_BUG_ON(pte_none(*pte));
 
 		page = pte_page(*pte);
-		list_add(&page->lru, vmemmap_pages);
+		list_add(&page->lru, pages);
 
 		set_pte_at(&init_mm, addr, pte, entry);
 	}
 }
 
 static void vmemmap_remap_range(unsigned long start, unsigned long end,
-				struct list_head *vmemmap_pages)
+				vmemmap_remap_pte_func_t func, void *priv)
 {
 	pmd_t *pmd;
 	unsigned long next, addr = start;
@@ -208,12 +248,52 @@ static void vmemmap_remap_range(unsigned long start, unsigned long end,
 			reuse = pte_page(pte[VMEMMAP_TAIL_PAGE_REUSE]);
 
 		next = vmemmap_hpage_addr_end(addr, end);
-		vmemmap_reuse_pte_range(reuse, pte, addr, next, vmemmap_pages);
+		func(reuse, pte, addr, next, priv);
 	} while (pmd++, addr = next, addr != end);
 
 	flush_tlb_kernel_range(start, end);
 }
 
+static inline void alloc_vmemmap_pages(struct hstate *h, struct list_head *list)
+{
+	unsigned int nr = free_vmemmap_pages_per_hpage(h);
+
+	while (nr--) {
+		struct page *page;
+
+retry:
+		page = alloc_page(GFP_VMEMMAP_PAGE);
+		if (unlikely(!page)) {
+			msleep(100);
+			/*
+			 * We should retry infinitely, because we cannot
+			 * handle allocation failures. Once the vmemmap
+			 * pages are allocated successfully, we can free
+			 * the HugeTLB page.
+			 */
+			goto retry;
+		}
+		list_add_tail(&page->lru, list);
+	}
+}
+
+void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	unsigned long start, end;
+	unsigned long vmemmap_addr = (unsigned long)head;
+	LIST_HEAD(vmemmap_pages);
+
+	if (!free_vmemmap_pages_per_hpage(h))
+		return;
+
+	alloc_vmemmap_pages(h, &vmemmap_pages);
+
+	start = vmemmap_addr + RESERVE_VMEMMAP_SIZE;
+	end = vmemmap_addr + vmemmap_pages_size_per_hpage(h);
+	vmemmap_remap_range(start, end, vmemmap_restore_pte_range,
+			    &vmemmap_pages);
+}
+
 static inline void free_vmemmap_page_list(struct list_head *list)
 {
 	struct page *page, *next;
@@ -235,7 +315,7 @@ void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 
 	start = vmemmap_addr + RESERVE_VMEMMAP_SIZE;
 	end = vmemmap_addr + vmemmap_pages_size_per_hpage(h);
-	vmemmap_remap_range(start, end, &vmemmap_pages);
+	vmemmap_remap_range(start, end, vmemmap_reuse_pte_range, &vmemmap_pages);
 
 	free_vmemmap_page_list(&vmemmap_pages);
 }
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 293897b9f1d8..7887095488f4 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -12,6 +12,7 @@
 
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 void __init hugetlb_vmemmap_init(struct hstate *h);
+void alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
 
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
@@ -23,6 +24,10 @@ static inline void hugetlb_vmemmap_init(struct hstate *h)
 {
 }
 
+static inline void alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
+
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
-- 
2.11.0


^ permalink raw reply	[flat|nested] 49+ messages in thread

* [PATCH v7 11/15] mm/hugetlb: Set the PageHWPoison to the raw error page
  2020-11-30 15:18 [PATCH v7 00/15] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (9 preceding siblings ...)
  2020-11-30 15:18 ` [PATCH v7 10/15] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page Muchun Song
@ 2020-11-30 15:18 ` Muchun Song
  2020-11-30 15:18 ` [PATCH v7 12/15] mm/hugetlb: Flush work when dissolving hugetlb page Muchun Song
                   ` (4 subsequent siblings)
  15 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-30 15:18 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Because we reuse the first tail vmemmap page frame and remap it
read-only, we cannot set PageHWPoison on a tail page. Instead, we
use head[4].private to record the real error page index and set
PageHWPoison on the raw error page later.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c         | 11 +++--------
 mm/hugetlb_vmemmap.h | 39 +++++++++++++++++++++++++++++++++++++++
 2 files changed, 42 insertions(+), 8 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index ebe35532d432..12cb46b8e901 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1382,6 +1382,7 @@ static void __free_hugepage(struct hstate *h, struct page *page)
 	int i;
 
 	alloc_huge_page_vmemmap(h, page);
+	subpage_hwpoison_deliver(page);
 
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
@@ -1849,14 +1850,8 @@ int dissolve_free_huge_page(struct page *page)
 		int nid = page_to_nid(head);
 		if (h->free_huge_pages - h->resv_huge_pages == 0)
 			goto out;
-		/*
-		 * Move PageHWPoison flag from head page to the raw error page,
-		 * which makes any subpages rather than the error page reusable.
-		 */
-		if (PageHWPoison(head) && page != head) {
-			SetPageHWPoison(page);
-			ClearPageHWPoison(head);
-		}
+
+		set_subpage_hwpoison(head, page);
 		list_del(&head->lru);
 		h->free_huge_pages--;
 		h->free_huge_pages_node[nid]--;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 7887095488f4..4bb35d87ae10 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -15,6 +15,29 @@ void __init hugetlb_vmemmap_init(struct hstate *h);
 void alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
 
+static inline void subpage_hwpoison_deliver(struct page *head)
+{
+	struct page *page = head;
+
+	if (PageHWPoison(head))
+		page = head + page_private(head + 4);
+
+	/*
+	 * Move PageHWPoison flag from head page to the raw error page,
+	 * which makes any subpages other than the error page reusable.
+	 */
+	if (page != head) {
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+
+static inline void set_subpage_hwpoison(struct page *head, struct page *page)
+{
+	if (PageHWPoison(head))
+		set_page_private(head + 4, page - head);
+}
+
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return h->nr_free_vmemmap_pages;
@@ -32,6 +55,22 @@ static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
 
+static inline void subpage_hwpoison_deliver(struct page *head)
+{
+}
+
+static inline void set_subpage_hwpoison(struct page *head, struct page *page)
+{
+	/*
+	 * Move PageHWPoison flag from head page to the raw error page,
+	 * which makes any subpages other than the error page reusable.
+	 */
+	if (PageHWPoison(head) && page != head) {
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return 0;
-- 
2.11.0


^ permalink raw reply	[flat|nested] 49+ messages in thread

* [PATCH v7 12/15] mm/hugetlb: Flush work when dissolving hugetlb page
  2020-11-30 15:18 [PATCH v7 00/15] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (10 preceding siblings ...)
  2020-11-30 15:18 ` [PATCH v7 11/15] mm/hugetlb: Set the PageHWPoison to the raw error page Muchun Song
@ 2020-11-30 15:18 ` Muchun Song
  2020-11-30 15:18 ` [PATCH v7 13/15] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap Muchun Song
                   ` (3 subsequent siblings)
  15 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-30 15:18 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

We should flush the deferred-freeing work when dissolving a HugeTLB page
to make sure that the page has actually been freed to the buddy allocator.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb.c | 18 +++++++++++++++++-
 1 file changed, 17 insertions(+), 1 deletion(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 12cb46b8e901..a114d6bfd1b9 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1327,6 +1327,12 @@ static void update_hpage_vmemmap_workfn(struct work_struct *work)
 }
 static DECLARE_WORK(hpage_update_work, update_hpage_vmemmap_workfn);
 
+static inline void flush_hpage_update_work(struct hstate *h)
+{
+	if (free_vmemmap_pages_per_hpage(h))
+		flush_work(&hpage_update_work);
+}
+
 static inline void __update_and_free_page(struct hstate *h, struct page *page)
 {
 	/* No need to allocate vmemmap pages */
@@ -1833,6 +1839,7 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
 int dissolve_free_huge_page(struct page *page)
 {
 	int rc = -EBUSY;
+	struct hstate *h = NULL;
 
 	/* Not to disrupt normal path by vainly holding hugetlb_lock */
 	if (!PageHuge(page))
@@ -1846,8 +1853,9 @@ int dissolve_free_huge_page(struct page *page)
 
 	if (!page_count(page)) {
 		struct page *head = compound_head(page);
-		struct hstate *h = page_hstate(head);
 		int nid = page_to_nid(head);
+
+		h = page_hstate(head);
 		if (h->free_huge_pages - h->resv_huge_pages == 0)
 			goto out;
 
@@ -1861,6 +1869,14 @@ int dissolve_free_huge_page(struct page *page)
 	}
 out:
 	spin_unlock(&hugetlb_lock);
+
+	/*
+	 * We should flush work before returning to make sure that
+	 * the HugeTLB page is freed to the buddy allocator.
+	 */
+	if (!rc && h)
+		flush_hpage_update_work(h);
+
 	return rc;
 }
 
-- 
2.11.0


^ permalink raw reply	[flat|nested] 49+ messages in thread

* [PATCH v7 13/15] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
  2020-11-30 15:18 [PATCH v7 00/15] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (11 preceding siblings ...)
  2020-11-30 15:18 ` [PATCH v7 12/15] mm/hugetlb: Flush work when dissolving hugetlb page Muchun Song
@ 2020-11-30 15:18 ` Muchun Song
  2020-12-04  0:01   ` Song Bao Hua (Barry Song)
  2020-11-30 15:18 ` [PATCH v7 14/15] mm/hugetlb: Gather discrete indexes of tail page Muchun Song
                   ` (2 subsequent siblings)
  15 siblings, 1 reply; 49+ messages in thread
From: Muchun Song @ 2020-11-30 15:18 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Add a kernel parameter hugetlb_free_vmemmap to control, at boot time, the
feature of freeing unused vmemmap pages associated with each HugeTLB page.
The feature is disabled by default and is enabled by passing
hugetlb_free_vmemmap=on.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 Documentation/admin-guide/kernel-parameters.txt |  9 +++++++++
 Documentation/admin-guide/mm/hugetlbpage.rst    |  3 +++
 arch/x86/mm/init_64.c                           |  5 +++--
 include/linux/hugetlb.h                         | 19 +++++++++++++++++++
 mm/hugetlb_vmemmap.c                            | 18 +++++++++++++++++-
 5 files changed, 51 insertions(+), 3 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 3ae25630a223..9e6854f21d55 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1551,6 +1551,15 @@
 			Documentation/admin-guide/mm/hugetlbpage.rst.
 			Format: size[KMG]
 
+	hugetlb_free_vmemmap=
+			[KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
+			this controls freeing unused vmemmap pages associated
+			with each HugeTLB page.
+			Format: { on | off (default) }
+
+			on:  enable the feature
+			off: disable the feature
+
 	hung_task_panic=
 			[KNL] Should the hung task detector generate panics.
 			Format: 0 | 1
diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index f7b1c7462991..6a8b57f6d3b7 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -145,6 +145,9 @@ default_hugepagesz
 
 	will all result in 256 2M huge pages being allocated.  Valid default
 	huge page size is architecture dependent.
+hugetlb_free_vmemmap
+	When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
+	unused vmemmap pages associated with each HugeTLB page.
 
 When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
 indicates the current number of pre-allocated huge pages of the default size.
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 155cb06a6961..fcdc020904a8 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -34,6 +34,7 @@
 #include <linux/gfp.h>
 #include <linux/kcore.h>
 #include <linux/bootmem_info.h>
+#include <linux/hugetlb.h>
 
 #include <asm/processor.h>
 #include <asm/bios_ebda.h>
@@ -1557,7 +1558,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 {
 	int err;
 
-	if (IS_ENABLED(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP))
+	if (is_hugetlb_free_vmemmap_enabled())
 		err = vmemmap_populate_basepages(start, end, node, NULL);
 	else if (end - start < PAGES_PER_SECTION * sizeof(struct page))
 		err = vmemmap_populate_basepages(start, end, node, NULL);
@@ -1613,7 +1614,7 @@ void register_page_bootmem_memmap(unsigned long section_nr,
 		get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);
 
 		if (!boot_cpu_has(X86_FEATURE_PSE) ||
-		    IS_ENABLED(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)) {
+		    is_hugetlb_free_vmemmap_enabled()) {
 			next = (addr + PAGE_SIZE) & PAGE_MASK;
 			pmd = pmd_offset(pud, addr);
 			if (pmd_none(*pmd))
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 4efeccb7192c..66d82ae7b712 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -773,6 +773,20 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
 }
 #endif
 
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+extern bool hugetlb_free_vmemmap_enabled;
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return hugetlb_free_vmemmap_enabled;
+}
+#else
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return false;
+}
+#endif
+
 #else	/* CONFIG_HUGETLB_PAGE */
 struct hstate {};
 
@@ -926,6 +940,11 @@ static inline void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr
 					pte_t *ptep, pte_t pte, unsigned long sz)
 {
 }
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return false;
+}
 #endif	/* CONFIG_HUGETLB_PAGE */
 
 static inline spinlock_t *huge_pte_lock(struct hstate *h,
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index a3714db7f400..ebc710d148e4 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -131,6 +131,21 @@ typedef void (*vmemmap_remap_pte_func_t)(struct page *reuse, pte_t *pte,
 					 unsigned long start, unsigned long end,
 					 void *priv);
 
+bool hugetlb_free_vmemmap_enabled;
+
+static int __init early_hugetlb_free_vmemmap_param(char *buf)
+{
+	if (!buf)
+		return -EINVAL;
+
+	if (!strcmp(buf, "on"))
+		hugetlb_free_vmemmap_enabled = true;
+	else if (strcmp(buf, "off"))
+		return -EINVAL;
+
+	return 0;
+}
+early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
 
 static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h)
 {
@@ -325,7 +340,8 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	unsigned int nr_pages = pages_per_huge_page(h);
 	unsigned int vmemmap_pages;
 
-	if (!is_power_of_2(sizeof(struct page))) {
+	if (!is_power_of_2(sizeof(struct page)) ||
+	    !hugetlb_free_vmemmap_enabled) {
 		pr_info("disable freeing vmemmap pages for %s\n", h->name);
 		return;
 	}
-- 
2.11.0


^ permalink raw reply	[flat|nested] 49+ messages in thread

* [PATCH v7 14/15] mm/hugetlb: Gather discrete indexes of tail page
  2020-11-30 15:18 [PATCH v7 00/15] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (12 preceding siblings ...)
  2020-11-30 15:18 ` [PATCH v7 13/15] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap Muchun Song
@ 2020-11-30 15:18 ` Muchun Song
  2020-11-30 15:18 ` [PATCH v7 15/15] mm/hugetlb: Add BUILD_BUG_ON to catch invalid usage of tail struct page Muchun Song
  2020-12-03  8:35 ` [PATCH v7 00/15] Free some vmemmap pages of hugetlb page Muchun Song
  15 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-30 15:18 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

For a HugeTLB page, there is more metadata to save than fits in the
head struct page, so we have to reuse several tail struct pages to
store it. In order to avoid conflicts caused by subsequent use of
more tail struct pages, gather these discrete tail page indexes into
an enum. This makes it easier to add a new tail page index later.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 include/linux/hugetlb.h        | 13 +++++++++++++
 include/linux/hugetlb_cgroup.h | 15 +++++++++------
 mm/hugetlb.c                   | 12 ++++++------
 mm/hugetlb_vmemmap.h           |  4 ++--
 4 files changed, 30 insertions(+), 14 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 66d82ae7b712..7295f6b3d55e 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -28,6 +28,19 @@ typedef struct { unsigned long pd; } hugepd_t;
 #include <linux/shm.h>
 #include <asm/tlbflush.h>
 
+enum {
+	SUBPAGE_INDEX_ACTIVE = 1,	/* reuse page flags of PG_private */
+	SUBPAGE_INDEX_TEMPORARY,	/* reuse page->mapping */
+#ifdef CONFIG_CGROUP_HUGETLB
+	SUBPAGE_INDEX_CGROUP = SUBPAGE_INDEX_TEMPORARY,/* reuse page->private */
+	SUBPAGE_INDEX_CGROUP_RSVD,	/* reuse page->private */
+#endif
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	SUBPAGE_INDEX_HWPOISON,		/* reuse page->private */
+#endif
+	NR_USED_SUBPAGE,
+};
+
 struct hugepage_subpool {
 	spinlock_t lock;
 	long count;
diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index 2ad6e92f124a..3d3c1c49efe4 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -24,8 +24,9 @@ struct file_region;
 /*
  * Minimum page order trackable by hugetlb cgroup.
  * At least 4 pages are necessary for all the tracking information.
- * The second tail page (hpage[2]) is the fault usage cgroup.
- * The third tail page (hpage[3]) is the reservation usage cgroup.
+ * The second tail page (hpage[SUBPAGE_INDEX_CGROUP]) is the fault
+ * usage cgroup. The third tail page (hpage[SUBPAGE_INDEX_CGROUP_RSVD])
+ * is the reservation usage cgroup.
  */
 #define HUGETLB_CGROUP_MIN_ORDER	2
 
@@ -66,9 +67,9 @@ __hugetlb_cgroup_from_page(struct page *page, bool rsvd)
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return NULL;
 	if (rsvd)
-		return (struct hugetlb_cgroup *)page[3].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP_RSVD);
 	else
-		return (struct hugetlb_cgroup *)page[2].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP);
 }
 
 static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
@@ -90,9 +91,11 @@ static inline int __set_hugetlb_cgroup(struct page *page,
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return -1;
 	if (rsvd)
-		page[3].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP_RSVD,
+				 (unsigned long)h_cg);
 	else
-		page[2].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP,
+				 (unsigned long)h_cg);
 	return 0;
 }
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index a114d6bfd1b9..b2f8cfc6c524 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1428,20 +1428,20 @@ struct hstate *size_to_hstate(unsigned long size)
 bool page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHuge(page), page);
-	return PageHead(page) && PagePrivate(&page[1]);
+	return PageHead(page) && PagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /* never called for tail page */
 static void set_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	SetPagePrivate(&page[1]);
+	SetPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 static void clear_page_huge_active(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
-	ClearPagePrivate(&page[1]);
+	ClearPagePrivate(&page[SUBPAGE_INDEX_ACTIVE]);
 }
 
 /*
@@ -1453,17 +1453,17 @@ static inline bool PageHugeTemporary(struct page *page)
 	if (!PageHuge(page))
 		return false;
 
-	return (unsigned long)page[2].mapping == -1U;
+	return (unsigned long)page[SUBPAGE_INDEX_TEMPORARY].mapping == -1U;
 }
 
 static inline void SetPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = (void *)-1U;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = (void *)-1U;
 }
 
 static inline void ClearPageHugeTemporary(struct page *page)
 {
-	page[2].mapping = NULL;
+	page[SUBPAGE_INDEX_TEMPORARY].mapping = NULL;
 }
 
 static void __free_huge_page(struct page *page)
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 4bb35d87ae10..54c2ca0e0dbe 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -20,7 +20,7 @@ static inline void subpage_hwpoison_deliver(struct page *head)
 	struct page *page = head;
 
 	if (PageHWPoison(head))
-		page = head + page_private(head + 4);
+		page = head + page_private(head + SUBPAGE_INDEX_HWPOISON);
 
 	/*
 	 * Move PageHWPoison flag from head page to the raw error page,
@@ -35,7 +35,7 @@ static inline void subpage_hwpoison_deliver(struct page *head)
 static inline void set_subpage_hwpoison(struct page *head, struct page *page)
 {
 	if (PageHWPoison(head))
-		set_page_private(head + 4, page - head);
+		set_page_private(head + SUBPAGE_INDEX_HWPOISON, page - head);
 }
 
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
-- 
2.11.0


^ permalink raw reply	[flat|nested] 49+ messages in thread

* [PATCH v7 15/15] mm/hugetlb: Add BUILD_BUG_ON to catch invalid usage of tail struct page
  2020-11-30 15:18 [PATCH v7 00/15] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (13 preceding siblings ...)
  2020-11-30 15:18 ` [PATCH v7 14/15] mm/hugetlb: Gather discrete indexes of tail page Muchun Song
@ 2020-11-30 15:18 ` Muchun Song
  2020-12-03  8:35 ` [PATCH v7 00/15] Free some vmemmap pages of hugetlb page Muchun Song
  15 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-11-30 15:18 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Only `RESERVE_VMEMMAP_SIZE / sizeof(struct page)` struct pages can be
used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is enabled, so add a
BUILD_BUG_ON to catch invalid usage of the tail struct pages.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/hugetlb_vmemmap.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index ebc710d148e4..3328de88819e 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -340,6 +340,9 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	unsigned int nr_pages = pages_per_huge_page(h);
 	unsigned int vmemmap_pages;
 
+	BUILD_BUG_ON(NR_USED_SUBPAGE >=
+		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
+
 	if (!is_power_of_2(sizeof(struct page)) ||
 	    !hugetlb_free_vmemmap_enabled) {
 		pr_info("disable freeing vmemmap pages for %s\n", h->name);
-- 
2.11.0


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v7 00/15] Free some vmemmap pages of hugetlb page
  2020-11-30 15:18 [PATCH v7 00/15] Free some vmemmap pages of hugetlb page Muchun Song
                   ` (14 preceding siblings ...)
  2020-11-30 15:18 ` [PATCH v7 15/15] mm/hugetlb: Add BUILD_BUG_ON to catch invalid usage of tail struct page Muchun Song
@ 2020-12-03  8:35 ` Muchun Song
  2020-12-03 23:48   ` Mike Kravetz
  15 siblings, 1 reply; 49+ messages in thread
From: Muchun Song @ 2020-12-03  8:35 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: Andrew Morton, Jonathan Corbet, dave.hansen, hpa, x86, bp, mingo,
	Thomas Gleixner, pawan.kumar.gupta, mchehab+huawei, paulmck,
	viro, Peter Zijlstra, luto, oneukum, jroedel, Matthew Wilcox,
	David Rientjes, Mina Almasry, Randy Dunlap, anshuman.khandual,
	Oscar Salvador, Michal Hocko, Song Bao Hua (Barry Song),
	Xiongchun duan, LKML, Linux Memory Management List, linux-doc,
	linux-fsdevel

On Mon, Nov 30, 2020 at 11:19 PM Muchun Song <songmuchun@bytedance.com> wrote:
>
> Hi all,
>
> This patch series will free some vmemmap pages (struct page structures)
> associated with each HugeTLB page when it is preallocated, to save memory.

Hi Mike,

What's your opinion on this version?  Any comments or suggestions?
I hope you, or more people, can review the series. Thank you very
much.

>
> In order to reduce the difficulty of code review, from this version onward
> we disable the PMD/huge page mapping of vmemmap when this feature is
> enabled. This actually eliminates a bunch of complex page table
> manipulation code. When this patch series is solid, we can add the vmemmap
> page table manipulation code in the future.
>
> The struct page structures (page structs) are used to describe a physical
> page frame. By default, there is a one-to-one mapping from a page frame to
> its corresponding page struct.
>
> HugeTLB pages consist of multiple base page size pages and are supported
> by many architectures. See hugetlbpage.rst in the Documentation directory
> for more details. On the x86 architecture, HugeTLB pages of size 2MB and 1GB
> are currently supported. Since the base page size on x86 is 4KB, a 2MB
> HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
> 4096 base pages. For each base page, there is a corresponding page struct.
>
> Within the HugeTLB subsystem, only the first 4 page structs are used to
> contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
> provides this upper limit. The only 'useful' information in the remaining
> page structs is the compound_head field, and this field is the same for all
> tail pages.
>
> By removing redundant page structs for HugeTLB pages, memory can be returned to
> the buddy allocator for other uses.
>
> When the system boots up, every 2MB HugeTLB page has 512 struct pages, which
> occupy 8 pages (sizeof(struct page) * 512 / PAGE_SIZE).
>
>     HugeTLB                  struct pages(8 pages)         page frame(8 pages)
>  +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
>  |           |                     |     0     | -------------> |     0     |
>  |           |                     +-----------+                +-----------+
>  |           |                     |     1     | -------------> |     1     |
>  |           |                     +-----------+                +-----------+
>  |           |                     |     2     | -------------> |     2     |
>  |           |                     +-----------+                +-----------+
>  |           |                     |     3     | -------------> |     3     |
>  |           |                     +-----------+                +-----------+
>  |           |                     |     4     | -------------> |     4     |
>  |    2MB    |                     +-----------+                +-----------+
>  |           |                     |     5     | -------------> |     5     |
>  |           |                     +-----------+                +-----------+
>  |           |                     |     6     | -------------> |     6     |
>  |           |                     +-----------+                +-----------+
>  |           |                     |     7     | -------------> |     7     |
>  |           |                     +-----------+                +-----------+
>  |           |
>  |           |
>  |           |
>  +-----------+
>
> The value of page->compound_head is the same for all tail pages. The first
> page of page structs (page 0) associated with the HugeTLB page contains the 4
> page structs necessary to describe the HugeTLB page. The only use of the remaining
> pages of page structs (page 1 to page 7) is to point to page->compound_head.
> Therefore, we can remap pages 2 to 7 to page 1. Only 2 pages of page structs
> will be used for each HugeTLB page. This will allow us to free the remaining
> 6 pages to the buddy allocator.
>
> Here is how things look after remapping.
>
>     HugeTLB                  struct pages(8 pages)         page frame(8 pages)
>  +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
>  |           |                     |     0     | -------------> |     0     |
>  |           |                     +-----------+                +-----------+
>  |           |                     |     1     | -------------> |     1     |
>  |           |                     +-----------+                +-----------+
>  |           |                     |     2     | ----------------^ ^ ^ ^ ^ ^
>  |           |                     +-----------+                   | | | | |
>  |           |                     |     3     | ------------------+ | | | |
>  |           |                     +-----------+                     | | | |
>  |           |                     |     4     | --------------------+ | | |
>  |    2MB    |                     +-----------+                       | | |
>  |           |                     |     5     | ----------------------+ | |
>  |           |                     +-----------+                         | |
>  |           |                     |     6     | ------------------------+ |
>  |           |                     +-----------+                           |
>  |           |                     |     7     | --------------------------+
>  |           |                     +-----------+
>  |           |
>  |           |
>  |           |
>  +-----------+
>
> When a HugeTLB page is freed to the buddy system, we must allocate 6 pages
> for its vmemmap and restore the previous mapping relationship.
>
> Apart from the 2MB HugeTLB page, we also have the 1GB HugeTLB page. It is
> similar to the 2MB HugeTLB page, and we can use the same approach to free
> its vmemmap pages.
>
> In this case, for a 1GB HugeTLB page, we can save 4088 pages (there are
> 4096 pages of struct pages; we reserve 2 pages for vmemmap and 8 pages for
> page tables, so we can save 4088 pages). This is a very substantial gain.
> On our server, we run some SPDK/QEMU applications which use 1024GB of
> HugeTLB pages. With this feature enabled, we can save ~16GB (1GB huge pages)
> or ~11GB (2MB huge pages; the worst case is 10GB and the best is 12GB) of memory.
>
> Because the vmemmap page tables are reconstructed on the freeing/allocating
> path, this adds some overhead. Here is an overhead analysis.
>
> 1) Allocating 10240 2MB hugetlb pages.
>
>    a) With this patch series applied:
>    # time echo 10240 > /proc/sys/vm/nr_hugepages
>
>    real     0m0.166s
>    user     0m0.000s
>    sys      0m0.166s
>
>    # bpftrace -e 'kprobe:alloc_fresh_huge_page { @start[tid] = nsecs; } kretprobe:alloc_fresh_huge_page /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
>    Attaching 2 probes...
>
>    @latency:
>    [8K, 16K)           8360 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
>    [16K, 32K)          1868 |@@@@@@@@@@@                                         |
>    [32K, 64K)            10 |                                                    |
>    [64K, 128K)            2 |                                                    |
>
>    b) Without this patch series:
>    # time echo 10240 > /proc/sys/vm/nr_hugepages
>
>    real     0m0.066s
>    user     0m0.000s
>    sys      0m0.066s
>
>    # bpftrace -e 'kprobe:alloc_fresh_huge_page { @start[tid] = nsecs; } kretprobe:alloc_fresh_huge_page /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
>    Attaching 2 probes...
>
>    @latency:
>    [4K, 8K)           10176 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
>    [8K, 16K)             62 |                                                    |
>    [16K, 32K)             2 |                                                    |
>
>    Summary: this feature makes allocation about 2x slower than before.
>
> 2) Freeing 10240 2MB hugetlb pages.
>
>    a) With this patch series applied:
>    # time echo 0 > /proc/sys/vm/nr_hugepages
>
>    real     0m0.004s
>    user     0m0.000s
>    sys      0m0.002s
>
>    # bpftrace -e 'kprobe:__free_hugepage { @start[tid] = nsecs; } kretprobe:__free_hugepage /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
>    Attaching 2 probes...
>
>    @latency:
>    [16K, 32K)         10240 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
>
>    b) Without this patch series:
>    # time echo 0 > /proc/sys/vm/nr_hugepages
>
>    real     0m0.077s
>    user     0m0.001s
>    sys      0m0.075s
>
>    # bpftrace -e 'kprobe:__free_hugepage { @start[tid] = nsecs; } kretprobe:__free_hugepage /@start[tid]/ { @latency = hist(nsecs - @start[tid]); delete(@start[tid]); }'
>    Attaching 2 probes...
>
>    @latency:
>    [4K, 8K)            9950 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
>    [8K, 16K)            287 |@                                                   |
>    [16K, 32K)             3 |                                                    |
>
>    Summary: __free_hugepage is about 2-4x slower than before. But according
>             to the allocation test above, I think it is really about 2x
>             slower than before.
>
>             Why is the 'real' time with the patches smaller than before?
>             Because in this patch series, freeing a hugetlb page is
>             asynchronous (through a kworker).
>
> Although the overhead has increased, the overhead is not significant. Like Mike
> said, "However, remember that the majority of use cases create hugetlb pages at
> or shortly after boot time and add them to the pool. So, additional overhead is
> at pool creation time. There is no change to 'normal run time' operations of
> getting a page from or returning a page to the pool (think page fault/unmap)".
>
> Todo:
>   1. Free all of the tail vmemmap pages
>      Now, for a 2MB HugeTLB page, we free only 6 vmemmap pages, but we could
>      actually free 7. In that case, 8 of the 512 struct pages would have the
>      PG_head flag set, so compound_head() would need to be adjusted slightly
>      to return the real head struct page when its parameter is a tail struct
>      page that has the PG_head flag set.
>
>      To keep the code evolution route clear, this can be a separate patch
>      after this patchset is solid.
>   2. Support for other architectures (e.g. aarch64).
>   3. Enable PMD/huge page mapping of vmemmap even if this feature was enabled.
>
> Changelog in v7:
>   1. Rebase to linux-next 20201130
>   2. Do not use basepage mapping for vmemmap when this feature is disabled.
>      Thanks to Oscar and Barry.
>   3. Rework some patches.
>      [PATCH v6 08/16] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
>      [PATCH v6 10/16] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page
>
> Changelog in v6:
>   1. Disable PMD/huge page mapping of vmemmap if this feature was enabled.
>   2. Simplify the first version code.
>
> Changelog in v5:
>   1. Rework some comments and code in [PATCH v4 04/21] and [PATCH v4 05/21].
>      Thanks to Mike and Oscar's suggestions.
>
> Changelog in v4:
>   1. Move all the vmemmap functions to hugetlb_vmemmap.c.
>   2. Make CONFIG_HUGETLB_PAGE_FREE_VMEMMAP default to y; if we want to
>      disable this feature, we do so via a boot/kernel command line parameter.
>   3. Remove vmemmap_pgtable_{init, deposit, withdraw}() helper functions.
>   4. Initialize page table lock for vmemmap through core_initcall mechanism.
>
>   Thanks for Mike and Oscar's suggestions.
>
> Changelog in v3:
>   1. Rename some helper functions. Thanks Mike.
>   2. Rework some code. Thanks Mike and Oscar.
>   3. Remap the tail vmemmap page with PAGE_KERNEL_RO instead of
>      PAGE_KERNEL. Thanks Matthew.
>   4. Add some overhead analysis in the cover letter.
>   5. Use the vmemmap PMD table lock instead of a hugetlb-specific global lock.
>
> Changelog in v2:
>   1. Fix: do not call dissolve_compound_page() in alloc_huge_page_vmemmap().
>   2. Fix some typo and code style problems.
>   3. Remove unused handle_vmemmap_fault().
>   4. Merge some commits to one commit suggested by Mike.
>
> Muchun Song (15):
>   mm/memory_hotplug: Move bootmem info registration API to
>     bootmem_info.c
>   mm/memory_hotplug: Move {get,put}_page_bootmem() to bootmem_info.c
>   mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
>   mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
>   mm/bootmem_info: Introduce {free,prepare}_vmemmap_page()
>   mm/hugetlb: Disable freeing vmemmap if struct page size is not power
>     of two
>   x86/mm/64: Disable PMD page mapping of vmemmap
>   mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
>   mm/hugetlb: Defer freeing of HugeTLB pages
>   mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb
>     page
>   mm/hugetlb: Set the PageHWPoison to the raw error page
>   mm/hugetlb: Flush work when dissolving hugetlb page
>   mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
>   mm/hugetlb: Gather discrete indexes of tail page
>   mm/hugetlb: Add BUILD_BUG_ON to catch invalid usage of tail struct
>     page
>
>  Documentation/admin-guide/kernel-parameters.txt |   9 +
>  Documentation/admin-guide/mm/hugetlbpage.rst    |   3 +
>  arch/x86/mm/init_64.c                           |  13 +-
>  fs/Kconfig                                      |  14 +
>  include/linux/bootmem_info.h                    |  64 +++++
>  include/linux/hugetlb.h                         |  35 +++
>  include/linux/hugetlb_cgroup.h                  |  15 +-
>  include/linux/memory_hotplug.h                  |  27 --
>  mm/Makefile                                     |   2 +
>  mm/bootmem_info.c                               | 124 ++++++++
>  mm/hugetlb.c                                    | 144 ++++++++--
>  mm/hugetlb_vmemmap.c                            | 367 ++++++++++++++++++++++++
>  mm/hugetlb_vmemmap.h                            |  79 +++++
>  mm/memory_hotplug.c                             | 116 --------
>  mm/sparse.c                                     |   1 +
>  15 files changed, 834 insertions(+), 179 deletions(-)
>  create mode 100644 include/linux/bootmem_info.h
>  create mode 100644 mm/bootmem_info.c
>  create mode 100644 mm/hugetlb_vmemmap.c
>  create mode 100644 mm/hugetlb_vmemmap.h
>
> --
> 2.11.0
>


--
Yours,
Muchun

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v7 00/15] Free some vmemmap pages of hugetlb page
  2020-12-03  8:35 ` [PATCH v7 00/15] Free some vmemmap pages of hugetlb page Muchun Song
@ 2020-12-03 23:48   ` Mike Kravetz
  2020-12-04  3:39     ` [External] " Muchun Song
  0 siblings, 1 reply; 49+ messages in thread
From: Mike Kravetz @ 2020-12-03 23:48 UTC (permalink / raw)
  To: Muchun Song
  Cc: Andrew Morton, Jonathan Corbet, dave.hansen, hpa, x86, bp, mingo,
	Thomas Gleixner, pawan.kumar.gupta, mchehab+huawei, paulmck,
	viro, Peter Zijlstra, luto, oneukum, jroedel, Matthew Wilcox,
	David Rientjes, Mina Almasry, Randy Dunlap, anshuman.khandual,
	Oscar Salvador, Michal Hocko, Song Bao Hua (Barry Song),
	Xiongchun duan, LKML, Linux Memory Management List, linux-doc,
	linux-fsdevel

On 12/3/20 12:35 AM, Muchun Song wrote:
> On Mon, Nov 30, 2020 at 11:19 PM Muchun Song <songmuchun@bytedance.com> wrote:
>>
>> Hi all,
>>
>> This patch series will free some vmemmap pages (struct page structures)
>> associated with each HugeTLB page when it is preallocated, to save memory.
> 
> Hi Mike,
> 
> What's your opinion on this version?  Any comments or suggestions?
> I hope you, or more people, can review the series. Thank you very
> much.

Sorry Muchun, I have been busy with other things and have not looked at
this new version.  Should have some time soon.

As previously mentioned, I feel qualified to review the hugetlb changes
and some other closely related changes.  However, this patch set is
touching quite a few areas and I do not feel qualified to make authoritative
statements about them all.  I too hope others will take a look.
-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 49+ messages in thread

* RE: [PATCH v7 13/15] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
  2020-11-30 15:18 ` [PATCH v7 13/15] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap Muchun Song
@ 2020-12-04  0:01   ` Song Bao Hua (Barry Song)
  0 siblings, 0 replies; 49+ messages in thread
From: Song Bao Hua (Barry Song) @ 2020-12-04  0:01 UTC (permalink / raw)
  To: Muchun Song, corbet, mike.kravetz, tglx, mingo, bp, x86, hpa,
	dave.hansen, luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel



> -----Original Message-----
> From: Muchun Song [mailto:songmuchun@bytedance.com]
> Sent: Tuesday, December 1, 2020 4:19 AM
> To: corbet@lwn.net; mike.kravetz@oracle.com; tglx@linutronix.de;
> mingo@redhat.com; bp@alien8.de; x86@kernel.org; hpa@zytor.com;
> dave.hansen@linux.intel.com; luto@kernel.org; peterz@infradead.org;
> viro@zeniv.linux.org.uk; akpm@linux-foundation.org; paulmck@kernel.org;
> mchehab+huawei@kernel.org; pawan.kumar.gupta@linux.intel.com;
> rdunlap@infradead.org; oneukum@suse.com; anshuman.khandual@arm.com;
> jroedel@suse.de; almasrymina@google.com; rientjes@google.com;
> willy@infradead.org; osalvador@suse.de; mhocko@suse.com; Song Bao Hua (Barry
> Song) <song.bao.hua@hisilicon.com>
> Cc: duanxiongchun@bytedance.com; linux-doc@vger.kernel.org;
> linux-kernel@vger.kernel.org; linux-mm@kvack.org;
> linux-fsdevel@vger.kernel.org; Muchun Song <songmuchun@bytedance.com>
> Subject: [PATCH v7 13/15] mm/hugetlb: Add a kernel parameter
> hugetlb_free_vmemmap
> 
> Add a kernel parameter hugetlb_free_vmemmap to control, at boot, the feature
> of freeing unused vmemmap pages associated with each hugetlb page.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>


Reviewed-by: Barry Song <song.bao.hua@hisilicon.com>

> ---
>  Documentation/admin-guide/kernel-parameters.txt |  9 +++++++++
>  Documentation/admin-guide/mm/hugetlbpage.rst    |  3 +++
>  arch/x86/mm/init_64.c                           |  5 +++--
>  include/linux/hugetlb.h                         | 19 +++++++++++++++++++
>  mm/hugetlb_vmemmap.c                            | 18 +++++++++++++++++-
>  5 files changed, 51 insertions(+), 3 deletions(-)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt
> b/Documentation/admin-guide/kernel-parameters.txt
> index 3ae25630a223..9e6854f21d55 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -1551,6 +1551,15 @@
>  			Documentation/admin-guide/mm/hugetlbpage.rst.
>  			Format: size[KMG]
> 
> +	hugetlb_free_vmemmap=
> +			[KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
> +			this controls freeing unused vmemmap pages associated
> +			with each HugeTLB page.
> +			Format: { on | off (default) }
> +
> +			on:  enable the feature
> +			off: disable the feature
> +
>  	hung_task_panic=
>  			[KNL] Should the hung task detector generate panics.
>  			Format: 0 | 1
> diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst
> b/Documentation/admin-guide/mm/hugetlbpage.rst
> index f7b1c7462991..6a8b57f6d3b7 100644
> --- a/Documentation/admin-guide/mm/hugetlbpage.rst
> +++ b/Documentation/admin-guide/mm/hugetlbpage.rst
> @@ -145,6 +145,9 @@ default_hugepagesz
> 
>  	will all result in 256 2M huge pages being allocated.  Valid default
>  	huge page size is architecture dependent.
> +hugetlb_free_vmemmap
> +	When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
> +	unused vmemmap pages associated with each HugeTLB page.
> 
>  When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
>  indicates the current number of pre-allocated huge pages of the default size.
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index 155cb06a6961..fcdc020904a8 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -34,6 +34,7 @@
>  #include <linux/gfp.h>
>  #include <linux/kcore.h>
>  #include <linux/bootmem_info.h>
> +#include <linux/hugetlb.h>
> 
>  #include <asm/processor.h>
>  #include <asm/bios_ebda.h>
> @@ -1557,7 +1558,7 @@ int __meminit vmemmap_populate(unsigned long start,
> unsigned long end, int node,
>  {
>  	int err;
> 
> -	if (IS_ENABLED(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP))
> +	if (is_hugetlb_free_vmemmap_enabled())
>  		err = vmemmap_populate_basepages(start, end, node, NULL);
>  	else if (end - start < PAGES_PER_SECTION * sizeof(struct page))
>  		err = vmemmap_populate_basepages(start, end, node, NULL);
> @@ -1613,7 +1614,7 @@ void register_page_bootmem_memmap(unsigned long
> section_nr,
>  		get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);
> 
>  		if (!boot_cpu_has(X86_FEATURE_PSE) ||
> -		    IS_ENABLED(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)) {
> +		    is_hugetlb_free_vmemmap_enabled()) {
>  			next = (addr + PAGE_SIZE) & PAGE_MASK;
>  			pmd = pmd_offset(pud, addr);
>  			if (pmd_none(*pmd))
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index 4efeccb7192c..66d82ae7b712 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -773,6 +773,20 @@ static inline void huge_ptep_modify_prot_commit(struct
> vm_area_struct *vma,
>  }
>  #endif
> 
> +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> +extern bool hugetlb_free_vmemmap_enabled;
> +
> +static inline bool is_hugetlb_free_vmemmap_enabled(void)
> +{
> +	return hugetlb_free_vmemmap_enabled;
> +}
> +#else
> +static inline bool is_hugetlb_free_vmemmap_enabled(void)
> +{
> +	return false;
> +}
> +#endif
> +
>  #else	/* CONFIG_HUGETLB_PAGE */
>  struct hstate {};
> 
> @@ -926,6 +940,11 @@ static inline void set_huge_swap_pte_at(struct mm_struct
> *mm, unsigned long addr
>  					pte_t *ptep, pte_t pte, unsigned long sz)
>  {
>  }
> +
> +static inline bool is_hugetlb_free_vmemmap_enabled(void)
> +{
> +	return false;
> +}
>  #endif	/* CONFIG_HUGETLB_PAGE */
> 
>  static inline spinlock_t *huge_pte_lock(struct hstate *h,
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index a3714db7f400..ebc710d148e4 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -131,6 +131,21 @@ typedef void (*vmemmap_remap_pte_func_t)(struct page
> *reuse, pte_t *pte,
>  					 unsigned long start, unsigned long end,
>  					 void *priv);
> 
> +bool hugetlb_free_vmemmap_enabled;
> +
> +static int __init early_hugetlb_free_vmemmap_param(char *buf)
> +{
> +	if (!buf)
> +		return -EINVAL;
> +
> +	if (!strcmp(buf, "on"))
> +		hugetlb_free_vmemmap_enabled = true;
> +	else if (strcmp(buf, "off"))
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
> 
>  static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h)
>  {
> @@ -325,7 +340,8 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
>  	unsigned int nr_pages = pages_per_huge_page(h);
>  	unsigned int vmemmap_pages;
> 
> -	if (!is_power_of_2(sizeof(struct page))) {
> +	if (!is_power_of_2(sizeof(struct page)) ||
> +	    !hugetlb_free_vmemmap_enabled) {
>  		pr_info("disable freeing vmemmap pages for %s\n", h->name);
>  		return;
>  	}
> --
> 2.11.0

Thanks
Barry


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [External] Re: [PATCH v7 00/15] Free some vmemmap pages of hugetlb page
  2020-12-03 23:48   ` Mike Kravetz
@ 2020-12-04  3:39     ` Muchun Song
  2020-12-07 18:38       ` Oscar Salvador
  0 siblings, 1 reply; 49+ messages in thread
From: Muchun Song @ 2020-12-04  3:39 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: Andrew Morton, Jonathan Corbet, dave.hansen, hpa, x86, bp, mingo,
	Thomas Gleixner, pawan.kumar.gupta, mchehab+huawei, paulmck,
	viro, Peter Zijlstra, luto, oneukum, jroedel, Matthew Wilcox,
	David Rientjes, Mina Almasry, Randy Dunlap, anshuman.khandual,
	Oscar Salvador, Michal Hocko, Song Bao Hua (Barry Song),
	Xiongchun duan, LKML, Linux Memory Management List, linux-doc,
	linux-fsdevel

On Fri, Dec 4, 2020 at 7:49 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 12/3/20 12:35 AM, Muchun Song wrote:
> > On Mon, Nov 30, 2020 at 11:19 PM Muchun Song <songmuchun@bytedance.com> wrote:
> >>
> >> Hi all,
> >>
> >> This patch series will free some vmemmap pages (struct page structures)
> >> associated with each HugeTLB page when it is preallocated, to save memory.
> >
> > Hi Mike,
> >
> > What's your opinion on this version?  Any comments or suggestions?
> > I hope you, or more people, can review the series. Thank you very
> > much.
>
> Sorry Muchun, I have been busy with other things and have not looked at
> this new version.  Should have some time soon.

Thanks very much.

>
> As previously mentioned, I feel qualified to review the hugetlb changes
> and some other closely related changes.  However, this patch set is
> touching quite a few areas and I do not feel qualified to make authoritative
> statements about them all.  I too hope others will take a look.

Agree. I also hope others can take a look at the other modules (e.g.
sparse-vmemmap, memory-hotplug). Thanks for everyone's efforts on this.

> --
> Mike Kravetz



-- 
Yours,
Muchun

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v7 01/15] mm/memory_hotplug: Move bootmem info registration API to bootmem_info.c
  2020-11-30 15:18 ` [PATCH v7 01/15] mm/memory_hotplug: Move bootmem info registration API to bootmem_info.c Muchun Song
@ 2020-12-07 12:12   ` David Hildenbrand
  0 siblings, 0 replies; 49+ messages in thread
From: David Hildenbrand @ 2020-12-07 12:12 UTC (permalink / raw)
  To: Muchun Song, corbet, mike.kravetz, tglx, mingo, bp, x86, hpa,
	dave.hansen, luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On 30.11.20 16:18, Muchun Song wrote:
> Move bootmem info registration common API to individual bootmem_info.c
> for later patch use. This is just code movement without any functional
> change.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> ---
>  arch/x86/mm/init_64.c          |  1 +
>  include/linux/bootmem_info.h   | 27 ++++++++++++
>  include/linux/memory_hotplug.h | 23 ----------
>  mm/Makefile                    |  1 +
>  mm/bootmem_info.c              | 99 ++++++++++++++++++++++++++++++++++++++++++
>  mm/memory_hotplug.c            | 91 +-------------------------------------
>  6 files changed, 129 insertions(+), 113 deletions(-)
>  create mode 100644 include/linux/bootmem_info.h
>  create mode 100644 mm/bootmem_info.c
> 

Reviewed-by: David Hildenbrand <david@redhat.com>


-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v7 02/15] mm/memory_hotplug: Move {get,put}_page_bootmem() to bootmem_info.c
  2020-11-30 15:18 ` [PATCH v7 02/15] mm/memory_hotplug: Move {get,put}_page_bootmem() " Muchun Song
@ 2020-12-07 12:14   ` David Hildenbrand
  2020-12-07 12:16     ` [External] " Muchun Song
  0 siblings, 1 reply; 49+ messages in thread
From: David Hildenbrand @ 2020-12-07 12:14 UTC (permalink / raw)
  To: Muchun Song, corbet, mike.kravetz, tglx, mingo, bp, x86, hpa,
	dave.hansen, luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On 30.11.20 16:18, Muchun Song wrote:
> In the later patch, we will use {get,put}_page_bootmem() to initialize
> the page for vmemmap or free vmemmap page to buddy. So move them out of
> CONFIG_MEMORY_HOTPLUG_SPARSE. This is just code movement without any
> functional change.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> ---
>  arch/x86/mm/init_64.c          |  2 +-
>  include/linux/bootmem_info.h   | 13 +++++++++++++
>  include/linux/memory_hotplug.h |  4 ----
>  mm/bootmem_info.c              | 25 +++++++++++++++++++++++++
>  mm/memory_hotplug.c            | 27 ---------------------------
>  mm/sparse.c                    |  1 +
>  6 files changed, 40 insertions(+), 32 deletions(-)
> 

I'd squash this into the previous patch and name it like

"mm/memory_hotplug: Factor out bootmem core functions to bootmem_info.c"


-- 
Thanks,

David / dhildenb


^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [External] Re: [PATCH v7 02/15] mm/memory_hotplug: Move {get,put}_page_bootmem() to bootmem_info.c
  2020-12-07 12:14   ` David Hildenbrand
@ 2020-12-07 12:16     ` Muchun Song
  0 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-12-07 12:16 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	Xiongchun duan, linux-doc, LKML, Linux Memory Management List,
	linux-fsdevel

On Mon, Dec 7, 2020 at 8:14 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 30.11.20 16:18, Muchun Song wrote:
> > In the later patch, we will use {get,put}_page_bootmem() to initialize
> > the page for vmemmap or free vmemmap page to buddy. So move them out of
> > CONFIG_MEMORY_HOTPLUG_SPARSE. This is just code movement without any
> > functional change.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
> > Reviewed-by: Oscar Salvador <osalvador@suse.de>
> > ---
> >  arch/x86/mm/init_64.c          |  2 +-
> >  include/linux/bootmem_info.h   | 13 +++++++++++++
> >  include/linux/memory_hotplug.h |  4 ----
> >  mm/bootmem_info.c              | 25 +++++++++++++++++++++++++
> >  mm/memory_hotplug.c            | 27 ---------------------------
> >  mm/sparse.c                    |  1 +
> >  6 files changed, 40 insertions(+), 32 deletions(-)
> >
>
> I'd squash this into the previous patch and name it like
>
> "mm/memory_hotplug: Factor out bootmem core functions to bootmem_info.c"

OK, will do. Thanks for your suggestions. :)

>
>
> --
> Thanks,
>
> David / dhildenb
>


-- 
Yours,
Muchun

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [PATCH v7 03/15] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  2020-11-30 15:18 ` [PATCH v7 03/15] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP Muchun Song
@ 2020-12-07 12:19   ` David Hildenbrand
  2020-12-07 12:42     ` [External] " Muchun Song
  0 siblings, 1 reply; 49+ messages in thread
From: David Hildenbrand @ 2020-12-07 12:19 UTC (permalink / raw)
  To: Muchun Song, corbet, mike.kravetz, tglx, mingo, bp, x86, hpa,
	dave.hansen, luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On 30.11.20 16:18, Muchun Song wrote:
> The purpose of introducing HUGETLB_PAGE_FREE_VMEMMAP is to configure
> whether to enable the feature of freeing unused vmemmap associated
> with HugeTLB pages. And this is just for dependency check. Now only
> support x86.

x86 - i386 and x86-64? (I assume the latter only ;) )

> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  arch/x86/mm/init_64.c |  2 +-
>  fs/Kconfig            | 14 ++++++++++++++
>  2 files changed, 15 insertions(+), 1 deletion(-)
> 
> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> index 0a45f062826e..0435bee2e172 100644
> --- a/arch/x86/mm/init_64.c
> +++ b/arch/x86/mm/init_64.c
> @@ -1225,7 +1225,7 @@ static struct kcore_list kcore_vsyscall;
>  
>  static void __init register_page_bootmem_info(void)
>  {
> -#ifdef CONFIG_NUMA
> +#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)
>  	int i;
>  

Why does this hunk belong in this patch? Looks like this should go
into another patch.

>  	for_each_online_node(i)
> diff --git a/fs/Kconfig b/fs/Kconfig
> index 976e8b9033c4..4961dd488444 100644
> --- a/fs/Kconfig
> +++ b/fs/Kconfig
> @@ -245,6 +245,20 @@ config HUGETLBFS
>  config HUGETLB_PAGE
>  	def_bool HUGETLBFS
>  
> +config HUGETLB_PAGE_FREE_VMEMMAP
> +	def_bool HUGETLB_PAGE
> +	depends on X86
> +	depends on SPARSEMEM_VMEMMAP
> +	depends on HAVE_BOOTMEM_INFO_NODE
> +	help
> +	  When using HUGETLB_PAGE_FREE_VMEMMAP, the system can save up some
> +	  memory from pre-allocated HugeTLB pages when they are not used.
> +	  6 pages per 2MB HugeTLB page and 4094 per 1GB HugeTLB page.

Calculations only apply to 4k base pages, no? (maybe generalize this a
bit or mention 4k base pages - I'm pretty sure we'll see the "depends on
X86" part fairly soon if this goes upstream)


-- 
Thanks,

David / dhildenb



* Re: [PATCH v7 04/15] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  2020-11-30 15:18 ` [PATCH v7 04/15] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
@ 2020-12-07 12:36   ` David Hildenbrand
  2020-12-07 13:11     ` [External] " Muchun Song
  0 siblings, 1 reply; 49+ messages in thread
From: David Hildenbrand @ 2020-12-07 12:36 UTC (permalink / raw)
  To: Muchun Song, corbet, mike.kravetz, tglx, mingo, bp, x86, hpa,
	dave.hansen, luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On 30.11.20 16:18, Muchun Song wrote:
> Every HugeTLB has more than one struct page structure. The 2M HugeTLB
> has 512 struct page structure and 1G HugeTLB has 4096 struct page
> structures. We __know__ that we only use the first 4(HUGETLB_CGROUP_MIN_ORDER)
> struct page structures to store metadata associated with each HugeTLB.
> 
> There are a lot of struct page structures(8 page frames for 2MB HugeTLB
> page and 4096 page frames for 1GB HugeTLB page) associated with each
> HugeTLB page. For tail pages, the value of compound_head is the same.
> So we can reuse first page of tail page structures. We map the virtual
> addresses of the remaining pages of tail page structures to the first
> tail page struct, and then free these page frames. Therefore, we need
> to reserve two pages as vmemmap areas.
> 
> So we introduce a new nr_free_vmemmap_pages field in the hstate to
> indicate how many vmemmap pages associated with a HugeTLB page that we
> can free to buddy system.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
> ---
>  include/linux/hugetlb.h |   3 ++
>  mm/Makefile             |   1 +
>  mm/hugetlb.c            |   3 ++
>  mm/hugetlb_vmemmap.c    | 129 ++++++++++++++++++++++++++++++++++++++++++++++++
>  mm/hugetlb_vmemmap.h    |  20 ++++++++
>  5 files changed, 156 insertions(+)
>  create mode 100644 mm/hugetlb_vmemmap.c
>  create mode 100644 mm/hugetlb_vmemmap.h
> 
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index ebca2ef02212..4efeccb7192c 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -492,6 +492,9 @@ struct hstate {
>  	unsigned int nr_huge_pages_node[MAX_NUMNODES];
>  	unsigned int free_huge_pages_node[MAX_NUMNODES];
>  	unsigned int surplus_huge_pages_node[MAX_NUMNODES];
> +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> +	unsigned int nr_free_vmemmap_pages;
> +#endif
>  #ifdef CONFIG_CGROUP_HUGETLB
>  	/* cgroup control files */
>  	struct cftype cgroup_files_dfl[7];
> diff --git a/mm/Makefile b/mm/Makefile
> index ed4b88fa0f5e..056801d8daae 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -71,6 +71,7 @@ obj-$(CONFIG_FRONTSWAP)	+= frontswap.o
>  obj-$(CONFIG_ZSWAP)	+= zswap.o
>  obj-$(CONFIG_HAS_DMA)	+= dmapool.o
>  obj-$(CONFIG_HUGETLBFS)	+= hugetlb.o
> +obj-$(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)	+= hugetlb_vmemmap.o
>  obj-$(CONFIG_NUMA) 	+= mempolicy.o
>  obj-$(CONFIG_SPARSEMEM)	+= sparse.o
>  obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 1f3bf1710b66..25f9e8e9fc4a 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -42,6 +42,7 @@
>  #include <linux/userfaultfd_k.h>
>  #include <linux/page_owner.h>
>  #include "internal.h"
> +#include "hugetlb_vmemmap.h"
>  
>  int hugetlb_max_hstate __read_mostly;
>  unsigned int default_hstate_idx;
> @@ -3206,6 +3207,8 @@ void __init hugetlb_add_hstate(unsigned int order)
>  	snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB",
>  					huge_page_size(h)/1024);
>  
> +	hugetlb_vmemmap_init(h);
> +
>  	parsed_hstate = h;
>  }
>  
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> new file mode 100644
> index 000000000000..51152e258f39
> --- /dev/null
> +++ b/mm/hugetlb_vmemmap.c
> @@ -0,0 +1,129 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Free some vmemmap pages of HugeTLB
> + *
> + * Copyright (c) 2020, Bytedance. All rights reserved.
> + *
> + *     Author: Muchun Song <songmuchun@bytedance.com>
> + *
> + * The struct page structures (page structs) are used to describe a physical
> + * page frame. By default, there is a one-to-one mapping from a page frame to
> + * it's corresponding page struct.
> + *
> + * The HugeTLB pages consist of multiple base page size pages and is supported
> + * by many architectures. See hugetlbpage.rst in the Documentation directory
> + * for more details. On the x86 architecture, HugeTLB pages of size 2MB and 1GB
> + * are currently supported. Since the base page size on x86 is 4KB, a 2MB
> + * HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
> + * 4096 base pages. For each base page, there is a corresponding page struct.
> + *
> + * Within the HugeTLB subsystem, only the first 4 page structs are used to
> + * contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
> + * provides this upper limit. The only 'useful' information in the remaining
> + * page structs is the compound_head field, and this field is the same for all
> + * tail pages.
> + *
> + * By removing redundant page structs for HugeTLB pages, memory can returned to
> + * the buddy allocator for other uses.
> + *
> + * When the system boot up, every 2M HugeTLB has 512 struct page structs which
> + * size is 8 pages(sizeof(struct page) * 512 / PAGE_SIZE).


You should try to generalize all descriptions regarding differing base
page sizes. E.g., arm64 supports 4k, 16k, and 64k base pages.

[...]

> @@ -0,0 +1,20 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Free some vmemmap pages of HugeTLB
> + *
> + * Copyright (c) 2020, Bytedance. All rights reserved.
> + *
> + *     Author: Muchun Song <songmuchun@bytedance.com>
> + */
> +#ifndef _LINUX_HUGETLB_VMEMMAP_H
> +#define _LINUX_HUGETLB_VMEMMAP_H
> +#include <linux/hugetlb.h>
> +
> +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> +void __init hugetlb_vmemmap_init(struct hstate *h);
> +#else
> +static inline void hugetlb_vmemmap_init(struct hstate *h)
> +{
> +}
> +#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
> +#endif /* _LINUX_HUGETLB_VMEMMAP_H */
> 

This patch as it stands is rather sub-optimal. I mean, all it does is
add documentation and print what could be done.

Can we instead introduce the basic infrastructure and enable it via this
patch on top, where we glue all the pieces together? Or is there
something I am missing?

-- 
Thanks,

David / dhildenb



* Re: [PATCH v7 05/15] mm/bootmem_info: Introduce {free,prepare}_vmemmap_page()
  2020-11-30 15:18 ` [PATCH v7 05/15] mm/bootmem_info: Introduce {free,prepare}_vmemmap_page() Muchun Song
@ 2020-12-07 12:39   ` David Hildenbrand
  2020-12-07 13:23     ` [External] " Muchun Song
  2020-12-09  7:36     ` Muchun Song
  0 siblings, 2 replies; 49+ messages in thread
From: David Hildenbrand @ 2020-12-07 12:39 UTC (permalink / raw)
  To: Muchun Song, corbet, mike.kravetz, tglx, mingo, bp, x86, hpa,
	dave.hansen, luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On 30.11.20 16:18, Muchun Song wrote:
> In the later patch, we can use the free_vmemmap_page() to free the
> unused vmemmap pages and initialize a page for vmemmap page using
> via prepare_vmemmap_page().
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  include/linux/bootmem_info.h | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
> 
> diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
> index 4ed6dee1adc9..239e3cc8f86c 100644
> --- a/include/linux/bootmem_info.h
> +++ b/include/linux/bootmem_info.h
> @@ -3,6 +3,7 @@
>  #define __LINUX_BOOTMEM_INFO_H
>  
>  #include <linux/mmzone.h>
> +#include <linux/mm.h>
>  
>  /*
>   * Types for free bootmem stored in page->lru.next. These have to be in
> @@ -22,6 +23,29 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
>  void get_page_bootmem(unsigned long info, struct page *page,
>  		      unsigned long type);
>  void put_page_bootmem(struct page *page);
> +
> +static inline void free_vmemmap_page(struct page *page)
> +{
> +	VM_WARN_ON(!PageReserved(page) || page_ref_count(page) != 2);
> +
> +	/* bootmem page has reserved flag in the reserve_bootmem_region */
> +	if (PageReserved(page)) {
> +		unsigned long magic = (unsigned long)page->freelist;
> +
> +		if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
> +			put_page_bootmem(page);
> +		else
> +			WARN_ON(1);
> +	}
> +}
> +
> +static inline void prepare_vmemmap_page(struct page *page)
> +{
> +	unsigned long section_nr = pfn_to_section_nr(page_to_pfn(page));
> +
> +	get_page_bootmem(section_nr, page, SECTION_INFO);
> +	mark_page_reserved(page);
> +}

Can you clarify in the description when exactly these functions are
called and on which type of pages?

Would indicating "bootmem" in the function names make it clearer what we
are dealing with?

E.g., any memory allocated via the memblock allocator and not via the
buddy will be marked reserved already in the memmap. It's unclear to me
why we need the mark_page_reserved() here - can you enlighten me? :)

-- 
Thanks,

David / dhildenb



* Re: [External] Re: [PATCH v7 03/15] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  2020-12-07 12:19   ` David Hildenbrand
@ 2020-12-07 12:42     ` Muchun Song
  2020-12-07 12:47       ` David Hildenbrand
  0 siblings, 1 reply; 49+ messages in thread
From: Muchun Song @ 2020-12-07 12:42 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	Xiongchun duan, linux-doc, LKML, Linux Memory Management List,
	linux-fsdevel

On Mon, Dec 7, 2020 at 8:19 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 30.11.20 16:18, Muchun Song wrote:
> > The purpose of introducing HUGETLB_PAGE_FREE_VMEMMAP is to configure
> > whether to enable the feature of freeing unused vmemmap associated
> > with HugeTLB pages. And this is just for dependency check. Now only
> > support x86.
>
> x86 - i386 and x86-64? (I assume the latter only ;) )

Yeah, you are right. Only the latter supports SPARSEMEM_VMEMMAP.

>
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> >  arch/x86/mm/init_64.c |  2 +-
> >  fs/Kconfig            | 14 ++++++++++++++
> >  2 files changed, 15 insertions(+), 1 deletion(-)
> >
> > diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> > index 0a45f062826e..0435bee2e172 100644
> > --- a/arch/x86/mm/init_64.c
> > +++ b/arch/x86/mm/init_64.c
> > @@ -1225,7 +1225,7 @@ static struct kcore_list kcore_vsyscall;
> >
> >  static void __init register_page_bootmem_info(void)
> >  {
> > -#ifdef CONFIG_NUMA
> > +#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)
> >       int i;
> >
>
> Why does this hunk belong into this patch? Looks like this should go
> into another patch.

Of course it can. But Mike suggests that it is better to use it when
introducing a new config, because this config depends on
HAVE_BOOTMEM_INFO_NODE and register_page_bootmem_info() is aimed at
registering bootmem info. So maybe it is reasonable from this point of
view. What is your opinion?

>
> >       for_each_online_node(i)
> > diff --git a/fs/Kconfig b/fs/Kconfig
> > index 976e8b9033c4..4961dd488444 100644
> > --- a/fs/Kconfig
> > +++ b/fs/Kconfig
> > @@ -245,6 +245,20 @@ config HUGETLBFS
> >  config HUGETLB_PAGE
> >       def_bool HUGETLBFS
> >
> > +config HUGETLB_PAGE_FREE_VMEMMAP
> > +     def_bool HUGETLB_PAGE
> > +     depends on X86
> > +     depends on SPARSEMEM_VMEMMAP
> > +     depends on HAVE_BOOTMEM_INFO_NODE
> > +     help
> > +       When using HUGETLB_PAGE_FREE_VMEMMAP, the system can save up some
> > +       memory from pre-allocated HugeTLB pages when they are not used.
> > +       6 pages per 2MB HugeTLB page and 4094 per 1GB HugeTLB page.
>
> Calculations only apply to 4k base pages, no?

No, if the base page is not 4k, we can also free 6 pages.

For example:

If the base page size is 64k, the PMD huge page size is 512MB. We can
also free 6 pages (64k * 6) of vmemmap. But maybe I should document this.

Thanks.

> (maybe generalize this a
> bit or mention 4k base pages - I'm pretty sure we'll see the "depends on
> X86" part fairly soon if this goes upstream)

Yeah, it can be easy to adapt to different architectures. :)

>
>
> --
> Thanks,
>
> David / dhildenb
>


-- 
Yours,
Muchun


* Re: [External] Re: [PATCH v7 03/15] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  2020-12-07 12:42     ` [External] " Muchun Song
@ 2020-12-07 12:47       ` David Hildenbrand
  2020-12-07 13:22         ` Muchun Song
  0 siblings, 1 reply; 49+ messages in thread
From: David Hildenbrand @ 2020-12-07 12:47 UTC (permalink / raw)
  To: Muchun Song
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	Xiongchun duan, linux-doc, LKML, Linux Memory Management List,
	linux-fsdevel

On 07.12.20 13:42, Muchun Song wrote:
> On Mon, Dec 7, 2020 at 8:19 PM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 30.11.20 16:18, Muchun Song wrote:
>>> The purpose of introducing HUGETLB_PAGE_FREE_VMEMMAP is to configure
>>> whether to enable the feature of freeing unused vmemmap associated
>>> with HugeTLB pages. And this is just for dependency check. Now only
>>> support x86.
>>
>> x86 - i386 and x86-64? (I assume the latter only ;) )
> 
> Yeah, you are right. Only the latter support SPARSEMEM_VMEMMAP.
> 
>>
>>>
>>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>>> ---
>>>  arch/x86/mm/init_64.c |  2 +-
>>>  fs/Kconfig            | 14 ++++++++++++++
>>>  2 files changed, 15 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
>>> index 0a45f062826e..0435bee2e172 100644
>>> --- a/arch/x86/mm/init_64.c
>>> +++ b/arch/x86/mm/init_64.c
>>> @@ -1225,7 +1225,7 @@ static struct kcore_list kcore_vsyscall;
>>>
>>>  static void __init register_page_bootmem_info(void)
>>>  {
>>> -#ifdef CONFIG_NUMA
>>> +#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)
>>>       int i;
>>>
>>
>> Why does this hunk belong into this patch? Looks like this should go
>> into another patch.
> 
> Of course can. But Mike suggests that it is better to use it when
> introducing a new config. Because this config depends on
> HAVE_BOOTMEM_INFO_NODE. And register_page_bootmem_info
> is aimed to register bootmem info. So maybe it is reasonable from
> this point of view. What is your opinion?
>

Ah, I see. Maybe mention that in the patch description; stumbling over
this change with only the "Introduce a new config
HUGETLB_PAGE_FREE_VMEMMAP" subject left me rather clueless.

>>
>>>       for_each_online_node(i)
>>> diff --git a/fs/Kconfig b/fs/Kconfig
>>> index 976e8b9033c4..4961dd488444 100644
>>> --- a/fs/Kconfig
>>> +++ b/fs/Kconfig
>>> @@ -245,6 +245,20 @@ config HUGETLBFS
>>>  config HUGETLB_PAGE
>>>       def_bool HUGETLBFS
>>>
>>> +config HUGETLB_PAGE_FREE_VMEMMAP
>>> +     def_bool HUGETLB_PAGE
>>> +     depends on X86
>>> +     depends on SPARSEMEM_VMEMMAP
>>> +     depends on HAVE_BOOTMEM_INFO_NODE
>>> +     help
>>> +       When using HUGETLB_PAGE_FREE_VMEMMAP, the system can save up some
>>> +       memory from pre-allocated HugeTLB pages when they are not used.
>>> +       6 pages per 2MB HugeTLB page and 4094 per 1GB HugeTLB page.
>>
>> Calculations only apply to 4k base pages, no?
> 
> No, if the base page is not 4k, we also can free 6 pages.
> 
> For example:
> 
> If the base page size is 64k, the PMD huge page size is 512MB. We also

Note that 2MB huge pages on arm64 with 64k base pages are possible as
well. Also, I think powerpc always has 16MB huge pages, independent of
base page sizes.


-- 
Thanks,

David / dhildenb



* Re: [External] Re: [PATCH v7 04/15] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  2020-12-07 12:36   ` David Hildenbrand
@ 2020-12-07 13:11     ` Muchun Song
  2020-12-09  8:54       ` David Hildenbrand
  0 siblings, 1 reply; 49+ messages in thread
From: Muchun Song @ 2020-12-07 13:11 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	Xiongchun duan, linux-doc, LKML, Linux Memory Management List,
	linux-fsdevel

On Mon, Dec 7, 2020 at 8:36 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 30.11.20 16:18, Muchun Song wrote:
> > Every HugeTLB has more than one struct page structure. The 2M HugeTLB
> > has 512 struct page structure and 1G HugeTLB has 4096 struct page
> > structures. We __know__ that we only use the first 4(HUGETLB_CGROUP_MIN_ORDER)
> > struct page structures to store metadata associated with each HugeTLB.
> >
> > There are a lot of struct page structures(8 page frames for 2MB HugeTLB
> > page and 4096 page frames for 1GB HugeTLB page) associated with each
> > HugeTLB page. For tail pages, the value of compound_head is the same.
> > So we can reuse first page of tail page structures. We map the virtual
> > addresses of the remaining pages of tail page structures to the first
> > tail page struct, and then free these page frames. Therefore, we need
> > to reserve two pages as vmemmap areas.
> >
> > So we introduce a new nr_free_vmemmap_pages field in the hstate to
> > indicate how many vmemmap pages associated with a HugeTLB page that we
> > can free to buddy system.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
> > ---
> >  include/linux/hugetlb.h |   3 ++
> >  mm/Makefile             |   1 +
> >  mm/hugetlb.c            |   3 ++
> >  mm/hugetlb_vmemmap.c    | 129 ++++++++++++++++++++++++++++++++++++++++++++++++
> >  mm/hugetlb_vmemmap.h    |  20 ++++++++
> >  5 files changed, 156 insertions(+)
> >  create mode 100644 mm/hugetlb_vmemmap.c
> >  create mode 100644 mm/hugetlb_vmemmap.h
> >
> > diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> > index ebca2ef02212..4efeccb7192c 100644
> > --- a/include/linux/hugetlb.h
> > +++ b/include/linux/hugetlb.h
> > @@ -492,6 +492,9 @@ struct hstate {
> >       unsigned int nr_huge_pages_node[MAX_NUMNODES];
> >       unsigned int free_huge_pages_node[MAX_NUMNODES];
> >       unsigned int surplus_huge_pages_node[MAX_NUMNODES];
> > +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> > +     unsigned int nr_free_vmemmap_pages;
> > +#endif
> >  #ifdef CONFIG_CGROUP_HUGETLB
> >       /* cgroup control files */
> >       struct cftype cgroup_files_dfl[7];
> > diff --git a/mm/Makefile b/mm/Makefile
> > index ed4b88fa0f5e..056801d8daae 100644
> > --- a/mm/Makefile
> > +++ b/mm/Makefile
> > @@ -71,6 +71,7 @@ obj-$(CONFIG_FRONTSWAP)     += frontswap.o
> >  obj-$(CONFIG_ZSWAP)  += zswap.o
> >  obj-$(CONFIG_HAS_DMA)        += dmapool.o
> >  obj-$(CONFIG_HUGETLBFS)      += hugetlb.o
> > +obj-$(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)      += hugetlb_vmemmap.o
> >  obj-$(CONFIG_NUMA)   += mempolicy.o
> >  obj-$(CONFIG_SPARSEMEM)      += sparse.o
> >  obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 1f3bf1710b66..25f9e8e9fc4a 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -42,6 +42,7 @@
> >  #include <linux/userfaultfd_k.h>
> >  #include <linux/page_owner.h>
> >  #include "internal.h"
> > +#include "hugetlb_vmemmap.h"
> >
> >  int hugetlb_max_hstate __read_mostly;
> >  unsigned int default_hstate_idx;
> > @@ -3206,6 +3207,8 @@ void __init hugetlb_add_hstate(unsigned int order)
> >       snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB",
> >                                       huge_page_size(h)/1024);
> >
> > +     hugetlb_vmemmap_init(h);
> > +
> >       parsed_hstate = h;
> >  }
> >
> > diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> > new file mode 100644
> > index 000000000000..51152e258f39
> > --- /dev/null
> > +++ b/mm/hugetlb_vmemmap.c
> > @@ -0,0 +1,129 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Free some vmemmap pages of HugeTLB
> > + *
> > + * Copyright (c) 2020, Bytedance. All rights reserved.
> > + *
> > + *     Author: Muchun Song <songmuchun@bytedance.com>
> > + *
> > + * The struct page structures (page structs) are used to describe a physical
> > + * page frame. By default, there is a one-to-one mapping from a page frame to
> > + * it's corresponding page struct.
> > + *
> > + * The HugeTLB pages consist of multiple base page size pages and is supported
> > + * by many architectures. See hugetlbpage.rst in the Documentation directory
> > + * for more details. On the x86 architecture, HugeTLB pages of size 2MB and 1GB
> > + * are currently supported. Since the base page size on x86 is 4KB, a 2MB
> > + * HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
> > + * 4096 base pages. For each base page, there is a corresponding page struct.
> > + *
> > + * Within the HugeTLB subsystem, only the first 4 page structs are used to
> > + * contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
> > + * provides this upper limit. The only 'useful' information in the remaining
> > + * page structs is the compound_head field, and this field is the same for all
> > + * tail pages.
> > + *
> > + * By removing redundant page structs for HugeTLB pages, memory can returned to
> > + * the buddy allocator for other uses.
> > + *
> > + * When the system boot up, every 2M HugeTLB has 512 struct page structs which
> > + * size is 8 pages(sizeof(struct page) * 512 / PAGE_SIZE).
>
>
> You should try to generalize all descriptions regarding differing base
> page sizes. E.g., arm64 supports 4k, 16k, and 64k base pages.

Will do. Thanks.

>
> [...]
>
> > @@ -0,0 +1,20 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Free some vmemmap pages of HugeTLB
> > + *
> > + * Copyright (c) 2020, Bytedance. All rights reserved.
> > + *
> > + *     Author: Muchun Song <songmuchun@bytedance.com>
> > + */
> > +#ifndef _LINUX_HUGETLB_VMEMMAP_H
> > +#define _LINUX_HUGETLB_VMEMMAP_H
> > +#include <linux/hugetlb.h>
> > +
> > +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> > +void __init hugetlb_vmemmap_init(struct hstate *h);
> > +#else
> > +static inline void hugetlb_vmemmap_init(struct hstate *h)
> > +{
> > +}
> > +#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
> > +#endif /* _LINUX_HUGETLB_VMEMMAP_H */
> >
>
> This patch as it stands is rather sub-optimal. I mean, all it does is
> add documentation and print what could be done.
>
> Can we instead introduce the basic infrastructure and enable it via this
> patch on top, where we glue all the pieces together? Or is there
> something I am missing?

Maybe we can make CONFIG_HUGETLB_PAGE_FREE_VMEMMAP default to n in the
Kconfig, and when everything is ready, make it default to y. Right?


>
> --
> Thanks,
>
> David / dhildenb
>


--
Yours,
Muchun

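Muchun's staged-enablement idea could look like the following Kconfig sketch: keep the option off by default while the infrastructure lands, then restore the def_bool once the feature is complete. The option body is taken from the patch; only the default changes, and this is an illustration rather than the submitted code:

```kconfig
config HUGETLB_PAGE_FREE_VMEMMAP
	# Stage 1: infrastructure only, default off.
	# Stage 2 (feature complete): switch back to "def_bool HUGETLB_PAGE".
	def_bool n
	depends on HUGETLB_PAGE
	depends on X86
	depends on SPARSEMEM_VMEMMAP
	depends on HAVE_BOOTMEM_INFO_NODE
	help
	  When using HUGETLB_PAGE_FREE_VMEMMAP, the system can save up some
	  memory from pre-allocated HugeTLB pages when they are not used.
```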

* Re: [External] Re: [PATCH v7 03/15] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  2020-12-07 12:47       ` David Hildenbrand
@ 2020-12-07 13:22         ` Muchun Song
  0 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-12-07 13:22 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	Xiongchun duan, linux-doc, LKML, Linux Memory Management List,
	linux-fsdevel

On Mon, Dec 7, 2020 at 8:47 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 07.12.20 13:42, Muchun Song wrote:
> > On Mon, Dec 7, 2020 at 8:19 PM David Hildenbrand <david@redhat.com> wrote:
> >>
> >> On 30.11.20 16:18, Muchun Song wrote:
> >>> The purpose of introducing HUGETLB_PAGE_FREE_VMEMMAP is to configure
> >>> whether to enable the feature of freeing unused vmemmap associated
> >>> with HugeTLB pages. And this is just for dependency check. Now only
> >>> support x86.
> >>
> >> x86 - i386 and x86-64? (I assume the latter only ;) )
> >
> > Yeah, you are right. Only the latter supports SPARSEMEM_VMEMMAP.
> >
> >>
> >>>
> >>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> >>> ---
> >>>  arch/x86/mm/init_64.c |  2 +-
> >>>  fs/Kconfig            | 14 ++++++++++++++
> >>>  2 files changed, 15 insertions(+), 1 deletion(-)
> >>>
> >>> diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
> >>> index 0a45f062826e..0435bee2e172 100644
> >>> --- a/arch/x86/mm/init_64.c
> >>> +++ b/arch/x86/mm/init_64.c
> >>> @@ -1225,7 +1225,7 @@ static struct kcore_list kcore_vsyscall;
> >>>
> >>>  static void __init register_page_bootmem_info(void)
> >>>  {
> >>> -#ifdef CONFIG_NUMA
> >>> +#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)
> >>>       int i;
> >>>
> >>
> >> Why does this hunk belong into this patch? Looks like this should go
> >> into another patch.
> >
> > Of course it can. But Mike suggests that it is better to use it when
> > introducing a new config, because this config depends on
> > HAVE_BOOTMEM_INFO_NODE and register_page_bootmem_info() is aimed at
> > registering bootmem info. So maybe it is reasonable from this point of
> > view. What is your opinion?
> >
>
> Ah, I see. Maybe mention in the patch description, because the
> "Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP" part left me
> clueless. Stumbling over this change only left me rather clueless.

OK, I will improve the commit log. Thanks.

>
> >>
> >>>       for_each_online_node(i)
> >>> diff --git a/fs/Kconfig b/fs/Kconfig
> >>> index 976e8b9033c4..4961dd488444 100644
> >>> --- a/fs/Kconfig
> >>> +++ b/fs/Kconfig
> >>> @@ -245,6 +245,20 @@ config HUGETLBFS
> >>>  config HUGETLB_PAGE
> >>>       def_bool HUGETLBFS
> >>>
> >>> +config HUGETLB_PAGE_FREE_VMEMMAP
> >>> +     def_bool HUGETLB_PAGE
> >>> +     depends on X86
> >>> +     depends on SPARSEMEM_VMEMMAP
> >>> +     depends on HAVE_BOOTMEM_INFO_NODE
> >>> +     help
> >>> +       When using HUGETLB_PAGE_FREE_VMEMMAP, the system can save up some
> >>> +       memory from pre-allocated HugeTLB pages when they are not used.
> >>> +       6 pages per 2MB HugeTLB page and 4094 per 1GB HugeTLB page.
> >>
> >> Calculations only apply to 4k base pages, no?
> >
> > No, if the base page is not 4k, we also can free 6 pages.
> >
> > For example:
> >
> > If the base page size is 64k, the PMD huge page size is 512MB. We also
>
> Note that 2MB huge pages on arm64 with 64k base pages are possible as
> well. Also, I think powerpc always has 16MB huge pages, independent of
> base page sizes.

I see now. Since we currently only support x86-64, you are right that I
should point out the base page size. When we support more architectures
in the future, we can update the message here. :)

Thanks.

>
>
> --
> Thanks,
>
> David / dhildenb
>


-- 
Yours,
Muchun


* Re: [External] Re: [PATCH v7 05/15] mm/bootmem_info: Introduce {free,prepare}_vmemmap_page()
  2020-12-07 12:39   ` David Hildenbrand
@ 2020-12-07 13:23     ` Muchun Song
  2020-12-09  7:36     ` Muchun Song
  1 sibling, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-12-07 13:23 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	Xiongchun duan, linux-doc, LKML, Linux Memory Management List,
	linux-fsdevel

On Mon, Dec 7, 2020 at 8:39 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 30.11.20 16:18, Muchun Song wrote:
> > In the later patch, we can use the free_vmemmap_page() to free the
> > unused vmemmap pages and initialize a page for vmemmap page using
> > via prepare_vmemmap_page().
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> >  include/linux/bootmem_info.h | 24 ++++++++++++++++++++++++
> >  1 file changed, 24 insertions(+)
> >
> > diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
> > index 4ed6dee1adc9..239e3cc8f86c 100644
> > --- a/include/linux/bootmem_info.h
> > +++ b/include/linux/bootmem_info.h
> > @@ -3,6 +3,7 @@
> >  #define __LINUX_BOOTMEM_INFO_H
> >
> >  #include <linux/mmzone.h>
> > +#include <linux/mm.h>
> >
> >  /*
> >   * Types for free bootmem stored in page->lru.next. These have to be in
> > @@ -22,6 +23,29 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
> >  void get_page_bootmem(unsigned long info, struct page *page,
> >                     unsigned long type);
> >  void put_page_bootmem(struct page *page);
> > +
> > +static inline void free_vmemmap_page(struct page *page)
> > +{
> > +     VM_WARN_ON(!PageReserved(page) || page_ref_count(page) != 2);
> > +
> > +     /* bootmem page has reserved flag in the reserve_bootmem_region */
> > +     if (PageReserved(page)) {
> > +             unsigned long magic = (unsigned long)page->freelist;
> > +
> > +             if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
> > +                     put_page_bootmem(page);
> > +             else
> > +                     WARN_ON(1);
> > +     }
> > +}
> > +
> > +static inline void prepare_vmemmap_page(struct page *page)
> > +{
> > +     unsigned long section_nr = pfn_to_section_nr(page_to_pfn(page));
> > +
> > +     get_page_bootmem(section_nr, page, SECTION_INFO);
> > +     mark_page_reserved(page);
> > +}
>
> Can you clarify in the description when exactly these functions are
> called and on which type of pages?

Will do.

>
> Would indicating "bootmem" in the function names make it clearer what we
> are dealing with?
>
> E.g., any memory allocated via the memblock allocator and not via the
> buddy will be marked reserved already in the memmap. It's unclear to me
> why we need the mark_page_reserved() here - can you enlighten me? :)

Many thanks for your suggestions.

>
> --
> Thanks,
>
> David / dhildenb
>


-- 
Yours,
Muchun


* Re: [External] Re: [PATCH v7 00/15] Free some vmemmap pages of hugetlb page
  2020-12-04  3:39     ` [External] " Muchun Song
@ 2020-12-07 18:38       ` Oscar Salvador
  2020-12-08  2:26         ` Muchun Song
  0 siblings, 1 reply; 49+ messages in thread
From: Oscar Salvador @ 2020-12-07 18:38 UTC (permalink / raw)
  To: Muchun Song
  Cc: Mike Kravetz, Andrew Morton, Jonathan Corbet, dave.hansen, hpa,
	x86, bp, mingo, Thomas Gleixner, pawan.kumar.gupta,
	mchehab+huawei, paulmck, viro, Peter Zijlstra, luto, oneukum,
	jroedel, Matthew Wilcox, David Rientjes, Mina Almasry,
	Randy Dunlap, anshuman.khandual, Michal Hocko,
	Song Bao Hua (Barry Song),
	Xiongchun duan, LKML, Linux Memory Management List, linux-doc,
	linux-fsdevel

On Fri, Dec 04, 2020 at 11:39:31AM +0800, Muchun Song wrote:
> On Fri, Dec 4, 2020 at 7:49 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
> > As previously mentioned, I feel qualified to review the hugetlb changes
> > and some other closely related changes.  However, this patch set is
> > touching quite a few areas and I do not feel qualified to make authoritative
> > statements about them all.  I too hope others will take a look.
> 
Agree. I also hope others can take a look at other modules (e.g.
> sparse-vmemmap, memory-hotplug). Thanks for everyone's efforts
> on this.

I got sidetracked by some other stuff but I plan to continue reviewing
this series.

One thing that came to my mind is that if we do as David suggested in
patch#4, and we move it towards the end to actually __enable__ this
once all the infrastructure is there (unless hstate->nr_vmemmap_pages
differs from 0 we should not be doing any work AFAIK), we could also
move patch#6 to the end (right before the enablement), kill patch#7
and only leave patch#13.

The reason for that (killing patch#7 and leaving only patch#13)
is that it does not make much sense to me to disable PMD-mapped vmemmap
depending on CONFIG_HUGETLB_xxxxx, as that is enabled by default,
only to replace it later with the boot kernel parameter.
It looks more natural to me to disable it when we introduce the kernel
boot parameter, right before the actual enablement of the feature.

As I said, I plan to start the review again, but the order above would
make more sense to me.

thanks

-- 
Oscar Salvador
SUSE L3


* Re: [External] Re: [PATCH v7 00/15] Free some vmemmap pages of hugetlb page
  2020-12-07 18:38       ` Oscar Salvador
@ 2020-12-08  2:26         ` Muchun Song
  0 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-12-08  2:26 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Mike Kravetz, Andrew Morton, Jonathan Corbet, dave.hansen, hpa,
	x86, bp, mingo, Thomas Gleixner, pawan.kumar.gupta,
	mchehab+huawei, paulmck, viro, Peter Zijlstra, luto, oneukum,
	jroedel, Matthew Wilcox, David Rientjes, Mina Almasry,
	Randy Dunlap, anshuman.khandual, Michal Hocko,
	Song Bao Hua (Barry Song),
	Xiongchun duan, LKML, Linux Memory Management List, linux-doc,
	linux-fsdevel

On Tue, Dec 8, 2020 at 2:38 AM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Fri, Dec 04, 2020 at 11:39:31AM +0800, Muchun Song wrote:
> > On Fri, Dec 4, 2020 at 7:49 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
> > > As previously mentioned, I feel qualified to review the hugetlb changes
> > > and some other closely related changes.  However, this patch set is
> > > touching quite a few areas and I do not feel qualified to make authoritative
> > > statements about them all.  I too hope others will take a look.
> >
> > Agree. I also hope others can take a look at other modules (e.g.
> > sparse-vmemmap, memory-hotplug). Thanks for everyone's efforts
> > on this.
>
> I got sidetracked by some other stuff but I plan to continue reviewing
> this series.

Many thanks, Oscar.

>
> One thing that came to my mind is that if we do as David suggested in
> patch#4, and we move it towards the end to actually __enable__ this
> once all the infrastructure is there (unless hstate->nr_vmemmap_pages
> differs from 0 we should not be doing any work AFAIK), we could also
> move patch#6 to the end (right before the enablement), kill patch#7
> and only leave patch#13.
>
> The reason for that (killing patch#7 and leaving only patch#13)
> is that it does not make much sense to me to disable PMD-mapped vmemmap
> depending on CONFIG_HUGETLB_xxxxx, as that is enabled by default,
> only to replace it later with the boot kernel parameter.
> It looks more natural to me to disable it when we introduce the kernel
> boot parameter, right before the actual enablement of the feature.

Thanks for your suggestions. I agree with you. :)

>
> As I said, I plan to start the review again, but the order above would
> make more sense to me.
>
> thanks
>
> --
> Oscar Salvador
> SUSE L3



-- 
Yours,
Muchun


* Re: [External] Re: [PATCH v7 05/15] mm/bootmem_info: Introduce {free,prepare}_vmemmap_page()
  2020-12-07 12:39   ` David Hildenbrand
  2020-12-07 13:23     ` [External] " Muchun Song
@ 2020-12-09  7:36     ` Muchun Song
  2020-12-09  8:49       ` David Hildenbrand
  1 sibling, 1 reply; 49+ messages in thread
From: Muchun Song @ 2020-12-09  7:36 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	Xiongchun duan, linux-doc, LKML, Linux Memory Management List,
	linux-fsdevel

On Mon, Dec 7, 2020 at 8:39 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 30.11.20 16:18, Muchun Song wrote:
> > In a later patch, we can use free_vmemmap_page() to free the
> > unused vmemmap pages and prepare_vmemmap_page() to initialize a
> > page for use as a vmemmap page.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> >  include/linux/bootmem_info.h | 24 ++++++++++++++++++++++++
> >  1 file changed, 24 insertions(+)
> >
> > diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
> > index 4ed6dee1adc9..239e3cc8f86c 100644
> > --- a/include/linux/bootmem_info.h
> > +++ b/include/linux/bootmem_info.h
> > @@ -3,6 +3,7 @@
> >  #define __LINUX_BOOTMEM_INFO_H
> >
> >  #include <linux/mmzone.h>
> > +#include <linux/mm.h>
> >
> >  /*
> >   * Types for free bootmem stored in page->lru.next. These have to be in
> > @@ -22,6 +23,29 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
> >  void get_page_bootmem(unsigned long info, struct page *page,
> >                     unsigned long type);
> >  void put_page_bootmem(struct page *page);
> > +
> > +static inline void free_vmemmap_page(struct page *page)
> > +{
> > +     VM_WARN_ON(!PageReserved(page) || page_ref_count(page) != 2);
> > +
> > +     /* bootmem page has reserved flag in the reserve_bootmem_region */
> > +     if (PageReserved(page)) {
> > +             unsigned long magic = (unsigned long)page->freelist;
> > +
> > +             if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
> > +                     put_page_bootmem(page);
> > +             else
> > +                     WARN_ON(1);
> > +     }
> > +}
> > +
> > +static inline void prepare_vmemmap_page(struct page *page)
> > +{
> > +     unsigned long section_nr = pfn_to_section_nr(page_to_pfn(page));
> > +
> > +     get_page_bootmem(section_nr, page, SECTION_INFO);
> > +     mark_page_reserved(page);
> > +}
>
> Can you clarify in the description when exactly these functions are
> called and on which type of pages?
>
> Would indicating "bootmem" in the function names make it clearer what we
> are dealing with?
>
> E.g., any memory allocated via the memblock allocator and not via the
> buddy will be marked reserved already in the memmap. It's unclear to me
> why we need the mark_page_reserved() here - can you enlighten me? :)

Sorry for missing this question. The vmemmap pages are allocated from
the bootmem allocator, and such pages are marked as PG_reserved. For
those bootmem pages, we should call put_page_bootmem() to free them.
You can see that we clear PG_reserved in put_page_bootmem(). In order
to be consistent, prepare_vmemmap_page() also marks the page as
PG_reserved.

Thanks.

>
> --
> Thanks,
>
> David / dhildenb
>


-- 
Yours,
Muchun


* Re: [External] Re: [PATCH v7 05/15] mm/bootmem_info: Introduce {free,prepare}_vmemmap_page()
  2020-12-09  7:36     ` Muchun Song
@ 2020-12-09  8:49       ` David Hildenbrand
  2020-12-09  9:25         ` Muchun Song
  0 siblings, 1 reply; 49+ messages in thread
From: David Hildenbrand @ 2020-12-09  8:49 UTC (permalink / raw)
  To: Muchun Song
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	Xiongchun duan, linux-doc, LKML, Linux Memory Management List,
	linux-fsdevel

On 09.12.20 08:36, Muchun Song wrote:
> On Mon, Dec 7, 2020 at 8:39 PM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 30.11.20 16:18, Muchun Song wrote:
>>> In a later patch, we can use free_vmemmap_page() to free the
>>> unused vmemmap pages and prepare_vmemmap_page() to initialize a
>>> page for use as a vmemmap page.
>>>
>>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>>> ---
>>>  include/linux/bootmem_info.h | 24 ++++++++++++++++++++++++
>>>  1 file changed, 24 insertions(+)
>>>
>>> diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
>>> index 4ed6dee1adc9..239e3cc8f86c 100644
>>> --- a/include/linux/bootmem_info.h
>>> +++ b/include/linux/bootmem_info.h
>>> @@ -3,6 +3,7 @@
>>>  #define __LINUX_BOOTMEM_INFO_H
>>>
>>>  #include <linux/mmzone.h>
>>> +#include <linux/mm.h>
>>>
>>>  /*
>>>   * Types for free bootmem stored in page->lru.next. These have to be in
>>> @@ -22,6 +23,29 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
>>>  void get_page_bootmem(unsigned long info, struct page *page,
>>>                     unsigned long type);
>>>  void put_page_bootmem(struct page *page);
>>> +
>>> +static inline void free_vmemmap_page(struct page *page)
>>> +{
>>> +     VM_WARN_ON(!PageReserved(page) || page_ref_count(page) != 2);
>>> +
>>> +     /* bootmem page has reserved flag in the reserve_bootmem_region */
>>> +     if (PageReserved(page)) {
>>> +             unsigned long magic = (unsigned long)page->freelist;
>>> +
>>> +             if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
>>> +                     put_page_bootmem(page);
>>> +             else
>>> +                     WARN_ON(1);
>>> +     }
>>> +}
>>> +
>>> +static inline void prepare_vmemmap_page(struct page *page)
>>> +{
>>> +     unsigned long section_nr = pfn_to_section_nr(page_to_pfn(page));
>>> +
>>> +     get_page_bootmem(section_nr, page, SECTION_INFO);
>>> +     mark_page_reserved(page);
>>> +}
>>
>> Can you clarify in the description when exactly these functions are
>> called and on which type of pages?
>>
>> Would indicating "bootmem" in the function names make it clearer what we
>> are dealing with?
>>
>> E.g., any memory allocated via the memblock allocator and not via the
>> buddy will be marked reserved already in the memmap. It's unclear to me
>> why we need the mark_page_reserved() here - can you enlighten me? :)
> 
> Sorry for missing this question. The vmemmap pages are allocated from
> the bootmem allocator, and such pages are marked as PG_reserved. For
> those bootmem pages, we should call put_page_bootmem() to free them.
> You can see that we clear PG_reserved in put_page_bootmem(). In order
> to be consistent, prepare_vmemmap_page() also marks the page as
> PG_reserved.

I don't think that really makes sense.

After put_page_bootmem() put the last reference, it clears PG_reserved
and hands the page over to the buddy via free_reserved_page(). From that
point on, further get_page_bootmem() would be completely wrong and
dangerous.

Both, put_page_bootmem() and get_page_bootmem() rely on the fact that
they are dealing with memblock allocations - marked via PG_reserved. If
prepare_vmemmap_page() would be called on something that's *not* coming
from the memblock allocator, it would be completely broken - or am I
missing something?

AFAICT, there should rather be a BUG_ON(!PageReserved(page)) in
prepare_vmemmap_page() - or proper handling to deal with !memblock
allocations.


And as I said, indicating "bootmem" as part of the function names might
make it clearer that this is not for getting any vmemmap pages (esp.
allocated when hotplugging memory).

-- 
Thanks,

David / dhildenb



* Re: [External] Re: [PATCH v7 04/15] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  2020-12-07 13:11     ` [External] " Muchun Song
@ 2020-12-09  8:54       ` David Hildenbrand
  2020-12-09  9:27         ` Muchun Song
  0 siblings, 1 reply; 49+ messages in thread
From: David Hildenbrand @ 2020-12-09  8:54 UTC (permalink / raw)
  To: Muchun Song
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	Xiongchun duan, linux-doc, LKML, Linux Memory Management List,
	linux-fsdevel

On 07.12.20 14:11, Muchun Song wrote:
> On Mon, Dec 7, 2020 at 8:36 PM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 30.11.20 16:18, Muchun Song wrote:
>>> Every HugeTLB page has more than one struct page structure. A 2M HugeTLB
>>> page has 512 struct page structures and a 1G HugeTLB page has 4096 struct page
>>> structures. We __know__ that we only use the first 4(HUGETLB_CGROUP_MIN_ORDER)
>>> struct page structures to store metadata associated with each HugeTLB.
>>>
>>> There are a lot of struct page structures(8 page frames for 2MB HugeTLB
>>> page and 4096 page frames for 1GB HugeTLB page) associated with each
>>> HugeTLB page. For tail pages, the value of compound_head is the same.
>>> So we can reuse first page of tail page structures. We map the virtual
>>> addresses of the remaining pages of tail page structures to the first
>>> tail page struct, and then free these page frames. Therefore, we need
>>> to reserve two pages as vmemmap areas.
>>>
>>> So we introduce a new nr_free_vmemmap_pages field in the hstate to
>>> indicate how many vmemmap pages associated with a HugeTLB page that we
>>> can free to buddy system.
>>>
>>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>>> Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
>>> ---
>>>  include/linux/hugetlb.h |   3 ++
>>>  mm/Makefile             |   1 +
>>>  mm/hugetlb.c            |   3 ++
>>>  mm/hugetlb_vmemmap.c    | 129 ++++++++++++++++++++++++++++++++++++++++++++++++
>>>  mm/hugetlb_vmemmap.h    |  20 ++++++++
>>>  5 files changed, 156 insertions(+)
>>>  create mode 100644 mm/hugetlb_vmemmap.c
>>>  create mode 100644 mm/hugetlb_vmemmap.h
>>>
>>> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
>>> index ebca2ef02212..4efeccb7192c 100644
>>> --- a/include/linux/hugetlb.h
>>> +++ b/include/linux/hugetlb.h
>>> @@ -492,6 +492,9 @@ struct hstate {
>>>       unsigned int nr_huge_pages_node[MAX_NUMNODES];
>>>       unsigned int free_huge_pages_node[MAX_NUMNODES];
>>>       unsigned int surplus_huge_pages_node[MAX_NUMNODES];
>>> +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
>>> +     unsigned int nr_free_vmemmap_pages;
>>> +#endif
>>>  #ifdef CONFIG_CGROUP_HUGETLB
>>>       /* cgroup control files */
>>>       struct cftype cgroup_files_dfl[7];
>>> diff --git a/mm/Makefile b/mm/Makefile
>>> index ed4b88fa0f5e..056801d8daae 100644
>>> --- a/mm/Makefile
>>> +++ b/mm/Makefile
>>> @@ -71,6 +71,7 @@ obj-$(CONFIG_FRONTSWAP)     += frontswap.o
>>>  obj-$(CONFIG_ZSWAP)  += zswap.o
>>>  obj-$(CONFIG_HAS_DMA)        += dmapool.o
>>>  obj-$(CONFIG_HUGETLBFS)      += hugetlb.o
>>> +obj-$(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)      += hugetlb_vmemmap.o
>>>  obj-$(CONFIG_NUMA)   += mempolicy.o
>>>  obj-$(CONFIG_SPARSEMEM)      += sparse.o
>>>  obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
>>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
>>> index 1f3bf1710b66..25f9e8e9fc4a 100644
>>> --- a/mm/hugetlb.c
>>> +++ b/mm/hugetlb.c
>>> @@ -42,6 +42,7 @@
>>>  #include <linux/userfaultfd_k.h>
>>>  #include <linux/page_owner.h>
>>>  #include "internal.h"
>>> +#include "hugetlb_vmemmap.h"
>>>
>>>  int hugetlb_max_hstate __read_mostly;
>>>  unsigned int default_hstate_idx;
>>> @@ -3206,6 +3207,8 @@ void __init hugetlb_add_hstate(unsigned int order)
>>>       snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB",
>>>                                       huge_page_size(h)/1024);
>>>
>>> +     hugetlb_vmemmap_init(h);
>>> +
>>>       parsed_hstate = h;
>>>  }
>>>
>>> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
>>> new file mode 100644
>>> index 000000000000..51152e258f39
>>> --- /dev/null
>>> +++ b/mm/hugetlb_vmemmap.c
>>> @@ -0,0 +1,129 @@
>>> +// SPDX-License-Identifier: GPL-2.0
>>> +/*
>>> + * Free some vmemmap pages of HugeTLB
>>> + *
>>> + * Copyright (c) 2020, Bytedance. All rights reserved.
>>> + *
>>> + *     Author: Muchun Song <songmuchun@bytedance.com>
>>> + *
>>> + * The struct page structures (page structs) are used to describe a physical
>>> + * page frame. By default, there is a one-to-one mapping from a page frame to
> >>> + * its corresponding page struct.
>>> + *
> >>> + * The HugeTLB pages consist of multiple base page size pages and are supported
>>> + * by many architectures. See hugetlbpage.rst in the Documentation directory
>>> + * for more details. On the x86 architecture, HugeTLB pages of size 2MB and 1GB
>>> + * are currently supported. Since the base page size on x86 is 4KB, a 2MB
>>> + * HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
>>> + * 4096 base pages. For each base page, there is a corresponding page struct.
>>> + *
>>> + * Within the HugeTLB subsystem, only the first 4 page structs are used to
>>> + * contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
>>> + * provides this upper limit. The only 'useful' information in the remaining
>>> + * page structs is the compound_head field, and this field is the same for all
>>> + * tail pages.
>>> + *
> >>> + * By removing redundant page structs for HugeTLB pages, memory can be returned to
>>> + * the buddy allocator for other uses.
>>> + *
> >>> + * When the system boots up, every 2M HugeTLB page has 512 struct page
> >>> + * structs, which take up 8 pages (sizeof(struct page) * 512 / PAGE_SIZE).
>>
>>
>> You should try to generalize all descriptions regarding differing base
>> page sizes. E.g., arm64 supports 4k, 16k, and 64k base pages.
> 
> Will do. Thanks.
> 
>>
>> [...]
>>
>>> @@ -0,0 +1,20 @@
>>> +// SPDX-License-Identifier: GPL-2.0
>>> +/*
>>> + * Free some vmemmap pages of HugeTLB
>>> + *
>>> + * Copyright (c) 2020, Bytedance. All rights reserved.
>>> + *
>>> + *     Author: Muchun Song <songmuchun@bytedance.com>
>>> + */
>>> +#ifndef _LINUX_HUGETLB_VMEMMAP_H
>>> +#define _LINUX_HUGETLB_VMEMMAP_H
>>> +#include <linux/hugetlb.h>
>>> +
>>> +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
>>> +void __init hugetlb_vmemmap_init(struct hstate *h);
>>> +#else
>>> +static inline void hugetlb_vmemmap_init(struct hstate *h)
>>> +{
>>> +}
>>> +#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
>>> +#endif /* _LINUX_HUGETLB_VMEMMAP_H */
>>>
>>
>> This patch as it stands is rather sub-optimal. I mean, all it does is
>> add documentation and print what could be done.
>>
>> Can we instead introduce the basic infrastructure and enable it via this
>> patch on top, where we glue all the pieces together? Or is there
>> something I am missing?
> 
> Maybe we can make the config of CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> default n in the Kconfig. When everything is ready, then make it
> default to y. Right?

I think it can make sense to introduce the
CONFIG_HUGETLB_PAGE_FREE_VMEMMAP option first if necessary for other
patches. But I think the documentation and the dummy call should
rather be moved to the end of the series where you glue everything you
introduced together and officially unlock the feature. Others might
disagree :)

BTW, I'm planning on reviewing the other parts of this series, I'm just
fairly busy, so it might take a while (I think we're targeting 5.12
either way as the 5.11 merge window will start fairly soon).

-- 
Thanks,

David / dhildenb



* Re: [External] Re: [PATCH v7 05/15] mm/bootmem_info: Introduce {free,prepare}_vmemmap_page()
  2020-12-09  8:49       ` David Hildenbrand
@ 2020-12-09  9:25         ` Muchun Song
  2020-12-09  9:32           ` David Hildenbrand
  0 siblings, 1 reply; 49+ messages in thread
From: Muchun Song @ 2020-12-09  9:25 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	Xiongchun duan, linux-doc, LKML, Linux Memory Management List,
	linux-fsdevel

On Wed, Dec 9, 2020 at 4:50 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 09.12.20 08:36, Muchun Song wrote:
> > On Mon, Dec 7, 2020 at 8:39 PM David Hildenbrand <david@redhat.com> wrote:
> >>
> >> On 30.11.20 16:18, Muchun Song wrote:
> >>> In a later patch, we can use free_vmemmap_page() to free the
> >>> unused vmemmap pages and prepare_vmemmap_page() to initialize a
> >>> page for use as a vmemmap page.
> >>>
> >>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> >>> ---
> >>>  include/linux/bootmem_info.h | 24 ++++++++++++++++++++++++
> >>>  1 file changed, 24 insertions(+)
> >>>
> >>> diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
> >>> index 4ed6dee1adc9..239e3cc8f86c 100644
> >>> --- a/include/linux/bootmem_info.h
> >>> +++ b/include/linux/bootmem_info.h
> >>> @@ -3,6 +3,7 @@
> >>>  #define __LINUX_BOOTMEM_INFO_H
> >>>
> >>>  #include <linux/mmzone.h>
> >>> +#include <linux/mm.h>
> >>>
> >>>  /*
> >>>   * Types for free bootmem stored in page->lru.next. These have to be in
> >>> @@ -22,6 +23,29 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
> >>>  void get_page_bootmem(unsigned long info, struct page *page,
> >>>                     unsigned long type);
> >>>  void put_page_bootmem(struct page *page);
> >>> +
> >>> +static inline void free_vmemmap_page(struct page *page)
> >>> +{
> >>> +     VM_WARN_ON(!PageReserved(page) || page_ref_count(page) != 2);
> >>> +
> >>> +     /* bootmem page has reserved flag in the reserve_bootmem_region */
> >>> +     if (PageReserved(page)) {
> >>> +             unsigned long magic = (unsigned long)page->freelist;
> >>> +
> >>> +             if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
> >>> +                     put_page_bootmem(page);
> >>> +             else
> >>> +                     WARN_ON(1);
> >>> +     }
> >>> +}
> >>> +
> >>> +static inline void prepare_vmemmap_page(struct page *page)
> >>> +{
> >>> +     unsigned long section_nr = pfn_to_section_nr(page_to_pfn(page));
> >>> +
> >>> +     get_page_bootmem(section_nr, page, SECTION_INFO);
> >>> +     mark_page_reserved(page);
> >>> +}
> >>
> >> Can you clarify in the description when exactly these functions are
> >> called and on which type of pages?
> >>
> >> Would indicating "bootmem" in the function names make it clearer what we
> >> are dealing with?
> >>
> >> E.g., any memory allocated via the memblock allocator and not via the
> >> buddy will be marked reserved already in the memmap. It's unclear to me
> >> why we need the mark_page_reserved() here - can you enlighten me? :)
> >
> > Sorry for missing this question. The vmemmap pages are allocated from
> > the bootmem allocator, and such pages are marked as PG_reserved. For
> > those bootmem pages, we should call put_page_bootmem() to free them.
> > You can see that we clear PG_reserved in put_page_bootmem(). In order
> > to be consistent, prepare_vmemmap_page() also marks the page as
> > PG_reserved.
>
> I don't think that really makes sense.
>
> After put_page_bootmem() put the last reference, it clears PG_reserved
> and hands the page over to the buddy via free_reserved_page(). From that
> point on, further get_page_bootmem() would be completely wrong and
> dangerous.
>
> Both, put_page_bootmem() and get_page_bootmem() rely on the fact that
> they are dealing with memblock allocations - marked via PG_reserved. If
> prepare_vmemmap_page() would be called on something that's *not* coming
> from the memblock allocator, it would be completely broken - or am I
> missing something?
>
> AFAICT, there should rather be a BUG_ON(!PageReserved(page)) in
> prepare_vmemmap_page() - or proper handling to deal with !memblock
> allocations.
>

I want to allocate some pages as the vmemmap when we free a
HugeTLB page to the buddy allocator. So I use
prepare_vmemmap_page() to initialize the page (which is allocated
from the buddy allocator) and use it as the vmemmap of the freed
HugeTLB page.

Any suggestions on how to deal with this case?

I have a solution to address this. When the pages are allocated
from the buddy allocator as vmemmap pages, we do not call
prepare_vmemmap_page().

When we free the vmemmap pages of a HugeTLB page, if
PG_reserved of a vmemmap page is set, we call
free_vmemmap_page() to free it to the buddy allocator,
otherwise we call free_page(). What is your opinion?

>
> And as I said, indicating "bootmem" as part of the function names might
> make it clearer that this is not for getting any vmemmap pages (esp.
> allocated when hotplugging memory).

Agree. I am doing that for the next version.

>
> --
> Thanks,
>
> David / dhildenb
>


-- 
Yours,
Muchun


* Re: [External] Re: [PATCH v7 04/15] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
  2020-12-09  8:54       ` David Hildenbrand
@ 2020-12-09  9:27         ` Muchun Song
  0 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-12-09  9:27 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	Xiongchun duan, linux-doc, LKML, Linux Memory Management List,
	linux-fsdevel

On Wed, Dec 9, 2020 at 4:54 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 07.12.20 14:11, Muchun Song wrote:
> > On Mon, Dec 7, 2020 at 8:36 PM David Hildenbrand <david@redhat.com> wrote:
> >>
> >> On 30.11.20 16:18, Muchun Song wrote:
> >>> Every HugeTLB page has more than one struct page structure. A 2M HugeTLB
> >>> page has 512 struct page structures and a 1G HugeTLB page has 4096 struct page
> >>> structures. We __know__ that we only use the first 4(HUGETLB_CGROUP_MIN_ORDER)
> >>> struct page structures to store metadata associated with each HugeTLB.
> >>>
> >>> There are a lot of struct page structures(8 page frames for 2MB HugeTLB
> >>> page and 4096 page frames for 1GB HugeTLB page) associated with each
> >>> HugeTLB page. For tail pages, the value of compound_head is the same.
> >>> So we can reuse first page of tail page structures. We map the virtual
> >>> addresses of the remaining pages of tail page structures to the first
> >>> tail page struct, and then free these page frames. Therefore, we need
> >>> to reserve two pages as vmemmap areas.
> >>>
> >>> So we introduce a new nr_free_vmemmap_pages field in the hstate to
> >>> indicate how many vmemmap pages associated with a HugeTLB page that we
> >>> can free to buddy system.
> >>>
> >>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> >>> Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
> >>> ---
> >>>  include/linux/hugetlb.h |   3 ++
> >>>  mm/Makefile             |   1 +
> >>>  mm/hugetlb.c            |   3 ++
> >>>  mm/hugetlb_vmemmap.c    | 129 ++++++++++++++++++++++++++++++++++++++++++++++++
> >>>  mm/hugetlb_vmemmap.h    |  20 ++++++++
> >>>  5 files changed, 156 insertions(+)
> >>>  create mode 100644 mm/hugetlb_vmemmap.c
> >>>  create mode 100644 mm/hugetlb_vmemmap.h
> >>>
> >>> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> >>> index ebca2ef02212..4efeccb7192c 100644
> >>> --- a/include/linux/hugetlb.h
> >>> +++ b/include/linux/hugetlb.h
> >>> @@ -492,6 +492,9 @@ struct hstate {
> >>>       unsigned int nr_huge_pages_node[MAX_NUMNODES];
> >>>       unsigned int free_huge_pages_node[MAX_NUMNODES];
> >>>       unsigned int surplus_huge_pages_node[MAX_NUMNODES];
> >>> +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> >>> +     unsigned int nr_free_vmemmap_pages;
> >>> +#endif
> >>>  #ifdef CONFIG_CGROUP_HUGETLB
> >>>       /* cgroup control files */
> >>>       struct cftype cgroup_files_dfl[7];
> >>> diff --git a/mm/Makefile b/mm/Makefile
> >>> index ed4b88fa0f5e..056801d8daae 100644
> >>> --- a/mm/Makefile
> >>> +++ b/mm/Makefile
> >>> @@ -71,6 +71,7 @@ obj-$(CONFIG_FRONTSWAP)     += frontswap.o
> >>>  obj-$(CONFIG_ZSWAP)  += zswap.o
> >>>  obj-$(CONFIG_HAS_DMA)        += dmapool.o
> >>>  obj-$(CONFIG_HUGETLBFS)      += hugetlb.o
> >>> +obj-$(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)      += hugetlb_vmemmap.o
> >>>  obj-$(CONFIG_NUMA)   += mempolicy.o
> >>>  obj-$(CONFIG_SPARSEMEM)      += sparse.o
> >>>  obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
> >>> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> >>> index 1f3bf1710b66..25f9e8e9fc4a 100644
> >>> --- a/mm/hugetlb.c
> >>> +++ b/mm/hugetlb.c
> >>> @@ -42,6 +42,7 @@
> >>>  #include <linux/userfaultfd_k.h>
> >>>  #include <linux/page_owner.h>
> >>>  #include "internal.h"
> >>> +#include "hugetlb_vmemmap.h"
> >>>
> >>>  int hugetlb_max_hstate __read_mostly;
> >>>  unsigned int default_hstate_idx;
> >>> @@ -3206,6 +3207,8 @@ void __init hugetlb_add_hstate(unsigned int order)
> >>>       snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB",
> >>>                                       huge_page_size(h)/1024);
> >>>
> >>> +     hugetlb_vmemmap_init(h);
> >>> +
> >>>       parsed_hstate = h;
> >>>  }
> >>>
> >>> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> >>> new file mode 100644
> >>> index 000000000000..51152e258f39
> >>> --- /dev/null
> >>> +++ b/mm/hugetlb_vmemmap.c
> >>> @@ -0,0 +1,129 @@
> >>> +// SPDX-License-Identifier: GPL-2.0
> >>> +/*
> >>> + * Free some vmemmap pages of HugeTLB
> >>> + *
> >>> + * Copyright (c) 2020, Bytedance. All rights reserved.
> >>> + *
> >>> + *     Author: Muchun Song <songmuchun@bytedance.com>
> >>> + *
> >>> + * The struct page structures (page structs) are used to describe a physical
> >>> + * page frame. By default, there is a one-to-one mapping from a page frame to
> >>> + * its corresponding page struct.
> >>> + *
> >>> + * The HugeTLB pages consist of multiple base page size pages and are supported
> >>> + * by many architectures. See hugetlbpage.rst in the Documentation directory
> >>> + * for more details. On the x86 architecture, HugeTLB pages of size 2MB and 1GB
> >>> + * are currently supported. Since the base page size on x86 is 4KB, a 2MB
> >>> + * HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
> >>> + * 4096 base pages. For each base page, there is a corresponding page struct.
> >>> + *
> >>> + * Within the HugeTLB subsystem, only the first 4 page structs are used to
> >>> + * contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
> >>> + * provides this upper limit. The only 'useful' information in the remaining
> >>> + * page structs is the compound_head field, and this field is the same for all
> >>> + * tail pages.
> >>> + *
> >>> + * By removing redundant page structs for HugeTLB pages, memory can be returned to
> >>> + * the buddy allocator for other uses.
> >>> + *
> >>> + * When the system boots up, every 2M HugeTLB page has 512 struct page structs
> >>> + * which occupy 8 pages (sizeof(struct page) * 512 / PAGE_SIZE).
> >>
> >>
> >> You should try to generalize all descriptions regarding differing base
> >> page sizes. E.g., arm64 supports 4k, 16k, and 64k base pages.
> >
> > Will do. Thanks.
> >
> >>
> >> [...]
> >>
> >>> @@ -0,0 +1,20 @@
> >>> +// SPDX-License-Identifier: GPL-2.0
> >>> +/*
> >>> + * Free some vmemmap pages of HugeTLB
> >>> + *
> >>> + * Copyright (c) 2020, Bytedance. All rights reserved.
> >>> + *
> >>> + *     Author: Muchun Song <songmuchun@bytedance.com>
> >>> + */
> >>> +#ifndef _LINUX_HUGETLB_VMEMMAP_H
> >>> +#define _LINUX_HUGETLB_VMEMMAP_H
> >>> +#include <linux/hugetlb.h>
> >>> +
> >>> +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> >>> +void __init hugetlb_vmemmap_init(struct hstate *h);
> >>> +#else
> >>> +static inline void hugetlb_vmemmap_init(struct hstate *h)
> >>> +{
> >>> +}
> >>> +#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
> >>> +#endif /* _LINUX_HUGETLB_VMEMMAP_H */
> >>>
> >>
> >> This patch as it stands is rather sub-optimal. I mean, all it does is
> >> add documentation and print what could be done.
> >>
> >> Can we instead introduce the basic infrastructure and enable it via this
> >> patch on top, where we glue all the pieces together? Or is there
> >> something I am missing?
> >
> > Maybe we can make the config of CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> > default n in the Kconfig. When everything is ready, then make it
> > default to y. Right?
>
> I think it can make sense to introduce the
> CONFIG_HUGETLB_PAGE_FREE_VMEMMAP option first if necessary for other
> patches. But I think the documentation and the dummy call should
> rather be moved to the end of the series where you glue everything you
> introduced together and officially unlock the feature. Others might
> disagree :)

I see. Thanks for your suggestions.
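A rough sketch of the default-off option mentioned above (a hypothetical fragment, not the actual patch; the dependency list is a guess):

```kconfig
# Hypothetical Kconfig sketch: introduce the option defaulting to off,
# and flip the default (or drop the line) once the series is complete.
config HUGETLB_PAGE_FREE_VMEMMAP
	bool "Free redundant vmemmap pages of each HugeTLB page"
	default n
	depends on HUGETLB_PAGE && SPARSEMEM_VMEMMAP
	help
	  When enabled, tail struct pages of a HugeTLB page that carry
	  no unique information are freed back to the buddy allocator.
```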

>
> BTW, I'm planning on reviewing the other parts of this series, I'm just
> fairly busy, so it might take a while (I think we're targeting 5.12
> either way as the 5.11 merge window will start fairly soon).
>

Many thanks.

> --
> Thanks,
>
> David / dhildenb
>


-- 
Yours,
Muchun

^ permalink raw reply	[flat|nested] 49+ messages in thread

* Re: [External] Re: [PATCH v7 05/15] mm/bootmem_info: Introduce {free,prepare}_vmemmap_page()
  2020-12-09  9:25         ` Muchun Song
@ 2020-12-09  9:32           ` David Hildenbrand
  2020-12-09  9:43             ` Muchun Song
  0 siblings, 1 reply; 49+ messages in thread
From: David Hildenbrand @ 2020-12-09  9:32 UTC (permalink / raw)
  To: Muchun Song
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	Xiongchun duan, linux-doc, LKML, Linux Memory Management List,
	linux-fsdevel

On 09.12.20 10:25, Muchun Song wrote:
> On Wed, Dec 9, 2020 at 4:50 PM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 09.12.20 08:36, Muchun Song wrote:
>>> On Mon, Dec 7, 2020 at 8:39 PM David Hildenbrand <david@redhat.com> wrote:
>>>>
>>>> On 30.11.20 16:18, Muchun Song wrote:
>>>>> In the later patch, we can use the free_vmemmap_page() to free the
>>>>> unused vmemmap pages and initialize a page for use as a vmemmap page
>>>>> via prepare_vmemmap_page().
>>>>>
>>>>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>>>>> ---
>>>>>  include/linux/bootmem_info.h | 24 ++++++++++++++++++++++++
>>>>>  1 file changed, 24 insertions(+)
>>>>>
>>>>> diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
>>>>> index 4ed6dee1adc9..239e3cc8f86c 100644
>>>>> --- a/include/linux/bootmem_info.h
>>>>> +++ b/include/linux/bootmem_info.h
>>>>> @@ -3,6 +3,7 @@
>>>>>  #define __LINUX_BOOTMEM_INFO_H
>>>>>
>>>>>  #include <linux/mmzone.h>
>>>>> +#include <linux/mm.h>
>>>>>
>>>>>  /*
>>>>>   * Types for free bootmem stored in page->lru.next. These have to be in
>>>>> @@ -22,6 +23,29 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
>>>>>  void get_page_bootmem(unsigned long info, struct page *page,
>>>>>                     unsigned long type);
>>>>>  void put_page_bootmem(struct page *page);
>>>>> +
>>>>> +static inline void free_vmemmap_page(struct page *page)
>>>>> +{
>>>>> +     VM_WARN_ON(!PageReserved(page) || page_ref_count(page) != 2);
>>>>> +
>>>>> +     /* bootmem page has reserved flag in the reserve_bootmem_region */
>>>>> +     if (PageReserved(page)) {
>>>>> +             unsigned long magic = (unsigned long)page->freelist;
>>>>> +
>>>>> +             if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
>>>>> +                     put_page_bootmem(page);
>>>>> +             else
>>>>> +                     WARN_ON(1);
>>>>> +     }
>>>>> +}
>>>>> +
>>>>> +static inline void prepare_vmemmap_page(struct page *page)
>>>>> +{
>>>>> +     unsigned long section_nr = pfn_to_section_nr(page_to_pfn(page));
>>>>> +
>>>>> +     get_page_bootmem(section_nr, page, SECTION_INFO);
>>>>> +     mark_page_reserved(page);
>>>>> +}
>>>>
>>>> Can you clarify in the description when exactly these functions are
>>>> called and on which type of pages?
>>>>
>>>> Would indicating "bootmem" in the function names make it clearer what we
>>>> are dealing with?
>>>>
>>>> E.g., any memory allocated via the memblock allocator and not via the
> >> buddy will be marked reserved already in the memmap. It's unclear to me
>>>> why we need the mark_page_reserved() here - can you enlighten me? :)
>>>
>>> Sorry for ignoring this question. Because the vmemmap pages are allocated
>>> from the bootmem allocator which is marked as PG_reserved. For those bootmem
>>> pages, we should call put_page_bootmem for free. You can see that we
>>> clear the PG_reserved in the put_page_bootmem. In order to be consistent,
>>> the prepare_vmemmap_page also marks the page as PG_reserved.
>>
>> I don't think that really makes sense.
>>
>> After put_page_bootmem() put the last reference, it clears PG_reserved
>> and hands the page over to the buddy via free_reserved_page(). From that
>> point on, further get_page_bootmem() would be completely wrong and
>> dangerous.
>>
>> Both, put_page_bootmem() and get_page_bootmem() rely on the fact that
> >> they are dealing with memblock allocations - marked via PG_reserved. If
>> prepare_vmemmap_page() would be called on something that's *not* coming
>> from the memblock allocator, it would be completely broken - or am I
>> missing something?
>>
> >> AFAICT, there should rather be a BUG_ON(!PageReserved(page)) in
>> prepare_vmemmap_page() - or proper handling to deal with !memblock
>> allocations.
>>
> 
> I want to allocate some pages as the vmemmap when
> we free a HugeTLB page to the buddy allocator. So I use
> prepare_vmemmap_page() to initialize a page (which was
> allocated from the buddy allocator) and use it as the vmemmap
> of the freed HugeTLB page.
> 
> Any suggestions to deal with this case?

If you obtained pages via the buddy, there shouldn't be anything special
to handle, no? What speaks against


prepare_vmemmap_page():
if (!PageReserved(page))
	return;


put_page_bootmem():
if (!PageReserved(page))
	__free_page();


Or if we care about multiple references, get_page() and put_page().

> 
> I have a solution to address this. When the pages are allocated
> from the buddy as vmemmap pages, we do not call
> prepare_vmemmap_page().
> 
> When we free some vmemmap pages of a HugeTLB
> page, if the PG_reserved of the vmemmap page is set,
> we call free_vmemmap_page() to free it to buddy,
> otherwise call free_page(). What is your opinion?

That would also work. Then, please include "bootmem" as part of the
function name. If you plan on using my suggestion, you can drop
"bootmem" from the name as it works for both types of pages.


-- 
Thanks,

David / dhildenb



* Re: [External] Re: [PATCH v7 05/15] mm/bootmem_info: Introduce {free,prepare}_vmemmap_page()
  2020-12-09  9:32           ` David Hildenbrand
@ 2020-12-09  9:43             ` Muchun Song
  0 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-12-09  9:43 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	Xiongchun duan, linux-doc, LKML, Linux Memory Management List,
	linux-fsdevel

On Wed, Dec 9, 2020 at 5:33 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 09.12.20 10:25, Muchun Song wrote:
> > On Wed, Dec 9, 2020 at 4:50 PM David Hildenbrand <david@redhat.com> wrote:
> >>
> >> On 09.12.20 08:36, Muchun Song wrote:
> >>> On Mon, Dec 7, 2020 at 8:39 PM David Hildenbrand <david@redhat.com> wrote:
> >>>>
> >>>> On 30.11.20 16:18, Muchun Song wrote:
> >>>>> In the later patch, we can use the free_vmemmap_page() to free the
> >>>>> unused vmemmap pages and initialize a page for use as a vmemmap page
> >>>>> via prepare_vmemmap_page().
> >>>>>
> >>>>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> >>>>> ---
> >>>>>  include/linux/bootmem_info.h | 24 ++++++++++++++++++++++++
> >>>>>  1 file changed, 24 insertions(+)
> >>>>>
> >>>>> diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
> >>>>> index 4ed6dee1adc9..239e3cc8f86c 100644
> >>>>> --- a/include/linux/bootmem_info.h
> >>>>> +++ b/include/linux/bootmem_info.h
> >>>>> @@ -3,6 +3,7 @@
> >>>>>  #define __LINUX_BOOTMEM_INFO_H
> >>>>>
> >>>>>  #include <linux/mmzone.h>
> >>>>> +#include <linux/mm.h>
> >>>>>
> >>>>>  /*
> >>>>>   * Types for free bootmem stored in page->lru.next. These have to be in
> >>>>> @@ -22,6 +23,29 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
> >>>>>  void get_page_bootmem(unsigned long info, struct page *page,
> >>>>>                     unsigned long type);
> >>>>>  void put_page_bootmem(struct page *page);
> >>>>> +
> >>>>> +static inline void free_vmemmap_page(struct page *page)
> >>>>> +{
> >>>>> +     VM_WARN_ON(!PageReserved(page) || page_ref_count(page) != 2);
> >>>>> +
> >>>>> +     /* bootmem page has reserved flag in the reserve_bootmem_region */
> >>>>> +     if (PageReserved(page)) {
> >>>>> +             unsigned long magic = (unsigned long)page->freelist;
> >>>>> +
> >>>>> +             if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
> >>>>> +                     put_page_bootmem(page);
> >>>>> +             else
> >>>>> +                     WARN_ON(1);
> >>>>> +     }
> >>>>> +}
> >>>>> +
> >>>>> +static inline void prepare_vmemmap_page(struct page *page)
> >>>>> +{
> >>>>> +     unsigned long section_nr = pfn_to_section_nr(page_to_pfn(page));
> >>>>> +
> >>>>> +     get_page_bootmem(section_nr, page, SECTION_INFO);
> >>>>> +     mark_page_reserved(page);
> >>>>> +}
> >>>>
> >>>> Can you clarify in the description when exactly these functions are
> >>>> called and on which type of pages?
> >>>>
> >>>> Would indicating "bootmem" in the function names make it clearer what we
> >>>> are dealing with?
> >>>>
> >>>> E.g., any memory allocated via the memblock allocator and not via the
> >>>> buddy will be marked reserved already in the memmap. It's unclear to me
> >>>> why we need the mark_page_reserved() here - can you enlighten me? :)
> >>>
> >>> Sorry for ignoring this question. Because the vmemmap pages are allocated
> >>> from the bootmem allocator which is marked as PG_reserved. For those bootmem
> >>> pages, we should call put_page_bootmem for free. You can see that we
> >>> clear the PG_reserved in the put_page_bootmem. In order to be consistent,
> >>> the prepare_vmemmap_page also marks the page as PG_reserved.
> >>
> >> I don't think that really makes sense.
> >>
> >> After put_page_bootmem() put the last reference, it clears PG_reserved
> >> and hands the page over to the buddy via free_reserved_page(). From that
> >> point on, further get_page_bootmem() would be completely wrong and
> >> dangerous.
> >>
> >> Both, put_page_bootmem() and get_page_bootmem() rely on the fact that
> >>>> they are dealing with memblock allocations - marked via PG_reserved. If
> >> prepare_vmemmap_page() would be called on something that's *not* coming
> >> from the memblock allocator, it would be completely broken - or am I
> >> missing something?
> >>
> >>>> AFAICT, there should rather be a BUG_ON(!PageReserved(page)) in
> >> prepare_vmemmap_page() - or proper handling to deal with !memblock
> >> allocations.
> >>
> >
> > I want to allocate some pages as the vmemmap when
> > we free a HugeTLB page to the buddy allocator. So I use
> > prepare_vmemmap_page() to initialize a page (which was
> > allocated from the buddy allocator) and use it as the vmemmap
> > of the freed HugeTLB page.
> >
> > Any suggestions to deal with this case?
>
> If you obtained pages via the buddy, there shouldn't be anything special
> to handle, no? What speaks against
>
>
> prepare_vmemmap_page():
> if (!PageReserved(page))
>         return;
>
>
> put_page_bootmem():
> if (!PageReserved(page))
>         __free_page();
>

Thanks.

>
> Or if we care about multiple references, get_page() and put_page().
>
> >
> > I have a solution to address this. When the pages are allocated
> > from the buddy as vmemmap pages,  we do not call
> > prepare_vmemmap_page().
> >
> > When we free some vmemmap pages of a HugeTLB
> > page, if the PG_reserved of the vmemmap page is set,
> > we call free_vmemmap_page() to free it to buddy,
> > otherwise call free_page(). What is your opinion?
>
> That would also work. Then, please include "bootmem" as part of the
> function name. If you plan on using my suggestion, you can drop
> "bootmem" from the name as it works for both types of pages.
>

Agree. Thanks.

>
> --
> Thanks,
>
> David / dhildenb
>


-- 
Yours,
Muchun


* Re: [PATCH v7 06/15] mm/hugetlb: Disable freeing vmemmap if struct page size is not power of two
  2020-11-30 15:18 ` [PATCH v7 06/15] mm/hugetlb: Disable freeing vmemmap if struct page size is not power of two Muchun Song
@ 2020-12-09  9:57   ` David Hildenbrand
  2020-12-09 10:03     ` [External] " Muchun Song
  0 siblings, 1 reply; 49+ messages in thread
From: David Hildenbrand @ 2020-12-09  9:57 UTC (permalink / raw)
  To: Muchun Song, corbet, mike.kravetz, tglx, mingo, bp, x86, hpa,
	dave.hansen, luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On 30.11.20 16:18, Muchun Song wrote:
> We can only free the tail vmemmap pages of HugeTLB to the buddy allocator
> when the size of struct page is a power of two.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  mm/hugetlb_vmemmap.c | 5 +++++
>  1 file changed, 5 insertions(+)
> 
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index 51152e258f39..ad8fc61ea273 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -111,6 +111,11 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
>  	unsigned int nr_pages = pages_per_huge_page(h);
>  	unsigned int vmemmap_pages;
>  
> +	if (!is_power_of_2(sizeof(struct page))) {
> +		pr_info("disable freeing vmemmap pages for %s\n", h->name);

I'd just drop that pr_info(). Users are able to observe that it's
working (below), so they are able to identify that it's not working as well.

> +		return;
> +	}
> +
>  	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
>  	/*
>  	 * The head page and the first tail page are not to be freed to buddy
> 

Please squash this patch into the enabling patch and add a comment
instead, like

/* We cannot optimize if a "struct page" crosses page boundaries. */

-- 
Thanks,

David / dhildenb



* Re: [External] Re: [PATCH v7 06/15] mm/hugetlb: Disable freeing vmemmap if struct page size is not power of two
  2020-12-09  9:57   ` David Hildenbrand
@ 2020-12-09 10:03     ` Muchun Song
  2020-12-09 10:06       ` David Hildenbrand
  0 siblings, 1 reply; 49+ messages in thread
From: Muchun Song @ 2020-12-09 10:03 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	Xiongchun duan, linux-doc, LKML, Linux Memory Management List,
	linux-fsdevel

On Wed, Dec 9, 2020 at 5:57 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 30.11.20 16:18, Muchun Song wrote:
> > We can only free the tail vmemmap pages of HugeTLB to the buddy allocator
> > when the size of struct page is a power of two.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> >  mm/hugetlb_vmemmap.c | 5 +++++
> >  1 file changed, 5 insertions(+)
> >
> > diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> > index 51152e258f39..ad8fc61ea273 100644
> > --- a/mm/hugetlb_vmemmap.c
> > +++ b/mm/hugetlb_vmemmap.c
> > @@ -111,6 +111,11 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
> >       unsigned int nr_pages = pages_per_huge_page(h);
> >       unsigned int vmemmap_pages;
> >
> > +     if (!is_power_of_2(sizeof(struct page))) {
> > +             pr_info("disable freeing vmemmap pages for %s\n", h->name);
>
> I'd just drop that pr_info(). Users are able to observe that it's
> working (below), so they are able to identify that it's not working as well.

The below is just a pr_debug. Do you suggest converting it to pr_info?

>
> > +             return;
> > +     }
> > +
> >       vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
> >       /*
> >        * The head page and the first tail page are not to be freed to buddy
> >
>
> Please squash this patch into the enabling patch and add a comment
> instead, like
>
> /* We cannot optimize if a "struct page" crosses page boundaries. */
>

Will do. Thanks.

> --
> Thanks,
>
> David / dhildenb
>


-- 
Yours,
Muchun


* Re: [External] Re: [PATCH v7 06/15] mm/hugetlb: Disable freeing vmemmap if struct page size is not power of two
  2020-12-09 10:03     ` [External] " Muchun Song
@ 2020-12-09 10:06       ` David Hildenbrand
  2020-12-09 10:10         ` David Hildenbrand
  2020-12-09 10:10         ` Muchun Song
  0 siblings, 2 replies; 49+ messages in thread
From: David Hildenbrand @ 2020-12-09 10:06 UTC (permalink / raw)
  To: Muchun Song
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	Xiongchun duan, linux-doc, LKML, Linux Memory Management List,
	linux-fsdevel

On 09.12.20 11:03, Muchun Song wrote:
> On Wed, Dec 9, 2020 at 5:57 PM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 30.11.20 16:18, Muchun Song wrote:
>>> We can only free the tail vmemmap pages of HugeTLB to the buddy allocator
>>> when the size of struct page is a power of two.
>>>
>>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>>> ---
>>>  mm/hugetlb_vmemmap.c | 5 +++++
>>>  1 file changed, 5 insertions(+)
>>>
>>> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
>>> index 51152e258f39..ad8fc61ea273 100644
>>> --- a/mm/hugetlb_vmemmap.c
>>> +++ b/mm/hugetlb_vmemmap.c
>>> @@ -111,6 +111,11 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
>>>       unsigned int nr_pages = pages_per_huge_page(h);
>>>       unsigned int vmemmap_pages;
>>>
>>> +     if (!is_power_of_2(sizeof(struct page))) {
>>> +             pr_info("disable freeing vmemmap pages for %s\n", h->name);
>>
>> I'd just drop that pr_info(). Users are able to observe that it's
>> working (below), so they are able to identify that it's not working as well.
> 
> The below is just a pr_debug. Do you suggest converting it to pr_info?

Good question. I wonder if users really have to know in most cases.
Maybe pr_debug() is good enough in environments where we want to debug
why stuff is not working as expected.

-- 
Thanks,

David / dhildenb



* Re: [External] Re: [PATCH v7 06/15] mm/hugetlb: Disable freeing vmemmap if struct page size is not power of two
  2020-12-09 10:06       ` David Hildenbrand
@ 2020-12-09 10:10         ` David Hildenbrand
  2020-12-09 10:16           ` Muchun Song
  2020-12-09 15:13           ` Muchun Song
  2020-12-09 10:10         ` Muchun Song
  1 sibling, 2 replies; 49+ messages in thread
From: David Hildenbrand @ 2020-12-09 10:10 UTC (permalink / raw)
  To: Muchun Song
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	Xiongchun duan, linux-doc, LKML, Linux Memory Management List,
	linux-fsdevel

On 09.12.20 11:06, David Hildenbrand wrote:
> On 09.12.20 11:03, Muchun Song wrote:
>> On Wed, Dec 9, 2020 at 5:57 PM David Hildenbrand <david@redhat.com> wrote:
>>>
>>> On 30.11.20 16:18, Muchun Song wrote:
>>>> We can only free the tail vmemmap pages of HugeTLB to the buddy allocator
>>>> when the size of struct page is a power of two.
>>>>
>>>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>>>> ---
>>>>  mm/hugetlb_vmemmap.c | 5 +++++
>>>>  1 file changed, 5 insertions(+)
>>>>
>>>> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
>>>> index 51152e258f39..ad8fc61ea273 100644
>>>> --- a/mm/hugetlb_vmemmap.c
>>>> +++ b/mm/hugetlb_vmemmap.c
>>>> @@ -111,6 +111,11 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
>>>>       unsigned int nr_pages = pages_per_huge_page(h);
>>>>       unsigned int vmemmap_pages;
>>>>
>>>> +     if (!is_power_of_2(sizeof(struct page))) {
>>>> +             pr_info("disable freeing vmemmap pages for %s\n", h->name);
>>>
>>> I'd just drop that pr_info(). Users are able to observe that it's
>>> working (below), so they are able to identify that it's not working as well.
>>
>> The below is just a pr_debug. Do you suggest converting it to pr_info?
> 
> Good question. I wonder if users really have to know in most cases.
> Maybe pr_debug() is good enough in environments where we want to debug
> why stuff is not working as expected.
> 

Oh, another thought, can we glue availability of
HUGETLB_PAGE_FREE_VMEMMAP (or a new define based on the config and the
> size of a struct page) to the size of struct page somehow?

I mean, it's known at compile time that this will never work.

-- 
Thanks,

David / dhildenb



* Re: [External] Re: [PATCH v7 06/15] mm/hugetlb: Disable freeing vmemmap if struct page size is not power of two
  2020-12-09 10:06       ` David Hildenbrand
  2020-12-09 10:10         ` David Hildenbrand
@ 2020-12-09 10:10         ` Muchun Song
  1 sibling, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-12-09 10:10 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	Xiongchun duan, linux-doc, LKML, Linux Memory Management List,
	linux-fsdevel

On Wed, Dec 9, 2020 at 6:06 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 09.12.20 11:03, Muchun Song wrote:
> > On Wed, Dec 9, 2020 at 5:57 PM David Hildenbrand <david@redhat.com> wrote:
> >>
> >> On 30.11.20 16:18, Muchun Song wrote:
> >>> We can only free the tail vmemmap pages of HugeTLB to the buddy allocator
> >>> when the size of struct page is a power of two.
> >>>
> >>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> >>> ---
> >>>  mm/hugetlb_vmemmap.c | 5 +++++
> >>>  1 file changed, 5 insertions(+)
> >>>
> >>> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> >>> index 51152e258f39..ad8fc61ea273 100644
> >>> --- a/mm/hugetlb_vmemmap.c
> >>> +++ b/mm/hugetlb_vmemmap.c
> >>> @@ -111,6 +111,11 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
> >>>       unsigned int nr_pages = pages_per_huge_page(h);
> >>>       unsigned int vmemmap_pages;
> >>>
> >>> +     if (!is_power_of_2(sizeof(struct page))) {
> >>> +             pr_info("disable freeing vmemmap pages for %s\n", h->name);
> >>
> >> I'd just drop that pr_info(). Users are able to observe that it's
> >> working (below), so they are able to identify that it's not working as well.
> >
> > The below is just a pr_debug. Do you suggest converting it to pr_info?
>
> Good question. I wonder if users really have to know in most cases.
> Maybe pr_debug() is good enough in environments where we want to debug
> why stuff is not working as expected.

When someone enables this feature via the boot cmdline, they probably
want to know whether it actually took effect. From this point of view, the pr_info
is necessary. Right?

>
> --
> Thanks,
>
> David / dhildenb
>


-- 
Yours,
Muchun


* Re: [External] Re: [PATCH v7 06/15] mm/hugetlb: Disable freeing vmemmap if struct page size is not power of two
  2020-12-09 10:10         ` David Hildenbrand
@ 2020-12-09 10:16           ` Muchun Song
  2020-12-09 15:13           ` Muchun Song
  1 sibling, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-12-09 10:16 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	Xiongchun duan, linux-doc, LKML, Linux Memory Management List,
	linux-fsdevel

On Wed, Dec 9, 2020 at 6:10 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 09.12.20 11:06, David Hildenbrand wrote:
> > On 09.12.20 11:03, Muchun Song wrote:
> >> On Wed, Dec 9, 2020 at 5:57 PM David Hildenbrand <david@redhat.com> wrote:
> >>>
> >>> On 30.11.20 16:18, Muchun Song wrote:
> >>>> We can only free the tail vmemmap pages of HugeTLB to the buddy allocator
> >>>> when the size of struct page is a power of two.
> >>>>
> >>>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> >>>> ---
> >>>>  mm/hugetlb_vmemmap.c | 5 +++++
> >>>>  1 file changed, 5 insertions(+)
> >>>>
> >>>> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> >>>> index 51152e258f39..ad8fc61ea273 100644
> >>>> --- a/mm/hugetlb_vmemmap.c
> >>>> +++ b/mm/hugetlb_vmemmap.c
> >>>> @@ -111,6 +111,11 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
> >>>>       unsigned int nr_pages = pages_per_huge_page(h);
> >>>>       unsigned int vmemmap_pages;
> >>>>
> >>>> +     if (!is_power_of_2(sizeof(struct page))) {
> >>>> +             pr_info("disable freeing vmemmap pages for %s\n", h->name);
> >>>
> >>> I'd just drop that pr_info(). Users are able to observe that it's
> >>> working (below), so they are able to identify that it's not working as well.
> >>
> >> The below is just a pr_debug. Do you suggest converting it to pr_info?
> >
> > Good question. I wonder if users really have to know in most cases.
> > Maybe pr_debug() is good enough in environments where we want to debug
> > why stuff is not working as expected.
> >
>
> Oh, another thought, can we glue availability of
> HUGETLB_PAGE_FREE_VMEMMAP (or a new define based on the config and the
> > size of a struct page) to the size of struct page somehow?
>
> I mean, it's known at compile time that this will never work.

Good question. I have thought about this as well. Just like the
SPINLOCK_SIZE macro, we could generate a new macro that
indicates the size of struct page. :)

>
> --
> Thanks,
>
> David / dhildenb
>


-- 
Yours,
Muchun


* Re: [External] Re: [PATCH v7 06/15] mm/hugetlb: Disable freeing vmemmap if struct page size is not power of two
  2020-12-09 10:10         ` David Hildenbrand
  2020-12-09 10:16           ` Muchun Song
@ 2020-12-09 15:13           ` Muchun Song
  2020-12-09 15:47             ` David Hildenbrand
  1 sibling, 1 reply; 49+ messages in thread
From: Muchun Song @ 2020-12-09 15:13 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	Xiongchun duan, linux-doc, LKML, Linux Memory Management List,
	linux-fsdevel

On Wed, Dec 9, 2020 at 6:10 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 09.12.20 11:06, David Hildenbrand wrote:
> > On 09.12.20 11:03, Muchun Song wrote:
> >> On Wed, Dec 9, 2020 at 5:57 PM David Hildenbrand <david@redhat.com> wrote:
> >>>
> >>> On 30.11.20 16:18, Muchun Song wrote:
> >>>> We only can free the tail vmemmap pages of HugeTLB to the buddy allocator
> >>>> when the size of struct page is a power of two.
> >>>>
> >>>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> >>>> ---
> >>>>  mm/hugetlb_vmemmap.c | 5 +++++
> >>>>  1 file changed, 5 insertions(+)
> >>>>
> >>>> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> >>>> index 51152e258f39..ad8fc61ea273 100644
> >>>> --- a/mm/hugetlb_vmemmap.c
> >>>> +++ b/mm/hugetlb_vmemmap.c
> >>>> @@ -111,6 +111,11 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
> >>>>       unsigned int nr_pages = pages_per_huge_page(h);
> >>>>       unsigned int vmemmap_pages;
> >>>>
> >>>> +     if (!is_power_of_2(sizeof(struct page))) {
> >>>> +             pr_info("disable freeing vmemmap pages for %s\n", h->name);
> >>>
> >>> I'd just drop that pr_info(). Users are able to observe that it's
> >>> working (below), so they are able to identify that it's not working as well.
> >>
> >> The below is just a pr_debug. Do you suggest converting it to pr_info?
> >
> > Good question. I wonder if users really have to know in most cases.
> > Maybe pr_debug() is good enough in environments where we want to debug
> > why stuff is not working as expected.
> >
>
> Oh, another thought, can we glue availability of
> HUGETLB_PAGE_FREE_VMEMMAP (or a new define based on the config and the
> size of a stuct page) to the size of struct page somehow?
>
> I mean, it's known at compile time that this will never work.

I want to define a macro which indicates the size of
struct page. There is a place (kernel/bounds.c) where we can
do similar things. But when I added the following code to
that file:

        DEFINE(STRUCT_PAGE_SIZE, sizeof(struct page));

the build output a message like:

       make[2]: Circular kernel/bounds.s <- include/generated/bounds.h
dependency dropped.

Then I realised that the size of struct page itself depends
on include/generated/bounds.h, so that file cannot be generated.

Hi David,

Do you have some idea about this?

>
> --
> Thanks,
>
> David / dhildenb
>


-- 
Yours,
Muchun


* Re: [External] Re: [PATCH v7 06/15] mm/hugetlb: Disable freeing vmemmap if struct page size is not power of two
  2020-12-09 15:13           ` Muchun Song
@ 2020-12-09 15:47             ` David Hildenbrand
  2020-12-09 15:50               ` Muchun Song
  0 siblings, 1 reply; 49+ messages in thread
From: David Hildenbrand @ 2020-12-09 15:47 UTC (permalink / raw)
  To: Muchun Song
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	Xiongchun duan, linux-doc, LKML, Linux Memory Management List,
	linux-fsdevel

On 09.12.20 16:13, Muchun Song wrote:
> On Wed, Dec 9, 2020 at 6:10 PM David Hildenbrand <david@redhat.com> wrote:
>>
>> On 09.12.20 11:06, David Hildenbrand wrote:
>>> On 09.12.20 11:03, Muchun Song wrote:
>>>> On Wed, Dec 9, 2020 at 5:57 PM David Hildenbrand <david@redhat.com> wrote:
>>>>>
>>>>> On 30.11.20 16:18, Muchun Song wrote:
>>>>>> We only can free the tail vmemmap pages of HugeTLB to the buddy allocator
>>>>>> when the size of struct page is a power of two.
>>>>>>
>>>>>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>>>>>> ---
>>>>>>  mm/hugetlb_vmemmap.c | 5 +++++
>>>>>>  1 file changed, 5 insertions(+)
>>>>>>
>>>>>> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
>>>>>> index 51152e258f39..ad8fc61ea273 100644
>>>>>> --- a/mm/hugetlb_vmemmap.c
>>>>>> +++ b/mm/hugetlb_vmemmap.c
>>>>>> @@ -111,6 +111,11 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
>>>>>>       unsigned int nr_pages = pages_per_huge_page(h);
>>>>>>       unsigned int vmemmap_pages;
>>>>>>
>>>>>> +     if (!is_power_of_2(sizeof(struct page))) {
>>>>>> +             pr_info("disable freeing vmemmap pages for %s\n", h->name);
>>>>>
>>>>> I'd just drop that pr_info(). Users are able to observe that it's
>>>>> working (below), so they are able to identify that it's not working as well.
>>>>
>>>> The below is just a pr_debug. Do you suggest converting it to pr_info?
>>>
>>> Good question. I wonder if users really have to know in most cases.
>>> Maybe pr_debug() is good enough in environments where we want to debug
>>> why stuff is not working as expected.
>>>
>>
>> Oh, another thought, can we glue availability of
>> HUGETLB_PAGE_FREE_VMEMMAP (or a new define based on the config and the
>> size of a stuct page) to the size of struct page somehow?
>>
>> I mean, it's known at compile time that this will never work.
> 
> I want to define a macro which indicates the size of the
> struct page. There is place (kernel/bounds.c) where can
> do similar things. When I added the following code in
> that file.
> 
>         DEFINE(STRUCT_PAGE_SIZE, sizeof(struct page));
> 
> Then the compiler will output a message like:
> 

Hm, from what I understand you cannot use sizeof() in #if etc., so it
might not be possible after all. At least the compiler should optimize
code like

if (!is_power_of_2(sizeof(struct page))) {
	// either this
} else {
	// or that
}

and discard whichever branch can never be reached.

-- 
Thanks,

David / dhildenb



* Re: [External] Re: [PATCH v7 06/15] mm/hugetlb: Disable freeing vmemmap if struct page size is not power of two
  2020-12-09 15:47             ` David Hildenbrand
@ 2020-12-09 15:50               ` Muchun Song
  0 siblings, 0 replies; 49+ messages in thread
From: Muchun Song @ 2020-12-09 15:50 UTC (permalink / raw)
  To: David Hildenbrand
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador, Michal Hocko,
	Song Bao Hua (Barry Song),
	Xiongchun duan, linux-doc, LKML, Linux Memory Management List,
	linux-fsdevel

On Wed, Dec 9, 2020 at 11:48 PM David Hildenbrand <david@redhat.com> wrote:
>
> On 09.12.20 16:13, Muchun Song wrote:
> > On Wed, Dec 9, 2020 at 6:10 PM David Hildenbrand <david@redhat.com> wrote:
> >>
> >> On 09.12.20 11:06, David Hildenbrand wrote:
> >>> On 09.12.20 11:03, Muchun Song wrote:
> >>>> On Wed, Dec 9, 2020 at 5:57 PM David Hildenbrand <david@redhat.com> wrote:
> >>>>>
> >>>>> On 30.11.20 16:18, Muchun Song wrote:
> >>>>>> We only can free the tail vmemmap pages of HugeTLB to the buddy allocator
> >>>>>> when the size of struct page is a power of two.
> >>>>>>
> >>>>>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> >>>>>> ---
> >>>>>>  mm/hugetlb_vmemmap.c | 5 +++++
> >>>>>>  1 file changed, 5 insertions(+)
> >>>>>>
> >>>>>> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> >>>>>> index 51152e258f39..ad8fc61ea273 100644
> >>>>>> --- a/mm/hugetlb_vmemmap.c
> >>>>>> +++ b/mm/hugetlb_vmemmap.c
> >>>>>> @@ -111,6 +111,11 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
> >>>>>>       unsigned int nr_pages = pages_per_huge_page(h);
> >>>>>>       unsigned int vmemmap_pages;
> >>>>>>
> >>>>>> +     if (!is_power_of_2(sizeof(struct page))) {
> >>>>>> +             pr_info("disable freeing vmemmap pages for %s\n", h->name);
> >>>>>
> >>>>> I'd just drop that pr_info(). Users are able to observe that it's
> >>>>> working (below), so they are able to identify that it's not working as well.
> >>>>
> >>>> The below is just a pr_debug. Do you suggest converting it to pr_info?
> >>>
> >>> Good question. I wonder if users really have to know in most cases.
> >>> Maybe pr_debug() is good enough in environments where we want to debug
> >>> why stuff is not working as expected.
> >>>
> >>
> >> Oh, another thought, can we glue availability of
> >> HUGETLB_PAGE_FREE_VMEMMAP (or a new define based on the config and the
> >> size of a stuct page) to the size of struct page somehow?
> >>
> >> I mean, it's known at compile time that this will never work.
> >
> > I want to define a macro which indicates the size of the
> > struct page. There is place (kernel/bounds.c) where can
> > do similar things. When I added the following code in
> > that file.
> >
> >         DEFINE(STRUCT_PAGE_SIZE, sizeof(struct page));
> >
> > Then the compiler will output a message like:
> >
>
> Hm, from what I understand you cannot use sizeof() in #if etc. So it
> might not be possible after all. At least the compiler should optimize
> code like
>
> if (!is_power_of_2(sizeof(struct page))) {
>         // either this
> } else {
>         // or that
> }
>
> that can never be reached

Got it. Thanks so much.

>
> --
> Thanks,
>
> David / dhildenb
>


-- 
Yours,
Muchun


end of thread, other threads:[~2020-12-09 15:52 UTC | newest]

Thread overview: 49+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-11-30 15:18 [PATCH v7 00/15] Free some vmemmap pages of hugetlb page Muchun Song
2020-11-30 15:18 ` [PATCH v7 01/15] mm/memory_hotplug: Move bootmem info registration API to bootmem_info.c Muchun Song
2020-12-07 12:12   ` David Hildenbrand
2020-11-30 15:18 ` [PATCH v7 02/15] mm/memory_hotplug: Move {get,put}_page_bootmem() " Muchun Song
2020-12-07 12:14   ` David Hildenbrand
2020-12-07 12:16     ` [External] " Muchun Song
2020-11-30 15:18 ` [PATCH v7 03/15] mm/hugetlb: Introduce a new config HUGETLB_PAGE_FREE_VMEMMAP Muchun Song
2020-12-07 12:19   ` David Hildenbrand
2020-12-07 12:42     ` [External] " Muchun Song
2020-12-07 12:47       ` David Hildenbrand
2020-12-07 13:22         ` Muchun Song
2020-11-30 15:18 ` [PATCH v7 04/15] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
2020-12-07 12:36   ` David Hildenbrand
2020-12-07 13:11     ` [External] " Muchun Song
2020-12-09  8:54       ` David Hildenbrand
2020-12-09  9:27         ` Muchun Song
2020-11-30 15:18 ` [PATCH v7 05/15] mm/bootmem_info: Introduce {free,prepare}_vmemmap_page() Muchun Song
2020-12-07 12:39   ` David Hildenbrand
2020-12-07 13:23     ` [External] " Muchun Song
2020-12-09  7:36     ` Muchun Song
2020-12-09  8:49       ` David Hildenbrand
2020-12-09  9:25         ` Muchun Song
2020-12-09  9:32           ` David Hildenbrand
2020-12-09  9:43             ` Muchun Song
2020-11-30 15:18 ` [PATCH v7 06/15] mm/hugetlb: Disable freeing vmemmap if struct page size is not power of two Muchun Song
2020-12-09  9:57   ` David Hildenbrand
2020-12-09 10:03     ` [External] " Muchun Song
2020-12-09 10:06       ` David Hildenbrand
2020-12-09 10:10         ` David Hildenbrand
2020-12-09 10:16           ` Muchun Song
2020-12-09 15:13           ` Muchun Song
2020-12-09 15:47             ` David Hildenbrand
2020-12-09 15:50               ` Muchun Song
2020-12-09 10:10         ` Muchun Song
2020-11-30 15:18 ` [PATCH v7 07/15] x86/mm/64: Disable PMD page mapping of vmemmap Muchun Song
2020-11-30 15:18 ` [PATCH v7 08/15] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page Muchun Song
2020-11-30 15:18 ` [PATCH v7 09/15] mm/hugetlb: Defer freeing of HugeTLB pages Muchun Song
2020-11-30 15:18 ` [PATCH v7 10/15] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page Muchun Song
2020-11-30 15:18 ` [PATCH v7 11/15] mm/hugetlb: Set the PageHWPoison to the raw error page Muchun Song
2020-11-30 15:18 ` [PATCH v7 12/15] mm/hugetlb: Flush work when dissolving hugetlb page Muchun Song
2020-11-30 15:18 ` [PATCH v7 13/15] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap Muchun Song
2020-12-04  0:01   ` Song Bao Hua (Barry Song)
2020-11-30 15:18 ` [PATCH v7 14/15] mm/hugetlb: Gather discrete indexes of tail page Muchun Song
2020-11-30 15:18 ` [PATCH v7 15/15] mm/hugetlb: Add BUILD_BUG_ON to catch invalid usage of tail struct page Muchun Song
2020-12-03  8:35 ` [PATCH v7 00/15] Free some vmemmap pages of hugetlb page Muchun Song
2020-12-03 23:48   ` Mike Kravetz
2020-12-04  3:39     ` [External] " Muchun Song
2020-12-07 18:38       ` Oscar Salvador
2020-12-08  2:26         ` Muchun Song
