* [PATCH v16 0/9] Free some vmemmap pages of HugeTLB page
@ 2021-02-19 10:49 Muchun Song
  2021-02-19 10:49 ` [PATCH v16 1/9] mm: memory_hotplug: factor out bootmem core functions to bootmem_info.c Muchun Song
                   ` (8 more replies)
  0 siblings, 9 replies; 25+ messages in thread
From: Muchun Song @ 2021-02-19 10:49 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david, naoya.horiguchi, joao.m.martins
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Hi all,

This patch series frees some vmemmap pages (struct page structures)
associated with each HugeTLB page when it is preallocated, in order to
save memory.

In order to reduce the difficulty of code review for the first version,
from this version on we disable PMD/huge page mapping of vmemmap when
this feature is enabled. This actually eliminates a bunch of complex
page table manipulation code. Once this patch series is solid, we can
add the vmemmap page table manipulation code in the future.

The struct page structures (page structs) are used to describe a physical
page frame. By default, there is a one-to-one mapping from a page frame to
its corresponding page struct.

HugeTLB pages consist of multiple base page size pages and are supported
by many architectures. See hugetlbpage.rst in the Documentation directory
for more details. On the x86 architecture, HugeTLB pages of size 2MB and 1GB
are currently supported. Since the base page size on x86 is 4KB, a 2MB
HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
4096 base pages. For each base page, there is a corresponding page struct.

Within the HugeTLB subsystem, only the first 4 page structs are used to
contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
provides this upper limit. The only 'useful' information in the remaining
page structs is the compound_head field, and this field is the same for all
tail pages.
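
For reference, this is how compound_head() decodes that field
(essentially as implemented in include/linux/page-flags.h): bit 0 of
page->compound_head marks a tail page, and the rest of the word is the
address of the head page struct.

    static inline struct page *compound_head(struct page *page)
    {
            unsigned long head = READ_ONCE(page->compound_head);

            /* Bit 0 set: this is a tail; the rest is the head pointer. */
            if (unlikely(head & 1))
                    return (struct page *)(head - 1);
            return page;
    }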

By removing redundant page structs for HugeTLB pages, memory can be returned to
the buddy allocator for other uses.

When the system boots up, every 2MB HugeTLB page has 512 struct page
structs, which occupy 8 pages (sizeof(struct page) * 512 / PAGE_SIZE).
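
For concreteness, a minimal userspace sketch of that arithmetic
(assuming 4KB base pages and a 64-byte struct page, as on x86-64):

    #include <stdio.h>

    int main(void)
    {
            unsigned long page_size   = 4096;       /* base page size */
            unsigned long struct_page = 64;         /* sizeof(struct page) */
            unsigned long hpage_2m    = 2UL << 20;  /* 2MB HugeTLB page */

            unsigned long base_pages = hpage_2m / page_size;                 /* 512 */
            unsigned long vmemmap    = base_pages * struct_page / page_size; /* 8 */

            printf("%lu base pages, %lu vmemmap pages\n", base_pages, vmemmap);
            return 0;
    }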

    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
 +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
 |           |                     |     0     | -------------> |     0     |
 |           |                     +-----------+                +-----------+
 |           |                     |     1     | -------------> |     1     |
 |           |                     +-----------+                +-----------+
 |           |                     |     2     | -------------> |     2     |
 |           |                     +-----------+                +-----------+
 |           |                     |     3     | -------------> |     3     |
 |           |                     +-----------+                +-----------+
 |           |                     |     4     | -------------> |     4     |
 |    2MB    |                     +-----------+                +-----------+
 |           |                     |     5     | -------------> |     5     |
 |           |                     +-----------+                +-----------+
 |           |                     |     6     | -------------> |     6     |
 |           |                     +-----------+                +-----------+
 |           |                     |     7     | -------------> |     7     |
 |           |                     +-----------+                +-----------+
 |           |
 |           |
 |           |
 +-----------+

The value of page->compound_head is the same for all tail pages. The first
page of page structs (page 0) associated with the HugeTLB page contains the 4
page structs necessary to describe the HugeTLB. The only use of the remaining
pages of page structs (page 1 to page 7) is to point to page->compound_head.
Therefore, we can remap pages 2 to 7 to page 1. Only 2 pages of page structs
will be used for each HugeTLB page. This will allow us to free the remaining
6 pages to the buddy allocator.

Here is how things look after remapping.

    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
 +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
 |           |                     |     0     | -------------> |     0     |
 |           |                     +-----------+                +-----------+
 |           |                     |     1     | -------------> |     1     |
 |           |                     +-----------+                +-----------+
 |           |                     |     2     | ----------------^ ^ ^ ^ ^ ^
 |           |                     +-----------+                   | | | | |
 |           |                     |     3     | ------------------+ | | | |
 |           |                     +-----------+                     | | | |
 |           |                     |     4     | --------------------+ | | |
 |    2MB    |                     +-----------+                       | | |
 |           |                     |     5     | ----------------------+ | |
 |           |                     +-----------+                         | |
 |           |                     |     6     | ------------------------+ |
 |           |                     +-----------+                           |
 |           |                     |     7     | --------------------------+
 |           |                     +-----------+
 |           |
 |           |
 |           |
 +-----------+

When a HugeTLB page is freed to the buddy system, we must allocate 6
vmemmap pages and restore the previous mapping relationship.

Apart from the 2MB HugeTLB page, we also have the 1GB HugeTLB page. The
situation is similar to the 2MB HugeTLB page, and we can use the same
approach to free the vmemmap pages.

In this case, for each 1GB HugeTLB page, we can save 4094 pages. This is
a very substantial gain. On our servers, we run SPDK/QEMU applications
that use 1024GB of HugeTLB pages. With this feature enabled, we can save
~16GB (with 1GB hugepages) or ~12GB (with 2MB hugepages) of memory.
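
A hedged back-of-the-envelope check of those numbers (each HugeTLB page
keeps 2 vmemmap pages and frees the rest):

    #include <stdio.h>

    int main(void)
    {
            double kb = 1024.0;
            double pool_kb = 1024.0 * 1024 * 1024;  /* 1024GB pool, in KB */

            /* vmemmap pages per HugeTLB page: 8 for 2MB, 4096 for 1GB */
            double saved_2m = pool_kb / (2 * kb) * (8 - 2) * 4 / kb / kb;
            double saved_1g = pool_kb / (kb * kb) * (4096 - 2) * 4 / kb / kb;

            printf("2MB hugepages: ~%.0fGB saved\n", saved_2m);  /* ~12GB */
            printf("1GB hugepages: ~%.0fGB saved\n", saved_1g);  /* ~16GB */
            return 0;
    }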

Because the vmemmap page tables are reconstructed on the freeing and
allocating paths, some overhead is added. Here is an overhead analysis.

1) Allocating 10240 2MB hugetlb pages.

   a) With this patch series applied:
   # time echo 10240 > /proc/sys/vm/nr_hugepages

   real     0m0.166s
   user     0m0.000s
   sys      0m0.166s

   # bpftrace -e 'kprobe:alloc_fresh_huge_page { @start[tid] = nsecs; }
     kretprobe:alloc_fresh_huge_page /@start[tid]/ { @latency = hist(nsecs -
     @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [8K, 16K)           5476 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [16K, 32K)          4760 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@       |
   [32K, 64K)             4 |                                                    |

   b) Without this patch series:
   # time echo 10240 > /proc/sys/vm/nr_hugepages

   real     0m0.067s
   user     0m0.000s
   sys      0m0.067s

   # bpftrace -e 'kprobe:alloc_fresh_huge_page { @start[tid] = nsecs; }
     kretprobe:alloc_fresh_huge_page /@start[tid]/ { @latency = hist(nsecs -
     @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [4K, 8K)           10147 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [8K, 16K)             93 |                                                    |

   Summary: allocation with this feature is about ~2x slower than before.

2) Freeing 10240 2MB hugetlb pages.

   a) With this patch series applied:
   # time echo 0 > /proc/sys/vm/nr_hugepages

   real     0m0.213s
   user     0m0.000s
   sys      0m0.213s

   # bpftrace -e 'kprobe:free_pool_huge_page { @start[tid] = nsecs; }
     kretprobe:free_pool_huge_page /@start[tid]/ { @latency = hist(nsecs -
     @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [8K, 16K)              6 |                                                    |
   [16K, 32K)         10227 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [32K, 64K)             7 |                                                    |

   b) Without this patch series:
   # time echo 0 > /proc/sys/vm/nr_hugepages

   real     0m0.081s
   user     0m0.000s
   sys      0m0.081s

   # bpftrace -e 'kprobe:free_pool_huge_page { @start[tid] = nsecs; }
     kretprobe:free_pool_huge_page /@start[tid]/ { @latency = hist(nsecs -
     @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [4K, 8K)            6805 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [8K, 16K)           3427 |@@@@@@@@@@@@@@@@@@@@@@@@@@                          |
   [16K, 32K)             8 |                                                    |

   Summary: __free_hugepage is about ~2-3x slower than before.

Although the overhead has increased, the overhead is not significant. Like Mike
said, "However, remember that the majority of use cases create hugetlb pages at
or shortly after boot time and add them to the pool. So, additional overhead is
at pool creation time. There is no change to 'normal run time' operations of
getting a page from or returning a page to the pool (think page fault/unmap)".

Despite the overhead, and in addition to the memory gains from this
series, the following data was obtained by Joao Martins. Many thanks
for his effort.

There is an additional benefit: page (un)pinners will see an
improvement, presumably (per Joao) because there are fewer memmap pages
and thus the tail/head pages stay in the cache more often.

Out of the box Joao saw (when comparing linux-next against linux-next + this series)
with gup_test and pinning a 16G hugetlb file (with 1G pages):

	get_user_pages(): ~32k -> ~9k
	unpin_user_pages(): ~75k -> ~70k

Usually any tight loop fetching compound_head(), or reading tail page
data (e.g. compound_head), benefits a lot. There are some unpinning
inefficiencies Joao was fixing[0]; with that fix added, the improvement
shows even more:

	unpin_user_pages(): ~27k -> ~3.8k

[0] https://lore.kernel.org/linux-mm/20210204202500.26474-1-joao.m.martins@oracle.com/

Todo:
  - Free all of the tail vmemmap pages
    Currently, for a 2MB HugeTLB page, we only free 6 vmemmap pages while we
    really could free 7. In that case, 8 of the 512 struct page structures
    would read as having the PG_head flag set, so compound_head() would need
    to be adjusted slightly to return the real head struct page when its
    parameter is a tail struct page that has the PG_head flag set (see the
    sketch after this list).

    In order to make the code evolution route clearer, this feature can be a
    separate patch after this patchset is solid.

  - Support for other architectures (e.g. aarch64).
  - Enable PMD/huge page mapping of vmemmap even when this feature is enabled.
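
A hedged sketch of what the compound_head() adjustment mentioned in the
first Todo item could look like (hypothetical; find_real_head() is an
illustrative name, not code from this series): when a struct page reads
as a head, peek at the struct page after it, which encodes
"real head | 1" both for a genuine first tail and for an aliased fake
head.

    /* Hypothetical sketch only; not part of this patch series. */
    static inline struct page *find_real_head(struct page *page)
    {
            unsigned long head;

            if (!PageHead(page))
                    return compound_head(page);

            /*
             * page[1] is the first tail page: for a genuine head it
             * encodes "page | 1"; for a fake head (an aliased tail with
             * PG_head set) it encodes the real head the same way.
             */
            head = READ_ONCE(page[1].compound_head);
            if (head & 1)
                    return (struct page *)(head - 1);
            return page;
    }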

Changelog in v15 -> v16:
  - Use GFP_KERNEL to allocate vmemmap pages.

  Thanks to Mike, David H and Michal's suggestions.

Changelog in v14 -> v15:
  - Fix some issues suggested by Oscar. Thanks to Oscar.
  - Add numbers from Joao Martins' testing to the cover letter. Thanks for his effort.

Changelog in v13 -> v14:
  - Refuse to free the HugeTLB page when the system is under memory pressure.
  - Use GFP_ATOMIC to allocate vmemmap pages instead of GFP_KERNEL.
  - Rebase to linux-next 20210202.
  - Fix and add some comments for vmemmap_remap_free().

  Thanks to Oscar, Mike, David H and David R's suggestions and review.

Changelog in v12 -> v13:
  - Remove VM_WARN_ON_PAGE macro.
  - Add more comments in vmemmap_pte_range() and vmemmap_remap_free().

  Thanks to Oscar and Mike's suggestions and review.

Changelog in v11 -> v12:
  - Move VM_WARN_ON_PAGE to a separate patch.
  - Call __free_hugepage() with hugetlb_lock (See patch #5.) to serialize
    with dissolve_free_huge_page(). It is to prepare for patch #9.
  - Introduce PageHugeInflight. See patch #9.

Changelog in v10 -> v11:
  - Fix compiler error when !CONFIG_HUGETLB_PAGE_FREE_VMEMMAP.
  - Rework some comments and commit changes.
  - Rework vmemmap_remap_free() to 3 parameters.

  Thanks to Oscar and Mike's suggestions and review.

Changelog in v9 -> v10:
  - Fix a bug in patch #11. Thanks to Oscar for pointing that out.
  - Rework some commit log or comments. Thanks Mike and Oscar for the suggestions.
  - Drop VMEMMAP_TAIL_PAGE_REUSE in the patch #3.

  Thank you very much Mike and Oscar for reviewing the code.

Changelog in v8 -> v9:
  - Rework some code. Many thanks to Oscar.
  - Put all the non-hugetlb vmemmap functions under sparsemem-vmemmap.c.

Changelog in v7 -> v8:
  - Adjust the order of patches.

  Many thanks to David and Oscar. Your suggestions are very valuable.

Changelog in v6 -> v7:
  - Rebase to linux-next 20201130
  - Do not use basepage mapping for vmemmap when this feature is disabled.
  - Rework some patches.
    [PATCH v6 08/16] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
    [PATCH v6 10/16] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page

  Thanks to Oscar and Barry.

Changelog in v5 -> v6:
  - Disable PMD/huge page mapping of vmemmap if this feature was enabled.
  - Simplify the first version code.

Changelog in v4 -> v5:
  - Rework some comments and code in [PATCH v4 04/21] and [PATCH v4 05/21].

  Thanks to Mike and Oscar's suggestions.

Changelog in v3 -> v4:
  - Move all the vmemmap functions to hugetlb_vmemmap.c.
  - Make CONFIG_HUGETLB_PAGE_FREE_VMEMMAP default to y; if we want to
    disable this feature, we can do so via a boot/kernel command line parameter.
  - Remove vmemmap_pgtable_{init, deposit, withdraw}() helper functions.
  - Initialize page table lock for vmemmap through core_initcall mechanism.

  Thanks to Mike and Oscar's suggestions.

Changelog in v2 -> v3:
  - Rename some helper function names. Thanks Mike.
  - Rework some code. Thanks Mike and Oscar.
  - Remap the tail vmemmap page with PAGE_KERNEL_RO instead of PAGE_KERNEL.
    Thanks Matthew.
  - Add some overhead analysis in the cover letter.
  - Use the vmemmap pmd table lock instead of a hugetlb-specific global lock.

Changelog in v1 -> v2:
  - Fix: do not call dissolve_compound_page() in alloc_huge_page_vmemmap().
  - Fix some typo and code style problems.
  - Remove unused handle_vmemmap_fault().
  - Merge some commits to one commit suggested by Mike.

Muchun Song (9):
  mm: memory_hotplug: factor out bootmem core functions to
    bootmem_info.c
  mm: hugetlb: introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  mm: hugetlb: free the vmemmap pages associated with each HugeTLB page
  mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  mm: hugetlb: set the PageHWPoison to the raw error page
  mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap
  mm: hugetlb: introduce nr_free_vmemmap_pages in the struct hstate
  mm: hugetlb: gather discrete indexes of tail page
  mm: hugetlb: optimize the code with the help of the compiler

 Documentation/admin-guide/kernel-parameters.txt |  14 ++
 Documentation/admin-guide/mm/hugetlbpage.rst    |  12 +
 arch/x86/mm/init_64.c                           |  13 +-
 fs/Kconfig                                      |   6 +
 include/linux/bootmem_info.h                    |  65 ++++++
 include/linux/hugetlb.h                         |  47 +++-
 include/linux/hugetlb_cgroup.h                  |  19 +-
 include/linux/memory_hotplug.h                  |  27 ---
 include/linux/mm.h                              |   5 +
 mm/Makefile                                     |   2 +
 mm/bootmem_info.c                               | 124 ++++++++++
 mm/hugetlb.c                                    | 166 +++++++++++---
 mm/hugetlb_vmemmap.c                            | 293 ++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h                            |  51 +++++
 mm/memory_hotplug.c                             | 116 ----------
 mm/sparse-vmemmap.c                             | 280 ++++++++++++++++++++++
 mm/sparse.c                                     |   1 +
 17 files changed, 1054 insertions(+), 187 deletions(-)
 create mode 100644 include/linux/bootmem_info.h
 create mode 100644 mm/bootmem_info.c
 create mode 100644 mm/hugetlb_vmemmap.c
 create mode 100644 mm/hugetlb_vmemmap.h

-- 
2.11.0



* [PATCH v16 1/9] mm: memory_hotplug: factor out bootmem core functions to bootmem_info.c
  2021-02-19 10:49 [PATCH v16 0/9] Free some vmemmap pages of HugeTLB page Muchun Song
@ 2021-02-19 10:49 ` Muchun Song
  2021-02-19 10:49 ` [PATCH v16 2/9] mm: hugetlb: introduce a new config HUGETLB_PAGE_FREE_VMEMMAP Muchun Song
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 25+ messages in thread
From: Muchun Song @ 2021-02-19 10:49 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david, naoya.horiguchi, joao.m.martins
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song, Miaohe Lin

Move the common bootmem info registration API into its own file,
bootmem_info.c. A later patch will use {get,put}_page_bootmem() to
initialize the page structs for the vmemmap pages, or to free the
vmemmap pages back to the buddy allocator, so move them out of
CONFIG_MEMORY_HOTPLUG_SPARSE. This is just code movement without any
functional change.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
---
 arch/x86/mm/init_64.c          |   3 +-
 include/linux/bootmem_info.h   |  40 +++++++++++++
 include/linux/memory_hotplug.h |  27 ---------
 mm/Makefile                    |   1 +
 mm/bootmem_info.c              | 124 +++++++++++++++++++++++++++++++++++++++++
 mm/memory_hotplug.c            | 116 --------------------------------------
 mm/sparse.c                    |   1 +
 7 files changed, 168 insertions(+), 144 deletions(-)
 create mode 100644 include/linux/bootmem_info.h
 create mode 100644 mm/bootmem_info.c

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index b5a3fa4033d3..0a45f062826e 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -33,6 +33,7 @@
 #include <linux/nmi.h>
 #include <linux/gfp.h>
 #include <linux/kcore.h>
+#include <linux/bootmem_info.h>
 
 #include <asm/processor.h>
 #include <asm/bios_ebda.h>
@@ -1571,7 +1572,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 	return err;
 }
 
-#if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_HAVE_BOOTMEM_INFO_NODE)
+#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
 void register_page_bootmem_memmap(unsigned long section_nr,
 				  struct page *start_page, unsigned long nr_pages)
 {
diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
new file mode 100644
index 000000000000..4ed6dee1adc9
--- /dev/null
+++ b/include/linux/bootmem_info.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __LINUX_BOOTMEM_INFO_H
+#define __LINUX_BOOTMEM_INFO_H
+
+#include <linux/mmzone.h>
+
+/*
+ * Types for free bootmem stored in page->lru.next. These have to be in
+ * some random range in unsigned long space for debugging purposes.
+ */
+enum {
+	MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
+	SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
+	MIX_SECTION_INFO,
+	NODE_INFO,
+	MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
+};
+
+#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
+void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
+
+void get_page_bootmem(unsigned long info, struct page *page,
+		      unsigned long type);
+void put_page_bootmem(struct page *page);
+#else
+static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
+{
+}
+
+static inline void put_page_bootmem(struct page *page)
+{
+}
+
+static inline void get_page_bootmem(unsigned long info, struct page *page,
+				    unsigned long type)
+{
+}
+#endif
+
+#endif /* __LINUX_BOOTMEM_INFO_H */
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 7288aa5ef73b..96659a8b9d02 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -18,18 +18,6 @@ struct vmem_altmap;
 #ifdef CONFIG_MEMORY_HOTPLUG
 struct page *pfn_to_online_page(unsigned long pfn);
 
-/*
- * Types for free bootmem stored in page->lru.next. These have to be in
- * some random range in unsigned long space for debugging purposes.
- */
-enum {
-	MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
-	SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
-	MIX_SECTION_INFO,
-	NODE_INFO,
-	MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
-};
-
 /* Types for control the zone type of onlined and offlined memory */
 enum {
 	/* Offline the memory. */
@@ -210,17 +198,6 @@ static inline void arch_refresh_nodedata(int nid, pg_data_t *pgdat)
 #endif /* CONFIG_NUMA */
 #endif /* CONFIG_HAVE_ARCH_NODEDATA_EXTENSION */
 
-#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
-extern void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
-#else
-static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-}
-#endif
-extern void put_page_bootmem(struct page *page);
-extern void get_page_bootmem(unsigned long ingo, struct page *page,
-			     unsigned long type);
-
 void get_online_mems(void);
 void put_online_mems(void);
 
@@ -248,10 +225,6 @@ static inline void zone_span_writelock(struct zone *zone) {}
 static inline void zone_span_writeunlock(struct zone *zone) {}
 static inline void zone_seqlock_init(struct zone *zone) {}
 
-static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-}
-
 static inline int try_online_node(int nid)
 {
 	return 0;
diff --git a/mm/Makefile b/mm/Makefile
index b2a564eec27f..ce4ddbe4461d 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -83,6 +83,7 @@ obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KASAN)	+= kasan/
 obj-$(CONFIG_KFENCE) += kfence/
 obj-$(CONFIG_FAILSLAB) += failslab.o
+obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_MEMTEST)		+= memtest.o
 obj-$(CONFIG_MIGRATION) += migrate.o
diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
new file mode 100644
index 000000000000..fcab5a3f8cc0
--- /dev/null
+++ b/mm/bootmem_info.c
@@ -0,0 +1,124 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ *  linux/mm/bootmem_info.c
+ *
+ *  Copyright (C)
+ */
+#include <linux/mm.h>
+#include <linux/compiler.h>
+#include <linux/memblock.h>
+#include <linux/bootmem_info.h>
+#include <linux/memory_hotplug.h>
+
+void get_page_bootmem(unsigned long info, struct page *page, unsigned long type)
+{
+	page->freelist = (void *)type;
+	SetPagePrivate(page);
+	set_page_private(page, info);
+	page_ref_inc(page);
+}
+
+void put_page_bootmem(struct page *page)
+{
+	unsigned long type;
+
+	type = (unsigned long) page->freelist;
+	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
+	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
+
+	if (page_ref_dec_return(page) == 1) {
+		page->freelist = NULL;
+		ClearPagePrivate(page);
+		set_page_private(page, 0);
+		INIT_LIST_HEAD(&page->lru);
+		free_reserved_page(page);
+	}
+}
+
+#ifndef CONFIG_SPARSEMEM_VMEMMAP
+static void register_page_bootmem_info_section(unsigned long start_pfn)
+{
+	unsigned long mapsize, section_nr, i;
+	struct mem_section *ms;
+	struct page *page, *memmap;
+	struct mem_section_usage *usage;
+
+	section_nr = pfn_to_section_nr(start_pfn);
+	ms = __nr_to_section(section_nr);
+
+	/* Get section's memmap address */
+	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+
+	/*
+	 * Get page for the memmap's phys address
+	 * XXX: need more consideration for sparse_vmemmap...
+	 */
+	page = virt_to_page(memmap);
+	mapsize = sizeof(struct page) * PAGES_PER_SECTION;
+	mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
+
+	/* remember memmap's page */
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, SECTION_INFO);
+
+	usage = ms->usage;
+	page = virt_to_page(usage);
+
+	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
+
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
+
+}
+#else /* CONFIG_SPARSEMEM_VMEMMAP */
+static void register_page_bootmem_info_section(unsigned long start_pfn)
+{
+	unsigned long mapsize, section_nr, i;
+	struct mem_section *ms;
+	struct page *page, *memmap;
+	struct mem_section_usage *usage;
+
+	section_nr = pfn_to_section_nr(start_pfn);
+	ms = __nr_to_section(section_nr);
+
+	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+
+	register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
+
+	usage = ms->usage;
+	page = virt_to_page(usage);
+
+	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
+
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
+}
+#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
+
+void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
+{
+	unsigned long i, pfn, end_pfn, nr_pages;
+	int node = pgdat->node_id;
+	struct page *page;
+
+	nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
+	page = virt_to_page(pgdat);
+
+	for (i = 0; i < nr_pages; i++, page++)
+		get_page_bootmem(node, page, NODE_INFO);
+
+	pfn = pgdat->node_start_pfn;
+	end_pfn = pgdat_end_pfn(pgdat);
+
+	/* register section info */
+	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
+		/*
+		 * Some platforms can assign the same pfn to multiple nodes - on
+		 * node0 as well as nodeN.  To avoid registering a pfn against
+		 * multiple nodes we check that this pfn does not already
+		 * reside in some other nodes.
+		 */
+		if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node))
+			register_page_bootmem_info_section(pfn);
+	}
+}
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 5ba51a8bdaeb..a2a72b617040 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -144,122 +144,6 @@ static void release_memory_resource(struct resource *res)
 }
 
 #ifdef CONFIG_MEMORY_HOTPLUG_SPARSE
-void get_page_bootmem(unsigned long info,  struct page *page,
-		      unsigned long type)
-{
-	page->freelist = (void *)type;
-	SetPagePrivate(page);
-	set_page_private(page, info);
-	page_ref_inc(page);
-}
-
-void put_page_bootmem(struct page *page)
-{
-	unsigned long type;
-
-	type = (unsigned long) page->freelist;
-	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
-	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
-
-	if (page_ref_dec_return(page) == 1) {
-		page->freelist = NULL;
-		ClearPagePrivate(page);
-		set_page_private(page, 0);
-		INIT_LIST_HEAD(&page->lru);
-		free_reserved_page(page);
-	}
-}
-
-#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
-#ifndef CONFIG_SPARSEMEM_VMEMMAP
-static void register_page_bootmem_info_section(unsigned long start_pfn)
-{
-	unsigned long mapsize, section_nr, i;
-	struct mem_section *ms;
-	struct page *page, *memmap;
-	struct mem_section_usage *usage;
-
-	section_nr = pfn_to_section_nr(start_pfn);
-	ms = __nr_to_section(section_nr);
-
-	/* Get section's memmap address */
-	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-	/*
-	 * Get page for the memmap's phys address
-	 * XXX: need more consideration for sparse_vmemmap...
-	 */
-	page = virt_to_page(memmap);
-	mapsize = sizeof(struct page) * PAGES_PER_SECTION;
-	mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
-
-	/* remember memmap's page */
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, SECTION_INFO);
-
-	usage = ms->usage;
-	page = virt_to_page(usage);
-
-	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-
-}
-#else /* CONFIG_SPARSEMEM_VMEMMAP */
-static void register_page_bootmem_info_section(unsigned long start_pfn)
-{
-	unsigned long mapsize, section_nr, i;
-	struct mem_section *ms;
-	struct page *page, *memmap;
-	struct mem_section_usage *usage;
-
-	section_nr = pfn_to_section_nr(start_pfn);
-	ms = __nr_to_section(section_nr);
-
-	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-	register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
-
-	usage = ms->usage;
-	page = virt_to_page(usage);
-
-	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-}
-#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
-
-void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-	unsigned long i, pfn, end_pfn, nr_pages;
-	int node = pgdat->node_id;
-	struct page *page;
-
-	nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
-	page = virt_to_page(pgdat);
-
-	for (i = 0; i < nr_pages; i++, page++)
-		get_page_bootmem(node, page, NODE_INFO);
-
-	pfn = pgdat->node_start_pfn;
-	end_pfn = pgdat_end_pfn(pgdat);
-
-	/* register section info */
-	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
-		/*
-		 * Some platforms can assign the same pfn to multiple nodes - on
-		 * node0 as well as nodeN.  To avoid registering a pfn against
-		 * multiple nodes we check that this pfn does not already
-		 * reside in some other nodes.
-		 */
-		if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node))
-			register_page_bootmem_info_section(pfn);
-	}
-}
-#endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */
-
 static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
 		const char *reason)
 {
diff --git a/mm/sparse.c b/mm/sparse.c
index 7bd23f9d6cef..87676bf3af40 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -13,6 +13,7 @@
 #include <linux/vmalloc.h>
 #include <linux/swap.h>
 #include <linux/swapops.h>
+#include <linux/bootmem_info.h>
 
 #include "internal.h"
 #include <asm/dma.h>
-- 
2.11.0



* [PATCH v16 2/9] mm: hugetlb: introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  2021-02-19 10:49 [PATCH v16 0/9] Free some vmemmap pages of HugeTLB page Muchun Song
  2021-02-19 10:49 ` [PATCH v16 1/9] mm: memory_hotplug: factor out bootmem core functions to bootmem_info.c Muchun Song
@ 2021-02-19 10:49 ` Muchun Song
  2021-02-19 10:49 ` [PATCH v16 3/9] mm: hugetlb: free the vmemmap pages associated with each HugeTLB page Muchun Song
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 25+ messages in thread
From: Muchun Song @ 2021-02-19 10:49 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david, naoya.horiguchi, joao.m.martins
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song, Miaohe Lin

The option HUGETLB_PAGE_FREE_VMEMMAP allows for the freeing of
some vmemmap pages associated with pre-allocated HugeTLB pages.
For example, on X86_64, 6 vmemmap pages of size 4KB each can be
saved for each 2MB HugeTLB page. 4094 vmemmap pages of size 4KB
each can be saved for each 1GB HugeTLB page.

When a HugeTLB page is allocated or freed, the vmemmap array
representing the range associated with the page will need to be
remapped. When a page is allocated, vmemmap pages are freed
after remapping. When a page is freed, previously discarded
vmemmap pages must be allocated before remapping.

The config option is introduced early so that supporting code
can be written to depend on the option. The initial version of
the code only provides support for x86-64.

Like other code which frees vmemmap, this config option depends on
HAVE_BOOTMEM_INFO_NODE. The routine register_page_bootmem_info() is
used to register bootmem info. Therefore, make sure
register_page_bootmem_info is enabled if HUGETLB_PAGE_FREE_VMEMMAP
is defined.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
---
 arch/x86/mm/init_64.c | 2 +-
 fs/Kconfig            | 6 ++++++
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0a45f062826e..0435bee2e172 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1225,7 +1225,7 @@ static struct kcore_list kcore_vsyscall;
 
 static void __init register_page_bootmem_info(void)
 {
-#ifdef CONFIG_NUMA
+#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)
 	int i;
 
 	for_each_online_node(i)
diff --git a/fs/Kconfig b/fs/Kconfig
index 97e7b77c9309..de87f234f1e9 100644
--- a/fs/Kconfig
+++ b/fs/Kconfig
@@ -237,6 +237,12 @@ config HUGETLBFS
 config HUGETLB_PAGE
 	def_bool HUGETLBFS
 
+config HUGETLB_PAGE_FREE_VMEMMAP
+	def_bool HUGETLB_PAGE
+	depends on X86_64
+	depends on SPARSEMEM_VMEMMAP
+	depends on HAVE_BOOTMEM_INFO_NODE
+
 config MEMFD_CREATE
 	def_bool TMPFS || HUGETLBFS
 
-- 
2.11.0



* [PATCH v16 3/9] mm: hugetlb: free the vmemmap pages associated with each HugeTLB page
  2021-02-19 10:49 [PATCH v16 0/9] Free some vmemmap pages of HugeTLB page Muchun Song
  2021-02-19 10:49 ` [PATCH v16 1/9] mm: memory_hotplug: factor out bootmem core functions to bootmem_info.c Muchun Song
  2021-02-19 10:49 ` [PATCH v16 2/9] mm: hugetlb: introduce a new config HUGETLB_PAGE_FREE_VMEMMAP Muchun Song
@ 2021-02-19 10:49 ` Muchun Song
  2021-02-19 10:49 ` [PATCH v16 4/9] mm: hugetlb: alloc " Muchun Song
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 25+ messages in thread
From: Muchun Song @ 2021-02-19 10:49 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david, naoya.horiguchi, joao.m.martins
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Every HugeTLB page has more than one struct page structure. We __know__
that we only use the first 4 (HUGETLB_CGROUP_MIN_ORDER) struct page
structures to store metadata associated with each HugeTLB page.

There are a lot of struct page structures associated with each HugeTLB
page. For tail pages, the value of compound_head is the same. So we can
reuse the first page of the tail page structures: we map the virtual
addresses of the remaining tail page structures to the first tail page
struct, and then free these page frames. Therefore, we need to reserve
two pages as vmemmap areas.

When we allocate a HugeTLB page from the buddy, we can free some vmemmap
pages associated with each HugeTLB page. It is more appropriate to do it
in prep_new_huge_page().

The free_vmemmap_pages_per_hpage(), which indicates how many vmemmap
pages associated with a HugeTLB page can be freed, returns zero for
now, which means the feature is disabled. We will enable it once all
the infrastructure is there.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
---
 include/linux/bootmem_info.h |  27 +++++-
 include/linux/mm.h           |   3 +
 mm/Makefile                  |   1 +
 mm/hugetlb.c                 |   3 +
 mm/hugetlb_vmemmap.c         | 219 +++++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h         |  20 ++++
 mm/sparse-vmemmap.c          | 207 ++++++++++++++++++++++++++++++++++++++++
 7 files changed, 479 insertions(+), 1 deletion(-)
 create mode 100644 mm/hugetlb_vmemmap.c
 create mode 100644 mm/hugetlb_vmemmap.h

diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
index 4ed6dee1adc9..ec03a624dfa2 100644
--- a/include/linux/bootmem_info.h
+++ b/include/linux/bootmem_info.h
@@ -2,7 +2,7 @@
 #ifndef __LINUX_BOOTMEM_INFO_H
 #define __LINUX_BOOTMEM_INFO_H
 
-#include <linux/mmzone.h>
+#include <linux/mm.h>
 
 /*
  * Types for free bootmem stored in page->lru.next. These have to be in
@@ -22,6 +22,27 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
 void get_page_bootmem(unsigned long info, struct page *page,
 		      unsigned long type);
 void put_page_bootmem(struct page *page);
+
+/*
+ * Any memory allocated via the memblock allocator and not via the
+ * buddy will be marked reserved already in the memmap. For those
+ * pages, we can call this function to free them to the buddy allocator.
+ */
+static inline void free_bootmem_page(struct page *page)
+{
+	unsigned long magic = (unsigned long)page->freelist;
+
+	/*
+	 * The reserve_bootmem_region sets the reserved flag on bootmem
+	 * pages.
+	 */
+	VM_BUG_ON_PAGE(page_ref_count(page) != 2, page);
+
+	if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
+		put_page_bootmem(page);
+	else
+		VM_BUG_ON_PAGE(1, page);
+}
 #else
 static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
 {
@@ -35,6 +56,10 @@ static inline void get_page_bootmem(unsigned long info, struct page *page,
 				    unsigned long type)
 {
 }
+
+static inline void free_bootmem_page(struct page *page)
+{
+}
 #endif
 
 #endif /* __LINUX_BOOTMEM_INFO_H */
diff --git a/include/linux/mm.h b/include/linux/mm.h
index a608feb0d42e..d7dddf334779 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2979,6 +2979,9 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
 }
 #endif
 
+void vmemmap_remap_free(unsigned long start, unsigned long end,
+			unsigned long reuse);
+
 void *sparse_buffer_alloc(unsigned long size);
 struct page * __populate_section_memmap(unsigned long pfn,
 		unsigned long nr_pages, int nid, struct vmem_altmap *altmap);
diff --git a/mm/Makefile b/mm/Makefile
index ce4ddbe4461d..47b250e4c9b2 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -71,6 +71,7 @@ obj-$(CONFIG_FRONTSWAP)	+= frontswap.o
 obj-$(CONFIG_ZSWAP)	+= zswap.o
 obj-$(CONFIG_HAS_DMA)	+= dmapool.o
 obj-$(CONFIG_HUGETLBFS)	+= hugetlb.o
+obj-$(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)	+= hugetlb_vmemmap.o
 obj-$(CONFIG_NUMA) 	+= mempolicy.o
 obj-$(CONFIG_SPARSEMEM)	+= sparse.o
 obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 8c53f3f2e12e..4cfca27c6d32 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -42,6 +42,7 @@
 #include <linux/userfaultfd_k.h>
 #include <linux/page_owner.h>
 #include "internal.h"
+#include "hugetlb_vmemmap.h"
 
 int hugetlb_max_hstate __read_mostly;
 unsigned int default_hstate_idx;
@@ -1462,6 +1463,8 @@ void free_huge_page(struct page *page)
 
 static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 {
+	free_huge_page_vmemmap(h, page);
+
 	INIT_LIST_HEAD(&page->lru);
 	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
 	set_hugetlb_cgroup(page, NULL);
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
new file mode 100644
index 000000000000..0209b736e0b4
--- /dev/null
+++ b/mm/hugetlb_vmemmap.c
@@ -0,0 +1,219 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ *     Author: Muchun Song <songmuchun@bytedance.com>
+ *
+ * The struct page structures (page structs) are used to describe a physical
+ * page frame. By default, there is a one-to-one mapping from a page frame to
+ * its corresponding page struct.
+ *
+ * HugeTLB pages consist of multiple base page size pages and are supported by
+ * many architectures. See hugetlbpage.rst in the Documentation directory for
+ * more details. On the x86-64 architecture, HugeTLB pages of size 2MB and 1GB
+ * are currently supported. Since the base page size on x86 is 4KB, a 2MB
+ * HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
+ * 4096 base pages. For each base page, there is a corresponding page struct.
+ *
+ * Within the HugeTLB subsystem, only the first 4 page structs are used to
+ * contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
+ * provides this upper limit. The only 'useful' information in the remaining
+ * page structs is the compound_head field, and this field is the same for all
+ * tail pages.
+ *
+ * By removing redundant page structs for HugeTLB pages, memory can be returned
+ * to the buddy allocator for other uses.
+ *
+ * Different architectures support different HugeTLB pages. For example, the
+ * following table is the HugeTLB page size supported by x86 and arm64
+ * architectures. Because arm64 supports 4k, 16k, and 64k base pages and
+ * supports contiguous entries, it supports many kinds of sizes of HugeTLB
+ * page.
+ *
+ * +--------------+-----------+-----------------------------------------------+
+ * | Architecture | Page Size |                HugeTLB Page Size              |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ * |    x86-64    |    4KB    |    2MB    |    1GB    |           |           |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ * |              |    4KB    |   64KB    |    2MB    |    32MB   |    1GB    |
+ * |              +-----------+-----------+-----------+-----------+-----------+
+ * |    arm64     |   16KB    |    2MB    |   32MB    |     1GB   |           |
+ * |              +-----------+-----------+-----------+-----------+-----------+
+ * |              |   64KB    |    2MB    |  512MB    |    16GB   |           |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ *
+ * When the system boots up, every HugeTLB page has more than one struct page
+ * struct; their total size is (unit: pages):
+ *
+ *    struct_size = HugeTLB_Size / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE
+ *
+ * Where HugeTLB_Size is the size of the HugeTLB page. We know that the size
+ * of the HugeTLB page is always n times PAGE_SIZE. So we can get the following
+ * relationship.
+ *
+ *    HugeTLB_Size = n * PAGE_SIZE
+ *
+ * Then,
+ *
+ *    struct_size = n * PAGE_SIZE / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE
+ *                = n * sizeof(struct page) / PAGE_SIZE
+ *
+ * We can use huge mapping at the pud/pmd level for the HugeTLB page.
+ *
+ * For the HugeTLB page of the pmd level mapping, then
+ *
+ *    struct_size = n * sizeof(struct page) / PAGE_SIZE
+ *                = PAGE_SIZE / sizeof(pte_t) * sizeof(struct page) / PAGE_SIZE
+ *                = sizeof(struct page) / sizeof(pte_t)
+ *                = 64 / 8
+ *                = 8 (pages)
+ *
+ * Where n is how many pte entries one page can contain. So the value of
+ * n is (PAGE_SIZE / sizeof(pte_t)).
+ *
+ * This optimization only supports 64-bit systems, so the value of sizeof(pte_t)
+ * is 8. And this optimization is applicable only when the size of struct page
+ * is a power of two. In most cases, the size of struct page is 64 bytes (e.g.
+ * x86-64 and arm64). So if we use pmd level mapping for a HugeTLB page, its
+ * struct page structs occupy 8 page frames, whose size depends on the
+ * size of the base page.
+ *
+ * For the HugeTLB page of the pud level mapping, then
+ *
+ *    struct_size = PAGE_SIZE / sizeof(pmd_t) * struct_size(pmd)
+ *                = PAGE_SIZE / 8 * 8 (pages)
+ *                = PAGE_SIZE (pages)
+ *
+ * Where the struct_size(pmd) is the size of the struct page structs of a
+ * HugeTLB page of the pmd level mapping.
+ *
+ * E.g.: the vmemmap of a 2MB HugeTLB page on x86_64 consists of 8 page frames
+ * while that of a 1GB HugeTLB page consists of 4096.
+ *
+ * Next, we take the pmd level mapping of the HugeTLB page as an example to
+ * show the internal implementation of this optimization. There are 8 pages
+ * struct page structs associated with a HugeTLB page which is pmd mapped.
+ *
+ * Here is how things look before optimization.
+ *
+ *    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ * |           |                     |     0     | -------------> |     0     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     1     | -------------> |     1     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     2     | -------------> |     2     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     3     | -------------> |     3     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     4     | -------------> |     4     |
+ * |    PMD    |                     +-----------+                +-----------+
+ * |   level   |                     |     5     | -------------> |     5     |
+ * |  mapping  |                     +-----------+                +-----------+
+ * |           |                     |     6     | -------------> |     6     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     7     | -------------> |     7     |
+ * |           |                     +-----------+                +-----------+
+ * |           |
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ * The value of page->compound_head is the same for all tail pages. The first
+ * page of page structs (page 0) associated with the HugeTLB page contains the 4
+ * page structs necessary to describe the HugeTLB. The only use of the remaining
+ * pages of page structs (page 1 to page 7) is to point to page->compound_head.
+ * Therefore, we can remap pages 2 to 7 to page 1. Only 2 pages of page structs
+ * will be used for each HugeTLB page. This will allow us to free the remaining
+ * 6 pages to the buddy allocator.
+ *
+ * Here is how things look after remapping.
+ *
+ *    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ * |           |                     |     0     | -------------> |     0     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     1     | -------------> |     1     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     2     | ----------------^ ^ ^ ^ ^ ^
+ * |           |                     +-----------+                   | | | | |
+ * |           |                     |     3     | ------------------+ | | | |
+ * |           |                     +-----------+                     | | | |
+ * |           |                     |     4     | --------------------+ | | |
+ * |    PMD    |                     +-----------+                       | | |
+ * |   level   |                     |     5     | ----------------------+ | |
+ * |  mapping  |                     +-----------+                         | |
+ * |           |                     |     6     | ------------------------+ |
+ * |           |                     +-----------+                           |
+ * |           |                     |     7     | --------------------------+
+ * |           |                     +-----------+
+ * |           |
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ * When a HugeTLB is freed to the buddy system, we should allocate 6 pages for
+ * vmemmap pages and restore the previous mapping relationship.
+ *
+ * For the HugeTLB page of the pud level mapping, it is similar to the former.
+ * We can also use this approach to free (PAGE_SIZE - 2) vmemmap pages.
+ *
+ * Apart from the HugeTLB page of the pmd/pud level mapping, some architectures
+ * (e.g. aarch64) provide a contiguous bit in the translation table entries
+ * that hints to the MMU to indicate that it is one of a contiguous set of
+ * entries that can be cached in a single TLB entry.
+ *
+ * The contiguous bit is used to increase the mapping size at the pmd and pte
+ * (last) level. So this type of HugeTLB page can be optimized only when its
+ * size of the struct page structs is greater than 2 pages.
+ */
+#include "hugetlb_vmemmap.h"
+
+/*
+ * There are a lot of struct page structures associated with each HugeTLB page.
+ * For tail pages, the value of compound_head is the same. So we can reuse the
+ * first page of tail page structures. We map the virtual addresses of the remaining
+ * pages of tail page structures to the first tail page struct, and then free
+ * these page frames. Therefore, we need to reserve two pages as vmemmap areas.
+ */
+#define RESERVE_VMEMMAP_NR		2U
+#define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
+
+/*
+ * How many vmemmap pages associated with a HugeTLB page that can be freed
+ * to the buddy allocator.
+ *
+ * Todo: Returns zero for now, which means the feature is disabled. We will
+ * enable it once all the infrastructure is there.
+ */
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
+
+static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
+{
+	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
+}
+
+void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	unsigned long vmemmap_addr = (unsigned long)head;
+	unsigned long vmemmap_end, vmemmap_reuse;
+
+	if (!free_vmemmap_pages_per_hpage(h))
+		return;
+
+	vmemmap_addr += RESERVE_VMEMMAP_SIZE;
+	vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h);
+	vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
+
+	/*
+	 * Remap the vmemmap virtual address range [@vmemmap_addr, @vmemmap_end)
+	 * to the page which @vmemmap_reuse is mapped to, then free the pages
+	 * which the range [@vmemmap_addr, @vmemmap_end) is mapped to.
+	 */
+	vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse);
+}
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
new file mode 100644
index 000000000000..6923f03534d5
--- /dev/null
+++ b/mm/hugetlb_vmemmap.h
@@ -0,0 +1,20 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ *     Author: Muchun Song <songmuchun@bytedance.com>
+ */
+#ifndef _LINUX_HUGETLB_VMEMMAP_H
+#define _LINUX_HUGETLB_VMEMMAP_H
+#include <linux/hugetlb.h>
+
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+#else
+static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
+#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
+#endif /* _LINUX_HUGETLB_VMEMMAP_H */
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 16183d85a7d5..d3076a7a3783 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -27,8 +27,215 @@
 #include <linux/spinlock.h>
 #include <linux/vmalloc.h>
 #include <linux/sched.h>
+#include <linux/pgtable.h>
+#include <linux/bootmem_info.h>
+
 #include <asm/dma.h>
 #include <asm/pgalloc.h>
+#include <asm/tlbflush.h>
+
+/**
+ * vmemmap_remap_walk - walk vmemmap page table
+ *
+ * @remap_pte:		called for each lowest-level entry (PTE).
+ * @reuse_page:		the page which is reused for the tail vmemmap pages.
+ * @reuse_addr:		the virtual address of the @reuse_page page.
+ * @vmemmap_pages:	the list head of the vmemmap pages that can be freed.
+ */
+struct vmemmap_remap_walk {
+	void (*remap_pte)(pte_t *pte, unsigned long addr,
+			  struct vmemmap_remap_walk *walk);
+	struct page *reuse_page;
+	unsigned long reuse_addr;
+	struct list_head *vmemmap_pages;
+};
+
+static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
+			      unsigned long end,
+			      struct vmemmap_remap_walk *walk)
+{
+	pte_t *pte;
+
+	pte = pte_offset_kernel(pmd, addr);
+
+	/*
+	 * The reuse_page is found 'first' in table walk before we start
+	 * remapping (which is calling @walk->remap_pte).
+	 */
+	if (!walk->reuse_page) {
+		BUG_ON(pte_none(*pte));
+		BUG_ON(walk->reuse_addr != addr);
+
+		walk->reuse_page = pte_page(*pte++);
+		/*
+		 * Because the reuse address is part of the range that we are
+		 * walking, skip the reuse address range.
+		 */
+		addr += PAGE_SIZE;
+	}
+
+	for (; addr != end; addr += PAGE_SIZE, pte++) {
+		BUG_ON(pte_none(*pte));
+
+		walk->remap_pte(pte, addr, walk);
+	}
+}
+
+static void vmemmap_pmd_range(pud_t *pud, unsigned long addr,
+			      unsigned long end,
+			      struct vmemmap_remap_walk *walk)
+{
+	pmd_t *pmd;
+	unsigned long next;
+
+	pmd = pmd_offset(pud, addr);
+	do {
+		BUG_ON(pmd_none(*pmd) || pmd_leaf(*pmd));
+
+		next = pmd_addr_end(addr, end);
+		vmemmap_pte_range(pmd, addr, next, walk);
+	} while (pmd++, addr = next, addr != end);
+}
+
+static void vmemmap_pud_range(p4d_t *p4d, unsigned long addr,
+			      unsigned long end,
+			      struct vmemmap_remap_walk *walk)
+{
+	pud_t *pud;
+	unsigned long next;
+
+	pud = pud_offset(p4d, addr);
+	do {
+		BUG_ON(pud_none(*pud));
+
+		next = pud_addr_end(addr, end);
+		vmemmap_pmd_range(pud, addr, next, walk);
+	} while (pud++, addr = next, addr != end);
+}
+
+static void vmemmap_p4d_range(pgd_t *pgd, unsigned long addr,
+			      unsigned long end,
+			      struct vmemmap_remap_walk *walk)
+{
+	p4d_t *p4d;
+	unsigned long next;
+
+	p4d = p4d_offset(pgd, addr);
+	do {
+		BUG_ON(p4d_none(*p4d));
+
+		next = p4d_addr_end(addr, end);
+		vmemmap_pud_range(p4d, addr, next, walk);
+	} while (p4d++, addr = next, addr != end);
+}
+
+static void vmemmap_remap_range(unsigned long start, unsigned long end,
+				struct vmemmap_remap_walk *walk)
+{
+	unsigned long addr = start;
+	unsigned long next;
+	pgd_t *pgd;
+
+	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
+	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
+
+	pgd = pgd_offset_k(addr);
+	do {
+		BUG_ON(pgd_none(*pgd));
+
+		next = pgd_addr_end(addr, end);
+		vmemmap_p4d_range(pgd, addr, next, walk);
+	} while (pgd++, addr = next, addr != end);
+
+	/*
+	 * We only change the mapping of the vmemmap virtual address range
+	 * [@start + PAGE_SIZE, end), so we only need to flush the TLB which
+	 * belongs to the range.
+	 */
+	flush_tlb_kernel_range(start + PAGE_SIZE, end);
+}
+
+/*
+ * Free a vmemmap page. A vmemmap page can be allocated from the memblock
+ * allocator or buddy allocator. If the PG_reserved flag is set, it means
+ * that it allocated from the memblock allocator, just free it via the
+ * free_bootmem_page(). Otherwise, use __free_page().
+ */
+static inline void free_vmemmap_page(struct page *page)
+{
+	if (PageReserved(page))
+		free_bootmem_page(page);
+	else
+		__free_page(page);
+}
+
+/* Free a list of the vmemmap pages */
+static void free_vmemmap_page_list(struct list_head *list)
+{
+	struct page *page, *next;
+
+	list_for_each_entry_safe(page, next, list, lru) {
+		list_del(&page->lru);
+		free_vmemmap_page(page);
+	}
+}
+
+static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
+			      struct vmemmap_remap_walk *walk)
+{
+	/*
+	 * Remap the tail pages as read-only to catch illegal write operation
+	 * to the tail pages.
+	 */
+	pgprot_t pgprot = PAGE_KERNEL_RO;
+	pte_t entry = mk_pte(walk->reuse_page, pgprot);
+	struct page *page = pte_page(*pte);
+
+	list_add(&page->lru, walk->vmemmap_pages);
+	set_pte_at(&init_mm, addr, pte, entry);
+}
+
+/**
+ * vmemmap_remap_free - remap the vmemmap virtual address range [@start, @end)
+ *			to the page which @reuse is mapped to, then free the
+ *			vmemmap pages which the range is mapped to.
+ * @start:	start address of the vmemmap virtual address range that we want
+ *		to remap.
+ * @end:	end address of the vmemmap virtual address range that we want to
+ *		remap.
+ * @reuse:	reuse address.
+ *
+ * Note: This function depends on vmemmap being base page mapped. Please make
+ * sure that we disable PMD mapping of vmemmap pages when calling this function.
+ */
+void vmemmap_remap_free(unsigned long start, unsigned long end,
+			unsigned long reuse)
+{
+	LIST_HEAD(vmemmap_pages);
+	struct vmemmap_remap_walk walk = {
+		.remap_pte	= vmemmap_remap_pte,
+		.reuse_addr	= reuse,
+		.vmemmap_pages	= &vmemmap_pages,
+	};
+
+	/*
+	 * In order to make remapping routine most efficient for the huge pages,
+	 * the routine of vmemmap page table walking has the following rules
+	 * (see more details from the vmemmap_pte_range()):
+	 *
+	 * - The range [@start, @end) and the range [@reuse, @reuse + PAGE_SIZE)
+	 *   should be contiguous.
+	 * - The @reuse address is part of the range [@reuse, @end) that we are
+	 *   walking, which is passed to vmemmap_remap_range().
+	 * - The @reuse address is the first address in the complete range.
+	 *
+	 * So we need to make sure that @start and @reuse meet the above rules.
+	 */
+	BUG_ON(start - reuse != PAGE_SIZE);
+
+	vmemmap_remap_range(reuse, end, &walk);
+	free_vmemmap_page_list(&vmemmap_pages);
+}
 
 /*
  * Allocate a block of memory to be used to back the virtual memory map
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v16 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-02-19 10:49 [PATCH v16 0/9] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (2 preceding siblings ...)
  2021-02-19 10:49 ` [PATCH v16 3/9] mm: hugetlb: free the vmemmap pages associated with each HugeTLB page Muchun Song
@ 2021-02-19 10:49 ` Muchun Song
  2021-02-19 14:12   ` Michal Hocko
  2021-02-23  0:00   ` Mike Kravetz
  2021-02-19 10:49 ` [PATCH v16 5/9] mm: hugetlb: set the PageHWPoison to the raw error page Muchun Song
                   ` (4 subsequent siblings)
  8 siblings, 2 replies; 25+ messages in thread
From: Muchun Song @ 2021-02-19 10:49 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david, naoya.horiguchi, joao.m.martins
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

When we free a HugeTLB page to the buddy allocator, we should allocate
the vmemmap pages associated with it. But we may not be able to allocate
vmemmap pages when the system is under memory pressure. In this case, we
just refuse to free the HugeTLB page instead of looping forever trying to
allocate the pages. This changes some behavior (listed below) in some
corner cases.

 1) Failing to free a huge page triggered by the user (decrease nr_pages).

    The user needs to try again later.

 2) Failing to free a surplus huge page when freed by the application.

    Try again later when freeing a huge page next time.

 3) Failing to dissolve a free huge page on ZONE_MOVABLE via
    offline_pages().

    This is a bit unfortunate if we have plenty of ZONE_MOVABLE memory
    but are low on kernel memory. For example, migration of huge pages
    would still work; however, dissolving the free page does not.
    This is a corner case. When the system is under that much memory
    pressure, offlining/unplug can be expected to fail.

 4) Failing to dissolve a huge page on CMA/ZONE_MOVABLE via
    alloc_contig_range() - once we have that handling in place. Mainly
    affects CMA and virtio-mem.

    Similar to 3). virtio-mem will handle migration errors gracefully.
    CMA might be able to fall back on other free areas within the CMA
    region.

We do not want to use GFP_ATOMIC to allocate vmemmap pages, because it
grants access to memory reserves and we do not think it is reasonable
to consume them here. We use GFP_KERNEL in alloc_huge_page_vmemmap().
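
For instance, the user-visible effect of case 1) above can be reproduced
through the usual sysctl interface. This is only an illustrative sketch of
the admin workflow; the exact counts depend on the system and on the memory
pressure at the time:

    # Shrink the pool; under memory pressure some pages may fail to be freed.
    echo 0 > /proc/sys/vm/nr_hugepages
    # If the value read back is still non-zero, retry the write later.
    cat /proc/sys/vm/nr_hugepages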

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 Documentation/admin-guide/mm/hugetlbpage.rst |  8 +++
 include/linux/mm.h                           |  2 +
 mm/hugetlb.c                                 | 81 ++++++++++++++++++++--------
 mm/hugetlb_vmemmap.c                         | 22 ++++++++
 mm/hugetlb_vmemmap.h                         |  6 +++
 mm/sparse-vmemmap.c                          | 75 +++++++++++++++++++++++++-
 6 files changed, 171 insertions(+), 23 deletions(-)

diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index f7b1c7462991..fb8f649e5635 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -60,6 +60,10 @@ HugePages_Surp
         the pool above the value in ``/proc/sys/vm/nr_hugepages``. The
         maximum number of surplus huge pages is controlled by
         ``/proc/sys/vm/nr_overcommit_hugepages``.
+	Note: When the feature of freeing unused vmemmap pages associated
+	with each hugetlb page is enabled, the number of the surplus huge
+	pages may be temporarily larger than the maximum number of surplus
+	huge pages when the system is under memory pressure.
 Hugepagesize
 	is the default hugepage size (in Kb).
 Hugetlb
@@ -80,6 +84,10 @@ returned to the huge page pool when freed by a task.  A user with root
 privileges can dynamically allocate more or free some persistent huge pages
 by increasing or decreasing the value of ``nr_hugepages``.
 
+Note: When the feature of freeing unused vmemmap pages associated with each
+hugetlb page is enabled, we can fail to free the huge pages triggered by
+the user when the system is under memory pressure.  Please try again later.
+
 Pages that are used as huge pages are reserved inside the kernel and cannot
 be used for other purposes.  Huge pages cannot be swapped out under
 memory pressure.
diff --git a/include/linux/mm.h b/include/linux/mm.h
index d7dddf334779..33c5911afe18 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2981,6 +2981,8 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
 
 void vmemmap_remap_free(unsigned long start, unsigned long end,
 			unsigned long reuse);
+int vmemmap_remap_alloc(unsigned long start, unsigned long end,
+			unsigned long reuse, gfp_t gfp_mask);
 
 void *sparse_buffer_alloc(unsigned long size);
 struct page * __populate_section_memmap(unsigned long pfn,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 4cfca27c6d32..bcf856974c48 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1305,37 +1305,68 @@ static inline void destroy_compound_gigantic_page(struct page *page,
 						unsigned int order) { }
 #endif
 
-static void update_and_free_page(struct hstate *h, struct page *page)
+static int update_and_free_page(struct hstate *h, struct page *page)
+	__releases(&hugetlb_lock) __acquires(&hugetlb_lock)
 {
 	int i;
+	int nid = page_to_nid(page);
 
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
-		return;
+		return 0;
 
 	h->nr_huge_pages--;
-	h->nr_huge_pages_node[page_to_nid(page)]--;
+	h->nr_huge_pages_node[nid]--;
+	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page);
+	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page_rsvd(page), page);
+	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
+	set_page_refcounted(page);
+	spin_unlock(&hugetlb_lock);
+
+	if (alloc_huge_page_vmemmap(h, page)) {
+		int zeroed;
+
+		spin_lock(&hugetlb_lock);
+		INIT_LIST_HEAD(&page->lru);
+		set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
+		h->nr_huge_pages++;
+		h->nr_huge_pages_node[nid]++;
+
+		/*
+		 * If we cannot allocate vmemmap pages, just refuse to free the
+		 * page, put it back on the hugetlb free list, and treat it
+		 * as a surplus page.
+		 */
+		h->surplus_huge_pages++;
+		h->surplus_huge_pages_node[nid]++;
+
+		/*
+		 * This page is now managed by the hugetlb allocator and has
+		 * no users -- drop the last reference.
+		 */
+		zeroed = put_page_testzero(page);
+		VM_BUG_ON_PAGE(!zeroed, page);
+		arch_clear_hugepage_flags(page);
+		enqueue_huge_page(h, page);
+
+		return -ENOMEM;
+	}
+
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
 				1 << PG_referenced | 1 << PG_dirty |
 				1 << PG_active | 1 << PG_private |
 				1 << PG_writeback);
 	}
-	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page);
-	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page_rsvd(page), page);
-	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
-	set_page_refcounted(page);
 	if (hstate_is_gigantic(h)) {
-		/*
-		 * Temporarily drop the hugetlb_lock, because
-		 * we might block in free_gigantic_page().
-		 */
-		spin_unlock(&hugetlb_lock);
 		destroy_compound_gigantic_page(page, huge_page_order(h));
 		free_gigantic_page(page, huge_page_order(h));
-		spin_lock(&hugetlb_lock);
 	} else {
 		__free_pages(page, huge_page_order(h));
 	}
+
+	spin_lock(&hugetlb_lock);
+
+	return 0;
 }
 
 struct hstate *size_to_hstate(unsigned long size)
@@ -1403,9 +1434,9 @@ static void __free_huge_page(struct page *page)
 	} else if (h->surplus_huge_pages_node[nid]) {
 		/* remove the page from active list */
 		list_del(&page->lru);
-		update_and_free_page(h, page);
 		h->surplus_huge_pages--;
 		h->surplus_huge_pages_node[nid]--;
+		update_and_free_page(h, page);
 	} else {
 		arch_clear_hugepage_flags(page);
 		enqueue_huge_page(h, page);
@@ -1693,6 +1724,7 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
 			struct page *page =
 				list_entry(h->hugepage_freelists[node].next,
 					  struct page, lru);
+			ClearHPageFreed(page);
 			list_del(&page->lru);
 			h->free_huge_pages--;
 			h->free_huge_pages_node[node]--;
@@ -1700,8 +1732,7 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
 				h->surplus_huge_pages--;
 				h->surplus_huge_pages_node[node]--;
 			}
-			update_and_free_page(h, page);
-			ret = 1;
+			ret = !update_and_free_page(h, page);
 			break;
 		}
 	}
@@ -1714,10 +1745,14 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
  * nothing for in-use hugepages and non-hugepages.
  * This function returns values like below:
  *
- *  -EBUSY: failed to dissolved free hugepages or the hugepage is in-use
- *          (allocated or reserved.)
- *       0: successfully dissolved free hugepages or the page is not a
- *          hugepage (considered as already dissolved)
+ *  -ENOMEM: failed to allocate vmemmap pages to free the freed hugepages
+ *           when the system is under memory pressure and the feature of
+ *           freeing unused vmemmap pages associated with each hugetlb page
+ *           is enabled.
+ *  -EBUSY:  failed to dissolve free hugepages or the hugepage is in-use
+ *           (allocated or reserved.)
+ *       0:  successfully dissolved free hugepages or the page is not a
+ *           hugepage (considered as already dissolved)
  */
 int dissolve_free_huge_page(struct page *page)
 {
@@ -1768,12 +1803,14 @@ int dissolve_free_huge_page(struct page *page)
 			SetPageHWPoison(page);
 			ClearPageHWPoison(head);
 		}
+		ClearHPageFreed(page);
 		list_del(&head->lru);
 		h->free_huge_pages--;
 		h->free_huge_pages_node[nid]--;
 		h->max_huge_pages--;
-		update_and_free_page(h, head);
-		rc = 0;
+		rc = update_and_free_page(h, head);
+		if (rc)
+			h->max_huge_pages++;
 	}
 out:
 	spin_unlock(&hugetlb_lock);
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 0209b736e0b4..29a3380f3b20 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -198,6 +198,28 @@ static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
 	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
 }
 
+int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	unsigned long vmemmap_addr = (unsigned long)head;
+	unsigned long vmemmap_end, vmemmap_reuse;
+
+	if (!free_vmemmap_pages_per_hpage(h))
+		return 0;
+
+	vmemmap_addr += RESERVE_VMEMMAP_SIZE;
+	vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h);
+	vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
+	/*
+	 * The pages which the vmemmap virtual address range [@vmemmap_addr,
+	 * @vmemmap_end) are mapped to are freed to the buddy allocator, and
+	 * the range is mapped to the page which @vmemmap_reuse is mapped to.
+	 * When a HugeTLB page is freed to the buddy allocator, previously
+	 * discarded vmemmap pages must be allocated and remapped.
+	 */
+	return vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse,
+				   GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE);
+}
+
 void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 	unsigned long vmemmap_addr = (unsigned long)head;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 6923f03534d5..e5547d53b9f5 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -11,8 +11,14 @@
 #include <linux/hugetlb.h>
 
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+int alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
 #else
+static inline int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	return 0;
+}
+
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index d3076a7a3783..60fc6cd6cd23 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -40,7 +40,8 @@
  * @remap_pte:		called for each lowest-level entry (PTE).
  * @reuse_page:		the page which is reused for the tail vmemmap pages.
  * @reuse_addr:		the virtual address of the @reuse_page page.
- * @vmemmap_pages:	the list head of the vmemmap pages that can be freed.
+ * @vmemmap_pages:	the list head of the vmemmap pages that can be freed
+ *			or that supply the pages for remapping.
  */
 struct vmemmap_remap_walk {
 	void (*remap_pte)(pte_t *pte, unsigned long addr,
@@ -237,6 +238,78 @@ void vmemmap_remap_free(unsigned long start, unsigned long end,
 	free_vmemmap_page_list(&vmemmap_pages);
 }
 
+static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
+				struct vmemmap_remap_walk *walk)
+{
+	pgprot_t pgprot = PAGE_KERNEL;
+	struct page *page;
+	void *to;
+
+	BUG_ON(pte_page(*pte) != walk->reuse_page);
+
+	page = list_first_entry(walk->vmemmap_pages, struct page, lru);
+	list_del(&page->lru);
+	to = page_to_virt(page);
+	copy_page(to, (void *)walk->reuse_addr);
+
+	set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
+}
+
+static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
+				   gfp_t gfp_mask, struct list_head *list)
+{
+	unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
+	int nid = page_to_nid((struct page *)start);
+	struct page *page, *next;
+
+	while (nr_pages--) {
+		page = alloc_pages_node(nid, gfp_mask, 0);
+		if (!page)
+			goto out;
+		list_add_tail(&page->lru, list);
+	}
+
+	return 0;
+out:
+	list_for_each_entry_safe(page, next, list, lru)
+		__free_pages(page, 0);
+	return -ENOMEM;
+}
+
+/**
+ * vmemmap_remap_alloc - remap the vmemmap virtual address range [@start, @end)
+ *			 to pages taken from @vmemmap_pages.
+ * @start:	start address of the vmemmap virtual address range that we want
+ *		to remap.
+ * @end:	end address of the vmemmap virtual address range that we want to
+ *		remap.
+ * @reuse:	reuse address.
+ * @gfp_mask:	GFP flags used for allocating vmemmap pages.
+ */
+int vmemmap_remap_alloc(unsigned long start, unsigned long end,
+			unsigned long reuse, gfp_t gfp_mask)
+{
+	LIST_HEAD(vmemmap_pages);
+	struct vmemmap_remap_walk walk = {
+		.remap_pte	= vmemmap_restore_pte,
+		.reuse_addr	= reuse,
+		.vmemmap_pages	= &vmemmap_pages,
+	};
+
+	/* See the comment in the vmemmap_remap_free(). */
+	BUG_ON(start - reuse != PAGE_SIZE);
+
+	might_sleep_if(gfpflags_allow_blocking(gfp_mask));
+
+	if (alloc_vmemmap_page_list(start, end, gfp_mask, &vmemmap_pages))
+		return -ENOMEM;
+
+	vmemmap_remap_range(reuse, end, &walk);
+
+	return 0;
+}
+
 /*
  * Allocate a block of memory to be used to back the virtual memory map
  * or to back the page tables that are used to create the mapping.
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v16 5/9] mm: hugetlb: set the PageHWPoison to the raw error page
  2021-02-19 10:49 [PATCH v16 0/9] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (3 preceding siblings ...)
  2021-02-19 10:49 ` [PATCH v16 4/9] mm: hugetlb: alloc " Muchun Song
@ 2021-02-19 10:49 ` Muchun Song
  2021-02-19 10:49 ` [PATCH v16 6/9] mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap Muchun Song
                   ` (3 subsequent siblings)
  8 siblings, 0 replies; 25+ messages in thread
From: Muchun Song @ 2021-02-19 10:49 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david, naoya.horiguchi, joao.m.martins
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Because we reuse the first tail vmemmap page frame and remap it
read-only, we cannot set the PageHWPoison flag on some tail pages.
So we use head[4].private (there are at least 128 struct
page structures associated with the optimized HugeTLB page, so
using head[4].private is safe) to record the index of the real error
page and set PageHWPoison on that raw error page later.
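
As a minimal sketch (the offset 511 below is a hypothetical raw error page,
chosen only for illustration): when dissolve_free_huge_page() meets a
poisoned 2MB HugeTLB page whose raw error page is head + 511,
hwpoison_subpage_set() records the offset, and update_and_free_page() later
moves the flag via hwpoison_subpage_deliver():

    hwpoison_subpage_set():      set_page_private(head + 4, 511);
    hwpoison_subpage_deliver():  SetPageHWPoison(head + 511);
                                 ClearPageHWPoison(head);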

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Acked-by: David Rientjes <rientjes@google.com>
---
 mm/hugetlb.c         | 83 ++++++++++++++++++++++++++++++++++++++++++++++------
 mm/hugetlb_vmemmap.c | 12 --------
 mm/hugetlb_vmemmap.h | 17 +++++++++++
 3 files changed, 91 insertions(+), 21 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index bcf856974c48..f0877411f790 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1305,6 +1305,74 @@ static inline void destroy_compound_gigantic_page(struct page *page,
 						unsigned int order) { }
 #endif
 
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
+{
+	struct page *page;
+
+	if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
+		return;
+
+	page = head + page_private(head + 4);
+
+	/*
+	 * Move PageHWPoison flag from head page to the raw error page,
+	 * which makes any subpages rather than the error page reusable.
+	 */
+	if (page != head) {
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+
+static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
+					struct page *page)
+{
+	if (!PageHWPoison(head))
+		return;
+
+	if (free_vmemmap_pages_per_hpage(h)) {
+		set_page_private(head + 4, page - head);
+	} else if (page != head) {
+		/*
+		 * Move PageHWPoison flag from head page to the raw error page,
+		 * which makes any subpages rather than the error page reusable.
+		 */
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+
+static inline void hwpoison_subpage_clear(struct hstate *h, struct page *head)
+{
+	if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
+		return;
+
+	set_page_private(head + 4, 0);
+}
+#else
+static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
+{
+}
+
+static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
+					struct page *page)
+{
+	if (PageHWPoison(head) && page != head) {
+		/*
+		 * Move PageHWPoison flag from head page to the raw error page,
+		 * which makes any subpages rather than the error page reusable.
+		 */
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+
+static inline void hwpoison_subpage_clear(struct hstate *h, struct page *head)
+{
+}
+#endif
+
 static int update_and_free_page(struct hstate *h, struct page *page)
 	__releases(&hugetlb_lock) __acquires(&hugetlb_lock)
 {
@@ -1351,6 +1419,8 @@ static int update_and_free_page(struct hstate *h, struct page *page)
 		return -ENOMEM;
 	}
 
+	hwpoison_subpage_deliver(h, page);
+
 	for (i = 0; i < pages_per_huge_page(h); i++) {
 		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
 				1 << PG_referenced | 1 << PG_dirty |
@@ -1795,22 +1865,17 @@ int dissolve_free_huge_page(struct page *page)
 			goto retry;
 		}
 
-		/*
-		 * Move PageHWPoison flag from head page to the raw error page,
-		 * which makes any subpages rather than the error page reusable.
-		 */
-		if (PageHWPoison(head) && page != head) {
-			SetPageHWPoison(page);
-			ClearPageHWPoison(head);
-		}
+		hwpoison_subpage_set(h, head, page);
 		ClearHPageFreed(page);
 		list_del(&head->lru);
 		h->free_huge_pages--;
 		h->free_huge_pages_node[nid]--;
 		h->max_huge_pages--;
 		rc = update_and_free_page(h, head);
-		if (rc)
+		if (rc) {
 			h->max_huge_pages++;
+			hwpoison_subpage_clear(h, head);
+		}
 	}
 out:
 	spin_unlock(&hugetlb_lock);
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 29a3380f3b20..f7ab3d99250a 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -181,18 +181,6 @@
 #define RESERVE_VMEMMAP_NR		2U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
-/*
- * How many vmemmap pages associated with a HugeTLB page that can be freed
- * to the buddy allocator.
- *
- * Todo: Returns zero for now, which means the feature is disabled. We will
- * enable it once all the infrastructure is there.
- */
-static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
-{
-	return 0;
-}
-
 static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
 {
 	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index e5547d53b9f5..a37771b0b82a 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -13,6 +13,18 @@
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 int alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+
+/*
+ * How many vmemmap pages associated with a HugeTLB page that can be freed
+ * to the buddy allocator.
+ *
+ * Todo: Returns zero for now, which means the feature is disabled. We will
+ * enable it once all the infrastructure is there.
+ */
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
 #else
 static inline int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
@@ -22,5 +34,10 @@ static inline int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
+
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
 #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v16 6/9] mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap
  2021-02-19 10:49 [PATCH v16 0/9] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (4 preceding siblings ...)
  2021-02-19 10:49 ` [PATCH v16 5/9] mm: hugetlb: set the PageHWPoison to the raw error page Muchun Song
@ 2021-02-19 10:49 ` Muchun Song
  2021-02-19 10:49 ` [PATCH v16 7/9] mm: hugetlb: introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
                   ` (2 subsequent siblings)
  8 siblings, 0 replies; 25+ messages in thread
From: Muchun Song @ 2021-02-19 10:49 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david, naoya.horiguchi, joao.m.martins
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song, Miaohe Lin

Add a kernel parameter hugetlb_free_vmemmap to enable the feature of
freeing unused vmemmap pages associated with each hugetlb page on boot.

We disable PMD mapping of vmemmap pages on the x86-64 arch when this
feature is enabled, because vmemmap_remap_free() depends on vmemmap
being base page mapped.
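
A minimal example of enabling this at boot (CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
must be set; the pool size below is illustrative only):

    hugetlb_free_vmemmap=on hugepagesz=2M hugepages=1024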

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Barry Song <song.bao.hua@hisilicon.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
---
 Documentation/admin-guide/kernel-parameters.txt | 14 ++++++++++++++
 Documentation/admin-guide/mm/hugetlbpage.rst    | 14 +++++++++-----
 arch/x86/mm/init_64.c                           |  8 ++++++--
 include/linux/hugetlb.h                         | 19 +++++++++++++++++++
 mm/hugetlb_vmemmap.c                            | 24 ++++++++++++++++++++++++
 5 files changed, 72 insertions(+), 7 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 5adf1e57e932..7db2591f3ad3 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1577,6 +1577,20 @@
 			Documentation/admin-guide/mm/hugetlbpage.rst.
 			Format: size[KMG]
 
+	hugetlb_free_vmemmap=
+			[KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
+			this controls freeing unused vmemmap pages associated
+			with each HugeTLB page. When this option is enabled,
+			we disable PMD/huge page mapping of vmemmap pages, which
+			increases the number of page table pages. So if a user/sysadmin only
+			uses a small number of HugeTLB pages (as a percentage
+			of system memory), they could end up using more memory
+			with hugetlb_free_vmemmap on as opposed to off.
+			Format: { on | off (default) }
+
+			on:  enable the feature
+			off: disable the feature
+
 	hung_task_panic=
 			[KNL] Should the hung task detector generate panics.
 			Format: 0 | 1
diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index fb8f649e5635..3bf494c01da4 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -60,8 +60,8 @@ HugePages_Surp
         the pool above the value in ``/proc/sys/vm/nr_hugepages``. The
         maximum number of surplus huge pages is controlled by
         ``/proc/sys/vm/nr_overcommit_hugepages``.
-	Note: When the feature of freeing unused vmemmap pages associated
-	with each hugetlb page is enabled, the number of the surplus huge
+	Note: When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP and the kernel parameter
+	of ``hugetlb_free_vmemmap=on`` are set, the number of the surplus huge
 	pages may be temporarily larger than the maximum number of surplus
 	huge pages when the system is under memory pressure.
 Hugepagesize
@@ -84,9 +84,10 @@ returned to the huge page pool when freed by a task.  A user with root
 privileges can dynamically allocate more or free some persistent huge pages
 by increasing or decreasing the value of ``nr_hugepages``.
 
-Note: When the feature of freeing unused vmemmap pages associated with each
-hugetlb page is enabled, we can fail to free the huge pages triggered by
-the user when the system is under memory pressure.  Please try again later.
+Note: When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP and the kernel parameter of
+``hugetlb_free_vmemmap=on`` are set, we can fail to free the huge pages
+triggered by the user when the system is under memory pressure.  Please
+try again later.
 
 Pages that are used as huge pages are reserved inside the kernel and cannot
 be used for other purposes.  Huge pages cannot be swapped out under
@@ -153,6 +154,9 @@ default_hugepagesz
 
 	will all result in 256 2M huge pages being allocated.  Valid default
 	huge page size is architecture dependent.
+hugetlb_free_vmemmap
+	When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
+	unused vmemmap pages associated with each HugeTLB page.
 
 When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
 indicates the current number of pre-allocated huge pages of the default size.
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0435bee2e172..39f88c5faadc 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -34,6 +34,7 @@
 #include <linux/gfp.h>
 #include <linux/kcore.h>
 #include <linux/bootmem_info.h>
+#include <linux/hugetlb.h>
 
 #include <asm/processor.h>
 #include <asm/bios_ebda.h>
@@ -1557,7 +1558,8 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 {
 	int err;
 
-	if (end - start < PAGES_PER_SECTION * sizeof(struct page))
+	if ((is_hugetlb_free_vmemmap_enabled()  && !altmap) ||
+	    end - start < PAGES_PER_SECTION * sizeof(struct page))
 		err = vmemmap_populate_basepages(start, end, node, NULL);
 	else if (boot_cpu_has(X86_FEATURE_PSE))
 		err = vmemmap_populate_hugepages(start, end, node, altmap);
@@ -1585,6 +1587,8 @@ void register_page_bootmem_memmap(unsigned long section_nr,
 	pmd_t *pmd;
 	unsigned int nr_pmd_pages;
 	struct page *page;
+	bool base_mapping = !boot_cpu_has(X86_FEATURE_PSE) ||
+			    is_hugetlb_free_vmemmap_enabled();
 
 	for (; addr < end; addr = next) {
 		pte_t *pte = NULL;
@@ -1610,7 +1614,7 @@ void register_page_bootmem_memmap(unsigned long section_nr,
 		}
 		get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);
 
-		if (!boot_cpu_has(X86_FEATURE_PSE)) {
+		if (base_mapping) {
 			next = (addr + PAGE_SIZE) & PAGE_MASK;
 			pmd = pmd_offset(pud, addr);
 			if (pmd_none(*pmd))
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 37fd248ce271..ad249e56ac49 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -854,6 +854,20 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
 
 void set_page_huge_active(struct page *page);
 
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+extern bool hugetlb_free_vmemmap_enabled;
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return hugetlb_free_vmemmap_enabled;
+}
+#else
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return false;
+}
+#endif
+
 #else	/* CONFIG_HUGETLB_PAGE */
 struct hstate {};
 
@@ -1007,6 +1021,11 @@ static inline void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr
 					pte_t *ptep, pte_t pte, unsigned long sz)
 {
 }
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return false;
+}
 #endif	/* CONFIG_HUGETLB_PAGE */
 
 static inline spinlock_t *huge_pte_lock(struct hstate *h,
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index f7ab3d99250a..7807ed6678e0 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -169,6 +169,8 @@
  * (last) level. So this type of HugeTLB page can be optimized only when its
  * size of the struct page structs is greater than 2 pages.
  */
+#define pr_fmt(fmt)	"HugeTLB: " fmt
+
 #include "hugetlb_vmemmap.h"
 
 /*
@@ -181,6 +183,28 @@
 #define RESERVE_VMEMMAP_NR		2U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
+bool hugetlb_free_vmemmap_enabled;
+
+static int __init early_hugetlb_free_vmemmap_param(char *buf)
+{
+	/* We cannot optimize if a "struct page" crosses page boundaries. */
+	if (!is_power_of_2(sizeof(struct page))) {
+		pr_warn("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n");
+		return 0;
+	}
+
+	if (!buf)
+		return -EINVAL;
+
+	if (!strcmp(buf, "on"))
+		hugetlb_free_vmemmap_enabled = true;
+	else if (strcmp(buf, "off"))
+		return -EINVAL;
+
+	return 0;
+}
+early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
+
 static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
 {
 	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v16 7/9] mm: hugetlb: introduce nr_free_vmemmap_pages in the struct hstate
  2021-02-19 10:49 [PATCH v16 0/9] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (5 preceding siblings ...)
  2021-02-19 10:49 ` [PATCH v16 6/9] mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap Muchun Song
@ 2021-02-19 10:49 ` Muchun Song
  2021-02-19 10:49 ` [PATCH v16 8/9] mm: hugetlb: gather discrete indexes of tail page Muchun Song
  2021-02-19 10:49 ` [PATCH v16 9/9] mm: hugetlb: optimize the code with the help of the compiler Muchun Song
  8 siblings, 0 replies; 25+ messages in thread
From: Muchun Song @ 2021-02-19 10:49 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david, naoya.horiguchi, joao.m.martins
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song, Miaohe Lin

All the infrastructure is ready, so we introduce the nr_free_vmemmap_pages
field in the hstate to indicate how many vmemmap pages associated with
a HugeTLB page can be freed to the buddy allocator, and initialize it
in hugetlb_vmemmap_init(). This patch is the actual enablement of the
feature.
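
As a rough worked example, assuming 4KB base pages and a 64-byte struct page
(the common x86-64 layout): a 2MB HugeTLB page spans 512 base pages, so its
512 struct pages occupy 512 * 64 bytes = 8 vmemmap pages. RESERVE_VMEMMAP_NR
(2) of them are kept, so nr_free_vmemmap_pages becomes 6 and about 24KB is
returned to the buddy allocator per 2MB HugeTLB page. The pr_info() below
would then print something like "HugeTLB: can free 6 vmemmap pages for
hugepages-2048kB" at boot.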

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
---
 include/linux/hugetlb.h |  3 +++
 mm/hugetlb.c            |  1 +
 mm/hugetlb_vmemmap.c    | 25 +++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h    | 10 ++++++----
 4 files changed, 35 insertions(+), 4 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index ad249e56ac49..775aea53669a 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -560,6 +560,9 @@ struct hstate {
 	unsigned int nr_huge_pages_node[MAX_NUMNODES];
 	unsigned int free_huge_pages_node[MAX_NUMNODES];
 	unsigned int surplus_huge_pages_node[MAX_NUMNODES];
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	unsigned int nr_free_vmemmap_pages;
+#endif
 #ifdef CONFIG_CGROUP_HUGETLB
 	/* cgroup control files */
 	struct cftype cgroup_files_dfl[7];
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f0877411f790..f58331853db9 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3305,6 +3305,7 @@ void __init hugetlb_add_hstate(unsigned int order)
 	h->next_nid_to_free = first_memory_node;
 	snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB",
 					huge_page_size(h)/1024);
+	hugetlb_vmemmap_init(h);
 
 	parsed_hstate = h;
 }
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 7807ed6678e0..b65f0d5189bd 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -251,3 +251,28 @@ void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 	 */
 	vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse);
 }
+
+void __init hugetlb_vmemmap_init(struct hstate *h)
+{
+	unsigned int nr_pages = pages_per_huge_page(h);
+	unsigned int vmemmap_pages;
+
+	if (!hugetlb_free_vmemmap_enabled)
+		return;
+
+	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
+	/*
+	 * The head page and the first tail page are not to be freed to the buddy
+	 * allocator; the other pages are remapped to the first tail page, so they
+	 * can be freed.
+	 *
+	 * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? It is true
+	 * on some architectures (e.g. aarch64). See Documentation/arm64/
+	 * hugetlbpage.rst for more details.
+	 */
+	if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR))
+		h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
+
+	pr_info("can free %d vmemmap pages for %s\n", h->nr_free_vmemmap_pages,
+		h->name);
+}
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index a37771b0b82a..cb2bef8f9e73 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -13,17 +13,15 @@
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 int alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+void hugetlb_vmemmap_init(struct hstate *h);
 
 /*
  * How many vmemmap pages associated with a HugeTLB page that can be freed
  * to the buddy allocator.
- *
- * Todo: Returns zero for now, which means the feature is disabled. We will
- * enable it once all the infrastructure is there.
  */
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
-	return 0;
+	return h->nr_free_vmemmap_pages;
 }
 #else
 static inline int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
@@ -35,6 +33,10 @@ static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
 
+static inline void hugetlb_vmemmap_init(struct hstate *h)
+{
+}
+
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return 0;
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v16 8/9] mm: hugetlb: gather discrete indexes of tail page
  2021-02-19 10:49 [PATCH v16 0/9] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (6 preceding siblings ...)
  2021-02-19 10:49 ` [PATCH v16 7/9] mm: hugetlb: introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
@ 2021-02-19 10:49 ` Muchun Song
  2021-02-19 10:49 ` [PATCH v16 9/9] mm: hugetlb: optimize the code with the help of the compiler Muchun Song
  8 siblings, 0 replies; 25+ messages in thread
From: Muchun Song @ 2021-02-19 10:49 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david, naoya.horiguchi, joao.m.martins
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song, Miaohe Lin

For a HugeTLB page, there is more metadata to save in the struct page.
But the head struct page cannot meet our needs, so we have to abuse
other tail struct pages to store the metadata. In order to avoid
conflicts caused by subsequent use of more tail struct pages, we
gather these discrete indexes of tail struct pages here. In this case,
it will be easier to add a new tail page index later.

There are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct
page structs that can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is
enabled, so add a BUILD_BUG_ON to catch invalid usage of the tail struct page.
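
With both CONFIG_CGROUP_HUGETLB and CONFIG_HUGETLB_PAGE_FREE_VMEMMAP enabled,
the enum added below yields the following layout (a summary, not new state):

    hpage[1].private  SUBPAGE_INDEX_SUBPOOL      struct hugepage_subpool *
    hpage[2].private  SUBPAGE_INDEX_CGROUP       fault usage cgroup
    hpage[3].private  SUBPAGE_INDEX_CGROUP_RSVD  reservation usage cgroup
    hpage[4].private  SUBPAGE_INDEX_HWPOISON     raw error page offset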

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
---
 include/linux/hugetlb.h        | 24 ++++++++++++++++++++++--
 include/linux/hugetlb_cgroup.h | 19 +++++++++++--------
 mm/hugetlb.c                   |  6 +++---
 mm/hugetlb_vmemmap.c           |  8 ++++++++
 4 files changed, 44 insertions(+), 13 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 775aea53669a..caeef778526a 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -28,6 +28,26 @@ typedef struct { unsigned long pd; } hugepd_t;
 #include <linux/shm.h>
 #include <asm/tlbflush.h>
 
+/*
+ * For a HugeTLB page, there is more metadata to save in the struct page. But
+ * the head struct page cannot meet our needs, so we have to abuse other tail
+ * struct pages to store the metadata. In order to avoid conflicts caused by
+ * subsequent use of more tail struct pages, we gather these discrete indexes
+ * of tail struct pages here.
+ */
+enum {
+	SUBPAGE_INDEX_SUBPOOL = 1,	/* reuse page->private */
+#ifdef CONFIG_CGROUP_HUGETLB
+	SUBPAGE_INDEX_CGROUP,		/* reuse page->private */
+	SUBPAGE_INDEX_CGROUP_RSVD,	/* reuse page->private */
+	__MAX_CGROUP_SUBPAGE_INDEX = SUBPAGE_INDEX_CGROUP_RSVD,
+#endif
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	SUBPAGE_INDEX_HWPOISON,		/* reuse page->private */
+#endif
+	__NR_USED_SUBPAGE,
+};
+
 struct hugepage_subpool {
 	spinlock_t lock;
 	long count;
@@ -607,13 +627,13 @@ extern unsigned int default_hstate_idx;
  */
 static inline struct hugepage_subpool *hugetlb_page_subpool(struct page *hpage)
 {
-	return (struct hugepage_subpool *)(hpage+1)->private;
+	return (void *)page_private(hpage + SUBPAGE_INDEX_SUBPOOL);
 }
 
 static inline void hugetlb_set_page_subpool(struct page *hpage,
 					struct hugepage_subpool *subpool)
 {
-	set_page_private(hpage+1, (unsigned long)subpool);
+	set_page_private(hpage + SUBPAGE_INDEX_SUBPOOL, (unsigned long)subpool);
 }
 
 static inline struct hstate *hstate_file(struct file *f)
diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index 2ad6e92f124a..54ec689e3c9c 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -21,15 +21,16 @@ struct hugetlb_cgroup;
 struct resv_map;
 struct file_region;
 
+#ifdef CONFIG_CGROUP_HUGETLB
 /*
  * Minimum page order trackable by hugetlb cgroup.
  * At least 4 pages are necessary for all the tracking information.
- * The second tail page (hpage[2]) is the fault usage cgroup.
- * The third tail page (hpage[3]) is the reservation usage cgroup.
+ * The second tail page (hpage[SUBPAGE_INDEX_CGROUP]) is the fault
+ * usage cgroup. The third tail page (hpage[SUBPAGE_INDEX_CGROUP_RSVD])
+ * is the reservation usage cgroup.
  */
-#define HUGETLB_CGROUP_MIN_ORDER	2
+#define HUGETLB_CGROUP_MIN_ORDER order_base_2(__MAX_CGROUP_SUBPAGE_INDEX + 1)
 
-#ifdef CONFIG_CGROUP_HUGETLB
 enum hugetlb_memory_event {
 	HUGETLB_MAX,
 	HUGETLB_NR_MEMORY_EVENTS,
@@ -66,9 +67,9 @@ __hugetlb_cgroup_from_page(struct page *page, bool rsvd)
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return NULL;
 	if (rsvd)
-		return (struct hugetlb_cgroup *)page[3].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP_RSVD);
 	else
-		return (struct hugetlb_cgroup *)page[2].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP);
 }
 
 static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
@@ -90,9 +91,11 @@ static inline int __set_hugetlb_cgroup(struct page *page,
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return -1;
 	if (rsvd)
-		page[3].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP_RSVD,
+				 (unsigned long)h_cg);
 	else
-		page[2].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP,
+				 (unsigned long)h_cg);
 	return 0;
 }
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index f58331853db9..5d5349fdc9cd 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1313,7 +1313,7 @@ static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
 	if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
 		return;
 
-	page = head + page_private(head + 4);
+	page = head + page_private(head + SUBPAGE_INDEX_HWPOISON);
 
 	/*
 	 * Move PageHWPoison flag from head page to the raw error page,
@@ -1332,7 +1332,7 @@ static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
 		return;
 
 	if (free_vmemmap_pages_per_hpage(h)) {
-		set_page_private(head + 4, page - head);
+		set_page_private(head + SUBPAGE_INDEX_HWPOISON, page - head);
 	} else if (page != head) {
 		/*
 		 * Move PageHWPoison flag from head page to the raw error page,
@@ -1348,7 +1348,7 @@ static inline void hwpoison_subpage_clear(struct hstate *h, struct page *head)
 	if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
 		return;
 
-	set_page_private(head + 4, 0);
+	set_page_private(head + SUBPAGE_INDEX_HWPOISON, 0);
 }
 #else
 static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index b65f0d5189bd..33e42678abe3 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -257,6 +257,14 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	unsigned int nr_pages = pages_per_huge_page(h);
 	unsigned int vmemmap_pages;
 
+	/*
+	 * There are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct
+	 * page structs that can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is
+	 * enabled, so add a BUILD_BUG_ON to catch invalid usage of the tail struct page.
+	 */
+	BUILD_BUG_ON(__NR_USED_SUBPAGE >=
+		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
+
 	if (!hugetlb_free_vmemmap_enabled)
 		return;
 
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* [PATCH v16 9/9] mm: hugetlb: optimize the code with the help of the compiler
  2021-02-19 10:49 [PATCH v16 0/9] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (7 preceding siblings ...)
  2021-02-19 10:49 ` [PATCH v16 8/9] mm: hugetlb: gather discrete indexes of tail page Muchun Song
@ 2021-02-19 10:49 ` Muchun Song
  8 siblings, 0 replies; 25+ messages in thread
From: Muchun Song @ 2021-02-19 10:49 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david, naoya.horiguchi, joao.m.martins
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song, Miaohe Lin

When the struct page size is not a power of 2, some struct pages can
cross page boundaries and we cannot make use of this feature. Let
free_vmemmap_pages_per_hpage() return zero in that case; most of the
functions can then be optimized away.
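
For example (72 bytes is just an illustrative non-power-of-2 size):

    sizeof(struct page) == 64: is_power_of_2() is true, 4096 % 64 == 0, and
                               no struct page straddles a page boundary.
    sizeof(struct page) == 72: is_power_of_2() is false, 4096 % 72 != 0, some
                               struct pages would straddle a page boundary, so
                               free_vmemmap_pages_per_hpage() returns 0 and the
                               remap paths become dead code.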

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
---
 include/linux/hugetlb.h | 3 ++-
 mm/hugetlb_vmemmap.c    | 7 +++++++
 mm/hugetlb_vmemmap.h    | 6 ++++++
 3 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index caeef778526a..8d684edc3bf9 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -882,7 +882,8 @@ extern bool hugetlb_free_vmemmap_enabled;
 
 static inline bool is_hugetlb_free_vmemmap_enabled(void)
 {
-	return hugetlb_free_vmemmap_enabled;
+	return hugetlb_free_vmemmap_enabled &&
+	       is_power_of_2(sizeof(struct page));
 }
 #else
 static inline bool is_hugetlb_free_vmemmap_enabled(void)
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 33e42678abe3..1ba1ef45c48c 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -265,6 +265,13 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	BUILD_BUG_ON(__NR_USED_SUBPAGE >=
 		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
 
+	/*
+	 * The compiler can help us to optimize this function to null
+	 * The compiler can help us optimize this function away
+	 * when the size of struct page is not a power of 2.
+	if (!is_power_of_2(sizeof(struct page)))
+		return;
+
 	if (!hugetlb_free_vmemmap_enabled)
 		return;
 
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index cb2bef8f9e73..29aaaf7b741e 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -21,6 +21,12 @@ void hugetlb_vmemmap_init(struct hstate *h);
  */
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
+	/*
+	 * This check aims to let the compiler help us optimize the code as
+	 * much as possible.
+	 */
+	if (!is_power_of_2(sizeof(struct page)))
+		return 0;
 	return h->nr_free_vmemmap_pages;
 }
 #else
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 25+ messages in thread

* Re: [PATCH v16 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-02-19 10:49 ` [PATCH v16 4/9] mm: hugetlb: alloc " Muchun Song
@ 2021-02-19 14:12   ` Michal Hocko
  2021-02-20  4:20     ` [External] " Muchun Song
  2021-02-23  0:00   ` Mike Kravetz
  1 sibling, 1 reply; 25+ messages in thread
From: Michal Hocko @ 2021-02-19 14:12 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, song.bao.hua, david,
	naoya.horiguchi, joao.m.martins, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel

On Fri 19-02-21 18:49:49, Muchun Song wrote:
> When we free a HugeTLB page to the buddy allocator, we should allocate
> the vmemmap pages associated with it. But we may not be able to allocate
> vmemmap pages when the system is under memory pressure. In this case, we
> just refuse to free the HugeTLB page instead of looping forever trying to
> allocate the pages. This changes some behavior (listed below) in some
> corner cases.
> 
>  1) Failing to free a huge page triggered by the user (decrease nr_pages).
> 
>     The user needs to try again later.
> 
>  2) Failing to free a surplus huge page when freed by the application.
> 
>     Try again later when freeing a huge page next time.

This means that surplus pages can accumulate right? This should be
rather unlikely because one released huge page could then be reused for
normal allocations - including vmemmap. Unlucky timing might still end
up in the accumulation though. Not something critical though.

>  3) Failing to dissolve a free huge page on ZONE_MOVABLE via
>     offline_pages().
> 
>     This is a bit unfortunate if we have plenty of ZONE_MOVABLE memory
>     but are low on kernel memory. For example, migration of huge pages
>     would still work; however, dissolving the free page does not.
>     This is a corner case. When the system is under that much memory
>     pressure, offlining/unplug can be expected to fail.

Please mention that this is unfortunate because it prevents memory
offlining, which shouldn't happen for movable zones. People
depending on memory hotplug and the movable zone should carefully
consider whether savings on unmovable memory are worth losing their
hotplug functionality in some situations.

>  4) Failing to dissolve a huge page on CMA/ZONE_MOVABLE via
>     alloc_contig_range() - once we have that handling in place. Mainly
>     affects CMA and virtio-mem.

What about hugetlb page poisoning on HW failure (resp. soft offlining)?

> 
>     Similar to 3). virtio-mem will handle migration errors gracefully.
>     CMA might be able to fall back on other free areas within the CMA
>     region.
> 
> We do not want to use GFP_ATOMIC to allocate vmemmap pages, because it
> grants access to memory reserves and we do not think it is reasonable
> to consume them here. We use GFP_KERNEL in alloc_huge_page_vmemmap().

This likely needs more context around. Maybe something like
"
Vmemmap pages are allocated from the page freeing context. In order for
those allocations to be not disruptive (e.g. trigger oom killer)
__GFP_NORETRY is used. hugetlb_lock is dropped for the allocation
because a non sleeping allocation would be too fragile and it could fail
too easily under memory pressure. GFP_ATOMIC or other modes to access
memory reserves is not used because we want to prevent consuming
reserves under heavy hugetlb freeing.
"

I haven't gone through the patch in great detail yet; from a high-level
POV it looks good, although the counter changes and reshuffling seem a
little wild. That requires a more detailed look I do not have time
for right now. Mike would be much better for that anyway ;)

I do not see any check for an atomic context in the free_huge_page path. I
have suggested replacing in_task with an in_atomic check (with a gotcha that
the latter doesn't work without preempt_count, but there is work underway to
address that).
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [External] Re: [PATCH v16 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-02-19 14:12   ` Michal Hocko
@ 2021-02-20  4:20     ` Muchun Song
  2021-02-22  9:25       ` Michal Hocko
  0 siblings, 1 reply; 25+ messages in thread
From: Muchun Song @ 2021-02-20  4:20 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Fri, Feb 19, 2021 at 10:12 PM Michal Hocko <mhocko@suse.com> wrote:
>
> On Fri 19-02-21 18:49:49, Muchun Song wrote:
> > When we free a HugeTLB page to the buddy allocator, we should allocate
> > the vmemmap pages associated with it. But we may not be able to allocate
> > vmemmap pages when the system is under memory pressure. In this case, we
> > just refuse to free the HugeTLB page instead of looping forever trying to
> > allocate the pages. This changes some behavior (listed below) in some
> > corner cases.
> >
> >  1) Failing to free a huge page triggered by the user (decrease nr_pages).
> >
> >     The user needs to try again later.
> >
> >  2) Failing to free a surplus huge page when freed by the application.
> >
> >     Try again later when freeing a huge page next time.
>
> This means that surplus pages can accumulate right? This should be
> rather unlikely because one released huge page could then be reused for
> normal allocations - including vmemmap. Unlucky timing might still end
> up in the accumulation though. Not something critical though.

Agree.

>
> >  3) Failing to dissolve a free huge page on ZONE_MOVABLE via
> >     offline_pages().
> >
> >     This is a bit unfortunate if we have plenty of ZONE_MOVABLE memory
> >     but are low on kernel memory. For example, migration of huge pages
> >     would still work; however, dissolving the free page does not.
> >     This is a corner case. When the system is under that much memory
> >     pressure, offlining/unplug can be expected to fail.
>
> Please mention that this is unfortunate because it prevents memory
> offlining, which shouldn't happen for movable zones. People
> depending on memory hotplug and the movable zone should carefully
> consider whether savings on unmovable memory are worth losing their
> hotplug functionality in some situations.

Makes sense. I will mention this in the change log. Thanks.

>
> >  4) Failing to dissolve a huge page on CMA/ZONE_MOVABLE via
> >     alloc_contig_range() - once we have that handling in place. Mainly
> >     affects CMA and virtio-mem.
>
> What about hugetlb page poisoning on HW failure (resp. soft offlining)?

If the HW poisoned hugetlb page failed to be dissolved, the page
will go back to the free list with PG_HWPoison set. But the page
will not be used, because we will check whether the page is HW
poisoned when it is dequeued from the free list. If so, we will skip
this page.

>
> >
> >     Similar to 3). virtio-mem will handle migration errors gracefully.
> >     CMA might be able to fall back on other free areas within the CMA
> >     region.
> >
> > We do not want to use GFP_ATOMIC to allocate vmemmap pages, because it
> > grants access to memory reserves and we do not think it is reasonable
> > to consume them here. We use GFP_KERNEL in alloc_huge_page_vmemmap().
>
> This likely needs more context around. Maybe something like
> "
> Vmemmap pages are allocated from the page freeing context. In order for
> those allocations to be not disruptive (e.g. trigger oom killer)
> __GFP_NORETRY is used. hugetlb_lock is dropped for the allocation
> because a non sleeping allocation would be too fragile and it could fail
> too easily under memory pressure. GFP_ATOMIC or other modes to access
> memory reserves is not used because we want to prevent consuming
> reserves under heavy hugetlb freeing.
> "

Thanks. I will add this to the change log.

>
> I haven't gone through the patch in great detail yet; from a high-level
> POV it looks good, although the counter changes and reshuffling seem a
> little wild. That requires a more detailed look I do not have time
> for right now. Mike would be much better for that anyway ;)

Yeah. Hope Mike will review this (I believe he is good at this area).

>
> I do not see any check for an atomic context in the free_huge_page path. I
> have suggested replacing in_task with an in_atomic check (with a gotcha that
> the latter doesn't work without preempt_count, but there is work underway to
> address that).

Sorry. I forgot it. I will replace in_task with in_atomic in the next version.
Thanks for your suggestions.

> --
> Michal Hocko
> SUSE Labs

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [External] Re: [PATCH v16 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-02-20  4:20     ` [External] " Muchun Song
@ 2021-02-22  9:25       ` Michal Hocko
  2021-02-22 10:31         ` Muchun Song
  0 siblings, 1 reply; 25+ messages in thread
From: Michal Hocko @ 2021-02-22  9:25 UTC (permalink / raw)
  To: Muchun Song
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Sat 20-02-21 12:20:36, Muchun Song wrote:
> On Fri, Feb 19, 2021 at 10:12 PM Michal Hocko <mhocko@suse.com> wrote:
[...]
> > What about hugetlb page poisoning on HW failure (resp. soft offlining)?
> 
> If the HW poisoned hugetlb page fails to be dissolved, the page
> will go back to the free list with PG_HWPoison set. But the page
> will not be used, because we check whether the page is HW
> poisoned when it is dequeued from the free list. If so, we skip
> this page.

Can this lead to an underprovisioned pool then? Or is there a new
hugetlb allocated to replace the poisoned one?

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [External] Re: [PATCH v16 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-02-22  9:25       ` Michal Hocko
@ 2021-02-22 10:31         ` Muchun Song
  2021-02-22 10:50           ` Oscar Salvador
  0 siblings, 1 reply; 25+ messages in thread
From: Muchun Song @ 2021-02-22 10:31 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Mon, Feb 22, 2021 at 5:25 PM Michal Hocko <mhocko@suse.com> wrote:
>
> On Sat 20-02-21 12:20:36, Muchun Song wrote:
> > On Fri, Feb 19, 2021 at 10:12 PM Michal Hocko <mhocko@suse.com> wrote:
> [...]
> > > What about hugetlb page poisoning on HW failure (resp. soft offlining)?
> >
> > If the HW poisoned hugetlb page fails to be dissolved, the page
> > will go back to the free list with PG_HWPoison set. But the page
> > will not be used, because we check whether the page is HW
> > poisoned when it is dequeued from the free list. If so, we skip
> > this page.
>
> Can this lead to an under provisioned pool then? Or is there a new
> hugetlb allocated to replace the poisoned one?

Actually, no page will be allocated. Your concern is right. But without
this patch, the result does not change: e.g. the HW poisoned page
can fail to be dissolved when h->free_huge_pages is equal to
h->resv_huge_pages. But no one seems to have reported this issue so
far. Maybe this behavior needs improvement in the future.
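
For context, the check in dissolve_free_huge_page() that produces this
behavior is roughly (simplified from mm/hugetlb.c):

	/*
	 * If every free huge page is backing a reservation, dissolving
	 * one would break the promised reserves, so bail out and leave
	 * the (possibly poisoned) page in the pool.
	 */
	if (h->free_huge_pages - h->resv_huge_pages == 0)
		goto out;	/* rc is still -EBUSY */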

>
> --
> Michal Hocko
> SUSE Labs

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [External] Re: [PATCH v16 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-02-22 10:31         ` Muchun Song
@ 2021-02-22 10:50           ` Oscar Salvador
  0 siblings, 0 replies; 25+ messages in thread
From: Oscar Salvador @ 2021-02-22 10:50 UTC (permalink / raw)
  To: Muchun Song
  Cc: Michal Hocko, Jonathan Corbet, Mike Kravetz, Thomas Gleixner,
	mingo, bp, x86, hpa, dave.hansen, luto, Peter Zijlstra, viro,
	Andrew Morton, paulmck, mchehab+huawei, pawan.kumar.gupta,
	Randy Dunlap, oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Mon, Feb 22, 2021 at 06:31:12PM +0800, Muchun Song wrote:
> On Mon, Feb 22, 2021 at 5:25 PM Michal Hocko <mhocko@suse.com> wrote:
> >
> > On Sat 20-02-21 12:20:36, Muchun Song wrote:
> > > On Fri, Feb 19, 2021 at 10:12 PM Michal Hocko <mhocko@suse.com> wrote:
> > [...]
> > > > What about hugetlb page poisoning on HW failure (resp. soft offlining)?
> > >
> > > If the HW poisoned hugetlb page fails to be dissolved, the page
> > > will go back to the free list with PG_HWPoison set. But the page
> > > will not be used, because we check whether the page is HW
> > > poisoned when it is dequeued from the free list. If so, we skip
> > > this page.

Not really. If the huge page is dissolved, we will take the page off the
freelist. See take_page_off_buddy in memory_failure_hugetlb.

In an ideal world, we should inspect that page in free_pages_prepare(),
remove the HWPoisoned page and process the others, without letting that
page hit Buddy.
And not only for hugetlb, but for any higher order page.
See how memory_failure() happily disengages itself when it finds a higher
order page.
It does so because we have the premise that once that page hits Buddy,
it will stay there, as the check_new_page guards us.
But this has been proven to be quite a weak measure, as compaction does
not perform such a check, and so the page can sneak in.

I fixed that for soft-offline, and for memory-failure in some cases, but more
needs to be done and it is on my TODO list.
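
A conceptual sketch of the buddy-side guard mentioned above (the real
check lives in mm/page_alloc.c and covers more page state than this):

	/*
	 * Conceptual only: on allocation, a page with unexpected state,
	 * including PG_hwpoison, is rejected so it is not handed out on
	 * the normal allocation path.  Compaction moves pages between
	 * free lists without running this check, which is the hole
	 * described above.
	 */
	static bool page_is_bad_sketch(struct page *page)
	{
		return unlikely(PageHWPoison(page));
	}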

> > Can this lead to an under provisioned pool then? Or is there a new
> > hugetlb allocated to replace the poisoned one?
> 
> Actually, no page will be allocated. Your concern is right. But without
> this patch, the result does not change: e.g. the HW poisoned page
> can fail to be dissolved when h->free_huge_pages is equal to
> h->resv_huge_pages. But no one seems to have reported this issue so
> far. Maybe this behavior needs improvement in the future.

Yes, something to improve.
I shall have a look.

-- 
Oscar Salvador
SUSE L3

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [PATCH v16 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-02-19 10:49 ` [PATCH v16 4/9] mm: hugetlb: alloc " Muchun Song
  2021-02-19 14:12   ` Michal Hocko
@ 2021-02-23  0:00   ` Mike Kravetz
  2021-02-23  5:35       ` Muchun Song
  2021-02-23  9:27     ` Oscar Salvador
  1 sibling, 2 replies; 25+ messages in thread
From: Mike Kravetz @ 2021-02-23  0:00 UTC (permalink / raw)
  To: Muchun Song, corbet, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david, naoya.horiguchi, joao.m.martins
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel

On 2/19/21 2:49 AM, Muchun Song wrote:
> When we free a HugeTLB page to the buddy allocator, we should allocate
> the vmemmap pages associated with it. But we may not be able to allocate
> vmemmap pages when the system is under memory pressure; in this case, we
> just refuse to free the HugeTLB page instead of looping forever trying to
> allocate the pages. This changes some behavior (listed below) in some
> corner cases.

Thank you for listing the changes in behavior and possible side effects of
not being able to allocate vmemmap and free a huge page to buddy!

I will not repeat Michal's comment about the check for an atomic context
in the free_huge_page path.

> 
>  1) Failing to free a huge page triggered by the user (decrease nr_pages).
> 
>     Need try again later by the user.
> 
>  2) Failing to free a surplus huge page when freed by the application.
> 
>     Try again later when freeing a huge page next time.
> 
>  3) Failing to dissolve a free huge page on ZONE_MOVABLE via
>     offline_pages().
> 
>     This is a bit unfortunate if we have plenty of ZONE_MOVABLE memory
>     but are low on kernel memory. For example, migration of huge pages
>     would still work, however, dissolving the free page does not work.
>     This is a corner case. When the system is under that much memory
>     pressure, offlining/unplug can be expected to fail.
> 
>  4) Failing to dissolve a huge page on CMA/ZONE_MOVABLE via
>     alloc_contig_range() - once we have that handling in place. Mainly
>     affects CMA and virtio-mem.
> 
>     Similar to 3). virtio-mem will handle migration errors gracefully.
>     CMA might be able to fallback on other free areas within the CMA
>     region.
> 
> We do not want to use GFP_ATOMIC to allocate vmemmap pages, because it
> grants access to memory reserves and we do not think it is reasonable
> to use memory reserves. We use GFP_KERNEL in alloc_huge_page_vmemmap().
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> ---
>  Documentation/admin-guide/mm/hugetlbpage.rst |  8 +++
>  include/linux/mm.h                           |  2 +
>  mm/hugetlb.c                                 | 81 ++++++++++++++++++++--------
>  mm/hugetlb_vmemmap.c                         | 22 ++++++++
>  mm/hugetlb_vmemmap.h                         |  6 +++
>  mm/sparse-vmemmap.c                          | 75 +++++++++++++++++++++++++-
>  6 files changed, 171 insertions(+), 23 deletions(-)
> 
> diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
> index f7b1c7462991..fb8f649e5635 100644
> --- a/Documentation/admin-guide/mm/hugetlbpage.rst
> +++ b/Documentation/admin-guide/mm/hugetlbpage.rst
> @@ -60,6 +60,10 @@ HugePages_Surp
>          the pool above the value in ``/proc/sys/vm/nr_hugepages``. The
>          maximum number of surplus huge pages is controlled by
>          ``/proc/sys/vm/nr_overcommit_hugepages``.
> +	Note: When the feature of freeing unused vmemmap pages associated
> +	with each hugetlb page is enabled, the number of the surplus huge

Small wording change:

	with each hugetlb page is enabled, the number of surplus huge

> +	pages may be temporarily larger than the maximum number of surplus
> +	huge pages when the system is under memory pressure.
>  Hugepagesize
>  	is the default hugepage size (in Kb).
>  Hugetlb
> @@ -80,6 +84,10 @@ returned to the huge page pool when freed by a task.  A user with root
>  privileges can dynamically allocate more or free some persistent huge pages
>  by increasing or decreasing the value of ``nr_hugepages``.
>  
> +Note: When the feature of freeing unused vmemmap pages associated with each
> +hugetlb page is enabled, we can failed to free the huge pages triggered by

Small wording change:

   hugetlb page is enabled, we can fail to free the huge pages triggered by

> +the user when ths system is under memory pressure.  Please try again later.
> +
>  Pages that are used as huge pages are reserved inside the kernel and cannot
>  be used for other purposes.  Huge pages cannot be swapped out under
>  memory pressure.
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index d7dddf334779..33c5911afe18 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2981,6 +2981,8 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
>  
>  void vmemmap_remap_free(unsigned long start, unsigned long end,
>  			unsigned long reuse);
> +int vmemmap_remap_alloc(unsigned long start, unsigned long end,
> +			unsigned long reuse, gfp_t gfp_mask);
>  
>  void *sparse_buffer_alloc(unsigned long size);
>  struct page * __populate_section_memmap(unsigned long pfn,
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 4cfca27c6d32..bcf856974c48 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1305,37 +1305,68 @@ static inline void destroy_compound_gigantic_page(struct page *page,
>  						unsigned int order) { }
>  #endif
>  
> -static void update_and_free_page(struct hstate *h, struct page *page)
> +static int update_and_free_page(struct hstate *h, struct page *page)
> +	__releases(&hugetlb_lock) __acquires(&hugetlb_lock)
>  {
>  	int i;
> +	int nid = page_to_nid(page);
>  
>  	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
> -		return;
> +		return 0;
>  
>  	h->nr_huge_pages--;
> -	h->nr_huge_pages_node[page_to_nid(page)]--;
> +	h->nr_huge_pages_node[nid]--;
> +	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page);
> +	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page_rsvd(page), page);
> +	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
> +	set_page_refcounted(page);

I think you added the set_page_refcounted() because the huge page will
appear as just a compound page without a reference after dropping the
hugetlb lock?  It might be better to set the reference before modifying
the destructor.  Otherwise, page scanning code could find the non-hugetlb
compound page with no reference.  I could not find any code where this
would be a problem, but I think it would be safer to set the reference
first.
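
In other words, a sketch of the suggested ordering (not the final patch):

	set_page_refcounted(page);		/* take the reference first... */
	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);	/* ...then clear the dtor */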

> +	spin_unlock(&hugetlb_lock);

I really like the way this code is structured.  It is much simpler than
previous versions with retries or workqueue.  There is nothing wrong with
always dropping the lock here.  However, I wonder if we should think about
optimizing for the case where this feature is not enabled and we are not
freeing a 1G huge page.  I suspect this will be the most common case for
some time, and there is no need to drop the lock in this case.

Please do not change the code based on my comment.  I just wanted to bring
this up for thought.

Is it as simple as checking?
        if (free_vmemmap_pages_per_hpage(h) || hstate_is_gigantic(h))
                spin_unlock(&hugetlb_lock);

        /* before return */
        if (free_vmemmap_pages_per_hpage(h) || hstate_is_gigantic(h))
                spin_lock(&hugetlb_lock);

> +
> +	if (alloc_huge_page_vmemmap(h, page)) {
> +		int zeroed;
> +
> +		spin_lock(&hugetlb_lock);
> +		INIT_LIST_HEAD(&page->lru);
> +		set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
> +		h->nr_huge_pages++;
> +		h->nr_huge_pages_node[nid]++;
> +
> +		/*
> +		 * If we cannot allocate vmemmap pages, just refuse to free the
> +		 * page and put the page back on the hugetlb free list and treat
> +		 * as a surplus page.
> +		 */
> +		h->surplus_huge_pages++;
> +		h->surplus_huge_pages_node[nid]++;
> +
> +		/*
> +		 * This page is now managed by the hugetlb allocator and has
> +		 * no users -- drop the last reference.
> +		 */
> +		zeroed = put_page_testzero(page);
> +		VM_BUG_ON_PAGE(!zeroed, page);
> +		arch_clear_hugepage_flags(page);
> +		enqueue_huge_page(h, page);
> +
> +		return -ENOMEM;
> +	}
> +
>  	for (i = 0; i < pages_per_huge_page(h); i++) {
>  		page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
>  				1 << PG_referenced | 1 << PG_dirty |
>  				1 << PG_active | 1 << PG_private |
>  				1 << PG_writeback);
>  	}
> -	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page);
> -	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page_rsvd(page), page);
> -	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
> -	set_page_refcounted(page);
>  	if (hstate_is_gigantic(h)) {
> -		/*
> -		 * Temporarily drop the hugetlb_lock, because
> -		 * we might block in free_gigantic_page().
> -		 */
> -		spin_unlock(&hugetlb_lock);
>  		destroy_compound_gigantic_page(page, huge_page_order(h));
>  		free_gigantic_page(page, huge_page_order(h));
> -		spin_lock(&hugetlb_lock);
>  	} else {
>  		__free_pages(page, huge_page_order(h));
>  	}
> +
> +	spin_lock(&hugetlb_lock);
> +
> +	return 0;
>  }
>  
>  struct hstate *size_to_hstate(unsigned long size)
> @@ -1403,9 +1434,9 @@ static void __free_huge_page(struct page *page)
>  	} else if (h->surplus_huge_pages_node[nid]) {
>  		/* remove the page from active list */
>  		list_del(&page->lru);
> -		update_and_free_page(h, page);
>  		h->surplus_huge_pages--;
>  		h->surplus_huge_pages_node[nid]--;
> +		update_and_free_page(h, page);
>  	} else {
>  		arch_clear_hugepage_flags(page);
>  		enqueue_huge_page(h, page);
> @@ -1693,6 +1724,7 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
>  			struct page *page =
>  				list_entry(h->hugepage_freelists[node].next,
>  					  struct page, lru);
> +			ClearHPageFreed(page);

Quick question.  Is this change directly related to the vmemmap changes,
or is it a cleanup that you noticed?

>  			list_del(&page->lru);
>  			h->free_huge_pages--;
>  			h->free_huge_pages_node[node]--;
> @@ -1700,8 +1732,7 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
>  				h->surplus_huge_pages--;
>  				h->surplus_huge_pages_node[node]--;
>  			}
> -			update_and_free_page(h, page);
> -			ret = 1;
> +			ret = !update_and_free_page(h, page);
>  			break;
>  		}
>  	}
> @@ -1714,10 +1745,14 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
>   * nothing for in-use hugepages and non-hugepages.
>   * This function returns values like below:
>   *
> - *  -EBUSY: failed to dissolved free hugepages or the hugepage is in-use
> - *          (allocated or reserved.)
> - *       0: successfully dissolved free hugepages or the page is not a
> - *          hugepage (considered as already dissolved)
> + *  -ENOMEM: failed to allocate vmemmap pages to free the freed hugepages
> + *           when the system is under memory pressure and the feature of
> + *           freeing unused vmemmap pages associated with each hugetlb page
> + *           is enabled.
> + *  -EBUSY:  failed to dissolved free hugepages or the hugepage is in-use
> + *           (allocated or reserved.)
> + *       0:  successfully dissolved free hugepages or the page is not a
> + *           hugepage (considered as already dissolved)
>   */
>  int dissolve_free_huge_page(struct page *page)
>  {
> @@ -1768,12 +1803,14 @@ int dissolve_free_huge_page(struct page *page)
>  			SetPageHWPoison(page);
>  			ClearPageHWPoison(head);
>  		}
> +		ClearHPageFreed(page);
>  		list_del(&head->lru);
>  		h->free_huge_pages--;
>  		h->free_huge_pages_node[nid]--;
>  		h->max_huge_pages--;
> -		update_and_free_page(h, head);
> -		rc = 0;
> +		rc = update_and_free_page(h, head);
> +		if (rc)
> +			h->max_huge_pages++;

Since update_and_free_page failed, the number of surplus pages was
incremented.  Surplus pages are the number of pages greater than
max_huge_pages.  Since we are incrementing max_huge_pages, we should
decrement (undo) the addition to surplus_huge_pages and
surplus_huge_pages_node[nid].  So, I think we want
			h->surplus_huge_pages--;
			h->surplus_huge_pages_node[nid]--;
here as well.
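
Putting that together with the hunk above, the error path would then
look roughly like:

		rc = update_and_free_page(h, head);
		if (rc) {
			/*
			 * Undo the max_huge_pages decrement above and the
			 * surplus accounting done in update_and_free_page()'s
			 * failure path.
			 */
			h->max_huge_pages++;
			h->surplus_huge_pages--;
			h->surplus_huge_pages_node[nid]--;
		}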

>  	}
>  out:
>  	spin_unlock(&hugetlb_lock);

In previous versions of this patch series, we discussed and refined the
vmemmap manipulation routines below.  They still look good to me.

In general, I like the approach taken in this patch.  Hopefully, others
will comment and we can move the series forward.
-- 
Mike Kravetz

> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index 0209b736e0b4..29a3380f3b20 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -198,6 +198,28 @@ static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
>  	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
>  }
>  
> +int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
> +{
> +	unsigned long vmemmap_addr = (unsigned long)head;
> +	unsigned long vmemmap_end, vmemmap_reuse;
> +
> +	if (!free_vmemmap_pages_per_hpage(h))
> +		return 0;
> +
> +	vmemmap_addr += RESERVE_VMEMMAP_SIZE;
> +	vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h);
> +	vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
> +	/*
> +	 * The pages which the vmemmap virtual address range [@vmemmap_addr,
> +	 * @vmemmap_end) are mapped to are freed to the buddy allocator, and
> +	 * the range is mapped to the page which @vmemmap_reuse is mapped to.
> +	 * When a HugeTLB page is freed to the buddy allocator, previously
> +	 * discarded vmemmap pages must be allocated and remapping.
> +	 */
> +	return vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse,
> +				   GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE);
> +}
> +
>  void free_huge_page_vmemmap(struct hstate *h, struct page *head)
>  {
>  	unsigned long vmemmap_addr = (unsigned long)head;
> diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
> index 6923f03534d5..e5547d53b9f5 100644
> --- a/mm/hugetlb_vmemmap.h
> +++ b/mm/hugetlb_vmemmap.h
> @@ -11,8 +11,14 @@
>  #include <linux/hugetlb.h>
>  
>  #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> +int alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
>  void free_huge_page_vmemmap(struct hstate *h, struct page *head);
>  #else
> +static inline int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
> +{
> +	return 0;
> +}
> +
>  static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
>  {
>  }
> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index d3076a7a3783..60fc6cd6cd23 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -40,7 +40,8 @@
>   * @remap_pte:		called for each lowest-level entry (PTE).
>   * @reuse_page:		the page which is reused for the tail vmemmap pages.
>   * @reuse_addr:		the virtual address of the @reuse_page page.
> - * @vmemmap_pages:	the list head of the vmemmap pages that can be freed.
> + * @vmemmap_pages:	the list head of the vmemmap pages that can be freed
> + *			or is mapped from.
>   */
>  struct vmemmap_remap_walk {
>  	void (*remap_pte)(pte_t *pte, unsigned long addr,
> @@ -237,6 +238,78 @@ void vmemmap_remap_free(unsigned long start, unsigned long end,
>  	free_vmemmap_page_list(&vmemmap_pages);
>  }
>  
> +static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
> +				struct vmemmap_remap_walk *walk)
> +{
> +	pgprot_t pgprot = PAGE_KERNEL;
> +	struct page *page;
> +	void *to;
> +
> +	BUG_ON(pte_page(*pte) != walk->reuse_page);
> +
> +	page = list_first_entry(walk->vmemmap_pages, struct page, lru);
> +	list_del(&page->lru);
> +	to = page_to_virt(page);
> +	copy_page(to, (void *)walk->reuse_addr);
> +
> +	set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
> +}
> +
> +static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
> +				   gfp_t gfp_mask, struct list_head *list)
> +{
> +	unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
> +	int nid = page_to_nid((struct page *)start);
> +	struct page *page, *next;
> +
> +	while (nr_pages--) {
> +		page = alloc_pages_node(nid, gfp_mask, 0);
> +		if (!page)
> +			goto out;
> +		list_add_tail(&page->lru, list);
> +	}
> +
> +	return 0;
> +out:
> +	list_for_each_entry_safe(page, next, list, lru)
> +		__free_pages(page, 0);
> +	return -ENOMEM;
> +}
> +
> +/**
> + * vmemmap_remap_alloc - remap the vmemmap virtual address range [@start, end)
> + *			 to the page which is from the @vmemmap_pages
> + *			 respectively.
> + * @start:	start address of the vmemmap virtual address range that we want
> + *		to remap.
> + * @end:	end address of the vmemmap virtual address range that we want to
> + *		remap.
> + * @reuse:	reuse address.
> + * @gpf_mask:	GFP flag for allocating vmemmap pages.
> + */
> +int vmemmap_remap_alloc(unsigned long start, unsigned long end,
> +			unsigned long reuse, gfp_t gfp_mask)
> +{
> +	LIST_HEAD(vmemmap_pages);
> +	struct vmemmap_remap_walk walk = {
> +		.remap_pte	= vmemmap_restore_pte,
> +		.reuse_addr	= reuse,
> +		.vmemmap_pages	= &vmemmap_pages,
> +	};
> +
> +	/* See the comment in the vmemmap_remap_free(). */
> +	BUG_ON(start - reuse != PAGE_SIZE);
> +
> +	might_sleep_if(gfpflags_allow_blocking(gfp_mask));
> +
> +	if (alloc_vmemmap_page_list(start, end, gfp_mask, &vmemmap_pages))
> +		return -ENOMEM;
> +
> +	vmemmap_remap_range(reuse, end, &walk);
> +
> +	return 0;
> +}
> +
>  /*
>   * Allocate a block of memory to be used to back the virtual memory map
>   * or to back the page tables that are used to create the mapping.
> 

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [External] Re: [PATCH v16 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-02-23  0:00   ` Mike Kravetz
@ 2021-02-23  5:35       ` Muchun Song
  2021-02-23  9:27     ` Oscar Salvador
  1 sibling, 0 replies; 25+ messages in thread
From: Muchun Song @ 2021-02-23  5:35 UTC (permalink / raw)
  To: Mike Kravetz, Michal Hocko, Oscar Salvador
  Cc: Jonathan Corbet, Thomas Gleixner, mingo, bp, x86, hpa,
	dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton, paulmck,
	mchehab+huawei, pawan.kumar.gupta, Randy Dunlap, oneukum,
	anshuman.khandual, jroedel, Mina Almasry, David Rientjes,
	Matthew Wilcox, Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Tue, Feb 23, 2021 at 8:01 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 2/19/21 2:49 AM, Muchun Song wrote:
> > When we free a HugeTLB page to the buddy allocator, we should allocate
> > the vmemmap pages associated with it. But we may not be able to allocate
> > vmemmap pages when the system is under memory pressure; in this case, we
> > just refuse to free the HugeTLB page instead of looping forever trying to
> > allocate the pages. This changes some behavior (listed below) in some
> > corner cases.
>
> Thank you for listing the changes in behavior and possible side effects of
> not being able to allocate vmemmap and free a huge page to buddy!
>
> I will not repeat Michal's comment about the check for an atomic context
> in the free_huge_page path.
>
> >
> >  1) Failing to free a huge page triggered by the user (decrease nr_pages).
> >
> >     Need try again later by the user.
> >
> >  2) Failing to free a surplus huge page when freed by the application.
> >
> >     Try again later when freeing a huge page next time.
> >
> >  3) Failing to dissolve a free huge page on ZONE_MOVABLE via
> >     offline_pages().
> >
> >     This is a bit unfortunate if we have plenty of ZONE_MOVABLE memory
> >     but are low on kernel memory. For example, migration of huge pages
> >     would still work, however, dissolving the free page does not work.
> >     This is a corner case. When the system is under that much memory
> >     pressure, offlining/unplug can be expected to fail.
> >
> >  4) Failing to dissolve a huge page on CMA/ZONE_MOVABLE via
> >     alloc_contig_range() - once we have that handling in place. Mainly
> >     affects CMA and virtio-mem.
> >
> >     Similar to 3). virtio-mem will handle migration errors gracefully.
> >     CMA might be able to fallback on other free areas within the CMA
> >     region.
> >
> > We do not want to use GFP_ATOMIC to allocate vmemmap pages, because it
> > grants access to memory reserves and we do not think it is reasonable
> > to use memory reserves. We use GFP_KERNEL in alloc_huge_page_vmemmap().
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> >  Documentation/admin-guide/mm/hugetlbpage.rst |  8 +++
> >  include/linux/mm.h                           |  2 +
> >  mm/hugetlb.c                                 | 81 ++++++++++++++++++++--------
> >  mm/hugetlb_vmemmap.c                         | 22 ++++++++
> >  mm/hugetlb_vmemmap.h                         |  6 +++
> >  mm/sparse-vmemmap.c                          | 75 +++++++++++++++++++++++++-
> >  6 files changed, 171 insertions(+), 23 deletions(-)
> >
> > diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
> > index f7b1c7462991..fb8f649e5635 100644
> > --- a/Documentation/admin-guide/mm/hugetlbpage.rst
> > +++ b/Documentation/admin-guide/mm/hugetlbpage.rst
> > @@ -60,6 +60,10 @@ HugePages_Surp
> >          the pool above the value in ``/proc/sys/vm/nr_hugepages``. The
> >          maximum number of surplus huge pages is controlled by
> >          ``/proc/sys/vm/nr_overcommit_hugepages``.
> > +     Note: When the feature of freeing unused vmemmap pages associated
> > +     with each hugetlb page is enabled, the number of the surplus huge
>
> Small wording change:
>
>         with each hugetlb page is enabled, the number of surplus huge

Thanks. I will update this.

>
> > +     pages may be temporarily larger than the maximum number of surplus
> > +     huge pages when the system is under memory pressure.
> >  Hugepagesize
> >       is the default hugepage size (in Kb).
> >  Hugetlb
> > @@ -80,6 +84,10 @@ returned to the huge page pool when freed by a task.  A user with root
> >  privileges can dynamically allocate more or free some persistent huge pages
> >  by increasing or decreasing the value of ``nr_hugepages``.
> >
> > +Note: When the feature of freeing unused vmemmap pages associated with each
> > +hugetlb page is enabled, we can failed to free the huge pages triggered by
>
> Small wording change:
>
>    hugetlb page is enabled, we can fail to free the huge pages triggered by

Thanks. I will update this.

>
> > +the user when ths system is under memory pressure.  Please try again later.
> > +
> >  Pages that are used as huge pages are reserved inside the kernel and cannot
> >  be used for other purposes.  Huge pages cannot be swapped out under
> >  memory pressure.
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index d7dddf334779..33c5911afe18 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -2981,6 +2981,8 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
> >
> >  void vmemmap_remap_free(unsigned long start, unsigned long end,
> >                       unsigned long reuse);
> > +int vmemmap_remap_alloc(unsigned long start, unsigned long end,
> > +                     unsigned long reuse, gfp_t gfp_mask);
> >
> >  void *sparse_buffer_alloc(unsigned long size);
> >  struct page * __populate_section_memmap(unsigned long pfn,
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 4cfca27c6d32..bcf856974c48 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1305,37 +1305,68 @@ static inline void destroy_compound_gigantic_page(struct page *page,
> >                                               unsigned int order) { }
> >  #endif
> >
> > -static void update_and_free_page(struct hstate *h, struct page *page)
> > +static int update_and_free_page(struct hstate *h, struct page *page)
> > +     __releases(&hugetlb_lock) __acquires(&hugetlb_lock)
> >  {
> >       int i;
> > +     int nid = page_to_nid(page);
> >
> >       if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
> > -             return;
> > +             return 0;
> >
> >       h->nr_huge_pages--;
> > -     h->nr_huge_pages_node[page_to_nid(page)]--;
> > +     h->nr_huge_pages_node[nid]--;
> > +     VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page);
> > +     VM_BUG_ON_PAGE(hugetlb_cgroup_from_page_rsvd(page), page);
> > +     set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
> > +     set_page_refcounted(page);
>
> I think you added the set_page_refcounted() because the huge page will
> appear as just a compound page without a reference after dropping the
> hugetlb lock?

Right.

> It might be better to set the reference before modifying
> the destructor.  Otherwise, page scanning code could find the non-hugetlb
> compound page with no reference.  I could not find any code where this
> would be a problem, but I think it would be safer to set the reference
> first.

Makes sense to me. It is better to set the refcount first.

>
> > +     spin_unlock(&hugetlb_lock);
>
> I really like the way this code is structured.  It is much simpler than
> previous versions with retries or workqueue.  There is nothing wrong with
> always dropping the lock here.  However, I wonder if we should think about
> optimizing for the case where this feature is not enabled and we are not
> freeing a 1G huge page.  I suspect this will be the most common case for
> some time, and there is no need to drop the lock in this case.
>
> Please do not change the code based on my comment.  I just wanted to bring
> this up for thought.

It at least makes sense to me. It may take a long time to free a 1G
huge page, so dropping the lock may be a good choice. But I would also
like to hear Oscar's and Michal's opinions on this.

>
> Is it as simple as checking?
>         if (free_vmemmap_pages_per_hpage(h) || hstate_is_gigantic(h))
>                 spin_unlock(&hugetlb_lock);

>
>         /* before return */
>         if (free_vmemmap_pages_per_hpage(h) || hstate_is_gigantic(h))
>                 spin_lock(&hugetlb_lock);
>
> > +
> > +     if (alloc_huge_page_vmemmap(h, page)) {
> > +             int zeroed;
> > +
> > +             spin_lock(&hugetlb_lock);
> > +             INIT_LIST_HEAD(&page->lru);
> > +             set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
> > +             h->nr_huge_pages++;
> > +             h->nr_huge_pages_node[nid]++;
> > +
> > +             /*
> > +              * If we cannot allocate vmemmap pages, just refuse to free the
> > +              * page and put the page back on the hugetlb free list and treat
> > +              * as a surplus page.
> > +              */
> > +             h->surplus_huge_pages++;
> > +             h->surplus_huge_pages_node[nid]++;
> > +
> > +             /*
> > +              * This page is now managed by the hugetlb allocator and has
> > +              * no users -- drop the last reference.
> > +              */
> > +             zeroed = put_page_testzero(page);
> > +             VM_BUG_ON_PAGE(!zeroed, page);
> > +             arch_clear_hugepage_flags(page);
> > +             enqueue_huge_page(h, page);
> > +
> > +             return -ENOMEM;
> > +     }
> > +
> >       for (i = 0; i < pages_per_huge_page(h); i++) {
> >               page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
> >                               1 << PG_referenced | 1 << PG_dirty |
> >                               1 << PG_active | 1 << PG_private |
> >                               1 << PG_writeback);
> >       }
> > -     VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page);
> > -     VM_BUG_ON_PAGE(hugetlb_cgroup_from_page_rsvd(page), page);
> > -     set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
> > -     set_page_refcounted(page);
> >       if (hstate_is_gigantic(h)) {
> > -             /*
> > -              * Temporarily drop the hugetlb_lock, because
> > -              * we might block in free_gigantic_page().
> > -              */
> > -             spin_unlock(&hugetlb_lock);
> >               destroy_compound_gigantic_page(page, huge_page_order(h));
> >               free_gigantic_page(page, huge_page_order(h));
> > -             spin_lock(&hugetlb_lock);
> >       } else {
> >               __free_pages(page, huge_page_order(h));
> >       }
> > +
> > +     spin_lock(&hugetlb_lock);
> > +
> > +     return 0;
> >  }
> >
> >  struct hstate *size_to_hstate(unsigned long size)
> > @@ -1403,9 +1434,9 @@ static void __free_huge_page(struct page *page)
> >       } else if (h->surplus_huge_pages_node[nid]) {
> >               /* remove the page from active list */
> >               list_del(&page->lru);
> > -             update_and_free_page(h, page);
> >               h->surplus_huge_pages--;
> >               h->surplus_huge_pages_node[nid]--;
> > +             update_and_free_page(h, page);
> >       } else {
> >               arch_clear_hugepage_flags(page);
> >               enqueue_huge_page(h, page);
> > @@ -1693,6 +1724,7 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
> >                       struct page *page =
> >                               list_entry(h->hugepage_freelists[node].next,
> >                                         struct page, lru);
> > +                     ClearHPageFreed(page);
>
> Quick question.  Is this change directly related to the vmemmap changes,
> or is it a cleanup that you noticed?

Just a cleanup. Maybe there should be a separate patch for this.

>
> >                       list_del(&page->lru);
> >                       h->free_huge_pages--;
> >                       h->free_huge_pages_node[node]--;
> > @@ -1700,8 +1732,7 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
> >                               h->surplus_huge_pages--;
> >                               h->surplus_huge_pages_node[node]--;
> >                       }
> > -                     update_and_free_page(h, page);
> > -                     ret = 1;
> > +                     ret = !update_and_free_page(h, page);
> >                       break;
> >               }
> >       }
> > @@ -1714,10 +1745,14 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
> >   * nothing for in-use hugepages and non-hugepages.
> >   * This function returns values like below:
> >   *
> > - *  -EBUSY: failed to dissolved free hugepages or the hugepage is in-use
> > - *          (allocated or reserved.)
> > - *       0: successfully dissolved free hugepages or the page is not a
> > - *          hugepage (considered as already dissolved)
> > + *  -ENOMEM: failed to allocate vmemmap pages to free the freed hugepages
> > + *           when the system is under memory pressure and the feature of
> > + *           freeing unused vmemmap pages associated with each hugetlb page
> > + *           is enabled.
> > + *  -EBUSY:  failed to dissolved free hugepages or the hugepage is in-use
> > + *           (allocated or reserved.)
> > + *       0:  successfully dissolved free hugepages or the page is not a
> > + *           hugepage (considered as already dissolved)
> >   */
> >  int dissolve_free_huge_page(struct page *page)
> >  {
> > @@ -1768,12 +1803,14 @@ int dissolve_free_huge_page(struct page *page)
> >                       SetPageHWPoison(page);
> >                       ClearPageHWPoison(head);
> >               }
> > +             ClearHPageFreed(page);
> >               list_del(&head->lru);
> >               h->free_huge_pages--;
> >               h->free_huge_pages_node[nid]--;
> >               h->max_huge_pages--;
> > -             update_and_free_page(h, head);
> > -             rc = 0;
> > +             rc = update_and_free_page(h, head);
> > +             if (rc)
> > +                     h->max_huge_pages++;
>
> Since update_and_free_page failed, the number of surplus pages was
> incremented.  Surplus pages are the number of pages greater than
> max_huge_pages.  Since we are incrementing max_huge_pages, we should
> decrement (undo) the addition to surplus_huge_pages and
> surplus_huge_pages_node[nid].  So, I think we want
>                         h->surplus_huge_pages--;
>                         h->surplus_huge_pages_node[nid]--;
> here as well.

You are right. Thanks for reminding me of this.

>
> >       }
> >  out:
> >       spin_unlock(&hugetlb_lock);
>
> In previous versions of this patch series, we discussed and refined the
> vmemmap manipulation routines below.  They still look good to me.
>
> In general, I like the approach taken in this patch.  Hopefully, others
> will comment and we can move the series forward.
> --
> Mike Kravetz
>
> > diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> > index 0209b736e0b4..29a3380f3b20 100644
> > --- a/mm/hugetlb_vmemmap.c
> > +++ b/mm/hugetlb_vmemmap.c
> > @@ -198,6 +198,28 @@ static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
> >       return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
> >  }
> >
> > +int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
> > +{
> > +     unsigned long vmemmap_addr = (unsigned long)head;
> > +     unsigned long vmemmap_end, vmemmap_reuse;
> > +
> > +     if (!free_vmemmap_pages_per_hpage(h))
> > +             return 0;
> > +
> > +     vmemmap_addr += RESERVE_VMEMMAP_SIZE;
> > +     vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h);
> > +     vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
> > +     /*
> > +      * The pages which the vmemmap virtual address range [@vmemmap_addr,
> > +      * @vmemmap_end) are mapped to are freed to the buddy allocator, and
> > +      * the range is mapped to the page which @vmemmap_reuse is mapped to.
> > +      * When a HugeTLB page is freed to the buddy allocator, previously
> > +      * discarded vmemmap pages must be allocated and remapping.
> > +      */
> > +     return vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse,
> > +                                GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE);
> > +}
> > +
> >  void free_huge_page_vmemmap(struct hstate *h, struct page *head)
> >  {
> >       unsigned long vmemmap_addr = (unsigned long)head;
> > diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
> > index 6923f03534d5..e5547d53b9f5 100644
> > --- a/mm/hugetlb_vmemmap.h
> > +++ b/mm/hugetlb_vmemmap.h
> > @@ -11,8 +11,14 @@
> >  #include <linux/hugetlb.h>
> >
> >  #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> > +int alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
> >  void free_huge_page_vmemmap(struct hstate *h, struct page *head);
> >  #else
> > +static inline int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
> > +{
> > +     return 0;
> > +}
> > +
> >  static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
> >  {
> >  }
> > diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> > index d3076a7a3783..60fc6cd6cd23 100644
> > --- a/mm/sparse-vmemmap.c
> > +++ b/mm/sparse-vmemmap.c
> > @@ -40,7 +40,8 @@
> >   * @remap_pte:               called for each lowest-level entry (PTE).
> >   * @reuse_page:              the page which is reused for the tail vmemmap pages.
> >   * @reuse_addr:              the virtual address of the @reuse_page page.
> > - * @vmemmap_pages:   the list head of the vmemmap pages that can be freed.
> > + * @vmemmap_pages:   the list head of the vmemmap pages that can be freed
> > + *                   or is mapped from.
> >   */
> >  struct vmemmap_remap_walk {
> >       void (*remap_pte)(pte_t *pte, unsigned long addr,
> > @@ -237,6 +238,78 @@ void vmemmap_remap_free(unsigned long start, unsigned long end,
> >       free_vmemmap_page_list(&vmemmap_pages);
> >  }
> >
> > +static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
> > +                             struct vmemmap_remap_walk *walk)
> > +{
> > +     pgprot_t pgprot = PAGE_KERNEL;
> > +     struct page *page;
> > +     void *to;
> > +
> > +     BUG_ON(pte_page(*pte) != walk->reuse_page);
> > +
> > +     page = list_first_entry(walk->vmemmap_pages, struct page, lru);
> > +     list_del(&page->lru);
> > +     to = page_to_virt(page);
> > +     copy_page(to, (void *)walk->reuse_addr);
> > +
> > +     set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
> > +}
> > +
> > +static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
> > +                                gfp_t gfp_mask, struct list_head *list)
> > +{
> > +     unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
> > +     int nid = page_to_nid((struct page *)start);
> > +     struct page *page, *next;
> > +
> > +     while (nr_pages--) {
> > +             page = alloc_pages_node(nid, gfp_mask, 0);
> > +             if (!page)
> > +                     goto out;
> > +             list_add_tail(&page->lru, list);
> > +     }
> > +
> > +     return 0;
> > +out:
> > +     list_for_each_entry_safe(page, next, list, lru)
> > +             __free_pages(page, 0);
> > +     return -ENOMEM;
> > +}
> > +
> > +/**
> > + * vmemmap_remap_alloc - remap the vmemmap virtual address range [@start, end)
> > + *                    to the page which is from the @vmemmap_pages
> > + *                    respectively.
> > + * @start:   start address of the vmemmap virtual address range that we want
> > + *           to remap.
> > + * @end:     end address of the vmemmap virtual address range that we want to
> > + *           remap.
> > + * @reuse:   reuse address.
> > + * @gpf_mask:        GFP flag for allocating vmemmap pages.
> > + */
> > +int vmemmap_remap_alloc(unsigned long start, unsigned long end,
> > +                     unsigned long reuse, gfp_t gfp_mask)
> > +{
> > +     LIST_HEAD(vmemmap_pages);
> > +     struct vmemmap_remap_walk walk = {
> > +             .remap_pte      = vmemmap_restore_pte,
> > +             .reuse_addr     = reuse,
> > +             .vmemmap_pages  = &vmemmap_pages,
> > +     };
> > +
> > +     /* See the comment in the vmemmap_remap_free(). */
> > +     BUG_ON(start - reuse != PAGE_SIZE);
> > +
> > +     might_sleep_if(gfpflags_allow_blocking(gfp_mask));
> > +
> > +     if (alloc_vmemmap_page_list(start, end, gfp_mask, &vmemmap_pages))
> > +             return -ENOMEM;
> > +
> > +     vmemmap_remap_range(reuse, end, &walk);
> > +
> > +     return 0;
> > +}
> > +
> >  /*
> >   * Allocate a block of memory to be used to back the virtual memory map
> >   * or to back the page tables that are used to create the mapping.
> >

^ permalink raw reply	[flat|nested] 25+ messages in thread

* Re: [External] Re: [PATCH v16 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
@ 2021-02-23  5:35       ` Muchun Song
  0 siblings, 0 replies; 25+ messages in thread
From: Muchun Song @ 2021-02-23  5:35 UTC (permalink / raw)
  To: Mike Kravetz, Michal Hocko, Oscar Salvador
  Cc: Jonathan Corbet, Thomas Gleixner, mingo, bp, x86, hpa,
	dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton, paulmck,
	mchehab+huawei, pawan.kumar.gupta, Randy Dunlap, oneukum,
	anshuman.khandual, jroedel, Mina Almasry, David Rientjes,
	Matthew Wilcox, Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Tue, Feb 23, 2021 at 8:01 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> On 2/19/21 2:49 AM, Muchun Song wrote:
> > When we free a HugeTLB page to the buddy allocator, we should allocate
> > the vmemmap pages associated with it. But we may cannot allocate vmemmap
> > pages when the system is under memory pressure, in this case, we just
> > refuse to free the HugeTLB page instead of looping forever trying to
> > allocate the pages. This changes some behavior (list below) on some
> > corner cases.
>
> Thank you for listing changes in behavior and possible side effects of
> not being able to allocate vmemmmap and free huge page to buddy!
>
> I will not repeat Michal's comment about the check for an atomic context
> in free_huge_page path.
>
> >
> >  1) Failing to free a huge page triggered by the user (decrease nr_pages).
> >
> >     Need try again later by the user.
> >
> >  2) Failing to free a surplus huge page when freed by the application.
> >
> >     Try again later when freeing a huge page next time.
> >
> >  3) Failing to dissolve a free huge page on ZONE_MOVABLE via
> >     offline_pages().
> >
> >     This is a bit unfortunate if we have plenty of ZONE_MOVABLE memory
> >     but are low on kernel memory. For example, migration of huge pages
> >     would still work, however, dissolving the free page does not work.
> >     This is a corner cases. When the system is that much under memory
> >     pressure, offlining/unplug can be expected to fail.
> >
> >  4) Failing to dissolve a huge page on CMA/ZONE_MOVABLE via
> >     alloc_contig_range() - once we have that handling in place. Mainly
> >     affects CMA and virtio-mem.
> >
> >     Similar to 3). virito-mem will handle migration errors gracefully.
> >     CMA might be able to fallback on other free areas within the CMA
> >     region.
> >
> > We do not want to use GFP_ATOMIC to allocate vmemmap pages. Because it
> > grants access to memory reserves and we do not think it is reasonable
> > to use memory reserves. We use GFP_KERNEL in alloc_huge_page_vmemmap().
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > ---
> >  Documentation/admin-guide/mm/hugetlbpage.rst |  8 +++
> >  include/linux/mm.h                           |  2 +
> >  mm/hugetlb.c                                 | 81 ++++++++++++++++++++--------
> >  mm/hugetlb_vmemmap.c                         | 22 ++++++++
> >  mm/hugetlb_vmemmap.h                         |  6 +++
> >  mm/sparse-vmemmap.c                          | 75 +++++++++++++++++++++++++-
> >  6 files changed, 171 insertions(+), 23 deletions(-)
> >
> > diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
> > index f7b1c7462991..fb8f649e5635 100644
> > --- a/Documentation/admin-guide/mm/hugetlbpage.rst
> > +++ b/Documentation/admin-guide/mm/hugetlbpage.rst
> > @@ -60,6 +60,10 @@ HugePages_Surp
> >          the pool above the value in ``/proc/sys/vm/nr_hugepages``. The
> >          maximum number of surplus huge pages is controlled by
> >          ``/proc/sys/vm/nr_overcommit_hugepages``.
> > +     Note: When the feature of freeing unused vmemmap pages associated
> > +     with each hugetlb page is enabled, the number of the surplus huge
>
> Small wording change:
>
>         with each hugetlb page is enabled, the number of surplus huge

Thanks. I will update this.

>
> > +     pages may be temporarily larger than the maximum number of surplus
> > +     huge pages when the system is under memory pressure.
> >  Hugepagesize
> >       is the default hugepage size (in Kb).
> >  Hugetlb
> > @@ -80,6 +84,10 @@ returned to the huge page pool when freed by a task.  A user with root
> >  privileges can dynamically allocate more or free some persistent huge pages
> >  by increasing or decreasing the value of ``nr_hugepages``.
> >
> > +Note: When the feature of freeing unused vmemmap pages associated with each
> > +hugetlb page is enabled, we can failed to free the huge pages triggered by
>
> Small wording change:
>
>    hugetlb page is enabled, we can fail to free the huge pages triggered by

Thanks. I will update this.

>
> > +the user when ths system is under memory pressure.  Please try again later.
> > +
> >  Pages that are used as huge pages are reserved inside the kernel and cannot
> >  be used for other purposes.  Huge pages cannot be swapped out under
> >  memory pressure.
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index d7dddf334779..33c5911afe18 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -2981,6 +2981,8 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
> >
> >  void vmemmap_remap_free(unsigned long start, unsigned long end,
> >                       unsigned long reuse);
> > +int vmemmap_remap_alloc(unsigned long start, unsigned long end,
> > +                     unsigned long reuse, gfp_t gfp_mask);
> >
> >  void *sparse_buffer_alloc(unsigned long size);
> >  struct page * __populate_section_memmap(unsigned long pfn,
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 4cfca27c6d32..bcf856974c48 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1305,37 +1305,68 @@ static inline void destroy_compound_gigantic_page(struct page *page,
> >                                               unsigned int order) { }
> >  #endif
> >
> > -static void update_and_free_page(struct hstate *h, struct page *page)
> > +static int update_and_free_page(struct hstate *h, struct page *page)
> > +     __releases(&hugetlb_lock) __acquires(&hugetlb_lock)
> >  {
> >       int i;
> > +     int nid = page_to_nid(page);
> >
> >       if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
> > -             return;
> > +             return 0;
> >
> >       h->nr_huge_pages--;
> > -     h->nr_huge_pages_node[page_to_nid(page)]--;
> > +     h->nr_huge_pages_node[nid]--;
> > +     VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page);
> > +     VM_BUG_ON_PAGE(hugetlb_cgroup_from_page_rsvd(page), page);
> > +     set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
> > +     set_page_refcounted(page);
>
> I think you added the set_page_refcounted() because the huge page will
> appear as just a compound page without a reference after dropping the
> hugetlb lock?

Right.

> It might be better to set the reference before modifying
> the destructor.  Otherwise, page scanning code could find the non-hugetlb
> compound page with no reference.  I could not find any code where this
> would be a problem, but I think it would be safer to set the reference
> first.

Make sense to me. It is better to set the refcount first.

>
> > +     spin_unlock(&hugetlb_lock);
>
> I really like the way this code is structured.  It is much simpler than
> previous versions with retries or workqueue.  There is nothing wrong with
> always dropping the lock here.  However, I wonder if we should think about
> optimizing for the case where this feature is not enabled and we are not
> freeing a 1G huge page.  I suspect this will be the most common case for
> some time, and there is no need to drop the lock in this case.
>
> Please do not change the code based on my comment.  I just wanted to bring
> this up for thought.

At least make sense to me. It may take a long time to free a 1G
huge page. Dropping the lock may be a good choice. But I also
want to listen to Oscar and Michal's opinion on this.

>
> Is it as simple as checking?
>         if (free_vmemmap_pages_per_hpage(h) || hstate_is_gigantic(h))
>                 spin_unlock(&hugetlb_lock);

>
>         /* before return */
>         if (free_vmemmap_pages_per_hpage(h) || hstate_is_gigantic(h))
>                 spin_lock(&hugetlb_lock);
>
> > +
> > +     if (alloc_huge_page_vmemmap(h, page)) {
> > +             int zeroed;
> > +
> > +             spin_lock(&hugetlb_lock);
> > +             INIT_LIST_HEAD(&page->lru);
> > +             set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
> > +             h->nr_huge_pages++;
> > +             h->nr_huge_pages_node[nid]++;
> > +
> > +             /*
> > +              * If we cannot allocate vmemmap pages, just refuse to free the
> > +              * page and put the page back on the hugetlb free list and treat
> > +              * as a surplus page.
> > +              */
> > +             h->surplus_huge_pages++;
> > +             h->surplus_huge_pages_node[nid]++;
> > +
> > +             /*
> > +              * This page is now managed by the hugetlb allocator and has
> > +              * no users -- drop the last reference.
> > +              */
> > +             zeroed = put_page_testzero(page);
> > +             VM_BUG_ON_PAGE(!zeroed, page);
> > +             arch_clear_hugepage_flags(page);
> > +             enqueue_huge_page(h, page);
> > +
> > +             return -ENOMEM;
> > +     }
> > +
> >       for (i = 0; i < pages_per_huge_page(h); i++) {
> >               page[i].flags &= ~(1 << PG_locked | 1 << PG_error |
> >                               1 << PG_referenced | 1 << PG_dirty |
> >                               1 << PG_active | 1 << PG_private |
> >                               1 << PG_writeback);
> >       }
> > -     VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page);
> > -     VM_BUG_ON_PAGE(hugetlb_cgroup_from_page_rsvd(page), page);
> > -     set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
> > -     set_page_refcounted(page);
> >       if (hstate_is_gigantic(h)) {
> > -             /*
> > -              * Temporarily drop the hugetlb_lock, because
> > -              * we might block in free_gigantic_page().
> > -              */
> > -             spin_unlock(&hugetlb_lock);
> >               destroy_compound_gigantic_page(page, huge_page_order(h));
> >               free_gigantic_page(page, huge_page_order(h));
> > -             spin_lock(&hugetlb_lock);
> >       } else {
> >               __free_pages(page, huge_page_order(h));
> >       }
> > +
> > +     spin_lock(&hugetlb_lock);
> > +
> > +     return 0;
> >  }
> >
> >  struct hstate *size_to_hstate(unsigned long size)
> > @@ -1403,9 +1434,9 @@ static void __free_huge_page(struct page *page)
> >       } else if (h->surplus_huge_pages_node[nid]) {
> >               /* remove the page from active list */
> >               list_del(&page->lru);
> > -             update_and_free_page(h, page);
> >               h->surplus_huge_pages--;
> >               h->surplus_huge_pages_node[nid]--;
> > +             update_and_free_page(h, page);
> >       } else {
> >               arch_clear_hugepage_flags(page);
> >               enqueue_huge_page(h, page);
> > @@ -1693,6 +1724,7 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
> >                       struct page *page =
> >                               list_entry(h->hugepage_freelists[node].next,
> >                                         struct page, lru);
> > +                     ClearHPageFreed(page);
>
> Quick question.  Is this change directly related to the vmemmap changes,
> or is it a cleanup that you noticed?

Just a cleanup. Maybe there should be a separate patch for this.

>
> >                       list_del(&page->lru);
> >                       h->free_huge_pages--;
> >                       h->free_huge_pages_node[node]--;
> > @@ -1700,8 +1732,7 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
> >                               h->surplus_huge_pages--;
> >                               h->surplus_huge_pages_node[node]--;
> >                       }
> > -                     update_and_free_page(h, page);
> > -                     ret = 1;
> > +                     ret = !update_and_free_page(h, page);
> >                       break;
> >               }
> >       }
> > @@ -1714,10 +1745,14 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
> >   * nothing for in-use hugepages and non-hugepages.
> >   * This function returns values like below:
> >   *
> > - *  -EBUSY: failed to dissolved free hugepages or the hugepage is in-use
> > - *          (allocated or reserved.)
> > - *       0: successfully dissolved free hugepages or the page is not a
> > - *          hugepage (considered as already dissolved)
> > + *  -ENOMEM: failed to allocate vmemmap pages to free the freed hugepages
> > + *           when the system is under memory pressure and the feature of
> > + *           freeing unused vmemmap pages associated with each hugetlb page
> > + *           is enabled.
> > + *  -EBUSY:  failed to dissolved free hugepages or the hugepage is in-use
> > + *           (allocated or reserved.)
> > + *       0:  successfully dissolved free hugepages or the page is not a
> > + *           hugepage (considered as already dissolved)
> >   */
> >  int dissolve_free_huge_page(struct page *page)
> >  {
> > @@ -1768,12 +1803,14 @@ int dissolve_free_huge_page(struct page *page)
> >                       SetPageHWPoison(page);
> >                       ClearPageHWPoison(head);
> >               }
> > +             ClearHPageFreed(page);
> >               list_del(&head->lru);
> >               h->free_huge_pages--;
> >               h->free_huge_pages_node[nid]--;
> >               h->max_huge_pages--;
> > -             update_and_free_page(h, head);
> > -             rc = 0;
> > +             rc = update_and_free_page(h, head);
> > +             if (rc)
> > +                     h->max_huge_pages++;
>
> Since update_and_free_page failed, the number of surplus pages was
> incremented.  Surplus pages are the number of pages greater than
> max_huge_pages.  Since we are incrementing max_huge_pages, we should
> decrement (undo) the addition to surplus_huge_pages and
> surplus_huge_pages_node[nid].  So, I think we want
>                         h->surplus_huge_pages--;
>                         h->surplus_huge_pages_node[nid]--;
> here as well.

You are right. Thanks for reminding me of this.
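
Roughly what I plan to fold in for the next version (only a sketch, not
tested yet; the exact placement inside dissolve_free_huge_page() may
still change):

	rc = update_and_free_page(h, head);
	if (rc) {
		/*
		 * update_and_free_page() could not restore the vmemmap,
		 * so it put the page back on the free list and accounted
		 * it as a surplus page.  Undo that surplus accounting
		 * here as well, in addition to restoring max_huge_pages.
		 */
		h->surplus_huge_pages--;
		h->surplus_huge_pages_node[nid]--;
		h->max_huge_pages++;
	}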

>
> >       }
> >  out:
> >       spin_unlock(&hugetlb_lock);
>
> In previous version of this patch series, we discussed and refined the
> vmemmap manipulation routines below.  They still look good to me.
>
> In general, I like the approach taken in this patch.  Hopefully, others
> will comment and we can move the series forward.
> --
> Mike Kravetz
>
> > diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> > index 0209b736e0b4..29a3380f3b20 100644
> > --- a/mm/hugetlb_vmemmap.c
> > +++ b/mm/hugetlb_vmemmap.c
> > @@ -198,6 +198,28 @@ static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
> >       return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
> >  }
> >
> > +int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
> > +{
> > +     unsigned long vmemmap_addr = (unsigned long)head;
> > +     unsigned long vmemmap_end, vmemmap_reuse;
> > +
> > +     if (!free_vmemmap_pages_per_hpage(h))
> > +             return 0;
> > +
> > +     vmemmap_addr += RESERVE_VMEMMAP_SIZE;
> > +     vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h);
> > +     vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
> > +     /*
> > +      * The pages which the vmemmap virtual address range [@vmemmap_addr,
> > +      * @vmemmap_end) are mapped to are freed to the buddy allocator, and
> > +      * the range is mapped to the page which @vmemmap_reuse is mapped to.
> > +      * When a HugeTLB page is freed to the buddy allocator, previously
> > +      * discarded vmemmap pages must be allocated and remapping.
> > +      */
> > +     return vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse,
> > +                                GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE);
> > +}
> > +
> >  void free_huge_page_vmemmap(struct hstate *h, struct page *head)
> >  {
> >       unsigned long vmemmap_addr = (unsigned long)head;
> > diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
> > index 6923f03534d5..e5547d53b9f5 100644
> > --- a/mm/hugetlb_vmemmap.h
> > +++ b/mm/hugetlb_vmemmap.h
> > @@ -11,8 +11,14 @@
> >  #include <linux/hugetlb.h>
> >
> >  #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> > +int alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
> >  void free_huge_page_vmemmap(struct hstate *h, struct page *head);
> >  #else
> > +static inline int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
> > +{
> > +     return 0;
> > +}
> > +
> >  static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
> >  {
> >  }
> > diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> > index d3076a7a3783..60fc6cd6cd23 100644
> > --- a/mm/sparse-vmemmap.c
> > +++ b/mm/sparse-vmemmap.c
> > @@ -40,7 +40,8 @@
> >   * @remap_pte:               called for each lowest-level entry (PTE).
> >   * @reuse_page:              the page which is reused for the tail vmemmap pages.
> >   * @reuse_addr:              the virtual address of the @reuse_page page.
> > - * @vmemmap_pages:   the list head of the vmemmap pages that can be freed.
> > + * @vmemmap_pages:   the list head of the vmemmap pages that can be freed
> > + *                   or is mapped from.
> >   */
> >  struct vmemmap_remap_walk {
> >       void (*remap_pte)(pte_t *pte, unsigned long addr,
> > @@ -237,6 +238,78 @@ void vmemmap_remap_free(unsigned long start, unsigned long end,
> >       free_vmemmap_page_list(&vmemmap_pages);
> >  }
> >
> > +static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
> > +                             struct vmemmap_remap_walk *walk)
> > +{
> > +     pgprot_t pgprot = PAGE_KERNEL;
> > +     struct page *page;
> > +     void *to;
> > +
> > +     BUG_ON(pte_page(*pte) != walk->reuse_page);
> > +
> > +     page = list_first_entry(walk->vmemmap_pages, struct page, lru);
> > +     list_del(&page->lru);
> > +     to = page_to_virt(page);
> > +     copy_page(to, (void *)walk->reuse_addr);
> > +
> > +     set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
> > +}
> > +
> > +static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
> > +                                gfp_t gfp_mask, struct list_head *list)
> > +{
> > +     unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
> > +     int nid = page_to_nid((struct page *)start);
> > +     struct page *page, *next;
> > +
> > +     while (nr_pages--) {
> > +             page = alloc_pages_node(nid, gfp_mask, 0);
> > +             if (!page)
> > +                     goto out;
> > +             list_add_tail(&page->lru, list);
> > +     }
> > +
> > +     return 0;
> > +out:
> > +     list_for_each_entry_safe(page, next, list, lru)
> > +             __free_pages(page, 0);
> > +     return -ENOMEM;
> > +}
> > +
> > +/**
> > + * vmemmap_remap_alloc - remap the vmemmap virtual address range [@start, end)
> > + *                    to the page which is from the @vmemmap_pages
> > + *                    respectively.
> > + * @start:   start address of the vmemmap virtual address range that we want
> > + *           to remap.
> > + * @end:     end address of the vmemmap virtual address range that we want to
> > + *           remap.
> > + * @reuse:   reuse address.
> > + * @gpf_mask:        GFP flag for allocating vmemmap pages.
> > + */
> > +int vmemmap_remap_alloc(unsigned long start, unsigned long end,
> > +                     unsigned long reuse, gfp_t gfp_mask)
> > +{
> > +     LIST_HEAD(vmemmap_pages);
> > +     struct vmemmap_remap_walk walk = {
> > +             .remap_pte      = vmemmap_restore_pte,
> > +             .reuse_addr     = reuse,
> > +             .vmemmap_pages  = &vmemmap_pages,
> > +     };
> > +
> > +     /* See the comment in the vmemmap_remap_free(). */
> > +     BUG_ON(start - reuse != PAGE_SIZE);
> > +
> > +     might_sleep_if(gfpflags_allow_blocking(gfp_mask));
> > +
> > +     if (alloc_vmemmap_page_list(start, end, gfp_mask, &vmemmap_pages))
> > +             return -ENOMEM;
> > +
> > +     vmemmap_remap_range(reuse, end, &walk);
> > +
> > +     return 0;
> > +}
> > +
> >  /*
> >   * Allocate a block of memory to be used to back the virtual memory map
> >   * or to back the page tables that are used to create the mapping.
> >


* Re: [PATCH v16 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-02-23  0:00   ` Mike Kravetz
  2021-02-23  5:35       ` Muchun Song
@ 2021-02-23  9:27     ` Oscar Salvador
  2021-02-23 10:27       ` [External] " Muchun Song
  1 sibling, 1 reply; 25+ messages in thread
From: Oscar Salvador @ 2021-02-23  9:27 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: Muchun Song, corbet, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, song.bao.hua, david,
	naoya.horiguchi, joao.m.martins, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel

On Mon, Feb 22, 2021 at 04:00:27PM -0800, Mike Kravetz wrote:
> > -static void update_and_free_page(struct hstate *h, struct page *page)
> > +static int update_and_free_page(struct hstate *h, struct page *page)
> > +	__releases(&hugetlb_lock) __acquires(&hugetlb_lock)
> >  {
> >  	int i;
> > +	int nid = page_to_nid(page);
> >  
> >  	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
> > -		return;
> > +		return 0;
> >  
> >  	h->nr_huge_pages--;
> > -	h->nr_huge_pages_node[page_to_nid(page)]--;
> > +	h->nr_huge_pages_node[nid]--;
> > +	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page);
> > +	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page_rsvd(page), page);
> > +	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
> > +	set_page_refcounted(page);
> 
> I think you added the set_page_refcounted() because the huge page will
> appear as just a compound page without a reference after dropping the
> hugetlb lock?  It might be better to set the reference before modifying
> the destructor.  Otherwise, page scanning code could find the non-hugetlb
> compound page with no reference.  I could not find any code where this
> would be a problem, but I think it would be safer to set the reference
> first.

But set_page_refcounted() was already there before this patchset.
Are the worries only because we now drop the lock? AFAICS, the "page-scanning"
problem could have happened before as well?
Also, what does page scanning mean in this context?

I am not opposed to moving it above, but I would like to understand the concern
here.

> 
> > +	spin_unlock(&hugetlb_lock);
> 
> I really like the way this code is structured.  It is much simpler than
> previous versions with retries or workqueue.  There is nothing wrong with
> always dropping the lock here.  However, I wonder if we should think about
> optimizing for the case where this feature is not enabled and we are not
> freeing a 1G huge page.  I suspect this will be the most common case for
> some time, and there is no need to drop the lock in this case.
> 
> Please do not change the code based on my comment.  I just wanted to bring
> this up for thought.
> 
> Is it as simple as checking?
>         if (free_vmemmap_pages_per_hpage(h) || hstate_is_gigantic(h))
>                 spin_unlock(&hugetlb_lock);
> 
>         /* before return */
>         if (free_vmemmap_pages_per_hpage(h) || hstate_is_gigantic(h))
>                 spin_lock(&hugetlb_lock);

AFAIK, we at least need the hstate_is_gigantic() check: the comment below says
that free_gigantic_page() might block, so we need to drop the lock in that case.
And I am fine with the change overall.

Unless I am missing something, we should not need to drop the lock unless
we need to allocate vmemmap pages (apart from gigantic pages).
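
Something along these lines is what I had in mind (only a sketch, not
tested; it assumes a predicate like free_vmemmap_pages_per_hpage() from
this series is visible to hugetlb.c):

	/*
	 * Only drop hugetlb_lock when we may actually block: either
	 * vmemmap pages have to be allocated, or a gigantic page is
	 * about to be freed.
	 */
	bool can_block = free_vmemmap_pages_per_hpage(h) ||
			 hstate_is_gigantic(h);

	if (can_block)
		spin_unlock(&hugetlb_lock);

	/* ... alloc_huge_page_vmemmap() and the actual freeing ... */

	if (can_block)
		spin_lock(&hugetlb_lock);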

> 
> > +
> > +	if (alloc_huge_page_vmemmap(h, page)) {
> > +		int zeroed;
> > +
> > +		spin_lock(&hugetlb_lock);
> > +		INIT_LIST_HEAD(&page->lru);
> > +		set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
> > +		h->nr_huge_pages++;
> > +		h->nr_huge_pages_node[nid]++;

I think prep_new_huge_page() does this for us?

> > +
> > +		/*
> > +		 * If we cannot allocate vmemmap pages, just refuse to free the
> > +		 * page and put the page back on the hugetlb free list and treat
> > +		 * as a surplus page.
> > +		 */
> > +		h->surplus_huge_pages++;
> > +		h->surplus_huge_pages_node[nid]++;
> > +
> > +		/*
> > +		 * This page is now managed by the hugetlb allocator and has
> > +		 * no users -- drop the last reference.
> > +		 */
> > +		zeroed = put_page_testzero(page);
> > +		VM_BUG_ON_PAGE(!zeroed, page);

Can this actually happen? AFAIK, a page that lands in update_and_free_page()
should be zero-refcounted, then we take a reference, and I cannot see how the
reference could have changed in the meantime.

I am all for catching corner cases, but I am not sure how realistic this is.
Moreover, if we __ever__ get there, things can get nasty.

We would basically have an in-use page in the free hugetlb pool, so corruption
would follow. At that point, a plain BUG_ON might be better.

But as I said, I do not think we need that.

I still need to look further, but what I have seen so far looks good.

-- 
Oscar Salvador
SUSE L3

* Re: [External] Re: [PATCH v16 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-02-23  9:27     ` Oscar Salvador
@ 2021-02-23 10:27       ` Muchun Song
  2021-02-23 10:50         ` Oscar Salvador
  0 siblings, 1 reply; 25+ messages in thread
From: Muchun Song @ 2021-02-23 10:27 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Mike Kravetz, Jonathan Corbet, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Tue, Feb 23, 2021 at 5:28 PM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Mon, Feb 22, 2021 at 04:00:27PM -0800, Mike Kravetz wrote:
> > > -static void update_and_free_page(struct hstate *h, struct page *page)
> > > +static int update_and_free_page(struct hstate *h, struct page *page)
> > > +   __releases(&hugetlb_lock) __acquires(&hugetlb_lock)
> > >  {
> > >     int i;
> > > +   int nid = page_to_nid(page);
> > >
> > >     if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
> > > -           return;
> > > +           return 0;
> > >
> > >     h->nr_huge_pages--;
> > > -   h->nr_huge_pages_node[page_to_nid(page)]--;
> > > +   h->nr_huge_pages_node[nid]--;
> > > +   VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page);
> > > +   VM_BUG_ON_PAGE(hugetlb_cgroup_from_page_rsvd(page), page);
> > > +   set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
> > > +   set_page_refcounted(page);
> >
> > I think you added the set_page_refcounted() because the huge page will
> > appear as just a compound page without a reference after dropping the
> > hugetlb lock?  It might be better to set the reference before modifying
> > the destructor.  Otherwise, page scanning code could find the non-hugetlb
> > compound page with no reference.  I could not find any code where this
> > would be a problem, but I think it would be safer to set the reference
> > first.
>
> But we already had set_page_refcounted() before this patchset there.
> Are the worries only because we drop the lock? AFAICS, the "page-scanning"
> problem could have happened before as well?
> Although, what does page scanning mean in this context?
>
> I am not opposed to move it above, but I would like to understand the concern
> here.
>
> >
> > > +   spin_unlock(&hugetlb_lock);
> >
> > I really like the way this code is structured.  It is much simpler than
> > previous versions with retries or workqueue.  There is nothing wrong with
> > always dropping the lock here.  However, I wonder if we should think about
> > optimizing for the case where this feature is not enabled and we are not
> > freeing a 1G huge page.  I suspect this will be the most common case for
> > some time, and there is no need to drop the lock in this case.
> >
> > Please do not change the code based on my comment.  I just wanted to bring
> > this up for thought.
> >
> > Is it as simple as checking?
> >         if (free_vmemmap_pages_per_hpage(h) || hstate_is_gigantic(h))
> >                 spin_unlock(&hugetlb_lock);
> >
> >         /* before return */
> >         if (free_vmemmap_pages_per_hpage(h) || hstate_is_gigantic(h))
> >                 spin_lock(&hugetlb_lock);
>
> AFAIK, we at least need the hstate_is_gigantic? Comment below says that
> free_gigantic_page might block, so we need to drop the lock.
> And I am fine with the change overall.
>
> Unless I am missing something, we should not need to drop the lock unless
> we need to allocate vmemmap pages (apart from gigantic pages).
>
> >
> > > +
> > > +   if (alloc_huge_page_vmemmap(h, page)) {
> > > +           int zeroed;
> > > +
> > > +           spin_lock(&hugetlb_lock);
> > > +           INIT_LIST_HEAD(&page->lru);
> > > +           set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
> > > +           h->nr_huge_pages++;
> > > +           h->nr_huge_pages_node[nid]++;
>
> I think prep_new_huge_page() does this for us?

Actually, there are some differences. E.g. prep_new_huge_page()
resets the hugetlb cgroup and does ClearHPageFreed, which we do not need
here. prep_new_huge_page() also acquires and releases
the hugetlb_lock, but here we need to keep holding the lock to update
the surplus counters and enqueue the page on the free list.
So I do not think reusing prep_new_huge_page() is a good idea.

>
> > > +
> > > +           /*
> > > +            * If we cannot allocate vmemmap pages, just refuse to free the
> > > +            * page and put the page back on the hugetlb free list and treat
> > > +            * as a surplus page.
> > > +            */
> > > +           h->surplus_huge_pages++;
> > > +           h->surplus_huge_pages_node[nid]++;
> > > +
> > > +           /*
> > > +            * This page is now managed by the hugetlb allocator and has
> > > +            * no users -- drop the last reference.
> > > +            */
> > > +           zeroed = put_page_testzero(page);
> > > +           VM_BUG_ON_PAGE(!zeroed, page);
>
> Can this actually happen? AFAIK, page landed in update_and_free_page should be
> zero refcounted, then we increase the reference, and I cannot see how the
> reference might have changed in the meantime.

I am not sure whether other code paths can get and then put the
page. I saw that gather_surplus_pages() does the same thing, so I copied
it from there. Looking at the memory_failure routine:

CPU0:                           CPU1:
                                set_compound_page_dtor(HUGETLB_PAGE_DTOR);
memory_failure_hugetlb
  get_hwpoison_page
    __get_hwpoison_page
      get_page_unless_zero
                                put_page_testzero()

Maybe this can happen, but it is a very narrow corner case. If we want to
deal with it, we can call put_page_testzero() first and then
set_compound_page_dtor(HUGETLB_PAGE_DTOR).
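
I.e. in the error path, something like the following (just a sketch of
the reordering, untested; the rest of the error handling stays as in the
patch, and whether the VM_BUG_ON on !zeroed can stay is a separate
question):

	/*
	 * Drop the reference first, while the destructor still says this
	 * is a plain compound page, and only then mark it as a hugetlb
	 * page again.
	 */
	zeroed = put_page_testzero(page);
	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);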

>
> I am all for catching corner cases, but not sure how realistic this is.
> Moreover, if we __ever__ get there, things can get nasty.
>
> We basically will have an in-use page in the free hugetlb pool, so corruption
> will happen. At that point, a plain BUG_ON might be better.
>
> But as I said, I do not think we need that.
>
> I yet need to look further, but what I have seen so far looks good.
>
> --
> Oscar Salvador
> SUSE L3

* Re: [External] Re: [PATCH v16 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-02-23 10:27       ` [External] " Muchun Song
@ 2021-02-23 10:50         ` Oscar Salvador
  2021-02-23 15:41           ` Oscar Salvador
  0 siblings, 1 reply; 25+ messages in thread
From: Oscar Salvador @ 2021-02-23 10:50 UTC (permalink / raw)
  To: Muchun Song
  Cc: Mike Kravetz, Jonathan Corbet, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Tue, Feb 23, 2021 at 06:27:07PM +0800, Muchun Song wrote:
> > > > +
> > > > +   if (alloc_huge_page_vmemmap(h, page)) {
> > > > +           int zeroed;
> > > > +
> > > > +           spin_lock(&hugetlb_lock);
> > > > +           INIT_LIST_HEAD(&page->lru);
> > > > +           set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
> > > > +           h->nr_huge_pages++;
> > > > +           h->nr_huge_pages_node[nid]++;
> >
> > I think prep_new_huge_page() does this for us?
> 
> Actually, there are some differences. e.g. prep_new_huge_page()
> will reset hugetlb cgroup and ClearHPageFreed, but we do not need
> them here. And prep_new_huge_page will acquire and release
> the hugetlb_lock. But here we also need hold the lock to update
> the surplus counter and enqueue the page to the free list.
> So I do not think reuse prep_new_huge_page is a good idea.

I see, I missed that.

> > Can this actually happen? AFAIK, page landed in update_and_free_page should be
> > zero refcounted, then we increase the reference, and I cannot see how the
> > reference might have changed in the meantime.
> 
> I am not sure whether other modules get the page and then put the
> page. I see gather_surplus_pages does the same thing. So I copied
> from there. I try to look at the memory_failure routine.
> 
> 
> CPU0:                           CPU1:
>                                 set_compound_page_dtor(HUGETLB_PAGE_DTOR);
> memory_failure_hugetlb
>   get_hwpoison_page
>     __get_hwpoison_page
>       get_page_unless_zero
>                                 put_page_testzero()
> 
> Maybe this can happen. But it is a very corner case. If we want to
> deal with this. We can put_page_testzero() first and then
> set_compound_page_dtor(HUGETLB_PAGE_DTOR).

I have to check further, but it looks like this could actually happen.
Handling this with VM_BUG_ON is wrong, because memory_failure/soft_offline are
entitled to increase the refcount of the page.

AFAICS,

 CPU0:                                    CPU1:
                                          set_compound_page_dtor(HUGETLB_PAGE_DTOR);
 memory_failure_hugetlb
   get_hwpoison_page
     __get_hwpoison_page
       get_page_unless_zero
                                          put_page_testzero()
        identify_page_state
         me_huge_page

I think we can reach me_huge_page with either refcount == 1 or refcount == 2,
depending on whether put_page_testzero has been issued yet.

For now, I would not re-enqueue the page if put_page_testzero == false.
I have to see how this can be handled gracefully.
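
I.e. something in this direction (only a rough sketch, leaving aside
what to do with the counters in that case):

	/*
	 * Only hand the page back to the free pool if we really dropped
	 * the last reference.
	 */
	if (put_page_testzero(page)) {
		arch_clear_hugepage_flags(page);
		enqueue_huge_page(h, page);
	}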



-- 
Oscar Salvador
SUSE L3

* Re: [External] Re: [PATCH v16 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-02-23 10:50         ` Oscar Salvador
@ 2021-02-23 15:41           ` Oscar Salvador
  2021-02-23 22:31             ` Oscar Salvador
  0 siblings, 1 reply; 25+ messages in thread
From: Oscar Salvador @ 2021-02-23 15:41 UTC (permalink / raw)
  To: Muchun Song
  Cc: Mike Kravetz, Jonathan Corbet, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Tue, Feb 23, 2021 at 11:50:05AM +0100, Oscar Salvador wrote:
> > CPU0:                           CPU1:
> >                                 set_compound_page_dtor(HUGETLB_PAGE_DTOR);
> > memory_failure_hugetlb
> >   get_hwpoison_page
> >     __get_hwpoison_page
> >       get_page_unless_zero
> >                                 put_page_testzero()
> > 
> > Maybe this can happen. But it is a very corner case. If we want to
> > deal with this. We can put_page_testzero() first and then
> > set_compound_page_dtor(HUGETLB_PAGE_DTOR).
> 
> I have to check further, but it looks like this could actually happen.
> Handling this with VM_BUG_ON is wrong, because memory_failure/soft_offline are
> entitled to increase the refcount of the page.
> 
> AFAICS,
> 
>  CPU0:                                    CPU1:
>                                           set_compound_page_dtor(HUGETLB_PAGE_DTOR);
>  memory_failure_hugetlb
>    get_hwpoison_page
>      __get_hwpoison_page
>        get_page_unless_zero
>                                           put_page_testzero()
>         identify_page_state
>          me_huge_page
> 
> I think we can reach me_huge_page with either refcount = 1 or refcount =2,
> depending whether put_page_testzero has been issued.
> 
> For now, I would not re-enqueue the page if put_page_testzero == false.
> I have to see how this can be handled gracefully.

I took a brief look.
It is not really your patch's fault. The hugetlb <-> memory-failure
synchronization is a bit odd; it definitely needs improvement.

The thing is, we can have different scenarios here.
E.g. by the time we return from put_page_testzero, we might have refcount ==
0 and PageHWPoison, or refcount == 1 and PageHWPoison.

The former will let a user get a page from the pool and then get a SIGBUS
when it faults the page in, and the latter will be even more odd as we
will have a self-refcounted page in the free pool (and hwpoisoned).

As I said, it is not this patchset's fault; it just made me realize this
problem.

I have to think some more about this.

-- 
Oscar Salvador
SUSE L3

* Re: [External] Re: [PATCH v16 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-02-23 15:41           ` Oscar Salvador
@ 2021-02-23 22:31             ` Oscar Salvador
  2021-02-24  3:47               ` Muchun Song
  0 siblings, 1 reply; 25+ messages in thread
From: Oscar Salvador @ 2021-02-23 22:31 UTC (permalink / raw)
  To: Muchun Song
  Cc: Mike Kravetz, Jonathan Corbet, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Tue, Feb 23, 2021 at 04:41:28PM +0100, Oscar Salvador wrote:
> On Tue, Feb 23, 2021 at 11:50:05AM +0100, Oscar Salvador wrote:
> > > CPU0:                           CPU1:
> > >                                 set_compound_page_dtor(HUGETLB_PAGE_DTOR);
> > > memory_failure_hugetlb
> > >   get_hwpoison_page
> > >     __get_hwpoison_page
> > >       get_page_unless_zero
> > >                                 put_page_testzero()
> > > 
> > > Maybe this can happen. But it is a very corner case. If we want to
> > > deal with this. We can put_page_testzero() first and then
> > > set_compound_page_dtor(HUGETLB_PAGE_DTOR).
> > 
> > I have to check further, but it looks like this could actually happen.
> > Handling this with VM_BUG_ON is wrong, because memory_failure/soft_offline are
> > entitled to increase the refcount of the page.
> > 
> > AFAICS,
> > 
> >  CPU0:                                    CPU1:
> >                                           set_compound_page_dtor(HUGETLB_PAGE_DTOR);
> >  memory_failure_hugetlb
> >    get_hwpoison_page
> >      __get_hwpoison_page
> >        get_page_unless_zero
> >                                           put_page_testzero()
> >         identify_page_state
> >          me_huge_page
> > 
> > I think we can reach me_huge_page with either refcount = 1 or refcount =2,
> > depending whether put_page_testzero has been issued.
> > 
> > For now, I would not re-enqueue the page if put_page_testzero == false.
> > I have to see how this can be handled gracefully.
> 
> I took a brief look.
> It is not really your patch fault. Hugetlb <-> memory-failure synchronization is
> a bit odd, it definitely needs improvment.
> 
> The thing is, we can have different scenarios here.
> E.g: by the time we return from put_page_testzero, we might have refcount ==
> 0 and PageHWPoison, or refcount == 1 PageHWPoison.
> 
> The former will let a user get a page from the pool and get a sigbus
> when it faults in the page, and the latter will be even more odd as we
> will have a self-refcounted page in the free pool (and hwpoisoned).
> 
> As I said, it is not this patchset fault. I just made me realize this
> problem.
> 
> I have to think some more about this.

I have been thinking more about this.
Memory failure events can occur at any time, and we might not be in a
position where we can handle the error gracefully, meaning that the page
might end up in an undesirable state.

E.g. the page could get flagged right before we enqueue it.

I still think that the VM_BUG_ON should go: the refcount can legitimately be
increased by the memory-failure/soft_offline handlers, so BUGing there does
not make much sense.

One thing we could do is check the state of the page we want to
retrieve from the free hugepage pool.
We should discard any HWPoisoned ones, and dissolve them.

The thing is, memory-failure/soft_offline should allocate a new hugepage
for the free pool, to keep the pool stable.
Something like [1].

Anyway, this is orthogonal to this patch, and something I will work on
soon.

[1] https://lore.kernel.org/linux-mm/20210222135137.25717-2-osalvador@suse.de/T/#u

-- 
Oscar Salvador
SUSE L3

* Re: [External] Re: [PATCH v16 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-02-23 22:31             ` Oscar Salvador
@ 2021-02-24  3:47               ` Muchun Song
  2021-02-24  8:31                 ` Oscar Salvador
  0 siblings, 1 reply; 25+ messages in thread
From: Muchun Song @ 2021-02-24  3:47 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Mike Kravetz, Jonathan Corbet, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Wed, Feb 24, 2021 at 6:32 AM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Tue, Feb 23, 2021 at 04:41:28PM +0100, Oscar Salvador wrote:
> > On Tue, Feb 23, 2021 at 11:50:05AM +0100, Oscar Salvador wrote:
> > > > CPU0:                           CPU1:
> > > >                                 set_compound_page_dtor(HUGETLB_PAGE_DTOR);
> > > > memory_failure_hugetlb
> > > >   get_hwpoison_page
> > > >     __get_hwpoison_page
> > > >       get_page_unless_zero
> > > >                                 put_page_testzero()
> > > >
> > > > Maybe this can happen. But it is a very corner case. If we want to
> > > > deal with this. We can put_page_testzero() first and then
> > > > set_compound_page_dtor(HUGETLB_PAGE_DTOR).
> > >
> > > I have to check further, but it looks like this could actually happen.
> > > Handling this with VM_BUG_ON is wrong, because memory_failure/soft_offline are
> > > entitled to increase the refcount of the page.
> > >
> > > AFAICS,
> > >
> > >  CPU0:                                    CPU1:
> > >                                           set_compound_page_dtor(HUGETLB_PAGE_DTOR);
> > >  memory_failure_hugetlb
> > >    get_hwpoison_page
> > >      __get_hwpoison_page
> > >        get_page_unless_zero
> > >                                           put_page_testzero()
> > >         identify_page_state
> > >          me_huge_page
> > >
> > > I think we can reach me_huge_page with either refcount = 1 or refcount =2,
> > > depending whether put_page_testzero has been issued.
> > >
> > > For now, I would not re-enqueue the page if put_page_testzero == false.
> > > I have to see how this can be handled gracefully.
> >
> > I took a brief look.
> > It is not really your patch fault. Hugetlb <-> memory-failure synchronization is
> > a bit odd, it definitely needs improvment.
> >
> > The thing is, we can have different scenarios here.
> > E.g: by the time we return from put_page_testzero, we might have refcount ==
> > 0 and PageHWPoison, or refcount == 1 PageHWPoison.
> >
> > The former will let a user get a page from the pool and get a sigbus
> > when it faults in the page, and the latter will be even more odd as we
> > will have a self-refcounted page in the free pool (and hwpoisoned).

I have been looking at dequeue_huge_page_node_exact().
If a PageHWPoison huge page is on the free list, it will
not be allocated to the user: dequeue_huge_page_node_exact()
simply skips it.
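
For reference, the dequeue path looks roughly like this (simplified from
memory, so the exact upstream code differs a bit), which is why a
hwpoisoned free huge page is never handed out:

	static struct page *dequeue_huge_page_node_exact(struct hstate *h, int nid)
	{
		struct page *page;

		list_for_each_entry(page, &h->hugepage_freelists[nid], lru) {
			/* Never hand out a hwpoisoned free huge page. */
			if (PageHWPoison(page))
				continue;

			list_move(&page->lru, &h->hugepage_activelist);
			set_page_refcounted(page);
			h->free_huge_pages--;
			h->free_huge_pages_node[nid]--;
			return page;
		}

		return NULL;
	}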

> >
> > As I said, it is not this patchset fault. I just made me realize this
> > problem.
> >
> > I have to think some more about this.
>
> I have been thinking more about this.
> memory failure events can occur at any time, and we might not be in a
> position where we can handle gracefully the error, meaning that the page
> might end up in non desirable state.
>
> E.g: we could flag the page right before enqueing it.
>
> I still think that VM_BUG_ON should go, as the refcount can be perfectly
> increased by memory-failure/soft_offline handlers, so BUGing there does
> not make much sense.

Makes sense. I will remove the VM_BUG_ON.

>
> One think we could do is to check the state of the page we want to
> retrieve from the free hugepage pool.
> We should discard any HWpoisoned ones, and dissolve them.
>
> The thing is, memory-failure/soft_offline should allocate a new hugepage
> for the free pool, so keep the pool stable.
> Something like [1].
>
> Anyway, this is orthogonal to this patch, and something I will work on
> soon.
>
> [1] https://lore.kernel.org/linux-mm/20210222135137.25717-2-osalvador@suse.de/T/#u

Thanks for your efforts on this.

>
> --
> Oscar Salvador
> SUSE L3

* Re: [External] Re: [PATCH v16 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-02-24  3:47               ` Muchun Song
@ 2021-02-24  8:31                 ` Oscar Salvador
  0 siblings, 0 replies; 25+ messages in thread
From: Oscar Salvador @ 2021-02-24  8:31 UTC (permalink / raw)
  To: Muchun Song
  Cc: Mike Kravetz, Jonathan Corbet, Thomas Gleixner, mingo, bp, x86,
	hpa, dave.hansen, luto, Peter Zijlstra, viro, Andrew Morton,
	paulmck, mchehab+huawei, pawan.kumar.gupta, Randy Dunlap,
	oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel

On Wed, Feb 24, 2021 at 11:47:49AM +0800, Muchun Song wrote:
> I have been looking at the dequeue_huge_page_node_exact().
> If a PageHWPoison huge page is in the free pool list, the page will
> not be allocated to the user. The PageHWPoison huge page
> will be skip in the dequeue_huge_page_node_exact().

Yes, now I see where the problem lies.

hugetlb_no_page()->..->dequeue_huge_page_node_exact() will fail if the only
page in the pool is hwpoisoned, as expected.
Then alloc_buddy_huge_page_with_mpol() will be tried, but since the
surplus_huge_pages counter is stale, we will fail there too.
That relates to the problem Mike pointed out: we should decrease
surplus_huge_pages again.

I think hwpoisoned pages should not be in the free pool at all, though.
Probably we want to take them off as soon as we notice we have one:
e.g. dequeue_huge_page_node_exact() could move the page to another list
and put it back if it gets unpoisoned later.

But anyway, that has nothing to do with this (apart from the surplus problem).

-- 
Oscar Salvador
SUSE L3

Thread overview: 25+ messages
2021-02-19 10:49 [PATCH v16 0/9] Free some vmemmap pages of HugeTLB page Muchun Song
2021-02-19 10:49 ` [PATCH v16 1/9] mm: memory_hotplug: factor out bootmem core functions to bootmem_info.c Muchun Song
2021-02-19 10:49 ` [PATCH v16 2/9] mm: hugetlb: introduce a new config HUGETLB_PAGE_FREE_VMEMMAP Muchun Song
2021-02-19 10:49 ` [PATCH v16 3/9] mm: hugetlb: free the vmemmap pages associated with each HugeTLB page Muchun Song
2021-02-19 10:49 ` [PATCH v16 4/9] mm: hugetlb: alloc " Muchun Song
2021-02-19 14:12   ` Michal Hocko
2021-02-20  4:20     ` [External] " Muchun Song
2021-02-22  9:25       ` Michal Hocko
2021-02-22 10:31         ` Muchun Song
2021-02-22 10:50           ` Oscar Salvador
2021-02-23  0:00   ` Mike Kravetz
2021-02-23  5:35     ` [External] " Muchun Song
2021-02-23  5:35       ` Muchun Song
2021-02-23  9:27     ` Oscar Salvador
2021-02-23 10:27       ` [External] " Muchun Song
2021-02-23 10:50         ` Oscar Salvador
2021-02-23 15:41           ` Oscar Salvador
2021-02-23 22:31             ` Oscar Salvador
2021-02-24  3:47               ` Muchun Song
2021-02-24  8:31                 ` Oscar Salvador
2021-02-19 10:49 ` [PATCH v16 5/9] mm: hugetlb: set the PageHWPoison to the raw error page Muchun Song
2021-02-19 10:49 ` [PATCH v16 6/9] mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap Muchun Song
2021-02-19 10:49 ` [PATCH v16 7/9] mm: hugetlb: introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
2021-02-19 10:49 ` [PATCH v16 8/9] mm: hugetlb: gather discrete indexes of tail page Muchun Song
2021-02-19 10:49 ` [PATCH v16 9/9] mm: hugetlb: optimize the code with the help of the compiler Muchun Song
