* [PATCH v18 0/9] Free some vmemmap pages of HugeTLB page
@ 2021-03-08 10:27 Muchun Song
  2021-03-08 10:27 ` [PATCH v18 1/9] mm: memory_hotplug: factor out bootmem core functions to bootmem_info.c Muchun Song
                   ` (8 more replies)
  0 siblings, 9 replies; 52+ messages in thread
From: Muchun Song @ 2021-03-08 10:27 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david, naoya.horiguchi, joao.m.martins
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song

Hi everyone,

This patch series frees some vmemmap pages (struct page structures)
associated with each HugeTLB page when it is preallocated, in order to
save memory.

To reduce the difficulty of reviewing the first version of the code, from
this version onward we disable the PMD/huge page mapping of vmemmap when
this feature is enabled. This eliminates a bunch of complex page table
manipulation code. Once this patch series is solid, we can add the vmemmap
page table manipulation code back in the future.

The struct page structures (page structs) are used to describe a physical
page frame. By default, there is a one-to-one mapping from a page frame to
its corresponding page struct.

HugeTLB pages consist of multiple base page size pages and are supported
by many architectures. See hugetlbpage.rst in the Documentation directory
for more details. On the x86 architecture, HugeTLB pages of size 2MB and 1GB
are currently supported. Since the base page size on x86 is 4KB, a 2MB
HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
4096 base pages. For each base page, there is a corresponding page struct.

Within the HugeTLB subsystem, only the first 4 page structs are used to
contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
provides this upper limit. The only 'useful' information in the remaining
page structs is the compound_head field, and this field is the same for all
tail pages.
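
As background for why compound_head is identical in every tail page: each
tail page's compound_head field is just a tagged pointer to the head page.
Below is a simplified sketch, roughly following the kernel's compound_head()
and set_compound_head() helpers (illustration only, not part of this series):

	/* Tail pages store the head page pointer with bit 0 set as a tag. */
	static inline void set_compound_head(struct page *page, struct page *head)
	{
		WRITE_ONCE(page->compound_head, (unsigned long)head + 1);
	}

	static inline struct page *compound_head(struct page *page)
	{
		unsigned long head = READ_ONCE(page->compound_head);

		if (head & 1)
			return (struct page *)(head - 1);
		return page;
	}

Because every tail page stores the same tagged head pointer, the vmemmap
pages backing the tail page structs hold identical contents, which is what
makes the remapping described below possible.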

By removing redundant page structs for HugeTLB pages, memory can be returned to
the buddy allocator for other uses.

When the system boots up, every 2MB HugeTLB page has 512 struct page structs
which occupy 8 pages (sizeof(struct page) * 512 / PAGE_SIZE).
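
For example, assuming sizeof(struct page) is 64 bytes (as is typical on
x86-64):

	512 * 64 / 4096 = 8 pages of struct page per 2MB HugeTLB page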

    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
 +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
 |           |                     |     0     | -------------> |     0     |
 |           |                     +-----------+                +-----------+
 |           |                     |     1     | -------------> |     1     |
 |           |                     +-----------+                +-----------+
 |           |                     |     2     | -------------> |     2     |
 |           |                     +-----------+                +-----------+
 |           |                     |     3     | -------------> |     3     |
 |           |                     +-----------+                +-----------+
 |           |                     |     4     | -------------> |     4     |
 |    2MB    |                     +-----------+                +-----------+
 |           |                     |     5     | -------------> |     5     |
 |           |                     +-----------+                +-----------+
 |           |                     |     6     | -------------> |     6     |
 |           |                     +-----------+                +-----------+
 |           |                     |     7     | -------------> |     7     |
 |           |                     +-----------+                +-----------+
 |           |
 |           |
 |           |
 +-----------+

The value of page->compound_head is the same for all tail pages. The first
page of page structs (page 0) associated with the HugeTLB page contains the 4
page structs necessary to describe the HugeTLB. The only use of the remaining
pages of page structs (page 1 to page 7) is to point to page->compound_head.
Therefore, we can remap pages 2 to 7 to page 1. Only 2 pages of page structs
will be used for each HugeTLB page. This will allow us to free the remaining
6 pages to the buddy allocator.

Here is how things look after remapping.

    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
 +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
 |           |                     |     0     | -------------> |     0     |
 |           |                     +-----------+                +-----------+
 |           |                     |     1     | -------------> |     1     |
 |           |                     +-----------+                +-----------+
 |           |                     |     2     | ----------------^ ^ ^ ^ ^ ^
 |           |                     +-----------+                   | | | | |
 |           |                     |     3     | ------------------+ | | | |
 |           |                     +-----------+                     | | | |
 |           |                     |     4     | --------------------+ | | |
 |    2MB    |                     +-----------+                       | | |
 |           |                     |     5     | ----------------------+ | |
 |           |                     +-----------+                         | |
 |           |                     |     6     | ------------------------+ |
 |           |                     +-----------+                           |
 |           |                     |     7     | --------------------------+
 |           |                     +-----------+
 |           |
 |           |
 |           |
 +-----------+
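
Conceptually, the remapping walks the vmemmap page table entries covering
the tail struct pages, points each of them at the reused page, and collects
the old page frames for freeing. The sketch below is illustrative only; the
real implementation is vmemmap_remap_free()/vmemmap_remap_pte() in patch #3,
and vmemmap_pte() here is a hypothetical lookup helper standing in for the
full page table walk:

	static void remap_tail_vmemmap(unsigned long start, unsigned long end,
				       struct page *reuse,
				       struct list_head *to_free)
	{
		unsigned long addr;

		for (addr = start; addr != end; addr += PAGE_SIZE) {
			pte_t *pte = vmemmap_pte(addr);	/* hypothetical helper */

			/* Remember the old page so it can go back to buddy. */
			list_add(&pte_page(*pte)->lru, to_free);
			/* Map this vmemmap address to the reused page, read-only. */
			set_pte_at(&init_mm, addr, pte, mk_pte(reuse, PAGE_KERNEL_RO));
		}
		flush_tlb_kernel_range(start, end);
	}

The allocation path (patch #4) performs the inverse operation: allocate
fresh pages and restore the original mapping before the HugeTLB page is
returned to the buddy allocator.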

When a HugeTLB is freed to the buddy system, we should allocate 6 pages for
vmemmap pages and restore the previous mapping relationship.

Apart from the 2MB HugeTLB page, we also have the 1GB HugeTLB page. It is
similar to the 2MB HugeTLB page, and we can use the same approach to free
its vmemmap pages.

In this case, for a 1GB HugeTLB page, we can save 4094 pages. This is a
very substantial gain. On our servers, we run some SPDK/QEMU applications
which use 1024GB of HugeTLB pages. With this feature enabled, we can save
~16GB (1GB hugepages) / ~12GB (2MB hugepages) of memory.
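
Those figures roughly follow from the per-page savings above (assuming 4KB
base pages):

	1GB pages:  1024GB / 1GB = 1024 pages;   1024   * 4094 * 4KB ~= 16GB
	2MB pages:  1024GB / 2MB = 524288 pages; 524288 *    6 * 4KB  = 12GB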

Because the vmemmap page tables are reconstructed on the freeing/allocating
path, this adds some overhead. Here is an analysis of that overhead.

1) Allocating 10240 2MB HugeTLB pages.

   a) With this patch series applied:
   # time echo 10240 > /proc/sys/vm/nr_hugepages

   real     0m0.166s
   user     0m0.000s
   sys      0m0.166s

   # bpftrace -e 'kprobe:alloc_fresh_huge_page { @start[tid] = nsecs; }
     kretprobe:alloc_fresh_huge_page /@start[tid]/ { @latency = hist(nsecs -
     @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [8K, 16K)           5476 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [16K, 32K)          4760 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@       |
   [32K, 64K)             4 |                                                    |

   b) Without this patch series:
   # time echo 10240 > /proc/sys/vm/nr_hugepages

   real     0m0.067s
   user     0m0.000s
   sys      0m0.067s

   # bpftrace -e 'kprobe:alloc_fresh_huge_page { @start[tid] = nsecs; }
     kretprobe:alloc_fresh_huge_page /@start[tid]/ { @latency = hist(nsecs -
     @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [4K, 8K)           10147 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [8K, 16K)             93 |                                                    |

   Summary: allocation with this feature is about ~2x slower than before.

2) Freeing 10240 2MB HugeTLB pages.

   a) With this patch series applied:
   # time echo 0 > /proc/sys/vm/nr_hugepages

   real     0m0.213s
   user     0m0.000s
   sys      0m0.213s

   # bpftrace -e 'kprobe:free_pool_huge_page { @start[tid] = nsecs; }
     kretprobe:free_pool_huge_page /@start[tid]/ { @latency = hist(nsecs -
     @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [8K, 16K)              6 |                                                    |
   [16K, 32K)         10227 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [32K, 64K)             7 |                                                    |

   b) Without this patch series:
   # time echo 0 > /proc/sys/vm/nr_hugepages

   real     0m0.081s
   user     0m0.000s
   sys      0m0.081s

   # bpftrace -e 'kprobe:free_pool_huge_page { @start[tid] = nsecs; }
     kretprobe:free_pool_huge_page /@start[tid]/ { @latency = hist(nsecs -
     @start[tid]); delete(@start[tid]); }'
   Attaching 2 probes...

   @latency:
   [4K, 8K)            6805 |@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@@|
   [8K, 16K)           3427 |@@@@@@@@@@@@@@@@@@@@@@@@@@                          |
   [16K, 32K)             8 |                                                    |

   Summary: __free_hugepage is about ~2-3x slower than before.

Although the overhead has increased, it is not significant. As Mike said,
"However, remember that the majority of use cases create HugeTLB pages at
or shortly after boot time and add them to the pool. So, additional overhead is
at pool creation time. There is no change to 'normal run time' operations of
getting a page from or returning a page to the pool (think page fault/unmap)".

In addition to the memory gains from this series, the following data was
obtained by Joao Martins (many thanks for his effort).

There is an additional benefit: page (un)pinners will see an improvement,
which Joao presumes is because there are fewer memmap pages and thus the
tail/head pages stay in the cache more often.

Out of the box Joao saw (when comparing linux-next against linux-next + this series)
with gup_test and pinning a 16G HugeTLB file (with 1G pages):

	get_user_pages(): ~32k -> ~9k
	unpin_user_pages(): ~75k -> ~70k

Usually any tight loop fetching compound_head(), or reading tail page data
(e.g. compound_head), benefits a lot. There are some unpinning inefficiencies
Joao was fixing[0], and with that fix added it shows even more improvement:

	unpin_user_pages(): ~27k -> ~3.8k

[0] https://lore.kernel.org/linux-mm/20210204202500.26474-1-joao.m.martins@oracle.com/

Todo:
  - Free all of the tail vmemmap pages
    For the 2MB HugeTLB page, we currently only free 6 vmemmap pages, but we
    could really free 7. In that case, 8 of the 512 struct page structures
    would appear to have the PG_head flag set. This requires adjusting
    compound_head() slightly so that it returns the real head struct page
    when the parameter is a tail struct page that nevertheless has the
    PG_head flag set. A purely illustrative sketch of this idea follows
    after this list.

    In order to make the code evolution route clearer, this feature can be
    a separate patch after this patchset is solid.

  - Support for other architectures (e.g. aarch64).
  - Enable PMD/huge page mapping of vmemmap even if this feature was enabled.
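
Purely as an illustration of the compound_head() adjustment mentioned in
the first todo item above (not a committed design and not part of this
series): once all 7 tail vmemmap pages are freed, a remapped tail struct
page aliases the head's struct page and therefore appears to carry PG_head,
so compound_head() would have to detect such a "fake" head and still return
the real head. One possible shape:

	static inline struct page *compound_head_adjusted(struct page *page)
	{
		unsigned long head = READ_ONCE(page->compound_head);

		if (head & 1)
			return (struct page *)(head - 1);

		/*
		 * The struct page following a real or fake head aliases a
		 * genuine tail page, so its compound_head always points at
		 * the real head.
		 */
		if (PageHead(page)) {
			unsigned long fake = READ_ONCE(page[1].compound_head);

			if (fake & 1)
				return (struct page *)(fake - 1);
		}
		return page;
	}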

Changelog in v17 -> v18:
  - Add complete copyright to bootmem_info.c (Suggested by Balbir).
  - Fix some issues (in patch #4) suggested by Mike.

  Thanks to Balbir and Mike for their review. Also thanks to Chen Huang and
  Bodeddula Balasubramaniam for their testing.

Changelog in v16 -> v17:
  - Fix issues suggested by Mike and Oscar.
  - Update commit log suggested by Michal.

  Thanks to Mike, David H and Michal's suggestions and review.

Changelog in v15 -> v16:
  - Use GFP_KERNEL to allocate vmemmap pages.

  Thanks to Mike, David H and Michal's suggestions.

Changelog in v14 -> v15:
  - Fix some issues suggested by Oscar. Thanks to Oscar.
  - Add the numbers from Joao Martins' testing to the cover letter. Thanks
    for his effort.

Changelog in v13 -> v14:
  - Refuse to free the HugeTLB page when the system is under memory pressure.
  - Use GFP_ATOMIC to allocate vmemmap pages instead of GFP_KERNEL.
  - Rebase to linux-next 20210202.
  - Fix and add some comments for vmemmap_remap_free().

  Thanks to Oscar, Mike, David H and David R's suggestions and review.

Changelog in v12 -> v13:
  - Remove VM_WARN_ON_PAGE macro.
  - Add more comments in vmemmap_pte_range() and vmemmap_remap_free().

  Thanks to Oscar and Mike's suggestions and review.

Changelog in v11 -> v12:
  - Move VM_WARN_ON_PAGE to a separate patch.
  - Call __free_hugepage() with hugetlb_lock (See patch #5.) to serialize
    with dissolve_free_huge_page(). It is to prepare for patch #9.
  - Introduce PageHugeInflight. See patch #9.

Changelog in v10 -> v11:
  - Fix compiler error when !CONFIG_HUGETLB_PAGE_FREE_VMEMMAP.
  - Rework some comments and commit changes.
  - Rework vmemmap_remap_free() to 3 parameters.

  Thanks to Oscar and Mike's suggestions and review.

Changelog in v9 -> v10:
  - Fix a bug in patch #11. Thanks to Oscar for pointing that out.
  - Rework some commit logs and comments. Thanks Mike and Oscar for the suggestions.
  - Drop VMEMMAP_TAIL_PAGE_REUSE in the patch #3.

  Thank you very much Mike and Oscar for reviewing the code.

Changelog in v8 -> v9:
  - Rework some code. Many thanks to Oscar.
  - Put all the non-hugetlb vmemmap functions under sparsemem-vmemmap.c.

Changelog in v7 -> v8:
  - Adjust the order of patches.

  Many thanks to David and Oscar. Your suggestions are very valuable.

Changelog in v6 -> v7:
  - Rebase to linux-next 20201130
  - Do not use basepage mapping for vmemmap when this feature is disabled.
  - Rework some patches.
    [PATCH v6 08/16] mm/hugetlb: Free the vmemmap pages associated with each hugetlb page
    [PATCH v6 10/16] mm/hugetlb: Allocate the vmemmap pages associated with each hugetlb page

  Thanks to Oscar and Barry.

Changelog in v5 -> v6:
  - Disable PMD/huge page mapping of vmemmap if this feature was enabled.
  - Simplify the first version code.

Changelog in v4 -> v5:
  - Rework some comments and code in [PATCH v4 04/21] and [PATCH v4 05/21].

  Thanks to Mike and Oscar's suggestions.

Changelog in v3 -> v4:
  - Move all the vmemmap functions to hugetlb_vmemmap.c.
  - Make the CONFIG_HUGETLB_PAGE_FREE_VMEMMAP default to y, if we want to
    disable this feature, we should disable it by a boot/kernel command line.
  - Remove vmemmap_pgtable_{init, deposit, withdraw}() helper functions.
  - Initialize page table lock for vmemmap through core_initcall mechanism.

  Thanks to Mike and Oscar for their suggestions.

Changelog in v2 -> v3:
  - Rename some helper functions. Thanks Mike.
  - Rework some code. Thanks Mike and Oscar.
  - Remap the tail vmemmap page with PAGE_KERNEL_RO instead of PAGE_KERNEL.
    Thanks Matthew.
  - Add some overhead analysis in the cover letter.
  - Use the vmemmap pmd table lock instead of a hugetlb-specific global lock.

Changelog in v1 -> v2:
  - Fix: do not call dissolve_compound_page() in alloc_huge_page_vmemmap().
  - Fix some typo and code style problems.
  - Remove unused handle_vmemmap_fault().
  - Merge some commits to one commit suggested by Mike.

Muchun Song (9):
  mm: memory_hotplug: factor out bootmem core functions to
    bootmem_info.c
  mm: hugetlb: introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  mm: hugetlb: free the vmemmap pages associated with each HugeTLB page
  mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  mm: hugetlb: set the PageHWPoison to the raw error page
  mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap
  mm: hugetlb: introduce nr_free_vmemmap_pages in the struct hstate
  mm: hugetlb: gather discrete indexes of tail page
  mm: hugetlb: optimize the code with the help of the compiler

 Documentation/admin-guide/kernel-parameters.txt |  14 ++
 Documentation/admin-guide/mm/hugetlbpage.rst    |  11 +
 arch/x86/mm/init_64.c                           |  13 +-
 fs/Kconfig                                      |   6 +
 include/linux/bootmem_info.h                    |  65 ++++++
 include/linux/hugetlb.h                         |  47 +++-
 include/linux/hugetlb_cgroup.h                  |  19 +-
 include/linux/memory_hotplug.h                  |  27 ---
 include/linux/mm.h                              |   5 +
 mm/Makefile                                     |   2 +
 mm/bootmem_info.c                               | 127 ++++++++++
 mm/hugetlb.c                                    | 176 +++++++++++---
 mm/hugetlb_vmemmap.c                            | 293 ++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h                            |  51 +++++
 mm/memory_hotplug.c                             | 116 ----------
 mm/sparse-vmemmap.c                             | 280 ++++++++++++++++++++++
 mm/sparse.c                                     |   1 +
 17 files changed, 1065 insertions(+), 188 deletions(-)
 create mode 100644 include/linux/bootmem_info.h
 create mode 100644 mm/bootmem_info.c
 create mode 100644 mm/hugetlb_vmemmap.c
 create mode 100644 mm/hugetlb_vmemmap.h

-- 
2.11.0



* [PATCH v18 1/9] mm: memory_hotplug: factor out bootmem core functions to bootmem_info.c
  2021-03-08 10:27 [PATCH v18 0/9] Free some vmemmap pages of HugeTLB page Muchun Song
@ 2021-03-08 10:27 ` Muchun Song
  2021-03-10 14:14   ` Michal Hocko
  2021-03-08 10:28 ` [PATCH v18 2/9] mm: hugetlb: introduce a new config HUGETLB_PAGE_FREE_VMEMMAP Muchun Song
                   ` (7 subsequent siblings)
  8 siblings, 1 reply; 52+ messages in thread
From: Muchun Song @ 2021-03-08 10:27 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david, naoya.horiguchi, joao.m.martins
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song, Miaohe Lin, Chen Huang, Bodeddula Balasubramaniam

Move the bootmem info registration common API to an individual
bootmem_info.c. We will use {get,put}_page_bootmem() to initialize the
pages for the vmemmap pages or free the vmemmap pages to buddy in a later
patch. So move them out of CONFIG_MEMORY_HOTPLUG_SPARSE. This is just code
movement without any functional change.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: David Hildenbrand <david@redhat.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Tested-by: Chen Huang <chenhuang5@huawei.com>
Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>
---
 arch/x86/mm/init_64.c          |   3 +-
 include/linux/bootmem_info.h   |  40 +++++++++++++
 include/linux/memory_hotplug.h |  27 ---------
 mm/Makefile                    |   1 +
 mm/bootmem_info.c              | 127 +++++++++++++++++++++++++++++++++++++++++
 mm/memory_hotplug.c            | 116 -------------------------------------
 mm/sparse.c                    |   1 +
 7 files changed, 171 insertions(+), 144 deletions(-)
 create mode 100644 include/linux/bootmem_info.h
 create mode 100644 mm/bootmem_info.c

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index b5a3fa4033d3..0a45f062826e 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -33,6 +33,7 @@
 #include <linux/nmi.h>
 #include <linux/gfp.h>
 #include <linux/kcore.h>
+#include <linux/bootmem_info.h>
 
 #include <asm/processor.h>
 #include <asm/bios_ebda.h>
@@ -1571,7 +1572,7 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 	return err;
 }
 
-#if defined(CONFIG_MEMORY_HOTPLUG_SPARSE) && defined(CONFIG_HAVE_BOOTMEM_INFO_NODE)
+#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
 void register_page_bootmem_memmap(unsigned long section_nr,
 				  struct page *start_page, unsigned long nr_pages)
 {
diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
new file mode 100644
index 000000000000..4ed6dee1adc9
--- /dev/null
+++ b/include/linux/bootmem_info.h
@@ -0,0 +1,40 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+#ifndef __LINUX_BOOTMEM_INFO_H
+#define __LINUX_BOOTMEM_INFO_H
+
+#include <linux/mmzone.h>
+
+/*
+ * Types for free bootmem stored in page->lru.next. These have to be in
+ * some random range in unsigned long space for debugging purposes.
+ */
+enum {
+	MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
+	SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
+	MIX_SECTION_INFO,
+	NODE_INFO,
+	MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
+};
+
+#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
+void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
+
+void get_page_bootmem(unsigned long info, struct page *page,
+		      unsigned long type);
+void put_page_bootmem(struct page *page);
+#else
+static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
+{
+}
+
+static inline void put_page_bootmem(struct page *page)
+{
+}
+
+static inline void get_page_bootmem(unsigned long info, struct page *page,
+				    unsigned long type)
+{
+}
+#endif
+
+#endif /* __LINUX_BOOTMEM_INFO_H */
diff --git a/include/linux/memory_hotplug.h b/include/linux/memory_hotplug.h
index 7288aa5ef73b..96659a8b9d02 100644
--- a/include/linux/memory_hotplug.h
+++ b/include/linux/memory_hotplug.h
@@ -18,18 +18,6 @@ struct vmem_altmap;
 #ifdef CONFIG_MEMORY_HOTPLUG
 struct page *pfn_to_online_page(unsigned long pfn);
 
-/*
- * Types for free bootmem stored in page->lru.next. These have to be in
- * some random range in unsigned long space for debugging purposes.
- */
-enum {
-	MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE = 12,
-	SECTION_INFO = MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE,
-	MIX_SECTION_INFO,
-	NODE_INFO,
-	MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE = NODE_INFO,
-};
-
 /* Types for control the zone type of onlined and offlined memory */
 enum {
 	/* Offline the memory. */
@@ -210,17 +198,6 @@ static inline void arch_refresh_nodedata(int nid, pg_data_t *pgdat)
 #endif /* CONFIG_NUMA */
 #endif /* CONFIG_HAVE_ARCH_NODEDATA_EXTENSION */
 
-#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
-extern void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
-#else
-static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-}
-#endif
-extern void put_page_bootmem(struct page *page);
-extern void get_page_bootmem(unsigned long ingo, struct page *page,
-			     unsigned long type);
-
 void get_online_mems(void);
 void put_online_mems(void);
 
@@ -248,10 +225,6 @@ static inline void zone_span_writelock(struct zone *zone) {}
 static inline void zone_span_writeunlock(struct zone *zone) {}
 static inline void zone_seqlock_init(struct zone *zone) {}
 
-static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-}
-
 static inline int try_online_node(int nid)
 {
 	return 0;
diff --git a/mm/Makefile b/mm/Makefile
index 72227b24a616..daabf86d7da8 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -83,6 +83,7 @@ obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KASAN)	+= kasan/
 obj-$(CONFIG_KFENCE) += kfence/
 obj-$(CONFIG_FAILSLAB) += failslab.o
+obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o
 obj-$(CONFIG_MEMORY_HOTPLUG) += memory_hotplug.o
 obj-$(CONFIG_MEMTEST)		+= memtest.o
 obj-$(CONFIG_MIGRATION) += migrate.o
diff --git a/mm/bootmem_info.c b/mm/bootmem_info.c
new file mode 100644
index 000000000000..5b152dba7344
--- /dev/null
+++ b/mm/bootmem_info.c
@@ -0,0 +1,127 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Bootmem core functions.
+ *
+ * Copyright (c) 2020, Bytedance.
+ *
+ *     Author: Muchun Song <songmuchun@bytedance.com>
+ *
+ */
+#include <linux/mm.h>
+#include <linux/compiler.h>
+#include <linux/memblock.h>
+#include <linux/bootmem_info.h>
+#include <linux/memory_hotplug.h>
+
+void get_page_bootmem(unsigned long info, struct page *page, unsigned long type)
+{
+	page->freelist = (void *)type;
+	SetPagePrivate(page);
+	set_page_private(page, info);
+	page_ref_inc(page);
+}
+
+void put_page_bootmem(struct page *page)
+{
+	unsigned long type;
+
+	type = (unsigned long) page->freelist;
+	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
+	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
+
+	if (page_ref_dec_return(page) == 1) {
+		page->freelist = NULL;
+		ClearPagePrivate(page);
+		set_page_private(page, 0);
+		INIT_LIST_HEAD(&page->lru);
+		free_reserved_page(page);
+	}
+}
+
+#ifndef CONFIG_SPARSEMEM_VMEMMAP
+static void register_page_bootmem_info_section(unsigned long start_pfn)
+{
+	unsigned long mapsize, section_nr, i;
+	struct mem_section *ms;
+	struct page *page, *memmap;
+	struct mem_section_usage *usage;
+
+	section_nr = pfn_to_section_nr(start_pfn);
+	ms = __nr_to_section(section_nr);
+
+	/* Get section's memmap address */
+	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+
+	/*
+	 * Get page for the memmap's phys address
+	 * XXX: need more consideration for sparse_vmemmap...
+	 */
+	page = virt_to_page(memmap);
+	mapsize = sizeof(struct page) * PAGES_PER_SECTION;
+	mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
+
+	/* remember memmap's page */
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, SECTION_INFO);
+
+	usage = ms->usage;
+	page = virt_to_page(usage);
+
+	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
+
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
+
+}
+#else /* CONFIG_SPARSEMEM_VMEMMAP */
+static void register_page_bootmem_info_section(unsigned long start_pfn)
+{
+	unsigned long mapsize, section_nr, i;
+	struct mem_section *ms;
+	struct page *page, *memmap;
+	struct mem_section_usage *usage;
+
+	section_nr = pfn_to_section_nr(start_pfn);
+	ms = __nr_to_section(section_nr);
+
+	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
+
+	register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
+
+	usage = ms->usage;
+	page = virt_to_page(usage);
+
+	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
+
+	for (i = 0; i < mapsize; i++, page++)
+		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
+}
+#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
+
+void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
+{
+	unsigned long i, pfn, end_pfn, nr_pages;
+	int node = pgdat->node_id;
+	struct page *page;
+
+	nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
+	page = virt_to_page(pgdat);
+
+	for (i = 0; i < nr_pages; i++, page++)
+		get_page_bootmem(node, page, NODE_INFO);
+
+	pfn = pgdat->node_start_pfn;
+	end_pfn = pgdat_end_pfn(pgdat);
+
+	/* register section info */
+	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
+		/*
+		 * Some platforms can assign the same pfn to multiple nodes - on
+		 * node0 as well as nodeN.  To avoid registering a pfn against
+		 * multiple nodes we check that this pfn does not already
+		 * reside in some other nodes.
+		 */
+		if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node))
+			register_page_bootmem_info_section(pfn);
+	}
+}
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index 5ba51a8bdaeb..a2a72b617040 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -144,122 +144,6 @@ static void release_memory_resource(struct resource *res)
 }
 
 #ifdef CONFIG_MEMORY_HOTPLUG_SPARSE
-void get_page_bootmem(unsigned long info,  struct page *page,
-		      unsigned long type)
-{
-	page->freelist = (void *)type;
-	SetPagePrivate(page);
-	set_page_private(page, info);
-	page_ref_inc(page);
-}
-
-void put_page_bootmem(struct page *page)
-{
-	unsigned long type;
-
-	type = (unsigned long) page->freelist;
-	BUG_ON(type < MEMORY_HOTPLUG_MIN_BOOTMEM_TYPE ||
-	       type > MEMORY_HOTPLUG_MAX_BOOTMEM_TYPE);
-
-	if (page_ref_dec_return(page) == 1) {
-		page->freelist = NULL;
-		ClearPagePrivate(page);
-		set_page_private(page, 0);
-		INIT_LIST_HEAD(&page->lru);
-		free_reserved_page(page);
-	}
-}
-
-#ifdef CONFIG_HAVE_BOOTMEM_INFO_NODE
-#ifndef CONFIG_SPARSEMEM_VMEMMAP
-static void register_page_bootmem_info_section(unsigned long start_pfn)
-{
-	unsigned long mapsize, section_nr, i;
-	struct mem_section *ms;
-	struct page *page, *memmap;
-	struct mem_section_usage *usage;
-
-	section_nr = pfn_to_section_nr(start_pfn);
-	ms = __nr_to_section(section_nr);
-
-	/* Get section's memmap address */
-	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-	/*
-	 * Get page for the memmap's phys address
-	 * XXX: need more consideration for sparse_vmemmap...
-	 */
-	page = virt_to_page(memmap);
-	mapsize = sizeof(struct page) * PAGES_PER_SECTION;
-	mapsize = PAGE_ALIGN(mapsize) >> PAGE_SHIFT;
-
-	/* remember memmap's page */
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, SECTION_INFO);
-
-	usage = ms->usage;
-	page = virt_to_page(usage);
-
-	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-
-}
-#else /* CONFIG_SPARSEMEM_VMEMMAP */
-static void register_page_bootmem_info_section(unsigned long start_pfn)
-{
-	unsigned long mapsize, section_nr, i;
-	struct mem_section *ms;
-	struct page *page, *memmap;
-	struct mem_section_usage *usage;
-
-	section_nr = pfn_to_section_nr(start_pfn);
-	ms = __nr_to_section(section_nr);
-
-	memmap = sparse_decode_mem_map(ms->section_mem_map, section_nr);
-
-	register_page_bootmem_memmap(section_nr, memmap, PAGES_PER_SECTION);
-
-	usage = ms->usage;
-	page = virt_to_page(usage);
-
-	mapsize = PAGE_ALIGN(mem_section_usage_size()) >> PAGE_SHIFT;
-
-	for (i = 0; i < mapsize; i++, page++)
-		get_page_bootmem(section_nr, page, MIX_SECTION_INFO);
-}
-#endif /* !CONFIG_SPARSEMEM_VMEMMAP */
-
-void __init register_page_bootmem_info_node(struct pglist_data *pgdat)
-{
-	unsigned long i, pfn, end_pfn, nr_pages;
-	int node = pgdat->node_id;
-	struct page *page;
-
-	nr_pages = PAGE_ALIGN(sizeof(struct pglist_data)) >> PAGE_SHIFT;
-	page = virt_to_page(pgdat);
-
-	for (i = 0; i < nr_pages; i++, page++)
-		get_page_bootmem(node, page, NODE_INFO);
-
-	pfn = pgdat->node_start_pfn;
-	end_pfn = pgdat_end_pfn(pgdat);
-
-	/* register section info */
-	for (; pfn < end_pfn; pfn += PAGES_PER_SECTION) {
-		/*
-		 * Some platforms can assign the same pfn to multiple nodes - on
-		 * node0 as well as nodeN.  To avoid registering a pfn against
-		 * multiple nodes we check that this pfn does not already
-		 * reside in some other nodes.
-		 */
-		if (pfn_valid(pfn) && (early_pfn_to_nid(pfn) == node))
-			register_page_bootmem_info_section(pfn);
-	}
-}
-#endif /* CONFIG_HAVE_BOOTMEM_INFO_NODE */
-
 static int check_pfn_span(unsigned long pfn, unsigned long nr_pages,
 		const char *reason)
 {
diff --git a/mm/sparse.c b/mm/sparse.c
index 7bd23f9d6cef..87676bf3af40 100644
--- a/mm/sparse.c
+++ b/mm/sparse.c
@@ -13,6 +13,7 @@
 #include <linux/vmalloc.h>
 #include <linux/swap.h>
 #include <linux/swapops.h>
+#include <linux/bootmem_info.h>
 
 #include "internal.h"
 #include <asm/dma.h>
-- 
2.11.0



* [PATCH v18 2/9] mm: hugetlb: introduce a new config HUGETLB_PAGE_FREE_VMEMMAP
  2021-03-08 10:27 [PATCH v18 0/9] Free some vmemmap pages of HugeTLB page Muchun Song
  2021-03-08 10:27 ` [PATCH v18 1/9] mm: memory_hotplug: factor out bootmem core functions to bootmem_info.c Muchun Song
@ 2021-03-08 10:28 ` Muchun Song
  2021-03-08 10:28 ` [PATCH v18 3/9] mm: hugetlb: free the vmemmap pages associated with each HugeTLB page Muchun Song
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 52+ messages in thread
From: Muchun Song @ 2021-03-08 10:28 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david, naoya.horiguchi, joao.m.martins
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song, Miaohe Lin, Chen Huang, Bodeddula Balasubramaniam,
	Balbir Singh

The option HUGETLB_PAGE_FREE_VMEMMAP allows for the freeing of
some vmemmap pages associated with pre-allocated HugeTLB pages.
For example, on X86_64 6 vmemmap pages of size 4KB each can be
saved for each 2MB HugeTLB page. 4094 vmemmap pages of size 4KB
each can be saved for each 1GB HugeTLB page.

When a HugeTLB page is allocated or freed, the vmemmap array
representing the range associated with the page will need to be
remapped. When a page is allocated, vmemmap pages are freed
after remapping. When a page is freed, previously discarded
vmemmap pages must be allocated before remapping.

The config option is introduced early so that supporting code
can be written to depend on the option. The initial version of
the code only provides support for x86-64.

Like other code which frees vmemmap, this config option depends on
HAVE_BOOTMEM_INFO_NODE. The routine register_page_bootmem_info() is
used to register bootmem info. Therefore, make sure
register_page_bootmem_info is enabled if HUGETLB_PAGE_FREE_VMEMMAP
is defined.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Tested-by: Chen Huang <chenhuang5@huawei.com>
Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>
Reviewed-by: Balbir Singh <bsingharora@gmail.com>
---
 arch/x86/mm/init_64.c | 2 +-
 fs/Kconfig            | 6 ++++++
 2 files changed, 7 insertions(+), 1 deletion(-)

diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0a45f062826e..0435bee2e172 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -1225,7 +1225,7 @@ static struct kcore_list kcore_vsyscall;
 
 static void __init register_page_bootmem_info(void)
 {
-#ifdef CONFIG_NUMA
+#if defined(CONFIG_NUMA) || defined(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)
 	int i;
 
 	for_each_online_node(i)
diff --git a/fs/Kconfig b/fs/Kconfig
index eccbcf1e3f2e..b5dcc68aab25 100644
--- a/fs/Kconfig
+++ b/fs/Kconfig
@@ -237,6 +237,12 @@ config HUGETLBFS
 config HUGETLB_PAGE
 	def_bool HUGETLBFS
 
+config HUGETLB_PAGE_FREE_VMEMMAP
+	def_bool HUGETLB_PAGE
+	depends on X86_64
+	depends on SPARSEMEM_VMEMMAP
+	depends on HAVE_BOOTMEM_INFO_NODE
+
 config MEMFD_CREATE
 	def_bool TMPFS || HUGETLBFS
 
-- 
2.11.0



* [PATCH v18 3/9] mm: hugetlb: free the vmemmap pages associated with each HugeTLB page
  2021-03-08 10:27 [PATCH v18 0/9] Free some vmemmap pages of HugeTLB page Muchun Song
  2021-03-08 10:27 ` [PATCH v18 1/9] mm: memory_hotplug: factor out bootmem core functions to bootmem_info.c Muchun Song
  2021-03-08 10:28 ` [PATCH v18 2/9] mm: hugetlb: introduce a new config HUGETLB_PAGE_FREE_VMEMMAP Muchun Song
@ 2021-03-08 10:28 ` Muchun Song
  2021-03-10 14:32   ` Michal Hocko
  2021-03-08 10:28 ` [PATCH v18 4/9] mm: hugetlb: alloc " Muchun Song
                   ` (5 subsequent siblings)
  8 siblings, 1 reply; 52+ messages in thread
From: Muchun Song @ 2021-03-08 10:28 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david, naoya.horiguchi, joao.m.martins
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song, Chen Huang, Bodeddula Balasubramaniam

Every HugeTLB page has more than one struct page structure. We __know__ that
we only use the first 4 (HUGETLB_CGROUP_MIN_ORDER) struct page structures
to store metadata associated with each HugeTLB page.

There are a lot of struct page structures associated with each HugeTLB
page. For tail pages, the value of compound_head is the same, so we can
reuse the first page of the tail page structures. We map the virtual
addresses of the remaining pages of tail page structures to the first tail
page struct, and then free those page frames. Therefore, we need to reserve
two pages as vmemmap areas.

When we allocate a HugeTLB page from the buddy, we can free some vmemmap
pages associated with each HugeTLB page. It is more appropriate to do it
in the prep_new_huge_page().

The free_vmemmap_pages_per_hpage(), which indicates how many vmemmap
pages associated with a HugeTLB page can be freed, returns zero for
now, which means the feature is disabled. We will enable it once all
the infrastructure is there.

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Tested-by: Chen Huang <chenhuang5@huawei.com>
Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>
---
 include/linux/bootmem_info.h |  27 +++++-
 include/linux/mm.h           |   3 +
 mm/Makefile                  |   1 +
 mm/hugetlb.c                 |   3 +
 mm/hugetlb_vmemmap.c         | 219 +++++++++++++++++++++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h         |  20 ++++
 mm/sparse-vmemmap.c          | 207 ++++++++++++++++++++++++++++++++++++++++
 7 files changed, 479 insertions(+), 1 deletion(-)
 create mode 100644 mm/hugetlb_vmemmap.c
 create mode 100644 mm/hugetlb_vmemmap.h

diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
index 4ed6dee1adc9..ec03a624dfa2 100644
--- a/include/linux/bootmem_info.h
+++ b/include/linux/bootmem_info.h
@@ -2,7 +2,7 @@
 #ifndef __LINUX_BOOTMEM_INFO_H
 #define __LINUX_BOOTMEM_INFO_H
 
-#include <linux/mmzone.h>
+#include <linux/mm.h>
 
 /*
  * Types for free bootmem stored in page->lru.next. These have to be in
@@ -22,6 +22,27 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
 void get_page_bootmem(unsigned long info, struct page *page,
 		      unsigned long type);
 void put_page_bootmem(struct page *page);
+
+/*
+ * Any memory allocated via the memblock allocator and not via the
+ * buddy will be marked reserved already in the memmap. For those
+ * pages, we can call this function to free it to buddy allocator.
+ */
+static inline void free_bootmem_page(struct page *page)
+{
+	unsigned long magic = (unsigned long)page->freelist;
+
+	/*
+	 * The reserve_bootmem_region sets the reserved flag on bootmem
+	 * pages.
+	 */
+	VM_BUG_ON_PAGE(page_ref_count(page) != 2, page);
+
+	if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
+		put_page_bootmem(page);
+	else
+		VM_BUG_ON_PAGE(1, page);
+}
 #else
 static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
 {
@@ -35,6 +56,10 @@ static inline void get_page_bootmem(unsigned long info, struct page *page,
 				    unsigned long type)
 {
 }
+
+static inline void free_bootmem_page(struct page *page)
+{
+}
 #endif
 
 #endif /* __LINUX_BOOTMEM_INFO_H */
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 77e64e3eac80..4ddfc31f21c6 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2971,6 +2971,9 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
 }
 #endif
 
+void vmemmap_remap_free(unsigned long start, unsigned long end,
+			unsigned long reuse);
+
 void *sparse_buffer_alloc(unsigned long size);
 struct page * __populate_section_memmap(unsigned long pfn,
 		unsigned long nr_pages, int nid, struct vmem_altmap *altmap);
diff --git a/mm/Makefile b/mm/Makefile
index daabf86d7da8..3d7d57e3b55b 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -71,6 +71,7 @@ obj-$(CONFIG_FRONTSWAP)	+= frontswap.o
 obj-$(CONFIG_ZSWAP)	+= zswap.o
 obj-$(CONFIG_HAS_DMA)	+= dmapool.o
 obj-$(CONFIG_HUGETLBFS)	+= hugetlb.o
+obj-$(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)	+= hugetlb_vmemmap.o
 obj-$(CONFIG_NUMA) 	+= mempolicy.o
 obj-$(CONFIG_SPARSEMEM)	+= sparse.o
 obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c232cb67dda2..43fed6785322 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -42,6 +42,7 @@
 #include <linux/userfaultfd_k.h>
 #include <linux/page_owner.h>
 #include "internal.h"
+#include "hugetlb_vmemmap.h"
 
 int hugetlb_max_hstate __read_mostly;
 unsigned int default_hstate_idx;
@@ -1463,6 +1464,8 @@ void free_huge_page(struct page *page)
 
 static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
 {
+	free_huge_page_vmemmap(h, page);
+
 	INIT_LIST_HEAD(&page->lru);
 	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
 	set_hugetlb_cgroup(page, NULL);
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
new file mode 100644
index 000000000000..0209b736e0b4
--- /dev/null
+++ b/mm/hugetlb_vmemmap.c
@@ -0,0 +1,219 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ *     Author: Muchun Song <songmuchun@bytedance.com>
+ *
+ * The struct page structures (page structs) are used to describe a physical
+ * page frame. By default, there is a one-to-one mapping from a page frame to
+ * it's corresponding page struct.
+ *
+ * HugeTLB pages consist of multiple base page size pages and is supported by
+ * many architectures. See hugetlbpage.rst in the Documentation directory for
+ * more details. On the x86-64 architecture, HugeTLB pages of size 2MB and 1GB
+ * are currently supported. Since the base page size on x86 is 4KB, a 2MB
+ * HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
+ * 4096 base pages. For each base page, there is a corresponding page struct.
+ *
+ * Within the HugeTLB subsystem, only the first 4 page structs are used to
+ * contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
+ * provides this upper limit. The only 'useful' information in the remaining
+ * page structs is the compound_head field, and this field is the same for all
+ * tail pages.
+ *
+ * By removing redundant page structs for HugeTLB pages, memory can be returned
+ * to the buddy allocator for other uses.
+ *
+ * Different architectures support different HugeTLB pages. For example, the
+ * following table is the HugeTLB page size supported by x86 and arm64
+ * architectures. Because arm64 supports 4k, 16k, and 64k base pages and
+ * supports contiguous entries, so it supports many kinds of sizes of HugeTLB
+ * page.
+ *
+ * +--------------+-----------+-----------------------------------------------+
+ * | Architecture | Page Size |                HugeTLB Page Size              |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ * |    x86-64    |    4KB    |    2MB    |    1GB    |           |           |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ * |              |    4KB    |   64KB    |    2MB    |    32MB   |    1GB    |
+ * |              +-----------+-----------+-----------+-----------+-----------+
+ * |    arm64     |   16KB    |    2MB    |   32MB    |     1GB   |           |
+ * |              +-----------+-----------+-----------+-----------+-----------+
+ * |              |   64KB    |    2MB    |  512MB    |    16GB   |           |
+ * +--------------+-----------+-----------+-----------+-----------+-----------+
+ *
+ * When the system boot up, every HugeTLB page has more than one struct page
+ * structs which size is (unit: pages):
+ *
+ *    struct_size = HugeTLB_Size / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE
+ *
+ * Where HugeTLB_Size is the size of the HugeTLB page. We know that the size
+ * of the HugeTLB page is always n times PAGE_SIZE. So we can get the following
+ * relationship.
+ *
+ *    HugeTLB_Size = n * PAGE_SIZE
+ *
+ * Then,
+ *
+ *    struct_size = n * PAGE_SIZE / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE
+ *                = n * sizeof(struct page) / PAGE_SIZE
+ *
+ * We can use huge mapping at the pud/pmd level for the HugeTLB page.
+ *
+ * For the HugeTLB page of the pmd level mapping, then
+ *
+ *    struct_size = n * sizeof(struct page) / PAGE_SIZE
+ *                = PAGE_SIZE / sizeof(pte_t) * sizeof(struct page) / PAGE_SIZE
+ *                = sizeof(struct page) / sizeof(pte_t)
+ *                = 64 / 8
+ *                = 8 (pages)
+ *
+ * Where n is how many pte entries which one page can contains. So the value of
+ * n is (PAGE_SIZE / sizeof(pte_t)).
+ *
+ * This optimization only supports 64-bit system, so the value of sizeof(pte_t)
+ * is 8. And this optimization also applicable only when the size of struct page
+ * is a power of two. In most cases, the size of struct page is 64 bytes (e.g.
+ * x86-64 and arm64). So if we use pmd level mapping for a HugeTLB page, the
+ * size of struct page structs of it is 8 page frames which size depends on the
+ * size of the base page.
+ *
+ * For the HugeTLB page of the pud level mapping, then
+ *
+ *    struct_size = PAGE_SIZE / sizeof(pmd_t) * struct_size(pmd)
+ *                = PAGE_SIZE / 8 * 8 (pages)
+ *                = PAGE_SIZE (pages)
+ *
+ * Where the struct_size(pmd) is the size of the struct page structs of a
+ * HugeTLB page of the pmd level mapping.
+ *
+ * E.g.: A 2MB HugeTLB page on x86_64 consists in 8 page frames while 1GB
+ * HugeTLB page consists in 4096.
+ *
+ * Next, we take the pmd level mapping of the HugeTLB page as an example to
+ * show the internal implementation of this optimization. There are 8 pages
+ * struct page structs associated with a HugeTLB page which is pmd mapped.
+ *
+ * Here is how things look before optimization.
+ *
+ *    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ * |           |                     |     0     | -------------> |     0     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     1     | -------------> |     1     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     2     | -------------> |     2     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     3     | -------------> |     3     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     4     | -------------> |     4     |
+ * |    PMD    |                     +-----------+                +-----------+
+ * |   level   |                     |     5     | -------------> |     5     |
+ * |  mapping  |                     +-----------+                +-----------+
+ * |           |                     |     6     | -------------> |     6     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     7     | -------------> |     7     |
+ * |           |                     +-----------+                +-----------+
+ * |           |
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ * The value of page->compound_head is the same for all tail pages. The first
+ * page of page structs (page 0) associated with the HugeTLB page contains the 4
+ * page structs necessary to describe the HugeTLB. The only use of the remaining
+ * pages of page structs (page 1 to page 7) is to point to page->compound_head.
+ * Therefore, we can remap pages 2 to 7 to page 1. Only 2 pages of page structs
+ * will be used for each HugeTLB page. This will allow us to free the remaining
+ * 6 pages to the buddy allocator.
+ *
+ * Here is how things look after remapping.
+ *
+ *    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
+ * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
+ * |           |                     |     0     | -------------> |     0     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     1     | -------------> |     1     |
+ * |           |                     +-----------+                +-----------+
+ * |           |                     |     2     | ----------------^ ^ ^ ^ ^ ^
+ * |           |                     +-----------+                   | | | | |
+ * |           |                     |     3     | ------------------+ | | | |
+ * |           |                     +-----------+                     | | | |
+ * |           |                     |     4     | --------------------+ | | |
+ * |    PMD    |                     +-----------+                       | | |
+ * |   level   |                     |     5     | ----------------------+ | |
+ * |  mapping  |                     +-----------+                         | |
+ * |           |                     |     6     | ------------------------+ |
+ * |           |                     +-----------+                           |
+ * |           |                     |     7     | --------------------------+
+ * |           |                     +-----------+
+ * |           |
+ * |           |
+ * |           |
+ * +-----------+
+ *
+ * When a HugeTLB is freed to the buddy system, we should allocate 6 pages for
+ * vmemmap pages and restore the previous mapping relationship.
+ *
+ * For the HugeTLB page of the pud level mapping. It is similar to the former.
+ * We also can use this approach to free (PAGE_SIZE - 2) vmemmap pages.
+ *
+ * Apart from the HugeTLB page of the pmd/pud level mapping, some architectures
+ * (e.g. aarch64) provides a contiguous bit in the translation table entries
+ * that hints to the MMU to indicate that it is one of a contiguous set of
+ * entries that can be cached in a single TLB entry.
+ *
+ * The contiguous bit is used to increase the mapping size at the pmd and pte
+ * (last) level. So this type of HugeTLB page can be optimized only when its
+ * size of the struct page structs is greater than 2 pages.
+ */
+#include "hugetlb_vmemmap.h"
+
+/*
+ * There are a lot of struct page structures associated with each HugeTLB page.
+ * For tail pages, the value of compound_head is the same. So we can reuse first
+ * page of tail page structures. We map the virtual addresses of the remaining
+ * pages of tail page structures to the first tail page struct, and then free
+ * these page frames. Therefore, we need to reserve two pages as vmemmap areas.
+ */
+#define RESERVE_VMEMMAP_NR		2U
+#define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
+
+/*
+ * How many vmemmap pages associated with a HugeTLB page that can be freed
+ * to the buddy allocator.
+ *
+ * Todo: Returns zero for now, which means the feature is disabled. We will
+ * enable it once all the infrastructure is there.
+ */
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
+
+static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
+{
+	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
+}
+
+void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	unsigned long vmemmap_addr = (unsigned long)head;
+	unsigned long vmemmap_end, vmemmap_reuse;
+
+	if (!free_vmemmap_pages_per_hpage(h))
+		return;
+
+	vmemmap_addr += RESERVE_VMEMMAP_SIZE;
+	vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h);
+	vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
+
+	/*
+	 * Remap the vmemmap virtual address range [@vmemmap_addr, @vmemmap_end)
+	 * to the page which @vmemmap_reuse is mapped to, then free the pages
+	 * which the range [@vmemmap_addr, @vmemmap_end] is mapped to.
+	 */
+	vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse);
+}
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
new file mode 100644
index 000000000000..6923f03534d5
--- /dev/null
+++ b/mm/hugetlb_vmemmap.h
@@ -0,0 +1,20 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Free some vmemmap pages of HugeTLB
+ *
+ * Copyright (c) 2020, Bytedance. All rights reserved.
+ *
+ *     Author: Muchun Song <songmuchun@bytedance.com>
+ */
+#ifndef _LINUX_HUGETLB_VMEMMAP_H
+#define _LINUX_HUGETLB_VMEMMAP_H
+#include <linux/hugetlb.h>
+
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+#else
+static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+}
+#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
+#endif /* _LINUX_HUGETLB_VMEMMAP_H */
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index 16183d85a7d5..d3076a7a3783 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -27,8 +27,215 @@
 #include <linux/spinlock.h>
 #include <linux/vmalloc.h>
 #include <linux/sched.h>
+#include <linux/pgtable.h>
+#include <linux/bootmem_info.h>
+
 #include <asm/dma.h>
 #include <asm/pgalloc.h>
+#include <asm/tlbflush.h>
+
+/**
+ * vmemmap_remap_walk - walk vmemmap page table
+ *
+ * @remap_pte:		called for each lowest-level entry (PTE).
+ * @reuse_page:		the page which is reused for the tail vmemmap pages.
+ * @reuse_addr:		the virtual address of the @reuse_page page.
+ * @vmemmap_pages:	the list head of the vmemmap pages that can be freed.
+ */
+struct vmemmap_remap_walk {
+	void (*remap_pte)(pte_t *pte, unsigned long addr,
+			  struct vmemmap_remap_walk *walk);
+	struct page *reuse_page;
+	unsigned long reuse_addr;
+	struct list_head *vmemmap_pages;
+};
+
+static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
+			      unsigned long end,
+			      struct vmemmap_remap_walk *walk)
+{
+	pte_t *pte;
+
+	pte = pte_offset_kernel(pmd, addr);
+
+	/*
+	 * The reuse_page is found 'first' in table walk before we start
+	 * remapping (which is calling @walk->remap_pte).
+	 */
+	if (!walk->reuse_page) {
+		BUG_ON(pte_none(*pte));
+		BUG_ON(walk->reuse_addr != addr);
+
+		walk->reuse_page = pte_page(*pte++);
+		/*
+		 * Because the reuse address is part of the range that we are
+		 * walking, skip the reuse address range.
+		 */
+		addr += PAGE_SIZE;
+	}
+
+	for (; addr != end; addr += PAGE_SIZE, pte++) {
+		BUG_ON(pte_none(*pte));
+
+		walk->remap_pte(pte, addr, walk);
+	}
+}
+
+static void vmemmap_pmd_range(pud_t *pud, unsigned long addr,
+			      unsigned long end,
+			      struct vmemmap_remap_walk *walk)
+{
+	pmd_t *pmd;
+	unsigned long next;
+
+	pmd = pmd_offset(pud, addr);
+	do {
+		BUG_ON(pmd_none(*pmd) || pmd_leaf(*pmd));
+
+		next = pmd_addr_end(addr, end);
+		vmemmap_pte_range(pmd, addr, next, walk);
+	} while (pmd++, addr = next, addr != end);
+}
+
+static void vmemmap_pud_range(p4d_t *p4d, unsigned long addr,
+			      unsigned long end,
+			      struct vmemmap_remap_walk *walk)
+{
+	pud_t *pud;
+	unsigned long next;
+
+	pud = pud_offset(p4d, addr);
+	do {
+		BUG_ON(pud_none(*pud));
+
+		next = pud_addr_end(addr, end);
+		vmemmap_pmd_range(pud, addr, next, walk);
+	} while (pud++, addr = next, addr != end);
+}
+
+static void vmemmap_p4d_range(pgd_t *pgd, unsigned long addr,
+			      unsigned long end,
+			      struct vmemmap_remap_walk *walk)
+{
+	p4d_t *p4d;
+	unsigned long next;
+
+	p4d = p4d_offset(pgd, addr);
+	do {
+		BUG_ON(p4d_none(*p4d));
+
+		next = p4d_addr_end(addr, end);
+		vmemmap_pud_range(p4d, addr, next, walk);
+	} while (p4d++, addr = next, addr != end);
+}
+
+static void vmemmap_remap_range(unsigned long start, unsigned long end,
+				struct vmemmap_remap_walk *walk)
+{
+	unsigned long addr = start;
+	unsigned long next;
+	pgd_t *pgd;
+
+	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
+	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
+
+	pgd = pgd_offset_k(addr);
+	do {
+		BUG_ON(pgd_none(*pgd));
+
+		next = pgd_addr_end(addr, end);
+		vmemmap_p4d_range(pgd, addr, next, walk);
+	} while (pgd++, addr = next, addr != end);
+
+	/*
+	 * We only change the mapping of the vmemmap virtual address range
+	 * [@start + PAGE_SIZE, end), so we only need to flush the TLB which
+	 * belongs to the range.
+	 */
+	flush_tlb_kernel_range(start + PAGE_SIZE, end);
+}
+
+/*
+ * Free a vmemmap page. A vmemmap page can be allocated from the memblock
+ * allocator or buddy allocator. If the PG_reserved flag is set, it means
+ * that it allocated from the memblock allocator, just free it via the
+ * free_bootmem_page(). Otherwise, use __free_page().
+ */
+static inline void free_vmemmap_page(struct page *page)
+{
+	if (PageReserved(page))
+		free_bootmem_page(page);
+	else
+		__free_page(page);
+}
+
+/* Free a list of the vmemmap pages */
+static void free_vmemmap_page_list(struct list_head *list)
+{
+	struct page *page, *next;
+
+	list_for_each_entry_safe(page, next, list, lru) {
+		list_del(&page->lru);
+		free_vmemmap_page(page);
+	}
+}
+
+static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
+			      struct vmemmap_remap_walk *walk)
+{
+	/*
+	 * Remap the tail pages as read-only to catch illegal write operation
+	 * to the tail pages.
+	 */
+	pgprot_t pgprot = PAGE_KERNEL_RO;
+	pte_t entry = mk_pte(walk->reuse_page, pgprot);
+	struct page *page = pte_page(*pte);
+
+	list_add(&page->lru, walk->vmemmap_pages);
+	set_pte_at(&init_mm, addr, pte, entry);
+}
+
+/**
+ * vmemmap_remap_free - remap the vmemmap virtual address range [@start, @end)
+ *			to the page which @reuse is mapped to, then free the
+ *			vmemmap pages which the range was mapped to.
+ * @start:	start address of the vmemmap virtual address range that we want
+ *		to remap.
+ * @end:	end address of the vmemmap virtual address range that we want to
+ *		remap.
+ * @reuse:	reuse address.
+ *
+ * Note: This function depends on vmemmap being base page mapped. Please make
+ * sure that we disable PMD mapping of vmemmap pages when calling this function.
+ */
+void vmemmap_remap_free(unsigned long start, unsigned long end,
+			unsigned long reuse)
+{
+	LIST_HEAD(vmemmap_pages);
+	struct vmemmap_remap_walk walk = {
+		.remap_pte	= vmemmap_remap_pte,
+		.reuse_addr	= reuse,
+		.vmemmap_pages	= &vmemmap_pages,
+	};
+
+	/*
+	 * In order to make the remapping routine most efficient for huge pages,
+	 * the vmemmap page table walking routine has the following rules
+	 * (see more details in vmemmap_pte_range()):
+	 *
+	 * - The range [@start, @end) and the range [@reuse, @reuse + PAGE_SIZE)
+	 *   must be contiguous.
+	 * - The @reuse address is part of the range [@reuse, @end) that we are
+	 *   walking, which is passed to vmemmap_remap_range().
+	 * - The @reuse address is the first in the complete range.
+	 *
+	 * So we need to make sure that @start and @reuse meet the above rules.
+	 */
+	BUG_ON(start - reuse != PAGE_SIZE);
+
+	vmemmap_remap_range(reuse, end, &walk);
+	free_vmemmap_page_list(&vmemmap_pages);
+}
 
 /*
  * Allocate a block of memory to be used to back the virtual memory map
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v18 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-03-08 10:27 [PATCH v18 0/9] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (2 preceding siblings ...)
  2021-03-08 10:28 ` [PATCH v18 3/9] mm: hugetlb: free the vmemmap pages associated with each HugeTLB page Muchun Song
@ 2021-03-08 10:28 ` Muchun Song
  2021-03-10 14:21   ` Oscar Salvador
  2021-03-10 15:19   ` Michal Hocko
  2021-03-08 10:28 ` [PATCH v18 5/9] mm: hugetlb: set the PageHWPoison to the raw error page Muchun Song
                   ` (4 subsequent siblings)
  8 siblings, 2 replies; 52+ messages in thread
From: Muchun Song @ 2021-03-08 10:28 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david, naoya.horiguchi, joao.m.martins
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song, Chen Huang, Bodeddula Balasubramaniam

When we free a HugeTLB page to the buddy allocator, we need to allocate
the vmemmap pages associated with it. However, we may not be able to
allocate the vmemmap pages when the system is under memory pressure. In
this case, we just refuse to free the HugeTLB page. This changes behavior
in some corner cases as listed below:

 1) Failing to free a huge page triggered by the user (decrease nr_pages).

    User needs to try again later.

 2) Failing to free a surplus huge page when freed by the application.

    Try again later when freeing a huge page next time.

 3) Failing to dissolve a free huge page on ZONE_MOVABLE via
    offline_pages().

    This can happen when we have plenty of ZONE_MOVABLE memory, but
    not enough kernel memory to allocate vmemmmap pages.  We may even
    be able to migrate huge page contents, but will not be able to
    dissolve the source huge page.  This will prevent an offline
    operation and is unfortunate as memory offlining is expected to
    succeed on movable zones.  Users that depend on memory hotplug
    to succeed for movable zones should carefully consider whether the
    memory savings gained from this feature are worth the risk of
    possibly not being able to offline memory in certain situations.

 4) Failing to dissolve a huge page on CMA/ZONE_MOVABLE via
    alloc_contig_range() - once we have that handling in place. Mainly
    affects CMA and virtio-mem.

    Similar to 3). virtio-mem will handle migration errors gracefully.
    CMA might be able to fallback on other free areas within the CMA
    region.

Vmemmap pages are allocated from the page freeing context. In order for
those allocations not to be disruptive (e.g. trigger the OOM killer),
__GFP_NORETRY is used. hugetlb_lock is dropped for the allocation
because a non-sleeping allocation would be too fragile and could fail
too easily under memory pressure. GFP_ATOMIC or other modes that access
memory reserves are not used because we want to prevent consuming
reserves under heavy hugetlb freeing.
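
For illustration, a condensed sketch (not part of the patch) of how the
free path reacts when the vmemmap allocation fails; the names come from
the update_and_free_page() changes in the diff below, and the counter
and compound-destructor bookkeeping is omitted:

	if (alloc_huge_page_vmemmap(h, page)) {
		/*
		 * GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE fails early
		 * under memory pressure instead of invoking the OOM killer
		 * or dipping into memory reserves.
		 */
		h->surplus_huge_pages++;
		h->surplus_huge_pages_node[nid]++;
		if (likely(put_page_testzero(page)))
			enqueue_huge_page(h, page);	/* keep it as a surplus page */
		return -ENOMEM;				/* refuse to free it for now */
	}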

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Tested-by: Chen Huang <chenhuang5@huawei.com>
Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>
---
 Documentation/admin-guide/mm/hugetlbpage.rst |  8 +++
 include/linux/mm.h                           |  2 +
 mm/hugetlb.c                                 | 92 +++++++++++++++++++++-------
 mm/hugetlb_vmemmap.c                         | 32 ++++++----
 mm/hugetlb_vmemmap.h                         | 23 +++++++
 mm/sparse-vmemmap.c                          | 75 ++++++++++++++++++++++-
 6 files changed, 197 insertions(+), 35 deletions(-)

diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index f7b1c7462991..6988895d09a8 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -60,6 +60,10 @@ HugePages_Surp
         the pool above the value in ``/proc/sys/vm/nr_hugepages``. The
         maximum number of surplus huge pages is controlled by
         ``/proc/sys/vm/nr_overcommit_hugepages``.
+	Note: When the feature of freeing unused vmemmap pages associated
+	with each hugetlb page is enabled, the number of surplus huge pages
+	may be temporarily larger than the maximum number of surplus huge
+	pages when the system is under memory pressure.
 Hugepagesize
 	is the default hugepage size (in Kb).
 Hugetlb
@@ -80,6 +84,10 @@ returned to the huge page pool when freed by a task.  A user with root
 privileges can dynamically allocate more or free some persistent huge pages
 by increasing or decreasing the value of ``nr_hugepages``.
 
+Note: When the feature of freeing unused vmemmap pages associated with each
+hugetlb page is enabled, freeing huge pages triggered by the user can fail
+when the system is under memory pressure.  Please try again later.
+
 Pages that are used as huge pages are reserved inside the kernel and cannot
 be used for other purposes.  Huge pages cannot be swapped out under
 memory pressure.
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 4ddfc31f21c6..77693c944a36 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2973,6 +2973,8 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
 
 void vmemmap_remap_free(unsigned long start, unsigned long end,
 			unsigned long reuse);
+int vmemmap_remap_alloc(unsigned long start, unsigned long end,
+			unsigned long reuse, gfp_t gfp_mask);
 
 void *sparse_buffer_alloc(unsigned long size);
 struct page * __populate_section_memmap(unsigned long pfn,
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 43fed6785322..377e0c1b283f 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1304,16 +1304,59 @@ static inline void destroy_compound_gigantic_page(struct page *page,
 						unsigned int order) { }
 #endif
 
-static void update_and_free_page(struct hstate *h, struct page *page)
+static int update_and_free_page(struct hstate *h, struct page *page)
+	__releases(&hugetlb_lock) __acquires(&hugetlb_lock)
 {
 	int i;
 	struct page *subpage = page;
+	int nid = page_to_nid(page);
 
 	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
-		return;
+		return 0;
 
 	h->nr_huge_pages--;
-	h->nr_huge_pages_node[page_to_nid(page)]--;
+	h->nr_huge_pages_node[nid]--;
+	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page);
+	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page_rsvd(page), page);
+	set_page_refcounted(page);
+	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
+
+	/*
+	 * If the vmemmap pages associated with the HugeTLB page can be
+	 * optimized or the page is gigantic, we might block in
+	 * alloc_huge_page_vmemmap() or free_gigantic_page(). In both
+	 * cases, drop the hugetlb_lock.
+	 */
+	if (free_vmemmap_pages_per_hpage(h) || hstate_is_gigantic(h))
+		spin_unlock(&hugetlb_lock);
+
+	if (alloc_huge_page_vmemmap(h, page)) {
+		spin_lock(&hugetlb_lock);
+		INIT_LIST_HEAD(&page->lru);
+		set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
+		h->nr_huge_pages++;
+		h->nr_huge_pages_node[nid]++;
+
+		/*
+		 * If we cannot allocate vmemmap pages, just refuse to free the
+	 * page, put the page back on the hugetlb free list, and treat it
+	 * as a surplus page.
+		 */
+		h->surplus_huge_pages++;
+		h->surplus_huge_pages_node[nid]++;
+
+		/*
+		 * The refcount can possibly be increased by memory-failure or
+		 * soft_offline handlers.
+		 */
+		if (likely(put_page_testzero(page))) {
+			arch_clear_hugepage_flags(page);
+			enqueue_huge_page(h, page);
+		}
+
+		return -ENOMEM;
+	}
+
 	for (i = 0; i < pages_per_huge_page(h);
 	     i++, subpage = mem_map_next(subpage, page, i)) {
 		subpage->flags &= ~(1 << PG_locked | 1 << PG_error |
@@ -1321,22 +1364,18 @@ static void update_and_free_page(struct hstate *h, struct page *page)
 				1 << PG_active | 1 << PG_private |
 				1 << PG_writeback);
 	}
-	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page);
-	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page_rsvd(page), page);
-	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
-	set_page_refcounted(page);
+
 	if (hstate_is_gigantic(h)) {
-		/*
-		 * Temporarily drop the hugetlb_lock, because
-		 * we might block in free_gigantic_page().
-		 */
-		spin_unlock(&hugetlb_lock);
 		destroy_compound_gigantic_page(page, huge_page_order(h));
 		free_gigantic_page(page, huge_page_order(h));
-		spin_lock(&hugetlb_lock);
 	} else {
 		__free_pages(page, huge_page_order(h));
 	}
+
+	if (free_vmemmap_pages_per_hpage(h) || hstate_is_gigantic(h))
+		spin_lock(&hugetlb_lock);
+
+	return 0;
 }
 
 struct hstate *size_to_hstate(unsigned long size)
@@ -1404,9 +1443,9 @@ static void __free_huge_page(struct page *page)
 	} else if (h->surplus_huge_pages_node[nid]) {
 		/* remove the page from active list */
 		list_del(&page->lru);
-		update_and_free_page(h, page);
 		h->surplus_huge_pages--;
 		h->surplus_huge_pages_node[nid]--;
+		update_and_free_page(h, page);
 	} else {
 		arch_clear_hugepage_flags(page);
 		enqueue_huge_page(h, page);
@@ -1447,7 +1486,7 @@ void free_huge_page(struct page *page)
 	/*
 	 * Defer freeing if in non-task context to avoid hugetlb_lock deadlock.
 	 */
-	if (!in_task()) {
+	if (in_atomic()) {
 		/*
 		 * Only call schedule_work() if hpage_freelist is previously
 		 * empty. Otherwise, schedule_work() had been called but the
@@ -1699,8 +1738,7 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
 				h->surplus_huge_pages--;
 				h->surplus_huge_pages_node[node]--;
 			}
-			update_and_free_page(h, page);
-			ret = 1;
+			ret = !update_and_free_page(h, page);
 			break;
 		}
 	}
@@ -1713,10 +1751,14 @@ static int free_pool_huge_page(struct hstate *h, nodemask_t *nodes_allowed,
  * nothing for in-use hugepages and non-hugepages.
  * This function returns values like below:
  *
- *  -EBUSY: failed to dissolved free hugepages or the hugepage is in-use
- *          (allocated or reserved.)
- *       0: successfully dissolved free hugepages or the page is not a
- *          hugepage (considered as already dissolved)
+ *  -ENOMEM: failed to allocate the vmemmap pages needed to free the
+ *           hugepages when the system is under memory pressure and the
+ *           feature of freeing unused vmemmap pages associated with each
+ *           hugetlb page is enabled.
+ *  -EBUSY:  failed to dissolve free hugepages or the hugepage is in-use
+ *           (allocated or reserved.)
+ *       0:  successfully dissolved free hugepages or the page is not a
+ *           hugepage (considered as already dissolved)
  */
 int dissolve_free_huge_page(struct page *page)
 {
@@ -1771,8 +1813,12 @@ int dissolve_free_huge_page(struct page *page)
 		h->free_huge_pages--;
 		h->free_huge_pages_node[nid]--;
 		h->max_huge_pages--;
-		update_and_free_page(h, head);
-		rc = 0;
+		rc = update_and_free_page(h, head);
+		if (rc) {
+			h->surplus_huge_pages--;
+			h->surplus_huge_pages_node[nid]--;
+			h->max_huge_pages++;
+		}
 	}
 out:
 	spin_unlock(&hugetlb_lock);
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 0209b736e0b4..f7ab3d99250a 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -181,21 +181,31 @@
 #define RESERVE_VMEMMAP_NR		2U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
-/*
- * How many vmemmap pages associated with a HugeTLB page that can be freed
- * to the buddy allocator.
- *
- * Todo: Returns zero for now, which means the feature is disabled. We will
- * enable it once all the infrastructure is there.
- */
-static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
 {
-	return 0;
+	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
 }
 
-static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
+int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
-	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
+	unsigned long vmemmap_addr = (unsigned long)head;
+	unsigned long vmemmap_end, vmemmap_reuse;
+
+	if (!free_vmemmap_pages_per_hpage(h))
+		return 0;
+
+	vmemmap_addr += RESERVE_VMEMMAP_SIZE;
+	vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h);
+	vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
+	/*
+	 * The pages which the vmemmap virtual address range [@vmemmap_addr,
+	 * @vmemmap_end) are mapped to are freed to the buddy allocator, and
+	 * the range is mapped to the page which @vmemmap_reuse is mapped to.
+	 * When a HugeTLB page is freed to the buddy allocator, previously
+	 * discarded vmemmap pages must be allocated and remapped.
+	 */
+	return vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse,
+				   GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE);
 }
 
 void free_huge_page_vmemmap(struct hstate *h, struct page *head)
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index 6923f03534d5..a37771b0b82a 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -11,10 +11,33 @@
 #include <linux/hugetlb.h>
 
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+int alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+
+/*
+ * How many vmemmap pages associated with a HugeTLB page that can be freed
+ * to the buddy allocator.
+ *
+ * Todo: Returns zero for now, which means the feature is disabled. We will
+ * enable it once all the infrastructure is there.
+ */
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
 #else
+static inline int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
+{
+	return 0;
+}
+
 static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
+
+static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
+{
+	return 0;
+}
 #endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
 #endif /* _LINUX_HUGETLB_VMEMMAP_H */
diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
index d3076a7a3783..60fc6cd6cd23 100644
--- a/mm/sparse-vmemmap.c
+++ b/mm/sparse-vmemmap.c
@@ -40,7 +40,8 @@
  * @remap_pte:		called for each lowest-level entry (PTE).
  * @reuse_page:		the page which is reused for the tail vmemmap pages.
  * @reuse_addr:		the virtual address of the @reuse_page page.
- * @vmemmap_pages:	the list head of the vmemmap pages that can be freed.
+ * @vmemmap_pages:	the list head of the vmemmap pages that can be freed
+ *			(free path) or that will be remapped (alloc path).
  */
 struct vmemmap_remap_walk {
 	void (*remap_pte)(pte_t *pte, unsigned long addr,
@@ -237,6 +238,78 @@ void vmemmap_remap_free(unsigned long start, unsigned long end,
 	free_vmemmap_page_list(&vmemmap_pages);
 }
 
+static void vmemmap_restore_pte(pte_t *pte, unsigned long addr,
+				struct vmemmap_remap_walk *walk)
+{
+	pgprot_t pgprot = PAGE_KERNEL;
+	struct page *page;
+	void *to;
+
+	BUG_ON(pte_page(*pte) != walk->reuse_page);
+
+	page = list_first_entry(walk->vmemmap_pages, struct page, lru);
+	list_del(&page->lru);
+	to = page_to_virt(page);
+	copy_page(to, (void *)walk->reuse_addr);
+
+	set_pte_at(&init_mm, addr, pte, mk_pte(page, pgprot));
+}
+
+static int alloc_vmemmap_page_list(unsigned long start, unsigned long end,
+				   gfp_t gfp_mask, struct list_head *list)
+{
+	unsigned long nr_pages = (end - start) >> PAGE_SHIFT;
+	int nid = page_to_nid((struct page *)start);
+	struct page *page, *next;
+
+	while (nr_pages--) {
+		page = alloc_pages_node(nid, gfp_mask, 0);
+		if (!page)
+			goto out;
+		list_add_tail(&page->lru, list);
+	}
+
+	return 0;
+out:
+	list_for_each_entry_safe(page, next, list, lru)
+		__free_pages(page, 0);
+	return -ENOMEM;
+}
+
+/**
+ * vmemmap_remap_alloc - remap the vmemmap virtual address range [@start, @end)
+ *			 to the pages which are taken from the @vmemmap_pages
+ *			 list.
+ * @start:	start address of the vmemmap virtual address range that we want
+ *		to remap.
+ * @end:	end address of the vmemmap virtual address range that we want to
+ *		remap.
+ * @reuse:	reuse address.
+ * @gfp_mask:	GFP flags used for allocating vmemmap pages.
+ */
+int vmemmap_remap_alloc(unsigned long start, unsigned long end,
+			unsigned long reuse, gfp_t gfp_mask)
+{
+	LIST_HEAD(vmemmap_pages);
+	struct vmemmap_remap_walk walk = {
+		.remap_pte	= vmemmap_restore_pte,
+		.reuse_addr	= reuse,
+		.vmemmap_pages	= &vmemmap_pages,
+	};
+
+	/* See the comment in the vmemmap_remap_free(). */
+	BUG_ON(start - reuse != PAGE_SIZE);
+
+	might_sleep_if(gfpflags_allow_blocking(gfp_mask));
+
+	if (alloc_vmemmap_page_list(start, end, gfp_mask, &vmemmap_pages))
+		return -ENOMEM;
+
+	vmemmap_remap_range(reuse, end, &walk);
+
+	return 0;
+}
+
 /*
  * Allocate a block of memory to be used to back the virtual memory map
  * or to back the page tables that are used to create the mapping.
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v18 5/9] mm: hugetlb: set the PageHWPoison to the raw error page
  2021-03-08 10:27 [PATCH v18 0/9] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (3 preceding siblings ...)
  2021-03-08 10:28 ` [PATCH v18 4/9] mm: hugetlb: alloc " Muchun Song
@ 2021-03-08 10:28 ` Muchun Song
  2021-03-10 15:27   ` Michal Hocko
  2021-03-08 10:28 ` [PATCH v18 6/9] mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap Muchun Song
                   ` (3 subsequent siblings)
  8 siblings, 1 reply; 52+ messages in thread
From: Muchun Song @ 2021-03-08 10:28 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david, naoya.horiguchi, joao.m.martins
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song, Chen Huang, Bodeddula Balasubramaniam

Because we reuse the first tail vmemmap page frame and remap it
read-only, we cannot set the PageHWPoison flag on some tail pages.
So we use head[4].private (there are at least 128 struct page
structures associated with the optimized HugeTLB page, so using
head[4].private is safe) to record the real error page index and
set PageHWPoison on the raw error page later.
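
A rough illustration of the bookkeeping, condensed from the diff below
(the magic index 4 is given the symbolic name SUBPAGE_INDEX_HWPOISON
later in this series):

	/* memory-failure path: remember which subpage is the raw error page */
	set_page_private(head + 4, page - head);

	/* freeing path: move the flag from the head page to the raw error page */
	page = head + page_private(head + 4);
	if (page != head) {
		SetPageHWPoison(page);
		ClearPageHWPoison(head);
	}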

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Acked-by: David Rientjes <rientjes@google.com>
Tested-by: Chen Huang <chenhuang5@huawei.com>
Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>
---
 mm/hugetlb.c | 80 ++++++++++++++++++++++++++++++++++++++++++++++++++++++------
 1 file changed, 72 insertions(+), 8 deletions(-)

diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 377e0c1b283f..c0c1b7635ca9 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1304,6 +1304,74 @@ static inline void destroy_compound_gigantic_page(struct page *page,
 						unsigned int order) { }
 #endif
 
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
+{
+	struct page *page;
+
+	if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
+		return;
+
+	page = head + page_private(head + 4);
+
+	/*
+	 * Move PageHWPoison flag from head page to the raw error page,
+	 * which makes any subpages other than the error page reusable.
+	 */
+	if (page != head) {
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+
+static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
+					struct page *page)
+{
+	if (!PageHWPoison(head))
+		return;
+
+	if (free_vmemmap_pages_per_hpage(h)) {
+		set_page_private(head + 4, page - head);
+	} else if (page != head) {
+		/*
+		 * Move PageHWPoison flag from head page to the raw error page,
+		 * which makes any subpages other than the error page reusable.
+		 */
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+
+static inline void hwpoison_subpage_clear(struct hstate *h, struct page *head)
+{
+	if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
+		return;
+
+	set_page_private(head + 4, 0);
+}
+#else
+static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
+{
+}
+
+static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
+					struct page *page)
+{
+	if (PageHWPoison(head) && page != head) {
+		/*
+		 * Move PageHWPoison flag from head page to the raw error page,
+		 * which makes any subpages other than the error page reusable.
+		 */
+		SetPageHWPoison(page);
+		ClearPageHWPoison(head);
+	}
+}
+
+static inline void hwpoison_subpage_clear(struct hstate *h, struct page *head)
+{
+}
+#endif
+
 static int update_and_free_page(struct hstate *h, struct page *page)
 	__releases(&hugetlb_lock) __acquires(&hugetlb_lock)
 {
@@ -1357,6 +1425,8 @@ static int update_and_free_page(struct hstate *h, struct page *page)
 		return -ENOMEM;
 	}
 
+	hwpoison_subpage_deliver(h, page);
+
 	for (i = 0; i < pages_per_huge_page(h);
 	     i++, subpage = mem_map_next(subpage, page, i)) {
 		subpage->flags &= ~(1 << PG_locked | 1 << PG_error |
@@ -1801,14 +1871,7 @@ int dissolve_free_huge_page(struct page *page)
 			goto retry;
 		}
 
-		/*
-		 * Move PageHWPoison flag from head page to the raw error page,
-		 * which makes any subpages rather than the error page reusable.
-		 */
-		if (PageHWPoison(head) && page != head) {
-			SetPageHWPoison(page);
-			ClearPageHWPoison(head);
-		}
+		hwpoison_subpage_set(h, head, page);
 		list_del(&head->lru);
 		h->free_huge_pages--;
 		h->free_huge_pages_node[nid]--;
@@ -1818,6 +1881,7 @@ int dissolve_free_huge_page(struct page *page)
 			h->surplus_huge_pages--;
 			h->surplus_huge_pages_node[nid]--;
 			h->max_huge_pages++;
+			hwpoison_subpage_clear(h, head);
 		}
 	}
 out:
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v18 6/9] mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap
  2021-03-08 10:27 [PATCH v18 0/9] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (4 preceding siblings ...)
  2021-03-08 10:28 ` [PATCH v18 5/9] mm: hugetlb: set the PageHWPoison to the raw error page Muchun Song
@ 2021-03-08 10:28 ` Muchun Song
  2021-03-10 15:37   ` Michal Hocko
  2021-03-08 10:28 ` [PATCH v18 7/9] mm: hugetlb: introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 52+ messages in thread
From: Muchun Song @ 2021-03-08 10:28 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david, naoya.horiguchi, joao.m.martins
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song, Miaohe Lin, Chen Huang, Bodeddula Balasubramaniam

Add a kernel parameter hugetlb_free_vmemmap to enable the feature of
freeing unused vmemmap pages associated with each hugetlb page on boot.

We disable PMD mapping of vmemmap pages for the x86-64 arch when this
feature is enabled, because vmemmap_remap_free() depends on vmemmap
being base page mapped.
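
For example, a hypothetical boot command line that enables the feature
together with a preallocated 2 MB pool could look like:

	hugetlb_free_vmemmap=on hugepagesz=2M hugepages=1024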

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Barry Song <song.bao.hua@hisilicon.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Tested-by: Chen Huang <chenhuang5@huawei.com>
Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>
---
 Documentation/admin-guide/kernel-parameters.txt | 14 ++++++++++++++
 Documentation/admin-guide/mm/hugetlbpage.rst    |  3 +++
 arch/x86/mm/init_64.c                           |  8 ++++++--
 include/linux/hugetlb.h                         | 19 +++++++++++++++++++
 mm/hugetlb_vmemmap.c                            | 24 ++++++++++++++++++++++++
 5 files changed, 66 insertions(+), 2 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 04545725f187..de91d54573c4 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1557,6 +1557,20 @@
 			Documentation/admin-guide/mm/hugetlbpage.rst.
 			Format: size[KMG]
 
+	hugetlb_free_vmemmap=
+			[KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
+			this controls freeing unused vmemmap pages associated
+			with each HugeTLB page. When this option is enabled,
+			we disable PMD/huge page mapping of vmemmap pages, which
+			increases page table pages. So if a user/sysadmin only
+			uses a small number of HugeTLB pages (as a percentage
+			of system memory), they could end up using more memory
+			with hugetlb_free_vmemmap on as opposed to off.
+			Format: { on | off (default) }
+
+			on:  enable the feature
+			off: disable the feature
+
 	hung_task_panic=
 			[KNL] Should the hung task detector generate panics.
 			Format: 0 | 1
diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index 6988895d09a8..8abaeb144e44 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -153,6 +153,9 @@ default_hugepagesz
 
 	will all result in 256 2M huge pages being allocated.  Valid default
 	huge page size is architecture dependent.
+hugetlb_free_vmemmap
+	When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
+	unused vmemmap pages associated with each HugeTLB page.
 
 When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
 indicates the current number of pre-allocated huge pages of the default size.
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0435bee2e172..39f88c5faadc 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -34,6 +34,7 @@
 #include <linux/gfp.h>
 #include <linux/kcore.h>
 #include <linux/bootmem_info.h>
+#include <linux/hugetlb.h>
 
 #include <asm/processor.h>
 #include <asm/bios_ebda.h>
@@ -1557,7 +1558,8 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 {
 	int err;
 
-	if (end - start < PAGES_PER_SECTION * sizeof(struct page))
+	if ((is_hugetlb_free_vmemmap_enabled()  && !altmap) ||
+	    end - start < PAGES_PER_SECTION * sizeof(struct page))
 		err = vmemmap_populate_basepages(start, end, node, NULL);
 	else if (boot_cpu_has(X86_FEATURE_PSE))
 		err = vmemmap_populate_hugepages(start, end, node, altmap);
@@ -1585,6 +1587,8 @@ void register_page_bootmem_memmap(unsigned long section_nr,
 	pmd_t *pmd;
 	unsigned int nr_pmd_pages;
 	struct page *page;
+	bool base_mapping = !boot_cpu_has(X86_FEATURE_PSE) ||
+			    is_hugetlb_free_vmemmap_enabled();
 
 	for (; addr < end; addr = next) {
 		pte_t *pte = NULL;
@@ -1610,7 +1614,7 @@ void register_page_bootmem_memmap(unsigned long section_nr,
 		}
 		get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);
 
-		if (!boot_cpu_has(X86_FEATURE_PSE)) {
+		if (base_mapping) {
 			next = (addr + PAGE_SIZE) & PAGE_MASK;
 			pmd = pmd_offset(pud, addr);
 			if (pmd_none(*pmd))
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index ce6533584eb7..78934e9aeab6 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -852,6 +852,20 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
 }
 #endif
 
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+extern bool hugetlb_free_vmemmap_enabled;
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return hugetlb_free_vmemmap_enabled;
+}
+#else
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return false;
+}
+#endif
+
 #else	/* CONFIG_HUGETLB_PAGE */
 struct hstate {};
 
@@ -1005,6 +1019,11 @@ static inline void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr
 					pte_t *ptep, pte_t pte, unsigned long sz)
 {
 }
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return false;
+}
 #endif	/* CONFIG_HUGETLB_PAGE */
 
 static inline spinlock_t *huge_pte_lock(struct hstate *h,
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index f7ab3d99250a..7807ed6678e0 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -169,6 +169,8 @@
  * (last) level. So this type of HugeTLB page can be optimized only when its
  * size of the struct page structs is greater than 2 pages.
  */
+#define pr_fmt(fmt)	"HugeTLB: " fmt
+
 #include "hugetlb_vmemmap.h"
 
 /*
@@ -181,6 +183,28 @@
 #define RESERVE_VMEMMAP_NR		2U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
+bool hugetlb_free_vmemmap_enabled;
+
+static int __init early_hugetlb_free_vmemmap_param(char *buf)
+{
+	/* We cannot optimize if a "struct page" crosses page boundaries. */
+	if ((!is_power_of_2(sizeof(struct page)))) {
+		pr_warn("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n");
+		return 0;
+	}
+
+	if (!buf)
+		return -EINVAL;
+
+	if (!strcmp(buf, "on"))
+		hugetlb_free_vmemmap_enabled = true;
+	else if (strcmp(buf, "off"))
+		return -EINVAL;
+
+	return 0;
+}
+early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
+
 static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
 {
 	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v18 7/9] mm: hugetlb: introduce nr_free_vmemmap_pages in the struct hstate
  2021-03-08 10:27 [PATCH v18 0/9] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (5 preceding siblings ...)
  2021-03-08 10:28 ` [PATCH v18 6/9] mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap Muchun Song
@ 2021-03-08 10:28 ` Muchun Song
  2021-03-08 10:28 ` [PATCH v18 8/9] mm: hugetlb: gather discrete indexes of tail page Muchun Song
  2021-03-08 10:28 ` [PATCH v18 9/9] mm: hugetlb: optimize the code with the help of the compiler Muchun Song
  8 siblings, 0 replies; 52+ messages in thread
From: Muchun Song @ 2021-03-08 10:28 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david, naoya.horiguchi, joao.m.martins
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song, Miaohe Lin, Chen Huang, Bodeddula Balasubramaniam

All the infrastructure is ready, so we introduce the nr_free_vmemmap_pages
field in struct hstate to indicate how many vmemmap pages associated with
a HugeTLB page can be freed to the buddy allocator, and initialize it in
hugetlb_vmemmap_init(). This patch is the actual enablement of the
feature.
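
As a back-of-the-envelope check (assuming a 4 KB base page and
sizeof(struct page) == 64, the common x86-64 case), the value computed
by hugetlb_vmemmap_init() works out to:

	2 MB HugeTLB page:  512 struct pages * 64 B = 8 vmemmap pages
	                    nr_free_vmemmap_pages = 8 - 2 (reserved) = 6, i.e. 24 KB freed
	1 GB HugeTLB page:  262144 struct pages * 64 B = 4096 vmemmap pages
	                    nr_free_vmemmap_pages = 4096 - 2 = 4094, i.e. ~16 MB freed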

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Tested-by: Chen Huang <chenhuang5@huawei.com>
Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>
---
 include/linux/hugetlb.h |  3 +++
 mm/hugetlb.c            |  1 +
 mm/hugetlb_vmemmap.c    | 25 +++++++++++++++++++++++++
 mm/hugetlb_vmemmap.h    | 10 ++++++----
 4 files changed, 35 insertions(+), 4 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 78934e9aeab6..a4d80f7263fc 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -560,6 +560,9 @@ struct hstate {
 	unsigned int nr_huge_pages_node[MAX_NUMNODES];
 	unsigned int free_huge_pages_node[MAX_NUMNODES];
 	unsigned int surplus_huge_pages_node[MAX_NUMNODES];
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	unsigned int nr_free_vmemmap_pages;
+#endif
 #ifdef CONFIG_CGROUP_HUGETLB
 	/* cgroup control files */
 	struct cftype cgroup_files_dfl[7];
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c0c1b7635ca9..c221b937be17 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -3312,6 +3312,7 @@ void __init hugetlb_add_hstate(unsigned int order)
 	h->next_nid_to_free = first_memory_node;
 	snprintf(h->name, HSTATE_NAME_LEN, "hugepages-%lukB",
 					huge_page_size(h)/1024);
+	hugetlb_vmemmap_init(h);
 
 	parsed_hstate = h;
 }
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 7807ed6678e0..b65f0d5189bd 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -251,3 +251,28 @@ void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 	 */
 	vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse);
 }
+
+void __init hugetlb_vmemmap_init(struct hstate *h)
+{
+	unsigned int nr_pages = pages_per_huge_page(h);
+	unsigned int vmemmap_pages;
+
+	if (!hugetlb_free_vmemmap_enabled)
+		return;
+
+	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
+	/*
+	 * The head page and the first tail page are not to be freed to the
+	 * buddy allocator; the other pages are remapped to the first tail
+	 * page, so they can be freed.
+	 *
+	 * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? It is true
+	 * on some architectures (e.g. aarch64). See Documentation/arm64/
+	 * hugetlbpage.rst for more details.
+	 */
+	if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR))
+		h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
+
+	pr_info("can free %d vmemmap pages for %s\n", h->nr_free_vmemmap_pages,
+		h->name);
+}
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index a37771b0b82a..cb2bef8f9e73 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -13,17 +13,15 @@
 #ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
 int alloc_huge_page_vmemmap(struct hstate *h, struct page *head);
 void free_huge_page_vmemmap(struct hstate *h, struct page *head);
+void hugetlb_vmemmap_init(struct hstate *h);
 
 /*
  * How many vmemmap pages associated with a HugeTLB page that can be freed
  * to the buddy allocator.
- *
- * Todo: Returns zero for now, which means the feature is disabled. We will
- * enable it once all the infrastructure is there.
  */
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
-	return 0;
+	return h->nr_free_vmemmap_pages;
 }
 #else
 static inline int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
@@ -35,6 +33,10 @@ static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
 {
 }
 
+static inline void hugetlb_vmemmap_init(struct hstate *h)
+{
+}
+
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
 	return 0;
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v18 8/9] mm: hugetlb: gather discrete indexes of tail page
  2021-03-08 10:27 [PATCH v18 0/9] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (6 preceding siblings ...)
  2021-03-08 10:28 ` [PATCH v18 7/9] mm: hugetlb: introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
@ 2021-03-08 10:28 ` Muchun Song
  2021-03-10 15:39   ` Michal Hocko
  2021-03-08 10:28 ` [PATCH v18 9/9] mm: hugetlb: optimize the code with the help of the compiler Muchun Song
  8 siblings, 1 reply; 52+ messages in thread
From: Muchun Song @ 2021-03-08 10:28 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david, naoya.horiguchi, joao.m.martins
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song, Miaohe Lin, Chen Huang, Bodeddula Balasubramaniam

For a HugeTLB page, there is more metadata to save in the struct pages.
But the head struct page cannot meet our needs, so we have to abuse
other tail struct pages to store the metadata. In order to avoid
conflicts caused by subsequent use of more tail struct pages, we can
gather these discrete indexes of tail struct pages. This will make it
easier to add a new tail page index later.

There are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct
page structs that can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is
enabled, so add a BUILD_BUG_ON to catch invalid usage of the tail
struct pages.
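
As a quick sanity check of that bound (assuming a 4 KB base page and
sizeof(struct page) == 64):

	RESERVE_VMEMMAP_SIZE / sizeof(struct page)
		= (2 << PAGE_SHIFT) / 64
		= 8192 / 64
		= 128 usable tail struct pages

which is where the "at least 128 struct page structures" figure quoted
in patch 5 comes from; __NR_USED_SUBPAGE is currently only 5 even with
both CONFIG_CGROUP_HUGETLB and CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
enabled, so there is plenty of headroom.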

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Tested-by: Chen Huang <chenhuang5@huawei.com>
Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>
---
 include/linux/hugetlb.h        | 24 ++++++++++++++++++++++--
 include/linux/hugetlb_cgroup.h | 19 +++++++++++--------
 mm/hugetlb.c                   |  6 +++---
 mm/hugetlb_vmemmap.c           |  8 ++++++++
 4 files changed, 44 insertions(+), 13 deletions(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index a4d80f7263fc..c70421e26189 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -28,6 +28,26 @@ typedef struct { unsigned long pd; } hugepd_t;
 #include <linux/shm.h>
 #include <asm/tlbflush.h>
 
+/*
+ * For a HugeTLB page, there is more metadata to save in the struct pages. But
+ * the head struct page cannot meet our needs, so we have to abuse other tail
+ * struct pages to store the metadata. In order to avoid conflicts caused by
+ * subsequent use of more tail struct pages, we gather these discrete indexes
+ * of tail struct pages here.
+ */
+enum {
+	SUBPAGE_INDEX_SUBPOOL = 1,	/* reuse page->private */
+#ifdef CONFIG_CGROUP_HUGETLB
+	SUBPAGE_INDEX_CGROUP,		/* reuse page->private */
+	SUBPAGE_INDEX_CGROUP_RSVD,	/* reuse page->private */
+	__MAX_CGROUP_SUBPAGE_INDEX = SUBPAGE_INDEX_CGROUP_RSVD,
+#endif
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+	SUBPAGE_INDEX_HWPOISON,		/* reuse page->private */
+#endif
+	__NR_USED_SUBPAGE,
+};
+
 struct hugepage_subpool {
 	spinlock_t lock;
 	long count;
@@ -607,13 +627,13 @@ extern unsigned int default_hstate_idx;
  */
 static inline struct hugepage_subpool *hugetlb_page_subpool(struct page *hpage)
 {
-	return (struct hugepage_subpool *)(hpage+1)->private;
+	return (void *)page_private(hpage + SUBPAGE_INDEX_SUBPOOL);
 }
 
 static inline void hugetlb_set_page_subpool(struct page *hpage,
 					struct hugepage_subpool *subpool)
 {
-	set_page_private(hpage+1, (unsigned long)subpool);
+	set_page_private(hpage + SUBPAGE_INDEX_SUBPOOL, (unsigned long)subpool);
 }
 
 static inline struct hstate *hstate_file(struct file *f)
diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index 2ad6e92f124a..54ec689e3c9c 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -21,15 +21,16 @@ struct hugetlb_cgroup;
 struct resv_map;
 struct file_region;
 
+#ifdef CONFIG_CGROUP_HUGETLB
 /*
  * Minimum page order trackable by hugetlb cgroup.
  * At least 4 pages are necessary for all the tracking information.
- * The second tail page (hpage[2]) is the fault usage cgroup.
- * The third tail page (hpage[3]) is the reservation usage cgroup.
+ * The second tail page (hpage[SUBPAGE_INDEX_CGROUP]) is the fault
+ * usage cgroup. The third tail page (hpage[SUBPAGE_INDEX_CGROUP_RSVD])
+ * is the reservation usage cgroup.
  */
-#define HUGETLB_CGROUP_MIN_ORDER	2
+#define HUGETLB_CGROUP_MIN_ORDER order_base_2(__MAX_CGROUP_SUBPAGE_INDEX + 1)
 
-#ifdef CONFIG_CGROUP_HUGETLB
 enum hugetlb_memory_event {
 	HUGETLB_MAX,
 	HUGETLB_NR_MEMORY_EVENTS,
@@ -66,9 +67,9 @@ __hugetlb_cgroup_from_page(struct page *page, bool rsvd)
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return NULL;
 	if (rsvd)
-		return (struct hugetlb_cgroup *)page[3].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP_RSVD);
 	else
-		return (struct hugetlb_cgroup *)page[2].private;
+		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP);
 }
 
 static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
@@ -90,9 +91,11 @@ static inline int __set_hugetlb_cgroup(struct page *page,
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return -1;
 	if (rsvd)
-		page[3].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP_RSVD,
+				 (unsigned long)h_cg);
 	else
-		page[2].private = (unsigned long)h_cg;
+		set_page_private(page + SUBPAGE_INDEX_CGROUP,
+				 (unsigned long)h_cg);
 	return 0;
 }
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index c221b937be17..4956880a7861 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -1312,7 +1312,7 @@ static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
 	if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
 		return;
 
-	page = head + page_private(head + 4);
+	page = head + page_private(head + SUBPAGE_INDEX_HWPOISON);
 
 	/*
 	 * Move PageHWPoison flag from head page to the raw error page,
@@ -1331,7 +1331,7 @@ static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
 		return;
 
 	if (free_vmemmap_pages_per_hpage(h)) {
-		set_page_private(head + 4, page - head);
+		set_page_private(head + SUBPAGE_INDEX_HWPOISON, page - head);
 	} else if (page != head) {
 		/*
 		 * Move PageHWPoison flag from head page to the raw error page,
@@ -1347,7 +1347,7 @@ static inline void hwpoison_subpage_clear(struct hstate *h, struct page *head)
 	if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
 		return;
 
-	set_page_private(head + 4, 0);
+	set_page_private(head + SUBPAGE_INDEX_HWPOISON, 0);
 }
 #else
 static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index b65f0d5189bd..33e42678abe3 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -257,6 +257,14 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	unsigned int nr_pages = pages_per_huge_page(h);
 	unsigned int vmemmap_pages;
 
+	/*
+	 * There are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct
+	 * page structs that can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is
+	 * enabled, so add a BUILD_BUG_ON to catch invalid usage of tail pages.
+	 */
+	BUILD_BUG_ON(__NR_USED_SUBPAGE >=
+		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
+
 	if (!hugetlb_free_vmemmap_enabled)
 		return;
 
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* [PATCH v18 9/9] mm: hugetlb: optimize the code with the help of the compiler
  2021-03-08 10:27 [PATCH v18 0/9] Free some vmemmap pages of HugeTLB page Muchun Song
                   ` (7 preceding siblings ...)
  2021-03-08 10:28 ` [PATCH v18 8/9] mm: hugetlb: gather discrete indexes of tail page Muchun Song
@ 2021-03-08 10:28 ` Muchun Song
  2021-03-10 15:41   ` Michal Hocko
  8 siblings, 1 reply; 52+ messages in thread
From: Muchun Song @ 2021-03-08 10:28 UTC (permalink / raw)
  To: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, mhocko, song.bao.hua,
	david, naoya.horiguchi, joao.m.martins
  Cc: duanxiongchun, linux-doc, linux-kernel, linux-mm, linux-fsdevel,
	Muchun Song, Miaohe Lin, Chen Huang, Bodeddula Balasubramaniam

When struct page structures cross page boundaries, i.e. when
sizeof(struct page) is not a power of 2, we cannot make use of this
feature. Let free_vmemmap_pages_per_hpage() return zero in that case,
so that most of the functions can be optimized away by the compiler.
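
The underlying arithmetic, as a quick check (assuming a 4 KB base page;
the 56-byte size is purely hypothetical):

	4096 % 64 == 0   /* power-of-2 struct page: entries never straddle a page boundary */
	4096 % 56 == 8   /* non-power-of-2 size: some entries would straddle a boundary */

so free_vmemmap_pages_per_hpage() reports freeable vmemmap pages only
when is_power_of_2(sizeof(struct page)) holds, and the compiler can
fold the whole code path away for other configurations.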

Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Tested-by: Chen Huang <chenhuang5@huawei.com>
Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>
---
 include/linux/hugetlb.h | 3 ++-
 mm/hugetlb_vmemmap.c    | 7 +++++++
 mm/hugetlb_vmemmap.h    | 6 ++++++
 3 files changed, 15 insertions(+), 1 deletion(-)

diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index c70421e26189..333dd0479fc2 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -880,7 +880,8 @@ extern bool hugetlb_free_vmemmap_enabled;
 
 static inline bool is_hugetlb_free_vmemmap_enabled(void)
 {
-	return hugetlb_free_vmemmap_enabled;
+	return hugetlb_free_vmemmap_enabled &&
+	       is_power_of_2(sizeof(struct page));
 }
 #else
 static inline bool is_hugetlb_free_vmemmap_enabled(void)
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index 33e42678abe3..1ba1ef45c48c 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -265,6 +265,13 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
 	BUILD_BUG_ON(__NR_USED_SUBPAGE >=
 		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
 
+	/*
+	 * The compiler can help us optimize this function away when the
+	 * size of the struct page is not a power of 2.
+	 */
+	if (!is_power_of_2(sizeof(struct page)))
+		return;
+
 	if (!hugetlb_free_vmemmap_enabled)
 		return;
 
diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
index cb2bef8f9e73..29aaaf7b741e 100644
--- a/mm/hugetlb_vmemmap.h
+++ b/mm/hugetlb_vmemmap.h
@@ -21,6 +21,12 @@ void hugetlb_vmemmap_init(struct hstate *h);
  */
 static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
 {
+	/*
+	 * This check aims to let the compiler help us optimize the code as
+	 * much as possible.
+	 */
+	if (!is_power_of_2(sizeof(struct page)))
+		return 0;
 	return h->nr_free_vmemmap_pages;
 }
 #else
-- 
2.11.0


^ permalink raw reply related	[flat|nested] 52+ messages in thread

* Re: [PATCH v18 1/9] mm: memory_hotplug: factor out bootmem core functions to bootmem_info.c
  2021-03-08 10:27 ` [PATCH v18 1/9] mm: memory_hotplug: factor out bootmem core functions to bootmem_info.c Muchun Song
@ 2021-03-10 14:14   ` Michal Hocko
  2021-03-11  2:58     ` [External] " Muchun Song
  0 siblings, 1 reply; 52+ messages in thread
From: Michal Hocko @ 2021-03-10 14:14 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, song.bao.hua, david,
	naoya.horiguchi, joao.m.martins, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel, Miaohe Lin, Chen Huang,
	Bodeddula Balasubramaniam

[I am sorry for a late review]

On Mon 08-03-21 18:27:59, Muchun Song wrote:
> Move bootmem info registration common API to individual bootmem_info.c.
> And we will use {get,put}_page_bootmem() to initialize the page for the
> vmemmap pages or free the vmemmap pages to buddy in the later patch.
> So move them out of CONFIG_MEMORY_HOTPLUG_SPARSE. This is just code
> movement without any functional change.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Reviewed-by: David Hildenbrand <david@redhat.com>
> Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
> Tested-by: Chen Huang <chenhuang5@huawei.com>
> Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>

Separation from memory_hotplug.c is definitely a right step. I am
wondering about the config dependency though
[...]
> diff --git a/mm/Makefile b/mm/Makefile
> index 72227b24a616..daabf86d7da8 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -83,6 +83,7 @@ obj-$(CONFIG_SLUB) += slub.o
>  obj-$(CONFIG_KASAN)	+= kasan/
>  obj-$(CONFIG_KFENCE) += kfence/
>  obj-$(CONFIG_FAILSLAB) += failslab.o
> +obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o

I would have expected this would depend on CONFIG_SPARSE.
BOOTMEM_INFO_NODE is really an odd thing to depend on here. There is
some functionality which requires the node info but that can be gated
specifically. Or what is the thinking behind?

This doesn't matter right now because it seems that the *_page_bootmem
is only used by x86 outside of the memory hotplug.

Other than that looks good to me.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v18 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-03-08 10:28 ` [PATCH v18 4/9] mm: hugetlb: alloc " Muchun Song
@ 2021-03-10 14:21   ` Oscar Salvador
  2021-03-11  4:13     ` [External] " Muchun Song
  2021-03-10 15:19   ` Michal Hocko
  1 sibling, 1 reply; 52+ messages in thread
From: Oscar Salvador @ 2021-03-10 14:21 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, mhocko, song.bao.hua, david,
	naoya.horiguchi, joao.m.martins, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel, Chen Huang,
	Bodeddula Balasubramaniam

On Mon, Mar 08, 2021 at 06:28:02PM +0800, Muchun Song wrote:
> When we free a HugeTLB page to the buddy allocator, we need to allocate
> the vmemmap pages associated with it. However, we may not be able to
> allocate the vmemmap pages when the system is under memory pressure. In
> this case, we just refuse to free the HugeTLB page. This changes behavior
> in some corner cases as listed below:
> 
>  1) Failing to free a huge page triggered by the user (decrease nr_pages).
> 
>     User needs to try again later.
> 
>  2) Failing to free a surplus huge page when freed by the application.
> 
>     Try again later when freeing a huge page next time.
> 
>  3) Failing to dissolve a free huge page on ZONE_MOVABLE via
>     offline_pages().
> 
>     This can happen when we have plenty of ZONE_MOVABLE memory, but
>     not enough kernel memory to allocate vmemmmap pages.  We may even
>     be able to migrate huge page contents, but will not be able to
>     dissolve the source huge page.  This will prevent an offline
>     operation and is unfortunate as memory offlining is expected to
>     succeed on movable zones.  Users that depend on memory hotplug
>     to succeed for movable zones should carefully consider whether the
>     memory savings gained from this feature are worth the risk of
>     possibly not being able to offline memory in certain situations.

This is nice to have here, but a normal user won't dig into the kernel to
figure this out, so my question is: do we have this documented somewhere under
Documentation/?
If not, could we document it there? It is nice to warn about these things where
sysadmins can find them.

>  4) Failing to dissolve a huge page on CMA/ZONE_MOVABLE via
>     alloc_contig_range() - once we have that handling in place. Mainly
>     affects CMA and virtio-mem.
> 
>     Similar to 3). virito-mem will handle migration errors gracefully.
>     CMA might be able to fallback on other free areas within the CMA
>     region.
> 
> Vmemmap pages are allocated from the page freeing context. In order for
> those allocations to be not disruptive (e.g. trigger oom killer)
> __GFP_NORETRY is used. hugetlb_lock is dropped for the allocation
> because a non sleeping allocation would be too fragile and it could fail
> too easily under memory pressure. GFP_ATOMIC or other modes to access
> memory reserves is not used because we want to prevent consuming
> reserves under heavy hugetlb freeing.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> Tested-by: Chen Huang <chenhuang5@huawei.com>
> Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>

Sorry for jumping in late.
It looks good to me:

Reviewed-by: Oscar Salvador <osalvador@suse.de>

Minor request above and below:

> ---
>  Documentation/admin-guide/mm/hugetlbpage.rst |  8 +++
>  include/linux/mm.h                           |  2 +
>  mm/hugetlb.c                                 | 92 +++++++++++++++++++++-------
>  mm/hugetlb_vmemmap.c                         | 32 ++++++----
>  mm/hugetlb_vmemmap.h                         | 23 +++++++
>  mm/sparse-vmemmap.c                          | 75 ++++++++++++++++++++++-
>  6 files changed, 197 insertions(+), 35 deletions(-)

[...]



Could we place a brief comment about what we expect to return here?

> -static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
> +int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
>  {
> -	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
> +	unsigned long vmemmap_addr = (unsigned long)head;
> +	unsigned long vmemmap_end, vmemmap_reuse;
> +
> +	if (!free_vmemmap_pages_per_hpage(h))
> +		return 0;
> +
> +	vmemmap_addr += RESERVE_VMEMMAP_SIZE;
> +	vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h);
> +	vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
> +	/*
> +	 * The pages which the vmemmap virtual address range [@vmemmap_addr,
> +	 * @vmemmap_end) are mapped to are freed to the buddy allocator, and
> +	 * the range is mapped to the page which @vmemmap_reuse is mapped to.
> +	 * When a HugeTLB page is freed to the buddy allocator, previously
> +	 * discarded vmemmap pages must be allocated and remapping.
> +	 */
> +	return vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse,
> +				   GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE);
>  }

-- 
Oscar Salvador
SUSE L3

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v18 3/9] mm: hugetlb: free the vmemmap pages associated with each HugeTLB page
  2021-03-08 10:28 ` [PATCH v18 3/9] mm: hugetlb: free the vmemmap pages associated with each HugeTLB page Muchun Song
@ 2021-03-10 14:32   ` Michal Hocko
  2021-03-11  3:35     ` [External] " Muchun Song
  0 siblings, 1 reply; 52+ messages in thread
From: Michal Hocko @ 2021-03-10 14:32 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, song.bao.hua, david,
	naoya.horiguchi, joao.m.martins, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel, Chen Huang,
	Bodeddula Balasubramaniam

On Mon 08-03-21 18:28:01, Muchun Song wrote:
> Every HugeTLB has more than one struct page structure. We __know__ that
> we only use the first 4(HUGETLB_CGROUP_MIN_ORDER) struct page structures
> to store metadata associated with each HugeTLB.

I think it would be great to make this explicit somewhere around the
code which uses those struct pages.

> There are a lot of struct page structures associated with each HugeTLB
> page. For tail pages, the value of compound_head is the same. So we can
> reuse first page of tail page structures. We map the virtual addresses
> of the remaining pages of tail page structures to the first tail page
> struct, and then free these page frames. Therefore, we need to reserve
> two pages as vmemmap areas.
> 
> When we allocate a HugeTLB page from the buddy, we can free some vmemmap
> pages associated with each HugeTLB page. It is more appropriate to do it
> in the prep_new_huge_page().
> 
> The free_vmemmap_pages_per_hpage(), which indicates how many vmemmap
> pages associated with a HugeTLB page can be freed, returns zero for
> now, which means the feature is disabled. We will enable it once all
> the infrastructure is there.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Tested-by: Chen Huang <chenhuang5@huawei.com>
> Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>

I do not see any issues here. I just want to point out that the number
of *BUG_ONs is quite high for my taste. Most of them seem to be added just
in case something goes wrong or to guard conditions that should never
happen. These are usually bad reasons to add them IMHO. I would just drop
those unless there is a very good reason to keep them around.
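
Just to illustrate the kind of thing I mean (a sketch only, not a request
for this exact change): if a condition like the one in vmemmap_pte_range()
really needs a guard at all, a recoverable warning would be preferable to
taking the whole machine down:

	/* Illustrative only: degrade the hard assertion to a recoverable warning. */
	if (WARN_ON_ONCE(pte_none(*pte)))
		return;
	walk->remap_pte(pte, addr, walk);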

I really appreciate how you added the high level design documentation to
the source code directly. Talking about struct pages backing struct
pages (vmemmap) is usually a good recipe for a headache but those diagrams
make it easy to follow the reasoning.

Anyway
Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  include/linux/bootmem_info.h |  27 +++++-
>  include/linux/mm.h           |   3 +
>  mm/Makefile                  |   1 +
>  mm/hugetlb.c                 |   3 +
>  mm/hugetlb_vmemmap.c         | 219 +++++++++++++++++++++++++++++++++++++++++++
>  mm/hugetlb_vmemmap.h         |  20 ++++
>  mm/sparse-vmemmap.c          | 207 ++++++++++++++++++++++++++++++++++++++++
>  7 files changed, 479 insertions(+), 1 deletion(-)
>  create mode 100644 mm/hugetlb_vmemmap.c
>  create mode 100644 mm/hugetlb_vmemmap.h
> 
> diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
> index 4ed6dee1adc9..ec03a624dfa2 100644
> --- a/include/linux/bootmem_info.h
> +++ b/include/linux/bootmem_info.h
> @@ -2,7 +2,7 @@
>  #ifndef __LINUX_BOOTMEM_INFO_H
>  #define __LINUX_BOOTMEM_INFO_H
>  
> -#include <linux/mmzone.h>
> +#include <linux/mm.h>
>  
>  /*
>   * Types for free bootmem stored in page->lru.next. These have to be in
> @@ -22,6 +22,27 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
>  void get_page_bootmem(unsigned long info, struct page *page,
>  		      unsigned long type);
>  void put_page_bootmem(struct page *page);
> +
> +/*
> + * Any memory allocated via the memblock allocator and not via the
> + * buddy will be marked reserved already in the memmap. For those
> + * pages, we can call this function to free them to the buddy allocator.
> + */
> +static inline void free_bootmem_page(struct page *page)
> +{
> +	unsigned long magic = (unsigned long)page->freelist;
> +
> +	/*
> +	 * The reserve_bootmem_region sets the reserved flag on bootmem
> +	 * pages.
> +	 */
> +	VM_BUG_ON_PAGE(page_ref_count(page) != 2, page);
> +
> +	if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
> +		put_page_bootmem(page);
> +	else
> +		VM_BUG_ON_PAGE(1, page);
> +}
>  #else
>  static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
>  {
> @@ -35,6 +56,10 @@ static inline void get_page_bootmem(unsigned long info, struct page *page,
>  				    unsigned long type)
>  {
>  }
> +
> +static inline void free_bootmem_page(struct page *page)
> +{
> +}
>  #endif
>  
>  #endif /* __LINUX_BOOTMEM_INFO_H */
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index 77e64e3eac80..4ddfc31f21c6 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -2971,6 +2971,9 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
>  }
>  #endif
>  
> +void vmemmap_remap_free(unsigned long start, unsigned long end,
> +			unsigned long reuse);
> +
>  void *sparse_buffer_alloc(unsigned long size);
>  struct page * __populate_section_memmap(unsigned long pfn,
>  		unsigned long nr_pages, int nid, struct vmem_altmap *altmap);
> diff --git a/mm/Makefile b/mm/Makefile
> index daabf86d7da8..3d7d57e3b55b 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -71,6 +71,7 @@ obj-$(CONFIG_FRONTSWAP)	+= frontswap.o
>  obj-$(CONFIG_ZSWAP)	+= zswap.o
>  obj-$(CONFIG_HAS_DMA)	+= dmapool.o
>  obj-$(CONFIG_HUGETLBFS)	+= hugetlb.o
> +obj-$(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)	+= hugetlb_vmemmap.o
>  obj-$(CONFIG_NUMA) 	+= mempolicy.o
>  obj-$(CONFIG_SPARSEMEM)	+= sparse.o
>  obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index c232cb67dda2..43fed6785322 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -42,6 +42,7 @@
>  #include <linux/userfaultfd_k.h>
>  #include <linux/page_owner.h>
>  #include "internal.h"
> +#include "hugetlb_vmemmap.h"
>  
>  int hugetlb_max_hstate __read_mostly;
>  unsigned int default_hstate_idx;
> @@ -1463,6 +1464,8 @@ void free_huge_page(struct page *page)
>  
>  static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
>  {
> +	free_huge_page_vmemmap(h, page);
> +
>  	INIT_LIST_HEAD(&page->lru);
>  	set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
>  	set_hugetlb_cgroup(page, NULL);
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> new file mode 100644
> index 000000000000..0209b736e0b4
> --- /dev/null
> +++ b/mm/hugetlb_vmemmap.c
> @@ -0,0 +1,219 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Free some vmemmap pages of HugeTLB
> + *
> + * Copyright (c) 2020, Bytedance. All rights reserved.
> + *
> + *     Author: Muchun Song <songmuchun@bytedance.com>
> + *
> + * The struct page structures (page structs) are used to describe a physical
> + * page frame. By default, there is a one-to-one mapping from a page frame to
> + * its corresponding page struct.
> + *
> + * HugeTLB pages consist of multiple base page size pages and are supported by
> + * many architectures. See hugetlbpage.rst in the Documentation directory for
> + * more details. On the x86-64 architecture, HugeTLB pages of size 2MB and 1GB
> + * are currently supported. Since the base page size on x86 is 4KB, a 2MB
> + * HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
> + * 4096 base pages. For each base page, there is a corresponding page struct.
> + *
> + * Within the HugeTLB subsystem, only the first 4 page structs are used to
> + * contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
> + * provides this upper limit. The only 'useful' information in the remaining
> + * page structs is the compound_head field, and this field is the same for all
> + * tail pages.
> + *
> + * By removing redundant page structs for HugeTLB pages, memory can be returned
> + * to the buddy allocator for other uses.
> + *
> + * Different architectures support different HugeTLB pages. For example, the
> + * following table lists the HugeTLB page sizes supported by the x86 and
> + * arm64 architectures. Because arm64 supports 4k, 16k, and 64k base pages
> + * as well as contiguous entries, it supports many different HugeTLB page
> + * sizes.
> + *
> + * +--------------+-----------+-----------------------------------------------+
> + * | Architecture | Page Size |                HugeTLB Page Size              |
> + * +--------------+-----------+-----------+-----------+-----------+-----------+
> + * |    x86-64    |    4KB    |    2MB    |    1GB    |           |           |
> + * +--------------+-----------+-----------+-----------+-----------+-----------+
> + * |              |    4KB    |   64KB    |    2MB    |    32MB   |    1GB    |
> + * |              +-----------+-----------+-----------+-----------+-----------+
> + * |    arm64     |   16KB    |    2MB    |   32MB    |     1GB   |           |
> + * |              +-----------+-----------+-----------+-----------+-----------+
> + * |              |   64KB    |    2MB    |  512MB    |    16GB   |           |
> + * +--------------+-----------+-----------+-----------+-----------+-----------+
> + *
> + * When the system boots up, every HugeTLB page is backed by multiple struct
> + * page structs whose total size is (unit: pages):
> + *
> + *    struct_size = HugeTLB_Size / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE
> + *
> + * Where HugeTLB_Size is the size of the HugeTLB page. We know that the size
> + * of the HugeTLB page is always n times PAGE_SIZE. So we can get the following
> + * relationship.
> + *
> + *    HugeTLB_Size = n * PAGE_SIZE
> + *
> + * Then,
> + *
> + *    struct_size = n * PAGE_SIZE / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE
> + *                = n * sizeof(struct page) / PAGE_SIZE
> + *
> + * We can use huge mapping at the pud/pmd level for the HugeTLB page.
> + *
> + * For the HugeTLB page of the pmd level mapping, then
> + *
> + *    struct_size = n * sizeof(struct page) / PAGE_SIZE
> + *                = PAGE_SIZE / sizeof(pte_t) * sizeof(struct page) / PAGE_SIZE
> + *                = sizeof(struct page) / sizeof(pte_t)
> + *                = 64 / 8
> + *                = 8 (pages)
> + *
> + * Where n is the number of pte entries that one page can contain. So the
> + * value of n is (PAGE_SIZE / sizeof(pte_t)).
> + *
> + * This optimization only supports 64-bit systems, so the value of sizeof(pte_t)
> + * is 8. And this optimization is also applicable only when the size of struct
> + * page is a power of two. In most cases, the size of struct page is 64 bytes
> + * (e.g. x86-64 and arm64). So if we use pmd level mapping for a HugeTLB page,
> + * its struct page structs occupy 8 page frames, whose size depends on the
> + * size of the base page.
> + *
> + * For the HugeTLB page of the pud level mapping, then
> + *
> + *    struct_size = PAGE_SIZE / sizeof(pmd_t) * struct_size(pmd)
> + *                = PAGE_SIZE / 8 * 8 (pages)
> + *                = PAGE_SIZE (pages)
> + *
> + * Where the struct_size(pmd) is the size of the struct page structs of a
> + * HugeTLB page of the pmd level mapping.
> + *
> + * E.g.: the struct page structs of a 2MB HugeTLB page on x86_64 occupy 8 page
> + * frames while those of a 1GB HugeTLB page occupy 4096.
> + *
> + * Next, we take the pmd level mapping of the HugeTLB page as an example to
> + * show the internal implementation of this optimization. There are 8 pages
> + * struct page structs associated with a HugeTLB page which is pmd mapped.
> + *
> + * Here is how things look before optimization.
> + *
> + *    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
> + * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
> + * |           |                     |     0     | -------------> |     0     |
> + * |           |                     +-----------+                +-----------+
> + * |           |                     |     1     | -------------> |     1     |
> + * |           |                     +-----------+                +-----------+
> + * |           |                     |     2     | -------------> |     2     |
> + * |           |                     +-----------+                +-----------+
> + * |           |                     |     3     | -------------> |     3     |
> + * |           |                     +-----------+                +-----------+
> + * |           |                     |     4     | -------------> |     4     |
> + * |    PMD    |                     +-----------+                +-----------+
> + * |   level   |                     |     5     | -------------> |     5     |
> + * |  mapping  |                     +-----------+                +-----------+
> + * |           |                     |     6     | -------------> |     6     |
> + * |           |                     +-----------+                +-----------+
> + * |           |                     |     7     | -------------> |     7     |
> + * |           |                     +-----------+                +-----------+
> + * |           |
> + * |           |
> + * |           |
> + * +-----------+
> + *
> + * The value of page->compound_head is the same for all tail pages. The first
> + * page of page structs (page 0) associated with the HugeTLB page contains the 4
> + * page structs necessary to describe the HugeTLB. The only use of the remaining
> + * pages of page structs (page 1 to page 7) is to point to page->compound_head.
> + * Therefore, we can remap pages 2 to 7 to page 1. Only 2 pages of page structs
> + * will be used for each HugeTLB page. This will allow us to free the remaining
> + * 6 pages to the buddy allocator.
> + *
> + * Here is how things look after remapping.
> + *
> + *    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
> + * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
> + * |           |                     |     0     | -------------> |     0     |
> + * |           |                     +-----------+                +-----------+
> + * |           |                     |     1     | -------------> |     1     |
> + * |           |                     +-----------+                +-----------+
> + * |           |                     |     2     | ----------------^ ^ ^ ^ ^ ^
> + * |           |                     +-----------+                   | | | | |
> + * |           |                     |     3     | ------------------+ | | | |
> + * |           |                     +-----------+                     | | | |
> + * |           |                     |     4     | --------------------+ | | |
> + * |    PMD    |                     +-----------+                       | | |
> + * |   level   |                     |     5     | ----------------------+ | |
> + * |  mapping  |                     +-----------+                         | |
> + * |           |                     |     6     | ------------------------+ |
> + * |           |                     +-----------+                           |
> + * |           |                     |     7     | --------------------------+
> + * |           |                     +-----------+
> + * |           |
> + * |           |
> + * |           |
> + * +-----------+
> + *
> + * When a HugeTLB is freed to the buddy system, we should allocate 6 pages for
> + * vmemmap pages and restore the previous mapping relationship.
> + *
> + * The HugeTLB page of the pud level mapping is similar to the former. We can
> + * also use this approach to free (PAGE_SIZE - 2) vmemmap pages.
> + *
> + * Apart from the HugeTLB page of the pmd/pud level mapping, some architectures
> + * (e.g. aarch64) provide a contiguous bit in the translation table entries
> + * that hints to the MMU that the entry is one of a contiguous set of
> + * entries that can be cached in a single TLB entry.
> + *
> + * The contiguous bit is used to increase the mapping size at the pmd and pte
> + * (last) level. So this type of HugeTLB page can be optimized only when the
> + * size of its struct page structs is greater than 2 pages.
> + */
> +#include "hugetlb_vmemmap.h"
> +
> +/*
> + * There are a lot of struct page structures associated with each HugeTLB page.
> + * For tail pages, the value of compound_head is the same. So we can reuse the
> + * first page of the tail page structures. We map the virtual addresses of the
> + * remaining tail page structures to the first tail page struct, and then free
> + * these page frames. Therefore, we need to reserve two pages as vmemmap areas.
> + */
> +#define RESERVE_VMEMMAP_NR		2U
> +#define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
> +
> +/*
> + * How many vmemmap pages associated with a HugeTLB page can be freed
> + * to the buddy allocator.
> + *
> + * Todo: Returns zero for now, which means the feature is disabled. We will
> + * enable it once all the infrastructure is there.
> + */
> +static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
> +{
> +	return 0;
> +}
> +
> +static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
> +{
> +	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
> +}
> +
> +void free_huge_page_vmemmap(struct hstate *h, struct page *head)
> +{
> +	unsigned long vmemmap_addr = (unsigned long)head;
> +	unsigned long vmemmap_end, vmemmap_reuse;
> +
> +	if (!free_vmemmap_pages_per_hpage(h))
> +		return;
> +
> +	vmemmap_addr += RESERVE_VMEMMAP_SIZE;
> +	vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h);
> +	vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
> +
> +	/*
> +	 * Remap the vmemmap virtual address range [@vmemmap_addr, @vmemmap_end)
> +	 * to the page which @vmemmap_reuse is mapped to, then free the pages
> +	 * which the range [@vmemmap_addr, @vmemmap_end] is mapped to.
> +	 */
> +	vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse);
> +}
> diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
> new file mode 100644
> index 000000000000..6923f03534d5
> --- /dev/null
> +++ b/mm/hugetlb_vmemmap.h
> @@ -0,0 +1,20 @@
> +// SPDX-License-Identifier: GPL-2.0
> +/*
> + * Free some vmemmap pages of HugeTLB
> + *
> + * Copyright (c) 2020, Bytedance. All rights reserved.
> + *
> + *     Author: Muchun Song <songmuchun@bytedance.com>
> + */
> +#ifndef _LINUX_HUGETLB_VMEMMAP_H
> +#define _LINUX_HUGETLB_VMEMMAP_H
> +#include <linux/hugetlb.h>
> +
> +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> +void free_huge_page_vmemmap(struct hstate *h, struct page *head);
> +#else
> +static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
> +{
> +}
> +#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
> +#endif /* _LINUX_HUGETLB_VMEMMAP_H */
> diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> index 16183d85a7d5..d3076a7a3783 100644
> --- a/mm/sparse-vmemmap.c
> +++ b/mm/sparse-vmemmap.c
> @@ -27,8 +27,215 @@
>  #include <linux/spinlock.h>
>  #include <linux/vmalloc.h>
>  #include <linux/sched.h>
> +#include <linux/pgtable.h>
> +#include <linux/bootmem_info.h>
> +
>  #include <asm/dma.h>
>  #include <asm/pgalloc.h>
> +#include <asm/tlbflush.h>
> +
> +/**
> + * vmemmap_remap_walk - walk vmemmap page table
> + *
> + * @remap_pte:		called for each lowest-level entry (PTE).
> + * @reuse_page:		the page which is reused for the tail vmemmap pages.
> + * @reuse_addr:		the virtual address of the @reuse_page page.
> + * @vmemmap_pages:	the list head of the vmemmap pages that can be freed.
> + */
> +struct vmemmap_remap_walk {
> +	void (*remap_pte)(pte_t *pte, unsigned long addr,
> +			  struct vmemmap_remap_walk *walk);
> +	struct page *reuse_page;
> +	unsigned long reuse_addr;
> +	struct list_head *vmemmap_pages;
> +};
> +
> +static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
> +			      unsigned long end,
> +			      struct vmemmap_remap_walk *walk)
> +{
> +	pte_t *pte;
> +
> +	pte = pte_offset_kernel(pmd, addr);
> +
> +	/*
> +	 * The reuse_page is found 'first' in the table walk before we start
> +	 * remapping (i.e. calling @walk->remap_pte).
> +	 */
> +	if (!walk->reuse_page) {
> +		BUG_ON(pte_none(*pte));
> +		BUG_ON(walk->reuse_addr != addr);
> +
> +		walk->reuse_page = pte_page(*pte++);
> +		/*
> +		 * Because the reuse address is part of the range that we are
> +		 * walking, skip the reuse address range.
> +		 */
> +		addr += PAGE_SIZE;
> +	}
> +
> +	for (; addr != end; addr += PAGE_SIZE, pte++) {
> +		BUG_ON(pte_none(*pte));
> +
> +		walk->remap_pte(pte, addr, walk);
> +	}
> +}
> +
> +static void vmemmap_pmd_range(pud_t *pud, unsigned long addr,
> +			      unsigned long end,
> +			      struct vmemmap_remap_walk *walk)
> +{
> +	pmd_t *pmd;
> +	unsigned long next;
> +
> +	pmd = pmd_offset(pud, addr);
> +	do {
> +		BUG_ON(pmd_none(*pmd) || pmd_leaf(*pmd));
> +
> +		next = pmd_addr_end(addr, end);
> +		vmemmap_pte_range(pmd, addr, next, walk);
> +	} while (pmd++, addr = next, addr != end);
> +}
> +
> +static void vmemmap_pud_range(p4d_t *p4d, unsigned long addr,
> +			      unsigned long end,
> +			      struct vmemmap_remap_walk *walk)
> +{
> +	pud_t *pud;
> +	unsigned long next;
> +
> +	pud = pud_offset(p4d, addr);
> +	do {
> +		BUG_ON(pud_none(*pud));
> +
> +		next = pud_addr_end(addr, end);
> +		vmemmap_pmd_range(pud, addr, next, walk);
> +	} while (pud++, addr = next, addr != end);
> +}
> +
> +static void vmemmap_p4d_range(pgd_t *pgd, unsigned long addr,
> +			      unsigned long end,
> +			      struct vmemmap_remap_walk *walk)
> +{
> +	p4d_t *p4d;
> +	unsigned long next;
> +
> +	p4d = p4d_offset(pgd, addr);
> +	do {
> +		BUG_ON(p4d_none(*p4d));
> +
> +		next = p4d_addr_end(addr, end);
> +		vmemmap_pud_range(p4d, addr, next, walk);
> +	} while (p4d++, addr = next, addr != end);
> +}
> +
> +static void vmemmap_remap_range(unsigned long start, unsigned long end,
> +				struct vmemmap_remap_walk *walk)
> +{
> +	unsigned long addr = start;
> +	unsigned long next;
> +	pgd_t *pgd;
> +
> +	VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
> +	VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
> +
> +	pgd = pgd_offset_k(addr);
> +	do {
> +		BUG_ON(pgd_none(*pgd));
> +
> +		next = pgd_addr_end(addr, end);
> +		vmemmap_p4d_range(pgd, addr, next, walk);
> +	} while (pgd++, addr = next, addr != end);
> +
> +	/*
> +	 * We only change the mapping of the vmemmap virtual address range
> +	 * [@start + PAGE_SIZE, end), so we only need to flush the TLB which
> +	 * belongs to the range.
> +	 */
> +	flush_tlb_kernel_range(start + PAGE_SIZE, end);
> +}
> +
> +/*
> + * Free a vmemmap page. A vmemmap page can be allocated from the memblock
> + * allocator or buddy allocator. If the PG_reserved flag is set, it means
> + * that it was allocated from the memblock allocator; just free it via
> + * free_bootmem_page(). Otherwise, use __free_page().
> + */
> +static inline void free_vmemmap_page(struct page *page)
> +{
> +	if (PageReserved(page))
> +		free_bootmem_page(page);
> +	else
> +		__free_page(page);
> +}
> +
> +/* Free a list of the vmemmap pages */
> +static void free_vmemmap_page_list(struct list_head *list)
> +{
> +	struct page *page, *next;
> +
> +	list_for_each_entry_safe(page, next, list, lru) {
> +		list_del(&page->lru);
> +		free_vmemmap_page(page);
> +	}
> +}
> +
> +static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
> +			      struct vmemmap_remap_walk *walk)
> +{
> +	/*
> +	 * Remap the tail pages as read-only to catch illegal write operations
> +	 * to the tail pages.
> +	 */
> +	pgprot_t pgprot = PAGE_KERNEL_RO;
> +	pte_t entry = mk_pte(walk->reuse_page, pgprot);
> +	struct page *page = pte_page(*pte);
> +
> +	list_add(&page->lru, walk->vmemmap_pages);
> +	set_pte_at(&init_mm, addr, pte, entry);
> +}
> +
> +/**
> + * vmemmap_remap_free - remap the vmemmap virtual address range [@start, @end)
> + *			to the page which @reuse is mapped to, then free the
> + *			vmemmap pages which the range was mapped to.
> + * @start:	start address of the vmemmap virtual address range that we want
> + *		to remap.
> + * @end:	end address of the vmemmap virtual address range that we want to
> + *		remap.
> + * @reuse:	reuse address.
> + *
> + * Note: This function depends on vmemmap being base page mapped. Please make
> + * sure that we disable PMD mapping of vmemmap pages when calling this function.
> + */
> +void vmemmap_remap_free(unsigned long start, unsigned long end,
> +			unsigned long reuse)
> +{
> +	LIST_HEAD(vmemmap_pages);
> +	struct vmemmap_remap_walk walk = {
> +		.remap_pte	= vmemmap_remap_pte,
> +		.reuse_addr	= reuse,
> +		.vmemmap_pages	= &vmemmap_pages,
> +	};
> +
> +	/*
> +	 * In order to make the remapping routine most efficient for huge pages,
> +	 * the vmemmap page table walking routine has the following rules (see
> +	 * vmemmap_pte_range() for more details):
> +	 *
> +	 * - The range [@start, @end) and the range [@reuse, @reuse + PAGE_SIZE)
> +	 *   should be contiguous.
> +	 * - The @reuse address is part of the range [@reuse, @end) that we are
> +	 *   walking which is passed to vmemmap_remap_range().
> +	 * - The @reuse address is the first in the complete range.
> +	 *
> +	 * So we need to make sure that @start and @reuse meet the above rules.
> +	 */
> +	BUG_ON(start - reuse != PAGE_SIZE);
> +
> +	vmemmap_remap_range(reuse, end, &walk);
> +	free_vmemmap_page_list(&vmemmap_pages);
> +}
>  
>  /*
>   * Allocate a block of memory to be used to back the virtual memory map
> -- 
> 2.11.0
> 

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v18 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-03-08 10:28 ` [PATCH v18 4/9] mm: hugetlb: alloc " Muchun Song
  2021-03-10 14:21   ` Oscar Salvador
@ 2021-03-10 15:19   ` Michal Hocko
  2021-03-10 18:56     ` Mike Kravetz
  2021-03-11  4:26     ` [External] " Muchun Song
  1 sibling, 2 replies; 52+ messages in thread
From: Michal Hocko @ 2021-03-10 15:19 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, song.bao.hua, david,
	naoya.horiguchi, joao.m.martins, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel, Chen Huang,
	Bodeddula Balasubramaniam

On Mon 08-03-21 18:28:02, Muchun Song wrote:
[...]
> -static void update_and_free_page(struct hstate *h, struct page *page)
> +static int update_and_free_page(struct hstate *h, struct page *page)
> +	__releases(&hugetlb_lock) __acquires(&hugetlb_lock)
>  {
>  	int i;
>  	struct page *subpage = page;
> +	int nid = page_to_nid(page);
>  
>  	if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
> -		return;
> +		return 0;
>  
>  	h->nr_huge_pages--;
> -	h->nr_huge_pages_node[page_to_nid(page)]--;
> +	h->nr_huge_pages_node[nid]--;
> +	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page);
> +	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page_rsvd(page), page);

> +	set_page_refcounted(page);
> +	set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
> +
> +	/*
> +	 * If the vmemmap pages associated with the HugeTLB page can be
> +	 * optimized or the page is gigantic, we might block in
> +	 * alloc_huge_page_vmemmap() or free_gigantic_page(). In both
> +	 * cases, drop the hugetlb_lock.
> +	 */
> +	if (free_vmemmap_pages_per_hpage(h) || hstate_is_gigantic(h))
> +		spin_unlock(&hugetlb_lock);
> +
> +	if (alloc_huge_page_vmemmap(h, page)) {
> +		spin_lock(&hugetlb_lock);
> +		INIT_LIST_HEAD(&page->lru);
> +		set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
> +		h->nr_huge_pages++;
> +		h->nr_huge_pages_node[nid]++;
> +
> +		/*
> +		 * If we cannot allocate vmemmap pages, just refuse to free the
> +		 * page and put the page back on the hugetlb free list and treat
> +		 * as a surplus page.
> +		 */
> +		h->surplus_huge_pages++;
> +		h->surplus_huge_pages_node[nid]++;
> +
> +		/*
> +		 * The refcount can possibly be increased by memory-failure or
> +		 * soft_offline handlers.

This comment could be more helpful. I believe you want to say this
		/*
		 * HWpoisoning code can increment the reference
		 * count here. If there is a race then bail out;
		 * the holder of the additional reference count will
		 * free up the page with put_page.
> +		 */
> +		if (likely(put_page_testzero(page))) {
> +			arch_clear_hugepage_flags(page);
> +			enqueue_huge_page(h, page);
> +		}
> +
> +		return -ENOMEM;
> +	}
> +
>  	for (i = 0; i < pages_per_huge_page(h);
>  	     i++, subpage = mem_map_next(subpage, page, i)) {
>  		subpage->flags &= ~(1 << PG_locked | 1 << PG_error |
[...]
> @@ -1447,7 +1486,7 @@ void free_huge_page(struct page *page)
>  	/*
>  	 * Defer freeing if in non-task context to avoid hugetlb_lock deadlock.
>  	 */
> -	if (!in_task()) {
> +	if (in_atomic()) {

As I've said elsewhere, in_atomic doesn't work for CONFIG_PREEMPT_COUNT=n.
We need this change for other reasons and so it would be better to pull
it out into a separate patch which also makes HUGETLB depend on
PREEMPT_COUNT.

[...]
> @@ -1771,8 +1813,12 @@ int dissolve_free_huge_page(struct page *page)
>  		h->free_huge_pages--;
>  		h->free_huge_pages_node[nid]--;
>  		h->max_huge_pages--;
> -		update_and_free_page(h, head);
> -		rc = 0;
> +		rc = update_and_free_page(h, head);
> +		if (rc) {
> +			h->surplus_huge_pages--;
> +			h->surplus_huge_pages_node[nid]--;
> +			h->max_huge_pages++;

This is quite ugly and confusing. update_and_free_page is careful to do
the proper counter accounting and now you just override it partially.
Why can we not rely on update_and_free_page to do the right thing?

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v18 5/9] mm: hugetlb: set the PageHWPoison to the raw error page
  2021-03-08 10:28 ` [PATCH v18 5/9] mm: hugetlb: set the PageHWPoison to the raw error page Muchun Song
@ 2021-03-10 15:27   ` Michal Hocko
  2021-03-11  6:34     ` [External] " Muchun Song
  0 siblings, 1 reply; 52+ messages in thread
From: Michal Hocko @ 2021-03-10 15:27 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, song.bao.hua, david,
	naoya.horiguchi, joao.m.martins, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel, Chen Huang,
	Bodeddula Balasubramaniam

On Mon 08-03-21 18:28:03, Muchun Song wrote:
> Because we reuse the first tail vmemmap page frame and remap it
> read-only, we cannot set the PageHWPoison flag on some tail pages.
> So we can use the head[4].private (There are at least 128 struct
> page structures associated with the optimized HugeTLB page, so
> using head[4].private is safe) to record the real error page index
> and set the raw error page PageHWPoison later.
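
For reference, the 128 follows directly from the two reserved vmemmap pages
on x86-64 with a 64 byte struct page (values as stated earlier in the series):

	RESERVE_VMEMMAP_NR * PAGE_SIZE / sizeof(struct page) = 2 * 4096 / 64 = 128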

Can we have more than one poisoned tail page? Also, who consumes that index
and sets the HWPoison flag on the proper tail page?
 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Acked-by: David Rientjes <rientjes@google.com>
> Tested-by: Chen Huang <chenhuang5@huawei.com>
> Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>
> ---
>  mm/hugetlb.c | 80 ++++++++++++++++++++++++++++++++++++++++++++++++++++++------
>  1 file changed, 72 insertions(+), 8 deletions(-)
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 377e0c1b283f..c0c1b7635ca9 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1304,6 +1304,74 @@ static inline void destroy_compound_gigantic_page(struct page *page,
>  						unsigned int order) { }
>  #endif
>  
> +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> +static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
> +{
> +	struct page *page;
> +
> +	if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
> +		return;
> +
> +	page = head + page_private(head + 4);
> +
> +	/*
> +	 * Move PageHWPoison flag from head page to the raw error page,
> +	 * which makes any subpages rather than the error page reusable.
> +	 */
> +	if (page != head) {
> +		SetPageHWPoison(page);
> +		ClearPageHWPoison(head);
> +	}
> +}
> +
> +static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
> +					struct page *page)
> +{
> +	if (!PageHWPoison(head))
> +		return;
> +
> +	if (free_vmemmap_pages_per_hpage(h)) {
> +		set_page_private(head + 4, page - head);
> +	} else if (page != head) {
> +		/*
> +		 * Move PageHWPoison flag from head page to the raw error page,
> +		 * which makes any subpages rather than the error page reusable.
> +		 */
> +		SetPageHWPoison(page);
> +		ClearPageHWPoison(head);
> +	}
> +}
> +
> +static inline void hwpoison_subpage_clear(struct hstate *h, struct page *head)
> +{
> +	if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
> +		return;
> +
> +	set_page_private(head + 4, 0);
> +}
> +#else
> +static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
> +{
> +}
> +
> +static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
> +					struct page *page)
> +{
> +	if (PageHWPoison(head) && page != head) {
> +		/*
> +		 * Move PageHWPoison flag from head page to the raw error page,
> +		 * which makes any subpages rather than the error page reusable.
> +		 */
> +		SetPageHWPoison(page);
> +		ClearPageHWPoison(head);
> +	}
> +}
> +
> +static inline void hwpoison_subpage_clear(struct hstate *h, struct page *head)
> +{
> +}
> +#endif
> +
>  static int update_and_free_page(struct hstate *h, struct page *page)
>  	__releases(&hugetlb_lock) __acquires(&hugetlb_lock)
>  {
> @@ -1357,6 +1425,8 @@ static int update_and_free_page(struct hstate *h, struct page *page)
>  		return -ENOMEM;
>  	}
>  
> +	hwpoison_subpage_deliver(h, page);
> +
>  	for (i = 0; i < pages_per_huge_page(h);
>  	     i++, subpage = mem_map_next(subpage, page, i)) {
>  		subpage->flags &= ~(1 << PG_locked | 1 << PG_error |
> @@ -1801,14 +1871,7 @@ int dissolve_free_huge_page(struct page *page)
>  			goto retry;
>  		}
>  
> -		/*
> -		 * Move PageHWPoison flag from head page to the raw error page,
> -		 * which makes any subpages rather than the error page reusable.
> -		 */
> -		if (PageHWPoison(head) && page != head) {
> -			SetPageHWPoison(page);
> -			ClearPageHWPoison(head);
> -		}
> +		hwpoison_subpage_set(h, head, page);
>  		list_del(&head->lru);
>  		h->free_huge_pages--;
>  		h->free_huge_pages_node[nid]--;
> @@ -1818,6 +1881,7 @@ int dissolve_free_huge_page(struct page *page)
>  			h->surplus_huge_pages--;
>  			h->surplus_huge_pages_node[nid]--;
>  			h->max_huge_pages++;
> +			hwpoison_subpage_clear(h, head);
>  		}
>  	}
>  out:
> -- 
> 2.11.0
> 

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v18 6/9] mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap
  2021-03-08 10:28 ` [PATCH v18 6/9] mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap Muchun Song
@ 2021-03-10 15:37   ` Michal Hocko
  2021-03-10 17:15     ` Randy Dunlap
  2021-03-11  6:36     ` Muchun Song
  0 siblings, 2 replies; 52+ messages in thread
From: Michal Hocko @ 2021-03-10 15:37 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, song.bao.hua, david,
	naoya.horiguchi, joao.m.martins, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel, Miaohe Lin, Chen Huang,
	Bodeddula Balasubramaniam

On Mon 08-03-21 18:28:04, Muchun Song wrote:
> Add a kernel parameter hugetlb_free_vmemmap to enable the feature of
> freeing unused vmemmap pages associated with each hugetlb page on boot.
> 
> We disable PMD mapping of vmemmap pages for the x86-64 arch when this
> feature is enabled, because vmemmap_remap_free() depends on vmemmap
> being base page mapped.
> 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Reviewed-by: Barry Song <song.bao.hua@hisilicon.com>
> Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
> Tested-by: Chen Huang <chenhuang5@huawei.com>
> Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>
> ---
>  Documentation/admin-guide/kernel-parameters.txt | 14 ++++++++++++++
>  Documentation/admin-guide/mm/hugetlbpage.rst    |  3 +++
>  arch/x86/mm/init_64.c                           |  8 ++++++--
>  include/linux/hugetlb.h                         | 19 +++++++++++++++++++
>  mm/hugetlb_vmemmap.c                            | 24 ++++++++++++++++++++++++
>  5 files changed, 66 insertions(+), 2 deletions(-)
> 
> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> index 04545725f187..de91d54573c4 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -1557,6 +1557,20 @@
>  			Documentation/admin-guide/mm/hugetlbpage.rst.
>  			Format: size[KMG]
>  
> +	hugetlb_free_vmemmap=
> +			[KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
> +			this controls freeing unused vmemmap pages associated
> +			with each HugeTLB page. When this option is enabled,
> +			we disable PMD/huge page mapping of vmemmap pages which
> +			increase page table pages. So if a user/sysadmin only
> +			uses a small number of HugeTLB pages (as a percentage
> +			of system memory), they could end up using more memory
> +			with hugetlb_free_vmemmap on as opposed to off.
> +			Format: { on | off (default) }

Please note this is an admin guide and for those this seems overly low
level. I would use something like the following
			[KNL] Requires CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
			enabled.
			Allows heavy hugetlb users to free up some more
			memory (6 * PAGE_SIZE for each 2MB hugetlb
			page).
			This feature is not free though. Large page
			tables are not use to back vmemmap pages which
			can lead to a performance degradation for some
			workloads. Also there will be memory allocation
			required when hugetlb pages are freed from the
			pool which can lead to corner cases under heavy
			memory pressure.
> +
> +			on:  enable the feature
> +			off: disable the feature
> +
>  	hung_task_panic=
>  			[KNL] Should the hung task detector generate panics.
>  			Format: 0 | 1
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v18 8/9] mm: hugetlb: gather discrete indexes of tail page
  2021-03-08 10:28 ` [PATCH v18 8/9] mm: hugetlb: gather discrete indexes of tail page Muchun Song
@ 2021-03-10 15:39   ` Michal Hocko
  0 siblings, 0 replies; 52+ messages in thread
From: Michal Hocko @ 2021-03-10 15:39 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, song.bao.hua, david,
	naoya.horiguchi, joao.m.martins, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel, Miaohe Lin, Chen Huang,
	Bodeddula Balasubramaniam

On Mon 08-03-21 18:28:06, Muchun Song wrote:
> For a HugeTLB page, there is more metadata to save in the struct page.
> But the head struct page cannot meet our needs, so we have to abuse
> other tail struct pages to store the metadata. In order to avoid
> conflicts caused by subsequent use of more tail struct pages, we can
> gather these discrete indexes of the tail struct pages. In this case, it
> will be easier to add a new tail page index later.
> 
> There are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct
> page structs that can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP,
> so add a BUILD_BUG_ON to catch invalid usage of the tail struct page.

OK, so this is what I have asked about in an earlier patch. Good. I would
reorder and make this patch precede the one relying on that fact though.
 
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
> Tested-by: Chen Huang <chenhuang5@huawei.com>
> Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>

Acked-by: Michal Hocko <mhocko@suse.com>
> ---
>  include/linux/hugetlb.h        | 24 ++++++++++++++++++++++--
>  include/linux/hugetlb_cgroup.h | 19 +++++++++++--------
>  mm/hugetlb.c                   |  6 +++---
>  mm/hugetlb_vmemmap.c           |  8 ++++++++
>  4 files changed, 44 insertions(+), 13 deletions(-)
> 
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index a4d80f7263fc..c70421e26189 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -28,6 +28,26 @@ typedef struct { unsigned long pd; } hugepd_t;
>  #include <linux/shm.h>
>  #include <asm/tlbflush.h>
>  
> +/*
> + * For a HugeTLB page, there is more metadata to save in the struct page. But
> + * the head struct page cannot meet our needs, so we have to abuse other tail
> + * struct pages to store the metadata. In order to avoid conflicts caused by
> + * subsequent use of more tail struct pages, we gather these discrete indexes
> + * of the tail struct pages here.
> + */
> +enum {
> +	SUBPAGE_INDEX_SUBPOOL = 1,	/* reuse page->private */
> +#ifdef CONFIG_CGROUP_HUGETLB
> +	SUBPAGE_INDEX_CGROUP,		/* reuse page->private */
> +	SUBPAGE_INDEX_CGROUP_RSVD,	/* reuse page->private */
> +	__MAX_CGROUP_SUBPAGE_INDEX = SUBPAGE_INDEX_CGROUP_RSVD,
> +#endif
> +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> +	SUBPAGE_INDEX_HWPOISON,		/* reuse page->private */
> +#endif
> +	__NR_USED_SUBPAGE,
> +};
> +
>  struct hugepage_subpool {
>  	spinlock_t lock;
>  	long count;
> @@ -607,13 +627,13 @@ extern unsigned int default_hstate_idx;
>   */
>  static inline struct hugepage_subpool *hugetlb_page_subpool(struct page *hpage)
>  {
> -	return (struct hugepage_subpool *)(hpage+1)->private;
> +	return (void *)page_private(hpage + SUBPAGE_INDEX_SUBPOOL);
>  }
>  
>  static inline void hugetlb_set_page_subpool(struct page *hpage,
>  					struct hugepage_subpool *subpool)
>  {
> -	set_page_private(hpage+1, (unsigned long)subpool);
> +	set_page_private(hpage + SUBPAGE_INDEX_SUBPOOL, (unsigned long)subpool);
>  }
>  
>  static inline struct hstate *hstate_file(struct file *f)
> diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
> index 2ad6e92f124a..54ec689e3c9c 100644
> --- a/include/linux/hugetlb_cgroup.h
> +++ b/include/linux/hugetlb_cgroup.h
> @@ -21,15 +21,16 @@ struct hugetlb_cgroup;
>  struct resv_map;
>  struct file_region;
>  
> +#ifdef CONFIG_CGROUP_HUGETLB
>  /*
>   * Minimum page order trackable by hugetlb cgroup.
>   * At least 4 pages are necessary for all the tracking information.
> - * The second tail page (hpage[2]) is the fault usage cgroup.
> - * The third tail page (hpage[3]) is the reservation usage cgroup.
> + * The second tail page (hpage[SUBPAGE_INDEX_CGROUP]) is the fault
> + * usage cgroup. The third tail page (hpage[SUBPAGE_INDEX_CGROUP_RSVD])
> + * is the reservation usage cgroup.
>   */
> -#define HUGETLB_CGROUP_MIN_ORDER	2
> +#define HUGETLB_CGROUP_MIN_ORDER order_base_2(__MAX_CGROUP_SUBPAGE_INDEX + 1)
>  
> -#ifdef CONFIG_CGROUP_HUGETLB
>  enum hugetlb_memory_event {
>  	HUGETLB_MAX,
>  	HUGETLB_NR_MEMORY_EVENTS,
> @@ -66,9 +67,9 @@ __hugetlb_cgroup_from_page(struct page *page, bool rsvd)
>  	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
>  		return NULL;
>  	if (rsvd)
> -		return (struct hugetlb_cgroup *)page[3].private;
> +		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP_RSVD);
>  	else
> -		return (struct hugetlb_cgroup *)page[2].private;
> +		return (void *)page_private(page + SUBPAGE_INDEX_CGROUP);
>  }
>  
>  static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
> @@ -90,9 +91,11 @@ static inline int __set_hugetlb_cgroup(struct page *page,
>  	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
>  		return -1;
>  	if (rsvd)
> -		page[3].private = (unsigned long)h_cg;
> +		set_page_private(page + SUBPAGE_INDEX_CGROUP_RSVD,
> +				 (unsigned long)h_cg);
>  	else
> -		page[2].private = (unsigned long)h_cg;
> +		set_page_private(page + SUBPAGE_INDEX_CGROUP,
> +				 (unsigned long)h_cg);
>  	return 0;
>  }
>  
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index c221b937be17..4956880a7861 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1312,7 +1312,7 @@ static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
>  	if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
>  		return;
>  
> -	page = head + page_private(head + 4);
> +	page = head + page_private(head + SUBPAGE_INDEX_HWPOISON);
>  
>  	/*
>  	 * Move PageHWPoison flag from head page to the raw error page,
> @@ -1331,7 +1331,7 @@ static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
>  		return;
>  
>  	if (free_vmemmap_pages_per_hpage(h)) {
> -		set_page_private(head + 4, page - head);
> +		set_page_private(head + SUBPAGE_INDEX_HWPOISON, page - head);
>  	} else if (page != head) {
>  		/*
>  		 * Move PageHWPoison flag from head page to the raw error page,
> @@ -1347,7 +1347,7 @@ static inline void hwpoison_subpage_clear(struct hstate *h, struct page *head)
>  	if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
>  		return;
>  
> -	set_page_private(head + 4, 0);
> +	set_page_private(head + SUBPAGE_INDEX_HWPOISON, 0);
>  }
>  #else
>  static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index b65f0d5189bd..33e42678abe3 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -257,6 +257,14 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
>  	unsigned int nr_pages = pages_per_huge_page(h);
>  	unsigned int vmemmap_pages;
>  
> +	/*
> +	 * There are only (RESERVE_VMEMMAP_SIZE / sizeof(struct page)) struct
> +	 * page structs that can be used when CONFIG_HUGETLB_PAGE_FREE_VMEMMAP,
> +	 * so add a BUILD_BUG_ON to catch invalid usage of the tail struct page.
> +	 */
> +	BUILD_BUG_ON(__NR_USED_SUBPAGE >=
> +		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
> +
>  	if (!hugetlb_free_vmemmap_enabled)
>  		return;
>  
> -- 
> 2.11.0
> 

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v18 9/9] mm: hugetlb: optimize the code with the help of the compiler
  2021-03-08 10:28 ` [PATCH v18 9/9] mm: hugetlb: optimize the code with the help of the compiler Muchun Song
@ 2021-03-10 15:41   ` Michal Hocko
  2021-03-11  7:33     ` [External] " Muchun Song
  0 siblings, 1 reply; 52+ messages in thread
From: Michal Hocko @ 2021-03-10 15:41 UTC (permalink / raw)
  To: Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, song.bao.hua, david,
	naoya.horiguchi, joao.m.martins, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel, Miaohe Lin, Chen Huang,
	Bodeddula Balasubramaniam

On Mon 08-03-21 18:28:07, Muchun Song wrote:
> When the "struct page size" crosses page boundaries we cannot
> make use of this feature. Let free_vmemmap_pages_per_hpage()
> return zero if that is the case; most of the functions can then be
> optimized away.

I am confused. Don't you check for this in early_hugetlb_free_vmemmap_param already?
Why do we need any runtime checks?
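
In other words, would a one time check in the param handler not be
sufficient? Something like the following (hand-wavy sketch only, relying on
the handler name mentioned above and the hugetlb_free_vmemmap_enabled flag
from the series):

	static int __init early_hugetlb_free_vmemmap_param(char *buf)
	{
		/* Sketch: simply refuse to enable the feature in that case. */
		if (!is_power_of_2(sizeof(struct page)))
			return 0;

		if (buf && !strcmp(buf, "on"))
			hugetlb_free_vmemmap_enabled = true;

		return 0;
	}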

> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> Tested-by: Chen Huang <chenhuang5@huawei.com>
> Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>
> ---
>  include/linux/hugetlb.h | 3 ++-
>  mm/hugetlb_vmemmap.c    | 7 +++++++
>  mm/hugetlb_vmemmap.h    | 6 ++++++
>  3 files changed, 15 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index c70421e26189..333dd0479fc2 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -880,7 +880,8 @@ extern bool hugetlb_free_vmemmap_enabled;
>  
>  static inline bool is_hugetlb_free_vmemmap_enabled(void)
>  {
> -	return hugetlb_free_vmemmap_enabled;
> +	return hugetlb_free_vmemmap_enabled &&
> +	       is_power_of_2(sizeof(struct page));
>  }
>  #else
>  static inline bool is_hugetlb_free_vmemmap_enabled(void)
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index 33e42678abe3..1ba1ef45c48c 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -265,6 +265,13 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
>  	BUILD_BUG_ON(__NR_USED_SUBPAGE >=
>  		     RESERVE_VMEMMAP_SIZE / sizeof(struct page));
>  
> +	/*
> +	 * The compiler can help us to optimize this function away when the
> +	 * size of struct page is not a power of 2.
> +	 */
> +	if (!is_power_of_2(sizeof(struct page)))
> +		return;
> +
>  	if (!hugetlb_free_vmemmap_enabled)
>  		return;
>  
> diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
> index cb2bef8f9e73..29aaaf7b741e 100644
> --- a/mm/hugetlb_vmemmap.h
> +++ b/mm/hugetlb_vmemmap.h
> @@ -21,6 +21,12 @@ void hugetlb_vmemmap_init(struct hstate *h);
>   */
>  static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
>  {
> +	/*
> +	 * This check aims to let the compiler help us optimize the code as
> +	 * much as possible.
> +	 */
> +	if (!is_power_of_2(sizeof(struct page)))
> +		return 0;
>  	return h->nr_free_vmemmap_pages;
>  }
>  #else
> -- 
> 2.11.0
> 

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v18 6/9] mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap
  2021-03-10 15:37   ` Michal Hocko
@ 2021-03-10 17:15     ` Randy Dunlap
  2021-03-11  6:36       ` [External] " Muchun Song
  2021-03-11  6:36     ` Muchun Song
  1 sibling, 1 reply; 52+ messages in thread
From: Randy Dunlap @ 2021-03-10 17:15 UTC (permalink / raw)
  To: Michal Hocko, Muchun Song
  Cc: corbet, mike.kravetz, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, song.bao.hua, david,
	naoya.horiguchi, joao.m.martins, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel, Miaohe Lin, Chen Huang,
	Bodeddula Balasubramaniam

On 3/10/21 7:37 AM, Michal Hocko wrote:
> On Mon 08-03-21 18:28:04, Muchun Song wrote:
>> Add a kernel parameter hugetlb_free_vmemmap to enable the feature of
>> freeing unused vmemmap pages associated with each hugetlb page on boot.
>>
>> We disable PMD mapping of vmemmap pages for the x86-64 arch when this
>> feature is enabled, because vmemmap_remap_free() depends on vmemmap
>> being base page mapped.
>>
>> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
>> Reviewed-by: Oscar Salvador <osalvador@suse.de>
>> Reviewed-by: Barry Song <song.bao.hua@hisilicon.com>
>> Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
>> Tested-by: Chen Huang <chenhuang5@huawei.com>
>> Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>
>> ---
>>  Documentation/admin-guide/kernel-parameters.txt | 14 ++++++++++++++
>>  Documentation/admin-guide/mm/hugetlbpage.rst    |  3 +++
>>  arch/x86/mm/init_64.c                           |  8 ++++++--
>>  include/linux/hugetlb.h                         | 19 +++++++++++++++++++
>>  mm/hugetlb_vmemmap.c                            | 24 ++++++++++++++++++++++++
>>  5 files changed, 66 insertions(+), 2 deletions(-)
>>
>> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
>> index 04545725f187..de91d54573c4 100644
>> --- a/Documentation/admin-guide/kernel-parameters.txt
>> +++ b/Documentation/admin-guide/kernel-parameters.txt
>> @@ -1557,6 +1557,20 @@
>>  			Documentation/admin-guide/mm/hugetlbpage.rst.
>>  			Format: size[KMG]
>>  
>> +	hugetlb_free_vmemmap=
>> +			[KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
>> +			this controls freeing unused vmemmap pages associated
>> +			with each HugeTLB page. When this option is enabled,
>> +			we disable PMD/huge page mapping of vmemmap pages which
>> +			increase page table pages. So if a user/sysadmin only
>> +			uses a small number of HugeTLB pages (as a percentage
>> +			of system memory), they could end up using more memory
>> +			with hugetlb_free_vmemmap on as opposed to off.
>> +			Format: { on | off (default) }
> 
> Please note this is an admin guide and for those this seems overly low
> level. I would use something like the following
> 			[KNL] Requires CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> 			enabled.
> 			Allows heavy hugetlb users to free up some more
> 			memory (6 * PAGE_SIZE for each 2MB hugetlb
> 			page).
> 			This feature is not free though. Large page
> 			tables are not use to back vmemmap pages which

			       are not used

> 			can lead to a performance degradation for some
> 			workloads. Also there will be memory allocation
> 			required when hugetlb pages are freed from the
> 			pool which can lead to corner cases under heavy
> 			memory pressure.
>> +
>> +			on:  enable the feature
>> +			off: disable the feature
>> +
>>  	hung_task_panic=
>>  			[KNL] Should the hung task detector generate panics.
>>  			Format: 0 | 1


-- 
~Randy


^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v18 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-03-10 15:19   ` Michal Hocko
@ 2021-03-10 18:56     ` Mike Kravetz
  2021-03-10 21:11       ` Michal Hocko
  2021-03-11  4:26     ` [External] " Muchun Song
  1 sibling, 1 reply; 52+ messages in thread
From: Mike Kravetz @ 2021-03-10 18:56 UTC (permalink / raw)
  To: Michal Hocko, Muchun Song
  Cc: corbet, tglx, mingo, bp, x86, hpa, dave.hansen, luto, peterz,
	viro, akpm, paulmck, mchehab+huawei, pawan.kumar.gupta, rdunlap,
	oneukum, anshuman.khandual, jroedel, almasrymina, rientjes,
	willy, osalvador, song.bao.hua, david, naoya.horiguchi,
	joao.m.martins, duanxiongchun, linux-doc, linux-kernel, linux-mm,
	linux-fsdevel, Chen Huang, Bodeddula Balasubramaniam

On 3/10/21 7:19 AM, Michal Hocko wrote:
> On Mon 08-03-21 18:28:02, Muchun Song wrote:
> [...]
>> @@ -1447,7 +1486,7 @@ void free_huge_page(struct page *page)
>>  	/*
>>  	 * Defer freeing if in non-task context to avoid hugetlb_lock deadlock.
>>  	 */
>> -	if (!in_task()) {
>> +	if (in_atomic()) {
> 
> As I've said elsewhere in_atomic doesn't work for CONFIG_PREEMPT_COUNT=n.
> We need this change for other reasons and so it would be better to pull
> it out into a separate patch which also makes HUGETLB depend on
> PREEMPT_COUNT.

Yes, the issue of calling put_page for hugetlb pages from any context
still needs work.  IMO, that is outside the scope of this series.  We
already have code in this path which blocks/sleeps.

Making HUGETLB depend on PREEMPT_COUNT is too restrictive.  IIUC,
PREEMPT_COUNT will only be enabled if we enable:
PREEMPT "Preemptible Kernel (Low-Latency Desktop)"
PREEMPT_RT "Fully Preemptible Kernel (Real-Time)"
or, other 'debug' options.  These are not enabled in 'more common'
kernels.  Of course, we do not want to disable HUGETLB in common
configurations.

I'll put together a separate patch where we can discuss the merits of
making the change from !in_task to in_atomic, and what work remains in
this put_page area.
-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v18 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-03-10 18:56     ` Mike Kravetz
@ 2021-03-10 21:11       ` Michal Hocko
  2021-03-10 21:49         ` Paul E. McKenney
  0 siblings, 1 reply; 52+ messages in thread
From: Michal Hocko @ 2021-03-10 21:11 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: Muchun Song, corbet, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, paulmck, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, song.bao.hua, david,
	naoya.horiguchi, joao.m.martins, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel, Chen Huang,
	Bodeddula Balasubramaniam

On Wed 10-03-21 10:56:08, Mike Kravetz wrote:
> On 3/10/21 7:19 AM, Michal Hocko wrote:
> > On Mon 08-03-21 18:28:02, Muchun Song wrote:
> > [...]
> >> @@ -1447,7 +1486,7 @@ void free_huge_page(struct page *page)
> >>  	/*
> >>  	 * Defer freeing if in non-task context to avoid hugetlb_lock deadlock.
> >>  	 */
> >> -	if (!in_task()) {
> >> +	if (in_atomic()) {
> > 
> > As I've said elsewhere in_atomic doesn't work for CONFIG_PREEMPT_COUNT=n.
> > We need this change for other reasons and so it would be better to pull
> > it out into a separate patch which also makes HUGETLB depend on
> > PREEMPT_COUNT.
> 
> Yes, the issue of calling put_page for hugetlb pages from any context
> still needs work.  IMO, that is outside the scope of this series.  We
> already have code in this path which blocks/sleeps.
> 
> Making HUGETLB depend on PREEMPT_COUNT is too restrictive.  IIUC,
> PREEMPT_COUNT will only be enabled if we enable:
> PREEMPT "Preemptible Kernel (Low-Latency Desktop)"
> PREEMPT_RT "Fully Preemptible Kernel (Real-Time)"
> or, other 'debug' options.  These are not enabled in 'more common'
> kernels.  Of course, we do not want to disable HUGETLB in common
> configurations.

I haven't tried that but PREEMPT_COUNT should be selectable even without
any change to the preemption model (e.g. !PREEMPT).

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v18 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-03-10 21:11       ` Michal Hocko
@ 2021-03-10 21:49         ` Paul E. McKenney
  2021-03-10 22:10           ` Mike Kravetz
  0 siblings, 1 reply; 52+ messages in thread
From: Paul E. McKenney @ 2021-03-10 21:49 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Mike Kravetz, Muchun Song, corbet, tglx, mingo, bp, x86, hpa,
	dave.hansen, luto, peterz, viro, akpm, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, song.bao.hua, david,
	naoya.horiguchi, joao.m.martins, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel, Chen Huang,
	Bodeddula Balasubramaniam

On Wed, Mar 10, 2021 at 10:11:22PM +0100, Michal Hocko wrote:
> On Wed 10-03-21 10:56:08, Mike Kravetz wrote:
> > On 3/10/21 7:19 AM, Michal Hocko wrote:
> > > On Mon 08-03-21 18:28:02, Muchun Song wrote:
> > > [...]
> > >> @@ -1447,7 +1486,7 @@ void free_huge_page(struct page *page)
> > >>  	/*
> > >>  	 * Defer freeing if in non-task context to avoid hugetlb_lock deadlock.
> > >>  	 */
> > >> -	if (!in_task()) {
> > >> +	if (in_atomic()) {
> > > 
> > > As I've said elsewhere in_atomic doesn't work for CONFIG_PREEMPT_COUNT=n.
> > > We need this change for other reasons and so it would be better to pull
> > > it out into a separate patch which also makes HUGETLB depend on
> > > PREEMPT_COUNT.
> > 
> > Yes, the issue of calling put_page for hugetlb pages from any context
> > still needs work.  IMO, that is outside the scope of this series.  We
> > already have code in this path which blocks/sleeps.
> > 
> > Making HUGETLB depend on PREEMPT_COUNT is too restrictive.  IIUC,
> > PREEMPT_COUNT will only be enabled if we enable:
> > PREEMPT "Preemptible Kernel (Low-Latency Desktop)"
> > PREEMPT_RT "Fully Preemptible Kernel (Real-Time)"
> > or, other 'debug' options.  These are not enabled in 'more common'
> > kernels.  Of course, we do not want to disable HUGETLB in common
> > configurations.
> 
> I haven't tried that but PREEMPT_COUNT should be selectable even without
> any change to the preemption model (e.g. !PREEMPT).

It works reliably for me, for example as in the diff below.  So,
as Michal says, you should be able to add "select PREEMPT_COUNT" to
whatever Kconfig option you need to.

							Thanx, Paul

diff --git a/kernel/rcu/Kconfig b/kernel/rcu/Kconfig
index 3128b7c..7d9f989 100644
--- a/kernel/rcu/Kconfig
+++ b/kernel/rcu/Kconfig
@@ -8,6 +8,7 @@ menu "RCU Subsystem"
 config TREE_RCU
 	bool
 	default y if SMP
+	select PREEMPT_COUNT
 	help
 	  This option selects the RCU implementation that is
 	  designed for very large SMP system with hundreds or

^ permalink raw reply related	[flat|nested] 52+ messages in thread

* Re: [PATCH v18 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-03-10 21:49         ` Paul E. McKenney
@ 2021-03-10 22:10           ` Mike Kravetz
  2021-03-10 23:28             ` Paul E. McKenney
  0 siblings, 1 reply; 52+ messages in thread
From: Mike Kravetz @ 2021-03-10 22:10 UTC (permalink / raw)
  To: paulmck, Michal Hocko
  Cc: Muchun Song, corbet, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, mchehab+huawei, pawan.kumar.gupta,
	rdunlap, oneukum, anshuman.khandual, jroedel, almasrymina,
	rientjes, willy, osalvador, song.bao.hua, david, naoya.horiguchi,
	joao.m.martins, duanxiongchun, linux-doc, linux-kernel, linux-mm,
	linux-fsdevel, Chen Huang, Bodeddula Balasubramaniam

On 3/10/21 1:49 PM, Paul E. McKenney wrote:
> On Wed, Mar 10, 2021 at 10:11:22PM +0100, Michal Hocko wrote:
>> On Wed 10-03-21 10:56:08, Mike Kravetz wrote:
>>> On 3/10/21 7:19 AM, Michal Hocko wrote:
>>>> On Mon 08-03-21 18:28:02, Muchun Song wrote:
>>>> [...]
>>>>> @@ -1447,7 +1486,7 @@ void free_huge_page(struct page *page)
>>>>>  	/*
>>>>>  	 * Defer freeing if in non-task context to avoid hugetlb_lock deadlock.
>>>>>  	 */
>>>>> -	if (!in_task()) {
>>>>> +	if (in_atomic()) {
>>>>
>>>> As I've said elsewhere in_atomic doesn't work for CONFIG_PREEMPT_COUNT=n.
>>>> We need this change for other reasons and so it would be better to pull
>>>> it out into a separate patch which also makes HUGETLB depend on
>>>> PREEMPT_COUNT.
>>>
>>> Yes, the issue of calling put_page for hugetlb pages from any context
>>> still needs work.  IMO, that is outside the scope of this series.  We
>>> already have code in this path which blocks/sleeps.
>>>
>>> Making HUGETLB depend on PREEMPT_COUNT is too restrictive.  IIUC,
>>> PREEMPT_COUNT will only be enabled if we enable:
>>> PREEMPT "Preemptible Kernel (Low-Latency Desktop)"
>>> PREEMPT_RT "Fully Preemptible Kernel (Real-Time)"
>>> or, other 'debug' options.  These are not enabled in 'more common'
>>> kernels.  Of course, we do not want to disable HUGETLB in common
>>> configurations.
>>
>> I haven't tried that but PREEMPT_COUNT should be selectable even without
>> any change to the preemption model (e.g. !PREEMPT).
> 
> It works reliably for me, for example as in the diff below.  So,
> as Michal says, you should be able to add "select PREEMPT_COUNT" to
> whatever Kconfig option you need to.
> 

Thanks Paul.

I may have been misreading Michal's suggestion of "make HUGETLB depend on
PREEMPT_COUNT".  We could "select PREEMPT_COUNT" if HUGETLB is enabled.
However, since HUGETLB is enabled in most configs, this would result in
PREEMPT_COUNT also being enabled in most configs.  I honestly do not know
how much that would cost us.  I assume that if it were free or really
cheap, it would already be always on?
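
Concretely, I read the select-based variant as something like the
following (purely illustrative; the existing depends/help lines of the
fs/Kconfig entry are omitted):

config HUGETLBFS
	bool "HugeTLB file system support"
	select PREEMPT_COUNT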

-- 
Mike Kravetz

> 							Thanx, Paul
> 
> diff --git a/kernel/rcu/Kconfig b/kernel/rcu/Kconfig
> index 3128b7c..7d9f989 100644
> --- a/kernel/rcu/Kconfig
> +++ b/kernel/rcu/Kconfig
> @@ -8,6 +8,7 @@ menu "RCU Subsystem"
>  config TREE_RCU
>  	bool
>  	default y if SMP
> +	select PREEMPT_COUNT
>  	help
>  	  This option selects the RCU implementation that is
>  	  designed for very large SMP system with hundreds or
> 

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v18 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-03-10 22:10           ` Mike Kravetz
@ 2021-03-10 23:28             ` Paul E. McKenney
  2021-03-11  8:40               ` Michal Hocko
  0 siblings, 1 reply; 52+ messages in thread
From: Paul E. McKenney @ 2021-03-10 23:28 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: Michal Hocko, Muchun Song, corbet, tglx, mingo, bp, x86, hpa,
	dave.hansen, luto, peterz, viro, akpm, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, song.bao.hua, david,
	naoya.horiguchi, joao.m.martins, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel, Chen Huang,
	Bodeddula Balasubramaniam

On Wed, Mar 10, 2021 at 02:10:12PM -0800, Mike Kravetz wrote:
> On 3/10/21 1:49 PM, Paul E. McKenney wrote:
> > On Wed, Mar 10, 2021 at 10:11:22PM +0100, Michal Hocko wrote:
> >> On Wed 10-03-21 10:56:08, Mike Kravetz wrote:
> >>> On 3/10/21 7:19 AM, Michal Hocko wrote:
> >>>> On Mon 08-03-21 18:28:02, Muchun Song wrote:
> >>>> [...]
> >>>>> @@ -1447,7 +1486,7 @@ void free_huge_page(struct page *page)
> >>>>>  	/*
> >>>>>  	 * Defer freeing if in non-task context to avoid hugetlb_lock deadlock.
> >>>>>  	 */
> >>>>> -	if (!in_task()) {
> >>>>> +	if (in_atomic()) {
> >>>>
> >>>> As I've said elsewhere in_atomic doesn't work for CONFIG_PREEMPT_COUNT=n.
> >>>> We need this change for other reasons and so it would be better to pull
> >>>> it out into a separate patch which also makes HUGETLB depend on
> >>>> PREEMPT_COUNT.
> >>>
> >>> Yes, the issue of calling put_page for hugetlb pages from any context
> >>> still needs work.  IMO, that is outside the scope of this series.  We
> >>> already have code in this path which blocks/sleeps.
> >>>
> >>> Making HUGETLB depend on PREEMPT_COUNT is too restrictive.  IIUC,
> >>> PREEMPT_COUNT will only be enabled if we enable:
> >>> PREEMPT "Preemptible Kernel (Low-Latency Desktop)"
> >>> PREEMPT_RT "Fully Preemptible Kernel (Real-Time)"
> >>> or, other 'debug' options.  These are not enabled in 'more common'
> >>> kernels.  Of course, we do not want to disable HUGETLB in common
> >>> configurations.
> >>
> >> I haven't tried that but PREEMPT_COUNT should be selectable even without
> >> any change to the preemption model (e.g. !PREEMPT).
> > 
> > It works reliably for me, for example as in the diff below.  So,
> > as Michal says, you should be able to add "select PREEMPT_COUNT" to
> > whatever Kconfig option you need to.
> > 
> 
> Thanks Paul.
> 
> I may have been misreading Michal's suggestion of "make HUGETLB depend on
> PREEMPT_COUNT".  We could "select PREEMPT_COUNT" if HUGETLB is enabled.
> However, since HUGETLB is enabled in most configs, then this would
> result in PREEMPT_COUNT also being enabled in most configs.  I honestly
> do not know how much this will cost us?  I assume that if it was free or
> really cheap it would already be always on?

There are a -lot- of configs out there, so are you sure that HUGETLB is
really enabled in most of them?  ;-)

More seriously, I was going by earlier emails in this and related threads
plus Michal's "PREEMPT_COUNT should be selectable".  But there are other
use cases that would like PREEMPT_COUNT.  And, to your point, there are
some who would rather PREEMPT_COUNT not be universally enabled.  I haven't
seen any performance or kernel-size numbers from either camp, however.

							Thanx, Paul

> -- 
> Mike Kravetz
> 
> > 							Thanx, Paul
> > 
> > diff --git a/kernel/rcu/Kconfig b/kernel/rcu/Kconfig
> > index 3128b7c..7d9f989 100644
> > --- a/kernel/rcu/Kconfig
> > +++ b/kernel/rcu/Kconfig
> > @@ -8,6 +8,7 @@ menu "RCU Subsystem"
> >  config TREE_RCU
> >  	bool
> >  	default y if SMP
> > +	select PREEMPT_COUNT
> >  	help
> >  	  This option selects the RCU implementation that is
> >  	  designed for very large SMP system with hundreds or
> > 

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [External] Re: [PATCH v18 1/9] mm: memory_hotplug: factor out bootmem core functions to bootmem_info.c
  2021-03-10 14:14   ` Michal Hocko
@ 2021-03-11  2:58     ` Muchun Song
  2021-03-11  8:45       ` Muchun Song
  0 siblings, 1 reply; 52+ messages in thread
From: Muchun Song @ 2021-03-11  2:58 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, Ingo Molnar, bp,
	x86, hpa, dave.hansen, luto, Peter Zijlstra, Alexander Viro,
	Andrew Morton, paulmck, mchehab+huawei, pawan.kumar.gupta,
	Randy Dunlap, oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel, Miaohe Lin,
	Chen Huang, Bodeddula Balasubramaniam

On Wed, Mar 10, 2021 at 10:14 PM Michal Hocko <mhocko@suse.com> wrote:
>
> [I am sorry for a late review]

Thanks for your review.

>
> On Mon 08-03-21 18:27:59, Muchun Song wrote:
> > Move bootmem info registration common API to individual bootmem_info.c.
> > And we will use {get,put}_page_bootmem() to initialize the page for the
> > vmemmap pages or free the vmemmap pages to buddy in the later patch.
> > So move them out of CONFIG_MEMORY_HOTPLUG_SPARSE. This is just code
> > movement without any functional change.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
> > Reviewed-by: Oscar Salvador <osalvador@suse.de>
> > Reviewed-by: David Hildenbrand <david@redhat.com>
> > Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
> > Tested-by: Chen Huang <chenhuang5@huawei.com>
> > Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>
>
> Separation from memory_hotplug.c is definitely a right step. I am
> wondering about the config dependency though
> [...]
> > diff --git a/mm/Makefile b/mm/Makefile
> > index 72227b24a616..daabf86d7da8 100644
> > --- a/mm/Makefile
> > +++ b/mm/Makefile
> > @@ -83,6 +83,7 @@ obj-$(CONFIG_SLUB) += slub.o
> >  obj-$(CONFIG_KASAN)  += kasan/
> >  obj-$(CONFIG_KFENCE) += kfence/
> >  obj-$(CONFIG_FAILSLAB) += failslab.o
> > +obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o
>
> I would have expected this would depend on CONFIG_SPARSE.
> BOOTMEM_INFO_NODE is really an odd thing to depend on here. There is
> some functionality which requires the node info but that can be gated
> specifically. Or what is the thinking behind?

At first my idea was to free the vmemmap pages through the bootmem
interface, so my first instinct was to rely on BOOTMEM_INFO_NODE.
Making it depend on CONFIG_SPARSEMEM makes sense to me. I will
update this in the next version.
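
Roughly something like this in mm/Makefile (only a sketch of the planned
change, the final form may differ):

obj-$(CONFIG_SPARSEMEM) += bootmem_info.o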

Thanks.

>
> This doesn't matter right now because it seems that the *_page_bootmem
> is only used by x86 outside of the memory hotplug.
>
> Other than that looks good to me.
> --
> Michal Hocko
> SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [External] Re: [PATCH v18 3/9] mm: hugetlb: free the vmemmap pages associated with each HugeTLB page
  2021-03-10 14:32   ` Michal Hocko
@ 2021-03-11  3:35     ` Muchun Song
  0 siblings, 0 replies; 52+ messages in thread
From: Muchun Song @ 2021-03-11  3:35 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, Ingo Molnar, bp,
	x86, hpa, dave.hansen, luto, Peter Zijlstra, Alexander Viro,
	Andrew Morton, paulmck, mchehab+huawei, pawan.kumar.gupta,
	Randy Dunlap, oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel, Chen Huang,
	Bodeddula Balasubramaniam

On Wed, Mar 10, 2021 at 10:32 PM Michal Hocko <mhocko@suse.com> wrote:
>
> On Mon 08-03-21 18:28:01, Muchun Song wrote:
> > Every HugeTLB has more than one struct page structure. We __know__ that
> > we only use the first 4(HUGETLB_CGROUP_MIN_ORDER) struct page structures
> > to store metadata associated with each HugeTLB.
>
> I think it would be great to make this explicit somewhere around the
> code which uses those struct pages.

OK. I will make patch #8 prior to this one. Thanks.

>
> > There are a lot of struct page structures associated with each HugeTLB
> > page. For tail pages, the value of compound_head is the same. So we can
> > reuse first page of tail page structures. We map the virtual addresses
> > of the remaining pages of tail page structures to the first tail page
> > struct, and then free these page frames. Therefore, we need to reserve
> > two pages as vmemmap areas.
> >
> > When we allocate a HugeTLB page from the buddy, we can free some vmemmap
> > pages associated with each HugeTLB page. It is more appropriate to do it
> > in the prep_new_huge_page().
> >
> > The free_vmemmap_pages_per_hpage(), which indicates how many vmemmap
> > pages associated with a HugeTLB page can be freed, returns zero for
> > now, which means the feature is disabled. We will enable it once all
> > the infrastructure is there.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > Reviewed-by: Oscar Salvador <osalvador@suse.de>
> > Tested-by: Chen Huang <chenhuang5@huawei.com>
> > Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>
>
> I do not see any issues here. I just want to point out that the amount
> of *BUG_ON is quite high to my taste. Most of them seem to be added just
> in case if something goes wrong or should never happen. These are
> usually bad reasons to add them IMHO. I would just drop those unless
> there is a very good reason to keep them around.

OK. I will drop the unnecessary *BUG_ON()s.

>
> I really appreciate how you made a high level design documentation to
> the source code directly. Talking about struct pages backing struct
> pages (vmemmap) is usually a good recipe for headache but those diagrams
> make it easy to follow the reasoning.
>
> Anyway
> Acked-by: michal Hocko <mhocko@suse.com>

Thanks.

>
> > ---
> >  include/linux/bootmem_info.h |  27 +++++-
> >  include/linux/mm.h           |   3 +
> >  mm/Makefile                  |   1 +
> >  mm/hugetlb.c                 |   3 +
> >  mm/hugetlb_vmemmap.c         | 219 +++++++++++++++++++++++++++++++++++++++++++
> >  mm/hugetlb_vmemmap.h         |  20 ++++
> >  mm/sparse-vmemmap.c          | 207 ++++++++++++++++++++++++++++++++++++++++
> >  7 files changed, 479 insertions(+), 1 deletion(-)
> >  create mode 100644 mm/hugetlb_vmemmap.c
> >  create mode 100644 mm/hugetlb_vmemmap.h
> >
> > diff --git a/include/linux/bootmem_info.h b/include/linux/bootmem_info.h
> > index 4ed6dee1adc9..ec03a624dfa2 100644
> > --- a/include/linux/bootmem_info.h
> > +++ b/include/linux/bootmem_info.h
> > @@ -2,7 +2,7 @@
> >  #ifndef __LINUX_BOOTMEM_INFO_H
> >  #define __LINUX_BOOTMEM_INFO_H
> >
> > -#include <linux/mmzone.h>
> > +#include <linux/mm.h>
> >
> >  /*
> >   * Types for free bootmem stored in page->lru.next. These have to be in
> > @@ -22,6 +22,27 @@ void __init register_page_bootmem_info_node(struct pglist_data *pgdat);
> >  void get_page_bootmem(unsigned long info, struct page *page,
> >                     unsigned long type);
> >  void put_page_bootmem(struct page *page);
> > +
> > +/*
> > + * Any memory allocated via the memblock allocator and not via the
> > + * buddy will be marked reserved already in the memmap. For those
> > + * pages, we can call this function to free it to buddy allocator.
> > + */
> > +static inline void free_bootmem_page(struct page *page)
> > +{
> > +     unsigned long magic = (unsigned long)page->freelist;
> > +
> > +     /*
> > +      * The reserve_bootmem_region sets the reserved flag on bootmem
> > +      * pages.
> > +      */
> > +     VM_BUG_ON_PAGE(page_ref_count(page) != 2, page);
> > +
> > +     if (magic == SECTION_INFO || magic == MIX_SECTION_INFO)
> > +             put_page_bootmem(page);
> > +     else
> > +             VM_BUG_ON_PAGE(1, page);
> > +}
> >  #else
> >  static inline void register_page_bootmem_info_node(struct pglist_data *pgdat)
> >  {
> > @@ -35,6 +56,10 @@ static inline void get_page_bootmem(unsigned long info, struct page *page,
> >                                   unsigned long type)
> >  {
> >  }
> > +
> > +static inline void free_bootmem_page(struct page *page)
> > +{
> > +}
> >  #endif
> >
> >  #endif /* __LINUX_BOOTMEM_INFO_H */
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index 77e64e3eac80..4ddfc31f21c6 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -2971,6 +2971,9 @@ static inline void print_vma_addr(char *prefix, unsigned long rip)
> >  }
> >  #endif
> >
> > +void vmemmap_remap_free(unsigned long start, unsigned long end,
> > +                     unsigned long reuse);
> > +
> >  void *sparse_buffer_alloc(unsigned long size);
> >  struct page * __populate_section_memmap(unsigned long pfn,
> >               unsigned long nr_pages, int nid, struct vmem_altmap *altmap);
> > diff --git a/mm/Makefile b/mm/Makefile
> > index daabf86d7da8..3d7d57e3b55b 100644
> > --- a/mm/Makefile
> > +++ b/mm/Makefile
> > @@ -71,6 +71,7 @@ obj-$(CONFIG_FRONTSWAP)     += frontswap.o
> >  obj-$(CONFIG_ZSWAP)  += zswap.o
> >  obj-$(CONFIG_HAS_DMA)        += dmapool.o
> >  obj-$(CONFIG_HUGETLBFS)      += hugetlb.o
> > +obj-$(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP)      += hugetlb_vmemmap.o
> >  obj-$(CONFIG_NUMA)   += mempolicy.o
> >  obj-$(CONFIG_SPARSEMEM)      += sparse.o
> >  obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index c232cb67dda2..43fed6785322 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -42,6 +42,7 @@
> >  #include <linux/userfaultfd_k.h>
> >  #include <linux/page_owner.h>
> >  #include "internal.h"
> > +#include "hugetlb_vmemmap.h"
> >
> >  int hugetlb_max_hstate __read_mostly;
> >  unsigned int default_hstate_idx;
> > @@ -1463,6 +1464,8 @@ void free_huge_page(struct page *page)
> >
> >  static void prep_new_huge_page(struct hstate *h, struct page *page, int nid)
> >  {
> > +     free_huge_page_vmemmap(h, page);
> > +
> >       INIT_LIST_HEAD(&page->lru);
> >       set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
> >       set_hugetlb_cgroup(page, NULL);
> > diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> > new file mode 100644
> > index 000000000000..0209b736e0b4
> > --- /dev/null
> > +++ b/mm/hugetlb_vmemmap.c
> > @@ -0,0 +1,219 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Free some vmemmap pages of HugeTLB
> > + *
> > + * Copyright (c) 2020, Bytedance. All rights reserved.
> > + *
> > + *     Author: Muchun Song <songmuchun@bytedance.com>
> > + *
> > + * The struct page structures (page structs) are used to describe a physical
> > + * page frame. By default, there is a one-to-one mapping from a page frame to
> > + * it's corresponding page struct.
> > + *
> > + * HugeTLB pages consist of multiple base page size pages and is supported by
> > + * many architectures. See hugetlbpage.rst in the Documentation directory for
> > + * more details. On the x86-64 architecture, HugeTLB pages of size 2MB and 1GB
> > + * are currently supported. Since the base page size on x86 is 4KB, a 2MB
> > + * HugeTLB page consists of 512 base pages and a 1GB HugeTLB page consists of
> > + * 4096 base pages. For each base page, there is a corresponding page struct.
> > + *
> > + * Within the HugeTLB subsystem, only the first 4 page structs are used to
> > + * contain unique information about a HugeTLB page. HUGETLB_CGROUP_MIN_ORDER
> > + * provides this upper limit. The only 'useful' information in the remaining
> > + * page structs is the compound_head field, and this field is the same for all
> > + * tail pages.
> > + *
> > + * By removing redundant page structs for HugeTLB pages, memory can be returned
> > + * to the buddy allocator for other uses.
> > + *
> > + * Different architectures support different HugeTLB pages. For example, the
> > + * following table is the HugeTLB page size supported by x86 and arm64
> > + * architectures. Because arm64 supports 4k, 16k, and 64k base pages and
> > + * supports contiguous entries, so it supports many kinds of sizes of HugeTLB
> > + * page.
> > + *
> > + * +--------------+-----------+-----------------------------------------------+
> > + * | Architecture | Page Size |                HugeTLB Page Size              |
> > + * +--------------+-----------+-----------+-----------+-----------+-----------+
> > + * |    x86-64    |    4KB    |    2MB    |    1GB    |           |           |
> > + * +--------------+-----------+-----------+-----------+-----------+-----------+
> > + * |              |    4KB    |   64KB    |    2MB    |    32MB   |    1GB    |
> > + * |              +-----------+-----------+-----------+-----------+-----------+
> > + * |    arm64     |   16KB    |    2MB    |   32MB    |     1GB   |           |
> > + * |              +-----------+-----------+-----------+-----------+-----------+
> > + * |              |   64KB    |    2MB    |  512MB    |    16GB   |           |
> > + * +--------------+-----------+-----------+-----------+-----------+-----------+
> > + *
> > + * When the system boot up, every HugeTLB page has more than one struct page
> > + * structs which size is (unit: pages):
> > + *
> > + *    struct_size = HugeTLB_Size / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE
> > + *
> > + * Where HugeTLB_Size is the size of the HugeTLB page. We know that the size
> > + * of the HugeTLB page is always n times PAGE_SIZE. So we can get the following
> > + * relationship.
> > + *
> > + *    HugeTLB_Size = n * PAGE_SIZE
> > + *
> > + * Then,
> > + *
> > + *    struct_size = n * PAGE_SIZE / PAGE_SIZE * sizeof(struct page) / PAGE_SIZE
> > + *                = n * sizeof(struct page) / PAGE_SIZE
> > + *
> > + * We can use huge mapping at the pud/pmd level for the HugeTLB page.
> > + *
> > + * For the HugeTLB page of the pmd level mapping, then
> > + *
> > + *    struct_size = n * sizeof(struct page) / PAGE_SIZE
> > + *                = PAGE_SIZE / sizeof(pte_t) * sizeof(struct page) / PAGE_SIZE
> > + *                = sizeof(struct page) / sizeof(pte_t)
> > + *                = 64 / 8
> > + *                = 8 (pages)
> > + *
> > + * Where n is how many pte entries which one page can contains. So the value of
> > + * n is (PAGE_SIZE / sizeof(pte_t)).
> > + *
> > + * This optimization only supports 64-bit system, so the value of sizeof(pte_t)
> > + * is 8. And this optimization also applicable only when the size of struct page
> > + * is a power of two. In most cases, the size of struct page is 64 bytes (e.g.
> > + * x86-64 and arm64). So if we use pmd level mapping for a HugeTLB page, the
> > + * size of struct page structs of it is 8 page frames which size depends on the
> > + * size of the base page.
> > + *
> > + * For the HugeTLB page of the pud level mapping, then
> > + *
> > + *    struct_size = PAGE_SIZE / sizeof(pmd_t) * struct_size(pmd)
> > + *                = PAGE_SIZE / 8 * 8 (pages)
> > + *                = PAGE_SIZE (pages)
> > + *
> > + * Where the struct_size(pmd) is the size of the struct page structs of a
> > + * HugeTLB page of the pmd level mapping.
> > + *
> > + * E.g.: A 2MB HugeTLB page on x86_64 consists in 8 page frames while 1GB
> > + * HugeTLB page consists in 4096.
> > + *
> > + * Next, we take the pmd level mapping of the HugeTLB page as an example to
> > + * show the internal implementation of this optimization. There are 8 pages
> > + * struct page structs associated with a HugeTLB page which is pmd mapped.
> > + *
> > + * Here is how things look before optimization.
> > + *
> > + *    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
> > + * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
> > + * |           |                     |     0     | -------------> |     0     |
> > + * |           |                     +-----------+                +-----------+
> > + * |           |                     |     1     | -------------> |     1     |
> > + * |           |                     +-----------+                +-----------+
> > + * |           |                     |     2     | -------------> |     2     |
> > + * |           |                     +-----------+                +-----------+
> > + * |           |                     |     3     | -------------> |     3     |
> > + * |           |                     +-----------+                +-----------+
> > + * |           |                     |     4     | -------------> |     4     |
> > + * |    PMD    |                     +-----------+                +-----------+
> > + * |   level   |                     |     5     | -------------> |     5     |
> > + * |  mapping  |                     +-----------+                +-----------+
> > + * |           |                     |     6     | -------------> |     6     |
> > + * |           |                     +-----------+                +-----------+
> > + * |           |                     |     7     | -------------> |     7     |
> > + * |           |                     +-----------+                +-----------+
> > + * |           |
> > + * |           |
> > + * |           |
> > + * +-----------+
> > + *
> > + * The value of page->compound_head is the same for all tail pages. The first
> > + * page of page structs (page 0) associated with the HugeTLB page contains the 4
> > + * page structs necessary to describe the HugeTLB. The only use of the remaining
> > + * pages of page structs (page 1 to page 7) is to point to page->compound_head.
> > + * Therefore, we can remap pages 2 to 7 to page 1. Only 2 pages of page structs
> > + * will be used for each HugeTLB page. This will allow us to free the remaining
> > + * 6 pages to the buddy allocator.
> > + *
> > + * Here is how things look after remapping.
> > + *
> > + *    HugeTLB                  struct pages(8 pages)         page frame(8 pages)
> > + * +-----------+ ---virt_to_page---> +-----------+   mapping to   +-----------+
> > + * |           |                     |     0     | -------------> |     0     |
> > + * |           |                     +-----------+                +-----------+
> > + * |           |                     |     1     | -------------> |     1     |
> > + * |           |                     +-----------+                +-----------+
> > + * |           |                     |     2     | ----------------^ ^ ^ ^ ^ ^
> > + * |           |                     +-----------+                   | | | | |
> > + * |           |                     |     3     | ------------------+ | | | |
> > + * |           |                     +-----------+                     | | | |
> > + * |           |                     |     4     | --------------------+ | | |
> > + * |    PMD    |                     +-----------+                       | | |
> > + * |   level   |                     |     5     | ----------------------+ | |
> > + * |  mapping  |                     +-----------+                         | |
> > + * |           |                     |     6     | ------------------------+ |
> > + * |           |                     +-----------+                           |
> > + * |           |                     |     7     | --------------------------+
> > + * |           |                     +-----------+
> > + * |           |
> > + * |           |
> > + * |           |
> > + * +-----------+
> > + *
> > + * When a HugeTLB is freed to the buddy system, we should allocate 6 pages for
> > + * vmemmap pages and restore the previous mapping relationship.
> > + *
> > + * For the HugeTLB page of the pud level mapping. It is similar to the former.
> > + * We also can use this approach to free (PAGE_SIZE - 2) vmemmap pages.
> > + *
> > + * Apart from the HugeTLB page of the pmd/pud level mapping, some architectures
> > + * (e.g. aarch64) provides a contiguous bit in the translation table entries
> > + * that hints to the MMU to indicate that it is one of a contiguous set of
> > + * entries that can be cached in a single TLB entry.
> > + *
> > + * The contiguous bit is used to increase the mapping size at the pmd and pte
> > + * (last) level. So this type of HugeTLB page can be optimized only when its
> > + * size of the struct page structs is greater than 2 pages.
> > + */
> > +#include "hugetlb_vmemmap.h"
> > +
> > +/*
> > + * There are a lot of struct page structures associated with each HugeTLB page.
> > + * For tail pages, the value of compound_head is the same. So we can reuse first
> > + * page of tail page structures. We map the virtual addresses of the remaining
> > + * pages of tail page structures to the first tail page struct, and then free
> > + * these page frames. Therefore, we need to reserve two pages as vmemmap areas.
> > + */
> > +#define RESERVE_VMEMMAP_NR           2U
> > +#define RESERVE_VMEMMAP_SIZE         (RESERVE_VMEMMAP_NR << PAGE_SHIFT)
> > +
> > +/*
> > + * How many vmemmap pages associated with a HugeTLB page that can be freed
> > + * to the buddy allocator.
> > + *
> > + * Todo: Returns zero for now, which means the feature is disabled. We will
> > + * enable it once all the infrastructure is there.
> > + */
> > +static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
> > +{
> > +     return 0;
> > +}
> > +
> > +static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
> > +{
> > +     return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
> > +}
> > +
> > +void free_huge_page_vmemmap(struct hstate *h, struct page *head)
> > +{
> > +     unsigned long vmemmap_addr = (unsigned long)head;
> > +     unsigned long vmemmap_end, vmemmap_reuse;
> > +
> > +     if (!free_vmemmap_pages_per_hpage(h))
> > +             return;
> > +
> > +     vmemmap_addr += RESERVE_VMEMMAP_SIZE;
> > +     vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h);
> > +     vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
> > +
> > +     /*
> > +      * Remap the vmemmap virtual address range [@vmemmap_addr, @vmemmap_end)
> > +      * to the page which @vmemmap_reuse is mapped to, then free the pages
> > +      * which the range [@vmemmap_addr, @vmemmap_end] is mapped to.
> > +      */
> > +     vmemmap_remap_free(vmemmap_addr, vmemmap_end, vmemmap_reuse);
> > +}
> > diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
> > new file mode 100644
> > index 000000000000..6923f03534d5
> > --- /dev/null
> > +++ b/mm/hugetlb_vmemmap.h
> > @@ -0,0 +1,20 @@
> > +// SPDX-License-Identifier: GPL-2.0
> > +/*
> > + * Free some vmemmap pages of HugeTLB
> > + *
> > + * Copyright (c) 2020, Bytedance. All rights reserved.
> > + *
> > + *     Author: Muchun Song <songmuchun@bytedance.com>
> > + */
> > +#ifndef _LINUX_HUGETLB_VMEMMAP_H
> > +#define _LINUX_HUGETLB_VMEMMAP_H
> > +#include <linux/hugetlb.h>
> > +
> > +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> > +void free_huge_page_vmemmap(struct hstate *h, struct page *head);
> > +#else
> > +static inline void free_huge_page_vmemmap(struct hstate *h, struct page *head)
> > +{
> > +}
> > +#endif /* CONFIG_HUGETLB_PAGE_FREE_VMEMMAP */
> > +#endif /* _LINUX_HUGETLB_VMEMMAP_H */
> > diff --git a/mm/sparse-vmemmap.c b/mm/sparse-vmemmap.c
> > index 16183d85a7d5..d3076a7a3783 100644
> > --- a/mm/sparse-vmemmap.c
> > +++ b/mm/sparse-vmemmap.c
> > @@ -27,8 +27,215 @@
> >  #include <linux/spinlock.h>
> >  #include <linux/vmalloc.h>
> >  #include <linux/sched.h>
> > +#include <linux/pgtable.h>
> > +#include <linux/bootmem_info.h>
> > +
> >  #include <asm/dma.h>
> >  #include <asm/pgalloc.h>
> > +#include <asm/tlbflush.h>
> > +
> > +/**
> > + * vmemmap_remap_walk - walk vmemmap page table
> > + *
> > + * @remap_pte:               called for each lowest-level entry (PTE).
> > + * @reuse_page:              the page which is reused for the tail vmemmap pages.
> > + * @reuse_addr:              the virtual address of the @reuse_page page.
> > + * @vmemmap_pages:   the list head of the vmemmap pages that can be freed.
> > + */
> > +struct vmemmap_remap_walk {
> > +     void (*remap_pte)(pte_t *pte, unsigned long addr,
> > +                       struct vmemmap_remap_walk *walk);
> > +     struct page *reuse_page;
> > +     unsigned long reuse_addr;
> > +     struct list_head *vmemmap_pages;
> > +};
> > +
> > +static void vmemmap_pte_range(pmd_t *pmd, unsigned long addr,
> > +                           unsigned long end,
> > +                           struct vmemmap_remap_walk *walk)
> > +{
> > +     pte_t *pte;
> > +
> > +     pte = pte_offset_kernel(pmd, addr);
> > +
> > +     /*
> > +      * The reuse_page is found 'first' in table walk before we start
> > +      * remapping (which is calling @walk->remap_pte).
> > +      */
> > +     if (!walk->reuse_page) {
> > +             BUG_ON(pte_none(*pte));
> > +             BUG_ON(walk->reuse_addr != addr);
> > +
> > +             walk->reuse_page = pte_page(*pte++);
> > +             /*
> > +              * Because the reuse address is part of the range that we are
> > +              * walking, skip the reuse address range.
> > +              */
> > +             addr += PAGE_SIZE;
> > +     }
> > +
> > +     for (; addr != end; addr += PAGE_SIZE, pte++) {
> > +             BUG_ON(pte_none(*pte));
> > +
> > +             walk->remap_pte(pte, addr, walk);
> > +     }
> > +}
> > +
> > +static void vmemmap_pmd_range(pud_t *pud, unsigned long addr,
> > +                           unsigned long end,
> > +                           struct vmemmap_remap_walk *walk)
> > +{
> > +     pmd_t *pmd;
> > +     unsigned long next;
> > +
> > +     pmd = pmd_offset(pud, addr);
> > +     do {
> > +             BUG_ON(pmd_none(*pmd) || pmd_leaf(*pmd));
> > +
> > +             next = pmd_addr_end(addr, end);
> > +             vmemmap_pte_range(pmd, addr, next, walk);
> > +     } while (pmd++, addr = next, addr != end);
> > +}
> > +
> > +static void vmemmap_pud_range(p4d_t *p4d, unsigned long addr,
> > +                           unsigned long end,
> > +                           struct vmemmap_remap_walk *walk)
> > +{
> > +     pud_t *pud;
> > +     unsigned long next;
> > +
> > +     pud = pud_offset(p4d, addr);
> > +     do {
> > +             BUG_ON(pud_none(*pud));
> > +
> > +             next = pud_addr_end(addr, end);
> > +             vmemmap_pmd_range(pud, addr, next, walk);
> > +     } while (pud++, addr = next, addr != end);
> > +}
> > +
> > +static void vmemmap_p4d_range(pgd_t *pgd, unsigned long addr,
> > +                           unsigned long end,
> > +                           struct vmemmap_remap_walk *walk)
> > +{
> > +     p4d_t *p4d;
> > +     unsigned long next;
> > +
> > +     p4d = p4d_offset(pgd, addr);
> > +     do {
> > +             BUG_ON(p4d_none(*p4d));
> > +
> > +             next = p4d_addr_end(addr, end);
> > +             vmemmap_pud_range(p4d, addr, next, walk);
> > +     } while (p4d++, addr = next, addr != end);
> > +}
> > +
> > +static void vmemmap_remap_range(unsigned long start, unsigned long end,
> > +                             struct vmemmap_remap_walk *walk)
> > +{
> > +     unsigned long addr = start;
> > +     unsigned long next;
> > +     pgd_t *pgd;
> > +
> > +     VM_BUG_ON(!IS_ALIGNED(start, PAGE_SIZE));
> > +     VM_BUG_ON(!IS_ALIGNED(end, PAGE_SIZE));
> > +
> > +     pgd = pgd_offset_k(addr);
> > +     do {
> > +             BUG_ON(pgd_none(*pgd));
> > +
> > +             next = pgd_addr_end(addr, end);
> > +             vmemmap_p4d_range(pgd, addr, next, walk);
> > +     } while (pgd++, addr = next, addr != end);
> > +
> > +     /*
> > +      * We only change the mapping of the vmemmap virtual address range
> > +      * [@start + PAGE_SIZE, end), so we only need to flush the TLB which
> > +      * belongs to the range.
> > +      */
> > +     flush_tlb_kernel_range(start + PAGE_SIZE, end);
> > +}
> > +
> > +/*
> > + * Free a vmemmap page. A vmemmap page can be allocated from the memblock
> > + * allocator or buddy allocator. If the PG_reserved flag is set, it means
> > + * that it allocated from the memblock allocator, just free it via the
> > + * free_bootmem_page(). Otherwise, use __free_page().
> > + */
> > +static inline void free_vmemmap_page(struct page *page)
> > +{
> > +     if (PageReserved(page))
> > +             free_bootmem_page(page);
> > +     else
> > +             __free_page(page);
> > +}
> > +
> > +/* Free a list of the vmemmap pages */
> > +static void free_vmemmap_page_list(struct list_head *list)
> > +{
> > +     struct page *page, *next;
> > +
> > +     list_for_each_entry_safe(page, next, list, lru) {
> > +             list_del(&page->lru);
> > +             free_vmemmap_page(page);
> > +     }
> > +}
> > +
> > +static void vmemmap_remap_pte(pte_t *pte, unsigned long addr,
> > +                           struct vmemmap_remap_walk *walk)
> > +{
> > +     /*
> > +      * Remap the tail pages as read-only to catch illegal write operation
> > +      * to the tail pages.
> > +      */
> > +     pgprot_t pgprot = PAGE_KERNEL_RO;
> > +     pte_t entry = mk_pte(walk->reuse_page, pgprot);
> > +     struct page *page = pte_page(*pte);
> > +
> > +     list_add(&page->lru, walk->vmemmap_pages);
> > +     set_pte_at(&init_mm, addr, pte, entry);
> > +}
> > +
> > +/**
> > + * vmemmap_remap_free - remap the vmemmap virtual address range [@start, @end)
> > + *                   to the page which @reuse is mapped to, then free vmemmap
> > + *                   which the range are mapped to.
> > + * @start:   start address of the vmemmap virtual address range that we want
> > + *           to remap.
> > + * @end:     end address of the vmemmap virtual address range that we want to
> > + *           remap.
> > + * @reuse:   reuse address.
> > + *
> > + * Note: This function depends on vmemmap being base page mapped. Please make
> > + * sure that we disable PMD mapping of vmemmap pages when calling this function.
> > + */
> > +void vmemmap_remap_free(unsigned long start, unsigned long end,
> > +                     unsigned long reuse)
> > +{
> > +     LIST_HEAD(vmemmap_pages);
> > +     struct vmemmap_remap_walk walk = {
> > +             .remap_pte      = vmemmap_remap_pte,
> > +             .reuse_addr     = reuse,
> > +             .vmemmap_pages  = &vmemmap_pages,
> > +     };
> > +
> > +     /*
> > +      * In order to make remapping routine most efficient for the huge pages,
> > +      * the routine of vmemmap page table walking has the following rules
> > +      * (see more details from the vmemmap_pte_range()):
> > +      *
> > +      * - The range [@start, @end) and the range [@reuse, @reuse + PAGE_SIZE)
> > +      *   should be continuous.
> > +      * - The @reuse address is part of the range [@reuse, @end) that we are
> > +      *   walking which is passed to vmemmap_remap_range().
> > +      * - The @reuse address is the first in the complete range.
> > +      *
> > +      * So we need to make sure that @start and @reuse meet the above rules.
> > +      */
> > +     BUG_ON(start - reuse != PAGE_SIZE);
> > +
> > +     vmemmap_remap_range(reuse, end, &walk);
> > +     free_vmemmap_page_list(&vmemmap_pages);
> > +}
> >
> >  /*
> >   * Allocate a block of memory to be used to back the virtual memory map
> > --
> > 2.11.0
> >
>
> --
> Michal Hocko
> SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [External] Re: [PATCH v18 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-03-10 14:21   ` Oscar Salvador
@ 2021-03-11  4:13     ` Muchun Song
  0 siblings, 0 replies; 52+ messages in thread
From: Muchun Song @ 2021-03-11  4:13 UTC (permalink / raw)
  To: Oscar Salvador
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, Ingo Molnar, bp,
	x86, hpa, dave.hansen, luto, Peter Zijlstra, Alexander Viro,
	Andrew Morton, paulmck, mchehab+huawei, pawan.kumar.gupta,
	Randy Dunlap, oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Michal Hocko,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel, Chen Huang,
	Bodeddula Balasubramaniam

On Wed, Mar 10, 2021 at 10:21 PM Oscar Salvador <osalvador@suse.de> wrote:
>
> On Mon, Mar 08, 2021 at 06:28:02PM +0800, Muchun Song wrote:
> > When we free a HugeTLB page to the buddy allocator, we need to allocate
> > the vmemmap pages associated with it. However, we may not be able to
> > allocate the vmemmap pages when the system is under memory pressure. In
> > this case, we just refuse to free the HugeTLB page. This changes behavior
> > in some corner cases as listed below:
> >
> >  1) Failing to free a huge page triggered by the user (decrease nr_pages).
> >
> >     User needs to try again later.
> >
> >  2) Failing to free a surplus huge page when freed by the application.
> >
> >     Try again later when freeing a huge page next time.
> >
> >  3) Failing to dissolve a free huge page on ZONE_MOVABLE via
> >     offline_pages().
> >
> >     This can happen when we have plenty of ZONE_MOVABLE memory, but
> >     not enough kernel memory to allocate vmemmmap pages.  We may even
> >     be able to migrate huge page contents, but will not be able to
> >     dissolve the source huge page.  This will prevent an offline
> >     operation and is unfortunate as memory offlining is expected to
> >     succeed on movable zones.  Users that depend on memory hotplug
> >     to succeed for movable zones should carefully consider whether the
> >     memory savings gained from this feature are worth the risk of
> >     possibly not being able to offline memory in certain situations.
>
> This is nice to have it here, but a normal user won't dig in the kernel to
> figure this out, so my question is: Do we have this documented somewhere under
> Documentation/?
> If not, could we document it there? It is nice to warn about this things were
> sysadmins can find them.

Makes sense. I will do this.

>
> >  4) Failing to dissolve a huge page on CMA/ZONE_MOVABLE via
> >     alloc_contig_range() - once we have that handling in place. Mainly
> >     affects CMA and virtio-mem.
> >
> >     Similar to 3). virito-mem will handle migration errors gracefully.
> >     CMA might be able to fallback on other free areas within the CMA
> >     region.
> >
> > Vmemmap pages are allocated from the page freeing context. In order for
> > those allocations to be not disruptive (e.g. trigger oom killer)
> > __GFP_NORETRY is used. hugetlb_lock is dropped for the allocation
> > because a non sleeping allocation would be too fragile and it could fail
> > too easily under memory pressure. GFP_ATOMIC or other modes to access
> > memory reserves is not used because we want to prevent consuming
> > reserves under heavy hugetlb freeing.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > Tested-by: Chen Huang <chenhuang5@huawei.com>
> > Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>
>
> Sorry for jumping in late.
> It looks good to me:
>
> Reviewed-by: Oscar Salvador <osalvador@suse.de>

Thanks.

>
> Minor request above and below:
>
> > ---
> >  Documentation/admin-guide/mm/hugetlbpage.rst |  8 +++
> >  include/linux/mm.h                           |  2 +
> >  mm/hugetlb.c                                 | 92 +++++++++++++++++++++-------
> >  mm/hugetlb_vmemmap.c                         | 32 ++++++----
> >  mm/hugetlb_vmemmap.h                         | 23 +++++++
> >  mm/sparse-vmemmap.c                          | 75 ++++++++++++++++++++++-
> >  6 files changed, 197 insertions(+), 35 deletions(-)
>
> [...]
>
>
>
> Could we place a brief comment about what we expect to return here?

OK. Will do.
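
Something along these lines above alloc_huge_page_vmemmap() (wording is
only a draft):

/*
 * Returns 0 on success, -ENOMEM if the vmemmap pages could not be
 * allocated, in which case the HugeTLB page must not be freed to
 * the buddy allocator.
 */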

>
> > -static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
> > +int alloc_huge_page_vmemmap(struct hstate *h, struct page *head)
> >  {
> > -     return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
> > +     unsigned long vmemmap_addr = (unsigned long)head;
> > +     unsigned long vmemmap_end, vmemmap_reuse;
> > +
> > +     if (!free_vmemmap_pages_per_hpage(h))
> > +             return 0;
> > +
> > +     vmemmap_addr += RESERVE_VMEMMAP_SIZE;
> > +     vmemmap_end = vmemmap_addr + free_vmemmap_pages_size_per_hpage(h);
> > +     vmemmap_reuse = vmemmap_addr - PAGE_SIZE;
> > +     /*
> > +      * The pages which the vmemmap virtual address range [@vmemmap_addr,
> > +      * @vmemmap_end) are mapped to are freed to the buddy allocator, and
> > +      * the range is mapped to the page which @vmemmap_reuse is mapped to.
> > +      * When a HugeTLB page is freed to the buddy allocator, previously
> > +      * discarded vmemmap pages must be allocated and remapping.
> > +      */
> > +     return vmemmap_remap_alloc(vmemmap_addr, vmemmap_end, vmemmap_reuse,
> > +                                GFP_KERNEL | __GFP_NORETRY | __GFP_THISNODE);
> >  }
>
> --
> Oscar Salvador
> SUSE L3

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [External] Re: [PATCH v18 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-03-10 15:19   ` Michal Hocko
  2021-03-10 18:56     ` Mike Kravetz
@ 2021-03-11  4:26     ` Muchun Song
  2021-03-11  8:46       ` Michal Hocko
  1 sibling, 1 reply; 52+ messages in thread
From: Muchun Song @ 2021-03-11  4:26 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, Ingo Molnar, bp,
	x86, hpa, dave.hansen, luto, Peter Zijlstra, Alexander Viro,
	Andrew Morton, paulmck, mchehab+huawei, pawan.kumar.gupta,
	Randy Dunlap, oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel, Chen Huang,
	Bodeddula Balasubramaniam

On Wed, Mar 10, 2021 at 11:19 PM Michal Hocko <mhocko@suse.com> wrote:
>
> On Mon 08-03-21 18:28:02, Muchun Song wrote:
> [...]
> > -static void update_and_free_page(struct hstate *h, struct page *page)
> > +static int update_and_free_page(struct hstate *h, struct page *page)
> > +     __releases(&hugetlb_lock) __acquires(&hugetlb_lock)
> >  {
> >       int i;
> >       struct page *subpage = page;
> > +     int nid = page_to_nid(page);
> >
> >       if (hstate_is_gigantic(h) && !gigantic_page_runtime_supported())
> > -             return;
> > +             return 0;
> >
> >       h->nr_huge_pages--;
> > -     h->nr_huge_pages_node[page_to_nid(page)]--;
> > +     h->nr_huge_pages_node[nid]--;
> > +     VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page);
> > +     VM_BUG_ON_PAGE(hugetlb_cgroup_from_page_rsvd(page), page);
>
> > +     set_page_refcounted(page);
> > +     set_compound_page_dtor(page, NULL_COMPOUND_DTOR);
> > +
> > +     /*
> > +      * If the vmemmap pages associated with the HugeTLB page can be
> > +      * optimized or the page is gigantic, we might block in
> > +      * alloc_huge_page_vmemmap() or free_gigantic_page(). In both
> > +      * cases, drop the hugetlb_lock.
> > +      */
> > +     if (free_vmemmap_pages_per_hpage(h) || hstate_is_gigantic(h))
> > +             spin_unlock(&hugetlb_lock);
> > +
> > +     if (alloc_huge_page_vmemmap(h, page)) {
> > +             spin_lock(&hugetlb_lock);
> > +             INIT_LIST_HEAD(&page->lru);
> > +             set_compound_page_dtor(page, HUGETLB_PAGE_DTOR);
> > +             h->nr_huge_pages++;
> > +             h->nr_huge_pages_node[nid]++;
> > +
> > +             /*
> > +              * If we cannot allocate vmemmap pages, just refuse to free the
> > +              * page and put the page back on the hugetlb free list and treat
> > +              * as a surplus page.
> > +              */
> > +             h->surplus_huge_pages++;
> > +             h->surplus_huge_pages_node[nid]++;
> > +
> > +             /*
> > +              * The refcount can possibly be increased by memory-failure or
> > +              * soft_offline handlers.
>
> This comment could be more helpful. I believe you want to say this
>                 /*
>                  * HWpoisoning code can increment the reference
>                  * count here. If there is a race then bail out
>                  * the holder of the additional reference count will
>                  * free up the page with put_page.

Right. I will reuse this. Thanks.

> > +              */
> > +             if (likely(put_page_testzero(page))) {
> > +                     arch_clear_hugepage_flags(page);
> > +                     enqueue_huge_page(h, page);
> > +             }
> > +
> > +             return -ENOMEM;
> > +     }
> > +
> >       for (i = 0; i < pages_per_huge_page(h);
> >            i++, subpage = mem_map_next(subpage, page, i)) {
> >               subpage->flags &= ~(1 << PG_locked | 1 << PG_error |
> [...]
> > @@ -1447,7 +1486,7 @@ void free_huge_page(struct page *page)
> >       /*
> >        * Defer freeing if in non-task context to avoid hugetlb_lock deadlock.
> >        */
> > -     if (!in_task()) {
> > +     if (in_atomic()) {
>
> As I've said elsewhere in_atomic doesn't work for CONFIG_PREEMPT_COUNT=n.
> We need this change for other reasons and so it would be better to pull
> it out into a separate patch which also makes HUGETLB depend on
> PREEMPT_COUNT.
>
> [...]
> > @@ -1771,8 +1813,12 @@ int dissolve_free_huge_page(struct page *page)
> >               h->free_huge_pages--;
> >               h->free_huge_pages_node[nid]--;
> >               h->max_huge_pages--;
> > -             update_and_free_page(h, head);
> > -             rc = 0;
> > +             rc = update_and_free_page(h, head);
> > +             if (rc) {
> > +                     h->surplus_huge_pages--;
> > +                     h->surplus_huge_pages_node[nid]--;
> > +                     h->max_huge_pages++;
>
> This is quite ugly and confusing. update_and_free_page is careful to do
> the proper counters accounting and now you just override it partially.
> Why cannot we rely on update_and_free_page do the right thing?

The dissolve path is special here. Since update_and_free_page() failed,
the number of surplus pages was incremented. Surplus pages are the
huge pages allocated beyond max_huge_pages. Since we are restoring
max_huge_pages here, we should decrement (undo) the addition to
surplus_huge_pages and surplus_huge_pages_node[nid].
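
In other words, annotating the hunk above (the comments are only mine,
not part of the patch):

	rc = update_and_free_page(h, head);
	if (rc) {
		/*
		 * update_and_free_page() put the page back on the free
		 * list and accounted it as a surplus page.  Restoring
		 * max_huge_pages means the page is no longer above the
		 * limit, so undo the surplus accounting as well.
		 */
		h->surplus_huge_pages--;
		h->surplus_huge_pages_node[nid]--;
		h->max_huge_pages++;
	}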


>
> --
> Michal Hocko
> SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [External] Re: [PATCH v18 5/9] mm: hugetlb: set the PageHWPoison to the raw error page
  2021-03-10 15:27   ` Michal Hocko
@ 2021-03-11  6:34     ` Muchun Song
  2021-03-11  8:50       ` Michal Hocko
  0 siblings, 1 reply; 52+ messages in thread
From: Muchun Song @ 2021-03-11  6:34 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, Ingo Molnar, bp,
	x86, hpa, dave.hansen, luto, Peter Zijlstra, Alexander Viro,
	Andrew Morton, paulmck, mchehab+huawei, pawan.kumar.gupta,
	Randy Dunlap, oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel, Chen Huang,
	Bodeddula Balasubramaniam

On Wed, Mar 10, 2021 at 11:28 PM Michal Hocko <mhocko@suse.com> wrote:
>
> On Mon 08-03-21 18:28:03, Muchun Song wrote:
> > Because we reuse the first tail vmemmap page frame and remap it
> > with read-only, we cannot set the PageHWPosion on some tail pages.
> > So we can use the head[4].private (There are at least 128 struct
> > page structures associated with the optimized HugeTLB page, so
> > using head[4].private is safe) to record the real error page index
> > and set the raw error page PageHWPoison later.
>
> Can we have more poisoned tail pages? Also who does consume that index
> and set the HWPoison on the proper tail page?

Good point. I looked at the memory failure routine more closely.
If we do not clear HWPoison on the head page, we cannot
poison another tail page.

So we should not change the destructor of the huge page from
HUGETLB_PAGE_DTOR to NULL_COMPOUND_DTOR before calling
alloc_huge_page_vmemmap(). That way, the PageHuge() check
below always returns true.

I need to fix this in the previous patch.

memory_failure()
    if (PageHuge(page))                    /* must keep returning true while the vmemmap is restored */
        memory_failure_hugetlb()
            head = compound_head(page)
            if (TestSetPageHWPoison(head)) /* a second error on the same huge page bails out here */
                return

Thanks.

>
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > Reviewed-by: Oscar Salvador <osalvador@suse.de>
> > Acked-by: David Rientjes <rientjes@google.com>
> > Tested-by: Chen Huang <chenhuang5@huawei.com>
> > Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>
> > ---
> >  mm/hugetlb.c | 80 ++++++++++++++++++++++++++++++++++++++++++++++++++++++------
> >  1 file changed, 72 insertions(+), 8 deletions(-)
> >
> > diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> > index 377e0c1b283f..c0c1b7635ca9 100644
> > --- a/mm/hugetlb.c
> > +++ b/mm/hugetlb.c
> > @@ -1304,6 +1304,74 @@ static inline void destroy_compound_gigantic_page(struct page *page,
> >                                               unsigned int order) { }
> >  #endif
> >
> > +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> > +static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
> > +{
> > +     struct page *page;
> > +
> > +     if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
> > +             return;
> > +
> > +     page = head + page_private(head + 4);
> > +
> > +     /*
> > +      * Move PageHWPoison flag from head page to the raw error page,
> > +      * which makes any subpages rather than the error page reusable.
> > +      */
> > +     if (page != head) {
> > +             SetPageHWPoison(page);
> > +             ClearPageHWPoison(head);
> > +     }
> > +}
> > +
> > +static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
> > +                                     struct page *page)
> > +{
> > +     if (!PageHWPoison(head))
> > +             return;
> > +
> > +     if (free_vmemmap_pages_per_hpage(h)) {
> > +             set_page_private(head + 4, page - head);
> > +     } else if (page != head) {
> > +             /*
> > +              * Move PageHWPoison flag from head page to the raw error page,
> > +              * which makes any subpages rather than the error page reusable.
> > +              */
> > +             SetPageHWPoison(page);
> > +             ClearPageHWPoison(head);
> > +     }
> > +}
> > +
> > +static inline void hwpoison_subpage_clear(struct hstate *h, struct page *head)
> > +{
> > +     if (!PageHWPoison(head) || !free_vmemmap_pages_per_hpage(h))
> > +             return;
> > +
> > +     set_page_private(head + 4, 0);
> > +}
> > +#else
> > +static inline void hwpoison_subpage_deliver(struct hstate *h, struct page *head)
> > +{
> > +}
> > +
> > +static inline void hwpoison_subpage_set(struct hstate *h, struct page *head,
> > +                                     struct page *page)
> > +{
> > +     if (PageHWPoison(head) && page != head) {
> > +             /*
> > +              * Move PageHWPoison flag from head page to the raw error page,
> > +              * which makes any subpages rather than the error page reusable.
> > +              */
> > +             SetPageHWPoison(page);
> > +             ClearPageHWPoison(head);
> > +     }
> > +}
> > +
> > +static inline void hwpoison_subpage_clear(struct hstate *h, struct page *head)
> > +{
> > +}
> > +#endif
> > +
> >  static int update_and_free_page(struct hstate *h, struct page *page)
> >       __releases(&hugetlb_lock) __acquires(&hugetlb_lock)
> >  {
> > @@ -1357,6 +1425,8 @@ static int update_and_free_page(struct hstate *h, struct page *page)
> >               return -ENOMEM;
> >       }
> >
> > +     hwpoison_subpage_deliver(h, page);
> > +
> >       for (i = 0; i < pages_per_huge_page(h);
> >            i++, subpage = mem_map_next(subpage, page, i)) {
> >               subpage->flags &= ~(1 << PG_locked | 1 << PG_error |
> > @@ -1801,14 +1871,7 @@ int dissolve_free_huge_page(struct page *page)
> >                       goto retry;
> >               }
> >
> > -             /*
> > -              * Move PageHWPoison flag from head page to the raw error page,
> > -              * which makes any subpages rather than the error page reusable.
> > -              */
> > -             if (PageHWPoison(head) && page != head) {
> > -                     SetPageHWPoison(page);
> > -                     ClearPageHWPoison(head);
> > -             }
> > +             hwpoison_subpage_set(h, head, page);
> >               list_del(&head->lru);
> >               h->free_huge_pages--;
> >               h->free_huge_pages_node[nid]--;
> > @@ -1818,6 +1881,7 @@ int dissolve_free_huge_page(struct page *page)
> >                       h->surplus_huge_pages--;
> >                       h->surplus_huge_pages_node[nid]--;
> >                       h->max_huge_pages++;
> > +                     hwpoison_subpage_clear(h, head);
> >               }
> >       }
> >  out:
> > --
> > 2.11.0
> >
>
> --
> Michal Hocko
> SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [External] Re: [PATCH v18 6/9] mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap
  2021-03-10 15:37   ` Michal Hocko
  2021-03-10 17:15     ` Randy Dunlap
@ 2021-03-11  6:36     ` Muchun Song
  1 sibling, 0 replies; 52+ messages in thread
From: Muchun Song @ 2021-03-11  6:36 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, Ingo Molnar, bp,
	x86, hpa, dave.hansen, luto, Peter Zijlstra, Alexander Viro,
	Andrew Morton, paulmck, mchehab+huawei, pawan.kumar.gupta,
	Randy Dunlap, oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel, Miaohe Lin,
	Chen Huang, Bodeddula Balasubramaniam

On Wed, Mar 10, 2021 at 11:37 PM Michal Hocko <mhocko@suse.com> wrote:
>
> On Mon 08-03-21 18:28:04, Muchun Song wrote:
> > Add a kernel parameter hugetlb_free_vmemmap to enable the feature of
> > freeing unused vmemmap pages associated with each hugetlb page on boot.
> >
> > We disables PMD mapping of vmemmap pages for x86-64 arch when this
> > feature is enabled. Because vmemmap_remap_free() depends on vmemmap
> > being base page mapped.
> >
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > Reviewed-by: Oscar Salvador <osalvador@suse.de>
> > Reviewed-by: Barry Song <song.bao.hua@hisilicon.com>
> > Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
> > Tested-by: Chen Huang <chenhuang5@huawei.com>
> > Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>
> > ---
> >  Documentation/admin-guide/kernel-parameters.txt | 14 ++++++++++++++
> >  Documentation/admin-guide/mm/hugetlbpage.rst    |  3 +++
> >  arch/x86/mm/init_64.c                           |  8 ++++++--
> >  include/linux/hugetlb.h                         | 19 +++++++++++++++++++
> >  mm/hugetlb_vmemmap.c                            | 24 ++++++++++++++++++++++++
> >  5 files changed, 66 insertions(+), 2 deletions(-)
> >
> > diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> > index 04545725f187..de91d54573c4 100644
> > --- a/Documentation/admin-guide/kernel-parameters.txt
> > +++ b/Documentation/admin-guide/kernel-parameters.txt
> > @@ -1557,6 +1557,20 @@
> >                       Documentation/admin-guide/mm/hugetlbpage.rst.
> >                       Format: size[KMG]
> >
> > +     hugetlb_free_vmemmap=
> > +                     [KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
> > +                     this controls freeing unused vmemmap pages associated
> > +                     with each HugeTLB page. When this option is enabled,
> > +                     we disable PMD/huge page mapping of vmemmap pages which
> > +                     increase page table pages. So if a user/sysadmin only
> > +                     uses a small number of HugeTLB pages (as a percentage
> > +                     of system memory), they could end up using more memory
> > +                     with hugetlb_free_vmemmap on as opposed to off.
> > +                     Format: { on | off (default) }
>
> Please note this is an admin guide and for those this seems overly low

OK.

> level. I would use something like the following
>                         [KNL] Reguires CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
>                         enabled.
>                         Allows heavy hugetlb users to free up some more
>                         memory (6 * PAGE_SIZE for each 2MB hugetlb
>                         page).
>                         This feauture is not free though. Large page
>                         tables are not use to back vmemmap pages which
>                         can lead to a performance degradation for some
>                         workloads. Also there will be memory allocation
>                         required when hugetlb pages are freed from the
>                         pool which can lead to corner cases under heavy
>                         memory pressure.

Thanks a lot. I will update this.

> > +
> > +                     on:  enable the feature
> > +                     off: disable the feature
> > +
> >       hung_task_panic=
> >                       [KNL] Should the hung task detector generate panics.
> >                       Format: 0 | 1
> --
> Michal Hocko
> SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [External] Re: [PATCH v18 6/9] mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap
  2021-03-10 17:15     ` Randy Dunlap
@ 2021-03-11  6:36       ` Muchun Song
  0 siblings, 0 replies; 52+ messages in thread
From: Muchun Song @ 2021-03-11  6:36 UTC (permalink / raw)
  To: Randy Dunlap
  Cc: Michal Hocko, Jonathan Corbet, Mike Kravetz, Thomas Gleixner,
	Ingo Molnar, bp, x86, hpa, dave.hansen, luto, Peter Zijlstra,
	Alexander Viro, Andrew Morton, paulmck, mchehab+huawei,
	pawan.kumar.gupta, oneukum, anshuman.khandual, jroedel,
	Mina Almasry, David Rientjes, Matthew Wilcox, Oscar Salvador,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel, Miaohe Lin,
	Chen Huang, Bodeddula Balasubramaniam

On Thu, Mar 11, 2021 at 1:16 AM Randy Dunlap <rdunlap@infradead.org> wrote:
>
> On 3/10/21 7:37 AM, Michal Hocko wrote:
> > On Mon 08-03-21 18:28:04, Muchun Song wrote:
> >> Add a kernel parameter hugetlb_free_vmemmap to enable the feature of
> >> freeing unused vmemmap pages associated with each hugetlb page on boot.
> >>
> >> We disables PMD mapping of vmemmap pages for x86-64 arch when this
> >> feature is enabled. Because vmemmap_remap_free() depends on vmemmap
> >> being base page mapped.
> >>
> >> Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> >> Reviewed-by: Oscar Salvador <osalvador@suse.de>
> >> Reviewed-by: Barry Song <song.bao.hua@hisilicon.com>
> >> Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
> >> Tested-by: Chen Huang <chenhuang5@huawei.com>
> >> Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>
> >> ---
> >>  Documentation/admin-guide/kernel-parameters.txt | 14 ++++++++++++++
> >>  Documentation/admin-guide/mm/hugetlbpage.rst    |  3 +++
> >>  arch/x86/mm/init_64.c                           |  8 ++++++--
> >>  include/linux/hugetlb.h                         | 19 +++++++++++++++++++
> >>  mm/hugetlb_vmemmap.c                            | 24 ++++++++++++++++++++++++
> >>  5 files changed, 66 insertions(+), 2 deletions(-)
> >>
> >> diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> >> index 04545725f187..de91d54573c4 100644
> >> --- a/Documentation/admin-guide/kernel-parameters.txt
> >> +++ b/Documentation/admin-guide/kernel-parameters.txt
> >> @@ -1557,6 +1557,20 @@
> >>                      Documentation/admin-guide/mm/hugetlbpage.rst.
> >>                      Format: size[KMG]
> >>
> >> +    hugetlb_free_vmemmap=
> >> +                    [KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
> >> +                    this controls freeing unused vmemmap pages associated
> >> +                    with each HugeTLB page. When this option is enabled,
> >> +                    we disable PMD/huge page mapping of vmemmap pages which
> >> +                    increase page table pages. So if a user/sysadmin only
> >> +                    uses a small number of HugeTLB pages (as a percentage
> >> +                    of system memory), they could end up using more memory
> >> +                    with hugetlb_free_vmemmap on as opposed to off.
> >> +                    Format: { on | off (default) }
> >
> > Please note this is an admin guide and for those this seems overly low
> > level. I would use something like the following
> >                       [KNL] Reguires CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> >                       enabled.
> >                       Allows heavy hugetlb users to free up some more
> >                       memory (6 * PAGE_SIZE for each 2MB hugetlb
> >                       page).
> >                       This feauture is not free though. Large page
> >                       tables are not use to back vmemmap pages which
>
>                                are not used

Thanks.

>
> >                       can lead to a performance degradation for some
> >                       workloads. Also there will be memory allocation
> >                       required when hugetlb pages are freed from the
> >                       pool which can lead to corner cases under heavy
> >                       memory pressure.
> >> +
> >> +                    on:  enable the feature
> >> +                    off: disable the feature
> >> +
> >>      hung_task_panic=
> >>                      [KNL] Should the hung task detector generate panics.
> >>                      Format: 0 | 1
>
>
> --
> ~Randy
>

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [External] Re: [PATCH v18 9/9] mm: hugetlb: optimize the code with the help of the compiler
  2021-03-10 15:41   ` Michal Hocko
@ 2021-03-11  7:33     ` Muchun Song
  2021-03-11  8:55       ` Michal Hocko
  0 siblings, 1 reply; 52+ messages in thread
From: Muchun Song @ 2021-03-11  7:33 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, Ingo Molnar, bp,
	x86, hpa, dave.hansen, luto, Peter Zijlstra, Alexander Viro,
	Andrew Morton, paulmck, mchehab+huawei, pawan.kumar.gupta,
	Randy Dunlap, oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel, Miaohe Lin,
	Chen Huang, Bodeddula Balasubramaniam

On Wed, Mar 10, 2021 at 11:41 PM Michal Hocko <mhocko@suse.com> wrote:
>
> On Mon 08-03-21 18:28:07, Muchun Song wrote:
> > When the "struct page size" crosses page boundaries we cannot
> > make use of this feature. Let free_vmemmap_pages_per_hpage()
> > return zero if that is the case, most of the functions can be
> > optimized away.
>
> I am confused. Don't you check for this in early_hugetlb_free_vmemmap_param already?

Right.

> Why do we need any runtime checks?

If the size of the struct page is not a power of 2, the compiler can
see that is_hugetlb_free_vmemmap_enabled() always returns false, so
the caller's code snippet can be optimized away.

E.g.

if (is_hugetlb_free_vmemmap_enabled())
        /* do something */

The compiler can drop "/* do something */" directly, because
it knows is_hugetlb_free_vmemmap_enabled() always returns
false.
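
For example, assuming the free_vmemmap_pages_per_hpage() helper added
by this patch, a caller shaped like the simplified sketch below
collapses to an empty function when sizeof(struct page) is not a
power of 2:

static void free_huge_page_vmemmap(struct hstate *h, struct page *head)
{
	/*
	 * sizeof(struct page) is a compile-time constant, so
	 * free_vmemmap_pages_per_hpage() folds to 0 here and the
	 * compiler eliminates everything below as dead code.
	 */
	if (!free_vmemmap_pages_per_hpage(h))
		return;

	/* ... remap the vmemmap and return the freed pages to buddy ... */
}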

Thanks.

>
> > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
> > Reviewed-by: Oscar Salvador <osalvador@suse.de>
> > Tested-by: Chen Huang <chenhuang5@huawei.com>
> > Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>
> > ---
> >  include/linux/hugetlb.h | 3 ++-
> >  mm/hugetlb_vmemmap.c    | 7 +++++++
> >  mm/hugetlb_vmemmap.h    | 6 ++++++
> >  3 files changed, 15 insertions(+), 1 deletion(-)
> >
> > diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> > index c70421e26189..333dd0479fc2 100644
> > --- a/include/linux/hugetlb.h
> > +++ b/include/linux/hugetlb.h
> > @@ -880,7 +880,8 @@ extern bool hugetlb_free_vmemmap_enabled;
> >
> >  static inline bool is_hugetlb_free_vmemmap_enabled(void)
> >  {
> > -     return hugetlb_free_vmemmap_enabled;
> > +     return hugetlb_free_vmemmap_enabled &&
> > +            is_power_of_2(sizeof(struct page));
> >  }
> >  #else
> >  static inline bool is_hugetlb_free_vmemmap_enabled(void)
> > diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> > index 33e42678abe3..1ba1ef45c48c 100644
> > --- a/mm/hugetlb_vmemmap.c
> > +++ b/mm/hugetlb_vmemmap.c
> > @@ -265,6 +265,13 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
> >       BUILD_BUG_ON(__NR_USED_SUBPAGE >=
> >                    RESERVE_VMEMMAP_SIZE / sizeof(struct page));
> >
> > +     /*
> > +      * The compiler can help us to optimize this function to null
> > +      * when the size of the struct page is not power of 2.
> > +      */
> > +     if (!is_power_of_2(sizeof(struct page)))
> > +             return;
> > +
> >       if (!hugetlb_free_vmemmap_enabled)
> >               return;
> >
> > diff --git a/mm/hugetlb_vmemmap.h b/mm/hugetlb_vmemmap.h
> > index cb2bef8f9e73..29aaaf7b741e 100644
> > --- a/mm/hugetlb_vmemmap.h
> > +++ b/mm/hugetlb_vmemmap.h
> > @@ -21,6 +21,12 @@ void hugetlb_vmemmap_init(struct hstate *h);
> >   */
> >  static inline unsigned int free_vmemmap_pages_per_hpage(struct hstate *h)
> >  {
> > +     /*
> > +      * This check aims to let the compiler help us optimize the code as
> > +      * much as possible.
> > +      */
> > +     if (!is_power_of_2(sizeof(struct page)))
> > +             return 0;
> >       return h->nr_free_vmemmap_pages;
> >  }
> >  #else
> > --
> > 2.11.0
> >
>
> --
> Michal Hocko
> SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v18 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-03-10 23:28             ` Paul E. McKenney
@ 2021-03-11  8:40               ` Michal Hocko
  2021-03-11 12:17                 ` Michal Hocko
  0 siblings, 1 reply; 52+ messages in thread
From: Michal Hocko @ 2021-03-11  8:40 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Mike Kravetz, Muchun Song, corbet, tglx, mingo, bp, x86, hpa,
	dave.hansen, luto, peterz, viro, akpm, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, song.bao.hua, david,
	naoya.horiguchi, joao.m.martins, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel, Chen Huang,
	Bodeddula Balasubramaniam

On Wed 10-03-21 15:28:51, Paul E. McKenney wrote:
> On Wed, Mar 10, 2021 at 02:10:12PM -0800, Mike Kravetz wrote:
> > On 3/10/21 1:49 PM, Paul E. McKenney wrote:
> > > On Wed, Mar 10, 2021 at 10:11:22PM +0100, Michal Hocko wrote:
> > >> On Wed 10-03-21 10:56:08, Mike Kravetz wrote:
> > >>> On 3/10/21 7:19 AM, Michal Hocko wrote:
> > >>>> On Mon 08-03-21 18:28:02, Muchun Song wrote:
> > >>>> [...]
> > >>>>> @@ -1447,7 +1486,7 @@ void free_huge_page(struct page *page)
> > >>>>>  	/*
> > >>>>>  	 * Defer freeing if in non-task context to avoid hugetlb_lock deadlock.
> > >>>>>  	 */
> > >>>>> -	if (!in_task()) {
> > >>>>> +	if (in_atomic()) {
> > >>>>
> > >>>> As I've said elsewhere in_atomic doesn't work for CONFIG_PREEMPT_COUNT=n.
> > >>>> We need this change for other reasons and so it would be better to pull
> > >>>> it out into a separate patch which also makes HUGETLB depend on
> > >>>> PREEMPT_COUNT.
> > >>>
> > >>> Yes, the issue of calling put_page for hugetlb pages from any context
> > >>> still needs work.  IMO, that is outside the scope of this series.  We
> > >>> already have code in this path which blocks/sleeps.
> > >>>
> > >>> Making HUGETLB depend on PREEMPT_COUNT is too restrictive.  IIUC,
> > >>> PREEMPT_COUNT will only be enabled if we enable:
> > >>> PREEMPT "Preemptible Kernel (Low-Latency Desktop)"
> > >>> PREEMPT_RT "Fully Preemptible Kernel (Real-Time)"
> > >>> or, other 'debug' options.  These are not enabled in 'more common'
> > >>> kernels.  Of course, we do not want to disable HUGETLB in common
> > >>> configurations.
> > >>
> > >> I haven't tried that but PREEMPT_COUNT should be selectable even without
> > >> any change to the preemption model (e.g. !PREEMPT).
> > > 
> > > It works reliably for me, for example as in the diff below.  So,
> > > as Michal says, you should be able to add "select PREEMPT_COUNT" to
> > > whatever Kconfig option you need to.
> > > 
> > 
> > Thanks Paul.
> > 
> > I may have been misreading Michal's suggestion of "make HUGETLB depend on
> > PREEMPT_COUNT".  We could "select PREEMPT_COUNT" if HUGETLB is enabled.
> > However, since HUGETLB is enabled in most configs, then this would
> > result in PREEMPT_COUNT also being enabled in most configs.  I honestly
> > do not know how much this will cost us?  I assume that if it was free or
> > really cheap it would already be always on?
> 
> There are a -lot- of configs out there, so are you sure that HUGETLB is
> really enabled in most of them?  ;-)

It certainly is enabled for all distribution kernels, and many of those
are !PREEMPT, so I believe this is what Mike was concerned about.

> More seriously, I was going by earlier emails in this and related threads
> plus Michal's "PREEMPT_COUNT should be selectable".  But there are other
> situations that would like PREEMPT_COUNT.  And to your point, some who
> would rather PREEMPT_COUNT not be universally enabled.  I haven't seen
> any performance or kernel-size numbers from any of them, however.

Yeah, per-CPU preempt counting shouldn't be noticeable, but I have to
confess I haven't benchmarked it.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [External] Re: [PATCH v18 1/9] mm: memory_hotplug: factor out bootmem core functions to bootmem_info.c
  2021-03-11  2:58     ` [External] " Muchun Song
@ 2021-03-11  8:45       ` Muchun Song
  2021-03-11  8:53         ` Michal Hocko
  0 siblings, 1 reply; 52+ messages in thread
From: Muchun Song @ 2021-03-11  8:45 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, Ingo Molnar, bp,
	x86, hpa, dave.hansen, luto, Peter Zijlstra, Alexander Viro,
	Andrew Morton, paulmck, mchehab+huawei, pawan.kumar.gupta,
	Randy Dunlap, oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel, Miaohe Lin,
	Chen Huang, Bodeddula Balasubramaniam

On Thu, Mar 11, 2021 at 10:58 AM Muchun Song <songmuchun@bytedance.com> wrote:
>
> On Wed, Mar 10, 2021 at 10:14 PM Michal Hocko <mhocko@suse.com> wrote:
> >
> > [I am sorry for a late review]
>
> Thanks for your review.
>
> >
> > On Mon 08-03-21 18:27:59, Muchun Song wrote:
> > > Move bootmem info registration common API to individual bootmem_info.c.
> > > And we will use {get,put}_page_bootmem() to initialize the page for the
> > > vmemmap pages or free the vmemmap pages to buddy in the later patch.
> > > So move them out of CONFIG_MEMORY_HOTPLUG_SPARSE. This is just code
> > > movement without any functional change.
> > >
> > > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > > Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
> > > Reviewed-by: Oscar Salvador <osalvador@suse.de>
> > > Reviewed-by: David Hildenbrand <david@redhat.com>
> > > Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
> > > Tested-by: Chen Huang <chenhuang5@huawei.com>
> > > Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>
> >
> > Separation from memory_hotplug.c is definitely a right step. I am
> > wondering about the config dependency though
> > [...]
> > > diff --git a/mm/Makefile b/mm/Makefile
> > > index 72227b24a616..daabf86d7da8 100644
> > > --- a/mm/Makefile
> > > +++ b/mm/Makefile
> > > @@ -83,6 +83,7 @@ obj-$(CONFIG_SLUB) += slub.o
> > >  obj-$(CONFIG_KASAN)  += kasan/
> > >  obj-$(CONFIG_KFENCE) += kfence/
> > >  obj-$(CONFIG_FAILSLAB) += failslab.o
> > > +obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o
> >
> > I would have expected this would depend on CONFIG_SPARSE.
> > BOOTMEM_INFO_NODE is really an odd thing to depend on here. There is
> > some functionality which requires the node info but that can be gated
> > specifically. Or what is the thinking behind?

I have tried this, and I find it better to depend on
BOOTMEM_INFO_NODE instead of SPARSEMEM.

If we enable SPARSEMEM but disable HAVE_BOOTMEM_INFO_NODE,
bootmem_info.c is still compiled, although we do not need
those functions on other architectures. And these functions
are all about bootmem info. So it seems more reasonable to
depend on BOOTMEM_INFO_NODE.
Just my thoughts.

Thanks.


>
> At first my idea was to free vmemmap pages through the bootmem
> interface. My first instinct is to rely on BOOTMEM_INFO_NODE.
> It makes sense to me to depend on CONFIG_SPARSE. I will
> update this in the next version.
>
> Thanks.
>
> >
> > This doesn't matter right now because it seems that the *_page_bootmem
> > is only used by x86 outside of the memory hotplug.
> >
> > Other than that looks good to me.
> > --
> > Michal Hocko
> > SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [External] Re: [PATCH v18 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-03-11  4:26     ` [External] " Muchun Song
@ 2021-03-11  8:46       ` Michal Hocko
  2021-03-11  8:49         ` Muchun Song
  0 siblings, 1 reply; 52+ messages in thread
From: Michal Hocko @ 2021-03-11  8:46 UTC (permalink / raw)
  To: Muchun Song
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, Ingo Molnar, bp,
	x86, hpa, dave.hansen, luto, Peter Zijlstra, Alexander Viro,
	Andrew Morton, paulmck, mchehab+huawei, pawan.kumar.gupta,
	Randy Dunlap, oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel, Chen Huang,
	Bodeddula Balasubramaniam

On Thu 11-03-21 12:26:32, Muchun Song wrote:
> On Wed, Mar 10, 2021 at 11:19 PM Michal Hocko <mhocko@suse.com> wrote:
> >
> > On Mon 08-03-21 18:28:02, Muchun Song wrote:
[...]
> > > @@ -1771,8 +1813,12 @@ int dissolve_free_huge_page(struct page *page)
> > >               h->free_huge_pages--;
> > >               h->free_huge_pages_node[nid]--;
> > >               h->max_huge_pages--;
> > > -             update_and_free_page(h, head);
> > > -             rc = 0;
> > > +             rc = update_and_free_page(h, head);
> > > +             if (rc) {
> > > +                     h->surplus_huge_pages--;
> > > +                     h->surplus_huge_pages_node[nid]--;
> > > +                     h->max_huge_pages++;
> >
> > This is quite ugly and confusing. update_and_free_page is careful to do
> > the proper counters accounting and now you just override it partially.
> > Why cannot we rely on update_and_free_page do the right thing?
> 
> Dissolving path is special here. Since update_and_free_page failed,
> the number of surplus pages was incremented.  Surplus pages are
> the number of pages greater than max_huge_pages.  Since we are
> incrementing max_huge_pages, we should decrement (undo) the
> addition to surplus_huge_pages and surplus_huge_pages_node[nid].

Can we make dissolve_free_huge_page less special or tell
update_and_free_page to not account against dissolve_free_huge_page?
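
Just to illustrate what I mean (rough sketch only; the parameter name
is made up):

static int update_and_free_page(struct hstate *h, struct page *page,
				bool adjust_surplus)
{
	if (alloc_huge_page_vmemmap(h, page)) {
		/*
		 * Restoring the vmemmap failed, so the page stays in the
		 * pool.  Only touch the surplus counters when the caller
		 * has not already accounted for that itself.
		 */
		if (adjust_surplus) {
			int nid = page_to_nid(page);

			h->surplus_huge_pages++;
			h->surplus_huge_pages_node[nid]++;
		}
		return -ENOMEM;
	}

	/* ... existing freeing path unchanged ... */
	return 0;
}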
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [External] Re: [PATCH v18 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-03-11  8:46       ` Michal Hocko
@ 2021-03-11  8:49         ` Muchun Song
  0 siblings, 0 replies; 52+ messages in thread
From: Muchun Song @ 2021-03-11  8:49 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, Ingo Molnar, bp,
	x86, hpa, dave.hansen, luto, Peter Zijlstra, Alexander Viro,
	Andrew Morton, paulmck, mchehab+huawei, pawan.kumar.gupta,
	Randy Dunlap, oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel, Chen Huang,
	Bodeddula Balasubramaniam

On Thu, Mar 11, 2021 at 4:46 PM Michal Hocko <mhocko@suse.com> wrote:
>
> On Thu 11-03-21 12:26:32, Muchun Song wrote:
> > On Wed, Mar 10, 2021 at 11:19 PM Michal Hocko <mhocko@suse.com> wrote:
> > >
> > > On Mon 08-03-21 18:28:02, Muchun Song wrote:
> [...]
> > > > @@ -1771,8 +1813,12 @@ int dissolve_free_huge_page(struct page *page)
> > > >               h->free_huge_pages--;
> > > >               h->free_huge_pages_node[nid]--;
> > > >               h->max_huge_pages--;
> > > > -             update_and_free_page(h, head);
> > > > -             rc = 0;
> > > > +             rc = update_and_free_page(h, head);
> > > > +             if (rc) {
> > > > +                     h->surplus_huge_pages--;
> > > > +                     h->surplus_huge_pages_node[nid]--;
> > > > +                     h->max_huge_pages++;
> > >
> > > This is quite ugly and confusing. update_and_free_page is careful to do
> > > the proper counters accounting and now you just override it partially.
> > > Why cannot we rely on update_and_free_page do the right thing?
> >
> > Dissolving path is special here. Since update_and_free_page failed,
> > the number of surplus pages was incremented.  Surplus pages are
> > the number of pages greater than max_huge_pages.  Since we are
> > incrementing max_huge_pages, we should decrement (undo) the
> > addition to surplus_huge_pages and surplus_huge_pages_node[nid].
>
> Can we make dissolve_free_huge_page less special or tell
> update_and_free_page to not account against dissolve_free_huge_page?

Of course we can.

Thanks.

> --
> Michal Hocko
> SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [External] Re: [PATCH v18 5/9] mm: hugetlb: set the PageHWPoison to the raw error page
  2021-03-11  6:34     ` [External] " Muchun Song
@ 2021-03-11  8:50       ` Michal Hocko
  2021-03-11  9:13         ` Muchun Song
  0 siblings, 1 reply; 52+ messages in thread
From: Michal Hocko @ 2021-03-11  8:50 UTC (permalink / raw)
  To: Muchun Song
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, Ingo Molnar, bp,
	x86, hpa, dave.hansen, luto, Peter Zijlstra, Alexander Viro,
	Andrew Morton, paulmck, mchehab+huawei, pawan.kumar.gupta,
	Randy Dunlap, oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel, Chen Huang,
	Bodeddula Balasubramaniam

On Thu 11-03-21 14:34:04, Muchun Song wrote:
> On Wed, Mar 10, 2021 at 11:28 PM Michal Hocko <mhocko@suse.com> wrote:
> >
> > On Mon 08-03-21 18:28:03, Muchun Song wrote:
> > > Because we reuse the first tail vmemmap page frame and remap it
> > > with read-only, we cannot set the PageHWPosion on some tail pages.
> > > So we can use the head[4].private (There are at least 128 struct
> > > page structures associated with the optimized HugeTLB page, so
> > > using head[4].private is safe) to record the real error page index
> > > and set the raw error page PageHWPoison later.
> >
> > Can we have more poisoned tail pages? Also who does consume that index
> > and set the HWPoison on the proper tail page?
> 
> Good point. I look at the routine of memory failure closely.
> If we do not clear the HWPoison of the head page, we cannot
> poison another tail page.
> 
> So we should not set the destructor of the huge page from
> HUGETLB_PAGE_DTOR to NULL_COMPOUND_DTOR
> before calling alloc_huge_page_vmemmap(). In this case,
> the below check of PageHuge() always returns true.
> 
> I need to fix this in the previous patch.
> 
> memory_failure()
>     if (PageHuge(page))
>         memory_failure_hugetlb()
>             head = compound_head(page)
>             if (TestSetPageHWPoison(head))
>                 return

I have to say that I am not fully familiar with the hwpoisoning code
(especially after recent changes), but IIRC it does rely on hugetlb page
dissolving. With the new code this operation can fail, which is a new
situation. Unless I am misunderstanding, this can lead to a lost memory
failure operation on other tail pages.

Anyway, the above answers the question of why a single slot is sufficient,
so it would be great to mention that in the changelog along with the caveat
that some pages might miss their poisoning.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [External] Re: [PATCH v18 1/9] mm: memory_hotplug: factor out bootmem core functions to bootmem_info.c
  2021-03-11  8:45       ` Muchun Song
@ 2021-03-11  8:53         ` Michal Hocko
  2021-03-11  9:05           ` Muchun Song
  0 siblings, 1 reply; 52+ messages in thread
From: Michal Hocko @ 2021-03-11  8:53 UTC (permalink / raw)
  To: Muchun Song
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, Ingo Molnar, bp,
	x86, hpa, dave.hansen, luto, Peter Zijlstra, Alexander Viro,
	Andrew Morton, paulmck, mchehab+huawei, pawan.kumar.gupta,
	Randy Dunlap, oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel, Miaohe Lin,
	Chen Huang, Bodeddula Balasubramaniam

On Thu 11-03-21 16:45:51, Muchun Song wrote:
> On Thu, Mar 11, 2021 at 10:58 AM Muchun Song <songmuchun@bytedance.com> wrote:
> >
> > On Wed, Mar 10, 2021 at 10:14 PM Michal Hocko <mhocko@suse.com> wrote:
> > >
> > > [I am sorry for a late review]
> >
> > Thanks for your review.
> >
> > >
> > > On Mon 08-03-21 18:27:59, Muchun Song wrote:
> > > > Move bootmem info registration common API to individual bootmem_info.c.
> > > > And we will use {get,put}_page_bootmem() to initialize the page for the
> > > > vmemmap pages or free the vmemmap pages to buddy in the later patch.
> > > > So move them out of CONFIG_MEMORY_HOTPLUG_SPARSE. This is just code
> > > > movement without any functional change.
> > > >
> > > > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > > > Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
> > > > Reviewed-by: Oscar Salvador <osalvador@suse.de>
> > > > Reviewed-by: David Hildenbrand <david@redhat.com>
> > > > Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
> > > > Tested-by: Chen Huang <chenhuang5@huawei.com>
> > > > Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>
> > >
> > > Separation from memory_hotplug.c is definitely a right step. I am
> > > wondering about the config dependency though
> > > [...]
> > > > diff --git a/mm/Makefile b/mm/Makefile
> > > > index 72227b24a616..daabf86d7da8 100644
> > > > --- a/mm/Makefile
> > > > +++ b/mm/Makefile
> > > > @@ -83,6 +83,7 @@ obj-$(CONFIG_SLUB) += slub.o
> > > >  obj-$(CONFIG_KASAN)  += kasan/
> > > >  obj-$(CONFIG_KFENCE) += kfence/
> > > >  obj-$(CONFIG_FAILSLAB) += failslab.o
> > > > +obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o
> > >
> > > I would have expected this would depend on CONFIG_SPARSE.
> > > BOOTMEM_INFO_NODE is really an odd thing to depend on here. There is
> > > some functionality which requires the node info but that can be gated
> > > specifically. Or what is the thinking behind?
> 
> I have tried this. And I find that it is better to depend on
> BOOTMEM_INFO_NODE instead of SPARSEMEM.
> 
> If we enable SPARSEMEM but disable HAVE_BOOTMEM_INFO_NODE,
> the bootmem_info.c also is compiled. Actually, we do not
> need those functions on other architectures. And these
> functions are also related to bootmem info. So it may be
> more reasonable to depend on BOOTMEM_INFO_NODE.
> Just my thoughts.

If BOOTMEM_INFO_NODE is disabled then bootmem_info.c would be
effectively only {get,put}_page_bootmem, no?

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [External] Re: [PATCH v18 9/9] mm: hugetlb: optimize the code with the help of the compiler
  2021-03-11  7:33     ` [External] " Muchun Song
@ 2021-03-11  8:55       ` Michal Hocko
  2021-03-11  9:08         ` Muchun Song
  0 siblings, 1 reply; 52+ messages in thread
From: Michal Hocko @ 2021-03-11  8:55 UTC (permalink / raw)
  To: Muchun Song
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, Ingo Molnar, bp,
	x86, hpa, dave.hansen, luto, Peter Zijlstra, Alexander Viro,
	Andrew Morton, paulmck, mchehab+huawei, pawan.kumar.gupta,
	Randy Dunlap, oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel, Miaohe Lin,
	Chen Huang, Bodeddula Balasubramaniam

On Thu 11-03-21 15:33:20, Muchun Song wrote:
> On Wed, Mar 10, 2021 at 11:41 PM Michal Hocko <mhocko@suse.com> wrote:
> >
> > On Mon 08-03-21 18:28:07, Muchun Song wrote:
> > > When the "struct page size" crosses page boundaries we cannot
> > > make use of this feature. Let free_vmemmap_pages_per_hpage()
> > > return zero if that is the case, most of the functions can be
> > > optimized away.
> >
> > I am confused. Don't you check for this in early_hugetlb_free_vmemmap_param already?
> 
> Right.
> 
> > Why do we need any runtime checks?
> 
> If the size of the struct page is not power of 2, compiler can think
> is_hugetlb_free_vmemmap_enabled() always return false. So
> the code snippet of this user can be optimized away.
> 
> E.g.
> 
> if (is_hugetlb_free_vmemmap_enabled())
>         /* do something */
> 
> The compiler can drop "/* do something */" directly, because
> it knows is_hugetlb_free_vmemmap_enabled() always returns
> false.

OK, so this is a micro-optimization to generate better code?
Is it measurable enough to warrant more code?
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [External] Re: [PATCH v18 1/9] mm: memory_hotplug: factor out bootmem core functions to bootmem_info.c
  2021-03-11  8:53         ` Michal Hocko
@ 2021-03-11  9:05           ` Muchun Song
  0 siblings, 0 replies; 52+ messages in thread
From: Muchun Song @ 2021-03-11  9:05 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, Ingo Molnar, bp,
	x86, hpa, dave.hansen, luto, Peter Zijlstra, Alexander Viro,
	Andrew Morton, paulmck, mchehab+huawei, pawan.kumar.gupta,
	Randy Dunlap, oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel, Miaohe Lin,
	Chen Huang, Bodeddula Balasubramaniam

On Thu, Mar 11, 2021 at 4:53 PM Michal Hocko <mhocko@suse.com> wrote:
>
> On Thu 11-03-21 16:45:51, Muchun Song wrote:
> > On Thu, Mar 11, 2021 at 10:58 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > >
> > > On Wed, Mar 10, 2021 at 10:14 PM Michal Hocko <mhocko@suse.com> wrote:
> > > >
> > > > [I am sorry for a late review]
> > >
> > > Thanks for your review.
> > >
> > > >
> > > > On Mon 08-03-21 18:27:59, Muchun Song wrote:
> > > > > Move bootmem info registration common API to individual bootmem_info.c.
> > > > > And we will use {get,put}_page_bootmem() to initialize the page for the
> > > > > vmemmap pages or free the vmemmap pages to buddy in the later patch.
> > > > > So move them out of CONFIG_MEMORY_HOTPLUG_SPARSE. This is just code
> > > > > movement without any functional change.
> > > > >
> > > > > Signed-off-by: Muchun Song <songmuchun@bytedance.com>
> > > > > Acked-by: Mike Kravetz <mike.kravetz@oracle.com>
> > > > > Reviewed-by: Oscar Salvador <osalvador@suse.de>
> > > > > Reviewed-by: David Hildenbrand <david@redhat.com>
> > > > > Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
> > > > > Tested-by: Chen Huang <chenhuang5@huawei.com>
> > > > > Tested-by: Bodeddula Balasubramaniam <bodeddub@amazon.com>
> > > >
> > > > Separation from memory_hotplug.c is definitely a right step. I am
> > > > wondering about the config dependency though
> > > > [...]
> > > > > diff --git a/mm/Makefile b/mm/Makefile
> > > > > index 72227b24a616..daabf86d7da8 100644
> > > > > --- a/mm/Makefile
> > > > > +++ b/mm/Makefile
> > > > > @@ -83,6 +83,7 @@ obj-$(CONFIG_SLUB) += slub.o
> > > > >  obj-$(CONFIG_KASAN)  += kasan/
> > > > >  obj-$(CONFIG_KFENCE) += kfence/
> > > > >  obj-$(CONFIG_FAILSLAB) += failslab.o
> > > > > +obj-$(CONFIG_HAVE_BOOTMEM_INFO_NODE) += bootmem_info.o
> > > >
> > > > I would have expected this would depend on CONFIG_SPARSE.
> > > > BOOTMEM_INFO_NODE is really an odd thing to depend on here. There is
> > > > some functionality which requires the node info but that can be gated
> > > > specifically. Or what is the thinking behind?
> >
> > I have tried this. And I find that it is better to depend on
> > BOOTMEM_INFO_NODE instead of SPARSEMEM.
> >
> > If we enable SPARSEMEM but disable HAVE_BOOTMEM_INFO_NODE,
> > the bootmem_info.c also is compiled. Actually, we do not
> > need those functions on other architectures. And these
> > functions are also related to bootmem info. So it may be
> > more reasonable to depend on BOOTMEM_INFO_NODE.
> > Just my thoughts.
>
> If BOOTMEM_INFO_NODE is disbabled then bootmem_info.c would be
> effectivelly only {get,put}_page_bootmem, no?

Right, {get,put}_page_bootmem would effectively be all that is left.
I found that get_page_bootmem is only used within the scope of
CONFIG_BOOTMEM_INFO_NODE, so I moved them to bootmem_info.c.

Thanks.

>
> --
> Michal Hocko
> SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [External] Re: [PATCH v18 9/9] mm: hugetlb: optimize the code with the help of the compiler
  2021-03-11  8:55       ` Michal Hocko
@ 2021-03-11  9:08         ` Muchun Song
  2021-03-11  9:39           ` Michal Hocko
  0 siblings, 1 reply; 52+ messages in thread
From: Muchun Song @ 2021-03-11  9:08 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, Ingo Molnar, bp,
	x86, hpa, dave.hansen, luto, Peter Zijlstra, Alexander Viro,
	Andrew Morton, paulmck, mchehab+huawei, pawan.kumar.gupta,
	Randy Dunlap, oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel, Miaohe Lin,
	Chen Huang, Bodeddula Balasubramaniam

On Thu, Mar 11, 2021 at 4:55 PM Michal Hocko <mhocko@suse.com> wrote:
>
> On Thu 11-03-21 15:33:20, Muchun Song wrote:
> > On Wed, Mar 10, 2021 at 11:41 PM Michal Hocko <mhocko@suse.com> wrote:
> > >
> > > On Mon 08-03-21 18:28:07, Muchun Song wrote:
> > > > When the "struct page size" crosses page boundaries we cannot
> > > > make use of this feature. Let free_vmemmap_pages_per_hpage()
> > > > return zero if that is the case, most of the functions can be
> > > > optimized away.
> > >
> > > I am confused. Don't you check for this in early_hugetlb_free_vmemmap_param already?
> >
> > Right.
> >
> > > Why do we need any runtime checks?
> >
> > If the size of the struct page is not power of 2, compiler can think
> > is_hugetlb_free_vmemmap_enabled() always return false. So
> > the code snippet of this user can be optimized away.
> >
> > E.g.
> >
> > if (is_hugetlb_free_vmemmap_enabled())
> >         /* do something */
> >
> > The compiler can drop "/* do something */" directly, because
> > it knows is_hugetlb_free_vmemmap_enabled() always returns
> > false.
>
> OK, so this is a micro-optimization to generate a better code?

Right.

> Is this measurable to warrant more code?

I have disassembled the code to confirm this behavior.
I know this is not a hot path, but it actually can decrease
the code size.

Thanks.

> --
> Michal Hocko
> SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [External] Re: [PATCH v18 5/9] mm: hugetlb: set the PageHWPoison to the raw error page
  2021-03-11  8:50       ` Michal Hocko
@ 2021-03-11  9:13         ` Muchun Song
  0 siblings, 0 replies; 52+ messages in thread
From: Muchun Song @ 2021-03-11  9:13 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, Ingo Molnar, bp,
	x86, hpa, dave.hansen, luto, Peter Zijlstra, Alexander Viro,
	Andrew Morton, paulmck, mchehab+huawei, pawan.kumar.gupta,
	Randy Dunlap, oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel, Chen Huang,
	Bodeddula Balasubramaniam

On Thu, Mar 11, 2021 at 4:50 PM Michal Hocko <mhocko@suse.com> wrote:
>
> On Thu 11-03-21 14:34:04, Muchun Song wrote:
> > On Wed, Mar 10, 2021 at 11:28 PM Michal Hocko <mhocko@suse.com> wrote:
> > >
> > > On Mon 08-03-21 18:28:03, Muchun Song wrote:
> > > > Because we reuse the first tail vmemmap page frame and remap it
> > > > with read-only, we cannot set the PageHWPosion on some tail pages.
> > > > So we can use the head[4].private (There are at least 128 struct
> > > > page structures associated with the optimized HugeTLB page, so
> > > > using head[4].private is safe) to record the real error page index
> > > > and set the raw error page PageHWPoison later.
> > >
> > > Can we have more poisoned tail pages? Also who does consume that index
> > > and set the HWPoison on the proper tail page?
> >
> > Good point. I look at the routine of memory failure closely.
> > If we do not clear the HWPoison of the head page, we cannot
> > poison another tail page.
> >
> > So we should not set the destructor of the huge page from
> > HUGETLB_PAGE_DTOR to NULL_COMPOUND_DTOR
> > before calling alloc_huge_page_vmemmap(). In this case,
> > the below check of PageHuge() always returns true.
> >
> > I need to fix this in the previous patch.
> >
> > memory_failure()
> >     if (PageHuge(page))
> >         memory_failure_hugetlb()
> >             head = compound_head(page)
> >             if (TestSetPageHWPoison(head))
> >                 return
>
> I have to say that I am not fully familiar with hwpoisoning code
> (especially after recent changes) but IIRC it does rely on hugetlb page
> dissolving. With the new code this operation can fail which is a new
> situation. Unless I am misunderstanding this can lead to a lost memory
> failure operation on other tail pages.
>
> Anyway the above answers the question why a single slot is sufficient so
> it would be great to mention that in a changelog along with the caveat
> that some pages might miss their poisoning.

OK. I will update the changelog. Thanks for your suggestions.

> --
> Michal Hocko
> SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [External] Re: [PATCH v18 9/9] mm: hugetlb: optimize the code with the help of the compiler
  2021-03-11  9:08         ` Muchun Song
@ 2021-03-11  9:39           ` Michal Hocko
  2021-03-11 10:00             ` Muchun Song
  0 siblings, 1 reply; 52+ messages in thread
From: Michal Hocko @ 2021-03-11  9:39 UTC (permalink / raw)
  To: Muchun Song
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, Ingo Molnar, bp,
	x86, hpa, dave.hansen, luto, Peter Zijlstra, Alexander Viro,
	Andrew Morton, paulmck, mchehab+huawei, pawan.kumar.gupta,
	Randy Dunlap, oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel, Miaohe Lin,
	Chen Huang, Bodeddula Balasubramaniam

On Thu 11-03-21 17:08:34, Muchun Song wrote:
> On Thu, Mar 11, 2021 at 4:55 PM Michal Hocko <mhocko@suse.com> wrote:
> >
> > On Thu 11-03-21 15:33:20, Muchun Song wrote:
> > > On Wed, Mar 10, 2021 at 11:41 PM Michal Hocko <mhocko@suse.com> wrote:
> > > >
> > > > On Mon 08-03-21 18:28:07, Muchun Song wrote:
> > > > > When the "struct page size" crosses page boundaries we cannot
> > > > > make use of this feature. Let free_vmemmap_pages_per_hpage()
> > > > > return zero if that is the case, most of the functions can be
> > > > > optimized away.
> > > >
> > > > I am confused. Don't you check for this in early_hugetlb_free_vmemmap_param already?
> > >
> > > Right.
> > >
> > > > Why do we need any runtime checks?
> > >
> > > If the size of the struct page is not power of 2, compiler can think
> > > is_hugetlb_free_vmemmap_enabled() always return false. So
> > > the code snippet of this user can be optimized away.
> > >
> > > E.g.
> > >
> > > if (is_hugetlb_free_vmemmap_enabled())
> > >         /* do something */
> > >
> > > The compiler can drop "/* do something */" directly, because
> > > it knows is_hugetlb_free_vmemmap_enabled() always returns
> > > false.
> >
> > OK, so this is a micro-optimization to generate a better code?
> 
> Right.
> 
> > Is this measurable to warrant more code?
> 
> I have disassembled the code to confirm this behavior.
> I know this is not the hot path. But it actually can decrease
> the code size.

A struct page whose size is not a power of 2 is not a common case. Are you
sure it makes sense to micro-optimize for an outlier? If you really want to
micro-optimize then do that for the common case - the feature being
disabled - via a static key.
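
Something along these lines (names made up, just to illustrate the
static key idea, not a concrete patch):

DEFINE_STATIC_KEY_FALSE(hugetlb_free_vmemmap_key);

static int __init early_hugetlb_free_vmemmap_param(char *buf)
{
	/* Hypothetical handler: flip the key only when the admin enables it. */
	if (buf && !strcmp(buf, "on"))
		static_branch_enable(&hugetlb_free_vmemmap_key);
	return 0;
}
early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);

static inline bool is_hugetlb_free_vmemmap_enabled(void)
{
	/* Patched jump label: the disabled (common) case costs a single nop. */
	return static_branch_unlikely(&hugetlb_free_vmemmap_key);
}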
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [External] Re: [PATCH v18 9/9] mm: hugetlb: optimize the code with the help of the compiler
  2021-03-11  9:39           ` Michal Hocko
@ 2021-03-11 10:00             ` Muchun Song
  2021-03-11 12:16               ` Michal Hocko
  0 siblings, 1 reply; 52+ messages in thread
From: Muchun Song @ 2021-03-11 10:00 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, Ingo Molnar, bp,
	x86, hpa, dave.hansen, luto, Peter Zijlstra, Alexander Viro,
	Andrew Morton, paulmck, mchehab+huawei, pawan.kumar.gupta,
	Randy Dunlap, oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel, Miaohe Lin,
	Chen Huang, Bodeddula Balasubramaniam

On Thu, Mar 11, 2021 at 5:39 PM Michal Hocko <mhocko@suse.com> wrote:
>
> On Thu 11-03-21 17:08:34, Muchun Song wrote:
> > On Thu, Mar 11, 2021 at 4:55 PM Michal Hocko <mhocko@suse.com> wrote:
> > >
> > > On Thu 11-03-21 15:33:20, Muchun Song wrote:
> > > > On Wed, Mar 10, 2021 at 11:41 PM Michal Hocko <mhocko@suse.com> wrote:
> > > > >
> > > > > On Mon 08-03-21 18:28:07, Muchun Song wrote:
> > > > > > When the "struct page size" crosses page boundaries we cannot
> > > > > > make use of this feature. Let free_vmemmap_pages_per_hpage()
> > > > > > return zero if that is the case, most of the functions can be
> > > > > > optimized away.
> > > > >
> > > > > I am confused. Don't you check for this in early_hugetlb_free_vmemmap_param already?
> > > >
> > > > Right.
> > > >
> > > > > Why do we need any runtime checks?
> > > >
> > > > If the size of the struct page is not power of 2, compiler can think
> > > > is_hugetlb_free_vmemmap_enabled() always return false. So
> > > > the code snippet of this user can be optimized away.
> > > >
> > > > E.g.
> > > >
> > > > if (is_hugetlb_free_vmemmap_enabled())
> > > >         /* do something */
> > > >
> > > > The compiler can drop "/* do something */" directly, because
> > > > it knows is_hugetlb_free_vmemmap_enabled() always returns
> > > > false.
> > >
> > > OK, so this is a micro-optimization to generate a better code?
> >
> > Right.
> >
> > > Is this measurable to warrant more code?
> >
> > I have disassembled the code to confirm this behavior.
> > I know this is not the hot path. But it actually can decrease
> > the code size.
>
> struct page which is not power of 2 is not a common case.

I know this is not a common case. But the check of
is_power_of_2(sizeof(struct page)) does not add any runtime
overhead. It just tells the compiler to optimize the code
as much as possible.

> Are you sure
> it makes sense to micro optimize for an outliar. If you really want to
> microptimize then do that for a common case - the feature being
> disabled - via static key.

Even with a static key, we cannot shrink the code size (vmlinux)
when the size of the struct page is not a power of 2.

Sorry, I am confused about why you disagree with this change.
It does not bring any disadvantages.

> --
> Michal Hocko
> SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [External] Re: [PATCH v18 9/9] mm: hugetlb: optimize the code with the help of the compiler
  2021-03-11 10:00             ` Muchun Song
@ 2021-03-11 12:16               ` Michal Hocko
  2021-03-11 13:00                 ` Muchun Song
  2021-03-11 13:45                 ` Oscar Salvador
  0 siblings, 2 replies; 52+ messages in thread
From: Michal Hocko @ 2021-03-11 12:16 UTC (permalink / raw)
  To: Muchun Song
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, Ingo Molnar, bp,
	x86, hpa, dave.hansen, luto, Peter Zijlstra, Alexander Viro,
	Andrew Morton, paulmck, mchehab+huawei, pawan.kumar.gupta,
	Randy Dunlap, oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel, Miaohe Lin,
	Chen Huang, Bodeddula Balasubramaniam

On Thu 11-03-21 18:00:09, Muchun Song wrote:
[...]
> Sorry. I am confused why you disagree with this change.
> It does not bring any disadvantages.

Because it is adding code which is not really necessary and which will
have to be maintained. Think of future changes which would need to grow
more of these checks. Hugetlb code paths shouldn't really need to think
about the size of the struct page.
-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v18 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-03-11  8:40               ` Michal Hocko
@ 2021-03-11 12:17                 ` Michal Hocko
  2021-03-11 17:59                   ` Mike Kravetz
  0 siblings, 1 reply; 52+ messages in thread
From: Michal Hocko @ 2021-03-11 12:17 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: Mike Kravetz, Muchun Song, corbet, tglx, mingo, bp, x86, hpa,
	dave.hansen, luto, peterz, viro, akpm, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, song.bao.hua, david,
	naoya.horiguchi, joao.m.martins, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel, Chen Huang,
	Bodeddula Balasubramaniam

On Thu 11-03-21 09:40:57, Michal Hocko wrote:
> On Wed 10-03-21 15:28:51, Paul E. McKenney wrote:
> > On Wed, Mar 10, 2021 at 02:10:12PM -0800, Mike Kravetz wrote:
> > > On 3/10/21 1:49 PM, Paul E. McKenney wrote:
> > > > On Wed, Mar 10, 2021 at 10:11:22PM +0100, Michal Hocko wrote:
> > > >> On Wed 10-03-21 10:56:08, Mike Kravetz wrote:
> > > >>> On 3/10/21 7:19 AM, Michal Hocko wrote:
> > > >>>> On Mon 08-03-21 18:28:02, Muchun Song wrote:
> > > >>>> [...]
> > > >>>>> @@ -1447,7 +1486,7 @@ void free_huge_page(struct page *page)
> > > >>>>>  	/*
> > > >>>>>  	 * Defer freeing if in non-task context to avoid hugetlb_lock deadlock.
> > > >>>>>  	 */
> > > >>>>> -	if (!in_task()) {
> > > >>>>> +	if (in_atomic()) {
> > > >>>>
> > > >>>> As I've said elsewhere in_atomic doesn't work for CONFIG_PREEMPT_COUNT=n.
> > > >>>> We need this change for other reasons and so it would be better to pull
> > > >>>> it out into a separate patch which also makes HUGETLB depend on
> > > >>>> PREEMPT_COUNT.
> > > >>>
> > > >>> Yes, the issue of calling put_page for hugetlb pages from any context
> > > >>> still needs work.  IMO, that is outside the scope of this series.  We
> > > >>> already have code in this path which blocks/sleeps.
> > > >>>
> > > >>> Making HUGETLB depend on PREEMPT_COUNT is too restrictive.  IIUC,
> > > >>> PREEMPT_COUNT will only be enabled if we enable:
> > > >>> PREEMPT "Preemptible Kernel (Low-Latency Desktop)"
> > > >>> PREEMPT_RT "Fully Preemptible Kernel (Real-Time)"
> > > >>> or, other 'debug' options.  These are not enabled in 'more common'
> > > >>> kernels.  Of course, we do not want to disable HUGETLB in common
> > > >>> configurations.
> > > >>
> > > >> I haven't tried that but PREEMPT_COUNT should be selectable even without
> > > >> any change to the preemption model (e.g. !PREEMPT).
> > > > 
> > > > It works reliably for me, for example as in the diff below.  So,
> > > > as Michal says, you should be able to add "select PREEMPT_COUNT" to
> > > > whatever Kconfig option you need to.
> > > > 
> > > 
> > > Thanks Paul.
> > > 
> > > I may have been misreading Michal's suggestion of "make HUGETLB depend on
> > > PREEMPT_COUNT".  We could "select PREEMPT_COUNT" if HUGETLB is enabled.
> > > However, since HUGETLB is enabled in most configs, then this would
> > > result in PREEMPT_COUNT also being enabled in most configs.  I honestly
> > > do not know how much this will cost us?  I assume that if it was free or
> > > really cheap it would already be always on?
> > 
> > There are a -lot- of configs out there, so are you sure that HUGETLB is
> > really enabled in most of them?  ;-)
> 
> It certainly is enabled for all distribution kernels and many are
> !PREEMPT so I believe this is what Mike was concerned about.
> 
> > More seriously, I was going by earlier emails in this and related threads
> > plus Michal's "PREEMPT_COUNT should be selectable".  But there are other
> > situations that would like PREEMPT_COUNT.  And to your point, some who
> > would rather PREEMPT_COUNT not be universally enabled.  I haven't seen
> > any performance or kernel-size numbers from any of them, however.
> 
> Yeah per cpu preempt counting shouldn't be noticeable but I have to
> confess I haven't benchmarked it.

But all this seems moot now http://lkml.kernel.org/r/YEoA08n60+jzsnAl@hirez.programming.kicks-ass.net

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [External] Re: [PATCH v18 9/9] mm: hugetlb: optimize the code with the help of the compiler
  2021-03-11 12:16               ` Michal Hocko
@ 2021-03-11 13:00                 ` Muchun Song
  2021-03-11 13:45                 ` Oscar Salvador
  1 sibling, 0 replies; 52+ messages in thread
From: Muchun Song @ 2021-03-11 13:00 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, Ingo Molnar, bp,
	x86, hpa, dave.hansen, luto, Peter Zijlstra, Alexander Viro,
	Andrew Morton, paulmck, mchehab+huawei, pawan.kumar.gupta,
	Randy Dunlap, oneukum, anshuman.khandual, jroedel, Mina Almasry,
	David Rientjes, Matthew Wilcox, Oscar Salvador,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel, Miaohe Lin,
	Chen Huang, Bodeddula Balasubramaniam

On Thu, Mar 11, 2021 at 8:16 PM Michal Hocko <mhocko@suse.com> wrote:
>
> On Thu 11-03-21 18:00:09, Muchun Song wrote:
> [...]
> > Sorry. I am confused why you disagree with this change.
> > It does not bring any disadvantages.
>
> Because it is adding code which is not really necessary and which will
> have to be maintained. Think of future changes which would need to grow
> more of these. Hugetlb code paths shouldn't really think about the size
> of struct page.

Got it. I will drop this patch.

> --
> Michal Hocko
> SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [External] Re: [PATCH v18 9/9] mm: hugetlb: optimize the code with the help of the compiler
  2021-03-11 12:16               ` Michal Hocko
  2021-03-11 13:00                 ` Muchun Song
@ 2021-03-11 13:45                 ` Oscar Salvador
  1 sibling, 0 replies; 52+ messages in thread
From: Oscar Salvador @ 2021-03-11 13:45 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Muchun Song, Jonathan Corbet, Mike Kravetz, Thomas Gleixner,
	Ingo Molnar, bp, x86, hpa, dave.hansen, luto, Peter Zijlstra,
	Alexander Viro, Andrew Morton, paulmck, mchehab+huawei,
	pawan.kumar.gupta, Randy Dunlap, oneukum, anshuman.khandual,
	jroedel, Mina Almasry, David Rientjes, Matthew Wilcox,
	Song Bao Hua (Barry Song),
	David Hildenbrand,
	HORIGUCHI NAOYA(堀口 直也),
	Joao Martins, Xiongchun duan, linux-doc, LKML,
	Linux Memory Management List, linux-fsdevel, Miaohe Lin,
	Chen Huang, Bodeddula Balasubramaniam

On Thu, Mar 11, 2021 at 01:16:37PM +0100, Michal Hocko wrote:
> On Thu 11-03-21 18:00:09, Muchun Song wrote:
> [...]
> > Sorry. I am confused why you disagree with this change.
> > It does not bring any disadvantages.
> 
> Because it is adding code which is not really necessary and which will
> have to be maintained. Think of future changes which would need to grow
> more of these. Hugetlb code paths shouldn't really think about the size
> of struct page.

I have to confess that when I looked at the patch I found it nice in the way it
wipes out almost all code dealing with vmemmap when sizeof(struct page) != power_of_2,
and I was convinced by the fact that only two places required the change.
So all in all it did not look like much churn, and not __that__ hard to maintain.

But I did not consider the case where this trick needs to be spread to more places
if the code changes over time.

So I agree that although it gets rid of a lot of code, it would seldom pay off, as
not many configurations out there are running with a !power_of_2 struct page, and
hugetlb is already tricky enough.
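
The shape of the trick, as I read the patch (a rough sketch with assumed
names, since the patch body is not quoted here):

	/* compile-time constant, so the optimizer can drop whole branches */
	static inline bool hugetlb_vmemmap_optimizable(void)
	{
		return is_power_of_2(sizeof(struct page));	/* <linux/log2.h> */
	}

	void free_huge_page_vmemmap(struct hstate *h, struct page *head)
	{
		if (!hugetlb_vmemmap_optimizable())
			return;	/* whole body becomes dead code otherwise */

		/* ... remap the vmemmap pages of the HugeTLB page ... */
	}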


-- 
Oscar Salvador
SUSE L3

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v18 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-03-11 12:17                 ` Michal Hocko
@ 2021-03-11 17:59                   ` Mike Kravetz
  2021-03-11 22:53                     ` Mike Kravetz
  0 siblings, 1 reply; 52+ messages in thread
From: Mike Kravetz @ 2021-03-11 17:59 UTC (permalink / raw)
  To: Michal Hocko, Paul E. McKenney
  Cc: Muchun Song, corbet, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, mchehab+huawei, pawan.kumar.gupta,
	rdunlap, oneukum, anshuman.khandual, jroedel, almasrymina,
	rientjes, willy, osalvador, song.bao.hua, david, naoya.horiguchi,
	joao.m.martins, duanxiongchun, linux-doc, linux-kernel, linux-mm,
	linux-fsdevel, Chen Huang, Bodeddula Balasubramaniam

On 3/11/21 4:17 AM, Michal Hocko wrote:
>> Yeah per cpu preempt counting shouldn't be noticeable but I have to
>> confess I haven't benchmarked it.
> 
> But all this seems moot now http://lkml.kernel.org/r/YEoA08n60+jzsnAl@hirez.programming.kicks-ass.net
> 

The proper fix for free_huge_page independent of this series would
involve:

- Make hugetlb_lock and subpool lock irq safe
- Hand off freeing to a workqueue if the freeing could sleep

Today, the only time we can sleep in free_huge_page is for gigantic
pages allocated via cma.  I 'think' the concern about undesirable
user visible side effects in this case is minimal as freeing/allocating
1G pages is not something that is going to happen at a high frequency.
My thinking could be wrong?
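
A minimal sketch of the hand-off in the second bullet above (hypothetical
names, loosely modelled on the existing non-task-context deferral; needs
<linux/llist.h> and <linux/workqueue.h>):

	static LLIST_HEAD(hpage_freelist);

	static void free_hpage_workfn(struct work_struct *work)
	{
		struct llist_node *node = llist_del_all(&hpage_freelist);

		while (node) {
			/* the llist_node is stashed in page->mapping */
			struct page *page = container_of((void *)node,
							 struct page, mapping);

			node = node->next;
			__free_huge_page(page);	/* may sleep: cma, vmemmap */
		}
	}
	static DECLARE_WORK(free_hpage_work, free_hpage_workfn);

	/* queueing side, called where freeing might sleep */
	if (llist_add((struct llist_node *)&page->mapping, &hpage_freelist))
		schedule_work(&free_hpage_work);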

Of more concern is the introduction of this series.  If this feature
is enabled, then ALL free_huge_page requests must be sent to a workqueue.
Any ideas on how to address this?
-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v18 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-03-11 17:59                   ` Mike Kravetz
@ 2021-03-11 22:53                     ` Mike Kravetz
  2021-03-12  8:15                       ` Michal Hocko
  0 siblings, 1 reply; 52+ messages in thread
From: Mike Kravetz @ 2021-03-11 22:53 UTC (permalink / raw)
  To: Michal Hocko, Paul E. McKenney
  Cc: Muchun Song, corbet, tglx, mingo, bp, x86, hpa, dave.hansen,
	luto, peterz, viro, akpm, mchehab+huawei, pawan.kumar.gupta,
	rdunlap, oneukum, anshuman.khandual, jroedel, almasrymina,
	rientjes, willy, osalvador, song.bao.hua, david, naoya.horiguchi,
	joao.m.martins, duanxiongchun, linux-doc, linux-kernel, linux-mm,
	linux-fsdevel, Chen Huang, Bodeddula Balasubramaniam

On 3/11/21 9:59 AM, Mike Kravetz wrote:
> On 3/11/21 4:17 AM, Michal Hocko wrote:
>>> Yeah per cpu preempt counting shouldn't be noticeable but I have to
>>> confess I haven't benchmarked it.
>>
>> But all this seems moot now http://lkml.kernel.org/r/YEoA08n60+jzsnAl@hirez.programming.kicks-ass.net
>>
> 
> The proper fix for free_huge_page independent of this series would
> involve:
> 
> - Make hugetlb_lock and subpool lock irq safe
> - Hand off freeing to a workqueue if the freeing could sleep
> 
> Today, the only time we can sleep in free_huge_page is for gigantic
> pages allocated via cma.  I 'think' the concern about undesirable
> user visible side effects in this case is minimal as freeing/allocating
> 1G pages is not something that is going to happen at a high frequency.
> My thinking could be wrong?
> 
> Of more concern is the introduction of this series.  If this feature
> is enabled, then ALL free_huge_page requests must be sent to a workqueue.
> Any ideas on how to address this?
> 

Thinking about this more ...

A call to free_huge_page has two distinct outcomes
1) Page is freed back to the original allocator: buddy or cma
2) Page is put on hugetlb free list

We can only possibly sleep in case 1.  In addition, freeing a
page back to the original allocator involves these steps:
1) Removing page from hugetlb lists
2) Updating hugetlb counts: nr_hugepages, surplus
3) Updating page fields
4) Allocate vmemmap pages if needed as in this series
5) Calling free routine of original allocator

If hugetlb_lock is irq safe, we can perform the first 3 steps under that
lock without issue.  We would then use a workqueue to perform the last
two steps.  Since we are updating hugetlb user visible data under the
lock, there should be no delays.  Of course, giving those pages back to
the original allocator could still be delayed, and a user may notice
that.  Not sure if that would be acceptable?  I think Muchun had a
similar setup just for vmemmap allocation in an early version of this
series.
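
Roughly, a sketch only (remove_hugetlb_page() is a made-up name for steps
1-3 done under the lock, the work item is the existing deferred-free one):

	unsigned long flags;

	spin_lock_irqsave(&hugetlb_lock, flags);
	remove_hugetlb_page(h, page);	/* lists, nr_hugepages/surplus, page fields */
	spin_unlock_irqrestore(&hugetlb_lock, flags);

	/* steps 4 and 5 may sleep, so push them to the workqueue */
	if (llist_add((struct llist_node *)&page->mapping, &hpage_freelist))
		schedule_work(&free_hpage_work);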

This would also require changes to where accounting is done in
dissolve_free_huge_page and update_and_free_page as mentioned elsewhere.

P.S. We could further optimize to check for the possibility of sleeping
(cma or vmemmap) and only send pages to the workqueue in those cases.
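
Something like this, with helper names assumed (free_vmemmap_pages_per_hpage
from this series, defer_free_to_workqueue made up):

	/* only defer when freeing might actually sleep */
	if (hstate_is_gigantic(h) || free_vmemmap_pages_per_hpage(h))
		defer_free_to_workqueue(page);
	else
		__free_huge_page(page);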
-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v18 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-03-11 22:53                     ` Mike Kravetz
@ 2021-03-12  8:15                       ` Michal Hocko
  2021-03-12 17:50                         ` Mike Kravetz
  0 siblings, 1 reply; 52+ messages in thread
From: Michal Hocko @ 2021-03-12  8:15 UTC (permalink / raw)
  To: Mike Kravetz
  Cc: Paul E. McKenney, Muchun Song, corbet, tglx, mingo, bp, x86, hpa,
	dave.hansen, luto, peterz, viro, akpm, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, song.bao.hua, david,
	naoya.horiguchi, joao.m.martins, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel, Chen Huang,
	Bodeddula Balasubramaniam

On Thu 11-03-21 14:53:08, Mike Kravetz wrote:
> On 3/11/21 9:59 AM, Mike Kravetz wrote:
> > On 3/11/21 4:17 AM, Michal Hocko wrote:
> >>> Yeah per cpu preempt counting shouldn't be noticeable but I have to
> >>> confess I haven't benchmarked it.
> >>
> >> But all this seems moot now http://lkml.kernel.org/r/YEoA08n60+jzsnAl@hirez.programming.kicks-ass.net
> >>
> > 
> > The proper fix for free_huge_page independent of this series would
> > involve:
> > 
> > - Make hugetlb_lock and subpool lock irq safe
> > - Hand off freeing to a workqueue if the freeing could sleep
> > 
> > Today, the only time we can sleep in free_huge_page is for gigantic
> > pages allocated via cma.  I 'think' the concern about undesirable
> > user visible side effects in this case is minimal as freeing/allocating
> > 1G pages is not something that is going to happen at a high frequency.
> > My thinking could be wrong?
> > 
> > Of more concern is the introduction of this series.  If this feature
> > is enabled, then ALL free_huge_page requests must be sent to a workqueue.
> > Any ideas on how to address this?
> > 
> 
> Thinking about this more ...
> 
> A call to free_huge_page has two distinct outcomes
> 1) Page is freed back to the original allocator: buddy or cma
> 2) Page is put on hugetlb free list
> 
> > We can only possibly sleep in case 1.  In addition, freeing a
> page back to the original allocator involves these steps:
> 1) Removing page from hugetlb lists
> 2) Updating hugetlb counts: nr_hugepages, surplus
> 3) Updating page fields
> 4) Allocate vmemmap pages if needed as in this series
> 5) Calling free routine of original allocator
> 
> If hugetlb_lock is irq safe, we can perform the first 3 steps under that
> lock without issue.  We would then use a workqueue to perform the last
> two steps.  Since we are updating hugetlb user visible data under the
> lock, there should be no delays.  Of course, giving those pages back to
> the original allocator could still be delayed, and a user may notice
> that.  Not sure if that would be acceptable?

Well, having many in-flight huge pages can certainly be visible. Say you
are freeing hundreds of huge pages and your echo n > nr_hugepages will
return, just for you to find out that the memory hasn't been freed and
therefore cannot be reused for another purpose - recently somebody
mentioned their use case of freeing up huge pages to prevent OOM, for
example. I do expect more people doing something like that.

Now, nr_hugepages can be handled by blocking on the same WQ until all
pre-existing items are processed. Maybe we will need to have a more
generic API to achieve the same for in kernel users but let's wait for
those requests.
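
For the nr_hugepages write that could be as little as (a sketch, assuming a
single deferred-free work item as discussed above):

	/* wait for every page queued so far to reach the allocator */
	flush_work(&free_hpage_work);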

> I think Muchun had a
> similar setup just for vmemmap allocation in an early version of this
> series.
> 
> This would also require changes to where accounting is done in
> dissolve_free_huge_page and update_and_free_page as mentioned elsewhere.

Normalizing dissolve_free_huge_page is definitely a good idea. It is
really tricky how it sticks out and does half of the job of
update_and_free_page.

That being said, if it is possible to have a fully consistent h state
before handing over to the WQ for the sleeping operation, then we should be
all fine. I am slightly worried about potential tricky situations where the
sleeping operation fails, because that would require the page to be added
back to the pool again. As said above, we would need some sort of sync with
in-flight operations before returning to userspace.

-- 
Michal Hocko
SUSE Labs

^ permalink raw reply	[flat|nested] 52+ messages in thread

* Re: [PATCH v18 4/9] mm: hugetlb: alloc the vmemmap pages associated with each HugeTLB page
  2021-03-12  8:15                       ` Michal Hocko
@ 2021-03-12 17:50                         ` Mike Kravetz
  0 siblings, 0 replies; 52+ messages in thread
From: Mike Kravetz @ 2021-03-12 17:50 UTC (permalink / raw)
  To: Michal Hocko
  Cc: Paul E. McKenney, Muchun Song, corbet, tglx, mingo, bp, x86, hpa,
	dave.hansen, luto, peterz, viro, akpm, mchehab+huawei,
	pawan.kumar.gupta, rdunlap, oneukum, anshuman.khandual, jroedel,
	almasrymina, rientjes, willy, osalvador, song.bao.hua, david,
	naoya.horiguchi, joao.m.martins, duanxiongchun, linux-doc,
	linux-kernel, linux-mm, linux-fsdevel, Chen Huang,
	Bodeddula Balasubramaniam

On 3/12/21 12:15 AM, Michal Hocko wrote:
> On Thu 11-03-21 14:53:08, Mike Kravetz wrote:
>> On 3/11/21 9:59 AM, Mike Kravetz wrote:
>>> On 3/11/21 4:17 AM, Michal Hocko wrote:
>>>>> Yeah per cpu preempt counting shouldn't be noticeable but I have to
>>>>> confess I haven't benchmarked it.
>>>>
>>>> But all this seems moot now http://lkml.kernel.org/r/YEoA08n60+jzsnAl@hirez.programming.kicks-ass.net
>>>>
>>>
>>> The proper fix for free_huge_page independent of this series would
>>> involve:
>>>
>>> - Make hugetlb_lock and subpool lock irq safe
>>> - Hand off freeing to a workqueue if the freeing could sleep
>>>
>>> Today, the only time we can sleep in free_huge_page is for gigantic
>>> pages allocated via cma.  I 'think' the concern about undesirable
>>> user visible side effects in this case is minimal as freeing/allocating
>>> 1G pages is not something that is going to happen at a high frequency.
>>> My thinking could be wrong?
>>>
>>> Of more concern is the introduction of this series.  If this feature
>>> is enabled, then ALL free_huge_page requests must be sent to a workqueue.
>>> Any ideas on how to address this?
>>>
>>
>> Thinking about this more ...
>>
>> A call to free_huge_page has two distinct outcomes
>> 1) Page is freed back to the original allocator: buddy or cma
>> 2) Page is put on hugetlb free list
>>
>> We can only possibly sleep in case 1.  In addition, freeing a
>> page back to the original allocator involves these steps:
>> 1) Removing page from hugetlb lists
>> 2) Updating hugetlb counts: nr_hugepages, surplus
>> 3) Updating page fields
>> 4) Allocate vmemmap pages if needed as in this series
>> 5) Calling free routine of original allocator
>>
>> If hugetlb_lock is irq safe, we can perform the first 3 steps under that
>> lock without issue.  We would then use a workqueue to perform the last
>> two steps.  Since we are updating hugetlb user visible data under the
>> lock, there should be no delays.  Of course, giving those pages back to
>> the original allocator could still be delayed, and a user may notice
>> that.  Not sure if that would be acceptable?
> 
> Well, having many in-flight huge pages can certainly be visible. Say you
> are freeing hundreds of huge pages and your echo n > nr_hugepages will
> return just for you to find out that the memory hasn't been freed and
> therefore cannot be reused for another use - recently there was somebody
> mentioning their usecase to free up huge pages to prevent OOM for
> example. I do expect more people doing something like that.
> 
> Now, nr_hugepages can be handled by blocking on the same WQ until all
> pre-existing items are processed. Maybe we will need to have a more
> generic API to achieve the same for in kernel users but let's wait for
> those requests.
> 
>> I think Muchun had a
>> similar setup just for vmemmap allocation in an early version of this
>> series.
>>
>> This would also require changes to where accounting is done in
>> dissolve_free_huge_page and update_and_free_page as mentioned elsewhere.
> 
> Normalizing dissolve_free_huge_page is definitely a good idea. It is
> really tricky how it sticks out and does half of the job of
> update_and_free_page.
> 
> That being said, if it is possible to have a fully consistent h state
> before handing over to WQ for sleeping operation then we should be all
> fine. I am slightly worried about potential tricky situations where the
> sleeping operation fails because that would require that page to be
> added back to the pool again. As said above we would need some sort of
> sync with in-flight operations before returning to the userspace.

Those sysfs interfaces to allocate/free huge pages will need to be
reworked.  One thing that is totally unacceptable with hugetlb_lock
being irq safe is the calls to cond_resched_lock(&hugetlb_lock).
We will need to significantly reduce lock hold time in these situations.
I have some ideas on how this might work, but it is going to require
a good deal of code restructuring and will take some time.
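
The conflict in a nutshell (a sketch of the existing shrink loop in
set_max_huge_pages, simplified and with the lock made irq safe):

	spin_lock_irq(&hugetlb_lock);
	while (count < persistent_huge_pages(h)) {
		if (!free_pool_huge_page(h, nodes_allowed, 0))
			break;
		/*
		 * cond_resched_lock(&hugetlb_lock) drops the lock and may
		 * schedule, which is not an option with IRQs disabled, so
		 * lock hold times have to be cut down some other way.
		 */
	}
	spin_unlock_irq(&hugetlb_lock);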
-- 
Mike Kravetz

^ permalink raw reply	[flat|nested] 52+ messages in thread

end of thread, other threads:[~2021-03-12 18:18 UTC | newest]

Thread overview: 52+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2021-03-08 10:27 [PATCH v18 0/9] Free some vmemmap pages of HugeTLB page Muchun Song
2021-03-08 10:27 ` [PATCH v18 1/9] mm: memory_hotplug: factor out bootmem core functions to bootmem_info.c Muchun Song
2021-03-10 14:14   ` Michal Hocko
2021-03-11  2:58     ` [External] " Muchun Song
2021-03-11  8:45       ` Muchun Song
2021-03-11  8:53         ` Michal Hocko
2021-03-11  9:05           ` Muchun Song
2021-03-08 10:28 ` [PATCH v18 2/9] mm: hugetlb: introduce a new config HUGETLB_PAGE_FREE_VMEMMAP Muchun Song
2021-03-08 10:28 ` [PATCH v18 3/9] mm: hugetlb: free the vmemmap pages associated with each HugeTLB page Muchun Song
2021-03-10 14:32   ` Michal Hocko
2021-03-11  3:35     ` [External] " Muchun Song
2021-03-08 10:28 ` [PATCH v18 4/9] mm: hugetlb: alloc " Muchun Song
2021-03-10 14:21   ` Oscar Salvador
2021-03-11  4:13     ` [External] " Muchun Song
2021-03-10 15:19   ` Michal Hocko
2021-03-10 18:56     ` Mike Kravetz
2021-03-10 21:11       ` Michal Hocko
2021-03-10 21:49         ` Paul E. McKenney
2021-03-10 22:10           ` Mike Kravetz
2021-03-10 23:28             ` Paul E. McKenney
2021-03-11  8:40               ` Michal Hocko
2021-03-11 12:17                 ` Michal Hocko
2021-03-11 17:59                   ` Mike Kravetz
2021-03-11 22:53                     ` Mike Kravetz
2021-03-12  8:15                       ` Michal Hocko
2021-03-12 17:50                         ` Mike Kravetz
2021-03-11  4:26     ` [External] " Muchun Song
2021-03-11  8:46       ` Michal Hocko
2021-03-11  8:49         ` Muchun Song
2021-03-08 10:28 ` [PATCH v18 5/9] mm: hugetlb: set the PageHWPoison to the raw error page Muchun Song
2021-03-10 15:27   ` Michal Hocko
2021-03-11  6:34     ` [External] " Muchun Song
2021-03-11  8:50       ` Michal Hocko
2021-03-11  9:13         ` Muchun Song
2021-03-08 10:28 ` [PATCH v18 6/9] mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap Muchun Song
2021-03-10 15:37   ` Michal Hocko
2021-03-10 17:15     ` Randy Dunlap
2021-03-11  6:36       ` [External] " Muchun Song
2021-03-11  6:36     ` Muchun Song
2021-03-08 10:28 ` [PATCH v18 7/9] mm: hugetlb: introduce nr_free_vmemmap_pages in the struct hstate Muchun Song
2021-03-08 10:28 ` [PATCH v18 8/9] mm: hugetlb: gather discrete indexes of tail page Muchun Song
2021-03-10 15:39   ` Michal Hocko
2021-03-08 10:28 ` [PATCH v18 9/9] mm: hugetlb: optimize the code with the help of the compiler Muchun Song
2021-03-10 15:41   ` Michal Hocko
2021-03-11  7:33     ` [External] " Muchun Song
2021-03-11  8:55       ` Michal Hocko
2021-03-11  9:08         ` Muchun Song
2021-03-11  9:39           ` Michal Hocko
2021-03-11 10:00             ` Muchun Song
2021-03-11 12:16               ` Michal Hocko
2021-03-11 13:00                 ` Muchun Song
2021-03-11 13:45                 ` Oscar Salvador
