From: Muchun Song <songmuchun@bytedance.com>
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de,
mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
dave.hansen@linux.intel.com, luto@kernel.org,
peterz@infradead.org, viro@zeniv.linux.org.uk,
akpm@linux-foundation.org, paulmck@kernel.org,
mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com,
rdunlap@infradead.org, oneukum@suse.com,
anshuman.khandual@arm.com, jroedel@suse.de,
almasrymina@google.com, rientjes@google.com, willy@infradead.org,
osalvador@suse.de, mhocko@suse.com, song.bao.hua@hisilicon.com,
david@redhat.com, naoya.horiguchi@nec.com,
joao.m.martins@oracle.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org,
linux-kernel@vger.kernel.org, linux-mm@kvack.org,
linux-fsdevel@vger.kernel.org,
Muchun Song <songmuchun@bytedance.com>,
Miaohe Lin <linmiaohe@huawei.com>
Subject: [PATCH v16 6/9] mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap
Date: Fri, 19 Feb 2021 18:49:51 +0800
Message-ID: <20210219104954.67390-7-songmuchun@bytedance.com>
In-Reply-To: <20210219104954.67390-1-songmuchun@bytedance.com>
Add a kernel parameter hugetlb_free_vmemmap to enable the feature of
freeing unused vmemmap pages associated with each hugetlb page on boot.
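For example, the feature can be enabled from the bootloader (an
illustrative sketch; the GRUB config path and the regeneration step
are assumptions that vary by distribution):

    # e.g. in /etc/default/grub, append to the kernel command line:
    GRUB_CMDLINE_LINUX="... hugetlb_free_vmemmap=on"

    # after regenerating the bootloader config and rebooting,
    # verify the parameter was applied:
    $ cat /proc/cmdline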
We disable PMD mapping of vmemmap pages on the x86-64 architecture
when this feature is enabled, because vmemmap_remap_free() depends on
the vmemmap being base-page mapped.
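As a back-of-the-envelope illustration of the savings (assuming 4 KiB
base pages and a 64-byte struct page, so the numbers are indicative
only): a 2 MB HugeTLB page spans 512 base pages, whose struct pages
occupy 512 * 64 = 32 KB, i.e. 8 vmemmap pages. With this feature, all
but RESERVE_VMEMMAP_NR (2) of those vmemmap pages can be freed, saving
6 pages (24 KB) per 2 MB HugeTLB page.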
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
Reviewed-by: Oscar Salvador <osalvador@suse.de>
Reviewed-by: Barry Song <song.bao.hua@hisilicon.com>
Reviewed-by: Miaohe Lin <linmiaohe@huawei.com>
---
Documentation/admin-guide/kernel-parameters.txt | 14 ++++++++++++++
Documentation/admin-guide/mm/hugetlbpage.rst | 14 +++++++++-----
arch/x86/mm/init_64.c | 8 ++++++--
include/linux/hugetlb.h | 19 +++++++++++++++++++
mm/hugetlb_vmemmap.c | 24 ++++++++++++++++++++++++
5 files changed, 72 insertions(+), 7 deletions(-)
diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 5adf1e57e932..7db2591f3ad3 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1577,6 +1577,20 @@
Documentation/admin-guide/mm/hugetlbpage.rst.
Format: size[KMG]
+ hugetlb_free_vmemmap=
+ [KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
+ this controls freeing unused vmemmap pages associated
+ with each HugeTLB page. When this option is enabled,
+ we disable PMD/huge page mapping of vmemmap pages, which
+ increases the number of page table pages. So if a user/sysadmin only
+ uses a small number of HugeTLB pages (as a percentage
+ of system memory), they could end up using more memory
+ with hugetlb_free_vmemmap on as opposed to off.
+ Format: { on | off (default) }
+
+ on: enable the feature
+ off: disable the feature
+
hung_task_panic=
[KNL] Should the hung task detector generate panics.
Format: 0 | 1
diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index fb8f649e5635..3bf494c01da4 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -60,8 +60,8 @@ HugePages_Surp
the pool above the value in ``/proc/sys/vm/nr_hugepages``. The
maximum number of surplus huge pages is controlled by
``/proc/sys/vm/nr_overcommit_hugepages``.
- Note: When the feature of freeing unused vmemmap pages associated
- with each hugetlb page is enabled, the number of the surplus huge
+ Note: When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set and the kernel
+ parameter ``hugetlb_free_vmemmap=on`` is used, the number of surplus huge
pages may be temporarily larger than the maximum number of surplus
huge pages when the system is under memory pressure.
Hugepagesize
@@ -84,9 +84,10 @@ returned to the huge page pool when freed by a task. A user with root
privileges can dynamically allocate more or free some persistent huge pages
by increasing or decreasing the value of ``nr_hugepages``.
-Note: When the feature of freeing unused vmemmap pages associated with each
-hugetlb page is enabled, we can failed to free the huge pages triggered by
-the user when ths system is under memory pressure. Please try again later.
+Note: When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set and the kernel
+parameter ``hugetlb_free_vmemmap=on`` is used, freeing of huge pages
+requested by the user may fail when the system is under memory
+pressure. Please try again later.
Pages that are used as huge pages are reserved inside the kernel and cannot
be used for other purposes. Huge pages cannot be swapped out under
@@ -153,6 +154,9 @@ default_hugepagesz
will all result in 256 2M huge pages being allocated. Valid default
huge page size is architecture dependent.
+hugetlb_free_vmemmap
+ When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
+ unused vmemmap pages associated with each HugeTLB page.
When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
indicates the current number of pre-allocated huge pages of the default size.
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0435bee2e172..39f88c5faadc 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -34,6 +34,7 @@
#include <linux/gfp.h>
#include <linux/kcore.h>
#include <linux/bootmem_info.h>
+#include <linux/hugetlb.h>
#include <asm/processor.h>
#include <asm/bios_ebda.h>
@@ -1557,7 +1558,8 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
{
int err;
- if (end - start < PAGES_PER_SECTION * sizeof(struct page))
+ if ((is_hugetlb_free_vmemmap_enabled() && !altmap) ||
+ end - start < PAGES_PER_SECTION * sizeof(struct page))
err = vmemmap_populate_basepages(start, end, node, NULL);
else if (boot_cpu_has(X86_FEATURE_PSE))
err = vmemmap_populate_hugepages(start, end, node, altmap);
@@ -1585,6 +1587,8 @@ void register_page_bootmem_memmap(unsigned long section_nr,
pmd_t *pmd;
unsigned int nr_pmd_pages;
struct page *page;
+ bool base_mapping = !boot_cpu_has(X86_FEATURE_PSE) ||
+ is_hugetlb_free_vmemmap_enabled();
for (; addr < end; addr = next) {
pte_t *pte = NULL;
@@ -1610,7 +1614,7 @@ void register_page_bootmem_memmap(unsigned long section_nr,
}
get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);
- if (!boot_cpu_has(X86_FEATURE_PSE)) {
+ if (base_mapping) {
next = (addr + PAGE_SIZE) & PAGE_MASK;
pmd = pmd_offset(pud, addr);
if (pmd_none(*pmd))
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 37fd248ce271..ad249e56ac49 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -854,6 +854,20 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
void set_page_huge_active(struct page *page);
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+extern bool hugetlb_free_vmemmap_enabled;
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+ return hugetlb_free_vmemmap_enabled;
+}
+#else
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+ return false;
+}
+#endif
+
#else /* CONFIG_HUGETLB_PAGE */
struct hstate {};
@@ -1007,6 +1021,11 @@ static inline void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr
pte_t *ptep, pte_t pte, unsigned long sz)
{
}
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+ return false;
+}
#endif /* CONFIG_HUGETLB_PAGE */
static inline spinlock_t *huge_pte_lock(struct hstate *h,
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index f7ab3d99250a..7807ed6678e0 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -169,6 +169,8 @@
* (last) level. So this type of HugeTLB page can be optimized only when its
* size of the struct page structs is greater than 2 pages.
*/
+#define pr_fmt(fmt) "HugeTLB: " fmt
+
#include "hugetlb_vmemmap.h"
/*
@@ -181,6 +183,28 @@
#define RESERVE_VMEMMAP_NR 2U
#define RESERVE_VMEMMAP_SIZE (RESERVE_VMEMMAP_NR << PAGE_SHIFT)
+bool hugetlb_free_vmemmap_enabled;
+
+static int __init early_hugetlb_free_vmemmap_param(char *buf)
+{
+ /* We cannot optimize if a "struct page" crosses page boundaries. */
+ if (!is_power_of_2(sizeof(struct page))) {
+ pr_warn("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n");
+ return 0;
+ }
+
+ if (!buf)
+ return -EINVAL;
+
+ if (!strcmp(buf, "on"))
+ hugetlb_free_vmemmap_enabled = true;
+ else if (strcmp(buf, "off"))
+ return -EINVAL;
+
+ return 0;
+}
+early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
+
static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
{
return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
--
2.11.0