From: Muchun Song
To: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de,
    mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
    dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
    viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org,
    mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com,
    rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com,
    jroedel@suse.de, almasrymina@google.com, rientjes@google.com,
    willy@infradead.org, osalvador@suse.de, mhocko@suse.com,
    song.bao.hua@hisilicon.com, david@redhat.com, naoya.horiguchi@nec.com,
    joao.m.martins@oracle.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org,
    linux-kernel@vger.kernel.org, linux-mm@kvack.org,
    linux-fsdevel@vger.kernel.org, Muchun Song, Miaohe Lin
Subject: [PATCH v16 6/9] mm: hugetlb: add a kernel parameter hugetlb_free_vmemmap
Date: Fri, 19 Feb 2021 18:49:51 +0800
Message-Id: <20210219104954.67390-7-songmuchun@bytedance.com>
X-Mailer: git-send-email 2.21.0 (Apple Git-122)
In-Reply-To: <20210219104954.67390-1-songmuchun@bytedance.com>
References: <20210219104954.67390-1-songmuchun@bytedance.com>
MIME-Version: 1.0

Add a kernel parameter hugetlb_free_vmemmap to enable the feature of
freeing unused vmemmap pages associated with each HugeTLB page on boot.

We disable PMD mapping of vmemmap pages on the x86-64 architecture when
this feature is enabled, because vmemmap_remap_free() depends on the
vmemmap being mapped with base pages.

Signed-off-by: Muchun Song
Reviewed-by: Oscar Salvador
Reviewed-by: Barry Song
Reviewed-by: Miaohe Lin
---
 Documentation/admin-guide/kernel-parameters.txt | 14 ++++++++++++++
 Documentation/admin-guide/mm/hugetlbpage.rst    | 14 +++++++++-----
 arch/x86/mm/init_64.c                           |  8 ++++++--
 include/linux/hugetlb.h                         | 19 +++++++++++++++++++
 mm/hugetlb_vmemmap.c                            | 24 ++++++++++++++++++++
 5 files changed, 72 insertions(+), 7 deletions(-)

diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
index 5adf1e57e932..7db2591f3ad3 100644
--- a/Documentation/admin-guide/kernel-parameters.txt
+++ b/Documentation/admin-guide/kernel-parameters.txt
@@ -1577,6 +1577,20 @@
 			Documentation/admin-guide/mm/hugetlbpage.rst.
 			Format: size[KMG]
 
+	hugetlb_free_vmemmap=
+			[KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
+			this controls freeing unused vmemmap pages associated
+			with each HugeTLB page. When this option is enabled,
+			we disable PMD/huge page mapping of vmemmap pages which
+			increase page table pages.  So if a user/sysadmin only
+			uses a small number of HugeTLB pages (as a percentage
+			of system memory), they could end up using more memory
+			with hugetlb_free_vmemmap on as opposed to off.
+			Format: { on | off (default) }
+
+			on:  enable the feature
+			off: disable the feature
+
 	hung_task_panic=
 			[KNL] Should the hung task detector generate panics.
 			Format: 0 | 1
diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst b/Documentation/admin-guide/mm/hugetlbpage.rst
index fb8f649e5635..3bf494c01da4 100644
--- a/Documentation/admin-guide/mm/hugetlbpage.rst
+++ b/Documentation/admin-guide/mm/hugetlbpage.rst
@@ -60,8 +60,8 @@ HugePages_Surp
         the pool above the value in ``/proc/sys/vm/nr_hugepages``. The
         maximum number of surplus huge pages is controlled by
         ``/proc/sys/vm/nr_overcommit_hugepages``.
-        Note: When the feature of freeing unused vmemmap pages associated
-        with each hugetlb page is enabled, the number of the surplus huge
+        Note: When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP and the kernel parameter
+        of ``hugetlb_free_vmemmap=on`` are set, the number of the surplus huge
         pages may be temporarily larger than the maximum number of surplus huge
         pages when the system is under memory pressure.
 Hugepagesize
@@ -84,9 +84,10 @@ returned to the huge page pool when freed by a task.  A user with root
 privileges can dynamically allocate more or free some persistent huge pages
 by increasing or decreasing the value of ``nr_hugepages``.
 
-Note: When the feature of freeing unused vmemmap pages associated with each
-hugetlb page is enabled, we can failed to free the huge pages triggered by
-the user when ths system is under memory pressure.  Please try again later.
+Note: When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP and the kernel parameter of
+``hugetlb_free_vmemmap=on`` are set, we may fail to free the huge pages
+triggered by the user when the system is under memory pressure.  Please
+try again later.
 
 Pages that are used as huge pages are reserved inside the kernel and cannot
 be used for other purposes.  Huge pages cannot be swapped out under
@@ -153,6 +154,9 @@ default_hugepagesz
 
 	will all result in 256 2M huge pages being allocated.  Valid default
 	huge page size is architecture dependent.
+hugetlb_free_vmemmap
+	When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
+	unused vmemmap pages associated with each HugeTLB page.
 
 When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
 indicates the current number of pre-allocated huge pages of the default size.
diff --git a/arch/x86/mm/init_64.c b/arch/x86/mm/init_64.c
index 0435bee2e172..39f88c5faadc 100644
--- a/arch/x86/mm/init_64.c
+++ b/arch/x86/mm/init_64.c
@@ -34,6 +34,7 @@
 #include
 #include
 #include
+#include <linux/hugetlb.h>
 
 #include
 #include
@@ -1557,7 +1558,8 @@ int __meminit vmemmap_populate(unsigned long start, unsigned long end, int node,
 {
 	int err;
 
-	if (end - start < PAGES_PER_SECTION * sizeof(struct page))
+	if ((is_hugetlb_free_vmemmap_enabled() && !altmap) ||
+	    end - start < PAGES_PER_SECTION * sizeof(struct page))
 		err = vmemmap_populate_basepages(start, end, node, NULL);
 	else if (boot_cpu_has(X86_FEATURE_PSE))
 		err = vmemmap_populate_hugepages(start, end, node, altmap);
@@ -1585,6 +1587,8 @@ void register_page_bootmem_memmap(unsigned long section_nr,
 	pmd_t *pmd;
 	unsigned int nr_pmd_pages;
 	struct page *page;
+	bool base_mapping = !boot_cpu_has(X86_FEATURE_PSE) ||
+			    is_hugetlb_free_vmemmap_enabled();
 
 	for (; addr < end; addr = next) {
 		pte_t *pte = NULL;
@@ -1610,7 +1614,7 @@ void register_page_bootmem_memmap(unsigned long section_nr,
 		}
 		get_page_bootmem(section_nr, pud_page(*pud), MIX_SECTION_INFO);
 
-		if (!boot_cpu_has(X86_FEATURE_PSE)) {
+		if (base_mapping) {
 			next = (addr + PAGE_SIZE) & PAGE_MASK;
 			pmd = pmd_offset(pud, addr);
 			if (pmd_none(*pmd))
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 37fd248ce271..ad249e56ac49 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -854,6 +854,20 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
 
 void set_page_huge_active(struct page *page);
 
+#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
+extern bool hugetlb_free_vmemmap_enabled;
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return hugetlb_free_vmemmap_enabled;
+}
+#else
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return false;
+}
+#endif
+
 #else	/* CONFIG_HUGETLB_PAGE */
 struct hstate {};
 
@@ -1007,6 +1021,11 @@ static inline void set_huge_swap_pte_at(struct mm_struct *mm, unsigned long addr,
 					pte_t *ptep, pte_t pte, unsigned long sz)
 {
 }
+
+static inline bool is_hugetlb_free_vmemmap_enabled(void)
+{
+	return false;
+}
 #endif	/* CONFIG_HUGETLB_PAGE */
 
 static inline spinlock_t *huge_pte_lock(struct hstate *h,
diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
index f7ab3d99250a..7807ed6678e0 100644
--- a/mm/hugetlb_vmemmap.c
+++ b/mm/hugetlb_vmemmap.c
@@ -169,6 +169,8 @@
  * (last) level. So this type of HugeTLB page can be optimized only when its
  * size of the struct page structs is greater than 2 pages.
  */
+#define pr_fmt(fmt)	"HugeTLB: " fmt
+
 #include "hugetlb_vmemmap.h"
 
 /*
@@ -181,6 +183,28 @@
 #define RESERVE_VMEMMAP_NR		2U
 #define RESERVE_VMEMMAP_SIZE		(RESERVE_VMEMMAP_NR << PAGE_SHIFT)
 
+bool hugetlb_free_vmemmap_enabled;
+
+static int __init early_hugetlb_free_vmemmap_param(char *buf)
+{
+	/* We cannot optimize if a "struct page" crosses page boundaries. */
+	if ((!is_power_of_2(sizeof(struct page)))) {
+		pr_warn("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n");
+		return 0;
+	}
+
+	if (!buf)
+		return -EINVAL;
+
+	if (!strcmp(buf, "on"))
+		hugetlb_free_vmemmap_enabled = true;
+	else if (strcmp(buf, "off"))
+		return -EINVAL;
+
+	return 0;
+}
+early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
+
 static inline unsigned long free_vmemmap_pages_size_per_hpage(struct hstate *h)
 {
 	return (unsigned long)free_vmemmap_pages_per_hpage(h) << PAGE_SHIFT;
-- 
2.11.0
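
For readers who want to see the boot-parameter handling above in isolation,
here is a minimal user-space sketch of the on/off parsing performed by
early_hugetlb_free_vmemmap_param(). It is not kernel code: the function name
parse_hugetlb_free_vmemmap() and the main() driver are hypothetical stand-ins,
and the error code is simplified to -1 where the kernel handler returns
-EINVAL.

/* Illustrative analogue of the early_param() handler in mm/hugetlb_vmemmap.c. */
#include <stdbool.h>
#include <stdio.h>
#include <string.h>

static bool hugetlb_free_vmemmap_enabled;

static int parse_hugetlb_free_vmemmap(const char *buf)
{
	if (!buf)
		return -1;	/* the kernel handler returns -EINVAL here */

	if (!strcmp(buf, "on"))
		hugetlb_free_vmemmap_enabled = true;	/* "on" enables the feature */
	else if (strcmp(buf, "off"))
		return -1;	/* anything other than "on"/"off" is rejected */

	return 0;	/* "off" is accepted and leaves the feature disabled */
}

int main(void)
{
	/* Simulates booting with "hugetlb_free_vmemmap=on" on the command line. */
	if (parse_hugetlb_free_vmemmap("on") == 0)
		printf("hugetlb_free_vmemmap enabled: %d\n", hugetlb_free_vmemmap_enabled);
	return 0;
}

With the real parameter, the same effect is obtained by booting with
hugetlb_free_vmemmap=on on the kernel command line; the kernel handler
additionally warns and bails out early when sizeof(struct page) is not a
power of two, because a "struct page" that crosses page boundaries cannot
be optimized.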