From: "Song Bao Hua (Barry Song)"
To: Muchun Song, corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
	dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
	viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org,
	mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com,
	rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com,
	jroedel@suse.de, almasrymina@google.com, rientjes@google.com,
	willy@infradead.org, osalvador@suse.de, mhocko@suse.com
Cc: duanxiongchun@bytedance.com, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org
Subject: RE: [PATCH v6 14/16] mm/hugetlb: Add a kernel parameter hugetlb_free_vmemmap
Date: Tue, 24 Nov 2020 10:53:56 +0000
Message-ID: <5f6443f10292405d813ffb444ef315fc@hisilicon.com>
In-Reply-To: <20201124095259.58755-15-songmuchun@bytedance.com>
References: <20201124095259.58755-1-songmuchun@bytedance.com>
	<20201124095259.58755-15-songmuchun@bytedance.com>

> -----Original Message-----
> From: Muchun Song
> [mailto:songmuchun@bytedance.com]
> Sent: Tuesday, November 24, 2020 10:53 PM
> To: corbet@lwn.net; mike.kravetz@oracle.com; tglx@linutronix.de;
> mingo@redhat.com; bp@alien8.de; x86@kernel.org; hpa@zytor.com;
> dave.hansen@linux.intel.com; luto@kernel.org; peterz@infradead.org;
> viro@zeniv.linux.org.uk; akpm@linux-foundation.org; paulmck@kernel.org;
> mchehab+huawei@kernel.org; pawan.kumar.gupta@linux.intel.com;
> rdunlap@infradead.org; oneukum@suse.com; anshuman.khandual@arm.com;
> jroedel@suse.de; almasrymina@google.com; rientjes@google.com;
> willy@infradead.org; osalvador@suse.de; mhocko@suse.com; Song Bao Hua
> (Barry Song)
> Cc: duanxiongchun@bytedance.com; linux-doc@vger.kernel.org;
> linux-kernel@vger.kernel.org; linux-mm@kvack.org;
> linux-fsdevel@vger.kernel.org; Muchun Song
> Subject: [PATCH v6 14/16] mm/hugetlb: Add a kernel parameter
> hugetlb_free_vmemmap
>
> Add a kernel parameter hugetlb_free_vmemmap to disable the feature of
> freeing unused vmemmap pages associated with each hugetlb page on boot.
>
> Signed-off-by: Muchun Song
> ---
>  Documentation/admin-guide/kernel-parameters.txt |  9 +++++++++
>  Documentation/admin-guide/mm/hugetlbpage.rst    |  3 +++
>  mm/hugetlb_vmemmap.c                            | 19 ++++++++++++++++++-
>  3 files changed, 30 insertions(+), 1 deletion(-)
>
> diff --git a/Documentation/admin-guide/kernel-parameters.txt
> b/Documentation/admin-guide/kernel-parameters.txt
> index 5debfe238027..d28c3acde965 100644
> --- a/Documentation/admin-guide/kernel-parameters.txt
> +++ b/Documentation/admin-guide/kernel-parameters.txt
> @@ -1551,6 +1551,15 @@
>  			Documentation/admin-guide/mm/hugetlbpage.rst.
>  			Format: size[KMG]
>
> +	hugetlb_free_vmemmap=
> +			[KNL] When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set,
> +			this controls freeing unused vmemmap pages associated
> +			with each HugeTLB page.
> +			Format: { on | off (default) }
> +
> +			on:  enable the feature
> +			off: disable the feature
> +

We have a parameter here, but wouldn't it need to be applied to
"x86/mm/64: disable PMD page mapping of vmemmap" as well? That is, only
do base-page mapping of the vmemmap when hugetlb_free_vmemmap_enabled is
set; see the untested sketch at the end of this mail.

>  	hung_task_panic=
>  			[KNL] Should the hung task detector generate panics.
>  			Format: 0 | 1
>
> diff --git a/Documentation/admin-guide/mm/hugetlbpage.rst
> b/Documentation/admin-guide/mm/hugetlbpage.rst
> index f7b1c7462991..6a8b57f6d3b7 100644
> --- a/Documentation/admin-guide/mm/hugetlbpage.rst
> +++ b/Documentation/admin-guide/mm/hugetlbpage.rst
> @@ -145,6 +145,9 @@ default_hugepagesz
>
>  	will all result in 256 2M huge pages being allocated.  Valid default
>  	huge page size is architecture dependent.
> +hugetlb_free_vmemmap
> +	When CONFIG_HUGETLB_PAGE_FREE_VMEMMAP is set, this enables freeing
> +	unused vmemmap pages associated with each HugeTLB page.
>
>  When multiple huge page sizes are supported, ``/proc/sys/vm/nr_hugepages``
>  indicates the current number of pre-allocated huge pages of the default size.
> diff --git a/mm/hugetlb_vmemmap.c b/mm/hugetlb_vmemmap.c
> index 509ca451e232..b2222f8d1245 100644
> --- a/mm/hugetlb_vmemmap.c
> +++ b/mm/hugetlb_vmemmap.c
> @@ -131,6 +131,22 @@ typedef void (*vmemmap_pte_remap_func_t)(struct page *reuse, pte_t *ptep,
>  					  unsigned long start, unsigned long end,
>  					  void *priv);
>
> +static bool hugetlb_free_vmemmap_enabled __initdata;
> +
> +static int __init early_hugetlb_free_vmemmap_param(char *buf)
> +{
> +	if (!buf)
> +		return -EINVAL;
> +
> +	if (!strcmp(buf, "on"))
> +		hugetlb_free_vmemmap_enabled = true;
> +	else if (strcmp(buf, "off"))
> +		return -EINVAL;
> +
> +	return 0;
> +}
> +early_param("hugetlb_free_vmemmap", early_hugetlb_free_vmemmap_param);
> +
>  static inline unsigned int vmemmap_pages_per_hpage(struct hstate *h)
>  {
>  	return free_vmemmap_pages_per_hpage(h) + RESERVE_VMEMMAP_NR;
> @@ -322,7 +338,8 @@ void __init hugetlb_vmemmap_init(struct hstate *h)
>  	unsigned int order = huge_page_order(h);
>  	unsigned int vmemmap_pages;
>
> -	if (!is_power_of_2(sizeof(struct page))) {
> +	if (!is_power_of_2(sizeof(struct page)) ||
> +	    !hugetlb_free_vmemmap_enabled) {
>  		pr_info("disable freeing vmemmap pages for %s\n", h->name);
>  		return;
>  	}
> --
> 2.11.0

Thanks
Barry
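
To be concrete, something like the below is what I mean. It is completely
untested; the hook point in arch/x86/mm/init_64.c:vmemmap_populate() is only
an assumption, it trims the small-section and altmap corner cases of the real
function, and it would also need hugetlb_free_vmemmap_enabled to be visible
outside mm/hugetlb_vmemmap.c (in this patch it is static __initdata there):

int __meminit vmemmap_populate(unsigned long start, unsigned long end,
			       int node, struct vmem_altmap *altmap)
{
	int err;

	if (IS_ENABLED(CONFIG_HUGETLB_PAGE_FREE_VMEMMAP) &&
	    hugetlb_free_vmemmap_enabled)
		/* base pages, so the unused tail-page vmemmap can be freed */
		err = vmemmap_populate_basepages(start, end, node, NULL);
	else if (boot_cpu_has(X86_FEATURE_PSE))
		/* default: keep the 2M PMD mapping of the vmemmap */
		err = vmemmap_populate_hugepages(start, end, node, altmap);
	else
		err = vmemmap_populate_basepages(start, end, node, NULL);

	if (!err)
		sync_global_pgds(start, end - 1);

	return err;
}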