Subject: Re: [PATCH] mm/hugetlb: avoid weird message in hugetlb_init
From: "Longpeng (Mike)"
To: Mike Kravetz
CC: Matthew Wilcox, Andrew Morton, Qian Cai
Date: Mon, 9 Mar 2020 16:16:31 +0800
Message-ID: <4a544411-43cc-41b3-68eb-1cc3d6af159b@huawei.com>
In-Reply-To: <43017337-fe28-16e0-fbdd-d6368bdd2eb2@oracle.com>
References: <20200305033014.1152-1-longpeng2@huawei.com> <43017337-fe28-16e0-fbdd-d6368bdd2eb2@oracle.com>

On 2020/3/7 4:12, Mike Kravetz wrote:
> On 3/5/20 10:36 PM, Longpeng (Mike) wrote:
>> On 2020/3/6 8:09, Mike Kravetz wrote:
>>> On 3/4/20 7:30 PM, Longpeng(Mike) wrote:
>>>> From: Longpeng
>>
>>> I am thinking we may want to have a more generic solution by allowing [...]
>>> Of course, another approach would be to simply require ALL architectures
>>> to set up hstates for ALL supported huge page sizes.
>>>
>> I think this is also needed; then we could request all supported sizes of
>> hugepages dynamically via sysfs (e.g. /sys/kernel/mm/hugepages/*). Currently,
>> on x86, we can only request 1G hugepages through sysfs if we boot with
>> 'default_hugepagesz=1G', even with the first approach.
>
> I 'think' you can use sysfs for 1G huge pages on x86 today.
> Just booted a system without any hugepage options on the command line.
>
> # cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
> 0
> # echo 1 > /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
> # cat /sys/kernel/mm/hugepages/hugepages-1048576kB/nr_hugepages
> 1
> # cat /sys/kernel/mm/hugepages/hugepages-1048576kB/free_hugepages
> 1
>
> x86 and riscv will set up an hstate for PUD_SIZE by default if
> CONFIG_CONTIG_ALLOC is enabled. This is because of a somewhat recent feature
> that allowed dynamic allocation of gigantic (page order >= MAX_ORDER) pages.
> Before that feature, it made no sense to set up an hstate for gigantic
> pages if they were not allocated at boot time and could not be dynamically
> added later.

Sorry, maybe my poor English caused a misunderstanding. What I wanted to say
is that we should allow users to allocate ALL supported sizes of hugepages
dynamically by default, so we need to require ALL architectures to set up
hstates for ALL supported huge page sizes.

> I'll code up a proposal that does the following:
> - Have arch specific code provide a list of supported huge page sizes
> - Arch independent code uses list to create all hstates
> - Move processing of "hugepagesz=" to arch independent code
> - Validate "default_hugepagesz=" when value is read from command line
>
> It may take a few days. When ready, I will pull in the architecture
> specific people.

Great! I'm looking forward to your patches. I also have one or two other small
improvements that I hope to discuss with you after you finish this code.

>> BTW, because it's not easy to discuss with you due to the time difference, I
>> have another question, about the default hugepages, to consult you on here:
>> why does /proc/meminfo only show info about the default hugepages, but not
>> the others?
>> meminfo is better known than sysfs; some ordinary users know meminfo but
>> don't know to use sysfs to get the hugepage status (e.g. total, free).
>
> I believe that is simply history. In the beginning there was only the
> default huge page size and that was added to meminfo. People then wrote
> scripts to parse huge page information in meminfo. When support for
> other huge page sizes was added, it was not added to meminfo as it could
> break user scripts parsing the file. Adding information for all potential
> huge page sizes may create lots of entries that are unused. I was not
> around when these decisions were made, but that is my understanding.
> BTW - A recently added meminfo field 'Hugetlb' displays the amount of
> memory consumed by huge pages of ALL sizes.

I get it, thanks :)

> --
> Mike Kravetz

--
Regards,
Longpeng(Mike)
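P.S. For anyone scripting around this, the distinction discussed above, that the HugePages_* fields cover only the default hstate while the 'Hugetlb' field accounts for huge pages of ALL sizes, can be illustrated with a small parser. This is only a sketch; the sample values are made up (one 1G page plus one default-size 2M page), not real system output:

```python
# Sketch: parse hugepage fields out of /proc/meminfo-style text.
# HugePages_* describe only the default huge page size; 'Hugetlb'
# accounts for memory in huge pages of ALL sizes.
SAMPLE = """\
HugePages_Total:       1
HugePages_Free:        1
HugePages_Rsvd:        0
HugePages_Surp:        0
Hugepagesize:       2048 kB
Hugetlb:         1050624 kB
"""

def parse_meminfo(text):
    """Return {field: integer value}; values are in kB where a unit is shown."""
    info = {}
    for line in text.splitlines():
        key, _, rest = line.partition(":")
        info[key] = int(rest.split()[0])
    return info

info = parse_meminfo(SAMPLE)
print(info["HugePages_Total"])  # 1  (default 2M pages only)
print(info["Hugetlb"])          # 1050624  (2048 kB + one 1G page)
```

With these sample numbers, a script watching only HugePages_Total would miss the 1G page entirely: 1048576 kB + 2048 kB = 1050624 kB shows up only in 'Hugetlb'.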