Date: Wed, 16 Dec 2020 14:43:54 +0100
From: Oscar Salvador
To: Muchun Song
Cc: corbet@lwn.net, mike.kravetz@oracle.com, tglx@linutronix.de,
	mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com,
	dave.hansen@linux.intel.com, luto@kernel.org, peterz@infradead.org,
	viro@zeniv.linux.org.uk, akpm@linux-foundation.org, paulmck@kernel.org,
	mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com,
	rdunlap@infradead.org, oneukum@suse.com, anshuman.khandual@arm.com,
	jroedel@suse.de, almasrymina@google.com, rientjes@google.com,
	willy@infradead.org, mhocko@suse.com, song.bao.hua@hisilicon.com,
	david@redhat.com, duanxiongchun@bytedance.com, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org,
	linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH v9 09/11] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
Message-ID: <20201216134354.GD29394@linux>
References: <20201213154534.54826-1-songmuchun@bytedance.com>
	<20201213154534.54826-10-songmuchun@bytedance.com>
In-Reply-To: <20201213154534.54826-10-songmuchun@bytedance.com>
On Sun, Dec 13, 2020 at 11:45:32PM +0800, Muchun Song wrote:
> All the infrastructure is ready, so we introduce nr_free_vmemmap_pages
> field in the hstate to indicate how many vmemmap pages associated with
> a HugeTLB page that we can free to buddy allocator. And initialize it

"can be freed to buddy allocator"

> in the hugetlb_vmemmap_init(). This patch is actual enablement of the
> feature.
>
> Signed-off-by: Muchun Song
> Acked-by: Mike Kravetz

With below nits addressed you can add:

Reviewed-by: Oscar Salvador

> static int __init early_hugetlb_free_vmemmap_param(char *buf)
> {
> +	/* We cannot optimize if a "struct page" crosses page boundaries. */
> +	if (!is_power_of_2(sizeof(struct page)))
> +		return 0;
> +

I wonder if we should report a warning in case someone wants to enable
this feature and struct page size is not a power of 2, in case someone
wonders why it does not work for him/her (a rough sketch of what I mean
is at the end of this mail).

> +void __init hugetlb_vmemmap_init(struct hstate *h)
> +{
> +	unsigned int nr_pages = pages_per_huge_page(h);
> +	unsigned int vmemmap_pages;
> +
> +	if (!hugetlb_free_vmemmap_enabled)
> +		return;
> +
> +	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
> +	/*
> +	 * The head page and the first tail page are not to be freed to buddy
> +	 * system, the others page will map to the first tail page. So there
> +	 * are the remaining pages that can be freed.

"the other pages will map to the first tail page, so they can be freed."

> +	 *
> +	 * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? It is true
> +	 * on some architectures (e.g. aarch64). See Documentation/arm64/
> +	 * hugetlbpage.rst for more details.
> +	 */
> +	if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR))
> +		h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
> +
> +	pr_info("can free %d vmemmap pages for %s\n", h->nr_free_vmemmap_pages,
> +		h->name);

Maybe specify this is hugetlb code:

pr_info("%s: blabla", __func__, ...) or pr_info("hugetlb: blabla", ...);

although I am not sure whether we need that at all, or maybe just use
pr_debug() (see the second sketch below).

-- 
Oscar Salvador
SUSE L3
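
P.S.: a minimal sketch of the warning I have in mind for the first nit.
I am assuming here that the rest of the handler parses "on"/"off" into
hugetlb_free_vmemmap_enabled as elsewhere in this series, and the exact
message text is only illustrative:

static int __init early_hugetlb_free_vmemmap_param(char *buf)
{
	/* We cannot optimize if a "struct page" crosses page boundaries. */
	if (!is_power_of_2(sizeof(struct page))) {
		/* Tell the user why the early param had no effect. */
		pr_warn("hugetlb: cannot free vmemmap pages, \"struct page\" size is not a power of 2\n");
		return 0;
	}

	if (!buf)
		return -EINVAL;

	/* Assumed from the rest of this series: accept "on"/"off". */
	if (!strcmp(buf, "on"))
		hugetlb_free_vmemmap_enabled = true;
	else if (strcmp(buf, "off"))
		return -EINVAL;

	return 0;
}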
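
And the pr_info() nit spelled out; the "hugetlb:" prefix is just my
suggestion, and pr_debug() would take the same arguments:

	pr_info("hugetlb: can free %d vmemmap pages for %s\n",
		h->nr_free_vmemmap_pages, h->name);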