From: Muchun Song
Date: Wed, 16 Dec 2020 21:56:47 +0800
Subject: Re: [External] Re: [PATCH v9 09/11] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
To: Oscar Salvador
Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo@redhat.com, bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com, luto@kernel.org, Peter Zijlstra, viro@zeniv.linux.org.uk, Andrew Morton, paulmck@kernel.org, mchehab+huawei@kernel.org, pawan.kumar.gupta@linux.intel.com, Randy Dunlap, oneukum@suse.com, anshuman.khandual@arm.com, jroedel@suse.de, Mina Almasry, David Rientjes, Matthew Wilcox, Michal Hocko, "Song Bao Hua (Barry Song)", David Hildenbrand, Xiongchun duan, linux-doc@vger.kernel.org, LKML, Linux Memory Management List, linux-fsdevel
In-Reply-To: <20201216134354.GD29394@linux>
References: <20201213154534.54826-1-songmuchun@bytedance.com> <20201213154534.54826-10-songmuchun@bytedance.com> <20201216134354.GD29394@linux>

On Wed, Dec 16, 2020 at 9:44 PM Oscar Salvador wrote:
>
> On Sun, Dec 13, 2020 at 11:45:32PM +0800, Muchun Song wrote:
> > All the infrastructure is ready, so we introduce nr_free_vmemmap_pages
> > field in the hstate to indicate how many vmemmap pages associated with
> > a HugeTLB page that we can free to buddy allocator. And initialize it
>
> "can be freed to buddy allocator"
>
> > in the hugetlb_vmemmap_init(). This patch is actual enablement of the
> > feature.
> >
> > Signed-off-by: Muchun Song
> > Acked-by: Mike Kravetz
>
> With below nits addressed you can add:
>
> Reviewed-by: Oscar Salvador

Thanks.

> > static int __init early_hugetlb_free_vmemmap_param(char *buf)
> > {
> > +	/* We cannot optimize if a "struct page" crosses page boundaries. */
> > +	if (!is_power_of_2(sizeof(struct page)))
> > +		return 0;
> > +
>
> I wonder if we should report a warning in case someone wants to enable this
> feature and struct page size is not a power of 2, in case someone wonders
> why it does not work for him/her.
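Good idea. An untested sketch of what that could look like (the exact
message wording is just illustrative):

	static int __init early_hugetlb_free_vmemmap_param(char *buf)
	{
		/* We cannot optimize if a "struct page" crosses page boundaries. */
		if (!is_power_of_2(sizeof(struct page))) {
			/* Tell the user why the early parameter is being ignored. */
			pr_warn("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n");
			return 0;
		}
		...
	}

Still returning 0 keeps the early-param semantics unchanged; the only
difference is that the silent bail-out becomes visible in dmesg.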
> > +void __init hugetlb_vmemmap_init(struct hstate *h)
> > +{
> > +	unsigned int nr_pages = pages_per_huge_page(h);
> > +	unsigned int vmemmap_pages;
> > +
> > +	if (!hugetlb_free_vmemmap_enabled)
> > +		return;
> > +
> > +	vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
> > +	/*
> > +	 * The head page and the first tail page are not to be freed to buddy
> > +	 * system, the others page will map to the first tail page. So there
> > +	 * are the remaining pages that can be freed.
>
> "the other pages will map to the first tail page, so they can be freed."
>
> > +	 *
> > +	 * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? It is true
> > +	 * on some architectures (e.g. aarch64). See Documentation/arm64/
> > +	 * hugetlbpage.rst for more details.
> > +	 */
> > +	if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR))
> > +		h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
> > +
> > +	pr_info("can free %d vmemmap pages for %s\n", h->nr_free_vmemmap_pages,
> > +		h->name);
>
> Maybe specify this is hugetlb code:
>
> pr_info("%s: blabla", __func__, ...)
> or
> pr_info("hugetlb: blalala", ...);
>
> although I am not sure whether we need that at all, or maybe just use
> pr_debug().

The pr_info can tell the user whether the feature is enabled. From this
point of view, it makes sense. Right?

Thanks.

> --
> Oscar Salvador
> SUSE L3

--
Yours,
Muchun
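P.S. To make the computation above concrete, here is the arithmetic for a
2 MiB HugeTLB page, assuming x86-64 defaults (4 KiB base pages, 64-byte
struct page) and RESERVE_VMEMMAP_NR == 2 as in this series:

	nr_pages      = pages_per_huge_page(h)                 = 512
	vmemmap_pages = (512 * 64) >> PAGE_SHIFT               = 8
	h->nr_free_vmemmap_pages = vmemmap_pages - 2           = 6

i.e. 6 of the 8 vmemmap pages backing each 2 MiB HugeTLB page can be
returned to the buddy allocator, which is the number the pr_info above
would report.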