From: Muchun Song <songmuchun@bytedance.com>
Date: Thu, 17 Dec 2020 16:34:48 +0800
Subject: Re: [External] Re: [PATCH v9 09/11] mm/hugetlb: Introduce nr_free_vmemmap_pages in the struct hstate
To: Oscar Salvador
Cc: Jonathan Corbet, Mike Kravetz, Thomas Gleixner, mingo@redhat.com,
 bp@alien8.de, x86@kernel.org, hpa@zytor.com, dave.hansen@linux.intel.com,
 luto@kernel.org, Peter Zijlstra, viro@zeniv.linux.org.uk, Andrew Morton,
 paulmck@kernel.org, mchehab+huawei@kernel.org,
 pawan.kumar.gupta@linux.intel.com, Randy Dunlap, oneukum@suse.com,
 anshuman.khandual@arm.com, jroedel@suse.de, Mina Almasry, David Rientjes,
 Matthew Wilcox, Michal Hocko, "Song Bao Hua (Barry Song)",
 David Hildenbrand, Xiongchun duan, linux-doc@vger.kernel.org, LKML,
 Linux Memory Management List, linux-fsdevel
In-Reply-To: <20201216134354.GD29394@linux>
References: <20201213154534.54826-1-songmuchun@bytedance.com> <20201213154534.54826-10-songmuchun@bytedance.com> <20201216134354.GD29394@linux>

On Wed, Dec 16, 2020 at 9:44 PM Oscar Salvador wrote:
>
> On Sun, Dec 13, 2020 at 11:45:32PM +0800, Muchun Song wrote:
> > All the infrastructure is ready, so we introduce nr_free_vmemmap_pages
> > field in the hstate to indicate how many vmemmap pages associated with
> > a HugeTLB page that we can free to buddy allocator. And initialize it
>
> "can be freed to buddy allocator"
>
> > in the hugetlb_vmemmap_init(). This patch is the actual enablement of
> > the feature.
> >
> > Signed-off-by: Muchun Song
> > Acked-by: Mike Kravetz
>
> With the below nits addressed you can add:
>
> Reviewed-by: Oscar Salvador
>
> > static int __init early_hugetlb_free_vmemmap_param(char *buf)
> > {
> > +        /* We cannot optimize if a "struct page" crosses page boundaries. */
> > +        if (!is_power_of_2(sizeof(struct page)))
> > +                return 0;
> > +
>
> I wonder if we should report a warning in case someone wants to enable this
> feature and the struct page size is not a power of 2, in case someone
> wonders why it does not work for him/her.

Agree. I think that we should add a warning message here.
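Something along these lines, perhaps (an untested sketch; the exact
wording of the message is of course open to discussion):

        /* We cannot optimize if a "struct page" crosses page boundaries. */
        if (!is_power_of_2(sizeof(struct page))) {
                pr_warn("cannot free vmemmap pages because \"struct page\" crosses page boundaries\n");
                return 0;
        }

That way the kernel log makes it obvious why the option was silently
ignored, instead of leaving the user guessing.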
>
> > +void __init hugetlb_vmemmap_init(struct hstate *h)
> > +{
> > +        unsigned int nr_pages = pages_per_huge_page(h);
> > +        unsigned int vmemmap_pages;
> > +
> > +        if (!hugetlb_free_vmemmap_enabled)
> > +                return;
> > +
> > +        vmemmap_pages = (nr_pages * sizeof(struct page)) >> PAGE_SHIFT;
> > +        /*
> > +         * The head page and the first tail page are not to be freed to buddy
> > +         * system, the others page will map to the first tail page. So there
> > +         * are the remaining pages that can be freed.
>
> "the other pages will map to the first tail page, so they can be freed."
>
> > +         *
> > +         * Could RESERVE_VMEMMAP_NR be greater than @vmemmap_pages? It is true
> > +         * on some architectures (e.g. aarch64). See Documentation/arm64/
> > +         * hugetlbpage.rst for more details.
> > +         */
> > +        if (likely(vmemmap_pages > RESERVE_VMEMMAP_NR))
> > +                h->nr_free_vmemmap_pages = vmemmap_pages - RESERVE_VMEMMAP_NR;
> > +
> > +        pr_info("can free %d vmemmap pages for %s\n", h->nr_free_vmemmap_pages,
> > +                h->name);
>
> Maybe specify this is hugetlb code:
>
> pr_info("%s: blabla", __func__, ...)
> or
> pr_info("hugetlb: blalala", ...);
>
> although I am not sure whether we need that at all, or maybe just use
> pr_debug().
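Prefixing the output also sounds good to me. One option (just a sketch,
not tested) would be a pr_fmt definition at the top of the file, so that
every pr_info()/pr_debug() call in it picks up the prefix without having
to touch each call site. It has to appear before any include that pulls
in printk.h:

        #define pr_fmt(fmt)     "HugeTLB: " fmt

With that in place, the message above would come out as
"HugeTLB: can free ... vmemmap pages for ...", and the same prefix would
still apply if we later downgrade it to pr_debug().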
>
> --
> Oscar Salvador
> SUSE L3

--
Yours,
Muchun