From: Muchun Song <songmuchun@bytedance.com>
Date: Tue, 22 Jun 2021 17:09:14 +0800
Subject: Re: [External] [PATCH 1/2] hugetlb: remove prep_compound_huge_page cleanup
To: Mike Kravetz <mike.kravetz@oracle.com>
Cc: Linux Memory Management List <linux-mm@kvack.org>, LKML, Jann Horn,
	Youquan Song, Andrea Arcangeli, Jan Kara, John Hubbard,
	"Kirill A . Shutemov", Matthew Wilcox, Michal Hocko, Andrew Morton
In-Reply-To: <20210622021423.154662-2-mike.kravetz@oracle.com>
References: <20210622021423.154662-1-mike.kravetz@oracle.com>
	<20210622021423.154662-2-mike.kravetz@oracle.com>
MIME-Version: 1.0
Content-Type: text/plain; charset="UTF-8"

On Tue, Jun 22, 2021 at 10:15 AM Mike Kravetz <mike.kravetz@oracle.com> wrote:
>
> The routine prep_compound_huge_page is a simple wrapper to call either
> prep_compound_gigantic_page or prep_compound_page.  However, it is only
> called from gather_bootmem_prealloc which only processes gigantic pages.
> Eliminate the routine and call prep_compound_gigantic_page directly.
>
> Signed-off-by: Mike Kravetz <mike.kravetz@oracle.com>

Nice clean-up. Thanks.

Reviewed-by: Muchun Song <songmuchun@bytedance.com>

> ---
>  mm/hugetlb.c | 29 ++++++++++-------------------
>  1 file changed, 10 insertions(+), 19 deletions(-)
>
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 760b5fb836b8..50596b7d6da9 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1320,8 +1320,6 @@ static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
>  	return alloc_contig_pages(nr_pages, gfp_mask, nid, nodemask);
>  }
>
> -static void prep_new_huge_page(struct hstate *h, struct page *page, int nid);
> -static void prep_compound_gigantic_page(struct page *page, unsigned int order);
>  #else /* !CONFIG_CONTIG_ALLOC */
>  static struct page *alloc_gigantic_page(struct hstate *h, gfp_t gfp_mask,
>  					int nid, nodemask_t *nodemask)
> @@ -2759,16 +2757,10 @@ int __alloc_bootmem_huge_page(struct hstate *h)
>  	return 1;
>  }
>
> -static void __init prep_compound_huge_page(struct page *page,
> -		unsigned int order)
> -{
> -	if (unlikely(order > (MAX_ORDER - 1)))
> -		prep_compound_gigantic_page(page, order);
> -	else
> -		prep_compound_page(page, order);
> -}
> -
> -/* Put bootmem huge pages into the standard lists after mem_map is up */
> +/*
> + * Put bootmem huge pages into the standard lists after mem_map is up.
> + * Note: This only applies to gigantic (order > MAX_ORDER) pages.
> + */
>  static void __init gather_bootmem_prealloc(void)
>  {
>  	struct huge_bootmem_page *m;
> @@ -2777,20 +2769,19 @@ static void __init gather_bootmem_prealloc(void)
>  		struct page *page = virt_to_page(m);
>  		struct hstate *h = m->hstate;
>
> +		VM_BUG_ON(!hstate_is_gigantic(h));
>  		WARN_ON(page_count(page) != 1);
> -		prep_compound_huge_page(page, huge_page_order(h));
> +		prep_compound_gigantic_page(page, huge_page_order(h));
>  		WARN_ON(PageReserved(page));
>  		prep_new_huge_page(h, page, page_to_nid(page));
>  		put_page(page); /* free it into the hugepage allocator */
>
>  		/*
> -		 * If we had gigantic hugepages allocated at boot time, we need
> -		 * to restore the 'stolen' pages to totalram_pages in order to
> -		 * fix confusing memory reports from free(1) and another
> -		 * side-effects, like CommitLimit going negative.
> +		 * We need to restore the 'stolen' pages to totalram_pages
> +		 * in order to fix confusing memory reports from free(1) and
> +		 * other side-effects, like CommitLimit going negative.
>  		 */
> -		if (hstate_is_gigantic(h))
> -			adjust_managed_page_count(page, pages_per_huge_page(h));
> +		adjust_managed_page_count(page, pages_per_huge_page(h));
>  		cond_resched();
>  	}
>  }
> --
> 2.31.1
>
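
For anyone skimming the diff, a standalone sketch of the dispatch being
deleted may help. This is not kernel code: the stub functions and the
sample order value are mock stand-ins (a 1 GiB gigantic page with 4 KiB
base pages is order 18, versus a default MAX_ORDER of 11 on x86-64);
only the control flow mirrors the wrapper the patch removes. Because
gather_bootmem_prealloc() only ever sees gigantic pages, the
non-gigantic branch is dead code for the wrapper's single caller, which
is why it can be replaced by a direct call.

/*
 * Illustration only -- userspace mock, not kernel code.  The two
 * prep_* functions are stubs; the wrapper reproduces the order
 * check that prep_compound_huge_page performed before this patch.
 */
#include <stdio.h>

#define MAX_ORDER 11	/* default on x86-64; arch-configurable in reality */

static void prep_compound_page(unsigned int order)
{
	printf("prep_compound_page(order=%u)\n", order);
}

static void prep_compound_gigantic_page(unsigned int order)
{
	printf("prep_compound_gigantic_page(order=%u)\n", order);
}

/* Before the patch: pick a helper based on the page order. */
static void prep_compound_huge_page(unsigned int order)
{
	if (order > (MAX_ORDER - 1))
		prep_compound_gigantic_page(order);
	else
		prep_compound_page(order);
}

int main(void)
{
	unsigned int gigantic_order = 18;	/* mock: 1 GiB / 4 KiB pages */

	/* Old path: the wrapper always takes the gigantic branch here. */
	prep_compound_huge_page(gigantic_order);

	/* New path after the patch: call the gigantic helper directly. */
	prep_compound_gigantic_page(gigantic_order);

	return 0;
}

Both calls end up in the same helper, which is the point of the
clean-up: with only gigantic callers left, the indirection buys nothing.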