From: Muchun Song
Date: Tue, 21 Sep 2021 14:43:20 +0800
Subject: Re: [PATCH RESEND v2 1/4] mm: hugetlb: free the 2nd vmemmap page associated with each HugeTLB page
To: Barry Song <21cnbao@gmail.com>
Cc: Mike Kravetz, Andrew Morton, Oscar Salvador, Michal Hocko, Barry Song,
    David Hildenbrand, Chen Huang, "Bodeddula, Balasubramaniam",
    Jonathan Corbet, Matthew Wilcox, Xiongchun duan, fam.zheng@bytedance.com,
    Muchun Song, Qi Zheng, linux-doc@vger.kernel.org, LKML, Linux-MM

On Sat, Sep 18, 2021 at 6:06 PM Muchun Song wrote:
>
> On Sat, Sep 18, 2021 at 12:39 PM Barry Song <21cnbao@gmail.com> wrote:
> >
> > On Sat, Sep 18, 2021 at 12:08 AM Muchun Song wrote:
> > >
> > > Currently, we only free 6 vmemmap pages associated with a 2MB HugeTLB
> > > page. However, we can remap all tail vmemmap pages to the page frame
> > > mapped by the head vmemmap page. Finally, we can free 7 vmemmap
> > > pages for a 2MB HugeTLB page. It is a nice gain (e.g. we can save an
> > > extra 2GB of memory when there are 1TB of HugeTLB pages in the system
> > > compared with the current implementation).
> > >
> > > But the head vmemmap page is not freed to the buddy allocator and all
> > > tail vmemmap pages are mapped to the head vmemmap page frame. So we
> > > can see more than one struct page with PG_head (e.g. 8 per 2MB
> > > HugeTLB page) associated with each HugeTLB page. We should adjust
> > > compound_head() to make it return the real head struct page when the
> > > parameter is a tail struct page that has the PG_head flag set.
> > >
> > > Signed-off-by: Muchun Song
> > > ---
> > >  Documentation/admin-guide/kernel-parameters.txt |  2 +-
> > >  include/linux/page-flags.h                      | 75 +++++++++++++++++++++++--
> > >  mm/hugetlb_vmemmap.c                            | 60 +++++++++++---------
> > >  mm/sparse-vmemmap.c                             | 21 +++++++
> > >  4 files changed, 126 insertions(+), 32 deletions(-)
> > >
> > > diff --git a/Documentation/admin-guide/kernel-parameters.txt b/Documentation/admin-guide/kernel-parameters.txt
> > > index bdb22006f713..a154a7b3b9a5 100644
> > > --- a/Documentation/admin-guide/kernel-parameters.txt
> > > +++ b/Documentation/admin-guide/kernel-parameters.txt
> > > @@ -1606,7 +1606,7 @@
> > >  			[KNL] Requires CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> > >  			enabled.
> > >  			Allows heavy hugetlb users to free up some more
> > > -			memory (6 * PAGE_SIZE for each 2MB hugetlb page).
> > > +			memory (7 * PAGE_SIZE for each 2MB hugetlb page).
> > >  			Format: { on | off (default) }
> > >
> > >  			on: enable the feature
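
(An aside before the page-flags.h hunk: the "7 * PAGE_SIZE" and
"extra 2GB per 1TB" figures above can be sanity-checked with quick
arithmetic. The toy program below is purely illustrative and assumes
4KB base pages and a 64-byte struct page.)

#include <stdio.h>

int main(void)
{
	const unsigned long page_size = 4096;        /* 4KB base pages */
	const unsigned long struct_page_size = 64;   /* sizeof(struct page) */
	const unsigned long hpage_size = 2UL << 20;  /* 2MB HugeTLB page */

	/* 512 struct pages -> 32KB of vmemmap -> 8 vmemmap pages. */
	unsigned long vmemmap_pages =
		hpage_size / page_size * struct_page_size / page_size;

	/*
	 * The old scheme frees 6 of the 8; this patch frees 7, i.e. one
	 * extra base page per 2MB HugeTLB page.
	 */
	unsigned long nr_hpages = (1UL << 40) / hpage_size;  /* 1TB worth */
	unsigned long extra_mb = nr_hpages * page_size >> 20;

	printf("vmemmap pages per 2MB hugepage: %lu\n", vmemmap_pages); /* 8 */
	printf("extra saving for 1TB of hugepages: %lu MB\n", extra_mb); /* 2048 */
	return 0;
}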
> > >
> > > diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
> > > index 8e1d97d8f3bd..7b1a918ebd43 100644
> > > --- a/include/linux/page-flags.h
> > > +++ b/include/linux/page-flags.h
> > > @@ -184,13 +184,64 @@ enum pageflags {
> > >
> > >  #ifndef __GENERATING_BOUNDS_H
> > >
> > > +#ifdef CONFIG_HUGETLB_PAGE_FREE_VMEMMAP
> > > +extern bool hugetlb_free_vmemmap_enabled;
> > > +
> > > +/*
> > > + * If the feature of freeing some vmemmap pages associated with each
> > > + * HugeTLB page is enabled, the head vmemmap page frame is reused and
> > > + * all of the tail vmemmap addresses map to the head vmemmap page frame
> > > + * (further details can be found in the figure at the head of
> > > + * mm/hugetlb_vmemmap.c). In other words, there is more than one page
> > > + * struct with PG_head associated with each HugeTLB page. We __know__
> > > + * that there is only one head page struct; the tail page structs with
> > > + * PG_head are fake head page structs. We need an approach to
> > > + * distinguish between those two different types of page structs so
> > > + * that compound_head() can return the real head page struct when the
> > > + * parameter is a tail page struct with PG_head.
> > > + *
> > > + * page_head_if_fake() returns the real head page struct if @page is a
> > > + * fake head page struct; otherwise, it returns @page itself.
> > > + */
> > > +static __always_inline const struct page *page_head_if_fake(const struct page *page)
> > > +{
> > > +	if (!hugetlb_free_vmemmap_enabled)
> > > +		return page;
> > > +
> > > +	/*
> > > +	 * Only addresses aligned with PAGE_SIZE of struct page may be
> > > +	 * fake head struct page. The alignment check aims to avoid
> > > +	 * accessing the fields (e.g. compound_head) of @page[1]. It can
> > > +	 * avoid touching a (possibly) cold cacheline in some cases.
> > > +	 */
> > > +	if (IS_ALIGNED((unsigned long)page, PAGE_SIZE) &&
> > > +	    test_bit(PG_head, &page->flags)) {
> > > +		/*
> > > +		 * We can safely access the field of @page[1] with PG_head
> > > +		 * because @page is a compound page composed of at least
> > > +		 * two contiguous pages.
> > > +		 */
> > > +		unsigned long head = READ_ONCE(page[1].compound_head);
> > > +
> > > +		if (likely(head & 1))
> > > +			return (const struct page *)(head - 1);
> > > +	}
> > > +
> > > +	return page;
> > > +}
> > > +#else
> > > +static __always_inline const struct page *page_head_if_fake(const struct page *page)
> > > +{
> > > +	return page;
> > > +}
> > > +#endif
> > > +
> > >  static inline unsigned long _compound_head(const struct page *page)
> > >  {
> > >  	unsigned long head = READ_ONCE(page->compound_head);
> > >
> > >  	if (unlikely(head & 1))
> > >  		return head - 1;
> > > -	return (unsigned long)page;
> > > +	return (unsigned long)page_head_if_fake(page);
> >
> > Hard to read. page_head_if_fake: what is the other side of it,
> > page_head_if_not_fake?
>
> 1) Return itself if @page is not a fake head page.
> 2) Return the head page if @page is a fake head page.
>
> So I want to express that page_head_if_fake returns a
> head page if and only if the parameter @page is a
> fake head page. Otherwise, it returns itself.
>
> > I would expect something like
> >   page_to_page_head()
> > or
> >   get_page_head()
>
> Those names do not seem appropriate either, because the
> function does not guarantee that it returns a head page.
> If the parameter is a head page, it definitely returns a
> head page; otherwise, it may return itself, which may be
> a tail page.
>
> From this point of view, I still prefer page_head_if_fake.

After some thinking, I figured out two names. page_head_if_fake()
always returns a head page if the parameter @page is not a compound
page or its ->flags has PG_head set (you can consider the head page to
be the page itself when it is not a compound page). All of its callers
already guarantee this. It means the function has to return a head page
unless @page is a tail page (other than a fake head page). So I propose
two names as follows.

1) page_head_unless_tail
2) page_head_filter_fake

The former means it always returns a head page unless the caller passes
a tail page as the parameter. The latter means it always returns a head
page while filtering out fake head pages. The former is inspired by
get_page_unless_zero. What do you think?

Thanks.

> > Anyway, I am not quite sure what the best name is, but
> > page_head_if_fake(page) sounds odd to me. Just like most things it
> > has two sides, but "if_fake" presents one side only.
>
> If others have any ideas, comments are welcome.
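
To make the fake-head resolution above concrete, here is a minimal
user-space model of the lookup (purely illustrative: the struct layout,
the model_* names, and the setup are made up for this sketch, and the
PAGE_SIZE alignment check is omitted):

#include <stdbool.h>
#include <stdio.h>

/*
 * Toy stand-in for struct page: bit 0 of flags models PG_head, and
 * compound_head holds the real head's address with bit 0 set on tails.
 */
struct page {
	unsigned long flags;
	unsigned long compound_head;
};

static const struct page *model_head_if_fake(const struct page *page)
{
	if (page->flags & 1) {			/* PG_head is set ... */
		unsigned long head = page[1].compound_head;

		if (head & 1)			/* ... and page[1] is a tail */
			return (const struct page *)(head - 1);
	}
	return page;
}

int main(void)
{
	struct page pages[8] = { { 0, 0 } };
	int i;

	pages[0].flags = 1;			/* the real head */
	for (i = 1; i < 8; i++)			/* tails point at the head */
		pages[i].compound_head = (unsigned long)&pages[0] | 1;
	pages[4].flags = 1;			/* fake head: a tail with PG_head */

	/* Both the real head and the fake head resolve to pages[0]. */
	printf("real head -> head: %d\n",
	       model_head_if_fake(&pages[0]) == &pages[0]);
	printf("fake head -> head: %d\n",
	       model_head_if_fake(&pages[4]) == &pages[0]);
	return 0;
}

Note that the page[1] lookup is harmless for a real head too: page[1]
is then a genuine tail whose compound_head already points back at the
head, so the same code path returns the right page either way.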
> > >  }
> > >
> > >  #define compound_head(page)	((typeof(page))_compound_head(page))
> > > @@ -225,12 +276,14 @@ static inline unsigned long _compound_head(const struct page *page)
> > >
> > >  static __always_inline int PageTail(struct page *page)
> > >  {
> > > -	return READ_ONCE(page->compound_head) & 1;
> > > +	return READ_ONCE(page->compound_head) & 1 ||
> > > +	       page_head_if_fake(page) != page;
> >
> > I would expect a wrapper like:
> >   page_is_fake_head()
>
> Good point. Will do.
>
> > and the above page_to_page_head() can leverage the wrapper.
>
> Here too.
>
> > >  }
> > >
> > >  static __always_inline int PageCompound(struct page *page)
> > >  {
> > > -	return test_bit(PG_head, &page->flags) || PageTail(page);
> > > +	return test_bit(PG_head, &page->flags) ||
> > > +	       READ_ONCE(page->compound_head) & 1;
> >
> > Hard to read. Could it be something like the below?
> >   return PageHead(page) || PageTail(page);
> >
> > Or do we really need to change this function? Even a fake head still
> > makes test_bit(PG_head, &page->flags) true; though it is not a real
> > head, it is still a compound page, right?
>
> Right. PageCompound() cannot be changed to that. It is odd but
> efficient, because the call to page_head_if_fake is eliminated.
> So I chose performance over readability. I'm not sure if it's
> worth it.
>
> Thanks.
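
Continuing the toy model from earlier in this mail, the suggested
wrapper could factor out roughly like this (again a sketch, meant to be
appended to the earlier snippet; page_is_fake_head is the name proposed
above, the model_* helpers are made up, and READ_ONCE/likely are
dropped for brevity):

/* Assumes struct page and model_head_if_fake() from the earlier sketch. */
static bool model_is_fake_head(const struct page *page)
{
	return model_head_if_fake(page) != page;
}

static bool model_page_tail(const struct page *page)
{
	return (page->compound_head & 1) || model_is_fake_head(page);
}

static bool model_page_compound(const struct page *page)
{
	/*
	 * Deliberately open-coded, mirroring the discussion above: a fake
	 * head has PG_head set anyway, so the page[1] lookup hidden inside
	 * model_is_fake_head() can be skipped here.
	 */
	return (page->flags & 1) || (page->compound_head & 1);
}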