Date: Wed, 6 Jan 2021 17:35:13 +0100
From: Michal Hocko
To: Muchun Song
Cc: mike.kravetz@oracle.com, akpm@linux-foundation.org,
    n-horiguchi@ah.jp.nec.com, ak@linux.intel.com,
    linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2 2/6] mm: hugetlbfs: fix cannot migrate the fallocated HugeTLB page
Message-ID: <20210106163513.GS13207@dhcp22.suse.cz>
In-Reply-To: <20210106084739.63318-3-songmuchun@bytedance.com>
References: <20210106084739.63318-1-songmuchun@bytedance.com>
 <20210106084739.63318-3-songmuchun@bytedance.com>

On Wed 06-01-21 16:47:35, Muchun Song wrote:
> Because we only can isolate a active page via isolate_huge_page()
> and hugetlbfs_fallocate() forget to mark it as active, we cannot
> isolate and migrate those pages.

I had a bit of a hard time understanding this initially and had to
dive into the code to make sense of it. I would consider the following
wording easier to grasp. Feel free to reuse it if you like.

"
If a new hugetlb page is allocated during fallocate it will not be
marked as active (set_page_huge_active), which will result in a later
isolate_huge_page failure when the page migration code would like to
move that page. Such a failure would be unexpected and wrong.
"

Now to the fix. I believe that this patch shows that
set_page_huge_active is just too subtle. Is there any reason why we
cannot make all freshly allocated huge pages active by default?

> Only export set_page_huge_active, just leave clear_page_huge_active
> as static. Because there are no external users.
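
To make the connection explicit for anybody following along: the flag
matters because the isolation path refuses pages that were never marked
active. Roughly, as a paraphrased sketch of the isolate_huge_page path
in mm/hugetlb.c around this kernel version (not a verbatim copy):

/*
 * Sketch: migration has to isolate a hugetlb page before it can move
 * it. A page freshly instantiated by fallocate() never went through
 * set_page_huge_active(), so page_huge_active() below is false and the
 * isolation fails.
 */
bool isolate_huge_page(struct page *page, struct list_head *list)
{
	bool ret = true;

	spin_lock(&hugetlb_lock);
	if (!PageHeadHuge(page) || !page_huge_active(page) ||
	    !get_page_unless_zero(page)) {
		ret = false;
		goto unlock;
	}
	clear_page_huge_active(page);
	list_move_tail(&page->lru, list);
unlock:
	spin_unlock(&hugetlb_lock);
	return ret;
}

So a page that never goes through set_page_huge_active() is simply
skipped by the migration code, which is exactly the fallocate case
described above.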
> 
> Fixes: 70c3547e36f5 (hugetlbfs: add hugetlbfs_fallocate())
> Signed-off-by: Muchun Song
> ---
>  fs/hugetlbfs/inode.c    | 3 ++-
>  include/linux/hugetlb.h | 2 ++
>  mm/hugetlb.c            | 2 +-
>  3 files changed, 5 insertions(+), 2 deletions(-)
> 
> diff --git a/fs/hugetlbfs/inode.c b/fs/hugetlbfs/inode.c
> index b5c109703daa..21c20fd5f9ee 100644
> --- a/fs/hugetlbfs/inode.c
> +++ b/fs/hugetlbfs/inode.c
> @@ -735,9 +735,10 @@ static long hugetlbfs_fallocate(struct file *file, int mode, loff_t offset,
> 
>  		mutex_unlock(&hugetlb_fault_mutex_table[hash]);
> 
> +		set_page_huge_active(page);
>  		/*
>  		 * unlock_page because locked by add_to_page_cache()
> -		 * page_put due to reference from alloc_huge_page()
> +		 * put_page() due to reference from alloc_huge_page()
>  		 */
>  		unlock_page(page);
>  		put_page(page);
> diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
> index ebca2ef02212..b5807f23caf8 100644
> --- a/include/linux/hugetlb.h
> +++ b/include/linux/hugetlb.h
> @@ -770,6 +770,8 @@ static inline void huge_ptep_modify_prot_commit(struct vm_area_struct *vma,
>  }
>  #endif
> 
> +void set_page_huge_active(struct page *page);
> +
>  #else	/* CONFIG_HUGETLB_PAGE */
>  struct hstate {};
> 
> diff --git a/mm/hugetlb.c b/mm/hugetlb.c
> index 1f3bf1710b66..4741d60f8955 100644
> --- a/mm/hugetlb.c
> +++ b/mm/hugetlb.c
> @@ -1348,7 +1348,7 @@ bool page_huge_active(struct page *page)
>  }
> 
>  /* never called for tail page */
> -static void set_page_huge_active(struct page *page)
> +void set_page_huge_active(struct page *page)
>  {
>  	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
>  	SetPagePrivate(&page[1]);
> --
> 2.11.0

-- 
Michal Hocko
SUSE Labs