From: Yang Shi <shy828301@gmail.com>
Date: Thu, 13 May 2021 14:30:23 -0700
Subject: Re: [PATCH v3 3/5] mm/huge_memory.c: add missing read-only THP checking in transparent_hugepage_enabled()
To: Miaohe Lin
Cc: Andrew Morton, Zi Yan, william.kucharski@oracle.com, Matthew Wilcox,
 Yang Shi, aneesh.kumar@linux.ibm.com, Ralph Campbell, Song Liu,
 "Kirill A. Shutemov", Rik van Riel, Johannes Weiner, Minchan Kim,
 Hugh Dickins, adobriyan@gmail.com, Linux Kernel Mailing List, Linux MM,
 Linux FS-devel Mailing List
In-Reply-To: <20210511134857.1581273-4-linmiaohe@huawei.com>

On Tue, May 11, 2021 at 6:49 AM Miaohe Lin wrote:
>
> Since commit 99cb0dbd47a1 ("mm,thp: add read-only THP support for
> (non-shmem) FS"), read-only THP file mapping is supported. But it
> forgot to add checking for it in transparent_hugepage_enabled().
> To fix it, we add checking for read-only THP file mapping and also
> introduce the helper transhuge_vma_enabled() to check whether THP is
> enabled for the specified vma, reducing duplicated code. We rename
> transparent_hugepage_enabled to transparent_hugepage_active to make
> the code easier to follow, as suggested by David Hildenbrand.
>
> Fixes: 99cb0dbd47a1 ("mm,thp: add read-only THP support for (non-shmem) FS")
> Signed-off-by: Miaohe Lin

Looks correct to me.
Reviewed-by: Yang Shi <shy828301@gmail.com>

Just a nit below:

> ---
>  fs/proc/task_mmu.c      |  2 +-
>  include/linux/huge_mm.h | 27 ++++++++++++++++++++-------
>  mm/huge_memory.c        | 11 ++++++++++-
>  mm/khugepaged.c         |  4 +---
>  mm/shmem.c              |  3 +--
>  5 files changed, 33 insertions(+), 14 deletions(-)
>
> diff --git a/fs/proc/task_mmu.c b/fs/proc/task_mmu.c
> index fc9784544b24..7389df326edd 100644
> --- a/fs/proc/task_mmu.c
> +++ b/fs/proc/task_mmu.c
> @@ -832,7 +832,7 @@ static int show_smap(struct seq_file *m, void *v)
>  	__show_smap(m, &mss, false);
>
>  	seq_printf(m, "THPeligible: %d\n",
> -		   transparent_hugepage_enabled(vma));
> +		   transparent_hugepage_active(vma));
>
>  	if (arch_pkeys_enabled())
>  		seq_printf(m, "ProtectionKey: %8u\n", vma_pkey(vma));
> diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
> index 0a526f211fec..a35c13d1f487 100644
> --- a/include/linux/huge_mm.h
> +++ b/include/linux/huge_mm.h
> @@ -115,9 +115,19 @@ extern struct kobj_attribute shmem_enabled_attr;
>
>  extern unsigned long transparent_hugepage_flags;
>
> +static inline bool transhuge_vma_enabled(struct vm_area_struct *vma,

I'd like to have this function defined next to transhuge_vma_suitable().

> +					  unsigned long vm_flags)
> +{
> +	/* Explicitly disabled through madvise. */
> +	if ((vm_flags & VM_NOHUGEPAGE) ||
> +	    test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
> +		return false;
> +	return true;
> +}
> +
>  /*
>   * to be used on vmas which are known to support THP.
> - * Use transparent_hugepage_enabled otherwise
> + * Use transparent_hugepage_active otherwise
>   */
>  static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
>  {
> @@ -128,15 +138,12 @@ static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
>  	if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_NEVER_DAX))
>  		return false;
>
> -	if (vma->vm_flags & VM_NOHUGEPAGE)
> +	if (!transhuge_vma_enabled(vma, vma->vm_flags))
>  		return false;
>
>  	if (vma_is_temporary_stack(vma))
>  		return false;
>
> -	if (test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
> -		return false;
> -
>  	if (transparent_hugepage_flags & (1 << TRANSPARENT_HUGEPAGE_FLAG))
>  		return true;
>
> @@ -150,7 +157,7 @@ static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
>  	return false;
>  }
>
> -bool transparent_hugepage_enabled(struct vm_area_struct *vma);
> +bool transparent_hugepage_active(struct vm_area_struct *vma);
>
>  static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
>  					  unsigned long haddr)
> @@ -351,7 +358,7 @@ static inline bool __transparent_hugepage_enabled(struct vm_area_struct *vma)
>  	return false;
>  }
>
> -static inline bool transparent_hugepage_enabled(struct vm_area_struct *vma)
> +static inline bool transparent_hugepage_active(struct vm_area_struct *vma)
>  {
>  	return false;
>  }
> @@ -362,6 +369,12 @@ static inline bool transhuge_vma_suitable(struct vm_area_struct *vma,
>  	return false;
>  }
>
> +static inline bool transhuge_vma_enabled(struct vm_area_struct *vma,
> +					 unsigned long vm_flags)
> +{
> +	return false;
> +}
> +
>  static inline void prep_transhuge_page(struct page *page) {}
>
>  static inline bool is_transparent_hugepage(struct page *page)
> diff --git a/mm/huge_memory.c b/mm/huge_memory.c
> index 76ca1eb2a223..4f37867eed12 100644
> --- a/mm/huge_memory.c
> +++ b/mm/huge_memory.c
> @@ -63,7 +63,14 @@ static struct shrinker deferred_split_shrinker;
>  static atomic_t huge_zero_refcount;
>  struct page *huge_zero_page __read_mostly;
>
> -bool transparent_hugepage_enabled(struct vm_area_struct *vma)
> +static inline bool file_thp_enabled(struct vm_area_struct *vma)
> +{
> +	return transhuge_vma_enabled(vma, vma->vm_flags) && vma->vm_file &&
> +	       !inode_is_open_for_write(vma->vm_file->f_inode) &&
> +	       (vma->vm_flags & VM_EXEC);
> +}
> +
> +bool transparent_hugepage_active(struct vm_area_struct *vma)
>  {
>  	/* The addr is used to check if the vma size fits */
>  	unsigned long addr = (vma->vm_end & HPAGE_PMD_MASK) - HPAGE_PMD_SIZE;
> @@ -74,6 +81,8 @@ bool transparent_hugepage_enabled(struct vm_area_struct *vma)
>  		return __transparent_hugepage_enabled(vma);
>  	if (vma_is_shmem(vma))
>  		return shmem_huge_enabled(vma);
> +	if (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS))
> +		return file_thp_enabled(vma);
>
>  	return false;
>  }
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index 6c0185fdd815..d97b20fad6e8 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -442,9 +442,7 @@ static inline int khugepaged_test_exit(struct mm_struct *mm)
>  static bool hugepage_vma_check(struct vm_area_struct *vma,
>  			       unsigned long vm_flags)
>  {
> -	/* Explicitly disabled through madvise. */
> -	if ((vm_flags & VM_NOHUGEPAGE) ||
> -	    test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
> +	if (!transhuge_vma_enabled(vma, vm_flags))
>  		return false;
>
>  	/* Enabled via shmem mount options or sysfs settings. */
> diff --git a/mm/shmem.c b/mm/shmem.c
> index a08cedefbfaa..1dcbec313c70 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -4032,8 +4032,7 @@ bool shmem_huge_enabled(struct vm_area_struct *vma)
>  	loff_t i_size;
>  	pgoff_t off;
>
> -	if ((vma->vm_flags & VM_NOHUGEPAGE) ||
> -	    test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
> +	if (!transhuge_vma_enabled(vma, vma->vm_flags))
>  		return false;
>  	if (shmem_huge == SHMEM_HUGE_FORCE)
>  		return true;
> --
> 2.23.0
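[Editor's note] For readers skimming the archive, the decision flow the patch converges on can be summarized in a small user-space sketch. Every type, flag value, and field below is a stand-in invented for illustration (there is no `struct model_vma` in the kernel), and the anon/shmem branches are reduced to the madvise/prctl check so the sketch stays self-contained:

```c
#include <assert.h>
#include <stdbool.h>

/* Stand-in flag bits; the real VM_* values live in <linux/mm.h>. */
#define VM_NOHUGEPAGE 0x1UL
#define VM_EXEC       0x2UL

struct model_vma {
	unsigned long vm_flags;
	bool mm_disable_thp;      /* stands in for MMF_DISABLE_THP on the mm */
	bool anon;                /* vma_is_anonymous() */
	bool shmem;               /* vma_is_shmem() */
	bool file_backed;         /* vma->vm_file != NULL */
	bool file_open_for_write; /* inode_is_open_for_write() */
};

/* Models transhuge_vma_enabled(): THP was not explicitly disabled,
 * either per-vma via madvise(MADV_NOHUGEPAGE) or per-mm via prctl. */
static bool model_transhuge_vma_enabled(const struct model_vma *v)
{
	return !(v->vm_flags & VM_NOHUGEPAGE) && !v->mm_disable_thp;
}

/* Models file_thp_enabled(): read-only THP applies only to executable,
 * file-backed mappings whose backing file is not open for writing. */
static bool model_file_thp_enabled(const struct model_vma *v)
{
	return model_transhuge_vma_enabled(v) && v->file_backed &&
	       !v->file_open_for_write && (v->vm_flags & VM_EXEC);
}

/* Models transparent_hugepage_active() after the patch: the file-backed
 * case falls through to the new read-only THP check (assuming
 * CONFIG_READ_ONLY_THP_FOR_FS=y) instead of always returning false. */
static bool model_thp_active(const struct model_vma *v)
{
	if (v->anon || v->shmem)
		return model_transhuge_vma_enabled(v);
	return model_file_thp_enabled(v);
}
```

The missing piece the patch fixes is exactly the last branch: before it, a read-only executable file mapping reported `THPeligible: 0` in smaps even though khugepaged could collapse it.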