From: Yang Shi <shy828301@gmail.com>
Date: Fri, 30 Jul 2021 16:34:09 -0700
Subject: Re: [PATCH 06/16] huge tmpfs: shmem_is_huge(vma, inode, index)
To: Hugh Dickins
Cc: Andrew Morton, Shakeel Butt, "Kirill A. Shutemov", Miaohe Lin,
 Mike Kravetz, Michal Hocko, Rik van Riel, Christoph Hellwig,
 Matthew Wilcox, "Eric W. Biederman", Alexey Gladkov, Chris Wilson,
 Matthew Auld, Linux FS-devel Mailing List, Linux Kernel Mailing List,
 linux-api@vger.kernel.org, Linux MM

On Fri, Jul 30, 2021 at 12:42 AM Hugh Dickins wrote:
>
> Extend shmem_huge_enabled(vma) to shmem_is_huge(vma, inode, index), so
> that a consistent set of checks can be applied, even when the inode is
> accessed through read/write syscalls (with NULL vma) instead of mmaps
> (the index argument is seldom of interest, but required by mount option
> "huge=within_size"). Clean up and rearrange the checks a little.
>
> This then replaces the checks which shmem_fault() and shmem_getpage_gfp()
> were making, and eliminates the SGP_HUGE and SGP_NOHUGE modes: while it's
> still true that khugepaged's collapse_file() at that point wants a small
> page, the race that might allocate it a huge page is too unlikely to be
> worth optimizing against (we are there *because* there was at least one
> small page in the way), and handled by a later PageTransCompound check.

Yes, it seems too unlikely. But if it happens, the PageTransCompound check
may not be good enough, since the page allocated by shmem_getpage() may be
charged to the wrong memcg (root memcg). And it won't be replaced by a
newly allocated huge page, so the wrong charge can't be undone.

Another question: it seems the newly allocated huge page will just be
uncharged, instead of being freed, until "khugepaged_pages_to_scan" pages
have been scanned. khugepaged_prealloc_page() is called to free the
allocated huge page before each call to khugepaged_scan_mm_slot(). But
khugepaged_scan_file() -> collapse_file() -> khugepaged_alloc_page() may be
called multiple times in the loop in khugepaged_scan_mm_slot(), so
khugepaged_alloc_page() may see that leftover page and trigger the
VM_BUG_ON, IIUC.

The code is quite convoluted, so I'm not sure whether I'm missing something
or not. And this problem seems very hard to trigger with real-life
workloads.
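Just to make the ordering I mean concrete, here is a toy userspace model
(definitely not the real mm/khugepaged.c code; prealloc_page(),
alloc_page_once() and collapse_attempt() are made-up stand-ins) of how a
page left over from a failed collapse could then trip the allocation-time
check on the next attempt within the same scan pass:

#include <assert.h>
#include <stdbool.h>
#include <stdlib.h>

/*
 * Toy model of the ordering described above: the "prealloc" step frees a
 * leftover page only once per pass, but several "allocate" attempts can
 * happen within one pass.  If an earlier attempt left a page behind, the
 * next attempt sees a non-NULL slot -- the analogue of the VM_BUG_ON in
 * khugepaged_alloc_page().
 */

static void *hpage;			/* models the preallocated huge page slot */

static void prealloc_page(void)		/* models khugepaged_prealloc_page() */
{
	free(hpage);
	hpage = NULL;
}

static void alloc_page_once(void)	/* models khugepaged_alloc_page() */
{
	assert(hpage == NULL);		/* models the VM_BUG_ON on *hpage */
	hpage = malloc(4096);
}

static void collapse_attempt(bool fail)	/* models one collapse_file() call */
{
	alloc_page_once();
	if (fail)
		return;			/* bail out, leaving hpage set */
	free(hpage);			/* success path consumes the page */
	hpage = NULL;
}

int main(void)
{
	prealloc_page();		/* once per pass, before the scan loop */

	/* two collapse attempts within one pass: first fails, second asserts */
	collapse_attempt(true);
	collapse_attempt(false);	/* the assert fires here in this model */
	return 0;
}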
>
> Replace a couple of 0s by explicit SHMEM_HUGE_NEVERs; and replace the
> obscure !shmem_mapping() symlink check by explicit S_ISLNK() - nothing
> else needs that symlink check, so leave it there in shmem_getpage_gfp().
>
> Signed-off-by: Hugh Dickins
> ---
>  include/linux/shmem_fs.h |  9 +++--
>  mm/khugepaged.c          |  2 +-
>  mm/shmem.c               | 84 ++++++++++++----------------------------
>  3 files changed, 32 insertions(+), 63 deletions(-)
>
> diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
> index 9b7f7ac52351..3b05a28e34c4 100644
> --- a/include/linux/shmem_fs.h
> +++ b/include/linux/shmem_fs.h
> @@ -86,7 +86,12 @@ extern void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end);
>  extern int shmem_unuse(unsigned int type, bool frontswap,
>                         unsigned long *fs_pages_to_unuse);
>
> -extern bool shmem_huge_enabled(struct vm_area_struct *vma);
> +extern bool shmem_is_huge(struct vm_area_struct *vma,
> +                          struct inode *inode, pgoff_t index);
> +static inline bool shmem_huge_enabled(struct vm_area_struct *vma)
> +{
> +        return shmem_is_huge(vma, file_inode(vma->vm_file), vma->vm_pgoff);
> +}
>  extern unsigned long shmem_swap_usage(struct vm_area_struct *vma);
>  extern unsigned long shmem_partial_swap_usage(struct address_space *mapping,
>                                                pgoff_t start, pgoff_t end);
> @@ -95,8 +100,6 @@ extern unsigned long shmem_partial_swap_usage(struct address_space *mapping,
>  enum sgp_type {
>          SGP_READ,       /* don't exceed i_size, don't allocate page */
>          SGP_CACHE,      /* don't exceed i_size, may allocate page */
> -        SGP_NOHUGE,     /* like SGP_CACHE, but no huge pages */
> -        SGP_HUGE,       /* like SGP_CACHE, huge pages preferred */
>          SGP_WRITE,      /* may exceed i_size, may allocate !Uptodate page */
>          SGP_FALLOC,     /* like SGP_WRITE, but make existing page Uptodate */
>  };
> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index b0412be08fa2..cecb19c3e965 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -1721,7 +1721,7 @@ static void collapse_file(struct mm_struct *mm,
>                          xas_unlock_irq(&xas);
>                          /* swap in or instantiate fallocated page */
>                          if (shmem_getpage(mapping->host, index, &page,
> -                                                SGP_NOHUGE)) {
> +                                                SGP_CACHE)) {
>                                  result = SCAN_FAIL;
>                                  goto xa_unlocked;
>                          }
> diff --git a/mm/shmem.c b/mm/shmem.c
> index 740d48ef1eb5..6def7391084c 100644
> --- a/mm/shmem.c
> +++ b/mm/shmem.c
> @@ -474,39 +474,35 @@ static bool shmem_confirm_swap(struct address_space *mapping,
>  #ifdef CONFIG_TRANSPARENT_HUGEPAGE
>  /* ifdef here to avoid bloating shmem.o when not necessary */
>
> -static int shmem_huge __read_mostly;
> +static int shmem_huge __read_mostly = SHMEM_HUGE_NEVER;
>
> -bool shmem_huge_enabled(struct vm_area_struct *vma)
> +bool shmem_is_huge(struct vm_area_struct *vma,
> +                   struct inode *inode, pgoff_t index)
>  {
> -        struct inode *inode = file_inode(vma->vm_file);
> -        struct shmem_sb_info *sbinfo = SHMEM_SB(inode->i_sb);
>          loff_t i_size;
> -        pgoff_t off;
>
> -        if ((vma->vm_flags & VM_NOHUGEPAGE) ||
> -            test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
> -                return false;
> -        if (shmem_huge == SHMEM_HUGE_FORCE)
> -                return true;
>          if (shmem_huge == SHMEM_HUGE_DENY)
>                  return false;
> -        switch (sbinfo->huge) {
> -        case SHMEM_HUGE_NEVER:
> +        if (vma && ((vma->vm_flags & VM_NOHUGEPAGE) ||
> +            test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags)))
>                  return false;
> +        if (shmem_huge == SHMEM_HUGE_FORCE)
> +                return true;
> +
> +        switch (SHMEM_SB(inode->i_sb)->huge) {
>          case SHMEM_HUGE_ALWAYS:
>                  return true;
>          case SHMEM_HUGE_WITHIN_SIZE:
> -                off = round_up(vma->vm_pgoff, HPAGE_PMD_NR);
> +                index = round_up(index, HPAGE_PMD_NR);
>                  i_size = round_up(i_size_read(inode), PAGE_SIZE);
> -                if (i_size >= HPAGE_PMD_SIZE &&
> -                    i_size >> PAGE_SHIFT >= off)
> +                if (i_size >= HPAGE_PMD_SIZE && (i_size >> PAGE_SHIFT) >= index)
>                          return true;
>                  fallthrough;
>          case SHMEM_HUGE_ADVISE:
> -                /* TODO: implement fadvise() hints */
> -                return (vma->vm_flags & VM_HUGEPAGE);
> +                if (vma && (vma->vm_flags & VM_HUGEPAGE))
> +                        return true;
> +                fallthrough;
>          default:
> -                VM_BUG_ON(1);
>                  return false;
>          }
>  }
> @@ -680,6 +676,12 @@ static long shmem_unused_huge_count(struct super_block *sb,
>
>  #define shmem_huge SHMEM_HUGE_DENY
>
> +bool shmem_is_huge(struct vm_area_struct *vma,
> +                   struct inode *inode, pgoff_t index)
> +{
> +        return false;
> +}
> +
>  static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
>                  struct shrink_control *sc, unsigned long nr_to_split)
>  {
> @@ -1829,7 +1831,6 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
>          struct shmem_sb_info *sbinfo;
>          struct mm_struct *charge_mm;
>          struct page *page;
> -        enum sgp_type sgp_huge = sgp;
>          pgoff_t hindex = index;
>          gfp_t huge_gfp;
>          int error;
> @@ -1838,8 +1839,6 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
>
>          if (index > (MAX_LFS_FILESIZE >> PAGE_SHIFT))
>                  return -EFBIG;
> -        if (sgp == SGP_NOHUGE || sgp == SGP_HUGE)
> -                sgp = SGP_CACHE;
>  repeat:
>          if (sgp <= SGP_CACHE &&
>              ((loff_t)index << PAGE_SHIFT) >= i_size_read(inode)) {
> @@ -1898,36 +1897,12 @@ static int shmem_getpage_gfp(struct inode *inode, pgoff_t index,
>                  return 0;
>          }
>
> -        /* shmem_symlink() */
> -        if (!shmem_mapping(mapping))
> -                goto alloc_nohuge;
> -        if (shmem_huge == SHMEM_HUGE_DENY || sgp_huge == SGP_NOHUGE)
> +        /* Never use a huge page for shmem_symlink() */
> +        if (S_ISLNK(inode->i_mode))
>                  goto alloc_nohuge;
> -        if (shmem_huge == SHMEM_HUGE_FORCE)
> -                goto alloc_huge;
> -        switch (sbinfo->huge) {
> -        case SHMEM_HUGE_NEVER:
> +        if (!shmem_is_huge(vma, inode, index))
>                  goto alloc_nohuge;
> -        case SHMEM_HUGE_WITHIN_SIZE: {
> -                loff_t i_size;
> -                pgoff_t off;
> -
> -                off = round_up(index, HPAGE_PMD_NR);
> -                i_size = round_up(i_size_read(inode), PAGE_SIZE);
> -                if (i_size >= HPAGE_PMD_SIZE &&
> -                    i_size >> PAGE_SHIFT >= off)
> -                        goto alloc_huge;
>
> -                fallthrough;
> -        }
> -        case SHMEM_HUGE_ADVISE:
> -                if (sgp_huge == SGP_HUGE)
> -                        goto alloc_huge;
> -                /* TODO: implement fadvise() hints */
> -                goto alloc_nohuge;
> -        }
> -
> -alloc_huge:
>          huge_gfp = vma_thp_gfp_mask(vma);
>          huge_gfp = limit_gfp_mask(huge_gfp, gfp);
>          page = shmem_alloc_and_acct_page(huge_gfp, inode, index, true);
> @@ -2083,7 +2058,6 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
>          struct vm_area_struct *vma = vmf->vma;
>          struct inode *inode = file_inode(vma->vm_file);
>          gfp_t gfp = mapping_gfp_mask(inode->i_mapping);
> -        enum sgp_type sgp;
>          int err;
>          vm_fault_t ret = VM_FAULT_LOCKED;
>
> @@ -2146,15 +2120,7 @@ static vm_fault_t shmem_fault(struct vm_fault *vmf)
>                  spin_unlock(&inode->i_lock);
>          }
>
> -        sgp = SGP_CACHE;
> -
> -        if ((vma->vm_flags & VM_NOHUGEPAGE) ||
> -            test_bit(MMF_DISABLE_THP, &vma->vm_mm->flags))
> -                sgp = SGP_NOHUGE;
> -        else if (vma->vm_flags & VM_HUGEPAGE)
> -                sgp = SGP_HUGE;
> -
> -        err = shmem_getpage_gfp(inode, vmf->pgoff, &vmf->page, sgp,
> +        err = shmem_getpage_gfp(inode, vmf->pgoff, &vmf->page, SGP_CACHE,
>                                  gfp, vma, vmf, &ret);
>          if (err)
>                  return vmf_error(err);
> @@ -3961,7 +3927,7 @@ int __init shmem_init(void)
>          if (has_transparent_hugepage() && shmem_huge > SHMEM_HUGE_DENY)
>                  SHMEM_SB(shm_mnt->mnt_sb)->huge = shmem_huge;
>          else
> -                shmem_huge = 0; /* just in case it was patched */
> +                shmem_huge = SHMEM_HUGE_NEVER; /* just in case it was patched */
>  #endif
>          return 0;
>
> --
> 2.26.2
>
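One more note, mostly to confirm my reading of the new interface: with a
NULL vma the VM_NOHUGEPAGE/MMF_DISABLE_THP and VM_HUGEPAGE checks are
simply skipped, so a read/write-syscall caller would presumably end up
doing something like the sketch below (hypothetical helper, not part of
this patch; only shmem_is_huge() itself is real):

/*
 * Hypothetical caller, for illustration only: per the commit message,
 * read/write syscalls pass a NULL vma, so only shmem_huge, the
 * per-superblock "huge=" setting and i_size/index are consulted.
 * shmem_write_wants_huge() is a made-up name.
 */
static bool shmem_write_wants_huge(struct inode *inode, loff_t pos)
{
	pgoff_t index = pos >> PAGE_SHIFT;

	/* NULL vma: the vma/mm flag checks above are not applied */
	return shmem_is_huge(NULL, inode, index);
}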