From: Matthew Wilcox
To: linux-mm@kvack.org
Cc: "Matthew Wilcox (Oracle)", "Kirill A. Shutemov", "Aneesh Kumar K.V"
V" Subject: [PATCH v2 5/8] mm: Remove CONFIG_TRANSPARENT_HUGE_PAGECACHE Date: Wed, 18 Mar 2020 07:02:50 -0700 Message-Id: <20200318140253.6141-6-willy@infradead.org> X-Mailer: git-send-email 2.21.1 In-Reply-To: <20200318140253.6141-1-willy@infradead.org> References: <20200318140253.6141-1-willy@infradead.org> MIME-Version: 1.0 Content-Transfer-Encoding: quoted-printable X-Bogosity: Ham, tests=bogofilter, spamicity=0.000000, version=1.2.4 Sender: owner-linux-mm@kvack.org Precedence: bulk X-Loop: owner-majordomo@kvack.org List-ID: From: "Matthew Wilcox (Oracle)" Commit e496cf3d7821 ("thp: introduce CONFIG_TRANSPARENT_HUGE_PAGECACHE") notes that it should be reverted when the PowerPC problem was fixed. The commit fixing the PowerPC problem (953c66c2b22a) did not revert the commit; instead setting CONFIG_TRANSPARENT_HUGE_PAGECACHE to the same as CONFIG_TRANSPARENT_HUGEPAGE. Checking with Kirill and Aneesh, this was an oversight, so remove the Kconfig symbol and undo the work of commit e496cf3d7821. Signed-off-by: Matthew Wilcox (Oracle) Cc: Kirill A. Shutemov Cc: Aneesh Kumar K.V --- include/linux/shmem_fs.h | 10 +--------- mm/Kconfig | 6 +----- mm/huge_memory.c | 2 +- mm/khugepaged.c | 12 ++++-------- mm/memory.c | 5 ++--- mm/rmap.c | 2 +- mm/shmem.c | 36 ++++++++++++++++++------------------ 7 files changed, 28 insertions(+), 45 deletions(-) diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h index d56fefef8905..7a35a6901221 100644 --- a/include/linux/shmem_fs.h +++ b/include/linux/shmem_fs.h @@ -78,6 +78,7 @@ extern void shmem_truncate_range(struct inode *inode, l= off_t start, loff_t end); extern int shmem_unuse(unsigned int type, bool frontswap, unsigned long *fs_pages_to_unuse); =20 +extern bool shmem_huge_enabled(struct vm_area_struct *vma); extern unsigned long shmem_swap_usage(struct vm_area_struct *vma); extern unsigned long shmem_partial_swap_usage(struct address_space *mapp= ing, pgoff_t start, pgoff_t end); @@ -114,15 +115,6 @@ static inline bool shmem_file(struct file *file) extern bool shmem_charge(struct inode *inode, long pages); extern void shmem_uncharge(struct inode *inode, long pages); =20 -#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE -extern bool shmem_huge_enabled(struct vm_area_struct *vma); -#else -static inline bool shmem_huge_enabled(struct vm_area_struct *vma) -{ - return false; -} -#endif - #ifdef CONFIG_SHMEM extern int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm, pmd_t *dst_p= md, struct vm_area_struct *dst_vma, diff --git a/mm/Kconfig b/mm/Kconfig index ab80933be65f..211a70e8d5cf 100644 --- a/mm/Kconfig +++ b/mm/Kconfig @@ -420,10 +420,6 @@ config THP_SWAP =20 For selection by architectures with reasonable THP sizes. =20 -config TRANSPARENT_HUGE_PAGECACHE - def_bool y - depends on TRANSPARENT_HUGEPAGE - # # UP and nommu archs use km based percpu allocator # @@ -714,7 +710,7 @@ config GUP_GET_PTE_LOW_HIGH =20 config READ_ONLY_THP_FOR_FS bool "Read-only THP for filesystems (EXPERIMENTAL)" - depends on TRANSPARENT_HUGE_PAGECACHE && SHMEM + depends on TRANSPARENT_HUGEPAGE && SHMEM =20 help Allow khugepaged to put read-only file-backed pages in THP. 
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b08b199f9a11..e88cce651705 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -326,7 +326,7 @@ static struct attribute *hugepage_attr[] = {
 	&defrag_attr.attr,
 	&use_zero_page_attr.attr,
 	&hpage_pmd_size_attr.attr,
-#if defined(CONFIG_SHMEM) && defined(CONFIG_TRANSPARENT_HUGE_PAGECACHE)
+#ifdef CONFIG_SHMEM
 	&shmem_enabled_attr.attr,
 #endif
 #ifdef CONFIG_DEBUG_VM
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index b679908743cb..cc80b0f2c5f8 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -416,8 +416,6 @@ static bool hugepage_vma_check(struct vm_area_struct *vma,
 	    (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
 	     vma->vm_file &&
 	     (vm_flags & VM_DENYWRITE))) {
-		if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE))
-			return false;
 		return IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
 				HPAGE_PMD_NR);
 	}
@@ -1260,7 +1258,7 @@ static void collect_mm_slot(struct mm_slot *mm_slot)
 	}
 }
 
-#if defined(CONFIG_SHMEM) && defined(CONFIG_TRANSPARENT_HUGE_PAGECACHE)
+#ifdef CONFIG_SHMEM
 /*
  * Notify khugepaged that given addr of the mm is pte-mapped THP. Then
  * khugepaged should try to collapse the page table.
@@ -1975,6 +1973,8 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 		if (khugepaged_scan.address < hstart)
 			khugepaged_scan.address = hstart;
 		VM_BUG_ON(khugepaged_scan.address & ~HPAGE_PMD_MASK);
+		if (shmem_file(vma->vm_file) && !shmem_huge_enabled(vma))
+			goto skip;
 
 		while (khugepaged_scan.address < hend) {
 			int ret;
@@ -1986,14 +1986,10 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 				  khugepaged_scan.address + HPAGE_PMD_SIZE >
 				  hend);
 			if (IS_ENABLED(CONFIG_SHMEM) && vma->vm_file) {
-				struct file *file;
+				struct file *file = get_file(vma->vm_file);
 				pgoff_t pgoff = linear_page_index(vma,
 						khugepaged_scan.address);
 
-				if (shmem_file(vma->vm_file)
-				    && !shmem_huge_enabled(vma))
-					goto skip;
-
-				file = get_file(vma->vm_file);
 				up_read(&mm->mmap_sem);
 				ret = 1;
 				khugepaged_scan_file(mm, file, pgoff, hpage);
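For the hugepage_vma_check() hunk above: with the early return gone, a
file-backed VMA is judged purely by whether its virtual address and its file
offset agree modulo a PMD-sized page.  A userspace restatement of that test
follows; the helper name is made up, and PAGE_SHIFT/HPAGE_PMD_NR are
hard-coded to the 4 KiB/2 MiB x86-64 values as an assumption.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define PAGE_SHIFT	12	/* assumed: 4 KiB base pages */
#define HPAGE_PMD_NR	512	/* assumed: 2 MiB / 4 KiB */
#define IS_ALIGNED(x, a)	(((x) & ((a) - 1)) == 0)

/* Mirrors the check in hugepage_vma_check(): a PMD can map the file
 * only if the virtual page number and the file page offset differ by
 * a multiple of HPAGE_PMD_NR. */
static bool pmd_mappable(uint64_t vm_start, uint64_t vm_pgoff)
{
	return IS_ALIGNED((vm_start >> PAGE_SHIFT) - vm_pgoff, HPAGE_PMD_NR);
}

int main(void)
{
	printf("%d\n", pmd_mappable(0x200000, 0));	/* 1: both 2 MiB aligned */
	printf("%d\n", pmd_mappable(0x201000, 0));	/* 0: VA off by one page */
	printf("%d\n", pmd_mappable(0x201000, 1));	/* 1: offsets agree again */
	return 0;
}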
diff --git a/mm/memory.c b/mm/memory.c
index 0bccc622e482..6ab0b03ea9bd 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3354,7 +3354,7 @@ static vm_fault_t pte_alloc_one_map(struct vm_fault *vmf)
 	return 0;
 }
 
-#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static void deposit_prealloc_pte(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
@@ -3456,8 +3456,7 @@ vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
 	pte_t entry;
 	vm_fault_t ret;
 
-	if (pmd_none(*vmf->pmd) && PageTransCompound(page) &&
-	    IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE)) {
+	if (pmd_none(*vmf->pmd) && PageTransCompound(page)) {
 		/* THP on COW? */
 		VM_BUG_ON_PAGE(memcg, page);
 
diff --git a/mm/rmap.c b/mm/rmap.c
index b3e381919835..af48024c6baf 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -940,7 +940,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 			set_pte_at(vma->vm_mm, address, pte, entry);
 			ret = 1;
 		} else {
-#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 			pmd_t *pmd = pvmw.pmd;
 			pmd_t entry;
 
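The shmem.c hunks below keep the idiom
((vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK) < (vm_end & HPAGE_PMD_MASK):
round the start of the VMA up and its end down to a huge page boundary and
check that at least one whole huge page still fits between them.  A sketch of
the rounding, under the same assumed 2 MiB geometry and with a made-up helper
name:

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define HPAGE_PMD_SHIFT	21	/* assumed: 2 MiB PMD pages */
#define HPAGE_PMD_SIZE	(1ULL << HPAGE_PMD_SHIFT)
#define HPAGE_PMD_MASK	(~(HPAGE_PMD_SIZE - 1))

/* Same shape as the test in shmem_mmap()/shmem_zero_setup(): adding
 * ~HPAGE_PMD_MASK (i.e. HPAGE_PMD_SIZE - 1) rounds vm_start up, the
 * masks round both addresses down to a boundary, and the comparison
 * succeeds only if a full huge page lies between the rounded bounds. */
static bool worth_khugepaged(uint64_t vm_start, uint64_t vm_end)
{
	return ((vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK) <
		(vm_end & HPAGE_PMD_MASK);
}

int main(void)
{
	printf("%d\n", worth_khugepaged(0x1ff000, 0x400000));	/* 1: fits */
	printf("%d\n", worth_khugepaged(0x200000, 0x3ff000));	/* 0: too small */
	return 0;
}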
diff --git a/mm/shmem.c b/mm/shmem.c
index c8f7540ef048..056cec644c17 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -410,7 +410,7 @@ static bool shmem_confirm_swap(struct address_space *mapping,
 #define SHMEM_HUGE_DENY		(-1)
 #define SHMEM_HUGE_FORCE	(-2)
 
-#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 /* ifdef here to avoid bloating shmem.o when not necessary */
 
 static int shmem_huge __read_mostly;
@@ -580,7 +580,7 @@ static long shmem_unused_huge_count(struct super_block *sb,
 	struct shmem_sb_info *sbinfo = SHMEM_SB(sb);
 	return READ_ONCE(sbinfo->shrinklist_len);
 }
-#else /* !CONFIG_TRANSPARENT_HUGE_PAGECACHE */
+#else /* !CONFIG_TRANSPARENT_HUGEPAGE */
 
 #define shmem_huge SHMEM_HUGE_DENY
 
@@ -589,11 +589,11 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
 {
 	return 0;
 }
-#endif /* CONFIG_TRANSPARENT_HUGE_PAGECACHE */
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 static inline bool is_huge_enabled(struct shmem_sb_info *sbinfo)
 {
-	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE) &&
+	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
 	    (shmem_huge == SHMEM_HUGE_FORCE || sbinfo->huge) &&
 	    shmem_huge != SHMEM_HUGE_DENY)
 		return true;
@@ -1059,7 +1059,7 @@ static int shmem_setattr(struct dentry *dentry, struct iattr *attr)
 			 * Part of the huge page can be beyond i_size: subject
 			 * to shrink under memory pressure.
 			 */
-			if (IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE)) {
+			if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
 				spin_lock(&sbinfo->shrinklist_lock);
 				/*
 				 * _careful to defend against unlocked access to
@@ -1472,7 +1472,7 @@ static struct page *shmem_alloc_hugepage(gfp_t gfp,
 	pgoff_t hindex;
 	struct page *page;
 
-	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE))
+	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
 		return NULL;
 
 	hindex = round_down(index, HPAGE_PMD_NR);
@@ -1511,7 +1511,7 @@ static struct page *shmem_alloc_and_acct_page(gfp_t gfp,
 	int nr;
 	int err = -ENOSPC;
 
-	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE))
+	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
 		huge = false;
 	nr = huge ? HPAGE_PMD_NR : 1;
 
@@ -2089,7 +2089,7 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 	get_area = current->mm->get_unmapped_area;
 	addr = get_area(file, uaddr, len, pgoff, flags);
 
-	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE))
+	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
 		return addr;
 	if (IS_ERR_VALUE(addr))
 		return addr;
@@ -2228,7 +2228,7 @@ static int shmem_mmap(struct file *file, struct vm_area_struct *vma)
 
 	file_accessed(file);
 	vma->vm_ops = &shmem_vm_ops;
-	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE) &&
+	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
 	    ((vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK) <
 	    (vma->vm_end & HPAGE_PMD_MASK)) {
 		khugepaged_enter(vma, vma->vm_flags);
@@ -3457,7 +3457,7 @@ static int shmem_parse_one(struct fs_context *fc, struct fs_parameter *param)
 	case Opt_huge:
 		ctx->huge = result.uint_32;
 		if (ctx->huge != SHMEM_HUGE_NEVER &&
-		    !(IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE) &&
+		    !(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
 		      has_transparent_hugepage()))
			goto unsupported_parameter;
 		ctx->seen |= SHMEM_SEEN_HUGE;
@@ -3603,7 +3603,7 @@ static int shmem_show_options(struct seq_file *seq, struct dentry *root)
 	if (!gid_eq(sbinfo->gid, GLOBAL_ROOT_GID))
 		seq_printf(seq, ",gid=%u",
 				from_kgid_munged(&init_user_ns, sbinfo->gid));
-#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	/* Rightly or wrongly, show huge mount option unmasked by shmem_huge */
 	if (sbinfo->huge)
 		seq_printf(seq, ",huge=%s", shmem_format_huge(sbinfo->huge));
@@ -3848,7 +3848,7 @@ static const struct super_operations shmem_ops = {
 	.evict_inode	= shmem_evict_inode,
 	.drop_inode	= generic_delete_inode,
 	.put_super	= shmem_put_super,
-#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	.nr_cached_objects	= shmem_unused_huge_count,
 	.free_cached_objects	= shmem_unused_huge_scan,
 #endif
@@ -3910,7 +3910,7 @@ int __init shmem_init(void)
 		goto out1;
 	}
 
-#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	if (has_transparent_hugepage() && shmem_huge > SHMEM_HUGE_DENY)
 		SHMEM_SB(shm_mnt->mnt_sb)->huge = shmem_huge;
 	else
@@ -3926,7 +3926,7 @@ int __init shmem_init(void)
 	return error;
 }
 
-#if defined(CONFIG_TRANSPARENT_HUGE_PAGECACHE) && defined(CONFIG_SYSFS)
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && defined(CONFIG_SYSFS)
 static ssize_t shmem_enabled_show(struct kobject *kobj,
 		struct kobj_attribute *attr, char *buf)
 {
@@ -3978,9 +3978,9 @@ static ssize_t shmem_enabled_store(struct kobject *kobj,
 
 struct kobj_attribute shmem_enabled_attr =
 	__ATTR(shmem_enabled, 0644, shmem_enabled_show, shmem_enabled_store);
-#endif /* CONFIG_TRANSPARENT_HUGE_PAGECACHE && CONFIG_SYSFS */
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE && CONFIG_SYSFS */
 
-#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 bool shmem_huge_enabled(struct vm_area_struct *vma)
 {
 	struct inode *inode = file_inode(vma->vm_file);
@@ -4015,7 +4015,7 @@ bool shmem_huge_enabled(struct vm_area_struct *vma)
 			return false;
 	}
 }
-#endif /* CONFIG_TRANSPARENT_HUGE_PAGECACHE */
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 #else /* !CONFIG_SHMEM */
 
@@ -4184,7 +4184,7 @@ int shmem_zero_setup(struct vm_area_struct *vma)
 	vma->vm_file = file;
 	vma->vm_ops = &shmem_vm_ops;
 
-	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE) &&
+	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
 	    ((vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK) <
 	    (vma->vm_end & HPAGE_PMD_MASK)) {
 		khugepaged_enter(vma, vma->vm_flags);
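For reading the is_huge_enabled() hunk out of context: SHMEM_HUGE_DENY and
SHMEM_HUGE_FORCE are negative sentinels that sit outside the user-visible
never/always/within_size/advise mount modes.  A userspace restatement of the
predicate with CONFIG_TRANSPARENT_HUGEPAGE assumed enabled; the sentinel
values are copied from the diff, while SHMEM_HUGE_NEVER/SHMEM_HUGE_ALWAYS
are assumed stand-ins for the real option values:

#include <stdbool.h>
#include <stdio.h>

#define SHMEM_HUGE_NEVER	0	/* assumed stand-in */
#define SHMEM_HUGE_ALWAYS	1	/* assumed stand-in */
#define SHMEM_HUGE_DENY		(-1)	/* from the diff */
#define SHMEM_HUGE_FORCE	(-2)	/* from the diff */

/* Mirrors is_huge_enabled(): huge pages are used when forced globally
 * or enabled on this mount, unless the global knob says deny. */
static bool is_huge_enabled(int shmem_huge, int sb_huge)
{
	if ((shmem_huge == SHMEM_HUGE_FORCE || sb_huge) &&
	    shmem_huge != SHMEM_HUGE_DENY)
		return true;
	return false;
}

int main(void)
{
	printf("%d\n", is_huge_enabled(SHMEM_HUGE_NEVER, SHMEM_HUGE_ALWAYS)); /* 1 */
	printf("%d\n", is_huge_enabled(SHMEM_HUGE_DENY, SHMEM_HUGE_ALWAYS));  /* 0 */
	printf("%d\n", is_huge_enabled(SHMEM_HUGE_FORCE, SHMEM_HUGE_NEVER));  /* 1 */
	return 0;
}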
-- 
2.25.1