* [PATCH 0/6] Misc MM patches
@ 2020-03-03  4:11 Matthew Wilcox
  2020-03-03  4:11 ` [PATCH 1/6] mm: Use vm_fault error code directly Matthew Wilcox
                   ` (5 more replies)
  0 siblings, 6 replies; 14+ messages in thread
From: Matthew Wilcox @ 2020-03-03  4:11 UTC
  To: linux-mm; +Cc: Matthew Wilcox (Oracle)

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

These are all things that I noticed while working on the THP patches, but
all stand alone as useful MM cleanups.  Andrew, please consider applying.
They're not really related to each other, so cherry-pick if you like.

Matthew Wilcox (Oracle) (6):
  mm: Use vm_fault error code directly
  mm: Optimise find_subpage for !THP
  mm: Remove CONFIG_TRANSPARENT_HUGE_PAGECACHE
  mm: Use VM_BUG_ON_PAGE in clear_page_dirty_for_io
  mm: Unexport find_get_entry
  mm: Rewrite pagecache_get_page documentation

 include/linux/pagemap.h  | 15 +++++++++-----
 include/linux/shmem_fs.h | 10 +--------
 mm/Kconfig               |  6 +-----
 mm/filemap.c             | 44 ++++++++++++++++++----------------------
 mm/huge_memory.c         |  2 +-
 mm/khugepaged.c          | 10 ++-------
 mm/memory.c              |  5 ++---
 mm/page-writeback.c      |  2 +-
 mm/rmap.c                |  2 +-
 mm/shmem.c               | 36 ++++++++++++++++----------------
 10 files changed, 57 insertions(+), 75 deletions(-)

-- 
2.25.1




* [PATCH 1/6] mm: Use vm_fault error code directly
  2020-03-03  4:11 [PATCH 0/6] Misc MM patches Matthew Wilcox
@ 2020-03-03  4:11 ` Matthew Wilcox
  2020-03-03  4:11 ` [PATCH 2/6] mm: Optimise find_subpage for !THP Matthew Wilcox
                   ` (4 subsequent siblings)
  5 siblings, 0 replies; 14+ messages in thread
From: Matthew Wilcox @ 2020-03-03  4:11 UTC
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), Kirill A . Shutemov, Christoph Hellwig

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Use VM_FAULT_OOM instead of indirecting through vmf_error(-ENOMEM).
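
For reference, vmf_error() just maps -ENOMEM to VM_FAULT_OOM and
everything else to VM_FAULT_SIGBUS, so the constant is equivalent
here; roughly, from include/linux/mm.h:

static inline vm_fault_t vmf_error(int err)
{
	if (err == -ENOMEM)
		return VM_FAULT_OOM;
	return VM_FAULT_SIGBUS;
}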

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 mm/filemap.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 1784478270e1..1beb7716276b 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -2491,7 +2491,7 @@ vm_fault_t filemap_fault(struct vm_fault *vmf)
 		if (!page) {
 			if (fpin)
 				goto out_retry;
-			return vmf_error(-ENOMEM);
+			return VM_FAULT_OOM;
 		}
 	}
 
-- 
2.25.1




* [PATCH 2/6] mm: Optimise find_subpage for !THP
  2020-03-03  4:11 [PATCH 0/6] Misc MM patches Matthew Wilcox
  2020-03-03  4:11 ` [PATCH 1/6] mm: Use vm_fault error code directly Matthew Wilcox
@ 2020-03-03  4:11 ` Matthew Wilcox
  2020-03-03 21:28   ` Alexander Duyck
  2020-03-03  4:11 ` [PATCH 3/6] mm: Remove CONFIG_TRANSPARENT_HUGE_PAGECACHE Matthew Wilcox
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 14+ messages in thread
From: Matthew Wilcox @ 2020-03-03  4:11 UTC
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), Kirill A . Shutemov

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

If THP is disabled, find_subpage can become a no-op.
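
This works because hpage_nr_pages() folds to the compile-time constant
1 when THP is disabled, so the offset arithmetic below reduces to
"head + 0".  Roughly, from include/linux/huge_mm.h:

#ifdef CONFIG_TRANSPARENT_HUGEPAGE
static inline int hpage_nr_pages(struct page *page)
{
	if (unlikely(PageTransHuge(page)))
		return HPAGE_PMD_NR;
	return 1;
}
#else
#define hpage_nr_pages(x) 1
#endif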

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 include/linux/pagemap.h | 15 ++++++++++-----
 1 file changed, 10 insertions(+), 5 deletions(-)

diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index b4ea3a5d00e5..8785f60b05f8 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -333,14 +333,19 @@ static inline struct page *grab_cache_page_nowait(struct address_space *mapping,
 			mapping_gfp_mask(mapping));
 }
 
-static inline struct page *find_subpage(struct page *page, pgoff_t offset)
+/*
+ * Given the page we found in the page cache, return the page corresponding
+ * to this offset in the file
+ */
+static inline struct page *find_subpage(struct page *head, pgoff_t offset)
 {
-	if (PageHuge(page))
-		return page;
+	/* HugeTLBfs wants the head page regardless */
+	if (PageHuge(head))
+		return head;
 
-	VM_BUG_ON_PAGE(PageTail(page), page);
+	VM_BUG_ON_PAGE(PageTail(head), head);
 
-	return page + (offset & (compound_nr(page) - 1));
+	return head + (offset & (hpage_nr_pages(head) - 1));
 }
 
 struct page *find_get_entry(struct address_space *mapping, pgoff_t offset);
-- 
2.25.1




* [PATCH 3/6] mm: Remove CONFIG_TRANSPARENT_HUGE_PAGECACHE
  2020-03-03  4:11 [PATCH 0/6] Misc MM patches Matthew Wilcox
  2020-03-03  4:11 ` [PATCH 1/6] mm: Use vm_fault error code directly Matthew Wilcox
  2020-03-03  4:11 ` [PATCH 2/6] mm: Optimise find_subpage for !THP Matthew Wilcox
@ 2020-03-03  4:11 ` Matthew Wilcox
  2020-03-03 21:52   ` Alexander Duyck
  2020-03-03  4:11 ` [PATCH 4/6] mm: Use VM_BUG_ON_PAGE in clear_page_dirty_for_io Matthew Wilcox
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 14+ messages in thread
From: Matthew Wilcox @ 2020-03-03  4:11 UTC
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), Kirill A . Shutemov, Aneesh Kumar K . V

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Commit e496cf3d7821 ("thp: introduce CONFIG_TRANSPARENT_HUGE_PAGECACHE")
notes that it should be reverted once the PowerPC problem is fixed.
The commit that fixed the PowerPC problem (953c66c2b22a) did not revert
it; instead, it set CONFIG_TRANSPARENT_HUGE_PAGECACHE to the same value
as CONFIG_TRANSPARENT_HUGEPAGE.  Kirill and Aneesh confirm this was an
oversight, so remove the Kconfig symbol and undo the work of commit
e496cf3d7821.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
---
 include/linux/shmem_fs.h | 10 +---------
 mm/Kconfig               |  6 +-----
 mm/huge_memory.c         |  2 +-
 mm/khugepaged.c          | 10 ++--------
 mm/memory.c              |  5 ++---
 mm/rmap.c                |  2 +-
 mm/shmem.c               | 36 ++++++++++++++++++------------------
 7 files changed, 26 insertions(+), 45 deletions(-)

diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
index d56fefef8905..7a35a6901221 100644
--- a/include/linux/shmem_fs.h
+++ b/include/linux/shmem_fs.h
@@ -78,6 +78,7 @@ extern void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end);
 extern int shmem_unuse(unsigned int type, bool frontswap,
 		       unsigned long *fs_pages_to_unuse);
 
+extern bool shmem_huge_enabled(struct vm_area_struct *vma);
 extern unsigned long shmem_swap_usage(struct vm_area_struct *vma);
 extern unsigned long shmem_partial_swap_usage(struct address_space *mapping,
 						pgoff_t start, pgoff_t end);
@@ -114,15 +115,6 @@ static inline bool shmem_file(struct file *file)
 extern bool shmem_charge(struct inode *inode, long pages);
 extern void shmem_uncharge(struct inode *inode, long pages);
 
-#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE
-extern bool shmem_huge_enabled(struct vm_area_struct *vma);
-#else
-static inline bool shmem_huge_enabled(struct vm_area_struct *vma)
-{
-	return false;
-}
-#endif
-
 #ifdef CONFIG_SHMEM
 extern int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
 				  struct vm_area_struct *dst_vma,
diff --git a/mm/Kconfig b/mm/Kconfig
index ab80933be65f..211a70e8d5cf 100644
--- a/mm/Kconfig
+++ b/mm/Kconfig
@@ -420,10 +420,6 @@ config THP_SWAP
 
 	  For selection by architectures with reasonable THP sizes.
 
-config	TRANSPARENT_HUGE_PAGECACHE
-	def_bool y
-	depends on TRANSPARENT_HUGEPAGE
-
 #
 # UP and nommu archs use km based percpu allocator
 #
@@ -714,7 +710,7 @@ config GUP_GET_PTE_LOW_HIGH
 
 config READ_ONLY_THP_FOR_FS
 	bool "Read-only THP for filesystems (EXPERIMENTAL)"
-	depends on TRANSPARENT_HUGE_PAGECACHE && SHMEM
+	depends on TRANSPARENT_HUGEPAGE && SHMEM
 
 	help
 	  Allow khugepaged to put read-only file-backed pages in THP.
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index b08b199f9a11..e88cce651705 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -326,7 +326,7 @@ static struct attribute *hugepage_attr[] = {
 	&defrag_attr.attr,
 	&use_zero_page_attr.attr,
 	&hpage_pmd_size_attr.attr,
-#if defined(CONFIG_SHMEM) && defined(CONFIG_TRANSPARENT_HUGE_PAGECACHE)
+#ifdef CONFIG_SHMEM
 	&shmem_enabled_attr.attr,
 #endif
 #ifdef CONFIG_DEBUG_VM
diff --git a/mm/khugepaged.c b/mm/khugepaged.c
index b679908743cb..f99ac65271aa 100644
--- a/mm/khugepaged.c
+++ b/mm/khugepaged.c
@@ -416,8 +416,6 @@ static bool hugepage_vma_check(struct vm_area_struct *vma,
 	    (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
 	     vma->vm_file &&
 	     (vm_flags & VM_DENYWRITE))) {
-		if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE))
-			return false;
 		return IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
 				HPAGE_PMD_NR);
 	}
@@ -1260,7 +1258,7 @@ static void collect_mm_slot(struct mm_slot *mm_slot)
 	}
 }
 
-#if defined(CONFIG_SHMEM) && defined(CONFIG_TRANSPARENT_HUGE_PAGECACHE)
+#ifdef CONFIG_SHMEM
 /*
  * Notify khugepaged that given addr of the mm is pte-mapped THP. Then
  * khugepaged should try to collapse the page table.
@@ -1986,14 +1984,10 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
 				  khugepaged_scan.address + HPAGE_PMD_SIZE >
 				  hend);
 			if (IS_ENABLED(CONFIG_SHMEM) && vma->vm_file) {
-				struct file *file;
+				struct file *file = get_file(vma->vm_file);
 				pgoff_t pgoff = linear_page_index(vma,
 						khugepaged_scan.address);
 
-				if (shmem_file(vma->vm_file)
-				    && !shmem_huge_enabled(vma))
-					goto skip;
-				file = get_file(vma->vm_file);
 				up_read(&mm->mmap_sem);
 				ret = 1;
 				khugepaged_scan_file(mm, file, pgoff, hpage);
diff --git a/mm/memory.c b/mm/memory.c
index 0bccc622e482..6ab0b03ea9bd 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -3354,7 +3354,7 @@ static vm_fault_t pte_alloc_one_map(struct vm_fault *vmf)
 	return 0;
 }
 
-#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 static void deposit_prealloc_pte(struct vm_fault *vmf)
 {
 	struct vm_area_struct *vma = vmf->vma;
@@ -3456,8 +3456,7 @@ vm_fault_t alloc_set_pte(struct vm_fault *vmf, struct mem_cgroup *memcg,
 	pte_t entry;
 	vm_fault_t ret;
 
-	if (pmd_none(*vmf->pmd) && PageTransCompound(page) &&
-			IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE)) {
+	if (pmd_none(*vmf->pmd) && PageTransCompound(page)) {
 		/* THP on COW? */
 		VM_BUG_ON_PAGE(memcg, page);
 
diff --git a/mm/rmap.c b/mm/rmap.c
index b3e381919835..af48024c6baf 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -940,7 +940,7 @@ static bool page_mkclean_one(struct page *page, struct vm_area_struct *vma,
 			set_pte_at(vma->vm_mm, address, pte, entry);
 			ret = 1;
 		} else {
-#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 			pmd_t *pmd = pvmw.pmd;
 			pmd_t entry;
 
diff --git a/mm/shmem.c b/mm/shmem.c
index c8f7540ef048..056cec644c17 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -410,7 +410,7 @@ static bool shmem_confirm_swap(struct address_space *mapping,
 #define SHMEM_HUGE_DENY		(-1)
 #define SHMEM_HUGE_FORCE	(-2)
 
-#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 /* ifdef here to avoid bloating shmem.o when not necessary */
 
 static int shmem_huge __read_mostly;
@@ -580,7 +580,7 @@ static long shmem_unused_huge_count(struct super_block *sb,
 	struct shmem_sb_info *sbinfo = SHMEM_SB(sb);
 	return READ_ONCE(sbinfo->shrinklist_len);
 }
-#else /* !CONFIG_TRANSPARENT_HUGE_PAGECACHE */
+#else /* !CONFIG_TRANSPARENT_HUGEPAGE */
 
 #define shmem_huge SHMEM_HUGE_DENY
 
@@ -589,11 +589,11 @@ static unsigned long shmem_unused_huge_shrink(struct shmem_sb_info *sbinfo,
 {
 	return 0;
 }
-#endif /* CONFIG_TRANSPARENT_HUGE_PAGECACHE */
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 static inline bool is_huge_enabled(struct shmem_sb_info *sbinfo)
 {
-	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE) &&
+	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
 	    (shmem_huge == SHMEM_HUGE_FORCE || sbinfo->huge) &&
 	    shmem_huge != SHMEM_HUGE_DENY)
 		return true;
@@ -1059,7 +1059,7 @@ static int shmem_setattr(struct dentry *dentry, struct iattr *attr)
 			 * Part of the huge page can be beyond i_size: subject
 			 * to shrink under memory pressure.
 			 */
-			if (IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE)) {
+			if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
 				spin_lock(&sbinfo->shrinklist_lock);
 				/*
 				 * _careful to defend against unlocked access to
@@ -1472,7 +1472,7 @@ static struct page *shmem_alloc_hugepage(gfp_t gfp,
 	pgoff_t hindex;
 	struct page *page;
 
-	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE))
+	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
 		return NULL;
 
 	hindex = round_down(index, HPAGE_PMD_NR);
@@ -1511,7 +1511,7 @@ static struct page *shmem_alloc_and_acct_page(gfp_t gfp,
 	int nr;
 	int err = -ENOSPC;
 
-	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE))
+	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
 		huge = false;
 	nr = huge ? HPAGE_PMD_NR : 1;
 
@@ -2089,7 +2089,7 @@ unsigned long shmem_get_unmapped_area(struct file *file,
 	get_area = current->mm->get_unmapped_area;
 	addr = get_area(file, uaddr, len, pgoff, flags);
 
-	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE))
+	if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE))
 		return addr;
 	if (IS_ERR_VALUE(addr))
 		return addr;
@@ -2228,7 +2228,7 @@ static int shmem_mmap(struct file *file, struct vm_area_struct *vma)
 
 	file_accessed(file);
 	vma->vm_ops = &shmem_vm_ops;
-	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE) &&
+	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
 			((vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK) <
 			(vma->vm_end & HPAGE_PMD_MASK)) {
 		khugepaged_enter(vma, vma->vm_flags);
@@ -3457,7 +3457,7 @@ static int shmem_parse_one(struct fs_context *fc, struct fs_parameter *param)
 	case Opt_huge:
 		ctx->huge = result.uint_32;
 		if (ctx->huge != SHMEM_HUGE_NEVER &&
-		    !(IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE) &&
+		    !(IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
 		      has_transparent_hugepage()))
 			goto unsupported_parameter;
 		ctx->seen |= SHMEM_SEEN_HUGE;
@@ -3603,7 +3603,7 @@ static int shmem_show_options(struct seq_file *seq, struct dentry *root)
 	if (!gid_eq(sbinfo->gid, GLOBAL_ROOT_GID))
 		seq_printf(seq, ",gid=%u",
 				from_kgid_munged(&init_user_ns, sbinfo->gid));
-#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	/* Rightly or wrongly, show huge mount option unmasked by shmem_huge */
 	if (sbinfo->huge)
 		seq_printf(seq, ",huge=%s", shmem_format_huge(sbinfo->huge));
@@ -3848,7 +3848,7 @@ static const struct super_operations shmem_ops = {
 	.evict_inode	= shmem_evict_inode,
 	.drop_inode	= generic_delete_inode,
 	.put_super	= shmem_put_super,
-#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	.nr_cached_objects	= shmem_unused_huge_count,
 	.free_cached_objects	= shmem_unused_huge_scan,
 #endif
@@ -3910,7 +3910,7 @@ int __init shmem_init(void)
 		goto out1;
 	}
 
-#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	if (has_transparent_hugepage() && shmem_huge > SHMEM_HUGE_DENY)
 		SHMEM_SB(shm_mnt->mnt_sb)->huge = shmem_huge;
 	else
@@ -3926,7 +3926,7 @@ int __init shmem_init(void)
 	return error;
 }
 
-#if defined(CONFIG_TRANSPARENT_HUGE_PAGECACHE) && defined(CONFIG_SYSFS)
+#if defined(CONFIG_TRANSPARENT_HUGEPAGE) && defined(CONFIG_SYSFS)
 static ssize_t shmem_enabled_show(struct kobject *kobj,
 		struct kobj_attribute *attr, char *buf)
 {
@@ -3978,9 +3978,9 @@ static ssize_t shmem_enabled_store(struct kobject *kobj,
 
 struct kobj_attribute shmem_enabled_attr =
 	__ATTR(shmem_enabled, 0644, shmem_enabled_show, shmem_enabled_store);
-#endif /* CONFIG_TRANSPARENT_HUGE_PAGECACHE && CONFIG_SYSFS */
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE && CONFIG_SYSFS */
 
-#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE
+#ifdef CONFIG_TRANSPARENT_HUGEPAGE
 bool shmem_huge_enabled(struct vm_area_struct *vma)
 {
 	struct inode *inode = file_inode(vma->vm_file);
@@ -4015,7 +4015,7 @@ bool shmem_huge_enabled(struct vm_area_struct *vma)
 			return false;
 	}
 }
-#endif /* CONFIG_TRANSPARENT_HUGE_PAGECACHE */
+#endif /* CONFIG_TRANSPARENT_HUGEPAGE */
 
 #else /* !CONFIG_SHMEM */
 
@@ -4184,7 +4184,7 @@ int shmem_zero_setup(struct vm_area_struct *vma)
 	vma->vm_file = file;
 	vma->vm_ops = &shmem_vm_ops;
 
-	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE) &&
+	if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE) &&
 			((vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK) <
 			(vma->vm_end & HPAGE_PMD_MASK)) {
 		khugepaged_enter(vma, vma->vm_flags);
-- 
2.25.1




* [PATCH 4/6] mm: Use VM_BUG_ON_PAGE in clear_page_dirty_for_io
  2020-03-03  4:11 [PATCH 0/6] Misc MM patches Matthew Wilcox
                   ` (2 preceding siblings ...)
  2020-03-03  4:11 ` [PATCH 3/6] mm: Remove CONFIG_TRANSPARENT_HUGE_PAGECACHE Matthew Wilcox
@ 2020-03-03  4:11 ` Matthew Wilcox
  2020-03-03  4:11 ` [PATCH 5/6] mm: Unexport find_get_entry Matthew Wilcox
  2020-03-03  4:11 ` [PATCH 6/6] mm: Rewrite pagecache_get_page documentation Matthew Wilcox
  5 siblings, 0 replies; 14+ messages in thread
From: Matthew Wilcox @ 2020-03-03  4:11 UTC
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), Christoph Hellwig, Kirill A . Shutemov

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

Dumping the page information in this circumstance helps with debugging.
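
For context, VM_BUG_ON_PAGE() dumps the offending page via dump_page()
before hitting BUG(); lightly simplified from include/linux/mmdebug.h:

#ifdef CONFIG_DEBUG_VM
#define VM_BUG_ON_PAGE(cond, page)					\
	do {								\
		if (unlikely(cond)) {					\
			dump_page(page, "VM_BUG_ON_PAGE(" #cond ")");	\
			BUG();						\
		}							\
	} while (0)
#else
#define VM_BUG_ON_PAGE(cond, page) VM_BUG_ON(cond)
#endif

Note this also means the check compiles away when CONFIG_DEBUG_VM is
disabled, unlike the unconditional BUG_ON().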

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 mm/page-writeback.c | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/mm/page-writeback.c b/mm/page-writeback.c
index 2caf780a42e7..9173c25cf8e6 100644
--- a/mm/page-writeback.c
+++ b/mm/page-writeback.c
@@ -2655,7 +2655,7 @@ int clear_page_dirty_for_io(struct page *page)
 	struct address_space *mapping = page_mapping(page);
 	int ret = 0;
 
-	BUG_ON(!PageLocked(page));
+	VM_BUG_ON_PAGE(!PageLocked(page), page);
 
 	if (mapping && mapping_cap_account_dirty(mapping)) {
 		struct inode *inode = mapping->host;
-- 
2.25.1




* [PATCH 5/6] mm: Unexport find_get_entry
  2020-03-03  4:11 [PATCH 0/6] Misc MM patches Matthew Wilcox
                   ` (3 preceding siblings ...)
  2020-03-03  4:11 ` [PATCH 4/6] mm: Use VM_BUG_ON_PAGE in clear_page_dirty_for_io Matthew Wilcox
@ 2020-03-03  4:11 ` Matthew Wilcox
  2020-03-03  4:11 ` [PATCH 6/6] mm: Rewrite pagecache_get_page documentation Matthew Wilcox
  5 siblings, 0 replies; 14+ messages in thread
From: Matthew Wilcox @ 2020-03-03  4:11 UTC
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), Christoph Hellwig, Kirill A . Shutemov

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

None of the in-tree users (proc, madvise, memcg, mincore) can be built
as a module.

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 mm/filemap.c | 1 -
 1 file changed, 1 deletion(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 1beb7716276b..83ce9ce0bee1 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1536,7 +1536,6 @@ struct page *find_get_entry(struct address_space *mapping, pgoff_t offset)
 
 	return page;
 }
-EXPORT_SYMBOL(find_get_entry);
 
 /**
  * find_lock_entry - locate, pin and lock a page cache entry
-- 
2.25.1




* [PATCH 6/6] mm: Rewrite pagecache_get_page documentation
  2020-03-03  4:11 [PATCH 0/6] Misc MM patches Matthew Wilcox
                   ` (4 preceding siblings ...)
  2020-03-03  4:11 ` [PATCH 5/6] mm: Unexport find_get_entry Matthew Wilcox
@ 2020-03-03  4:11 ` Matthew Wilcox
  5 siblings, 0 replies; 14+ messages in thread
From: Matthew Wilcox @ 2020-03-03  4:11 UTC
  To: linux-mm; +Cc: Matthew Wilcox (Oracle), Kirill A . Shutemov

From: "Matthew Wilcox (Oracle)" <willy@infradead.org>

 - These were never called PCG flags; they've been called FGP flags since
   their introduction in 2014.
 - The FGP_FOR_MMAP flag was misleadingly documented as if it were an
   alternative to FGP_CREAT instead of an option to it.
 - Capitalisation, formatting, rewording.
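
For illustration, a hypothetical caller using the flags as documented
('mapping' and 'index' are placeholders):

struct page *page;

page = pagecache_get_page(mapping, index,
		FGP_LOCK | FGP_CREAT | FGP_ACCESSED,
		mapping_gfp_mask(mapping));
if (!page)
	return -ENOMEM;
/* page is locked, with an elevated refcount */
unlock_page(page);
put_page(page);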

Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
---
 mm/filemap.c | 41 +++++++++++++++++++----------------------
 1 file changed, 19 insertions(+), 22 deletions(-)

diff --git a/mm/filemap.c b/mm/filemap.c
index 83ce9ce0bee1..7de79a2b8220 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -1574,37 +1574,34 @@ struct page *find_lock_entry(struct address_space *mapping, pgoff_t offset)
 EXPORT_SYMBOL(find_lock_entry);
 
 /**
- * pagecache_get_page - find and get a page reference
- * @mapping: the address_space to search
- * @offset: the page index
- * @fgp_flags: PCG flags
- * @gfp_mask: gfp mask to use for the page cache data page allocation
- *
- * Looks up the page cache slot at @mapping & @offset.
+ * pagecache_get_page - Find and get a reference to a page.
+ * @mapping: The address_space to search.
+ * @offset: The page index.
+ * @fgp_flags: %FGP flags modify how the page is returned.
+ * @gfp_mask: Memory allocation flags to use if %FGP_CREAT is specified.
  *
- * PCG flags modify how the page is returned.
+ * Looks up the page cache entry at @mapping & @offset.
  *
- * @fgp_flags can be:
+ * @fgp_flags can be zero or more of these flags:
  *
- * - FGP_ACCESSED: the page will be marked accessed
- * - FGP_LOCK: Page is return locked
- * - FGP_CREAT: If page is not present then a new page is allocated using
- *   @gfp_mask and added to the page cache and the VM's LRU
- *   list. The page is returned locked and with an increased
- *   refcount.
- * - FGP_FOR_MMAP: Similar to FGP_CREAT, only we want to allow the caller to do
- *   its own locking dance if the page is already in cache, or unlock the page
- *   before returning if we had to add the page to pagecache.
+ * * %FGP_ACCESSED - The page will be marked accessed.
+ * * %FGP_LOCK - The page is returned locked.
+ * * %FGP_CREAT - If no page is present then a new page is allocated using
+ *   @gfp_mask and added to the page cache and the VM's LRU list.
+ *   The page is returned locked and with an increased refcount.
+ * * %FGP_FOR_MMAP - The caller wants to do its own locking dance if the
+ *   page is already in cache.  If the page was allocated, unlock it before
+ *   returning so the caller can do the same dance.
  *
- * If FGP_LOCK or FGP_CREAT are specified then the function may sleep even
- * if the GFP flags specified for FGP_CREAT are atomic.
+ * If %FGP_LOCK or %FGP_CREAT are specified then the function may sleep even
+ * if the %GFP flags specified for %FGP_CREAT are atomic.
  *
  * If there is a page cache page, it is returned with an increased refcount.
  *
- * Return: the found page or %NULL otherwise.
+ * Return: The found page or %NULL otherwise.
  */
 struct page *pagecache_get_page(struct address_space *mapping, pgoff_t offset,
-	int fgp_flags, gfp_t gfp_mask)
+		int fgp_flags, gfp_t gfp_mask)
 {
 	struct page *page;
 
-- 
2.25.1




* Re: [PATCH 2/6] mm: Optimise find_subpage for !THP
  2020-03-03  4:11 ` [PATCH 2/6] mm: Optimise find_subpage for !THP Matthew Wilcox
@ 2020-03-03 21:28   ` Alexander Duyck
  2020-03-03 21:47     ` Matthew Wilcox
  0 siblings, 1 reply; 14+ messages in thread
From: Alexander Duyck @ 2020-03-03 21:28 UTC
  To: Matthew Wilcox; +Cc: linux-mm, Kirill A . Shutemov

On Mon, Mar 2, 2020 at 8:11 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
>
> If THP is disabled, find_subpage can become a no-op.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> ---
>  include/linux/pagemap.h | 15 ++++++++++-----
>  1 file changed, 10 insertions(+), 5 deletions(-)
>
> diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> index b4ea3a5d00e5..8785f60b05f8 100644
> --- a/include/linux/pagemap.h
> +++ b/include/linux/pagemap.h
> @@ -333,14 +333,19 @@ static inline struct page *grab_cache_page_nowait(struct address_space *mapping,
>                         mapping_gfp_mask(mapping));
>  }
>
> -static inline struct page *find_subpage(struct page *page, pgoff_t offset)
> +/*
> + * Given the page we found in the page cache, return the page corresponding
> + * to this offset in the file
> + */
> +static inline struct page *find_subpage(struct page *head, pgoff_t offset)
>  {
> -       if (PageHuge(page))
> -               return page;
> +       /* HugeTLBfs wants the head page regardless */
> +       if (PageHuge(head))
> +               return head;
>
> -       VM_BUG_ON_PAGE(PageTail(page), page);
> +       VM_BUG_ON_PAGE(PageTail(head), head);

Is there any specific reason for renaming page to head? Just
wondering, since it adds some noise to the patch that isn't called out
in the patch description. From what I can tell, none of the changes
above this point are explained in the patch description, and until I
noticed the change below I thought maybe you had attached the wrong
patch description to this patch.

> -       return page + (offset & (compound_nr(page) - 1));
> +       return head + (offset & (hpage_nr_pages(head) - 1));
>  }

So the patch description refers to this line here, correct?

One thing I am noticing is that it looks like the VM_BUG_ON_PAGE is
now redundant. If I am not not mistaken hpage_nr_pages will call
PageTransHuge which also performs the same check. So do you still need
the VM_BUG_ON_PAGE call in this function?
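
For reference, PageTransHuge() in the THP-enabled case reads:

static inline int PageTransHuge(struct page *page)
{
	VM_BUG_ON_PAGE(PageTail(page), page);
	return PageHead(page);
}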



* Re: [PATCH 2/6] mm: Optimise find_subpage for !THP
  2020-03-03 21:28   ` Alexander Duyck
@ 2020-03-03 21:47     ` Matthew Wilcox
  2020-03-05  9:54       ` William Kucharski
  0 siblings, 1 reply; 14+ messages in thread
From: Matthew Wilcox @ 2020-03-03 21:47 UTC
  To: Alexander Duyck; +Cc: linux-mm, Kirill A . Shutemov

On Tue, Mar 03, 2020 at 01:28:13PM -0800, Alexander Duyck wrote:
> On Mon, Mar 2, 2020 at 8:11 PM Matthew Wilcox <willy@infradead.org> wrote:
> > From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
> >
> > If THP is disabled, find_subpage can become a no-op.
> >
> > Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> > Acked-by: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> > ---
> >  include/linux/pagemap.h | 15 ++++++++++-----
> >  1 file changed, 10 insertions(+), 5 deletions(-)
> >
> > diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
> > index b4ea3a5d00e5..8785f60b05f8 100644
> > --- a/include/linux/pagemap.h
> > +++ b/include/linux/pagemap.h
> > @@ -333,14 +333,19 @@ static inline struct page *grab_cache_page_nowait(struct address_space *mapping,
> >                         mapping_gfp_mask(mapping));
> >  }
> >
> > -static inline struct page *find_subpage(struct page *page, pgoff_t offset)
> > +/*
> > + * Given the page we found in the page cache, return the page corresponding
> > + * to this offset in the file
> > + */
> > +static inline struct page *find_subpage(struct page *head, pgoff_t offset)
> >  {
> > -       if (PageHuge(page))
> > -               return page;
> > +       /* HugeTLBfs wants the head page regardless */
> > +       if (PageHuge(head))
> > +               return head;
> >
> > -       VM_BUG_ON_PAGE(PageTail(page), page);
> > +       VM_BUG_ON_PAGE(PageTail(head), head);
> 
> Is there any specific reason for renaming page to head? Just wondering

Christoph Hellwig asked for the rename in an earlier version of this
patchset.

> since it adds some noise to the patch that wasn't really called out in
> the patch description. From what I can tell none of the changes above
> this point have any explanation to them in the patch description, and
> until I noticed the change below I thought maybe you had the wrong
> patch description for this patch.
> 
> > -       return page + (offset & (compound_nr(page) - 1));
> > +       return head + (offset & (hpage_nr_pages(head) - 1));
> >  }
> 
> So the patch description refers to this line here, correct?

Yes, that's the actual change.

> One thing I am noticing is that it looks like the VM_BUG_ON_PAGE is
> now redundant. If I am not mistaken, hpage_nr_pages() will call
> PageTransHuge(), which performs the same check. So do you still need
> the VM_BUG_ON_PAGE call in this function?

Huh, I didn't know PageTransHuge had that.  I suppose it can go,
although hpage_nr_pages() only calls PageTransHuge() in the
CONFIG_TRANSPARENT_HUGEPAGE enabled case.  You might ask how a tail
page could get into the page cache if CONFIG_TRANSPARENT_HUGEPAGE is
not enabled, and I would not have a good answer for that ... although
if one did, I think we'd want to know about it.  So maybe that's a good
reason to keep the explicit check, or maybe it's a good reason to make
hpage_nr_pages() check that assertion if !CONFIG_TRANSPARENT_HUGEPAGE.
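
Something like this hypothetical replacement for the current
"#define hpage_nr_pages(x) 1" stub:

#ifndef CONFIG_TRANSPARENT_HUGEPAGE
static inline int hpage_nr_pages(struct page *page)
{
	VM_BUG_ON_PAGE(PageTail(page), page);
	return 1;
}
#endif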




* Re: [PATCH 3/6] mm: Remove CONFIG_TRANSPARENT_HUGE_PAGECACHE
  2020-03-03  4:11 ` [PATCH 3/6] mm: Remove CONFIG_TRANSPARENT_HUGE_PAGECACHE Matthew Wilcox
@ 2020-03-03 21:52   ` Alexander Duyck
  2020-03-03 22:34     ` Matthew Wilcox
  0 siblings, 1 reply; 14+ messages in thread
From: Alexander Duyck @ 2020-03-03 21:52 UTC
  To: Matthew Wilcox; +Cc: linux-mm, Kirill A . Shutemov, Aneesh Kumar K . V

On Mon, Mar 2, 2020 at 8:11 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> From: "Matthew Wilcox (Oracle)" <willy@infradead.org>
>
> Commit e496cf3d7821 ("thp: introduce CONFIG_TRANSPARENT_HUGE_PAGECACHE")
> notes that it should be reverted once the PowerPC problem is fixed.
> The commit that fixed the PowerPC problem (953c66c2b22a) did not revert
> it; instead, it set CONFIG_TRANSPARENT_HUGE_PAGECACHE to the same value
> as CONFIG_TRANSPARENT_HUGEPAGE.  Kirill and Aneesh confirm this was an
> oversight, so remove the Kconfig symbol and undo the work of commit
> e496cf3d7821.
>
> Signed-off-by: Matthew Wilcox (Oracle) <willy@infradead.org>
> Cc: Kirill A. Shutemov <kirill.shutemov@linux.intel.com>
> Cc: Aneesh Kumar K.V <aneesh.kumar@linux.vnet.ibm.com>
> ---
>  include/linux/shmem_fs.h | 10 +---------
>  mm/Kconfig               |  6 +-----
>  mm/huge_memory.c         |  2 +-
>  mm/khugepaged.c          | 10 ++--------
>  mm/memory.c              |  5 ++---
>  mm/rmap.c                |  2 +-
>  mm/shmem.c               | 36 ++++++++++++++++++------------------
>  7 files changed, 26 insertions(+), 45 deletions(-)
>
> diff --git a/include/linux/shmem_fs.h b/include/linux/shmem_fs.h
> index d56fefef8905..7a35a6901221 100644
> --- a/include/linux/shmem_fs.h
> +++ b/include/linux/shmem_fs.h
> @@ -78,6 +78,7 @@ extern void shmem_truncate_range(struct inode *inode, loff_t start, loff_t end);
>  extern int shmem_unuse(unsigned int type, bool frontswap,
>                        unsigned long *fs_pages_to_unuse);
>
> +extern bool shmem_huge_enabled(struct vm_area_struct *vma);
>  extern unsigned long shmem_swap_usage(struct vm_area_struct *vma);
>  extern unsigned long shmem_partial_swap_usage(struct address_space *mapping,
>                                                 pgoff_t start, pgoff_t end);
> @@ -114,15 +115,6 @@ static inline bool shmem_file(struct file *file)
>  extern bool shmem_charge(struct inode *inode, long pages);
>  extern void shmem_uncharge(struct inode *inode, long pages);
>
> -#ifdef CONFIG_TRANSPARENT_HUGE_PAGECACHE
> -extern bool shmem_huge_enabled(struct vm_area_struct *vma);
> -#else
> -static inline bool shmem_huge_enabled(struct vm_area_struct *vma)
> -{
> -       return false;
> -}
> -#endif
> -
>  #ifdef CONFIG_SHMEM
>  extern int shmem_mcopy_atomic_pte(struct mm_struct *dst_mm, pmd_t *dst_pmd,
>                                   struct vm_area_struct *dst_vma,

<snip>

> diff --git a/mm/khugepaged.c b/mm/khugepaged.c
> index b679908743cb..f99ac65271aa 100644
> --- a/mm/khugepaged.c
> +++ b/mm/khugepaged.c
> @@ -416,8 +416,6 @@ static bool hugepage_vma_check(struct vm_area_struct *vma,
>             (IS_ENABLED(CONFIG_READ_ONLY_THP_FOR_FS) &&
>              vma->vm_file &&
>              (vm_flags & VM_DENYWRITE))) {
> -               if (!IS_ENABLED(CONFIG_TRANSPARENT_HUGE_PAGECACHE))
> -                       return false;
>                 return IS_ALIGNED((vma->vm_start >> PAGE_SHIFT) - vma->vm_pgoff,
>                                 HPAGE_PMD_NR);
>         }
> @@ -1260,7 +1258,7 @@ static void collect_mm_slot(struct mm_slot *mm_slot)
>         }
>  }
>
> -#if defined(CONFIG_SHMEM) && defined(CONFIG_TRANSPARENT_HUGE_PAGECACHE)
> +#ifdef CONFIG_SHMEM
>  /*
>   * Notify khugepaged that given addr of the mm is pte-mapped THP. Then
>   * khugepaged should try to collapse the page table.
> @@ -1986,14 +1984,10 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
>                                   khugepaged_scan.address + HPAGE_PMD_SIZE >
>                                   hend);
>                         if (IS_ENABLED(CONFIG_SHMEM) && vma->vm_file) {
> -                               struct file *file;
> +                               struct file *file = get_file(vma->vm_file);
>                                 pgoff_t pgoff = linear_page_index(vma,
>                                                 khugepaged_scan.address);
>
> -                               if (shmem_file(vma->vm_file)
> -                                   && !shmem_huge_enabled(vma))
> -                                       goto skip;
> -                               file = get_file(vma->vm_file);
>                                 up_read(&mm->mmap_sem);
>                                 ret = 1;
>                                 khugepaged_scan_file(mm, file, pgoff, hpage);

In the code above you didn't eliminate shmem_huge_enabled(); it still
exists and has multiple paths that can return false. Are we guaranteed
that it will return true here, or can it simply be ignored?

All the other changes look correct.



* Re: [PATCH 3/6] mm: Remove CONFIG_TRANSPARENT_HUGE_PAGECACHE
  2020-03-03 21:52   ` Alexander Duyck
@ 2020-03-03 22:34     ` Matthew Wilcox
  2020-03-03 22:54       ` Alexander Duyck
  0 siblings, 1 reply; 14+ messages in thread
From: Matthew Wilcox @ 2020-03-03 22:34 UTC
  To: Alexander Duyck; +Cc: linux-mm, Kirill A . Shutemov, Aneesh Kumar K . V

On Tue, Mar 03, 2020 at 01:52:10PM -0800, Alexander Duyck wrote:
> On Mon, Mar 2, 2020 at 8:11 PM Matthew Wilcox <willy@infradead.org> wrote:
> > @@ -1986,14 +1984,10 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
> >                                   khugepaged_scan.address + HPAGE_PMD_SIZE >
> >                                   hend);
> >                         if (IS_ENABLED(CONFIG_SHMEM) && vma->vm_file) {
> > -                               struct file *file;
> > +                               struct file *file = get_file(vma->vm_file);
> >                                 pgoff_t pgoff = linear_page_index(vma,
> >                                                 khugepaged_scan.address);
> >
> > -                               if (shmem_file(vma->vm_file)
> > -                                   && !shmem_huge_enabled(vma))
> > -                                       goto skip;
> > -                               file = get_file(vma->vm_file);
> 
> In the code above you didn't eliminate shmem_huge_enabled, it still
> exists and has multiple paths that can return false. Are we guaranteed
> that it will return true or is it that it can be ignored here?

It probably helps to read this in conjunction with e496cf3d7821 --
that patch did this:

                        if (shmem_file(vma->vm_file)) {
-                               struct file *file = get_file(vma->vm_file);
+                               struct file *file;
                                pgoff_t pgoff = linear_page_index(vma,
                                                khugepaged_scan.address);
+                               if (!shmem_huge_enabled(vma))
+                                       goto skip;
+                               file = get_file(vma->vm_file);
                                up_read(&mm->mmap_sem);

so I'm just undoing that.



* Re: [PATCH 3/6] mm: Remove CONFIG_TRANSPARENT_HUGE_PAGECACHE
  2020-03-03 22:34     ` Matthew Wilcox
@ 2020-03-03 22:54       ` Alexander Duyck
  2020-03-04  2:06         ` Matthew Wilcox
  0 siblings, 1 reply; 14+ messages in thread
From: Alexander Duyck @ 2020-03-03 22:54 UTC
  To: Matthew Wilcox; +Cc: linux-mm, Kirill A . Shutemov, Aneesh Kumar K . V

On Tue, Mar 3, 2020 at 2:34 PM Matthew Wilcox <willy@infradead.org> wrote:
>
> On Tue, Mar 03, 2020 at 01:52:10PM -0800, Alexander Duyck wrote:
> > On Mon, Mar 2, 2020 at 8:11 PM Matthew Wilcox <willy@infradead.org> wrote:
> > > @@ -1986,14 +1984,10 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
> > >                                   khugepaged_scan.address + HPAGE_PMD_SIZE >
> > >                                   hend);
> > >                         if (IS_ENABLED(CONFIG_SHMEM) && vma->vm_file) {
> > > -                               struct file *file;
> > > +                               struct file *file = get_file(vma->vm_file);
> > >                                 pgoff_t pgoff = linear_page_index(vma,
> > >                                                 khugepaged_scan.address);
> > >
> > > -                               if (shmem_file(vma->vm_file)
> > > -                                   && !shmem_huge_enabled(vma))
> > > -                                       goto skip;
> > > -                               file = get_file(vma->vm_file);
> >
> > In the code above you didn't eliminate shmem_huge_enabled, it still
> > exists and has multiple paths that can return false. Are we guaranteed
> > that it will return true or is it that it can be ignored here?
>
> It probably helps to read this in conjunction with e496cf3d7821 --
> that patch did this:
>
>                         if (shmem_file(vma->vm_file)) {
> -                               struct file *file = get_file(vma->vm_file);
> +                               struct file *file;
>                                 pgoff_t pgoff = linear_page_index(vma,
>                                                 khugepaged_scan.address);
> +                               if (!shmem_huge_enabled(vma))
> +                                       goto skip;
> +                               file = get_file(vma->vm_file);
>                                 up_read(&mm->mmap_sem);
>
> so I'm just undoing that.

It looks like you are only doing a partial revert though. The original
changes in the function were:
@@ -1681,8 +1683,6 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
                if (khugepaged_scan.address < hstart)
                        khugepaged_scan.address = hstart;
                VM_BUG_ON(khugepaged_scan.address & ~HPAGE_PMD_MASK);
-               if (shmem_file(vma->vm_file) && !shmem_huge_enabled(vma))
-                       goto skip;

                while (khugepaged_scan.address < hend) {
                        int ret;
@@ -1694,9 +1694,12 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
                                  khugepaged_scan.address + HPAGE_PMD_SIZE >
                                  hend);
                        if (shmem_file(vma->vm_file)) {
-                               struct file *file = get_file(vma->vm_file);
+                               struct file *file;
                                pgoff_t pgoff = linear_page_index(vma,
                                                khugepaged_scan.address);
+                               if (!shmem_huge_enabled(vma))
+                                       goto skip;
+                               file = get_file(vma->vm_file);
                                up_read(&mm->mmap_sem);
                                ret = 1;
                                khugepaged_scan_shmem(mm, file->f_mapping,

You reverted the second piece, but I didn't see you revert the first.
With the first piece reverted as well, it would make more sense, as
you would skip the block that much sooner.



* Re: [PATCH 3/6] mm: Remove CONFIG_TRANSPARENT_HUGE_PAGECACHE
  2020-03-03 22:54       ` Alexander Duyck
@ 2020-03-04  2:06         ` Matthew Wilcox
  0 siblings, 0 replies; 14+ messages in thread
From: Matthew Wilcox @ 2020-03-04  2:06 UTC
  To: Alexander Duyck; +Cc: linux-mm, Kirill A . Shutemov, Aneesh Kumar K . V

On Tue, Mar 03, 2020 at 02:54:55PM -0800, Alexander Duyck wrote:
> It looks like you are only doing a partial revert though. The original
> changes in the function were:
> @@ -1681,8 +1683,6 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
>                 if (khugepaged_scan.address < hstart)
>                         khugepaged_scan.address = hstart;
>                 VM_BUG_ON(khugepaged_scan.address & ~HPAGE_PMD_MASK);
> -               if (shmem_file(vma->vm_file) && !shmem_huge_enabled(vma))
> -                       goto skip;
> 
>                 while (khugepaged_scan.address < hend) {
>                         int ret;
> @@ -1694,9 +1694,12 @@ static unsigned int khugepaged_scan_mm_slot(unsigned int pages,
>                                   khugepaged_scan.address + HPAGE_PMD_SIZE >
>                                   hend);
>                         if (shmem_file(vma->vm_file)) {
> -                               struct file *file = get_file(vma->vm_file);
> +                               struct file *file;
>                                 pgoff_t pgoff = linear_page_index(vma,
>                                                 khugepaged_scan.address);
> +                               if (!shmem_huge_enabled(vma))
> +                                       goto skip;
> +                               file = get_file(vma->vm_file);
>                                 up_read(&mm->mmap_sem);
>                                 ret = 1;
>                                 khugepaged_scan_shmem(mm, file->f_mapping,
> 
> You reverted the second piece, but I didn't notice you reverting the
> first. WIth the first piece being reverted it would make more sense as
> you would just be skipping the block that much sooner.

Ah; that was an oversight on my part.  Thank you for catching it!  I'll
resubmit with that fixed.




* Re: [PATCH 2/6] mm: Optimise find_subpage for !THP
  2020-03-03 21:47     ` Matthew Wilcox
@ 2020-03-05  9:54       ` William Kucharski
  0 siblings, 0 replies; 14+ messages in thread
From: William Kucharski @ 2020-03-05  9:54 UTC
  To: Matthew Wilcox; +Cc: Alexander Duyck, linux-mm, Kirill A . Shutemov



> On Mar 3, 2020, at 2:47 PM, Matthew Wilcox <willy@infradead.org> wrote:
> 
> or maybe it's a good reason to make hpage_nr_pages() check that
> assertion if !CONFIG_TRANSPARENT_HUGEPAGE.

That would be easy enough to do, since at present hpage_nr_pages()
simply returns 1 if !CONFIG_TRANSPARENT_HUGEPAGE.

It would give more coverage to the assert at the cost of a slight
performance degradation.

Do you think it would be worth it?


