Subject: Is shmem page accounting wrong on split?
From: Matthew Wilcox @ 2020-08-28 14:25 UTC
  To: linux-mm; +Cc: Hugh Dickins, Yang Shi

If I understand truncate of a shmem THP correctly ...

Let's suppose the file has a single 2MB page at index 0, and is being
truncated down to 7 bytes in size.
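
For concreteness, something like this untested userspace sketch is what
I have in mind (it assumes shmem THP is enabled, a tmpfs mounted with
huge=always at a made-up path /mnt/huge, and that the 2MB write really
was allocated as a huge page):

#include <fcntl.h>
#include <stdlib.h>
#include <string.h>
#include <unistd.h>

int main(void)
{
	size_t sz = 2UL << 20;		/* one PMD-sized page */
	char *buf = malloc(sz);
	int fd = open("/mnt/huge/f", O_CREAT | O_RDWR, 0600);

	if (fd < 0 || !buf)
		return 1;
	memset(buf, 'x', sz);
	if (write(fd, buf, sz) != (ssize_t)sz)	/* fill indices 0..511 */
		return 1;
	if (ftruncate(fd, 7) < 0)	/* shmem_setattr() -> shmem_truncate_range(7, -1) */
		return 1;
	close(fd);
	return 0;
}

On the kernel side, the truncate then goes: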

shmem_setattr()
  i_size_write(7);
  shmem_truncate_range(7, -1);
    shmem_undo_range(7, -1)
      start = 1;
      page = &head[1];
      shmem_punch_compound();
        split_huge_page()
          end = DIV_ROUND_UP(i_size_read(mapping->host), PAGE_SIZE); # == 1
          __split_huge_page(..., 1, ...);
            __delete_from_page_cache(&head[1], ...);
      truncate_inode_page(mapping, page);
        delete_from_page_cache(page)
          __delete_from_page_cache(&head[1])
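
For reference, the first __delete_from_page_cache(&head[1]) above comes
from the tail-trimming loop in __split_huge_page(); roughly this,
paraphrased from memory rather than quoted verbatim:

	for (i = HPAGE_PMD_NR - 1; i >= 1; i--) {
		__split_huge_page_tail(head, i, lruvec, list);
		/* subpages at or beyond the 'end' noted above are dropped here */
		if (head[i].index >= end) {
			ClearPageDirty(head + i);
			__delete_from_page_cache(head + i, NULL);
			if (IS_ENABLED(CONFIG_SHMEM) && PageSwapBacked(head))
				shmem_uncharge(head->mapping->host, 1);
			put_page(head + i);
		} else {
			/* in-range subpages stay in the page cache */
		}
	}

Both that and the later delete_from_page_cache() decrement NR_FILE_PAGES
(and NR_SHMEM, since this is shmem) for the page they remove, so if the
second __delete_from_page_cache(&head[1]) really goes through, the same
subpage gets unaccounted twice.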

I think the solution is to call truncate_inode_page() from within
shmem_punch_compound() if we don't call split_huge_page().  I came across
this while reusing all this infrastructure for the XFS THP patchset,
so I'm not in a great position to test this patch.

This solution actually makes my life harder because I have a different
function to call if the page doesn't need to be split.  But it's probably
the right solution for upstream today.

diff --git a/mm/shmem.c b/mm/shmem.c
index b2abca3f7f33..a0bc42974c2d 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -819,15 +819,18 @@ void shmem_unlock_mapping(struct address_space *mapping)
 static bool shmem_punch_compound(struct page *page, pgoff_t start, pgoff_t end)
 {
 	if (!PageTransCompound(page))
-		return true;
+		goto nosplit;
 
 	/* Just proceed to delete a huge page wholly within the range punched */
 	if (PageHead(page) &&
 	    page->index >= start && page->index + HPAGE_PMD_NR <= end)
-		return true;
+		goto nosplit;
 
 	/* Try to split huge page, so we can truly punch the hole or truncate */
 	return split_huge_page(page) >= 0;
+nosplit:
+	truncate_inode_page(page->mapping, page);
+	return true;
 }
 
 /*
@@ -883,8 +886,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 			if ((!unfalloc || !PageUptodate(page)) &&
 			    page_mapping(page) == mapping) {
 				VM_BUG_ON_PAGE(PageWriteback(page), page);
-				if (shmem_punch_compound(page, start, end))
-					truncate_inode_page(mapping, page);
+				shmem_punch_compound(page, start, end);
 			}
 			unlock_page(page);
 		}
@@ -966,9 +968,7 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 					break;
 				}
 				VM_BUG_ON_PAGE(PageWriteback(page), page);
-				if (shmem_punch_compound(page, start, end))
-					truncate_inode_page(mapping, page);
-				else if (IS_ENABLED(CONFIG_TRANSPARENT_HUGEPAGE)) {
+				if (!shmem_punch_compound(page, start, end)) {
 					/* Wipe the page and don't get stuck */
 					clear_highpage(page);
 					flush_dcache_page(page);





