* [PATCH] iomap: get/put the page in iomap_page_create/release()
@ 2019-01-21 15:17 Christoph Hellwig
From: Christoph Hellwig @ 2019-01-21 15:17 UTC (permalink / raw)
  To: linux-xfs; +Cc: linux-fsdevel, Piotr Jaroszynski

From: Piotr Jaroszynski <pjaroszynski@nvidia.com>

migrate_page_move_mapping() expects pages with private data set to have
a page_count elevated by 1.  This is what used to happen for xfs through
the buffer_heads code before the switch to iomap in commit 82cb14175e7d
("xfs: add support for sub-pagesize writeback without buffer_heads").
Not having the count elevated causes move_pages() to fail on memory
mapped files coming from xfs.
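
For reference, the check in migrate_page_move_mapping() boils down to
comparing the page count against an expected value along these lines
(simplified sketch, not the exact mm/migrate.c code):

	static int expected_page_refs(struct page *page)
	{
		int expected_count = 1;	/* reference held by the migration path */

		if (page_mapping(page))
			expected_count += hpage_nr_pages(page) +
					  page_has_private(page);
		return expected_count;
	}

	/* in migrate_page_move_mapping(): */
	if (page_count(page) != expected_page_refs(page))
		return -EAGAIN;		/* migration of this page fails */

Without the extra reference, a page carrying an iomap_page has a count
one lower than what expected_page_refs() computes, so the -EAGAIN path
is taken every time.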

Make iomap compatible with the migrate_page_move_mapping() assumption by
elevating the page count as part of iomap_page_create() and lowering it
in iomap_page_release().

The missing reference causes the move_pages() syscall to misbehave on
memory mapped files from xfs: it does not move any pages, which I
suppose is "just" a performance issue, but it also ends up returning a
positive number, which is out of spec for the syscall.  From talking to
Michal Hocko, it sounds like returning positive numbers might be a
necessary update to move_pages() anyway, though.
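
For illustration, a minimal userspace sketch of the kind of test that
shows the behavior; the file path and target node are placeholders, it
needs a NUMA system with the file on xfs, and it links against libnuma
(-lnuma):

	#include <fcntl.h>
	#include <numaif.h>		/* move_pages(), MPOL_MF_MOVE */
	#include <stdio.h>
	#include <string.h>
	#include <sys/mman.h>
	#include <unistd.h>

	int main(void)
	{
		size_t len = getpagesize();
		int fd = open("/mnt/xfs/testfile", O_RDWR);	/* placeholder path */
		void *addr = mmap(NULL, len, PROT_READ | PROT_WRITE,
				  MAP_SHARED, fd, 0);
		if (fd < 0 || addr == MAP_FAILED)
			return 1;
		memset(addr, 0, len);		/* fault in and dirty the page */

		void *pages[1] = { addr };
		int nodes[1] = { 0 };		/* placeholder target node */
		int status[1] = { -1 };

		/*
		 * Expected: 0 with status[0] set to the target node.  With the
		 * bug the call returns a positive number and the page stays put.
		 */
		long ret = move_pages(0, 1, pages, nodes, status, MPOL_MF_MOVE);
		printf("move_pages() = %ld, status[0] = %d\n", ret, status[0]);
		return 0;
	}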

Fixes: 82cb14175e7d ("xfs: add support for sub-pagesize writeback without buffer_heads")
Signed-off-by: Piotr Jaroszynski <pjaroszynski@nvidia.com>
[hch: actually get/put the page in iomap_migrate_page() to make it work
      properly]
Signed-off-by: Christoph Hellwig <hch@lst.de>
---
 fs/iomap.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/fs/iomap.c b/fs/iomap.c
index 987fefc054b4..47362397cb82 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -116,6 +116,12 @@ iomap_page_create(struct inode *inode, struct page *page)
 	atomic_set(&iop->read_count, 0);
 	atomic_set(&iop->write_count, 0);
 	bitmap_zero(iop->uptodate, PAGE_SIZE / SECTOR_SIZE);
+
+	/*
+	 * migrate_page_move_mapping() assumes that pages with private data have
+	 * their count elevated by 1.
+	 */
+	get_page(page);
 	set_page_private(page, (unsigned long)iop);
 	SetPagePrivate(page);
 	return iop;
@@ -132,6 +138,7 @@ iomap_page_release(struct page *page)
 	WARN_ON_ONCE(atomic_read(&iop->write_count));
 	ClearPagePrivate(page);
 	set_page_private(page, 0);
+	put_page(page);
 	kfree(iop);
 }
 
@@ -569,8 +576,10 @@ iomap_migrate_page(struct address_space *mapping, struct page *newpage,
 
 	if (page_has_private(page)) {
 		ClearPagePrivate(page);
+		get_page(newpage);
 		set_page_private(newpage, page_private(page));
 		set_page_private(page, 0);
+		put_page(page);
 		SetPagePrivate(newpage);
 	}
 
-- 
2.20.1


* [PATCH] iomap: get/put the page in iomap_page_create/release()
@ 2018-11-15  0:30 p.jaroszynski
From: p.jaroszynski @ 2018-11-15  0:30 UTC (permalink / raw)
  To: Christoph Hellwig, Michal Hocko
  Cc: linux-fsdevel, linux-mm, Piotr Jaroszynski

From: Piotr Jaroszynski <pjaroszynski@nvidia.com>

migrate_page_move_mapping() expects pages with private data set to have
a page_count elevated by 1. This is what used to happen for xfs through
the buffer_heads code before the switch to iomap in 82cb14175e7d ("xfs:
add support for sub-pagesize writeback without buffer_heads"). Not
having the count elevated causes move_pages() to fail on memory mapped
files coming from xfs.
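
For comparison, the buffer_heads path takes that extra reference when
it attaches its private data; roughly like this (attach_page_buffers()
from include/linux/buffer_head.h, quoted from memory):

	static inline void
	attach_page_buffers(struct page *page, struct buffer_head *head)
	{
		get_page(page);
		SetPagePrivate(page);
		set_page_private(page, (unsigned long)head);
	}

iomap_page_create() sets PagePrivate without the matching get_page(),
which is the asymmetry this patch fixes.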

Make iomap compatible with the migrate_page_move_mapping() assumption
by elevating the page count as part of iomap_page_create() and lowering
it in iomap_page_release().

Fixes: 82cb14175e7d ("xfs: add support for sub-pagesize writeback without buffer_heads")
Signed-off-by: Piotr Jaroszynski <pjaroszynski@nvidia.com>
---
 fs/iomap.c | 7 +++++++
 1 file changed, 7 insertions(+)

diff --git a/fs/iomap.c b/fs/iomap.c
index 90c2febc93ac..23977f9f23a2 100644
--- a/fs/iomap.c
+++ b/fs/iomap.c
@@ -117,6 +117,12 @@ iomap_page_create(struct inode *inode, struct page *page)
 	atomic_set(&iop->read_count, 0);
 	atomic_set(&iop->write_count, 0);
 	bitmap_zero(iop->uptodate, PAGE_SIZE / SECTOR_SIZE);
+
+	/*
+	 * At least migrate_page_move_mapping() assumes that pages with private
+	 * data have their count elevated by 1.
+	 */
+	get_page(page);
 	set_page_private(page, (unsigned long)iop);
 	SetPagePrivate(page);
 	return iop;
@@ -133,6 +139,7 @@ iomap_page_release(struct page *page)
 	WARN_ON_ONCE(atomic_read(&iop->write_count));
 	ClearPagePrivate(page);
 	set_page_private(page, 0);
+	put_page(page);
 	kfree(iop);
 }
 
-- 
2.11.0.262.g4b0a5b2.dirty
