From: Andreas Gruenbacher <agruenba@redhat.com>
To: Christoph Hellwig <hch@infradead.org>,
	"Darrick J . Wong" <djwong@kernel.org>,
	Alexander Viro <viro@zeniv.linux.org.uk>,
	Matthew Wilcox <willy@infradead.org>
Cc: Andreas Gruenbacher <agruenba@redhat.com>,
	linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org,
	linux-ext4@vger.kernel.org, cluster-devel@redhat.com,
	Christoph Hellwig <hch@lst.de>
Subject: [RFC v6 02/10] iomap/gfs2: Unlock and put folio in page_done handler
Date: Sun,  8 Jan 2023 20:40:26 +0100	[thread overview]
Message-ID: <20230108194034.1444764-3-agruenba@redhat.com> (raw)
In-Reply-To: <20230108194034.1444764-1-agruenba@redhat.com>

When an iomap defines a ->page_done() handler in its page_ops, delegate
unlocking the folio and putting the folio reference to that handler.

This allows us to fix a race between journaled data writes and folio
writeback in gfs2: before this change, gfs2_iomap_page_done() was called
after unlocking the folio, so writeback could start writing back the
folio's buffers before they could be marked for writing to the journal.
Also, try_to_free_buffers() could free the buffers before
gfs2_iomap_page_done() had finished adding the buffers to the current
transaction.  With this change, gfs2_iomap_page_done() adds the
buffers to the current transaction while the folio is still locked, so
the problems described above can no longer occur.

The only current user of ->page_done() is gfs2, so other filesystems are
not affected.  To catch out any out-of-tree users, switch from a page to
a folio in ->page_done().

Signed-off-by: Andreas Gruenbacher <agruenba@redhat.com>
Reviewed-by: Darrick J. Wong <djwong@kernel.org>
Reviewed-by: Christoph Hellwig <hch@lst.de>
---
 fs/gfs2/bmap.c         | 15 ++++++++++++---
 fs/iomap/buffered-io.c |  8 ++++----
 include/linux/iomap.h  |  7 ++++---
 3 files changed, 20 insertions(+), 10 deletions(-)

diff --git a/fs/gfs2/bmap.c b/fs/gfs2/bmap.c
index e7537fd305dd..46206286ad42 100644
--- a/fs/gfs2/bmap.c
+++ b/fs/gfs2/bmap.c
@@ -968,14 +968,23 @@ static int gfs2_iomap_page_prepare(struct inode *inode, loff_t pos,
 }
 
 static void gfs2_iomap_page_done(struct inode *inode, loff_t pos,
-				 unsigned copied, struct page *page)
+				 unsigned copied, struct folio *folio)
 {
 	struct gfs2_trans *tr = current->journal_info;
 	struct gfs2_inode *ip = GFS2_I(inode);
 	struct gfs2_sbd *sdp = GFS2_SB(inode);
 
-	if (page && !gfs2_is_stuffed(ip))
-		gfs2_page_add_databufs(ip, page, offset_in_page(pos), copied);
+	if (!folio) {
+		gfs2_trans_end(sdp);
+		return;
+	}
+
+	if (!gfs2_is_stuffed(ip))
+		gfs2_page_add_databufs(ip, &folio->page, offset_in_page(pos),
+				       copied);
+
+	folio_unlock(folio);
+	folio_put(folio);
 
 	if (tr->tr_num_buf_new)
 		__mark_inode_dirty(inode, I_DIRTY_DATASYNC);
diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index c045689b6af8..a9082078e4ed 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -580,12 +580,12 @@ static void __iomap_put_folio(struct iomap_iter *iter, loff_t pos, size_t ret,
 {
 	const struct iomap_page_ops *page_ops = iter->iomap.page_ops;
 
-	if (folio)
+	if (page_ops && page_ops->page_done) {
+		page_ops->page_done(iter->inode, pos, ret, folio);
+	} else if (folio) {
 		folio_unlock(folio);
-	if (page_ops && page_ops->page_done)
-		page_ops->page_done(iter->inode, pos, ret, &folio->page);
-	if (folio)
 		folio_put(folio);
+	}
 }
 
 static int iomap_write_begin_inline(const struct iomap_iter *iter,
diff --git a/include/linux/iomap.h b/include/linux/iomap.h
index 0983dfc9a203..743e2a909162 100644
--- a/include/linux/iomap.h
+++ b/include/linux/iomap.h
@@ -131,13 +131,14 @@ static inline bool iomap_inline_data_valid(const struct iomap *iomap)
  * associated with them.
  *
  * When page_prepare succeeds, page_done will always be called to do any
- * cleanup work necessary.  In that page_done call, @page will be NULL if the
- * associated page could not be obtained.
+ * cleanup work necessary.  In that page_done call, @folio will be NULL if the
+ * associated folio could not be obtained.  When folio is not NULL, page_done
+ * is responsible for unlocking and putting the folio.
  */
 struct iomap_page_ops {
 	int (*page_prepare)(struct inode *inode, loff_t pos, unsigned len);
 	void (*page_done)(struct inode *inode, loff_t pos, unsigned copied,
-			struct page *page);
+			struct folio *folio);
 
 	/*
 	 * Check that the cached iomap still maps correctly to the filesystem's
-- 
2.38.1


