* [PATCH RFC] xfs: hold buffer across unpin and potential shutdown processing
@ 2021-05-03 12:18 Brian Foster
  2021-05-03 23:25 ` Dave Chinner
  0 siblings, 1 reply; 10+ messages in thread
From: Brian Foster @ 2021-05-03 12:18 UTC (permalink / raw)
  To: linux-xfs

The special processing used to simulate a buffer I/O failure on fs
shutdown has a difficult-to-reproduce race that can result in a
use-after-free of the associated buffer. Consider a buffer that has
been committed to the on-disk log and thus is AIL resident. The
buffer lands on the writeback delwri queue, but is subsequently
locked, committed and pinned by another transaction before it is
submitted for I/O. At this point, the buffer is stuck on the delwri
queue as it cannot be submitted for I/O until it is unpinned. A log
checkpoint I/O failure occurs sometime later, which aborts the bli.
The unpin handler is called with the aborted log item, drops the bli
reference count and the pin count, and falls into the I/O failure
simulation path.

The potential problem here is that once the pin count falls to zero
in ->iop_unpin(), xfsaild is free to retry delwri submission of the
buffer at any time, before the unpin handler even completes. If
delwri queue submission wins the race to the buffer lock, it
observes the shutdown state and simulates the I/O failure itself.
This releases both the bli and delwri queue holds and frees the
buffer while xfs_buf_item_unpin() sits on xfs_buf_lock() waiting to
run through the same failure sequence. This problem is rare and
requires many iterations of fstests generic/019 (which simulates
disk I/O failures) to reproduce.

To avoid this problem, hold the buffer across the unpin sequence in
xfs_buf_item_unpin(). This is a bit unfortunate in that the new hold
is unconditional while it is really only necessary for a rare, fatal
error scenario, but it guarantees the buffer still exists on the off
chance that the handler attempts to access it.

Signed-off-by: Brian Foster <bfoster@redhat.com>
---

This is a patch I've had around for a while for a very rare corner
case I was able to reproduce in some past testing. I'm sending it as
an RFC because I'm curious whether folks have any thoughts on the
approach. I'd be OK with this change as-is, but I think there are
alternatives available too. We could do something fairly simple like
bury the hold in the remove (abort) case only, or perhaps consider
checking IN_AIL state before the pin count drops and basing the
behavior on that (though that seems a bit more fragile to me).
Thoughts?

Brian

 fs/xfs/xfs_buf_item.c | 2 ++
 1 file changed, 2 insertions(+)

diff --git a/fs/xfs/xfs_buf_item.c b/fs/xfs/xfs_buf_item.c
index fb69879e4b2b..a1ad6901eb15 100644
--- a/fs/xfs/xfs_buf_item.c
+++ b/fs/xfs/xfs_buf_item.c
@@ -504,6 +504,7 @@ xfs_buf_item_unpin(
 
 	freed = atomic_dec_and_test(&bip->bli_refcount);
 
+	xfs_buf_hold(bp);
 	if (atomic_dec_and_test(&bp->b_pin_count))
 		wake_up_all(&bp->b_waiters);
 
@@ -560,6 +561,7 @@ xfs_buf_item_unpin(
 		bp->b_flags |= XBF_ASYNC;
 		xfs_buf_ioend_fail(bp);
 	}
+	xfs_buf_rele(bp);
 }
 
 STATIC uint
-- 
2.26.3




Thread overview: 10+ messages
2021-05-03 12:18 [PATCH RFC] xfs: hold buffer across unpin and potential shutdown processing Brian Foster
2021-05-03 23:25 ` Dave Chinner
2021-05-05 11:50   ` Brian Foster
2021-05-06  2:56     ` Dave Chinner
2021-05-06 19:29       ` Brian Foster
2021-06-02 13:32         ` Brian Foster
2021-06-02 16:02           ` Darrick J. Wong
2021-06-02 16:31             ` Brian Foster
2021-06-02 16:35               ` Darrick J. Wong
2021-06-02 16:40                 ` Brian Foster
