From: Matthew Wilcox <willy@infradead.org>
To: linux-xfs@vger.kernel.org
Subject: Randomly fail readahead I/Os
Date: Sat, 3 Oct 2020 02:21:10 +0100
Message-ID: <20201003012110.GC20115@casper.infradead.org>

I have a patch in my THP development tree that fails 10% of the readahead
I/Os in order to make sure the fallback paths are tested.  This has
exposed three distinct problems so far, resulting in three patches that
are scheduled for 5.10:

    iomap: Set all uptodate bits for an Uptodate page
    iomap: Mark read blocks uptodate in write_begin
    iomap: Clear page error before beginning a write
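
In case it's not obvious why failing readahead is interesting:
readahead I/O errors are deliberately dropped on the floor, the page is
left !Uptodate, and a later read has to notice that and fall back to a
synchronous ->readpage() call.  Paraphrasing the relevant bit of
mm/filemap.c (a sketch, not a verbatim quote):

	page = find_get_page(mapping, index);
	if (!page) {
		page_cache_sync_readahead(mapping, ra, filp, index, req_count);
		page = find_get_page(mapping, index);
	}
	if (!PageUptodate(page)) {
		lock_page(page);
		if (!PageUptodate(page))
			/* readahead failed or raced; retry synchronously */
			error = mapping->a_ops->readpage(filp, page);
	}

Those fallback paths see almost no coverage in normal operation, which
is why forcing them is productive.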

I've hit a fourth problem when running generic/127:

XFS (sdb): Internal error isnullstartblock(got.br_startblock) at line 5807 of file fs/xfs/libxfs/xfs_bmap.c.  Caller xfs_bmap_collapse_extents+0x2bd/0x370
CPU: 4 PID: 4493 Comm: fsx Kdump: loaded Not tainted 5.9.0-rc3-00178-g35daf53935c9-dirty #765
Call Trace:
 xfs_corruption_error+0x7c/0x80
 xfs_bmap_collapse_extents+0x2e7/0x370
 xfs_collapse_file_space+0x133/0x1e0
 xfs_file_fallocate+0x110/0x480
 vfs_fallocate+0x128/0x270

That finally persuaded me to port the patch to the current iomap for-next
tree (see below).  Unfortunately, the problem doesn't reproduce there,
and I wonder if it's simply that a 4kB page size is too small.  Would
anyone like to give this a shot on a 64kB page size system?  It usually
takes less than 15 minutes to reproduce with my THP patchset, but doesn't
reproduce in 2 hours without it.
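
If you want to try it: apply the patch below on top of iomap for-next
and run generic/127 in a loop until it falls over.  With a configured
xfstests checkout, something like:

	cd xfstests-dev
	while ./check generic/127; do :; done

generic/127 is essentially a long fsx run, so driving fsx directly
against a file on the scratch filesystem should work just as well.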

diff --git a/fs/iomap/buffered-io.c b/fs/iomap/buffered-io.c
index 8180061b9e16..2e67631a12ce 100644
--- a/fs/iomap/buffered-io.c
+++ b/fs/iomap/buffered-io.c
@@ -193,6 +193,8 @@ iomap_read_end_io(struct bio *bio)
 	struct bio_vec *bvec;
 	struct bvec_iter_all iter_all;
 
+	if (bio->bi_private == (void *)7)
+		error = -EIO;
 	bio_for_each_segment_all(bvec, bio, iter_all)
 		iomap_read_page_end_io(bvec, error);
 	bio_put(bio);
@@ -286,6 +288,12 @@ iomap_readpage_actor(struct inode *inode, loff_t pos, loff_t length, void *data,
 		if (ctx->rac) /* same as readahead_gfp_mask */
 			gfp |= __GFP_NORETRY | __GFP_NOWARN;
 		ctx->bio = bio_alloc(gfp, min(BIO_MAX_PAGES, nr_vecs));
+		if (ctx->rac) {
+			static int error = 0;
+			ctx->bio->bi_private = (void *)(error++);
+			if (error == 10)
+				error = 0;
+		}
 		/*
 		 * If the bio_alloc fails, try it again for a single page to
 		 * avoid having to deal with partial page reads.  This emulates

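To spell out what the hack does: each readahead bio gets a sequence
number 0-9 stashed in bi_private at submission time (bi_private is
otherwise unused on the iomap read side, as far as I can tell), and the
completion handler treats the bio that drew number 7 as if it had failed
with -EIO, so one in every ten readahead I/Os reports an error.  Plain
->readpage I/Os (ctx->rac == NULL) are left alone.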