From: "Darrick J. Wong" <djwong@kernel.org>
To: Frank Sorenson <frank@tuxrocks.com>
Cc: Dave Chinner <david@fromorbit.com>,
linux-xfs@vger.kernel.org, linux-fsdevel@vger.kernel.org
Subject: Re: [PATCH 2/2] xfs: use iomap_valid method to detect stale cached iomaps
Date: Tue, 4 Oct 2022 18:34:15 -0700
Message-ID: <YzzfF/o695eRpOhY@magnolia>
In-Reply-To: <d00aff43-2bdc-0724-1996-4e58e061ecfd@redhat.com>
On Tue, Oct 04, 2022 at 06:34:03PM -0500, Frank Sorenson wrote:
>
>
> On 9/28/22 20:45, Dave Chinner wrote:
> > On Tue, Sep 27, 2022 at 09:54:27PM -0700, Darrick J. Wong wrote:
>
> > > Btw, can you share the reproducer?
>
> > Not sure. The current reproducer I have is 2500 lines of complex C
> > code that was originally based on a reproducer the original reporter
> > provided. It does lots of stuff that isn't directly related to
> > reproducing the issue, and will be impossible to review and maintain
> > as it stands in fstests.
>
> Too true. Fortunately, now that I understand the necessary conditions
> and IO patterns, I managed to prune it all down to ~75 lines of bash
> calling xfs_io. See below.
>
> Frank
> --
> Frank Sorenson
> sorenson@redhat.com
> Principal Software Maintenance Engineer
> Global Support Services - filesystems
> Red Hat
>
> ###########################################
> #!/bin/bash
> # Frank Sorenson <sorenson@redhat.com>, 2022
>
> num_files=8
> num_writers=3
>
> KiB=1024
> MiB=$(( $KiB * $KiB ))
> GiB=$(( $KiB * $KiB * $KiB ))
>
> file_size=$(( 500 * $MiB ))
> #file_size=$(( 1 * $GiB ))
> write_size=$(( 1 * $MiB ))
> start_offset=512
>
> num_loops=$(( ($file_size - $start_offset + (($num_writers * $write_size) - 1)) / ($num_writers * $write_size) ))
> total_size=$(( ($num_loops * $num_writers * $write_size) + $start_offset ))
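> # with the defaults: num_loops = ceil((500 MiB - 512) / 3 MiB) = 167, so
> # total_size = 167 * 3 MiB + 512 bytes, just over the 500 MiB target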
>
> cgroup_path=/sys/fs/cgroup/test_write_bug
> mkdir -p $cgroup_path || { echo "unable to create cgroup" ; exit 1 ; }
>
> max_mem=$(( 40 * $MiB ))
> high_mem=$(( ($max_mem * 9) / 10 ))
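> # memory.high (90% of max) throttles and forces reclaim well before the
> # memory.max hard limit (and the OOM killer) would kick in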
> echo $high_mem >$cgroup_path/memory.high
> echo $max_mem >$cgroup_path/memory.max
Hmm, so we set up a cgroup with a very low memory limit and then kick
off a lot of threads doing IO... which I guess is how you ended up with
a long write to an unwritten extent racing with memory reclaim
targeting a dirty page at the end of that unwritten extent for
writeback and eviction.
I wonder: if we had a way to slow down iomap_write_iter, could we
simulate the writeback and eviction with sync_file_range and
madvise(MADV_FREE)?
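Something like the following, perhaps (untested; note that MADV_FREE
only applies to anonymous mappings, so posix_fadvise(POSIX_FADV_DONTNEED)
is probably the closer substitute for evicting a clean pagecache page):

#define _GNU_SOURCE
#include <fcntl.h>

/*
 * Hypothetical helper, not part of this series: push the given range
 * through writeback and then try to drop the now-clean pages, roughly
 * approximating what reclaim would do to the tail of the extent.
 */
static int writeback_and_evict(int fd, off_t off, off_t len)
{
	int ret;

	/* Kick off writeback for the range and wait for it to complete. */
	ret = sync_file_range(fd, off, len,
			SYNC_FILE_RANGE_WRITE | SYNC_FILE_RANGE_WAIT_AFTER);
	if (ret)
		return ret;

	/* The pages are clean now; ask the kernel to evict them. */
	return posix_fadvise(fd, off, len, POSIX_FADV_DONTNEED);
}

If the write side could be stalled partway through iomap_write_iter, a
second thread calling this against the last page of the unwritten
extent might stand in for the reclaim half of the race.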
(I've been playing with a debug knob to slow down writeback for a
different corruption problem I've been working on, and it's taken the
repro time down from days to a 5-second fstest.)
Anyhow, thanks for the simplified repro, I'll keep thinking about this. :)
--D
> mkdir -p testfiles
> rm -f testfiles/expected
> xfs_io -f -c "pwrite -b $((1 * $MiB)) -S 0x40 0 $total_size" testfiles/expected >/dev/null 2>&1
> expected_sum=$(md5sum testfiles/expected | awk '{print $1}')
>
> echo $$ > $cgroup_path/cgroup.procs || exit # put ourselves in the cgroup
>
> do_one_testfile() {
> filenum=$1
> cpids=""
> offset=$start_offset
>
> rm -f testfiles/test$filenum
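> # seed the first $start_offset bytes; the racing writers fill in the rest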
> xfs_io -f -c "pwrite -b $start_offset -S 0x40 0 $start_offset" testfiles/test$filenum >/dev/null 2>&1
>
> while [[ $offset -lt $file_size ]] ; do
> cpids=""
> for i in $(seq 1 $num_writers) ; do
> xfs_io -f -c "pwrite -b $write_size -S 0x40 $(( ($offset + (($num_writers - $i) * $write_size) ) )) $write_size" testfiles/test$filenum >/dev/null 2>&1 &
> cpids="$cpids $!"
> done
> wait $cpids
> offset=$(( $offset + ($num_writers * $write_size) ))
> done
> }
>
> round=1
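> # run rounds until at least one file's md5sum differs from the expected sum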
> while [[ 42 ]] ; do
> echo "test round: $round"
> cpids=""
> for i in $(seq 1 $num_files) ; do
> do_one_testfile $i &
> cpids="$cpids $!"
> done
> wait $cpids
>
> replicated="" # now check the files
> for i in $(seq 1 $num_files) ; do
> sum=$(md5sum testfiles/test$i | awk '{print $1}')
> [[ $sum == $expected_sum ]] || replicated="$replicated testfiles/test$i"
> done
>
> [[ -n $replicated ]] && break
> round=$(($round + 1))
> done
> echo "replicated bug with: $replicated"
> echo $$ > /sys/fs/cgroup/cgroup.procs
> rmdir $cgroup_path