From: "Theodore Ts'o" <tytso@mit.edu>
To: Seamus Connor <seamus@tinfoilwizard.net>
Cc: linux-ext4@vger.kernel.org
Subject: Re: reproducible corruption in journal
Date: Wed, 24 Feb 2021 09:53:29 -0500
Message-ID: <YDZoaacIYStFQT8g@mit.edu>
In-Reply-To: <dccc26c4-19be-4f07-a593-bec842500d09@www.fastmail.com>

On Tue, Feb 23, 2021 at 04:41:20PM -0800, Seamus Connor wrote:
> Hello All,
> 
> I am investigating an issue on our system where a filesystem is becoming corrupt.

So I'm not 100% sure, but it sounds to me like what is going on is
caused by the following:

*) The jbd/jbd2 layer relies on finding an invalid block (a block
which is missing the jbd/jbd2 "magic number", or whose sequence
number is unexpected) to detect the end of the journal (see the
sketch after this list).

*) We reset the (4 byte) sequence number to zero on a freshly
mounted file system.

*) It appears that your test is generating a large number of very
small transactions, and you are then "crashing" the file system by
disconnecting it from further updates, running e2fsck to replay the
journal (throwing away the block writes issued after the
"disconnection"), and then remounting the file system.  I'm going to
further guess that the sizes of the small transactions are very
similar, and that the amount of time between when the file system is
mounted and when it is forcibly disconnected is highly predictable
(e.g., always N seconds, plus or minus a small delta).
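
For concreteness, here is a user-space sketch of the termination
check described in the first point.  It is modeled loosely on
do_one_pass() in fs/jbd2/recovery.c; the struct and field names
follow include/linux/jbd2.h, but the helper function is mine and is
heavily simplified (real jbd2 also distinguishes descriptor, commit,
and revoke blocks):

#include <stdio.h>
#include <stdint.h>
#include <arpa/inet.h>	/* ntohl()/htonl(): jbd2 fields are big-endian */

#define JBD2_MAGIC_NUMBER 0xc03b3998U

typedef struct journal_header_s {
	uint32_t h_magic;	/* big-endian on disk */
	uint32_t h_blocktype;
	uint32_t h_sequence;
} journal_header_t;

/* Returns 1 if @buf looks like part of the next expected transaction;
 * the recovery scan stops at the first block that fails this test. */
static int looks_like_next_transaction(const void *buf,
				       uint32_t next_commit_id)
{
	const journal_header_t *hdr = buf;

	if (ntohl(hdr->h_magic) != JBD2_MAGIC_NUMBER)
		return 0;	/* not a journal block: end of log */
	if (ntohl(hdr->h_sequence) != next_commit_id)
		return 0;	/* unexpected sequence: end of log */
	return 1;
}

int main(void)
{
	journal_header_t good  = { htonl(JBD2_MAGIC_NUMBER), 0, htonl(7) };
	journal_header_t stale = { htonl(JBD2_MAGIC_NUMBER), 0, htonl(9) };

	printf("%d\n", looks_like_next_transaction(&good, 7));	 /* 1: accepted */
	printf("%d\n", looks_like_next_transaction(&stale, 7)); /* 0: scan stops */
	return 0;
}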

Is that last point correct?  If so, that's a perfect storm where it's
possible for the journal replay to get confused, and mistake stale
blocks left in the journal by a previous mount for part of the last
valid file system mount.  It's something which probably never happens
in production, since users are generally not running a super-fixed
workload and then repeatedly crashing the system after a fixed
interval, which is what it takes for the mistake described above to
occur.  That being said, it's arguably still a bug.
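
To make the failure mode concrete, here is a toy user-space model of
it.  Mount #1 commits transactions 0..5 and crashes; replay plus
remount resets the sequence to 0; mount #2 commits only transactions
0..2 before the next "crash".  Blocks 3..5 still hold mount #1's
transactions, and they pass the termination check.  Everything here
is illustrative; only the magic value is jbd2's:

#include <stdio.h>
#include <stdint.h>

#define JBD2_MAGIC_NUMBER 0xc03b3998U

struct jblock { uint32_t magic; uint32_t seq; };

int main(void)
{
	struct jblock journal[8] = {
		/* blocks 0..2: rewritten by mount #2 */
		{ JBD2_MAGIC_NUMBER, 0 }, { JBD2_MAGIC_NUMBER, 1 },
		{ JBD2_MAGIC_NUMBER, 2 },
		/* blocks 3..5: stale, left over from mount #1 */
		{ JBD2_MAGIC_NUMBER, 3 }, { JBD2_MAGIC_NUMBER, 4 },
		{ JBD2_MAGIC_NUMBER, 5 },
		{ 0, 0 }, { 0, 0 },
	};
	uint32_t next = 0;

	for (int i = 0; i < 8; i++) {
		if (journal[i].magic != JBD2_MAGIC_NUMBER ||
		    journal[i].seq != next)
			break;	/* where replay *should* stop */
		printf("replaying transaction %u from block %d\n", next++, i);
	}
	/* Prints transactions 0..5: replay runs straight through the
	 * stale blocks instead of stopping after transaction 2. */
	return 0;
}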

Is this hypothesis consistent with what you are seeing?

If so, I can see two possible solutions to avoid this:

1) When we initialize the journal, after replaying the journal and
writing a new journal superblock, we issue a discard for the rest of
the journal, so that (on devices which honor the discard) the stale
transactions from the previous mount are no longer there to be
mistaken for valid ones.  This won't help for block devices that
don't support discard, but as a side benefit it should slightly
reduce work for the FTL, and perhaps slightly improve the write
endurance for flash.

2) We should stop resetting the sequence number to zero, and instead
keep the sequence number at the last used value.  For testing
purposes, we should have an option where the sequence number is
forced to (0U - 300), so that we can test what happens when the
4 byte unsigned integer wraps (see the demo below).
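
As a quick illustration of why that test hook matters, here is a
stand-alone C program showing that the 4 byte counter wraps cleanly
but that a naive ordering comparison does not.  This is plain user
space, nothing jbd2-specific, and the wrap-safe idiom at the end is a
standard technique, not necessarily what jbd2 does today:

#include <stdio.h>
#include <stdint.h>

int main(void)
{
	uint32_t seq = 0U - 300;	/* 4294966996: 300 commits before wrap */

	for (int i = 0; i < 600; i++)
		seq++;			/* passes through 0xffffffff -> 0 */
	printf("sequence after 600 commits: %u\n", seq);	/* 300 */

	/* A naive ordering test misbehaves across the wrap... */
	uint32_t before = 0U - 1, after = 5;
	printf("naive: before < after?  %d\n", before < after);	/* 0: wrong */

	/* ...whereas comparing the signed distance stays correct. */
	printf("wrap-safe: %d\n", (int32_t)(after - before) > 0);	/* 1 */
	return 0;
}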

Cheers,

						- Ted
